Design is the Implementation

Russell Holt
August 1998

Imagine first a non-technical user creating systems more complex, more robust, more reliable, with longer life, and in a millionth of the time it takes scores of programmers to build a system today.

Is that possible?
Imagine now a software system that builds itself. It adapts itself to new situations. It grows. It changes. Useless components are discarded, and new kinds are created as needed. Always, constantly, without ceasing.

New needs, scope, requirements, and problems cause it to adapt constantly. Because of this, there is no point when it is done, no goal to be achieved other than to be complete at any given time. Like a river: though it constantly changes, it is never incomplete.

The only constant, the only certainty about it is that it will change, perhaps even the way that it will change.

Is this possible?
What's the difference between these two scenarios? Is there a difference? Clearly something about the way we develop software will have to change if either of these is to be possible.

This paper presents the view that the need to evolve is ever-present but inherently unpredictable, so a capacity for rapid evolution is more important than design foresight. Rather than consciously designing evolutionary abilities into software, such an ability should be a fundamental property of software development technologies. Such technology would allow software development to focus on the problem at hand rather than on evolutionary infrastructure; even the simple knowledge that one is never locked into design assumptions will be very freeing.

Architecture is a hypothesis about the future. It provides a framework that we expect to fit all future needs and problems.

Modern houses, for example, are built with electric wiring and plumbing because it's very probable that the residents will want to use electric appliances and running water. It is not easy to add such features to a house which does not already have them. If only the builders of my 1860 farmhouse had spent more design time and seen the general case.

Software architecture is very similar in this regard. Useful and successful software will almost always outlive some or all of its basic design assumptions. Consider the year 2000 problem in legacy system software. This software has outlived its originally anticipated lifetime by a good twenty years or more, and the invalidity of a two-digit year is affecting our entire economy - how small an assumption was that?
The software, like the house, hasn't changed, but the world around it has. And now it, too, must evolve. If it can't cost-effectively adapt, it must be replaced entirely. The cycle will repeat until we learn how to build systems that are designed to change, whose assumptions, however apparently small, can change, too.  
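The shape of that small assumption can be made concrete with a toy sketch (hypothetical code, not drawn from any real legacy system): once years are stored as two digits, the year 2000 compares as "less than" 1999, and every date comparison built on that representation quietly inverts.

```python
# Toy illustration of the two-digit-year assumption: with only two
# digits, the year 2000 ("00") compares as earlier than 1999 ("99").
def is_expired(expiry_yy, current_yy):
    """Compare two-digit years the way much legacy code implicitly did."""
    return expiry_yy < current_yy

# In 1999, something that expires in 2001 is stored with year 01...
assert is_expired(1, 99) is True      # wrongly reported as already expired
# ...while full four-digit years give the correct answer.
assert (2001 < 1999) is False
```

The bug is not in any single line; it is the representation itself, which is exactly why it could not be fixed without touching every place the assumption leaked.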

What happens when we encounter unanticipated situations, new problems? That is, what happens when the hypothesis that is an architecture becomes out of date? Surely we can't possibly anticipate the specific ways a system will need to change, but we can certainly know that it will have to change.
Where software differs from the physical world is that software itself (the "implementation") is abstract. In some sense, an architecture on paper is more concrete than the software it describes; as humans we innately think of THINGS and relationships and draw them as boxes and lines, but no such THINGS ever really exist in software.
Which is more abstract? Which represents which?

12 80 00 0F 01 00 00 00
90 10 20 2C 40 00 DC 72
01 00 00 00 92 10 00 08
90 10 00 09 D2 07 A0 48
7F FF EE 51 01 00 00 00

If this is so, what separates software design from implementation other than our perception? Hasn't the clear distinction between design and construction in the physical world strongly influenced the way we build software technology - and the ways our software allows us to build other software?

Software design is our understanding of and expectation for an implementation. Iterating on a design is not difficult at all; it is iteration of an implementation where all the problems are found. This is partly because design can only go so deep; many details are left to the implementation, as when we say we'll code a design. It is the implementation of those details which can uncover bigger design flaws; hence the abandonment of the waterfall methodology. Fixing a design flaw means adjusting your understanding of and expectation for the implementation (the act of redesign), and then changing all of the implementation details which are affected by (depend on) this shift (the act of reimplementation). Iterative and evolutionary techniques are key to the success of today's object methodologies.

Unfortunately, it is very common that implementation will uncover design flaws which cannot be addressed through design changes given real world resource constraints. We can't constantly change basic assumptions with today's technology, after all; we are just able to determine whether a source code file should be recompiled or not based on which other files have been modified - a far cry from automatically adapting entire networks of complex dependencies for new assumptions.

As an implementation separate from its design evolves, our understanding of it necessarily decreases to the point where we no longer really know what is there (we don't know its design), it isn't correct, more flaws have been introduced in attempts to fix other flaws, and we must start over. Or live with it as is: flawed and unable to be extended any further.

It is the rapid and reliable evolution of software implementation which is fundamentally unsupported by current software development technologies.

Fred Brooks illustrates this idea very clearly:

All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes.

The Mythical Man-Month, Frederick P. Brooks, 1975.
Why doesn't anyone read it?
This hasn't changed since 1975, despite the tremendous variety of software development tools and techniques which have come and gone. No matter how one approaches software development, the above scenario is always reached at some point.

Today, our most advanced software techniques plan for growth by attempting to foresee the future of a complex system, though evidently the analytical general case continues to be, and will always be, elusive. Every new case has the potential to fundamentally change the whole, and because our understanding of the problem (design) will always outpace our ability to evolve our implementations to meet it, we simply allow our systems to be incorrect by necessity.

But even if the future is known, even if we are modeling something we do in fact understand well enough to capture the general case, what cannot be foreseen is how well this design will be implemented. The general case does not and cannot imply a perfectly suitable implementation, because the specific way that a design is implemented is absolutely key to its ability to grow and adapt as well.

Therefore, the ability to react quickly and reliably to meet new requirements is more important than the ability to foresee the future. The ability to change the hypothesis to meet new conditions is more important than the accuracy of the hypothesis itself.

The worst situation is when a hypothesis about the future has already been set in stone by the time it proves to be incomplete, inaccurate, or flawed in some fundamental way. Current software development technology and techniques make this situation very common; it happens all the time.

Design (what we want the system to be) cannot be easily changed because of implementation restrictions (what the system really is), so implementors "code around" the design, and our understanding of the system decreases significantly.


In the case of the year 2000 problem, the most problematic assumption was not the two-digit year; it was the expected short lifespan of the software. Had this software been written to support rapid adaptation to new requirements, that is to say, had long-term evolution been a deep and meaningful design requirement, then design assumptions such as the two-digit year could have been changed.
Object-oriented design, with its key attribute of encapsulation, is an excellent way to address this issue; perhaps the best we know. But current technology places severe limitations on its concepts: direct support for evolution is only possible within an architectural hypothesis. As long as it is only the guts of an object that need to evolve, the system can chug along essentially untouched.
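That limited kind of evolution can be sketched in a few lines (hypothetical classes, assumed for illustration only): the internal representation of a date changes completely, but because the public interface is unchanged, no client is touched.

```python
class Date:
    """Original guts: stores a two-digit year internally."""
    def __init__(self, yy):
        self._yy = yy                     # internal detail, hidden from callers
    def year(self):
        return 1900 + self._yy

class EvolvedDate:
    """Evolved guts: stores a full year, computed with a windowing rule.
    The public interface, year(), is unchanged, so clients chug along."""
    def __init__(self, yy):
        self._year = 2000 + yy if yy < 50 else 1900 + yy
    def year(self):
        return self._year

# The same client code works against either implementation:
for d in (Date(98), EvolvedDate(98)):
    assert d.year() == 1998
```

This is evolution *within* the architectural hypothesis: the objects and their relationships stay fixed, and only one object's insides move.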

If, however, it is the way the objects interact in a much larger sense (the design, the architecture) that requires significant evolution, the scope of the necessary changes can expand from simply adjusting the behavior of one object all the way to complete system redesign, and thus complete reimplementation. It's not as if adjusting or reimplementing the guts of an object is easy, but graceful and rapid architectural evolution of an actual implementation is simply not feasible in the real world today.
Hierarchical classification is an excellent way to think about a problem. But a specific classification scheme does not reflect reality; it only reflects our current assumptions and perceptions, which will shift from time to time, or from one application of a "reusable class library" to another. As these ideas change over time, we adjust our classification scheme to reflect this. In object terminology, this is called refactoring. As helpful and necessary as this seems to be, it's just design.

The unfortunate reality is that the most general and immovable concepts are at the top of an object classification hierarchy, and current implementation technologies make them the most difficult to change, because everything below depends on their very definition: such changes can easily be equivalent to total reimplementation.

The bigger the hierarchy, the more set in stone its top level classes are. Eventually, they become fossils, and subclasses n+1 generations removed will try to hide their irrelevant ancestry with a host of tricks and work-arounds.
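A toy hierarchy makes the fossil visible (the names here are hypothetical, invented for illustration). The root class bakes in one assumption, and every subclass and every caller comes to depend on its very definition.

```python
# The root bakes in an assumption: every Account has exactly one owner.
class Account:
    def __init__(self, owner):
        self.owner = owner                # the "fossil" assumption

class Savings(Account):
    def __init__(self, owner, rate):
        super().__init__(owner)
        self.rate = rate

class Checking(Account):
    def __init__(self, owner, overdraft):
        super().__init__(owner)
        self.overdraft = overdraft

# Every subclass constructor and every caller touches the assumption:
s = Savings("alice", 0.05)
assert s.owner == "alice"

# If the world changes (say, joint accounts with many owners), the root's
# definition must change, and with it every subclass and every line that
# ever read `.owner`. The deeper the hierarchy, the wider the ripple.
```

Two levels deep, the ripple is already total; n+1 generations deep, it is cheaper to hide the ancestry with work-arounds than to change the root, which is exactly the fossilization described above.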

Imagine how many books would become obsolete if science decided that mammal was not an appropriate biological classification. Imagine if such decisions were made every six months. Welcome to the software industry.

...what we observe is not nature in itself, but nature exposed to our method of questioning.

Physics and Philosophy, Werner Heisenberg, 1958.


We know how to plan for change, and we know how to build software to anticipate certain kinds of changes. But how do we know when to do this? We can't afford to spend the time all the time; nothing would ever get done. How do we know whether we will have new requirements? How do we know our software will be successful? How do we know whether to invest the time necessary to design for evolution? How do we know whether or not we should be looking for the general, flexible and design-extensible case?

Wouldn't it be better to be able to have some fundamental technology that inherently allows a system to be gracefully evolved over time whether or not this need was originally anticipated?

What we need is a way - a technology - to guarantee the ability of our systems to cost-effectively evolve, as needed, quickly and reliably.

We don't need to see the future because we can't.
The question now is what does it mean - how can we do this?

the abstract design layer