
Chapter 10 – How to manufacture software


By DaveAtFraud - Posted on 13 July 2010

RFC 1925-12: In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

Since early in my career in software development I have heard senior managers express the desire that software development be turned into more of a manufacturing process. This implied primarily a desire that the process become more predictable and, especially, that it become cheaper. The nebulous goal was to somehow have an order for some custom software product come in and the development team would simply assemble a collection of off-the-shelf and reusable components to fulfill the customer's order. The pieces would all fit together and the assembly line would kick out the desired product on time and at little cost but with a tidy profit.

Looking back at how the state of the software development art has progressed over my career, this has, to a large degree, happened, although not in quite the way these managers envisioned. Application builders, end-user-oriented development tools such as Visual Basic, fourth-generation languages, retail off-the-shelf libraries, and the like have made it possible for non-programmers to easily develop the capabilities they need without the overhead of a large software development effort. To be sure, these developers aren't the “unskilled labor” that was the goal envisioned for “manufactured software”, but like so many other things, the computer has made it possible for the initiator to create an end product without getting anyone else involved. Unfortunately, as was discussed in Chapter 7, the resulting programs tend not to be as efficient or as robust as those developed by a professional development team. If the results of such a development effort are to be used in “production”, a professional development team must “productize” the program by doing things like making the user interface user friendly, validating inputs, and, if necessary, optimizing internal logic. On the other hand, these programs are frequently good enough, especially if they are only intended for internal use.

Huge categories of software development tasks are now solvable with off-the-shelf solutions or amenable to development using either rapid development techniques such as application builders or, at worst (from a cost perspective), agile development. On the other hand, those development projects pushing the boundaries of what can be done have stubbornly continued to require the application of both software development and engineering expertise.

The problem won't go away

Stringing together a collection of off-the-shelf components is too inefficient when the goal is to solve a cutting-edge, inherently difficult problem efficiently. There is and always will be a set of software development problems that tax the resources (CPU, I/O bandwidth, storage, network speed, etc.) of the current technology. Development organizations will continue to attempt to solve these hard problems because providing a solution promises a lucrative return to those who can deliver one. There will always be a need for developers who can wring the last ounce of performance from the hardware and a demand that this development be done under time-to-market pressure. Finally, because these projects are on the frontier of what can be done, they will have a fairly high level of inherent difficulty. This level of inherent difficulty will most likely be driven by overall program complexity but may include any or all of the other factors that drive inherent difficulty.

In many ways there is nothing new about this phenomenon. The application of computers to various “real world” problems has always been constrained in some way by the available hardware. This was true prior to the personal computer's introduction in the “mainframe days.” It was true even (and, sometimes, especially1) when the customer was some arm of the U.S. Department of Defense and hardware cost was barely a consideration. What has changed is the way in which competitive, time-to-market pressures on retail products have brought about a continuing parade of software development abominations such as the project described in Chapter 1.

Given the above, it would seem that the “perfect storm” of hard projects and time-to-market pressures is not going to suddenly go away. Improved technologies will continue to open new problems to possible solution by computers. These problems, by definition, are sufficiently complex that only clever attack and application of the latest hardware has a chance of solving them. Newly available, inexpensive hardware opens only the possibility of such an attack. It is only through engineering of the solution that the attack can be successful and such engineering takes time.

Software tools are incapable of providing such a solution in and of themselves. Likewise, agile development methodologies are unlikely to scale up to such problems. In addition, agile methodologies lack the collaborative problem solving practices required to successfully attack large, complex problems. Outsourcing the development of such a solution is unlikely to be successful since the outsourcer faces the same problem solving constraints as the originator. Worse, outsourcing the development of such a solution transfers significant technology to the outsourcer. This technology could be used as the basis of a sustainable competitive advantage if protected and exploited. Finally, simply throwing a large number of developers at such a problem is unlikely to result in a satisfactory solution. If by some miracle such an approach appears to solve the problem, the quality of the underlying product is likely to be so low that the advantage gained by the death march development will soon turn into Sisyphus' rock.

For the foreseeable future we will be stuck in a world which continues to demand that large, hard software development projects be solved on impossible schedules. Other than a very few lucky cases where throwing lavish resources of both tools and people at a problem randomly works, these projects will not be truly ready for deployment sooner than the schedule dictated by each project's inherent difficulty.

Other industries have dealt with similar problems

As with so many aspects of software development, this is an engineering problem that has been successfully faced by other engineering disciplines. For a variety of reasons, the software development community and software project managers have chosen to ignore the solution. Every other engineering discipline recognizes that hard problems are hard to solve and, thus, take longer. It's possible that only the existence of physical evidence of failures in other disciplines keeps them from flailing repeatedly at hard problems the way software development organizations are prone to do. It seems more likely that these other disciplines have recognized the futility and hazards of such efforts and have learned to live with this reality.

The ability to create new and novel solutions to problems that have never been solved before is what sets apart those with true engineering expertise from those who merely imitate what others have done before them. We have already seen that such problem domain expertise is a competitive advantage. This expertise lies specifically in knowing how to solve the inherently difficult portions of a given software development project. The goal of a software development organization should not be to run away from or otherwise ignore the difficult aspects of their particular problem. Instead, it should be to build on their expertise at solving such problems to erect a barrier to competition within their area of expertise. Software development organizations that have not recognized this fact have, instead, produced flat, simple bloatware2, which is what has made it possible for the open source community to create competitive products for so many previously proprietary software products. This is not to minimize the achievements of the open source community but to point out that strictly voluntary, open source projects have no mechanism in place to perform, as an example, the ease-of-use research that made WordPerfect the preeminent keyboard-oriented word processor of the DOS world.

Turn inherent difficulty into an advantage

The first step in dealing with inherent difficulty is not to fight it but to embrace it. Elegantly solving the inherently difficult portions of a problem is what sets a program apart from its competitors. Building a development team that understands both the problem domain and how that domain constrains possible solutions allows the team to create solutions as rapidly as possible. It may be desirable to have such a solution sooner, but a quality solution will not happen any sooner than the schedule dictated by the problem's inherent difficulty. Only a team well versed in solving such problems is capable of delivering at the low end of the schedule range predicted by the problem's inherent difficulty. A scratch development team with little or no domain expertise will take much longer, if they succeed at all.

Inherently difficult portions of the problem can be decoupled from those parts of the development that are easier. This is the point at which software developers should take a cue from those who manufacture tangible goods. The basic engineering of such goods changes only slowly while the manufacturer finds new “convenience” functions or decorative features to add so that new models can be released from time to time. For software projects this means going beyond what would be considered a good, object-oriented design. The architecture of the program should take into account that the core logic that solves the inherently difficult portion of the problem will be on a different development schedule from the rest of the project. The interfaces to this core logic should reflect this need for clean separation. Put another way, the program architecture should reflect the reality of how the program components will be developed. This architecture must provide logical separation of the core logic that implements the inherently difficult portions of the program from the remaining program elements.
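
To make the idea concrete, here is a minimal sketch in Python, using purely hypothetical names (CoreEngine, Request, Result, ReferenceEngine) that do not come from any particular project. It shows one way the separation might look: the core logic sits behind a narrow, explicitly defined interface, and everything else depends only on that interface.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Request:
        """Generic input to the core; peripheral code maps UI actions onto this."""
        operation: str
        payload: dict

    @dataclass(frozen=True)
    class Result:
        """Generic output from the core; peripheral code decides how to present it."""
        status: str
        data: dict

    class CoreEngine(ABC):
        """The only surface the rest of the program may depend on. The
        implementation behind it evolves on its own, slower schedule."""

        @abstractmethod
        def execute(self, request: Request) -> Result: ...

    class ReferenceEngine(CoreEngine):
        """Stand-in for the inherently difficult core logic."""

        def execute(self, request: Request) -> Result:
            # The real core processing would go here.
            return Result(status="ok", data={"echo": request.payload})

    def handle_save_click(engine: CoreEngine, form_values: dict) -> str:
        """Peripheral (UI) code: cheap to change, knows nothing about core internals."""
        result = engine.execute(Request(operation="save", payload=form_values))
        return "Saved." if result.status == "ok" else "Something went wrong."

The point is not these particular classes but the direction of the dependency: the peripheral code knows the interface, never the internals.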

A new architectural approach

This approach goes beyond simple modular development or an object-oriented analysis and design approach. Modular development or object-oriented approaches decompose the program into convenient pieces for development. Typically, the analysis of the decomposition stops at the point where the development effort can be partitioned among developers and, at best, certain components are identified as possibly being reusable. The “manufactured software” approach has the development team, and especially the system architect, analyze the various pieces of the product to determine their likely volatility over time and their criticality. Since the core of the program should correspond to its primary function (and thus its market or use), the core functionality should be among those pieces that are both critical and non-volatile. This core functionality should then be put on a planned, predictive development process with formal engineering discipline. This longer development cycle implies that an effort should also be made to keep what is determined to be core functionality as small as possible.
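
As an illustration only (the component names and classifications below are invented, not drawn from the text), the volatility/criticality analysis can be thought of as a simple mapping from each component's classification to a development approach:

    from enum import Enum

    class Volatility(Enum):
        LOW = "low"
        HIGH = "high"

    class Criticality(Enum):
        PERIPHERAL = "peripheral"
        CORE = "core"

    def choose_approach(volatility: Volatility, criticality: Criticality) -> str:
        """Core, stable functionality gets the planned, predictive process with
        formal engineering discipline; everything else can use lighter methods."""
        if criticality is Criticality.CORE and volatility is Volatility.LOW:
            return "predictive, formal engineering"
        return "agile or rapid development"

    components = {
        "matching engine": (Volatility.LOW, Criticality.CORE),
        "report screens": (Volatility.HIGH, Criticality.PERIPHERAL),
    }

    for name, (volatility, criticality) in components.items():
        print(f"{name}: {choose_approach(volatility, criticality)}")

In practice the classification is a judgment call made by the architect and the team; writing it down simply keeps the decision explicit and reviewable.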

Software development teams frequently treat all aspects of a program much the same, and for good reason. A trivial error in the user interface can make a significant feature as unusable as if the feature itself had the error or, for that matter, wasn’t present. Yet solidly building the underlying technology for the feature and then separately developing the means to exploit the feature doesn’t seem to be an alternative that many people have tried. If the application program interface (API) to the feature is well-defined, various cuts at exploiting the functionality are well within the ability of either an XP or agile development team to build very quickly and quite well. The restriction is that the methodology used should be tailored to the size and complexity of the expected effort.
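
A small, self-contained sketch of this idea, using an invented SpellCheckAPI as the “feature” (nothing here comes from the original text): once the API is fixed, several quick cuts at exploiting it can be built, thrown away, and rebuilt without ever touching the feature itself.

    from typing import Protocol

    class SpellCheckAPI(Protocol):
        """The agreed, stable API to the underlying feature."""
        def misspellings(self, text: str) -> list[str]: ...

    class TrivialSpellChecker:
        """Stand-in implementation; the real feature would live behind the same API."""
        KNOWN_WORDS = {"the", "quick", "brown", "fox"}

        def misspellings(self, text: str) -> list[str]:
            return [w for w in text.lower().split() if w not in self.KNOWN_WORDS]

    # Two quick "cuts" at exploiting the feature, both cheap to redo.
    def interactive_report(checker: SpellCheckAPI, text: str) -> str:
        bad = checker.misspellings(text)
        return "No problems found." if not bad else "Check these words: " + ", ".join(bad)

    def batch_report(checker: SpellCheckAPI, documents: dict[str, str]) -> dict[str, int]:
        return {name: len(checker.misspellings(body)) for name, body in documents.items()}

    if __name__ == "__main__":
        checker = TrivialSpellChecker()
        print(interactive_report(checker, "The quick brwn fox"))
        print(batch_report(checker, {"memo.txt": "teh quick brown fox"}))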

Heresy

Software development tends to attract practitioners who demand an aesthetically clean program. The thought of tailoring the design of the program to the way it will be maintained is a heresy. Unfortunately for such true believers, current software development practices rarely yield an aesthetically clean result. In the end, they get neither, as schedule pressures drive projects to use death march development methodologies that yield neither a maintainable product nor an aesthetically clean implementation. Carefully designing the product to be maintainable, however, lets the core of the product be developed cleanly and aesthetically while allowing the peripheral aspects to evolve rapidly in response to market demands. As somewhat of a “cop-out”, if the market demands extensive changes to your product core, you have deeper problems than a software methodology can help3. Instead, you should be focused on what changes you need to make to your business plan.

Software developers frequently repeat the cry of, “Give us clear, firm requirements!” Unfortunately, the lack of reliable crystal balls means that those who craft the requirements are often “flying blind.” This is especially true in development of retail market software products. Ideally, the product should meet the requirements of the marketplace as they will exist when the product is released. To make matters worse, no one will really know what these requirements are even when that release happens. The software development approach described above restricts the need for firm requirements to only the core product functionality. This allows the user interface and peripheral functionality to evolve rapidly to meet changing customer needs.

This architectural approach somewhat flies in the face of what a good, object-oriented design would look like. Such a design would probably have “user objects” within the core functionality. These objects would be visible, and the user would be able to manipulate them directly through the user interface. That approach tends to make the program's core logic as volatile as the latest user's whim for new functionality in the user interface. This is not to say that the core logic shouldn't present such a capability, but it should only be presented through a well-understood, well-defined, and robust program interface.

Handling such a design restriction is no different than if the core logic were purchased from a third-party developer. This is exactly how the development team and product management need to think of the code that implements the core logic. Adopting such an architectural separation ensures that the inherently difficult portions of the project are confined to a small number of software components and do not spread throughout the program4. Conversely, the developers of the core logic need to attack the problem as they would if they were developing a general-purpose engine that solves it. This especially means that the control and output interfaces should be made as generic as possible. This way, changing user requirements can easily be accommodated without requiring changes to the core functionality.
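
One hedged way to picture the “generic controls and output” advice (the names below are hypothetical): the core emits neutral records, and each user-facing view is a thin adapter over those records, so a change in what the user wants to see never propagates into the core.

    from dataclasses import dataclass
    from typing import Iterable

    @dataclass(frozen=True)
    class CoreEvent:
        """Neutral output record produced by the core logic."""
        kind: str
        attributes: dict

    def render_as_rows(events: Iterable[CoreEvent]) -> list[str]:
        """One of many possible presentations; cheap to replace or add to."""
        return [f"{event.kind}: {event.attributes}" for event in events]

    def render_as_summary(events: Iterable[CoreEvent]) -> str:
        """Another presentation built over the same generic output."""
        counts: dict[str, int] = {}
        for event in events:
            counts[event.kind] = counts.get(event.kind, 0) + 1
        return ", ".join(f"{kind}: {count}" for kind, count in sorted(counts.items()))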

Fins or stripes?

The development cycle of the project now starts to resemble the development cycle of tangible goods. Superficial changes can be readily implemented with little risk that the change will “break” the underlying functionality of the project. The core logic goes through only well thought out, evolutionary changes using a predictive product engineering and development methodology. The development schedule for the remainder of the project will be far more malleable. The methodology used for its development can be tailored to the specifics of the problem at hand. Changes to the user interface or ancillary functions to implement the latest requests can be implemented with the same ease that a car manufacturer can add or delete stripes or fins.

Recall that the lightweight engineering methodology described in the previous chapter identified both the inherently difficult aspects of the development effort and those aspects that were amenable to development using agile methodologies. Not surprisingly, these inherently difficult aspects of the product will, for the most part, map to the product's core logic. The remainder of the product can be quickly and easily changed as market pressures demand. The isolation of the inherently difficult aspects of the product in the core logic means that any development effort that affects the remaining functionality can probably be done using a lightweight software development methodology.

This approach also results in a return to major, minor, and patch releases for software updates. Significant changes to the core functionality result in major releases that incorporate significant new functionality. Minor releases typically do not change the core functionality and only provide additional capabilities, ease-of-use features, and the like. Patches typically only affect the portions of the program outside of the core functionality, since the predictive development methodology the core follows allows for both rigorous development and thorough testing. Further, only a well-tested subset of the core functionality need be exposed at the time of a major release. Only beta testers and development partners are granted access to any new core functionality until it is well “shaken out.” This limited access to new functionality minimizes the risk that some debilitating bug will affect the bulk of the product's users.
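
A small sketch of one way the “exposed subset” idea can be implemented (the feature names and channels here are invented): new core functionality ships in the release but is switched on only for beta testers and development partners.

    # Core features that are present in the release but not yet "shaken out".
    UNPROVEN_CORE_FEATURES = {"streaming_matcher"}

    def feature_enabled(feature: str, user_profile: dict) -> bool:
        """Expose unproven core functionality only to beta/partner accounts."""
        if feature not in UNPROVEN_CORE_FEATURES:
            return True  # proven functionality is available to everyone
        return user_profile.get("channel") in {"beta", "partner"}

    print(feature_enabled("streaming_matcher", {"channel": "retail"}))   # False
    print(feature_enabled("streaming_matcher", {"channel": "partner"}))  # True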

Finally, the isolation of the inherently difficult components allows the design of these components to be as efficient as the design team wishes to pursue. This functional isolation may result in some responsiveness penalty for user access, but the actual core processing development is free to focus on efficiently solving “the problem.” The ability to solve the core, inherently difficult problem in cutting-edge development projects is the ultimate arbiter of the success of the project. This design approach allows any processing that incurs a significant performance penalty to be moved out of the inherently difficult processing into ancillary functions where it can better be tolerated.

As an example, during development of the project described in Chapter 1, a report that provided a near real-time view of the on-going processing worked reasonably well only when the system was not heavily loaded. Under heavy system loads, this same function could bring the system to a near halt. Worse, the data presented under such a system load was nearly useless since only a tiny fraction of the data could actually be presented. Adding to the absurdity of this situation, if the system could have somehow kept up under such load conditions and presented the actual, real-time data, the data would have scrolled off the screen faster than the user could have possibly comprehended it.

We ended up with the performance of a not very useful or even meaningful ancillary function driving the design of the overall system. There were several unsuccessful attempts to redesign this functionality to either keep up with the data or present the data to the user in a meaningful manner. Since these two objectives could not concurrently be met, the broken functionality continued to drive the system's performance. We would have done better to design the underlying system to keep up under heavy loads and then decide what views of the data were meaningful to the user under both heavy and light processing loads.

Development cycle length

It is up to the development team working with both project management and either the user (for internal development efforts) or marketing (for commercial products) to determine the development cycle time for the inherently difficult portions of a program or product. The functionality included in a particular cycle, the length of the cycle, or both can be adjusted. Ultimately, the inherent difficulty of the functionality to be developed determines the schedule required for the development effort. If the schedule required is longer than desired then either the desired completion date must be adjusted to allow for the functionality to be developed or the functionality to be developed must be adjusted to fit the desired schedule. This functionality will not be available with reasonable quality any sooner. Attempts to force it to be available sooner are both likely to fail and to introduce deep rooted, systemic flaws in the product.

Putting it all together

Clearly, for new products, it behooves the development organization to perform enough of a functional analysis to understand the complexity of the product they are about to develop. Given the results of the functional analysis, the appropriate development methodology can be determined for the various project components. Unfortunately, software development efforts rarely start from such a clean slate. More likely, the development effort is about enhancing some existing system that evolved to its current state with little or no consideration given to a functional analysis of the system.

For existing systems, and especially for those that have evolved over time, it is absolutely critical to perform functional analyses of the system both as it exists and as it is desired. This is not to say that an effort to immediately build the desired system should be launched. It only says that the functionality of the desired system needs to be fully understood. The reality is that very few organizations can afford to rewrite even a moderately large system from the ground up. Worse, attempts to do so have all too frequently ended up as total failures.

The functional analyses of both the existing system and the desired system provide the end-points of the system development road map. While the “desired system” end-point will probably never be reached (it will have moved long before the system could be developed), the functional analysis of the desired system allows the development organization to more clearly identify the core, inherently difficult portions of the existing system. As with the development approach for new systems described above, the development team can then begin the task of functionally isolating this core processing, up to and including rewriting it. Existing capabilities can usually be maintained by designing access methods that map legacy capabilities onto the new implementation. The difference between these access methods and the patches and kludges of the death march is that the access methods are designed with full knowledge of their roles and requirements. Over time, existing peripheral functionality can be redesigned and migrated so that it no longer needs the legacy access methods.
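
As an illustration of such a designed legacy access method (again with invented names), an adapter can map an old call signature onto the rewritten core so that existing callers keep working while peripheral code migrates at its own pace:

    class NewCore:
        """The rewritten, functionally isolated core logic."""

        def lookup(self, customer_id: str) -> dict:
            return {"id": customer_id, "status": "active"}

    class LegacyCustomerAPI:
        """Keeps the old interface alive as a thin, documented mapping layer
        rather than an ad hoc patch: its role and requirements are known up front."""

        def __init__(self, core: NewCore) -> None:
            self._core = core

        def GET_CUST_REC(self, cust_no: int) -> tuple:
            # Old callers passed numeric ids and expected a flat tuple back.
            record = self._core.lookup(str(cust_no))
            return (record["id"], record["status"])

    legacy = LegacyCustomerAPI(NewCore())
    print(legacy.GET_CUST_REC(42))  # ('42', 'active')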

The problem determines the schedule

Regardless of whether the development effort is for some new project or to enhance an existing program, maintaining the system's functional analysis allows the development team to understand the impact of any future change request. If the problem the system originally set out to solve has a high level of inherent difficulty, significant changes to the core functionality will entail the schedule costs of an inherently difficult development effort. This will be the case regardless of whether the functional analysis has been completed or the functional separation of the core processing has been achieved. Without the functional analysis and the architectural separation of the inherently difficult core logic, any schedule estimate will be at best an educated guess. With a functional analysis and architectural separation, there is a high likelihood that the estimate is accurate. The only question is whether both the development organization and management understand that the schedule required for the project is determined by the nature of the problem to be solved.

Implementing the requested change with an accurate estimate of the resources and schedule required means that the quality of the system will not be compromised. With a good understanding of the impact that a requested change will have on the core functionality of the system, the development team has a good chance of being able to accurately estimate both the development resources and schedule required to implement the requested change. This estimate of the schedule duration and resources for the effort may not be what management wants to hear but it can be fairly accurate if based on facts and knowledge.

The alternative approach to solving the problem usually takes longer and costs far more. This cost isn't just the salaries for a few additional maintenance programmers and possibly a few extra customer service personnel. It costs the company both its ability to develop future products and its reputation.


1For a variety of reasons, but driven to a large degree by cost, most of the “really interesting” and, thus, really difficult large-scale software development efforts prior to the rise of personal computers were typically defense related or other government agency contracted (e.g., air traffic control) applications. Businesses tended to be content with the available off-the-shelf billing, inventory, payroll, etc. applications with at most some customization, due in large part to the prohibitive expense and risks associated with doing anything else. Government projects applied existing, predictive project management models to competitively bid software development projects, choosing a contractor based on proposals submitted prior to any significant work being done. The competitive bid process meant that the staffing and schedule were typically “ambitious”, but the acceptance criteria and various software development quality initiatives usually ensured that an acceptable level of quality was eventually achieved in the final product.

2I define “bloatware” as software products that contain a multitude of noncritical, poorly implemented features. Some have also dubbed this phenomenon “feature-itis” in that features keep getting added but little is done to actually enhance the way the user performs the core function of the program. It is far easier to add such features than to actually determine how to make the program better support the user's needs.

3It is possible to follow the market to remarkable extremes. In the mid-1990s, I worked with a cross-platform development tool from a company called Neuron Data. Neuron Data's original product was a neural network. They also created a cross-platform development tool to allow their neural network to be utilized on any of several platforms. It turned out that the cross-platform development tool sold better than their neural network which put them into the software tool development business.

4Joel (of Joel on Software) has asserted that program complexity is fairly uniformly distributed throughout a program. This is simply due to not attempting to architecturally isolate the complex, inherently difficult software components. I have been involved in a number of large-scale software efforts that successfully used the isolation approach, although we didn't realize that that was what we were doing at the time. It was simply a fallout of the project's functional decomposition.


This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.