
Chapter 7 – Chasing the accident


By DaveAtFraud - Posted on 08 July 2010

RFC 1925-11: Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.
(corollary). It is always possible to add another level of indirection.

Through the years, software development tools have been created in an attempt to solve the problems that remained unsolvable, or tedious and labor-intensive, for each new methodology. Before the current attempts to accelerate software development by using agile methodologies, the hoped-for accelerator was the integrated development environment (IDE). If the IDE alone didn't provide a sufficient productivity boost, proprietary vendors enhanced their IDEs by tying them to some particular flavor of platform-specific, “visual” programming. Before these tools were in vogue, the hope was that development methodologies such as the Rational Unified Process (RUP), built around object-oriented (OO) analysis, design and programming, would do the trick. The object-oriented development approach was soon supplemented by object-oriented programming languages such as C++ and Java that allowed developers to more directly express object-oriented designs. When objects were still misunderstood or abused, there were attempts to further lock down the object definitions with strong typing (e.g., as implemented by the Ada programming language) or to make the description of an object-oriented design more rigorous by expressing the design using the Unified Modeling Language (UML).

Turning the clock back even further, tools such as context-sensitive screen editors and symbolic debuggers were touted as providing the hoped-for productivity increase when they appeared. Such editors eliminated a lot of the drudgery of programming with line editors or punch cards, while symbolic debuggers greatly improved the ability of developers to examine a program for flaws. Go back a little further in time and the hoped-for productivity increases were to be provided by the methodologies of structured analysis, design and programming. When these techniques were not sufficient, they were soon tied to using structured programming languages.

Over the years the search for the magic cure to software development taking too long for the product quality achieved has swung back and forth between technological solutions and methodological solutions. Not surprisingly, tools for diagnosing the problems that agile methodologies are unable to address, such as “hot spot identification1” for production systems, comprise the current crop of cutting-edge tools. Technology solutions in the form of new and better software tools typically attack the expression and construction of the software project while methodological solutions either define the project's organization or determine how the developers look at the problem. Continual technological advances such as ever increasing computing power have enabled both new development tools and methodologies. As an example, the common use of structured programming languages and structured programming became feasible when target system power was sufficient to no longer require low-level, machine-language code optimizations. Typically, a new methodological approach is first promulgated to address the shortcomings of the previous approach. The new methodology is then followed by a new set of more appropriate tools. The tools first allow code to be developed using the methodology but then evolve toward tools that attempt to enforce the “correct2” instantiation of the methodology.

This continuous evolution of both methodologies and tools should, presumably, culminate in some ultimate combination that creates programs as rapidly and with as few errors as is possible. Object-oriented software development paired with the RUP as a methodology seemed to be that final answer. Unfortunately for software development organizations (but not for those who sell software development tools), the economic drivers of software development are not constant. This is due to changes in both the marketplace and technology. What object-oriented development paired with the RUP cannot do is produce software when the requirements are volatile or the project needs to be completed on a very short schedule. These shortcomings are some of the drivers that led to the creation of agile software development methodologies.

Automated test tools cannot solve agile limitations

Agile methodologies are now bumping up against scalability and integration issues as described in the previous chapter. This has turned the search for the next software productivity boost back towards tools, with the current emphasis on daily or continuous builds3 and built-in, automated build testing. At best, these approaches provide a simple “smoke test4” for the ongoing development effort. At worst, automated build tests are little more than an attempt to test quality into the software product. This approach to integration and testing attempts to substitute the expenditure of more CPU cycles on running and re-running trivial tests for well-designed, pertinent testing based on functional requirements. In no way can test automation replace actually solving the underlying inherently difficult problem, should one exist.

If the development team is using an agile methodology such as XP to tackle an appropriate (not inherently difficult) task, automated build tests simply confirm what was assumed at the beginning. The individual pieces of the project can be developed in parallel with little or no coordination between the contributing teams and, if such tests succeed, the product is meeting the user-approved tests. On the other hand, if the project has a high level of inherent difficulty, such tests are meaningful only if the test team provides a set of tests that actually exercise the full system; tests that exercise the full system as it would be used by the end user.

Just as more code does not mean a better solution to a software project, more testing as measured by CPU cycles blindly expended does not result in a higher-quality software product. Unless the test team is involved in defining tests that meaningfully exercise the full system, automated build testing at best provides an automated mechanism for re-running unit-level tests. These unit-level tests are tests that the developers should have been running all along before checking their code in (and thus making it subject to the automated build tests). It should not be surprising that developing the needed system-level tests requires the same level of functional analysis that is required to come up with the software solution5. For an inherently difficult problem, the proof is in successfully running system-level or “integration” tests that demonstrate that the actual system works as a whole. This is decidedly different than running and re-running unit-level tests that only exercise localized components.
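To make the distinction concrete, here is a minimal, hypothetical sketch in Python; the order-processing functions are invented for illustration and stand in for whatever the real project does. The unit-level test exercises one component in isolation, while the system-level test exercises the pieces working together along a path an end user would actually take.

    # A minimal, hypothetical sketch: unit-level versus system-level testing.
    # The functions below are invented stand-ins for a real project.

    def compute_tax(subtotal, rate):
        """Unit under test: one small, localized piece of logic."""
        return round(subtotal * rate, 2)

    def place_order(items, rate):
        """The 'system': pricing and tax working together."""
        subtotal = sum(price for _, price in items)
        return subtotal + compute_tax(subtotal, rate)

    def test_compute_tax():
        # Unit-level test: re-running this on every build burns CPU cycles
        # but proves nothing new about how the pieces fit together.
        assert compute_tax(100.00, 0.08) == 8.00

    def test_order_end_to_end():
        # System-level test: a failure here points at the interaction of
        # the pieces, not at a single component in isolation.
        items = [("widget", 10.00), ("gadget", 15.00)]
        assert place_order(items, 0.08) == 27.00

    if __name__ == "__main__":
        test_compute_tax()
        test_order_end_to_end()
        print("both tests pass")

The arithmetic is trivial by design; the point is where a failure would point. Only the second kind of test, designed by someone who understands the system's functional requirements, says anything about the system as a whole.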

The problem won't go away

Each software development advance, whether in tools or methodology, has addressed what appeared to be the largest and most tractable part of what was considered the essence of software development at the time. Invariably, the problem addressed was actually accident6. We now have “off-the-shelf” solutions for projects that in earlier times would have required a significant custom software development effort. Likewise, we have tools for project development such as fourth-generation languages, simplified languages such as Visual Basic, and interface generators that allow relatively non-technical personnel to express solutions to an ever-wider range of problems.

What were previously difficult tasks have become much easier. The efficiency of new tool sets allows such solutions to be expressed with only a tiny fraction of the personnel that would have been required if the entire solution had to be developed from scratch. This reduction in project size also means that what was once a fairly intractable development effort, due to the combination of difficulty and project size, can now be accomplished with a much smaller number of people focused only on solving the essence of the problem. The smaller number of people required means that collaboration issues that would have bogged down the effort in earlier times will not occur. The additional personnel and the need to coordinate their efforts were only required in order to solve what, in hindsight, were actually peripheral or “accidental” issues.

Each advance in the underlying hardware technology that allows such “canned” approaches to solving problems also makes it feasible to tackle new problems that push the boundaries of what is possible. The new technological capability allows problems to be solved for which there are currently no canned solutions. New tools and methodologies for approaching such problems may be what enables the solution. Alternatively, what now makes the problem solvable may be the newly available, comparatively cheap, raw computing power. The availability of such power means that new, larger and more complex problems can be assailed7. Either way, the cutting edge of software development is always stuck using the most machine-efficient tools to extract the last available CPU cycle or byte of communications bandwidth to solve some “new” problem. In fact, most such problems aren't new, but attacking them was prohibitively expensive until the cost of the required computation dropped enough.

New tools and off-the-shelf software solutions allow many “old” problems to be readily and easily solved. Fortunately for those of us who make a living at software development, new, cutting-edge projects go beyond what was done in the past. The essence of each such new project entails solving a new, bigger and more difficult problem. As long as the costs of computation and communication continue to decrease, there is no end in sight to the development of ever more complex custom software. The very nature of the problems that cannot be solved using previously available technologies and off-the-shelf solutions means that these projects will continue to be inherently difficult.

The essence is the problem

As of the time this chapter is being written (2006), it appears that the current generation of software development tools has addressed the majority of the nominal accidental problems facing software developers. Integrated development environments, whether commercial or free, allow the developer to concentrate on expressing the solution to the problem at hand. This work is accomplished with only the smallest vestiges of the accidental problems of even a few years ago intruding on the developers' thought processes. Such development environments not only address the editing of code and the running of symbolic debuggers but also include automated coordination8 with other users in the form of integrated access to source code control systems. This technical capability along with various agile or light-weight development methodologies means that a developer can theoretically devote almost one hundred percent of his or her work energies to expressing the solution to their piece of the overall project.

Agile development methodologies, by both their successes and limitations, show that there are still problems that require more than what is provided by even the most modern tools and methodologies. The combination of agile methodologies and modern software development tool sets allows developers to fully concentrate on attacking their piece of the essence of a project. As defined and sometimes practiced, XP appears to get as close as possible to eliminating all “accidental” programming tasks. Requirements are minimally documented as a set of user stories, and executing user-approved tests created from these stories validates the code being developed. Even any required coordination between XP teams is handled informally as a “shout across the bullpen.” All that is left to the developers is the creation of code: the essence of expressing a solution to the user's requirements.

The tools available to developers using agile methodologies are more than sufficient for such projects. These tools fully support writing and testing the code that solves the problem and managing that code once it has been created. Very little else is required, although a text editor helps with writing up the user stories and associated test plans. Unfortunately, agile methodologies do not support the solving of larger, complex problems. The limitation of agile methodologies is that some problem solutions require more than just cranking out code. Worse, the engineering analysis, software development, and project management tools available cannot overcome this limitation. When the problem at hand has a high level of inherent difficulty, a tool set is needed that allows for the efficient collaborative solution of such a problem, not just the writing, testing, and managing of code.

But there are collaboration tools!

A single individual could theoretically attempt any software development project if only there was sufficient time. For a project of any significance, there is rarely enough time for this approach and multiple developers are assigned. If the project is not inherently difficult, it can be worked using an agile methodology. The schedule duration required for the project can be adjusted by assigning more people to the effort up to the maximum number of people feasible for such methodologies. Even if the project is inherently difficult but the size of the team is less than about seven, there is still a possibility that a small, tight-knit team can pull off the project within a relatively short amount of time. This is only possible if the project technical lead can fully comprehend the task and the other team members act almost as his or her surrogates in completing their assigned sub-tasks.

Once a project outgrows that which can be fully understood by a single technical lead, project architect, or chief engineer, the task of implementing that project becomes constrained by the ability of the team to collaboratively solve a problem none of the participants completely understand. When developing such a large system, a chief architect or chief engineer may understand the total system functionality but only at a high level of abstraction. Conversely, a first level manager may understand the complete functionality that his team is charged with developing. But this same manager will only have partial knowledge of the other pieces of the project his particular piece interfaces with. Beyond these “adjacent” pieces, such a manager or technical lead will only have a spotty picture of the overall project. Even worse, that picture will frequently be inexact or reflect the state of the project as of the last major design review. In between the chief engineer and the first level manager may be layers of middle managers with even fuzzier understandings of the project since they have neither full visibility into the technical details nor a comprehensive view of the entire project.

No one on the project understands the project completely in all of its scope and all of its details. Instead, each responsible individual at different levels of the project development hierarchy relies on abstracting both lower and adjacent levels of the project. This allows them to carry out development of their particular piece. From personal experience, additional factors include both the complexity of the pieces that subordinate managers are tasked with developing and the competency of the subordinate managers and their staffs. All of these factors are quite variable in the real world and can be further complicated by outsourcing as is discussed more fully in the next chapter. This outsourcing can be either to external developers and subcontractors or, internally, within a matrix management organization.

It is at this point in the project size and complexity continuum that both current methodologies and currently available tools are found wanting. The available tools allow large amounts of code to be generated and managed but it is not just the generation of more code that solves the problem. Available automated test tools and techniques allow significant portions of the developmental baseline to be tested but usually only at a superficial level. Predictive methodologies such as a classic waterfall or the RUP address such a problem but are assumed to take too long while newer, agile methodologies cannot scale to projects of this size and complexity. Unfortunately, neither current tools nor methodologies can significantly shrink the schedule duration required for large, inherently difficult problems.

Tools can't solve the real problem

Due to this deficiency, attempts have been made to move tools “upstream” in the development process. This requires creating tools to address the needs of specification writers and those who actually design the higher level logic of these larger and more complex systems. These attempts have typically resulted in visualization tools that allow the developer to better see the program design (examples are flow-charting tools and the various program design language compilers) or organizational tools such as requirements traceability tools (DOORS, RequisitePro, etc.).

There have also been attempts to move tools downstream to address the needs of testers. These downstream products automate a variety of test functions, from running the tests themselves to tracking test results. These attempts to support the needs of testers have yielded a number of test automation tools that allow either the same set of tests to be executed in the exact same manner repeatedly or a planned series of tests to be run over various input ranges. In addition, tools have been created to automate bug or issue tracking and the creation and maintenance of test documentation and results. There are also the aforementioned tools for requirements traceability. These tools aid in determining whether the resulting system actually does what it is supposed to do and what known problems exist that need to be fixed.
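As a rough sketch of what such automation amounts to (the function under test and its cases are invented for illustration), the fragment below mechanically re-runs a planned series of cases over an input range. The tool's contribution is repeatability; deciding which cases are worth running still requires a person who understands the requirement.

    # A hypothetical sketch of data-driven test automation: the same
    # planned series of cases is executed the exact same way every run.
    import math

    def int_sqrt(n):
        """Function under test (invented for illustration)."""
        return math.isqrt(n)

    # The planned series of tests, chosen by a human who understands the
    # requirement: boundary values, a typical value, a large value.
    CASES = [(0, 0), (1, 1), (15, 3), (16, 4), (10**12, 10**6)]

    def run_suite():
        failures = []
        for given, expected in CASES:
            actual = int_sqrt(given)
            if actual != expected:
                failures.append((given, expected, actual))
        return failures

    if __name__ == "__main__":
        bad = run_suite()
        print("all cases pass" if not bad else f"failures: {bad}")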

What is missed by all of the tools, techniques and technologies is that none of them actually address the fundamental issue of solving the underlying problem. They only address the issues of coordinating the description of the system that will solve the problem (both functional requirements and design), expressing the solution once the design is known, or testing specific details of the implemented solution to see if it executes as intended. None of the tools actually address the issue of finding the implementation independent solution to the problem known as a software design.

To understand this inability of software tools to automatically create a design, it is necessary to understand the difference between program design and implementing or expressing that design through the activity known as coding. A software program design is the conceptual, relatively implementation-independent solution to the problem. Implementation is the expression of the design in one or more computer programs through coding. Tools can only:

  • capture the specification of the required program,
  • support its implementation by helping the developer initially write the required code,
  • aid in debugging the result,
  • manage the files that constitute the program, and
  • run tests on the program to see if it behaves as the tester intends.

These actions all only involve manipulating data as directed by the developer, specification author or tester. Design tools, at most, allow the developer to visualize a possible set of logical interactions that constitutes some small portion of the design. No “design tool” somehow suggests or invents the necessary logic. Only the developer is capable of abstracting the requirements of the project into a collection of actions and decision logic for solving the problem. This collection of actions and decision logic can then be captured as the program design and eventually implemented as the program.

Designing the solution to an inherently difficult problem

Given this differentiation between design and implementation, we can now look at what makes the design process for inherently difficult projects so intractable to being made easier by software tools. Given a small problem, a single individual can create an algorithm for solving the problem and that solution can be expressed through a number of different tools or techniques. Review of the solution by others may yield suggestions for improvements, which the original author may or may not choose to incorporate. This process becomes vastly more complex when multiple individuals are charged with finding a solution to a larger, interrelated problem. Each must now solve his or her own piece of the puzzle and also review whether that solution fits with the pieces created by the others and, similarly, whether each solution created by the others fits with what he or she has created. Also complicating this process is the real-world constraint that each contributor will probably have his or her own particular expertise and may or may not be conversant with the particular expertise of each of the other contributors.

This process is usually accomplished iteratively. The program design evolves with each design iteration providing a greater level of detail. In addition to capturing additional internal program design details, the evolving design also captures a higher level of coordination between each contributor's individual design. Throughout this process, there is rarely an absolutely “right” solution but rather just a “better” solution. On the other hand, there are an infinite number of possible “wrong” solutions. It is up to the design team, possibly in cooperation with the customer or user, to determine when the design is sufficient or “good enough.” This iterative refinement is a necessary part of the design creation process. For non-trivial problems, it is highly unlikely that the final solution will simply jump out at anyone charged with finding it. More likely, each participant finds part of the solution and the problem solving process melds this collection of partial solutions into a solution to the overall problem.

This need to logically compare the developing solution to itself for internal consistency and to the requirements is a fundamental constraint on the extent to which any automated tool can accelerate the problem solving process. Even a perfect communication tool that acted as the fictional “Vulcan mind meld” would only present each participant with an unrefined data dump. This dump would include the complete collection of requirements, physical constraints, known approaches, lessons learned, thoughts about lunch, etc. that the other participants carry around with them. It is the iterative design process that elicits the specific factors applicable to the problem at hand from the collection of knowledge carried around by each participant in the design process. The same process weighs each input for applicability. Inputs that contribute to the solution are accepted, while the rest of the information the participants carry around is discarded when seen to be inappropriate, incorrect, or unnecessary.

Thus, a design tool can only facilitate the exchange of ideas. It cannot substitute for the intellectual process of determining which of these various details are applicable to the problem at hand. Finally, it is entirely possible that the individual who ends up contributing any particular piece of the solution doesn’t even know that they are already “carrying around” this piece of the solution (or the background to create it). This only comes out when someone asks the correct question, in the correct way, at the correct time and only after any possible alternative has been shown to be insufficient.

Even at this point, the only way the current candidate solution can be recognized as the correct solution is if the same iterative refinement process is applied to it. If it is found to both solve the particular problem and not have the flaws of the alternatives then it may be sufficient. This is the stuff of pure creative thought and analytical problem solving. At best, some possible solutions may be amenable to exploration by prototyping. Tools can aid in the dissemination of the current set of problems and proposed initial solutions but rarely in finding the actual resolution. Finally, discovery of the existence of a solution does not mean that no other solution exists nor does it mean that this particular solution is the best one or even that it is “good enough.” Only continued analysis can provide this information and only the development team can make the assessment that such a solution is “good enough.” If not, the search for a better solution must continue.

Tool limitations

Software design tools can at most provide a communications infrastructure that allows the members of the design team to each share their approach to the problem with each other. This capability has existed from the time of hard-copy printed text with “red pen” edits through the current generation of shared document editing tools and other “groupware” and “collaborationware.” The result continues to be that each member of the design team has to understand the implications of both what the others have contributed and their feedback as to his or her own contribution. That is, there are two fundamental constraints on accelerating this process that are both rooted firmly in human cognition.

First, each of the contributors must create his or her own cut at their assigned piece of the puzzle. This creative process may involve solving a specific problem that, for all intents and purposes, has not been solved before. That is, each contributor must first create a solution to their piece of the essence of the problem before he or she can express that solution as a program. Second, the contributor must fit the inputs of the others on the design team into this solution and determine whether any of those inputs are fundamentally at variance with his or her piece of the solution to the puzzle. In other words, the development team must collaboratively solve an inherently difficult problem that requires a shared solution that none of them can create on their own. Neither of these activities is now, or is ever likely to be, amenable to automation. At best, the tools that are now available have removed a significant portion of what is “accidental” in achieving a shared view of the solution.

The argument here is not that software tools, off-the-shelf components or better methodologies cannot increase developer productivity. To be sure, the use of many software tools has resulted in significant increases in programmer productivity as compared to their productivity prior to the introduction of the tool. Beyond the direct increase in number of lines of code created, there has been a commensurate decrease in the error rate. This decrease in the error rate occurred as tasks that had been the subject of all too human error are now performed consistently and as perfectly as the supporting program(s) allow. Likewise, each new generation of software development methodologies has allowed new problems to be successfully attacked with fewer resources and with a potentially higher quality final product. Finally, the availability of increasingly sophisticated off-the-shelf components means that a variety of solutions can be simply purchased or, at worst, assembled from available components.

The details

The limitations of technological fixes to software development efforts taking “too long” when dealing with inherently difficult problems can best be explained by looking at each of the four categories of such fixes.

  1. Developer tools such as integrated development environments (IDEs) and their component parts such as context-sensitive editors, symbolic debuggers, integrated source code management functions, code difference tools, etc. strictly help the developer express the solution to his or her particular piece of the software development project. The closest such tool sets come to addressing the issues of developing solutions to inherently difficult problems is in their source code management functions. These allow the developer to determine how two pieces of the project are not fitting together but provide no clue as to what the correct fix to the problem is. If the problem is inherently difficult, such a fix may exist only at the system design level.

    As noted previously, inherently difficult problems must be addressed at the system design level. IDEs only remove the programmer's accidental tasks associated with expressing such a solution. They provide no assistance for the creation of the solution at the design level. Even integrated access to code management tools only means that one developer cannot overwrite another's contribution. If the partial solutions from two (or more) developers are incompatible, the developers involved must still work out a common, design-level solution between them.

  2. Application builders and simplified languages such as Visual Basic allow non-programmers to develop solutions to problems. By allowing the solution to be expressed at a fairly high level of abstraction, application builders together with off-the-shelf components may allow a subject matter expert to directly express a solution without having to involve a professional development organization. This eliminates the need to communicate the system requirements from the expert to the development team but also means that no functional analysis is done to determine the feasibility of the solution.

    My experience with such solutions has been that they often constitute a good prototype. Unfortunately, the lack of programming skills on the part of the subject matter expert turned developer often means that the solution is not ready for use in a production environment. The subject matter experts all too often produce code that is buggy, that is not robust enough for a production environment due to lack of attention to error handling and off-nominal conditions, and that rarely scales well due to a lack of understanding of real-world programming constraints. For example, the proposed solution may include an algorithm whose run time increases exponentially with the size of the input data set, or database accesses that require a large number of unbounded outer joins (see the sketch after this list). If such a solution is still deemed suitable and management would rather pay for more hardware to accommodate the inelegant solution, the problem is temporarily solved (until the data set grows again or some input case that isn't handled well becomes common).

  3. Building from purchased components is subject to the limitation that the builder must still understand the problem well enough to know which components to use and how to assemble them. This approach may work well in creating a capability for internal use but hardly makes sense for commercial products. There is no competitive advantage for products that are simply an assembly of components that competitors can also easily purchase. If there are no other significant components in such a product, solutions to the problem will rapidly become commoditized. Conversely, this may be a good thing for internal projects since it means that an off-the-shelf solution may be forthcoming but it is the death knell for a retail product.
  4. Test tools allow testing of pieces of a project but require that people supply the functional insight into the system before such tools can be extended to testing at the system level. This is similar to the way that text editors and development environments allow pieces of the program to be expressed but provide little leverage when it comes to assembling the overall product from the pieces.

    In order to test a system with a high level of inherent difficulty, the test team needs to design tests to ensure that the system, as a whole, is exercised by the tests. These tests can then be implemented through test tools to automate their execution. Such test automation can only be accomplished after the test designer understands the required system and then designs a test that fully exercises the system. As noted previously, this requires the same level of insight into the system as is required to create a set of functional requirements for the system.

    Test automation tools, in their current form, simply do not even begin to scratch the surface of providing this capability. True system testing requires insight into both the system design as it was created and some idealized system that perfectly implements the system level requirements. Barring such a system-level functional description, no amount of testing whether by hand or with automated tools can ever confirm that the system works as desired.

    What makes these efforts even more difficult is that final product testing is actually supposed to confirm these same two very different aspects of the implementation. Testing only that the system meets the requirements captured from a functional decomposition only confirms that, at an abstract level, the system solves the problem at hand. Underlying this system level solution is the actual implementation or code that purportedly expresses that solution. The correctness of this code, unfortunately, must also be confirmed at the system level. As an example, the code management tools described in item one do not provide a mechanism for ensuring that what is supposed to be a shared abstraction is actually understood by all who must utilize it. These tools only provide a mechanism for ensuring that any piece of the program that utilizes a shared abstraction accesses the same code that implements the abstraction. Likewise, a build does not ensure that these elements all correctly respect the abstraction of the shared component. Only a valid design can do that. Testing may be able to determine this but with the restriction that only a limited number of data values can be tested. Problems can also occur if the abstraction ends up being “leaky9” for any combination of special data values, off-nominal conditions, or simply over time.

    It is possible to have a “clean compile”, link, build and even successfully run various unit-level tests and still not have a functional program because an abstraction is broken. This may be due to leaky abstractions or it may be that the design wasn't understood sufficiently well, and in the same way, by the multiple developers involved. That is, each developer's piece of the project is absolutely correct given that developer's understanding of the project but, because the developers have different understandings of the project, the various pieces don't work together10, as sketched below.
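A minimal sketch of this failure mode, borrowing the units mix-up of footnote 10 (the module names and numbers are invented): each developer's piece passes its own unit tests and the build is green, yet the shared abstraction is understood differently on the two sides of the interface.

    # Hypothetical sketch: a clean build and passing unit tests with a
    # broken shared abstraction (the meaning of the "thrust" value).

    def compute_thrust_lbf(mass_kg, accel_m_s2):
        # Developer A's piece: returns thrust in pound-force.
        newtons = mass_kg * accel_m_s2
        return newtons * 0.224809

    def plan_burn(thrust):
        # Developer B's piece: written as if the value were in newtons.
        NEEDED_NEWTONS = 500.0
        return thrust >= NEEDED_NEWTONS

    # Each developer's unit tests pass, so the automated build is "green"...
    assert abs(compute_thrust_lbf(100.0, 5.0) - 112.4045) < 0.001
    assert plan_burn(600.0) is True

    # ...but the integrated behavior is wrong: 500 N of real thrust arrives
    # as roughly 112 lbf and the burn is rejected.
    print(plan_burn(compute_thrust_lbf(100.0, 5.0)))   # prints False

No amount of re-running these unit tests will surface the problem; only a design-level agreement on the shared abstraction, or a system-level test of the combined behavior, can.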
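Returning to the second item above, here is a hypothetical sketch of why such prototypes so often fail to scale; the invoice-matching scenario is invented. The function is functionally correct and perfectly adequate for demo-sized data, but its run time grows exponentially with the number of open invoices.

    # Hypothetical prototype code from a subject matter expert: correct,
    # but it tries every subset of invoices (2**n combinations).
    from itertools import combinations

    def payment_matches_invoices(invoice_amounts, payment):
        n = len(invoice_amounts)
        for size in range(1, n + 1):
            for combo in combinations(invoice_amounts, size):
                if abs(sum(combo) - payment) < 0.01:
                    return combo
        return None

    if __name__ == "__main__":
        demo = [120.00, 75.50, 33.25, 210.10, 99.99]
        print(payment_matches_invoices(demo, 195.50))   # (120.0, 75.5)
        # Fine for five invoices; with about forty open invoices this loop
        # would examine on the order of a trillion subsets.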

Attempts to extend software tools to include functional analysis and overall program design have been universally unsuccessful even when the problems at hand are relatively easy. Only testing of “easy” problems has seen substantial progress in automated tools. No progress has been made at extending automated tools to any of these tasks when the problem is inherently difficult. Software development tools seem to be limited to expressing a solution, superficially testing that expression and managing the pieces of the solution. When applied to inherently difficult problems, such tools can only minimize the extent to which accidental tasks impede the analysis or development efforts. If the problem is at all difficult, both the solution to the problem and testing of that solution require human insight into how to actually solve the problem.

It only gets worse as the problem gets bigger

Just as a reminder, beyond the discovery, description and implementation of a software solution, there are a number of other factors that affect the success of larger software projects. As the size of the project grows, the impact of the overall project grows and what portion of the project is strictly software diminishes. There are tools for systems engineering and there are tools for software development. The two do not tend to overlap. It takes human intelligence to translate the system level specification created by the customer or the systems engineers into first functional specifications and then software to implement the functionality allocated to the software solution. The system level specification will encompass the entire system but software productivity tools, by their nature, only address the implementation of the software portion of the project. This divergence of the software solution from the overall project solution further limits what can be expected of software tools when looking at large projects.

Still no 'silver bullet'

The search for the software development “silver bullet” is the continued attempt to find a combination of tools, project organization and development methodology that somehow brings massive productivity gains to the ever larger and more complex problems facing software developers. It has been several years since I read The Mythical Man-Month and of course the part of the book that stuck with me the most was the eponymous essay. The twentieth anniversary edition of The Mythical Man-Month included several additional essays including two that directly addressed the possibility that software development tools would somehow significantly increase programmer productivity. These essays are No Silver Bullet and 'No Silver Bullet' Refired. Sadly, after a re-reading of both essays I can only report that what was stated in 1986 and restated in 1995 is still every bit as true today. There is still no “Silver Bullet.”

There isn't and, in all likelihood, there never will be a technological “silver bullet” solution for software development taking “too long.” Especially unlikely is the creation of a “silver bullet” that addresses solving an inherently difficult software development problem. While today's software development problems will continually be cycled into the set of problems for which there are canned solutions, there will always be another inherently difficult problem tomorrow for which no such solution exists. Tools will be crucial in the creation of solutions to such problems but they are no substitute for the problem solving ability of the development team.


1Even when done correctly, agile methodologies tend not to be able to address system-level or global optimization needs. This can result in “hot spots” and bottlenecks that adversely affect the system's performance.

2Methodologies tend to splinter somewhat like religions into sects and cults. Each sect or cult, of course, implements the methodology in the one “true” way and all of the others are heretics. This usually means that there is a proliferation of incompatible tools as a methodology matures.

3The practice of compiling and linking all of the related or dependent elements of a software product to create a more or less complete instance of the final software product.

4Think small appliance repair by an amateur. You put it all back together, plug it in and watch for smoke. No smoke means that it might work; smoke means that there is a problem.

5From personal experience, it is nearly impossible to come up with a set of comprehensive functional tests on a complex system while working strictly as a “black-box” tester if no functional analysis existed prior to the system being implemented.

6Fred Brooks, in his essay “No Silver Bullet” (reprinted in the Anniversary Edition of The Mythical Man-Month), separated software development tasks into “accident” and “essence”: essence being the pure thought problem of inventing and expressing a programmatic approach to solving the problem at hand, and accident being all of the sundry tasks required to produce that solution in machine executable form. To a large extent, “inherent difficulty” is the measure of the difficulty of implementing the essence of a project. To be absolutely clear, I include technical coordination tasks that appear to many to be accidental in measuring the inherent difficulty of implementing the essence of a large project. It can be argued that there is no simple, categorical dividing line between these two project attributes since a factor that is considered accident on one project may well be part of the essence of another. As an example, memory and CPU constraints may be negligible for a sensor display that runs on a workstation but are an essential part of the problem for the same sensor display as part of an embedded system. Thus, what is “accident” somewhat depends on the nature of the project.

7Moore's Law (See http://en.wikipedia.org/wiki/Moore's_law) is largely responsible for the fact that there are always new, cutting edge problems to solve. Each advance in computing power and decrease in the cost of computation predicted by Moore's Law means that there are now new problems to solve that previously would not have been economically feasible to attack.

8There does seem to be quite a bit of confusion caused by automated source code control systems not being able to somehow automatically provide the necessary collaborative problem solving. I liken this confusion to that of beginning programming students who don't understand why their program can compile with no errors but not give the correct answer.

9Previously referenced: Joel Spolsky, “The Law of Leaky Abstractions,” Joel on Software.

10A recent and spectacular example of this was the loss of a Mars probe (the Mars Climate Orbiter) due to one development team working in English units of measurement and the other using metric.


This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.