A New Chore for Sisyphus

Chapter 4 - It is always something


By DaveAtFraud - Posted on 06 July 2010

RFC 1925-7: It is always something
(corollary). Good, Fast, Cheap: Pick any two (you can't have all three)1.

Now we have a new tool to help us understand why some software development projects are “easy” and others aren't. Easy here doesn't mean that there isn't hard work, including lots of code to write and then integrate and test. Rather, easy means that the code gets written and, somehow, the pieces fit together, the project “just works,” and the customer or user is happy with the result. Sometimes this happens with traditional software development methodologies, sometimes this happens with agile development methodologies, and sometimes it even happens with the non-methodology of a death march. The participants look back on the effort with pride, cite their success in their performance appraisals or resumes, and congratulate themselves on a job well done.

A problem with comparing different custom software development projects is that each is, by definition, unique. If the requirements for a particular effort weren't in some way unique, an existing, off-the-shelf solution would be available. Likewise, even a team implementing a project very similar to one it has done before has the experience of that previous effort to draw on, while a new team brings only its own unique experience going into a project. The inherent difficulty of a particular effort is an abstract measure of the shortest schedule that can be obtained for implementing that project when all other factors are assumed to be nominal. Even for a problem with a fairly high level of inherent difficulty, if the development team has extensive experience in the problem domain, there is a chance the team can pull off the miracle that inherent difficulty says should not be possible... but that's not the way the smart money bets.

That a specific team, especially one with experience in the problem space, can sometimes pull off such a miracle provides a clue to the human side of the equation: why inherently difficult problems can sometimes be solved with apparently less effort than expected. Inherently difficult problems come in all sizes and, as noted previously, there is a direct relationship between the likelihood of successfully solving a problem within a given schedule duration and the size of the team needed to solve it.

A problem complexity threshold exists at the point at which multiple team members must collaboratively solve the problem rather than simply being able to solve it cooperatively, with each taking a relatively separate piece to work on. To understand why this threshold exists, it is necessary to understand something about how individual people solve problems and how groups of people work together to solve a problem. In particular, it is necessary to understand how groups of people solve problems either cooperatively or collaboratively and how the two approaches differ. Further, it will be shown that the interactions necessary to collaboratively solve a problem are the limiting factor constraining how rapidly an inherently difficult problem can be solved.

Cooperation versus collaboration

Understanding both cooperative and collaborative problem solving starts with understanding how an individual solves an analytic problem. This process is different from simply recalling a previously learned answer2. If no such previous solution is available, as is frequently the case for a software development project, the problem must be solved for the first time. If the problem is large enough, several people may be assigned the task of creating the solution. Further, if the nature of the problem is such that it cannot be readily decomposed into a number of smaller problems, how fast and how well such a solution can be developed is strictly dependent on how fast the team charged with developing the solution can collaboratively solve the problem.

As noted above, problem solving of this type is quite different from simply recalling a previously learned answer2 or even recalling a previously defined approach for attacking a particular class of problems (e.g., a design pattern). It is more akin to developing a mathematical proof of a conjecture that is not known to be either true or false at the outset. This type of effort generally consists of both attempting to find counterexamples and taking stabs at proving the theorem. Both efforts assist in finding the answer, as testing possible counterexamples yields insight into the nature of the conjecture while attempts at the proof often lead to hints of possible counterexamples.

At some point (hopefully), the correct approach for the proof or a valid counterexample becomes evident in a spark of insight. A look at the history of attempts to prove various famous mathematical theorems such as Fermat’s Last Theorem3 and the four color map theorem4 provides some hints to the non-mathematically minded as to the mental processes involved in this type of activity. The proof of the four color map theorem, in particular, provides significant insight into this process: the various attempted classical proofs were all shown to be insufficient over time, and only a brute force proof method, somewhat controversial at the time, developed collaboratively and using computers, eventually provided the proof.

There are hard problems

That not all problems are equally difficult to solve should be obvious to anyone who attempts to solve puzzles. A trivial game such as “Minesweeper” or Solitaire in its various forms, crossword puzzles, Sudoku, etc. may be easy or difficult even though the abstract problem description for any instance of the puzzle is the same. As an example, for Minesweeper, the goal is to identify all of the mines hidden within a grid containing a given number of squares. Some arrangements of the same number of mines are easy to find and some are not. Following a methodical approach may or may not allow a particular game to be solved, although using some methodology is probably better than none at all if your aim is to solve the game regardless of how many attempts it requires.

This is much like software development, although the development team can use a number of tools to determine beforehand the relative difficulty of the problem they are faced with. If solving a problem as simple as Minesweeper can have varying levels of difficulty, it seems absurd not to recognize that software development projects have differing levels of difficulty and that the level of difficulty for a particular project can be independent of the overall size of the problem. In the past, the difficulty could often be found in the fundamental algorithms that needed to be derived in order to implement the project. This is rarely the case now; more likely, the complexity of the project lies in fitting together the various pieces that comprise it in order to solve the overall problem.

Some might argue that modern software development efforts only implement known approaches to previously solved problems and that a sufficiently broad collection of design patterns is sufficient. Several agile development methodologies start with identifying the “pattern” to be implemented from a collection of known solutions. To some extent this may be true, but it still requires that either the preliminary design or the functional requirements be developed far enough that such previously solved problems can be identified and the applicability of the known solution to the problem at hand be confirmed. More likely, the full problem solution requires the application of such solutions in a new or novel way. Otherwise, there would probably be an acceptable off-the-shelf solution that could be purchased. This isn't to say that an agile methodology isn't applicable but only that there is still a level of analysis to perform to identify the correct pattern and confirm that the unique factors of this particular implementation do not in some way invalidate the pattern.

Finding and implementing solutions to non-trivial computer programming problems becomes even more intractable when the development of such a solution involves several people all working to solve some aspect of the same problem concurrently and collaboratively. In order to collaboratively solve the problem, the team must first decompose the problem into smaller pieces since the goal is to have different members of the team responsible for different pieces of the problem. For complex problems, this decomposition is itself part of solving the problem. It must be done well or it will make the implementation of the solution to the problem more difficult. How well the problem can be solved concurrently and collaboratively is dependent on both how well the problem can be decomposed into smaller pieces and the extent to which the individual pieces must still interact.

To the extent that the pieces have minimal interaction, a cooperative implementation approach may be feasible. If, for all intents and purposes, the nature of the problem is such that it can be decomposed into a set of individual sub-problems that can each be solved fairly independently, then a cooperative problem solving effort will suffice. Agile development methodologies excel at solving such problems, and even a non-methodology such as “code and fix” or a death march can yield acceptable results when applied to them. If there are interdependencies but the problem is small enough, it may be possible to attack it with team members working independently as long as the team is cognizant of those interdependencies. On the other hand, if the problem is large, complex and strongly coupled, the solution to one piece impacts or constrains the solution to another. If this sort of interdependence is common, it is unlikely that an agile methodology is appropriate. At this point, the project schedule is constrained by the need to concurrently solve the interdependent problems or, using our new terminology, by the inherent difficulty of developing and implementing a solution to the problem.
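
One way to make the cooperative/collaborative distinction concrete is to look at the dependency graph implied by a proposed decomposition: if the graph splits into many small, disconnected clusters, the pieces can be parceled out and worked cooperatively; if most of the pieces end up tangled into one large cluster, the team is forced into collaborative problem solving. The sketch below is illustrative only; the module names and dependencies are hypothetical and not drawn from any real project.

    # Sketch: classify a proposed decomposition by how its pieces interconnect.
    # Many small clusters suggest cooperative work is feasible; one large
    # cluster suggests the pieces must be solved collaboratively.
    from collections import defaultdict

    def clusters(modules, dependencies):
        """Return the connected components of an undirected dependency graph."""
        graph = defaultdict(set)
        for a, b in dependencies:
            graph[a].add(b)
            graph[b].add(a)
        seen, components = set(), []
        for start in modules:
            if start in seen:
                continue
            stack, component = [start], set()
            while stack:
                node = stack.pop()
                if node in component:
                    continue
                component.add(node)
                stack.extend(graph[node] - component)
            seen |= component
            components.append(component)
        return components

    # Hypothetical decomposition: reporting stands alone, but the other four
    # pieces are coupled and would have to be designed together.
    modules = ["parser", "scheduler", "allocator", "executor", "reporting"]
    dependencies = [("parser", "scheduler"), ("scheduler", "allocator"),
                    ("allocator", "executor"), ("executor", "scheduler")]
    for group in clusters(modules, dependencies):
        print(sorted(group))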

The need to decompose the problem

Even the most fanatical death march or rapid agile methodology starts off with problem decomposition. Indeed, this may be the only architectural or system level design work done for the death march. Likewise, for agile methodologies, if done well, this work along with pattern identification may be the only such work required. Either way, it is only by decomposing a larger problem into its constituent parts that multiple people can be assigned to it. As the program size grows, the number of appropriately sized parcels required to decompose it for assignment to individual developers also grows. If the elements are interdependent, then as the number of elements grows, the technical coordination structure required to track and coordinate the implementation of those elements also grows. Since the elements also interact, the number of communication channels required to ensure that those implementing each element coordinate their efforts also grows. If there is significant, non-trivial interaction between elements, the developers must sufficiently understand the other elements that interact with the ones they are implementing. This need to maintain a coordinating structure to ensure that a coherent solution is developed is what drives the increasingly large management hierarchy of a predictive project as the project size grows.
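
The growth of those communication channels can be quantified with the familiar pairwise count: n people who must all coordinate with one another need n(n-1)/2 channels, which grows roughly as the square of the team size. The snippet below simply tabulates that count; it is an illustration, not something claimed in the text above.

    # Pairwise communication channels among n people who must all coordinate:
    # channels(n) = n * (n - 1) / 2, which grows roughly as the square of n.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (2, 5, 10, 20, 50):
        print(team_size, channels(team_size))   # 1, 10, 45, 190, 1225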

Unless an organization attempting to solve a large, complex problem takes into account the amount of non-trivial interdependence in the problem it is attempting to solve, it risks losing sight of the problem. It is far easier, and the schedule apparently much shorter, if the organization focuses on the supposedly more tractable problem of just cranking out code. Unfortunately, the only way this sort of interdependence can be recognized is by actually doing a functional decomposition and then examining the pieces that have been identified for both their relationships to the other elements and their algorithmic complexity.

Conservation of difficulty

If feasible, capturing significant algorithmic complexity in one or more modules may well help achieve a good design. The complexity of the resulting module or modules means that an agile development methodology will not be suitable for those modules. Likewise, distributing the complexity among several pieces tends to create dependencies between pieces that also make the overall project unsuitable for agile development, and it is probably a bad idea strictly from the perspective of correct software design. At this point the design complexity of the overall solution is imposing what can best be described as “conservation of difficulty” on any solution. Conservation of difficulty means that the problem’s design complexity can be moved around, collected or distributed, but it isn’t going to go away.

If an initial stab at a functional decomposition results in significant dependencies between elements, it is possible, but by no means guaranteed, that a “better” problem decomposition could eliminate those dependencies. As discussed previously, a good decomposition should yield components that are small, tight and clean. Without expending the engineering and schedule resources to develop such a decomposition, the development organization runs the risk of making a difficult task even harder by proceeding with a flawed, overly complex design. Unfortunately, the effort required to create and verify such a decomposition takes time, and the verification process generally requires logically reassembling the notional pieces to confirm that they do, indeed, solve the problem. Finally, conservation of difficulty means that at least some of the resulting modules will be internally complex even if their overall design is small, tight and clean. These modules will be difficult to build and test, but this is preferable to having the whole project be difficult to build and test.

Benefits of iterative refinement

Interestingly, the process of collaboratively refining a design solution to an inherently difficult problem usually yields a solution significantly superior to any initial straw-man design. This is another facet of collaborative problem solving that is missed by most attempts to provide automated tools to ease the process. Even the fictional “Vulcan mind meld” among those participating in a collaborative problem solving exercise would not provide the optimization achieved by questioning and justifying answers. With such a “mind meld,” along with the few pertinent facts would come a flood of irrelevant information that each participant would then need to winnow down to what is pertinent to the problem at hand. Further, it is often only the questions of the others that make a participant in such an exercise aware that some odd tidbit of their knowledge or experience is pertinent to the problem, and the others don't know what questions to ask until those questions arise from attempting to solve the problem.

This winnowing of the pertinent from the merely interesting or even irrelevant is best accomplished in a collegial environment in which the various contributors can openly and easily exchange ideas and discuss the myriad possible solutions. This process may sound somewhat similar to brainstorming or the coordinating conversations within an agile development bullpen, but it is far more structured and rigorous. It is this process of iterative refinement of the approach that makes collaborative problem solving so powerful. Only by exchanging information and identifying which pieces of information or ideas held by each participant are valuable in determining the overall solution can something approaching an optimal solution be created.

A baseline approach

As a baseline for examining how other methodologies or proposed solutions handle solving inherently difficult problems, we will first look at how the traditional heavyweight or waterfall methodologies deal with such problems. Specifically, we will look at the following “heavyweight” practices to see how they adapt to solving inherently difficult problems:

  1. Creation of functional requirements,
  2. Creation of a preliminary design, and
  3. Creation of a detailed design.

It should be noted that the rigorous requirements for completing specific tasks and achieving a given level of understanding of the project that characterize heavyweight methodologies should not be confused with the characteristic “ceremonies” found especially in software development efforts for government customers. There need not be a “dog and pony show” at the end of each of these steps; such a review is only required if the customer calls for it. Conversely, if there is no formal review with an external customer, the development team must honestly appraise the results of each activity in concert with the other project stakeholders to ensure that the developing solution to the problem is valid.

The creation of functional requirements entails decomposing the system level requirements into functional components. These components are then further decomposed into collections of individual, testable functional requirements5. The process of decomposing system requirements into functional, testable requirements allows the development team to understand the functional and data dependencies among the components of the system. Understanding the data dependencies allows the team to also define the data transformations necessary to satisfy those dependencies. Documenting this information in implementation independent functional requirements then provides a structure for defining what it means to say that “the system works” or that some specific piece of the system works. Inherently difficult problems tend to require complicated data transformations and have extensive data and functional dependencies among the components regardless of how much time is spent refining the functional decomposition. A good functional decomposition will minimize such dependencies but itself takes time to create.
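
A minimal sketch of how such dependencies can be made visible: record what data each functional component consumes and produces, then derive which component must feed which. The component and data names below are hypothetical placeholders; in practice they would come straight from the functional requirements.

    # Sketch: derive data dependencies among functional components from what
    # each one consumes and produces.  All names here are hypothetical.
    components = {
        "ingest":    {"consumes": {"raw feed"},           "produces": {"validated records"}},
        "transform": {"consumes": {"validated records"},  "produces": {"normalized records"}},
        "report":    {"consumes": {"normalized records"}, "produces": {"summary report"}},
        "audit":     {"consumes": {"validated records", "summary report"},
                      "produces": {"audit trail"}},
    }

    def data_dependencies(components):
        """Yield (consumer, producer, data item) triples implied by the decomposition."""
        for consumer, spec in components.items():
            for item in spec["consumes"]:
                for producer, other in components.items():
                    if producer != consumer and item in other["produces"]:
                        yield consumer, producer, item

    for consumer, producer, item in data_dependencies(components):
        print(f"{consumer} needs '{item}' from {producer}")

Counting these triples gives a rough feel for how tightly coupled the decomposition is before any code is written.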

A preliminary or conceptual design based on the functional decomposition is generally created at the same time as the functional requirements are being finalized. The preliminary design provides a functional and concrete expression of the system required to implement what has been described in the functional requirements. Systems with low inherent difficulty typically have a trivial preliminary design since there are neither complicated algorithms to elucidate nor extensive interfaces and tight functional dependencies between the various components to describe.

A detailed design provides a mapping of the functional components onto implementable software components and the algorithms required to achieve the desired functionality. This step also allows the full definition of the system's interface requirements to be captured as proposed data structures and/or procedure specifications. As with the preliminary design, the system level detailed design tends to be trivial if the proposed system has low inherent difficulty. This is not to say that the internal program design for each component is trivial; most developers have learned that a valid detailed design is required prior to implementing any reasonably complex software functionality. The alternative is best described as “stream of consciousness” programming, which works even less well than stream of consciousness writing. A further indication of low inherent difficulty is that the system level detailed design varies only slightly from the preliminary design.
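
As a small illustration of what capturing an interface as a proposed data structure and procedure specification can look like, the sketch below records one hypothetical interface at detailed-design time. Everything here (the names, the fields, the empty-input rule) is an invented placeholder, not part of any design discussed in this book.

    # Sketch of one interface captured as a proposed data structure plus a
    # procedure specification.  Names, fields, and error behavior are
    # hypothetical placeholders.
    from dataclasses import dataclass
    from typing import Sequence

    @dataclass(frozen=True)
    class SensorReading:
        sensor_id: str
        timestamp: float   # seconds since the epoch
        value: float       # engineering units, already calibrated

    def summarize_readings(readings: Sequence[SensorReading]) -> dict:
        """Return the mean value per sensor_id.

        Raises ValueError for an empty sequence, reflecting a (hypothetical)
        functional requirement that callers never request an empty summary.
        """
        if not readings:
            raise ValueError("at least one reading is required")
        totals = {}
        for reading in readings:
            totals.setdefault(reading.sensor_id, []).append(reading.value)
        return {sensor: sum(values) / len(values) for sensor, values in totals.items()}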

Application of the baseline approach

A system may be assumed to be inherently difficult to implement based solely on the system level requirements. Otherwise, it is only during the drafting of the functional requirements that the difficulty or ease of implementing the system becomes fully visible. It is the process of creating the functional requirements and a corresponding preliminary design that gives the development team sufficient insight into the nature of the problem to determine its inherent difficulty. Given a high level of inherent difficulty, the constraints imposed by the need for collaborative problem solving determine how quickly such a system can be implemented. Where high inherent difficulty requires collaborative problem solving, no schedule shorter than the time the team needs to actually solve the problem can produce a solution that meets the requirements. Further, simply describing this solution in a set of functional requirements and a preliminary design is only the first step. Implementing such a system will require maintaining “heavyweight” processes for any inherently difficult component throughout the development process.

Accepting this fact and applying an appropriate software development process to the problem may allow the system to be implemented successfully. At a minimum, the resulting development effort will require the schedule duration predicted for applying a predictive development process to the inherently difficult functions. Ignoring this fact and attempting to solve the problem with either an inappropriate methodology or no methodology at all will take much longer, and the results will not be satisfactory.

But what about using an agile methodology?

Agile methodologies provide an alternative to the creation of functional requirements and a preliminary design in the form of identifying a design pattern and collecting “user stories.” If a functional decomposition of the problem is sufficiently clean, the design pattern and user stories provide a valid alternative to analytically creating a functional decomposition and functional requirements. If, however, the collection of user stories reveals significant functional coupling or other characteristics of an inherently difficult problem, it is unlikely that an agile methodology can successfully solve the problem at hand, and the likelihood of success diminishes further as the size of the problem grows. Even in this case, the collection of user stories provides valid input for defining the desired user interaction, so the effort is by no means wasted.

Attempts to circumvent Callon's Law

The next four chapters will look at the four ways in which software development organizations attempt to circumvent Callon's Law when the problem to be solved has a high level of inherent difficulty. These are:

  1. the classic death march,
  2. agile development methodologies,
  3. software development tools, and
  4. outsourcing or open sourcing the project.

Showing why a death march fails when confronted by an inherently difficult problem is about as difficult as shooting fish in a barrel. However, it is useful to show that such a failure isn't simply a matter of bad luck or incompetent developers. Looking at a death march confronted by an inherently difficult problem also provides insights that will be applicable as we look at the other attempts to circumvent Callon's Law.


1This corollary has become known as “Callon's Law.” Management's consistent failure to recognize the applicability of Callon's Law to software development planning was one inspiration for this book.

2Although recalling such an applicable answer can short-circuit the need to solve the problem by reducing the “new” problem to a previously solved “old” problem. This is the beauty of software reuse: not only is the problem itself already solved, but a previous implementation of the solution is reused, which means at least some of the bugs have probably been wrung out or are at least somewhat understood.

3Fermat's Last Theorem states that if an integer n is greater than two, then a^n + b^n = c^n has no solutions in non-zero integers a, b, and c. Fermat claimed in 1637 to have found a proof, but no such proof was ever found. The theorem was finally proved in 1995.

4The four color map theorem states that given any plane separated into contiguous regions, such as a political map of the counties of a state, the regions may be colored using no more than four colors in such a way that no two adjacent regions receive the same color. The theorem was first proposed in 1852 and a proof was only published in 1976.

5For this book I'll ignore the possibility that the system level requirements could be allocated to hardware, operational procedures or software. For our purposes, we're only concerned with the system requirements that will be implemented in software.


This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.