
Chapter 3 – Just assume time travel is practical


By DaveAtFraud - Posted on 05 July 2010

RFC 1925, truth (2): No matter how hard you push and no matter what the priority, you can't increase the speed of light.
(corollary): No matter how hard you try, you can't make a baby in much less than 9 months. Trying to speed this up *might* make it slower, but it won't make it happen any quicker.

The rationale for a death march software development effort (if there ever really is a rational decision to embark on such a fiasco) really doesn’t matter1. Unlike the world of a cartoon such as “Dilbert2,” absurdly schedule-constrained projects such as these aren't conceived by sadistic managers just to torture their employees. I am absolutely sure that the project described in Chapter 1 had an apparently compelling business case for what was attempted, even if I wasn't privy to the decision process. Also, as with the project described in Chapter 1, a futile, failing effort does nothing to fulfill the requirements of that business case; only a successful development effort can do that. Other than temporarily preserving the development manager's job, unsuccessful efforts tend not to have a positive influence on the company's bottom line or reputation. On the other hand, such a development effort is not undertaken in a vacuum. Time-to-market pressures and hard schedule constraints continually drive businesses to try to shorten product development cycles.

For internal development, the project wouldn't be attempted unless there was a compelling business case. For software products, not keeping up with the competition means the product becomes non-competitive. Clearly, a futile, failing effort does not accomplish the desired end result in either case. If “the project” really is that important, then ensuring its success is also critical. This means understanding what is being attempted and pragmatically planning the effort required. If such a plan calls for a twelve-month effort but marketing says they “have to have it” in six months, attempts to compress the development schedule into six months will not work. Instead of the desired six-month project or even the originally planned twelve-month effort, expect a fifteen-month death march that produces a buggy, poorly thought-out, hard-to-maintain piece of junk. Worse, the development team will be burned out and in an even deeper hole when it comes to attempting the next release. For the project described in Chapter 1, seven months was an ambitious but probably achievable schedule, three months was when management wanted it, and ten months was how long it took before there was something resembling what was asked for that could be inflicted on customers... bugs, instabilities and all.

How many managers phrase a particular schedule edict as “do or die” and then neither succeed nor die? A significant percentage of upper-level managers in many businesses have neither engineering training nor a production management background. Instead, most came up through the ranks in sales and/or marketing. Stretch goals are the norm in sales and marketing since it is axiomatic that management should never set a sales or lead-generation quota so low that it provides no challenge. This may work well in sales or marketing, but it results in a horrid product when blindly applied to engineering efforts in general and software development projects in particular.

Out of control

For the purposes of this book, we will be talking about a generic software project or program development effort that has external factors driving the schedule, the required functionality and the expected quality of the final product. Examples of such external schedule constraints include a desire to beat the competition’s next release to market, another Y2K, or something as mundane as marketing wanting something to announce at the next trade show. Requirements could be driven by the functionality of a competitor or by the needs of a large customer or user group. The functionality of the resulting effort must be acceptable to the end users or customers. A development “failure” (a buggy, unacceptable product) results in at least imagined dire consequences (a big customer threatening to go with a competitor's product) if not real damage to the business in lost sales and market share.

A typical example would be a company or department with a development staff budget that allows for a total staff of between ten and twenty-five people. If the number of developers involved is below about ten, simple, small-group management techniques are generally sufficient even if a traditional or waterfall methodology is used. As will be discussed in more detail later, efforts involving fewer than about ten developers also tend to be highly dependent on the individual developers' skills, morale, etc., which makes predicting the organizational behavior nearly impossible. This level of effort is also easily within the scale factor for agile development methodologies if the nature of the effort is amenable to one of these strategies (much more on this later). Even so, the high end of this range (above about seven individual contributors) will severely tax the development team manager unless the beginnings of sub-groups emerge, with more experienced developers providing the sub-group technical leadership but without cost and/or schedule responsibility for their informal group.

On the other hand, any company or organization that attempts a software project involving more than about twenty-five developers without a defined development methodology3 and organization is in far more trouble than a single book of this size could ever hope to solve. That many large, commercial software products exhibit the long-term consequences of the practices described in Chapter 2 probably means that the companies producing these products are employing a software development methodology that emphasizes short-term schedule considerations over product quality.

A critical characteristic is that the technology or development effort is core to the business. This same model applies to some extent to the technical development arm of a larger business that relies on that technology for its competitive edge. Thus, the development group gets high-level system requirements from marketing or from the user organization's management. These high-level requirements are generally not testable but are descriptive of the desired customer's or user's experience. Marketing and/or upper management also sets schedule expectations. These will generally be based on calendar events such as trade shows, the end of the financial reporting period or some other event that provides an appropriate forum for product announcements. Alternatively, the schedule could be determined by other factors such as a competitor’s expected release or a major customer's demands for certain new features, factors that have little or nothing to do with the required effort. The main point is that both the functionality required and the desired schedule are likely outside of the development team’s control. Indeed, given a competitive environment, they may be outside of the company’s control.

Choose a methodology

Regardless of what drives the project's requirements or its schedule, the project is at hand. The question is how best to attack it with the available resources and within the desired schedule. The broad choices are:

  1. Take a predictive approach following a classic development methodology, possibly modified to allow some level of iteration.
  2. Follow an agile methodology such as eXtreme Programming (XP), Scrum or DSDM.
  3. Add enough additional bodies to the existing organization and methodology (if there is one worthy of being called such) and charge off to start coding since there obviously isn’t time to do anything else.

While there are variations and specific methodologies (e.g., RUP for predictive methodologies or the mentioned XP, Scrum and DSDM for agile methodologies), it will initially be sufficient to look at the possible development methodology choices as defined by these broad classes.

Everybody does it

Regardless of the overall approach, at a minimum, any project needs to be decomposed into assignable components before work can commence. As with the project described in Chapter 1, if nothing else, this decomposition allows coding responsibility to be assigned. More mature development organizations will perform a functional analysis to come up with a functional decomposition and utilize the results of this decomposition to plan the effort and understand the relationship between the various components. For predictive or planned development efforts, this effort roughly corresponds to drafting functional requirements and completing a preliminary design for the project. For agile development methodologies, this portion of the project corresponds to identifying an applicable design pattern, collecting user stories, and getting user approval of test plans for each component.

Projects that neglect this step because “there isn't time” do so at extreme risk. Functional requirements, whether as formal specifications or user stories, allow the customer or project manager to determine if the development organization really understands what has been asked for. The preliminary design provides the same level of understanding to the development organization in the form of a concrete technical approach to implementing the project. Needless to say, no functional requirements worthy of being called such were produced for the development effort described in Chapter 1. This omission was based on past experience with previous (but smaller) development efforts that were completed without either a detailed project plan or solid functional requirements. Instead of creating testable functional specifications, the developers collected a few notes and e-mails that clarified the most obvious loose ends from the system level specification, which, for the project described in Chapter 1, was called the marketing requirements document (MRD).

Work estimates

Without work estimates based on a preliminary design and functional requirements, the project can find itself mired in a death march without even knowing it is coming. For the effort described in Chapter 1, such estimates weren’t needed to see that a death march was coming; creating them would only have made clear how hopeless the task in front of us was. Even a vague understanding of the effort being attempted and the schedule made it clear that things were going to get ugly. Unfortunately for many projects, this is the extent to which management attempts to forecast when the program will actually be completed since the schedule isn't subject to discussion or change.

With neither functional requirements nor a preliminary design, it is highly unlikely that a development organization will be able to accurately estimate the effort or schedule required to develop the requested program. Lack of a preliminary design will frequently leave the development organization grappling with unexpected technical obstacles. No functional requirements means that development continues until management decides it's politically impossible to continue to delay the release, management or the customer decides that it's “good enough,” or the project runs out of money to continue funding development.

An agile methodology such as XP deals with this problem by taking as many iterations as are necessary to complete the desired functionality. DSDM, on the other hand, adjusts the scope of the effort to match the time available, but with the concurrence of the users. Under either approach, development of user stories and corresponding test cases allows management and the developers to determine the scope of the proposed effort and, for an organization experienced at applying such a methodology, to spot potentially problematic areas of the proposed effort. Examples of such problem areas include intricate dependencies between what had previously been assumed to be independent elements and highly algorithmic or computationally complex program components that cannot easily be tested or demonstrated to be correct.

Developmental quanta: the work unit

Given a functional decomposition and preliminary design or user stories and test cases, the development team is in a position to determine a projected schedule and the staffing needs for the proposed effort. This is generally accomplished by using the functional decomposition or collections of related user stories to identify assignments that are suitable for an individual developer or agile team4. For the purposes of this book, we will define these assignable development tasks as “work units.” Ideally, each work unit is a portion of the processing to be developed that represents a coherent, logically complete piece of the problem to be solved. It should be noted that the nature of the problem to be solved implicitly determines the work units under this definition, whereas most development organizations partition a problem into assignable tasks based on the developers available to work the problem. The latter approach may be convenient, but it frequently obscures significant parts of the problem.

Another way of describing a work unit is as a quantum of functionality to be developed. A well-defined work unit can only be broken down into smaller pieces by arbitrarily dividing up the functionality. A well-defined work unit will not contain multiple, significant, intrinsically implemented abstractions since each such abstraction should be its own work unit. A work unit may, on the other hand, utilize such abstractions when they are implemented by other work units. At a minimum, each significant abstraction should become a work unit on its own. If the decomposition of the project into such work units results in a number of small work units, then more than one such work unit may be assigned to the same developer. Large abstractions (e.g., a module that implements the data persistence model for a project) may require further partitioning into smaller pieces (see previous footnote).

The reader is cautioned not to confuse such a work assignment (a collection of work units to be developed) with a work unit. Work units in my terminology are coherent functional fragments of the program to be developed. The alternative usage of “work unit” is just a developer's collected assignments, which may or may not have any functional coherency or necessarily represent some identifiable functional component of the program to be developed.

Work unit criteria

In a well-thought-out design, the work units will be “small”, “tight” and “clean”. I won’t attempt to provide a formal definition of these attributes, but the following discussion should be sufficient:

  • Small describes both the amount of effort required to develop the work unit (it should easily be within the ability of a single developer or XP team) and the scope of the functionality to be developed. Small scope means that it should be easy to describe the functionality implemented by the work unit.
  • Tight describes the relatedness of the functionality to be developed to implement the work unit. As with the scope constraint of small, the pieces of the work unit should work together closely while having very clearly defined and distinct relationships with other elements of the project.
  • Clean describes these relationships with the other elements of the project. Access to the functionality of the work unit or the data it encompasses should be through a minimal set of well-defined interfaces.

If the project planning and decomposition are hurried and haphazard, as might be expected with a severely schedule-constrained project, the work unit definitions will only be worse. With a poorly thought through preliminary design, the work units tend to be large, only loosely related internally, poorly abstracted, ill-defined, or burdened with deep or complicated boundaries to other work units. That is, they will not be small, tight and clean. It is important to point out that such a poor preliminary design only makes solving the problem more difficult. At this point it is sufficient that work units have been identified and a cast of developers assembled to take on the task of implementing them.
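
To make the three attributes a bit more concrete, here is a deliberately tiny sketch in Python. The component and all of its names (an order-pricing unit, price_order, and so on) are invented for illustration and have nothing to do with the project from Chapter 1; the point is only the shape: one coherent piece of functionality (small), internal details that stay internal (tight), and a single well-defined interface to the rest of the system (clean).

    # Hypothetical sketch of a "small, tight, clean" work unit: an
    # order-pricing component.  All names are invented for illustration.
    from dataclasses import dataclass
    from typing import Iterable

    @dataclass(frozen=True)
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int

    def price_order(items: Iterable[LineItem], tax_rate: float) -> int:
        """The single public entry point -- the unit's 'clean' boundary."""
        subtotal = sum(i.quantity * i.unit_price_cents for i in items)
        return subtotal + _tax_in_cents(subtotal, tax_rate)

    def _tax_in_cents(subtotal_cents: int, tax_rate: float) -> int:
        # Internal detail ('tight'): other work units never call this directly.
        return round(subtotal_cents * tax_rate)

A work unit of this shape can be specified, assigned, developed and tested by one developer (or one small agile team) with little need to coordinate anything beyond the signature of the one public function.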

Once the work units have been defined, the difficulty of developing each piece and the effort of assembling these pieces into the coherent whole that is “the project” determine the schedule for the project. The work units will each take a given amount of time to develop, and the assembly process of putting the individual work units together to meet the system level requirements will also take a certain amount of time. Further decomposition of the problem itself will take time and potentially increases the amount of time it will take to assemble the pieces. Leaving the project in larger pieces will result in the development of the individual pieces taking longer while only minimally reducing the schedule required to assemble the pieces. These two factors can be traded off against each other (more pieces for a shorter development period; fewer pieces for a longer one), and by considering the limiting case of leaving the project in one piece, the project's maximum schedule duration can be determined.

Assignment of additional resources to a more thoroughly decomposed project will result in a shorter schedule but only up to a point as the discussion in Chapter 1 about hiring 10,000 programmers for a 10,000 line project showed. Thus, there must be a particular assignment of resources and project decomposition into work units that will result in the shortest possible schedule.
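
The shape of that trade-off can be illustrated with a toy calculation. The numbers below are made up purely for illustration (a project of 120 developer-months of partitionable work and a small fixed coordination cost per pairwise communication path); no real project follows so tidy a formula, but the curve it produces is the one the conjecture below describes: adding developers shortens the schedule only up to a point, after which coordination overhead dominates and the schedule grows again.

    # Toy model only: invented constants, not an estimation method.
    def schedule_months(team_size: int,
                        work_months: float = 120.0,
                        overhead_per_path: float = 0.15) -> float:
        paths = team_size * (team_size - 1) / 2   # pairwise communication paths
        return work_months / team_size + overhead_per_path * paths

    for n in (2, 5, 10, 20, 40):
        print(f"{n:2d} developers -> {schedule_months(n):5.1f} months")

    #  2 developers ->  60.2 months
    #  5 developers ->  25.5 months
    # 10 developers ->  18.8 months
    # 20 developers ->  34.5 months
    # 40 developers -> 120.0 months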

The Inherent Difficulty Conjecture

For a given software program to be developed, there is a minimum feasible schedule duration such that the application of additional resources to the development team will not result in a shorter development period.

It should be noted that specifics of the development organization (personalities, experience level, organizational capability, etc.) and pragmatic considerations such as available staff will cause the achievable development period to be longer. If this minimum schedule is longer than is desirable, something else has to give because the program, as defined, will not be ready any sooner.

The problem is that this predicted schedule may be longer than desired since, as noted above, real-world schedule deadlines rarely have anything to do with the estimated engineering effort required for developing a project. Unfortunately, as the effort described in Chapter 1 showed, development schedules are nowhere near infinitely malleable. The trick in successfully developing quality software is to bring these two factors together such that the desired functionality is achievable and the corresponding implied schedule fits the external constraints. Although software project scheduling is hardly an exact science, from my experience, consistent results within ten percent of what is actually achieved are definitely feasible, and the accuracy of such estimates gets better over time if the organization continues to do work in the same problem space. Given the inherent difficulty conjecture, arbitrarily demanding a highly constrained schedule for a given functional development will rarely, if ever, yield the desired result.

Another way to look at the “inherent difficulty conjecture” is as follows. To develop a given program, a set of individual problems must be solved. As the number of these problems increases, the size of the team required to solve these problems within a given schedule duration increases. To the extent that these individual pieces must be made to work together, as the number of individual contributors increases, a technical management structure must be put into place to coordinate the efforts of the individual contributors. The time required for determining and creating the individual pieces and the effort required to assemble these pieces into the final product or project put an absolute constraint on how quickly the effort can be completed. Attempts to defeat this constraint by assigning additional personnel, imposing forced overtime on a smaller development team5 or outsourcing some or all of the effort (this is discussed in more detail in Chapter 8) cannot significantly reduce the schedule duration and can actually lengthen it.

The role of problem complexity

It should be noted that there are two very specific caveats included in the inherent difficulty conjecture. The first is that the time required for creation of the individual pieces tends to have a minimum possible duration, since attempts to break up tightly coupled processing rarely succeed in meeting an imposed, shortened schedule. Such an effort fails because either significant effort (a well done detailed design) is required in order to cleanly partition the processing among several developers, or a poor effort at the partitioning simply results in a longer period of integration. This leads to the second caveat: a software development project schedule is also constrained by the effort required to make the individual pieces work together. If only a minimal level of interoperability is required and the correct functioning of the individual pieces of software can be readily determined, a project may fall into a domain that is suitable for an agile development methodology. These methodologies emphasize the creation of individual functioning pieces but don’t support integration of tightly coupled processing concurrently created by multiple developers. Both of these factors can be summed up in the word “complexity.” Projects with low complexity can sometimes be completed on a fairly aggressive or compressed schedule while projects with high complexity cannot.

For quite some time it has been known that complexity is the bane of a successful software project. This has shown up as attempts to measure the complexity of a particular module6 or program and research into arcane fields such as computational complexity and algorithmic complexity. Past attempts at measuring software complexity have been more focused on either the computational cost of a particular algorithm or on the maintainability of individual programs and modules. There have also been efforts to use data coupling as a measure of software project complexity. As the overall complexity of a specific software development problem is somewhat of a given, focusing on the complexity of individual modules makes sense for an organization considering how maintainable a project will be. For the development organization, the problem is the problem so the implementing organization has little control over the overall complexity of the problem they are faced with. However, it is the complexity of the problem to be solved and any significant data coupling of the pieces of the program that implements the solution that drive the inherent difficulty of a project.

Regardless of the complexity measure used, software practitioners recognize that it is the nature of the problem that one is attempting to solve that governs the complexity of the implementation that solves the problem. As with the implementation of something as well understood as a program to do sorting, it is possible to reduce the complexity of individual modules that implement the algorithm. Burying pieces of the solution in separate subprograms is one way to accomplish this at the expense of both slower execution and, if anything, a more obfuscated implementation. This approach means that the pieces of the solution can be forced to satisfy some arbitrary maximum complexity metric but the nature of the algorithm, also known as the design complexity7, determines the overall simplicity or complexity of the solution. Thus, it is always possible to reduce the complexity of a specific module by breaking the module into smaller pieces but this doesn’t reduce the complexity of the algorithm being implemented. This complexity is inherent in the algorithm and is the algorithm's design complexity.
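
A deliberately simple sketch makes the distinction visible. Below, insertion sort is written once as a single routine and once split across a helper function; the split version scores lower on a per-module metric such as McCabe's because each function contains fewer decisions, yet every loop and comparison the algorithm requires is still present somewhere (and the split version is, if anything, slower and harder to follow). The code is illustrative only.

    # One module: all of the algorithm's decisions live in one function.
    def insertion_sort(items: list) -> list:
        result = list(items)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:   # the shift test
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result

    # Split version: each function is "simpler", but the same decisions remain.
    def _find_slot(prefix: list, key) -> int:
        j = len(prefix) - 1
        while j >= 0 and prefix[j] > key:       # the same shift test
            j -= 1
        return j + 1

    def insertion_sort_split(items: list) -> list:
        result = []
        for key in items:                       # the same outer loop
            result.insert(_find_slot(result, key), key)
        return result

    assert insertion_sort([3, 1, 2]) == insertion_sort_split([3, 1, 2]) == [1, 2, 3]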

Determining project complexity

It is important to understand the difference between the inherent difficulty of a problem, the complexity of the expression of the problem as given by cyclomatic or other complexity measures, and the managerial difficulty of directing the effort that solves the problem. While there has been extensive research into computational complexity8, such complexity can generally be taken as a given within the scope of typical commercial product development or internal business automation efforts. Hopefully, no one will attempt to find a polynomial time solution to an NP-complete or NP-hard problem as part of the type of development effort being discussed in this book.

On the other hand, design complexity clearly drives the complexity of the expression of the program. The design complexity can also drive the managerial effort required for the project since the complexity of what is being attempted can determine the amount of interaction required of the personnel involved in the project. Another possible driver of managerial complexity may be the need to maintain a significant shared data abstraction across various program elements. Such a shared data abstraction results in strong data coupling of these elements. If this requires coordinating the development of a large enough number of otherwise simple program elements, the coordination effort can still give rise to a management problem even though the expression of each element is relatively easy. As an example, just coordinating the implementation of a consistent user interface to an application can pose a significant development hurdle. Each “screen” is in and of itself “relatively easy” to develop, but the cumulative effort to create a common, consistent “look and feel” can be herculean.
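
A minimal, entirely hypothetical sketch of that kind of coupling: three "screens" that are each trivial to write, but all of which embed assumptions about a shared customer record and how it is displayed. Renaming a field or changing the display convention is easy in any one place and expensive across all of them, which is exactly the coordination problem described above.

    from dataclasses import dataclass

    @dataclass
    class Customer:                    # the shared data abstraction
        first_name: str
        last_name: str
        balance_cents: int

    def summary_screen(c: Customer) -> str:
        return f"{c.last_name}, {c.first_name}"

    def billing_screen(c: Customer) -> str:
        return f"{c.last_name}, {c.first_name}: ${c.balance_cents / 100:.2f}"

    def collections_screen(c: Customer) -> str:
        status = "OVERDUE" if c.balance_cents > 0 else "ok"
        return f"{c.last_name}, {c.first_name} ({status})"

Each function is easy; keeping all of them consistent with one another, and with every screen added later, is where the managerial complexity lives.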

Managing project complexity

The inherent difficulty of a software development effort is strongly related to the managerial problem of coordinating the various pieces of a project such that the pieces each fulfill their individual requirements and fit together to solve the overall system requirements. Simply having an organization in place that supposedly will develop and integrate a system that will accomplish its requirements is not the same as actually executing that assignment. Having the organization in place just makes the execution possible but it is the inherent difficulty of the project that drives the achievable schedule for the development effort.

The key to a successful software development effort is to understand the inherent difficulty in what is being attempted. This step is critical to understanding the feasibility of the project being attempted and, especially, whether all aspects of the project can be completed within the desired schedule. The inherent difficulty of the project is the most significant factor in determining the likely success or failure of the project given a particular schedule and staffing level. Unlike cyclomatic complexity or algorithmic complexity, inherent difficulty includes:

  • the effort to decompose the project into individual pieces,
  • the actual effort to develop and unit test these pieces,
  • the technical task of coordinating the contributions of the various individual developers assigned to the project, and
  • the engineering or integration effort required to meld the individual pieces into a fully functional whole.

In order to successfully execute a “real world” development effort, all of these problems must be addressed as a whole since, typically, one can be made easier but only at the expense of making another or several others more difficult.

Another way of describing this issue is that dividing a task allows the work to be done in parallel but this division of effort then requires that the pieces be integrated (made to work together) to provide the full functionality of the task. The equilibrium point is reached when the gains achieved by additional partitioning of the task are balanced by the effort required to first divide the project into pieces and then, after development, integrate the pieces. One can hope that the pieces will all just fit together but that rarely happens due to the proliferation of communications paths. Such a communication path exists in a project any time two developers working on different elements of the project must coordinate their efforts in some way.
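
That proliferation is easy to quantify: among n developers there are n(n-1)/2 possible pairwise coordination paths, the same quadratic term used in the toy schedule model earlier in this chapter. A quick, purely illustrative calculation:

    from math import comb

    for team in (3, 6, 12, 25):
        print(f"{team:2d} developers -> {comb(team, 2):3d} possible pairwise paths")

    #  3 developers ->   3 possible pairwise paths
    #  6 developers ->  15 possible pairwise paths
    # 12 developers ->  66 possible pairwise paths
    # 25 developers -> 300 possible pairwise paths

Not every pair actually needs to talk, but the number of pairs that might grows roughly with the square of the team size, while the number of hands available to do the work grows only linearly.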

This need for coordination means that crafting the pieces such that they fit together takes time and effort that apparently doesn’t contribute to just cranking out code. This is particularly true where a shared abstraction of a key element must be maintained across a variety of program elements. As if the problem of defining and creating such an abstract representation were not difficult enough, the real world has a tendency to further complicate the problem. The underlying representation of an abstraction tends to “leak up9” into the abstract data type. How these “leaks” occur will tend to vary across the different program elements, but they will occur.
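
A small, hypothetical sketch of such a leak: two implementations of the same key-value abstraction, one in memory and one (as a stub) over a network. The interface is identical, but the networked version quietly adds failure modes (timeouts, retries, partial failure) that every caller of the "abstract" interface now has to understand; the underlying representation has leaked up.

    class KeyValueStore:
        def get(self, key: str) -> str:
            raise NotImplementedError

    class InMemoryStore(KeyValueStore):
        def __init__(self):
            self._data = {}

        def get(self, key: str) -> str:
            return self._data[key]          # only failure mode: KeyError

    class RemoteStore(KeyValueStore):
        def get(self, key: str) -> str:
            # A real implementation would make a network call; the abstraction
            # now leaks latency and timeout behavior to every caller.
            raise TimeoutError(f"backend did not answer for {key!r}")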

Sidebar: A simplistic non-software example would be building a brick fireplace and chimney

A pile of bricks, a pile of mortar and some assorted pieces of metal for the flue do not make a fireplace and chimney. The bricks, mortar and flue pieces must all be correctly assembled in order to have a functional fireplace and chimney. With significant planning and coordination, some of the effort of building the chimney can be done in parallel but at the expense of actually doing the planning and coordination. It should also be noted that lifting the now larger pieces into place is more difficult than just placing the individual bricks and there is still some risk that the pieces won’t fit together.

Even if everything goes as planned and the pieces all fit, the correct functioning of the assembled system can only be confirmed after all of the pieces are in place. Worse, this correct functioning can be affected by factors “out of scope” to the “chimney development effort” like back drafts, lack of make-up air, or a bird deciding that the chimney makes a fine place to build a nest. Further, if some key component such as the flue is left out or doesn’t function correctly, the cost of correcting the error is greater than if the operation of the flue had been confirmed when the nearby bricks were initially put into place. At that point, only a single tier or so of bricks would need to be removed to correct the problem.

The lessons here are:

  1. The parallel creation of component pieces allows the work to be completed faster but the larger pieces are harder to work with.
  2. If the planning isn’t correctly done, there is a risk of catastrophic failure and even small issues can be more difficult to correct.
  3. Any changes to the requirements may result in rework to already assembled, but not yet in place, parts of the chimney as well as any part of the chimney already completed.
  4. The success of the project from the customer’s perspective can be influenced by factors that the “chimney development team” considers to be out of scope or beyond their control.

Finally, the larger the effort, the more likely the effort is to be governed by normative productivity. Small efforts are subject to fits of genius or liberal applications of caffeine such that a few highly skilled developers can often develop significant functionality in a very short amount of time. This sort of magic doesn't scale up to larger efforts because larger efforts tend to include large amounts of very mundane functionality. Also, as the problem grows, there tends to be a divergence between the software aspect of the solution and the total solution. Another way of putting this is that large projects tend to have a more significant impact on the organization that will be using the project and not just on the actual users of the program.

Why us?

Unfortunately, a large number of attempted commercial software development efforts fall into this very borderland of possible tasks. It may be possible to accomplish these tasks with a single small team, but the size and difficulty of the task may mean that a larger team is required, with the larger team's attendant inefficiencies but greater capabilities. The large number of projects in this size range also reflects the state of most organizations' development ability. Only a few organizations have developed the capability to successfully step up to the challenge of even medium-sized development efforts (a minimum of 25 man-years applied to a product release is probably a good lower threshold for “medium”). Yet competitive pressures push organizations that are not up to this challenge to continually attempt to grow beyond smaller projects (5 man-years or less applied to a product release). The numbers given are somewhat arbitrary and roughly correspond to a 25-person development team working on a project for a year for a “medium-sized project” and a 10-person development team working on a project for 6 months as the upper threshold for a “small project.” Larger efforts are beyond the scope of this book. Such efforts will usually follow a predictive methodology since these are the only methodologies that have been applied to such larger efforts with any consistent level of success. On the other hand, the state of much commercial software indicates that many organizations are doing something else or not correctly executing their supposed predictive methodology.

The decision as to whether a task can be accomplished by a small team as opposed to a larger team will frequently need to be made based on the specific people involved. Team productivity is very dependent on a number of intangible variables including the amount of experience the team has in the particular problem space, working conditions, morale, individual personalities, etc. While this book can provide guidance for understanding the factors involved, it is not possible to provide definitive or specific guidance. Such specific guidance would presuppose knowledge of the abilities, mind-sets, personal circumstances, etc. of the team involved (e.g., your star programmer from last year's project may have other things on her mind if she is now a new mother).

On the other hand, as noted above, the larger the task, the more likely it is that the development team will not perform significantly better than the organization's historical productivity. Smaller tasks can only be estimated with accuracy if the specific individuals working the project are known, and even this may change based on external factors. Estimates for larger tasks should assume that the productivity of the staff developing the project will run close to what has been accomplished in the past by the same organization. This assumption is especially sound if it is based on past projects in the same or a similar problem domain. It should be noted that it is the successful execution of these larger projects that seems to cause the most trouble; they are thus the subject of this book. It should also be noted that it is unlikely that some new technology or methodology will provide a magical “silver bullet” that will somehow allow the project to be completed with significantly better productivity than has been seen in the past (see Chapter 7).

The most important aspect of understanding why small-scale development methodologies do not scale up is to understand the dynamics of the small development team. The defining characteristic of such a team is that its size and its corresponding assigned development problem allow a single individual serving as technical lead or manager to fully understand any team member’s part of the project and how that part fits into the whole. Whether the methodology applied is some agile development methodology or a classic waterfall methodology, the key is that, at this scale of effort, this individual could implement the entire project given enough calendar time or could implement any one piece of the project. In effect, this individual integrates the project as it is created and ensures the correct interaction of the various project pieces.

Once the project exceeds what can be fully understood by a single individual, the dynamics of the project change from being strictly about writing code to managing the writing of the code that solves the problem. For complex problems, this management problem includes managing the solution of the problem before any code can effectively be written. This effect can also be seen with smaller projects (e.g., seven or fewer developers) that are managed by a non-technical “project manager.” If such a project has a significant level of inherent difficulty, a non-technical manager will be severely challenged to manage the creation of the solution unless the team includes or has access to a technical lead. In this scenario, the non-technical project manager must recognize the interdependence of the individual tasks and ensure that the assigned developers appropriately coordinate their efforts or the project will, at a minimum, take far longer than expected and may even fail.

Determining inherent difficulty

The following aspects of a software development project determine its inherent difficulty:

  • The extent to which the project can be easily decomposed into small, individual components that can be worked in parallel by multiple developers. Experience with both larger scale software development efforts (significantly over 100 man-years of total effort) and smaller efforts indicates that any software development effort can be decomposed into such smaller functional components. For complex problems, finding a “good” decomposition that minimizes the functional coupling of these program components may be a non-trivial task in and of itself. This leads to the next characteristic.
  • The degree of functional coupling of the various program elements such that distinct program elements must interact in order for the system to meet its requirements. Regardless of the functional decomposition, strong functional coupling may remain between the decomposed program elements. A better functional decomposition may provide a lower level of functional coupling but finding that better functional decomposition takes engineering effort. To the extent that the functional coupling reflects the problem's design complexity, the complex interactions are then simply moved to within certain components. This yields our next characteristic.
  • As with most classical problems such as sorting, there appears to be a limiting, minimal design complexity for any software development project. That is, regardless of the specific approach adopted by the development team, there will be a point at which additional effort to refine the design will not achieve a less complex approach. For the problem at hand, even the “best” functional decomposition will still include significant functional coupling. Besides functional coupling, systems may have significant data coupling which gives our final characteristic. It should also be noted that even finding a good, let alone optimal, functional design takes time and is an exquisitely difficult engineering effort for complex problems.
  • The existence of significant data coupling instead of or in addition to functional coupling also impacts the inherent difficulty of a project. If the operation of the program requires that a large amount of intricate data be shared (e.g., in an underlying database), there will be extensive logical coupling of disparate program elements that otherwise function asynchronously. This was one of the characteristics of the project described in Chapter 1 that caused repeated problems since there was no overall data persistence model for the product and creating such a model would have taken far longer than the time supposedly available.

All software development projects have an inherent difficulty. For small projects, the level of inherent difficulty mainly constrains who can be assigned to the project. Small programs with a high level of inherent difficulty for their size require, at a minimum, the assignment of a close-knit team of very competent developers if the project is to succeed. For very difficult problems, even a small team may require the application of a full “waterfall” methodology to ensure that the intricacies of the desired system are both captured in the requirements and then correctly implemented. For larger projects, the inherent difficulty becomes a very different technical coordination problem that determines which software development methodologies are appropriate for the project as well as the minimum schedule for the project. The remainder of this book will look at why larger development efforts with a high level of inherent difficulty cannot be completed quickly.

The next chapter will look at why human nature as applied to collaborative problem solving is the ultimate cause of inherently difficult problems taking longer to solve. We can then revisit the project described in Chapter 1 and see how the practices described in Chapter 2 cannot succeed when a project has any appreciable level of inherent difficulty. Worse still, these practices, when applied to a problem with high inherent difficulty, can be shown to inevitably result in the long-term consequences described in Chapter 2. Once we have completed this analysis we can then look at why other attempts to somehow quickly solve problems with a high level of inherent difficulty cannot succeed.


1In his book, Death March, Ed Yourdon provides a comprehensive and fascinating review of the various types of death march projects, the ways in which a development effort can become a death march and some very valuable survival information for those developers unlucky enough to be so condemned. For this book, I am only concerned about one particular type of death march: the schedule constrained software development effort, where the only driver is assumed to be, “... it must be done sooner than that.” I will even go so far as to restrict my discussion to the “price is no object” variant since I wish to show that, as with the project described in Chapter 1, even the most opulent expenditure of resources cannot succeed in developing quality software for a complex system without a sufficient schedule.

2If you aren't familiar with Dilbert, please visit http://www.dilbert.com/.

3Methodologies exist to overcome the human limitations of both the individual developers and of the development team as a whole. Ideally, the practices prescribed by a methodology are mutually supporting and are designed as a safeguard against human frailties. By following a defined methodology, the project has these additional supports to ensure that things don’t somehow “go wrong.” Following an ad-hoc development process leaves the product at the mercy of everything just “going right” which will rarely, if ever, happen. A multitude of people have pointed out that Murphy (as in Murphy's Law) loves software and something will go wrong. The question on a software development project is not whether or not something will go wrong but, instead, how many things will go wrong and how severe will be their impact?

4Full decomposition of complex projects or projects that include large amounts of tightly coupled processing may not be feasible until a detailed design has been created. The only way this can be discovered is by attempting to create such a decomposition into assignable work units.

5This approach tends not to be sustainable as the team members eventually conclude that there is more to life than work. An additional limitation appears over time as fatigue and time pressures cause an increase in the error rate and in staff turnover, which effectively reduces the size of the team. Typically, these errors are trivial coding errors but they contribute to the overall “noise” in the system. As described in Chapter 2, sufficient schedule pressure over a long enough period of time will result in deep, systemic problems. See also Peopleware by DeMarco and Lister for a more in-depth discussion of the consequences such a strategy has for the development team.

6There are several software complexity metrics. A couple of the best known are McCabe (1976): A Complexity Measure, IEEE Transactions on Software Engineering, Volume 2, No. 4, pp. 308-320, December 1976, and Halstead (1977): Elements of Software Science, New York, Elsevier North-Holland, 1977.

7McCabe, Thomas J., and Charles Butler, "Design Complexity Measurement and Testing," Communications of the ACM, 32, pp. 1415-1425, December 1989.

8For example, sorting seems to be at best O(n log n), other problems are O(n²), some are NP-Complete and others are NP-Hard. Some, such as factoring a number that is the product of two large primes, tantalizingly can't be proved to be NP-Complete but that hardly means that they are easy. See Computers and Intractability: A Guide to the Theory of NP-Completeness, Garey, M. and Johnson, D., and The Design and Analysis of Computer Algorithms, Aho, Hopcroft and Ullman.

9Leaky Abstractions, Joel on Software (previously referenced)


This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.