A New Chore for Sisyphus

Chapter 6 – It is more complicated than you think

By DaveAtFraud - Posted on 07 July 2010

RFC 1925-5: It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.

A valid question at this point would be, could proper application of an agile development methodology have saved the project described in Chapter 1? The chaotic development effort that resulted from throwing a number of bodies at the development task was most assuredly anything but following a defined agile methodology. That some of the developers even occasionally babbled “agile-speak” to justify what they were doing didn't change the fact that the project was a classical death march.

Some aspects of this project could have been done better by following almost any other methodology rather than the death march approach that blindly threw bodies at the problem. Unfortunately, this project included development tasks that were unsuitable for development using an agile methodology. That is, this project contained several components that were constrained by their inherent difficulty. These components desperately needed the more robust, collaborative problem solving practices of a predictive methodology. Thus, the factor determining the appropriate methodology for this and probably many other projects is not the number of lines of code to be developed. Instead, the difficulty or complexity of specific aspects of the problem or of the problem as a whole determines the appropriate development methodology.

Agile methodologies are a collection of development approaches that can be characterized as utilizing a “light-weight,” iterative and adaptive approach to software development. This is in contrast to the extensively planned and more predictive approaches of traditional, “heavy-weight” development methodologies. When applied to an appropriate problem, agile methodologies can provide amazing increases in developer productivity while achieving greater customer satisfaction. In addition, agile methodologies can solve software development problems that would be difficult or impossible to solve with a predictive methodology. In particular, projects that have high requirements volatility or, at best, vaguely defined requirements can frequently be accomplished with an agile methodology. Such projects would be completely intractable with a predictive methodology. Understanding which projects can be accomplished using agile methodologies and which cannot also provides additional insight into how the inherent difficulty of a given project constrains the achievable schedule for a project.

As much as management would like for agile methodologies to be the methodological “silver bullet” that solves the software productivity problem, these methodologies are only suitable for certain specific problem domains. To be sure, when correctly applied to problems in these domains, agile methodologies achieve significant gains in productivity and, as discussed above, may even be able to successfully complete a project that would not be feasible for a predictive methodology. Unfortunately, agile methodologies don’t appear to be applicable to development projects with a high level of inherent difficulty.

Size begets structure

For some time, software development projects were assumed to require a development team whose size and expected schedule were a simple function of the number of lines of code to be developed. Scaling factors were included for specific attributes of the project being attempted but the number of developer-months required and, thus, the implied managerial structure were calculated from this line of code estimate. This set of assumptions is the foundation of the Constructive Cost Model (COCOMO)1. This rigidly structured, predictive approach to software development solidified as the “one true path” in government standards such as the various forms of DoD-STD-2167 and successor industry standards such as ISO/IEC 12207. These standards include rigid requirements for the overall project plan and, to a large extent, the project infrastructure.
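The COCOMO relationship can be sketched in a few lines. The coefficients below are the published "basic COCOMO" organic-mode values; a real estimate would also apply the intermediate model's cost-driver multipliers, which are omitted here:

```python
# Basic COCOMO effort and schedule estimate (organic-mode coefficients).
# This is only the basic model; the intermediate model multiplies the
# effort by cost-driver ratings that are not shown here.

def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, schedule in calendar months)."""
    effort = a * kloc ** b          # person-months from lines of code
    schedule = c * effort ** d      # implied calendar schedule
    return effort, schedule

effort, schedule = cocomo_basic(50)  # a hypothetical 50 KLOC organic project
print(f"{effort:.0f} person-months over {schedule:.0f} months")
```

Note that the model takes the size estimate as given and derives team size (effort divided by schedule) from it, which is exactly the "size begets structure" assumption discussed above.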

Under this approach, all projects are assumed to best be accomplished by following a “waterfall model.” This approach demands strict delineation of system analysis and requirements writing, preliminary design, detailed design, coding, integration and test and final acceptance tests. Similarly, the problem's functional decomposition is rigidly reflected in the project organization. Functional groups with specific technical expertise are designated to craft their particular piece of the overall project. Further, each piece is only allowed to interact with other project pieces through well-defined and fully specified interfaces. Variations on the waterfall to allow development in smaller iterations are an acceptable alternative practice (e.g., an iterative implementation of the RUP or spiral model2) but these iterations are themselves complete, waterfall mini-projects. Organizations that did not follow such a waterfall methodology were assumed to be immature or, at best, to be using an ill-defined methodology at their own significant risk.

The agile approach

High managerial overhead, numerous unsuccessful projects, lack of flexibility, and inability to quickly respond to customer requested changes in these methods resulted in the revolution that has come to be known as agile or lightweight methodologies. Management’s “silver bullet” of the moment for dispatching long project schedules is the adoption of one of the various agile development methodology flavors. Barring the actual implementation of a defined agile methodology, management will fall back on using agile development terminology to describe what are still essentially quintessential death marches. Application of an agile development methodology to an appropriate task has a very good chance of working and producing the desired result; simply using agile development terminology to describe the activities of a death march will not. Nor will attempting to use an agile methodology on a project that is not suitable for such techniques.

The following description of agile development methodologies is taken from the Wikipedia3 article on “Agile Software Development”:

Most agile methods attempt to minimize risk by developing software in short timeboxes, called iterations, which typically last one to four weeks. Each iteration is like a miniature software project of its own, and includes all the tasks necessary to release the mini-increment of new functionality: planning, requirements analysis, design, coding, testing, and documentation. While an iteration may not add enough functionality to warrant releasing the product, an agile software project intends to be capable of releasing new software at the end of every iteration. At the end of each iteration, the team reevaluates project priorities.

Agile methods emphasize real-time communication, preferably face-to-face, over written documents. Most agile teams are located in a bullpen and include all the people necessary to finish software. At a minimum, this includes programmers and their "customers." (Customers are the people who define the product. They may be product managers, business analysts, or actual customers.) The bullpen may also include testers, interaction designers, technical writers, and management.

Agile methods also emphasize working software as the primary measure of progress. Combined with the preference for face-to-face communication, agile methods produce very little written documentation relative to other methods. This has resulted in criticism of agile methods as being undisciplined hacking.

Interested readers are also invited to read both the “Agile Manifesto” and the “Principles Behind the Agile Manifesto4” to gain a better understanding of the philosophy behind agile methodologies.

Interestingly, the agile philosophy of developing software in small, demonstrable iterations comes very close to Fred Brooks' call for “growing” software5 as described in his “No Silver Bullet” essay written over twenty years ago. To a great extent, agile methodologies are a logical extension of earlier attempts to quickly and successfully develop software. Previous attempts to short-circuit the long development cycles of predictive methodologies include Rapid Application Development (RAD), Joint Application Development (JAD) and rapid prototyping. A common feature of these approaches is having the user or customer work with immediate results rather than paper specifications. While a number of different (at least to some degree) agile methodologies have been defined, they typically share the following characteristics:

Iterative – Agile or lightweight methodologies take the approach of the spiral model to its extreme. Sequential, complete mini-projects are executed over multiple time-boxes that are typically each at most several weeks in length. This approach allows immediate feedback but it comes with the constraint of only being able to complete a small subset of the desired functionality within each iteration. This approach works well for projects that aren’t inherently difficult. Given such a problem, the piecemeal implementation has few interdependencies that hinder the effort. The approach tends to fail with projects that are inherently difficult since the short development cycle does not allow sufficient time to solve large, complex problems that are characterized by having significant interdependencies. This is a result of agile methodologies not having a formal coordination structure in place that allows for input from multiple contributors. The informal coordination of the project bullpen works well enough for problems with low inherent difficulty but a formal coordination structure is necessary if a methodology is going to solve large, complex, interdependent problems.

Informal non-user documentation – Agile or lightweight methodologies focus on developing working code from user stories. The code is then tested against test plans agreed to by the user. Little or no documentation concerning the internal implementation is produced other than comments within the code. Significant changes to any code typically involve a re-write or, in agile terms, a refactoring. If the code being refactored has significant dependencies on other components, maintaining these interfaces is at the mercy of any comments included by the developer or the understandability of the original developer’s code.

Co-location with “customer” - The informal specification of the project in user stories and test plans still leaves significant room for the developers to interpret what they are creating. Co-location of a customer representative with the development team allows for instantaneous and continuous feedback as to the acceptability of the evolving product. This can only work where there is such a “customer representative” and their priority assignment is to fulfill that role for the development effort.

Co-location of team – The lack of formal internal or developmental documentation (formal requirements specifications, interface specifications, design documents, etc.) means that the team must rely on informal communication for sharing information that more formal methodologies produce as tangible documents. Most agile methodologies even go so far as to describe the optimal team environment as a bullpen with little or no private space. It should be noted that this puts an upper limit on the number of personnel who can be assigned to an agile project. The noise level and sheer size of the space required eventually limits how well informal “shout across the room” communication methods work. Also, such a bullpen environment is not suitable for the type of intensive, long duration, collaborative problem solving required for attacking inherently difficult problems.

Immediacy – Agile methodologies emphasize only attempting to attack the problem at hand with little or no consideration given to meeting possible future requirements. This is captured in the expression “you aren’t gonna need it” or YAGNI. A developer concerned with future considerations will get a “YAGNI” reminder to get them to refocus on the current iteration. The downside of this approach is that new features frequently require a significant re-write since no consideration was given to future needs during the previous iteration. This approach is required as part of the methodology since any functionality developed must be demonstrable.

Working code – Agile methods emphasize the creation of working code over specifications and paper designs. The positive side of this is the customer can actually work with what could be a portion of the final product rather than only see mock-ups and design documents. The downside is that complex interactions are not amenable to immediate coding.

There's more than one way to do it

Agile methodologies have been evolving and diversifying since the time the Agile Manifesto was created in early 2001. The large number of apparently different agile methodologies can be traced to the need to adapt and refine any given methodology when it is applied to a specific problem domain or corporate culture. This evolution has given rise to at least three distinctly different approaches to agile development that illustrate the limitations of agile development techniques when applied to different problem domains.

We will look at eXtreme Programming (XP), scrum and DSDM as our “representative sample” of agile development methodologies. These variations on implementing the “Agile Manifesto” either allow (where a practice may be customized by the development organization) or require6 (where a practice differentiates a given methodology from similar approaches) more or less coordination between developers. Specifically, XP is at its best when dealing with stove-piped development projects where only informal coordination between developers is required, while scrum and DSDM provide mechanisms and interaction structures for more deliberate coordination. The level of coordination imposed by each methodology limits the extent to which it can scale up. This scaling limitation is in addition to methodology-inherent scaling limitations such as just how many people can effectively be considered to be co-located.

Perhaps best known of the agile development methodologies is eXtreme Programming or XP. XP is known for its unique practice of having two developers work concurrently at the same workstation. Under this practice, known as “pair programming,” one developer typically is concerned with the expression of the code and the other is focused on higher level issues such as the overall design and requirements. These roles rotate constantly such that today’s coder is tomorrow's designer. The same practice of pair programming that allows high productivity on “stove-piped” problems limits the applicability of XP to inherently difficult problems that require significant coordination between developers. Any significant coordination between teams requires achieving an understanding among four individuals instead of two, multiplying the communication burden since full understanding must be established between every pair of the four individuals involved. This problem only becomes worse as the number of teams that need to coordinate in order to solve a particular issue grows.

Scrum has been used to attack more difficult problems that aren’t as cleanly separable. Scrum seems to have a maximum team size of approximately ten individual contributors7. The daily informal meeting from which this methodology takes its name (“scrum” is the rugby term for a huddle) provides a forum where coordination issues can be brought up and, hopefully, solved or at least an “off-line” meeting of the interested parties arranged. The high level of coordination provided by the daily scrum allows this methodology to be used to attack problems with a higher level of inherent difficulty than XP. Unfortunately, the methodology doesn't usually scale well above a team size of about ten. Attempts to run multiple scrum teams in parallel to attack larger problems run into coordination problems as predicted by the inherent difficulty conjecture. This limitation on team size obviously limits the size of the problems scrum can be applied to.

Although the scrum approach to development has been applied to software projects, it has been most successful as a means of organizing engineering efforts such as creating system level or functional requirements. The bullpen of the XP development effort isn't required, although this puts a premium on the daily scrum for surfacing issues and solutions. The scrum approach of attempting to immediately solve issues as they come up, either as part of the scrum or in an immediate “off-line” meeting between the required parties, still puts a premium on co-location and informal structure in order to facilitate such meetings.

Dynamic Systems Development Method (DSDM)8 is an iterative, pragmatic process focused on delivering business solutions quickly and efficiently. DSDM primarily differs from both scrum and XP in that it has its best uses where the time requirement is fixed. Both scrum and XP tacitly assume that iterative cycles continue until the desired functionality is achieved. DSDM works within a fixed schedule to develop as much of the desired (note that I didn't say required) functionality as possible.

DSDM focuses on the efficient delivery of the business solution rather than just providing a methodology for organizing the team (scrum) or individual developer activity (XP). A significant portion of the project activity is to ensure the feasibility and business sense of a project before it is created. This is only superficially like predictive methodologies. The critical difference between these methodologies is that DSDM focuses on prioritizing user requests using a sieve described by the mnemonic MoSCoW (Must Have, Should Have, Could Have, Won't Have). In order to ensure that the solution meets the needs of all interested parties, DSDM stresses cooperation and collaboration. This stress on meeting the needs of all interested parties frequently means that DSDM is best suited for internal development efforts. With internal development efforts, the users, those who will support the product, management and even external entities such as customers or suppliers can reasonably be involved in the determination of the project's requirements and their priority.

Pointing out the obvious, DSDM achieves a fixed schedule by focusing the effort on the “must have” requirements, then the “should have” requirements and, only if the schedule permits, working on the “could have” functionality. “Won't have” functionality requests are deferred to some future iteration. The dramatic results achieved by DSDM flow from the same 80/20 rule mentioned elsewhere: twenty percent of the code supports eighty percent of the functionality, so attempt to build only the twenty percent of the code that gives the user the most useful functionality. While some of the code with low functional impact is concerned with error recovery and bounds checking, quite a bit of it is usually created to handle “corner cases” that occur only infrequently. Not implementing such functionality is more acceptable for an internal use program where the user understands the business logic behind not handling a specific case; users of a retail or mass-market product will typically not be so understanding. This is especially true if the supposed “corner case” turns out to be somewhat nominal for a particular user.
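The MoSCoW sieve amounts to a greedy fill of a fixed timebox by priority class. A minimal sketch follows; the requirement names and day estimates are hypothetical, and a real DSDM project would negotiate these with all interested parties rather than compute them:

```python
# Illustrative MoSCoW sieve: fill a fixed timebox by priority class.
# Requirement names and effort estimates below are hypothetical.

PRIORITY = {"must": 0, "should": 1, "could": 2}  # "won't" is excluded up front

def plan_timebox(requirements, capacity_days):
    """Greedily schedule requirements by MoSCoW class within capacity."""
    scheduled, deferred = [], []
    in_scope = [r for r in requirements if r["moscow"] in PRIORITY]
    for req in sorted(in_scope, key=lambda r: PRIORITY[r["moscow"]]):
        if req["days"] <= capacity_days:
            scheduled.append(req["name"])
            capacity_days -= req["days"]
        else:
            deferred.append(req["name"])
    return scheduled, deferred

reqs = [
    {"name": "login",   "moscow": "must",   "days": 5},
    {"name": "reports", "moscow": "should", "days": 8},
    {"name": "theming", "moscow": "could",  "days": 4},
    {"name": "export",  "moscow": "won't",  "days": 6},  # deferred by definition
]
scheduled, deferred = plan_timebox(reqs, capacity_days=10)
```

The sketch also makes the chapter's coordination point visible: nothing in this per-team sieve knows that another team may depend on one of the items it just deferred.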

Why co-location?

The traditional approaches to software development have generally stressed co-locating all of the people working on a particular aspect of the problem to ensure good communication within the group. Communication between groups required the creation and control of formal interface definitions. This approach plays well with the goal of modular software development, whether those modules be identified by structured design and analysis or through object oriented design and analysis. While the individual techniques are different, the end result is that well-defined and delineated functions are identified, designed, built, integrated and, possibly, made available for later reuse. In this case, isolation of teams working on different elements is a desirable characteristic of the process. Given such an arrangement, the teams are forced to fully define and subscribe to detailed interface specifications. The longer development schedule required by such a rigorous development approach is just one of its inherent costs.

The agile team in a “bullpen” seeks to expand the good communications and dynamic interactions of the small team to the project team as a whole. The goal is to extend the high productivity of small teams to the entire project. With XP in particular, informal coordination across elements is substituted for the detailed formal specifications of a predictive methodology. Co-location with the customer (or customer representative) allows for rapid (but, again, informal) resolution of questions about requirements or feedback concerning the proposed implementation of a specific functionality.

A survey of current experience as documented by a variety of on-line articles and postings indicates that a maximum team size of approximately ten to twelve developers9 is typical across all agile methodologies10. Of course there are exceptions, and understanding which characteristics of a project make it amenable to a larger team size allows a software development manager to determine whether a larger team, and hopefully a shorter schedule, is feasible for his or her project.

When is an agile development approach appropriate?

Barry Boehm and Richard Turner in Balancing Agility and Discipline: A Guide for the Perplexed11 suggest that risk analysis can be used to choose between using an agile methodology and a more traditional, predictive methodology. The authors suggest that both sides of the project continuum have their own home ground. The choice of appropriate methodology for a particular project is determined by how close the project is to one of the “home grounds.” Some of the criteria, such as the level of experience of the development team, are subjective while other criteria, such as “criticality,” are less so. Thus, we have the home grounds for agile and predictive development methodologies:

Agile home ground:

Low criticality
Senior developers
High requirements change
Small number of developers
Culture that thrives on chaos

Predictive home ground:

High criticality
Junior developers
Low requirements change
Large number of developers
Culture that demands order

It should be emphasized that these “home grounds” are just specific regions of a vast multi-dimensional continuum. Software development projects will rarely exhibit all of the characteristics of either home ground and, more likely, will exhibit some characteristics of both. These “home ground” factors provide managerial criteria for analyzing a given project so that the risk of using either an agile or a predictive methodology can be determined. To these criteria I would also add the project’s inherent difficulty and, its verification twin, the need to be able to show, rigorously, that the program is working as specified.
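As an illustration only, and not Boehm and Turner's actual risk-analysis procedure, the home-ground comparison can be caricatured as a tally across the five axes; the axis names and the simple vote count are assumptions made for this sketch:

```python
# Toy tally of the five home-ground axes. Boehm and Turner's method is a
# fuller risk analysis; this vote count is only a caricature of it.

AXES = ["criticality", "developer_experience", "requirements_change",
        "team_size", "culture"]

def home_ground(project):
    """project maps each axis to 'agile' or 'predictive'; return the lean."""
    agile = sum(1 for a in AXES if project.get(a) == "agile")
    predictive = sum(1 for a in AXES if project.get(a) == "predictive")
    if agile > predictive:
        return "agile"
    if predictive > agile:
        return "predictive"
    return "mixed"

verdict = home_ground({
    "criticality": "predictive",      # high criticality
    "developer_experience": "agile",  # senior developers
    "requirements_change": "agile",   # volatile requirements
    "team_size": "agile",             # small team
    "culture": "predictive",          # order-demanding shop
})
```

Even this caricature shows why most real projects land between the home grounds: a single mixed profile like the one above already leans only weakly one way.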

I use the term “verification twin” to describe rigorous acceptance criteria since inherent difficulty and formal acceptance or verification requirements are frequently just two different aspects of the same system characteristic. Large software systems that include complex and exacting calculations are frequently simply not feasible to prove “by hand” except for a very small number of test values12. Verification of such systems relies on being able to see into the development process and constrain that process such that a known result is produced. Especially critical is constraining the interactions between system elements to be well behaved and predictable across all inputs and all possible input domains. Examples of such systems include radar tracking and targeting systems, flight navigation systems, fly-by-wire aircraft controls, large numerical programs such as optimization projects and computer imaging systems (Computed Tomography, Magnetic Resonance Imaging, etc.).

But what about the problem itself?

When applied to a problem that is solvable in multiple, shallow iterations, agile techniques are very appropriate and this fits quite well with the concept of inherent difficulty. These methodologies emphasize the creation of simple, immediate solutions during each iteration. If such a solution from an earlier iteration is found to not be extensible to the next level of required functionality, it gets rewritten (refactored in agile parlance) to include the needed, additional functionality. When applied to larger problems with a high level of inherent difficulty, agile methodologies experience predictable problems.

DSDM, when confronted with such problems, starts looking a lot like a spiral model implementation of a classic predictive methodology. This occurs because the complex interdependencies of the problem require formal coordination between teams. XP and scrum simply do not scale up to large, complex projects but for different reasons. The maximum scrum size of about ten people is probably just a consequence of human nature. The informal huddle simply becomes too large to work effectively above that size. XP has problems when confronted by any large, complex problem due to the practice of pair programming. Creating the solution to such a problem requires collaborative problem solving between multiple XP developer teams. XP's characteristic pair programming results in a proliferation of communications paths since collaboration among teams means that both people in each team need to reach a common understanding. This is bad enough when only two teams need to collaborate (six communications paths) and becomes progressively worse as the number of teams who must collaborate increases (three teams means fifteen communications paths, four teams gives 28 communications paths, etc.). If the problem at hand has few interdependencies, this issue is not significant. As the number and complexity of the interdependencies grow, the number of people involved in finding the solution becomes unwieldy.
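The path counts above are just the pairwise formula n(n-1)/2 applied to the people involved; with XP's pair programming, each collaborating team contributes two people:

```python
# Pairwise communication paths among n individuals: n * (n - 1) / 2.
# Under pair programming, each collaborating XP team adds two people.

def comm_paths(n_people):
    return n_people * (n_people - 1) // 2

def xp_paths(n_teams):
    """Paths when n_teams pair-programming teams must collaborate."""
    return comm_paths(2 * n_teams)

for teams in (2, 3, 4):
    print(teams, xp_paths(teams))   # 2 -> 6, 3 -> 15, 4 -> 28
```

The quadratic growth is the whole story: doubling the number of collaborating teams roughly quadruples the paths that must carry a shared understanding.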

That the complexity of the project is a constraint may surprise some proponents of agile development, but complex code is typically both difficult to develop and difficult to test. This complexity also tends to constrain the usefulness of the user story as a basis of the test plan. In such a system, the complexity of calculations or decision logic isn't visible to the end user. As an example, consider radar-tracking software that applies Kalman filtering algorithms to possible objects reported by the radar and then displays the result, possibly filtered by some user-determined selection criteria. The bulk of this processing is hidden from the user and it is doubtful that an actual user such as an air traffic controller will be at all familiar with the required tracking and filtering algorithms; they just want to know where the airplanes are.
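To make the hidden machinery concrete, even a toy one-dimensional Kalman filter involves predict/update mathematics that no user story would ever mention; a production radar tracker layers multidimensional state, measurement gating and track management on top of something like this:

```python
# Toy one-dimensional Kalman filter for a constant-value signal.
# A real radar tracker uses multidimensional state vectors, gating and
# track management; none of this machinery appears in a user story.

def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle.
    x: state estimate, p: estimate variance, z: new measurement,
    q: process noise variance, r: measurement noise variance."""
    p = p + q                  # predict: uncertainty grows over time
    k = p / (p + r)            # Kalman gain: how much to trust z
    x = x + k * (z - x)        # update: blend estimate with measurement
    p = (1 - k) * p            # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0                          # weak initial guess
for z in [1.2, 0.9, 1.1, 1.0]:           # noisy readings of a value near 1.0
    x, p = kalman_step(x, p, z)
```

After four measurements the estimate has pulled most of the way toward the true value while the variance has fallen, yet all the user-visible output is a single number.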

Defining, developing and then maintaining a complicated shared abstraction also puts a constraint on the applicability of agile methodologies since finding the definition of such an object that meets all needs is usually a non-trivial, highly collaborative design and development effort in and of itself. Maintaining this abstraction through multiple iterations further constrains the applicability of agile methodologies. It may be possible for agile methodologies to work around the periphery of such an object as long as the abstraction is fully understood, well implemented and then maintained.

When agile meets inherent difficulty

Each of our archetypical agile development methodologies responds differently when confronted by inherently difficult projects of increasing size and complexity. Scrum behaves the best within its overall team size constraint since the daily scrum and its problem solving approach adapt well to solving complex or difficult problems. Sadly, scrum seems to be limited to a team size of at most about ten contributors. This size limitation is most likely driven by the nature of collaborative problem solving. That is, above about ten individual contributors, the number of communications channels becomes so large that even the daily scrum is insufficient to coordinate the efforts of all of the contributors. As noted previously, attempting to run multiple scrums in parallel to solve larger problems also doesn't seem to work (again, those pesky communications paths). At this point, the project should move to a predictive methodology with its formal communications artifacts.

DSDM also behaves well when confronted by increasingly large, inherently difficult projects. Here again DSDM's typical maximum project size of up to about six cross functional teams each made up of at most six people constrains the typical project to about twelve to eighteen software developers. Due to the cross-discipline approach inherent to DSDM teams, not all members of a team will be developers so the total number of developers will be somewhat less than the total number of project personnel. Thus, in effect, DSDM is also constrained to what can be developed by no more than twenty developers within the overall project schedule and within any iteration's time-box. While such development can produce a surprising amount of functionality, the total effort applied during a six-month project is still only about ten percent of the lower bound for large software projects.

If the project at hand has a high level of inherent difficulty, the various teams must devote some of their effort to coordinating with other teams since inherently difficult projects are characterized by high functional or data coupling. If the teams unwisely press onward to meet their own narrow, functional requirements, the overall project runs the risk that the various pieces will not play together. Likewise, the need to create a large shared object or work with a large body of shared data requires that the various teams collaborate closely. Under any of these circumstances, the DSDM project starts to resemble a spiral model, predictive project. The other practices of DSDM (e.g., MoSCoW and the 80/20 approach) may allow a fairly quick implementation but only so long as the requirements prioritization is coordinated among the various teams. This needed coordination takes us back to our earlier statement that DSDM, when confronted by larger, inherently difficult projects, begins to look a lot like a spiral model implementation of a predictive methodology. The flexibility of only implementing the requirements that fit within the current iteration's schedule can no longer be allowed if one team's “could have” or “won't have” is needed to match up to another team's “must have”.

XP probably fares least well when confronted with increasingly large, inherently difficult projects. This is directly attributable to the lack of functional analysis in this methodology. At most, a system metaphor or pattern is identified, which determines the system's design. Each iteration of the project then builds more of the system's functionality to complete this design. If there is no system metaphor for a new, inherently difficult project, the methodology simply has no mechanism for the collaborative engineering effort required to architect a new and novel project. The approach of creating a “spike solution” by having a developer team work for a couple of weeks on a prototype is hardly adequate for problems where a full functional analysis may take multiple man-years.

For many problems, XP methods work surprisingly well; for some, the development group becomes bogged down. Continual “refactorings” are required to address the previous iteration's shortcomings, but these same refactorings always seem to introduce a new batch of problems. Typically, these problems occur where multiple pieces of the overall project must interact with each other or with the user. The ultimate cause of the continuing difficulty is that one or more of the pieces of software, or the user, is never quite satisfied. This behavior may also be seen within an individual piece of the project that keeps growing in complexity after each refactoring and shows no signs of “converging” to a finished product. This is just another way of saying that the development team has stumbled into an inherently difficult project: one that requires a collaborative solution to multifaceted, interdependent problems.

Refactoring only appears to work when a solution exists. This usually means that a series of partial solutions exists and that the series “converges” to a complete solution. If this is true, multiple refactorings eventually carry the developer to the complete solution. When such a complete solution does not exist, refactoring is just another word for rewriting or starting over; each refactoring tends merely to replace one set of bugs with another. Another limitation on XP for solving complex, interdependent problems is that each piece of the problem must be solvable within an iteration's time-box or as a “spike solution13.” This puts an upper limit both on the complexity of the code that can be developed and on the complexity of the interactions any one piece of code can have with the other project elements being developed concurrently. This is especially true if there is a need to maintain a complicated shared abstraction across several program elements. If any of these characteristics of inherent difficulty occur, XP development methodologies tend to fail or, at best, flail.
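The distinction between a refactoring and a rewrite can be illustrated concretely: a true refactoring changes structure while preserving observable behavior, which only makes sense when the current behavior is already a partial solution worth preserving. The hypothetical sketch below (the invoice functions and tax rate are assumed for illustration, not drawn from any real system) shows a tangled working version and its refactored equivalent, with an assertion standing in for the tests that anchor each refactoring step.

```python
# Hypothetical illustration of a behavior-preserving refactoring.
# The original, tangled version -- a partial solution that works:
def invoice_total_v1(items):
    total = 0.0
    for item in items:
        price = item["price"] * item["qty"]
        if item.get("taxable"):
            price = price + price * 0.07  # tax rate hard-coded inline
        total += price
    return total

# The refactored version: the structure improves, the behavior does not change.
TAX_RATE = 0.07  # extracted so the shared value lives in one place

def line_total(item, tax_rate=TAX_RATE):
    price = item["price"] * item["qty"]
    return price * (1 + tax_rate) if item.get("taxable") else price

def invoice_total_v2(items):
    return sum(line_total(item) for item in items)

# The refactoring "converges" only because both versions agree on every input:
items = [{"price": 10.0, "qty": 2, "taxable": True},
         {"price": 5.0, "qty": 1}]
assert abs(invoice_total_v1(items) - invoice_total_v2(items)) < 1e-9
```

When no such agreement can be maintained because the required behavior itself keeps changing, the assertion above has nothing stable to check against, and each “refactoring” is really a rewrite.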

A final factor that typically constrains the use of both Scrum and XP methodologies, mentioned previously, is the extent to which there are non-software parts of the overall project. Proponents of these agile development methodologies may claim that such factors fall outside the realm of software development. These factors do, however, constrain when a particular methodology may be used. The methodologies in place for attacking large and complex projects include the non-software impacts of the project as part of the overall project management methodology. Of the agile development methodologies in our brief survey, only DSDM begins to have the tools for dealing with the organizational impact of a fairly large-scale software project.

The nature of the problem determines the appropriate methodology

Hopefully by now it has become clear that “agile software development,” when correctly applied, is not just a new buzzword for a death march. Agile development methodologies eliminate or severely curtail the structure and bureaucracy of predictive methodologies. These bureaucratic and organizational structures are replaced with well-defined and carefully engineered collections of practices that still ensure that the product developed meets the user’s or customer’s needs.

Under agile methodologies, the rigid requirement for approved specifications is replaced by the collection of user stories and the demand that working code be shown instead. Likewise, design reviews and the typical trappings of predictive development methodologies, such as unit development folders with prescribed contents, no longer exist. They are replaced by practices such as pair programming that ensure the code both conforms to some higher-level design and meets quality requirements. At the project management level, complicated schedules purporting to predict the direction of the project from inception to completion are also no longer needed or desired. They are replaced by an iterative approach: build a small amount of the project within a clearly specified time period, then repeat until the problem is solved. While the processes of the various agile methodologies may not all be as refined and mature as those of the classic, predictive methodologies, they are well-defined sets of practices that clearly differentiate agile methodologies from a chaotic death march.

Unfortunately, agile development methodologies appear not to be suitable for all software development efforts. While the “home ground” criteria provide some guidance as to which projects are appropriate for each methodology, the inherent difficulty of the problem to be attacked must also be considered. Applying the inherent difficulty criterion means that only projects with highly separable, conceptually shallow components are amenable to agile development. Projects with strongly coupled or algorithmically complex components will generally need a more formal and predictive development environment. Attempts to force agile methodologies onto inherently difficult projects produce many of the same symptoms and consequences as attempting the same development with a death march. This isn't a fault of agile methodologies but a reflection of the fact that they lack the collaborative problem-solving structures required for attacking large, inherently difficult projects.

1Software Engineering Economics by Barry W. Boehm.

2See B. W. Boehm, “A spiral model of software development and enhancement,” IEEE Computer, 1988. It should be noted that the spiral model is a means to reduce the technical risks in very large development efforts while allowing user feedback into the evolving product.

3Taken from the Agile Software Development article as of 5 December 2005. Citation and copyright.

4The “Agile Manifesto” may be found at while the “Principles Behind the Agile Manifesto” may be found at

5The Mythical Man-Month: Essays on Software Engineering by Frederick P. Brooks, Jr.

6It is not the intent of this book to provide a “how to” for implementing an agile development methodology. I will point out that one key factor in successfully doing so is to take any adopted agile methodology as defining a methodological “lower bound” for the development organization. XP in particular has been characterized as a “methodological house of cards” (see Extreme Programming Refactored: The Case Against XP by Matt Stephens and Doug Rosenberg. Apress, 2003) such that the removal of any XP practice runs the risk that the resulting crippled methodology will collapse in one way or another.

7The Scrum Software Development Process for Small Teams, Linda Rising and Norman S. Janoff. IEEE Software, July/August 2000.

8Interested users wanting to know more about DSDM are invited to visit the official DSDM web site at A more complete, high level description of DSDM along with links to other DSDM resources can be found at

9DSDM has larger cross-functional teams but the maximum number of people actually writing code on a DSDM project is usually around ten or twelve developers.

10Feature Driven Development (FDD) has been used on large projects. See The project schedules were fifteen months with fifty people followed by a second phase with two hundred and fifty people over eighteen months. I do not have sufficient information as to the nature of the project to provide additional details.

11Boehm, B. and Turner, R. Balancing Agility and Discipline: A Guide for the Perplexed. Addison-Wesley 2003.

12Frequently, such systems can only be verified by intensive data analysis and simulations that may be as complex as or even more complex than the system itself.

13In XP parlance, a spike solution is achieved by assigning a team to develop a particularly thorny aspect of the problem outside of the constraints of a normal time-box. This works well for relatively small, self-contained pieces of complex code but not at all for code that must have complex interactions with a substantial part of the remainder of the project.

This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.