
Chapter 9 – The “wrong” answer

By DaveAtFraud - Posted on 13 July 2010

RFC 1925-10: One size never fits all.

There are two dimensions of software development project difficulty. These are project size and the inherent difficulty of the underlying problem. Where a particular problem falls in this two-dimensional space determines which of the various development methodologies is most suitable for attacking the particular problem. Specific methodologies may not work at all or may give a less optimal result in the form of higher cost, longer schedule, less satisfied customers, or any combination of these. The minimum schedule duration required for attacking the problem using the optimal methodology is, by definition, the shortest schedule possible. Organizational factors or customer preferences may further confine which methodologies can be applied. It is also possible that no methodology is appropriate for a project (e.g., a very large project with high complexity and high requirements volatility) as it is currently defined. This means that it's time to redefine the project.
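The two-dimensional selection described above can be sketched as a simple decision procedure. The categories and size cutoffs below are purely illustrative assumptions for the sake of the sketch; the chapter argues only that both dimensions must be considered, not that any particular thresholds apply.

```python
# Illustrative sketch of methodology selection along the chapter's two
# dimensions. The size thresholds and labels are hypothetical, not from
# the text.

def choose_methodology(size: int, difficulty: str) -> str:
    """Pick a development approach from project size (team headcount,
    an assumed proxy) and inherent difficulty ('low' or 'high')."""
    if difficulty == "low":
        # Large "easy" problems can be solved efficiently with agile
        # methods, up to a certain point.
        return "agile" if size <= 50 else "predictive"
    # Inherently difficult problems require concurrent, collaborative
    # engineering.
    if size <= 2:
        return "individual"        # very small, hard problem
    if size <= 10:
        return "close-knit team"   # small, hard problem
    # Very large and inherently difficult: no methodology fits as the
    # project is currently defined.
    return "redefine the project"

print(choose_methodology(5, "low"))   # agile
```

The point of the sketch is that the answer is a function of both axes: neither size nor difficulty alone determines the right approach.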

Historically, all big software development problems were difficult due to “accidental” factors that arose in solving any such problem. As modern software tools and methodologies have decreased the impact of accidental factors, many moderate-sized and smaller efforts have become amenable to solution with agile methodologies. Up to a certain point, large “easy” problems can be solved efficiently with agile methodologies, while small hard problems tend to be best attacked by an individual (very small problems) or a close-knit team. In all cases, the project's inherent difficulty constrains which methodologies can be used to attack the problem and, together with the size of the team, establishes the minimum schedule duration for the project. The project may take much longer under different staffing levels or with different methodologies.

The underlying, limiting constraint for all of the various attempts to accelerate a project's schedule is the extent to which the development team needs to concurrently and collaboratively solve the core software development problems that really define the project. A death march simply ignores this constraint and wildly flails at the problem to produce a result that, at best, only superficially provides the required functionality. Such an approach has as much chance of solving inherently difficult problems as the proverbial million monkeys randomly typing1. Agile methods can only successfully accelerate development efforts that do not require significant collaborative problem solving from the development team. Tools cannot substitute for the thought processes required for collaboratively solving complex, interdependent problems. Tools may aid in the communications required for creating such a solution, but the creation of the solution requires a number of contributors to concurrently solve interdependent aspects of the problem. No tool has yet been found that significantly replaces human thought processes in this situation. Finally, making the problem somebody else's problem only transfers the responsibility for creating a solution that is still constrained by the need for concurrent, collaborative problem solving. Whoever else is doing the work will either need to do the engineering to create the solution or their result will have the same shortcomings as an in-house effort attempted on an overly constrained schedule.

That each attempt to somehow accelerate the schedule required for solving an inherently difficult problem fails is no surprise. It is the failure to recognize this constraint and attempts to force such a project to a shorter than feasible schedule that turns a challenging development task into a punishing ordeal for both the developers and the eventual users. The underlying, inherently difficult engineering problem must be solved or the development team will thrash and flail from one release to the next. The team will end up stuck in an endless cycle of applying layer upon layer of new patches, kludges, lash-ups and quick fixes to address individual symptoms. Such an approach has no chance of addressing the underlying malady of insufficient engineering and the resulting lack of understanding of both the problem and the requirements for its solution.

Avoiding the steeper, higher hill

Given a proper engineering foundation, even an inherently difficult project can continually be extended and enhanced. Without such a foundation, the project team finds themselves facing a heavier rock and a steeper, higher hill at each release. The continual accumulation of patches, kludges, lash-ups and quick fixes means that the chore of getting over the hill with these impediments becomes more difficult with each release. This effect turns software development under such conditions into a punishment suitable for Sisyphus. Sisyphus had to continually roll the same rock up the same hill only to see it roll back down. Without a proper engineering foundation, the development team gets both a heavier rock and a higher hill. The heavier rock is due to the accumulation of past attempts to quickly fix problems for which no quick fix exists while the higher hill is in the form of demands for new features. Finally, the requested unrealistic schedule is rarely if ever met. This results in compression of the schedule for the next release which effectively makes the hill steeper still. Sisyphus would be quite happy to stick with his familiar rock and hill.

Only by performing a thorough engineering analysis and by applying appropriate software engineering practices can an inherently difficult problem be effectively solved. If the problem is sufficiently large or complex, it is beyond the ability of a single individual to solve working alone and multiple people must be assigned. This is not a bad thing since there is a very strong argument that the solution arrived at by such a team is superior to one crafted by a single individual due to the peer review process inherent in this approach. The collaborative nature of the team problem solving requires that the growing partial solution be continually verified and refined. That is, each attempt to extend or refine the developing partial solution toward completion requires that the incremental change be verified as both moving the solution forward toward a complete solution and not breaking the existing partial solution. None of the approaches to accelerate a software development effort that have been discussed provide any assistance in the inventive process of concurrent, collaborative problem solving that characterizes this kind of effort. Obviously, these approaches do not provide a mechanism for significantly accelerating it.

To many, the alternative is worse

Unfortunately, the only known approach for performing concurrent, collaborative problem solving is using a heavyweight development process such as a classic waterfall methodology or the RUP. The usual melodramatic objections to using a heavyweight methodology stem from linking the engineering that is accomplished as part of that process to the high-ceremony, documentation-driven process that is usually instantiated for defense and aerospace related projects. This instantiation almost always involves a government customer either directly or indirectly (e.g., development of a “dual use” airplane that may eventually also be sold to government customers). This “high ceremony” approach to system development was crafted to allow the external customer sufficient insight into the project's progress. This level of control, predictability and visibility is not surprising given that the customer was expected to spend literally millions if not billions of dollars on the project. Aerospace applications in particular, whether military or civilian, also involved systems developed for highly regulated applications. This, again, entailed giving the regulatory agency insight into the development process.

Many of the most visible, external trappings of this process arose in the defense and aerospace sectors. Commercial aerospace companies 1) were familiar with the defense related instantiation of the process and 2) often needed the artifacts from this process for civilian certification or to support sales of dual use products. These were the businesses that could successfully construct very large-scale software systems. While the high ceremony process wasn't always successful, the chances of a successful result using such a process were still better than if no process was followed at all.

In this market segment, the customer or potential customer was almost always external. Especially for defense related projects during the Cold War era, the government was paying the cost of the development and wanted to see tangible progress and a verifiable approach. The result was the “dog and pony show” development review. This situation was only made worse by external independent validation and verification (IV&V) contractors2 who had to justify their cost by finding some sort of fault. This meant that project reviews captured project status and the evolving design as it existed at some time in the past. The goal for the contractor was to present a faultlessly consistent view of the project even if it didn't reflect current thinking. Finally, both the procuring agency and the users frequently had little or no background in software development, which meant that there was more emphasis on appearance and consistency with less on engineering.

The resulting high-ceremony, “dog and pony show” was a product which met the needs of a very specific customer community. It is not central to predictive methodologies but has become inextricably associated with a predictive development approach. This linking is especially prevalent among those who don't think they need any engineering discipline to solve a complex software development problem. What we have seen though is that such engineering discipline is needed for some projects even if the high-ceremony trappings are not. The high-ceremony aspects of a traditional predictive development approach can be replaced by a low-ceremony internal review that emphasizes verifying the engineering products. There is nothing that says that requirements cannot be reviewed and accepted by marketing, product management and the test organization or that designs cannot be peer reviewed in much the same way that code reviews are conducted. It is possible to have the engineering discipline of a predictive methodology without the ceremony. Such internal reviews are frequently conducted by organizations following predictive methodologies prior to presenting much more polished results at a high ceremony project review that includes the customer, users, the IV&V contractor and possibly others.

How does this differ from agile methodologies?

Alternatives such as agile methodologies attempt to both cut out the ceremonial portions (the so-called “dog and pony show” massive reviews) as well as the underlying engineering efforts behind the work products produced for these reviews. This may work for projects with a low level of inherent difficulty but will fail when applied to complex and interdependent problems. A lightweight but still structured and rigorous software development methodology would retain the engineering effort while eliminating the ceremonial aspects.

Development organizations need to consider development methodologies the same way they consider programming languages, integrated development environments or other tools. One size does not fit all and the nature of the problem to be attacked determines the appropriate methodology. In order for this approach to work, the development organization must still expend the engineering effort and produce internally usable engineering artifacts such as functional specifications, internal and external interface specifications, design documentation for inherently difficult portions of the project, and system and functional test plans and procedures. This is hardly a new and novel invention. As noted previously, most organizations that follow predictive methodologies utilize just such a process internally and only shift to the high-ceremony process to produce the required artifacts and requisite “dog and pony show” for various reviews and other project milestones.

In outline form, this project methodology would proceed along the following lines:

  1. The process starts with the creation of a system specification or marketing requirements document. This document should describe the scope and capabilities of the desired system3. Functional details may be included but should be considered notional since this level of detail belongs in the functional specification. An exception is the extent to which the new capabilities need to conform to or interface with existing systems. In this case, specifying functional details is very appropriate.
  2. The next step is to perform an initial functional analysis and decomposition of the system specification or marketing requirements document. Both the objective requirements of the system specification and knowledge of the extent that any portions of the specification are volatile should drive the effort. The functional decomposition should attempt to isolate those portions of the requirements that are volatile since such functionality tends to be more amenable to agile development.
  3. Based on this initial functional decomposition, kick off the collection of user stories while continuing to analytically refine the functional decomposition.
  4. To the extent that some functions appear to have low complexity and little or no functional or data coupling to the remainder of the project, these functions can be “set aside” and the agile methodology allowed to continue. If the entire project contains no significant inherently difficult pieces, the project is probably suitable for agile development and all that remains is to refine the collected user stories into an initial functional decomposition. At this point the user stories can be used as functional requirements for implementing the project.
  5. Pieces of high complexity or with strong functional or data coupling, if any, require additional “refinement” and the functional analysis should be allowed to continue. It is possible that additional analysis will result in a better decomposition that removes some or all of the complexity or coupling. At some point, the engineering team may decide that no such decomposition exists and these portions of the project are inherently difficult. It must be emphasized that this is an engineering decision based on the complexity of the problem at hand. If such a conclusion is not acceptable, the only possible alternative is to ask the engineering team which requirements of the project drive its inherent difficulty. A decision can then be made as to whether these aspects of the project requirements can be “reworked” to remove some or all of the complexity that makes the project inherently difficult.
  6. The artifacts of the functional analysis should be a set of functional requirements along with the corresponding interface specifications. To the extent that any “agile suitable” pieces were identified in step four, the interface specifications should include well-defined interfaces to these pieces.
  7. The functional specifications along with the collection of user stories should be subject to a functional design review that includes the system specification author(s), the quality assurance team and any other affected parties (product support, sales, etc.). The functional specifications should, at a minimum, be reviewed for completeness, logical coherency and testability.
  8. Assuming the collection of user stories and functional specifications is found to be acceptable, development continues with the schedule duration required for developing, testing and integrating the inherently difficult portion of the project being the gating constraint on how soon the project can be available. Having a good functional decomposition on which to base the schedule estimate for the inherently difficult portion of the project allows the development organization to develop a fairly accurate schedule estimate for this effort.
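The triage at the heart of steps 4 and 5 can be sketched as a partition over the functional decomposition. The field names, the complexity labels, and the sample decomposition below are hypothetical illustrations, not part of the chapter's method.

```python
# Hypothetical sketch of steps 4 and 5: set aside low-complexity,
# loosely coupled functions for agile development and flag the rest
# for continued functional analysis.

def triage(functions):
    """Partition a functional decomposition into agile-suitable pieces
    and pieces that need further engineering refinement."""
    agile, refine = [], []
    for f in functions:
        if f["complexity"] == "low" and not f["coupled"]:
            agile.append(f["name"])    # step 4: "set aside" for agile
        else:
            refine.append(f["name"])   # step 5: continue the analysis
    return agile, refine

# Illustrative decomposition (invented component names).
decomp = [
    {"name": "report export",  "complexity": "low",  "coupled": False},
    {"name": "scheduler core", "complexity": "high", "coupled": True},
]
print(triage(decomp))  # (['report export'], ['scheduler core'])
```

Note that, per step 5, the "refine" bucket is not a final verdict: further analysis may yield a decomposition that removes the complexity or coupling, moving a piece into the agile bucket.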

The careful reader will have noted two distinct flaws in this proposed lightweight engineering methodology. First, the project is partitioned into pieces suitable for agile development and into pieces that need more engineering effort but there is no attempt to coordinate these two portions of the project. Second, the pieces that require additional engineering effort will probably take much longer than management desires.

All we have succeeded in doing is coming up with the “wrong answer”. Management, marketing and sales all want the product tomorrow. Not six months after the schedule they only grudgingly allowed for development. Thus, this answer isn't wrong in the sense of being incorrect like saying “one plus one equals three”. It is “wrong” in the sense that it isn't what anyone wants to hear. That it is “wrong” in this sense hardly means that it can be ignored.

Why not start “agile”?

An alternative approach might attempt to swap steps two and three. This would kick off the agile methodology's collection of user stories first and only perform a functional analysis if it was deemed necessary. At the current level of maturity of agile software development methodologies, it is unclear whether there is a formal decision point in the application of the methodology that determines whether the methodology is inappropriate for any particular piece of the project or the project as a whole. Instead, the ability of the developers assigned to each piece is relied upon for that determination. This reliance on the individual developer is one of the driving factors for requiring “senior developers” as part of the agile methodology home ground.

Human nature implies that it is better to make this determination before expectations are set. Given the adoption of an agile development methodology for a particular effort, expectations will be set and plans made that assume the effort will be completed on the “agile” schedule. Finding out after the project has started that a piece of the project or the entire project doesn’t seem to be amenable to solving with agile methods means that this schedule will not be met. This bit of news will not be something that management wants to hear. It thus becomes critical to define objective criteria such as inherent difficulty to determine whether a particular project can be developed using an agile methodology.

Living with “the wrong answer”

It is only by expending engineering resources to perform a valid functional analysis and decomposition that inherently difficult projects can be identified and appropriately attacked. Conversely, at any point in the decomposition the engineering team may come up with a low complexity solution that allows the development effort to rapidly continue using an agile methodology. Further, to the extent that the project is not inherently difficult, this approach plays quite well with agile methodologies. Little effort is expended beyond documenting the rationale for the functional decomposition adopted and ensuring that the project has a low level of inherent difficulty. Any interfaces between functional elements are identified earlier than would otherwise be the case. This means that appropriate actions can be taken to ensure that suitable coordination is achieved. As an example, the iterations of the affected components can be coordinated such that the little “coupled functionality” that exists will be developed at the same time. It should also be noted that performing such an analysis significantly decreases the risks associated with the project. Performing at least a minimal level of functional analysis removes the risk that significant inherent complexity is lurking somewhere within the desired functionality.

It is possible that an agile engineering methodology such as scrum can provide an organizational framework that allows this process to be completed fairly quickly. As with attempts to accelerate coding, engineering methodologies are also limited by the time it takes to solve the underlying problem. If the nature of the underlying problem means that this solution must include solving complex, interdependent pieces of the overall problem concurrently and collaboratively, the solution will take some time to develop. An approach such as scrum provides an excellent organizational framework for solving these problems but it is the thought processes of the individuals involved that actually solve the problem. Thus, as with software development tools, scrum can only help this process by removing distractions, facilitating required communications and allowing the team to concentrate on the essence of the problem. It is not a magic solution that will allow any required engineering to be accomplished within a short, arbitrarily fixed schedule.

Other considerations

There is no simple answer to determine how much is too much to bite off. Much depends on intangibles, and a small, highly experienced team may be able to implement twice the functionality in half the schedule of a less experienced team twice its size. As the size of a project grows, the likelihood that this will happen shrinks to insignificance. Thus, larger projects will generally follow the productivity trends of past projects of a similar size in the organization. Smaller projects also tend to follow past organizational productivity trends for similar sized projects but miracles do happen and a several-fold increase in productivity could happen on the next project. But that's not the way the smart money would bet.

Like the “safe harbor” financial caveat says: past performance is no indicator of future results. A small team that has proved itself capable of producing above average results may be able to do so again on some future assignment. On the other hand, a change in the nature of the project such as a higher level of inherent difficulty may just as easily defeat them. This is just another way of saying that past organizational behavior is your best indicator of what the same organization may be capable of producing in the future.

The larger the project, the more likely it is that the organization will only be capable of repeating its past productivity. This is why the SCMM is so important to companies that tend to focus on large projects. The SCMM methodology is about optimizing the behavior of large organizations on large projects. The goal is to make the outcome of these projects initially more predictable and, hopefully later, more efficient. Unfortunately for those still looking for a magical answer that allows large, complex software projects to be completed quickly, this efficiency increase will be incremental and will primarily occur as a higher quality product which takes only slightly less time to produce.

Welcome to reality

The project approach outlined in this chapter does not change this reality. It does allow an organization to take advantage of the higher productivity and shorter product development cycles of agile development methodologies if the problem being worked is at least partially amenable to such a methodology. This approach also allows the development organization to manage some of the risks associated with agile methodologies by determining if the problem at hand is not amenable to agile development. While this approach was created primarily for small to moderate sized efforts, it scales well to larger efforts. Pieces of a large project that are amenable to agile development can be identified using this approach. These pieces can then be developed using an agile methodology unless there are non-technical factors limiting such an approach.

There is still the question of what to do if part or all of the problem has a high level of inherent difficulty. These parts, or possibly the entire project, will not be ready “soon enough.” This also is a reality and no amount of management fulmination can change it. Since this reality cannot be changed, the best alternative is to find an approach to software product development that doesn't rely on something that will not happen.

1During the death march described in Chapter 1, a disk on the configuration management server suffered a head crash. It can be argued that the scrambled bits that resulted represented our best chance at making the schedule since there was at least some probability that the bits so scrambled might now actually meet our requirements. Although eventually proved true as the project progressed, this observation was not popular with management.

2Frequently, only competitors of the contractor have the expertise to review such a project. Being a competitor of the actual contractor gives the IV&V contractor a further incentive to find fault.

3If the system specification will be used for system acceptance, it is incumbent upon the requesting organization to definitize the requested functionality and performance (possibly in cooperation with the developing organization). It's what they're buying.

This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.