
Appendix A - Summary of consequences


By DaveAtFraud - Posted on 13 July 2010

Table 1 summarizes the software development practices, resulting flaws, and near-term and long-term consequences of attempting to force an inherently difficult project to completion on too short a schedule. Each entry below gives the practice, its result, and its near-term and long-term consequences. This material was originally presented in Chapter 2, which provides a cause-and-effect analysis linking each practice to its result and the ensuing consequences.

Practice: Coding before functional requirements are defined.

Result: Early code represents uncoordinated fragments of the ultimate solution.

Near-Term Consequence: Lack of consistency and coherency in the implementation. The code can be extended to additional cases only with extreme difficulty and significant breakage.

Long-Term Consequence: The program is hard to use due to inconsistencies in the implementation. Code fragments developed in a functional vacuum rarely mate correctly with pieces developed after the problem is better understood.



Practice: Not developing formal functional requirements.

Result: Uncoordinated program development that results in a product that "only an engineer could love." There is no functional testing other than developers testing to their own understanding of the program.

Near-Term Consequence: Coordination issues between interdependent functions are not found until an attempt is made to integrate the complete system. This delays final product testing.

Long-Term Consequence: Idiosyncrasies and quirks of separately developed pieces become "locked in." The patch code required to make the pieces work together is fragile and brittle. Such kludged internal program interfaces also frequently result in security flaws.



Practice: No preliminary design.

Result: Program interdependencies are not identified. There is no technical basis for project scheduling "guesses."

Near-Term Consequence: The system accretes functionality where it is convenient for the development organization, not necessarily where it makes sense to those who will use the system.

Long-Term Consequence: Patches, kludges, and fixes are required to splice interdependent functionality together. The user interface does not reflect the end user's needs.



Practice: Lack of common abstract objects.

Result: Multiple program elements must deal with raw data or data that reflects the lowest-level implementation.

Near-Term Consequence: Frequent, widespread breakage during testing as badly behaved "real" data exposes code that does not handle all possible data values.

Long-Term Consequence: The program is difficult to maintain and modify since multiple, disparate sections of code each deal with raw data contingencies as their author saw fit.
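
A minimal, hypothetical Python sketch of this entry (the function and field names are invented for illustration): two functions each interpret the same raw record as their authors saw fit, where a common abstract object would parse and validate it once.

```python
# Hypothetical: two modules each parse the raw record "name,age,balance"
# with their own, slightly different assumptions.

def billing_total(raw_records):
    total = 0.0
    for rec in raw_records:
        fields = rec.split(",")       # assumes exactly three clean fields
        total += float(fields[2])     # breaks on "N/A" or a missing field
    return total

def mailing_labels(raw_records):
    labels = []
    for rec in raw_records:
        name = rec.split(",")[0].strip()   # its own ad hoc parsing
        labels.append(name.upper())
    return labels

# A common abstract object handles raw data contingencies in one place,
# so badly behaved "real" data is dealt with once.
class Customer:
    def __init__(self, raw):
        name, age, balance = (field.strip() for field in raw.split(","))
        self.name = name
        self.age = int(age) if age.isdigit() else None
        self.balance = float(balance) if balance not in ("", "N/A") else 0.0
```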



Practice: Continual or frequent "refactoring."

Result: The team is constantly debugging what is effectively new code, when iterative development methods should instead converge to a final, finished product.

Near-Term Consequence: The program remains unstable and incomplete.

Long-Term Consequence: Hiding an incomplete understanding of what is required behind agile-speak means the required functionality has little or no chance of ever being completed. Versions already released to users must also be changed.



Practice: Cut-and-paste sub-classing.

Result: Multiple, near-duplicate copies of code occur within the program.

Near-Term Consequence: The program is difficult to integrate and test because seemingly identical functionality is not implemented with the same code.

Long-Term Consequence: The program is difficult to maintain and modify since each copy of the replicated functionality must be separately maintained.
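
A hypothetical Python sketch of the pattern (class names invented): a report class is copied and lightly edited instead of being subclassed, so every fix to the shared logic must be made in each copy.

```python
# Cut-and-paste "sub-classing": near-duplicates that differ only in a title.
class DailyReport:
    def render(self, rows):
        title = "Daily Report"
        body = "\n".join(str(r) for r in rows)
        return title + "\n" + "-" * len(title) + "\n" + body

class WeeklyReport:    # pasted copy; any bug in render() now exists twice
    def render(self, rows):
        title = "Weekly Report"
        body = "\n".join(str(r) for r in rows)
        return title + "\n" + "-" * len(title) + "\n" + body

# Actual sub-classing keeps one copy of the shared behavior:
class Report:
    title = "Report"

    def render(self, rows):
        body = "\n".join(str(r) for r in rows)
        return self.title + "\n" + "-" * len(self.title) + "\n" + body

class DailyReport(Report):     # redefines the pasted class above,
    title = "Daily Report"     # shown here only for contrast

class WeeklyReport(Report):
    title = "Weekly Report"
```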



Practice: Lack of user interface consistency.

Result: The program is hard to document, hard to test, and hard to use.

Near-Term Consequence: Documentation must attempt to explain away differences in the program's presentation or actions that make no sense. Testers must take these differences into account when designing test cases.

Long-Term Consequence: Users face a steep learning curve since similar actions or results are not presented in the same way. This increases the load on the support organization.



Practice: Unrealistic unit testing.

Result: The pieces work but the assembled pieces do not.

Near-Term Consequence: A lengthy integration period is required to get the individual pieces of code to "play together."

Long-Term Consequence: Integration testing is not a substitute for thorough, valid unit tests. Tests of boundary values and off-nominal conditions are frequently prevented by other program elements, and not testing these cases has a way of coming back to haunt the developers.
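
A hypothetical Python sketch of the difference (the unit under test is invented): an "unrealistic" unit test exercises only the one nominal value the developer tried, while a valid one also covers boundaries and off-nominal input.

```python
import unittest

def parse_quantity(text):
    # Hypothetical unit under test: converts user input to an item count.
    return int(text)

class UnrealisticUnitTest(unittest.TestCase):
    def test_nominal_only(self):
        # The piece "works" -- for the one value the developer tried.
        self.assertEqual(parse_quantity("3"), 3)

class RealisticUnitTest(unittest.TestCase):
    def test_boundaries_and_off_nominal(self):
        self.assertEqual(parse_quantity("0"), 0)      # boundary value
        with self.assertRaises(ValueError):
            parse_quantity("three")                   # off-nominal input
        with self.assertRaises(ValueError):
            parse_quantity("")                        # empty input

if __name__ == "__main__":
    unittest.main()
```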



Practice: Hiding the problem.

Result: Program errors are masked from the end user rather than fixed.

Near-Term Consequence: The masked errors still tend to cause instabilities. If recovery includes restarting some portion of the processing, performance will be impacted.

Long-Term Consequence: Eventually the ultimate cause of the problem must be identified and a real solution implemented. Users who need a data case that causes the problem will not be satisfied.
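
A hypothetical Python sketch (record format invented): the error is swallowed so the user never sees it, but the data case that triggers it is silently lost; the real fix handles that case explicitly.

```python
# Masking the error: the divide-by-zero for records with weight == 0
# is hidden from the end user instead of being fixed.
def process(records):
    results = []
    for rec in records:
        try:
            results.append(100.0 / rec["weight"])
        except ZeroDivisionError:
            continue    # the bad record silently disappears
    return results

# The real solution identifies the ultimate cause (zero weights in the
# input data) and handles that data case explicitly:
def process_fixed(records):
    results = []
    for rec in records:
        if rec["weight"] == 0:
            raise ValueError("record has zero weight: %r" % (rec,))
        results.append(100.0 / rec["weight"])
    return results
```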



Practice: Extreme focus on meeting the minimal interpretation of the requirements.

Result: Code only works for strictly nominal actions and data. Boundary and off-nominal conditions are ignored or disallowed regardless of how much the end user needs them.

Near-Term Consequence: Development schedules lengthen due to scope creep as specific cases that must be handled are identified.

Long-Term Consequence: Users find that the program rarely meets their needs since boundary values and off-nominal conditions will occur in the real world. Again, customer support's load increases as customers seek to make the program fit their needs.
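
A hypothetical Python sketch (the requirement and function are invented): "accept a date" is read as narrowly as possible, so only the nominal form survives contact with real-world input.

```python
from datetime import date

def parse_date(text):
    # Meets the minimal reading of "accept a date": "YYYY-MM-DD" only.
    year, month, day = text.split("-")
    return date(int(year), int(month), int(day))

# Off-nominal but real-world inputs the minimal interpretation rejects:
#   parse_date("13/07/2010")        -> ValueError
#   parse_date("July 13, 2010")     -> ValueError
#   parse_date("2010-07-13T00:00")  -> ValueError
```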



Practice: Attempts to test quality into the code.

Result: Additional testing staff and the expenditure of many CPU cycles in an attempt to find all of the bugs.

Near-Term Consequence: Extensive testing of superficial functionality builds false confidence that the program is ready for release.

Long-Term Consequence: The design flaws will eventually be found, if not by the test team then by customers. If found by the test team, the development schedule will be lengthened to accommodate fixing these problems.



Practice: Attempts to automate testing of functionality still being developed.

Result: Substitution of automated build tests for thorough, valid white-box tests by developers.

Near-Term Consequence: Test resources are diverted from designing valid system integration tests to attempting to automate testing that only the developers can do.

Long-Term Consequence: To the extent that system testing is not accomplished due to the diversion of test resources, more design-level flaws will remain.



Practice: Proliferation of patch releases.

Result: Development resources needed for the "next release" are diverted to fixing problems in the last release. Customers must deal with a steady stream of new patch releases that may include functional changes.

Near-Term and Long-Term Consequence: Diversion of developer resources from the next release to fixing the last release causes further schedule compression for the next release. There is a tendency to take an even shorter-term view when creating these patches, since it is assumed that the next release will ship as scheduled. Thus, the patches are frequently not intended to hold up over time, which is exactly what they will have to do when the next release is also delayed.



This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.