A New Chore for Sisyphus
Chapter 8 - Make it somebody else's problem
RFC 1925-6: It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it.
(corollary) It is always possible to add another level of indirection.
The previous three chapters have demonstrated that a development organization faced with too much functionality to build and too little schedule to build it in is unlikely to succeed by just adding staff and charging blindly at the problem, by attempting to force-fit the requisite development effort into one of several agile development methodologies, or by purchasing some latest-and-greatest tool-set or off-the-shelf components. An agile methodology may work, but only if the problem being attempted doesn't have a high degree of inherent difficulty. Inherent difficulty constrains how fast such a problem can be solved regardless of the tools or development methodology used, or the lack thereof. This leaves the company and the development organization facing the unhappy prospect that the badly needed new product or revision to an existing product cannot be built “in time.” Since this state of affairs is obviously unpalatable, another solution must be found. The two most common panaceas are to outsource1 the effort or to make the project an open source effort.
In theory, outsourcing allows a development organization to deploy additional resources against a development problem when doing so with the current development staff is not feasible and adding sufficient in-house resources is precluded by existing labor constraints2. In this discussion, I will assume that outsourcing means actually assigning development responsibility for some portion of a product or project to an external organization. This is in contrast to bringing in contract developers who are assimilated into the existing development organization for at least the duration of the project. At a minimum, this means that the outsourcing contract specifies the agreed-to cost basis3 and schedule for the effort as well as the functional and possibly performance requirements for the outsourced portion of the product.
Outsourcing pieces of the problem doesn’t change the inherent difficulty of the problem. Outsourcing only transfers responsibility for development of some pieces of the project to another organization or a vendor. Besides not addressing the issue of the inherent difficulty of what is being attempted, there is also a tendency for outsourcing to just replace bad in-house software development practices with the vendor’s bad practices. On the other hand, an organization that recognizes the inherent difficulty of the proposed effort and attacks that effort with an appropriate methodology is more likely to succeed in the effort. Regardless of the approach taken, the minimum feasible project schedule will be determined by the inherent difficulty of the project.
SCMM to the rescue?
An organization's software development capability as measured by the Software Capability Maturity Model4 (SCMM) provides some insight into how well an organization may be able to perform such a development task. The SCMM only measures the organization's ability to manage and execute a development effort, not the quality of the resulting software. I have seen some fairly horrid software produced by outsource vendors who claimed to be at SCMM level five5. While the product performed as specified and was produced on the contracted schedule (as expected given their SCMM level), the implementation was neither maintainable nor extensible due to the poor underlying design. It is unclear whether this was intentional on the part of the vendor as a means of ensuring continued business, a consequence of the requested schedule, or both.
Communication is again the problem
Unfortunately, outsourcing a portion of a software development effort also means that a constrained communication channel is imposed between the original development group and the developers at the outsourcer. At a minimum, any technical change that may change the scope of work for the outsourced effort is now subject to contractual review by the outsourcer. In addition, unless the outsourcer co-locates with the original development team, there will typically also be geographic impediments to the free flow of information between the teams (e.g., the teams are in different time zones). As with the agile methodology bullpen, nothing beats face-to-face communications. This is especially true when trying to attack an inherently difficult development effort. These communications constraints can be further aggravated if there are also cultural and language differences between the teams (e.g., when the outsourcer is in another country).
This constrained communication channel may not be a significant factor if the overall project is not inherently difficult or a good functional decomposition has allocated a consistent set of development requirements to the outsourcer. That is, the piece of the project to be developed by the outsourcer has few, only minor, or well-specified interactions with the remaining, in-house effort. If this is the case, the application of additional developers stands a good chance of successfully reducing the required schedule. This effect is very similar to the way in which a project that is amenable to agile development methodologies can be split up among multiple developers to achieve the desired schedule. If the project is inherently difficult and the outsourced effort has significant interaction with the remaining effort, the schedule required will, in all probability, actually be made longer. This is in comparison to the optimal case, which has a nearly frictionless communications channel between the outsourcer and the in-house team.
With such a frictionless communication channel, the schedule will still be, at best, what was achievable by an appropriately staffed, strictly in-house effort. Without such a clear communications channel and with even the slightest bit of contractual, cultural or language friction, the effort will probably take much longer. Additional schedule will be required in order to either achieve the required development coordination or to integrate the separately developed pieces. That is, effort and time must be expended “up front” to carefully specify what the outsourcer is responsible for delivering or an even greater effort and more time will be required later in the project to integrate the portion developed by the outsourcer. This is absolutely a case of “either pay now or pay later” but the price in extended schedule duration must be paid.
A contracted death march is still a death march
Organizations that do not understand the limitations that inherent difficulty places on a project's schedule tend to look at outsourcing as a means of throwing large numbers of developers at a development problem. It appears to eliminate the problem of finding enough qualified developers (as regular employees, contractors or otherwise) locally and then managing the additional developers once they are brought on-board. Unfortunately for such organizations, successfully executing such a scheme is contingent on developing high quality, functional requirements if the project has a high level of inherent difficulty. Hurriedly pushing out a poorly done functional specification to the outsourcer will result in a spaghetti code implementation that reflects the immature functional specification. The resulting implementation may meet the contract requirements but will be extremely difficult to integrate with the in-house effort. Worse, it rarely provides an extensible or even maintainable basis for future development. It will be interesting to see if outsourcing firms become adept at meeting the letter of the requirements for such development. Alternatively, such organizations may recognize that unsuccessful efforts tarnish their reputation and demand solid functional specifications before proceeding.
The discussion thus far applies to custom software developed for in-house use as well as software developed for commercial or retail projects and products. The question has only been whether outsourcing some or all of a development effort will significantly “bring in” the schedule required for the development effort. Not surprisingly, an outsourcer is as constrained by the inherent difficulty of a project as an in-house development team. Further, outsourcing only part of an inherently difficult project may significantly increase the schedule required for the effort due to constriction of the communication channel between the in-house developers and the outsourcer. There is far more at stake for commercial or retail product development efforts.
It's worse for software products
For the remainder of this chapter, I am going to restrict my discussion to commercial or “shrink-wrap” software6 products. That is, software developed in clearly distinguishable packaged revisions and then sold to multiple customers. Typically, this software will be installed at multiple locations by multiple customers. Such software is created with the expectation that follow-on revenue will be generated by producing new versions (upgrades) of the product and/or by some sort of “maintenance” or service agreement.
The “essence” of a commercial software product (where essence is used in the same sense as in Chapter 7) is what defines that product. It is the understanding of this essence, the knowledge base for such a product and an understanding of the user community for that product that sets a product apart from its competitors. This essence, knowledge base, understanding of the problem space and, perhaps most importantly, knowledge of how to develop a software product that expresses this essence is the fundamental intellectual property of a company that develops such a product. The critical intellectual property of a software development company is not some collection of obscure software patents or copyrights to each published version of the product. It is the ability of the company's software development organization to develop world class products using their knowledge of the problem's essence.
Patents and copyrights offer no protection
Software patents only protect clever but narrow, partial solutions. Copyright only protects a published work from unauthorized reproduction. In both cases, what is protected is some static artifact of the development team's creative problem solving ability. It is only the creative problem solving ability of the development team that truly matters7 and intellectual property laws cannot protect this. This knowledge base of how to solve problems can't be stolen; it can only be lost.
Ancillary tasks and minor or unimaginative pieces of a product can usually be economically outsourced at little risk. Conversely, any attempt to outsource the development of the core product functionality places the development organization's creative problem solving ability at risk. The risk is not so much that the outsourcer or someone else will somehow steal this knowledge, although some of this knowledge or expertise will “transfer8” to them. What is far more at risk when core development is outsourced is the development team's knowledge of the problem space and the organization's expertise at solving problems within it.
The critical risk is that this in-house knowledge base will be so diminished that it will no longer be able to provide a competitive advantage. The same reasons that make knowledge of the product's essence the most critical “intellectual property” mean that the portions of a product that implement this essence are most likely to be the portions constrained by inherent difficulty. Further, if some ancillary piece of the product is also inherently difficult to develop (e.g., installation, upgrade or migration software), the knowledge base required for developing this functionality presents an additional competitive barrier. This knowledge also forms a part of the development team's collective knowledge base.
There are several possibilities as to why this is so:
- The product may implement some arcane domain knowledge as its core functionality.
- The product presents a knowledge base (e.g., tax software) that is arcane and difficult to master.
- While the core functionality of the product may be well known, the product presents this functionality in a way that is superior to the competition.
One need only look at the characteristics of market leading products in competitive product arenas to see why a particular product rises above its competitors. The success of such products is based on the strength of the development team's understanding of the problem domain. Having the problem domain knowledge required to successfully upgrade an existing customer, or to migrate a customer from a competitor's product, is also a huge advantage over potential competitors.
Learning by doing
An additional problem with outsourcing is that development organizations get better at implementing a particular type of program if they continue to implement, more or less, the same program or type of program repeatedly. Familiarity with the problem space allows an organization to avoid a number of pitfalls that they may otherwise have to learn about “the hard way.” If factors like time to market are important over a lengthy product life cycle, outsourcing development means that some other organization is learning these lessons. Finally, while the outsourcer will probably respect intellectual property rights, they can and will apply the unprotectable domain expertise learned on one contract at their next client where this expertise is useful. This goes beyond just coding techniques as the outsourcer gains additional design level experience in developing software for the particular problem domain.
In a perfect world it would theoretically be possible to retain such a knowledge base in a systems engineering group while outsourcing the implementation. This is the same approach that allows a significant portion of physical manufacturing for tangible goods to be outsourced. Unfortunately for those who simply want a new software product or release as soon as possible, the knowledge of how to implement the program at both the design and code level is as much a part of the core knowledge base as being able to define the product and functional requirements. As was discussed in the previous chapter, tools provide an aid to developers in expressing the solution to a software development problem. There is still quite a bit of problem domain specific art left to the developer to express the problem's solution in code.
Development of the core functionality of the program is the most critical part to keep “in-house” since it is the piece that implements the domain expertise. This will typically also be the piece of the product that has the highest level of inherent difficulty and consequently that is least amenable to schedule acceleration. This piece of the program requires the translation of this organization based, real world, domain expertise into the software product.
This knowledge is a competitive advantage
If a product is to somehow be differentiated from those of the competition, this domain expertise must be built upon within the development organization in order for it to be fully exploited. For any commercial software product to become and remain successful, it must innovatively solve the real world problem at hand. This innovation must be continuous. Aggressive sales tactics, punitive contract lock-ins and flashy marketing can only briefly keep a stale product from total extinction. Competitive pressures eventually erode the market for such a product unless the underlying technology is updated. It is the development organization's understanding of the problem domain that differentiates both the organization and the product from those of competitors.
An attempt to outsource any part of the product development effort must be examined from the perspective of what portion of the problem domain development expertise also goes to the outsourcer along with the outsourced effort. This expertise represents a significant level of “know-how” (data structures that naturally map to the problem space, pitfalls, “right” choices for design decisions, etc.), that is highly specific to the problem space. This knowledge only exists in the shared experience of the development team.
It should also be noted that, if a solution can be bought by one organization, it can also be bought by their competition. At best, the competitive advantage from incorporating such a solution is fleeting since the feature is literally “for sale” to anyone willing to pay the price. Finally, if there is no such unique problem domain knowledge then there is nothing that differentiates the product from the competition. If there is no core problem to understand (or such understanding has become passé9) then the product will become commoditized and it may be worthwhile to look into radical alternatives such as going open source for development and providing services or customizations (see below).
Competitive advantage comes from innovation and creative problem solving. Blindly throwing resources at the problem doesn't work regardless of whether those resources work within an organic development organization or whether they work at an outsourcing vendor. Even if product pricing allows such an approach, any competitor can do the same thing. If one company can draw up a set of requirements and outsource them for development, so can their competition. If the best someone can do is beat up their development team to work long hours developing a poorly thought-out, mediocre product, so can their competition. Only by intelligently developing a truly innovative and unique product can a company really come up with something that will beat the competition.
The core functionality of a product should not be outsourced. The organizational knowledge base associated with successfully developing an inherently difficult product is a sustainable competitive advantage if it is maintained and built upon. This advantage cannot be patented nor copyrighted but also cannot be stolen since it “lives” in the development organization that can solve such problems. For truly innovative software products, there is no “secret sauce” recipe nor some hush-hush trade secret but only a collection of people who know how to work with one another to continuously and creatively solve complex software development problems within a particular problem domain. It is this collective understanding of the problem domain, together with the ability of the team to use it to collaboratively solve new aspects of the problem, that constitutes the real “intellectual property” of the company. Previous product releases become obsolete and today's clever patent becomes tomorrow's technological blind alley. Only the development team can continue to provide new and innovative solutions to new projects drawn from the problem space they know.
What if there isn't any core knowledge?
To the extent that the product itself has no inherently difficult, core functionality, the product will probably become commoditized. This commoditization may even be to the extent that open source “competition” will arise if there is sufficient demand for the functionality. Although there currently seems to be some resistance to open source solutions maintained through FUD10 campaigns and vendor lock-in contracts, this situation cannot be expected to continue indefinitely. Open source solutions will be adopted by a growing number of users in the product categories where such solutions exist. These solutions offer the user a very viable alternative to for-profit products at basically no cost11.
It is only through the identification and exploitation of research that is not amenable to open source methodologies that commercial products can maintain a value proposition that justifies their cost as opposed to free alternatives12. One need only look at the suite of open source applications available to see that standard office applications (e.g., Open Office13, K-Office), general purpose relational databases (MySQL and Postgres), graphics manipulation programs (the GIMP), browsers (Firefox and Mozilla), e-mail clients (Thunderbird, Evolution and Mozilla Mail) and a multitude of other historically for-cost applications14 are now available for free and that these free alternatives are fully competitive with the corresponding “for cost” commercial products. Going beyond user applications, Linux and the various BSD15 flavors provide viable alternative operating systems for both desktop and server use.
That free and open source programs can provide fully competitive alternatives in so many product categories speaks volumes about feature bloat in the corresponding “for profit” alternatives. This feature bloat comes at the expense of applying comparable resources to understanding the problem domain and building problem space expertise. Such expertise becomes a barrier to entry to open source competitors as well as other, for profit, ventures.
It is relatively easy to outsource the development of non-core, new features but it is difficult to truly understand the problem spaces of a highly varied user community and then provide a product that meets these disparate users' needs. While the near term profit from a plethora of new, non-core features may appear to be sufficient reward, the long-term result is erosion of any technological competitive barriers. At some point, competition in some form will arise. If that competitive product is a free, open source alternative, the requirement for significant value added from a for-cost product becomes huge. At the same time, the expertise of the development team has been diluted due to the need to maintain the multitudinous features. As noted above, anti-competitive measures by an incumbent provider can only delay the inevitable in this situation. Eventually, the for-cost product must justify its expense relative to the free, open source software it's competing against.
On the other hand, as competitive pressures grow and force the development team to attempt to do more with fewer resources, serious consideration should be given to open sourcing the project or portions of the project. If the resources to fund the development of the product are lacking but there is a significant need to update the product, the open source route becomes even more attractive. This is especially true if there is a problem domain knowledge base that can still be exploited as part of a service or customization offering.
Open source development is also constrained by inherent difficulty
Unfortunately, open sourcing a project will not typically result in a significant acceleration of the schedule, although the quality achieved within a “reasonable schedule” will be much higher. Open source projects achieve their high quality and rapid development by applying large numbers of people to a project. As with agile development methodologies or outsourcing, to the extent that these people can work autonomously on labor intensive tasks such as coding, the project schedule is accelerated. Applying such a development approach to an inherently difficult problem does not work since the larger number of people will not be able to collaboratively solve such a problem any faster. The difficulty open source projects have meeting scheduled releases of major updates shows that open source as a development methodology is not somehow immune to inherent difficulty, regardless of the number of contributors available to the project. Two recent examples are the much-later-than-hoped-for releases of the 2.6 version of the Linux kernel and the 1.5 and 2.0 versions of the Firefox browser.
That the developer resources available to open source projects are not a solution to inherent difficulty provides additional insight into the nature of inherent difficulty. For a given problem to be collaboratively solved, there is an optimal and a maximum number of people who can effectively contribute to the solution at any point during the project. During functional decomposition and preliminary design, these two numbers do not significantly differ since the additional capabilities brought by any added personnel are offset by the need for additional communication channels between all of those involved in solving the problem. Since the number of communications channels within the group grows on the order of the square of the number of people in the group16, this number can rapidly become unmanageable for large groups. It is only through managing this process that the approach for solving the problem is created.
This need for additional communications channels is the gating constraint that means adding more people rarely accelerates a collaborative problem solving process. Each contributor ends up spending more and more time communicating to effectively collaborate with the other contributors but this takes time away from being able to actually attack and solve their piece of the problem at hand. The “infinite” developer resources available to an open source project aren't helpful when it comes to collaboratively solving inherently difficult projects. Where open source methods excel is in attacking problems that only require at most loose cooperation between otherwise autonomous developers.
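The quadratic growth of communication channels described above, and the resulting existence of an optimal and a maximum team size, can be sketched with a toy model. This sketch is purely illustrative and not from the text: the 40-hour week and the one-hour-per-channel coordination cost are arbitrary assumptions chosen only to make the shape of the curve visible.

```python
# Toy model (illustrative assumptions, not from the text): pairwise
# communication channels grow quadratically with team size, and the
# coordination overhead eventually swamps any capacity added people bring.

def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

def effective_hours(n: int, week: float = 40.0, per_channel: float = 1.0) -> float:
    """Hypothetical capacity model: each person spends `per_channel` hours
    per week coordinating with each of the other n-1 contributors; whatever
    remains is time available for actually solving the problem."""
    overhead = (n - 1) * per_channel          # hours lost per person per week
    return n * max(week - overhead, 0.0)      # team-wide productive hours

for n in (2, 5, 10, 20, 40, 41):
    print(f"{n:2d} people: {channels(n):4d} channels, "
          f"{effective_hours(n):6.1f} productive hours/week")
```

Under these assumptions the team's productive capacity peaks (here, at around twenty people) and then collapses: by forty-one people, every hour of the week is spent coordinating. The exact numbers are meaningless; the shape of the curve, rising and then falling as coordination costs overtake added capacity, is the point.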
What open sourcing can do
Successful large open source projects start by establishing a shared abstraction for the core of the overall effort. Frequently this is accomplished during early development by creating a series of zero-point (e.g., 0.1, 0.2, ... 0.9) versions of the proposed project. These zero-point iterations allow design trade-offs to be explored and real world experience to be gained since the open source development methodology allows such early iterations to be exposed to anyone willing to try them. This is equivalent to crafting a good preliminary design and functional decomposition of the overall project. For many projects there is a single “guru” who functions as the chief architect or engineer for the project and who enforces this abstraction. If the project is sufficiently large, the guru will typically also have a small set of trusted “lieutenants” to both collaboratively create and enforce this abstraction.
When considering whether to “open source” a previously closed source project, keeping this role of guru or chief architect as a paid, in-house position along with a core team of developers makes it possible to both manage and maintain some level of creative control over such a project. Several commercial ventures have gone this route with varying degrees of success. Significant examples include the Netscape browser and Sun's release of Solaris, Java and Star Office along with its fully open source “twin,” Open Office. Larger organizations such as those listed have also crafted their own “open source” licenses in an attempt to maintain some additional level of control over the open source product. This approach has to be done very carefully. A heavy-handed effort by a commercial interest to fully control such a project runs the risk of not attracting sufficient open source developers to carry the project.
The creative control of the in-house contributors is exercised in much the same way that a company maintains long-term overall project direction when a portion of the project has been outsourced. Any portion of the project that is inherently difficult should be kept in-house if possible, under close control if that is not possible or at least under a close watch. As explained above, this portion of the project is not amenable to acceleration by any methodology including going open source until fairly late in the development cycle (detailed design, coding and testing). Coding, of course, is labor intensive and having more coders available accelerates this portion of the project with very little quality cost so long as the in-house group is able to enforce adherence to any shared abstractions and development standards.
Likewise, it is highly likely that bringing in more people with full access to the source code during integration and test will allow the final testing to achieve a higher quality product but achieving this high quality will still take time. The larger number of people available can more rapidly spot coding errors and even some design errors but the real integration effort required to make the various parts work together eventually becomes a test of the overall program design. As noted previously, low-level errors such as coding errors add to the noise that must be eliminated to achieve an integrated product. Eliminating this noise more quickly allows the real integration work of ensuring that the major components work together as desired to be attacked sooner.
And what it can't
Effectively, open sourcing a product while at least attempting to manage the resulting project and maintain creative control is just another form of outsourcing. Unfortunately, the methods of the open source community are no more able to rapidly solve inherently difficult problems than those available to a paid outsourcer or to an in-house staff. The resulting project will typically be available a little earlier due to the copious resources available during the labor-intensive parts of the project but these activities rarely drive the overall schedule. These same resources also mean that the resulting project will be of a higher quality than what would be accomplished with either just in-house resources or if some portion of the project was outsourced. This increase in quality also means that the coding schedule will not be significantly reduced since development shortcuts that result in malpractices such as those described in Chapter 2 are not tolerated by the open source community. Finally, all of this comes at the “cost” of disclosing what, to some, is very valuable “intellectual property.”
Similarly, outsourcing all or part of an inherently difficult project has its costs while not significantly reducing the schedule required for the project. The outsourcer is just as constrained by inherent difficulty as the in-house staff. Dividing an inherently difficult development task between one or more outsourcers, and possibly keeping some portion of the project in-house, is even worse since the inherently difficult problem must now be worked while attempting to communicate using contractually constrained communications channels. This problem is made far worse if there are geographic or cultural distances between the groups working the problem that further constrain their efforts to communicate. In addition to the direct cost of hiring outsourcers, the knowledge transfer that takes place and the diminished, diluted in-house expertise erode the ability of the company to maintain a competitive advantage through its knowledge of the product's problem domain.
Making it somebody else's problem doesn't make it go away
In either case, making the problem “somebody else's problem” does not somehow make the problem's inherent difficulty go away. Open source will yield a high quality product but it will take as long as the project's inherent difficulty dictates. Outsourcing can yield an acceptable product but only given at least as much time as a comparable in-house effort would require. Forcing an outsourcer to deliver on an insufficient schedule has the same dire consequences as forcing the in-house staff to deliver on an impossible schedule and, in addition, yields an unmaintainable product that will probably need to be rewritten before it can be maintained, let alone extended. Only by understanding what can and cannot be accomplished by making part of the product development “somebody else's problem” can management make informed decisions as to what can reasonably be expected from such efforts.
1I will not differentiate between outsourcing and offshoring. It is sufficient that the work be contracted to an outside development firm. Although there are significant geopolitical ramifications to offshoring, these are outside the scope of this book. I will only point out where offshoring considerations mean that achieving the desired development goal is even less likely.
2Also, I will not distinguish between legal considerations, company policy, lack of a sufficient local labor pool, or internal company political machinations for the decision to outsource a portion of the effort.
3The cost basis for the outsourced effort could be fixed price, cost plus incentive fee, fixed fee, etc. Assuming that the primary reason an outsourcer is being considered is that the effort will take too long if attempted strictly with currently available, organic resources, it is sufficient to assume for this discussion that the outsourcer can be incentivized to meet the desired schedule and/or contractually penalized for not meeting it.
5This isn't as contradictory as it seems on the surface. The SCMM measures an organization's ability to manage their software development process. Organizations that must maintain what they produce have an incentive to ensure that their software process results in a reliable, maintainable and extensible product. A commercial outsource vendor's optimal process would be one which meets the contractual requirements at the lowest possible cost with little or no concern for the internal quality of the effort except as it affects meeting the contract.
6It is possible that some in-house developed, proprietary, custom software provides a unique and significant competitive advantage to a business. If you must consider how to develop and maintain such software, by all means, keep reading. If, as is more likely the case, you are using commercially available business process software, you may still find this chapter interesting if you use what you learn here to evaluate your software vendors. There may also be other factors that cause you to develop and maintain such software in-house (e.g., quick turn-around on requests for changes and bug fixes). Unless there are such extenuating circumstances, serious consideration should be given to outsourcing such development work. At a minimum, management pressure to make such development and maintenance competitive with an outsourcer should be met with the explanation that the full benefit of continuing to do this work in-house must consider these other factors.
7As an example of how important this knowledge base is, consider the futile attempts of The SCO Group (TSCOG) to legally “put the genie back in the bottle.” They want to, somehow, make this knowledge base for Unix development proprietary again. Unfortunately for TSCOG, their case against IBM is based on the hopelessly invalid assumption that this knowledge base is somewhere within the code and somehow expressed as “methods and concepts.”
8More precisely, the outsourcer will develop certain skills associated with solving the problem. Some of this skill development will be through knowledge transfer but the outsourcer's development team will recreate the bulk of it based on the personalities and foibles of their development team. As these skills are no longer in demand at the original developer, they will atrophy there.
9An example of this is the emphasis the DOS word processors put on being amenable to fast typing. WordPerfect, in particular, invested in significant research on how to best organize the various function keys to allow for fast typing. For a variety of reasons, the market changed to move away from an emphasis on typing speed to more of an emphasis on ease of use and “what you see is what you get” applications. The market changed and what had been a competitive advantage became largely irrelevant (except to the touch typists who still preferred the keyboard to a GUI).
10Fear, Uncertainty, Doubt
11There are a number of other advantages to using open source solutions, such as avoiding vendor lock-in and freedom from draconian licensing requirements, that are outside the scope of this book.
12As a long-time Linux user, I am continually pleasantly surprised by the rich and robust applications available as open source and by the strength and stability of Linux itself. Having worked at one of the most innovative companies of the time, TRW, I still maintain that there are advanced technology tasks that only a commercial venture can accomplish.
13This book was originally written using Open Office Writer running on Fedora Core Linux. The web version was created using Drupal running on a CentOS server and accessed from my desktop system which runs Ubuntu Linux.
14To say nothing about free and open source programs such as Apache (web serving) and sendmail (e-mail server) that have historically dominated their product categories.
15Berkeley Software Distribution, sometimes called Berkeley Unix is the Unix derivative originally distributed by the University of California, Berkeley, starting in the 1970s. The name is also used collectively for the modern descendants of these distributions. Like Linux, current BSD variants are open source and available at no charge. There are significant licensing differences between BSD software (BSD License) and Linux (GNU General Public License or GPL).
16This number is the number of edges of the complete graph whose vertices correspond to the personnel assigned to the problem. If there are n people assigned to the problem, the number of such communication channels is given by n!/(2!(n − 2)!), which simplifies to ½(n² − n).
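To make the count concrete, here is a short Python sketch (not from the book) that computes the number of pairwise communication channels from the closed form and checks it against the binomial coefficient n!/(2!(n − 2)!):

```python
from math import comb


def communication_channels(n: int) -> int:
    """Pairwise communication channels among n people: the edge
    count of the complete graph K_n, i.e. ½(n² − n)."""
    return (n * n - n) // 2


# The closed form agrees with "n choose 2" for every team size.
for n in range(2, 20):
    assert communication_channels(n) == comb(n, 2)

print(communication_channels(10))  # a 10-person team has 45 channels
```

Note how quickly the count grows: doubling a team from 5 to 10 people raises the channel count from 10 to 45, which is one reason adding staff to an inherently difficult problem helps less than headcount arithmetic suggests.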
This work is copyrighted by David G. Miller, and is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.