The cultural sources of acquisition risk: Part I.
I listened recently to a guest speaker at the Defense Acquisition University--a highly accomplished program manager--address our program management class (PMT 401). He emphasized a point that he'd made on previous visits to the university: "Collecting metrics poses a subtle danger. It leads people to believe that program management is a science. But it's not science, it's art. Metrics are no substitute for walking around and finding out the real problems."
As a case writer for PMT401, DAU's 10-week program managers' course, I have developed 15 cases and read dozens more that are used in the course. A major theme of the course is identifying and managing risks in acquisition programs. Given that theme, I was struck by how many of the cases (both my own and those written by others) deal with the art rather than the science of program management. Even if the immediate issue in the case is technical or financial or contractual, the underlying problem is frequently associated with roles, power structures, agendas, and other aspects of defense acquisition culture. A good deal of the classroom discussion focuses on understanding these underlying cultural issues so that students can respond to them effectively when they come up against them on the job.
As an example, if the immediate situation in the case is that a program funding overrun is looming (a funding issue), then the underlying cultural issue might be any of the following:
* The program was sold to the leadership at its inception with an unrealistically low cost estimate.
* The user kept changing requirements over the objections of the program manager.
* Key contractor personnel left the program, despite concerns voiced by the government program manager.
Each of these underlying cultural issues could provoke a classroom discussion in which students think critically about the culture they operate in. Some guiding questions might be: How did this aspect of our culture come about? Whose interests are served? What would be involved in changing it? If it can't be changed, what's the best way for a PM to deal with it?
Through my case writing and teaching experience, I have compiled a list of seven quirks, oddities, and potential dysfunctions that seem present in the cultures of program offices and the overall defense acquisition system. The original purpose of my list was to remind me of things to listen for as I do case interviews. But it later occurred to me that the list could serve as a research agenda for those interested in conducting formal research on acquisition culture. My first three cultural observations follow.
The Reification of Risk
reify \re-e-fi\: To regard something abstract as a material or concrete thing. (Webster)
No matter how often program risks are documented and briefed, they are ultimately a description of what the PM worries about, which is not necessarily what he or she should be worrying about. This can become apparent in post-mortem analyses of failed programs; the events that doomed the program are often absent or underemphasized on prior risk charts.
One program manager showed me a PowerPoint® slide depicting a risk matrix for his program. The vertical axis portrayed probability and the horizontal axis severity. Cells on the risk matrix were colored green, yellow, or red to convey the intensity of the particular risk. The PM spoke of the vigorous efforts to attack the red cells on the chart and transform them to at least yellow and, it was hoped, to green.
When I probed the staff, I was told that the probabilities and severities were best guesses, often by people who were no longer with the program. And the risks were a reflection of funding and time constraints. If time and money were increased, most risks would turn green; if time and money were reduced, more risks would turn red.
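The kind of matrix the PM described can be sketched in a few lines of Python. The 1-5 rating scales, the multiplicative score, and the color thresholds below are illustrative assumptions for this sketch, not any program's actual scoring scheme:

```python
# Illustrative sketch of a probability x severity risk matrix.
# The 1-5 scales and the color thresholds are assumptions for
# illustration, not any program's actual scoring scheme.

def risk_color(probability, severity):
    """Map a (probability, severity) pair, each rated 1-5, to a cell color."""
    score = probability * severity          # simple multiplicative score
    if score >= 15:
        return "red"                        # high risk: attack aggressively
    elif score >= 6:
        return "yellow"                     # moderate risk: watch closely
    return "green"                          # low risk

# A program's risks as best-guess ratings (hypothetical examples)
risks = {
    "funding instability": (4, 5),
    "requirements change": (4, 4),
    "integration slip":    (2, 3),
}

for name, (p, s) in risks.items():
    print(f"{name:22s} -> {risk_color(p, s)}")
```

The sketch makes the article's point concrete: the output depends entirely on the best-guess ratings fed in, so two equally informed groups could produce differently colored charts.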
If one were to start over, asking a different group of informed people to construct a risk matrix, would it come out the same as the one I saw that day? I'm not sure. If one examines the program risks that are highlighted within reports from the Government Accountability Office (GAO), one can see that the risks perceived by the GAO analysts often differ from those of the program office. Such differences of opinion are documented in the rebuttal section at the end of the GAO report.
So I think an awareness of the culture should cue us to avoid reifying a given risk chart and help us acknowledge that it's probably not the whole story. Perhaps a truer description of program risks would entail:
* Showing more explicitly the relation between risk and schedule. Three risk matrices could be constructed: the first based on current schedule constraints, the second supposing a six-month schedule extension, the third supposing a 12-month extension. Such a presentation would highlight the notion that risks are often just statements about the confidence in an underlying schedule.
* Making sure that core risks (problems that actually occurred on prior programs) are included on the risk matrix of future programs. For example, we know from experience that future funding instability is a core risk on virtually all large programs, but it often doesn't make it onto the risk chart. We also know from experience that on virtually all large programs, the requirements will change, but that risk also is often absent. Some PMs have told me that these risks don't warrant inclusion because they are outside of a PM's control. Yet if this is a rule of the culture--don't discuss risks that you cannot control--then the utility of the risk chart as a tool for anticipating problems is limited.
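The first suggestion above--re-rating the same risks under three schedule assumptions--can be sketched as a simple table. All of the risk names and ratings below are hypothetical; the point is only that the ratings shift with schedule slack:

```python
# Sketch of the schedule-dependent risk presentation suggested above:
# the same risks re-rated under three schedule assumptions. All names
# and numbers are hypothetical illustrations.

# risk name -> probability rating (1-5) under each schedule assumption
ratings = {
    "software integration":    {"current": 5, "+6 months": 3, "+12 months": 2},
    "test range availability": {"current": 4, "+6 months": 4, "+12 months": 3},
}

for risk, by_schedule in ratings.items():
    cells = ", ".join(f"{sched}: {p}" for sched, p in by_schedule.items())
    print(f"{risk}: {cells}")
```

Laid out this way, a risk whose rating collapses under a 12-month extension is visibly a statement about schedule confidence, not about the technology itself.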
The key point for students in PMT401 is to avoid viewing any given risk chart as ground truth. Key risks have likely been overlooked and others have probably been miscalculated. Because of the inherent subjectivity that went into the construction of any given risk chart, it is probably more art than science--and more a work in progress than a concrete depiction of a program.
An avenue for future acquisition research would be to look at the correlation between the risks perceived within a program office and those perceived by independent experts such as the GAO, the inspectors general, the science boards, etc. To what degree are the risk assessments similar? Is there a pattern to the differences? If there is a pattern to the differences, does it point to any better ways of assessing program risk?
The Unreality of Schedule
Several of our cases deal with milestones for initial operating capability (IOC) that are patently unachievable. During classroom discussion, the students are quick to vilify the protagonist in the case (usually the government PM). Typical student comments are, "He should have raised it up his chain of command a long time ago"; "Bad news doesn't get better with age"; and "He should never have signed up to such a schedule in the first place." But if the facilitator of the case is skillful, it doesn't take long for students to look beyond the protagonist's shortcomings.
One PM showed me a succession of Gantt charts for the program she managed (a large automated information system with Acquisition Category (ACAT) IAM). The Gantt chart from the beginning of the program showed a sequence of development phases based on the idea that lessons learned from one phase would inform the next. Awarding the program contract took much longer than expected, but the mandated date for IOC stayed constant. As a result, the newest Gantt chart showed almost total concurrency for all development phases and substantial schedule compression within each phase. A set of key tasks, originally planned to occur sequentially over two years, were now to occur in parallel over six months. I wondered aloud if the new Gantt chart was feasible and was told, "It is, because that's my Service's position, and we haven't given up on it."
The existence of such unreal schedules seems to be a feature of the cultural landscape of defense acquisition. Like the fable of the Emperor's New Clothes, no one wants to be the first to point out the problem. This cultural feature is, I believe, related to what Irving Janis calls "victims of groupthink" in his eponymous 1972 book. People can get so committed to a date that to question it is tantamount to sedition. The problem with unreal schedules is, of course, that the bubble will eventually burst, and blame will be meted out.
But I think a secondary problem is more serious: Attempting to compress development schedules, especially for software, can backfire by generating rework cycles and increasing defect rates. In his book The Mythical Man-Month, Frederick Brooks famously observed that adding people to a late software project makes it later. A corollary to Brooks's Law might be: Compressing an already ambitious software project schedule can make it later.
An interesting avenue for future researchers would be to look at the evolution of program schedules over time. How much compression and overlap occurs as program managers try to keep commitments for IOC? How do they rationalize ambitious schedules? At what point do they acknowledge defeat? And how are they able to evade the earned value management system, which is supposed to provide an early warning of cost and schedule overruns?
Another avenue for future research is the potential role of critical chain project management (CCPM) within DoD acquisition. Eliyahu Goldratt, in his book Critical Chain, suggests that focusing on project buffer consumption rather than task completions can keep schedules more real. A number of defense programs have adopted CCPM, and it would be useful to compare their results against traditional programs and see if claimed benefits are realized.
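Goldratt's buffer-consumption idea is often tracked with a "fever chart" that compares buffer burned against critical-chain progress. The status thresholds below (buffer consumption relative to progress, in rough thirds) follow common CCPM practice but are an illustrative assumption, not a fixed standard:

```python
# Sketch of critical-chain buffer tracking ("fever chart" logic).
# The thresholds are an illustrative assumption based on common CCPM
# practice, not a fixed standard.

def buffer_status(chain_pct_complete, buffer_pct_consumed):
    """Compare project buffer consumed (%) against critical-chain progress (%)."""
    if buffer_pct_consumed <= chain_pct_complete:
        return "green"    # buffer burning no faster than work is finishing
    elif buffer_pct_consumed <= chain_pct_complete + 33:
        return "yellow"   # plan recovery actions
    return "red"          # act now: buffer burning much faster than progress

print(buffer_status(50, 40))   # ahead of the buffer
print(buffer_status(30, 55))   # buffer eroding faster than progress
print(buffer_status(20, 70))   # schedule in trouble
```

The appeal of this view is that it surfaces schedule trouble as a trend in buffer consumption rather than as a binary "milestone missed," which is exactly the early warning the unreal-schedule culture tends to suppress.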
The Pretense of a Stable Requirements Baseline
The Services and the Department have robust and thorough processes and systems for identifying needed capabilities that drive the acquisition process. Yet once a program is launched, the functions and performance required of the system under development inevitably change. It seems an oddity of the culture that a history-based estimate of requirements volatility isn't folded into the initial estimate of time and cost.
Notwithstanding the fact that virtually every prior program has suffered from requirements volatility, the culture of defense acquisition seems to be to pretend that the current program will be the exception. It is planned, funded, scheduled, and managed as though the initial requirements baseline will stand. Even if the program is constructed as an evolutionary acquisition, there is still an implicit assumption that the requirements for each increment are stable.
When the inevitable requirements changes do come, they cause a shock to the government program office and the supporting contractor organization. The contract has to be revised, new funds identified, and the program replanned. A significant amount of the total time and effort within a large program office is spent responding to such changes.
As individual program budgets are aggregated into Service and Department plans, the implicit funding gap for future changes grows accordingly. This system-level gap soon creates pressure to cancel some programs in order to fund the rest--a grossly inefficient way of managing funds because sunk costs on cancelled programs (opportunity costs) are lost in the process. The sunk costs are rarely accumulated and discussed, and the system-level inefficiency of the entire process is largely unperceived. Future research could contribute to understanding this syndrome by tracking cancelled programs and accumulating both sunk costs and termination costs. How do those costs compare to the funds that are freed to pay for surviving programs? Understanding the system-level inefficiencies might help engender a change to a culture that funds programs based on historical levels of change.
Another avenue for future researchers is to compare DoD practice with other venues in which expected requirements volatility is explicitly acknowledged and built into the plan. This is commonplace, for example, in commercial Web site development. It is assumed that the customer will change his or her mind repeatedly during both the development and the Web site's life. And it is assumed that the underlying Web technologies will turn over numerous times during the life of the site. In that venue, it is considered only common sense to create budgets and schedules that embody these assumptions. Why does the same sense seem absent from large defense programs?
A New Viewpoint
The three cultural features discussed above suggest that classroom attention on a cultural viewpoint of acquisition risks, problems, and issues would be time well spent. The greater challenge, of course, is encouraging the acquisition workforce to consider the cultural view as they make plans and execute programs. In the next issue of Defense AT&L, I will present the remaining four elements of my own cultural viewpoint.
The author welcomes comments and questions. Contact him at firstname.lastname@example.org.
Roman is a professor of acquisition management at DAU and has held a succession of acquisition jobs in the information technology career field. He has a doctorate in information and decision systems from The George Washington University.