
Incentive contracts: the attributes that matter most in driving favorable outcomes.


Incentive contracts have been in place for many years. They represent just one of many contractual tools the Department of Defense has at its disposal to drive certain performance behaviors. Lately, the usefulness of incentive contracts has come into question. The dividends have not been readily apparent. This research study set out to determine what generally afforded strong correlations between incentive-type contracts and expected performance outcomes. Twenty-five weapon system acquisition program offices were interviewed in various stages of their acquisition life cycle. A standardized questionnaire-survey was used to capture the data. This article addresses the findings and includes a few key recommendations intended to highlight learning assets available to the acquisition workforce on the use of incentive contracts.


In the past several years, major weapon system development programs have drawn significant attention. The reasons are varied. In some cases, costs have skyrocketed; schedules have experienced significant delays; and performance levels have failed to meet government expectations despite the employment of management tools designed to control costs, preserve schedule, and influence performance outcomes. Some of these management tools, including contractual measures as originally conceived and specified by the Federal Acquisition Regulation (FAR), can give tremendous flexibility to the implementation of government contracts. Indeed, contractual measures are only one of many handy tools in a program manager's toolkit to help drive performance behavior. However, the Government Accountability Office (GAO) recently identified an apparent disconnect between the use of certain measures like incentives and expected outcomes in weapon system acquisitions. In short, it appeared that incentives were not driving performance outcomes as originally envisioned.

The GAO looked closely at the use of incentives in the Department of Defense (DoD). It conducted structured interviews with contracting and program officials representing 92 contracts from a study population of 597 DoD incentive-type contracts active between 1999 and 2003. In its December 2005 report (GAO-06-66), GAO asserted that "DoD has paid billions in Award and Incentive Fees without favorably influencing performance" (GAO, 2005). In essence, the GAO found few results that could be directly traced to the award of incentives. Not surprisingly, its findings set off a few alarms including the efficacy of incentives in general. Were these incentive strategies ill-conceived? Were they poorly applied? Did they work as advertised? Have they outlived their usefulness? What went wrong? These and many other questions immediately surfaced in the Acquisition, Technology and Logistics (AT&L) community. In response to these concerns, (then) Under Secretary of Defense for Acquisition, Technology and Logistics Ken Krieg asked Defense Acquisition University President Frank J. Anderson, Jr. to conduct a research effort designed to better understand where award/incentive fee contracts had a favorable impact on performance outcomes. Consequently, DAU assembled a small team of subject matter experts from its combined regional workforce to understand the apparent divide. Rather than search for even more verification where incentives failed, however, the research would focus on where incentives succeeded. More specifically, where have incentives specifically worked, why were they effective, and what could be done to restore confidence in incentive contracts? Invariably, confidence in incentive contracts--which has been frequently challenged in the past--would have to be restored in order to garner continued support and calm the critics. Otherwise, the usefulness of incentive strategies would weaken and their continuance become a target of increased scrutiny and uncertainty.

In late April 2006, Anderson met with members of the research team (Table 1) and challenged them to: 1) determine what generally afforded strong correlations between incentives and desired performance outcomes and why; 2) recommend which DAU curricula should be adjusted as a result of the research team's findings in both the near- and far-term; and finally, 3) make both lessons learned and best practices widely available through DAU's Community of Practice (COP) Web site. Simply stated, proven techniques that drove favorable outcomes had to be made accessible to the AT&L-wide community right away; the research had to be purposeful.


Up front, the DAU research team carefully reviewed the GAO's report and looked especially closely at two of its most critical findings. The GAO had claimed that:

* DoD engages in practices that undermine efforts to motivate contractor performance and that do not hold contractors accountable for achieving desired acquisition outcomes; and

* DoD Programs frequently pay most of the available award fee for what they describe as improved contractor performance, regardless of whether acquisition outcomes fell far short of DoD's expectations, were satisfactory, or exceeded expectations (GAO, 2005).

These two declarations created a veritable research passageway into better understanding what techniques indeed drove favorable performance outcomes--the basis of DAU's research. It also addressed the fundamental problem (Table 2). The implementation of Award/Incentive Fee contracts in DoD is not producing the desired/intended outcomes; and in some cases, the acquisition community may not be implementing Award/Incentive contracts correctly. Invariably, programs should embrace an incentive fee strategy that achieves and sustains maximum contractor performance with a measurable value to the government.


To both frame and bound the study efforts, DAU's research team developed a few imperatives. The research would not dispute the validity of incentives nor serve as a reclama to the GAO report; the research would look for both deterministic and probabilistic incentive attributes; and finally, the research would begin with a few key assumptions:

* Improved contractor performance has not always been achieved through the use of award fee/incentive fee contracts.

* Award/incentive fee contracts can be powerful tools to favorably influence contractor performance in conjunction with good acquisition fundamentals.

* Empirical evidence/measurable results could play a pivotal role in award/incentive fee determinations.

* GAO conclusions on the ineffectiveness of award/incentive fee contracting could be a result of certain ineffective practices that could be undermining policy.

These ground rules and assumptions would serve as a guidepost throughout this research project. Together, they would help keep the research focused and fixed on the target without drifting from the "end-state" research objective.


For calibration purposes and in search of additional detail, DAU met with the primary authors of the GAO-06-66 Report, Tom Denomme and Ron Schwen, in mid-June 2006 and tunneled deeper into their study findings. Both individuals were very informative. They identified supplementary observations during face-to-face discussions including two striking assessments: (1) Performance outcomes were sometimes unrealistic, and (2) technical performance measures (e.g., predictors of technical progress along a program's pathway) did not seem to factor much in the overall Award Fee (T. Denomme and R. Schwen, personal communication, June 20, 2006). It also became clear that the GAO viewed incentives as a reward, not just a motivational tool.

After allowing for the GAO's additional comments and conducting a fair amount of deliberation on the DAU's research direction, the team fashioned a basic game plan. It centered on a correlational research methodology. In other words, the research study would target the relationships between certain criterion variables (e.g., the "motivators") and projected outcomes (e.g., the "successes").

Ideally, program offices would have the most effective criterion variables and help confirm what Award/Incentive Fee techniques were making a difference--a material difference. As part of their incentive strategy, program offices normally select certain criteria depending on what outcome(s) they need to achieve. Invariably, the team felt there had to be a few invaluable practices underway. After all, program offices would probably have abandoned or significantly reduced the use of incentives if they were not making a difference and selected an alternative course of action instead.


Contract incentives are varied, but understanding them and appropriately applying them is crucial. In its basic form, an incentive is really an extraordinary tool for certain applications. Incentives come in many varieties. However, all are designed to drive some kind of desired outcome through the use of monetary awards or lack thereof. Incentives can be extremely useful when vigilantly and carefully applied, and in accordance with FAR 16.401, they are designed to drive specific acquisition objectives by:

* Establishing reasonable and attainable targets that are clearly communicated to the contractor; and

* Including appropriate incentive arrangements designed to:

** Motivate contractor efforts that might not otherwise be emphasized; and

** Discourage contractor inefficiency and waste.

By design, incentives are also tightly integrated into overall acquisition strategies for very specific purposes in DoD contracts. They can help reduce risk; they can help combat uncertainty; and they can also help drive favorable behavior throughout a program's life cycle. By their nature, "incentives should result in expected outcomes" as Defense Procurement and Acquisition Policy Director Shay Assad reinforced at the Program Executive Officer/Systems Command (PEO/SYSCOM) Commanders' Conference held at Fort Belvoir, VA, November 7-8, 2006 (S. Assad, personal communication, November 8, 2006). Of course, understanding when and how to apply incentives is just as important and may be the tallest hurdle. More specifically and in accordance with the FAR 16.401 and 16.403:
   Incentive contracts are appropriate when a firm-fixed-price
   contract is not appropriate and the required supplies or services
   can be acquired at lower costs and, in certain instances, with
   improved delivery or technical performance, by relating the amount
   of profit or fee payable under the contract to the contractor's
   performance ... a fixed-price incentive (firm target) contract is
   appropriate when the parties can negotiate at the outset a firm
   target cost, target profit, and profit adjustment formula that will
   provide a fair and reasonable incentive and a ceiling that provides
   for the contractor to assume an appropriate share of the risk.
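The FPIF profit-adjustment formula described in the FAR passage above can be made concrete with a small worked example. The sketch below is a simplified illustration, not the full FAR mechanics (it omits the point of total assumption and any PTA calculation), and the target cost, target profit, share ratio, and ceiling figures are purely hypothetical assumptions chosen for arithmetic clarity:

```python
def fpif_final_price(actual_cost, target_cost, target_profit,
                     contractor_share, ceiling_price):
    """Simplified FPIF settlement: the contractor keeps its share of any
    cost underrun (or absorbs its share of any overrun) relative to the
    negotiated target cost, and the final price is capped at the ceiling.
    Note: this sketch ignores the point of total assumption (PTA)."""
    profit = target_profit + (target_cost - actual_cost) * contractor_share
    price = min(actual_cost + profit, ceiling_price)
    return profit, price

# Hypothetical contract: $100M target cost, $10M target profit,
# 75/25 government/contractor share line, $120M ceiling price.
underrun = fpif_final_price(90.0, 100.0, 10.0, 0.25, 120.0)   # (12.5, 102.5)
overrun = fpif_final_price(110.0, 100.0, 10.0, 0.25, 120.0)   # (7.5, 117.5)
capped = fpif_final_price(140.0, 100.0, 10.0, 0.25, 120.0)    # (0.0, 120.0)
```

As the third case shows, once the ceiling binds, every additional dollar of cost comes out of the contractor's pocket, which is precisely the "appropriate share of the risk" the FAR language contemplates.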

Even though the concept of incentive-type contracts sounds straightforward, its execution is far from simple, especially in an environment like DoD where funding instability, technology barriers, leadership changes, and even cultural barriers frequently reign. Each element alone can potentially handicap a program as program managers would attest. The presence of all four factors can be taxing. Nonetheless, each of the incentive contract varieties (Figure 1) offers hope if properly planned and well-executed: created with a correlation to expected outcomes; integrated within an overall acquisition strategy; and designed to meet specific program goals from the outset, however often those goals might vary.


Incentive contracts that use an award fee component have a very specific application especially if:

* The work to be performed is such that it is neither feasible nor effective to devise predetermined objective incentive targets applicable to cost, technical performance, or schedule;

* The likelihood of meeting acquisition objectives will be enhanced by using a contract that effectively motivates the contractor toward exceptional performance and provides the government with the flexibility to evaluate both actual performance and the conditions under which it was achieved; and

* Any additional administrative effort and cost required to monitor and evaluate performance are justified by the expected benefits.


DAU's research phase officially began with a review of prior related work including the GAO report (e.g., its findings and potential areas of further interest) and other associated initiatives. As expected, the GAO report sounded a warning bell for incentive contracts in general, and many DoD organizations began to take a closer look at their respective portfolios to unearth any "execution" flaws.

After conducting an abbreviated literature review of incentive-type contracts, the research team found a great deal of writing on the subject. Even before the GAO published its report, a few agencies, headquarters staff organizations, and field units had already initiated their own internal reviews and audits of incentives. Some even followed with specific guidance. A few examined root causes where incentive contracts failed and identified remedies to overcome what might be ineffective incentive practices. These investigations were insightful; they also validated some of the same findings that were eventually uncovered in this research. While many concentrated on various aspects of Award/Incentive fee contracts, none focused exclusively on what drives favorable outcomes.

Interestingly enough, incentive-type contracts have been around for some time and used quite often in one form or another. Even Wilbur and Orville Wright's "Wright Flyer" contract awarded in February 1908 with the U.S. Army has been cited by some as a classic incentive contract that was based on two key objective criteria--speed and endurance (Snyder, 2001). Over the years, many other government contracts eventually contained incentive-like features. Nonetheless, the National Aeronautics and Space Administration (NASA) has been largely credited with successfully instituting formal incentive contracts since the early 1960s. In the last several years though, incentive contracts have required some further clarification. Senior Pentagon officials like (then) Assistant Secretary of the Navy for Research, Development and Acquisition John J. Young, Jr., have provided more specific direction and elucidation. On December 23, 2004, he issued specific guidance to make his stand clear. He emphasized, "If use of an award fee is appropriate, a portion of the award fee pool should be available for the contractor to earn based on objective criteria and a portion on the basis of subjective criteria." He also asserted that "contractors should have to earn fees or profits they receive based on their performance" (Young, 2004).

In 2006, the Secretary of the Air Force for Acquisition Directorate (SAF/AQX) sponsored a Contract Incentives Study under the watchful eye of its Acquisition Transformation Action Council (ATAC). After accepting some of the GAO's findings, the council formed an internal analysis group to find ways to execute the award/incentive fee process more effectively and efficiently. The analysis group sampled 43 acquisition category (ACAT) I, II, and III programs in their portfolio through the use of survey questions, divided among four specific groups. They drew a few conclusions on the award/incentive process after soliciting responses from four perspectives: 1) monitors, 2) program managers/principal contracting officers, 3) Fee Determining Officials (FDOs), and 4) award fee board members. The following points represent a high-level view of the aggregate group (Miller, 2006).

* Award Fee accomplishes its goals.

* Award Fee has a significant influence on the contractor's behavior.

* Criteria should move toward an appropriate combination of objective and subjective (e.g., 80 percent objective, 20 percent subjective).

* An overwhelming perception prevails that the Air Force accomplishes its goals with respect to award fee.

After vetting their findings, they found that incentives did not necessarily control costs or improve performance when dealing with highly complex and technical programs with long development cycles. They were generally supportive of moving to more objective-based incentive approaches.

About the same time period, the Air Force Space Command's Space & Missile Systems Center (SMC) located in Los Angeles, CA, took a hard look at the use of incentives. A draft Incentives Guide, dated October 1, 2006, soon emerged. SMC's guidebook illuminated a number of ideal practices including the linkage between incentives and mission success outcomes. Its authors developed seven core principles designed to govern incentive contracts in general (Air Force Materiel Command, 2006).

* Cost-Plus-Award-Fee (CPAF) contracts, with subjective award fee criteria, will no longer be the preferred incentive approach.

* Cost-Plus-Incentive-Fee (CPIF) contracts, with a potential award fee, are highly encouraged.

* Incentives need to consider the phase of the acquisition program (National Security Space directive, NSS-03-01), the maturity of the technology, and the product line (spacecraft, launch vehicle, ground systems, and user equipment).

* Acquisition strategies need to discuss performance, schedule, and cost incentives, and their order of importance to the program.

* Award fee plans should link fees to mission success, achievements, deliverables, and objective results.

* Award fee plans should include both objective/quantitative and subjective award fee criteria.

* The incentive arrangement needs to ensure the contractor has a stake in the outcome (i.e., no fee will be earned for mission failures).

SMC's incentive guide further reinforced what many DoD organizations have increasingly begun to amplify and embed in their incentive contracts--greater emphasis on objective criteria.

Just recently, the 109th U.S. Congress has also taken more specific action. The John Warner National Defense Authorization Act for Fiscal Year 2007, Public Law 109-364, October 17, 2006, sec. 814, now requires the Secretary of Defense to issue guidance to:

1. Ensure that all new contracts using award fees link such fees to acquisition outcomes (which shall be defined in terms of program cost, schedule, and performance);

2. Establish standards for identifying the appropriate level of officials authorized to approve the use of award and incentive fees in new contracts;

3. Provide guidance on the circumstances in which contractor performance may be judged to be "excellent" or "superior" and the percentage of the available award fee which contractors should be paid for such performance;

4. Establish standards for determining the percentage of the available award fee, if any, which contractors should be paid for performance that is judged to be "acceptable," "average," "expected," "good," or "satisfactory";

5. Ensure that no award fee may be paid for contractor performance that is judged to be below satisfactory performance or performance that does not meet the basic requirements of the contract;

6. Provide specific direction on the circumstances, if any, in which it may be appropriate to roll over award fees that are not earned in one award fee period to a subsequent award fee period or periods;

7. Ensure consistent use of guidelines and definitions relating to award and incentive fees across the military departments and Defense Agencies;

8. Ensure that the Department of Defense:

* Collects relevant data on award and incentive fees paid to contractors; and

* Has mechanisms in place to evaluate such data on a regular basis;

9. Include performance measures to evaluate the effectiveness of award and incentive fees as a tool for improving contractor performance and achieving desired program outcomes; and

10. Provide mechanisms for sharing proven incentive strategies for the acquisition of different types of products and services among contracting and program management officials.

Whether fully justified or not, Congress became uncomfortable with the track record of incentive contracts and subsequently emphasized key factors affecting their use. The next phase of this research promised to be even more constructive, since it centered on collecting data that might show what influenced favorable outcomes.


The research team recognized certain research and resource limitations--primarily the number of programs available for interview in a limited period of time. Ultimately, the team decided to interview members from at least 25 representative weapon system acquisition programs (Table 3). Data collected from these first 25 would also serve as the starting point for best practices. The research team selected programs in various phases of the acquisition life cycle to confirm what particular award and/or incentive techniques (if any) indeed created strong correlations to performance outcomes. The interviewees would include agency directors, program executive officers, program/product managers, procurement contracting officers, and systems engineers in government program offices.

The research team considered a number of research methodologies and eventually settled on a questionnaire-survey that targeted the identification of specific techniques that drove (or heavily influenced) favorable performance. The team's data collection approach afforded the simultaneous and normalized collection of key data. DAU's regional collocation with acquisition organizations created a significant geographical advantage. Each of the regional research team members could concentrate on acquisition organizations they already support within their respective locations. They knew their customer base well. The sub-division of regional teams also permitted relatively easy access to the program offices they occasionally assist.

To help chart the course for a sound questionnaire used for this research activity and limit the chance for any ambiguity, a Work Breakdown Structure (WBS) construct seemed ideal. It not only represented a very common and familiar acquisition artifact found in every DoD program, but also appeared to be a very fitting instrument. Like a schematic, it represented a high-level blueprint and easily accommodated the decomposition of survey questions into logical content categories as described by Figure 2.

The research team maneuvered the survey questions into five logical categories during three separate working sessions. The categories were: 1) Stage Setters, 2) Expectations, 3) Metrics, 4) Outcomes, and 5) a General Category. Each category contained very specific questions--all designed to narrow the search for techniques that drove expected outcomes. Even though the team built a single survey, it accommodated both incentive and award fee contracts. Figure 2 represents a rendering of the decomposition for Award Fee.



Strongly Communicated Expectations and Feedback: Frequent and unambiguous communication/feedback made a noticeable difference for incentive contracts. Even though incentive contracts require some additional administrative burden, the outcome justified the increased workload of feedback for most programs under this research review. Continuous and open dialogue at both junior and senior levels led to early discovery and timely reconciliation of many known issues and helped keep a program on track.

* Space Based Infra-Red Surveillance System (SBIRS)-High created a specialized response team that routinely tackled issues as a result of a flight software quandary originally uncovered by monthly reports. Their team "pays a lot more attention, has a lot more discussion, and serves almost like a first level of evaluation" (personal communication, September 18, 2006).

* E2D summarized what many others echoed: "We are very open with the contractor. We have no secrets. If they win, we win. Communication is extremely important. The contractor is never surprised by what they get" (personal communication, October 12, 2006).

* Missile Defense Agency (MDA) Sensors instituted "emphasis letters" during their award periods to stress the importance of certain outcomes or "events" even more (personal communication, September 18, 2006).

* Multi-Mission Maritime Aircraft (MMA) employed what they called a "barometer report" during interim reviews to ensure that information from monitors was readily available to management at critical junctures. "Our contractor takes it very seriously. Each report is very detailed. The contractor understands well in advance what we see. If the contractor was heading in the wrong direction, early intervention was crucial" (personal communication, October 12, 2006). In some cases, sharing certain information prematurely and without a proper context could have unintended consequences.

* In one program, when the contractor received a reduced award, program office personnel were "uninvited" from a few key intermittent reviews. Program office personnel were viewed as critics, not full partners. The program office quickly instituted monthly reviews with the proviso that progress should be measured not only by results but also the agility to take any necessary corrective action(s). Things quickly turned around.

* F-15 and Global Hawk used more informal monthly feedback sessions to surface known issues or raise any potential concerns (personal communication, August 28, 2006; July 27, 2006).

* Space Tracking and Surveillance System (STSS) government and contractor program managers meet every Friday to "just talk and keep the lines of communication wide open--little issues sometimes surface and can be reconciled almost immediately" according to the government program manager (personal communication, September 27, 2006).

* Missile Defense Kinetic Weapons found communication and expectation management had a direct connection to favorable outcomes (personal communication, October 11, 2006).

* B-2 created a glossary tool to improve communication during the evaluation briefings, which proved extremely beneficial when team member changes occurred as they frequently did. This was particularly important in milestone terms, especially in the clarification of "first flight" (personal communication, August 11, 2006). Many organizations found that a strongly prepared and focused evaluation board along with upper management support were very important elements and made a difference.

* In E2D, "during the evaluation period, everyone has a binder, copy of the plan, contractor self-assessment, monitor evaluations, historical information, etc. You need this commitment to make this work" (personal communication, October 12, 2006). Ultimately, a set of expectations known by all and a disciplined award fee board structure along with refined mechanics seemed to help strengthen the viability of incentives.

* In one case, feedback had a multiplying effect. Missile Defense and Countermeasures found their contractor performing process improvement reviews based upon mid-term guidance and Air Force determinations (personal communication, September 22, 2006).

Undeniably, open and frequent communication/feedback is a driving force behind the effective execution of incentive contracts.

Metrics: The selection of key and enduring measures within an evaluation period, along with measures that could be connected to subsequent evaluation periods, made a noticeable difference for incentive contracts. Key measures can validate whether or not a program achieved certain necessary intermediate milestones along a program's critical glide path. They confirm program momentum. They serve as an early warning system--a bellwether--and answer the age-old question, "Are we on track?" They also fill a huge role as performance benchmarks. Those interviewed under this research project said when they effectively employed key measures, it also helped them navigate their program pathway despite the unavoidable programmatic turbulence. Their measures surfaced as two types: objective and/or subjective. Without question, selecting the correct type of measure presented the biggest challenge.

The ability to hard-wire them to achievable outcomes makes objective measures like Technical Performance Measures (TPMs), Cost Performance Indices (CPIs), Schedule Performance Indices (SPIs), etc., invaluable gauges. They serve as tremendous forecasting devices when carefully connected to outcomes.
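For readers less familiar with the earned-value indices named above, CPI and SPI reduce to simple ratios of earned value (BCWP) to actual cost (ACWP) and to planned value (BCWS). The sketch below uses standard earned-value definitions; the sample figures are illustrative assumptions, not data from any program interviewed:

```python
def evm_indices(bcwp, acwp, bcws):
    """Compute the two objective measures cited in the text:
    CPI = earned value / actual cost   (> 1.0 means under cost)
    SPI = earned value / planned value (> 1.0 means ahead of schedule)
    BCWP/ACWP/BCWS are the classic earned-value terms: budgeted cost of
    work performed, actual cost of work performed, budgeted cost of
    work scheduled."""
    cpi = bcwp / acwp
    spi = bcwp / bcws
    return cpi, spi

# Hypothetical status: $90M of work earned, $100M spent, $120M planned.
cpi, spi = evm_indices(90.0, 100.0, 120.0)
# cpi = 0.9 (spending $1.00 to earn $0.90) and spi = 0.75 (behind plan),
# the kind of early-warning signal the text describes.
```

Because both indices are ratios against a baseline, they make natural objective award/incentive criteria: thresholds such as "maintain CPI at or above 0.95" are unambiguous and verifiable at each evaluation period.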

* STSS used objective measures in the form of Key Performance Events (KPEs) such as "ground contractor satellite operations (LSOC) facilities established, spacecraft available for space vehicle integration and test, and thermal vacuum test complete" (personal communication, September 27, 2006). According to the STSS program manager, the contractor also had to "show me that the system worked in the intended environment."

* SBIRS used objective measures in the form of Mission Success Criteria (MSI) like ITS/Increment 1 capability and IMCSB-1 System delivered. They reported a significant change when they amplified the importance and subsequent inclusion of mission success in the form of tangible, measurable outcomes (personal communication, September 18, 2006).

* Missile Defense Kinetic Weapons felt technical performance outcomes were ideally suited for objective measures especially since they rely heavily on test flights where mission success is key (personal communication, October 11, 2006).

* Global Positioning System (GPS) targeted specific milestones/events that either demonstrated space-qualified processes or the completion of space-qualified parts--both critical elements since they directly supported the development of the Gallium Arsenide (GaAs) Solar Arrays used to power the spacecraft. GPS also found that its prime contractor welcomed objective measures in the form of tangible milestones such as specific task completion and scheduled deliveries (personal communication, September 20, 2006).

* MDA Sensors recognized their software development risk early on since many algorithms came from the Theater High Altitude Area Defense (now Terminal High-Altitude Area Defense) system as Government-Furnished Equipment (GFE). Consequently, they used incentives to drive integration efforts of these algorithms along a well-defined pathway (personal communication, September 18, 2006).

* STSS found cost controls to be powerful measures, especially if the contractor could share in the savings (personal communication, September 27, 2006).

* AV-8 (Harrier) made cost a primary objective criterion since the "work was known, [they] just wanted to keep costs down, and there was a firm design specification in place with expectations of little to no modification" (personal communication, September 22, 2006).

* Total Integrated Engine Revitalization tied incentives directly to the achievement of doubling Mean Time for Depot Repair from 700 hrs to 1,400 hrs (personal communication, August 14, 2006).

* FCS found incentives more useful when based on delivery of critical sub-components since they were so vital to the aggregate system (personal communication, August 14, 2006).

* Many others like F-15 and GPS found that incentives became more strongly correlated to outcomes when they jointly developed incentive criteria with their respective contractors and incorporated risk management as a major variable in the overall equation (personal communication, August 28, 2006; September 17, 2006).

* In the STSS, both the government and contractor co-developed the incentive criteria to ensure they were meaningful, achievable, useful, measurable, and enduring (personal communication, September 27, 2006).

* Subjective criteria, the more elastic of the two measure types, depend on certain factors such as judgment, beliefs, and propensity to yield specific outcomes. These measures found their way into many programs including STSS, Rapid Attack Identification Detection and Reporting System (RAIDRS), Global Hawk, and B-2. Each of these programs called for highly effective and comprehensive systems engineering processes, and strengthened their incentives to enforce it. RAIDRS also found subjective measures "afforded some freedom of action and much needed flexibility" (personal communication, September 19, 2006).

* E2D and Advanced Extremely High Frequency (AEHF) found they could "more effectively influence how their prime contractor managed subcontractor behavior through subjective means" (personal communication, September 20, 2006).

* AEHF used subjective measures to "drive management responsiveness and effective communication" (personal communication, September 20, 2006).

* C-17 inserted customer satisfaction into their overall incentive equation through the use of customer surveys in the context of a CPIF contract that was primarily objective in nature (personal communication, August 16, 2006).

* In a few program offices like STSS, program personnel found that the selection of key outcomes can also make evaluation periods more enduring by creating a bridge between one award fee period and the next. STSS employed 34 key performance events (KPEs) that spanned nine periods from FY02-FY06 (personal communication, September 27, 2006). In retrospect, these aggregate KPEs kept everyone involved with the execution of STSS focused on the goal line, which lay well beyond any single award fee period (personal communication, September 27, 2006). Like others, those who structured the award fee plan also understood the delicate balance among cost, schedule, and performance incentives, a balance that proved successful in motivating the contractor to take a long-term view of program and mission success rather than a short-term view of performance during any specific period.

What we found particularly interesting was the increased use of objective measures in award fee type contracts. We noticed a strong tendency by the organizations interviewed to seek more objective and tangible measures in incentive strategies that incorporated award fee. Objective measures used as criterion variables seemed to fill a gap by demonstrating the attainment of certain intermediate milestones and irrefutable performance outcomes. Subjective measures were still important, especially since they verified qualitative characteristics; but the combination of objective and subjective measures created some of the strongest correlations to expected outcomes.
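The blend of objective and subjective measures described above can be sketched as a simple weighted award fee evaluation. The criteria, weights, scores, and fee pool below are invented for illustration only and do not come from any of the interviewed programs:

```python
# Notional sketch of a blended award fee evaluation: each criterion
# (objective or subjective) receives a 0-100 score, scores are combined
# by negotiated weights, and the weighted rating is applied to the
# available fee pool. All names and numbers are hypothetical.

def earned_award_fee(scores: dict, weights: dict, fee_pool: float) -> float:
    """Weighted average of 0-100 criterion scores, applied to the fee pool."""
    total_weight = sum(weights.values())
    rating = sum(scores[c] * w for c, w in weights.items()) / total_weight
    return fee_pool * rating / 100

# Invented example: one objective cost criterion, one objective
# milestone criterion, one subjective management criterion
weights = {"cost_control": 40, "milestones_met": 30, "mgmt_responsiveness": 30}
scores  = {"cost_control": 90, "milestones_met": 100, "mgmt_responsiveness": 80}
print(earned_award_fee(scores, weights, fee_pool=5_000_000))  # 4500000.0
```

A real award fee determination is, of course, a judgmental government decision rather than a formula; the point of the sketch is only how objective and subjective criteria can be weighted side by side in one evaluation.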

Rollover. Rollover, the process of moving unearned award fee into a subsequent award period, has received a generous amount of consideration lately, but STSS used it sparingly. In nine award fee periods, STSS used the rollover provision just once (personal communication, September 27, 2006). Initially, government evaluators felt the contractor took a little too much mission assurance risk with the hardware. STSS weighed the options and concluded "they were willing to forgive the fact that the contractors made them very uneasy during one period as long as the satellites worked as intended 'on orbit' in the end." Consequently, a portion of the unearned fee was rolled over to the Mission Success Fee portion of the award fee plan. STSS also felt that in periods where they did not implement the rollover provision, the contractor should be taking the appropriate corrective action anyway in order to earn the larger fees at the end of the program. There was, therefore, no reason to provide additional incentives to correct behavior that seemed to occur anyway.

The incorporation of base fee in award fee contracts made a noticeable difference. Of the 25 organizational interviews, many used some form of base fee on CPAF contracts. Numerous organizations implementing CPAF valued base fee as a leverage tool. Even though the Defense Federal Acquisition Regulation Supplement (DFARS) 216.405-2(c)(iii) allows a base fee of up to 3 percent of the estimated cost of the contract exclusive of fee, a contractor could provide "best efforts" for the award fee term and still receive no award. As a result, there was some pressure on the government to provide a portion of the award fee for "best efforts."

* F-15 found themselves in such a predicament since they originally planned to pay award fee only for "excellence" (personal communication, August 28, 2006). However, during deliberations the contractor asked for consideration of a base fee if it met discrete contractual terms and conditions. The F-15 program eventually agreed and implemented a 3 percent base fee, giving the Systems Program Office (SPO) ample flexibility to award the remaining balance for "excellence."

* Other CPAF program offices appeared to recognize the value of a base fee. FCS incorporated a base fee, all objective in nature (personal communication, August 14, 2006).

* Missile Defense Kinetic Weapons found base fee flexibility to be "just right for responsiveness, and timeliness and cost considerations" (personal communication, October 11, 2006).

* Biological Detection System included a 3 percent base fee that also became a source for employee bonuses (personal communication, August 14, 2006).

* Global Hawk is revising its contract to include a 3 percent base fee to distinguish excellence from best efforts (personal communication, August 28, 2006).

* Others like STSS are looking at the prospect of incorporating key performance events into base fee (personal communication, August 21, 2006).

Our research team found that senior defense industry personnel welcomed the use of base fee to better delineate the difference between "best efforts" (e.g., fee) and "excellence" (e.g., award).
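As a rough arithmetic illustration of the DFARS ceiling discussed above, the maximum base fee is simply 3 percent of the estimated contract cost, exclusive of fee. The dollar figure below is notional:

```python
# Hypothetical illustration of the DFARS 216.405-2(c)(iii) ceiling:
# base fee on a CPAF contract may not exceed 3 percent of the
# estimated cost of the contract, exclusive of fee. The $200M
# estimated cost below is an invented example.

def max_base_fee(estimated_cost_excl_fee: float, cap: float = 0.03) -> float:
    """Return the maximum allowable base fee under the 3 percent cap."""
    return cap * estimated_cost_excl_fee

print(max_base_fee(200_000_000))  # 6000000.0
```

On a notional $200M effort, in other words, no more than $6M could be guaranteed as base fee, leaving the balance of the negotiated fee pool to be earned as award fee for "excellence."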

Trained and Experienced Personnel made a noticeable difference for incentive contracts. Nothing seems to have a more dramatic impact in DoD than training and experience. Training draws its roots from practical experience. It is systematic: we learn from our successes and failures in the field and adjust the way we train accordingly. The mantra "train like we fight, fight like we train" is pervasive in warfighter training across DoD and ultimately leads to advantages on the battlefield. Without question, practical experience helps build better training programs. It can overcome unforeseen shortfalls and the inevitable prevailing uncertainty, even with proven systems. The same mindset applies to incentive-type contracts. Program managers who formally instructed and/or coached their personnel on the use of incentives reported that they more favorably influenced outcomes.

* F-15 felt "training and experience made a huge difference" (personal communication, August 28, 2006).

* RAIDRS instituted a robust series of Murder Boards to review assessments generated by performance monitors prior to each Air Force Review Board (AFRB). All performance monitors were required to sit through the review of all other assessments. The process ensured that everyone received consistent communication on the expectations for their assessments in terms of quality, format, scope, etc. (personal communication, September 19, 2006). RAIDRS found that those who go through the process once consistently provide excellent assessments in the future and pass on their lessons learned to others, resulting in faster reviews in succeeding periods (personal communication, September 19, 2006).

* MDA summed up what the remaining interviewees reiterated. Aside from the specialty training a few of their personnel have received in multiple courses covering incentive contracts, everyone seems to receive the training they need to make incentive contracts work (personal communication, September 22, 2006).


Even though the research team did not meet individually with industry representatives, contractor perspectives were considered an important element of this research, and the team found an expedient method to collect industry's thoughts on award fee incentives. In mid-summer 2006, before the interview process started with government program offices, DAU hosted an Industry Day at Fort Belvoir, VA. With non-attribution safeguards in place, 18 senior-level defense industry representatives participated and spoke freely about their experience with incentive contracts. Their views were enlightening. In many cases, industry confirmed the data the research team gathered through its interviews. Table 4 captures industry's aggregate views and positions after much interactive and lively discussion.

Interestingly enough, many of the 18 statements can be associated with the four specific categories that influenced outcomes: 1) Strongly Communicated Expectations and Feedback; 2) Relevant, Achievable, and Enduring Measures within an Evaluation Period; 3) Base Fee; and 4) Training and Experience. Industry comments can also be further subdivided into two general categories--Planning and Execution.


So, what about incentives? Are they still a good tool to drive performance behaviors despite the recent criticism and doubt? Have organizations found a way to apply incentives effectively and demonstrate their usefulness? The answer to all of these questions is "yes." There is no "one size fits all," but the incentive attributes that seemed to matter most in influencing performance outcomes for the 25 programs, and that generally afforded strong correlations between incentives and desired performance, are outlined in Table 5.

Ideally, an optimal incentive strategy features these and perhaps other attributes in the context of cost, schedule, and performance factors forged together as a unified accord. In practice, cost, schedule, and performance are strongly interdependent and tend to interfere with one another's outcomes. Influencing all three, without doing so at the expense of any one, becomes a delicate balancing act and the challenge for any incentive strategy. For example, emphasizing technical performance could come at the expense of cost and scheduled deliveries; emphasizing schedule and/or cost could easily come at the expense of technical performance. Nonetheless, all our interviewees developed incentive strategies that carefully considered the weighting of these three attributes depending on program priorities, distinctive program phases, and certain aim points.

One prevailing element distinguishes DoD and other U.S. Government agencies from general industry. Unlike simple commercial development efforts, DoD builds and sustains many "one-of-a-kind" systems that count on "cutting-edge" technologies, operate in unforgiving or threatening conditions, and come under enemy fire. Invariably, motivational contracting tools like incentives can help organizations managing those systems overcome numerous obstacles and reach very definitive outcomes. Incentives provide tremendous flexibility in the implementation of certain government contracts. Incentives are certainly no panacea, but, used wisely and judiciously, they can help programs achieve difficult milestones or recover lost ground by allowing organizations to make the necessary course adjustments as they navigate inevitably turbulent programmatic waters.


What DAU curricula should be adjusted as a result of the research team's findings in both the near and far term, and how can DAU make both lessons learned and best practices widely available? First, the acquisition contracting workforce, particularly contract specialists working with incentive contracts, must possess a solid understanding of incentive contracts. Therefore, it seems reasonable that every functional area should have, or at least consider, an introductory lesson on incentive contracts that incorporates lessons learned and best business practices in the many training opportunities that abound. In the meantime, and before the curricula development teams make their respective determinations, a number of learning assets are already available for immediate review and required updates. Aside from a couple of specialized incentive contract lessons embedded in a few Defense Acquisition Workforce Improvement Act (DAWIA) contracting and budgeting courses, DAU offers two 24/7, online Continuous Learning Modules (CLMs) that can help guide organizations with their incentive selection and subsequent development pathway. The first, Contractual Incentives (CLC 018), "focuses on understanding the balance between government and industry goals and objectives in crafting an effective incentive strategy ... that effectively motivates and incentivizes the contractor to deliver what the government needs, when it needs it, and within budget." The second, Provisional Award Fees (CLC 034), addresses the 2003 rule that permits award fee payments to be made anytime prior to the interim or final evaluation.

Both CLMs are useful, but they do not address the execution "essentials." A more comprehensive "Incentive Contracts" CLM, readily available to the acquisition community, is needed to provide much more assistance on the mechanics and implementation of incentive contracts. Additionally, exploiting the knowledge of seasoned professionals through the increasingly popular collaborative medium called Communities of Practice (CoPs) on the DAU Acquisition Community Connection (ACC) Web site can offer access to a wide array of current experiences and lessons learned regarding incentives, ranging from the general to the specific. DAU has already established a site rich in information on the ACC: Award and Incentive Fee Contracts. Access to these and other collaborative training aids is critically important because once an incentive strategy is in place, its maximum value truly depends on the organization's ability to implement known techniques that can drive favorable outcomes. No better source of experts exists than those who face contract incentive challenges every day: the acquisition workforce professionals who are charged with appropriately implementing the techniques that drive outcomes.


An earlier, shorter article on this research project was published in Defense Acquisition Review Journal, Vol. 48, by two of the seven research team members involved in the same research activity. This article is more extensive and captures the findings of all 25 organizations interviewed in the conduct of this Office of the Secretary of Defense (OSD)-sponsored research. Special thanks are extended to the entire research team, who devoted many hours to developing the research approach, conducting interviews, and analyzing the detailed data: Karen Byrd, Leslie Deneault, Alan Gilbreth, Sylvester Hubbard, Leonardo Manning, and Ralph Mitchell.


Air Force Materiel Command. (2006). Draft Incentive Guide. Space and Missile Systems Center. Wright-Patterson Air Force Base, OH: Author.

Department of Defense (DoD), General Services Administration & National Aeronautics and Space Administration. (2005, March). Federal Acquisition Regulation. Washington, DC: U.S. Government Printing Office.

Miller, J. (2006, August 7). Contract Incentives Transformation Initiative Group (TIG) Brief for the Acquisition Transformation Action Council (ATAC). Pentagon, Washington, DC: Office of the Secretary of the Air Force (Acquisition).

Snyder, T. J. (2001, Summer). Analysis of Air Force contract implementation. Air Force Journal of Logistics, 25(2), 14-21.

United States Congress. (2006, October 17). National Defense Authorization Act for Fiscal Year 2007. H.R. 5122, 109th Congress. Washington, DC: U.S. Government Printing Office.

United States Government Accountability Office. (2005, December). Defense acquisitions: DoD has paid billions in award and incentive fees regardless of acquisition outcome. GAO Report 06-66. Washington, DC: U.S. Government Accountability Office.

Young, J. J. (2004, December 23). Contract profit and incentives arrangements. Policy Memorandum. Assistant Secretary of the Navy for Research, Development and Acquisition. Retrieved October 2006 from

Mr. Robert L. Tremaine is associate dean at the DAU West Region. Prior to joining DAU, he served over 26 years in the U.S. Air Force in acquisition assignments that spanned air, missile, and space. He holds a B.S. from the U.S. Air Force Academy and an M.S. from the Air Force Institute of Technology. He graduated from the Canadian Forces Command and Staff College and the U.S. Army War College. He is Level III certified in Program Management and Systems Planning, Research, Development and Engineering.


Defense Acquisition University (DAU) Research Team Members

Mr. Robert L. Tremaine    DAU West (Project Lead)
Mr. Alan Gilbreth         DAU Mid-West
Mr. Sylvester Hubbard     DAU Mid-West
Mr. Ralph Mitchell        DAU South
Mr. Leonardo Manning      DAU Capital and Northeast
Ms. Karen Byrd            DAU Capital and Northeast
Ms. Leslie Deneault       DAU Capital and Northeast


Problem Statement: The implementation of Award/Incentive Fee contracts
in DoD is not producing the desired/intended outcomes. In some cases,
the acquisition community may not be implementing Award/Incentive Fee
Contracts correctly.

Research Objective: DAU needs to understand where Award/Incentive Fee
contracts made a favorable difference and why.

End State: Programs need to embrace an incentive fee strategy that
achieves and sustains maximum contractor performance with a measurable
value to the government.


Organizations That Supported This Research Activity through Interviews

* Space Based Infra-Red System
* Global Positioning System (GPS)
* Rapid Attack Identification Detection and Reporting System (RAIDRS)
* Advanced Extremely High Frequency (AEHF) System
* Space Tracking and Surveillance System (STSS)
* Air Force Satellite Control Network (AFSCN)
* B-2 Aircraft-Radar Modernization Program-Frequency Change
* C-17 Aircraft-Sustainment
* F-15 Aircraft-Suite 6 Software Upgrade for A-D & E Models
* F-16 Aircraft-Operational Flight Program Development
* Global Hawk Unmanned Aerial Vehicle
* MH-60 Black Hawk Helicopter
* Marine Expeditionary Fighting Vehicle (EFV)
* Future Combat Systems (FCS)
* Total Integrated Engine Revitalization Program
* Air Mobility Command (AMC) Contractor Tactical Terminal Operations
* Global Transportation Network (GTN)
* Biological Detection System
* Missile Defense Kinetic Weapons
* Missile Defense Sensors
* Missile Defense Targets and Countermeasures
* Missile Defense C2BMC
* Multi-Mission Maritime Aircraft (MMA)
* E2D (Major upgrade to E2C)
* AV-8 (Harrier)


1. Government construction of the award fee plan (including metrics,
incentives, etc.) may not link with the offeror's proposed solution.

2. Award fee is sometimes not the proper contract type to achieve
program outcomes.

3. In some cases, the intended goal(s) of award fee contracts are

4. Contracts without base fee can cause problems.

5. In some cases, the government does not follow its own policies on
award fee.

6. On occasion, award fee evaluation criteria are poorly explained or
justified, and award fee goals and criteria are not clearly
communicated.

7. It is difficult to establish the relationship between awards for
month-to-month activities and the goals of a multiple-year program.
The linkage is not always apparent.

8. Administration of award fee criteria can change post-award and
create problems during contract execution.

9. The government may not manage or evaluate the award fee criteria
as agreed and planned.

10. Post-award administration of award fee contracts is time- and
resource-intensive.

11. In some cases, government personnel are not adequately trained in
managing award fee contracts.

12. Desired outcomes are not always driven by the award fee because of
insufficient available funds and the subjectivity of the final
evaluation.

13. Inconsistency in the timing of the award in line with the
evaluation criteria and uncertainty of expected profitability before
award pose additional problems.

14. Contracting parties and stakeholders have different perceptions of
the purpose of award fees.

15. In some cases, there is government failure to understand the
economics of defense contracting and its impact on government

16. From time to time, there is inappropriate use of award fee
contracts.


17. Award fee is not targeted at creating fair shareholder value
(or financial advantage to the private company) in line with actual
performance. Metrics are sometimes not meaningful and are "fuzzy"
in line with "fuzzy" requirements. Sometimes they are too
subjective and do not measure outcomes that are sought by DoD.

18. Award fees that require the contractor to exceed the requirements
of the contract motivate requirements creep or "gold-plating."


Key Attributes (both contract types):

* Relevant, Achievable, and Enduring Measures
* Frequent and Unambiguous Communication/Feedback
* Trained and Experienced Personnel

Cost Plus Incentive Fee (CPIF) -- No Base Fee

Description: Provides for the initially negotiated fee to be adjusted
later by a formula based on the relationship of total allowable costs
to total target costs; specifies a target cost, a target fee, minimum
and maximum fees, and a fee adjustment formula. After contract
performance, the fee payable to the contractor is determined in
accordance with the formula. The formula provides, within limits, for
increases in fee above target fee when total allowable costs are less
than target costs, and decreases in fee below target fee when total
allowable costs exceed target costs. This increase or decrease is
intended to provide an incentive for the contractor to manage the
contract effectively. When total allowable cost is greater than or
less than the range of costs within which the fee-adjustment formula
operates, the contractor is paid total allowable costs, plus the
minimum or maximum fee.

Cost Plus Award Fee (CPAF) -- Base Fee

Description: Provides for a fee consisting of (1) a base amount fixed
at inception of the contract and (2) an award amount that the
contractor may earn in whole or in part during performance and that is
sufficient to provide motivation for excellence in such areas as
quality, timeliness, technical ingenuity, and cost-effective
management. The amount of the award fee to be paid is determined by
the Government's judgmental evaluation of the contractor's performance
in terms of the criteria stated in the contract. This determination
and the methodology for determining the award fee are unilateral
decisions made solely at the discretion of the Government.

Application: Incentive contracts

* Motivate contractor efforts that might not otherwise be emphasized
* Discourage contractor inefficiency and waste
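The CPIF fee-adjustment mechanics described above can be illustrated with a minimal sketch: the negotiated target fee moves along a share line with cost underruns or overruns and is clamped at the negotiated minimum and maximum fees. All dollar figures and the 75/25 share ratio below are invented examples, not values from any interviewed program:

```python
# Rough sketch of the CPIF fee-adjustment formula: cost underruns raise
# the fee, overruns lower it, within the negotiated fee band. The 0.25
# contractor share and all dollar amounts are hypothetical.

def cpif_fee(actual_cost, target_cost, target_fee,
             min_fee, max_fee, contractor_share=0.25):
    """Fee payable under a CPIF formula, clamped to the min/max fee band."""
    adjusted = target_fee + contractor_share * (target_cost - actual_cost)
    return max(min_fee, min(max_fee, adjusted))

# Notional contract: $100M target cost, $7M target fee, $3M-$10M fee band
print(cpif_fee( 90_000_000, 100_000_000, 7_000_000, 3_000_000, 10_000_000))  # 9500000.0 (underrun raises fee)
print(cpif_fee(130_000_000, 100_000_000, 7_000_000, 3_000_000, 10_000_000))  # 3000000 (overrun hits the fee floor)
```

The clamp at the end mirrors the text above: once total allowable cost falls outside the range over which the share line operates, the contractor simply receives the minimum or maximum fee.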
COPYRIGHT 2008 Defense Acquisition University Press. No portion of this article can be reproduced without the express written permission from the copyright holder.

Author: Tremaine, Robert L.
Publication: Defense Acquisition Review Journal
Date: December 1, 2008