
Best value source selection: the Air Force approach, Part I.

In using the best value approach, the government seeks to award to the offeror whose proposal gives the greatest confidence that it will best and most affordably meet requirements. This may result in an award to a higher-rated, higher-priced offeror where the decision is consistent with the evaluation factors and the source selection authority (SSA) reasonably determines that the technical superiority and/or overall business approach and/or superior past performance of the higher-priced offeror outweighs the cost difference. The SSA, using sound business judgment, bases the source selection decision on an integrated assessment of the evaluation factors and subfactors. Now, it might also be said that the use of the term "best value" is a misnomer and that we are using this term where we actually mean "trade-off."

Regardless of the process used, any award, including awards in a sealed bid selection, should represent the best value. The question is how to determine the best value. In trade-off source selections, we have recognized that paying more for some non-cost aspects is worth it.

The Air Force tends not to use quantitative methods for source selections. Proposals do not receive numeric grades. So how does the Air Force run source selections? The Air Force Supplement to the Federal Acquisition Regulation (AFFARS), Part 5315, provides our guidance for source selections.

Four Selection Factors

We use four different factors: mission capability, proposal risk, cost/price, and past performance. Mission capability may be composed of any combination of subfactors, typically including technical performance and management capability; other subfactors are also acceptable, although having more than six subfactors requires approval by the SSA. Mission capability is rated using a color scale, described later in this article, and every mission capability subfactor is also rated for proposal risk (high, moderate, or low). The evaluation of cost and the rating of past performance are not described in this article, except as they fit into the integrated assessment of the proposals. Cost and past performance typically do not have subfactors assigned to them.

The four factors are ranked in order of importance, and two or more factors may be equal in importance. For instance, we may rank in descending order of importance: mission capability, past performance, cost/price, and proposal risk. We may also state that mission capability and past performance are equal in importance yet more important than the remaining two. In our system, we state the relative importance of factors using terms like "significantly more important," "more important," "equal," or "less important," rather than stating, for example, that mission capability is twice as important as cost/price.

The subfactors are also rank-ordered in the same manner as factors. Again, subfactors may be equal in importance and we do not assign a mathematical differential between them. Finally, according to AFFARS, past performance must be at least as important as the most important non-cost factor.
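For readers who think in data structures, the qualitative ranking described above can be sketched in code. This is purely an illustrative model of the article's scheme, not any official Air Force tool; all class and function names are our own invention. The key point it captures is that importance is an ordered list of tiers, never a set of numeric weights.

```python
from enum import Enum

class Factor(Enum):
    MISSION_CAPABILITY = "mission capability"
    PROPOSAL_RISK = "proposal risk"
    COST_PRICE = "cost/price"
    PAST_PERFORMANCE = "past performance"

# Each inner list is a tier of equal importance; tiers appear in
# descending order of importance. Here, mission capability and past
# performance are equal and more important than the remaining two.
RANKING = [
    [Factor.MISSION_CAPABILITY, Factor.PAST_PERFORMANCE],
    [Factor.COST_PRICE],
    [Factor.PROPOSAL_RISK],
]

def tier_of(factor):
    """Return the importance tier of a factor (0 = most important)."""
    for i, tier in enumerate(RANKING):
        if factor in tier:
            return i
    raise ValueError(f"unranked factor: {factor}")

def past_performance_rule_ok():
    """The AFFARS rule quoted in the text: past performance must be at
    least as important as the most important non-cost factor."""
    others = [f for f in Factor
              if f not in (Factor.COST_PRICE, Factor.PAST_PERFORMANCE)]
    return tier_of(Factor.PAST_PERFORMANCE) <= min(tier_of(f) for f in others)
```

With the ranking above, `past_performance_rule_ok()` returns `True`: past performance sits in the top tier alongside mission capability, the most important non-cost factor.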

Defining the Terms

Each evaluator (or advisor) examines the proposals for his or her assigned area of responsibility. Section M of the request for proposal (RFP) contains a detailed explanation of the manner in which proposals will be evaluated--a description of what constitutes an adequate or acceptable proposal. It may sometimes also include a description of what constitutes a better-than-acceptable proposal.

Using these definitions, the evaluators assign strengths, inadequacies, and deficiencies in the area of mission capability at the subfactor level. The definitions from AFFARS Part 5315 and FAR Part 15 are as follows:

Strength--A significant, outstanding, or exceptional aspect of an offeror's proposal that has merit and exceeds the specified performance or capability requirements in a way beneficial to the Air Force, and either will be included in the contract or is inherent in the offeror's process.

Proposal Inadequacy--An aspect of, or omission from, an offeror's proposal that may contribute to a failure to meet specified minimum performance or capability requirements.

Deficiency--A material failure of a proposal to meet a government requirement or a combination of significant weaknesses in a proposal that increases the risk of unsuccessful contract performance to an unacceptable level.

A few clarifications are still in order. If a proposal meets--only meets--the requirements of an adequate or acceptable proposal, that particular aspect will not have any strengths, inadequacies, or deficiencies. The proposal rating is green.

Strength

There are two things to note in the definition of "strength." The first is the wording "in a way beneficial to the Air Force" (or, for a more generalized situation, the government). This means that simply being better than acceptable is not sufficient, in and of itself, to warrant being assigned as a strength. For example, suppose we have an aircraft that requires the capability to cruise at Mach 2; one offeror proposes an aircraft that cruises at Mach 2.1, but we determine that cruising at Mach 2.1 offers no operational benefit. So even though Mach 2.1 is better than the required capability, the proposed increase in cruise speed does not meet the definition and is not considered a strength. A second offeror proposes an aircraft that cruises at Mach 2.5. Here we determine that cruising at Mach 2.5 offers increased survivability of the aircraft from attack. This is better than the required capability and offers a benefit, so it is rated as a strength.

The second thing to note in the definition of strength is "and either will be included in the contract or is inherent in the offeror's process." The first part of this, "will be included in the contract," is easy to understand. In the example just used, where we have a proposal of an aircraft with a cruising speed of Mach 2.5, this will be incorporated into the contract to become the contractual requirement.

The second part "or is inherent in the offeror's process" is, perhaps, harder to understand. Let us use cost accounting as an example. The requirement is the ability to track expenditures within two weeks of their being accrued. The offeror's accounting system, however, is good enough to enable us to track expenditures within a day of their being accrued. This is better than the requirement, and we determine that this offers us the benefit of being able to track earned value deviations more efficiently and therefore ensure cost and schedule accountability to a greater degree. This could be rated as a strength, but not necessarily written into the contract because it is a normal part of that offeror's operations.

Inadequacy or Deficiency?

The next point of clarification regards the difference between inadequacies and deficiencies. The first difference is one of scope. Say we have a performance requirement that we determine is not critical and that we would be willing to "CAIV" [trade under cost as an independent variable]. As an example, we have a requirement that the maximum system weight shall not exceed 5 pounds. An offeror proposes a system that weighs 6 pounds, but the added weight improves system survivability and offers further benefits by requiring fewer spares and lowering life cycle cost. We determine that the combination of improved survivability, reduced spares, and lower life cycle cost is a good trade-off for the increased weight. The proposed system does not meet the weight requirement and thus, strictly speaking, should not be acceptable, but the trade-off is such that we have an inadequacy rather than a deficiency.

As a counter example, let us say that we require a helmet weighing no more than 3 pounds. The offeror proposes a helmet that weighs 4 pounds. The extra weight will result in a greater occurrence of neck injuries under g-loading conditions. This is a safety issue and described in the RFP as a key performance parameter (not subject to trade-off). We are, therefore, not prepared to accept a 4-pound helmet, and thus the proposal has a deficiency, not an inadequacy.

A second potential difference between inadequacies and deficiencies is one of clarity. In other words, it is the difference between requiring a proposal revision and not requiring one. One offeror proposes a process that we fully understand and determine is not acceptable. This proposal is deficient because the offeror would have to change the process for it to be acceptable. This would require a proposal revision if the government initiates discussions culminating in a request for a final proposal revision (FPR). A second offeror proposes a process that we don't fully understand but which seems not to be acceptable (as we understand it). This second proposal is inadequate rather than deficient because clarification (a better explanation of the process) may lead to our determining that the process is adequate. The offeror, therefore, doesn't need to change the proposed process, but only provide some further explanation. This is not cause for a proposal revision.

A third potential difference is failure to follow the requirements of section L (the instructions to the offerors). If something that was supposed to be included in the proposal is missing, the proposal is deficient. Providing the offeror an opportunity to submit additional items to the proposal after the RFP closing date would require the government to request an FPR.

The fourth potential difference is in the definition of deficiencies, "or a combination of significant weaknesses in a proposal ... to an unacceptable level." We haven't discussed weaknesses yet because they relate to risk, not to the color ratings. But essentially, a combination of risks that makes the overall program proposal risk exceedingly high and therefore extremely difficult to manage could be considered a deficiency.

Color it Best Value

Once we have completed the assignment of strengths, inadequacies, and deficiencies to each proposal, we need to assess the subfactors and assign a color rating to each. The explanations of the four color ratings--blue, green, yellow, and red--come from AFFARS Part 5315:

Blue/Exceptional--Exceeds specified minimum performance or capability requirements in a way beneficial to the Air Force.

Green/Acceptable--Meets specified minimum performance or capability requirements necessary for acceptable contract performance.

Yellow/Marginal--Does not clearly meet some specified minimum performance or capability requirements necessary for acceptable contract performance, but any proposal inadequacies are correctable.

Red/Unacceptable--Fails to meet specified minimum performance or capability requirements. Proposals with an unacceptable rating are not awardable.

Here things become fuzzy. Some believe this fuzziness is beneficial, and others view it as problematic. Let's look at mission capability. We have looked through a proposal and determined which subfactors exhibit strengths, inadequacies, or deficiencies. Based upon these determinations, the appropriate subfactor is given a color rating. There is no numeric formula weighing strengths against inadequacies or even deficiencies to assign a particular color (though it is important that we are consistent in how we do so within a source selection). In other words, just because a particular proposal has more strengths than inadequacies and deficiencies combined does not mean that it receives a blue rating, nor does it necessarily even receive a green rating. Earlier we also discussed a proposal for which a subfactor is simply acceptable, not having any strengths, inadequacies, or deficiencies. By definition, the rating for that subfactor is green.
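The point that color is a judgment, not a tally, can be made concrete with a small sketch. Again, this is a hypothetical illustration of the scheme as described in this article, not an official evaluation tool; every name here is our own. Note that the only rating fixed mechanically is the "no findings means green" case; everything else is deliberately left to the evaluator's documented judgment.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Color(Enum):
    BLUE = "exceptional"
    GREEN = "acceptable"
    YELLOW = "marginal"
    RED = "unacceptable"

@dataclass
class SubfactorEvaluation:
    """Hypothetical record of one mission capability subfactor."""
    name: str
    strengths: List[str] = field(default_factory=list)
    inadequacies: List[str] = field(default_factory=list)
    deficiencies: List[str] = field(default_factory=list)
    justification: str = ""  # the written rationale behind the color

def rating_fixed_by_rule(ev: SubfactorEvaluation) -> Optional[Color]:
    """The one rating the process fixes by definition: a subfactor with
    no strengths, inadequacies, or deficiencies is green. Anything else
    is evaluator judgment -- there is no formula counting findings."""
    if not (ev.strengths or ev.inadequacies or ev.deficiencies):
        return Color.GREEN
    return None  # evaluator must decide and justify the color

# A merely acceptable subfactor defaults to green:
plain = SubfactorEvaluation(name="management capability")

# Findings present: no automatic color, judgment and justification required.
strong = SubfactorEvaluation(
    name="technical performance",
    strengths=["Mach 2.5 cruise improves survivability"],
)
```

Here `rating_fixed_by_rule(plain)` yields green, while `rating_fixed_by_rule(strong)` yields no color at all: a single strength could still leave the subfactor green, or push it to blue, depending on the documented judgment.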

The presence of deficiencies would normally lead us to a yellow or red rating (particularly red) because a deficiency is typically the failure of a proposal to meet a government requirement, making the proposal un-awardable. The question here is whether the shortfalls can be traded for strengths in a CAIV analysis. Such deficiencies would, of course, have to be in minor, relatively unimportant areas and would require modification of the system specification prior to the signing of the contract. Earlier, we used the example of system weight--a requirement that was not a key performance parameter--where exceeding the 5-pound limit enabled performance more important than our threshold requirements. This means that the initial deficiencies could become either acceptable aspects or inadequacies in the final analysis without any change to the proposal. This assumes that some sort of CAIV analysis statement was included in the RFP. It is theoretically possible, therefore, to have a green rating with deficiencies in the initial ratings, but not in a final rating.

Some will argue that since neither the requirements nor the proposals have changed, these items are still deficiencies, but "acceptable deficiencies," a category not recognized by either the FAR or AFFARS. The reasoning is that neither FAR nor AFFARS has been changed sufficiently to recognize the full impact of CAIV in the source selection process.

If, however (noting our definition of a deficiency), the deficiency is one that increases the risk of unsuccessful contract performance to an unacceptable level, it is not likely that any justification will suffice. In the end, however a team chooses to handle this type of situation, the written ratings justification is critically important and must be able to stand up to the "reasonable person test"--in other words, could a reasonable outsider, looking at the justification, agree with the determination? (It would be logical to expect that we next roll these subfactor ratings up into an overall factor rating; however, this goes against the strictures of AFFARS Part 5315.)

Proposal Risk

Proposal risk does not receive a color rating. Instead, it receives one of the following assessments (from AFFARS Part 5315):

High--Likely to cause significant disruption of schedule, increased cost, or degradation of performance; risk may be unacceptable even with special contractor emphasis and close government monitoring.

Moderate--Can potentially cause some disruption of schedule, increased cost, or degradation of performance; special contractor emphasis and close government monitoring will probably be able to overcome difficulties.

Low--Has little potential to cause disruption of schedule, increased cost, or degradation of performance; normal contractor effort and normal government monitoring will probably overcome difficulties.

Proposal risk is based upon weaknesses associated with the offeror's proposed approach and is assessed at the subfactor level. Weaknesses are narratives describing the elements of the proposal that add risk: whereas strengths, inadequacies, and deficiencies are associated with the color ratings, weaknesses are what the Air Force identifies for risk. Typically, these weaknesses describe areas of moderate or high risk requiring additional oversight, cost, and/or schedule increases; these areas have the potential to degrade performance and raise the likelihood of unsuccessful contract performance.

There is generally no correlation between the risk assessment and the color rating. Areas that generate strengths can also generate risks; thus, a particular subfactor could carry a weakness narrative on the very same aspect of the proposal that carries a strength narrative. For instance, a very strong technical approach may be very risky because it most likely can't be accomplished in the required contract timeframe. Conversely, a proposal that is inadequate or deficient may or may not have a weakness.

The subfactors and factors of proposal risk normally mirror those involved in the color rating aspect of the source selection. What this means is that we rate the mission capability subfactors for risk as well as determining strengths, inadequacies, and deficiencies. Occasionally, some subfactors may receive a color rating but not be assessed for risk; the subcontracting plan (often a subfactor under program management) is one area where this is often the case.
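The independence of the two judgments--what is promised versus whether it can be delivered--can be illustrated with one more hypothetical sketch (again, our own names and structure, not an Air Force system). A subfactor record simply carries both assessments side by side, and either can be present without constraining the other.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Risk(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class SubfactorAssessment:
    """Hypothetical record pairing the two independent judgments."""
    name: str
    color: str                    # does what is promised meet our needs?
    weaknesses: List[str] = field(default_factory=list)
    risk: Optional[Risk] = None   # can the offeror actually deliver it?

# A strong (blue) technical approach can still carry high risk:
approach = SubfactorAssessment(
    name="technical performance",
    color="blue",
    weaknesses=["novel design unlikely to fit the contract schedule"],
    risk=Risk.HIGH,
)

# A subcontracting plan may be color-rated but not risk-assessed:
subk_plan = SubfactorAssessment(name="subcontracting plan", color="green")
```

Nothing in the record forces a blue subfactor toward low risk or a green one toward moderate risk; as the text notes, the correlation simply isn't there.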

Part II of this article will touch very briefly on cost and past performance, then go on to address another part of the process--one that some people consider fuzzy: the integrated assessment.

Differences Between Color Rating and Risk

Strengths, inadequacies, and deficiencies deal with this question: Does what the offeror promises (or, more formally, proposes) meet our needs? This is irrespective of whether we believe the offeror can actually accomplish what it proposes. This is the source selection's color rating aspect. (The exception to this is the issue regarding combinations of weaknesses and deficiencies.)

The official definition of weakness (from the FAR) is "a flaw in the proposal that increases the risk of unsuccessful contract performance." A "significant weakness" in the proposal is a flaw that "appreciably increases the risk of unsuccessful contract performance." Weaknesses deal with the question: Given the approach, what is the likelihood that it will drive up costs, degrade performance, extend schedule, or require additional oversight?

A different way to pose that question is this: What is the likelihood that the offeror can actually deliver what they promise? And in the context of determining risk, it doesn't matter whether what is proposed meets our needs or not. This is the source selection's risk aspect.

Editor's note: The author welcomes questions and comments. He can be contacted at alex.slate@brooks.af.mil.

Slate is a facilitator at the Brooks City-Base Acquisition Center of Excellence. He has been a program manager, test manager, and laboratory principal investigator during his civil service career.
COPYRIGHT 2004 Defense Acquisition University Press

Article details: Alexander R. Slate, "Best Value Source Selection: The Air Force Approach, Part I," Defense AT&L, September 1, 2004 (Best Practices).

