A new operational assessment paradigm: splitting the stoplights.
Joint Publication 3-30, Command and Control for Joint Air Operations, gives the air component commander responsibility for assessing "the results of joint air operations." (1) Air Force Operational Tactics, Techniques, and Procedures (AFOTTP) 2-3.2, Air and Space Operations Center, assigns this responsibility to the JFACC's operational assessment team (OAT). (2) Doctrinal guidance on how to conduct assessment focuses mainly on tactical-level assessments, including battle damage assessment (BDA) and munitions effectiveness assessment (MEA). Guidance specific to assessment at the operational level describes a general process of "rolling up" the tactical-level assessments using the strategy-to-task linkage developed by the Strategy Division.
Using this delineation from task to objective as the foundation for assessment remains the same regardless of whether the air component is supported or supporting. There are, however, significant differences in how one builds an assessment on that foundation. When the air component assumes a supporting role, uncertainties exist in determining the goal to be assessed, building tactical-assessment input to the OA process, and evaluating and reporting effects across components.
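The strategy-to-task roll-up described above can be sketched in code. The following is a hypothetical illustration only: the objective names, the two-level hierarchy, and the worst-child-dominates aggregation rule are assumptions for the sketch, not doctrine.

```python
# Illustrative sketch of a strategy-to-task "roll-up": tactical-task
# assessments aggregate upward to tactical objectives, and those to the
# operational objective. The min() rule (worst child dominates) is an
# assumed aggregation policy, not a doctrinal one.
STATUS_ORDER = {"red": 0, "yellow": 1, "green": 2}

def roll_up(child_statuses):
    """Roll child assessments up one level; the worst child dominates."""
    return min(child_statuses, key=lambda s: STATUS_ORDER[s])

# Hypothetical strategy-to-task linkage (names invented for illustration)
strategy_to_task = {
    "Air superiority in the JOA": {        # operational objective
        "Enemy IADS neutralized": [        # tactical objective
            "green",                       # task: destroy EW/GCI radars
            "yellow",                      # task: suppress SAM sites
        ],
    },
}

for op_obj, tac_objs in strategy_to_task.items():
    tac_results = [roll_up(tasks) for tasks in tac_objs.values()]
    print(op_obj, "->", roll_up(tac_results))
```

Other aggregation rules (majority vote, weighted scoring) are equally plausible; the point is only that tactical inputs flow upward along the strategy-to-task linkage.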
With airpower as supporter, the operational objective might read, "Support command X in achieving effect A." So the air component has two goals: it must provide support to command X and do so with the purpose of achieving effect A. Which of these goals should the assessment measure?
Both approaches have advantages and disadvantages. Looking at things from an effects-based perspective (achieve effect A) is generally the preferred approach to assessment because it captures progress toward the overall goal and highlights opportunities to improve strategy. When the air component lends support, however, the JFACC is responsible neither for determining the overall desired effect nor for developing the overall strategy to meet it. In this case, effects-based assessment (EBA) may fail to identify shortfalls of the current strategy (since the air component--hence the OAT--may not have insight into the strategy). Furthermore, even if weaknesses in the strategy become apparent, the JFACC has limited ability to implement improvements since the supported component has that responsibility. These limitations reduce the utility of EBA when the air component provides support.
On the other hand, if the OAT focuses on providing support (support command X), it can confine the assessment to tasks and objectives under the JFACC's control, thus improving the ability to use assessment to shape strategy. Such an assessment, though, may go no further than measuring whether or not the air component gave the supported commander what he or she asked for. This relies on the supported commander to determine how best to employ airpower and never addresses the overall desired effect, much less the causal link between airpower actions and achievement of that effect.
To reap the benefits of both approaches and mitigate the drawbacks, we have introduced a "split assessment." For each objective, the OAT presents two assessments: one of progress toward the overall joint effect and one of airpower's contribution toward that effect (see fig.). We use a modification of the stoplight chart. The color of the top half of the block represents a qualitative assessment (green, yellow, or red) of airpower's contribution, and that of the bottom half indicates a like assessment of the overall joint effect.
The OAT assesses airpower's contribution based on the strategy-to-task structure. Largely drawing on performance-based metrics, this portion of the assessment in some cases includes a roll-up to the tactical, objective level. However, since the JFACC does not have ultimate responsibility for reaching the operational objective, the top half of the block usually doesn't reflect the level of attaining the overall objective. Instead it indicates the air component's contribution to the overall joint effect. The top half deals with actions and effects under the JFACC's control, lends itself to shaping air strategy, and supports the JFACC's decision making on the best use of limited resources.
Since the supported commander must produce the overall joint effect, the assessment of progress toward that objective falls under his or her control as well. The OAT does not perform the assessment that determines the color in the bottom half of the block; the supported command performs that function. It exists primarily to benefit the JFACC's situational awareness and to provide context for the top half of the block. Assessment of the joint effect, by itself, should not dictate changes to strategy. Although we discuss the split assessment here in the context of the JFACC's acting as a supported commander, one could apply the same technique more broadly to assess other enabling functions, such as intelligence, surveillance, and reconnaissance (ISR); space operations; and information operations.
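The split stoplight block can be represented as a simple two-field record, one color per half. This is a minimal sketch under assumed names; the class, its fields, and the rendering format are the authors' chart convention restated in code, not an actual tool.

```python
# Minimal sketch of the "split" stoplight block: the top half is the
# OAT's assessment of airpower's contribution; the bottom half is the
# supported command's assessment of the overall joint effect.
# Class name, fields, and render format are illustrative assumptions.
from dataclasses import dataclass

COLORS = {"green", "yellow", "red"}

@dataclass
class SplitAssessment:
    objective: str
    contribution: str   # top half: assessed by the OAT
    joint_effect: str   # bottom half: reported by the supported command

    def __post_init__(self):
        assert self.contribution in COLORS and self.joint_effect in COLORS

    def render(self):
        return f"{self.objective}: [{self.contribution} / {self.joint_effect}]"

block = SplitAssessment("Support command X in achieving effect A",
                        contribution="green", joint_effect="yellow")
print(block.render())
```

Keeping the two halves as separate fields preserves the key point of the technique: the OAT owns one value, the supported command owns the other, and neither is derived from its neighbor.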
Split assessment measures both the support provided to the supported commander and the attainment of specific effects, but some difficulties remain when the air component assumes a supporting role. Specifically, this situation usually produces a lower operations tempo than one would find in a major air war, which results in a smaller air operations center with relatively fewer personnel. This, in turn, leads to a smaller OAT, reduced in-house tactical-assessment capabilities, and fewer attached personnel. Doctrinal OA guidance assumes that a robust in-house tactical-assessment capability exists. The various products of such an assessment, including mission assessment, BDA, and MEA, not only serve as stand-alone analyses to inform the commander, but also form the tactical-level foundation on which the OAT relies to determine performance at the operational level. Reduction of this function places a heavier burden on the OAT. First, the team must do more data mining to gather the needed tactical-level inputs. Second, in this situation the OAT often becomes the only source for in-house scientific analysis, so the commander utilizes it to answer a wide range of tactical-assessment questions normally handled by other offices.
One could consider a variety of solutions at an institutional level to address the considerable need for tactical assessment. In the short term, one could enable tactical assessment by leveraging current manning in a theater's air and space operations centers differently. In the long term, the new A-staff structure, including the A-9 (Studies and Analyses, Assessments, and Lessons Learned), might help. Perhaps the forward-deployed OAT could make more extensive use of reachback for tactical-level inputs. All of these bear further scrutiny beyond the scope of this article.
To help alleviate the tactical-assessment burden, the OAT at Air Expeditionary Force 7/8, US Central Command Air Forces, has implemented assessment information requirements (AIR), a list of specific information items, based on the strategy-to-task construct, that the team needs to feed its assessment. This is not a new idea--the OAT at Seventh Air Force uses it, and other OA organizations may do so as well. An analyst determines the information necessary to accurately measure each success indicator, measure of effectiveness, or measure of performance (MOP) and identifies the sources of the information. Together, the specific information and its source constitute an AIR, each of which is then incorporated into the air operations directive, along with information on reporting procedures. Organizations responsible for reporting on the AIRs should have a hand in developing them if at all possible. In many cases, one can leverage an existing report or product to meet the need. For example, consider the MOP contained in a partial strategy-to-task breakdown (table 1).
This MOP gives rise to two AIRs: (1) the OAT needs to know how many EW/GCI radars are in critical areas, and (2) it needs to know how many have been destroyed. The Intelligence, Surveillance, and Reconnaissance Division (ISRD) is the source of this information (table 2). One would then expect the ISRD to report this information to the OAT periodically, in sync with the assessment cycle.
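Once the ISRD reports both AIRs, computing the MOP is straightforward. The sketch below assumes invented numbers purely for illustration.

```python
# Hedged sketch: computing the MOP ("X% of EW/GCI radars destroyed")
# from the two AIRs reported by the ISRD. The input values are invented.
def mop_percent_destroyed(total_in_critical_areas, destroyed):
    """Percentage of EW/GCI radars in critical areas destroyed."""
    if total_in_critical_areas == 0:
        return 0.0
    return 100.0 * destroyed / total_in_critical_areas

# AIR 1: number of EW/GCI radars in critical areas (source: ISRD)
# AIR 2: number of those radars destroyed (source: ISRD)
print(mop_percent_destroyed(20, 14))  # 70.0
```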
The use of AIRs does not completely alleviate the need for increased tactical-assessment capability. However, it does ensure that the OAT will have access to the lower-level inputs required for performing assessments. Identifying critical information ahead of time reduces the data-mining load and streamlines the data-reporting process by limiting the request to only that information needed to complete the assessments.
Determining the impact of air operations on achieving the desired operational-level effect represents a third challenge of assessment from a supporting role. When the air component is not responsible for achieving the effect, the OAT may not have insight into the actual results of air operations. AFOTTP 2-3.2 encourages the establishment of cross-component relationships to enhance component-level assessments. (3) But when the air component assumes a supporting role, these relationships become essential. It is especially critical that the air component ensure reliable insight into the effects it provides by establishing feedback mechanisms.
The nature of the global war on terrorism complicates this issue even further. From the air component's perspective alone, the emergence of multirole aircraft and other capabilities has complicated the assessment process. In current conflicts, for example, US aircraft deliver nonkinetic support, such as nontraditional ISR and "presence." Unlike an assessment of kinetic operations, whereby the air component can close the loop through BDA without input from the supported component, the air component alone cannot evaluate the ultimate effect of nonkinetic operations. One must document, report, and track the linkage between air support and end effect across components. Furthermore, the air component needs to develop enduring internal processes to evaluate nonkinetic effects at the same level of detail it does for kinetic effects.
The air component must document when, where, and with whom its aircraft are working, as well as the effect desired by the supported commander; further, it must record this information in a central location and enter the outcome of each sortie, based on feedback from the supported component. Although mission reports currently describe each sortie, one generally finds them filed in a folder rather than catalogued in a meaningful, user-friendly way that allows analysts to extract key information about the effects provided by airpower. When airpower acts in a supporting role, effects-based analysis can succeed only when the air component receives feedback from the supported component on the last portion of the effects chain.
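A central sortie-effects log of the kind proposed above might look like the following sketch. All field names and example values are assumptions; the point is that each record holds the when, where, with-whom, and desired effect at tasking time, and the outcome field is filled in later from supported-component feedback.

```python
# Illustrative sketch (names assumed) of a central sortie-effects log:
# each record captures when, where, and with whom the aircraft worked,
# the supported commander's desired effect, and the outcome fed back
# by the supported component after the mission.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SortieRecord:
    mission_id: str
    date: str
    location: str
    supported_unit: str
    desired_effect: str
    outcome: Optional[str] = None  # filled in from supported-component feedback

log = []
rec = SortieRecord("MSN-0412", "2006-05-01", "grid NV 1234",
                   "TF-X", "nontraditional ISR over convoy route")
log.append(rec)

# Later, when feedback arrives from the supported component, the loop
# is closed and analysts can query which effects remain unevaluated:
rec.outcome = "route cleared; convoy proceeded without contact"
unresolved = [r for r in log if r.outcome is None]
```

Querying for records with no outcome makes the missing feedback visible, which is precisely the gap the authors identify in filed-away mission reports.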
In summary, the use of airpower in the global war on terrorism is driving changes in the way we assess our progress toward realizing operational objectives. Split assessment provides the JFACC information about his or her performance in the context of the overall joint effect, and the use of AIRs lightens the tactical-assessment load on a pared-down OAT. Finally, one must establish a close working relationship with the supported component's assessment team in order to accurately capture the effects produced by airpower. Use of all three techniques allows the OAT to provide the JFACC an assessment tailored to the current conflict, enables the development of strategy, and supports decision making on the use of limited resources.
Opportunities for continued improvement in the assessment process remain plentiful. We must acquire a clearer understanding of how airpower truly contributes to counterinsurgency operations in today's conflicts. We must learn how to perform tactical assessments for nonkinetic effects and do so with a lean, forward-deployed force. Lastly, we need to master the coordination of reporting and assessment across components. Progress in these areas will assure our continued dominance in warfare, even as the shape of the battlefield changes beneath us.
Maj Kirsten Messer, USAF
Maj Shane Dougherty, USAF
(1.) Joint Publication 3-30, Command and Control for Joint Air Operations, 5 June 2003, III-26, http://www.dtic.mil/doctrine/jel/new_pubs/jp3_30.pdf.
(2.) Air Force Operational Tactics, Techniques, and Procedures (AFOTTP) 2-3.2, Air and Space Operations Center, 13 December 2004, sec. 3.5.
Table 1. Strategy to task

  Operational Objective:  Air superiority throughout the joint operations area
  Tactical Objective:     Enemy integrated air defense system neutralized
  Tactical Task:          Destruction of electronic-warfare (EW) / ground
                          control intercept (GCI) radars in critical areas
  MOP:                    X% of EW/GCI radars destroyed

Adapted from AFOTTP 2-1.1, Air and Space Strategy, 9 August 2002, table A3-1.

Table 2. Two assessment information requirements

  Information Required                                   Source
  Number of EW/GCI radars in critical areas              ISRD
  Number of destroyed EW/GCI radars in critical areas    ISRD
Published in Air & Space Power Journal, 22 September 2006.