Timing is everything: operational assessment in a fast-paced fight.
Traditionally, we view operational assessment (OA) as part of the air tasking cycle, often depicted as a wheel (fig. 1). In a high operations tempo (OPTEMPO) environment, OA must function inside the 72-hour air tasking cycle. This article offers a procedural framework, based on the air tasking cycle, that depicts the changing relationships between assessment and the other parts of the cycle as the pace of operations increases. The framework considers inputs to and outputs from the operational assessment team (OAT). It offers insight into the assessment process and provides the necessary context for developing and implementing process refinements within the air and space operations centers (AOC).
Additionally, the article presents an abstract framework based on Col John Boyd's observe-orient-decide-act (OODA) loop. This conceptual framework provides additional insight into some of the key challenges of providing decision-quality assessments in a high-OPTEMPO environment. Furthermore, although presented in the context of command and control of air and space operations, this framework has broader applicability. It offers a theoretical context for understanding assessment as an enabler of effective decision making in all services, at all levels of war, and even in the context of business decisions in the private sector.
[FIGURE 1 OMITTED]
Shifting Assessment Focus inside the Air Tasking Cycle
The air tasking cycle "provides a repetitive process for planning, coordination, allocation, execution, and assessment of air missions." (2) As figure 1 shows, it begins in the Strategy Division with strategy development. The strategy plans team develops the joint air and space operations plan and passes it to the strategy guidance team, which issues guidance via the air operations directive and passes it to the targeting effects team. The targeting effects team then creates the draft joint integrated prioritized target list. The next step in the cycle calls for development of the master air attack plan (MAAP), the joint force air component commander's (JFACC) "time-phased air and space scheme of maneuver for a given ATO [air tasking order] period." (3) The cycle then proceeds to ATO production and then to execution. The final step in the cycle--assessment--evaluates whether air and space operations are creating the desired effects and achieving the JFACC's objectives. The assessment team recommends changes to strategy, and the cycle starts over again.
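The repeating sequence described above can be sketched as a simple ordered cycle. This is an illustrative sketch only (stage names are paraphrased from the text; nothing here is an AOC tool or doctrinal artifact):

```python
# Illustrative sketch: the air tasking cycle as an ordered, repeating
# sequence of stages. Stage names are paraphrased from the article.
from itertools import cycle

STAGES = [
    "strategy development",   # Strategy Division
    "target development",     # targeting effects team
    "MAAP development",       # master air attack plan
    "ATO production",
    "execution",              # Combat Operations Division
    "assessment",             # OAT recommends changes to strategy
]

stage = cycle(STAGES)
first_seven = [next(stage) for _ in range(7)]
print(first_seven[-1])  # 'strategy development' -- the cycle starts over
```

The point of the sketch is simply that assessment is not an endpoint: the seventh step wraps back to strategy development, which is where assessment's recommendations re-enter the plan.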
Within this framework, we think of assessment as occurring between execution and strategy development, which implies that the most important relationships for the OAT are with the Combat Operations Division (execution) and the other teams within the Strategy Division (strategy development). The Combat Operations Division provides the primary input to the OAT, which sends output to the primary recipients--the strategy plans team and strategy guidance team.
Looking first at inputs to the OAT, we clearly see the vital connection between assessment and execution, the latter representing the part of the air tasking cycle that creates effects. Understanding and interpreting those effects is one of the most basic functions of assessment.
Equally important, though, the OAT must have a comprehensive understanding of the plan, gained only by participating in the planning process, much of which is conducted by the strategy plans team. During planning, the OAT helps define objectives and tasks, specifying the performance and effectiveness measures to use in assessing progress. Without a strong connection to the other teams in the Strategy Division, the OAT will not truly understand the objectives and desired effects it assesses, and assessment will fail.
Even that connection is not enough, however. The strategy-development process yields an operational-level conceptual presentation of the plan, but in some cases, the OAT needs a tactical-level depth of understanding. To acquire such understanding, the team must track the work of the targeting effects team and the MAAP team as well as the changes being made to the ATO on the operations floor. Information must flow into the OAT from all parts of the air tasking cycle; the team must have connections to every other part of that cycle at all times. Figure 2 depicts the main inputs to the OAT, the darker arrows indicating that the connections to strategy development and execution are the most crucial.
As OPTEMPO increases, the relative importance of the connections to the other functions shifts. At a low OPTEMPO, most of the changes to the plan occur in strategy development. Tactical-level operations unfold at a relatively slow pace. In this unhurried environment, planners rarely make substantive changes to the joint integrated prioritized target list, MAAp, or ATO after their production.
[FIGURE 2 OMITTED]
As things start to speed up, however, substantive changes begin to occur later in the cycle. Increases in OPTEMPO have the most substantial impact at the tactical level of war. Even at extreme OPTEMPOs, the operational-level plan naturally changes at a more measured pace than the tactical plans that support it.
Take air superiority, for example. The operational-level plan to achieve air superiority by rolling back the enemy's integrated air defense system (IADS), destroying enemy aircraft on the ground, maintaining defensive counterair combat air patrols, and employing theater missile defense systems probably will not change significantly as the OPTEMPO increases. Certainly, we wish to do those things faster, but the overall plan will remain essentially the same.
The situation at the tactical level, however, is very different. Changing conditions in the battlespace will drive changes to the targeting effects team's products and to the MAAP. At extremely high OPTEMPOs, the bulk of the changes to the plan may occur during execution via the dynamic targeting process. This implies that, in order to maintain a comprehensive understanding of the plan as the pace of operations increases, the OAT must strengthen its connection to the tactical-level plans. At the same time, assuming the team has already developed a solid understanding of the operational plan, it may be able to reduce its focus on changes at that level. Figure 3 depicts the changing relationships as OPTEMPO increases.
[FIGURE 3 OMITTED]
A similar shift in focus occurs with respect to information flow out of the OAT. The true value of assessment lies in offering commanders the opportunity to change course and avoid possible pitfalls, rather than reacting to events after the fact. The OAT does this by means of predictive assessment--its projection of what the assessment will be at some point in the future. In order to leverage these projections, the commander must have a mechanism to incorporate recommended changes into the plan--specifically, in the air tasking cycle, the OAT feeds those recommendations into the strategy-development process.
That approach is well suited to a low-OPTEMPO environment. During steady-state peacetime operations, for example, the commander's desired effects are broadly defined and develop slowly--over a matter of months or even years. In this environment, plans develop at a correspondingly slow pace. The OAT can pass any observations to the strategy plans team for additional consideration and planning; such observations will work their way through the other teams as part of the normal cycle.
As the pace of operations increases, however, the commander may need to implement changes more rapidly. In that case, rather than feeding changes to the strategy plans team and strategy guidance team and allowing those changes to progress through the normal cycle, the OAT may need to make recommendations directly to one of the other teams. Suppose, for example, that the assessment identifies a potential problem with the JFACC's plan that warrants a change to the MAAP. The OAT should pass that change simultaneously to the strategy plans team, the strategy guidance team, and the MAAP team. The OAT should never bypass the strategy-development function entirely: any change to the JFACC's guidance must be reflected in an updated air operations directive, which should then be disseminated. However, passing the change to the MAAP team at the same time would enable its members to begin working it, knowing that a change to the air operations directive is forthcoming (fig. 4).
[FIGURE 4 OMITTED]
[FIGURE 5 OMITTED]
Extending this idea, figure 5 shows that as the pace of operations increases, assessment feedback moves further inside the air tasking cycle, while maintaining a persistent connection to strategy development. At the highest OPTEMPOs, assessment may provide feedback directly to the operations floor, perhaps recommending adjustments several times during a single ATO period.
In fact, this is quite often the way things work in practice. Verbal guidance provided by the JFACC in various settings is relayed to the appropriate team or teams even before revision to the air operations directive has begun. For example, during a recent exercise that involved fast-paced operations and a great deal of dynamic targeting, the JFACC received OA updates several times a day. In fact, during the most critical operations, the OAT provided him an update every two hours. If he had any concerns, they went immediately to the operations floor, where personnel made the necessary adjustments. The next air operations directive then incorporated the cumulative effect of these changes. (4)
In the author's experience, however, this is often a very informal process, usually involving much reinventing of wheels. To provide the JFACC with the best possible assessment, the OAT must have a solid understanding of the plan and a way to implement recommended changes. During a fast-paced fight, this must occur inside the 72-hour air tasking cycle. AOCs should formalize the existing ad hoc practices and use this procedural framework to stimulate discussion as well as lay the foundation for process improvements within the AOCs.
The approach described here, based on the air tasking cycle, offers a solid procedural framework for OA in a high-OPTEMPO environment within the AOCs. However, its applicability remains rather narrow in scope. The air tasking cycle, a task-oriented structure, was developed to codify the tasks and intermediate products necessary to produce and execute an ATO. It is not well understood within the joint community or, for that matter, within the Air Force outside the AOC. Assessment, particularly the effects-based variety, requires a broader theoretical structure to support discussion of the complex concepts and relationships involved. The next section describes such a structure.
Assessment and the Observe-Orient-Decide-Act Loop
The framework described above concerns itself with process improvements within the AOCs. This section, based on Colonel Boyd's OODA loop, develops a conceptual framework for discussing some of the problems plaguing assessment at high OPTEMPOs.
Colonel Boyd "thought that any conflict could be viewed as a duel wherein each adversary observes (O) his opponent's actions, orients (O) himself to the unfolding situation, decides (D) on the most appropriate response or counter-move, then acts (A)." (5) He noted that
the process of observation-orientation-decision-action represents what takes place during the command and control process--which means that the O-O-D-A loop can be thought of as being the [command and control] loop. The second O, orientation--as the repository of our genetic heritage, cultural tradition, and previous experiences--is the most important part of the O-O-D-A loop since it shapes the way we observe, the way we decide, the way we act. (6) (emphasis in original)
Looking at assessment in this framework, we see that OA serves as part of the "orientation" piece of the JFACC's OODA loop. The OAT collects observations--usually lower-level assessments--and synthesizes them to enable the JFACC's orientation and, hence, effective decision making. This context sheds more light on why so many problems arise when we attempt to conduct assessment within the air tasking cycle during high OPTEMPO. The higher the OPTEMPO, the faster the JFACC's OODA loop must go in order to keep up. When that loop operates faster than the 72-hour air tasking cycle, assessment must keep up with it or become irrelevant.
The OODA-loop framework applies to assessment at all levels of warfare. At the combatant commander (COCOM) or joint task force (JTF) level (strategic/operational), campaign assessment provides orientation for the joint force commander's decisions. At the component level (operational), OA provides orientation for the component commander's decisions. Lastly, at the tactical level, tactical assessments of various forms provide orientation for tactical-level decisions. For example, battle damage assessment (BDA) may indicate that a target was not successfully destroyed, leading to a restrike recommendation, or perhaps an assessor on the combat-operations floor will notice a pattern in the incoming mission report (MISREP) data that will lead to an adjustment in tactics. In all cases and at all levels, assessment serves an orientation function (fig. 6).
Not only does the OODA-loop framework apply at all levels of war but also, by examining relationships between the loops at different levels, we gain insight into some of the common problems plaguing assessment today. If assessment is fundamentally an orientation function, then the products of assessment serve two customers. First and foremost, they serve the decision maker at whatever level of war the assessment is conducted (the "decide" part of the OODA loop). Second, they serve as observations to enable orientation at the next-higher level. Figure 7 shows the relationships between OODA loops at different levels of war.
Suppose, for example, that a JTF commander is making a go/no-go decision as to whether or not to launch an amphibious assault on an adversary, and he has directed the JFACC to gain the requisite degree of air superiority to support the assault. Because the JFACC is concerned about the surface-to-air threat, he has struck a number of the enemy's IADS targets.
[FIGURE 6 OMITTED]
[FIGURE 7 OMITTED]
Looking at the tactical-level OODA loop, the JFACC's BDA team will collect information about those strikes from a variety of sources (observations). They will synthesize the observations and determine whether or not the target has been destroyed (orientation). They will then issue a BDA report that will go to the OAT (as an operational-level observation) and, if necessary, make a recommendation to restrike the target (input to decision maker).
At the operational level, the OAT will receive the BDA report (observation), using that information, along with a number of other inputs, to determine whether or not our forces have established air superiority (orientation). The team will pass the result to the JFACC (input to decision maker), who will alter his operations accordingly, and to the JTF (as a higher-level observation).
[FIGURE 8 OMITTED]
Finally, at the JTF level, the campaign assessment team will be informed that the JFACC is assessing whether he has attained the required degree of air superiority (observation). Team members will synthesize that observation, along with inputs from the other components and their own observations of the battlespace (orientation), and make a recommendation to the JTF commander regarding whether or not to proceed with the amphibious assault (input to decision maker, fig. 8).
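The layered relationship in this worked example can be sketched in code: each level's orientation product (an assessment) becomes an observation at the next-higher level. All names, data structures, and conclusions below are hypothetical illustrations, not doctrinal products:

```python
# Hypothetical sketch of nested OODA levels: the assessment produced by
# orientation at one level is consumed as an observation one level up.
from dataclasses import dataclass

@dataclass
class Assessment:
    level: str          # e.g., "tactical", "operational", "JTF"
    observations: list  # inputs synthesized at this level
    conclusion: str     # the orientation product

def orient(level, observations, conclusion):
    """Synthesize observations into an assessment (the 'orient' step)."""
    return Assessment(level, observations, conclusion)

# Tactical: the BDA team synthesizes strike reporting into a BDA report.
bda = orient("tactical", ["strike imagery", "pilot report"],
             "SAM site not destroyed; recommend restrike")

# Operational: the OAT treats the BDA report as one observation among many.
oa = orient("operational", [bda, "IADS activity levels", "sortie data"],
            "air superiority not yet achieved over the assault area")

# JTF: campaign assessment folds the component assessment into the
# commander's go/no-go orientation for the amphibious assault.
campaign = orient("JTF", [oa, "land component input", "maritime input"],
                  "recommend delaying the amphibious assault")

def levels(assessment):
    """Trace which levels of assessment fed a given recommendation."""
    chain = [assessment.level]
    for obs in assessment.observations:
        if isinstance(obs, Assessment):
            chain += levels(obs)
    return chain

print(levels(campaign))  # ['JTF', 'operational', 'tactical']
```

The trace at the end makes the article's point concrete: the JTF commander's go/no-go recommendation rests on a chain of orientation functions, and a break anywhere in that chain degrades the decision at the top.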
Many of the most widespread problems with assessment at high OPTEMPOs result from disconnects between OODA loops at different levels. Take data collection and management, for example. As most people who have done OA will attest, the OAT usually spends 90 percent of its time and manpower gathering and managing data, leaving only 10 percent devoted to synthesizing the data and producing the assessment.
When confronted with the data-collection and management issue, many people immediately assume that it is a technical problem with a technical solution. Often, the proposed solution takes the form of an automated data-collection system or a massive database. Automated data handling would offer an improvement over the current approach, but no amount of automation will address the root cause of the data-management issue: a failure of tactical-level orientation processes.
According to the framework, tactical assessments (the product of orientation at the tactical level) serve as the primary inputs (observations) to the OAT. Sometimes, however, the tactical-level orientation necessary to develop inputs to the OAT doesn't actually happen. In rare cases, this results from a complete breakdown of the tactical-assessment process. Usually, however, that process works just fine within the context of the tactical-level OODA loop. In these cases, the problem emerges from a disconnect between the tactical-level OODA loop and the operational-level loop, which can occur in several ways. Sometimes we have no process in place to align them. Sometimes we lack sufficient manpower to execute the process. Sometimes the OAT has not effectively communicated its requirements to the tactical-assessment teams. And sometimes the operational-level OODA loop moves so fast that the tactical-level processes can't support it. This last reason, especially problematic, becomes more likely as the pace of operations increases.
Regardless of the cause, the result is the same. The OAT doesn't get the observations it needs. Team members must then either try to drive the orientation functions at the tactical level or resort to collecting tactical-level observations and try to do tactical- and operationallevel orientation simultaneously.
It is nearly impossible to modify tactical-level orientation processes on the fly, particularly in a high-OPTEMPO environment, so the OAT often ends up attempting both tactical and operational assessments. In such cases, the OAT tries to collect a select few high-priority tactical observations and synthesize them into an operational-level assessment (fig. 9). This approach tends to be inefficient; it is usually not feasible for a single team to do both tactical- and operational-level orientation simultaneously. The OAT generally has to do at least some tactical-level orientation to identify the important observations before beginning its operational-level orientation process. This approach also relies heavily on the OAT's comprehensive understanding of the plan to accurately determine the most important tactical observations and rapidly fuse them into an assessment. It can work if the OAT includes the right people, but it represents a Band-Aid rather than a true solution.
[FIGURE 9 OMITTED]
Consider the processing and handling of MISREPs during a recent exercise. When MISREPs come into the AOC, they are tactical-level observations. Although one should not attempt in-depth analysis of MISREPs during the heat of battle, some tactical-level orientation can be done during ongoing combat operations. Ideally, the combat reports cell or other appropriate team would review the MISREPs as they come into the AOC and issue a periodic report summarizing their content. This did not happen. Instead, the combat reports cell on the operations floor passed hundreds of MISREPs to the OAT via e-mail, over 90 percent of which indicated that the pilot had nothing significant to report. The OAT spent hours opening these e-mails and documents to find the half-dozen MISREPs that were significant. Only then could team members begin to interpret the content of the reports in the context of the JFACC's objectives. In this case, the data-management problem facing the OAT was a direct result of the failure of the tactical-level orientation function. (7)
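Even a rudimentary triage step at the tactical level would have spared the OAT most of that effort. The sketch below assumes MISREPs arrive as plain text and that routine reports contain "nothing significant to report" or the abbreviation "NSTR"; both assumptions are illustrative, not a description of any fielded system:

```python
# Hypothetical sketch of the missing tactical-level triage step: split
# incoming MISREP text into significant reports and routine NSTR traffic
# before anything reaches the OAT. Marker phrases are assumptions.
NSTR_MARKERS = ("nothing significant to report", "nstr")

def triage(misreps):
    """Separate significant MISREPs from routine nothing-to-report ones."""
    significant, routine = [], []
    for report in misreps:
        text = report.lower()
        if any(marker in text for marker in NSTR_MARKERS):
            routine.append(report)
        else:
            significant.append(report)
    return significant, routine

inbox = [
    "MISREP 041: NSTR",
    "MISREP 042: observed SAM launch vicinity of target area",
    "MISREP 043: nothing significant to report",
]
significant, routine = triage(inbox)
print(len(significant), len(routine))  # 1 2
```

Automation of this kind is exactly the limited improvement the article concedes is possible; the deeper fix remains assigning the orientation function (summarizing and interpreting the significant reports) to the combat reports cell rather than the OAT.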
The long-term solution to the data-collection and management issue entails investing the necessary resources and effort in tactical-level orientation. We should exercise the BDA process routinely during peacetime to enable a smooth transition to major combat operations. Furthermore, we should codify and exercise tactical-level assessment processes for friendly operations, including procedures for handling MISREPs.
Another issue that frequently arises deals with the exchange of assessment data between the JTF (or other higher headquarters) and the air component. Again, many of these problems can be traced to disconnects between the component's and the JTF's OODA loops. In this case, however, the problem is not a lack of orientation at the lower level. Instead, the JTF often prefers to reserve the operational-level orientation function for itself and may disregard the orientation that occurs at the components. By requiring the components to provide what essentially amounts to observations rather than completed assessments, the JTF puts itself in a position of having to perform both the component-level and JTF-level orientation functions. This also increases the workload on an already stressed OAT that must now collect observations for transmission to the JTF in addition to performing the orientation function for which it is designed. This problem also arises at low OPTEMPOs, but in the author's experience, it tends to be more pronounced at high OPTEMPOs.
Although this practice results in duplication of effort and lower-fidelity assessment at the JTF level, it is easier to rectify than the data-collection issues. After all, the orientation occurs at the component, and the resulting product is available to the JTF whenever it cares to receive it. The key to resolving this disconnect involves building strong relationships between the JTF and component assessment teams before the shooting starts. By building this foundation, the organizations will come to understand and respect each other's processes. Coordination and data flow will improve, the OODA loops will align with one another, and the assessment processes at both levels will improve.
As these examples demonstrate, using the OODA loop as a conceptual framework offers insight into some of the more complex issues surrounding assessment in a fast-moving fight. Because it applies across the services and across the levels of war, it can be used to investigate and evaluate the connections between assessment processes at different organizations or levels. In general, it opens the door for development of assessment processes to handle nearly any situation in which the commander needs to make a decision and wants assessment as part of his or her orientation.
Assessment rapidly contextualizes and synthesizes a high volume of data to enable the JFACC's decision making. To do that well, the OAT must maintain a current, comprehensive understanding of the plan and a process for implementing recommended changes. The faster the operations proceed, the quicker the JFACC must make decisions--and the more valuable assessment becomes.
At these higher OPTEMPOs, the OAT must operate inside the 72-hour air tasking cycle. This article has offered a procedural framework to serve as a starting point for the development of disciplined processes for information flow to and from the OAT in a high-OPTEMPO environment. The approach outlined here allows assessment feedback to flow to the right teams quickly and efficiently, and maintains the connection to the Strategy Division--a link essential to preservation of a strategy-to-task approach.
In addition to the procedural framework described in the first half of the article, the OODA-loop construct serves as a conceptual framework for assessment. The loop's structure allows examination of connections between organizations operating at different levels of war. It offers a conceptual structure to enable understanding of some of the complex ideas and relationships involved in assessment. Finally, it illuminates some of the roadblocks to effective assessment. Hopefully, some of the insights it reveals will bring assessment one step closer to its true goal--enabling effective decision making at all levels.
(1.) Air Force Doctrine Document (AFDD) 2, Operations and Organization, 3 April 2007, 105, https://www.doctrine.af.mil/afdcprivateweb/AFDD_page_HTML/Doctrine_Docs/AFDD2.pdf.
(2.) AFDD 2-1, Air Warfare, 22 January 2000, 50, https://www.doctrine.af.mil/afdcprivateweb/AFDD_page_HTML/Doctrine_Docs/afdd2-1.pdf.
(3.) AFDD 2-1.9, Targeting, 8 June 2006, 41, https://www.doctrine.af.mil/afdcprivateweb/AFDD_page_HTML/Doctrine_Docs/afdd2-1-%209.pdf.
(4.) Author's personal experience during Exercise Talisman Sabre, May 2007.
(5.) Franklin C. Spinney, "Genghis John," US Naval Institute Proceedings 123, no. 7 (July 1997): 47.
(6.) John R. Boyd, "Organic Design for Command and Control" (slides), May 1987, slide 26, http://www.d-n-i.net/boyd/pdf/c&c.pdf (accessed 30 June 2007).
(7.) Author's personal experience during Exercise Talisman Sabre, May 2007.
LT COL KIRSTEN R. MESSER, USAF *
* Lieutenant Colonel Messer is chief, Operational Assessments, 613th Air and Space Operations Center, Hickam AFB, Hawaii. She thanks Maj Eric Murphy, Capt Tim Cook, Mr. John Borsi, Maj Dave Gwinn, Maj Joe Morgan, Maj Dave Lyle, Lt Col Dan Ourada, CDR Tom Hinderleider, and Col Tim Saffold for discussing her article's concepts or for reading and commenting on previous versions. Any remaining errors are solely those of the author.
Author: Messer, Kirsten R.
Publication: Air & Space Power Journal
Date: Jun 22, 2008