Aligning maintenance metrics: improving C-5 TNMCM.

Introduction

Metrics are often used as roadmaps to help us know where we have been, where we are going, and how or if we are going to get there. (1) Metrics should generally be used to gauge organizational effectiveness and efficiency and to identify trends, not as pass or fail indicators. Individually, they are snapshots in time. (2) Metrics are a statement of what is important to your organization and embody a way of thinking about your business; when metrics change, so do people's points of view. But what exactly is a metric, and what constitutes a good versus a bad metric?

Air Force Instruction (AFI) 21-101, Aircraft and Equipment Maintenance Management, describes metrics, specifically maintenance management metrics, as a crucial form of information used by maintenance leaders to improve the performance of maintenance organizations, equipment, and people when compared with established goals and standards. (3) AFI 21-101 also lists four attributes of metrics:

* Accurate and useful for decisionmaking

* Consistent and clearly linked to goals and standards

* Clearly understood and communicated

* Based on a measurable, well-defined process (4)

Dr Michael Hammer, a recognized leader in the field of process reengineering, also notes four principles of measurement.

* Measure what matters, rather than what is convenient or traditional

* Measure what matters most, rather than everything

* Measure what can be controlled, rather than what cannot be controlled

* Measure what has an impact on desired business goals, rather than what is an end in itself (5)

Hammer also points out several flaws with traditional metrics such as too many, fragmented, disorganized, internally focused, irrelevant to the customer, not used systematically, and not aligned with goals. (6) It is this last flaw (metrics not aligned with goals) which became a focus of examination during an Air Force Logistics Management Agency (AFLMA) study of rising Air Force total not mission capable maintenance (TNMCM) rates and potential root cause factors affecting these rates.

Background

This article is the second of a three-part series based on AFLMA project number LM200625500, the C-5 TNMCM Study II. At the request of the Air Force Materiel Command Director of Logistics (AFMC/A4), AFLMA conducted an analysis in 2006-2007 of TNMCM performance with the C-5 Galaxy aircraft as the focus. The C-5 TNMCM Study II included five objectives. One of those objectives was to determine root causes of increasing TNMCM rates for the C-5 fleet. To achieve that particular objective, an extensive, repeatable methodology was developed and utilized to scope an original list of 184 TNMCM factors down to two root causes for in-depth analysis. Those two factors were aligning maintenance capacity with demand and the logistics departure reliability (LDR) versus TNMCM paradigm. This article details the analysis of the second of these two factors.

[ILLUSTRATION OMITTED]

This second factor was also described as a disconnect or misalignment between the C-5 maintenance group (MXG) leadership's primary metric, home station logistics departure reliability (HSLDR), and one of the major command (MAJCOM) and Air Force senior leadership's primary metrics, aircraft availability (AA). The remainder of this article describes how real-world and simulated data supported the early hypothesis that HSLDR and TNMCM were not aligned metrics. Finally, a brief discussion explains why the study team believed a disconnect existed between the base-level and command-level metrics.

Primary Metrics of C-5 Maintenance Leadership

The C-5 TNMCM Study II originated because the project sponsor placed significant importance on TNMCM rates. Based on site visits and feedback from all but one C-5 MXG commander (MXG/CC) or other MXG senior leaders, the study team determined that the primary metric of the MXG/CC was HSLDR. AA, which is directly related to the TNMCM rate, was a primary metric of higher level leadership. Major General McMahon, then AMC director of logistics (AMC/A4), spoke to the study team in December 2006 concerning aircraft availability as the future cornerstone maintenance metric [as opposed to mission capable (MC) rates]. (7) Similarly, personnel from the AMC/A4M office stated that aircraft availability is the number one concern for AMC Headquarters as opposed to MC rates. (8)

During site visits to Dover Air Force Base (AFB), Stewart Air National Guard Base, and Westover Air Reserve Base, the study team received feedback from base-level maintenance leadership concerning maintenance metrics. Some of the comments included:

"We don't manage by MC-Rate ... we don't chase the numbers. We care about departure reliability, and [the Air Force] should be looking at en route reliability." (9)

"We don't look at the TNMCM rate ... numbers aren't the issue. We focus on the mission and the flying schedule." (10)

"What's important? Anything that makes us fly. The metric for the base is departure reliability ... Ops isn't happy with a 73 percent LDR." (11)

"MC rate is way down on the list of things we pay attention to ... We're currently scrambling to meet the flying schedule. Our priorities go to the scheduled aircraft." (12)

"Our primary metric is LDR." (13)

Based on feedback from AFMC/A4 and AMC/A4 leadership, MXG/CCs at three C-5 bases, and telephone discussions with MXG leadership at other C-5 bases, the study team concluded that the primary metric of the MAJCOM A4 leadership was AA, which includes TNMCM, and that the primary metric of the MXG/CCs was HSLDR.

HSLDR, TNMCM, and AA Defined

AFI 21-101 defines the HSLDR, TNMCM, and AA metrics and their uses. Additional insight on the use of these metrics can be found in the Metrics Handbook for Maintenance Leaders.

Home-Station Logistics Departure Reliability (HSLDR) Rate. This is a leading metric used primarily by the Mobility Air Forces (MAF) for airlift aircraft. It counts only first-leg departures of unit-owned aircraft departing home station. (14)

HSLDR Rate (%) = ((# of HS Departures - # of HS Logistics Delays)/# of HS Departures) x 100

Total Not Mission Capable Maintenance (TNMCM) Rate. TNMCM rate is the average percentage of possessed aircraft (calculated monthly or annually) that are unable to meet primary assigned missions for maintenance reasons.... Any aircraft that is unable to meet any of its wartime missions is considered not mission capable (NMC). The TNMCM is the amount of time aircraft are in NMCM [not mission capable maintenance] plus not mission capable both (NMCB) status. (15)

NMCB is mentioned in AFI 21-101 as the percentage of unit-possessed hours that aircraft are not mission capable due to both maintenance and supply. (16)

TNMCM (%) = ((NMCM Hrs + NMCB Hrs)/Unit Possessed Hrs) x 100

Aircraft Availability (AA) Rate. Aircraft availability is the percentage of a fleet that is in neither depot possessed status nor unit possessed NMC status. (17)

AA (%) = (MC Hours/Total Possessed Hrs) x 100

Note that TNMCM rate and AA rate are both part of the family of metrics that relate to aircraft status hours. Also important to remember is that unit possessed aircraft must be in one of four statuses:

* MC (to include partially mission capable for maintenance or supply)

* NMCM

* Not mission capable supply (NMCS)

* NMCB

Therefore, the percentage of MC hours must decrease as the percentages of NMCM, NMCS, and NMCB hours increase.
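To make the arithmetic concrete, the three rate calculations defined above can be sketched in a few lines of Python. The sketch is illustrative only; the input values are invented for demonstration and are not drawn from the study data.

    def hsldr_rate(hs_departures, hs_logistics_delays):
        # HSLDR Rate (%) per the formula above
        return (hs_departures - hs_logistics_delays) / hs_departures * 100

    def tnmcm_rate(nmcm_hours, nmcb_hours, unit_possessed_hours):
        # TNMCM Rate (%) per the formula above
        return (nmcm_hours + nmcb_hours) / unit_possessed_hours * 100

    def aa_rate(mc_hours, total_possessed_hours):
        # AA Rate (%) per the formula above
        return mc_hours / total_possessed_hours * 100

    # Hypothetical one-month example (values are illustrative only):
    print(round(hsldr_rate(120, 20), 1))             # 83.3
    print(round(tnmcm_rate(3000, 1200, 13000), 1))   # 32.3
    print(round(aa_rate(8800, 13000), 1))            # 67.7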

Metrics at Different Levels of the Organization

One might expect two different levels of an organization to have two different primary metrics. For the Air Force, the focus at the base maintenance level is expected to be on the tasks at hand to execute the mission on a daily basis. However, a strategic focus at the command A4 level is to be expected, looking across the availability of the entire fleet. Consider Dr Michael Hammer's presentation of this phenomenon in Table 1.

The first column in Table 1 lists the various categories across the spectrum of oversight for an organization, ranging from enterprise goals to local activities. The headings in the top row list the range of positions in the hierarchy of jobs within the organization. In general, senior leaders are primarily accountable for setting the vision and strategy across the entire business enterprise. Process owners are responsible for developing and executing operations and processes to support higher strategy, while professionals actually perform specific work tasks through various activities. Consider this same chart in terms of C-5 aircraft maintenance, shown in Table 2. The base-level focus on on-time departure reliability falls within the operating objective level, providing ready airplanes for the flying schedule. On the surface, this supports the strategic performance objectives of cargo and passenger delivery. These processes are, after all, at the core of the airlift mission. On-time departure reliability, as a measurement, only considers those airplanes scheduled to fly (departing). (19) TNMCM, on the other hand, is concerned with the categorization of aircraft status, and pertains to all possessed airplanes, regardless of whether or not there is an operational demand. (20) The takeaway here is that the study team's observations of the C-5 aircraft maintenance enterprise supported Dr Hammer's view presented in Table 1. The study team found that different levels of the C-5 maintenance hierarchy do in fact focus on different primary metrics.

Aligning Metrics

Although it may be common for different organizational levels to focus on different metrics, this split focus can be problematic for the enterprise when the pursuit of goals at the local level is not aligned to goals at the strategic level. That is, pursuit of better performance in one metric could result in suboptimal performance of higher level metrics. When this occurs, the metrics are not aligned. The study team utilized the following definition for aligned metrics:

Definition 1--Aligned Metrics. A set of metrics is said to be aligned if, with all other variables held constant, improvement in the lower level metric implies improvement of the higher level metrics.

For example, consider the priorities of a trucking company. The company is concerned with a higher level metric, known as a value measure, of increasing profit. The value measurement is in dollars. Shop managers at a truck maintenance facility use a lower level metric, known as a process measure, of reducing repair cycle time. By reducing the repair cycle time, the labor cost per truck is reduced, and each truck is returned to revenue-generating status sooner. All other variables held constant, reduced labor costs and greater numbers of operational trucks increase profit for the company. In this way, improving cycle time implies improvement in profit. (21) By Definition 1, these metrics are aligned.

Now consider the Air Force maintenance metrics of HSLDR rate and TNMCM rate. The base focus on departure reliability may have a direct effect on prioritizing unscheduled maintenance actions to best meet the flying schedule. This optimization can cause an airplane that is hard broke to be prioritized below another airplane in order to get the less broke airplane repaired more quickly and readied for the next flight. This decision, while supporting the objective of on-time departure reliability, may actually have a negative effect on the TNMCM rate. If, however, HSLDR and TNMCM were aligned, an improvement to HSLDR would imply an improvement to TNMCM. To investigate the alignment of the HSLDR, TNMCM, and AA metrics, the study team analyzed data from August 2004 through December 2006 for the 436 MXG at Dover AFB. The 436 Maintenance Operations Squadron (MOS) analysis section provided the data for the HSLDR and TNMCM rates; the source for the AA rates was the Multi-Echelon Resource and Logistics Information Network.

Mathematically, metric alignment implies that two metrics are fairly strongly related. To test the correlation mathematically, the study team employed the correlation coefficient, denoted by the symbol ρ (rho). The correlation coefficient is a number between -1 and 1 which measures the degree to which two variables are linearly related and is scaled such that ρ > 0 indicates a positive correlation between the variables. A value of ρ = +1 implies a perfect correlation with all ordered pairs (points) falling on a straight line with a positive slope. A value of ρ = -1 implies a perfect negative correlation with all points on a straight line with a negative slope. (22) For the purposes of this study, the study team partitioned the correlation coefficient values in the following manner:

* |ρ| ≤ 0.20 implies a very weak correlation

* 0.20 < |ρ| ≤ 0.50 implies a weak correlation

* 0.50 < |ρ| ≤ 0.80 implies a moderate correlation

* 0.80 < |ρ| ≤ 1.0 implies a strong correlation

Figure 1 illustrates the relationship between the TNMCM rate and HSLDR rate. If the metrics were aligned, the graph should show evidence of a strong negative correlation. That is, as HSLDR increased, TNMCM would decrease and vice versa. In this case, the scatter plot reveals no definite relationship, appearing more like a shotgun spread. For comparison purposes, the least squares regression line for the data is drawn and the line equation is presented. A regression equation allows for the expression of a relationship between two or more variables algebraically. From Figure 1, the correlation coefficient between HSLDR and TNMCM is very weak, with ρ = -0.15056. Therefore, improvement of the HSLDR rate does not imply improvement of the TNMCM rate. By the study's definition, HSLDR and TNMCM were not aligned metrics.
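The correlation check itself is simple to reproduce. The following Python sketch uses the standard library's Pearson correlation function (available in Python 3.10 and later) and classifies the result with the study's partition; the monthly rate series shown are hypothetical, not the Dover AFB data.

    from statistics import correlation  # Pearson correlation, Python 3.10+

    def classify(rho):
        # Classify |rho| using the study's partition
        r = abs(rho)
        if r <= 0.20:
            return "very weak"
        if r <= 0.50:
            return "weak"
        if r <= 0.80:
            return "moderate"
        return "strong"

    # Hypothetical monthly rates (percent), one value per month
    hsldr = [83.1, 85.0, 81.7, 84.2, 86.3, 82.5]
    tnmcm = [31.9, 33.4, 30.8, 34.1, 32.2, 33.0]

    rho = correlation(hsldr, tnmcm)
    print(round(rho, 3), classify(rho))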

[FIGURE 1 OMITTED]

Figure 2 illustrates the relationship between the HSLDR rate and AA rate, the primary metric at the MAJCOM A4 level. Again, the plot resembles a shotgun spread, and there is a very weak correlation coefficient with ρ = 0.072165. HSLDR and AA do not appear aligned according to the study's definition.

Figure 3 illustrates the relationship between the TNMCM and AA rates. Here, the scatter plot reveals a negative correlation. Likewise, the correlation coefficient indicates a moderate negative correlation with ρ = -0.77927. This evidence supports the idea that TNMCM and AA are aligned according to the study definition. As the TNMCM rate improves (decreases), the AA rate also tends to improve (increases). This result is not surprising since TNMCM and AA are part of the same family of status-hour metrics.

In summary, Figures 1, 2, and 3 suggest that TNMCM and AA are aligned, and HSLDR is not aligned with either TNMCM or AA. As stated earlier, MXG/CCs focus on HSLDR as their primary metric, not TNMCM or AA. Therefore, the MXG/CCs and their personnel make decisions about resources and day-to-day operations which impact HSLDR first. Since HSLDR is not aligned with TNMCM and AA, there is no guarantee that TNMCM or AA will improve as a result of the current operations. The MXG efforts, therefore, are not directly aimed at improving TNMCM rates when they are focusing on improving HSLDR rates.

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

Experimentation Using C-5 Maintenance Priority (MXP) Simulation

In order to test the impact on TNMCM rates of base-level, HSLDR-centric maintenance decisionmaking, the AFLMA study team created a discrete event simulation using Arena simulation software. The simulation facilitated an analysis of how different maintenance operations could affect the HSLDR and TNMCM rates in a controlled environment. This analysis would be impractical to do in the real world. The following sections summarize the development and results of the C-5 maintenance priority (MXP) simulation.

MXP Problem Formulation and Objectives

The MXP model was designed to study the employment of different queuing prioritization policies and their effect on key maintenance performance metrics in the support of C-5 aircraft. These policies determine the order in which aircraft awaiting maintenance are processed. Field interviews conducted by the study team revealed that in order to improve HSLDR, the maintenance commanders gave priority to those aircraft that "have the best chance of being returned to a [fully mission capable] status in minimum time." (23) These recovery maintenance practices were utilized at both Travis AFB and Dover AFB for C-5 maintenance. (24) The MXP model labels this as the least maintenance (Mx) policy and determines the priority of queued aircraft based on the remaining man-hours of repair. Thus, the aircraft with the fewest man-hours of repair remaining relative to other queued aircraft receives top priority when maintenance resources become available. Alternatively, the most Mx policy gives priority to the aircraft with the most man-hours of repair remaining. The two remaining policies are first-in-first-out (FIFO) and last-in-first-out (LIFO). These queuing policies order aircraft according to their arrival. With FIFO, a newly arrived aircraft goes to the back of the queue. In a LIFO policy environment, a newly arrived aircraft goes to the front of the queue.
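A simple way to picture the four policies is as different sort keys applied to the queue of broken aircraft. The Python sketch below is illustrative only (it is not the Arena model); the tail numbers and man-hour figures are invented.

    import itertools
    from dataclasses import dataclass, field

    _arrival_counter = itertools.count()

    @dataclass
    class QueuedAircraft:
        tail: str
        remaining_mx_hours: float   # man-hours of repair remaining
        arrival_order: int = field(default_factory=lambda: next(_arrival_counter))

    def next_aircraft(queue, policy):
        # Choose which waiting aircraft receives maintenance resources next
        if policy == "least_mx":   # as-is recovery maintenance: smallest job first
            return min(queue, key=lambda a: a.remaining_mx_hours)
        if policy == "most_mx":    # largest job first
            return max(queue, key=lambda a: a.remaining_mx_hours)
        if policy == "fifo":       # earliest arrival first
            return min(queue, key=lambda a: a.arrival_order)
        if policy == "lifo":       # latest arrival first
            return max(queue, key=lambda a: a.arrival_order)
        raise ValueError(policy)

    queue = [QueuedAircraft("0001", 120.0),
             QueuedAircraft("0002", 8.5),
             QueuedAircraft("0003", 40.0)]
    print(next_aircraft(queue, "least_mx").tail)   # 0002 -- the least-broke aircraft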

MXP Data Collection

Data for the MXP came from multiple sources. Aircraft arrival data was provided by the 436 MOS at Dover AFB for the period from January 2006 through March 2007. Manpower data was provided by the 436th Aircraft Maintenance Squadron for March and April 2007. Data for the possessed aircraft inventory, HSLDR rates, and TNMCM rates were provided by the 436 MOS for the fourth quarter fiscal year (FY) 2006. Data for the maintenance processes were taken from the Reliability and Maintainability Information System (REMIS) for fourth quarter FY 2006. The study team determined that these data sets were the most suitable given the availability of data.

MXP Assumptions

Two important assumptions were made in the formulation of the MXP simulation:

* Total not mission capable supply (TNMCS) time was assumed to have no impact on the maintenance operations or the TNMCM rate. The impact of supply operations was assumed to be accounted for in the repair time data. The MXP does not model any TNMCS time.

* Unit possessed time for all aircraft was assumed to be constant and equal for the four maintenance policies modeled in the MXP simulation.

MXP Model Conceptualization

The MXP simulation modeled C-5 maintenance operations at Dover AFB. The simulation modeled 18 aircraft (the average number of possessed aircraft for Dover AFB in the fourth quarter FY 2006) that arrive at the base according to a daily arrival schedule with a fixed number of breaks. To achieve the desired arrival stream attributes within the Arena simulation framework, the MXP model employed three separate processes.

The first process created 18 C-5 aircraft entities at time zero. The entities then entered an arrival queue at a gate that opened according to the aircraft arrival schedule. Once opened, the gate allowed a single aircraft to proceed to the maintenance process before closing until the next arrival signal was received. The same 18 aircraft entities flowed from the arrival process to the maintenance process before being recycled back to the arrival process. In this way, the model never had more than 18 aircraft in the system at one time.

The second process tracked the day of the week. A clock entity was created at time zero and thereafter stepped through the days of the week at 24-hour intervals. The simulation employed two schedules that depend on the day of the week cycle. The first was related to the maintenance process and defined how many manpower resources were available to perform maintenance on a given day. The second schedule governed the aircraft arrival pattern.

The final process related to aircraft arrivals determined when the gate should be opened allowing an aircraft to arrive and proceed to the maintenance process. These triggers were created according to a schedule derived from 15 months of aircraft arrival data at Dover AFB. The data defined day-specific discrete probability distributions of the number of aircraft arrivals. These distributions are given in Table 3.

The manpower resources and repair times required to complete the repairs were drawn from distributions based on the real-world data. The aircraft wait in the maintenance queue until resources are available for repair. Repairs are then completed in three phases.

The values in each row of Table 3 represent the probability of the particular number of arrivals (represented as 0 through 8 in the column headings) on that day of the week. Each row sums to one. These daily arrival distributions are the building blocks for a random aircraft arrival stream based on historic observations at Dover AFB.

REMIS data was used to derive a discrete distribution of the number of personnel on a work crew associated with a repair action. Each repair action is assigned a randomly sized crew. Table 4 shows the crew size probability distribution used in the simulation. For example, there is a 0.519 probability that a repair action requires two maintenance personnel. The data did not indicate any instances of crew sizes of seven or eight people during the timeframe of the data. When all repairs are complete, the manpower resources are released to perform other repairs and the aircraft departs the base.
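The arrival and crew-size distributions lend themselves to simple random sampling. The sketch below illustrates the idea with two of the day-of-week rows from Table 3 and the crew-size row from Table 4; it is a simplified stand-in for the Arena logic, not a reproduction of it.

    import random

    # Probability of n arrivals (n = 0..8) for two example days, from Table 3
    ARRIVALS = {
        "Sunday": [0.231, 0.461, 0.200, 0.093, 0.015, 0.0, 0.0, 0.0, 0.0],
        "Monday": [0.092, 0.139, 0.292, 0.215, 0.108, 0.092, 0.047, 0.0, 0.015],
    }

    # Probability that a repair action needs a crew of 1..7 people, from Table 4
    CREW_SIZES = [1, 2, 3, 4, 5, 6, 7]
    CREW_PROBS = [0.323, 0.519, 0.123, 0.022, 0.003, 0.001, 0.009]

    def sample_arrivals(day):
        weights = ARRIVALS[day]
        return random.choices(range(len(weights)), weights=weights)[0]

    def sample_crew_size():
        return random.choices(CREW_SIZES, weights=CREW_PROBS)[0]

    print(sample_arrivals("Monday"), sample_crew_size())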

Figure 4 illustrates the overall view of the basic maintenance processes modeled in the MXP.

C-5 arrivals are triggered according to an arrival schedule. After arrival, aircraft require (seize) maintenance resources, maintenance actions are performed, and then manpower resources are released. This cycle is accomplished three times before returning the aircraft to the arrival queue.

In order to model the parallel and serial nature of aircraft maintenance actions, the study team adopted the repair bin methodology used by Balaban et al., in their mission capable rate (MCR) simulation model, which they demonstrated using the C-5 fleet. (25) In reality, certain repair actions are accomplished simultaneously with other repair actions. However, by regulation, some actions cannot be performed simultaneously with certain other maintenance actions. Balaban et al., modeled this parallel and serial operation by grouping repair actions for a given aircraft into three bins or buckets. Repairs within a given bin are performed simultaneously, but the bins are repaired serially. Thus, all repairs in bin one are completed before beginning bin two repairs. The repair time for each bin is the longest of the repair times contained in the bin. (26) The MXP model also used three bins. The first bin contained 65 percent of the total number of repair actions, the second bin contained 25 percent, and the third bin contained 10 percent. This is very similar to the probabilities used in the MCR model--60, 30, and 10 percent, respectively. (27)
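The bin logic reduces to a few lines: each repair action is randomly assigned to one of the three bins, the time for a bin is the longest action in it, and the bins are worked one after another. The sketch below uses the MXP bin probabilities with invented repair-action times; it is illustrative, not the study's implementation.

    import random

    BIN_PROBS = [0.65, 0.25, 0.10]   # MXP probabilities of assignment to bins 1, 2, 3

    def total_repair_time(action_hours):
        # Assign each repair action to a bin, then work the bins serially
        bins = [[], [], []]
        for hours in action_hours:
            bins[random.choices([0, 1, 2], weights=BIN_PROBS)[0]].append(hours)
        # Actions within a bin run in parallel (bin time = longest action);
        # bins run in series (total time = sum of bin times)
        return sum(max(b) for b in bins if b)

    print(total_repair_time([4.0, 12.5, 3.2, 8.0, 1.5]))   # invented repair-action hours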

MXP Model Validation

As previously stated, the least Mx priority system most closely matched the recovery maintenance practices in place at both Dover AFB and Travis AFB. Therefore, the study team deemed the least Mx model the best representation of the current, real-world process and considered this model the as-is model. The study team used the HSLDR rate in order to validate the MXP simulation against the real-world maintenance processes. After calibrating the MXP, the least Mx model achieved an HSLDR rate of 0.821 with a 95 percent confidence interval that included the real-world HSLDR rate of 0.833 for the timeframe of the data. It is important to note that the model's intended use was not as a predictive model (given C-5 break rates, how many maintenance resources are required to satisfy a given AA rate?), but only to make a relative comparison between the four given prioritization policies. The model was not designed to determine HSLDR/TNMCM/Mx backlog or to determine maintenance manning levels.
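The validation check amounts to asking whether the real-world HSLDR rate of 0.833 falls inside a 95 percent confidence interval built from the model's replications. The Python sketch below shows one common way to construct such an interval; the replication values are invented, and a t-quantile would be more exact than the normal approximation used here for a small number of replications.

    from statistics import mean, stdev

    replications = [0.800, 0.845, 0.812, 0.840, 0.805, 0.835, 0.810, 0.826]  # hypothetical
    m, s, n = mean(replications), stdev(replications), len(replications)
    half_width = 1.96 * s / n ** 0.5        # normal approximation to the 95 percent CI
    ci = (m - half_width, m + half_width)
    print(ci, ci[0] <= 0.833 <= ci[1])      # True if the interval contains 0.833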

[FIGURE 4 OMITTED]

MXP Results and Conclusions

Table 5 summarizes the MXP simulation results for the four policies examined with respect to three metrics: HSLDR, estimated TNMCM (Est TNMCM), and sum of Mx in the queue (Mx backlog). Mx backlog covers the middle ground between the other two metrics--the prioritization policy determines which aircraft the maintenance group returns to mission capable status soonest while the remaining aircraft accrue TNMCM time. Mx backlog is a measure of the ability of the maintenance system to generate all possessed aircraft if called upon to do so. An ideal policy is one that would produce a high LDR rate, a low TNMCM rate, and a low Mx backlog.

* Least Mx. The least Mx model was the baseline for comparison to the other Mx prioritization policies. It most closely resembled the as-is process of recovery maintenance. The HSLDR achieved in the model was representative of the real-world HSLDR rate and was used to validate the model. Likewise, the Est TNMCM rate achieved matched the real-world value for the timeframe of the data. Mx backlog for the least Mx model was the largest of the four policies considered. The Mx backlog measured the ability to improve the steady-state TNMCM rate. The higher the backlog, the harder it was for the Mx system to improve from its steady-state TNMCM rate. Higher backlog means longer aircraft generation time.

* Most Mx. The most Mx prioritization policy had the same LDR (statistically speaking, within a 95 percent confidence interval) as the least Mx policy. Both the Est TNMCM and Mx backlog improved over the least Mx policy. This is intuitive because the most Mx policy actively applies resources to the biggest maintenance jobs first. However, the variability from day to day increased significantly with this policy. This means that the predictability and stability for scheduling purposes suffered greatly.

* FIFO. The FIFO policy had a reduced LDR when compared to the least Mx policy. However, the Est TNMCM improved, and was statistically the same as the Est TNMCM for the most Mx policy (within 95 percent confidence intervals). The Mx backlog was lower than the least Mx policy as well.

* LIFO. The LIFO policy appeared to be the least attractive with regard to the key metrics. As compared to the least Mx policy, it had a reduced LDR and increased Est TNMCM. It also had a reduced Mx backlog when compared to the least Mx policy but was the second worst of all the policies examined.

These results reveal several things about the prioritization policies and their impact on the LDR and TNMCM rates. First, LDR and TNMCM react differently depending on the maintenance policy. The current policy in place (least Mx) achieves a high LDR but has a mediocre estimated TNMCM when compared to the other policies, and the worst Mx backlog, which indicates that it is very difficult to improve the TNMCM rate. It is possible to improve the TNMCM rate by changing the prioritization policy. However, the improved TNMCM would come at the cost of predictability and stability in day-to-day operations (as with the most Mx policy) or LDR (as with the FIFO policy). The results of the simulation added support to the original hypothesis that HSLDR and TNMCM are not aligned metrics, but did not completely confirm it. While the current system cannot be modeled perfectly, the simulation results did suggest that current maintenance policies do not ensure TNMCM improvement, but do improve LDR. It is safe to conclude that TNMCM and LDR are not necessarily aligned, complementary metrics.

Several personnel interviewed during the study team's site visits suggested that awareness exists of the just-described disconnect between enterprise goals (aircraft availability) and operating objectives. "There is a huge disconnect between AMC's focus on the availability of tails (airplanes) and our focus on on-time departure reliability." (28)

Consequently, while process owners are diligently focused on supporting the strategic performance objectives of delivering cargo and passengers, they are unable to simultaneously align their performance with the enterprise goal of increased aircraft availability. (29)

Maintenance Metrics at Delta Airlines

As a means of comparing business practices, the study team elected to compare Air Force maintenance metrics with those of a leading commercial organization, Delta Airlines. The team interviewed representatives from Delta Airlines' reliability program office. The study team was told the focus of Delta's reliability program is driven by what are termed Delays and Cancellations (D&C). (30) These are unscheduled events that have an operational impact and require a mechanical dispatch. For each delay or cancellation, there is a direct, net consequence to Delta's revenue, so there is a high priority placed on diagnosing the cause.

Delta personnel identified nine main aircraft maintenance metrics used by Delta. These metrics are summarized in Table 6. (31) Note that technical dispatch reliability (TDR) includes all maintenance related to primary delays and cancellations, whereas mechanical dispatch reliability (MDR) includes only those primary events for which the reliability program is responsible. Repairs due to damage, cannot duplicate actions, maintenance carryovers, and maintenance errors (such as over-servicing) are not included in MDR. Dispatch is the term used for all of Delta's revenue flights. (32) Although there is not an explicit hierarchy, the first two metrics, TDR and MDR, are directly linked to the daily revenue-producing flights on Delta's schedule. These metrics track the volume of, and reasons behind, delays and cancellations for a revenue flight.

Maintenance carryovers are Delta Airlines' equivalent to delayed discrepancies in the Air Force. Maintenance carryovers are repairs that may be delayed (or carried over) to a more opportune time. Unscheduled aircraft out of service (UAOOS) measures the number of aircraft out of service due to an unscheduled event (such as a broken component). Delta measures UAOOS by counting the number of aircraft in this category three times per day (0900 hours, 1200 hours, and 1800 hours) and averaging that count over specified intervals. (33) Prioritization of repair is often given to aircraft that can be returned to service quickly, but the level of impact to fleet operations may be the driving factor. (34) As an example, a broken B-777 has a much bigger impact than a broken MD-88; the MD-88 fleet has many spares, while the B-777 does not. (35) The UAOOS metric is analogous to the Air Force TNMCM rate, though it is only focused on the unscheduled aircraft and is counted in whole aircraft rather than hours. Delta's primary metrics (those driven by delays and cancellations) are not measured against an objective standard (met or not met); instead, they alert when they exceed a control limit for 2 consecutive months. (36) Additionally, Delta personnel interviewed suggested that the metrics are driving desired behavior; this is supported by measured performance, as TDR averaged 97 percent fleet-wide at the time of the original study's publication. (37)
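Two of the Delta practices described above, the thrice-daily UAOOS count and the two-consecutive-month control-limit alert, are simple enough to express directly. The sketch below uses invented counts and an invented control limit purely for illustration.

    def uaoos_average(daily_counts):
        # Average the 0900/1200/1800 out-of-service counts over an interval
        flat = [count for day in daily_counts for count in day]
        return sum(flat) / len(flat)

    def control_limit_alert(monthly_values, limit):
        # Alert if the metric exceeds its control limit for 2 consecutive months
        return any(a > limit and b > limit
                   for a, b in zip(monthly_values, monthly_values[1:]))

    week = [(3, 4, 2), (5, 5, 6), (2, 3, 3), (4, 4, 5), (3, 2, 2), (1, 2, 2), (4, 3, 5)]
    print(round(uaoos_average(week), 2))                     # 3.33 aircraft out of service
    print(control_limit_alert([1.8, 2.4, 2.6, 1.9], 2.2))    # True: months 2 and 3 exceed the limit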

Delta has a very clear enterprise-level value measure--profit. This clear value measure lends itself well to metric definition at the operational level, which is why Delta focuses on the D&Cs. The D&Cs have a direct net effect on the revenue producing flights, which in turn has a direct impact on profit.

Value Metrics in the Mobility Air Forces

The MAF, on the other hand, seems to have two competing enterprise-level value metrics:

* Strategic Readiness. AA and TNMCM rates measure the ability of the fleet to be fully mobilized at any given time.

* Operational Effectiveness. HSLDR rates measure the ability of the fleet to meet the daily mission requirements.

Conventional wisdom argues that increased strategic readiness facilitates operational effectiveness--increased AA and decreased TNMCM should lead to increased HSLDR. However, as previously shown, there is a weak correlation between HSLDR and both AA and TNMCM. Again, these metrics are not aligned.

Conclusions

This article discussed the focus on different metrics to include HSLDR, TNMCM, and AA at varying levels of the Air Force maintenance enterprise. It also demonstrated that HSLDR is aligned with neither AA nor TNMCM, as there is only a weak correlation between them. Maintainers at the wing level work to support operational effectiveness; however, higher levels of Air Force supervision appear more focused on improving strategic readiness. This disconnect in priorities was determined to be a root cause of the C-5 TNMCM rate being below Air Force standards. This article does not advocate one metric over another. That choice is left for Air Force leadership to make. This article illustrates that, in this case, the primary metrics at varying levels of aircraft maintenance are not aligned and not complementary to one another.

If the Air Force's primary goal is to improve the C-5 fleet TNMCM rate, then the priorities of the maintainers in the field must change. Because the MXG leadership focuses on HSLDR performance, not TNMCM, the MXP simulation indicated that improving the TNMCM rate would require an increase in resources. Therefore, in order to improve the TNMCM rate without increased resources, the maintainers in the field must make TNMCM a priority. While it is impossible to model the current system perfectly, the results suggest that current maintenance policies do not ensure TNMCM improvement, but do improve HSLDR, which is the stated priority of the MXG leadership. Therefore, the study team recommended that MAJCOM A4 leadership and MXG leadership decide on a set of metrics that are better aligned toward the same goal.

This realignment of metrics must start at the highest levels of the MAF. The MAF should choose its value measure and create a set of metrics aligned with that measure. For example, if the MAF directs that operational effectiveness is its primary value, then metrics such as Tons of Cargo Moved or Million Ton Miles Moved over a given time period could be used as the value metric. Then it must be determined whether or not metrics at lower levels are aligned with the value metric. Once that is determined, all levels of maintenance leadership will have the same overarching priorities. Dr Hammer describes the entire view as pulling it together and lists three things to consider:

* Deciding what to measure is a science

* Deciding how to measure is an art

* Using measures is a process

Recommendations

* If improving C-5 TNMCM rates is the goal, all levels of maintenance leadership must make improving TNMCM rates a priority.

* AMC should determine its priorities between operational effectiveness and strategic readiness, and determine metrics aligned with these priorities.

* Conduct a study to determine whether or not increased AA is correlated with increased operational effectiveness in million ton miles or another pertinent metric. The answer to this question will help determine the applicability of AA towards measuring operational effectiveness.

* AMC/A4 should develop simpler, more concrete maintenance metrics that are easily countable and give an indication that operational effectiveness or strategic readiness is going to be affected.

As previously mentioned, the metrics analysis, modeling, and simulation described in this article were developed as part of the larger C-5 TNMCM Study II. This is the second in a series of articles related to that study. The entire study can be found at the Defense Technical Information Center (DTIC) Private Scientific and Technical Information Network (STINET) Web site at https://dtic-stinet.dtic.mil/.

Article Highlights

Realignment of metrics must start at the highest levels of the Mobility Air Forces (MAF). The MAF should choose its value measure and create a set of metrics aligned with that measure.

At the request of the Air Force Materiel Command Director of Logistics, AFLMA conducted an analysis in 2006-2007 of total not mission capable maintenance (TNMCM) performance with the C-5 Galaxy aircraft as the focus. The C-5 TNMCM Study II included five objectives. One of those objectives was to determine root causes of increasing TNMCM rates for the C-5 fleet. To achieve that particular objective, an extensive, repeatable methodology was developed and utilized to scope an original list of 184 TNMCM factors down to two root causes for in-depth analysis. Those two factors were aligning maintenance capacity with demand and the logistics departure reliability versus the TNMCM paradigm. This article details the analysis of the second of these two factors.

This second factor was also described as a disconnect or misalignment between the C-5 maintenance group leadership's primary metric, home station logistics departure reliability (HSLDR), and one of the major command and Air Force senior leadership's primary metrics, aircraft availability. The remainder of this article describes how real-world and simulated data supported the early hypothesis that HSLDR and TNMCM were not aligned metrics. Finally, a brief discussion explains why the study team believed a disconnect existed between the base-level and command-level metrics.

The research demonstrated that HSLDR is aligned with neither aircraft availability nor TNMCM, as there is only a weak correlation between them. Maintainers at the wing level work to support operational effectiveness; however, higher levels of Air Force supervision appear more focused on improving strategic readiness. This disconnect in priorities was determined to be a root cause of the C-5 TNMCM rate being below Air Force standards.

If the Air Force's primary goal is to improve the C-5 fleet TNMCM rate, then priorities of the maintainers in the field must change. As the maintenance group (MXG) leadership focuses on HSLDR performance, not TNMCM, the MXP simulation indicated that improving the TNMCM rate would require an increase in resources. Therefore, in order to improve the TNMCM rate without increased resources, the maintainers in the field must make TNMCM a priority. While it is impossible to model the current system perfectly, the results suggest that current maintenance policies do not ensure TNMCM improvement, but do improve HSLDR, which is the stated priority of the MXG leadership. Therefore, the study team recommended that MAJCOM leadership and MXG leadership decide on a set of metrics that are better aligned toward the same goal.

This is the second in a three-part series of articles that examine C-5 TNMCM rates.

Article Acronyms

AA--Aircraft Availability
AFB--Air Force Base
AFI--Air Force Instruction
AFLMA--Air Force Logistics Management Agency
AFMC--Air Force Materiel Command
AMC--Air Mobility Command
D&C--Delays and Cancellations
Est TNMCM--Estimated TNMCM
FIFO--First In First Out
FY--Fiscal Year
HS--Home Station
HSLDR--Home Station Logistics Departure Reliability
LDR--Logistics Departure Reliability
LIFO--Last In First Out
MAF--Mobility Air Forces
MAJCOM--Major Command
MC--Mission Capable
MCO--Maintenance Carryovers
MCR--Mission Capable Rate
MDR--Mechanical Dispatch Reliability
MOS--Maintenance Operations Squadron
Mx--Maintenance
MXG--Maintenance Group
MXP--Maintenance Priority
NMC--Not Mission Capable
NMCB--Not Mission Capable Both
NMCM--Not Mission Capable Maintenance
NMCS--Not Mission Capable Supply
REMIS--Reliability and Maintainability Information System
TDR--Technical Dispatch Reliability
TNMCM--Total Not Mission Capable Maintenance
UAOOS--Unscheduled Aircraft Out of Service

Notes

(1.) AFLMA, Metrics Handbook for Maintenance Leaders, December 2001, 3.

(2.) AFLMA, 6.

(3.) AFI 21-101, Aircraft and Equipment Maintenance Management, 29 June 2006, 23.

(4.) Ibid.

(5.) Michael Hammer, Harnessing the Power of Process, personal presentation, 22 September 2006.

(6.) Ibid.

(7.) Study team notes from meeting with AMC/A4 and AMC/A9, Scott AFB, 1 December 2006.

(8.) Study team notes from in-progress review VTC with AFMC/A4, AMC/A4M, AF/A4MY, and OAS/XRA, VTC, 31 January 2007.

(9.) Study team notes from MXG Daily Production Meeting, 12 December 2006.

(10.) Study team notes from meeting with MXG leadership, 18 January 2007.

(11.) Study team notes from meetings with MXG leadership, 17-19 January 2007.

(12.) Study team notes from meeting with MXG leadership, 8 January 2007.

(13.) Study team notes from meeting with MXG leadership, 11 January 2007.

(14.) AFI 21-101, 27-28.

(15.) Ibid.

(16.) AFI 21-101, 433.

(17.) AFI 21-101, 24-25.

(18.) Hammer.

(19.) AFI 21-101, 26.

(20.) AFI 21-101, 28.

(21.) Jason Howe, Using FleetFocus M5 for Practical Fleet Management, PowerPoint presentation, 2007, 14.

(22.) Dennis D. Wackerly, et al, Mathematical Statistics with Applications, 6th ed, Pacific Grove, CA: Duxbury/Thomson Learning, Inc, 2002, 250-251.

(23.) 60th MXG/CCC, "Recovery Maintenance Brief," PowerPoint presentation, Travis AFB, 20 July 2006, 20; and "Recovery Maintenance Bullet Background Paper," Word document, Travis AFB, 20 July 2006, 1.

(24.) Gregory Porter, ACSSS/GFWAC, "Recovery Centered Maintenance (RCM) Talking Paper," Robins AFB, 4 May 2007, 1.

(25.) Harold S. Balaban, et al, "A Simulation Approach to Estimating Aircraft Mission Capable Rates for the United States Air Force," Proceedings of the 2000 Winter Simulation Conference, 2000, 1035-1042.

(26.) Balaban, et al, 1037.

(27.) Balaban, et al, 1040.

(28.) Study team notes from meeting with MXG leadership, 18 January 2007.

(29.) Ibid.

(30.) Jim Hylton and Jeff Finken, Delta Airlines Reliability Program Office, telephone interview, 12 March 2007.

(31.) Ibid.

(32.) Ibid.

(33.) Hylton and Finken.

(34.) Ibid.

(35.) Ibid.

(36.) Ibid.

(37.) Delta Technical Operations, http://www.delta.com/business_programs_services/technical_operations/about_delta_techops/experience_awards/index.jsp, 8 May 2007.

Scotty A. Pendley, Major, USAF, AFLMA
Benjamin A. Thoele, FitWit Foundation
Timothy W. Albrecht, USAF, AFCENT
Jeremy A. Howe, Whirlpool Corporation
Anthony F. Antoline, Major, USAF, AFLMA
Roger D. Golden, DPA, AFLMA

Major Scotty A. Pendley is currently the Chief, Maintenance Studies Branch, Logistics Innovation Studies Division, Air Force Logistics Management Agency.

Benjamin A. Thoele is currently the Executive Director of the FitWit Foundation in Atlanta, Georgia. He previously worked as a Captain in the Air Force Logistics Management Agency as an operations researcher and Chief, Logistics Analysis.

Major Timothy W. Albrecht is currently Chief, Operational Assessments, AFCENT/A3XL, Long Range Plans. He previously worked in the Air Force Logistics Management Agency as Chief Analyst.

Jeremy A. Howe is currently employed with the Whirlpool Corporation, Corporate Headquarters, as the Manager, North American Region Supply Chain Metrics Team. He previously worked as a Captain in the Air Force Logistics Management Agency as Chief, Munitions Analysis.

Major Anthony F. Antoline is currently the Chief, Logistics Transformation Division, Air Force Logistics Management Agency.

Roger D. Golden is the Director, Air Force Logistics Management Agency.

Logistics...embraces not merely the traditional functions of supply and transportation in the field, but also war finance, ship construction, munitions manufacture, and other aspects of war economy.

--Lieutenant Colonel George C. Thorpe, USMC

Logistics comprises the means and arrangements which work out the plans of strategy and tactics. Strategy decides where to act, logistics brings the troops to that point.

--General Antoine Henri Jomini
Table 1. Accountability and Attention (18)

                          Leadership   Process Owner   Professionals

Enterprise Goals          High         Low             Medium
Strategic Performance     High *       High            Medium
Operating Objectives      Medium       High *          Medium
Process Performance       Medium       High *          High
Activity Performance      Low                          High *

* = primary accountability

Table 2. Accountability and Attention for C-5 Aircraft Maintenance

                                                  AMC/A4   MXG/CC   Technicians

Enterprise Goals--increase aircraft               High *   Medium   Low
availability, reduce costs
Strategic Performance--deliver cargo and          High *   High     Medium
passengers accurately and on-time
Operating Objectives--provide ready airplanes     Medium   High *   Medium
for the flying schedule
Process Performance--isochronal inspections,      Medium   High *   High
unscheduled repair process
Activity Performance--inspect and repair          Low      High     High *
airplanes

* = primary accountability

Table 3. Probability of Number of Aircraft Arrivals by Day of the Week

Arrivals (AC)       0       1       2       3       4       5       6       7       8

Sunday          0.231   0.461   0.200   0.093   0.015      --      --      --      --
Monday          0.092   0.139   0.292   0.215   0.108   0.092   0.047      --   0.015
Tuesday         0.015   0.047   0.200   0.261   0.185   0.154   0.107   0.031      --
Wednesday       0.015   0.077   0.093   0.307   0.308   0.138   0.062      --      --
Thursday           --   0.062   0.107   0.216   0.338   0.185   0.092      --      --
Friday          0.077   0.077   0.138   0.293   0.184   0.185   0.031   0.015      --
Saturday        0.169   0.416   0.246   0.061   0.062   0.046      --      --      --

Table 4. Crew Size Probability

Crew Size (CS)       1       2       3       4       5       6       7

P(CS)            0.323   0.519   0.123   0.022   0.003   0.001   0.009

Table 5. Summary of MXP Results for Study Metrics

Policy     HSLDR   Est TNMCM   Mx Backlog

Least Mx   0.821     0.322         45K
Most Mx    0.816     0.305         23K
FIFO       0.764     0.307         20K
LIFO       0.735     0.393         30K

Table 6. Delta Airlines Maintenance Metrics

               Metric                        Formula

Mechanical Dispatch Reliability   100 - (((Delays + Cancellations)/
(MDR)                             Revenue Departures) x 100)

Technical Dispatch Reliability    100 - ((Technical Issues/
(TDR)                             Revenue Departures) x 100)

                                  Where technical issues include
                                  dispatches for mechanical,
                                  process, policy, and paperwork
                                  issues associated with delays
                                  and cancellations.

Unscheduled Aircraft Out of       Number of Unscheduled Aircraft
Service (UAOOS) Count             Out of Service

In-Flight Shutdown Rate           (Total Inflight Shutdowns
(IFSDR)                           x 1,000)/ Total Engine Hours

Maintenance Carryovers            Number of Maintenance Carryovers
(MCO) Count

MEL Count                         Number of Restricted Items

Unscheduled Removal Rate          (Total Unscheduled Removals
(Used for the Engines and         x 1,000)/Total Hours
APUs)

Pilot Reports (PIREPS)            (Pilot Reports x 1,000)/
                                  Total Flying Hours

Flight Exception Rate             Number of Diversions, Air
                                  Turn Backs and Rejected
                                  Takeoffs for Mechanical Reasons
