
Automation in future air traffic management: effects of decision aid reliability on controller performance and mental workload.


Several proposals for future air traffic management (ATM) will change the roles of air traffic controllers and pilots. For example, under Free Flight (FF; Radio Technical Commission for Aeronautics [RTCA], 1995) and Distributed Air/Ground Traffic Management (DAG-TM; National Aeronautics and Space Administration [NASA], 1999), pilots would have greater freedom to choose their own heading, altitude, and speed in real time, as well as primary responsibility for maintaining separation from other aircraft in the immediate airspace. Controllers would not be involved in active control of aircraft but would be in a role of "management by exception" (Dekker & Woods, 1999; Wickens, Mavor, Parasuraman, & McGee, 1998). Management by exception refers to a management concept in which managers are notified by staff only if a certain variable (e.g., a budget) exceeds or falls below a certain value (Drucker, 1954). In the case of air traffic control (ATC), controllers would manage traffic flow, leaving the detection and resolution of conflicts to the pilots, and would intervene only if aircraft separation falls below a certain value (e.g., 5 nautical miles laterally and 1000 feet vertically).

The feasibility of the FF and DAG-TM concepts has been tested in studies with pilots in flight simulations (e.g., Dunbar et al., 1999; Lozito, McGann, Mackintosh, & Cashion, 1997; van Gent, Hoekstra, & Ruigrok, 1998). However, all future ATM concepts envisage a role for the controller to step in and intervene to ensure aircraft separation under certain conditions (failure of aircraft systems, bad weather, etc.). It is therefore important to examine how well controllers can detect and resolve conflicts when they are removed from the tactical control loop but then have to reenter it to ensure safety.

Several studies using moderate- to high-fidelity simulators and experienced en route controllers have shown that conflict detection performance and situation awareness were reduced and mental workload increased under such simulated FF conditions (Castano & Parasuraman, 1999; Corker, Fleming, & Lane, 1999; Endsley, Mogford, Allendoerfer, Snyder, & Stein, 1997; Endsley & Rogers, 1998; Galster, Duley, Masalonis, & Parasuraman, 2001; Metzger & Parasuraman, 2001; Willems & Truitt, 1999). (However, no adverse effects were reported by Hilburn & Parasuraman, 1997, who tested British military controllers, or by Remington, Johnston, Ruthruff, Gold, & Romera, 2000, who tested four retired controllers in a simple visual search task that required only conflict detection and no subsidiary tasks.) Air-ground integration studies involving both pilots and controllers have also been conducted. For example, DiMeo et al. (2002) found that controllers reported higher workload and showed more conservative conflict resolution behavior than did pilots, whereas pilots preferred FF scenarios and found them to be safer and to provide greater situation awareness than current operations.

These investigations provided the first empirical evidence of the effects of future ATM concepts on controller performance and pointed to the lack of aircraft intent information (Castano & Parasuraman, 1999) and the passive monitoring role (Metzger & Parasuraman, 2001) as factors contributing to reduced controller performance. For these ATM concepts to be implemented successfully, therefore, automation support must be provided for controllers (Corker et al., 1999; Parasuraman, Duley, & Smoker, 1998), and the DAG-TM program specifically incorporates controller automation tools (NASA, 1999). One system now in operational use is the User Request Evaluation Tool (URET), developed by the MITRE Corporation. URET assists controllers in detecting potential conflicts between aircraft or between aircraft and restricted airspace and suggests resolutions by continuously checking current flight plan trajectories for strategic conflicts up to 20 min into the future. It includes sophisticated algorithms that analyze and integrate data from different sources (e.g., radar data, flight plans) considering numerous additional parameters (climb rates for different aircraft types, wind, weather models, etc.). Hence URET goes well beyond the capability of simple alerts such as the short-term conflict alert (STCA). For evaluations of the effectiveness of URET in conflict detection, see Brudnicki and McFarland (1997) and Masalonis and Parasuraman (2003).

How will these and other automated systems influence controller performance, and will they enhance or reduce safety under FF? The advent of these technologies has stimulated much research on automation and human performance (Parasuraman & Byrne, 2003; Parasuraman & Mouloua, 1996; Sheridan, 2002). A major conclusion is that automation fundamentally changes the nature of the cognitive demands and responsibilities of human operators, often in ways that were unintended or unanticipated by the designers (Bainbridge, 1983; Billings, 1997; Parasuraman & Riley, 1997; Sarter & Amalberti, 2000; Wiener & Curry, 1980). Previous research and experience shows that automation leads to both benefits and costs. Among the human performance costs of certain automation designs are unbalanced mental workload, complacency, reduced situation awareness, cognitive skill loss, and poorly calibrated trust (Parasuraman & Riley). Such effects have been discussed in design guidelines for automation for future ATM (Ahlstrom, Longo, & Truitt, 2002), but these have been derived mostly from research on cockpit automation. There is as yet little empirical work on controller performance with automation, particularly in relation to ATM concepts in which controllers share their decisions with other members of the system and assume a rather passive role.

The current study focused on two aspects of controller-automation interaction: (a) whether automation can reduce controller workload and (b) how automation reliability affects controller performance and workload (Wickens, 2000).

Automation is often implemented in an attempt to reduce the operator's workload during peak periods of task load. However, this does not always occur. For example, cockpit automation has sometimes reduced mental workload in phases of flight when workload was already low (e.g., autopilot during the cruise phase) and increased mental workload in phases of flight when workload was already high (e.g., reprogramming the flight management system during final approach), a phenomenon referred to as clumsy automation (Wiener & Curry, 1980). In addition, automation often changes manual control tasks to monitoring tasks, leaving the human to supervise the automation (Sheridan, 2002), which can impose considerable workload (Warm, Dember, & Hancock, 1996).

The second area of concern is the ability of human operators to manage a system when automation fails or malfunctions in some way. This has been referred to as the out-of-the-loop unfamiliarity (OOTLUF) problem (Wickens, 1992). In addition, several studies have examined the effects of imperfect or unreliable automation on operator performance in target detection and complex decision-making tasks (Galster, Bolia, Roe, & Parasuraman, 2001; Rovira, McGarry, & Parasuraman, 2002; Wickens, Gempler, & Morphew, 2000). The results generally showed that operators have difficulties in detecting targets or making effective decisions if the automation incorrectly highlights a low-priority target or gives incorrect advice. The OOTLUF problem also results in operators requiring more time to intervene under automated control than under manual control because they have to first regain awareness of the state of the system. Operators have a better mental model or awareness of the system state when they are actively involved in creating the state of the system than when they are passively monitoring the actions of another agent or automation (Endsley, 1996; Endsley & Kiris, 1995), particularly if the automation interface does not support the operator in gathering the raw information on which the automation bases its decisions (Lorenz, Di Nocera, Rottger, & Parasuraman, 2002).

This problem seems particularly relevant to the problem of automation in future ATM concepts because the shared decision making can already take the controller out of the control loop and limit the controller's access to the information (e.g., pilot intent) relevant to conflict detection and resolution. If automation is then introduced to compensate for the effects of reduced situation awareness induced by the transfer of decision-making authority away from the controller to the pilot or dispatcher, the OOTLUF problem might be further aggravated with imperfect automation when the controller is expected to detect and resolve conflicts despite being initially "remote" from the control loop.

Although some decision aids may improve performance under current ATC conditions (e.g., Hilburn, 1996; Schick & Volckers, 1991), no empirical data are available on the effects of automation on controller performance and mental workload under FF and other future ATM systems. Automating the decision-making process in a dynamic environment such as ATC is not a trivial task, especially under conditions of shared decision making. A powerful decision aid will have to accurately predict pilot intentions, weather, and wind. Under traditional ATC conditions, pilots always had to follow the direction of the controller and, typically, stay on assigned airways. Therefore, pilot intent was relatively easy to predict, provided pilots indeed followed ATC instructions. With the introduction of the National Route Program in the late 1990s, these restrictions were loosened, and under FF conditions pilots have even greater freedom to choose their routes and altitudes and are not required to stay on airways. Hence pilot intent is very difficult to predict for both controllers and automation.

Although highly capable automated systems are being developed, these considerations suggest that the emergence of fully reliable automation that can cope with all situations is unlikely. As Bilimoria (2001) noted, changes in pilot intent might in some cases not be received in time or not be received at all by the computer system from which URET receives the information required by its conflict prediction algorithm. Therefore it is important to examine not only how well controllers perform with decision-aiding automation but also when the automation is less than perfect. The two experiments reported here on conflict detection automation examined these issues.


The first experiment examined the potential of a reliable conflict detection aid to compensate for the reduced performance and increased mental workload typically associated with increased traffic density and FF. A "mature" level of FF was chosen in anticipation of a long-term future operational concept, as in Concept Element 5 (Phillips, 2000) of the NASA DAG-TM, which addresses en route free maneuvering. In this concept, controllers monitor free-maneuvering aircraft and provide only advisories on other traffic, weather, airspace restrictions, and required time-of-arrival assignments. Even though pilots stayed on airways and filed flight plans in this simulation, they could deviate from them at any time without notifying the controller. Authority for maintaining separation was with the (simulated) pilots. Controllers monitored traffic and were expected to act as a backup (i.e., detect conflicts) in case such self-separations failed.

It was expected that conflict detection performance would be reduced and mental workload increased under high levels of traffic as compared with moderate levels. With the support of a decision aid, however, performance should be improved and workload reduced under both moderate and high traffic density. It was also expected that the performance of routine ATC tasks (e.g., communication) would be reduced under high traffic conditions, as compared with moderate traffic conditions, without the aid and that the detection aid would free resources and improve performance in routine ATC tasks as compared with unaided performance. Finally, because the use of automation (and eventually operator performance) is determined by, among other factors, operator trust in the automation and self-confidence to perform without the automation, controller ratings of trust and self-confidence (Experiment 2 only) were also obtained. If trust in the aid is low, operators are not likely to use it, and therefore their performance might not benefit from the aid as much as expected. If operators place too much trust in the automation, however, an automation failure could lead to a performance breakdown.


Participants. Twelve active full-performance level en route controllers from the Washington, D.C., Air Route Traffic Control Center (ARTCC) between the ages of 32 and 51 years (M = 37.17, SD = 4.84) served as paid volunteers. Their average overall experience (years on the job), including all military and civilian positions, ranged from 11.0 to 19.5 years (M = 13.46, SD = 3.24). All were male.

Apparatus. A medium-fidelity ATC simulator (Masalonis et al., 1997) was used to simulate a generic airspace. The simulation consisted of a radar or primary visual display (PVD), a data link display, and electronic flight strips presented on two different monitors. A trackball was used as the input device for both monitors. The PVD, as shown in Figure 1, consisted of aircraft targets, data blocks, jet routes, and way points. The adjacent monitor displayed the data link and an electronic flight progress strip for each flight (Figure 2). The data link display was used to simulate communications between pilots and controllers. Not only is the data link an easy way to simulate communications without the requirement for pseudo-pilots, it is also envisioned as the means of communication for routine transmissions in future ATC and cockpit systems.


Simulated ATC tasks and dependent variables. The controllers' most important (primary) task was the detection of potential conflicts. A potential conflict could result in an actual conflict, in which two aircraft lost separation (i.e., came within 5 nautical miles and 1000 feet of each other), or in a self-separation, in which one of two aircraft on a conflict course made an evasive maneuver (e.g., changed speed, heading, or altitude) in order to avoid the impending loss of separation. Controllers were required to indicate when they detected a potential conflict and name the call sign of the aircraft involved. Dependent variables were the percentage of detected conflicts and self-separations and the advance notification time for each. Advance notification times indicate how long before the loss of separation occurred or would have occurred (in the case of a self-separation) a controller reported a conflict.
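The loss-of-separation criterion described above (aircraft within 5 nautical miles laterally and 1000 feet vertically) can be sketched as a simple check. This is an illustrative reconstruction, not the simulator's actual code; names and units are assumptions.

```python
# Sketch of the loss-of-separation criterion: separation is lost only when
# BOTH the lateral and the vertical minimum are violated at the same time.
import math

LATERAL_MIN_NM = 5.0      # minimum lateral separation (nautical miles)
VERTICAL_MIN_FT = 1000.0  # minimum vertical separation (feet)

def loses_separation(x1, y1, alt1, x2, y2, alt2):
    """Return True if two aircraft violate both separation minima.

    x/y are lateral positions in nautical miles; alt is altitude in feet.
    """
    lateral = math.hypot(x2 - x1, y2 - y1)
    vertical = abs(alt2 - alt1)
    return lateral < LATERAL_MIN_NM and vertical < VERTICAL_MIN_FT
```

Note that two aircraft 3 nm apart but separated by 2000 feet vertically are still legally separated; only the joint violation counts as a conflict.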

Because ATC is a multitask situation, controllers were required to perform several other tasks. Controllers had to communicate with pilots in order to accept them into the sector and to hand them off to an adjacent sector. As soon as an aircraft came close to entering the sector boundaries, it sent a message via the data link display asking for acceptance. The controller had to respond to the message and accept the aircraft. As soon as an aircraft crossed a designated hand-off zone, controllers had to hand off the aircraft to the next sector by selecting the aircraft in a list of flights and clicking a hand-off button. Dependent variables included the percentage of successfully accepted aircraft, response times to requests for acceptance, percentage of aircraft handed off successfully, percentage of aircraft handed off early, and the response times to handoffs. Monitoring the progress of each aircraft moving through the sector and updating the flight strips accordingly served as a secondary task. Controllers were instructed to click on the way point of a flight strip as soon as an aircraft had crossed that way point on the radar display. Dependent measures included the percentage of missed and early way point updates as well as response times. Controllers were instructed that updating the way points was secondary in priority.

Other measures. Subjective ratings of mental workload were obtained with the NASA-TLX (NASA Ames Research Center, 1986). In automation conditions, subjective ratings of trust in the automation and self-confidence (Experiment 2 only) to perform without automation were obtained using a scale ranging from 0 (not at all) to 100 (extremely). The scales were based on measures used by Lee and Moray (1992, 1994) and adapted to fit the format of the NASA-TLX ratings. A heart rate monitor was used in order to obtain the 0.10-Hz band of heart rate variability (HRV) as a physiological measure of mental workload. Eye movements were also recorded. However, because of space limitations, those results will be reported in a separate paper.

Design. A two-factorial repeated measures design with two levels on each factor was chosen. Independent variables were (a) traffic density, with moderate and high levels of traffic in the ATC sector, and (b) the availability of a detection aid, with aid absent and aid present as treatment levels. All controllers performed in all resulting four conditions (within-subject design). The order of conditions was presented according to a complete double crossover design, with the availability of the detection aid as the first crossover and traffic density as the second crossover. The participants were randomly assigned to an order.
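One possible reading of the counterbalancing scheme is sketched below: the aid factor forms the first crossover (aid block first vs. aid-absent block first) and traffic density the second (moderate first vs. high first within each block), yielding four presentation orders to which participants are randomly assigned. This is an illustration of the design logic, not the authors' actual assignment procedure.

```python
# Illustrative sketch of a complete double crossover for a 2x2
# within-subject design (aid presence x traffic density).
import random

def presentation_orders():
    """Return the four condition orders implied by the double crossover."""
    orders = []
    for aid_order in (("aid", "no aid"), ("no aid", "aid")):
        for density_order in (("moderate", "high"), ("high", "moderate")):
            # Each participant sees all four (aid, density) combinations.
            orders.append([(aid, density)
                           for aid in aid_order
                           for density in density_order])
    return orders

def assign(participants, seed=0):
    """Randomly assign each participant one of the four orders."""
    rng = random.Random(seed)
    orders = presentation_orders()
    return {p: rng.choice(orders) for p in participants}
```

Each of the four orders contains every combination of aid and density exactly once, so order effects are balanced across the group.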

Each condition was represented by one scenario. Hence controllers were presented with four 30-min scenarios that were created to combine high (on average 16 aircraft after an initial 10-min ramp-up period) and moderate (on average 10 aircraft after an initial 10-min ramp-up period) traffic density with the absence or presence of a conflict detection aid in a sector with a 50-mile radius. Each scenario contained two conflicts and four self-separations. In scenarios with a conflict detection aid, a potential conflict was indicated by a red circle around the two aircraft involved (see Figure 1) 5 min before the aircraft lost separation. As soon as one aircraft made an evasive maneuver to avoid an impending conflict (e.g., descended 1000 feet), the circle disappeared. Self-separations always occurred after the aid appeared. In scenarios without the detection aid, the circle appeared only when aircraft lost separation so as to give the controllers feedback in a manner similar to that in their real work environments. In contrast to the currently used STCA, which is based on only current trajectory, the detection aid used in this study had access to information based on the flight plans and therefore was, in its functionality, similar to advanced aids such as URET.
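The display logic of the detection aid, as described above, can be sketched as a simple time-window rule: the red circle appears 5 min (300 s) before a predicted loss of separation and disappears once an evasive maneuver resolves the conflict. This is a minimal sketch of the described behavior; the function name and interface are assumptions.

```python
# Sketch of the aid's alert window: visible only while a predicted loss of
# separation lies within the 5-min lead time and the conflict is unresolved.
ALERT_LEAD_S = 300  # 5-min advance warning, as in Experiment 1

def aid_circle_visible(now_s, predicted_los_s, conflict_resolved):
    """True while the aid should draw the red circle around the pair.

    predicted_los_s is the predicted time of loss of separation, or None
    if no conflict is predicted; conflict_resolved becomes True after an
    evasive maneuver (a self-separation).
    """
    if conflict_resolved or predicted_los_s is None:
        return False
    return 0 <= predicted_los_s - now_s <= ALERT_LEAD_S
```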

Procedure. After signing the informed consent form, controllers were connected to the heart rate monitor, given a demonstration of the simulation, completed a practice trial, and were familiarized with the NASA-TLX and the trust scales. After this, each controller completed four 30-min scenarios.


For all dependent variables, 2 (presence or absence of aid) x 2 (high or moderate traffic load) repeated measures analyses of variance (ANOVAs) were calculated. An alpha level of .05 was used for all statistical tests.

Primary task performance: Detection of conflicts and self-separations. Averaged across all conditions, controllers detected 78.13% (SE = 4.44%) of all conflicts. They detected a higher percentage of conflicts when traffic density was moderate (M = 89.58%, SE = 4.23%) than when traffic was high (M = 66.67%, SE = 7.16%), F(1, 11) = 8.59, p < .05. With respect to self-separations, more were detected under moderate (M = 68.75%, SE = 6.60%) than under high traffic density (M = 37.50%, SE = 5.43%), F(1, 11) = 25.00, p < .001, and when the decision aid was present (M = 69.79%, SE = 6.73%) than when it was absent (M = 36.46%, SE = 4.99%), F(1, 11) = 30.61, p = .001.

Advance notification times were averaged across the two conflicts and four self-separations in each scenario. (Because of undetected potential conflicts, 8 out of 96 cells [8.33%] were empty, so they were replaced by the respective means of the conditions in which the cell was missing.) With the aid, conflicts were detected approximately 90 s earlier (M = 256.99 s, SE = 10.12 s) than without the aid (M = 164.06 s, SE = 20.53 s), F(1, 11) = 24.39, p < .001. Self-separations were detected earlier under moderate (M = 347.03 s, SE = 18.01 s) than under high traffic density (M = 251.20 s, SE = 11.39 s), F(1, 11) = 17.23, p < .01. Table 1 gives a summary of the detection rates and advance notification times for conflicts and self-separations.
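The two derived measures used above can be made concrete with a short sketch: advance notification time is the interval between the controller's report and the (actual or would-be) loss of separation, and empty cells are filled with the mean of the observed cells for that condition. Function names are illustrative.

```python
# Sketch of the derived measures: advance notification time and
# condition-mean imputation for empty cells.

def advance_notification(report_time_s, loss_of_separation_time_s):
    """Seconds of warning before separation was (or would have been) lost."""
    return loss_of_separation_time_s - report_time_s

def impute_condition_mean(cells):
    """Replace missing cells (None) with the mean of the observed cells
    for that condition, as was done for the 8 empty cells."""
    observed = [c for c in cells if c is not None]
    mean = sum(observed) / len(observed)
    return [mean if c is None else c for c in cells]
```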

Communication: Accepting and handing off aircraft. Controllers accepted more than 98% of the aircraft (M = 98.43%, SE = 0.42%) into their sector with an average response time of 41.91 s (SE = 2.48 s). Controllers handed off more aircraft successfully when the aid was present (M = 81.71%, SE = 5.19%) than when it was absent (M = 70.00%, SE = 4.91%), F(1, 11) = 8.22, p < .05, and handed off aircraft significantly later under high (M = 44.41 s, SE = 5.98 s) than under moderate traffic conditions (M = 27.51 s, SE = 5.46 s), F(1, 11) = 24.76, p < .001. Controllers handed off more aircraft prematurely without (M = 24.40%, SE = 5.08%) than with the aid (M = 13.80%, SE = 3.23%), F(1, 11) = 6.78, p < .05. Under high traffic density conditions, controllers handed off significantly more aircraft prematurely when the aid was absent (M = 25.95%, SE = 7.22%) than when it was present (M = 9.78%, SE = 5.55%); under moderate traffic density, the effect of the absence (M = 22.86%, SE = 7.44%) or presence of the aid (M = 17.82%, SE = 5.43%) was smaller. This interaction approached significance, F(1, 11) = 3.39, p = .09.

Secondary task performance. Performance in the secondary task was affected by traffic density but not by the presence or absence of the aid. Controllers missed updating more way points under high (M = 63.41%, SE = 6.23%) than under moderate traffic density (M = 41.55%, SE = 6.73%), F(1, 11) = 14.14, p < .01. Controllers updated more way points early under moderate (M = 15.00%, SE = 3.62%) than under high traffic density (M = 7.62%, SE = 2.39%), F(1, 11) = 3.93, p = .07. Two controllers were excluded from the analysis of response times because they did not update any way points in one or all conditions. On average, controllers updated a way point 86.36 s (SE = 6.39 s) after the aircraft passed the corresponding way point on the radar display; response times were not significantly affected by the experimental manipulations.

Physiological measure: HRV. The effect of traffic density on the 0.10-Hz band of the HRV was significant, F(1, 11) = 5.43, p < .05, with lower HRV power, indicating higher mental workload, under high (M = 4.65, SE = 0.19) than under moderate traffic density (M = 4.74, SE = 0.19).

Subjective ratings of mental workload. Controllers rated mental workload significantly higher under high (M = 66.29, SE = 3.55) than under moderate traffic (M = 52.57, SE = 3.47), F(1, 11) = 28.47, p < .001.

Trust. Controller ratings of trust in the conflict detection aid on a scale from 0 to 100 ranged from 30 to 90 with an average of 63.35 (SE = 4.7). Median and mode were both 70. This can be considered a moderately high level of trust, considering that the aid functioned 100% reliably.


As expected, controllers missed more potential conflicts and detected self-separations later under high than under moderate traffic density. Performance in routine communication tasks as well as all measures of mental workload showed similar unfavorable effects of high traffic density. This validates the findings of adverse effects of high traffic density on controller performance in earlier FF studies in which controllers were removed from the active control loop (Galster, Duley, et al., 2001; Metzger & Parasuraman, 2001) and did not have access to pilot intent information (Castano & Parasuraman, 1999). The increase in the time to detect potential conflicts is of particular concern, given that the highly dense and efficient airspace under FF will leave controllers with less time to resolve conflicts (Galster, Duley, et al.; Metzger & Parasuraman).

This could have serious implications for safety and corroborates the view that FF and related future ATM concepts (e.g., Phillips, 2000; RTCA, 1995) will not be feasible without the provision of automation tools to support controllers. Supporting this view was the finding that the conflict decision aid improved performance in the detection of potential conflicts. The decision aid also had beneficial effects on the communication task, indicating that it might free resources that controllers could allocate to the performance of other tasks, such as communication or granting user requests.

The reallocation of freed resources might explain why the decision aid did not reduce workload as expected, given that whereas performance was considerably enhanced with the aid, mental workload remained unchanged (see Parasuraman & Hancock, 2001). Alternatively, the high demand for monitoring pilot actions under FF, as well as for monitoring the automation, could have increased workload (Warm et al., 1996). A third possibility is that traffic density is a stronger workload driver than lack of automation. Each aircraft required the controller to perform many routine and coordination tasks, and a conflict detection aid does not reduce the demands imposed by these tasks. Interestingly, more aircraft were handed off prematurely without than with the aid, particularly under high traffic density, which could be a controller strategy to manage workload. Controllers might have felt more time pressure when they were performing manually under high traffic conditions and were trying to hand off aircraft whenever they could, even if they did so prematurely.

Given the 100% reliability of the aid, controller trust ratings of the automation were not very high. Although this finding was initially unexpected, it was consistent with the post-experiment interviews with the controllers, who did not view detection aids (in general and the one used in the simulation) very favorably. This attitude is based on their experience with the STCA and its frequent false alarms. Perhaps the difference between the aid we used and STCA was not made clear enough.


Experiment 1 established the benefits of a conflict detection aid on controller performance under FF. However, the automation in that study was always perfectly reliable, a situation not likely to be the case in real settings, given the inherent uncertainty of projecting future events and the difficulty of any automation aid having fully up-to-date and accurate intent information. Controller recovery from an automation failure is of great concern under FF conditions (Galster, Duley, et al., 2001) because more aircraft will be accommodated in the same amount of airspace, creating a denser airspace and leaving the controller with less time to ensure minimum separation requirements between aircraft than is currently the case. An added difficulty is that FF removes the controller from the active decision-making process by transferring authority to the pilots. Previous studies have shown that controllers require more time to detect conflicts in case airborne separation fails under passive FF than under active control (Metzger & Parasuraman, 2001). If the required time is greater than the time available to recover from a critical and rapidly developing situation, safety may be compromised.

Automation can compensate for this problem to some extent, but just like FF, it can move the controller further away from the decision-making process. As long as the automation performs the conflict detection function reliably, system safety is maintained. However, in case of an automation failure, the effect of delayed detection of potential conflicts could be further aggravated. We therefore predicted the same results as in Experiment 1 for when the automation was reliable. For unreliable automation, we hypothesized that controllers would detect a conflict earlier under manual conditions than under automated conditions when the automation failed to point it out.


Participants. Twenty active full-performance level controllers from the Washington, D.C., ARTCC (n = 14) and Washington area terminal radar control (TRACON, n = 6) facilities served as paid volunteers. Most of the controllers had participated in Experiment 1 or previous studies using the same simulation. Their ages ranged from 31 to 53 years (M = 37.65, SD = 5.05), and their overall ATC experience ranged from 8 to 22 years (M = 14.08, SD = 3.86). There was no significant difference in age, F(1, 18) < 1, p > .05, or years on the job, F(1, 18) < 1, p > .05, between TRACON and en route controllers. Three (15%) en route controllers were female.

Apparatus. The same ATC simulation, tasks, and dependent variables as in Experiment 1 were used. Controllers were also interviewed about their experience with conflict detection aids, specifically with the more "intelligent" aids that are being implemented in facilities. They were instructed that the four automation conditions would contain a conflict detection aid that was highly reliable, based on the flight plan of the aircraft (as opposed to merely current speed and heading), and that it had a 6-min look-ahead time. The 6-min look-ahead time was chosen because in Experiment 1 controllers detected self-separations under moderate traffic conditions on average before the aid detected them. Before performing in the automated conditions, controllers received a detailed description and demonstration of the aid that explained its basic underlying principle and even pointed out some of its limitations ("In some cases, changes in flight intent information might not be available to the conflict probe ...").
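One way a flight-plan-based look-ahead probe of this kind might work, reduced to straight-line trajectories, is sketched below: project both aircraft forward in time and flag a conflict if the separation minima would be violated within the look-ahead window. This is a heavy simplification of tools like URET, which also model climb rates, wind, and weather; all names and the stepping scheme are assumptions.

```python
# Simplified look-ahead conflict probe: step through the look-ahead window
# on linearly projected trajectories and report the first predicted
# violation of both separation minima.
LOOKAHEAD_S = 360          # 6-min look-ahead, as in Experiment 2
LATERAL_MIN_NM = 5.0
VERTICAL_MIN_FT = 1000.0

def predicts_conflict(p1, v1, p2, v2, step_s=10):
    """p = (x_nm, y_nm, alt_ft); v = per-second rates for each axis.

    Returns the first time (s) within the look-ahead window at which both
    minima would be violated, or None if no conflict is predicted.
    """
    for t in range(0, LOOKAHEAD_S + 1, step_s):
        dx = (p2[0] + v2[0] * t) - (p1[0] + v1[0] * t)
        dy = (p2[1] + v2[1] * t) - (p1[1] + v1[1] * t)
        dz = (p2[2] + v2[2] * t) - (p1[2] + v1[2] * t)
        if (dx * dx + dy * dy) ** 0.5 < LATERAL_MIN_NM and abs(dz) < VERTICAL_MIN_FT:
            return t
    return None
```

The limitation quoted in the instructions to the controllers maps directly onto this sketch: if a pilot's intent change never reaches the probe, the velocity vectors it projects are wrong and the conflict goes undetected.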

Design. A repeated measures design included automation condition with the levels (a) reliable automation, (b) automation failure with 2 min to recover, (c) automation failure with 4 min to recover, and (d) manual condition. All controllers performed in all four conditions. Half the participants performed the manual condition before the automated conditions (i.e., all other conditions), and the other half performed the manual condition after the automated conditions. In both groups, the reliable automation condition was always presented before the two unreliable automation conditions. This was chosen deliberately so that the controllers would get enough training and experience with a reliable automation aid and build trust before being exposed to failures. The order of the two automation failure conditions was such that half of the participants performed the 2-min failure condition before the 4-min failure condition and the other half performed the scenarios in reverse order. This resulted in a double crossover design, the first associated with the order of the manual and automated conditions and the second (nested within the first) associated with the order of the two automation failure conditions.

The four different automation conditions were represented in five 25-min scenarios that included on average about 16 aircraft in a 50-mile radius sector after a 10-min ramp-up period. In the reliable automation condition, consisting of two scenarios, the conflict detection aid reliably detected all five potential conflicts (two conflicts and three self-separations). In the two automation failure conditions, the conflict detection aid detected the same five potential conflicts reliably. However, the aid failed to detect a sixth event about 21 min into the scenario. The event was a situation in which one aircraft deviated from its flight plan (e.g., after a Traffic Alert and Collision Avoidance System alert) in order to avoid a conflict and climbed or descended into the path of another aircraft that also deviated from the altitude filed on its flight plan. The maneuvers left controllers with 4 min in one scenario and 2 min in the other to detect the conflict before the loss of separation occurred. Altitudes and altitude changes of the aircraft involved in this situation were slightly different in the two automation failure scenarios (e.g., an aircraft climbed in one and descended in the other scenario) so that controllers would not easily recognize that the same situation was presented twice.

A fifth scenario was assigned for the manual condition, in which no conflict detection aid was available. However, the controllers were presented with the same situation in which the automation failed in the automation condition and 4 min remained for detection. This allowed for a direct comparison of conflict detection performance between the manual and automated conditions. In order to create a set of five comparable scenarios, the sector and traffic patterns were rotated to simulate a different flow of traffic. Way points and flights were renamed so that the participants did not recognize that the scenarios were almost identical.

Procedure. After signing a consent form and providing biographical information, controllers were given instructions and a demonstration of the simulation, completed a practice trial, and were familiarized with the NASA-TLX and the trust ratings. Then the controllers completed the five scenarios.


Data from the two scenarios with reliable automation were averaged after initial analyses revealed no significant differences. The data of the 20 participants were analyzed with repeated measures ANOVAs with one four-level independent variable (reliable automation, failure 2 min, failure 4 min, and manual) and three planned orthogonal contrasts: (a) automated versus manual conditions, (b) reliable automation versus failure conditions, and (c) failure conditions with 2 versus 4 min to detect a conflict.
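The three planned contrasts can be expressed as weight vectors over the four-level factor. The following sketch uses standard textbook contrast codings (the weights are our reconstruction, not the authors' analysis scripts) and verifies that the three contrasts are mutually orthogonal:

```python
# Sketch of the three planned orthogonal contrasts over the four
# conditions (reliable, failure-2min, failure-4min, manual).
# Weight vectors are conventional codings, assumed for illustration.
import numpy as np

conditions = ["reliable", "failure_2min", "failure_4min", "manual"]
contrasts = {
    "automated_vs_manual": np.array([1, 1, 1, -3]),   # (a)
    "reliable_vs_failure": np.array([2, -1, -1, 0]),  # (b)
    "fail2_vs_fail4":      np.array([0, 1, -1, 0]),   # (c)
}
for w in contrasts.values():
    assert len(w) == len(conditions)

# Orthogonality: every pair of weight vectors has a zero dot product,
# so the three contrasts partition the 3 df of the omnibus effect.
names = list(contrasts)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        assert contrasts[names[i]] @ contrasts[names[j]] == 0

def contrast_scores(data, weights):
    """Per-participant contrast scores from an (n_subjects, 4) array
    of condition means; testing these against zero gives the F(1, 19)
    contrast tests reported below."""
    return data @ weights
```

Because the weights in each vector sum to zero, a participant with identical means in all four conditions gets a contrast score of exactly zero.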

Primary task performance: Detection of conflicts and self-separations with reliable automation. There was a significant effect of automation on the detection of conflicts, F(3, 57) = 8.14, p = .01, and of self-separations, F(3, 57) = 9.98, p = .001. More conflicts, F(1, 19) = 8.14, p = .01, and more self-separations, F(1, 19) = 13.11, p < .01, were detected under automated conditions than under the manual condition. The automation condition also had a significant effect on advance notification of conflicts, F(3, 57) = 5.28, p < .01, with conflicts being detected earlier in the automated conditions than in the manual condition, F(1, 19) = 5.31, p < .05. Conflicts were also detected earlier in the failure conditions (M = 558.58 s, SE = 6.75 s) than in the reliable conditions (M = 306.92 s, SE = 14.93 s), F(1, 19) = 7.39, p = .01, but this was confounded with an order effect. The automation condition had no significant effects on advance notification of self-separations, F(3, 57) < 1, p > .05. Table 2 displays the data.

Primary task performance: Detection of conflicts and self-separations with unreliable automation. Table 3 shows descriptive statistics for the detection rates in the two automated conditions and the manual condition. Under automation, only 35% and 40% of the controllers detected the conflict when they had 2 and 4 min available, respectively. In contrast, 55.56% of the controllers detected the conflict under the manual condition. There was no difference in conflict detection under the manual condition between controllers who performed the manual condition first and those who performed it last, F(1, 16) < 1, p > .05. The effect of the automation condition failed to reach significance, F(2, 38) = 1.78, p = .18. Because a difference was predicted, however, post hoc tests were performed. Orthogonal contrasts revealed no significant effect between the 2- and 4-min failure conditions, F(1, 19) < 1, p > .05. Hence the data were collapsed across the two conditions for subsequent analyses. The contrast comparing automated and manual conditions found a trend for better detection under manual than under automated conditions, F(1, 19) = 2.40, p = .14.

Because of the high number of missed potential conflicts, advance notification times were available for fewer than 50% of the cells, too low a number to calculate meaningful inferential statistics. Also, the maximum advance notification time was determined by the onset of the conflict (e.g., in the 2-min condition, maximum advance notification was 2 min), not by controller performance. Nevertheless, the automation failure condition with 4 min can be compared with the manual condition with 4 min between the onset of the conflict and the loss of separation. Table 4 displays the data. Visual inspection of the descriptive statistics revealed no marked differences between the manual and automated conditions, even though the advance notification time in the automation condition was about 17 s earlier than that under manual conditions. Also, the minimum advance notification time was greater in the automation (40.53 s) than in the manual condition (7.39 s). However, the median performance was better in the manual condition.

Communication: Accepting and handing off aircraft. Controllers accepted more than 98% of the aircraft (M = 98.15%, SE = 0.52%) into their sector with an average response time of 36.79 s (SE = 2.39 s). There was a significant effect of automation on response times to requests for acceptance, F(3, 57) = 3.40, p < .05. Response times were shorter in the failure condition (M = 39.36 s, SE = 5.61 s) than in the reliable condition (M = 52.04 s, SE = 3.42 s), F(1, 19) = 5.28, p < .03, reflecting the order effect. The contrast between automated and manual conditions failed to reach significance, F(1, 19) = 2.71, p = .12. No significant effects were found for the percentage of successful, F(3, 57) < 1, p > .05, or early handoffs, F(3, 57) = 1.95, p > .05. Aircraft were handed off faster in the failure (M = 26.42 s, SE = 2.49 s) than in the reliable (M = 31.58 s, SE = 3.54 s) condition, F(1, 19) = 4.77, p < .05, again reflecting the order effect.

Secondary task performance. A significant effect of automation on the percentage of missed updates was found, F(3, 57) = 4.28, p < .01. Controllers missed updating more way points under manual (M = 66.00%, SE = 2.87%) than under automated conditions (M = 56.48%, SE = 2.46%), F(1, 19) = 7.73, p = .01. One participant was excluded from the analysis of response times because he did not update any way points in three scenarios. For the remaining participants, a significant effect of the automation condition was found, F(3, 54) = 3.54, p < .05. It took controllers significantly longer to update way points under automated (M = 126.56 s, SE = 8.74 s) than under manual conditions (M = 97.15 s, SE = 12.20 s), F(1, 18) = 7.27, p < .05. There was no effect of automation on the percentage of early updates, F(3, 57) = 1.47, p > .05.

Subjective ratings of mental workload. The effect of automation on ratings of mental workload failed to reach significance, F(3, 57) = 1.86, p > .05. The contrast between automated (M = 60.03, SE = 1.71) and manual conditions (M = 65.46, SE = 5.76) was also nonsignificant, F(1, 19) = 2.59, p = .12.

Trust and self-confidence. Ratings of trust ranged from 0 to 100 with an average of 75.71 (SE = 2.14). Trust ratings differed significantly between the automation conditions, F(2, 38) = 1.63, p < .05. Controllers rated their trust higher under reliable (M = 79.55, SE = 2.66) than under failure conditions (M = 72.10, SE = 3.30), F(1, 19) = 5.96, p < .05. Ratings of self-confidence ranged from 25 to 100 with an average of 65.70 (SE = 2.42) and did not vary with automation conditions. Hence the controllers' trust in the automation was greater than their self-confidence to perform without it.


As in Experiment 1, there was a marked benefit of the conflict detection automation on controller performance when the automation functioned 100% reliably. More potential conflicts and self-separations were detected and conflicts were detected earlier with the automation than under manual conditions. The early detection results are particularly encouraging, considering that higher traffic densities will make timely detection of potential conflicts essential. These automation benefits did not carry over to the communication task of accepting and handing off aircraft. Such routine tasks imposed a considerable amount of workload but did not benefit from the availability of a decision aid aimed at conflict detection and resolution. Either the aid did not free enough resources or the controllers chose not to allocate them to improve communication performance. Overall mental workload was also not significantly reduced by the automation. Subjective ratings of workload did not change, and even though controllers updated more way points when they had conflict detection automation available, it took them longer to update them. This could represent either a speed-accuracy trade-off or a cost of the automation associated with putting controllers in a monitoring mode. Still, automation had mostly beneficial effects as long as it functioned 100% reliably.

Under imperfect automation, controllers were more likely to detect a conflict when performing manually than when assisted by automation that failed to point out a particular conflict. Even though this effect only approached conventional statistical significance, we believe it is practically significant, particularly for a safety-critical environment such as ATC. The reliability of the detection aid in this study (one failure per scenario) was rather low--in fact, much lower than would be acceptable for a real-world system. However, higher automation reliability is associated with a reduced likelihood of detecting a failure of the automation (May, Molloy, & Parasuraman, 1993). Therefore the effect in this study should be considered a conservative estimate.

Further, studying operator response to very rare system failures or other surprises requires that the surprising event be presented only once or twice to prevent the buildup of expectancies (Molloy & Parasuraman, 1996; Wickens, 1998). However, low numbers of events result in low power and therefore the possibility of statistically nonsignificant results. Thus obtaining empirical data on the response to rare, unexpected events is difficult, expensive, and time consuming, especially if it becomes necessary to keep the sample size low to meet cost or time constraints, if a specific expert population is tested (air traffic controllers, pilots, astronauts, etc.), and if expensive equipment is used (e.g., simulators, eye-tracking equipment). As a consequence, the analysis and interpretation of the results focused on practical relevance as well as statistical significance (Wickens, 1998; Wickens, Gordon, & Liu, 1998).

No difference in detection rates was found when controllers had 2 versus 4 min available between the onset of the failure event and the loss of separation. Timeliness in the detection of a potential conflict was also not markedly different between manual performance and the comparable automation failure condition. It is possible that a longer time frame (similar to the one used in the other conditions) would have shown an effect. Two or 4 min might simply not have been enough to show a variation. However, inaccuracies in a conflict detection aid such as URET are more likely to be short term. Longer term changes in flight plans, for example, would eventually be available to URET so that they could be taken into account for the conflict or no-conflict decision.

One limitation of this study should be noted. The order of presentation of reliable and unreliable automation was confounded with the reliability. Therefore it is possible that the (reduced) detection of an automation failure was attributable to effects other than the reliability. For example, perhaps controllers became tired after repeated exposure to the scenarios. Communication performance suggested quite the opposite, however. Controllers accepted and handed off aircraft faster in the unreliable than in the reliable condition, suggesting a practice effect. Based on this finding, controllers should be expected to be more, rather than less, efficient in their task performance, including conflict detection. In addition, if fatigue were a major factor, then manual performance should have been reduced in those controllers who performed the manual condition last, but this was not found. There was no difference in controller performance as a function of order.


In two experiments we examined the effects of automation, both reliable and imperfect, on controller performance under conditions of shared decision making. The results showed that advanced FF conditions can lead to detrimental effects when traffic density is high. In support of previous findings, such conditions reduced conflict detection and increased mental workload (Endsley et al., 1997; Galster, Duley, et al., 2001; Metzger & Parasuraman, 2001). The provision of conflict probe automation mitigated these performance costs. It should be noted, however, that although controller performance was significantly improved by the aid, a substantial number of potential conflicts were still missed. In the real world this would not be acceptable. The high number of missed events can probably be attributed to the fact that traffic patterns were created by the experimenter and did not represent recorded or live traffic, as in some higher fidelity ATC simulations. Therefore they lacked some of the structure that controllers are used to. However, that would also be the case under the National Route Program and, even more so, under FF conditions. Also, controllers were not nearly as familiar with the sector as they would be in the real world. Therefore the values obtained (e.g., detection rates) should be considered as conservative estimates and should be directly compared not with real-world numbers but, rather, with the different conditions created in the simulations.

The results also suggested that these automation benefits might not come without a cost. Controllers performed better in detecting conflicts without automation than when they had automation support that was less than 100% accurate. FF removes the controller from the decision-making process in a manner that leads to reduced performance (e.g., in the detection of conflicts). Automation is introduced to make up for these deficits. If decision-making automation is reliable, performance is improved. However, if the automation is imperfect (for whatever reason), there is a chance that the failure will not be detected. For the design of future ATC systems, this implies that automation might not be able to fully compensate for the sub-optimal role of the controller, who will be left to monitor the decisions of others under future ATM conditions. The practical implication of this finding for the design engineer is that controllers should be given an active role in the system to ensure that they can detect and respond to malfunctions in a timely manner.

An alternative solution is to support controllers in routine tasks. In these experiments, automation did not reduce workload in all cases, potentially because of the heavy workload associated with the routine tasks under high traffic density and with the monitoring requirement. Supporting controllers in routine tasks would help to keep operators in the loop of the most important decisions but relieve them of repetitious and less important tasks. Improved displays (e.g., display integration, color coding, ecological interfaces) that allow the controller to detect malfunctions in the system more easily could also be helpful (Molloy & Parasuraman, 1994). However, as with any form of automation, the consequences on human performance and other criteria (e.g., secondary evaluative criteria; Parasuraman, Sheridan, & Wickens, 2000) need to be thoroughly evaluated.
TABLE 1: Mean Detection Rates and Advance Notification Times for
Conflicts and Self-Separations in Experiment 1

                          Detection Rate (%)

                Conflicts              Self-Separations
Aid           Traffic Density          Traffic Density

            Moderate       High      Moderate       High

Absent        91.67        62.5        45.83       27.08
              (5.62)      (8.97)       (8.04)      (4.83)
Present       87.5        70.83        91.67       47.92
              (6.53)     (11.45)       (4.70)      (8.95)

                     Advance Notification Time (s)

                Conflicts              Self-Separations
Aid           Traffic Density          Traffic Density

            Moderate       High      Moderate       High

Absent       196.21      131.91       343.78      224.45
             (25.36)     (30.52)      (30.08)     (18.27)
Present      247.08      266.9        350.29      277.95
             (18.42)      (8.43)      (21.19)      (8.84)

Note. Standard errors are shown in parentheses.

TABLE 2: Mean Detection Rates and Advance Notification
Times for Conflicts and Self-Separations in Experiment 2

                     Detection Rate (%)

Aid           Conflicts        Self-Separations

Absent         85 (5.26)         76.67 (4.90)
Present       100 (0.00)         96.11 (1.39)

                Advance Notification Time (s)

Aid           Conflicts        Self-Separations

Absent      279.09 (23.15)      342.59 (7.76)
Present     327.89 (6.90)       349.48 (4.17)

Note. Standard errors are shown in parentheses.

TABLE 3: Descriptive Statistics for Detection Rates
as a Function of Automation Condition


                   Auto Failure

            2 min (%)     4 min (%)     Manual (%)

n             20            20             18
Mean          35.00         40.00          55.56
SE            10.94         11.24          12.05
Minimum        0.00          0.00           0.00
Maximum      100.00        100.00         100.00
Mode           0.00          0.00         100.00
Median         0.00          0.00         100.00

TABLE 4: Descriptive Statistics for Notification
Times as a Function of Automation Condition


                 Auto Failure

            2 min (s)     4 min (s)     Manual (s)

n              7             8             10
Mean          83.23        127.12         110.27
SE            12.19         24.26          23.53
Minimum       28.80         40.53           7.39
Maximum      118.01        217.52         216.25
Mode           --            --             --
Median        87.38        102.62         120.00


This research was supported by Grant No. NAG-2-1096 from NASA Ames Research Center, Moffett Field, California. Kevin Corker and Richard Mogford were the technical monitors. We would like to thank all controllers who participated in the experiments. We appreciate the comments of two anonymous reviewers.


Ahlstrom, V., Longo, K., & Truitt, T. (2002). Human factors design guide update: A revision to chapter 5--Automation guidelines (DOT/FAA/CT-02/11). Atlantic City International Airport, NJ: Federal Aviation Administration, William J. Hughes Technical Center.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bilimoria, K. D. (2001). Methodology for the performance evaluation of a conflict probe. Journal of Guidance, Control, and Dynamics, 24, 444-451.

Billings, C. E. (1997). Aviation automation: The search for a human-centered approach. Mahwah, NJ: Erlbaum.

Brudnicki, D. J., & McFarland, A. L. (1997). User Request Evaluation Tool (URET) conflict probe performance and benefits assessment (Tech. Report MP97W112). McLean, VA: MITRE.

Castano, D., & Parasuraman, R. (1999). Manipulation of pilot intent under free flight: A prelude to not-so-free flight. In Proceedings of the 10th International Symposium on Aviation Psychology (pp. 170-176). Columbus: Ohio State University.

Corker, K., Fleming, K., & Lane, J. (1999). Measuring controller reactions to free flight in a complex transition sector. Journal of Air Traffic Control, 4, 9-16.

Dekker, S. W. A., & Woods, D. D. (1999). To intervene or not to intervene: The dilemma of management by exception. Journal of Cognition, Technology and Work, 1, 86-96.

DiMeo, K., Sollenberger, R., Kopardekar, P., Lozito, S., Mackintosh, M.-A., Cardosi, K., et al. (2002). Air ground integration experiment (Tech. Report DOT/FAA/CT-TN02/06). Atlantic City International Airport, NJ: Federal Aviation Administration, William J. Hughes Technical Center.

Drucker, P. F. (1954). The practice of management. New York: Harper and Row.

Dunbar, M., Cashion, P., McGann, A., Mackintosh, M.-A., Dulchinos, V., Jara, D., et al. (1999). Air-ground integration issues in a self-separation environment. In Proceedings of the 10th International Symposium on Aviation Psychology (pp. 183-189). Columbus: Ohio State University.

Endsley, M. R. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Mahwah, NJ: Erlbaum.

Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37, 381-394.

Endsley, M. R., Mogford, R. H., Allendoerfer, K. R., Snyder, M. D., & Stein, E. S. (1997). Effect of free flight conditions on controller performance, workload, and situation awareness (DOT/FAA/CT-TN97/12). Atlantic City International Airport, NJ: Federal Aviation Administration, William J. Hughes Technical Center.

Endsley, M. R., & Rodgers, M. D. (1998). Distribution of attention, situation awareness and workload in a passive ATC task: Implications for operational errors and automation. Air Traffic Control Quarterly, 6(1), 21-44.

Galster, S. M., Bolia, R. S., Roe, M. M., & Parasuraman, R. (2001). Effects of automated cueing on decision implementation in a visual search task. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 321-325). Santa Monica, CA: Human Factors and Ergonomics Society.

Galster, S. M., Duley, J. A., Masalonis, A. J., & Parasuraman, R. (2001). Air traffic controller performance and workload under mature free flight: Conflict detection and resolution of aircraft self-separation. International Journal of Aviation Psychology, 11, 71-93.

Hilburn, B. G. (1996). The impact of advanced decision-aiding automation on mental workload and human-machine system performance. Unpublished dissertation, Catholic University of America, Washington, DC.

Hilburn, B. G., & Parasuraman, R. (1997). Free flight: Military controllers as an appropriate population for evaluation of advanced ATM concepts. In Proceedings of the 10th International CEAS Conference on Free Flight (pp. 23-27). Amsterdam: Confederation of European Aviation Societies.

Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.

Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.

Lorenz, B., Di Nocera, F., Rottger, S., & Parasuraman, R. (2002). Automated fault management in a simulated space flight micro-world. Aviation, Space, and Environmental Medicine, 73, 886-897.

Lozito, S., McGann, A., Mackintosh, M., & Cashion, P. (1997, June). Free flight and self-separation from the flight deck perspective. Presented at the Seminar on Air Traffic Management Research and Development, Saclay, France. Retrieved January 30, 2005, from

Masalonis, A. J., Le, M. A., Klinge, J. C., Galster, S. M., Duley, J. A., Hancock, P. A., et al. (1997). Air traffic control workstation mock-up for free flight experimentation: Lab development and capabilities [Abstract]. In Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting (p. 1379). Santa Monica, CA: Human Factors and Ergonomics Society.

Masalonis, A. J., & Parasuraman, R. (2003). Fuzzy signal detection theory: Analysis of human and machine performance in air traffic control, and analytical considerations. Ergonomics, 46, 1045-1074.

May, P. A., Molloy, R. J., & Parasuraman, R. (1993). Effects of automation reliability and failure rate on monitoring performance in a multi-task environment (Tech. Report). Washington, DC: Catholic University of America, Cognitive Science Laboratory.

Metzger, U., & Parasuraman, R. (2001). The role of the air traffic controller in future air traffic management: An empirical study of active control versus passive monitoring. Human Factors, 43, 519-528.

Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.) Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.

Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.

National Aeronautics and Space Administration Ames Research Center. (1986). NASA Task Load Index (TLX): Paper and pencil version. Moffett Field, CA: Author, Aerospace Human Factors Research Division.

National Aeronautics and Space Administration, Aviation System Capacity Program, Advanced Air Transport Technologies Project. (1999). Concept definition for distributed air/ground traffic management (DAG-TM), Version 1.0. Moffett Field, CA: Author.

Parasuraman, R., & Byrne, E. A. (2003). Automation and human performance in aviation. In P. Tsang & M. Vidulich (Eds.) Principles and practice of aviation psychology (pp. 311-356). Mahwah, NJ: Erlbaum.

Parasuraman, R., Duley, J. A., & Smoker, A. (1998). Automation tools for controllers in future air traffic control. Controller: Journal of Air Traffic Control, 37, 8-15.

Parasuraman, R., & Hancock, P. A. (2001). Adaptive control of mental workload. In P. A. Hancock & P. A. Desmond (Eds.), Stress, workload, and fatigue (pp. 305-320). Mahwah, NJ: Erlbaum.

Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Mahwah, NJ: Erlbaum.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30, 286-297.

Phillips, C. T. (2000). Detailed description for CE-5 en route free maneuvering (NAS2-98005 RTO-41). Mays Landing, NJ: Titan Systems Corp., System Research Corp. Division.

Radio Technical Commission for Aeronautics. (1995). Report of the RTCA Board of Director's Select Committee on Free Flight. Washington, DC: Author.

Remington, R. W., Johnston, J. C., Ruthruff, E., Gold, M., & Romera, M. (2000). Visual search in complex displays: Factors affecting conflict detection by air traffic controllers. Human Factors, 42, 349-366.

Rovira, E., McGarry, K., & Parasuraman, R. (2002). Effects of unreliable automation on decision making in command and control. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 428-432). Santa Monica, CA: Human Factors and Ergonomics Society.

Sarter, N. B., & Amalberti, R. (Eds.). (2000). Cognitive engineering in the aviation domain. Hillsdale, NJ: Erlbaum.

Schick, F. V., & Volckers, U. (1991). The COMPAS system in the ATC environment. Braunschweig, Germany: Mitteilung Deutsche Forschungsanstalt fur Luft- und Raumfahrt.

Sheridan, T. B. (2002). Humans and automation. New York: Wiley.

van Gent, R. N. H. W., Hoekstra, J. M., & Ruigrok, R. C. J. (1998). Free flight with airborne separation assurance: A man-in-the-loop simulation study (Tech. Report). Amsterdam: National Aerospace Laboratory.

Warm, J. S., Dember, W., & Hancock, P. A. (1996). Vigilance and workload in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 183-200). Mahwah, NJ: Erlbaum.

Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). Scranton, PA: HarperCollins.

Wickens, C. D. (1998). Commonsense statistics. Ergonomics in Design, 6(4), 18-22.

Wickens, C. D. (2000). Imperfect and unreliable automation and its implications for attention allocation, information access and situation awareness (Tech. Report ARL-00-10/NASA-00-2). Urbana-Champaign, IL: Institute of Aviation, Aviation Research Lab.

Wickens, C. D., Gempler, K., & Morphew, M. E. (2000). Workload and reliability of traffic displays in aircraft traffic avoidance. Transportation Human Factors, 2, 99-126.

Wickens, C. D., Gordon, S. E., & Liu, Y. (1998). An introduction to human factors engineering. New York: Longman.

Wickens, C. D., Mavor, A., Parasuraman, R., & McGee, J. (1998). The future of air traffic control: Human operators and automation. Washington, DC: National Academy.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Willems, B., & Truitt, T. R. (1999). Implications of reduced involvement in en route air traffic control (Tech. Report DOT/FAA/CT-TN99/2). Atlantic City International Airport, NJ: Federal Aviation Administration, William J. Hughes Technical Center.

Ulla Metzger is a human factors specialist at Deutsche Bahn AG, DB Systemtechnik, Munich, Germany. She received her Ph.D. in psychology in 2001 at Darmstadt University of Technology, Germany.

Raja Parasuraman is a professor of psychology at George Mason University, Fairfax, Virginia. He received his Ph.D. in psychology in 1976 at the University of Aston, Birmingham, U.K.

Date received: October 30, 2003 Date accepted: May 13, 2004

Address correspondence to Raja Parasuraman, Arch Lab, George Mason University, MS 3F5, 4400 Fairfax Dr., Fairfax, VA 22030-4444;
COPYRIGHT 2005 Human Factors and Ergonomics Society

Author: Metzger, Ulla; Parasuraman, Raja
Publication: Human Factors
Date: March 22, 2005