Team Play with a Powerful and Independent Agent: A Full-Mission Simulation Study.

One major problem with pilot-automation interaction on modern flight decks is a lack of mode awareness; that is, a lack of knowledge and understanding of the current and future status and behavior of the automation. A lack of mode awareness is not simply a pilot problem; rather, it is a symptom of a coordination breakdown between humans and machines. Recent changes in automation design can therefore be expected to have an impact on the nature of problems related to mode awareness. To examine how new automation properties might affect pilot-automation coordination, we performed a full-mission simulation study on one of the most advanced automated aircraft, the Airbus A-320. The results of this work indicate that mode errors and "automation surprises" still occur on these advanced aircraft. However, there appear to be more opportunities for delayed or missing interventions with undesirable system activities, possibly because of higher system autonomy and coupling.

INTRODUCTION

Achieving new levels of system performance often involves the development and introduction of new technology and automation capabilities. However, recent research has shown that expanding technological powers, although necessary, is rarely sufficient to improve performance. Rather, the success of modern technology depends to a large extent on its ability to function as a "team player" with human practitioners (e.g., Billings, 1996; Parasuraman & Mouloua, 1996; Wiener & Curry, 1980). New automated systems that have not been designed to be effective team players have been shown to contribute to miscommunications, misassessments, misentries, mode errors, workload bottlenecks, attentional bottlenecks, coordination surprises, incidents, and even accidents (e.g., Abbott et al., 1996; Norman, 1990; Sarter, Woods, & Billings, 1997).

Commercial aviation has served as one of the prime natural laboratories for advancing the understanding of human-automation cooperation because of the introduction of more powerful automated and computerized systems into flight decks over the last 20 years. The introduction of these systems has provided researchers the opportunity to collect data on human-automation cooperation by observing line operations (e.g., Wiener, 1989), interviewing pilots of these aircraft, surveying pilot opinions (e.g., Tenney, Rogers, & Pew, 1995; Wiener, 1989), and collecting reports of cases of miscommunication and confusion from line experience (e.g., Sarter & Woods, 1992, 1997). Researchers can examine incidents through reports to the Aviation Safety Reporting System (e.g., Eldredge, Dodd, & Mangold, 1991) and, unfortunately, reports and analyses of accidents in which human-automation interaction played a role (see summaries of selected accidents in Billings, 1996).

All of these efforts have pointed to a number of recurrent problems in human-automation interaction, such as clumsy automation (Wiener, 1989), loss of mode awareness (Sarter & Woods, 1995), and automation surprises (Sarter, Woods, & Billings, 1997). Mode awareness is defined as the ability of a supervisor to track and anticipate the status and behavior of automated systems. A lack of mode awareness can result in mode errors of commission when the operator executes an intention in a way appropriate for one automation configuration when the system is, in fact, in a different state. Other possible outcomes include delayed or missing interventions when system-initiated actions or events prove undesirable.

The strength of the existing data on human-automation interaction is that they come from user experiences in actual line operations. The weakness of these data is that they rely too much on opinion and retrospective analyses of past cases, are subject to reporting biases, or provide only coarse-grain insights into the sources of observed difficulties. As a result, there has been an interest in using full-scope training simulators to design flight situations that challenge pilot coordination and management of automated resources and closely track how they use automation to handle these situations. To date, most studies using this approach have examined relatively early-generation automated aircraft (e.g., Sarter & Woods, 1994; Wiener, 1985).

Clearly, automated systems have changed considerably since they were introduced into the cockpit. They have become more autonomous and possess higher authority. Nonetheless, the types of interfaces and displays available for pilots to interact with and monitor automation have remained basically the same. The first property of advanced systems, a high degree of autonomy, refers to their ability to initiate and carry out sequences of actions without requiring immediately preceding pilot input (Woods, 1996). This is possible because modern systems can change their behavior apparently on their own in response to input from a variety of sources, including operators, sensors of the environment, and designer instructions. This property presents the operator with the challenging task of keeping track of all possible sources of input and their interactions in order to anticipate how the automated systems are configured and how they will behave.

Advanced systems also involve a high level of authority; that is, the power to control and command actions (Woods, 1996). An example of high authority in the aviation domain is the "envelope protection" function on more recent automated aircraft. Envelope protection refers to the ability of the automation to detect and prevent or recover from predefined conditions (e.g., unsafe aircraft configurations). Once an undesired configuration is approached or detected, the automation has the power to override or limit pilot input.

Increasing levels of system authority and autonomy lead to a fundamentally new role for automation: It is no longer a purely reactive system that carries out pilot instructions narrowly, immediately, and directly. Rather, it increasingly behaves (or appears to behave) as if it were an agent capable of action based on its own accord (Sarter et al., 1997). As a consequence, it is much more important for human and machine to communicate about their intentions, future actions, and limitations and to coordinate their activities. However, for the most part, the newer, more powerful automated systems provide the same kind of feedback as do their predecessors.

In this study we examine human-automation interaction in the context of one of the most powerful automation suites currently in use in aviation: the Airbus A-320. This aircraft represents the trend toward automation with higher degrees of autonomy and authority. We conducted a full-scope simulation study to observe how line pilots who were experienced in glass cockpit operations handled flight situations that were designed to challenge pilot coordination and management of automated systems. The aircraft used in this study was chosen because it represents the trend toward highly independent and powerful automation -- this is not a study about one particular aircraft.

BACKGROUND: PILOT INTERACTION WITH MODERN COCKPIT AUTOMATION

This study builds directly on the results of a previous study that examined the potential for automation surprises in the context of high-autonomy, high-authority cockpit automation. In the earlier study (Sarter & Woods, 1997), we collected a corpus of A-320 pilot reports on cases in which the behavior of the automated systems surprised pilots during line operations. Most pilots who responded to the survey reported that they were surprised by the automation at least once during line operations, and they provided detailed descriptions of situations in which the automation did not act as they expected. From these reports, two main categories of automation surprises were extracted. These relate to situations in which (a) the system fails to take an expected action or (b) the automation carries out an action that was not explicitly commanded and not expected by the pilot.

In the spirit of converging operations, we used these categories to develop a simulation study of pilot-automation cooperation and coordination. We designed a scenario that contained multiple probes or challenges representative of the two categories. In the context of full-mission simulation, we presented the scenario to 18 experienced A-320 pilots. In this environment, investigators could determine whether circumstances similar to those described previously did, in fact, create problems, whether other problems emerged that had not been reported by pilots, and how pilots managed to prevent, or detect and recover from, these communication and coordination breakdowns.

FLIGHT MANAGEMENT GUIDANCE SYSTEM

Before we describe the tasks and events used in the simulated flight scenario for this study, we provide a brief and simplified overview of one of the core systems of cockpit automation, the flight management and guidance system (FMGS). This system supports pilots in a variety of tasks related to flight management, flight guidance, and flight augmentation. Its major controls are the flight control unit (FCU), the multifunction keyboards of the two control and display units (MCDUs; one for each pilot), the sidesticks, and the thrust levers. FMGS-related cockpit displays consist of the two MCDU multifunction displays, two primary flight displays (PFDs), which provide indications of the active and armed automation modes or flight mode annunciations (FMAs), and two horizontal situation indicators (HSIs), which are also called moving map displays.

The various FMGS interfaces and autoflight functions provide pilots with a high degree of flexibility in terms of selecting and combining levels and modes of automation in response to different task and situational requirements. The highest level of automatic control occurs in managed vertical and lateral navigation. In these modes of control, the pilot enters into the MCDU a sequence of targets that defines an intended flight path. He or she then activates the automation, which pursues the target sequence without the need for further pilot input. When the pilot must quickly intervene and change flight parameters, lower levels of automation are available. The pilot can enter individual target values for different parameters (i.e., airspeed, heading, altitude, vertical speed) on the FCU. He or she then activates the corresponding mode. The target will be captured and maintained automatically until the target or mode of control is again changed by the pilot.
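To make the two levels of control concrete, the following minimal Python sketch (illustrative only; the names and values are hypothetical, not actual FMGS code) contrasts managed guidance, which works through a pilot-programmed sequence of targets, with selected guidance, which holds a single FCU-entered value:

    from dataclasses import dataclass

    @dataclass
    class Target:
        altitude_ft: int
        speed_kt: int

    def active_target(mode, mcdu_route, fcu_target, leg):
        """Return the target the automation pursues on the current leg."""
        if mode == "managed":
            # Managed guidance works through the MCDU route on its own.
            return mcdu_route[min(leg, len(mcdu_route) - 1)]
        # Selected guidance holds the single value dialed into the FCU.
        return fcu_target

    route = [Target(10000, 290), Target(12000, 290)]  # programmed via the MCDU
    print(active_target("managed", route, None, leg=0))                # follows the route
    print(active_target("selected", route, Target(9000, 250), leg=0))  # holds the FCU value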

Flight path control is a highly dynamic task that involves transitions between different modes of control. These transitions can occur not only in response to pilot input but also because of changes in flight status and environment. For example, mode changes can occur automatically when a target value is reached (e.g., when leveling off at a target altitude) or when protection limits are exceeded (i.e., to prevent or correct pilot input that puts the aircraft into an unsafe configuration).

To find out which FMGS modes are currently active, the pilot must refer to the FMAs on the PFD. These provide alphanumeric indications of the active (or armed) pitch, roll, and thrust modes as well as of the status of the autopilot(s), flight director(s), and autothrust system.

METHOD

Participants

The participants in this study were 18 male Airbus A-320 pilots (see Table 1). Participation was voluntary, and pilots were paid a nominal compensation for their cooperation. Pilots were assigned to one of two groups based on their line experience on the Airbus A-320. Half (n = 9) had up to 1200 hr of line experience on the A-320; the other half (n = 9) had more than 1200 hr of line experience on the airplane (1200 hr translate into approximately 1.5 years of calendar time on the aircraft). All participating pilots were first officers. Captains were not included in this study because a number of scenario events involved situations for which the detection of an event was the responsibility of the first officer alone.

Procedure

Pilots were asked to fly a 90-min scenario on an A-320 full-flight simulator. The scenario context was a flight from Los Angeles to San Francisco that was rerouted back to Los Angeles because of a major power outage in Northern California. Simulated out-the-window view as well as motion and sound cues were generated and presented throughout the flight.

Upon arriving at the simulator, pilots were given a short briefing on the context and purpose of the study, and they were informed about the role of the instructor who flew as the captain during the simulation. It was emphasized that the captain would not deliberately create problems for the participating pilot (e.g., through misentries). The participants were asked to handle the automation as they would in real line operations. They were then provided with the necessary flight paperwork and were given as much time as necessary to familiarize themselves with the information contained in these documents. The participating pilot was the pilot-flying (PF) until shortly after level-off at initial altitude. At that point, the captain became PF so that the participating pilot had to take care of the automation-related input required to comply with amended clearances. Once the automation was set up for the first approach back into Los Angeles and after this approach had been briefed, the participating pilot was PF again for the remainder of the flight.

After completion of the flight, a debriefing occurred that served as an opportunity for the investigator to ask additional questions about pilot behavior (especially any behaviors that were unexpected or ambiguous to the investigator) and the indications that had attracted the pilot's attention to different events. The debriefing enabled the investigators to discuss with pilots the details of events that they had either missed completely or handled in a problematic manner.

Experimental Scenario

In the study we used embedded probes that were designed specifically to operationalize previously identified problems associated with flight deck automation. The use of embedded probes is an important means of control over the context for behavior, and it allows for making inferences about the reasons underlying observed pilot performance, including breakdowns. Given that embedded probes are an integral part of the normal flow of events, they do not require interruptions of a scenario that might break and redirect the course of pilot reasoning and activities.

The scenario probes were based to a large extent on the results of a previous study (Sarter & Woods, 1997), which helped identify categories of problems with mode awareness and pilot-automation coordination. In cooperation with an A-320 instructor, we identified instances of two problem categories - the absence of expected automation actions and the occurrence of unexpected system activities - and combined them to form a coherent scenario. In addition, standard proficiency tasks were included to guard against the possibility that observed problems merely reflected a generally low level of pilot skill at using the automation.

The scenario provided opportunities for three major types of errors: errors of commission; delayed, yet still successful interventions with undesirable automation behavior; and errors of omission. Errors of commission could occur whenever a pilot interacted with or instructed the automation. Errors of omission and delayed interventions required specifically designed scenario events to ensure that system-initiated activities took place which might be noticed late or missed completely by the pilot.

Standard Proficiency Tasks

A number of standard proficiency tasks that are part of every flight were included in the scenario. For example, pilots were asked to instruct the automation to go directly to a way point and to program and fly an Instrument Landing System (ILS) approach. Two additional tasks - instructing the automation to intercept a radial and flying in a holding pattern - were set up in a somewhat unusual manner. This was done to explore whether pilots would recognize the need for modifying the standard procedure for carrying out these tasks.

Probes of Mode Awareness and Pilot-Automation Coordination

All other scenario events and tasks were designed to test pilots' awareness of the status and behavior of the automation. The events involved a high potential for surprise arising from a mismatch between actual system behavior and pilots' likely expectations, which are known to guide their system monitoring (Sarter & Woods, 1997).

Violation of Expectations

Events in this category involved situations in which the automation failed to carry out an action that was expected by the pilots based on their understanding of the automation and their knowledge of input to the flight management system.

Need to (re)activate the approach. On some automated aircraft, the pilot must explicitly inform the automation about the beginning of the approach phase of flight. This step is necessary to ensure that the system slows the aircraft to allow the pilot to configure the airplane. This coordination step is called activating the approach and is achieved by pushing a key on one of the MCDU pages. If the pilot fails to activate the approach and then selects managed (i.e., automation-controlled) speed, the automation increases thrust and speed to return the aircraft to its descent target airspeed.
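The logic of this coordination step can be sketched as follows (a minimal Python illustration assuming hypothetical speed values; the behavior is taken from the description above): if the approach has not been activated, selecting managed speed commands the still-current descent target rather than a slower approach speed.

    def managed_speed_target(approach_activated, descent_speed_kt=280, approach_speed_kt=140):
        # Until the approach phase is activated, the descent target remains current.
        return approach_speed_kt if approach_activated else descent_speed_kt

    print(managed_speed_target(False))  # 280: thrust and speed increase, surprising the pilot
    print(managed_speed_target(True))   # 140: the aircraft slows as expected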

Selecting a lateral guidance mode after takeoff or go-around. Some aircraft are designed to default to flying runway track (instead of runway heading) following a takeoff or go-around. This can lead to a problem with takeoffs from parallel runways in a strong crosswind, in which the two different default settings - one aircraft's flight path affected by wind (heading) and the other's corrected for wind (track) - could create a situation of two converging aircraft. To avoid this problem, pilots must either select heading or activate managed navigation shortly after takeoff or go-around initiation. Discussions with pilots and results from surveys indicate that there is a relatively high risk of forgetting this step, particularly in the infrequent case of a go-around, which usually involves additional distracting factors that required the go-around in the first place. To recreate this type of situation in this scenario, we instructed pilots to fly a published missed approach following a go-around.
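The default can be expressed compactly (a simplified sketch with assumed mode names, not actual system logic): unless the pilot selects heading or activates managed navigation, the automation continues to fly runway track.

    def lateral_mode_after_goaround(pilot_selection=None):
        # pilot_selection: "heading", "managed_nav", or None (pilot took no action)
        return pilot_selection if pilot_selection else "runway_track"

    print(lateral_mode_after_goaround())               # runway_track: the forgotten default
    print(lateral_mode_after_goaround("managed_nav"))  # proceeds along the programmed route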

Unexpected Automation Behavior

In the following cases the automation took some action that was not expected by the pilot. The unexpected action was triggered by sensor input, designer instructions, or system coupling.

Change of runway/loss of altitude constraints. In this case pilots were given an air traffic control (ATC) clearance for an ILS approach to runway "24 L" together with a number of altitude constraints for various waypoints of the arrival that were not in the FMGC database. This required the pilot to program these constraints into the flight computer through the MCDU keyboard. Shortly after the entire clearance had been programmed, an amended clearance was issued to expect an ILS approach to runway "24 R." When the pilot changed the runway identifier in the MCDU to 24 R, the automation also erased all previously entered altitude constraints, even though they still applied.

The cockpit displays indicated this change (the loss of the altitude constraints) only through the disappearance of two normally present indications: the magenta altitude constraints next to the corresponding waypoints on the navigation display (ND) and the magenta asterisk next to the altitude constraint, which indicated that the altitude constraint would be met. Unless the pilot noticed these changes and reentered the constraints, the automation would fly the aircraft through these altitudes.
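The coupling at work here can be illustrated with a small sketch (the data structure and function names are hypothetical, not the real FMGC implementation): editing the runway rebuilds the arrival and, as a side effect, silently discards the pilot-entered constraints.

    flight_plan = {
        "runway": "24L",
        "constraints": {"waypoint_A": 11000, "waypoint_B": 9000},  # entered by the pilot
    }

    def change_runway(plan, new_runway):
        # Rebuilding the approach drops all previously entered altitude constraints.
        return {"runway": new_runway, "constraints": {}}

    flight_plan = change_runway(flight_plan, "24R")
    print(flight_plan)  # constraints are gone; the pilot must notice and re-enter them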

Expedite climb. During climb-out, the pilot set up the automation to fly the airplane to an altitude of 12 000 feet with an intermediate level-off at the waypoint Ventura at 10 000 feet. Once this setup was complete, ATC asked the pilot to expedite his climb through 9000 feet. The pilot could select from a number of modes to comply with this clearance. He could use the EXPEDITE mode, change the speed target on the FCU to expedite the climb, or use the vertical speed (V/S) mode on the FCU to get to 9000 feet. All of these modes ignored the altitude constraint for an intermediate level-off at the waypoint Ventura at 10 000 feet. Thus the pilot had to remember to return to the managed climb mode at 9000 feet to honor the altitude constraint at Ventura; otherwise, the automation would fly through the constraint.
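As a sketch (mode names taken from the scenario description, behavior simplified): of the available climb options, only the managed climb honors the programmed intermediate constraint.

    HONORS_CONSTRAINTS = {
        "managed_climb": True,   # honors MCDU-programmed constraints
        "expedite": False,
        "open_climb": False,
        "vertical_speed": False,
        "selected_speed": False,
    }

    def level_off_altitude(mode, constraint_ft, cleared_ft):
        # Altitude at which the automation actually levels off under each mode.
        return constraint_ft if HONORS_CONSTRAINTS[mode] else cleared_ft

    print(level_off_altitude("expedite", 10000, 12000))       # 12000: flies through Ventura
    print(level_off_altitude("managed_climb", 10000, 12000))  # 10000: honors the constraint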

Go-around below 100 feet above ground level (AGL) without flight director guidance. This situation was particularly challenging, as it involved a rare event in high-tempo, highly dynamic circumstances. It is the only situation in which applying Takeoff/Go-Around (TOGA) power does not automatically arm the autothrust system. Normally, TOGA arms the system, and a single pilot action is then required to activate it. When a go-around is initiated below 100 feet AGL, in contrast, pushing the thrust levers to the TOGA position disconnects the autothrust system. As a result, the pilot has to control speed and altitude manually.
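The exception can be stated compactly (a sketch under the assumptions of the preceding paragraph, not actual system logic):

    def autothrust_after_toga(go_around, altitude_agl_ft):
        # Below 100 feet AGL, TOGA disconnects the autothrust system instead of arming it.
        if go_around and altitude_agl_ft < 100:
            return "disconnected"  # the pilot must control thrust manually
        return "armed"             # one further pilot action activates the system

    print(autothrust_after_toga(go_around=True, altitude_agl_ft=80))   # disconnected
    print(autothrust_after_toga(go_around=True, altitude_agl_ft=300))  # armed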

In our scenario, during a visual approach backed up by ILS data at approximately 4-5 miles from the runway, pilots were told that they had to sidestep to the parallel runway because of departing traffic. The ILS for that runway was reported out of service, which eliminated guidance information and ensured that the pilot turned off the flight director. Pilots were told to initiate a go-around at about 80 ft AGL because landing traffic was still on the runway. The autothrust system did not arm, and pilots had to take steps to actively control the airplane's trajectory.

Data Collection

The first author was present during all experimental runs to collect data on pilot behavior in response to the probes built into the scenario and to debrief the participating pilots after the simulated flight was complete. The flight was not interrupted for any questions or discussions of observed pilot behavior. As there is no direct measure of mode awareness, pilots' level of awareness of the automation configuration was inferred from their responses to scenario events. This was possible because all probes were designed to be operationally significant, in the sense that they required some clearly defined, observable pilot intervention once the problem or situation was understood by the pilot. Additional supporting evidence about pilot awareness of automation state and behavior came from the participating pilots' discussions with the cooperating instructor throughout the flight.

In the debriefing pilots were asked about any behavior that was unclear or unexpected, and they were given a chance to ask questions about the scenario and the study in general. Both the observer and the instructor were present during the debriefing, which took between 30 min and 60 min depending on the pilot's performance during the flight and his interest in further discussions of issues related to the study.

RESULTS

Standard Proficiency Tasks

The standard proficiency tasks included in the scenario did not pose any problems for the pilots in the study. Difficulties were observed only with respect to the two tasks that were set up in an unusual manner and thus required pilots to modify the standard procedure accordingly.

Intercepting a radial outbound. In actual line operations, very-high-frequency omnirange navigation equipment (VOR) radials are usually intercepted inbound to the station. There is a well-defined sequence of steps to program the flight management system (FMS) for such an intercept. In our scenario, however, we increased the difficulty of the task by giving pilots the unusual but possible clearance to intercept and track a radial outbound to a fix, to see whether pilots understood that this clearance required a different procedure owing to the flight management system logic. Six of the pilots (33.3%) had difficulties setting up the automation for the intercept -- two pilots were not sure how to create the outbound fix on the radial, and in three cases pilots confused the sequence of the to- and from-waypoints for the intercept. In another two cases, pilots forgot to update their current from-waypoint. All pilots detected and recovered from their errors on their own based on the visualization of the programmed intercept on the map display.

Hold present position. Eight of the pilots (44.4%) had problems complying with the ATC clearance to hold at their present position. In this scenario, three pilots (16.7%) tried to build the "HOLD" off of their next waypoint rather than off of the current from-waypoint, and two pilots (11.1%) entered the radial given to them by ATC under "inbound course" in the HOLD menu. In two other cases pilots failed to enter the specific HOLD parameters (distance of legs, direction of turn) requested by ATC. In addition, one pilot failed to realize that he did not have to enter the inbound course because it happened to match the default value.

In the first five cases, in which the map display provided a visualization of whether or not the HOLD had been built as intended, pilots detected and recovered from their mistakes on their own. In the latter three cases, however, in which the pilots had to realize the problem based on a review of the data on the MCDU HOLD page, they needed help from the instructor to realize that a problem existed and to modify their input to resolve the problem.

Probes of Pilots' Mode Awareness

Because there were no important differences in performance among line pilots at different levels of experience with this specific airplane, the performance of pilots in response to the embedded probes is reported for the group of participants as a whole.

Violation of Expectations

Need to (re)activate the approach. In 14 cases (26% of all 54 approaches flown in the course of this study), the participating pilot forgot to remind the pilot-not-flying (PNF) to activate the approach. The problem was not noticed by the PF until he selected managed speed, which led to a surprising increase rather than the expected decrease in thrust and speed, as the automation had not been instructed to slow the aircraft to the approach speed.

Selecting a lateral guidance mode after takeoff or go-around. This scenario required a published go-around in order to hold at the intersection of two specific airways (Raffs intersection). Once the pilot initiated the go-around, the automation defaulted to the so-called go-around track mode for lateral guidance. In this mode, the airplane flew the runway track (instead of flying toward the intersection) unless the pilot intervened.

Four pilots forgot to activate managed navigation after initiating the go-around, which was necessary to make the airplane fly to the Raffs intersection and enter the holding pattern. In those four cases, the instructor had to intervene to ensure that the aircraft would get back on course. In addition, five pilots activated managed navigation too late to recover fully. They were only about two miles from the holding fix when they realized, based on indications on the map display, that the airplane was still in the runway-track mode. Another two pilots activated managed navigation (NAV) fairly late, once they realized that, in this particular context, the dashed lines displayed in the FCU heading window did not indicate that managed NAV was active.

Unexpected Automation Behavior

Change of runway/loss of altitude constraints. In this situation, pilots programmed the anticipated ILS approach to runway 24 L in the MCDU, including a number of ATC altitude constraints for their arrival. ATC then informed the pilot that the ILS for runway 24 L had just failed, and that they could expect an approach to runway 24 R. When the pilot changed the runway in the MCDU, his action led not only to the desired runway change but also to the loss of all altitude constraints that he entered for the originally planned approach.

Four pilots never noticed that this happened and consequently failed to make their altitude constraints. In contrast, 10 pilots realized immediately or even anticipated the loss of constraints. Another 4 pilots detected the problem when they were given the clearance to maintain 270 knots until reaching 11 000 feet. They selected 270 knots on the FCU and then looked at the map display, where they realized that the magenta indications for the programmed altitude constraints were no longer shown next to the corresponding waypoints.

Of the 14 pilots who noticed the problem, only 12 recovered in time to make the constraints by reentering them. In addition, 1 of the 4 pilots who did not realize the problem still made the constraints because he was flying a descent profile that happened to lead to compliance. This case represents a good example of why it is not advisable to rely on performance outcome data alone -- without additional discussion in the debriefing, this pilot's inadvertent compliance would not have been uncovered.

Expedite climb. During climb-out, pilots were cleared to climb and maintain 12 000 feet and to cross the waypoint Ventura at or below 10 000 feet. Upon reaching approximately 4000 feet, they were given the instruction to expedite their climb through 9000 feet for traffic separation. Pilots had several automation options to choose from in order to comply with this clearance.

Eleven pilots used the EXPEDITE button on the FCU to engage this mode. Another 5 pilots selected a lower airspeed on the FCU to make the airplane climb at a higher rate. The remaining 2 pilots used the vertical speed mode and dialed in a higher-than-normal rate of climb on the FCU.

In the debriefing, 7 pilots were asked why they did not use the EXPEDITE mode, which was designed for this type of situation. They responded that they did not like the fact that in this mode, the automation would drastically increase the pitch angle and slow the aircraft more than they felt was necessary. In addition, some pilots knew about and disliked the fact that the EXPEDITE mode would not honor any preprogrammed constraints.

Only 11 pilots (61.1%) complied with the altitude constraint at the waypoint Ventura. The other 7 pilots did remember to resume "normal climb" upon reaching 9000 feet, but they selected the "open climb" mode (instead of "managed vertical navigation"), which, similar to the EXPEDITE mode, does not honor constraints programmed into the MCDU.

Go-around below 100 feet AGL without flight director guidance. The case of a go-around below 100 feet AGL without flight director guidance was the only situation for which putting the thrust levers into the go-around position would not arm the autothrust system. In this situation, if the pilot failed to realize that the autothrust system was not armed, he might have experienced a number of problems. First, airspeed rapidly increased during the initial level-off because the thrust levers were in manual mode and set to full power. The pilot had to configure the airplane quickly to avoid overspeeding the flaps. After the flaps were up, the airplane continued to increase its speed. If the pilot realized that autothrust (ATHR) was not engaged, he might have chosen to activate ATHR by pushing the ATHR button on the FCU. This action, however, if not preceded by selecting a target speed on the FCU, resulted in yet another problem. The automation reverted to the last target speed it remembered -- namely, the approach speed. In other words, initially the airplane might have flown at a very high and rapidly increasing airspeed; once ATHR was selected, the power came back to idle to slow the airplane to a speed of about 140 kts. The solution was to first select a target speed on the FCU, which allowed the pilot to safely revert to using ATHR.
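The trap in this recovery sequence can be sketched as follows (assumed behavior reconstructed from the description above, with hypothetical names): pushing the ATHR button without first selecting an FCU target makes the automation revert to the last remembered target, the approach speed, and pull power back to idle.

    def athr_speed_target(fcu_speed_kt=None, last_remembered_kt=140):
        # Speed the autothrust chases once the pilot pushes the ATHR button.
        return fcu_speed_kt if fcu_speed_kt is not None else last_remembered_kt

    print(athr_speed_target())     # 140 kt: unexpected deceleration toward approach speed
    print(athr_speed_target(220))  # 220 kt: FCU target selected first, safe recovery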

The fact that the autothrust system was not armed was indicated by the absence of the blue ATHR indication (armed condition of the system) on the PFD. In fact, in the initial phase of this go-around, the entire flight mode annunciation area on the PFD was blank because the pilot had turned off the flight directors (FDs) and the autopilot (AP) to fly a visual approach to runway 24 L. Another possible cue of the problem was the absence of the flashing climb (CLB) indication at thrust acceleration altitude, which appeared on the PFD if autothrust was armed to remind the pilot to activate autothrust.

With experience on the airplane, pilots tend to anticipate this prompt and might be surprised if it does not appear. On the FCU, the ATHR button was not illuminated, which was yet another cue that ATHR was not engaged. Once the pilot activated ATHR without selecting an FCU speed first, the approach speed value appeared in the FCU speed window, and it was also shown in blue digits below the airspeed tape. In terms of aircraft behavior, the pilot might have immediately noticed the continuous and rapid increase in airspeed, which could proceed all the way up to the maximum operating airspeed. Conversely, after engaging autothrust without setting a target airspeed, he might have observed an unexpected change to idle power and a rapid decrease in airspeed.

Only one pilot in this study handled the go-around below 100 feet AGL without any problems. He elected to stay in fully manual control of the aircraft until level-off at the acceleration altitude and then reengaged individual subsystems of the automation one after the other, each time assuring himself first that the automated system responded as expected and desired.

All other pilots focused on trying to figure out why the automation did not behave as expected, and they tried to get guidance from the automation as soon as possible. For example, seven pilots (38.9%) first called for the flight directors to be turned on after initiating the go-around, even though the automation was not set up to provide any meaningful guidance. Another seven pilots (38.9%) activated autothrust before selecting a target speed, and thus the approach speed became the airspeed target.

The fact that most pilots hesitated to take manual control of the aircraft and instead tried to understand what the automation was doing resulted in the following problems. Six of the pilots (33.3%) exceeded 250 knots IAS (indicated airspeed) below an altitude of 10 000 feet. Another two pilots (11.1%) allowed the airspeed to increase until almost reaching the maximum allowable airspeed. Another two pilots (11.1%) oversped their flaps during the go-around. Finally, three pilots allowed the airspeed to increase all the way to the maximum operating speed before taking action.

During the debriefing all pilots explained that they had not expected the autothrust to disengage when applying full power for the go-around. They emphasized that they were busy watching airspeed trends and altitude instead of looking at the flight mode annunciations to find out about the status and behavior of the automation.

Table 2 provides an overview of pilots' performance in the context of the various scenario probes/events.

DISCUSSION

This study provides powerful converging evidence that the potential for breakdowns in mode awareness continues to exist even with recent, advanced automation designs. The prevalent problem in this study was significant delays before pilots detected and intervened with uncommanded system behavior. In some cases they completely missed the window of opportunity for recovery from that particular automated system activity -- an error of omission.

Omissions and delayed interventions can occur only in the context of automated systems that are sufficiently autonomous to initiate actions on their own. Observing this kind of breakdown in human-automation coordination requires careful setup to create conditions that trigger uncommanded system behaviors that might go undetected. This was the purpose of the probe events designed into the scenario used in this study.

Delayed interventions and mode errors of omission can be explained, in part, by the incompatibility between pilots' monitoring strategies and the increasing autonomy but low observability of today's automated systems. Pilots have reported that they monitor their automated flight deck systems with the goal of verifying their expectations for system behavior (Sarter & Woods, 1997). They form these expectations based on their knowledge of the input to the automated system in combination with their understanding of the functional structure of that system. On less advanced flight decks, where the automation functions more reactively, this expectation-driven approach to system monitoring might be more likely to succeed. Because these automated systems change status and behavior for the most part only in response to pilot input or commands, it is easier for pilots to predict changes in system status and behavior and to monitor for those events. If expected changes do not occur, the pilot is better able to notice and address problems.

With higher levels of autonomy and authority, systems can initiate actions independent of immediately preceding pilot input. Changes in system status and behavior can occur because of input from other sources, such as sensor readings, or because of system coupling. It is far more difficult for pilots to keep track of or form expectations about those events, especially when they involve longer time constants. Without those expectations, pilots are not likely to monitor the corresponding indications, and available feedback is often inadequate for capturing their attention in a timely manner (see Sarter, 2000; Woods, 1995). As a result, changes can go undetected for a long time, or they might be missed completely. The result is delayed interventions or errors of omission, respectively. These two outcomes are, fundamentally, symptoms of the same problem -- a pilot missing the onset of a system-initiated activity or event. Whether this problem is noticed in time to recover, and thus results in delayed intervention, or whether it leads to an error of omission depends primarily on the availability of effective feedback in support of data-driven monitoring.

This study also confirms a finding from previous studies (e.g., Sarter & Woods, 1994): Standard proficiency tasks do not create problems for glass cockpit pilots unless they involve some unusual aspect that requires deviations from the standard procedure for carrying out a task. For example, the intercept created problems for some pilots only because of the unusual clearance asking pilots to fly outbound from, rather than inbound to, a VOR station. This indicates that even experienced pilots carry out many automation-related tasks based on rules and "recipes" and are therefore likely to encounter problems whenever novel circumstances require a deviation from those standard procedures. In other words, once a situation forces them outside of well-practiced routines, pilots tend to have difficulties. This can be explained in part by current approaches to training, which do not emphasize sufficiently the need for system exploration. Actively exploring a system has been shown to help build a model of its functional structure (e.g., Feltovich, Spiro, & Coulson, 1991), which would allow pilots to derive the appropriate actions to manage the automation across a wide range of situations.

One important way to address problems with mode awareness is to make automated systems more observable (i.e., they should be enabled to support operators more effectively in monitoring, anticipating, and learning from their behavior; Woods & Sarter, 2000). They should play a more active role in human-machine communication by capturing pilots' attention at the time of uncommanded events and transitions (for possible ways to achieve this goal, see Sklar & Sarter, 1999; Nikolic & Sarter, in press), or they should at least support their operators in detecting changes after the fact (e.g., through the use of event capture and history displays). Also, new styles of automation management and pilot-automation coordination should be explored that might lead to improved system and responsibility awareness (e.g., Hutchins, 1997; Olson & Sarter, 1999).

However, addressing current problems with pilot-automation coordination is not enough. It is equally important to continue to monitor and predict the impact of further developments in automation design on overall system performance. The results of this research show how technology changes can create new opportunities for new kinds of errors. Based on the existing knowledge base on difficulties with human-automation interaction, we can predict that well-intentioned design will create new, ever more powerful machine agents. If these systems are developed without considering team play between humans and machine agents, unexpected, though not necessarily unpredictable, difficulties in human-machine coordination will continue.

ACKNOWLEDGMENTS

The work described in this article was supported under Cooperative Agreement NCC 2592 with NASA-Ames Research Center (Technical monitors Everett Palmer and Kevin Corker). The authors are very grateful for the cooperation of a major United States carrier in this effort and for the assistance of many pilots and instructors who collaborated on this research. We also acknowledge the valuable comments and suggestions made by Charles Billings and Philip Smith in the course of this research project.

Nadine B. Sarter is an assistant professor in the Department of Industrial, Welding, and Systems Engineering and the Department of Psychology at Ohio State University. She received a Ph.D. in industrial and systems engineering from Ohio State University in 1994.

David D. Woods received a Ph.D. in cognitive psychology from Purdue University in 1979. He is a professor of industrial and systems engineering at the Institute for Ergonomics at Ohio State University.

REFERENCES

Abbott, K., Slotte, S., Stimson, D., Bollin, E., Hecht, S., Imrich, T., Lalley, R., Lyddane, G., Thiel, G., Amalberti, R., Fabre, F., Newman, T., Pearson, R., Tigchelaar, H., Sarter, N., Helmreich, R., & Woods, D. (1996). The interface between flightcrews and modern flight deck systems (FAA Human Factors Team Report). Seattle: Federal Aviation Administration.

Billings, C. E. (1996). Aviation automation: The search for a human-centered approach. Mahwah, NJ: Erlbaum.

Eldredge, D., Dodd, R. S., & Mangold, S. J. (1991). A review and discussion of flight management system incidents reported to the Aviation Safety Reporting System (Battelle Report, prepared for the Department of Transportation). Columbus, OH: Volpe National Transportation Systems Center.

Feltovich, P. J., Spiro, R. J., & Coulson, R. L. (1991). Learning, teaching and testing for complex conceptual understanding (Tech. Report 6). Springfield, IL: Southern Illinois University School of Medicine.

Hutchins, E. (1997). The integrated mode management interface (NASA Contractor Report NCC 2-591). Moffett Field, CA: NASA-Ames Research Center.

Nikolic, M. I., & Sarter, N. B. (in press). Peripheral visual feedback: A powerful means of supporting attention allocation and human-automation coordination in highly dynamic data-rich environments. Human Factors.

Norman, D. A. (1990). The "problem" of automation: Inappropriate feedback and interaction, not "over-automation." Philosophical Transactions of the Royal Society of London, B 327, 585-593.

Olson, W. A., & Sarter, N. B. (1999). Informed consent in distributed cognitive systems: The role of conflict type, time pressure, and trust. In Proceedings of the 18th Digital Avionics Systems Conference (pp. 4.B.1-1-4.B.1-5). St. Louis, MO: Institute of Electrical and Electronics Engineers.

Parasuraman, R., & Mouloua, M. (1996). Automation technology and human performance: Theory and applications. Mahwah, NJ: Erlbaum.

Sarter, N. B. (2000). The need for multisensory feedback in support of effective attention allocation in highly dynamic event-driven domains: The case of cockpit automation. International Journal of Aviation Psychology, 10, 231-245.

Sarter, N. B., & Woods, D. D. (1992). Pilot interaction with cockpit automation: Operational experiences with the flight management system. International Journal of Aviation Psychology, 2, 303-321.

Sarter, N. B., & Woods, D. D. (1994). Pilot interaction with cockpit automation: II. An experimental study of pilots' model and awareness of the flight management and guidance system. International Journal of Aviation Psychology, 4, 1-28.

Sarter, N. B., & Woods, D. D. (1995). How in the world did we ever get into that mode? Human Factors, 37, 5-19.

Sarter, N. B., & Woods, D. D. (1997). Team play with a powerful and independent agent: A corpus of operational experiences and automation surprises on the Airbus A-320. Human Factors, 39, 553-569.

Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926-1943). New York: Wiley.

Sklar, A. E., & Sarter, N. B. (1999). "Good vibrations": The use of tactile feedback in support of mode awareness on advanced technology aircraft. Human Factors, 41, 543-552.

Tenney, Y. J., Rogers, W. H., & Pew, R. W. (1995). Pilot opinions on high level flight deck automation issues: Toward the development of a design philosophy (NASA Contractor Report 4669). Hampton, VA: NASA Langley Research Center.

Wiener, E. L. (1985). Human factors of cockpit automation: A field study of flight crew transition (NASA Contractor Report CR-177333). Moffett Field, CA: NASA-Ames Research Center.

Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (NASA Contractor Report 177528). Moffett Field, CA: NASA-Ames Research Center.

Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.

Woods, D. D. (1995). The alarm problem and directed attention in dynamic fault management. Ergonomics, 38, 2371-2393.

Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation technology and human performance: Theory and applications (pp. 3-17). Mahwah, NJ: Erlbaum.

Woods, D. D., & Sarter, N. B. (2000). Learning from automation surprises and going sour accidents. In N. Sarter & R. Amalberti (Eds.), Cognitive engineering in the aviation domain (pp. 327-353). Mahwah, NJ: Erlbaum.
TABLE 1: Background and Flight Experience of Participating Pilots

                                    ≤1200 hr of Line           >1200 hr of Line
                                    Experience on the A-320    Experience on the A-320
                                    (n = 9), Mean (SD)         (n = 9), Mean (SD)
Age                                 37.2 (2.1) years           38.9 (3.4) years
Overall flight time                 7111 (1673) hr             8933 (2475) hr
Hours on A-320                      714 (312) hr               2078 (295) hr
Previous aircraft
    DC-9                            1                          4
    B 727                           3                          4
    B 747-200                       1                          --
    DC-10                           2                          2
    MD-80                           1                          1
    B 757                           1                          --
Prior glass cockpit experience?
    Yes                             1 (B 757)                  1 (F-18)
    No                              7                          7
    MD-80                           1                          1
TABLE 2: Overview of Pilots' Performance in the Context of Different Scenario Events/Probes

                                      Required Action(s)  Delayed but Still      Intervention(s)  No Action(s)/
                                      Performed at        Successful Action(s)/  too Late to      Intervention(s)
Event/Probe                           Appropriate Time    Intervention(s)        Avoid Problem
Need to (re)activate the approach              4                  0                    0               14
Selecting a lateral guidance mode
  after takeoff (TO) or go-around              7                  2                    5                4
Change of runway/loss of
  altitude constraints                        10                  2                    2                4
Expedite climb                                11                  0                    0                7
Go-around below 100 feet AGL
  without flight director guidance             1                  0                   13                0