Target acquisition with UAVs: Vigilance displays and advanced cuing interfaces.
U.S. Air Force missions involving reconnaissance, airlift support, and weapons delivery are being carried out by unmanned aerial vehicles (UAVs). These vehicles offer potential advantages over conventional "manned" aircraft: they are capable of long-duration and extremely high-altitude flight, they can operate in areas contaminated by radiation or biotoxins, and they can withstand high-g forces that exceed human tolerance. They also eliminate the risk of pilot fatalities and have generally lower aircraft operating costs. Consequently, UAVs are expected to have an increasing role in current and future aviation (Mouloua, Gilson, & Hancock, 2003).
Although UAVs are designed to operate without an onboard pilot, these aircraft are not uncontrolled--humans are needed to perform supervisory functions and to manage systems manually when unforeseen contingencies such as malfunctions and enemy actions arise. Along that line, Mouloua et al. (2003) have indicated that operator vigilance and target search/recognition capabilities are key elements to be considered for human-centered design in UAV control. For example, consider the detection and engagement of hostile aircraft in the vicinity of a UAV, a situation proposed by Thompson (2000) as a future concept of operations. In contemporary UAVs, and most likely the next generation as well, operators are required to monitor displays throughout the duration of a mission for the intrusion of "hostiles" into the vehicle's airspace and, upon detection, assume manual control of the vehicle for targeting. Consequently, it is important to gain insights about factors that affect the ability of UAV operators to remain vigilant and to search for and recognize targets. One approach toward that goal is to determine if factors known to influence vigilance and target detection in other settings are also relevant in the control of UAVs. That strategy guided the present study.
Two factors that influence signal detection in vigilance and may have important implications for the subsequent control of UAVs are display type and event rate. Display type categorizes vigilance displays as sensory or cognitive in format. Changes in the physical attributes of stimuli are critical signals for detection in sensory displays; cognitive displays require more symbolic manipulations to define critical signals, as when observers must determine if an array of digits sums to a predetermined value (See, Howe, Warm, & Dember, 1995). An assessment of the relative advantages of sensory and cognitive displays in a UAV control task is consistent with See et al.'s (1995) suggestion that more consideration be given to the potential benefits of cognitive vigilance displays in operational research. Accordingly, target acquisition was examined in the present study in a simulated UAV control environment after observers were alerted to the presence of hostile aircraft through the use of sensory or cognitive vigilance displays.
Cognitive displays might be useful for UAV operators because signal detection is more stable over time with cognitive than with sensory displays and cognitive displays are less susceptible to the degrading effects of parallax distortions produced by observers' head movements (See et al., 1995). However, cognitive tasks have been associated with higher levels of mental demand, which may interfere with a subsequent target acquisition task (Deaton & Parasuraman, 1993; Matthews, Davies, Westerman, & Stammers, 2000). The higher level of mental demand associated with cognitive displays and the consequent greater potential for dual-task interference led us to hypothesize that a cognitive vigilance display would be less effective than a sensory display in aiding UAV operators to detect potential enemy threats.
Event rate refers to the rate of presentation of noncritical or neutral events in which critical signals for detection are embedded (Warm & Jerison, 1984). A display with a high event rate would have the advantage of permitting UAV controllers to scan more frequently for enemy targets than would one with a low event rate. However, high-event-rate displays have been found to result in poorer levels of signal detection as well as in higher subjective ratings of mental workload, as compared with displays with low event rates (Warm, Dember, & Hancock, 1996). Therefore, a low-event-rate display might be more beneficial for UAV control. That possibility was also examined in the present study.
The application of advanced interface technologies in the form of supplementary visual, auditory, and haptic cues has been useful in enhancing performance efficiency and reducing workload in several aviation-related tasks involving target detection/acquisition (Haas, Nelson, Repperger, Bolia, & Zacharias, 2001; Tannen, Nelson, Bolia, Warm, & Dember, 2004). However, direct comparisons of the three formats are sparse, and no data are available concerning the benefits of supplemental cuing on target detection in UAV control. An examination of the effects of visual, auditory, and haptic advanced cuing interfaces on target acquisition in a simulated UAV control environment was, therefore, also part of the present study. The phenomenon of "shift cost," in which performance efficiency is degraded on a target task when observers must alternate between tasks that demand different forms of information processing (Monsell, 2003; Styles, 1997), led us to hypothesize that visual cuing would be superior to auditory and haptic cuing in locating enemy icons in the UAV control system because only with the visual cue would the sensory architecture of information processing be similar.
Participants

Sixteen naive observers (8 men and 8 women) from Wright-Patterson Air Force Base (WPAFB) served in the study. They were paid $40 for their participation. Observers ranged in age from 18 to 26 with a mean age of 22 years; all had normal or corrected-to-normal vision and were right-handed.
Global UAV Control Scenario
The UAV control task required observers to monitor a video display terminal (VDT) for a predefined warning signal indicating the presence of an enemy aircraft (target) in the vicinity of their UAV. Upon detecting such a warning, observers were required to locate and acquire (i.e., lock onto) the target. The VDT presented a blank, light blue field when the UAV was not under threat (nonthreat screen). The warning or vigilance display consisted of white block numerals that appeared on a dark blue background at the bottom center of the screen. Immediately following the correct detection of an enemy aircraft warning signal, or 2 s after an undetected warning, the threat screen shown in Figure 1 appeared on the VDT. That screen depicted sky (blue) and land terrain (green/brown) and also contained the warning display to be monitored, an altitude indicator (white vertical numbers), an aiming crosshair, a visual locator box, and the enemy target symbolized by an unfilled oval set off from its background by a black border.
[FIGURE 1 OMITTED]
Immediately after observers acquired the target, the VDT reverted to the nonthreat screen and maintained that screen until the next warning signal appeared. Because a scanning imperative can be time-consuming and resource-demanding (Jonides, 1981), the UAV control system employed in this study was designed to minimize that imperative by the use of the dual-screen format. Scanning the aerial environment for threats was demanded only when controllers were alerted to the presence of threats in the surrounding airspace. The UAV control arrangement employed herein mimicked current operational systems in which controllers located at a station distant from the UAV use a gimballed camera to gain a limited ("looking through a soda straw") field of view of ground and sky and acquire targets by manipulating the remote camera line of sight via a manual joystick in combination with a stationary display (Draper, Ruff, Fontejon, & Napier, 2002).
Design

A 2 x 2 x 4 x 14 x 3 mixed design was employed. Independent variables were two warning display event rates or scan rates (slow = 15 events/min and fast = 40 events/min), two warning display formats (sensory and cognitive), four supplementary cuing interfaces (no cuing, visual, spatial-audio, and haptic), 14 target locations (described later), and three trial blocks (also described later). Scan rate was a between-groups factor, whereas display format, cuing interface, target location, and trial blocks were within-groups factors. Eight observers (4 women, 4 men) were assigned at random to each of the two scan rate conditions.
Specifics of the Warning Displays
The warning display formats used in this investigation were based on the sensory and cognitive vigilance tasks developed by Deaton and Parasuraman (1993). With each type of format, observers were required to monitor the repetitive presentation of pairs of digits. Stimuli were exposed for 300 ms at the rate of one digit pair every 4 s at the slow scan rate and one digit pair every 1.5 s at the fast scan rate. In both the sensory and cognitive formats, the digits were drawn from the set 0, 2, 3, 5, 6, and 9, which permitted 36 possible permutations of two-digit pairings, counting self-pairings (e.g., 33, 66). When displayed at full size, each digit was contained within a 21- x 33-mm rectangle (1.55° x 2.09° visual angle) and was separated from its pair mate at the closest point by 25 mm. In the sensory format, neutral events or safe scans consisted of paired digits that were identical in physical size. Critical signals for detection--threat warnings--were cases in which one of the digits was smaller than the other; specifically, it was scaled down so that it fit within a 15- x 29-mm rectangle (0.95° x 1.84°). In the cognitive task, safe scans were those in which the two digits were either both odd or both even. Threat warnings were odd-even or even-odd pairings.
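The two signal-definition rules can be summarized in a brief sketch (a hypothetical reconstruction in Python; the original stimulus software is not described, so all names here are our own):

```python
DIGITS = (0, 2, 3, 5, 6, 9)  # stimulus set used in both formats

def is_warning_sensory(size_left_mm: float, size_right_mm: float) -> bool:
    """Sensory format: a threat warning is a pair whose digits differ in size."""
    return size_left_mm != size_right_mm

def is_warning_cognitive(left: int, right: int) -> bool:
    """Cognitive format: a threat warning is an odd-even or even-odd pairing."""
    return (left % 2) != (right % 2)

# The parity rule splits the 36 ordered pairings exactly in half:
pairs = [(a, b) for a in DIGITS for b in DIGITS]
warnings = [p for p in pairs if is_warning_cognitive(*p)]
assert len(pairs) == 36 and len(warnings) == 18
```

Note that the parity rule fixes exactly 18 of the 36 pairings as warnings, which is why the cognitive format's warning signals were determined "by definition," whereas the sensory format's 18 warning pairings had to be selected at random (see below).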
Experimental sessions lasted for 32.4 min and were divided into three continuous 10.8-min blocks. Within the two display formats at each event rate, 18 warning signals were presented at random intervals to each observer during each trial block with the following constraints: A warning signal appeared at least once every 2 min; warning signals never occurred in two consecutive stimulus events; and four of the signals occurred while observers were actively acquiring a target. The last constraint reflected the likelihood in operational environments that new targets may appear while a UAV controller is in the process of acquiring a prior target. Accordingly, observers had to attend to the warning display for additional threats even while in the process of acquiring an enemy target.
In the sensory format, 18 digit pairings were selected from the 36 possible pairing permutations to serve as warning signals. The remaining permutations served as safe-scan pairings. Signal pairings were determined at random for each observer in any given run. These pairings were iterated at random for an observer throughout the three trial blocks of a given run with the restriction that any signal pair was presented only once per block. For each observer, randomization ensured that the spatial position of the smaller component in the total set of warning signals appeared equally often on the left and the right within any trial block in a given run. In the cognitive format, 18 of the 36 possible digit permutations were by definition warning signals. These 18 were iterated at random for each observer throughout the three experimental blocks on any run, with the restriction that any signal pair was presented only once per block.
In any run with either the sensory or the cognitive format, each safe-scan pairing was presented on 8 occasions during each trial block at the slow event rate (18 x 8 = 144 safe scans) and on 23 occasions during each trial block at the fast event rate (18 x 23 = 414 safe scans). In all format/event rate combinations, the order and time of appearance of safe-scan pairings were varied at random for each observer within each trial block. Observers responded to warning signals by pressing the space bar on a computer keyboard with their left hand.
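As a consistency check, the block totals above follow directly from the event rates and the 10.8-min block length (a sketch; the variable names are ours):

```python
BLOCK_S = 648     # each 10.8-min trial block, in seconds
WARNINGS = 18     # warning signals per block

# Slow rate: one event every 4 s, 8 repeats of each of the 18 safe pairings.
# Fast rate: one event every 1.5 s, 23 repeats of each safe pairing.
for period_s, safe_repeats in [(4.0, 8), (1.5, 23)]:
    total_events = BLOCK_S / period_s       # all events shown in one block
    safe_scans = 18 * safe_repeats          # 144 (slow) or 414 (fast)
    assert total_events == WARNINGS + safe_scans
```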
Specifics of the Target Acquisition Requirement
To acquire targets, observers positioned the center of the aiming crosshair located in the middle of the VDT screen (see Figure 1) over the stationary target (10 x 12 mm, 0.64° x 0.76° visual angle) using a force-feedback control stick (Immersion Corp.). The VDT afforded the observer an 11.25° field of view at either side of the crosshair. Targets could be initially positioned on a circular plane at one of 14 possible fixed locations outside the field of view along the remaining 337.5° arc extending from the left (-) to the right (+) of the observer. These positions were ±60°, ±75°, ±90°, ±105°, ±120°, ±135°, and ±150°. Targets were equidistant from the UAV, their elevation was always 10° above the horizon--a position that located them along the same horizontal vector as the aiming crosshair (see Figure 1)--and they remained stationary. Consequently, operators had to alter only the yaw of the aircraft in order to acquire targets; pitch and roll were fixed parameters.
Each acquisition scenario began with the UAV centered at 0°. Targets were brought into view by left or right movements of the control stick resting in the participant's right hand. Leftward movements of the control stick turned the UAV counterclockwise, and rightward movements turned it clockwise. The scene shifted at a rate of 0.49°/s for each degree of stick displacement, and maintaining the stick at its maximum leftward or rightward displacement (±35°) resulted in continuous movement of the scene at the rate of 17.15°/s. Targets were considered "acquired" when the aiming crosshair was continuously positioned on the target for 2 s. In the 14 cases per block wherein target acquisition was required, target locations were varied at random for each observer with the restriction that a given location could appear only once per block. In the four cases per block wherein observers were warned of the presence of enemy aircraft while engaged in the acquisition of a prior threat, they were required to execute the appropriate warning detection response but were not required to acquire an additional target, and the threat screen for the prior target was not altered.
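The stick-to-scene mapping described above amounts to a simple proportional rate-control law; a minimal sketch, with all names our own:

```python
GAIN = 0.49        # degrees/s of scene rotation per degree of stick displacement
MAX_STICK = 35.0   # maximum stick travel, in degrees

def yaw_rate(stick_deg: float) -> float:
    """Scene rotation rate (deg/s) for a given stick displacement,
    saturating at the stick's physical travel limit."""
    stick_deg = max(-MAX_STICK, min(MAX_STICK, stick_deg))
    return GAIN * stick_deg

# Full deflection yields the 17.15 deg/s maximum slew rate reported above,
# so even an optimal sweep to a ±150° target takes roughly 150 / 17.15 ≈ 8.7 s.
```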
Target acquisition was augmented with one of four different cuing interfaces to indicate target location: no cue, visual, spatial-audio, or haptic. When visual cuing was present, target location information was provided by way of a line presented in the visual locator box positioned at the bottom right corner of the monitor indicating direction of the target from the aiming crosshair (see Figure 1). The locator box depicted a bird's-eye view of the environment, with the observer's UAV in the center and enemy target position on the horizontal plane illustrated by the locator line. Spatial-audio cuing consisted of broadband noise pulses colocated with the visual target. This was achieved using an Air Force Research Laboratory Auditory Localization Cue Synthesizer (ALCS; McKinley, Ericson, & D'Angelo, 1994). The noise pulses were digitally filtered with nonindividualized head-related transfer functions and presented binaurally over headphones. Haptic cuing was achieved by means of the force feedback property of the control stick (Rosenberg, Lacey, & Stredney, 1995), which permitted guidance of the observer's control inputs toward the target.
Procedure

The study was conducted at the U.S. Air Force Research on Adaptive Interfaces for Virtual Environments Laboratory at WPAFB. Observers were tested individually in a windowless room with fluorescent lighting. They wore headphones and were seated at a desk approximately 0.9 m from their control VDT, which was positioned at eye level. Observers were tested on four separate days over a maximum 2-week period. They experienced two runs per day, one with each warning format using a common cuing interface. With each format, testing was preceded by a two-phase practice session in which observers were familiarized with the warning format and the cuing interface to be experienced during that portion of the run. During each practice session, all observers reached a predetermined criterion of 90% correct detections on the warning (vigilance) task, with false alarm rates not to exceed 5%. Overall, the mean percentages of correct detections during practice for the cognitive and sensory versions of the warning task were 94.8% and 96.7%, respectively, values that did not differ significantly from each other, t(15) = 1.13, p > .05, and false alarms were essentially absent during the practice sessions with both warning formats. Thus performance on both versions of the warning task was at comparable levels at the outset of the study. In addition, all observers successfully locked onto all targets to be acquired during each practice session.
The main portion of each testing run with a given warning format was initiated immediately following practice. Upon the completion of a run, observers rated the perceived mental workload associated with that run using a computerized version of the NASA-Task Load Index (NASA-TLX), in which overall workload scores can vary from 0 to 100 (Warm et al., 1996). Observers were permitted a 30-min rest between runs on any given day. The order of presentation of the two warning formats within the two testing runs on a given day--sensory then cognitive (S/C) or cognitive then sensory (C/S)--was alternated across days. Half of the observers within each scan-rate condition were tested using the sequence S/C, C/S, S/C, C/S on Days 1 through 4, respectively; the remaining observers were tested using the opposite sequence. The order in which the observers experienced the four interface conditions across testing days was balanced.
A Dell Dimension XPS T500 Pentium III computer with customized software was used to orchestrate all stimulus presentations and to record (a) the accuracy of responses to warning signals and (b) total target acquisition time. Detection responses occurring within 1.5 s of the appearance of a warning signal were recorded automatically as correct detections. All other responses were recorded as false alarms. False alarm responses on the threat warning (vigilance) task did not lead to the initiation of a threat screen on the VDT. Essentially, the system featured a computer override that protected observers from unnecessary scanning resulting from mistakenly identified warnings. This feature was consistent with the aim of designing a system that minimized the observers' scanning imperative. A consequence of the override feature was that it also provided observers with feedback about false alarms on the threat warning task.
Threat Warning Detection
The percentages of correctly detected threat warnings in all experimental conditions were converted to arcsines and tested for statistical significance by means of a 2 (event rate) x 2 (display format) x 4 (cuing interface) x 3 (trial block) mixed analysis of variance (ANOVA) with repeated measures on the last three factors. Threat warnings were detected significantly more often when they appeared in the sensory format, M = 95.8%, SE = 1.2%, than in the cognitive format, M = 80.0%, SE = 4.5%, F(1, 14) = 17.05, p < .01. In addition, the overall frequency of detected threat warnings declined significantly over trial blocks: Trial Block 1, M = 89.7%, SE = 1.9%; Trial Block 2, M = 86.0%, SE = 3.0%; Trial Block 3, M = 85.2%, SE = 3.3%; F(2, 23) = 3.62, p < .05. The main effect of event rate was not significant (fast: M = 87.5%, SE = 2.6%; slow: M = 86.4%, SE = 2.8%), and none of the remaining sources of variance in the analysis were significant, p > .05 in all cases. Box's correction was employed when appropriate with the repeated measures factors in this and all subsequent ANOVAs to correct for violations of the sphericity assumption (Maxwell & Delaney, 2004).
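The arcsine conversion mentioned above is a standard variance-stabilizing transform for proportion data before ANOVA; a sketch, assuming the common 2·arcsin(√p) variant (the paper does not state which variant was applied):

```python
import math

def arcsine_transform(pct: float) -> float:
    """Variance-stabilizing transform for a percentage score.
    Uses the common 2*arcsin(sqrt(p)) form; which variant the
    authors applied is an assumption on our part."""
    p = pct / 100.0
    return 2.0 * math.asin(math.sqrt(p))

# e.g., the sensory-format hit rate of 95.8% maps to about 2.73 radians,
# the cognitive-format rate of 80.0% to about 2.21 radians
```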
Mean percentages of false alarms were also converted to arcsines and tested for statistical significance by means of a 2 x 2 x 4 x 3 mixed ANOVA involving the same sources of variance as in the analysis of the correct detection data. The analysis indicated that observers committed a significantly greater number of false alarms when potential threat warnings were displayed in the cognitive format, M = 15.2%, SE = 3.2%, than in the sensory format, M = 3.4%, SE = 0.6%, F(1, 14) = 34.04, p < .001. None of the remaining sources of variance in the analysis reached statistical significance (p > .05 in all cases).
Target Acquisition Time
Observers in all experimental conditions successfully acquired all targets. Preliminary inspection of the target acquisition time data indicated that the scores were normally distributed in all experimental conditions and that transformations were not needed to meet the normality assumption of the ANOVA. A 2 (scan rate) x 2 (warning format) x 4 (cuing interface) x 14 (target location) x 3 (trial block) mixed ANOVA with repeated measures on the last four factors revealed that acquisition time was significantly faster when observers were alerted by warnings in the sensory format, M = 9.2 s, SE = 0.2, than by warnings in the cognitive format, M = 9.4 s, SE = 0.1, F(1, 14) = 7.46, p < .05. In addition, there were significant main effects for cuing interface, F(2, 23) = 65.42, p < .001, and target location, F(2, 25) = 35.16, p < .001, as well as a significant Interface x Location interaction, F(2, 27) = 5.77, p < .01. None of the remaining sources of variance in the analysis reached statistical significance (p > .05 in each case).
The Interface x Location interaction is presented in Figure 2. Mean acquisition times are plotted as a function of target starting location for the four cuing-interface conditions. Standard errors are omitted because they would clutter the figure.
[FIGURE 2 OMITTED]
It is evident in the figure that acquisition times were similar for the visual, auditory, and haptic cuing conditions and that, for all of these conditions, acquisition times were generally faster than those in the uncued control condition. It is also evident that acquisition times for all cuing conditions increased symmetrically with the angular displacement of the target's start location from the left or right of center and that the speed advantage for acquiring cued (as compared with uncued) targets diminished as the eccentricity of the target's start location increased in either direction. For uncued targets, the figure suggests a similar but less pronounced location effect to the left of center and an opposite trend to the right of center, where acquisition times were generally slower than for uncued targets appearing to the left. These latter impressions for uncued targets were not supported, however, by a supplementary analysis indicating that the simple main effect of target location was not statistically significant in the uncued condition, p > .05.
Perceived Mental Workload
Mean global workload scores revealed that participants did not find the UAV control task to be especially demanding; means in all combinations of scan rate, warning format, and cuing interface fell consistently below the midpoint of the NASA-TLX scale, M = 35.8, range 26.8 to 45.4. A 2 (scan rate) x 2 (warning format) x 4 (cuing interface) mixed ANOVA with repeated measures on the last two factors was employed to assess the statistical significance of the global workload data. Perceived overall workload was found to be significantly higher when threat warnings were displayed in the cognitive format, M = 38.2, SE = 5.4, than in the sensory format, M = 33.4, SE = 4.8, F(1, 13) = 6.82, p < .05. None of the remaining task elements had a significant impact on overall workload; all main effects and interactions involving these elements lacked statistical significance (p > .05 in all cases).
Warning Detection/Target Acquisition Performance
The results of the study support the hypothesis that a sensory vigilance display would be more appropriate than a cognitive display in aiding UAV operators to detect potential enemy threats. Although both display formats were subject to the vigilance decrement, observers detected more warning signals with fewer false alarms in the sensory format than in the cognitive format, and the sensory format also led to more rapid target acquisition times. Moreover, the perceived mental workload imposed on observers by the warning detection/target acquisition ensemble was significantly less when the ensemble included a sensory vigilance component, as compared with a cognitive one.
Research on practical issues often yields results that have unanticipated theoretical implications. This is the case with regard to the levels of workload found in the present study. For many years, vigilance tasks were considered to be understimulating, and the vigilance decrement was considered to result from observers being unable to maintain a sufficient level of alertness. However, recent evidence has suggested the opposite: that the information-processing demand of vigilance tasks is high and that the decrement reflects the depletion of information-processing resources over time (Johnson & Proctor, 2004). The view that vigilance tasks are highly demanding comes from experiments with the NASA-TLX showing that the workload of vigilance tasks falls in the middle to upper range of the scale (Warm et al., 1996). Given that observers in this study had to continuously detect critical signals on the vigilance display and also acquire targets whose presence was indicated by those signals, one would anticipate that the workload scores herein would be at least as high as, if not higher than, those typically found in vigilance experiments. To the contrary, however, they were much lower. The mean global workload scores in the cognitive (38.2) and sensory (33.4) vigilance tasks of this study fell well below the midpoint of the NASA-TLX and were considerably less than those reported by Deaton and Parasuraman (1993), whose cognitive (M = 65.4) and sensory (M = 62.6) tasks were duplicated in this study. What can account for this outcome?
Starting with Mackworth's (1950/1961) seminal experiments, laboratory and field research on vigilance has assumed that signal detection would represent the initial phase of a detection-action sequence in which observers become alerted to a problem and then react effectively to it. However, the laboratory studies that are responsible for uncovering the high workload of sustained attention tasks (Warm et al., 1996) have focused solely on the variables influencing signal detection; they have ignored the subsequent actions taken by operators upon detecting such signals. Consequently, the vigilance tasks confronting observers in the earlier studies were rather abstract in character: Signals were detected for the sake of detection, and there were no implications for subsequent action.
The design of the present study is unique in the laboratory vigilance domain: It is the initial experimental investigation into the detection-subsequent action scenario whereby performance on a laboratory vigilance task has immediate consequences for a task that follows. In contrast to earlier studies, in this investigation the vigilance task was incorporated into a simulated real-world setting in which detecting warning signals directly aided observers in tracking and destroying enemy threats. In this more dynamic context, the quality of vigilant behavior could have taken on greater perceived importance, leading to lower ratings of perceived workload. An account along this line is consistent with suggestions that increments in motivation increase the information-processing resources available to perform a task (Matthews & Davies, 1998) and that the manner in which an observer interprets the situation may have a serious impact on the stress and workload of sustained attention tasks (Hancock & Warm, 1989).
Advanced Cuing Interfaces
Consistent with previous research on aviation-related tasks (Haas et al., 2001; Tannen et al., 2004), the present study revealed that supplementary cuing could be effective in enhancing UAV operators' performance in target acquisition. The speed with which observers acquired threats was greater for each of the cuing interfaces than for the no-cuing control. However, the cuing interfaces were not differentially effective. This outcome is inconsistent with our hypothesis, based on the phenomenon of "shift cost," that visual cuing would be the most effective supplementary aid in a UAV-control environment dominated by visual input. However, it is congruent with Bronkhorst, Veltman, and van Breda's (1996) finding that search time for the position of an enemy aircraft in a simulated flight environment was improved equally well with either a spatial-audio cue or a visual cue over a no-cue control condition. Clearly, information about a target's spatial position, rather than harmony in the sensory modality of cue and target, is the more potent determinant of the ability of supplementary cues to enhance target detection in UAV control and other aviation-related tasks. This sort of outcome is in line with a substantial literature indicating that perceptual systems are highly responsive to invariant stimulus relations or "amodal" properties abstracted from different forms of sensory input (Calvert, Spence, & Stein, 2004; Dember & Warm, 1979; Walk & Pick, 1981).
The finding that the precise sensory form of cuing may be relatively unimportant for improving target acquisition speed in the UAV-control task has potentially valuable practical implications. Like pilots of today's advanced aircraft, operators of future UAV systems may experience "clutter" in which one sensory modality is overloaded at a given time (see Hettinger, Cress, Brickman, & Haas, 1996). Our results suggest that future UAV control systems might permit operators to select auditory, visual, or haptic targeting cues without incurring a performance cost by the choice of a particular channel. This flexibility is crucial because the speed at which one responds to a threat is a vital aspect of the combat environment (Shaw, 1988).
Although the results with the cuing interfaces seem straightforward, a complicating issue stemming from the significant Interface x Location interaction warrants consideration. That issue is the apparent loss of the speeded detection benefits associated with cuing as the eccentricity of the target's start location increased. At first glance, it might seem that the cuing interfaces were least effective when they were needed most. However, this finding is probably an artifact of the structure of the acquisition task employed in the study. Given that the rate of movement of the UAV in either a leftward or rightward direction was constrained (17.15°/s), cuing observers would generally offer an advantage over a no-cue condition by providing initial target direction information. However, because the targets were located on a circular plane surrounding the observer, the two extreme target locations were spatially adjacent. Assuming that observers made an initial movement of the control stick in a given direction and continued in that direction when no cue was present, acquisition times for these two extreme locations would be affected least by an initial wrong turn in the direction opposite the target. Thus acquisition time for those positions without a cue should be least different from cued acquisition times.
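The geometric point can be made concrete: on the circular plane, angular separation wraps at 360°, so the two most eccentric start locations are actually near neighbors (an illustrative sketch, not part of the original analysis):

```python
def angular_separation(a_deg: float, b_deg: float) -> float:
    """Shortest angular distance between two bearings on a circle."""
    diff = abs(a_deg - b_deg) % 360.0
    return min(diff, 360.0 - diff)

# The ±150° start locations are only 60° apart through the rear of the
# circle, whereas the ±60° locations are 120° apart through the front.
# Hence an initial wrong turn from 0° costs the least extra travel for
# the most eccentric targets, flattening the uncued curve at the extremes.
```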
The Air Force Office of Scientific Research New World Vistas Program funded this research. The authors are grateful for the support of the Air Force Research Laboratory Human Effectiveness Student Practicum Program in association with the University of Cincinnati Psychology Department. We also acknowledge the technical contributions of Jim Berlin and Merry Roe of Sytronics, Inc., and Nat Ungar of the University of Cincinnati.
Bronkhorst, A. W., Veltman, J. A., & van Breda, L. (1996). Application of a three-dimensional auditory display in a flight task. Human Factors, 38, 23-33.
Calvert, G., Spence, C., & Stein, B. E. (Eds.) (2004). The handbook of multisensory processes. Cambridge, MA: MIT Press.
Deaton, J. E., & Parasuraman, R. (1993). Sensory and cognitive vigilance: Effects of age on performance and mental workload. Human Performance, 6, 71-97.
Dember, W. N., & Warm, J. S. (1979). Psychology of perception (2nd ed.). New York: Holt, Rinehart, & Winston.
Draper, M. H., Ruff, H. A., Fontejon, J. V., & Napier, S. (2002). The effects of head-coupled control and a head-mounted display (HMD) on large-area search tasks. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 2139-2143). Santa Monica, CA: Human Factors and Ergonomics Society.
Haas, M. W., Nelson, W. T., Repperger, D., Bolia, R. S., & Zacharias, G. (2001). Applying adaptive control and display characteristics to future Air Force crew stations. International Journal of Aviation Psychology, 11, 223-235.
Hancock, P. A., & Warm, J. S. (1989). A dynamic model of stress and sustained attention. Human Factors, 31, 519-537.
Hettinger, L. J., Cress, J. D., Brickman, B. J., & Haas, M. W. (1996). Adaptive interfaces for advanced airborne crew stations. In Proceedings of the Third Annual Symposium on Human Interaction With Complex Systems (pp. 188-192). Los Alamitos, CA: IEEE Computer Society Press.
Johnson, A., & Proctor, R. W. (2004). Attention: Theory and practice. Thousand Oaks, CA: Sage.
Jonides, J. (1981). Voluntary versus automatic control over the mind's eye's movement. In J. B. Long & A. D. Baddeley (Eds.), Attention and performance IX (pp. 187-205). Hillsdale, NJ: Erlbaum.
Mackworth, N. H. (1961). Researches on the measurement of human performance (Medical Research Council Special Report Series 268, London: HM Stationery Office). In H. W. Sinaiko (Ed.), Selected papers on human factors in the design and use of control systems (pp. 174-331). New York: Dover. (Original work published 1950)
Matthews, G., & Davies, D. R. (1998). Arousal and vigilance: The role of task demands. In R. R. Hoffman, M. F. Sherrick, & J. S. Warm (Eds.), Viewing psychology as a whole: The integrative science of William N. Dember (pp. 113-144). Washington, DC: American Psychological Association.
Matthews, G., Davies, D. R., Westerman, S. J., & Stammers, R. B. (2000). Human performance: Cognition, stress and individual differences. Philadelphia, PA: Taylor & Francis.
Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective (2nd ed.). Mahwah, NJ: Erlbaum.
McKinley. R. L., Ericson, M. A., & D'Angelo, W. (1994). 3-Dimensional auditory displays: Development, applications, and performance. Aviation, Space, and Environmental Medicine, 65, A31-A38.
Monsell, S. (2003). Task switching. Trends in Cognitive Science, 7, 134-140.
Mouloua, M., Gilson, R., & Hancock, P. A. (2003). Human-centered design of unmanned aerial vehicles. Ergonomics in Design, 11(1), 6-11.
Rosenberg, L. B., Lacey, T. A., & Stredney, D. (1995). Haptic interface for virtual reality simulation and training: Phase 1 (Air Force Office of Scientific Research Rep. No. TR-95-0482). Washington, DC: U.S. Government Printing Office.
See, J. E., Howe, S. R., Warm, J. S., & Dember, W. N. (1995). Meta-analysis of the sensitivity decrement in vigilance. Psychological Bulletin, 117, 230-249.
Shaw, R. L. (1988). Fighter combat: Tactics and maneuvering. Annapolis, MD: U.S. Naval Institute.
Styles, E. A. (1997). The psychology of attention. Hove, UK: Psychology Press.
Tannen, R. S., Nelson, W. T., Bolia, R. S., Warm, J. S., & Dember, W. N. (2004). Evaluating adaptive multisensory displays for target localization in a flight task. International Journal of Aviation Psychology, 14, 297-312.
Thompson, C. (2000). F-16 UCAVs: A bridge to the future of air combat? Aerospace Power Journal, 14(1), 22-36.
Walk, R. D., & Pick, H. L. (Eds.). (1981). Intersensory perception and sensory integration. New York: Plenum.
Warm, J. S., Dember, W. N., & Hancock, P. A. (1996). Vigilance and workload in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 183-200). Hillsdale, NJ: Erlbaum.
Warm, J. S., & Jerison, H. J. (1984). The psychophysics of vigilance. In J. S. Warm (Ed.), Sustained attention in human performance (pp. 15-59). Chichester, UK: Wiley.
Daniel V. Gunn is a usability engineer at Microsoft Game Studios. He received his Ph.D. in experimental psychology/human factors at the University of Cincinnati in 2002.
Joel S. Warm is a professor of psychology and director of graduate studies (basic science) at the University of Cincinnati. He received his Ph.D. in experimental psychology from the University of Alabama in 1966.
W. Todd Nelson is a senior engineering research psychologist in the Collaborative Interfaces Branch, Warfighter Interface Division, Human Effectiveness Directorate of the Air Force Research Laboratory at Wright-Patterson Air Force Base. He received his Ph.D. in experimental psychology/human factors from the University of Cincinnati in 1996.
Robert S. Bolia is a computer scientist in the Collaborative Interfaces Branch, Warfighter Interface Division, Human Effectiveness Directorate of the Air Force Research Laboratory at Wright-Patterson Air Force Base. He received his M.A. in military studies from American Military University in 2004.
Donald A. Schumsky is professor emeritus of psychology at the University of Cincinnati. He received his Ph.D. in experimental psychology from Tulane University in 1962.
Kevin J. Corcoran is a professor of psychology and department head at the University of Cincinnati. He received his Ph.D. in psychology from the University of Connecticut in 1984.
Date received: May 19, 2003
Date accepted: December 14, 2004
Address correspondence to Daniel V. Gunn, Microsoft Game Studios, One Microsoft Way, Redmond, WA 98052; email@example.com.
Title annotation: unmanned aerial vehicles
Authors: Daniel V. Gunn, Joel S. Warm, W. Todd Nelson, Robert S. Bolia, Donald A. Schumsky, Kevin J. Corcoran
Date: September 22, 2005