
Automotive HUDs: the overlooked safety issues.

INTRODUCTION

The transfer of technology from tactical aviation to surface transportation is taking place at an astonishing rate. The types of technology now being integrated into automobiles include proximity warning systems; vehicle monitor and control systems; on-board navigation systems with route optimization algorithms, cartographic databases, electronic moving maps, and real-time communication from Global Positioning System (GPS) satellites; and on-board sensors to produce video imagery (Intelligent Transportation Society of America, 1994). The anticipated benefits of transferring these various types of defense technology to automobiles are intimately tied to the manner in which the driver may receive the information generated by them.

Much of this information is likely to be presented on head-up displays (HUDs), yet another piece of aviation technology. Because HUDs carry their own unique benefits and risks, in this article I attempt to separate the medium from the message, the display from the information. This critical review examines the evidence - or lack of evidence - of the benefits and risks associated with automotive HUDs. Particular emphasis is given to the manner in which the evidence relates to two issues that may well create serious safety hazards and which have been largely neglected in the automotive context: (a) the interaction between HUD focal distance and the perception of outside objects and (b) the capture of visual attention by HUD-projected virtual imagery.

The most thorough treatment of HUD design issues is Weintraub and Ensing's Human Factors Issues in Head-Up Display Design: The Book of HUD (1992). This compendious work provides an excellent background for the topics discussed here, even though fewer than two pages are devoted to automotive HUDs in a book almost entirely concerned with aviation. A report dealing specifically with automotive HUD research (Gish & Staplin, 1995) will provide an important resource to researchers engaged in the automotive HUD debate. Readers are encouraged to consult both texts.

The benefits expected from using HUDs derive from their two most salient features: the head-up location and a focal distance that is farther than the conventional instrument panel or dashboard. The head-up location is expected to permit drivers to keep their eyes on the road more than is possible with dashboard-mounted displays, which should result in better and faster detection of outside objects and events (an issue discussed at length in this article). A related benefit is that drivers will need less time to obtain information from a display that is head-up than from one that is head-down. Kiefer (1991) reported speedometer scan savings of 144 ms, though Gish and Staplin (1995) caution that this effect may not generalize to higher-workload conditions. Okabayashi et al. (1989) also reported lower recognition error rates and times for HUD imagery. The time advantage, however, diminished on curves - a finding that may substantiate Gish and Staplin's criticism that HUD advantages may be linked to low workload.

The only detriment so far attributed to the head-up location is the possibility that HUD imagery could obscure the driver's vision of outside objects (Okabayashi, Sakata, & Hatada, 1991), but this would be an issue only for imagery that directly overlays the forward scene.

The main benefit expected to accrue from the HUD's longer focal distance is a decrease in the accommodative shift (the change in focus that the eyes must make when moving between an outside object and a display). The principal beneficiaries of this shorter focal transition are expected to be the elderly, given their restricted accommodative range. Two studies, discussed later, provide evidence supporting this benefit to older drivers.
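
To give a rough sense of the magnitude of this benefit (a back-of-the-envelope illustration using assumed viewing distances, not values reported in the studies cited in this article), focal demand can be expressed in diopters, the reciprocal of viewing distance in meters:

\[
D = \frac{1}{d}, \qquad
\Delta D_{\text{dashboard}} \approx \frac{1}{0.8\ \text{m}} - \frac{1}{10\ \text{m}} \approx 1.15\ \text{diopters}, \qquad
\Delta D_{\text{HUD}} \approx \frac{1}{2.5\ \text{m}} - \frac{1}{10\ \text{m}} \approx 0.30\ \text{diopters}.
\]

Under these assumed distances, a dashboard instrument at roughly 0.8 m demands about four times the refocusing of a HUD image at 2.5 m when the eyes return from the road scene, and it is this reduction that is expected to matter most to older drivers with a limited accommodative range.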

Given these and other attractive attributes, work has been proceeding to resolve the numerous display design issues involved in putting HUDs into automobiles - for instance, location, brightness, color, format, size, weight, and cost (see, e.g., Enderby & Wood, 1992; Flannagan & Harrison, 1994; Hasebe et al., 1990; and Weihrauch, Meloeny, & Goesch, 1989).

In transferring HUDs to automobiles, however, two very important safety issues are either receiving inadequate treatment or are being overlooked completely. The first is the optical distance at which HUD imagery should be focused and its effects on perceptual judgments of objects in the outside scene. The second issue concerns the effect of HUD images on visual attention to outside objects.

With regard to the first issue, it has been generally accepted that automotive HUDs should be focused at an optical distance between 0.4 and 0.5 diopter (2.5-2.0 m). This decision derives from reasoning that overlooks the effects of HUD focal distance on perceptual judgment and, furthermore, is supported by scanty evidence. The second issue - attention - has been largely dismissed because of data collected within a methodological paradigm that would make it impossible for HUD-related problems to emerge.

In both cases there has been a general failure to recognize the rich literature and ongoing controversies surrounding these issues in the field of tactical aviation human factors. There has also been a general lack of recognition - in both the aviation and automotive human factors literature - of the inherent connection between these two issues. These shortcomings are creating a situation in which decisions are being made, based on inadequate evidence, which have serious safety implications for a very large segment of the population: drivers.

Whether or not the benefits of automotive HUDs outweigh the potential risks will depend on the operational value of the information being supplied to the driver. For example, is the value of a HUD-presented digital speedometer worth risking the consequences of diminished visual attention to objects (e.g., vehicles, pedestrians) in the road scene? For information of high operational value (e.g., sensor imagery for vision enhancement), the concomitant risks are also likely to be greatly increased because of the way in which this type of information will have to be displayed. Here, of course, is where the distinction between the medium and the message evaporates.

HUD FOCAL DISTANCE

Although Weintraub and Ensing (1992) wrote that "there is no real consensus among the early automobile HUDs about the optical distance of the symbology" (p. 144), it has become almost axiomatic in the surface transportation community that automotive HUDs should be focused at a distance between 2.0 and 2.5 m (0.5-0.4 diopter). The rationale for focusing automotive HUD imagery at 2.0 to 2.5 m is well summarized by Harrison (1994) and is largely based on the results of minor portions of two studies reported in 1991 and 1992. Both of these studies addressed the effect of image distance on the extraction of information from the HUD, not its effect on the perception of outside objects.

Inuzuka, Osumi, and Shinkai (1991) asked their participants to fixate on a target lamp at a distance of 10 m. When the lamp was turned off, the participants had to read a HUD speedometer, the focal distance of which varied between 0.8 and 5 m. The researchers found that for their older participants, recognition time decreased as the focal distance of the HUD imagery increased, up to a distance of 2.5 m. Beyond 2.5 m, no further improvement emerged.

Using a similar procedure, Kato, Ito, Shima, Imaizumi, and Shibata (1992) reported no noticeable improvement beyond a HUD focal distance of 2.0 m. The effects reported in both studies were obtained only for older drivers. HUD focal distance appeared not to affect the recognition times of the younger participants. This interaction with age may be attributable to the fact that older people have an average resting focal distance of two or more meters (Simonelli, 1983) and to the effects of presbyopia, which make it more difficult to focus at nearer distances. Neither study involved concurrent performance of a driving task.

In 1989, Okabayashi et al. examined the legibility of HUD imagery during the performance of a simulated driving task. The screen on which the simulated driving scene was projected was placed at either 4 or 12 m from the driver's eyes. For each screen distance, the focal distance of the HUD imagery was varied. The recognition error rates reported show that HUD legibility improved as the focal distance approached the distance of the screen.

None of the aforementioned researchers recognized or addressed the more important issue of how HUD imagery distance can affect the perception of real objects. The issue of HUD optical distance and outside object perception has received extensive attention in the human factors literature associated with tactical aviation, but it has hardly been recognized in surface transportation research. Tactical aviation HUD images are collimated to focus them at optical infinity (0 diopter), with the intention of allowing pilots to keep their eyes focused (i.e., lenses accommodated) on distant objects while viewing the HUD. There has, however, been a significant controversy surrounding the use of collimation in virtual image displays (e.g., Roscoe, 1987a).

There is empirical evidence that the lens accommodation (i.e., optical focus of the eyes) is not at infinity when viewing the HUD, but somewhat nearer (Iavecchia, Iavecchia, & Roscoe, 1988). The amount of this deviation, or misaccommodation, is highly correlated with the large variations among individuals in their resting position accommodation - that is, the focal distance of the eyes when they are not focused on anything (e.g., in the dark). Misaccommodation inward causes objects in the outside world to appear smaller and more distant. This perceptual minification, also known as accommodative micropsia, was first reported by Wheatstone in 1852 (as cited in Smith, Meehan, & Day, 1992) and may be of much greater practical importance today because of the growing use of virtual image displays.
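
One way to express the logic of this bias schematically (an illustrative formalization under a familiar-size assumption, not a derivation taken from the studies cited): an object of physical size S at distance d subtends a visual angle of approximately S/d. If micropsia shrinks the apparent angular size by a factor m < 1, an observer who relies on the object's familiar size will recover a distance estimate of

\[
\theta' \approx m\,\frac{S}{d}, \qquad \hat{d} \approx \frac{S}{\theta'} = \frac{d}{m} > d,
\]

so that, for example, a 10% minification (m = 0.9) corresponds to roughly an 11% overestimation of distance.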

Weintraub and Ensing (1992) argued that "vergence is the culprit" (p. 98) that causes the micropsia cited by Roscoe (1987a, 1987b), not misaccommodation. They cited evidence that size and distance errors are much more highly correlated with an individual's resting (dark) convergence than with the dark focus (accommodation). Misconvergence could be the result of imperfect collimation or convergence-accommodation traps (e.g., dirty windscreens, HUD combiner frames). Furthermore, convergence to a near object would drive accommodation inward.

Roscoe (1987a, 1987b) attributed aircraft accidents and mishaps - for instance, very hard landings onto runways that seemed farther away than they really were - to this perceptual bias in pilots flying with virtual image displays. Within the tactical aviation human factors literature, the ensuing controversy centered on how successful collimation was at bringing the pilot's eyes "out" and, eventually, on whether or not aviation HUDs should be used at all. Given such a choice, and given the obvious importance of HUDs to the execution of tactical missions, the controversy seems to have quieted. The concerns raised by Roscoe and others, however, still have no accepted explanation in terms of basic perceptual theory. As Weintraub and Ensing (1992) concluded, the reason that distance is overestimated and size underestimated "remains unknown" (p. 96).

If the effects of micropsia - whatever the cause - are of concern with a HUD focused at optical infinity, then a focal distance of 2.0 m can only amplify the problem. The distance selected for automotive HUD imagery is designed to place it near the front edge of the car. By definition, all outside objects (e.g., other vehicles) will be at considerably greater distances than the HUD images. Whereas there was concern that collimated HUDs might not be drawing pilots' eyes out far enough, automotive HUDs will insist that drivers' eyes be focused at distances much closer than the real objects that should be a driver's primary concern. Furthermore, the near HUD imagery will itself be a misconvergence trap, leading to double vision (diplopia) of farther objects, as well as being yet another potential cause of distance overestimation. For these reasons, the perceptual effects of automotive HUDs are likely to be much greater than those of concern with aviation HUDs.
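
A rough geometric sketch illustrates the size of this mismatch (assuming a typical interpupillary distance of 6.5 cm; the object distances are illustrative). In accommodation terms, eyes focused on a HUD at 2 m (0.5 diopter) are about 0.47 diopter out of focus for a vehicle at 30 m (0.033 diopter). In vergence terms, the required convergence angle is approximately the interpupillary distance divided by the viewing distance:

\[
\theta \approx \frac{\text{IPD}}{d}: \qquad
\theta_{2\ \text{m}} \approx \frac{0.065}{2} \approx 0.033\ \text{rad} \approx 1.9^{\circ}, \qquad
\theta_{30\ \text{m}} \approx \frac{0.065}{30} \approx 0.002\ \text{rad} \approx 0.12^{\circ}.
\]

The difference of roughly 1.7 degrees (about 100 arc minutes) is far larger than the disparities that can be fused near the fovea, so an object at 30 m would indeed appear doubled while the eyes are converged on the HUD imagery.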

Sojourner and Antin (1990) recognized the possibility that automotive HUDs cause the kind of size and distance misperception described by Roscoe (1987a, 1987b). They mentioned that the effect may be mitigated by the more numerous visual cues in the driving environment but added, "still, if on a foggy night the HUD imagery caused a leading automobile's tail lights to be minified, resulting in their distance being overestimated, the likelihood of a rear-end collision would be increased" (p. 330). One of the conclusions Sojourner and Antin reached is that future research should address the effects of automotive HUDs on the perceived size and distance of outside objects. As far as I know, such research has not been conducted.

Basic psychophysical research needs to be conducted on perceptual estimations of the absolute distances of objects while participants' eyes are focused on automotive HUD imagery. The actual distances of the objects should cover a range of distances applicable to driving safety. Furthermore, the distance estimations should be obtained for the same set of participants with and without the presence of the HUD imagery, in order to anchor the effect to a naturalistic baseline and to permit an examination of potential individual differences.

VISUAL ATTENTION

One of the paradoxes of HUDs is that they may do their job too well. Their salience, legibility, and head-up location may command too much of the operator's visual attention, a phenomenon labeled by Weintraub (1987) as cognitive capture. The cues associated with cognitive switching from an instrument panel or dashboard to the outside scene are largely missing with a HUD (Weintraub & Ensing, 1992). These cues are lens focal accommodation, ocular vergence adjustment, and gaze shifting via head and eye movements.

An additional cue may be provided by differences in ambient light levels and the subsequent pupillary dilation or constriction response. The results of such an effect (i.e., cognitive capture) would be to decrease the relative salience of outside objects and, consequently, lower the likelihood that these objects would be noticed or detected by the operator. Within the area of automotive HUDs, three studies provide evidence that appears to dismiss such a concern. However, the possibility of methodological confounds limits the conclusiveness of these data. Within the tactical aviation literature there is far from unanimity in either the evidence or the conclusions.

Flannagan and Harrison (1994) projected photographic slides of road scenes with superimposed HUD road map imagery onto a screen approximately 1.5 m in front of their participants' eyes. Participants' primary task was to report the direction of the final turn needed to reach a destination indicated on the map. The secondary task was to say whether or not the road scene showed a pedestrian entering the lane. Half of the 180 trials each participant encountered showed pedestrians. The vertical deviation of the HUD map was either 4°, 9°, or 15° down from the horizon. Increasing the angular deviation of the HUD had little effect on error rates in the map task, but it significantly increased error rates in the pedestrian detection task, especially for the older participants.

It is difficult to generalize these results to what may happen in reality, for two reasons. First, the tasks involved noninteractive, static pictures. Second, and perhaps more important, the combined HUD-road scene imagery was presented for only 30 ms in each trial. Given the tachistoscopic presentation, and given the participants' instructions to treat the HUD map task as primary and to fixate on the HUD location before the display was flashed, the results become predictable from what is known of decay rates in iconic memory (e.g., Sperling, 1960). After reading the visual icon, or retinal afterimage, of a map, participants have little of the road scene image remaining. This would be increasingly true the farther the image of the pedestrian was from the foveated map image. The results thereby become irrelevant to the issue at hand.

Weihrauch et al. (1989) conducted a comparison of HUDs and head-down displays with participants performing a driving task in a closed-loop simulator. They reported that there was a 100-ms HUD advantage in participant reaction time for the detection of pop-up obstacles. However, the absence of experimental detail given (e.g., methodology, instrumentation, stimulus conditions) makes it difficult to assess the value of the data.

In a much more thoroughly reported study, Sojourner and Antin (1990) also compared driver performance with HUDs and head-down instrument panel displays, on which only a digital speedometer was presented. Participants performed three driving-related tasks in an open-loop simulator, in which they were shown a video of a driving scene. With the exception of the head-down speedometer, all imagery was projected onto a screen 3 m from the participants' eyes. One of the tasks was the detection of salient cues in the roadway (i.e., graphically presented "balls" appearing to the left, center, or right of the forward field of view). Participants were required to push down a mouse button as soon as they detected a salient cue. Participants in the HUD group missed 3 of the 90 occurrences of salient cues, and the head-down display participants missed 9. When missed cues were omitted from the data, HUD detection times were 440 ms faster than those in the head-down group. Sojourner and Antin calculated that such a time savings translates into 12.22 m at 100 km/h.
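
The arithmetic behind that figure is straightforward:

\[
100\ \text{km/h} = \frac{100{,}000\ \text{m}}{3600\ \text{s}} \approx 27.8\ \text{m/s}, \qquad
27.8\ \text{m/s} \times 0.440\ \text{s} \approx 12.2\ \text{m}.
\]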

The HUD advantage reported in these studies is most likely attributable to the reaction time methodology employed. Participants who are instructed to detect obstacles in the road and to press a reaction time button when they do will not be surprised by these events when they occur, especially when they occur 90 times. As Weintraub and Ensing (1992) pointed out, it is the truly unexpected and unlooked-for nature of outside hazards that presents the real problem of cognitive capture. That is what has made this problem so difficult to study with experimental and statistical rigor. It is hard to surprise someone repeatedly. The more numerous tactical aviation HUD studies, which have addressed this issue, are beginning to make it clear that HUDs may improve detection of anticipated objects and may hinder detection of the truly unexpected.

Fischer, Haines, and Price (1980) and Weintraub, Haines, and Randle (1985) provided evidence of how truly unexpected outside objects can go unnoticed when HUDs are used. When a large runway obstacle (i.e., an airplane) was introduced into the simulated landing scene, it was ignored by almost all pilots. The one-time nature of these surprise trials, and the small numbers of participants, made it impossible to submit these dramatic demonstrations to statistical scrutiny, thereby relegating them more to the realm of anecdote than experimental evidence. Nevertheless, these studies served the purpose of getting the attention of the human factors community (if not the pilots), as evidenced by the persistence of cognitive capture as a basic issue in HUD research (see, e.g., Weintraub & Ensing, 1992, pp. 104-109).

Recently reported evidence of HUD interference with the detection of runway obstacles (Wickens & Long, 1994) may also provide a methodological clue as to how to study this problem in both aviation and surface transportation. In a closed-loop flight simulator, 32 pilots performed landings with flight instrumentation symbology projected either in a head-up location or 13° down. On breaking out of the simulated clouds, the pilots would call "runway in sight" when the runway scene became visible to them. Not surprisingly, the latency of detecting the runway was significantly shorter when the instrumentation display was in the head-up location.

On the final trial in each display location block, the runway obstacle was introduced, and the experimenters measured how much time it took the pilots to initiate a "go-around" maneuver. The response to the unexpected obstacle was significantly more rapid when the head-down display location was used. By using a large enough participant sample and a latency measure that would not tip off the pilots, Wickens and Long have elevated the evidence for HUD-related cognitive capture from the anecdotal to the statistically reliable. This methodological paradigm now needs to be replicated in the automotive context.

THE CONNECTION

The issues of HUD focal distance and visual attention have long been treated as independent. Evidence of this dissociation comes from the fact that the two are often discussed separately (as they are in the foregoing sections) and from the experimental procedures employed to study them. For example, investigators addressing cognitive capture effects frequently present all imagery (HUD, head-down display, and simulated outside scene) on the same projection screen placed a fixed distance from the participants' eyes. There is, however, good reason to believe that HUD focal distance has as much influence on the detection of outside objects as it does on their perceived size and distance. This connection, not surprisingly, complicates the conduct of research on two already challenging issues.

Weintraub, Haines, and Randle (1984) and Weintraub et al. (1985) presented participants with imagery of a runway scene that was collimated to be focused at optical infinity. They varied the focal distance of head-up and head-down instrumentation symbology from very near (0.25 m) to the same optical distance as the outside scene. Cognitive switching time from instrumentation to runway and back was apparently unaffected by differences in display location. It was, however, significantly influenced by the proximity of the display focal distance to that of the runway imagery. When these researchers declared, "the HUD wins" (Weintraub et al., 1984, p. 533; 1985, p. 617), they were referring not to the vertical location of the display but, rather, to its focal distance - that is, they should have said, "collimation wins."

Norman and Ehrlich (1986) studied the detection and recognition of targets focused at infinity while participants simultaneously monitored digits presented at varying focal distances. They reported large effects on detection and recognition times and errors, showing optimal performance when both digits and targets were presented at the same focal distance. This approach should be applied to detection, recognition, and size and distance perception of outside objects (at different distances) with automotive HUDs.

CONFORMAL IMAGERY

Vehicle status information (e.g., speed, fuel, headlights) has no direct spatial relationship to the outside world and can therefore be displayed out of the forward field of view. A potentially bigger payoff of HUDs, however, might be expected from the presentation of imagery that is a visual analogue of the forward scene - conformal imagery - and which must, therefore, overlay it. As Weintraub and Ensing (1992) pointed out, "conformality is THE unique aspect of a head-up display" (p. 83, emphasis in the original). It is possible, however, that conformal imagery presented on automotive HUDs will also amplify the risks.

Conformal imagery must be in dynamic visual registration with its real-world counterpart. For example, in aviation, the HUD horizon line must be seen to lie over the real horizon that the pilot sees through the windscreen. Conformal imagery must also be presented at the same optical distance as its counterpart. If the HUD-presented conformal imagery were focused at a different (e.g., closer) distance than the real-world counterpart, ocular convergence to it would result in a double image (diplopia) of the outside object that the imagery represents. In aircraft, the optical distance of conformal imagery is not an issue. Aviation HUD imagery is focused at optical infinity because of the operating distances involved in flying. In automobiles the issue is much more complicated.

The particular application to be used here as an example is the planned use of automotive HUDs as part of a Vision Enhancement for Crash Avoidance system (Intelligent Transportation Society of America, 1994). Such a system would employ one or more sensors (e.g., infrared, radar) to augment a driver's vision during low-visibility conditions (e.g., fog, night). The presentation of sensor imagery of the road scene would have to be conformal and would thus have to adhere to all of the requirements of conformality. Regardless of how the images are generated (e.g., vector graphics, computer-generated raster graphics, or raw sensor imagery), they will have to be in spatial registration with their real-world counterparts (e.g., the roadway, other vehicles, pedestrians). This means placing the HUD imagery directly over the outside scene.

Because the number of objects a driver must see and track is much greater than the number with which pilots must deal, the amount of conformal imagery will be much greater in automobiles than in aircraft, thus amplifying the risk of obscuration. The distances of these objects from the driver's eyes range from several meters to hundreds of meters. To present all of the imagery at one focal distance would result in inappropriate ocular convergence and lens accommodation for all but a few of the real objects whose analogue images are displayed on a HUD. The extent to which such a vision enhancement system might interfere with the detection of outside objects or distort judgments of their distances has received far too little attention (see, e.g., Bossi, Ward, & Parkes, 1994) to support meaningful review.
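
A rough calculation in diopters makes the problem concrete (assuming the imagery is focused at the 2.0 m distance discussed earlier; the object distances are illustrative). The focal mismatch between the display and an object at distance d is

\[
\Delta = \frac{1}{2\ \text{m}} - \frac{1}{d}: \qquad
d = 5\ \text{m} \Rightarrow 0.30, \quad
d = 15\ \text{m} \Rightarrow 0.43, \quad
d = 50\ \text{m} \Rightarrow 0.48, \quad
d = 200\ \text{m} \Rightarrow 0.50\ \text{diopter}.
\]

For everything beyond the nearest few vehicles, the mismatch approaches the full 0.5 diopter of the HUD itself; imagery focused at a single near plane can be in optical register with essentially none of the objects it represents.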

SUMMARY

It is not my intent in this article to say that HUDs are bad or that they are inappropriate for automobiles. Rather, the purpose of this critical review of the HUD research literature is to expose concerns about issues directly related to safety. These issues have suffered either from neglect or from shortcomings in the methodology used in the empirical research that has addressed them. Erroneous or premature conclusions have been reached based on scanty evidence that is either inappropriate or irrelevant. These conclusions (e.g., that there are no problems with automotive HUDs) have prompted some (e.g., Harrison, 1994) to propose that work can now proceed in the development of display formats for the HUD presentation of (to name a few) road maps, cellular phone status, car radio status, and a color schematic of the vehicle showing trouble spots. The safety implications of the ensuing clutter are likely to be compounded by the addition of conformal imagery.

The issues of HUD focal distance, cognitive capture, and the inherent connection between the two are still matters of debate and research in the area of tactical aviation HUDs. These issues need very much to be addressed in the context of surface transportation. It is indeed possible that the concerns raised in the aviation human factors literature will not apply to automobiles because of such differences as the richer visual environment on the ground. It is also possible that the issues will be of much greater concern with automotive HUDs because of differences in factors such as the proximity of other vehicles and objects on the road, the wide range of visual distances involved, the demographic diversity of drivers, and levels of training. The questions implied by these factors and the issues discussed in this article must be treated as empirical questions and addressed within appropriate methodological paradigms and experimental design.

Finally, I have become informally aware of a number of automotive HUD studies not accessible or citable in the open literature. These studies have been conducted by private companies with their own funds and are held proprietary. One piece of research produced compelling evidence for the cognitive capture phenomenon: Drivers using a HUD failed to notice a baby carriage rolled unexpectedly into the road. Another study uncovered some visually disconcerting effects of HUD collimation when drivers were going around a curve. Unfortunately, these and other pieces of important research can carry no more weight than rumor. The issues surrounding automotive HUDs are important enough to warrant openly conducted research subjected to public scrutiny.

CONCLUSION

In tactical aviation, HUDs have proven to be essential. They support the operations of basic flight and weapon systems management under conditions that would be either very difficult or impossible with more conventional instruments alone. For automobiles, however, the body of evidence is inadequate to support an accurate assessment of either the operational benefits or the safety-related risks associated with HUDs.

ACKNOWLEDGMENTS

I would like to thank my colleagues, Helmut Knee, Jack Schryver, and Philip Spelt, for many helpful discussions and comments during the preparation of this paper. Also, the active interest of Carole Simmons has given me access to sources that would otherwise have been overlooked. This work was supported by Federal Highway Administration contract # 1883E020-A1. This article reflects the opinions of the author and does not necessarily reflect the opinions, policies, or regulations of the United States Department of Transportation or the Federal Highway Administration.

REFERENCES

Bossi, L., Ward, N., & Parkes, A. (1994). The effect of simulated vision enhancement systems on driver peripheral target detection and identification. Ergonomics and Design, 4, 192-195.

Enderby, C. M., & Wood, S. T. (1992). Head-up display in automotive/aircraft applications (SAE Tech. Paper 920740). In Electronic display technology and information systems (SP-904, pp. 39-48). Warrendale, PA: Society of Automotive Engineers.

Fischer, E., Haines, R. F., & Price, T. A. (1980). Cognitive issues in head-up displays (NASA Tech. Paper TP-1711). Moffett Field, CA: NASA-Ames Research Center.

Flannagan, M. J., & Harrison, A. K. (1994). The effects of automobile head-up display location for younger and older drivers (Report UMTRI-94-22). Ann Arbor: University of Michigan Transportation Research Institute.

Gish, K. W., & Staplin, L. (1995). Human factors aspects of using head-up displays in automobiles: A review of the literature (Report DOT HS 808 320). Washington, DC: U.S. Department of Transportation.

Harrison, A. (1994). Head-up displays for automotive applications (Report UMTRI-94-10). Ann Arbor: University of Michigan Transportation Research Institute.

Hasebe, H., Ohta, T., Nakagawa, Y., Matsuhiro, K., Sawada, K., & Matsushita, S. (1990). Head up display using dot matrix LCD (II) (SAE Tech. Paper 900667). In Automotive electronic displays and information systems (SP-809, pp. 81-87). Warrendale, PA: Society of Automotive Engineers.

Iavecchia, J. H., Iavecchia, H. P., & Roscoe, S. N. (1988). Eye accommodation to head-up virtual images. Human Factors, 30, 689-702.

Intelligent Transportation Society of America. (1994). National ITS program plan (Vol. 2, Sec. 7.4). Washington, DC: U.S. Department of Transportation.

Inuzuka, Y., Osumi, Y., & Shinkai, H. (1991). Visibility of head up display (HUD) for automobiles. In Proceedings of the Human Factors Society 35th Annual Meeting (pp. 1574-1578). Santa Monica, CA: Human Factors and Ergonomics Society.

Kato, H., Ito, H., Shima, J., Imaizumi, M., & Shibata, H. (1992). Development of hologram head-up display (SAE Tech. Paper 920600). In Electronic display technology and information systems (SP-904, pp. 21-27). Warrendale, PA: Society of Automotive Engineers.

Kiefer, R. J. (1991). Effects of a head-up versus head-down digital speedometer on visual sampling behavior and speed control performance during daytime automobile driving (SAE Tech. Paper 910111). Warrendale, PA: Society of Automotive Engineers.

Norman, J., & Ehrlich, S. (1986). Visual accommodation and virtual image displays: Target detection and recognition. Human Factors, 28, 135-151.

Okabayashi, S., Sakata, M., Fukano, J., Daidoji, S., Hashimoto, C., & Ishikawa, T. (1989). Development of practical heads-up display for production vehicle application (SAE Tech. Paper 890559). In Automotive information systems and electronic displays: Recent developments (SP-770, pp. 69-78). Warrendale, PA: Society of Automotive Engineers.

Okabayashi, S., Sakata, M., & Hatada, T. (1991). Driver's ability to recognize objects in the forward view with superposition of head-up display images. Proceedings of the Society for Information Display, 32, 465-468.

Roscoe, S. N. (1987a). The trouble with HUDs and HMDs. Human Factors Society Bulletin, 30(7), 1-3.

Roscoe, S. N. (1987b). The trouble with virtual images revisited. Human Factors Society Bulletin, 30(11), 3-5.

Simonelli, N. M. (1983). The dark focus of the human eye and its relationship to age and visual defect. Human Factors, 25, 85-92.

Smith, G., Meehan, J. W., & Day, R. H. (1992). The effect of accommodation on retinal image size. Human Factors, 34, 289-301.

Sojourner, R. J., & Antin, J. F. (1990). The effects of a simulated head-up display speedometer on perceptual task performance. Human Factors, 32, 329-339.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs, 74(11, Whole No. 498).

Weihrauch, M., Meloeny, G. G., & Goesch, T. C. (1989). The first head-up display introduced by General Motors (SAE Tech. Paper 890288). In Automotive information systems and electronic displays: Recent developments (SP-770, pp. 55-62). Warrendale, PA: Society of Automotive Engineers.

Weintraub, D. J. (1987). HUDs, HMDs, and common sense: Polishing virtual images. Human Factors Society Bulletin, 30(10), 1-3.

Weintraub, D. J., & Ensing, M. (1992). Human factors issues in head-up display design: The book of HUD (CSERIAC state of the art report). Wright-Patterson Air Force Base, OH: Crew System Ergonomics Information Analysis Center.

Weintraub, D. J., Haines, R. F., & Randle, R. J. (1984). The utility of head-up displays: Eye-focus vs. decision times. In Proceedings of the Human Factors Society 28th Annual Meeting (pp. 529-533). Santa Monica, CA: Human Factors and Ergonomics Society.

Weintraub, D. J., Haines, R. F., & Randle, R. J. (1985). Head-up display (HUD) utility: II. Runway to HUD transitions monitoring eye focus and decision times. In Proceedings of the Human Factors Society 29th Annual Meeting (pp. 615-619). Santa Monica, CA: Human Factors and Ergonomics Society.

Wickens, C. D., & Long, J. (1994). Conformal symbology, attention shifts, and the head-up display. In Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting (pp. 6-10). Santa Monica, CA: Human Factors and Ergonomics Society.

Daniel R. Tufano received a Ph.D. in psychology from Princeton University in 1980. He is a research staff member at Oak Ridge National Laboratory.
