Gaze-contingent multiresolutional displays: an integrative review.

INTRODUCTION

Technology users often need or want large, high-resolution displays that exceed possible or practical limits on bandwidth and/or computation resources. In reality, however, much of the information that is generated and transmitted in such displays is wasted because it cannot be resolved by the human visual system, which resolves high-resolution information in only a small region.

One way to reduce computation and bandwidth requirements is to reduce the amount of unresolvable information in the display by presenting lower resolution in the visual periphery. Over the last two decades, a great deal of work has gone into developing and implementing gaze-contingent multiresolutional displays (GCMRDs). A GCMRD is a display showing an image with high resolution in one area and lower resolution elsewhere, with the high-resolution area centered on the viewer's fovea by means of a gaze tracker or other mechanism. Work on such displays is found in a variety of research areas, often using different terms for the same essential concepts. Thus the gaze-contingent aspect of such displays has also been referred to as "foveated" or "eye-slaved," and the multiresolutional aspect is often referred to as "variable resolution," "space variant," "area of interest," or "level of detail." When considered together, gaze-contingent multiresolutional displays have been referred to with various combinations of these terms or simply as "moving windows." Figure 1 shows examples of a short sequence of a viewer's gaze locations in an image and two types of multiresolutional images that might appear during a particular eye fixation.

[FIGURE 1 OMITTED]

Note that the gaze-contingent display methodology has also had a tremendous influence in basic research on perception and cognition in areas such as reading and visual search (for a review, see Rayner, 1998); however, the present review exclusively focuses on the use of such displays in applied contexts.

Why Use Gaze-Contingent Multiresolutional Displays?

Saving bandwidth and/or processing resources and the GCMRD solution. The most demanding display and imaging applications have very high resource requirements for resolution, field of view, and frame rates. The total resource requirement is proportional to the product of these factors, and usually not all can be met simultaneously. An excellent example of such an application is seen in military flight simulators that require a wraparound field of view, image resolution approaching the maximum resolution of the visual system (which is at least 60 cycles/° or 120 pixels/°; e.g., Thibos, Still, & Bradley, 1996, figure 1), and fast display updates with minimum delay. Because it is not feasible to create image generators, cameras, or display systems to cover the entire field of view with the resolution of the foveal region, the GCMRD solution is to monitor where the observer's attention is concentrated and to supply higher resolution and greater image transfer or generation resources to this area, with reduced resolution elsewhere. The stimulus location to which the gaze is directed is generally called the point of gaze.
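The scale of this multiplicative requirement is easy to appreciate with a back-of-the-envelope calculation. The sketch below uses round illustrative figures (they are assumptions chosen for the example, not values from the studies cited here) for a near-wraparound field rendered everywhere at the foveal resolution limit:

```python
# Back-of-the-envelope display budget; all figures are illustrative assumptions.
H_FOV_DEG = 300       # horizontal field of view (degrees)
V_FOV_DEG = 140       # vertical field of view (degrees)
PX_PER_DEG = 120      # approximate foveal limit (60 cycles/degree)
FRAME_RATE = 60       # frames per second
BITS_PER_PX = 24      # RGB color depth

pixels_per_frame = (H_FOV_DEG * PX_PER_DEG) * (V_FOV_DEG * PX_PER_DEG)
bits_per_second = pixels_per_frame * FRAME_RATE * BITS_PER_PX
print(f"{pixels_per_frame / 1e6:.0f} Mpixels per frame")   # ~605 Mpixels
print(f"{bits_per_second / 1e9:.0f} Gbit/s uncompressed")  # ~871 Gbit/s
```

Supplying foveal resolution only within a small D-AOI, with much coarser resolution elsewhere, is what brings numbers of this magnitude back into a practical range.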

We will refer to the local stimulus region surrounding the point of gaze, which is assumed to be the center of attention, as the attended area of interest (A-AOI) and the area of high resolution in the image as the displayed area of interest (D-AOI). (It is common in the multiresolutional display literature to refer to a high-resolution area placed at the point of gaze as an area of interest [AOI]. However, from a psychological point of view, the term area of interest is more often used to indicate the area that is currently being attended. We have attempted to distinguish between these two uses through our terminology.) GCMRDs integrate a system for tracking viewer gaze position (by combined eye and head tracking) with a display that can be modified in real time to center the D-AOI at the point of gaze. If a high-resolution D-AOI appears on a lower-resolution background, one can simultaneously supply fine detail in central vision and a wide field of view with reasonable display, data channel, and image source requirements.

In general, there are two sources of savings from GCMRDs. First, the bandwidth required for transmitting images is reduced because information encoding outside the D-AOI is greatly reduced. Second, in circumstances where images are being computer generated, rendering requirements are reduced because it is simpler to render low-resolution than high-resolution image regions, and therefore computer-processing resources are reduced (see Table 1 for examples).

Unfortunately, GCMRDs can also produce perceptual artifacts, such as perceptible image blur and image motion, which have the potential to distract the user (Loschky, 2003; Loschky & McConkie, 2000, 2002; McConkie & Loschky, 2002; Parkhurst, Culurciello, & Niebur, 2000; Reingold & Loschky, 2002; Shioiri & Ikeda, 1989; van Diepen & Wampers, 1998; Watson, Walker, Hodges, & Worden, 1997). Ideally, one would like a GCMRD that maximizes the benefits of processing and bandwidth savings while minimizing perception and performance costs. However, depending on the needs of the users of a particular application, greater weight may be given either to perceptual quality or to processing and bandwidth savings.

For example, in the case of a GCMRD in a flight simulator, maximizing the perceptual quality of the display may be more important than minimizing the monetary expenses associated with increased processing (i.e., in terms of buying larger-capacity, faster-processing hardware). However, in the case of mouse-contingent multiresolutional Internet image downloads for casual users, minimizing perceptible peripheral image degradation may be less important than maximizing bandwidth savings in terms of download speed. In addition, it is worth pointing out that perceptual and performance costs are not always the same. For example, a GCMRD may have moderately perceptible peripheral image filtering and yet may not reliably disrupt visual task performance (Loschky & McConkie, 2000). Thus when measuring perception and performance costs of a particular GCMRD configuration, it is important to decide how low or high one's cost threshold should be set.

Are GCMRDs really necessary? A question that is often asked about GCMRDs is whether they will become unnecessary when bandwidth and processing capacities are greatly expanded in the future. As noted by Geisler (2001), in general, one will always want bandwidth and processing savings whenever they are possible, which is the reason nobody questions the general value of image compression. Furthermore, as one needs larger, higher-resolution images and faster update rates, the benefits of GCMRDs become greater in terms of compression ratios and processing savings. This is because larger images have proportionally more peripheral image information, which can be coded with increasingly less detail and resolution, resulting in proportionally greater savings. These bandwidth and processing savings can then be traded for larger images, with higher resolution in the area of interest and faster update rates.

Even if the bandwidth problem were to be eliminated in the future for certain applications, and thus GCMRDs might not be needed for them, the bandwidth problem will still be present in other applications into the foreseeable future (e.g., virtual reality, simulators, teleconferencing, teleoperation, remote vision, remote piloting, telemedicine). Finally, even if expanded bandwidth and processing capacity makes it possible to use a full-resolution display of a given size for a given application, there may be good reasons to reduce the computational requirements where possible. Reducing computational requirements saves energy, and energy savings are clearly an increasingly important issue. This is particularly true for portable, wireless applications, which tend to be battery powered and for which added energy capacity requires greater size and weight. Thus, for all of these reasons, it seems reasonable to argue that GCMRDs will be useful for the foreseeable future (see Geisler, 2001, for similar arguments).

Why Should GCMRDs Work?

The concept of the GCMRD is based on two characteristics of the human visual system. First, the resolving power of the human retina is multiresolutional. Second, the region of the visual world from which highest resolution is gathered is changed from moment to moment by moving the eyes and head.

The multiresolutional retina. The multiresolutional nature of the retina is nicely explained by the sampling theory of resolution (e.g., Thibos, 1998), which argues that variations in visual resolution across the visual field are attributable to differences in information sampling. In the fovea, it is the density of cone photoreceptors that best explains the drop-off in resolution. However, in the visual periphery, it is the cone-to-ganglion cell ratio that seems to explain the resolution drop-off (Thibos, 1998). Using such knowledge, it is possible to model the visual sampling of the retina and to estimate, for a given viewing distance and retinal eccentricity, how much display information is actually needed in order to support normal visual perception (Kuyel, Geisler, & Ghosh, 1999), although such estimates require empirical testing.

The most fundamental description of visual acuity is in terms of spatial frequencies and contrast, as described by Fourier analysis (Campbell & Robson, 1968), and the human visual system seems to respond to spatial frequency bandwidths (De Valois & De Valois, 1988). An important finding for the creation of multiresolutional displays is that the human visual system shows a well-defined relationship between contrast sensitivity and retinal eccentricity. As shown in Figure 2A, contrast sensitivity to higher spatial frequencies drops off as a function of retinal eccentricity (e.g., Peli, Yang, & Goldstein, 1991; Pointer & Hess, 1989; Thibos et al., 1996). Figure 2A shows two different contrast sensitivity cut-off functions from Yang and Miller (Loschky, 2003) and Geisler and Perry (1998). The functions assume a constant Michelson contrast ratio of 1.0 (maximum) and plot the cut-off spatial frequency as a function of retinal eccentricity in degrees of visual angle. Viewers should be unable to discriminate spatial frequencies above the line for any given eccentricity in a given function (i.e., those frequencies are below perceptual threshold).

[FIGURE 2 OMITTED]

Note the overall similarity of the two functions, each of which is based on data from several different psychophysical studies using grating stimuli. (The small differences between the plots can be characterized as representing a band-pass vs. low-pass foveal contrast sensitivity function, but they could be reduced by changing some parameter values). As suggested by Figure 2A, substantial bandwidth savings can be accomplished in a multiresolutional image by excluding high-resolution information that is below contrast threshold at each eccentricity. However, if above-threshold spatial frequencies are excluded from the image, this will potentially degrade perception and/or distract the user, a point discussed in greater detail later.
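The general shape of such a cut-off function can be made concrete in a few lines of code. The sketch below follows the exponential contrast-threshold model of the form associated with Geisler and Perry (1998); the parameter values are commonly quoted defaults that we treat here as assumptions to be checked against the original paper and tuned for a given application:

```python
import numpy as np

# Contrast-threshold model of the general Geisler & Perry (1998) form.
# The parameter values below are assumed defaults, not authoritative.
CT0   = 1.0 / 64   # contrast threshold at the fovea for low frequencies
ALPHA = 0.106      # spatial-frequency decay constant
E2    = 2.3        # eccentricity (deg) at which the cut-off frequency halves

def contrast_threshold(f, e):
    """Threshold contrast for spatial frequency f (cycles/deg) at eccentricity e (deg)."""
    return np.minimum(1.0, CT0 * np.exp(ALPHA * f * (e + E2) / E2))

def cutoff_frequency(e, max_contrast=1.0):
    """Highest frequency whose threshold stays within the display's maximum contrast."""
    return E2 * np.log(max_contrast / CT0) / (ALPHA * (e + E2))

for e in (0, 2, 5, 10, 20):
    print(f"e = {e:2d} deg -> cut-off ~ {cutoff_frequency(e):4.1f} cycles/deg")
```

In this model, spatial frequencies above cutoff_frequency(e) at eccentricity e are below perceptual threshold and can be discarded without visible loss, which is exactly the savings suggested by Figure 2A.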

Gaze movements. The concept of a gaze-contingent display is based on the fact that the human visual system compensates for its lack of high resolution outside of the fovea by making eye and head movements. During normal vision, one simply points the fovea at whatever is of interest (i.e., the A-AOI) in order to obtain high-resolution information whenever needed. For small movements (e.g., under 20°) only the eyes tend to move, but as movements become larger, the head moves as well (Guitton & Volle, 1987; Robinson, 1979). This suggests that in most GCMRD applications, eye tracking methods that are independent from, or that compensate for, head movements are necessary to align the D-AOI of a multiresolutional display with the point of gaze. Furthermore, just prior to, during, and following a saccade, perceptual thresholds are raised (for a recent review see Ross, Morrone, Goldberg, & Burr, 2001). This saccadic suppression can help mask the stimulus motion that accompanies the updating of the D-AOI in response to a saccadic eye movement.

In sum, the variable resolution of the human visual system provides a rationale for producing multiresolutional displays that reduce image resolution, generally describable in terms of a loss of higher spatial frequencies, with increasing retinal eccentricity. Likewise, the mechanisms involved in eye and head movements provide a rationale for producing dynamic displays that move the high-resolution D-AOI in response to the changing location of the point of gaze. Based on these ideas, a large amount of work has been carried out in a number of different areas, including engineering design work on the development of GCMRDs, multiresolutional image processing, and multiresolutional sensors; and human factors research on multiresolutional displays, gaze-contingent displays, and human-computer interaction.

Unfortunately, it appears that many of the researchers in these widely divergent research areas are unaware of the related work done in the other areas. Thus this review provides a useful function in bringing information from these different research areas to the attention of workers in these related fields. Moreover, the current review provides a general framework within which research across these areas can be integrated, evaluated, and guided. Accordingly, the remainder of this article begins by discussing the wide range of applications in which GCMRDs save bandwidth and/or processing resources at present or in which they are expected to do so in the future. The article then goes on to discuss research and development issues related to GCMRDs, which necessarily involves a synthesis of engineering and human factors considerations. Finally, the current review points out key unanswered questions for the development of GCMRDs and suggests promising human factors research directions.

APPLICATIONS OF GCMRDS

Simulators

Simulation, particularly flight simulation, is the application area in which GCMRDs have been used the longest, and it is still the GCMRD application area that has been most researched, because of the large amount of funding available (for examples of different types of flight simulators with GCMRDs, see Barrette, 1986; Dalton & Deering, 1989; Haswell, 1986; Thomas & Geltmacher, 1993; Tong & Fisher, 1984; Warner, Serfoss, & Hubbard, 1993). Flight simulators have been shown to save lives by eliminating the risk of injury during the training of dangerous maneuvers and situations (Hughes, Brooks, Graham, Sheen, & Dickens, 1982) and to save money by reducing the number of in-flight hours of training needed (Lee & Lidderdale, 1983), in addition to reducing airport congestion, noise, and pollution because of fewer training flights.

GCMRDs are useful in high-performance flight simulators because of the wide field of view and high resolution needed. Simulators for commercial aircraft do not require an extensive field of view, as external visibility from the cockpit is limited to ahead and 45° to the sides. However, military aircraft missions require a large instantaneous field of view, with visibility above and to the sides and more limited visibility to the rear (Quick, 1990). Requirements vary between different flight maneuvers, but some demand extremely large fields of view, such as the barrel roll, which needs a 299° (horizontal) x 142° (vertical) field of view (Leavy & Fortin, 1983). Likewise, situational awareness has been shown to diminish with a field of view less than 100° (Szoboszlay, Haworth, Reynolds, Lee, & Halmos, 1995). Added to this are the demands for fast display updates with minimum delay and the stiff resolution requirements for identifying aircraft from various real-world distances. For example, aircraft identification at 5 nautical miles (9.26 km) requires a resolution of 42 pixels/° (21 cycles/°), and recognition of a land vehicle at 2 nautical miles (3.7 km) requires resolution of about 35 pixels/° (17.5 cycles/°; Turner, 1984). Other types of simulators (e.g., automotive) have shown benefits from using GCMRDs as well (Kappe, van Erp, & Korteling, 1999; see also the Medical simulations and displays section to follow).

Virtual Reality

Other than simulators, virtual reality (VR) is one of the areas in which GCMRDs will be most commonly used. In immersive VR environments, as a general rule, the bigger the field of view the greater the sense of "presence" and the better the performance on spatial tasks, such as navigating through a virtual space (Arthur, 2000; Wickens & Hollands, 2000). Furthermore, update rates should be as fast as possible, because of a possible link with VR motion sickness (Frank, Casali, & Wierwille, 1988; Regan & Price, 1994; but see Draper, Viirre, Furness, & Gawron, 2001). For this reason, although having high resolution is desirable in general, greater importance is given to the speed of updating than to display resolution (Reddy, 1995). In order to create the correct view of the environment, some pointing device is needed to indicate the viewer's vantage point, and head tracking is one of the most commonly used devices. Thus, in order to save scene-rendering time--which can otherwise be quite extensive--multiresolutional VR displays are commonly used (for a recent review, see Luebke et al., 2002), and these are most often head contingent (e.g., Ohshima, Yamamoto, & Tamura, 1996; Reddy, 1997; Watson et al., 1997).

Reddy (1997, p. 181) has, in fact, argued that head tracking is often all that is needed to provide substantial savings in multiresolutional VR displays, and he showed that taking account of retinal eccentricity created very little savings in at least two different VR applications (Reddy, 1997, 1998). However, the applications he used had rather low maximum resolutions (e.g., 4.8-12.5 cycles/°, or 9.6-25.0 pixels/°). Obviously, if one wants a much higher resolution VR display, having greater precision in locating the point of gaze can lead to much greater savings than is possible with head tracking alone (see section titled Research and Development Issues Related to D-AOI Updating). In fact, several gaze-contingent multiresolutional VR display systems have been developed (e.g., Levoy & Whitaker, 1990; Luebke, Hallen, Newfield, & Watson, 2000; Murphy & Duchowski, 2001). Each uses different methods of producing and rendering gaze-contingent multiresolutional 3-D models, but all have produced savings, with rendering-time reductions estimated at roughly 80% relative to a standard constant-resolution alternative (Levoy & Whitaker, 1990; Murphy & Duchowski, 2001).

Infrared and Indirect Vision

Infrared and indirect vision systems are useful in situations where direct vision is poor or impossible. These include vision in low-visibility conditions (e.g., night operations and search-and-rescue missions) and in future aircraft designs with windowless cockpits. The requirements for such displays are similar to those in flight simulation: Pilots need high resolution for target detection and identification, and they need wide fields of view for orientation, maneuvering, combat, and tactical formations with other aircraft. However, these wide-field-of-view requirements are in even greater conflict with resolution requirements because of the extreme limitations of infrared focal plane array and indirect-vision cameras (Chevrette & Fortin, 1996; Grunwald & Kohn, 1994; Rolwes, 1990).

Remote Piloting and Teleoperation

Remote piloting and teleoperation applications are extremely useful in hostile environments, such as deep sea, outer space, or combat, where it is not possible or safe for a pilot or operator to go. These applications require real-time information with a premium placed on fast updating so as not to degrade hand-eye coordination (e.g., Rosenberg, 1993).

Remote piloting of aircraft or motor vehicles. These applications have a critical transmission bottleneck because low-bandwidth radio is the only viable option (DePiero, Noell, & Gee, 1992; Weiman, 1994); line-of-sight microwave is often occluded by terrain and exposes the vehicle to danger in combat situations, and fiberoptic cable can be used only for short distances and breaks easily. Remote driving requires both a wide field of view and enough resolution to be able to discern textures and identify objects. Studies have shown that operators are not comfortable operating an automobile (e.g., a Jeep) with a 40° field-of-view system, especially turning corners, but that they feel more confident with a 120° field of view (Kappe et al., 1999; McGovern, 1993; van Erp & Kappe, 1997). In addition, high resolution is needed to identify various obstacles, and color can help distinguish such things as asphalt versus dirt roads (McGovern, 1993). Finally, frame rates of at least 10 frames/s are necessary for optic flow perception, which is critical in piloting (DePiero et al., 1992; Weiman, 1994).

Teleoperation. Teleoperation allows performance of dexterous manipulation tasks in hazardous or inaccessible environments. Examples include firefighting, bomb defusing, underwater or space maintenance, and nuclear reactor inspection. In contrast to remote piloting, in many teleoperation applications a narrower field of view is often acceptable (Weiman, 1994). Furthermore, context is generally stable and understood, thus reducing the need for color. However, high resolution for proper object identification is generally extremely important, and update speed is critical for hand-eye coordination. Multiresolutional systems have been developed, including those that are head contingent (Pretlove & Asbery, 1995; Tharp et al., 1990; Viljoen, 1998) and gaze contingent (Viljoen, 1998), with both producing better target-acquisition results than does a joystick-based system (Pretlove & Asbery, 1995; Tharp et al., 1990; Viljoen, 1998).

Image Transmission

Images are often transmitted through a limited-bandwidth channel because of distance or data-access constraints (decompression and network, disk, or tape data bandwidth limitations). We illustrate this by considering two examples of applications involving image transmission through a limited-bandwidth channel: image retrieval and video teleconferencing.

Image retrieval. Image filing systems store and index terabytes of data. Compression is required to reduce the size of image files to a manageable level for both storage and transmission. Sorting through images, especially from remote locations over bandwidth-limited communication channels, is most efficiently achieved via progressive transmission systems, so that the user can quickly recognize unwanted images and terminate transmission early (Frajka, Sherwood, & Zeger, 1997; To, Lau, & Green, 2001; Tsumura, Endo, Haneishi, & Miyake, 1996; Wang & Bovik, 2001). If the point of gaze is known, then the highest-resolution information can be acquired for that location first, with lower resolution being sent elsewhere (Bolt, 1984; To et al., 2001).

Video teleconferencing. Video teleconferencing is the audio and video communication of two or more people in different locations; typically there is only one user at a time at each node. It frequently involves sending video images over a standard low-bandwidth ISDN communication link (64 or 128 kb/s) or other low-bandwidth medium. Transmission delays can greatly disrupt communication, and with current systems, frame rates of only 5 frames/s at a resolution of 320 x 240 pixels are common. In order to achieve better frame rates, massive compression is necessary. The video sent in teleconferencing is highly structured (Maeder, Diederich, & Niebur, 1996) in that the transmitted image usually consists of a face or of the head and shoulders, and the moving parts of the image are the eyes and mouth, which, along with the nose, constitute the most looked-at area of the face (Spoehr & Lehmkuhle, 1982). Thus it makes sense to target faces for transmission in a resolution higher than that of the rest of the image (Basu & Wiebe, 1998).
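Comparing the raw data rate of even this modest video format with the capacity of the channel shows how large the required compression ratio is (a minimal illustrative calculation):

```python
# Raw data rate of the teleconferencing format quoted above vs. a 128-kb/s
# ISDN channel (illustrative arithmetic; 24-bit color assumed).
raw_bps = 320 * 240 * 24 * 5                 # pixels x bit depth x frames/s
print(f"raw video: {raw_bps / 1e6:.1f} Mbit/s")          # 9.2 Mbit/s
print(f"compression needed: ~{raw_bps / 128e3:.0f}:1")   # ~72:1
```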

Development of GCMRDs for video teleconferencing has already begun. Kortum and Geisler (1996a) first implemented a GCMRD system for still images of faces, and this was followed up with a video-based system (Geisler & Perry, 1998). Sandini et al. (1996) and Sandini, Questa, Scheffer, Dierickx, and Mannucci (2000) have implemented a stationary retina-like multiresolutional camera for visual communication by deaf people by videophone, with sufficient bandwidth savings that a standard phone line can be used for transmission.

Medicine

Medical imagery is highly demanding of display fidelity and resolution. Fast image updating is also important in many such applications in order to maintain hand-eye coordination.

Telemedicine. This category includes teleconsultation with fellow medical professionals to get a second opinion as well as telediagnosis and telesurgery by remote doctors and surgeons. Telediagnosis involves inspection of a patient, either by live video or other medical imagery such as X rays, and should benefit from the time savings provided by multiresolutional image compression (Honniball & Thomas, 1999). Telesurgery involves the remote manipulation of surgical instruments. An example would be laparoscopy, in which a doctor operates on a patient through small incisions, cannot directly see or manipulate the surgical instrument inside the patient, and therefore relies on video feedback. This is essentially telesurgery, whether the surgeon is in the same room or on another continent (intercontinental surgery was first performed in 1993; Rovetta et al., 1995). Teleconsultation may tolerate some loss of image fidelity, whereas in telediagnosis or telesurgery the acceptable level of compression across the entire image is more limited (Cabral & Kim, 1996; Hiatt, Shabot, Phillips, Haines, & Grant, 1996). Furthermore, telesurgery requires fast transmission rates to provide usable video and tactile feedback, because nontrivial delays can degrade surgeons' hand-eye coordination (Thompson, Ottensmeyer, & Sheridan, 1999). Thus real-time foveated display techniques, such as progressive transmission, could potentially be used to reduce bandwidth to useful levels (Bolt, 1984).

Medical simulations and displays. As with flight and driving simulators, medical simulations can save many lives. Surgical residents can practice a surgical procedure hundreds of times before they see their first patient. Simple laparoscopic surgery simulators have already been developed for training. As medical simulations develop and become more sophisticated, their graphical needs will increase to the point that GCMRDs will provide important bandwidth savings. Levoy and Whitaker (1990) have already shown the utility of gaze-contingent volume rendering of medical data sets. Gaze tracking could also be useful in controlling composite displays consisting of many different digital images, such as the patient's computerized tomography (CT) or magnetic resonance imaging (MRI) scans with real-time video images, effectively giving the surgeon "x-ray vision." Yoshida, Rolland, and Reif (1995a, 1995b) suggested that one method of accomplishing such fusion is to present CT, MRI, or ultrasound scans inside gaze-contingent insets, with the "real" image in the background.

Robotics and Automation

Having both a wide field of view and an area of high resolution at the "focus of attention" is extremely useful in the development of artificial vision systems. Likewise, reducing the visual processing load by decreasing resolution in the periphery is of obvious value in artificial vision. High-resolution information in the center of vision is useful for object recognition, and lower-resolution information in the periphery is still useful for detecting motion. Certain types of multiresolutional displays (e.g., those involving log-polar mapping) make it easier to determine heading, motion, and time to impact than do displays using Cartesian coordinates (Dias, Araujo, Paredes, & Batista, 1997; Kim, Shin, & Inoguchi, 1995; Panerai, Metta, & Sandini, 2000; Shin & Inoguchi, 1994).
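For readers unfamiliar with log-polar mapping, a minimal resampling sketch may be helpful. The function and parameter names are illustrative assumptions, and real log-polar sensors integrate over growing receptive fields rather than point-sampling as done here:

```python
import numpy as np

def log_polar_sample(img, center_yx, n_rings=64, n_wedges=128, r_min=2.0):
    """Toy log-polar resampling of a 2-D grayscale array (nearest neighbor).
    Ring radii grow exponentially, so sampling density falls off with
    eccentricity from the chosen center, much as in the retina."""
    h, w = img.shape
    r_max = min(h, w) / 2.0
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((center_yx[0] + rr * np.sin(aa)).round().astype(int), 0, h - 1)
    xs = np.clip((center_yx[1] + rr * np.cos(aa)).round().astype(int), 0, w - 1)
    return img[ys, xs]   # (n_rings, n_wedges): rows = eccentricity, cols = angle
```

Part of the appeal for robotics is that, in log-polar coordinates, looming toward the fixated point appears as an approximately uniform shift along the ring axis, which simplifies time-to-impact estimation.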

RESEARCH AND DEVELOPMENT ISSUES RELATED TO GCMRDS

Although ideally GCMRDs should be implemented in a manner undetectable to the observer (see Loschky, 2003, for an existence proof for such a display), in practice such a display may not be feasible or, indeed, needed for most purposes. The two main sources of detectable artifacts in GCMRDs are image degradation produced by the characteristics of multiresolutional images and perceptible image motion resulting from image updating. Accordingly, we summarize the available empirical evidence for each of these topics and provide guidelines and recommendations for developers of GCMRDs to the extent possible. However, many key issues remain unresolved or even unexplored. Thus an important function of the present review is to highlight key questions for future human factors research on issues related to GCMRDs, as summarized in Table 2.

Research and Development Issues with Multiresolutional Images

Methods of producing multiresolutional images. Table 3 summarizes a large body of work focused on developing methods for producing multiresolutional images. Our review of the literature suggests that the majority of research and development efforts related to GCMRDs have focused on this issue. The methods that have been developed include (a) computer-generated images (e.g., rendering 2-D or 3-D models) with space-variant levels of detail; (b) algorithms for space-variant filtering of constant high-resolution images; (c) projection of different levels of resolution to different viewable monitors (e.g., in a wraparound array of monitors), or the projection of different resolution channels and/or display areas to each eye in a head-mounted display; and (d) space-variant multiresolutional sensors and cameras. All of these approaches have the potential to make great savings in either processing or bandwidth, although some of the methods are also computationally complex.

Using models of vision to produce multiresolutional images. In most cases, the methods of multiresolutional image production in Table 3 have been based on neurophysiological or psychophysical studies of peripheral vision, under the assumption that these research results will scale up to the more complex and natural viewing conditions of GCMRDs. This assumption has been explicitly tested in only a few studies that investigated the human factors characteristics of multiresolutional displays (Duchowski & McCormick, 1998; Geri & Zeevi, 1995; Kortum & Geisler, 1996b; Loschky, 2003; Luebke et al., 2000; Peli & Geri, 2001; Sere, Marendaz, & Herault, 2000; Yang et al., 2001), but the results have been generally supportive. For example, Loschky tested the psychophysically derived Yang et al. resolution drop-off function, shown in Figure 2A, by creating multiresolutional images based on it and on functions with steeper and shallower drop-offs (as in Figure 2D). Consistent with predictions, a resolution drop-off shallower than that in Figure 2A was imperceptibly blurred, but steeper drop-offs were all perceptibly degraded compared with a constant high-resolution control condition. Furthermore, these results were consistent across multiple dependent measures, both objective (e.g., blur detection and fixation durations) and subjective (e.g., image quality ratings).

However, there are certain interesting caveats. Several recent studies (Loschky, 2003; Peli & Geri, 2001; Yang et al., 2001) have noted that sensitivity to peripheral blur in complex images is somewhat less than predicted by contrast sensitivity functions (CSFs) derived from studies using isolated grating patches. Those authors have argued that this lower sensitivity during complex picture viewing may be attributable to lateral masking from nearby picture areas. In contrast, Geri and Zeevi (1995) used drop-off functions based on psychophysical studies using vernier acuity tasks and found that sensitivity to peripheral blur in complex images was greater than predicted. They attributed this to the more global resolution discrimination task facing their participants in comparison with the positional discrimination task in vernier acuity. Thus it appears that the appropriate resolution drop-off functions for GCMRDs should be slightly steeper than suggested by CSFs but shallower than suggested by vernier acuity functions. Consequently, to create undetectable GCMRDs, it is still advisable to fine-tune previously derived psychophysical drop-off functions based on human factors testing. Similarly, working out a more complete description of the behavioral effects of different detectable drop-off rates in different tasks is an important goal for future human factors research.

Discrete versus continuous resolution drop-off GCMRDs. A fundamental distinction exists between methods in which image resolution reduction is produced by having discrete levels of resolution (discrete drop-off methods; e.g., Loschky & McConkie, 2000, 2002; Parkhurst et al., 2000; Reingold & Loschky, 2002; Shioiri & Ikeda, 1989; Watson et al., 1997) and methods in which resolution drops off gradually with distance from a point or region of highest resolution (continuous drop-off methods; e.g., Duchowski & McCormick, 1998; Geri & Zeevi, 1995; Kortum & Geisler, 1996b; Loschky, 2003; Luebke et al., 2000; Peli & Geri, 2001; Sere et al., 2000; Yang et al., 2001). Of course, using a sufficient number of discrete regions of successively reduced resolution approximates a continuous drop-off method. Figure 1 illustrates these two approaches. Figure 1C has a high-resolution area around the point of gaze with lower resolution elsewhere, whereas in Figure 1D the resolution drops off continuously with distance from the point of gaze.
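A minimal sketch of the two approaches follows, under the simplifying assumption that lower resolution is produced by Gaussian blurring of a grayscale image; real GCMRDs typically use pyramid or wavelet coding instead, and the function name and blur levels here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img, gaze_yx, radius_px, discrete=True):
    """img: 2-D float array. Discrete: sharp D-AOI on a uniformly blurred
    background (cf. Figure 1C). Continuous: blur grows with distance from
    the point of gaze (cf. Figure 1D)."""
    yy, xx = np.indices(img.shape)
    dist = np.hypot(yy - gaze_yx[0], xx - gaze_yx[1])
    if discrete:
        out = gaussian_filter(img, sigma=3.0)       # low-resolution periphery
        inside = dist <= radius_px
        out[inside] = img[inside]                   # full resolution in the D-AOI
        return out
    # Continuous: interpolate between progressively blurred copies.
    sigmas = [0.0, 1.0, 2.0, 4.0]                   # illustrative blur levels
    stack = [img] + [gaussian_filter(img, s) for s in sigmas[1:]]
    level = np.clip(dist / radius_px, 0.0, len(sigmas) - 1.0)
    lo = level.astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    w = level - lo
    return np.choose(lo, stack) * (1.0 - w) + np.choose(hi, stack) * w
```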

These two approaches are further illustrated in Figure 2. As shown in Figure 2A, we assume that there is an ideal useful resolution function that is highest at the fovea and drops off at more peripheral locations. Such functions are well established for acuity and contrast sensitivity (e.g., Peli et al., 1991; Pointer & Hess, 1989; Thibos et al., 1996). Nevertheless, the possibility is left open that the "useful resolution" function may be different from these in cases of complex, dynamic displays, perhaps on the basis of attentional allocation factors (e.g., Yeshurun & Carrasco, 1999). In Figure 2B through 2E, we superimpose step functions representing the discrete drop-off methods and smooth functions representing the continuous drop-off method.

With the discrete drop-off method there is a high-resolution D-AOI centered at the point of gaze. An example in which a biresolutional display would be expected to be just barely undetectably blurred is shown in Figure 2B. Although much spatial frequency information is dropped out of the biresolutional image, it should be imperceptibly blurred because the spatial frequency information removed is always below threshold. If such thresholds can be established (or estimated from existing psychophysical data) for a sufficiently large number of levels of resolution, they can be used to plot the resolution drop-off function, as shown in Figure 2C. Ideally, such a discrete resolution drop-off GCMRD research program would (a) test predictions of a model of human visual sensitivity that could be used to interpolate and extrapolate from the data, (b) parametrically and orthogonally vary the size of the D-AOI and level of resolution outside it, and (c) use a universally applicable resolution metric (e.g., cycles per degree). In fact, several human factors studies have used discrete resolution drop-off GCMRDs (Loschky & McConkie, 2000, 2002; Parkhurst et al., 2000; Shioiri & Ikeda, 1989; Watson et al., 1997), and each identified one or more combinations of D-AOI size and peripheral resolution that did not differ appreciably from a full high-resolution control condition. However, none of those studies meets all three of the previously stated criteria, and thus all are of limited use for plotting a widely generalizable resolution drop-off function for use in GCMRDs.

A disadvantage of the discrete resolution drop-off method, as compared with the continuous drop-off method, is that it introduces one or more relatively sharp resolution transitions, or edges, into the visual field, which may produce perceptual problems. Thus a second question concerns whether such problems occur and, if so, whether more gradual blending between different resolution regions would eliminate them. Anecdotal evidence suggests that blending is useful, as suggested by a simulator study in which it was reported that having nonexistent or small blending regions was very distracting, whereas a display with a larger blending ring was less bothersome (Baldwin, 1981). However, another simulator study found no difference between two different blending ring widths in a visual search task (Browder, 1989), and more recent studies have found no differences between blended versus sharp-edged biresolutional displays in terms of detecting peripheral image degradation (Loschky & McConkie, 2000, Experiment 3) or initial saccadic latencies to peripheral targets (Reingold & Loschky, 2002). Thus further research on the issue of boundary-related artifacts using varying levels of blending and multiple dependent measures is needed to settle this question.

A clear advantage of the continuous resolution drop-off method is that to the extent that it matches the visual resolution drop-off of the retina, it should provide the greatest potential image resolution savings. Another advantage is illustrated in Figure 2D, which displays two resolution drop-off functions that differ from the ideal on only a single parameter, thus making it relatively easy to determine the best fit. However, the continuous drop-off method also has a disadvantage relative to the discrete drop-off approach. As shown in Figure 2E, with a continuous drop-off function, if the loss of image resolution at some retinal eccentricity causes a perceptual problem, it is difficult to locate the eccentricity where this occurs because image resolution is reduced across the entire picture. With the discrete drop-off method, it is possible to probe more specifically to identify the source of such a retinal/image resolution mismatch. This can be accomplished by varying either the eccentricity at which the drop-off (the step) occurs or the level of drop-off at a given eccentricity. Furthermore, the discrete drop-off method can also be a very efficient method of producing multiresolutional images under certain conditions. When images are represented using multilevel coding methods such as wavelet decomposition (Moulin, 2000), producing discrete drop-off multiresolutional images is simply a matter of selecting which levels of coefficients are to be included in reconstructing the different regions of the image (e.g., Frajka et al., 1997).
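As a concrete illustration of the wavelet route, the sketch below uses the PyWavelets package to zero detail coefficients outside concentric regions around the point of gaze. The function name and the circular region policy are assumptions made for illustration, not the method of any particular cited system:

```python
import numpy as np
import pywt

def wavelet_foveate(img, gaze_yx, radii_px, wavelet="haar"):
    """Discrete drop-off by coefficient selection: detail level k (1 = finest)
    is kept only within radii_px[k - 1] pixels of gaze, so finer detail
    survives only nearer the point of gaze. radii_px must be increasing."""
    levels = len(radii_px)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # coeffs = [approx, (H, V, D) at coarsest, ..., (H, V, D) at finest]
    for k in range(1, levels + 1):                  # k = 1 is the finest level
        scale = 2 ** k                              # coefficient-grid subsampling
        h_band, v_band, d_band = coeffs[-k]
        yy, xx = np.indices(h_band.shape)
        far = np.hypot(yy - gaze_yx[0] / scale,
                       xx - gaze_yx[1] / scale) > radii_px[k - 1] / scale
        for band in (h_band, v_band, d_band):
            band[far] = 0.0                         # drop unresolvable detail
    return pywt.waverec2(coeffs, wavelet)

# e.g., keep the finest detail within 60 px of gaze, coarser detail farther out:
# out = wavelet_foveate(img, gaze_yx=(240, 320), radii_px=(60, 120, 240))
```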

In deciding whether to produce continuous or discrete drop-off multiresolutional images, it is also important to note that discrete levels of resolution may cause more problems with animated images than with still images (Stampe & Reingold, 1995). This may involve both texture and motion perception, and therefore studies on "texture-defined motion" (e.g., Werkhoven, Sperling, & Chubb, 1993) may be informative for developers of live video or animated GCMRDs (Luebke et al., 2002). Carefully controlled human factors research on this issue in the context of GCMRDs is clearly needed.

Color resolution drop-off. Importantly, the visual system also shows a loss of color resolution with retinal eccentricity. Although numerous studies have investigated this function and found important parallels to monochromatic contrast sensitivity functions (e.g., Rovamo & Iivanainen, 1991), to our knowledge this property of the visual system has been largely ignored rather than exploited by developers and investigators of GCMRDs (but see Watson et al., 1997, Experiment 2). We would encourage developers of multiresolutional image processing algorithms to exploit this color resolution drop-off in order to produce even greater bandwidth and processing savings.

Research and Development Issues Related to D-AOI Updating

We now shift our focus to issues related to updating the D-AOI. In either a continuous or a discrete drop-off display, every time the viewer's gaze moves, the center of high resolution must be quickly and accurately updated to match the viewer's current point of gaze. Of critical importance is that there are several options as to how and when this updating occurs and that these can affect human performance. Unfortunately, much less research has been conducted on these issues than on those related to the multiresolutional characteristics of the images. Accordingly, our following discussion primarily focuses on issues that should be explored by future research. Nevertheless, we attempt to provide developers with a preliminary analysis of the available options.

Overview of D-AOI movement methods. Having made the image multiresolutional, the next step is to update the D-AOI position dynamically so that it corresponds to the point of gaze. As indicated by the title of this article, we are most interested in the use of gaze-tracking information to position the D-AOI, but other researchers have proposed and implemented systems that use other means of providing position information. Thus far, the most commonly proposed means of providing positional information for the D-AOI include the following:

(a) true GCMRD, which typically combines eye and head tracking to specify the point of gaze as the basis for image updating; gaze position is determined by both the eye position in head coordinates and the head position in space coordinates (Guitton & Volle, 1987), a combination sketched in code following this list;

(b) methods using pointer-device input that approximates gaze tracking with lower spatial and temporal resolution and accuracy (e.g., head- or hand-contingent D-AOI movement); and

(c) methods that try to predict where gaze will move without requiring input from the user.
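A minimal geometric sketch of option (a), combining the eye-in-head gaze direction with head pose to recover the point of gaze on a display plane (the coordinate conventions and function names are illustrative assumptions):

```python
import numpy as np

def gaze_dir_in_head(yaw, pitch):
    """Unit gaze direction from eye-in-head yaw and pitch angles (radians)."""
    return np.array([np.cos(pitch) * np.sin(yaw),    # x: rightward
                     np.sin(pitch),                  # y: upward
                     np.cos(pitch) * np.cos(yaw)])   # z: toward the screen

def point_of_gaze(head_pos, head_rot, eye_yaw, eye_pitch, screen_z):
    """Intersect the world-frame gaze ray with a screen plane at z = screen_z.
    head_pos: (3,) head position; head_rot: 3x3 head orientation matrix.
    Assumes the gaze ray is not parallel to the screen plane."""
    gaze_dir = head_rot @ gaze_dir_in_head(eye_yaw, eye_pitch)
    t = (screen_z - head_pos[2]) / gaze_dir[2]       # ray-plane parameter
    return head_pos + t * gaze_dir                   # (x, y, screen_z)
```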

Gaze-contingent D-AOI movement. Gaze control is generally considered to be the most natural method of D-AOI movement because it does not require any act beyond making normal eye movements. No training is involved. Also, if the goal is to remove from the display any information that the retina cannot resolve, making the updating process contingent on the point of gaze allows maximum information reduction. The most serious obstacle for developing systems employing GCMRDs is the current state of gaze-tracking technology.

Consider the following specifications of a gaze-tracking system that would probably meet the requirements of the most demanding GCMRD applications: (a) plug and play; (b) unobtrusive (e.g., a remote system with no physical attachment to the observer); (c) accurate (e.g., <0.5° error); (d) high temporal resolution (e.g., 500-Hz sampling rate) to minimize updating delays; (e) high spatial resolution and low noise to minimize unnecessary image updating; (f) ability to determine gaze position in a wraparound 360° field of view; and (g) affordable. In contrast, current gaze-tracking technologies tend to have trade-offs among factors such as ease of operation, comfort, accuracy, spatial and temporal resolution, field of view, and cost (Istance & Howarth, 1994; Jacob, 1995; Young & Sheena, 1975). Thus we are faced with a situation in which the most natural and perceptually least problematic implementation of a GCMRD may be complex and uncomfortable to use and/or relatively expensive and thus impractical for some applications.

Nevertheless, current high-end eye trackers are approaching practical usefulness, if not yet meeting ideal specifications, and are more than adequate for investigating many of the relevant human factors variables crucial for developing better GCMRDs. In addition, some deficiencies in present gaze-tracking technology may be overcome by modifications to the designs of GCMRDs (e.g., enlarging the high-resolution area to compensate for problems caused by lack of spatial or temporal accuracy in specifying the point of gaze). Furthermore, recent developments in gaze-tracking technology (e.g., Matsumoto & Zelinsky, 2000; Stiefelhagen, Yang, & Waibel, 1997) suggest that user-friendly systems (e.g., remote systems requiring no physical contact with the user) are becoming faster and more accurate. In addition, approaches that include prediction of the next gaze location based on the one immediately prior (Tannenbaum, 2000) may be combined with prediction based on salient areas in the image (Parkhurst, Law, & Niebur, 2002) to improve speed and accuracy. Moreover, as more applications come to use gaze tracking within multimodal human-computer interaction systems (e.g., Sharma, Pavlovic, & Huang, 1998), gaze-tracking devices should begin to enjoy economy of scale and become more affordable. However, even at current prices, levels of comfort, and levels of spatial and temporal resolution/accuracy, certain applications depend on the use of GCMRDs and work quite well (e.g., flight simulators).

Head-contingent D-AOI movement. At the present time, head-contingent D-AOI movement seems generally better than gaze-contingent D-AOI movement in terms of comfort, relative ease of operation and calibration, and lower price. However, it is clearly worse in terms of resolution, accuracy, and speed of the D-AOI placement. This is because head movements often do not occur for gaze movements to targets closer than 20° (Guitton & Volle, 1987; Robinson, 1979). Thus, with a head-contingent D-AOI, if the gaze is moved to a target within 20° eccentricity, the eyes will move but the head may not--nor, consequently, will the D-AOI. This would result in lower spatial and temporal resolution and lower accuracy in moving the D-AOI to the point of gaze, and it could cause perceptual and performance decrements (e.g., increased detection of peripheral image degradation and longer fixation durations and search times).

Hand-contingent D-AOI movement. Likewise, hand-contingent D-AOI movement, although easy and inexpensive to implement (e.g., with mouse input), may suffer from slow D-AOI movement. This is because hand movements tend to rely on visual input for targeting. In pointing movements, the eyes are generally sent to the target first, and the hand follows after a lag of about 70 ms (e.g., Helsen, Elliott, Starkes, & Ricker, 1998), with visual input also being used to guide the hand toward the end of the movement (e.g., Heath, Hodges, Chua, & Elliott, 1998). Similar results have been shown for cursor movement on CRT displays through manipulation of a mouse, touch pad, or pointing stick (Smith, Ho, Ark, & Zhai, 2000). The Smith et al. study also found another pattern of eye-hand coordination, in which the eyes led the cursor only slightly, continually monitoring its progress. All of this suggests that perceptual problems may occur and task performance may be slowed because the eyes must be sent into the low-resolution area ahead of the hand; the eyes (and hand) must make shorter-than-normal excursions in order to avoid going into the low-resolution area; or the eyes must follow the D-AOI at a lower-than-normal velocity.

Predictive D-AOI movement. A very different approach is to move the D-AOI predictively. This can be done based on either empirical eye movement samples (Duchowski & McCormick, 1998; Stelmach & Tam, 1994; Stelmach, Tam, & Hearty, 1991) or saliency-predicting computer algorithms (Milanese, Wechsler, Gill, Bost, & Pun, 1994; Parkhurst et al., 2002; Tanaka, Plante, & Inoue, 1998). The latter option seems much more practical for producing D-AOIs for an infinite variety of images. However, a fundamental problem with the entire predictive approach to D-AOI movement is that it may often fail to accurately predict the exact location that a viewer wants to fixate at a given moment in time (Stelmach & Tam, 1994). Nevertheless, the predictive D-AOI approach may be most useful when the context and potential areas of interest are extremely well defined, such as in video teleconferencing (Duchowski & McCormick, 1998; Maeder et al., 1996). In this application, the attended area of interest (A-AOI) can generally be assumed to be the speaker's face, particularly, as noted earlier, the eyes, nose, and mouth (Spoehr & Lehmkuhle, 1982). An even simpler approach in video teleconferencing is simply to have a D-AOI that is always at the center of the image frame (Woelders, Frowein, Nielsen, Questa, & Sandini, 1997), based on the implied assumption that people spend most of their time looking there, which is generally true (e.g., Mannan, Ruddock, & Wooding, 1997).

Causes of D-AOI update delays. Depending on the method of D-AOI movement one chooses, the delays in updating the D-AOI position will vary. As mentioned earlier, such delays constitute another major issue facing designers of GCMRDs. Ideally, image updating would place the highest resolution at the point of gaze instantaneously. However, such a goal is virtually impossible to achieve, even with the fastest GCMRD implementation. The time required to update the image in response to a change in gaze position depends on a number of different processes, including the method used to update the location of the D-AOI (e.g., gaze contingent, head contingent, hand contingent), multiresolutional image production delays, transmission delays, and delays associated with the display method.

In most GCMRD applications, the most important update rate bottleneck is the time to produce a new multiresolutional image. If it is necessary to generate and render a 3-D multiresolutional image, or to filter a constant high-resolution image, the image processing time can take anywhere between 25 to 50 ms (Geisler & Perry, 1999; Ohshima et al., 1996) and 130 to 150 ms (Thomas & Geltmacher, 1993) or longer, depending on the complexity of the algorithm being used. Thus increasing the speed of multiresolutional image processing should be an important goal for designers working on producing effective GCMRDs. In general, image-processing times can be greatly reduced by implementing them in hardware rather than software. The multiresolutional camera approach, which can produce an image in as little as 10 ms (Sandini, 2001), is a good illustration of such a hardware implementation. In this case, however, there is an initial delay caused by rotating the multiresolutional camera to its new position. This can be done using mechanical servos, the speed of which depends on the weight of the camera, or by leaving the camera stationary and rotating a mirror with a galvanometer, which can move much more quickly.
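A simple way to reason about the total delay is to sum worst-case component latencies. The budget below is purely illustrative; the component values are assumptions of the magnitudes quoted above, not measurements of any particular system:

```python
# Illustrative worst-case D-AOI update-delay budget (all values assumed).
delays_ms = {
    "eye-tracker sampling (250 Hz)": 4,    # up to one sampling interval
    "gaze-event detection":          4,
    "multiresolutional rendering":  35,    # within the 25-50 ms range cited above
    "transmission":                  5,
    "display refresh (60 Hz)":      17,    # up to one frame
}
total = sum(delays_ms.values())
print(f"worst-case update delay ~ {total} ms")       # ~65 ms
```

A total near 65 ms already reaches well into a typical fixation, which is why the perceptual consequences discussed next depend heavily on when in the fixation the update lands.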

Problems caused by D-AOI updating delays. There are at least two ways in which delays in updating the D-AOI position can cause perceptual difficulties. First, if the D-AOI is not updated quickly following a saccade, the point of gaze may initially be on a degraded region. Luckily, because of saccadic suppression, the viewer's visual sensitivity is lower at the beginning of a fixation (e.g., Ross et al., 2001), and thus brief delays in D-AOI updating may not be perceived. However, stimulus processing rapidly improves over the period of 20 to 80 ms after the start of a fixation, and thus longer delays may allow perception of the degraded image (McConkie & Loschky, 2002). Second, when updates occur well into a fixation (e.g., 70 ms or later), the update may produce the perception of motion, and this affects perception and task performance (e.g., Reingold & Stampe, 2002; van Diepen & Wampers, 1998).

Simulator studies have shown that delays between gaze movements and the image update result in impaired perception and task performance and, in some cases, can cause simulator sickness (e.g., Frank et al., 1988; but see Draper et al., 2001). Turner (1984) compared delays ranging from 130 to 280 ms and found progressive decrements in both path-following and target-identification tasks with increasing levels of throughput delay. In addition, two more recent studies demonstrated that fixation durations increased with an increase in image updating delays (Hodgson, Murray, & Plummer, 1993; Loschky & McConkie, 2000, Experiment 6).

QUESTIONS FOR FUTURE RESEARCH

In this section we outline several important issues for future human factors evaluation of GCMRDs. The first set of issues concerns the useful resolution function, the second set concerns issues that arise when producing multiresolutional images, and the third set of issues concerns D-AOI updating (see Table 2).

Although the resolution drop-off functions shown in Figure 2A are a good starting point, an important goal for future human factors research should be to further explore such functions and the variables that may affect them. These may include image and task variables such as lateral masking (Chung, Levi, & Legge, 2001), attentional cuing (Yeshurun & Carrasco, 1999), and task difficulty (Bertera & Rayner, 2000; Loschky & McConkie, 2000, Experiment 5; Pomplun, Reingold, & Shen, 2001); and participant variables such as user age (e.g., Ball, Beard, Roenker, Miller, & Griggs, 1988) and expertise (Reingold, Charness, Pomplun, & Stampe, 2001). In addition, human factors research should extend the concept of multiresolutional images to the color domain. For example, can a GCMRD be constructed using a hue resolution drop-off function that is just imperceptibly different from a full-color image and that has a substantial information reduction? Furthermore, if the drop-off is perceptible, what aspects of task performance, if any, are negatively impacted? Finally, further research should quantify the perception and performance costs associated with removing above-threshold peripheral resolution (i.e., detectably degraded GCMRDs; for related studies and discussion see Kortum & Geisler, 1996b; Loschky, 2003; Loschky & McConkie, 2000, 2002; Parkhurst et al., 2000; Shioiri & Ikeda, 1989; Watson et al., 1997).

Human factors research should assist GCMRD developers by exploring the perception and performance consequences of important implementation options. One of the most fundamental choices is whether to use a continuous or a discrete resolution drop-off function. These two methods should be compared with both still and animated images. Numerous additional design choices should also be explored empirically. For example, it is known that the shape of the visual field is asymmetrical (e.g., Pointer & Hess, 1989). This raises the question of whether the shape of the D-AOI (ellipse vs. circle vs. rectangle) in a biresolutional display has any effects on users' perception and performance. Likewise, any specific method of multiresolutional image production may require targeted human factors research. For example, in the case of rendering 2-D or 3-D models with space-variant levels of detail (e.g., in VR), it has been anecdotally noted that object details (e.g., doors and windows in a house) appear to pop in and out as a function of their distance from the point of gaze (Berbaum, 1984; Spooner, 1982). It is important to explore the perception and performance costs associated with such "popping" phenomena. Similar issues can be identified with any of the other methods of multiresolutional image production (see Table 3).

Human factors research into issues related to D-AOI updating is almost nonexistent (but see Frank et al., 1988; Grunwald & Kohn, 1994; Hodgson et al., 1993; Loschky & McConkie, 2000, Experiment 6; McConkie & Loschky, 2002; Turner, 1984). Two key issues for future research concern the D-AOI control method and the D-AOI update delay. Given that a number of different methods of moving the D-AOI have been suggested and implemented (i.e., gaze-, head-, and hand-contingent methods and predictive movement), an important goal for future research is to contrast these methods in terms of their perception and performance consequences. The second key question concerns the effects of a systematic increase in update delay on different perception and performance measures in order to determine when and how updating delays cause problems. Clearly, the chosen D-AOI control method will influence the update delay and resultant problems. Consequently, in order to compensate for a D-AOI control method having poor spatial or temporal accuracy and/or resolution, the size of the area of high resolution may have to be enlarged (e.g., Loschky & McConkie, 2000, Experiment 6).

CONCLUSIONS

The present review is primarily aimed at two audiences: (a) designers and engineers working on the development of applications and technologies related to GCMRDs and (b) researchers investigating relevant human factors variables. Given that empirical validation is an integral part of the development of GCMRDs, these two groups partially overlap, and collaborations between academia and industry in this field are becoming more prevalent. Indeed, we hope that the present review may help facilitate such interdisciplinary links. Consistent with this goal, we recommend that studies of GCMRDs should, whenever appropriate, report information both on their effects on human perception and performance and on bandwidth and processing savings. To date, such dual reporting has been rare (but see Luebke et al., 2000; Murphy & Duchowski, 2001; Parkhurst et al., 2000).

As is evident from this review, research into issues related to GCMRDs is truly in its infancy, with many unexplored and unresolved questions and few firm conclusions. Nevertheless, the preliminary findings we reviewed clearly demonstrate the potential utility and feasibility of GCMRDs (see also Parkhurst & Niebur, 2002, for a related review and discussion). The ultimate goal for GCMRDs is to produce savings by substantially reducing peripheral image resolution and/or detail while remaining, to the user, undetectably different from a normal image. That this is achievable has recently been shown in a few studies using briefly flashed (Geri & Zeevi, 1995; Peli & Geri, 2001; Sere et al., 2000; Yang et al., 2001) and gaze-contingent (Loschky, 2003) presentation conditions. Other studies (see Table 1) have shown that using GCMRDs can result in substantial savings in processing and/or bandwidth. Thus the GCMRD concept is now beginning to be validated.
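The scale of the savings summarized in Table 1 follows directly from the geometry of the resolution drop-off. As a rough, purely illustrative calculation, the sketch below integrates an assumed hyperbolic pixel-density falloff over a hypothetical 60 x 40 deg display and compares the resulting pixel budget with that of a uniformly high-resolution image; the display geometry and constants are assumptions, not measurements.

import numpy as np

def pixel_savings(width_deg=60.0, height_deg=40.0, ppd_max=30.0, e2=2.3):
    # Ratio of pixels in a constant high-resolution image to pixels in a
    # foveated image whose local pixel density falls off hyperbolically
    # with eccentricity from a central gaze point.
    step = 0.1  # integration step, degrees
    xs = np.arange(-width_deg / 2, width_deg / 2, step)
    ys = np.arange(-height_deg / 2, height_deg / 2, step)
    ex, ey = np.meshgrid(xs, ys)
    ecc = np.hypot(ex, ey)                    # eccentricity of each patch
    local_ppd = ppd_max / (1.0 + ecc / e2)    # assumed density drop-off
    foveated = np.sum((local_ppd ** 2) * step ** 2)  # pixels = density^2 x area
    uniform = (width_deg * ppd_max) * (height_deg * ppd_max)
    return uniform / foveated

print(f"~{pixel_savings():.0f} times fewer pixels in this illustrative case")

For this particular configuration the ratio comes out at roughly 40, which is of the same order as the 35-fold pixel reduction reported by Sandini et al. (2000) in Table 1.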

Furthermore, general perceptual disruptions and performance decrements have been shown to be caused by (a) peripheral degradation removing useful visual information or inserting distracting information (Geri & Zeevi, 1995; Kortum & Geisler, 1996b; Loschky, 2003; Loschky & McConkie, 2000, 2002; Parkhurst et al., 2000; Peli & Geri, 2001; Reingold & Loschky, 2002; Shioiri & Ikeda, 1989; Watson et al., 1997; Yang et al., 2001) and (b) D-AOI update delays (Frank et al., 1988; Grunwald & Kohn, 1994; Hodgson et al., 1993; Loschky & McConkie, 2000, Experiments 1 and 6; McConkie & Loschky, 2002; Turner, 1984; van Diepen & Wampers, 1998). Such studies illustrate the manner in which some performance costs associated with detectably degraded GCMRDs can be assessed.

Any application of GCMRDs must involve the analysis of trade-offs between computation and bandwidth savings and the degree and type of perception and performance decrements that would result. Ideally, for most tasks in which a GCMRD is appropriate, a set of conditions can be identified that will provide substantial computation and/or bandwidth reduction while still maintaining adequate, and perhaps even normal, task performance. Simply because an implementation results in a detectably degraded GCMRD does not mean that performance will deteriorate (Loschky & McConkie, 2000), and consequently performance costs must be assessed directly. Developers must set a clear performance-cost threshold as part of such an assessment. A prerequisite for this step in the design process is a clear definition of tasks that are critical and typical of the application (i.e., a task analysis). In addition, a consideration of the characteristics of potential users of the application is important.
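In practice, such a trade-off analysis reduces to choosing, among empirically evaluated configurations, the one that maximizes savings while keeping the measured performance cost below the developer's threshold. The sketch below makes this selection rule explicit; the configuration names, savings factors, costs, and threshold are all hypothetical.

def pick_configuration(candidates, max_cost):
    # candidates: (name, savings_factor, performance_cost) triples, where
    # performance_cost might be, e.g., the proportional increase in search
    # time relative to a full-resolution control condition.
    acceptable = [c for c in candidates if c[2] <= max_cost]
    return max(acceptable, key=lambda c: c[1]) if acceptable else None

# Hypothetical measurements for three peripheral degradation settings:
configs = [("mild", 3.0, 0.01), ("moderate", 8.0, 0.04), ("steep", 20.0, 0.15)]
print(pick_configuration(configs, max_cost=0.05))  # -> ('moderate', 8.0, 0.04)

Both columns of the trade-off, the savings and the measured cost, must come from the kinds of task-specific studies advocated throughout this review.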

The specific target application provides important constraints (e.g., budgetary) that are vital for determining the available development options. For example, whereas gaze-contingent D-AOI update is a feasible (and arguably the optimal) choice in the context of flight simulators, given the cost of gaze trackers such a method may not be an option for other applications, such as video teleconferencing and Internet image retrieval. Instead, hand-contingent and/or predictive D-AOI updating are likely to be the methods of choice for the latter applications.

Finally, as clearly demonstrated in the present article, human factors evaluation of relevant variables is vital for the development of the next generation of GCMRDs. The current review outlines a framework within which such research can be motivated, integrated, and evaluated. The human factors questions listed in the foregoing sections require investigating the perception and performance consequences of manipulated variables using both objective measures (e.g., accuracy, reaction time, saccade lengths, fixation durations) and subjective report measures (e.g., display quality ratings). Such investigations should be aimed at exploring the performance costs involved in detectably degraded GCMRDs and the conditions for achieving undetectably degraded GCMRDs. Although the issues and variables related to producing multiresolutional images and to moving the D-AOI were discussed separately, potential interactions and trade-offs between these variables should also be explored. As our review indicates, the vast majority of these issues related to the human factors of GCMRDs are yet to be investigated and therefore represent a fertile field for research.
TABLE 1: Examples of Processing and Bandwidth Savings Attributable to Use of Multiresolutional Images

3-D image rendering time: 4-5 times faster (Levoy & Whitaker, 1990; Murphy & Duchowski, 2001; Ohshima et al., 1996, p. 108)

Reduced polygons in 3-D model: 2-6 times fewer polygons, with greater savings at greater eccentricities and no difference in perceived resolution (Luebke et al., 2000)

Video compression ratio: 3 times greater compression ratio in the multiresolutional image, with greater savings for larger field-of-view images at the same maximum resolution (Geisler & Perry, 1999, p. 422)

Number of coefficients used in encoding a wavelet-reconstructed image: 2-20 times fewer coefficients needed in the multiresolutional image, depending on the size of the D-AOI and the level of peripheral resolution (Loschky & McConkie, 2000, p. 99)

Reduction of pixels needed in multiresolutional image: 35 times fewer pixels needed in the multiresolutional image as compared with a constant high-resolution image (Sandini et al., 2000, p. 517)

TABLE 2: Key Questions for Human Factors Research Related to GCMRDs

1. Can we construct just undetectable GCMRDs that maximize savings in processing and bandwidth while eliminating perception and performance costs? (Geri & Zeevi, 1995; Loschky, 2003; Luebke et al., 2000; Peli & Geri, 2001; Sere et al., 2000; Yang et al., 2001)

2. What are the perception and performance costs associated with removing above-threshold peripheral resolution in detectably degraded GCMRDs? (Geri & Zeevi, 1995; Kortum & Geisler, 1996b; Loschky, 2003; Loschky & McConkie, 2000, 2002; Parkhurst et al., 2000; Peli & Geri, 2001; Reingold & Loschky, 2002; Shioiri & Ikeda, 1989; Watson et al., 1997; Yang et al., 2001)

3. What is the optimal resolution drop-off function that should be used in guiding the construction of GCMRDs? (Geri & Zeevi, 1995; Loschky, 2003; Luebke et al., 2000; Peli & Geri, 2001; Sere et al., 2000; Yang et al., 2001)

4. What are the perception and performance costs and benefits associated with employing continuous vs. discrete resolution drop-off functions in still vs. full-motion displays? (Baldwin, 1981; Browder, 1989; Loschky, 2003; Loschky & McConkie, 2000, Experiment 3; Reingold & Loschky, 2002; Stampe & Reingold, 1995)

5. What are the perception and performance costs and benefits related to the shape of the D-AOI (ellipse vs. circle vs. rectangle) in discrete resolution drop-off GCMRDs? (No empirical comparisons to date)

6. What is the effect, if any, of lateral masking on detecting peripheral resolution drop-off in GCMRDs? (Loschky, 2003; Peli & Geri, 2001; Yang et al., 2001)

7. What is the effect, if any, of attentional cuing on detecting peripheral resolution drop-off in GCMRDs? (Yeshurun & Carrasco, 1999)

8. What is the effect, if any, of task difficulty on detecting peripheral resolution drop-off in GCMRDs? (Bertera & Rayner, 2000; Loschky & McConkie, 2000, Experiment 5; Pomplun et al., 2001)

9. Do older users of GCMRDs have higher resolution drop-off thresholds than do younger users? (Ball et al., 1988; Sekuler, Bennett, & Mamelak, 2000)

10. Do experts have lower resolution drop-off thresholds than do novices when viewing multiresolutional images relevant to their skill domain? (Reingold et al., 2001)

11. Can a hue resolution drop-off that is just imperceptibly degraded be used in the construction of GCMRDs? (Watson et al., 1997, Experiment 2)

12. What are the perception and performance costs and benefits associated with employing the different methods of producing multiresolutional images? (See Table 3)

13. How do different methods of moving the D-AOI (i.e., gaze-, head-, and hand-contingent methods and predictive movement) compare in terms of their perception and performance consequences? (No empirical comparisons to date)

14. What are the effects of a systematic increase in update delay on different perception and performance measures? (Draper et al., 2001; Frank et al., 1988; Grunwald & Kohn, 1994; Hodgson et al., 1993; Loschky & McConkie, 2000, Experiments 1 & 6; McConkie & Loschky, 2002; Reingold & Stampe, 2002; Turner, 1984; van Diepen & Wampers, 1998)

15. Is it possible to compensate for poor spatial and temporal accuracy/resolution of D-AOI update by decreasing the magnitude and scope of peripheral resolution drop-off? (Loschky & McConkie, 2000, Experiment 6)

TABLE 3: Methods of Combining Multiple Resolutions in a Single Display

Method: Rendering 2-D or 3-D models with multiple levels of detail and/or polygon simplification
Suggested application areas: Flight simulators, VR; medical imagery; image transmission
Basis for resolution drop-off: Retinal acuity or CSF x eccentricity and/or velocity and/or binocular fusion and/or size
References: Levoy & Whitaker, 1990; Luebke et al., 2000, 2002; Murphy & Duchowski, 2001; Ohshima et al., 1996; Reddy, 1998; Spooner, 1982; To et al., 2001

Method: Projecting image to viewable monitors
Suggested application areas: Flight simulator, driving simulator
Basis for resolution drop-off: No vision behind the head
References: Kappe et al., 1999; Thomas & Geltmacher, 1993; Warner et al., 1993

Method: Projecting 1 visual field to each eye
Suggested application areas: Flight simulator (head-mounted display)
Basis for resolution drop-off: Unspecified
References: Fernie, 1995, 1996

Method: Projecting D-AOI to 1 eye, periphery to other eye
Suggested application areas: Indirect vision (head-mounted display)
Basis for resolution drop-off: Unspecified (emphasis on binocular vision issues)
References: Kooi, 1993

Method: Filtering by retina-like sampling
Suggested application areas: Image transmission
Basis for resolution drop-off: Retinal ganglion cell density and output characteristics
References: Kuyel et al., 1999

Method: Filtering by "super pixel" sampling and averaging
Suggested application areas: Image transmission, video teleconferencing, remote piloting, telemedicine
Basis for resolution drop-off: Cortical magnification factor or eccentricity-dependent CSF
References: Kortum & Geisler, 1996a, 1996b; Yang et al., 2001

Method: Filtering by low-pass pyramid with contrast threshold map
Suggested application areas: Image transmission, video teleconferencing, remote piloting, telemedicine, VR, simulators
Basis for resolution drop-off: Eccentricity-dependent CSF
References: Geisler & Perry, 1998, 1999; Loschky, 2002

Method: Filtering by Gaussian sampling with kernel size varying with eccentricity
Suggested application areas: Image transmission
Basis for resolution drop-off: Human vernier acuity drop-off function (point spread function)
References: Geri & Zeevi, 1995

Method: Filtering by wavelet transform with coefficients scaled with eccentricity or in discrete bands
Suggested application areas: Image transmission, video teleconferencing, VR
Basis for resolution drop-off: Human minimum angle of resolution x eccentricity function or empirical trial and error
References: Duchowski, 2000; Duchowski & McCormick, 1998; Frajka et al., 1997; Loschky & McConkie, 2002; Wang & Bovik, 2001

Method: Filtering by log-polar or complex log-polar mapping algorithm
Suggested application areas: Image transmission, video teleconferencing, robotics
Basis for resolution drop-off: Human retinal receptor topology or macaque retinocortical mapping function
References: Basu & Wiebe, 1998; Rojer & Schwartz, 1990; Weiman, 1990, 1994; Woelders et al., 1997

Method: Multiresolutional sensor (log-polar or partial log-polar)
Suggested application areas: Image transmission, video teleconferencing, robotics
Basis for resolution drop-off: Human retinal receptor topology and physical limits of sensor
References: Sandini, 2001; Sandini et al., 1996, 2000; Wodnicki, Roberts, & Levine, 1995, 1997


ACKNOWLEDGMENTS

This research was supported by grants to Eyal Reingold from the Natural Science and Engineering Research Council of Canada (NSERC) and the Defence and Civil Institute for Environmental Medicine (DCIEM) and to George McConkie from the U.S. Army Federated Laboratory under Cooperative Agreement DAALO 196-2-0003. We thank William Howell, Justin Hollands, and an anonymous reviewer for their very helpful comments on earlier versions of this manuscript.

REFERENCES

Arthur, K. W. (2000). Effects of field of view on performance with head-mounted displays. Unpublished doctoral dissertation, University of North Carolina at Chapel Hill.

Baldwin, D. (1981, June). Area of interest: Instantaneous field of view vision model. Paper presented at the Image Generation/Display Conference II, Scottsdale, AZ.

Ball, K. K., Beard, B. L., Roenker, D. L., Miller, R. L., & Griggs, D. S. (1988). Age and visual search: Expanding the useful field of view. Journal of the Optical Society of America, 5, 2210-2219.

Barrette, R. E. (1986). Flight simulator visual systems--An overview. In Proceedings of the Fifth Society of Automotive Engineers Conference on Aerospace Behavioral Engineering Technology: Human Integration Technology: The Cornerstone for Enhancing Human Performance (pp. 193-198). Warrendale, PA: Society of Automotive Engineers.

Basu, A., & Wiebe, K. J. (1998). Enhancing videoconferencing using spatially varying sensing. IEEE Transactions on Systems, Man, and Cybernetics--Part A: Systems and Humans, 28, 137-148.

Berbaum, K. S. (1984). Design criteria for reducing "popping" in area-of-interest displays: Preliminary experiments (NAVTRAEQUIPCEN 81-C-0105-8). Orlando, FL: Naval Training Equipment Center.

Bertera, J. H., & Rayner, K. (2000). Eye movements and the span of the effective stimulus in visual search. Perception and Psychophysics, 62, 576-585.

Bolt, R. A. (1984). The human interface: Where people and computers meet. London: Lifetime Learning.

Browder, G. B. (1989). Evaluation of a helmet-mounted laser projector display. Proceedings of the SPIE: The International Society for Optical Engineering, 1116, 85-89.

Cabral, J. E., Jr., & Kim, Y. (1996). Multimedia systems for telemedicine and their communications requirements. IEEE Communications Magazine, 34, 20-27.

Campbell, F. W., & Robson, J. G. (1968). Application of Fourier analysis to the visibility of gratings. Journal of Physiology, 197, 551-566.

Chevrette, F. C., & Fortin, J. (1996). Wide-area-coverage infrared surveillance system. Proceedings of the SPIE: The International Society for Optical Engineering, 2743, 169-178.

Chung, S. T. L., Levi, D. M., & Legge, G. E. (2001). Spatial-frequency and contrast properties of crowding. Vision Research, 41, 1833-1850.

Dalton, N. M., & Deering, C. S. (1989). Photo based image generator (for helmet laser projector). Proceedings of the SPIE: The International Society for Optical Engineering, 1116, 61-75.

De Valois, R. L., & De Valois, K. K. (1988). Spatial vision. New York: Oxford University Press.

DePiero, F. W., Noell, T. E., & Gee, T. F. (1992). Remote driving with reduced bandwidth communication. In Proceedings of the Sixth Annual Space Operations, Applications, and Research Symposium (SOAR '92) (pp. 163-171). Houston: NASA Johnson Space Center.

Dias, J., Araujo, H., Paredes, C., & Batista, J. (1997). Optical normal flow estimation on log-polar images: A solution for real-time binocular vision. Real-Time Imaging, 3, 213-228.

Draper, M. H., Viirre, E. S., Furness, T. A., & Gawron, V. J. (2001). Effects of image scale and system time delay on simulator sickness within head-coupled virtual environments. Human Factors, 43, 129-146.

Duchowski, A. T. (2000). Acuity-matching resolution degradation through wavelet coefficient scaling. IEEE Transactions on Image Processing, 9, 1437-1440.

Duchowski, A. T., & McCormick, B. H. (1998, January). Gaze-contingent video resolution degradation. Paper presented at Human Vision and Electronic Imaging II, Bellingham, WA.

Fernie, A. (1995). Helmet-mounted display with dual resolution. Journal of the Society for Information Display, 3, 151-155.

Fernie, A. (1996). Improvements in area of interest helmet mounted displays. In Proceedings of the Conference on Training: Lowering the Cost, Maintaining the Fidelity (pp. 21.1-21.4). London: Royal Aeronautical Society.

Frajka, T., Sherwood, P. G., & Zeger, K. (1997). Progressive image coding with spatially variable resolution. In Proceedings, International Conference on Image Processing (pp. 53-56). Los Alamitos, CA: IEEE Computer Society.

Frank, L. H., Casali, J. G., & Wierwille, W. W. (1988). Effects of visual display and motion system delays on operator performance and uneasiness in a driving simulator. Human Factors, 30, 201-217.

Geisler, W. S. (2001). Space variant imaging: Foveation questions. Retrieved March 25, 2002, from University of Texas at Austin, Center for Perceptual Systems Web site, http://fi.cvis.psy.utexas.edu/foveated_questions.htm

Geisler, W. S., & Perry, J. S. (1998). A real-time foveated multi-resolution system for low-bandwidth video communication. Proceedings of the SPIE: The International Society for Optical Engineering, 3299, 294-305.

Geisler, W. S., & Perry, J. S. (1999). Variable-resolution displays for visual communication and simulation. SID Symposium Digest, 30, 420-423.

Geri, G. A., & Zeevi, Y. Y. (1995). Visual assessment of variable-resolution imagery. Journal of the Optical Society of America A--Optics and Image Science, 12, 2367-2375.

Grunwald, A. J., & Kohn, S. (1994). Visual field information in low-altitude visual flight by line-of-sight slaved helmet-mounted displays. IEEE Transactions on Systems, Man, and Cybernetics, 24, 120-134.

Guitton, D., & Volle, M. (1987). Gaze control in humans: Eye-head coordination during orienting movements to targets within and beyond the oculomotor range. Journal of Neurophysiology, 58, 427-459.

Haswell, M. R. (1986). Visual systems developments. In Advances in flight simulation--Visual and motion systems (pp. 264-271). London: Royal Aeronautical Society.

Heath, M., Hodges, N. J., Chua, R., & Elliott, D. (1998). On-line control of rapid aiming movements: Unexpected target perturbations and movement kinematics. Canadian Journal of Experimental Psychology, 52, 163-173.

Helsen, W. F., Elliott, D., Starkes, J. L., & Ricker, K. L. (1998). Temporal and spatial coupling of point of gaze and hand movements in aiming. Journal of Motor Behavior, 30, 249-259.

Hiatt, J. R., Shabot, M. M., Phillips, E. H., Haines, R. E., & Grant, T. L. (1996). Telesurgery--Acceptability of compressed video for remote surgical proctoring. Archives of Surgery, 131, 396-400.

Hodgson, T. L., Murray, P. M., & Plummer, A. R. (1993). Eye movements during "area of interest" viewing. In G. d'Ydewalle & J. V. Rensbergen (Eds.), Perception and cognition: Advances in eye movement research (pp. 115-123). New York: Elsevier Science.

Honniball, J. R., & Thomas, P. J. (1999). Medical image databases: The ICOS project. In Multimedia databases and MPEG-7: Colloquium (pp. 3/1-3/5). London: Institution of Electrical Engineers.

Hughes, R., Brooks, R., Graham, D., Sheen, R., & Dickens, T. (1982). Tactical ground attack: On the transfer of training from flight simulator to operational red flag range exercise. In Proceedings of the Human Factors Society 26th Annual Meeting (pp. 596-600). Santa Monica, CA: Human Factors and Ergonomics Society.

Istance, H. O., & Howarth, P. A. (1994). Keeping an eye on your interface: The potential for eye-based control of graphical user interfaces (GUI's). In G. Cockton, S. Draper, & G. R. S. Weir (Eds.), People and Computers IX: Proceedings of HCI '94 (pp. 67-75). Cambridge: Cambridge University Press.

Jacob, R. J. K. (1995). Eye tracking in advanced interface design. In W. Barfield & T. A. Furness (Eds.), Virtual environments and advanced interface design (pp. 258-288). New York: Oxford University Press.

Kappe, B., van Erp, J., & Korteling, J. E. (1999). Effects of head-slaved and peripheral displays on lane-keeping performance and spatial orientation. Human Factors, 41, 453-466.

Kim, K. I., Shin, C. W., & Inoguchi, S. (1995). Collision avoidance using artificial retina sensor in ALV. In Proceedings of the Intelligent Vehicles '95 Symposium (pp. 183-187). Piscataway, NJ: IEEE.

Kooi, F. L. (1993). Binocular configurations of a night-flight head-mounted display. Displays, 14, 11-20.

Kortum, P. T., & Geisler, W. S. (1996a). Implementation of a foveated image-coding system for bandwidth reduction of video images. Proceedings of the SPIE: The International Society for Optical Engineering, 2657, 350-360.

Kortum, P. T., & Geisler, W. S. (1996b). Search performance in natural scenes: The role of peripheral vision [Abstract]. Investigative Ophthalmology and Visual Science Supplement, 37(3), S297.

Kuyel, T., Geisler, W., & Ghosh, J. (1999). Retinally reconstructed images: Digital images having a resolution match with the human eye. IEEE Transactions on Systems, Man, and Cybernetics--Part A: Systems and Humans, 29, 235-243.

Leavy, W. P., & Fortin, M. (1983). Closing the gap between aircraft and simulator training with limited field-of-view visual systems. In Proceedings of the 5th Interservice/Industry Training Equipment Conference (pp. 10-18). Arlington, VA: National Training Systems Association.

Lee, A. T., & Lidderdale, I. G. (1983). Visual scene simulation requirements for C-5A/C-141B aerial refueling part task trainer (AFHRL-TP-82-34). Williams Air Force Base, AZ: Air Force Human Resources Laboratory.

Levoy, M., & Whitaker, R. (1990). Gaze-directed volume rendering. Computer Graphics, 24, 217-223.

Loschky, L. C. (2003). Investigating perception and eye movement control in natural scenes using gaze-contingent multi-resolutional displays. Unpublished doctoral dissertation, University of Illinois, Urbana-Champaign.

Loschky, L. C., & McConkie, G. W. (2000). User performance with gaze contingent multiresolutional displays. In A. T. Duchowski (Ed.), Proceedings of the Eye Tracking Research and Applications Symposium 2000 (pp. 97-103). New York: Association for Computing Machinery.

Loschky, L. C., & McConkie, G. W. (2002). Investigating spatial vision and dynamic attentional selection using a gaze-contingent multi-resolutional display. Journal of Experimental Psychology: Applied, 8, 99-117.

Luebke, D., Hallen, B., Newfield, D., & Watson, B. (2000). Perceptually driven simplification using gaze-directed rendering (Tech. Report CS-2000-041). Charlottesville: University of Virginia, Department of Computer Science.

Luebke, D., Reddy, M., Cohen, J., Varshney, A., Watson, B., & Huebner, R. (2002). Level of detail for 3D graphics. San Francisco: Morgan-Kaufmann.

Maeder, A., Diederich, J., & Niebur, E. (1996). Limiting human perception for image sequences. Proceedings of the SPIE: The International Society for Optical Engineering, 2657, 330-337.

Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1997). Fixation patterns made during brief examination of two-dimensional images. Perception, 26, 1059-1072.

Matsumoto, Y., & Zelinsky, A. (2000). An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement. In Proceedings of IEEE Fourth International Conference on Face and Gesture Recognition (FG'2000) (pp. 499-505). Piscataway, NJ: IEEE.

McConkie, G. W., & Loschky, L. C. (2002). Perception onset time during fixations in free viewing. Behavior Research Methods, Instruments, & Computers, 34, 481-490.

McGovern, D. E. (1993). Experience and results in teleoperation of land vehicles. In S. R. Ellis (Ed.), Pictorial communication in virtual and real environments (pp. 182-195). London: Taylor & Francis.

Milanese, R., Wechsler, H., Gill, S., Bost, J.-M., & Pun, T. (1994). Integration of bottom-up and top-down cues for visual attention using non-linear relaxation. In Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 781-785). Los Alamitos, CA: IEEE Computer Society.

Moulin, P. (2000). Multiscale image decompositions and wavelets. In A. C. Bovik (Ed.), Handbook of image and video processing (pp. 289-300). New York: Academic.

Murphy, H., & Duchowski, A. T. (2001, September). Gaze-contingent level of detail rendering. Paper presented at EuroGraphics 2001, Manchester, UK.

Ohshima, T., Yamamoto, H., & Tamura, H. (1996). Gaze-directed adaptive rendering for interacting with virtual space. In Virtual Reality Annual International Symposium (Vol. 267, pp. 103-110). Piscataway, NJ: IEEE.

Panerai, F., Metta, G., & Sandini, G. (2000). Visuo-inertial stabilization in space-variant binocular systems. Robotics and Autonomous Systems, 30, 195-214.

Parkhurst, D., Culurciello, E., & Niebur, E. (2000). Evaluating variable resolution displays with visual search: Task performance and eye movements. In A. T. Duchowski (Ed.), Proceedings of the Eye Tracking Research and Applications Symposium 2000 (pp. 105-109). New York: Association for Computing Machinery.

Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42, 107-123.

Parkhurst, D. J., & Niebur, E. (2002). Variable-resolution displays: A theoretical, practical, and behavioral evaluation. Human Factors, 44, 611-629.

Peli, E., & Geri, G. A. (2001). Discrimination of wide-field images as a test of a peripheral-vision model. Journal of the Optical Society of America A--Optics and Image Science, 18, 294-301.

Peli, E., Yang, J., & Goldstein, R. B. (1991). Image invariance with changes in size: The role of peripheral contrast thresholds. Journal of the Optical Society of America, 8, 1762-1774.

Pointer, J. S., & Hess, R. F. (1989). The contrast sensitivity gradient across the human visual field: With emphasis on the low spatial frequency range. Vision Research, 29, 1135-1151.

Pomplun, M., Reingold, E. M., & Shen, J. (2001). Investigating the visual span in comparative search: The effects of task difficulty and divided attention. Cognition, 81, B57-B67.

Pretlove, J., & Asbery, R. (1995). The design of a high performance telepresence system incorporating an active vision system for enhanced visual perception of remote environments. Proceedings of the SPIE: The International Society for Optical Engineering, 2590, 95-106.

Quick, J. R. (1990). System requirements for a high gain dome display surface. Proceedings of the SPIE: The International Society for Optical Engineering, 1289, 183-191.

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372-422.

Reddy, M. (1995). A perceptual framework for optimizing visual detail in virtual environments. In Framework for Interactive Virtual Environments (FIVE) Conference (pp. 179-188). London: University of London.

Reddy, M. (1997). Perceptually modulated level of detail for virtual environments. Unpublished doctoral thesis, University of Edinburgh, UK.

Reddy, M. (1998). Specification and evaluation of level of detail selection criteria. Virtual Reality: Research, Development and Application, 3, 132-143.

Regan, E. C., & Price, K. R. (1994). The frequency of occurrence and severity of side-effects of immersion virtual reality. Aviation, Space, and Environmental Medicine, 65, 527-530.

Reingold, E. M., Charness, N., Pomplun, M., & Stampe, D. M. (2001). Visual span in expert chess players: Evidence from eye movements. Psychological Science, 12, 48-55.

Reingold, E. M., & Loschky, L. C. (2002). Saliency of peripheral targets in gaze-contingent multi-resolutional displays. Behavior Research Methods, Instruments, & Computers, 34, 491-499.

Reingold, E. M., & Stampe, D. M. (2002). Saccadic inhibition in voluntary and reflexive saccades. Journal of Cognitive Neuroscience, 14, 371-388.

Robinson, G. H. (1979). Dynamics of the eye and head during movement between displays: A qualitative and quantitative guide for designers. Human Factors, 21, 343-352.

Rojer, A. S., & Schwartz, E. L. (1990). Design considerations for a space-variant visual sensor with complex-logarithmic geometry. In H. Freeman (Ed.), Proceedings of the 10th International Conference on Pattern Recognition (Vol. 2, pp. 278-285). Los Alamitos, CA: IEEE Computer Society.

Rolwes, M. S. (1990). Design and flight testing of an electronic visibility system. Proceedings of the SPIE: The International Society for Optical Engineering, 1290, 108-119.

Rosenberg, L. B. (1993, March). The use of virtual fixtures to enhance operator performance in time delayed teleoperation (USAF AMRL Tech. Report AL/CF TR-1994-0139, 49). Wright-Patterson Air Force Base, OH: U.S. Air Force Research Laboratory.

Ross, J., Morrone, M. C., Goldberg, M. E., & Burr, D. C. (2001). Changes in visual perception at the time of saccades. Trends in Neurosciences, 24(2), 113-121.

Rovamo, J., & Iivanainen, A. (1991). Detection of chromatic deviations from white across the human visual field. Vision Research, 31, 2227-2254.

Rovetta, A., Sala, R., Costal, E., Wen, X., Sabbadini, D., Milanesi, S., Togno, A., Angelini, L., & Bejczy, A. (1993). Telerobotics surgery in a transatlantic experiment: Application in laparoscopy. Proceedings of the SPIE: The International Society for Optical Engineering, 2057, 337-344.

Sandini, G. (2001). Giotto: Retina-like camera. Available from University of Genova LIRA-Lab Web site, http://www.lira.dist.unige.it

Sandini, G., Argyros, A., Auffret, E., Dario, P., Dierickx, B., Ferrari, F., Frowein, H., Guerin, C., Hermans, L., Manganas, A., Mannucci, A., Nielsen, J., Questa, P., Sassi, A., Scheffer, D., & Woelder, W. (1996). Image-based personal communication using an innovative space-variant CMOS sensor. In Fifth IEEE International Workshop on Robot and Human Communication (pp. 158-163). Piscataway, NJ: IEEE.

Sandini, G., Questa, P., Scheffer, D., Dierickx, B., & Mannucci, A. (2000). A retina-like CMOS sensor and its applications. In 2000 IEEE Sensor Array and Multichannel Signal Processing Workshop (pp. 514-519). Piscataway, NJ: IEEE.

Sekuler, A. B., Bennett, P. J., & Mamelak, M. (2000). Effects of aging on the useful field of view. Experimental Aging Research, 26, 103-120.

Sere, B., Marendaz, C., & Herault, J. (2000). Nonhomogeneous resolution of images of natural scenes. Perception, 29, 1403-1412.

Sharma, R., Pavlovic, V. I., & Huang, T. S. (1998). Toward multi-modal human-computer interface. Proceedings of the IEEE, 86, 853-869.

Shin, C. W., & Inoguchi, S. (1994). A new anthropomorphic retina-like visual sensor. In S. Peleg & S. Ullmann (Eds.), Proceedings of the 12th IAPR International Conference on Pattern Recognition, Conference C: Signal Processing (Vol. 3, pp. 345-348). Los Alamitos, CA: IEEE Computer Society.

Shioiri, S., & Ikeda, M. (1989). Useful resolution for picture perception as a function of eccentricity. Perception, 18, 347-361.

Smith, B. A., Ho, J., Ark, W., & Zhai, S. (2000). Hand-eye coordination patterns in target selection. In A. T. Duchowski (Ed.), Proceedings of the Eye Tracking Research and Applications Symposium (pp. 117-122). New York: Association for Computing Machinery.

Spoehr, K. T., & Lehmkuhle, S. W. (1982). Visual information processing. San Francisco: Freeman.

Spooner, A. M. (1982). The trend towards area of interest in visual simulation technology. In Proceedings of the Fourth Interservice/Industry Training Equipment Conference (pp. 205-215). Arlington, VA: National Training Systems Association.

Stampe, D. M., & Reingold, E. M. (1995). Real-time gaze contingent displays for moving image compression (DCIEM Contract Report W7711-3-7204/01-XSE). Toronto: University of Toronto, Department of Psychology.

Stelmach, L. B., & Tam, W. J. (1994). Processing image sequences based on eye movements. Proceedings of the SPIE: The International Society for Optical Engineering, 2179, 90-98.

Stelmach, L. B., Tam, W. J., & Hearty, P. J. (1991). Static and dynamic spatial resolution in image coding: An investigation of eye movements. Proceedings of the SPIE: The International Society for Optical Engineering, 1453, 147-151.

Stiefelhagen, R., Yang, J., & Waibel, A. (1997). A model-based gaze-tracking system. International Journal of Artificial Intelligence Tools, 6, 193-209.

Szoboszlay, Z., Haworth, L., Reynolds, T., Lee, A., & Halmos, Z. (1995). Effect of field-of-view restriction on rotorcraft pilot workload and performance: Preliminary results. Proceedings of the SPIE: The International Society for Optical Engineering, 2465, 142-153.

Tanaka, S., Plante, A., & Inoue, S. (1998). Foreground-background segmentation based on attractiveness. In M. H. Hamza (Ed.), Computer graphics and imaging (pp. 191-194). Calgary, Canada: ACTA.

Tannenbaum, A. (2000). On the eye tracking problem: A challenge for robust control. International Journal of Robust and Nonlinear Control, 10, 875-888.

Tharp, G., Liu, A., Yamashita, H., Stark, L., Wong, B., & Dee, J. (1990). Helmet mounted display to adapt the telerobotic environment to human vision. In Third Annual Workshop on Space Operations, Automation, and Robotics (SOAR 1989) (pp. 477-481). Washington, DC: NASA.

Thibos, L. N. (1998). Acuity perimetry and the sampling theory of visual resolution. Optometry and Vision Science, 75, 399-406.

Thibos, L. N., Still, D. L., & Bradley, A. (1996). Characterization of spatial aliasing and contrast sensitivity in peripheral vision. Vision Research, 36, 249-258.

Thomas, M., & Geltmacher, H. (1993). Combat simulator display development. Information Display, 9, 23-26.

Thompson, J. M., Ottensmeyer, M. P., & Sheridan, T. B. (1999). Human factors in telesurgery: Effects of time delay and asynchrony in video and control feedback with local manipulative assistance. Telemedicine Journal, 5, 129-137.

To, D., Lau, R. W. H., & Green, M. (2001). An adaptive multiresolution method for progressive model transmission. Presence--Teleoperators and Virtual Environments, 10, 62-74.

Tong, H. M., & Fisher, R. A. (1984). Progress report on an eye-slaved area-of-interest visual display. In E. G. Monroe (Ed.), Proceedings of the 1984 Image Conference III (pp. 279-294). Brooks Air Force Base, TX: U.S. Air Force Human Resources Laboratory.

Tsumura, N., Endo, C., Haneishi, H., & Miyake, Y. (1996). Image compression and decompression based on gazing area. Proceedings of the SPIE: The International Society for Optical Engineering, 2657, 361-367.

Turner, J. A. (1984). Evaluation of an eye-slaved area-of-interest display for tactical combat simulation. In Sixth Interservice/ Industry Training Equipment Conference and Exhibition (pp. 75-86). Arlington, VA: National Training Systems Association.

van Diepen, P. M. J., & Wampers, M. (1998). Scene exploration with Fourier-filtered peripheral information. Perception, 27, 1141-1151.

van Erp, J. B. F., & Kappe, B. (1997). Head-slaved images for low-cost driving simulators and unmanned ground vehicles (Final Report TD97-0216). Soesterberg, Netherlands: TNO Human Factors Research Institute.

Viljoen, G. T. (1998). Comparative study of target acquisition performance between an eye-slaved helmet display and unaided human vision. Proceedings of the SPIE: The International Society for Optical Engineering, 3362, 54-65.

Wang, Z., & Bovik, A. C. (2001). Embedded foveation image coding. IEEE Transactions on Image Processing, 10, 1397-1410.

Warner, H. D., Serfoss, G. L., & Hubbard, D. C. (1993). Effects of area-of-interest display characteristics on visual search performance and head movements in simulated low-level flight (Final Tech. Report, September 1989--July 1992, AL-TR-1993-0023). Williams Air Force Base, AZ: Armstrong Laboratory.

Watson, B. A., Walker, N., Hodges, L. F., & Worden, A. (1997). Managing level of detail through peripheral degradation: Effects on search performance with a head-mounted display. ACM Transactions on Computer-Human Interaction, 4, 323-346.

Weiman, C. F. R. (1990). Video compression via log polar mapping. Proceedings of the SPIE: The International Society for Optical Engineering, 1295, 266-277.

Weiman, C. F. R. (1994). Video compression by matching human perceptual channels. Proceedings of the SPIE: The International Society for Optical Engineering, 2239, 178-189.

Werkhoven, P., Sperling, G., & Chubb, C. (1993). The dimensionality of texture-defined motion: A single channel theory. Vision Research, 33, 463-485.

Wickens, C. D., & Hollands, J. G. (2000). Engineering psychology and human performance (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Wodnicki, R., Roberts, G. W., & Levine, M. D. (1995). A foveated image sensor in standard CMOS technology. In Proceedings of the IEEE Custom Integrated Circuits Conference (pp. 557-560). Piscataway, NJ: IEEE.

Wodnicki, R., Roberts, G. W., & Levine, M. D. (1997). A log-polar image sensor fabricated in a standard 1.2-Mu-m ASIC CMOS process. IEEE Journal of Solid-State Circuits, 32, 1274-1277.

Woelders, W. W., Frowein, H. W., Nielsen, J., Questa, P., & Sandini, G. (1997). New developments in low-bit rate videotelephony for people who are deaf. Journal of Speech, Language, and Hearing Research, 40, 1425-1435.

Yang, J., Coia, T., & Miller, M. (2001). Subjective evaluation of retinal-dependent image degradations. In Proceedings of PICS 2001: Image Processing, Image Quality, Image Capture Systems Conference (pp. 142-147). Springfield, VA: Society for Imaging Science and Technology.

Yeshurun, Y., & Carrasco, M. (1999). Spatial attention improves performance in spatial resolution tasks. Vision Research, 39, 293-306.

Yoshida, A., Rolland, J. P., & Reif, J. H. (1995a). Design and applications of a high-resolution insert head-mounted display. In Proceedings of Virtual Reality Annual International Symposium '95 (Catalog No. 95CH35761; pp. 84-93). Los Alamitos, CA: IEEE Computer Society.

Yoshida, A., Rolland, J. P., & Reif, J. H. (1995b). Optical design and analysis of a head-mounted display with a high-resolution insert. Proceedings of the SPIE: The International Society for Optical Engineering, 2537, 71-75.

Young, L. R., & Sheena, D. (1975). Survey of eye movement recording methods. Behavior Research Methods and Instrumentation, 7, 397-429.

Eyal M. Reingold is a professor in the Department of Psychology at the University of Toronto. He received his Ph.D. in psychology in 1990 at the University of Waterloo.

Lester C. Loschky is a postdoctoral research associate in the Department of Psychology at the University of Illinois at Urbana-Champaign, where he received his Ph.D. in psychology in 2003.

George W. McConkie is a professor in the Department of Educational Psychology at the University of Illinois at Urbana-Champaign. He received his Ph.D. in experimental psychology in 1966 at Stanford University.

David M. Stampe is vice president of engineering and research at SR Research Ltd., Mississauga, Canada. He received his Ph.D. in psychology in 1999 at the University of Toronto.

Date received: August 25, 2000

Date accepted: February 19, 2003

Eyal M. Reingold, University of Toronto, Toronto, Ontario, Canada, Lester C. Loschky and George W. McConkie, University of Illinois at Urbana-Champaign, Urbana, Illinois, and David M. Stampe, University of Toronto, Toronto, Ontario, Canada

Address correspondence to Eyal M. Reingold, Department of Psychology, University of Toronto, 100 St. George St., Toronto, Ontario, Canada, M5S 3G3; reingold@psych.utoronto.ca. HUMAN FACTORS, Vol. 45, No. 2, Summer 2003, pp. 307-328. Copyright © 2003, Human Factors and Ergonomics Society. All rights reserved.