
Comparing tactile maps and haptic digital representations of a maritime environment.

Abstract: A map exploration and representation exercise was conducted with participants who were totally blind. Representations of maritime environments were presented either with a tactile map or with a digital haptic virtual map. We assessed knowledge of the spatial configurations using a triangulation technique. The results revealed that learning from the two types of maps was equivalent.


Whatever sensory modalities are used, people who are sighted and those who are blind can perceive their environment and perform actions in an egocentric frame of reference. However, to form a holistic, gestalt-like overview of space, an individual who is learning a space needs to abstract spatial representations into an allocentric frame of reference (Thinus-Blanc, 1996). In other words, mastering navigation consists of coordinating the forward field of perception with the corresponding allocentric mental map (Wickens, 1999). Thus, because of the features of the tactile and auditory modalities, people who are blind may have to integrate sequential information to generate an overall mental representation.

Geographic charts, maps, and spatial displays are powerful tools for building spatial knowledge. Tatham (2003) suggested that a better understanding of the perceptual and cognitive processes involved in processing spatial information, combined with new technological possibilities, provides an opportunity to deliver more responsive and salient information to individuals who are blind. Different types of digital maps have been adapted for people who are blind (Jansson & Billberger, 1999; Lahav & Mioduser, 2008; Nemec, Sporka, & Slavik, 2004; Rice, Jacobson, Golledge, & Jones, 2005). Jansson and Billberger (1999) used the Phantom force-feedback device as a digital map. This device is designed like an artificial arm containing electric motors. When the user grasps the end of the arm, he or she can feel the forces delivered by the motors, which are controlled by the computer program. As a consequence, a virtual map is the tactile-kinesthetic rendering of geographic data provided by the application and the device.

When Jansson and Billberger compared accuracy in identifying virtual and real forms (spheres, cones, and so forth), they found that virtual shapes are less accurately recognized than real ones. Later, Nemec and colleagues (2004) performed an experiment using a force-feedback joystick (Wingman Strikeforce 3D from Logitech) in which people who were blind explored a virtual environment and a tactile map of the same environment. The results showed that the precision of scene topology described by the participants was equal in the two conditions. Then Lahav and Mioduser (2008) studied the exploration strategies that participants who were blind used in a haptic and auditory virtual environment. The experiment consisted of comparing the spatial performance of two groups of participants, one exploring a real classroom and the second exploring a virtual classroom. The participants interacted with this virtual environment using a force-feedback joystick (SideWinder from Microsoft). By using this device, they could move within the virtual environment and feel an object's texture, location, and size. The analysis of the participants' explorations and descriptions of the scene showed that allocentric exploration strategies were more efficient, and confirmed that the virtual environment gave the participants who were blind better access to spatial knowledge.

One lingering question is how these virtual environments compare to information presented in a conventional tactile map. The study reported here had two goals. The first was to gain additional insight into the utility of exploring a haptic auditory representation of a spatial scene, compared to a tactile map, for the consistency of the spatial cognitive maps elaborated by people who are blind. The blind participants explored both a map in relief (tactile map) and a virtual environment rendering the same maritime geographic configuration with seamarks (landmarks on the ocean). The second goal was to determine whether exploring such a virtual map can actively help coordinate egocentric and allocentric spatial frames of reference, so as to build spatial cognitive maps that are more efficient for wayfinding and navigation in a natural environment (such as an urban center). An experimental protocol requiring the participants to estimate directions between different objects in the environment was used. The use of aligned and misaligned situations provided a mechanism for investigating the role of egocentric and allocentric spatial frames of reference: the misaligned questions required the participants to coordinate both frames of reference, and thus probed the role of that coordination in the development of a coherent spatial cognitive map.

The motivation for the study was to assess the possibility of understanding space from electronic charts without vision. Indeed, digital navigational charts provide many advantages over tactile maps: they are available for every part of the ocean, scaling and panning functionalities could improve navigation within the maps, voice announcements could overcome some limits of tactile discrimination, and input from a GPS (global positioning system) device could provide an updated position on the chart. In other words, the study addressed the use of digital charts by individuals without vision, with the aim of eventually providing sailors who are blind with a continuously updated geographic system.

Method

PARTICIPANTS

The participants were six adults who were blind (one woman and five men), with a mean age of 38 (SD = 9). None of the participants had residual vision or light perception. Two participants were congenitally blind, and the four others lost their vision later in life at 24, 18, 23, and 42 years, respectively.

The participants were recruited from a sailing association for people who are blind in Brest (France), which limited our available sample size to six individuals. Although they sailed at least once a month, none of them expected to sail alone. However, they were able to set up an itinerary on a tactile map and then reach the corresponding waypoints during navigation if they could use a GPS device with audio output. Usually, the sighted skipper set waypoints within the GPS from the magnets positioned by the blind sailors on the tactile map. All the participants used their own personal computers with text-to-speech software on a daily basis; however, they first encountered the haptic map device during the training phase of the study.

MATERIALS

In the study, we used two maps, tactile and virtual. Both were 40 centimeters wide by 30 centimeters high (about 16 inches by 12 inches), and 1 centimeter (about .4 inches) on the map represented 100 meters (about 109 yards) in the real environment (scale 1:10,000). Both map scenes were the same and were unknown to the participants. They consisted of a homogeneous land mass (about 25% of the map area) and the sea (about 75%). We positioned a layout of six seamarks on the tactile map and used the same configuration, rotated by 60 degrees, on the virtual map. Thus, the layout was the same, but the positions of the seamarks were different. We also changed the names of the seamarks to avoid any learning or confusion effects between conditions.

Tactile map

In the tactile map, the sea was represented by smooth plastic, and the land was indicated by a rough texture generated by a mixture of sand and paint. The salient objects were six raised markers in different geometric shapes: a triangle, circle, diamond, square, trapezoid, and hourglass. All the map features had undergone prior testing for legibility and ease of discrimination. This map was configured in the horizontal plane because people who are blind are most familiar with this interaction.

Virtual map

The virtual map was generated by SeaTouch, an auditory Java application developed at the European Center for Virtual Reality for the navigational training of sailors who are blind. This map exists only as a virtual environment; participants perceive it through the resistance of the Phantom Omni haptic device. When the stylus position coincides with an object in the virtual environment, small electromagnetic motors are activated and simulate force feedback. This interaction is further enriched with auditory output. The content of the virtual map was equivalent to that of the tactile map, except that it was rendered in the vertical plane. We chose this plane because the Phantom Omni haptic device offered a wider workspace there, 300 x 400 millimeters (about 12 inches x 16 inches), corresponding to an A3 (297 x 420 millimeters, or about 11.7 x 16.5 inches) map, and because it carried the implicit assumption that the workspace was aligned north up.

The rendering of the sea surface was soft, and sounds of waves were played when a participant touched it. Within the virtual environment, the rendering of the earth was rough and extruded 1 centimeter above the surface of the sea. When the haptic cursor came into contact with the land, the songs of birds found inland were played. Between the land and the sea, the coastline was rendered as a vertical cliff that could be touched and followed; in this case, the sounds of seabirds were played. The six seamarks were generated by a spring effect, an attractor field analogous to a small magnet 1 centimeter in diameter. When the haptic cursor contacted a seamark, the user felt resistance, like gravity, and heard a synthetic voice announce the name (Boat, Schooner, Float, Penguin, Guillemot, or Egret) of that immobile seamark.
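
To make the interaction concrete, the following minimal sketch shows how a spring-effect attractor of this kind can be computed in a haptic update loop. It is written in Python purely for illustration; SeaTouch's actual implementation is not published, so the function names, stiffness constant, and coordinate conventions are our own assumptions.

    import math

    SEAMARK_RADIUS = 0.005  # attractor field radius in meters (0.5 centimeter)
    SPRING_K = 200.0        # assumed spring stiffness, in newtons per meter

    def seamark_force(cursor, seamark):
        """Force (fx, fy, fz) pulling the haptic cursor toward a seamark.

        Inside the attractor field, a Hooke's-law spring pulls the cursor
        toward the seamark center, like a small magnet; outside the field,
        no force is applied.
        """
        dx = seamark[0] - cursor[0]
        dy = seamark[1] - cursor[1]
        dz = seamark[2] - cursor[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist == 0.0 or dist > SEAMARK_RADIUS:
            return (0.0, 0.0, 0.0)
        # The pull grows with distance from the center, up to the field edge.
        return (SPRING_K * dx, SPRING_K * dy, SPRING_K * dz)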

EXPERIMENTAL DESIGN

To remove a potential learning effect between the haptic and tactile map explorations, we used a crossover experimental design. If the participants had been presented with exactly the same configuration of seamarks on the tactile and virtual maps, a learning effect could have occurred in the second condition: after exploring the tactile map, the participants would have formed a level of spatial knowledge that would confound their learning of the same configuration in the virtual condition. We therefore applied a rotation of 60 degrees to the initial configuration.
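
A minimal sketch of such a rotation, assuming invented seamark coordinates (the real positions are not reproduced here): at the map scale, the 40 x 30 centimeter sheet spans 4,000 x 3,000 meters, so the layout is rotated about the point (2000, 1500).

    import math

    def rotate_layout(points, degrees, center=(2000.0, 1500.0)):
        """Rotate (x, y) seamark positions, in meters, about the map center."""
        theta = math.radians(degrees)
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        cx, cy = center
        return [(cx + (x - cx) * cos_t - (y - cy) * sin_t,
                 cy + (x - cx) * sin_t + (y - cy) * cos_t)
                for x, y in points]

    # Illustrative (invented) layout: same configuration, different positions.
    tactile_layout = [(900, 800), (1600, 2300), (2800, 600),
                      (3100, 2100), (2200, 1400), (1200, 1700)]
    virtual_layout = rotate_layout(tactile_layout, 60)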

PROCEDURE

Training phase

To ensure that the participants mastered haptic and auditory interactions with the SeaTouch map, we trained them in the system until they were able to "navigate" easily in the virtual environment. In other words, we confirmed that they were able to follow the coastlines, move over the surface of the sea, and locate seamarks with the stylus of the haptic device. We did not train the participants in the use of tactile maps because all the participants had already worked extensively with such maps.

Exploration

Before beginning any movement, the participants were informed that the ultimate purpose of the exploration phase was to obtain enough spatial knowledge to be able to answer questions about pointing to seamarks without reference to the tactile or virtual scenes. Exploring the tactile map consisted of moving fingers over the surface of the map; exploring the virtual one consisted of placing the stylus of the haptic device within the virtual environment. In both cases, the exploration stopped when the participant could remember the names of the six seamarks and locate them on the map without confusion.

The participants were told that the goal of the explorations of the tactile and virtual maps was to be able to estimate directions between the different seamarks. So as not to favor either condition, three participants explored the tactile map first, and the other three began with the virtual one.

DATA COLLECTION

In line with the projective convergence technique of Hardwick, McIntyre, and Pick (1976), we drew six triangles for each condition (tactile and virtual) and each situation (aligned and misaligned). After the participants explored each map, they pointed three times to each seamark, once from each of three different seamarks. Thus, for each condition, the participants answered 18 questions in an aligned situation and 18 in a misaligned situation. Data were collected using a tactile protractor that was fixed to the table in front of the participants; the protractor's pointer was turned until the seamark's direction was estimated. To draw the error triangle for a seamark, we extrapolated the directions estimated from the three starting seamarks. This data collection technique was familiar to the participants: as sailors, they were used to handling a similar tool to derive navigational directions when setting up their itineraries.

The participants were asked to indicate angles between seamarks in two situations: aligned and misaligned. In the aligned situation, a sample question was as follows: "Imagine you (and the protractor) are at the Schooner and facing north; what is the direction of the Egret?" Here, the axes of the participant and of north were aligned. To estimate this direction, the participant could presumably work primarily from an egocentric frame of reference.

In the misaligned situation, a sample question was this: "Imagine you (and the protractor) are at the Schooner and facing the Penguin; what is the direction of the Egret?" Here, the axes of the participant and of north differed, so the participant had to perform a mental rotation to combine the two axes and estimate the required direction. In other words, the participants were forced to reorient themselves by integrating egocentric and allocentric directions. Indeed, a sailor who is blind has to perform these mental-spatial operations to update his or her orientation during real navigation (when not necessarily facing north) relative to the map that he or she has consulted. Thus, the misaligned situation reflects the mental processes required by navigation.
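
The geometry behind the two question types can be summarized in a short sketch. The bearing function returns a compass direction (clockwise from north); in the misaligned case the correct protractor reading is that bearing minus the heading of the facing direction, which is exactly the mental rotation described above. The coordinates are invented for illustration.

    import math

    def bearing(origin, target):
        """Compass bearing in degrees, clockwise from north."""
        east = target[0] - origin[0]
        north = target[1] - origin[1]
        return math.degrees(math.atan2(east, north)) % 360

    # Invented positions (in meters) for three seamarks.
    schooner, penguin, egret = (1000, 1000), (2500, 1800), (1800, 2600)

    # Aligned: facing north, the answer is the plain compass bearing.
    aligned_answer = bearing(schooner, egret)

    # Misaligned: facing the Penguin, the answer is the bearing to the
    # Egret rotated by the heading toward the Penguin.
    heading = bearing(schooner, penguin)
    misaligned_answer = (bearing(schooner, egret) - heading) % 360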

DATA ANALYSIS

From the estimated directions, we computed triangulations. Within each series of 18 questions, each seamark was targeted three times from three other seamarks. Consequently, we were able to implement the projective convergence technique (Hardwick et al., 1976). Three nonparallel lines pointing to the same location were recorded, setting out the three sides of an error triangle (see Figure 1). The area of this error triangle (AET), expressed in km² (at the map scale), indicates the consistency of the responses (Kitchin & Jacobson, 1997). A high level of consistency is obtained when the AET is close to 0 km², whereas the consistency diminishes as the AET increases.
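
The computation can be sketched as follows: each estimate defines a line through a starting seamark along the estimated bearing, the three pairwise intersections of these lines are the vertices of the error triangle, and its area follows from the shoelace formula. This is our reading of the projective convergence technique, with invented data.

    import math

    def unit(bearing_deg):
        """Unit vector (east, north) for a compass bearing."""
        t = math.radians(bearing_deg)
        return (math.sin(t), math.cos(t))

    def intersect(p, b1, q, b2):
        """Intersection of the lines through p and q with bearings b1, b2."""
        (d1x, d1y), (d2x, d2y) = unit(b1), unit(b2)
        rx, ry = q[0] - p[0], q[1] - p[1]
        s = (rx * d2y - ry * d2x) / (d1x * d2y - d1y * d2x)  # lines nonparallel
        return (p[0] + s * d1x, p[1] + s * d1y)

    def aet_km2(v1, v2, v3):
        """Shoelace area of the error triangle, converted from m^2 to km^2."""
        (x1, y1), (x2, y2), (x3, y3) = v1, v2, v3
        area_m2 = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
        return area_m2 / 1e6

    # Three pointing estimates (invented) toward the same target seamark.
    starts = [(1000, 1000), (2500, 1800), (3100, 600)]
    bearings = [24.0, 301.5, 338.0]  # estimated directions in degrees

    aet = aet_km2(intersect(starts[0], bearings[0], starts[1], bearings[1]),
                  intersect(starts[1], bearings[1], starts[2], bearings[2]),
                  intersect(starts[0], bearings[0], starts[2], bearings[2]))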

We analyzed 6 (subjects) x 6 (seamarks) x 2 (map conditions) x 3 (questions) x 2 (alignment situations) = 432 responses, allowing for the computation of 144 AETs that were considered indicators of the consistency of the global spatial representation. A preliminary inspection of the data revealed that the AETs did not follow a normal distribution (Lilliefors test, p < .05). Thus, statistical paired comparisons were performed between the map conditions (tactile versus virtual) in both alignment situations (aligned and misaligned) using the nonparametric Wilcoxon test.
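
A sketch of this analysis in Python, assuming the scipy and statsmodels libraries (the article does not state which software was used) and placeholder AET values: within one alignment situation there are 36 paired values per map condition (6 subjects x 6 seamarks).

    import numpy as np
    from scipy.stats import wilcoxon
    from statsmodels.stats.diagnostic import lilliefors

    # Placeholder AET values in km^2; real data would be loaded here.
    rng = np.random.default_rng(0)
    aet_tactile = rng.lognormal(mean=-1.0, sigma=1.0, size=36)
    aet_virtual = rng.lognormal(mean=-0.8, sigma=1.0, size=36)

    # Lilliefors test: a small p-value rejects normality and motivates
    # the nonparametric paired comparison.
    _, p_norm = lilliefors(np.concatenate([aet_tactile, aet_virtual]))

    # Wilcoxon signed-rank test on the paired AET values.
    stat, p_map = wilcoxon(aet_tactile, aet_virtual)
    print(f"Lilliefors p = {p_norm:.3f}, Wilcoxon p = {p_map:.3f}")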

[FIGURE 1 OMITTED]

Results

From a qualitative perspective, the error triangles generated by a typical participant were smaller in the aligned situation (see Figure 2A and B) than in the misaligned situation (see Figure 2C and D). Moreover, no error triangle went beyond the boundaries of either the tactile or the virtual workspace in the aligned situation, whereas some error triangles extended beyond the workspaces in the misaligned situation. The triangles obtained after tactile exploration were smaller than the virtual ones in the aligned situation (see Figure 2A and B); in contrast, the tactile triangles were larger than the virtual ones in the misaligned situation (see Figure 2C and D). In other words, the size of the triangles obtained after tactile exploration varied with the alignment situation (Figure 2A and C), whereas the size of the triangles obtained after virtual exploration remained relatively constant across the alignment situations (Figure 2B and D).

ALIGNMENT EFFECT

All the participants reported that they encountered more difficulty answering the questions in the misaligned situation. They attributed this additional difficulty to the necessity to rotate their mental map and update their orientation prior to pointing with the protractor.

We also observed these difficulties when we analyzed the data quantitatively. The mean AET was 0.65 km² (SD 1.76), with values spread from 4 m² to 12.14 km², in the aligned situation, and 1.70 km² (SD 2.68), with values spread from 13.63 m² to 145 km², in the misaligned situation. Half the data were lower than 0.1 km² in the aligned situation and lower than 0.69 km² in the misaligned situation. These values indicate not only that the mean value almost trebled when the axes of the participant and north differed, but also that the data were distributed differently in each situation (see Figure 3). Although most of the data remained lower than 1 km², less than 5% of them exceeded 4 km² and 7 km² in the aligned and misaligned situations, respectively. This result mainly originated from differences in the frequencies obtained for high consistency levels, as can be seen for AETs lower than 3 km². In the aligned situation, among the 62 AET values lower than 1 km², 32 were obtained after tactile exploration and 30 after virtual exploration.

[FIGURE 2 OMITTED]

For AETs greater than 1 km², the differences between the tactile and virtual frequencies never exceeded 1. Conversely, in the misaligned situation, among the 50 AET values lower than 1 km², 19 were obtained after tactile exploration and 21 after virtual exploration; differences still remained for AETs lower than 2 km², with 10 responses in the tactile condition and 4 in the virtual condition. Nevertheless, such differences tended to disappear for AETs greater than 2 km². In addition, only 11% of the results were greater than the area of the polygon delimited by the six seamarks (2.44 km²) in the aligned situation, versus 25% in the misaligned situation, denoting, once again, a lower consistency in the latter case.

The difference in the consistency of the estimated directions (AET) between the aligned and misaligned situations was significant (p < .001). This result was also found by Rossano and Warren (1989) in a similar experiment.

MAP EFFECT

There were differences in the duration of exploration: the mean time was 6 minutes, 23 seconds (SD 2 minutes, 3 seconds) for the tactile map and 10 minutes, 17 seconds (SD 6 minutes, 12 seconds) for the virtual one. The participants reported greater difficulties in the virtual condition than in the tactile one. However, the results concerning AET were similar in the two conditions: the mean AET was 0.98 km² (SD 1.99), with values spread from 0.359 m² to 13.29 km², in the tactile condition and 1.37 km² (SD 2.61), with values spread from 4 m² to 13.63 km², in the virtual condition. Half the data were lower than 0.31 km² in the tactile condition and lower than 0.25 km² in the virtual condition. This similarity between the map conditions also appeared when we analyzed the distribution of the data (see Figure 4): there were no differences for values lower than 1 km², and the differences remained minimal for the other clusters of the distribution. The majority of the data remained lower than 1 km², with less than 5% of the error triangles exceeding 4 km² and 7 km² in the tactile and virtual conditions, respectively. Of the 51 error triangle values lower than 1 km² in the tactile condition, 32 were obtained in the aligned situation and 19 in the misaligned situation, with a similar distribution in the virtual condition (30 and 21 in the aligned and misaligned situations, respectively). Finally, 11% of the results were greater than the area of the polygon delimited by the six seamarks (2.44 km²) in the tactile condition, versus 18% in the virtual condition.

[FIGURE 3 OMITTED]

We found no significant effect of map condition across the two alignment situations (p > .05), nor in the aligned situation alone (p > .05) or the misaligned situation alone (p > .05). Our results reinforce the idea that a similar level of "accuracy" and consistency can be obtained whether participants explore a virtual map or a traditional tactile map.

[FIGURE 4 OMITTED]

Discussion

In this preliminary experiment, the consistency of the spatial representations built by the participants, as indexed by the AETs, revealed a strong alignment effect in both the tactile and virtual conditions. It appears that the coordination of the egocentric and allocentric frames of reference remains a critical point. It is important to note, however, that the results did not show any difference in performance between the tactile map and the virtual environment in either the aligned or the misaligned situation. Tactile maps did not perform better; they were equivalent to the virtual environment in the generation of cognitive maps. This held true both for the generation of a mental representation and for the integration of the egocentric and allocentric frames of reference. This last result is surprising for two reasons. First, the participants were much more familiar with tactile maps than with the virtual one. Second, the force-feedback technology used in our experimental setting is not available to people who are blind in daily life. Intuitively, a static tactile map accessed by one or more hands should outperform a single-point stylus in a virtual environment. The equivalence of the spatial performances produced after exploring the tactile map with two hands and the virtual map with only the stylus raises the following question: Which specific encoding processes are involved in the construction of a nonvisual mental map by touch?

If one finger (an analogy to the stylus of the virtual reality interface) is as efficient as conventional tactile map-exploration strategies, which allow multiple scanning techniques and the use of more than one hand to provide an anchored frame of reference, then movement encoding appears to be particularly important in the virtual environment. Haptic (tactile-kinesthetic) perceptions provided by the movements of one finger, as in our experiments, are necessarily sequential and tend to rely on the egocentric spatial frame of reference. So, in line with the results of previous studies, which found that two hands are better than one (Craig, 1985; Heller, Hasselbring, Wilson, Shanley, & Yoneyama, 2004), we would expect one-fingertip perceptions to be less powerful than the perceptions coming from the 10 fingers of two hands for building a cognitive map that integrates information in both the egocentric and allocentric frames of reference.

Millar (1994) and Gentaz and Gaunet (2006) proposed that the use of the haptic modality to encode small-scale space is based mainly on the memorization of movement and kinesthetic information. Hatwell, Streri, and Gentaz (2003) argued that the quality of mental spatial images depends on the depth, exposure, and detail of the encoding. In other words, the more attention participants pay to the task, the better their cognitive maps are likely to be. This is the key to our results. A tactile map gives more immediate access to the configurational layout of the map scene and, in our scenario, the seamarks. This immediacy could lead participants to believe that they know where each seamark is and then not explore further or process the spatial information more deeply. In contrast, the movements induced by the sequential perceptions of one finger appear to encourage participants to verify each element many times to ensure that it is in the expected position.

We suggest that the use of the Phantom haptic device forces participants to encode information more deeply in the virtual condition than in the tactile one. This point was supported by the exploration times, which can be considered indicators of the amount of cognitive resources devoted to the task. This process could lead participants to encode spatial invariants as common benchmarks for ongoing movement and to generate a more durable spatial mental image. So, the constraints inherent in the virtual map, which we initially viewed as a problem, may turn out to be a more efficient means of encoding a small-scale spatial layout, regardless of the required exploration time. Even though this finding does not contradict previous studies, because the participants needed much more time to learn the configuration, we remain cautious about this paradoxical explanation, which should be verified in future experiments.

Another difference between the tactile and digital maps is that the perimeter of the tactile map is readily tangible: it is a regular rectangle with distinct boundaries. In contrast, the workspace of the Phantom haptic device is an ellipse generated by the physical constraints of the device (see Figure 5). Participants can use the perimeter of the tactile map to encode the spatial relationships between the perimeter and the seamarks and thus form a mental reference. As Hill and Rieser (1993) explained, the perimeter strategy allows people who are blind to refer to an external reference that creates a solid mental invariant. In this regard, participants use it to recall the locations of some elements relative to the boundaries of the map before mentally repositioning the rest of the configuration.

This is not possible in the virtual condition because of the lack of perimeter regularity: the perimeter does not feel like any well-known shape that could form a psychological schema, and consequently it is difficult to memorize. The participants in our study did not make use of the perimeter in the virtual condition. If the perimeter provides participants with an external, constant frame of reference in the tactile condition, it may be that gravity serves the same function in a vertical virtual environment. In addition, since people who are blind are more familiar with tactile maps, the lack of a significant difference between the precision of the cognitive maps encoded from the tactile and virtual spaces further supports the important role of movement encoding in memorizing space. It would be reasonable to expect that more in-depth training would allow people who are blind to benefit more widely from the movement encoding of space and to build resilient, detailed cognitive representations of space, with invariants that could coordinate the egocentric and allocentric spatial frames of reference more efficiently.

[FIGURE 5 OMITTED]

The virtual SeaTouch map is in the vertical plane, whereas the tactile map is in the horizontal one. The vertical plane introduces the gravity invariant as a particular form of spatial reference. Pozzo, Papaxanthis, Stapley, and Berthoz (1998) showed that gravity can be regarded as either initiating or braking arm movements and, consequently, may be represented in the motor command at the planning level. The perception of verticality thus seems to be widely involved in sensorimotor hand control and may be a major perceptible cue, when available, for encoding a manipulatory space without vision. In line with this theory, different studies have demonstrated the resiliency of the perception of gravity (see, for example, Senot, Zago, Lacquaniti, & McIntyre, 2005). Thus, one could suggest that people who are blind use the vertical axis as a tangible invariant to memorize the relationships between the positions of the seamarks. Doing so appears to facilitate the building of durable benchmarks into the configuration in the vertical virtual condition, which could account, in part, for our findings.

However, our sample size was small. In future research with a larger number of participants, we will investigate more precisely the influence of the encoding of movement by controlling for perimeter and gravity effects. In addition, we will focus on exploration strategies to gain a better understanding of the nonvisual encoding processes. These preliminary results show promise for the utility of virtual environments in representing spatial information to individuals who are blind.

We express our gratitude to all those who made this study possible: the blind sailors of the Orion Association, who allowed us to perform the experiments; the CECIAA society for funding; and the European Center for Virtual Reality, which helped to implement SeaTouch with the support of master's degree students in computer science.

References

Craig, J. (1985). Attending to two fingers: Two hands are better than one. Perception & Psychophysics, 38, 496-511.

Gentaz, E., & Gaunet, F. (2006). L'inférence haptique d'une localisation spatiale chez les adultes et les enfants: Étude de l'effet du trajet et du délai dans une tâche de complètement de triangle [Haptic inference of spatial localization in adults and children: Effect of the pattern and time in a triangle-completion task]. L'année psychologique, 106, 167-190.

Hardwick, D., McIntyre, C., & Pick, H., Jr. (1976). The content and manipulation of cognitive maps in children and adults. Monographs of the Society for Research in Child Development, 41, 1-55.

Hatwell, Y., Streri, A., & Gentaz, E. (2003). Touching for knowing: Cognitive psychology of haptic manual perception. Amsterdam: John Benjamins.

Heller, M. A., Hasselbring, K., Wilson, K., Shanley, M., & Yoneyama, K. (2004). Haptic illusions in the sighted and blind. In S. Ballesteros & M. A. Heller (Eds.), Touch, blindness and neuroscience (pp. 135-144). Madrid: UNED Press.

Hill, E. W., & Rieser, J. J. (1993). How persons with visual impairments explore novel spaces: Strategies of good and poor performers. Journal of Visual Impairment & Blindness, 87, 8-15.

Jansson, G., & Billberger, K. (1999). The PHANTOM used without visual guidance. Paper presented at the First PHANTOM Users Research Symposium, 1999, Heidelberg, Germany.

Kitchin, R. M., & Jacobson, R. D. (1997). Techniques to collect and analyze the cognitive map knowledge of persons with visual impairment. Journal of Visual Impairment & Blindness, 91, 360-376.

Lahav, O., & Mioduser, D. (2008). Haptic-feedback support for cognitive mapping of unknown spaces by people who are blind. International Journal of Human-Computer Studies, 66, 23-35.

Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. New York: Oxford University Press.

Nemec, V., Sporka, A., & Slavik, P. (2004). Haptic and spatial audio based navigation of visually impaired users in virtual environment using low cost devices. Lecture Notes in Computer Science (Vol. 3196, pp. 452-459). New York: Springer.

Pozzo, T., Papaxanthis, C., Stapley, P., & Berthoz, A. (1998). The sensorimotor and cognitive integration of gravity. Brain Research Reviews, 28, 92-101.

Rice, M., Jacobson, R., Golledge, R., & Jones, D. (2005). Design considerations for haptic and auditory map interfaces. Cartography and Geographic Information Science, 32, 381-391.

Rossano, M. J., & Warren, D. (1989). The importance of alignment in blind subjects' use of tactual maps. Perception, 18, 805-816.

Senot, P., Zago, M., Lacquaniti, F., & McIntyre, J. (2005). Anticipating the effects of gravity when intercepting moving objects: Differentiating up and down based on nonvisual cues. Journal of Neurophysiology, 94, 4471-4480.

Tatham, A. (2003). Tactile mapping: Yesterday, today and tomorrow. Cartographic Journal, 40, 255-258.

Thinus-Blanc, C. (1996). Animal spatial cognition: Behavioural and brain approach. London: World Scientific.

Wickens, C. (1999). Frames of reference for navigation. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 113-142). Cambridge, MA: MIT Press.

Mathieu Simonnet, Ph.D., postdoctoral fellow, Naval Research Institute, Ecole navale CC 600, 29240 Brest Cedex 9, France; e-mail: <mathieu.simonnet@orion-brest.com>. Stephane Vieilledent, Ph.D., associate professor, European Center for Virtual Reality, 25 rue Claude Chappe, 29280 Plouzane, France; e-mail: <stephane.vieilledent@univ-brest.fr>. R. Daniel Jacobson, Ph.D., associate professor, Department of Geography, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada; e-mail: <dan.jacobson@ucalgary.ca>. Jacques Tisseau, Ph.D., professor and director, European Center for Virtual Reality, 25 rue Claude Chappe, 29280 Plouzane, France; e-mail: <tisseau@enib.fr>.