Assessment of indoor route-finding technology for people who are visually impaired.
A common problem faced by individuals with visual impairments (that is, those who are blind or have low vision) is independent navigation (Loomis, Golledge, & Klatzky, 2001). Although canes and dog guides help people with vision loss avoid obstacles, planning and following routes in unfamiliar environments remains difficult for them (Golledge, Marston, Loomis, & Klatzky, 2004). Navigation is especially problematic in indoor environments because signals from the Global Positioning System (GPS) are unavailable. The Minnesota Laboratory for Low-Vision Research at the University of Minnesota-Twin Cities is developing an indoor wayfinding technology, called the Building Navigator, that includes a route-planning feature. This article describes tests of this feature with users.
Successful wayfinding technology should provide two main pieces of information: 1) the current location and heading of the individual, and 2) the route to the destination (Golledge et al., 2004; Loomis et al., 2001; Loomis, Golledge, Klatzky, & Marston, 2007). Routes consist of waypoints, locations where the navigator changes direction. To follow a route, the navigator needs to have "real-time" access to information about the distance and direction to waypoints until the destination is reached.
Thus far, several indoor wayfinding technologies for travelers with visual impairments have addressed the first issue of localization (Giudice & Legge, 2008; Loomis et al., 2001). Braille signs are now commonly used in buildings, but they are sometimes difficult to locate because of their sporadic placement. Also, the majority of people with visual impairments do not read braille (Demographics update, 1996). Various other positioning technologies have been explored, such as Talking Signs (<www.talkingsigns.com>), Talking Lights (<www.talkinglights.com>), RFID (radio-frequency identification) tags, and systems using wireless signals, but there are still limitations in the accuracy and cost of installing and maintaining these systems (Giudice & Legge, 2008; Loomis et al., 2001). Our Building Navigator software has been designed to be integrated with positioning technologies, such as those just described, and to provide information about the layout and other salient features of indoor spaces.
The Building Navigator
The Building Navigator provides information about the spatial layout of rooms, hallways, and other important features in buildings through synthetic speech output. This software was designed to be part of a portable system, perhaps installed on a cell phone or PDA (personal digital assistant). This section describes the major components of the system: the Building Database, which contains information about the layout of the floor (digital map); the Floor Builder, which is used to input information on the layout of the building into the database; and two interface components for exploration and route finding.
THE BUILDING DATABASE
The Building Database stores information about the physical features of the building as well as the spatial layout of these features. Stored features include spaces, such as rooms and lobbies, and important objects, such as drinking fountains. Features are encoded in the database by first dividing a floor layout into meaningful spatial regions. These regions are assigned to types of features (like door, room, hallway, window, stairs, and elevator), and types of features are grouped into broader categories (including physical space, connecting space, and utility features) to facilitate fast searches for information on the layout.
Features that are spatially adjacent in the layout are associated with each other in the database through a set of logical relationships. For example, to indicate that a room is accessible from a particular hallway, a door feature is associated with both the hallway and room features, which makes the door logically accessible from both the hallway and the room.
The Building Database also includes functions for acquiring information (for example, getting a list of known buildings, floors within a building, and types of features present within a building) and for managing requests for information from input and output plug-ins. Input plug-ins are intended to handle environmental sensors like a wireless network location device, dead-reckoning systems, or rotation or orientation sensors. Their primary purpose is to gather information on the user's location, heading, and movements. Output plug-ins provide an interface through which the user interacts with the Building Database and the rest of the navigation device. This article discusses two types of speech-enabled output plug-ins for use in exploration and route finding.
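The division of labor between input and output plug-ins might be sketched as two small interfaces. The class and method names below are hypothetical; the actual plug-in API of the Building Navigator is not published.

```python
# Hypothetical plug-in interfaces illustrating the division of labor
# described above; not the actual Building Navigator API.
class InputPlugin:
    """Sensor side: reports the user's position and heading to the database."""

    def current_position(self):
        raise NotImplementedError

    def current_heading(self):
        raise NotImplementedError


class OutputPlugin:
    """User side: turns Building Database queries into speech (or braille)."""

    def __init__(self, database):
        self.db = database

    def speak(self, text):
        # Stand-in for a call to a text-to-speech engine.
        print(text)
```

A concrete input plug-in would subclass `InputPlugin` and wrap a wireless-network locator or orientation sensor; the exploration and route-finding interfaces discussed next are both output plug-ins in this scheme.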
Entering building information into the database
A separate software program, the Floor Builder, is used to enter data into the database. Conventional spatial mapping applications do not encode the range of features used by the Building Database or their spatial relations, so a custom application was needed. First, a map of the building is digitized and segmented into features by a human operator, currently an experimenter. In the future, this person will be someone who is trained in the use of the software, not the end user. The Floor Builder software has a graphical user interface that allows the operator to follow a series of simple point-and-click steps to parse the map into features. The segmented map is converted into the necessary data structures and uploaded into the Building Database. With the initial version of this software, an experienced operator can complete the mapping of a floor with 50 rooms in about 2 hours. A later version of the software, automating several of the component procedures, has reduced this time to approximately 20 minutes.
User interfaces for navigation
The Building Navigator presents information via synthetic speech output, but, in principle, the same verbal information could be sent to a braille display. A significant challenge was to develop verbal descriptions of the space that were concise, informative, and easy to understand. These descriptions benefited from prior work on verbal descriptions for outdoor wayfinding, but differ in important ways. For instance, unlike streets, indoor hallways are not typically named. Also, more information needs to be conveyed within a smaller area of space in indoor environments than in outdoor environments. Previous studies have found that individuals who are blind are able to learn and navigate effectively through buildings using consistent and structured verbal descriptions of layout geometry (Giudice, 2004; Giudice, Bakdash, & Legge, 2007).
Upon entering an unfamiliar building, a user may have two possible goals: to become familiar with the overall layout of the building through exploration or to find a specific location by following a route. The Building Navigator currently supports these two types of navigation.
Exploration Mode. In the Exploration Mode, speech output describes the layout of features near the user's current location. For example, if the user is standing in a lobby, the set of features described in this mode would include the doors and hallways located on the perimeter of the lobby. Users can receive both egocentric and allocentric descriptions of these features. An egocentric description gives the direction to a feature relative to the user's current location and chosen heading. An allocentric description presents information with respect to a set of absolute reference directions, such as North, South, East, and West. Both kinds of descriptions also provide distances to features. See Figure 1A for an example of a feature list and the corresponding egocentric and allocentric descriptions.
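The difference between the two description frames can be shown in a short sketch. The phrasing and function names below are illustrative, not the exact formats of Figure 1A.

```python
# Illustrative generation of allocentric versus egocentric feature
# descriptions. Wording and function names are assumptions.
DIRS = ["north", "east", "south", "west"]

def allocentric(feature, direction, feet):
    # Absolute reference directions, independent of the user's heading.
    return f"{feature}: {feet} feet {direction}"

def egocentric(feature, direction, feet, heading):
    # Rotate the absolute direction into the user's frame:
    # 0 quarter-turns = ahead, 1 = right, 2 = behind, 3 = left.
    diff = (DIRS.index(direction) - DIRS.index(heading)) % 4
    rel = ["ahead", "to the right", "behind", "to the left"][diff]
    return f"{feature}: {feet} feet {rel}"
```

For a user facing north, a door 12 feet to the west is described allocentrically as "12 feet west" and egocentrically as "12 feet to the left."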
The Exploration Mode can be used for virtual exploration of a space, meaning that a user can simulate navigation through the layout without setting foot in the physical building. The interaction is similar to having a guide provide detailed information on a layout at the user's request. This ability to explore and learn a space virtually before actually going there, called prejourney learning, has proved beneficial to pedestrians who are visually impaired (Holmes, Jansson, & Jansson, 1996).
Route Mode. The Route Mode produces a list of instructions for navigating from a start to a goal location via a series of waypoints (see Figure 1B). The route is computed using Dijkstra's shortest-path algorithm (Dijkstra, 1959), a classic graph-search method for finding the shortest path between two nodes. The user listens to one route instruction at a time and selects the instruction by moving up or down in the list with key presses. The first instruction always indicates the user's starting location and heading. The subsequent instructions describe the distance and direction of travel to a series of waypoints and the direction of the turns. The final instruction indicates the distance to the goal location and on which side of the space it is located (for example, the south side of the hallway). Currently, all directions use an allocentric frame of reference (North, South, East, and West). To provide egocentric directions, the system must incorporate sensors or otherwise obtain information about the user's current heading.
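A minimal version of the route computation, Dijkstra's algorithm over a weighted adjacency map, might look like the following sketch. The graph contents (room and waypoint names, distances in feet) are invented for illustration.

```python
# Minimal Dijkstra's shortest-path search (Dijkstra, 1959) over a weighted
# adjacency map; the sample graph below is fabricated for illustration.
import heapq

def shortest_path(graph, start, goal):
    # graph: {node: [(neighbor, distance_in_feet), ...]}
    # Assumes the goal is reachable from the start.
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[goal]

# Hypothetical floor fragment: two candidate routes to room N349.
graph = {
    "N362": [("wp1", 40)],
    "wp1": [("wp2", 64), ("N349", 120)],
    "wp2": [("N349", 40)],
}
```

With this sample graph, the search returns the 144-foot route through both waypoints rather than the 160-foot route that turns at the first waypoint; each node on the returned path becomes one spoken instruction in the list.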
[FIGURE 1 OMITTED]
A sample verbal description to a waypoint is "104 feet south to three-way intersection west past two intersections." The descriptions contain three critical parts in a standard format. First, the distance and direction to the waypoint is given ("104 feet south"). Distance is provided in one of three units: the number of feet, the number of steps, or the amount of time in seconds to travel. Distances in the number of steps and seconds are individually calibrated according to the length of a user's step and walking speed.
The second part of the instruction describes the geometry of the waypoint intersection ("three-way intersection west"). The waypoint can be a two-, three-, or four-way intersection. Also, for two- and three-way intersections, the directions of the branching arms are described in an allocentric frame of reference. Consequently, the user is responsible for translating the description into an egocentric frame of reference (for example, "three-way intersection west" means there is a hallway branching to the left if the user is facing north or a hallway branching to the right if the user is facing south). Last, the verbal description indicates how many intersections the user will pass before reaching the waypoint ("past two intersections"). Although these descriptions may seem difficult to parse, their structured format supports rapid learning. The participants learned to comprehend the descriptions with modest practice.
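The translation from an allocentric branch direction to an egocentric turn cue, which the system currently leaves to the user, reduces to a quarter-turn calculation. The function name below is illustrative; the logic mirrors the example above, where a hallway branching west lies to the left of a user facing north and to the right of a user facing south.

```python
# Translating the allocentric direction of a branching hallway into an
# egocentric cue, given the user's heading. Function name is illustrative.
DIRS = ["north", "east", "south", "west"]

def branch_relative(branch_dir, heading):
    # Number of clockwise quarter-turns from the heading to the branch.
    diff = (DIRS.index(branch_dir) - DIRS.index(heading)) % 4
    return ["straight ahead", "to the right", "behind you", "to the left"][diff]
```

A system with a heading sensor could apply this conversion automatically instead of requiring the user to perform it mentally.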
The remainder of this article describes user testing of the Route Mode of the Building Navigator. The goal was to validate our method of coding and presenting information from a building plan as a step toward developing a useful application for improving wayfinding by people with visual impairments.
User testing of the Route Mode of the Building Navigator
We tested the ability of sighted individuals wearing blindfolds and those with visual impairments to use the Route Mode of the Building Navigator interface without the use of positioning sensors. One goal was to determine if the route information provided by the Building Navigator resulted in improved wayfinding performance. Another goal was to compare route-following performance with three metrics for describing distances to waypoints (feet, steps, or seconds). The purpose of testing sighted participants was to evaluate how visual experience affects the ability to code spatial and distance information from instructions from the Building Navigator. We anticipated that differences in the performance of the sighted and visually impaired participants would reveal improvements that could be made to the technology.
Twelve sighted participants (6 men and 6 women aged 19-29) and 11 participants with visual impairments (6 men and 5 women aged 24-60) were tested. All the participants except one person who was visually impaired were unfamiliar with the floors of the building that were used for testing. The person who was the exception (S9 in Table 1) had only limited familiarity with the test floors and reported not having a good "cognitive map" of the locations and layouts of the rooms. The criteria that were used for selecting the participants with visual impairments were that their impairment resulted in limited or no access to visual signs, they were no older than 60 years, and otherwise had no deficits that impaired their mobility. Table 1 presents additional characteristics of these participants. All the participants provided informed consent and were compensated either monetarily or with extra credit in their introductory psychology course. This study was approved by the Institutional Review Board at the University of Minnesota.
Apparatus and materials
The Building Navigator software was installed on an Acer TravelMate 3000 laptop carried by the participants in a backpack. The participants wore headphones that were connected to the laptop to hear the speech output and used a wireless numeric keypad to communicate with the laptop. The experimenter could also communicate with the user's laptop (via a Bluetooth connection to a second laptop) for the purpose of entering the start and goal locations for the wayfinding trials. The sighted participants wore blindfolds during testing and were guided by the experimenter. The participants who were visually impaired used their preferred mobility devices (see Table 1).
The participants were tested in four conditions using a within-subjects design. In three conditions, they used the technology, once in each distance mode (feet, steps, and seconds). In the baseline condition, the participants were not allowed access to the Building Navigator technology.
In all the conditions, the participants were allowed to ask a "bystander," played by the experimenter, for information but only at the office doors. The bystander provided the participants with their current location and the egocentric direction to travel to reach the destination (for instance, "You are at Room 426. Go right."). The participants were instructed to minimize the number of questions to the bystander and to ask the questions only when necessary, as if they were interrupting people in their offices. In the real world, when signage is not accessible, individuals who are visually impaired have no recourse but to seek information from bystanders. We simulated the bystander, rather than relying on an actual bystander, to equalize the access to information for all the participants and conditions.
Each condition was tested in a different building layout, and the condition-layout pairings and the order of the conditions were counterbalanced. The participants were first trained to use the system before the testing began.
Calibration. To calibrate distances given in steps and seconds, we obtained an accurate estimate of each participant's step length and walking speed. The participants were asked to wear the backpack with the laptop inside, as during testing. The sighted participants practiced walking blindfolded while guided by an experimenter until they felt comfortable. All the participants were asked to walk a 30-foot length of hallway three times while the experimenter counted their steps and timed them. The average number of steps and amount of time taken to walk the hallway were then entered into the system's software. The sighted participants had a mean step length of 1.85 feet per step and a mean velocity of 2.92 feet per second. The participants with visual impairments had a mean step length of 1.98 feet per step and a mean velocity of 3.43 feet per second. T-tests indicated no significant difference between the groups for the length of steps, but revealed a significant difference in velocity (p = .02). The use of step length and walking speed to compute travel distances relies on consistency in a person's walking characteristics. Previous work in our laboratory has shown that variability in the length of steps is small for sighted individuals and those with visual impairments (Mason, Legge, & Kallie, 2005).
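The calibration arithmetic reduces to a few divisions: average the three timed 30-foot walks, then convert any route distance into steps or seconds. The function names and sample numbers below are illustrative.

```python
# Calibration arithmetic described above: three timed 30-foot walks yield
# a step length (feet per step) and walking speed (feet per second).
# Function names and example values are illustrative.
COURSE_FEET = 30.0

def calibrate(step_counts, times_sec):
    trials = len(step_counts)
    mean_steps = sum(step_counts) / trials
    mean_time = sum(times_sec) / trials
    step_length = COURSE_FEET / mean_steps   # feet per step
    speed = COURSE_FEET / mean_time          # feet per second
    return step_length, speed

def distance_in_steps(feet, step_length):
    return round(feet / step_length)

def distance_in_seconds(feet, speed):
    return round(feet / speed)
```

For example, a walker who covers the course in 16 steps and 10 seconds on each trial has a step length of 1.875 feet and a speed of 3 feet per second, so the "104 feet" leg from the sample instruction becomes about 55 steps or 35 seconds.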
Training. The participants were first introduced to the structure of route descriptions produced by the system, including explicit training on verbal descriptions that were used to convey the geometry of intersections. The experimenter showed an example of each type of intersection, using tactile maps for the participants with visual impairments, and explained the corresponding verbal description given by the system. The experimenter then tested the participants on their understanding by asking them to describe the geometry of intersections corresponding to the verbal descriptions.
The participants were also trained in the functions of the keypad. The 2 and 8 keys were used to move up and down the list of instructions. In the "seconds" distance mode, the 4 key was used to start and pause a timer that beeped until the number of seconds indicated by the selected instruction elapsed. The 6 key was used to stop and reset the timer. Last, the slash (/) key was used to repeat an instruction. The participants practiced using the keys with a sample list of instructions and thus were also familiarized with the speech output.
To familiarize themselves with the task and technology further, the participants completed three practice routes, one in each distance mode. The practice routes were located on a floor other than those used for testing.
Testing. The participants were tested on four routes in a novel layout for each condition. The routes for each layout were chosen to be of similar difficulty. The average distance per route was 144.8 feet (SD = 8.3), and the average number of turns required was 2 (SD = 0.24). The complexity of the routes was chosen to test the effectiveness of the system and to limit the amount of time required to test multiple routes in the experiment. Figure 2 shows a sample set of four routes that were used for testing.
[FIGURE 2 OMITTED]
At the beginning of a trial, the participants were escorted to the starting location and instructed to face a specific direction. The experimenter then stated their current location, the direction they were facing, and the goal location (for example, "You are at room N362 facing south. Go to room N349"). The participants then attempted to find the goal location, using questions to the bystander when necessary, and indicated when they thought they had arrived at the goal location. The trial ended when the participants correctly found the goal location or when they gave up. The participants with visual impairments were able to end the trials in the technology conditions when they felt the system was no longer helpful and they chose to rely exclusively on queries to the bystander. These participants, who were on average older than the sighted participants, were given the opportunity to choose when to stop the trial to prevent them from getting frustrated or tired during the experiment. The beginning of the next trial started at the previous goal location.
The participants who were visually impaired were allowed to detect intersections with any information they would normally use, including their residual vision, echolocation, a cane, or a dog guide. Because the blindfolded sighted participants had far less access to information about intersections (they lacked a mobility device such as a white cane, had no visual cues, and were unfamiliar with the relevant sound cues), the experimenter told them when they passed intersections. Also, if the participants deviated from the prescribed route, they did not receive a new set of instructions from the Building Navigator; it was up to them to find their way back to the route or to end the trial. A second experimenter timed each trial with a stopwatch, recorded the participant's trajectory through the layout, and noted the locations of the participant's queries to the bystander. At the end of the experiment, the participants completed a survey asking them to evaluate the technology and to rank the conditions from most to least preferred.
The participants' wayfinding performance was evaluated using several measures. For each condition, we computed the average number of turns that were made, the distance traveled, and the amount of time taken to complete a route. We also measured the average number of questions to the bystander that were made in each condition as an indicator of how independently the participants could locate the rooms.
The results for both groups of participants (those who were sighted and those who were visually impaired) were analyzed separately. The dependent measures--the number of turns, the distance traveled, and the travel time--were analyzed using analyses of variance (ANOVAs) blocked on subject. Because the data were not normally distributed, Box-Cox power transformations were performed on the data. The nonparametric version of the ANOVA was conducted for the bystander-query measure, since the data could not be normalized with a transformation. For all the measures, contrasts comparing conditions with the technology to the baseline condition were performed. Also, Bonferroni-corrected pairwise comparisons were conducted to evaluate if one of the three distance modes provided by the technology resulted in better performance.
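As a rough sketch of this analysis pipeline, with fabricated data and a simple one-way ANOVA standing in for the blocked design, the steps above can be expressed with SciPy:

```python
# Sketch of the statistical pipeline: Box-Cox normalization, a simplified
# one-way ANOVA across the four conditions (the paper's analysis was
# blocked on subject), and a Friedman test as the nonparametric
# repeated-measures alternative. All data here are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# One row per participant, one column per condition
# (baseline plus the three distance modes).
times = rng.lognormal(mean=4.0, sigma=0.3, size=(12, 4))

# Box-Cox requires strictly positive data; it returns the
# transformed values and the fitted lambda.
flat, lam = stats.boxcox(times.ravel())
transformed = flat.reshape(times.shape)

# Parametric comparison across conditions.
f_stat, p_parametric = stats.f_oneway(*transformed.T)

# Nonparametric repeated-measures alternative (Friedman test),
# as used for the bystander-query counts.
chi2, p_friedman = stats.friedmanchisquare(*times.T)
```

The Friedman test operates on the untransformed values because it uses only within-subject ranks, which is why it suits the query counts that could not be normalized.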
SIGHTED PARTICIPANTS

The sighted participants were able to find 100% of the rooms in all the conditions. For all measures (see Figure 3), the ANOVAs were highly significant (p < .004), and the participants' performance was significantly better with the technology than in the baseline condition (p < .007). For three of the four measures (the number of turns made, the distance traveled, and the number of queries to the bystander), the participants' performance in each distance mode was significantly better than in the baseline condition (p < .001). When distance was given in feet or steps, the participants took significantly less time finding rooms than at the baseline (p < .003). There were no significant differences among the distance modes for any of the measures. As is shown by the median rankings displayed in Table 2, there was no clear preference for a particular distance mode when using the technology.
PARTICIPANTS WITH VISUAL IMPAIRMENTS
For all the conditions, the participants were able to find 93% of the target rooms. ANOVAs revealed significant effects for the number of turns made, the distance traveled, and the number of queries to the bystander (p < .04) (see Figure 4). For these measures, the participants' performance was significantly better with the technology than in the baseline condition (p < .02).
The participants asked significantly fewer questions of the bystander in all the distance modes than in the baseline condition (p < .001). They made significantly fewer turns with distance in steps and seconds (p < .008). When the distance was given in steps, they also traveled a significantly shorter distance than at the baseline (p = .006). There were no significant differences among the distance modes in any of the measures. Table 2 indicates that the participants preferred distance in steps when using the technology.
[FIGURE 3 OMITTED]
The goals of this experiment were to test if the route-finding technology improved the ability to find rooms in an unfamiliar building and if one of the three distance modes (feet, steps, or seconds) provided by the technology resulted in better performance. By testing both blindfolded sighted participants and those with visual impairments, we also evaluated how visual experience affected the use of the technology. Four measures were used to evaluate the participants' route-finding performance: the number of turns made, the distance traveled, the amount of time taken to find a room, and the number of queries to the bystander.
[FIGURE 4 OMITTED]
For the sighted participants, the use of the technology improved their performance on all the measures. For the participants who were visually impaired, the technology improved their ability to take the shortest route to the target room, as indicated by a reduction in the number of turns and the distance traveled. The technology also allowed these participants to navigate more independently, as demonstrated by fewer questions to the bystander. Unlike for the sighted participants, the technology did not significantly decrease the amount of time needed to complete the routes. Because the participants with visual impairments were older, they might have required more time to interact with the technology (for instance, scrolling through the list of waypoints and absorbing the verbal instructions) than did the sighted participants. The participants who were visually impaired also had greater difficulty, and thus required more time, when distances were provided in feet or seconds than did the sighted participants.
The rankings provided by the sighted participants in the postexperiment survey did not reflect a strong preference for one of the distance modes. The participants with visual impairments preferred distance in steps to distance in time or feet. They also performed better in the route-finding task with distance in steps, as demonstrated by fewer turns and shorter distances traveled. Distance in seconds, although individually calibrated, was not always reliable because walking speed was variable, especially for the participants with dog guides. According to comments from the participants and observations during testing, some participants with visual impairments did not have a good understanding of distance in feet. The difference in performance between the sighted individuals and those who were visually impaired suggests that visual experience may improve the understanding of metric distances. People with visual impairments may benefit from additional training to use information on metric distance effectively. Most of the participants who were visually impaired preferred distance in steps because it was consistently accurate, likely because of the low variability in the length of their steps (Mason et al., 2005); participants also had more control over the counting than when distance was given in seconds.
Counting steps has typically been discouraged by mobility instructors. First, it requires cognitive effort and may distract travelers from attending to other sources of information in the environment. Second, it is not feasible to memorize steps for a large number of routes. Third, it is not a reliable strategy in changing environments, such as outdoors, where objects like cars may change positions from one day to the next. The participants with visual impairments demonstrated improved performance and a preference for distance in steps because we remedied the limitations of counting steps in several ways. In our application, the technology contains the digital map, eliminating the need for users to learn and memorize the step counts between locations. Therefore, the cognitive burden of counting steps is reduced. Also, counting steps is more reliable in building environments because the building structure is stable. With these considerations, we believe that providing distances in steps is a viable way for technologies to communicate route information for navigation inside buildings.
Some of the participants with visual impairments thought it was useful to know how many intersections to pass before they arrived at a waypoint, while others did not seem to use the information. The usefulness of this information seemed to depend on whether the participants could detect intersections, using their residual vision, echolocation, a cane, or a dog guide. Several participants commented that it was difficult to maintain orientation in allocentric coordinates (North, South, East, and West), and they would have preferred egocentric directions. Indeed, most mistakes that the participants made were turning in the wrong direction when they followed the route instructions. These orientation issues will be solved in the future when the Route Mode is integrated with positioning and heading sensors.
In this study, the baseline condition required wayfinding without the Building Navigator technology. This baseline gave the participants access to information from the bystander at every doorway in the layout. We expect that the assistive technology will be even more advantageous in realistic situations when access to information from bystanders is less reliable.
We conclude that route instructions can improve wayfinding by individuals with visual impairments in unfamiliar buildings. Even without additional positioning sensors, the participants who were visually impaired were able to follow instructions to locate rooms. Additional findings indicated that distances to waypoints can be conveyed effectively by converting metric distance into an estimate of the number of steps by the user. The participants' preference for counting steps to estimate distance, and their improved performance with this metric, are important findings for the design of wayfinding technology for people who are visually impaired.
This study was supported by the U.S. Department of Education (grant H133A011903) through a subcontract with the Sendero Group, LLC, and by National Institutes of Health grants T32 HD007151 (Interdisciplinary Training Program in Cognitive Science), T32 EY07133 (Training Program in Visual Neuroscience), EY02857, and EY017835. We thank Joann Wang, Julie Yang, Ameara Aly-youssef, and Ryan Solinsky for their help with the data collection and analysis.
Dijkstra, E. W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1, 269-271.
Giudice, N. A. (2004). Navigating novel environments: A comparison of verbal and visual learning. (Doctoral dissertation, University of Minnesota, 2004). Dissertation Abstracts International B, 65, 6064.
Giudice, N. A., Bakdash, J. Z., & Legge, G. E. (2007). Wayfinding with words: Spatial learning and navigation using dynamically-updated verbal descriptions. Psychological Research, 71, 347-358.
Giudice, N. A., & Legge, G. E. (2008). Blind navigation and the role of technology. In A. Helal, M. Mokhtari, & B. Abdulrazak (Eds.), The engineering handbook on smart technology for aging, disability, and independence (pp. 479-500). Hoboken, NJ: John Wiley & Sons.
Golledge, R. G., Marston, J. R., Loomis, J. M., & Klatzky, R. L. (2004). Stated preferences for components of a personal guidance system for nonvisual navigation. Journal of Visual Impairment & Blindness, 98, 135-147.
Holmes, E., Jansson, G., & Jansson, A. (1996). Exploring auditorily enhanced tactile maps for travel in new environments. New Technologies in the Education of the Visually Handicapped, 237, 191-196.
Demographics update. (1996). Estimated number of adult braille readers in the United States. Journal of Visual Impairment & Blindness, 90, 287.
Loomis, J. M., Golledge, R. G., & Klatzky, R. L. (2001). GPS-based navigation systems for the visually impaired. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 429-446). Mahwah, NJ: Lawrence Erlbaum.
Loomis, J. M., Golledge, R. G., Klatzky, R. L., & Marston, J. R. (2007). Assisting wayfinding in visually impaired travelers. In G. Allen (Ed.), Applied spatial cognition: From research to cognitive technology (pp. 179-202). Mahwah, NJ: Lawrence Erlbaum.
Mason, S. J., Legge, G. E., & Kallie, C. S. (2005). Variability in the length and frequency of steps of sighted and visually impaired walkers. Journal of Visual Impairment & Blindness, 99, 741-754.
Amy A. Kalia, Ph.D., postdoctoral associate, Department of Psychology, University of Minnesota-Twin Cities, N218 Elliott Hall, 75 East River Road, Minneapolis, MN 55455; e-mail: <email@example.com>. Gordon E. Legge, Ph.D., Distinguished McKnight University Professor and chair, Department of Psychology, University of Minnesota-Twin Cities; e-mail: <legge@umn.edu>. Rudrava Roy, M.S., research assistant, Department of Computer Science and Engineering, University of Minnesota-Twin Cities; e-mail: <firstname.lastname@example.org>. Advait Ogale, M.S., research assistant, Department of Electrical and Computer Engineering, University of Minnesota-Twin Cities; e-mail: <email@example.com>.
Table 1
Description of the 11 participants with visual impairments.

Participant  Gender  Age  Age of onset   Low vision or blind  Mobility  logMAR acuity  Diagnosis
1            Female  35   Not available  Blind                Dog       Not available  Not available
2            Female  35   Birth          Blind                Dog       Not available  Clouded cornea, microthalmas
3            Male    34   6 months       Blind                Dog       Not available  Retrolental fibroplasia
4            Male    43   19 years       Low vision           Cane      1.77           Retinitis pigmentosa
5            Female  26   Birth          Low vision           Cane      1.32           Advanced retinitis pigmentosa
6            Female  41   18 months      Blind                Dog       Not available  Retinal blastoma
7            Male    24   Birth          Blind                Cane      Not available  Retinopathy of prematurity
8            Male    47   Birth          Low vision           None      1.18           Congenital cataracts
9            Male    60   6 years        Low vision           None      1.70           Secondary corneal opacification
10           Male    52   Birth          Low vision           None      1.44           Glaucoma, congenital cataracts
11           Female  55   Birth          Blind                Dog       Not available  Retinopathy of prematurity

Table 2
Median rankings of the participants' preference for the four test conditions (1 = most preferred). The first three columns are the Building Navigator distance modes.

Group              Distance in feet  Distance in steps  Distance in seconds  No technology
Sighted            2.5               2.0                3.0                  4.0
Visually impaired  3.0               1.0                3.0                  3.0
Authors: Kalia, Amy A.; Legge, Gordon E.; Roy, Rudrava; Ogale, Advait
Publication: Journal of Visual Impairment & Blindness
Date: March 1, 2010