Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.
Although certain costs of adding automation to a machine or process are easy enough to anticipate (e.g., additional operator training), other costs have turned out to be much more subtle. Especially when operators are called upon to share control with an automation system, unexpected problems arise. These include bumpy transitions between automated and manual control (Rubinstein, Meyer, & Evans, 2001; Mosier, 2002) and automation surprises, which occur when the operator has a poor mental image of automation behavior or misses automation mode changes (Sarter, Woods, & Billings, 1997). The consequences arising from these unexpected problems can be dire, at times offsetting the improved precision and performance or reduced operator workload for which automation was added in the first place. Unraveling these costs and providing solutions has been the focus of much research in recent years.
A portion of the unexpected problems associated with sharing control between a human operator and an automation system can be attributed to the need for the operator to master two control interfaces and to learn two or more distinct operating modes. Whereas many machines present a manual control interface requiring continuous, direct control and certain manual skills, the typical automation interface presents a set of indicators, knobs, and buttons requiring intermittent input and certain analytical and decision-making skills (Mosier, 2002). When complex or unanticipated conditions arise, the traditional approach to "cooperation" is for the operator to interrupt the automation and take over full control through the manual interface (Christoffersen & Woods, 2002). When control is wrested away from the automation system, the advantages of automation (precision, computational speed, and other functions) are lost (Christoffersen & Woods, 2002). Bainbridge (1985) has noted that ironically, the manual skills of an operator who regularly gives up control to an automation system may begin to degrade from lack of practice, leaving him or her ill prepared to meet the heightened challenges that typically arise during automation failures.
What is needed is an intermediate, more collaborative mode of interaction. Ideally, the operator would be able to negotiate with and redirect the automation system without first disabling it and then restarting (Christoffersen & Woods, 2002). There would be a natural kind of "give and take" between the operator and the automation. In this paper, we propose an automation interface and associated control-sharing paradigm that does not include mode switching and therefore avoids the associated pitfalls. The operator needs to learn only one interface and one set of rules.
Another portion of the unexpected problems in automation can be attributed to the poor rate and quality of information transmission supported by most automation interfaces. In addition to the communication of current mode and action, efficient cooperation requires the communication of goal and intent. One approach to improving communication is the use of multimodal displays. Sarter (2002) has estimated the value of various modalities for guiding the operator's attention and effectively managing interruptions. To visual and auditory displays, Sarter (2002) has added haptic (kinesthetic and tactile) display, including vibrotactile cues on the manual control interface.
In automobile interfaces, both vibrotactile and pulse torque cues applied through a motorized steering wheel have been tested as a means of informing or warning automobile drivers (Driver & Spence, 2004; Schumann, Godthelp, & Hoekstra, 1992; Suzuki & Jansson, 2003). Beyond the use of haptics for discrete cues, haptic display has been used to automate vehicle steering in such a way that the driver can continuously monitor the automation actions (Switkes, Rossetter, Coe, & Gerdes, 2004). In this work, automation took the form of a virtual potential field to aid the driver in lane keeping. Any deviation from the center of a lane produced continuous haptic information about the automation's tendency to return the vehicle to the lane center.
In this paper, we also employ haptic display with the aim of enabling more effective negotiation and coordination of intent and actions. The promise of haptic display follows in part from its lack of overlap with the visual and auditory modalities; moreover, haptic signals can be information rich in ways that are particular to the communication of intent, especially intent involving direction and magnitude. Further, we believe that signals encoded in force and motion are especially suited for informing the operator about automation intent because they can simultaneously convey intent and automation confidence in that intent. Mechanically mediated communication can even support negotiation of authority. For example, a particular motion imposed by the automation can be accompanied by larger reaction forces that resist motion inputs from the operator. Likewise, the operator could express his or her desire for increased authority by using high impedance or less "give," possibly by cocontracting the muscles in his or her arm. This is the way two human operators would communicate if they both grasped the same manual control interface: Each operator would apply muscle action to extract his or her own desired response with a certain authority while simultaneously perceiving the other's intent and desire for authority by feel.
We propose, then, to combine the machine and automation interfaces into a single interface modeled after the traditional manual control interface. The human operator is presented only the standard manual interface, over which the automatic control system is also given authority. The automation system imposes its control effort through a motor coupled directly to the interface. The motorized manual interface becomes a haptic display, relaying information about the actions of the automation system to the human operator's haptic senses. In effect, we propose a return to the "contact" or "direct" mode of interaction, in which visual/kinesthetic perception and motor response are relied upon rather than the analytical and decision-making skills usually required by the automation system. In contrast to the use of vibrotactile cues for alert and display of discrete information, we use haptics to relay continuous signals to the operator. In our shared control scheme, the automation is placed in mechanical parallel with the operator and takes the form of an assist that actually intervenes in the control loop. The operator may choose either to yield to the assist while observing its action or to override it by exerting slightly more effort.
Through the motor on the interface, the automation may apply torques according to its own rules or control law, as a function of sensed machine state. For example, a steering wheel can be given a "home" position that is itself animated according to sensed vehicle position within a lane. The automatic controller can create virtual springs that attach the steering wheel to a moving home angular position that corresponds to the vehicle direction recommended by the automation. By feel, the operator can form a mental image of the springs attached to the moving home position, especially by haptically exploring the invariants of the reaction torque to his or her own input motions. To ensure that the automation can be overridden, it uses a limited mechanical impedance (essentially a limited stiffness).
The introduction of assist through a motor on a manual control interface has been studied extensively in the applications of haptic interface to teleoperated and virtual environments (Gillespie, 2004; Hayward, Astley, Cruz-Hernandez, Grant, & Robles-De-La-Torre, 2004). In these applications, assist is offered in the form of virtual fixtures that may be used by the operator as mechanical guides for controlling force or motion direction. Virtual fixtures have been shown to improve performance in targeting tasks (Dennerlein & Yang, 2001; Hasser, Goldenberg, Martin, & Rosenberg, 1998), peg-in-hole tasks (Payandeh & Stanisic, 2002; Rosenberg, 1993; Sayers & Paul, 1994), and surgical interventions (Park, Howe, & Torchiana, 2001). Virtual fixtures are usually fixed in the shared workspace; however, virtual fixtures composed of functions of time or recognized operator motions were studied by Li and Okamura (2005). In this work we also employ virtual fixtures, created by the automation in the workspace shared by the automation and operator. Our fixtures, however, are animated by the automation system. By and large, the focus in the field of haptic interface has been improved human/machine performance. The possible secondary benefits, such as reduced operator workload, have been overlooked in the literature.
The setting for our investigation of mechanically mediated control sharing is a driving simulator with a motorized steering wheel. Although our particular implementation is far removed from actual driving--our former setups were in fact closer (Steele & Gillespie, 2001)--we hope that our results may nevertheless contribute to ongoing work in the design of automation interfaces for driving. The primary goal in this paper is to quantify the primary and secondary benefits of using a motorized interface to institute control sharing between an operator and an automatic controller.
We present three experiments designed to demonstrate our conception of shared control using a motorized manual control interface. Naturally, an expected outcome is improved performance on the semiautomated task, which is quantified in Experiment 1. However, there are other expected benefits. Experiment 2 was aimed at quantifying the benefits in reduced visual demand associated with the primary task. In Experiment 3, we investigated the hypothesized freeing of attention (as reflected by improved performance on a secondary task). The driving task in all experiments also includes a challenge not addressed by the automatic controller, which becomes a means for prompting negotiations between the human and automation and a basis for requiring and measuring maintained attention to the primary driving task. After the descriptions of the experiments, we present a general discussion comparing the results from all three experiments and discuss the effects of haptic assist on primary and secondary tasks, followed by a summary of our results.
To carry out our experiments, we developed a fixed-base driving simulator that featured a computer monitor and a motorized steering wheel. The computer monitor presented participants with a view of the simulated vehicle hood and roadway, and the motorized steering wheel provided steering control and force feedback from the simulated tire-road interactions. In addition, the motor on the steering wheel was used to apply torques from the automation system when it was enabled. These torques were designed to assist the driver in holding to the center of the simulated roadway (i.e., to assist in lane keeping). The automation-produced torques could be felt by the participant's hand on the steering wheel, so we refer to them as haptic assist. To prompt the participants to maintain some amount of authority while being assisted by the automation, obstacles were placed in the center of the roadway. The automation system was not able to sense and avoid these obstacles. The participants were instructed that they would be solely responsible for steering around the obstacles. In this section we introduce each of the major components of the driving simulator in turn: the vehicle, roadway, and obstacle models; the automation system (including its path-planning algorithm and feedback control law); and the simulator hardware. The experimental procedures pertaining to each of the three experiments and the associated performance metrics will be described in the sections to follow.
Vehicle, Roadway, and Obstacle Models
Participants drove a vehicle model, which rolled on flat ground, and they turned right or left by steering the front wheels, as in a typical car. The speed of the vehicle's front wheel was fixed at 15 mph (24 km/hr); thus interactions with brake and accelerator pedals were not included. The kinematics of the vehicle model were computed according to the bicycle model, assuming no slip between the tires and the roadway (T. D. Gillespie, 1992). Force feedback on the motorized steering wheel reflected the vehicle model's self-aligning torque. This torque acts on the steering linkage and tends to turn the front wheels into the direction of travel of the vehicle, causing the vehicle to steer straight. Given the fixed speed of the vehicle, the self-aligning torque was proportional to the steering angle of the front wheels relative to the vehicle centerline.
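The kinematics just described can be sketched as a discrete-time update. This is a minimal illustration, not the paper's implementation: only the 15 mph speed (here in m/s) comes from the text; the wheelbase, 8 ms time step, and torque gain are assumptions.

```python
import math

def bicycle_step(x, y, heading, steer, v=6.7, wheelbase=2.5, dt=0.008):
    """One kinematic bicycle-model update (no tire slip).

    v is roughly 15 mph in m/s; wheelbase and dt are illustrative
    assumptions, not values reported in the paper.
    """
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += (v / wheelbase) * math.tan(steer) * dt
    return x, y, heading

def self_aligning_torque(steer, k=1.0):
    # At fixed speed, the self-aligning torque is proportional to the
    # front-wheel steering angle and opposes it (gain k is assumed).
    return -k * steer
```

With zero steering input the model travels straight ahead, and any nonzero steering angle produces an opposing self-aligning torque, matching the behavior described above.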
A roadway was defined as a sequence of 16 straight and 15 left- and right-curved road segments of varying length totaling 1993 m. An overhead view of the roadway is shown in Figure 1. All the curved segments had a curvature of 0.025 m^-1. To smooth transitions, segments were joined with clothoid curves 5.6 m long. Given the fixed front wheel speed, each trial lasted nearly 5 min; some variation in the time per trial arose because of the difference in length of the actual vehicle path compared with the length of the road centerline. Visually, the road segments had a gray, concrete texture with a yellow stripe along the center and green embankments on either side.
[FIGURE 1 OMITTED]
Participants were instructed to follow the yellow centerline of the road, except when they encountered orange cylindrical objects that had been placed on the road centerline at irregular intervals. All obstacles were 1.4 m in diameter and 2.0 m tall, and the spacing between obstacles varied from 20 to 80 m in a uniform random distribution. Obstacles were located on both straight and curved segments, as indicated in Figure 1. If the vehicle perimeter contacted an obstacle, a brief orange flash indicated the collision and the destruction of the obstacle.
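Given the obstacle dimensions above, detecting a collision reduces to a distance test between the vehicle center and the obstacle center. In this sketch, the 0.9 m vehicle half width is an assumption chosen to be consistent with the 1.6 m collision radius reported later with the results; only the 1.4 m obstacle diameter comes from this section.

```python
import math

OBSTACLE_RADIUS = 0.7   # m (obstacles are 1.4 m in diameter)
CAR_HALF_WIDTH = 0.9    # m (assumed; consistent with the 1.6 m radius)

def hit_obstacle(car_x, car_y, obs_x, obs_y):
    """True when the vehicle perimeter contacts the obstacle, i.e.,
    when the center-to-center distance falls below the combined radius."""
    return math.hypot(car_x - obs_x, car_y - obs_y) < (
        OBSTACLE_RADIUS + CAR_HALF_WIDTH)
```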
In half the trials, an automation system assisted participants in lane keeping. Conceptually, the automation divided the steering task into two problems: generating a desired path that would return a stray vehicle to the road centerline and turning the steering wheel to follow that desired path.
The path planning employs a geometric approach based on knowledge of the vehicle's position and orientation relative to the nearby road geometry. This approach follows the predictive driver model of Hess and Modjtahedzadeh (1990), using the notion of an "aim point" ahead of the vehicle on the centerline of the road. This aim point is located by finding the closest point to the vehicle on the road and then looking forward 10 m along the road. The geometry of the aim-point construction is illustrated in Figure 2. If the vehicle's front wheels are always headed toward the aim point, they follow a path that leads back to the center of the road. Given the desired front wheel heading and the current vehicle heading, the desired steering wheel angle is determined. Because our path planning was based on a model of human driver behavior, we surmised that the automation would, in some sense, not fight the human driver but rather mimic the driver's behavior. This path-planning technique was also advantageous because the desired steering angle was relatively simple to calculate from the geometry; the only challenge was calculating the closest point in real time. We addressed that problem by using a feedback-stabilized closest-point algorithm, which features computational efficiency for real-time applications (Patoglu & Gillespie, in press).
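A minimal sketch of the aim-point construction follows, assuming the centerline is given as a polyline with roughly uniform point spacing. The brute-force closest-point search here stands in for the feedback-stabilized real-time algorithm cited above; the 10 m look-ahead comes from the text, while the 1 m sample spacing is an assumption.

```python
import math

def closest_index(path, px, py):
    # Brute-force closest point on the centerline; the paper instead
    # uses a feedback-stabilized algorithm for real-time efficiency.
    return min(range(len(path)),
               key=lambda i: (path[i][0] - px)**2 + (path[i][1] - py)**2)

def desired_steer(path, pose, lookahead=10.0, spacing=1.0):
    """Aim-point steering: find the closest centerline point, look
    10 m farther along the road, and head the front wheels there."""
    x, y, heading = pose
    i = closest_index(path, x, y)
    j = min(i + int(lookahead / spacing), len(path) - 1)
    ax, ay = path[j]
    desired_heading = math.atan2(ay - y, ax - x)
    return desired_heading - heading  # steer relative to vehicle heading
```

For a vehicle displaced to the left of a straight road, the construction yields a rightward steering command that leads back to the centerline, as described above.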
[FIGURE 2 OMITTED]
After calculating the desired steering angle, the automation's remaining task was to exert an appropriate torque on the steering wheel. A virtual torsional spring was used to oppose motion of the steering wheel away from the desired steering angle--that is, a restoring torque proportional to the difference between the desired steering wheel angle and the current measured angle was applied to the motorized steering wheel. A torsional stiffness of 1.2 Nm/rad set the level of control authority exerted by the automation. As a further limit on the automation's authority, the maximum magnitude of the torque was set to 0.82 Nm. Thus if the participant turned the wheel far away from the desired steering angle, the resistance imposed by the steering wheel motor would saturate well below the limits of human strength.
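The virtual torsional spring with saturated torque can be written directly from the two constants given above (1.2 Nm/rad stiffness, 0.82 Nm torque cap); the function below is a sketch of that control law, not the authors' code.

```python
def assist_torque(theta_desired, theta, k=1.2, t_max=0.82):
    """Haptic assist torque (Nm): a virtual torsional spring of
    1.2 Nm/rad pulling toward the desired steering angle, saturated
    at 0.82 Nm so the driver can always override the automation."""
    torque = k * (theta_desired - theta)
    return max(-t_max, min(t_max, torque))
```

Near the desired angle the torque grows linearly with the error; far from it, the torque saturates well below the limits of human strength, which is what bounds the automation's authority.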
Participants drove the simulator while seated in front of a computer monitor that displayed the roadway, with their right hand grasping the motorized steering wheel as depicted in Figure 3. The motorized wheel is described in Gillespie, Hoffman, and Freudenberg (2003). The computational hardware supporting the driving simulator included two computers: a PC for graphical display and data logging and a Motorola MPC555 microcontroller for real-time simulation of the vehicle model and the automation system. An OpenGL graphics application running on the PC rendered a 3-D animation of the hood of the car and the road. A screen shot from the animation appears in Figure 4, showing embankments on either side of the road and two obstacles on the centerline. The graphics software received the vehicle state information every 8 ms through a serial communication link to the MPC555 microcontroller. For Experiment 2, the graphics program was used to occlude the participant's view of the road, except for 1-s glimpses when requested through a key press.
[FIGURES 3-4 OMITTED]
The driving simulator and shared controller described in this section were used in three experiments. Experiment 1 defined a baseline measure of driving performance for participants with and without the haptic assist. In Experiment 2, the participants' visual demand while driving was measured using the visual occlusion method with and without haptic assist. In Experiment 3, participants were asked to perform a secondary task while driving. Again, the experiment was run for the two conditions (with and without haptic assist), and the effect of haptic assist on secondary task performance was measured by accuracy and reaction time.
EXPERIMENT 1: BASELINE
The objective of our first experiment was to quantify the improvement in driving performance afforded by the haptic assist controller. Driving performance was defined in terms of two performance variables: lateral error in tracking the road centerline (a path-following task) and the number of obstacles hit. The haptic assist, when present, could be entrusted to help only with path following. It was solely the driver's responsibility to avoid obstacles in the middle of the road; the automation had no information about the obstacles. Thus the obstacles provided the motivation for keeping the human in the loop. The experimental condition under investigation in Experiment 1 was simply the presence of haptic assist. Experiment 1 provided the baseline performance, against which the results from Experiments 2 and 3 were compared. Experiments 2 and 3 imposed additional experimental conditions under which the effects of haptic assist were measured.
Participants. Eleven participants (9 men and 2 women) between the ages of 20 and 63 years (M = 30, SD = 11.9 years) volunteered for the study. All participants had normal or corrected-to-normal vision. Each participant provided informed consent in accordance with University of Michigan human participant protection policies. Individuals were not paid for their participation. Each participated in one experimental session lasting approximately 1 hr. Participants, whether left or right handed, were asked to use their right hand on the motorized steering wheel to steer the simulated vehicle along the centerline of the roadway as closely as possible without colliding with any obstacles. Each participant spent one 5-min trial familiarizing him- or herself with the path-following task under the experimental conditions of all three experiments.
Design. After completing the practice trial, each participant completed six 5-min trials, each representing a unique set of experimental conditions. Two of these six trials were designed to assess baseline path-following performance: one trial without haptic assist and the other with haptic assist. Each participant was randomly assigned a sequence of trials chosen from a set designed to counterbalance the ordering of the haptic assist condition. Participants did not receive performance feedback at the end of trials; however, the roadway and excursions of the car from the centerline were visible on the graphic display, and obstacle collisions, if they occurred, were accompanied by a brief flash of orange on the entire screen. The simulator logged the time, vehicle position and orientation, obstacle collisions, the closest point on the centerline (computed in real time), and the torque displayed on the motorized steering wheel.
Dependent variables and data analysis. To assess driving performance, we defined two performance measures: one for path following and the other for obstacle avoidance. The shortest distance between the center of the car and the center of the road was computed at each sample time (every 8 ms) and defined as the lateral error (LE). The root mean square (RMS) of LE, denoted RMS[LE], was used to assess path-following performance. However, to facilitate analysis of path-following performance and its dependence on assist condition independent of the obstacle avoidance maneuvers, we partitioned the data into segments between obstacles and segments near obstacles. Partitioning was defined in time, with near indicating 2 s before and 1 s after the instant at which the closest point on the centerline passed the obstacle. The near data segments were then discarded from the measure of path-following performance. Obstacle avoidance performance was defined as the fraction of obstacles hit from the 30 presented, reported as a percentage and denoted %Hit.
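The partitioning and RMS computation above can be sketched as follows; the windowing (2 s before to 1 s after each obstacle pass) comes from the text, while the function signature and data layout are assumptions.

```python
import math

def rms_lateral_error(times, le, obstacle_pass_times,
                      before=2.0, after=1.0):
    """RMS[LE] over samples *between* obstacles: samples falling in a
    window from 2 s before to 1 s after the instant the closest point
    on the centerline passes an obstacle are discarded."""
    def near(t):
        return any(tp - before <= t <= tp + after
                   for tp in obstacle_pass_times)
    kept = [e for t, e in zip(times, le) if not near(t)]
    return math.sqrt(sum(e * e for e in kept) / len(kept))
```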
Quantile plots were used to verify that the data fit normal distributions for both performance metrics when considered across the 11 participants. Using α = .05 to establish statistical significance of results, multiple-factor analysis of variance (ANOVA) was performed for the two dependent performance measures: RMS[LE] and %Hit.
Figure 5 shows the tracking performance of a typical participant in a generic section of the roadway with and without steering assist. The section of roadway shown is 220 m in length and took about 32 s to traverse at the constant 15 mph (24 km/hr) vehicle speed--a moderate pace given the curvature of the road. The top trace shows the curvature of the road, indicating that a left turn and two straight segments are represented in this section of roadway. Deviation from the centerline is graphed versus time in the lower two traces, in which Trace A was recorded without haptic assist and Trace B was recorded with haptic assist. The obstacles are outlined as circles of radius 1.6 m, but they appear as ellipses because of the nonunity aspect ratio. The 1.6-m radius accounts for the 0.7-m obstacle radius and 0.9-m car half width--that is, a collision occurs if the vehicle center comes within 1.6 m of an obstacle center. Obstacle avoidance maneuvers produced by the participant are apparent in both traces, and those maneuvers are not appreciably different by condition. However, differences across condition are apparent in tracking performance in the sections of roadway between obstacles. Improvement can be observed in Trace B, where haptic steering assist was provided.
[FIGURE 5 OMITTED]
As previously described in the subsection discussing dependent variables and data analysis, the data were partitioned into segments either near or between obstacles, and only values of LE sampled when the vehicle was considered between obstacles were used to calculate RMS[LE]. The shaded areas in Figure 5 indicate data within the 3-s windows around each obstacle, which were considered near.
There was some variation in driving behavior among participants, particularly in the obstacle avoidance maneuvers. Some drivers chose more "aggressive" driving styles; they would wait longer before avoiding an obstacle and would turn the steering wheel faster and with more effort. The variation across all participants is shown as pointwise percentiles in Figure 6 over the same section of road shown in Figure 5. After computing pointwise percentiles, we low-pass filtered the data with a noncausal second-order Butterworth filter with a spatial cutoff frequency of 3.7 m^-1. The 5th and 95th percentiles of |LE| form the bottom and top edges of the shaded region in the plot. The dark line drawn through the shaded region is the pointwise median of |LE| for all participants. Qualitatively, the plot shows that the obstacle avoidance behavior is not significantly altered by the haptic assist, but the tracking error is reduced in the sections between obstacles.
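The pointwise percentile computation can be sketched as below, using a simple nearest-rank rule; the subsequent zero-phase Butterworth smoothing is omitted here, and the data layout (one |LE| trace per participant, aligned sample by sample along the road) is an assumption.

```python
def pointwise_percentiles(traces, p_lo=5, p_hi=95):
    """Pointwise 5th/95th percentiles and median of |LE| across
    participants at each sample along the road (nearest-rank rule;
    the noncausal low-pass filtering step is not included)."""
    lo, med, hi = [], [], []
    for samples in zip(*traces):
        s = sorted(abs(v) for v in samples)
        n = len(s)
        lo.append(s[int(p_lo / 100 * (n - 1))])
        med.append(s[n // 2])
        hi.append(s[min(n - 1, round(p_hi / 100 * (n - 1)))])
    return lo, med, hi
```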
[FIGURE 6 OMITTED]
Multiple-factor ANOVA applied to RMS[LE] revealed significant main effects attributable to the assist condition, F(1, 21) = 4.9, p = .05, MSE = 0.15, and subject, F(10, 21) = 3.5, p = .03, MSE = 0.11. For %Hit, again a significant main effect attributable to assist condition was found, F(1, 21) = 9.4, p = .01, MSE = 0.0032; however, no significant main effect was found for subject, F(10, 21) = 2.4, p = .09, MSE = 0.0083.
Thereafter, a paired t-test analysis was applied to the data. Table 1 presents the sample mean, x̄, and standard deviation, s, of RMS[LE] and %Hit along with the paired t-test results. For both performance measures, the difference between the no-assist and the with-assist conditions was calculated per participant, and the difference of means, Δx̄, and the standard deviation, sΔ, of these differences along with the p value and degrees of freedom (DOF) of the t tests are presented in Table 1. There was a 30% reduction in RMS[LE], t(10), p = .013. However, for %Hit, there was a statistically significant increase from 0.57% to 2.8%, t(10), p = .0059, when haptic assist was added. This result represents a cost of adding haptic assist rather than a benefit. Note, however, that the assist is always trying to drive the vehicle back to the center of the road, exactly where the obstacles are placed. Without driver intervention, the car would drive through every obstacle on the course.
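The paired analysis operates on per-participant differences, as described above. A sketch of the test statistic (with n - 1 degrees of freedom) follows; the data values are made up for illustration and are not the experimental results.

```python
from statistics import mean, stdev
import math

def paired_t(no_assist, with_assist):
    """Paired t statistic on per-participant condition differences,
    as used for RMS[LE] and %Hit; returns (t, degrees of freedom)."""
    diffs = [a - b for a, b in zip(no_assist, with_assist)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1
```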
EXPERIMENT 2: VISUAL DEMAND
Our second experiment aimed to quantify the ability of the haptic assist controller to reduce the demand for visual cues while aiding participants in the path-following task. Again, because of the presence of obstacles on the road centerline, the path-following task demanded a certain amount of attention from the participants, whether or not the haptic assist was present. As in Experiment 1, the independent variable was the presence of haptic assist. In Experiment 2, however, the effects of haptic assist were measured with reduced visual feedback. The graphical display was blank except when a visual refresh was requested by the participant. The requests for visual feedback provided a measure of the participants' instantaneous and average demand for visual cues.
Participants. The same 11 individuals who participated in Experiment 1 also participated in Experiment 2.
Design. Experiment 2 concerned two of the six trials. These two trials, one with and one without haptic assist, were designed to assess driving performance and visual demand while the visual feedback was metered according to the visual occlusion method (Tsimhoni & Green, 2004). To enable us to measure the participants' demand for visual cues, the graphical display of the driving environment and roadway was blank except for 1-s glimpses provided each time participants pressed a key with their left hand on the computer keyboard. Participants were instructed to request the display whenever they felt that additional visual feedback was necessary to follow the roadway and avoid obstacles. Visual demand throughout the trial was measured as the fraction of time that the visual feedback was not occluded. In addition to the data logged in Experiment 1, the simulator logged key presses on the keyboard.
Dependent variables and data analysis. To assess driving performance, the same two performance metrics already defined for Experiment 1 were used: RMS[LE] (defined in regions away from obstacles) and %Hit. In addition, an instantaneous measure of visual demand was computed using
VisD = 1.0/(t_i - t_(i-1))
in which t_i is the time of the ith key press and the numerator is the period of time that the display was not occluded per request (Tsimhoni & Green, 2004). A measure of average visual demand over the entire trial, denoted Avg[VisD], was computed as the number of key presses in a given trial divided by the duration of the trial (~300 s).
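These two measures can be sketched directly from the definitions above: each key press opens the display for 1 s, so consecutive press times give the instantaneous demand, and the press count over the trial gives the average. The function signature is an assumption.

```python
def visual_demand(key_press_times, glimpse=1.0, trial_duration=300.0):
    """Instantaneous visual demand VisD = glimpse/(t_i - t_(i-1)) for
    each pair of consecutive key presses, and Avg[VisD] as the number
    of presses (each granting a 1-s glimpse) over the trial duration."""
    inst = [glimpse / (t - t_prev)
            for t_prev, t in zip(key_press_times, key_press_times[1:])]
    avg = len(key_press_times) * glimpse / trial_duration
    return inst, avg
```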
Data analysis included quantile plots, multiple-factor ANOVA, and t tests. The value α = .05 was used to establish statistical significance.
The three performance metrics for Experiment 2 (visual demand) were RMS[LE], %Hit, and Avg[VisD]. Multiple-factor ANOVA was performed for all three dependent performance measures. Analysis of RMS[LE] revealed significant main effects attributable to assist condition, F(1, 21) = 12, p = .005, MSE = 0.65, and subject, F(10, 21) = 4.5, p = .01, MSE = 0.23. For %Hit, neither assist condition nor subject was a significant main effect, although assist condition approached significance, F(1, 21) = 3.5, p = .09, MSE = 0.011. The new performance measure in this experiment was Avg[VisD], and the ANOVA results indicated significant main effects for assist condition, F(1, 21) = 96, p = .0001, MSE = 1370, and for subject, F(10, 21) = 8.6, p = .001, MSE = 1220.
As in Experiment 1, the presence of haptic assist produced a reduction in RMS[LE] in regions away from obstacles. The data presented in Figures 5 and 6 from Experiment 1 are also representative of the effects of the assist condition on path following for Experiment 2. Similarly, there was an increase in %Hit; however, in Experiment 2, the increase was not a statistically significant result. Visual demand was significantly reduced with the addition of haptic assist.
When Avg[VisD] is plotted against path length, there is no obvious, qualitative correlation with the proximity to obstacles or curves. In fact, visual demand remains relatively constant over the entire course. We conjectured that participants might try to schedule their request for a visual refresh during critical periods (e.g., a short time before an obstacle), in which case the key press frequency would be a poor measure of visual demand. However, a histogram of the number of key presses in relation to time before and after passing an obstacle revealed a flat distribution.
To further investigate the dependence of RMS[LE], %Hit, and Avg[VisD] on assist condition, we applied paired t tests. Table 2 presents the sample mean, x̄, and standard deviation, s, of RMS[LE], %Hit, and Avg[VisD], along with the mean differences, Δx̄, by condition and associated standard deviation, sΔ, and p values resulting from the paired t tests. There was a significant reduction in RMS[LE] of 41%, t(10), p = .002, and a significant reduction in Avg[VisD] of 29%, t(10), p = .0001, when the haptic assist was available as compared with the no-assist condition. With assist added, %Hit increased from 1.8% to 6.4%, and the increase in %Hit across the haptic assist condition is again significant, t(10), p = .045. Figure 7 shows a box plot of the average visual demand, Avg[VisD], by assist condition, in which each box represents n = 11 participants. As evident in the figure, the median visual demand was lower when the haptic assist was turned on.
[FIGURE 7 OMITTED]
EXPERIMENT 3: SECONDARY TASK
The third experiment was aimed at quantifying the ability of the haptic assist steering wheel to aid the participant in a path-following task while reducing mental load in spatial processing. Mental-processing load was estimated by measuring performance on a secondary task that required the participant to localize tones emitted from three speakers. Tone localization was chosen as a secondary task on the assumption that it would interfere with the path-following task or would compete for the same spatial-processing code (Wickens & Liu, 1988). Also, the ability to localize sounds, including reaction times for localization from within a vehicle, can be critical to safety and overall driving performance (Wallace & Fisher, 1998).
Participants. The same 11 individuals who participated in Experiments 1 and 2 also participated in Experiment 3. Participants were screened for normal, balanced hearing by self-assessment.
Design. Two of the six trials performed by each participant pertained to Experiment 3, one each for the two haptic assist conditions. These two trials were designed to simultaneously assess driving performance and performance on a secondary task involving auditory localization of tones. The primary task was the same as in Experiments 1 and 2: to follow the center of the road as closely as possible while avoiding obstacles. Participants were not told which task was more important or how their performance on either task would be measured.
Secondary task: Tone location. Three computer speakers were placed approximately 1 m in front of the participant's head with an 18-cm center-to-center spacing on top of the computer monitor that displayed the simulated roadway. These speakers played 0.5-s square-wave tones with a fundamental frequency of middle C. The sound level reading at the participant's head location was measured to be 81 dBA. The time between tones was randomly selected, with a uniform distribution between 2 and 6 s. Participants were asked to identify which of the three speakers played the tone and to press a corresponding key on the computer keyboard. The j key was used to indicate that the left speaker had played, the k key to indicate the center speaker, and the l key to indicate the right speaker. Participants were not told that the speed of their response would be measured.
Performance by each participant on both the primary and secondary tasks was recorded for two 5-min trials: one trial without haptic assist and the other with haptic assist. The simulator logged the time, the vehicle position and orientation, the closest point on the centerline, the number of obstacles hit, the torque displayed on the motorized steering wheel, the tones sounded by the speakers, and the key presses registered by the keyboard.
Dependent variables and data analysis. To assess driving performance, we used the same two performance metrics defined for Experiment 1: RMS[LE] (in regions away from obstacles) and %Hit. These performance metrics were analyzed as described previously for Experiment 1.
Two additional performance metrics were defined for the secondary tone location task: accuracy and reaction time. Accuracy, denoted ToneAcc, was defined as the percentage of tones that were correctly identified. The reaction time, denoted RT, was the time in milliseconds between the tone onset and the registration of the key press by the personal computer. Because of technical limitations, RT data were quantized to 8-ms levels. The precision of the timing, however, was better than 1 [micro]s, and timing jitter (standard deviation of sample time) was measured at 0.53 ms. Quantization and timing jitter can be considered noise in the measurements of response time. Data analysis for the various performance metrics included quantile plots, multiple-factor ANOVA, and t tests. The value [alpha] = .05 was used to establish statistical significance.
ToneAcc performance was first determined independently for each speaker by computing the percentage of correct responses for a particular speaker (left, center, or right). For example, the location accuracy for the left speaker is the number of times the j key was hit in response to a tone from the left speaker as a percentage of the total number of tones from the left speaker during the 5-min trial (about 37 tones). Quantile plots of the tone location accuracy data showed that these data were approximately normally distributed. ANOVA showed that the speaker (right, center, left) was not a significant main effect. Thus the data by speaker were combined to define a single ToneAcc performance metric for each participant and assist condition.
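The per-speaker accuracy computation can be sketched as follows; the key mapping mirrors the task description, but the response log here is a hypothetical stand-in for a trial's data:

```python
# Keys j/k/l indicate left/center/right, as in the experiment.
KEY_FOR = {"left": "j", "center": "k", "right": "l"}

# Hypothetical log of (speaker_that_played, key_pressed) pairs.
responses = [("left", "j"), ("left", "j"), ("center", "k"),
             ("right", "l"), ("center", "j"), ("right", "l")]

def accuracy_by_speaker(log):
    """Percentage of tones correctly identified, computed per speaker."""
    totals, correct = {}, {}
    for speaker, key in log:
        totals[speaker] = totals.get(speaker, 0) + 1
        if key == KEY_FOR[speaker]:
            correct[speaker] = correct.get(speaker, 0) + 1
    return {s: 100.0 * correct.get(s, 0) / totals[s] for s in totals}

acc = accuracy_by_speaker(responses)
# Collapsing across speakers (justified above by the nonsignificant speaker
# effect) yields the single ToneAcc metric.
overall = 100.0 * sum(1 for s, k in responses if k == KEY_FOR[s]) / len(responses)
```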
Quantile plots were also used to check for normality of the RT data. The RT data were, as expected, not normally distributed, so the use of transformations and data set truncations was investigated. The inverse (1/RT) transformation, the logarithmic (log[RT]) transformation, and truncations of the RT data to 0.75, 1.0, 1.25, 1.5, and 2.0 s were applied to the data; truncation refers to exclusion of all data beyond the range specified. The influence of these various transformations and truncations on the ANOVA-reported p values was compared with the influence of the same operations on the p values generated by synthesized data, as reported in Ratcliff (1993). Data were synthesized in Ratcliff (1993) by specifying a difference in means or tail sizes between two ex-Gaussian distributions. A comparison of p value trends produced by the operations on our data versus the operations on the synthesized data suggested that a 1.25-s cutoff produced the most statistical power and further suggested that our data showed a difference in tail sizes. The rationale behind truncation is that the longer RT data are spurious in the sense that they are strongly influenced by processes other than the condition being tested, such as distractions or the intrusion of cognitive processes not relevant to the experiment (Ulrich & Miller, 1994). Despite quantization of the timing data, their high precision and low software jitter allow the mean difference to be extracted with precision similar to the jitter ([+ or -] 0.5 ms). Note that each of the 11 participants reacted to 112 tones in each trial.
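The truncation and transformation operations can be sketched as below; the reaction times are hypothetical, and a real analysis would rerun the ANOVA on each transformed or truncated set, as in Ratcliff (1993):

```python
import math

# Hypothetical reaction times (s); the long values stand in for the spurious
# tail that truncation is intended to exclude.
rts = [0.42, 0.51, 0.48, 0.60, 0.55, 1.40, 0.47, 2.30, 0.52, 0.49]

def truncate(data, cutoff):
    """Exclude all observations beyond the specified cutoff."""
    return [rt for rt in data if rt <= cutoff]

def transform(data, kind):
    """Apply the inverse or logarithmic transformation discussed in the text."""
    if kind == "inverse":
        return [1.0 / rt for rt in data]
    if kind == "log":
        return [math.log(rt) for rt in data]
    raise ValueError(kind)

kept = truncate(rts, 1.25)  # the cutoff the p-value comparison favored
```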
As in Experiments 1 and 2, the presence of haptic assist produced a reduction in RMS[LE] in regions away from obstacles. The data presented in Figures 5 and 6 for Experiment 1 are also representative of the effects of the assist condition on path-following performance for Experiment 3.
Multiple-factor ANOVA applied to RMS[LE] revealed significant main effects attributable to assist condition, F(1, 21) = 8.78, p = .014, MSE = 0.547, and subject, F(10, 21) = 3.4, p = .033, MSE = 0.134. For %Hit, assist condition was not a significant main effect, but subject was, F(10, 21) = 5.07, p = .0085, MSE = 0.00682.
The sample mean, [bar.x], and standard deviation, s (collapsed across participants), of RMS[LE], %Hit, and ToneAcc are presented in Table 3, along with the p values of paired t tests applied to the difference in the means, [DELTA][bar.x]. There was a significant 38% reduction in RMS[LE], t(10), p = .0093. That is, path following was significantly improved in the regions between obstacles with the addition of haptic assist. The percentage of obstacles hit increased from 3.4% to 4.3%, but the difference was not statistically significant. Of the two new performance metrics that measured performance on the secondary (tone localization) task, ToneAcc did not change appreciably: the mean percentage of correct identifications rose slightly with the addition of haptic assist, but without statistical significance, t(10), p = .604.
RT was the other performance metric for the secondary task. After a multiple-factor ANOVA revealed that the effect of path curvature on the RT data was not a significant main effect, F(1, 2242) = 0.21, a multiple-factor ANOVA was performed considering assist condition and proximity to obstacles (near/between) as experimental factors. The ANOVA reported significant main effects in haptic assist condition, F(1, 2286) = 8.5, p = .003, MSE = 0.16, proximity to obstacles, F(1, 2286) = 35, p = .0001, MSE = 0.66, and subject, F(10, 2286) = 52, p = .0001, MSE = 0.97, with significant interactions in assist by subject, F(10, 2286) = 0.12, p = .0001, MSE = 0.12, and in proximity to obstacles by subject, F(10, 2286) = 3.2, p = .0005, MSE = 0.059.
A paired t test was performed for the RT data in which the participants' mean reaction time was subtracted from their respective RT data and the population means with and without haptic assist were compared. A statistically significant 18-ms decrease in RT was found with haptic assist as compared with no assist, t(2328), p = .0009, regardless of proximity to obstacles. When only RT data between obstacles were considered, the effect of adding haptic assist was a 21-ms decrease in RT, t(1612), p = .0005, and when only RT data near obstacles were considered, the effect of adding haptic assist was a 10-ms increase in RT, but without statistical significance, t(714), p = .172. Because proximity to obstacles was a main effect, we determined the effect of proximity to obstacles without regard to assist condition, and the mean reaction time was found to increase by 57 ms, t(2328), p < .0001, when participants were near obstacles.
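Subtracting each participant's mean RT before pooling removes between-subject baseline differences (slow vs. fast responders) so that the condition effect can be tested across the pooled samples. A minimal sketch of this centering step, with hypothetical data:

```python
# Hypothetical RT samples (ms) by participant and assist condition; the
# values are illustrative, not the experiment's data.
data = {
    "p1": {"off": [520, 560, 540], "on": [500, 530, 515]},
    "p2": {"off": [610, 650, 630], "on": [600, 620, 610]},
}

def center_by_subject(data):
    """Subtract each participant's overall mean RT from that participant's
    samples, then pool the centered values by assist condition."""
    pooled = {"off": [], "on": []}
    for trials in data.values():
        all_rts = trials["off"] + trials["on"]
        mean_rt = sum(all_rts) / len(all_rts)
        for cond in ("off", "on"):
            pooled[cond].extend(rt - mean_rt for rt in trials[cond])
    return pooled

pooled = center_by_subject(data)
mean_on = sum(pooled["on"]) / len(pooled["on"])
mean_off = sum(pooled["off"]) / len(pooled["off"])
```

The pooled, centered samples would then feed the paired comparison of the two condition means described above.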
The data from all three experiments may be examined to compare the effects of haptic assist with the effects of imposing the conditions of the visual occlusion method and of adding a secondary task. Figure 8 shows six box plots defined by the medians and upper and lower quartiles for the two assist conditions (on/off) and the three experiments (1 = baseline, 2 = visual demand, and 3 = secondary task). The improvement in path-following performance afforded by the haptic assist is evident in each experiment. The costs in path-following performance incurred by the conditions of the visual occlusion method and by the addition of a secondary task are also evident when comparing across experiments. Although the availability of haptic assist does not restore path-following performance to baseline levels under visual occlusion, it does restore performance under the secondary task to the baseline level achieved with haptic assist. These data show that haptic assist can improve path-following performance to a degree that does not diminish even when a secondary task is added.
[FIGURE 8 OMITTED]
To achieve path following, the stiffness of the shared controller must be tuned to balance two conflicting goals. The automation must resist deviations from its desired steering angle with enough authority to reject disturbances such as wind gusts and road crown. The automation effectively resists undesired movement of the steering wheel by constructing a virtual spring to hold the wheel. A greater spring stiffness achieves the tracking goals better. However, in sharing control, the driver must overcome the stiffness presented by the virtual spring if he or she wishes to steer with an angle other than that determined by the automation. If the virtual spring is too stiff, the driver may find it difficult to overpower the controller's actions, but if the spring is very weak, disturbances would cause excessive error in lane keeping with the controller acting alone.
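The virtual spring described above can be written as a simple proportional torque law. The function name and stiffness values here are illustrative assumptions, not the authors' controller:

```python
def assist_torque(theta, theta_desired, stiffness):
    """Torque (N*m) the automation applies to the steering wheel: a virtual
    spring pulling the wheel toward the automation's desired angle."""
    return -stiffness * (theta - theta_desired)

# The same 0.2-rad deviation from the desired angle meets more resistance
# from a stiffer spring: better disturbance rejection, but harder for the
# driver to override.
soft_torque = assist_torque(0.2, 0.0, stiffness=1.0)
stiff_torque = assist_torque(0.2, 0.0, stiffness=4.0)
```

Tuning the stiffness is exactly the trade-off described in the text: a large value rejects wind gusts and road crown but fights the driver; a small value yields gracefully but tracks poorly on its own.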
As discussed in the Results sections for the three experiments, the improvement in path-following performance was usually accompanied by reduced obstacle avoidance performance (increased %Hit). Figure 9 shows the number of obstacles hit for the two assist conditions (on/off) for each of the three experiments. A marker for each participant indicates the number of obstacles hit in each of the six trials, arranged by assist condition and experiment. The dependence of the number of obstacles hit on the participant is evident. The reduction in obstacle avoidance performance with the addition of assist is clear, especially in Experiments 1 and 2.
Note, however, that a reduction in performance is a natural consequence of an assist system that helps maintain path following but that is not aware of obstacles that lie on the path. Note also that the decrement in performance incurred by the haptic assist is about the same magnitude as the decrement incurred by the addition of a secondary task. Also, in the presence of the secondary task, the addition of haptic assist does not incur a statistically significant further performance cost in obstacle avoidance. These results, however, are somewhat inconclusive. They merit further exploration in future studies.
[FIGURE 9 OMITTED]
Certainly the reduction in obstacle avoidance performance incurred by haptic assist motivates the development of sensors in the automobile that can serve in collision warning or collision avoidance systems. The assist could be turned off and a warning (audio, visual, or haptic) sounded when an obstacle is detected by the automatic controller. A more proactive system would assess the traffic situation and help the driver make an evasive maneuver by planning a path around the obstacle.
Accuracy and reaction time were the metrics of participants' performance in the tone location experiment. The hypothesis of Experiment 3 was that performance of the secondary task would improve if the driver was provided haptic assist. Indeed, there was an improvement (reduction) in reaction time of 18 ms, t(2328), p = .0009, but the improvement in localization accuracy was small and not statistically significant. The reduction in reaction time is a desirable result for two reasons. Taken by itself, a faster reaction means that the driver can react sooner to dangerous situations--for example, the honk of a horn from another vehicle. The faster reaction time to the auditory probe also implies that with shared control, the driver experiences a reduced cognitive load in the driving task (Raney, 1993). The validity of using reaction time as an indicator of cognitive load was affirmed by the 57-ms increase in reaction time when participants were near obstacles (actively engaged in obstacle avoidance) versus when they were far from obstacles, t(2328), p < .0001. So, in addition to the improved performance in the primary control task, the haptic assist evidently increases the availability of cognitive processing capacity for performance of a secondary task.
The presence of the secondary task decreased the path-following performance by 18%, when compared with the baseline experiment with no assist, t(10), p = .033. This statistic is further evidence that the spatial reasoning task selected for the secondary task was competing for some of the same cognitive processing capacity required for driving (the primary task). When haptic assist was enabled and path following in the baseline condition was compared with path following in the presence of the secondary task, there was only a 6% degradation in performance, which was not statistically significant, t(10), p = .64. This result suggests that the haptic assist can allow a driver to perform a secondary task with negligible degradation in tracking performance.
We have investigated the use of haptic interface to realize and test the idea of a human driver sharing control of vehicle heading with an automatic controller. The human and controller share the same control interface (e.g., steering wheel) and are mechanically interconnected such that they may exchange information and share control authority with each other. Haptic display becomes the means to place the automatic controller in the haptic perceptual space of the human. The human is free not only to observe the actions of the controller but also to override them at any time he or she sees fit, based on his or her perception of additional factors in the task environment. Shared control extends the notion of a virtual fixture to a virtual agent or copilot. Like the virtual fixture, the virtual agent is apparent to the human by feel and can be used to negotiate a task more efficiently.
Although direct comparisons between haptic and other types of assist remain unexplored in our present experiments, our findings hold some important implications for the design of automation systems. We demonstrated a significant reduction in visual demand and a freeing of attention by inserting automation into the control loop through a motor on the manual interface. Both of these positive effects were achieved while significantly improving performance on the primary driving task, which was shared by the human-machine team. Through the use of haptic assist, the human remains in the loop, and so we believe that shared control through a haptic interface incurs minimal loss of obstacle avoidance performance to surprise events in the primary task.
We hypothesize that the mechanical coupling between hand and manual interface and the colocated sensing and actuation functions of the hand keep the operator in the loop. Thus we expected to measure maintained obstacle avoidance, but this expectation was not fully borne out in our data. There was a statistically significant reduction in obstacle avoidance with the addition of haptic assist. However, this should be considered in light of the fact that our measure of obstacle avoidance was strongly confounded with the haptic assist condition. Specifically, the haptic assist favored the center of the path, where the obstacles were located. This condition was most apparent in the baseline trial, without the conditions of visual occlusion or the presence of a secondary task. Notably, under the visual occlusion and secondary task conditions, the diminished obstacle avoidance performance incurred by the automation disappeared.
The methods and experiments described in this paper both draw from and contribute to the fields of human factors and haptic interface. In the field of human factors, substantial knowledge already exists about the effects of adding automation on the performance of human-machine teams. However, the use of haptic feedback and its relative merit as compared with visual or auditory feedback has not been evaluated for control sharing. In particular, haptic feedback can be used to provide information regarding continuous control action taken by an automation system that does not further load the visual or auditory systems. In this paper we have demonstrated improved performance with the addition of automation while actually reducing visual demand.
In the haptic interface field, the ability of a manual interface to simultaneously display information and function as a control input device is well understood and used to advantage in numerous applications. However, only the primary task performance benefits of adding automation with haptic feedback have received attention in the field of haptic interface. The auxiliary performance benefits of haptic assist, either in the reduction of a particular mental workload (e.g., spatial processing) or the reduction in demand on other perceptual systems, are often sought in applications, and they are sometimes claimed but seldom quantified. Our work combines the concepts of colocated action and sensing inherent to haptic interfaces and examines not only the human-machine performance in the primary task but also human performance in a secondary task.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Christoffersen, K., & Woods, D. D. (2002). How to make automated systems team players. In E. Salas (Ed.), Advances in human performance and cognitive engineering research: Automation (Vol. 2, pp. 1-12). New York: Elsevier Science.
Dennerlein, J. T., & Yang, M. C. (2001). Haptic force-feedback devices for the office computer: Performance and musculoskeletal loading issues. Human Factors, 43, 278-286.
Driver, J., & Spence, C. (2004). Crossmodal spatial attention: Evidence from human performance. In C. Spence & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 179-220). Oxford, UK: Oxford University Press.
Gillespie, R. B. (2004). Haptic interface to virtual environments. In T. Kurfess (Ed.), CRC handbook on robotics and automation (pp. 25.1-25.23). Boca Raton, FL: CRC Press.
Gillespie, R. B., Hoffman, M., & Freudenberg, J. (2005). Haptic interface for hands-on instruction in system dynamics and embedded control. In 11th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, IEEE Virtual Reality (pp. 125-131). Los Alamitos, CA: IEEE Computer Society.
Gillespie, T. D. (1992). Fundamentals of vehicle dynamics. Warrendale, PA: Society of Automotive Engineers.
Hasser, C. J., Goldenberg, A. S., Martin, K. M., & Rosenberg, L. B. (1998). User performance in a GUI pointing task with a low-cost force-feedback computer mouse. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition (Vol. 64, pp. 151-155). New York: American Society of Mechanical Engineers, Dynamic Systems and Control Division.
Hayward, V., Astley, O. R., Cruz-Hernandez, M., Grant, D., & Robles-De-La-Torre, G. (2004). Haptic interfaces and devices. Sensor Review, 24, 16-21.
Hess, R. A., & Modjtahedzadeh, A. (1990). A control theoretic model of driver steering behavior. IEEE Control Systems Magazine, 10(5), 5-8.
Li, M., & Okamura, A. M. (2005). Recognition of operator motions for real-time assistance using virtual fixtures. In 11th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, IEEE Virtual Reality (pp. 125-131). Los Alamitos, CA: IEEE Computer Society.
Meyer, J. S., Rubinstein, D. E., & Evans, J. E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27, 763-797.
Mosier, K. L. (2002). Automation and cognition: Maintaining coherence in the electronic cockpit. In E. Salas (Ed.), Advances in human performance and cognitive engineering research: Automation (Vol. 2, pp. 93-122). New York: Elsevier Science.
Park, S. S., Howe, R. D., & Torchiana, D. F. (2001). Virtual fixtures for robot-assisted minimally invasive cardiac surgery. In Fourth International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 1419-1420). Berlin: Springer-Verlag.
Patoglu, V., & Gillespie, R. (in press). Feedback stabilized minimum distance maintenance for convex parametric surfaces. IEEE Transactions on Robotics.
Payandeh, S., & Stanisic, Z. (2002). On application of virtual fixtures as an aid for telemanipulation and training. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 18-25). Los Alamitos, CA: IEEE Computer Society.
Raney, G. E. (1993). Monitoring changes in cognitive load during reading: An event-related brain potential and reaction time analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 51-69.
Ratcliff, R. (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114, 510-532.
Rosenberg, L. B. (1993). Use of virtual fixtures to enhance telemanipulation with time delay. In Proceedings of the ASME Winter Annual Meeting (Vol. 49, pp. 29-56). New York: American Society of Mechanical Engineers, Dynamic Systems and Control Division.
Sarter, N. B. (2002). Multimodal information presentation in support of human-automation communication and coordination. In E. Salas (Ed.), Advances in human performance and cognitive engineering research: Automation (Vol. 2, pp. 13-36). New York: Elsevier Science.
Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 1926-1943). New York: Wiley.
Sayers, C., & Paul, R. (1994). An operator interface for teleprogramming employing synthetic fixtures. Presence, 3, 309-320.
Schumann, J., Godthelp, J., & Hoekstra, W. (1992). An exploratory simulator study on the use of active control devices in car driving (Tech. Rep. No. IZF 1992 B-2). Soesterberg, Netherlands: TNO Institute for Perception.
Steele, M., & Gillespie, R. B. (2001). Shared control between human and machine: Using a haptic interface to aid in land vehicle guidance. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 1671-1675). Santa Monica, CA: Human Factors and Ergonomics Society.
Suzuki, K., & Jansson, H. (2003). An analysis of driver's steering behaviour during auditory or haptic warnings for the designing of lane departure warning system. Society of Automotive Engineers of Japan Review, 24, 65-70.
Switkes, J. P., Rossetter, E. J., Coe, J. A., & Gerdes, J. C. (2004). Handwheel force feedback for lanekeeping assistance: Combined dynamics and stability. In Proceedings of AVEC '04, 7th International Symposium on Advanced Vehicle Control (pp. 47-52). Tokyo: Society of Automotive Engineers of Japan.
Tsimhoni, O., & Green, P. A. (2001). Visual demand of driving and the execution of display-intensive in-vehicle tasks. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 1586-1590). Santa Monica, CA: Human Factors and Ergonomics Society.
Ulrich, R., & Miller, J. (1994). Effects of truncation on reaction time analysis. Journal of Experimental Psychology: General, 123, 34-80.
Wallace, J. S., & Fisher, D. L. (1998). Sound localization: Information theory analysis. Human Factors, 40, 50-68.
Wickens, C. D., & Liu, Y. (1988). Codes and modalities in multiple resources: A success and a qualification. Human Factors, 30, 599-616.
Paul G. Griffiths is a Ph.D. student of mechanical engineering at the University of Michigan. He received his M.S. in mechanical engineering from the University of California, Berkeley, in 2002.
R. Brent Gillespie is an assistant professor of mechanical engineering at the University of Michigan. He received his Ph.D. in mechanical engineering from Stanford University in 1996.
Date received: February 28, 2002
Date accepted: April 20, 2005
Address correspondence to Paul G. Griffiths, Department of Mechanical Engineering, University of Michigan, 2350 Hayward St., Ann Arbor, MI 48109; email@example.com.
TABLE 1: Mean and Standard Deviation of RMS[LE] and %Hit for Baseline Trials (Experiment 1) With and Without Assist

                        No Assist         With Assist            Paired t Test
Performance Metric   [bar.x]     s     [bar.x]     s     [DELTA][bar.x]  s[DELTA]     p      DOF
RMS[LE] (m)           0.680   0.391     0.473   0.234       -0.207        0.264    .0134     10
%Hit                  0.61    1.26      3.03    3.15        +2.42         2.62     .00594    10

TABLE 2: Mean and Standard Deviation of RMS[LE], %Hit, and Avg[VisD] for Experiment 2

                        No Assist         With Assist            Paired t Test
Performance Metric   [bar.x]     s     [bar.x]     s     [DELTA][bar.x]  s[DELTA]     p      DOF
RMS[LE] (m)           1.412   0.727     0.828   0.312       -0.584        0.513    .0018     10
%Hit                  1.82    3.11      6.36    8.75        +4.54         8.07     .0456     10
Avg[VisD] (-)         0.570   0.0966    0.404   0.0765      -0.166        0.0563   <.0001    10

TABLE 3: Mean and Standard Deviation of RMS[LE], %Hit, ToneAcc, and RT for Experiment 3

                        No Assist         With Assist            Paired t Test
Performance Metric   [bar.x]     s     [bar.x]     s     [DELTA][bar.x]  s[DELTA]     p      DOF
RMS[LE] (m)           0.817   0.520     0.503   0.221       -0.314        0.264    .0093     10
%Hit                  3.64    6.90      4.55    5.83        +0.91         5.18     .287      10
ToneAcc (%)           94.9    4.41      95.30   3.92        +0.35         4.29     .604      10
RT (ms)               564     147       545     132         -18.2         143      .0009     2318
Author: Griffiths, Paul G.; Gillespie, R. Brent
Date: September 22, 2005