
Hands-Free Maneuvers of Robotic Vehicles via Human Intentions Understanding Using Wearable Sensing.

1. Introduction

With the improvement of computational resources and manufacturing capabilities, intelligent robotic vehicle technology has developed rapidly and steadily in recent decades [1, 2]. These robotic vehicles have numerous advantages over traditional vehicles for solving traffic problems such as traffic jams and accidents caused by driver error or negligence [3]. Moreover, intelligent robotic vehicles are increasingly likely to eliminate gas/brake pedals, gearshifts, and steering wheels. Google has built a two-seater prototype intelligent vehicle without a steering wheel or pedals. Even with no vehicle controls available to a human driver, this prototype is able to safely maneuver around obstacles via its built-in sensors and software system [4]. In addition, aiming at ride-hailing and ride-sharing fleets, Ford plans to build a fully autonomous robotic vehicle without a steering wheel or pedals by 2021 [5].

However, during intelligent robotic vehicle driving, the human driver or passenger often has specific driving requirements, such as accelerating on a straight road, stopping for an emergency, or turning in a temporary direction. Consequently, how to maneuver the driving modes according to these special human intentions is a necessary issue to consider in vehicle design.

During the human-robotic vehicle interaction process, it is important for the vehicle to understand the human's intentions or behaviors in order to achieve different vehicle maneuvers. Several related works have been conducted in recent years.

Using human motions, the car's speed, and the distance between the car and the intersection, Ohashi et al. proposed a case-based learning model and constructed an experimental system for understanding the human driver's intentions [6]. In [7], the research team recognized a set of continuous driver intentions by observing easily accessible vehicle and environment signals such as pedals or global vehicle positions. Based on a playback system and machine learning, Oliver and Pentland presented a dynamical graphical framework to model and recognize driver behaviors at a tactical level, focusing on how contextual information impacted the driver's performance [8]. Researchers in [9] developed a driver behavior recognition approach by characterizing and detecting driving maneuvers and then modeled and recognized the driver's behaviors in different situations. Using accessible vehicle onboard sensors, Berndt and Dietmayer investigated a method to infer the driver's intention to leave the lane or perform other maneuvers, with the aim of helping drivers predict trajectories or assess risks [10].

However, these recognition and understanding methods for human intentions are too complex to implement in practice. Additionally, we usually cannot obtain much recorded information from the vehicle's embedded system, since future robotic vehicles will have fewer traditional operation devices.

Using gestures to represent human intentions for robotic vehicle maneuvers is a practical and interesting research direction that has attracted a lot of attention. Operating robotic vehicles via human gestures helps free the human from current operation habits and reduces the negligence and errors that may cause vehicle collisions [11]. Ionescu et al. developed efficient human-vehicle interaction through a smart, real-time depth camera operating in the near-infrared spectrum; the acquired depth information was processed for gesture detection and recognition to interpret the driving intentions and control the vehicle [12]. Researchers in [13] established gesture-based communication between the human and an intelligent wheelchair through a webcam and sensors. Using an array of cameras that output instantaneous state information, Kramer and Underkoffler acquired images of human gestures and then designed a controller that automatically extracted and detected the gestures from the data for vehicle maneuvering [14]. In [15], to enable the human and the vehicle to communicate and work together, Fong et al. used sensor fusion and computer vision to recognize the remote environment and improve situation awareness, and then created easy-to-use remote driving tools for the vehicles. Researchers in [16] employed a Leap Motion controller to detect gesture data and extracted seven independent instructions for autonomous vehicle maneuvering.

Although there are several vision-based approaches, vision-based driving intention recognition and understanding depends highly on the working surroundings. Its performance is easily disturbed by complex and dynamic backgrounds such as crowded urban settings. Furthermore, a vision system usually requires the human to stay within certain areas so that the motion information can be captured, which significantly constrains the human's activities and working range.

Therefore, with the extensive development and deployment of robotic vehicles, researchers expect humans and vehicles to collaborate seamlessly in different driving situations. Developing a simply configured, naturally operable, and highly robust human intention understanding approach for human-robotic vehicle collaboration is thus a pressing need.

To this end, different from existing approaches that use vehicle built-in devices or vision systems, we propose a wearable-sensing-based maneuver intention understanding approach that uses a wearable sensory system [17-19] to assist the human in maneuvering the robotic vehicle without physical contact. This interaction method does not require the human's hands to be physically involved in the driving task and can be applied in complicated human-robotic vehicle interactions.

The major contributions of this work are as follows: (1) We propose a natural wearable sensing solution that assists human drivers in maneuvering robotic vehicles without traditional operation devices in specific driving situations and is more robust than existing approaches. (2) We develop a driving intention understanding approach using fuzzy control and human motion information, including forearm postures and muscle activities, captured by the wearable sensory system.

2. System Framework

The system framework, designed for the human to operate the robotic vehicle with the wearable-sensing-based maneuver intention understanding approach, is shown in Figure 1. The system contains three layers: the data layer, the decision layer, and the execution layer.

When the human intends to change the robotic vehicle's driving mode, his intentions, expressed by forearm postures and muscle activities, are detected and calculated via a wearable sensory system, as presented in Figure 2. After collection, the expression information is preprocessed and fused in the data layer to output useful information. Then the processed information, including the hand's rotation angles and arm muscle electromyography (EMG) signals, is sent to the decision layer in real time by means of wireless communication devices.

In the decision layer, the acquired information is further processed to generate intention instructions based on the intention understanding model. Simultaneously, the instruction outputs trigger the vehicle motion planning algorithms by calling the corresponding driving mode function. To ensure that the vehicle executes accurately, both the intention instructions and the algorithm outputs are used to make motion planning decisions.

In the execution layer, the vehicle driving commands are generated from the decision layer outputs so that the vehicle can plan its motions in the real-world workspace. Meanwhile, the vehicle execution states are fed back to the decision layer to inform the motion planning algorithms whether the driving intention has been accepted. If the driving intention fails to be accepted, the motion planning algorithms output the decision again.
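
To make the three-layer flow concrete, the following minimal Python skeleton sketches a single pass through the framework. It is only an illustration under simulated sensor readings: every function body is a stub standing in for the real preprocessing, fuzzy intention model, motion planner, and vehicle interface, and none of the names reflect the authors' implementation.

def data_layer(raw_imu, raw_emg):
    # Fuse raw IMU/EMG readings into (roll, yaw increment, pitch, mean EMG activity).
    return raw_imu["roll"], raw_imu["d_yaw"], raw_imu["pitch"], sum(raw_emg) / len(raw_emg)

def decision_layer(fused_inputs, vehicle_state):
    # Stub: map the fused inputs to an intention label and a planning decision.
    roll, d_yaw, pitch, sigma_bar = fused_inputs
    intention = "AC" if sigma_bar > 40 else "ST"  # placeholder rule, not the fuzzy model
    return {"intention": intention, "state": vehicle_state}

def execution_layer(decision):
    # Stub: turn the decision into a driving command and report whether it was accepted.
    return {"command": decision["intention"], "accepted": True}

# One pass through the loop with simulated wearable-sensor readings.
fused = data_layer({"roll": 1.0, "d_yaw": 0.5, "pitch": -2.0}, [50, 42, 47, 55, 39, 44, 52, 48])
decision = decision_layer(fused, vehicle_state={"speed": 0.3})
print(execution_layer(decision))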

3. Maneuver Intention Representation and Data Acquisition

3.1. Maneuver Intention Representation. Human maneuver intentions [20], including braking, turning, and acceleration, are quite common in daily driving. These intentions can be reflected and represented in many ways, such as body movements and natural language. Because there are no traditional manual operation devices in future intelligent robotic vehicles, the normal driving manners are not available in these cases. In this research, to make the interaction practical and natural, we use the human forearm postures and muscle activities to represent these maneuver intentions. As shown in Figure 2, the intention information contains forearm rotation angles and EMG signals. Therefore, the maneuver intentions can be described as

$I_d = \{I_{dr}, I_{de}\},$ (1)

where $I_{dr}$ denotes the maneuver intention interpreted by forearm rotations and $I_{de}$ denotes the maneuver intention interpreted by EMG signals.

3.2. Wearable Sensory System. We employ a wearable sensory system for human-robotic vehicle interaction to acquire the human forearm posture and muscle activity information during the maneuver process. The sensory system we choose is the Myo armband [21], which is worn on the driver's forearm and integrates an inertial measurement unit (IMU) [22-24] and eight EMG sensors [25-27]. The IMU chip contains an onboard digital motion processor (DMP) and an MPU-9150 module consisting of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. The detected information from the IMU and EMG sensors is preprocessed by a microcontroller unit (MCU) with a 32-bit ARM Cortex-M4 core running at 72 MHz. All the raw and calculated data are made available through a first-in-first-out (FIFO) buffer that is read by the MCU over the communication bus. The Bluetooth Low Energy (BLE) module on the mainboard handles external communication between the Myo and the client controller [28].

The working principle of information acquisition by this wearable sensory system is presented in Figure 2. The human forearm postures are tracked and recorded by the IMU. These data include acceleration and angular velocity information, which can be fused to describe the forearm motions and rotation angles. When the human performs maneuver intentions, the electrical activities of the skeletal muscles in the forearm are measured by the EMG sensors. This EMG information can be used to estimate the human's finger motions such as wave-in, finger-spread, and fist.

3.3. Data Acquisition and Processing. When the human performs his maneuver intentions, as shown in Figure 2, his forearm postures can be quantified by the IMU outputs, which contain the 3-axis acceleration and 3-axis angular velocity information of the forearm motions. Furthermore, these data can be fused into the quaternion

$q = [q_0, q_1, q_2, q_3]^{T},$ (2)

where $|q|^2 = q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$. The sampling frequency of the IMU is 50 Hz in our work.

To calculate the forearm postures, Euler angles [29] are utilized to parameterize the forearm spatial rotations in the 3D workspace. The Roll-Pitch-Yaw Euler angles can be represented by

$R(t) = [\phi(t), \theta(t), \psi(t)]^{T},$ (3)

where $t$ denotes the IMU sampling time, $\phi$ is the Roll rotation about the x-axis, $\theta$ is the Yaw rotation about the y-axis, and $\psi$ is the Pitch rotation about the z-axis. As presented in Figure 3, the Euler angles visually describe the forearm rotation movements in the maneuvering process.

Moreover, the Euler angles can be calculated from the quaternion as

$\phi = \arctan \frac{2(q_0 q_1 + q_2 q_3)}{1 - 2(q_1^2 + q_2^2)}, \quad \theta = \arcsin\big(2(q_0 q_2 - q_3 q_1)\big), \quad \psi = \arctan \frac{2(q_0 q_3 + q_1 q_2)}{1 - 2(q_2^2 + q_3^2)}.$ (4)
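
For reference, a short Python sketch of this standard quaternion-to-Euler conversion is given below; the function name quaternion_to_euler is ours, and the code assumes a unit quaternion ordered as $(q_0, q_1, q_2, q_3)$ with the axis assignment used in (3) and (4).

import math

def quaternion_to_euler(q0, q1, q2, q3):
    # Convert a unit quaternion to the (phi, theta, psi) angles of Eq. (4), in degrees.
    phi = math.atan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1 * q1 + q2 * q2))  # about x
    theta = math.asin(max(-1.0, min(1.0, 2.0 * (q0 * q2 - q3 * q1))))             # about y (clamped)
    psi = math.atan2(2.0 * (q0 * q3 + q1 * q2), 1.0 - 2.0 * (q2 * q2 + q3 * q3))  # about z
    return tuple(math.degrees(a) for a in (phi, theta, psi))

# The identity quaternion corresponds to zero rotation about all three axes.
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)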

Therefore, the driver's intention $I_{dr}$ interpreted by the forearm rotation in (1) can be represented as

$I_{dr} = \{I_{dr}(x_r) \mid x_r \in R(t)\}.$ (5)

Simultaneously, the human finger motions can be calculated based on the EMG signals which are collected from the human forearm's muscle activities. The EMG data acquired by the wearable sensory system can be described as

$E(t) = [e_1(t), e_2(t), \ldots, e_n(t)]^{T},$ (6)

where $t$ is the sampling time of the EMG sensors, $e_i(t)$ is the output of the $i$th EMG channel, and $n$ is the number of EMG channels on the wearable sensory system, which is 8 in our work. We sample the EMG signals at a frequency of 200 Hz.

The raw EMG signal is a set of discrete points with positive and negative components. Along with the finger activities, the electric potentials generated by the muscle cells have a distinct effect on the dispersion of the EMG signal. Therefore, to use the EMG data accurately, we adopt the standard deviation (SD) $\sigma$ of the EMG data to extract the characteristics of the finger activities, since the standard deviation reflects the muscle activities noticeably. In the human-robotic vehicle interaction, the standard deviation is calculated by

$\sigma_i(t) = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \big(e_i(k) - \bar{e}_i\big)^2}, \qquad \bar{e}_i = \frac{1}{K} \sum_{k=1}^{K} e_i(k),$ (7)

where $e_i(k)$, $k = 1, 2, \ldots, K$, is a set of EMG signals and $K$ is the window size that determines the number of EMG samples employed to calculate the standard deviation. We select $K = 150$ in this study.
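
As an illustration of this windowed feature extraction, the short Python sketch below computes the standard deviation of one EMG channel over a $K$-sample window; the simulated samples and the use of a population standard deviation are our assumptions.

import random
import statistics

K = 150  # window size used in Eq. (7)

# Simulated raw samples for one EMG channel; the real values come from the armband at 200 Hz.
emg_channel = [random.gauss(0.0, 25.0) for _ in range(K)]

# Windowed standard deviation of the channel, i.e., sigma_i(t) in Eq. (7).
sigma_i = statistics.pstdev(emg_channel)
print(f"windowed EMG standard deviation: {sigma_i:.2f}")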

Moreover, the maneuver intention $I_{de}$ interpreted by the finger motions in (1) can be represented by

$I_{de} = \{I_{de}(x_e) \mid x_e \in \sigma(t)\}.$ (8)

According to (5) and (8), the maneuver intention can be represented as

$I_d = \{I_{dr}(x_r), I_{de}(x_e) \mid x_r \in R(t),\ x_e \in \sigma(t)\}.$ (9)

From the above, it can be concluded that, during the robotic vehicle maneuver process, $R(t)$ and $\sigma(t)$ are dynamically computed and updated via the human forearm postures and muscle activities. Therefore, the maneuver intention $I_d$ is interpreted and updated in real time.

4. Maneuver Intention Understanding Using Fuzzy Control

In this section, based on the wearable sensing information and the maneuver intention representation, we build an intention understanding model using fuzzy control.

4.1. Maneuver Intention Fuzzification. Before developing the fuzzy controller [30], we should define the fuzzy sets and domains of discourse using the wearable sensing information, which contains the forearm postures and muscle activities. In this work, we find it difficult to distinguish various intentions by directly employing the raw standard deviations of all EMG channels, whereas their average presents clear differences. Hence, we utilize $\bar{\sigma} = \frac{1}{8}\sum_{i=1}^{8}\sigma_i$ to denote the muscle activities of the driving intention. Additionally, in order to distinguish steering modes in the robotic vehicle maneuver, we use $\Delta\theta = \theta(t+1) - \theta(t)$ as an input to the fuzzy controller. Therefore, combining the forearm rotation angles and the EMG signals, we deploy a fuzzy controller with four inputs ($\phi$, $\Delta\theta$, $\psi$, $\bar{\sigma}$) and one output ($I_d$).
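
A minimal Python sketch of how the four controller inputs can be assembled from the wearable-sensing features is given below; the function name and the placeholder values are ours, not part of the authors' implementation.

def fuzzy_inputs(phi, yaw_prev, yaw_now, psi, channel_stds):
    # Assemble the four fuzzy-controller inputs (phi, delta_theta, psi, sigma_bar).
    # channel_stds holds the eight per-channel EMG standard deviations from Eq. (7).
    sigma_bar = sum(channel_stds) / len(channel_stds)  # mean of the 8 channel SDs
    delta_theta = yaw_now - yaw_prev                   # yaw increment between samples
    return phi, delta_theta, psi, sigma_bar

# Example with placeholder values (angles in degrees, SDs in raw EMG units).
print(fuzzy_inputs(2.0, 10.0, 14.0, -1.5, [30, 28, 35, 40, 25, 33, 31, 29]))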

Moreover, the fuzzy sets of the inputs and output are defined as follows:

(i) The Roll angle $\phi$: {NB, NM, NS, ZO, PS, PM, PB}

(ii) The Yaw angle $\Delta\theta$: {N, ZO, P}

(iii) The Pitch angle $\psi$: {NB, NM, NS, ZO, PS, PM, PB}

(iv) The EMG signal $\bar{\sigma}$: {ZO, PS, PM, PB}

(v) Driving intentions $I_d$: {AC, DC, ST, TL, TR, BD},

where "AC," "DC," "ST," "TL," "TR," and "BD" denote the maneuver intentions of "Acceleration," "Deceleration," "Stop," "TurnLeft," "TurnRight," and "BackwardDriving," respectively. Additionally, "NB," "NM," "NS," "ZO," "PS," "PM," and "PB" denote negative big, negative middle, negative small, zero, positive big, positive middle, positive small, and separately, which are employed to represent degree of membership in the maneuver intention understanding.

4.2. Membership Functions. According to the fused wearable sensing information and driving operations in the vehicle maneuver, we define the domains of discourse of the inputs and output as follows:

(i) The Roll angle $\phi$: $[-90^\circ, 90^\circ]$

(ii) The Yaw angle $\Delta\theta$: $[-18^\circ, 18^\circ]$

(iii) The Pitch angle $\psi$: $[-90^\circ, 90^\circ]$

(iv) The EMG signal $\bar{\sigma}$: $[0, 90]$

(v) Driving intentions $I_d$: $\{-2, -1, 0, 1, 2, 3\}$.

In this study, we ask 5 subjects, who have different hand sizes and muscular tension when maneuvering the robotic vehicle, to record the forearm rotation angles and EMG signals. Each subject performs each maneuver intention 20 times. The triangular membership function [31] is employed for each input and output of the fuzzy controller. The membership functions we design are shown in Figures 4-8. It can be seen that, during the maneuver process, the degree of membership varies correspondingly over the domain of discourse of each input or output.
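
Since the controller relies on triangular memberships, a minimal Python sketch of such a membership function follows; the breakpoints in the example are hypothetical and do not reproduce the exact shapes in Figures 4-8.

def tri_membership(x, a, b, c):
    # Triangular membership: rises from 0 at a to 1 at b, then falls back to 0 at c.
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: a hypothetical "ZO" set for the Roll angle centered at 0 degrees.
for angle in (-20.0, -5.0, 0.0, 10.0):
    print(angle, tri_membership(angle, -15.0, 0.0, 15.0))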

4.3. Maneuver Intention Understanding. Since the fuzzy controller in this work is configured with four inputs and one output, the fuzzy rules cannot be presented in a traditional rule table. However, we can use "IF-THEN" statements [32] to describe the valid fuzzy rules utilized in the robotic vehicle maneuver. As shown in Table 1, the "IF" parts are the antecedents and the "THEN" parts are the consequents.

For the maneuver intention understanding, we employ "AND" as the fuzzy operator within each rule and "OR" as the fuzzy operator among different rules. As presented in Table 1, for the maneuver intention "Acceleration," we use the aggregated degree of membership as its fuzzy decision, which can be calculated by

$\mu(\mathrm{AC}) = \min\Big\{ \mu_{\mathrm{ZO}}(\phi),\; \max\big\{\mu_{\mathrm{N}}(\Delta\theta), \mu_{\mathrm{ZO}}(\Delta\theta), \mu_{\mathrm{P}}(\Delta\theta)\big\},\; \mu_{\mathrm{ZO}}(\psi),\; \max\big\{\mu_{\mathrm{PM}}(\bar{\sigma}), \mu_{\mathrm{PB}}(\bar{\sigma})\big\} \Big\}.$ (10)

Similarly, the fuzzy decisions of the other maneuver intentions can be calculated based on Table 1. Afterwards, we employ the Middle of Maximum (MOM) [33] as the defuzzification approach to calculate the corresponding maneuver intention. Furthermore, the maneuver intention understanding result can be expressed as

$I_d = \mathrm{round}\big[\arg\,\mathrm{mom}\,\mu(I_x)\big],$ (11)

where $I_x \in \{\mathrm{AC}, \mathrm{DC}, \mathrm{ST}, \mathrm{TL}, \mathrm{TR}, \mathrm{BD}\}$, $\mu(I_x)$ denotes the output fuzzy decision, and "round" means that the output is rounded to the nearest integer.

Therefore, based on (11), the maneuver intention can be understood when the human operates the robotic vehicle under specific requirements.
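
To illustrate how the fuzzy operators act on a rule from Table 1, the sketch below evaluates the firing strength of the "Acceleration" rule with min as the fuzzy AND and max as the fuzzy OR, as in (10); the membership degrees passed in are placeholders, and in the full controller the analogous strengths of all six rules would then be defuzzified with MOM and rounded as in (11).

def ac_firing_strength(mu_phi_zo, mu_dtheta, mu_psi_zo, mu_sigma_pm, mu_sigma_pb):
    # AC rule: phi is ZO AND (delta_theta is N, ZO, or P) AND psi is ZO
    # AND (sigma_bar is PM OR PB), cf. Eq. (10).
    mu_dtheta_any = max(mu_dtheta)            # OR over the N/ZO/P memberships
    mu_sigma = max(mu_sigma_pm, mu_sigma_pb)  # OR over the two EMG sets
    return min(mu_phi_zo, mu_dtheta_any, mu_psi_zo, mu_sigma)

# Example with placeholder membership degrees.
print(ac_firing_strength(0.9, (0.2, 0.7, 0.1), 0.8, 0.6, 0.3))  # 0.6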

5. Experimental Results and Analysis

5.1. Experimental Platform. The approach developed in this work is implemented on a lab research autonomous robotic vehicle of the 1/10-Scale Vehicle Research Platform (1/10-SVRP). The 1/10-SVRP consists of five 1/10-scale autonomous robotic vehicles, a human manual driving interface, and a 1/10-scale driving environment including an ultra-wideband-based indoor GPS system, traffic lights, road signs, and various lane setups. As shown in Figure 9, the wearable sensory system described above is worn on the human's forearm during the human-robotic vehicle interaction. The sensory information detected by the IMU and EMG sensors is sent to the control system in real time via a pair of Bluetooth devices. Once the controller generates new commands, these signals are sent to the vehicle motor drivers to execute the planned motions.

In the robotic vehicle maneuver process, the velocity we employed for the robotic vehicle is expressed by

$V(t) = k_{\sigma}\,\bar{\sigma}(t),$ (12)

where $k_{\sigma}$ is the EMG control factor and $\bar{\sigma}(t)$ is determined by the clench and release of the human's fist.

The steering angles in the robotic vehicle turning operation are calculated with the following function:

$\theta_T(t) = k_{\theta}\,\theta(t) + l_{\theta},$ (13)

where $k_{\theta}$ is the steering angle coefficient and $l_{\theta}$ is an offset used to adjust the initial angle.
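
A minimal sketch of how Eqs. (12) and (13) map the wearable-sensing outputs to vehicle commands is shown below; the gain values k_sigma, k_theta, and l_theta are illustrative assumptions, not the ones used on the 1/10-scale platform.

K_SIGMA = 0.02  # EMG control factor k_sigma (assumed value)
K_THETA = 0.8   # steering angle coefficient k_theta (assumed value)
L_THETA = 0.0   # steering offset l_theta (assumed value)

def vehicle_commands(sigma_bar, yaw_deg):
    # Return (velocity, steering angle) from the fused wearable-sensing inputs.
    velocity = K_SIGMA * sigma_bar          # Eq. (12)
    steering = K_THETA * yaw_deg + L_THETA  # Eq. (13)
    return velocity, steering

print(vehicle_commands(sigma_bar=45.0, yaw_deg=12.0))  # approximately (0.9, 9.6)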

5.2. Maneuver Intention Understanding Verification. In this section, we test the maneuver intention understanding approach via the forward (acceleration and deceleration) and steering (turning left and turning right) driving modes in practical situations and present the verification results.

5.2.1. Forward Driving. When the human expects the robotic vehicle to speed up or slow down in forward driving, according to the fuzzy rules, he should present the specific finger motions and simultaneously keep the rotation angles within the required intervals. As depicted in Figure 10, the human forearm rotations and finger activities properly satisfy the required rules. Meanwhile, the vehicle's movement procedure in Figure 10(a) shows that the vehicle accelerates and decelerates correspondingly along with the variation of the EMG signals. Consequently, the maneuver intention understanding approach correctly follows the human intentions to execute the accelerating and decelerating driving.

5.2.2. Steering. When the human wants the robotic vehicle to turn left or right, in accordance with the steering rules "TL" and "TR," he should control the Yaw angle properly while keeping the Roll angle and Pitch angle within the designed constraints. The rotation angle information and EMG signals are presented in Figure 11. It can be seen that the robotic vehicle properly performs the steering maneuvers along with the variation of the rotation angles, which indicates that the proposed approach accurately understands the maneuver intentions in the steering operation.

5.3. Accuracy Evaluation. In this section, we conduct an understanding accuracy evaluation and compare the results with the work in [16], which utilized a Leap Motion to acquire human behavior information for vehicle maneuvers.

We employ the wearable sensory system to perform all designed maneuver intentions to operate the robotic vehicle without following a fixed route. Each intention is performed 40 times based on the understanding model. The understanding accuracy results are presented in Table 2. It can be seen that the proposed understanding approach is able to effectively and sensitively identify all the maneuver intentions in the human-robotic vehicle interaction. However, some intentions, such as "TR" and "TL," present relatively lower accuracy than the others. To address this, better fuzzy rules can be designed through practical trials to improve the understanding accuracy of these maneuver intentions.

In addition, the average understanding accuracy of this study is about 93.33%, which is higher than that of the work in [16]. Furthermore, from [16] we can calculate that the standard deviation of all errors (SD-E) is about 4.52, which is higher than the 3.76 obtained in our work. Therefore, it can be concluded that our approach is more stable in maneuver intention understanding. The comparison results are shown in Table 3.
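
As a quick cross-check of these figures, the short sketch below recomputes the average accuracy and the SD-E from the per-intention accuracies in Table 2, under the assumption that SD-E is the sample standard deviation of the per-intention error percentages.

import statistics

# Per-intention accuracies from Table 2 (AC, DC, ST, TL, TR, BD), in percent.
accuracies = [97.5, 95.0, 95.0, 90.0, 87.5, 95.0]
errors = [100.0 - a for a in accuracies]

print(f"average accuracy: {statistics.mean(accuracies):.2f}%")  # ~93.33%
print(f"SD of errors (SD-E): {statistics.stdev(errors):.2f}")   # ~3.76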

5.4. Robustness Evaluation. Based on the research platform, we design two tasks to evaluate the robustness of the maneuver intention understanding model in some common driving scenes, such as driving straight in the lane, turning at the intersection, and turning for obstacle avoidance.

5.4.1. Lane Tracking. When the human maneuvers a robotic vehicle in a straight lane, the straightness of driving is very significant to the traffic, while serpentine driving usually results in a fine or even a serious accident. Therefore, the straight driving test is conducted based on the intention understanding model.

We ask 5 individuals with valid driving licenses and considerable driving experience to maneuver the vehicle one by one for two loops. Each individual performs one straight driving process in each loop. Therefore, we obtain 10 driving records from the experiment. As shown in Figure 12, the vehicle is driven forward from A to B.

According to the maneuver results, the occurrences of lane departure in the straight driving are shown in Figure 13. The numbers of lane departures in these ten driving records are 2, 1, 1, 0, 2, 1, 0, 1, 3, and 1, respectively. The average number of lane departures is 1.20, which suggests that the maneuver intention understanding approach presents robust stability and adaptability for different individuals in straight driving situations.

5.4.2. Obstacle Avoidance. To evaluate the flexibility of the maneuver intention understanding model, some hybrid driving modes for avoiding obstacles are allocated to the robotic vehicle in the second task. As presented in Figure 14, the vehicle is driven from A to B. During this process, the vehicle must cross the intersection and avoid colliding with obstacles on the road. The experiment is conducted by the same 5 individuals using the same method as task 1.

Based on the driving records, the numbers of obstacle collisions are 1, 2, 1, 1, 3, 0, 1, 3, 2, and 1, respectively. As shown in Figure 15, the average number of obstacle collisions is 1.50. From the results, it can be observed that the maneuver intention understanding approach presents robust flexibility for the hybrid driving modes in the complex road setting. Compared to task 1, the standard deviation of the numbers of obstacle collisions (0.97) is higher than that of the numbers of lane departures (0.92), which reveals that the intention understanding approach shows relatively lower robustness in hybrid driving modes. One key reason is that the different fuzzy rules for the intentions present diverse understanding accuracies, some of which impact the overall robustness. Additionally, it is easy for drivers to feel nervous in complicated driving surroundings, which can result in obstacle collisions. However, these problems could be overcome by optimizing the fuzzy rules in the proposed approach and by giving the human more practice.
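
For reference, the per-task means and standard deviations quoted above can be recomputed from the listed records with the sketch below; whether a sample or a population standard deviation is used changes the second decimal slightly, so the sample form here is an assumption.

import statistics

lane_departures     = [2, 1, 1, 0, 2, 1, 0, 1, 3, 1]   # ten records from task 1
obstacle_collisions = [1, 2, 1, 1, 3, 0, 1, 3, 2, 1]   # ten records from task 2

for name, records in (("lane departures", lane_departures),
                      ("obstacle collisions", obstacle_collisions)):
    print(name, statistics.mean(records), round(statistics.stdev(records), 2))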

From the above, it is shown that the vehicle maneuvers are successfully and effectively performed by using the maneuver intention understanding approach. Notably, the experimental results and evaluations demonstrate that, by taking advantage of the natural wearable sensing information, the human driver can maneuver the vehicle using only forearm postures and muscle activities, in a much easier and more stable manner and with considerable accuracy and robustness.

6. Conclusions

A novel and practical wearable-sensing-based maneuver intention understanding approach was proposed to assist the human driver in naturally operating robotic vehicles without physical contact. The wearable sensory device can be naturally applied in complicated human-vehicle interactions without requiring the human's hands to be physically involved in the driving task. First, when the human driver performed his maneuver intentions, the wearable sensory information, which included forearm postures and muscle activities, was recorded and updated in real time. Then, after obtaining the parameterized intention information, we developed a maneuver intention understanding approach using fuzzy control. Afterwards, based on the proposed approach, we conducted a set of experiments on our vehicle research platform. Experimental results and evaluations demonstrated that, by taking advantage of the nonphysical contact and natural handling of this approach, the robotic vehicle was successfully and effectively maneuvered to accomplish the driving tasks with considerable accuracy and robustness in human-robotic vehicle interaction.

In human-vehicle interaction, the driver's unconscious gestures and involuntary movements may cause unstable detection and interpretation of the driver's intentions. Therefore, future work will focus on integrating multiple kinds of sensing information, such as human gaze information and natural language, as triggers to avoid false-positive intention understanding. Additionally, looking forward to extending the applications of our approach to more complicated situations, future work will also integrate radar sensing information as an input to the fuzzy control to improve the intention understanding accuracy and avoid potential collisions.

https://doi.org/10.1155/2018/4546094

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

The demo we recorded for the experimental verification. (Supplementary Materials)

References

[1] D. L. Fisher, M. Lohrenz, D. Moore, E. D. Nadler, and J. K. Pollard, "Humans and Intelligent Vehicles: The Hope, the Help, and the Harm," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 56-67, 2016.

[2] B. Paden, M. Cap, S. Z. Yong, D. Yershov, and E. Frazzoli, "A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 33-55, 2016.

[3] L. Li, W. Huang, Y. Liu, N. Zheng, and F. Wang, "Intelligence testing for autonomous vehicles: a new approach," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 2, pp. 158-166, 2016.

[4] https://www.cnet.com/news/google-unveils-self-driving-carsans-steering-wheel.

[5] http://www.theverge.com/2016/8/16/12504300/ford-autonomouscar-ride-sharing-2021.

[6] K. Ohashi, T. Yamaguchi, and I. Tamai, "Humane automotive system using driver intention recognition," in Proceedings of the SICE Annual Conference 2004, pp. 2363-2366, Japan, August 2004.

[7] H. Berndt, J. Emmert, and K. Dietmayer, "Continuous driver intention recognition with Hidden Markov Models," in Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems, ITSC 2008, pp. 1189-1194, China, December 2008.

[8] N. Oliver and A. P. Pentland, "Driver behavior recognition and prediction in a SmartCar," in Proceedings of the Enhanced and Synthetic Vision 2000, pp. 280-290, April 2000.

[9] N. Kuge, T. Yamamura, O. Shimoyama, and A. Liu, "A driver behavior recognition method based on a driver model framework," SAE Technical Papers, 2000.

[10] H. Berndt and K. Dietmayer, "Driver intention inference with vehicle onboard sensors," in Proceedings of the 2009 IEEE International Conference on Vehicular Electronics and Safety, ICVES 2009, pp. 102-107, India, November 2009.

[11] C. A. Pickering, K. J. Burnham, and M. J. Richardson, "A research study of hand gesture recognition technologies and applications for human vehicle interaction," in Proceedings of the 3rd Institution of Engineering and Technology Conference on Automotive Electronics, pp. 1-15, 2007.

[12] B. Ionescu, V. Suse, C. Gadea et al., "Using a NIR camera for car gesture control," IEEE Latin America Transactions, vol. 12, no. 3, pp. 520-523, 2014.

[13] S. P. Kang, G. Rodnay, M. Tordon, and J. Katupitiya, "A hand gesture based virtual interface for wheelchair control," in Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2003, pp. 778-783, Japan, July 2003.

[14] K. H. Kramer and J. S. Underkoffler, "Gesture-Based Control System for Vehicle Interfaces," Google Patents, 2009.

[15] T. Fong, C. Thorpe, and C. Baur, "Advanced interfaces for vehicle teleoperation: Collaborative control, sensor fusion displays, and remote driving tools," Autonomous Robots, vol. 11, no. 1, pp. 77-85, 2001.

[16] U. E. Manawadu, M. Kamezaki, M. Ishikawa, T. Kawano, and S. Sugano, "A hand gesture based driver-vehicle interface to control lateral and longitudinal motions of an autonomous vehicle," in Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016, pp. 1785-1790, Hungary, October 2016.

[17] A. Nag, S. C. Mukhopadhyay, and J. Kosel, "Wearable Flexible Sensors: A Review," IEEE Sensors Journal, vol. 17, no. 13, pp. 3949-3960, 2017.

[18] I. H. Lopez-Nava and A. Munoz-Melendez, "Wearable Inertial Sensors for Human Motion Analysis: A Review," IEEE Sensors Journal, vol. 16, no. 22, pp. 7821-7834, 2016.

[19] M. Janidarmian, A. Roshan Fekr, K. Radecka, and Z. Zilic, "Multi-Objective Hierarchical Classification Using Wearable Sensors in a Health Application," IEEE Sensors Journal, vol. 17, no. 5, pp. 1421-1433, 2017.

[20] S. Lefevre, C. Laugier, and J. Ibanez-Guzman, "Exploiting map information for driver intention estimation at road intersections," in Proceedings of the 2011 IEEE Intelligent Vehicles Symposium, IV'11, pp. 583-588, Germany, June 2011.

[21] https://www.myo.com/.

[22] G. Loianno, C. Brunner, G. McGrath, and V. Kumar, "Estimation, Control, and Planning for Aggressive Flight With a Small Quadrotor With a Single Camera and IMU," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 404-411, 2017.

[23] H. Ahmed and M. Tahir, "Improving the Accuracy of Human Body Orientation Estimation With Wearable IMU Sensors," IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 3, pp. 535-542, 2017.

[24] J. Wahlstrom, I. Skog, P. Handel, and A. Nehorai, "IMU-Based Smartphone-to-Vehicle Positioning," IEEE Transactions on Intelligent Vehicles, vol. 1, no. 2, pp. 139-147, 2016.

[25] N. D. Bunt, J. C. Moreno, P. Muller, T. Seel, and T. Schauer, "Online Monitoring of Muscle Activity During Walking for Bio-feedback and for Observing the Effects of Transcutaneous Electrical Stimulation," Biosystems and Biorobotics, vol. 15, pp. 705-709, 2017.

[26] S. H. Roy, G. De Luca, M. Cheng, A. Johansson, L. D. Gilmore, and C. J. De Luca, "Electro-mechanical stability of surface EMG sensors," Medical & Biological Engineering & Computing, vol. 45, pp. 447-457, 2007.

[27] C. J. De Luca, M. Kuznetsov, L. D. Gilmore, and S. H. Roy, "Inter-electrode spacing of surface EMG sensors: Reduction of crosstalk contamination during voluntary contractions," Journal of Biomechanics, vol. 45, no. 3, pp. 555-561, 2012.

[28] K. Nymoen, M. R. Haugen, and A. R. Jensenius, "MuMYO - evaluating and exploring the Myo armband for musical interaction," 2015.

[29] J. Diebel, "Representing attitude: Euler angles, unit quaternions, and rotation vectors," Matrix, vol. 58, pp. 1-35, 2006.

[30] K. Tanaka and H. O. Wang, Fuzzy control systems design and analysis: a linear matrix inequality approach, John Wiley and Sons, 2004.

[31] J. S. Kim and K.-S. Whang, "A tolerance approach to the fuzzy goal programming problems with unbalanced triangular membership function," European Journal of Operational Research, vol. 107, no. 3, pp. 614-624, 1998.

[32] H. Ishibuchi, K. Nozaki, N. Yamamoto, and H. Tanaka, "Selecting fuzzy if-then rules for classification problems using genetic algorithms," IEEE Transactions on Fuzzy Systems, vol. 3, no. 3, pp. 260-270, 1995.

[33] H. Hellendoorn and C. Thomas, "Defuzzification in fuzzy controllers," Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, vol. 1, no. 2, pp. 109-123, 1993.

Weitian Wang (iD), Rui Li, Longxiang Guo, Z. Max Diekel, and Yunyi Jia (iD)

Department of Automotive Engineering, Clemson University, Greenville, SC 29607, USA

Correspondence should be addressed to Yunyi Jia; yunyij@clemson.edu

Received 11 November 2017; Accepted 6 February 2018; Published 17 April 2018

Academic Editor: L. Fortuna

Caption: Figure 1: The framework of wearable-sensing-based driving intention understanding for the robotic vehicle maneuvers.

Caption: Figure 2: The driving intentions represented by the forearm postures and muscle activities information.

Caption: Figure 3: EMG signals and spatial rotations from the human driver's forearm.

Caption: Figure 4: The membership function of the Roll angle $\phi$.

Caption: Figure 5: The membership function of the Yaw angle $\Delta\theta$.

Caption: Figure 6: The membership function of the Pitch angle $\psi$.

Caption: Figure 7: The membership function of the EMG signal $\bar{\sigma}$.

Caption: Figure 8: The membership function of the driving intentions $I_d$.

Caption: Figure 9: The human is employing the wearable sensing to maneuver the robotic vehicle on the 1/10-Scale Vehicle Research Platform.

Caption: Figure 10: The variation of the human EMG signals and rotation angles in the acceleration and deceleration operation.

Caption: Figure 11: The variation of the human EMG signals and rotation angles in the steering operation.

Caption: Figure 12: The lane tracking task.

Caption: Figure 13: The number of lane departures in the lane tracking task.

Caption: Figure 14: The obstacle avoidance task.

Caption: Figure 15: The number of obstacle collisions in the obstacle avoidance task.
Table 1: The valid fuzzy rules utilized in the robotic vehicle maneuver.

IF $\phi$        IF $\Delta\theta$    IF $\psi$        IF $\bar{\sigma}$    THEN $I_d$
ZO               N, ZO, P             ZO               PM, PB               AC
ZO               N, ZO, P             ZO               PS                   DC
ZO               N, ZO, P             PS, PM, PB       ZO                   ST
ZO               P                    ZO               ZO                   TL
ZO               N                    ZO               ZO                   TR
PS, PM, PB       N, ZO, P             ZO               ZO                   BD

Table 2: Driving intention understanding accuracy.

Driving intention   Trials   Successful   Failed   Accuracy
AC                  40       39           1        97.5%
DC                  40       38           2        95%
ST                  40       38           2        95%
TL                  40       36           4        90%
TR                  40       35           5        87.5%
BD                  40       38           2        95%

Table 3: Comparison of our approach with the previous work.

Works              Interaction interface   Accuracy   SD-E
Our approach       Wearable sensor         93.33%     3.76
The work in [16]   Leap Motion             77.4%      4.52