An embedded stereo vision sensor for obstacle and lane marker detection.
The DARPA Urban Challenge 2007 was the third in a series of competitions held by the US Defense Advanced Research Projects Agency (DARPA) to foster the development of autonomous robotic ground vehicle technology for various, mostly military, applications (Gourley 2007).
The focus of the 2007 event shifted from the desert to an urban environment. The urban area selected was the (abandoned) housing area at the former George Air Force Base in Victorville, CA, USA. The challenge for the participating teams was to complete a complex 60 mile urban course with live traffic in less than six hours. During the course, the vehicles had to cope with traffic intersections, live traffic, road following, 4-way stops, lane changes, three-point turns, U-turns, blocked roads, and pulling in and out of a parking lot (DARPA 2006).
This paper presents the design of a real-time embedded stereo vision sensor, which was used for obstacle and lane marker detection. This sensor was part of the sensor suite of RED RASCAL and contributed to the autonomous capabilities of this vehicle.
The paper is organized as follows. Section 2 describes the system concept of the autonomous vehicle RED RASCAL. The embedded stereo vision sensor is presented in section 3. Section 4 summarizes the results achieved, and section 5 concludes the paper.
2. THE VEHICLE RED RASCAL
The team SciAutonics / Auburn Engineering successfully participated in the Grand Challenges 2004 and 2005 with their ATV-based vehicle RASCAL (Travis et al., 2006).
[FIGURE 1 OMITTED]
For 2007, the team built a new vehicle based on the lessons learned from the former Challenges. The new vehicle was a modified Ford Explorer and was adapted for "drive-by-wire". The vehicle remained "street legal" and readily drivable on public highways. It was called RED RASCAL to reflect its close relation to its predecessor.
2.1 Vehicle Architecture
The high-level architecture of RED RASCAL is shown in Fig. 1. The vehicle is at the bottom. On the lower right, sensors send data up to a series of processing stages. On the lower left, actuators on the vehicle receive commands from the control logic. In the centre is the Vehicle State Estimator, which continually makes optimal estimates of the vehicle's position, velocity, heading, roll, pitch, and yaw, based on inputs from the GPS, IMU, vehicle sensors, and obstacle sensors. The vehicle state estimates are used by all parts of the control system, including the Path Planner.
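The fusion scheme inside the Vehicle State Estimator is not detailed in the paper. As an illustration only, a minimal 2-D dead-reckoning estimator with a simple GPS correction step can convey the idea; the gain, interfaces, and 1-D-per-axis blending below are assumptions, not the actual RED RASCAL design:

```python
import math

class VehicleStateEstimator:
    """Minimal 2-D dead-reckoning estimator with GPS correction.
    Illustrative only; the real estimator also tracks roll, pitch,
    yaw, and velocities from the full sensor suite."""

    def __init__(self, x=0.0, y=0.0, heading=0.0, alpha=0.2):
        self.x, self.y = x, y      # position estimate (m)
        self.heading = heading     # heading (rad)
        self.alpha = alpha         # GPS correction gain (assumed)

    def predict(self, speed, yaw_rate, dt):
        # Propagate the state from wheel-speed and IMU yaw-rate inputs.
        self.heading += yaw_rate * dt
        self.x += speed * math.cos(self.heading) * dt
        self.y += speed * math.sin(self.heading) * dt

    def correct(self, gps_x, gps_y):
        # Blend the dead-reckoned position toward the latest GPS fix.
        self.x += self.alpha * (gps_x - self.x)
        self.y += self.alpha * (gps_y - self.y)

est = VehicleStateEstimator()
est.predict(speed=10.0, yaw_rate=0.0, dt=0.1)  # 1 m straight ahead
est.correct(gps_x=1.05, gps_y=0.0)
print(round(est.x, 3))  # 1.01
```

A production estimator would typically use a Kalman filter so that the correction gain reflects the relative uncertainty of the GPS and inertial measurements rather than a fixed constant.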
At the top of the diagram is the Mission State Estimator. It determines what phase of a mission the vehicle is in and identifies when unique behaviours such as passing or parking are required. When the vehicle is driving in an urban environment, the key challenge that must be addressed is situational awareness. Not only must the vehicle follow a path and avoid moving obstacles, but it must also demonstrate correct behaviours depending on the situation. The Estimator treats each type of situation as a state and uses state diagrams to cope with behaviours and transitions (Bevly et al., 2007).
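The state-diagram approach can be sketched as a small finite state machine. The states and events below are hypothetical stand-ins for the real diagrams in (Bevly et al., 2007):

```python
# Hypothetical mission states and transition events; the actual
# state diagrams are considerably more detailed.
TRANSITIONS = {
    ("FOLLOW_LANE", "obstacle_ahead"): "PASS",
    ("FOLLOW_LANE", "parking_zone"):   "PARK",
    ("PASS",        "lane_clear"):     "FOLLOW_LANE",
    ("PARK",        "parked"):         "FOLLOW_LANE",
}

class MissionStateEstimator:
    def __init__(self, state="FOLLOW_LANE"):
        self.state = state

    def on_event(self, event):
        # Stay in the current state if no transition is defined
        # for this (state, event) pair.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

msm = MissionStateEstimator()
print(msm.on_event("obstacle_ahead"))  # PASS
print(msm.on_event("lane_clear"))      # FOLLOW_LANE
```

Keeping the transitions in a single table makes each situational behaviour and its triggers explicit, which is exactly what the state-diagram treatment of situational awareness requires.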
2.2 Sensor Suite
Sensors were needed for both localization (GPS, IMU, etc.) and perception (obstacle and road detection). A strategy of redundancy was employed to provide measurements from some sensors when others were not available or in the event of the failure of a particular sensor. The main sensor for vehicle localization was a single antenna Navcom SF-2050 DGPS receiver with Starfire satellite-based corrections provided by Navcom. A Honeywell HG1700 tactical grade 6 degree of freedom IMU was used to measure translational accelerations and angular rates. A NovAtel Beeline RT20 dual antenna GPS system was used for obtaining the initial vehicle orientation, as well as longitudinal and lateral velocities (Bevly et al., 2007).
The vehicle used several types of environmental sensors for obstacle and vehicle avoidance: LIDAR sensors and the embedded stereo vision sensor. The capabilities of these sensors overlapped to provide the redundancy desired to protect against sensor failure and to improve the reliability of measurements. The embedded stereo vision system served as the near- and mid-range (4 m to 20 m) obstacle sensing system.
3. EMBEDDED STEREO VISION SENSOR
The embedded stereo vision sensor detected objects up to a distance of 20 m and also provided information about lane markers and lane borders. The stereo vision sensor consisted of a pair of Basler A601f monochrome cameras with a resolution of 656 (H) x 491 (V) pixels and a quantization of 8 bits/pixel.
[FIGURE 2 OMITTED]
The sensor ran at a frame rate of 10 frames per second to cope with the real-time requirements of the vehicle. Both cameras were connected to an embedded vision system over a 400 Mbit/s FireWire network.
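A quick back-of-the-envelope check shows why this bus was sufficient: two 656 x 491 monochrome streams at 8 bits/pixel and 10 frames per second need only a small fraction of the 400 Mbit/s FireWire bandwidth:

```python
# Raw image bandwidth of the stereo head (payload only, ignoring
# FireWire protocol overhead).
width, height, bytes_per_pixel = 656, 491, 1   # 8 bit/pixel, monochrome
fps, cameras = 10, 2

bytes_per_sec = width * height * bytes_per_pixel * fps * cameras
mbit_per_sec = bytes_per_sec * 8 / 1e6
print(f"{mbit_per_sec:.1f} Mbit/s")  # 51.5 Mbit/s, well under 400 Mbit/s
```

Even with isochronous-transfer overhead, the stereo pair uses roughly an eighth of the available bus capacity, leaving comfortable headroom.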
The embedded system was based on a Texas Instruments TMS320C6414 DSP running at 1 GHz under the DSP/BIOS II operating system from Texas Instruments. The embedded vision system was responsible for the synchronous acquisition of both images, for the execution of the computer vision algorithms, and for the communication with the vehicle's central brain via an Ethernet interface using UDP sockets (Hemetsberger et al., 2006).
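The wire format of those UDP messages is not given in the paper. A minimal sketch of such fire-and-forget reporting, with an assumed JSON payload and assumed field names, might look as follows:

```python
import json
import socket

def send_detections(sock, addr, obstacles, lane_markers):
    """Pack one frame's detections into a single datagram and send it.
    The message format here is a guess; the paper only states that
    results went to the vehicle's central brain via Ethernet/UDP."""
    msg = json.dumps({"obstacles": obstacles, "lanes": lane_markers})
    sock.sendto(msg.encode("utf-8"), addr)

# Demo: a loopback "brain" socket receives one frame of detections.
brain = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
brain.bind(("127.0.0.1", 0))          # any free port stands in for the brain
brain.settimeout(2.0)
sensor = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_detections(sensor, brain.getsockname(),
                obstacles=[{"x": 8.2, "y": -0.5}], lane_markers=[])
data, _ = brain.recvfrom(4096)
print(json.loads(data)["obstacles"][0]["x"])  # 8.2
```

UDP fits this application well: a lost datagram only drops one frame's detections, and the next frame arrives 100 ms later anyway, so the retransmission and ordering guarantees of TCP would add latency without benefit.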
The stereo head of the sensor was mounted on the dashboard while the DSP box was located in the trunk of the autonomous vehicle.
The main task of the stereo vision sensor was the detection of obstacles, lane markers, and lane borders in front of the vehicle. For obstacle detection, a fast stereo matching method was used. Furthermore, the bouncing of the vehicle was predicted and compensated for in the stereo images to improve the detection of obstacles and lane markers. For the extraction of lane markers and lane borders, only the right camera image was used. The image was searched at different levels of resolution for significant markings and borders on the road. Identified lane markings and borders were classified, labeled, and reported back to the vehicle brain (Travis et al., 2006).
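Whatever matching method is used, a rectified stereo pair converts matched disparity to distance by standard triangulation, Z = f * B / d. The calibration values below are assumed for illustration, not the sensor's actual parameters:

```python
# Standard rectified-stereo triangulation. Focal length and baseline
# are assumed example values, not the real sensor calibration.
FOCAL_PX = 800.0      # focal length in pixels (assumed)
BASELINE_M = 0.30     # camera baseline in metres (assumed)

def depth_from_disparity(disparity_px):
    """Distance Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        return float("inf")   # no valid match / point at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

# With these values, a 12 px disparity corresponds to an object 20 m
# away, the far end of the sensor's stated detection range.
print(depth_from_disparity(12.0))  # 20.0
```

The hyperbolic relation also explains the stated 20 m limit: at long range a one-pixel disparity error translates into a depth error of several metres, so reliable obstacle ranging is confined to the near and mid range.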
Additionally, the stereo vision sensor contained a debugging interface for real-time logging of the right sensor input image. Extracted obstacles, lane markers, lane borders, and some internal states were also logged for field-testing and evaluation.
4. RESULTS

During the qualification runs for the Urban Challenge event at Victorville, the embedded stereo vision sensor was used for obstacle and lane marker detection. An example of some detected obstacles (during sensor setup tests) is shown in Fig. 3. Figure 4 presents an exemplary output of detected lane markers on a road with live traffic.
Due to an accident in the second test run, the vehicle RED RASCAL was not able to qualify for the final event, the Urban Challenge, on November 3, 2007. However, our embedded stereo vision sensor performed excellently during all pre-tests and test runs and proved to be a valuable assistance system for autonomous vehicles.
5. CONCLUSION

This paper presented the design and realization of a real-time embedded stereo vision sensor. The sensor was designed to deliver information on obstacles, lane markers, and lane borders in front of an autonomous vehicle. The sensor was successfully tested at the DARPA Urban Challenge 2007 in Victorville, CA, USA as part of the vehicle RED RASCAL from the team SciAutonics / Auburn Engineering.
Recently, we have started two new projects on advanced driver assistance systems for next-generation intelligent cars. In these projects, we plan to disseminate and extend the technology used here in next-generation intelligent sensor devices.
[FIGURE 3 OMITTED]
[FIGURE 4 OMITTED]
The authors would like to thank Mr. Daniel Baumgartner, Mr. Christoph Loibl, and Mr. Christoph Schonegger from the University of Applied Sciences Technikum Wien, Austria, for their excellent support during this project. Furthermore, we would like to thank everyone on the SciAutonics / Auburn Engineering team, Thousand Oaks, CA, USA, for their valuable assistance during this project.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no ICT-216049 (ADOSE).
Bevly, D.M., Porter, J. (2007), Development of the SciAutonics / Auburn Engineering Autonomous Car for the Urban Challenge, SciAutonics, LLC and Auburn University College of Engineering Technical Report, June 1, 2007.
DARPA (2006), Defense Advanced Research Projects Agency (DARPA) Announces Third Grand Challenge--Urban Challenge Moves to the City, DARPA News Release, May 1, 2006.
Gourley, S.R. (2007), DARPA 'Urban Challenge' results show the way forward, BATTLESPACE C4ISTAR TECHNOLOGIES, pp. 13-20, ISSN 1465-1157
Hemetsberger, H., Kogler, J., Travis, W., Behringer, R., Kubinger, W. (2006), Reliable Architecture of an Embedded Stereo Vision System for a Low Cost Autonomous Vehicle, Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), pp. 945-950, Toronto, Canada, September 17-20, 2006
Travis, W., Daily, D., Bevly, D.M., Knoedler, K., Behringer, R., Hemetsberger, H., Kogler, J., Kubinger, W., Alefs, B. (2006), SciAutonics-Auburn Engineering's Low-Cost High-Speed ATV for the 2005 DARPA Grand Challenge, Journal of Field Robotics, Vol. 23, No. 8, pp. 579-597.
Publication: Annals of DAAAM & Proceedings, January 1, 2008