
Pattern recognition from lidar for dynamic object detection

Levente Tamas, Mircea Popa, Gheorghe Lazea

1. INTRODUCTION

The research addressed in this paper concerns the theoretical investigation and practical implementation of a reliable shape classification algorithm for autonomous vehicles in indoor environments. The classification focuses mainly on dynamic human leg forms, but it can be extended to an arbitrary set of objects.

Detecting objects from a moving platform using lidar, vision, or both sensors at the same time, for collision avoidance, mapping, or SLAM, is a well reported subject (Guivant et al., 2000).

Several research works have used laser scanners for object classification and moving object tracking, with applications to localization and navigation, warning systems, and others (Neira et al., 1999). For object classification, the main directions in this domain are based on heuristic methods, voting schemes, multi-hypothesis tracking (Streller & Dietmayer, 2004), or boosting approaches (Viola & Jones, 2001). Although the first two approaches lack a rigorous mathematical framework and are thus not consistent, they still offer reasonable performance.

The approach adopted in this work is based on a Gaussian mixture model (Premebida & Nunes, 2006) representation of the classified objects. The first part of the paper presents the segmentation algorithm, while the second part focuses on the theoretical background of the classifier together with the experimental details.

2. LIDAR DATA PROCESSING

This section presents the details of the measurements taken with the laser scanner and the way in which these measurements can be represented in a coherent form.

2.1 LIDAR Characteristics and the Measured Environment

A LIDAR measurement returns bearing and range information about the surrounding reflective surfaces. The device used in this work operates on the time-of-flight and phase difference of the emitted laser beam, reconstructing the distance $r$ to the measured object and the angle $\varphi$ from which the beam is reflected.

The range data in this article are obtained from a LIDAR that scans a 180° arc, giving the distance to objects in the environment at angular intervals of 0.5° with a maximum range of 50 m. Since the measurements are planar points acquired from a single laser source, the representation of such a point cloud is equivalent to a map.
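As an illustration, the scan geometry described above can be reproduced in a few lines. The following is a minimal Python sketch, assuming the scanner delivers one range reading per beam; the array names and the placeholder range data are hypothetical:

    import numpy as np

    # 180 degree arc at 0.5 degree resolution -> 361 beams per scan
    angles = np.deg2rad(np.arange(0.0, 180.0 + 0.5, 0.5))  # beam angles phi_j
    ranges = np.full(angles.size, 50.0)                     # placeholder rho_j, clipped at 50 m

    # Cartesian projection of the beams, used by the later modules
    points = np.column_stack((ranges * np.cos(angles),
                              ranges * np.sin(angles)))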

2.2 Scan Segmentation

Scan segmentation belongs to the primary modules of the lidar processing architecture, along with the data acquisition and pre-filtering modules. Segmentation is the process of splitting a scan into several coherent clusters, i.e. point clouds. The choice of segmentation method is rather arbitrary and depends on other design choices such as the alignment and covariance estimation strategies. The current strategy is based on a simple assumption about the distances between segments, adopted from (Premebida & Nunes, 2005). It is assumed that the laser range scan is of the form $Z = \{b_1, \ldots, b_L\}$, a set of beams. Each element $b_j$ of this set is a pair $(\varphi_j, \rho_j)$, where $\varphi_j$ is the angle of the beam relative to the robot and $\rho_j$ is the distance to the reflecting surface.

The output of the splitting procedure is an angle-ordered sequence $P = (S_1, \ldots, S_M)$ of segments such that $\bigcup_i S_i = Z$. The elements of each segment $S$ are pairs of Cartesian coordinates $\mathbf{x} = (x, y)$, obtained from the polar coordinates by $x = \rho \cos(\varphi)$ and $y = \rho \sin(\varphi)$. A typical segmented scan is shown in Fig. 1; a minimal sketch of the distance-based splitting follows.
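The sketch below implements a minimal distance-based splitting in Python; the gap threshold of 0.15 m is an assumed value, not the one used in the experiments:

    import numpy as np

    def segment_scan(points, max_gap=0.15):
        """Split an angle-ordered (L x 2) point array into segments S_1..S_M.

        A new segment is started whenever two consecutive beams are farther
        apart than max_gap (metres); the returned segments cover the scan Z.
        """
        gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
        segments, start = [], 0
        for j, gap in enumerate(gaps, start=1):
            if gap > max_gap:
                segments.append(points[start:j])
                start = j
        segments.append(points[start:])
        return segments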

2.3 Scan Description

This module extracts relevant information from the segmented data in order to ensure the robustness of the algorithm. The extracted features are used later in the classifier module and can also serve visualization purposes. The basic feature set used in the experiments contained the following entries (a feature extraction sketch follows the list):

* $f_1$: the object centroid;

* $f_2$: the normalized Cartesian dimension:

$f_2 = \sqrt{\Delta X^2 + \Delta Y^2}$ (1)

* $f_3$: the standard deviation of the points from the centroid:

$f_3 = \sqrt{\frac{1}{n-1} \sum_n \| \mathbf{r}_n - \bar{\mathbf{x}} \|^2}$ (2)
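The feature extraction for one segment can be sketched as follows; whether the classifier consumes the centroid itself or only the scalar features $f_2$ and $f_3$ is an assumption of this sketch:

    import numpy as np

    def describe_segment(seg):
        # f1: object centroid of the (n x 2) segment
        f1 = seg.mean(axis=0)
        # f2: normalized Cartesian dimension, Eq. (1)
        dx, dy = seg.max(axis=0) - seg.min(axis=0)
        f2 = np.hypot(dx, dy)
        # f3: standard deviation of the points from the centroid, Eq. (2)
        f3 = np.sqrt(np.sum(np.linalg.norm(seg - f1, axis=1) ** 2) / (len(seg) - 1))
        return f1, np.array([f2, f3])  # centroid plus feature vector Omega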

[FIGURE 1 OMITTED]

3. GMM OBJECT DESCRIPTION

A Gaussian mixture model (GMM) is a weighted combination of Gaussian probability density functions. These densities capture the particularities of an object. In a GMM, the probability distribution of a random variable $x$ is defined as a sum of $M$ weighted Gaussian probability density functions:

$p(x \mid \Theta) = \sum_{m=1}^{M} \alpha_m \, p(x \mid \theta_m)$ (3)

where $\theta_1, \ldots, \theta_M$ are the parameters of the Gaussian distributions and $\alpha = (\alpha_1, \ldots, \alpha_M)$ is the weight vector such that $\sum_{m=1}^{M} \alpha_m = 1$. A set of parameters for a mixture model is given by $\Theta = (\alpha; \theta_1, \ldots, \theta_M)$, where each $\theta_m = (\mu_m, \Sigma_m)$ contains the mean and the covariance of the $m$-th component. The likelihood of a feature vector $\Omega$ for each class is given by the linear combination of the Gaussian probability density functions:

$p(\Omega \mid q_i, \Theta^i) = \sum_{m=1}^{M} \alpha_m^i \frac{1}{\sqrt{(2\pi)^d \, |\Sigma_m^i|}} \exp\left[ -\frac{1}{2} (\Omega - \mu_m^i)^T (\Sigma_m^i)^{-1} (\Omega - \mu_m^i) \right]$ (4)

where $d$ denotes the dimension of the feature vector.

The Gaussian mixture parameters for each object of interest were determined using the expectation-maximization (EM) algorithm. In this way, for each set of feature vectors $\Omega^N = (\Omega_1, \ldots, \Omega_N)$, the EM algorithm computes the $M$ Gaussian parameter vectors that maximize the joint likelihood of the Gaussian density functions:

$\hat{\Theta} = \arg\max_{\Theta} \prod_{k=1}^{N} p(\Omega_k \mid \Theta)$ (5)
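As a sketch, the per-class mixtures can be fitted with any EM implementation. Below, scikit-learn's GaussianMixture stands in for the EM routine used by the authors; the component count M = 3 and the dictionary layout of the training data are assumptions:

    from sklearn.mixture import GaussianMixture

    def fit_class_models(train_features, M=3):
        """Fit one GMM per object class with EM, maximizing Eq. (5).

        train_features maps a class label q_i to an (N x d) array of
        feature vectors Omega collected for that class.
        """
        return {c: GaussianMixture(n_components=M, covariance_type='full').fit(X)
                for c, X in train_features.items()}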

A typical GMM representation of leg forms from the laser scan is shown in Fig. 2.

[FIGURE 2 OMITTED]

4. BAYESIAN CLASSIFIER

Once a Gaussian mixture pdf is available for each object class, a Bayesian decision framework based on the log-likelihood and the log-prior probability is used to classify which category $q_i$, modeled by $\Theta^i$, fits the current observation feature vector $\Omega_k$.

In order to decide the most likely object class $q_i$ for a segment $S_j$, a decision rule of the following form was adopted:

$S_j \in q_i \quad \text{if} \quad \log P(\Theta^i \mid \Omega_k) = \max_u \left( \log P(\Theta^u \mid \Omega_k) \right)$ (6)

where $u$ spans from 1 to the number of classes (Premebida & Nunes, 2005). This approach is rather intuitive, and further refinements can be performed on it.
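A minimal sketch of this decision rule, reusing the models fitted above (uniform priors are an assumption; score_samples returns the per-sample log-likelihood of Eq. (4)):

    import numpy as np

    def classify(omega, models, log_priors):
        # log-posterior (up to a constant): log-likelihood + log-prior, Eq. (6)
        scores = {c: m.score_samples(omega.reshape(1, -1))[0] + log_priors[c]
                  for c, m in models.items()}
        return max(scores, key=scores.get)

    # Hypothetical usage with two classes and equal priors:
    # models = fit_class_models({'leg': X_leg, 'other': X_other})
    # label = classify(omega, models, {'leg': np.log(0.5), 'other': np.log(0.5)})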

5. EXPERIMENTAL RESULTS

The experiments were performed with a SICK LMS200 range device. Fig. 3 presents a typical office environment scenario with a pair of legs in the middle, which was detected by the presented algorithm. Although an indoor environment may contain forms in a laser scan similar to those on which the classifier was trained, the results were correct in most of the experiments.

[FIGURE 3 OMITTED]

6. CONCLUSION

This paper gave a theoretical introduction to laser data segmentation as well as to the GMM-based classifier. The theoretical investigations were validated with practical measurements on a mobile robot.

As further work, it is intended to combine this type of classifier with other ones, such as an AdaBoost classifier operating on images. It is also intended to track the classified dynamic objects over time in order to reduce the classifier's false positives.

7. REFERENCES

Guivant, J., Nebot, E., & Durrant-Whyte, H. F. (2000). Simultaneous localization and map building using natural features in outdoor environments. Intelligent Autonomous Systems VI, (pp. 581-588)

Neira, J., Tardós, J. D., Horn, J., & Schmidt, G. (1999). Fusing range and intensity images for mobile robot localization. IEEE Trans. Robotics and Automation, (pp. 76-84)

Premebida, C., & Nunes, U. (2006). A multi-target tracking and GMM-classifier for intelligent vehicles. 9th International IEEE Conference on Intelligent Transportation Systems. Toronto

Premebida, C., & Nunes, U. (2005). Segmentation and geometric primitives extraction from 2D laser range data for mobile robot applications. Proc. 5th National Festival of Robotics, Scientific Meeting (ROBOTICA). Coimbra

Streller, D., & Dietmayer, K. (2004). Object tracking and classification using a multiple hypothesis approach. IEEE Intelligent Vehicles Symposium. Parma

Viola, P., & Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. Proc. IEEE CVPR 2001, Vol. I