
Efficient Lane Detection Using Deep Lane Feature Extraction Method.

1. Introduction

Lane detection plays an important role in advanced driver assistance systems (ADAS) and unmanned vehicle systems [1-2]. Accurate lane detection can provide exact lane departure warnings when the vehicle deviates from its lane and thus avoid potential traffic accidents. Besides, real-time lane detection can help self-driving vehicles achieve more accurate route planning [3].

Despite the remarkable progress achieved in the past few years, many challenges remain. The first urgent problem is how to meet real-time application requirements. Although some studies have applied state-of-the-art techniques such as deep learning to lane detection [4-5] and achieved very good results, real-time performance is not guaranteed on current computing platforms. In addition, under harsh road environments, lane detection still cannot meet the accuracy requirements. For example, lane detection is often disturbed by the changing road environment, such as passing vehicles, complex lighting, and traffic signs painted on the road. These disturbances often lead to false lane detection results.

This research aims to tackle the above challenges by proposing an efficient lane detection algorithm using a deep lane feature extraction method. The proposed method contains three main stages: 1) pre-processing, 2) deep lane feature extraction, and 3) lane fitting. Figure 1 shows the overall framework. In the pre-processing stage, an edge image is obtained from the bird's eye view of the road image, which is generated by inverse perspective mapping (IPM); a two-dimensional separable Gaussian filter is used to eliminate the salt-and-pepper noise in the road image. In the deep lane feature extraction stage, the line segment detector (LSD) is first applied to detect line segments in the IPM image. Then, an adaptive lane clustering (ALC) algorithm is proposed to gather the adjacent line segments generated by the LSD method. Finally, a local gray value maximum cascaded spatial correlation filter (GMSF) algorithm is used to extract the target lane lines among the multiple lines. In the lane fitting stage, Kalman filtering is employed to improve the accuracy of the extraction result, followed by the RANSAC (Random Sample Consensus) algorithm, which fits the extracted lane points to a parabolic model. The experimental results show that our algorithm performs well under severe road conditions with shadows, culverts, street writings, and passing vehicles. Besides, the proposed algorithm achieves real-time processing at 38 fps on an embedded platform. Hence, it has potential for practical applications.

The rest of this paper is organized as follows: Section 2 briefly reviews work related to lane feature detection and lane fitting, followed by the methodological details of the efficient lane detection method in Section 3. Section 4 presents the experimental results. Finally, conclusions and discussion are presented in Section 5.

2. Related Work

In general, lane detection methods contain two main stages: detecting lane features and fitting them to a parametric curve.

2.1. Lane Feature Detection

In the lane detection stage, the main task is to extract the features of lane lines in the road image under complex road environments [6-7]. Over the past few decades, considerable research effort has been devoted to detecting lane features. However, most existing methods have difficulty extracting lane features under shadows or lane-like noise. Color-based approaches [8-9] are easily influenced by lighting conditions, while edge-based methods [10-11] perform poorly when interfered with by noise such as passing cars. Besides, the steerable Gaussian filter [12] is another commonly used method to extract edge features; however, it requires an edge threshold that is often difficult to set. In [13], a Canny edge detection algorithm combined with the Hough transform was used to extract the feature points of lane lines and fit them to a cubic B-spline curve. But a system relying only on Canny edge detection is vulnerable to noise, such as damaged pavement. In [14], a maximally stable extremal region (MSER) method was used to extract the lane line regions first, and prior knowledge was used to eliminate unwanted regions that would affect the true results. This method works well in most scenarios, but its performance degrades significantly when a curve appears.

2.2. Lane Fitting

In the lane fitting stage, the main task is to apply a line model to fit the points obtained from the detection stage. The parameters of the line model are solved according to the feature points of lane markings in a road image. Many straight-line and curve fitting methods have been developed. The simplest model used to characterize a lane line is a straight-line model [15]. This technique is simple, but it generates errors when the lane is not straight. In [16], a real-time lane detection algorithm based on a hyperbola-pair model is proposed, assuming that the model parameters are the same for parallel lanes on the same road. In [17], Du et al. proposed a robust lane detection approach based on a 'ridge' detector and a modified sequential RANSAC. This approach implements an effective noise filtering mechanism after ridge detection, which increases model fitting speed and accuracy. It is capable of fitting multiple road models simultaneously, including straight lines and hyperbolas, in the quest for the best matching results. In [18], Tan et al. separated the images into a near vision field and a far vision field. A Hough transform was used to draw the lane lines in the near field, and an improved river flow algorithm was used in the far field to extract the lane points. Then, by applying a RANSAC algorithm, the hyperbola-pair model was fitted to the extracted lane points. However, this method does not work well when the lane line is absent temporarily.

3. The Efficient Lane Detection Method

As mentioned in the first section, the proposed lane detection method contains three main stages: pre-processing, deep lane feature extraction, and lane fitting. The highlight of our research is Stage II: deep lane feature extraction.

3.1. Stage I: Pre-Processing

Stage I mainly includes two parts: inverse perspective mapping (IPM) and separable Gaussian filtering followed by edge detection.

3.1.1. Inverse Perspective Mapping (IPM) Inverse perspective mapping (IPM) refers to the process of converting a perspective image through a transformation matrix into an inverse perspective image. IPM can be used to generate a bird's eye view of the road image [19]. The spatial characteristics of the lane lines are not well preserved in the perspective image, as shown in Figure 2(a). In contrast, the lanes appear as parallel lines with a fixed width in the IPM image, as shown in Figure 2(b). We can exploit this spatial geometry of the road to detect the lane lines.

In Euclidean space, we define the world coordinate system W(x, y, z) and the image coordinate system I(u, v), respectively. The road image under the coordinate system I is transformed onto the z = 0 plane in the world coordinate system W; the relationship is shown in Figure 3.

Assume the camera is mounted at position (w, l, h) in the world coordinate system. The camera calibration parameters are as follows: γ is the angle between the camera's optical axis o projected onto the z = 0 plane and the y axis, as shown in Figure 4(a); θ is the angle by which the camera's optical axis o deviates from the z = 0 plane, as shown in Figure 4(b); 2α is the horizontal and vertical viewing angle of the camera; and R_x, R_y are the horizontal and vertical resolutions of the image sequence, respectively.

Through the coordinate system transformation, each pixel (u, v) in the input image frame can be mapped to the corresponding point (x, y) on the z = 0 road plane of the inverse perspective image by the following formulas:

x(u, v) = h · cot[(θ - α) + u · 2α/(R_x - 1)] · cos[(γ - α) + v · 2α/(R_y - 1)] + w Eq. (1)

y(u, v) = h · cot[(θ - α) + u · 2α/(R_x - 1)] · sin[(γ - α) + v · 2α/(R_y - 1)] + l Eq. (2)

A bird's-eye view of the road image can be obtained by inverse perspective mapping as shown in Figure 2.
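
Although the paper derives the IPM analytically from the camera calibration in Eqs. (1)-(2), the warp itself is a planar homography, and a common practical shortcut is to estimate it from four calibrated ground points. The C++/OpenCV sketch below illustrates this alternative; the pixel coordinates are placeholder values, not ones from the paper.

#include <opencv2/opencv.hpp>

// Warp a camera frame to a bird's-eye view with a homography computed from
// four road-plane correspondences (placeholder coordinates, not the paper's).
cv::Mat toBirdsEye(const cv::Mat& frame) {
    // Four pixels that lie on the road plane in the source image...
    std::vector<cv::Point2f> src = {
        {540, 460}, {740, 460}, {1180, 720}, {100, 720}};
    // ...and where those ground points should land in the IPM image.
    std::vector<cv::Point2f> dst = {
        {300, 0}, {980, 0}, {980, 720}, {300, 720}};
    cv::Mat H = cv::getPerspectiveTransform(src, dst);
    cv::Mat ipm;
    cv::warpPerspective(frame, ipm, H, frame.size());
    return ipm;
}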

3.1.2. Filtering and Edge Detection The raw IPM image often contains some noise due to the camera image quality. Besides, in practical road environments, there are often many places where the lanes are worn or disturbed by the surrounding environment.

Hence, the raw IPM image needs further processing in order to obtain a clearer road image. In this paper, the two-dimensional separable Gaussian filter proposed in [20] is used to smooth the IPM image. The two-dimensional separable Gaussian filter is divided into a horizontal Gaussian kernel and a vertical Gaussian kernel. In the horizontal direction, a second derivative of a Gaussian is used, in which the size of the filter kernel is set to the width of the lanes demarcated in the raw IPM image. Meanwhile, a smoothing Gaussian filter is applied in the vertical direction, in which the size of the filter kernel is set according to the height of the dashed lane lines calibrated in the original IPM image. The formulas of the Gaussian filter are:

G_x(x) = (1/σ_x²) · (1 - x²/σ_x²) · exp(-x²/(2σ_x²)) Eq. (3)

G_y(y) = exp(-y²/(2σ_y²)) Eq. (4)

where σ_y is calculated according to the size of the filter kernel in the vertical direction, and σ_x is calculated according to the size of the filter kernel in the horizontal direction. As can be seen from the images before and after filtering (shown in Figure 5(a) and Figure 5(b), respectively), the filtering can effectively eliminate the interference of vehicle shadows.

Then, the Canny operator is used to obtain the edge image [21]. Some research, e.g. [22], uses a scanning algorithm, but that method only works for lines with a specifically calibrated width. In the actual road environment, the width of the lane is not constant, as with lanes at intersections or worn lanes. Thus, we use the edge image, which preserves most of the lane information even for lanes with different widths. The resulting edge image is shown in Figure 5(c).
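
To make the pre-processing chain concrete, the sketch below builds the separable kernels of Eqs. (3)-(4) and applies them with OpenCV before Canny edge detection. The kernel sizes, sigma values, and Canny thresholds are illustrative assumptions, not the calibrated values used in the paper.

#include <opencv2/opencv.hpp>
#include <cmath>

// Smooth the IPM image with the separable kernels of Eqs. (3)-(4), then run
// Canny. laneWidthPx and dashHeightPx stand in for the calibrated sizes.
cv::Mat filterAndDetectEdges(const cv::Mat& ipmGray) {
    const int laneWidthPx = 20, dashHeightPx = 40;   // assumed calibration
    const double sx = laneWidthPx / 2.0, sy = dashHeightPx / 2.0;

    // Horizontal kernel: second derivative of a Gaussian (Eq. 3); it responds
    // strongly to bright vertical stripes roughly laneWidthPx pixels wide.
    cv::Mat kx(1, 2 * laneWidthPx + 1, CV_32F);
    for (int i = 0; i < kx.cols; ++i) {
        double x = i - laneWidthPx;
        kx.at<float>(0, i) = static_cast<float>(
            (1.0 / (sx * sx)) * (1.0 - x * x / (sx * sx)) *
            std::exp(-x * x / (2.0 * sx * sx)));
    }
    // Vertical kernel: plain smoothing Gaussian (Eq. 4).
    cv::Mat ky = cv::getGaussianKernel(2 * dashHeightPx + 1, sy, CV_32F);

    cv::Mat smoothed, edges;
    // CV_8U output saturates negative responses to zero, keeping bright lines.
    cv::sepFilter2D(ipmGray, smoothed, CV_8U, kx, ky);
    cv::Canny(smoothed, edges, 50, 150);             // thresholds assumed
    return edges;
}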

3.2. Stage II: Deep Lane Feature Extraction Method

Pre-processing only roughly detects edges, and in a real road environment the edge image always contains much noise. To accurately extract lanes, lane feature extraction is essential. Its main purpose is to find the possible locations of the lane lines. Three methods are proposed here: line segment detection, adaptive line clustering, and the local gray value maximum cascaded spatial correlation filter (GMSF). The function of line segment detection is to find all the line segments in the IPM image. The role of adaptive line clustering is to cluster the adjacent line segments. Finally, the local gray value maximum cascaded spatial correlation filter extracts the target lane lines among the multiple lines.

3.2.1. Line Segment Detection The first part of deep lane feature extraction is to detect the straight lines in the road image in order to determine where the lane lines may be located. Traditionally, the Hough transform or improved Hough transform is widely used for line detection [23-24], but its calculation speed is relatively slow, which makes real-time application difficult. In order to achieve fast and accurate lane detection, the line segment detector (LSD) is used to detect the straight lines in the edge image [25]. LSD is an efficient and accurate line segment extractor that does not require manually set parameters. More importantly, it requires less computation [26]. There are two main concepts in LSD: the gradient of a pixel and the level-line, as shown in Figure 6(a) (the gradient is expressed by the yellow arrows; the level-line is the tangent of the gradient, expressed by the red line). The implementation of the LSD algorithm is described as follows:

Step 1. Scale the input image. The input image is filtered by a Gaussian kernel and downscaled to avoid sawtooth (aliasing) effects.

Step 2. Calculate the level-line around every pixel, thus producing a level-line field, as shown in Figure 6(c). In addition, the gradient of every level-line is also calculated.

Step 3. Segment the image into line support regions according to the gradients of the level-lines, as shown in Figure 6(d). Within each line support region, the level-lines have similar gradients.

Step 4. Calculate the minimum bounding rectangle of every line support region, as shown in Figure 6(e) (the green box). The main axis through the center of the rectangle serves as the spindle (the blue lines).

Step 5. If the difference between the directions of a level-line and the rectangle's spindle is less than 2θ (here θ is set to 22.5°), the corresponding pixel is called an aligned point, as shown in Figure 6(e) (the red points). If the percentage of aligned points in a rectangle is large enough, the rectangle is accepted as a line segment.

In addition, the line segments have their own physical characteristics. Owing to the inverse perspective transformation, lane markings in the IPM image are always nearly vertical. We take full advantage of this spatial feature: only the lines with near-vertical angles are chosen as lane marking candidates. Thus, most of the interfering lines are filtered out.
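
As an illustration of this step, the following sketch runs OpenCV's built-in LSD implementation (whose availability varies across OpenCV versions) on the edge image and keeps only near-vertical segments; the 20° verticality tolerance is an assumed value.

#include <opencv2/opencv.hpp>
#include <cmath>

// Detect line segments with LSD and keep the near-vertical ones, since lane
// markings are close to vertical in the IPM image.
std::vector<cv::Vec4f> detectLaneSegments(const cv::Mat& edgeImage) {
    cv::Ptr<cv::LineSegmentDetector> lsd =
        cv::createLineSegmentDetector(cv::LSD_REFINE_STD);
    std::vector<cv::Vec4f> segments, candidates;
    lsd->detect(edgeImage, segments);                // edgeImage: CV_8UC1

    for (const cv::Vec4f& s : segments) {            // s = (x1, y1, x2, y2)
        double angle = std::atan2(std::abs(s[3] - s[1]),
                                  std::abs(s[2] - s[0])) * 180.0 / CV_PI;
        if (angle > 70.0)                            // within 20° of vertical
            candidates.push_back(s);
    }
    return candidates;
}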

3.2.2. Adaptive Line Clustering (ALC) LSD detects many line segments. But since most lane lines have two edges in the edge image, many detected segments actually come from the same lane marking, as shown in Figure 7(a). Hence, we propose an adaptive line clustering (ALC) algorithm to cluster the adjacent line segments, as shown in Algorithm 1. The detailed procedure of the ALC algorithm is summarized as follows:

Step 1: Order the line segments. Each detected line segment is expressed as l_i: x = k_i · y + b_i (i = 1, ..., n), where (x, y) are image coordinates and k_i, b_i are the line parameters. All line segments are sorted in ascending order of b.

Step 2: Generate a bin and, starting from the first line segment, put it into the bin as a lane line candidate; then cluster into the bin the surrounding line segments whose distance is within the pre-calibrated lane width. A line segment can be clustered into the bin only if its b value is within the lane width and its k value is within the threshold (0.04 in this research, determined through extensive experiments). The value of b essentially indicates the distance between two segments, and the value of k indicates whether the two segments are parallel to each other. Once a segment has been clustered into a bin, delete it from the detection set and continue clustering the remaining segments until no more line segments can be clustered. Then create a new bin and repeat the above procedure.

Step 3: Update the line function of each bin. Each line segment in a bin is expressed as x = k_m · y + b_m (m = 1, ..., p); then the overall parameters of the bin are updated as k = (Σ k_m)/p and b = (Σ b_m)/p.
ALGORITHM 1 Adaptive line clustering.

Input: line segments
Output: clustered lines
(k_i, b_i): the parameters of the line segment x = k_i · y + b_i
  (i = 1, ..., n; b_1 < b_2 < ... < b_n)
T1: slope threshold
T2: lane width
m: the index of the clustered line, with initial value 1
for i = 1 to n do
  for j = i + 1 to n do
    if b_j - b_i < T2 and |k_j - k_i| < T1 then
      b_m = (b_i + b_j)/2;
      k_m = (k_i + k_j)/2;
    else
      b_m = b_i;
      k_m = k_i;
    end if
  end for
  m++
end for


A great advantage of the proposed ALC algorithm is that it can gather the line segments in the near field without knowing in advance how many line segments a frame contains. Unlike the K-means clustering algorithm [27] or the improved K-means clustering algorithm [28], it does not need to determine the K value in advance. More importantly, unlike the KNN clustering algorithm [29], once a line segment is clustered into a bin, the segment is removed from the detection set to avoid repeated detection. This significantly reduces computing time. An example clustering result is shown with the red lines in Figure 7(b).
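
The following C++ sketch is one possible runnable rendering of Algorithm 1, assuming each segment has already been parameterized as x = k·y + b. T1 uses the 0.04 slope threshold given above, while the default lane width T2 (in IPM pixels) is a placeholder.

#include <algorithm>
#include <cmath>
#include <vector>

struct Line { double k, b; };   // x = k*y + b

// Cluster adjacent, near-parallel segments into bins and average each bin.
std::vector<Line> clusterLines(std::vector<Line> segs,
                               double T1 = 0.04, double T2 = 20.0) {
    // Step 1: sort in ascending order of b.
    std::sort(segs.begin(), segs.end(),
              [](const Line& a, const Line& b) { return a.b < b.b; });
    std::vector<bool> used(segs.size(), false);
    std::vector<Line> bins;
    for (size_t i = 0; i < segs.size(); ++i) {
        if (used[i]) continue;                 // Step 2: open a new bin
        double sumK = segs[i].k, sumB = segs[i].b;
        int count = 1;
        used[i] = true;
        for (size_t j = i + 1; j < segs.size(); ++j) {
            if (!used[j] && segs[j].b - segs[i].b < T2 &&
                std::abs(segs[j].k - segs[i].k) < T1) {
                sumK += segs[j].k;
                sumB += segs[j].b;
                ++count;
                used[j] = true;                // remove from the detection set
            }
        }
        bins.push_back({sumK / count, sumB / count});   // Step 3: average
    }
    return bins;
}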

3.2.3. Lane Line Extraction Using the Local Gray Value Maximum Cascaded Spatial Correlation Filter (GMSF) The methods described above can extract lane lines in most road environments. But when the pavement is damaged or vehicles appear ahead, many unwanted lines are detected, as shown in Figure 8(a). Therefore, it is necessary to further improve lane line extraction. Since lane lines are generally white or yellow, with higher gray values than the surrounding pavement, we should take full advantage of the color characteristics of the lane lines to extract the target lane lines. Besides, lane lines in the real world have their own spatial characteristics: the lane lines are parallel to each other, and the width between them is fixed. Making full use of this spatial knowledge, we develop a local gray value maximum cascaded spatial correlation filter algorithm to extract the target lane lines in the gray image, as shown in Algorithm 2. The detailed procedure is described as follows:

Step 1. Extend the detected lines in the sub-region and divide them into left lane lines and right lane lines, according to the position of each line segment in the image, as shown in Figure 8(b).

Step 2. Calculate the sum of the gray values along every extended left and right lane line. Then sort the left lane lines and right lane lines separately in descending order of their gray value sums. Those with the maximum gray value sum are most likely to be the target lane lines.

Step 3. Calculate the distance between the two target lanes and check whether it satisfies the pre-defined lane width. If yes, the target lanes are selected; if not, the line with the next-highest gray value sum is tested, until the target lane lines are selected.

The proposed method can significantly improve lane line extraction accuracy. It takes full advantage of both the color information and the spatial information of the lane lines, cascading the two characteristics and using them for mutual verification. Figure 8 presents an example.
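
The sketch below illustrates the GMSF selection logic of Algorithm 2 on a CV_8UC1 gray image: rank the extended candidates by gray value sum, then pick the left/right pair whose separation matches the calibrated lane width H. The helper names and the 20% width tolerance are assumptions for illustration.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

struct Cand { double k, b, graySum; };   // candidate line x = k*y + b

// Sum the gray values along x = k*y + b over the whole image height.
static double lineGraySum(const cv::Mat& gray, double k, double b) {
    double s = 0.0;
    for (int y = 0; y < gray.rows; ++y) {
        int x = static_cast<int>(k * y + b);
        if (x >= 0 && x < gray.cols) s += gray.at<uchar>(y, x);
    }
    return s;
}

// Pick the brightest left/right pair whose separation matches lane width H.
bool selectLanePair(const cv::Mat& gray, std::vector<Cand> left,
                    std::vector<Cand> right, double H,
                    Cand& laneL, Cand& laneR) {
    for (auto& c : left)  c.graySum = lineGraySum(gray, c.k, c.b);
    for (auto& c : right) c.graySum = lineGraySum(gray, c.k, c.b);
    auto byGray = [](const Cand& a, const Cand& b) {
        return a.graySum > b.graySum;                 // Step 2: descending
    };
    std::sort(left.begin(), left.end(), byGray);
    std::sort(right.begin(), right.end(), byGray);
    for (const Cand& l : left)                        // Step 3: width check
        for (const Cand& r : right)
            if (std::abs(r.b - l.b - H) < 0.2 * H) {  // assumed tolerance
                laneL = l;
                laneR = r;
                return true;
            }
    return false;
}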

3.3. Stage III: Lane Tracking and Fitting

Through the deep lane feature extraction described in the last section, lane lines can be detected accurately. But the detected lane lines are straight and cannot adapt to all road scenes. Meanwhile, not all road images contain lane lines; in some real road environments, the lane lines may be worn or missing. Thus, lane tracking and fitting are essential. This stage tracks the lanes and fits the lines.
ALGORITHM 2 Local gray value maximum cascaded spatial correlation filter algorithm.

Input: extended cluster lines
Output: lane lines
H: the width between two lanes
(k_i, b_i): the parameters of the left extended cluster line
  l_i: x = k_i · y + b_i (i = 1, ..., n); the gray value sum of l_i
  is S_i (S_1 > S_2 > ... > S_n)
(k_j, b_j): the parameters of the right extended cluster line
  l_j: x = k_j · y + b_j (j = 1, ..., m); the gray value sum of l_j
  is S_j (S_1 > S_2 > ... > S_m)
lane_l: left target lane
lane_r: right target lane
for i = 1 to n do
  for j = 1 to m do
    if |b_j - b_i| ≈ H then
      lane_l = l_i;
      lane_r = l_j;
      break;
    end if
  end for
end for


3.3.1. Lane Tracking In a real testing environment, lane line detection is easily affected by the surrounding conditions, such as rainy weather or missing lane lines. As a result, the lane lines cannot be accurately detected in every frame. Therefore, lane tracking is essential to reconstruct the whole lane line even when the lane lines cannot be detected in some frames [30]. Kalman filtering is widely used in lane tracking [31]. In most lane tracking research, Kalman filtering is used to track the parameters (ρ, θ) [32] generated by the Hough transform. In this paper, we apply Kalman filtering to track the x-coordinates (x1, x2, x3, x4) of the four endpoints (p1, p2, p3, p4) shown in Figure 8(c).

In our tracking system, the state vector X_k and the observation vector Z_k are represented as:

X_k = [x1, x2, x3, x4, x1', x2', x3', x4'] Eq. (5)

Z_k = [x1, x2, x3, x4] Eq. (6)

where x1', x2', x3', x4' represent the first derivatives of x1, x2, x3, x4, respectively. The system state transition matrix A is defined as:

A = [I_4 I_4; 0 I_4] (8 × 8) Eq. (7)

where I_4 denotes the 4 × 4 identity matrix and the frame interval is taken as the unit time step.

The observation matrix H of the system is:

H = [I_4 0] (4 × 8) Eq. (8)

The prediction equation of the system is:

X_k = A · X_(k-1) + B · U_k Eq. (9)

where X_k is the predicted value of the system at time k, X_(k-1) is the optimal estimate of the system at time k-1, B is the control matrix of the system, and U_k is the control value at time k (0 in this research).
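
A minimal cv::KalmanFilter configuration matching Eqs. (5)-(9) might look as follows; the noise covariances are assumed values, since the paper does not report them.

#include <opencv2/opencv.hpp>

// 8-D state (four endpoint x-coordinates and their derivatives), 4-D
// measurement, no control input (U_k = 0).
cv::KalmanFilter makeLaneTracker() {
    cv::KalmanFilter kf(8, 4, 0);
    // A = [I4 I4; 0 I4]: constant-velocity model with a unit time step.
    kf.transitionMatrix = cv::Mat::eye(8, 8, CV_32F);
    for (int i = 0; i < 4; ++i)
        kf.transitionMatrix.at<float>(i, i + 4) = 1.0f;
    // H = [I4 0]: only the four endpoint x-coordinates are observed.
    kf.measurementMatrix = cv::Mat::zeros(4, 8, CV_32F);
    for (int i = 0; i < 4; ++i)
        kf.measurementMatrix.at<float>(i, i) = 1.0f;
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));     // assumed
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1)); // assumed
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));
    return kf;
}

// Per frame: call kf.predict(); if the lanes were detected, call
// kf.correct(measurement) with measurement = (x1, x2, x3, x4).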

3.3.2. Line Fitting The lane line extraction stage extracts only straight lines. To further improve detection accuracy, we need to fit the points extracted near the straight lane lines to a specific curve. Previous research suggests fitting the lane lines to a straight line [10], a hyperbola [16], or a spline curve [32]. We apply a parabolic model to fit the lane lines:

x_l = a_l · y² + b_l · y + c_l Eq. (10)

x_r = a_r · y² + b_r · y + c_r Eq. (11)

where (a_l, b_l, c_l) and (a_r, b_r, c_r) are the parameters of the parallel parabolas x_l (left lane line) and x_r (right lane line), respectively.

Before fitting the curve, we need to select the control points. After Kalman filtering, the detected lane lines are still straight lines, as shown in Figure 9(a). Note that in the edge image, every lane line has two boundary lines. So we select a series of points along the detected straight lane lines and extend each point to the left and right by two pixels, as shown in Figure 9(b). Then we check the values of the four extended pixels around each selected point from left to right (white points in the edge image have value 1, and black points have value 0). If one extended pixel has value 1, we choose it as the control point; if more than one pixel has value 1, we use their central position as the control point. If no pixel has value 1, we choose the selected point on the straight line as the control point.

After identifying the control points, the next step is to fit them to the parabolic model and obtain the optimal coefficients. We apply RANSAC, one of the most widely used robust regression algorithms in computer vision [33-34], to fit the curve. The fitting procedure is as follows:

Step 1: Randomly select three control points from the left lane and compute the coefficients (a_l, b_l, c_l) to establish the left parabolic model.

Step 2: Calculate the distance from each control point to the left parabola. If the distance is less than the threshold T1, the control point is counted as an inner point.

Step 3: If the number of inner points exceeds the threshold T2, take the coefficients as a lane model candidate.

Step 4: Repeat the above three steps and choose the lane model candidate with the most inner points.

Lane model fitting for the right lane is similar to the left lane fitting. Figure 9(c) presents a fitting example.
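
For illustration, a self-contained RANSAC parabola fit along the lines of Steps 1-4 might look as follows; the iteration count, the thresholds T1 and T2, and the use of the horizontal residual as the point-to-curve distance are assumptions.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <random>
#include <vector>

// Fit x = a*y^2 + b*y + c to control points with RANSAC; returns (a, b, c),
// or all zeros if no candidate reaches T2 inliers. Requires >= 3 points.
cv::Vec3d ransacParabola(const std::vector<cv::Point2d>& pts,
                         double T1 = 2.0, int T2 = 30, int iters = 200) {
    std::mt19937 rng(42);                        // fixed seed: reproducible
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    cv::Vec3d best(0, 0, 0);
    int bestInliers = 0;
    for (int it = 0; it < iters; ++it) {
        // Step 1: three random control points define one parabola.
        const cv::Point2d& p0 = pts[pick(rng)];
        const cv::Point2d& p1 = pts[pick(rng)];
        const cv::Point2d& p2 = pts[pick(rng)];
        cv::Mat M = (cv::Mat_<double>(3, 3) <<
                     p0.y * p0.y, p0.y, 1.0,
                     p1.y * p1.y, p1.y, 1.0,
                     p2.y * p2.y, p2.y, 1.0);
        cv::Mat rhs = (cv::Mat_<double>(3, 1) << p0.x, p1.x, p2.x);
        cv::Mat c;
        if (!cv::solve(M, rhs, c)) continue;     // degenerate sample
        double a = c.at<double>(0), b = c.at<double>(1), cc = c.at<double>(2);
        // Step 2: count points whose horizontal residual is below T1.
        int inliers = 0;
        for (const auto& p : pts)
            if (std::abs(a * p.y * p.y + b * p.y + cc - p.x) < T1) ++inliers;
        // Steps 3-4: keep the candidate with the most inliers (>= T2).
        if (inliers >= T2 && inliers > bestInliers) {
            bestInliers = inliers;
            best = cv::Vec3d(a, b, cc);
        }
    }
    return best;
}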

4. Experiments

4.1. Dataset and Preparation

The proposed lane detection algorithm has been tested on an NVIDIA Jetson TX1 embedded platform. It achieves real-time processing at an average of 38 fps, showing potential for practical application. The whole system is coded in C++ using the open-source OpenCV library.

We collected large-scale traffic scene video frames from urban roads and highways at a resolution of 1280 × 720. The video frames include representative real scenarios with different lighting and traffic conditions on urban streets in Beijing. We divided the dataset into five clips. Clip 1 includes different kinds of lanes, e.g., clear lanes, dashed lanes, and worn-out lanes. Clip 2 covers complex illumination conditions, such as strong reflections and weak light. Clip 3 contains different traffic conditions, including light, medium, and heavy traffic. Clip 4 was recorded with various disturbances, such as shadows, street writings, and vehicles. Traffic in Clip 5 is complicated by different ego-vehicle behaviors, such as lane keeping and lane changing. Details of the test scenes are shown in Table 1.

4.2. Experimental Results

Two important indicators are used to evaluate our algorithm: the true positive rate (TPR) and the false positive rate (FPR). TPR is the proportion of correctly detected lanes among all target lanes in all frames, i.e., TPR = (correctly detected lanes) / (number of all target lanes); FPR is the proportion of falsely detected lanes among all target lanes in all frames, i.e., FPR = (falsely detected lanes) / (number of all target lanes). The algorithm ran at an average speed of 38 frames per second on the embedded platform, and a total of 33390 frames were tested. The detailed test results of our method are shown in Table 2.

To highlight the effects of our new improvements, i.e., ALC and GMSF, we processed the datasets without ALC and without GMSF, respectively. We further compare our method with Aly's [20] and Yi's [35] methods. The results are shown in Table 3.

Some examples of lane detection results using our method are shown in Figure 10. Our method works well in different scenarios, even under challenging circumstances including shadows, culverts, street writings, and passing vehicles.

But in some cases our algorithm does not perform well, such as when the lane lines are missing for a long time, as illustrated in Figure 11(a). This is because the adopted tracking method can only bridge errors for a short time; when the lane lines disappear for a long time, it does not work well. We plan to employ a prediction algorithm to improve this. Besides, when a lane line is obscured by vehicles for a long time, the detection results are affected, as shown in Figures 11(b) and 11(c). Furthermore, when obstacles very similar to lane lines appear, such as guide lines, as shown in Figure 11(d), our method falsely detects them as lane lines. Future research will consider combining vehicle detection and using the positions of vehicles to overcome this problem.

5. Conclusions and Discussion

In this paper, we propose a real-time lane detection method that contains three main stages: pre-processing, deep lane feature extraction, and lane fitting. In the pre-processing stage, an edge image is obtained from the bird's eye view of the road image generated by IPM. In the deep lane feature extraction stage, an advanced lane extraction method is proposed: LSD is first applied for fast line segment detection in the IPM image; an ALC algorithm is proposed to gather the adjacent line segments generated by the LSD method; and finally, a GMSF algorithm is used to extract the target lane lines among the multiple lines. In the lane fitting stage, Kalman filtering is used to improve the accuracy of the extraction result, followed by the RANSAC algorithm, which fits the extracted lane points to a parabolic model.

The experimental results indicate that our algorithm achieves real-time lane detection in various conditions, even those influenced by shadows, culverts, street writings, and passing vehicles. But some problems remain: when an obstacle is very similar to the lane lines, or when a lane line is missing for a long time, the lane detector is affected, and further work will be devoted to overcoming these problems. Besides, we will apply our algorithm to a lane departure warning (LDW) system.

Contact Information

(G.Y.)

Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China

yugz@buaa.edu.cn

(Z.W.)

Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China

wangzhangyu123@163.com

(Y.M.)

Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China

mayalong@buaa.edu.cn

(Y.W.)

Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China

ypwang@buaa.edu.cn

*Correspondence:

yugz@buaa.edu.cn

(G. Y.)

Tel.: (+86) 18601012574

Acknowledgments

This work is partially supported by the Key Research and Development Program of China under Grant #2016YFB0101001. The authors would also like to thank the anonymous reviewers for their insightful and constructive comments.

References

[1.] Lee, S., Kim, S.-W. et al., "Accurate Ego-Lane Recognition Utilizing Multiple Road Characteristics in a Bayesian Network Framework," 2015 IEEE Intelligent Vehicles Symposium (IV), COEX, Seoul, Korea, June 28-July 1, 2015.

[2.] Casapietra, E. and Weisswange, T.H., "Building a Probabilistic Grid-Based Road Representation from Direct and Indirect Visual Cues," 2015 IEEE Intelligent Vehicles Symposium (IV), COEX, Seoul, Korea, June 28-July 1, 2015.

[3.] Chandakkar, P.S., Wang, Y., and Li, B., "Improving Vision-Based Self-Positioning in Intelligent Transportation Systems via Integrated Lane and Vehicle Detection," 2015 IEEE Winter Conference on Applications of Computer Vision, Hawaii, 2015: 404-411.

[4.] Huval, B., Wang, T. et al., "An Empirical Evaluation of Deep Learning on Highway Driving," Computer Science, 2015.

[5.] Li, J., Mei, X., and Prokhorov, D., "Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene," IEEE Transactions on Neural Networks and Learning Systems 2016, 42(4): 401-415.

[6.] Jung, S., Youn, J. et al., "Efficient Lane Detection Based on Spatiotemporal Images," IEEE Transactions on Intelligent Transportation Systems 17(1), 2016: 289-295.

[7.] Filonenko, A. et al., "Real-Time Lane Marking Detection," IEEE International Conference on Cybernetics, 2015: 125-128.

[8.] Chiu, K.Y. and Lin, S.F., "Lane Detection Using Color-Based Segmentation," Proceedings of the 2005 IEEE Intelligent Vehicles Symposium, Las Vegas, NV, June 6-8, 2005: 706-711.

[9.] Cheng, H.-Y., Jeng, B.-S., Tseng, P.-T., and Fan, K.-C., "Lane Detection with Moving Vehicles in the Traffic Scenes," IEEE Transactions on Intelligent Transportation Systems 7(4), 2006: 571-582.

[10.] Gaikwad, V. and Lokhande, S., "Lane Departure Identification for Advanced Driver Assistance," IEEE Transactions on Intelligent Transportation Systems 16(2), 2015: 910-918.

[11.] Zhang, W., Song, X. et al., "Multi Scale Matched Filter-Based Lane Detection Scheme of the Constant False Alarm Rate for Driver Assistance," Proc IMechE Part D: J Automobile Engineering 229(6), 2015: 770-781.

[12.] Wang, Y., Dahnoun, N., and Achim, A., "A Novel System for Robust Lane Detection and Tracking," Signal Processing 92(2), 2012: 319-334.

[13.] Wang, Y., Teoh, E.K., and Shen, D., "Lane Detection and Tracking Using B-Snake," Image and Vision Computing 22(4), 2004: 269-280.

[14.] Mammeri, A. et al., "A Real-Time Lane Marking Localization, Tracking and Communication System," Computer Communications 73, 2016: 132-143.

[15.] Lee, J.W., "A Machine Vision System for Lane-Departure Detection," Computer Vision and Image Understanding 86(1), 2002: 52-78.

[16.] Chen, Q. and Wang, H., "A Real-Time Lane Detection Algorithm Based on a Hyperbola-Pair Model," Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, June 13-15, 2006: 510-515.

[17.] Du, X. and Tan, K.K., "Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization," IEEE Transactions on Image Processing 25(5), May 2016.

[18.] Tan, H., Zhou, Y. et al., "Improved River Flow and Random Sample Consensus for Curve Lane Detection," Advances in Mechanical Engineering 7(7), 2015: 1-12.

[19.] Shin, B.-S., Tao, J., and Klette, R., "A Super Particle Filter for Lane Detection," Pattern Recognition 48(11), 2015: 3333-3345.

[20.] Aly, M., "Real Time Detection of Lane Markers in Urban Streets," 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 4-6, 2008.

[21.] Son, J., Yoo, H. et al., "Real-Time Illumination Invariant Lane Detection for Lane Departure Warning System," Expert Systems with Applications 42, 2015: 1816-1824.

[22.] Htet, K.K.K., "Comprehensive Lane Keeping System with Mono Camera," 2015 10th Asian Control Conference (ASCC).

[23.] Li, X., Wu, Q. et al., "Lane Detection Based on Spiking Neural Network and Hough Transform," 2015 8th International Congress on Image and Signal Processing, 2015: 626-630.

[24.] Guo, J., Wei, Z., and Miao, D., "Lane Detection Method Based on Improved RANSAC Algorithm," 2015 IEEE Twelfth International Symposium on Autonomous Decentralized Systems, 2015: 285-288.

[25.] Nan, Z., Wei, P. et al., "Efficient Lane Boundary Detection with Spatial-Temporal Knowledge Filtering," Sensors 16(8), 2016: 1276.

[26.] Grompone von Gioi, R. et al., "LSD: A Fast Line Segment Detector with a False Detection Control," IEEE Transactions on Pattern Analysis and Machine Intelligence 32(4), April 2010.

[27.] MacQueen, J., "Some Methods for Classification and Analysis of Multivariate Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, 1967: 281-297.

[28.] Wang, Y., Li, W., and Gao, R., "An Improved K-Means Clustering Algorithm," IEEE International Conference on Information Management and Engineering, 2011, 10(1): 44-46.

[29.] Niu, Y. and Wang, X., "On the k-Nearest Neighbor Classifier with Locally Structural Consistency," Lecture Notes in Electrical Engineering 271, 2014: 269-277.

[30.] Tapia-Espinoza, R. and Torres-Torriti, M., "A Comparison of Gradient versus Color and Texture Analysis for Lane Detection and Tracking," 2009 6th Latin American Robotics Symposium (LARS), IEEE, 2009: 1-6.

[31.] Mammeri, A., Boukerche, A. et al., "Lane Detection and Tracking System Based on the MSER Algorithm, Hough Transform and Kalman Filter," Proceedings of the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, New York, 2014.

[32.] Wang, Y., Shen, D., and Teoh, E.K., "Lane Detection Using Spline Model," Pattern Recognition Letters 21(8), 2000: 677-689.

[33.] Kim, Z., "Robust Lane Detection and Tracking in Challenging Scenarios," IEEE Transactions on Intelligent Transportation Systems 9(1), 2008: 16-26.

[34.] Borkar, A., Hayes, M., and Smith, M.T., "Robust Lane Detection and Tracking with RANSAC and Kalman Filter," Proc. 16th IEEE ICIP, Nov. 2009: 3261-3264.

[35.] Yi, S.-C., Chen, Y.-C., and Chang, C.-H., "A Lane Detection Approach Based on Intelligent Vision," Computers and Electrical Engineering 42, 2015: 23-29.

Guizhen Yu, Beihang University

Zhangyu Wang, Xinkai Wu, Yalong Ma, and Yunpeng Wang, Beihang University

History

Received: 12 Jan 2018

Published: 23 Sep 2017

e-Available: 23 Sep 2017

Keywords

Lane detection, Inverse perspective mapping, Line segment detector, Cluster, RANSAC

Citation

Yu, G., Wang, Z., Wu, X., Ma, Y. et al., "Efficient Lane Detection Using Deep Lane Feature Extraction Method," SAE Int. J. Passeng. Cars - Electron. Electr. Syst. 11(1):2018, doi:10.4271/07-11-01-0006.

doi:10.4271/07-11-01-0006
TABLE 1 Detailed information of the video clips.

        Traffic scenes                      Frame number  Frame rate
Clip 1  Different kinds of lanes            7823          30 fps
Clip 2  Different illumination conditions   9196          30 fps
Clip 3  Different traffic conditions        8273          30 fps
Clip 4  Different kinds of disturbances     4187          30 fps
Clip 5  Different ego-vehicle behaviors     3911          30 fps

TABLE 2 The results of our method.

        Total frames  TPR    FPR   Frame rate (fps)
Clip 1   7823         98.8%  1.1%  36
Clip 2   9196         96.3%  3.1%  40
Clip 3   8273         99.2%  0.6%  38
Clip 4   4187         96.1%  3.9%  39
Clip 5   3911         99.6%  0.3%  37
Total   33390         98.0%  1.8%  38

TABLE 3 Comparison of lane detection algorithms.

        Without ALC     Without GMSF    Aly's method    Yi's method     Proposed method
        TPR     FPR     TPR     FPR     TPR     FPR     TPR     FPR     TPR    FPR

Clip 1  87.2%   12.8%   79.1%   20.9%   96.4%    1.6%   83.6%   16.4%   98.8%  1.1%
Clip 2  74.3%   25.7%   67.5%   32.5%   81.3%   17.1%   85.2%   14.1%   96.3%  3.1%
Clip 3  85.6%   14.4%   73.5%   26.5%   89.2%    4.8%   85.4%   10.6%   99.2%  0.6%
Clip 4  87.2%   12.8%   81.4%   18.6%   91.6%    6.4%   87.7%    9.9%   96.1%  3.9%
Clip 5  79.6%   20.4%   74.4%   25.6%   91.2%    6.8%   87.4%   10.6%   99.6%  0.3%
Total   82.78%  17.22%  75.18%  24.82%  89.94%   7.34%  85.86%  12.32%  98.0%  1.8%
