Lane Detection Based on Connection of Various Feature Extraction Methods.
With the rapid development of society, automobiles have become one of the main means of transportation. Roads are narrow, and vehicles of all kinds are increasingly numerous; as more vehicles take to the road, the number of victims of car accidents rises every year. How to drive safely under the condition of numerous vehicles and narrow roads has therefore become a focus of attention. Advanced driver assistance systems, which include lane departure warning (LDW), lane keeping assist, and adaptive cruise control (ACC), can help people analyse the current driving environment and provide appropriate feedback for safe driving, or alert the driver in dangerous circumstances. Such driver assistance systems are expected to become increasingly capable. However, the bottleneck in their development is that the road traffic environment is difficult to predict. Investigation shows that in complex traffic environments where vehicles are numerous and speeds are high, the probability of accidents is much greater than usual. In such situations, road colour and texture as well as road boundaries and lane markings are the main perceptual cues of human driving.
Lane detection is a hot topic in the fields of machine learning and computer vision and has been applied in intelligent vehicle systems. A lane detection system locates lane markers in a complex environment and uses them to reliably estimate the vehicle's position and trajectory relative to the lane. At the same time, lane detection plays an important role in the lane departure warning system. The lane detection task is mainly divided into two steps: edge detection and line detection.
Lin et al. proposed an extended edge-linking algorithm with directional edge-gap closing, which recovers edges that would otherwise be missed. Mu and Ma proposed a Sobel edge operator that can be applied to an adaptive region of interest (ROI). However, some false edges remain after edge detection, and these errors affect the subsequent lane detection. Wang et al. proposed a Canny edge detection algorithm for feature extraction; the algorithm fits lane lines accurately and adapts to complicated road environments. In 2014, Srivastava et al. showed that improvements to Canny edge detection can effectively deal with various kinds of noise in the road environment. The Sobel and Canny edge operators are the most commonly used and effective methods for edge detection.
Line detection is as important as edge detection in lane detection. For line detection there are two families of methods: feature-based methods and model-based methods. Niu et al. used a modified Hough transform to extract segments of the lane profile and used the DBSCAN (density-based spatial clustering of applications with noise) algorithm for clustering. In 2016, Mammeri et al. used the progressive probabilistic Hough transform combined with maximally stable extremal regions (MSER) to identify and detect lane lines and used a Kalman filter for continuous tracking. However, that algorithm does not work well at night.
In this paper, we propose a lane detection method that is suitable for all kinds of complex traffic situations, especially high-speed driving. First, we preprocess each frame and then select a region of interest (ROI) in the processed image. Finally, edge detection and line detection are needed only within the ROI. This study introduces a new preprocessing method and a new ROI selection method. In the preprocessing stage, we convert the RGB colour model to the HSV colour space and extract white features in the HSV model; preliminary edge feature detection is also added in this stage, and then the lower part of the image is selected as the ROI based on the proposed preprocessing. By contrast, existing preprocessing methods only perform operations such as graying, blurring, X-gradient, Y-gradient, global gradient, thresholding, and morphological closing, and their ways of selecting the ROI vary widely: some select the ROI based on the edge features of the lane, and some on its colour features. These existing methods do not provide accurate and fast lane information, which increases the difficulty of lane detection. Our experiments show that the proposed method is significantly better than the existing preprocessing and ROI selection methods for lane detection.
2. Overview of the Proposed System
This paper presents an advanced lane detection technique to improve the efficiency and accuracy of real-time lane detection. The lane detection module is usually divided into two steps: (1) image preprocessing and (2) establishing and matching the lane line detection model.
Figure 1 shows the overall diagram of our proposed system, in which the lane detection blocks are the main contributions of this paper. The first step is to read the frames in the video stream. The second step is the image preprocessing module; unlike other approaches, in the preprocessing stage we not only process the image itself but also perform colour feature extraction and edge feature extraction. To reduce the influence of noise during motion and tracking, after extracting the colour features we smooth the image with a Gaussian filter. The image is then binarized by thresholding and closed morphologically. These are the preprocessing steps proposed in this paper.
Next, we select the adaptive region of interest (ROI) in the preprocessed image. The last step is lane detection: first, the Canny operator is used to detect lane line edges; then the Hough transform is used to detect straight lane lines. Finally, we use the Extended Kalman Filter (EKF) to detect and track the lane lines in real time.
3. Proposed Methods
In this paper, building on the preprocessing described above, we first extract colour features based on the white colour and then extract edge features based on the straight lane. Because high-speed sections are where traffic accidents are most likely, and such sections consist mostly of straight lanes, we apply colour detection and edge detection to the lane in succession in order to obtain a very high recognition rate. This paper combines colour feature extraction with edge feature extraction, and the experiments show that the recognition rate and accuracy of lane detection are greatly improved.
Our main contribution in this paper is the work done in the preprocessing stage. We propose performing an HSV colour transform in the preprocessing stage, then extracting white, and then performing the conventional preprocessing operations in sequence. We also adopt an improved method of selecting the region of interest (ROI): based on the proposed preprocessing (HSV colour transform, white feature extraction, and basic preprocessing), the lower half of the processed image is selected as the ROI. In addition, we perform edge detection twice: first in the preprocessing stage, and again in the lane detection stage after the ROI is selected. Performing edge detection twice enhances the lane recognition rate.
In this paper, the Hough transform is used for straight line detection. Figure 2 shows its basic principle. Each point on the straight line through the points (x_a, y_a) and (x_b, y_b) in Figure 2(a) corresponds, after the Hough transform, to the straight lines v = -x_a u + y_a and v = -x_b u + y_b in the parameter space of Figure 2(b); the two lines intersect at the point (u_0, v_0), where u_0 and v_0 are the parameters of the line determined by the points (x_a, y_a) and (x_b, y_b) in Figure 2(a). Conversely, the intersection point of the lines v = -x_a u + y_a and v = -x_b u + y_b in the parameter space of Figure 2(b) corresponds to the collinear points in Figure 2(a). By this property, given some specific points in Figure 2(a), the equations of the lines connecting those points can be computed from the intersections in Figure 2(b) via the Hough transform.
The Hough transform is implemented in polar form as

ρ = x cos(θ) + y sin(θ), (1)

where (x, y) are the coordinates of nonzero pixels in the binary image, ρ is the perpendicular distance from the origin to the fitted line, and θ is the angle between the x-axis and the line's normal, with θ ranging over ±90°.
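The voting procedure behind (1) can be sketched in a few lines: each nonzero pixel votes for every (ρ, θ) cell it could lie on, and collinear pixels concentrate their votes in one accumulator cell. This is a minimal illustrative sketch, not the OpenCV implementation used later in the paper.

```python
import math

def hough_peak(points, rho_res=1.0):
    """Vote each nonzero pixel (x, y) into a (rho, theta) accumulator using
    rho = x*cos(theta) + y*sin(theta), theta in [-90, 90) degrees, and
    return the accumulator cell with the most votes."""
    acc = {}
    for x, y in points:
        for theta_deg in range(-90, 90):
            theta = math.radians(theta_deg)
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_res), theta_deg)
            acc[key] = acc.get(key, 0) + 1
    (rho_bin, theta_deg), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho_bin * rho_res, theta_deg, votes

# Ten collinear points on the line y = x; its normal makes -45 degrees
# with the x-axis and the line passes through the origin (rho = 0).
points = [(i, i) for i in range(10)]
rho, theta, votes = hough_peak(points)
```

All ten points vote into the same cell, so the peak receives ten votes at ρ = 0 with θ near -45°; scattered noise points, by contrast, never accumulate a comparable peak, which is why the transform is robust to noise.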
As shown in Figure 3, the Hough transform maps the points of the image in Figure 3(a) into the polar parameter space of Figure 3(b). The collinear points (x_a, y_a) and (x_b, y_b) in Figure 3(a) map to curves that intersect at the same point (ρ_0, θ_0) in Figure 3(b), where ρ_0 and θ_0 are the polar parameters of the desired straight line.
Different from Figure 2(b), when the parameter space is expressed in polar coordinates as in Figure 3(b), the collinear points (x_a, y_a) and (x_b, y_b) in the original image map to sinusoidal curves that intersect at the point (ρ_0, θ_0).
The Kalman filtering algorithm is used to track lane lines in real time; in this paper we use the Extended Kalman Filter (EKF). After the parameters ρ and θ of the straight-line lane model are obtained from the Hough transform of Figures 2 and 3, the lane line can be tracked with the EKF. The EKF tracking algorithm is described in Table 1. The initial parameter estimate x̂_0 and the initial covariance P_0 are set to the identity matrix, and the predicted value of the current state is the tracking result of the previous state. The measured value of the current state comes from the currently read frame of the sequence, from which the tracking value of the current state (the optimal estimate) is obtained. This value also serves as the prediction for the next state, realizing cyclic estimation, that is, tracking, of the lane parameters. Table 1 shows the Extended Kalman Filter algorithm module.
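For a lane whose parameters change slowly between frames, the prediction/correction cycle of Table 1 can be illustrated with a scalar filter on a single parameter such as θ. With a constant (identity) state model the EKF reduces to the ordinary Kalman filter, so this is a sketch of the cycle rather than the paper's exact formulation; the noise variances q and r are assumed values.

```python
def kalman_track(measurements, q=1e-3, r=0.1):
    """Track one scalar lane parameter (e.g. theta) across frames.

    Constant-state model x_k = x_{k-1} + process noise, so the EKF
    collapses to a plain Kalman filter: predict, then correct with the
    new Hough measurement, then carry the estimate to the next frame.
    """
    x, p = measurements[0], 1.0        # initial estimate and covariance
    track = [x]
    for z in measurements[1:]:
        # Prediction: previous estimate carries over, uncertainty grows
        x_pred, p_pred = x, p + q
        # Correction: blend prediction with the new measurement
        g = p_pred / (p_pred + r)      # Kalman gain
        x = x_pred + g * (z - x_pred)
        p = (1 - g) * p_pred
        track.append(x)
    return track

# Noisy per-frame Hough measurements of a lane angle near 45 degrees:
zs = [45.0, 45.4, 44.7, 45.2, 44.9, 45.1]
smoothed = kalman_track(zs)
```

The smoothed sequence stays close to 45° while damping the frame-to-frame jitter, which is exactly the role the EKF plays for the (ρ, θ) pair in the tracking loop.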
As shown in Figure 4, before inputting the image frames we make preparations such as computing the average dimension of the lane parameters and setting the EKF parameters. Each input frame is processed by the Canny edge operator to obtain an edge image. We then add the parameters ρ and θ of the straight-line lane model obtained by the Hough transform and check whether the lane parameter dimensions detected by the Hough transform are the same across the input frames. If they are equal, the EKF is used for lane tracking; otherwise, a dimension addition and subtraction module adjusts the parameter dimension.
4.1. Preprocessing. Preprocessing is an important part of image processing and of lane detection. It helps reduce the complexity of the algorithm, thereby reducing subsequent processing time. The video input is an RGB colour image sequence obtained from the camera. To improve the accuracy of lane detection, many researchers employ different image preprocessing techniques.
Smoothing and filtering are common image preprocessing techniques. The main purpose of filtering is to eliminate image noise and enhance the image. For 2D images, low-pass or high-pass filtering can be applied: low-pass filtering (LPF) is useful for denoising and blurring, while high-pass filtering (HPF) is used to find image boundaries [24-26]. For smoothing, an average, median, or Gaussian filter can be used. To preserve detail while removing unwanted noise, Xu and Li first filter the image with a median filter and then use an image histogram to enhance the grayscale image.
4.2. Colour Transform. Colour model transformation is an important part of machine vision and an indispensable part of the lane detection in this paper. The actual road traffic environment and light intensity produce noise that interferes with colour identification, making it hard to separate white lines, yellow lines, and vehicles from the background. The RGB colour space used in the video stream is extremely sensitive to light intensity, and its handling of light at different times of day is not ideal. In this paper, the RGB frames in the video stream are colour-converted into HSV colour space images. Figures 5(a) and 5(b) are images in the RGB and HSV colour spaces, respectively. HSV represents hue, saturation, and value. As can be seen in Figure 6, white and yellow are very bright in the V-component compared with other colours and are easily extracted, providing a good basis for the subsequent colour extraction. Experiments show that colour processing in the HSV space is more robust for detecting specific targets.
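The observation that white and yellow stand out in the V-component can be checked with Python's standard colorsys module; this is a quick per-pixel illustration, whereas the pipeline itself operates on whole frames.

```python
import colorsys

def v_component(r, g, b):
    """HSV value (V) channel of an 8-bit RGB colour, scaled to [0, 1]."""
    _, _, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return v

# Lane-marking colours versus a dark road surface:
v_white = v_component(255, 255, 255)    # white marking
v_yellow = v_component(255, 255, 0)     # yellow marking
v_asphalt = v_component(60, 60, 60)     # asphalt
```

Both markings reach the maximum value V = 1.0 while asphalt stays low, so a simple threshold on V already separates the markings from the road.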
4.3. Basic Preprocessing. As shown in Figure 7, a large number of frames in the video are preprocessed. Each image is grayscaled, blurred, has its X-gradient, Y-gradient, and global gradient calculated, is thresholded, and is then morphologically closed. To cater for different lighting conditions, an adaptive threshold is used during the preprocessing phase. We then remove the spots in the binarized image and perform the morphological closing operation. As Figure 7 shows, basic preprocessing alone does not remove noise well: although preliminary lane information can be obtained after the morphological closing, a large amount of noise remains.
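The morphological closing at the end of this pipeline (dilation followed by erosion) fills small gaps in binary lane edges while leaving larger gaps alone. A minimal list-of-lists sketch with a 3x3 structuring element, assuming a binary image of 0/1 values:

```python
def close_binary(img):
    """Morphological closing (dilation then erosion) with a 3x3
    structuring element on a list-of-lists binary image."""
    h, w = len(img), len(img[0])

    def neighborhood(src, y, x):
        # 3x3 window around (y, x), clipped at the image border
        return [src[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]

    dilated = [[max(neighborhood(img, y, x)) for x in range(w)]
               for y in range(h)]
    closed = [[min(neighborhood(dilated, y, x)) for x in range(w)]
              for y in range(h)]
    return closed

# A lane edge with a one-pixel gap is repaired by closing,
# while a wider gap survives:
repaired = close_binary([[1, 1, 0, 1, 1]])
wide_gap = close_binary([[1, 1, 0, 0, 0, 1, 1]])
```

This is why closing is the last basic preprocessing step: it reconnects lane fragments broken by thresholding without merging distant structures.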
4.4. Adding Colour Extraction in Preprocessing. To improve the accuracy of lane detection, we add a feature extraction module in the preprocessing stage. The purpose of feature extraction is to keep any features that may belong to a lane and remove features that are likely nonlane. This paper mainly extracts colour features: after graying the image and converting the colour model, we add white feature extraction and then carry out the conventional preprocessing operations in turn. The proposed colour extraction process is shown in Figure 8.
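The white feature extraction step can be sketched as a per-pixel HSV threshold: white pixels have low saturation and high value. The threshold values below are illustrative assumptions, not the values used in the paper.

```python
import colorsys

def white_mask(pixels, s_max=0.25, v_min=0.75):
    """Keep pixels whose HSV saturation is low and value is high - a
    simple white-feature extractor over a flat list of RGB pixels.
    Thresholds s_max and v_min are assumed, illustrative values."""
    mask = []
    for r, g, b in pixels:
        _, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(s <= s_max and v >= v_min)
    return mask

pixels = [(250, 250, 250),   # white lane marking: low S, high V -> kept
          (255, 255, 0),     # yellow marking: saturated -> rejected
          (70, 70, 70)]      # asphalt: dark -> rejected
mask = white_mask(pixels)
```

Applying such a mask before blurring and thresholding is what removes most nonlane pixels early, which is the core of the proposed preprocessing.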
4.5. Adding Edge Detection in Preprocessing. Edge detection is carried out twice in succession: the first pass performs a wide-range edge extraction over the entire frame, and the second pass runs again after ROI selection, during lane detection, which further improves the accuracy of lane detection. This section describes the overall edge detection on the frame image, which uses an improved Canny edge detection algorithm. The concrete steps of Canny edge detection are as follows. First, a Gaussian filter smooths the (preprocessed) image; then the Sobel operator computes the gradient magnitude and direction. The next step is nonmaximum suppression of the gradient magnitude. Finally, a double-threshold algorithm detects and connects the edges. Figure 9 shows the image after Canny edge detection.
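The gradient step of this pipeline (Sobel magnitude and direction) can be sketched as follows; a full Canny implementation adds Gaussian smoothing, nonmaximum suppression, and double thresholding on top of it.

```python
import math

def sobel_gradient(img, y, x):
    """Gradient magnitude and direction (degrees) at interior pixel
    (y, x) using the 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative
    gx = sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(ky[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
mag, angle = sobel_gradient(img, 1, 1)
```

On the vertical edge the horizontal derivative dominates (gy = 0), so the gradient points along the x-axis; nonmaximum suppression then thins the response to a one-pixel edge perpendicular to that direction.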
4.6. ROI Selection. After Canny edge detection, the obtained edges include not only the required lane line edges but also unnecessary lanes and the edges of the surrounding fences. These extra edges are removed by defining a polygonal visual area and keeping only the edge information inside it. The rationale is that the camera is fixed relative to the car, and the car's position relative to the lane is also roughly fixed, so the lane stays within a basically fixed area of the camera image.
To lower image redundancy and reduce algorithm complexity, we set an adaptive region of interest (ROI) on the image and process the input image only within the ROI; this increases the speed and accuracy of the system. In this paper, we use the standard KITTI road database. We divide each frame of the driving video into two parts, and the lower half of the frame serves as the ROI. Figure 10 shows the ROI selection for sample frames (a), (b), (c), and (d) processed by the proposed preprocessing. After the proposed preprocessing, these four sample frames already display the lane information clearly, but the upper half of each image contains a great deal of nonlane noise in addition to the lane information. We therefore crop the lower half of the image as the ROI.
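Selecting the lower half of the frame as the ROI is a one-line crop on a row-major image; a sketch, assuming the frame is a list of pixel rows:

```python
def roi_lower_half(frame):
    """Keep the lower half of a frame (a list of rows) as the region of
    interest - lane markings appear in the bottom half of a
    forward-facing camera image."""
    return frame[len(frame) // 2:]

# A toy 4-row frame: rows 0-1 are sky/background, rows 2-3 are road.
frame = [["sky"] * 3, ["sky"] * 3, ["road"] * 3, ["road"] * 3]
roi = roi_lower_half(frame)
```

All subsequent edge and line detection then runs on `roi` only, roughly halving the pixels each later stage must process.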
4.7. Lane Detection. The lane detection module is mainly divided into lane edge detection and linear lane detection. This section implements the basic functions of lane detection and performs lane detection based on improved preprocessing and the proposed ROI selection.
4.8. Edge Detection. Feature extraction is very important for lane detection. Common methods for edge detection include the Canny, Sobel, and Laplacian operators [18, 24]. We selected the Canny operator, which gives the best results. As shown in Figure 11, we performed Canny edge detection after the proposed ROI selection.
4.9. Lane Detection. Lane detection methods include feature-based methods and model-based methods. A feature-based method is used in this paper to detect the colour and edge features of lanes in order to improve the accuracy and efficiency of lane detection.
There are two ways to implement straight lane detection. One is to use the Hough line detection function provided by the OpenCV image processing library and draw lane lines in the corresponding area of the original image. The other is to program it directly, traversing the ROI area to perform line detection over a specific range of angles.
Both methods work on video, but the first runs faster. Since this paper focuses on the accuracy and efficiency of lane detection, we chose the first method (the Hough line function in the OpenCV library). Moreover, because the Hough transform is insensitive to noise and handles straight lines well, it is used to extract lane line parameters from each frame of the image sequence.
In image processing, the Hough transform can detect any shape that can be expressed by a mathematical formula, even if the shape is broken or somewhat distorted, and it is more robust to noise than other methods. The classic Hough transform is often used to detect lines, circles, ellipses, and so on. Figure 12 shows lane detection using the Hough transform on sample frames (a), (b), (c), and (d).
4.10. Lane Tracking Using Extended Kalman Filter. After completing the lane detection, the next step is lane tracking, which is also a key technology for smart and automated vehicles (SAV).
Edge detection and straight lane detection are used to detect the lane; the EKF then tracks the resulting parameters one by one. In this way, the tracking of lane lines is converted into the tracking of lane line parameters, which not only improves tracking speed but also, through Kalman tracking, improves tracking accuracy.
The experimental results are shown in Figures 13 and 14. The real-time tracking lane line is detected in the video stream. Figure 13 shows different results of lane detection at different times (i), (ii), (iii), and (iv) in one video. Figure 14 shows different results of lane detection at different times (i), (ii), (iii), and (iv) in another video.
5. Results and Discussion
Figure 15 shows the preprocessing of four frames. Frames (a.i) and (b.i) are processed by the basic preprocessing (without white feature extraction), while frames (a.ii) and (b.ii) are processed by the proposed preprocessing. We can see that frames (a.ii) and (b.ii) display the lane lines clearly, whereas frames (a.i) and (b.i) retain a large amount of white residue that makes the lane lines difficult to detect. The basic preprocessing therefore does not work well for lane detection. In view of this, we add HSV colour conversion in the preprocessing stage and extract the white features of the frame before blurring, achieving a better detection effect and improved detection accuracy.
As shown in Figure 16, the white features of frames (a) and (b) are extracted, respectively.
Most researchers perform ROI selection directly on the original image. In this paper, a new ROI selection method is proposed, and experiments show that it improves the accuracy and efficiency of lane detection.
Figures 17 and 18 show ROI selection based on the white feature alone. As the figures show, this approach cannot accurately localize the lane line area and ultimately produces a large error.
In the proposed method, the lower half of each input frame is selected as the ROI. As shown in Figure 19, when ROI selection is instead performed on the original image and followed by edge detection and lane detection, the final result contains many nonlane areas compared with Figure 12, and the lane detection effect is poor. The more lane parameters are marked, the less efficient the computation becomes. The proposed method lowers the number of lane parameters, thereby reducing computation time and improving detection efficiency.
To quantify the accuracy of lane detection, we used the correct detection rate to evaluate the performance of the proposed method on the data set. We first resized all images in the data set to the same size and randomly took 300, 500, 800, 1000, and 1500 images as test sets. To verify the effectiveness of the proposed method, Figure 20 compares the detection efficiency of the basic preprocessing with that of the proposed preprocessing for lane detection. Moreover, Figure 21 compares the lane detection efficiency of ROI selection based only on lane colour with that of the proposed ROI selection. Figures 20 and 21 show that the proposed method achieves the highest correct detection rate, demonstrating its effectiveness.
6. Conclusions
In this paper, we proposed new lane detection preprocessing and ROI selection methods and used them to design a lane detection system. The main idea is to add white extraction before the conventional basic preprocessing; edge extraction is also added in the preprocessing stage to improve lane detection accuracy. We also placed the ROI selection after the proposed preprocessing: compared with selecting the ROI in the original image, this reduces the nonlane parameters and improves the accuracy of lane detection. Currently, we use only the Hough transform to detect straight lanes and the EKF to track them. In the future, we will develop a more advanced lane detection approach to improve performance.
Data Availability
The proposed lane detection data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This paper was also supported by the project of Local Colleges and Universities Capacity Construction of Science and Technology Commission in Shanghai (no. 15590501300).
D. Pomerleau, "RALPH: rapidly adapting lateral position handler," in Proceedings of the Intelligent Vehicles '95 Symposium, pp. 506-511, Detroit, MI, USA, 1995.
 J. Navarro, J. Deniel, E. Yousfi, C. Jallais, M. Bueno, and A. Fort, "Influence of lane departure warnings onset and reliability on car drivers' behaviors," Applied Ergonomics, vol. 59, pp. 123-131, 2017.
P. N. Bhujbal and S. P. Narote, "Lane departure warning system based on Hough transform and Euclidean distance," in Proceedings of the 3rd International Conference on Image Information Processing, ICIIP 2015, pp. 370-373, India, December 2015.
 V. Gaikwad and S. Lokhande, "Lane Departure Identification for Advanced Driver Assistance," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 910-918, 2015.
 H. Zhu, K.-V. Yuen, L. Mihaylova, and H. Leung, "Overview of Environment Perception for Intelligent Vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2584-2601, 2017.
 F. Yuan, Z. Fang, S. Wu, Y. Yang, and Y. Fang, "Real-time image smoke detection using staircase searching-based dual threshold AdaBoost and dynamic analysis," IET Image Processing, vol. 9, no. 10, pp. 849-856, 2015.
P.-C. Wu, C.-Y. Chang, and C.-H. Lin, "Lane-mark extraction for automobiles under complex conditions," Pattern Recognition, vol. 47, no. 8, pp. 2756-2767, 2014.
 M.-C. Chuang, J.-N. Hwang, and K. Williams, "A feature learning and object recognition framework for underwater fish images," IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1862-1872, 2016.
 Y. Saito, M. Itoh, and T. Inagaki, "Driver Assistance System with a Dual Control Scheme: Effectiveness of Identifying Driver Drowsiness and Preventing Lane Departure Accidents," IEEE Transactions on Human-Machine Systems, vol. 46, no. 5, pp. 660-671, 2016.
 Q. Lin, Y. Han, and H. Hahn, "Real-Time Lane Departure Detection Based on Extended Edge-Linking Algorithm," in Proceedings of the 2010 Second International Conference on Computer Research and Development, pp. 725-730, Kuala Lumpur, Malaysia, May 2010.
 C. Mu and X. Ma, "Lane detection based on object segmentation and piecewise fitting," TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12, no. 5, pp. 3491-3500, 2014.
J.-G. Wang, C.-J. Lin, and S.-M. Chen, "Applying fuzzy method to vision-based lane detection and departure warning system," Expert Systems with Applications, vol. 37, no. 1, pp. 113-126, 2010.
 S. Srivastava, M. Lumb, and R. Singal, "Improved lane detection using hybrid median filter and modified hough transform," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 4, no. 1, pp. 30-37, 2014.
 J. Piao and H. Shin, "Robust hypothesis generation method using binary blob analysis for multi-lane detection," IET Image Processing, vol. 11, no. 12, pp. 1210-1218, 2017.
 J. Niu, J. Lu, M. Xu, P. Lv, and X. Zhao, "Robust Lane Detection using Two-stage Feature Extraction with Curve Fitting," Pattern Recognition, vol. 59, pp. 225-233, 2015.
 J. Son, H. Yoo, S. Kim, and K. Sohn, "Real-time illumination invariant lane detection for lane departure warning system," Expert Systems with Applications, vol. 42, no. 4, pp. 1816-1824, 2015.
 A. Mammeri, A. Boukerche, and Z. Tang, "A real-time lane marking localization, tracking and communication system," Computer Communications, vol. 73, pp. 132-143, 2016.
C. J. Chen, B. Wu, W. H. Lin, C. C. Kao, and Y. H. Chen, "Mobile lane departure warning system," in Proceedings of the 2009 IEEE 13th International Symposium on Consumer Electronics, pp. 1-5, 2009.
 J. W. Lee, C. D. Kee, and U. K. Yi, "A new approach for lane departure identification," in Proceedings of the IEEE IV2003 Intelligent Vehicles Symposium, pp. 100-105, 2003.
 J. W. Lee and U. K. Yi, "A lane-departure identification based on LBPE, Hough transform, and linear regression," Computer Vision and Image Understanding, vol. 99, no. 3, pp. 359-383, 2005.
H. Xu and H. Li, "Study on a robust approach of lane departure warning algorithm," in Proceedings of the IEEE International Conference on Signal Processing System (ICSPS), pp. 201-204, 2010.
 A. Borkar, M. Hayes, and M. T. Smith, "Robust lane detection and tracking with Ransac and Kalman filter," in Proceedings of the 2009 IEEE International Conference on Image Processing, ICIP 2009, pp. 3261-3264, November 2009.
 H. Xu and H. Li, "Study on a robust approach of lane departure warning algorithm," in Proceedings of the IEEE International Conference on Signal Processing System (ICSPS), pp. 201-204, 2010.
H. Chen and Z. Jin, "Research on Real-Time Lane Line Detection Technology Based on Machine Vision," in Proceedings of the 2010 International Symposium on Intelligence Information Processing and Trusted Computing (IPTC), pp. 528-531, Huanggang, China, October 2010.
 H. Aung and M. H. Zaw, "Video based lane departure warning system using hough transform," in Proceedings of the International Conference on Advances in Engineering and Technology, pp. 85-88, Singapore, 2010.
 J. Fritsch, T. Kuhnl, and A. Geiger, "A new performance measure and evaluation benchmark for road detection algorithms," in Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC '13), pp. 1693-1700, IEEE, The Hague, The Netherlands, October 2013.
 S.-C. Huang and B.-H. Chen, "Automatic moving object extraction through a real-world variable-bandwidth network for traffic monitoring systems," IEEE Transactions on Industrial Electronics, vol. 61, no. 4, pp. 2099-2112, 2014.
Mingfa Li, Yuanyuan Li, and Min Jiang
Department of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
Correspondence should be addressed to Yuanyuan Li; email@example.com
Received 31 May 2018; Accepted 31 July 2018; Published 7 August 2018
Academic Editor: Jenq-Neng Hwang
Caption: Figure 1: Block diagram of proposed methods.
Caption: Figure 2: Hough transform. (a) A line in a Cartesian coordinate system and (b) spatial parameters after Hough transformation.
Caption: Figure 3: Hough transform. (a) Cartesian coordinate system parameter and (b) polar coordinate system parameter.
Caption: Figure 4: EKF Algorithm.
Caption: Figure 5: Two images of different colour spaces. (a) RGB and (b) HSV colour transform.
Caption: Figure 6: V-component values of representative colours under various illumination.
Caption: Figure 7: Basic preprocessing of sample frames (a), (b), (c), and (d).
Caption: Figure 8: Adding white extraction in preprocessing of sample frames (a), (b), (c), and (d).
Caption: Figure 9: Canny edge detection of sample frames (a), (b), (c), and (d).
Caption: Figure 10: ROI selection of sample frames (a), (b), (c), and (d) based on the proposed preprocessing.
Caption: Figure 11: Canny edge detection of sample frames (a), (b), (c), and (d) after the proposed preprocessing.
Caption: Figure 12: Lane detection using Hough of sample frames (a), (b), (c), and (d).
Caption: Figure 13: Different moments (i), (ii), (iii), and (iv) in one video.
Caption: Figure 14: Different moments (i), (ii), (iii), and (iv) in another video.
Caption: Figure 15: Comparison between the basic preprocessing method and the proposed preprocessing method. ((a.i) and (b.i)) Without extracting white before blurry one (the basic preprocessing method) and (a.ii) and (b.ii) with extracting white before blurry one (the proposed preprocessing method).
Caption: Figure 16: White extraction of frames (a) and (b).
Caption: Figure 17: ROI selection of white of the sample frames (a) and (b).
Caption: Figure 18: ROI selection of white of the sample frames (c) and (d).
Caption: Figure 19: ROI selection and lane detection of sample frames (a), (b), (c), and (d).
Caption: Figure 20: Comparison of correct detection rates between basic preprocessing and this paper.
Caption: Figure 21: Comparison of correct detection rates between ROI selection only based on colour and this paper.
Table 1: Extended Kalman Filter algorithm module.
Initialization: [mathematical expression not reproducible]
Prediction:
(I) State prediction: [mathematical expression not reproducible]
(II) State prediction error covariance matrix: [mathematical expression not reproducible], where [mathematical expression not reproducible]
Error correction:
(I) Kalman gain: [mathematical expression not reproducible], where [mathematical expression not reproducible]
(II) State estimation: [mathematical expression not reproducible]
(III) State estimation error covariance matrix: P_k = (I - G_k H_k) P_{k|k-1}
Publication: Advances in Multimedia, 2018.