Article

Review on Lane Detection and Tracking Algorithms of Advanced Driver Assistance System

by Swapnil Waykole, Nirajan Shiwakoti * and Peter Stasinopoulos

School of Engineering, RMIT University, Melbourne, VIC 3000, Australia

* Author to whom correspondence should be addressed.
Sustainability 2021, 13(20), 11417; https://doi.org/10.3390/su132011417
Submission received: 1 September 2021 / Revised: 7 October 2021 / Accepted: 9 October 2021 / Published: 15 October 2021

Abstract:
Autonomous vehicles and advanced driver assistance systems are predicted to improve safety and reduce fuel and energy consumption and road traffic emissions. Lane detection and tracking are key features of the advanced driver assistance system. Lane detection is the process of detecting the lane markings (typically white lines) on the road. Lane tracking is the process of assisting the vehicle to remain in the desired path, and it controls the motion model by using previously detected lane markers. There are limited studies in the literature that provide state-of-the-art findings in this area. This study reviews previous studies on lane detection and tracking algorithms by performing a comparative qualitative analysis of the algorithms to identify gaps in knowledge. It also summarizes some of the key data sets used for testing algorithms and the metrics used to evaluate them. It is found that complex road geometries such as clothoid roads are less investigated, with many studies focused on straight roads. The complexity of lane detection and tracking is compounded by challenging weather conditions, vision (camera) quality, unclear lane markings and unpaved roads. Further, occlusion due to overtaking vehicles, high speeds and high illumination effects also pose a challenge. The majority of the studies have used custom data sets for model testing. As this field continues to grow, especially with the development of fully autonomous vehicles in the near future, it is expected that more reliable and robust lane detection and tracking algorithms will be developed and tested with real-time data sets.

1. Introduction

Autonomous passenger vehicles, also known as self-driving or driverless vehicles, are a direct implementation of transportation-related autonomous robotics research. Shakey the robot (1966–1972) is the first documented autonomous mobile robot [1]. It was developed by the Stanford Research Institute's Artificial Intelligence Centre and was capable of sensing its environment, reasoning, planning and navigating. In basic settings, vision-based lane tracking and obstacle avoidance sparked interest in autonomous vehicles [2]. In the early 1990s, the Royal Armament Research and Development Establishment in the United Kingdom created two vehicles for obstacle-free navigation on and off the road [3]. In the United States, the first operations of autonomous driving in realistic settings date back to Carnegie Mellon University's NavLab in the early 1990s [4]. The NavLab vehicle operated at very low speeds due to the limited computational power available at the time. Early US research projects also included the California PATH project, which developed the automated highway [5]. Vehicle steering was automated with manual longitudinal control in the "No Hands Across America" project [6]. In early 2000, CyberCars, one of several European projects, began developing technologies based on automated transport [7]. The announcement of the Defense Advanced Research Projects Agency (DARPA) Grand Challenge in 2003 generated research interest in autonomous cars. Following that, in 2006, the DARPA Urban Challenge was performed in a controlled situation with a variety of autonomous and human-operated vehicles. Since then, many manufacturers, including Audi, BMW, Bosch, Ford, GM, Lexus, Mercedes, Nissan, Tesla, Volkswagen, Volvo and Google, have launched self-driving vehicle projects in collaboration with universities [8]. Google's self-driving car has travelled 500 thousand kilometres in experiments, and the company has begun building prototypes of its own cars [9]. A completely autonomous vehicle would be expected to drive to a chosen location without any expectation of shared control with the driver, including safety-critical tasks.
The performance of lane detection and tracking depends on well-developed roads and their lane markings, so smart cities are also a prominent factor in autonomous vehicle research. The idea of a smart city is often linked with an eco-city or a sustainable city, both of which seek to enhance the quality of municipal services while lowering their costs. Smart cities' primary goal is to balance technological innovation with the economic, social and environmental problems that tomorrow's cities face. Greater closeness between government and people is required in smart cities that embrace the circular economy's concepts [10]. The way materials and goods flow around people and their demands will alter, as will the structure of cities. Several car manufacturers, such as Tesla and Audi, have already launched autonomous vehicle marketing for private use. Soon, society will be influenced by the spread of autonomous vehicles in urban transport systems [11]. The development of smart cities with the introduction of connected and autonomous vehicles could potentially transform cities and guide long-term urban planning [10].
Autonomous vehicles and Advanced Driver Assistance Systems (ADAS) are predicted to provide a higher degree of safety and reduce fuel and energy consumption and road traffic emissions. ADAS is implemented for safe and efficient driving and offers many driver assistance features, such as forward collision warning and safe lane change assistance [12]. Research shows that most accidents occur because of driver errors, and ADAS can reduce accidents and the workload of the driver. If there is a likelihood of an accident, ADAS can take the necessary action to avoid it [13]. Lane departure warning (LDW), which utilizes lane detection and tracking algorithms, is an essential feature of ADAS. The LDW warns the driver when a vehicle crosses lane lines unintentionally and controls the vehicle by bringing it back into the desired safe path. Three types of approaches for lane detection are usually discussed in the existing literature: the learning-based approach, the features-based approach and the model-based approach [13,14,15,16,17,18] (a detailed analysis is presented in Section 3.2). Many challenges and issues have been highlighted in the literature regarding LDW systems, such as changing visibility conditions, variation in images and lane appearance diversity [17]. Since different countries use different lane markers, developing lane detection and tracking that works universally remains a challenge.

1.1. Objectives and Scope of the Study

There are limited studies that provide state-of-the-art lane detection and tracking algorithms for ADAS. This review paper aims to comprehensively review the previous literature on lane detection and tracking for ADAS and identify gaps in knowledge for future research. The review compares different lane detection and tracking algorithms and analyses the different datasets used to verify the algorithms and the metrics used to evaluate them. Specifically, the review identifies and classifies the existing lane detection and tracking algorithms under three themes: features-based, learning-based and model-based, which provides a systematic approach towards understanding the key characteristics of lane detection and tracking algorithms in the literature. Some patented works by vehicle manufacturers under these three categories are also reviewed to acknowledge growing commercialisation interests in this field of study. However, given the large number of patents by educational institutions, research groups and vehicle manufacturers, a detailed review of patented works is outside the scope of this study. This systematic review is expected to assist researchers working in this area by presenting the current advancements made in lane detection and tracking for ADAS and the challenges to overcome in the future for robust lane detection and tracking systems.
The structure of the paper is as follows. Section 2 provides an overview of the methodology adopted for the literature review. Section 3 then presents a detailed literature review that includes a brief introduction to the sensors used in ADAS and an analysis of the existing literature on lane detection and tracking algorithms. Section 4 presents the discussion, followed by conclusions in Section 5.

2. Methodology

Literature was gathered from electronic databases, including "ISI Web of Science", "Science Direct", "Scopus", "Google Scholar" and "IEEE Xplore". The keywords used for the search were "Lane detection algorithms", "Lane tracking algorithms", "Lane departure warning algorithms", "Advanced driver assistance system", "Lane change tracking", "Vehicle tracking", "Vehicle tracking sensors" and "Automated lane change", or a combination of these words (Figure 1). We also searched for patented works. Patents published from 2006 to 2020 were searched using the terms "Lane detection and tracking", "Lane departure warning" and "Advanced driver assistance system" in "Google Scholar" and "PubMed". As mentioned in Section 1.1, the objective of the patent search is to acknowledge growing commercialisation interests in this field of study rather than to provide a detailed review of the patented works. As such, only a sample of patented works from vehicle manufacturers is discussed. The period studied for the literature is the past two decades, as lane tracking and detection is an emerging field that has gained momentum post-2000. Only English-language publications were considered for the review, as they are widely accessible to global readers. The search procedure was further improved by following relevant publications in the reference lists of the collected literature. The lane detection and tracking algorithms were investigated under three approaches that have been commonly referred to in the literature (features-based, learning-based and model-based, as shown in Figure 1). The existing databases were analyzed to identify the availability of datasets for future research. The lane detection criteria and the calculation of the detection rate and accuracy of the algorithms that have been adopted in the literature are also reviewed.

3. Literature Review

A comparison of the different sensors used in ADAS is presented first, followed by an in-depth review of the algorithms used for lane detection and tracking, including the patented works.

3.1. Sensors Used in the ADAS

ADAS deploys different sensor fusion systems to guide the vehicle (Figure 2). In the literature, three types of sensors used in ADAS have been identified: Light Amplification by Stimulated Emission of Radiation (LASER) based sensors, Radio Detection and Ranging (RADAR) based sensors and vision-based sensors, as described below. Table 1 shows the Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of LASER, RADAR and vision-based sensors.

3.1.1. LASER Based Sensors

Laser scanners and light detection and ranging (LIDAR) are the common laser-based sensors. In this technology, a transmitter emits pulses of light and a receiver records the reflected electromagnetic waves. Wavelengths in the near-infrared (about 800–950 nm) and short-wave infrared (above 1500 nm) regions of the electromagnetic spectrum are used [19]. The distance to a target is calculated by estimating the time of flight. It may not be possible to derive the relative velocity of a moving object directly, so it is obtained by taking the derivative of range with respect to time. These types of sensors are used for multiple-target tracking. Their drawbacks are vulnerability to dirty lenses and inadequacy for poorly reflecting targets. These sensors may also be too sensitive to weather conditions. They are reliable for automatic car parking and collision mitigation [19,20].
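To make the time-of-flight principle concrete, the sketch below computes range from the round-trip pulse time and approximates relative velocity as the derivative of successive ranges; the timing values are illustrative, not taken from the cited studies.

```python
# Time-of-flight ranging: range from the round-trip pulse time, and
# relative velocity from the numerical derivative of range over time.
# Round-trip times below are illustrative, not from refs. [19,20].
C = 299_792_458.0  # speed of light (m/s)

def tof_range(round_trip_s: float) -> float:
    """Range in metres from a round-trip pulse time in seconds."""
    return C * round_trip_s / 2.0

# Two range measurements taken 0.1 s apart.
r1 = tof_range(4.0e-7)               # ~60.0 m
r2 = tof_range(3.9e-7)               # ~58.5 m
relative_velocity = (r2 - r1) / 0.1  # ~-15 m/s, i.e., target closing
print(f"r1={r1:.1f} m, r2={r2:.1f} m, v={relative_velocity:.1f} m/s")
```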

3.1.2. RADAR Based Sensors

RADAR sensors can detect objects in haze, dust, rain and snow up to 200 m ahead. In the radio detection and ranging process, these sensors emit strong radio waves through a transmitter and receive the reflections back through a receiver, similar to laser-based sensors. The distance between the sensor and an object is calculated from the time of flight. Another advantage is that the Doppler shift between the emitted wave and its echo can be measured, which provides the object's velocity. These kinds of sensors are often used in the aviation and defence manufacturing sectors to map the movements of aircraft. In the automobile sector, two types are used: long-range sensors, which operate in the 77–81 GHz spectrum, and short-range sensors, which operate between 21.65–26.65 GHz. In extreme weather conditions, these sensors are vulnerable and sometimes fail to work. These kinds of sensors are used for collision mitigation and adaptive cruise control [19,20].
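The Doppler relation mentioned above can be written as v = f_d · c / (2 f_c), where f_d is the measured frequency shift and f_c the carrier frequency. A minimal sketch, assuming a 77 GHz long-range carrier and an illustrative shift:

```python
# Radial velocity from the Doppler shift of an automotive radar echo.
# The carrier frequency and measured shift are illustrative assumptions.
C = 299_792_458.0  # speed of light (m/s)
F_CARRIER = 77e9   # long-range automotive radar carrier (Hz)

def doppler_velocity(f_doppler_hz: float) -> float:
    """Radial velocity in m/s; positive means the target is approaching."""
    return f_doppler_hz * C / (2.0 * F_CARRIER)

print(doppler_velocity(10_000.0))  # ~19.5 m/s for a 10 kHz shift
```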

3.1.3. Vision-Based Sensors

These sensors fall under the passive sensor category, which means they do not emit any waves. Vision sensors use images to assess the presence, orientation and accuracy of surrounding objects. They combine image acquisition with image processing, and multi-point inspection can be carried out using a single sensor. Two types of sensors are used in a vision-based system: monocular cameras and stereo vision cameras. These sensors do not directly measure the range and velocity of objects, so sophisticated signal processing is used to derive these parameters. Vision sensors are readily available and affordable in the automobile sector. They are applicable to traffic signal analysis and lane change assistance. Their main drawbacks are vulnerability to extreme weather conditions and occasional failure at nighttime [19,20].
Figure 2. Sensors fusion to guide autonomous vehicle, adapted and reprinted from ref. [21].

3.2. Lane Detection and Tracking Algorithms

In this section, we compare and analyse the algorithms in three categories according to the approach used: the features-based, model-based and learning-based approaches.
The feature-based approach uses edges and local visual characteristics of interest, such as gradient, colour, brightness, texture, orientation and variation, which are relatively insensitive to road shape but sensitive to illumination effects. Model-based approaches fit global road models to low-level features; they are more robust against illumination effects but sensitive to road shape [13,14]. Geometric parameters are used in the model-based approach for lane detection [16,17,18]. The learning-based approach consists of two stages: training and classification. The training process uses previously known errors and system properties to construct a model, e.g., from program variables. The classification phase then applies the trained model to the user's set of properties and outputs the candidates most likely to be correlated with the error, ordered by their probability of fault disclosure [19]. In the following sections, we describe the three approaches used in the literature in detail. This is followed by summary tables (Table 2, Table 3, Table 4 and Table 5) that present the key features of these algorithms and their strengths, weaknesses and future prospects.

3.2.1. Features-Based Approach (Image and Sensor-Based Lane Detection and Tracking)

Image and sensor-based lane detection and tracking decision-making processes depend on the sensors attached to the vehicle and the camera output. In this approach, the image frames are pre-processed, and a lane detection algorithm is applied to determine lane tracking. The sensor values are then used to decide on the path to be followed within the lane markings [22,23].
Kuo et al. [24] implemented a vision-based lane-keeping system. The proposed system obtains the vehicle position following the lane and controls the vehicle to be in the desired path. The steps involved in the lane-keeping system are inverse perspective mapping, detection of lane scope features and reconstruction of the lane markings. The main drawback of the system is that the performance is reduced when the vehicle is driving in a tunnel.
Kang et al. [25] proposed a kinematic-based fault-tolerant mechanism to detect the lane even if the camera cannot provide the road image due to malfunction or environmental constraints. In the absence of camera input, the lane is predicted using the kinematic model, taking parameters such as the length and speed of the vehicle. The camera input is modelled as a clothoid cubic polynomial curve road model, so the lane coefficients of the clothoid model remain available when the camera input is lost. A lane restoration scheme overcomes this loss based on a multi-rate state estimator obtained from the kinematic lateral motion model in the clothoid function; the predicted lane is based on the past curvature and curvature rate of the road. The results show that the proposed method can maintain the lane for 3 s without camera input. The developed algorithm was simulated using CarSim and Simulink, and it was tested in a Tucson test vehicle from HYUNDAI Motors equipped with an AutoBox from dSPACE.
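For reference, the cubic polynomial commonly used as a local clothoid approximation predicts lateral offset from offset, heading, curvature and curvature rate. The sketch below evaluates such a model; the coefficient names and values are illustrative assumptions, not the parameters used in [25].

```python
# Cubic clothoid approximation often used for camera lane models:
# y(x) = y0 + heading*x + (curvature/2)*x**2 + (curvature_rate/6)*x**3.
# All coefficient values below are illustrative, not from [25].
def clothoid_lateral_offset(x, y0, heading, curvature, curvature_rate):
    """Lateral offset y (m) at longitudinal distance x (m) ahead."""
    return (y0 + heading * x
            + 0.5 * curvature * x ** 2
            + curvature_rate * x ** 3 / 6.0)

# Predict the lane 20 m ahead from the last coefficients reported
# before a camera dropout.
print(clothoid_lateral_offset(20.0, y0=0.1, heading=0.01,
                              curvature=1e-3, curvature_rate=1e-5))
```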
Borkar et al. [26] proposed a lane detection and tracking method using inverse perspective mapping (IPM) to create a bird's-eye view of the road, a Hough transform for detecting candidate lanes and a Kalman filter to track the lane. The road image is converted to grayscale, followed by temporal blurring. The application of IPM provides a bird's-eye view of the image. Lanes are detected by identifying pairs of parallel lines separated by a distance. The IPM images are converted to binary, and a Hough transform is performed on the binary image, which is then divided into two halves. To determine the centre of the line, a one-dimensional matched filter is applied to each sample, and the pixel with a correlation exceeding the threshold is selected as the centre of the lane. The Kalman filter is used to track the lane, taking the lane orientation and the difference between the current and previous frames. A Firewire camera is used to capture the image of the road. The proposed algorithm achieves better accuracy on isolated and metro highways, and the accuracy is around 86% on city roads. The improved performance is due to the usage of the Kalman filter to track the lane.
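A minimal OpenCV sketch of the IPM step is shown below; the four source corner points are placeholders that would normally come from camera calibration, not values from [26].

```python
import cv2
import numpy as np

# Inverse perspective mapping (IPM): warp a forward-facing road image to
# a bird's eye view so that lane lines become roughly parallel.
# "road_frame.png" and the corner fractions are illustrative assumptions.
img = cv2.imread("road_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape

src = np.float32([[0.42 * w, 0.65 * h], [0.58 * w, 0.65 * h],   # far edge
                  [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])  # near edge
dst = np.float32([[0.25 * w, 0], [0.75 * w, 0],
                  [0.75 * w, h], [0.25 * w, h]])

M = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(gray, M, (w, h))
```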
Sun et al. [27] proposed a lane detection mechanism that considers multiple frames, in contrast to single-frame approaches, together with an inertial measurement unit and a classifier. The Hough transform is employed to extract line segments from the lane markings, and these segments are stored in Hough space with an associated probability value. The initially assigned probability value changes due to error and vehicle movement. A Kalman filter is applied to smooth the line segments in Hough space, and the inertial measurement unit (IMU) values are used to align the previous line segments in the Hough space. The truthiness of the line segments is determined using a convolutional neural network, and lanes are detected by considering the line segments with a high probability value. The analysis of the method using the Caltech dataset provides accuracy in the range of 95% to 97%. Lane detection under different environmental conditions, such as sunlight and rain with high levels of illumination and rainfall, shows performance in the range of 72% to 87%. The system is implemented using an NVIDIA GTX1050ti GPU, an OV10650 camera and an Epson G320 IMU.
Lu et al. [28] proposed a lane detection algorithm for urban traffic scenarios in which the road is well-constructed, flat and of equal width. The road model is constructed using feature line pairs (FLPs), which are detected using a Kalman filter and a regression diagnostic technique. The results show that the time taken to detect the road parameters is 11 ms. The proposed method is implemented in C++ on a personal computer with a 1.33 GHz AMD processor, a single camera and a Matrox Meteor RGB/PPB digitizer, and deployed in THMR-V (Tsinghua Mobile Robot V).
Zhang and Shi [29] proposed a lane detection method for detecting lanes at night. The Sobel and Canny operators detect the edges of the lanes, and gradients exceeding a certain threshold are labelled as edge points. The histogram region with higher brightness is labelled as the lane boundary, and the low-valued histogram region as the road. The accuracy of the proposed method is high even in the presence of noise from car headlights and rear lights and road contour signs.
Borkar et al. [30] proposed a layered approach to detect the lane at night. The region of interest is specified in the captured image of the road, and the image is converted to greyscale for further processing. Temporal blurring is applied to obtain the continuous lanes of the long line. Depending on the characteristics of the neighbouring pixels, an adaptive threshold is used to determine the object. The image is divided into left and right halves, and a Hough transform is performed on each half to determine the straight lines; the final process fits all the straight lines together. A Firewire S400 (400 Mbps) colour camera in VGA resolution (640 × 480) at 30 fps is used to capture the video, which is fed to MATLAB, and lanes are detected offline. The performance of the proposed method is good in isolated highway and metro highway scenarios; with moderate traffic, the accuracy of detecting the lanes drops to 80 percent.
Priyadarshini et al. [31] proposed a lane detection system that detects the lane during the daytime. The captured video is converted to a grayscale image, and a Gaussian filter is applied to remove noise. The Canny edge detection algorithm is used to detect the edges, and a Hough transform is applied to identify the length of the lane. The proposed method is simulated using a Raspberry Pi-based robot with a camera and ultrasonic sensors to determine the distance to neighbouring vehicles.
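A minimal OpenCV version of this daytime pipeline (grayscale conversion, Gaussian denoising, Canny edge detection, Hough line extraction) is sketched below; the file name and all thresholds are typical starting values, not parameters reported in [31].

```python
import cv2
import numpy as np

# Grayscale -> Gaussian blur -> Canny edges -> probabilistic Hough
# transform for candidate lane segments. Thresholds are illustrative.
frame = cv2.imread("road_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```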
The survey by Hong et al. [32] discussed video processing techniques to determine lanes under illumination change in the region of interest for straight roads. The survey highlights the methodologies involved, such as choosing the proper colour space and determining the region of interest. Once the intended image is captured, a colour segmentation operation is performed using region splitting and clustering schemes, followed by a merging algorithm to suppress the noise in the image.
A colour-based lane detection and representative line extraction algorithm was proposed by Park et al. [33]. The captured image in RGB format is converted to gray code, followed by binary image conversion; the purpose of the binary conversion is to remove shadows in the captured image. The lanes in the image are detected using the Canny algorithm with colour as the feature. The direction and intensity are determined after removing noise with a Gaussian filter, and the images are smoothed by applying a median filter. The lanes in the image are treated as the region of interest, and a Hough transform is applied to confirm the accuracy of the lanes in that region. The experiment was performed during the daytime, and the results show that the lane detection rate is more than 93%.
El Hajjouji et al. [34] proposed a hardware architecture for detecting straight lane lines using the Hough transform. The CORDIC (Coordinate Rotation Digital Computer) algorithm calculates the gradient and phase from the captured image; the output of the CORDIC block is the norm and the angle with respect to the x-axis of the image. The norm and angles are compared with the threshold obtained from the region of interest. The Hough transform is applied to the output of the comparator module, and the relation between the Hough space and the angle is determined. Noise is removed by the Hough transform voting procedure, and the output is obtained as the slope of the straight line. The algorithm is implemented on the Virtex-5 ML505 platform and was tested on a variety of images with varying illumination and different road conditions, such as urban streets, highways, occlusion, poor line paintings, and day and night scenarios. The algorithm provides a detection rate of 92%.
Samadzadegan et al. [35] proposed a lane detection methodology based on a circular-arc or parabolic geometric method. The RGB colour image is converted to an intensity image that contains a specific range of values, and a three-layer image pyramid is constructed using the bi-cubic interpolation method. Among the three layers of the region of interest, the first layer's pixels undergo a randomized Hough transformation to determine the curvature and orientation features, followed by genetic algorithm optimisation; the process is repeated for the remaining two layers. The outcomes obtained in the lower layers are the features of the lane and are used to determine the lanes in the region of interest. The results show a performance drop in lane detection when entering a tunnel and when lane markings are occluded by the shadow of another vehicle.
Cheng et al. [36] proposed a hierarchical lane detection system to detect lanes on structured and unstructured roads. The system classifies the environment as structured or unstructured based on feature extraction, which depends on the colour of the lane marking. The connected component labelling method is applied to determine the feature objects. During the training phase, supervised learning is performed, and the objects are manually classified as left lane, right lane or no lane markings. The image is classified as structured or unstructured based on the vote value associated with the weights. Lanes on structured roads are detected by eliminating the moving vehicles in the lane image, followed by lane recognition considering the angle of inclination and the starting points of the lane markings. The lane coherence verification module compares the lane width of the current frame with that of the previous frame to determine the lanes. For unstructured roads, the following steps are performed: mean shift segmentation, which determines the road surface by comparison with the surroundings to detect variations in colour and texture; and region merging and boundary smoothing, which prunes unnecessary boundary lines and neglects regions smaller than a threshold. The boundary is selected based on the posterior probability of each set of candidates. The simulation results show that around 0.11 s is needed to identify structured or unstructured roads, and the system achieves an accuracy of 97% in lane detection.
Han et al. [37] proposed a LIDAR sensor-based road boundary detection and tracking method for both structured and unstructured roads. The LIDAR is used to obtain polar coordinates, and line segments are obtained from the height and pitch of the LIDAR. Information such as roadsides, curbs, sidewalks and buildings is obtained from the line segments, and the road slope and width are obtained by merging two line segments. The road is tracked using a nearest neighbour filter to estimate the state of the target. The algorithm was tested in a real vehicle equipped with LIDAR, GPS and an IMU. The road boundary detection accuracy is 95% for structured and 92% for unstructured roads.
Le et al. [38] proposed a method to detect pedestrian lanes with no lane markings under different illumination conditions. The first stage of the proposed system is vanishing point estimation, which works on votes of local orientations from coloured edge pixels; the point most supported by the local orientations is selected as the vanishing point. The next stage is the determination of a sample region of the lane from the vanishing point. To achieve higher robustness towards different illuminations, an illumination-invariant space is used. Finally, the lanes are detected using the appearance and shape information from the input image, and a greedy algorithm is applied to determine the connectivity between the lanes in each iteration over the input image. The proposed model was tested on images of both indoor and outdoor environments, and the results show a lane detection accuracy of 95%.
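As a simplified stand-in for this orientation voting, the sketch below estimates a vanishing point by intersecting pairs of detected line segments and taking the median intersection; the segment coordinates are synthetic.

```python
import numpy as np

# Toy vanishing-point estimate: intersect every pair of line segments
# and take the median of the intersection points. This is a simplified
# stand-in for the orientation-voting scheme in [38].
def line_params(seg):
    x1, y1, x2, y2 = seg
    a, b = y2 - y1, x1 - x2          # line in the form a*x + b*y = c
    return a, b, a * x1 + b * y1

def vanishing_point(segments):
    points = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a1, b1, c1 = line_params(segments[i])
            a2, b2, c2 = line_params(segments[j])
            det = a1 * b2 - a2 * b1
            if abs(det) > 1e-9:      # skip near-parallel pairs
                points.append(((c1 * b2 - c2 * b1) / det,
                               (a1 * c2 - a2 * c1) / det))
    return np.median(np.array(points), axis=0)

segs = [(100, 400, 280, 220), (540, 400, 360, 220), (90, 380, 300, 200)]
print(vanishing_point(segs))  # approximate (x, y) of the vanishing point
```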
Wang et al. [39] proposed a lane detection system for straight and curved road scenarios. The region of interest, set as 60 m, is determined from the captured image and divided into a straight region and a curve region: the near-field region is approximated as a straight line, and the far-field region is approximated as a curve. An improved Hough transform is applied to detect the straight line, and the curve in the far-field region is determined using the least-squares curve fitting method. A WAT902H2 camera is used to capture the image of the road. The results show that the time taken to determine the straight and curved lanes is 60–80 ms, compared to 70–100 ms in existing works, and the accuracy is around 92–93%. The error rate in bending to the left or right direction ranges from −0.85% to 5.20% for different angles.
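The far-field curve-fitting step can be illustrated with an ordinary least-squares polynomial fit; the pixel coordinates below are synthetic, and the quadratic model is an assumption standing in for the fitting used in [39].

```python
import numpy as np

# Least-squares fit of a second-order polynomial x = f(y) to candidate
# far-field lane pixels (synthetic coordinates, for illustration only).
ys = np.array([200.0, 220.0, 240.0, 260.0, 280.0, 300.0])
xs = np.array([310.0, 318.0, 329.0, 343.0, 360.0, 380.0])

coeffs = np.polyfit(ys, xs, deg=2)  # [a, b, c] of x = a*y**2 + b*y + c
lane_x_at = np.poly1d(coeffs)
print(lane_x_at(250.0))             # interpolated lane position at y = 250
```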
Yeniaydin [40] proposed a lane detection algorithm based on camera and 2D LIDAR input data. The camera obtains a bird's eye view of the road, and the LIDAR detects the location of objects. The proposed method consists of the steps below:
  • Obtain the camera and 2D LIDAR data.
  • Perform a segmentation operation on the LIDAR data to determine groups of objects, based on the distance between different points (a minimal sketch of this step follows the list).
  • Map the groups of objects to the camera data.
  • Turn the pixels of the groups or objects into camera data. This is done by forming a rectangular region of interest; straight lines are drawn from the location of the camera to the corners of the region of interest, and a convex polygon algorithm determines the background and occluded regions of the image.
  • Apply lane detection to the binary image to detect the lanes. The proposed approach shows better accuracy than traditional methods for distances less than 9 m.
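A minimal sketch of the distance-based segmentation step referenced in the list above, assuming an angle-ordered 2D scan and an illustrative gap threshold:

```python
import numpy as np

# Distance-based segmentation of a 2D LIDAR scan: consecutive points
# closer than max_gap join the same cluster. The threshold and the scan
# points are illustrative assumptions, not values from [40].
def segment_scan(points: np.ndarray, max_gap: float = 0.5):
    """points: (N, 2) array ordered by scan angle -> list of clusters."""
    clusters, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        if np.linalg.norm(curr - prev) <= max_gap:
            current.append(curr)
        else:
            clusters.append(np.array(current))
            current = [curr]
    clusters.append(np.array(current))
    return clusters

scan = np.array([[5.0, 0.0], [5.1, 0.2], [5.2, 0.4],   # object A
                 [8.0, 3.0], [8.1, 3.2]])              # object B
print(len(segment_scan(scan)))  # -> 2 clusters
```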
Kemsaram et al. [41] proposed a deep learning-based approach for detecting lanes, objects and free space. The Nvidia tool comes with an SDK (software development kit) with inbuilt options for object detection, lane detection and free space detection. The object detection module loads the image and applies transformations to detect different objects. The lane detection framework uses the LaneNet pipeline, which processes the images; the lanes are assigned numbers from left to right, and for each frame the framework determines the lane markings and creates the pixel coordinates (x, y) for each marking. The free space module identifies the free space on the road surface in front of the vehicle. The proposed method is implemented in C++ and runs in real time on the Nvidia Drive PX 2 platform. The time taken to determine the lane is 6 to 9 ms.

3.2.2. Model-Based Approach (Robust Lane Detection and Tracking)

Lee and Moon [42] proposed a robust lane detection and tracking system. The system's main aim is to detect and track the lane under different environmental conditions, such as clear sky, rain and snow, during morning and night. The proposed system consists of three phases: initialization, lane detection and lane tracking. In the initialization phase, the road region is captured and pre-processed to a low-resolution image. The edges are extracted, and the image is split into left-half and right-half regions. Intersection points are formed from both regions, and these are mostly found near the vanishing point; once the number of vanishing points becomes greater than the threshold, the regions above and below the vanishing points are removed. In the lane marking detection phase, the lane marking is determined in a rectangular region of interest. The image is converted into greyscale, edge line detection is applied, and line segments are detected; a hierarchical agglomerative clustering method is used for the colour image. Line segments arising from surrounding vehicles, shadows, trees and buildings are distinguished by their frequency in the region of interest: such disturbances are not continuous compared to the real lane marking and can be identified by comparison with consecutive frames. In the lane tracking phase, lane tracking is achieved from the modified region of interest. Multiple pairs of lanes with the same weight are considered, and the smallest is chosen; lanes that are not detected are predicted using a Kalman filter. The system was tested using C++ and the OpenCV library on Ubuntu 14. There is scope for improving the algorithm in the night scenario.
Son et al. [43] proposed a robust multi-lane detection and tracking algorithm to determine lanes accurately under difficult road conditions such as poor road markings, obstacles and guardrails. An adaptive threshold is used to extract strong lane features from unclear images. The next step is to remove erroneous lane features by applying the random sample consensus (RANSAC) algorithm to prevent false lane detection, and the selected lanes are verified using a lane classification algorithm. The advantage of this approach is that no prior knowledge of the lane geometry is required. The scope for improvement is the detection of false lanes under different urban driving scenarios.
Li et al. [44] proposed a real-time robust lane detection method consisting of three steps: lane marking extraction, geometric model estimation and tracking of the key points of the geometric model. In the lane extraction process, the lane width is chosen according to the standards followed in the country. The gradient of each pixel is used to estimate the edge points of the lane markings.
Son et al. [45] proposed a method that uses the illumination-invariant property of lane colours, as detecting the lane and keeping the vehicle on track under varying illumination is a challenge. The methodology involves determining the vanishing point, for which the bottom half of the image is analyzed using a Canny edge detector and a Hough transform. The second step is the determination of white and yellow lanes based on their illumination properties; the white and yellow lane pixels are used to obtain a binary image of the lane. The lanes are labelled, and the angles at which they intercept the y-axis are computed; if there is a match, they are grouped to determine long lanes.
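A simplified stand-in for this colour-extraction step is sketched below using fixed HSV thresholds; the ranges are rough defaults, not the illumination-invariant values derived in [45].

```python
import cv2

# Threshold the frame in HSV space for white and yellow markings and OR
# the two masks into one binary lane image. The HSV ranges and file name
# are illustrative assumptions.
frame = cv2.imread("road_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
lane_mask = cv2.bitwise_or(white, yellow)
```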
Chae et al. [46] proposed an autonomous lane changing system consisting of three modules: perception, motion planning and control. The surrounding vehicles are detected using LIDAR sensor input. In motion planning, the vehicle determines the mode, such as lane-keeping or lane change, and the desired motion is then planned considering the safety of surrounding vehicles. A linear quadratic regulator (LQR) based model predictive control is used for longitudinal acceleration and for deciding the steering angle, while stochastic model predictive control is used for lateral acceleration.
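To illustrate the LQR building block of such a controller, the sketch below solves the discrete-time algebraic Riccati equation for a double-integrator model; the model and weights are illustrative assumptions, not the vehicle model from [46].

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time LQR for a double integrator (position, velocity) with
# acceleration input; dt, Q and R are illustrative choices.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([1.0, 0.1])  # penalise position and velocity error
R = np.array([[0.01]])   # penalise control effort

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback law u = -K @ x
print(K)
```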
Chen et al. [47] proposed a deep convolutional neural network to detect lane markings. The modules involved in the lane detection process are lane marking generation, grouping and lane model fitting. The lane grouping process forms clusters of neighbouring pixels belonging to the same lane, represented by a single label, and connects the labels into a so-called super marking. The lane model fitting step then uses a third-order polynomial to represent straight and curved lanes. The simulation was done on the CamVid dataset; the setup requires high-end systems for training, and the algorithm was evaluated in only minimal real-time situations. A Global Navigation Satellite System (GNSS)-based lane-keeping assistance system has also been proposed, which calculates the target steering angle using a model predictive controller. The advantage of the approach is that the lane can be estimated from GNSS when it is not visible due to environmental constraints. The steering angle and acceleration are modelled using a first-order lag system, and model predictive control is used to control the lateral movement of the vehicle. The system was simulated, and prototype testing was conducted in a real vehicle, an OUTLANDER PHEV (Mitsubishi Motors Corporation). The results show that the lane is followed with a minimal lateral error of about 0.19 m. The drawback of the approach is that the time delay of the GNSS causes oscillation in the steering; hence, the GNSS time delay should be kept small compared to the steering time delay.
Lu et al. [48] proposed a lane detection approach using Gaussian distribution random sample consensus (G-RANSAC). The process involves converting the image to a bird's eye view to expose all the lane characteristics, then using a ridge detector to extract the features of lane points and an adaptable neural network to remove noise points. The ridge features are extracted from the gray images, which provides better results in the presence of vehicle shadow and minimal environmental illumination. Finally, the lanes are detected using the RANSAC approach, which considers the confidence level of ridge points when separating lanes from noise. The proposed algorithm was tested under four different illumination conditions: normal illumination with good pavement, intense illumination with shadow interruption, normal illumination with sign-on-the-ground interruption, and poor illumination with vehicle interference, achieving true-positive rates of 99.02%, 96.92%, 96.65% and 91.61%, respectively.
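The sketch below shows a plain RANSAC line fit over candidate lane points; G-RANSAC in [48] additionally weights samples by ridge-point confidence under a Gaussian distribution, which this simplified version omits. The data and thresholds are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(points, n_iters=200, inlier_tol=2.0):
    """Fit a line to 2D points, returning the best two-point model."""
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        norm = np.linalg.norm(p2 - p1)
        if norm < 1e-9:
            continue
        dx, dy = (p2 - p1) / norm
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs((points[:, 0] - p1[0]) * dy
                      - (points[:, 1] - p1[1]) * dx)
        inliers = int((dist < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (p1, p2)
    return best_model, best_inliers

# Synthetic lane points on a line, plus uniform noise points.
lane = np.column_stack([np.arange(50.0), 0.5 * np.arange(50.0) + 1.0])
noise = rng.uniform(0.0, 50.0, (15, 2))
model, count = ransac_line(np.vstack([lane, noise]))
print(count, "inliers of", len(lane) + len(noise))
```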

3.2.3. Learning-Based Approach (Predictive Controller Lane Detection and Tracking)

Bian et al. [49] implemented a lane-keeping assistance system (LKAS) with two switchable assistance modes, designed for better reliability: a conventional lane departure prevention (LDP) mode and a lane-keeping co-pilot (LK Co-Pilot) mode. The LDP mode is activated if a lane departure is detected; a lateral offset is used as the lane-departure metric to determine whether to trigger the LDP. The LK Co-Pilot mode is activated if the driver does not intend to change lane; this mode helps the driver follow the expected trajectory based on the driver's dynamic steering input. Care should be taken to set the threshold accurately; otherwise, false lane detections would increase.
Wang et al. [50] proposed a lane-changing strategy for autonomous vehicles using deep reinforcement learning. The parameters considered for the reward are delay and traffic on the road, and the decision to switch lanes depends on improving the reward through interaction with the environment. The proposed approach was tested under accident and non-accident scenarios. The advantage of this approach is collaborative decision making in lane changing; fixed rules may not be suitable for heterogeneous environments or traffic scenarios.
Wang et al. [51] proposed a reinforcement learning-based lane change controller. Two types of control are adopted, namely longitudinal and lateral control. A car-following model, namely the intelligent driver model, is chosen for the longitudinal controller, while the lateral controller is implemented by reinforcement learning with a reward function based on yaw rate, acceleration and the time to change lane. To overcome static rules, a Q-function approximator is proposed to achieve a continuous action space. The proposed system was tested in a custom-made simulation environment; extensive simulation is still needed to test the efficiency of the approximator function under different real-time scenarios.
Suh et al. [52] implemented a real-time probabilistic and deterministic lane-changing motion prediction system that works under complex driving scenarios, and they designed and tested the proposed system both in simulation and in real time. A hyperbolic tangent path is chosen for the lane-change manoeuvre. The lane changing process is initiated if the clearance distance is greater than the minimum safe distance, taking the positions of other vehicles into account, and a safe driving envelope constraint is maintained to check for nearby vehicles in different directions. A stochastic model predictive controller is used to calculate the steering angle and acceleration under disturbances, with disturbance values obtained from experimental data. The usage of advanced machine learning algorithms could improve the reliability and performance of the currently developed system.
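A hyperbolic tangent lane-change path of the kind described can be generated as below; the lane width, manoeuvre midpoint and slope factor are illustrative assumptions, not the parameters used in [52].

```python
import numpy as np

# Hyperbolic tangent lane-change path: lateral offset eases smoothly
# from 0 to the full lane width along the manoeuvre.
def tanh_path(x, lane_width=3.5, x_mid=25.0, sharpness=0.15):
    """Lateral offset (m) at longitudinal position x (m)."""
    return 0.5 * lane_width * (1.0 + np.tanh(sharpness * (x - x_mid)))

x = np.linspace(0.0, 50.0, 6)
print(np.round(tanh_path(x), 3))  # eases from ~0 to ~3.5 m
```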
Gopalan et al. [53] proposed a lane detection system to detect the lane accurately under conditions such as a lack of prior knowledge of the road geometry and lane appearance variation due to changing environmental conditions, and to do so independently of vehicle speed. The modules of the proposed system are lane detection and tracking. The basic approach used for lane detection is to classify lane markings from non-lane markings using labelled training samples. A pixel hierarchy feature descriptor method is proposed to identify the correlation between the lane and its surroundings, and a machine learning-based boosting algorithm is used to identify the most relevant features; the advantage of the boosting algorithm is its adaptive way of increasing or decreasing the weight of the samples. Lane tracking is performed when no knowledge about the motion pattern of the lane markings is available; it is achieved by using particle filters to track each of the lane markings and to understand the cause of the variation. The variance is calculated for parameters such as the initial position of the lane, the motion of the vehicle, changes in road geometry and the traffic pattern, and is used to track the lane under different environmental conditions. The learning-based system provides better performance under different scenarios. A point to consider is the assumption of a flat road, chosen to avoid the sudden appearance and disappearance of lanes. The proposed system is implemented at the simulation level.
To summarize the progress made in lane detection and tracking as discussed in this section, Table 2 shows the key steps involved in the three approaches, along with remarks on their general characteristics. Table 3, Table 4 and Table 5 then summarize the data used, strengths, drawbacks, key findings and future prospects of the key studies that have adopted the three approaches in the literature.
Table 2. A summary of methods used for lane detection and tracking with general remarks.

Image and sensor-based lane detection and tracking (features-based approach)
  • Steps: image frames are pre-processed; a lane detection algorithm is applied; the sensor values are used to track the lanes.
  • Tools used: camera and sensors.
  • Data used: sensor values.
  • Remarks: frequent calibration is required for accurate decision making in a complex environment.

Predictive controller for lane detection and control (learning-based approach)
  • Steps: model predictive controller; reinforcement learning algorithms.
  • Tools used: machine learning techniques (e.g., neural networks).
  • Data used: data obtained from the controller.
  • Remarks: reinforcement learning with a model predictive controller could be a better choice to avoid false lane detection.

Robust lane detection and tracking (model-based approach)
  • Steps: capture an image through the camera; use an edge detector to extract the features of the image; determine the vanishing point.
  • Tools used: robust lane detection model algorithms.
  • Data used: real-time data.
  • Remarks: provides better results in different environmental conditions; camera quality plays an important role in determining lane markings.
Table 3. A comprehensive summary of lane detection and tracking algorithms.

[24] (real data) — Method: inverse perspective mapping is applied to convert the image to a bird's eye view. Advantages: minimal error and quick detection of the lane. Drawbacks: the algorithm's performance drops when driving in a tunnel due to fluctuating lighting conditions; the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection. Results: lane detection error of 5%, cross-track error of 25% and lane detection time of 11 ms. Tools: fisheye dashcam, inertial measurement unit and ARM processor-based computer. Future prospects: enhancing the algorithm for complex road scenarios and low-light conditions. Data: obtained using a model car running at a speed of 100 m/s.

[25] (real data) — Method: kinematic motion model to determine the lane with minimal vehicle parameters. Advantages: no need to parameterize the vehicle with variables like cornering stiffness and inertia; prediction of the lane for around 3 s even in the absence of camera input. Drawbacks: suitability for different environmental situations has not been considered. Results: lateral error of 0.15 m in the absence of the camera image. Tools: Mobileye camera, CarSim, MATLAB/Simulink and AutoBox from dSPACE. Future prospects: trying the fault-tolerant model in a real vehicle. Data: test vehicle.

[26] (simulation) — Method: inverse mapping used to create a bird's eye view of the environment. Advantages: improved lane detection accuracy in the range of 86% to 96% for different road types. Drawbacks: performance under different vehicle speeds and inclement weather conditions not considered. Results: the algorithm requires 0.8 s to process a frame; higher accuracy when more than 59% of lane markers are visible. Tools: Firewire colour camera and MATLAB. Future prospects: real-time implementation of the work. Data: highways and streets around Atlanta.

[27] (simulation and real data) — Method: Hough transform to extract the line segments; a convolutional neural network-based classifier determines the confidence of each line segment. Advantages: tolerant to noise. Drawbacks: on the custom dataset, performance drops compared to the Caltech dataset, as device specification and calibration play an important role in capturing the lane. Results: accuracy greater than 95% for the urban scenario; accuracy of 72% to 86% on the custom setup. Tools: OV10650 camera and Epson G320 IMU. Future prospects: performance improvement. Data: Caltech dataset and custom dataset.

[28] (real data) — Method: feature line pairs (FLPs) along with a Kalman filter for road detection. Advantages: faster detection of lanes; suitable for real-time environments. Drawbacks: suitability under different environmental conditions remains to be tested. Results: around 4 ms to detect the edge pixels, 80 ms to detect all the FLPs and 1 ms to determine the exact road model with Kalman filter tracking. Tools: C++, camera and a Matrox Meteor RGB/PPB digitizer. Future prospects: robust tracking and improved performance in dense urban traffic. Data: test robot.

[29] (simulation) — Method: dual thresholding algorithm for pre-processing; edges detected by a single-direction gradient operator; a noise filter removes the noise. Advantages: the lane detection is insensitive to headlights, rear lights, cars and road contour signs. Drawbacks: the algorithm detects only straight lanes at night; its suitability for different types of roads at night remains to be studied. Results: detection of straight lanes. Tools: camera with RGB channels. Data: custom dataset.

[30] (simulation) — Method: determination of the region of interest and conversion to a binary image via an adaptive threshold. Advantages: better accuracy. Drawbacks: the algorithm needs changes for daytime lane detection, as the constraints and assumptions considered do not suit the daytime. Results: 90% accuracy at night on isolated highways. Tools: Firewire S400 camera and MATLAB. Future prospects: geometric transformation of the image to increase accuracy, and intensity normalization. Data: custom dataset.

[31] (simulation) — Method: Canny edge detector used to detect the edges of the lanes. Advantages: the Hough transform improves the output of the lane tracker. Results: the performance of the proposed system is better. Tools: Raspberry Pi-based robot with camera and sensors. Future prospects: simulation using a Raspberry Pi-based robot with a monocular camera and radar-based sensors to determine the distance to neighbouring vehicles. Data: custom data.

[32] (simulation) — Method: video processing technique to determine lanes under illumination change in the region of interest. Results: robust performance. Tools: vision-based vehicle. Future prospects: determining lane illumination changes in the region of interest for curved roads. Data: simulator.

[33] (simulation and real data) — Method: colour-based lane detection and representative line extraction algorithm. Advantages: better accuracy in the daytime. Drawbacks: the algorithm needs changes to be tested in different scenarios; unwanted noise reduces its performance. Results: lane detection rate of more than 93%. Tools: MATLAB. Future prospects: scope to test the algorithm at night. Data: custom data.

[34] (real data) — Method: hardware architecture for detecting straight lane lines using the Hough transform. Advantages: better accuracy under occlusion and poor line paintings. Drawbacks: computational complexity and high cost of the Hough transform. Results: tested under various road conditions such as urban streets and highways, with a detection rate of 92%. Tools: Virtex-5 ML505 platform. Future prospects: testing under different weather conditions. Data: custom.

[35] (real data) — Method: lane detection methodology based on a circular-arc or parabolic geometric method. Advantages: the video sensor improves the performance of lane marking detection. Drawbacks: performance drops in lane detection when entering a tunnel, due to low illumination. Results: experiments performed on different road scenes provided better results. Tools: maps, video sensors and GPS. Future prospects: testing the proposed method with previously available data. Data: custom.

[36] (simulation) — Method: hierarchical lane detection system for structured and unstructured roads. Advantages: quick detection of lanes. Results: accuracy of 97% in lane detection. Tools: MATLAB. Future prospects: testing the algorithm on isolated highways and urban roads.

[37] (real data) — Method: LIDAR sensor-based boundary detection and tracking for structured and unstructured roads. Advantages: the algorithm detects accurate lane boundaries regardless of road type. Drawbacks: difficult to track lane boundaries on unstructured roads because of low contrast and arbitrary road shapes. Results: road boundary detection accuracy of 95% for structured and 92% for unstructured roads. Tools: test vehicle with LIDAR, GPS and IMU. Future prospects: testing with RADAR-based and vision-based sensors. Data: custom data.

[38] (simulation) — Method: detection of pedestrian lanes with no lane markings under different illumination conditions. Advantages: robust pedestrian lane detection in unstructured environments. Drawbacks: complex indoor and outdoor environments remain challenging. Results: lane detection accuracy of 95%. Tools: MATLAB. Future prospects: scope for structured roads with different speed limits. Data: new custom dataset of 2000 images.

[39] (simulation and real data) — Method: improved Hough transform, which pre-processes road images of different light intensities and converts them to the polar-angle constraint area. Advantages: robust performance on a campus road without lane markings. Drawbacks: performance drops at low light intensity. Tools: test vehicle and MATLAB. Data: custom data.

[40] (simulation) — Method: lane detection algorithm based on camera and 2D LIDAR input data. Advantages: computational and experimental results show that the method significantly increases accuracy. Results: better accuracy than traditional methods for distances less than 9 m. Tools: software-based analysis and MATLAB. Future prospects: fusion of camera and 2D LIDAR data, and testing with RADAR and vision-based sensor data.

[41] (simulation) — Method: deep learning-based approach for detecting lanes, objects and free space. Advantages: the Nvidia tool comes with an SDK (software development kit) with inbuilt options for object detection, lane detection and free space detection. Drawbacks: a monocular camera with an advanced driver assistance system is costly. Results: the time taken to determine the lane is 6 to 9 ms. Tools: C++ and Nvidia's Drive PX2 platform. Future prospects: complex road scenarios with different high light intensities. Data: KITTI.
Table 4. A comprehensive summary of learning-based model predictive controller lane detection and tracking.
Table 4. A comprehensive summary of learning-based model predictive controller lane detection and tracking.
SourcesDataMethodAdvantagesDrawbacksResultTool UsedFuture ProspectsDataReason for Drawback
SimulationReal
[42] YGradient cue, color cue and line clustering are used to verify the lane markings.The proposed method works better under different weather conditions such as rainy and snowy environments.The suitability of the algorithm for multi-lane detection of lane curvature is to be studied.Except rainy condition during the day, the proposed system provides better results.C++ and OpenCV on ubuntu operating system.
Hardware: duel ARM cortex-A9 processors.
----48 video clips from USA and KoreaSince the road environment may not be predictable, leads to false detection.
[43]Y Extraction of lanes from the captured image Random, sample consensus algorithm is used to eradicate error in lane detection.Multilane detection even during poor lane markings. No prior knowledge about the lane is required.Urban driving scenario quality has to be improved in cardova 2dataset since it perceives the curb of the sidewalk as a lane.The Caltech lane datasets consisting of four types of urban driving scenarios:
Cordova 1;
Cordova 2;
Washington2; with a total of 1224 frames containing 4172 lane markings.
MATLABReal time implementation of the proposed algorithmData from south Korea road and Caltech dataset.IMU sensors could be incorporated to avoid the false detection of lanes.
[44]YYRectangular detection region is formed on the image. Edge points of lane is extracted using threshold algorithm. A modified Brenham line voting space is used to detect lane segment.Robust lane detection method by using a monocular camera in which the roads are provided with proper lane markings.Performance drops when road is not flatIn Cardova 2 dataset, the false detection value is higher around 38%. The algorithm shows better performance under different roads geometries such as straight, curve, polyline and complexSoftware based performance analysis on Caltech dataset for different urban driving scenario. Hardware implementation on the Tuyou autonomous vehicle.----Caltech and custom-made datasetDue to the difficulty
In image capturing false detection happened. More training or inclusion of sensors for live dataset collection will help to mitigate it.
[45] | Data: real | Method: vanishing points are detected based on a voting map; the distinct property of lane color is used to obtain illumination-invariant lane markers; the main lane is then found using clustering methods. | Advantages: the overall method runs within 33 ms per frame. | Drawbacks: computational complexity needs to be reduced by using the vanishing point and an adaptive ROI for every frame. | Result: under various illumination conditions, the lane detection rate of the algorithm averages 93%. | Tools: software-based analysis. | Future prospects: testing the algorithm in daytime under inclement weather conditions. | Dataset: custom real-time data. | Reason for drawback: —
[46] | Data: simulation | Method: a sharp-curve lane is detected from the input image based on hyperbola fitting; the input image is converted to a grayscale image, and the features (the left edge, the right edge and the extreme points of the lanes) are calculated. | Advantages: better accuracy for sharp curved lanes. | Drawbacks: the suitability of the algorithm for different road geometries is yet to be studied. | Result: lane detection accuracy is around 97%, and the average time taken to detect the lane is 20 ms. | Tools: custom-made simulator, C/C++ and Visual Studio. | Future prospects: — | Dataset: custom data. | Reason for drawback: —
[47] | Data: simulation | Method: a vanishing point detection method for unstructured roads. | Advantages: accurate and robust performance on unstructured roads. | Drawbacks: it is difficult to obtain a robust vanishing point for lane detection in unstructured scenes. | Result: the accuracy of the vanishing point ranges between 80.9% and 93.6% for different scenarios. | Tools: unmanned ground vehicle and mobile robot. | Future prospects: structured roads with different scenarios. | Dataset: custom data. | Reason for drawback: complex background interference and unclear road markings.
[48] | Data: simulation | Method: a lane detection approach using Gaussian distribution random sample consensus (G-RANSAC); a ridge detector is used to extract the features of lane points, and an adaptive neural network removes noise. | Advantages: provides better results in the presence of vehicle shadows and minimal illumination of the environment. | Drawbacks: — | Result: tested under four illumination conditions ranging from normal and intense to poor, with lane detection accuracies of 95%, 92%, 91% and 90%. | Tools: software-based analysis. | Future prospects: the method needs to be tested at various times, e.g., day and night. | Dataset: test vehicle. | Reason for drawback: —
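Several of the methods summarized here fit lane models with RANSAC-style estimators ([43,48]). The following minimal Python sketch shows plain RANSAC line fitting on candidate lane pixels; it is illustrative only (not the G-RANSAC variant of [48]), and the iteration count and inlier tolerance are arbitrary assumptions.

```python
import random
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=2.0):
    """Fit a 2D line ax + by + c = 0 to noisy lane-candidate pixels."""
    points = np.asarray(points, dtype=float)  # (N, 2) array of (x, y)
    best_model, best_inliers, best_count = None, None, 0
    for _ in range(n_iters):
        # Hypothesize a line from two randomly sampled distinct points.
        i, j = random.sample(range(len(points)), 2)
        p, q = points[i], points[j]
        norm = np.hypot(*(q - p))
        if norm == 0:
            continue
        a, b = (p[1] - q[1]) / norm, (q[0] - p[0]) / norm  # unit normal
        c = -(a * p[0] + b * p[1])
        # Points closer than the tolerance to the line count as inliers.
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = dist < inlier_tol
        if inliers.sum() > best_count:
            best_model, best_inliers = (a, b, c), inliers
            best_count = int(inliers.sum())
    return best_model, best_inliers
```

The largest consensus set would then typically be refit (e.g., by least squares) to give the final lane-boundary estimate, which is what makes this family of methods tolerant of spurious edge pixels.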
Table 5. A comprehensive summary of robust lane detection and tracking.
Columns: Sources | Data (simulation/real) | Method Used | Advantages | Drawbacks | Result | Tool Used | Future Prospects | Dataset | Reason for Drawbacks
[49] | Data: real | Method: an inverse perspective mapping method is applied to convert the image to a bird's-eye view. | Advantages: quick detection of the lane. | Drawbacks: performance drops due to fluctuations in lighting conditions. | Result: the lane detection error is 5%, the cross-track error is 25% and the lane detection time is 11 ms. | Tools: fisheye dashcam, inertial measurement unit and an ARM processor-based computer. | Future prospects: enhancing the algorithm for complex road scenarios and low-light conditions. | Dataset: data obtained using a model car running at a speed of 1 m/s. | Reason for drawback: the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection.
[50] | Data: simulation | Method: deep reinforcement learning is used for decision making in the lane changeover; the reward for decision making is based on parameters such as traffic efficiency. | Advantages: cooperative decision-making processes involving a reward function that compares the delay of a vehicle and the traffic. | Drawbacks: validation is needed to check the accuracy of the lane-changing algorithm in heterogeneous environments. | Result: performance is fine-tuned based on cooperation for both accident and non-accident scenarios. | Tools: custom-made simulator. | Future prospects: dynamic selection of the cooperation coefficient under different traffic scenarios. | Dataset: Newell car-following model. | Reason for drawback: —
[51] | Data: simulation | Method: a reinforcement learning-based approach for decision making using a Q-function approximator. | Advantages: the decision-making process involves a reward function comprising yaw rate, yaw acceleration and lane-changing time. | Drawbacks: more testing is needed to check the efficiency of the approximator function under different real-time conditions. | Result: the reward functions are used to learn the lane in a better way. | Tools: custom-made simulator. | Future prospects: testing the approach under different road geometries and traffic conditions, and testing the feasibility of reinforcement learning with fuzzy logic for image input and controller action based on the current situation. | Dataset: custom. | Reason for drawback: more parameters could be considered for the reward function.
[52] | Data: simulation | Method: probabilistic prediction for complex driving scenarios. | Advantages: deterministic and probabilistic prediction of the traffic of other vehicles improves robustness. | Drawbacks: analysing the efficiency of the system under real-time noise is challenging. | Result: more robust decision making than the deterministic method, with a lower probability of collision. | Tools: MATLAB/Simulink and CarSim; real-time setup: Hyundai-Kia Motors K7, Mobileye camera system, MicroAutoBox II, Delphi radars and an IBEO laser scanner. | Future prospects: testing under different scenarios. | Dataset: custom dataset (collected using a test vehicle). | Reason for drawback: the algorithm needs to be modified for suitability for real-time monitoring.
[53] | Data: simulation | Method: a pixel hierarchy models the occurrence of lane markings; lane markings are detected using a boosting algorithm and tracked using a particle filter. | Advantages: detection of the lane without prior knowledge of the road model or vehicle speed. | Drawbacks: using the vehicle's inertial sensors, GPS information and a geometry model would further improve performance under different environmental conditions. | Result: improved performance by using support vector machines and artificial neural networks on the image. | Tools: a machine with a 4-GHz processor working on approximately 240 × 320 images at 15 frames per second. | Future prospects: testing the efficiency of the algorithm using the Kalman filter. | Dataset: custom data. | Reason for drawback: calibration of the sensors needs to be maintained.
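The inverse perspective mapping step used by [49] can be illustrated with OpenCV's homography utilities. The sketch below is a generic example rather than the authors' implementation: the four source points outlining the road region are hypothetical placeholders that would normally come from camera calibration.

```python
import cv2
import numpy as np

def birds_eye_view(frame):
    """Warp a forward-facing road image to a bird's-eye (top-down) view."""
    h, w = frame.shape[:2]
    # Hypothetical trapezoid around the lane in the camera image; real
    # systems derive these points from the camera's extrinsic calibration.
    src = np.float32([[0.45 * w, 0.62 * h], [0.55 * w, 0.62 * h],
                      [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])
    # Matching rectangle in the top-down output image.
    dst = np.float32([[0.25 * w, 0], [0.75 * w, 0],
                      [0.75 * w, h], [0.25 * w, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```

In the warped view, the lane boundaries appear roughly parallel, which simplifies the estimation of lane curvature and cross-track error.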
Based on the review, some of the key observations from Table 3, Table 4 and Table 5 are summarized below:
  • Frequent calibration is required for accurate decision making in a complex environment.
  • Reinforcement learning with the model predictive control could be a better choice to avoid false lane detection.
  • Model-based approaches (robust lane detection and tracking) provide better results in different environmental conditions. Camera quality plays an important role in determining lane marking.
  • The algorithm’s performance depends on the type of filter used; the Kalman filter is the most widely used for lane tracking.
  • In a vision-based system, image smoothing is the initial stage of lane detection and tracking, and it plays a vital role in increasing system performance (a minimal pipeline sketch is given after this list).
  • External disturbances such as weather conditions, vision quality, shadows and glare, and internal disturbances such as too-narrow, too-wide or unclear lane markings, degrade algorithm performance.
  • The majority of researchers (>90%) have used custom datasets for research.
  • Monocular, stereo and infrared cameras have been used to capture images and videos. The algorithm’s accuracy depends on the type of camera used, and a stereo camera gives better performance than a monocular camera.
  • The lane markers can be occluded by a nearby vehicle during overtaking.
  • There is an abrupt change in illumination as the vehicle exits a tunnel. Sudden changes in illumination affect the image quality and degrade system performance.
  • The results show that the lane detection and tracking efficiency rate under dry and light rain conditions is near 99% in most scenarios. However, the efficiency of lane marking detection is significantly affected by heavy rain conditions.
  • The performance of the system drops due to unclear and degraded lane markings.
  • An IMU (inertial measurement unit) and GPS are examples of sensors that help to improve the distance-measurement performance of RADAR and LIDAR.
  • One of the biggest problems with today’s ADAS is that changes in environmental and weather conditions have a major effect on the system’s performance.
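To make the smoothing and filtering observations above concrete, a minimal feature-based sketch in Python/OpenCV is given below: Gaussian smoothing, Canny edge detection and a probabilistic Hough transform over a region of interest. All thresholds are illustrative assumptions; in a complete system the resulting line parameters would be smoothed over time, typically with a Kalman filter as noted in the list.

```python
import cv2
import numpy as np

def detect_lane_segments(frame):
    """Minimal feature-based lane detection on a single BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Image smoothing: the initial stage noted above; suppresses sensor noise.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Edge detection (thresholds tuned per camera and scene; values assumed).
    edges = cv2.Canny(blurred, 50, 150)
    # Restrict to a trapezoidal region of interest ahead of the vehicle.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (int(0.45 * w), int(0.6 * h)),
                     (int(0.55 * w), int(0.6 * h)), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    # Probabilistic Hough transform returns candidate lane-line segments.
    return cv2.HoughLinesP(cv2.bitwise_and(edges, mask), rho=1,
                           theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=20)
```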

3.3. Patented Works

According to patent family size, Toyota has the largest number of patent families (521), followed by Ford (406), General Motors (GM) (353), Honda Motor (284) and Uber (245). Six of the top ten companies are from the United States, while four are from Asia. From a patent standpoint, Europe seems to be lagging behind in the race for ADAS, and the patents published in China and other Asian countries for lane detection and tracking originate mainly from universities. Only the Google and General Motors patent portfolios have a high technical relevance score among the top ten patent holders. On the other hand, all portfolios have an above-average market coverage score, indicating that these manufacturers believe their inventions are valuable enough to protect globally; this highlights the significance and promise that companies perceive in autonomous driving. A detailed review of the patent works is beyond the scope of this study. However, given the commercial nature of lane detection and tracking, a sample of patented works, especially from vehicle manufacturers, that align with the three approaches (feature-based, learning-based and model-based) is presented in Table 6. Some of the key observations from Table 6 are:
  • Following the method of image- and sensor-based lane detection, separate courses are calculated for precisely two of the lane markings to be tracked, with a set of binary parameters indicating the allocation of the determined offset values to one of the two separate courses [54].
  • Following the robust lane detection and tracking method, after a fixed number of computing cycles the most probable hypothesis is selected, i.e., the one for which the difference between the predicted courses of the tracked lane markings and the courses of the recognized lane markings is lowest [55].
  • A parametric estimation method, in particular a maximum likelihood method, is used to assign the calculated offset values to each of the separate courses of the lane markings to be tracked [56].
  • Only the two lane markers that refer to the left and right boundaries of the vehicle’s own lane are used in the tracking procedure [57].
  • The positive and negative ratios of the extracted characteristics of the frame are used to assess the system’s correctness; the degree of accuracy is enhanced by including this judgment in all extracted frames [58].
  • At the present calculation cycle, the lane change assistance calculates a target control amount comprising a feed-forward control using a target curvature of a track for changing the host vehicle’s lane [59].
  • Additional signal analysis determines whether a collision between the host vehicle and another vehicle is likely to occur, allowing action to be taken to avoid the accident [60].
  • Two kinds of issues are often seen and corrected in dewarped perspective images: a stretching effect in the peripheral region of a wide-angle image dewarped by rectilinear projection, and duplicate images of objects in the area where the left and right camera views overlap [61].
  • The object identification system examines the pixels in order to identify objects that have not previously been identified in the 3D environment [62].

4. Discussion

Based on the review of studies on lane detection and tracking in Section 3.2, it can be observed that there are limited datasets in the literature that researchers have used to test lane detection and tracking algorithms. Based on the literature review, a summary of the key datasets used in the literature or available to researchers is presented in Table 7, which shows some of their key features, strengths and weaknesses. It is expected that more datasets will become available as this field continues to grow, especially with the development of fully autonomous vehicles. As per a survey of research papers published between 2000 and 2020, almost 42% of researchers mainly focused on the Intrusion Detection System (IDS) metrics to evaluate the performance of the algorithms. This may be because the efficiency and effectiveness of IDS are better when compared with the Point Clustering Comparison, Gaussian Distribution, Spatial Distribution and Key Points Estimation methods. The performance of lane detection and tracking algorithms is verified against a ground-truth dataset. There are four possibilities: true positive (TP), false negative (FN), false positive (FP) and true negative (TN), as shown in Table 8. Many metrics are available for the evaluation of performance, but the most common are accuracy, precision, F-score, Dice similarity coefficient (DSC) and receiver operating characteristic (ROC) curves. Table 9 provides the common metrics and the associated formulas used for the evaluation of the algorithms.
If the database is balanced, the accuracy rate should accurately reflect the algorithm’s global output. Precision reflects the quality of positive predictions: the greater the precision, the lower the number of “false alarms”. The recall, also called the true positive rate (TPR), is the ratio of positive instances that are correctly detected by the algorithm; therefore, the higher the recall, the better the algorithm is at detecting positive instances. The F1-score is the harmonic mean of precision and recall, and since the two are combined into a single concise metric, it can be used for comparing algorithms. The harmonic mean is used rather than the arithmetic mean because it is more sensitive to low values. Hence, an algorithm obtains a satisfactory F1-score only if it has both high precision and high recall. These parameters can be estimated as separate metrics for each class or as the algorithm’s overall metrics [73].
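These definitions can be written compactly; the sketch below computes the main metrics of Table 9 from raw confusion-matrix counts (the example counts are invented for illustration).

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Standard detection metrics from confusion-matrix counts (see Table 9)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)   # quality of positive predictions
    recall = tp / (tp + fn)      # detection rate / true positive rate
    # Harmonic mean penalizes an imbalance between precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example: 90 detected markings, 10 false alarms, 5 misses, 95 true rejections.
print(evaluation_metrics(tp=90, fp=10, fn=5, tn=95))
```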
Table 10 shows the SWOT analysis of different approaches used for lane detection and tracking algorithms. The learning-based approach (model predictive controller) is considered an emerging approach for lane detection and tracking because it is computationally more efficient than the other two approaches and provides reasonable results in real-time scenarios. However, the risk of mismatching lanes and the performance drop in inclement weather conditions are drawbacks of the learning-based approach. The feature-based approach, while time-consuming, can provide better performance in the optimization of lane detection and tracking; however, it poses challenges in handling high illumination or shadows. Image- and sensor-based approaches have been used widely in lane detection and tracking patents.
In addition, several gaps in knowledge are identified from the literature synthesis and are presented in Table 11. The literature review shows that clothoid- and hyperbola-shaped roads have been largely ignored in lane detection and tracking algorithms because of the complexity of the road structure and the unavailability of datasets. Likewise, much more work has been done on the pavement markings of structured roads than on unstructured roads (Figure 3). Most studies focus on straight roads. It is to be noted that unstructured roads are found in residential areas, hilly areas and forest areas. Much research has considered daytime conditions, while night and rainy conditions are less studied. In terms of speed flow conditions, speeds of 40 km/h to 80 km/h have been researched, while high speeds (above 80 km/h) have received less attention. Further, occlusion due to overtaking vehicles or other objects (Figure 4) and high illumination also pose challenges for lane detection and tracking. These issues should be addressed to move from level 3 automation (conditional driving) to level 5 (full automation). Also, new databases for more testing of algorithms are needed, as researchers are constrained by the unavailability of datasets. There is, however, the prospect of using synthetic sensor data generated with a test vehicle or by designing driving scenarios in a driving simulator app available through commercial software.
Lane markings are usually yellow and white, although reflector lanes are designated with other colors. The number of lanes and their width vary per country. Shadows may cause problems with vision clarity, and surrounding cars may obstruct the lane markings. Likewise, there is a dramatic shift in lighting as the car exits a tunnel, and the excessive light impairs visual clarity. Under weather conditions such as rain, fog and snow, the visibility of the lane markings decreases, and visibility may also be reduced in the evening. These difficulties lead to a drop in the performance of lane detection and tracking algorithms. Therefore, the development of a reliable lane detection system remains a challenge.

5. Conclusions

Over the last decade, many researchers have studied ADAS. This field continues to grow, as fully autonomous vehicles are predicted to enter the market soon [80,81]. There are limited studies in the literature that provide the state of the art in lane detection and tracking algorithms and the evaluation of those algorithms. To fill this gap, in this study we have provided a comprehensive review of different methods for lane detection and tracking. In addition, we presented a summary of the different datasets that researchers have used to test the algorithms, along with the approaches for evaluating the performance of the algorithms. Further, a summary of patented works has also been provided.
The use of a learning-based approach is gaining popularity because it is computationally more efficient and provides reasonable results in real-time scenarios. The unavailability of rigorous and varied datasets to test the algorithms has been a constraint for researchers. However, synthetic sensor data generated with a test vehicle, or driving scenarios designed through a simulator app available in commercial software, have opened the door for testing algorithms. Likewise, the following areas need more investigation in future:
  • lane detection and tracking under different complex geometric road design models, e.g., hyperbola and clothoid
  • achieving high reliability in detecting and tracking the lane under different weather conditions and different speeds, and
  • lane detection and tracking for the unstructured roads
This study aimed to comprehensively review the previous literature on lane detection and tracking for ADAS and to identify gaps in knowledge for future research. This is important because limited studies provide the state of the art in lane detection and tracking algorithms for ADAS and a holistic overview of works in this area. The quantitative assessment of mathematical models and parameters is beyond the scope of this work. It is anticipated that this review paper will be a valuable resource for researchers intending to develop reliable lane detection and tracking algorithms for emerging autonomous vehicles.

Author Contributions

Investigation, data collection, methodology, writing—original draft preparation, S.W.; Supervision, writing—review and editing, N.S.; Supervision, writing—review and editing, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author would like to acknowledge the Government of India, Ministry of Social Justice & Empowerment, for providing full scholarship to pursue PhD study at RMIT University. We want to thank the three anonymous reviewers whose constructive comments helped to improve the paper further.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nilsson, N.J. Shakey the Robot; Sri International Menlo Park: California, CA, USA, 1984. [Google Scholar]
  2. Tsugawa, S.; Yatabe, T.; Hirose, T.; Matsumoto, S. An Automobile with Artificial Intelligence. In Proceedings of the 6th International Joint Conference on Artificial Intelligence, Tokyo, Japan, 20 August 1979. [Google Scholar]
  3. Blackman, C.P. The ROVA and MARDI projects. In Proceedings of the IEEE Colloquium on Advanced Robotic Initiatives in the UK, London, UK, 17 April 1991; pp. 5/1–5/3. [Google Scholar]
  4. Thorpe, C.; Herbert, M.; Kanade, T.; Shafter, S. Toward autonomous driving: The CMU Navlab. II. Architecture and systems. IEEE Expert. 1991, 6, 44–52. [Google Scholar] [CrossRef]
  5. Horowitz, R.; Varaiya, P. Control design of an automated highway system. Proc. IEEE 2000, 88, 913–925. [Google Scholar] [CrossRef]
  6. Pomerleau, D.A.; Jochem, T. Rapidly Adapting Machine Vision for Automated Vehicle Steering. IEEE Expert. 1996, 11, 19–27. [Google Scholar] [CrossRef]
  7. Parent, M. Advanced Urban Transport: Automation Is on the Way. Intell. Syst. IEEE 2007, 22, 9–11. [Google Scholar] [CrossRef]
  8. Lari, A.Z.; Douma, F.; Onyiah, I. Self-Driving Vehicles and Policy Implications: Current Status of Autonomous Vehicle Development and Minnesota Policy Implications. Minn. J. Law Sci. Technol. 2015, 16, 735. [Google Scholar]
  9. Urmson, C. Green Lights for Our Self-Driving Vehicle Prototypes. Available online: https://blog.google/alphabet/self-driving-vehicle-prototypes-on-road/ (accessed on 30 September 2021).
  10. Campisi, T.; Severino, A.; Al-Rashid, M.A.; Pau, G. The Development of the Smart Cities in the Connected and Autonomous Vehicles (CAVs) Era: From Mobility Patterns to Scaling in Cities. Infrastructures 2021, 6, 100. [Google Scholar] [CrossRef]
  11. Severino, A.; Curto, S.; Barberi, S.; Arena, F.; Pau, G. Autonomous Vehicles: An Analysis both on Their Distinctiveness and the Potential Impact on Urban Transport Systems. Appl. Sci. 2021, 11, 3604. [Google Scholar] [CrossRef]
  12. Aly, M. Real time Detection of Lane Markers in Urban Streets. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12. [Google Scholar] [CrossRef] [Green Version]
  13. Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745. [Google Scholar] [CrossRef]
  14. Ying, Z.; Li, G.; Zang, X.; Wang, R.; Wang, W. A Novel Shadow-Free Feature Extractor for Real-Time Road Detection. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016. [Google Scholar]
  15. Jothilashimi, S.; Gudivada, V. Machine Learning Based Approach. 2016. Available online: https://www.sciencedirect.com/topics/computer-science/machine-learning-based-approach (accessed on 20 August 2021).
  16. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64. [Google Scholar]
  17. Zhao, H.; Teng, Z.; Kim, H.; Kang, D. Annealed Particle Filter Algorithm Used for Lane Detection and Tracking. J. Autom. Control Eng. 2013, 1, 31–35. [Google Scholar] [CrossRef] [Green Version]
  18. Paula, M.B.; Jung, C.R. Real-Time Detection and Classification of Road Lane Markings. In Proceedings of the 2013 XXVI Conference on Graphics, Patterns and Images, Arequipa, Peru, 5–8 August 2013. [Google Scholar]
  19. Kukkala, V.K.; Tunnell, J.; Pasricha, S.; Bradley, T. Advanced Driver-Assistance Systems: A Path toward Autonomous Vehicles. In IEEE Consumer Electronics Magazine; IEEE: Eindhoven, The Netherlands, 2018; Volume 7, pp. 18–25. [Google Scholar] [CrossRef]
  20. Yenkanchi, S. Multi Sensor Data Fusion for Autonomous Vehicles; University of Windsor: Windsor, ON, Canada, 2016. [Google Scholar]
  21. Synopsys.com. What Is ADAS (Advanced Driver Assistance Systems)?—Overview of ADAS Applications|Synopsys. 2021. Available online: https://www.synopsys.com/automotive/what-is-adas.html (accessed on 12 October 2021).
  22. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. In IEEE Transactions on Intelligent Transportation Systems; IEEE: Eindhoven, The Netherlands, 2006; Volume 7, pp. 20–37. [Google Scholar] [CrossRef] [Green Version]
  23. Veit, T.; Tarel, J.; Nicolle, P.; Charbonnier, P. Evaluation of Road Marking Feature Extraction. In Proceedings of the 2008 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 174–181. [Google Scholar]
  24. Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the Image Sensor Processing for Lane Detection and Control in Vehicle Lane Keeping Systems. Sensors 2019, 19, 1665. [Google Scholar] [CrossRef] [Green Version]
  25. Kang, C.M.; Lee, S.H.; Kee, S.C.; Chung, C.C. Kinematics-based Fault-tolerant Techniques: Lane Prediction for an Autonomous Lane Keeping System. Int. J. Control Autom. Syst. 2018, 16, 1293–1302. [Google Scholar] [CrossRef]
  26. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with ransac and Kalman filter. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264. [Google Scholar] [CrossRef]
  27. Sun, Y.; Li, J.; Sun, Z. Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion. Sensors 2019, 19, 2305. [Google Scholar] [CrossRef] [Green Version]
  28. Lu, J.; Ming Yang, M.; Wang, H.; Zhang, B. Vision-based real-time road detection in urban traffic, Proc. SPIE 4666. In Real-Time Imaging VI; SPIE: Bellingham, WA, USA, 2002. [Google Scholar] [CrossRef]
  29. Zhang, X.; Shi, Z. Study on lane boundary detection in night scene. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 538–541. [Google Scholar] [CrossRef]
  30. Borkar, A.; Hayes, M.; Smith, M.T.; Pankanti, S. A layered approach to robust lane detection at night. In Proceedings of the 2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, Nashville, TN, USA, 30 March–2 April 2009; pp. 51–57. [Google Scholar] [CrossRef]
  31. Priyadharshini, P.; Niketha, P.; Saantha Lakshmi, K.; Sharmila, S.; Divya, R. Advances in Vision based Lane Detection Algorithm Based on Reliable Lane Markings. In Proceedings of the 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; pp. 880–885. [Google Scholar] [CrossRef]
  32. Hong, G.-S.; Kim, B.-G.; Dorra, D.P.; Roy, P.P. A Survey of Real-time Road Detection Techniques Using Visual Color Sensor. J. Multimed. Inf. Syst. 2018, 5, 9–14. [Google Scholar] [CrossRef]
  33. Park, H. Implementation of Lane Detection Algorithm for Self-driving Vehicles Using Tensor Flow. In International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing; Springer: Cham, Switzerland, 2018; pp. 438–447. [Google Scholar]
  34. El Hajjouji, I.; Mars, S.; Asrih, Z.; Mourabit, A.E. A novel FPGA implementation of Hough Transform for straight lane detection. Eng. Sci. Technol. Int. J. 2020, 23, 274–280. [Google Scholar] [CrossRef]
  35. Samadzadegan, F.; Sarafraz, A.; Tabibi, M. Automatic Lane Detection in Image Sequences for Vision-based Navigation Purposes. ISPRS Image Eng. Vis. Metrol. 2006. Available online: https://www.semanticscholar.org/paper/Automatic-Lane-Detection-in-Image-Sequences-for-Samadzadegan-Sarafraz/55f0683190eb6cb21bf52c5f64b443c6437b38ea (accessed on 12 August 2021).
  36. Cheng, H.-Y.; Yu, C.-C.; Tseng, C.-C.; Fan, K.-C.; Hwang, J.-N.; Jeng, B.-S. Environment classification and hierarchical lane detection for structured and unstructured roads. Comput. Vis. IET 2010, 4, 37–49. [Google Scholar] [CrossRef]
  37. Han, J.; Kim, D.; Lee, M.; Sunwoo, M. Road boundary detection and tracking for structured and unstructured roads using a 2D lidar sensor. Int. J. Automot. Technol. 2014, 15, 611–623. [Google Scholar] [CrossRef]
  38. Le, M.C.; Phung, S.L.; Bouzerdoum, A. Lane Detection in Unstructured Environments for Autonomous Navigation Systems. In Asian Conference on Computer Vision; Cremers, D., Reid, I., Saito, H., Yang, M.H., Eds.; Springer: Cham, Switzerland, 2015. [Google Scholar] [CrossRef]
  39. Wang, J.; Ma, H.; Zhang, X.; Liu, X. Detection of Lane Lines on Both Sides of Road Based on Monocular Camera. In Proceedings of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, China, 5–8 August 2018; pp. 1134–1139. [Google Scholar]
  40. YenIaydin, Y.; Schmidt, K.W. Sensor Fusion of a Camera and 2D LIDAR for Lane Detection. In Proceedings of the 2019 27th Signal Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4. [Google Scholar]
  41. Kemsaram, N.; Das, A.; Dubbelman, G. An Integrated Framework for Autonomous Driving: Object Detection, Lane Detection, and Free Space Detection. In Proceedings of the 2019 Third World Conference on Smart Trends in Systems Security and Sustainablity (WorldS4), London, UK, 30–31 July 2019; pp. 260–265. [Google Scholar] [CrossRef]
  42. Lee, C.; Moon, J.-H. Robust Lane Detection and Tracking for Real-Time Applications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1–6. [Google Scholar] [CrossRef]
  43. Son, Y.; Lee, E.S.; Kum, D. Robust multi-lane detection and tracking using adaptive threshold and lane classification. Mach. Vis. Appl. 2018, 30, 111–124. [Google Scholar] [CrossRef]
  44. Li, Q.; Zhou, J.; Li, B.; Guo, Y.; Xiao, J. Robust Lane-Detection Method for Low-Speed Environments. Sensors 2018, 18, 4274. [Google Scholar] [CrossRef] [Green Version]
  45. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2014, 42. [Google Scholar] [CrossRef]
  46. Chae, H.; Jeong, Y.; Kim, S.; Lee, H.; Park, J.; Yi, K. Design and Vehicle Implementation of Autonomous Lane Change Algorithm based on Probabilistic Prediction. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2845–2852. [Google Scholar] [CrossRef]
  47. Chen, P.R.; Lo, S.Y.; Hang, H.M.; Chan, S.W.; Lin, J.J. Efficient Road Lane Marking Detection with Deep Learning. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5. [Google Scholar]
  48. Lu, Z.; Xu, Y.; Shan, X. A lane detection method based on the ridge detector and regional G-RANSAC. Sensors 2019, 19, 4028. [Google Scholar] [CrossRef] [Green Version]
  49. Bian, Y.; Ding, J.; Hu, M.; Xu, Q.; Wang, J.; Li, K. An Advanced Lane-Keeping Assistance System with Switchable Assistance Modes. IEEE Trans. Intell. Transp. Syst. 2019, 21, 385–396. [Google Scholar] [CrossRef]
  50. Wang, G.; Hu, J.; Li, Z.; Li, L. Cooperative Lane Changing via Deep Reinforcement Learning. arXiv 2019, arXiv:1906.08662. [Google Scholar]
  51. Wang, P.; Chan, C.Y.; de La Fortelle, A. A reinforcement learning based approach for automated lane change maneuvers. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1379–1384. [Google Scholar]
  52. Suh, J.; Chae, H.; Yi, K. Stochastic model-predictive control for lane change decision of automated driving vehicles. IEEE Trans. Veh. Technol. 2018, 67, 4771–4782. [Google Scholar] [CrossRef]
  53. Gopalan, R.; Hong, T.; Shneier, M.; Chellappa, R. A learning approach towards detection and tracking of lane markings. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1088–1098. [Google Scholar] [CrossRef]
  54. Mueter, M.; Zhao, K. Method for Lane Detection. US20170068862A1. 2015. Available online: https://patents.google.com/patent/US20170068862A1/en (accessed on 12 August 2021).
  55. Joshi, A. Method for Generating Accurate Lane Level Maps. US9384394B2. 2013. Available online: https://patents.google.com/patent/US9384394B2/en (accessed on 12 August 2021).
  56. Kawazoe, H. Lane Tracking Control System for Vehicle. US20020095246A1. 2001. Available online: https://patents.google.com/patent/US20020095246 (accessed on 12 August 2021).
  57. Lisaka, A. Lane Detection Sensor and Navigation System Employing the Same. EP1143398A3. 1996. Available online: https://patents.google.com/patent/EP1143398A3/en (accessed on 12 August 2021).
  58. Zhitong, H.; Yuefeng, Z. Vehicle Detecting Method Based on Multi-Target Tracking and Cascade Classifier Combination. CN105205500A. 2015. Available online: https://patents.google.com/patent/CN105205500A/en (accessed on 12 August 2021).
  59. Fujii, S. Steering Support Device. JP6589941B2, 2019. Patentimages.storage.googleapis.com. 2021. Available online: https://patentimages.storage.googleapis.com/0b/d0/ff/978af5acfb7b35/JP6589941B2.pdf (accessed on 12 August 2021).
  60. Gurghian, A.; Koduri, T.; Nariyambut Murali, V.; Carey, K. Lane Detection Systems and Methods. US10336326B2. 2016. Available online: https://patents.google.com/patent/US10336326B2/en (accessed on 12 August 2021).
  61. Zhang, W.; Wang, J.; Lybecker, K.; Piasecki, J.; Brian Litkouhi, B.; Frakes, R. Enhanced Perspective View Generation in a Front Curb Viewing System Abstract. US9834143B2. 2014. Available online: https://patents.google.com/patent/US9834143B2/en (accessed on 12 August 2021).
  62. Vallespi-Gonzalez, C. Object Detection for an Autonomous Vehicle. US20170323179A1. 2016. Available online: https://patents.google.com/patent/US20170323179A1/en (accessed on 12 August 2021).
  63. Cu Lane Dataset. Available online: https://xingangpan.github.io/projects/CULane.html (accessed on 13 April 2020).
  64. Caltech Pedestrian Detection Benchmark. Available online: http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/ (accessed on 13 April 2020).
  65. Lee, E. Digital Image Media Lab. Diml.yonsei.ac.kr. 2020. Available online: http://diml.yonsei.ac.kr/dataset/ (accessed on 13 April 2020).
  66. Cvlibs.net. The KITTI Vision Benchmark Suite. Available online: http://www.cvlibs.net/datasets/kitti/ (accessed on 27 April 2020).
  67. Tusimple/Tusimple-Benchmark. Available online: https://github.com/TuSimple/tusimple-benchmark/tree/master/doc/velocity_estimation (accessed on 15 April 2020).
  68. Romera, E.; Luis, M.; Arroyo, L. Need Data for Driver Behavior Analysis? Presenting the Public UAH-Drive Set. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, 1–4 November 2016. [Google Scholar]
  69. BDD100K Dataset. Available online: https://mc.ai/bdd100k-dataset/ (accessed on 2 April 2020).
  70. Kumar, A.M.; Simon, P. Review of Lane Detection and Tracking Algorithms in Advanced Driver Assistance System. Int. J. Comput. Sci. Inf. Technol. 2015, 7, 65–78. [Google Scholar] [CrossRef]
  71. Hamed, T.; Kremer, S. Computer and Information Security Handbook, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2017; p. 114. [Google Scholar]
  72. Precision and Recall. Available online: https://en.wikipedia.org/wiki/Precision_and_recall (accessed on 13 January 2021).
  73. Fiorentini, N.; Losa, M. Long-Term-Based Road Blackspot Screening Procedures by Machine Learning Algorithms. Sustainability 2020, 12, 5972. [Google Scholar] [CrossRef]
  74. Wu, S.J.; Chiang, H.H.; Perng, J.W.; Chen, C.J.; Wu, B.F.; Lee, T.T. The heterogeneous systems integration design and implementation for lane keeping on a vehicle. IEEE Trans. Intell. Transp. Syst. 2008, 9, 246–263. [Google Scholar] [CrossRef]
  75. Liu, H.; Li, X. Sharp Curve Lane Detection for Autonomous Driving. Comput. Sci. Eng. 2019, 21, 80–95. [Google Scholar] [CrossRef]
  76. Han, J.; Yang, Z.; Hu, G.; Zhang, T.; Song, J. Accurate and robust vanishing point detection method in unstructured road scenes. J. Intell. Robot. Syst. 2019, 94, 143–158. [Google Scholar] [CrossRef]
  77. Tominaga, K.; Takeuchi, Y.; Tomoki, U.; Kameoka, S.; Kitano, H.; Quirynen, R.; Berntorp, K.; Di Cairano, S. GNSS Based Lane Keeping Assist System via Model Predictive Control. 2019. Available online: https://doi.org/10.4271/2019-01-0685 (accessed on 9 September 2021).
  78. Chen, Z.; Liu, Q.; Lian, C. PointLaneNet: Efficient end-to-end CNNs for Accurate Real-Time Lane Detection. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 2563–2568. [Google Scholar] [CrossRef]
  79. Feng, Y.; Rong-ben, W.; Rong-hui, Z. Research on Road Recognition Algorithm Based on Structure Environment for ITS. In Proceedings of the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management, Guangzhou, China, 3–4 August 2008; pp. 84–87. [Google Scholar] [CrossRef]
  80. Nieuwenhuijsen, J.; de Almeida Correia, G.H.; Milakis, D.; van Arem, B.; van Daalen, E. Towards a quantitative method to analyze the long-term innovation diffusion of automated vehicles technology using system dynamics. Transp. Res. Part C Emerg. Technol. 2018, 86, 300–327. [Google Scholar] [CrossRef] [Green Version]
  81. Stasinopoulos, P.; Shiwakoti, N.; Beining, M. Use-stage life cycle greenhouse gas emissions of the transition to an autonomous vehicle fleet: A System Dynamics approach. J. Clean. Prod. 2021, 278, 123447. [Google Scholar] [CrossRef]
Figure 1. Flowchart showing the methodology adopted for the review.
Figure 3. Lane detection efficiency on unstructured roads is affected by shadows, heavy rain and low or high illumination.
Figure 4. Challenge in lane-marking detection: a stopped vehicle occludes the nearby lane marking.
Table 1. SWOT analysis of sensors that are used in ADAS.
Columns: Type of Sensors | Relative Velocity | Measured Distance | Strengths | Weaknesses | Application | Opportunities | Threats | Perceived Energy | Recognizing Vehicle
LASER-based sensors | Relative velocity: derivative of range | Measured distance: time of flight | Strengths: reliable for automatic car parking and collision mitigation | Weaknesses: vulnerable to dirty lenses and reduced reflecting targets | Application: collision warning, automatic parking assistance | Opportunities: gives warnings for excessive load or strain | Threats: failure due to inclement weather | Perceived energy: emitted laser waves of 600–1000 nm | Recognizing vehicle: resolved via spatial segmentation and motion
RADAR-based sensors | Relative velocity: frequency | Measured distance: time of flight | Strengths: suitable for collision mitigation and adaptive cruise control | Weaknesses: vulnerable and sometimes fails in extreme weather conditions | Application: collision warning, automatic parking assistance | Opportunities: better accuracy and requires no attention | Threats: inappropriate and difficult implementation by non-professionals | Perceived energy: emitted radio waves (millimetre) | Recognizing vehicle: resolved via tracking
Vision-based sensors | Relative velocity: derivative of range | Measured distance: model parameter | Strengths: readily available and affordable in the automobile sector | Weaknesses: vulnerable to extreme weather conditions and sometimes fails at night | Application: collision warning, automatic parking assistance | Opportunities: low cost, passive non-invasive sensors and low operating power | Threats: less effective in bad weather and under complex illumination and shadow | Perceived energy: ambient visible light | Recognizing vehicle: resolved via motion and appearance
Table 6. Summary of patents for lane detection and tracking algorithms.
Columns: Country | Patent No | Assignee | Method | Key Finding | Approach | Inventor
USA | US20170068862A1 | Aptiv Technologies Ltd. | A camera-based vision driver assistance system. | State estimation and separate progression. | Feature-based approach | Mirko Mueter, Kun Zhao
USA | US9384394B2 | Toyota Motor Corporation | Generates accurate lane estimation using coarse map information and LIDAR sensors. | Centre of the lane and multiple lanes. | Model-based approach | Avdhut Joshi and Michael James
USA | US20020095246A1 | Nissan Motor Co., Ltd. | The controller is designed to detect lanes by controlling the steering angle when the vehicle moves out of the desired track. | Measures the output of the signal. | Learning-based approach | Hiroshi Kawazoe
Europe | EP1143398A3 | Panasonic Corporation | An extraction method using the Hough transform to detect lanes on the opposite side of the road. | Determines the maximum value of the accumulators. | Feature-based approach | Atsushi Lisaka, Mamoru Kaneko and Nobohiko Yasui
China | CN105205500A | Beijing University of Posts and Telecommunications | Computer graphics and vision-based technology with multi-target filtering and sorter training. | Multi-target tracking and a cascade classifier with high detection processing speed. | Model-based approach | Zhitong, H. and Yuefeng, Z.
Japan | JP6589941B2 | Not available | A steering assist device for lane detection and tracking under periphery monitoring. | Identifies the relative position of the host vehicle and its relation to the lane. | Model-based approach | Shota Fujii
USA | US10336326B2 | Ford Global Technologies LLC | A deep learning-based front-facing camera lane detection method. | Extracts features of lane boundaries with the help of a camera mounted at the front. | Feature-based approach | Alexandru Gurghian, Tejaswi Koduri, Vidya Nariyambut Murali, Kyle J. Carey
USA | US9834143B2 | GM Global Technology Operations LLC | An improved perspective view is produced using a new camera imaging surface model and other distortion-correction techniques. | Improves the forward perspective view of the vehicle for lane detection and tracking. | Feature-based approach | Wende Zhang, Jinsong Wang, Kent S. Lybecker, Jeffrey S. Piasecki, Bakhtiar Brian Litkouhi, Ryan M. Frakes
USA | US20170323179A1 | Uber Technologies Inc. | A sensor-fusion data processing technique is used for surrounding-object detection and lane detection. | Generates 3D environmental data through sensor fusion to guide the autonomous vehicle. | Learning-based approach | Carlos Vallespi-Gonzalez
Table 7. A summary of datasets that have been used in the literature for verification of the algorithms.
Columns: Dataset | Features | Strength | Weakness
CULane [63] | Features: 55 h of video and 133,235 extracted frames: 88,880 for the training set, 9675 for the validation set and 34,680 for the test set. | Strength: unseen or occluded lane markings are annotated manually with cubic splines. | Weakness: except for four lane markings, others are not annotated.
Caltech [64] | Features: 10 h of 640 × 480 video of regular traffic in an urban environment; 250,000 frames and 350,000 bounding boxes annotated with occlusion and temporal correspondence. | Strength: the entire dataset is annotated; testing data (set06–set10) and training data (set00–set05) are provided, each about 1 GB. | Weakness: not applicable to all types of road geometries and weather conditions.
Custom data (collected using a test vehicle) | Features: not applicable. | Strength: available according to the requirements. | Weakness: time-consuming and highly expensive.
DIML [65] | Features: multimodal dataset (Sony Cyber-shot DSC-RX100 camera, 5 different photometric variation pairs); RGB-D dataset (more than 200 indoor/outdoor scenes, with Kinect v2 and ZED stereo cameras providing the RGB-D frames); lane dataset (470 video sequences of downtown and urban roads); emotion recognition dataset (CAER: more than 13,000 videos and 13,000 annotated videos); CoVieW18 dataset (untrimmed video samples, 90,000 YouTube video URLs). | Strength: different scenarios are covered, such as traffic jams, pedestrians and obstacles. | Weakness: data for different weather conditions and lanes with no markings are missing.
KITTI [66] | Features: contains stereo, optical flow, visual odometry, etc.; includes an object detection dataset with monocular images and bounding boxes, 7481 training images and 7518 test images. | Strength: evaluation includes bird's-eye-view orientation estimation; applicable to real-time object detection and 3D tracking; evaluation metrics are provided. | Weakness: only 15 cars and 30 pedestrians have been considered while capturing images; covers rural and highway road datasets.
TuSimple [67] | Features: training: 3222 annotated vehicles at 20 frames per second for 1074 clips from 25 videos; testing: 269 video clips; supplementary data: 5066 images of vehicle position and velocity marked by range sensors. | Strength: a lane detection challenge, a velocity estimation challenge and ground truths are provided. | Weakness: a calibration file for lane detection has not been provided.
UAH [68] | Features: raw real-time data (raw GPS, raw accelerometers); processed data as continuous variables (pro lane detection, pro vehicle detection and pro OpenStreetMap data); processed data as events (event lists of lane changes and inertial events); semantic information (semantic final and semantic online). | Strength: more than 500 min of naturalistic driving and processed semantic information are provided. | Weakness: limited accessibility for the research community.
BDD100K [69] | Features: 100,000 videos totalling more than 1000 h; road object detection, drivable area, segmentation and full-frame semantic segmentation. | Strength: IMU data, timestamps and localization are included in the dataset. | Weakness: data for unstructured roads are not covered.
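As a concrete example of working with one of these datasets, the TuSimple benchmark [67] distributes its lane ground truth as JSON lines, where each record pairs per-lane x-coordinates with a shared list of y-coordinates. A minimal parsing sketch, assuming that published layout (fields `lanes`, `h_samples` and `raw_file`), is:

```python
import json

def load_tusimple_labels(path):
    """Parse TuSimple-style JSON-lines lane annotations into (x, y) points."""
    samples = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            lanes = []
            for xs in rec["lanes"]:
                # x == -2 marks rows where this lane has no visible point.
                pts = [(x, y) for x, y in zip(xs, rec["h_samples"]) if x >= 0]
                lanes.append(pts)
            samples.append({"image": rec["raw_file"], "lanes": lanes})
    return samples
```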
Table 8. Performance metrics for verification of lane detection and tracking algorithms, compiled from ref. [70].
Columns: Possibility | Condition 1 | Condition 2
True positive | Ground truth exists in the image | The algorithm detects lane markers.
False positive | No ground truth exists in the image | The algorithm detects lane markers.
False negative | Ground truth exists in the image | The algorithm does not detect lane markers.
True negative | No ground truth exists in the image | The algorithm does not detect anything.
Table 9. A summary of the equations of metrics used for evaluation of the performance of the algorithms, compiled from refs. [71,72].
Columns: Sr. No | Metric | Formula *
1. Accuracy (A) | A = (TP + TN) / (TP + TN + FP + FN)
2. Detection rate (DR) | DR = TP / (TP + FN)
3. False positive rate (FPR) | FPR = FP / (FP + TN)
4. False negative rate (FNR) | FNR = FN / (FN + TP)
5. True negative rate (TNR) | TNR = TN / (TN + FP)
6. Precision | Precision = TP / (TP + FP)
7. F-measure | F-measure = (2 × Recall × Precision) / (Recall + Precision)
8. Error rate | Error = (FP + FN) / (FP + FN + TP + TN)
* Where TP = true positive, i.e., a lane marking exists and the algorithm detects it; FP = false positive, i.e., the algorithm detects a lane marking where none exists; TN = true negative, i.e., no ground truth exists in the image and nothing is detected; FN = false negative, i.e., the algorithm fails to detect an existing lane marking.
Table 10. SWOT analysis of different approaches used for lane detection and tracking algorithms.
Columns: Method | Strength | Weakness | Opportunities | Threats
Feature-based approach | Strength: feature extraction is used to determine false lane markings. | Weakness: time-consuming. | Opportunities: better performance in optimization. | Threats: less effective for complex illumination and shadow.
Learning-based approach | Strength: easy and reliable method. | Weakness: mismatching lanes. | Opportunities: computationally more efficient. | Threats: performance drops due to inclement weather.
Model-based approach | Strength: camera quality improves system performance. | Weakness: expensive and time-consuming. | Opportunities: robust performance of the lane detection model. | Threats: difficult to mount a sensor fusion system for complex geometry.
Table 11. Lane detection under different conditions to identify the gaps in knowledge.
Columns: Sources | Road geometry (straight, clothoid, hyperbola) | Pavement marking (structured, unstructured) | Weather condition (day, night, rain) | Speed
[26] Borkar et al. (2009)------------
[28] Lu et al. (2002)------------
[29] Zhang & Shi (2009)------------
[32] Hong et al. (2018)------------
[33] Park, H. et al. (2018)----------Low (40 km/h) & high (80 km/h)
[34] El Hajjouji et al. (2019)--------120 km/h
[35] Samadzadegan et al. (2006)----------
[36] Cheng et al. (2010)----------
[40] Yeniaydin et al. (2019)----------
[41] Kemsaram et al. (2019)------------
[43] Son et al. (2019)--------
[47] Chen et al. (2018)----------
[52] Suh et al. (2019)--------60–80 km/h
[53] Gopalan et al. (2018)--------
[74] Wu et al. (2008)----------40 km/h
[75] Liu & Li et al. (2018)------
[76] Han et al. (2019)--------30–50 km/h
[77] Tominaga et al. (2019)------------80 km/h
[78] Chen Z et al. (2019)------------
[79] Feng et al. (2019)----120 km/h
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
