Article

A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection †

1 Department of Electrical Engineering, Advanced Institute of Manufacturing with High-Tech Innovation, National Chung Cheng University, Chiayi 621, Taiwan
2 Department of Electrical Engineering, National Chung Cheng University, Chiayi 621, Taiwan
* Author to whom correspondence should be addressed.
This paper is an extended version of Jyun-Min Dai; Lu-Ting Wu; Huei-Yung Lin and Wen-Lung Tai, A driving assistance system with vision based vehicle detection techniques. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016.
Sensors 2020, 20(18), 5139; https://doi.org/10.3390/s20185139
Submission received: 4 August 2020 / Revised: 4 September 2020 / Accepted: 4 September 2020 / Published: 9 September 2020
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)

Abstract

One major concern in the development of intelligent vehicles is to improve driving safety. It is also an essential issue for future autonomous driving and intelligent transportation. In this paper, we present a vision-based system for driving assistance. A front and a rear on-board camera are adopted for visual sensing and environment perception. The purpose is to avoid potential traffic accidents due to forward collision and vehicle overtaking, and to assist drivers or self-driving cars in performing safe lane change operations. The proposed techniques consist of lane change detection, forward collision warning, and overtaking vehicle identification. A new cumulative density function (CDF)-based symmetry verification method is proposed for the detection of front vehicles. The motion cue obtained from optical flow is used for overtaking detection. It is further combined with a convolutional neural network to remove repetitive patterns for more accurate overtaking vehicle identification. Our approach is able to adapt to a variety of highway and urban scenarios under different illumination conditions. The experiments and performance evaluation carried out on real scene images have demonstrated the effectiveness of the proposed techniques.

1. Introduction

Nowadays, traffic safety has become a significant issue due to the rapid worldwide growth in the number of vehicles. Most vehicles move at high speed, particularly on highways, to provide convenient transportation and shorten travel time. Nevertheless, this also brings the danger of potential traffic accidents among road users, and a considerable number of accidents are caused by driver fatigue or distraction. Thus, the development of driver assistance systems is important and of great practical value for road safety. In general, these systems are used to improve traffic efficiency and prevent traffic accidents. They also serve as key components of autonomous vehicles, especially for long-distance highway travel [1]. Several related fields, including intelligent vehicles, advanced safety vehicles, and intelligent transportation systems, have attracted researchers and practitioners to investigate driver assistance technologies.
According to the report of the Taiwan Area National Freeway Bureau [2], the major traffic accidents on highways are caused by "improper lane changes", followed by "not paying attention to the state in front of the vehicle". Most of the dangerous situations are due to insufficient safe driving distances, lane changing, and vehicle overtaking. Vehicle drivers usually assess the surrounding traffic before changing lanes, and make turns after checking the rear-view and side mirrors. Thus, effective front and side monitoring with rapid vehicle detection is important. It should provide the necessary surrounding information to the driver and issue timely warnings to avoid possible collisions.
In recent years, advanced driver assistance systems (ADAS) have achieved great advances in vehicle safety. Many techniques have been proposed for better environment perception and decision making. Some typical applications for important ADAS modules include lane change detection (LCD), forward collision warning (FCW), and overtaking vehicle identification (OVI) [3,4,5]. LCD provides the in-lane position or lane departure information by identifying road markings. The objective of FCW is to perform front object detection and generate an alert to the driver to prevent an impending collision. On the other hand, OVI aims to detect vehicles approaching from the rear, and provides a timely warning before making a turn or changing lanes. At present, many high-end vehicles are equipped with ADAS directly from car manufacturers, while there also exist after-market onboard traffic recorders with various ADAS functions.
For the development of advanced driver assistance systems, many different sensors, such as radar, lidar, and cameras, are adopted. Laser and radar are two commonly used sensors for distance measurement and forward collision warning. They have many advantages, including the capability of detecting long-range obstacles, even under rainy or cloudy weather conditions or at night. However, in addition to the expensive hardware cost, these approaches also lack sufficient object recognition ability to analyze traffic scenes. To identify individual lanes and drivable areas, visual sensors are commonly adopted for road surface marking detection. The transportation infrastructure can also be integrated with GPS-based vehicle localization for lane change assistance. For blind spot or rear obstacle detection, radar-based sensing techniques are commonly used [6].
To reduce the cost of driver assistance systems, vision-based techniques which adopt low-cost cameras as sensing devices have been proposed in recent years [7]. They have attracted considerable attention due not only to the inexpensive hardware compared to radar or lidar technologies, but also to the significant advances and maturity of computer vision and machine learning algorithms. The rich sensing information of the environment provided by cameras can also be used to extend the functions of driver assistance systems. Some applications under investigation include traffic light detection [8], traffic sign recognition [9], vehicle identification [10], vehicle speed detection [11], pedestrian recognition [12], overtaking detection [13], and parking assistance [14]. Thus, despite the limited depth measurement precision under varying illumination conditions, image-based approaches still have great potential for vehicular system development.
The challenges of lane marking and vehicle detection include complex road conditions, dynamic scenes, and changing illumination. Moreover, robustness and real-time performance are also key criteria for the algorithm design and system implementation. Thus, we mainly adopt feature-based approaches in this work, with learning-based methods used for validation purposes. A schematic diagram of the proposed system is depicted in Figure 1. The lane markings and front vehicles are detected and identified first. New algorithms are proposed to improve the detection accuracy for complex scenarios. A symmetry analysis technique is then adopted for the verification of vehicle detection results. For overtaking vehicle detection, a motion-based method is used for continuous tracking [15], followed by a convolutional neural network (CNN) for target region classification and identification.
In this paper, we propose an advanced driver assistance system using two cameras to capture image data from the front and rear views of the vehicle. Our objective is to develop efficient and robust techniques for lane change detection, forward collision warning, and overtaking vehicle identification. In the past few decades, the cost of image data computation has been greatly reduced due to the prevalence of graphics processing units. This hardware progress makes visual data processing approaches suitable for ADAS applications. Unlike conventional radar-based approaches [16], our vision-based system can enhance the performance of existing object identification and lane detection techniques. When the camera is properly mounted and calibrated, it can be further integrated with other computer vision algorithms to provide better safety protection to drivers without extra hardware cost. The main contributions of this work include the following.
  • A driver assistance system including lane change detection, forward collision warning, and overtaking vehicle identification is presented.
  • We propose a new method for front vehicle detection using an adaptive ROI and the CDF-based verification.
  • The proposed overtaking detection approach with CNN-based classification is used to solve the difficult repetitive pattern problem.
  • The vision-based system is developed with comprehensive camera calibration procedures and evaluated on real traffic scene experiments.

2. Related Work

In recent years, many vision-based techniques for advanced driver assistance systems have been investigated. The versatility of vision systems provides rich information such as color, shape, and depth at a low hardware cost. The previous work on lane change detection can be categorized into three different approaches: feature-based, model-based, and inverse perspective transform techniques. In feature-based methods, the road markings [17,18] or the lane lines [19] are taken as low-level features to locate the lane position in the image. On the other hand, pre-defined curves (e.g., straight line or parabola functions [20]) are used to approximate the lanes via parameter estimation in model-based methods. The inverse perspective transform methods take the 3D space as input and perform a linear transformation to a 2D space for processing [21,22]. In addition to these approaches, there are other techniques, such as transforming front-view images to top-view images for straight line detection [23] and lane detection and tracking using deformable models [24].
The existing techniques for forward collision warning include feature-based [25], stereo-based [4,26], motion-based [27,28], and learning-based [29,30] approaches. To detect the front vehicle, the underneath shadow is commonly adopted as a pattern from visual cues [31]. The shadow image region obtained from thresholding or gradients can be used to locate the front vehicle. Since the shadow intensity depends on the illumination condition of the scene, a fixed threshold is in general not suitable for detection. Adaptive thresholds will be required for different environments. Moreover, the candidate vehicle region associated with the shadow needs to be verified by symmetry [32,33] or training data. In [34], the feature matching and 3D feature clustering are used to select reliable features for a stereo-based technique. A transform model is estimated using modified RANSAC for vehicle detection and tracking. Lai et al. use the characteristics of regional color change and optical flow to identify the vehicle for collision avoidance [35]. Chen et al. adopt a multi-layer artificial neural network for robust detection of critical motion from vehicle contextual information [36]. Although the learning-based approach provides better results, it usually has high computational complexity and hardware requirements.
For overtaking detection, the cameras can be installed in the front, rear, or sides of the vehicle. Hultqvist et al. and Chen et al. propose efficient vehicle identification algorithms using optical flow [37,38]. A single camera is placed in the front of the vehicle, and optical flow is evaluated along the lines parallel to the motion of the overtaking vehicles. Based on the features tracked along the lines, the position of the overtaking vehicle is estimated. However, due to the limited field-of-view of the camera mounted in the front, these methods are not able to notify the driver about the occurrence of overtaking in advance. Alternatively, Alonso et al. present an overtaking detection technique for the blind zone using cameras mounted on the side-view mirrors [39]. Wu et al. also utilize cameras installed below the side-view mirrors to develop an embedded blind-zone security system [40]. The low-level features such as shadows, edges, and headlights are first detected for vehicle localization, followed by tracking and behavior analysis.

3. Lane Change Detection

In our lane change detection approach, the input is roughly the lower half of the original image, with unnecessary regions such as the sky and buildings removed. A region of interest (ROI) is further defined before performing the low-level feature-based lane marking detection algorithm. To choose a suitable ROI for line detection, most current methods use the range above the free space and under the horizon. However, this can easily cover extra regions on both sides of the image, which are usually not necessary for processing. To deal with this problem, a heuristic trapezoid region is set as our ROI for line detection.

3.1. Lane Marking Detection in Daytime

For a given ROI, a gradient map is obtained with the Sobel operator and used to detect the lane markings and vehicle shadows for lane departure and forward collision warning. The gradient direction of each pixel is also used to restrict the orientation of the lines. To identify the road lanes, the gradient image is binarized for line detection. Connected component labeling is then carried out to reduce noise in the binary image. Due to the different characteristics of the scenes associated with different distances, the image is partitioned into upper, middle, and lower regions, and the associated edge image regions are derived with different thresholds. Line detection using the Hough transform is then carried out with the orientation constraints specified by the vanishing point to identify the lane markings.
In this work, we restrict the gradient directions of interest to 15°–85° and 95°–165° for the left and right lane markings, respectively. This range suppresses the horizontal and vertical lines that frequently appear in environments with man-made structures. Figure 2 shows the lane change status, left, right, and normal, based on the Hough line detection results. In real scenes, the lane shift direction cannot be uniquely determined without sequential observation. To increase the robustness of lane change detection, a finite state machine is adopted. In our approach, only the 'normal' state can change to 'shift right' or 'shift left' and update the lane change status. As shown in the experiments, this method successfully reduces the false alarms caused by lane markings occasionally occluded by the front vehicles.
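To make the processing concrete, the following sketch illustrates one way to implement this step with OpenCV: Sobel gradients, a binary edge map, and a probabilistic Hough transform filtered by the orientation ranges above. The numeric thresholds are illustrative placeholders rather than values from the paper, and the assignment of the two angle ranges to the left and right markings follows the statement above under the paper's angle convention.

```python
import cv2
import numpy as np

def detect_lane_lines(gray_roi):
    """Sketch of the daytime lane marking step: Sobel gradients,
    binarization, and Hough line detection with orientation limits."""
    # Gradient magnitude from the Sobel operator.
    gx = cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1, ksize=3)
    edges = (cv2.magnitude(gx, gy) > 80).astype(np.uint8) * 255  # illustrative threshold

    # Probabilistic Hough transform on the binary edge map.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    left, right = [], []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
            # Keep only the orientations allowed for the lane markings.
            if 15 <= angle <= 85:
                left.append((x1, y1, x2, y2))
            elif 95 <= angle <= 165:
                right.append((x1, y1, x2, y2))
    return left, right
```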

3.2. Image Enhancement for Low Light Scenarios

To increase the contrast of an image, gamma correction on the intensity is commonly adopted. In this work, a variant of the gamma correction function is proposed to solve the contrast problems associated with low light traffic scenarios. The conventional gamma correction function utilizes a nonlinear operation to encode and decode luminance or tristimulus values in video or still image systems. It is defined by the power-law conversion
$$I_{out} = c \cdot I_{in}^{\gamma}$$
where the input $I_{in}$ and output $I_{out}$ are non-negative values and $c$ is a constant.
For regular images with the intensity range 0 to 255, the constant is given by $c = 255/255^{\gamma}$, where $\gamma$ is less than 1. This process adjusts the pixel value and makes it brighter than its original intensity. We use the gamma correction function defined by
$$I_{out} = \begin{cases} I_{in} \cdot I_{in}^{\gamma}, & \text{if } I_{in} \leq T \\ (I_{in} \cdot I_{in}^{\gamma})/256, & \text{otherwise} \end{cases}$$
to calculate the value of each pixel. In the above equation, the constant $c$ is changed to $I_{in}$ for lane marking detection specifically in low light conditions. The threshold and gamma value are set empirically as $T = 52$ and $\gamma = 0.4$ for general night scenes. Figure 3 shows the results using different gamma correction approaches. The lane marking detection result using the proposed gamma correction formulation is shown in Figure 3d.
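A minimal NumPy sketch of this piecewise mapping is given below. The direction of the threshold comparison is inferred from the choice $T = 52 \approx 255^{1/1.4}$, which keeps the boosted branch within the 8-bit range, so the sketch should be read as a reconstruction rather than a definitive implementation.

```python
import numpy as np

def enhance_low_light(gray, gamma=0.4, T=52):
    """Sketch of the modified gamma correction where the constant c is
    replaced by the pixel intensity itself (T and gamma from the text)."""
    I = gray.astype(np.float32)
    boosted = I * np.power(I, gamma)                 # I_in * I_in^gamma
    out = np.where(I <= T, boosted, boosted / 256.0) # rescale the bright branch
    return np.clip(out, 0, 255).astype(np.uint8)
```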

4. Forward Collision Warning

In our forward collision warning module, the input is the gradient image and lane marking detection results. The cast shadow is used for front vehicle detection since it is the most prominent feature in daytime. We propose the combination of an adaptive ROI and a fixed trapezoid for vehicle detection. As shown in Figure 4a, the adaptive ROI is created on top of a fixed trapezoid. A 1-D spatial gradient in the y direction is first computed, and the image is then binarized. The adaptive ROI in the figure is related to the width of the vehicle’s vertical edge pair. We define 3 × 3 and 11 × 11 convolution masks for erosion and dilation, respectively. Figure 4b shows the results after the morphological operations. The vertical edge pair is then detected by the histogram of vertical projection using several intervals. As illustrated in Figure 4c, using the intervals with different widths on the vertical projection is able to mitigate the influence of noise.
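As a rough illustration of this step, the sketch below computes the y-direction gradient, applies the 3x3 erosion and 11x11 dilation masks mentioned above, and builds the interval-based vertical projection; the binarization threshold and interval width are assumptions for illustration.

```python
import cv2
import numpy as np

def vertical_edge_projection(gray_roi, interval=8):
    """Sketch of the vertical edge pair search: 1-D y-gradient, binarization,
    morphological clean-up, and a column-wise projection histogram."""
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1, ksize=3)
    binary = (np.abs(gy) > 60).astype(np.uint8)           # assumed threshold

    # 3x3 erosion followed by 11x11 dilation, as described in the text.
    binary = cv2.erode(binary, np.ones((3, 3), np.uint8))
    binary = cv2.dilate(binary, np.ones((11, 11), np.uint8))

    # Vertical projection: edge pixels per column, accumulated per interval.
    projection = binary.sum(axis=0)
    bins = [int(projection[i:i + interval].sum())
            for i in range(0, projection.size, interval)]
    return projection, bins
```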

4.1. Front Vehicle Detection in Daytime

To detect the front vehicle location, a cumulative density function (CDF) given by
$$A_j = \frac{1}{P_{total}} \sum_{i=0}^{j} c_i$$
is used to search for the under-vehicle shadow, where $c_i$ is the number of pixels with intensity value $i$, $P_{total}$ is the total number of pixels in the ROI, and $A_j$ is the percentage of pixels with intensity value less than or equal to $j$. Suppose the under-vehicle shadow is in the range $[A_1, A_2]$; two thresholds $T_1$ and $T_2$ are defined by
$$T_j = \arg\min_{T} \left\{ \sum_{i=0}^{T} c_i \geq A_j \cdot P_{total} \right\}$$
where $j = 1, 2$. The threshold for the under-vehicle shadow is then computed by
$$T_{shadow} = T_1 + \alpha (T_2 - T_1).$$
Based on our evaluation, the under-vehicle shadow is generally in the range of 0.5% to 15%, and $\alpha$ is set as 0.6.
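The threshold computation reduces to a few lines; the sketch below follows the equations above, with the 0.5%/15% range and $\alpha = 0.6$ taken from the text.

```python
import numpy as np

def shadow_threshold(roi_gray, A1=0.005, A2=0.15, alpha=0.6):
    """Sketch of the CDF-based threshold: find the intensities T1 and T2
    where the cumulative histogram reaches A1 and A2, then interpolate."""
    hist = np.bincount(roi_gray.ravel(), minlength=256).astype(np.float64)
    cdf = np.cumsum(hist) / hist.sum()         # A_j for every intensity j
    T1 = int(np.searchsorted(cdf, A1))         # smallest T with CDF >= A1
    T2 = int(np.searchsorted(cdf, A2))         # smallest T with CDF >= A2
    return T1 + alpha * (T2 - T1)              # T_shadow
```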
Given the threshold for under-vehicle shadow detection, the image is binarized, followed by morphological operations and connected component labeling for noise reduction. The horizontal projection of the shadow is calculated and used to derive the distance of the front vehicle. It is updated continuously and verified against the lane width $W_{lane}$ using
$$\lambda_1 \cdot W_{lane} < W_{hp} < \lambda_2 \cdot W_{lane}$$
where $W_{hp}$ is the length of the horizontal projection and $0 < \lambda_1, \lambda_2 < 1$. The front vehicle is then identified based on symmetry detection using SURF features. We define two $6 \times 3$ matrices
$$A = \begin{bmatrix} a_{02} & a_{01} & a_{00} \\ a_{05} & a_{04} & a_{03} \\ a_{08} & a_{07} & a_{06} \\ a_{11} & a_{10} & a_{09} \\ a_{14} & a_{13} & a_{12} \\ a_{17} & a_{16} & a_{15} \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} b_{00} & b_{01} & b_{02} \\ b_{03} & b_{04} & b_{05} \\ b_{06} & b_{07} & b_{08} \\ b_{09} & b_{10} & b_{11} \\ b_{12} & b_{13} & b_{14} \\ b_{15} & b_{16} & b_{17} \end{bmatrix}.$$
If there exist SURF features in a $15 \times 15$ sub-image block, the corresponding value of $a_i$ or $b_i$ is marked as 1, where $i = 0, 1, \ldots, 17$; otherwise, it is set to 0. We then compare the elements of $A$ and $B$, and the number of entries with $a_i = b_i$ is used to evaluate the possibility of a vehicle.
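A possible realization of this symmetry check is sketched below: SURF keypoints are binned into a 6x3 block grid on each half of the candidate region, the left half is mirrored, and the matching entries are counted. The block size here is derived from the candidate dimensions rather than fixed at 15x15, and SURF requires the opencv-contrib nonfree build, so this is an approximation of the procedure rather than the exact implementation.

```python
import cv2
import numpy as np

def symmetry_score(candidate_gray):
    """Sketch of the SURF-based symmetry verification with the 6x3 block
    matrices A (mirrored left half) and B (right half)."""
    # Requires opencv-contrib with nonfree modules; any keypoint detector
    # could stand in for SURF here.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints = surf.detect(candidate_gray, None)

    h, w = candidate_gray.shape
    rows, cols = 6, 3                        # 6x3 blocks per half
    occ = np.zeros((rows, 2 * cols), np.uint8)
    for kp in keypoints:
        c = min(int(kp.pt[0] * 2 * cols / w), 2 * cols - 1)
        r = min(int(kp.pt[1] * rows / h), rows - 1)
        occ[r, c] = 1                        # block contains a SURF feature

    A = occ[:, :cols][:, ::-1]               # left half, mirrored column order
    B = occ[:, cols:]                        # right half
    return int((A == B).sum())               # number of entries with a_i = b_i
```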

4.2. Front Vehicle Detection in Nighttime

One major issue of nighttime images acquired by a conventional low dynamic range (LDR) driving recorder is the intensity saturation of vehicle tail-lights, as illustrated in Figure 5a. A similar saturation case is also caused by the reflection of headlights on the rear bumper and tailgate of the front vehicle as shown in Figure 5b,c. To deal with this problem, a nighttime vehicle detection algorithm is proposed. We define an inverted V-shaped ROI and use Equation (1) for image gamma correction. The average intensity of each row in the ROI is calculated, and the brightest area indicates the tail-light location. To suppress the influence of the road surface reflection, the average intensity histogram is generated using several frames. This works well because the front tail-light location usually does not change too much in the image but the ground intensity values tend to have a large variation during the vehicle motion.
Given the vertical location of the brightest area associated with the tail-lights, adaptive thresholding is carried out in this region to obtain a rough vehicle width. To identify the front vehicle at nighttime, we further detect the lower edge of the rear bumper using the CDF in Equation (3), with $A_j$ greater than a threshold. The image is binarized and followed by morphological processing for noise removal. The vehicle width is then determined by the leftmost and rightmost linked binary pixels. This nighttime vehicle detection process is shown in Figure 6 for a better illustration. Figure 6a,e show the search area and the detected vehicle region, respectively.
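The tail-light row localization described above can be sketched in a few lines; averaging the per-row intensity profile over several frames is what suppresses the fluctuating road-surface reflections.

```python
import numpy as np

def taillight_row(roi_gray_frames):
    """Sketch of locating the tail-light row at night from the average
    intensity of each row, accumulated over several frames."""
    row_means = np.stack([f.mean(axis=1) for f in roi_gray_frames])
    profile = row_means.mean(axis=0)       # temporal average per row
    return int(np.argmax(profile))         # brightest row = tail-light area
```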

5. Overtaking Vehicle Detection

The proposed overtaking vehicle detection approach consists of four stages: (1) pre-processing and segmentation, (2) convolutional neural network, (3) repetitive pattern removal, and (4) tracking and behavior detection, as illustrated in Figure 7. A single camera is mounted on the rear windshield for image acquisition.

5.1. Pre-Processing and Segmentation

The ROI for overtaking vehicle detection is first defined by removing regions such as the sky and buildings, before performing the image segmentation algorithm. To obtain the motion cues in an image, edges and corners are commonly used features for tracking. However, it is difficult to determine the number of features for cluttered scenes, which results in uncertain computation time. To overcome this problem, we set a fixed number of feature points for each 10 × 10 sub-image region. The pyramid model of the Lucas–Kanade tracker is then used to calculate the optical flow of the feature points [41]. According to the direction and strength, the results are divided into five categories: (i) road and sky, (ii) lane marking, (iii) overtaking vehicle, (iv) object further away, and (v) uncertain region, as shown in Figure 8.
When moving forward, there is a large amount of optical flow around the vehicle, while it is relatively small for the road and sky, so this characteristic can be used to identify different regions. If the y-direction optical flow in the image coordinates is very large and moves toward the vanishing point, the point is classified as lane marking. The points are considered as part of an overtaking vehicle if the x-direction optical flow is large and moves away from the vanishing point. On the other hand, an object is moving away if the x-direction optical flow is large and moves towards the vanishing point. Finally, the situations which do not belong to any of these criteria are assigned to the uncertain region. Here, only the x-direction of the optical flow is used to distinguish the overtaking vehicle from the object moving further away, and the y-direction optical flow is not taken into account. This is due to the small displacement of the motion vector in the y-direction, which usually introduces more correspondence mismatching (see Figure 9).
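The labelling rules above can be written compactly as follows; the flow-strength threshold is a placeholder, and the vanishing point is assumed to be estimated separately (e.g., from the lane markings).

```python
import numpy as np

def classify_flow(points, flows, vanishing_point, t_strong=4.0):
    """Sketch of the motion-cue labelling of tracked points based on the
    flow direction relative to the vanishing point."""
    vx0, vy0 = vanishing_point
    labels = []
    for (x, y), (u, v) in zip(points, flows):
        away_x = (x - vx0) * u > 0        # x-flow points away from the VP
        toward_y = (y - vy0) * v < 0      # y-flow points toward the VP
        if abs(u) < t_strong and abs(v) < t_strong:
            labels.append("road_or_sky")          # little motion
        elif abs(v) >= t_strong and toward_y:
            labels.append("lane_marking")
        elif abs(u) >= t_strong and away_x:
            labels.append("overtaking_vehicle")   # moving away from the VP
        elif abs(u) >= t_strong and not away_x:
            labels.append("object_further_away")  # moving toward the VP
        else:
            labels.append("uncertain")
    return labels
```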
When overtaking vehicles are approaching, the spatial correlation between the vehicles is used for region filtering. The tracking points of the objects are accumulated into a grayscale image and binarized. A morphological dilation operation is carried out on the tracking points for merging and isolated region removal. The vehicle is then identified using the image segmentation result. One common problem of overtaking scenes is false correspondence matching between the vehicles and the background. The repetitive patterns generated from the constantly changing appearance cause incorrect target tracking by the optical flow algorithm. Since a repetitive pattern usually appears in the same image region during a short period of time, it is not possible to use the spatio-temporal correlation of the image pixels to filter out the mismatched features. To deal with this problem, it is necessary to distinguish the repetitive patterns from the features extracted from other meaningful objects such as vehicles, lane markings, etc. Learning-based approaches such as support vector machines (SVM) and neural networks (NN) are commonly adopted for image classification tasks. Recently, due to the success of deep neural networks applied to object recognition and classification, large sets of training data and tools are readily available. Thus, a convolutional neural network (CNN) is used in this paper to verify the segmentation results and remove the non-vehicle areas.

5.2. Repetitive Pattern Removal Using CNN

We adopt CaffeNet as the convolutional neural network architecture in this work. The network originates from AlexNet [42], which was designed to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes. It consists of five convolutional layers, 60 million parameters, and 650,000 neurons. Our dataset is used to fine-tune the BVLC CaffeNet model for six categories. The output classes are "front of vehicle", "rear of vehicle", "motorcycle", "repetitive patterns", "lane markings", and "background". The images are segmented by the algorithm described in the previous section, followed by the deep neural network for training and evaluation. The CNN with CaffeNet is used to identify the segmented area and remove the non-vehicle objects.
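For readers working with current frameworks, an analogous six-class fine-tuning setup is sketched below using the torchvision AlexNet as a stand-in for the BVLC CaffeNet model; this is not the authors' Caffe configuration, only an illustration of replacing the 1000-way classifier with six output classes.

```python
import torch
import torch.nn as nn
from torchvision import models

# AlexNet pre-trained on ImageNet as a stand-in for BVLC CaffeNet.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
# Replace the final 1000-way layer with the six classes used in the paper:
# front of vehicle, rear of vehicle, motorcycle, repetitive patterns,
# lane markings, and background.
model.classifier[6] = nn.Linear(4096, 6)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```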
In some cases, an incorrect segment containing a vehicle from the opposite lane is produced and recognized by the CNN. This is mainly due to optical flow feature tracking on repetitive patterns, as illustrated in Figure 10. There are some common repetitive patterns which can be found in street scenes. A moving vehicle in the opposite lane might be confused with the repeated pattern of the separation poles. As shown in Figure 11, the inconsistency between forward and backward feature tracking causes the optical flow disorder. To detect whether a segmented region contains repetitive patterns, multiple feature points are adopted for verification. We first detect "good features" to track, denoted by $F(t)$, on the frame at time $t$ [43]. The succeeding point $F(t+1)$ is then found by referring to the forward optical flow $o_t^+ = (u^+, v^+)$ from time $t$ to $t+1$. That is,
$$F(t+1) = F(t) + o_t^+(F(t)).$$
Similarly, the point $\hat{F}(t)$ generated from the backward tracking is related by the backward motion
$$\hat{F}(t) = F(t+1) + o_{t+1}^-(F(t+1))$$
where $o_{t+1}^- = (u^-, v^-)$ is the backward optical flow from time $t+1$ to $t$. If a feature point $F(t)$ is not part of a repetitive pattern and the optical flow is correctly computed, then we have
$$\| F(t) - \hat{F}(t) \| = 0.$$
However, in reality, there are always some errors in the optical flow estimation, i.e.,
$$\| F(t) - \hat{F}(t) \| \leq \varepsilon$$
where $\varepsilon$ denotes the small error between $F(t)$ and $\hat{F}(t)$. Thus, the identified object is classified as a vehicle only if this error is smaller than $\varepsilon$ for most of the feature points on the target. Otherwise, it is considered as a repetitive pattern and eliminated from overtaking detection.
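A minimal version of this forward-backward consistency test is sketched below with OpenCV; the values of the error bound and the majority fraction are assumptions for illustration.

```python
import cv2
import numpy as np

def forward_backward_error(prev_gray, next_gray, max_corners=200):
    """Sketch of the consistency test: track good features forward, track
    them back, and measure the per-point error ||F(t) - F_hat(t)||."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.array([])
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, fwd, None)
    err = np.linalg.norm(pts - bwd, axis=2).ravel()
    valid = (st1.ravel() == 1) & (st2.ravel() == 1)
    return err[valid]

def looks_repetitive(errors, eps=1.0, min_fraction=0.5):
    """A region is treated as a repetitive pattern when most of its tracked
    points violate the consistency bound (eps and fraction are assumed)."""
    return errors.size > 0 and np.mean(errors > eps) > min_fraction
```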
After region segmentation and repetitive pattern removal, object tracking is carried out until the object disappears from the image or no overtaking occurs. We adopt Lucas–Kanade optical flow for continuous tracking to handle the changes in shape, size, and scale of the object as it approaches the camera. A tracking example is shown in Figure 12. It is expected that the CNN is able to identify and remove repetitive patterns, and provide correct overtaking detection. However, due to the low image resolution, the region segmentation is not always good enough for identification. To reduce false detections, we further assess whether the tracking direction continuously stays away from the vanishing point for a period of time.
For nighttime overtaking detection, the vehicle appearance cannot be clearly captured due to the low illumination environment. The detection is also easily disturbed by other background lights if the vehicle headlights are used for target detection. To identify the overtaking vehicles, we present a method which combines motion cues with brightness information. The approaching headlight areas are segmented using the algorithm described in Section 5.1. We binarize the images and track the bright regions, followed by the vehicle motion analysis presented previously. A region is considered as the headlight of an overtaking vehicle if it continuously moves away from the vanishing point.

6. Implementation and Experiments

The proposed technique adopts front and rear cameras to capture the video sequence and is used for lane marking, front vehicle, and overtaking vehicle detection.

6.1. Camera Setting and System Calibration

To make the system installation flexible and easy to use, we develop an auto-calibration mechanism for camera parameter estimation. Two commonly adopted camera settings, looking upward or downward, are considered, and the error introduced by an imprecise system setup is analyzed. Assume a camera with focal length $f$ is installed on a vehicle at height $\beta$. We first consider the case where the camera faces downward with an angle $\theta_v$ with respect to the ground plane. Then we have
$$\theta_v = \tan^{-1} \frac{2 h_v - h}{2 f}$$
where $h$ is the image height and $h_v$ is the vanishing point location in the image.
Assuming an object with physical size $\lambda$ is captured by the camera with dimension $\lambda_o$ in the image, then
$$\theta_{\lambda} = \tan^{-1} \frac{2 h_{\lambda} - h}{2 f}.$$
If we further assume that the object is located in front of the camera at distance $\gamma$, then we have
$$\beta = \lambda + \gamma \tan(\theta_{\lambda} - \theta_v)$$
Since the vanishing point $h_v$ can be obtained using the parallel lane markings on the ground plane, the camera tilt angle $\theta_v$ can be derived from Equation (6). If a calibration object is given with known size and distance, the camera's position can be computed by Equations (7) and (8). Finally, the one-to-one correspondence between the image plane and the ground can be established, and the distance can be derived directly from the acquired image.
Similarly, for the case with the camera looking upward, the parameters $\theta_v$, $\theta_{\lambda}$, and $\beta$ are calculated by
$$\theta_v = \tan^{-1} \frac{h - 2 h_v}{2 f}, \quad \theta_{\lambda} = \tan^{-1} \frac{2 h_{\lambda} - h}{2 f}$$
and
$$\beta = \lambda + \gamma \tan(\theta_{\lambda} + \theta_v)$$
respectively. The camera’s installation position and orientation can then be obtained accordingly.
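The calibration relations above reduce to a few lines of code; the sketch below follows the reconstructed forms of Equations (6)-(8), in which the sign conventions and the '+' between the object height and distance terms are inferred from the geometry rather than taken verbatim from the text.

```python
import numpy as np

def camera_tilt_and_height(f, h, h_v, h_lambda, obj_height, obj_distance,
                           facing_down=True):
    """Sketch of the auto-calibration step: tilt angle from the vanishing
    point row, then camera height from a calibration object of known
    physical size and distance."""
    if facing_down:
        theta_v = np.arctan((2 * h_v - h) / (2 * f))
        angle = np.arctan((2 * h_lambda - h) / (2 * f)) - theta_v
    else:
        theta_v = np.arctan((h - 2 * h_v) / (2 * f))
        angle = np.arctan((2 * h_lambda - h) / (2 * f)) + theta_v
    beta = obj_height + obj_distance * np.tan(angle)  # assumed '+' (see text)
    return theta_v, beta
```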

6.2. Experiments and Evaluation

The proposed methods for lane change detection, forward collision warning, and overtaking vehicle identification have been tested using car digital video recorders (DVR) developed by Create Electronic Optical. Two DVRs are mounted behind the front and rear windshield to capture the image sequences for performance evaluation. The original image size and the sub-image region for processing are 1200 × 800 and 600 × 200 , respectively. Our processing platform is a PC with Ubuntu 16.04 operating system, 4.0 GHz Intel i7 CPU, and Nvidia GTX 1080 GPU. Some results of lane change detection and forward collision warning are shown in Figure 13 and Figure 14. The test video sequences cover a variety of illumination conditions such as sunny, cloudy or rainy weather, in a tunnel, or during the night time. Two horizontal lines, yellow and red, indicate the distances of 30 m (warning) and 50 m (danger), respectively. They represent an industrial standard used to generate the warning signals.
The performance of lane change detection and forward collision warning is evaluated based on the detection accuracy. Since it is usually not necessary and mostly infeasible to evaluate the system accuracy for all possible situations, some criteria are adopted as follows:
  • The width of the lanes is 3∼4 m.
  • The vehicle velocity is over 60 km/h.
  • The curvature of the lanes is over 250 m in radius.
  • The visible range of the camera is over 120 m.
  • The toll booth areas are not considered.
In the above, the lane width is based on the transportation infrastructure, while the criteria for vehicle velocity, road curvature, toll booth areas, and camera visible range come from the application specifications. There are a total of 2738 images used for lane change detection in our experiment. The precision and recall of the proposed technique and the previous methods are tabulated in Table 1. For forward collision warning, 3464 images are used in this work. Table 2 shows the precision and recall of previous techniques and our proposed method. It indicates that, compared to the others, our results provide better precision but a lower recall rate.
For overtaking detection, a total of 7587 images are collected in our training data with six categories. The images are manually segmented and labeled with 808 lane markings, 470 motorcycles, 2405 repetitive patterns, 1301 rear of vehicle, 1385 front of vehicle, and 1218 background samples. Similarly, the validation data of 2770 images are collected from 80-minute video sequences and labeled manually. The recognition results for all categories are tabulated in Table 3. For performance evaluation, 50 image sequences (20 highway, 15 city, 15 night) are collected, each about 2 min long. The overtaking detection is considered successful if the vehicle is detected before it disappears from the image sequence, and failed if the approaching vehicle is not identified. The overtaking evaluation is tabulated in Table 4, and some result images are shown in Figure 15 for city traffic, highway, and night scenes, respectively. Figure 15a shows the city traffic scenes with overtaking motorcycles and vehicles detected. It is noted that the vehicles behind but not overtaking, or driving in the opposite direction, are not identified. Figure 15b shows the highway traffic scenes with correct overtaking detection even with the repetitive pattern of the separation poles appearing in the images. The overtaking vehicles in the night scenes are identified by the headlight features as shown in Figure 15c. In our vehicle recognition module, it does not matter whether the overtaking is identified at that point, since a subsequent tracking stage will be performed. However, a false alarm will occur if the vehicle makes turns with another vehicle behind. The previous works on overtaking vehicle identification are fairly limited. In [53], the results report overtaking detection with 98 true positives, 5 false positives, and 3 false negatives, at precision and recall rates of 95.1% and 97.0%, respectively. The work in [54] shows precision and recall rates of 96.3% and 86.9%, respectively. In the proposed method, the precision rates for 'city traffic', 'highway', and 'night' are 88.7%, 98.8%, and 90.2%, respectively. The recall rates for these three scenarios are 100%, 100%, and 88.1%, respectively.

7. Conclusions and Future Work

In this paper, we propose a vision-based driver assistance system for highway, urban, and city environments. The efficient and robust techniques for lane change detection, forward collision warning, and overtaking vehicle identification are presented. We adopt two monocular car digital video recorders to capture the front and rear views of the traffic scenes. In our proposed approach, the front vehicles are identified by a new CDF-based symmetry detection technique. For overtaking detection, the motion cue obtained from optical flow is combined with convolutional neural networks for vehicle identification with repetitive patterns removal. In the system setup, the on-board camera projection geometry is formulated and the camera calibration is performed for single view depth measurement. The experiments and evaluation carried out on various real traffic scenarios have demonstrated the effectiveness of the proposed techniques.
For future work, there are a few important research directions for practical use in real-world application scenarios. First, the lane change detection can be improved with road surface segmentation. Since lane markings are not always clearly visible, the drivable area should be considered for lane change assistance. Second, stereo vision can be adopted for more accurate depth estimation of the front vehicles. It will provide more reliable information for integration with other tasks such as cruise control. Third, the current overtaking detection technique does not perform very well when the vehicle makes turns. Ego-motion estimation can be carried out to eliminate the incorrect feature matching for overtaking vehicle identification. Finally, we aim to implement the proposed techniques on an embedded platform and evaluate their real-time capability.

Author Contributions

Conceptualization, H.-Y.L.; methodology, H.-Y.L., J.-M.D., L.-T.W. and L.-Q.C.; software, J.-M.D., L.-T.W. and L.-Q.C.; validation, H.-Y.L. and L.-Q.C.; formal analysis, H.-Y.L.; investigation, H.-Y.L.; resources, H.-Y.L.; writing—original draft preparation, H.-Y.L., J.-M.D. and L.-T.W.; writing—review and editing, H.-Y.L.; supervision, H.-Y.L.; project administration, H.-Y.L.; funding acquisition, H.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The support of this work in part by Create Electronic Optical Co., Ltd. and the Ministry of Science and Technology of Taiwan under Grant MOST 104-2221-E-194-058-MY2 is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kukkala, V.K.; Tunnell, J.; Pasricha, S.; Bradley, T. Advanced Driver-Assistance Systems: A Path Toward Autonomous Vehicles. IEEE Consum. Electron. Mag. 2018, 7, 18–25. [Google Scholar] [CrossRef]
  2. MOTC. Taiwan Area National Freeway Bureau; MOTC: Taipei, Taiwan, 2017. Available online: http://www.freeway.gov.tw/ (accessed on 13 July 2020).
  3. Su, C.; Deng, W.; Sun, H.; Wu, J.; Sun, B.; Yang, S. Forward collision avoidance systems considering driver’s driving behavior recognized by Gaussian Mixture Model. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 535–540. [Google Scholar]
  4. Song, W.; Yang, Y.; Fu, M.; Li, Y.; Wang, M. Lane Detection and Classification for Forward Collision Warning System Based on Stereo Vision. IEEE Sens. J. 2018, 18, 5151–5163. [Google Scholar] [CrossRef]
  5. Butakov, V.A.; Ioannou, P. Personalized Driver/Vehicle Lane Change Models for ADAS. IEEE Trans. Veh. Technol. 2015, 64, 4422–4431. [Google Scholar] [CrossRef]
  6. Liu, G.; Wang, L.; Zou, S. A radar-based blind spot detection and warning system for driver assistance. In Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 25–26 March 2017; pp. 2204–2208. [Google Scholar]
  7. Dai, J.; Wu, L.; Lin, H.; Tai, W. A driving assistance system with vision based vehicle detection techniques. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016. [Google Scholar]
  8. Yeh, T.; Lin, S.; Lin, H.; Chan, S.; Lin, C.; Lin, Y. Traffic Light Detection using Convolutional Neural Networks and Lidar Data. In Proceedings of the 2019 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Taipei, Taiwan, 3–6 December 2019; pp. 1–2. [Google Scholar]
  9. Lin, H.Y.; Chang, C.C.; Tran, V.L.; Shi, J.H. Improved traffic sign recognition for in-car cameras. J. Chin. Inst. Eng. 2020, 43, 300–307. [Google Scholar] [CrossRef]
  10. Khan, S.D.; Ullah, H. A survey of advances in vision-based vehicle re-identification. Comput. Vis. Image Underst. 2019, 182, 50–63. [Google Scholar] [CrossRef] [Green Version]
  11. Lin, H.Y.; Li, K.J.; Chang, C.H. Vehicle speed detection from a single motion blurred image. Image Vis. Comput. 2008, 26, 1327–1337. [Google Scholar] [CrossRef]
  12. Brunetti, A.; Buongiorno, D.; Trotta, G.F.; Bevilacqua, V. Computer vision and deep learning techniques for pedestrian detection and tracking: A survey. Neurocomputing 2018, 300, 17–33. [Google Scholar] [CrossRef]
  13. Braunagel, C.; Rosenstiel, W.; Kasneci, E. Ready for Take-Over? A New Driver Assistance System for an Automated Classification of Driver Take-Over Readiness. IEEE Intell. Transp. Syst. Mag. 2017, 9, 10–22. [Google Scholar] [CrossRef]
  14. Heimberger, M.; Horgan, J.; Hughes, C.; McDonald, J.; Yogamani, S. Computer vision in automated parking systems: Design, implementation and challenges. Image Vis. Comput. 2017, 68, 88–101. [Google Scholar] [CrossRef]
  15. Tang, Z.; Wang, G.; Xiao, H.; Zheng, A.; Hwang, J.N. Single-Camera and Inter-Camera Vehicle Tracking and 3D Speed Estimation Based on Fusion of Visual and Semantic Features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  16. Ziebinski, A.; Cupek, R.; Erdogan, H.; Waechter, S. A Survey of ADAS Technologies for the Future Perspective of Sensor Fusion. In Computational Collective Intelligence; Nguyen, N.T., Iliadis, L., Manolopoulos, Y., Trawiński, B., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 135–146. [Google Scholar]
  17. Niu, J.; Lu, J.; Xu, M.; Lv, P.; Zhao, X. Robust Lane Detection using Two-stage Feature Extraction with Curve Fitting. Pattern Recognit. 2016, 59, 225–233. [Google Scholar] [CrossRef]
  18. Lu, Y.; Huang, J.; Chen, Y.; Heisele, B. Monocular localization in urban environments using road markings. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 468–474. [Google Scholar]
  19. Xing, Y. Dynamic integration and online evaluation of vision-based lane detection algorithms. IET Intell. Transp. Syst. 2019, 13, 55–62. [Google Scholar] [CrossRef]
  20. Narote, S.P.; Bhujbal, P.N.; Narote, A.S.; Dhane, D.M. A review of recent advances in lane detection and departure warning system. Pattern Recognit. 2018, 73, 216–234. [Google Scholar] [CrossRef]
  21. Bruls, T.; Porav, H.; Kunze, L.; Newman, P. The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 302–309. [Google Scholar]
  22. Ying, Z.; Li, G. Robust lane marking detection using boundary-based inverse perspective mapping. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1921–1925. [Google Scholar]
  23. Chen, D.; Tian, Z.; Zhang, X. Lane Detection Algorithm Based on Inverse Perspective Mapping. In Man—Machine—Environment System Engineering; Long, S., Dhillon, B.S., Eds.; Springer: Singapore, 2020; pp. 247–255. [Google Scholar]
  24. Kluge, K.; Lakshmanan, S. A deformable-template approach to lane detection. In Proceedings of the Intelligent Vehicles ’95. Symposium, Detroit, MI, USA, 25–26 September 1995; pp. 54–59. [Google Scholar]
  25. Deng, Y.; Liang, H.; Wang, Z.; Huang, J. An integrated forward collision warning system based on monocular vision. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 1219–1223. [Google Scholar]
  26. Song, W.; Fu, M.; Yang, Y.; Wang, M.; Wang, X.; Kornhauser, A. Real-time lane detection and forward collision warning system based on stereo vision. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 493–498. [Google Scholar]
  27. Di, Z.; He, D. Forward Collision Warning system based on vehicle detection and tracking. In Proceedings of the 2016 International Conference on Optoelectronics and Image Processing (ICOIP), Warsaw, Poland, 10–12 June 2016; pp. 10–14. [Google Scholar]
  28. Baek, J.W.; Han, B.; Kang, H.; Chung, Y.; Lee, S. Fast and reliable tracking algorithm for on-road vehicle detection systems. In Proceedings of the 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), Vienna, Austria, 5–8 July 2016; pp. 70–72. [Google Scholar]
  29. Moujahid, A.; ElAraki Tantaoui, M.; Hina, M.D.; Soukane, A.; Ortalda, A.; ElKhadimi, A.; Ramdane-Cherif, A. Machine Learning Techniques in ADAS: A Review. In Proceedings of the 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE), Paris, France, 22–23 June 2018; pp. 235–242. [Google Scholar]
  30. Fekri, P.; Abedi, V.; Dargahi, J.; Zadeh, M. A Forward Collision Warning System Using Deep Reinforcement Learning; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2020. [Google Scholar] [CrossRef]
  31. Nur, S.A.; Ibrahim, M.M.; Ali, N.M.; Nur, F.I.Y. Vehicle detection based on underneath vehicle shadow using edge features. In Proceedings of the 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Batu Ferringhi, Malaysia, 25–27 November 2016; pp. 407–412. [Google Scholar]
  32. Zebbara, K.; El Ansari, M.; Mazoul, A.; Oudani, H. A Fast Road Obstacle Detection Using Association and Symmetry recognition. In Proceedings of the 2019 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS), Fez, Morocco, 3–4 April 2019; pp. 1–5. [Google Scholar]
  33. Ma, X.; Sun, X. Detection and segmentation of occluded vehicles based on symmetry analysis. In Proceedings of the 2017 4th International Conference on Systems and Informatics (ICSAI), Hangzhou, China, 11–13 November 2017; pp. 745–749. [Google Scholar]
  34. Lim, Y.; Kang, M. Stereo vision-based visual tracking using 3D feature clustering for robust vehicle tracking. In Proceedings of the 2014 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Vienna, Austria, 1–3 September 2014; Volume 2, pp. 788–793. [Google Scholar]
  35. Lai, Y.; Huang, Y.; Hwang, C. Front moving object detection for car collision avoidance applications. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 367–368. [Google Scholar]
  36. Chen, D.; Wang, J.; Chen, C.; Chen, Y. Video-based intelligent vehicle contextual information extraction for night conditions. In Proceedings of the 2011 International Conference on Machine Learning and Cybernetics, Guilin, China, 10–13 July 2011; Volume 4, pp. 1550–1554. [Google Scholar]
  37. Hultqvist, D.; Roll, J.; Svensson, F.; Dahlin, J.; Schön, T.B. Detecting and positioning overtaking vehicles using 1D optical flow. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 861–866. [Google Scholar]
  38. Chen, Y.; Wu, Q. Moving vehicle detection based on optical flow estimation of edge. In Proceedings of the 2015 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015; pp. 754–758. [Google Scholar]
  39. Diaz Alonso, J.; Ros Vidal, E.; Rotter, A.; Muhlenberg, M. Lane-Change Decision Aid System Based on Motion-Driven Vehicle Tracking. IEEE Trans. Veh. Technol. 2008, 57, 2736–2746. [Google Scholar] [CrossRef]
  40. Wu, B.F.; Kao, C.C.; Li, Y.F.; Tsai, M.Y. A real-time embedded blind spot safety assistance system. Int. J. Veh. Technol. 2012, 2012, 506235. [Google Scholar] [CrossRef] [Green Version]
  41. Oron, S.; Bar-Hille, A.; Avidan, S. Extended Lucas-Kanade Tracking. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 142–156. [Google Scholar]
  42. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  43. Shi, J. Good features to track. In Proceedings of the 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar]
  44. Woo, H.; Ji, Y.; Kono, H.; Tamura, Y.; Kuroda, Y.; Sugano, T.; Yamamoto, Y.; Yamashita, A.; Asama, H. Lane-Change Detection Based on Vehicle-Trajectory Prediction. IEEE Robot. Autom. Lett. 2017, 2, 1109–1116. [Google Scholar] [CrossRef]
  45. Mandalia, H.M.; Salvucci, M.D.D. Using Support Vector Machines for Lane-Change Detection. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2005, 49, 1965–1969. [Google Scholar] [CrossRef]
  46. Schlechtriemen, J.; Wedel, A.; Hillenbrand, J.; Breuel, G.; Kuhnert, K. A lane change detection approach using feature ranking with maximized predictive power. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 108–114. [Google Scholar]
  47. Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12. [Google Scholar]
  48. Ye, Y.Y.; Hao, X.L.; Chen, H.J. Lane detection method based on lane structural analysis and CNNs. IET Intell. Transp. Syst. 2018, 12, 513–520. [Google Scholar] [CrossRef]
  49. Shiru, Q.; Xu, L. Research on multi-feature front vehicle detection algorithm based on video image. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; pp. 3831–3835. [Google Scholar]
  50. Chen, C.; Chen, T.; Huang, D.; Feng, K. Front Vehicle Detection and Distance Estimation Using Single-Lens Video Camera. In Proceedings of the 2015 Third International Conference on Robot, Vision and Signal Processing (RVSP), Kaohsiung, Taiwan, 18–20 November 2015; pp. 14–17. [Google Scholar]
  51. Li, Y.; Wang, F.Y. Vehicle detection based on And–Or Graph and Hybrid Image Templates for complex urban traffic conditions. Transp. Res. Part C Emerg. Technol. 2015, 51, 19–28. [Google Scholar] [CrossRef]
  52. Yang, B.; Zhang, S.; Tian, Y.; Li, B. Front-Vehicle Detection in Video Images Based on Temporal and Spatial Characteristics. Sensors 2019, 19, 1728. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Chen, J.; Chen, Y.; Chuang, C. Overtaking Vehicle Detection Based on Deep Learning and Headlight Recognition. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), Yilan, Taiwan, 20–22 May 2019; pp. 1–2. [Google Scholar]
  54. Tseng, C.; Liao, C.; Shen, P.; Guo, J. Using C3D to Detect Rear Overtaking Behavior. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 151–154. [Google Scholar]
Figure 1. The schematic diagram of the proposed vision-based driver assistance system. It consists of three modules: lane change detection, forward collision warning, and overtaking vehicle identification. Daytime and nighttime driving scenarios are also considered in the implementation.
Figure 2. The illustration of lane departure status. Left: normal. Middle: shift left. Right: shift right. In the images, a trapezoid region is set as the region of interest (ROI) for line detection.
Figure 3. The comparison of conventional gamma correction and proposed gamma correction results. (a) The detection result with no gamma correction; (b) the detection result using conventional gamma correction; (c) the input image using the proposed gamma correction; (d) the result using the proposed gamma correction.
Figure 4. The front vehicle detection using a vertical edge pair. A combination of an adaptive ROI and a fixed trapezoid is used.
Figure 5. The nighttime images captured by the conventional low dynamic range driving recorder. The intensity saturation commonly appears due to the vehicle tail-lights and the reflection of headlights on the rear bumper.
Figure 6. The calculation of the vehicle region using the aspect ratios and intensity histogram in the nighttime images.
Figure 7. The system flowchart of the proposed overtaking detection technique. It consists of three basic modules: (1) pre-processing and segmentation, (2) convolutional neural network and repetitive pattern removal, and (3) tracking and behavior detection.
Figure 8. The pavement, lane marking, object approaching from the side, and object further away are shown in green, yellow, blue, and pink, respectively. The black part represents the uncertain region.
Figure 9. Only the optical flow in the x-direction is used to distinguish the overtaking vehicle from the object moving further away.
Figure 10. Some common repetitive patterns in the street scenes.
Figure 11. An illustration of the messy optical flow when repetitive patterns occur, which results in inconsistency between forward and backward feature tracking.
Figure 12. Overtaking tracking diagram. The red point is the initial position, and the green point is the current position. The blue line is the corresponding line.
Figure 13. The results of lane change detection. The images contain a variety of illumination conditions such as sunny, cloudy, and at night.
Figure 14. The results of forward collision warning. The images contain city, highway, nighttime, and tunnel traffic scenes.
Figure 15. The overtaking detection results for the city, highway, and night scenery.
Table 1. The comparison of precision and recall of lane change detection for different algorithms. T.P.: true positive, F.P.: false positive, F.N.: false negative. Some previous works do not provide the numbers of true positives, false positives, false negatives, and recall rates.

| Method | T.P. | F.P. | F.N. | Precision | Recall |
|---|---|---|---|---|---|
| Woo et al. [44] | – | – | – | 96.3% | 100% |
| Mandalia et al. [45] | – | – | – | 80.0% | 80.5% |
| Schlechtriemen et al. [46] | – | – | – | 93.6% | 99.3% |
| Aly et al. [47] | 1955 | 23 | 5 | 96.4% | – |
| Ye et al. [48] | 1998 | 4 | 8 | 98.5% | – |
| Proposed method | 2423 | 53 | 80 | 97.9% | 96.8% |
Table 2. The comparison of precision and recall of front vehicle detection for different algorithms. T.P.: true positive, F.P.: false positive, F.N.: false negative. Some previous works do not provide the numbers of true positives, false positives, and false negatives.

| Method | T.P. | F.P. | F.N. | Precision | Recall |
|---|---|---|---|---|---|
| Qu et al. [49] | – | – | – | 90.3% | 80.5% |
| Chen et al. [50] | – | – | – | 90.8% | 78.8% |
| Li et al. [51] | – | – | – | 90.4% | 79.2% |
| Yang et al. [52] | – | – | – | 91.9% | 83.6% |
| Proposed method | 1510 | 0 | 911 | 100% | 62.4% |
Table 3. The recognition results for each category and the recall. GT: ground-truth, BG: background, LM: lane marking, MC: motorcycle, RP: repetitive pattern, FV: front of vehicle, RV: rear of vehicle.

|  | BG | LM | MC | RP | FV | RV |
|---|---|---|---|---|---|---|
| GT | 375 | 252 | 545 | 408 | 480 | 710 |
| BG | 322 | 1 | 2 | 0 | 1 | 1 |
| LM | 2 | 249 | 0 | 2 | 0 | 0 |
| MC | 0 | 0 | 524 | 1 | 0 | 0 |
| RP | 25 | 2 | 0 | 405 | 0 | 3 |
| FV | 13 | 0 | 16 | 0 | 470 | 34 |
| RV | 13 | 0 | 3 | 0 | 9 | 662 |
| Precision | 85.9% | 98.8% | 96.1% | 99.3% | 97.9% | 93.2% |
| Recall | 85.8% | 98.8% | 96.1% | 99.2% | 97.9% | 93.2% |
Table 4. The experimental results and performance evaluation.

| Scene | True Overtakes | Detected | Missed | False |
|---|---|---|---|---|
| City traffic | 102 | 102 | 0 | 13 |
| Highway | 79 | 79 | 0 | 1 |
| At night road | 42 | 37 | 5 | 4 |
