Article

Identification of Road-Surface Type Using Deep Neural Networks for Friction Coefficient Estimation

by Eldar Šabanovič 1, Vidas Žuraulis 1, Olegas Prentkovskis 2,* and Viktor Skrickij 1

1 Transport and Logistics Competence Centre, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
2 Department of Mobile Machinery and Railway Transport, Vilnius Gediminas Technical University, Plytinės g. 27, LT-10105 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Sensors 2020, 20(3), 612; https://doi.org/10.3390/s20030612
Submission received: 19 December 2019 / Revised: 13 January 2020 / Accepted: 20 January 2020 / Published: 22 January 2020
(This article belongs to the Special Issue Advance in Sensors and Sensing Systems for Driving and Transportation)

Abstract:
Nowadays, vehicles are equipped with advanced driver-assistance systems that help to improve vehicle safety and save the lives of drivers, passengers and pedestrians. Identification of the road-surface type and condition in real time using a video image sensor can increase the effectiveness of such systems significantly, especially when adapting them for braking and stability-related solutions. This paper contributes to the development of a new efficient engineering solution aimed at improving vehicle dynamics control via the anti-lock braking system (ABS) by estimating the friction coefficient from video data. Experimental research on three different road-surface types in dry and wet conditions was carried out, and braking performance was established with a car mathematical model (MM). Testing of a deep neural network (DNN)-based road-surface and condition classification algorithm revealed that this is the most promising approach for the task. The research has shown that the proposed solution increases the performance of an ABS with a rule-based control strategy.

1. Introduction

Currently, the automotive industry is facing the challenge of automated driving. Achievements in the fields of vehicle dynamics, control engineering and artificial intelligence enable the implementation of this technology. However, automated driving requires combined sensing solutions and hardware for perception. Vehicle systems such as brakes, steering and active suspension can be improved significantly and implemented in the automated vehicle without additional cost simply by using perception.
Road friction estimation is a useful tool for various aspects of driving safety: alerting the driver about road-surface conditions, modifying the thresholds of vehicle active safety systems, or reporting information to a vehicle or road infrastructure network. Applications of such technologies have already been introduced for patenting and near-future implementation in new vehicle production [1].
Nowadays, active systems in the vehicle are actuated after estimating the vehicle dynamic parameter response generated by an impact, and as a result, the actuators have only several milliseconds to tune their characteristics. The actuators used in such systems are complex, expensive and consume a lot of energy. If the vehicle receives data about the driving conditions before reaching the place where control should be applied, it has more time to select optimal tuning parameters. As a result, passenger comfort and safety may be increased even when using slower but cheaper actuators [2].
Identification of the road pavement type and its condition is a vital task for brake-system performance. Several groups of sensors are available for this task. The first group is in-road sensors. Such sensors are placed in the road surface and can measure surface temperature, evaluate the coating and other parameters. Data received from in-road sensors can be sent to the vehicle using vehicle-to-infrastructure (V2I) and vehicle-to-roadside (V2R) communication [3,4]. Tanizaki et al. [5] used a vibration sensor placed under the road surface for tyre type recognition; the authors managed to discern winter from summer tyres, and it is possible to apply the same technology to identify the pavement coating. Stationary video cameras with different filters can be used for surface evaluation. Colace et al. [6] introduced and tested an original approach for the optical assessment of road conditions under various atmospheric perturbations. The sensing system was based on measuring diffused and reflected light under near-infrared illumination and extracting the polarisation contrast after reflection.
The second group of sensors is in-vehicle. Wang et al. [7] proposed two different approaches for identifying the road pavement type and its coating: (i) effect-based, which identifies road friction conditions by estimating the dynamic parameter response of the vehicle; and (ii) cause-based, which detects causes, before they affect road friction, using various sensors. The main advantage of the cause-based approach is that road friction conditions are identified before the measured surface point is reached. A conventional ABS is an effect-based system because it uses sensor information about vehicle velocity, wheel angular velocity, acceleration and wheel slip; its performance may be improved by combining the effect-based and cause-based approaches.
Bhandari et al. [8] used the effect-based approach for surface prediction, comparing the measured coefficient of friction to the calculated value. The authors investigated six main types of road surface: dry/wet asphalt, dry/wet cobblestone, snow, and ice. The only drawback may be the unnecessary loss of braking performance while checking for a surface change when the surface does not change. Alonso et al. [9] and Kalliris et al. [10] proposed a road classification system based on real-time acoustic analysis of tyre/road noise. This system can identify dry and wet asphalt surfaces with good accuracy; theoretically, it can also identify whether the surface is icy or snowy. Ngwangwa and Heyns [11] used acceleration sensors and an artificial neural network to estimate the condition of a road surface by approximating its profiles and their roughness classes utilising displacement spectral densities. Taniguchi et al. [12] proposed the use of an ultrasonic distance sensor for monitoring road-surface conditions; the authors used a low-cost ultrasonic sensor to measure road-surface roughness. Such a system may be useful in the prevention of accidents if information on bad road-surface conditions, such as breaks, potholes, obstacles and bumps, is obtained in advance. Its main advantage is that the road surface can be measured in front of the moving vehicle before the front wheel contacts an obstacle. Niskanen and Tuononen [13] proposed friction identification using a three-axis accelerometer mounted inside the tyre. While such a method can be applied to detect friction potential indicators, different levels of pavement roughness still cause undesirable vibration and negatively influence the results. As an alternative to the accelerometer, strain gauges have been mounted on the inner liner surface of the tyre to characterise grip [14].
The experimental research showed a strong relation between the strain parameters and the tyre lateral force, but only for limited grip. While widely used vehicle active safety systems such as the ABS, a traction control system (TCS) and an electronic stability program (ESP) already use road friction estimation during the initial cycles of their operation, the rapid development of ADAS technologies requires friction information in advance [15]. To maintain sufficient safety and comfort, the grip level must be estimated earlier, during free rolling. However, slip-based approaches are insufficient for high excitation levels, including road roughness in wet conditions. Experimental tests of vehicle state estimation, including the tyre–road friction coefficient, revealed significant estimation inaccuracies during sharp cornering [16].
Sensors in the automated vehicle are used for perception, and most of them are cause-based: ultrasonic sensors; video, thermal and stereo cameras; radars; laser-based radar (LIDAR); a global positioning system (GPS), etc. Video cameras are usually used for determining the path [17], detecting obstacles [18], and lane detection and road-edge recognition [19,20]. There are also developments where image analysis methods are used for detecting road distress, cracks and other road damage [21,22,23]. Much image processing can be done using traditional image-processing methods such as histograms, thresholding and others [24], but a currently emerging trend is the use of deep neural network (DNN)-based methods for feature extraction, image matching and decision making [25]. Similar research [26] included recognition of road type and quality, but not conditions; the researchers used a small dataset, collected from Google Street View, of 512 images per class for road type classification and 221 for road quality classification. Previously, visual recognition has been used for the detection of bad road visibility conditions and of weather and lighting conditions on roads [27,28]. A SqueezeNet model, one of the deep learning models, was established as the most accurate model for road-surface condition estimation compared with CNN and feature-based models [29]; by decreasing the number of input channels it was also the fastest, but it was more sensitive to training data than the CNN architecture. A relatively small set of 100 high-quality road images was used to train a model for the estimation of the pavement friction level [30]. A DNN-based method built on domain knowledge analysis performed with a sufficiently high accuracy of 90.67%, but only by using an additional double image distribution.
There are datasets available online, for example, the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset [31], the Udacity dataset [32], the Oxford Robot Car team dataset [33] and the Malaga Dataset [34], that can be used in the development of new processing algorithms; simulated/synthetic data can also be obtained from self-driving car simulators such as the LG Silicon Valley Lab (LGSVL) simulator [35]. However, none of the reviewed datasets and simulators fully matched the requirements for road pavement type and condition dataset samples. Therefore, the dataset presented in [2] was used during the research.
To verify the idea that a vehicle brake system with a preview option is effective, a mathematical model (MM) is needed. Wang et al. [7] and Bhandari et al. [8] used the Burckhardt tyre model [36], which was developed by fitting experimental data collected from a large number of roads. The Pacejka Magic Formula can also be used instead of the Burckhardt model. In both cases, numerical values of the coefficients are needed; these can be identified by solving an optimisation task using experimental data [37]. Cabrera et al. [38] analysed the friction–slip curve as a function of speed using the Magic Formula. After a tyre model is developed, an ABS algorithm is needed. The algorithm presented in [39] was used as a reference; extensions of this algorithm are used by other researchers in different vehicle dynamics cases as well [40].
The next step is the development of an MM of the whole vehicle. Such a model is needed for the comparison of a conventional ABS and a system with a preview option. The most commonly used models have 2, 4, 14 and 38 degrees of freedom (DOF). The simplest one is the quarter-car model [41], which has 2 DOF: the displacements of the unsprung mass and of a quarter of the car body. The 4 DOF model is a half-car model, where the car body has 2 DOF (displacement in the vertical direction and one rotation) and each wheel has one DOF for vertical displacement. The 14 DOF model consists of 6 DOF of the vehicle (longitudinal, lateral and vertical motions, plus pitch, roll and yaw rotations), 4 DOF for the vertical motions of the unsprung masses, and the remaining 4 DOF for the rotations of the wheels [42]. The 38 DOF model consists of 6 DOF of the vehicle and 8 DOF for each wheel [43]. In this research, vertical and longitudinal vehicle dynamics have been taken into account.
The main task of this investigation was to improve vehicle ABS performance using additional data from a video image sensor. A system for real-time road-surface classification using visual data and a DNN was developed and applied to the vehicle brake system. The efficiency of the created system was investigated using MMs of the vehicle, tyres and ABS; the MMs were validated using experimental data.

2. Materials and Methods

A Blaupunkt BP 3.0 FHD GPS car camera, mounted on the front window of the car, was used for capturing videos for dataset creation. Video files were recorded at a resolution of 1920 × 1080 px and 30 frames per second (FPS). The camera has a wide-angle lens with a 140° diagonal field of view and an OmniVision OV2710 1/2.7-inch 2 MP complementary metal-oxide-semiconductor (CMOS) video image sensor [44]. This sensor has 3 μm pixels, a low-light sensitivity of 3700 mV/lux-s, a signal-to-noise ratio of 40 dB and a peak dynamic range of 69 dB [45]. The camera records video using the H.264 codec. As these characteristics suggest, the camera should provide good quality in low-light situations, but unfortunately, very lossy compression leads to high detail loss in bright and dark image areas. The camera offers no selectable compression quality level or bitrate, and uncompressed video cannot be recorded. While examining raw image frames, blurred frames due to vibrations and low-detail frames due to dark conditions were found; some of the fragmented blurs appeared due to H.264 compression.
Data were collected during different seasons to capture different kinds of weather conditions and road surfaces. The processing of raw data to prepare the dataset consisted of a few steps. Firstly, the videos were split into images, and these images were sorted into six categories manually. Secondly, all excessively blurred images and images that were too dark were removed; the camera requires an external light source that is stronger than car headlights, with light neither coming from the camera side nor shining directly into the camera. Finally, the created dataset was split into training, validation and test parts. The validation and testing samples were selected by cutting random sequences of images from the full dataset, and the remainder was assigned to the training part. The prepared dataset consisted of 12,440 images separated into training (10,040), validation (1200) and testing (1200) parts, kept in separate folders inside which image samples were sorted into folders by class. The training and validation parts were used for training and for measuring the performance of the developed algorithm, and the testing part was used for its performance evaluation. Such a distribution of data led to a better evaluation of the actual performance of the created models. This approach guaranteed that the validation and testing samples were not too similar to the training samples, but came from similar situations, so the training, validation and testing parts belonged to the same distribution.
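The sequence-based split described above can be sketched as follows. This is a minimal illustration; the function name, sequence length and random seed are our own choices, not taken from the paper:

```python
import random

def split_by_sequences(frames, n_val, n_test, seq_len, seed=0):
    """Cut random contiguous sequences of frames out of the full dataset
    for the validation and test parts; the remainder becomes the training
    part. Holding out whole sequences (rather than single frames) keeps
    near-duplicate consecutive frames out of the evaluation sets."""
    rng = random.Random(seed)
    frames = list(frames)
    held_out = []
    for target in (n_val, n_test):
        part = []
        while len(part) < target:
            start = rng.randrange(0, len(frames) - seq_len)
            part.extend(frames[start:start + seq_len])
            del frames[start:start + seq_len]
        held_out.append(part)
    val, test = held_out
    return frames, val, test

# Reproduces the paper's part sizes; seq_len and seed are illustrative.
train, val, test = split_by_sequences(range(12440), 1200, 1200, seq_len=30)
```

Because held-out frames are deleted from the pool as they are drawn, the three parts are guaranteed to be disjoint.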

2.1. Creating Deep Neural Network (DNN)-Based Classification Algorithm

Deep learning, an emerging artificial intelligence method, was used for the classification of six road-surface type and condition combinations: gravel wet, gravel dry, cobblestone wet, cobblestone dry, asphalt wet and asphalt dry; it is presented in this subsection. The convolutional neural network (CNN) model AlexNet, with modifications to process a bigger image, was used for image classification. The structure of the DNN model used is presented in Figure 1. The model was designed and trained using the Nvidia Deep Learning Graphics Processing Unit (GPU) Training System (DIGITS) [46]. This web-based user interface and its underlying middle layer allow fast dataset preparation with training, validation and testing parts, and model prototyping using Caffe [47] or TensorFlow [48]. The developed DNN was trained using a workstation with an Nvidia GeForce 2080 Ti GPU. The final model was converted using Nvidia TensorRT [49] and tested on an Nvidia Jetson TX2 embedded system [50]. This embedded system consumes only up to 15 W of power and provides up to 1 TFLOP of DNN calculations. It also has hardware pipelines for camera video stream decoding and for video file stream decoding and encoding, so it can be deployed in-vehicle for real-time image analysis.
The developed DNN model is made of five convolutional layers and three fully connected layers, all with rectified linear units as the activation function. To reduce classifier dependence on separate pixels, dropout layers were used. The main differences of the developed DNN model compared to AlexNet are the input image size of the first convolutional layer, increased from 227 × 227 px to 448 × 448 px, and the stride of the same layer, increased from 4 to 8. A bigger input image permits better performance because smaller details are preserved; local response normalisation was removed, as it led to no performance improvement for this task.
In computer vision, convolution operations imitate the feature extraction and processing capabilities of human vision. This operation enables the learning of spatial feature representations for specific image-processing tasks. During a convolution operation, kernels are shifted over an image; at each position, the kernel values are multiplied element-wise with the image patch the kernel currently covers, and the products are summed and processed by a selected activation function.
Max pooling layers select the maximal value in a local image field to improve shift and rotation invariance. The dropout operation between fully connected layers is used to improve the stability of features and to lower the dependency on single values by randomly zeroing half of the outputs of the previous layer while training the network; a multiplication coefficient is used to achieve the same signal level between layers during inference.
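The convolution and max pooling operations described above can be illustrated with a minimal NumPy sketch; the image size and kernel values are arbitrary toy examples, not the network's learned weights:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide the kernel over the image; at each position multiply the
    kernel element-wise with the covered patch and sum the products."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(fm, size=2):
    """Keep only the maximum value in each local window, which gives a
    degree of shift invariance."""
    h, w = fm.shape
    fm = fm[:h - h % size, :w - w % size]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy vertical-edge kernel
fm = conv2d(img, edge)   # 3x3 feature map; every response here is -2.0
pooled = max_pool(fm)    # 1x1 after 2x2 pooling
```

In a real network an activation function (here, a rectified linear unit) would be applied to the summed products, and many kernels would run in parallel to produce multiple feature maps.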
To train this DNN, a stochastic gradient descent algorithm was used with a base learning rate of 0.01, which was reduced by a factor of 0.3 every 4 epochs from a total of 20 epochs. After training, the snapshot after the 28th epoch was selected as the best because it had the lowest loss value on the validation data.
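The step learning-rate schedule described above (base rate 0.01, multiplied by 0.3 every 4 epochs) can be expressed as a small helper; this is a sketch of the schedule only, as DIGITS/Caffe apply it internally:

```python
def step_lr(epoch, base_lr=0.01, gamma=0.3, step=4):
    """Step decay: multiply the base learning rate by gamma once every
    `step` epochs."""
    return base_lr * gamma ** (epoch // step)

schedule = [step_lr(e) for e in range(20)]
# epochs 0-3 train at 0.01, epochs 4-7 at 0.003, epochs 8-11 at 0.0009, ...
```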
The input image was scaled and the pixel mean subtracted before processing with the DNN-based algorithm. The DNN input was a 3-channel image of 448 × 448 px. During the training and validation process, the input image was cropped from a 512 × 512 px training sample at random coordinates; this created more unique samples and reduced the possibility of overfitting. The softmax function was used for class selection: it provides normalised probabilities of the road in the input image belonging to each class, and the class with the highest probability is selected. The probability values can be used to further determine the reliability of the classification results, and mean filtering may be used to remove single false classifications.
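The preprocessing and class-selection steps above can be sketched in NumPy; the random crop and softmax shown here are generic implementations under our own assumptions, not the paper's code:

```python
import numpy as np

def preprocess(image, mean, crop=448, rng=np.random.default_rng(0)):
    """Randomly crop a `crop` x `crop` window from a larger training
    sample and subtract the pixel mean, as described above."""
    h, w, _ = image.shape
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    return image[y:y + crop, x:x + crop, :].astype(np.float32) - mean

def softmax(logits):
    """Normalised class probabilities; the highest one is the prediction."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

sample = preprocess(np.zeros((512, 512, 3)), mean=0.0)   # 448x448x3 crop
p = softmax(np.array([2.0, 1.0, 0.1, -1.0, -1.0, 0.5]))  # six classes
predicted_class = int(np.argmax(p))
```

The probabilities in `p` sum to one, so the value at `predicted_class` can directly serve as the reliability measure mentioned above.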

2.2. System Implementation

An application that reads input images, analyses them using the CNN-based algorithm and saves the results is presented in this subsection. The application, which implements the CNN-based evaluation algorithm for real-time camera image or video record processing, was written in C++. In Figure 2, a block schematic of this application is presented. GStreamer pipelines with support for hardware decoding and encoding were used to reduce CPU load. GStreamer is a library that provides ways of creating video preprocessing, encoding and decoding, including support for acceleration using hardware pipelines. These hardware pipelines are configured to have direct access to the camera and memory without raising CPU usage. Road images can be streamed from the camera or, while testing, from a video record file. A video encoding pipeline is used for presenting results on screen and recording them to a video file. In addition, results can be written to a file or streamed over the network to a controller. The basics of the image recognition implementation were taken from the Nvidia tutorial [51], which shows how to use DIGITS and TensorRT to implement CNN models on the Jetson TX2 for real-time inferencing.
The created application supports command-line parameters that allow fast inference testing using different DNN models. In addition, it supports the processing of recorded video files and of video captured with a camera in real time, for in-vehicle deployment for real-time road pavement type and condition evaluation. The results are calculated for each image provided at the input; no additional filtering or post-processing was used while testing.

2.3. Vehicle Mathematical Model

The vehicle MM used for the performance evaluation of the developed road-surface type identification system is presented in this subsection. A car dynamic model with 7 DOF was developed during the investigation (Figure 3): 4 DOF for the vertical dynamics, one DOF for the longitudinal dynamics, and two DOF for the wheels' rotation. A Toyota Prius was used as the reference vehicle during the simulation. The vehicle parameters used for the MM are presented in Table 1.
Two assumptions were made: first, vehicle cornering (lateral displacement) was not taken into account in the MM; second, the longitudinal acceleration of the vehicle is equal to the longitudinal acceleration of the sprung mass.
Vertical forces $F_{zf}$ and $F_{zr}$ were calculated using the equations of motion:

$m_1 \ddot{q}_1 = k_3(q_3 - l_1\varphi - q_1) + c_3(\dot{q}_3 - l_1\dot{\varphi} - \dot{q}_1) - m_1 g - k_1(q_1 - z_1);$

$m_2 \ddot{q}_2 = k_4(q_3 + l_2\varphi - q_2) + c_4(\dot{q}_3 + l_2\dot{\varphi} - \dot{q}_2) - m_2 g - k_2(q_2 - z_2);$

$m_3 \ddot{q}_3 = -k_3(q_3 - l_1\varphi - q_1) - c_3(\dot{q}_3 - l_1\dot{\varphi} - \dot{q}_1) - k_4(q_3 + l_2\varphi - q_2) - c_4(\dot{q}_3 + l_2\dot{\varphi} - \dot{q}_2) - m_3 g;$

$I_3 \ddot{\varphi} = k_3 l_1(q_3 - l_1\varphi - q_1) + c_3 l_1(\dot{q}_3 - l_1\dot{\varphi} - \dot{q}_1) - k_4 l_2(q_3 + l_2\varphi - q_2) - c_4 l_2(\dot{q}_3 + l_2\dot{\varphi} - \dot{q}_2) - F_f l_1 + F_r l_2;$
where $m_1$ — front unsprung mass; $m_2$ — rear unsprung mass; $m_3$ — sprung mass of the vehicle; $I_3$ — moment of inertia of the vehicle; $g$ — gravitational acceleration; $l_1$ — distance from the front wheel to the centre of gravity of the vehicle body; $l_2$ — distance from the rear wheel to the centre of gravity of the vehicle body; $k_{1,2}$ — front and rear tyre stiffness; $k_{3,4}$ — stiffness of the front and rear suspension; $c_{3,4}$ — damping of the front and rear suspension; $F_f = F_r = \frac{m_3 \ddot{q} h}{l_1 + l_2}$ — forces that affect the front and rear axles during deceleration, with $h$ the height of the centre of gravity.
Vertical forces acting on the front and rear axles:

$F_{zf} = k_1(q_1 - z_1) \quad \mathrm{and} \quad F_{zr} = k_2(q_2 - z_2)$
The vehicle longitudinal dynamic equation of motion during braking:

$m_{tot} \ddot{q} = F_{tot};$

where $m_{tot}$ — total vehicle mass; $\ddot{q}$ — longitudinal vehicle acceleration; $F_{tot}$ — total longitudinal tyre friction force:

$F_{tot} = \mu_f F_{zf} + \mu_r F_{zr}$
The wheel rotational dynamic equation of motion during braking:

$I_{f,r} \ddot{\varphi}_{f,r} = R_{f,r} \mu_{f,r} F_{zf,r} - T_{bf,r};$

where $I_{f,r}$ — moments of inertia of the front and rear axle wheels; $\ddot{\varphi}_{f,r}$ — angular accelerations of the front and rear axle wheels; $T_{bf,r}$ — braking torques on the front and rear axles, with $T_{bf}/T_{br} = 1.5$ for the vehicle under investigation (value defined during the experiment); $R_{f,r}$ — effective radii of the front and rear axle wheels.
The wheel slip is defined as:

$\lambda_{f,r} = \frac{\dot{q} - \dot{\varphi}_{f,r} R_{f,r}}{\dot{q}}$

where $\dot{q}$ — vehicle longitudinal velocity; $\dot{\varphi}_{f,r}$ — angular velocities of the wheels on the front and rear axles.
The wheel slip $\lambda_{f,r}$ is required for the evaluation of the friction coefficients $\mu_{f,r}$. In this paper, the Magic Formula tyre model proposed by Pacejka was used [37]:

$\mu = D \sin\left(C \arctan\left(B(\lambda + S_h) - E\left[B(\lambda + S_h) - \arctan\left(B(\lambda + S_h)\right)\right]\right)\right) + S_v$

where $B$, $C$, $D$, $E$, $S_h$, $S_v$ — coefficients evaluated from experimental measurements.
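The slip definition and the Magic Formula above translate directly into code. A minimal sketch follows; the coefficient values in the example call are illustrative only, not the fitted values from the experiments:

```python
import math

def wheel_slip(v, omega, r):
    """Longitudinal slip ratio: lambda = (v - omega * r) / v."""
    return (v - omega * r) / v

def magic_formula(slip, B, C, D, E, Sh=0.0, Sv=0.0):
    """Pacejka Magic Formula for the friction coefficient mu(lambda)."""
    x = slip + Sh
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x)))) + Sv

# A locked wheel (omega = 0) gives slip = 1.
slip = wheel_slip(13.9, omega=0.0, r=0.3)
# Illustrative coefficients only:
mu = magic_formula(0.15, B=10.0, C=1.9, D=1.0, E=0.97)
```

Evaluating `magic_formula` over a range of slip values reproduces the characteristic mu–lambda curve: friction rises steeply, peaks at a moderate slip, and falls off toward full wheel lock.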
The ABS algorithm proposed in [39] was used in this investigation; the algorithm and the control strategy are presented in Appendix A.

2.4. Experimental Investigation

To parametrise the vehicle tyre model, an experimental investigation was carried out; it is presented in this subsection. A Toyota Prius test vehicle was used during the experiment (Figure 4). It was equipped with Kistler group measurement equipment: an inertial measurement unit (IMU) for the sprung mass acceleration and rates of angular rotation (Corrsys-Datron TANS-3215003M5), a non-contact optical sensor for vehicle speed (Correvit S-350 Aqua), a wheel pulse transducer for wheel rotation speed (Corrsys-Datron WPT) and a laser distance sensor for the wheel effective radius (Corrsys-Datron HF-500C). A data acquisition system (Corrsys-Datron DAS-3) was used for data logging at a selected frequency of 200 Hz.
Test braking on three different pavements was performed (Figure 5). All the experiments were performed on both dry and wet road surfaces.
Each braking test was carried out from the same initial speed, keeping a constant brake-pedal force and a straight driving trajectory. The data collected during the experiments enabled the determination of the longitudinal wheel slip ratio and its representation against vehicle braking efficiency expressed by longitudinal acceleration.

3. Results

3.1. Evaluation of the DNN-Based Classification Algorithm

The DNN-based road pavement type and condition classification algorithm was tested, and the results are presented in this subsection in the form of a confusion matrix with per-class accuracy, precision, recall and F1 score metrics (Table 2). The classification accuracy for wet conditions (asphalt wet, cobblestone wet and gravel wet) was higher than for the dry conditions of the same road types. Most errors were made between conditions of the same class, especially for gravel: dry gravel was confused with wet gravel 47 times out of 200, but rarely with asphalt or cobblestone. It is hard to discern dry from wet because gravel pavement can differ in colour and tone. Dry cobblestone was confused with dry asphalt 15 times, and wet cobblestone with wet asphalt 9 times; this was mainly related to similar colours and low image detail due to low light or motion blur, as under these conditions the road types look very similar. Dry cobblestone was confused with wet cobblestone 8 times, mostly because of shadows on the pavement. The results show that the DNN-based algorithm provides sufficient performance for the planned use case.
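The per-class metrics reported above can be derived from a confusion matrix as follows. This is a generic sketch using a toy two-class matrix, not the paper's actual data:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall, F1 per class and overall accuracy from a
    confusion matrix whose rows are true classes, columns predictions."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                  # correctly classified per class
    precision = tp / cm.sum(axis=0)   # fraction of predictions that were right
    recall = tp / cm.sum(axis=1)      # fraction of true samples recovered
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, accuracy

# Toy 2-class matrix (illustrative only):
prec, rec, f1, acc = per_class_metrics([[90, 10], [20, 80]])
```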
The implemented system provided real-time video processing at 30 FPS, with a processing time of 20 ms per frame. The average classification accuracy over the 6 classes was 88.8% on the validation dataset and 88.3% on the testing dataset. There were no algorithms whose performance could be compared directly, because road type and conditions are usually classified separately. Therefore, to compare the algorithm's performance with those developed by other researchers, the road-surface type and condition classifications were evaluated separately in Table 3 and Table 4, where the confusion matrices and the precision, recall and F1 score metrics are presented. The average road pavement type classification accuracy is 96%, and the average road condition classification accuracy is 92%. The achieved pavement type classification accuracy is higher than the accuracy reported by Tumen et al. [26]; in addition, the dataset used here is bigger and includes a wider variety of conditions, such as different seasons and times of day, so the results may be replicated more easily. The achieved average road condition classification accuracy of our vision-based algorithm is less than 5% below the best results reported by Alonso et al. [9] and Kalliris et al. [10] using acoustic sensors. The achieved condition classification accuracy of 92% was also lower than the 97.36% visual road-condition classification accuracy of Roychowdhury et al. [29]; the authors achieved this result using SqueezeNet to process 6-colour-channel image patches in front of the car, while the model presented in this paper receives an unprocessed 3-colour input image, so adopting that approach would be worthwhile in our future work. Similarly to Alonso et al. [9], our algorithm provides higher accuracy for wet pavement conditions. The algorithm presented in this paper can provide classification results in advance with a 20 ms delay, while Alonso et al. [9] reported a response time of 0.2 s and Kalliris et al. [10] did not provide any classification or response times.
The analysed results showed that classifying the condition of the road ahead is a harder task than road-surface type classification. Mostly, the wet and dry conditions of the same road-surface type class were misclassified, especially for gravel. Conditions are misclassified because of non-uniform illumination and shadows. Shadows of viaducts and bridges, as well as dark parts of tunnels, cause major difficulties, as they may be mistaken for wet pavement; therefore, shadow compensation methods may be applied here in the future. Road-surface type classification errors are rarer. These errors are caused by the limited dynamic range of the video sensor, slow brightness adaptation, and mostly by details lost in compression. The worst detail loss happens when there are many details in the image and the compression algorithm cannot preserve all of them.
In summary, the results also confirm that a visual spectrum camera has the same drawbacks as human vision. However, the solution can work very well and, unlike a human driver, does not lose attention. In the case of poor visibility due to mist, heavy rain, hail or direct sunlight, the image cannot ensure good identification of the pavement. These conditions can be detected, and effect-based sensors should then be used instead of the visual spectrum camera.

3.2. System Performance Evaluation

During the experiment, the road friction coefficient and the longitudinal wheel slip ratio were measured for the six cases under investigation, and the results are presented in this subsection. The results were filtered using the methodology proposed in [37]. The coefficients of the Magic Formula presented in Section 2.3 were determined using the non-linear least squares optimisation method. In Figure 6, the experimental data are presented as points (marked ED), and the results achieved using the Magic Formula are shown as lines. The experimental data and the μ–λ curves achieved using the Magic Formula coincide well. In [37], the authors obtained even more consistent results, but their tests were carried out under laboratory conditions, whereas in our case all the measurements were carried out under real conditions.
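The non-linear least squares fitting of the Magic Formula coefficients can be sketched with SciPy's `curve_fit`. The data below are synthetic (a known curve plus noise) and the starting values are illustrative; the paper fits measured μ–λ points:

```python
import numpy as np
from scipy.optimize import curve_fit

def magic_formula(slip, B, C, D, E):
    """Pacejka Magic Formula with the shift terms omitted for brevity."""
    return D * np.sin(C * np.arctan(B * slip - E * (B * slip - np.arctan(B * slip))))

# Synthetic "measurements": a known curve plus small noise (illustrative only).
true_B, true_C, true_D, true_E = 10.0, 1.9, 0.8, 0.97
slip = np.linspace(0.01, 1.0, 50)
rng = np.random.default_rng(1)
mu_measured = magic_formula(slip, true_B, true_C, true_D, true_E) \
    + rng.normal(0.0, 0.01, slip.size)

# Non-linear least squares fit of B, C, D, E to the mu-lambda points.
fitted, _ = curve_fit(magic_formula, slip, mu_measured, p0=(8.0, 1.5, 1.0, 0.9))
```

With a reasonable starting guess, the optimiser recovers coefficients close to the generating values; with real measurements, the quality of the fit is judged against the experimental μ–λ points, as in Figure 6.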

3.3. Effectiveness of the Proposed Solution

After the evaluation of the coefficients used in the Magic Formula for all road surfaces, the MM presented in Section 2.3 was developed and validated using the experimental data presented in Figure 7. During the validation, numerical values of the main ABS parameters $a_{min}$, $A$, $a_{max}$ and $\lambda_{ref}$ were evaluated, and the braking torque variation intensity was chosen.
As can be seen in Figure 7, the MM reproduces the experimental data very well and can be used for further research. Using the MM, braking from an initial velocity of 50 km/h was simulated on the different pavement types; the results are presented in Table 5. A conventional ABS with λ_ref = 0.15 performs optimally on dry asphalt, where the developed system with preview has no advantage. In all other cases, however, the developed system performs better. The best result was achieved on wet asphalt, where the stopping distance decreased by 18%; on wet gravel it decreased by 13%, and in the remaining cases the effect was below 10%. The car with conventional ABS achieves a shorter stopping distance than a car without ABS on most road-surface types and conditions, except wet gravel. The shorter stopping distance without ABS on wet gravel may be caused by gravel piling up in front of the locked wheel.
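The benefit of matching the reference slip to the surface can be illustrated with a far simpler model than the validated MM: a point mass whose ABS is assumed to hold the wheel exactly at its reference slip. This is a hedged sketch with made-up Magic Formula parameters, not the paper's simulation.

```python
import numpy as np

def mu_magic(slip, B=12.0, C=1.7, D=0.5, E=0.95):
    # Illustrative Magic Formula parameters for a low-friction surface
    return D * np.sin(C * np.arctan(B * slip - E * (B * slip - np.arctan(B * slip))))

def stopping_distance(lambda_ref, v0_kmh=50.0, g=9.81):
    """Point-mass stopping distance, assuming ABS holds the wheel at lambda_ref."""
    v0 = v0_kmh / 3.6
    return v0 ** 2 / (2.0 * mu_magic(lambda_ref) * g)

# Conventional ABS: fixed reference slip 0.15.
d_conventional = stopping_distance(0.15)

# Preview system: reference slip set to the peak of the mu-slip curve.
slips = np.linspace(0.01, 0.5, 500)
lambda_peak = slips[np.argmax(mu_magic(slips))]
d_preview = stopping_distance(lambda_peak)
```

Whenever the peak of the μ–λ curve lies away from the fixed 0.15 reference, the preview strategy brakes at a higher friction level and the stopping distance shrinks accordingly.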
As shown in Section 3.1, classification errors may appear, and they can affect the braking distance. Table 6 presents the stopping distance for the cases in which a classification error occurs.
As expected, the stopping distance increased significantly. Only on dry cobblestone and dry/wet gravel did the developed system still outperform conventional ABS in some cases (Table 6, first column). To address this, the output data need to be filtered and fused with other in-vehicle sensors. Moreover, misclassification typically lowers the probability the DNN model assigns to its output, so all results for which the DNN output probability falls below a set threshold can be discarded as untrustworthy. In that case the slip value of 0.15 should be taken as the reference, and the braking distance will be the same as in a conventional system.
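The proposed fallback amounts to a small selection rule. The class-to-slip table below mirrors the surface-specific slip values used in the simulations, but the 0.8 threshold is an illustrative assumption, not a value from the paper.

```python
# Surface class -> reference wheel slip (illustrative table; fallback is the
# conventional ABS slip of 0.15, as stated in the text).
REFERENCE_SLIP = {
    "dry_asphalt": 0.15, "wet_asphalt": 0.11,
    "dry_cobblestone": 0.32, "wet_cobblestone": 0.20,
    "dry_gravel": 0.30, "wet_gravel": 0.40,
}
FALLBACK_SLIP = 0.15

def select_reference_slip(class_probs, threshold=0.8):
    """Use the slip for the most probable class, or fall back to the
    conventional reference when the DNN output is not trustworthy."""
    label, prob = max(class_probs.items(), key=lambda kv: kv[1])
    if prob < threshold:
        return FALLBACK_SLIP
    return REFERENCE_SLIP[label]

confident = {"wet_asphalt": 0.93, "dry_asphalt": 0.04, "wet_cobblestone": 0.03}
ambiguous = {"dry_gravel": 0.55, "wet_gravel": 0.45}
```

A confident prediction selects the surface-specific slip; an ambiguous one degrades gracefully to conventional ABS behaviour.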

4. Discussion and Conclusions

The developed road-type classification solution based on video data and a DNN has wide application potential in the automotive industry. It uses sensors and hardware that will already be installed in automated vehicles. Information about the road type can significantly increase the effectiveness of vehicle steering, braking and acceleration, as well as of stability and safety systems.
A DNN-based algorithm for road-pavement type and condition classification was developed. In testing, it achieved an average classification accuracy of 88.8% over six combinations of pavement type and condition: dry asphalt, wet asphalt, dry cobblestone, wet cobblestone, dry gravel and wet gravel. It achieved an even higher accuracy of 96% for the three pavement types alone (asphalt, cobblestone, gravel) and 92% for the dry/wet distinction. The algorithm was implemented and its execution speed tested on an Nvidia Jetson TX2, which can be installed in-vehicle to process real-time video. The implementation processes one image in 20 ms, which is sufficient for up to 50 frames per second and allows every half metre of road pavement to be monitored at 100 km/h. The results can then be used for better control of other systems.
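The half-metre figure follows directly from the frame time and vehicle speed; a one-line check:

```python
def metres_per_frame(speed_kmh, frame_time_s):
    """Road length passing under the vehicle between consecutive processed frames."""
    return speed_kmh / 3.6 * frame_time_s

# 20 ms per image at 100 km/h -> roughly 0.56 m of road per classified frame.
coverage = metres_per_frame(100.0, 0.020)
```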
In this article, a braking case study was investigated. Different configurations of ABS are available on the market, and it has to be admitted that the method proposed in this research will not suit absolutely all braking systems. The wheel slip at which the friction coefficient is maximal varies: it depends on the vertical load, tyre pressure, temperature, surface roughness and other parameters [15,52,53]. Implementing our method in a high-dynamic decoupled electro-hydraulic brake system, as presented by Savitski et al. [54], would not be effective, because such systems calculate the reference wheel slip by identifying the maximum longitudinal force online, so factors such as load, tyre pressure and temperature become essentially irrelevant. The main shortcoming of the high-dynamic decoupled electro-hydraulic brake system is its complexity and cost. The main advantage of our solution is that it can be implemented in a simple brake system with a rule-based control algorithm and, as shown, its effectiveness for such a system is high.
The developed MM was validated and reproduces the experimental data well. The stopping distances when braking on different pavement types without ABS, with conventional ABS and with the developed system were simulated. The results show that the proposed solution reduces the stopping distance for all analysed road types and conditions except dry asphalt, on which the stopping distance is unchanged. The best result was achieved on wet asphalt, where the stopping distance decreased by 18%.
In the future, a new dataset will be created from recorded uncompressed data, matching what will be available during real-time processing in the vehicle. The dataset will include snowy and icy conditions, as well as environments more challenging for the camera, such as direct sunshine, twilight, fog or mist. The DNN model will be improved to use fewer resources and to achieve higher accuracy and speed. The safety and reliability of the algorithm will be improved by adding faulty-classification detection and filtering. The future system may fuse the DNN-based algorithm's results with data from in-vehicle sensors.

Author Contributions

Conceptualisation, V.S.; Methodology, V.S.; Investigation, E.Š., V.S., O.P. and V.Ž.; Experimental data, V.Ž. and E.Š.; Writing—Original draft preparation, E.Š., V.S., V.Ž. and O.P.; Writing—Review and editing, O.P. and V.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Appendix A

The total longitudinal force F_tot of the vehicle with ABS can be calculated using Equation (7), taking into account that μ_f is controlled by the ABS algorithm presented in Figure A1 and Table A1. The algorithm was developed by the authors based on the one created at TU Ilmenau and described in [39]. Its operating principle is based on predefined wheel-acceleration and slip thresholds. During braking, the algorithm starts from stage A (Table A1) and regulates the braking torque based on the wheel acceleration and its slip (Figure A1).
Figure A1. ABS algorithm.
Once the actual wheel slip exceeds the reference value, the algorithm switches to stage B. It should be noted that stage A is active only during the initial braking; after that, the algorithm operates in stages B, C and D.
Table A1. Control strategy.
| Stage | Rule | Stage | Rule |
|---|---|---|---|
| 1 | if a_w < [a_min] & t > t_up; t = 0 | 6 | if a_w > [A]; t = 0 |
| 2 | if λ < [λ_ref] & t > t_up; t = 0 | 7 | if a_w > [a_max] & λ < [λ_ref], or t > t_sw; t = 0 |
| 3 | if λ > [λ_ref]; t = 0 | 8 | if λ < [λ_ref] & t > t_up; t = 0 |
| 4 | if a_w < [A] & t > t_down; t = 0 | 9 | if λ < [λ_ref] & t > t_up; t = 0 |
| 5 | if a_w < [A] & t > t_sw; t = 0 | 10 | if λ > [λ_ref]; t = 0 |
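The switching rules of Table A1 can be encoded directly as boolean conditions. The sketch below does exactly that; the numerical thresholds in PARAMS are placeholders, since the actual a_min, a_max, A and timing values used in the paper are given only in Figure A1.

```python
# Placeholder thresholds (not the paper's values): wheel accelerations [m/s^2],
# reference slip [-], and timer limits [s].
PARAMS = {"a_min": -30.0, "a_max": 5.0, "A": -10.0,
          "lambda_ref": 0.15, "t_up": 0.02, "t_down": 0.02, "t_sw": 0.05}

def rule_fires(rule, a_w, slip, t, p):
    """Evaluate one switching rule of Table A1.

    a_w is the wheel acceleration, slip the current wheel slip, t the time
    spent in the current stage (reset to 0 on every transition)."""
    conditions = {
        1: a_w < p["a_min"] and t > p["t_up"],
        2: slip < p["lambda_ref"] and t > p["t_up"],
        3: slip > p["lambda_ref"],
        4: a_w < p["A"] and t > p["t_down"],
        5: a_w < p["A"] and t > p["t_sw"],
        6: a_w > p["A"],
        7: (a_w > p["a_max"] and slip < p["lambda_ref"]) or t > p["t_sw"],
        8: slip < p["lambda_ref"] and t > p["t_up"],
        9: slip < p["lambda_ref"] and t > p["t_up"],
        10: slip > p["lambda_ref"],
    }
    return conditions[rule]

# Example: slip above the reference immediately triggers rule 3.
fired = rule_fires(3, a_w=0.0, slip=0.20, t=0.0, p=PARAMS)
```

Which rule moves the controller between which stages is defined by Figure A1 and is not reproduced here.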

References

  1. Chowdhury, S.R.; Zhao, M.; Jonasson, M.; Ohlsson, N. Methods and Systems for Generating and Using a Road Friction Estimate Based on Camera Image Signal Processing. U.S. Patent Application No. US20190340445A1, 7 November 2019. [Google Scholar]
  2. Žuraulis, V.; Surblys, V.; Šabanovič, E. Technological measures of forefront road identification for vehicle comfort and safety improvement. Transport 2019, 34, 363–372. [Google Scholar] [CrossRef] [Green Version]
  3. Arslan, S.; Saritas, M. The effects of OFDM design parameters on the V2X communication performance. Surv. Veh. Commun. 2017, 7, 1–6. [Google Scholar] [CrossRef]
  4. Sousa, S.; Santos, A.; Costa, A.; Gama, Ó. A New approach on communications architectures for intelligent transportation systems. Procedia Comput. Sci. 2017, 110, 320–327. [Google Scholar] [CrossRef]
  5. Tanizaki, T.; Ueda, K.; Murabe, T.; Nomura, H.; Kamakura, T. Identification of winter tires using vibration signals generated on the road surface. Appl. Acoust. 2014, 83, 116–122. [Google Scholar] [CrossRef]
  6. Colace, L.; Santoni, F.; Assanto, G. A near-infrared optoelectronic approach to detection of road conditions. Opt. Lasers Eng. 2013, 51, 633–636. [Google Scholar] [CrossRef]
  7. Wang, B.; Guan, H.; Lu, P.; Zhang, A. Road surface condition identification approach based on road characteristic value. J. Terramech. 2014, 56, 103–117. [Google Scholar] [CrossRef]
  8. Bhandari, R.; Patil, S.; Singh, R.K. Surface prediction and control algorithms for anti-lock brake system. Transp. Res. Part C 2012, 21, 181–195. [Google Scholar] [CrossRef]
  9. Alonso, J.; López, J.M.; Pavón, I.; Recuero, M.; Asensio, C.; Arcas, G.; Bravo, A. On-board wet road surface identification using tyre/road noise and support vector machines. Appl. Acoust. 2014, 76, 407–415. [Google Scholar] [CrossRef]
  10. Kalliris, M.; Kanarachos, S.; Kotsakis, R.; Haas, O.; Blundell, M. Machine learning algorithms for wet road surface detection using acoustic measurements. Proceedings of 2019 IEEE International Conference on Mechatronics (ICM), Ilmenau, Germany, 18–20 March 2019; pp. 265–270. [Google Scholar] [CrossRef]
  11. Ngwangwa, H.M.; Heyns, P.S. Application of an ANN-based methodology for road surface condition identification on mining vehicles and roads. J. Terramech. 2014, 53, 59–74. [Google Scholar] [CrossRef] [Green Version]
  12. Taniguchi, Y.; Nishii, K.; Hisamatsu, H. Evaluation of a bicycle-mounted ultrasonic distance sensor for monitoring road surface condition. In Proceedings of the 7th International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN 2015), Riga, Latvia, 3–5 June 2015; pp. 31–34. [Google Scholar] [CrossRef]
  13. Niskanen, A.; Tounonen, A.J. Three three-axis IEPE accelerometers on the inner liner of a tire for finding the tire-road friction potential indicators. Sensors 2015, 15, 19251–19263. [Google Scholar] [CrossRef] [Green Version]
  14. Yunta, J.; Garcia-Pozuelo, D.; Diaz, V.; Olatunbosun, O. A strain-based method to detect tires’ loss of grip and estimate lateral friction coefficient from experimental data by fuzzy logic for intelligent tire development. Sensors 2018, 18, 490. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Acosta, M.; Kanarachos, S.; Blundell, M. Road friction virtual sensing: A review of estimation techniques with emphasis on low excitation approaches. Appl. Sci. 2017, 7, 1230. [Google Scholar] [CrossRef] [Green Version]
  16. Sun, F.; Huang, X.; Rudolph, J.; Lolenko, K. Vehicle state estimation for anti-lock control with nonlinear observer. Control Eng. 2015, 43, 69–84. [Google Scholar] [CrossRef]
  17. Chen, Z.; Huang, X. End-to-end learning for lane keeping of self-driving cars. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, 11–14 June 2017; pp. 1856–1860. [Google Scholar] [CrossRef]
  18. Hane, C.; Sattler, T.; Pollefey, M. Obstacle detection for self-driving cars using only monocular cameras and wheel odometry. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5101–5108. [Google Scholar] [CrossRef]
  19. Wang, Q.; Wei, Z.; Wang, J.; Chen, W.; Wang, N. Curve recognition algorithm based on edge point curvature voting. Proc. Inst. Mech. Eng. D 2019, 1–14. [Google Scholar] [CrossRef]
  20. Cao, J.; Song, C.; Song, S.; Xiao, F.; Peng, S. Lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments. Sensors 2019, 19, 3166. [Google Scholar] [CrossRef] [Green Version]
  21. Cafiso, S.; D’Agostino, C.; Delfino, E.; Montella, A. From manual to automatic pavement distress detection and classification. In Proceedings of the 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Napoli, Italy, 26–28 June 2017; pp. 433–438. [Google Scholar] [CrossRef]
  22. Shen, G. Road crack detection based on video image processing. In Proceedings of the 3rd International Conference on Systems and Informatics ICSAI, Shanghai, China, 19–21 November 2016; pp. 912–917. [Google Scholar] [CrossRef]
  23. Meignen, D.; Bernadet, M.; Briand, H. One application of neural networks for detection of defects using video data bases: Identification of road distress. In Proceedings of the 8th International Conference and Workshop on Database and Expert Systems Applications (DEXA ‘97), Toulouse, France, 1–5 September 1997; pp. 459–464. [Google Scholar] [CrossRef]
  24. Oliveira, H.; Lobato Correia, P. Identifying and retrieving distress images from road pavement surveys. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP 2008), San Diego, CA, USA, 12–15 October 2008; pp. 57–60. [Google Scholar] [CrossRef]
  25. Smolyanskiy, N.; Kamenev, A.; Birchfield, S. On the importance of stereo for accurate depth estimation: An efficient semi-supervised deep neural network approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2018), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1120–1128. [Google Scholar] [CrossRef] [Green Version]
  26. Tumen, V.; Yildirim, O.; Ergen, B. Recognition of road type and quality for advanced driver assistance systems with deep learning. Elektronika ir Elektrotechnika 2018, 24, 67–74. [Google Scholar] [CrossRef] [Green Version]
  27. Gimonet, N.; Cord, A.; Saint Pierre, G. How to predict real road state from vehicle embedded camera? In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 593–598. [Google Scholar] [CrossRef]
  28. Cheng, G.; Wang, Z.; Zheng, J.Y. Modeling weather and illuminations in driving views based on big-video mining. IEEE Trans. Intell. Veh. 2018, 3, 522–533. [Google Scholar] [CrossRef]
  29. Roychowdhury, S.; Zhao, M.; Wallin, A.; Ohlsson, N.; Jonasson, M. Machine learning models for road surface and friction estimation using front-camera images. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2018), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  30. Du, Y.; Liu, C.; Song, Y.; Li, Y.; Shen, Y. Rapid estimation of road friction for anti-skid autonomous driving. IEEE Trans. Intell. Transp. 2019, 1–10. [Google Scholar] [CrossRef]
  31. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
  32. Udacity. Dataset Wiki. Available online: https://github.com/udacity/self-driving-car/tree/master/datasets (accessed on 16 December 2019).
  33. Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 year, 1000 km: The Oxford RobotCar dataset. Int. J. Robot. Res. 2016, 36, 3–15. [Google Scholar] [CrossRef]
  34. Blanco-Claraco, J.L.; Moreno-Dueñas, F.Á.; González-Jiménez, J. The Málaga urban dataset: High-rate stereo and LiDAR in a realistic urban scenario. Int. J. Robot. Res. 2014, 33, 207–214. [Google Scholar] [CrossRef] [Green Version]
  35. LG Electronics, Inc. LGSVL Simulator: An Autonomous Vehicle Simulator. Available online: https://github.com/lgsvl/simulator (accessed on 16 December 2019).
  36. Burckhardt, M.; Reimpell, J. Fahrwerktechnik, Radschlupf-Regelsysteme; Vogel Verlag: Wüzburg, Germany, 1993; p. 432. [Google Scholar]
  37. Ružinskas, A.; Sivilevičius, H. Magic formula tyre model application for a tyre-ice interaction. Procedia Eng. 2017, 187, 335–341. [Google Scholar] [CrossRef]
  38. Cabrera, J.A.; Castillo, J.J.; Pérez, J.; Velasco, J.M.; Guerra, A.J.; Hernández, P. A Procedure for determining tire-road friction characteristics using a modification of the magic formula based on experimental results. Sensors 2018, 18, 896. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Shyrokau, B.; Wang, D.; Savitski, D.; Hoepping, K.; Ivanov, V. Vehicle motion control with subsystem prioritization. Mechatronics 2015, 30, 297–315. [Google Scholar] [CrossRef]
  40. Van der Merwe, N.; Els, P.S.; Žuraulis, V. ABS braking on rough terrain. J. Terramech. 2018, 80, 49–57. [Google Scholar] [CrossRef] [Green Version]
  41. Ngwangwa, H.M.; Heyns, P.S.; Labuschagne, K.F.J.J.; Kululanga, G.K. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles. In Proceedings of the 27th Southern African Transport Conference (SATC 2008), Pretoria, South Africa, 7–11 July 2008; pp. 312–329. [Google Scholar]
  42. Shyrokau, B.; Wang, D.; Savitski, D.; Ivanov, V. Vehicle dynamics control with energy recuperation based on control allocation for independent wheel motors and brake system. Int. J. Powertrains 2013, 2, 153–181. [Google Scholar] [CrossRef]
  43. Venture, G.; Ripert, P.J.; Khalil, W.; Gautier, M.; Bodson, P. Modeling and identification of passenger car dynamics using robotics formalism. IEEE Trans. Intell. Transp. Syst. 2006, 7, 349–359. [Google Scholar] [CrossRef]
  44. Blaupunkt. BP 3.0 FHD GPS; Blaupunkt: Hamelin, Germany, 2015; Available online: https://www.blaupunkt.com/uploads/tx_ddfproductsbp/BP%203.0%20User%20%20manual_English.pdf (accessed on 16 December 2019).
  45. OmniVision. OV2710-1E Full HD (1080p) Product Brief; OmniVision: Santa Clara, CA, USA, 2015; 1080p, Available online: https://www.ovt.com/download/sensorpdf/33/OmniVision_OV2710-1E.pdf (accessed on 16 December 2019).
  46. NVIDIA. NVIDIA DIGITS. Available online: https://developer.nvidia.com/digits (accessed on 16 December 2019).
  47. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. arXiv 2014, arXiv:1408.5093. [Google Scholar]
  48. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  49. NVIDIA. NVIDIA TensorRT. Available online: https://developer.nvidia.com/tensorrt (accessed on 16 December 2019).
  50. NVIDIA. NVIDIA Jetson TX2. Available online: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/ (accessed on 16 December 2019).
  51. NVIDIA. Two Days to a Demo. Available online: https://developer.nvidia.com/embedded/twodaystoademo (accessed on 16 December 2019).
  52. Gnadler, R.; Unrau, H.J.; Hartmut, F.; Frey, M. FAT-Schriftenreihe 119; Forschungsvereinigung Automobiltechnik e.V.: Frankfurt, Germany, 1995; p. 170. [Google Scholar]
  53. Höpping, K.; Augsburg, K.; Büchner, F. Extending the HSRI tyre model for large inflation pressure changes. In Proceedings of the Engineering for a Changing World: 59th IWK, Ilmenau Scientific Colloquium, Ilmenau, Germany, 11–15 September 2017; pp. 1–20. [Google Scholar]
  54. Savitski, D.; Schleinin, D.; Ivanov, V.; Augsburg, K. Robust continuous wheel slip control with reference adaptation: Application to the brake system with decoupled architecture. IEEE Trans. Ind. Inform. 2018, 14, 4212–4223. [Google Scholar] [CrossRef]
Figure 1. Deep neural network (DNN) model for road type and conditions classification.
Figure 2. Blocks schematic of system implementation.
Figure 3. Vehicle dynamic model.
Figure 4. Test vehicle.
Figure 5. Selected road surfaces for test braking: (a) asphalt, (b) cobblestone, (c) gravel.
Figure 6. Numerical values of calculated and experimental data (ED) friction coefficients.
Figure 7. Validation of vehicle MM with anti-lock braking system (ABS).
Table 1. Parameters for mathematical model (MM).
| Parameter | Numerical Value |
|---|---|
| Unsprung mass of front wheels, m1 [kg] | 60 |
| Unsprung mass of rear wheels, m2 [kg] | 40 |
| Vehicle body mass, m3 [kg] | 1160 |
| Vehicle body inertia moment, I3 [kg·m²] | 1761.4 |
| Gravitational acceleration, g [m/s²] | 9.81 |
| Stiffness of front wheels, k1 [N/m] | 2 × 10⁵ |
| Stiffness of rear wheels, k2 [N/m] | 2 × 10⁵ |
| Stiffness of front suspension, k3 [N/m] | 2 × 24,236.5 |
| Stiffness of rear suspension, k4 [N/m] | 2 × 16,965.5 |
| Damping of front suspension, c3 [N·s/m] | 2 × 2726.6 |
| Damping of rear suspension, c4 [N·s/m] | 2 × 1908.6 |
| Distance from front wheel to the centre of vehicle body mass, a1 [m] | 1.37 |
| Distance from rear wheel to the centre of vehicle body mass, a2 [m] | 1.33 |
| Weight of centre of vehicle body mass | 0.6 |
| Wheel radii, r [m] | 0.3 |
| Front axle moment of inertia, I_f [kg·m²] | 2 × 1.155 |
| Rear axle moment of inertia, I_r [kg·m²] | 2 × 0.77 |
Table 2. Confusion matrix for road pavement type and condition combinations.
| Actual \ Predicted | C1 | C2 | C3 | C4 | C5 | C6 | Per-Class Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|---|---|---|---|---|
| C1: Asphalt dry | 174 | 14 | 4 | 4 | 2 | 2 | 87.0% | 0.90 | 0.87 | 0.89 |
| C2: Asphalt wet | 3 | 191 | 0 | 4 | 0 | 2 | 95.5% | 0.88 | 0.955 | 0.92 |
| C3: Cobblestone dry | 15 | 3 | 171 | 8 | 0 | 3 | 85.5% | 0.96 | 0.855 | 0.90 |
| C4: Cobblestone wet | 1 | 9 | 1 | 188 | 0 | 1 | 94.0% | 0.92 | 0.94 | 0.93 |
| C5: Gravel dry | 0 | 0 | 2 | 0 | 151 | 47 | 75.5% | 0.90 | 0.755 | 0.82 |
| C6: Gravel wet | 0 | 0 | 0 | 0 | 15 | 185 | 92.5% | 0.77 | 0.93 | 0.84 |
Table 3. Confusion matrix for road pavement type only.
| Actual \ Predicted | Asphalt | Cobblestone | Gravel | Per-Class Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|---|---|
| Asphalt | 382 | 12 | 6 | 95.5% | 0.93 | 0.955 | 0.94 |
| Cobblestone | 28 | 368 | 4 | 92% | 0.96 | 0.92 | 0.94 |
| Gravel | 0 | 2 | 398 | 99.5% | 0.975 | 0.995 | 0.98 |
Table 4. Confusion matrix for road conditions only.
| Actual \ Predicted | Dry | Wet | Per-Class Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|---|
| Dry | 519 | 81 | 86.5% | 0.96 | 0.865 | 0.91 |
| Wet | 20 | 580 | 96.7% | 0.88 | 0.965 | 0.92 |
Table 5. Stopping distance.
| Pavement Type | Wheel Slip at Max Friction | Without ABS, m | Without ABS vs. Conv. ABS, % | Conventional ABS, m | System with Preview, m | Preview vs. Conv. ABS, % |
|---|---|---|---|---|---|---|
| Dry asphalt | 0.15 | 21.37 | −64 | 13.05 | 13.05 | 0 |
| Wet asphalt | 0.11 | 32.45 | −33 | 24.36 | 20.05 | 18 |
| Dry cobble. | 0.32 | 26.78 | −42 | 18.89 | 17.51 | 7 |
| Wet cobble. | 0.20 | 33.30 | −18 | 28.22 | 27.78 | 2 |
| Dry gravel | 0.3 | 25.66 | −14 | 22.53 | 21.83 | 3 |
| Wet gravel | 0.4 | 31.79 | 8 | 34.68 | 30.05 | 13 |
Table 6. Stopping distance with wrong classification.
| Actual Surface \ Set Surface | Dry Asphalt | Wet Asphalt | Dry Cobble. | Wet Cobble. | Dry Gravel | Wet Gravel |
|---|---|---|---|---|---|---|
| Dry asphalt | 13.05 | 13.51 | 19.40 | 19.17 | 19.40 | 20.37 |
| Wet asphalt | 24.36 | 20.05 | 30.01 | 27.82 | 29.59 | 31.09 |
| Dry cobble. | 18.89 | 20.78 | 17.51 | 17.87 | 17.53 | 18.31 |
| Wet cobble. | 28.22 | 29.55 | 28.34 | 27.78 | 28.16 | 29.24 |
| Dry gravel | 22.53 | 23.74 | 21.84 | 22.02 | 21.83 | 21.99 |
| Wet gravel | 34.68 | 38.39 | 30.32 | 32.34 | 30.48 | 30.05 |
