AI-Based Autonomous Driving System

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electrical and Autonomous Vehicles".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 73595

Special Issue Editors


Guest Editor
Intelligent Signal Processing Lab, School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
Interests: digital communication and mobile communication systems; ICT & automotive convergence; machine learning applications

Guest Editor
NCBS Lab, School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
Interests: nonlinear estimation and filtering; sliding-mode control; vehicle dynamics and control; autonomous vehicle control; AI; signal processing

Guest Editor
Advanced Wireless and Communication Research Center (AWCC), The University of Electro-Communications, Tokyo 182-8585, Japan
Interests: wireless ad-hoc network; cognitive radio; wireless sensing technology; wireless network protocol; mobile network communications; ITS and software radio

Special Issue Information

Dear Colleagues,

Artificial intelligence and autonomous driving systems are among the core technologies of the fourth industrial revolution. Combining artificial intelligence with autonomous driving systems is accelerating the arrival of the autonomous vehicle era.

Driving conditions change from moment to moment and can produce situations that have not yet been considered. Predicting the driving environment with artificial intelligence, and reliably assessing each situation, are therefore essential for implementing a self-driving system.

The aim of this Special Issue is to introduce the latest technologies for AI-based autonomous driving systems. The autonomous platforms considered in this issue include cars, drones, unmanned underwater vehicles, and robots.

While this Special Issue broadly invites submissions on AI technologies for autonomous vehicles, V2X technologies, and other emerging technologies for autonomous vehicles, specific topics include, but are not limited to:

  • Deep learning technologies for autonomous vehicles
  • Multi-modal learning for autonomous vehicles
  • Vehicular communication and network systems
  • Autonomous vehicle interaction
  • Security for autonomous vehicles
  • Monitoring and control in autonomous vehicles
  • Advanced driver assistance systems (ADAS)
  • Advanced sensor systems for autonomous vehicles
  • Navigation, localization, map building and path planning
  • Hardware that is specific to AI-based autonomous vehicles

Prof. Dr. Dong Seog Han
Prof. Dr. Kalyana C. Veluvolu
Prof. Dr. Takeo Fujii

Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Autonomous vehicle
  • Artificial intelligence
  • Vehicular communication
  • Advanced driver assistance systems
  • Navigation and localization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (16 papers)


Research

15 pages, 4596 KiB  
Article
An Autonomous Vehicle Stability Control Using Active Fault-Tolerant Control Based on a Fuzzy Neural Network
by Turki Alsuwian, Mian Hamza Usman and Arslan Ahmed Amin
Electronics 2022, 11(19), 3165; https://doi.org/10.3390/electronics11193165 - 1 Oct 2022
Cited by 9 | Viewed by 2809
Abstract
Instability issues in autonomous vehicles are rapidly increasing the risk of accidents. These problems arise from unwanted faults in the sensors or actuators, which degrade vehicle performance. As vehicles become fully automatic, the risk factor grows, so a fault-tolerant control system (FTCS) is needed to avoid accidents and reduce risk. This paper presents an active fault-tolerant control (AFTC) scheme for autonomous vehicles with a fuzzy neural network that can autonomously identify any wheel speed problem and thereby avoid instability. Simulation experiments were carried out in the MATLAB/Simulink environment, and the results demonstrate stable operation of the wheel speed sensors, avoiding accidents in the event of sensor or actuator faults that would otherwise destabilize the vehicle. The simulation results establish that the AFTC-based autonomous vehicle using a fuzzy neural network is a highly reliable solution for keeping cars stable and avoiding accidents. Active FTC and vehicle stability make the system more efficient and reliable, reducing the chance of instability to a minimum. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
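
The paper's fuzzy-neural fault detector is not reproduced in this listing, but the underlying AFTC idea — compare each wheel-speed reading against a model estimate and reconfigure when the residual grows — can be sketched minimally. The thresholds, blending rule, and function names below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def fault_degree(residual, low=0.5, high=2.0):
    """Map a wheel-speed residual (rad/s) to a fuzzy fault degree in
    [0, 1] via a simple ramp membership; thresholds are illustrative."""
    return float(np.clip((abs(residual) - low) / (high - low), 0.0, 1.0))

def tolerant_wheel_speed(measured, model_estimate, switch_at=0.8):
    """Blend sensor and model estimate as the fault degree grows;
    reconfigure entirely to the model estimate for severe faults."""
    mu = fault_degree(measured - model_estimate)
    if mu >= switch_at:
        return model_estimate            # active reconfiguration
    return (1.0 - mu) * measured + mu * model_estimate
```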

14 pages, 4527 KiB  
Article
Deep-Learning-Based Network for Lane Following in Autonomous Vehicles
by Abida Khanum, Chao-Yang Lee and Chu-Sing Yang
Electronics 2022, 11(19), 3084; https://doi.org/10.3390/electronics11193084 - 27 Sep 2022
Cited by 21 | Viewed by 3787
Abstract
The research field of autonomous self-driving vehicles has recently become increasingly popular. In addition, motion-planning technology is essential for autonomous vehicles because it mitigates the prevailing on-road obstacles. Herein, a deep-learning-network-based architecture integrating VGG16 and the gated recurrent unit (GRU) was applied for lane-following on roads. The normalized input image was fed to a three-layer VGG16 feature extractor, whose output was passed to the GRU layer. The processed data were then fed to two fully connected layers, with a dropout layer in between. To evaluate the deep-learning-network-based model, the steering angle and speed for the control task were predicted as output parameters. Experiments were conducted using a dataset from the Udacity simulator and a real dataset. The results show that the proposed framework reliably predicted steering angles in different directions, achieving mean square errors of 0.0230 and 0.0936 with inference times of 3–4 ms and 3 ms, respectively. We also implemented the proposed framework on the NVIDIA Jetson embedded platform (Jetson Nano 4 GB) and compared its computational time with that of the GPU. The results revealed that the embedded system took 45–46 s to execute a single epoch for steering angle prediction. Overall, the proposed framework generates accurate motion planning for lane-following in autonomous driving. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
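
As a rough PyTorch sketch of the architecture the abstract describes — VGG16 features, a GRU over the frame sequence, then two fully connected layers with dropout regressing steering angle and speed — with layer sizes and sequence handling assumed rather than taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

class LaneFollowNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.features = models.vgg16(weights=None).features  # VGG16 conv stack
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.Dropout(0.5),
                                  nn.Linear(64, 2))  # steering angle, speed

    def forward(self, clip):                 # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        f = self.features(clip.flatten(0, 1))
        f = self.pool(f).flatten(1).view(b, t, 512)
        out, _ = self.gru(f)
        return self.head(out[:, -1])         # predict from the last time step
```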

17 pages, 7271 KiB  
Article
Traffic Landmark Matching Framework for HD-Map Update: Dataset Training Case Study
by Young-Kook Park, Hyunhee Park, Young-Su Woo, In-Gu Choi and Seung-Soo Han
Electronics 2022, 11(6), 863; https://doi.org/10.3390/electronics11060863 - 9 Mar 2022
Cited by 9 | Viewed by 3950
Abstract
High-definition (HD) maps determine the location of the vehicle under limited visibility based on the location information of safety signs detected by sensors. If a safety sign disappears or changes, incorrect information may be obtained. Thus, map data must be updated daily to prevent accidents. This study proposes a map update system (MUS) framework that maps objects detected by a road map detection system to the objects present in the HD map. Based on the traffic safety signs specified by the Korean National Police Agency, 151 types of objects, including traffic signs, traffic lights, and road markings, were annotated manually and semi-automatically. Approximately 3,000,000 annotations were used to train a you-only-look-once (YOLO) model, which is suitable for real-time detection, with safety signs grouped by similar properties. The object coordinates were then extracted from the mobile mapping system point cloud, and the detection location accuracy was verified by comparing it against the center point of the object detected in the MUS. The performance of the models with and without property-based grouping was compared, and their effectiveness was verified based on the dataset configuration. A model trained with a Korean road traffic dataset on our testbed achieved 95% mAP with the grouped model and 70.9% mAP without grouping. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
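
The core matching step — associating detected landmark center points with HD-map object coordinates — might look like the following greedy nearest-neighbour sketch; the distance gate and data layout are assumptions, not the paper's actual MUS implementation:

```python
import numpy as np

def match_landmarks(detected_xy, map_xy, max_dist=1.5):
    """Greedily match detected landmark centres (N, 2) to HD-map object
    coordinates (M, 2), both in metres; max_dist is an assumed gate."""
    matches, used = [], set()
    for i, p in enumerate(detected_xy):
        d = np.linalg.norm(map_xy - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            matches.append((i, j, float(d[j])))
            used.add(j)
    return matches  # unmatched detections are candidate map updates
```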

11 pages, 526 KiB  
Article
Application and Comparison of Deep Learning Methods to Detect Night-Time Road Surface Conditions for Autonomous Vehicles
by Hongyi Zhang, Rabia Sehab, Sheherazade Azouigui and Moussa Boukhnifer
Electronics 2022, 11(5), 786; https://doi.org/10.3390/electronics11050786 - 3 Mar 2022
Cited by 8 | Viewed by 2537
Abstract
Currently, road surface conditions ahead of autonomous vehicles are not well detected by the existing on-board sensors. However, driving safety must be ensured under weather-induced road conditions both day and night. An investigation into deep learning to recognize road surface conditions during the day was conducted using data collected from a camera embedded on the front of the vehicle. Deep learning models have only been proven successful during the day; they have not yet been assessed for night conditions. The objective of this work is to propose deep learning models that detect, online and with high accuracy, the weather-induced road surface conditions ahead of an autonomous vehicle at night. For this study, several deep learning models, namely a traditional CNN, SqueezeNet, VGG, ResNet, and DenseNet, are applied and their performance compared. Considering the limitations of existing night-time detection, the reflection features of different road surfaces are investigated in this paper. Based on these features, night-time databases were collected with and without ambient illumination. These databases are drawn from several public videos so that the selected models generalize to more scenes. The selected models were trained on the collected database, and in validation, their accuracy in classifying dry, wet, and snowy road surface conditions at night reached up to 94%. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
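
A minimal transfer-learning baseline for the three-class night-time task (dry/wet/snowy) could be set up as below; the paper compares CNN, SqueezeNet, VGG, ResNet, and DenseNet variants, and this sketch shows only a ResNet-18 case with assumed hyperparameters:

```python
import torch.nn as nn
from torchvision import models

CLASSES = ["dry", "wet", "snowy"]  # surface conditions classified in the paper

def build_night_road_classifier():
    """ImageNet-pretrained backbone with a new 3-way head (sketch)."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, len(CLASSES))
    return net
```

Fine-tuning such a model on the night-time databases collected with and without ambient illumination would mirror the paper's training setup.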

13 pages, 3382 KiB  
Article
Real-Time LiDAR Point Cloud Semantic Segmentation for Autonomous Driving
by Xing Xie, Lin Bai and Xinming Huang
Electronics 2022, 11(1), 11; https://doi.org/10.3390/electronics11010011 - 22 Dec 2021
Cited by 19 | Viewed by 6545
Abstract
LiDAR has been widely used in autonomous driving systems to provide high-precision 3D geometric information about the vehicle's surroundings for perception, localization, and path planning. LiDAR-based point cloud semantic segmentation is an important task with a critical real-time requirement. However, most of the existing convolutional neural network (CNN) models for 3D point cloud semantic segmentation are very complex and can hardly be processed in real time on an embedded platform. In this study, a lightweight CNN structure was proposed for projection-based LiDAR point cloud semantic segmentation with only 1.9 M parameters, an 87% reduction compared to state-of-the-art networks. When evaluated on a GPU, the processing time was 38.5 ms per frame, and it achieved a 47.9% mIoU score on the SemanticKITTI dataset. In addition, the proposed CNN was targeted to an FPGA using the NVDLA architecture, which resulted in a 2.74× speedup over the GPU implementation and a 46× improvement in power efficiency. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
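
Projection-based segmentation first flattens the point cloud into a 2D range image that an ordinary CNN can consume. A standard spherical projection, with the vertical field of view assumed to match a typical 64-beam sensor as used for SemanticKITTI, looks roughly like this:

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3+) LiDAR point cloud to an H x W range image;
    FOV bounds (degrees) are typical 64-beam values, assumed here."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / np.maximum(r, 1e-8))
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = np.clip((0.5 * (1.0 - yaw / np.pi) * W).astype(int), 0, W - 1)
    v = np.clip(((1.0 - (pitch - down) / (up - down)) * H).astype(int), 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r          # later points overwrite earlier ones
    return img
```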

16 pages, 4270 KiB  
Article
A Traffic-Aware Federated Imitation Learning Framework for Motion Control at Unsignalized Intersections with Internet of Vehicles
by Tianhao Wu, Mingzhi Jiang, Yinhui Han, Zheng Yuan, Xinhang Li and Lin Zhang
Electronics 2021, 10(24), 3050; https://doi.org/10.3390/electronics10243050 - 7 Dec 2021
Cited by 9 | Viewed by 3411
Abstract
The wealth of data and the enhanced computation capabilities of the Internet of Vehicles (IoV) enable optimized motion control of vehicles passing through an intersection without traffic lights. However, the growing number of intersections and demands for privacy protection pose new challenges to motion control optimization. Federated Learning (FL) can protect privacy via model interaction in IoV, but traditional FL methods hardly deal with the transportation issue. To address this issue, this study proposes a Traffic-Aware Federated Imitation learning framework for Motion Control (TAFI-MC), consisting of Vehicle Interactors (VIs), Edge Trainers (ETs), and a Cloud Aggregator (CA). An Imitation Learning (IL) algorithm is integrated into TAFI-MC to improve motion control, and a loss-aware experience selection strategy is explored to reduce communication overhead between ETs and VIs. The experimental results show that the proposed TAFI-MC outperforms imitated rules in terms of collision avoidance and driving comfort, and that the experience selection strategy reduces communication overheads while ensuring convergence. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
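
The cloud-aggregation step of such a federated setup is typically a size-weighted average of the client models (FedAvg). The sketch below shows only that generic step, not TAFI-MC's traffic-aware or loss-aware components:

```python
import copy
import torch

def fed_avg(client_states, client_sizes):
    """Size-weighted average of client state_dicts (generic FedAvg);
    integer buffers are cast to float for the weighted sum."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for k in avg:
        avg[k] = sum(s[k].float() * (n / total)
                     for s, n in zip(client_states, client_sizes))
    return avg  # load into the global model with load_state_dict(avg)
```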

12 pages, 6817 KiB  
Article
A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment
by Mingyu Park, Hyeonseok Kim and Seongkeun Park
Electronics 2021, 10(21), 2608; https://doi.org/10.3390/electronics10212608 - 26 Oct 2021
Cited by 10 | Viewed by 4475
Abstract
In this paper, we develop end-to-end autonomous driving based on a 2D LiDAR sensor and a camera sensor that predicts the control values of the vehicle from the input data, instead of modeling rule-based autonomous driving. Unlike many studies utilizing simulated data, we created an end-to-end autonomous driving algorithm with data obtained from real driving and analyzed the performance of our proposed algorithm. Based on data obtained from an actual urban driving environment, end-to-end autonomous driving was possible in an unstructured environment, such as at traffic signals, by predicting the vehicle control values with a convolutional neural network. In addition, this paper addresses the data imbalance problem by eliminating redundant frames recorded while stopping and driving, which improves self-driving performance. Finally, we verified through the activation map how the network predicts the longitudinal and lateral control values by recognizing the traffic facilities in the driving environment. Experiments and analysis demonstrate the validity of the proposed algorithm. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
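
The data-balancing idea — thinning out the near-identical frames logged while the vehicle is stopped — can be sketched as a simple filter; the speed threshold, keep rate, and record layout below are assumptions for illustration:

```python
def drop_redundant_frames(samples, min_speed=0.5, keep_every=10):
    """Keep every 10th frame while the vehicle is (almost) stationary,
    all frames otherwise, to reduce stop/drive class imbalance."""
    kept, run = [], 0
    for s in samples:            # s: {"image": ..., "speed": ..., "steer": ...}
        if s["speed"] < min_speed:
            run += 1
            if run % keep_every != 0:
                continue
        else:
            run = 0
        kept.append(s)
    return kept
```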

21 pages, 11829 KiB  
Article
Implementing a Gaze Tracking Algorithm for Improving Advanced Driver Assistance Systems
by Agapito Ledezma, Víctor Zamora, Óscar Sipele, M. Paz Sesmero and Araceli Sanchis
Electronics 2021, 10(12), 1480; https://doi.org/10.3390/electronics10121480 - 19 Jun 2021
Cited by 19 | Viewed by 3985
Abstract
Car accidents are one of the top ten causes of death and are produced mainly by driver distractions. ADAS (Advanced Driver Assistance Systems) can warn the driver of dangerous scenarios, improving road safety and reducing the number of traffic accidents. However, a system that continuously sounds alarms can be overwhelming or confusing, or both, and can be counterproductive. Using the driver's attention to build an efficient ADAS is the main contribution of this work. To obtain this "attention value", the use of gaze tracking is proposed. The driver's gaze direction is a crucial factor in understanding fatal distractions, as well as in discerning when it is necessary to warn the driver about risks on the road. In this paper, a real-time gaze tracking system is proposed as part of the development of an ADAS that obtains and communicates the driver's gaze information. The developed ADAS uses gaze information to determine whether the drivers are looking at the road with their full attention. This work takes a step forward in driver-centered ADAS, building a system that warns the driver only in the case of distraction. The gaze tracking system was implemented as a model-based system using a Kinect v2.0 sensor, adjusted in a set-up environment, and tested in a suitable driving simulation environment. The average results obtained are promising, with hit ratios between 81.84% and 96.37%. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
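
The attention-gated warning logic the abstract describes — warn only when a hazard is present and the driver is not already looking at the road — reduces to a small decision rule. The time-to-collision override and thresholds below are hypothetical, not from the paper:

```python
def should_warn(hazard_detected, gaze_on_road, ttc_s, ttc_critical=2.5):
    """Gate ADAS warnings on driver attention (illustrative rule)."""
    if not hazard_detected:
        return False
    if ttc_s < ttc_critical:
        return True           # imminent danger: always warn
    return not gaze_on_road   # otherwise warn only if distracted
```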

21 pages, 6369 KiB  
Article
End-to-End Deep Neural Network Architectures for Speed and Steering Wheel Angle Prediction in Autonomous Driving
by Pedro J. Navarro, Leanne Miller, Francisca Rosique, Carlos Fernández-Isla and Alberto Gila-Navarro
Electronics 2021, 10(11), 1266; https://doi.org/10.3390/electronics10111266 - 25 May 2021
Cited by 22 | Viewed by 5922
Abstract
The complex decision-making systems used for autonomous vehicles or advanced driver-assistance systems (ADAS) are being replaced by end-to-end (e2e) architectures based on deep neural networks (DNNs). DNNs can learn complex driving actions from datasets containing thousands of images and data obtained from the vehicle perception system. This work presents the classification, design, and implementation of six e2e architectures capable of generating the driving actions of speed and steering wheel angle directly on the vehicle control elements. The work details the design stages and the optimization process of the convolutional networks used to develop the six e2e architectures. In the metric analysis, the architectures were tested with different data sources from the vehicle, such as images, XYZ accelerations, and XYZ angular speeds. The best results were obtained with a mixed-data e2e architecture that used front images from the vehicle and angular speeds to predict the speed and steering wheel angle with a mean error of 1.06%. An exhaustive optimization process of the convolutional blocks demonstrated that it is possible to design lightweight e2e architectures with high performance, more suitable for final implementation in autonomous driving. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
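
The best-performing mixed-data architecture combines a front-image branch with an XYZ angular-speed branch before regressing speed and steering wheel angle. A compact PyTorch sketch with assumed layer sizes (not the paper's optimized blocks):

```python
import torch
import torch.nn as nn

class MixedE2ENet(nn.Module):
    def __init__(self):
        super().__init__()
        self.img = nn.Sequential(                  # front-image branch
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gyro = nn.Sequential(nn.Linear(3, 32), nn.ReLU())  # XYZ angular speeds
        self.head = nn.Sequential(nn.Linear(48 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 2))  # speed, steering angle

    def forward(self, image, angular_speed):
        z = torch.cat([self.img(image), self.gyro(angular_speed)], dim=1)
        return self.head(z)
```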

11 pages, 2732 KiB  
Article
Multi-Input Deep Learning Based FMCW Radar Signal Classification
by Daewoong Cha, Sohee Jeong, Minwoo Yoo, Jiyong Oh and Dongseog Han
Electronics 2021, 10(10), 1144; https://doi.org/10.3390/electronics10101144 - 12 May 2021
Cited by 18 | Viewed by 4831
Abstract
In autonomous vehicles, the emergency braking system uses lidar or radar sensors to recognize the surrounding environment and prevent accidents. Conventional deep-learning classifiers based on radar data are single-input structures using range–Doppler maps or micro-Doppler signatures. Deep learning with a single-input structure has limitations in improving classification performance. In this paper, we propose a multi-input classifier based on a convolutional neural network (CNN) to reduce the amount of computation and improve the classification performance of frequency-modulated continuous-wave (FMCW) radar. The proposed multi-input deep learning structure is CNN-based and uses a range–Doppler map and a point cloud map as multiple inputs. The classification accuracy with the range–Doppler map or the point cloud map alone is 85% and 92%, respectively; with both maps, it improves to 96%. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
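
The multi-input structure — one CNN branch per radar representation, fused before the classifier — can be sketched as follows; channel counts and the number of target classes are assumptions:

```python
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class MultiInputRadarNet(nn.Module):
    def __init__(self, num_classes=3):       # e.g. pedestrian/cyclist/car
        super().__init__()
        self.rd, self.pc = branch(), branch()
        self.fc = nn.Linear(32 + 32, num_classes)

    def forward(self, rd_map, pc_map):        # range-Doppler map, point cloud map
        return self.fc(torch.cat([self.rd(rd_map), self.pc(pc_map)], dim=1))
```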

16 pages, 6195 KiB  
Article
Augmentation of Severe Weather Impact to Far-Infrared Sensor Images to Improve Pedestrian Detection System
by Paulius Tumas, Artūras Serackis and Adam Nowosielski
Electronics 2021, 10(8), 934; https://doi.org/10.3390/electronics10080934 - 14 Apr 2021
Cited by 14 | Viewed by 3634
Abstract
Pedestrian detection is an essential task for computer vision and the automotive industry. Complex systems like advanced driver-assistance systems rely on far-infrared data sensors to detect pedestrians at night and in fog, rain, and direct sunlight. A robust pedestrian detector should work in severe weather conditions. However, only a few datasets include examples of far-infrared images with distortions caused by atmospheric precipitation and dirt covering the sensor optics. This paper proposes a deep-learning-based data augmentation technique that enriches far-infrared images collected in good weather conditions with distortions similar to those caused by bad weather. The six most accurate and fast detectors (TinyV3, TinyL3, You Only Look Once (YOLO)v3, YOLOv4, ResNet50, and ResNext50), all running faster than 15 FPS, were trained on 207,001 annotations and tested on 156,345 annotations not used for training. The proposed data augmentation technique improved pedestrian detection by up to 9.38 mean average precision (mAP) points, reaching a maximum of 87.02 mAP (YOLOv4). The detector head modifications proposed in this paper, based on a confidence heat map, gave an additional boost in precision for all six detectors. The most accurate resulting detector, based on YOLOv4, reached up to 87.20 mAP in our experimental tests. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
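
The paper learns its weather distortions with a deep model; as a crude stand-in, precipitation-like streaks and wet-optics softening can be stamped onto a clean FIR frame directly. The streak count, attenuation factor, and blur width below are arbitrary illustrative values:

```python
import numpy as np

def augment_fir_bad_weather(img, n_streaks=300, rng=None):
    """Add synthetic rain-like distortion to a 2D FIR frame (uint8)."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.astype(np.float32)
    h, w = out.shape
    for y, x in zip(rng.integers(0, h - 8, n_streaks),
                    rng.integers(0, w, n_streaks)):
        out[y:y + 8, x] *= 0.6                    # cold vertical streak
    k = np.ones(3) / 3.0                          # cheap horizontal box blur
    out = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, out)
    return np.clip(out, 0, 255).astype(img.dtype)
```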

23 pages, 92419 KiB  
Article
Virtual Scenario Simulation and Modeling Framework in Autonomous Driving Simulators
by Mingyun Wen, Jisun Park, Yunsick Sung, Yong Woon Park and Kyungeun Cho
Electronics 2021, 10(6), 694; https://doi.org/10.3390/electronics10060694 - 16 Mar 2021
Cited by 9 | Viewed by 4519
Abstract
Recently, virtual environment-based techniques to train sensor-based autonomous driving models have been widely employed due to their efficiency. However, a simulated virtual environment must be highly similar to its real-world counterpart to ensure the applicability of such models to actual autonomous vehicles. Though advances in hardware and three-dimensional graphics engine technology have enabled the creation of realistic virtual driving environments, the myriad of scenarios occurring in the real world can be simulated only to a limited extent. In this study, a scenario simulation and modeling framework that simulates the behavior of objects that may be encountered while driving is proposed to address this problem. This framework maximizes the number and types of scenarios, and enriches the driving experience, in a virtual environment. Furthermore, a simulator was implemented and employed to evaluate the performance of the proposed framework. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)

18 pages, 1924 KiB  
Article
Neural Network Based Robust Lateral Control for an Autonomous Vehicle
by Subrat Kumar Swain, Jagat J. Rath and Kalyana C. Veluvolu
Electronics 2021, 10(4), 510; https://doi.org/10.3390/electronics10040510 - 22 Feb 2021
Cited by 25 | Viewed by 4075
Abstract
The lateral motion of an Automated Vehicle (AV) is highly affected by model uncertainties and unknown external disturbances during navigation in adverse environmental conditions. Among the variety of controllers, the sliding mode controller (SMC), known for its robustness to disturbances, is considered for generating a robust control signal under uncertainty. However, the conventional SMC suffers from high-frequency oscillations, called chattering. To address chattering and reduce the effect of unknown external disturbances in the absence of precise model information, a radial basis function neural network (RBFNN) is employed to estimate the equivalent control. Further, a higher-order sliding mode (HOSM) based switching control is proposed in this paper to compensate for the effect of external disturbances. The effectiveness of the proposed controller in terms of lane-keeping and lateral stability is demonstrated through simulation in a high-fidelity CarSim-MATLAB/Simulink environment under a variety of road and environmental conditions. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
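
The control structure the abstract outlines — a learned equivalent-control term plus a smoothed switching term — can be captured in a few lines. The adaptation law, gains, and the smooth tanh substitute for sign() below are illustrative choices, not the paper's HOSM design:

```python
import numpy as np

class RBFSlidingController:
    """Sketch: an RBF network estimates the equivalent control from the
    sliding variable s; a smooth switching term handles residual disturbance."""
    def __init__(self, centers, width=1.0, k_sw=2.0, lr=0.05):
        self.c = np.asarray(centers, dtype=float)
        self.w = np.zeros(len(self.c))
        self.width, self.k_sw, self.lr = width, k_sw, lr

    def _phi(self, s):
        return np.exp(-(s - self.c) ** 2 / (2.0 * self.width ** 2))

    def control(self, s):
        phi = self._phi(s)
        u_eq = self.w @ phi                    # learned equivalent control
        u_sw = -self.k_sw * np.tanh(s / 0.1)   # smooth switching: less chattering
        self.w += self.lr * s * phi            # illustrative adaptation law
        return u_eq + u_sw
```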

12 pages, 7527 KiB  
Article
Deep Feature-Level Sensor Fusion Using Skip Connections for Real-Time Object Detection in Autonomous Driving
by Vijay John and Seiichi Mita
Electronics 2021, 10(4), 424; https://doi.org/10.3390/electronics10040424 - 9 Feb 2021
Cited by 29 | Viewed by 4450
Abstract
Object detection is an important perception task in autonomous driving and advanced driver assistance systems. The visible camera is widely used for perception, but its performance is limited by illumination and environmental variations. For robust vision-based perception, we propose a deep learning framework for effective sensor fusion of the visible camera with complementary sensors. A feature-level sensor fusion technique, using skip connections, is proposed for fusing the visible camera with the millimeter-wave radar and with the thermal camera; the two networks are called the RVNet and the TVNet, respectively. These networks have two input branches and one output branch. The input branches contain separate branches for individual sensor feature extraction, which are then fused in the output perception branch using skip connections. The RVNet and the TVNet simultaneously perform sensor-specific feature extraction, feature-level fusion, and object detection within an end-to-end framework. The proposed networks are validated against baseline algorithms on public datasets. The results show that the feature-level sensor fusion performs better than baseline early- and late-fusion frameworks. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
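
Feature-level fusion via a skip connection — extracting features per sensor, then merging them inside the detection branch — reduces to something like the sketch below, assuming the camera and complementary-sensor inputs are spatially aligned; the layer sizes and 5-class head are invented for illustration:

```python
import torch
import torch.nn as nn

class SkipFusionNet(nn.Module):
    """Two input branches; the sensor features join the camera branch
    through an additive skip connection before the output branch."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.cam = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.sen = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.out = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes))

    def forward(self, camera, sensor):                 # sensor: radar or thermal map
        fused = self.cam(camera) + self.sen(sensor)    # feature-level skip fusion
        return self.out(fused)
```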

15 pages, 9018 KiB  
Article
Hybrid Deep Learning Model Based Indoor Positioning Using Wi-Fi RSSI Heat Maps for Autonomous Applications
by Alwin Poulose and Dong Seog Han
Electronics 2021, 10(1), 2; https://doi.org/10.3390/electronics10010002 - 22 Dec 2020
Cited by 47 | Viewed by 6319
Abstract
Positioning using Wi-Fi received signal strength indication (RSSI) signals is an effective method for identifying user positions in indoor scenarios. Wi-Fi RSSI signals in an autonomous system can easily be used for vehicle tracking in underground parking. In Wi-Fi RSSI based positioning, the system estimates the signal strength from the access points (APs) to the receiver and identifies the user's indoor position. Existing Wi-Fi RSSI based positioning systems estimate user positions from raw RSSI signals obtained from the APs. These raw signals fluctuate easily and suffer interference from indoor channel conditions, which reduces localization performance. To enhance performance and reduce positioning error, we propose a hybrid deep learning model (HDLM) based indoor positioning system. The proposed system uses RSSI heat maps instead of raw RSSI signals from the APs, which yields better localization performance for Wi-Fi RSSI based positioning. Compared to existing Wi-Fi RSSI based positioning technologies such as fingerprinting, trilateration, and Wi-Fi fusion approaches, the proposed approach achieves considerably better positioning results for indoor localization. The experiment results show that the combination of a convolutional neural network and a long short-term memory network (CNN-LSTM) used in the proposed HDLM outperforms other deep learning models and gives a smaller localization error than conventional Wi-Fi RSSI based localization approaches. The analysis also shows that the proposed system can easily be implemented for autonomous applications. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
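
The hybrid model the abstract names — a CNN encoding each RSSI heat map and an LSTM modeling their sequence before regressing an (x, y) position — can be sketched as follows, with all layer sizes assumed:

```python
import torch
import torch.nn as nn

class HDLMNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-heat-map encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # (x, y) indoor position

    def forward(self, heatmaps):                   # (B, T, 1, H, W)
        b, t = heatmaps.shape[:2]
        f = self.cnn(heatmaps.flatten(0, 1)).view(b, t, 32)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])
```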

19 pages, 6776 KiB  
Article
Autonomous Vehicle Fuel Economy Optimization with Deep Reinforcement Learning
by Hyunkun Kim, Hyeongoo Pyeon, Jong Sool Park, Jin Young Hwang and Sejoon Lim
Electronics 2020, 9(11), 1911; https://doi.org/10.3390/electronics9111911 - 13 Nov 2020
Cited by 11 | Viewed by 3780
Abstract
The ever-increasing number of vehicles on the road puts pressure on car manufacturers to make their cars fuel-efficient. With autonomous vehicles, we can find new strategies to optimize fuel consumption. We propose a reinforcement learning algorithm that trains deep neural networks to generate a fuel-efficient velocity profile for autonomous vehicles, given road altitude information for the planned trip. We train our deep neural network model using a highly accurate, industry-accepted fuel economy simulation program. We developed a technique for adapting this heterogeneous simulation program on top of an open-source deep learning framework and reduced the dimension of the problem output with a suitable parameterization, allowing the neural network to be trained much faster. The learned model, combined with reinforcement-learning-based strategy generation, effectively generates velocity profiles that autonomous vehicles can follow to control themselves in a fuel-efficient way. We evaluate our algorithm's performance using the fuel economy simulation program on various altitude profiles, and demonstrate that our method can teach neural networks to generate useful strategies that increase fuel economy even on unseen roads. Our method improved fuel economy by 8% compared to a simple grid search approach. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)
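
The key engineering trick is the low-dimensional parameterization of the velocity profile, which the external fuel simulator then scores. The toy sketch below uses a cubic parameterization and a simple random search in place of the paper's deep RL; fuel_sim is a hypothetical placeholder for the industry simulator:

```python
import numpy as np

def velocity_profile(params, n=100, v_min=40.0, v_max=100.0):
    """Cubic velocity profile over the route, clipped to legal speeds
    (assumed parameterization, km/h)."""
    s = np.linspace(0.0, 1.0, n)
    v = params[0] + params[1] * s + params[2] * s**2 + params[3] * s**3
    return np.clip(v, v_min, v_max)

def optimize_profile(fuel_sim, iters=200, sigma=1.0, seed=0):
    """Random-search placeholder: fuel_sim(profile) -> litres used."""
    rng = np.random.default_rng(seed)
    best = np.array([70.0, 0.0, 0.0, 0.0])
    best_cost = fuel_sim(velocity_profile(best))
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal(4)
        cost = fuel_sim(velocity_profile(cand))
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```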
