Advances in Sensing, Imaging and Computing for Autonomous Driving

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 January 2025 | Viewed by 16706

Special Issue Editors

Dr. Wei Li
Guest Editor
Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA
Interests: secure and privacy-aware computing; Internet of Things; big data; game theory

Dr. Honglu Jiang
Guest Editor
Computer Science and Software Engineering, Miami University, Oxford, OH 45056, USA
Interests: wireless and mobile security; IoT; big data; privacy preservation

Special Issue Information

Dear Colleagues,

Autonomous driving technology has undergone profound innovation and is becoming increasingly mature, which greatly accelerates the development of the automobile industry. The success of autonomous vehicles is largely driven by high-performance perception and decision-making systems guided by the huge volumes of perceptual data collected via various onboard sensors. For instance, GPS sensors collect real-time location data with exact coordinates, radar sensors detect surrounding objects and their distance to the vehicle, behavior-related sensors monitor the in-cabin environment and record the actions of passengers, and camera sensors act as the eyes of the vehicle, perceiving the visual scene and informing driving behavior. These sensory data not only facilitate autonomous vehicles but also serve as a precious data resource for smart cities, smart transportation, and many other real-world applications.

This Special Issue solicits high-quality contributions that focus on the design and development of new technologies, algorithms, and tools to advance autonomous driving. In particular, we encourage original, high-quality submissions related to, but not limited to, one or more of the following topics:

  • Autonomous driving vehicles
  • Sensory data acquisition
  • Sensory data processing
  • Computer vision
  • Motion planning and decision making
  • Object detection, perception, and prediction
  • Attack and defense in autonomous driving
  • Safety, security, and privacy in autonomous driving
  • Anomaly detection in autonomous driving
  • Cooperative and coordinated autonomous driving 

Dr. Wei Li
Dr. Honglu Jiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous driving vehicles
  • sensory data acquisition
  • sensory data processing
  • computer vision
  • motion planning and decision making
  • object detection, perception, and prediction
  • attack and defense in autonomous driving
  • safety, security, and privacy in autonomous driving
  • anomaly detection in autonomous driving
  • cooperative and coordinated autonomous driving

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

17 pages, 4030 KiB  
Article
Enhancing Autonomous Vehicle Decision-Making at Intersections in Mixed-Autonomy Traffic: A Comparative Study Using an Explainable Classifier
by Erika Ziraldo, Megan Emily Govers and Michele Oliver
Sensors 2024, 24(12), 3859; https://doi.org/10.3390/s24123859 - 14 Jun 2024
Viewed by 659
Abstract
The transition to fully autonomous roadways will include a long period of mixed-autonomy traffic. Mixed-autonomy roadways pose a challenge for autonomous vehicles (AVs), which use conservative driving behaviours to safely negotiate complex scenarios. This can lead to congestion and collisions with human drivers who are accustomed to more confident driving styles. In this work, an explainable multi-variate time series classifier, Time Series Forest (TSF), is compared to two state-of-the-art models in a priority-taking classification task. Responses to left-turning hazards at signalized and stop-sign-controlled intersections were collected using a full-vehicle driving simulator. The dataset comprised a combination of AV sensor-collected and V2V (vehicle-to-vehicle) transmitted features. Each scenario forced participants to either take (“go”) or yield (“no go”) priority at the intersection. TSF performed comparably for both the signalized and sign-controlled datasets, although all classifiers performed better on the signalized dataset. The inclusion of V2V data led to a slight increase in accuracy for all models and a substantial increase in the true positive rate of the stop-sign-controlled models. Additionally, incorporating the V2V data resulted in fewer chosen features, thereby decreasing the model complexity while maintaining accuracy. Including the selected features in an AV planning model is hypothesized to reduce the need for conservative AV driving behaviour without increasing the risk of collision.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
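
For readers unfamiliar with the classifier, the sketch below illustrates the interval-based idea behind Time Series Forest: simple summary statistics (mean, standard deviation, slope) are extracted over random intervals of each series and classified with a random forest. The series length, interval count, and toy data are assumptions for illustration, not details from the paper.

```python
# Illustrative TSF-style pipeline: random-interval summary features
# (mean, std, slope) classified by a random forest. All shapes and
# hyperparameters here are assumptions for the sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_intervals(length, n_intervals=16, min_len=3):
    """Sample fixed random intervals once so train/test features align."""
    intervals = []
    for _ in range(n_intervals):
        start = int(rng.integers(0, length - min_len))
        end = int(rng.integers(start + min_len, length + 1))
        intervals.append((start, end))
    return intervals

def interval_features(X, intervals):
    """X: (n_samples, series_length) -> (n_samples, 3 * n_intervals)."""
    feats = []
    for start, end in intervals:
        seg = X[:, start:end]
        t = np.arange(end - start)
        slope = np.polyfit(t, seg.T, 1)[0]      # per-sample linear trend
        feats += [seg.mean(axis=1), seg.std(axis=1), slope]
    return np.column_stack(feats)

# Toy stand-in for one sensor channel (e.g., approach speed over time);
# label 1 = "go" (take priority), 0 = "no go" (yield).
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)

intervals = make_intervals(length=200)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(interval_features(X, intervals), y)
```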

22 pages, 4887 KiB  
Article
Visual Detection of Road Cracks for Autonomous Vehicles Based on Deep Learning
by Ibrahim Meftah, Junping Hu, Mohammed A. Asham, Asma Meftah, Li Zhen and Ruihuan Wu
Sensors 2024, 24(5), 1647; https://doi.org/10.3390/s24051647 - 3 Mar 2024
Cited by 4 | Viewed by 2629
Abstract
Detecting road cracks is essential for inspecting and assessing the integrity of concrete pavement structures. Traditional image-based methods often require complex preprocessing to extract crack features, making them challenging when dealing with noisy concrete surfaces in diverse real-world scenarios, such as autonomous vehicle road detection. This study introduces an image-based crack detection approach that combines a Random Forest machine learning classifier with a deep convolutional neural network (CNN) to address these challenges. Three state-of-the-art models, namely MobileNet, InceptionV3, and Xception, were employed and trained using a dataset of 30,000 images to build an effective CNN. A systematic comparison of validation accuracy across various base learning rates identified a base learning rate of 0.001 as optimal, achieving a maximum validation accuracy of 99.97%. This optimal learning rate was then applied in the subsequent testing phase. The robustness and flexibility of the trained models were evaluated using 6,000 test photos, each with a resolution of 224 × 224 pixels, which were not part of the training or validation sets. The results, with 99.95% accuracy, 99.95% precision, 99.94% recall, and a 99.94% F1 score, affirm the efficacy of the proposed technique in precisely identifying road cracks in photographs of real concrete surfaces.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
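
As a concrete illustration of the transfer-learning recipe the abstract describes, the sketch below builds a MobileNet-based binary crack classifier in Keras at the reported base learning rate of 0.001. The dataset path, directory layout, batch size, and epoch count are assumptions for the sketch, not details from the paper.

```python
# Sketch of a MobileNet transfer-learning setup for crack/no-crack
# classification at a base learning rate of 0.001. Paths and training
# settings are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)  # matches the 224 x 224 test images in the paper

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the ImageNet features for this stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # crack vs. no crack
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# Hypothetical directory layout: crack_dataset/{crack,no_crack}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "crack_dataset", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=10)
```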

18 pages, 4175 KiB  
Article
Semantic and Geometric-Aware Day-to-Night Image Translation Network
by Geonkyu Bang, Jinho Lee, Yuki Endo, Toshiaki Nishimori, Kenta Nakao and Shunsuke Kamijo
Sensors 2024, 24(4), 1339; https://doi.org/10.3390/s24041339 - 19 Feb 2024
Cited by 1 | Viewed by 1622
Abstract
Autonomous driving systems heavily depend on perception tasks for optimal performance. However, the prevailing datasets are primarily focused on scenarios with clear visibility (i.e., sunny and daytime). This concentration poses challenges in training deep-learning-based perception models for environments with adverse conditions (e.g., rainy and nighttime). In this paper, we propose an unsupervised network designed for the translation of images from day to night, to solve the ill-posed problem of learning the mapping between domains with unpaired data. The proposed method involves extracting both semantic and geometric information from input images in the form of attention maps. We assume that the multi-task network can extract semantic and geometric information during the estimation of semantic segmentation and depth maps, respectively. The image-to-image translation network integrates the two distinct types of extracted information, employing them as spatial attention maps. We compare our method with related works both qualitatively and quantitatively; the proposed method shows improvements in visual quality over related work.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
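
A minimal sketch of how semantic and geometric cues can act as spatial attention over translation features is shown below; the module layout, channel counts, and names are assumptions for illustration, not the authors' exact architecture.

```python
# Assumed fusion module: semantic logits and a depth map are each
# squeezed into a single-channel spatial attention map that reweights
# the translator's feature maps.
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    """Reweight translation features with semantic and geometric cues."""
    def __init__(self, sem_ch, geo_ch=1):
        super().__init__()
        # 1x1 convolutions squeeze each cue to one spatial attention map
        self.sem_att = nn.Sequential(nn.Conv2d(sem_ch, 1, 1), nn.Sigmoid())
        self.geo_att = nn.Sequential(nn.Conv2d(geo_ch, 1, 1), nn.Sigmoid())

    def forward(self, feats, sem_logits, depth):
        a_sem = self.sem_att(sem_logits)    # (B, 1, H, W), values in [0, 1]
        a_geo = self.geo_att(depth)         # (B, 1, H, W), values in [0, 1]
        # residual modulation keeps gradients stable when attention is small
        return feats + feats * a_sem * a_geo

feats = torch.randn(2, 64, 32, 32)          # translator feature maps
sem = torch.randn(2, 19, 32, 32)            # e.g., 19 segmentation classes
depth = torch.randn(2, 1, 32, 32)           # predicted depth map
out = SpatialAttentionFusion(sem_ch=19)(feats, sem, depth)
```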

17 pages, 5864 KiB  
Article
Deep Reinforcement Learning for Autonomous Driving with an Auxiliary Actor Discriminator
by Qiming Gao, Fangle Chang, Jiahong Yang, Yu Tao, Longhua Ma and Hongye Su
Sensors 2024, 24(2), 700; https://doi.org/10.3390/s24020700 - 22 Jan 2024
Cited by 1 | Viewed by 1618
Abstract
In the research of robot systems, path planning and obstacle avoidance are important research directions, especially in unknown dynamic environments where flexibility and rapid decision making are required. In this paper, a state attention network (SAN) was developed to extract features that represent the interaction between an intelligent robot and its obstacles. An auxiliary actor discriminator (AAD) was developed to calculate the probability of a collision. Goal-directed and gap-based navigation strategies were proposed to guide robotic exploration. The proposed policy was trained in simulated scenarios and updated by the Soft Actor-Critic (SAC) algorithm. The robot executed the action depending on the AAD output. Heuristic knowledge (HK) was developed to prevent blind exploration by the robot. Compared to other methods, adopting our approach in robot systems can help robots converge towards an optimal action strategy. Furthermore, it enables them to explore paths in unknown environments with fewer moving steps (a decrease of 33.9%) and achieve higher average rewards (an increase of 29.15%).
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
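
The sketch below illustrates the gating idea an auxiliary actor discriminator enables: a small network estimates the collision probability of a (state, action) pair, and the action executes only when that estimate is below a threshold. The interface, network sizes, and zero-action fallback are assumptions; the SAC training loop is omitted.

```python
# Assumed interface for an auxiliary actor discriminator (AAD) that
# gates action execution on an estimated collision probability.
import torch
import torch.nn as nn

class ActorDiscriminator(nn.Module):
    """Scores a (state, action) pair with an estimated collision probability."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # output: P(collision)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def gated_action(actor, aad, state, threshold=0.5):
    """Execute the actor's action only if the predicted collision risk is low."""
    action = actor(state)
    p_collision = aad(state, action)
    safe = (p_collision < threshold).float()
    # fall back to a zero (stop) action when the discriminator flags risk
    return safe * action

actor = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
aad = ActorDiscriminator(state_dim=8, action_dim=2)
action = gated_action(actor, aad, torch.randn(1, 8))
```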

12 pages, 2258 KiB  
Article
Embedded Processing for Extended Depth of Field Imaging Systems: From Infinite Impulse Response Wiener Filter to Learned Deconvolution
by Alice Fontbonne, Pauline Trouvé-Peloux, Frédéric Champagnat, Gabriel Jobert and Guillaume Druart
Sensors 2023, 23(23), 9462; https://doi.org/10.3390/s23239462 - 28 Nov 2023
Cited by 1 | Viewed by 1022
Abstract
Many works in the state of the art address increasing the camera depth of field (DoF) via the joint optimization of an optical component (typically a phase mask) and a digital processing step with an infinite deconvolution support or a neural network. This can be used either to see sharp objects from a greater distance or to reduce manufacturing costs due to tolerance regarding the sensor position. Here, we study the case of embedded processing with only one convolution with a finite kernel size. The finite impulse response (FIR) filter coefficients are learned or computed based on a Wiener filter paradigm. It involves an optical model typical of codesigned systems for DoF extension and a scene power spectral density, which is either learned or modeled. We compare different FIR filters and present a method for dimensioning their sizes prior to a joint optimization. We also show that, among the filters compared, the learning approach enables an easy adaptation to a database, while the other approaches are equally robust.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
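
To make the Wiener paradigm concrete, the 1-D sketch below computes the classical frequency-domain Wiener deconvolution filter W = conj(H) * Sxx / (|H|^2 * Sxx + Snn) and truncates its impulse response to a small FIR kernel. The Gaussian OTF, 1/f^2 scene PSD, white-noise PSD, and tap count are illustrative assumptions, not the authors' models.

```python
# 1-D sketch of the FIR/Wiener idea: build the Wiener deconvolution
# filter in the frequency domain, then truncate its impulse response
# to a finite number of taps for embedded convolution.
import numpy as np

N = 256
freqs = np.fft.fftfreq(N)

# Assumed models: Gaussian blur OTF, 1/f^2 scene PSD, white-noise PSD.
H = np.exp(-(freqs * 20.0) ** 2)            # optical transfer function
Sxx = 1.0 / (np.abs(freqs) ** 2 + 1e-3)     # scene power spectral density
Snn = 1e-2 * np.ones(N)                     # noise power spectral density

W = np.conj(H) * Sxx / (np.abs(H) ** 2 * Sxx + Snn)

# Impulse response of the ideal (infinite-support) Wiener filter ...
w = np.fft.fftshift(np.real(np.fft.ifft(W)))

# ... truncated to a small FIR kernel around its peak for embedded use.
taps = 15
center = N // 2
fir = w[center - taps // 2 : center + taps // 2 + 1]
fir /= fir.sum()    # restore unit DC gain after truncation
```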

17 pages, 7637 KiB  
Article
DRGAN: Dense Residual Generative Adversarial Network for Image Enhancement in an Underwater Autonomous Driving Device
by Jin Qian, Hui Li, Bin Zhang, Sen Lin and Xiaoshuang Xing
Sensors 2023, 23(19), 8297; https://doi.org/10.3390/s23198297 - 7 Oct 2023
Cited by 1 | Viewed by 1450
Abstract
Underwater autonomous driving devices, such as autonomous underwater vehicles (AUVs), rely on visual sensors, but visual images tend to suffer from color aberrations and high turbidity due to the scattering and absorption of underwater light. To address these issues, we propose the Dense Residual Generative Adversarial Network (DRGAN) for underwater image enhancement. Firstly, we adopt a multi-scale feature extraction module to obtain a range of information and increase the receptive field. Secondly, a dense residual block is proposed to realize the interaction of image features and ensure stable connections in the feature information. Multiple dense residual modules are connected from beginning to end to form a cyclic dense residual network, producing a clear image. Finally, the stability of the network is improved by adjusting the training with multiple loss functions. Experiments were conducted using the RUIE and Underwater ImageNet datasets. The experimental results show that our proposed DRGAN can remove high turbidity from underwater images and achieve better color equalization than other methods.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
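
A minimal sketch of a dense residual block in the general pattern the abstract describes is shown below: each convolution sees the concatenation of all earlier feature maps, and a skip connection adds the block input back to the fused output. Channel counts and depth are illustrative assumptions, not the paper's exact configuration.

```python
# Assumed dense residual block: dense connectivity inside the block,
# a 1x1 fusion conv, and an additive skip from input to output.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch += growth                  # dense connectivity widens the input
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # squeeze back to `channels`

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection

x = torch.randn(1, 64, 128, 128)
y = DenseResidualBlock()(x)                 # same shape as x
```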

Review

19 pages, 6394 KiB  
Review
Realistic 3D Simulators for Automotive: A Review of Main Applications and Features
by Ivo Silva, Hélder Silva, Fabricio Botelho and Cristiano Pendão
Sensors 2024, 24(18), 5880; https://doi.org/10.3390/s24185880 - 10 Sep 2024
Viewed by 918
Abstract
Recent advancements in vehicle technology have stimulated innovation across the automotive sector, from Advanced Driver Assistance Systems (ADAS) to autonomous driving and motorsport applications. Modern vehicles, equipped with sensors for perception, localization, navigation, and actuators for autonomous driving, generate vast amounts of data used for training and evaluating autonomous systems. Real-world testing is essential for validation but is complex, expensive, and time-intensive, requiring multiple vehicles and reference systems. To address these challenges, computer graphics-based simulators offer a compelling solution by providing high-fidelity 3D environments to simulate vehicles and road users. These simulators are crucial for developing, validating, and testing ADAS, autonomous driving systems, and cooperative driving systems, and enhancing vehicle performance and driver training in motorsport. This paper reviews computer graphics-based simulators tailored for automotive applications. It begins with an overview of their applications and analyzes their key features. Additionally, this paper compares five open-source (CARLA, AirSim, LGSVL, AWSIM, and DeepDrive) and ten commercial simulators. Our findings indicate that open-source simulators are best for the research community, offering realistic 3D environments, multiple sensor support, APIs, co-simulation, and community support. Conversely, commercial simulators, while less extensible, provide a broader set of features and solutions.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
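
As an example of the open-source APIs the review compares, the sketch below uses the CARLA Python client to spawn a vehicle on autopilot and attach an RGB camera. It assumes a CARLA server is already running on localhost:2000; the blueprint choice, spawn point, and output path are arbitrary.

```python
# Minimal CARLA client sketch: connect to a running server, spawn a
# vehicle on autopilot, and stream RGB camera frames to disk.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]          # any vehicle blueprint
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Attach a forward-facing RGB camera roughly at windshield height.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```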

25 pages, 4396 KiB  
Review
Object Detection Based on Roadside LiDAR for Cooperative Driving Automation: A Review
by Pengpeng Sun, Chenghao Sun, Runmin Wang and Xiangmo Zhao
Sensors 2022, 22(23), 9316; https://doi.org/10.3390/s22239316 - 30 Nov 2022
Cited by 15 | Viewed by 5625
Abstract
Light Detection and Ranging (LiDAR) technology has the advantages of high detection accuracy, a wide perception range, and insensitivity to lighting conditions. When a 3D LiDAR is placed at a commanding height in the traffic scene, the overall situation can be grasped from a top-down perspective and the trajectory of each object in the scene can be accurately perceived in real time; the object information can then be distributed to surrounding vehicles or other roadside LiDARs through advanced wireless communication equipment, which can significantly improve the local perception ability of an autonomous vehicle. This paper first describes the characteristics of roadside LiDAR and the challenges of object detection, and then reviews in detail the current methods of object detection based on a single roadside LiDAR and on multi-LiDAR cooperation. Then, studies of roadside LiDAR perception in adverse weather and datasets released in recent years are introduced. Finally, current open challenges and future work for roadside LiDAR perception are discussed. To the best of our knowledge, this is the first work to systematically study roadside LiDAR perception methods and datasets. It has an important guiding role in further promoting research on roadside LiDAR perception for practical applications.
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
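
As a minimal sketch of the background-filtering-plus-clustering pipeline common in the roadside LiDAR literature the review surveys (not any specific paper's method), the code below removes points near a pre-recorded static background scan and clusters the remaining foreground into object candidates with DBSCAN; all thresholds are illustrative.

```python
# Illustrative roadside-LiDAR detection: background subtraction against
# an empty-scene scan, then Euclidean clustering of foreground points.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def detect_objects(scan, background, bg_dist=0.3, eps=0.8, min_points=10):
    """scan, background: (N, 3) and (M, 3) arrays of x, y, z points."""
    # 1. Background filtering: drop points near any static background point.
    tree = cKDTree(background)
    dist, _ = tree.query(scan, k=1)
    foreground = scan[dist > bg_dist]

    # 2. Cluster the remaining (moving) points into object candidates.
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(foreground)
    return [foreground[labels == k] for k in set(labels) if k != -1]

# Toy example: a flat static background plus one dense cluster above it.
rng = np.random.default_rng(0)
background = rng.uniform(-50, 50, size=(5000, 3)) * np.array([1, 1, 0])
scan = np.vstack([background + rng.normal(0, 0.05, background.shape),
                  rng.normal([10, 5, 1], 0.3, size=(200, 3))])
objects = detect_objects(scan, background)
print(f"{len(objects)} object cluster(s) detected")
```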