
Sensors and Machine Learning for Robotic (Self-Driving) Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 31204

Special Issue Editors


Dr. Md Nazmul Huda
Guest Editor
Department of Electronic and Computer Engineering, College of Engineering, Design and Physical Sciences, Brunel University London, Kingston Ln, Uxbridge UB8 3PH, UK
Interests: robotics; control systems; mobile robot; capsule robot; capsule endoscopy; artificial intelligence; deep learning; sensor fusion; robotic (self-driving) cars; search and rescue robot; pipe inspection

Dr. Tatiana Kalganova
Guest Editor
Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
Interests: evolutionary design and optimization; evolvable hardware; modelling and optimization of large systems; operational research; robotics; swarm optimization

Special Issue Information

Dear colleagues,

With recent advances in self-driving technology, it is only a matter of time before autonomous vehicles are used on public roads. Before this technology is widely adopted, it is vital to ensure the safety of other road users so as to prevent road-traffic-related incidents. These road users include pedestrians, cyclists, motorcyclists, and other vehicle users; of these, pedestrians and cyclists are classed as vulnerable road users (VRUs), and pedestrian and cyclist detection has therefore received significant attention. It is thus pivotal that other road users, and especially VRUs, are afforded a high level of safety while self-driving vehicles operate on public roads. More recently, new machine learning algorithms, deep learning in particular, have been employed to provide unprecedented levels of performance. With such improvements, robotic vehicles can move closer to becoming fully autonomous, creating safer public roads for all road users.

To address this task, robotic vehicles need to be equipped with sensor networks (i.e., networks of interconnected sensors) to perceive their immediate surroundings. In this way, the robot can determine the safest path to follow with respect to the safety of road users. This provides a high level of safety as well as efficiency and comfort, as harsh braking and acceleration are limited. Machine learning algorithms can be employed to learn from the output of the sensor network so as to detect objects and predict their future intentions. By combining various sensors (e.g., visual, thermal IR, and LiDAR) with effective machine learning methods, a high level of safety for road users can be achieved. Certain sensors, such as thermal sensors, have become more accessible because of decreasing costs, allowing further research to be conducted into sensor fusion.
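
For a concrete flavour of such decision-level sensor fusion, the following Python sketch combines detections of the same object from visual, thermal, and LiDAR pipelines by a weighted average. The `Detection` structure, the sensor weights, and the numbers are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's detection of an object: class label, score, 2D position."""
    label: str         # e.g., "pedestrian" or "cyclist"
    confidence: float  # detector score in [0, 1]
    x: float           # position estimate in the vehicle frame (metres)
    y: float

# Illustrative per-sensor reliability weights (assumed, not calibrated).
SENSOR_WEIGHTS = {"visual": 0.4, "thermal": 0.3, "lidar": 0.3}

def fuse(per_sensor: dict) -> Detection:
    """Weighted late fusion of per-sensor detections of the same object."""
    total = sum(SENSOR_WEIGHTS[s] for s in per_sensor)
    avg = lambda attr: sum(
        SENSOR_WEIGHTS[s] * getattr(d, attr) for s, d in per_sensor.items()) / total
    label = next(iter(per_sensor.values())).label
    return Detection(label, avg("confidence"), avg("x"), avg("y"))

fused = fuse({
    "visual":  Detection("pedestrian", 0.70, 12.1, -1.9),
    "thermal": Detection("pedestrian", 0.90, 12.3, -2.0),  # robust at night
    "lidar":   Detection("pedestrian", 0.85, 12.2, -2.1),
})
print(fused)  # fused confidence = 0.805
```

A real system would gate such fusion on data association and calibrated sensor uncertainties rather than fixed weights.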

The aim of this Special Issue is to present the current state-of-the-art in machine learning methods and sensor systems used in robotic vehicles (both urban and non-urban). This Special Issue focuses on the following areas for contribution (but is not limited to them):

  • Robotic (self-driving) vehicles
  • Sensor systems in autonomous vehicles
  • Design of sensor networks
  • Sensor data processing
  • Machine learning/deep learning for sensor data
  • Robotic environmental interactions
  • Application of sensors for robotics
  • Sensor data fusion
  • Robotic (self-driving) vehicle safety
  • Robotic (self-driving) vehicle efficiency

Dr. Md Nazmul Huda
Dr. Tatiana Kalganova
Prof. Dr. Vasile Palade
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • Sensor network
  • Machine learning algorithms
  • Deep learning
  • Artificial intelligence
  • Robotic (self-driving) vehicles
  • Sensors for vision, thermal, and range applications
  • Intelligent sensor networks
  • Intelligent vehicle
  • Vision sensors
  • Range sensors
  • LIDAR
  • Thermal camera
 
 

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


18 pages, 4944 KiB  
Article
Rapid Localization and Mapping Method Based on Adaptive Particle Filters
by Anas Charroud, Karim El Moutaouakil, Ali Yahyaouy, Uche Onyekpe, Vasile Palade and Md Nazmul Huda
Sensors 2022, 22(23), 9439; https://doi.org/10.3390/s22239439 - 2 Dec 2022
Cited by 5 | Viewed by 1750
Abstract
With the development of autonomous vehicles, localization and mapping technologies have become crucial to equip the vehicle with the appropriate knowledge for its operation. In this paper, we extend our previous work by proposing a localization and mapping architecture for autonomous vehicles that does not rely on GPS, particularly in environments such as tunnels, under bridges, urban canyons, and dense tree canopies. The proposed approach consists of two parts. Firstly, a K-means algorithm is employed to extract features from LiDAR scenes to create a local map of each scan. Then, we concatenate the local maps to create a global map of the environment and facilitate data association between frames. Secondly, the main localization task is performed by an adaptive particle filter that works in four steps: (a) generation of particles around an initial state (provided by the GPS); (b) updating the particle positions by providing the motion (translation and rotation) of the vehicle using an inertial measurement device; (c) selection of the best candidate particles by observing at each timestamp the match rate (also called particle weight) of the local map (with the real-time distances to the objects) and the distances of the particles to the corresponding chunks of the global map; (d) averaging the selected particles to derive the estimated position and, finally, using a resampling method on the particles to ensure the reliability of the position estimation. The performance of the newly proposed technique is investigated on different sequences of the KITTI and PandaSet raw data with different environmental setups, weather conditions, and seasonal changes. The obtained results validate the performance of the proposed approach in terms of speed and representativeness of the feature extraction for real-time localization, in comparison with other state-of-the-art methods.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
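
As an illustration of the four-step particle filter described in the abstract, here is a minimal NumPy sketch. It assumes the correspondences between local and global map features are already known, and it omits rotation and the adaptive elements; the function name, noise parameters, and synthetic data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, motion, local_feats, global_feats, motion_noise=0.1):
    """One localisation step: (b) motion update, (c) weighting, (d) estimate
    and resampling. Step (a), initialisation around a GPS fix, is shown below.

    particles:    (N, 2) candidate (x, y) positions
    motion:       (2,)   translation reported by the inertial unit
    local_feats:  (M, 2) feature positions from the current LiDAR scan
                         (e.g., K-means centroids), in the vehicle frame
    global_feats: (M, 2) the corresponding chunk of the global map
    """
    # (b) propagate every particle with the IMU motion plus process noise
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)

    # (c) weight each particle by how well the local map, shifted to that
    # particle, matches the corresponding chunk of the global map
    weights = np.empty(len(particles))
    for i, p in enumerate(particles):
        mismatch = np.linalg.norm(local_feats + p - global_feats, axis=1).mean()
        weights[i] = np.exp(-mismatch**2)
    weights /= weights.sum()

    # (d) weighted average gives the pose estimate; resampling keeps the
    # particle set concentrated on likely poses
    estimate = weights @ particles
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate

# (a) initialise particles around a (noisy) GPS fix, then run one step
true_pose = np.array([101.0, 50.5])
global_feats = rng.uniform(80.0, 120.0, (10, 2))
local_feats = global_feats - true_pose         # assumes known correspondences
particles = np.array([100.0, 50.0]) + rng.normal(0.0, 1.0, (500, 2))
particles, est = pf_step(particles, np.zeros(2), local_feats, global_feats)
print("estimated pose:", est)                  # close to true_pose
```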

19 pages, 12695 KiB  
Article
LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors
by Weichen Dai, Shenzhou Chen, Zhaoyang Huang, Yan Xu and Da Kong
Sensors 2022, 22(19), 7533; https://doi.org/10.3390/s22197533 - 4 Oct 2022
Cited by 4 | Viewed by 4244
Abstract
Light Detection and Ranging (LiDAR) systems are novel sensors that provide robust distance and reflection-strength measurements via active pulsed laser beams. They have significant advantages over visual cameras by providing active depth and intensity measurements that are robust to ambient illumination. However, intensity measurements still receive limited attention, since the output intensity maps of LiDAR sensors differ from those of conventional cameras and are too sparse. In this work, we propose exploiting the information from both intensity and depth measurements simultaneously to complete the LiDAR intensity maps. With the completed intensity maps, mature computer vision techniques can work well on the LiDAR data without any specific adjustment. We propose an end-to-end convolutional neural network named LiDAR-Net to jointly complete the sparse intensity and depth measurements by exploiting their correlations. For network training, an intensity fusion method is proposed to generate the ground truth. Experimental results indicate that intensity-depth fusion can benefit the task and improve performance. We further apply an off-the-shelf object (lane) segmentation algorithm to the completed intensity maps, which delivers performance that is consistently robust to ambient illumination. We believe that the intensity completion method allows LiDAR sensors to cope with a broader range of practical applications.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
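
The LiDAR-Net architecture itself is not reproduced here, but the following PyTorch sketch illustrates the core idea the abstract describes: jointly consuming sparse intensity and depth channels to predict a dense intensity map. The layer sizes, map resolution, and the `IntensityCompletionNet` name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntensityCompletionNet(nn.Module):
    """Toy stand-in for LiDAR-Net: encode sparse intensity + depth jointly,
    decode a dense intensity map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, sparse_intensity, sparse_depth):
        # exploit the intensity-depth correlation via a joint 2-channel input
        x = torch.cat([sparse_intensity, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))

net = IntensityCompletionNet()
intensity = torch.zeros(1, 1, 64, 256)  # mostly-empty projected LiDAR maps
depth = torch.zeros(1, 1, 64, 256)
dense = net(intensity, depth)           # dense prediction, shape (1, 1, 64, 256)
```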

28 pages, 12214 KiB  
Article
Centralised and Decentralised Sensor Fusion-Based Emergency Brake Assist
by Ankur Deo, Vasile Palade and Md. Nazmul Huda
Sensors 2021, 21(16), 5422; https://doi.org/10.3390/s21165422 - 11 Aug 2021
Cited by 8 | Viewed by 4176
Abstract
Many advanced driver assistance systems (ADAS) are currently trying to utilise multi-sensor architectures, where the driver assistance algorithm receives data from a multitude of sensors. As mono-sensor systems cannot provide reliable and consistent readings under all circumstances because of errors and other limitations, fusing data from multiple sensors ensures that the environmental parameters are perceived correctly and reliably for most scenarios, thereby substantially improving the reliability of multi-sensor-based automotive systems. This paper first highlights the significance of efficiently fusing data from multiple sensors in ADAS features. An emergency brake assist (EBA) system is showcased using multiple sensors, namely, a light detection and ranging (LiDAR) sensor and a camera. The architectures of the proposed ‘centralised’ and ‘decentralised’ sensor fusion approaches for EBA are discussed along with their constituents, i.e., the detection algorithms, the fusion algorithm, and the tracking algorithm. The centralised and decentralised architectures are built and analytically compared, and the performance of these two fusion architectures for EBA is evaluated in terms of speed of execution, accuracy, and computational cost. While both fusion methods are seen to drive the EBA application at an acceptable frame rate (~20 fps or higher) on an Intel i5-based Ubuntu system, it was concluded through the experiments and analytical comparisons that the decentralised fusion-driven EBA leads to higher accuracy; however, it has the downside of a higher computational cost. The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and a lower computational cost.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
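
The structural difference between the two architectures can be sketched as follows in Python. The placeholder detectors, the equal fusion weights, and the exponential-smoothing "tracker" standing in for a proper tracking filter are all assumptions made for illustration.

```python
import numpy as np

def detect_camera(frame):
    """Placeholder camera detector: returns an object position (metres)."""
    return np.array([12.0, -2.0])

def detect_lidar(cloud):
    """Placeholder LiDAR detector."""
    return np.array([12.4, -2.2])

def track(measurement, state, gain=0.5):
    """Toy tracker: exponential smoothing stands in for a Kalman-style filter."""
    return state + gain * (measurement - state)

def centralised_step(frame, cloud, state):
    """Centralised: fuse the raw detections first, then run ONE tracker."""
    fused = 0.5 * detect_camera(frame) + 0.5 * detect_lidar(cloud)
    return track(fused, state)

def decentralised_step(frame, cloud, cam_state, lidar_state):
    """Decentralised: one tracker PER sensor, then fuse the track estimates."""
    cam_state = track(detect_camera(frame), cam_state)
    lidar_state = track(detect_lidar(cloud), lidar_state)
    fused = 0.5 * cam_state + 0.5 * lidar_state
    return cam_state, lidar_state, fused

state = centralised_step(None, None, np.zeros(2))
cam_s, lid_s, fused = decentralised_step(None, None, np.zeros(2), np.zeros(2))
```

The extra per-sensor trackers in the decentralised variant are where its higher computational cost comes from, consistent with the trade-off reported in the abstract.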

20 pages, 5775 KiB  
Article
A Brain-Inspired Decision-Making Linear Neural Network and Its Application in Automatic Drive
by Tianjun Sun, Zhenhai Gao, Fei Gao, Tianyao Zhang, Siyan Chen and Kehan Zhao
Sensors 2021, 21(3), 794; https://doi.org/10.3390/s21030794 - 25 Jan 2021
Cited by 7 | Viewed by 3055
Abstract
Brain-like intelligent decision-making is a prevailing trend in today’s world. Inspired by bionics and computer science, the linear neural network has become one of the main means to realize human-like decision-making and control. This paper proposes a method for classifying drivers’ driving behaviors based on a fuzzy algorithm and establishes a brain-inspired decision-making linear neural network. Firstly, different driver experimental data samples were obtained through the driving simulator. Then, an objective fuzzy classification algorithm was designed to distinguish different driving behaviors based on the experimental data. In addition, a brain-inspired linear neural network was established to realize human-like decision-making and control. Finally, the accuracy of the proposed method was verified by training and testing. This study extracts the driving characteristics of drivers through driving simulator tests, which provides a driving behavior reference for the human-like decision-making of an intelligent vehicle.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
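
As a rough illustration of fuzzifying driving features into behaviour classes, here is a toy Python sketch. The two input features, the triangular membership functions, and all parameter values are invented for the example and are not taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peaking at 1 when x == b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def classify_driver(mean_headway_s, accel_std):
    """Fuzzify two driving features and return the strongest behaviour class."""
    scores = {
        "aggressive": min(tri(mean_headway_s, 0.0, 0.8, 1.5), tri(accel_std, 1.0, 2.0, 3.0)),
        "moderate":   min(tri(mean_headway_s, 1.0, 1.8, 2.6), tri(accel_std, 0.5, 1.2, 2.0)),
        "cautious":   min(tri(mean_headway_s, 2.0, 3.0, 5.0), tri(accel_std, 0.0, 0.4, 1.0)),
    }
    return max(scores, key=scores.get), scores

label, scores = classify_driver(mean_headway_s=0.9, accel_std=2.1)
print(label)  # "aggressive": short headway and jerky acceleration dominate
```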

Review


19 pages, 9789 KiB  
Review
A Critical Review of Deep Learning-Based Multi-Sensor Fusion Techniques
by Benedict Marsh, Abdul Hamid Sadka and Hamid Bahai
Sensors 2022, 22(23), 9364; https://doi.org/10.3390/s22239364 - 1 Dec 2022
Cited by 9 | Viewed by 5542
Abstract
In this review, we provide detailed coverage of multi-sensor fusion techniques that use RGB stereo images and a sparse LiDAR-projected depth map as input data to output a dense depth map prediction. We cover state-of-the-art fusion techniques which, in recent years, have been deep learning-based methods that are end-to-end trainable. We then conduct a comparative evaluation of the state-of-the-art techniques and provide a detailed analysis of their strengths and limitations, as well as the applications they are best suited for.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
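
For readers unfamiliar with the input representation these fusion methods consume, the following NumPy sketch shows how a sparse LiDAR-projected depth map can be formed from points already transformed into the camera frame. The intrinsic matrix and the points are synthetic, and collisions (multiple points per pixel) are resolved naively.

```python
import numpy as np

def lidar_to_sparse_depth(points_cam, K, image_size):
    """Project LiDAR points (already in the camera frame) into a sparse depth
    map with pinhole intrinsics K; pixels with no LiDAR return stay 0."""
    h, w = image_size
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    uvw = pts @ K.T                          # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = pts[ok, 2]         # depth = z in the camera frame
    return depth

K = np.array([[700.0, 0.0, 320.0],           # synthetic pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 25.0]])
sparse = lidar_to_sparse_depth(points, K, (480, 640))
print(np.count_nonzero(sparse), "of", sparse.size, "pixels carry depth")
```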

47 pages, 8830 KiB  
Review
Pedestrian and Vehicle Detection in Autonomous Vehicle Perception Systems—A Review
by Luiz G. Galvao, Maysam Abbod, Tatiana Kalganova, Vasile Palade and Md Nazmul Huda
Sensors 2021, 21(21), 7267; https://doi.org/10.3390/s21217267 - 31 Oct 2021
Cited by 28 | Viewed by 10087
Abstract
Autonomous Vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion, and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to safely navigate in busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system. AV perception systems need to accurately detect non-static objects and predict their behaviour, as well as detect static objects and recognise the information they are providing. This paper, in particular, focuses on the computer vision techniques used to detect pedestrians and vehicles. There have been many papers and reviews on pedestrian and vehicle detection so far; however, most of the past papers only reviewed pedestrian or vehicle detection separately. This review aims to present an overview of AV systems in general and then review and investigate several computer vision techniques for detecting pedestrians and vehicles. The review concludes that both traditional and Deep Learning (DL) techniques have been used for pedestrian and vehicle detection; however, DL techniques have shown the best results. Although good detection results have been achieved for pedestrians and vehicles, the current algorithms still struggle to detect small, occluded, and truncated objects. In addition, there is limited research on how to improve detection performance in difficult light and weather conditions. Most of the algorithms have been tested on well-recognised datasets such as Caltech and KITTI; however, these datasets have their own limitations. Therefore, this paper recommends that future works be implemented on newer, more challenging datasets, such as PIE and BDD100K.
(This article belongs to the Special Issue Sensors and Machine Learning for Robotic (Self-Driving) Vehicles)
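
As a minimal example of the deep learning detectors this review covers, the sketch below runs torchvision's pretrained Faster R-CNN on a dummy frame and keeps person and car detections. This is a generic illustration, not one of the reviewed pipelines; the score threshold is an arbitrary choice.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

PERSON, CAR = 1, 3  # COCO class indices used by torchvision's pretrained model

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)   # stand-in for a camera frame, values in [0, 1]
with torch.no_grad():
    out = model([image])[0]       # dict with "boxes", "labels", "scores"

for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if label.item() in (PERSON, CAR) and score.item() > 0.5:
        kind = "pedestrian" if label.item() == PERSON else "vehicle"
        print(kind, [round(c, 1) for c in box.tolist()], round(score.item(), 2))
```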
