
Perception Sensors for Road Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (1 October 2019) | Viewed by 124220

Special Issue Editor


Dr. Felipe Jiménez
Guest Editor
Instituto Universitario de Investigación del Automóvil (INSIA), Universidad Politécnica de Madrid, 28040 Madrid, Spain
Interests: intelligent transport systems; advanced driver assistance systems; vehicle positioning; inertial sensors; digital maps; vehicle dynamics; driver monitoring; perception; autonomous vehicles; cooperative services; connected and autonomous driving

Special Issue Information

Dear Colleagues,

New assistance systems and autonomous driving applications for road vehicles place ever greater requirements on perception systems in order to increase the robustness of decisions and to avoid false positives and false negatives.

Many technologies can be used for this purpose, both in the vehicle and in the infrastructure. In the vehicle, technologies such as LiDAR and computer vision are the basis for increasing levels of vehicle automation, although their deployment is also revealing the problems that arise in real scenarios and that must be solved in order to keep improving the safety and efficiency of road traffic.

Given the limitations of each technology, it is common to resort to sensor fusion, both between sensors of the same type and between sensors of different types.

Additionally, the data used for decision-making do not come only from on-board sensors: wireless communication with the outside world allows a vehicle to extend its electronic horizon. Likewise, positioning on precise and detailed digital maps provides additional information that can be very useful for interpreting the environment.

Sensing also extends to the driver, in order to analyse their ability to perform the driving task safely.

In all of these areas, it is crucial to study the limitations of each solution and sensor, and to establish tools that alleviate those limitations through improvements in hardware or software. To this end, the specifications required of sensors must be defined, and specific methods must be developed to validate those specifications for individual sensors and for complete systems.

Finally, state-of-the-art studies on the evolution of perception sensors and their impact on the evolution of road transport are also welcome.

In conclusion, this Special Issue aims to bring together innovative developments in areas related to sensors in vehicles and in infrastructure, including, but not limited to:

  • Environment perception
  • LiDAR
  • Computer vision
  • Radar
  • Vehicle dynamics sensors
  • Driver surveillance
  • Infrastructure sensors
  • New assistance systems based on perception sensors
  • Sensor fusion techniques for autonomous systems
  • Interaction between autonomous systems and the driver
  • Decision algorithms for autonomous actions
  • Cooperation between autonomous vehicles and infrastructure
  • Sensor requirements
  • State-of-the-art reviews of perception sensors and technologies

Authors are invited to contact the guest editor prior to submission if they are uncertain whether their work falls within the general scope of this Special Issue.

Dr. Felipe Jiménez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Road vehicles
  • Sensors
  • Environment perception sensors
  • Vehicle surroundings surveillance
  • Vehicle dynamics sensors
  • Driver assistance systems
  • Positioning
  • LiDAR
  • Computer vision
  • Radar
  • Sensor fusion
  • Infrastructure sensors
  • Autonomous vehicles

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Editorial

Jump to: Research, Review

3 pages, 175 KiB  
Editorial
Perception Sensors for Road Applications
by Felipe Jiménez
Sensors 2019, 19(23), 5294; https://doi.org/10.3390/s19235294 - 1 Dec 2019
Cited by 1 | Viewed by 2404
Abstract
New assistance systems and the applications of autonomous driving of road vehicles imply ever-greater requirements for perception systems that are necessary in order to increase the robustness of decisions and to avoid false positives or false negatives [...] Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)

Research

Jump to: Editorial, Review

13 pages, 1638 KiB  
Article
Empirical Analysis of Safe Distance Calculation by the Stereoscopic Capturing and Processing of Images Through the Tailigator System
by Gerd Christian Krizek, Rene Hausleitner, Laura Böhme and Cristina Olaverri-Monreal
Sensors 2019, 19(22), 5044; https://doi.org/10.3390/s19225044 - 19 Nov 2019
Viewed by 3568
Abstract
Driver disregard for the minimum safety distance increases the probability of rear-end collisions. In order to contribute to active safety on the road, we propose in this work a low-cost Forward Collision Warning system that captures and processes images. Using cameras located in the rear section of a leading vehicle, this system serves the purpose of discouraging tailgating behavior from the vehicle driving behind. We perform in this paper the pertinent field tests to assess system performance, focusing on the calculated distance from the processing of images and the error margins in a straight line, as well as in a curve. Based on the evaluation results, the current version of the Tailigator can be used at speeds up to 50 km per hour without any restrictions. The measurements showed similar characteristics both on the straight line and in the curve. At close distances, between 3 and 5 m, the values deviated from the real value. At average distances, around 10 to 15 m, the Tailigator achieved the best results. From distances higher than 20 m, the deviations increased steadily with the distance. We contribute to the state of the art with an innovative low-cost system to identify tailgating behavior and raise awareness, which works independently of the rear vehicle’s communication capabilities or equipment. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
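
The core geometric idea behind this kind of stereo-based gap estimation, triangulation from the disparity between the two camera views, can be sketched as follows. The focal length, baseline and the two-second-rule warning check below are illustrative assumptions, not the calibration or decision logic reported by the authors.

    # Minimal sketch of the distance-from-disparity principle behind a stereo
    # forward-collision-warning setup; focal length, baseline and the warning
    # rule are hypothetical, not the authors' calibration or decision logic.
    def stereo_distance_m(disparity_px: float,
                          focal_length_px: float = 700.0,
                          baseline_m: float = 0.30) -> float:
        """Pinhole stereo model: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite distance")
        return focal_length_px * baseline_m / disparity_px

    def is_tailgating(distance_m: float, speed_kmh: float) -> bool:
        """Two-second-rule check (illustrative threshold, not from the paper)."""
        safe_gap_m = (speed_kmh / 3.6) * 2.0
        return distance_m < safe_gap_m

    gap = stereo_distance_m(disparity_px=14.0)
    print(f"estimated gap: {gap:.1f} m, tailgating at 50 km/h: {is_tailgating(gap, 50.0)}")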

21 pages, 13878 KiB  
Article
Interference Mitigation in Automotive Radars Using Pseudo-Random Cyclic Orthogonal Sequences
by Sruthy Skaria, Akram Al-Hourani, Robin J. Evans, Kandeepan Sithamparanathan and Udaya Parampalli
Sensors 2019, 19(20), 4459; https://doi.org/10.3390/s19204459 - 15 Oct 2019
Cited by 20 | Viewed by 5170
Abstract
The number of small sophisticated wireless sensors which share the electromagnetic spectrum is expected to grow rapidly over the next decade and interference between these sensors is anticipated to become a major challenge. In this paper we study the interference mechanisms in one such sensor, automotive radars, where our results are directly applicable to a range of other sensor situations. In particular, we study the impact of radar waveform design and the associated receiver processing on the statistics of radar–radar interference and its effects on sensing performance. We propose a novel interference mitigation approach based on pseudo-random cyclic orthogonal sequences (PRCOS), which enable sensors to rapidly learn the interference environment and avoid using frequency overlapping waveforms, which in turn results in a significant interference mitigation with analytically tractable statistical characterization. The performance of our new approach is benchmarked against the popular random stepped frequency waveform sequences (RSFWS), where both simulation and analytic results show considerable interference reduction. Furthermore, we perform experimental measurements on commercially available automotive radars to verify the proposed model and framework. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
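
The property this class of schemes exploits, namely that distinct cyclic shifts of an orthogonal frequency-slot sequence never transmit in the same slot on the same chirp, can be illustrated with a toy simulation. The slot construction below is a deliberately simplified stand-in for the authors' PRCOS design, not a reproduction of it.

    import numpy as np

    def cyclic_slot_sequence(n_slots: int, shift: int) -> np.ndarray:
        """Cyclically shifted copy of a base frequency-slot assignment (slot i on chirp i)."""
        return np.roll(np.arange(n_slots), shift)

    def collision_fraction(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
        """Fraction of chirps in which two radars occupy the same frequency slot."""
        return float(np.mean(seq_a == seq_b))

    rng = np.random.default_rng(1)
    n_slots = 64
    radar_a = cyclic_slot_sequence(n_slots, rng.integers(n_slots))
    radar_b = cyclic_slot_sequence(n_slots, rng.integers(n_slots))
    # Distinct shifts give zero overlap; identical shifts collide on every chirp,
    # which is why each sensor draws its shift pseudo-randomly.
    print("collision fraction:", collision_fraction(radar_a, radar_b))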

20 pages, 9477 KiB  
Article
Predicting Agent Behaviour and State for Applications in a Roundabout-Scenario Autonomous Driving
by Naveed Muhammad and Björn Åstrand
Sensors 2019, 19(19), 4279; https://doi.org/10.3390/s19194279 - 2 Oct 2019
Cited by 6 | Viewed by 3991
Abstract
As human drivers, we instinctively employ our understanding of other road users’ behaviour for enhanced efficiency of our drive and safety of the traffic. In recent years, different aspects of assisted and autonomous driving have gotten a lot of attention from the research and industrial community, including the aspects of behaviour modelling and prediction of future state. In this paper, we address the problem of modelling and predicting agent behaviour and state in a roundabout traffic scenario. We present three ways of modelling traffic in a roundabout based on: (i) the roundabout geometry; (ii) mean path taken by vehicles inside the roundabout; and (iii) a set of reference trajectories traversed by vehicles inside the roundabout. The roundabout models are compared in terms of exit-direction classification and state (i.e., position inside the roundabout) prediction of query vehicles inside the roundabout. The exit-direction classification and state prediction are based on a particle-filter classifier algorithm. The results show that the roundabout model based on a set of reference trajectories is better suited for both the exit-direction and state prediction. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
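
A compact sketch of the reference-trajectory idea combined with a particle-filter classifier is given below. The motion and measurement models, noise levels and data layout are assumptions made for illustration only, not the parameters of the paper.

    import numpy as np

    def predict_exit(reference_trajs, exit_labels, observations,
                     n_particles=500, step_noise=2.0, meas_sigma=1.0, rng=None):
        """Toy particle-filter exit classifier over a set of reference trajectories.

        reference_trajs: list of (N_i, 2) arrays of positions inside the roundabout.
        exit_labels:     exit index associated with each reference trajectory.
        observations:    (T, 2) array of observed positions of the query vehicle.
        """
        rng = rng or np.random.default_rng(0)
        traj_ids = rng.integers(len(reference_trajs), size=n_particles)
        progress = np.zeros(n_particles)                   # index reached along each trajectory
        for z in np.asarray(observations, float):
            progress += np.abs(rng.normal(1.0, step_noise, n_particles))   # noisy forward motion
            pts = np.array([reference_trajs[t][min(int(p), len(reference_trajs[t]) - 1)]
                            for t, p in zip(traj_ids, progress)])
            w = np.exp(-np.sum((pts - z) ** 2, axis=1) / (2 * meas_sigma ** 2)) + 1e-12
            w /= w.sum()                                    # measurement likelihood -> weights
            keep = rng.choice(n_particles, size=n_particles, p=w)          # resample
            traj_ids, progress = traj_ids[keep], progress[keep]
        votes = np.bincount(np.asarray(exit_labels)[traj_ids])
        return int(np.argmax(votes))                        # exit supported by most particles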

16 pages, 3978 KiB  
Article
Implementation of a Potential Field-Based Decision-Making Algorithm on Autonomous Vehicles for Driving in Complex Environments
by Carlos Martínez and Felipe Jiménez
Sensors 2019, 19(15), 3318; https://doi.org/10.3390/s19153318 - 28 Jul 2019
Cited by 15 | Viewed by 4319
Abstract
Autonomous driving is undergoing huge developments nowadays. It is expected that its implementation will bring many benefits. Autonomous cars must deal with tasks at different levels. Although some of them are currently solved, and perception systems provide quite an accurate and complete description of the environment, high-level decisions are hard to obtain in challenging scenarios. Moreover, they must comply with safety, reliability and predictability requirements, road user acceptance, and comfort specifications. This paper presents a path planning algorithm based on potential fields. Potential models are adjusted so that their behavior is appropriate to the environment and the dynamics of the vehicle and they can face almost any unexpected scenarios. The response of the system considers the road characteristics (e.g., maximum speed, lane line curvature, etc.) and the presence of obstacles and other users. The algorithm has been tested on an automated vehicle equipped with a GPS receiver, an inertial measurement unit and a computer vision system in real environments with satisfactory results. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
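
A minimal sketch of potential-field path planning, an attractive potential towards the goal plus repulsive potentials around obstacles followed by gradient descent, is shown below. Gains, ranges and step sizes are illustrative assumptions; the paper additionally adapts the potential models to road characteristics and vehicle dynamics, which this sketch does not attempt.

    import numpy as np

    def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=20.0, rep_range=10.0):
        """Gradient of a classic attractive/repulsive potential (illustrative gains)."""
        grad = k_att * (pos - goal)                      # attractive term pulls towards the goal
        for obs in obstacles:
            diff = pos - obs
            dist = np.linalg.norm(diff)
            if 1e-6 < dist < rep_range:                  # repulsion acts only within rep_range
                grad += k_rep * (1.0 / rep_range - 1.0 / dist) * diff / dist ** 3
        return grad

    def plan(start, goal, obstacles, step=0.05, max_iter=3000, tol=0.5):
        """Follow the negative gradient until the goal neighbourhood is reached."""
        pos = np.asarray(start, float)
        goal = np.asarray(goal, float)
        obstacles = [np.asarray(o, float) for o in obstacles]
        path = [pos.copy()]
        for _ in range(max_iter):
            pos = pos - step * potential_gradient(pos, goal, obstacles)
            path.append(pos.copy())
            if np.linalg.norm(pos - goal) < tol:
                break
        return np.array(path)

    path = plan(start=(0.0, 0.0), goal=(40.0, 0.0), obstacles=[(20.0, 0.5)])
    print("reached:", path[-1])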

14 pages, 1502 KiB  
Article
Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems
by Jaechan Cho, Yongchul Jung, Dong-Sun Kim, Seongjoo Lee and Yunho Jung
Sensors 2019, 19(14), 3217; https://doi.org/10.3390/s19143217 - 22 Jul 2019
Cited by 33 | Viewed by 6700
Abstract
Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to classify target objects and background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for the real-time processing of the proposed algorithm. The proposed moving object detector was designed using hardware description language (HDL) and its real-time performance was evaluated using an FPGA based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAM and can support real-time processing of 30 fps at an operating frequency of 200 MHz. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
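
The general idea of combining dense optical flow (to separate ego-motion from residual object motion) with a Gaussian mixture background model can be sketched in a few lines of OpenCV. This is a software stand-in for illustration only; it does not reproduce the paper's algorithm or its FPGA implementation, and the video path and thresholds are hypothetical.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("dashcam.mp4")          # hypothetical input video
    mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32, detectShadows=False)

    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not read the input video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow; its median is a crude estimate of the ego-motion component.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        ego = np.median(flow.reshape(-1, 2), axis=0)
        residual = np.linalg.norm(flow - ego, axis=2)      # motion not explained by ego-motion
        moving = (residual > 2.0).astype(np.uint8) * 255
        fg = mog2.apply(gray)                              # Gaussian-mixture foreground mask
        mask = cv2.bitwise_and(moving, fg)                 # keep pixels flagged by both cues
        prev_gray = gray
    cap.release()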

17 pages, 4678 KiB  
Article
Machine Learning Techniques for Undertaking Roundabouts in Autonomous Driving
by Laura García Cuenca, Javier Sanchez-Soriano, Enrique Puertas, Javier Fernandez Andrés and Nourdine Aliane
Sensors 2019, 19(10), 2386; https://doi.org/10.3390/s19102386 - 24 May 2019
Cited by 37 | Viewed by 6328
Abstract
This article presents a machine learning-based technique to build a predictive model and generate rules of action to allow autonomous vehicles to perform roundabout maneuvers. The approach consists of building a predictive model of vehicle speeds and steering angles based on collected data related to driver–vehicle interactions and other aggregated data intrinsic to the traffic environment, such as roundabout geometry and the number of lanes obtained from Open-Street-Maps and offline video processing. The study systematically generates rules of action regarding the vehicle speed and steering angle required for autonomous vehicles to achieve complete roundabout maneuvers. Supervised learning algorithms like the support vector machine, linear regression, and deep learning are used to form the predictive models. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
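
As a hedged illustration of the regression step, the snippet below fits a support vector regressor that maps roundabout features to an approach speed. The feature layout and the training values are invented for the example and are not the authors' dataset or model configuration.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Hypothetical feature layout: [roundabout radius (m), number of lanes,
    # distance to entry (m)]; targets are vehicle speeds (km/h). Toy data only.
    X = np.array([[12.0, 1, 40.0], [12.0, 1, 10.0], [25.0, 2, 40.0],
                  [25.0, 2, 10.0], [18.0, 2, 25.0], [30.0, 3, 50.0]])
    y = np.array([32.0, 22.0, 40.0, 28.0, 33.0, 45.0])

    speed_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    speed_model.fit(X, y)
    print(speed_model.predict([[20.0, 2, 30.0]]))   # predicted approach speed (km/h)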

20 pages, 2300 KiB  
Article
Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking
by Muhammad Sualeh and Gon-Woo Kim
Sensors 2019, 19(6), 1474; https://doi.org/10.3390/s19061474 - 26 Mar 2019
Cited by 82 | Viewed by 12797
Abstract
Environmental perception plays an essential role in autonomous driving tasks and demands robustness in cluttered dynamic environments such as complex urban scenarios. In this paper, a robust Multiple Object Detection and Tracking (MODT) algorithm for a non-stationary base is presented, using multiple 3D LiDARs for perception. The merged LiDAR data is treated with an efficient MODT framework, considering the limitations of the vehicle-embedded computing environment. The ground classification is obtained through a grid-based method while considering a non-planar ground. Furthermore, unlike prior works, a 3D grid-based clustering technique is developed to detect objects under elevated structures. The centroid measurements obtained from the object detection are tracked using Interactive Multiple Model-Unscented Kalman Filter-Joint Probabilistic Data Association Filter (IMM-UKF-JPDAF). IMM captures different motion patterns, UKF handles the nonlinearities of motion models, and JPDAF associates the measurements in the presence of clutter. The proposed algorithm is implemented on two slightly dissimilar platforms, giving real-time performance on embedded computers. The performance evaluation metrics by MOT16 and ground truths provided by KITTI Datasets are used for evaluations and comparison with the state-of-the-art. The experimentation on platforms and comparisons with state-of-the-art techniques suggest that the proposed framework is a feasible solution for MODT tasks. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
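
The tracking backbone can be hinted at with a single constant-velocity Kalman filter per detected centroid. This is a deliberate simplification for illustration: the paper's IMM-UKF-JPDAF additionally handles multiple motion models, nonlinear dynamics and data association in clutter, none of which is reproduced here.

    import numpy as np

    class CentroidKalman:
        """Constant-velocity Kalman filter for one tracked centroid (x, y, vx, vy)."""
        def __init__(self, xy, dt=0.1):
            self.x = np.array([xy[0], xy[1], 0.0, 0.0])
            self.P = np.eye(4) * 10.0                      # initial state uncertainty
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], float)       # constant-velocity motion model
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], float)       # only the position is measured
            self.Q = np.eye(4) * 0.05                      # process noise
            self.R = np.eye(2) * 0.2                       # measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            y = np.asarray(z, float) - self.H @ self.x     # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    track = CentroidKalman(xy=(5.0, 2.0), dt=0.1)
    track.predict()
    track.update((5.12, 2.04))
    print("filtered position:", track.x[:2])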

25 pages, 7860 KiB  
Article
Robust and Real-Time Detection and Tracking of Moving Objects with Minimum 2D LiDAR Information to Advance Autonomous Cargo Handling in Ports
by Victor Vaquero, Ely Repiso and Alberto Sanfeliu
Sensors 2019, 19(1), 107; https://doi.org/10.3390/s19010107 - 29 Dec 2018
Cited by 27 | Viewed by 7176
Abstract
Detecting and tracking moving objects (DATMO) is an essential component for autonomous driving and transportation. In this paper, we present a computationally low-cost and robust DATMO system which uses as input only 2D laser rangefinder (LRF) information. Due to its low requirements both in sensor needs and computation, our DATMO algorithm is meant to be used in current Autonomous Guided Vehicles (AGVs) to improve their reliability for the cargo transportation tasks at port terminals, advancing towards the next generation of fully autonomous transportation vehicles. Our method follows a Detection plus Tracking paradigm. In the detection step we exploit the minimum information of 2D-LRFs by segmenting the elements of the scene in a model-free way and performing a fast object matching to pair segmented elements from two different scans. In this way, we easily recognize dynamic objects and thus reduce consistently by between two and five times the computational burden of the adjacent tracking method. We track the final dynamic objects with an improved Multiple-Hypothesis Tracking (MHT), to which special functions for filtering, confirming, holding, and deleting targets have been included. The full system is evaluated in simulated and real scenarios producing solid results. Specifically, a simulated port environment has been developed to gather realistic data of common autonomous transportation situations such as observing an intersection, joining vehicle platoons, and perceiving overtaking maneuvers. We use different sensor configurations to demonstrate the robustness and adaptability of our approach. We additionally evaluate our system with real data collected in a port terminal in the Netherlands. We show that it is able to accomplish the vehicle following task successfully, obtaining a total system recall of more than 98% while running faster than 30 Hz. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
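
The detection step, segmenting a 2D laser scan at range discontinuities and matching segments between consecutive scans to flag moved (dynamic) elements, can be sketched as follows. The thresholds and the greedy centroid matching are illustrative choices, not the paper's exact procedure.

    import numpy as np

    def segment_scan(ranges, angles, jump_threshold=0.5):
        """Split one 2D laser scan into clusters at large Euclidean discontinuities."""
        ranges, angles = np.asarray(ranges, float), np.asarray(angles, float)
        xy = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
        gaps = np.linalg.norm(np.diff(xy, axis=0), axis=1) > jump_threshold
        return np.split(xy, np.flatnonzero(gaps) + 1)

    def match_segments(prev_segments, curr_segments, max_centroid_shift=1.0):
        """Greedy centroid matching between consecutive scans; matched pairs whose
        centroid moved are candidate dynamic objects (thresholds are illustrative)."""
        prev_c = [s.mean(axis=0) for s in prev_segments]
        pairs = []
        for i, seg in enumerate(curr_segments):
            c = seg.mean(axis=0)
            dists = [np.linalg.norm(c - p) for p in prev_c]
            j = int(np.argmin(dists))
            if dists[j] < max_centroid_shift:
                pairs.append((j, i, dists[j]))             # (prev idx, curr idx, shift in metres)
        return pairs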

18 pages, 8019 KiB  
Article
Robust Lane-Detection Method for Low-Speed Environments
by Qingquan Li, Jian Zhou, Bijun Li, Yuan Guo and Jinsheng Xiao
Sensors 2018, 18(12), 4274; https://doi.org/10.3390/s18124274 - 4 Dec 2018
Cited by 31 | Viewed by 5220
Abstract
Vision-based lane-detection methods provide low-cost density information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method to expand the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions are dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking width limitations. In order to meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line segment detection. Combined with straight lines, polylines, and curves, the proposed geometric fitting method has the ability to adapt to various road shapes. Finally, different status vectors and Kalman filter transfer matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
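
One building block described in the abstract, extracting candidate lane-marking edges from an image row while enforcing a plausible marking width, can be sketched as below. The gradient threshold and pixel width limits are assumptions for illustration, not the paper's tuned values.

    import numpy as np

    def marking_edge_pairs(row, grad_threshold=25, min_width=4, max_width=30):
        """Find dark-bright-dark transitions in one image row whose bright span
        matches a plausible lane-marking width (pixel thresholds are illustrative)."""
        grad = np.diff(np.asarray(row, dtype=np.int32))
        rising = np.flatnonzero(grad > grad_threshold)      # dark -> bright edge
        falling = np.flatnonzero(grad < -grad_threshold)    # bright -> dark edge
        pairs = []
        for r in rising:
            after = falling[falling > r]
            if after.size and min_width <= after[0] - r <= max_width:
                pairs.append((int(r), int(after[0])))       # left/right edge columns
        return pairs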

12 pages, 4181 KiB  
Article
Multi-Target Detection Method Based on Variable Carrier Frequency Chirp Sequence
by Wei Wang, Jinsong Du and Jie Gao
Sensors 2018, 18(10), 3386; https://doi.org/10.3390/s18103386 - 10 Oct 2018
Cited by 17 | Viewed by 4918
Abstract
Continuous waveform (CW) radar is widely used in intelligent transportation systems, vehicle assisted driving, and other fields because of its simple structure, low cost and high integration. There are several waveforms which have been developed in the last years. The chirp sequence waveform has the ability to extract the range and velocity parameters of multiple targets. However, conventional chirp sequence waveforms suffer from the Doppler ambiguity problem. This paper proposes a new waveform that follows the practical application requirements, high precision requirements, and low system complexity requirements. The new waveform consists of two chirp sequences, which are intertwined with each other. Each chirp signal has the same frequency modulation, the same bandwidth and the same chirp duration. The carrier frequencies are different and there is a frequency shift which is large enough to ensure that the Doppler frequencies for the same moving target are different. According to the sign and numerical relationship of the Doppler frequencies (possibly frequency aliasing), the Doppler frequency ambiguity problem is solved in eight cases. Theoretical analysis and simulation results verify that the new radar waveform is capable of measuring range and radial velocity simultaneously and unambiguously, with high accuracy and resolution even in multi-target situations. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
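
The underlying chirp-sequence processing, a 2D FFT over fast time (range) and slow time (Doppler), can be illustrated with a one-target simulation. The radar parameters below are generic 77 GHz values, and the sketch does not implement the paper's dual-carrier scheme for resolving Doppler ambiguity.

    import numpy as np

    c, fc = 3e8, 77e9                                  # speed of light, carrier frequency
    bandwidth, t_chirp = 300e6, 50e-6
    n_samples, n_chirps = 256, 128
    target_range, target_speed = 40.0, 12.0            # metres, metres/second

    slope = bandwidth / t_chirp
    f_beat = 2 * slope * target_range / c              # range-induced beat frequency
    f_dopp = 2 * target_speed * fc / c                 # Doppler frequency across chirps

    t = np.arange(n_samples) / n_samples * t_chirp     # fast time within one chirp
    m = np.arange(n_chirps)                            # slow time (chirp index)
    beat = np.exp(2j * np.pi * (f_beat * t[None, :] + f_dopp * t_chirp * m[:, None]))

    rd_map = np.abs(np.fft.fftshift(np.fft.fft2(beat), axes=0))
    chirp_bin, range_bin = np.unravel_index(np.argmax(rd_map), rd_map.shape)

    range_est = range_bin / t_chirp * c / (2 * slope)
    vel_est = (chirp_bin - n_chirps // 2) / (n_chirps * t_chirp) * c / (2 * fc)
    print(f"range ~ {range_est:.1f} m, speed ~ {vel_est:.1f} m/s")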

28 pages, 14927 KiB  
Article
Extended Line Map-Based Precise Vehicle Localization Using 3D LIDAR
by Jun-Hyuck Im, Sung-Hyuck Im and Gyu-In Jee
Sensors 2018, 18(10), 3179; https://doi.org/10.3390/s18103179 - 20 Sep 2018
Cited by 31 | Viewed by 4471
Abstract
An Extended Line Map (ELM)-based precise vehicle localization method is proposed in this paper, and is implemented using 3D Light Detection and Ranging (LIDAR). A binary occupancy grid map in which grids for road marking or vertical structures have a value of 1 and the rest have a value of 0 was created using the reflectivity and distance data of the 3D LIDAR. From the map, lines were detected using a Hough transform. After the detected lines were converted into the node and link forms, they were stored as a map. This map is called an extended line map, of which data size is extremely small (134 KB/km). The ELM-based localization is performed through correlation matching. The ELM is converted back into an occupancy grid map and matched to the map generated using the current 3D LIDAR. In this instance, a Fast Fourier Transform (FFT) was applied as the correlation matching method, and the matching time was approximately 78 ms (based on MATLAB). The experiment was carried out in the Gangnam area of Seoul, South Korea. The traveling distance was approximately 4.2 km, and the maximum traveling speed was approximately 80 km/h. As a result of localization, the root mean square (RMS) position errors for the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
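
The correlation-matching step, comparing the grid rebuilt from the map with the grid generated from the current LIDAR scan via FFT-based cross-correlation, can be sketched as follows. The toy grids are invented for the example; the paper's full pipeline (reflectivity thresholding, Hough-transform line extraction, node/link storage) is not reproduced.

    import numpy as np

    def fft_correlate(map_grid, scan_grid):
        """Circular cross-correlation of two equally sized occupancy grids via the FFT.
        The peak gives the shift to apply to the scan grid so that it overlays the map."""
        corr = np.fft.ifft2(np.fft.fft2(map_grid) * np.conj(np.fft.fft2(scan_grid))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        offset = tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
        return offset, corr.max()

    # Toy example: a 64x64 map with one vertical "road marking" and a scan shifted by (3, -2).
    map_grid = np.zeros((64, 64))
    map_grid[:, 30] = 1.0
    scan_grid = np.roll(map_grid, (3, -2), axis=(0, 1))
    print(fft_correlate(map_grid, scan_grid))          # expected offset (-3, 2) undoes the shift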

17 pages, 1454 KiB  
Article
Multi-Timescale Drowsiness Characterization Based on a Video of a Driver’s Face
by Quentin Massoz, Jacques G. Verly and Marc Van Droogenbroeck
Sensors 2018, 18(9), 2801; https://doi.org/10.3390/s18092801 - 25 Aug 2018
Cited by 18 | Viewed by 3810
Abstract
Drowsiness is a major cause of fatal accidents, in particular in transportation. It is therefore crucial to develop automatic, real-time drowsiness characterization systems designed to issue accurate and timely warnings of drowsiness to the driver. In practice, the least intrusive, physiology-based approach is to remotely monitor, via cameras, facial expressions indicative of drowsiness such as slow and long eye closures. Since the system’s decisions are based upon facial expressions in a given time window, there exists a trade-off between accuracy (best achieved with long windows, i.e., at long timescales) and responsiveness (best achieved with short windows, i.e., at short timescales). To deal with this trade-off, we develop a multi-timescale drowsiness characterization system composed of four binary drowsiness classifiers operating at four distinct timescales (5 s, 15 s, 30 s, and 60 s) and trained jointly. We introduce a multi-timescale ground truth of drowsiness, based on the reaction times (RTs) performed during standard Psychomotor Vigilance Tasks (PVTs), that strategically enables our system to characterize drowsiness with diverse trade-offs between accuracy and responsiveness. We evaluated our system on 29 subjects via leave-one-subject-out cross-validation and obtained strong results, i.e., global accuracies of 70%, 85%, 89%, and 94% for the four classifiers operating at increasing timescales, respectively. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
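
The trade-off the paper addresses, longer windows being more stable but less responsive, can be made concrete with a hand-crafted, PERCLOS-like closure fraction computed over the same four timescales. This is only an illustrative stand-in: the actual system uses four jointly trained binary classifiers and a reaction-time-based ground truth.

    import numpy as np

    def multi_timescale_closure(eye_closed, fps=30.0, windows_s=(5, 15, 30, 60)):
        """Fraction of closed-eye frames over several trailing windows.

        A hand-crafted stand-in for the paper's four jointly trained classifiers;
        it only illustrates the accuracy/responsiveness trade-off between timescales.
        """
        eye_closed = np.asarray(eye_closed, float)
        features = {}
        for w in windows_s:
            n = int(w * fps)
            features[w] = float(np.mean(eye_closed[-n:]))
        return features

    # Toy stream: 55 s of open eyes followed by a 5 s closure, sampled at 30 fps.
    stream = np.concatenate([np.zeros(30 * 55), np.ones(30 * 5)])
    print(multi_timescale_closure(stream))   # short windows react strongly, long windows dampen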

Review

Jump to: Editorial, Research

29 pages, 3364 KiB  
Review
A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research
by Francisca Rosique, Pedro J. Navarro, Carlos Fernández and Antonio Padilla
Sensors 2019, 19(3), 648; https://doi.org/10.3390/s19030648 - 5 Feb 2019
Cited by 327 | Viewed by 50380
Abstract
This paper presents a systematic review of the perception systems and simulators for autonomous vehicles (AV). This work has been divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, principle functioning, and electromagnetic spectrum used to operate the most common sensors used in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and the quantification of their features using spider charts will allow proper selection of different sensors depending on 11 features. In the second part, the main elements to be taken into account in the simulation of a perception system of an AV are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AV. Finally, the current state of regulations that are being applied in different countries around the world on issues concerning the implementation of autonomous vehicles is presented. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
