
Research Progress on Intelligent Electric Vehicles-2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (20 December 2023) | Viewed by 11379

Special Issue Editors


Guest Editor
Department of Mechanical and Electrical Engineering, Xiamen University, Xiamen 384002, China
Interests: multi-agent systems; distributed control; intelligent driving; advanced sensors

Guest Editor
Department of Automation, Xiamen University, Xiamen 384002, China
Interests: intelligent electric vehicles; vehicle dynamics and control; vision system

Special Issue Information

Dear Colleagues,

This Special Issue is a continuation of our previous Special Issue, “Research Progress on Intelligent Electric Vehicles I”.

Intelligent electric vehicles are equipped with advanced sensors and electronic systems (e.g., vision systems, global positioning systems, and wireless communication networks), and have attracted a great deal of research interest as an effective means of reducing energy consumption and enhancing the safety and efficiency of intelligent transportation systems. The development and application of intelligent electric vehicles draw on a variety of technologies, including environmental perception, intelligent decision-making, information safety, human–machine shared driving, and vehicle dynamics control. These technologies face new challenges brought by the latest wave of scientific and technological development, represented by the mobile Internet, big data, and cloud computing.

This Special Issue addresses new methods for environmental perception, intelligent decision-making, information safety, and vehicle dynamics control in intelligent electric vehicles. Survey papers are also welcome.

Dr. Jinghua Guo
Dr. Jingyao Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research


16 pages, 28691 KiB  
Article
Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes
by Shuguang Li, Jiafu Yan, Haoran Chen and Ke Zheng
Sensors 2023, 23(17), 7560; https://doi.org/10.3390/s23177560 - 31 Aug 2023
Cited by 1 | Viewed by 1823
Abstract
Depth estimation is an important part of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and sparse depth maps obtained from other sensors. However, existing methods often pay insufficient attention to latent semantic information. Considering the highly structured character of driving scenes, we propose a dual-branch network that predicts dense depth maps by fusing radar data and RGB images. In the proposed architecture, the driving scene is divided into three parts, each of which predicts a depth map; these maps are finally merged into one by a fusion strategy in order to make full use of the latent semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus more on the areas of interest when driving. The proposed method is evaluated on the nuScenes dataset, and experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods.
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)
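The abstract does not specify the exact form of the variant L1 loss, but a common way to make a depth network concentrate on driving-relevant regions is to up-weight the per-pixel L1 error inside a region-of-interest mask. A minimal sketch of that idea (the mask source and the weight value are assumptions, not details from the paper):

```python
def weighted_l1_loss(pred, target, roi_mask, roi_weight=2.0):
    """Mean per-pixel L1 depth error, up-weighted inside regions of interest.

    pred, target: 2-D lists of depth values; roi_mask: 2-D list of bools
    marking driving-relevant pixels (e.g. road surface, vehicles). The
    actual mask and weight used in the paper are not given in the abstract;
    both are illustrative assumptions here.
    """
    total, count = 0.0, 0
    for p_row, t_row, m_row in zip(pred, target, roi_mask):
        for p, t, m in zip(p_row, t_row, m_row):
            weight = roi_weight if m else 1.0
            total += weight * abs(p - t)
            count += 1
    return total / count

# toy check: the 0.5 m error inside the ROI counts double
loss = weighted_l1_loss(
    pred=[[1.0, 2.0], [3.0, 4.0]],
    target=[[1.5, 2.0], [3.0, 3.0]],
    roi_mask=[[True, False], [False, False]],
)
```

With `roi_weight=1.0` this reduces to a plain mean L1 loss, which makes the weighting easy to ablate.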

14 pages, 5023 KiB  
Article
Research on Road Scene Understanding of Autonomous Vehicles Based on Multi-Task Learning
by Jinghua Guo, Jingyao Wang, Huinian Wang, Baoping Xiao, Zhifei He and Lubin Li
Sensors 2023, 23(13), 6238; https://doi.org/10.3390/s23136238 - 7 Jul 2023
Cited by 7 | Viewed by 2635
Abstract
Road scene understanding is crucial to the safe driving of autonomous vehicles. Comprehensive road scene understanding requires a visual perception system to handle a large number of tasks at the same time, which calls for a perception model with a small size, fast speed, and high accuracy. As multi-task learning has evident advantages in performance and computational resources, this paper proposes a multi-task model, YOLO-Object, Drivable Area, and Lane Line Detection (YOLO-ODL), based on hard parameter sharing to realize joint and efficient detection of traffic objects, drivable areas, and lane lines. To balance the tasks of YOLO-ODL, a weight balancing strategy is introduced so that the weight parameters of the model can be adjusted automatically during training, and a Mosaic migration optimization scheme is adopted to improve the evaluation indicators of the model. The YOLO-ODL model performs well on the challenging BDD100K dataset, achieving state-of-the-art accuracy and computational efficiency.
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)
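The abstract does not state which automatic weight-balancing scheme YOLO-ODL uses; one widely used option for multi-task models with learnable task weights is homoscedastic-uncertainty weighting (Kendall et al.), sketched below. Whether this is the paper's exact formulation is an assumption:

```python
import math

def balanced_multitask_loss(task_losses, log_vars):
    """Uncertainty-based weighting for multi-task training.

    Each task loss L_i is scaled by exp(-s_i) / 2 and regularised by s_i / 2,
    where s_i = log(sigma_i^2) is a learnable per-task parameter. Tasks the
    optimiser finds noisy get a larger s_i and thus a smaller weight, so the
    balance adjusts itself during training.
    """
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += 0.5 * math.exp(-s) * loss + 0.5 * s
    return total

# with all s_i = 0 every task simply contributes half its loss
total = balanced_multitask_loss([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])  # 3.0
```

In a real training loop the `log_vars` would be registered as trainable parameters alongside the network weights so gradients flow through both terms.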

16 pages, 6600 KiB  
Article
YOLOv5s-Fog: An Improved Model Based on YOLOv5s for Object Detection in Foggy Weather Scenarios
by Xianglin Meng, Yi Liu, Lili Fan and Jingjing Fan
Sensors 2023, 23(11), 5321; https://doi.org/10.3390/s23115321 - 3 Jun 2023
Cited by 10 | Viewed by 3792
Abstract
In foggy weather, the scattering and absorption of light by water droplets and particulate matter cause object features in images to become blurred or lost, presenting a significant challenge for target detection in autonomous vehicles. To address this issue, this study proposes a foggy weather detection method based on the YOLOv5s framework, named YOLOv5s-Fog. The model enhances the feature extraction and expression capabilities of YOLOv5s by introducing a novel target detection layer called SwinFocus. In addition, a decoupled head is incorporated into the model, and conventional non-maximum suppression is replaced with Soft-NMS. Experimental results demonstrate that these improvements effectively enhance detection performance for blurry objects and small targets in foggy conditions. Compared to the baseline YOLOv5s model, YOLOv5s-Fog achieves a 5.4% increase in mAP on the RTTS dataset, reaching 73.4%. This method provides technical support for rapid and accurate target detection in adverse weather, such as fog, for autonomous vehicles.
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)
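The Soft-NMS step mentioned in the abstract is a standard replacement for hard non-maximum suppression: instead of discarding boxes that overlap a higher-scoring detection, it decays their scores, which helps in crowded or low-contrast scenes. A minimal sketch of the Gaussian variant (the `sigma` and threshold values are common defaults, not the paper's settings):

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of discarding boxes."""
    dets = sorted(zip(boxes, scores), key=lambda d: d[1], reverse=True)
    kept = []
    while dets:
        best = dets.pop(0)           # highest remaining score is always kept
        kept.append(best)
        remaining = []
        for box, score in dets:
            # stronger overlap with the kept box -> stronger score decay
            score *= math.exp(-iou(best[0], box) ** 2 / sigma)
            if score > score_thresh:
                remaining.append((box, score))
        dets = sorted(remaining, key=lambda d: d[1], reverse=True)
    return kept
```

Note that a heavily overlapping box survives with a reduced score rather than vanishing, which is exactly the behaviour that benefits blurred, partially visible targets.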

Review


43 pages, 3010 KiB  
Review
Review of Integrated Chassis Control Techniques for Automated Ground Vehicles
by Viktor Skrickij, Paulius Kojis, Eldar Šabanovič, Barys Shyrokau and Valentin Ivanov
Sensors 2024, 24(2), 600; https://doi.org/10.3390/s24020600 - 17 Jan 2024
Cited by 2 | Viewed by 2650
Abstract
Integrated chassis control systems represent a significant advancement in ground vehicle dynamics, aimed at enhancing overall performance, comfort, handling, and stability. As vehicles transition from internal combustion to electric platforms, integrated chassis control systems have evolved to meet the demands of electrification and automation. This paper analyses the overall control structure of automated vehicles with integrated chassis control. Integrating longitudinal, lateral, and vertical systems presents complexities due to the overlapping control regions of the various subsystems. The presented methodology includes a comprehensive examination of state-of-the-art technologies, focusing on algorithms that manage control actions and prevent interference between subsystems. The results underscore the importance of control allocation in exploiting the additional degrees of freedom offered by over-actuated systems. The paper systematically overviews the various control methods applied in integrated chassis control and path tracking, including a detailed examination of perception and decision-making, parameter estimation techniques, reference generation strategies, and the hierarchy of controllers, encompassing high-level, middle-level, and low-level components. By offering this systematic overview, the paper aims to facilitate a deeper understanding of the diverse control methods employed in automated driving with integrated chassis control, providing insights into their applications, strengths, and limitations.
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)
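The control allocation problem the review highlights can be illustrated with a small example: an electric vehicle with four independently driven wheels has more actuators (four wheel forces) than virtual controls (total longitudinal force and yaw moment), so a minimum-norm pseudoinverse solution distributes the demand. The effectiveness matrix, wheel ordering, and track width below are illustrative assumptions, not taken from the paper:

```python
def allocate_wheel_forces(fx_des, mz_des, track=1.6):
    """Minimum-norm control allocation for an over-actuated 4-wheel EV.

    Virtual controls v = (Fx, Mz) map to wheel forces u via v = B u, with
    B = [[1, 1, 1, 1], [-t/2, t/2, -t/2, t/2]] (wheel order FL, FR, RL, RR,
    t = track width). With 4 actuators and only 2 virtual controls, we take
    the minimum-norm solution u = B^T (B B^T)^-1 v. Here B B^T happens to be
    diagonal, diag(4, 4 * (t/2)^2), so the pseudoinverse is closed-form.
    """
    half = track / 2.0
    a = fx_des / 4.0                       # share of longitudinal demand
    b = mz_des / (4.0 * half * half)       # share of yaw-moment demand
    fl = a - half * b                      # left wheels brake for +Mz
    fr = a + half * b                      # right wheels drive for +Mz
    return [fl, fr, fl, fr]                # FL, FR, RL, RR

# pure traction demand splits evenly across the four wheels
forces = allocate_wheel_forces(4000.0, 0.0)  # [1000.0, 1000.0, 1000.0, 1000.0]
```

Real allocators add actuator limits and weighting (e.g. tyre-load-dependent costs), which turns the closed-form pseudoinverse into a constrained quadratic program, but the exactly-reconstructed `v = B u` property above is the core idea.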
