Intelligent Vehicles Based on Computer Vision, Multimodal Sensing and Autonomous Systems for Complex Transportation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 1979

Special Issue Editors


Prof. Dr. Jianming Zhang
Guest Editor
School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
Interests: visual object tracking; traffic sign detection and recognition; image processing and applications; sensor networks and IoT applications; big data technology and applications

Dr. Ke Gu
Guest Editor
School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
Interests: network and information security; artificial intelligence security; edge computing; fog computing; Internet of Vehicles

Special Issue Information

Dear Colleagues,

An intelligent vehicle is a vehicle equipped with perception, reasoning, and actuation devices that enable the automation of driving tasks such as safe lane following, obstacle avoidance, overtaking slower traffic, following the vehicle ahead, assessing and avoiding dangerous situations, and determining the route. Over the past decade, deep-learning-based methods (CNNs, RNNs, LSTMs, GANs, GNNs, etc.) have been applied with great success in intelligent vehicles, as they consistently outperform traditional methods. In the field of complex transportation, deep-learning-based computer vision, multimodal sensing, and autonomous systems have received extensive attention, enabling more accurate, efficient, and cheaper sensing, modelling, analysis, and decision making. These techniques make driving safer, more convenient, and more efficient, and they have dramatically changed transportation systems. In the future, intelligent vehicles will be in great demand and will find broad applications.

Intelligent vehicle applications require knowledge of the following: (1) the state of the environment surrounding the vehicle; (2) the state of the driver and occupants; (3) communication with roadside infrastructure or other vehicles; (4) the position and the kinematic and dynamic state of the vehicle; and (5) access to digital maps and satellite data. The aim of this Special Issue is to present both original research and review articles on the various disciplines of intelligent vehicles and their applications, particularly computer vision, multimodal sensing, deep learning, and autonomous systems for intelligent transportation.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Traffic image/video quality enhancement in severe weather conditions;
  • Traffic sign/light detection and recognition, and road/lane line detection;
  • Driver monitoring;
  • Vehicle forward collision warnings, blind spot monitoring, and lane departure warnings;
  • Vehicle/cyclist/pedestrian detection, counting, tracking, and reidentification;
  • Vehicular sensing (visible, infrared, ultrasound, radar, lidar, laser range finders, and so on);
  • Simultaneous localization and mapping (SLAM);
  • Behavioural decision making, path planning, and motion control;
  • Congestion prediction and control, and accident prediction.

We look forward to receiving your contributions.

Prof. Dr. Jianming Zhang
Dr. Ke Gu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • image processing
  • multimodal sensing
  • autonomous systems
  • intelligent vehicles
  • intelligent transportation systems
  • deep learning

Published Papers (3 papers)


Research

17 pages, 1487 KiB  
Article
Robust Long-Term Vehicle Trajectory Prediction Using Link Projection and a Situation-Aware Transformer
by Minsung Kim, Byung Il Kwak, Jong-Uk Hou and Taewoon Kim
Sensors 2024, 24(8), 2398; https://doi.org/10.3390/s24082398 - 9 Apr 2024
Viewed by 503
Abstract
The trajectory prediction of a vehicle emerges as a pivotal component of Intelligent Transportation Systems. On urban roads, where external factors such as intersections and traffic control devices significantly affect driving patterns along with the driver's intrinsic habits, the prediction task becomes much more challenging. Furthermore, long-term forecasting of trajectories accumulates prediction errors, leading to substantially inaccurate predictions that may deviate from the actual road. As a solution to these challenges, we propose a long-term vehicle trajectory prediction method that is robust to error accumulation and prevents off-road predictions. In this study, the Transformer model is utilized to analyze and forecast vehicle trajectories. In addition, we propose an extra encoding network to precisely capture the effect of external factors on the driving pattern by producing an abstract representation of the situation around the driver. To avoid off-road predictions, we propose a post-processing method, called link projection, which projects predictions onto the road geometry. Moreover, to overcome the limitations of Euclidean-distance-based evaluation metrics in assessing the accuracy of an entire trajectory, we propose a new metric called area-between-curves (ABC). It measures the similarity between two trajectories, so the agreement between them can be evaluated effectively. Extensive evaluations are conducted on real-world datasets against widely used methods to demonstrate the effectiveness of the proposed approach. The results show that the proposed approach outperforms conventional deep learning models by up to 65.74% (RMSE), 60.13% (MAE), and 91.45% (ABC).
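As an illustration of the kind of trajectory-similarity measure described above, the sketch below approximates the area enclosed between a predicted and a ground-truth trajectory by joining the two curves into a closed polygon and applying the shoelace formula. This is a minimal, hypothetical reading of an area-between-curves metric, not the authors' exact ABC definition; the function name, the toy trajectories, and the assumption that the curves do not cross are all illustrative.

```python
import numpy as np

def area_between_curves(pred: np.ndarray, gt: np.ndarray) -> float:
    """Approximate the area enclosed between two trajectories given as
    (N, 2) arrays of x/y positions.

    The predicted curve and the reversed ground-truth curve are joined
    into one closed polygon, whose area is computed with the shoelace
    formula. If the curves cross each other, signed contributions
    partially cancel, so treat the result as a rough similarity proxy.
    """
    polygon = np.vstack([pred, gt[::-1]])  # walk out along pred, back along gt
    x, y = polygon[:, 0], polygon[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Toy example: the prediction is offset from the ground truth by 0.3 in y
# over an x-range of 10, so the enclosed area is roughly 0.3 * 10 = 3.0.
t = np.linspace(0.0, 10.0, 50)
ground_truth = np.stack([t, 0.5 * t], axis=1)
prediction = np.stack([t, 0.5 * t + 0.3], axis=1)
print(area_between_curves(prediction, ground_truth))
```

Unlike a pointwise RMSE, an area-based score of this kind stays meaningful even when the predicted points are unevenly spaced along the path, which is presumably why a curve-level comparison is attractive for long-horizon trajectories.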

19 pages, 9633 KiB  
Article
Surround Sensing Technique for Trucks Based on Multi-Features and Improved Yolov5 Algorithm
by Zixian Li, Yongtao Li, Hanyan Li, Liting Deng and Rungang Yan
Sensors 2024, 24(7), 2112; https://doi.org/10.3390/s24072112 - 26 Mar 2024
Viewed by 400
Abstract
Traditional rearview mirrors cannot fully guarantee safety when driving trucks. RGB and infrared images collected by cameras are therefore used for registration and recognition to perceive the surroundings and ensure safe driving. However, the traditional scale-invariant feature transform (SIFT) algorithm suffers from mismatches, and the YOLO algorithm leaves room for improvement in feature extraction. To address these issues, this paper proposes a truck surround-sensing technique based on multiple features and an improved YOLOv5 algorithm. First, the edge corner points and infrared features of the preset target region are extracted, and a feature point set is then generated with the improved SIFT algorithm for registration. Finally, the YOLOv5 algorithm is improved by fusing infrared features and introducing a composite prediction mechanism at the prediction end. The simulation results show that, on average, the image stitching accuracy is improved by 17%, the processing time is reduced by 89%, and the target recognition accuracy is improved by 2.86%. The experimental results show that this method can effectively perceive the surroundings of trucks, accurately identify targets, and reduce both the missed-alarm rate and the false-alarm rate.
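For readers unfamiliar with feature-based RGB-infrared registration, the sketch below shows a generic baseline pipeline using plain OpenCV SIFT, Lowe's ratio test, and RANSAC homography estimation. It does not reproduce the paper's improved SIFT variant or its fusion of edge corners and infrared features; the file names are placeholders, and the 0.75 ratio and 5-pixel RANSAC threshold are common default choices rather than values taken from the paper.

```python
import cv2
import numpy as np

# Placeholder file names for a co-captured RGB / infrared image pair.
rgb = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect SIFT keypoints and descriptors in both modalities.
sift = cv2.SIFT_create()
kp_rgb, des_rgb = sift.detectAndCompute(rgb, None)
kp_ir, des_ir = sift.detectAndCompute(ir, None)

# 2. Match descriptors and keep matches passing Lowe's ratio test, which
#    filters out many of the ambiguous cross-modal correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [pair[0] for pair in matcher.knnMatch(des_rgb, des_ir, k=2)
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

# 3. Estimate a homography with RANSAC and warp the IR image onto the RGB view.
src = np.float32([kp_ir[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_rgb[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
ir_registered = cv2.warpPerspective(ir, H, (rgb.shape[1], rgb.shape[0]))
```

Once the two modalities are aligned in this way, the registered infrared channel can be stacked with the RGB frame and fed to a detector, which is the general idea behind fusing infrared features into a YOLO-style network.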

17 pages, 28670 KiB  
Article
Train Distance Estimation for Virtual Coupling Based on Monocular Vision
by Yang Hao, Tao Tang and Chunhai Gao
Sensors 2024, 24(4), 1179; https://doi.org/10.3390/s24041179 - 11 Feb 2024
Viewed by 615
Abstract
By precisely controlling the distance between two train sets, virtual coupling (VC) enables flexible coupling and decoupling in urban rail transit. However, relying on train-to-train communication to obtain the train distance poses a safety risk in the case of communication malfunctions. In this paper, a distance-estimation framework based on monocular vision is proposed. First, key structure features of the target train are extracted by an object-detection neural network whose strategies, namely an additional detection head in the feature pyramid, the labeling of object neighbor areas, and semantic filtering, improve the detection performance for small objects. Then, an optimization process based on multiple key structure features is used to estimate the distance between the two train sets in VC. To validate and evaluate the proposed framework, experiments were conducted on Beijing Subway Line 11. The results show that, for train sets separated by 20 m to 100 m, the proposed framework achieves distance estimates with an absolute error below 1 m and a relative error below 1.5%, making it a reliable backup for communication-based VC operations.
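Monocular ranging of this kind ultimately rests on projective geometry: if the physical size of a detected structure is known, its apparent size in the image determines the distance. The sketch below shows only this basic single-feature pinhole relation, not the paper's multi-feature optimization; the function name and all numbers are made up for illustration.

```python
def estimate_distance(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
    """Pinhole-camera distance estimate: Z = f * H / h.

    focal_px        camera focal length in pixels (from calibration)
    real_height_m   assumed physical height of the detected structure,
                    e.g. the rear profile of the leading train
    bbox_height_px  height of that structure's detected bounding box in pixels
    """
    return focal_px * real_height_m / bbox_height_px

# Example with made-up numbers: a 3.8 m tall train rear imaged at 120 px
# with a 2000 px focal length gives roughly 63 m of separation.
print(estimate_distance(focal_px=2000.0, real_height_m=3.8, bbox_height_px=120.0))
```

Combining several such features and solving for the distance that best explains all of them, as the paper does, reduces the sensitivity of the estimate to any single noisy detection.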
