
Advances in Automated Driving: Sensing and Control

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 June 2024)

Special Issue Editors


Dr. Lin Bai
Guest Editor
Black Sesame Technologies Inc., San Jose, CA 95134, USA
Interests: deep learning; computer vision; autonomous driving; FPGA

Prof. Dr. Fenghua Zhu
Guest Editor
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Interests: computational experiments; urban rail transit; intelligent transportation systems

Special Issue Information

Dear colleagues,

Perception systems play a key role in autonomous driving (AD), and they incorporate a wide variety of sensors. Cameras, radars, and lidars are the most common; radars and cameras are the preferred option in industry because they avoid unaesthetic effects on a car's appearance. Cameras in particular have undergone a small revolution thanks to the application of convolutional neural networks to image processing.

Current scene-understanding technologies and methodologies depend on multiple sensor systems. GPS, IMU, cameras, radars, and lidars are the most common, and they rely on highly complex and sophisticated algorithms, including artificial intelligence. These sensors are used for localization (visual odometry, lidar odometry, 3D maps, map matching, etc.), perception (scene understanding, traffic sign detection, drivable space detection, obstacle avoidance, etc.), trajectory planning, and other applications.
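As a concrete illustration of the camera-based perception tasks listed above, the following minimal Python sketch runs a pretrained CNN detector on a single camera frame. The model choice (torchvision's Faster R-CNN) and the 0.5 confidence threshold are illustrative assumptions, not methods drawn from this Special Issue.

```python
# Minimal sketch: CNN-based obstacle detection on one camera frame.
# Model and threshold are illustrative assumptions, not from this issue.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# One RGB camera frame, values in [0, 1], shape (C, H, W).
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

# Keep confident detections only; 0.5 is an arbitrary example threshold.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```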

The aim of this Special Issue is to offer an insightful perspective on the latest work in these fields and to give the reader a clear picture of the advances on the horizon. Topics of interest include, but are not limited to, the following:

  • Computer vision and image processing;
  • Lidar and 3D sensors;
  • Radar and other proximity sensors;
  • Intelligent network vehicles;
  • Vehicle intelligent detection algorithms and control;
  • Automatic vehicle trajectory planning and control;
  • Infrastructure ITS applications;
  • Advanced driver assistance systems onboard vehicles;
  • Self-driving car perception and navigation systems.

Dr. Lin Bai
Prof. Dr. Fenghua Zhu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous driving
  • computer vision
  • intelligent transportation system
  • vehicle localization
  • lidar and 3D sensors
  • advanced driver assistance systems (ADASs)

Published Papers (2 papers)


Research

14 pages, 2943 KiB  
Article
Security in Transformer Visual Trackers: A Case Study on the Adversarial Robustness of Two Models
by Peng Ye, Yuanfang Chen, Sihang Ma, Feng Xue, Noel Crespi, Xiaohan Chen and Xing Fang
Sensors 2024, 24(14), 4761; https://doi.org/10.3390/s24144761 - 22 Jul 2024
Abstract
Visual object tracking is an important technology in camera-based sensor networks and has a wide range of applications in automated driving systems. A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data; it has been widely applied in the field of visual tracking. Unfortunately, the security of the transformer model is unclear, which exposes transformer-based applications to security threats. In this work, the security of the transformer model was investigated for an important component of autonomous driving, i.e., visual tracking. Such deep-learning-based visual tracking is vulnerable to adversarial attacks, so adversarial attacks were implemented as the security threat in this investigation. First, adversarial examples were generated on top of video sequences to degrade tracking performance, taking frame-by-frame temporal motion into consideration when generating perturbations over the depicted tracking results. Then, the influence of the perturbations on performance was sequentially investigated and analyzed. Finally, extensive experiments on the OTB100, VOT2018, and GOT-10k data sets demonstrated that the generated adversarial examples effectively degraded the performance of transformer-based visual tracking. White-box attacks showed the highest effectiveness, with attack success rates exceeding 90% against transformer-based trackers.
(This article belongs to the Special Issue Advances in Automated Driving: Sensing and Control)
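To make the attack setting in the abstract concrete, here is a minimal, hedged sketch of a single-frame FGSM-style perturbation against a tracking objective. The `loss_fn` callable standing in for the tracker's loss is a hypothetical placeholder, and the paper's actual attacks additionally exploit frame-by-frame temporal motion, which this sketch omits.

```python
# Generic single-frame FGSM-style attack sketch; the tracker loss is a
# hypothetical stand-in, not the paper's exact attack formulation.
import torch

def fgsm_frame(frame: torch.Tensor, loss_fn, epsilon: float = 8 / 255) -> torch.Tensor:
    """Perturb one video frame (C, H, W, values in [0, 1]) to raise the tracking loss."""
    frame = frame.clone().requires_grad_(True)
    loss = loss_fn(frame)  # hypothetical tracker objective, e.g., negative box IoU
    loss.backward()
    adv = frame + epsilon * frame.grad.sign()  # one signed-gradient ascent step
    return adv.clamp(0.0, 1.0).detach()        # keep the result a valid image
```

A white-box attacker with gradient access can apply this step to each frame; iterated (PGD-style) and temporally coherent variants are typically stronger, consistent with the high white-box success rates reported above.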
14 pages, 1877 KiB  
Article
Multi-Resolution Learning and Semantic Edge Enhancement for Super-Resolution Semantic Segmentation of Urban Scene Images
by Ruijun Shu and Shengjie Zhao
Sensors 2024, 24(14), 4522; https://doi.org/10.3390/s24144522 - 12 Jul 2024
Abstract
Super-resolution semantic segmentation (SRSS) aims to obtain high-resolution semantic segmentation results from resolution-reduced input images. SRSS can significantly reduce computational cost and enables efficient, high-resolution semantic segmentation on mobile devices with limited resources. Some existing methods require modifications of the original semantic segmentation network structure or add complicated processing modules, which limits the flexibility of actual deployment. Furthermore, the lack of detailed information in the low-resolution input image renders existing methods susceptible to misdetection at semantic edges. To address these problems, we propose a simple but effective framework, multi-resolution learning and semantic edge enhancement-based super-resolution semantic segmentation (MS-SRSS), which can be applied to any existing encoder-decoder-based semantic segmentation network. Specifically, a multi-resolution learning mechanism (MRL) is proposed that improves the feature extraction ability of the segmentation network's feature encoder. Furthermore, we introduce a semantic edge enhancement loss (SEE) to alleviate false detections at semantic edges. We conduct extensive experiments on three challenging benchmarks, Cityscapes, Pascal Context, and Pascal VOC 2012, to verify the effectiveness of the proposed MS-SRSS method. The experimental results show that, compared with existing methods, our method achieves new state-of-the-art semantic segmentation performance.
(This article belongs to the Special Issue Advances in Automated Driving: Sensing and Control)
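The SEE idea of emphasizing boundary pixels can be illustrated with an edge-weighted cross-entropy loss. The 4-neighbour edge mask and the weight of 2.0 below are illustrative assumptions, not the paper's exact SEE formulation.

```python
# Hedged sketch of a semantic-edge-enhancement-style loss: pixels near
# class boundaries get a larger weight in the cross-entropy objective.
import torch
import torch.nn.functional as F

def edge_weighted_ce(logits: torch.Tensor, target: torch.Tensor,
                     edge_weight: float = 2.0) -> torch.Tensor:
    """logits: (N, C, H, W) raw scores; target: (N, H, W) integer class labels."""
    # Mark a pixel as a semantic edge if any 4-neighbour has a different label.
    t = target.float().unsqueeze(1)                    # (N, 1, H, W)
    pad = F.pad(t, (1, 1, 1, 1), mode="replicate")     # (N, 1, H+2, W+2)
    neighbours = torch.cat(
        [pad[:, :, 1:-1, :-2], pad[:, :, 1:-1, 2:],    # left, right
         pad[:, :, :-2, 1:-1], pad[:, :, 2:, 1:-1]],   # up, down
        dim=1)                                         # (N, 4, H, W)
    edges = (neighbours != t).any(dim=1)               # (N, H, W) bool edge mask

    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    weights = torch.ones_like(per_pixel)
    weights[edges] = edge_weight                       # up-weight boundary pixels
    return (weights * per_pixel).mean()
```

Because the extra term only re-weights a standard loss, a scheme like this can be attached to any encoder-decoder segmentation network without changing its architecture, which matches the deployment flexibility the abstract emphasizes.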