
Autonomous Vehicles and Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 10 December 2024 | Viewed by 10789

Special Issue Editors


Prof. Dr. Nan Ma
Guest Editor
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Interests: interactive cognition; machine vision; intelligent driving; knowledge discovery and intelligent systems

Dr. Taohong Zhang
Guest Editor
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: computer modeling and simulation calculation; artificial intelligence; knowledge engineering; image processing

Dr. Yang Yang
Guest Editor
School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
Interests: computer vision; medical image analysis; action recognition; intelligent robot collaboration

Special Issue Information

Dear Colleagues,

The automobile industry is being transformed by disruptions along three dimensions: from the internal combustion engine to the electric powertrain, from the human driver to autonomous driving, and from the ownership business model to mobility as a service. These transformations will not only reshape the automobile industry in the coming decade, opening trillion-dollar new market opportunities, but also offer humanity a better future, with cleaner energy consumption, cheaper and safer transportation services, and more efficient use of urban infrastructure.

However, fatal accidents caused by immature and unreliable software have weakened trust in these systems, and we must therefore improve their safety, security and reliability. The IEEE International Symposium on Autonomous Vehicle Software (AVS) creates a unique venue for researchers, engineers, industry players and policy makers to present the latest advances and innovations in theoretical work, open-source projects, software applications and regulation of autonomous vehicle software.

The IEEE International Symposium on Autonomous Vehicle Software (AVS2023) aims to bring together researchers and scientists from artificial intelligence and robotics, as well as from various application areas, to discuss problems and solutions, identify new issues, and shape future research directions.

This Special Issue will present work on autonomous vehicle software and robot perception and interaction, covering areas such as autonomous vehicle software vulnerability assessment, risk analysis, attack and threat models, visual understanding, machine vision, intelligent interaction and other related topics.

We invite you and your colleagues to submit a contribution in the form of an original scientific research article to this Special Issue. We encourage your submissions, thank presenters and speakers in advance for their attendance, and look forward to a stimulating exchange of ideas.

The AVS2023 conference will be held in Tokyo, Japan, from 10 to 11 August 2023.

Prof. Dr. Nan Ma
Dr. Taohong Zhang
Dr. Yang Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous vehicle
  • intelligent interaction
  • self-driving environment perception
  • machine vision
  • visual understanding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (8 papers)


Research

25 pages, 23704 KiB  
Article
PE-SLAM: A Modified Simultaneous Localization and Mapping System Based on Particle Swarm Optimization and Epipolar Constraints
by Cuiming Li, Zhengyu Shang, Jinxin Wang, Wancai Niu and Ke Yang
Appl. Sci. 2024, 14(16), 7097; https://doi.org/10.3390/app14167097 - 13 Aug 2024
Viewed by 797
Abstract
Due to various typical unstructured factors in the environment of photovoltaic power stations, such as high feature similarity, weak textures, and simple structures, the motion model of the ORB-SLAM2 algorithm performs poorly, leading to a decline in tracking accuracy. To address this issue, we propose PE-SLAM, which improves the ORB-SLAM2 algorithm's motion model by incorporating the particle swarm optimization (PSO) algorithm combined with the epipolar constraint to eliminate mismatches. First, a new mutation strategy is proposed to introduce perturbations to the pbest (personal best value) during the late convergence stage of the PSO algorithm, thereby preventing the PSO algorithm from falling into local optima. Then, the improved PSO algorithm is used to solve the fundamental matrix between two images based on the feature matching relationships obtained from the motion model. Finally, the epipolar constraint is applied using the computed fundamental matrix to eliminate incorrect matches produced by the motion model, thereby enhancing the tracking accuracy and robustness of the ORB-SLAM2 algorithm in unstructured photovoltaic power station scenarios. In feature matching experiments, compared to the ORB algorithm and the ORB+HAMMING algorithm, the ORB+PE-match algorithm achieved average accuracy improvements of 19.5%, 14.0%, and 6.0% in unstructured environments, respectively, with better recall rates. In the trajectory experiments on the TUM dataset, PE-SLAM reduced the average absolute trajectory error relative to ORB-SLAM2 by 29.1% and the average relative pose error by 27.0%. In the photovoltaic power station mapping experiment, the constructed dense point cloud map is complete and exhibits little overlap, indicating that PE-SLAM largely overcomes the unstructured factors of such scenes and is suitable for deployment in them.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
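
The epipolar test at the core of PE-SLAM's mismatch rejection can be sketched as follows: given a fundamental matrix F, however estimated (the paper solves it with an improved PSO), matches are kept only if their Sampson distance under F is small. The function and threshold below are an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def epipolar_inliers(pts1, pts2, F, thresh_px=1.0):
    """Keep matches consistent with the epipolar constraint x2^T F x1 = 0.

    pts1, pts2: (N, 2) matched pixel coordinates in frames 1 and 2.
    F: 3x3 fundamental matrix (e.g., from a PSO-based or 8-point solver).
    Returns a boolean mask of matches within thresh_px Sampson distance.
    """
    n = len(pts1)
    x1 = np.hstack([pts1, np.ones((n, 1))])   # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((n, 1))])
    l2 = x1 @ F.T                             # epipolar lines in image 2
    l1 = x2 @ F                               # epipolar lines in image 1
    r = np.sum(x2 * l2, axis=1)               # algebraic residual x2^T F x1
    denom = l2[:, 0]**2 + l2[:, 1]**2 + l1[:, 0]**2 + l1[:, 1]**2
    sampson = r**2 / denom                    # first-order geometric error
    return sampson < thresh_px**2
```

Applying such a mask to the motion model's matches before pose estimation is what removes the mismatches that degrade tracking.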

18 pages, 3577 KiB  
Article
RL-Based Sim2Real Enhancements for Autonomous Beach-Cleaning Agents
by Francisco Quiroga, Gabriel Hermosilla, German Varas, Francisco Alonso and Karla Schröder
Appl. Sci. 2024, 14(11), 4602; https://doi.org/10.3390/app14114602 - 27 May 2024
Viewed by 1242
Abstract
This paper explores the application of Deep Reinforcement Learning (DRL) and Sim2Real strategies to enhance the autonomy of beach-cleaning robots. Experiments demonstrate that DRL agents, initially refined in simulations, effectively transfer their navigation skills to real-world scenarios, achieving precise and efficient operation in complex natural environments. This method provides a scalable and effective solution for beach conservation, establishing a significant precedent for the use of autonomous robots in environmental management. The key advancements include the ability of robots to adhere to predefined routes and dynamically avoid obstacles. Additionally, a newly developed platform validates the Sim2Real strategy, proving its capability to bridge the gap between simulated training and practical application, thus offering a robust methodology for addressing real-life environmental challenges.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)

21 pages, 7158 KiB  
Article
Exploring High-Order Skeleton Correlations with Physical and Non-Physical Connection for Action Recognition
by Cheng Wang, Nan Ma and Zhixuan Wu
Appl. Sci. 2024, 14(9), 3832; https://doi.org/10.3390/app14093832 - 30 Apr 2024
Viewed by 790
Abstract
Hypergraphs have received widespread attention in modeling complex data correlations due to their superior performance. In recent years, some researchers have used hypergraph structures to characterize complex non-pairwise relations among human skeleton joints and to model higher-order correlations of the skeleton. However, traditional methods that construct hypergraphs from physical connections ignore the dependencies among non-physically connected joints or bones, and they struggle to model correlations among joints or bones that are strongly coupled during an action yet physically distant in the skeleton. To address these issues, we propose a skeleton-based action recognition method using hypergraph learning over skeleton correlations, which explores the contributions of physically and non-physically connected skeleton information to accurate action recognition. Specifically, spatio-temporal correlation modeling is performed both on the natural connections inherent in humans (physical connections) and on joints or bones that are strongly dependent but not directly connected (non-physical connections) during human actions. To better learn the hypergraph structure, we construct a spatio-temporal hypergraph neural network to extract the higher-order correlations of the human skeleton. In addition, we use an attention mechanism to compute attention weights among different hypergraph features and adaptively fuse their rich feature information. Extensive experiments conducted on two datasets, NTU-RGB+D 60 and Kinetics-Skeleton, show that, compared with state-of-the-art skeleton-based methods, our method achieves superior performance, providing more accurate environment perception and action analysis for the development of embodied intelligence.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
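
As background for the hypergraph modeling described above, the toy sketch below applies a single hypergraph convolution in the commonly used normalized form X' = Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Θ to a five-joint skeleton with one physical and one non-physical hyperedge. The joint set, hyperedges, and dimensions are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Incidence matrix H: rows = joints, columns = hyperedges.
H = np.array([[1, 0],   # joint 0 ─┐
              [1, 0],   # joint 1  ├ hyperedge 0: a physically connected chain
              [1, 1],   # joint 2 ─┘ (also in edge 1)
              [0, 1],   # joint 3 ─┐ hyperedge 1: non-physically connected but
              [0, 1]],  # joint 4 ─┘ co-moving joints (e.g., the two hands)
             dtype=float)

w = np.array([1.0, 1.0])                      # learnable hyperedge weights
W = np.diag(w)
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w))   # vertex degree normalization
De_inv = np.diag(1.0 / H.sum(axis=0))         # hyperedge degree normalization

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))               # per-joint input features
Theta = rng.standard_normal((3, 4))           # learnable projection

X_out = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta
print(X_out.shape)                            # (5, 4)
```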

16 pages, 2152 KiB  
Article
STA-Net: A Spatial–Temporal Joint Attention Network for Driver Maneuver Recognition, Based on In-Cabin and Driving Scene Monitoring
by Bin He, Ningmei Yu, Zhiyong Wang and Xudong Chen
Appl. Sci. 2024, 14(6), 2460; https://doi.org/10.3390/app14062460 - 14 Mar 2024
Viewed by 1030
Abstract
Next-generation advanced driver-assistance systems (ADASs) are a promising direction for intelligent transportation systems. To achieve intelligent security monitoring, it is imperative that vehicles possess the ability to accurately comprehend driver maneuvers amidst diverse driver behaviors and complex driving scenarios. Existing CNN-based and transformer-based driver maneuver recognition methods face challenges in effectively capturing global and local features across temporal and spatial dimensions. This paper proposes a Spatial–Temporal Joint Attention Network (STA-Net) to realize highly efficient temporal and spatial feature extraction in driver maneuver recognition. First, we introduce a two-stream architecture for concurrent analysis of in-cabin driver behaviors and out-of-cabin environmental information. Second, we propose a Multi-Scale Transposed Attention (MSTA) module and a Multi-Scale Feedforward Network (MSFN) to extract features at multiple scales, addressing receptive field inadequacies and combining high-level and low-level information. Third, to address the information redundancy in multi-scale features, we propose a Cross-Spatial Attention Module (CSAM) and a Multi-Scale Cross-Spatial Fusion Module (MCFM) to select essential features. Additionally, we introduce an asymmetric loss function to effectively tackle the issue of sample imbalance across diverse categories of driving maneuvers. The proposed method achieves a remarkable accuracy of 90.97% and an F1 score of 89.37% on the Brain4Cars dataset, surpassing the compared methods. These results confirm that our approach effectively enhances driver maneuver recognition.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
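
The abstract does not state STA-Net's exact asymmetric loss, but the behaviour it invokes — damping the contribution of abundant easy negatives more than positives to counter class imbalance — matches the asymmetric focal loss familiar from the multi-label classification literature. A minimal sketch of that form, with illustrative hyperparameters:

```python
import numpy as np

def asymmetric_focal_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Asymmetric focal loss over per-class probabilities p and binary
    targets y. Negatives are focused harder (gamma_neg > gamma_pos) and
    shifted by a small margin, so easy negatives contribute little."""
    p = np.clip(p, 1e-8, 1.0 - 1e-8)
    p_neg = np.clip(p - margin, 0.0, 1.0 - 1e-8)    # probability shifting
    loss_pos = y * (1.0 - p) ** gamma_pos * np.log(p)
    loss_neg = (1.0 - y) * p_neg ** gamma_neg * np.log(1.0 - p_neg)
    return -(loss_pos + loss_neg).mean()

# Example: one positive class and two (easy) negatives.
p = np.array([0.9, 0.2, 0.7])
y = np.array([1.0, 0.0, 0.0])
print(asymmetric_focal_loss(p, y))
```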

18 pages, 1329 KiB  
Article
Adaptive Kalman Filter for Real-Time Visual Object Tracking Based on Autocovariance Least Square Estimation
by Jiahong Li, Xinkai Xu, Zhuoying Jiang and Beiyan Jiang
Appl. Sci. 2024, 14(3), 1045; https://doi.org/10.3390/app14031045 - 25 Jan 2024
Viewed by 1481
Abstract
Real-time visual object tracking (VOT) may suffer from performance degradation and even divergence owing to inaccurate noise statistics, typically engendered by non-stationary video sequences or alterations in the tracked object. This paper presents a novel adaptive Kalman filter (AKF) algorithm, termed AKF-ALS, based on the autocovariance least squares (ALS) estimation methodology, to improve the accuracy and robustness of VOT. The AKF-ALS algorithm involves object detection via an adaptive thresholding-based background subtraction technique and object tracking through real-time state estimation via the Kalman filter (KF), with noise covariance estimation using the ALS method. The proposed algorithm offers a robust and efficient way to adapt to system model mismatch or invalid offline calibration, significantly improving the state estimation accuracy in VOT. The computational complexity of the AKF-ALS algorithm is derived, and a numerical analysis is conducted to show its real-time efficiency. Experimental validations on tracking the centroid of a moving ball subjected to projectile motion, free-fall bouncing motion, and back-and-forth linear motion reveal that the AKF-ALS algorithm outperforms a standard KF with fixed noise statistics.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
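
For orientation, here is the fixed-covariance baseline the paper improves on: a constant-velocity Kalman filter tracking a 2D centroid. AKF-ALS would additionally re-estimate Q and R online from innovation autocovariances; that ALS step is not reproduced here, and all numeric values below are illustrative.

```python
import numpy as np

dt = 1.0 / 30.0                               # frame interval (illustrative)
F = np.array([[1, 0, dt, 0],                  # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                   # only the centroid is measured
              [0, 1, 0, 0]], dtype=float)
Q = 1e-2 * np.eye(4)                          # process noise; ALS would estimate this
R = 4.0 * np.eye(2)                           # measurement noise; ALS would estimate this

x = np.zeros(4)                               # state estimate
P = np.eye(4)                                 # estimate covariance

def kf_step(z):
    """One predict/update cycle for a centroid measurement z = [u, v]."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # correct with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                              # filtered centroid

print(kf_step(np.array([320.0, 240.0])))
```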

21 pages, 5764 KiB  
Article
Self-Learning Robot Autonomous Navigation with Deep Reinforcement Learning Techniques
by Borja Pintos Gómez de las Heras, Rafael Martínez-Tomás and José Manuel Cuadra Troncoso
Appl. Sci. 2024, 14(1), 366; https://doi.org/10.3390/app14010366 - 30 Dec 2023
Cited by 1 | Viewed by 1510
Abstract
Complex and high-computational-cost algorithms are usually the state-of-the-art solution for autonomous driving cases in which non-holonomic robots must be controlled in scenarios with spatial restrictions and interaction with dynamic obstacles while fulfilling, at all times, safety, comfort, and legal requirements. These highly complex software solutions must cover the high variability of use cases that might appear in traffic conditions, especially in scenarios with dynamic obstacles. Reinforcement learning algorithms are seen as a powerful tool in autonomous driving scenarios, since the complexity of the algorithm is learned automatically by trial and error with the help of simple reward functions. This paper proposes a methodology to properly define simple reward functions and automatically arrive at a complex and successful autonomous driving policy. The proposed methodology has no motion planning module, so computational requirements can remain limited, as in the reactive robotics paradigm. Reactions are learned through the maximization of the cumulative reward obtained during the learning process. Since the motion is based on the cumulative reward, the proposed algorithm is not bound to any embedded model of the robot and is not affected by the uncertainties of such models or estimators, making it possible to generate trajectories that respect non-holonomic constraints. This paper explains the proposed methodology and discusses the setup of experiments and the results for the validation of the methodology in scenarios with dynamic obstacles. A comparison between the reinforcement learning algorithm and state-of-the-art approaches is also carried out to highlight how the proposed methodology outperforms state-of-the-art algorithms.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
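
To make the idea of "simple reward functions" concrete, the sketch below combines additive progress, comfort, and safety terms of the kind such methodologies typically use. The specific terms and weights are illustrative assumptions; the abstract does not specify the authors' reward design.

```python
def reward(prev_dist, dist, collision, lateral_acc,
           w_progress=1.0, w_comfort=0.1, collision_penalty=100.0):
    """Illustrative shaped reward: additive progress, comfort, and safety
    terms. The authors' actual terms and weights are placeholders here."""
    r = w_progress * (prev_dist - dist)       # progress toward the goal
    r -= w_comfort * abs(lateral_acc)         # comfort: penalize harsh motion
    if collision:
        r -= collision_penalty                # safety: dominant penalty
    return r

print(reward(prev_dist=5.0, dist=4.6, collision=False, lateral_acc=0.8))
```

Because the policy maximizes the cumulative sum of such terms, complex avoidance behavior can emerge without any explicit motion planner.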

24 pages, 13722 KiB  
Article
Advancing Image Object Detection: Enhanced Feature Pyramid Network and Gradient Density Loss for Improved Performance
by Ying Wang, Qinghui Wang, Ruirui Zou, Falin Wen, Fenglin Liu, Yihang Zhang, Shaoyi Du and Wei Zeng
Appl. Sci. 2023, 13(22), 12174; https://doi.org/10.3390/app132212174 - 9 Nov 2023
Cited by 1 | Viewed by 1603
Abstract
In the era of artificial intelligence, the significance of images and videos as intuitive conveyors of information cannot be overstated. Computer vision techniques rooted in deep learning have revolutionized our ability to autonomously and accurately identify objects within visual media, making them a focal point of contemporary research. This study addresses the pivotal role of image object detection, particularly in the contexts of autonomous driving and security surveillance, by presenting an in-depth exploration of this field with a focus on enhancing the feature pyramid network. One of the key challenges in existing object detection methodologies lies in mitigating information loss caused by multi-scale feature fusion. To tackle this issue, we propose the enhanced feature pyramid, which adeptly amalgamates features extracted across different scales. This strategic enhancement effectively curbs information attrition across various layers, thereby strengthening the feature extraction capabilities of the foundational network. Furthermore, we confront the issue of excessive classification loss in image object detection tasks by introducing the gradient density loss function, designed to mitigate classification discrepancies. Empirical results unequivocally demonstrate the efficacy of our approach in enhancing the detection of multi-scale objects within images. When evaluated across benchmark datasets, including MS COCO 2017, MS COCO 2014, Pascal VOC 2007, and Pascal VOC 2012, our method achieves impressive average precision scores of 39.4%, 42.0%, 51.5%, and 49.9%, respectively. This performance clearly outperforms alternative state-of-the-art methods in the field. This research not only contributes to the evolving landscape of computer vision and object detection but also has practical implications for a wide range of applications, aligning with the transformative trends in the automotive industry and security technologies.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
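
For context, the sketch below implements the vanilla FPN top-down fusion (1x1 lateral projection, 2x upsampling, elementwise addition) whose multi-scale information loss the proposed enhanced pyramid targets; the enhancement itself is not reproduced here, and all shapes and weights are illustrative.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral(x, w):
    """1x1 convolution as a channel projection: (Cin, H, W) -> (Cout, H, W)."""
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

rng = np.random.default_rng(0)
c3 = rng.standard_normal((256, 64, 64))       # backbone stages, fine to coarse
c4 = rng.standard_normal((512, 32, 32))
c5 = rng.standard_normal((1024, 16, 16))
w3 = 0.01 * rng.standard_normal((256, 256))   # lateral 1x1 conv weights
w4 = 0.01 * rng.standard_normal((256, 512))
w5 = 0.01 * rng.standard_normal((256, 1024))

p5 = lateral(c5, w5)                          # top of the pyramid
p4 = lateral(c4, w4) + upsample2x(p5)         # fuse coarse context downward
p3 = lateral(c3, w3) + upsample2x(p4)
print(p3.shape, p4.shape, p5.shape)           # (256, 64, 64) ... (256, 16, 16)
```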

18 pages, 6961 KiB  
Article
Visual Odometry of a Low-Profile Pallet Robot Based on Ortho-Rectified Ground Plane Image from Fisheye Camera
by Soon-Yong Park, Ung-Gyo Lee and Seung-Hae Baek
Appl. Sci. 2023, 13(16), 9095; https://doi.org/10.3390/app13169095 - 9 Aug 2023
Viewed by 1204
Abstract
This study presents a visual-only odometry technique for a low-profile pallet robot using image feature tracking in ground plane images generated from a fisheye camera. The fisheye camera is commonly used in many robot vision applications because it provides a large field of view (FoV) around the robot. However, because of the large radial distortion, the fisheye image is generally converted to a pinhole image for visual feature tracking or matching. Although the radial distortion can be eliminated via image undistortion with the lens calibration parameters, this causes several side effects, such as degraded image resolution and a significant reduction in the FoV. In this paper, instead of using the pinhole model, we propose to generate a ground plane image (GPI) from the fisheye image. The GPI is a virtual top-view image that contains only the ground plane in front of the robot. First, the original fisheye image is projected to several virtual pinhole images to generate a cubemap. Second, the front and bottom faces of the cubemap are projected to a GPI. Third, the GPI is homographically transformed again to further reduce image distortion. As a result, an accurate ortho-rectified ground plane image is obtained from the virtual top-view camera. For visual odometry using the ortho-rectified GPI, 2D motion vectors are obtained using feature extraction and tracking between the previous and current frames of the GPI. A scaled motion vector then serves as the measurement of a virtual wheel encoder, from which we estimate the velocity and steering angle of the virtual wheel. Finally, we estimate the pose of the mobile robot by applying a kinematic model.
(This article belongs to the Special Issue Autonomous Vehicles and Robotics)
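
The final rectification step can be pictured as a single perspective warp: once four image points of a ground rectangle with known metric size are located in the rectified view, one homography produces the ortho-rectified GPI. The OpenCV sketch below uses placeholder correspondences and file names, not the paper's calibration.

```python
import cv2
import numpy as np

# Placeholder input: a cubemap face already rendered from the fisheye image.
img = cv2.imread("front_bottom_face.png")

# Four image points of a ground rectangle and its metric top-view layout
# (here 1 px = 2 mm, so 500 x 400 px = 1.0 x 0.8 m). Values are placeholders.
src = np.float32([[420, 700], [860, 700],
                  [980, 980], [300, 980]])
dst = np.float32([[0, 0], [500, 0],
                  [500, 400], [0, 400]])

Hmat = cv2.getPerspectiveTransform(src, dst)      # 3x3 ground-plane homography
gpi = cv2.warpPerspective(img, Hmat, (500, 400))  # ortho-rectified GPI

# Feature tracks between consecutive GPIs are metric ground-plane motion
# vectors, usable as virtual wheel-encoder measurements for odometry.
```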
