Computer Vision Applications for Autonomous Vehicles

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electrical and Autonomous Vehicles".

Deadline for manuscript submissions: closed (15 April 2024) | Viewed by 3007

Special Issue Editors


Guest Editor
School of Electronics Engineering, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
Interests: machine learning; artificial intelligence

Guest Editor
School of Artificial Intelligence, Yong In University, Yongin 17092, Republic of Korea
Interests: multi-sensor-based computer vision; advanced driver assistance systems

Special Issue Information

Dear Colleagues,

Computer vision (CV) methods are extensively used by engineers to address a wide range of practical vision challenges. We invite researchers to share their experimental and theoretical findings on the practical application of CV techniques to autonomous vehicles, such as cars, drones, and robots, across all scientific and engineering domains. Submitted papers should highlight innovative applications of CV in real-world engineering contexts related to autonomous vehicles. There are no limitations on paper length. Electronic files or software detailing calculations or experimental processes that cannot be conventionally published can be provided as supplementary digital content.

The focal points of this Special Issue include, but are not limited to, innovative applications of:

  • Image and video interpretation;
  • Video analysis and captioning;
  • Image retrieval;
  • Image enhancement;
  • Vision-based robotics;
  • Sensor fusion;
  • Multimedia;
  • 3D reconstruction and localization;
  • Object detection and tracking;
  • Event prediction.

Dr. Yuseok Ban
Dr. Kyungjae Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • machine learning
  • computer vision
  • autonomous vehicle
  • image and video understanding

Published Papers (3 papers)

Research

13 pages, 73945 KiB  
Article
Route Positioning System for Campus Shuttle Bus Service Using a Single Camera
by Jhonghyun An
Electronics 2024, 13(11), 2004; https://doi.org/10.3390/electronics13112004 - 21 May 2024
Viewed by 323
Abstract
A route positioning system identifies the route segment currently being traveled between one stop and the next, and is commonly found in public transportation systems, such as shuttle buses, that follow fixed routes. It is especially useful for smaller-scale services, where expensive sensors for location tracking may not be feasible, and in urban areas with tall buildings or densely wooded mountainous regions, where relying solely on GPS can lead to many errors. This paper therefore proposes a cost-effective solution that uses a single camera sensor to accurately determine the location of small-scale transportation services on fixed routes. A single-stage detection network quickly identifies landmark objects and tracks them with a simple algorithm; the detected features are compiled into a "codebook" using the bag-of-visual-words technique. During actual trips, this pre-built codebook is compared with the landmarks the camera sees to determine the route currently being traveled. The approach was tested on a shuttle bus route on the Gachon University campus, which resembles both a downtown area with tall buildings and a wooded mountainous area. The shuttle bus's route was recognized with an overall accuracy of 0.60: areas with distinct features were recognized with an accuracy of 0.99, while stops with simple, nondescript structures reached only 0.29. Applying the SORT algorithm slightly improved the overall accuracy from 0.60 to 0.61. This demonstrates that the proposed method can effectively perform location recognition using only a camera in small shuttle buses.
(This article belongs to the Special Issue Computer Vision Applications for Autonomous Vehicles)
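The codebook matching described in the abstract can be sketched as follows: each route segment is summarized by a histogram of detected landmark classes, and the current segment is the one whose histogram best matches what the camera currently sees. This is a minimal illustration of the bag-of-visual-words idea only; the segment names and landmark labels are invented, and the paper's actual pipeline uses visual features from a detection network rather than class labels.

```python
from collections import Counter
import math

def build_codebook(segment_detections):
    """Map each route segment to a bag-of-visual-words histogram
    of the landmark classes detected along it."""
    return {seg: Counter(labels) for seg, labels in segment_detections.items()}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse histograms."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def locate(codebook, observed_labels):
    """Return the route segment whose codebook histogram best
    matches the landmarks currently seen by the camera."""
    observed = Counter(observed_labels)
    return max(codebook, key=lambda seg: cosine_similarity(codebook[seg], observed))

# Hypothetical example: two segments with distinct landmark mixes.
codebook = build_codebook({
    "library_to_gate": ["building", "building", "sign", "tree"],
    "gate_to_dorm": ["tree", "tree", "tree", "bench"],
})
print(locate(codebook, ["tree", "tree", "bench"]))  # → gate_to_dorm
```

As the abstract's accuracy figures suggest, this scheme works well when segments have distinctive landmark mixes and degrades when stops look alike, since their histograms become nearly identical.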

17 pages, 8829 KiB  
Article
Multiple Moving Vehicles Tracking Algorithm with Attention Mechanism and Motion Model
by Jiajun Gao, Guangjie Han, Hongbo Zhu and Lyuchao Liao
Electronics 2024, 13(1), 242; https://doi.org/10.3390/electronics13010242 - 4 Jan 2024
Viewed by 1507
Abstract
With accelerating urbanization and growing travel demand, road traffic is experiencing rapid growth and increasingly complex spatio-temporal dynamics. Vehicle tracking on roads presents several challenges, including complex scenes with frequent foreground–background transitions, fast and nonlinear vehicle movements, and numerous unavoidable low-score detection boxes. In this paper, we propose AM-Vehicle-Track, following the proven tracking-by-detection (TBD) paradigm. At the detection stage, we introduce the lightweight channel block attention mechanism (LCBAM), which helps the detector concentrate on foreground features with limited computational resources. At the tracking stage, we propose the noise-adaptive extended Kalman filter (NSA-EKF) module, which extracts vehicles' motion information while accounting for the impact of detection confidence on observation noise under nonlinear motion. Additionally, we adopt the Byte data association method to handle unavoidable low-score detection boxes, enabling a secondary association that reduces ID switches. Our approach achieves 42.2 MOTA, 51.2 IDF1, and 364 ID switches on the VisDrone-MOT test set at 72 FPS, demonstrating highly competitive, state-of-the-art tracking performance at high speed.
(This article belongs to the Special Issue Computer Vision Applications for Autonomous Vehicles)
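The core of the noise-adaptive idea the abstract describes is that observation noise should grow as detector confidence shrinks, so low-score boxes pull the track less. The paper's NSA-EKF is a full nonlinear filter; the sketch below shows only the noise-scaling step on a scalar state, with an invented scaling formula and parameter names.

```python
def nsa_update(x, P, z, R_base, confidence):
    """One Kalman measurement update where observation noise is
    inflated for low-confidence detections: R grows as the
    detector's confidence score shrinks (noise-adaptive idea)."""
    R = R_base * (1.0 - confidence + 1e-6)  # adaptive observation noise
    K = P / (P + R)                         # scalar Kalman gain
    x_new = x + K * (z - x)                 # corrected state estimate
    P_new = (1.0 - K) * P                   # reduced uncertainty
    return x_new, P_new

# A confident detection moves the state strongly toward the measurement;
# a low-score box barely moves it.
x_hi, _ = nsa_update(x=0.0, P=1.0, z=10.0, R_base=1.0, confidence=0.9)
x_lo, _ = nsa_update(x=0.0, P=1.0, z=10.0, R_base=1.0, confidence=0.1)
print(x_hi > x_lo)  # → True
```

This pairs naturally with Byte-style secondary association: low-score boxes are still matched to tracks, but their weak influence on the state keeps them from corrupting it.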

13 pages, 3140 KiB  
Article
Prex-Net: Progressive Exploration Network Using Efficient Channel Fusion for Light Field Reconstruction
by Dong-Myung Kim, Young-Suk Yoon, Yuseok Ban and Jae-Won Suh
Electronics 2023, 12(22), 4661; https://doi.org/10.3390/electronics12224661 - 15 Nov 2023
Viewed by 735
Abstract
Light field (LF) reconstruction is a technique for synthesizing views between LF images, and various methods have been proposed to obtain high-quality reconstructed LF images. In this paper, we propose a progressive exploration network using efficient channel fusion for light field reconstruction (Prex-Net), which consists of three parts designed to quickly produce high-quality synthesized LF images. The initial feature extraction module uses 3D convolution to capture deep correlations among multiple LF input images. In the channel fusion module, the extracted feature map passes through successive up- and down-fusion blocks that continuously search for the features required for LF reconstruction; each fusion block gathers pixels across channels via pixel shuffle and applies convolution to fuse the information between channels. Finally, the LF restoration module synthesizes LF images with high angular resolution through simple convolution on the concatenated outputs of the down-fusion blocks. Prex-Net synthesizes views between LF images faster than existing LF restoration methods and achieves good PSNR performance on the synthesized images.
(This article belongs to the Special Issue Computer Vision Applications for Autonomous Vehicles)
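The fusion block's key operation, pixel shuffle, rearranges each group of r² channels into an r×r spatial neighborhood, so a subsequent plain convolution can mix information that originally lived in separate channels. A minimal NumPy sketch of that rearrangement (shapes illustrative; the paper's blocks also apply the fusing convolutions, omitted here):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r): each group
    of r^2 channels becomes an r x r spatial neighborhood, matching the
    usual channel ordering (channel index = c*r^2 + i*r + j)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into groups of r*r
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)

x = np.arange(16 * 4 * 4, dtype=float).reshape(16, 4, 4)
y = pixel_shuffle(x, r=2)
print(y.shape)  # → (4, 8, 8)
```

Pixel unshuffle (the inverse reshape) restores the channel-heavy layout afterwards, which is why the up- and down-fusion blocks can be stacked repeatedly without changing the overall tensor size.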
