Advances in Detection and Tracking Applications for Drones and UAM Systems

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 20 December 2024 | Viewed by 8312

Special Issue Editors


Guest Editor
Department of Industrial Engineering, University of Naples Federico II, 80125 Naples, Italy
Interests: UAS; payloads for UAV; electro-optical sensors; radar systems; ATM and UTM; attitude sensors

Guest Editor
Department of Industrial Engineering, University of Naples Federico II, 80125 Naples, Italy
Interests: unmanned aerial systems; unmanned traffic management; trajectory prediction

Special Issue Information

Dear Colleagues,

Technological advances in Unmanned Aerial Systems enable innovative civil and military applications. Detection and tracking tasks can be performed by integrated on-board systems that apply advanced processing to reliable, high-performance sensors such as radars, electro-optical sensors, lidars, acoustic sensors, and RF analyzers. Traffic management and surveillance require accurate and efficient systems that must be properly integrated into new applications of Unmanned Traffic Management and Urban Air Mobility.

This Special Issue aims to investigate innovative detection and tracking solutions for navigation, traffic management, traffic integration, detect-and-avoid, and surveillance purposes. Artificial Intelligence techniques can be applied for proper data processing in simulated or real scenarios. Advances in on-board data processing for target detection and tracking aim to improve aerial vehicle performance or support advanced payload tasks. Surveillance can also be supported by properly designed ground systems and services, considering the Urban Air Mobility scenario under development.

We are pleased to invite original contributions and reviews. Topics may include (but are not limited to): detection and tracking of targets and incoming traffic for navigation, traffic management, and traffic integration; detection and tracking of Unmanned Aerial Systems for surveillance and detect-and-avoid; Urban Air Mobility applications and services; innovative image processing and sensor fusion; and advanced solutions based on electro-optical, radar, and/or lidar sensors.

Prof. Dr. Giancarlo Rufino
Dr. Claudia Conte
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unmanned aerial systems
  • urban air mobility
  • sense and avoid
  • artificial intelligence
  • sensor fusion
  • tracking

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

20 pages, 9360 KiB  
Article
An All-Time Detection Algorithm for UAV Images in Urban Low Altitude
by Yuzhuo Huang, Jingyi Qu, Haoyu Wang and Jun Yang
Drones 2024, 8(7), 332; https://doi.org/10.3390/drones8070332 - 18 Jul 2024
Viewed by 462
Abstract
With the rapid development of urban air traffic, Unmanned Aerial Vehicles (UAVs) are gradually being widely used in cities. Since UAVs are prohibited over important places in Urban Air Mobility (UAM), such as government sites and airports, it is important to develop air–ground non-cooperative UAV surveillance for air security day and night. In this paper, an all-time UAV detection algorithm based on visible images during the day and infrared images at night is proposed. We construct a UAV dataset for urban visible backgrounds (UAV–visible) and a UAV dataset for urban infrared backgrounds (UAV–infrared). In the daytime, visible images are less accurate for UAV detection in foggy environments; we therefore incorporate a defogging algorithm into the detection network that ensures undistorted image output for UAV detection after defogging. At night, infrared images are characterized by low resolution, unclear object contours, and complex image backgrounds. We integrate attention and the transformation of spatial feature maps into depth feature maps to detect small UAVs in images. The all-time detection algorithm is trained separately on the two datasets, achieving 96.3% and 94.7% mAP50 on the UAV–visible and UAV–infrared datasets and performing real-time object detection at 40.16 FPS and 28.57 FPS, respectively.
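The branch selection at the heart of the all-time scheme can be sketched as a minimal dispatcher. This is only an illustration of the idea described in the abstract; the function name and the fog-visibility threshold are assumptions, not values from the paper:

```python
def select_detector(is_daytime, visibility_km):
    """Choose the detection branch: infrared at night, visible by day,
    with a defogging stage prepended when visibility is poor.

    The 2 km fog threshold is an illustrative assumption.
    """
    if not is_daytime:
        return "infrared-detector"
    if visibility_km < 2.0:  # assumed fog threshold
        return "defog + visible-detector"
    return "visible-detector"
```

In the paper's setup each branch is a separately trained network on its own dataset (UAV–visible or UAV–infrared); the dispatcher only decides which one runs.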

20 pages, 3902 KiB  
Article
Lightweight and Efficient Tiny-Object Detection Based on Improved YOLOv8n for UAV Aerial Images
by Min Yue, Liqiang Zhang, Juan Huang and Haifeng Zhang
Drones 2024, 8(7), 276; https://doi.org/10.3390/drones8070276 - 21 Jun 2024
Viewed by 781
Abstract
The task of detecting multiple tiny objects from diverse perspectives in unmanned aerial vehicles (UAVs) using onboard edge devices is a significant and complex challenge in computer vision. To address this challenge, we propose a lightweight and efficient tiny-object-detection algorithm named LE-YOLO, based on the YOLOv8n architecture. To improve detection performance and optimize model efficiency, we present the LHGNet backbone, a more extensive feature extraction network integrating depth-wise separable convolution and channel shuffle modules. This integration enables a thorough exploration of the inherent features of the network at deeper layers, promoting the fusion of local detail information and channel characteristics. Furthermore, we introduce the LGS bottleneck and the LGSCSP fusion module in the neck, aiming to decrease computational complexity while preserving the detector's accuracy. Additionally, we enhance detection accuracy by modifying the structure and size of the feature maps. These improvements significantly enhance the model's capability to capture tiny objects. The proposed LE-YOLO detector is examined in ablation and comparative experiments on the VisDrone2019 dataset. Compared with YOLOv8n, the proposed LE-YOLO model achieves a 30.0% reduction in parameter count along with a 15.9% increase in mAP(0.5). These comprehensive experiments indicate that our approach can significantly enhance detection accuracy and optimize model efficiency through the organic combination of the suggested enhancements.
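The channel shuffle operation that the LHGNet modules rely on (a ShuffleNet-style reordering that lets grouped convolutions exchange information across groups) can be illustrated in a few lines of pure Python. This is a simplification over a plain list of channels, not the paper's implementation; real code shuffles tensor dimensions:

```python
def channel_shuffle(channels, groups):
    """Reorder channels so information mixes across convolution groups.

    channels: a list, one entry per channel (each entry could be a 2-D map).
    groups:   number of groups; len(channels) must be divisible by it.
    Implements the standard reshape-(g, c/g), transpose, flatten shuffle.
    """
    c = len(channels)
    assert c % groups == 0, "channel count must be divisible by groups"
    per_group = c // groups
    grouped = [channels[g * per_group:(g + 1) * per_group] for g in range(groups)]
    # Interleave: take the i-th channel of every group in turn.
    return [grouped[g][i] for i in range(per_group) for g in range(groups)]

# 6 channels in 2 groups: channels 0-2 came from group 0, 3-5 from group 1.
print(channel_shuffle([0, 1, 2, 3, 4, 5], 2))  # → [0, 3, 1, 4, 2, 5]
```

After the shuffle, each new group contains channels produced by every original group, which is what makes stacked grouped convolutions expressive at low cost.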

19 pages, 746 KiB  
Article
Impact Analysis of Time Synchronization Error in Airborne Target Tracking Using a Heterogeneous Sensor Network
by Seokwon Lee, Zongjian Yuan, Ivan Petrunin and Hyosang Shin
Drones 2024, 8(5), 167; https://doi.org/10.3390/drones8050167 - 23 Apr 2024
Viewed by 1034
Abstract
This paper investigates the influence of time synchronization on sensor fusion and target tracking. As a benchmark, we design a target tracking system based on a track-to-track fusion architecture. Heterogeneous sensors detect targets and transmit measurements through a communication network, while local tracking and track fusion are performed in the fusion center to integrate the measurements from these sensors into a fused track. The time synchronization error is modeled mathematically, with local time biased from the reference clock during the holdover phase. The influence of the time synchronization error on target tracking system components such as local association, filtering, and track fusion is discussed. The results demonstrate that an increase in the time synchronization error leads to deteriorating association and filtering performance. In addition, the results of a simulation study validate the impact of the time synchronization error on the sensor network.
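A back-of-the-envelope sketch shows why holdover drift degrades association and filtering: a timestamp biased by dt displaces a constant-velocity target's measurement by v·dt along track. The linear drift model is a common simplification, and the drift rate and target speed below are illustrative assumptions, not values from the paper:

```python
def holdover_bias(elapsed, offset0=0.0, drift=5e-6):
    """Clock bias during holdover: initial offset plus linear drift (seconds).
    A 5 ppm drift rate is an assumed, illustrative figure."""
    return offset0 + drift * elapsed

def position_error(speed, elapsed, **clock):
    """Along-track measurement error caused by a biased timestamp.
    For a target at constant speed v, a timestamp error dt shifts the
    apparent position by v * dt."""
    return speed * holdover_bias(elapsed, **clock)

# A 250 m/s target: the error grows linearly through the holdover phase,
# eventually exceeding association gates and biasing the filter.
for t in (60.0, 600.0, 3600.0):
    print(f"t={t:6.0f} s  error={position_error(250.0, t):.3f} m")
```

With these assumed numbers the error is 0.075 m after one minute but 4.5 m after an hour of holdover, which is the qualitative behavior the paper's simulation study quantifies.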

20 pages, 3022 KiB  
Article
PVswin-YOLOv8s: UAV-Based Pedestrian and Vehicle Detection for Traffic Management in Smart Cities Using Improved YOLOv8
by Noor Ul Ain Tahir, Zhe Long, Zuping Zhang, Muhammad Asim and Mohammed ELAffendi
Drones 2024, 8(3), 84; https://doi.org/10.3390/drones8030084 - 28 Feb 2024
Cited by 5 | Viewed by 3294
Abstract
In smart cities, effective traffic congestion management hinges on adept pedestrian and vehicle detection. Unmanned Aerial Vehicles (UAVs) offer a solution with mobility, cost-effectiveness, and a wide field of view; yet optimizing recognition models is crucial to surmounting the challenges posed by small and occluded objects. To address these issues, we build on the YOLOv8s model and a Swin Transformer block and introduce the PVswin-YOLOv8s model for UAV-based pedestrian and vehicle detection. First, the backbone network of YOLOv8s incorporates the Swin Transformer model for global feature extraction in small-object detection. Second, to address the challenge of missed detections, we integrate the CBAM into the neck of YOLOv8; both its channel and spatial attention modules are used because of how effectively they extract feature information flowing through the network. Finally, we employ Soft-NMS to improve the accuracy of pedestrian and vehicle detection in occlusion situations; Soft-NMS increases performance and handles overlapping bounding boxes well. The proposed network reduces the fraction of small objects overlooked and enhances model detection performance. Performance comparisons with different YOLO versions (for example, YOLOv3-tiny, YOLOv5, YOLOv6, and YOLOv7), YOLOv8 variants (YOLOv8n, YOLOv8s, YOLOv8m, and YOLOv8l), and classical object detectors (Faster R-CNN, Cascade R-CNN, RetinaNet, and CenterNet) were used to validate the superiority of the proposed PVswin-YOLOv8s model. The efficiency of the PVswin-YOLOv8s model was confirmed by the experimental findings, which showed a 4.8% increase in mean average precision (mAP) compared with YOLOv8s on the VisDrone2019 dataset.
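Soft-NMS, the post-processing step the abstract credits with better occlusion handling, is a well-known algorithm that can be sketched in pure Python. This is the Gaussian-decay variant with commonly used default parameters, not necessarily the exact configuration of the paper:

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of deleting boxes."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    keep = []
    while dets:
        best = dets.pop(0)
        keep.append(best)
        # Decay each remaining score by exp(-IoU^2 / sigma); drop tiny scores.
        decayed = [(b, s * math.exp(-iou(best[0], b) ** 2 / sigma))
                   for b, s in dets]
        dets = sorted([d for d in decayed if d[1] > score_thresh],
                      key=lambda d: -d[1])
    return keep
```

Unlike hard NMS, a box that heavily overlaps a higher-scoring one survives with a decayed score, so a pedestrian partially occluded by another is suppressed less aggressively rather than discarded outright.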

14 pages, 765 KiB  
Article
Towards Real-Time On-Drone Pedestrian Tracking in 4K Inputs
by Chanyoung Oh, Moonsoo Lee and Chaedeok Lim
Drones 2023, 7(10), 623; https://doi.org/10.3390/drones7100623 - 6 Oct 2023
Cited by 1 | Viewed by 1752
Abstract
Over the past several years, significant progress has been made in object tracking, but challenges persist in tracking objects in high-resolution images captured from drones. Such images usually contain very tiny objects, and the movement of the drone causes rapid scene changes. In addition, the computing power of mission computers on drones is often insufficient for real-time deep learning-based object tracking. This paper presents a real-time on-drone pedestrian tracker that takes 4K aerial images as input. The proposed tracker effectively hides the long latency of deep learning-based detection (e.g., YOLO) by exploiting both the CPU and the GPU of the mission computer. We also propose techniques to minimize detection loss in drone-captured images, including tracker-assisted confidence boosting and an ensemble for identity association. In our experiments with real-world inputs captured by drones at a height of 50 m, the proposed method on an NVIDIA Jetson TX2 proves its efficacy by achieving real-time detection and tracking in 4K video streams.
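The latency-hiding idea, decoupling a slow asynchronous detector from a per-frame tracker, follows a generic producer/consumer pattern that can be sketched with threads and non-blocking queues. The timings and function names below are illustrative assumptions, not the paper's implementation:

```python
import queue
import threading
import time

def detector(frames_in, dets_out):
    """Slow detector stub: stands in for GPU inference (e.g., ~50 ms/frame)."""
    while True:
        frame = frames_in.get()
        if frame is None:       # shutdown sentinel
            break
        time.sleep(0.05)        # simulated inference latency
        dets_out.put((frame, f"detections@{frame}"))

def run_pipeline(num_frames=20):
    """Per-frame tracker that never blocks on the detector."""
    frames_in, dets_out = queue.Queue(maxsize=1), queue.Queue()
    threading.Thread(target=detector, args=(frames_in, dets_out),
                     daemon=True).start()
    latest, tracked = None, []
    for frame in range(num_frames):
        try:                    # feed the detector only when it can accept work
            frames_in.put_nowait(frame)
        except queue.Full:
            pass
        try:                    # pick up any finished detections
            latest = dets_out.get_nowait()
        except queue.Empty:
            pass
        # Fast CPU tracker runs every frame using the newest detections,
        # propagating them between detector updates.
        tracked.append((frame, latest[0] if latest else None))
    frames_in.put(None)
    return tracked
```

The tracker emits a result for every frame while detections refresh only as fast as the detector allows, which is how a real-time rate is sustained despite a much slower deep network.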
