Recent Advances of Targeted Observation by Radar/Optical Sensors and UAS

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (15 February 2024) | Viewed by 6819

Special Issue Editors


Guest Editor
School of Microelectronics, Southern University of Science and Technology, Shenzhen 518055, China
Interests: high-frequency, high-speed integrated circuits

Guest Editor
School of Microelectronics, Southern University of Science and Technology, Shenzhen 518055, China
Interests: unmanned aerial vehicles; precision agriculture; deep learning; remote sensing

Guest Editor
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
Interests: applications of drones; target localization; tracking; optimal sensor placement; intelligent control

Special Issue Information

Dear Colleagues,

In recent years, thanks to advances in front-end radar and optical sensor technologies, Unmanned Aerial Systems (UAS), also known as drones, have been accelerating the transformation of sectors such as industry and agriculture. This can be attributed to two factors: on the one hand, as the eyes of a UAS, radar or optical sensors guide its autonomous and safe flight; on the other hand, combining radar or optical sensors with UAS enables targeted observation. In addition, intelligent interpretation of back-end data based on deep neural networks is expected to further advance UAS and expand their fields of application. However, many challenges remain in deeply fusing radar or optical sensors, deep learning algorithms, and UAS to efficiently extract information about targets of interest.

The goal of this Special Issue is to collect papers (original research articles and review papers) that give insights into intelligent targeted observation by radar/optical sensors and UAS. The Special Issue focuses on radar or optical target observation from the drone's perspective, with no restriction on the field of application: research results on radar/optical sensors and UAVs in precision agriculture, medical care, the Internet of Things, logistics, smart grids, emergency rescue, wildlife protection, etc., are all welcome.

This Special Issue welcomes manuscripts that address the following themes:

  • Design and development of UAS-borne radar/optical sensors;
  • Signal/image processing;
  • Integration of high-performance sensors and UAV systems;
  • Radar/optical-based remote sensing;
  • Information interpretation based on deep learning;
  • Intelligent environment perception and autonomous obstacle avoidance for UAS;
  • Applications of radar/optical sensors and UAS in various fields.

We look forward to receiving your original research articles and reviews.

Prof. Dr. Xiaoguang Liu
Dr. Dashuai Wang
Dr. Sheng Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAS/UAV/drone
  • radars/optical sensors
  • targeted observation
  • signal processing
  • 2D/3D imaging
  • remote sensing
  • deep learning
  • recognition, detection, localization, tracking and navigation

Published Papers (4 papers)


Research

26 pages, 2043 KiB  
Article
Towards mmWave Altimetry for UAS: Exploring the Potential of 77 GHz Automotive Radars
by Maaz Ali Awan, Yaser Dalveren, Ali Kara and Mohammad Derawi
Drones 2024, 8(3), 94; https://doi.org/10.3390/drones8030094 - 11 Mar 2024
Viewed by 1278
Abstract
Precise altitude data are indispensable for flight navigation, particularly during the autonomous landing of unmanned aerial systems (UASs). Conventional light and barometric sensors employed for altitude estimation are limited by poor visibility and temperature conditions, respectively, whilst global positioning system (GPS) receivers provide altitude above mean sea level (MSL) marred by a slow update rate. To meet landing safety requirements, UASs necessitate precise altitude information above ground level (AGL) impervious to environmental conditions. Radar altimeters, a mainstay in commercial aviation for at least half a century, realize these requirements through minimum operational performance standards (MOPSs). More recently, the proliferation of 5G technology and interference with the universally allocated band for radar altimeters from 4.2 to 4.4 GHz underscores the necessity to explore novel avenues. Notably, there is no dedicated MOPS tailored for radar altimeters of UASs. To gauge the performance of a radar altimeter offering for UASs, existing MOPSs are the de facto choice. Historically, frequency-modulated continuous wave (FMCW) radars have been extensively used in a broad spectrum of ranging applications, including radar altimeters. Modern monolithic millimeter wave (mmWave) automotive radars, albeit designed for automotive applications, also employ FMCW for precise ranging with a cost-effective and compact footprint. Given the maturation of this technology and its excellent size, weight, and power (SWaP) metrics, there is a growing trend in industry and academia to explore its efficacy beyond the automotive industry. Yet its feasibility for UAS altimetry remains largely untapped. While theoretical discussion is prevalent in the literature, a specific focus on mmWave radar altimetry is lacking. Moreover, clutter estimation with hardware specifications of a pure look-down mmWave radar is unreported.
This article argues for adapting the MOPSs of commercial aviation to a UAS use case. The work is structured as a tutorial, with a simplified mathematical and theoretical discussion to aid understanding of the performance metrics and their inherent intricacies. A systems engineering approach for deriving waveform specifications from the operational requirements of a UAS is offered. Lastly, proposed future research directions and insights are included. Full article
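The core of FMCW altimetry described above is that a linear frequency sweep turns round-trip delay into a beat frequency proportional to range. The following sketch illustrates that relationship; the chirp bandwidth and duration are illustrative values, not parameters from the article.

```python
import numpy as np

# Hedged sketch: relating FMCW beat frequency to altitude for a look-down
# mmWave radar altimeter. Parameter values are illustrative assumptions.

C = 3e8            # speed of light (m/s)
BANDWIDTH = 1e9    # chirp bandwidth B (Hz), assumed
T_CHIRP = 50e-6    # chirp duration T (s), assumed

def beat_to_range(f_beat_hz: float) -> float:
    """Range R = c * f_b / (2 * S), where S = B / T is the chirp slope."""
    slope = BANDWIDTH / T_CHIRP
    return C * f_beat_hz / (2 * slope)

def range_resolution() -> float:
    """Range resolution dR = c / (2B), independent of carrier frequency."""
    return C / (2 * BANDWIDTH)

# A 100 m altitude produces a beat frequency f_b = 2 * S * R / c:
f_b = 2 * (BANDWIDTH / T_CHIRP) * 100.0 / C
print(beat_to_range(f_b))    # recovers 100.0 m
print(range_resolution())    # 0.15 m with a 1 GHz sweep
```

Note that the range resolution depends only on the swept bandwidth, which is one reason wide-sweep mmWave automotive radars are attractive for fine-grained altimetry.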

23 pages, 3609 KiB  
Article
Dual-UAV Collaborative High-Precision Passive Localization Method Based on Optoelectronic Platform
by Xu Kang, Yu Shao, Guanbing Bai, He Sun, Tao Zhang and Dejiang Wang
Drones 2023, 7(11), 646; https://doi.org/10.3390/drones7110646 - 25 Oct 2023
Cited by 1 | Viewed by 1314
Abstract
Utilizing the optical characteristics of a target for detection and localization does not require actively emitting signals and has the advantage of strong concealment. Once the optoelectronic platform mounted on an unmanned aerial vehicle (UAV) detects the target, the vector pointing to the target in the camera coordinate system can be used to estimate the angle of arrival (AOA) of the target relative to the UAV in the Earth-centered Earth-fixed (ECEF) coordinate system through a series of rotation transformations. By employing two UAVs and the corresponding AOA measurements, passive localization of an unknown target is possible. To achieve high-precision target localization, this paper investigates the following three aspects. Firstly, two error transfer models are established to estimate the noise distributions of the AOA and the UAV position in the ECEF coordinate system. Next, to reduce estimation errors, a weighted least squares (WLS) estimator is designed. Theoretical analysis proves that the mean squared error (MSE) of the target position estimate can reach the Cramér–Rao lower bound (CRLB) under small-noise conditions. Finally, we study the optimal placement of two coplanar UAVs relative to the target based on the D-optimality criterion and provide explicit conclusions. Simulation experiments validate the effectiveness of the localization method. Full article
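The geometry behind dual-UAV AOA localization can be sketched in a simplified form: each bearing measurement constrains the target to a line through the observer, and stacking the line equations gives a least-squares problem. The paper works in 3D ECEF coordinates with a weighted estimator; this planar, unweighted version is only an illustration of that idea, with made-up positions.

```python
import numpy as np

# Hedged sketch: planar bearings-only (AOA) localization from two observers.
# The article's method is 3D, ECEF-based, and weighted; this is a 2D,
# unweighted simplification for illustration only.

def aoa_localize(positions, azimuths):
    """Each azimuth t_i (radians from +x) puts the target p on the line
    sin(t)*(x - xi) - cos(t)*(y - yi) = 0. Stack the lines and solve A p = b."""
    A, b = [], []
    for (xi, yi), t in zip(positions, azimuths):
        A.append([np.sin(t), -np.cos(t)])
        b.append(np.sin(t) * xi - np.cos(t) * yi)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Two UAVs at assumed positions observing a target at (100, 50):
uavs = [(0.0, 0.0), (200.0, 0.0)]
target = np.array([100.0, 50.0])
bearings = [np.arctan2(target[1] - y, target[0] - x) for x, y in uavs]
print(aoa_localize(uavs, bearings))   # ≈ [100. 50.]
```

With noisy bearings, weighting each row by the inverse of its measurement variance (the WLS step the abstract describes) is what lets the estimator approach the CRLB.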

22 pages, 8147 KiB  
Article
SODCNN: A Convolutional Neural Network Model for Small Object Detection in Drone-Captured Images
by Lu Meng, Lijun Zhou and Yangqian Liu
Drones 2023, 7(10), 615; https://doi.org/10.3390/drones7100615 - 1 Oct 2023
Viewed by 1670
Abstract
Drone images contain large numbers of small, dense targets that are vital for agriculture, security, monitoring, and more. However, detecting small objects remains an unsolved challenge, as they occupy a small proportion of the image and have less distinct features. Conventional object detection algorithms fail to produce satisfactory results for small objects. To address this issue, an improved small object detection algorithm is proposed by modifying the YOLOv7 network structure. Firstly, the redundant detection head for large objects is removed, and feature extraction for small object detection is moved to an earlier stage of the network. Secondly, the number of anchor boxes is increased to improve the recall rate for small objects. In addition, considering the limitations of the CIoU loss function in optimization, the EIoU loss function is employed as the bounding box loss function to achieve more stable and effective regression. Lastly, an attention-based feature fusion module is introduced to replace the Concat module in the FPN. This module considers both global and local information, effectively addressing the challenges of multiscale and small object fusion. Experimental results on the VisDrone2019 dataset demonstrate that the proposed algorithm achieves an mAP50 of 54.03% and an mAP50:90 of 32.06%, outperforming recent comparable methods and significantly enhancing the model's capability for small object detection in dense scenes. Full article
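The EIoU loss mentioned above augments IoU with three penalties: normalized center distance, width difference, and height difference, which gives small boxes a smoother regression signal than CIoU. A minimal sketch of the standard EIoU formulation follows; the article's exact implementation may differ.

```python
import numpy as np

# Hedged sketch of the standard EIoU bounding-box loss (the function the
# article substitutes for CIoU). Boxes are (x1, y1, x2, y2), shape (..., 4).

def eiou_loss(pred, target, eps=1e-7):
    # IoU from intersection and union areas
    ix1 = np.maximum(pred[..., 0], target[..., 0])
    iy1 = np.maximum(pred[..., 1], target[..., 1])
    ix2 = np.minimum(pred[..., 2], target[..., 2])
    iy2 = np.minimum(pred[..., 3], target[..., 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Width, height, and squared diagonal of the smallest enclosing box
    cw = np.maximum(pred[..., 2], target[..., 2]) - np.minimum(pred[..., 0], target[..., 0])
    ch = np.maximum(pred[..., 3], target[..., 3]) - np.minimum(pred[..., 1], target[..., 1])
    diag2 = cw ** 2 + ch ** 2 + eps

    # Penalties: center distance, width difference, height difference
    dcx = (pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) / 2
    dcy = (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) / 2
    dw = (pred[..., 2] - pred[..., 0]) - (target[..., 2] - target[..., 0])
    dh = (pred[..., 3] - pred[..., 1]) - (target[..., 3] - target[..., 1])

    return (1 - iou + (dcx ** 2 + dcy ** 2) / diag2
            + dw ** 2 / (cw ** 2 + eps) + dh ** 2 / (ch ** 2 + eps))

# Identical boxes incur (near-)zero loss; distant boxes are penalized:
box = np.array([[0.0, 0.0, 10.0, 10.0]])
print(eiou_loss(box, box))   # ≈ [0.]
```

Because the width and height penalties are separate (rather than folded into an aspect-ratio term as in CIoU), the gradients do not vanish when the predicted aspect ratio already matches the target.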

27 pages, 15777 KiB  
Article
A Real-Time Strand Breakage Detection Method for Power Line Inspection with UAVs
by Jichen Yan, Xiaoguang Zhang, Siyang Shen, Xing He, Xuan Xia, Nan Li, Song Wang, Yuxuan Yang and Ning Ding
Drones 2023, 7(9), 574; https://doi.org/10.3390/drones7090574 - 10 Sep 2023
Cited by 4 | Viewed by 1752 | Correction
Abstract
Power lines are critical infrastructure components in power grid systems. Strand breakage is a serious power line defect that can directly impact the reliability and safety of the power supply. Due to the slender morphology of power lines and the difficulty of acquiring sufficient sample data, strand breakage detection remains a challenging task. Moreover, power grid corporations prefer to detect these defects on-site during power line inspection with unmanned aerial vehicles (UAVs), rather than transmitting all of the inspection data to a central server for offline processing, which causes sluggish responses and a huge communication burden. To meet these challenges and requirements, this paper proposes a novel method for detecting broken strands on power lines in images captured by UAVs. The method features a multi-stage, lightweight pipeline comprising power line segmentation, local image patch cropping along the power lines, and patch classification. A power line segmentation network is designed to separate power lines from the background, so that local image patches can be cropped along the lines while preserving their detailed features. Subsequently, the patch classification network recognizes broken strands in the image patches. Both the segmentation network and the patch classification network are designed to be lightweight, enabling efficient online processing. Since the segmentation network can be trained with normal power line images, which are easy to obtain, and the compact patch classification network can be trained with relatively few positive samples using a multi-task learning strategy, the proposed method is relatively data efficient. Experimental results show that, trained on limited sample data, the proposed method achieves an F1-score of 0.8, which is superior to current state-of-the-art object detectors. The average inference speed on an embedded computer is about 11.5 images per second.
Therefore, the proposed method offers a promising solution for real-time, on-site power line defect detection with the computing resources carried by UAVs. Full article
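The patch-cropping stage of the pipeline above can be sketched simply: walk along the segmented line's pixel coordinates and cut fixed-size windows centered on it, so the classifier sees the conductor at full resolution. The patch size, stride, and function shape below are illustrative assumptions, not the article's values.

```python
import numpy as np

# Hedged sketch of cropping classifier patches along a segmented power line.
# line_pixels is assumed to be an ordered (N, 2) array of (row, col) points
# produced by the segmentation stage; patch/stride values are made up.

def crop_patches(image, line_pixels, patch=64, stride=48):
    """Return square patches of side `patch`, centered every `stride`-th
    line pixel; centers too close to the border are skipped."""
    h, w = image.shape[:2]
    half = patch // 2
    patches = []
    for r, c in line_pixels[::stride]:
        r0, c0 = int(r) - half, int(c) - half
        # Skip centers that cannot yield a full in-bounds patch
        if r0 < 0 or c0 < 0 or r0 + patch > h or c0 + patch > w:
            continue
        patches.append(image[r0:r0 + patch, c0:c0 + patch])
    return patches

# A horizontal "line" across a 128x512 image yields evenly spaced patches:
img = np.zeros((128, 512), dtype=np.uint8)
line = np.array([(64, c) for c in range(512)])
out = crop_patches(img, line)
print(len(out), out[0].shape)   # 10 patches of shape (64, 64)
```

Cropping along the line rather than tiling the whole frame is what keeps the downstream classifier small: almost every patch contains conductor pixels, so few samples are wasted on background.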
