Drones, Volume 8, Issue 5 (May 2024) – 16 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 15086 KiB  
Article
Vision-Guided Tracking and Emergency Landing for UAVs on Moving Targets
by Yisak Debele, Ha-Young Shi, Assefinew Wondosen, Henok Warku, Tae-Wan Ku and Beom-Soo Kang
Drones 2024, 8(5), 182; https://doi.org/10.3390/drones8050182 - 03 May 2024
Abstract
This paper presents a vision-based adaptive tracking and landing method for multirotor Unmanned Aerial Vehicles (UAVs), designed for safe recovery amid propulsion system failures that reduce maneuverability and responsiveness. The method addresses challenges posed by external disturbances such as wind and agile target movements, specifically by considering maneuverability and control limitations caused by propulsion system failures. Building on our previous research in actuator fault detection and tolerance, our approach employs a modified adaptive pure pursuit guidance technique with an extra adaptation parameter to account for reduced maneuverability, thus ensuring safe tracking of moving objects. Additionally, we present an adaptive landing strategy that responds to tracking deviations and minimizes off-target landings caused by lateral tracking errors and delayed responses, using a lateral offset-dependent vertical velocity control. Our system employs vision-based tag detection to ascertain the position of the Unmanned Ground Vehicle (UGV) in relation to the UAV. We implemented this system in a mid-mission emergency landing scenario, which includes actuator health monitoring for emergency landings. Extensive testing and simulations demonstrate the effectiveness of our approach, significantly advancing the development of safe tracking and emergency landing methods for UAVs with compromised control authority due to actuator failures.
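The lateral offset-dependent vertical velocity control described in the abstract can be illustrated with a minimal sketch: descend quickly only when the UAV is well centred over the moving platform. The exponential shaping, the gain `k`, and `v_max` are assumptions for illustration, not the authors' control law:

```python
import math

def descent_velocity(lateral_error_m, v_max=1.5, k=2.0):
    """Scale the commanded descent speed by the lateral tracking error:
    near-zero offset -> descend at v_max; large offset -> hold altitude.
    v_max (m/s) and k (1/m) are illustrative values, not from the paper."""
    return v_max * math.exp(-k * lateral_error_m)
```

A controller like this naturally delays touchdown until the lateral error shrinks, which is the off-target-landing mitigation the abstract describes.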
22 pages, 23766 KiB  
Article
Fine-Grained Feature Perception for Unmanned Aerial Vehicle Target Detection Algorithm
by Shi Liu, Meng Zhu, Rui Tao and Honge Ren
Drones 2024, 8(5), 181; https://doi.org/10.3390/drones8050181 - 03 May 2024
Abstract
Unmanned aerial vehicle (UAV) aerial images often present challenges such as small target sizes, high target density, varied shooting angles, and dynamic poses. Existing target detection algorithms exhibit a noticeable performance decline when confronted with UAV aerial images compared to general scenes. This paper proposes a small target detection algorithm for UAVs, named Fine-Grained Feature Perception YOLOv8s-P2 (FGFP-YOLOv8s-P2), based on the YOLOv8s-P2 architecture. We focus on improving detection accuracy while meeting real-time requirements. First, we enhance the targets’ pixel information by utilizing slice-assisted training and inference techniques, thereby reducing missed detections. Then, we propose a feature extraction module with deformable convolutions. Decoupling the learning process of offset and modulation scalar enables better adaptation to variations in the size and shape of diverse targets. In addition, we introduce a large kernel spatial pyramid pooling module. By cascading convolutions, we leverage the advantages of large kernels to flexibly adjust the model’s attention to various regions of high-level feature maps, better adapting to complex visual scenes and circumventing the cost drawbacks associated with large kernels. To match the excellent real-time detection performance of the baseline model, we propose an improved Random FasterNet Block. This block introduces randomness during convolution and captures spatial features of non-linear transformation channels, enriching feature representations and enhancing model efficiency. Extensive experiments and comprehensive evaluations on the VisDrone2019 and DOTA-v1.0 datasets demonstrate the effectiveness of FGFP-YOLOv8s-P2. This achievement provides robust technical support for efficient small target detection by UAVs in complex scenarios.
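Slice-assisted training and inference tiles each large aerial image into overlapping windows so that small targets occupy more pixels per window. A minimal tiling sketch follows; the tile size and overlap ratio are assumptions, not the paper's settings:

```python
def slice_windows(img_w, img_h, tile=640, overlap=0.2):
    """Tile a large aerial image into overlapping windows for
    slice-assisted training/inference. Windows that would run past the
    right/bottom edge are clamped so every window stays inside the image."""
    step = int(tile * (1 - overlap))
    xs = sorted({min(x, img_w - tile) for x in range(0, img_w - tile + step, step)})
    ys = sorted({min(y, img_h - tile) for y in range(0, img_h - tile + step, step)})
    # (x0, y0, x1, y1) boxes in row-major order
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]
```

Each window is run through the detector independently and the per-window detections are mapped back to full-image coordinates (with non-maximum suppression across overlaps), which is the part this sketch omits.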
18 pages, 528 KiB  
Article
Dual-Driven Learning-Based Multiple-Input Multiple-Output Signal Detection for Unmanned Aerial Vehicle Air-to-Ground Communications
by Haihan Li, Yongming He, Shuntian Zheng, Fan Zhou and Hongwen Yang
Drones 2024, 8(5), 180; https://doi.org/10.3390/drones8050180 - 02 May 2024
Abstract
Unmanned aerial vehicle (UAV) air-to-ground (AG) communication plays a critical role in the evolving space–air–ground integrated network of the upcoming sixth-generation cellular network (6G). The integration of massive multiple-input multiple-output (MIMO) systems has become essential for ensuring optimally performing communication technologies. This article presents a novel dual-driven learning-based network for millimeter-wave (mm-wave) massive MIMO symbol detection in UAV AG communications. Our main contribution is that the proposed approach combines a data-driven symbol-correction network with a model-driven orthogonal approximate message passing network (OAMP-Net). Through joint training, the dual-driven network reduces symbol detection errors propagated through each iteration of the model-driven OAMP-Net. The numerical results demonstrate the superiority of the dual-driven detector over the conventional minimum mean square error (MMSE), orthogonal approximate message passing (OAMP), and OAMP-Net detectors at various noise powers and channel estimation errors. The dual-driven MIMO detector exhibits a 2–3 dB lower signal-to-noise ratio (SNR) requirement compared to the MMSE and OAMP-Net detectors to achieve a bit error rate (BER) of 1 × 10⁻² when the channel estimation error is −30 dB. Moreover, the dual-driven MIMO detector exhibits an increased tolerance to channel estimation errors of 2–3 dB when achieving a BER of 1 × 10⁻³.
(This article belongs to the Special Issue Advances in Detection, Security, and Communication for UAV)
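The MMSE baseline the abstract compares against solves x̂ = (HᵀH + σ²I)⁻¹Hᵀy for the received vector y = Hx + n. A minimal real-valued 2×2 sketch follows; the paper's system is complex-valued mm-wave massive MIMO, so this is purely illustrative:

```python
def mmse_detect(H, y, noise_var):
    """MMSE estimate x_hat = (H^T H + sigma^2 I)^(-1) H^T y for a
    real-valued 2x2 channel H (list of rows) and received vector y."""
    # Form A = H^T H + sigma^2 I explicitly for the 2x2 case.
    a = H[0][0] * H[0][0] + H[1][0] * H[1][0] + noise_var
    b = H[0][0] * H[0][1] + H[1][0] * H[1][1]
    d = H[0][1] * H[0][1] + H[1][1] * H[1][1] + noise_var
    det = a * d - b * b
    # g = H^T y
    g0 = H[0][0] * y[0] + H[1][0] * y[1]
    g1 = H[0][1] * y[0] + H[1][1] * y[1]
    # x_hat = A^(-1) g via the closed-form 2x2 inverse.
    return [(d * g0 - b * g1) / det, (-b * g0 + a * g1) / det]
```

With `noise_var = 0` this reduces to the zero-forcing solution; the σ² term is what regularizes the inversion at low SNR.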
17 pages, 1894 KiB  
Article
Model-Free RBF Neural Network Intelligent-PID Control Applying Adaptive Robust Term for Quadrotor System
by Sung-Jae Kim and Jin-Ho Suh
Drones 2024, 8(5), 179; https://doi.org/10.3390/drones8050179 - 01 May 2024
Abstract
This paper proposes a quadrotor system control scheme using an intelligent–proportional–integral–differential control (I-PID)-based controller augmented with a radial basis neural network (RBF neural network) and the proposed adaptive robust term. The I-PID controller, similar to the widely utilized PID controller in quadrotor systems, demonstrates notable robustness. To enhance this robustness further, the time-delay estimation error was compensated with an RBF neural network. Additionally, an adaptive robust term was proposed to address the shortcomings of the neural network system, thereby constructing a more robust controller. This supplementary control input integrated an adaptation term to address significant signal changes and was amalgamated with a reverse saturation filter to remove unnecessary control input during a steady state. The adaptive law of the proposed controller was designed based on Lyapunov stability to satisfy control system stability. To verify the control system, simulations were conducted on a quadrotor system maneuvering along a spiral path in a disturbed environment. The simulation results demonstrate that the proposed controller achieves high tracking performance across all six axes. Therefore, the controller proposed in this paper can be configured similarly to the previous PID controller and shows satisfactory performance. Full article
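A minimal sketch of the intelligent-PID idea with time-delay estimation: the unknown lumped dynamics F is estimated from the previous control input and the measured acceleration, then cancelled in the control law. The gains, the single-axis second-order form, and α are assumptions; the paper's controller additionally compensates the estimation error with an RBF network and an adaptive robust term, which this sketch omits:

```python
def ipid_step(y_ddot_meas, u_prev, y_ddot_ref, e, e_int, e_dot,
              alpha=1.0, kp=2.0, ki=0.1, kd=0.5):
    """One step of an intelligent-PID (model-free control) law for a
    system approximated as y_ddot = F + alpha * u.
    F_hat is the time-delay estimate of the unknown dynamics F.
    All gains are illustrative values, not the paper's."""
    F_hat = y_ddot_meas - alpha * u_prev            # time-delay estimation
    u = (y_ddot_ref - F_hat + kp * e + ki * e_int + kd * e_dot) / alpha
    return u
```

When the estimate is exact (F_hat equals the true F), the closed loop reduces to pure PID error dynamics, which is why the scheme inherits PID-like robustness.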
23 pages, 2167 KiB  
Article
Integration of UAV Digital Surface Model and HEC-HMS Hydrological Model System in iRIC Hydrological Simulation—A Case Study of Wu River
by Yen-Po Huang, Hui-Ping Tsai and Li-Chi Chiang
Drones 2024, 8(5), 178; https://doi.org/10.3390/drones8050178 - 30 Apr 2024
Abstract
This research investigates flood susceptibility in the mid- and downstream areas of Taiwan’s Wu River, historically prone to flooding in central Taiwan. The study integrates the Hydrologic Engineering Center—Hydrologic Modeling System (HEC-HMS) for flow simulations with unmanned aerial vehicle (UAV)-derived digital surface models (DSMs) at varying resolutions. Flood simulations, executed through the International River Interface Cooperative (iRIC), assess flood depths using diverse DSM resolutions. Notably, HEC-HMS simulations exhibit commendable Nash–Sutcliffe efficiency (NSE) exceeding 0.88 and a peak flow percentage error (PEPF) below 5%, indicating excellent suitability. In iRIC flood simulations, optimal results emerge with a 2 m resolution UAV-DSM. Furthermore, the study incorporates rainfall data at different recurrence intervals in iRIC flood simulations, presenting an alternative flood modeling approach. This research underscores the efficacy of integrating UAV-DSM into iRIC flood simulations, enabling precise flood depth assessment and risk analysis for flood control management. Full article
(This article belongs to the Special Issue Applications of UAVs in Civil Infrastructure)
18 pages, 6001 KiB  
Article
Improving Target Geolocation Accuracy with Multi-View Aerial Images in Long-Range Oblique Photography
by Chongyang Liu, Yalin Ding, Hongwen Zhang, Jihong Xiu and Haipeng Kuang
Drones 2024, 8(5), 177; https://doi.org/10.3390/drones8050177 - 30 Apr 2024
Abstract
Target geolocation in long-range oblique photography (LOROP) is a challenging study due to the fact that measurement errors become more evident with increasing shooting distance, significantly affecting the calculation results. This paper introduces a novel high-accuracy target geolocation method based on multi-view observations. Unlike the usual target geolocation methods, which heavily depend on the accuracy of GNSS (Global Navigation Satellite System) and INS (Inertial Navigation System), the proposed method overcomes these limitations and demonstrates an enhanced effectiveness by utilizing multiple aerial images captured at different locations without any additional supplementary information. In order to achieve this goal, camera optimization is performed to minimize the errors measured by GNSS and INS sensors. We first use feature matching between the images to acquire the matched keypoints, which determines the pixel coordinates of the landmarks in different images. A map-building process is then performed to obtain the spatial positions of these landmarks. With the initial guesses of landmarks, bundle adjustment is used to optimize the camera parameters and the spatial positions of the landmarks. After the camera optimization, a geolocation method based on line-of-sight (LOS) is used to calculate the target geolocation based on the optimized camera parameters. The proposed method is validated through simulation and an experiment utilizing unmanned aerial vehicle (UAV) images, demonstrating its efficiency, robustness, and ability to achieve high-accuracy target geolocation. Full article
(This article belongs to the Section Drone Design and Development)
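After camera optimization, the line-of-sight (LOS) geolocation step reduces to intersecting a ray from the camera with the terrain. A minimal flat-ground sketch follows; the paper first refines camera poses with bundle adjustment, which this sketch takes as given:

```python
def los_ground_intersection(cam_pos, los_dir, ground_z=0.0):
    """Intersect the camera line-of-sight ray with a horizontal ground
    plane z = ground_z. cam_pos is the optimized camera position,
    los_dir the (non-horizontal, downward-pointing) LOS direction."""
    t = (ground_z - cam_pos[2]) / los_dir[2]   # ray parameter at the plane
    return (cam_pos[0] + t * los_dir[0],
            cam_pos[1] + t * los_dir[1],
            ground_z)
```

In long-range oblique photography the ray is nearly grazing, so small angular errors in `los_dir` produce large horizontal errors — which is why the multi-view pose optimization matters.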
19 pages, 4235 KiB  
Article
Assessing the Severity of Verticillium Wilt in Cotton Fields and Constructing Pesticide Application Prescription Maps Using Unmanned Aerial Vehicle (UAV) Multispectral Images
by Xiaojuan Li, Zhi Liang, Guang Yang, Tao Lin and Bo Liu
Drones 2024, 8(5), 176; https://doi.org/10.3390/drones8050176 - 30 Apr 2024
Abstract
Cotton Verticillium wilt is a common fungal disease during the growth of cotton, leading to the yellowing of leaves, stem dryness, and root rot, severely affecting the yield and quality of cotton. Current monitoring methods for Verticillium wilt rely mainly on manual inspection and field investigation, which are inefficient and costly, and pesticide application in cotton fields follows a single uniform scheme, resulting in low pesticide efficiency and uneven application. This study combines UAV remote sensing monitoring of cotton Verticillium wilt with the precision spraying capabilities of agricultural drones to provide a methodological reference for the monitoring and precision treatment of cotton diseases. Taking the cotton fields of Shihezi City, Xinjiang as the research subject, high-resolution multispectral images were collected using drones. Simultaneously, 150 sets of field samples with varying degrees of Verticillium wilt were collected through ground data collection. Using data analysis methods such as partial least squares regression (PLSR) and BP neural network models, a cotton Verticillium wilt monitoring model based on drone remote sensing images was constructed. The results showed that the estimation accuracy of the PLSR and BP neural network models based on the EVI, RENDVI, SAVI, MSAVI, and RDVI vegetation indices was 0.778 and 0.817, respectively, with corresponding errors of 0.126 and 0.117. On this basis, the condition of the areas to be treated was analyzed and combined with the operational parameters of agricultural drones to produce a prescription map for spraying against cotton Verticillium wilt.
(This article belongs to the Section Drones in Agriculture and Forestry)
15 pages, 9376 KiB  
Article
Comparison and Optimal Method of Detecting the Number of Maize Seedlings Based on Deep Learning
by Zhijie Jia, Xinlong Zhang, Hongye Yang, Yuan Lu, Jiale Liu, Xun Yu, Dayun Feng, Kexin Gao, Jianfu Xue, Bo Ming, Chenwei Nie and Shaokun Li
Drones 2024, 8(5), 175; https://doi.org/10.3390/drones8050175 - 28 Apr 2024
Abstract
Effective agricultural management in maize production operations starts with the early quantification of seedlings. Accurately determining plant presence allows growers to optimize planting density, allocate resources, and detect potential growth issues early on. This study provides a comprehensive analysis of the performance of various object detection models in maize production, with a focus on the effects of planting density, growth stages, and flight altitudes. One-stage models, particularly YOLOv8n and YOLOv5n, demonstrated superior performance with AP50 scores of 0.976 and 0.951, respectively, outperforming two-stage models in terms of resource efficiency and seedling quantification accuracy. YOLOv8n, along with Deformable DETR, Faster R-CNN, and YOLOv3-tiny, were identified for further examination based on their performance metrics and architectural features. The study also highlights the significant impact of plant density and growth stage on detection accuracy. Increased planting density and advanced growth stages (particularly V6) were associated with decreased model accuracy due to increased leaf overlap and image complexity. The V2–V3 growth stages were identified as the optimal periods for detection. Additionally, flight altitude negatively affected image resolution and detection accuracy, with higher altitudes leading to poorer performance. In field applications, YOLOv8n proved highly effective, maintaining robust performance across different agricultural settings and consistently achieving rRMSEs below 1.64% in high-yield fields. The model also demonstrated high reliability, with Recall, Precision, and F1 scores exceeding 99.00%, affirming its suitability for practical agricultural use. These findings suggest that UAV-based image collection systems employing models like YOLOv8n can significantly enhance the accuracy and efficiency of seedling detection in maize production. The research elucidates the critical factors that impact the accuracy of deep learning detection models in the context of corn seedling detection and selects a model suited for this specific task in practical agricultural production. These findings offer valuable insights into the application of object detection technology and lay a foundation for the future development of precision agriculture, particularly in optimizing deep learning models for varying environmental conditions that affect corn seedling detection.
(This article belongs to the Special Issue UAS in Smart Agriculture: 2nd Edition)
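The rRMSE threshold quoted above (below 1.64%) can be computed with the standard definition — RMSE normalized by the mean observed count. The formula is assumed from the metric's name; this is not code from the study:

```python
import math

def rrmse_percent(obs, pred):
    """Relative RMSE in percent: RMSE(obs, pred) / mean(obs) * 100.
    Here obs/pred would be per-plot seedling counts (observed vs. detected)."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return 100.0 * rmse / (sum(obs) / len(obs))
```

Normalizing by the mean makes the error comparable across fields with different planting densities, which is why it suits count-based evaluation.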
29 pages, 20581 KiB  
Article
Manipulating Camera Gimbal Positioning by Deep Deterministic Policy Gradient Reinforcement Learning for Drone Object Detection
by Ming-You Ma, Yu-Hsiang Huang, Shang-En Shen and Yi-Cheng Huang
Drones 2024, 8(5), 174; https://doi.org/10.3390/drones8050174 - 28 Apr 2024
Abstract
The object recognition technology of unmanned aerial vehicles (UAVs) equipped with “You Only Look Once” (YOLO) has been validated in actual flights. However, the challenge lies in efficiently utilizing camera gimbal control technology to swiftly capture images of YOLO-identified target objects in aerial search missions. Enhancing the UAV’s energy efficiency and search effectiveness is imperative. This study establishes a simulation environment in the Unity simulation software for target tracking via gimbal control. This approach involves the development of deep deterministic policy-gradient (DDPG) reinforcement-learning techniques to train the gimbal in executing effective tracking actions. The outcomes of the simulations indicate that when actions are appropriately rewarded or penalized in the form of scores, the reward value consistently converges within the range of 19–35. This convergence implies that a successful strategy leads to consistently high rewards. Consequently, a refined set of training procedures is devised, enabling the gimbal to accurately track the target. Moreover, this strategy minimizes unnecessary tracking actions, thus enhancing tracking efficiency. Numerous benefits arise from training in a simulated environment. For instance, the training in this simulated environment is facilitated through a dataset composed of actual flight photographs. Furthermore, offline operations can be conducted at any given time without any constraint of time and space. Thus, this approach effectively enables the training and enhancement of the gimbal’s action strategies. The findings of this study demonstrate that a coherent set of action strategies can be proficiently cultivated by employing DDPG reinforcement learning. Furthermore, these strategies empower the UAV’s gimbal to rapidly and precisely track designated targets. Therefore, this approach provides both convenience and opportunities to gather more flight-scenario training data in the future. This gathering of data will lead to immediate training opportunities and help reduce the system’s energy consumption.
27 pages, 17520 KiB  
Article
VizNav: A Modular Off-Policy Deep Reinforcement Learning Framework for Vision-Based Autonomous UAV Navigation in 3D Dynamic Environments
by Fadi AlMahamid and Katarina Grolinger
Drones 2024, 8(5), 173; https://doi.org/10.3390/drones8050173 - 27 Apr 2024
Abstract
Unmanned aerial vehicles (UAVs) provide benefits through eco-friendliness, cost-effectiveness, and reduction of human risk. Deep reinforcement learning (DRL) is widely used for autonomous UAV navigation; however, current techniques often oversimplify the environment or impose movement restrictions. Additionally, most vision-based systems lack precise depth perception, while range finders provide a limited environmental overview, and LiDAR is energy-intensive. To address these challenges, this paper proposes VizNav, a modular DRL-based framework for autonomous UAV navigation in dynamic 3D environments without imposing conventional mobility constraints. VizNav incorporates the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with Prioritized Experience Replay and Importance Sampling (PER) to improve performance in continuous action spaces and mitigate overestimations. Additionally, VizNav employs depth map images (DMIs) to enhance visual navigation by accurately estimating objects’ depth information, thereby improving obstacle avoidance. Empirical results show that VizNav, by leveraging TD3, improves navigation, and the inclusion of PER and DMI further boosts performance. Furthermore, the deployment of VizNav across various experimental settings confirms its flexibility and adaptability. The framework’s architecture separates the agent’s learning from the training process, facilitating integration with various DRL algorithms, simulation environments, and reward functions. This modularity creates a potential to influence RL simulation in various autonomous navigation systems, including robotics control and autonomous vehicles. Full article
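The prioritized experience replay component mentioned above samples transitions in proportion to their priority and re-weights them to correct the induced bias. A minimal sketch follows; α and β are the usual PER hyperparameters with assumed values, and this is not the VizNav implementation:

```python
import random

def per_sample(priorities, k, alpha=0.6, beta=0.4):
    """Draw k replay indices with P(i) = p_i^alpha / sum_j p_j^alpha
    (prioritized experience replay). Returns the indices and the
    max-normalized importance-sampling weights (1 / (N * P(i)))^beta
    that correct the sampling bias during the TD3 update."""
    scaled = [p ** alpha for p in priorities]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    idx = random.choices(range(len(priorities)), weights=probs, k=k)
    n = len(priorities)
    w = [(1.0 / (n * probs[i])) ** beta for i in idx]
    w_max = max(w)
    return idx, [wi / w_max for wi in w]
```

High-priority (high-TD-error) transitions dominate the sample, which is what mitigates the slow learning of rare but informative events in continuous-control tasks.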
26 pages, 23380 KiB  
Article
Monitoring Change and Recovery of an Embayed Beach in Response to Typhoon Storms Using UAV LiDAR
by Qiujia Lei, Xinkai Wang, Yifei Liu, Junli Guo, Tinglu Cai and Xiaoming Xia
Drones 2024, 8(5), 172; https://doi.org/10.3390/drones8050172 - 27 Apr 2024
Abstract
The monitoring of beach topographical changes and recovery processes under typhoon storm influence has primarily relied on traditional techniques that lack high spatial resolution. Therefore, we used an unmanned aerial vehicle light detection and ranging (UAV LiDAR) system to obtain the four time periods of topographic data from Tantou Beach, a sandy beach in Xiangshan County, Zhejiang Province, China, to explore beach topography and geomorphology in response to typhoon events. The UAV LiDAR data in four survey periods showed an overall vertical accuracy of approximately 5 cm. Based on the evaluated four time periods of the UAV LiDAR data, we created four corresponding DEMs for the beach. We calculated the DEM of difference (Dod), which showed that the erosion and siltation on Tantou Beach over different temporal scales had a significant alongshore zonal feature with a broad change range. The tidal level significantly impacted beach erosion and siltation changes. However, the storm surge did not affect the beach area above the spring high-tide level. After storms, siltation occurred above the spring high-tide zone. This study reveals the advantage of UAV LiDAR in monitoring beach changes and provides novel insights into the impacts of typhoon storms on coastal topographic and geomorphological change and recovery processes. Full article
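The DEM of difference (DoD) used above is a cell-wise subtraction of two survey DEMs: positive cells indicate deposition (siltation), negative cells erosion. A minimal sketch, assuming a grid-of-lists DEM representation:

```python
def dem_of_difference(dem_new, dem_old, cell_area=1.0):
    """Compute DoD = dem_new - dem_old per cell and the net volume
    change (sum of elevation differences times cell area).
    Positive values = deposition, negative = erosion."""
    dod = [[n - o for n, o in zip(row_n, row_o)]
           for row_n, row_o in zip(dem_new, dem_old)]
    volume = sum(v for row in dod for v in row) * cell_area
    return dod, volume
```

In practice cells whose |difference| falls below the survey uncertainty (about 5 cm here) would be masked before summing, so noise is not counted as real change.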
33 pages, 4285 KiB  
Article
A Path-Planning Method for UAV Swarm under Multiple Environmental Threats
by Xiangyu Fan, Hao Li, You Chen and Danna Dong
Drones 2024, 8(5), 171; https://doi.org/10.3390/drones8050171 - 26 Apr 2024
Abstract
To weaken or avoid the impact of dynamic threats such as wind and extreme weather on the real-time path of a UAV swarm, a path-planning method based on improved long short-term memory (LSTM) network prediction parameters was constructed. First, models were constructed for wind, static threats, and dynamic threats during the flight of the drone. Then, it was found that atmospheric parameters are typical time series data with spatial correlation. The LSTM network was optimized and used to process time series parameters to construct a network for predicting atmospheric parameters. The state of the drone was adjusted in real time based on the prediction results to mitigate the impact of wind or avoid the threat of extreme weather. Finally, a path optimization method based on an improved LSTM network was constructed. Through simulation, it can be seen that compared to the path that does not consider atmospheric effects, the optimized path has significantly improved flightability and safety. Full article
(This article belongs to the Special Issue Distributed Control, Optimization, and Game of UAV Swarm Systems)
22 pages, 8871 KiB  
Article
Early Drought Detection in Maize Using UAV Images and YOLOv8+
by Shanwei Niu, Zhigang Nie, Guang Li and Wenyu Zhu
Drones 2024, 8(5), 170; https://doi.org/10.3390/drones8050170 - 24 Apr 2024
Abstract
The escalating global climate change significantly impacts the yield and quality of maize, a vital staple crop worldwide, especially during seedling stage droughts. Traditional detection methods are limited by their single-scenario approach, requiring substantial human labor and time, and lack accuracy in the real-time monitoring and precise assessment of drought severity. In this study, a novel early drought detection method for maize based on unmanned aerial vehicle (UAV) images and Yolov8+ is proposed. In the Backbone section, the C2F-Conv module is adopted to reduce model parameters and deployment costs, while incorporating the CA attention mechanism module to effectively capture tiny feature information in the images. The Neck section utilizes the BiFPN fusion architecture and spatial attention mechanism to enhance the model’s ability to recognize small and occluded targets. The Head section introduces an additional 10 × 10 output, integrates loss functions, and enhances accuracy by 1.46%, reduces training time by 30.2%, and improves robustness. The experimental results demonstrate that the improved Yolov8+ model achieves precision and recall rates of approximately 90.6% and 88.7%, respectively. The mAP@50 and mAP@50:95 reach 89.16% and 71.14%, respectively, representing respective increases of 3.9% and 3.3% compared to the original Yolov8. The UAV image detection speed of the model is up to 24.63 ms, with a model size of 13.76 MB, optimized by 31.6% and 28.8% compared to the original model, respectively. In comparison with the Yolov8, Yolov7, and Yolo5s models, the proposed method exhibits varying degrees of superiority in mAP@50, mAP@50:95, and other metrics, utilizing drone imagery and deep learning techniques to truly propel agricultural modernization. Full article
16 pages, 1176 KiB  
Article
A Control-Theoretic Spatio-Temporal Model for Wildfire Smoke Propagation Using UAV-Based Air Pollutant Measurements
by Prabhash Ragbir, Ajith Kaduwela, Xiaodong Lan, Adam Watts and Zhaodan Kong
Drones 2024, 8(5), 169; https://doi.org/10.3390/drones8050169 - 24 Apr 2024
Abstract
Wildfires have the potential to cause severe damage to vegetation, property and most importantly, human life. In order to minimize these negative impacts, it is crucial that wildfires are detected at the earliest possible stages. A potential solution for early wildfire detection is to utilize unmanned aerial vehicles (UAVs) that are capable of tracking the chemical concentration gradient of smoke emitted by wildfires. A spatiotemporal model of wildfire smoke plume dynamics can allow for efficient tracking of the chemicals by utilizing both real-time information from sensors as well as future information from the model predictions. This study investigates a spatiotemporal modeling approach based on subspace identification (SID) to develop a data-driven smoke plume dynamics model for the purposes of early wildfire detection. The model was learned using CO2 concentration data which were collected using an air quality sensor package onboard a UAV during two prescribed burn experiments. Our model was evaluated by comparing the predicted values to the measured values at random locations and showed mean errors of 6.782 ppm and 30.01 ppm from the two experiments. Additionally, our model was shown to outperform the commonly used Gaussian puff model (GPM) which showed mean errors of 25.799 ppm and 104.492 ppm, respectively. Full article
(This article belongs to the Topic Application of Remote Sensing in Forest Fire)
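To illustrate the core idea behind subspace identification (this is a generic output-only sketch, not the authors' pipeline): a Hankel matrix of measured outputs has a column space spanned by the system's observability matrix, and a shift-invariance relation then recovers the dynamics up to a similarity transform. A toy damped-oscillation signal stands in for the CO2 readings:

```python
import numpy as np

# Toy linear system: a damped oscillation (stand-in for a concentration signal).
theta, rho, n = 0.3, 0.95, 2
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
C = np.array([[1.0, 0.0]])

# Simulate a free response y_k = C A^k x0.
x = np.array([1.0, 0.5])
y = []
for _ in range(120):
    y.append((C @ x)[0])
    x = A @ x
y = np.array(y)

# Block-Hankel matrix of outputs; its column space spans the observability matrix.
r, N = 10, 100
H = np.column_stack([y[j:j + r] for j in range(N)])

# The SVD reveals the system order; the leading left singular vectors
# estimate the observability matrix (up to a similarity transform).
U, s, _ = np.linalg.svd(H, full_matrices=False)
Obs = U[:, :n]

# Shift invariance: Obs[1:] = Obs[:-1] @ A_hat.
A_hat = np.linalg.pinv(Obs[:-1]) @ Obs[1:]

# Eigenvalues of A_hat match the true dynamics (magnitude rho = 0.95).
print(np.sort(np.abs(np.linalg.eigvals(A_hat))))
```

Practical SID methods (e.g., N4SID) extend this with inputs and noise weighting, which the toy example omits.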

18 pages, 2003 KiB  
Article
Enhancing UAV Aerial Docking: A Hybrid Approach Combining Offline and Online Reinforcement Learning
by Yuting Feng, Tao Yang and Yushu Yu
Drones 2024, 8(5), 168; https://doi.org/10.3390/drones8050168 - 24 Apr 2024
Viewed by 485
Abstract
In our study, we explore the task of performing docking maneuvers between two unmanned aerial vehicles (UAVs) using a combination of offline and online reinforcement learning (RL) methods. This task requires a UAV to accomplish external docking while maintaining stable flight control, representing two distinct types of objectives at the task execution level. Direct online RL training could lead to catastrophic forgetting, resulting in training failure. To overcome these challenges, we design a rule-based expert controller and accumulate an extensive dataset. Using this dataset, we design a series of rewards and train a guiding policy through offline RL. We then compare different RL methods and ultimately select online RL to fine-tune the offline-trained model. This strategy effectively combines the efficiency of offline RL with the exploratory capabilities of online RL. Our approach improves the success rate of the UAV’s aerial docking task from 40% under the expert policy to 95%. Full article
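The offline-pretrain-then-online-fine-tune pipeline described above can be sketched in miniature with tabular Q-learning on a toy chain environment (entirely illustrative; the article uses deep RL on a docking task, and the environment, expert, and hyperparameters here are invented for the sketch):

```python
import random

import numpy as np

N_STATES, GOAL, GAMMA = 6, 5, 0.9

def step(s, a):
    """Deterministic chain: action 1 moves right, action 0 moves left."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL), s2 == GOAL

rng = random.Random(0)

# 1) Log transitions from an imperfect expert (moves toward the goal 60% of the time).
dataset = []
for _ in range(200):
    s, done, t = 0, False, 0
    while not done and t < 20:
        a = 1 if rng.random() < 0.6 else 0
        s2, r, done = step(s, a)
        dataset.append((s, a, r, s2, done))
        s, t = s2, t + 1

# 2) Offline phase: Q-learning sweeps over the fixed dataset (no exploration).
Q = np.zeros((N_STATES, 2))
for _ in range(50):
    for s, a, r, s2, done in dataset:
        target = r + (0.0 if done else GAMMA * Q[s2].max())
        Q[s, a] += 0.1 * (target - Q[s, a])

# 3) Online phase: epsilon-greedy fine-tuning starting from the offline Q.
for _ in range(100):
    s, done, t = 0, False, 0
    while not done and t < 20:
        a = rng.randrange(2) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s, a] += 0.1 * (r + (0.0 if done else GAMMA * Q[s2].max()) - Q[s, a])
        s, t = s2, t + 1

greedy = [int(Q[s].argmax()) for s in range(GOAL)]
print(greedy)  # expected: [1, 1, 1, 1, 1] (always move toward the goal)
```

The offline stage gives the online stage a sensible starting point, which is the same motivation the authors give for avoiding direct online training from scratch.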

19 pages, 746 KiB  
Article
Impact Analysis of Time Synchronization Error in Airborne Target Tracking Using a Heterogeneous Sensor Network
by Seokwon Lee, Zongjian Yuan, Ivan Petrunin and Hyosang Shin
Drones 2024, 8(5), 167; https://doi.org/10.3390/drones8050167 - 23 Apr 2024
Viewed by 425
Abstract
This paper investigates the influence of time synchronization on sensor fusion and target tracking. As a benchmark, we design a target tracking system based on a track-to-track fusion architecture. Heterogeneous sensors detect targets and transmit measurements through a communication network, while local tracking and track fusion are performed in the fusion center to integrate measurements from these sensors into a fused track. The time synchronization error is mathematically modeled, with local time biased from the reference clock during the holdover phase. The influence of the time synchronization error on target tracking system components such as local association, filtering, and track fusion is discussed. The results demonstrate that an increase in the time synchronization error degrades association and filtering performance. In addition, simulation results validate the impact of the time synchronization error on the sensor network. Full article
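A toy 1-D example (not the paper's model; speed and offset values are invented) shows why a holdover-phase clock bias degrades fusion: a sensor whose clock lags by δ stamps each sample late, so for a target moving at speed v its track is displaced by roughly v·δ, and naive averaging with a synchronized sensor inherits half that bias:

```python
import numpy as np

v = 250.0      # target speed, m/s (illustrative airborne target)
delta = 0.05   # sensor B clock offset during holdover, s

t = np.arange(0.0, 10.0, 0.1)   # fusion-center reference time
true_pos = v * t                 # 1-D constant-velocity motion

# Sensor A is synchronized; sensor B's clock lags by delta, so a sample
# it stamps with time t was actually taken at t - delta.
meas_a = v * t
meas_b = v * (t - delta)

# Naive track fusion: average the two tracks at each stamped time.
fused = 0.5 * (meas_a + meas_b)
bias = true_pos - fused
print(bias.mean())  # v * delta / 2 = 6.25 m systematic position bias
```

This systematic, velocity-dependent bias is exactly the kind of error that corrupts gating in local association and inflates filter residuals as the synchronization error grows.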
