Search Results (1,548)

Search Parameters:
Keywords = UAV parameters

20 pages, 55414 KiB  
Article
Parameter-Efficient Fine-Tuning for Individual Tree Crown Detection and Species Classification Using UAV-Acquired Imagery
by Jiuyu Zhang, Fan Lei and Xijian Fan
Remote Sens. 2025, 17(7), 1272; https://doi.org/10.3390/rs17071272 - 3 Apr 2025
Viewed by 49
Abstract
Pre-trained foundation models, trained on large-scale datasets, have demonstrated significant success in a variety of downstream vision tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt these foundation models to new domains by updating only a small subset of parameters, thereby reducing computational overhead. However, the effectiveness of these PEFT methods, especially in the context of forestry remote sensing—specifically for individual tree detection—remains largely unexplored. In this work, we present a simple and efficient PEFT approach designed to transfer pre-trained transformer models to the specific tasks of tree crown detection and species classification in unmanned aerial vehicle (UAV) imagery. To address the challenge of mitigating the influence of irrelevant ground targets in UAV imagery, we propose an Adaptive Salient Channel Selection (ASCS) method, which can be simply integrated into each transformer block during fine-tuning. In the proposed ASCS, task-specific channels are adaptively selected based on class-wise importance scores, where the channels most relevant to the target class are highlighted. In addition, a simple bias term is introduced to facilitate the learning of task-specific knowledge, enhancing the adaptation of the pre-trained model to the target tasks. The experimental results demonstrate that the proposed ASCS fine-tuning method, which utilizes a small number of task-specific learnable parameters, significantly outperforms the latest YOLO detection framework and surpasses the state-of-the-art PEFT method in tree detection and classification tasks. These findings demonstrate that the proposed ASCS is an effective PEFT method, capable of adapting the pre-trained model’s capabilities for tree crown detection and species classification using UAV imagery. Full article
(This article belongs to the Special Issue Intelligent Extraction of Phenotypic Traits in Agroforestry)
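
A minimal sketch of how the adaptive salient channel selection idea could look inside a transformer block, assuming channel importance is approximated by mean absolute activation and top-k selection; the class-wise scoring, module placement, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an ASCS-style adapter for parameter-efficient fine-tuning.
# Assumptions: importance is approximated by mean |activation| per channel
# (the paper uses class-wise importance scores), and only the bias is trained.
import torch
import torch.nn as nn

class ASCSAdapter(nn.Module):
    def __init__(self, dim: int, top_ratio: float = 0.25):
        super().__init__()
        self.top_k = max(1, int(dim * top_ratio))
        self.bias = nn.Parameter(torch.zeros(dim))  # task-specific learnable bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) features from a frozen transformer block
        importance = x.abs().mean(dim=(0, 1))            # per-channel score
        top_idx = importance.topk(self.top_k).indices    # salient channels
        mask = torch.zeros_like(importance)
        mask[top_idx] = 1.0
        # highlight salient channels and add the task-specific bias
        return x + mask * self.bias

x = torch.randn(2, 196, 768)
print(ASCSAdapter(768)(x).shape)  # torch.Size([2, 196, 768])
```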

22 pages, 8528 KiB  
Article
MSEA-Net: Multi-Scale and Edge-Aware Network for Weed Segmentation
by Akram Syed, Baifan Chen, Adeel Ahmed Abbasi, Sharjeel Abid Butt and Xiaoqing Fang
AgriEngineering 2025, 7(4), 103; https://doi.org/10.3390/agriengineering7040103 - 3 Apr 2025
Viewed by 45
Abstract
Accurate weed segmentation in Unmanned Aerial Vehicle (UAV) imagery remains a significant challenge in precision agriculture due to environmental variability, weak contextual representation, and inaccurate boundary detection. To address these limitations, we propose the Multi-Scale and Edge-Aware Network (MSEA-Net), a lightweight and efficient deep learning framework designed to enhance segmentation accuracy while maintaining computational efficiency. Specifically, we introduce the Multi-Scale Spatial-Channel Attention (MSCA) module to recalibrate spatial and channel dependencies, improving local–global feature fusion while reducing redundant computations. Additionally, the Edge-Enhanced Bottleneck Attention (EEBA) module integrates Sobel-based edge detection to refine boundary delineation, ensuring sharper object separation in dense vegetation environments. Extensive evaluations on publicly available datasets demonstrate the effectiveness of MSEA-Net, achieving a mean Intersection over Union (IoU) of 87.42% on the Motion-Blurred UAV Images of Sorghum Fields dataset and 71.35% on the CoFly-WeedDB dataset, outperforming benchmark models. MSEA-Net also maintains a compact architecture with only 6.74 M parameters and a model size of 25.74 MB, making it suitable for UAV-based real-time weed segmentation. These results highlight the potential of MSEA-Net for improving automated weed detection in precision agriculture while ensuring computational efficiency for edge deployment. Full article
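
A rough sketch of the Sobel-based edge attention idea behind the EEBA module, assuming the edge magnitude of the channel-mean feature map is turned into a sigmoid gate; the exact kernel placement and normalization are assumptions, not the paper's design.

```python
# Illustrative Sobel-based edge attention (EEBA-inspired; details assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeAttention(nn.Module):
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", gx.view(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().view(1, 1, 3, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); compute edges on the channel-mean map
        g = x.mean(dim=1, keepdim=True)
        ex = F.conv2d(g, self.kx, padding=1)
        ey = F.conv2d(g, self.ky, padding=1)
        edge = torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)
        gate = torch.sigmoid(edge)        # edge-aware attention map
        return x * gate + x               # emphasize boundaries, keep identity path

feat = torch.randn(1, 64, 80, 80)
print(SobelEdgeAttention()(feat).shape)  # torch.Size([1, 64, 80, 80])
```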

18 pages, 8005 KiB  
Article
Durum Wheat (Triticum durum Desf.) Grain Yield and Protein Estimation by Multispectral UAV Monitoring and Machine Learning Under Mediterranean Conditions
by Giuseppe Badagliacca, Gaetano Messina, Emilio Lo Presti, Giovanni Preiti, Salvatore Di Fazio, Michele Monti, Giuseppe Modica and Salvatore Praticò
AgriEngineering 2025, 7(4), 99; https://doi.org/10.3390/agriengineering7040099 - 1 Apr 2025
Viewed by 98
Abstract
Durum wheat (Triticum durum Desf.), among the herbaceous crops, is one of the most extensively grown in the Mediterranean area due to its fundamental role in supporting typical food productions like bread, pasta, and couscous. Among the environmental and technical aspects, nitrogen (N) fertilization is crucial to shaping plant development and that of kernels by also affecting their protein concentration. Today, new techniques for monitoring fields using uncrewed aerial vehicles (UAVs) can detect crop multispectral (MS) responses, while advanced machine learning (ML) models can enable accurate predictions. However, to date, there is still little research related to the prediction of the N nutritional status and its effects on the productivity of durum wheat grown in the Mediterranean environment through the application of these techniques. The present research aimed to monitor the MS responses of two different wheat varieties, one ancient (Timilia) and one modern (Ciclope), grown under three different N fertilization regimens (0, 60, and 120 kg N ha−1), and to estimate their quantitative and qualitative production (i.e., grain yield and protein concentration) through Pearson's correlation and five different ML approaches. The results showed the difficulty of obtaining good predictive results with Pearson's correlation, both for the data of the two varieties merged together and for the Timilia variety. In contrast, for Ciclope, several vegetation indices (VIs) (i.e., CVI, GNDRE, and SRRE) performed well (r-value > 0.7) in estimating both productive parameters. The implementation of ML approaches, particularly random forest (RF) regression, neural network (NN), and support vector machine (SVM), overcame the limitations of correlation in estimating the grain yield (R2 > 0.6, RMSE = 0.56 t ha−1, MAE = 0.43 t ha−1) and protein (R2 > 0.7, RMSE = 1.2%, MAE = 0.47%) in Timilia, whereas for Ciclope, the RF approach outperformed the other predictive methods (R2 = 0.79, RMSE = 0.56 t ha−1, MAE = 0.44 t ha−1). Full article
(This article belongs to the Section Sensors Technology and Precision Agriculture)
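
A compact sketch of the kind of random forest regression used to map vegetation indices (such as CVI, GNDRE, and SRRE) to grain yield, assuming a plot-level feature table; the synthetic data, feature names, and hyperparameters below are placeholders, not the study's dataset or settings.

```python
# Hypothetical random forest regression from vegetation indices to grain yield.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                  # stand-in for CVI, GNDRE, SRRE per plot
y = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=120)  # yield, t/ha

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.2f} t/ha  "
      f"MAE={mean_absolute_error(y_te, pred):.2f} t/ha")
```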

25 pages, 13110 KiB  
Article
An Improved Unmanned Aerial Vehicle Forest Fire Detection Model Based on YOLOv8
by Bensheng Yun, Xiaohan Xu, Jie Zeng, Zhenyu Lin, Jing He and Qiaoling Dai
Fire 2025, 8(4), 138; https://doi.org/10.3390/fire8040138 - 31 Mar 2025
Viewed by 70
Abstract
Forest fires have a great destructive impact on the Earth’s ecosystem; therefore, the top priority of current research is how to accurately and quickly monitor forest fires. Taking into account efficiency and cost-effectiveness, deep-learning-driven UAV remote sensing fire detection algorithms have emerged as a favored research trend and have seen extensive application. However, in the process of drone monitoring, fires often appear very small and are easily obstructed by trees, which greatly limits the amount of effective information that algorithms can extract. Meanwhile, considering the limitations of unmanned aerial vehicles, the algorithm model also needs to have lightweight characteristics. To address challenges such as the small targets, occlusions, and image blurriness in UAV-captured wildfire images, this paper proposes an improved UAV forest fire detection model based on YOLOv8. Firstly, we incorporate SPDConv modules, enhancing the YOLOv8 architecture and boosting its efficacy in dealing with minor objects and images with low resolution. Secondly, we introduce the C2f-PConv module, which effectively improves computational efficiency by reducing redundant calculations and memory access. Thirdly, the model boosts classification precision through the integration of a Mixed Local Channel Attention (MLCA) strategy preceding the three detection outputs. Finally, the W-IoU loss function is utilized, which adaptively modifies the weights for different target boxes within the loss computation, to efficiently address the difficulties associated with detecting small targets. The experimental results showed that the accuracy of our model increased by 2.17%, the recall increased by 5.5%, and the mAP@0.5 increased by 1.9%. In addition, the number of parameters decreased by 43.8%, with only 5.96M parameters, while the model size and GFlops decreased by 43.3% and 36.7%, respectively. Our model not only reduces the number of parameters and computational complexity, but also exhibits superior accuracy and effectiveness in UAV fire image recognition tasks, thereby offering a robust and reliable solution for UAV fire monitoring. Full article
(This article belongs to the Special Issue Intelligent Forest Fire Prediction and Detection)
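
The SPDConv idea (space-to-depth rearrangement followed by a non-strided convolution, used to preserve small-object detail) can be sketched as below; this is a generic illustration, not the paper's exact module configuration.

```python
# Generic space-to-depth convolution block (SPDConv-style), illustrative only.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Replace strided downsampling with pixel-unshuffle + conv to keep fine detail."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.spd = nn.PixelUnshuffle(2)                 # (C, H, W) -> (4C, H/2, W/2)
        self.conv = nn.Sequential(
            nn.Conv2d(4 * c_in, c_out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.spd(x))

x = torch.randn(1, 64, 160, 160)
print(SPDConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```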

25 pages, 14872 KiB  
Article
RFAG-YOLO: A Receptive Field Attention-Guided YOLO Network for Small-Object Detection in UAV Images
by Chengmeng Wei and Wenhong Wang
Sensors 2025, 25(7), 2193; https://doi.org/10.3390/s25072193 - 30 Mar 2025
Viewed by 115
Abstract
The YOLO series of object detection methods have achieved significant success in a wide range of computer vision tasks due to their efficiency and accuracy. However, detecting small objects in UAV images remains a formidable challenge due to factors such as a low resolution, complex background interference, and significant scale variations, which collectively degrade the quality of feature extraction and limit detection performance. To address these challenges, we propose the receptive field attention-guided YOLO (RFAG-YOLO) method, an advanced adaptation of YOLOv8 tailored for small-object detection in UAV imagery, with a focus on improving feature representation and detection robustness. To this end, we introduce a novel network building block, termed the receptive field network block (RFN block), which leverages dynamic kernel parameter adjustments to enhance the model’s ability to capture fine-grained local details. To effectively harness multi-scale features, we designed an enhanced FasterNet module based on RFN blocks as the core component of the backbone network in RFAG-YOLO, enabling robust feature extraction across varying resolutions. This approach achieves a balance of semantic information by employing staged downsampling and a hierarchical arrangement of RFN blocks, ensuring optimal feature representation across different resolutions. Additionally, we introduced a Scale-Aware Feature Amalgamation (SAF) component prior to the detection head of RFAG-YOLO. This component employs a scale attention mechanism to dynamically weight features from both higher and lower layers, facilitating richer information flow and significantly improving the model’s robustness to complex backgrounds and scale variations. Experimental results on the VisDrone2019 dataset demonstrated that RFAG-YOLO outperformed state-of-the-art models, including YOLOv7, YOLOv8, YOLOv10, and YOLOv11, in terms of detection accuracy and efficiency. In particular, RFAG-YOLO achieved an mAP50 of 38.9%, representing substantial improvements over multiple baseline models: a 12.43% increase over YOLOv7, a 5.99% improvement over YOLOv10, and significant gains of 16.12% compared to YOLOv8n and YOLOv11. Moreover, compared to the larger YOLOv8s model, RFAG-YOLO achieved 97.98% of its mAP50 performance while utilizing only 53.51% of the parameters, highlighting its exceptional efficiency in terms of the performance-to-parameter ratio and making it highly suitable for resource-constrained UAV applications. These results underscore the substantial potential of RFAG-YOLO for real-world UAV applications, particularly in scenarios demanding accurate detection of small objects under challenging conditions such as varying lighting, complex backgrounds, and diverse scales. Full article
(This article belongs to the Section Sensing and Imaging)
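
One possible reading of the Scale-Aware Feature Amalgamation component: learn per-scale weights from pooled descriptors and use them to fuse a shallow feature map with an upsampled deeper one. The weighting scheme below is an assumption made for illustration, not the published module.

```python
# Illustrative scale-attention fusion of a shallow and a deep feature map (SAF-style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)   # one score per scale from pooled features

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        feats = torch.stack([shallow, deep_up], dim=1)          # (B, 2, C, H, W)
        pooled = feats.mean(dim=(-2, -1))                       # (B, 2, C)
        weights = torch.softmax(self.score(pooled), dim=1)      # (B, 2, 1)
        return (weights.unsqueeze(-1).unsqueeze(-1) * feats).sum(dim=1)

shallow = torch.randn(1, 128, 80, 80)
deep = torch.randn(1, 128, 40, 40)
print(ScaleAttentionFusion(128)(shallow, deep).shape)  # torch.Size([1, 128, 80, 80])
```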

24 pages, 7999 KiB  
Article
Adaptive Impact-Time-Control Cooperative Guidance Law for UAVs Under Time-Varying Velocity Based on Reinforcement Learning
by Zhenyu Liu, Gang Lei, Yong Xian, Leliang Ren, Shaopeng Li and Daqiao Zhang
Drones 2025, 9(4), 262; https://doi.org/10.3390/drones9040262 - 29 Mar 2025
Viewed by 98
Abstract
In this study, an adaptive impact-time-control cooperative guidance law based on deep reinforcement learning considering field-of-view (FOV) constraints is proposed for high-speed UAVs with time-varying velocity. Firstly, a reinforcement learning framework for the high-speed UAVs' guidance problem is established. The optimization objective is to maximize the impact velocity, while the constraints on impact time, dive attack, and FOV are considered simultaneously. The time-to-go estimation method is improved so that it can be applied to high-speed UAVs with time-varying velocity. Then, in order to improve the applicability and robustness of the agent, environmental uncertainties, including aerodynamic parameter errors, observation noise, and target random maneuvers, are incorporated into the training process. Furthermore, inspired by the RL2 algorithm, the recurrent layer is introduced into both the policy and value networks. In this way, the agent can automatically adapt to different mission scenarios by updating the hidden states of the recurrent layer. In addition, a compound reward function is designed to train the agent to satisfy the requirements of impact-time control and dive attack simultaneously. Finally, the effectiveness and robustness of the proposed guidance law are validated through numerical simulations conducted across a wide range of scenarios. Full article
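
To make the compound reward idea concrete, here is a toy shaping function that combines an impact-time error term, an FOV-violation penalty, a dive-angle requirement, and an impact-velocity bonus. The weights and functional form are illustrative assumptions, not the paper's reward.

```python
# Toy compound reward for impact-time-control guidance (weights and form are assumed).
import math

def compound_reward(t_go_error: float, look_angle: float, fov_limit: float,
                    flight_path_angle: float, dive_angle_req: float,
                    speed: float, terminal: bool) -> float:
    r = -abs(t_go_error)                                  # track the desired impact time
    if abs(look_angle) > fov_limit:                       # penalize FOV violations
        r -= 10.0 * (abs(look_angle) - fov_limit)
    if terminal:
        r -= 5.0 * abs(flight_path_angle - dive_angle_req)  # dive-attack constraint
        r += 0.01 * speed                                    # reward impact velocity
    return r

print(compound_reward(t_go_error=0.5, look_angle=math.radians(40),
                      fov_limit=math.radians(35), flight_path_angle=math.radians(-70),
                      dive_angle_req=math.radians(-80), speed=900.0, terminal=True))
```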

26 pages, 65178 KiB  
Article
Comparison of UAV-Based LiDAR and Photogrammetric Point Cloud for Individual Tree Species Classification of Urban Areas
by Qixia Man, Xinming Yang, Haijian Liu, Baolei Zhang, Pinliang Dong, Jingru Wu, Chunhui Liu, Changyin Han, Cong Zhou, Zhuang Tan and Qian Yu
Remote Sens. 2025, 17(7), 1212; https://doi.org/10.3390/rs17071212 - 28 Mar 2025
Viewed by 241
Abstract
UAV LiDAR and digital aerial photogrammetry (DAP) have shown great performance in forest inventory due to their advantage in three-dimensional information extraction. Many studies have compared their performance in individual tree segmentation and structural parameters extraction (e.g., tree height). However, few studies have compared their performance in tree species classification. Therefore, we have compared the performance of UAV LiDAR and DAP-based point clouds in individual tree species classification with the following steps: (1) Point cloud data processing: Denoising, smoothing, and normalization were conducted on LiDAR and DAP-based point cloud data separately. (2) Feature extraction: Spectral, structural, and texture features were extracted from the pre-processed LiDAR and DAP-based point cloud data. (3) Individual tree segmentation: The marked watershed algorithm was used to segment individual trees on canopy height models (CHM) derived from LiDAR and DAP data, respectively. (4) Pixel-based tree species classification: The random forest classifier (RF) was used to classify urban tree species with features derived from LiDAR and DAP data separately. (5) Individual tree species classification: Based on the segmented individual tree boundaries and pixel-based classification results, the majority filtering method was implemented to obtain the final individual tree species classification results. (6) Fusion with hyperspectral data: LiDAR-hyperspectral and DAP-hyperspectral fused data were used to conduct individual tree species classification. (7) Accuracy assessment and comparison: The accuracy of the above results was assessed and compared. The results indicate that LiDAR outperformed DAP in individual tree segmentation (F-score 0.83 vs. 0.79), while DAP achieved higher pixel-level classification accuracy (73.83% vs. 57.32%) due to spectral-textural features. Fusion with hyperspectral data narrowed the gap, with LiDAR reaching 95.98% accuracy in individual tree classification. Our findings suggest that DAP offers a cost-effective alternative for urban forest management, balancing accuracy and operational costs. Full article
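
Step (5) can be sketched as a simple majority filter: every pixel inside a segmented crown takes the most frequent class of that crown. The array names below are placeholders, not the study's data.

```python
# Majority filtering of a pixel-wise classification within segmented tree crowns.
# 'segments' (crown IDs from watershed) and 'pixel_labels' are placeholder arrays.
import numpy as np

def majority_filter(segments: np.ndarray, pixel_labels: np.ndarray) -> np.ndarray:
    out = pixel_labels.copy()
    for seg_id in np.unique(segments):
        if seg_id == 0:                       # assume 0 = background / no crown
            continue
        mask = segments == seg_id
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        out[mask] = classes[np.argmax(counts)]
    return out

segments = np.array([[1, 1, 0], [1, 2, 2], [0, 2, 2]])
pixel_labels = np.array([[3, 4, 0], [3, 5, 5], [0, 5, 4]])
print(majority_filter(segments, pixel_labels))
```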

25 pages, 6012 KiB  
Article
Design of Flight Attitude Simulator for Plant Protection UAV Based on Simulation of Pesticide Tank Sloshing
by Pengxiang Ren, Junke Rong, Ruichang Zhao and Pei Cao
Agronomy 2025, 15(4), 822; https://doi.org/10.3390/agronomy15040822 - 26 Mar 2025
Viewed by 75
Abstract
Changes in the flight attitude of plant protection unmanned aerial vehicles (UAVs) can lead to oscillations in the liquid level of their medicine tanks, which may affect operational accuracy and stability, and could even pose a threat to flight safety. To address this issue, this article presents the design of a flight attitude simulator for crop protection UAVs, constructed on a six-degree-of-freedom motion platform. This simulator can replicate the various flight attitudes, such as emergency stops, turns, and point rotations, of plant protection UAVs. This article initially outlines the determination and design process for the structural parameters and 3D model of the flight attitude simulator specific to plant protection UAVs. Subsequently, simulations were performed to analyze liquid sloshing in the pesticide tank under various liquid flushing ratios during flight conditions, including emergency stops, climbs, and circling maneuvers. Finally, the influence of liquid sloshing on the flight stability of the plant protection UAVs in different attitudes and with varying liquid flushing ratios is presented. These results serve as a cornerstone for optimizing the flight parameters of plant protection UAVs, analyzing the characteristics of pesticide application, and designing effective pesticide containers. Full article
(This article belongs to the Section Precision and Digital Agriculture)

25 pages, 4966 KiB  
Article
Artificial Intelligence-Driven Aircraft Systems to Emulate Autopilot and GPS Functionality in GPS-Denied Scenarios Through Deep Learning
by César García-Gascón, Pablo Castelló-Pedrero, Francisco Chinesta and Juan A. García-Manrique
Drones 2025, 9(4), 250; https://doi.org/10.3390/drones9040250 - 26 Mar 2025
Viewed by 156
Abstract
This paper presents a methodology for training a Deep Learning model aimed at flight management tasks in a fixed-wing unmanned aerial vehicle (UAV), specifically autopilot control and GPS prediction. In this formulation, sensor data and the most recent GPS signal are first processed by an LSTM to produce an initial coordinate prediction. This preliminary estimate is then merged with additional sensor inputs and passed to an MLP, which replaces the conventional autopilot algorithm by generating the control commands for real-time navigation. The approach is particularly valuable in scenarios where the aircraft must follow a predetermined route—such as surveillance operations—or maintain a remote ground link under varying GPS availability. The study focuses on Class I UAVs; however, the proposed methodology can be adapted to larger classes (II and III) by adjusting sensor configurations and network parameters. To collect training data, a small fixed-wing aircraft was instrumented to record kinematic and control inputs, which then served as inputs to the neural network. Despite the limited sensor suite and the use of an open-source flight controller (SpeedyBee), the flexibility of the proposed approach allows for easy adaptation to more complex UAVs equipped with additional sensors, potentially improving prediction accuracy. The performance of the neural network, implemented in PyTorch, was evaluated by comparing its predicted data with actual flight logs. In addition, the method has been shown to be robust to both short and prolonged GPS outages, as it relies on waypoint-based navigation along previously explored routes, ensuring reliable performance in known operational contexts. Full article
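
A minimal PyTorch sketch of the described two-stage pipeline (an LSTM coordinate predictor feeding an MLP controller), with assumed input sizes and layer widths; it is not the authors' trained network.

```python
# Two-stage sketch: LSTM predicts the next coordinate, an MLP emits control commands.
# Input dimensions and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class GPSAutopilotNet(nn.Module):
    def __init__(self, sensor_dim: int = 12, hidden: int = 64, n_controls: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=sensor_dim + 3, hidden_size=hidden, batch_first=True)
        self.coord_head = nn.Linear(hidden, 3)            # predicted position estimate
        self.controller = nn.Sequential(                  # stands in for the autopilot law
            nn.Linear(3 + sensor_dim, 64), nn.ReLU(),
            nn.Linear(64, n_controls),
        )

    def forward(self, seq: torch.Tensor, current_sensors: torch.Tensor):
        # seq: (B, T, sensor_dim + 3) history of sensors plus the last known GPS fix
        h, _ = self.lstm(seq)
        coord = self.coord_head(h[:, -1])                 # (B, 3) coordinate estimate
        controls = self.controller(torch.cat([coord, current_sensors], dim=-1))
        return coord, controls

net = GPSAutopilotNet()
coord, controls = net(torch.randn(2, 50, 15), torch.randn(2, 12))
print(coord.shape, controls.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```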

20 pages, 12820 KiB  
Article
Hyperspectral Remote Sensing Estimation and Spatial Scale Effect of Leaf Area Index in Moso Bamboo (Phyllostachys pubescens) Forests Under the Stress of Pantana phyllostachysae Chao
by Haitao Li, Zhanghua Xu, Yifan Li, Lei Sun, Huafeng Zhang, Chaofei Zhang, Yuanyao Yang, Xiaoyu Guo, Zenglu Li and Fengying Guan
Forests 2025, 16(4), 575; https://doi.org/10.3390/f16040575 - 26 Mar 2025
Viewed by 131
Abstract
Leaf area index (LAI) serves as a crucial indicator for assessing vegetation growth status, and unmanned aerial vehicle (UAV) optical remote sensing technology provides an effective approach for forest pest-related research. This study investigated the feasibility of LAI estimation in Moso bamboo (Phyllostachys pubescens) forests with different damage levels using UAV data while simultaneously exploring the scale effects of various spatial resolutions. Through image resampling at 10 distinct spatial resolutions and field data classification based on Pantana phyllostachysae Chao pest severity (healthy and mildly damaged as Scheme 1, moderately and severely damaged as Scheme 2, and all levels as Scheme 3), three machine learning algorithms (SVM, RF, and XGBoost) were employed to establish LAI estimation models for both single and mixed damage levels. Comparative analysis was conducted across different schemes, algorithms, and spatial resolutions to identify optimal estimation models. The results showed that (1) XGBoost-based regression models achieved superior performance across all schemes, with optimal model accuracy consistently observed at a 3 m spatial resolution; (2) minimal scale effects occurred at a 3 m resolution for Schemes 1 and 2, while Scheme 3 showed the lowest scale effects at 1.5 m, followed by 3 m; (3) Scheme 3 exhibited significant advantages in inversion for mixed-damage bamboo forests, with robust performance across all damage levels, whereas Schemes 1 and 2 demonstrated higher accuracy for single-damage scenarios than for mixed ones. This research validates the feasibility of incorporating pest stress factors into LAI estimation through different pest damage models, offering novel perspectives and technical support for parameter inversion in Moso bamboo forests. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
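
The XGBoost-based LAI regression in point (1) can be sketched along these lines, with synthetic spectral features standing in for the resampled UAV bands; the data and hyperparameters are placeholders, not those of the study.

```python
# Illustrative XGBoost regression from UAV spectral features to LAI (synthetic data).
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                      # stand-in band/VI features at 3 m
lai = 3.0 + 0.6 * X[:, 0] - 0.4 * X[:, 2] + rng.normal(scale=0.2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.25, random_state=1)
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)
print(f"R2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```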

21 pages, 5582 KiB  
Article
A Multi-Timescale Method for State of Charge Estimation for Lithium-Ion Batteries in Electric UAVs Based on Battery Model and Data-Driven Fusion
by Xiao Cao and Li Liu
Drones 2025, 9(4), 247; https://doi.org/10.3390/drones9040247 - 26 Mar 2025
Viewed by 106
Abstract
This study focuses on the critical problem of precise state of charge (SOC) estimation for electric unmanned aerial vehicle (UAV) battery systems, addressing a fundamental challenge in enhancing energy management reliability and flight safety. Current data-driven methods require large datasets and incur high computational complexity, while model-based methods need high-quality model parameters. To address these challenges, a multi-timescale fusion method that integrates battery model and data-driven technologies for SOC estimation in lithium-ion batteries has been developed. Firstly, under the condition of no data or insufficient data, an adaptive extended Kalman filtering with multi-innovation algorithm (MI-AEKF) is introduced to estimate SOC based on the Thévenin model in a fast timescale. Then, a hybrid bidirectional time convolutional network (BiTCN), bidirectional gated recurrent unit (BiGRU), and attention mechanism (BiTCN-BiGRU-Attention) deep learning model using battery model parameters is used to correct SOC error in a relatively slow timescale. The performance of the proposed model is validated under various dynamic battery profiles. The results show that the maximum error (ME), mean absolute error (MAE), and root mean square error (RMSE) are below 2.3%, 1.3%, and 1.5% with zero data-driving, 0.9%, 0.4%, and 0.4% with insufficient data-driving, and 0.6%, 0.3%, and 0.3% with sufficient data-driving under various dynamic conditions, which showcases the robustness and remarkable generalization performance of the proposed method. These findings significantly advance energy management strategies for Li-ion battery systems in UAVs, thereby improving operational efficiency and extending flight endurance. Full article
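
For orientation, a plain (single-innovation) extended Kalman filter on a first-order Thévenin model looks like the sketch below; the adaptive multi-innovation variant and the BiTCN-BiGRU-Attention corrector described in the abstract are not reproduced, and all parameter values and the OCV curve are assumptions.

```python
# Basic EKF SOC estimation on a 1-RC Thevenin model (illustrative parameters only).
import numpy as np

Q_CAP = 3.0 * 3600            # capacity in ampere-seconds (assumed 3 Ah cell)
R0, R1, C1, DT = 0.05, 0.015, 2000.0, 1.0

def ocv(soc):                  # crude linear OCV curve, an assumption
    return 3.4 + 0.8 * soc

def docv(_soc):
    return 0.8

def ekf_step(x, P, i, v_meas, q=1e-7, r=1e-3):
    a = np.exp(-DT / (R1 * C1))
    # state prediction: x = [SOC, V_rc], discharge current i > 0
    x_pred = np.array([x[0] - i * DT / Q_CAP, a * x[1] + R1 * (1 - a) * i])
    F = np.array([[1.0, 0.0], [0.0, a]])
    P_pred = F @ P @ F.T + q * np.eye(2)
    # measurement update with terminal voltage v = OCV(SOC) - V_rc - R0*i
    H = np.array([[docv(x_pred[0]), -1.0]])
    y = v_meas - (ocv(x_pred[0]) - x_pred[1] - R0 * i)
    S = H @ P_pred @ H.T + r
    K = (P_pred @ H.T) / S
    x_new = x_pred + (K * y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.9, 0.0]), np.eye(2) * 1e-3
x, P = ekf_step(x, P, i=2.0, v_meas=3.95)
print(f"SOC estimate: {x[0]:.3f}")
```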

29 pages, 1337 KiB  
Article
Adaptive Q-Learning Grey Wolf Optimizer for UAV Path Planning
by Golam Moktader Nayeem, Mingyu Fan and Golam Moktader Daiyan
Drones 2025, 9(4), 246; https://doi.org/10.3390/drones9040246 - 25 Mar 2025
Viewed by 112
Abstract
Path planning is crucial for safely and efficiently navigating unmanned aerial vehicles (UAVs) toward operational goals. Often, this is a complex, multi-constraint, and non-linear optimization problem, and metaheuristic algorithms are frequently used to solve it. Grey Wolf Optimization (GWO) is one of the most popular algorithms for solving such problems. However, standard GWO has several limitations, such as premature convergence, susceptibility to local minima, and unsuitability for dynamic environments due to its lack of adaptive learning. To address these issues, in this study we propose a Q-learning-based GWO algorithm (QGWO). QGWO introduces four key features: a Q-learning-based adaptive convergence factor, a segmented and parameterized position update strategy, a long-jump mechanism for population diversity preservation, and the replacement of non-dominant wolves for improved exploration. In addition, Bayesian optimization is used to set the parameters of QGWO for better performance. To evaluate the quality and robustness of QGWO, extensive numerical and simulation experiments were conducted on IEEE CEC 2022 benchmark functions, comparing it with standard GWO and some of its recent variants. In path planning simulation, QGWO lowers the path cost by 27.4%, improves the convergence speed by 19.06%, and reduces the area under the curve (AUC) by 23.8% over standard GWO, achieving an optimal trajectory. Results show that QGWO is an efficient, reliable algorithm for UAV path planning in dynamic environments. Full article
(This article belongs to the Section Drone Design and Development)
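
One way to read the Q-learning-based adaptive convergence factor: a small Q-table maps a coarse search state (for example, whether the best fitness improved) to a choice of the GWO parameter a, updated with the standard Q-learning rule. Everything below, including the states, actions, and reward, is an illustrative assumption rather than the published algorithm.

```python
# Toy Q-learning selection of the GWO convergence factor 'a' (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([2.0, 1.5, 1.0, 0.5])     # candidate values of the factor a
Q = np.zeros((2, len(ACTIONS)))              # states: 0 = improved, 1 = stagnated
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose_a(state: int) -> int:
    if rng.random() < eps:                    # epsilon-greedy exploration
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))

def update_q(state: int, action: int, reward: float, next_state: int) -> None:
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

# Inside a GWO iteration one would then use a = ACTIONS[action] in the classic
# position update X_new = X_leader - A * |C * X_leader - X|, with A = 2*a*r1 - a.
state, action = 1, choose_a(1)
update_q(state, action, reward=1.0, next_state=0)
print(ACTIONS[action], Q)
```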

23 pages, 7982 KiB  
Article
YOLO-SMUG: An Efficient and Lightweight Infrared Object Detection Model for Unmanned Aerial Vehicles
by Xinzhe Luo and Xiaogang Zhu
Drones 2025, 9(4), 245; https://doi.org/10.3390/drones9040245 - 25 Mar 2025
Viewed by 147
Abstract
To tackle the high computational demands and accuracy limitations in UAV-based infrared object detection, this study proposes YOLO-SMUG, a lightweight detection algorithm optimized for small object identification. The model incorporates an enhanced backbone architecture that integrates the lightweight Shuffle_Block algorithm and the Multi-Scale Dilated Attention (MSDA) mechanism, enabling effective small object feature extraction while significantly reducing parameter size and computational cost without compromising detection accuracy. Additionally, a lightweight inverted bottleneck structure, C2f_UIB, along with the GhostConv module, replaces the conventional C2f and standard convolutional layers. This modification decreases computational complexity while maintaining the model’s ability to capture and integrate essential feature information across multiple scales. Furthermore, the standard CIoU loss is substituted with MPDIoU loss, improving object localization accuracy and enhancing overall positioning precision in infrared imagery. Experimental results on the HIT-UAV dataset, which consists of infrared imagery collected by UAV platforms, demonstrate that YOLO-SMUG outperforms the baseline YOLOv8s, achieving a 3.58% increase in accuracy, a 6.49% improvement in the F1-score, a 57.04% reduction in computational cost, and a 64.38% decrease in parameter count. These findings underscore the efficiency and effectiveness of YOLO-SMUG, making it a promising solution for UAV-based infrared small object detection in complex environments. Full article
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)
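
The MPDIoU substitution mentioned above follows the published MPDIoU definition (IoU minus the normalized squared distances between the two boxes' top-left and bottom-right corners); the PyTorch sketch below assumes xyxy boxes and should be checked against the original formulation before use.

```python
# MPDIoU loss sketch for xyxy boxes; img_w/img_h normalize the corner distances.
import torch

def mpdiou_loss(pred: torch.Tensor, target: torch.Tensor, img_w: float, img_h: float,
                eps: float = 1e-7) -> torch.Tensor:
    # pred, target: (N, 4) boxes as (x1, y1, x2, y2)
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(0) * (inter_y2 - inter_y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / norm - d2 / norm
    return (1.0 - mpdiou).mean()

p = torch.tensor([[10., 10., 50., 60.]])
t = torch.tensor([[12., 8., 48., 62.]])
print(mpdiou_loss(p, t, img_w=640, img_h=512))
```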

25 pages, 7710 KiB  
Article
Top2Vec Topic Modeling to Analyze the Dynamics of Publication Activity Related to Environmental Monitoring Using Unmanned Aerial Vehicles
by Vladimir Albrekht, Ravil I. Mukhamediev, Yelena Popova, Elena Muhamedijeva and Asset Botaibekov
Publications 2025, 13(2), 15; https://doi.org/10.3390/publications13020015 - 25 Mar 2025
Viewed by 228
Abstract
Unmanned aerial vehicles (UAVs) play a key role in the process of contemporary environmental monitoring, enabling more frequent and detailed observations of various environmental parameters. With the rapid growth of scientific publications on this topic, it is important to identify the key trends and directions. This study uses the Top2Vec topic modeling algorithm to analyze the abstracts of more than 556 thousand scientific articles published on the arXiv platform from 2010 to 2023. The analysis was conducted in five key domains: air, water, and surface pollution monitoring; causes of pollution; and challenges in the use of UAVs. The research method included data collection and pre-processing, topic modeling, and quantitative analysis of publication activity using indicators of the rate (D1) and acceleration (D2) of change in the number of publications. The study suggests that the main challenge for researchers is processing the data obtained in the course of monitoring. The second most important factor is the reduction in restrictions on the UAV flight duration. Among the causes of pollution, agricultural activities emerge as a priority. Research on greenhouse gas emission monitoring will be the most topical in air quality monitoring, while erosion and sedimentation will be the most topical in land surface control. Thermal pollution, microplastics, and chemical pollution are most relevant in the field of water quality control. On the other hand, the interest of the scientific community in topics related to soil pollution, particulate matter, sensor calibration, and volatile organic compounds is decreasing. Full article
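
Reading D1 and D2 as the year-over-year rate and acceleration of per-topic publication counts (an assumption about the exact definitions used in the paper), the computation reduces to first and second differences of the yearly count series:

```python
# Rate (D1) and acceleration (D2) of publication counts as simple yearly differences.
# The count series is synthetic; the exact D1/D2 definitions in the paper may differ.
import numpy as np

years = np.arange(2010, 2024)
counts = np.array([3, 4, 6, 7, 9, 14, 18, 25, 31, 40, 52, 61, 75, 90])  # per-topic counts

d1 = np.diff(counts)          # rate of change in the number of publications
d2 = np.diff(counts, n=2)     # acceleration of change
print(dict(zip(years[1:], d1)))
print(dict(zip(years[2:], d2)))
```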

29 pages, 1776 KiB  
Article
Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks
by Waleed Almuseelem
Sensors 2025, 25(7), 2039; https://doi.org/10.3390/s25072039 - 25 Mar 2025
Viewed by 205
Abstract
The Vehicular Edge-Cloud Computing (VECC) paradigm has gained traction as a promising solution for mitigating vehicles' computational constraints by offloading resource-intensive tasks to distributed edge and cloud networks. However, conventional computation offloading mechanisms frequently induce network congestion and service delays, stemming from uneven workload distribution across spatial Roadside Units (RSUs). Moreover, ensuring data security and optimizing energy usage within this framework remain significant challenges. To this end, this study introduces a deep reinforcement learning-enabled computation offloading framework for multi-tier VECC networks. First, a dynamic load-balancing algorithm is developed to optimize the balance among RSUs, incorporating real-time analysis of heterogeneous network parameters, including RSU computational load, channel capacity, and proximity-based latency. Additionally, to alleviate congestion in static RSU deployments, the framework proposes deploying UAVs in high-density zones, dynamically augmenting both storage and processing resources. Moreover, an Advanced Encryption Standard (AES)-based mechanism, secured with dynamic one-time encryption key generation, is implemented to fortify data confidentiality during transmissions. Further, a context-aware edge caching strategy is implemented to preemptively store processed tasks, reducing redundant computations and associated energy overheads. Subsequently, a mixed-integer optimization model is formulated that simultaneously minimizes energy consumption and guarantees latency constraints. Given the combinatorial complexity of large-scale vehicular networks, the problem is recast in an equivalent reinforcement learning form, and a deep learning-based algorithm is designed to learn near-optimal offloading solutions under dynamic conditions. Empirical evaluations demonstrate that the proposed framework significantly outperforms existing benchmark techniques in terms of energy savings. These results underscore the framework's efficacy in advancing sustainable, secure, and scalable intelligent transportation systems. Full article
(This article belongs to the Special Issue Vehicle-to-Everything (V2X) Communication Networks 2024–2025)
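
The AES-based protection with a one-time key can be illustrated with the standard cryptography package: a fresh 256-bit key and nonce are generated per offloaded task. The key-distribution step, which the framework itself would have to handle, is omitted from this sketch.

```python
# One-time-key AES-GCM encryption of an offloaded task payload (illustrative).
# Key exchange/distribution between vehicle and RSU is out of scope of this sketch.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_task(payload: bytes, associated_data: bytes = b"task-metadata"):
    key = AESGCM.generate_key(bit_length=256)   # fresh key for every transmission
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, associated_data)
    return key, nonce, ciphertext

def decrypt_task(key: bytes, nonce: bytes, ciphertext: bytes,
                 associated_data: bytes = b"task-metadata") -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

key, nonce, ct = encrypt_task(b"sensor frame to offload")
print(decrypt_task(key, nonce, ct))
```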
