Search Results (4,843)

Search Parameters:
Keywords = aerial image

23 pages, 2718 KB  
Article
Deep Learning Image-Based Classification for Post-Earthquake Damage Level Prediction Using UAVs
by Norah Alsaaran and Adel Soudani
Sensors 2025, 25(17), 5406; https://doi.org/10.3390/s25175406 (registering DOI) - 2 Sep 2025
Abstract
Unmanned Aerial Vehicles (UAVs) integrated with lightweight deep learning models represent an effective solution for image-based rapid post-earthquake damage assessment. UAVs, equipped with cameras, capture high-resolution aerial imagery of disaster-stricken areas, providing essential data for evaluating structural damage. When paired with lightweight Convolutional Neural Network (CNN) models, these UAVs can process the captured images onboard, enabling real-time, accurate damage level predictions that can efficiently orient the efforts of Search and Rescue (SAR) teams. This study investigates the use of the MobileNetV3-Small lightweight CNN model for real-time post-earthquake damage level prediction using UAV-captured imagery. The model is trained to classify three levels of post-earthquake damage, ranging from no damage to severe damage. Experimental results show that the adapted MobileNetV3-Small model achieves the lowest FLOPs, with a significant reduction of 58.8% compared to the ShuffleNetv2 model. Fine-tuning the last five layers resulted in a slight increase of approximately 0.2% in FLOPs, but significantly improved accuracy and robustness, yielding a 4.5% performance boost over the baseline. The model achieved a weighted average F-score of 0.93 on a merged dataset composed of three post-earthquake damage level datasets. It was successfully deployed and tested on a Raspberry Pi 5, demonstrating its feasibility for edge-device applications. This deployment highlighted the model’s efficiency and real-time performance in resource-constrained environments. Full article
(This article belongs to the Section Vehicular Sensing)
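The weighted average F-score of 0.93 reported above is a support-weighted mean of per-class F1 scores over the three damage levels. A minimal pure-Python sketch of that metric (the per-class counts and precision/recall values below are illustrative, not taken from the paper):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall; 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def weighted_f1(per_class):
    """per_class: list of (support, precision, recall), one per damage level.
    Returns the support-weighted average F1, as in a classification report."""
    total = sum(s for s, _, _ in per_class)
    return sum(s * f1(p, r) for s, p, r in per_class) / total

# Hypothetical per-class results for the three damage levels
# (no damage, moderate, severe):
classes = [(500, 0.95, 0.94), (300, 0.91, 0.90), (200, 0.93, 0.92)]
score = weighted_f1(classes)
```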

23 pages, 34310 KB  
Article
One-to-Many Retrieval Between UAV Images and Satellite Images for UAV Self-Localization in Real-World Scenarios
by Jiaqi Li, Yuli Sun, Yaobing Xiang and Lin Lei
Remote Sens. 2025, 17(17), 3045; https://doi.org/10.3390/rs17173045 - 1 Sep 2025
Abstract
Matching drone images to satellite reference images is a critical step for achieving UAV self-localization. Existing drone visual localization datasets mainly focus on target localization, where each drone image is paired with a corresponding satellite image slice, typically with identical coverage. However, this one-to-one approach does not reflect real-world UAV self-localization needs as it cannot guarantee exact matches between drone images and satellite tiles nor reliably identify the correct satellite slice. To bridge this gap, we propose a one-to-many matching method between drone images and satellite reference tiles. First, we enhance the UAV-VisLoc dataset, making it the first in the field tailored for one-to-many imperfect matching in UAV self-localization. Second, we introduce a novel loss function, Incomp-NPair Loss, which better reflects real-world imperfect matching scenarios than traditional methods. Finally, to address challenges such as limited dataset size, training instability, and large-scale differences between drone images and satellite tiles, we adopt a Vision Transformer (ViT) baseline and integrate CNN-extracted features into its patch embedding layer. Full article
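The abstract does not spell out the Incomp-NPair Loss, which is designed for imperfect one-to-many matches; the generic N-pair (softmax cross-entropy over similarities) objective it builds on can be sketched in pure Python:

```python
import math

def npair_loss(sim_row, positive_idx):
    """Standard N-pair / InfoNCE-style loss for one drone image scored
    against N candidate satellite tiles: cross-entropy of a softmax over
    the similarity row, with the true tile at positive_idx. The paper's
    Incomp-NPair Loss modifies this for imperfect matches; that variant
    is not reproduced here."""
    m = max(sim_row)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in sim_row]
    return -math.log(exps[positive_idx] / sum(exps))

# One drone image vs. four satellite tiles; tile 2 is the true match.
sims = [0.1, 0.3, 0.9, 0.2]
loss = npair_loss(sims, 2)
```

The loss shrinks as the positive tile's similarity dominates the row, which is exactly the retrieval behavior the one-to-many setup needs.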

20 pages, 8235 KB  
Article
Enhancing Search and Rescue Missions with UAV Thermal Video Tracking
by Piero Fraternali, Luca Morandini and Riccardo Motta
Remote Sens. 2025, 17(17), 3032; https://doi.org/10.3390/rs17173032 - 1 Sep 2025
Abstract
Wilderness Search and Rescue (WSAR) missions are time-critical emergency response operations that require locating a lost person within a short timeframe. Large forested terrains must be explored in challenging environments and adverse conditions. Unmanned Aerial Vehicles (UAVs) equipped with thermal cameras enable the efficient exploration of vast areas. However, manual analysis of the huge amount of collected data is difficult, time-consuming, and prone to errors, increasing the risk of missing a person. This work proposes an object detection and tracking pipeline that automatically analyzes UAV thermal videos in real-time to identify lost people in forest environments. The tracking module combines information from multiple viewpoints to suppress false alarms and focus responders’ efforts. In this moving camera scenario, tracking performance is enhanced by introducing a motion compensation module based on known camera poses. Experimental results on the collected thermal video dataset demonstrate the effectiveness of the proposed tracking-based approach by achieving a Precision of 90.3% and a Recall of 73.4%. On a dataset of UAV thermal images, the introduced camera alignment technique increases the Recall by 6.1%, with negligible computational overhead, reaching 35.2 FPS. The proposed approach, optimized for real-time video processing, has direct application in real-world WSAR missions to improve operational efficiency. Full article
(This article belongs to the Section Earth Observation for Emergency Management)
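The Precision (90.3%) and Recall (73.4%) figures above come from matching predicted detections against ground-truth person locations. A common recipe, sketched here under the assumption of greedy center-distance matching (the paper's exact matching criterion is not stated in the abstract):

```python
def match_detections(preds, gts, max_dist=5.0):
    """Greedily match predicted (x, y) centers to ground-truth centers
    within max_dist; each ground truth matches at most once.
    Returns (precision, recall)."""
    unmatched = list(gts)
    tp = 0
    for px, py in preds:
        best, best_d = None, max_dist
        for g in unmatched:
            d = ((px - g[0]) ** 2 + (py - g[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = g, d
        if best is not None:
            unmatched.remove(best)   # consume the matched ground truth
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# Two true positives, one false alarm, no missed person:
p, r = match_detections([(0, 0), (10, 10), (50, 50)], [(1, 1), (11, 9)])
```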

28 pages, 1950 KB  
Review
Remote Sensing Approaches for Water Hyacinth and Water Quality Monitoring: Global Trends, Techniques, and Applications
by Lakachew Y. Alemneh, Daganchew Aklog, Ann van Griensven, Goraw Goshu, Seleshi Yalew, Wubneh B. Abebe, Minychl G. Dersseh, Demesew A. Mhiret, Claire I. Michailovsky, Selamawit Amare and Sisay Asress
Water 2025, 17(17), 2573; https://doi.org/10.3390/w17172573 - 31 Aug 2025
Abstract
Water hyacinth (Eichhornia crassipes), native to South America, is a highly invasive aquatic plant threatening freshwater ecosystems worldwide. Its rapid proliferation negatively impacts water quality, biodiversity, and navigation. Remote sensing offers an effective means to monitor such aquatic environments by providing extensive spatial and temporal coverage with improved resolution. This systematic review examines remote sensing applications for monitoring water hyacinth and water quality in studies published from 2014 to 2024. Seventy-eight peer-reviewed articles were selected from the Web of Science, Scopus, and Google Scholar following strict criteria. The research spans 25 countries across five continents, focusing mainly on lakes (61.5%), rivers (21%), and wetlands (10.3%). Approximately 49% of studies addressed water quality, 42% focused on water hyacinth, and 9% covered both. The Sentinel-2 Multispectral Instrument (MSI) was the most used sensor (35%), followed by the Landsat 8 Operational Land Imager (OLI) (26%). Multi-sensor fusion, especially Sentinel-2 MSI with Unmanned Aerial Vehicles (UAVs), was frequently applied to enhance monitoring capabilities. Detection accuracies ranged from 74% to 98% using statistical, machine learning, and deep learning techniques. Key challenges include limited ground-truth data and inadequate atmospheric correction. The integration of high-resolution sensors with advanced analytics shows strong promise for effective inland water monitoring. Full article
(This article belongs to the Section Ecohydrology)

21 pages, 6783 KB  
Article
The Uptake and Translocation of Lead, Chromium, Cadmium, and Zinc by Tomato Plants Grown in Nutrient and Contaminated Nutrient Solutions: Implications for Food Safety
by Radmila Milačič Ščančar, Katarina Kozlica, Stefan Marković, Pia Leban, Janja Vidmar, Ester Heath, Nina Kacjan Maršić, Špela Železnikar and Janez Ščančar
Toxics 2025, 13(9), 738; https://doi.org/10.3390/toxics13090738 (registering DOI) - 31 Aug 2025
Abstract
The uptake and translocation of Pb, Cr, Cd, and Zn in tomato plants (Solanum lycopersicum L. Rally) were investigated. Tomato seedlings were grown for five weeks in pots containing 40 L of Hoagland nutrient solution (pH 7) or contaminated nutrient solutions at two concentration levels for each element: Cr (100 and 1000 ng/mL), Zn (100 and 1000 ng/mL), Pb (100 and 500 ng/mL), and Cd (50 and 500 ng/mL). The solutions were replenished weekly to maintain a volume of 40 L (pH 7), and 10 mL samples were collected for elemental analysis. After five weeks, the plants were harvested and separated into roots, stems, leaves, and fruits. These samples underwent microwave-assisted digestion, and the element concentrations were determined by inductively coupled plasma mass spectrometry (ICP-MS). The results revealed that the elements were mainly accumulated in the roots, with much lower concentrations determined in the fruits. Pb and Cr accumulated only minimally in fruits, with Pb levels of 0.0009 mg/kg wet weight at LI and 0.003 mg/kg wet weight at LII, and Cr levels of 0.028 mg/kg wet weight at LI and 0.031 mg/kg wet weight at LII. The Pb levels did not exceed the permissible limits set by EC regulations (0.05 mg/kg wet weight). Zn exhibited the highest accumulation in fruits, with 2.17 mg/kg wet weight at LI and 4.8 mg/kg wet weight at LII. By contrast, the Cd concentrations in fruits (0.25 mg/kg wet weight at LI and 1.1 mg/kg wet weight at LII) exceeded the EC regulatory limit of 0.02 mg/kg wet weight. The uptake of other essential elements into the tomato plant remained largely unaffected by the presence of contaminants. These results provide valuable insights into food safety. Laser ablation (LA)-ICP-MS imaging revealed an even distribution of Cd and Zn in the leaves of plants grown in contaminated nutrient solutions. By contrast, Cr and Pb were predominantly localized in the leaf veins and at the leaf apex, suggesting different transport mechanisms for these elements from the roots to the aerial parts of the plant. Full article

20 pages, 5971 KB  
Article
A Novel UAV- and AI-Based Remote Sensing Approach for Quantitative Monitoring of Jellyfish Populations: A Case Study of Acromitus flagellatus in Qinglan Port
by Fang Zhang, Shuo Wang, Yanhao Qiu, Nan Wang, Song Sun and Hongsheng Bi
Remote Sens. 2025, 17(17), 3020; https://doi.org/10.3390/rs17173020 - 31 Aug 2025
Abstract
The frequency of jellyfish blooms in marine ecosystems has been rising globally, attracting significant attention from the scientific community and the general public. Low-altitude remote sensing with Unmanned Aerial Vehicles (UAVs) offers a promising approach for rapid, large-scale, and automated image acquisition, making it an effective tool for jellyfish population monitoring. This study employed UAVs for extensive sea surface surveys, achieving quantitative monitoring of the spatial distribution of jellyfish and optimizing flight altitude through gradient experiments. We developed a “bell diameter measurement model” for estimating jellyfish bell diameters from aerial images and used the Mask R-CNN algorithm to identify and count jellyfish automatically. This method was tested in Qinglan Port, where we monitored Acromitus flagellatus populations from mid-April to mid-May 2021 and late May 2023. Our results show that the UAVs can monitor jellyfish with bell diameters of 5 cm or more, and the optimal flight height is 100–150 m. The bell diameter measurement model, defined as L = 0.0103 × H × N + 0.1409, showed no significant deviation from field measurements. Compared to visual identification by human experts, the automated method achieved high accuracy while reducing labor and time costs. Case analysis revealed that the abundance of A. flagellatus in Qinglan Port initially increased and then decreased from mid-April to mid-May 2021, displaying a distinct patchy distribution. During this period, the average bell diameter gradually increased from 15.0 ± 3.4 cm to 15.5 ± 4.3 cm, with observed sizes ranging from 8.2 to 24.5 cm. This study introduces a novel, efficient, and cost-effective UAV-based method for quantitative monitoring of large jellyfish populations in surface waters, with broad applicability. Full article
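The bell diameter measurement model is given explicitly in the abstract as L = 0.0103 × H × N + 0.1409. The abstract does not define the symbols; reading H as flight height and N as the measured pixel extent of the jellyfish is our assumption:

```python
def bell_diameter_cm(height_m, n_pixels):
    """Bell diameter model from the abstract: L = 0.0103 * H * N + 0.1409.
    Interpreting H as flight height (m) and N as the pixel extent of the
    bell is an assumption; the abstract does not define the symbols."""
    return 0.0103 * height_m * n_pixels + 0.1409

# At the optimal 100 m flight height, a bell spanning 15 pixels:
diameter = bell_diameter_cm(100, 15)
```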

17 pages, 16767 KB  
Article
AeroLight: A Lightweight Architecture with Dynamic Feature Fusion for High-Fidelity Small-Target Detection in Aerial Imagery
by Hao Qiu, Xiaoyan Meng, Yunjie Zhao, Liang Yu and Shuai Yin
Sensors 2025, 25(17), 5369; https://doi.org/10.3390/s25175369 (registering DOI) - 30 Aug 2025
Abstract
Small-target detection in Unmanned Aerial Vehicle (UAV) aerial images remains a significant and unresolved challenge in aerial image analysis, hampered by low target resolution, dense object clustering, and complex, cluttered backgrounds. In order to cope with these problems, we present AeroLight, a novel and efficient detection architecture that achieves high-fidelity performance in resource-constrained environments. AeroLight is built upon three key innovations. First, we have optimized the feature pyramid at the architectural level by integrating a high-resolution head specifically designed for minute object detection. This design enhances sensitivity to fine-grained spatial details while streamlining redundant and computationally expensive network layers. Second, a Dynamic Feature Fusion (DFF) module is proposed to adaptively recalibrate and merge multi-scale feature maps, mitigating information loss during integration and strengthening object representation across diverse scales. Finally, we enhance the localization precision of irregular-shaped objects by refining bounding box regression using a Shape-IoU loss function. AeroLight is shown to improve mAP50 and mAP50-95 by 7.5% and 3.3%, respectively, on the VisDrone2019 dataset, while reducing the parameter count by 28.8% when compared with the baseline model. Further validation on the RSOD dataset and Huaxing Farm Drone dataset confirms its superior performance and generalization capabilities. AeroLight provides a powerful and efficient solution for real-world UAV applications, setting a new standard for lightweight, high-precision object recognition in aerial imaging scenarios. Full article
(This article belongs to the Section Remote Sensors)

30 pages, 25011 KB  
Article
Multi-Level Contextual and Semantic Information Aggregation Network for Small Object Detection in UAV Aerial Images
by Zhe Liu, Guiqing He and Yang Hu
Drones 2025, 9(9), 610; https://doi.org/10.3390/drones9090610 - 29 Aug 2025
Abstract
In recent years, detection methods for generic object detection have achieved significant progress. However, due to the large number of small objects in aerial images, mainstream detectors struggle to achieve a satisfactory detection performance. The challenges of small object detection in aerial images are primarily twofold: (1) Insufficient feature representation: The limited visual information for small objects makes it difficult for models to learn discriminative feature representations. (2) Background confusion: Abundant background information introduces more noise and interference, causing the features of small objects to easily be confused with the background. To address these issues, we propose a Multi-Level Contextual and Semantic Information Aggregation Network (MCSA-Net). MCSA-Net includes three key components: a Spatial-Aware Feature Selection Module (SAFM), a Multi-Level Joint Feature Pyramid Network (MJFPN), and an Attention-Enhanced Head (AEHead). The SAFM employs a sequence of dilated convolutions to extract multi-scale local context features and combines a spatial selection mechanism to adaptively merge these features, thereby obtaining the critical local context required for the objects, which enriches the feature representation of small objects. The MJFPN introduces multi-level connections and weighted fusion to fully leverage the spatial detail features of small objects in feature fusion and enhances the fused features further through a feature aggregation network. Finally, the AEHead is constructed by incorporating a sparse attention mechanism into the detection head. The sparse attention mechanism efficiently models long-range dependencies by computing the attention between the most relevant regions in the image while suppressing background interference, thereby enhancing the model’s ability to perceive targets and effectively improving the detection performance. Extensive experiments on four datasets, VisDrone, UAVDT, MS COCO, and DOTA, demonstrate that the proposed MCSA-Net achieves an excellent detection performance, particularly in small object detection, surpassing several state-of-the-art methods. Full article
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)
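The SAFM's "sequence of dilated convolutions" grows the receptive field without adding kernel weights: spacing the taps by a dilation factor widens the context each output sees. A 1-D pure-Python illustration of the mechanism (not the paper's actual module):

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D dilated convolution with valid padding: each kernel tap is
    spaced `dilation` samples apart, so a 3-tap kernel covers a receptive
    field of 2 * dilation + 1 samples with the same 3 weights."""
    span = (len(kernel) - 1) * dilation
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]                      # simple difference kernel
out_d1 = dilated_conv1d(x, k, 1)    # receptive field 3
out_d2 = dilated_conv1d(x, k, 2)    # receptive field 5, same 3 weights
```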

19 pages, 13244 KB  
Article
MWR-Net: An Edge-Oriented Lightweight Framework for Image Restoration in Single-Lens Infrared Computational Imaging
by Xuanyu Qian, Xuquan Wang, Yujie Xing, Guishuo Yang, Xiong Dun, Zhanshan Wang and Xinbin Cheng
Remote Sens. 2025, 17(17), 3005; https://doi.org/10.3390/rs17173005 - 29 Aug 2025
Abstract
Infrared video imaging is a cornerstone technology for environmental perception, particularly in drone-based remote sensing applications such as disaster assessment and infrastructure inspection. Conventional systems, however, rely on bulky optical architectures that limit deployment on lightweight aerial platforms. Computational imaging offers a promising alternative by integrating optical encoding with algorithmic reconstruction, enabling compact hardware while maintaining imaging performance comparable to sophisticated multi-lens systems. Nonetheless, achieving real-time video-rate computational image restoration on resource-constrained unmanned aerial vehicles (UAVs) remains a critical challenge. To address this, we propose Mobile Wavelet Restoration-Net (MWR-Net), a lightweight deep learning framework tailored for real-time infrared image restoration. Built on a MobileNetV4 backbone, MWR-Net leverages depthwise separable convolutions and an optimized downsampling scheme to minimize parameters and computational overhead. A novel wavelet-domain loss enhances high-frequency detail recovery, while the modulation transfer function (MTF) is adopted as an optics-aware evaluation metric. With only 666.37 K parameters and 6.17 G MACs, MWR-Net achieves a PSNR of 37.10 dB and an SSIM of 0.964 on a custom dataset, outperforming a pruned U-Net baseline. Deployed on an RK3588 chip, it runs at 42 FPS. These results demonstrate MWR-Net’s potential as an efficient and practical solution for UAV-based infrared sensing applications. Full article
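A wavelet-domain loss penalizes reconstruction error on wavelet subbands rather than raw pixels, which lets the high-frequency band be weighted up to favor detail recovery. A single-level 1-D Haar sketch of the idea (the paper's exact wavelet, levels, and weighting are not given in the abstract; the weight below is illustrative):

```python
def haar_1d(x):
    """Single-level Haar transform: per-pair averages (low band) and
    per-pair differences (high band). len(x) must be even."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

def wavelet_l1_loss(pred, target, high_weight=2.0):
    """L1 loss on Haar subbands, with the high-frequency band up-weighted.
    The 2.0 weight is an illustrative choice, not the paper's value."""
    pl, ph = haar_1d(pred)
    tl, th = haar_1d(target)
    loss_low = sum(abs(a - b) for a, b in zip(pl, tl))
    loss_high = sum(abs(a - b) for a, b in zip(ph, th))
    return loss_low + high_weight * loss_high

perfect = wavelet_l1_loss([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
# A blurred prediction matches the low band but loses the high band:
blurred = wavelet_l1_loss([1.5, 1.5, 3.5, 3.5], [1.0, 2.0, 3.0, 4.0])
```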

23 pages, 7196 KB  
Article
Field-Scale Maize Yield Estimation Using Remote Sensing with the Integration of Agronomic Traits
by Shuai Bao, Yiang Wang, Shinai Ma, Huanjun Liu, Xiyu Xue, Yuxin Ma, Mingcong Zhang and Dianyao Wang
Agriculture 2025, 15(17), 1834; https://doi.org/10.3390/agriculture15171834 - 29 Aug 2025
Abstract
Maize (Zea mays L.) is a key global cereal crop with significant relevance to food security. Maize yield prediction is challenged by cultivar diversity and varying management practices. This preliminary study was conducted at Youyi Farm, Heilongjiang Province, China. Three maize cultivars (Songyu 438, Dika 1220, Dika 2188), two fertilization rates (700 and 800 kg·ha−1), and three planting densities (70,000, 75,000, and 80,000 plants·ha−1) were evaluated across 18 distinct cropping treatments. During the V6 (Vegetative 6-leaf stage), VT (Tasseling stage), R3 (Milk stage), and R6 (Physiological maturity) growth stages of maize, multi-temporal canopy spectral images were acquired using an unmanned aerial vehicle (UAV) equipped with a multispectral sensor. In situ measurements of key agronomic traits, including plant height (PH), stem diameter (SD), leaf area index (LAI), and relative chlorophyll content (SPAD), were conducted. The optimal vegetation indices (VIs) and agronomic traits were selected for developing a maize yield prediction model using the random forest (RF) algorithm. Results showed the following: (1) Vegetation indices derived from the red-edge band, particularly the normalized difference red-edge index (NDRE), exhibited a strong correlation with maize yield (R = 0.664), especially during the tasseling to milk ripening stage; (2) The integration of LAI and SPAD with NDRE improved model performance, achieving an R2 of 0.69—an increase of 23.2% compared to models based solely on VIs; (3) Incorporating SPAD values from middle-canopy leaves during the milk ripening stage further enhanced prediction accuracy (R2 = 0.74, RMSE = 0.88 t·ha−1), highlighting the value of vertical-scale physiological parameters in yield modeling. This study not only furnishes critical technical support for the application of UAV-based remote sensing in precision agriculture at the field-plot scale, but also charts a clear direction for the synergistic optimization of multi-dimensional agronomic traits and spectral features. Full article
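NDRE, the index the study found most predictive, is the standard normalized difference of near-infrared and red-edge reflectance:

```python
def ndre(nir, red_edge):
    """Normalized Difference Red-Edge index: (NIR - RE) / (NIR + RE).
    Inputs are band reflectances; the result lies in [-1, 1], with higher
    values indicating denser, chlorophyll-rich canopy."""
    return (nir - red_edge) / (nir + red_edge)

# Illustrative reflectances: a healthy canopy reflects strongly in NIR
# relative to the red edge, a sparse one much less so.
dense = ndre(0.45, 0.15)
sparse = ndre(0.30, 0.22)
```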

23 pages, 2766 KB  
Article
A Novel 6-DOF Passive Vibration Isolation System for Aviation Optoelectronic Turret and Its Impact Analysis on Optical Systems Imaging Performance
by Wenxin Shi, Lei Li, Haishuang Fu, Chen Shui, Yijian Wang, Dejiang Wang, Xiantao Li and Bao Zhang
Aerospace 2025, 12(9), 778; https://doi.org/10.3390/aerospace12090778 - 28 Aug 2025
Abstract
In recent years, with the rapid development of the unmanned aerial vehicle industry, aviation optoelectronic turrets have been widely applied in fields such as terrain exploration, disaster prevention and mitigation, and national defense. Vibration isolation systems play a critical role in ensuring their imaging performance. This paper proposes a novel eight-leg six-degree-of-freedom (6-DOF) passive vibration isolation system tailored to the characteristics of aviation optoelectronic turrets, addressing the limitations of traditional Stewart passive vibration isolation platforms. A static analysis of the system is conducted, deriving the general form of the mass matrix under application conditions for aviation optoelectronic turrets. Structural configuration conditions are established to ensure that the stiffness matrix and damping matrix are diagonal matrices. In dynamic analysis and simulations, the transmissibility in each direction is simulated, and the impact of leg failure on the vibration isolation performance of this redundant system is further investigated. Under random vibration excitation, the maximum rotational vibration angles of a specific aviation optoelectronic turret are simulated and analyzed, confirming its stable tracking capability and validating the effectiveness of the redundant leg design in the vibration isolation system. Full article
(This article belongs to the Section Aeronautics)
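For a passive isolator, the per-direction transmissibility simulated above reduces, in each decoupled mode, to the classic single-DOF base-excitation formula; the 6-DOF system's diagonal stiffness and damping matrices are what make this mode-by-mode view valid. A sketch of that formula (a 1-DOF approximation, not the paper's full model):

```python
import math

def transmissibility(freq_ratio, zeta):
    """Classic 1-DOF passive-isolator transmissibility:
    T = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    where r = excitation frequency / natural frequency and zeta is the
    damping ratio."""
    r = freq_ratio
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# Isolation only begins above r = sqrt(2); below that, vibration is
# amplified rather than attenuated.
amplified = transmissibility(1.0, 0.1)   # at resonance
isolated = transmissibility(3.0, 0.1)    # well above resonance
```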
26 pages, 29132 KB  
Article
DCS-YOLOv8: A Lightweight Context-Aware Network for Small Object Detection in UAV Remote Sensing Imagery
by Xiaozheng Zhao, Zhongjun Yang and Huaici Zhao
Remote Sens. 2025, 17(17), 2989; https://doi.org/10.3390/rs17172989 - 28 Aug 2025
Abstract
Small object detection in UAV-based remote sensing imagery is crucial for applications such as traffic monitoring, emergency response, and urban management. However, aerial images often suffer from low object resolution, complex backgrounds, and varying lighting conditions, leading to missed or false detections. To address these challenges, we propose DCS-YOLOv8, an enhanced object detection framework tailored for small target detection in UAV scenarios. The proposed model integrates a Dynamic Convolution Attention Mixture (DCAM) module to improve global feature representation and combines it with the C2f module to form the C2f-DCAM block. The C2f-DCAM block, together with a lightweight SCDown module for efficient downsampling, constitutes the backbone DCS-Net. In addition, a dedicated P2 detection layer is introduced to better capture high-resolution spatial features of small objects. To further enhance detection accuracy and robustness, we replace the conventional CIoU loss with a novel Scale-based Dynamic Balanced IoU (SDBIoU) loss, which dynamically adjusts loss weights based on object scale. Extensive experiments on the VisDrone2019 dataset demonstrate that the proposed DCS-YOLOv8 significantly improves small object detection performance while maintaining efficiency. Compared to the baseline YOLOv8s, our model increases precision from 51.8% to 54.2%, recall from 39.4% to 42.1%, mAP0.5 from 40.6% to 44.5%, and mAP0.5:0.95 from 24.3% to 26.9%, while reducing parameters from 11.1 M to 9.9 M. Moreover, real-time inference on RK3588 embedded hardware validates the model’s suitability for onboard UAV deployment in remote sensing applications. Full article
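The SDBIoU loss "dynamically adjusts loss weights based on object scale"; its exact form is not given in the abstract. The sketch below shows plain axis-aligned IoU plus a hypothetical scale-dependent weight that up-weights small ground-truth boxes, purely to illustrate the idea:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def scale_weighted_iou_loss(pred, gt, ref_area=32 * 32):
    """Hypothetical scale-based weighting (a stand-in for SDBIoU's idea,
    not the paper's formula): (1 - IoU), scaled up for ground-truth boxes
    smaller than a reference area and down for larger ones."""
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    weight = min(2.0, (ref_area / gt_area) ** 0.5) if gt_area > 0 else 2.0
    return weight * (1.0 - iou(pred, gt))

# Same overlap geometry, different scales: the small box is penalized more.
small_loss = scale_weighted_iou_loss((0, 0, 8, 8), (1, 1, 9, 9))
large_loss = scale_weighted_iou_loss((0, 0, 80, 80), (10, 10, 90, 90))
```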

13 pages, 2141 KB  
Article
Transformer-Based Semantic Segmentation of Japanese Knotweed in High-Resolution UAV Imagery Using Twins-SVT
by Sruthi Keerthi Valicharla, Roghaiyeh Karimzadeh, Xin Li and Yong-Lak Park
Information 2025, 16(9), 741; https://doi.org/10.3390/info16090741 - 28 Aug 2025
Abstract
Japanese knotweed (Fallopia japonica) is a noxious invasive plant species that requires scalable and precise monitoring methods. Current visually based ground surveys are resource-intensive and inefficient for detecting Japanese knotweed in landscapes. This study presents a transformer-based semantic segmentation framework for the automated detection of Japanese knotweed patches using high-resolution RGB imagery acquired with unmanned aerial vehicles (UAVs). We used the Twins Spatially Separable Vision Transformer (Twins-SVT), which utilizes a hierarchical architecture with spatially separable self-attention to effectively model long-range dependencies and multiscale contextual features. The model was trained on 6945 annotated aerial images collected in three sites infested with Japanese knotweed in West Virginia, USA. The results of this study showed that the proposed framework achieved superior performance compared to other transformer-based baselines. The Twins-SVT model achieved a mean Intersection over Union (mIoU) of 94.94% and an Average Accuracy (AAcc) of 97.50%, outperforming SegFormer, Swin-T, and ViT. These findings highlight the model’s ability to accurately distinguish Japanese knotweed patches from surrounding vegetation. The method and protocol presented in this research provide a robust, scalable solution for mapping Japanese knotweed through aerial imagery and highlight the successful use of advanced vision transformers in ecological and geospatial information analysis. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
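The mIoU (94.94%) and AAcc (97.50%) metrics above can both be computed from a per-class confusion matrix. A pure-Python sketch, taking AAcc as the mean of per-class recall (one common definition; the paper's exact convention is not stated in the abstract):

```python
def miou_and_aacc(conf):
    """conf[i][j] = pixels of true class i predicted as class j.
    mIoU averages per-class TP / (TP + FP + FN); AAcc here averages
    per-class recall (average accuracy)."""
    n = len(conf)
    ious, accs = [], []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp
        fp = sum(conf[r][c] for r in range(n)) - tp
        ious.append(tp / (tp + fp + fn) if tp + fp + fn else 0.0)
        accs.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(ious) / n, sum(accs) / n

# Two classes, background vs. knotweed (pixel counts are illustrative):
m, a = miou_and_aacc([[90, 10], [5, 95]])
```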
12 pages, 2172 KB  
Article
Instance Segmentation Method for Insulators in Complex Backgrounds Based on Improved SOLOv2
by Ze Chen, Yangpeng Ji, Xiaodong Du, Shaokang Zhao, Zhenfei Huo and Xia Fang
Sensors 2025, 25(17), 5318; https://doi.org/10.3390/s25175318 - 27 Aug 2025
Abstract
To precisely delineate the contours of insulators in complex transmission line images obtained from Unmanned Aerial Vehicle (UAV) inspections, and thereby facilitate subsequent defect analysis, this study proposes an instance segmentation framework based on an enhanced SOLOv2 model. The framework integrates a preprocessed edge channel, generated through the Non-Subsampled Contourlet Transform (NSCT), which strengthens the model's ability to accurately capture insulator edges. The input image resolution is also increased to 1200 × 1600, permitting more detailed edge extraction. In place of the original ResNet + FPN architecture, an improved HRNet is used as the backbone to effectively exploit multi-scale feature information and improve overall performance. To accommodate the larger input size, the network's channel count is reduced while its depth is increased, preserving an adequate receptive field without substantially increasing the parameter count. Additionally, a Convolutional Block Attention Module (CBAM) is incorporated to refine mask quality and improve detection precision. Furthermore, to bolster robustness and reduce annotation demands, a virtual dataset is generated using Unreal Engine 4 (UE4). Experimental results show that the proposed framework achieves superior performance, with AP0.50 (90.21%), AP0.75 (83.34%), and AP[0.50:0.95] (67.26%) on a test set of images supplied by the power grid, surpassing existing methods and contributing to intelligent transmission line inspection. Full article
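The AP0.50, AP0.75, and AP[0.50:0.95] figures above follow the COCO-style convention: a predicted instance mask counts as a true positive when its IoU with a ground-truth mask meets the threshold, and AP[0.50:0.95] averages AP over thresholds 0.50 to 0.95 in steps of 0.05. A small sketch of the mask-IoU test behind that convention (the masks and function names are illustrative assumptions, not the paper's evaluation code):

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two binary instance masks of identical shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# COCO-style thresholds used for AP[0.50:0.95]: 0.50, 0.55, ..., 0.95
THRESHOLDS = np.round(np.arange(0.50, 1.00, 0.05), 2)

def matches_per_threshold(pred_mask, gt_mask):
    """For one prediction/ground-truth pair, report at which IoU
    thresholds the prediction would count as a true positive."""
    iou = mask_iou(pred_mask, gt_mask)
    return {round(t, 2): iou >= t for t in THRESHOLDS}

# Toy insulator mask: prediction overlaps ground truth with IoU = 12/20 = 0.6
gt   = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True    # 16 px
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # 16 px, 12 shared
hits = matches_per_threshold(pred, gt)  # hit at 0.50-0.60, miss from 0.65 up
```

Averaging over the stricter thresholds is what makes AP[0.50:0.95] sensitive to mask-boundary quality, which is why the edge channel and CBAM refinements target it.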
(This article belongs to the Special Issue Recent Trends and Advances in Intelligent Fault Diagnostics)
55 pages, 5431 KB  
Review
Integration of Drones in Landscape Research: Technological Approaches and Applications
by Ayşe Karahan, Neslihan Demircan, Mustafa Özgeriş, Oğuz Gökçe and Faris Karahan
Drones 2025, 9(9), 603; https://doi.org/10.3390/drones9090603 - 26 Aug 2025
Abstract
Drones have rapidly emerged as transformative tools in landscape research, enabling high-resolution spatial data acquisition, real-time environmental monitoring, and advanced modelling that surpass the limitations of traditional methodologies. This scoping review systematically explores and synthesises the technological applications of drones within the context of landscape studies, addressing a significant gap in the integration of Uncrewed Aerial Systems (UASs) into environmental and spatial planning disciplines. The study investigates the typologies of drone platforms—including fixed-wing, rotary-wing, and hybrid systems—alongside a detailed examination of sensor technologies such as RGB, LiDAR, multispectral, and hyperspectral imaging. Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, a comprehensive literature search was conducted across Scopus, Web of Science, and Google Scholar, utilising predefined inclusion and exclusion criteria. The findings reveal that drone technologies are predominantly applied in mapping and modelling, vegetation and biodiversity analysis, water resource management, urban planning, cultural heritage documentation, and sustainable tourism development. Notably, vegetation analysis and water management have shown a remarkable surge in application over the past five years, highlighting global shifts towards sustainability-focused landscape interventions. These applications are critically evaluated in terms of spatial efficiency, operational flexibility, and interdisciplinary relevance. This review concludes that integrating drones with Geographic Information Systems (GISs), artificial intelligence (AI), and remote sensing frameworks substantially enhances analytical capacity, supports climate-resilient landscape planning, and offers novel pathways for multi-scalar environmental research and practice. Full article
(This article belongs to the Special Issue Drones for Green Areas, Green Infrastructure and Landscape Monitoring)