Search Results (2,439)

Search Parameters:
Keywords = aerial imagery

20 pages, 5026 KB  
Article
Estimating Aboveground Biomass of Oilseed Rape by Fusing Point Cloud Voxelization and Vegetation Indices Derived from UAV RGB Imagery
by Bingyu Bai, Tianci Chen, Yanxi Mo, Yushan Wu, Jiuyue Sun, Qiong Zou, Shaohong Fu, Yun Li, Haoran Shi, Qiaobo Wu, Jin Yang and Wanzhuo Gong
Remote Sens. 2026, 18(9), 1323; https://doi.org/10.3390/rs18091323 (registering DOI) - 25 Apr 2026
Abstract
To support low-cost, non-destructive crop growth monitoring, this study systematically compared different vegetation indices, voxel sizes, and camera angles using a point cloud voxelization approach combined with a vegetation index weighted canopy volume index (CVMVI) to assess aboveground biomass (AGB) in winter oilseed rape (Brassica napus L.). Field experiments were conducted from 2021 to 2024 at the Yangma Experimental Base of the Chengdu Academy of Agricultural and Forestry Sciences. Red, green, blue (RGB) imagery of oilseed rape was acquired using an unmanned aerial vehicle (UAV) during the following five key growth stages: seedling, bolting, flowering, podding, and maturity. Collected images were processed to generate point clouds, which were subsequently voxelized at four resolutions (0.03, 0.05, 0.07, and 0.1 m). CVMVI was constructed by integrating vegetation indices (VIs) derived from the RGB data and the voxelized canopy structural information. Regression models were established between the CVMVI values and field-measured AGB to estimate biomass. Model performance was evaluated using the coefficient of determination (R2), root mean square error (RMSE), and relative error (RE). There were strong correlations (r > 0.80) between the estimated and measured AGB across all voxelization treatments throughout the growth period. Among the 20 VIs tested, regression methods based on the blue green ratio index (BGI), color intensity index, blue red ratio index, vegetative index, and green red ratio index consistently showed superior estimation performance across three consecutive years, demonstrating their good applicability for estimating AGB in oilseed rape under varying agronomic conditions (different varieties, densities, and sowing dates). 
The cubic regression model CVMBGI performed best under a 45° UAV camera angle, with the highest R2 and lowest RMSE and RE (2021–2022: R2 = 0.864, RMSE = 2414.18 kg/ha, RE = 14.8%; 2022–2023: R2 = 0.754, RMSE = 2550.53 kg/ha, RE = 14.9%; 2023–2024: R2 = 0.863, RMSE = 1953.61 kg/ha, RE = 22.9%). Since the estimation performance showed negligible differences among voxel sizes, and the 0.1 m voxel offered the smallest data volume and shortest analysis time, the CVMBGI model with a 0.1 m voxel was selected as the preferred approach, providing a practical balance between estimation performance and processing demand. These findings highlight the application potential of point cloud voxelization technology for crop biomass estimation. This study proposes a novel, non-destructive, and efficient framework for estimating field crop AGB using low-cost UAV RGB imagery, facilitating the wider adoption of UAV technology in practical agricultural production. Full article
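The voxel-based canopy volume idea above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `canopy_volume` and `bgi` helpers, the point coordinates, and the band values are hypothetical stand-ins, and the paper's CVMVI weighting may differ in detail.

```python
import numpy as np

def canopy_volume(points, voxel_size=0.1):
    """Approximate canopy volume: count occupied voxels, multiply by voxel volume.

    points: (N, 3) array of point-cloud coordinates in metres.
    """
    idx = np.floor(points / voxel_size).astype(int)  # voxel index of each point
    occupied = np.unique(idx, axis=0).shape[0]       # number of distinct voxels hit
    return occupied * voxel_size ** 3

def bgi(blue, green):
    """Blue-green ratio index from mean band values (assumed form B/G)."""
    return blue / green

# Toy cloud: the first two points fall in the same 0.1 m voxel.
pts = np.array([[0.01, 0.02, 0.03],
                [0.04, 0.05, 0.06],
                [0.25, 0.25, 0.25]])
cvm = canopy_volume(pts, voxel_size=0.1)    # 2 voxels x 0.001 m^3 each
cvm_bgi = cvm * bgi(blue=0.12, green=0.30)  # vegetation-index-weighted volume
```

Coarser voxels shrink the point-cloud data volume sharply while changing the volume estimate only a little, which is the trade-off the study resolves in favour of the 0.1 m voxel.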

21 pages, 12435 KB  
Article
Mapping the Spatial Distribution of Urban Agriculture with a Novel Classification Framework: A Case Study of the Pearl River Delta Region
by Shanshan Feng, Ruiqing Chen, Shun Jiang, Xuying Huang, Chengrui Mao, Lei Zhang and Canfang Zhou
Agronomy 2026, 16(9), 862; https://doi.org/10.3390/agronomy16090862 - 24 Apr 2026
Abstract
Urban agriculture plays a critical yet increasingly complex role in sustainable urban development, especially in high-density regions undergoing rapid transformation. Accurate mapping of its spatial distribution and functional composition remains a methodological challenge due to its fragmented landscape, small plot sizes, and multifunctional nature. This study addresses this gap by developing and applying a novel hierarchical classification framework that integrates agricultural land cover types with key socio-economic functions to map urban agriculture in the Pearl River Delta (PRD), China. This framework is structured around agricultural land categories (i.e., cropland, garden, forest, grass, and water body) and further delineated by two primary production functions, planting and breeding, with a third functional dimension, leisure activities, proposed as a conceptual extension for future research. Using unmanned aerial vehicle (UAV) imagery and high-resolution satellite data, we constructed a spatial sample database for urban agriculture. The random forest algorithm was applied to classify urban agriculture with Gaofen-2 imagery, generating detailed spatial distribution maps across the study area, with consistently reliable overall accuracy (79.07–81.82%), though this may be slightly optimistic due to potential spatial autocorrelation between training and testing samples. While the framework performed exceptionally well for spectrally and spatially distinct classes such as water bodies and perennial plantations, challenges remained in discriminating among annual field crops due to spectral similarity. These findings underscore the potential of integrating multi-temporal remote sensing data to capture phenological variations for improved classification. 
This study provides a replicable, functionally informed mapping approach that not only advances the methodological toolkit for urban agriculture characterization but also offers a valuable evidence base for land use planning, agricultural policy, and sustainable urban development in rapidly urbanizing regions. Full article
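The random forest classification step can be sketched with scikit-learn. This is a generic illustration, not the authors' pipeline: the feature matrix, the two-class labels, and the model settings are synthetic stand-ins for the paper's Gaofen-2 features and agricultural land categories.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-sample features (e.g. band values and derived indices);
# labels are a synthetic two-class stand-in for the land-cover categories.
rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = (X[:, 3] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
pred = clf.predict(rng.random((10, 4)))  # predicted class per new sample
```

As the abstract cautions, accuracy estimated on samples spatially close to the training plots can be optimistic, so held-out samples should be spatially separated from training ones where possible.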

25 pages, 2086 KB  
Article
Estimating Canopy Structure Parameters and Leaf Nitrogen in Olive Orchards Using UAV Imagery Across Two Agro-Ecological Zones in Tunisia
by Marius Hobart, Olfa Boussadia, Amel Ben Hamouda, Antje Giebel, Pierre Ellssel, Cornelia Weltzien and Michael Schirrmann
Remote Sens. 2026, 18(9), 1300; https://doi.org/10.3390/rs18091300 - 24 Apr 2026
Abstract
Optimizing olive orchard management requires timely, per-tree data to enhance productivity and sustainability. Unoccupied aerial vehicle (UAV)-based red, green, and blue (RGB) imagery offers a low-cost solution for acquiring high-resolution spatiotemporal insights for orchard management, which are not yet common in Tunisia. This study monitored tree structural parameters, leaf area index (LAI), and leaf nitrogen content (%N DW) in two Tunisian olive orchards during 2022 and 2023. UAV-derived imagery was photogrammetrically processed into 3D point clouds and analyzed using an automated approach. Target variables of the automated approach included tree-wise estimates of height, projected crown area, and crown volume, as well as raster cell counts of the canopy cloud and spectral indices such as the normalized green-red difference index (NGRDI) and green leaf index (GLI). In addition, the estimated parameters per tree were used to model LAI and leaf nitrogen content. Analyses were conducted separately for trees represented by a high and a low number of points in the dense point cloud. Outcomes were compared to reference data collected in the field on dates close to the UAV flights. The findings showed strong relationships for the projected crown area (R2 = 0.82 and 0.91) and tree height (R2 = 0.89 and 0.88) when compared to reference values. Linear regression models for LAI (R2 = 0.73 and 0.68) and crown volume (R2 = 0.85 and 0.91) estimation also show strong relationships. However, leaf nitrogen estimation was not feasible from RGB spectral index values, as it showed a weak relationship (R2 = 0.34). A dataset with multispectral imagery could overcome this limitation but would increase costs, making it less suitable for the low-budget approach required in price-sensitive farming contexts, particularly in low-income regions. Full article
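Both spectral indices named above have standard RGB-only definitions, which is what makes the low-cost approach concrete. A minimal per-pixel sketch (the sample band values are synthetic):

```python
import numpy as np

def ngrdi(r, g):
    """Normalized green-red difference index: (G - R) / (G + R)."""
    return (g - r) / (g + r)

def gli(r, g, b):
    """Green leaf index: (2G - R - B) / (2G + R + B)."""
    return (2 * g - r - b) / (2 * g + r + b)

# Tiny synthetic RGB patch, reflectance-like values in [0, 1].
r = np.array([[0.20, 0.30]])
g = np.array([[0.40, 0.30]])
b = np.array([[0.10, 0.20]])
ng = ngrdi(r, g)    # greener pixels score higher
gl = gli(r, g, b)
```

Because neither index uses a near-infrared band, they track greenness well but, as the weak leaf-nitrogen result here shows, cannot substitute for multispectral data on chemistry-related traits.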
20 pages, 6728 KB  
Article
Early Post-Fire Assessments of Wildfires in a Natural Mixed Forest in Northeastern Japan Using Sentinel-2 dNBR and UAV RGB Imagery
by Le Tien Nguyen, Maximo Larry Lopez Caceres, Vladislav Bukin, Giacomo Corda and Takashi Kunisaki
Remote Sens. 2026, 18(9), 1262; https://doi.org/10.3390/rs18091262 - 22 Apr 2026
Abstract
Unmanned aerial vehicles (UAVs) have become an important component of multi-sensor remote sensing frameworks for post-fire forest monitoring because they provide ultra-high-resolution imagery for evaluating fine-scale vegetation response. This study assessed early-stage post-fire burn severity and forest health condition in a natural mixed forest affected by the 2024 wildfire in Nanyo, Yamagata, northeastern Japan. Burn severity was quantified using the differenced Normalized Burn Ratio (dNBR) derived from Sentinel-2 imagery acquired five months after the fire (October 2024). High-resolution UAV RGB orthomosaics and field surveys were used to classify trees into healthy, damaged, and dead categories. Mean plot-level burn severity was estimated using a weighted midpoint dNBR approach, and the tree mortality rate was calculated from plot-based tree counts. The results showed that low and moderate–low burn severity classes dominated most plots, with mean dNBR values ranging from 0.085 to 0.386. UAV-based interpretation revealed substantial variability in tree health condition among plots. In 2024, fire effects were expressed mainly as canopy damage rather than immediate stand-level mortality. Mortality rates ranged from 14.9% to 58.6%, and some higher-severity plots contained greater damage. Overall, Sentinel-2 dNBR captured landscape-scale burn severity patterns, whereas UAV imagery improved interpretation of fine-scale health variability in heterogeneous burned forests. Full article
(This article belongs to the Section Forest Remote Sensing)
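The dNBR computation itself is standard: the Normalized Burn Ratio is formed from near-infrared and shortwave-infrared reflectance before and after the fire, and the drop measures burn severity. A sketch with synthetic reflectances; the class breaks below are the commonly cited Key and Benson style thresholds and are indicative only, not necessarily those used in this study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Synthetic per-pixel reflectances: fire lowers NIR and raises SWIR.
d = dnbr(nir_pre=np.array([0.50]), swir_pre=np.array([0.20]),
         nir_post=np.array([0.30]), swir_post=np.array([0.35]))
# Indicative severity breaks: 0=unburned, 1=low, 2=moderate-low,
# 3=moderate-high, 4=high.
severity = np.digitize(d, [0.1, 0.27, 0.44, 0.66])
```

Under these breaks, the study's plot means of 0.085–0.386 fall in the unburned-to-moderate-low range, consistent with the dominance of low severity classes reported above.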

23 pages, 19480 KB  
Article
A Multi-Spatial Scale Integration Framework of UAV Image Features and Machine Learning for Predicting Root-Zone Soil Electrical Conductivity in the Arid Oasis Cotton Fields of Xinjiang
by Chenyu Li, Xinjun Wang, Qingfu Liang, Wenli Dong, Wanzhi Zhou, Yu Huang, Rui Qi, Shenao Wang and Jiandong Sheng
Agriculture 2026, 16(8), 913; https://doi.org/10.3390/agriculture16080913 - 21 Apr 2026
Abstract
Soil salinization is one of the primary forms of land degradation in arid and semi-arid regions, severely constraining agricultural production in Xinjiang’s oases. Unmanned aerial vehicle (UAV) imagery provides an effective means for precise monitoring of soil salinization, with image spatial resolution being a key factor affecting assessment accuracy. However, traditional single-scale remote sensing monitoring methods rely solely on spectral and textural features at the leaf scale (about 0.1 m resolution), neglecting the contribution to soil salinity assessment of features at coarser scales: the single-row canopy scale (0.5–1 m) and the single-membrane-covered area scale spanning six crop rows (about 2 m). Therefore, this study developed a multi-scale UAV imagery and machine learning framework to enhance soil electrical conductivity (EC) prediction accuracy. This study focuses on oasis cotton fields in Shaya County, Xinjiang. Based on UAV multispectral imagery, we resampled data to generate eight datasets at different spatial resolutions: 0.1, 0.5, 1, 1.5, 2, 2.5, 5, and 10 m. For each resolution, we calculated 21 spectral indices and 48 texture features to construct a feature set. At both single and multi-spatial scales, spectral indices, texture features, and their spectral-texture fusion features were constructed. Combining these with Backpropagation Neural Network (BPNN), Random Forest Regression (RFR), and Extreme Gradient Boosting (XGBoost) models, a soil EC estimation framework was developed. The impact of three feature combination schemes on cotton field soil conductivity estimation using single-scale UAV imagery was compared, the accuracy of soil EC estimation from multi-spatial scale versus single-scale UAV image features was assessed, and the optimal combination strategy for multi-spatial scale, multi-feature estimation was determined.
Results indicate that combining spectral and texture features yields the highest estimation accuracy for cotton field soil electrical conductivity in single-scale analysis. Multi-spatial scale image features outperform single-scale image features in estimating cotton field soil electrical conductivity accuracy. By comparing different feature combinations, when integrating 0.5 m spatial-scale spectra (S1, EVI, DVI, NDVI, Int1, SI) with 0.1 m texture features (RE1_ent, R_cor, RE1_cor, G_hom, B_mea, R_con, NIR_con), the XGBoost model achieved the optimal prediction accuracy (R2 = 0.693, RMSE = 0.515 dS/m), outperforming the methods using multiple features at a single scale. This study developed a novel multi-scale image feature fusion technique to construct a machine learning model. This method describes the image characteristics of soil electrical conductivity at different geographical scales, providing a reference approach for the rapid and accurate prediction of soil electrical conductivity in arid regions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
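The multi-scale stack described above can be emulated by block-averaging the finest raster down to coarser grids and concatenating features across scales. A numpy-only sketch, not the authors' code: the real pipeline uses 21 spectral indices and 48 GLCM-style texture features, for which the single `texture` scalar below is only a stand-in.

```python
import numpy as np

def block_mean(band, factor):
    """Resample a 2-D band to a coarser grid by averaging factor x factor blocks."""
    h, w = band.shape
    trimmed = band[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

# Synthetic 10 x 10 band at 0.1 m, resampled to a 0.5 m grid (factor 5).
fine = np.arange(100, dtype=float).reshape(10, 10)
coarse = block_mean(fine, 5)                 # 2 x 2 coarse-scale band
# Fused feature row per coarse cell: coarse-scale value + fine-scale texture.
texture = fine.std()                         # stand-in for a texture feature
features = np.column_stack([coarse.ravel(),
                            np.full(coarse.size, texture)])
```

This mirrors the winning combination in the study, where 0.5 m spectral indices were paired with 0.1 m texture features before being fed to XGBoost.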

24 pages, 3485 KB  
Article
A Trajectory Data-Driven Personalized Autonomous Driving Decision System for Driving Simulators
by Wenpeng Sun, Yu Zhang and Nengchao Lyu
Vehicles 2026, 8(4), 94; https://doi.org/10.3390/vehicles8040094 - 19 Apr 2026
Abstract
To meet the high-fidelity testing environment requirements for autonomous driving system development, driving simulators are gradually evolving from tools that “only provide scenes and interaction interfaces” into integrated verification platforms for autonomous driving capabilities. These simulators, in particular, need to feature testable and scalable decision-making modules. However, the autonomous driving functions in existing driving simulators mostly rely on rule-based or simplified model approaches, which are inadequate for depicting the complex interactions in real-world traffic and fail to meet the personalized decision-making needs under various driving styles. To address these challenges, this paper designs and implements a trajectory data-driven personalized autonomous driving decision system, using drone aerial imagery as the core data source to provide realistic background traffic flow and human-like decision-making capabilities. The proposed system can be interpreted as an integrated decision–planning–control framework deployed within a high-fidelity driving simulation platform. It consists of a driving style classification module based on drone trajectory data, a personalized decision module integrating inverse reinforcement learning and dynamic game theory, and a planning and control module. First, a natural driving database is built using 4997 real vehicle trajectories, and prior features of different driving styles are extracted through trajectory feature engineering and an improved K-means++ method. Based on this, a personalized decision-making framework that combines dynamic game theory and maximum entropy inverse reinforcement learning is proposed, aiming to learn the preference weights of different driving styles in terms of safety, comfort, and efficiency. Furthermore, the Dueling Network Architecture (DuDQN) is used to generate human-like lane-changing strategies. 
Subsequently, a real-time closed-loop execution of personalized decisions in the simulation platform is achieved through fifth-order polynomial trajectory planning, lateral Linear Quadratic Regulator (LQR) control, and longitudinal cascade Proportional–Integral–Derivative (PID) control. Experimental results show that the personalized decision model trained with drone data can realistically reproduce vehicle decision-making behaviors in natural traffic flows within the simulation environment and generate autonomous driving strategies that are highly consistent with different driving styles. This significantly enhances the humanization and personalization capabilities of the autonomous driving module in the driving simulator. Full article
(This article belongs to the Special Issue Data-Driven Smart Transportation Planning)
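Fifth-order polynomial trajectory planning, as used in the execution layer above, fixes position, velocity, and acceleration at both endpoints, which uniquely determines the six polynomial coefficients. A minimal sketch; the 3.5 m lateral offset and 4 s horizon are illustrative values, not from the paper.

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Solve for c in x(t) = c0 + c1*t + ... + c5*t**5 given boundary
    position/velocity/acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    return np.linalg.solve(A, np.array([x0, v0, a0, xT, vT, aT], dtype=float))

# Lateral lane change: 3.5 m offset over 4 s, at rest at both ends.
c = quintic_coeffs(0, 0, 0, 3.5, 0, 0, T=4.0)
t = np.linspace(0.0, 4.0, 9)
y = sum(c[k] * t**k for k in range(6))   # smooth S-shaped lateral profile
```

The resulting curve has continuous velocity and acceleration, which is why quintics are the usual reference trajectory for downstream LQR lateral tracking.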

19 pages, 4121 KB  
Technical Note
drone2report: A Configuration-Driven Multi-Sensor Batch-Processing Engine for UAV-Based Plot Analysis in Precision Agriculture
by Nelson Nazzicari, Giulia Moscatelli, Agostino Fricano, Elisabetta Frascaroli, Roshan Paudel, Eder Groli, Paolo De Franceschi, Giorgia Carletti, Nicolò Franguelli and Filippo Biscarini
Drones 2026, 10(4), 301; https://doi.org/10.3390/drones10040301 - 18 Apr 2026
Abstract
Unmanned aerial vehicles (UAVs) have become indispensable tools in precision agriculture and plant phenotyping, enabling the rapid, non-destructive assessment of crop traits across space and time. Equipped with RGB, multispectral, thermal, and other sensors, UAVs provide detailed information on canopy structure, physiology, and stress responses that can guide management decisions and accelerate breeding programs. Despite these advances, the downstream processing of UAV imagery remains technically demanding. Converting orthomosaics into standardized, biologically meaningful data often requires a combination of photogrammetry, geospatial analysis, and custom scripting, which can limit reproducibility and accessibility across research groups. We present drone2report, an open-source, Python-based software that processes orthomosaics from UAV flights to generate vegetation indices, summary statistics, derived subimages, and text (HTML) reports, supporting both research and applied crop breeding needs. Alongside the basic structure and functioning of drone2report, we also present five case studies that illustrate practical applications common in UAV-/drone-phenotyping of plants: (i) thresholding to remove background noise and highlight regions of interest; (ii) monitoring plant phenotypes over time; (iii) extracting information on plant height to detect events like lodging or the falling over of spikes; (iv) integrating multiple sensors (cameras) to construct and optimize new synthetic indices; (v) integrating a trained deep learning network to implement a classification task. These examples demonstrate the tool’s ability to automate analysis, integrate heterogeneous data and models, and support reproducible computation of agronomically relevant traits. drone2report streamlines orthorectified UAV-image processing for precision agriculture by linking orthomosaics to standardized, plot-level outputs.
Its modular, configuration-driven design allows transparent workflows, easy customization, and integration of multiple sensors within a unified analytical framework. By facilitating reproducible, multi-modal image analysis, drone2report lowers technical barriers to UAV-based phenotyping and opens the way to robust, data-driven crop monitoring and breeding applications. Full article
(This article belongs to the Special Issue Advances in UAV-Based Remote Sensing for Climate-Smart Agriculture)

23 pages, 1462 KB  
Article
From Above: Drone-Driven Computer Vision for Reliable Elephant Body Condition Assessment
by Dede Aulia Rahman, Toto Haryanto and Riki Herliansyah
Conservation 2026, 6(2), 49; https://doi.org/10.3390/conservation6020049 - 17 Apr 2026
Abstract
Assessing individual animal health is essential for detecting early ecological stress that may scale to population-level impacts. Yet, conventional capture-based methods are invasive and logistically challenging, particularly for large mammals. This study evaluates the accuracy of drone-based morphometric measurements as a non-invasive approach for estimating elephants’ Body Condition Index (BCI). Research was conducted in Way Kambas National Park, Sumatra, using a DJI Matrice 300 RTK equipped with a multisensor camera to acquire aerial imagery, primarily from a top-down perspective. Morphometric parameters were extracted through image preprocessing, segmentation, and edge detection using an OpenCV-based Canny algorithm, followed by coordinate and Euclidean distance analyses. Drone-derived measurements were validated against field-based morphometry in captive Sumatran elephants. Linear regression revealed strong agreement between methods, with R2 values ranging from 0.91 to 0.97. Mid-body width showed the highest accuracy (R2 = 0.97, MAPE = 2.66%, RMSE = 2.36), while other body dimensions also performed consistently well. BCI-related morphometric ratios exhibited minimal differences between drone and field measurements, confirming methodological reliability. As an exploratory extension, a preliminary allometric scaling framework was applied to estimate body condition proxies in free-ranging wild elephants except for mid-body width; however, these estimates are model-derived from total body length and should be interpreted as indicative rather than as direct morphometric assessments of body condition. These findings demonstrate that drone-based photogrammetry provides a validated, practical, and non-invasive method for morphometric measurement in captive elephants, with promising but as yet incompletely validated potential for application to wild populations. Full article
21 pages, 6896 KB  
Article
Comparative Evaluation of Segmentation-Based and Pose-Assisted Head Temperature Estimation from UAS Thermal Imagery Under Controlled Conditions
by Owais Ahmed, Justin Guye, M. Hassan Tanveer and Adeel Khalid
Drones 2026, 10(4), 295; https://doi.org/10.3390/drones10040295 - 17 Apr 2026
Abstract
This paper presents a vision-based framework for detecting humans and estimating head surface temperature from aerial thermal imagery acquired by Unmanned Aerial Systems (UAS). A comparative evaluation of recent object detection architectures was conducted to identify the most stable and reliable model for thermal human detection under varying flight altitudes. The selected framework integrates two head localization strategies, namely, segmentation-based mask slicing and pose-assisted keypoint localization, to extract head regions and compute per-pixel temperature values from radiometric metadata. The results show that cross-domain inference using pre-trained YOLOv11 models achieves reliable human detection across controlled outdoor environments. Between the two pipelines, the pose-assisted method produced temperature estimates closer to the expected human physiological range (36–38 °C), whereas the segmentation-based approach exhibited higher values attributable to mask boundary contamination and solar surface heating. In the absence of ground-truth validation from medical-grade sensors, these findings are characterized as relative comparisons rather than absolute accuracy claims. This study establishes a methodological foundation for future UAS-based thermal assessment systems and identifies critical calibration and validation requirements for field deployment. Full article

26 pages, 5204 KB  
Article
A Spatial-Frequency Joint Decoupling Network for Dense Small-Object Detection
by Zhexiang Zhao, Jintong Li and Peng Liu
Remote Sens. 2026, 18(8), 1203; https://doi.org/10.3390/rs18081203 - 16 Apr 2026
Abstract
Small-object detection in remote sensing imagery faces two specific challenges that existing lightweight detectors fail to address jointly: the irreversible loss of high-frequency boundary cues during repeated downsampling, and feature smearing between neighboring instances caused by uniform multi-scale fusion. This paper presents SFD-Net, a spatial–frequency adaptive network designed to explicitly address these two limitations for aerial imagery. A backbone network and a spatial–frequency adaptive neck are used in the proposed model. Wavelet-based downsampling is applied in the backbone to reduce aliasing while preserving high-frequency information. The direction-sensitive aggregation is incorporated to better capture oriented structural patterns. In the neck, asymmetric and scale-dependent feature routing is introduced to enhance shallow boundary cues, improve instance separation in crowded regions, and limit interference from deep semantic features. Experiments on the VisDrone-DET2019, UAVDT, SIMD, and NWPU VHR-10 datasets demonstrate that SFD-Net achieves a favorable balance between detection accuracy and computational cost. In particular, on the SIMD dataset, SFD-Net achieves 82.2% mAP@0.5 and 66.7% mAP@0.5:0.95 with only 3.4 M parameters and 8.3 GFLOPs. These results indicate that the proposed method is an effective and parameter-efficient solution for remote sensing small-object detection, especially in resource-constrained deployment scenarios. Full article
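The abstract does not specify the wavelet, so as an assumption this sketch uses the single-level Haar transform, the usual choice for wavelet-based downsampling: resolution is halved while the high-frequency detail bands are kept as extra channels rather than discarded, which is how boundary cues can survive repeated downsampling.

```python
import numpy as np

def haar_downsample(x):
    """Single-level 2-D Haar transform of an (H, W) map.

    Returns four (H/2, W/2) subbands: the low-pass average plus
    column-difference, row-difference, and diagonal-difference detail
    bands, stacked as channels.
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4   # low-pass average
    lh = (a - b + c - d) / 4   # column-difference detail
    hl = (a + b - c - d) / 4   # row-difference detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return np.stack([ll, lh, hl, hh])

# A vertical edge inside a 2x2 block: plain strided downsampling would
# blur it, but the detail band records it explicitly.
edge = np.tile([0., 1., 1., 1.], (4, 1))
bands = haar_downsample(edge)   # shape: (4, 2, 2)
```

SFD-Net's exact backbone design differs; this only illustrates why a wavelet decomposition, unlike strided convolution, is information-preserving at the downsampling step.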

36 pages, 23824 KB  
Article
Differential Morphological Profile Neural Networks for Semantic Segmentation
by David Huangal and J. Alex Hurt
Remote Sens. 2026, 18(8), 1188; https://doi.org/10.3390/rs18081188 - 15 Apr 2026
Abstract
Semantic segmentation of overhead remote sensing imagery supports critical applications in mapping, urban planning, and disaster response, yet state-of-the-art segmentation networks are predominantly designed for ground-perspective imagery and do not directly address remote sensing challenges such as extreme scale variation, foreground–background imbalance, and large image sizes. Rather than proposing new architectures, we take an architecture-agnostic approach by incorporating the differential morphological profile (DMP), a multi-scale shape extraction method based on grayscale morphology, as supplementary input to modern segmentation networks. We evaluate two integration strategies: a Direct-In approach, which adapts the input stem to accept DMP channels in place of or alongside RGB data, and a Hybrid DMP dual-stream architecture in which separate RGB and DMP encoders process each modality independently. Experiments on the iSAID, ISPRS Potsdam, and LoveDA benchmark datasets assess multiple DMP differentials and structuring element shapes. Results show that using the DMP as direct input to models generally under-performs RGB-only baselines, while the Hybrid DMP approach substantially closes this gap and in some cases surpasses baseline performance, with gains varying across object categories. In the strongest case, a Hybrid DMP SegNeXt-S model achieves a gain of +3.19 mIoU over the RGB-only baseline on the ISPRS Potsdam dataset, and Hybrid DMP models outperform the RGB-only baseline on two of the three benchmark datasets evaluated. These findings suggest that DMP features provide complementary shape information that, when properly integrated, can enhance semantic segmentation performance for overhead remote sensing imagery. Full article
(This article belongs to the Section AI Remote Sensing)
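The DMP itself is straightforward to reproduce: open the image with structuring elements of increasing size and stack the successive differences, so each band responds to bright structures of a particular width. A numpy-only sketch using square structuring elements; the sizes here are illustrative, and the paper evaluates several differentials and SE shapes.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _filter(x, k, func, pad):
    """Sliding k x k min/max filter with constant padding."""
    p = k // 2
    xp = np.pad(x, p, constant_values=pad)
    return func(sliding_window_view(xp, (k, k)), axis=(2, 3))

def opening(x, k):
    """Grayscale opening: erosion (min) then dilation (max), k x k square SE."""
    return _filter(_filter(x, k, np.min, np.inf), k, np.max, -np.inf)

def dmp(image, sizes=(3, 5, 7)):
    """Differential morphological profile: differences of successive openings."""
    opens = [image] + [opening(image, k) for k in sizes]
    return np.stack([opens[i] - opens[i + 1] for i in range(len(sizes))])

# A 2-pixel-wide bright blob is erased by the first (3 x 3) opening,
# so its entire response lands in the first DMP band.
img = np.zeros((9, 9))
img[4, 4:6] = 1.0
profile = dmp(img)   # shape: (3, 9, 9)
```

Because openings with growing convex SEs form a granulometry, each band is non-negative and encodes a size-selective shape response, which is the complementary information the Hybrid DMP encoder consumes.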

35 pages, 1113 KB  
Article
Intelligent UAV-UGV-SN Systems for Monitoring and Avoiding Wildfires in Context of Sustainable Development of Smart Regions
by Dmytro Korniienko, Nazar Serhiichuk, Vyacheslav Kharchenko, Herman Fesenko, Jose Borges and Nikolaos Bardis
Sustainability 2026, 18(8), 3908; https://doi.org/10.3390/su18083908 - 15 Apr 2026
Abstract
Advancing environmental monitoring through coordinated autonomous systems is central to sustainable smart region governance and data-driven territorial management. The article presents an engineering-oriented architecture and deployment methodology for an integrated wildfire monitoring and response system that combines unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and stationary sensor networks (SNs). We formalise hub-and-spoke infrastructure placement as a mixed-integer optimisation problem that accounts for platform types, endurance, travel times and logistical constraints, and propose a practical pre-processing pipeline (confidence scoring, resampling, Kalman/median filtering, strategy fusion) for heterogeneous telemetry and imagery. The system couples multimodal neural network processing (image backbones, clustering and time-series models) with online resource-allocation and mission-planning mechanisms to prioritise UAV/UGV sorties and dynamically select launch sites. The article describes scenario-driven operational modes (early warning, alarm verification, autonomous local extinguishing, post-fire recovery, sensor-gap compensation, and inter-hub reinforcement), defines validation protocols (synthetic experiments, precision/recall/F1, and hardware-in-the-loop testing), and proposes KPIs to assess environmental, social, and economic impacts for smart regions. The contribution is a reproducible, deployment-focused blueprint that bridges conceptual UAV–UGV–SN research and practical implementation, highlighting trade-offs in reliability, communication redundancy, and sustainability, and outlining directions for simulation, field pilots and algorithmic refinement. Full article
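The article formalises hub placement as a mixed-integer optimisation problem; as a rough illustration of the underlying objective, a greedy p-median heuristic repeatedly picks the candidate hub site that most reduces total spoke travel time. This is a toy sketch, not the paper's formulation, and the function and matrix names are hypothetical:

```python
def greedy_hubs(travel, p):
    """Greedy p-median heuristic for hub placement.
    travel[i][j] is the travel time from candidate hub site i to demand
    point j; returns the chosen site indices and the total spoke travel
    time when every demand point is served by its nearest chosen hub."""
    n_points = len(travel[0])
    chosen = []
    best = [float("inf")] * n_points  # best hub time seen so far per point

    for _ in range(p):
        def total_if_added(i):
            # Total travel time if site i were added to the current hubs.
            return sum(min(best[j], travel[i][j]) for j in range(n_points))

        candidates = [i for i in range(len(travel)) if i not in chosen]
        i_star = min(candidates, key=total_if_added)
        chosen.append(i_star)
        best = [min(best[j], travel[i_star][j]) for j in range(n_points)]
    return chosen, sum(best)
```

A full MILP would additionally encode platform types, endurance limits, and logistical constraints as the abstract describes; the greedy heuristic only conveys the hub-and-spoke cost structure.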
21 pages, 23093 KB  
Article
Keyframe-Guided Crack Segmentation and 3D Localization for UAV-Based Monocular Inspection
by Feifei Tang, Wuyuntana Gongzhabayier, Jing Li, Tao Zhou, Yue Qiu, Yong Zhan and Qiulin Song
Symmetry 2026, 18(4), 657; https://doi.org/10.3390/sym18040657 - 15 Apr 2026
Abstract
In unmanned aerial vehicle (UAV)-based monocular inspection, cracks typically present as geometrically asymmetric, elongated, low-contrast weak targets, making accurate segmentation and spatial localization challenging. Existing methods are susceptible to missed detections and false positives when handling slender cracks, and monocular 3D reconstruction for localization is often burdened by redundant frames, resulting in limited modeling efficiency. To mitigate these issues, we propose a high-precision framework for crack segmentation and spatial localization from UAV imagery. First, Oriented FAST and Rotated BRIEF–Simultaneous Localization and Mapping, version 3 (ORB-SLAM3) is adopted for keyframe selection to suppress data redundancy and improve reconstruction stability. Second, we develop an enhanced YOLOv11-seg model by integrating the Dilation-wise Residual Segmentation (DWRSeg) module, the Weighted IoU (WIoU) loss, and the Lightweight shared convolutional separator batch-normalization detection head (LSCSBD) to strengthen feature discrimination and segmentation robustness for slender cracks, yielding high-quality crack masks. Finally, the predicted masks are projected onto the reconstructed 3D surface to obtain precise spatial localization. Our experimental results demonstrate that the proposed approach improves the segmentation mAP@50 by 7.2% over the baseline while reducing computational complexity from 10.2 to 9.8 GFLOPs. In addition, keyframe-based processing reduces the 3D modeling time by 59.4% compared to that with full-frame reconstruction. Overall, the proposed framework jointly enhances crack segmentation accuracy and substantially accelerates 3D modeling and localization, providing an effective solution for efficient UAV-based crack inspection. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Intelligent Transportation)
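The final step, mapping predicted 2D crack masks to 3D positions, can be illustrated with standard pinhole back-projection. This is a simplified sketch that assumes a per-pixel depth map and known intrinsics and pose, rather than the paper's projection onto the reconstructed surface:

```python
import numpy as np

def crack_points_3d(mask, depth, K, T_wc):
    """Back-project crack-mask pixels to world coordinates.
    mask: (H, W) bool crack mask, depth: (H, W) depth in metres,
    K: 3x3 camera intrinsics, T_wc: 4x4 camera-to-world pose.
    Returns an (N, 3) array of world-frame points."""
    v, u = np.nonzero(mask)                    # pixel rows/cols of the crack
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]            # pinhole back-projection
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])  # homogeneous, camera frame
    return (T_wc @ pts_cam)[:3].T
```

In the paper's pipeline the pose would come from ORB-SLAM3 keyframes and the surface from the reconstruction, but the geometry of lifting a mask pixel into 3D is the same.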

25 pages, 2513 KB  
Article
YOLO-DAA: Directional Area Attention for Lightweight Tiny Object Detection in Maritime UAV Imagery
by Kuan-Chou Chen, Vinay Malligere Shivanna and Jiun-In Guo
Drones 2026, 10(4), 283; https://doi.org/10.3390/drones10040283 - 14 Apr 2026
Abstract
Tiny object detection in maritime Unmanned Aerial Vehicles (UAV) imagery remains challenging due to low-resolution targets, dynamic lighting, and vast water backgrounds that obscure fine spatial cues. This study introduces You Only Look Once – Directional Area Attention (YOLO-DAA), a lightweight yet direction-aware detection framework designed to enhance spatial reasoning and feature discrimination for maritime environments. The proposed model integrates two key components: the Spatial Reconstruction Unit (SRU), which dynamically filters redundant activations and reconstructs informative spatial features, and the Directional Area Attention (DAA), which introduces controllable row–column attention to model anisotropic dependencies. Together, they enable the network to capture orientation-sensitive structures such as elongated vessels and vertically aligned swimmers while maintaining real-time efficiency. Experimental results on Common Objects in Context (COCO) and SeaDronesSee datasets demonstrate that YOLO-DAA achieves significant improvements in both precision and recall, outperforming the YOLOv12-turbo baseline across multiple scales. In particular, the lightweight YOLO-DAA-n variant achieves a 12.5% AP95 gain on SeaDronesSee with minimal computational overhead. The findings confirm that directional attention and spatial reconstruction jointly enhance the representation of tiny maritime targets, offering an effective balance between accuracy and efficiency for real-world UAV deployments. Full article
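The row-column attention idea behind DAA can be caricatured in NumPy: pool the feature map along each axis, convert the pooled responses into softmax weights, and reweight the map so that informative rows and columns (e.g. an elongated vessel) are amplified. This is only a hypothetical fixed-weight sketch of the anisotropic mechanism; the actual DAA module is learned, and `alpha` is an invented sharpness knob:

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def directional_area_attention(feat, alpha=4.0):
    """Toy row-column attention over a (C, H, W) feature map.
    Rows/columns with strong mean activation receive weights above 1,
    modelling direction-sensitive (anisotropic) spatial dependencies."""
    C, H, W = feat.shape
    row_w = _softmax(alpha * feat.mean(axis=(0, 2))) * H  # (H,), mean weight ~1
    col_w = _softmax(alpha * feat.mean(axis=(0, 1))) * W  # (W,), mean weight ~1
    return feat * row_w[None, :, None] * col_w[None, None, :]
```

Because rows and columns are weighted independently, a thin horizontal target boosts a single row weight without diluting attention across the vast water background, which is the intuition the abstract describes.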

24 pages, 11059 KB  
Article
Large-Scale Modeling of Urban Rooftop Solar Energy Potential Using UAS-Based Digital Photogrammetry and GIS Spatial Analysis: A Case Study of Sofia City, Bulgaria
by Stelian Dimitrov, Martin Iliev, Bilyana Borisova, Stefan Petrov, Ivo Ihtimanski, Leonid Todorov, Ivan Ivanov, Stoyan Valchev and Kristian Georgiev
Urban Sci. 2026, 10(4), 210; https://doi.org/10.3390/urbansci10040210 - 14 Apr 2026
Abstract
Urban rooftop photovoltaic systems represent a substantial yet still underutilized renewable energy resource, particularly in high-density residential environments. Accurate large-scale assessment of rooftop solar potential, however, remains challenging due to the complex geometry of urban morphology and the limited availability of high-resolution geospatial data. This study presents a large-scale methodological framework for estimating the theoretical photovoltaic potential of urban rooftop spaces using Unmanned Aerial System (UAS)-based digital photogrammetry and GIS-based spatial analysis. The approach integrates centimeter-resolution Digital Surface Models (DSMs) and orthophotos derived from fixed-wing UAS surveys with detailed rooftop vectorization and solar radiation modeling implemented in a GIS environment. The methodology accounts for rooftop geometry, surface orientation, slope, shading effects, and rooftop-mounted obstacles. The workflow begins with the collection of high-resolution RGB imagery suitable for detailed three-dimensional reconstruction; the images are captured with a UAS equipped with a S.O.D.A. 3D photogrammetric camera, and a dense, georeferenced three-dimensional point cloud is generated from them. From the point cloud, a high-resolution DSM was produced. Rooftop boundaries and rooftop-mounted structures were digitized from an orthophoto created from the UAS imagery. The analysis workflow performs solar modeling in ArcGIS Pro, including the calculation of incoming solar radiation. The next step filters out low-radiation rooftops, steep slopes, and north-oriented rooftops. Finally, the potential electricity production is calculated. The framework was applied to high-density residential districts in Sofia, Bulgaria, dominated by prefabricated panel buildings with predominantly flat rooftops.
Drone applications in such studies are typically restricted to modeling individual roofs, which severely limits their scalability for district-wide evaluations. To overcome this, the study employs a specialized fixed-wing UAS uniquely certified for legal operations over densely populated urban environments. This platform rapidly maps large territories, ensuring consistent lighting and shading conditions that significantly enhance the accuracy of subsequent rooftop digitization. Furthermore, the resulting centimeter-level precision enables the exact vectorization of micro-rooftop obstacles. Capturing these intricate details is a critical innovation that effectively prevents the overestimation of solar energy potential commonly observed in conventional large-scale models. Solar radiation was modeled at the pixel level for a full annual cycle and filtered using photovoltaic suitability criteria, including minimum annual radiation thresholds, slope, and aspect constraints. Theoretical electricity production was subsequently estimated using zonal statistics and system performance parameters representative of contemporary photovoltaic installations. The results indicate a total theoretical annual electricity potential of approximately 76.7 GWh for the analyzed rooftop spaces, with an average production of about 34 MWh per rooftop and pronounced spatial variability driven by rooftop geometry and exposure conditions. The findings demonstrate the significant renewable energy potential embedded in existing urban rooftop infrastructure and highlight the applicability of UAS-based photogrammetry for high-resolution, large-area solar potential assessments. The proposed framework provides actionable information for urban energy planning, municipal solar cadaster development, and the strategic integration of photovoltaic systems into dense urban environments, particularly in regions lacking open-access high-resolution geospatial datasets. Full article
(This article belongs to the Special Issue Remote Sensing & GIS Applications in Urban Science)
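The filter-then-estimate step described above amounts to a suitability filter followed by the standard PV yield formula E = A · G · η · PR (area times annual radiation times module efficiency times performance ratio). The sketch below illustrates that logic; all thresholds and system parameters are illustrative assumptions, not the study's calibrated values:

```python
def rooftop_pv_potential(segments, min_rad=950.0, max_slope=45.0,
                         efficiency=0.20, performance_ratio=0.8):
    """Sum the annual yield (kWh) of rooftop segments that pass a
    PV suitability filter. Each segment is a dict with keys area_m2,
    radiation (kWh/m2/yr), slope_deg, and aspect_deg (0 deg = north)."""
    total_kwh = 0.0
    for s in segments:
        north_facing = s["aspect_deg"] < 45 or s["aspect_deg"] > 315
        if s["radiation"] < min_rad or s["slope_deg"] > max_slope:
            continue
        if s["slope_deg"] > 5 and north_facing:
            continue  # aspect only matters on tilted roofs, not flat ones
        total_kwh += s["area_m2"] * s["radiation"] * efficiency * performance_ratio
    return total_kwh
```

In the study, the per-pixel radiation comes from the ArcGIS Pro solar model and segments come from the vectorized rooftops; the zonal-statistics aggregation then plays the role of the summation here.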
