Search Results (1,876)

Search Parameters:
Keywords = light detection and ranging (LiDAR)

20 pages, 8109 KB  
Article
Development of an Orchard Inspection Robot: A ROS-Based LiDAR-SLAM System with Hybrid A*-DWA Navigation
by Jiwei Qu, Yanqiu Gu, Zhinuo Qiu, Kangquan Guo and Qingzhen Zhu
Sensors 2025, 25(21), 6662; https://doi.org/10.3390/s25216662 - 1 Nov 2025
Abstract
The application of orchard inspection robots has become increasingly widespread. However, achieving autonomous navigation in unstructured environments continues to present significant challenges. This study investigates the Simultaneous Localization and Mapping (SLAM) navigation system of an orchard inspection robot and evaluates its performance using Light Detection and Ranging (LiDAR) technology. A mobile robot that integrates tightly coupled multi-sensors is developed and implemented. The integration of LiDAR and Inertial Measurement Units (IMUs) enables the perception of environmental information. Moreover, the robot’s kinematic model is established, and coordinate transformations are performed based on the Unified Robotics Description Format (URDF). The URDF facilitates the visualization of robot features within the Robot Operating System (ROS). ROS navigation nodes are configured for path planning, where an improved A* algorithm, combined with the Dynamic Window Approach (DWA), is introduced to achieve efficient global and local path planning. Comparison of the simulation results with classical algorithms demonstrated that the implemented algorithm exhibits superior search efficiency and smoothness. The robot’s navigation performance is rigorously tested, focusing on navigation accuracy and obstacle avoidance capability. Results demonstrated that, during temporary stops at waypoints, the robot exhibits an average lateral deviation of 0.163 m and a longitudinal deviation of 0.282 m from the target point. The average braking time and startup time of the robot at the four waypoints are 0.46 s and 0.64 s, respectively. In obstacle avoidance tests, optimal performance is observed with an expansion radius of 0.4 m across various obstacle sizes. The proposed combined method achieves efficient and stable global and local path planning, serving as a reference for future applications of mobile inspection robots in autonomous navigation.
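The global planning stage described above can be illustrated with a minimal grid-based A*. This is a generic sketch using an octile-distance heuristic on an 8-connected grid, not the paper's improved A*; the improvements it reports and the DWA local planner are omitted, and the grid below is invented.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    def h(a, b):  # octile distance heuristic for 8-connected moves
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) + 0.41421356 * min(dx, dy)

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:        # already expanded (closed set)
            continue
        came_from[node] = parent
        if node == goal:             # reconstruct path back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nxt = (node[0] + dx, node[1] + dy)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]]:     # obstacle cell
                    continue
                step = 1.41421356 if dx and dy else 1.0
                ng = g + step
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt, node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detour around the blocked middle row
```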

36 pages, 60911 KB  
Article
Effectiveness of Unmanned Aerial Vehicle-Based LiDAR for Assessing the Impact of Catastrophic Windstorm Events on Timberland
by Dipika Badal, Richard Cristan, Lana L. Narine, Sanjiv Kumar, Arjun Rijal and Manisha Parajuli
Drones 2025, 9(11), 756; https://doi.org/10.3390/drones9110756 - 31 Oct 2025
Abstract
The southeastern United States (US) is known for its highly productive forests, but they are under intense threat from increasing climate-induced windstorms like hurricanes and tornadoes. This study explored the effectiveness of unmanned aerial vehicles (UAVs) equipped with Light Detection and Ranging (LiDAR) to detect, classify, and map windstorm damage in ten pine-dominated forest stands (10–20 acres each). Three classification techniques, Random Forest (RF), Maximum Likelihood (ML), and Decision Tree (DT), were tested on two datasets: RGB imagery integrated with LiDAR-derived Canopy Height Model (CHM) and without LiDAR-CHM. Using LiDAR-CHM integrated datasets, RF achieved an average Overall Accuracy (OA) of 94.52% and a kappa coefficient (k) of 0.92, followed by ML (average OA = 89.52% and k = 0.85), and DT (average OA = 81.78% and k = 0.75). The results showed that RF consistently outperformed ML and DT in classification accuracy across all sites. Without LiDAR-CHM, the performance of all classifiers significantly declined, underscoring the importance of structural data in distinguishing among the classification categories (downed trees, standing trees, ground, and water). These findings highlight the role of UAV-derived LiDAR-CHM in improving classification accuracy for assessing the impact of windstorm damage on forest stands.
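A LiDAR-derived CHM of the kind integrated with the RGB imagery can be sketched as a per-cell highest-return raster minus terrain elevation. This is a minimal stand-in, not the authors' pipeline; the point coordinates and ground elevations below are invented for illustration.

```python
def canopy_height_model(points, ground, cell=1.0):
    """Rasterize LiDAR returns into a Canopy Height Model.

    `points` are (x, y, z) returns; `ground` maps a (col, row) cell to
    terrain elevation (a stand-in for a DTM). CHM per cell is the
    highest return minus the ground elevation.
    """
    chm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        height = z - ground.get(key, 0.0)
        if height > chm.get(key, 0.0):   # keep only the tallest return
            chm[key] = height
    return chm

points = [(0.2, 0.3, 12.1), (0.7, 0.1, 14.6), (1.4, 0.2, 2.0)]
ground = {(0, 0): 2.0, (1, 0): 2.0}
chm = canopy_height_model(points, ground)   # {(0, 0): ~12.6}
```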
18 pages, 3632 KB  
Article
Rapeseed Yield Estimation Using UAV-LiDAR and an Improved 3D Reconstruction Method
by Na Li, Zhiwei Hou, Haiyong Jiang, Chongchong Chen, Chao Yang, Yanan Sun, Lei Yang, Tianyu Zhou, Jingyu Chu, Qingzhe Fan and Lijie Zhang
Agriculture 2025, 15(21), 2265; https://doi.org/10.3390/agriculture15212265 - 30 Oct 2025
Abstract
Quantitative estimation of rapeseed yield is important for precision crop management and sustainable agricultural development. Traditional manual measurements are inefficient and destructive, making them unsuitable for large-scale applications. This study proposes a canopy-volume estimation and yield-modeling framework based on unmanned aerial vehicle light detection and ranging (UAV-LiDAR) data combined with a HybridMC-Poisson reconstruction algorithm. At the early yellow ripening stage, 20 rapeseed plants were reconstructed in 3D, and field data from 60 quadrats were used to establish a regression relationship between plant volume and yield. The results indicate that the proposed method achieves stable volume reconstruction under complex canopy conditions and yields a volume–yield regression model. When applied at the field scale, the model produced predictions with a relative error of approximately 12% compared with observed yields, within an acceptable range for remote sensing–based yield estimation. These findings support the feasibility of UAV-LiDAR–based volumetric modeling for rapeseed yield estimation and help bridge the scale from individual plants to entire fields. The proposed method provides a reference for large-scale phenotypic data acquisition and field-level yield management.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
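The volume–yield regression step can be sketched with ordinary least squares. The volumes, yields, and resulting coefficients below are illustrative values, not the paper's data.

```python
def fit_line(x, y):
    """Ordinary least squares for yield = a * volume + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

volumes = [0.8, 1.1, 1.5, 2.0, 2.4]        # reconstructed plant volume, m^3
yields  = [41.0, 55.0, 74.0, 99.0, 118.0]  # quadrat yield, g
a, b = fit_line(volumes, yields)

pred = a * 1.8 + b                         # predict yield for a new volume
rel_err = abs(pred - 90.0) / 90.0          # relative error vs. an "observed" value
```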

25 pages, 47805 KB  
Article
Comparative Evaluation of Nine Machine Learning Models for Target and Background Noise Classification in GM-APD LiDAR Signals Using Monte Carlo Simulations
by Hongchao Ni, Jianfeng Sun, Xin Zhou, Di Liu, Xin Zhang, Jixia Cheng, Wei Lu and Sining Li
Remote Sens. 2025, 17(21), 3597; https://doi.org/10.3390/rs17213597 - 30 Oct 2025
Abstract
This study proposes a complete data-processing framework for Geiger-mode avalanche photodiode (GM-APD) light detection and ranging (LiDAR) echo signals. It investigates the feasibility of classifying target and background noise using machine learning. Four feature processing schemes were first compared, among which the PNT strategy (Principal Component Analysis without tail features) was identified as the most effective and adopted for subsequent analysis. Based on this framework, nine models derived from six baseline algorithms—Decision Trees (DTs), Support Vector Machines (SVMs), Backpropagation Neural Networks (NN-BPs), Linear Discriminant Analysis (LDA), Logistic Regression (LR), and k-Nearest Neighbors (KNN)—were systematically assessed under Monte Carlo simulations with varying echo signal-to-noise ratio (ESNR) and statistical frame number (SFN) conditions. Model performance was evaluated using eight metrics: accuracy, precision, recall, FPR, FNR, F1-score, Kappa coefficient, and relative change percentage (RCP). Monte Carlo simulations were employed to generate datasets, and Principal Component Analysis (PCA) was applied for feature extraction in the machine learning training process. The results show that LDA achieves the shortest training time (0.38 s at SFN = 20,000), DT maintains stable accuracy (0.7171–0.8247) across different SFNs, and NN-BP models perform optimally under low-SNR conditions. Specifically, NN-BP-3 achieves the highest test accuracy of 0.9213 at SFN = 20,000, while NN-BP-2 records the highest training accuracy of 0.9137. Regarding stability, NN-BP-3 exhibits the smallest RCP value (0.0111), whereas SVM-3 yields the largest (0.1937) at the same frame count. In conclusion, NN-BP-based models demonstrate clear advantages in classifying sky-background noise. Building on this, we design a ResNet based on NN-BP, which achieves further accuracy gains over the best baseline at 400, 2000, and 20,000 frames—12.5% (400), 9.16% (2000), and 2.79% (20,000)—clearly demonstrating the advantage of NN-BP for GM-APD LiDAR signal classification. This research thus establishes a novel framework for GM-APD LiDAR signal classification, provides the first systematic comparison of multiple machine learning models, and highlights the trade-off between accuracy and computational efficiency. The findings confirm the feasibility of applying machine learning to GM-APD data and offer practical guidance for balancing detection performance with real-time requirements in field applications.
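Two of the eight evaluation metrics mentioned above, overall accuracy and the Kappa coefficient, can be computed from predicted labels as follows. This is a generic implementation with invented labels, not tied to the paper's datasets.

```python
def accuracy_and_kappa(y_true, y_pred):
    """Overall accuracy and Cohen's kappa for class labels."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    # expected agreement under chance, from marginal class frequencies
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return acc, (acc - pe) / (1 - pe)

# 1 = target, 0 = background noise (illustrative labels)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, kappa = accuracy_and_kappa(y_true, y_pred)   # 0.75 accuracy, 0.5 kappa
```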

20 pages, 2898 KB  
Article
On the Lossless Compression of HyperHeight LiDAR Forested Landscape Data
by Viktor Makarichev, Andres Ramirez-Jaime, Nestor Porras-Diaz, Irina Vasilyeva, Vladimir Lukin, Gonzalo Arce and Krzysztof Okarma
Remote Sens. 2025, 17(21), 3588; https://doi.org/10.3390/rs17213588 - 30 Oct 2025
Abstract
Satellite Light Detection and Ranging (LiDAR) systems produce high-resolution data essential for confronting critical environmental challenges like climate change, disaster management, and ecological conservation. A HyperHeight Data Cube (HHDC) is a novel representation of LiDAR data. HHDCs are structured three-dimensional tensors, where each cell captures the number of photons detected at specific spatial and height coordinates. These data structures preserve the detailed vertical and horizontal information essential for ecological and topographical analyses, particularly Digital Terrain Models and canopy height profiles. In this paper, we investigate lossless compression techniques for large volumes of HHDCs to alleviate constraints on onboard storage, processing resources, and downlink bandwidth. We analyze several methods, including bit packing, Rice coding (RC), run-length encoding (RLE), and context-adaptive binary arithmetic coding (CABAC), as well as their combinations. We introduce the block-splitting framework, which is a simplified version of octrees. The combination of RC with RLE and CABAC within this framework achieves a median compression ratio greater than 24, which is confirmed by the results of processing two large sets of HHDCs simulated using the Smithsonian Environmental Research Center NEON data.
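The Rice-coding (RC) stage can be sketched as follows: each non-negative count is split into a unary-coded quotient and k remainder bits. The photon counts below are invented, and the RLE and CABAC stages of the paper's pipeline are omitted.

```python
def rice_encode(values, k):
    """Rice-code a list of non-negative integers with parameter k."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                     # unary quotient, 0-terminated
        bits.append(format(r, f"0{k}b") if k else "")  # k remainder bits
    return "".join(bits)

counts = [0, 0, 1, 0, 3, 0, 0, 2, 0, 0]   # sparse, HHDC-like cell counts
code = rice_encode(counts, k=1)           # 22 bits total
ratio = (16 * len(counts)) / len(code)    # vs. raw 16-bit-per-cell storage
```

Small counts dominate sparse HHDC cells, which is why a unary-quotient code with a small k compresses them well.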

26 pages, 778 KB  
Review
Applications of 3D Reconstruction Techniques in Crop Canopy Phenotyping: A Review
by Yanzhou Li, Zhuo Liang, Bo Liu, Lijuan Yin, Fanghao Wan, Wanqiang Qian and Xi Qiao
Agronomy 2025, 15(11), 2518; https://doi.org/10.3390/agronomy15112518 - 29 Oct 2025
Abstract
Amid growing challenges to global food security, high-throughput crop phenotyping has become an essential tool, playing a critical role in genetic improvement, biomass estimation, and disease prevention. Unlike controlled laboratory environments, field-based phenotypic data collection is highly vulnerable to unpredictable factors, significantly complicating the data acquisition process. As a result, the choice of appropriate data collection equipment and processing methods has become a central focus of research. Currently, three key technologies for extracting crop phenotypic parameters are Light Detection and Ranging (LiDAR), Multi-View Stereo (MVS), and depth camera systems. LiDAR is valued for its rapid data acquisition and high-quality point cloud output, despite its substantial cost. MVS offers the potential to combine low-cost deployment with high-resolution point cloud generation, though challenges remain in the complexity and efficiency of point cloud processing. Depth cameras strike a favorable balance between processing speed, accuracy, and cost-effectiveness, yet their performance can be influenced by ambient conditions such as lighting. Data processing techniques primarily involve point cloud denoising, registration, segmentation, and reconstruction. This review summarizes advances over the past five years in 3D reconstruction technologies—focusing on both hardware and point cloud processing methods—with the aim of supporting efficient and accurate 3D phenotype acquisition in high-throughput crop research.

18 pages, 10509 KB  
Article
High-Precision Mapping and Real-Time Localization for Agricultural Machinery Sheds and Farm Access Roads Environments
by Yang Yu, Zengyao Li, Buwang Dai, Jiahui Pan and Lizhang Xu
Agriculture 2025, 15(21), 2248; https://doi.org/10.3390/agriculture15212248 - 28 Oct 2025
Abstract
To address the issues of signal loss and insufficient accuracy of traditional GNSS (Global Navigation Satellite System) navigation in agricultural machinery sheds and farm access road environments, this paper proposes a high-precision mapping method for such complex environments and a real-time localization system for agricultural vehicles. First, an autonomous navigation system was developed by integrating multi-sensor data from LiDAR (Light Detection and Ranging), GNSS, and IMU (Inertial Measurement Unit), with functional modules for mapping, localization, planning, and control implemented within the ROS (Robot Operating System) framework. Second, an improved LeGO-LOAM algorithm is introduced for constructing maps of machinery sheds and farm access roads. The mapping accuracy is enhanced through reflectivity filtering, ground constraint optimization, and ScanContext-based loop closure detection. Finally, a localization method combining NDT (Normal Distribution Transform), IMU, and a UKF (Unscented Kalman Filter) is proposed for tracked grain transport vehicles. The UKF and IMU measurements are used to predict the vehicle state, while the NDT algorithm provides pose estimates for state update, yielding a fused and more accurate pose estimate. Experimental results demonstrate that the proposed mapping method reduces APE (absolute pose error) by 79.99% and 49.04% in the machinery shed and farm access road environments, respectively, indicating a significant improvement over conventional methods. The real-time localization module achieves an average processing time of 26.49 ms with an average error of 3.97 cm, enhancing localization accuracy without compromising output frequency. This study provides technical support for fully autonomous operation of agricultural machinery.
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
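The NDT/IMU/UKF fusion can be illustrated, heavily simplified, by a one-dimensional linear Kalman filter in which IMU motion increments drive the prediction and NDT poses drive the update. The variances and measurement values below are assumptions; the paper's UKF handles a full nonlinear pose state, which this sketch does not attempt.

```python
def kf_fuse(ndt_poses, imu_deltas, q=0.05, r=0.02):
    """1-D linear Kalman filter: IMU delta predicts, NDT pose corrects.

    q and r are assumed process and measurement variances.
    """
    x, p = ndt_poses[0], 1.0
    fused = [x]
    for z, dx in zip(ndt_poses[1:], imu_deltas):
        x, p = x + dx, p + q          # predict with IMU motion increment
        k_gain = p / (p + r)          # Kalman gain
        x = x + k_gain * (z - x)      # correct with NDT pose estimate
        p = (1 - k_gain) * p
        fused.append(x)
    return fused

ndt_poses  = [0.0, 0.52, 1.03, 1.49]   # noisy NDT positions, m
imu_deltas = [0.5, 0.5, 0.5]           # IMU-integrated displacements, m
track = kf_fuse(ndt_poses, imu_deltas)
```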

31 pages, 17070 KB  
Article
WRF Simulations of Passive Tracer Transport from Biomass Burning in South America: Sensitivity to PBL Schemes
by Douglas Lima de Bem, Vagner Anabor, Damaris Kirsch Pinheiro, Luiz Angelo Steffenel, Hassan Bencherif, Gabriela Dornelles Bittencourt, Eduardo Landulfo and Umberto Rizza
Remote Sens. 2025, 17(20), 3483; https://doi.org/10.3390/rs17203483 - 19 Oct 2025
Abstract
This single high-impact case study investigates the impact of planetary boundary layer (PBL) representation on long-range transport of Amazon fire smoke that reached the Metropolitan Area of São Paulo (MASP) from 15 to 20 August 2019, using the WRF model to compare three PBL schemes (MYNN 2.5, YSU, and BouLac) and three source-tagged tracers. The simulations are evaluated against MODIS-derived aerosol optical depth (AOD), the Light Detection and Ranging (LiDAR) time–height curtain over MASP, and HYSPLIT forward trajectories. Transport is diagnosed along the source-to-MASP pathway using six-hourly cross-sections and two integrative metrics: the projected mean wind in the 700–600 hPa layer and the vertical moment of tracer mass above the boundary layer. Outflow and downwind impact are strongest when a persistent reservoir between 2 and 4 km coexists with projected winds for several hours. In this episode, MYNN maintains an elevated 2–5 km transport layer and matches the observed arrival time and altitude, YSU yields a denser but delayed column, and BouLac produces discontinuous pulses with reduced coherence over the city. A negatively tilted trough, jet coupling, and a nearly stationary front establish a northwest-to-southeast corridor consistent across model fields, trajectories, and satellite signal. Seasonal robustness should be assessed with multi-event, multi-model analyses.
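The "vertical moment of tracer mass" diagnostic is a mass-weighted mean height of the tracer profile. A minimal worked example with an invented profile:

```python
def vertical_moment(conc, heights):
    """Mass-weighted mean height: sum(z * c) / sum(c) over the profile."""
    total = sum(conc)
    return sum(z * c for z, c in zip(heights, conc)) / total

heights = [2.0, 3.0, 4.0, 5.0]   # km above ground (illustrative levels)
conc    = [1.0, 3.0, 3.0, 1.0]   # tracer mixing ratio (arbitrary units)
zm = vertical_moment(conc, heights)   # 3.5 km: centre of the elevated layer
```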

25 pages, 4025 KB  
Review
Precision Forestry Revisited
by Can Vatandaslar, Kevin Boston, Zennure Ucar, Lana L. Narine, Marguerite Madden and Abdullah Emin Akay
Remote Sens. 2025, 17(20), 3465; https://doi.org/10.3390/rs17203465 - 17 Oct 2025
Abstract
This review presents a synthesis of global research on precision forestry, a field that integrates advanced technologies to enhance—rather than replace—established tools and methods used in operational forest management and the wood products industry. By evaluating 210 peer-reviewed publications indexed in Web of Science (up to 2025), the study identifies six main categories and eight components of precision forestry. The findings indicate that “forest management and planning” is the most common category, with nearly half of the studies focusing on this topic. “Remote sensing platforms and sensors” emerged as the most frequently used component, with unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) systems being the most widely adopted tools. The analysis also reveals a notable increase in precision forestry research since the early 2010s, coinciding with rapid developments in small UAVs and mobile sensor technologies. Despite growing interest, robotics and real-time process control systems remain underutilized, mainly due to challenging forest conditions and high implementation costs. The research highlights geographical disparities, with Europe, Asia, and North America hosting the majority of studies. Italy, China, Finland, and the United States stand out as the most active countries in terms of research output. Notably, the review emphasizes the need to integrate precision forestry into academic curricula and support industry adoption through dedicated information and technology specialists. As the forestry workforce ages and technology advances rapidly, a growing skills gap exists between industry needs and traditional forestry education. Equipping the next generation with hands-on experience in big data analysis, geospatial technologies, automation, and Artificial Intelligence (AI) is critical for ensuring the effective adoption and application of precision forestry.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)

27 pages, 4875 KB  
Article
A Comprehensive Radar-Based Berthing-Aid Dataset (R-BAD) and Onboard System for Safe Vessel Docking
by Fotios G. Papadopoulos, Antonios-Periklis Michalopoulos, Efstratios N. Paliodimos, Ioannis K. Christopoulos, Charalampos Z. Patrikakis, Alexandros Simopoulos and Stylianos A. Mytilinaios
Electronics 2025, 14(20), 4065; https://doi.org/10.3390/electronics14204065 - 16 Oct 2025
Abstract
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. Contrary to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been deployed as auxiliary tools to support informed berthing decisions; however, these sensing modalities are severely affected by weather and light conditions, respectively, while cameras in particular are inherently incapable of providing direct range measurements. In this paper, we introduce a comprehensive Radar-Based Berthing-Aid Dataset (R-BAD), aimed at supporting the development of safe berthing systems onboard ships. The proposed R-BAD dataset includes a large collection of Frequency-Modulated Continuous Wave (FMCW) radar data in point cloud format alongside timestamped and synced video footage. The dataset contains more than 69 h of recorded ship operations and is freely accessible. We also propose an onboard support system for radar-aided vessel docking, which enables obstacle detection, clustering, tracking and classification during ferry berthing maneuvers. The proposed dataset covers all docking/undocking scenarios (arrivals, departures, port idle, and cruising operations) and was used to train various machine/deep learning models with strong performance, demonstrating its suitability for further autonomous navigation system development. The berthing-aid system was tested in real-world conditions onboard an operational Ro-Ro/Passenger ship and demonstrated weather-resilient, repeatable, and robust performance in detection, tracking, and classification tasks, confirming its readiness for integration into future autonomous berthing-aid systems.

32 pages, 2733 KB  
Article
Collaborative Multi-Agent Platform with LIDAR Recognition and Web Integration for STEM Education
by David Cruz García, Sergio García González, Arturo Álvarez Sanchez, Rubén Herrero Pérez and Gabriel Villarrubia González
Appl. Sci. 2025, 15(20), 11053; https://doi.org/10.3390/app152011053 - 15 Oct 2025
Abstract
STEM (Science, Technology, Engineering, and Mathematics) education faces the challenge of incorporating advanced technologies that foster motivation, collaboration, and hands-on learning. This study proposes a portable system capable of transforming ordinary surfaces into interactive learning spaces through gamification and spatial perception. A prototype based on multi-agent architecture was developed on the PANGEA (Platform for automatic coNstruction of orGanizations of intElligent agents) platform, integrating LIDAR (Light Detection and Ranging) sensors for gesture detection, an ultra-short-throw projector for visual interaction, and a web platform to manage educational content, organize activities, and evaluate student performance. Data from the sensors are processed in real time using ROS (Robot Operating System), generating precise virtual interactions on the projected surface, while the web platform allows physical and pedagogical parameters to be configured. Preliminary tests show that the system accurately detects gestures, translates them into digital interactions, and maintains low latency in different classroom environments, demonstrating robustness, modularity, and portability. The results suggest that the combination of multi-agent architectures, LIDAR sensors, and gamified platforms offers an effective approach to promote active learning in STEM, facilitate the adoption of advanced technologies in diverse educational settings, and improve student engagement and experience.
(This article belongs to the Section Computing and Artificial Intelligence)

24 pages, 10966 KB  
Article
UAV-Based Wellsite Reclamation Monitoring Using Transformer-Based Deep Learning on Multi-Seasonal LiDAR and Multispectral Data
by Dmytro Movchan, Zhouxin Xi, Angeline Van Dongen, Charumitha Selvaraj and Dani Degenhardt
Remote Sens. 2025, 17(20), 3440; https://doi.org/10.3390/rs17203440 - 15 Oct 2025
Abstract
Monitoring reclaimed wellsites in boreal forest environments requires accurate, scalable, and repeatable methods for assessing vegetation recovery. This study evaluates the use of uncrewed aerial vehicle (UAV)-based light detection and ranging (LiDAR) and multispectral (MS) imagery for individual tree detection, crown delineation, and classification across five reclaimed wellsites in Alberta, Canada. A deep learning workflow using 3D convolutional neural networks was applied to LiDAR and MS data collected in spring, summer, and autumn. Results show that LiDAR alone provided high accuracy for tree segmentation and height estimation, with a mean intersection over union (mIoU) of 0.94 for vegetation filtering and an F1-score of 0.82 for treetop detection. Incorporating MS data improved deciduous/coniferous classification, with the highest accuracy (mIoU = 0.88) achieved using all five spectral bands. Coniferous species were classified more accurately than deciduous species, and classification performance declined for trees shorter than 2 m. Spring conditions yielded the highest classification accuracy (mIoU = 0.93). Comparisons with ground measurements confirmed a strong correlation for tree height estimation (R2 = 0.95; root mean square error = 0.40 m). Limitations of this technique included lower performance for short, multi-stemmed trees and deciduous species, particularly willow. This study demonstrates the value of integrating 3D structural and spectral data for monitoring forest recovery and supports the use of UAV remote sensing for scalable post-disturbance vegetation assessment. The trained models used in this study are publicly available through the TreeAIBox plugin to support further research and operational applications.
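The mIoU metric reported above for vegetation filtering can be computed from per-point class labels as follows. This is a generic implementation with invented labels, not the paper's evaluation code.

```python
def mean_iou(y_true, y_pred, classes=(0, 1)):
    """Mean intersection over union across classes for point labels."""
    ious = []
    for c in classes:
        inter = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        union = sum(t == c or p == c for t, p in zip(y_true, y_pred))
        ious.append(inter / union if union else 1.0)  # empty class: perfect
    return sum(ious) / len(ious)

# 1 = vegetation, 0 = non-vegetation (illustrative labels)
labels = [1, 1, 0, 0, 1, 0]
preds  = [1, 1, 0, 1, 1, 0]
miou = mean_iou(labels, preds)
```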

30 pages, 14674 KB  
Article
Modulation of Typical Three-Dimensional Targets on the Echo Waveform Using Analytical Formula
by Yongxiang Wang, Xinyuan Zhang, Shilong Xu, Fei Han, Yuhao Xia, Jiajie Fang and Yihua Hu
Remote Sens. 2025, 17(20), 3419; https://doi.org/10.3390/rs17203419 - 13 Oct 2025
Abstract
Despite the wide application of full-waveform light detection and ranging (FW-LiDAR) to target detection and recognition, topographical mapping, and ecological management, the mapping between the echo waveform and the properties of the targets, even for typical three-dimensional (3D) targets, has not been established. The mechanism by which targets modulate the echo waveform thus remains ambiguous, constraining the retrieval of target properties in FW-LiDAR. This paper derives the formula for the echo waveform modulated by typical 3D targets, namely, a rectangular prism, a regular hexagonal prism, and a cone. The modulation of the shape, size, position, and attitude of 3D targets on the echo waveform is investigated extensively. The results showed that, for prisms, variations in the echo waveforms under various factors essentially arise from changes in the inclination angles of their reflective surfaces and their positions relative to the laser spot. For cones, the echo waveforms can be approximated and analyzed using isosceles triangular micro-facets. The work in this paper is helpful in probing the modulation of 3D targets on the echo waveform, as well as extracting the properties of 3D targets in FW-LiDAR domains, which are significant in areas ranging from topographical mapping to space debris monitoring.
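Echo formation in FW-LiDAR can be illustrated as a discrete convolution of the transmit pulse with a range-resolved target response. The Gaussian pulse and ramp-shaped "cone-like" response below are assumptions for illustration (loosely echoing the triangular micro-facet approximation), not the paper's analytical formula.

```python
import math

def echo_waveform(pulse, response):
    """Discrete convolution of an emitted pulse with a target response."""
    out = [0.0] * (len(pulse) + len(response) - 1)
    for i, p in enumerate(pulse):
        for j, r in enumerate(response):
            out[i + j] += p * r
    return out

pulse = [math.exp(-((t - 3) ** 2) / 2.0) for t in range(7)]  # Gaussian transmit pulse
cone = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical ramp response of a cone-like target
echo = echo_waveform(pulse, cone)  # broadened, asymmetric return waveform
```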

23 pages, 8993 KB  
Article
Automatic Rooftop Solar Panel Recognition from UAV LiDAR Data Using Deep Learning and Geometric Feature Analysis
by Joel Coglan, Zahra Gharineiat and Fayez Tarsha Kurdi
Remote Sens. 2025, 17(19), 3389; https://doi.org/10.3390/rs17193389 - 9 Oct 2025
Abstract
As drone-based Light Detection and Ranging (LiDAR) becomes more accessible, it presents new opportunities for automated, geometry-driven classification. This study investigates the use of LiDAR point cloud data and Machine Learning (ML) to classify rooftop solar panels from building surfaces. While rooftop solar detection has been explored using satellite and aerial imagery, LiDAR offers geometric and reflectance-based attributes for classification. Two datasets were used: the University of Southern Queensland (UniSQ) campus, with commercial-sized panels that are both elevated and flat, and a suburban area in Newcastle, Australia, with residential-sized panels sitting flush with the roof surface. The UniSQ dataset was classified using RANSAC (Random Sample Consensus), while the Newcastle dataset was processed based on reflectance values. Geometric features were selected based on histogram overlap and Kullback–Leibler (KL) divergence, and models were trained using a Multilayer Perceptron (MLP) classifier implemented with both the PyTorch and Scikit-learn libraries. Classification achieved F1 scores of 99% for UniSQ and 95–96% for the Newcastle dataset. These findings support the potential for ML-based classification to be applied to unlabelled datasets for rooftop solar analysis. Future work could expand the model to detect additional rooftop features and estimate panel counts across urban areas. Full article
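The KL-divergence feature-selection step mentioned above lends itself to a compact sketch. The snippet below is a hedged illustration, not the paper's pipeline: the feature names ("planarity", "verticality"), the synthetic distributions, and the bin count are all assumptions. It ranks per-point geometric features by how well their empirical distributions separate panel points from roof points.

```python
import numpy as np

def symmetric_kl(p_samples, q_samples, bins=64, eps=1e-9):
    """Symmetrised KL divergence between two empirical feature
    distributions, estimated over shared histogram bins."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    # Smooth empty bins, then normalize to probability mass functions.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * (kl(p, q) + kl(q, p))

rng = np.random.default_rng(0)
# Hypothetical per-point features: "planarity" separates the two
# classes well, "verticality" barely does.
panel = {"planarity": rng.normal(0.90, 0.05, 5000),
         "verticality": rng.normal(0.50, 0.20, 5000)}
roof = {"planarity": rng.normal(0.60, 0.10, 5000),
        "verticality": rng.normal(0.52, 0.20, 5000)}

# Higher divergence = more discriminative feature for the classifier.
ranking = {f: symmetric_kl(panel[f], roof[f]) for f in panel}
```

Features scoring near zero contribute little to an MLP's decision boundary and can be dropped before training, which is the practical payoff of this kind of screening.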

26 pages, 3841 KB  
Article
Comparison of Regression, Classification, Percentile Method and Dual-Range Averaging Method for Crop Canopy Height Estimation from UAV-Based LiDAR Point Cloud Data
by Pai Du, Jinfei Wang and Bo Shan
Drones 2025, 9(10), 683; https://doi.org/10.3390/drones9100683 - 1 Oct 2025
Abstract
Crop canopy height is a key structural indicator that is strongly associated with crop development, biomass accumulation, and crop health. To overcome the limitations of time-consuming and labor-intensive traditional field measurements, Unmanned Aerial Vehicle (UAV)-based Light Detection and Ranging (LiDAR) offers an efficient alternative by capturing three-dimensional point cloud data (PCD). In this study, UAV-LiDAR data were acquired using a DJI Matrice 600 Pro equipped with a 16-channel LiDAR system. Four canopy height estimation approaches were evaluated across three crop types: corn, soybean, and winter wheat. Specifically, this study assessed machine learning regression modeling, ground point classification techniques, a percentile-based method, and a newly proposed Dual-Range Averaging (DRA) method to identify the most effective method while ensuring practicality and reproducibility. The best-performing method for corn was Support Vector Regression (SVR) with a linear kernel (R2 = 0.95, RMSE = 0.137 m). For soybean, the DRA method yielded the highest accuracy (R2 = 0.93, RMSE = 0.032 m). For winter wheat, the PointCNN deep learning model demonstrated the best performance (R2 = 0.93, RMSE = 0.046 m). These results highlight the effectiveness of integrating UAV-LiDAR data with optimized processing methods for accurate and widely applicable crop height estimation in support of precision agriculture practices. Full article
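As a rough illustration of the percentile-based idea, and of what a dual-range average might look like, the sketch below estimates canopy height from ground-normalized return heights. The exact DRA definition is the paper's; the band limits, ground elevation, and simulated returns here are assumptions chosen only to show why averaging a percentile band resists noisy high returns.

```python
import numpy as np

def canopy_height_percentile(z_points, z_ground, percentile=95.0):
    """Percentile method: normalize returns by the ground elevation and
    take an upper percentile as the canopy-top estimate."""
    return np.percentile(z_points - z_ground, percentile)

def canopy_height_dual_range(z_points, z_ground, low=90.0, high=99.0):
    """Dual-range-style estimate (illustrative, not the paper's exact
    definition): average all returns falling between two upper
    percentiles, damping both outliers and single-percentile jitter."""
    heights = z_points - z_ground
    lo, hi = np.percentile(heights, [low, high])
    return heights[(heights >= lo) & (heights <= hi)].mean()

rng = np.random.default_rng(1)
ground = 231.0  # assumed local ground elevation (m)
# Simulated soybean-like canopy returns near 0.8 m, plus a few noisy
# high returns that a simple per-plot maximum would latch onto.
z = ground + np.concatenate([rng.normal(0.8, 0.05, 2000),
                             np.full(5, 2.5)])

est_p = canopy_height_percentile(z, ground)
est_d = canopy_height_dual_range(z, ground)
# Both estimates stay near the true canopy top rather than the 2.5 m
# outliers, unlike (z - ground).max().
```

In practice the ground surface comes from classified ground returns or a DTM rather than a single constant, but the normalization-then-statistic structure is the same.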
(This article belongs to the Special Issue UAV Agricultural Management: Recent Advances and Future Prospects)
