Search Results (208)

Search Parameters:
Keywords = extraction of tree feature parameters

20 pages, 16939 KiB  
Article
A Method for the 3D Reconstruction of Landscape Trees in the Leafless Stage
by Jiaqi Li, Qingqing Huang, Xin Wang, Benye Xi, Jie Duan, Hang Yin and Lingya Li
Remote Sens. 2025, 17(8), 1473; https://doi.org/10.3390/rs17081473 - 20 Apr 2025
Viewed by 283
Abstract
Three-dimensional models of trees can help simulate forest resource management, field surveys, and urban landscape design. With the advancement of Computer Vision (CV) and laser remote sensing technology, forestry researchers can use images and point cloud data to perform digital modeling. However, modeling leafless trees so that they conform to tree growth rules and have effective branching remains a major challenge. This article proposes a method based on 3D Gaussian Splatting (3D GS) to address this issue. Firstly, we compared reconstructions of the same tree and confirmed the advantages of the 3D GS method in tree 3D reconstruction. Secondly, seven landscape trees were reconstructed using the 3D GS-based method to verify its effectiveness. Finally, the 3D reconstructed point cloud was used to generate the quantitative structure model (QSM) and extract tree feature parameters to verify the accuracy of the reconstructed model. Our results indicate that this method can effectively reconstruct the structure of real trees, and in particular can completely reconstruct 3rd-order branches. Meanwhile, the error of the Diameter at Breast Height (DBH) of the model is below 1.59 cm, with a relative error of 3.8–14.6%. This shows that 3D GS effectively addresses the inconsistency between tree models and real growth rules, as well as the poor branch structure of reconstructed tree models, providing new insights and research directions for the 3D reconstruction and visualization of landscape trees in the leafless stage. Full article
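The DBH comparison above depends on extracting a diameter estimate from the reconstructed point cloud. As a rough, hypothetical sketch (not the authors' QSM-based pipeline), DBH can be approximated from a horizontal trunk slice at breast height by averaging radial distances from the slice centroid; the point values below are made up.

```python
import math

def estimate_dbh(slice_points):
    """Crudely estimate diameter at breast height (metres) from (x, y) points
    sampled on the trunk surface in a slice near 1.3 m: average the radial
    distance from the centroid and double it."""
    n = len(slice_points)
    cx = sum(p[0] for p in slice_points) / n
    cy = sum(p[1] for p in slice_points) / n
    mean_r = sum(math.hypot(x - cx, y - cy) for x, y in slice_points) / n
    return 2.0 * mean_r

def relative_error(estimated, reference):
    """Relative error as a percentage, as reported in the abstract (3.8-14.6%)."""
    return abs(estimated - reference) / reference * 100.0

# Synthetic trunk cross-section: a circle of radius 0.15 m around (2, 3).
pts = [(2 + 0.15 * math.cos(t * 2 * math.pi / 50),
        3 + 0.15 * math.sin(t * 2 * math.pi / 50)) for t in range(50)]
dbh = estimate_dbh(pts)  # ~0.30 m for this ideal circle
```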

34 pages, 10447 KiB  
Article
Investigating the Effects of 2D/3D Urban Morphology on Land Surface Temperature Using High-Resolution Remote Sensing Data
by You Mo, Yongfang Huang, Ruofei Zhong, Bin Wang and Zhaocheng Guo
Buildings 2025, 15(8), 1256; https://doi.org/10.3390/buildings15081256 - 10 Apr 2025
Viewed by 335
Abstract
Understanding the influence of urban morphology on Land Surface Temperature (LST) is essential for urban planning, development, and mitigating the urban heat island effect. Leveraging high-resolution remote sensing data, this study systematically extracted 64 2D urban morphological parameters (UMPs) and 28 3D UMPs, along with their corresponding summer and winter LST data, at both the grid level (using a 30 m × 30 m grid as the minimum unit) and the block level (using an urban block as the minimum unit). The 2D UMPs were derived from landscape indices of land cover, while the 3D UMPs included 3D building-related UMPs (BUMPs) and tree-related UMPs (TUMPs). Ultimately, multiple statistical methods were employed to investigate the complex mechanisms through which these 2D and 3D UMPs influence LST across summer and winter. This study showed the following results: (1) Most 2D and 3D UMPs significantly correlated with LST in both seasons at the grid/block levels, with stronger correlations at block level. (2) Stepwise regression revealed that combining 2D and 3D UMPs enhanced LST explanation, achieving R2 = 70.9% (summer) and 65.7% (winter) for the entire area, with consistent results in built-up zones. (3) Relative importance analysis identified 35 (summer) and 28 (winter) influential features, which were ranked as follows: 2D UMPs > 3D BUMPs > 3D TUMPs. This highlights 2D UMPs’ dominance while confirming 3D UMPs’ significance. These findings emphasize the need for integrated 2D and 3D urban design, considering both planar layouts and vertical configurations of buildings/vegetation. This study provides practical guidance for thermal environment mitigation and sustainable urban development through optimized spatial planning. Full article
(This article belongs to the Special Issue Advanced Studies in Urban and Regional Planning—2nd Edition)
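As a much-simplified stand-in for the stepwise regression used in the study above, candidate UMPs can be screened by the R² of a one-predictor linear fit against LST (the squared Pearson correlation). The UMP names and values below are invented for illustration.

```python
def r_squared_univariate(x, y):
    """Squared Pearson correlation = R^2 of a one-predictor linear fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def rank_umps(features, y):
    """Rank candidate UMPs by univariate R^2 against LST, descending."""
    return sorted(features,
                  key=lambda name: r_squared_univariate(features[name], y),
                  reverse=True)

lst = [30.1, 31.5, 29.8, 33.0, 32.2]          # hypothetical block-level LST
umps = {
    "building_height": [10, 12, 9, 16, 14],        # a 3D BUMP (made-up)
    "tree_cover_frac": [0.5, 0.4, 0.55, 0.2, 0.3], # a 3D TUMP (made-up)
}
ranking = rank_umps(umps, lst)
```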

26 pages, 65178 KiB  
Article
Comparison of UAV-Based LiDAR and Photogrammetric Point Cloud for Individual Tree Species Classification of Urban Areas
by Qixia Man, Xinming Yang, Haijian Liu, Baolei Zhang, Pinliang Dong, Jingru Wu, Chunhui Liu, Changyin Han, Cong Zhou, Zhuang Tan and Qian Yu
Remote Sens. 2025, 17(7), 1212; https://doi.org/10.3390/rs17071212 - 28 Mar 2025
Viewed by 653
Abstract
UAV LiDAR and digital aerial photogrammetry (DAP) have shown great performance in forest inventory due to their advantage in three-dimensional information extraction. Many studies have compared their performance in individual tree segmentation and structural parameter extraction (e.g., tree height). However, few studies have compared their performance in tree species classification. Therefore, we compared the performance of UAV LiDAR and DAP-based point clouds in individual tree species classification with the following steps: (1) Point cloud data processing: Denoising, smoothing, and normalization were conducted on LiDAR and DAP-based point cloud data separately. (2) Feature extraction: Spectral, structural, and texture features were extracted from the pre-processed LiDAR and DAP-based point cloud data. (3) Individual tree segmentation: The marked watershed algorithm was used to segment individual trees on canopy height models (CHMs) derived from LiDAR and DAP data, respectively. (4) Pixel-based tree species classification: The random forest (RF) classifier was used to classify urban tree species with features derived from LiDAR and DAP data separately. (5) Individual tree species classification: Based on the segmented individual tree boundaries and pixel-based classification results, majority filtering was applied to obtain the final individual tree species classification results. (6) Fusion with hyperspectral data: LiDAR-hyperspectral and DAP-hyperspectral fused data were used to conduct individual tree species classification. (7) Accuracy assessment and comparison: The accuracy of the above results was assessed and compared. The results indicate that LiDAR outperformed DAP in individual tree segmentation (F-score 0.83 vs. 0.79), while DAP achieved higher pixel-level classification accuracy (73.83% vs. 57.32%) due to its spectral-textural features. Fusion with hyperspectral data narrowed the gap, with LiDAR reaching 95.98% accuracy in individual tree classification. Our findings suggest that DAP offers a cost-effective alternative for urban forest management, balancing accuracy and operational costs. Full article
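Step (5) above, majority filtering, is straightforward to sketch: each segmented crown takes the most common pixel-level class inside its boundary. The species names and segment ids below are hypothetical.

```python
from collections import Counter

def majority_filter(pixel_labels, segment_ids):
    """Assign each segmented tree crown the majority class of its pixels.
    pixel_labels and segment_ids are parallel flat lists."""
    votes = {}
    for label, seg in zip(pixel_labels, segment_ids):
        votes.setdefault(seg, Counter())[label] += 1
    return {seg: counter.most_common(1)[0][0] for seg, counter in votes.items()}

# Two crowns: segment 1 is mostly "oak", segment 2 mostly "pine".
labels = ["oak", "oak", "pine", "pine", "pine", "oak"]
segs   = [1,     1,     1,      2,      2,      2]
species = majority_filter(labels, segs)
```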

16 pages, 5701 KiB  
Article
Generating Human-Interpretable Rules from Convolutional Neural Networks
by Russel Pears and Ashwini Kumar Sharma
Information 2025, 16(3), 230; https://doi.org/10.3390/info16030230 - 16 Mar 2025
Viewed by 466
Abstract
Advancements in the field of artificial intelligence have been rapid in recent years and have revolutionized various industries. Various deep neural network architectures capable of handling both text and images, covering code generation from natural language as well as producing machine translation and text summaries, have been proposed. For example, convolutional neural networks or CNNs perform image classification at a level equivalent to that of humans on many image datasets. These state-of-the-art networks have reached unprecedented levels of success by using complex architectures with billions of parameters, numerous kernel configurations, weight initialization, and regularization methods. Unfortunately, to reach this level of success, these models have become essentially black boxes, with little or no human-interpretable information on the decision-making process. This lack of transparency in decision making gave rise to concerns amongst some sectors of the user community such as healthcare, finance, justice, and defense, among others. This challenge motivated our research, where we successfully produced human-interpretable influential features from CNNs for image classification and captured the interactions between these features by producing a concise decision tree that makes classification decisions. The proposed methodology makes use of a pretrained VGG-16 with fine-tuning to extract feature maps produced by learnt filters. On the CelebA image benchmark dataset, we successfully produced human-interpretable rules that captured the main facial landmarks responsible for separating men from women with 89.6% accuracy, while on the more challenging Cats vs. Dogs dataset, the decision tree achieved 87.6% accuracy. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
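The paper's rule-extraction procedure is more elaborate, but the core idea of turning a continuous feature activation into a human-readable rule can be sketched with a one-feature threshold rule; the activations below are invented.

```python
def best_threshold_rule(activations, labels):
    """One-feature rule induction: find the activation threshold t that best
    separates two classes. The rule reads "predict class 1 if a >= t";
    returns (t, training accuracy)."""
    best = (None, 0.0)
    for t in sorted(set(activations)):
        correct = sum(1 for a, y in zip(activations, labels)
                      if (a >= t) == (y == 1))
        acc = correct / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best

# Hypothetical mean activations of one VGG-16 feature map per image.
acts   = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   1,   1,   1]
t, acc = best_threshold_rule(acts, labels)  # perfectly separable toy data
```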

22 pages, 7559 KiB  
Article
Automated Tunnel Point Cloud Segmentation and Extraction Method
by Zhe Wang, Zhenyi Zhu, Yong Wu, Qihao Hong, Donglai Jiang, Jinbo Fu and Sifa Xu
Appl. Sci. 2025, 15(6), 2926; https://doi.org/10.3390/app15062926 - 7 Mar 2025
Viewed by 763
Abstract
To address the issue of inaccurate tunnel segmentation caused by solely relying on point cloud coordinates, this paper proposes two algorithms, GuSAC and TMatch, along with a ring-based cross-section extraction method to achieve high-precision tunnel lining segmentation and cross-section extraction. GuSAC, based on the RANSAC algorithm, introduces a minimum spanning tree to reconstruct the topological structure of the tunnel design axis. By using a sliding window, it effectively distinguishes between curved and straight sections of long tunnels while removing non-tunnel structural point clouds with normal vectors, thereby enhancing the lining boundary features and significantly improving the automation level of tunnel processing. At the same time, the TMatch algorithm, which combines cluster analysis and Gaussian Mixture Models (GMMs), achieves accurate segmentation of tunnel rings and inner ring areas and further determines the tunnel cross-section position based on this segmentation result to complete the cross-section extraction. Experimental results show that the proposed method achieves a segmentation accuracy of up to 95% on a standard tunnel point cloud dataset. Compared with traditional centerline extraction methods, the proposed cross-section extraction method does not require complex parameter settings, provides more stable positioning, and demonstrates high practicality and robustness. Full article
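A minimum spanning tree over unordered axis points, which GuSAC uses to recover the design-axis topology, can be sketched with Prim's algorithm; the point set below is a toy example, not tunnel data.

```python
import math

def prim_mst(points):
    """Prim's algorithm on points with Euclidean edge weights.
    Returns MST edges as (i, j) index pairs; a GuSAC-style pipeline would
    use such a tree to reconstruct the tunnel design-axis topology."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree.
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Points roughly along an axis, listed out of order.
axis_pts = [(0, 0), (4, 0.1), (1, 0), (3, 0), (2, 0.1)]
mst = prim_mst(axis_pts)  # chains the points along the axis
```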

13 pages, 2215 KiB  
Article
Disease Infection Classification in Coconut Tree Based on an Enhanced Visual Geometry Group Model
by Xiaocun Huang, Mustafa Muwafak Alobaedy, Yousef Fazea, S. B. Goyal and Zilong Deng
Processes 2025, 13(3), 689; https://doi.org/10.3390/pr13030689 - 27 Feb 2025
Viewed by 616
Abstract
The coconut is a perennial, evergreen tree in the palm family that belongs to the monocotyledonous group. The coconut plant holds significant economic value due to the diverse functions served by each of its components. Any ailment that impacts the productivity of the coconut plantation will ultimately have repercussions on the associated industries and the sustenance of the families reliant on the coconut economy. Deep learning has the potential to significantly alter the landscape of plant disease detection. Convolutional neural networks are trained using extensive datasets that include annotated images of plant diseases. This training enables the models to develop high-level proficiency in identifying complex patterns and extracting disease-specific features with exceptional accuracy. To address the need for a large dataset for training, an Enhanced Visual Geometry Group (EVGG16) model utilizing transfer learning was developed for detecting disease infections in coconut trees. The EVGG16 model achieves effective training with a limited quantity of data by transferring the convolution- and pooling-layer weight parameters from a pre-trained Visual Geometry Group (VGG16) network model. Through hyperparameter tuning and optimized training batch configurations, we achieved enhanced recognition accuracy, facilitating the development of more robust and stable predictive models. Experimental results demonstrate that the EVGG16 model achieved a 97.70% accuracy rate, highlighting its strong performance and suitability for practical applications in disease detection for plantations. Full article
(This article belongs to the Special Issue Transfer Learning Methods in Equipment Reliability Management)

23 pages, 4913 KiB  
Article
Sweet Potato Yield Prediction Using Machine Learning Based on Multispectral Images Acquired from a Small Unmanned Aerial Vehicle
by Kriti Singh, Yanbo Huang, Wyatt Young, Lorin Harvey, Mark Hall, Xin Zhang, Edgar Lobaton, Johnie Jenkins and Mark Shankle
Agriculture 2025, 15(4), 420; https://doi.org/10.3390/agriculture15040420 - 17 Feb 2025
Viewed by 658
Abstract
Accurate prediction of sweet potato yield is crucial for effective crop management. This study investigates the use of vegetation indices (VIs) extracted from multispectral images acquired by a small unmanned aerial vehicle (UAV) throughout the growing season, along with in situ-measured plant physiological parameters, to predict sweet potato yield. The data acquisition process through UAV field imaging is discussed in detail along with the extraction process for the multispectral bands that we use as features. The experiment is designed with a combination of different nitrogen application rates and cover crop treatments. The dependence of VIs and crop physiological parameters, such as leaf chlorophyll content, plant biomass, vine length, and leaf nitrogen content, on yield is evaluated through feature selection methods and model performance. Classical machine learning (ML) approaches and tree-based algorithms, like XGBoost and Random Forest, are implemented. Additionally, a soft-voting ML model ensemble approach is employed to improve performance of yield prediction. Individual models are trained and tested for different cover crop and nitrogen treatments to capture the relationships between the treatments and the target yield variable. The performance of the ML algorithms is evaluated using various popular model performance metrics like R2, RMSE, and MAE. Through modelling the data for cover crops and nitrogen treatment rates using individual models, the relationships and effects of different treatments on yield are explored. Important VIs useful for the study are identified through feature selection and model performance evaluation. Full article
(This article belongs to the Section Digital Agriculture)
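The soft-voting ensemble mentioned above reduces, for regression targets like yield, to a (possibly weighted) average of the member models' predictions. The model names and yield values below are hypothetical.

```python
def soft_vote(predictions, weights=None):
    """Weighted average of per-model yield predictions (soft voting for
    regression). predictions: dict of model_name -> list of predicted yields."""
    names = list(predictions)
    if weights is None:
        weights = {m: 1.0 for m in names}
    total_w = sum(weights[m] for m in names)
    n = len(next(iter(predictions.values())))
    return [sum(weights[m] * predictions[m][k] for m in names) / total_w
            for k in range(n)]

preds = {
    "xgboost_like":  [21.0, 18.5, 25.0],  # hypothetical yields, t/ha
    "random_forest": [20.0, 19.5, 24.0],
    "linear":        [22.0, 18.0, 26.0],
}
ensemble = soft_vote(preds)  # element-wise mean across the three models
```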

17 pages, 7393 KiB  
Article
Laser Stripe Centerline Extraction Method for Deep-Hole Inner Surfaces Based on Line-Structured Light Vision Sensing
by Huifu Du, Daguo Yu, Xiaowei Zhao and Ziyang Zhou
Sensors 2025, 25(4), 1113; https://doi.org/10.3390/s25041113 - 12 Feb 2025
Viewed by 652
Abstract
This paper proposes a point cloud post-processing method based on the minimum spanning tree (MST) and depth-first search (DFS) to extract laser stripe centerlines from the complex inner surfaces of deep holes. Addressing the limitations of traditional image processing methods, which are affected by burrs and low-frequency random noise, this method utilizes 360° structured light to illuminate the inner wall of the deep hole. A sensor captures laser stripe images, and the Steger algorithm is employed to extract sub-pixel point clouds. Subsequently, an MST is used to construct the point cloud connectivity structure, while DFS is applied for path search and noise removal to enhance extraction accuracy. Experimental results demonstrate that this method significantly improves extraction accuracy, with a dice similarity coefficient (DSC) approaching 1 and a maximum Hausdorff distance (HD) of 3.3821 pixels, outperforming previous methods. This study provides an efficient and reliable solution for the precise extraction of complex laser stripes and lays a solid data foundation for subsequent feature parameter calculations and 3D reconstruction. Full article
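Once the MST is built, the stripe centerline corresponds to the longest path in the tree, which two DFS passes recover while short noise spurs fall away. This is a generic sketch of that idea, not the paper's exact procedure.

```python
def farthest(adj, start):
    """Iterative DFS over a tree adjacency dict; returns the farthest node
    from start (by hop count) and the parent map of the traversal."""
    stack, seen = [start], {start}
    dist, parent = {start: 0}, {start: None}
    far = start
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                dist[v] = dist[u] + 1
                parent[v] = u
                if dist[v] > dist[far]:
                    far = v
                stack.append(v)
    return far, parent

def centerline(adj):
    """Longest path (tree diameter) of an MST: the main stripe, with short
    noise branches implicitly dropped. Classic two-pass DFS."""
    a, _ = farthest(adj, next(iter(adj)))
    b, parent = farthest(adj, a)
    path, node = [], b
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

# MST of stripe points with one noisy spur at node 5.
adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
line = centerline(adj)  # main path over nodes 0-1-2-3-4; spur 5 excluded
```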

26 pages, 33213 KiB  
Article
From Crown Detection to Boundary Segmentation: Advancing Forest Analytics with Enhanced YOLO Model and Airborne LiDAR Point Clouds
by Yanan Liu, Ai Zhang and Peng Gao
Forests 2025, 16(2), 248; https://doi.org/10.3390/f16020248 - 28 Jan 2025
Viewed by 900
Abstract
Individual tree segmentation is crucial to extract forest structural parameters, which is vital for forest resource management and ecological monitoring. Airborne LiDAR (ALS), with its ability to rapidly and accurately acquire three-dimensional forest structural information, has become an essential tool for large-scale forest monitoring. However, accurately locating individual trees and mapping canopy boundaries continue to be hindered by the overlapping nature of the tree canopies, especially in dense forests. To address these issues, this study introduces CCD-YOLO, a novel deep learning-based network for individual tree segmentation from the ALS point cloud. The proposed approach introduces key architectural enhancements to the YOLO framework, including (1) the integration of a cross residual transformer network extended (CReToNeXt) backbone for feature extraction and multi-scale feature fusion, (2) the application of the convolutional block attention module (CBAM) to emphasize tree crown features while suppressing noise, and (3) a dynamic head for adaptive multi-layer feature fusion, enhancing boundary delineation accuracy. The proposed network was trained using a newly generated individual tree segmentation (ITS) dataset collected from a dense forest. A comprehensive evaluation of the experimental results was conducted across varying forest densities, encompassing a variety of both internal and external consistency assessments. The model outperforms the commonly used watershed algorithm and the commercial LiDAR360 software, achieving the highest indices (precision, F1, and recall) in both tree crown detection and boundary segmentation stages. This study highlights the potential of CCD-YOLO as an efficient and scalable solution for addressing the critical challenges of accurate segmentation in complex forests. In the future, we will focus on enhancing the model’s performance and application. Full article
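The precision, recall, and F1 indices used to compare the crown-detection stage can be computed by greedily matching predicted and reference crowns at an IoU threshold; the boxes below are synthetic.

```python
def iou(a, b):
    """IoU of axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def crown_detection_scores(preds, truths, thr=0.5):
    """Greedy one-to-one matching at an IoU threshold;
    returns (precision, recall, F1)."""
    unmatched = list(truths)
    tp = 0
    for pbox in preds:
        hit = next((t for t in unmatched if iou(pbox, t) >= thr), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

truths = [(0, 0, 2, 2), (3, 3, 5, 5)]
preds  = [(0, 0, 2, 2), (3.2, 3.2, 5.2, 5.2), (8, 8, 9, 9)]  # last is a false positive
p, r, f = crown_detection_scores(preds, truths)
```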

18 pages, 4437 KiB  
Article
Uncertainty Analysis of Remote Sensing Estimation of Chinese Fir (Cunninghamia lanceolata) Aboveground Biomass in Southern China
by Yaopeng Hu, Liyong Fu, Bo Qiu, Dongbo Xie, Zheyuan Wu, Yuancai Lei, Jinsheng Ye and Qiulai Wang
Forests 2025, 16(2), 230; https://doi.org/10.3390/f16020230 - 25 Jan 2025
Viewed by 899
Abstract
Forest aboveground biomass (AGB) is not only the basis for forest carbon stock research, but also an important parameter for assessing the forest carbon cycle and ecological functions of forests. However, there are various uncertainties in the estimation process, limiting the accuracy of AGB estimation. Therefore, we extracted the spectral features, vegetation indices and texture factors from remote sensing images based on the field data and Landsat 8 OLI remote sensing images in Southern China to quantify the uncertainties. Then, we established three AGB estimation models, including K Nearest Neighbor Regression (KNN), Gradient Boosted Regression Tree (GBRT) and Random Forest (RF). Uncertainties at the plot scale and models were measured by using error equations to analyze the influences of uncertainties at different scales on AGB estimation. Results were as follows: (1) The R2 of the per-tree biomass model for Cunninghamia lanceolata was 0.970, while the uncertainty of the residual and parameters for the per-tree biomass model was 4.62% and 4.81%, respectively; and the uncertainty transferred to the plot scale was 3.23%. (2) The estimation methods had the most significant effects on the remote sensing models. RF was more accurate than the other two methods, and had the highest accuracy (R2 = 0.867, RMSE = 19.325 t/ha) and lowest uncertainty (5.93%), which outperformed both the KNN and GBRT models (KNN: R2 = 0.368, RMSE = 42.314 t/ha, uncertainty = 14.88%; GBRT: R2 = 0.636, RMSE = 32.056 t/ha, uncertainty = 6.3%). Compared to KNN and GBRT, the R2 of RF was enhanced by 0.499 and 0.231, while the uncertainty was decreased by 8.95% and 0.37%, respectively. The uncertainty associated with the scale of remote sensing models remains the primary source of uncertainty when compared to the plot scale. On the remote sensing scale, RF is the model with the best estimation performance.
This study examines the impact of both plot-scale and remote sensing model-scale methodologies on the estimation of AGB for Cunninghamia lanceolata. The findings aim to offer valuable insights and considerations for enhancing the accuracy of AGB estimations. Full article
(This article belongs to the Special Issue Forest Biometrics, Inventory, and Modelling of Growth and Yield)
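The study propagates uncertainty through its own error equations; purely as a generic illustration of the idea (not the paper's formulas), independent relative uncertainties are often combined in quadrature, and averaging n independent per-tree errors shrinks the plot-level relative uncertainty by a factor of sqrt(n).

```python
import math

def combine_in_quadrature(*rel_uncertainties_pct):
    """Combine independent relative uncertainties (in percent) in quadrature."""
    return math.sqrt(sum(u * u for u in rel_uncertainties_pct))

def plot_scale_uncertainty(tree_pct, n_trees):
    """If n equally weighted, independent per-tree errors average out on a
    plot, the plot-level relative uncertainty shrinks by sqrt(n)."""
    return tree_pct / math.sqrt(n_trees)

# Per-tree model: residual (4.62%) and parameter (4.81%) components, from the abstract.
combined = combine_in_quadrature(4.62, 4.81)
```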

21 pages, 4884 KiB  
Article
Evaluation of Machine Learning Algorithms for Classification of Visual Stimulation-Induced EEG Signals in 2D and 3D VR Videos
by Mingliang Zuo, Xiaoyu Chen and Li Sui
Brain Sci. 2025, 15(1), 75; https://doi.org/10.3390/brainsci15010075 - 16 Jan 2025
Cited by 1 | Viewed by 1145
Abstract
Background: Virtual reality (VR) has become a transformative technology with applications in gaming, education, healthcare, and psychotherapy. The subjective experiences in VR vary based on the virtual environment’s characteristics, and electroencephalography (EEG) is instrumental in assessing these differences. By analyzing EEG signals, researchers can explore the neural mechanisms underlying cognitive and emotional responses to VR stimuli. However, distinguishing EEG signals recorded in two-dimensional (2D) versus three-dimensional (3D) VR environments remains underexplored. Current research primarily utilizes power spectral density (PSD) features to differentiate between 2D and 3D VR conditions, but the potential of other feature parameters for enhanced discrimination is unclear. Additionally, the use of machine learning techniques to classify EEG signals from 2D and 3D VR using alternative features has not been thoroughly investigated, highlighting the need for further research to identify robust EEG features and effective classification methods. Methods: This study recorded EEG signals from participants exposed to 2D and 3D VR video stimuli to investigate the neural differences between these conditions. Key features extracted from the EEG data included PSD and common spatial patterns (CSPs), which capture frequency-domain and spatial-domain information, respectively. To evaluate classification performance, several classical machine learning algorithms were employed: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), naive Bayes, decision tree, AdaBoost, and a voting classifier. The study systematically compared the classification performance of PSD and CSP features across these algorithms, providing a comprehensive analysis of their effectiveness in distinguishing EEG signals in response to 2D and 3D VR stimuli. Results: The study demonstrated that machine learning algorithms can effectively classify EEG signals recorded while watching 2D and 3D VR videos. CSP features outperformed PSD in classification accuracy, indicating their superior ability to capture EEG signal differences between the VR conditions. Among the machine learning algorithms, the random forest classifier achieved the highest accuracy at 95.02%, followed by KNN with 93.16% and SVM with 91.39%. The combination of CSP features with RF, KNN, and SVM consistently showed superior performance compared to other feature-algorithm combinations, underscoring the effectiveness of CSP and these algorithms in distinguishing EEG responses to different VR experiences. Conclusions: This study demonstrates that EEG signals recorded while watching 2D and 3D VR videos can be effectively classified using machine learning algorithms with extracted feature parameters. The findings highlight the superiority of CSP features over PSD in distinguishing EEG signals under different VR conditions, emphasizing CSP’s value in VR-induced EEG analysis. These results expand the application of feature-based machine learning methods in EEG studies and provide a foundation for future research into the brain cortical activity of VR experiences, supporting the broader use of machine learning in EEG-based analyses. Full article
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)
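After CSP spatial filtering, each filtered channel is conventionally reduced to the log of its normalised variance before classification. A minimal sketch with made-up signals follows; the CSP filters themselves (a generalized eigenvalue problem) are omitted.

```python
import math

def log_variance_features(channels):
    """Reduce CSP-filtered channels to log normalised variances, the usual
    classifier input for CSP pipelines. channels: list of sample lists."""
    variances = []
    for ch in channels:
        m = sum(ch) / len(ch)
        variances.append(sum((s - m) ** 2 for s in ch) / len(ch))
    total = sum(variances)
    return [math.log(v / total) for v in variances]

# Two hypothetical CSP-filtered channels: one high-variance, one low.
feats = log_variance_features([[0, 2, 0, -2], [0, 0.2, 0, -0.2]])
```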

28 pages, 127916 KiB  
Article
A Pine Wilt Disease Detection Model Integrated with Mamba Model and Attention Mechanisms Using UAV Imagery
by Minhui Bai, Xinyu Di, Lechuan Yu, Jian Ding and Haifeng Lin
Remote Sens. 2025, 17(2), 255; https://doi.org/10.3390/rs17020255 - 13 Jan 2025
Viewed by 1241
Abstract
Pine wilt disease (PWD) is a highly destructive worldwide forest quarantine disease that has the potential to destroy entire pine forests in a relatively brief period, resulting in significant economic losses and environmental damage. Manual monitoring, biochemical detection and satellite remote sensing are frequently inadequate for the timely detection and control of pine wilt disease. This paper presents a fusion model, which integrates the Mamba model and the attention mechanism, for deployment on unmanned aerial vehicles (UAVs) to detect infected pine trees. The experimental dataset presented in this paper comprises images of pine trees captured by UAVs in mixed forests. The images were gathered primarily during the spring of 2023, spanning the months of February to May. The images were subjected to a preprocessing phase, during which they were transformed into the research dataset. The fusion model comprised three principal components. The initial component is the Mamba backbone network with State Space Model (SSM) at its core, which is capable of extracting pine wilt features with a high degree of efficacy. The second component is the attention network, which enables our fusion model to center on PWD features with greater efficacy; the optimal configuration was determined by evaluating four candidate attention modules. The third component, Path Aggregation Feature Pyramid Network (PAFPN), facilitates the fusion and refinement of data at varying scales, thereby enhancing the model’s capacity to detect multi-scale objects. Furthermore, the convolutional layers within the model have been replaced with depthwise separable convolutional layers (DSConv), which has the additional benefit of reducing the number of model parameters and improving the model’s detection speed. The final fusion model was validated on a test set, achieving an accuracy of 90.0%, a recall of 81.8%, an mAP of 86.5%, a parameter count of 5.9 M, and a detection speed of 40.16 FPS. In comparison to YOLOv8, the accuracy is enhanced by 7.1%, the recall by 5.4%, and the mAP by 3.1%. These outcomes demonstrate that our fusion model is appropriate for implementation on edge devices, such as UAVs, and is capable of effective detection of PWD. Full article
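The parameter saving from swapping a standard convolution for a depthwise separable one is simple arithmetic: k²·Cin·Cout weights versus k²·Cin + Cin·Cout. A quick check for a typical layer (the channel sizes below are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    """Depthwise separable convolution: one k x k depthwise filter per input
    channel plus a 1 x 1 pointwise convolution across channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)    # 9 * 64 * 128 = 73,728 weights
dsc = dsconv_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
ratio = std / dsc                # roughly 8x fewer parameters
```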

21 pages, 5489 KiB  
Article
An Improved Tree Crown Delineation Method Based on a Gradient Feature-Driven Expansion Process Using Airborne LiDAR Data
by Jiaxuan Jia, Lei Zhang, Kai Yin and Uwe Sörgel
Remote Sens. 2025, 17(2), 196; https://doi.org/10.3390/rs17020196 - 8 Jan 2025
Viewed by 702
Abstract
Accurate individual tree crown delineation (ITCD), which can be used to estimate forest parameters such as biomass, stem density, and carbon storage, is an essential component of precision forestry. Raster data such as the canopy height model derived from airborne light detection and ranging (LiDAR) data are widely used in large-scale ITCD. However, the accuracy of existing algorithms is limited by understory vegetation and variations in tree crown geometry (e.g., delineated crown boundaries consistently extend beyond the actual boundaries). In this study, we achieved more accurate crown delineation through an expansion process. First, initial crown boundaries were extracted by watershed segmentation. Then, a "from the inside out" expansion process, guided by a novel gradient feature, produced accurate crown delineation across different forest conditions. Results show that our method performed much better (~75% matched on average) than other commonly used methods across all test forest plots. The erroneous "matched but over-grown" cases were significantly reduced regardless of forest conditions. Compared to other methods, our method increases the precisely matched rate by an average of 25% in broadleaf plots, 18% in coniferous plots, 23% in mixed plots, 15% in high-density plots, and 32% in medium-density plots, without increasing over- and under-segmentation errors. Our method demonstrates potential applicability across various forest conditions, facilitating future large-scale ITCD tasks and precision forestry applications. Full article
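The "from the inside out" idea can be sketched on a one-dimensional canopy height profile: start at the treetop and grow outward while the height keeps descending, stopping where the gradient flattens. This toy stand-in uses a simple height-drop threshold, not the paper's gradient feature or stopping rule:

```python
# Toy 1-D illustration of an "inside out" crown expansion on a canopy
# height profile: start at the peak and grow outward while each step
# still descends by at least min_drop; stop where the profile flattens,
# a plausible crown boundary. Didactic stand-in, not the paper's method.
def expand_crown(profile, min_drop=0.2):
    peak = max(range(len(profile)), key=lambda i: profile[i])
    left, right = peak, peak
    # Grow left while the next step outward still descends steeply enough.
    while left > 0 and profile[left - 1] <= profile[left] - min_drop:
        left -= 1
    # Grow right under the same rule.
    while right < len(profile) - 1 and profile[right + 1] <= profile[right] - min_drop:
        right += 1
    return left, right  # inclusive crown extent

# Synthetic profile: flat understory (~2 m), a crown peaking at 12 m, understory.
chm = [2.0, 2.1, 5.0, 9.0, 12.0, 10.0, 6.0, 3.0, 2.9, 2.8]
print(expand_crown(chm))
```

The expansion halts on both flanks as soon as the profile stops dropping, which is what keeps the delineated crown from growing past its actual boundary into understory vegetation.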
30 pages, 9613 KiB  
Article
Mapping Soil Properties in Tropical Rainforest Regions Using Integrated UAV-Based Hyperspectral Images and LiDAR Points
by Yiqing Chen, Tiezhu Shi, Qipei Li, Chao Yang, Zhensheng Wang, Zongzhu Chen and Xiaoyan Pan
Forests 2024, 15(12), 2222; https://doi.org/10.3390/f15122222 - 17 Dec 2024
Cited by 1 | Viewed by 811
Abstract
For tropical rainforest regions with dense vegetation cover, developing effective large-scale soil mapping methods is crucial to replacing time-consuming and laborious conventional approaches and improving soil management practices. While machine learning (ML) algorithms predict soil properties better than linear models, their practical, automated application to remote sensing data requires further assessment. Therefore, this study integrates Unmanned Aerial Vehicle (UAV)-based hyperspectral images and Light Detection and Ranging (LiDAR) points to predict soil properties indirectly in two tropical rainforest mountains (Diaoluo and Limu) in Hainan Province, China. A total of 175 features, including texture features, vegetation indices, and forest parameters, were extracted from the two study sites. Six ML models, Partial Least Squares Regression (PLSR), Random Forest (RF), Adaptive Boosting (AdaBoost), Gradient Boosting Decision Trees (GBDT), Extreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP), were constructed to predict soil properties, including soil acidity (pH), total nitrogen (TN), soil organic carbon (SOC), and total phosphorus (TP). To enhance model performance, a Bayesian optimization algorithm (BOA) was introduced to obtain optimal model hyperparameters. The results showed that, compared with default parameter tuning, BOA consistently improved model performance in predicting soil properties, achieving average R2 improvements of 202.93%, 121.48%, 8.90%, and 38.41% for soil pH, SOC, TN, and TP, respectively. In general, BOA effectively captured the complex interactions between hyperparameters and prediction features, leading to better ML performance than default parameter tuning.
The GBDT model generally outperformed the other ML methods in predicting soil pH and TN, while the XGBoost model achieved the highest prediction accuracy for SOC and TP. Models integrating features derived from both hyperspectral images and LiDAR data outperformed those relying on a single data source. In summary, this study highlights the promising combination of UAV-based hyperspectral images and LiDAR points for advancing digital soil property mapping in forested areas, supporting large-scale soil management and monitoring. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
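The gain from tuned over default hyperparameters can be sketched with a toy search loop. The objective, parameter names, and exploit/explore heuristic below are illustrative stand-ins only; a real BOA fits a probabilistic surrogate (e.g., a Gaussian process) to the score landscape rather than perturbing the best point at random:

```python
import random

# Crude sketch of hyperparameter search in the spirit of BOA: propose
# candidates, score them with a (here synthetic) validation objective,
# and bias later proposals toward the best configuration seen so far.
def cv_score(learning_rate, n_trees):
    """Synthetic 'validation R^2' peaking at lr=0.1, n_trees=300 (made up)."""
    return 0.8 - (learning_rate - 0.1) ** 2 - ((n_trees - 300) / 1000) ** 2

def tune(n_iter=200, seed=0):
    rng = random.Random(seed)
    best = {"lr": 0.5, "trees": 50}  # stand-in "default" configuration
    best_score = cv_score(best["lr"], best["trees"])
    for _ in range(n_iter):
        if rng.random() < 0.5:       # exploit: perturb the best point
            lr = min(1.0, max(0.001, best["lr"] + rng.gauss(0, 0.05)))
            trees = min(1000, max(10, int(best["trees"] + rng.gauss(0, 50))))
        else:                        # explore: sample uniformly
            lr = rng.uniform(0.001, 1.0)
            trees = rng.randint(10, 1000)
        s = cv_score(lr, trees)
        if s > best_score:
            best, best_score = {"lr": lr, "trees": trees}, s
    return best, best_score

default_score = cv_score(0.5, 50)
best, tuned_score = tune()
print(f"default R^2 {default_score:.3f} -> tuned R^2 {tuned_score:.3f}")
```

Even this crude loop improves markedly on the default configuration, which mirrors why BOA lifted R2 over default tuning across all four soil properties.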
26 pages, 18107 KiB  
Article
Tree Species Classification for Shelterbelt Forest Based on Multi-Source Remote Sensing Data Fusion from Unmanned Aerial Vehicles
by Kai Jiang, Qingzhan Zhao, Xuewen Wang, Yuhao Sheng and Wenzhong Tian
Forests 2024, 15(12), 2200; https://doi.org/10.3390/f15122200 - 13 Dec 2024
Cited by 1 | Viewed by 856
Abstract
Accurately understanding the stand composition of shelter forests is essential for the construction and benefit evaluation of shelter forest projects. This study explores classification methods for dominant tree species in shelter forests using UAV-derived RGB, hyperspectral, and LiDAR data, and investigates the impact of individual tree crown (ITC) delineation accuracy, crown morphological parameters, and various data sources and classifiers. First, because tree crowns in shelterbelt forests overlap and have complex structures, existing ITC delineation methods often produce over-segmentation or segmentation errors. To address this challenge, we propose a watershed and multi-feature-controlled spectral clustering (WMF-SCS) algorithm for ITC delineation based on UAV RGB and LiDAR data, which provides clearer and more reliable classification objects, features, and training data for tree species classification. Second, spectral, texture, structural, and crown morphological parameters were extracted using UAV hyperspectral and LiDAR data combined with the ITC delineation results, and twenty-one classification schemes were constructed using RF, SVM, MLP, and SAMME classifiers. The results show that (1) the proposed WMF-SCS algorithm performs well for ITC delineation in complex mixed-forest scenarios (Precision = 0.88, Recall = 0.87, F1-Score = 0.87), yielding a 1.85% increase in overall classification accuracy; (2) including crown morphological parameters derived from LiDAR data improves the overall accuracy of the random forest classifier by 5.82%; (3) compared to using LiDAR or hyperspectral data alone, multi-source data improves classification accuracy by an average of 7.94% and 7.52%, respectively; (4) the random forest classifier combined with multi-source data achieves the highest classification accuracy and consistency (OA = 90.70%, Kappa = 0.8747). Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
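The two agreement measures reported above, overall accuracy (OA) and Cohen's Kappa, are both derived from a confusion matrix: OA is the fraction of correctly classified samples, and Kappa discounts the agreement expected by chance. A minimal sketch; the 3-class matrix below is hypothetical and does not come from the paper:

```python
# Compute OA and Cohen's Kappa from a confusion matrix
# (rows: reference class, columns: predicted class).
def oa_and_kappa(cm):
    n = sum(sum(row) for row in cm)
    oa = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: sum over classes of (row total * column total) / n^2.
    pe = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

cm = [  # hypothetical counts for three tree species
    [50, 3, 2],
    [4, 60, 1],
    [2, 2, 40],
]
oa, kappa = oa_and_kappa(cm)
print(f"OA = {oa:.4f}, Kappa = {kappa:.4f}")
```

Kappa is always at or below OA; when the two are close, as in the reported OA = 90.70% with Kappa = 0.8747, the classifier's agreement is well above what chance alone would produce.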