Article

WLC-Net: A Robust and Fast Deep Learning Wood–Leaf Classification Method

School of Science, Beijing Forestry University, No. 35 Qinghua East Road, Haidian District, Beijing 100083, China
* Author to whom correspondence should be addressed.
Forests 2025, 16(3), 513; https://doi.org/10.3390/f16030513
Submission received: 2 February 2025 / Revised: 5 March 2025 / Accepted: 12 March 2025 / Published: 14 March 2025
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

Effective classification of wood and leaf points from terrestrial laser scanning (TLS) point clouds is critical for analyzing and estimating forest attributes such as diameter at breast height (DBH), above-ground biomass (AGB), and wood volume. To address this, we introduce the Wood–Leaf Classification Network (WLC-Net), a deep learning model derived from PointNet++, designed to differentiate between wood and leaf points within tree point clouds. WLC-Net enhances classification accuracy, completeness, and speed by incorporating linearity as an inherent feature, refining the input–output framework, and optimizing the centroid sampling technique. We trained and evaluated WLC-Net using datasets from three distinct tree species, totaling 102 individual tree point clouds, and compared its performance against five existing methods: PointNet++, DGCNN, Krishna Moorthy’s method, LeWoS, and Sun’s method. WLC-Net achieved superior classification accuracy, with overall accuracy (OA) scores of 0.9778, 0.9712, and 0.9508; mean Intersection over Union (mIoU) scores of 0.9761, 0.9693, and 0.9141; and F1-scores of 0.8628, 0.7938, and 0.9019, respectively. The model also demonstrated high efficiency, processing an average of 102.74 s per million points. WLC-Net has demonstrated notable advantages in wood–leaf classification, including significantly enhanced classification accuracy, improved processing efficiency, and robust applicability across diverse tree species. These improvements stem from its innovative integration of linearity in the model architecture, refined input–output framework, and optimized centroid sampling technique. WLC-Net also exhibits strong applicability across various tree point clouds and holds promise for further optimization.

1. Introduction

Terrestrial laser scanning (TLS) has become a cornerstone in forestry over the past decades, offering high-resolution, three-dimensional data crucial for strategic forest management and ecosystem understanding. Its ability to perform non-destructive measurements makes it invaluable for accurately capturing forest attributes and structures such as DBH, AGB, canopy structure, and tree skeletons [1].
Within the realm of forestry studies, wood–leaf classification stands as an essential and fundamental step. For example, distinguishing leaf from wood in tree point clouds is essential for assessing tree health, studying physiological processes, and understanding carbon dynamics. Meanwhile, the analysis of woody structures derived from point cloud data can facilitate the estimation of DBH, tree volume, and AGB through the application of quantitative structural models [2,3]. Wood–leaf classification is thus instrumental in enabling precise analysis and the development of innovative applications, encompassing the exchange of mass and energy, as well as the partitioning of net primary production across various components [4,5,6]. Despite its potential, the irregular and complex nature of forest point clouds poses significant challenges, driving the need for innovative classification approaches. Current approaches can be broadly categorized into three principal methodologies: traditional methods, machine learning methods, and deep learning methods. Traditional methods are straightforward but often less scalable. Machine learning offers better adaptability but still requires extensive feature engineering. Deep learning, chosen for this study, excels in processing complex datasets automatically and significantly enhances classification accuracy without the need for manual feature selection [7].
Basically, traditional methods for wood–leaf classification from tree point clouds have predominantly relied on geometric or radiometric features, or a combination of both. The geometric information of tree point clouds is frequently utilized as the primary classification criterion because trunk and branch structures are more pronounced and structured, in contrast to the more dispersed and random arrangement of leaves [8]. For example, LeWoS, an automatic MATLAB-based tool designed for wood–leaf classification, employs its classification outcomes for tree modeling and subsequently benchmarks these results against those derived from manually classified wood points [9]. Additionally, Sun et al. proposed an automated three-step classification approach that integrates intensity data, neighborhood relationships, and point cloud density to facilitate fast wood–leaf classification [8]. These methods are often contingent on the selection of specific parameters, which can significantly influence the classification outcomes. For instance, the choice of point density threshold can determine the differentiation between dense wood structures and sparse leaf arrangements, impacting the accuracy of the classification. Similarly, the neighborhood search radius affects how points are clustered for analysis, influencing both the precision and recall of the classification results. Furthermore, the applicability of these methods can be constrained when fundamental data, such as intensity information, are absent. For instance, Sun’s three-step method is contingent upon the availability of intensity data [8].
While these methods have proven effective in wood–leaf classification, they require labor-intensive manual annotation or specific user-defined parameters [10]. This requirement not only renders the process time-consuming but also leaves it susceptible to human error, which can compromise the reliability and scalability of the methods. Additionally, the inherent complexity of forestry point clouds, with their intricate branching structures and varying densities, presents further challenges to achieving accurate and automated classification [11]. Therefore, there is an urgent need for more robust, automated solutions capable of managing the intricacies of forestry point clouds while reducing the reliance on manual intervention.
Machine learning methods have also been increasingly applied to the wood–leaf classification of tree point clouds. These methods generally encompass two pivotal stages: the extraction of distinctive features to act as classifiers, and the utilization of machine learning techniques to categorize each point as either wood or leaf. Both supervised and unsupervised machine learning algorithms have been leveraged in this context, such as Support Vector Machines (SVMs) [12,13], Random Forests (RFs) [14], Gaussian Mixture Models (GMMs) [15], and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [16]. Additionally, hybrid techniques have been integrated into classical machine learning methodologies for wood–leaf classification. For instance, a framework that integrates unsupervised classification of geometric features with shortest path analysis has achieved an average accuracy of 89% with field data and 83% with simulated data [17]. Similarly, a sophisticated supervised learning approach, which combines geometrical features defined by radially bounded nearest neighbors across multiple spatial scales, has yielded an overall average accuracy of 89.75% on field data from tropical and deciduous forests, and 82.1% on simulated point clouds [18]. Despite these methods’ impressive performance on test datasets, they continue to confront the challenge posed by the structural complexity inherent in tree point clouds.
Beyond the realm of machine learning, the emergence of deep learning has introduced a transformative approach to wood–leaf classification. Proficient at discerning complex patterns within high-dimensional data, deep learning algorithms effectively capture the subtle, non-linear interplays between inputs and outputs [19,20], promising significant enhancements to classification accuracy. Early deep learning forays involved converting point clouds into projections or voxels, exemplified by MVCNNs (Multi-View Convolutional Neural Networks), View-GCNs (View Graph Convolutional Networks), and VoxNet. However, these transformation techniques risked the loss of explicit information [21,22,23,24] and demanded substantial computational resources and advanced hardware. A paradigm shift occurred with the advent of PointNet [25], an algorithm engineered specifically for the direct processing of raw point cloud data. Despite its pioneering role, PointNet faced limitations in capturing local structures and leveraging multi-scale features. To address these shortcomings, subsequent methods such as PointNet++ [26], MO-Net (Mixture model network), DGCNN (Dynamic Graph Convolutional Neural Network) [27], PointCNN [28], and D-FCN (directionally constrained fully convolutional neural network) [29] were developed. Additionally, specialized deep learning frameworks for wood–leaf classification, including MDC-Net (multi-directional collaborative convolutional neural network) [30] and FWCNN (Convolution Neural Network-Based Model for Classifying Foliage and Woody) [31], have been progressively introduced. While these advanced methods offer promising avenues for enhancing accuracy in wood–leaf classification, the full integration of deep learning within forestry classification is still evolving. For instance, both MDC-Net and FWCNN rely on datasets composed solely of columnar trees with similar morphologies. MDC-Net operates with a lower data density and a smaller number of point clouds per tree, potentially limiting its broader applicability, while FWCNN relies on intensity data obtained from sensors and tends to misclassify a significant number of disordered leaf points as wood points. The field necessitates ongoing research to fully harness the capabilities of deep learning and achieve its potential [30,31].
In this study, we propose WLC-Net, a novel deep learning approach tailored for wood–leaf classification in tree point clouds. WLC-Net is constructed upon the foundational architecture of PointNet++, with three key innovations: the introduction of linearity as a prior feature, the automatic output of complete prediction results, and an optimized centroid sampling technique. Each of the three modules contributes to improvements in the accuracy or efficiency of wood–leaf classification, and together they substantially enhance both the precision and efficiency of the classification process.

2. Materials and Methods

2.1. Equipment and Data

The data were collected using the RIEGL VZ-400 scanner, a sophisticated laser measurement system manufactured by the RIEGL company (RIEGL Laser Measurement Systems GmbH, 3580 Horn, Austria). This system is distinguished for its extensive field of view, which covers a full 360° horizontally and spans 100° vertically from +60° to −40°, with angular resolutions of less than 0.0005°. Additionally, the scanner’s proficiency in generating up to 300,000 points per second significantly enhances the density and precision of resultant point clouds. The comprehensive characteristics of the scanner are listed in Table 1.
The experiment was carried out in July 2023 at the Dongsheng Bajia Suburban Park, situated in the Haidian District of Beijing. The study focused on two tree species: Chinese ash (Fraxinus chinensis Roxb.) and willow (Salix matsudana Koidz.). These species were selected for their dense canopies and moderate heights, which are advantageous for TLS. The dense canopy structure ensures that TLS faces the intricate task of distinguishing between overlapping wood and leaf materials, crucial for testing the robustness of classification algorithms. Additionally, the moderate height of these trees prevents the common issue of data loss at the upper canopy levels, which often occurs in taller species, ensuring that comprehensive and detailed canopy data are captured. The dataset included a total of 42 trees, specifically consisting of 21 Chinese ash trees and 21 willow trees. The scanning angular resolution was set to 0.1 degrees. The tree heights spanned from 6.68 m to 18.09 m, and their diameters at breast height (DBH) ranged from 10.3 cm to 43.7 cm. These measurements were manually obtained from laser scanning point cloud data using the CloudCompare software (Version 2.12 alpha stereo), which is open-source software licensed under the GNU General Public License (GPL). It allows for precise extraction of both height and DBH metrics directly from the high-resolution point clouds, providing an accurate representation of the tree dimensions without the need for traditional field inventory. Each tree point cloud was assigned a unique numerical identifier, as demonstrated in Figure 1 and Figure 2.
All tree point clouds were manually extracted and meticulously classified into wood and leaf categories using CloudCompare to establish a benchmark for wood–leaf classification. The standard classification for an exemplar tree, designated as ‘ash07’, is illustrated in Figure 3, where wood points are represented in brown and leaf points in green. The manual extraction and classification effort took about 140 h in total. All processing procedures were conducted on a laptop equipped with an NVIDIA GeForce RTX 3080, 16 GB of memory, and an Intel Core i7-11800H CPU. The laptop was manufactured by Dell Inc., located in Round Rock, TX, USA.
Furthermore, an open-source dataset [9] was used to verify the effectiveness of WLC-Net. This dataset encompasses 61 tropical trees across 15 distinct species. Tree heights in the dataset ranged from 8.7 to 53.6 m, with an average height of 33.7 ± 12.4 m [1]. Similarly, DBH ranged from 10.8 to 186.6 cm, averaging 58.4 ± 41.3 cm. As previously detailed, these dimensions were accurately derived from laser scanning point clouds, analyzed using CloudCompare software.

2.2. Overview of Method

PointNet++, a framework renowned in the deep learning domain, has exhibited robust performance across a spectrum of evaluations. Building upon the capabilities of PointNet++, we introduced WLC-Net to classify the tree point cloud into wood and leaf points. As depicted in Figure 4, the architecture of WLC-Net ingests the initial 3D coordinates and preliminary features to generate semantic labels for each point, thereby enhancing the granularity of classification outputs.
As indicated in Figure 4, to elevate the wood–leaf classification outcome, WLC-Net enhances classification accuracy, completeness, and speed by incorporating linearity as an inherent feature, refining the input–output framework, and optimizing the centroid sampling technique. A Prior Feature Calculation (PFC) module was designed to bolster classification accuracy. Meanwhile, the Splitter and Integrator (SAI) mechanism, which is marked yellow in Figure 4, ensures a complete set of output data. Additionally, the incorporation of a randomized center point selection process serves to curtail processing times [32]. As illustrated in Figure 4, the PFC consists of three components: neighborhood analysis and centralization, principal component analysis (PCA), and linearity measurement. The SAI, on the other hand, is bifurcated into a data splitter and a data output integrator.
These enhancements are not merely technical upgrades but are strategic interventions aimed at pushing the boundaries of what is possible within the existing hardware constraints. They are designed to work in synergy, ensuring that our model can handle more complex datasets while maintaining high levels of accuracy and efficiency.
The workflow of WLC-Net and experimental design are presented in Figure 5. We selected five methods for comparison with WLC-Net: PointNet++, DGCNN, LeWoS, Krishna Moorthy’s method, and Sun’s method. PointNet++ and DGCNN represent well-established deep learning approaches, while Krishna Moorthy’s method is a specialized machine learning technique for wood–leaf classification. The other two methods adhere to traditional classification techniques. In our experiments, leveraging the benchmark classification as a reference, the classification accuracy of different methods was analyzed using three metrics: OA, mIoU, and F1-score. Notably, the original study on the LeWoS method evaluated accuracy using sensitivity and specificity; therefore, we also computed these metrics for consistency when comparing with LeWoS and the associated open-source dataset. Furthermore, the efficiency of these methods was scrutinized in terms of time cost and time per million points (TPMP), as detailed in Section 3 [8].

2.2.1. Prior Feature Calculation (PFC)

A well-established principle in numerous studies posits that the integration of prior feature knowledge can markedly improve a model’s efficiency in identifying critical information [33]. However, this essential feature knowledge is often overlooked in contemporary deep learning techniques, particularly in the domain of wood–leaf classification for tree point clouds [34]. To address this, WLC-Net introduces a prior feature.
Prior features are generally categorized into two main categories: radiometric and geometric. Radiometric features, such as the intensity information of tree point clouds, are instrumental in differentiating between wood and leaf points due to their inherent physical disparities. Woody trunks and branches, characterized by their rigidity, contrast with the softer structure and variable positioning of leaves. Furthermore, the wood components, being drier, exhibit a distinct intensity under identical conditions when scanned with the RIEGL VZ-400 scanner, whose laser wavelength coincides with the absorption spectrum of water. However, the utility of intensity values may be compromised by fluctuations stemming from varying observation angles and environmental conditions, which are prevalent in multi-scan registrations. This variability can impede the application of radiometric features in the wood–leaf classification of registered tree point clouds. In contrast, geometric features are more advantageous for analyzing registered tree point clouds because of the spatial arrangement of wood and leaf points. Wood points exhibit more regularity, reflecting the structured nature of trunks and branches, whereas leaf points are more random and irregular, influenced by their positioning and susceptibility to wind [35].
Consequently, WLC-Net prioritizes the geometric feature—specifically, the linearity of points—as a pivotal metric for effectively harnessing the geometric information inherent in tree point clouds. Following Belton et al., the linearity of a point, which indicates the degree of linear alignment within its neighborhood and offers a nuanced and accurate depiction of the local structure, is computed from the eigenvalues and eigenvectors derived from PCA [36]. The linearity is calculated in the following steps:
Step 1: Set the radius of neighborhood.
This process primarily focuses on extracting the neighborhood information of each point $P_c$, which is essential for subsequent analysis. The criterion for neighborhood extraction is predicated on the typical diameter of branches, ranging from 0.1 m to 0.3 m in the middle or lower parts of the tree crown. As depicted in Figure 6, a cylinder symbolizes a branch, with $R$ representing its radius. In this context, $r$ denotes the radius of the neighborhood under consideration. When $r$ is considerably smaller than $R$, the selected neighborhood approximates a planar configuration, characterized by a scattered distribution of points and a smaller linearity. When $r$ increases to approximately $R$ but remains less than $2R$, the neighborhood forms a substantial arc, evidencing a notable linearity. When $r$ is marginally greater than $2R$, the neighborhood fully encapsulates the branch, exhibiting a pronounced linearity. Conversely, if $r$ is considerably larger than $2R$, it also incorporates the dispersed leaves in the canopy. Therefore, the selection of $r$ should ideally be no more than $2R$ and at least equivalent to $R$. Accordingly, taking into account the morphological characteristics of branches across a majority of tree species, the neighborhood extraction is conducted using a radius $r$ set to 0.15 m. This results in a point set $M$ comprising all neighbor points and the point of interest $P_c$.
Step 2: Centralize each point of point set M .
Centralization is implemented to mitigate the effects of varying scales across different neighborhoods. This process adjusts the raw $X$, $Y$, and $Z$ coordinates of each point in $M$. The centralization of $P_i(x_i, y_i, z_i)$, $P_i \in M$, is fulfilled as follows:
$P_i'(x_i', y_i', z_i') = (x_i - \bar{x}, \; y_i - \bar{y}, \; z_i - \bar{z}), \quad i = 1, 2, 3, \ldots, n, \quad P_i' \in V$
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i, \quad \bar{z} = \frac{1}{n}\sum_{i=1}^{n} z_i$
Here, $\bar{x}$, $\bar{y}$, and $\bar{z}$ represent the mean coordinates of the points within $M$. All the centralized points $P_i'$ construct the point set $V$. Centralization is essential for the subsequent processing of $V$, such as PCA, which is invariant to the absolute positions of the points, thereby maintaining the accuracy and reliability of our linearity metric.
Step 3: Perform the PCA analysis of point set V .
Now, we obtain a matrix $V$ of size $3 \times n$. To find the principal direction of variance and its corresponding degree, we employ PCA, a sophisticated dimensionality reduction technique. PCA operates by transforming a dataset with $k$ variables into a new set of $t$ orthogonal variables, known as principal components (Jolliffe and Cadima, 2016) [37]. The primary principal component captures the greatest variance within the dataset, and each subsequent component accounts for the maximum remaining variance while adhering to the orthogonality constraint. Following the PCA, the matrix $V$ is transformed into the matrix $X$, a $3 \times n$ matrix derived from PCA, and the covariance matrix $\Sigma$ of $X$ is computed as follows:
$\Sigma = \frac{1}{n-1} X^{T} X$
Step 4: Calculate the linearity of the point.
Because the points in the point set $M$ and the point set $V$ correspond on a one-to-one basis, the linearity for point $P_c$ in $M$ can be calculated as follows:
$Linearity_{P_c} = \frac{\lambda_1 - \lambda_2}{\lambda_1}$
where $\lambda_1$ and $\lambda_2$ denote the largest and second-largest eigenvalues of the covariance matrix $\Sigma$, respectively.
The value of linearity ranges from 0 to 1. A linearity value approaching 1 indicates that the points in the neighborhood are predominantly aligned along a single axis, indicative of a pronounced linear characteristic. In contrast, a linearity value approaching 0 implies a more uniform distribution of the points across multiple axes, signifying a weaker linear characteristic.
Step 5: Calculate the linearity values of point cloud.
Repeat step 2 to step 4 for each point in a tree point cloud and calculate the linearity values for all points. The linearity characteristic of tree ash15 is demonstrated in Figure 7. Linearity values are depicted on a color gradient, where red represents the highest degree of linearity, while blue represents the lowest linearity. It is evident that branches and twigs have the highest linearity, with leaves showing the next highest values, while the trunk exhibits the lowest linearity overall. Furthermore, there is a distinct overlap in the linearity values of leaves and the trunk, whereas the linearity values for branches and twigs are more closely clustered. This observation suggests that the use of linearity as a feature enhances the distinguishability of branches and twigs within the point cloud.
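To make Steps 1–5 concrete, the following minimal sketch computes the per-point linearity of an (n, 3) point cloud with NumPy and SciPy. It is our own illustration under assumed naming and library choices (e.g., a cKDTree for the radius search), not the authors' implementation; the 0.15 m radius follows Step 1.
```python
import numpy as np
from scipy.spatial import cKDTree

def linearity_per_point(points, radius=0.15):
    """Compute the linearity of every point in an (n, 3) array of XYZ coordinates."""
    tree = cKDTree(points)
    linearity = np.zeros(len(points))
    for i, p in enumerate(points):
        # Step 1: neighborhood extraction with radius r = 0.15 m (point set M)
        idx = tree.query_ball_point(p, r=radius)
        M = points[idx]
        if len(M) < 3:                      # too few neighbors for a meaningful PCA
            continue
        # Step 2: centralization (point set V)
        V = M - M.mean(axis=0)
        # Step 3: PCA via the covariance matrix of the centralized points
        cov = (V.T @ V) / (len(V) - 1)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # λ1 >= λ2 >= λ3
        # Step 4: linearity = (λ1 - λ2) / λ1
        if eigvals[0] > 0:
            linearity[i] = (eigvals[0] - eigvals[1]) / eigvals[0]
    return linearity
```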

2.2.2. Splitter and Integrator

For deep learning methods, a larger number of points in the training tree point clouds necessitates more computing resources. Consequently, most deep learning methods are constrained from utilizing excessively large point sets. For instance, the datasets generally used by algorithms such as PointNet++ and DGCNN typically contain files with exactly 2048 points. It is also common for deep learning methods to handle point clouds containing a uniform number of points, typically fewer than 10,000 [26,27].
For wood–leaf classification tasks, tree point clouds are usually characterized by high density and large number of points, with a single tree in our datasets frequently exceeding 100,000 points. Given the substantial volume of points and the computational resources demanded by deep learning models, WLC-Net employs a strategy to partition the entire tree point cloud into several smaller, more manageable subsets prior to initiating the deep learning analysis.
To achieve this, a splitter function was introduced, designed to adaptively divide tree points into manageable subsets. This function segments the point cloud into smaller sections, each containing a number of points as close as possible to a predefined maximum, MaxPointNum, primarily determined by hardware limitations. Our experiments demonstrated that processing times on our GPU increased exponentially as point counts exceeded 100,000. For example, processing 130,000 points from the Chinese ash dataset takes 36 s, which is 3.6 times longer than processing 100,000 points. When the count reaches 180,000 points, the processing time surges to 162 s. Given our hardware setup, the value of MaxPointNum was set to 100,000. The actual number of points contained in each point cloud, ActualPointNum, is calculated by:
$ActualPointNum = \frac{\text{total point number of the point cloud}}{MaxPointNum}$
Meanwhile, once the WLC-Net model has classified the subsets of points, it is imperative to consolidate these results into a comprehensive dataset. This integration allows for a straightforward comparison of WLC-Net’s outcomes with those of alternative methods. Since the input data have undergone a splitting process and each subset is normalized during the deep learning procedure, it is essential to revert to the original point cloud configuration at the data output stage. Therefore, we perform an inverse normalization on the results of the subsets, then merge the restored subsets back together to produce a complete classified tree point cloud.
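A minimal sketch of this split–normalize–classify–merge pipeline is given below. The constant MAX_POINT_NUM reflects the 100,000-point limit stated above; the unit-sphere normalization and the `classify_subset` callable are illustrative assumptions rather than the authors' code.
```python
import numpy as np

MAX_POINT_NUM = 100_000   # hardware-dependent upper bound used in this study

def split_and_classify(points, classify_subset):
    """points: (n, 3) XYZ array; classify_subset: callable mapping a normalized
    (m, 3) subset to an (m,) array of wood/leaf labels."""
    n = len(points)
    n_subsets = int(np.ceil(n / MAX_POINT_NUM))      # number of roughly equal chunks
    labels = np.empty(n, dtype=np.int64)
    for chunk in np.array_split(np.arange(n), n_subsets):
        subset = points[chunk]
        # normalize each subset: center it and scale it into the unit sphere
        center = subset.mean(axis=0)
        scale = np.linalg.norm(subset - center, axis=1).max()
        normalized = (subset - center) / scale
        # classify the subset, then write predictions back at the original indices;
        # reusing the original `points` array plays the role of inverse normalization
        labels[chunk] = classify_subset(normalized)
    return labels
```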

2.2.3. Random Selecting of Center Points

PointNet++ utilizes the farthest point sampling (FPS) method for selecting centroid points, aiming to preserve the original shape of the point cloud as accurately as possible. However, as the number of points increases, so does the computational burden, which poses challenges for processing dense tree point clouds. The number of points processed here is 100,000, far larger than the 2048 points typically handled by the PointNet++ model. Furthermore, when comparing FPS with other sampling methods such as random sampling, density sampling, and voxel-based sampling, the accuracy differences among these methods are minimal for large-scale tree point clouds, whereas the variations in computational efficiency are significant.
In WLC-Net, a random sampling method is adopted for its simplicity and computational efficiency [38]. It is also particularly suited to high-density, large-scale datasets, effectively capturing key geometric and topological features, a quality crucial for complex tree point clouds [39]. Notably, when uniformly sampling the same number of points from a point cloud comprising $N$ points, the computational complexity of random sampling is $O(1)$, whereas that of FPS can escalate to $O(N)$. The larger the number of points, the more pronounced the time advantage of random sampling over FPS becomes. With 2048 centroid points set in the model, our experiments demonstrated that using random sampling led to an approximately 45% increase in overall computational efficiency and an improvement in accuracy of about 0.5%.
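The following sketch contrasts the two centroid-selection strategies. The random sampler mirrors the approach adopted in WLC-Net, while the FPS routine is a textbook reference implementation included only for comparison; neither is taken from the authors' code.
```python
import numpy as np

def random_sampling(points, n_centroids=2048):
    """Select centroids uniformly at random; cost is independent of the cloud size per draw."""
    return np.random.choice(len(points), n_centroids, replace=False)

def farthest_point_sampling(points, n_centroids=2048):
    """Reference FPS: each new centroid is the point farthest from those already chosen."""
    n = len(points)
    idx = np.zeros(n_centroids, dtype=np.int64)
    dist = np.full(n, np.inf)
    idx[0] = np.random.randint(n)
    for k in range(1, n_centroids):
        # update the distance of every point to its nearest selected centroid
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[k - 1]], axis=1))
        idx[k] = np.argmax(dist)            # each step scans all n points
    return idx
```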

2.2.4. Training and Testing

In this study, three datasets were used: Chinese ash, willow, and an open-source dataset. Each of these datasets adhered to a training–test ratio of 2:1. Specifically, the Chinese ash and willow datasets each comprised 21 trees, giving 14 trees in the training set and 7 trees in the test set for each species. The open-source dataset initially contained 61 trees. To align with our training–test ratio, we randomly excluded one tree, thereby adopting a training set of 40 trees and a test set of 20 trees.
Furthermore, the open-source dataset contains trees from 15 different tropical species. Although the dataset could potentially be divided into 15 separate subsets for species-specific training, the number of samples for each individual species is too small to ensure robust training outcomes. Instead, we chose to train and test the model on the whole dataset of all tropical trees. This approach aimed to validate whether our model could maintain high accuracy across a diverse range of species, testing the model’s ability to generalize across varied tropical tree forms without being tailored to specific species characteristics, thus ensuring that the model is robust and versatile in real-world applications.
The deep learning methods used in the experiment, including the proposed WLC-Net, PointNet++, and DGCNN, were all trained with a learning rate of 0.001, 60 epochs, and a decay rate of 0.5, under the same hardware conditions.
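For illustration only, a minimal PyTorch training loop matching the stated hyperparameters (learning rate 0.001, 60 epochs, decay rate 0.5) might look as follows. The Adam optimizer, the 20-epoch decay step, and the placeholder model and synthetic data are assumptions, as the paper does not specify them.
```python
import torch
import torch.nn as nn

# Placeholder stand-in for the per-point classifier; the real WLC-Net architecture
# follows PointNet++ (see Figure 4) and consumes (x, y, z, linearity) features.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)  # decay rate 0.5

# Synthetic stand-in batch: 100,000 points with (x, y, z, linearity) features.
features = torch.randn(100_000, 4)
labels = torch.randint(0, 2, (100_000,))

for epoch in range(60):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(features), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```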

2.2.5. Accuracy Metrics

In this study, we utilized three principal metrics to evaluate classification accuracy: OA, mIoU, and the F1-score. OA is a measure of the model’s overall classification capability across all categories. It is calculated using the formula:
$OA = \frac{TP + TN}{TP + TN + FP + FN}$
In this equation, TP signifies true positives, TN denotes true negatives, FP represents false positives, and FN corresponds to false negatives.
mIoU, a standard metric in deep learning for semantic classification, evaluates the agreement between predicted classifications and actual ground-truth labels [40]. For each class, the Intersection over Union (IoU) evaluates the overlap between predicted and actual bounding areas relative to their total combined area. mIoU is the average of all the class specific IoU values. Particularly, in binary classifications, mIoU simplifies to a singular IoU value owing to the evaluation of only two classes. mIoU is formulated as:
$IoU = \frac{TP}{TP + FP + FN}$
$mIoU = \frac{1}{N} \sum_{i=1}^{N} IoU_i$
However, it is important to note that both OA and mIoU may exhibit bias towards dominant classes. This bias can obscure performance on minority classes, a critical consideration in tree point cloud data where disparities between wood and leaf points can reach ratios as high as 1:10. Relying solely on OA and mIoU could lead to performance indicators that inadvertently favor these dominant classes.
For a comprehensive evaluation of our model, we primarily focus on the F1-score, while also recognizing the importance of the Kappa Coefficient, another well-regarded metric. The Kappa Coefficient is particularly useful as it accounts for chance agreement, highlighting how the model’s performance exceeds mere random chance, a consideration crucial for multi-class problems where random predictions can significantly skew results [41].
However, it is important to note that in binary classification scenarios, particularly with highly imbalanced sample sizes, the Kappa Coefficient might sometimes present an overly optimistic view of the model’s effectiveness. In such cases, we turn to the F1-score for a more accurate assessment. The F1-score is particularly adept at emphasizing the performance of the positive class, which is often in the minority [42]. This makes it an ideal metric for distinguishing between categories like wood and leaf points in tree point clouds. The F1-score combines precision and recall in a balanced manner, making it invaluable in scenarios with significant data imbalances. It is calculated as follows:
$Precision = \frac{TP}{TP + FP}$
$Recall = \frac{TP}{TP + FN}$
$F1\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall}$
In conclusion, by utilizing a combination of evaluation metrics—OA, mIoU, and the F1-score—we ensure a thorough and insightful analysis of the model’s performance. This diversified strategy is essential in dealing with tree point clouds, where the disparity between classes presents a considerable challenge. This rigorous evaluation framework aims to confirm the model’s accuracy and its proficiency in distinguishing complex tree structures [43].
In addition, considering its different characteristics, the open-source dataset was evaluated in the reference article using sensitivity and specificity. For convenience, the calculations of sensitivity and specificity are listed as follows. Sensitivity, also known as the true positive rate, measures the proportion of actual positives correctly identified by the model. In contrast, specificity, or the true negative rate, assesses the proportion of actual negatives accurately identified.
$Sensitivity = \frac{TP}{TP + FN}$
$Specificity = \frac{TN}{TN + FP}$
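As an illustration of how these metrics relate, the sketch below computes OA, IoU/mIoU (reduced to the single wood-class IoU in the binary case, as noted above), F1-score, sensitivity, and specificity from two boolean label arrays. It is our own NumPy example, not the authors' evaluation code.
```python
import numpy as np

def classification_metrics(pred_wood, true_wood):
    """pred_wood, true_wood: boolean arrays, True = wood point, False = leaf point."""
    pred = np.asarray(pred_wood, dtype=bool)
    truth = np.asarray(true_wood, dtype=bool)
    tp = np.sum(pred & truth)            # wood predicted as wood
    tn = np.sum(~pred & ~truth)          # leaf predicted as leaf
    fp = np.sum(pred & ~truth)           # leaf predicted as wood
    fn = np.sum(~pred & truth)           # wood predicted as leaf
    oa = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn)            # binary case: mIoU reduces to this single IoU
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # identical to sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return {"OA": oa, "mIoU": iou, "F1": f1,
            "sensitivity": recall, "specificity": specificity}
```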

3. Results

In our experiment, a consistent training-to-test ratio of 14:7 was applied across both Chinese ash trees and willows. As previously mentioned, the manual classification results of all trees served as the standard benchmarks. The trees allocated to the testing phase—comprising seven Chinese ash trees and seven willow trees—were pivotal for our analytical process.
Furthermore, to verify the applicability of WLC-Net, the open-source dataset, which includes a diverse range of tree species, was also used. Considering that the dataset is composed of multiple tree species with only a limited number of samples per species, we elected to test it in its entirety. To maintain the consistent training–test ratio of 2:1, one tree was randomly excluded from the dataset, leaving 60 trees. Consequently, 40 tree point clouds were designated for the training set, while 20 tree point clouds were allocated to the testing set.

3.1. WLC-Net Classification Results

The performance of WLC-Net was evaluated across three key dimensions: visual inspection, accuracy, and efficiency. As previously mentioned, the testing phase included seven tree point clouds each for Chinese ash and willow trees. The classification results for these 14 trees are presented in Figure 8 and Figure 9, with wood points depicted in brown and leaf points in green. It is notable that misclassifications of leaf points on the main trunks and primary branches are rare, indicating the superior performance of WLC-Net in distinguishing trunk and branch points. The classification results for the 20 test samples from the open-source dataset are extensively detailed in Figure 10. It is shown that WLC-Net maintains robust performance on the shared dataset, even when scanning parameters and specific tree characteristics are not known. The trio of visual presentations corroborates the high reliability and broad applicability of WLC-Net across diverse tree point clouds.
The accuracy evaluation of WLC-Net was grounded in three key metrics: OA, mIoU, and F1-score. Comprehensive results for the three datasets are presented in Table 2. As listed in Table 2, for our two datasets, OA values range from 0.9554 to 0.9899, with the majority of trees surpassing the 0.96 threshold. The average OA for Chinese ash trees is 0.9778, while for willows, it stands at 0.9712. Notably, mIoU values generally align with OA even when the proportion of contrasting samples varies markedly, suggesting a positive association. However, the F1-scores exhibit a wider variance, spanning from 0.5804 to 0.9216, with no discernible pattern in relation to OA or mIoU.
In comparison with the above visual demonstrations, it is apparent that the trees recording lower F1-scores always have an obvious main trunk but elusive wood points within the canopy interior. These trees often exhibit suboptimal classification upon visual inspection, yet are challenging to identify using OA and mIoU alone. Based on the Chinese ash and willow datasets, the correlation diagrams for OA against mIoU and F1-score are depicted in Figure 11. The $R^2$ values for OA with mIoU and OA with F1-score are 0.9917 and 0.6235, respectively. Evidently, OA correlates strongly with mIoU, whereas its relationship with the F1-score is less pronounced. This highlights the importance of the F1-score as a stringent and precise metric for evaluating wood–leaf classification.
For the open-source dataset, a comprehensive evaluation was conducted, encompassing not only OA, mIoU, and F1-score, but also sensitivity and specificity. As reported, LeWoS exhibited an OA of 0.91 ± 0.03, a sensitivity of 0.92 ± 0.04, and a specificity of 0.89 ± 0.06 across the entire dataset of 61 trees. In our experimental evaluation, we utilized LeWoS to assess the subset of 20 tree point clouds that were used for testing in the WLC-Net study. This analysis yielded an OA of 0.9109, a sensitivity of 0.9539, and a specificity of 0.8256. These discrepancies are likely attributable to the use of 20 trees from the dataset’s total of 61 trees. Furthermore, to facilitate a direct comparison between the WLC-Net and LeWoS methods, we also evaluated the performance of WLC-Net on the same subset of 20 tree point clouds using sensitivity and specificity. This analysis achieved a sensitivity of 0.9878 and a specificity of 0.8565. This highlights the robust capability of WLC-Net to distinguish between wood and leaves, ensuring high precision in identifying wood points while maintaining a lower rate of incorrectly labeling leaf points as wood points.
The efficiency evaluation is primarily based on the processing time and TPMP of individual tree point clouds. Comprehensive results for the three datasets are presented in Table 3. As detailed in Table 3, the total point counts vary significantly, ranging from 21,608 to 8,269,426. This disparity directly influences the processing time required, which varies from a mere 4.39 s to a more substantial 923.04 s. The correlation between the total points and the processing time is illustrated in Figure 12, demonstrating a predominantly linear relationship with an $R^2$ value of 0.9484. Additionally, the TPMP ranges from 62.09 to 203.26 s, with an average of 102.74 s.
The majority of the datasets exhibit a TPMP between 80 and 120 s. Notably, the highest TPMP value of 203.26 s is associated with the tree point cloud with the fewest points, numbering 21,608, which corresponds to a total processing time of only about 4.4 s. This suggests that a considerable portion of the processing time is consumed by operations that are not directly proportional to the number of points processed, such as model invocation and file I/O operations.

3.2. Accuracy Analysis

To thoroughly assess the performance of WLC-Net, further experiments were undertaken, comparing it against five alternative methods: PointNet++, DGCNN, Krishna Moorthy’s method, LeWoS, and Sun’s method. PointNet++ and DGCNN are prominent deep learning techniques, Krishna Moorthy’s method is a distinguished machine learning strategy, and both LeWoS and Sun’s method are recognized for their effectiveness as traditional approaches. Table 4 provides a quantitative comparison of WLC-Net with these five comparator methods. Based on the three metrics, WLC-Net’s wood–leaf classification accuracy comprehensively outperformed the other five methods.
As shown in Figure 13, we use the ash03 tree to show the difference between WLC-Net and the other five methods used for comparison in wood–leaf classification. On the left side of the figure, we present the standard data, with black rectangles highlighting areas where our method significantly differs from the others. Notably, WLC-Net, LeWoS, Krishna Moorthy’s method, and PointNet++ all demonstrate strong visual classification performance, successfully identifying some obscured branches. Overall, the LeWoS method performs well, avoiding misclassifications of chaotic leaf points as wood, although it slightly underperforms compared to WLC-Net in identifying fine branches. Similarly, PointNet++ shows limited capability in recognizing some fine branches. While Krishna Moorthy’s method excels at branch identification, it struggles with fine branches in the upper canopy and misclassifies many chaotic leaf points as wood. In contrast, WLC-Net exhibits superior branch recognition without misclassifying too many chaotic leaf points. Additionally, WLC-Net is the only method among the six mentioned that identifies the small leaf buds present on both the left and right primary branches.
For Chinese ash tree point clouds, WLC-Net and LeWoS demonstrated closely competitive results, markedly outperforming the other four methods. WLC-Net improved upon LeWoS by 0.09% in OA, 0.08% in mIoU, and 1.4% in F1-score. Among the latter four methods, PointNet++ exhibits the best performance, with Sun’s method and DGCNN lagging behind. Specifically, for ash tree processing, WLC-Net, LeWoS, and PointNet++ exhibited OA values above 90%, significantly higher than the OA values of the remaining three methods, which were all below 90%.
For willow tree point clouds, the results from WLC-Net and LeWoS are still quite close, markedly outperforming the other four methods. WLC-Net improved upon LeWoS by 1.37% in OA, 1.51% in mIoU, and 3.66% in F1-score. Compared to processing Chinese ash tree point clouds, the leading advantage of WLC-Net over LeWoS increased slightly. The results of PointNet++ and DGCNN fall into an excellent second tier, and their metrics are very close to each other. Their OA values approximate 92%, which is a notable improvement over Krishna Moorthy’s method and Sun’s method, both of which have OA values below 90%. In contrast, both WLC-Net and LeWoS achieve OA values in excess of 95%, clearly delineating their better performance. Additionally, the F1-scores for WLC-Net and LeWoS exceed 75%, significantly surpassing the other four methods, which are around 40%. This disparity highlights the distinct advantages of WLC-Net and LeWoS in the classification of willow tree point clouds.
For the open-source dataset, which is devoid of intensity data, Sun’s method was excluded from testing. Consequently, the remaining five methods were evaluated. Among these, only Krishna Moorthy’s method had an OA below 90%, while the other four methods all exceeded this threshold. Notably, WLC-Net and PointNet++ demonstrated significantly higher OA compared to LeWoS and DGCNN. Specifically, WLC-Net and PointNet++ achieved OA values above 93%, whereas LeWoS and DGCNN had OA values around 91%. In particular, WLC-Net improved upon PointNet++ by 2.04% in OA, 3.23% in mIoU, and 4.68% in F1-score. All five methods achieved F1-scores above 80%, with WLC-Net notably exceeding 90%.

3.3. Efficiency Analysis

To better evaluate WLC-Net, we also compared its processing efficiency against the other methods. However, since DGCNN, Krishna Moorthy’s method, and LeWoS did not provide explicit timing data, we resorted to manually estimating the processing time based on ash01, which contains around 100,000 points. Approximately, DGCNN required about 10 s, Krishna Moorthy’s method took around 150 s, and LeWoS used about 60 s. In terms of processing speed, Sun’s method is remarkably fast, completing most tree point clouds within one second. WLC-Net and PointNet++ are more analogous, as they are both deep learning approaches.
We focused specifically on the time efficiency of WLC-Net and PointNet++ for the Chinese ash and willow datasets, as detailed in Table 5, with a distinction made between tree point clouds that required splitting and those that did not. For the data requiring splitting, PointNet++ takes a considerable amount of time due to hardware limitations. Specifically, when the point cloud data surpass a certain threshold (approximately 120,000 points under our testing conditions), the processing time escalates sharply, potentially leading to GPU memory exhaustion and program failure. In these scenarios, the efficiency gains of WLC-Net are predominantly attributable to its data-splitting module. For datasets that do not require splitting, WLC-Net operates approximately 41% faster than PointNet++, with this improvement largely due to the centroid sampling technique it employs.

4. Discussion

According to the preceding information, WLC-Net exhibits the best overall performance in wood–leaf classification accuracy across the three datasets. The incorporation of linearity as a feature-level discriminator has significantly enhanced the model’s performance, achieving an improvement of approximately 2.93% in OA, 3.4% in mIoU, and 15.98% in F1-score across the three datasets when compared to the original PointNet++ model. This enhancement highlights the effectiveness of using linearity as a prior feature to augment the performance of the well-established PointNet++ model. Additionally, we addressed the original PointNet++ model’s inability to take the entire tree point cloud as input and return a complete output, a limitation that previously prevented complete classification results even though OA and mIoU could still be calculated.
Many studies on wood–leaf classification have indicated that the quality of canopy data is often compromised due to occlusion issues, and the complexity within the crown layer is significantly higher than that of the trunk, making it difficult for most methods to accurately identify the finer branches within the canopy [9,18,30]. While WLC-Net outperforms the other five approaches on the current datasets, it still falls short in identifying fine branches in some instances. For example, as demonstrated in Figure 14, we present three such trees from our datasets along with their corresponding standard data, F1-scores, and ratios of wood points to leaf points, all of which are notably below 0.1 and below the average observed across the datasets. This unusually low ratio highlights a significant class imbalance, likely contributing to the below-average F1-scores for these trees. According to previous studies on class imbalance in deep learning, one plausible explanation for the challenges in recognizing finer branches could be the class imbalance issue, which has been shown to detrimentally affect classification performance. Oversampling strategies, recognized as the dominant method for addressing class imbalance, could potentially mitigate these adverse effects and ensure a balanced representation of all classes in wood–leaf classification [44].
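As a purely hypothetical illustration of such an oversampling strategy (not a module of WLC-Net), minority-class wood points in a training subset could be duplicated until a target wood-to-leaf ratio is reached; the function and parameter names below are our own.
```python
import numpy as np

def oversample_wood(points, labels, target_ratio=0.5, wood_label=1):
    """points: (n, d) feature array; labels: (n,) label array; returns an augmented copy."""
    wood_idx = np.flatnonzero(labels == wood_label)
    leaf_idx = np.flatnonzero(labels != wood_label)
    needed = int(target_ratio * len(leaf_idx)) - len(wood_idx)
    if needed <= 0:
        return points, labels            # already at or above the target ratio
    # duplicate randomly chosen wood points until the target wood:leaf ratio is met
    extra = np.random.choice(wood_idx, needed, replace=True)
    keep = np.concatenate([np.arange(len(labels)), extra])
    return points[keep], labels[keep]
```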
To explore whether data imbalance is the cause of inadequate fine branch recognition, we conducted an analysis using a single-scan willow dataset [8]. Figure 15 illustrates the results of this test. Although the OA for all trees remains high, a significant variation in the F1-scores was observed. Particularly noteworthy is the tree labeled ‘Single-scan04’, which had an F1-score of 0.6181, substantially below the average, indicating the poorest classification performance among the sampled trees. This tree also exhibited the lowest ratio of wood to leaf points, further substantiating the hypothesis that class imbalance is impacting classification accuracy. Sun’s method, which focuses on wood–leaf classification for single-station data, slightly underperformed compared to WLC-Net on this dataset, with an OA of 0.9550 for Sun’s method against 0.9553 for WLC-Net. This difference, although minimal, underscores WLC-Net’s capability to maintain robust performance on single-scan datasets.
WLC-Net still presents certain challenges. Previous studies have shown that finer branches exhibit distinct geometric characteristics, such as linearity, curvature, and normal vectors, which are markedly different from the more robust trunks and shrubs [36]. As illustrated in Figure 7, while linearity can effectively differentiate between fine branches and leaves, it struggles to distinguish between the trunk and leaves due to similar linearity values. This similarity could lead to a feature representation bias, where the substantial difference in features between trunks and fine branches, coupled with a model’s exposure primarily to trunk samples during training, results in a learned bias towards trunk features. Such a bias may hinder the model’s ability to recognize fine branches, even though they are part of the wood category, because these features are underrepresented in the training data [45].
For instance, the open-source dataset, which primarily features trees exceeding 30 m in height, presents a unique scenario where the vertical distribution ratio of the canopy is low, indicating a minimal vertical span of the canopy relative to the overall tree height. This condition results in a high ratio of wood to leaf points, with minimal canopy occlusion. Despite these seemingly favorable conditions for classification, as demonstrated by tree06 in Figure 16a, the performance remains suboptimal, with only the trunk and primary branches being accurately identified while finer branches are largely unrecognized. This issue is not unique to WLC-Net but is also evident in the LeWoS method on similar datasets.
To address this, we cropped the trunk portions from the dataset, aiming to eliminate the feature representation bias. As shown in Figure 16b, the retrained model on the modified dataset, devoid of dominant trunk data, displayed improved classification results. It is essential to note that, given the significant modifications made to the original data, a direct comparison of accuracy with the unaltered dataset is not feasible. Therefore, we relied on visual assessments of the classification outcomes, and the contrasts between Figure 16a,b starkly highlight the benefits of training on a dataset after removing trunk data, which yields more accurate classification of finer branches.
Therefore, for wood–leaf classification, a tri-class scheme, including trunk, branches, and leaves, may be more appropriate. This methodology has the potential to reduce the inconsistencies observed in the classification of trunk and branch segments. Implementing a tri-class system could augment the wood–leaf classification accuracy within the realm of deep learning. Consequently, this enhancement could bolster the model’s versatility and robustness.
In terms of efficiency, there is a notable variation in processing time across the different methods utilized. Sun’s approach classifies the tree point cloud directly based on its intrinsic characteristics, without the need for sample collection or model training, and is implemented in C++. This method is the fastest, capable of processing an individual tree point cloud within approximately one second. The LeWoS method, developed in MATLAB R2019b, likewise requires no samples or training but is relatively slower. Krishna Moorthy’s method, a machine learning-based approach executed in Python 3.10.11, exhibits longer processing times than LeWoS.
Deep learning methodologies necessitate a training phase, which involves an initial investment of time to compile a dataset and train the model. However, once the model is trained, these methods generally outpace LeWoS in terms of processing time. Among the deep learning approaches, the processing speed of WLC-Net is consistent with that of DGCNN, both significantly outperforming PointNet++. This enhancement can be ascribed to two primary factors: the employment of a random sampling strategy instead of the more traditional farthest point sampling during centroid selection, and the segmentation of extensive point cloud data into more manageable chunks for processing, thereby conserving computational resources.

5. Conclusions

This research presents WLC-Net, a pioneering method for the automated wood–leaf classification within tree point cloud data. By adapting the PointNet++ architecture, we integrated linearity as a potent distinguishing feature, which significantly aids in precise identification of branches and leaves. Our quantitative assessments, anchored on manually classified benchmarks, demonstrate that WLC-Net achieved an F1-score improvement of approximately 15.98% over PointNet++. In terms of efficiency, WLC-Net has been shown to operate 41% faster than PointNet++ in datasets not requiring data splitting, and maintains robust performance even under the stress of larger datasets, validating the model’s efficiency and reliability.
However, the study encountered challenges, mainly in terms of class imbalance and feature representation bias. Future research should prioritize refining the linearity feature, incorporating an oversampling module, and exploring multi-class classification techniques to better navigate these challenges. Further development could also investigate the integration of additional discriminative features that may enhance the model’s sensitivity to finer details within the point clouds.
WLC-Net is able to process and classify complex tree point clouds quickly, and it does not falsely identify non-existent branches. This precise recognition capability makes WLC-Net particularly valuable for tree modeling, enabling it to assist in managing forest resources, monitoring ecological health, and conducting large-scale environmental assessments.

Author Contributions

Conceptualization, P.W.; Methodology, H.L.; Software, H.L., J.R. and Y.G.; Validation, P.W., Y.W., Y.G., L.Z., M.Z. and W.C.; Formal analysis, H.L.; Investigation, H.L.; Data curation, Y.W. and J.R.; Writing—original draft, H.L.; Writing—review & editing, P.W.; Visualization, H.L. and Y.W.; Supervision, J.R.; Project administration, P.W.; Funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Natural Science Foundation (No. 6232031) and supported by the Third Xinjiang Scientific Expedition Program (Grant No.2022xjkk1205).

Data Availability Statement

Data available on request due to restrictions.

Acknowledgments

Our deepest gratitude goes to the editors and reviewers for their careful work and thoughtful suggestions that have helped improve this paper substantially.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Takoudjou, S.M.; Ploton, P.; Sonké, B.; Hackenberg, J.; Griffon, S.; de Coligny, F.; Kamdem, N.G.; Libalah, M.; Mofack, G.I.; Le Moguédec, G.; et al. Using terrestrial laser scanning data to estimate large tropical trees biomass and calibrate allometric models: A comparison with traditional destructive approach. Methods Ecol. Evol. 2017, 9, 905–916.
2. Calders, K.; Newnham, G.; Burt, A.; Murphy, S.; Raumonen, P.; Herold, M.; Culvenor, D.; Avitabile, V.; Disney, M.; Armston, J.; et al. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 2014, 6, 198–208.
3. Kükenbrink, D.; Gardi, O.; Morsdorf, F.; Thürig, E.; Schellenberger, A.; Mathys, L. Above-ground biomass references for urban trees from terrestrial laser scanning data. Ann. Bot. 2021, 128, 709–724.
4. Hosoi, F.; Nakai, Y.; Omasa, K. Estimation and Error Analysis of Woody Canopy Leaf Area Density Profiles Using 3-D Airborne and Ground-Based Scanning Lidar Remote-Sensing Techniques. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2215–2223.
5. Kong, F.; Yan, W.; Zheng, G.; Yin, H.; Cavan, G.; Zhan, W.; Zhang, N.; Cheng, L. Retrieval of three-dimensional tree canopy and shade using terrestrial laser scanning (TLS) data to analyze the cooling effect of vegetation. Agric. For. Meteorol. 2016, 217, 22–34.
6. Olsoy, P.J.; Mitchell, J.J.; Levia, D.F.; Clark, P.E.; Glenn, N.F. Estimation of big sagebrush leaf area index with terrestrial laser scanning. Ecol. Indic. 2016, 61, 815–821.
7. Han, T.; Sánchez-Azofeifa, G.A. A Deep Learning Time Series Approach for Leaf and Wood Classification from Terrestrial LiDAR Point Clouds. Remote Sens. 2022, 14, 3157.
8. Sun, J.; Wang, P.; Gao, Z.; Liu, Z.; Li, Y.; Gan, X.; Liu, Z. Wood–Leaf Classification of Tree Point Cloud Based on Intensity and Geometric Information. Remote Sens. 2021, 13, 4050.
9. Wang, D.; Takoudjou, S.M.; Casella, E. LeWoS: A universal leaf-wood classification method to facilitate the 3D modelling of large tropical trees using terrestrial LiDAR. Methods Ecol. Evol. 2020, 11, 376–389.
10. Dong, R.; Li, W.; Fu, H.; Gan, L.; Yu, L.; Zheng, J.; Xia, M. Oil palm plantation mapping from high-resolution remote sensing images using deep learning. Int. J. Remote Sens. 2020, 41, 2022–2046.
11. Jiang, T.; Zhang, Q.; Liu, S.; Liang, C.; Dai, L.; Zhang, Z.; Sun, J.; Wang, Y. LWSNet: A Point-Based Segmentation Network for Leaf-Wood Separation of Individual Trees. Forests 2023, 14, 1303.
12. Yun, T.; An, F.; Li, W.; Sun, Y.; Cao, L.; Xue, L. A Novel Approach for Retrieving Tree Leaf Area from Ground-Based LiDAR. Remote Sens. 2016, 8, 942.
13. Liu, Z.; Zhang, Q.; Wang, P.; Li, Z.; Wang, H. Automated classification of stems and leaves of potted plants based on point cloud data. Biosyst. Eng. 2020, 200, 215–230.
14. Zhu, X.; Skidmore, A.K.; Darvishzadeh, R.; Niemann, K.O.; Liu, J.; Shi, Y.; Wang, T. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 43–50.
15. Ma, L.; Zheng, G.; Eitel, J.U.H.; Moskal, L.M.; He, W.; Huang, H. Improved Salient Feature-Based Approach for Automatically Separating Photosynthetic and Nonphotosynthetic Components Within Terrestrial Lidar Point Cloud Data of Forest Canopies. IEEE Trans. Geosci. Remote Sens. 2015, 54, 679–696.
16. Ferrara, R.; Virdis, S.G.; Ventura, A.; Ghisu, T.; Duce, P.; Pellizzaro, G. An automated approach for wood-leaf separation from terrestrial LIDAR point clouds using the density based clustering algorithm DBSCAN. Agric. For. Meteorol. 2018, 262, 434–444.
17. Vicari, M.B.; Disney, M.; Wilkes, P.; Burt, A.; Calders, K.; Woodgate, W. Leaf and wood classification framework for terrestrial LiDAR point clouds. Methods Ecol. Evol. 2019, 10, 680–694.
18. Krishna Moorthy, S.M.; Calders, K.; Vicari, M.B.; Verbeeck, H. Improved Supervised Learning-Based Approach for Leaf and Wood Classification From LiDAR Point Clouds of Forests. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3057–3070.
19. Fei, B.; Yang, W.; Chen, W.-M.; Li, Z.; Li, Y.; Ma, T.; Hu, X.; Ma, L. Comprehensive Review of Deep Learning-Based 3D Point Cloud Completion Processing and Analysis. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22862–22883.
20. Wickramasinghe, C.S.; Marino, D.L.; Manic, M. ResNet Autoencoders for Unsupervised Feature Learning from High-Dimensional Data: Deep Models Resistant to Performance Degradation. IEEE Access 2021, 9, 40511–40520.
21. Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-View Convolutional Neural Networks for 3D Shape Recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 945–953.
22. Li, Z.; Wang, H.; Li, J. Auto-MVCNN: Neural Architecture Search for Multi-view 3D Shape Recognition. arXiv 2020.
23. Maturana, D.; Scherer, S. VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 922–928.
24. Wu, W.; Qi, Z.; Fuxin, L. PointConv: Deep Convolutional Networks on 3D Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–19 June 2019; pp. 9621–9630.
25. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
26. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017.
27. Wang, Y.; Solomon, J.M. Object DGCNN: 3D Object Detection using Dynamic Graphs. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2021), Online, 6–14 December 2021; pp. 20745–20758.
28. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution On X-Transformed Points. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2018), Montréal, QC, Canada, 3–8 December 2018.
29. Wen, C.; Yang, L.; Li, X.; Peng, L.; Chi, T. Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2020, 162, 50–62.
30. Dai, W.; Jiang, Y.; Zeng, W.; Chen, R.; Xu, Y.; Zhu, N.; Xiao, W.; Dong, Z.; Guan, Q. MDC-Net: A multi-directional constrained and prior assisted neural network for wood and leaf separation from terrestrial laser scanning. Int. J. Digit. Earth 2023, 16, 1224–1245.
31. Wu, B.; Zheng, G.; Chen, Y. An Improved Convolution Neural Network-Based Model for Classifying Foliage and Woody Components from Terrestrial Laser Scanning Data. Remote Sens. 2020, 12, 1010.
32. Olken, F.; Rotem, D. Random sampling from databases: A survey. Stat. Comput. 1995, 5, 25–42.
33. Sharma, R.; Gangrade, J.; Gangrade, S.; Mishra, A.; Kumar, G.; Kumar Gunjan, V. Modified EfficientNetB3 Deep Learning Model to Classify Colour Fundus Images of Eye Diseases. In Proceedings of the 2023 IEEE 5th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA), Hamburg, Germany, 7–8 October 2023; pp. 632–638.
34. Chen, X.; Jiang, K.; Zhu, Y.; Wang, X.; Yun, T. Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning. Forests 2021, 12, 131.
35. Arrizza, S.; Marras, S.; Ferrara, R.; Pellizzaro, G. Terrestrial Laser Scanning (TLS) for tree structure studies: A review of methods for wood-leaf classifications from 3D point clouds. Remote Sens. Appl. Soc. Environ. 2024, 36, 101364.
36. Belton, D.; Moncrieff, S.; Chapman, J. Processing Tree Point Clouds Using Gaussian Mixture Models. In Proceedings of the ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, 11–13 November 2013; pp. 43–48.
37. Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A 2016, 374, 20150202.
38. Chen, S.; Sandryhaila, A.; Kovačević, J. Sampling Theory for Graph Signals. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 3392–3396.
39. Warren, J.; Marz, N. Big Data: Principles and Best Practices of Scalable Realtime Data Systems; Simon and Schuster: New York, NY, USA, 2015.
40. Zhang, F.; Sun, H.; Xie, S.; Dong, C.; Li, Y.; Xu, Y.; Zhang, Z.; Chen, F. A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model. Front. Plant Sci. 2023, 14, 1199473.
41. Chicco, D.; Warrens, M.J.; Jurman, G. The Matthews Correlation Coefficient (MCC) is More Informative Than Cohen’s Kappa and Brier Score in Binary Classification Assessment. IEEE Access 2021, 9, 78368–78381.
42. Guo, Z.; Xu, J.; Liu, A. Remote Sensing Image Semantic Segmentation Method Based on Improved Deeplabv3+. In Proceedings of the International Conference on Image Processing and Intelligent Control (IPIC 2021), Lanzhou, China, 30 July–1 August 2021; pp. 101–109.
43. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
44. Buda, M.; Maki, A.; Mazurowski, M.A. A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 2018, 106, 249–259.
45. Adeli, E.; Zhao, Q.; Pfefferbaum, A.; Sullivan, E.V.; Fei-Fei, L.; Niebles, J.C.; Pohl, K.M. Representation Learning with Statistical Independence to Mitigate Bias. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 2513–2523.
Figure 1. Extracted 21 Chinese ash point clouds and their numerical identifiers.
Figure 2. Extracted 21 willow point clouds and their numerical identifiers.
Figure 3. Manual standard classification result of ash 7. Brown: wood points; green: leaf points.
Figure 4. WLC-Net structural overview for classifying wood and leaf points in tree point cloud datasets. (a,b) Multi-scale grouping (MSG).
Figure 5. Workflow of WLC-Net and experimental design.
Figure 6. Visualization of neighborhood radius ‘r’ relative to cylinder base radius ‘R’ in various scenarios: (a) r << R, (b) r ≈ R, (c) R < r < 2R, and (d) 2R < r.
Figure 7. Visual representation of linear features (ash15 as an example).
Figure 8. Demonstration of the classification results of seven Chinese ash trees. Green: leaf points; brown: wood points.
Figure 9. Demonstration of the classification results of seven willow trees. Green: leaf points; brown: wood points.
Figure 10. Demonstration of the classification results of the 20 open-source trees. Green: leaf points; brown: wood points.
Figure 11. Correlation diagrams of OA with mIoU and F1-score based on three datasets. The graph on the left is derived from the Chinese ash and willow datasets, and the one on the right is derived from the open-source dataset.
Figure 12. Correlation diagrams of total points and time cost.
Figure 13. Wood–leaf classification results of WLC-Net, PointNet++, DGCNN, Krishna Moorthy’s method, LeWoS, and Sun’s method on ‘ash03’. Black rectangles mark areas where the classification results differ significantly.
Figure 14. Comparison of benchmark and WLC-Net results for willow01, ash02, and ash04. Green: leaf points; brown: wood points.
Figure 15. WLC-Net classification results of eight single-scanned willow trees. Green: leaf points; brown: wood points.
Figure 16. Comparison of classification results for tree06 before and after trunk cropping.
Table 1. Characteristics of the RIEGL VZ-400 scanner.
Technical Parameters
Maximum Scanning Distance | 600 m (natural object reflectivity ≥ 90%)
Vertical Scan Angle Range | Total 100° (+60°/−40°)
Horizontal Scan Angle Range | Max 360°
Accuracy | 5 mm
Scan Speed | 3 lines/s to 120 lines/s (vertical); 0°/s to 60°/s (horizontal)
Laser Pulse Repetition Rate | 100 kHz (Long Range Mode); 300 kHz (High Speed Mode)
Angular Resolution | Better than 0.0005°
Table 2. Accuracy analysis of the classification results of 34 trees.
Tree Species | Numerical Identifier | OA | mIoU | F1-Score
Chinese ash | 01 | 0.9860 | 0.9827 | 0.9216
Chinese ash | 02 | 0.9787 | 0.9774 | 0.8400
Chinese ash | 03 | 0.9755 | 0.9735 | 0.8650
Chinese ash | 04 | 0.9788 | 0.9775 | 0.8393
Chinese ash | 05 | 0.9774 | 0.9758 | 0.8561
Chinese ash | 06 | 0.9652 | 0.9624 | 0.8091
Chinese ash | 07 | 0.9827 | 0.9811 | 0.9084
Chinese ash | Avg. | 0.9778 | 0.9761 | 0.8628
Willow | 01 | 0.9554 | 0.9540 | 0.5804
Willow | 02 | 0.9707 | 0.9679 | 0.8565
Willow | 03 | 0.9668 | 0.9653 | 0.7191
Willow | 04 | 0.9899 | 0.9896 | 0.8618
Willow | 05 | 0.9770 | 0.9742 | 0.9045
Willow | 06 | 0.9667 | 0.9639 | 0.8251
Willow | 07 | 0.9717 | 0.9699 | 0.8094
Willow | Avg. | 0.9712 | 0.9693 | 0.7938
Open-source data | 01 | 0.9554 | 0.9646 | 0.9317
Open-source data | 02 | 0.9485 | 0.9719 | 0.9453
Open-source data | 03 | 0.9650 | 0.9617 | 0.9805
Open-source data | 04 | 0.9408 | 0.9197 | 0.8988
Open-source data | 05 | 0.9645 | 0.9600 | 0.9796
Open-source data | 06 | 0.9289 | 0.7187 | 0.8364
Open-source data | 07 | 0.9711 | 0.9511 | 0.9749
Open-source data | 08 | 0.9578 | 0.9380 | 0.9680
Open-source data | 09 | 0.9139 | 0.8918 | 0.9428
Open-source data | 10 | 0.9597 | 0.9422 | 0.9703
Open-source data | 11 | 0.9598 | 0.8743 | 0.9329
Open-source data | 12 | 0.9158 | 0.9009 | 0.9479
Open-source data | 13 | 0.9495 | 0.9349 | 0.9664
Open-source data | 14 | 0.9425 | 0.9128 | 0.9544
Open-source data | 15 | 0.9140 | 0.8365 | 0.9110
Open-source data | 16 | 0.9717 | 0.9676 | 0.9835
Open-source data | 17 | 0.9647 | 0.9506 | 0.9747
Open-source data | 18 | 0.9851 | 0.8991 | 0.9468
Open-source data | 19 | 0.9402 | 0.9110 | 0.9534
Open-source data | 20 | 0.9776 | 0.9565 | 0.9777
Open-source data | Avg. | 0.9513 | 0.9182 | 0.9489
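For reference, the metrics reported above can be read with the standard confusion-matrix definitions, assuming wood points are treated as the positive class (the paper’s exact conventions may differ slightly):

OA = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP), Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)
mIoU = (IoU_wood + IoU_leaf) / 2, with IoU_wood = TP / (TP + FP + FN) and IoU_leaf = TN / (TN + FN + FP)

Because wood points usually form the minority class, OA and mIoU can remain high even when a noticeable share of wood points is misclassified, which is consistent with the comparatively low F1-score of willow 01 despite its high OA.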
Table 3. Time cost of the 34 tree point clouds in the three testing datasets.
Tree Species | Numerical Identifier | Total Points | Time Cost (s) | TPMP (s)
Chinese ash | 01 | 107,024 | 10.90 | 101.81
Chinese ash | 02 | 228,642 | 26.42 | 115.57
Chinese ash | 03 | 96,460 | 10.22 | 105.99
Chinese ash | 04 | 127,539 | 15.55 | 121.94
Chinese ash | 05 | 83,287 | 8.90 | 106.91
Chinese ash | 06 | 175,362 | 16.70 | 95.25
Chinese ash | 07 | 182,112 | 17.18 | 94.36
Willow | 01 | 430,625 | 41.52 | 96.42
Willow | 02 | 307,784 | 29.44 | 95.65
Willow | 03 | 43,697 | 6.49 | 148.48
Willow | 04 | 161,474 | 17.13 | 106.07
Willow | 05 | 92,763 | 10.83 | 116.77
Willow | 06 | 140,854 | 16.72 | 118.70
Willow | 07 | 21,608 | 4.39 | 203.26
Open-source dataset | 01 | 2,584,586 | 199.62 | 77.23
Open-source dataset | 02 | 1,118,126 | 119.79 | 107.13
Open-source dataset | 03 | 279,259 | 30.87 | 110.54
Open-source dataset | 04 | 140,889 | 14.11 | 100.15
Open-source dataset | 05 | 5,887,978 | 662.75 | 112.56
Open-source dataset | 06 | 1,626,003 | 153.25 | 94.25
Open-source dataset | 07 | 4,082,518 | 288 | 70.54
Open-source dataset | 08 | 4,208,202 | 345.3 | 82.05
Open-source dataset | 09 | 5,472,253 | 541.8 | 99.01
Open-source dataset | 10 | 814,805 | 67.14 | 82.40
Open-source dataset | 11 | 1,545,318 | 157.54 | 101.95
Open-source dataset | 12 | 877,566 | 86.8 | 98.91
Open-source dataset | 13 | 3,767,575 | 345.6 | 91.73
Open-source dataset | 14 | 1,361,736 | 106.5 | 78.21
Open-source dataset | 15 | 2,735,498 | 169.76 | 62.06
Open-source dataset | 16 | 8,269,482 | 923.04 | 111.62
Open-source dataset | 17 | 1,859,161 | 158.55 | 85.28
Open-source dataset | 18 | 2,227,416 | 339.32 | 152.34
Open-source dataset | 19 | 2,608,974 | 220.59 | 84.55
Open-source dataset | 20 | 2,432,919 | 154.7 | 63.59
All trees | Avg. | 1,649,985 | 156.40 | 102.74
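TPMP (time per million points) normalizes the runtime by the size of each point cloud:

TPMP = Time Cost / (Total Points / 10^6)

For example, for tree 16 of the open-source dataset: 923.04 s / (8,269,482 / 10^6) ≈ 111.62 s per million points. Note that the average TPMP of 102.74 s appears to be the mean of the per-tree values; dividing the average time cost by the average point count would instead give roughly 156.40 / 1.65 ≈ 94.8 s.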
Table 4. Wood–leaf classification results comparison of six methods.
Tree Species | Related Methods | OA | mIoU | F1-Score
Chinese ash | PointNet++ | 0.9590 | 0.9566 | 0.7254
Chinese ash | DGCNN | 0.8873 | 0.8819 | 0.4249
Chinese ash | Krishna Moorthy’s method | 0.8965 | 0.8897 | 0.5461
Chinese ash | LeWoS | 0.9769 | 0.9753 | 0.8488
Chinese ash | Sun’s method | 0.8727 | 0.8662 | 0.4834
Chinese ash | WLC-Net | 0.9778 | 0.9761 | 0.8628
Willow | PointNet++ | 0.9226 | 0.9192 | 0.4986
Willow | DGCNN | 0.9146 | 0.9121 | 0.3785
Willow | Krishna Moorthy’s method | 0.8642 | 0.8552 | 0.4741
Willow | LeWoS | 0.9575 | 0.9542 | 0.7572
Willow | Sun’s method | 0.7836 | 0.7688 | 0.3794
Willow | WLC-Net | 0.9712 | 0.9693 | 0.7938
Open-source data | PointNet++ | 0.9304 | 0.8818 | 0.8551
Open-source data | DGCNN | 0.9168 | 0.8661 | 0.8268
Open-source data | Krishna Moorthy’s method | 0.8464 | 0.6317 | 0.8396
Open-source data | LeWoS | 0.9109 | 0.8523 | 0.8372
Open-source data | Sun’s method | - | - | -
Open-source data | WLC-Net | 0.9508 | 0.9141 | 0.9019
Table 5. Efficiency comparison between WLC-Net and PointNet++.
Type | Tree | WLC-Net (s) | PointNet++ (s)
Split | ash02 | 26.42 | 454.072
Split | ash04 | 15.55 | 48.456
Split | ash06 | 16.70 | 216.064
Split | ash07 | 17.18 | 235.008
Split | willow02 | 29.44 | 726.18
Split | willow04 | 17.13 | 174.99
Split | willow06 | 16.72 | 121.88
Split | Avg. | 19.88 | 282.38
Intact | ash01 | 10.90 | 14.272
Intact | ash03 | 10.22 | 13.872
Intact | ash05 | 8.90 | 13.072
Intact | willow03 | 6.49 | 10.208
Intact | willow05 | 10.83 | 13.344
Intact | willow07 | 4.39 | 8.376
Intact | Avg. | 8.62 | 12.19
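Expressed as ratios of the average times in Table 5 (a summary computed from the table values above, not a figure reported separately), the efficiency gain of WLC-Net over PointNet++ is:

Split point clouds: 282.38 s / 19.88 s ≈ 14.2×
Intact point clouds: 12.19 s / 8.62 s ≈ 1.4×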
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
