Article

Three-Dimensional Reconstruction of Forest Scenes with Tree–Shrub–Grass Structure Using Airborne LiDAR Point Cloud

Duo Xu, Xuebo Yang, Cheng Wang, Xiaohuan Xi and Gaofeng Fan

1 Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 International Research Center of Big Data for Sustainable Development Goals, Beijing 100094, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Zhejiang Climate Center, Zhejiang Meteorological Administration, Hangzhou 310052, China
* Author to whom correspondence should be addressed.
Forests 2024, 15(9), 1627; https://doi.org/10.3390/f15091627
Submission received: 5 August 2024 / Revised: 7 September 2024 / Accepted: 13 September 2024 / Published: 15 September 2024

Abstract

Fine three-dimensional (3D) reconstruction of real forest scenes can provide a reference for forestry digitization and forestry resource management applications. Airborne LiDAR technology can provide valuable data for large-area forest scene reconstruction. This paper proposes a 3D reconstruction method for complex forest scenes with trees, shrubs, and grass, based on airborne LiDAR point clouds. First, forest vertical distribution characteristics are used to segment tree, shrub, and ground–grass points from an airborne LiDAR point cloud. For ground–grass points, a ground–grass grid model is constructed. For tree points, a method based on hierarchical canopy point fitting is proposed to construct a trunk model, and a crown model is constructed with the 3D α-shape algorithm. For shrub points, a shrub model is directly constructed based on the 3D α-shape algorithm. Finally, the tree, shrub, and ground–grass models are spatially combined to reconstruct the real forest scene. Taking six forest plots located in Hebei, Yunnan, and Guangxi provinces in China and Baden-Württemberg in Germany as study areas, experimental results show that individual tree segmentation accuracy reaches 87.32% and shrub segmentation accuracy reaches 60.00%; the grass model achieves a height RMSE below 0.15 m, and the shrub and tree models achieve volume accuracies of R² > 0.848 and R² > 0.904, respectively. Furthermore, we compared the model constructed in this study with simplified point cloud and voxel models. The results demonstrate that the proposed modeling approach can meet the demand for high-accuracy, lightweight modeling of large-area forest scenes.

1. Introduction

Forests are the most widely distributed, structurally complex, and resource-rich terrestrial ecosystem. They play a crucial role in the water cycle, carbon cycle, and climate regulation [1,2]. A three-dimensional (3D) model of forest scenes can be used for change detection [3], radiative transfer simulation modeling [4], and visualization analysis [5], meeting the need for the precise and rapid monitoring of the forest ecosystem for ecological safety. LiDAR (Light Detection and Ranging) can quickly and accurately obtain spatial 3D coordinates of forest vegetation [6]. Terrestrial laser scanning (TLS) and airborne laser scanning (ALS) are commonly used in forestry applications [7,8,9]. Although TLS can obtain more detailed information on ground features, its acquisition efficiency is low, making it difficult to use for forest scene modeling [10]. In contrast, ALS technology can quickly and accurately obtain the spatial 3D coordinates of forest vegetation and penetrate the canopy to obtain terrain information under the forest, providing a reliable data source for constructing 3D forest scenes [11].
Existing forest scene modeling methods mainly focus on tree modeling, including individual tree segmentation, trunk modeling, and crown modeling [12,13].
For individual tree segmentation, current methods can be categorized into three approaches: the canopy height model (CHM)-based approach, the point-based approach, and the deep-learning-based approach. The CHM is a raster image representing the height of the vegetation canopy above the ground, so CHM-based segmentation methods are essentially grayscale image processing methods; commonly used examples include watershed segmentation, the immersion algorithm, and region growing. Liu et al. [14] utilized CHM data for treetop detection and segmentation, further combining it with other geometric information to accurately segment individual trees. While CHM-based methods are fast and efficient, they primarily capture the upper canopy structure and lack a detailed representation of the vertical structure below the canopy. Point-based segmentation methods use clustering or voxel division to segment discrete point clouds. Yan et al. [15], using a high-density UAV LiDAR point cloud, first applied the mean shift algorithm for the rough classification of non-ground points, followed by the iterative segmentation of undersegmented areas using the Normalized Cut (NCut) algorithm until individual trees were identified. Compared to CHM-based methods, point-based methods not only reduce segmentation errors and improve identification accuracy but also allow for the segmentation of sub-canopy layers, providing a more detailed understanding of forest structure. With the rapid development of deep learning, an increasing number of studies have applied deep learning algorithms to point cloud processing, and some have proposed deep learning algorithms specifically for individual tree segmentation. Chang et al. [16] introduced a top-down approach using the YOLOv3 deep learning network to achieve individual tree point cloud segmentation. While deep learning facilitates the automatic extraction of high-dimensional features from point clouds, its high computational resource demands and limited model interpretability make it unsuitable for segmenting large-scale forest scenes. The above methods focus mostly on the segmentation of trees; although some can extract understory vegetation, they primarily target understory trees and do not consider the segmentation of lower-layer vegetation such as shrubs.
For tree trunk modeling, most methods determine the trunk position using either the center point of the tree crown's projection onto the ground [17] or the highest point of the tree crown [18], and then represent the trunk by fitting a cylindrical model. Tree crown point clouds can be simplified using methods such as fast point feature histogram (FPFH) information entropy [19] and partition simplification (PS) [20]. The point cloud simplification model proposed by Hu et al. [19] addresses challenges in point cloud data storage and computation; it considers the similarity between the simplified and original point clouds, as well as the area of the tree point cloud, and has been shown to enhance the efficiency of forest surveys and monitoring. For crown modeling, tree crowns are typically represented by homogeneous layers, 3D simple crowns, uniform voxels, or explicit 3D triangular mesh surfaces [21]. Homogeneous layers treat the canopy as one or more horizontal layers. For example, Yang et al. [22] treated the canopy in each horizontal layer as a homogeneous turbid medium in their radiative transfer simulation model. Pfeifer et al. [23] used airborne LiDAR point clouds to construct a forest model by horizontally layering the tree crowns and calculating the outer envelope of each layer as the crown model. Homogeneous layers are often used in radiative transfer models. Three-dimensional simple crowns use geometric primitives (e.g., ellipsoids) to represent the entire crown. For instance, Vosselman [24] used ellipsoids to represent tree crowns in urban scene modeling. Morsdorf et al. [25] obtained the position, height, and crown diameter of trees from airborne LiDAR point clouds and used paraboloid models to construct a tree model. Three-dimensional simple crowns are typically constructed from structural parameters such as crown width. Uniform voxels describe the crown model with a series of 3D grid cells. Wang et al. [26] layered tree crown points and extracted the grid points of each layer to construct a 3D voxel model of the tree crown. Lin et al. [27] applied a forest voxel model to a radiative transfer model, demonstrating that the optimal voxel size for simulating BRFs at a 30 m resolution is approximately 0.9 m. Li et al. [28] proposed a voxel-based method for constructing forest scenes by fusing TLS and ALS data; they constructed a forest voxel model with a grid resolution of 0.02 m and validated its ability to simulate image brightness, but within a forest area of only 108 × 108 m, the memory consumption of the voxel model at this resolution reached 17 GB. Three-dimensional triangular mesh surfaces describe the detailed surface structure of each crown element (e.g., leaves) using a series of triangles. Qi et al. [29] created tree models represented by triangular meshes using Onyx Tree software, which served as inputs for radiative transfer simulation. Mesh surfaces can accurately describe the tree crown structure, but the resulting crown model requires a significant amount of memory.
Considering practical constraints, modeling large-area forest scenes requires balancing model accuracy against efficiency and memory usage. Homogeneous layers and 3D simple crowns offer high modeling efficiency and low memory demands, but at the cost of reduced detail and accuracy. In contrast, 3D triangular mesh surfaces can provide highly accurate and detailed representations of tree crown structures [30]; however, their substantial memory requirements make them more suitable for individual trees or small-scale scenes. Uniform voxels offer a compromise, but model quality is sensitive to the chosen voxel size, and in homogeneous regions many voxels carry redundant information, wasting memory. Furthermore, in forest management and the timber industry, stem models are essential for describing individual trees and assessing wood quality. However, trunk models constructed using the aforementioned methods do not adequately account for the relationship between crown structure and trunk position: they assume that all trunks are vertical, a simplification that does not reflect the characteristics of tree trunks in natural forests. Moreover, these forest scene construction methods primarily focus on tree modeling, neglecting the extraction and modeling of understory components such as shrubs and grass. Existing studies have demonstrated that the stratified structure of forests is closely related to forest ecological resilience, health, and carbon stock assessment [31,32]. These findings stress the importance of high-accuracy, multi-element, lightweight modeling of trees, shrubs, grass, and understory terrain for effective forest resource management.
In summary, forest scene reconstruction based on LiDAR point clouds has become a research hotspot in forest ecology digitization, but several issues remain: understory shrubs and grass are rarely modeled, trunk models ignore crown geometry, and high-fidelity crown models are memory-intensive. To address these issues, this study proposes a model reconstruction method for forest tree–shrub–grass stratified structures using airborne LiDAR point clouds. The method employs 3D envelopes, 3D grids, and two-dimensional (2D) grids to achieve a high-fidelity 3D reconstruction of trees, shrubs, grass, and understory terrain. To address the issue of missing trunk points in airborne LiDAR data, a trunk prediction method based on hierarchical canopy point fitting is proposed for modeling the tree trunk. We applied the proposed reconstruction method to six forest plots in Germany and China, and the reconstruction results are compared with a point cloud simplification method and voxel-based methods.

2. Materials

2.1. Study Area

The study area includes six research plots: two located in Bretten and Karlsruhe, Baden-Württemberg, Germany, and four in Saihanba, Hebei; Shangri-La, Yunnan; Central Yunnan; and Guilin, Guangxi, China. They are referred to as BR, KA, SHB, SL, DZ, and GX, respectively. The study areas exhibit diverse terrain with high vegetation coverage, characterized by a multi-layered distribution of trees, shrubs, and grass, and they differ markedly in tree height and stand volume. Their geographical locations and basic information are shown in Figure 1 and Table 1.

2.2. ALS Point Cloud

The data for the BR and KA study areas come from a publicly available dataset released by Weiser et al. [33]. The airborne LiDAR point cloud for these areas was collected using a RIEGL VQ-780i LiDAR system (RIEGL Laser Measurement Systems, Horn, Austria) mounted on a Cessna C207 aircraft. With its wide field of view and multiple-time-around measurement capability, the RIEGL VQ-780i is well suited to collecting forest point cloud data, and its high-speed rotating mirror design ensures a reliable, uniform point distribution across the entire field of view at all flying altitudes.
For the study areas in China, the airborne LiDAR point cloud was acquired using a RIEGL VUX-1UAV LiDAR system (RIEGL Laser Measurement Systems, Horn, Austria) mounted on an unmanned aerial vehicle (UAV) [34]. The RIEGL VUX-1UAV is a lightweight, compact laser scanner that provides high-speed data acquisition using a narrow infrared laser beam and a fast line-scanning mechanism. Based on RIEGL's waveform-LiDAR technology, the system provides high-accuracy point clouds with excellent vertical target resolution.
The detailed information is shown in Table 2.

2.3. Validation Data

In this study, the terrestrial LiDAR point cloud collected in the study area and the manually annotated results of the airborne and terrestrial LiDAR point cloud are used as validation data. The TLS point cloud of BR and KA plots was collected using a RIEGL VZ-400 terrestrial laser scanner (RIEGL Laser Measurement Systems, Horn, Austria), with an accuracy of 5 mm and a precision of 3 mm. Additional scans were conducted at certain locations using a tilted bracket to capture the tops of the trees at close range. The TLS point cloud of plots in China was collected using a RIEGL VZ-1000 terrestrial laser scanner (RIEGL Laser Measurement Systems, Horn, Austria), with an accuracy of 8 mm and a precision of 5 mm.
The TLS point cloud only covers parts of the six study areas. To validate the accuracy of the forest modeling based on the ALS point cloud, this study manually annotated the TLS point cloud of individual tree point clouds, tree trunk positions, and shrub point clouds. Additionally, portions of the ALS point cloud information for six plots were manually annotated, including individual tree point clouds and tree trunk positions. The detailed information is shown in Table 3.
The ALS point cloud extracted for a tree and shrub, along with their corresponding manually annotated TLS point cloud, are shown in Figure 2.

3. Methods

The 3D reconstruction method for forest scenes proposed in this study includes the following steps: point cloud segmentation; construction of the ground–grass, tree, and shrub models; forest scene model construction; and accuracy validation. First, forest point cloud filtering is performed on the airborne LiDAR point cloud, with ground points including ground and grass elements. For ground areas without grass cover, a 2D grid model of the ground is constructed using spatial interpolation; for ground areas with grass cover, the grass height is calculated, and a 3D grid model of the grass is constructed. For non-ground point clouds, this study implements tree and shrub point cloud segmentation based on traditional individual tree segmentation methods [35]. For tree point clouds, the crown points and trunk points are separated, and the number and height range of trunk points are assessed to determine whether they meet the requirements for trunk modeling. If they do, a trunk modeling method based on trunk points is used; otherwise, a method based on hierarchical canopy point fitting is used for trunk modeling. For tree crown point clouds, a lightweight envelope model of the crown is constructed using the 3D α-shape algorithm. For shrubs without a distinct trunk, a 3D α-shape envelope model is directly constructed. Finally, the models of ground, grass, shrubs, and trees are spatially combined to achieve a 3D reconstruction of the forest scene. The specific technical route is shown in Figure 3.

3.1. Point Cloud Segmentation

Due to the penetration capability of airborne LiDAR, forest point clouds include both ground and non-ground points. In this study, the Cloth Simulation Filter (CSF) algorithm [36] is used for point cloud filtering to extract ground points from the ALS point cloud while identifying non-ground points. Since grass is relatively low in height and covers the ground, it is classified as ground points during the filtering process. Therefore, the modeling of grass and ground is based on the filtered ground point cloud.
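For readers who wish to reproduce this step, the sketch below shows one way to run CSF from Python. It assumes the open-source Python bindings of the CSF algorithm (the cloth-simulation-filter package); the parameter values are illustrative rather than the settings used in this study.

```python
# Illustrative ground/non-ground split with the Cloth Simulation Filter (CSF) [36].
# Assumes the open-source Python bindings of CSF (pip install cloth-simulation-filter);
# the parameter values below are illustrative, not the settings used in this study.
import numpy as np
import CSF

def split_ground(xyz: np.ndarray):
    """xyz: (N, 3) ALS points. Returns (ground_idx, non_ground_idx)."""
    csf = CSF.CSF()
    csf.params.bSloopSmooth = True     # smooth the cloth on steep slopes
    csf.params.cloth_resolution = 0.5  # cloth grid size in metres (assumed value)
    csf.params.class_threshold = 0.5   # point-to-cloth distance threshold (assumed)
    csf.setPointCloud(xyz)

    ground_idx, non_ground_idx = CSF.VecInt(), CSF.VecInt()
    csf.do_filtering(ground_idx, non_ground_idx)
    return np.asarray(ground_idx), np.asarray(non_ground_idx)
```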
For stratified forests with high canopy closure and dense tree distribution, a point-based multi-layer clustering method is used for point cloud segmentation. Additional constraints on height range and point count are applied to differentiate between the tree and shrub point cloud. Specifically, the point cloud is first horizontally sliced, and then a region-growing clustering algorithm is applied from the highest point to the lowest point for layer clustering. The point clusters between layers are merged, and finally, all individual tree point clouds and shrub point clouds are segmented. The main parameters for the point cloud segmentation algorithm are the search radius for region growing and the vertical resolution. The search radius depends on point density and tree crown width, while the vertical resolution depends on point density and the level of detail in the point cloud. In this study, these two parameters are set to 1 m and 1.5 m, respectively. Shrubs are typically defined as woody plants with a height of no more than 3 m. Therefore, a point cloud will be classified as a shrub if its height range is less than 3 m, and more than half of the points are located less than 1 m above the ground.
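As a minimal illustration of this decision rule, the sketch below labels a segmented cluster as tree or shrub; it assumes heights have already been normalized so that the ground is at zero, and the function name is ours.

```python
# Minimal sketch of the tree/shrub rule described above: a cluster is a shrub
# if its height range is under 3 m and more than half of its points lie less
# than 1 m above the ground. Assumes height-normalized points (ground = 0).
import numpy as np

def classify_cluster(heights: np.ndarray) -> str:
    """heights: per-point height above ground for one segmented cluster."""
    height_range = heights.max() - heights.min()
    near_ground_fraction = np.mean(heights < 1.0)
    return "shrub" if height_range < 3.0 and near_ground_fraction > 0.5 else "tree"
```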

3.2. Ground–Grass Model Construction

The ground points are divided into spatial grids in the x-y plane. In each grid, the RANSAC (RANdom Sample Consensus) method [37] is used to fit a 3D plane equation for the ground, given by Ax + By + Cz + D = 0. The distance of all points in the grid to the fitted plane is then calculated, and the standard deviation σ of these distances is computed, as shown in Equation (1):
$\sigma = \sqrt{\frac{\sum_{i=1}^{n} d_i^2}{n}}$ (1)
where $d_i$ represents the distance from the i-th point in the grid to the fitted 3D plane, and n is the number of points in the grid. A threshold of 0.1 m is set. If the standard deviation is greater than the threshold, the grid is determined to contain grass, and the height of the grass model in the grid is set to σ; otherwise, the grid is determined to be bare ground. Spatial interpolation is used to construct a ground model from the ground points. For grids containing grass, a 3D grid model corresponding to the grass height is constructed and placed on top of the ground model. Finally, the ground–grass model is constructed, as shown in Figure 4. This method is applicable to grass height extraction in any terrain.
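A minimal sketch of the per-cell grass test follows, using scikit-learn's RANSAC regressor as the plane fitter and vertical residuals as an approximation of the point-to-plane distance; the function name and grid handling are ours.

```python
# Sketch of the per-grid-cell grass test (Eq. (1)). Assumes scikit-learn's
# RANSACRegressor as the RANSAC plane fitter; vertical residuals are used as
# an approximation of the point-to-plane distance.
import numpy as np
from sklearn.linear_model import RANSACRegressor

def grass_height_in_cell(points: np.ndarray, threshold: float = 0.1):
    """points: (N, 3) ground-class points in one x-y grid cell.
    Returns the grass height sigma for the cell, or None for bare ground."""
    xy, z = points[:, :2], points[:, 2]
    ransac = RANSACRegressor(residual_threshold=threshold).fit(xy, z)  # plane z = ax + by + c
    d = z - ransac.predict(xy)               # residuals of all points to the fitted plane
    sigma = np.sqrt(np.mean(d ** 2))         # Eq. (1)
    return sigma if sigma > threshold else None  # grass if sigma > 0.1 m
```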

3.3. Tree Model Construction

The 3D reconstruction of a tree includes the reconstruction of the tree trunk and tree crown. It is mainly divided into three parts, as follows:
(1) Separation of crown and trunk points. Compared to the tree crown, the point cloud of the tree trunk has two main characteristics: first, due to occlusion by the crown, the trunk points detected by airborne LiDAR are often sparse; second, the trunk is generally cylindrical, with a diameter significantly smaller than the crown width, resulting in a more concentrated horizontal distribution. Therefore, the vertical distribution of the individual tree point cloud is used to separate the crown and trunk points. The idea of this separation method is similar to that proposed in [38]; this study additionally incorporates a dispersion assessment to enhance the separation. First, based on the maximum and minimum elevation values of the individual tree point cloud (Figure 5a), the point cloud is divided into N horizontal layers. For each layer, the number of points $n_l$ and the centroid $(x_c^l, y_c^l, z_c^l)$ are calculated. Then, the horizontal distance $d_i^l$ of each point in the layer to the centroid is computed, and the standard deviation $\sigma_l$ of these distances is calculated. The calculation equations are as follows:
$x_c^l = \frac{\sum_{i=1}^{n_l} x_i^l}{n_l}, \quad y_c^l = \frac{\sum_{i=1}^{n_l} y_i^l}{n_l}, \quad z_c^l = \frac{\sum_{i=1}^{n_l} z_i^l}{n_l}$ (2)
$d_i^l = \sqrt{(x_i^l - x_c^l)^2 + (y_i^l - y_c^l)^2}$ (3)
$\sigma_l = \sqrt{\frac{\sum_{i=1}^{n_l} (d_i^l)^2}{n_l}}$ (4)
where l represents the layer number. Finally, vertical distribution histograms of point counts and dispersion (Figure 5b,c) are obtained. A smaller standard deviation indicates a more concentrated point cloud in that layer. Starting from the lowest layer, the point count and standard deviation are checked against set thresholds to identify the starting elevation of the tree crown, thus achieving the separation of trunk and crown points.
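The per-layer statistics of Equations (2)-(4) can be computed as in the following sketch; the layer count and names are illustrative.

```python
# Sketch of the per-layer point count and horizontal dispersion (Eqs. (2)-(4))
# used to locate the crown base. The number of layers is illustrative.
import numpy as np

def layer_statistics(tree_points: np.ndarray, n_layers: int = 20):
    """tree_points: (N, 3) points of one tree. Returns (counts, sigmas) per layer."""
    z = tree_points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    counts, sigmas = [], []
    for i in range(n_layers):
        upper = (z < edges[i + 1]) if i < n_layers - 1 else (z <= edges[i + 1])
        layer = tree_points[(z >= edges[i]) & upper]
        counts.append(len(layer))
        if len(layer) == 0:
            sigmas.append(0.0)
            continue
        centroid = layer[:, :2].mean(axis=0)                 # (x_c^l, y_c^l), Eq. (2)
        d = np.linalg.norm(layer[:, :2] - centroid, axis=1)  # d_i^l, Eq. (3)
        sigmas.append(float(np.sqrt(np.mean(d ** 2))))       # sigma_l, Eq. (4)
    return np.array(counts), np.array(sigmas)
```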
(2) Trunk modeling. For each tree, the number of trunk points and their height range are analyzed. If the number of trunk points exceeds 10 and the height range is greater than 3 m, the trunk model is fitted using the trunk points (Figure 6). Specifically, Principal Component Analysis (PCA) [39] is first used to calculate the main direction of the trunk. Then, the trunk points are projected onto the ground along this main direction, and the RANSAC [37] algorithm is applied to fit a circle on this plane. The center of this circle represents the trunk position, and the trunk cylinder model extends from the ground to the crown center. If the number of points is less than 10 or the height range is below 3 m, the trunk information is considered missing. In this case, a hierarchical canopy point fitting method is used to predict the trunk position (Figure 7): the center position of each crown layer is calculated, the trunk direction is fitted through these centers, and the fitted line is extended to the ground to obtain the trunk's position and height. The trunk model is then constructed based on the predicted trunk direction and position.
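A sketch of the trunk prediction for trees without usable trunk points is given below; fitting the line through the layer centres via SVD is our choice of least-squares line fitter, and the 1 m layer thickness is illustrative.

```python
# Sketch of hierarchical canopy point fitting for trees with missing trunk
# points: fit a 3D line through the crown layer centres and extend it to the
# ground. The SVD line fit and the 1 m layer thickness are our assumptions.
import numpy as np

def predict_trunk(crown_points: np.ndarray, ground_z: float, layer: float = 1.0):
    """Returns (trunk_base_xy, unit_direction) predicted from crown layer centres."""
    z = crown_points[:, 2]
    edges = np.arange(z.min(), z.max() + layer, layer)
    centres = np.array([crown_points[(z >= lo) & (z < lo + layer)].mean(axis=0)
                        for lo in edges[:-1]
                        if np.any((z >= lo) & (z < lo + layer))])
    mean = centres.mean(axis=0)
    _, _, vt = np.linalg.svd(centres - mean)   # principal axis of the layer centres
    direction = vt[0] / np.linalg.norm(vt[0])
    if direction[2] < 0:
        direction = -direction                 # orient the axis upwards
    t = (ground_z - mean[2]) / direction[2]    # parameter where the line meets the ground
    base = mean + t * direction
    return base[:2], direction
```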
(3) Crown modeling. This study uses a 3D envelope vector model to represent tree crowns. Specifically, the extracted crown points are spatially clustered to obtain leaf clusters. The 3D α-shape algorithm is then applied to these leaf clusters to compute their convex hull, resulting in the crown model (Figure 6 and Figure 7). Finally, the trunk model and the crown model are spatially combined to create the complete tree model.
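For the envelope step, the sketch below uses Open3D's alpha-shape mesh reconstruction as one available 3D α-shape implementation; the α value is illustrative, and volume evaluation requires the resulting mesh to be watertight.

```python
# Sketch of the crown envelope step, assuming Open3D's alpha-shape mesh
# reconstruction as the 3D alpha-shape implementation. The alpha value is
# illustrative; get_volume() requires a watertight mesh.
import numpy as np
import open3d as o3d

def crown_envelope(points: np.ndarray, alpha: float = 1.0):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)
    volume = mesh.get_volume() if mesh.is_watertight() else None
    return mesh, volume
```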

3.4. Shrub Model Construction

Shrubs are characterized by multiple branches and leaves, no obvious main trunk, and a single-layer canopy that is low and wide. Therefore, this study constructs the shrub model directly from the extracted shrub point cloud. First, the mean-shift algorithm [40] is used to cluster the shrub point cloud. Then, multiple 3D α-shape envelopes are generated from the clustered point cloud to construct the shrub models, as shown in Figure 8.
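A compact sketch of this step follows, clustering with scikit-learn's mean shift and reusing the hypothetical crown_envelope() helper sketched above; the bandwidth is illustrative.

```python
# Sketch of shrub modelling: mean-shift clustering of the shrub points [40]
# followed by one alpha-shape envelope per cluster. Uses scikit-learn's
# MeanShift and the crown_envelope() helper sketched in Section 3.3;
# the bandwidth value is illustrative.
import numpy as np
from sklearn.cluster import MeanShift

def shrub_models(shrub_points: np.ndarray, bandwidth: float = 1.0):
    labels = MeanShift(bandwidth=bandwidth).fit_predict(shrub_points)
    return [crown_envelope(shrub_points[labels == k]) for k in np.unique(labels)]
```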

3.5. Forest Scene Model Construction and Accuracy Validation

The constructed ground–grass model, tree models, and shrub models are spatially combined according to their respective geographical positions to form a 3D forest scene model. The accuracy of the 3D forest scene model is then validated. Given that forest scene reconstruction involves numerous structural parameters, the validation is performed for parameters such as individual tree segmentation, grass height, shrub volume, tree position, height, and volume.
(1) Validation of individual tree segmentation. The individual tree segmentation results from airborne LiDAR forest point clouds are validated by comparing them with manually extracted individual tree information. The evaluation metrics are calculated using Equations (5)–(7) [41,42,43].
$r = \frac{TP}{TP + FN}$ (5)
$p = \frac{TP}{TP + FP}$ (6)
$f = \frac{2 \times r \times p}{r + p}$ (7)
where r (recall) is the tree detection rate, p (precision) is the accuracy of the segmented trees, and f (F-score) is the overall accuracy of tree detection. TP (True Positive) is the number of correctly segmented trees (correct segmentation), FN (False Negative) is the number of not segmented trees, and FP (False Positive) is the number of incorrectly segmented trees.
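The sketch below computes these metrics; since the matching rule is not stated here, detected trees are greedily matched to the nearest unmatched reference tree within an assumed 2 m radius.

```python
# Sketch of the segmentation metrics (Eqs. (5)-(7)). The matching rule is our
# assumption: each detection is greedily matched to the nearest unmatched
# reference tree within 2 m.
import numpy as np

def segmentation_metrics(detected_xy, reference_xy, max_dist: float = 2.0):
    detected_xy, reference_xy = np.asarray(detected_xy), np.asarray(reference_xy)
    matched, tp = set(), 0
    for det in detected_xy:
        dists = np.linalg.norm(reference_xy - det, axis=1)
        dists[list(matched)] = np.inf          # each reference tree matched once
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matched.add(j)
            tp += 1
    fn = len(reference_xy) - tp                # reference trees not detected
    fp = len(detected_xy) - tp                 # detections with no reference tree
    r = tp / (tp + fn)                         # recall, Eq. (5)
    p = tp / (tp + fp)                         # precision, Eq. (6)
    f = 2 * r * p / (r + p)                    # F-score, Eq. (7)
    return r, p, f
```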
The proposed point cloud segmentation algorithm is compared with two open-source individual tree segmentation algorithms, treeiso and ForAINet. Treeiso is an unsupervised algorithm based on the cut-pursuit graph, designed to separate individual tree points from a point cloud. It comprises three steps, each requiring the setting of three parameters; some of these parameters are predefined, while others are given within a range and need tuning to achieve the best segmentation results [44]. ForAINet achieves semantic and instance segmentation of trees in dense forests: it performs semantic segmentation at the plot/stand level, tree instance segmentation, and semantic component segmentation at the tree level [45].
(2) Validation of the grass model. The grass model was constructed using a terrestrial LiDAR point cloud according to the method in Section 3.2. This model is used to validate the height of the grass model reconstructed from the airborne LiDAR point cloud in the same area. The modeling accuracy of the grass model is evaluated using bias, the coefficient of determination (R2), and the root mean square error (RMSE). The calculation methods are provided in Equations (8)–(10):
$bias = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)$ (8)
$R^2 = 1 - \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{\sum_{i=1}^{n}(x_i - \bar{y})^2}$ (9)
$RMSE = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - y_i)^2}$ (10)
where $x_i$ and $y_i$ represent the grass height extracted from the airborne and terrestrial LiDAR point clouds in the i-th grid, respectively, $\bar{y}$ is the average grass height extracted from the terrestrial LiDAR point cloud, and n is the number of grids.
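These three metrics transcribe directly to code, as in the minimal sketch below; note the (n − 1) denominator in the RMSE definition of Equation (10).

```python
# Direct transcription of Eqs. (8)-(10). Note that the RMSE here uses an
# (n - 1) denominator rather than the more common n.
import numpy as np

def accuracy_metrics(x: np.ndarray, y: np.ndarray):
    """x: ALS-derived values; y: TLS reference values (same length n)."""
    n = len(x)
    bias = np.mean(x - y)                                          # Eq. (8)
    r2 = 1 - np.sum((x - y) ** 2) / np.sum((x - y.mean()) ** 2)    # Eq. (9)
    rmse = np.sqrt(np.sum((x - y) ** 2) / (n - 1))                 # Eq. (10)
    return bias, r2, rmse
```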
(3) Validation of the shrub model. A shrub point cloud is extracted from terrestrial LiDAR data through manual annotation, and shrub models are generated according to the method in Section 3.4. The volumes of the shrub models generated from terrestrial and airborne LiDAR point clouds are compared. The model accuracy is evaluated using bias, R2, and RMSE. The calculation methods are based on Equations (8)–(10), where x and y represent the shrub volumes extracted from airborne and terrestrial LiDAR point clouds, respectively, and n is the number of shrub models.
(4) Validation of the tree model. This study validates the position, height, and crown volume of the tree models.
Tree Position: Tree trunk positions extracted from airborne point clouds are compared with those manually annotated from terrestrial LiDAR point clouds. The average offsets in the x and y directions ($\bar{d}_x$ and $\bar{d}_y$) and the average offset distance ($\bar{d}_{dis}$) are calculated to assess the accuracy of the trunk positions.
Tree Height: The heights extracted from terrestrial LiDAR point clouds were used to validate the heights of tree models reconstructed from airborne LiDAR point clouds. The accuracy of the tree height is evaluated using bias, R2, and RMSE. The calculation methods follow Equations (8)–(10), where x and y represent the tree heights extracted from airborne and terrestrial LiDAR point clouds, respectively, and n is the number of trees.
Tree Crown Volume: The method in Section 3.3 was used to separate tree crown points from terrestrial LiDAR point clouds and generate crown envelope models. The crown volumes generated from terrestrial and airborne LiDAR point clouds were compared. The modeling accuracy was evaluated using bias, R2, and RMSE, with calculation methods as described in Equations (8)–(10), where x and y represent the crown volumes extracted from airborne and terrestrial LiDAR point clouds, respectively, and n is the number of trees.
(5) Comparison with other reconstruction methods. The accuracy and memory usage of the scene models constructed in this study are compared with models built by existing forest model construction algorithms: a point cloud simplification model based on FPFH information entropy [19], with which we simplified the point cloud data used in this study at a reduction rate of 30%, and forest voxel models with resolutions of 0.5 m and 1 m, constructed for the same data following references [27,28].

4. Results and Discussion

4.1. Validation of Point Cloud Segmentation

We compared our algorithm with two existing individual tree segmentation algorithms, treeiso and ForAINet, with the results presented in Table 4. All three algorithms achieved relatively high accuracy in coniferous forests, primarily because the conical crowns of coniferous trees are more easily identified and segmented. In the mixed forests BR, KA, and DZ, our method achieved tree segmentation accuracies of 87.32%, 88.89%, and 89.13%, respectively, outperforming both treeiso and ForAINet. In the coniferous forest SHB, our method achieved a tree segmentation accuracy of 93.62%, higher than treeiso but slightly lower than ForAINet. The lower segmentation accuracy of the deep learning method in mixed forests is primarily due to the significant loss of trunk points and the lower point cloud density in these environments. Additionally, the deep learning algorithms primarily focus on the segmentation of trees: while they can extract some understory trees, they are unable to capture lower vegetation such as shrubs. In contrast, our method successfully extracted shrub point cloud clusters in the validation areas corresponding to the terrestrial point cloud, with the results presented in Table 5; the shrub extraction accuracy reaches at least 60.00% in every plot. Overall, our method provides higher accuracy for comprehensive forest point cloud segmentation.

4.2. Validation of Grass, Shrub, and Tree Models

4.2.1. Validation of Grass Model

Figure 9 shows the validation results of grass models in three sample plots within the study area. The R2 values are all higher than 0.661, the bias values are all negative, and the RMSE ranges from 0.08 m to 0.15 m. This indicates that there is a certain correlation between the grass heights extracted from airborne and terrestrial LiDAR point clouds. However, the heights extracted from airborne LiDAR are generally lower than those from terrestrial LiDAR. This discrepancy is due to the upper canopy and shrub coverage, which results in fewer grass points being captured by airborne LiDAR. In contrast, terrestrial laser scanning can capture more comprehensive information about the forest understory structure. Among the three sample plots, the SHB plot has the highest accuracy and the smallest bias for grass heights (RMSE = 0.08 m, bias = −0.05 m), which may be related to the higher point cloud density (108 pts/m2) in this plot.

4.2.2. Validation of Shrub Model

Figure 10 shows the volume validation results of shrub models in six sample plots within the study area. The R2 values are all higher than 0.848, indicating a general consistency between shrub volumes extracted from airborne and terrestrial LiDAR point clouds. The bias values are all negative, and the RMSE ranges from 1.48 m3 to 4.05 m3. This discrepancy is due to the upper canopy blocking the scan of lower shrubs, causing a slight underestimation of shrub volume by the airborne LiDAR.

4.2.3. Validation of Tree Model

(1)
Validation of Tree Position
Table 6 shows the average offsets in the x and y directions and the average offset distances between the tree positions obtained by the proposed method and by the crown projection center method [17], compared to the actual positions. The proposed method achieves an average offset distance of 0.35 m for tree trunk positions across the sample plots, significantly lower than the 0.80 m obtained using the crown projection center method. This improvement is attributed to the proposed method's consideration of the overall geometric characteristics of the tree point cloud. However, some offset remains between the results of the proposed method and the actual tree positions, mainly due to the incomplete scanning of some tree crown points.
(2)
Validation of Tree Height
Figure 11 shows the validation results of tree height in three sample plots within the study area. The R2 values are all higher than 0.991, and the RMSE ranges from 0.23 m to 0.89 m, indicating a high consistency between tree heights extracted from airborne and terrestrial LiDAR point clouds. The tree height accuracy for the SHB plot is relatively lower (R2 = 0.991, RMSE = 0.89 m, bias = 0.13 m) because terrestrial laser scanning has difficulty accurately capturing the tops of the trees. In contrast, in the BR and KA plots, a tilted bracket was used for terrestrial laser scanning, allowing for the close-range capture of the tree tops, resulting in higher validation accuracy.
(3)
Validation of Tree Crown Volume
Figure 12 shows the validation results of tree crown volumes in three sample plots within the study area. The R2 values are all higher than 0.904, indicating a high consistency between the tree crown volumes extracted from airborne and terrestrial LiDAR point clouds. The bias values are all negative, and the RMSE ranges from 62.9 m3 to 100.88 m3, indicating an underestimation of tree crown volumes extracted from airborne LiDAR point clouds. This underestimation is due to the sparsity of the lower layer of tree crown points captured by airborne laser scanning.
The comparison between the tree–shrub–grass model and the terrestrial point cloud achieved the expected results; however, some limitations remain. Firstly, for the grass model, a 3D grid model was constructed directly, yet the grid size affects the accuracy of the grass model to some extent, and this effect cannot be validated using a terrestrial point cloud. Secondly, for tree positions, the proposed method makes full use of the acquired point cloud information, resulting in higher accuracy than previous methods; nevertheless, the internal structural features of the tree canopy have not been considered, leading to some deviation between the estimated and actual trunk positions. Finally, the volumes of tree crowns and shrubs derived from the terrestrial point cloud are not entirely accurate, which somewhat affects the evaluation of the airborne models.

4.3. Forest Scene Model Reconstruction Results

Figure 13 shows the results of the 3D forest scene model reconstruction for the six study plots. The total volumes of trees, shrubs, and grass in each scene are summarized in Table 7. The KA plot has the highest vegetation volume, reaching 687,645.62 m³, which is related to its higher vegetation coverage and average tree height. In the SHB plot, the combined volume of shrubs and grass accounts for only 2.96% of the total volume, while the proportions for the other plots are at least 6.84%. This is because the SHB plot is a plantation forest, while the other plots are natural forests.

4.4. Comparison with Other Modeling Methods

In Table 8, we perform a visual analysis of the coniferous and broadleaf tree models constructed using different methods. The purpose of point cloud simplification is to reduce processing time while retaining as much geometric detail as possible. However, it only reduces the data volume and does not model elements within the scene, such as the tree crown and trunk; moreover, for trees with missing trunk points, it performs no further processing, resulting in incomplete tree models. In the voxel model, a larger voxel size produces many empty voxels, which affects the accuracy of the results, while a smaller voxel size aligns more closely with the point cloud but leads to higher memory consumption. In contrast, the modeling method proposed in this paper better preserves the original shape of the point cloud without causing redundant memory usage.
Additionally, Table 9 compares the memory usage of the 3D envelope-based models constructed in this study with the simplified point cloud (simplification ratio = 30%) and 3D voxel models at different resolutions (1 m, 0.5 m). The forest scene models constructed using the proposed method reduce memory usage by approximately 80% compared to the 0.5 m resolution 3D voxel model. Although 3D voxel models with resolutions of 1 m and above also effectively reduce memory usage, they tend to suffer from significant detail loss due to their coarser resolution. In contrast, the proposed method achieves a high-accuracy reconstruction of 3D forest scenes with lightweight memory usage.

5. Conclusions

To meet the needs of large-scale forest scene modeling, this study uses forest point clouds collected by more efficient airborne LiDAR to reconstruct realistic 3D forest scenes. Considering the stratified structure of trees, shrubs, and grass in the forest, a 3D modeling method for complex tree–shrub–grass forest scenes is proposed. This method includes steps such as the point cloud segmentation of trees, shrubs, and grass, ground–grass model construction, tree model construction, and shrub model construction. Furthermore, to address the issue of missing tree trunk points, a tree trunk position prediction method based on hierarchical canopy point fitting is proposed. Forest scene reconstruction experiments were conducted in six plots in Germany and China. The modeling accuracy of the algorithm was validated using terrestrial LiDAR point cloud and manually annotated data. The conclusions are as follows:
(1) The proposed method considers the stratified structure of trees, shrubs, and grass in complex forest scenes, allowing for a more accurate reconstruction of realistic 3D forest scenes. Validation results show that the segmentation algorithm proposed in this study is capable of segmenting airborne point clouds into trees, shrubs, and grass. The segmentation accuracy for trees and shrubs reaches 87.32% and 60.00%, respectively. The grass height accuracy achieves an RMSE < 0.15 m, and shrub volume accuracy achieves an RMSE < 4.05 m3.
(2) To address the challenge of tree trunk reconstruction caused by missing trunk points in airborne LiDAR data, this study proposes a method for predicting tree trunks based on hierarchical canopy point fitting. Validation results with terrestrial LiDAR point clouds show that the trunk position prediction accuracy is improved by 0.45 m compared to the canopy projection center method.
(3) The use of 3D α-shape envelope models to represent tree crowns achieved height and volume accuracy with R2 > 0.991 and R2 > 0.904, respectively. Compared to simplified point cloud and 3D voxel models, the outer envelope model accurately represents the canopy structure while significantly reducing memory usage, by approximately 80%. This approach enables high-fidelity and lightweight modeling of complex forest scenes.
Overall, this study demonstrates the feasibility of using airborne LiDAR point clouds for a large-scale 3D reconstruction of tree–shrub–grass forest scenes. However, the limited information on understory vegetation captured by airborne LiDAR leads to lower modeling accuracy for shrubs and grass. Additionally, fine details such as arboreal branches and the woody structures of shrubs were not considered in this study. Moreover, the airborne point clouds utilized in this study have relatively high point densities, and the impact of point density on model accuracy was not investigated. Future research will focus on analyzing the impact of point cloud density on modeling accuracy and determining the minimum point cloud density required for high-accuracy forest scene modeling.

Author Contributions

Conceptualization, D.X. and X.Y.; methodology, D.X. and X.Y.; validation, D.X.; formal analysis, D.X.; investigation, D.X. and X.Y.; data curation, D.X.; writing—original draft preparation, D.X.; writing—review and editing, X.Y., X.X. and C.W.; funding acquisition, X.Y. and G.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number U22A20566 and 42201380), the Science and Disruptive Technology Program of AIRCAS (grant number E3Z21501), and the “Pioneer” and “Leading Goose” R&D Program of Zhejiang (grant number 2023C03190).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy reasons.

Acknowledgments

The authors greatly appreciate the Geospatial Data Processing Research Group at Heidelberg University Institute of Geography and the Quantitative Remote Sensing Team at Beijing Forestry University for providing the point cloud data and some of the annotated information.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. De Boissieu, F.; Heuschmidt, F.; Lauret, N.; Ebengo, D.M.; Vincent, G.; Féret, J.-B.; Yin, T.; Gastellu-Etchegorry, J.-P.; Costeraste, J.; Lefèvre-Fonollosa, M.-J.; et al. Validation of the DART Model for Airborne Laser Scanner Simulations on Complex Forest Environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8379–8394. [Google Scholar] [CrossRef]
  2. Wang, W.; Li, Y.; Huang, H.; Hong, L.; Du, S.; Xie, L.; Li, X.; Guo, R.; Tang, S. Branching the Limits: Robust 3D Tree Reconstruction from Incomplete Laser Point Clouds. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103557. [Google Scholar] [CrossRef]
  3. Xiao, W.; Xu, S.; Elberink, S.O.; Vosselman, G. Individual Tree Crown Modeling and Change Detection from Airborne Lidar Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3467–3477. [Google Scholar] [CrossRef]
  4. Gastellu-Etchegorry, J.-P.; Yin, T.; Lauret, N.; Grau, E.; Rubio, J.; Cook, B.D.; Morton, D.C.; Sun, G. Simulation of Satellite, Airborne and Terrestrial LiDAR with DART (I): Waveform Simulation with Quasi-Monte Carlo Ray Tracing. Remote Sens. Environ. 2016, 184, 418–435. [Google Scholar] [CrossRef]
  5. Zhao, K.; Suarez, J.C.; Garcia, M.; Hu, T.; Wang, C.; Londo, A. Utility of Multitemporal Lidar for Forest and Carbon Monitoring: Tree Growth, Biomass Dynamics, and Carbon Flux. Remote Sens. Environ. 2018, 204, 883–897. [Google Scholar] [CrossRef]
  6. Li, J.; Wu, H.; Xiao, Z.; Lu, H. 3D Modeling of Laser-Scanned Trees Based on Skeleton Refined Extraction. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102943. [Google Scholar] [CrossRef]
  7. Deng, S.; Jing, S.; Zhao, H. A Hybrid Method for Individual Tree Detection in Broadleaf Forests Based on UAV-LiDAR Data and Multistage 3D Structure Analysis. Forests 2024, 15, 1043. [Google Scholar] [CrossRef]
  8. Yin, T.; Cook, B.D.; Morton, D.C. Three-Dimensional Estimation of Deciduous Forest Canopy Structure and Leaf Area Using Multi-Directional, Leaf-on and Leaf-off Airborne Lidar Data. Agric. For. Meteorol. 2022, 314, 108781. [Google Scholar] [CrossRef]
  9. Liu, S.; Deng, Y.; Zhang, J.; Wang, J.; Duan, D. Extraction of Arbors from Terrestrial Laser Scanning Data Based on Trunk Axis Fitting. Forests 2024, 15, 1217. [Google Scholar] [CrossRef]
  10. Li, R.; Bu, G.; Wang, P. An Automatic Tree Skeleton Extracting Method Based on Point Cloud of Terrestrial Laser Scanner. Int. J. Opt. 2017, 2017, 5408503. [Google Scholar] [CrossRef]
  11. Yun, T.; Jiang, K.; Li, G.; Eichhorn, M.P.; Fan, J.; Liu, F.; Chen, B.; An, F.; Cao, L. Individual Tree Crown Segmentation from Airborne LiDAR Data Using a Novel Gaussian Filter and Energy Function Minimization-Based Approach. Remote Sens. Environ. 2021, 256, 112307. [Google Scholar] [CrossRef]
  12. Kukkonen, M.; Maltamo, M.; Korhonen, L.; Packalen, P. Fusion of Crown and Trunk Detections from Airborne UAS Based Laser Scanning for Small Area Forest Inventories. Int. J. Appl. Earth Obs. Geoinf. 2021, 100, 102327. [Google Scholar] [CrossRef]
  13. André, F.; de Wergifosse, L.; de Coligny, F.; Beudez, N.; Ligot, G.; Gauthray-Guyénet, V.; Courbaud, B.; Jonard, M. Radiative Transfer Modeling in Structurally Complex Stands: Towards a Better Understanding of Parametrization. Ann. For. Sci. 2021, 78, 1–21. [Google Scholar] [CrossRef]
  14. Liu, K.; Shen, X.; Cao, L.; Wang, G.; Cao, F. Estimating Forest Structural Attributes Using UAV-LiDAR Data in Ginkgo Plantations. ISPRS J. Photogramm. Remote Sens. 2018, 146, 465–482. [Google Scholar] [CrossRef]
  15. Yan, W.; Guan, H.; Cao, L.; Yu, Y.; Gao, S.; Lu, J. An Automated Hierarchical Approach for Three-Dimensional Segmentation of Single Trees Using UAV LiDAR Data. Remote Sens. 2018, 10, 1999. [Google Scholar] [CrossRef]
  16. Chang, L.; Fan, H.; Zhu, N.; Dong, Z. A Two-Stage Approach for Individual Tree Segmentation From TLS Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8682–8693. [Google Scholar] [CrossRef]
  17. Hu, S.; Li, Z.; Zhang, Z.; He, D.; Wimmer, M. Efficient Tree Modeling from Airborne LiDAR Point Clouds. Comput. Graph. 2017, 67, 1–13. [Google Scholar] [CrossRef]
  18. Kato, A.; Moskal, L.M.; Schiess, P.; Swanson, M.E.; Calhoun, D.; Stuetzle, W. Capturing Tree Crown Formation through Implicit Surface Reconstruction Using Airborne Lidar Data. Remote Sens. Environ. 2009, 113, 1148–1162. [Google Scholar] [CrossRef]
  19. Hu, C.; Ru, Y.; Fang, S.; Zhou, H.; Xue, J.; Zhang, Y.; Li, J.; Xu, G.; Fan, G. A Tree Point Cloud Simplification Method Based on FPFH Information Entropy. Forests 2023, 14, 1507. [Google Scholar] [CrossRef]
  20. Wang, S.; Hu, Q.; Xiao, D.; He, L.; Liu, R.; Xiang, B.; Kong, Q. A New Point Cloud Simplification Method with Feature and Integrity Preservation by Partition Strategy. Measurement 2022, 197, 111173. [Google Scholar] [CrossRef]
  21. Qi, J.; Xie, D.; Jiang, J.; Huang, H. 3D Radiative Transfer Modeling of Structurally Complex Forest Canopies through a Lightweight Boundary-Based Description of Leaf Clusters. Remote Sens. Environ. 2022, 283, 113301. [Google Scholar] [CrossRef]
  22. Yang, P.; van der Tol, C.; Yin, T.; Verhoef, W. The SPART Model: A Soil-Plant-Atmosphere Radiative Transfer Model for Satellite Measurements in the Solar Spectrum. Remote Sens. Environ. 2020, 247, 111870. [Google Scholar] [CrossRef]
  23. Pfeifer, N.; Gorte, B.; Winterhalder, D. Automatic Reconstruction of Single Trees from Terrestrial Laser Scanner Data. Remote Sens. Spat. Inf. Sci. 2004, 35, 119–124. [Google Scholar]
  24. Vosselman, G. 3D Reconstruction of Roads and Trees for City Modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 34, 231–236. [Google Scholar]
  25. Morsdorf, F.; Meier, E.; Kötz, B.; Itten, K.I.; Dobbertin, M.; Allgöwer, B. LIDAR-Based Geometric Reconstruction of Boreal Type Forest Stands at Single Tree Level for Forest and Wildland Fire Management. Remote Sens. Environ. 2004, 92, 353–362. [Google Scholar] [CrossRef]
  26. Wang, Y.; Weinacker, H.; Koch, B. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest. Sensors 2008, 8, 3938–3951. [Google Scholar] [CrossRef]
  27. Lin, X.; Li, A.; Bian, J.; Zhang, Z.; Lei, G.; Chen, L.; Qi, J. Reconstruction of a Large-Scale Realistic Three-Dimensional (3-D) Mountain Forest Scene for Radiative Transfer Simulations. Gisci. Remote Sens. 2023, 60, 2261993. [Google Scholar] [CrossRef]
  28. Li, W.; Hu, X.; Su, Y.; Tao, S.; Ma, Q.; Guo, Q. A New Method for Voxel-based Modelling of Three-dimensional Forest Scenes with Integration of Terrestrial and Airborne LIDAR Data. Methods Ecol. Evol. 2024, 15, 569–582. [Google Scholar] [CrossRef]
  29. Qi, J.; Xie, D.; Yin, T.; Yan, G.; Gastellu-Etchegorry, J.-P.; Li, L.; Zhang, W.; Mu, X.; Norford, L.K. LESS: LargE-Scale Remote Sensing Data and Image Simulation Framework over Heterogeneous 3D Scenes. Remote Sens. Environ. 2019, 221, 695–706. [Google Scholar] [CrossRef]
  30. Janoutová, R.; Homolová, L.; Malenovský, Z.; Hanuš, J.; Lauret, N.; Gastellu-Etchegorry, J.-P. Influence of 3D Spruce Tree Representation on Accuracy of Airborne and Satellite Forest Reflectance Simulated in DART. Forests 2019, 10, 292. [Google Scholar] [CrossRef]
  31. Jarron, L.R.; Coops, N.C.; MacKenzie, W.H.; Tompalski, P.; Dykstra, P. Detection of Sub-Canopy Forest Structure Using Airborne LiDAR. Remote Sens. Environ. 2020, 244, 111770. [Google Scholar] [CrossRef]
  32. Wing, B.M.; Ritchie, M.W.; Boston, K.; Cohen, W.B.; Gitelman, A.; Olsen, M.J. Prediction of understory vegetation cover with airborne lidar in an interior ponderosa pine forest. Remote Sens. Environ. 2012, 124, 730–741. [Google Scholar] [CrossRef]
  33. Weiser, H.; Schäfer, J.; Winiwarter, L.; Krašovec, N.; Fassnacht, F.E.; Höfle, B. Individual Tree Point Clouds and Tree Measurements from Multi-Platform Laser Scanning in German Forests. Earth Syst. Sci. Data 2022, 14, 2989–3012. [Google Scholar] [CrossRef]
  34. Zhao, X.; Qi, J.; Yu, Z.; Yuan, L.; Huang, H. Fine-Scale Quantification of Absorbed Photosynthetically Active Radiation (APAR) in Plantation Forests with 3D Radiative Transfer Modeling and LiDAR Data. Plant Phenomics 2024, 6, 0166. [Google Scholar] [CrossRef]
  35. Tusa, E.; Monnet, J.-M.; Barré, J.-B.; Mura, M.D.; Dalponte, M.; Chanussot, J. Individual Tree Segmentation Based on Mean Shift and Crown Shape Model for Temperate Forest. IEEE Geosci. Remote Sens. Lett. 2021, 18, 2052–2056. [Google Scholar] [CrossRef]
  36. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  37. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  38. Wang, X.; Zhang, Y.; Luo, Z. Combining Trunk Detection with Canopy Segmentation to Delineate Single Deciduous Trees Using Airborne LiDAR Data. IEEE Access 2020, 8, 99783–99796. [Google Scholar] [CrossRef]
  39. Zhou, X.; Alexiou, E.; Viola, I.; Cesar, P. PointPCA+: Extending PointPCA Objective Quality Assessment Metric. In Proceedings of the IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), Kuala Lumpur, Malaysia, 8–11 October 2023. [Google Scholar] [CrossRef]
  40. Comaniciu, D.; Meer, P. Mean Shift: A Robust Approach toward Feature Space Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef]
  41. Goutte, C.; Gaussier, E. A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. In Proceedings of the 27th European Conference on Advances in Information Retrieval Research, Santiago de Compostela, Spain, 21–23 March 2005. [Google Scholar] [CrossRef]
  42. Wang, J.; Chen, X.; Cao, L.; An, F.; Chen, B.; Xue, L.; Yun, T. Individual Rubber Tree Segmentation Based on Ground-Based LiDAR Data and Faster R-CNN of Deep Learning. Forests 2019, 10, 793. [Google Scholar] [CrossRef]
  43. Chen, X.; Jiang, K.; Zhu, Y.; Wang, X.; Yun, T. Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning. Forests 2021, 12, 131. [Google Scholar] [CrossRef]
  44. Xi, Z.; Hopkinson, C. 3D Graph-Based Individual-Tree Isolation (Treeiso) from Terrestrial Laser Scanning Point Clouds. Remote Sens. 2022, 14, 6116. [Google Scholar] [CrossRef]
  45. Xiang, B.; Wielgosz, M.; Kontogianni, T.; Peters, T.; Puliti, S.; Astrup, R.; Schindler, K. Automated Forest Inventory: Analysis of High-Density Airborne LiDAR Point Clouds with 3D Deep Learning. Remote Sens. Environ. 2024, 305, 114078. [Google Scholar] [CrossRef]
Figure 1. Study area location and ALS point cloud.
Figure 2. ALS and TLS tree and shrub point cloud. (a) Automatically extracted ALS tree point cloud. (b) Manually annotated TLS tree point cloud. (c) Automatically extracted ALS shrub point cloud. (d) Manually annotated TLS shrub point cloud.
Figure 3. Technical route for 3D forest scene modeling based on ALS point cloud.
Figure 4. Ground–grass model.
Figure 5. Diagram of the separation of crown and trunk points. (a) Individual tree point cloud. (b) Vertical distribution histogram of point cloud count. (c) Vertical distribution histogram of point cloud dispersion.
Figure 6. Modeling of trees with trunk points.
Figure 7. Modeling of trees without trunk points.
Figure 8. Shrub model construction.
Figure 9. Height validation results of grass models.
Figure 10. Volume validation results of shrub models.
Figure 11. Height validation results of tree models.
Figure 12. Volume validation results of tree crown models.
Figure 13. Three-dimensional forest scene models.
Table 1. Study area location and general information.

| Sample Plot | Location | Plot Size | Average Slope (°) | Forest Type | Average Tree Height (m) | Vegetation Coverage |
|---|---|---|---|---|---|---|
| BR | 49.01° N, 8.69° E | 300 × 300 m | 21.80 | Mixed Broadleaf–Conifer Forest | 14.0 | 80.53% |
| KA | 49.03° N, 8.43° E | 300 × 300 m | 9.09 | Mixed Broadleaf–Conifer Forest | 20.2 | 91.76% |
| SHB | 42.42° N, 117.31° E | 300 × 300 m | 9.09 | Coniferous Forest | 17.7 | 88.45% |
| DZ | 24.32° N, 102.57° E | 300 × 170 m | 10.21 | Mixed Broadleaf–Conifer Forest | 11.6 | 82.36% |
| GX | 25.45° N, 110.46° E | 300 × 340 m | 5.79 | Broad-leaved Forest | 12.2 | 71.66% |
| SL | 27.62° N, 99.74° E | 210 × 200 m | 17.68 | Coniferous Forest | 12.1 | 81.74% |
Table 2. Information on ALS point cloud.

| Sample Plot | Acquisition Time | Acquisition Equipment | Flight Altitude (m) | Pulse Repetition Frequency (kHz) | Elevation Accuracy (mm) | Density (pts/m²) |
|---|---|---|---|---|---|---|
| BR | 5 July 2019 | RIEGL VQ-780i | 650 | 1000 | 20 | 27 |
| KA | 5 July 2019 | RIEGL VQ-780i | 650 | 1000 | 20 | 26 |
| SHB | 21 July 2022 | RIEGL VUX-1UAV | 60 | 550 | 10 | 108 |
| DZ | 5 June 2022 | RIEGL VUX-1UAV | 70 | 550 | 10 | 139 |
| GX | 11 October 2021 | RIEGL VUX-1UAV | 70 | 550 | 10 | 74 |
| SL | 25 May 2021 | RIEGL VUX-1UAV | 60 | 550 | 10 | 129 |
Table 3. Information of annotated data.

| Data Source | Sample Plot | Number of Individual Trees | Number of Trunk Positions | Number of Shrub Point Cloud Clusters |
|---|---|---|---|---|
| ALS | BR | 38 | / | / |
| ALS | KA | 51 | / | / |
| ALS | SHB | 48 | / | / |
| ALS | DZ | 45 | / | / |
| ALS | GX | 57 | / | / |
| ALS | SL | 53 | / | / |
| TLS | BR | 28 | 28 | 11 |
| TLS | KA | 30 | 30 | 16 |
| TLS | SHB | 50 | 50 | 11 |
| TLS | DZ | 40 | 40 | 18 |
| TLS | GX | 53 | 53 | 17 |
| TLS | SL | 44 | 44 | 10 |
Table 4. Individual tree segmentation accuracy.

| Method | Sample Plot | Number of Actual Trees | Number of Segmented Trees | TP | FN | FP | r (%) | p (%) | f (%) |
|---|---|---|---|---|---|---|---|---|---|
| treeiso | BR | 38 | 41 | 29 | 9 | 12 | 76.31 | 70.73 | 73.42 |
| treeiso | KA | 51 | 53 | 40 | 11 | 13 | 78.43 | 75.41 | 76.92 |
| treeiso | SHB | 48 | 50 | 42 | 6 | 8 | 87.50 | 84.00 | 85.71 |
| treeiso | DZ | 45 | 43 | 33 | 12 | 10 | 73.33 | 76.74 | 75.00 |
| treeiso | GX | 57 | 52 | 43 | 14 | 9 | 75.44 | 82.69 | 78.90 |
| treeiso | SL | 53 | 57 | 43 | 10 | 14 | 81.13 | 75.44 | 78.18 |
| ForAINet | BR | 38 | 36 | 30 | 8 | 6 | 78.95 | 83.33 | 81.08 |
| ForAINet | KA | 51 | 47 | 42 | 9 | 5 | 82.35 | 89.36 | 85.71 |
| ForAINet | SHB | 48 | 46 | 45 | 3 | 1 | 93.75 | 97.83 | 95.94 |
| ForAINet | DZ | 45 | 45 | 38 | 7 | 7 | 84.44 | 84.44 | 84.44 |
| ForAINet | GX | 57 | 55 | 47 | 10 | 8 | 82.46 | 85.45 | 83.93 |
| ForAINet | SL | 53 | 57 | 48 | 5 | 9 | 90.57 | 84.21 | 87.27 |
| Proposed | BR | 38 | 33 | 31 | 7 | 2 | 81.58 | 93.94 | 87.32 |
| Proposed | KA | 51 | 48 | 44 | 7 | 4 | 86.27 | 91.67 | 88.89 |
| Proposed | SHB | 48 | 46 | 44 | 4 | 2 | 91.67 | 95.65 | 93.62 |
| Proposed | DZ | 45 | 47 | 41 | 4 | 6 | 91.11 | 87.23 | 89.13 |
| Proposed | GX | 57 | 53 | 50 | 7 | 3 | 87.72 | 94.34 | 90.91 |
| Proposed | SL | 53 | 56 | 49 | 4 | 7 | 92.45 | 87.50 | 89.91 |
Table 5. Shrub extraction accuracy of our method.

| Sample Plot | BR | KA | SHB | DZ | GX | SL |
|---|---|---|---|---|---|---|
| Number of actual shrubs | 11 | 16 | 11 | 18 | 17 | 10 |
| Number of segmented shrubs | 8 | 12 | 10 | 12 | 14 | 6 |
| Extraction accuracy | 72.73% | 75.00% | 90.91% | 66.67% | 82.35% | 60.00% |
Table 6. Offset value of tree position.

| Sample Plot | Number of Trees | Our Method $\bar{d}_x$ (m) | Our Method $\bar{d}_y$ (m) | Our Method $\bar{d}_{dis}$ (m) | Crown Projection Center $\bar{d}_x$ (m) | Crown Projection Center $\bar{d}_y$ (m) | Crown Projection Center $\bar{d}_{dis}$ (m) |
|---|---|---|---|---|---|---|---|
| BR | 28 | 0.19 | 0.29 | 0.37 | 0.58 | 0.45 | 0.79 |
| KA | 30 | 0.23 | 0.20 | 0.33 | 0.67 | 0.72 | 1.09 |
| SHB | 50 | 0.26 | 0.25 | 0.41 | 0.63 | 0.66 | 1.01 |
| DZ | 30 | 0.22 | 0.24 | 0.33 | 0.49 | 0.41 | 0.64 |
| GX | 32 | 0.25 | 0.29 | 0.38 | 0.72 | 0.40 | 0.82 |
| SL | 49 | 0.28 | 0.23 | 0.36 | 0.58 | 0.38 | 0.69 |
| Total | 219 | 0.24 | 0.25 | 0.35 | 0.61 | 0.50 | 0.80 |
Table 7. Proportion of tree, shrub, and grass volume in 3D forest scene models.

| Sample Plot | Tree Volume Proportion | Shrub Volume Proportion | Grass Volume Proportion | Total Vegetation Volume (m³) |
|---|---|---|---|---|
| BR | 92.21% | 7.72% | 0.07% | 574,702.62 |
| KA | 85.96% | 13.80% | 0.24% | 687,645.62 |
| SHB | 97.04% | 2.89% | 0.07% | 568,656.26 |
| DZ | 93.16% | 6.49% | 0.35% | 424,956.25 |
| GX | 91.36% | 6.25% | 2.39% | 557,095.37 |
| SL | 93.03% | 6.84% | 0.13% | 389,406.25 |
Table 8. Visualization of tree models constructed using different methods.

| Tree Type | Original Point Cloud | Simplified Point Cloud | Voxel Model (1 m) | Voxel Model (0.5 m) | Proposed Model |
|---|---|---|---|---|---|
| Coniferous tree | [image] | [image] | [image] | [image] | [image] |
| Broadleaf tree | [image] | [image] | [image] | [image] | [image] |
Table 9. Comparison of memory usage for 3D forest scene models.

| Sample Plot | Original Point Cloud | Simplified Point Cloud | Voxel Model (1 m) | Voxel Model (0.5 m) | Our Model |
|---|---|---|---|---|---|
| BR | 451 MB | 143 MB | 155 MB | 584 MB | 98 MB |
| KA | 551 MB | 211 MB | 177 MB | 681 MB | 119 MB |
| SHB | 1.03 GB | 153 MB | 172 MB | 719 MB | 204 MB |
| DZ | 789 MB | 55 MB | 85 MB | 326 MB | 107 MB |
| GX | 715 MB | 91 MB | 138 MB | 598 MB | 156 MB |
| SL | 562 MB | 44 MB | 55 MB | 219 MB | 102 MB |

