Article

An Individual Tree Segmentation Method That Combines LiDAR Data and Spectral Imagery

Xingwang Chen, Ruirui Wang, Wei Shi, Xiuting Li, Xianhao Zhu and Xiaoyan Wang

1 College of Forestry, Beijing Forestry University, Beijing 100083, China
2 Beijing Key Laboratory of Precision Forestry, Beijing Forestry University, Beijing 100083, China
3 Beijing Ocean Forestry Technology Co., Ltd., Beijing 100083, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(5), 1009; https://doi.org/10.3390/f14051009
Submission received: 24 February 2023 / Revised: 30 April 2023 / Accepted: 10 May 2023 / Published: 13 May 2023

Abstract

The dynamic monitoring of forest resources is an integral component of forest resource management and the maintenance of forest ecosystem stability. In recent years, LiDAR (Light Detection and Ranging) has been increasingly utilized in precision forest surveys due to its strong penetrating ability and its capacity to capture the vertical structure of forests. However, existing individual tree segmentation algorithms for airborne LiDAR data are not highly adaptable to different forest types; in mixed coniferous and broad-leaved forest zones in particular, the accuracy of individual tree extraction is low, and trees are incorrectly recognized or missed. To address these issues, spectral images and LiDAR data of a Korean pine conifer–broadleaf mixed forest in the Changbai Mountain Nature Reserve in Jilin Province were selected, and the normalized point cloud was segmented iteratively using a distance-threshold-based individual tree segmentation method to obtain the initial segmented individual tree vertices. For individual trees whose initial vertex was identified at a deviated position, and for unidentified individual trees, identification anchor points for the real tree vertices were added within the tree crowns. These anchor points have strong positional directivity in the LiDAR data and can mark the individual trees whose vertices were misidentified or missed during the initial segmentation, allowing these two types of trees to be identified. The tree vertices can then be inserted precisely based on the 3D shape of the individual tree point cloud, and the seed-point-based individual tree segmentation method is used to segment the normalized point cloud and complete the extraction of individual trees in the Korean pine mixed conifer forest. The results indicate that, compared to the previous individual tree segmentation approach based on the relative spacing between individual trees, the proposed approach enhances the accuracy of individual tree segmentation from 83% to 96%. This high segmentation accuracy indicates that the proposed method can accurately identify individual trees from remote sensing data, can provide a data basis for subsequent individual tree information extraction, and has great potential in practical applications.

1. Introduction

Forests serve as the basis for all living species to flourish and play a crucial role in preserving the equilibrium of the planet’s ecology [1,2,3]. Sustainable forest management must implement the dynamic monitoring of forest resources based on surveys of forest resource data [4]. Forest structure metrics are important indicators that characterize forest growth and evaluate ecological functions. For accurate and sustainable forest management, it is essential to measure the structural characteristics of each individual tree in a forest [5,6]. Commonly utilized individual tree metrics include crown width, diameter at breast height, tree position, tree height, and tree species. The measurement of all individual tree parameters relies on the precise extraction of individual trees [7,8,9]; therefore, the identification of individual trees is crucial in forestry. Airborne LiDAR can penetrate forest vegetation to rapidly collect large-area, high-precision three-dimensional information on vegetation [1,10,11]. It provides topographic information on the horizontal structure and spatial information on the vertical structure of the forest canopy, and it can also support the mechanization and automation of forest surveys, satisfying the demand for precision forestry [12].
Current LiDAR point cloud segmentation methods can be categorized into two groups: (1) those that use a surface model, typically the canopy height model obtained by subtracting a digital elevation model (DEM) from a digital surface model (DSM); and (2) those that segment individual trees directly from the point cloud [13]. These algorithms typically vary in the approaches employed to smooth the canopy height model (CHM) and in the window location and window size used in the local extreme value search. The window size and the smoothness of the CHM are generally determined by a priori information or external inputs [14,15]. Persson et al. [16] produced a smoothed CHM using a two-dimensional Gaussian filter and segmented and positioned the trees by searching for local maxima. Popescu et al. [8] assessed the height of pine and deciduous trees based on their respective CHM and region growing. Koch et al. [17] utilized a filter to locate treetops and a “water injection” method to segment the canopy of coniferous and broadleaf woods, with an overall segmentation accuracy of 62%; the lower limit of the regression curve between tree height and crown width was utilized to determine the size of the search window for the local maxima. The majority of these individual tree segmentation methods are CHM-based algorithms. However, there are inherent errors and uncertainties in the process of generating a CHM. For example, spatial errors may be introduced when interpolating from the point cloud to the gridded height model, which can reduce the accuracy of individual tree segmentation [18,19].
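To make the CHM-based workflow above concrete, the following minimal sketch (not code from any of the cited studies) smooths a CHM with a 2D Gaussian filter and reports local maxima as candidate treetops; the `chm` array, window size, smoothing sigma, and minimum height are all illustrative assumptions.

```python
# Minimal sketch of CHM-based treetop detection via smoothed local maxima.
# `chm` is assumed to be a 2D numpy array of canopy heights (m); the window
# size, sigma, and minimum height are illustrative values only.
import numpy as np
from scipy import ndimage

def detect_treetops(chm, sigma=1.0, window=5, min_height=2.0):
    """Return (row, col) indices of local maxima in a smoothed CHM."""
    smoothed = ndimage.gaussian_filter(chm, sigma=sigma)       # 2D Gaussian smoothing
    local_max = ndimage.maximum_filter(smoothed, size=window)  # sliding-window maxima
    peaks = (smoothed == local_max) & (smoothed > min_height)  # keep peaks above a height floor
    return np.argwhere(peaks)

# Synthetic example with two isolated canopy spikes:
chm = np.zeros((50, 50))
chm[20, 20] = 18.0
chm[35, 10] = 12.0
print(detect_treetops(chm, min_height=1.0))   # prints the two peak positions
```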
In the second category, the point clouds are normalized to reduce the effect of topography and are then segmented directly, based on the vertical and horizontal spatial distribution characteristics of the points, to generate individual trees [20,21,22]. Morsdorf et al. [8] segmented individual trees from coniferous forest point clouds using a K-means clustering algorithm, which first identifies the local highest points from the DSM as treetops and then uses each local highest point as a seed point for K-means clustering. Li et al. [23] combined a region growing method with threshold judgment for single-tree partitioning in coniferous forests. This approach makes full use of the spacing between trees, particularly at the tree tops, by first assuming that the highest point is a treetop and then assigning points to trees by region growing until all trees are partitioned. Sackov et al. [24] summarized the relationship between tree distribution and tree height, as well as the relationship between tree height and canopy size, based on measured data, and combined these tree growth rules with a moving-window treetop detector for single-tree segmentation in broadleaf forest. However, the tree growth rules used are not universally applicable.
The aforementioned research on single-tree segmentation algorithms based on airborne LiDAR data has made great progress in the field. However, whether based on the CHM or on the point cloud, the majority of algorithms are influenced by forest type and are best suited to stands of a single leaf type. In the segmentation of coniferous and broad-leaved mixed forests, the shapes and crown widths of individual crowns differ greatly, whereas the traditional point-cloud-based single-tree extraction method detects crowns at a single segmentation scale. It therefore cannot simultaneously and accurately detect the extents of individual tree crowns with large differences in crown width, and under-segmentation and over-segmentation of individual trees are often observed within the same area. The existing point cloud single-tree segmentation methods cannot overcome these problems. Therefore, it is necessary to develop a single-tree segmentation method with strong adaptability and high robustness across different forest types to segment single trees in coniferous and broad-leaved mixed forests. In this study, we made use of the rich texture information of high-resolution remote sensing images and the large differences in height and crown diameter among single trees in coniferous and broad-leaved mixed forests. In particular, a single-tree segmentation method combining high-resolution images and LiDAR data is proposed, which addresses the problems of over-detection and missed detection in the segmentation of coniferous and broad-leaved mixed forests and offers a solution for the accurate segmentation of single trees in such forests. We aimed to overcome the bottlenecks of missed detection and over-segmentation of single trees observed in the single-scale segmentation results of current methods.

2. Materials and Methods

2.1. Study Area

The Baihe Conservation Management Station in the Changbai Mountain National Nature Reserve, Jilin, was selected as the study area. It lies in the heart of a dense primary forest, with an elevation of 700–1000 m and a total area of 3.6 hectares. The area has a humid temperate continental climate with warm summers and long cold winters, an annual average temperature of approximately 3 °C, and annual average precipitation of approximately 700–1400 mm [25]. Pinus koraiensis, Larix olgensis, Betula costata, Tilia tuan, Acer pictum, and Betula platyphylla are the predominant flora of the Pinus koraiensis broad-leaved forest zone, one of the four vegetation zones on Changbai Mountain [26]. Figure 1 depicts the geographic location and true-color spectral image of the study region. Due to the broad area, point cloud individual tree segmentation over the whole region is time-consuming and inefficient; hence, a small plot within the research area was selected for the point cloud segmentation of individual trees. Figure 2 presents a LiDAR image of this plot.

2.2. Data Acquisition

The field data were collected in June 2021 using a Bell helicopter equipped with a LiDAR scanner and an aerial camera. On the day of data acquisition, the weather was clear, with sufficient light and no wind, providing suitable conditions for collecting aerial images from a helicopter. For image collection, the flight height was 500 m, the side overlap was approximately 45%, and the course overlap was 65%. The LiDAR sensor employed was a Galaxy PRIME scanner; Table 1 reports the specific parameters of the instrument. A 100-million-pixel Feisi aerial camera was employed to collect aerial imagery in the red, green, and blue (RGB) bands, with a spatial resolution of 0.03 m (Figure 1). Manual inspection of the high-resolution images reveals that the study area belongs to a mixed zone of coniferous and broad-leaved forests. The crown shape and crown width vary among individual trees and, as the crowns of broad-leaved trees generally form spherical clusters, the relative spacing between trees also varies.

2.3. Point Cloud Data Processing

The acquisition of airborne LiDAR point cloud data inevitably introduces errors due to the LiDAR equipment, the characteristics of the measured objects, and the environment of the measurement area. This results in noise in the point cloud data, which can be divided into high and low gross errors. High gross errors usually refer to erroneous high-elevation points caused by low-flying objects (e.g., birds) during LiDAR acquisition, which mix the three-dimensional information of the real surface with that of the flying objects. Low gross errors typically refer to errors caused by the multipath effect during acquisition or to very low points attributed to the laser scanner equipment. The noise points generated by these errors affect subsequent point cloud processing and thus reduce the accuracy of point cloud filtering, normalization, and single-tree segmentation. Therefore, the initial point cloud data were denoised, which effectively improves the accuracy of the subsequent steps.
A distance-based denoising algorithm was used in this study [27]. In this approach, the point cloud is regarded as a set of points in space, and a noise point is one that lies far from the majority of other points. For each point, a specified number of neighboring points were located and the average distance (D) from the point to these neighbors was calculated. The mean (meanD) and standard deviation (S) of D over all points were then determined, and the threshold MaxD was computed using Formula (1), where meanK is a multiplier applied to the standard deviation. If D > MaxD for a point, it was considered a noise point and was removed.
MaxD = meanD + meanK × S
Based on the stand density of the forest in the study area determined from the forest resources survey, the candidate number of neighborhood points was set to 4, 6, and 8, and the candidate neighborhood search radius was set to 8, 10, and 12 m. The parameters of the two groups were combined in pairs to obtain a total of nine experimental combinations. Following the group-by-group experiments, the denoising effect was optimal with 6 neighborhood points and a search radius of 10 m. Figure 3 presents the results of the denoising process. The total number of initial points in the study area was 7,052,863, and a total of 8947 low and 26,774 high gross error noise points were removed after denoising. A small number of isolated noise points that could not be eliminated automatically were subsequently removed manually to complete the denoising of the initial LiDAR data.
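The denoising step can be illustrated with a hedged sketch of the distance-based rule described above, assuming the point cloud is an (N, 3) NumPy array; for brevity the sketch uses pure k-nearest-neighbour distances and omits the explicit search radius tested in the experiments, and `mean_k` corresponds to the meanK multiplier in Formula (1).

```python
# Hedged sketch of the distance-based denoising: the mean distance D from each
# point to its k nearest neighbours is compared with MaxD = meanD + meanK * S.
# k = 6 follows the best combination reported above; mean_k is illustrative.
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=6, mean_k=1.0):
    """points: (N, 3) array; returns the denoised points and a keep-mask."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)   # k+1 because the nearest neighbour is the point itself
    d = dist[:, 1:].mean(axis=1)            # mean distance D to the k neighbours
    mean_d, s = d.mean(), d.std()
    max_d = mean_d + mean_k * s             # Formula (1): MaxD = meanD + meanK * S
    keep = d <= max_d
    return points[keep], keep
```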
The elevation information in the LiDAR data is the sum of the tree height and the terrain elevation. The terrain elevation interferes with individual tree segmentation and exerts a significant impact on its accuracy. Thus, prior to individual tree segmentation, the denoised point cloud should be normalized to eliminate the influence of terrain elevation [28].
In this study, the point cloud was separated into ground and non-ground points using improved progressive TIN (triangulated irregular network) densification (IPTD) [29], a commonly used point cloud filtering algorithm. Firstly, a default grid was constructed and the lowest point in each grid cell was taken as a starting seed point. The initial triangulation was constructed from the seed points. All points to be classified were then traversed, and for each point the triangle into whose horizontal projection it falls was queried. As shown in Figure 4, the distance d from point P to the triangle and the angles α1, α2, and α3 between point P and the three vertices V1, V2, and V3 of the triangle were calculated and compared with the iterative distance and iterative angle thresholds, respectively. If both the distance and the maximum angle of a point were less than the corresponding thresholds, the point was determined to be a ground point and was added to the triangulation. This process was repeated until all ground points were classified. The algorithm can effectively separate ground points in areas with gentle terrain.
The terrain fluctuations in the study area were small, and thus the applicability of this algorithm was high. In order to determine the optimal ground point separation parameters, a total of nine experimental groups were set up, with the iteration angle equal to 10°, 20°, and 30°, and the iteration distance equal to 1 m, 1.5 m, and 2 m; the parameters in the two groups were combined in pairs. Visual comparison and the elimination of noise points confirmed that the filtering effect was optimal at an iteration angle of 30° and an iteration distance of 1.5 m. A total of 88,572 ground points were separated by filtering (Figure 5), and the average ground point height was 1163 m (Figure 6).
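The acceptance test at the core of IPTD can be sketched as follows; this is only the per-point criterion, assuming a candidate point and one TIN facet are given, with the angle and distance defaults taken from the values found optimal above. TIN construction, seeding, and iteration are omitted.

```python
# Minimal sketch of the point-to-facet test in progressive TIN densification:
# a candidate point is accepted as ground when its distance to the facet plane
# and the largest angle it subtends with the three vertices are below the
# iterative thresholds (30 degrees and 1.5 m were optimal in this study).
import numpy as np

def accepts_as_ground(p, v1, v2, v3, max_angle_deg=30.0, max_dist=1.5):
    p, v1, v2, v3 = map(np.asarray, (p, v1, v2, v3))
    normal = np.cross(v2 - v1, v3 - v1)
    normal = normal / np.linalg.norm(normal)
    d = abs(np.dot(p - v1, normal))                  # distance from P to the facet plane
    angles = []
    for v in (v1, v2, v3):
        edge = p - v
        sin_a = abs(np.dot(edge, normal)) / np.linalg.norm(edge)
        angles.append(np.degrees(np.arcsin(np.clip(sin_a, 0.0, 1.0))))  # angle between PV and the plane
    return d <= max_dist and max(angles) <= max_angle_deg
```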
The airborne laser system emits laser pulses from top to bottom, and the signal receiver collects the laser signal returned by ground objects. The laser beam has a strong penetrating ability and can pass through several layers of leaves; however, as the number of penetrated leaves increases, the beam intensity decays until no signal is returned. The occlusion of the dense canopy therefore causes a large number of ground points to be lost during collection. To address this problem, after ground point separation, the inverse distance weighting (IDW) interpolation method was adopted to densify and interpolate the ground points, thus obtaining the DEM of the study area (Figure 7). By calculating the difference between the elevation of each point and the DEM, the true tree height unaffected by the terrain is obtained. This process is denoted point cloud normalization, and the results are depicted in Figure 8. The elevation ranges before and after normalization were 1167–1200 m and 0–39 m, respectively; the latter corresponds to the true tree heights in the study area. The normalized point cloud retains rich information: it eliminates the influence of topographic undulations while retaining the multiple echoes of the point cloud and the stratification characteristics of the forest canopy structure. This has a significant impact on the precision of individual tree segmentation [30].
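A hedged sketch of the normalization step is given below: ground points are interpolated to each point's horizontal position with inverse distance weighting and subtracted from that point's elevation. The IDW power, the number of neighbours, and the function names are illustrative assumptions, not the settings of the software actually used.

```python
# Hedged sketch of point cloud normalization: an IDW ground surface is
# evaluated at every point's (x, y) position and subtracted from its elevation.
import numpy as np
from scipy.spatial import cKDTree

def idw_dem(ground_xyz, query_xy, k=8, power=2.0):
    """IDW-interpolate ground elevations onto query positions query_xy (M, 2)."""
    tree = cKDTree(ground_xyz[:, :2])
    dist, idx = tree.query(query_xy, k=k)
    w = 1.0 / np.maximum(dist, 1e-6) ** power        # inverse-distance weights
    return (w * ground_xyz[idx, 2]).sum(axis=1) / w.sum(axis=1)

def normalize_points(points_xyz, ground_xyz):
    """Subtract the IDW ground surface from every point's elevation."""
    ground_z = idw_dem(ground_xyz, points_xyz[:, :2])
    normalized = points_xyz.copy()
    normalized[:, 2] = points_xyz[:, 2] - ground_z   # height above ground
    return normalized
```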

2.4. Methods

In this study, LiDAR data and high-resolution RGB images were collected by the airborne platform. The forest in the study area is a mixed Korean pine and broad-leaved forest. As the airborne LiDAR was collected from above the forest and the canopy in the study area is dense, information loss occurred for the lower half of the forest canopy [31]. This study therefore took into account the characteristics of the LiDAR data in the lower part of the canopy, as well as the distribution structure of the tree species in the study area. Traditional single-tree point cloud segmentation methods use a single scale, and their extraction accuracy is greatly reduced by the varying distribution densities of single trees in coniferous and broad-leaved mixed forest areas [32]. To overcome this problem, an iterative segmentation method was used to extract the treetop points from the LiDAR data, and the optimal single-threshold segmentation scale was determined iteratively. A multi-scale segmentation method was adopted to delineate the crown edges in the high-resolution image, which were then used to constrain the results of the single-tree treetop detection. Accurate individual tree vertex information was thus obtained and used as seed points to complete individual tree segmentation based on the LiDAR data. Figure 9 presents the segmentation processing flow.

2.4.1. Initial Single-Tree Vertex Detection Using LiDAR

A point cloud segmentation (PCS) algorithm was used to detect the vertices of single trees. This method assumes that there is a certain distance between trees, particularly at the tops. Firstly, the local maxima of the discrete point cloud were detected, and each local maximum was assumed to be a treetop, which was then used as the seed point for region growing to segment a tree. The segmentation proceeded iteratively from top to bottom, evaluating a distance threshold for each point: if the distance from a point to the existing trees exceeded the threshold, the point was determined to be a new treetop; if the point was within the threshold, it was assigned to an existing segmented tree. Figure 10 illustrates the principle of the algorithm. In the point cloud data of two single trees, point A had the highest elevation in the region and was set as the treetop of the target tree. Point B was the highest point in the remaining point cloud, and dAB, dAC, dBC, dBD, dCD, dCE, and dDE are the horizontal distances between the respective points. If dAB exceeded the set horizontal distance threshold, B was regarded as the vertex of tree 2. The highest point C in the point cloud still to be classified (with dAC less than the set threshold) was then examined, and the horizontal distances from point C to points A and B were calculated and compared: if dAC exceeded dBC, point C belonged to tree 2; otherwise, it belonged to tree 1, and so on until the point cloud classification was complete. The 2D Euclidean distance between treetops is key to individual tree segmentation in the point cloud segmentation algorithm [33]. However, determining the distance threshold is a complicated task. The threshold is typically set to the average crown radius of all individual trees in the sample plot, which requires the support of tree resource survey data for the plot; furthermore, the forest type of the plot cannot be too complicated. In this study, however, data on the average crown diameter of individual trees in the plots were not available from the second-class survey, and the segmented plots belonged to coniferous and broad-leaved mixed forests. The distribution of individual trees was uneven and the relative spacing between trees varied: some parts of the plot were more densely stocked, while others had larger gaps, making the spacing threshold between trees difficult to determine. Therefore, iterative segmentation was adopted to determine the optimal threshold for the segmentation algorithm based on the relative spacing between trees.
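A simplified sketch of this distance-threshold, top-to-bottom assignment is shown below. It assigns each point to the nearest existing treetop rather than to the nearest already-classified point of a tree, so it is a coarse stand-in for the PCS algorithm, with the spacing threshold left as a hypothetical parameter.

```python
# Simplified sketch of PCS-style segmentation: points are processed from
# highest to lowest; a point farther than the spacing threshold from every
# existing treetop becomes a new treetop, otherwise it joins the nearest tree.
import numpy as np

def pcs_segment(points_xyz, spacing=2.0):
    order = np.argsort(-points_xyz[:, 2])            # highest elevation first
    tops = []                                        # (x, y) of detected treetops
    labels = np.empty(len(points_xyz), dtype=int)
    for i in order:
        xy = points_xyz[i, :2]
        if not tops:
            tops.append(xy); labels[i] = 0; continue
        d = np.linalg.norm(np.asarray(tops) - xy, axis=1)   # horizontal distances to treetops
        if d.min() > spacing:
            tops.append(xy); labels[i] = len(tops) - 1      # new tree
        else:
            labels[i] = int(d.argmin())                     # join nearest tree
    return labels, np.asarray(tops)
```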
Due to the lack of average crown radius data, an arbitrary initial threshold was first set at a fixed segmentation scale to perform the initial segmentation of the sample plot. After segmentation, each individual tree carried a crown diameter value. The results of the initial segmentation were analyzed statistically, and the average crown radius of the individual trees was used as the distance threshold for the second segmentation. This cycle was repeated until the difference between the average crown diameters of the last two iterations was less than 0.1 m, which marked the completion of the segmentation. The results reveal that, once the segmentation scale is fixed, the total number of individual trees and the average crown diameter always approach fixed values after successive iterations, regardless of the initial threshold setting. In this study, three segmentation scales, namely 1.0, 1.1, and 1.2, were selected for the single-scale segmentation of the LiDAR data. Figure 11 demonstrates that, for a fixed segmentation scale with the initial single-tree segmentation threshold set to 9, the average crown diameter obtained by segmentation increases and subsequently stabilizes after several iterations. The final segmentation results under the three scales were compared with the spectral images, and the accuracy was evaluated using the over-detection, missed-detection, and wrong-detection indexes. The single-tree segmentation results at the segmentation scale of 1.1 were selected as the initial single-tree segmentation results. Figure 12 presents the extracted single-tree vertices; in Figures 12–20, the red points represent tree vertices.
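The iteration over the spacing threshold can be sketched as follows, assuming a segmentation function with the interface of the PCS sketch above and a crude centroid-based crown radius estimate; both helpers are illustrative stand-ins rather than the software routines actually used.

```python
# Hedged sketch of the iterative threshold search: segment with the current
# threshold, take the mean crown radius of the result as the next threshold,
# and stop once the mean crown diameter changes by less than 0.1 m.
import numpy as np

def mean_crown_radius(points_xyz, labels):
    """Crude crown radius per tree: mean horizontal distance to the tree centroid."""
    radii = []
    for lab in np.unique(labels):
        xy = points_xyz[labels == lab, :2]
        radii.append(np.linalg.norm(xy - xy.mean(axis=0), axis=1).mean())
    return float(np.mean(radii))

def iterate_threshold(points_xyz, segment_fn, init_threshold=9.0, max_iter=20):
    threshold = init_threshold            # the study started from a threshold of 9
    prev_diameter = None
    for _ in range(max_iter):
        labels, _ = segment_fn(points_xyz, spacing=threshold)
        diameter = 2.0 * mean_crown_radius(points_xyz, labels)
        if prev_diameter is not None and abs(diameter - prev_diameter) < 0.1:
            break                         # mean crown diameter has stabilized
        prev_diameter = diameter
        threshold = diameter / 2.0        # next spacing threshold = mean crown radius
    return threshold, labels
```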

2.4.2. Single-Tree Vertex Location Based on LiDAR Data and Spectral Images

The spectral image and LiDAR data are defined in the same projection coordinate system (WGS84/UTM zone 52N) such that the tree vertices extracted by LiDAR can be superimposed with the spectral image to obtain the position of the trees. The spectral image at a 0.03 m resolution was used as the reference data for the accurate positioning of individual tree positions. The key steps of the location procedure are described in the following.
1. Extraction of single-tree crown range.
Multi-scale segmentation based on region growing was used to extract the canopy edges from the high-resolution orthophoto data; the image and LiDAR data were collected simultaneously. Firstly, a pixel is selected as the growth point of the region to be segmented, and a threshold is set as the heterogeneity criterion to calculate the heterogeneity between neighboring pixels. If the heterogeneity is less than the threshold, the pixels are merged into the region, and the heterogeneity between the neighborhoods of the newly merged region continues to be calculated. When the heterogeneity between objects exceeds the threshold, merging stops [34]. The segmented objects effectively use the spectral and spatial structural information of the image, maximizing the homogeneity within each object while minimizing the average heterogeneity between objects; furthermore, the segmentation is fast and widely applicable. This study used the multi-scale segmentation algorithm in eCognition to extract individual tree canopies. Experiments at multiple segmentation scales are required to accurately delineate the crown edges. The candidate scale parameters are typically identified with the Estimation of Scale Parameter (ESP2) plug-in for eCognition, after which the segmentation results need to be evaluated qualitatively. The ESP tool calculates the local variance (LV) of all image objects and uses the rate of change of LV (Roc-LV) to evaluate candidate segmentation scales: peaks in the curve indicate the optimal segmentation scales for given ground objects. Figure 13 depicts the LV line chart. The segmentation scales corresponding to the peaks in the chart were tested, and the optimal segmentation scale was subsequently determined.
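The scale-selection logic can be illustrated with a short sketch that computes Roc-LV from an exported LV curve and flags its local peaks as candidate scales; the LV values are assumed to come from the ESP2 plug-in, and the peak test here is a simple three-point comparison rather than the tool's own implementation.

```python
# Hedged sketch of reading candidate scales from an ESP-style LV curve:
# Roc-LV_l = (LV_l - LV_{l-1}) / LV_{l-1} * 100, and local peaks in Roc-LV
# flag segmentation scales worth testing.
import numpy as np

def roc_lv(scales, lv):
    """Return the Roc-LV curve and the scales at its local peaks."""
    scales, lv = np.asarray(scales, float), np.asarray(lv, float)
    roc = np.full_like(lv, np.nan)
    roc[1:] = (lv[1:] - lv[:-1]) / lv[:-1] * 100.0
    peaks = [scales[i] for i in range(2, len(roc) - 1)
             if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
    return roc, peaks
```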
Figure 13. LV line chart determined using the eCognition ESP plug-in.
2. The tree vertices obtained by the initial detection of over-segmented trees are filtered.
In the tree-vertex identification step of the single-scale segmentation, for broadleaf trees with large and irregularly shaped canopies, elevation maxima at different locations within a single canopy are identified as the vertices of separate trees, resulting in the over-segmentation of such trees (Figure 14). As depicted in Figure 15 and Figure 16, the local elevation maxima inside a single canopy are treated as candidate tree apexes and are then screened, since the true apex of an individual tree generally rises above the rest of its canopy as the tree grows. Therefore, if the maximum height point lies close to the edge of the canopy, it is disregarded, and the second highest point is chosen as the individual tree vertex.
Figure 14. Over-segmented single tree.
Figure 15. Vertex elevation information.
Figure 16. True vertex of over-segmented single tree.
3. The correction of erroneously detected vertices and the addition of vertices to missed individual trees.
When the recognition result for a tree top point remains but its position has obviously deviated from the crown (Figure 17), the point is deleted and a point at the center of the crown is added as the single-tree vertex (Figure 18). Moreover, in single-scale segmentation, individual trees that are shorter or have smaller canopies than most trees may be missed during detection (Figure 19). In this case, a point is likewise added at the center of the canopy as the single-tree vertex (Figure 20).
Figure 17. Individual tree with an erroneously detected vertex.
Figure 18. Modified vertex position.
Figure 19. Missed detection of an individual tree.
Figure 20. Addition of vertices to missed individual trees.
4. The addition of elevation attributes to the added vertices.
There is a certain error between the position of a tree top point added on the basis of the canopy range in the spectral image and the actual tree top position, and these added points contain no elevation information. Therefore, the LiDAR data must be combined to locate the real tree vertices. The edge of the single-tree crown is used as a constraint when manually adding the tree top point, since the position it carries falls within the ground projection of the crown (Figure 21). Using the front, side, and top views of the point cloud, the tree to which each point belongs can be identified visually, and the point is placed at the top of the crown, completing the precise positioning of the single-tree vertex; the red line in Figure 22 represents the change in vertex position after the elevation information is added. A total of 50 treetops were added (Figure 23), with an average height of 23.3 m.
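A hedged sketch of this step is given below: for each manually added 2D vertex, the highest LiDAR return whose horizontal position falls inside the corresponding crown polygon is taken as the tree top. The shapely polygon and the function name are assumptions for illustration; in the paper the assignment was confirmed visually from the front, side, and top views.

```python
# Hedged sketch of adding elevation to a vertex added from the imagery:
# take the highest LiDAR return inside the delineated crown polygon.
import numpy as np
from shapely.geometry import Point, Polygon

def snap_vertex_to_crown(points_xyz, crown_polygon):
    """Return the highest point (x, y, z) inside the crown polygon, or None."""
    inside = np.array([crown_polygon.contains(Point(x, y)) for x, y, _ in points_xyz])
    if not inside.any():
        return None
    crown_points = points_xyz[inside]
    return crown_points[crown_points[:, 2].argmax()]   # highest return = tree top
```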

2.4.3. Single-Tree Extraction Based on Treetops

After the initial vertex detection and the precise positioning of the single-tree vertices, the final single-tree extraction was performed using the seed-point-based single-tree segmentation method. The basic principle of this method is similar to that of the initial single-tree vertex detection: it is a region growing algorithm that classifies the point cloud based on the relative distances between points. Given the seed points, that is, the single-tree vertices, the points belonging to each tree are determined, and the crown detection of each single tree is thereby completed.
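As a rough illustration of this seed-based extraction, the sketch below simply labels every normalized point with the nearest refined vertex in the horizontal plane; the actual method grows regions using relative point-to-point distances, so nearest-seed assignment is only a coarse stand-in.

```python
# Simplified sketch of seed-based extraction: assign each point to the nearest
# refined tree vertex (seed) in the horizontal plane.
import numpy as np
from scipy.spatial import cKDTree

def assign_to_seeds(points_xyz, seeds_xyz):
    """Label each point with the index of the nearest seed (tree vertex)."""
    _, labels = cKDTree(seeds_xyz[:, :2]).query(points_xyz[:, :2], k=1)
    return labels
```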

2.5. Experimental Environment and Parameter Settings

The CPU used for this experiment was an AMD Ryzen 7 4800H with Radeon Graphics, the GPU was an NVIDIA GeForce RTX 3060 Laptop GPU with 6 GB of memory, the motherboard was a LENOVO board, and the system had 16 GB of memory. LiDAR360 V5.2.2.0 (Green Valley, Beijing, China) and eCognition V10.2 (Definiens Imaging, Munich, Germany) were used to visualize and analyze the data.

3. Results and Analysis

3.1. Accuracy Evaluation

In order to evaluate the accuracy of the individual tree segmentation, the trees manually labeled on the high-resolution images were used as the true values against which the individual trees obtained by the LiDAR segmentation were compared. We randomly selected 10 equal-area rectangular plots within the study area and used four indicators (R, detection rate; r, recall rate; p, precision rate; and F, harmonic value; Formulas (2)–(5), respectively) to evaluate the individual tree segmentation. R reflects the proportion of trees detected in the plot; the closer to 1, the higher the detection rate. r reflects the proportion of correctly segmented individual trees relative to the actual individual trees in the plot; the closer to 1, the better the segmentation effect. p reflects the proportion of correct segmentations among the trees segmented by the algorithm; the closer to 1, the higher the segmentation accuracy. F comprehensively evaluates the quality of the segmentation; the closer to 1, the better the overall effect [35].
R = n/N,
r = TP/(TP + FN),
p = TP/(TP + FP),
F = 2rp/(r + p),
where N is the actual number of individual trees in the plot; n is the number of individual trees segmented by the algorithm; TP is the number of individual trees correctly segmented; FN is the number of individual trees missed; and FP is the number of individual trees falsely detected.
In order to evaluate the point cloud integrity of the individual tree segmentation effect, we used three indicators based on the number of individual tree point clouds, namely, the correct segmentation point cloud (NTP), under-segmentation point cloud (NFN), and over-segmentation point cloud (NFP) ratios (Formulas (6)–(8), respectively).
NTP = (n1/n) × 100%,
NFN = [(n − n2)/n] × 100%,
NFP = [(n1 − n2)/n] × 100%,
where n is the number of individual tree point clouds obtained by manual interpretation; n1 is the number of single-tree point clouds obtained by the algorithm; and n2 is the number of correctly classified single-tree point clouds.
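For reference, Formulas (2)–(8) can be computed directly as below; the function names are illustrative, and the inputs follow the definitions given above.

```python
# Hedged sketch of the accuracy indicators of Formulas (2)-(8). TP, FN, and FP
# are tree counts; n, n1, and n2 in the point-based ratios follow the
# definitions given for Formulas (6)-(8).
def tree_metrics(N, n, TP, FN, FP):
    R = n / N                      # detection rate, Formula (2)
    r = TP / (TP + FN)             # recall rate, Formula (3)
    p = TP / (TP + FP)             # precision rate, Formula (4)
    F = 2 * r * p / (r + p)        # harmonic value, Formula (5)
    return R, r, p, F

def point_metrics(n, n1, n2):
    NTP = n1 / n * 100             # correctly segmented point ratio, Formula (6)
    NFN = (n - n2) / n * 100       # under-segmentation ratio, Formula (7)
    NFP = (n1 - n2) / n * 100      # over-segmentation ratio, Formula (8)
    return NTP, NFN, NFP
```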

3.2. Segmentation Results

We tested the segmentation scales corresponding to the peaks in the LV line chart when applying the high-resolution image data to extract the single-tree crown edges. Figure 24 compares the segmentation results at different scales, revealing the optimal segmentation effect at a scale of 241. The segmentation scale of 216 is too small, while the scales of 278 and 292 are too large. Therefore, we chose 241 as the optimal segmentation scale.
We then used the segmentation scale of 241 to test combinations of the shape factor and compactness parameters. The shape factor describes the degree of fragmentation of the segmentation vectors, and the compactness parameter describes the irregularity of the segmentation vector boundaries; both were varied over the range 0.1–0.9 with an interval of 0.1, holding one parameter fixed while adjusting the other. The optimal parameter combination was determined by visual evaluation of the contrast and segmentation performance, and four typical parameter combinations were selected for comparison (Figure 25).
Following the experimental comparison tests, the segmentation results were optimal at a scale of 241, a shape factor of 0.1, and a compactness parameter of 0.4 (Figure 26).
The segmentation was then completed using the segmented single-tree crown edges to constrain the positions of the single-tree top points. In this study, a total of 588 individual trees were segmented from the coniferous and broad-leaved mixed forest in the plot; Figure 27 depicts the overall segmentation results. Based on the individual tree segmentation method used, the treetop detection results can be used to calculate the geographical location and height of individual trees, and the delineated canopy contours can be used to estimate parameters such as the individual tree crown width. Table 2 reports the extracted individual tree locations, tree heights, and crown widths. For the accuracy evaluation, 10 rectangular plots of equal area were randomly selected in the high-resolution image within the study area. In order to verify whether the proposed single-tree extraction method combining LiDAR data and spectral images achieved the expected accuracy improvements, the results obtained by the single-scale segmentation method were selected for comparison over the same 10 verification plots. Table 3 reports the accuracy evaluation results.
The total number of actual individual trees in the sample areas selected for accuracy evaluation was 173, while the numbers of individual trees extracted by the original single-scale segmentation algorithm and by the proposed individual tree segmentation were 203 and 153, respectively (Table 3). The recognition rate of the original single-scale single-tree segmentation method exceeds 1. Considering that the missed detection of short individual trees during segmentation reduces the overall recognition rate of individual trees in the study area, it can be inferred that over-segmentation by the original single-scale method in the coniferous and broad-leaved mixed forest is extensive; this results in the total number of segmented individual trees exceeding the actual number. In this study, the over-segmentation generated by the single-scale method is significantly reduced because spectral images are combined to screen the positions and elevations of the initially recognized individual tree vertices (Figure 27). In addition, some individual trees are missed by the single-scale segmentation because of their low height; the proposed method combines LiDAR data with high-resolution images to identify such trees. As shown in Figure 28, the missed-detection phenomenon caused by the original single-scale segmentation method is also significantly improved, with great improvements in the individual tree recognition rate R, accuracy rate p, and harmonic value F. The original single-scale segmentation method identified 678 individual trees in the study area, with an average height of 28.6 m, while the segmentation method used in this study identified 589 individual trees, with an average height of 28.1 m. These results reveal that, as the proposed method resolves the over-segmentation of individual trees in the original method, the total number of identified individual trees decreases. At the same time, the proposed method identified numerous short individual trees (Figure 29) that were not originally identified, which lowers the average height of the identified individual trees.

4. Discussion

In this study, a single-tree segmentation algorithm that combines LiDAR and spectral data was used to segment a mixed broad-leaved and coniferous forest plot with high canopy density, with the purpose of improving the accuracy of single-tree segmentation in coniferous and broad-leaved mixed forests. In the single-tree segmentation of the forest plot, the crown shape, tree shape, and tree point cloud distribution were fully considered. Firstly, the iterative threshold segmentation algorithm was used to identify the single-tree vertices. Multi-scale segmentation of the high-resolution image was then used to obtain the crown edges, which were adopted to constrain the positions of the detected treetop points in the horizontal direction, and the three-dimensional spatial information of the LiDAR data was used to add elevation information to the treetop points. The real treetop points were thereby obtained and the single-tree segmentation was completed. Compared with the traditional single-tree segmentation method based on distance discriminant clustering, the proposed algorithm is more adaptable to forest stands with different leaf types and achieves a higher segmentation accuracy. The differences in accuracy between the two methods can be explained as follows.
In order to maximize the number of treetop points identified, the traditional segmentation method based on distance discriminant clustering must set the segmentation scale as small as possible. In this case, broad-leaved trees are mistakenly divided into multiple single trees with small crowns because of their large crown area and irregular shape, which leads to errors in treetop extraction. Shorter trees may also be blocked by taller trees nearby: if the segmentation scale is not small enough, such small trees are treated as branches of the nearby tall trees when the treetop points are identified, causing their vertices to be missed. In this study, in addition to the LiDAR data, high-resolution imagery was used to detect individual tree crowns, and the crown detection results from the images were used to constrain the positions of the treetop points identified in the LiDAR data. Compared with the traditional point cloud segmentation method based on distance discrimination, the single-tree segmentation method combining LiDAR and high-resolution images proposed in this study effectively addresses the problems that the traditional method is poorly adaptable to coniferous and broad-leaved mixed forests, that its segmentation threshold is difficult to determine, and that its segmentation accuracy is low.
The proposed method solves two problems associated with the traditional point cloud segmentation method, thus improving the accuracy of individual tree segmentation by 8%, and it avoids the limitations of the single segmentation scale used by the traditional point cloud segmentation method in coniferous and broad-leaved mixed forests. However, the proposed method involves a large amount of work, and the precise positioning of the treetop points is time-consuming; hence, it is not suitable for large study areas, although it works well for extracting single-tree parameters, plot volume, and carbon sinks in smaller plots. Moreover, the proposed method requires accurate, high-resolution remote sensing data. Therefore, future work will focus on designing an algorithm with a higher detection accuracy, introducing the local and multi-threshold methods of 2D image segmentation into 3D point cloud single-tree segmentation. In view of the large differences in the crown radii of single trees in coniferous and broad-leaved mixed forests, as well as the varying distribution of leaf types and tree species across different regions of a plot, the point cloud segmentation thresholds will be varied by plot region for single-tree segmentation, thereby improving the accuracy of single-tree segmentation.

5. Conclusions

Effective and accurate individual tree segmentation is important to comprehensively understand individual tree information and realize the efficient management of forest resources. In this study, LiDAR and high-resolution image data were collected using a helicopter platform, and a new method for individual tree segmentation based on LiDAR and high-resolution image data was developed. The implementation of this method includes two steps: (1) the detection and identification of tree vertices from the LiDAR data, and the adoption of the iterative threshold method to segment the tree crowns from the LiDAR data; and (2) the matching of the detected individual tree vertices with the canopy contour in the high-resolution image to modify the individual tree vertices identified in step 1, and the completion of the individual tree segmentation. The evaluation index of the segmentation accuracy reveals that the proposed method works well in coniferous and broad-leaved mixed forests with a high canopy density, with improvements in single-tree segmentation and the parameter extraction accuracy on the single-tree scale. This provides a new method for forest point cloud single-tree segmentation, facilitating data support for ‘precision forestry’ and ‘digital forestry’.

Author Contributions

Conceptualization, X.C. and R.W.; methodology, X.C.; software, X.C.; validation, X.C., X.W., W.S. and X.L.; writing—original draft preparation, X.C.; writing—review and editing, X.C.; visualization, X.C.; supervision, X.C. and X.Z.; project administration, R.W.; funding acquisition, R.W. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China, ‘Biomass Precision Estimation Model Research for Large-Scale Regions Based on Multi-View Heterogeneous Stereographic Image Pairs of Forests’ (41971376).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lechner, A.M.; Foody, G.M.; Boyd, D.S. Applications in Remote Sensing to Forest Ecology and Management. One Earth 2020, 2, 405–412.
2. Pretzsch, H.; Biber, P.; Schütze, G.; Bielak, K. Changes of forest stand dynamics in Europe. Facts from long-term observational plots and their relevance for forest ecology and management. For. Ecol. Manag. 2014, 316, 65–77.
3. Zhu, L. Application of airborne LiDAR data in dynamic monitoring of forest resources. East China Forest Management 2018, 32, 75–78.
4. Meissner, B.; Wyss, D.; Hoffmann, H.; Teusan, S. Application of Remote Sensing and GIS for Sustainable Forest Management and Capacity Building in Mongolia. In Proceedings of the 27th Asian Conference on Remote Sensing (ACRS2006), Ulaanbaatar, Mongolia, 9–13 October 2006; Volume 6.
5. Luis, F.; Daniel, B.J.; Harald, B.; Van Oijen, M.; Carlos, G.; Koen, K.; Marcus, L.; Thomas, R.; Peter, S.J. Models for supporting forest management in a changing environment. For. Syst. 2011, 19, 8–29.
6. Siry, J.P.; Cubbage, F.W.; Ahmed, M.R. Sustainable forest management: Global trends and opportunities. For. Policy Econ. 2005, 7, 551–561.
7. Dai, W.; Yang, B.; Dong, Z.; Shaker, A. A new method for 3D individual tree extraction using multispectral airborne LiDAR point clouds. ISPRS J. Photogramm. 2018, 144, 400–411.
8. Wolf, B.M.; Wu, J.; Yu, X.; Solberg, S.; Pitkänen, J.; Popescu, S.; Hirschmugl, M.; Morsdorf, F.; Næsset, E.; Heipke, C.; et al. An International Comparison of Individual Tree Detection and Extraction Using Airborne Laser Scanning. Remote Sens. 2012, 4, 950–974.
9. Wen, F.; Bisheng, Y.; Zhen, D.; Fuxun, L.; Jianhua, X.; Fashuai, L. Confidence-guided roadside individual tree extraction for ecological benefit estimation. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102368.
10. Zhao, Y.; Hao, Y.; Zhen, Z.; Quan, Y. A Region-Based Hierarchical Cross-Section Analysis for Individual Tree Crown Delineation Using ALS Data. Remote Sens. 2017, 9, 1084.
11. Lindberg, E.; Holmgren, J. Individual Tree Crown Methods for 3D Data from Remote Sensing. Curr. For. Rep. 2017, 3, 19–31.
12. Zhen, Z.; Quackenbush, L.J.; Stehman, S.V.; Zhang, L. Agent-based region growing for individual tree crown delineation from airborne laser scanning (ALS) data. Int. J. Remote Sens. 2015, 36, 1965–1993.
13. Dian, Y.; Pang, Y.; Dong, Y.; Li, Z. Urban Tree Species Mapping Using Airborne LiDAR and Hyperspectral Data. J. Indian Soc. Remote. 2016, 44, 595–603.
14. Ottoy, S.; Tziolas, N.; Van Meerbeek, K.; Aravidis, I.; Tilkin, S.; Sismanis, M.; Stavrakoudis, D.; Gitas, I.Z.; Zalidis, G.; De Vocht, A. Effects of Flight and Smoothing Parameters on the Detection of Taxus and Olive Trees with UAV-Borne Imagery. Drones 2022, 6, 197.
15. Rowell, E.; Seielstad, C.; Vierling, L.; Queen, L.; Shepperd, W. Using laser altimetry-based segmentation to refine automated tree identification in managed forests of the Black Hills, South Dakota. Photogramm. Eng. Remote Sens. 2006, 72, 1379–1388.
16. Persson, A.; Holmgren, J. Identifying species of individual trees using airborne laser scanner. Remote Sens. Environ. 2004, 90, 415–423.
17. Ullah, S.; Adler, P.; Dees, M.; Datta, P.; Weinacker, H.; Koch, B. Comparing image-based point clouds and airborne laser scanning data for estimating forest heights. iFor.-Biogeosci. For. 2017, 10, 273.
18. Thomas, J.L.H.B. Automated Delineation of Individual Tree Crowns from Lidar Data by Multi-Scale Analysis and Segmentation. Photogramm. Eng. Remote Sens. 2012, 78, 1275–1284.
19. Jakubowski, M.; Li, W.; Guo, Q.; Kelly, M. Delineating Individual Trees from Lidar Data: A Comparison of Vector- and Raster-based Segmentation Approaches. Remote Sens. 2013, 5, 4163–4186.
20. Lang, M.W.; Kim, V.; McCarty, G.W.; Li, X.; Yeo, I.; Huang, C.; Du, L. Improved Detection of Inundation below the Forest Canopy using Normalized LiDAR Intensity Data. Remote Sens. 2020, 12, 707.
21. Yeung, W.; Shaker, A. Radiometric normalization of overlapping LiDAR intensity data for reduction of striping noise. Int. J. Digit. Earth 2016, 9, 649–661.
22. Korpela, I.; Ørka, H.O.; Maltamo, M.; Tokola, T.; Hyyppä, J. Tree Species Classification Using Airborne LiDAR—Effects of Stand and Tree Parameters, Downsizing of Training Set, Intensity Normalization, and Sensor Type. Silva Fenn. 2010, 44, 319–339.
23. Maggi, L.W.G.Q. A New Method for Segmenting Individual Trees from the Lidar Point Cloud. Photogramm. Eng. Remote Sens. 2012, 78, 75–84.
24. Sačkov, I.; Kardoš, M. Forest delineation based on LiDAR data and vertical accuracy of the terrain model in forest and non-forest area. Ann. For. Res. 2014, 57, 119–136.
25. Guo, M.; Wang, S.J.; Na, Y.U. Ecological Land Fragmentation Evaluation and Dynamic Changes in Changbai Mountain Areas. Resour. Dev. Mark. 2020, 36, 14–22.
26. Yang, T.; Wang, J.; Peng, W. Biological Characteristics and Culture Conditions of Hericium coralloides (Scop.) Pers. Med. Plant 2020, 11, 32–40.
27. Ya, L.V.; Wan, C. A Denoising Method by Layering for Terrain Point Cloud from 3D Laser Scanner. J. Geomat. Sci. Technol. 2014, 31, 501–504.
28. Zhang, C.; Zhou, Y.; Qiu, F. Individual Tree Segmentation from LiDAR Point Clouds for Urban Forest Inventory. Remote Sens. 2015, 7, 7892–7913.
29. Zhao, X.; Guo, Q.; Su, Y.; Xue, B. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas. ISPRS J. Photogramm. 2016, 117, 79–91.
30. Budei, B.C.; St-Onge, B.; Hopkinson, C.; Audet, F. Identifying the genus or species of individual trees using a three-wavelength airborne lidar system. Remote Sens. Environ. 2018, 204, 632–647.
31. Ma, Z.-Q.; Sun, S.Y.; Wang, C.P.; Cheng, Y.Z. Research on projectile and target’s image edge detection method based on the iterative threshold. Inf. Technol. 2011, 35, 26–28+51.
32. Parkan, M.; Tuia, D. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering. Remote Sens. 2018, 10, 335.
33. Jing, L.; Hu, B.; Noland, T.; Li, J. An individual tree crown delineation method based on multi-scale segmentation of imagery. ISPRS J. Photogramm. 2012, 70, 88–98.
34. Carr, J.C.; Slyder, J.B. Individual tree segmentation from a leaf-off photogrammetric point cloud. Int. J. Remote Sens. 2018, 39, 5195–5210.
35. Liu, H.L.; Zhang, X.L.; Zhang, Y. Review on individual tree detection based on airborne LiDAR. Laser Optoelectron. Prog. 2018, 55, 40–48.
Figure 1. Location of study area.
Figure 2. LiDAR image of area used for point cloud segmentation.
Figure 3. Comparison of lateral view (a) before point cloud denoising and (b) after point cloud denoising.
Figure 4. Diagram of the progressive TIN densification filtering method.
Figure 5. Progressive TIN densification filter applied to the research area. (a) Filtering effect in areas with large terrain undulations; (b) filtering effect in areas with small terrain undulations.
Figure 6. Separated ground points.
Figure 7. DEM obtained by interpolation.
Figure 8. Comparison of LiDAR data before and after normalization. (a) Lateral view before normalization; (b) lateral view after normalization.
Figure 9. Schematic representation of the key steps of the proposed segmentation method.
Figure 10. Principles of point cloud segmentation.
Figure 11. Mean canopy diameter for individual tree segmentation.
Figure 12. Initial identification results of individual tree vertices. (a) Plot 1; (b) Plot 2.
Figure 21. Addition of partial vertices.
Figure 22. Addition of elevation values to the added vertices.
Figure 23. Position graph of added vertices.
Figure 24. Comparison of segmentation results at different scales of (a) 216; (b) 241; (c) 278; and (d) 292.
Figure 25. Comparison of parameter combinations. (a) Shape factor 0.1, compactness parameter 0.4; (b) shape factor 0.4, compactness parameter 0.7; (c) shape factor 0.5, compactness parameter 0.6; and (d) shape factor 0.8, compactness parameter 0.1.
Figure 26. Optimal segmentation results.
Figure 27. Single-tree point cloud segmentation results. (a) Denoised original point cloud; (b) normalized point cloud; (c) single-tree crown segmentation results.
Figure 28. Results of broad-leaved tree segmentation. (a) Over-segmented broad-leaved trees in the single-scale segmentation method; (b) over-segmented broad-leaved trees and surrounding trees; (c) segmented broad-leaved trees of the proposed method; (d) segmented broad-leaved trees and surrounding trees.
Figure 29. Segmented short individual trees.
Table 1. Key parameters of the Galaxy Prime LiDAR sensor used in this study.

Parameter | Value
Flight height (m) | 500
Laser wavelength (nm) | 1064
Scanning angle (°) | 10–60
Pulse frequency (kHz) | 50–1000
Average point cloud density (pts/m²) | 160
Table 2. Individual tree parameters obtained using partial segmentation (WGS84 coordinate system).

X | Y | Tree Height (m) | Crown Diameter (m)
14,255,049.3 | 5,188,213.13 | 35.017 | 19.953
14,255,236.79 | 5,188,237.13 | 35.075 | 15.869
14,255,239.44 | 5,188,138.66 | 28.221 | 1.024
14,255,112.37 | 5,188,103.63 | 32.629 | 22.862
14,255,242.78 | 5,188,196.27 | 22.133 | 8.663
14,255,098.43 | 5,188,165.75 | 33.994 | 17.102
14,255,191.89 | 5,188,244.33 | 21.851 | 7.559
14,255,220.81 | 5,188,015.92 | 30.479 | 6.557
14,255,086.09 | 5,188,212.01 | 32.751 | 5.544
14,255,213.55 | 5,188,092.66 | 25.488 | 3.031
14,255,171.51 | 5,188,089.13 | 32.874 | 13.061
14,255,179.9 | 5,188,002.92 | 33.618 | 16.057
14,255,215.98 | 5,188,097.34 | 26.159 | 4.362
14,255,034.48 | 5,188,156.91 | 32.439 | 12.413
Table 3. Comparison of the accuracy of the individual tree segmentation method based on the initial distance discriminant clustering and the improved segmentation method in this study.

Method | N | n | R | r | p | F | NTP | NFN | NFP
Initial method | 173 | 203 | 1.17 | 0.91 | 0.83 | 0.87 | 1.17 | 0.17 | 0.34
Improved method | 173 | 158 | 0.91 | 0.95 | 0.96 | 0.95 | 0.91 | 0.13 | 0.05
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
