Article

Using Relative Projection Density for Classification of Terrestrial Laser Scanning Data with Unknown Angular Resolution

1 School of Smart City, Chongqing Jiaotong University, Chongqing 400074, China
2 Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources, Shenzhen 518000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 6043; https://doi.org/10.3390/rs14236043
Submission received: 1 November 2022 / Revised: 26 November 2022 / Accepted: 27 November 2022 / Published: 29 November 2022

Abstract

Point cloud classification is a key step for three-dimensional (3D) scene analysis in terrestrial laser scanning but is commonly affected by density variation. Many density-adaptive methods are used to weaken the impact of density variation, and angular resolution, which denotes the angle between two horizontally or vertically adjacent laser beams, is commonly used as a known parameter in those methods. However, the case of unknown angular resolution is difficult to avoid, which limits the generality of such methods. Focusing on these problems, we propose a density-adaptive feature extraction method that considers the case when the angular resolution is unknown. Firstly, we present a method for angular resolution estimation called neighborhood analysis of randomly picked points (NARP). In NARP, n points are randomly picked from the original data and the k nearest points of each picked point are searched to form its neighborhood. The angles between the beams of each picked point and its corresponding neighboring points are used to construct a histogram, and the angular resolution is calculated by finding the adjacent beams of each picked point under this histogram. Then, a grid feature called relative projection density is proposed to weaken the effect of density variation based on the estimated angular resolution. Finally, a 12-dimensional feature vector is constructed by combining the relative projection density with other commonly used geometric features, and the semantic label is generated using a Random Forest classifier. Five datasets with a known angular resolution are used to validate the NARP method, and an urban scene with a scanning distance of up to 1 km is used to compare the relative projection density with the traditional projection density. The results demonstrate that our method achieves an estimation error of less than 0.001° in most cases and is stable with respect to different types of targets and parameter settings. Compared with the traditional projection density, the proposed relative projection density can improve the performance of classification, particularly for small-size objects such as cars, poles, and scanning artifacts.

1. Introduction

The increasingly extensive application of 3D laser scanning has led to the development of research in many fields, such as forest investigation [1,2,3], autonomous driving [4], building modeling and detection [5,6], cultural relic protection [7], etc. However, the original scanning data only provides spatial information and does not contain semantic information that can be directly understood and applied in such fields. Therefore, it is necessary to assign a semantic label to each point before further applications, which is called point cloud classification.
Feature extraction, as a vital step of classification, aims to find discriminative features describing 3D distribution from massive points [8], which directly affects the accuracy of subsequent point cloud processing. Commonly used point cloud features include RGB colors [9], intensity features [10,11,12,13], geometric features [9,14,15,16,17], etc. The intensity and RGB information are not always available, which limits the generality of methods using those features. Moreover, RGB color and intensity information are not necessarily conducive to improving classification accuracy [18].
As 3D coordinates are fundamental information in all LiDAR (Light Detection and Ranging) systems, we only use geometric features for classification in this study; these can be extracted directly from the 3D coordinates and depend on the local structure and point density [10,19,20,21]. The three major types of laser scanning data (Airborne Laser Scanning, Mobile Laser Scanning, and Terrestrial Laser Scanning) present different point distributions. For instance, Airborne Laser Scanning (ALS) is used to collect large-scale 3D information with an almost uniform point density. As an important complement to ALS, Mobile Laser Scanning (MLS) and Terrestrial Laser Scanning (TLS) can capture dense and detailed 3D information from the side view. In MLS, the scanning targets are mainly distributed on the two sides of the route, with similar distances to the scanning center, while the distribution of the targets is more complex in TLS. Thus, the density variation in TLS is commonly larger than that of MLS, in spite of the similar scanning pattern [22]. In this research, we mainly focus on the effect of density variation on the representation of geometric features in TLS data. Previous studies have addressed the impact of density variation, one of the factors that affect geometric features, mainly through the feature extraction strategy and the parameter setting.
Local point density can be directly used as a feature to reflect the point distribution characteristics by counting the number of points in a given geometric primitive, such as a sphere or a cylinder [11,16,21,23,24,25]. Pirotti et al. delimited the working area by calculating the point density and defining the lowest acceptable point density [11]. Previous research has established that specific objects can be detected by density variation. For instance, leaf points are separated by utilizing the remarkable density difference between leaf and wood points [26,27]. On this basis, Tan et al. further analyzed the factors influencing the density variation and calibrated the distance effect on density [2]. Che et al. proposed a method to filter TLS data with a scanline density analysis by simulating reference point density based on distance and angular resolution, which is the angle between two horizontally or vertically adjacent laser beams [28]. Ibrahim et al. extracted ground points by counting the number of points in cylinder neighborhoods and set the density threshold according to the average point density and the number of drive lines [29]. In addition to directly using point density as a feature description, projecting 3D points onto the XOY plane is another common strategy [14,19,30,31]. Li et al. projected the points onto a rectangular grid in the XOY plane, counted the number of points in each sub-grid, and took it as the projection density; points with a projection density above a predefined threshold are labeled as building façade [32,33,34]. In some studies, projection density is also used to extract poles in an urban scene [35] and individual trees in a forest scene [31,36].
Density variation also affects the parameter setting in the post-processing, e.g., grid size [37], slice thickness [38], voxel size [39], density threshold [40], and neighborhood setting [19]. Setting a unified threshold may be unsuitable when dealing with data of large density variation. In order to solve the problems brought by threshold setting, Liu et al. generated an adaptive threshold for each grid based on scanning distance and angular resolution to detect dynamic targets at different distances [41]. Cheng et al. calculated the threshold of projection density based on the distance from the farthest building to the scanner, the height of the lowest building, and angular resolution [42]. However, this approach requires a detailed understanding of the entire scene. To solve this problem, Chen et al. calculated the threshold of projection density based on the polar grid and vertical angular resolution [22].
In addition to the threshold setting, many studies have weakened the effect of density variation on feature extraction by changing the neighborhood setting, further improving feature saliency. Under a neighborhood of the same scale, density variation results in fewer neighboring points farther away from the scanner, which affects the discrimination of local features. In order to overcome the effect of density variation on the fixed-scale neighborhood, Dimitrov et al. determined a representative radius for various surfaces, mitigating the impact of density variation [43]. Another idea is to evaluate a series of neighborhood settings through local features, such as eigenentropy [19], curvature [44], and surface variation [45], or to directly use multi-scale neighborhoods for feature extraction [46,47]. However, the selection of the search range still depends on prior knowledge. To solve this problem, Chen et al. constructed a density-adaptive search range for each point, based on the theoretical point spacing estimated from the distance and angular resolution [30,48].
Although the above methods take into account the effect of density variation on feature extraction, which can improve the accuracy of classification and object recognition, there are still some problems to be solved.
(1)
Angular resolution is an important parameter in many density-adaptive methods, indicating how close two adjacent points are at a given distance. However, most current methods assume that it is a known parameter and use it directly in the calculation [28,30,41,42,48]. Although the angular resolution can be obtained directly from the scanner settings, cases with unknown angular resolution are unavoidable, limiting the generality of the current methods.
(2)
Among the object extraction methods, many density-adaptive methods are designed for specific objects. The target objects are distinguished from other objects in the scene by defining a set of rules that take the geometric features into account, such as buildings [32,34,49,50], vehicles [29], or trees [36]. There is still a lack of feature design considering density variation in multi-class classification.
Focusing on the above limitations, we provide a solution when the angular resolution is unknown in density-adaptive processing and propose an alternative density-adaptive method for feature extraction considering density variation in multi-class classification. The following objectives are accomplished:
(I)
Focusing on the case when angular resolution is unknown, we present a stable method to estimate the angular resolution of TLS data, called the neighborhood analysis of randomly picked points (NARP), and
(II)
Based on the estimated angular resolution, we propose a grid-based feature called relative projection density, to adapt to the density variation in TLS data. In contrast to the commonly used projection density in previous studies [19,30,31,32,33,34,49], the relative projection density can weaken the effect of density variation and strengthen the relationship between projection density and object geometry.

2. Methodology

The proposed method mainly includes four steps: ground filtering, angular resolution estimation, feature extraction, and classification, as shown in Figure 1. Firstly, we use Cloth Simulation Filtering (CSF) [51] to extract ground points. Then, the angular resolution is estimated by NARP for non-ground points. In the step of feature extraction, some commonly used multi-dimensional geometric features [14,19,20,30,52] are extracted for each non-ground point, with traditional projection density replaced by relative projection density. Finally, a Random Forest (RF) classifier is used to label each point and evaluate feature importance.

2.1. Estimation of Angular Resolution with NARP

During scanning, laser beams are emitted from the scanning center with a fixed interval called the angular resolution. The angle intervals between horizontally and vertically adjacent beams are the horizontal and vertical angular resolution, respectively. Note that in this research "horizontal" refers to the XOY plane of the scanner rather than the absolutely horizontal plane obtained by leveling equipment, and "vertical" refers to the direction perpendicular to the XOY plane. Focusing on the case when the angular resolution is unknown, we propose a method to estimate it, called the neighborhood analysis of randomly picked points (NARP). The basic idea is to analyze the angles between the beams of each picked point and its neighboring points and then find the adjacent beams of the picked point, as shown in Figure 2.
We first separate ground points from the original data with the CSF algorithm [51], as the parameters of this method are easy to tune. The estimation and classification are then executed on the non-ground points. Our method randomly selects n points from the non-ground points and finds the K nearest points of each of them with a KNN (K nearest neighbors) search:
$$N(p_i) = \{ p_{ij},\ j = 1, 2, \ldots, K \}, \quad i = 1, 2, \ldots, n \tag{1}$$
where pi represents a randomly selected point and pij represents the points in the neighborhood of pi, labeled as N(pi).
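A minimal sketch of this neighborhood construction (Equation (1)), assuming the non-ground points are stored as an (N, 3) NumPy array; the function and parameter names (pick_neighborhoods, n_picked) are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def pick_neighborhoods(points, n_picked=500, k=30, seed=0):
    """Randomly pick n_picked points and return, for each of them, the
    indices of its k nearest neighbors (Equation (1))."""
    rng = np.random.default_rng(seed)
    picked_idx = rng.choice(len(points), size=n_picked, replace=False)

    tree = cKDTree(points)
    # query k + 1 neighbors because the nearest neighbor of a point is itself
    _, neighbor_idx = tree.query(points[picked_idx], k=k + 1)
    return picked_idx, neighbor_idx[:, 1:]   # drop the point itself
```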
In TLS data, a point is commonly represented by rectangular coordinates. To calculate the angle interval between beams of different points, we transform rectangular coordinates to polar coordinates (see Figure 3):
$$\rho = \sqrt{x^2 + y^2 + z^2}, \quad \alpha = \arccos(z/\rho), \quad \theta = \begin{cases} \arccos\!\left(x/\sqrt{x^2 + y^2}\right), & y \ge 0 \\ 2\pi - \arccos\!\left(x/\sqrt{x^2 + y^2}\right), & y < 0 \end{cases} \tag{2}$$
where (x,y,z) and (ρ,θ,α) are the rectangular and polar coordinates of one point, respectively. As shown in Figure 3, θ is the azimuth angle with a range of [0°, 360°] and α is the zenith angle with a range of [0°, 180°].
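A NumPy sketch of the conversion in Equation (2); np.arctan2 is used for the azimuth, which is numerically equivalent to the case-wise arccos form, and the function name is illustrative.

```python
import numpy as np

def to_polar(points):
    """Convert (N, 3) rectangular coordinates to polar coordinates
    (rho, theta, alpha): theta is the azimuth angle in [0, 2*pi) and
    alpha the zenith angle in [0, pi], with the scanner at the origin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2 + z**2)
    theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)    # azimuth angle
    alpha = np.arccos(np.clip(z / rho, -1.0, 1.0))   # zenith angle
    return rho, theta, alpha
```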
Based on the polar coordinates, the azimuth and zenith angle intervals between the beams of each pi and its neighboring points pij can be calculated (see Figure 4), labeled as Δθij and Δαij, as shown below:
$$\Delta\theta_{ij} = \left| \theta_{ij} - \theta_i \right|, \quad \Delta\alpha_{ij} = \left| \alpha_{ij} - \alpha_i \right|, \quad i = 1, 2, \ldots, n,\ j = 1, 2, \ldots, K \tag{3}$$
where θi and αi are the azimuth and zenith angles of pi, and θij and αij are the azimuth and zenith angles of pij. The interval value Δθij can represent the horizontal angular resolution when pi and pij are located on two horizontally adjacent beams. Similarly, Δαij can represent the vertical angular resolution when the beams of pi and pij are vertically adjacent. Thus, the key to our method is to recognize the points in N(pi) that belong to the horizontally or vertically adjacent beams of pi.
Based on Equation (3), we can obtain n × K values of Δθij and of Δαij. Then, a histogram with a bin width of Δ is constructed for the Δθij values and for the Δαij values. Figure 5 shows an example of histogram construction on a plane without occlusion. The Δθij and Δαij calculated from points on the central scanning line (line a and line 1) are 0°. Theoretically, this means that points located in the first interval ([0°, 0.005°] in Figure 5) correspond to beams on the central line. For the other scanning lines, there are more neighboring points located on the horizontally or vertically adjacent line (lines b and 2 in Figure 5) than on the others. Thus, except for the first interval, the interval with the maximum point number commonly corresponds to the horizontally or vertically adjacent line of the central point. We calculate the mean Δθij and Δαij values in the selected interval as the horizontal and vertical angular resolution, θar(Δ) and αar(Δ).
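The histogram step for a single bin width Δ can be sketched as follows, assuming the angle intervals of Equation (3) have already been computed; the helper name estimate_from_histogram is illustrative.

```python
import numpy as np

def estimate_from_histogram(intervals, delta):
    """Estimate the angular resolution from a set of angle intervals
    (the delta-theta or delta-alpha values of Equation (3)) using a
    histogram with bin width `delta`.

    The first bin collects points on the same scan line (interval ~ 0);
    among the remaining bins, the most populated one is assumed to
    correspond to the adjacent scan line, and the mean interval inside
    that bin is returned as the angular resolution."""
    intervals = np.asarray(intervals)
    n_bins = int(np.ceil(intervals.max() / delta)) + 1
    edges = np.arange(n_bins + 1) * delta
    counts, _ = np.histogram(intervals, bins=edges)

    best_bin = 1 + np.argmax(counts[1:])          # skip the first bin
    in_bin = (intervals >= edges[best_bin]) & (intervals < edges[best_bin + 1])
    return intervals[in_bin].mean()
```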
It may be hard to obtain accurate estimation results with a single run of histogram construction, because finding a suitable Δ usually requires tuning the parameter many times. To solve this problem, a series of Δ values is generated for histogram construction. For each pi, the eight closest points in N(pi) are picked, so that 8 × n azimuth and zenith angle intervals are obtained. Then, the medians of those Δθij and Δαij values, labeled as Δθm and Δαm, are used to generate the range of Δ values, as shown below:
$$\Delta\theta_i = 0.25\,\Delta\theta_m + (i - 1) \times 0.05\,\Delta\theta_m, \quad \Delta\alpha_i = 0.25\,\Delta\alpha_m + (i - 1) \times 0.05\,\Delta\alpha_m, \quad i = 1, 2, \ldots, 11 \tag{4}$$
Then, a series of horizontal and vertical angular resolutions can be obtained with histograms constructed from different Δθi and Δαi, of which the median values are used as the final estimation result, as shown below:
$$\theta_{ar} = \operatorname{median}\left(\theta_{ar}(\Delta\theta_i)\right), \quad \alpha_{ar} = \operatorname{median}\left(\alpha_{ar}(\Delta\alpha_i)\right), \quad i = 1, 2, \ldots, 11 \tag{5}$$
where θar and αar are the estimated horizontal and vertical angular resolution, and θar(Δθi) and αar(Δαi) are the estimated results from the histograms constructed with intervals Δθi and Δαi.
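Combining Equations (4) and (5), the full estimation for one direction (horizontal or vertical) can be sketched as below, reusing the estimate_from_histogram helper from the previous sketch; the eight-neighbor intervals used to seed the bin widths are passed in as a separate array.

```python
import numpy as np

def narp_estimate(intervals, eight_nearest_intervals):
    """Estimate one angular resolution (horizontal or vertical) from the
    full set of angle intervals and from the intervals of the eight
    closest neighbors of each picked point (used to seed the bin widths)."""
    median_interval = np.median(eight_nearest_intervals)

    # Equation (4): eleven bin widths from 0.25 to 0.75 of the median.
    deltas = [0.25 * median_interval + i * 0.05 * median_interval
              for i in range(11)]

    # Equation (5): the median of the per-histogram estimates.
    estimates = [estimate_from_histogram(intervals, d) for d in deltas]
    return float(np.median(estimates))
```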

2.2. Using Relative Projection Density in Classification

2.2.1. Neighborhood Selection

In some research, fixed parameters are set to select neighboring points. However, such methods are only suitable for datasets with uniform point density, and the selection of scale parameters relies on experience and heuristic knowledge. Since suitable neighborhood parameters vary between datasets, fixed-scale neighborhoods show many insufficiencies [52,53]. To overcome the insufficiency of fixed-scale neighborhoods, we use the method in [19] to select a suitable neighborhood for each point.
Based on the principal component analysis method, a 3D covariance matrix S can be constructed for a given point P0 = [x0 y0 z0]T by involving its k closest neighboring points Pi = [xi yi zi]T (i = 1, 2, …, k):
$$S = \frac{1}{k+1} \sum_{i=0}^{k} \left( P_i - \bar{P} \right)\left( P_i - \bar{P} \right)^T \tag{6}$$
where $\bar{P}$ represents the geometric center of the closest neighbors:
$$\bar{P} = \frac{1}{k+1} \sum_{i=0}^{k} P_i \tag{7}$$
The matrix S can be decomposed to obtain the eigenvalues λ1, λ2, and λ3 (where λ1 > λ2 > λ3). Then, the normalized eigenvalues e1, e2, and e3 can be obtained by dividing each eigenvalue by the sum of the three eigenvalues and are used to calculate the three-dimensionality features:
$$L_\lambda = \frac{e_1 - e_2}{e_1} \tag{8}$$
$$P_\lambda = \frac{e_2 - e_3}{e_1} \tag{9}$$
$$S_\lambda = \frac{e_3}{e_1} \tag{10}$$
which are usually considered as the probabilities of a point being labeled as linear (1D), planar (2D), or scattered (3D). Then, the unpredictability of a neighborhood can be measured by the eigenentropy:
$$E_\lambda = -e_1 \ln(e_1) - e_2 \ln(e_2) - e_3 \ln(e_3) \tag{11}$$
Among the different scale parameters, the k value that yields the minimum eigenentropy is chosen to generate the optimal neighborhood. In this research, we start from k = 10 and increase the neighborhood parameter k successively with a step size of Δk = 10 up to an upper bound of k = 100.
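A sketch of this eigenentropy-based selection for a single point, following Equations (6)–(11); the scan of k from 10 to 100 in steps of 10 matches the setting described above, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def optimal_neighborhood(points, tree, idx, k_values=range(10, 101, 10)):
    """Select the neighborhood size k of points[idx] that minimizes the
    eigenentropy (Equation (11)); returns k and the normalized eigenvalues."""
    best = None
    for k in k_values:
        _, nn = tree.query(points[idx], k=k + 1)   # k neighbors + the point itself
        neigh = points[nn]
        cov = np.cov(neigh.T, bias=True)           # covariance matrix, Equation (6)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        e = np.clip(eigvals / eigvals.sum(), 1e-12, None)  # normalized eigenvalues
        entropy = -np.sum(e * np.log(e))           # eigenentropy, Equation (11)
        if best is None or entropy < best[0]:
            best = (entropy, k, e)
    return best[1], best[2]

# Example usage with assumed data:
# tree = cKDTree(points)
# k_opt, (e1, e2, e3) = optimal_neighborhood(points, tree, 0)
```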

2.2.2. Extraction of Commonly Used Features

As the intensity and RGB information are not always available for TLS data, we only use geometric features in this paper. A series of features that are commonly used in previous studies, such as [19,20,52], are extracted under the neighborhood generated in the previous step and divided into two sets, as shown in Table 1.
The first set includes a variety of 3D features, which are directly derived by describing the local geometric structure of the neighborhood. The three-dimensionality features and the eigenentropy from Equations (8)–(11) are directly used as geometric features. Then, the normalized eigenvalues e1, e2, and e3 are further used to generate the Shannon entropy, omnivariance, anisotropy, and curvature variation in Table 1. The verticality describes the relationship between the fitted plane and the XOY plane:
$$V = 1 - \left| n_z \right| \tag{12}$$
where nz is the vertical component of the normal vector, i.e., of the eigenvector corresponding to the minimum eigenvalue λ3 in the neighborhood of the given point.
The second set consists of three grid features, generated by projecting the original points onto a rectangular grid in the XOY plane. The grid features are then derived by analyzing all the points in each sub-grid, e.g., the projection density, the maximum height difference, and the standard deviation of all height values [14,19,21,30]. In this study, we directly use the height difference and height standard deviation defined in previous work and improve the traditional projection density to adapt to the density variation.
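A sketch of how the three grid features can be computed per point by binning the projected points into a rectangular XOY grid; the grid width and variable names are illustrative.

```python
import numpy as np

def grid_features(points, grid_width=1.0):
    """Per-point grid features: number of points in the cell (projection
    density), maximum height difference, and standard deviation of heights."""
    xy = points[:, :2]
    z = points[:, 2]
    # integer cell index for every point
    cell = np.floor((xy - xy.min(axis=0)) / grid_width).astype(np.int64)
    # map each (row, col) pair to a unique cell id
    _, cell_id = np.unique(cell, axis=0, return_inverse=True)
    cell_id = cell_id.reshape(-1)   # guard against NumPy versions returning 2-D inverse

    n_cells = cell_id.max() + 1
    count = np.bincount(cell_id, minlength=n_cells)
    z_min = np.full(n_cells, np.inf)
    z_max = np.full(n_cells, -np.inf)
    np.minimum.at(z_min, cell_id, z)
    np.maximum.at(z_max, cell_id, z)
    z_sum = np.bincount(cell_id, weights=z, minlength=n_cells)
    z_sq = np.bincount(cell_id, weights=z**2, minlength=n_cells)
    z_mean = z_sum / count
    z_std = np.sqrt(np.maximum(z_sq / count - z_mean**2, 0.0))

    # map cell statistics back to the points
    return count[cell_id], (z_max - z_min)[cell_id], z_std[cell_id]
```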

2.2.3. Relative Projection Density

As an important grid feature, projection density is typically defined as the number of points falling into each sub-grid, which is mainly affected by scanning geometry, object geometry, and scanning resolution. Scanning geometry is a function of distance and incidence angle, and object geometry describes the shape of the scanned object [28]. Scanning resolution can be measured by the angle interval between adjacent laser beams emitted from the scanning center, which can also be represented by angular resolution. As object geometry and angular resolution are unchangeable in a given dataset, scanning geometry is the main factor affecting the density variation. Among the two variables in scanning geometry, we mainly focus on the effect brought by distance, as the target with a high incidence angle is commonly not the main scanning target.
The number of laser beams passing through each sub-grid decreases rapidly with increasing scanning distance. Thus, the traditional projection density Ngrid will also decrease rapidly and cannot reflect the object geometry of different classes properly. Therefore, we weaken the effect of scanning geometry on the projection density by calculating a reference projection density Nref for each sub-grid. Following this line of thought, we propose a grid feature called relative projection density, defined as the ratio between Ngrid and Nref:
$$N_{rd} = \frac{N_{grid}}{N_{ref}} \tag{13}$$
Nref represents the theoretical number of laser beams passing through one sub-grid on the XOY plane, which can be calculated from the horizontal angular resolution and the angle interval of the sub-grid:
$$N_{ref} = \frac{\theta}{\theta_{ar}} \tag{14}$$
where θar represents the horizontal angular resolution and θ represents the angle interval of the sub-grid, as shown in Figure 6. The angle interval can be expressed as:
$$\theta = \max(\theta_i) - \min(\theta_i), \quad i = 1, 2, 3, 4 \tag{15}$$
where θi represents the azimuth angle of the line from the origin to each of the four sub-grid vertices.
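A sketch of the relative projection density for a single sub-grid, given the horizontal angular resolution estimated in Section 2.1; the wrap-around of the azimuth at 0°/360° is ignored here for brevity, and the argument names are illustrative.

```python
import numpy as np

def relative_projection_density(n_grid, cell_x0, cell_y0, grid_width, theta_ar):
    """Relative projection density of one sub-grid (Equations (13)-(15)).

    n_grid     -- number of points projected into the sub-grid
    cell_x0/y0 -- coordinates of the lower-left corner of the sub-grid
    grid_width -- side length of the sub-grid
    theta_ar   -- estimated horizontal angular resolution (radians)"""
    # azimuth angles of the lines from the origin (scanner) to the four vertices
    vx = np.array([cell_x0, cell_x0 + grid_width, cell_x0, cell_x0 + grid_width])
    vy = np.array([cell_y0, cell_y0, cell_y0 + grid_width, cell_y0 + grid_width])
    theta_v = np.mod(np.arctan2(vy, vx), 2.0 * np.pi)

    theta = theta_v.max() - theta_v.min()      # angle interval, Equation (15)
    n_ref = theta / theta_ar                   # reference density, Equation (14)
    return n_grid / n_ref                      # relative density, Equation (13)
```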
We illustrate the relative projection density with an example in Figure 7. A scanline is selected which includes close vegetation, a car, and a distant building, as shown in Figure 7a. For this scanline, we analyze the variation of three types of projection density with distance, including the traditional projection density (Ngrid), the reference projection density (Nref in Equation (14)), and the relative projection density (Nrd in Equation (13)), as shown in Figure 7b–d. The building is commonly considered to have a larger projection density than other objects; however, due to the effect of scanning distance, the traditional projection density of small-size objects at close range may be larger than that of a building at a distance. As shown in Figure 7a,b, although the surface area of the building is much larger than that of the vegetation, the traditional projection density Ngrid of the vegetation at 4 m and 43 m is much larger than that of the building at 300 m from the scanner. This phenomenon demonstrates that the traditional projection density Ngrid is not effective enough to distinguish these objects. In contrast, the relative projection density (Nrd) remains valid even though the building is far away from the scanner. In this manner, vegetation, the car, and the building can be distinguished by the relative projection density, as shown in Figure 7d, and its variation is consistent with the object geometry, correctly reflecting the ground truth. After feature extraction, the relative projection density is combined with the other commonly used geometric features to construct a 12-dimensional feature vector, and the semantic label is generated using a Random Forest classifier.

3. Experimental Results and Analysis

3.1. Effectiveness of the NARP Method

3.1.1. Dataset

The proposed method is tested on five datasets collected with different scanners and angular resolutions, as shown in Figure 8. The first dataset is captured in a forest scene, in which tall trees are the main targets, while the others are captured in urban environments with man-made objects mixed with vegetation. The detailed information is shown in Table 2, including the scanner, the point number, and the setting of the angular resolution.

3.1.2. Comparison with the Point Spacing-Based Method

Our method is compared with the point spacing-based method [48] on the five datasets, as shown in Table 3 and Table 4. The results of our method are averaged over 100 runs for each dataset, as each run may vary because of the strategy of randomly selecting points. The parameters N and K in this test are set to 500 and 30, respectively. For the spacing-based method, we manually select 50 groups of neighboring point pairs on a flat surface (e.g., façade and trunk) for the estimation of the horizontal and vertical angular resolutions, respectively. It can be seen that our method has significantly better performance, with an estimation error of less than 0.001° in most cases, while the point spacing-based method has many limitations and unstable estimation results. For example, the estimation error of the horizontal angular resolution in Data 5 is obviously worse than the others, as surfaces perpendicular to the laser beams, which the point spacing-based method requires, can hardly be found. Our method also shows better stability in terms of standard deviation, even when the horizontal and vertical angular resolutions are not of the same order of magnitude (see Data 5).
Our method is also tested on different types of targets, including buildings, crowns, poles, shrubs, the ground, cars, and pedestrians, as shown in Figure 9. The results show that the overall performance of our method is stable across different types of targets with various local point distributions, as shown in Table 5. Using targets consisting of a flat surface (see the results of buildings and cars) does not obviously increase the accuracy, and the incident angle variation does not affect the result either (see the results of building 1 and ground 1). For the common objects in the forest scene, the estimation error of crown points increases slightly compared with the error of the whole scene, while the stem points correspond to a much lower error. It should be noted that the error of the horizontal angular resolution is significantly larger than the others when it comes to pedestrians. The probable reason is that the movement of pedestrians makes the point spacing in the horizontal direction much larger than the theoretical value. Thus, it can be inferred that our method is not suitable for moving objects. However, as such targets usually account for a very small proportion of all points, the effectiveness of the proposed method is not affected in most cases.
To demonstrate the efficiency of our method, the execution time on the five datasets (see Table 2) is shown in Table 6. The parameters are the same as in the above test, with N and K set to 500 and 30, respectively. For each dataset, the running time is averaged over 50 runs on a computer with an Intel Core i7-9750H CPU @ 2.60 GHz. It can be seen that the execution time increases slightly faster than linearly with the point number, which is consistent with the fact that most of the time is spent searching for neighboring points in a kd-tree structure with a complexity of O(n log n). Considering that our method is stable for different object types, as shown in the above test, an effective way to reduce the execution time is to use only part of the original points.
There are two parameters in our method: the number of randomly selected points N and the number of neighboring points K. The influence of the parameter setting on the estimation result is tested, as shown in Figure 10. Figure 10a,b show the relationship between the estimation error and N, with K fixed to 30. It can be seen that our method is stable with respect to the parameter N in general and the error is less than 0.001° under most parameter settings, which is similar to the conclusion in Table 3. Additionally, increasing N may improve the estimation result slightly in some datasets. Figure 10c,d show the relationship between the estimation error and K, with N fixed to 500. As some error values are significantly higher than others, the y-axis is illustrated in logarithmic form. The results show that our method is also stable when K is no more than 100, but for some datasets the conclusion differs when K is larger than 100, because estimation errors may occur. The probable reason is that the point numbers of different scanning lines may be very similar when K is set to a large value. We examine the histogram construction when K is 100 and 1000, as shown in Figure 11. In Figure 11a, the point number of the adjacent laser beam is obviously larger than that of the other beams, and the estimation error is small, with a mean error of less than 0.002°. When K is set to 1000, the point numbers of the four nearest laser beams are quite similar and the adjacent laser beam does not correspond to the highest point number (Figure 11b), resulting in a wrong estimation.

3.2. Effectiveness of Relative Projection Density

3.2.1. Dataset

As the scanning distance of the datasets used in the aforementioned tests is relatively limited (less than 200 m), we tested the proposed feature (relative projection density) on a dataset with a much larger scanning distance and density variation. The dataset utilized in this section was acquired by a Leica P50 terrestrial laser scanner on Nanbin Road, Nan'an District, Chongqing, China, as shown in Figure 12. It contains more than 5 million points corresponding to an urban scene with a scanning resolution of 6 mm@10 m and includes a variety of natural and man-made objects. The scene extends approximately 1 km, so the point density varies greatly, with the point spacing of non-ground points ranging roughly from 0.04 cm to 0.98 m. The main classes are buildings, vegetation, cars, poles, and scanning artifacts, and 5000 points are randomly selected for each category as the training set.

3.2.2. Evaluation Metrics

We use four indicators to evaluate the classification result, i.e., OA (Overall Accuracy), Recall, Precision, and F1-Score. OA expresses the proportion of correctly classified points in the overall dataset and is a measurement across all categories. The other three evaluation metrics describe the classification results within each category. Recall measures the proportion of correctly identified points in a category relative to all points of that category, and Precision measures the proportion of points predicted as the current category that actually belong to it. F1-Score is a balanced index of Recall and Precision. The four indicators can be expressed by the following equations:
$$\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN} \tag{16}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{17}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{18}$$
$$\mathrm{F1\text{-}Score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} \tag{19}$$
where TP (True Positive) represents the number of points that are correctly identified as the current category, TN (True Negative) represents the number of points that are correctly identified as other categories, FN (False Negative) indicates the number of points in the current category which are incorrectly predicted as the other categories, and FP (False Positive) indicates the number of points in other categories which are incorrectly predicted as the current category.
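A sketch of the four metrics computed directly from predicted and reference labels; OA is computed over all points, while Recall, Precision, and F1-Score are computed per class as in Equations (16)–(19). The function name is illustrative.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred, labels):
    """Overall Accuracy plus per-class Recall, Precision, and F1-Score."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    oa = np.mean(y_true == y_pred)          # Overall Accuracy over all points

    per_class = {}
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        f1 = (2 * recall * precision / (recall + precision)
              if recall + precision else 0.0)
        per_class[c] = (recall, precision, f1)
    return oa, per_class
```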

3.2.3. Comparison with Traditional Projection Density

To verify the effectiveness of the feature extraction method in this paper, the relative projection density feature is compared with the traditional projection density used in [14,19,32,33,34,49]. The two types of features are each combined with the commonly used geometric features of Section 2.2.2, and an in-depth analysis is provided based on the classification results. Both features are based on a rectangular grid, so the main parameter to be set in the feature extraction is the grid width l. In this section, we test the impact of different grid widths l ∈ [0.5, 8] m with an interval of 0.5 m, resulting in 16 parameter settings. A Random Forest (RF) classifier is used to compare the two feature combinations, and the results under each l are averaged over 10 runs, as the classification results may vary slightly from run to run. Thus, 160 runs are performed for each feature combination, and the comparison results are shown in Figure 13 and Figure 14.
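A sketch of the comparison protocol described above; build_features is a hypothetical callable returning the 12-dimensional feature matrix for a given grid width and density variant, and the Random Forest settings (100 trees) are an assumption, as they are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def compare_density_features(build_features, train_pts, y_train, test_pts, y_test):
    """Average the overall accuracy over 16 grid widths (0.5-8.0 m) and
    10 runs per width, for the traditional and the relative projection
    density variants of the feature vector."""
    results = {}
    for use_relative in (False, True):
        accuracies = []
        for grid_width in np.arange(0.5, 8.01, 0.5):
            X_train = build_features(train_pts, grid_width, use_relative)
            X_test = build_features(test_pts, grid_width, use_relative)
            for run in range(10):
                rf = RandomForestClassifier(n_estimators=100, random_state=run)
                rf.fit(X_train, y_train)
                accuracies.append(np.mean(rf.predict(X_test) == y_test))
        results["relative" if use_relative else "traditional"] = float(np.mean(accuracies))
    return results
```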
Figure 13 shows the quantitative comparison of the two methods, and each point of the bar chart represents the average of 10 runs at each grid width. Our approach and the method using the traditional projection density achieve overall accuracies of 92.50% and 92.17% on this dataset, respectively. The corresponding F1-Score, Recall, and Precision values are provided in Figure 13b–d. The mean F1-Score, Recall, and Precision of our method are 85.96%, 89.81%, and 84.22%, respectively, while those of the method using the traditional projection density are 84.73%, 89.20%, and 82.87%. From the above results, it can be seen that the relative projection density improves the classification result in terms of all four indicators.
At the class level, an improvement in performance can be observed especially for small-size objects, such as cars, poles, and scanning artifacts. Using the relative projection density instead of the traditional projection density increases the F1-Score of cars, poles, and scanning artifacts by 2%, 1.34%, and 2.43%, respectively. Visual inspection of the derived results also demonstrates the effectiveness of the relative projection density for poles, scanning artifacts, and cars (Figure 15a–c) under density variation. Our approach shows better performance on the OA and on the F1-Score and Recall of poles, which are 0.27%, 1.34%, and 2.66% higher than those of the traditional method, respectively. We can also observe in Figure 15a that some vegetation points and building points are misclassified as poles. The probable reason for this error is that the object geometry of vegetation (trunk points) and building façades is similar to some parts of poles. Better discrimination is also achieved for scanning artifacts: the F1-Score and Precision obtained with the relative projection density are 2.43% and 3.43% higher than those of the method using the traditional projection density feature, and the local visualization result is shown in Figure 15b. The F1-Score and Precision of scanning artifacts correspond to the worst results for both features. This may be caused by objects moving during the scanning, whose geometry under different postures may not be learned sufficiently from the training samples. As shown in Figure 15c, most car points are misidentified as vegetation by the method using the traditional projection density, while our method has fewer error points. It can be seen in Figure 15d,e that there is also an improvement in the recognition of vegetation and buildings, although it is not obviously reflected in the quantitative results in Figure 13. The possible reason is that vegetation and buildings have much larger point numbers than the other classes, so the detailed improvements are not reflected in the evaluation metrics as obviously as for small-size objects. In summary, compared with the method using the traditional projection density, our method achieves better performance on small-size objects and similar performance on buildings and vegetation in terms of both qualitative and quantitative analysis. This means that considering density variation in feature extraction is helpful for improving classification results.

4. Discussion

The proposed method aims to improve the generality of density-adaptive processing and weaken the impact of the density variation caused by scanning distance in feature extraction. Our framework consists of two components, each of which is processed independently of the other. We briefly summarize the advantages of our method below.
In previous studies, angular resolution was often used as a known parameter to balance the density variation [22,28,30,41,42,48]. Although the angular resolution can be obtained directly from the scanner settings, dealing with data without a known angular resolution is unavoidable in some cases because (a) TLS benchmark datasets usually do not contain the angular resolution, such as Semantic3D [18] and WHU-TLS [54]; (b) some scanners do not directly provide an entry for setting the angular resolution, e.g., Leica P-series scanners control the scanning resolution by setting the point spacing at a specific distance, and although the angular resolution may be obtained by inspecting the original data through the software provided by the scanner developer, this takes extra time and makes density-adaptive methods inconvenient to use; and (c) scanning parameters are generally stored in the original data exported directly from the scanner, while the point cloud formats used for data sharing (e.g., ASC, LAS, PTS, etc.) usually do not contain the angular resolution, so this information is lost during data sharing and transmission. Compared with previous studies, our method provides a viable solution for the above conditions when the angular resolution is unknown. In addition, it should be mentioned that our method requires retrieving the neighboring information of some selected points, which is commonly not included in the original point cloud and may be time-consuming to compute. However, because constructing a neighborhood is a necessary step in frequently used point cloud processing tasks, such as feature extraction, segmentation, and classification, our method does not occupy additional computing resources from the perspective of the whole pipeline. Thus, our method is easy to embed in most methods in which detailed, local point analysis is utilized.
Previous studies have dealt with the effect of density variation from many aspects, such as threshold setting [30,32,48,49], radius selection [30,40,41,55], and feature design [26,27,28,56,57]. We address this problem from the view of grid-based feature extraction. Our tests have shown that the proposed relative projection density is effective and that considering density variation in grid-based feature extraction is beneficial for object recognition, which is similar to the conclusion of many previous studies, such as [28,30,48]. Furthermore, our conclusion is supported by the evaluation of feature importance, as shown in Figure 16, which indicates that the importance of the projection density feature increases after density variation is considered and that the proposed feature is among the three most important features. Moreover, noisy labeling can be observed in the classification results in Figure 15. To avoid unsmooth classification results, more features, such as contextual features, could subsequently be introduced on the basis of the existing features [58], and a label smoothing method [59,60] could be used to optimize the overall results. As our focus is to weaken the effect of density variation on classification results, the smoothing of noise points is not included in this research.

5. Conclusions

The main focuses of our work are the generality of density-adaptive methods using angular resolution, which is the angle between two horizontally or vertically adjacent laser beams, and the effect of density variation on the projection density in feature extraction. To deal with the case when the angular resolution is unknown, we propose an estimation method of angular resolution for TLS data. The tests show that the error of our method is less than 0.001° in most cases. Additionally, our method is stable in terms of object type (building, crown, pole, car, and ground), incident angle, and parameter setting. Although moving targets cause an estimation error, the point number of such objects is commonly small and does not affect the estimation result.
Focusing on the problem in the extraction of projection density brought by density variation, a relative projection density is proposed in the feature extraction step. Compared with the traditional projection density, our approach improves the Overall Accuracy, F1-Score, Recall, and Precision by 0.33%, 1.23%, 0.61%, and 1.35%, respectively. Our experimental results show that our method improves the classification accuracy of small-size objects, such as cars, poles, and scanning artifacts, while maintaining the classification accuracy of buildings and vegetation. Since there are still some noise points in the final classification results, future work could add contextual information or combine label smoothing methods to further optimize the results of 3D scene analysis. Furthermore, a suitable feature selection method would be desirable.

Author Contributions

Conceptualization, M.C.; methodology, M.C. and X.Z.; software, M.C. and X.Z.; validation, M.C., X.Z. and C.J.; writing—original draft preparation, M.C. and X.Z.; writing—review and editing, C.J., J.P. and F.M.; visualization, M.C. and X.Z.; funding acquisition, M.C., J.P. and F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key R&D Program of Ningxia Autonomous Region under Grant 2022CMG02014, in part by the National Natural Science Foundation of China under Grant 32060373 and 41801394, in part by the Open Fund of Key Laboratory of Urban Resources Monitoring and Simulation, Ministry of Natural Resources under Grant KF-2021-06-102, and in part by the Research and Innovation Program for Graduate Students in Chongqing under Grant CYS22436.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhu, X.; Skidmore, A.K.; Darvishzadeh, R.; Niemann, K.O.; Liu, J.; Shi, Y.F.; Wang, T.J. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 43–50.
2. Tan, K.; Ke, T.; Tao, P.; Liu, K.; Duan, Y.; Zhang, W.; Wu, S. Discriminating Forest Leaf and Wood Components in TLS Point Clouds at Single-Scan Level Using Derived Geometric Quantities. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17.
3. Pang, Y.; Wang, W.W.; Du, L.M.; Zhang, Z.J.; Liang, X.J.; Li, Y.N.; Wang, Z.Y. Nystrom-based spectral clustering using airborne LiDAR point cloud data for individual tree segmentation. Int. J. Digit. Earth 2021, 14, 1452–1476.
4. Rozsa, Z.; Szirany, T. Obstacle Prediction for Automated Guided Vehicles Based on Point Clouds Measured by a Tilted LIDAR Sensor. IEEE Trans. Intell. Transp. Syst. 2018, 19, 2708–2720.
5. Lafarge, F.; Mallet, C. Creating Large-Scale City Models from 3D-Point Clouds: A Robust Approach with Hybrid Representation. Int. J. Comput. Vis. 2012, 99, 69–85.
6. Xiao, Y.; Wang, C.; Li, J.; Zhang, W.M.; Xi, X.H.; Wang, C.L.; Dong, P.L. Building segmentation and modeling from airborne LiDAR data. Int. J. Digit. Earth 2015, 8, 694–709.
7. Pan, Y.; Dong, Y.Q.; Wang, D.L.; Chen, A.R.; Ye, Z. Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds. Remote Sens. 2019, 11, 1204.
8. Liu, X.; Lu, Y.; Wu, T.; Yuan, T. An Improved Local Descriptor based Object Recognition in Cluttered 3D Point Clouds. Int. J. Comput. Commun. Control. 2018, 13, 221–234.
9. Li, Z.Q.; Zhang, L.Q.; Tong, X.H.; Du, B.; Wang, Y.B.; Zhang, L.; Zhang, Z.X.; Liu, H.; Mei, J.; Xing, X.Y.; et al. A Three-Step Approach for TLS Point Cloud Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5412–5424.
10. Xu, Y.S.; Ye, Z.; Yao, W.; Huang, R.; Tong, X.H.; Hoegner, L.; Stilla, U. Classification of LiDAR Point Clouds Using Supervoxel-Based Detrended Feature and Perception-Weighted Graphical Model. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 72–88.
11. Pirotti, F.; Guarnieri, A.; Vettore, A. Ground filtering and vegetation mapping using multi-return terrestrial laser scanning. ISPRS J. Photogramm. Remote Sens. 2013, 76, 56–63.
12. Ghamisi, P.; Hofle, B. LiDAR Data Classification Using Extinction Profiles and a Composite Kernel Support Vector Machine. IEEE Geosci. Remote Sens. Lett. 2017, 14, 659–663.
13. Yang, B.S.; Dong, Z. A shape-based segmentation method for mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2013, 81, 19–30.
14. Weinmann, M.; Urban, S.; Hinz, S.; Jutzi, B.; Mallet, C. Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas. Comput. Graph. UK 2015, 49, 47–57.
15. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217.
16. Zhou, J.; Wei, H.; Zhou, G.; Song, L. Separating leaf and wood points in terrestrial laser scanning data using multiple optimal scales. Sensors 2019, 19, 1852.
17. Wang, D.; Brunner, J.; Ma, Z.; Lu, H.; Hollaus, M.; Pang, Y.; Pfeifer, N. Separating tree photosynthetic and non-photosynthetic components from point cloud data using dynamic segment merging. Forests 2018, 9, 252.
18. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A new large-scale point cloud classification benchmark. arXiv 2017, arXiv:1704.03847.
19. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304.
20. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality Based Scale Selection in 3D LiDAR Point Clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inform. Sci. 2012, XXXVIII-5/W12, 97–102.
21. Weinmann, M.; Jutzi, B.; Mallet, C. Feature relevance assessment for the semantic interpretation of 3D point cloud data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5, 313–318.
22. Chen, M.L.; Liu, X.J.; Zhang, X.Y.; Wang, M.W.; Zhao, L.D. Building Extraction from Terrestrial Laser Scanning Data with Density of Projected Points on Polar Grid and Adaptive Threshold. Remote Sens. 2021, 13, 4392.
23. Durrieu, S.; Allouis, T.; Fournier, R.; Véga, C.; Albrech, L. Spatial quantification of vegetation density from terrestrial laser scanner data for characterization of 3D forest structure at plot level. In Proceedings of the SilviLaser 2008, Edinburgh, UK, 17–19 September 2008; pp. 325–334.
24. Straatsma, M.; Warmink, J.J.; Middelkoop, H. Two novel methods for field measurements of hydrodynamic density of floodplain vegetation using terrestrial laser scanning and digital parallel photography. Int. J. Remote Sens. 2008, 29, 1595–1617.
25. Lari, Z.; Habib, A. Alternative methodologies for the estimation of local point density index: Moving towards adaptive LiDAR data processing. Int. Arch. Photogramm. Remote Sens. Spat. Inform. Sci. 2012, 39, 127–132.
26. Ferrara, R.; Virdis, S.G.; Ventura, A.; Ghisu, T.; Duce, P.; Pellizzaro, G. An automated approach for wood-leaf separation from terrestrial LIDAR point clouds using the density based clustering algorithm DBSCAN. Agric. For. Meteorol. 2018, 262, 434–444.
27. Tan, K.; Zhang, W.; Dong, Z.; Cheng, X.; Cheng, X. Leaf and wood separation for individual trees using the intensity and density data of terrestrial laser scanners. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7038–7050.
28. Che, E.; Olsen, M.J. Fast ground filtering for TLS data via Scanline Density Analysis. ISPRS J. Photogramm. Remote Sens. 2017, 129, 226–240.
29. Ibrahim, S.; Lichti, D. Curb-based street floor extraction from mobile terrestrial LiDAR point cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, B5.
30. Chen, M.; Pan, J.; Xu, J. Classification of Terrestrial Laser Scanning Data With Density-Adaptive Geometric Features. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1795–1799.
31. Sun, H.; Wang, G.; Lin, H.; Li, J.; Zhang, H.; Ju, H. Retrieval and accuracy assessment of tree and stand parameters for Chinese fir plantation using terrestrial laser scanning. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1993–1997.
32. Fan, H.; Yao, W.; Tang, L. Identifying man-made objects along urban road corridors from mobile LiDAR data. IEEE Geosci. Remote Sens. Lett. 2013, 11, 950–954.
33. Cheng, L.; Tong, L.; Wu, Y.; Chen, Y.; Li, M. Shiftable leading point method for high accuracy registration of airborne and terrestrial LiDAR data. Remote Sens. 2015, 7, 1915–1936.
34. Hammoudi, K.; Dornaika, F.; Paparoditis, N. Extracting building footprints from 3D point clouds using terrestrial laser scanning at street level. ISPRS/CMRT09 2009, 38, 65–70.
35. Zheng, H.; Wang, R.; Xu, S. Recognizing street lighting poles from mobile LiDAR data. IEEE Trans. Geosci. Remote Sens. 2016, 55, 407–420.
36. Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N. Automatic and self-adaptive stem reconstruction in landslide-affected forests. Remote Sens. 2016, 8, 974.
37. Zhang, X.; Chen, M.; Tan, C.; Ma, H. Classification of terrestrial laser scanning data based on multi-dimensional geometry features. In Proceedings of the International Conference on Smart Transportation and City Engineering 2021, Chongqing, China, 6–8 August 2021; pp. 1404–1410.
38. Yang, B.; Dai, W.; Dong, Z.; Liu, Y. Automatic forest mapping at individual tree levels from terrestrial laser scanning point clouds with a hierarchical minimum cut method. Remote Sens. 2016, 8, 372.
39. Zhao, Y.; Wu, B.; Wu, J.; Shu, S.; Liang, H.; Liu, M.; Badenko, V.; Fedotov, A.; Yao, S.; Yu, B. Mapping 3D visibility in an urban street environment from mobile LiDAR point clouds. GISci. Remote Sens. 2020, 57, 797–812.
40. Cheng, L.; Zhao, W.; Han, P.; Zhang, W.; Shan, J.; Liu, Y.; Li, M. Building region derivation from LiDAR data using a reversed iterative mathematic morphological algorithm. Opt. Commun. 2013, 286, 244–250.
41. Liu, K.; Wang, W.; Tharmarasa, R.; Wang, J. Dynamic vehicle detection with sparse point clouds based on PE-CPD. IEEE Trans. Intell. Transp. Syst. 2018, 20, 1964–1977.
42. Cheng, X.; Cheng, X.; Li, Q.; Ma, L. Automatic registration of terrestrial and airborne point clouds using building outline features. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 628–638.
43. Dimitrov, A.; Golparvar-Fard, M. Segmentation of building point cloud models including detailed architectural/structural features and MEP systems. Autom. Constr. 2015, 51, 32–45.
44. Mitra, N.J.; Nguyen, A. Estimating surface normals in noisy point cloud data. In Proceedings of the Nineteenth Annual Symposium on Computational Geometry, San Diego, CA, USA, 8–10 June 2003; pp. 322–328.
45. Pauly, M.; Keiser, R.; Gross, M. Multi-scale feature extraction on point-sampled surfaces. In Computer Graphics Forum; Blackwell Publishing, Inc.: Oxford, UK, 2003; pp. 281–289.
46. Atik, M.E.; Duran, Z.; Seker, D.Z. Machine learning-based supervised classification of point clouds using multiscale geometric features. ISPRS Int. J. Geo-Inf. 2021, 10, 187.
47. Brodu, N.; Lague, D. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology. ISPRS J. Photogramm. Remote Sens. 2012, 68, 121–134.
48. Chen, M.; Wan, Y.; Wang, M.; Xu, J. Automatic stem detection in terrestrial laser scanning data with distance-adaptive search radius. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2968–2979.
49. Li, B.; Li, Q.; Shi, W.; Wu, F. Feature extraction and modeling of urban building from vehicle-borne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 934–939.
50. Hernández, J.; Marcotegui, B. Point cloud segmentation towards urban ground modeling. In Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; pp. 1–5.
51. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016, 8, 501.
52. Günen, M.A. Adaptive neighborhood size and effective geometric features selection for 3D scattered point cloud classification. Appl. Soft Comput. 2022, 115, 108196.
53. Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184.
54. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
55. Lari, Z.; Habib, A. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2014, 93, 192–212.
56. Chu, H.-J.; Wang, C.-K.; Huang, M.-L.; Lee, C.-C.; Liu, C.-Y.; Lin, C.-C. Effect of point density and interpolation of LiDAR-derived high-resolution DEMs on landscape scarp identification. GISci. Remote Sens. 2014, 51, 731–747.
57. Yan, Y.; Yan, H.; Guo, J.; Dai, H. Classification and segmentation of mining area objects in large-scale sparse lidar point cloud using a novel rotated density network. ISPRS Int. J. Geo-Inf. 2020, 9, 182.
58. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
59. Landrieu, L.; Raguet, H.; Vallet, B.; Mallet, C.; Weinmann, M. A structured regularization framework for spatially smoothing semantic labelings of 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2017, 132, 102–118.
60. Schindler, K. An overview and comparison of smooth labeling methods for land-cover classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4534–4545.
Figure 1. Workflow of our method.
Figure 2. Workflow of NARP.
Figure 3. Relation between rectangular and polar coordinates.
Figure 4. Calculation of azimuth and zenith angle interval between one point and its neighboring points.
Figure 5. Example of the histogram of azimuth and zenith angle intervals.
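To make the geometry behind Figures 3–5 concrete, the following Python sketch shows one way the azimuth and zenith angle intervals between a picked point and its neighbors could be computed and binned into a histogram. It is a minimal illustration, not the authors' implementation: the function name angle_interval_histogram, the parameters n_picked, k, and bin_width_deg, and the use of SciPy's cKDTree for the neighborhood search are all assumptions.

```python
# Minimal sketch of the angle-interval histogram behind Figures 3-5, assuming
# the point cloud is expressed in the scanner's own coordinate frame. The
# function name and the parameters n_picked, k, and bin_width_deg are
# illustrative, not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

def angle_interval_histogram(xyz, n_picked=500, k=30, bin_width_deg=0.005):
    # Rectangular -> polar: azimuth in the XY plane, zenith from the +Z axis.
    r = np.linalg.norm(xyz, axis=1)
    azimuth = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 0]))
    zenith = np.degrees(np.arccos(np.clip(xyz[:, 2] / r, -1.0, 1.0)))

    # Randomly pick n points and search their k nearest neighbors in 3D.
    rng = np.random.default_rng(0)
    picked = rng.choice(len(xyz), size=min(n_picked, len(xyz)), replace=False)
    _, nbr_idx = cKDTree(xyz).query(xyz[picked], k=k + 1)  # neighbor 0 is the point itself

    # Azimuth / zenith intervals between each picked point and its neighbors.
    d_az = np.abs(azimuth[nbr_idx[:, 1:]] - azimuth[picked][:, None])
    d_az = np.minimum(d_az, 360.0 - d_az)                  # wrap around +/-180 degrees
    d_ze = np.abs(zenith[nbr_idx[:, 1:]] - zenith[picked][:, None])

    # Histograms of the intervals; the first pronounced non-zero peak
    # approximates the horizontal / vertical angular resolution.
    edges = np.arange(0.0, 1.0 + bin_width_deg, bin_width_deg)
    hist_az, _ = np.histogram(d_az.ravel(), bins=edges)
    hist_ze, _ = np.histogram(d_ze.ravel(), bins=edges)
    return hist_az, hist_ze, edges
```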
Figure 6. The angle interval of the sub-grid.
Figure 7. Comparison of three density definitions. (a) Scanline and surrounding objects, (b) Traditional projection density, (c) Reference projection density, (d) Relative projection density.
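As a rough illustration of the density definitions compared in Figure 7, the sketch below contrasts the traditional per-cell point count with a relative density normalized by a reference count. How the reference count is obtained here, from the cell size, the mean scanning distance of each cell, and an angular resolution h_res_deg, is an assumption made for illustration only; the function name projection_densities and all parameters are likewise hypothetical.

```python
# Illustrative contrast of traditional and relative projection density on a
# horizontal grid (cf. Figure 7 and the grid features of Table 1). The way the
# reference count is approximated here, from the cell size, the mean scanning
# distance of each cell, and an angular resolution h_res_deg, is an assumption
# for illustration only; function and parameter names are hypothetical.
import numpy as np

def projection_densities(xyz, cell_size=0.5, h_res_deg=0.03):
    # Assign every point to a cell of the horizontal (XY) grid.
    ij = np.floor(xyz[:, :2] / cell_size).astype(np.int64)
    cells, inv, counts = np.unique(ij, axis=0, return_inverse=True, return_counts=True)

    # Traditional projection density: raw point count per cell (proportional
    # to points per unit area for a fixed cell size).
    traditional = counts.astype(float)

    # Approximate reference count: adjacent beams are roughly
    # distance * angular_resolution (in radians) apart on the target, so a cell
    # of width cell_size is expected to receive about cell_size / spacing beams.
    dist = np.linalg.norm(xyz[:, :2], axis=1)
    mean_dist = np.bincount(inv.ravel(), weights=dist) / counts
    spacing = np.maximum(mean_dist * np.radians(h_res_deg), 1e-6)
    reference = np.maximum(cell_size / spacing, 1.0)

    # Relative projection density: observed count normalized by the reference.
    relative = traditional / reference
    return cells, traditional, relative
```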
Figure 8. Test data for estimation of angular resolution colored by height.
Figure 9. Different types of targets selected from the five datasets. (a) building 1 with a large range of incident angle variation, (b) building 2 nearly perpendicular to the laser beam, (c) crown 1 of a small arbor, (d) crown 2 of a tall arbor, (e) pole 1 (stem), (f) pole 2 (street lamp and traffic sign), (g) shrub 1, (h) shrub 2, (i) ground 1: man-made ground with a large incident angle, (j) ground 2: natural ground on a slope, (k) car, (l) pedestrian.
Figure 10. Estimation error with different parameter settings. (a) error of horizontal angular resolution with N, (b) error of vertical angular resolution with N, (c) error of horizontal angular resolution with K, (d) error of vertical angular resolution with K.
Figure 11. The histogram in terms of different K values. (a) K = 100; (b) K = 1000.
Figure 12. The dataset colored by distance.
Figure 13. Comparison of the metric values obtained by traditional projection density and relative projection density. The error bars indicate the standard deviation of 160 runs (10 runs for each parameter). (a) Overall Accuracy, (b) F1-Score, (c) Precision, (d) Recall.
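For reference, the four metrics plotted in Figure 13 can be computed with scikit-learn as sketched below. The macro averaging mode chosen for F1-score, precision, and recall is an assumption, since the averaging scheme is not restated in the caption; the helper names are illustrative.

```python
# Sketch of the evaluation metrics reported in Figure 13, using scikit-learn.
# Macro averaging is assumed for the per-class metrics.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def classification_metrics(y_true, y_pred):
    return {
        "overall_accuracy": accuracy_score(y_true, y_pred),
        "f1_score": f1_score(y_true, y_pred, average="macro"),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
    }

# Example: aggregate mean and standard deviation over repeated runs, as in the
# error bars of Figure 13 (trained_models and the test set are hypothetical).
# runs = [classification_metrics(y_test, clf.predict(X_test)) for clf in trained_models]
# oa = np.array([r["overall_accuracy"] for r in runs]); print(oa.mean(), oa.std())
```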
Figure 14. Classification results obtained by traditional projection density and relative projection density. (a) The traditional projection density, (b) the relative projection density, (c) the ground truth.
Figure 15. Comparison of the classification results obtained by traditional projection density and relative projection density. In each sub-figure, the left panel shows the result with traditional projection density and the right panel the result with relative projection density. (a) poles, (b) scanning artifacts, (c) cars, (d) vegetation, (e) buildings.
Figure 16. Feature importance obtained from the classification results of different feature combinations. (a) Relative projection density and commonly used features, (b) Traditional projection density and commonly used features.
Table 1. Geometric features.

| Set | Feature | Definition |
| --- | --- | --- |
| I | Covariance features | Linearity $L_\lambda$ |
| | | Planarity $P_\lambda$ |
| | | Scattering $S_\lambda$ |
| | | Shannon entropy $E_{dim} = -L_\lambda \ln(L_\lambda) - P_\lambda \ln(P_\lambda) - S_\lambda \ln(S_\lambda)$ |
| | | Eigenentropy $E_\lambda = -e_1 \ln(e_1) - e_2 \ln(e_2) - e_3 \ln(e_3)$ |
| | | Omnivariance $O_\lambda = \sqrt[3]{e_1 e_2 e_3}$ |
| | | Anisotropy $A_\lambda = (e_1 - e_3)/e_1$ |
| | | Curvature variation $C_\lambda = e_3/(e_1 + e_2 + e_3)$ |
| | Verticality feature | Verticality $V = 1 - \lvert n_z \rvert$ |
| II | Grid features | Relative projection density $N_{rd} = N_{grid}/N_{ref}$ |
| | | Maximum height difference $\Delta Z = Z_{max} - Z_{min}$ |
| | | Standard deviation of z values within the sub-grid $\sigma_z$ |
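The Set I features above are all derived from the eigenvalues $e_1 \geq e_2 \geq e_3$ of the local 3D covariance matrix. The sketch below computes them for a single neighborhood; the input nbr_xyz (an (m, 3) array of one point's neighbors) and the explicit formulas for linearity, planarity, and scattering are assumptions for illustration, since the table lists only their symbols, and the code is not the authors' implementation.

```python
# Minimal sketch of the Set I features in Table 1, computed from the
# eigenvalues e1 >= e2 >= e3 of a neighborhood's 3D covariance matrix. The
# input nbr_xyz and the explicit linearity/planarity/scattering formulas are
# assumptions for illustration.
import numpy as np

def covariance_features(nbr_xyz):
    cov = np.cov(nbr_xyz.T)
    evals, evecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    e3, e2, e1 = np.maximum(evals, 1e-12)              # relabel so that e1 >= e2 >= e3

    eps = 1e-12
    L = max((e1 - e2) / e1, eps)                       # linearity
    P = max((e2 - e3) / e1, eps)                       # planarity
    S = max(e3 / e1, eps)                              # scattering
    n_z = evecs[2, 0]                                  # z component of the normal, i.e. the
                                                       # eigenvector of the smallest eigenvalue
    return {
        "linearity": L,
        "planarity": P,
        "scattering": S,
        "shannon_entropy": -(L * np.log(L) + P * np.log(P) + S * np.log(S)),
        "eigenentropy": -(e1 * np.log(e1) + e2 * np.log(e2) + e3 * np.log(e3)),
        "omnivariance": (e1 * e2 * e3) ** (1.0 / 3.0),
        "anisotropy": (e1 - e3) / e1,
        "curvature_variation": e3 / (e1 + e2 + e3),
        "verticality": 1.0 - abs(n_z),
    }
```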
Table 2. Information of test data.

| Dataset | Scanner | Point Number | Horizontal Angular Resolution Setting/° | Vertical Angular Resolution Setting/° |
| --- | --- | --- | --- | --- |
| Data 1 | Riegl VZ-400 | 53101629 | 0.03 | 0.02 |
| Data 2 | FARO Focus S350 | 15831994 | 0.035 | 0.035 |
| Data 3 | Riegl VZ-2000 | 10396796 | 0.04 | 0.04 |
| Data 4 | STONEX X300 | 1876543 | 0.09 | 0.09 |
| Data 5 | Riegl VZ-400 | 1346313 | 0.2 | 0.092 |
Table 3. Estimation error of horizontal angular resolution on each dataset (Mean ± Std).

| Dataset | Our Method/10^−4 ° | Point Spacing-Based Method/10^−4 ° |
| --- | --- | --- |
| Data 1 | 8.6 ± 7.9 | 31.7 ± 23.7 |
| Data 2 | 1.1 ± 0.8 | 19.0 ± 7.6 |
| Data 3 | 0.9 ± 0.7 | 34.8 ± 19.4 |
| Data 4 | 2.2 ± 0.7 | 37.2 ± 22.0 |
| Data 5 | 2.7 ± 1.8 | 71.6 ± 50.5 |
Table 4. Estimation error of vertical angular resolution on each dataset (Mean ± Std).

| Dataset | Our Method/10^−4 ° | Point Spacing-Based Method/10^−4 ° |
| --- | --- | --- |
| Data 1 | 5.0 ± 0.4 | 20.7 ± 20.0 |
| Data 2 | 2.5 ± 1.1 | 5.9 ± 5.9 |
| Data 3 | 17.9 ± 5.6 | 37.4 ± 17.6 |
| Data 4 | 2.1 ± 1.4 | 26.4 ± 34.5 |
| Data 5 | 4.3 ± 1.9 | 64.4 ± 17.8 |
Table 5. Estimation error of different types of objects.

| Target | Dataset | Horizontal/10^−4 ° | Vertical/10^−4 ° |
| --- | --- | --- | --- |
| Building 1 | Data 2 | 1.6 | 1.4 |
| Building 2 | Data 3 | 0.6 | 7.1 |
| Crown 1 | Data 5 | 2.7 | 6.4 |
| Crown 2 | Data 1 | 16.6 | 5.4 |
| Pole 1 | Data 4 | 2.6 | 1.0 |
| Pole 2 | Data 1 | 1.5 | 1.6 |
| Shrub 1 | Data 2 | 1.9 | 1.9 |
| Shrub 2 | Data 4 | 2.8 | 1.4 |
| Ground 1 | Data 4 | 0.4 | 0.3 |
| Ground 2 | Data 5 | 1.7 | 4.1 |
| Car | Data 3 | 0.8 | 12.7 |
| Pedestrian | Data 3 | 300 | 8.1 |
Table 6. Running time on each dataset (N = 500 and K = 30).

| Dataset | Point Number | Execution Time/s |
| --- | --- | --- |
| Data 1 | 53101629 | 102.0 |
| Data 2 | 15831994 | 29.8 |
| Data 3 | 10396796 | 16.2 |
| Data 4 | 1876543 | 2.5 |
| Data 5 | 1346313 | 1.2 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
