Article

Adaptive Clustering for Point Cloud

1 College of Geomatics and Geoinformation, Guilin University of Technology, Guilin 541004, China
2 Key Laboratory of Spatial Information and Geomatics, Guilin University of Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(3), 848; https://doi.org/10.3390/s24030848
Submission received: 13 December 2023 / Revised: 12 January 2024 / Accepted: 19 January 2024 / Published: 28 January 2024
(This article belongs to the Section Remote Sensors)

Abstract: The point cloud segmentation method plays an important role in practical applications such as remote sensing, mobile robots, and 3D modeling. However, current point cloud segmentation methods still have some limitations when applied to large-scale scenes. Therefore, this paper proposes an adaptive clustering segmentation method, in which the threshold for clustering points within the point cloud is calculated from the characteristic parameters of adjacent points. After the preliminary segmentation of the point cloud is completed, the segmentation results are further refined according to the standard deviation of the cluster points. The clusters whose point counts do not meet the conditions are then further segmented, and, finally, scene point cloud segmentation is realized. To test the effectiveness of this method, the study used point cloud data from a park in Guilin, Guangxi, China. The experimental results showed that this method is more practical and efficient than other methods, and it can effectively segment all ground objects and ground point cloud data in a scene. Compared with other segmentation methods, which are easily affected by parameters, this method has strong robustness. To verify the universality of the proposed method, we also tested a public data set provided by ISPRS. The method achieved good segmentation results on multiple samples, and it can distinguish noise points in a scene.

1. Introduction

With the development and popularization of laser radar technology, three-dimensional point cloud processing has become increasingly important in urban development and construction [1,2]. Point cloud data offer many advantages in actual production, but they also have limitations: because of their large volume and disorder [3], they must be preprocessed before being put into production. Point cloud segmentation techniques play a vital role in this process and aid in understanding and perceiving complex scenes. Point cloud segmentation is intended to facilitate the extraction of geometric and topological information from each object; it involves dividing the point cloud data into the different regions of each object and performing fine-grained scene analysis and modeling [4,5].
Currently, the commonly used 3D point cloud segmentation methods mainly include the region-based method, the model-based method, the convolutional network method, the graph-theory-based method, and the edge-based method [6,7]. The region-based method can adaptively cluster adjacent points based on similarity and achieves a good segmentation effect, but it easily produces segmentation errors in the presence of large gaps or holes between objects [8]. The model-based method fits geometric models well and is relatively fast; however, it segments objects with complex geometric shapes and irregular structures poorly, and it is sensitive to noise and local outliers [9]. The convolutional network method has high learning and expressive capacity and can extract rich features from point cloud data; however, it easily overfits when only a small amount of data is available, a large amount of data is required to train the model, and the model's decisions and predictions are difficult to interpret [10]. The graph-theory-based method applies the principles of image segmentation to point clouds, but it lacks real-time performance [11,12]. The edge-based method is currently the most studied [6]; it senses the geometric boundaries of the data points and partitions them into multiple independent point sets, accomplishing segmentation by detecting the edges of the different regions of the point cloud. Clustering is a common way of detecting whether three-dimensional points in a scene belong to the same point set, and it is widely used within edge detection methods. Chen et al. proposed an improved clustering method, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and applied it to 3D point cloud boundary detection and planar segmentation, obtaining good results [13]. However, because it relies on clustering, the edge detection method still suffers from some shortcomings in scene segmentation.
The density-based and distance-based clustering methods are commonly employed in 3D point cloud segmentation [14]. DBSCAN is the primary representative of density-based clustering. It achieves clustering by defining a specific neighborhood range and density threshold, but a decision diagram must be drawn for its cluster centers, which requires manual intervention. Although some scholars have proposed solutions to this problem, they are still imperfect when applied to three-dimensional scenes, and they are vulnerable to outliers and some information loss [15]. The k-means clustering and KD-tree-based Euclidean clustering methods are representatives of distance-based clustering. The k-means method performs well in most cases, but it is sensitive to the initial cluster centers, and the number of clusters must be specified in advance, so it is poorly suited to unknown 3D point cloud scenes in practice [16]. The KD-tree-based Euclidean clustering method can quickly search for adjacent points and, thanks to the KD-tree, also handles high-dimensional data well; however, it usually uses a fixed cluster threshold. For different scenes, a great deal of tuning is required to obtain good segmentation results, and the use of a fixed threshold in large-scale scenes leads to classification errors [17,18].
To summarize, there are still some issues with the current methods in segmenting point clouds, especially in complex scenes. The edge detection method, which is the most commonly used segmentation method, also has some shortcomings due to the limitations of the cluster method. Therefore, establishing a clustering method that can accurately identify the boundaries of different objects in a scene without specifying the cluster center and adaptive clustering range is an important step to improve the segmentation accuracy of complex scene point clouds. To address the aforementioned issues, this paper proposes an adaptive clustering method for large-scale point clouds. This method adjusts the clustering threshold of each point using the characteristic parameters of the adjacent points of the point cloud to divide the points with the same characteristics into the same cluster points, and then it merges the preliminary segmented cluster points according to their standard deviation. After completing the above process, the cluster points whose number does not meet the conditions are clustered again, and scene point cloud segmentation is realized. Finally, the scattered points after segmentation are tested to determine whether they are outlier noise points. After removing the outlier noise points, the scene point cloud segmentation result is obtained.

2. Materials and Methods

The technical flow of the entire method is shown in Figure 1.
First, this paper adopts the local surface fitting method to estimate the normal vector of each point. Then, the density of each point in the scene point cloud is determined and sorted, and the adaptive clustering point cloud segmentation method is applied to complete the initial segmentation of the point cloud data. Next, the standard deviation is introduced as the basis for merging clusters with the same characteristics. Based on these segmentation results, the clusters whose point counts do not meet the conditions are treated as scattered points for further clustering segmentation. Finally, after the segmentation of all clusters is completed, the discrete points are calibrated to determine whether they are outlier noise points.

2.1. Normal Vector Calculations

Existing normal vector estimation methods fall into three types: those based on Delaunay triangulation, those based on robust statistics, and those based on local surface fitting [19,20,21]. In this paper, the local surface fitting method is selected because, among the many normal vector estimation methods, it is the most widely used, its principle is simple, and its computational efficiency is high. The principle is as follows (Figure 2): an arbitrary point is selected from the imported point cloud data, and its m adjacent points are found using the KD-tree method. An m × 3 matrix is then constructed from the m point coordinates, and its covariance matrix is calculated (1). After completing this process, the eigenvalues and corresponding eigenvectors are obtained according to the definition of eigenvalues (2).
$\mathrm{Cov}(X) = \frac{1}{m-1}(X - \mu_x)^{T}(X - \mu_x)$ (1)
Cov(X) is the covariance matrix of the constructed matrix, m is the number of adjacent points searched, and μ_x is the mean of each dimension.
$\mathrm{Cov}(X)\,x = \lambda x \Rightarrow (\lambda E - \mathrm{Cov}(X))\,x = 0$ (2)
λ is an eigenvalue, x is the corresponding eigenvector, and E is the identity matrix.
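As a minimal sketch of this estimation step, the normal of a point can be computed with NumPy as the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue; a brute-force nearest-neighbor search stands in here for the KD-tree used in the paper, and m = 8 matches the neighborhood size used later in the text.

```python
import numpy as np

def estimate_normal(points, idx, m=8):
    """Estimate the normal of points[idx] by local surface fitting (PCA).

    The m nearest neighbours are found by brute force (the paper uses a
    KD-tree); the normal is the eigenvector of the neighbourhood covariance
    matrix (Eq. (1)) associated with the smallest eigenvalue (Eq. (2)).
    """
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:m]]         # m x 3 matrix of neighbours
    cov = np.cov(nbrs, rowvar=False)         # 1/(m-1)-normalised covariance, Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue

# Points sampled from the plane z = 0 should yield a normal parallel to (0, 0, 1)
pts = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
n = estimate_normal(pts, 4)
```

Since `np.linalg.eigh` returns eigenvalues in ascending order, the first eigenvector is the one with the smallest eigenvalue, i.e., the direction of least variance of the local surface.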

2.2. Adaptive Clustering Method

In comparison to traditional Euclidean clustering, KD-tree-based Euclidean clustering reaches the clustering goal faster, but it also has limitations: it considers scene segmentation only in terms of point-to-point distance, so it can be applied only to relatively simple scenes, and in the presence of a large number of complex features it is prone to classification errors [18]. The adaptive cluster threshold method proposed in this paper instead starts from the distance, height difference, and normal vector angle between points. The normal vector angle is obtained from the previously computed eigenvectors (3). These three parameters are used to calculate the clustering threshold of each point with respect to the cluster center (4), so that a larger cluster threshold applies to points with the same features and a smaller one to points with inconsistent features.
$\theta = \arccos\left(\dfrac{a_1 a_2 + b_1 b_2 + c_1 c_2}{\sqrt{a_1^2 + b_1^2 + c_1^2}\,\sqrt{a_2^2 + b_2^2 + c_2^2}}\right)$ (3)
(a_1, b_1, c_1) and (a_2, b_2, c_2) are the normal vector components of the two points, and θ is the angle between the two normal vectors.
$Y_C = \alpha \cdot \dfrac{density}{dis \cdot g_c \cdot \theta}$ (4)
Y_C is the two-point clustering threshold, density is the average density of the point cloud data, dis is the distance between the two points, g_c is the height difference between the two points, θ is the angle between the two normal vectors, and α is the adjustment parameter.
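A sketch of the threshold computation follows, assuming the reconstructed fractional form of Eq. (4) (the source typesetting is ambiguous, so the division structure, the default α, and the small eps guard against a zero denominator are all assumptions of this sketch): similar points (small distance, height difference, and normal angle) receive a large threshold, dissimilar points a small one.

```python
import numpy as np

def normal_angle(n1, n2):
    """Eq. (3): angle between two normal vectors."""
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def cluster_threshold(p1, p2, n1, n2, density, alpha=1.0, eps=1e-6):
    """Eq. (4) as reconstructed here: Y_C = alpha * density / (dis * g_c * theta).

    dis is the 3D distance, g_c the height difference, theta the normal angle;
    eps avoids division by zero for identical feature values.
    """
    dis = np.linalg.norm(p1 - p2)
    g_c = abs(p1[2] - p2[2])
    theta = normal_angle(n1, n2)
    return alpha * density / max(dis * g_c * theta, eps)

# Two points with identical normals get a far larger threshold than two
# points whose normals differ by 90 degrees.
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.1])
n_flat = np.array([0.0, 0.0, 1.0])
t_same = cluster_threshold(p1, p2, n_flat, n_flat, density=0.5)
t_diff = cluster_threshold(p1, p2, n_flat, np.array([0.0, 1.0, 0.0]), density=0.5)
```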

2.3. Boundary Construction

In this paper, the outer contour points are found mainly with the Graham algorithm (Figure 3). The detailed calculation process is shown as pseudo-code in Algorithm 1.
Algorithm 1 Pseudo-code of boundary construction.
1. Initialize P = {P1, P2, ..., Pn} as the target cluster set
2. Project all points onto a two-dimensional coordinate system
3. Find the x-coordinate extreme points in the projected coordinate system, xmin = min{xi} and xmax = max{xi}, and the y-coordinate extreme points, ymin = min{yi} and ymax = max{yi}, giving four points Pta(xmin, ya), Ptb(xb, ymax), Ptc(xmax, yc), Ptd(xd, ymin); these four points are convex hull vertices
4. Let Pt1 and Pta, Pt2 and Ptb, Pt3 and Ptc, Pt4 and Ptd exchange coordinates, connect the four points counterclockwise, and construct the initial convex hull P1P2P3P4
5. if a point lies inside the initial convex hull then
6.     remove the point from the set of outer contour candidates
7. end if
8. Divide the remaining point set into multiple subsets in order
9. Obtain the convex hull vertices of each subset with the Graham algorithm
10. Use the Graham algorithm to judge the outer contour of the subsets and generate the final convex hull
11. Return OP, the outermost points of each cluster
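The core of Algorithm 1 can be sketched as a plain Graham scan over the projected 2D points; the subset splitting and initial-quadrilateral pruning of the algorithm are omitted here, since they only accelerate the same result.

```python
import math

def graham_hull(points):
    """2D convex hull by Graham scan: the outer contour of a projected cluster.

    points is a list of (x, y) tuples; returns hull vertices in
    counterclockwise order starting from the lowest point.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    p0 = min(pts, key=lambda p: (p[1], p[0]))  # lowest, then leftmost point
    rest = sorted((p for p in pts if p != p0),
                  key=lambda p: (math.atan2(p[1] - p0[1], p[0] - p0[0]),
                                 (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))

    def cross(o, a, b):
        # > 0 for a counterclockwise turn o -> a -> b
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = [p0]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                         # drop points causing clockwise turns
        hull.append(p)
    return hull

# A unit square with one interior point: the interior point is discarded
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
hull = graham_hull(square)
```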

2.4. Cluster Merging

Due to the influence of external factors during laser radar data acquisition, some points may be missing, so a point cloud that should belong to a single cluster is divided into multiple clusters [22,23]. Therefore, it is necessary to judge the clusters again after the initial clustering and to merge nearby clusters with similar characteristics. The characteristic compared in this paper is the standard deviation σ of each cluster's elevations (5). σ is the most important and most commonly used index for measuring the degree of data variation, and it describes the distribution of each cluster well [24,25]. The merging process is as follows (Figure 4): after the first classification, the clusters are sorted by the number of points they contain, and the largest point cloud set is selected first. Regular and irregular point clouds are distinguished according to distance and standard deviation, and the distance between two cluster blocks is then checked against a density-dependent condition to determine whether they belong to the same cluster.
$\sigma = \sqrt{\dfrac{\sum_{i=1}^{n}\left(h_i - E(H)\right)^2}{n}}$ (5)
n is the number of points in the cluster, h_i is the elevation of the i-th point, and E(H) is the average elevation of the cluster.
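The merge test can be sketched as follows. Eq. (5) is implemented directly; the combined rule (similar elevation standard deviation plus a nearest-point distance check) follows the description above, but the tolerance values `std_tol` and `dist_tol` are illustrative assumptions, not parameters from the paper.

```python
import math

def elevation_std(cluster):
    """Eq. (5): population standard deviation of the elevations (z values)."""
    z = [p[2] for p in cluster]
    mean = sum(z) / len(z)
    return math.sqrt(sum((h - mean) ** 2 for h in z) / len(z))

def should_merge(c1, c2, std_tol=0.2, dist_tol=1.0):
    """Illustrative merge rule: merge when the elevation standard deviations
    are similar AND the nearest points of the two clusters are close enough."""
    if abs(elevation_std(c1) - elevation_std(c2)) > std_tol:
        return False
    d_min = min(math.dist(p, q) for p in c1 for q in c2)
    return d_min <= dist_tol

# Two halves of one flat roof merge; a scattered tree cluster does not.
roof_a = [(0.0, 0, 5.0), (0.4, 0, 5.0), (0.8, 0, 5.1)]
roof_b = [(1.2, 0, 5.0), (1.6, 0, 5.1), (2.0, 0, 5.0)]
tree   = [(1.0, 3, 2.0), (1.2, 3, 4.5), (1.4, 3, 6.0)]
merge_ab = should_merge(roof_a, roof_b)
merge_at = should_merge(roof_a, tree)
```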

3. Experimentation

3.1. Study Data

For the experimental data in this paper, oblique UAV imagery of a park in Guilin, Guangxi, China, was used to obtain point cloud data (Figure 5). There were many buildings, trees, and other features in this scene, and they had a large effect on the segmentation of the point cloud. To fully verify the segmentation effect of various point cloud segmentation methods on small, medium, and large scenes, the point cloud data from this scene were manually cut to obtain large-scene, medium-scene, and small-scene point cloud data. In the subsequent comparison, a variety of common segmentation methods are used to segment the three scenes to verify their segmentation effects.

3.2. Experimental Process

Once the corresponding experimental data were obtained, the adjacent points of each point in the point cloud data were found in turn, and an m × 3 coordinate matrix was constructed (the number of adjacent points in this paper is 8). The eigenvector corresponding to the minimum eigenvalue was computed according to the normal vector calculation method above; by principal component analysis, this eigenvector is the normal vector of the point. The normal vector was represented by the three-dimensional vector (a, b, c), and the corresponding vector was assigned to each point (Figure 6). By comparing the normal vectors of different areas, it can be seen that their distributions differ between features (Figure 7), which verifies that it is feasible to use the normal vector angle to segment features.
After obtaining the normal vector of each point, the density of each point in the point cloud data is determined and sorted. For each point, the nearest point is found and the distance between the two is computed; averaging these nearest-neighbour distances over all points yields the average density of the scene. Sorting the point cloud by density allows high-density points to be preferentially selected as cluster centers in the subsequent clustering method. From the distribution of high-density points (Figure 8), the high-density points are found to be primarily continuous points, such as buildings. Sorting the point cloud data by density therefore allows regular surfaces to be extracted preferentially, which improves segmentation accuracy.
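This density step can be sketched as follows; a brute-force distance matrix stands in for the paper's KD-tree search, which is fine for illustration but quadratic in the number of points.

```python
import numpy as np

def density_order(points):
    """Per-point nearest-neighbour spacing and a high-density-first ordering.

    A small nearest-neighbour distance means a locally dense point, so
    sorting the spacings in ascending order puts dense points first; the
    mean spacing serves as the scene's "average density".
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)        # a point is not its own neighbour
    nn = d.min(axis=1)                 # distance to each point's nearest neighbour
    avg_density = nn.mean()            # scene-level average spacing
    order = np.argsort(nn)             # dense (small-spacing) points first
    return nn, avg_density, order

# A tight pair plus one isolated point: the isolated point sorts last.
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0]])
nn, avg, order = density_order(pts)
```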
At the end of the density sorting, the first point in the point cloud data was selected as the cluster center. A KD-tree was used to find points near the cluster center within a density-dependent range (in this paper, the search was conducted within four times the average density), the cluster thresholds between those points and the cluster center were computed, and it was determined whether the distance from each point to the cluster center was within the threshold. Points within the threshold were classified into the same cluster. After testing the search points within the cluster range, it was determined whether any new points had been added. If so, the outermost new point was chosen as the new cluster center; if not, the clustering was halted and the results were output. The above process was then repeated to obtain multiple clusters. The scene was thereby divided into multiple clusters (Figure 9), and some connected point sets were also divided into multiple clusters. This is mainly because, to reduce computational complexity and improve efficiency, once search points were classified into a cluster, the newly added points were not each used in turn as cluster centers whose neighbours were searched; instead, the method proposed in this paper uses only the point farthest from the current cluster center among the newly added points as the next cluster center and searches for its adjacent points. This will inevitably cause a point cloud set that should belong to one cluster to be divided into multiple clusters; however, the fine clustering and merging described later solve this problem, and this strategy avoids many repeated operations, which strengthens its efficiency.
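The cluster-growing loop just described can be sketched as follows. To keep the sketch self-contained, a fixed search radius stands in for the per-point adaptive threshold of Eq. (4); the distinctive feature retained from the text is that only the farthest newly added point becomes the next cluster center.

```python
import numpy as np

def grow_cluster(points, seed, radius):
    """Grow one cluster from a seed point.

    At each step the current centre absorbs all unlabelled points within
    `radius`, and the farthest of the newly absorbed points becomes the
    next centre; growth stops when a step adds no new points.
    """
    in_cluster = np.zeros(len(points), dtype=bool)
    in_cluster[seed] = True
    frontier = [seed]
    while frontier:
        center = frontier.pop()
        d = np.linalg.norm(points - points[center], axis=1)
        new = np.where(~in_cluster & (d <= radius))[0]
        if len(new):
            in_cluster[new] = True
            frontier.append(new[np.argmax(d[new])])  # farthest new point as next centre
    return np.where(in_cluster)[0]

# Four collinear points 1 m apart form one cluster; the point at x = 10 is excluded.
pts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [10, 0, 0]])
members = grow_cluster(pts, 0, 1.5)
```

Note that a chain of points longer than the radius is still absorbed step by step, which is why the stopping condition is "no new points" rather than a single radius test from the seed.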
After the initial clustering segmentation was completed, the cluster points insufficient in number were eliminated (Figure 10a). According to the segmentation results, the outer contour was generated (Figure 10b), and it was determined whether it belonged to the same cluster point based on the standard deviation. After that, clusters whose elevation and standard deviation meet a certain threshold were merged (Figure 10c). After completing the above process, the regular feature segmentation was completed (Figure 10d).
After the initial merged clustering results were obtained, the scattered points and the cluster point clouds containing fewer than 20 points were extracted (Figure 11). Some of the extracted points were located at the boundaries of cluster blocks; the reason is that the normal vector estimate is skewed at boundary angles. The red point cloud contained a small number of cluster point clouds, mostly scattered forest points or small-area features. These low-density cluster blocks were removed to obtain the target cluster point cloud (Figure 12a), and the corresponding two-dimensional bounding box was constructed (Figure 12b).
The above process removes most of the ground objects and avoids interference from similar ground objects, so the segmentation of the remaining objects can be completed using Euclidean clustering. This paper used a 3 m clustering threshold to supplement the clustering of the low-density point clouds (Figure 13a). After clustering the low-density point cloud, the corresponding two-dimensional boundary was constructed (Figure 13b). Given that dense forest points were inevitably encountered in the previous clustering and that the point distribution in some of these regions is relatively regular, the newly obtained clusters must be judged against the previous clusters according to the point cloud dispersion and the nearest distance, and clusters expected to belong to the same feature are merged (Figure 14). Once this work is completed, scene point cloud segmentation is essentially complete, but a few scattered points remain unclassified (Figure 15). Thus, in the final step, the discrete points are re-calibrated, and distance is used to determine whether a scattered point belongs to a nearby cluster point cloud. The point is incorporated into the cluster point cloud if the distance satisfies a certain threshold, and, ultimately, the scene point cloud segmentation result is obtained (Figure 16).
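The final re-calibration step can be sketched as follows: each remaining scattered point joins the cluster containing its nearest point if that distance is within a tolerance, and is otherwise kept aside as potential outlier noise. The 3 m value mirrors the supplementary threshold mentioned above, but using it for this step is an assumption of the sketch.

```python
import math

def assign_stragglers(clusters, scattered, dist_tol=3.0):
    """Assign each scattered point to its nearest cluster within dist_tol;
    points farther than dist_tol from every cluster are returned as noise."""
    noise = []
    for p in scattered:
        best, best_d = None, float("inf")
        for idx, cluster in enumerate(clusters):
            d = min(math.dist(p, q) for q in cluster)   # nearest point in this cluster
            if d < best_d:
                best, best_d = idx, d
        if best is not None and best_d <= dist_tol:
            clusters[best].append(p)
        else:
            noise.append(p)
    return clusters, noise

# One straggler sits 1 m from the first cluster and joins it; the other is
# 29 m from everything and is flagged as noise.
clusters = [[(0, 0, 0), (1, 0, 0)], [(20, 0, 0), (21, 0, 0)]]
clusters, noise = assign_stragglers(clusters, [(2, 0, 0), (50, 0, 0)])
```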

3.3. Analysis of Our Method

For the evaluation in this paper, scenes ranging from a small clipped area to the full scene were used. In the small, medium, and large scenes, the numbers of over-segmented and under-segmented point clouds produced by the method in this paper are compared with those of other segmentation methods. The clustering of scene segmentation was assessed against manual segmentation results, and the performance of the algorithm was comprehensively evaluated. In this paper, we used the Euclidean clustering method with multiple cluster thresholds, the region-growing method, and the proposed method for scene clustering and segmentation to obtain segmentation results for the point cloud data of the small, medium, and large scenes.
The Euclidean clustering method was used with three thresholds to test segmentation of the small scene. While the 2 m cluster threshold separates the building point cloud from the ground (Figure 17), trees cannot be merged into the same cluster, and the same building is assigned to different clusters. Although the 2.5 m clustering threshold alleviates the tree clustering problem, objects that should form one cluster are still split, and the decomposition result remains poor. The 3 m cluster threshold can merge trees into the same cluster, but in the clustering process the remaining ground points are merged in as well. The region-growing method does not suffer from the aforementioned problem; however, its segmentation results showed that it cannot complete the segmentation goal for regions with large changes in the tree angle threshold (Figure 18), nor determine whether a partial angle exists between two points that are expected to belong to the same cluster, and such regions are split into multiple clusters. Compared to the previous methods, the proposed method is significantly improved in the segmentation of the small scene (Figure 19): different patches of the same building are segmented, and the discrete points of the scene can also be extracted. Some of the points shown in the plot are not included in any cluster. In the sparse results, two cluster points can be found far apart (Figure 20); an analysis of the full scene shows that the points in this region are in fact points of other trees produced by the scene cutting process, which should be defined as noise points, but due to their large number they are redefined as a cluster.
To summarize, for segmenting the small scene point cloud, the method proposed in this paper is the most effective, and the number of segmented clusters is the closest to the true number. The clustering error of the partial partition is in fact a clipping-process error, which has little to do with the method proposed in this paper.
The Euclidean clustering method was also used with three thresholds to test segmentation of the medium scene. The results showed that the 1.5 m clustering threshold works well on building patches (Figure 21); however, in ground point segmentation, areas with interval holes are divided into multiple clusters, and the flowerbed point clouds near most buildings are merged into the ground points. The 2 m clustering threshold improves the problem of ground points being divided into multiple clusters, but it also merges similar patches of a building, and it does not handle the near-ground flowerbed problem well. The 3 m clustering threshold only divides far-apart point cloud clusters into different clusters, and no segmentation effect is observed in similar regions. Compared with the Euclidean clustering method, the region-growing method can segment the near-ground flowerbeds (Figure 22), but it cannot achieve a good effect for discrete forest points or some building patches. Compared with the first two methods, the proposed method achieves good results in scene segmentation (Figure 16), and the clusters obtained by segmentation are close to the actual ones.
The Euclidean clustering method was also used with three thresholds to test segmentation of the large scene. The results showed that the 1.5 m clustering threshold (Figure 23) works well on the segmentation of building patches; however, the dense forest, which has more scattered points, is divided into multiple clusters. The 2 m clustering threshold improves the problem of ground points being divided into multiple clusters, but it also merges similar patches of a building and cannot handle the near-ground flowerbed problem well. The 2.5 m clustering threshold only divides far-apart point cloud clusters into different clusters, and no segmentation effect is observed in similar regions. In contrast, the region-growing method segments planar building surfaces well (Figure 24), but it still cannot complete the segmentation of the dense forest and other areas, and it cannot achieve a good effect in areas with slow angle changes. As for our method, the segmentation results for the large scene show good accuracy (Figure 25), and the various features in the scene are well separated. Compared with the above methods, the number of clusters in its segmentation results is also the closest to the actual number.
In order to further verify the superiority of the proposed method, this paper selected some areas in the scene (Figure 26 and Figure 27). Because the segmentation effect of the comparison methods was not good in the dense forest area, the building and ground point cloud was chosen as the comparison area, and the comparison area was manually segmented. Based on the manual segmentation, the numbers of under-segmented and over-segmented point clouds of the three methods with regard to the cluster were counted. It can also be seen in the segmentation diagram of each sample that, in the local area, the proposed method is more stable and accurate than the other methods (Figure 28, Figure 29 and Figure 30). The numbers of under-segmented and over-segmented point clouds in each sample are calculated, and a relevant table is composed (Table 1). According to the table, error statistics are determined (Figure 31). It can be seen in the result radar map that the method proposed in this paper is superior to the other methods for each sample area; however, its error rate for sample 4 is slightly higher than that of the Euclidean clustering method. The reason for this is that manual segmentation will inevitably produce some errors. Additionally, the sample 4 area is far from the other areas in the scene; thus, it can be easily segmented using Euclidean clustering. However, in general, the segmentation method proposed in this paper is more stable and has a higher accuracy.
To verify the robustness of the proposed method, different clustering threshold constants were used for the small and medium scenes. The experimental results show that using different constants in the small scene does not affect its segmentation (Figure 32): the segmentation effect is barely impacted by the constant change, and the number of scene clusters does not fluctuate. For the medium scene, the segmentation of the various objects also does not fluctuate greatly (Figure 33); except for a small region where, as the constant coefficient increases, part of the segmented feature point cloud is merged into other clusters, this effect is not distributed across the entire scene, and the number of clusters after segmentation does not fluctuate significantly. The parameter variations on these two scenes verify that the method in this paper has strong robustness: unlike Euclidean clustering, where changing the parameter produces different segmentation results, changing the parameter coefficient within a certain range does not greatly affect the final results.
To verify the universality of the proposed method across sample scenarios, a public data set provided by ISPRS was selected for verification, using samples with continuous ground point cloud blocks for clustering. The accuracy of the clustering results was verified by comparing the number of continuous ground points in the data set with the number of ground points actually clustered (Figure 34). The segmentation results show that the proposed clustering method can effectively segment the ground points of multiple samples. However, there are a large number of missed segmentation points in the results; the reason is that the ground points are discontinuous in some areas of the scene, so clusters belonging to the same ground surface cannot be effectively merged. There are also over-segmented points in some scenes; these points are mostly too close to the ground area and its continuous points, and therefore cannot be effectively separated from the ground points.
To further demonstrate the advantages of this method over others, a variety of methods were used to segment the above samples, and the resulting error data were compiled into error tables (Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7). The tables show that the proposed method is more stable than the other clustering segmentation methods. Compared with the Euclidean clustering method using multiple thresholds, the proposed method adaptively adjusts the threshold and obtains better segmentation results. In some samples, the proposed method performs slightly worse than the combination of RANSAC and the region-growing method, but the gap is small, and the ground points in those sample areas are mostly flat; that combined method cannot achieve good segmentation results for scenes with large height differences and distances. Based on all the above results, the proposed method is more stable than the other methods, can be applied to a variety of sample areas, and achieves good clustering segmentation results. The proposed method produces a large number of over-segmented points, but this is mainly due to the discontinuity of the ground area; after removing the discontinuous ground points, the number of over-segmented points is reduced (Table 8).

4. Discussion

(1)
How can adaptive clustering segmentation be realized?
The experimental results above show that, compared to other point cloud clustering segmentation methods, our method clusters point clouds with specific spatial characteristics more accurately. On the public sample data sets, it maintains accuracy while clustering as many ground points as possible. Its clustering performance degrades in areas that lie very close together, but no current clustering algorithm segments such cases well. After the rough planar patches are classified, the scatter of the cluster points is analyzed to decide whether the small clusters are correlated: clusters are merged when both their features and their separations are similar. The standard deviation between cluster points serves two purposes in the merging process: it is the criterion for judging whether the features of two clusters are similar, and it measures the scatter of the point clouds. By analyzing this scatter, the overall vertical trend of a point cloud cluster is constrained, and distance then determines whether two points belong to the same cluster; this allows cluster points to be analyzed in both the planar and vertical directions. The experimental results show that our method can also merge different cluster points. In our workflow, high-density or regular (angle-consistent) point clouds are combined first, and the combination of scattered point clouds follows. The reason is that, in point cloud data, ground and building points tend to be regular, with slowly varying normal angles, so this process can separate the ground points; typically, these regular point clouds occupy more than half of the scene.
If these large-area point clouds are removed first and the clusters of discrete point clouds are judged afterwards, the interference of the large-area point clouds is avoided, which effectively improves the accuracy and efficiency of point cloud data segmentation. For these sparse point clouds, the traditional Euclidean clustering method already provides good segmentation accuracy. Finally, because the normal vectors of boundary regions are deflected relative to those of central regions, discrete points must be judged after the final clustering is completed. This judgement is based primarily on whether the distance between a discrete point and the closest point in a cluster satisfies a certain threshold, which determines whether the discrete point belongs to the cluster or is a noise point generated during the point cloud data acquisition process.
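The merging step described above can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the nearest-point gap threshold and the use of the z-coordinate standard deviation as the scatter measure are assumptions chosen for clarity.

```python
import numpy as np

def merge_clusters(clusters, dist_thresh=2.0, std_ratio=1.5):
    """Greedily merge clusters whose scatter and separation are similar.

    clusters: list of (N_i, 3) point arrays. Two clusters are merged when
    the gap between their nearest points is below dist_thresh and their
    vertical standard deviations (the scatter measure) are within a
    factor of std_ratio. Both thresholds are illustrative.
    """
    merged = [c.copy() for c in clusters]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                a, b = merged[i], merged[j]
                # nearest-point gap between the two clusters
                gap = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2))
                # vertical scatter of each cluster
                sa, sb = np.std(a[:, 2]), np.std(b[:, 2])
                similar = max(sa, sb) <= std_ratio * max(min(sa, sb), 1e-6)
                if gap < dist_thresh and similar:
                    merged[i] = np.vstack([a, b])  # merge j into i
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```

The repeated greedy pass mirrors the idea of checking both a planar criterion (the gap) and a vertical criterion (the scatter) before merging; the all-pairs distance computation is quadratic and would need a spatial index (e.g., a k-d tree) for large scenes.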
(2)
Where can the proposed method be used?
The method proposed in this paper exploits the spatial characteristics of the objects in a point cloud scene to achieve point cloud clustering segmentation. It effectively extracts individual objects, and the experimental comparisons show that its segmentation results deviate only slightly from manual segmentation. With this method, scene objects can be finely modeled, the labor cost of extracting scene objects from point cloud data is reduced, and the efficiency of model construction is improved.
(3)
What are the shortcomings?
The method in this article also has some shortcomings. The discrete point cloud judgement is based on the traditional Euclidean clustering method, which complements the segmentation of sparse points well but performs poorly in dense forest: the points of a dense forest are merged into the same cluster, so individual plants are not separated. This will be the focus of follow-up research. In addition, the normal vector angles of boundary points are deflected during normal vector estimation, which requires an additional discrete-point judgement in the subsequent computation and reduces the efficiency of the proposed method.
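The boundary deflection mentioned above is a property of neighborhood-based normal estimation in general, not only of our implementation. The sketch below uses the standard PCA estimator (the normal is the smallest-eigenvalue direction of the neighborhood covariance); the roof-and-wall geometry is a made-up example showing how the normal tilts by roughly 45° when a boundary neighborhood spans two surfaces:

```python
import numpy as np

def pca_normal(neighbors):
    """Normal = eigenvector of the neighborhood covariance with the
    smallest eigenvalue (a standard PCA surface-normal estimate)."""
    centered = neighbors - neighbors.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)  # ascending order
    return eigvecs[:, 0]

# Interior point of a horizontal plane: neighbors surround it on all sides.
xs, ys = np.meshgrid(np.arange(-2.0, 3.0), np.arange(-2.0, 3.0))
interior = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

# Boundary point at a roof edge: the neighborhood mixes roof and wall points.
roof = np.array([[x, y, 0.0] for x in (0, 1, 2) for y in (-1, 0, 1)])
wall = np.array([[2.0, y, z] for y in (-1, 0, 1) for z in (-1.0, -2.0)])
edge = np.vstack([roof, wall])

print(np.abs(pca_normal(interior)))  # ~[0, 0, 1]: points straight up
print(np.abs(pca_normal(edge)))      # x-component ~0.7: deflected ~45 degrees
```

Because the deflected normal no longer agrees with the surface it belongs to, boundary points fail the normal-consistency check and must be re-attached by the distance-based discrete-point judgement, which is the extra cost noted above.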

5. Conclusions

With the goal of solving the point cloud segmentation problem in current urban scene segmentation, in this paper, we propose an adaptive clustering method based on point cloud normal vectors and sparsity. Clusters of regular point clouds are first extracted according to the features of each point in the scene, and the sparse point clouds are then classified. The proposed method was compared to the Euclidean clustering method and the region-growing method, both commonly used in urban scene segmentation, with manual segmentation as the reference. The final results show that the segmentation results of our method are significantly better than those of the two methods above, and the number of clusters it produces is nearly identical to the number obtained manually. A final robustness test with different constant coefficients shows that, while different coefficients alter the number of points in some clusters after segmentation, they do not change the number of clusters, and the segmentation results differ little between coefficients, indicating strong robustness. In order to verify the efficiency of the proposed method, we measured the speed of point cloud clustering on a computer with an Intel Core i3-8350K quad-core processor at 4.00 GHz, 32 GB of RAM (Kingston DDR4 3200 MHz), an NVIDIA Quadro P1000 graphics card, and an ASUS PRIME Z370-A motherboard (Z370 chipset). The average processing time was 0.000721 s per point.
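The per-point figure reported above is simply total runtime divided by the number of points. A minimal sketch of such a measurement follows; the z-threshold split standing in for the clustering routine is purely illustrative.

```python
import time
import numpy as np

def time_per_point(cluster_fn, points):
    """Average wall-clock processing time per point: total runtime of the
    clustering routine divided by the number of input points."""
    start = time.perf_counter()
    cluster_fn(points)
    return (time.perf_counter() - start) / len(points)

# Stand-in for the clustering routine (illustrative only): a z-threshold split.
pts = np.random.rand(10_000, 3)
t = time_per_point(lambda p: p[p[:, 2] > 0.5], pts)
print(f"average {t:.3e} s per point")
```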
This work will be strengthened in the following aspects:
(1)
Although the overall segmentation effect of the proposed method is good, the normal vector of each point is computed from its adjacent points; the normal vectors at the boundaries of scene objects are therefore strongly deflected, and boundary points are easily excluded from their clusters when the clustering threshold is computed.
(2)
The adaptive clustering method proposed in this paper uses only a simple formula for computing positive and negative ratios. We plan to fit and verify the parameters on multiple test areas, use ablation studies to evaluate the impact of the key components, and then refine this relation.

Author Contributions

Supervision, C.K., S.W. (Siyi Wu), X.L., L.C., D.Z. and S.W. (Shiwei Wang); writing—original draft, Z.L.; writing—review and editing, C.K. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (41961063, 42064002).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the editors and the reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, X.; Jia, D.; Zhang, W. Integrating UAV photogrammetry and terrestrial laser scanning for three-dimensional geometrical modeling of post-earthquake county of Beichuan. In Proceedings of the 18th International Conference on Computing in Civil and Building Engineering: ICCCBE 2020, São Paulo, Brazil, 18–20 August 2021; Springer International Publishing: Cham, Switzerland, 2021; pp. 1086–1098. [Google Scholar]
  2. Kang, C.; Lin, Z.; Wu, S.; Yang, J.; Zhang, S.; Zhang, S.; Li, X. Method to Solve Underwater Laser Weak Waves and Superimposed Waves. Sensors 2023, 23, 6058. [Google Scholar] [CrossRef]
  3. Wang, H.; Zhang, Y.; Liu, W.; Gu, X.; Jing, X.; Liu, Z. A novel GCN-based point cloud classification model robust to pose variances. Pattern Recognit. 2022, 121, 108251. [Google Scholar] [CrossRef]
  4. Wang, X.; Yang, J.; Kang, Z.; Du, J.; Tao, Z.; Qiao, D. A category-contrastive guided-graph convolutional network approach for the semantic segmentation of point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3715–3729. [Google Scholar] [CrossRef]
  5. Xu, Y.; Tong, X.; Stilla, U. Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry. Autom. Constr. 2021, 126, 103675. [Google Scholar] [CrossRef]
  6. Ruan, X.; Liu, B. Review of 3d point cloud data segmentation methods. Int. J. Adv. Netw. Monit. Control. 2020, 5, 66–71. [Google Scholar] [CrossRef]
  7. Grilli, E.; Menna, F.; Remondino, F. A review of point clouds segmentation and classification algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344. [Google Scholar] [CrossRef]
  8. Luo, N.; Jiang, Y.; Wang, Q. Supervoxel-based region growing segmentation for point cloud data. Int. J. Pattern Recognit. Artif. Intell. 2021, 35, 2154007. [Google Scholar] [CrossRef]
  9. Zhao, B.; Hua, X.; Yu, K.; Xuan, W.; Chen, X.; Tao, W. Indoor point cloud segmentation using iterative gaussian mapping and improved model fitting. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7890–7907. [Google Scholar] [CrossRef]
  10. Dong, C.; Chen, N.; Li, Y.; Bao, J. Point Cloud Segmentation Algorithm Based on Deep Learning and 3D Reconstruction. In Proceedings of the 2022 IEEE 5th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 16–18 December 2022; Volume 5, pp. 476–480. [Google Scholar]
  11. Zhu, X.; Liu, X.; Zhang, Y.; Wan, Y.; Duan, Y. Robust 3-D plane segmentation from airborne point clouds based on quasi-a-contrario theory. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7133–7147. [Google Scholar] [CrossRef]
  12. Andoni, A.; Indyk, P. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM 2008, 51, 117–122. [Google Scholar] [CrossRef]
  13. Chen, H.; Liang, M.; Liu, W.; Wang, W.; Liu, P.X. An approach to boundary detection for 3D point clouds based on DBSCAN clustering. Pattern Recognit. 2022, 124, 108431. [Google Scholar] [CrossRef]
  14. Miao, T.; Zhu, C.; Xu, T.; Yang, T.; Li, N.; Zhou, Y.; Deng, H. Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Comput. Electron. Agric. 2021, 187, 106310. [Google Scholar] [CrossRef]
  15. Sun, L.; Liu, R.; Xu, J.; Zhang, S. An adaptive density peaks clustering method with Fisher linear discriminant. IEEE Access 2019, 7, 72936–72955. [Google Scholar] [CrossRef]
  16. Zhu, Z.; Liu, N. Early warning of financial risk based on K-means clustering algorithm. Complexity 2021, 2021, 5571683. [Google Scholar] [CrossRef]
  17. Chen, Y.; Zhong, H.; Chen, C.; Shen, C.; Huang, J.; Wang, T.; Liang, Y.; Sun, Q. On mitigating hard clusters for face clustering. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 529–544. [Google Scholar]
  18. Kang, C.; Lin, Z.; Wu, S.; Lan, Y.; Geng, C.; Zhang, S. A Triangular Grid Filter Method Based on the Slope Filter. Remote Sens. 2023, 15, 2930. [Google Scholar] [CrossRef]
  19. Zhang, J.; Cao, J.J.; Zhu, H.R.; Yan, D.M.; Liu, X.P. Geometry guided deep surface normal estimation. Comput.-Aided Des. 2022, 142, 103119. [Google Scholar] [CrossRef]
  20. Hu, G.; Xiong, L.; Lu, S.; Chen, J.; Li, S.; Tang, G.; Strobl, J. Mathematical vector framework for gravity-specific land surface curvatures calculation from triangulated irregular networks. GIScience Remote Sens. 2022, 59, 590–608. [Google Scholar] [CrossRef]
  21. Yazdanpanah, M.; Xu, C.; Sharifzadeh, M. A new statistical method to segment photogrammetry data in order to obtain geological information. Int. J. Rock Mech. Min. Sci. 2022, 150, 105008. [Google Scholar] [CrossRef]
  22. Zhou, B.; Huang, R. Segmentation algorithm for 3D LiDAR point cloud based on region clustering. In Proceedings of the 2020 7th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS), Guangzhou, China, 13–15 November 2020; pp. 52–57. [Google Scholar]
  23. Tan, K.; Ke, T.; Tao, P.; Liu, K.; Duan, Y.; Zhang, W.; Wu, S. Discriminating forest leaf and wood components in TLS point clouds at single-scan level using derived geometric quantities. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
  24. Thukral, A.K.; Bhardwaj, R.; Kumar, V.; Sharma, A. New indices regarding the dominance and diversity of communities, derived from sample variance and standard deviation. Heliyon 2019, 5, e02606. [Google Scholar] [CrossRef]
  25. Furlan, L.; Sterr, A. The applicability of standard error of measurement and minimal detectable change to motor learning research—A behavioral study. Front. Hum. Neurosci. 2018, 12, 95. [Google Scholar] [CrossRef] [PubMed]
  26. Yuan, H.; Sun, W.; Xiang, T. Line laser point cloud segmentation based on the combination of RANSAC and region growing. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 6324–6328. [Google Scholar] [CrossRef]
Figure 1. The workflow of large-scale scene segmentation based on the adaptive clustering method. Sample 42 provided by ISPRS is used as an example.
Figure 2. Flowchart of normal vector calculation.
Figure 3. Flowchart of computation bound.
Figure 4. Flowchart of cluster merging.
Figure 5. Study data distribution. White points are the original large scene study area, blue points are the medium scene study area, and red points are the small scene study area.
Figure 6. The normal vector pointing of the whole study data. A normal vector is drawn at an interval of 6 points.
Figure 7. The normal vector pointing distribution of each block. The purple area is the border area, the blue area is the forest area, and the red area is the regular area.
Figure 8. High-density point distribution. The red points comprise more than 30 adjacent points within 4 times the average density, and the blue points comprise the other points in the scene, except for the high-density points.
Figure 9. Initial clustering results. Different colors represent different clusters.
Figure 10. Classification results of point clouds. Panel (a) is the initial classification results of a certain number of point clouds, and different colors represent different clusters; panel (b) is the three-dimensional boundary of preliminary clusters, and the red bounding box indicates the same cluster class; panel (c) is the classification results with scatter points; panel (d) is the boundary distribution of clusters.
Figure 11. The result of low-density clustering points and merged clustering points. Panel (a) shows the distribution of elimination points; the blue points are scattered points, and the red points are points with a certain density. Panel (b) shows the distribution of segmentation results; the black points are elimination points.
Figure 12. The clustering results after removing the low-density clustering point cloud blocks. Panel (a) shows the distribution of the clustering points after removing the low-density clustering point cloud blocks; panel (b) shows the boundary after removing the low-density clustering point cloud.
Figure 13. Low-density clustering results. Panel (a) shows the low-density point cloud clustering results; panel (b) shows the low-density point cloud clustering boundary.
Figure 14. The merged clustering point cloud and its boundary. Panel (a) shows the distribution of clustering points, and panel (b) shows the boundary of the clustering points.
Figure 15. The remaining scatter distribution. The black points are the completed cluster segmentation points, and the red points are the cluster scatter points.
Figure 16. The final segmentation result.
Figure 17. Euclidean clustering segmentation results for the small scene. Panels (a,c,e) show scatter distribution maps using 2 m, 2.5 m, and 3 m clustering thresholds, respectively; panels (b,d,f) show the cluster top views using 2 m, 2.5 m, and 3 m clustering thresholds. The black line shows the outer contour of the clustering result, and the red line box shows the area with a poor classification result.
Figure 18. Region growth segmentation results for the small scene. The red box shows the area with a poor classification result. Panel (a) shows the distribution of clustering points, and panel (b) shows the boundary of the clustering points.
Figure 19. Segmentation result map of the proposed method for the small scene. The red box shows the area with a poor classification result, the blue box shows the eliminated discrete points, and the black line is the outer contour of the clustering result. Panel (a) shows the distribution of clustering points, and panel (b) shows the boundary of the clustering points.
Figure 20. Segmentation side view of the proposed method for the small scene. The red box shows the disputed area.
Figure 21. Euclidean clustering segmentation results for the medium scene. Panels (a,c,e) show the scatter distribution maps using 1.5 m, 2 m, and 3 m clustering thresholds, respectively; panels (b,d,f) show the cluster top views using 1.5 m, 2 m, and 3 m clustering thresholds. The black line is the outer contour of the clustering result, and the red line box shows the area with a poor classification result.
Figure 22. Region-growing method segmentation results for the medium scene. Panel (a) shows a scatter plot, and panel (b) shows a cluster top view. The black line is the boundary of the clustering result, and the red box indicates the area with a poor classification result.
Figure 23. Euclidean clustering segmentation results for the large scene. Panels (a,c,e) show the scatter distribution maps using 1.5 m, 2 m, and 2.5 m clustering thresholds, respectively; panels (b,d,f) show the cluster top views using 1.5 m, 2 m, and 2.5 m clustering thresholds. The black line is the outer contour of the clustering result, and the red line box shows the area with a poor classification result.
Figure 24. Region-growing segmentation results for the large scene. Panel (a) shows a scatter plot, and panel (b) shows a cluster top view. The black line is the boundary of the clustering result, and the red box indicates the area with a poor classification result.
Figure 25. Segmentation result map of the proposed method for the large scene. Panel (a) shows the clustering plots, and panel (b) shows a cluster top view. The black line is the boundary of the clustering result. Panel (c) is the boundary of the clustering result.
Figure 26. Distributions of samples 1, 2, 3, and 4.
Figure 27. Distributions of samples 5 and 6.
Figure 28. Segmentation error of each sample area using the Euclidean clustering method. The black point is the correct segmented point cloud, the red point is the under-segmented point cloud, and the blue point is the over-segmented point cloud.
Figure 29. Segmentation error of each sample area using the region-growing method. The black point is the correct segmented point cloud, the red point is the under-segmented point cloud, and the blue point is the over-segmented point cloud.
Figure 30. Segmentation error of each sample area using our method. The black point is the correct segmented point cloud, the red point is the under-segmented point cloud, and the blue point is the over-segmented point cloud.
Figure 31. Comparison of error ratios. Panel (a) shows a comparison chart of Euclidean clustering and the regional-growing method, panel (b) shows a comparison chart of Euclidean clustering and our method, and panel (c) shows a comparison chart of the region-growing method and our method.
Figure 32. Small-scene segmentation results with different constant coefficients. Panels (a,c,e) show the split scatter plots with constant coefficients of 2, 3, and 4, respectively; panels (b,d,f) show the segmented top view with constant coefficients of 2, 3, and 4. The black lines are the outer contours of different cluster points.
Figure 33. Medium-scene segmentation results with different constant coefficients. Panels (a,c,e) show split scatter plots with constant coefficients of 2, 3, and 4, respectively; panels (b,d,f) show the segmented top view with constant coefficients of 2, 3, and 4. The black lines show the outer contours of different cluster points.
Figure 34. Clustering segmentation results of ground points of different samples. The black points are the correct segmented ground points, the red points are the under-segmented ground points, and the blue points are the over-segmented ground points.
Table 1. Experimental data. Under-segmented points (red points), over-segmented points (blue points), manually segmented points, and total error rates (%) for Euclidean clustering (EC), the region-growing method (RG), and our method (OM).

Sample   | Under-Segmented (EC / RG / OM) | Over-Segmented (EC / RG / OM) | Manual Segmentation | Total Error Rate, % (EC / RG / OM)
Sample 1 | 1753 / 7 / 326                 | 179 / 2943 / 365              | 14,410              | 13.4074 / 20.4719 / 4.7953
Sample 2 | 450 / 0 / 22                   | 0 / 981 / 7                   | 5183                | 8.6822 / 18.9273 / 0.5595
Sample 3 | 28 / 4 / 21                    | 0 / 300 / 1                   | 2169                | 1.2909 / 14.0157 / 1.0143
Sample 4 | 5 / 0 / 0                      | 0 / 188 / 6                   | 996                 | 0.5020 / 18.8755 / 0.6024
Sample 5 | 47 / 0 / 27                    | 14 / 218 / 12                 | 1450                | 4.2069 / 15.0345 / 2.6897
Sample 6 | 299 / 1 / 15                   | 0 / 97 / 0                    | 653                 | 45.7887 / 15.0077 / 2.2971
Average  |                                |                               |                     | 12.3130 / 17.0554 / 1.9931
Table 2. Euclidean clustering segmentation results with a 1.5 m clustering threshold.

Sample    | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points
Sample 12 | 26,691             | 18,218           | 16,644               | 91.36             | 1569                   | 10,036
Sample 21 | 10,085             | 1764             | 1643                 | 93.14             | 121                    | 8436
Sample 22 | 22,504             | 3088             | 3014                 | 97.60             | 74                     | 19,484
Sample 23 | 13,223             | 839              | 835                  | 99.52             | 3                      | 12,384
Sample 24 | 5434               | 960              | 922                  | 96.04             | 38                     | 4507
Sample 31 | 15,556             | 13,899           | 12,617               | 90.78             | 1276                   | 2933
Table 3. Euclidean clustering segmentation results with a 2 m clustering threshold.

Sample    | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points
Sample 12 | 26,691             | 29,825           | 24,795               | 83.13             | 5020                   | 1885
Sample 21 | 10,085             | 11,924           | 10,027               | 84.09             | 1891                   | 52
Sample 22 | 22,504             | 24,784           | 20,890               | 84.29             | 3888                   | 1608
Sample 23 | 13,223             | 11,367           | 9991                 | 87.89             | 1372                   | 3228
Sample 24 | 5434               | 6357             | 5377                 | 84.58             | 975                    | 52
Sample 31 | 15,556             | 15,602           | 13,591               | 87.11             | 2005                   | 1959
Table 4. Euclidean clustering segmentation results with a 2.5 m clustering threshold.

Sample    | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points
Sample 12 | 26,691             | 35,628           | 26,453               | 74.25             | 9162                   | 227
Sample 21 | 10,085             | 12,292           | 10,079               | 82.00             | 2207                   | 0
Sample 22 | 22,504             | 25,504           | 21,352               | 83.72             | 4146                   | 1146
Sample 23 | 13,223             | 17,985           | 12,423               | 69.07             | 5556                   | 796
Sample 24 | 5434               | 6611             | 5408                 | 81.80             | 1198                   | 21
Sample 31 | 15,556             | 22,120           | 15,496               | 70.05             | 6615                   | 54
Table 5. Region-growing segmentation results with a 1.5 m clustering threshold.

Sample    | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points
Sample 12 | 26,691             | 29,528           | 20,632               | 69.87             | 8884                   | 6048
Sample 21 | 10,085             | 9584             | 8585                 | 89.58             | 993                    | 1494
Sample 22 | 22,504             | 26,353           | 19,631               | 74.49             | 6715                   | 2967
Sample 23 | 13,223             | 18,886           | 11,038               | 58.45             | 7844                   | 2181
Sample 24 | 5434               | 5091             | 4249                 | 83.46             | 837                    | 1180
Sample 31 | 15,556             | 19,066           | 12,952               | 67.93             | 6103                   | 2698
Table 6. Segmentation results of the combination of RANSAC and the region-growing method [26].

| Sample | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points |
|---|---|---|---|---|---|---|
| Sample 12 | 26,691 | 20,384 | 18,467 | 90.60 | 1909 | 8213 |
| Sample 21 | 10,085 | 9444 | 9082 | 96.17 | 356 | 997 |
| Sample 22 | 22,504 | 12,198 | 11,645 | 95.47 | 549 | 10,853 |
| Sample 23 | 13,223 | 7224 | 6313 | 87.39 | 908 | 6906 |
| Sample 24 | 5434 | 3067 | 2877 | 93.81 | 186 | 2552 |
| Sample 31 | 15,556 | 13,275 | 12,218 | 92.04 | 1051 | 3332 |
Table 7. Segmentation results of the method proposed in this paper.

| Sample | Real Ground Points | Segmented Points | Accurately Segmented | Correct Ratio (%) | Under-Segmented Points | Over-Segmented Points |
|---|---|---|---|---|---|---|
| Sample 12 | 26,691 | 25,001 | 21,695 | 86.78 | 3299 | 4985 |
| Sample 21 | 10,085 | 10,272 | 9015 | 87.76 | 1252 | 1064 |
| Sample 22 | 22,504 | 21,989 | 19,914 | 90.56 | 2840 | 3354 |
| Sample 23 | 13,223 | 9816 | 8799 | 89.64 | 1015 | 4420 |
| Sample 24 | 5434 | 5508 | 4924 | 89.40 | 580 | 505 |
| Sample 31 | 15,556 | 13,900 | 12,438 | 89.48 | 1456 | 3112 |
Table 8. Comparison of over-segmented points before and after removing non-continuous ground point data.

| Sample | Over-Segmented Points | Over-Segmented Points After Removing Non-Continuous Ground |
|---|---|---|
| Sample 12 | 4985 | 2613 |
| Sample 21 | 1064 | 1064 |
| Sample 22 | 3354 | 1817 |
| Sample 23 | 4420 | 1650 |
| Sample 24 | 505 | 505 |
| Sample 31 | 3112 | 1253 |

Lin, Z.; Kang, C.; Wu, S.; Li, X.; Cai, L.; Zhang, D.; Wang, S. Adaptive Clustering for Point Cloud. Sensors 2024, 24, 848. https://doi.org/10.3390/s24030848