
Contour Extraction of UAV Point Cloud Based on Neighborhood Geometric Features of Multi-Level Growth Plane

Xijiang Chen, Qing An, Bufan Zhao, Wuyong Tao, Tieding Lu, Han Zhang, Xianquan Han and Emirhan Ozdemir
1 School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
2 Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake of Ministry of Natural Resources, East China University of Technology, Nanchang 330013, China
3 School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
4 Changjiang River Scientific Research Institute, Wuhan 430019, China
5 Department of Architecture and Town Planning, Vocational School of Higher Education for Technical Sciences, Igdir University, Igdir 76002, Turkey
* Author to whom correspondence should be addressed.
Drones 2024, 8(6), 239; https://doi.org/10.3390/drones8060239
Submission received: 29 April 2024 / Revised: 30 May 2024 / Accepted: 30 May 2024 / Published: 2 June 2024
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)

Abstract

The extraction of UAV building point cloud contour points is the basis for the lightweight expression of three-dimensional building outlines. Previous unmanned aerial vehicle (UAV) building point cloud contour extraction methods have mainly focused on the expression of the roof contour, but did not extract the wall contour. In view of this, an algorithm based on the geometric features of the neighborhood points of region-growing clustering fusion surfaces is proposed to extract the boundary points of the UAV building point cloud. Firstly, the region growth planes are fused to obtain more accurate segmentation planes. Then, the neighboring points are projected onto the neighborhood plane, and a vector between the object point and each neighborhood point is constructed. Finally, the azimuth of each vector is calculated, and the boundary points of each segmented plane are extracted according to the difference in adjacent azimuths. Experimental results show that the best boundary points are extracted when the number of adjacent points is 24 and the adjacent azimuth difference threshold is 120°. The proposed method is superior to other methods in the contour extraction of UAV building point clouds. Moreover, it can extract not only the building roof contour points, but also the wall contour points, including the window contour points.

1. Introduction

With the continuous development of airborne laser scanning technology for unmanned aerial vehicles (UAVs), it has been widely used in three-dimensional urban scene reproduction. Buildings are the main components of a city, and their outlines contain information about their location and shape. The building contour is mainly obtained from images or a three-dimensional point cloud. A building contour obtained from a 2D image cannot accurately express the geometric scale information of the outline, whereas a building contour based on a 3D point cloud has the advantage of including geometric features. Building point cloud contour points can be used as a kind of prior shape information to infer the building structure and assist the rapid 3D model reconstruction of the building [1,2]. Therefore, many researchers have studied the extraction of contour points from building point clouds. At present, the extraction methods of point cloud contour points mainly include indirect methods and direct methods. The indirect method converts the point cloud into a segmented plane or image, and then obtains the point cloud contour points by the contour extraction method of the plane or image boundary [3]. The direct method obtains the contour points directly by using the neighborhood feature information of the point cloud [4]. The disadvantage of the first method is that high-quality images are required, and the image quality is greatly affected by the point cloud density. High-density point clouds can be converted into high-quality images, but the density of wall point clouds collected by a UAV is very low, so this method cannot be used to extract complete building contours. Simultaneously, the variant of the indirect method based on point cloud plane segmentation mainly relies on the α-shape method, whose disadvantage is that the boundary of a sparse plane point cloud cannot be accurately obtained. The second method is greatly affected by point cloud accuracy; UAV point cloud accuracy is low, which directly affects the accuracy of the neighborhood point feature calculation, and in turn the contour point extraction accuracy.
In view of the shortcomings of the above two methods, this paper draws on their advantages and proposes a novel boundary point extraction method based on the neighborhood geometric features of a multi-level growth plane. Firstly, a plane segmentation method based on the multi-level fusion of region growth planes is proposed and applied to robust UAV point cloud plane segmentation. Secondly, a 3D-to-2D geometric azimuth feature representation method for neighborhood points is proposed and applied to the boundary point extraction of each segmented plane point cloud. Finally, the boundary points of the whole UAV point cloud are obtained from the boundary points of each segmented point cloud.

2. Related Works

Indirect method: Jarzabek-Rychard (2012) first converted the LiDAR point cloud into an image, and then used Random Sample Consensus (RANSAC) to extract the edge information of the image [5]. By the same token, the method of converting point clouds to images has been used to obtain the building contour from the boundary information of the image [6]. Since the outline of a building presents a three-dimensional distribution, contour extraction methods based on two-dimensional images are only suitable for the extraction of a single surface boundary. In view of this, some scholars carry out plane segmentation of three-dimensional building point clouds to obtain multiple plane point clouds, convert the plane point clouds into images [7], and finally use image boundary extraction methods to obtain the building outline [8], such as the Line Segment Detector (LSD) method [9] and the Canny algorithm [10]. Bazazian et al. [11] use region growth combined with geodesic distance for the region segmentation of point clouds, and define a multi-scale operator to determine which feature points are continuous. By the same token, the normal vector difference of adjacent points can be used as a multi-scale operator to detect feature points [12]. Xu et al. [13] used the clustering characteristics of neighborhood normal vectors to segment surfaces, then merged surfaces based on the surface normal vector and roughness to achieve accurate surface segmentation, and finally extracted the boundaries of the segmented surfaces with a boundary extraction method. The indirect method needs to convert point clouds from three-dimensional space into a two-dimensional raster image and then extract the building outline. Although this reduces the complexity of the extraction, it is difficult to determine an appropriate grid size, and the accuracy of the extraction results is relatively low [14].
Direct method: The advantage of the direct point cloud-based approach is that surface reconstruction is not required [15]. Gumhold et al. [15] used neighborhood features to identify feature points; this method only requires the inexpensive computation of a neighbor graph connecting neighboring points. Based on point cloud neighborhood information, researchers have used principal component analysis to extract the features of unorganized point clouds [16]. Demarsin et al. [17] used the normal vectors of point clouds to find boundary information on different surfaces, and then obtained the sharp features of three-dimensional objects. The accuracy of normal vector estimation directly affects the quality of point cloud feature extraction; however, many existing methods are unable to reliably estimate the normal vectors of points around sharp features. In view of this, Wang et al. [18] constructed an anisotropic neighborhood model based on the geometric characteristics of points near sharp features and proposed a robust normal vector estimation method. Sampath et al. [19] proposed a potential-based fuzzy K-means clustering method and clustered the normal vectors of neighborhood points considering clustering similarity based on geometry and topology; the intersections between the segmented planes are then obtained to extract the contour points of the building roof. Weber et al. [20] used Gaussian mapping to cluster the triangle normal vectors formed by neighboring points and identified feature points according to statistics on the number of clusters. Widyaningrum et al. [21] study only the contour extraction of UAV building roof point clouds: the method first classifies the building point cloud and obtains the roof point cloud of each building, then obtains the boundary points of each roof using the alpha-shape method [22], and finally constructs medial axis transform descriptors to obtain the building outline. Zhang et al. [23] used a Poisson distribution to adaptively calculate different thresholds for different local features, so as to achieve the adaptive growth of the same surface, while regional information analysis is used to cluster the boundary points. The tensor voting algorithm is one of the most commonly used computational methods for perceptual feature extraction [24]. Contour extraction based on this method first obtains the voting tensor of each point from the point cloud neighborhood information; the feature weights of the points are then calculated and the local structure is inferred by eigenvalue analysis of the tensor; finally, sharp features are identified according to the change in the eigenvalues of each point [25]. The disadvantage of feature point extraction based on the voting mechanism is that it consumes a significant amount of computing time. In view of this, plate tensor voting and ball tensor voting mechanisms were proposed [26], which reduce the complexity and heavy computational burden of traditional tensor voting for extracting feature points from unorganized point clouds. Chen et al. [27] proposed an improved k-d tree method and calculated the point cloud normal vectors based on this algorithm; the field force and criterion of the normal vector are constructed, and the boundary feature points are extracted based on them. Soon afterwards, Chen et al. [28] proposed an efficient global constraint approach for the robust extraction of contour feature points from point clouds. In this method, the contour points are divided into boundary and folding points, and the contour points are extracted according to the distribution characteristics of the boundary and folding points in the neighborhood.

3. Proposed Method

Since UAV point cloud contour points are the points where different surfaces intersect or the boundary points of a surface, this paper uses surface segmentation and a boundary point extraction algorithm to extract them. The proposed method mainly includes two steps: (1) The region growth algorithm is improved, and the segmented point clouds obtained by the region growth algorithm are fused several times to obtain more accurate segmentation data. (2) The distribution characteristics of neighborhood points are used to extract the boundary points of each segmented plane. The overall framework of the method is shown in Figure 1.

3.1. Multilevel Fusion of Regional Growth

Using region growth for plane segmentation requires calculating the geometric features of each point (i.e., normal vector and curvature). The region growth process needs thresholds on the normal vector angle and on the curvature, and these thresholds directly affect the precision of the region growth segmentation planes. Threshold values that are too large will cause planes with different features to be merged together, while values that are too small will cause a single plane to be over-segmented. In view of this, a fusion approach for region growth segmentation planes is proposed in this paper. The method mainly includes two steps: original region growth and the multilevel fusion of segmented planes.

3.1.1. Geometric Features of Adjacent Points

For the point cloud, the geometric features of adjacent points need to be determined before conducting the region growth. Assume the point cloud is expressed as $P = \{(x_i, y_i, z_i)\},\ i = 1, 2, \ldots, n$. For each individual point, its $k$ adjacent points are used to determine the geometric features of the object point. Assume that the $k$ adjacent points are expressed as $q = \{(x_i, y_i, z_i)\},\ i = 1, 2, \ldots, k$, and that the center point of the $k$ adjacent points is $\bar{p} = (\bar{x}, \bar{y}, \bar{z})$. The positive semi-definite matrix $B_{3\times3}$ is constructed from the $k$ adjacent points and the center point, as described in Equation (1).

$$B_{3\times3} = (q - \bar{p})^{T}(q - \bar{p}) \tag{1}$$
By eigenvalue decomposition of $B_{3\times3}$, three eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ and three eigenvectors $e_1, e_2, e_3$ are obtained. According to the three eigenvalues, the curvature $K$ of each point is obtained by

$$K = \frac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3} \tag{2}$$

where $\lambda_1, \lambda_2, \lambda_3$ are the three eigenvalues of matrix $B_{3\times3}$.
The eigenvector $e_3$ corresponding to the minimum eigenvalue $\lambda_3$ is the normal vector of the tangent plane of the object point [29]. According to the normal vector $e_3$, the angle between the tangent planes of adjacent points can be obtained, as described in Equation (3).

$$\alpha_{12} = \arccos\left(\frac{e_{31} \cdot e_{32}}{\lVert e_{31} \rVert \, \lVert e_{32} \rVert}\right) \tag{3}$$

where $e_{31}$ and $e_{32}$ are the normal vectors of the tangent planes of two adjacent points.
According to Equations (2) and (3), the geometric features of adjacent points are obtained, as shown in Equation (4).

$$f_1 = \lvert K_1 - K_2 \rvert, \qquad f_2 = \alpha_{12} \tag{4}$$

where $K_1$ and $K_2$ are the curvatures of two adjacent points.
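To make Equations (1), (2), and (4) concrete, the following sketch (our own illustration in Python with numpy; the function name and interface are not from the paper's implementation) estimates the normal vector and curvature of a point from its k adjacent points:

```python
import numpy as np

def normal_and_curvature(neighbors):
    """Estimate the tangent-plane normal and curvature of an object point
    from its k adjacent points (Equations (1) and (2)).

    neighbors: (k, 3) array of the adjacent points q.
    Returns (e3, K): unit normal vector and curvature.
    """
    p_bar = neighbors.mean(axis=0)        # center point of the neighborhood
    Q = neighbors - p_bar                 # rows are q - p_bar
    B = Q.T @ Q                           # 3x3 matrix of Equation (1)
    eigvals, eigvecs = np.linalg.eigh(B)  # eigh returns eigenvalues in ascending order
    lam3, lam2, lam1 = eigvals            # so lam3 <= lam2 <= lam1
    e3 = eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue = normal
    K = lam3 / (lam1 + lam2 + lam3)       # curvature of Equation (2)
    return e3, K
```

The pairwise features of Equation (4) then follow directly: $f_1$ is the absolute curvature difference of two adjacent points, and $f_2$ is the arccosine of the normalized dot product of their normals, as in Equation (3).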

3.1.2. Multilevel Fusion of Different Planes

According to Equation (4), the region growth algorithm [30] is performed, and the segmented planes are obtained. The original segmented planes are collected into the dataset $\Phi = \{P_1^s, P_2^s, \ldots, P_l^s\}$, sorted by the number of points on each segmented plane so that $n_1 \ge n_2 \ge \ldots \ge n_l$. For each segmented plane, the plane equation is constructed, as described in Equation (5).

$$ax + by + cz + d = 0 \tag{5}$$

where $(a, b, c)$ is the normal vector $e_p$ of the plane, which can be obtained by Equation (1). $d$ is obtained by

$$d = -a\bar{x} - b\bar{y} - c\bar{z} \tag{6}$$

where $(\bar{x}, \bar{y}, \bar{z})$ is the center point of each segmented plane.
According to Equation (5), the geometric feature parameters $(a, b, c, d)$ of each segmented plane are obtained. When the number of points is small, plane fitting parameters with high precision cannot be obtained. Therefore, a threshold on the number of points $n_i$ in each segmented plane is used to decide whether the plane is fitted, as shown in Equation (7).

$$\text{parameter} = \begin{cases} (a, b, c, d), & \text{if } n_i > n_{thr} \\ \text{None}, & \text{if } n_i \le n_{thr} \end{cases} \tag{7}$$

Generally, $n_{thr} = 30$.
For the segmented plane dataset, since $n_1 \ge n_2 \ge \ldots \ge n_l$, the geometric feature relationship between each rear segmented plane and the front segmented planes is judged step by step, and the rear plane is fused into the front plane according to this relationship. For example, for segmented plane $P_1^s$, the geometric feature relationships of the other segmented planes $P_2^s, \ldots, P_l^s$ with $P_1^s$ are calculated, and the matching planes are fused into $P_1^s$. In view of this, the geometric feature relationships between different planes are constructed. According to Equation (5), the angle and distance are used to construct these relationships, as described in Equation (8).

$$\beta_{ji} = \arccos\left(\frac{e_{pi} \cdot e_{pj}}{\lVert e_{pi} \rVert \, \lVert e_{pj} \rVert}\right), \qquad d_{ji} = \lvert a_i \bar{x}_j + b_i \bar{y}_j + c_i \bar{z}_j + d_i \rvert, \qquad i < j \tag{8}$$

where $e_{pi}$ and $e_{pj}$ are the normal vectors of planes $i$ and $j$, respectively, $(\bar{x}_j, \bar{y}_j, \bar{z}_j)$ is the center point of segmented plane $j$, and $(a_i, b_i, c_i, d_i)$ are the parameters of plane $i$.
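A minimal sketch of Equations (5)–(8), reusing normal_and_curvature from the previous sketch (the helper names and the use of the absolute dot product, which makes the angle insensitive to the arbitrary sign of estimated normals, are our own choices):

```python
import numpy as np

def plane_parameters(points, n_thr=30):
    """Fit the plane a*x + b*y + c*z + d = 0 to a segmented plane
    (Equations (5) and (6)); return None for small planes (Equation (7))."""
    if len(points) <= n_thr:
        return None
    normal, _ = normal_and_curvature(points)  # unit normal from Equation (1)
    d = -normal @ points.mean(axis=0)         # Equation (6)
    return np.append(normal, d)               # (a, b, c, d)

def plane_relationship(param_i, points_j):
    """Angle (degrees) and distance between plane i and plane j (Equation (8))."""
    normal_i, d_i = param_i[:3], param_i[3]
    normal_j, _ = normal_and_curvature(points_j)
    cos_ang = np.clip(abs(normal_i @ normal_j), 0.0, 1.0)
    beta_ji = np.degrees(np.arccos(cos_ang))   # angle between the two plane normals
    center_j = points_j.mean(axis=0)
    d_ji = abs(normal_i @ center_j + d_i)      # distance from the center of j to plane i
    return beta_ji, d_ji
```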
According to Equation (8), the rear plane can be fused to the front. In order to realize the fine fusion of different segmented planes, the multilevel fusion approach of different planes is proposed. First, the lower thresholds for distance and angle are set to achieve the initial fusion of different planes. Second, the angle and distance thresholds are increased until reaching the maximum set threshold and the loop stops; thus, the multilevel fusion of the segmented plane is completed, as shown in Figure 2. The pseudo code of the multilevel fusion of segmented planes is obtained, as described in the following algorithm.
Algorithm 1 The multilevel fusion algorithm.
Notation:
PC: point cloud.
k: the number of adjacent points used to calculate normal vectors and curvatures.
Num: the number of points of region growth.
Cth: the curvature threshold for initial region growth.
Ath: the angle threshold for initial region growth.
Dth: the distance threshold for the multilevel fusion of different planes.
Max_Ath: the maximum angle threshold for the multilevel fusion of different planes.
Max_Dth: the maximum distance threshold for the multilevel fusion of different planes.
ΔAth: the increment of the angle threshold per fusion level.
ΔDth: the increment of the distance threshold per fusion level.
Φ: the set of segmented planes after multilevel fusion.
Input: PC
Output: Φ
1. Calculate the normal vector and curvature of each point according to Section 3.1.1.
2. Use the KNN (K-nearest neighbor) search algorithm to conduct region growth according to the geometric features of adjacent points: if $f_2 < Ath$, the point is added to the growing region, and if additionally $f_1 < Cth$, the grown point becomes a new seed point.
3. Sort the set of segmented planes ($\Phi = \{P_1^s, P_2^s, \ldots, P_l^s\}$, $n_1 \ge n_2 \ge \ldots \ge n_l$).
4. Obtain the parameters $(a, b, c, d)$ of each plane according to Section 3.1.2.
5. Obtain the geometric feature relationships between different planes according to Equation (8).
while Ath < Max_Ath and Dth < Max_Dth
  Sort the segmented planes: $\Phi = \{P_1^s, P_2^s, \ldots, P_l^s\}$, $n_1 \ge n_2 \ge \ldots \ge n_l$
  for each segmented plane $P_i^s$
    Calculate $d_{ji}$ and $\beta_{ji}$ for every subsequent plane $P_j^s$
    if $d_{ji} < Dth$ and $\beta_{ji} < Ath$
      Fuse the two planes: $P_i^s = \{P_i^s, P_j^s\}$
    end if
  end for
  Ath = Ath + ΔAth; Dth = Dth + ΔDth
end while
$\Phi = \{P_1^s, P_2^s, \ldots, P_m^s\}$
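The fusion loop of Algorithm 1 (step 5 onward) could be rendered in Python roughly as follows; this is a sketch under the assumption that `planes` is a list of (n, 3) point arrays and that `plane_parameters` and `plane_relationship` from the previous sketches are available, with illustrative threshold values rather than the ones used in the experiments:

```python
import numpy as np

def multilevel_fusion(planes, a_th=5.0, d_th=0.05,
                      max_a_th=20.0, max_d_th=0.20,
                      delta_a=5.0, delta_d=0.05):
    """Multilevel fusion of region-growing planes (Algorithm 1)."""
    while a_th < max_a_th and d_th < max_d_th:
        planes.sort(key=len, reverse=True)        # larger planes first, as in step 3
        i = 0
        while i < len(planes):
            param_i = plane_parameters(planes[i])
            if param_i is None:                   # too small to fit reliably, Eq. (7)
                i += 1
                continue
            j = i + 1
            while j < len(planes):                # try to fuse every rear plane into plane i
                beta, dist = plane_relationship(param_i, planes[j])
                if beta < a_th and dist < d_th:
                    planes[i] = np.vstack([planes[i], planes.pop(j)])
                    param_i = plane_parameters(planes[i])   # refit after fusion
                else:
                    j += 1
            i += 1
        a_th += delta_a                           # relax thresholds for the next level
        d_th += delta_d
    return planes
```

Each pass fuses rear planes into front planes under the current thresholds, then loosens the thresholds, so coplanar fragments that survive a strict pass are still picked up by a later, more permissive one.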
As can be seen from the rectangular box in Figure 2a, the initial region growth did not completely segment the wall. The initial plane fusion aggregates some of the wall's plane point clouds, so the area of the segmented wall plane is larger than that of the initial region growth, as shown in the rectangular box of Figure 2b. After the multilevel fusion of planes, almost the entire wall point cloud is successfully segmented, as shown in the rectangular box of Figure 2c.

3.2. Extraction of Boundary Points for Each Plane

For the segmented planes, the geometric feature distribution of adjacent points is used to extract boundary points, as shown in Figure 3.
First, the K-nearest neighbor search (KNNS) is used to find the adjacent points $(x_j, y_j, z_j),\ j = 1, 2, \ldots, k$ of each point. According to the adjacent points, the projection plane $\Pi$ is constructed, as shown in Equation (9).

$$a_b x + b_b y + c_b z + d_b = 0 \tag{9}$$

where $(a_b, b_b, c_b)$ is the normal vector $V_b$ of the projection plane.
According to the projection plane, the projection coordinates $(x'_j, y'_j, z'_j),\ j = 1, 2, \ldots, k$ of each neighboring point on the projection plane can be obtained, as illustrated in Equation (10).

$$\begin{aligned} x'_j &= \frac{(b_b^2 + c_b^2)\,x_j - a_b (b_b y_j + c_b z_j + d_b)}{a_b^2 + b_b^2 + c_b^2} \\ y'_j &= \frac{(a_b^2 + c_b^2)\,y_j - b_b (a_b x_j + c_b z_j + d_b)}{a_b^2 + b_b^2 + c_b^2} \\ z'_j &= \frac{(a_b^2 + b_b^2)\,z_j - c_b (a_b x_j + b_b y_j + d_b)}{a_b^2 + b_b^2 + c_b^2} \end{aligned} \tag{10}$$

where $(x_j, y_j, z_j),\ j = 1, 2, \ldots, k$ are the original adjacent points.
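In vector form, Equation (10) simply removes the component of each point along the plane normal; a short numpy sketch (our own helper, not code from the paper):

```python
import numpy as np

def project_onto_plane(points, plane):
    """Project points onto the plane a*x + b*y + c*z + d = 0 (Equation (10)).

    points: (k, 3) array; plane: array-like (a, b, c, d).
    """
    n, d = np.asarray(plane[:3], dtype=float), float(plane[3])
    t = (points @ n + d) / (n @ n)   # signed offset of each point from the plane
    return points - np.outer(t, n)   # equivalent to the per-coordinate form of Eq. (10)
```

Expanding `points - np.outer(t, n)` coordinate by coordinate reproduces exactly the three fractions of Equation (10).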
The vector $V_{oj}$ between the object point and each of its neighboring points is constructed using the projection coordinates, as shown in Equation (11).

$$V_{oj} = (x'_j - x_o,\ y'_j - y_o,\ z'_j - z_o) \tag{11}$$

where $(x_o, y_o, z_o)$ is the object point coordinate.
The moduli of the vectors are calculated, and the unit vector $V_{ok}$ of the vector with the longest modulus is taken as the X-axis. The Y-axis is then obtained from the cross product with the plane normal, so that the unit vectors of the X- and Y-axes on the plane $\Pi$ are obtained, as shown in Equation (12):

$$V_X = V_{ok}, \qquad V_Y = \frac{V_{ok} \times V_b}{\lVert V_{ok} \times V_b \rVert} \tag{12}$$

where $V_{ok}$ is the unit vector of the vector with the farthest distance between the object point and its neighboring points, and $V_b$ is the normal vector of the projection plane.
According to Equation (12), the lengths of the projections of the vector $V_{oj}$ on the X- and Y-axes are obtained.

$$L_{ojX} = V_X \cdot V_{oj}, \qquad L_{ojY} = V_Y \cdot V_{oj} \tag{13}$$

where $V_{oj}$ is the vector formed by the object point and a neighboring point on the projection plane.
According to Equation (13), the azimuth of vector $V_{oj}$ in the XY coordinate system is obtained.

$$\alpha_j = \arctan\left(\frac{L_{ojY}}{L_{ojX}}\right) \tag{14}$$

Then, all azimuths $\alpha_j$ are sorted, and the difference between two adjacent azimuths is obtained, as described in Equation (15).

$$\Delta\alpha_j = \alpha_j - \alpha_{j-1} \tag{15}$$
According to Equation (15), the azimuth difference threshold $thr_{\Delta\alpha}$ is set, and the boundary points can be extracted:
If $\Delta\alpha_j > thr_{\Delta\alpha}$ for some $j$, the object point is a boundary point.
If $\Delta\alpha_j \le thr_{\Delta\alpha}$ for all $j$, the object point is not a boundary point.
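Putting Equations (11)–(15) together, a boundary test for a single point might look like the following sketch (reusing project_onto_plane from above; the use of arctan2 instead of the paper's arctan, which resolves the full 360° range of azimuths, and the wrap-around gap that closes the circle of sorted azimuths are our own implementation choices):

```python
import numpy as np

def is_boundary_point(point, neighbors, plane, thr_deg=120.0):
    """Test whether `point` is a boundary point of its segmented plane
    (Equations (11)-(15)). neighbors: (k, 3) array of the k adjacent
    points on the same plane, excluding the object point itself."""
    proj = project_onto_plane(neighbors, plane)        # Equation (10)
    p0 = project_onto_plane(point[None, :], plane)[0]  # object point on the plane
    v = proj - p0                                      # vectors V_oj, Equation (11)
    v_ok = v[np.argmax(np.linalg.norm(v, axis=1))]     # longest V_oj
    vx = v_ok / np.linalg.norm(v_ok)                   # X-axis, Equation (12)
    n = plane[:3] / np.linalg.norm(plane[:3])
    vy = np.cross(vx, n)
    vy /= np.linalg.norm(vy)                           # Y-axis, Equation (12)
    az = np.sort(np.degrees(np.arctan2(v @ vy, v @ vx)))  # azimuths, Eqs. (13)-(14)
    gaps = np.diff(az)                                 # adjacent differences, Eq. (15)
    gaps = np.append(gaps, 360.0 - (az[-1] - az[0]))   # wrap-around gap
    return gaps.max() > thr_deg                        # interior points see no large gap
```

With the optimal parameters found in Section 4.2.1, this test would be run with k = 24 neighbors and thr_deg = 120.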
According to the above steps, the boundary points of different segmented planes are obtained, as shown in Figure 4.
From Figure 4, it is visible that the boundary points of each segmented plane of the UAV building point cloud are extracted. Figure 4a,b shows the roof planes, for which the point density is relatively high. Therefore, the extracted boundary points have a good effect, as shown in Figure 4d,e. For the second roof, the extracted boundary points contain redundant points, as shown in the oval box of Figure 4e. The reason for this phenomenon is that a few points are absent in the second roof, leaving a void, as shown in the oval box of Figure 4b. Although the point cloud on the wall of the UAV building is messier and sparser than the point cloud on the roof, the proposed method can still extract the wall contour, especially the profile of the window, as shown in Figure 4f. However, the extracted wall boundary points still contain redundant points, as shown in the oval box of Figure 4f. The reason for this is that there are messy and redundant points in the UAV wall point cloud, as shown in the oval box of Figure 4c.
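As a final illustration of how the two stages fit together, a driver for the full pipeline might look like this sketch (assuming a k-nearest-neighbor tree from scipy plus the sketches above; all names are ours, not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_contour_points(planes, k=24, thr_deg=120.0):
    """Extract the boundary points of every fused plane (Sections 3.1-3.2)."""
    fused = multilevel_fusion(planes)          # Section 3.1
    contour = []
    for pts in fused:
        plane = plane_parameters(pts)
        if plane is None:                      # skip planes too small to fit, Eq. (7)
            continue
        tree = cKDTree(pts)
        for p in pts:
            # query k + 1 neighbors because the nearest one is the point itself
            _, nn = tree.query(p, k=min(k + 1, len(pts)))
            if is_boundary_point(p, pts[nn[1:]], plane, thr_deg):  # Section 3.2
                contour.append(p)
    return np.array(contour)
```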

4. Experiment and Analysis

In this section, experiments are conducted to verify the performance of the proposed method. Since the proposed method mainly includes two parts, namely the segmentation of UAV point cloud planes and the extraction of the boundary points of each plane, the performance of these two parts is analyzed separately.

4.1. Plane Segmentation of UAV Building Point Cloud

4.1.1. UAV Data Description

In this paper, the point cloud for the experiment is campus building data collected by a UAV equipped with a RIEGL Mini300 laser scanner (RIEGL, Horn, Austria). The area of point cloud collection is Hubei University of Science and Technology (Xianning Campus), Xianning, Hubei Province, China. The UAV flew at a height of about 120 m during acquisition, and the density of points is about 273 pts/m². The nominal point accuracy is about 0.010 m, but it is inevitably reduced in flight. To quantify the point cloud accuracy, 10 complete planes were selected and fitted, and the fitting errors in the horizontal and vertical directions were obtained; the point cloud accuracy in the horizontal direction ranges from about 0.022 m to 0.039 m, and in the vertical direction from about 0.027 m to 0.053 m. Part of the regional data of the campus scene collected by the UAV is shown in Figure 5.
Since the UAV collects the point cloud data from the top, the point clouds of buildings in a horizontal direction are complete, especially the roofs. However, the wall points of the building are very sparse, and sometimes they are even all missing, as shown in Figure 6.
From Figure 6, it is visible that the wall points of the building are very sparse, or even entirely missing. This will influence the accuracy of plane segmentation, and in turn affect the quality of building contour extraction.

4.1.2. Plane Segmentation Procedure of UAV Buildings Point Cloud

Three UAV building point clouds were extracted manually, and the proposed method was used to segment the three buildings. The segmented planes with more than 100 points were extracted, and the number of plane categories with more than 100 points after segmentation and the total number of points were obtained, as shown in Table 1.
From Table 1, it is visible that, when the number of points of each plane is greater than 100, the number of planes obtained by the region growth method is the greatest, while the number of planes obtained by multilevel fusion after region growth is the smallest. Simultaneously, the region growth method retains the fewest total plane points, while the method in this paper retains the most. This illustrates that multilevel fusion after region growth can aggregate the remaining plane points left over by region growth, and thus play the role of fine segmentation.
For different segmented planes, they are displayed in different colors, as shown in Figure 7.
For the first building, the region growth method is used to conduct the plane segmentation and obtain the different planes, as shown in Figure 7a. It is clearly visible that the different roofs of the building are accurately segmented. However, the wall shows different colors, which means that the entire wall is divided into several sections. In order to fuse such coplanar segments, the proposed fusion algorithm is used to fuse the different planes and obtain the whole wall, as shown in Figure 7b. But some points on the wall are still missing, resulting in a hollow effect on the wall. In view of this, the single-level fusion is improved, and the multilevel fusion of plane segmentation is proposed to fuse the remaining small segmented planes and obtain the entire wall plane, as shown in Figure 7c. For the second building, the same steps are performed to conduct the plane segmentation and obtain the different planes. From the comparison of Figure 7d–f, it is visible that single-level fusion after region growth is superior to region growth alone, and multilevel fusion after region growth is superior to the other two methods. Especially after single-level and multi-level fusion, the integrity of the wall keeps improving, as shown in the rectangle boxes of Figure 7d–f. By the same token, the segmentation results of the third building are obtained. From the rectangle boxes in Figure 7g–i, it is visible that the integrity and density of the side wall segmented by the method in this paper are better than with the other two methods. From the above analysis, it can be seen that multilevel fusion of plane segmentation after region growth has the best effect on UAV building point cloud segmentation.

4.1.3. Segmentation Results of UAV Different Buildings Point Cloud

Some buildings within the campus scene are extracted, as shown in Figure 8. There are 15 buildings in the area. The proposed method is used to segment the point clouds of these 15 buildings, and the segmentation result is obtained, as shown in Figure 9.
The different colors in Figure 9 represent the different segmented planes. Since the point clouds on the roofs of the buildings collected by the UAV are very dense, it can be seen from the different colors in Figure 9 that the method in this paper successfully segments all the roofs. Moreover, pitched roofs with a small slope are successfully segmented. Simultaneously, some small subroofs above a roof are segmented by the proposed method, as shown in the rectangle box of Figure 9. Although the point clouds of the walls collected by the UAV are very sparse, the method in this paper still segments them.

4.2. Boundary Points Extraction of UAV Building Point Cloud

In this section, the boundary point extraction method proposed in this paper is used to extract the boundary points of the segmented plane. In order to evaluate the performance of the proposed boundary point extraction method, it is compared with other boundary point extraction methods.

4.2.1. Parameter Analysis

The boundary point extraction method proposed in this paper involves two parameters: the number of neighbor points $k$ and the azimuth difference threshold $thr_{\Delta\alpha}$. Since the UAV point cloud data include dense roof data and sparse wall data, the boundary points of the dense roof and the sparse wall are extracted with different parameters. For the number of neighbor points $k$, different $k$ values are used to extract the boundary points of the dense roof and sparse wall point clouds.
Figure 10 shows the extraction results of the dense roof point cloud boundary points for different $k$ values. It can be clearly seen from Figure 10 that, when $k$ is small, the extracted boundary points contain more redundant points. As the value of $k$ increases, the number of redundant points decreases. When $k$ increases to 20, the region surrounded by boundary points contains very few redundant points, as shown in Figure 10c. When $k \ge 24$, the extracted boundary points contain no redundant points. From Figure 10d–f, it is visible that the extraction effect of boundary points at $k = 24$ is similar to that at $k = 28$ and $k = 32$. However, the larger the $k$ value, the higher the computational cost. Therefore, $k = 24$ is selected.
For the sparse wall point cloud, different k values are used to extract the boundary points, as shown in Figure 11.
From Figure 11, it is visible that, the smaller the value of k, the more redundant points are retained, and the larger the value of k, the fewer redundant points are retained. When k = 20 , the extracted boundary points contain few redundant points and can extract obscure boundary points, as shown in Figure 11c. When k = 24 , although some obscure boundary points will be lost, the extracted boundary points basically do not contain redundant points, as shown in Figure 11d. However, when k > 24 , although the extracted boundary points do not contain redundant points, the missing obscure boundary points will continue to increase. Simultaneously, the larger the k value, the higher the computational cost. Therefore, k = 24 is taken as the optimal value for extracting sparse point cloud boundary points.
Different $thr_{\Delta\alpha}$ values are used to extract the boundary points of the dense roof point cloud, as shown in Figure 12.
From Figure 12, it is clearly visible that, the larger $thr_{\Delta\alpha}$ is, the fewer redundant points are retained. Certainly, with a very large $thr_{\Delta\alpha}$ value, some boundary points will be lost, as shown in Figure 12f. When $thr_{\Delta\alpha} = 120°$, the extracted boundary points do not contain redundant points.
For the sparse wall point cloud, different $thr_{\Delta\alpha}$ values are also used to extract the boundary points, as shown in Figure 13.
In the same way, when the value of $thr_{\Delta\alpha}$ is small, more redundant points are retained, as shown in Figure 13a. As the value of $thr_{\Delta\alpha}$ increases, the number of redundant points decreases, but boundary points begin to be lost. For example, $thr_{\Delta\alpha} = 100°$ preserves significantly more boundary points than $thr_{\Delta\alpha} = 120°$, as shown in the comparison of Figure 13b,c. Moreover, when the value of $thr_{\Delta\alpha}$ is very large, a large number of boundary points cannot be preserved, as shown in Figure 13f. From the above analysis, the value of $thr_{\Delta\alpha}$ should be around 120°.

4.2.2. Performance Analysis of the Boundary Points Extraction

In order to verify the performance of the proposed boundary point extraction method, it is used to extract the boundary points of the segmented planes and compared with the α-shape method [31], as shown in Figure 14.
The different colors in Figure 14a,d,g represent the different segmented plane point clouds of the different buildings. By the same token, the different colors in Figure 14b,c,e,f,h,i represent different boundary points. For the first building, the contours of the roof extracted by the α-shape method appear broken, as shown in the rectangle box of Figure 14b. As can be seen from Figure 14c, the proposed method preserves the roof contour points well. In addition, the α-shape method fails to preserve the outlines of the windows on the wall, as shown in Figure 14b,e,h. The reason for this phenomenon is that the wall point cloud is very sparse, and the α-shape method is easily affected by the point cloud density. However, the contours extracted by the proposed method include the window contours, as shown in the rectangle box of Figure 14c,f. This illustrates that the proposed method is also suitable for point clouds with sparse density. From Figure 14, it is clearly visible that the method in this paper preserves more complete contour points than the α-shape method.

4.3. Comparison of Different Methods for UAV Point Cloud Contour Extraction

In order to verify the advantage of the proposed method, it is used to extract contour points from UAV building point clouds and compared with other contour point extraction methods. To compare the performance of the different methods, a simple single-roof building, a more complex double-roof building, and the most complex multi-roof building are used for the comparison, as shown in Figure 15.
Figure 15a shows the original point cloud of the simple single-roof building. From Figure 15d, it is visible that the contour points obtained by the region clustering curvature method are rough and contain many redundant points. Simultaneously, the ridge line of the building has not been fully extracted, as shown in the rectangle box of Figure 15d. The reason for this result is that the curvature along some ridge lines is small due to the larger error of the point cloud. The global constraint approach successfully extracted the ridge line, as shown in the rectangle box of Figure 15g. However, the contour points extracted by this method still contain many redundant points. From Figure 15j, it is visible that the outline points of the single-roof building extracted by the proposed method contain fewer redundant points, and the outline points are clearer. For the second building with double roofs, all three methods fail to extract the contour points of the ridge line, as shown in Figure 15e,h,k. The wall points extracted by the region clustering curvature method contain more redundant points, as shown in the rectangle box of Figure 15e, and the global constraint approach cannot extract the window contour points. From Figure 15k, it is visible that the proposed method can extract the window contour points on the wall, and the two roof boundaries are also accurately extracted. For the most complex multi-roof building, although the region clustering curvature and global constraint methods can extract contour points, the expression of the contour points is not particularly fine. From Figure 15f,i, it is visible that the contour points extracted by these two methods contain more redundant points, and some contour line points are not accurately represented. However, the proposed method clearly shows these contour points, as shown in the rectangle box of Figure 15l. From the above comparative analysis, the method proposed in this paper is better than the other two methods in the extraction of UAV building point cloud contour points.
Additionally, the contour points extracted by the proposed method are compared with those of the facet segmentation-based line segment extraction (FSL) method [31], which is similar to the method presented in this paper: both conduct plane segmentation of the point cloud and then determine the contour points by extracting the boundary points of each segmented plane. The contour points of another UAV building point cloud are extracted by these two methods, as shown in Figure 16.
From Figure 16, it is clearly visible that both the proposed method and the FSL method can extract the contour points of the UAV building point cloud. As can be seen from Figure 16b, the contour points extracted by the FSL method mainly retain some obvious contours, but the method fails to retain the wall contours. The reason for this phenomenon is that the sparse wall point cloud affects the extraction performance of the method. However, the advantage of the FSL method is that the preserved main outline contains fewer redundant points, thus making the building outline clearer. From Figure 16c, it is visible that the proposed method retains significantly more contour points than the FSL method. Although a small number of redundant points are included in the outline points extracted by the proposed method, the retained outline points are more complete; in particular, the window contours of the sparse wall point cloud are successfully extracted.

5. Extraction of UAV Point Cloud Contour Points in Large Scenes

The proposed method is used to conduct the extraction of UAV building contour points, as shown in Figure 17.
As can be seen from Figure 17b, the contours of most of the roofs of all the buildings have been successfully extracted, and the roof contours are very clear. Only the contour points of one ridge line have not been extracted, as shown in the rectangle box of Figure 17b. The main reason for this is that the slope of that building's ridge line is very small. As can be seen from the wall contours in Figure 17b, the proposed method can extract not only the main building contour, but also the wall window contours. In particular, some small window outlines can be extracted by the proposed method. However, some window outlines or other small outlines are not particularly continuous; thus, the partial contour points look like redundant points. The main reason for this phenomenon is that the UAV cannot directly scan the wall, resulting in very sparse wall points or missing points in some regions. If the intersection line between the wall and the ground is collected, then the intersection points between the building and the ground can also be extracted by the proposed method, as shown in Figure 17. Therefore, the main factors affecting the quality of contour extraction by the proposed method are the point cloud density and point cloud data quality. From Figure 17, it is clearly visible that the proposed method can extract the contour points of the building point cloud data collected by the UAV.

6. Conclusions

In this paper, a method of UAV point cloud contour extraction is proposed, which combines multi-level plane segmentation with the geometric distribution of neighborhood vector azimuths. For the UAV building point cloud, the sparsity of the wall point cloud affects the plane segmentation effect of region growth; this paper proposes the multi-level fusion of region growth segmentation planes to improve the wall segmentation. A geometric distribution feature model of the vectors between the object point and its neighborhood points is constructed, and the boundary points of each segmented plane are extracted. Experimental results verify that the optimal values of parameters $k$ and $thr_{\Delta\alpha}$ for boundary point extraction are 24 and 120°, respectively. Comparative experiments show that the extraction effect of contour points of the UAV building point cloud by the proposed method is better than that of the α-shape, region clustering curvature, and global constraint approaches. The proposed method can extract not only the contour points of building roofs but also the contour points of wall windows. It can be applied to the extraction of UAV building point cloud contour points.

Author Contributions

X.C.: Writing—original draft, Methodology. Q.A.: Project administration, Supervision. B.Z.: Writing—review, Validation. W.T.: Writing—review, Validation. T.L.: Writing—review, Validation. H.Z.: Validation, Project administration. X.H.: Validation, Supervision. E.O.: Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Fund of the Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake, Ministry of Natural Resources (Grant Nos. MEMI-2021-2022-04, MEMI-2021-2022-11); the National Natural Science Foundation of China (42171428, 62377037, 42271447); the Hubei Natural Science Foundation (2023AFB950, 2024AFB148); in part by the Opening Foundation of the State Key Laboratory of Cognitive Intelligence, iFLYTEK (CIOS-2022SC07); and the CRSRI Open Research Program (Program SN: CKWV20231177/KY).

Data Availability Statement

Data and code will be made available upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. He, Y.; Zhang, C.; Fraser, C.S. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 65–72. [Google Scholar] [CrossRef]
  2. Albers, B.; Kada, M.; Wichmann, A. Automatic extraction and regularization of building outlines from airborne LiDAR point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 555–560. [Google Scholar] [CrossRef]
  3. Zhao, Z.; Duan, Y.; Zhang, Y.; Cao, R. Extracting buildings from and regularizing boundaries in airborne LiDAR data using connected operators. Int. J. Remote Sens. 2016, 37, 889–912. [Google Scholar] [CrossRef]
  4. Tseng, Y.H.; Hung, H.C. Extraction of building boundary lines from airborne LiDAR point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 40, 957–962. [Google Scholar] [CrossRef]
  5. Jarzabek-Rychard, M. Reconstruction of building outlines in dense urban areas based on LiDAR data and address points. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 121–126. [Google Scholar] [CrossRef]
  6. Fazan, A.J.; Dal Poz, A.P. Rectilinear building roof contour extraction based on snakes and dynamic programming. Int. J. Appl. Earth Obs. Geoinf. 2013, 25, 1–10. [Google Scholar] [CrossRef]
  7. Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 102, 172–183. [Google Scholar] [CrossRef]
  8. Lu, X.; Liu, Y.; Li, K. Fast 3D line segment detection from unorganized point cloud. arXiv 2019, arXiv:1901.02532. [Google Scholar]
  9. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef]
  10. Lu, X.; Yao, J.; Li, K.; Li, L. Cannylines: A parameter-free line segment detector. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 507–511. [Google Scholar]
  11. Bazazian, D.; Casas Pla, J.R.; Ruiz Hidalgo, J. Segmentation-based multi-scale edge extraction to measure the persistence of features in unorganized point clouds. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 27 February–1 March 2017; pp. 317–325. [Google Scholar]
  12. Liu, X.; Jin, C. Feature line extraction from unorganized noisy point clouds. J. Comput. Inf. Syst. 2014, 10, 3503–3510. [Google Scholar]
  13. Xu, J.; Zhou, M.; Wu, Z.; Shui, W.; Ali, S. Robust surface segmentation and edge feature lines extraction from fractured fragments of relics. J. Comput. Des. Eng. 2015, 2, 79–87. [Google Scholar] [CrossRef]
  14. Sreevalsan-Nair, J.; Jindal, A.; Kumari, B. Contour extraction in buildings in airborne lidar point clouds using multiscale local geometric descriptors and visual analytics. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2320–2335. [Google Scholar] [CrossRef]
  15. Gumhold, S.; Wang, X.; MacLeod, R.S. Feature Extraction From Point Clouds. In Proceedings of the IMR 2001, Newport Beach, CA, USA, 7–10 October 2001; pp. 293–305. [Google Scholar]
  16. Pauly, M.; Keiser, R.; Gross, M. Multi-scale feature extraction on point-sampled surfaces. In Computer Graphics Forum; Blackwell Publishing, Inc.: Oxford, UK, 2003; Volume 22, pp. 281–289. [Google Scholar]
  17. Demarsin, K.; Vanderstraeten, D.; Volodine, T.; Roose, D. Detection of closed sharp edges in point clouds using normal estimation and graph theory. Comput.-Aided Des. 2007, 39, 276–283. [Google Scholar] [CrossRef]
  18. Wang, Y.; Feng, H.Y.; Delorme, F.É.; Engin, S. An adaptive normal estimation method for scanned point clouds with sharp features. Comput.-Aided Des. 2013, 45, 1333–1348. [Google Scholar] [CrossRef]
  19. Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial lidar point clouds. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1554–1567. [Google Scholar] [CrossRef]
  20. Weber, C.; Hahmann, S.; Hagen, H. Sharp feature detection in point clouds. In Proceedings of the 2010 Shape Modeling International Conference, Aix-en-Provence, France, 21–23 June 2010; pp. 175–186. [Google Scholar]
  21. Widyaningrum, E.; Peters, R.Y.; Lindenbergh, R.C. Building outline extraction from ALS point clouds using medial axis transform descriptors. Pattern Recognit. 2020, 106, 107447. [Google Scholar] [CrossRef]
  22. Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 1983, 29, 551–559. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Geng, G.; Wei, X.; Zhang, S.L.; Li, S.S. A statistical approach for extraction of feature lines from point clouds. Comput. Graph. 2016, 56, 31–45. [Google Scholar] [CrossRef]
  24. Medioni, G.; Tang, C.K.; Lee, M.S. Tensor voting: Theory and applications. In Proceedings of the RFIA 2000, Paris, France, 1–3 February 2000. [Google Scholar]
  25. Park, M.K.; Lee, S.J.; Lee, K.H. Multi-scale tensor voting for feature extraction from unstructured point clouds. Graph. Models 2012, 74, 197–208. [Google Scholar] [CrossRef]
  26. Lin, H.B.; Wei, W.; Shao, Y.C.; Dong, L. Feature extraction from unorganized point cloud based on analytical tensor voting. J. Graph. 2017, 38, 137–143. [Google Scholar]
  27. Chen, X.; Liu, Q.; Yu, K. A point cloud feature regularization method by fusing judge criterion of field force. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2994–3006. [Google Scholar] [CrossRef]
  28. Chen, X.J.; Zhao, B.F. An efficient global constraint approach for robust contour feature points extraction of point cloud. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5703816. [Google Scholar] [CrossRef]
  29. Dong, Z.; Yang, B.; Hu, P.; Scherer, S. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 137, 112–133. [Google Scholar] [CrossRef]
  30. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  31. Lin, Y.; Wang, C.; Chen, B.; Zai, D.; Li, J. Facet segmentation-based line segment extraction for large-scale point clouds. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4839–4854. [Google Scholar] [CrossRef]
  32. Wang, X.H.; Chen, H.W.; Wu, L.S. Feature extraction of point clouds based on region clustering segmentation. Multimed. Tools Appl. 2020, 79, 11861–11889. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed method.
Figure 2. Multilevel fusion of region growth, (a) initial region growth, (b) initial fusion of planes, (c) final fusion of planes.
Figure 3. Boundary points extraction process.
Figure 4. Boundary points of different segmented planes, (a–c) are segmented plane point clouds; (d–f) are the boundary points of the different plane point clouds.
Figure 5. Part of the campus scene from top view collected by UAV.
Figure 6. Part of the building facade point cloud from side view.
Figure 7. Segmentation results of three UAV building point clouds, (a,d,g) are segmentation results of the region growth method; (b,e,h) are segmentation results of single-level fusion of region growth plane segmentation; (c,f,i) are segmentation results of multilevel fusion of region growth plane segmentation.
Figure 8. Original colored point cloud of the 15 buildings collected by the UAV.
Figure 9. Segmentation results of the point clouds of the 15 buildings; different colors represent different segmented plane point clouds.
Figure 10. Extraction results of boundary points of the dense roof point cloud under different $k$ values, (a–f) are the extraction results of boundary points when $k$ = 12, 16, 20, 24, 28, 32, respectively.
Figure 11. Extraction results of boundary points of the sparse wall point cloud under different $k$ values, (a–f) are the extraction results of boundary points when $k$ = 12, 16, 20, 24, 28, 32, respectively.
Figure 12. Extraction results of boundary points of the dense roof point cloud under different $thr_{\Delta\alpha}$ values, (a–f) are the extraction results of boundary points when $thr_{\Delta\alpha}$ = 80°, 100°, 120°, 140°, 160°, 180°, respectively.
Figure 13. Extraction results of boundary points of the sparse wall point cloud under different $thr_{\Delta\alpha}$ values, (a–f) are the extraction results of boundary points when $thr_{\Delta\alpha}$ = 80°, 100°, 120°, 140°, 160°, 180°, respectively.
Figure 14. Extraction results of the boundary points of the segmented planes by different methods, (a,d,g) segmented planes, (b,e,h) extraction results of the α-shape method, (c,f,i) extraction results of the proposed method.
Figure 15. Comparison of contour point extraction by different methods, (a–c) three building point clouds, (d–f) region clustering curvature method [32], (g–i) global constraint approach [28], (j–l) proposed method.
Figure 16. Comparison of contour point extraction by different methods, (a) original UAV building point cloud, (b) FSL method [31], (c) proposed method.
Figure 17. Extraction results of large scene building points, (a) original UAV data, (b) contour points.
Table 1. Segmentation results of the three buildings.

Method                                    Metric                       Building 1   Building 2   Building 3
Region growth                             Number of segmented planes   24           27           37
                                          Total points                 144,847      62,818       84,529
Single-level fusion after region growth   Number of segmented planes   22           17           25
                                          Total points                 153,984      64,928       88,101
Multilevel fusion after region growth     Number of segmented planes   18           15           24
                                          Total points                 161,803      66,991       90,597

