Article

An Improved Large Planar Point Cloud Registration Algorithm

Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2696; https://doi.org/10.3390/electronics13142696
Submission received: 30 April 2024 / Revised: 29 June 2024 / Accepted: 4 July 2024 / Published: 10 July 2024
(This article belongs to the Section Computer Science & Engineering)

Abstract:
The traditional Iterative Closest Point (ICP) algorithm often suffers from low computational accuracy and efficiency in certain scenarios. It is highly sensitive to the initial pose, has a poor ability to resist interference, and frequently becomes trapped in local optima. Accurately extracting feature points from partially overlapping point clouds with weak three-dimensional features, such as smooth planes or surfaces with low curvature, is challenging, which makes registration with the traditional ICP algorithm alone unreliable. This research introduces a “First Rough then Precise” registration strategy. Initially, the target position is extracted in complex environments using an improved clustering method, which simultaneously reduces the impact of environmental factors and noise on registration accuracy. Subsequently, an improved method for calculating normal vectors is applied to the Fast Point Feature Histogram (FPFH) to extract feature points, providing data for the Sample Consensus Initial Alignment (SAC-IA) algorithm. Lastly, an improved ICP algorithm, which has strong anti-interference capabilities for partially overlapping point clouds, is utilized to merge such point clouds. In the experimental section, we validate the feasibility and precision of the proposed algorithm by comparing its registration outcomes with those of various algorithms, using both standard point cloud dataset models and actual point clouds obtained from camera captures.

1. Introduction

With the advancement of large-scale industry, the quality detection of large planes or curved surfaces, including the corrugability of wing skins, line profiles, and surface profiles, is of paramount importance. This is because the surface quality of many pieces of equipment directly impacts their performance. The advancement in computer and image acquisition technology has improved and broadened the application of noncontact measurement through 3D laser point cloud acquisition [1,2]. Noncontact measurement using 3D imaging can collect positional, attitudinal, and other data from objects of various shapes and sizes. Subsequently, computer technology is employed for three-dimensional reconstruction, enabling the detection of the equipment’s three-dimensional characteristics. In comparison with traditional contact measurement methods [3], such as those using a three-dimensional measuring machine, noncontact measurement based on 3D imaging can acquire richer image information and rapidly and precisely capture the 3D characteristics of the object’s surface. Thus, noncontact measurement based on 3D imaging offers advantages such as increased efficiency, enhanced accuracy, and the elimination of the need to physically touch the surface of the object under inspection. This approach offers a novel and superior method and concept for detecting and constructing models of various complex and large object surfaces. However, the camera’s field of view angle is limited, meaning a single scan cannot capture all the information on the surface of the object being detected. Therefore, this paper proposes an improved ICP-based point cloud registration algorithm for partially overlapping large planes.
The traditional Iterative Closest Point (ICP) algorithm is a renowned method for point cloud registration, which computes a transformation matrix by iteratively matching point pairs and minimizing the overall distance between them. Introduced by Besl and McKay [4] in 1992, it remains the most widely used formulation. Nevertheless, it has several drawbacks, including high computational resource consumption, lengthy calculation times, stringent requirements for the accuracy of the initial point cloud position, and a tendency to converge to local optimal solutions. Besl [4] further demonstrated that, under certain assumptions, if the number of corresponding point pairs and their relationships remain constant throughout the iteration, the ICP algorithm converges monotonically to a local minimum, which can lead to significant discrepancies between the algorithm’s output and the actual values. Consequently, recognizing these limitations of the traditional ICP, numerous scholars have introduced various enhancements and proposed a range of improved ICP algorithms.
Chen et al. [5] extended the traditional point-to-point ICP registration method to a point-to-plane variant. The registration accuracy was enhanced by calculating the minimum distance between the point and the tangent plane, which was established after fitting the normal. This effectively improves the system’s adaptability and anti-interference capabilities. Yang et al. [6] proposed a Scale-ICP algorithm that employs a seven-dimensional space iteration. Although this improved the iteration speed and registration accuracy, the algorithm still suffered from slow convergence. He et al. [7] proposed a method that integrates the PointNet++ deep learning model with ICP, utilizing PointNet++ for feature extraction and concatenation, thereby enhancing calculation speed and robustness. Simon et al. [8] employed a KD-tree and the nearest point iteration method to deduce the target position, thus improving the objective function’s convergence efficiency. Pavlov et al. [9] utilized Anderson’s acceleration of fixed points to enhance the ICP algorithm’s convergence speed and robustness. Ren et al. [10] integrated features with the ICP algorithm, significantly reducing both the number of points and memory space requirements. Yang et al. [11] proposed a 3D NDT-ICP-based point cloud registration method tailored for tunnel and roadway environments. Addressing the registration challenge of partially overlapping point clouds, Zhu et al. [12] introduced a hybrid registration method combining hard and soft registration techniques to enhance accuracy and robustness. Zhao et al. [13] introduced a 3D local feature descriptor, Histograms of Point Pair Features (HoPPF), designed to boost robustness and computational efficiency. Li et al. [14] proposed a fast global registration method leveraging RGB (Red, Green, Blue) values, suitable for RGB-annotated point clouds, aiming to improve computational efficiency while maintaining a certain level of accuracy. Yang et al. [15] introduced the Go-ICP (Globally Optimal Iterative Closest Point) algorithm for the Euclidean rigid registration of two sets of three-dimensional points in L2 space. Wu et al. [16] suggested incorporating the maximum correlation coefficient criterion (MCC) as a similarity metric, establishing a novel scale registration model, and proposing a robust scale ICP algorithm. Wu et al. [17] proposed a 3D scene reconstruction technique that utilizes a Fast Point Feature Histogram (FPFH) along with the Iterative Closest Point (ICP) method, accelerating the iteration process. Marchel et al. [18] proposed enhancements to the standard ICP environmental scanning matching method by employing three novel weighting factors, thereby reducing the iteration count and improving calculation speed. Salti et al. [19] introduced the Signature of Histograms of OrienTations (SHOT) descriptor, which describes and analyzes the spatial geometric feature information of feature points and their neighbors by establishing a unique local coordinate system for each extracted feature point. This approach allows for a complete description of the geometric features of the feature points. When combined with the RANSAC algorithm, it can match feature points with identical geometric characteristics and determine the transformation matrix between them, thereby completing the point cloud registration.
However, the previous literature has primarily focused on optimizing general point cloud registration algorithms, concentrating on enhancing their accuracy and efficiency. For large planar point clouds, such as the surfaces of aircraft wings or large ship plates, the complete surface must be stitched together from multiple acquisitions, so the point clouds in each stitching step overlap only partially, and the smooth surface gives the point clouds weak three-dimensional features. Algorithms specifically designed for such stitching have not yet been fully developed. Therefore, this paper aims to enhance the algorithm’s applicability, building upon conventional point cloud registration methods and tailoring them for use under more challenging conditions. This paper not only refines the point cloud filtering process for more precise noise removal but also optimizes the normal vector calculation method and enhances the registration algorithm itself. The algorithm’s stability is enhanced through weighted distance information, and the final stitching accuracy is improved through various optimizations.
The structure of this paper is as follows: Section 2 outlines the specific procedures of the algorithm and associated point cloud processing techniques, encompassing the clustering method for noise filtering, an improved method for calculating normal vectors, and an improved ICP fine registration algorithm. Section 3 compares the experimental outcomes, focusing on the accuracy of the improved ICP algorithm versus other point cloud registration algorithms across two scenarios: the fitted point cloud model and actual point cloud data.

2. Materials and Methods

The traditional Iterative Closest Point (ICP) algorithm is one of the most widely used methods for point cloud registration. Its principle involves continuously calculating the sum of distances between each point in the source point cloud and its nearest point in the target point cloud, and minimizing this sum of distances through the adjustment of rotation and translation matrices, as shown in Equation (1),
$$f(R, T) = \frac{1}{k}\sum_{i=1}^{k}\left\| q_i - (R P_i + T) \right\|^2 \tag{1}$$
where $P_i$ and $q_i$ are points in the source and target point clouds, respectively, and k is the number of point pairs.
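The per-iteration core of Equation (1) can be sketched as follows. This is a minimal illustration, not the paper’s implementation: given known correspondences, the rigid transform minimizing the objective has a closed-form SVD (Kabsch) solution, and the objective of Equation (1) then scores the alignment. All function names are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """One ICP step: find R, T minimizing sum ||q_i - (R p_i + T)||^2
    for known correspondences, via the standard SVD (Kabsch) solution."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflection
    R = Vt.T @ D @ U.T
    T = q_bar - R @ p_bar
    return R, T

def icp_objective(P, Q, R, T):
    """Mean squared residual f(R, T) from Equation (1)."""
    return np.mean(np.linalg.norm(Q - (P @ R.T + T), axis=1) ** 2)
```

For a target cloud that is an exact rigid transform of the source, one such step recovers the transform and drives the objective to (numerically) zero; in practice ICP alternates this step with re-estimating the correspondences.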
However, the traditional ICP algorithm presents several issues, including low computational efficiency and a propensity for converging to local optima. Furthermore, when the traditional ICP algorithm is applied to point clouds with a significant initial positional discrepancy, the outcomes are frequently unsatisfactory. To address the shortcomings of the traditional ICP algorithm, this paper adopts a “Rough-to-Precise” registration strategy. This strategy initially aligns the two point clouds using a rough registration technique to refine their initial position and orientation, followed by an improved ICP algorithm for the fine registration phase. Figure 1 shows the specific flow of the algorithm.

2.1. Point Cloud Preprocessing Part

Point cloud data captured by cameras or lidar in real-world scenarios often contains noise and is subject to environmental interference. To mitigate these interferences, preprocessing of the point cloud data is essential. Common preprocessing methods for point clouds include statistical filtering [20], mean filtering [21], and radius filtering [22], among others [23], as shown in Figure 2. However, the effectiveness of these preprocessing methods can be compromised when the parameters are not appropriately set. Additionally, some algorithms may alter the integrity of the original point cloud data, resulting in a filtered point cloud that differs from the actual data collected by the camera.
The clustering algorithm classifies the point cloud based solely on density, without fitting new data, thus preserving the authenticity and integrity of the point cloud. This paper proposes employing the Euclidean clustering method to extract the necessary key points from the environmental point cloud. This approach aims to achieve noise removal from the point cloud. The specific steps of the process are as follows:
(1)
Firstly, for each point $P_i$, calculate the average distance to its n neighboring points; these average distances are modeled as a Gaussian distribution. Next, determine the mean μ and standard deviation σ of this distribution, set the point cloud’s density threshold as ρ = n/(μ + σ), and establish the noise point cloud’s density threshold, typically at 18.7% of the density threshold ρ.
(2)
From the point cloud dataset $P = \{P_i \in R^3, i = 1, 2, \ldots, N_p\}$, select any point $P_i$ and count the number k (related to the point cloud’s resolution) of points $P_i^k$ within the search radius R around $P_i$. Calculate the point cloud density ρ = k/R. If ρ exceeds the set density threshold, classify $P_i$ as a key point and establish a direct density relationship between $P_i$ and each $P_i^k$. If ρ is below the density threshold but above the noise density threshold, classify $P_i$ as an edge point. If ρ falls below the noise density threshold, classify $P_i$ as a noise point. Continue this classification process until all points in the point cloud have been categorized, as shown in Figure 3 and Figure 4.
(3)
Next, select any point $P_i^k$ from the point cloud, excluding the point $P_i$ already chosen in Step (2), and repeat the classification process of Step (2). A point $P_j$ that lies within the neighborhood of $P_i^k$ but not within the neighborhood of $P_i$ is considered to have a density-reachability relationship with $P_i$. Continue this process until the relationships for all neighborhood points of $P_i$ identified in Step (2) are established.
(4)
Traverse the point cloud dataset P = { P i R 3 , i = 1 , 2 , , N p } to cluster each point, and group all points that are density-connected into the same cluster, continuing this process until the entirety of the point cloud data have been searched.
Clustering allows for the division of the entire point cloud into distinct segments, facilitating both the extraction of key point clouds and the filtering process. This method outperforms traditional filtering techniques by effectively removing isolated noise points with minimal parameter selection requirements, thereby enhancing efficiency and offering significant benefits for processing point clouds captured in real-world settings; its effect is shown in Figure 5.
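The clustering steps above can be sketched as a DBSCAN-style grouping. This is a simplified illustration under assumed parameters (the radius, neighbor count, and the 18.7% noise fraction are taken from the text; names are not from the paper’s code), using a brute-force neighborhood search rather than a KD-tree.

```python
import numpy as np

def density_cluster(points, radius, min_neighbors, noise_frac=0.187):
    """Sketch of the density-based clustering above. Points with at least
    min_neighbors inside `radius` are key (core) points; points with fewer,
    but at least noise_frac * min_neighbors, are edge points; the rest are
    noise (label -1). Density-connected points share one cluster label."""
    n = len(points)
    # Pairwise neighborhood test (fine for a sketch; use a KD-tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = [np.flatnonzero(d[i] <= radius) for i in range(n)]
    counts = np.array([len(nb) - 1 for nb in nbrs])          # exclude self
    core = counts >= min_neighbors
    noise = counts < noise_frac * min_neighbors
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        stack, labels[i] = [i], cid                          # grow one cluster
        while stack:
            j = stack.pop()
            if not core[j]:
                continue                                     # edge points do not expand
            for k in nbrs[j]:
                if labels[k] == -1 and not noise[k]:
                    labels[k] = cid
                    stack.append(k)
        cid += 1
    return labels
```

On a cloud containing two dense patches and an isolated outlier, this returns two cluster labels and marks the outlier −1, matching the intended noise-removal behavior.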

2.2. Rough Point Cloud Registration

Due to the ICP algorithm’s stringent initial pose requirements for point clouds, it is essential to perform rough registration following the completion of point cloud preprocessing. This process involves using a rotation and translation matrix to position the point cloud to be registered in an approximate alignment, thereby enhancing the accuracy for subsequent steps. The typical approach to point cloud rough registration involves extracting features from the point cloud, identifying corresponding feature points, and then applying a rigid transformation matrix. This matrix aligns or closely positions these feature points relative to the source point cloud, achieving the initial registration of the point cloud.

2.2.1. Normal Vector Estimation

Hoppe et al. [24] introduced a method for normal vector estimation that utilizes PCA (principal component analysis) for dimensionality reduction. The principle involves calculating a direction n such that the projection of all neighboring points onto n is most concentrated. However, because the feature description of FPFH necessitates the calculation of normal vectors for each point, the traditional PCA method requires enhancement. In this section, we employ a method that fuses least squares fitting with principal component analysis for normal vector estimation, incorporating new constraints into the planar point cloud to bolster the algorithm’s robustness and refine parameter selection. In the point cloud data P = { P i R 3 , i = 1 , 2 , , N p } , a central point P i is identified, and a local plane H is fitted using P i and all points P i k within a neighborhood radius R, as described by Equation (2). Additionally, the centroid P ¯ of all points within the neighborhood is determined, as shown in Equation (3). The normal vector for point P i is then estimated using the normal vector of the fitted plane, which involves minimizing Equation (2).
$$H = \underset{n}{\operatorname{argmin}} \sum_{i=1}^{k}\left( (x_i - \bar{P})^{T} n \right)^{2} \tag{2}$$
$$\bar{P} = (\bar{x}, \bar{y}, \bar{z})^{T} = \left( \frac{1}{k}\sum_{i=1}^{k} x_i,\; \frac{1}{k}\sum_{i=1}^{k} y_i,\; \frac{1}{k}\sum_{i=1}^{k} z_i \right) \tag{3}$$
$$a x + b y + c z = d \tag{4}$$
Let n represent the principal component vector, which is the normal vector we aim to determine. We calculate the angle between the vector extending from the centroid of the neighborhood to each point within that neighborhood and the principal component vector. If this angle exceeds a specified threshold, the estimation of the normal vector is deemed to be skewed. Consequently, the point is classified as an outlier within the neighborhood and is excluded from the plane estimation process. Instead, a new plane is iteratively fitted. The specific steps are as follows:
(1)
Calculate the mean value $\bar{P}$ of the point cloud dataset P, where $\bar{P} = (\bar{x}, \bar{y}, \bar{z})$. Given that PCA is highly sensitive to the variance in the initial data variables, de-meaning is essential to prevent distortion of the principal components. Subsequently, project each point onto the plane fitted by least squares, calculate the distance $d_i$ from each point to this plane, and transform the normal vector estimation into the extremum problem formulated in Equation (6), as shown in Figure 6.
$$d_i = a x_i + b y_i + c z_i - d \tag{5}$$
$$F = \sum_{i=1}^{k} d_i^{2} - \lambda\left(a^{2} + b^{2} + c^{2} - 1\right) \tag{6}$$
where d represents the distance from the centroid point to the best-fit plane.
(2)
The covariance matrix M is calculated to represent the correlation between points in a given direction.
$$M = \frac{1}{k}\sum_{i=1}^{k}\left(p_i - \bar{p}\right)\left(p_i - \bar{p}\right)^{T} \tag{7}$$
(3)
The covariance matrix is decomposed into its eigenvalues, which are then ordered from largest to smallest, yielding three distinct eigenvalues λ 1 , λ 2 , and λ 3 . The eigenvector associated with the smallest eigenvalue represents the normal vector to the fitting plane, which is also the normal vector for the point P i .
(4)
After the normal vector calculation is completed, its accuracy should be verified by measuring the angle between the estimated normal vector and a reference direction at each point. If this angle significantly deviates from the expected value (ideally around 90°), then, as depicted in Figure 7, the estimation for that point is deemed inaccurate. In such cases, the outlier point is excluded, and the normal vector is recalculated without its influence.
(5)
Given that the normal vector can point in two opposite directions, its orientation must be determined based on the angle it forms with the vector from the centroid to point P i .
$$n = \begin{cases} n, & n \cdot \overrightarrow{\bar{P} P_i} > 0 \\ -n, & n \cdot \overrightarrow{\bar{P} P_i} < 0 \end{cases} \tag{8}$$
The direction of the normal vector is ascertained using this method.
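Steps (1)–(5) can be condensed into a short sketch. This is an illustration of the PCA core only (the least-squares refinement and the angle-based outlier rejection of step (4) are omitted for brevity), and the orientation check is done here against an assumed viewpoint direction rather than the centroid vector of Equation (8).

```python
import numpy as np

def estimate_normal(neighborhood, viewpoint_dir=None):
    """Estimate a point's normal from its neighborhood via the covariance
    matrix of Equation (7): the eigenvector of the smallest eigenvalue is
    the normal of the fitted plane."""
    centroid = neighborhood.mean(axis=0)
    X = neighborhood - centroid                  # de-mean (step 1)
    M = X.T @ X / len(neighborhood)              # covariance matrix (step 2)
    eigvals, eigvecs = np.linalg.eigh(M)         # eigenvalues in ascending order
    n = eigvecs[:, 0]                            # smallest-eigenvalue eigenvector (step 3)
    if viewpoint_dir is not None and n @ viewpoint_dir < 0:
        n = -n                                   # resolve the sign ambiguity (step 5)
    return n, eigvals[::-1]                      # eigenvalues, largest first
```

For points sampled on the plane z = 0, this returns a normal of ±(0, 0, 1) with a near-zero smallest eigenvalue, as expected for a perfect plane.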

2.2.2. Normal Vector Parameter Selection and Feature-Based Rough Registration

When determining the parameters for the normal vector, we can rely not only on empirical selection but also on the magnitude of information entropy. Information entropy, a concept in information theory, quantifies the uncertainty or the amount of information inherent in a system or information source. As the search radius parameter must be defined in the calculation of the normal vector, the eigenvalues λ 1 , λ 2 , and λ 3 derived from the covariance matrix can characterize the normal vector’s attributes for the point and its surrounding area. The proportion of item j within this index for scheme i can be expressed as
$$P_{ij} = \frac{Y_{ij}}{\sum_{i=1}^{n} Y_{ij}}, \quad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, m \tag{9}$$
Therefore, the weight of each of the three eigenvalues in the normal vector estimate is
$$P_i = \frac{\lambda_i}{\sum_{i=1}^{3}\lambda_i} \tag{10}$$
The equation for calculating information entropy is
$$H(X) = -\sum_{i=1}^{n} p(x_i)\log_2 p(x_i) \tag{11}$$
Therefore, the information entropy equation described by the measured normal vector is
$$H(X) = -\frac{\lambda_1}{\sum_{i=1}^{3}\lambda_i}\log_2\!\left(\frac{\lambda_1}{\sum_{i=1}^{3}\lambda_i}\right) - \frac{\lambda_2}{\sum_{i=1}^{3}\lambda_i}\log_2\!\left(\frac{\lambda_2}{\sum_{i=1}^{3}\lambda_i}\right) - \frac{\lambda_3}{\sum_{i=1}^{3}\lambda_i}\log_2\!\left(\frac{\lambda_3}{\sum_{i=1}^{3}\lambda_i}\right) \tag{12}$$
A lower information entropy value indicates less uncertainty and a clearer feature in this dimension, suggesting that the estimation of the normal vector is more reasonable. Consequently, one can substitute various radii R based on experience to calculate the normal vector. Then, the eigenvalues from these calculations can be used in Equation (12) to determine the information entropy, thereby optimizing the selection of normal vector parameters.
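The entropy criterion of Equation (12) is easy to evaluate directly. The following sketch (names are illustrative; `eigvals_for_radius` stands in for whatever routine recomputes the covariance eigenvalues at a candidate radius) picks the radius with the lowest entropy, i.e. the sharpest local structure:

```python
import numpy as np

def normal_entropy(eigvals):
    """Information entropy of Equation (12), computed from the three
    covariance eigenvalues; lower entropy means less uncertainty and a
    clearer dominant direction in the neighborhood."""
    p = np.asarray(eigvals, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                       # treat 0 * log2(0) as 0
    return float(-(p * np.log2(p)).sum())

def pick_radius(candidates, eigvals_for_radius):
    """Evaluate several candidate search radii and keep the one whose
    neighborhood eigenvalues give the lowest entropy."""
    return min(candidates, key=lambda r: normal_entropy(eigvals_for_radius(r)))
```

Equal eigenvalues (an isotropic neighborhood) give the maximum entropy log2(3) ≈ 1.585, while one dominant eigenvalue drives the entropy toward 0, matching the interpretation in the text.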
Relying solely on normal vectors as features is insufficient to ensure the accuracy of registration. Therefore, this paper enhances the accuracy of rough registration by additionally extracting key points and utilizing the Fast Point Feature Histogram (FPFH) [25,26].
Once the normal vectors are calculated, feature points can be identified and described. With the feature points described, we can then match feature points with similar feature descriptions to perform a rough alignment, thereby completing the preliminary stitching. The Sample Consensus Initial Alignment (SAC-IA) algorithm is a rough registration method that builds upon the principles of the Random Sample Consensus (RANSAC) algorithm. In contrast to RANSAC, which randomly selects inliers, fits a model, and iteratively refines the model by testing against outliers, SAC-IA employs a more directed approach. It begins by selecting distinctive feature points in the source point cloud’s FPFH and corresponding points with similar FPFH values in the target cloud. These points form potential matching pairs. SAC-IA then calculates the rigid transformation matrix for these pairs to achieve initial registration. This is followed by an iterative process that minimizes the total distance to refine the registration, resulting in a more accurate preliminary alignment. The specific process is as follows:
(1)
Step 1, select n points P i from the source point cloud dataset P = { P i R 3 , i = 1 , 2 , , N p } . Ensure the distance between each pair of points exceeds a specified threshold to ensure a more uniform sampling that captures a diverse set of features.
(2)
Step 2, for each point P i selected in Step 1, identify a corresponding point in the target point cloud with a similar feature descriptor value, such as a feature histogram eigenvalue.
(3)
Step 3, calculate the rigid transformation matrix for the corresponding points identified in Step 2. Use the change in the sum of distances between these point pairs after the transformation as a measure of the current transformation’s quality. Continuously refine the matrix, iterating until either the maximum number of iterations is reached or the convergence criteria are satisfied.
To enhance efficiency, the Sample Consensus Initial Alignment (SAC-IA) algorithm refrains from utilizing all possible point pairs in each iteration; instead, it randomly selects a subset of corresponding point pairs to estimate the rigid transformation matrix. Consequently, while the accuracy of SAC-IA may not suffice for certain applications, it is well-suited for initial rough registration, providing a starting point for subsequent fine registration processes.
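One SAC-IA iteration can be sketched as below. This is a simplified illustration, not the paper’s implementation: generic feature vectors stand in for FPFH descriptors, matching is a nearest-descriptor lookup, and the transform is estimated from the sampled pairs via SVD, then scored by the summed nearest-neighbor distance.

```python
import numpy as np

def sac_ia_iteration(src, dst, src_feat, dst_feat, n_samples, rng):
    """One SAC-IA-style iteration: sample n_samples source points, pair each
    with the target point whose feature descriptor is most similar, estimate
    a rigid transform from those pairs, and score the transform by the summed
    nearest-neighbor distance of the whole transformed source cloud."""
    idx = rng.choice(len(src), size=n_samples, replace=False)
    # Feature matching: nearest descriptor in the target cloud.
    pairs = [int(np.argmin(np.linalg.norm(dst_feat - src_feat[i], axis=1)))
             for i in idx]
    P, Q = src[idx], dst[pairs]
    # Rigid transform from the sampled correspondences (Kabsch / SVD).
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - p_bar).T @ (Q - q_bar))
    R = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    t = q_bar - R @ p_bar
    moved = src @ R.T + t
    score = float(sum(np.linalg.norm(dst - m, axis=1).min() for m in moved))
    return R, t, score
```

The full algorithm repeats this with fresh samples and keeps the transform with the lowest score, which is exactly why its accuracy is sufficient only for a rough initial alignment.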

2.3. Precise Point Cloud Registration

Upon completing the rough registration, the source point cloud should ideally be roughly aligned with the target point cloud. For large point clouds, capturing the entire dataset in a single attempt while ensuring the accuracy of the camera-captured point cloud is challenging, necessitating the acquisition of point cloud data through multiple shots. This approach can lead to inconsistencies between the two sets of point cloud data. Consequently, if the traditional ICP algorithm continues to calculate the distances for all point pairs for matching, it may introduce significant errors and potentially result in incorrect registration.
When registering point clouds with incomplete overlap, considering the total distance between all point pairs can introduce errors. Moreover, treating two partially overlapping clouds as fully overlapping can result in incorrect registration, as illustrated in Figure 8. Thus, accurately extracting the overlapping region between the two point cloud segments is essential. Incorrectly extracting this overlapping region can severely impact registration outcomes: if the selected region is too extensive, some point pairs may fail to find accurate matches, adversely affecting the distance calculation and potentially causing registration errors; conversely, if the region is too limited, it may provide insufficient point pair data, decreasing the accuracy of the final registration.
Accordingly, this section employs a method inspired by the Least Trimmed Squares (LTS) fitting approach to identify the overlapping region between point clouds, as shown in Figure 9. The concept involves calculating the squared distance between corresponding points in the two point cloud segments as residuals, sorting these in ascending order, and selecting the points that correspond to the smallest percentage of distances to define the overlapping area. A rotation and translation transformation is applied to the point cloud data within the identified overlap region, aiming to minimize the objective function representing the sum of distances, thereby determining the rotation and translation matrix. Post transformation, the distances between corresponding points in the two point cloud segments are recalculated and sorted, and the process is iterated until the specified criteria are satisfied. The specific process is as follows:
(1)
Firstly, select an arbitrary point P i from the source point cloud dataset P = { P i R 3 , i = 1 , 2 , , N p } , and calculate the square of the distance to its nearest corresponding point q i in the target point clouds.
(2)
Calculate the square of the distance between each point $P_i$ and its nearest corresponding point $q_i$, sort these distances in ascending order, and perform a statistical analysis (Figure 10). Establish a threshold at α%, identified from the inflection point relating the size of the overlap region to the distribution of distances. Select the point pairs corresponding to the smallest α% of the distances, and designate this subset as representing the overlap between the source and target point clouds.
(3)
By minimizing the objective function, determine the rotation matrix R and the translation vector t that best align the point clouds.
$$f(R, t) = \underset{R,\, t}{\operatorname{argmin}} \sum_{i} \left\| R p_i + t - q_i \right\|^{2} \tag{13}$$
(4)
Apply the 3D transformation matrix to the point P i . Recalculate the square of the distances between all points P i in the source point cloud and their nearest points q i in the target point cloud, sort these distances, and iteratively repeat steps (2), (3), and (4) until the desired accuracy is achieved.
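One iteration of steps (1)–(4) can be sketched as follows. This is a minimal illustration under assumptions: `overlap` plays the role of α%, the nearest-neighbor search is brute force, and the transform on the trimmed pairs is solved by SVD.

```python
import numpy as np

def trimmed_icp_step(src, dst, overlap=0.7):
    """One trimmed-ICP iteration: keep only the smallest `overlap` fraction
    of squared nearest-neighbor residuals (the assumed overlap region) and
    fit R, t on those pairs, then return the transformed source cloud."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    nn = d.argmin(axis=1)                          # nearest target point per source point
    res = d[np.arange(len(src)), nn] ** 2          # squared residuals, step (1)
    keep = np.argsort(res)[: int(overlap * len(src))]   # trim, step (2)
    P, Q = src[keep], dst[nn[keep]]
    # Minimize Equation (13) on the trimmed pairs (Kabsch / SVD), step (3).
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - p_bar).T @ (Q - q_bar))
    R = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    t = q_bar - R @ p_bar
    return src @ R.T + t, R, t                     # step (4): transform and repeat
```

In practice this step is repeated, re-sorting the residuals each time, until the desired accuracy is reached.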
Calculating all point cloud data using the ICP algorithm is not only highly time-consuming but also susceptible to errors due to local optima in the registration process. Upon analyzing the ICP algorithm, we understand that its principle involves initially identifying the nearest corresponding point q i   for each p i , as expressed in Equation (14), followed by minimizing the objective function presented in Equation (15).
$$q_i = \underset{q}{\operatorname{argmin}} \left\| R^{(k)} p_i + t^{(k)} - q \right\| \tag{14}$$
$$\min_{R,\, t} \sum_{i=0}^{M} \left( D_i(R, t) \right)^{2} + I_{SO(d)}(R) \tag{15}$$
where k is the number of iterations.
If incorrect selections occur during the search for the nearest corresponding points, it can lead to significant errors in calculating the minimum sum of distances. However, achieving perfect selection of the nearest corresponding points is not always possible, hence two optimization approaches are proposed. The first approach involves setting a threshold to assess the correctness of the nearest point pair, considering those beyond this threshold to be erroneous correspondences. The second approach weights the distance contributions based on the proximity of the corresponding points, assigning lower weights to those identified incorrectly or with greater distances, thereby diminishing their impact on the total distance sum. Considering the application scope, the threshold for method 1 should be adjusted according to the specific context, size, and density of the point clouds. An excessively high threshold may fail to eliminate incorrect point pairs, while an overly low threshold might exclude some correct ones, affecting the outcome. The second method offers broader applicability. This method retains all point cloud data by applying weights in line with a Gaussian distribution, assigning varying weights to point pairs based on their distances, which helps to mitigate the influence of incorrect correspondences and enhances algorithm stability. Accordingly, the second method requires an optimization of the traditional ICP objective function by adjusting the weights assigned to the distances between point pairs.
The generalized loss function, presented in Equation (16), displays varying characteristics depending on the value of α. As α approaches 0, the function $f(x, \alpha, c)$ converges to the Cauchy loss. As α approaches 2, $f(x, \alpha, c)$ converges to the L2 loss function. At α = −2, the function becomes the Geman–McClure loss, $f(x, \alpha, c) = \frac{2(x/c)^2}{(x/c)^2 + 4}$. As α approaches negative infinity, we obtain the loss function utilized in this paper. The specific curve of the loss function is depicted in Figure 11.
Based on the graphical representation and the partial derivative of $f(x, \alpha, c)$ with respect to α, a smaller α value implies reduced sensitivity to the influence of outliers, especially when errors are substantial. Consequently, the Welsch loss function, obtained as α approaches negative infinity, is chosen, resulting in the formulation of Equation (18) after modification.
$$f(x, \alpha, c) = \frac{|\alpha - 2|}{\alpha}\left( \left( \frac{(x/c)^{2}}{|\alpha - 2|} + 1 \right)^{\alpha/2} - 1 \right) \tag{16}$$
$$\min_{R,\, t} \sum_{i=0}^{M} \varphi_c\left( D_i(R, t) \right) + I_{SO(d)}(R) \tag{17}$$
$$\varphi_c(x) = \left( 1 - e^{-x^{2}/c} \right)/c \tag{18}$$
The function φ c ( x ) is bounded above and monotonically increases on the interval [0, +∞). Consequently, beyond a certain error threshold, it becomes less susceptible to the influence of erroneous outliers. Additionally, an increased value of the constant c results in a greater opening degree of the function, requiring a larger value of x to reach its upper limit, thereby enhancing its resistance to interference.
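The boundedness and saturation behavior described above are easy to verify numerically. A minimal sketch of Equation (18):

```python
import numpy as np

def phi_c(x, c):
    """Bounded weight function of Equation (18): phi_c(x) = (1 - e^{-x^2/c}) / c.
    It increases monotonically on [0, inf) and saturates at 1/c, so a residual
    from a wrong correspondence contributes at most 1/c to the objective."""
    return (1.0 - np.exp(-np.square(x) / c)) / c
```

A larger c makes the function approach its upper bound 1/c more slowly (tolerating more point pairs), while a smaller c saturates quickly and suppresses large residuals harder, which is exactly the trade-off discussed for Figure 13.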
After adjusting the weights in the traditional ICP’s objective function, the new objective function is defined by Equation (17). However, because Equation (17) is a nonlinear function that is neither convex nor concave, we employ the Majorization–Minimization (MM) algorithm. This algorithm iteratively approximates the minimum, aiming to achieve the lowest possible objective function value, as illustrated in Figure 12. The procedure is as follows:
(1)
Identify a surrogate function u ( x , x k ) at the current iteration point x k that majorizes the original function f ( x ) , ensuring that u ( x , x k ) provides an upper bound for f ( x ) at each point x.
(2)
Compute the minimum of the surrogate function u ( x , x k ) to determine the next iterate x k + 1 = arg min u ( x , x k ) .
(3)
Evaluate f ( x k + 1 ) by substituting x k + 1 into the original function f ( x ) , then return to step (1) and construct a new surrogate u ( x , x k + 1 ) at the updated point.
(4)
Repeat steps (1)–(3); the iterates gradually converge to the minimum of the original function.
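For a sum of concave losses, the four steps above reduce to iteratively reweighted least squares. A minimal one-dimensional sketch (the toy data and function names are my own, not the paper’s solver): minimizing F(x) = Σ φ((x − a_i)²) by repeatedly minimizing the first-order Taylor surrogate, so that each MM step is just a weighted mean:

```python
import numpy as np

def phi(s, c):        # phi(s) = (1 - exp(-s/c)) / c, concave for s >= 0
    return (1.0 - np.exp(-s / c)) / c

def phi_prime(s, c):  # phi'(s) = exp(-s/c) / c^2, slope of the Taylor surrogate
    return np.exp(-s / c) / c ** 2

def mm_robust_mean(a, c=0.5, iters=20):
    """MM minimization of F(x) = sum_i phi((x - a_i)^2): each iteration
    minimizes the surrogate, a weighted least-squares problem with weights
    phi'(s_i) evaluated at the current squared residuals s_i."""
    x = np.mean(a)                      # x_0: the (outlier-sensitive) plain mean
    for _ in range(iters):
        w = phi_prime((x - a) ** 2, c)  # step (1): build the surrogate at x_k
        x = np.sum(w * a) / np.sum(w)   # step (2): its closed-form minimizer
    return x

a = np.array([0.0, 0.1, -0.1, 0.05, 8.0])  # one gross outlier at 8.0
print(np.mean(a))           # ~1.61, dragged toward the outlier
print(mm_robust_mean(a))    # stays near the inlier cluster around 0
```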
To ensure that the chosen surrogate function closely approximates the original function, matches its curvature, and is tangent to it without falling below it, this paper employs the first-order Taylor expansion ε c ( x | x 0 ) at the current point x 0 to approximate the original function. Since φ(x) is strictly concave on [0, +∞), its first-order Taylor approximation is a global upper bound.
\varphi(x) = \left(1 - e^{-x/c}\right)/c,
\varphi(x) \le \varphi(x_0) + (x - x_0)\,\varphi'(x_0),
The equality holds when x = x 0 . This gives the surrogate function ε c ( x | x 0 ) :
\varepsilon_c(x \mid x_0) = \varphi(x_0) + (x - x_0)\,\varphi'(x_0),
and hence
\varphi_c(x) = \varphi(x^2) \le \varepsilon_c(x^2 \mid x_0^2) = \varphi(x_0^2) + (x^2 - x_0^2)\left(1 - c\,\varphi(x_0^2)\right)/c^2,
Consequently, the minimum of Equation (17) can be approached iteratively; at each iteration, the rotation matrix R and the translation vector t are obtained through Singular Value Decomposition (SVD) [27].
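A sketch of one such iteration, assuming the point correspondences are already established (NumPy only; the function names are my own): weights derived from the Welsch function shrink the influence of distant pairs, and the resulting weighted least-squares alignment is solved in closed form via SVD in the style of [27]:

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """R, t minimizing sum_i w_i ||R p_i + t - q_i||^2 in closed form via SVD."""
    w = w / np.sum(w)
    p_bar, q_bar = w @ P, w @ Q                         # weighted centroids
    H = (P - p_bar).T @ ((Q - q_bar) * w[:, None])      # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, q_bar - R @ p_bar

def wicp_step(P, Q, c):
    """One surrogate-minimization update: weights proportional to phi'(d_i^2)
    suppress distant (likely erroneous) pairs before the weighted SVD solve."""
    d2 = np.sum((P - Q) ** 2, axis=1)   # squared residuals of current pairs
    w = np.exp(-d2 / c)                 # e^{-d^2/c}, phi'(d^2) up to the 1/c^2 factor
    return weighted_rigid_transform(P, Q, w)
```

In a full registration loop, the correspondences would be re-estimated (e.g., by nearest-neighbor search) before every such update.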
To mitigate the influence of mismatched point pairs, especially those with large distances, the loss function corresponding to α approaching negative infinity is chosen. The choice of c in Equation (18) governs the behavior of the function φ_c(x), as shown in Figure 13. A large c makes φ_c(x) approach its upper bound slowly, allowing it to accommodate more point pairs but rendering it more susceptible to erroneous ones; conversely, a small c makes φ_c(x) saturate rapidly, reducing the impact of errors but limiting the amount of point pair information included.
Leveraging this characteristic of the function, we apply different magnitudes of c in phases, weighting the calculation by distance so as to minimize the influence of erroneous points on the outcome. Mirroring our overall “First Rough then Precise” strategy, the iterative refinement is divided into two phases: the initial iterations employ a larger c value to capture a broader set of point pair information, and the subsequent iterations use a smaller c value to ensure the precision of the final registration.
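The two-phase schedule can be sketched on a simplified translation-only problem (the c values, iteration counts, and data below are illustrative assumptions, not the paper’s settings): a large c first admits most pairs for a coarse estimate, then a small c suppresses the erroneous pairs for the fine estimate:

```python
import numpy as np

def estimate_shift(P, Q, schedule=((5.0, 10), (0.1, 10))):
    """Coarse-to-fine robust translation between paired clouds: each phase runs
    a few reweighted updates with its own Welsch scale c."""
    t = np.zeros(P.shape[1])
    for c, iters in schedule:                    # phase 1: large c, phase 2: small c
        for _ in range(iters):
            r = Q - (P + t)                      # residuals under current estimate
            w = np.exp(-np.sum(r ** 2, axis=1) / c)         # bounded-influence weights
            t = t + (w[:, None] * r).sum(axis=0) / w.sum()  # weighted mean residual
    return t

rng = np.random.default_rng(3)
P = rng.normal(size=(200, 2))
Q = P + np.array([1.0, 2.0])                    # true shift [1, 2]
Q[:40] += rng.uniform(3.0, 6.0, size=(40, 2))   # corrupt 20% of the pairs
print(estimate_shift(P, Q))                     # close to [1, 2] despite the outliers
```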

3. Results

3.1. Experimental Process

The point cloud registration optimization algorithm proposed in this study is versatile, effective both for large planar point clouds with weak three-dimensional features and for conventional point clouds. In this experiment, the performance of the W-ICP registration algorithm introduced in this paper is evaluated against other algorithms, including the traditional ICP, SICP, 4PCS, and NDT algorithms, under two scenarios: standard point cloud dataset models and actual point cloud images captured by cameras.

3.2. Dataset Verification

Firstly, we utilize the classic Stanford Bunny point cloud model for precision comparison. We select point cloud models with 0° and 45° relative rotations and present the distribution of point-to-point distances and the RMSE values for each algorithm, as illustrated in Figure 14 and Figure 15.
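For reference, the two accuracy measures used throughout this section can be computed as in the following sketch (brute-force nearest neighbor, NumPy only; the overlap_trim parameter is my own device for excluding non-overlapping regions, such as the red areas of Figure 15, from the statistics):

```python
import numpy as np

def registration_metrics(source, target, overlap_trim=None):
    """Nearest-neighbor point-to-point distances and their RMSE between a
    registered source cloud and the target cloud (O(N*M); demo-sized only)."""
    diff = source[:, None, :] - target[None, :, :]      # all pairwise differences
    d = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)   # distance to nearest target point
    if overlap_trim is not None:                        # drop the largest distances,
        d = np.sort(d)[: int(len(d) * (1.0 - overlap_trim))]  # i.e. non-overlapping parts
    return d, np.sqrt(np.mean(d ** 2))
```

Here d holds the per-point distances plotted in the histograms, and the second return value is the RMSE reported in the tables.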
Figure 14 intuitively demonstrates that the bunny model achieves excellent registration results. Figure 15, which presents the statistics of point-to-point distances, shows that the improved W-ICP algorithm of this study and the point-to-plane SICP algorithm are more concentrated, with the majority of point pair distances clustered within a small range, indicating superior accuracy compared with the other algorithms. Comparing the registration results on the bunny model reveals that, for standard point cloud models with distinctive three-dimensional features, the improved ICP algorithm offers only a modest accuracy advantage over the other algorithms, since such models place minimal demands on the registration algorithm. Regarding time expenditure, the 4PCS algorithm’s cost is significantly influenced by the number of points randomly selected during each iteration, leading to considerable time costs; the improved ICP algorithm of this study also incurs extended calculation times, owing to the multiple enhancements made to the traditional ICP algorithm, as detailed in Table 1.
In this experimental section, we select a table model that resembles the planar point cloud with weak three-dimensional features previously mentioned, subject it to a random 3D positional transformation, and then perform registration. Given that the majority of this model’s point cloud data represent a table with uniform features, and the geometric features of the objects on the table are similarly shaped cylinders, this model presents a challenge for successful point cloud splicing. The following section compares the accuracy differences among various algorithms, with the detailed results depicted in Figure 16 and Figure 17.
After numerous rounds of parameter tuning and debugging for all algorithms, the best outcome is chosen as the final registration result, as depicted in Figure 16. As illustrated in Figure 16 and Figure 17, when there is a significant initial pose discrepancy, both the point-to-plane ICP and SICP algorithms exhibit evident registration errors. Because the scene consists mainly of point clouds from similar desktops and cylindrical objects with weak three-dimensional features, the other compared algorithms likewise achieve low final registration accuracy. Notably, only the 4PCS algorithm approaches the accuracy of the improved ICP algorithm presented in this paper; however, it sometimes fails to correctly identify feature points, leading to incorrect registration, high computational cost, and unstable accuracy. Thus, the improved ICP algorithm in this study demonstrates strong adaptability to objects with large initial pose differences and weak three-dimensional features, as detailed in Table 2 and Figure 18.
Commonly utilized point cloud datasets typically consist of high-quality, noise-free point clouds. In the experiments on the bunny model with a minor initial pose discrepancy, the registration accuracies of several standard algorithms are largely comparable, and registration errors are negligible. However, for the table model, which involves a significant initial pose difference, sparse three-dimensional features, and only partial overlap between the target and source point clouds, only the improved ICP algorithm presented in this paper achieves accurate registration.

3.3. Verification of Real Shooting Point Cloud Data

Following the comparison experiments on high-quality point cloud dataset models, the performance of the various registration algorithms on real-world point cloud data is assessed. Because the collected point cloud data contain significant noise, the source point cloud must be filtered prior to registration. Initially, a camera is used to photograph a standard step plate, as depicted in Figure 19. Subsequently, portions of the point clouds on either side of the standard step plate are selected as the source and target point clouds, simulating the data collection process on large object surfaces and ensuring a defined overlap between each pair of point clouds, as shown in Figure 20 and Figure 21.
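One common pre-registration filter, consistent with the filters compared in Figure 2, is statistical outlier removal; the brute-force sketch below (parameter names and defaults are my assumptions) drops points whose mean distance to their k nearest neighbors is anomalously large:

```python
import numpy as np

def statistical_filter(points, k=8, std_ratio=1.0):
    """Remove points whose mean k-nearest-neighbor distance exceeds the global
    mean by more than std_ratio standard deviations (O(N^2); demo-sized only)."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)                        # ignore self-distances
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)  # mean distance to k neighbors
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```

A spatial index (k-d tree) would replace the full distance matrix for clouds of realistic size.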
In the standard step plate experiments characterized by similar three-dimensional features, the W-ICP algorithm presented in this paper demonstrates superior registration accuracy compared with the NDT algorithm. This advantage is attributed to the similar three-dimensional features and the partial overlap between the target and source point clouds, which most other algorithms fail to accurately assess, leading to incorrect registration outcomes. The detailed results of these experiments are presented in Table 3.
Subsequently, the LiDAR scan of the actual point cloud on the surface of the building also exhibits characteristics of partial overlap and limited three-dimensional features. The building is divided into left and right sections, with each section scanned separately to serve as the source and target point clouds, respectively. The registration accuracy of the various algorithms is then calculated and compared, as depicted in Figure 22 and Figure 23.
In the registration of large planar point clouds, such as building surfaces, the improved W-ICP, NDT, and 4PCS algorithms achieve effective registration. The traditional ICP, point-to-plane ICP, and sparse ICP algorithms all exhibit misregistration akin to that observed in the step plate experiment: because they minimize the objective function over all point cloud data, including the non-overlapping regions, the resulting registrations are inaccurate. In terms of processing time, the improved ICP algorithm is faster than both the NDT and 4PCS algorithms, making it more suitable for practical engineering applications. The detailed results are presented in Table 4.

3.4. Comparison of Iterative Convergence Rates

Given that the point cloud registration algorithms evaluated in the experiment are fundamentally iterative, comparing their convergence rates is essential for assessing operational efficiency and time expenditure. To ensure accurate registration across all algorithms, the bunny model, with its distinctive features, is selected for the experiment. The registration accuracy is recorded after 10, 20, 30, and 50 iterations of each algorithm. The detailed results are presented in Table 5 and Figure 24.
The curves illustrate that, on the standard bunny model, most algorithms, including the W-ICP algorithm presented in this paper, converge rapidly, with the primary distinction lying in their final accuracies. Notably, the NDT algorithm exhibits a relatively slow convergence rate, while the point-to-plane ICP algorithm is significantly affected by the normal calculation, leading to fluctuating outcomes. The traditional ICP algorithm converges in approximately 20 to 30 iterations, whereas W-ICP typically converges in about 10, demonstrating a superior convergence rate and higher final accuracy than the other standard algorithms. Considering both accuracy and time efficiency, the proposed method is more suitable than conventional ones for practical engineering applications, including the inspection of wing surfaces.

4. Conclusions and Summary

In this study, we verify the accuracy and stability of the improved algorithm using both common point cloud dataset models and actual point clouds collected by cameras, progressing from simple to complex scenarios. Firstly, we establish the accuracy of common point cloud registration algorithms through a relatively straightforward experiment on the bunny model. Next, we conduct a table model experiment to simulate a smooth-surface scenario. Given that the model’s surface lacks distinct three-dimensional features and there is a significant initial pose difference, only the W-ICP algorithm introduced in this paper successfully completes the registration. Subsequently, we utilize two types of camera-collected point cloud data for further verification. In the standard stepboard experiment, both the algorithm proposed in this paper and the NDT algorithm achieve correct registration; however, the proposed algorithm incurs a significantly lower time cost than the NDT algorithm. In the building surface experiment, the W-ICP, NDT, and 4PCS algorithms all achieve correct registration, but considering both accuracy and time cost, the W-ICP algorithm proposed in this paper offers superior performance. Lastly, we confirm the algorithm’s convergence speed: in the bunny model experiment, the W-ICP algorithm converges in approximately 10 iterations, whereas the other algorithms typically require 20 to 30, demonstrating the faster operational speed of the proposed algorithm.
The point cloud registration optimization algorithm presented in this paper is versatile and applicable to both typical general point cloud registration tasks and planar point cloud detection on large objects with weak three-dimensional features. Utilizing an improved clustering algorithm, key regions are extracted for point cloud filtering, followed by an enhanced normal vector estimation method to more precisely identify feature points. Ultimately, the refined ICP algorithm calculates the overlapping area based on weighted distances, achieving accurate point cloud registration. Comparisons with other prevalent point cloud registration algorithms across various scenarios involve analyzing the nearest point pair distances and RMSE values, thereby verifying the feasibility of our optimization algorithm. In conclusion, the algorithm demonstrates not only robust registration accuracy for standard point cloud models but also excels in scenarios with partial overlaps, offering high registration precision, rapid convergence, and commendable performance. However, this optimization strategy still encounters challenges, including a complex multistep process and the requirement to adjust parameters for different input point cloud conditions, indicating areas that require further refinement.

Author Contributions

H.G.: Writing—original draft, Validation, Methodology, Investigation, Conceptualization. P.S.: Writing—review & editing. W.Z.: Methodology, Investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (Grant No. 2021YFD1300502).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Nurunnabi, A.; Sadahiro, Y.; Laefer, D.F. Robust statistical approaches for circle fitting in laser scanning three-dimensional point cloud data. Pattern Recognit. 2018, 81, 417–431. [Google Scholar] [CrossRef]
  2. Su, Y.; Hou, M.; Li, S. Three-dimensional point cloud semantic segmentation for cultural heritage: A comprehensive review. Remote Sens. 2023, 15, 548. [Google Scholar] [CrossRef]
  3. Shen, Y.; Ren, J.; Huang, N.; Zhang, Y.; Zhang, X.; Zhu, L. Surface form inspection with contact coordinate measurement: A review. Int. J. Extrem. Manuf. 2023, 5, 022006. [Google Scholar] [CrossRef]
  4. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  5. Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. In Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9–11 April 1991; pp. 145–155. [Google Scholar]
  6. Yang, J.; Li, H. Go-ICP: Solving 3D Registration Efficiently and Globally Optimally. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  7. He, Y.; Lee, C.H. An improved ICP registration algorithm by combining PointNet++ and ICP algorithm. In Proceedings of the 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 20–23 April 2020; pp. 741–745. [Google Scholar]
  8. Simon, D.A.; Hebert, M.; Kanade, T. Techniques for Fast and Accurate Intrasurgical Registration. J. Image Guid. Surg. 1995, 1, 17–29. [Google Scholar] [CrossRef]
  9. Pavlov, A.L.; Ovchinnikov, G.W.; Derbyshev, D.Y.; Tsetserukou, D.; Oseledets, I.V. AA-ICP: Iterative Closest Point with Anderson Acceleration. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 3407–3412. [Google Scholar]
  10. Ren, Y.; Zhou, F.C. A 3D point cloud registration algorithm based on feature points. In Proceedings of the 1st International Conference on Information Sciences, Machinery, Materials and Energy, Chongqing, China, 11–13 April 2015; Atlantis Press: Chongqing, China, 2015; pp. 802–806. [Google Scholar]
  11. Yang, J.; Wang, C.; Luo, W.; Zhang, Y.; Chang, B.; Wu, M. Research on point cloud registering method of tunneling roadway based on 3D NDT-ICP algorithm. Sensors 2021, 21, 4448. [Google Scholar] [CrossRef] [PubMed]
  12. Zhu, J.; Jin, C.; Jiang, Z.; Xu, S.; Xu, M.; Pang, S. Robust point cloud registration based on both hard and soft assignments. Opt. Laser Technol. 2019, 110, 202–208. [Google Scholar] [CrossRef]
  13. Zhao, H.; Tang, M.; Ding, H. HoPPF: A novel local surface descriptor for 3D object recognition. Pattern Recognit. 2020, 103, 107272. [Google Scholar] [CrossRef]
  14. Li, P.; Wang, R.; Wang, Y.; Gao, G. Fast method of registration for 3D RGB point cloud with improved four initial point pairs algorithm. Sensors 2019, 20, 138. [Google Scholar] [CrossRef] [PubMed]
  15. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254. [Google Scholar] [CrossRef]
  16. Wu, Z.; Chen, H.; Du, S.; Fu, M.; Zhou, N.; Zheng, N. Correntropy based scale ICP algorithm for robust point set registration. Pattern Recognit. 2019, 93, 14–24. [Google Scholar] [CrossRef]
  17. Wu, P.; Li, W.; Yan, M. 3D scene reconstruction based on improved ICP algorithm. Microprocess. Microsyst. 2020, 75, 103064. [Google Scholar] [CrossRef]
  18. Marchel, Ł.; Specht, C.; Specht, M. Testing the accuracy of the modified ICP algorithm with multimodal weighting factors. Energies 2020, 13, 5939. [Google Scholar] [CrossRef]
  19. Salti, S.; Tombari, F.; Stefano, L.D. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  20. Justusson, B.I. Median filtering: Statistical properties. In Two-Dimensional Digital Signal Processing II: Transforms and Median Filters; Springer: Berlin/Heidelberg, Germany, 2006; pp. 161–196. [Google Scholar]
  21. Pan, J.-J.; Tang, Y.-Y.; Pan, B.-C. The algorithm of fast mean filtering. In Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2–4 November 2007; Volume 1. [Google Scholar]
  22. Tsirikolias, K. Low level image processing and analysis using radius filters. Digit. Signal Process. 2016, 50, 72–83. [Google Scholar] [CrossRef]
  23. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  24. Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface reconstruction from unorganized points. ACM SIGGRAPH Comput. Graph. 1992, 26, 71–78. [Google Scholar] [CrossRef]
  25. Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3384–3391. [Google Scholar]
  26. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 1848–1853. [Google Scholar]
  27. Sorkine-Hornung, O.; Rabinovich, M. Least-Squares Rigid Motion Using SVD. Computing 2017, 1, 1–5. [Google Scholar]
Figure 1. The specific process of the point cloud registration policy.
Figure 2. Comparison of filtering effect (from left to right is the original point cloud, the point cloud after radius filtering, the point cloud after statistical filtering, and the point cloud after cluster filtering).
Figure 3. Clustering schematic diagram (the left diagram shows the relationship between P i and its neighbor point P i k , and the right diagram shows the relationship between P i and the next neighbor point P j , where P i and P j are density reachable).
Figure 4. Point clouds divided into different clusters (A and B are two clusters).
Figure 5. Clustering process (where (a) represents the point cloud originally captured, (b) indicates that the original point cloud is divided into several parts after the clustering is completed, and (c) represents the needed part of the point cloud extracted from (b)).
Figure 6. Normal vector calculation ( P i k for all adjacent points, n for normal vector direction).
Figure 7. Normal vector error estimation diagram (where P i is the central point, n is the direction of the normal vector, and the blue point is a discrete point in this neighborhood).
Figure 8. Diagram of correct and incorrect registration of the ceiling acquisition results (where red point cloud is the target point cloud, blue point cloud is the source point cloud, the left picture is the correct registration diagram, and the right picture is the wrong registration caused by the direct use of all point clouds for registration).
Figure 9. Schematic diagram of least truncated squares and least squares (where red is a line fitted by least squares and blue is a line fitted by least truncated squares fitted only to the previous part of the data).
Figure 10. Distance statistics between point pairs.
Figure 11. Loss function curves with different values of α.
Figure 12. Schematic diagram of Majorization–Minimization algorithm approaching the minimum value.
Figure 13. Curves of the function φ_c(x) for different values of c.
Figure 14. Comparison of bunny model registration results.
Figure 15. Bunny model point-to-point distance statistical graph (where red areas are nonoverlapping areas and are not involved in the calculation).
Figure 16. Comparison of table model registration results.
Figure 17. Table model point pair distance statistical graph (where the red area is a nonoverlapping area and does not participate in the calculation).
Figure 18. Comparison of bunny model registration results.
Figure 19. Full view of the point cloud shot with the standard step plate.
Figure 20. Comparison of real shot step board point cloud stitching results.
Figure 21. Statistical chart of distance between data points on the ladder board (where the red area is a nonoverlapping area and does not participate in the calculation).
Figure 22. Comparison of real shot building surface point cloud stitching results.
Figure 23. Statistical map of point-to-point distance on the surface of the house (where the red area is a nonoverlapping area and does not participate in the calculation).
Figure 24. Comparison of RMSEs and iteration numbers for multiple algorithms.
Table 1. Precision of bunny model registration results.

Algorithm                     RMSE (mm)   Time (s)
W-ICP                         0.23436     11.009
ICP                           0.63214     2.509
NDT                           0.58630     29.825
4PCS                          0.65075     104.070
Sparse ICP                    1.23559     224.258
Sparse Point-to-Plane ICP     0.35467     10.747
Point-to-Plane ICP            0.62557     1.640
Table 2. Precision of table model registration results.

Algorithm                     RMSE (mm)   Time (s)
W-ICP                         0.62638     12.856
ICP                           2.06839     32.152
NDT                           5.36419     56.905
4PCS                          1.80571     1677.170
Sparse ICP                    2.46771     319.674
Sparse Point-to-Plane ICP     5.38545     51.665
Point-to-Plane ICP            11.02486    50.802
Table 3. Precision of step plate registration results.

Algorithm                     RMSE (m)    Time (s)
W-ICP                         0.001424    10.846
ICP                           0.005479    3.168
NDT                           0.019032    26.215
4PCS                          0.005963    220.679
Sparse ICP                    0.005583    228.175
Sparse Point-to-Plane ICP     0.005828    80.910
Point-to-Plane ICP            0.005864    14.407
Table 4. Precision of building surface registration results.

Algorithm                     RMSE (m)    Time (s)
W-ICP                         0.008444    7.035
ICP                           0.220135    8.254
NDT                           0.012154    12.157
4PCS                          0.009239    3405.7
Sparse ICP                    0.337585    481.381
Sparse Point-to-Plane ICP     0.354409    133.035
Point-to-Plane ICP            0.357250    7.844
Table 5. Comparison of convergence rate results (10, 20, 30, and 50 are the numbers of iterations; each cell is RMSE (mm)/Time (s)).

Algorithm                     10                 20                 30                 50
W-ICP                         0.65544/8.180      0.23437/8.339      0.23437/9.808      0.23437/10.735
ICP                           1.08041/0.618      0.73169/1.189      0.63644/1.591      0.63644/2.377
NDT                           7.04746/2.787      2.62302/6.477      1.90987/7.277      0.58630/8.237
4PCS                          2.27900/262.872    0.87899/535.570    0.64038/59.202     0.61628/1346.594
Sparse ICP                    4.09899/10.808     3.89407/22.064     3.79135/35.430     3.62544/56.304
Sparse Point-to-Plane ICP     4.20625/20.419     4.31974/42.142     4.80921/66.075     4.80921/66.075
Point-to-Plane ICP            2.87403/0.768      2.78403/1.467      2.78400/1.766      2.78400/1.792

Share and Cite

Geng, H.; Song, P.; Zhang, W. An Improved Large Planar Point Cloud Registration Algorithm. Electronics 2024, 13, 2696. https://doi.org/10.3390/electronics13142696

