Article

Research on a Point Cloud Registration Method Based on Dynamic Neighborhood Features

College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 4036; https://doi.org/10.3390/app15074036
Submission received: 8 March 2025 / Revised: 29 March 2025 / Accepted: 1 April 2025 / Published: 7 April 2025
(This article belongs to the Special Issue Motion Control for Robots and Automation)

Abstract
This paper introduces a method that enhances the accuracy and efficiency of point cloud data registration. The method selects the point nearest the centroid of each voxel as a feature point and uses the projected distances from that feature point to the other points within a dynamic neighborhood as its feature information. Using this feature information, it registers two sets of point cloud data. The method increases the density and integrity of the point cloud data and improves the accuracy and robustness of registration, while the use of feature points reduces the computational load and thereby improves processing efficiency. The dynamic neighborhood enables the method to flexibly handle point cloud data of different scales and densities. Experimental results show that the proposed method registers point cloud data accurately and efficiently, handles data under various complex conditions, and effectively improves point cloud registration and fusion.

1. Introduction

In recent years, the rapid development of light detection and ranging (LiDAR) technology has provided strong support for the acquisition and processing of point cloud data and has driven technical advances and expanded applications in related fields [1]. LiDAR is now widely used in autonomous driving, intelligent robotics, drones, surveying and mapping/geographic information systems, construction engineering, environmental monitoring, medical treatment, and other fields [2,3]. It offers high resolution, long detection range, high real-time performance, and strong anti-interference capability. Owing to factors such as object shape, the scanning angle of the LiDAR system, and equipment accuracy, point cloud data captured from different poses and angles must be registered and fused to obtain more complete data. The main goal of point cloud registration is to combine point clouds acquired from different viewpoints into a unified, complete three-dimensional model [4]. Many researchers have employed different methods and techniques to optimize this process. For example, the iterative closest point (ICP) method, introduced by Chen and Medioni [5] and Besl and McKay [6], minimizes the distance between two point clouds by computing rotation and translation transformations. This method is simple and efficient, but it requires a good initial pose to avoid registration failure. Xu et al. [7] proposed an improved ICP method combining random sample consensus (RANSAC), intrinsic shape signatures (ISSs), and 3D shape context (3DSC) features. Although this method improves registration accuracy and efficiency, it remains highly dependent on the initial pose and sensitive to parameters. Aiger et al. [8] proposed the four-point congruent sets (4PCSs) method, which uses coplanar four-point sets for fast registration; however, 4PCSs has high complexity and performs poorly when the overlap between the two point clouds is low. Theiler et al. [9] therefore proposed an improved method, keypoint-based four-points congruent sets (K4PCSs), which thins the point cloud to key points to address low registration-area density, large scanning angles, and low computational and memory efficiency. Biber and Straßer [10] proposed the three-dimensional normal distributions transform (3D-NDT) method, introducing the normal distributions transform (NDT) to three-dimensional space for the first time, and validated its performance through registration of real-world mining data. This method is faster and more reliable than traditional ICP, but it remains sensitive to the initial pose and more susceptible to noise. To mitigate the strong influence of the initial pose on registration results, many studies adopt a coarse-to-fine strategy: a coarse registration first provides a sufficiently accurate initial pose, followed by a fine registration for refinement [11,12]. While this approach can yield higher registration accuracy, it often comes at the cost of increased computational time.
To address the limitations of existing point cloud registration methods, which often suffer from insufficient accuracy and low efficiency, this paper proposes a novel registration method based on single-LiDAR dynamic neighborhood features. In this approach, the centroid of the point cloud serves as the feature point, and the projected distances between this feature point and other points within a dynamic neighborhood are extracted as feature information. By calculating the chi-square distance between these feature vectors [13], the registration of two point cloud datasets is achieved. The size of the dynamic neighborhood is adaptively determined based on curvature, density, and curvature-based wavelet transform detail coefficients [14]. This method effectively captures the local geometric characteristics of the point cloud and accurately identifies optimal matching pairs. Compared to existing algorithms, the proposed method achieves high-precision registration more quickly, yielding significant improvements in both registration accuracy and speed.

2. Methods

The main process of the proposed point cloud data registration method based on dynamic neighborhood features is illustrated in Figure 1. After acquiring the source and target point cloud data, statistical outlier removal is first performed to filter out noise. Subsequently, the point cloud is divided into small, equally sized voxels, and the centroid of the points within each voxel is selected as a feature point. A dynamic neighborhood is determined for each feature point based on its local average curvature, point cloud density, and detail coefficients from a curvature-based wavelet transform. The local feature descriptor for each feature point is then constructed as a cumulative histogram of the projected distances from the remaining points within its dynamic neighborhood onto the feature point along the direction of maximum principal curvature. Feature point pairing is achieved by calculating the chi-squared distance between the cumulative histograms of the projected distances for feature points in the two point clouds. The pair with the minimum chi-squared distance is considered the best match and retained. Matching pairs with chi-squared distances exceeding a predefined threshold are discarded to improve matching accuracy. Finally, the transformation matrix between the source and target point clouds is computed using the retained best matching pairs. This transformation matrix is then applied to register the source point cloud to the target point cloud [15,16].

2.1. Introduction to LiDAR

In this study, some of the point cloud data were collected using a Livox-Mid360 LiDAR (Shenzhen, China). The device and its main performance parameters are shown in Figure 2 and Table 1.

2.2. Statistical Filtering

Point cloud data acquired by LiDAR scanning typically exhibit non-uniform density, and measurement errors often introduce sparse outliers. These outliers can lead to inaccuracies in local feature descriptions (such as normal vectors or curvatures at sampling points), which subsequently affect downstream processes like point cloud registration. Exploiting the spatial sparsity of outliers, statistical filtering [17] is employed for their removal. For each point $P_i(x_i, y_i, z_i)$ in the point cloud data, the $k$ nearest points are selected as its neighborhood set, and the average distance $d_i$ between point $P_i$ and its neighbors is then calculated using the following equation:

$$d_i = \frac{1}{k}\sum_{i=1}^{k} S_i \tag{1}$$

where $S_i$ is the distance between point $P_i$ and another point $P_m(x_m, y_m, z_m)$ in its neighborhood, computed as follows:

$$S_i = \sqrt{(x_i - x_m)^2 + (y_i - y_m)^2 + (z_i - z_m)^2} \tag{2}$$

The average value of $d_i$ over the point cloud is $\bar{d}$, calculated as follows:

$$\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i \tag{3}$$

The standard deviation $\sigma$ is calculated as follows:

$$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2} \tag{4}$$

where $n$ is the number of points in the point cloud data.
If the average distance $d_i$ of a point satisfies $\bar{d} - \rho\sigma \le d_i \le \bar{d} + \rho\sigma$, the point is retained; otherwise, the point is considered an outlier and removed. Here, $\rho$ is a multiplier of the standard deviation.
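As a concrete illustration, this filter can be sketched in a few lines of Python with NumPy and SciPy; the function name, the neighborhood size k = 20, and the default multiplier ρ = 1.0 are illustrative assumptions rather than values prescribed in this paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=20, rho=1.0):
    """Statistical outlier removal following Eqs. (1)-(4); points is an (n, 3) array."""
    tree = cKDTree(points)
    # Query k+1 neighbors because the nearest neighbor of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    d = dists[:, 1:].mean(axis=1)          # mean neighbor distance d_i, Eq. (1)
    d_bar, sigma = d.mean(), d.std()       # global mean and standard deviation, Eqs. (3)-(4)
    # Retain points whose mean neighbor distance lies within rho standard deviations.
    keep = np.abs(d - d_bar) <= rho * sigma
    return points[keep]
```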

2.3. Local Feature Point Selection

Given an input point cloud dataset, a minimum bounding box (MBB) [18] is constructed based on the maximum and minimum coordinates in the X, Y, and Z directions. Specifically, the MBB is defined by $x_{max}$, $y_{max}$, $z_{max}$, $x_{min}$, $y_{min}$, and $z_{min}$, representing the extrema of the point cloud in each dimension. The resulting MBB tightly encloses the entire point cloud, with edge lengths denoted as $L_x$, $L_y$, and $L_z$:

$$L_x = x_{max} - x_{min}, \quad L_y = y_{max} - y_{min}, \quad L_z = z_{max} - z_{min} \tag{5}$$

The edges of the MBB are divided into $n_1$, $n_2$, and $n_3$ equal segments along the X, Y, and Z axes, respectively. Consequently, the MBB is partitioned into $n_1 \cdot n_2 \cdot n_3$ smaller voxels, with edge lengths $m_1$, $m_2$, and $m_3$:

$$m_1 = \frac{L_x}{n_1}, \quad m_2 = \frac{L_y}{n_2}, \quad m_3 = \frac{L_z}{n_3} \tag{6}$$

For each voxel, the centroid $\bar{p}$ of the enclosed point cloud data is computed. The point within each voxel that is nearest to the centroid $\bar{p}$ is then selected as the feature point $P_c$:

$$\bar{p} = \frac{1}{N_j}\sum_{i=1}^{N_j} p_i \tag{7}$$

where $N_j$ is the number of points within the $j$-th voxel.
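The voxel partition and centroid-based selection might be implemented as in the following sketch; the grid resolutions n1 = n2 = n3 = 20 are arbitrary illustrative defaults.

```python
import numpy as np

def select_feature_points(points, n1=20, n2=20, n3=20):
    """Per-voxel feature point selection following Eqs. (5)-(7)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    edges = (maxs - mins) / np.array([n1, n2, n3])   # voxel edge lengths m_1, m_2, m_3
    edges[edges == 0] = 1.0                          # guard against degenerate (flat) extents
    idx = np.floor((points - mins) / edges).astype(int)
    idx = np.minimum(idx, [n1 - 1, n2 - 1, n3 - 1])  # clamp points lying on the max faces
    feature_points = []
    for key in np.unique(idx, axis=0):
        in_voxel = points[(idx == key).all(axis=1)]
        centroid = in_voxel.mean(axis=0)             # p_bar, Eq. (7)
        # The feature point P_c is the actual point closest to the voxel centroid.
        nearest = np.argmin(np.linalg.norm(in_voxel - centroid, axis=1))
        feature_points.append(in_voxel[nearest])
    return np.asarray(feature_points)
```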

2.4. Dynamic Neighborhood Calculation

Following the selection of feature points, it is necessary to determine a dynamic neighborhood for each feature point to characterize its local geometric information [19]. To define the size of this dynamic neighborhood, we introduce three parameters: local point density, local mean curvature, and wavelet transform detail coefficients based on the curvature.
Point cloud density, representing the degree of sparseness or denseness of point cloud data in space, is a crucial parameter for determining the size of the dynamic neighborhood [20]. In regions with high point cloud density, a unit volume contains many points, indicating richer point cloud information. In such cases, an excessively large neighborhood may encompass too many points, smoothing the computed local features and losing critical detail. The dynamic neighborhood in dense regions should therefore be appropriately reduced so that local features describe finer geometric structures and avoid feature generalization; smaller neighborhoods better capture subtle variations in regions with rich surface detail (e.g., textures, folds). Conversely, in regions with a sparse point cloud distribution, a unit volume contains few points, and the point cloud information is relatively insufficient. If the neighborhood is too small, there may not be enough points to support the calculation of local features, introducing noise or instability [21]. Thus, in sparse regions, the dynamic neighborhood should be appropriately enlarged to ensure that enough points are available for computing local features; in spacious background areas or regions with missing data, a larger neighborhood compensates for the lack of information caused by sparsity. In summary, point cloud density indirectly determines the size of the dynamic neighborhood by influencing the number of points within it: dense regions require smaller dynamic neighborhoods to preserve detail, while sparse regions require larger ones to ensure computational stability.
Local mean curvature is an important metric for describing the geometric characteristics of point cloud surfaces, reflecting the average bending of the surface in the vicinity of a given point. Compared to single curvature measures, mean curvature provides a more comprehensive description of local surface geometry. In regions with high mean curvature (e.g., edges, corners, sharp features), the geometric variations in the surface are significant. These regions typically contain rich geometric detail, such as object contours and sharp edges, which plays a crucial role in tasks like point cloud registration, segmentation, and recognition [22,23]. To preserve these detailed features, the dynamic neighborhood should be appropriately reduced, preventing the feature smoothing and loss of important geometric information that an excessively large neighborhood would cause. Conversely, in regions with low mean curvature (e.g., planar or smooth surfaces), the geometric variations are gradual and the features relatively uniform; here the dynamic neighborhood can be appropriately enlarged to capture more local geometric information while enhancing the robustness of feature computation.
Curvature-based wavelet transform detail coefficients extend local curvature information further, capturing local geometric details and curvature variations in point cloud data through multi-scale analysis. In regions with high curvature, the geometric changes on the surface are significant, and the wavelet transform extracts prominent high-frequency information, resulting in large detail coefficients that reflect the complexity of local geometric features. In such regions, the dynamic neighborhood should be appropriately reduced to ensure accurate capture and preservation of these details. Conversely, in regions with low curvature, the geometric changes are gradual, the wavelet transform extracts little high-frequency information, and the detail coefficients are small [24]; the dynamic neighborhood can then be appropriately enlarged, capturing more local geometric information while enhancing the stability of feature computation.
Therefore, the size of the dynamic neighborhood needs to comprehensively consider the influence of point cloud density, local curvature, and detail coefficients. In dense or high-curvature regions, the point cloud information is rich and the geometric features are complex, requiring a smaller dynamic neighborhood to preserve details. In sparse or low-curvature regions, the point cloud information is insufficient and the geometric features are simple, necessitating a larger dynamic neighborhood to ensure computational stability [25]. As a complement to local curvature, the detail coefficients further refine the neighborhood adjustment strategy, ensuring accurate capture of geometric features in point cloud data across multiple scales.
The following details the computational steps involved in determining the dynamic neighborhood.
First, the principal directions are computed for each local feature point $P_c$ within each grid cell. The covariance matrix $T$ is constructed from the other points within the same grid cell; the principal directions and curvature information at a point can then be obtained from this matrix, as shown in Equation (8):

$$T = \frac{1}{N_j}\sum_{i=1}^{N_j}\left(p_i - \bar{p}\right)\left(p_i - \bar{p}\right)^T \tag{8}$$
The covariance matrix $T$ is then subjected to eigenvalue decomposition, yielding eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ (with $\lambda_1 > \lambda_2 > \lambda_3$) and corresponding eigenvectors $v_1$, $v_2$, and $v_3$. From this, the dominant principal direction $v_1$ and the secondary principal direction $v_2$ of the local feature point $P_c$ are obtained.
Similarly, the eigenvalues $\lambda_{i1}$, $\lambda_{i2}$, and $\lambda_{i3}$ (with $\lambda_{i1} > \lambda_{i2} > \lambda_{i3}$) for each of the other points within the same grid cell can be computed by constructing a covariance matrix $T_i$. The curvature $C_i$ of each such point is calculated as shown in Equation (9):

$$C_i = \frac{\lambda_{i3}}{\lambda_{i1} + \lambda_{i2} + \lambda_{i3}} \tag{9}$$
The average curvature $\bar{C}_i$ of the points within the grid cell is obtained from Equation (10):

$$\bar{C}_i = \frac{1}{N_j}\sum_{i=1}^{N_j} C_i \tag{10}$$
The point cloud density $\rho_i$ within a grid cell is the number of points per unit volume, as shown in Equation (11):

$$\rho_i = \frac{N_j}{m_1 m_2 m_3} \tag{11}$$
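The quantities of Equations (8), (9), and (11) reduce to a short PCA routine, sketched below; the function names are illustrative.

```python
import numpy as np

def local_pca(points):
    """Eigen-decomposition of the covariance matrix T of Eq. (8)."""
    centered = points - points.mean(axis=0)
    T = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(T)      # eigh returns ascending eigenvalues
    order = eigvals.argsort()[::-1]           # reorder so lambda_1 >= lambda_2 >= lambda_3
    return eigvals[order], eigvecs[:, order]  # columns are v_1, v_2, v_3

def curvature(eigvals):
    """Surface-variation curvature C_i of Eq. (9)."""
    return eigvals[2] / eigvals.sum()

def density(n_points, m1, m2, m3):
    """Point density rho_i of Eq. (11): points per unit voxel volume."""
    return n_points / (m1 * m2 * m3)
```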
The curvature-based wavelet detail coefficient $W_i$ reflects local shape variations and is computed as follows.
The curvature $C_i$ of each point within the grid cell is used as the input signal $x$ for the wavelet transform. The input order of the curvature values follows the sequence of points along the dominant principal direction $v_1$ and the secondary principal direction $v_2$ within the grid cell:

$$x = \left[ C_1, C_2, C_3, \ldots, C_n \right] \tag{12}$$
We choose Daubechies (db4) as the wavelet base for discrete wavelet transform of curvature. The db4 wavelet transform requires two filters: a low-pass filter for extracting low-frequency components and a high-pass filter for extracting high-frequency components.
The input curvature signal is processed with the db4 wavelet filters for low-pass (approximation) and high-pass (detail) filtering. The low-pass filtering yields the approximation coefficients $A_1[n]$, while the high-pass filtering yields the detail coefficients $D_1[n]$. After downsampling both sets of coefficients (taking every other sample), the curvature signal $x$ after the first level of transformation is decomposed as shown in Equation (13):

$$x[n] = A_1[n] + D_1[n] \tag{13}$$

where $A_1[n]$ represents the first-level approximation coefficients (the low-frequency components) and $D_1[n]$ the first-level detail coefficients (the high-frequency components) [24].
After the first-level transform is completed, the approximation coefficients $A_1[n]$ are further decomposed, yielding the second-level approximation coefficients $A_2[n]$ and detail coefficients $D_2[n]$:

$$A_1[n] = A_2[n] + D_2[n] \tag{14}$$
The above process is repeated until the desired decomposition level $J$ is reached, decomposing the input curvature signal into the approximation coefficients $A_J[n]$ and the detail coefficients $D_1[n], D_2[n], \ldots, D_J[n]$. The input signal $x$ is ultimately decomposed as shown in Equation (15):

$$x = A_J[n] + D_1[n] + D_2[n] + \cdots + D_J[n] \tag{15}$$
The detail coefficient components are extracted to calculate the curvature-based wavelet transform detail coefficient $W_i$, as shown in Equation (16):

$$W_i = \sum_{j=1}^{J} \sum_{n} D_j[n]^2 \tag{16}$$
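Assuming the PyWavelets library for the db4 decomposition, the detail energy of Equation (16) might be computed as follows; the decomposition level J = 3 is an illustrative choice.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_detail_energy(curvatures, level=3):
    """Detail-coefficient energy W_i of Eq. (16) from a multi-level db4 decomposition."""
    # wavedec returns [A_J, D_J, D_{J-1}, ..., D_1]; skip the approximation coefficients.
    coeffs = pywt.wavedec(np.asarray(curvatures, dtype=float), 'db4', level=level)
    return sum(float(np.sum(d ** 2)) for d in coeffs[1:])
```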
Finally, the dynamic neighborhood $L$ of the feature point $P_c$ within each voxel is determined from the point cloud density $\rho_i$, the average curvature $\bar{C}_i$, and the curvature-based wavelet transform detail coefficient $W_i$, as defined in Equation (17):

$$L = k \cdot \frac{1}{\rho_i} \cdot \frac{1}{1 + \alpha W_i + \beta \bar{C}_i} \tag{17}$$

where $k$ is a scaling factor controlling the overall size of the dynamic neighborhood, which can be determined by the average point spacing within the neighborhood and typically ranges from 0.8 to 1.5. $\alpha$ and $\beta$ are weighting parameters that balance the effects of the detail coefficient and the curvature on neighborhood size: increasing them heightens the sensitivity to $W_i$ and $\bar{C}_i$, while decreasing them suppresses it. Generally, $\alpha$ takes values from 0.1 to 1, and $\beta$ typically ranges from 0.8 to 1.5. $\rho_i$ is the local point cloud density at point $i$, $W_i$ is the curvature-based wavelet detail coefficient, and $\bar{C}_i$ is the average curvature of points within the neighborhood. When the local density is high, or when the curvature variation is complex and the average curvature relatively high, $L$ decreases to avoid losing detail information; conversely, $L$ increases to capture more information.
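Equation (17) itself translates into a one-line function; the defaults below are picked from the parameter ranges suggested above.

```python
def dynamic_neighborhood(rho_i, W_i, C_bar, k=1.0, alpha=0.5, beta=1.0):
    """Dynamic neighborhood size L, Eq. (17): dense, detailed, or highly curved regions
    yield a smaller L, while sparse and smooth regions yield a larger one."""
    return k * (1.0 / rho_i) / (1.0 + alpha * W_i + beta * C_bar)
```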

2.5. Description of Local Features

Following feature point extraction from the point cloud and computation of dynamic neighborhood radii, a feature descriptor is generated by constructing a cumulative histogram of projection distances between each feature point and its neighbors within the dynamically sized neighborhood.
For each feature point, the displacement vector to every point $P_i$ within its dynamic neighborhood $L$ is computed. This vector is then projected onto the feature point's principal eigenvector $v_1$ (corresponding to the largest eigenvalue), yielding a projection length $d_p$, as shown in Equation (18):

$$d_p = \left(P_i - P_c\right) \cdot v_1 \tag{18}$$
Upon obtaining the projected distances for all points within the neighborhood, a cumulative projected distance histogram is constructed for each feature point. The histogram's range is bounded below by the minimum projected distance $d_{p,min}$ and above by the maximum projected distance $d_{p,max}$. This range is partitioned into equally sized bins, and the frequency of projected distances falling within each bin is accumulated. The resulting histogram represents the distribution of projected distances along the orientation defined by $v_1$:

$$H(i) = \mathrm{count}\left(d \in \left[d_p, d_{p+1}\right]\right) \tag{19}$$

where $H(i)$ represents the number of points within the $i$-th interval $[d_p, d_{p+1}]$.
The cumulative projected distance histogram provides a local feature descriptor for each feature point.
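A sketch of the descriptor construction of Equations (18) and (19) follows; the bin count and the final normalization (so that descriptors from neighborhoods with different point counts remain comparable) are assumptions not specified in the paper.

```python
import numpy as np

def projection_histogram(P_c, neighbors, v1, bins=16):
    """Projected-distance histogram descriptor, Eqs. (18)-(19)."""
    d_p = (neighbors - P_c) @ v1          # signed projections onto v_1, Eq. (18)
    hist, _ = np.histogram(d_p, bins=bins, range=(d_p.min(), d_p.max()))
    # Normalization is an added step, not stated in the paper.
    return hist / max(hist.sum(), 1)
```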

2.6. Pairing of Feature Points

Following the derivation of cumulative projected distance histograms for feature points in both the source and target point cloud datasets, point correspondence is established by minimizing the chi-squared distance $D_c$ between the respective histograms, as detailed in Equation (20):

$$D_c = \chi^2\left(\hat{H}_1, \hat{H}_2\right) = \frac{1}{2}\sum_{i=1}^{n}\frac{\left[\hat{H}_1(i) - \hat{H}_2(i)\right]^2}{\hat{H}_1(i) + \hat{H}_2(i)} \tag{20}$$

The optimal correspondence is identified where the chi-squared distance $D_c$ is minimized: for each feature point in the first dataset, the feature point in the second dataset exhibiting the minimum chi-squared distance is designated the best matching candidate. To enhance matching accuracy, putative correspondences with a chi-squared distance exceeding a predefined threshold are subsequently discarded.
Employing the chi-squared distance as a metric for quantifying the dissimilarity between cumulative projected distance histograms enables effective registration of source and target point clouds, thereby yielding optimal point correspondences.
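The pairing step might look like the following sketch; the rejection threshold of 0.25 is illustrative, and the small eps guards against division by zero for empty bins.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-squared distance D_c between two histograms, Eq. (20)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_features(hists_src, hists_tgt, threshold=0.25):
    """For each source descriptor, keep its minimum-distance target descriptor,
    discarding pairs whose distance exceeds the threshold."""
    pairs = []
    for i, h1 in enumerate(hists_src):
        d = [chi_square_distance(h1, h2) for h2 in hists_tgt]
        j = int(np.argmin(d))
        if d[j] <= threshold:
            pairs.append((i, j))
    return pairs
```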

2.7. Solve Transformation Matrix

After obtaining several sets of optimal correspondences, a rigid transformation is computed from the source point cloud data to the target point cloud data. This rigid transformation allows for the registration of the source point cloud data into the coordinate system of the target point cloud data [26].
Initially, the centroids of both the source point cloud data ($P$) and the target point cloud data ($Q$) are calculated, as shown in Equation (21):

$$C_P = \frac{1}{N_P}\sum_{i=1}^{N_P} p_i, \quad C_Q = \frac{1}{N_Q}\sum_{i=1}^{N_Q} q_i \tag{21}$$

where $N_P$ and $N_Q$ represent the numbers of points in the source and target point cloud data, respectively.
The corresponding pairs in both datasets are then translated to the origin, as shown in Equation (22):

$$p_i' = p_i - C_P, \quad q_i' = q_i - C_Q \tag{22}$$
The covariance matrix $H$ is then obtained from the translated matching pairs, as shown in Equation (23):

$$H = \sum_{i=1}^{N} p_i' \, q_i'^{\,T} \tag{23}$$

where $N$ is the number of matching pairs.
Singular value decomposition is performed on the covariance matrix $H$, as shown in Equation (24):

$$H = U \Sigma V^T \tag{24}$$
where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. The rotation matrix $R$ can be calculated from $U$ and $V$, as shown in Equation (25):

$$R = V U^T \tag{25}$$
The translation vector $t$ can be calculated from the centroids of the point cloud data and the rotation matrix $R$, as shown in Equation (26):

$$t = C_Q - R \, C_P \tag{26}$$
After calculating the rotation matrix $R$ and the translation vector $t$, the source point cloud data can be transformed into the coordinate system of the target point cloud data using Equation (27):

$$q_i = R \, p_i + t \tag{27}$$

where $p_i$ and $q_i$ represent points in the source and target point cloud data, respectively.
In this way, the registration of point cloud data is completed.
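Equations (21)-(27) amount to the standard SVD-based (Kabsch) solution; the sketch below adds the usual determinant check against reflections, a step the paper does not state explicitly.

```python
import numpy as np

def rigid_transform(P, Q):
    """Rigid transform (R, t) mapping matched source points P to target points Q."""
    C_P, C_Q = P.mean(axis=0), Q.mean(axis=0)    # centroids, Eq. (21)
    H = (P - C_P).T @ (Q - C_Q)                  # covariance of centered pairs, Eq. (23)
    U, _, Vt = np.linalg.svd(H)                  # Eq. (24)
    R = Vt.T @ U.T                               # Eq. (25)
    if np.linalg.det(R) < 0:                     # reflection guard (standard Kabsch step,
        Vt[-1, :] *= -1                          # not stated in the paper)
        R = Vt.T @ U.T
    t = C_Q - R @ C_P                            # Eq. (26)
    return R, t

# Applying the transform, Eq. (27): registered = P @ R.T + t
```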

3. Results

The data used in this paper come from self-collected LiDAR measurements and open-source datasets. The collection equipment is the Livox-Mid360 LiDAR, and the scenarios are indoor scans, including indoor point cloud data, storage box point cloud data, and chair point cloud data. The open-source datasets are the Stanford University Dragon and Bunny point clouds.
As shown in Figure 3, statistical filtering has been applied. The results indicate that this preprocessing step effectively smooths edge details within the point cloud data. Furthermore, the filtering not only mitigates noise and outlier points but also maintains the integrity of detailed features, enhancing the accuracy of subsequent point cloud registration.
After applying statistical outlier removal filtering, local feature points were selected from the resulting point cloud data. This feature point selection was performed on four distinct point cloud datasets: (a) an indoor point cloud captured from varying viewpoints; (b) the Dragon point cloud dataset, with two instances representing the same object scanned from different positions and orientations; (c) a storage box point cloud dataset, consisting of one partially occluded scan and one more complete scan; and (d) point cloud data of a chair captured from different angles and positions, resulting in partial occlusions.
Figure 4 illustrates the feature points extracted from the source and target point clouds, depicted in red and green, respectively.
Following feature point extraction from the point cloud data, the dynamic neighborhood $L$ is computed for each point via Equation (17). A cumulative histogram of projected distances to neighboring points within $L$ then provides a discriminative feature descriptor, facilitating the establishment of feature point correspondences.
Optimal correspondences are determined by minimizing the chi-squared distance, which subsequently allows for the computation of the rotation matrix $R$ and translation vector $t$. This completes the registration of the point cloud data.
The data used for point cloud data registration are house point cloud data (a), Dragon point cloud data (b), storage box point cloud data (c), and chair point cloud data (d). The registration results are shown in Figure 5.
In Figure 5, the red point cloud data are the source point cloud data and the green point cloud data are the target point cloud data. After the registration algorithm is applied, the two sets of point cloud data overlap and the combined data become more complete.
To validate the rationality and effectiveness of the proposed method, we compared its registration performance against K4PCSs, a k-dimensional (KD)-tree-based improved ICP method, and 3D-NDT under the same experimental conditions. Registration quality was evaluated using registration time and root mean square error (RMSE). The comparison experiments utilized three point cloud datasets: the Stanford Bunny and Dragon datasets, and a chair dataset acquired using a Livox-Mid360 LiDAR. The Bunny dataset comprised 35,947 points in both the source and target point clouds. The Dragon dataset contained 29,103 points in both the source and target point clouds. The acquired chair dataset consisted of 76,099 points in the source point cloud and 65,433 points in the target point cloud. Registration results for each method are shown in Figure 6.
Figure 6 provides a comparative visualization of registration results. Three datasets are used: the Stanford Bunny, the Stanford Dragon, and a chair dataset captured with LiDAR (arranged from top to bottom). The columns display, from left to right: the original, unregistered point clouds, the point clouds registered using K4PCSs, the point clouds registered using a KD-tree-based improved ICP, the point clouds registered using 3D-NDT, and the point clouds registered using the proposed method.
The greater the overlap between the two point clouds, the better the registration; this is usually measured by the root mean square error (RMSE), which quantifies the residual distances between the transformed source point cloud data and the target point cloud data [27], as shown in Equation (28):

$$E = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left\| A p_i - q_i \right\|^2} \tag{28}$$

where $A$ is the transformation matrix, $p_i$ is a source point cloud point, and $q_i$ is the corresponding target point cloud point.
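Given registered correspondences, Equation (28) reduces to a one-liner; here the transform is assumed to have been applied already.

```python
import numpy as np

def registration_rmse(P_transformed, Q):
    """Root mean square error over corresponding points, Eq. (28)."""
    return np.sqrt(np.mean(np.sum((P_transformed - Q) ** 2, axis=1)))
```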
The smaller the value of $E$, the smaller the registration error and the higher the accuracy. The registration performance of each method is measured by the registration time and the root mean square error, as shown in Table 2 and Table 3.
Table 2 shows the registration time for each method. K4PCSs achieved the shortest registration time for the Bunny dataset (4.9 s), while the proposed method had the shortest registration times for both the Dragon (4.69 s) and chair (10.36 s) datasets.
Table 3 shows the root mean square error (RMSE) of each method after registration. The KD-ICP method achieved the lowest RMSE for the Bunny dataset (0.599 mm), while the proposed method yielded the lowest RMSE for both the Dragon (0.964 mm) and chair (1.115 mm) datasets.

4. Discussion

Based on the data in Table 2 and Table 3, the registration time of the proposed method is only slightly slower than that of K4PCSs for the Bunny dataset; for all other datasets tested, the proposed method achieves the fastest registration time. Regarding registration accuracy, KD-ICP performs slightly better than the proposed method on the Bunny dataset; however, its registration time is approximately twice that of the proposed method. For the Dragon and chair datasets, the proposed method demonstrates both the best accuracy and the fastest registration speed. These results indicate that, compared to the improved KD-ICP method [4,5], the K4PCSs method [9], and the 3D-NDT method [10], the proposed approach achieves faster and more accurate registration, significantly enhancing the accuracy and efficiency of point cloud data registration. Noise typically occurs at the edges of the main data, so outliers far from the main body are easily removed by the statistical filter. For unevenly distributed data, the dynamic neighborhood is adjusted according to local density to include more feature information. Across diverse point cloud datasets, the proposed method offers a favorable balance of accuracy and efficiency, completing registration with high precision and speed.

5. Conclusions

This paper introduces a novel point cloud registration method based on dynamic neighborhood features. The method leverages the projection distance relationship between statistical feature points and other points within the dynamic neighborhood. This approach establishes accurate feature point correspondences and enables robust point cloud registration. By accurately matching feature points and overcoming the limitations of traditional registration techniques that often struggle to simultaneously optimize accuracy and efficiency, this method offers a compelling solution. The incorporation of multiple parameters enhances the robustness of the approach. Furthermore, by using feature points instead of the entire point cloud for registration, the computational burden is reduced, leading to more efficient data processing. The dynamic neighborhood approach also allows the method to adapt to point cloud data with varying scales and densities, enhancing its flexibility and broadening its application potential in complex environments, especially in fields such as autonomous driving, robotic navigation, building reconstruction, and 3D map construction. Integrating this method into practical systems can enhance the capability of processing point cloud data in complex environments. This study was tested only on a limited set of data and did not extend to a wider range of complex environmental data, which presents a limitation. Future work needs to involve testing on more diverse datasets to validate the feasibility of the proposed method and to develop a simpler and more direct guideline for parameter selection.

Author Contributions

Conceptualization, X.L. and Z.W.; methodology, X.L.; software, X.L.; validation, X.L., Z.W. and R.W.; formal analysis, R.W.; investigation, R.W.; resources, Z.W.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, Z.W.; visualization, R.W.; supervision, Z.W.; project administration, Z.W.; funding acquisition, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ICP: Iterative Closest Point
LiDAR: Light Detection and Ranging
RANSAC: Random Sample Consensus
ISSs: Intrinsic Shape Signatures
3DSC: 3D Shape Context
4PCSs: Four-Point Congruent Sets
K4PCSs: Keypoint-Based Four-Points Congruent Sets
3D-NDT: Three-Dimensional Normal Distributions Transform
NDT: Normal Distributions Transform
KD: K-Dimensional
RMSE: Root Mean Square Error

References

  1. Marani, R.; Renò, V.; Nitti, M.; D’Orazio, T.; Stella, E. A modified iterative closest point algorithm for 3D point cloud registration. Comput.-Aided Civ. Infrastruct. Eng. 2016, 31, 515–534. [Google Scholar] [CrossRef]
  2. Yang, H.; Shi, J.; Carlone, L. Teaser: Fast and certifiable point cloud registration. IEEE Trans. Robot. 2020, 37, 314–333. [Google Scholar] [CrossRef]
  3. Zou, Z.; Lang, H.; Lou, Y.; Lu, J. Plane-based global registration for pavement 3D reconstruction using hybrid solid-state LiDAR point cloud. Autom. Constr. 2023, 152, 104907. [Google Scholar]
  4. Yang, J.; Cao, Z.; Zhang, Q. A fast and robust local descriptor for 3D point cloud registration. Inf. Sci. 2016, 346, 163–179. [Google Scholar]
  5. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar]
  6. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; SPIE: Boston, MA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
  7. Xu, G.; Pang, Y.; Bai, Z.; Wang, Y.; Lu, Z. A fast point clouds registration algorithm for laser scanners. Appl. Sci. 2021, 11, 3426. [Google Scholar] [CrossRef]
  8. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. In Proceedings of the SIGGRAPH ‘08: ACM SIGGRAPH 2008 Papers, Los Angeles, CA, USA, 11–15 August 2008; pp. 1–10. [Google Scholar]
  9. Theiler, P.W.; Wegner, J.D.; Schindler, K. Keypoint-based 4-points congruent sets–automated marker-less registration of laser scans. ISPRS J. Photogramm. Remote Sens. 2014, 96, 149–163. [Google Scholar]
  10. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27 October–1 November 2003; Volume 3, pp. 2743–2748. [Google Scholar]
  11. Xue, S.; Zhang, Z.; Lv, Q.; Meng, X.; Tu, X. Point cloud registration method for pipeline workpieces based on PCA and improved ICP algorithms. IOP Conf. Ser. Mater. Sci. Eng. 2019, 612, 032188. [Google Scholar]
  12. Huang, X.; Zhang, J.; Wu, Q.; Fan, L.; Yuan, C. A coarse-to-fine algorithm for matching and registration in 3D cross-source point clouds. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2965–2977. [Google Scholar]
  13. Sadeghi, H.; Raie, A.A. Approximated Chi-square distance for histogram matching in facial image analysis: Face and expression recognition. In Proceedings of the 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), Isfahan, Iran, 22–23 November 2017; pp. 188–191. [Google Scholar]
  14. Pittner, S.; Kamarthi, S.V. Feature extraction from wavelet coefficients for pattern recognition tasks. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 83–88. [Google Scholar]
  15. Zheng, Y.; Li, Y.; Yang, S.; Lu, H. Global-PBNet: A novel point cloud registration for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22312–22319. [Google Scholar] [CrossRef]
  16. Sun, R.; Zhang, E.; Mu, D.; Ji, S.; Zhang, Z.; Liu, H.; Fu, Z. Optimization of the 3D point cloud registration algorithm based on FPFH features. Appl. Sci. 2023, 13, 3096. [Google Scholar] [CrossRef]
  17. Arce, G.R. Multistage order statistic filters for image sequence processing. IEEE Trans. Signal Process. 1991, 39, 1146–1163. [Google Scholar] [CrossRef]
  18. Chan, C.K.; Tan, S.T. Determination of the minimum bounding box of an arbitrary solid: An iterative approach. Comput. Struct. 2001, 79, 1433–1449. [Google Scholar] [CrossRef]
  19. Gressin, A.; Mallet, C.; Demantké, J.; David, N. Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge. ISPRS J. Photogramm. Remote Sens. 2013, 79, 240–251. [Google Scholar] [CrossRef]
  20. Biglia, A.; Zaman, S.; Gay, P.; Aimonino, D.R.; Comba, L. 3D point cloud density-based segmentation for vine rows detection and localisation. Comput. Electron. Agric. 2022, 199, 107166. [Google Scholar] [CrossRef]
  21. Zou, R.; Zhang, Y.; Chen, J.; Li, J.; Dai, W.; Mu, S. Density estimation method of mature wheat based on point cloud segmentation and clustering. Comput. Electron. Agric. 2023, 205, 107626. [Google Scholar] [CrossRef]
  22. Nurunnabi, A.; West, G.; Belton, D. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. Pattern Recognit. 2015, 48, 1404–1419. [Google Scholar] [CrossRef]
  23. Yao, Z.; Zhao, Q.; Li, X.; Bi, Q. Point cloud registration algorithm based on curvature feature similarity. Measurement 2021, 177, 109274. [Google Scholar] [CrossRef]
  24. Heil, C.E.; Walnut, D.F. Continuous and discrete wavelet transforms. SIAM Rev. 1989, 31, 628–666. [Google Scholar] [CrossRef]
  25. Makovetskii, A.; Voronin, S.; Kober, V.; Voronin, A. Point cloud registration based on multiparameter functional. Mathematics 2021, 9, 2589. [Google Scholar] [CrossRef]
  26. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sens. 2017, 9, 433. [Google Scholar] [CrossRef]
  27. Draper, C.; Reichle, R.; de Jeu, R.; Naeimi, V.; Parinussa, R.; Wagner, W. Estimating root mean square errors in remotely sensed soil moisture over continental scale domains. Remote Sens. Environ. 2013, 137, 288–298. [Google Scholar]
Figure 1. Flowchart of the dynamic neighborhood-based feature point registration method.
Figure 2. LiDAR equipment.
Figure 3. Comparison of the effects of statistical filtering on different point cloud data. (a) Chair data filtering, (b) storage box data filtering.
Figure 4. Feature point selection results of different point cloud data. (a) Feature points of indoor data, (b) feature points of the dragon data, (c) feature points of the storage box data, and (d) feature points of the chair data.
Figure 5. Registration results of different point cloud data. (a) Registration result of indoor data, (b) registration result of the Dragon data, (c) registration result of the storage box data, and (d) registration result of the chair data.
Figure 6. Comparison of point cloud data registration results.
Table 1. Livox-Mid360 LiDAR main specifications.
Specification | Value
Weight | 265 g
Dimensions | 65 (L) × 65 (W) × 60 (H) mm
Communication Interface | 100 BASE-TX Ethernet
Field of View (FOV) | Horizontal 360°, Vertical −7°~52°
Point Cloud Output | 200,000 points/second
Range | 0.1~70 m
Power | 25 W
Resolution | <0.15°
Frame Rate | 10 Hz
Table 2. Registration time for point cloud data.
Method | Registered Data | Time/s | Standard Deviation
K4PCSs method | Bunny | 4.9 | 6.976
K4PCSs method | Dragon | 14.11 |
K4PCSs method | Chair | 22.07 |
KD-ICP method | Bunny | 11.86 | 10.15
KD-ICP method | Dragon | 8.64 |
KD-ICP method | Chair | 31.5 |
3D-NDT method | Bunny | 8.7 | 2.206
3D-NDT method | Dragon | 7.61 |
3D-NDT method | Chair | 12.78 |
Proposed method | Bunny | 5.7 | 2.46
Proposed method | Dragon | 4.69 |
Proposed method | Chair | 10.36 |
Table 3. The root mean square error (RMSE) of the method after registration.
Method | Registered Data | RMSE/mm | Standard Deviation
K4PCSs method | Bunny | 2.86 | 0.1028
K4PCSs method | Dragon | 3.089 |
K4PCSs method | Chair | 2.882 |
KD-ICP method | Bunny | 0.599 | 1.520
KD-ICP method | Dragon | 0.978 |
KD-ICP method | Chair | 4.207 |
3D-NDT method | Bunny | 1.587 | 0.7225
3D-NDT method | Dragon | 1.931 |
3D-NDT method | Chair | 3.264 |
Proposed method | Bunny | 0.698 | 0.173
Proposed method | Dragon | 0.964 |
Proposed method | Chair | 1.115 |
