Article

Adaptive Resolution VGICP Algorithm for Robust and Efficient Point-Cloud Registration

1 National Key Laboratory of Uranium Resources Exploration-Mining and Nuclear Remote Sensing, East China University of Technology, Nanchang 330013, China
2 School of Surveying and Geoinformation Engineering, East China University of Technology, Nanchang 330013, China
3 Key Laboratory of Mine Environmental Monitoring and Improving Around Poyang Lake of Ministry of Natural Resources, East China University of Technology, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 3056; https://doi.org/10.3390/rs17173056
Submission received: 20 July 2025 / Revised: 22 August 2025 / Accepted: 31 August 2025 / Published: 2 September 2025
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))

Abstract

To address the degradation or even failure of point-cloud registration in traditional Voxelized GICP (VGICP) caused by improper voxel resolution settings under a poor initial pose, this paper proposes an Adaptive Resolution VGICP (AR-VGICP) algorithm. The algorithm first estimates the initial voxel resolution automatically by applying the Median Absolute Deviation (MAD) method to the absolute deviations between source points falling outside the target voxel grid and their nearest neighbors in the target cloud, and performs an initial registration. Subsequently, the voxel resolution is dynamically updated according to the average nearest-neighbor distance between the transformed source points and the target points, enabling progressively refined registration. The resolution update terminates when the resolution change rate falls below a predefined threshold or when the updated resolution no longer exceeds the density-adaptive resolution. Experimental results on both simulated and real-world datasets demonstrate that AR-VGICP achieves a 100% registration success rate, whereas VGICP fails in some cases because the voxel resolution is too small. On the KITTI dataset, AR-VGICP reduces translation error by 9.4% and rotation error by 14.8% compared to VGICP with a fixed 1 m voxel resolution, while increasing computation time by only 3%. UAV LiDAR experiments show that, on residential-area data, AR-VGICP achieves a maximum reduction of 33.4% in translation error and 21.4% in rotation error compared to VGICP (1.0 m). These results demonstrate that AR-VGICP attains a higher registration success rate when the initial pose between point-cloud pairs is poor, and delivers superior point-cloud registration accuracy in urban scenarios compared to VGICP.

1. Introduction

With the rapid development of 3D laser scanning technology, three-dimensional (3D) point-cloud data has become a vital means of representing objects and environments across various domains, including autonomous driving [1,2], cultural heritage preservation [3,4], agricultural monitoring [5,6], and healthcare [7,8]. A point cloud is composed of a large number of discrete spatial points and does not rely on explicit topological structures. It enables high-precision recording of an object’s spatial position and geometric shape, providing rich information about the physical form, surface geometry, and spatial structure of real-world objects and environments. Compared to traditional imagery or video data, point clouds offer inherent advantages in expressing three-dimensional structural information, making them particularly well-suited for tasks involving 3D reconstruction [9], shape analysis [10], and spatial modeling [11].
However, the 3D scanning process is typically constrained by factors such as sensor viewpoint, occlusion, and measurement range, so a single scan or flightline can capture only partial surface information of the target object or scene. To address these inherent limitations, two complementary research directions have emerged: point-cloud completion [12], which reconstructs the complete 3D shape from partial observations using data-driven methods, and point-cloud registration [13], which provides a systematic solution by accurately aligning multiple scans. Specifically, obtaining a complete and globally consistent 3D representation generally requires multiple scans or flightlines from different viewpoints. The point-cloud data collected from these positions inherently exhibit spatial discrepancies and must be accurately aligned and fused into a unified coordinate system to produce a consistent 3D model [14]. The key technology enabling this objective is point-cloud registration. As a fundamental step for subsequent tasks such as 3D reconstruction, map building, object modeling, and structural inspection, the accuracy and robustness of point-cloud registration directly determine the completeness and reliability of the final data.
At present, registration methods used in engineering applications are still based on traditional techniques. Traditional registration methods usually consist of two parts: coarse registration and fine registration [15]. Coarse registration is performed when the relative positions between point clouds are completely unknown, with the aim of providing a good initial alignment for fine registration and thus improving overall registration accuracy [16,17,18]. Fine registration builds upon the coarse registration results and achieves high-precision spatial alignment through iterative optimization algorithms [19]. This paper mainly focuses on the key techniques and optimization methods for fine registration.
The Iterative Closest Point (ICP) [20] algorithm and its variants are among the most widely used techniques for point-cloud registration. ICP operates by establishing correspondences through nearest neighbor search between the source and target point clouds, and iteratively estimates the rigid transformation that minimizes the alignment error. Despite its simplicity and effectiveness, ICP suffers from low computational efficiency and is highly sensitive to noise and outliers. To overcome these limitations, numerous scholars have introduced diverse improvement strategies. Chen et al. proposed the Point-to-Plane ICP [21], which replaces the traditional point-to-point distance metric with the distance from a point to the tangent plane of the corresponding point in the target cloud, thereby improving registration accuracy and convergence speed. However, this approach requires normal estimation, which increases computational cost and still lacks robustness to noise. To address the local minima problem, Yang et al. proposed the Globally Optimal ICP (Go-ICP) [22], which integrates a branch-and-bound strategy into the ICP framework to achieve a global optimum for joint rotation–translation optimization without requiring an initial pose. Segal et al. developed the Generalized-ICP (GICP) [23], which models the alignment error using the Mahalanobis distance derived from local point distributions, significantly improving robustness. However, GICP also introduces increased computational complexity due to covariance matrix estimation. To address efficiency concerns while maintaining high accuracy, Koide et al. proposed Voxelized GICP (VGICP) [24], which integrates the voxelization concept from the Normal Distributions Transform (NDT) [25] into the GICP framework. This approach retains the precision of GICP while significantly enhancing computational speed through voxel-based processing. Liu et al. proposed Multi-Scale VGICP (MVGICP) [26], which employs a hierarchical voxelization strategy to progressively refine alignment, improving robustness and efficiency in noisy and large-angle rotation scenarios. Cao et al. proposed MFINet [27], a multi-scale feature interaction network that leverages multi-branch feature extraction and interaction mechanisms to enhance the accuracy and robustness of point-cloud registration.
Although VGICP achieves both high accuracy and efficiency, it relies on a fixed voxel resolution, which becomes a critical limitation when there is a large initial misalignment between the source and target point clouds. When the voxel resolution is set too small, the overlapping region between point clouds may be insufficient, making the registration process ineffective or even prone to failure. Conversely, if the resolution is set too large, the robustness of registration improves, but at the cost of introducing greater approximation errors, which ultimately degrades the registration accuracy. MVGICP adaptively sets and updates the resolution based on point-cloud density, with the primary goal of improving registration efficiency and accuracy. However, when there is a large initial pose deviation between the source and target point clouds, the applicability of this method becomes limited.
To address the issue that the traditional VGICP algorithm often suffers from reduced registration accuracy or even failure under poor initial poses due to improper voxel resolution settings, this paper proposes an Adaptive Resolution VGICP (AR-VGICP) algorithm. Specifically, based on the spatial relationship between the source and target point clouds, the method first estimates a suitable initial voxel resolution from the Median Absolute Deviation (MAD) of the absolute errors between source points located outside the target voxel grid and their nearest neighbors in the target point cloud. Using this estimated resolution, an initial registration is performed. Subsequently, the voxel resolution is dynamically updated based on the average nearest-neighbor distance between the transformed source points and the target points, progressively achieving fine registration. Finally, the resolution update process is terminated when the resolution change rate falls below a predefined threshold or the updated resolution no longer exceeds the density-adaptive resolution. Compared to VGICP, the proposed adaptive strategy not only achieves higher registration accuracy but also significantly enhances the robustness and stability of point-cloud registration under poor initial poses, thereby greatly improving its practicality and applicability.

2. Materials and Methods

2.1. Experimental Datasets

Experiments were conducted on both synthetic and real-world mobile LiDAR scanning (MLS) and UAV LiDAR scanning (ULS) datasets, consistent with the data types used in the original VGICP work. Since AR-VGICP is primarily designed for geospatial point-cloud registration tasks, the MLS and ULS point clouds provide a representation of complex environmental features and challenges, making them suitable for a comprehensive evaluation of the proposed registration algorithm.

2.1.1. Synthetic Dataset

In the simulation experiments, two point clouds were selected for testing: one from the publicly available Fast GICP test data and the other from sequence 2011_10_03_drive_0027_sync of the KITTI dataset, as shown in Figure 1. To construct the registration test set, translation and rotation errors were randomly introduced along all three axes. The translation errors were within the range of [−3 m, 3 m], and the rotation errors were within the range of [−3°, 3°]. By combining these perturbation parameters, a total of 10 different error samples were generated to simulate significant position and orientation deviations commonly encountered in practical registration scenarios, as shown in Figure 2 and Figure 3.
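For concreteness, the sketch below shows one way such perturbations could be generated with Eigen; the uniform per-axis sampling and the ZYX rotation order are assumptions made for illustration rather than the authors' exact procedure.

```cpp
// Minimal sketch (not the authors' exact generator) of the synthetic perturbations
// described above: per-axis translations drawn uniformly from [-3 m, 3 m] and
// per-axis rotations from [-3°, 3°], composed into a rigid transform.
#include <Eigen/Geometry>
#include <random>

Eigen::Matrix4f randomPerturbation(std::mt19937& rng) {
  constexpr float kDeg2Rad = 3.14159265f / 180.0f;
  std::uniform_real_distribution<float> trans(-3.0f, 3.0f);                    // metres
  std::uniform_real_distribution<float> rot(-3.0f * kDeg2Rad, 3.0f * kDeg2Rad); // radians

  const Eigen::Affine3f T =
      Eigen::Translation3f(trans(rng), trans(rng), trans(rng)) *
      Eigen::AngleAxisf(rot(rng), Eigen::Vector3f::UnitZ()) *
      Eigen::AngleAxisf(rot(rng), Eigen::Vector3f::UnitY()) *
      Eigen::AngleAxisf(rot(rng), Eigen::Vector3f::UnitX());
  return T.matrix();  // apply to the source cloud to create a perturbed test pair
}
```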

2.1.2. Multi-Flightline UAV LiDAR Scanning Data

To evaluate the performance of the point-cloud registration algorithm in typical UAV application scenarios, this study utilizes multi-strip UAV LiDAR datasets covering both residential and mountainous areas, as shown in Figure 4. The dataset has two flightlines in the residential area and two flightlines in the mountainous area. Based on these datasets, four pairs of point clouds with noticeable misalignments between flightlines were selected. As shown in Figure 5 and Figure 6, the red point cloud was used as the target point cloud, and the green point cloud served as the source point cloud.

2.1.3. KITTI Public Dataset

To evaluate the performance of point-cloud registration algorithms in vehicle-mounted scenarios, this study adopts sequence 2011_10_03_drive_0027_sync from the widely used KITTI public dataset, which is a standard benchmark in autonomous driving research. The point-cloud data in this sequence were collected using a Velodyne HDL-64E rotating LiDAR (Velodyne Lidar, Inc., San Jose, CA, USA; now part of Ouster Inc., https://ouster.com/), capturing a segment of vehicle motion in an urban environment. The dataset parameters are summarized in Table 1. For testing and analysis purposes, the sequence is divided into 26 trajectory segments, each approximately 120 m in length, with each frame containing about 12,000 points.

2.2. Methods

Figure 7 illustrates the overall workflow of the proposed AR-VGICP algorithm. First, an initial voxel resolution for the current scenario is estimated based on the Median Absolute Deviation (MAD) of the absolute errors between source points located outside the target voxel grid and their nearest neighbors in the target point cloud. Using this estimated resolution, an initial registration is performed. Subsequently, the voxel resolution is dynamically refined according to the average nearest-neighbor distance between the transformed source points and the target points, progressively achieving a finer registration. Finally, the resolution update process terminates when the change in resolution falls below a predefined threshold or when the updated resolution no longer exceeds the appropriate resolution computed from the target point cloud, completing the final registration.

2.2.1. Initial Resolution Estimation

In scenarios with a poor initial pose between the source and target point clouds, traditional VGICP may fail when using small voxel resolutions due to insufficient correspondences. To enhance robustness, this paper employs an adaptive strategy that estimates the initial voxel resolution based on a statistical analysis of the absolute nearest-neighbor errors along the x, y, and z directions between source points not falling into the target voxel grid and their corresponding target points. Specifically, the medians and MADs of these errors are used to determine a suitable resolution for the current scene. This approach effectively mitigates correspondence scarcity and significantly improves registration robustness in low-overlap regions.
Let the source point cloud be $A = \{a_0, \dots, a_N\}$ and the target point cloud be $B = \{b_0, \dots, b_M\}$, with an initial transformation matrix $T \in SE(3)$. Each transformed source point is then given by:

$$A' = \{\, a'_i = T a_i \mid a_i \in A \,\}$$

A target voxel grid with resolution $R_{\mathrm{preset}}$ is constructed. For each transformed source point $a'_i$, a query is performed to determine whether a valid voxel exists in the target grid. Points without a corresponding voxel are considered unmatched and collected into the set $U$. For each unmatched point $a'_i \in U$, its nearest neighbor $b_j$ in the target point cloud is found via a KD (k-dimensional) tree, a spatial data structure that enables efficient nearest-neighbor queries in high-dimensional spaces. The absolute deviations of their 3D coordinates are computed as:

$$d_{x,i} = |b_{x,j} - a'_{x,i}|, \quad d_{y,i} = |b_{y,j} - a'_{y,i}|, \quad d_{z,i} = |b_{z,j} - a'_{z,i}|$$

For each set of deviations $\{d_{x,i}\}$, $\{d_{y,i}\}$, and $\{d_{z,i}\}$, the median is computed as:

$$m_x = \mathrm{Median}(d_{x,i}), \quad m_y = \mathrm{Median}(d_{y,i}), \quad m_z = \mathrm{Median}(d_{z,i})$$

The MAD is then calculated as:

$$\mathrm{MAD}_x = \mathrm{Median}(|d_{x,i} - m_x|), \quad \mathrm{MAD}_y = \mathrm{Median}(|d_{y,i} - m_y|), \quad \mathrm{MAD}_z = \mathrm{Median}(|d_{z,i} - m_z|)$$

A scaling factor $k \in [2, 3]$ is used to control the sensitivity of the threshold. The threshold along each axis is computed as:

$$\alpha_x = m_x + k\,\mathrm{MAD}_x, \quad \alpha_y = m_y + k\,\mathrm{MAD}_y, \quad \alpha_z = m_z + k\,\mathrm{MAD}_z$$

The final initial voxel resolution is then defined as:

$$R_{\mathrm{init}} = R_{\mathrm{preset}} + 2 \max(\alpha_x, \alpha_y, \alpha_z)$$
In addition, to avoid unnecessary computational overhead when most source points are already well aligned, the voxel grid used for mismatch detection is directly adopted for the subsequent registration process. Specifically, if the number of unmatched points in the set $U$ is less than 35% of the total number of source points, a fixed voxel grid with a resolution of $R_{\mathrm{preset}}$ is applied for the following registration.
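As a concrete illustration of this procedure, the following C++ sketch estimates the initial resolution from PCL point clouds. It is a minimal sketch rather than the authors' implementation: the voxel-membership test is approximated with a simple spatial hash, and the helper names (voxelKey, estimateInitialResolution) are introduced here for illustration only.

```cpp
// Minimal sketch of the MAD-based initial resolution estimate (Section 2.2.1).
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <unordered_set>
#include <vector>
#include <algorithm>
#include <cmath>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

static double median(std::vector<double> v) {
  std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
  return v[v.size() / 2];
}

// Hypothetical helper: hash a point into a voxel key at resolution r
// (collisions are possible; this stands in for the real voxel lookup).
static int64_t voxelKey(const pcl::PointXYZ& p, double r) {
  const int64_t x = static_cast<int64_t>(std::floor(p.x / r));
  const int64_t y = static_cast<int64_t>(std::floor(p.y / r));
  const int64_t z = static_cast<int64_t>(std::floor(p.z / r));
  return (x * 73856093) ^ (y * 19349669) ^ (z * 83492791);
}

double estimateInitialResolution(const Cloud::Ptr& transformedSource,
                                 const Cloud::Ptr& target,
                                 double presetResolution, double k = 2.5) {
  // Build the target voxel occupancy set at the preset resolution.
  std::unordered_set<int64_t> occupied;
  for (const auto& p : target->points) occupied.insert(voxelKey(p, presetResolution));

  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(target);

  // Collect per-axis absolute deviations for source points with no target voxel.
  std::vector<double> dx, dy, dz;
  std::vector<int> idx(1);
  std::vector<float> sq(1);
  for (const auto& p : transformedSource->points) {
    if (occupied.count(voxelKey(p, presetResolution))) continue;  // matched -> skip
    if (tree.nearestKSearch(p, 1, idx, sq) > 0) {
      const auto& q = target->points[idx[0]];
      dx.push_back(std::fabs(q.x - p.x));
      dy.push_back(std::fabs(q.y - p.y));
      dz.push_back(std::fabs(q.z - p.z));
    }
  }
  // If fewer than 35% of source points are unmatched, keep the preset resolution.
  if (dx.size() < 0.35 * transformedSource->size()) return presetResolution;

  auto axisThreshold = [&](std::vector<double>& d) {
    const double m = median(d);
    std::vector<double> dev(d.size());
    for (size_t i = 0; i < d.size(); ++i) dev[i] = std::fabs(d[i] - m);
    return m + k * median(dev);  // alpha = m + k * MAD
  };
  const double alpha = std::max({axisThreshold(dx), axisThreshold(dy), axisThreshold(dz)});
  return presetResolution + 2.0 * alpha;  // R_init = R_preset + 2 * max(alpha_x, alpha_y, alpha_z)
}
```

The early return mirrors the shortcut described above: when most transformed source points already fall into occupied target voxels, the preset resolution is kept unchanged.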

2.2.2. Resolution Updating

In the initial stage, registration is performed based on the estimated initial voxel resolution. However, this resolution may be overly coarse, potentially introducing significant approximation errors and affecting registration accuracy. To address this issue, a dynamic resolution update mechanism is introduced in the subsequent registration process to progressively refine the voxel resolution and achieve more accurate alignment.
Since the initial registration has roughly aligned the source and target point clouds, only 50% of the transformed source points are used in the subsequent refinement stage to reduce computational cost. For each sampled point a i , the nearest neighbor b j in the target cloud is found, and the Euclidean distance is computed as:
$$d_i = \| a'_i - b_j \|_2$$

where $a'_i = T a_i$ is the transformed source point under the current estimated transformation $T$. The updated voxel resolution is computed as twice the mean of all Euclidean distances:

$$R_{\mathrm{update}} = 2 \cdot \frac{1}{N} \sum_{i=1}^{N} d_i$$
where N is the number of sampled points used in the computation.
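Continuing the previous sketch (same PCL includes and Cloud alias), the update rule reduces to twice the mean nearest-neighbor distance over a subsample of the transformed source points; taking every second point as the 50% subsample is an assumption made here for simplicity.

```cpp
// Minimal sketch of the resolution update rule (Section 2.2.2).
double updateResolution(const Cloud::Ptr& transformedSource, const Cloud::Ptr& target) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(target);

  std::vector<int> idx(1);
  std::vector<float> sq(1);
  double sum = 0.0;
  std::size_t n = 0;
  for (std::size_t i = 0; i < transformedSource->size(); i += 2) {  // ~50% of points
    if (tree.nearestKSearch(transformedSource->points[i], 1, idx, sq) > 0) {
      sum += std::sqrt(static_cast<double>(sq[0]));  // Euclidean distance d_i
      ++n;
    }
  }
  return n > 0 ? 2.0 * sum / static_cast<double>(n) : 0.0;  // R_update = 2 * mean(d_i)
}
```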

2.2.3. Termination of Iteration

During the resolution update stage, a threshold $R_{\mathrm{res}}$ is introduced to balance registration accuracy and computational efficiency. For each point in the target cloud, a nearest-neighbor search is performed to compute the distances to its nearest neighbors. These distances are aggregated into a global histogram, and the peak of this histogram determines $R_{\mathrm{res}}$:

$$d_{ij} = \| b_i - b_j \|_2$$

$$H_B(n) = \sum_{i=1}^{M} \sum_{j=1}^{k} I\left( \left\lfloor \frac{d_{ij}}{\Delta} \right\rfloor = n \right)$$

$$R_{\mathrm{res}} = 2 \times \left( \underset{n}{\arg\max}\, H_B(n) + 0.5 \right) \Delta$$

In these equations, $b_i$ and $b_j$ denote points in the target point cloud, and $d_{ij}$ represents the Euclidean distance between point $b_i$ and its $j$-th nearest neighbor $b_j$. $M$ is the total number of points in the target point cloud, $k$ is the number of nearest neighbors considered for each point, $\Delta$ is the bin width used to construct the distance histogram, $n$ is the bin index, $H_B(n)$ is the global histogram counting the number of distances falling into each bin, and the indicator function $I(\cdot)$ determines whether a given distance falls within a specific bin.
If the current resolution is greater than R r e s , it indicates that the voxel size is still too large and should be further refined to improve registration accuracy. Otherwise, the update process is terminated and the final registration result is output.
In addition, in order to prevent unnecessary updates and ensure convergence, the resolution update process is also terminated when the difference between the current updated resolution and the resolution of the previous iteration falls below a minimum threshold $\epsilon$:

$$\left| R_{\mathrm{last}} - R_{\mathrm{current}} \right| < \epsilon$$
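A possible realization of this density-adaptive threshold is sketched below, again continuing the PCL-based example; the bin width of 0.05 m and the single-neighbor setting are illustrative defaults, not values prescribed by the paper.

```cpp
// Minimal sketch of the density-adaptive threshold R_res (Section 2.2.3):
// histogram the nearest-neighbor distances within the target cloud and take
// the bin with the highest count (the peak).
double densityAdaptiveResolution(const Cloud::Ptr& target,
                                 double binWidth = 0.05, int kNeighbors = 1) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(target);

  std::vector<int> idx(kNeighbors + 1);
  std::vector<float> sq(kNeighbors + 1);
  std::vector<std::size_t> hist;
  for (const auto& p : target->points) {
    // k+1 because the first neighbor of a point in its own cloud is itself.
    if (tree.nearestKSearch(p, kNeighbors + 1, idx, sq) > kNeighbors) {
      for (int j = 1; j <= kNeighbors; ++j) {
        const double d = std::sqrt(static_cast<double>(sq[j]));
        const std::size_t bin = static_cast<std::size_t>(d / binWidth);
        if (bin >= hist.size()) hist.resize(bin + 1, 0);
        ++hist[bin];
      }
    }
  }
  if (hist.empty()) return 0.0;
  const std::size_t peak =
      std::distance(hist.begin(), std::max_element(hist.begin(), hist.end()));
  return 2.0 * (static_cast<double>(peak) + 0.5) * binWidth;  // R_res from the peak bin
}
```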

2.2.4. Transformation Estimation

AR-VGICP is an improvement over the traditional VGICP algorithm. It retains the voxel-based transformation estimation framework while taking into account the relationship between the distribution of source points and the nearest-neighbor voxel distribution within the target point cloud. By utilizing the weighted distribution differences in points within each voxel grid, it achieves smooth distribution-to-distribution registration. The core of the transformation estimation lies in minimizing a cost function based on the Mahalanobis distance [23]:
$$T^{*} = \underset{T}{\arg\min} \sum_i N_i \left( \frac{\sum_j b_j}{N_i} - T a_i \right)^{\top} \left( \frac{\sum_j C_j^{B}}{N_i} + T C_i^{A} T^{\top} \right)^{-1} \left( \frac{\sum_j b_j}{N_i} - T a_i \right)$$

In this equation, $a_i$ represents a source point, and $b_j$ denotes a point within the target voxel associated with $a_i$ (the nearest-neighbor voxel in the target point cloud). $C_i^{A}$ and $C_j^{B}$ are the covariance matrices of $a_i$ and $b_j$, and $N_i$ is the number of points contained in the corresponding target voxel.
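Putting the pieces together, the following sketch shows how the adaptive-resolution loop could wrap an off-the-shelf VGICP solver. It assumes the open-source fast_gicp library's FastVGICP class (a PCL Registration-style interface exposing setResolution), reuses the helper functions from the previous sketches, and uses illustrative termination constants; it is not the authors' code.

```cpp
// Minimal sketch of the AR-VGICP outer loop, assuming fast_gicp's FastVGICP.
#include <fast_gicp/gicp/fast_vgicp.hpp>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>

Eigen::Matrix4f arVgicpAlign(const Cloud::Ptr& source, const Cloud::Ptr& target,
                             double presetResolution = 1.0, double epsilon = 0.05,
                             int maxOuterIterations = 5) {
  Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
  Cloud::Ptr aligned(new Cloud);

  // Stage 1: initial registration at the MAD-estimated resolution (Section 2.2.1).
  pcl::transformPointCloud(*source, *aligned, T);
  double resolution = estimateInitialResolution(aligned, target, presetResolution);
  const double rRes = densityAdaptiveResolution(target);  // termination bound (2.2.3)

  for (int it = 0; it < maxOuterIterations; ++it) {
    fast_gicp::FastVGICP<pcl::PointXYZ, pcl::PointXYZ> vgicp;
    vgicp.setResolution(resolution);
    vgicp.setInputSource(source);
    vgicp.setInputTarget(target);
    vgicp.align(*aligned, T);          // warm-start with the current estimate
    T = vgicp.getFinalTransformation();

    // Stage 2: update the resolution from the residual alignment (Section 2.2.2).
    pcl::transformPointCloud(*source, *aligned, T);
    const double next = updateResolution(aligned, target);

    // Stage 3: termination tests (Section 2.2.3).
    if (std::fabs(resolution - next) < epsilon || next <= rRes) break;
    resolution = next;
  }
  return T;
}
```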

3. Results

To comprehensively evaluate the proposed algorithm’s performance in registration accuracy and robustness, three sets of comparative experiments were designed. The first set involves simulated tests where artificial disturbances are applied to point clouds to assess the algorithm’s adaptability under different initial transformations. The second set utilizes multi-strip UAV LiDAR data, while the third employs vehicle-mounted LiDAR data from the KITTI public dataset. These two real-world datasets serve to validate the algorithm’s applicability and stability in practical scenarios. VGICP and Fast GICP, two classical methods with multi-threading capability, were selected as baseline algorithms. All experiments were conducted on a Windows 10 operating system using Visual Studio 2022 as the development environment. Point-cloud processing was implemented using the Point Cloud Library (PCL 1.12.1), and the hardware platform was equipped with an Intel Core i7-8750H processor manufactured by Intel Corporation, located in Santa Clara, CA, USA, and 16 GB of RAM manufactured by King Bank, a memory module brand from China. In all three experiment groups, the target voxel resolution used in the initial resolution computation for AR-VGICP was set to 1 m.

3.1. Simulated Experiments

To evaluate the registration accuracy and robustness of the proposed algorithm under different initial transformations, two sets of synthetic data, synthetic data_01 and synthetic data_02, were selected for testing. Target point clouds were generated by applying translational and rotational perturbations along all three axes. The registration algorithms included VGICP with voxel resolutions of 3 m and 1 m, Fast GICP, and the proposed AR-VGICP.
We evaluate the registration algorithms using both rotational and translational errors [28,29,30,31,32]. Given the target point cloud $P_b$, the transformation from the source point cloud $P_a$ to $P_b$ is denoted as $T_{a,b}$. The residual transformation $\Delta T_{a,b}$ from $P_a$ to $P_b$ is defined as follows:

$$\Delta T_{a,b} = T_{a,b} \left( T_{a,b}^{G} \right)^{-1} = \begin{bmatrix} \Delta R_{a,b} & \Delta t_{a,b} \\ 0 & 1 \end{bmatrix}$$

In the equation, $T_{a,b}$ is the estimated transformation matrix obtained by the algorithm, and $T_{a,b}^{G}$ is the ground-truth transformation matrix.
Then, based on the corresponding rotational component $\Delta R_{a,b}$ and translational component $\Delta t_{a,b}$, the rotation error $e_{a,b}^{r}$ and translation error $e_{a,b}^{t}$ from $P_a$ to $P_b$ are computed as follows:

$$e_{a,b}^{r} = \arccos\!\left( \frac{\mathrm{tr}(\Delta R_{a,b}) - 1}{2} \right) \times \frac{180}{\pi}, \qquad e_{a,b}^{t} = \| \Delta t_{a,b} \|$$

Here, $\mathrm{tr}(\Delta R_{a,b})$ denotes the trace of the rotation matrix $\Delta R_{a,b}$, and the rotation error $e_{a,b}^{r}$ corresponds to the rotation angle in the axis-angle representation.
Given the rotational error $e_{a,b}^{r}$ and translational error $e_{a,b}^{t}$, a successful registration (SR) is defined as:

$$\mathrm{SR} = \begin{cases} 1, & (e_{a,b}^{r} < \sigma_r) \wedge (e_{a,b}^{t} < \sigma_t) \\ 0, & \text{otherwise} \end{cases}$$

Here, $\sigma_r$ and $\sigma_t$ are the predefined thresholds for rotational and translational errors, respectively, which are set to 0.1° and 0.1 m in this study.
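For reference, these metrics can be computed directly with Eigen from an estimated and a ground-truth transformation, as in the following sketch; the clamping of the arccosine argument is a numerical safeguard added here.

```cpp
// Minimal sketch of the evaluation metrics: residual transform, axis-angle
// rotation error in degrees, translation error norm, and the SR indicator.
#include <Eigen/Dense>
#include <cmath>

struct RegistrationError { double rotDeg; double transM; bool success; };

RegistrationError evaluate(const Eigen::Matrix4d& T_est, const Eigen::Matrix4d& T_gt,
                           double sigmaR = 0.1, double sigmaT = 0.1) {
  constexpr double kPi = 3.14159265358979323846;
  const Eigen::Matrix4d dT = T_est * T_gt.inverse();          // residual transform
  const Eigen::Matrix3d dR = dT.topLeftCorner<3, 3>();
  const Eigen::Vector3d dt = dT.topRightCorner<3, 1>();

  // Clamp the trace argument to guard against numerical drift outside [-1, 1].
  const double c = std::min(1.0, std::max(-1.0, (dR.trace() - 1.0) / 2.0));
  const double rotDeg = std::acos(c) * 180.0 / kPi;
  const double transM = dt.norm();
  return {rotDeg, transM, rotDeg < sigmaR && transM < sigmaT};
}
```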
Figure 8 and Figure 9 show the registration results for 10 datasets from synthetic data_01 and synthetic data_02, respectively. Across all 20 datasets, Fast GICP and AR-VGICP consistently achieved successful registration, whereas VGICP (1.0 m) and VGICP (3.0 m) exhibited significant misalignments. Specifically, VGICP (1.0 m) failed in cases a, c, d, i, and j of synthetic data_01, and in cases a, e, and j of synthetic data_02. VGICP (3.0 m) failed in cases b, c, d, f, h, and j of synthetic data_01, and in cases a, d, f, and j of synthetic data_02. These results indicate that VGICP lacks robustness under poor initial poses.
Figure 10 and Figure 11 illustrate the translation and rotation errors on synthetic data_01 and synthetic data_02. In synthetic data_01, VGICP (1.0 m) produced six results with translation errors exceeding 1 m, reflecting weak robustness. In contrast, VGICP (3.0 m) yielded only one result with translation error above 1 m, but the large voxel size led to degraded accuracy, resulting in seven failed registrations, two of which met the translation criterion but had rotation errors close to the threshold. In synthetic data_02, both VGICP (1.0 m) and VGICP (3.0 m) failed in four cases. Across both datasets, when both variants succeeded, VGICP (3.0 m) generally exhibited lower accuracy, with translation and rotation errors one to two orders of magnitude higher than VGICP (1.0 m), highlighting the accuracy loss caused by coarse voxel resolution.
In all experiments, both AR-VGICP and Fast GICP successfully completed all registration tasks, with translation and rotation errors consistently controlled within 1 mm and 0.001°, respectively. Compared with VGICP, Fast GICP achieved higher registration success rates and accuracy, benefiting from its nearest-neighbor search, with the maximum search distance set to the largest finite value, and its point-wise association strategy, which avoids the detail loss caused by coarse voxel resolutions. However, this approach incurs a significant computational burden in high-density point-cloud scenarios. In contrast, AR-VGICP achieves accuracy comparable to Fast GICP while retaining the voxelization strategy.
Figure 12 presents the registration time of each algorithm on the synthetic datasets synthetic data_01 and synthetic data_02. In general, AR-VGICP requires longer registration time than VGICP and Fast GICP, with its runtime typically about twice that of VGICP. However, in certain cases, the runtime of AR-VGICP and VGICP is comparable. For example, in samples a and b of synthetic data_01 and sample d of synthetic data_02, the time difference between the two methods remains within 20%.

3.2. Real-World Experiments on Multi-Flightline UAV LiDAR Data

To evaluate the registration accuracy and robustness of the proposed method under large initial deviations in practical airborne scenarios, four pairs of airborne point-cloud data with significant misalignments were selected for testing. Five algorithms were compared: Fast GICP, Fast GICP (1.0 m), VGICP (1.0 m), VGICP (0.5 m), and the proposed AR-VGICP. The ground truth for each dataset was obtained through manual registration. The evaluation methodology was consistent with that used in the simulated experiments, focusing on both translational and rotational errors.
Figure 13 and Figure 14 show the registration results of the different algorithms. As shown, Fast GICP (1.0 m), VGICP (1.0 m), and AR-VGICP successfully completed registration across all datasets. In contrast, Fast GICP exhibited noticeable misalignments in four datasets, while VGICP (0.5 m) produced misaligned results on residence-01, residence-02, and mountain-02.
Figure 15 illustrates the translation and rotation errors of different algorithms. For the four data pairs, both Fast GICP and VGICP (0.5 m) exhibited poor performance, with translation errors exceeding 0.1 m and rotational errors exceeding 0.1° in three datasets. In terms of translation error, AR-VGICP consistently outperformed VGICP (1.0 m) in residential scenes, achieving a maximum error reduction of 33.4%. However, in mountainous areas, AR-VGICP showed slightly reduced accuracy compared to VGICP (1.0 m). Regarding rotational error, Fast GICP (1.0 m), VGICP (1.0 m), and AR-VGICP all maintained errors within 0.1°, demonstrating overall stability.
Figure 16 compares the registration time of each algorithm. Overall, AR-VGICP exhibited lower computation time than Fast GICP (1.0 m). For structurally rich point clouds (e.g., residential areas), the runtime difference between AR-VGICP and VGICP (1.0 m) was small, with AR-VGICP incurring up to 38% more time in the worst case. For structurally sparse point clouds (e.g., mountainous areas), the time gap between the two methods was large.

3.3. Real-World Experiments Using Data Acquired from Vehicle-Mounted LiDAR Sensors

To evaluate the registration accuracy and efficiency of the proposed method in real-world vehicle-mounted scenarios, experiments were conducted based on the KITTI dataset. Four registration methods were compared in the experiments: VGICP (1 m), VGICP (0.5 m), Fast GICP (1 m), and AR-VGICP. To comprehensively assess the performance of these methods, Absolute Trajectory Error (ATE) and Relative Trajectory Error (RTE) were adopted as evaluation metrics [33]. The definition of ATE is as follows:
$$\mathrm{ATE}_{\mathrm{rot}} = \left( \frac{1}{N} \sum_i \left\| \Delta R_i \right\|^2 \right)^{\frac{1}{2}}$$

$$\mathrm{ATE}_{\mathrm{pos}} = \left( \frac{1}{N} \sum_i \left\| \Delta p_i \right\|^2 \right)^{\frac{1}{2}}$$

where $\Delta R_i = R_i^{-1} \hat{R}_i$ and $\Delta p_i = p_i - R_i \hat{p}_i$. The definition of RTE is as follows:

$$\mathrm{RTE}_{\mathrm{rot}} = \angle(\delta R_k) = \angle\!\left( R_e \hat{R}_e^{\top} \right)$$

$$\mathrm{RTE}_{\mathrm{pos}} = \left\| p_e - \delta R_k \hat{p}_e \right\|_2$$

where $R_e = R_t^{-1} R_{t+N}$ and $p_e = p_t - p_{t+N}$. The parameters $t$ and $t+N$ define the evaluation window for RTE. In our experiments, the window sizes are set to 10, 15, and 25 m, corresponding to the traveled distance of the sensor. The RTE is evaluated using a sliding window approach.
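A compact sketch of the ATE computation from per-frame ground-truth and estimated poses is given below; interpreting the rotation residual norm as a Frobenius norm and assuming the two trajectories are already expressed in a common frame are simplifications made here (RTE applies the same residuals over sliding distance windows).

```cpp
// Minimal sketch of the ATE summary as a root-mean-square over per-frame residuals.
#include <Eigen/Dense>
#include <vector>
#include <cmath>

struct AteResult { double rot; double pos; };

AteResult computeAte(const std::vector<Eigen::Matrix3d>& R_gt,
                     const std::vector<Eigen::Vector3d>& p_gt,
                     const std::vector<Eigen::Matrix3d>& R_est,
                     const std::vector<Eigen::Vector3d>& p_est) {
  const std::size_t n = R_gt.size();
  if (n == 0) return {0.0, 0.0};
  double sumRot = 0.0, sumPos = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    const Eigen::Matrix3d dR = R_gt[i].transpose() * R_est[i];  // dR_i = R_i^{-1} * R_hat_i
    const Eigen::Vector3d dp = p_gt[i] - R_gt[i] * p_est[i];    // dp_i = p_i - R_i * p_hat_i
    sumRot += dR.squaredNorm();   // ||dR_i||^2 (Frobenius norm assumed here)
    sumPos += dp.squaredNorm();   // ||dp_i||^2
  }
  return {std::sqrt(sumRot / static_cast<double>(n)),
          std::sqrt(sumPos / static_cast<double>(n))};
}
```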
Figure 17 illustrates the trajectory results obtained by different registration methods, with the overall trajectory composed by stitching together 26 segments. It is evident that the trajectory estimated by VGICP (0.5 m) exhibits significant deviations from the ground truth, whereas the trajectories from VGICP (1 m), Fast GICP (1 m), and AR-VGICP show relatively smaller deviations. Notably, the VGICP (1 m) trajectory reveals registration failures in certain turning areas, which cause the subsequent straight segments to deviate from the true trajectory. This issue is more pronounced in VGICP (0.5 m), where most segments exhibit substantial misalignments. In contrast, AR-VGICP demonstrates superior robustness, with such registration errors rarely occurring, resulting in a more stable and accurate overall trajectory.
We computed both the ATE and RTE for all 26 trajectories, with results presented in Table 2 and Figure 18. The experiments demonstrate that AR-VGICP outperforms VGICP (1 m) and VGICP (0.5 m) in registration accuracy, while also exhibiting significantly higher efficiency compared to Fast GICP. Notably, VGICP’s performance varies considerably with voxel resolution, and mismatched resolutions lead to substantially larger errors. The translation error difference between VGICP (1 m) and VGICP (0.5 m) in ATE reaches 3.247 m. Figure 18 shows that although the median errors of all algorithms are close, VGICP (0.5 m) displays a pronounced skewed distribution in translation RTE, with some errors significantly exceeding the median and exhibiting large variability. In contrast, AR-VGICP’s error distribution is more uniform and overall accuracy is higher, demonstrating stronger robustness in point-cloud registration.

4. Discussion

Through three sets of experiments, this study systematically reveals that improper voxel resolution settings can lead to significant degradation—or even failure—of VGICP registration, particularly in cases with large point-cloud deviations. To address this issue, AR-VGICP is proposed, which effectively reduces the sensitivity of registration performance to voxel resolution settings and significantly enhances both accuracy and robustness. In all three experimental scenarios, AR-VGICP achieved a 100% registration success rate while both Fast GICP and VGICP experienced registration failures due to inappropriate parameter settings.
Further analysis based on experiments using data acquired from vehicle-mounted LiDAR sensors indicates that AR-VGICP significantly outperforms VGICP in continuous registration tasks within urban environments. This is mainly attributed to its adaptive resolution strategy, which dynamically adjusts voxel size according to actual point-cloud features, thereby improving registration stability and enhancing overall trajectory consistency. In contrast, VGICP (0.5 m), due to improper resolution settings, exhibits substantial registration deviations in some frames, with a noticeably skewed distribution of translational relative errors and some errors far exceeding the median, severely impacting registration accuracy.
Moreover, the multi-flightline UAV LiDAR experiments revealed a significant increase in registration time for AR-VGICP in point-cloud scenarios with unclear or sparse structures. This is mainly because, in such scenarios, the advantages of voxel-based structures diminish, while the adaptive adjustment mechanism requires repeated optimization, increasing the number of iterations and thereby raising the overall computational cost. Although these extreme cases are relatively rare in practical applications, they highlight the necessity of further optimizing the algorithm’s structure and computational efficiency under complex conditions.
Overall, AR-VGICP effectively overcomes the strong resolution dependence problem of traditional VGICP and achieves higher registration accuracy and robustness, making it suitable for a wide range of practical application scenarios. Nevertheless, AR-VGICP still has room for further improvement in dynamic environments and multi-sensor fusion tasks. Future work will focus on optimizing the efficiency of the algorithm under sparse point-cloud conditions and large initial deviations, for example, by incorporating GPU-based parallel processing strategies. Moreover, its potential applications in multi-sensor fusion–based localization and mapping will be further explored to continuously expand the applicability and engineering value of AR-VGICP.

5. Conclusions

This paper proposes an AR-VGICP method to address the problem of degraded registration accuracy or even failure in traditional VGICP, which often arises from improper voxel resolution settings, particularly under poor initial poses between point clouds. Unlike conventional VGICP methods that require manually defined fixed resolutions, the proposed AR-VGICP automatically determines the initial voxel size based on the distribution characteristics of the source and target point clouds. After the coarse registration stage, the voxel structure is further dynamically updated according to local registration results, enabling finer and more accurate alignment.
Extensive experiments on both simulated and real-world datasets demonstrate that AR-VGICP achieves a 100% registration success rate. Specifically, on the KITTI dataset, AR-VGICP reduces translation error by 9.4% and rotation error by 14.8% compared to VGICP (1.0 m), while increasing computation time by only 3%. Results from UAV LiDAR experiments show that, in residential area data, AR-VGICP achieves a maximum reduction of 33.4% in translation error and 21.4% in rotation error compared to VGICP (1.0 m). These findings strongly validate its superior accuracy and robustness. More importantly, AR-VGICP effectively mitigates the sensitivity of registration results to voxel resolution settings, significantly enhancing adaptability in complex environments, and achieves a favorable balance between registration accuracy and computational cost in most cases, demonstrating strong practical value and broad application potential.

Author Contributions

Conceptualization, Z.L. and H.L.; methodology, Z.L.; software, Z.L.; validation, Z.L.; formal analysis, Z.L., H.L. and Y.X.; investigation, Z.L.; resources, Z.L. and H.L.; data curation, Z.L.; writing—original draft preparation, Z.L., H.L. and Y.X.; writing—review and editing, Z.L. and H.L.; visualization, Z.L.; supervision, H.L. and Y.X.; project administration, H.L.; funding acquisition, Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant No. 42174055 and Ganpo Talent Support Program—Key Disciplinary Academic and Technical Leader Support Project under Grant No. 20243BCE51111.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, S.; Liu, B.; Feng, C.; Vallespi-Gonzalez, C.; Wellington, C. 3D Point Cloud Processing and Learning for Autonomous Driving: Impacting Map Creation, Localization, and Perception. IEEE Signal Process. Mag. 2021, 38, 68–86. [Google Scholar] [CrossRef]
  2. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432. [Google Scholar] [CrossRef] [PubMed]
  3. Yang, S.; Xu, S.; Huang, W. 3D Point Cloud for Cultural Heritage: A Scientometric Survey. Remote Sens. 2022, 14, 5542. [Google Scholar] [CrossRef]
  4. Crisan, A.; Pepe, M.; Costantino, D.; Herban, S. From 3D Point Cloud to an Intelligent Model Set for Cultural Heritage Conservation. Heritage 2024, 7, 1419–1437. [Google Scholar] [CrossRef]
  5. Yang, Y.; Zhang, J.; Wu, K.; Zhang, X.; Sun, J.; Peng, S.; Li, J.; Wang, M. 3D Point Cloud on Semantic Information for Wheat Reconstruction. Agriculture 2021, 11, 450. [Google Scholar] [CrossRef]
  6. Fusaro, D.; Magistri, F.; Behley, J.; Pretto, A.; Stachniss, C. Horticultural Temporal Fruit Monitoring via 3D Instance Segmentation and Re-Identification Using Point Clouds. arXiv 2024, arXiv:2411.07799. [Google Scholar] [CrossRef]
  7. Drokin, I.; Ericheva, E. Deep Learning on Point Clouds for False Positive Reduction at Nodule Detection in Chest CT Scans. In Proceedings of the International Conference on Analysis of Images, Social Networks and Texts, Tbilisi, Georgia, 16–18 December 2021; Volume 12602, pp. 201–215. [Google Scholar]
  8. Saiti, E.; Theoharis, T. Multimodal Registration across 3D Point Clouds and CT-Volumes. Comput. Graph. 2022, 106, 259–266. [Google Scholar] [CrossRef]
  9. Ma, Z.; Liu, S. A Review of 3D Reconstruction Techniques in Civil Engineering and Their Applications. Adv. Eng. Inform. 2018, 37, 163–174. [Google Scholar] [CrossRef]
  10. Blomley, R.; Weinmann, M.; Leitloff, J.; Jutzi, B. Shape distribution features for point cloud analysis—A geometric histogram approach on multiple scales. ISPRS Ann. 2014, II-3, 9. [Google Scholar] [CrossRef]
  11. Accurate and Serialized Dense Point Cloud Reconstruction for Aerial Video Sequences. Available online: https://www.mdpi.com/2072-4292/15/6/1625 (accessed on 2 July 2025).
  12. Zhuang, Z.; Zhi, Z.; Han, T.; Chen, Y.; Chen, J.; Wang, C.; Cheng, M.; Zhang, X.; Qin, N.; Ma, L. A Survey of Point Cloud Completion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5691–5711. [Google Scholar] [CrossRef]
  13. Huang, X.; Mei, G.; Zhang, J.; Abbas, R. A Comprehensive Survey on Point Cloud Registration. arXiv 2021, arXiv:2103.02690. [Google Scholar] [CrossRef]
  14. Theiler, P.W.; Wegner, J.D.; Schindler, K. Globally Consistent Registration of Terrestrial Laser Scans via Graph Optimization. ISPRS J. Photogramm. Remote Sens. 2015, 109, 126–138. [Google Scholar] [CrossRef]
  15. Yang, J.; Zhang, C.; Wang, Z.; Cao, X.; Ouyang, X.; Zhang, X.; Zeng, Z.; Zeng, Z.; Lu, B.; Xia, Z.; et al. 3D Registration in 30 Years: A Survey. arXiv 2024, arXiv:2412.13735. [Google Scholar] [CrossRef]
  16. Díez, Y.; Roure, F.; Lladó, X.; Salvi, J. A Qualitative Review on 3D Coarse Registration Methods. ACM Comput. Surv. 2015, 47, 1–36. [Google Scholar] [CrossRef]
  17. Pirotti, F.; Guarnieri, A.; Chiodini, S.; Bettanini, C. Automatic Coarse Co-Registration of Point Clouds from Diverse Scan Geometries: A Test of Detectors and Descriptors. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2023, X-1/W1-2023, 581–587. [Google Scholar] [CrossRef]
  18. Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U. Pairwise Coarse Registration of Point Clouds in Urban Scenes Using Voxel-Based 4-Planes Congruent Sets. ISPRS J. Photogramm. Remote Sens. 2019, 151, 106–123. [Google Scholar] [CrossRef]
  19. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot 2015, 4, 1–104. [Google Scholar] [CrossRef]
  20. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  21. Chen, Y.; Medioni, G. Object Modeling by Registration of Multiple Range Images. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation Proceedings, Sacramento, CA, USA, 9–11 April 1991; Volume 3, pp. 2724–2729. [Google Scholar]
  22. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254. [Google Scholar] [CrossRef]
  23. Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. In Proceedings of the Robotics: Science and Systems V, Robotics: Science and Systems Foundation, Seattle, WA, USA, 28 June 2009. [Google Scholar]
  24. Koide, K.; Yokozuka, M.; Oishi, S.; Banno, A. Voxelized GICP for Fast and Accurate 3D Point Cloud Registration. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 11054–11059. [Google Scholar]
  25. Biber, P.; Strasser, W. The Normal Distributions Transform: A New Approach to Laser Scan Matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 3, pp. 2743–2748. [Google Scholar]
  26. Liu, H.; Zhang, Y.; Lei, L.; Xie, H.; Li, Y.; Sun, S. Hierarchical Optimization of 3D Point Cloud Registration. Sensors 2020, 20, 6999. [Google Scholar] [CrossRef]
  27. Cao, H.; Chen, D.; Zhang, Y.; Zhou, H.; Wen, D.; Cao, C. MFINet: A Multi-Scale Feature Interaction Network for Point Cloud Registration. Vis. Comput. 2025, 41, 4067–4079. [Google Scholar] [CrossRef]
  28. Guo, Y.; Sohel, F.; Bennamoun, M.; Wan, J.; Lu, M. An Accurate and Robust Range Image Registration Algorithm for 3D Object Modeling. IEEE Trans. Multimed. 2014, 16, 1377–1390. [Google Scholar] [CrossRef]
  29. Pomerleau, F.; Colas, F.; Siegwart, R.; Magnenat, S. Comparing ICP Variants on Real-World Data Sets. Auton. Robot. 2013, 34, 133–148. [Google Scholar] [CrossRef]
  30. Theiler, P.W.; Wegner, J.D.; Schindler, K. Keypoint-Based 4-Points Congruent Sets–Automated Marker-Less Registration of Laser Scans. ISPRS J. Photogramm. Remote Sens. 2014, 96, 149–163. [Google Scholar] [CrossRef]
  31. Yang, B.; Dong, Z.; Liang, F.; Liu, Y. Automatic Registration of Large-Scale Urban Scene Point Clouds Based on Semantic Feature Points. ISPRS J. Photogramm. Remote Sens. 2016, 113, 43–58. [Google Scholar] [CrossRef]
  32. Dong, Z.; Yang, B.; Liang, F.; Huang, R.; Scherer, S. Hierarchical Registration of Unordered TLS Point Clouds Based on Binary Shape Context Descriptor. ISPRS J. Photogramm. Remote Sens. 2018, 144, 61–79. [Google Scholar] [CrossRef]
  33. Zhang, Z.; Scaramuzza, D. A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251. [Google Scholar]
Figure 1. The synthetic dataset (overhead view).
Figure 2. Data composition of synthetic data_01 (red point cloud: target; blue point cloud: source). (a) point clouds with translation T (0.1 m, 2.4 m, 1.2 m) and rotation R (1.7°, 2.6°, 0.8°); (b) point clouds with translation T (−1.0 m, −1.7 m, −2.3 m) and rotation R (−2.5°, −0.7°, 2.8°); (c) point clouds with translation T (2.7 m, −0.9 m, −1.9 m) and rotation R (−2.0°, 2.7°, −2.2°); (d) point clouds with translation T (−2.8 m, 2.0 m, 1.9 m) and rotation R (1.1°, 2.6°, 0.2°); (e) point clouds with translation T (−1.3 m, −2.1 m, 1.3 m) and rotation R (−1.2°, −2.9°, −2.4°); (f) point clouds with translation T (2.7 m, −2.5 m, −1.7 m) and rotation R (−1.1°, 2.6°, −0.1°); (g) point clouds with translation T (0.1 m, 0.7 m, 0.1 m) and rotation R (−2.0°, −1.3°, 0.6°); (h) point clouds with translation T (−1.6 m, −0.8 m, −2.3 m) and rotation R (0.4°, 1.7°, 1.0°); (i) point clouds with translation T (1.0 m, −1.2 m, 1.0 m) and rotation R (2.4°, 0.5°, 2.7°); (j) point clouds with translation T (1.1 m, −2.5 m, −2.3 m) and rotation R (−2.6°, −1.2°, −1.1°).
Figure 3. Data composition of synthetic data_02 (rotated −90°, red point cloud: target; blue point cloud: source). (a) point clouds with translation T (0.1 m, 2.4 m, 1.2 m) and rotation R (1.7°, 2.6°, 0.8°); (b) point clouds with translation T (−1.0 m, −1.7 m, −2.3 m) and rotation R (−2.5°, −0.7°, 2.8°); (c) point clouds with translation T (2.7 m, −0.9 m, −1.9 m) and rotation R (−2.0°, 2.7°, −2.2°); (d) point clouds with translation T (−2.8 m, 2.0 m, 1.9 m) and rotation R (1.1°, 2.6°, 0.2°); (e) point clouds with translation T (−1.3 m, −2.1 m, 1.3 m) and rotation R (−1.2°, −2.9°, −2.4°); (f) point clouds with translation T (2.7 m, −2.5 m, −1.7 m) and rotation R (−1.1°, 2.6°, −0.1°); (g) point clouds with translation T (0.1 m, 0.7 m, 0.1 m) and rotation R (−2.0°, −1.3°, 0.6°); (h) point clouds with translation T (−1.6 m, −0.8 m, −2.3 m) and rotation R (0.4°, 1.7°, 1.0°); (i) point clouds with translation T (1.0 m, −1.2 m, 1.0 m) and rotation R (2.4°, 0.5°, 2.7°); (j) point clouds with translation T (1.1 m, −2.5 m, −2.3 m) and rotation R (−2.6°, −1.2°, −1.1°).
Figure 4. The UAV point-cloud dataset (overhead view).
Figure 5. Residential area of UAV LiDAR point-cloud data. (The blue line segment represents the profile line; red point cloud: target; green point cloud: source).
Figure 6. Mountainous area of UAV LiDAR point-cloud data. (The blue line segment represents the profile line; red point cloud: target; green point cloud: source).
Figure 7. The overview illustration of the proposed AR-VGICP algorithm.
Figure 8. Registration results of different algorithms on synthetic data_01 (rotated −90°). (The a to j correspond to the data series in Figure 2; red point cloud: target; blue point cloud: source).
Figure 9. Registration results of different algorithms on synthetic data_02. (The a to j correspond to the data series in Figure 3; red point cloud: target; blue point cloud: source).
Figure 10. Translation and rotation errors of different algorithms on synthetic data_01. (The a to j correspond to the data series in Figure 2).
Figure 11. Translation and rotation errors of different algorithms on synthetic data_02. (The a to j correspond to the data series in Figure 3).
Figure 12. Registration time of synthetic data_01 and synthetic data_02. (The a to j of data_01 correspond to the data series in Figure 2; the a to j of data_02 correspond to the data series in Figure 3).
Figure 13. Registration results of different algorithms. (ad) correspond to the datasets residence-01, residence-02, mountain-01, and mountain-02, respectively; 1–5 represent: (1) Fast GICP, (2) Fast GICP (1.0 m), (3) VGICP (1.0 m), (4) VGICP (0.5 m), and (5) AR-VGICP. (red point cloud: target; green point cloud: source).
Figure 14. Section profile view of registration results using different algorithms.
Figure 15. Translation and rotation errors of different algorithms.
Figure 16. Registration time of different algorithms.
Figure 17. Trajectories of different algorithms.
Figure 18. RTE of different algorithms.
Table 1. Parameters of KITTI Sequence 2011_10_03_drive_0027_sync.

Parameter        | Value
Data Category    | Residential
LiDAR Sensor     | Velodyne HDL-64E
Number of Frames | 4550
Duration         | 7 min 35 s
Table 2. Processing speed and transformation error.

Method            | FPS  | Translation [m] | Rotation [°]
Fast GICP (1.0 m) | 5.23 | 0.903 ± 0.430   | 0.758 ± 0.319
VGICP (0.5 m)     | 9.31 | 4.463 ± 4.520   | 0.843 ± 0.363
VGICP (1.0 m)     | 8.23 | 1.219 ± 0.908   | 0.872 ± 0.320
AR-VGICP          | 7.96 | 1.106 ± 0.697   | 0.743 ± 0.309
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
