Article

Multi-Lidar System Localization and Mapping with Online Calibration

Fang Wang, Xilong Zhao, Hengzhi Gu, Lida Wang, Siyu Wang and Yi Han
1 Department of Automotive Engineering, Xi’an Vocational University of Automobile, Xi’an 710038, China
2 Department of Automobile, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10193; https://doi.org/10.3390/app131810193
Submission received: 1 August 2023 / Revised: 28 August 2023 / Accepted: 7 September 2023 / Published: 11 September 2023
(This article belongs to the Special Issue Autonomous Vehicles: Technology and Application)

Abstract

Currently, the demand for automobiles is increasing, and daily travel is increasingly reliant on cars. However, accompanying this trend are escalating traffic safety issues. Surveys indicate that most traffic accidents stem from driver errors, both intentional and unintentional. Consequently, within the framework of vehicular intelligence, intelligent driving uses computer software to assist drivers, thereby reducing the likelihood of road safety incidents and traffic accidents. Lidar, an essential facet of perception technology, plays an important role in vehicle intelligent driving. In real-world driving scenarios, the detection range of a single laser radar is limited. Multiple laser radars can improve the detection range and point density, effectively mitigating state estimation degradation in unstructured environments. This, in turn, enhances the precision and accuracy of simultaneous localization and mapping. Nonetheless, the relationship governing pose transformation between multiple lidars is intricate. Over extended periods, perturbations arising from vibrations, temperature fluctuations, or collisions can compromise the initially converged external parameters. In view of these concerns, this paper introduces a system capable of concurrent multi-lidar positioning and mapping, as well as real-time online external parameter calibration. The method first preprocesses the original measurement data, extracts linear and planar features, and rectifies motion distortion. Subsequently, leveraging degradation factors, the convergence of the multi-lidar external parameters is detected in real time. When deterioration in the external parameters is identified, the local map of the main laser radar and the feature point cloud of the auxiliary laser radar are associated to realize online calibration. This is succeeded by frame-to-frame matching according to the converged external parameters, culminating in laser odometer computation. Introducing ground constraints and loop closure detection constraints in the back-end optimization enables correction of the global estimated pose. Concurrently, the feature point cloud is aligned with the global map, and the map update is completed. Finally, experimental validation is conducted on data acquired at Chang’an University to substantiate the system’s online calibration accuracy, positioning and mapping accuracy, robustness, and real-time performance.

1. Introduction

Intelligent vehicles rely on key technologies such as environmental perception, intelligent decision-making, control execution, and system design technology [1,2]. Among these, positioning technology underpins the functionalities of each module. Only with precise knowledge of the vehicle’s position and orientation in the current environment can reliable information be provided for perception and planning decisions [3,4,5]. In addition to GPS for self-driving vehicles’ positioning, two sensor types, namely lidar and cameras, are also widely used in positioning technology [6]. Compared to other sensors, lidar offers the advantages of an extended detection range, immunity to adverse weather conditions, and depth information [7,8]. Furthermore, it is prominently featured in cutting-edge research [9].
In lidar positioning, two schemes prevail: multiple lidar and single lidar [10]. Although a single multi-line lidar yields rich information, inherent blind spots may arise due to installation locations, leading to relatively sparse point cloud scans in some areas and close ranges [11,12,13]. To address these limitations, self-driving vehicles are equipped with multiple laser radars employing low scanning lines. This mitigates issues of blind spots and sparse point clouds found in single laser radars, while considering cost-effectiveness. Lidars with low scanline resolution are economically advantageous over their high scanline resolution counterparts [14,15]. Therefore, building upon the foundation of multi-lidar systems, this paper introduces a system capable of concurrent multi-lidar positioning, mapping, and real-time external parameter calibration to meet the high-precision positioning demands of unmanned vehicles.
Multi-lidar extrinsic calibration methods are generally divided into two categories: those based on reference markers and those based on motion [16]. The method based on reference markers centers on point cloud registration, typically involving feature extraction for matching or minimizing spatial distances between point clouds. Bo Shi [17] and others propose using two lidars to measure structured scenes from different perspectives, conducting multiple repeated scans by altering the pose of the sensor carrier, extracting at least three pairs of planes, and forming constraints to determine external parameters. This method obviates the need for artificial calibration boards, relying solely on walls and floors. He Yanbing and Ye Bin [18] extract 3D-HARRIS key points from the original point cloud data, characterize them with the signature of histograms of orientations (SHOT) feature descriptor, and employ a random sample consensus algorithm for point cloud registration; the feature descriptors are assembled into vectors, and similarity is gauged by the modulus of their difference. Jianhao Jiao and Qinghai Liao et al. [19] select three linearly independent corner planes, extract the corresponding plane features through the RANSAC algorithm, retain the largest three planes based on the number of feature points, match the plane features and normal vectors of the two lidars, and solve the displacement vector via the least squares method along with the rotation matrix through the Kabsch algorithm [20]. Following the acquisition of the initial external parameters, a nonlinear optimization model is established and iterated until convergence, yielding precise extrinsic results. In natural environments, dedicated reference markers for calibration are often elusive, necessitating motion-based calibration.
The motion-based method: Livox introduces a target-free calibration algorithm that assumes consistency among the local 3D models scanned by multiple lidars and then reconstructs the point clouds from both the auxiliary and main lidars during movement [21,22,23]. These maps are registered until the algorithm converges. Xiyuan Liu and Fu Zhang [24] propose generating commonly visible features by overlapping the common-view fields of view through motion, facilitating mapping and registration of all lidars to acquire the reference radar pose and point cloud maps for all radars. Jiao et al. [25] achieve automatic online calibration of radar poses using a multi-lidar odometer based on the sliding window approach. They determine the relative motion between two lidar frames through line and surface feature frame matching, enabling calculation of the initial states for each radar’s coordinate system pose. The hand–eye calibration problem is then employed to complete coarse external parameter calibration. The initial outcomes of this rough external parameter calibration serve as the foundation for constructing an optimization problem. By associating and matching the feature point cloud within the sliding window with the reference radar’s local map, accurate calibration results are attained through continuous iterations until convergence.
Current research still calls for enhancements in efficiency and accuracy. Many multi-sensor external parameter calibration strategies are carried out offline and often in a controlled calibration environment. Prolonged driving can lead to shifts in sensor position and orientation, rendering prior offline calibration outcomes invalid and affecting the usability of autonomous driving functions. The online calibration of external parameters among multiple lidars within a varying environment therefore also bears significant research importance.
The innovation aspects of this paper are outlined as follows:
(1) The registration method based on line and surface features is used for point cloud matching, and the LM algorithm is used for nonlinear optimization. A refined root mean square error (RMSE) method is introduced to evaluate the point cloud registration accuracy.
(2) A feature fusion strategy, grounded in feature point cloud smoothness and distribution, enables frame-to-frame matching using the Ceres library and the LM nonlinear optimization method. This prevents an excessive number of multi-lidar feature point clouds from impairing the real-time performance of the system.
(3) The GTSAM library is employed to formulate a graph optimization model, supplemented by a loop closure detection constraint factor grounded in the Scan Context descriptor and a ground constraint factor relying on plane feature matching. These elements collectively achieve global trajectory optimization and z-axis error limitation.
(4) A real vehicle experimental platform is established to validate the robustness, real-time performance, and accuracy of the online calibration algorithm presented in this paper. This study also verifies that the multi-radar SLAM system has a more pronounced accuracy improvement than the single radar system.

2. Front-End Laser Odometry and Online Calibration

In this paper, the point cloud data from two laser radars undergo preprocessing and feature extraction, enabling real-time calibration of external parameters among multiple laser radars. When non-convergence of external parameters is detected, the feature point cloud from the main lidar is utilized for inter-frame matching to solve the laser odometer information [26,27,28]. Concurrently, online calibration is performed to unify the coordinate systems across multiple radars and vehicles. Upon detecting the convergence of external parameters, a fusion strategy proposed in this paper is employed, merging feature point clouds from the two radars to solve the laser odometer. The specific process is illustrated in Figure 1.

2.1. Point Cloud Preprocessing

Ground point cloud screening, as depicted in Figure 2, involves two points, A and B, scanned by two adjacent laser beams on the same column. In the figure, h indicates the height difference between the points, d is the horizontal distance, and θ represents the angle between the straight line formed by the points and the horizontal plane. The determination of whether points A and B belong to the ground point cloud hinges on comparing the horizontal angle with a predetermined threshold. If it surpasses the threshold, both points are considered non-ground. Otherwise, both points are considered as part of the ground point cloud.
The angle θ can be computed as follows:
$$\theta = \arctan\frac{z_{1} - z_{2}}{\sqrt{\left(x_{1} - x_{2}\right)^{2} + \left(y_{1} - y_{2}\right)^{2}}}$$
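As a concrete illustration, the following is a minimal sketch of this ground-screening rule on an organized point cloud. It assumes a range image of shape (rows, cols, 3) with row 0 as the lowest scan ring; the 10° threshold and the array layout are illustrative assumptions rather than values taken from the paper.

```python
# Sketch of ground screening between vertically adjacent points A and B.
import numpy as np

def ground_mask(cloud: np.ndarray, angle_threshold_deg: float = 10.0) -> np.ndarray:
    """Mark points whose inter-ring slope angle stays below the threshold as ground."""
    rows, cols, _ = cloud.shape
    mask = np.zeros((rows, cols), dtype=bool)
    for c in range(cols):
        for r in range(rows - 1):
            a, b = cloud[r, c], cloud[r + 1, c]        # two vertically adjacent points A, B
            dx, dy, dz = b - a
            d = np.hypot(dx, dy)                        # horizontal distance between A and B
            theta = np.degrees(np.arctan2(abs(dz), d))  # slope angle against the horizontal plane
            if theta < angle_threshold_deg:             # shallow slope: treat both points as ground
                mask[r, c] = mask[r + 1, c] = True
    return mask
```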
When clustering the point cloud, the initial step involves traversing each point for processing. If the current point has not been marked, it is designated as a new cluster, and the current point is then used as a center for traversal of its four neighbors: up, down, left, and right. During this process, it is essential to calculate the angle between the laser radar, the current point, and the neighbor point. Should the angle exceed the designated threshold, both the current point and the neighbor point are categorized under the same point cloud. The calculation process is illustrated in Figure 3, where O represents the center point of the lidar, A is the current point of traversal, B denotes the neighbor of the current point, d1 is the length of line segment OA, d2 is the length of line segment OB, α is the angle between OA and OB, and θ is the angle to be calculated.
The angle θ can be computed as follows:
$$\theta = \arctan\frac{d_{2}\sin\alpha}{d_{1} - d_{2}\cos\alpha}$$
Neighbor points are continuously traversed until the current cluster no longer contains points that meet the requirements. At that point, the clustering process is deemed completed. The decision on whether to retain the cluster is based on the number of point clouds within it. If this count falls below a certain threshold, the cluster is considered invalid.
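The angle-based region growing described above can be sketched as follows, again on an organized range image. The angular resolutions, the 60° threshold, and the 30-point minimum cluster size are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of angle-criterion clustering over a range image (breadth-first traversal).
import numpy as np
from collections import deque

def cluster_range_image(ranges: np.ndarray, angle_thresh_deg=60.0, min_points=30):
    rows, cols = ranges.shape
    labels = -np.ones((rows, cols), dtype=int)          # -1: unvisited, -2: invalid cluster
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0, c0] != -1 or ranges[r0, c0] <= 0:
                continue
            queue, members = deque([(r0, c0)]), [(r0, c0)]
            labels[r0, c0] = next_label
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # up/down/left/right neighbors
                    nr, nc = r + dr, (c + dc) % cols                # wrap horizontally (360° scan)
                    if not (0 <= nr < rows) or labels[nr, nc] != -1 or ranges[nr, nc] <= 0:
                        continue
                    d1, d2 = max(ranges[r, c], ranges[nr, nc]), min(ranges[r, c], ranges[nr, nc])
                    alpha = np.radians(0.2 if dc != 0 else 2.0)     # assumed angular resolutions
                    beta = np.arctan2(d2 * np.sin(alpha), d1 - d2 * np.cos(alpha))
                    if np.degrees(beta) > angle_thresh_deg:         # smooth surface: same cluster
                        labels[nr, nc] = next_label
                        queue.append((nr, nc))
                        members.append((nr, nc))
            if len(members) < min_points:                           # too few points: invalid cluster
                for m in members:
                    labels[m] = -2
            else:
                next_label += 1
    return labels
```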
In the process of correcting point cloud motion distortion, assuming constant lidar movement speed between two frames, the prediction of relative motion between the current frames hinges on the calculation results of the laser odometer from the preceding two frames, as expressed in the following equation:
$$\left(T_{k-1}^{w}\right)^{-1} T_{k}^{w} = \left(T_{k-2}^{w}\right)^{-1} T_{k-1}^{w}$$
where $T_{k}^{w}$, $T_{k-1}^{w}$, and $T_{k-2}^{w}$ represent the laser odometer information for frames k, k − 1, and k − 2, respectively, in the world coordinate system. In accordance with the assumption of a uniform motion model, the relative motion between frames k − 2 and k − 1 is consistent with the relative motion between frames k − 1 and k. Consequently, the pose transformation for frame k can be computed. This approach effectively achieves motion distortion correction for the lidar point cloud.
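A short sketch of this uniform-motion assumption follows: the relative transform between frames k − 2 and k − 1 is reused as the predicted motion of frame k, and each point is de-skewed by applying the fraction of that motion corresponding to its timestamp within the scan. The slerp/linear interpolation is an illustrative choice, not necessarily the paper's exact implementation.

```python
# Sketch of constant-velocity motion prediction and per-point de-skewing.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def predict_current_motion(T_km1_w: np.ndarray, T_km2_w: np.ndarray) -> np.ndarray:
    """Relative motion of the current frame, assuming constant inter-frame velocity."""
    return np.linalg.inv(T_km2_w) @ T_km1_w   # (T_{k-2}^w)^-1 T_{k-1}^w, reused for frame k

def deskew_point(p: np.ndarray, s: float, delta_T: np.ndarray) -> np.ndarray:
    """De-skew point p captured at fractional scan time s in [0, 1]."""
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), delta_T[:3, :3]]))
    R_s = Slerp([0.0, 1.0], key_rots)(s).as_matrix()    # interpolated fraction of the rotation
    t_s = s * delta_T[:3, 3]                            # interpolated fraction of the translation
    return R_s @ p + t_s
```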

2.2. Realization of Multi-Lidar Online Calibration

2.2.1. Online External Parameter Calibration

In the process of multi-lidar calibration, point clouds in different coordinate systems are brought into alignment within a common coordinate system through point cloud registration [29,30,31,32]. The assessment of point cloud registration accuracy generally employs RMSE as a quantitative metric that establishes corresponding points between two point clouds using a Kd-tree, subsequently determining the discrepancy between the Euclidean distance and the actual distance:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(x_{i} - \hat{x}_{i}\right)^{2}}{n}}$$
where $n$ is the number of corresponding points, $x_{i}$ is the actual Euclidean distance between corresponding points, and $\hat{x}_{i}$ is the Euclidean distance between corresponding points after registration. As inferred from the formula, a smaller RMSE indicates a lesser gap between the Euclidean distance and the actual distance of corresponding points after registration, signifying a higher degree of point cloud overlap. An RMSE of 0 implies complete coincidence of the two point clouds.
However, this evaluation method holds certain shortcomings. Poor point cloud registration results may yield a small number of associated corresponding points and erroneous associations, thereby affecting the reliability of calculated results [33,34]. Therefore, this paper proposes an improved RMSE index to evaluate point cloud registration accuracy. This improved index considers the number of corresponding points as a weight: the greater the count, the more dependable the index outcome, as expressed in the following:
$$\mathrm{RMSE}_{\mathrm{improved}} = \frac{1}{n^{2}}\sqrt{\frac{\sum_{i=1}^{n}\left(x_{i} - \hat{x}_{i}\right)^{2}}{n}} \times 10^{6}$$
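The following brief sketch illustrates both metrics: correspondences between the registered source cloud and the target cloud are found with a Kd-tree, the classic RMSE is computed over them, and the improved index additionally weights by the number of correspondences with the 10^6 scaling of the formula above. Here the nearest-neighbor distance itself is used as the residual (its ideal value being zero), and the 0.5 m correspondence gate is an illustrative assumption.

```python
# Sketch of the original and improved RMSE registration-quality metrics.
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(source_in_target: np.ndarray, target: np.ndarray, max_dist: float = 0.5):
    tree = cKDTree(target)
    dists, _ = tree.query(source_in_target)     # nearest-neighbor distance per source point
    residuals = dists[dists < max_dist]         # keep only plausible correspondences
    n = residuals.size
    if n == 0:
        return np.inf, np.inf, 0
    rmse = np.sqrt(np.mean(residuals ** 2))
    improved_rmse = rmse / n**2 * 1e6           # more correspondences -> more trustworthy score
    return rmse, improved_rmse, n
```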
During multi-lidar online external parameter calibration, the process begins by projecting the feature point cloud of the auxiliary laser radar onto the main laser radar’s coordinate system using the current external parameters, as indicated in the following:
$$\tilde{X}_{k+1,i}^{L} = R\,X_{k+1,i}^{L} + t = G\!\left(X_{k+1,i}^{L},\, T_{k+1}^{L}\right)$$
The modulus of the cross product of two vectors equals the area of the parallelogram they span. Dividing the area of this parallelogram by the length of the base edge yields the distance from point i to the line jl, thus obtaining the line feature optimization equation:
$$d_{e} = \frac{\left|\left(X_{(k+1,i)}^{L} - X_{(k,j)}^{L}\right) \times \left(X_{(k+1,i)}^{L} - X_{(k,l)}^{L}\right)\right|}{\left|X_{(k,j)}^{L} - X_{(k,l)}^{L}\right|}$$
In three-dimensional space, the volume of the parallelepiped spanned by three vectors can be obtained via the scalar triple product. Similarly, the distance between point i and the plane jlm is obtained by dividing this volume by the area of the base, resulting in the optimization equation for surface characteristics:
$$d_{h} = \frac{\left|\left(X_{(k+1,i)} - X_{(k,j)}\right) \cdot \left(\left(X_{(k,j)} - X_{(k,l)}\right) \times \left(X_{(k,j)} - X_{(k,m)}\right)\right)\right|}{\left|\left(X_{(k,j)} - X_{(k,l)}\right) \times \left(X_{(k,j)} - X_{(k,m)}\right)\right|}$$
Based on (7) and (8), the residual equation for feature association can be unified as follows:
$$\mathrm{loss} = \sum_{i=1}^{N_{e}} d_{e_{i}} + \sum_{i=1}^{N_{h}} d_{h_{i}} = D\!\left(\tilde{X}_{k+1,i}^{L}\right)$$
The total loss function can be obtained by combining (6) and (9):
$$d = F\!\left(\tilde{X}_{k+1,i}^{L},\, T_{k+1}^{L}\right) = D\!\left(G\!\left(\tilde{X}_{k+1,i}^{L},\, T_{k+1}^{L}\right)\right)$$
Substituting into (11), we evaluate as follows:
$$\Delta T = \left(\mathbf{J}\mathbf{J}^{T} + \lambda\,\mathbf{D}^{T}\mathbf{D}\right)^{-1}\mathbf{J}^{T}\mathbf{d}$$
The gradient descent is then applied using the following equation:
$$T_{k+1}^{L} \leftarrow T_{k+1}^{L} - \left(\mathbf{J}\mathbf{J}^{T} + \lambda\,\mathbf{D}^{T}\mathbf{D}\right)^{-1}\mathbf{J}^{T}\mathbf{d}$$
Ultimately, the refined pose transformation matrix serves as an accurate extrinsic parameter matrix used to transform the feature point cloud of the auxiliary lidar, thereby yielding the calibration result.
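As a hedged sketch of this refinement step, the snippet below projects auxiliary-lidar feature points into the main-lidar frame with the current extrinsic guess, builds point-to-plane residuals against precomputed local-map planes, and refines the extrinsic with Levenberg–Marquardt. It uses scipy's LM solver in place of the hand-written update of (12)–(13), and the plane parameters (n, d) with unit normals are assumed to come from the main lidar's local map.

```python
# Sketch of online extrinsic refinement via point-to-plane residuals and LM.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(x, aux_points, plane_normals, plane_ds):
    """x = [rx, ry, rz, tx, ty, tz]: rotation vector and translation, auxiliary -> main."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    p_main = aux_points @ R.T + x[3:]                          # project features into main frame
    return np.sum(p_main * plane_normals, axis=1) + plane_ds   # signed point-to-plane distances

def refine_extrinsic(x0, aux_points, plane_normals, plane_ds):
    result = least_squares(point_to_plane_residuals, x0, method="lm",
                           args=(aux_points, plane_normals, plane_ds))
    return result.x                                            # refined extrinsic parameters
```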

2.2.2. Convergence Assessment of External Parameters

The LM algorithm used in this paper transforms the nonlinear problem into a local linear problem by computing the Jacobian matrix J of the error function. Consequently, the linear problem of each row can be regarded as a hyperplane of the solution space. The resultant solution x is a point in this space, fulfilling the requirement of minimal Euclidean distance to every hyperplane. The degeneracy factor is established by investigating the geometric distribution of the plane, incorporating additional hyperplane constraints and perturbations, as shown in Figure 4.
In Figure 4, the black line represents the plane constraint, the orange line denotes the additional hyperplane constraint, the blue point x0 represents the optimization problem’s solution, c is the normal vector of the additional hyperplane constraint with a modulus length of 1, δd denotes the offset distance of the hyperplane after adding the disturbance, and δxc is the offset distance of the constrained solution, i.e., the offset distance of x0 after adding the disturbance. The degradation factor is derived as follows. Since the additional constraint passes through x0, the solution is not altered by it, resulting in the following equation:
$$c^{T}\left(x - x_{0}\right) = 0, \quad \lVert c \rVert = 1$$
If the constraint is moved along its normal direction c by a certain distance, then x0 will produce a displacement of δxc in the same direction. For a given disturbance, let δxc* represent the maximum displacement:
$$\delta x_{c}^{*} = \max_{c}\,\delta x_{c}$$
The definition of the degradation factor is then expressed as follows:
$$D = \frac{\delta d}{\delta x_{c}^{*}}$$
Namely, the degradation factor is defined as the ratio of the hyperplane’s movement distance to the solution’s maximum displacement. Evidently, a larger value implies greater system stability. Simultaneously considering (14) and (15), we obtain the following:
$$\begin{bmatrix} A \\ c^{T} \end{bmatrix} x = \begin{bmatrix} b \\ c^{T}x_{0} \end{bmatrix}$$
After introducing perturbations, we have the following:
$$\begin{bmatrix} A \\ c^{T} \end{bmatrix} x = \begin{bmatrix} b \\ c^{T}x_{0} + \delta d \end{bmatrix}$$
Substituting the left pseudo-inverse $\begin{bmatrix} A \\ c^{T} \end{bmatrix}_{\mathrm{left}}^{-1} = \left(\begin{bmatrix} A^{T} & c \end{bmatrix}\begin{bmatrix} A \\ c^{T} \end{bmatrix}\right)^{-1}\begin{bmatrix} A^{T} & c \end{bmatrix}$ into (16), we obtain the following:
$$\delta x = \left(A^{T}A + c\,c^{T}\right)^{-1} c\,\delta d$$
The displacement δxc in the c direction can be calculated as the dot product of c and δx:
$$\delta x_{c} = c^{T}\delta x = c^{T}\left(A^{T}A + c\,c^{T}\right)^{-1} c\,\delta d$$
This illustrates that the degradation factor only relates to the original constraint direction A, independent of the initial constraint position b; parallel original constraint directions induce degradation.
Converting $c$ and $c^{T}$ according to the pseudo-inverses $c_{\mathrm{left}}^{-1} = \left(c^{T}c\right)^{-1}c^{T} = c^{T}$ and $\left(c^{T}\right)_{\mathrm{right}}^{-1} = c\left(c^{T}c\right)^{-1} = c$ yields the following:
$$\delta x_{c} = \left(c^{T}A^{T}A\,c + 1\right)^{-1}\delta d$$
To maximize the displacement, c must satisfy (21):
$$c^{*} = \arg\min_{c}\, c^{T}A^{T}A\,c, \quad \lVert c \rVert = 1$$
Since the modulus of c is 1, (21) can also be expressed as follows:
$$c^{*} = \arg\min_{c}\, \frac{c^{T}A^{T}A\,c}{c^{T}c}$$
which takes the form of a Rayleigh quotient, whose minimum is attained when $c$ is the eigenvector corresponding to the minimum eigenvalue $\lambda_{\min}$ of $A^{T}A$. Substituting this $c$, we have the following:
$$\delta x_{c} = \frac{\delta d}{\lambda_{\min} + 1}$$
Thus, the degradation factor is obtained as follows:
$$D = \lambda_{\min} + 1$$
In summary, the magnitude of the degradation factor allows for judgment regarding degradation within the constraint. The information matrix’s minimum eigenvalue exhibits a positive linear correlation with the degradation factor, which reveals the condition of the optimization-based state estimation problem, enabling assessment of whether sufficient constraints exist within the online calibration optimization problem to solve the precise external parameters. In this paper, the information matrix is defined based on the computed Jacobian matrix J, and convergence of multi-lidar external parameters is evaluated by comparing the minimum eigenvalue of the information matrix to a designated threshold.
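A compact sketch of this convergence check follows: the information matrix J^T J is formed from the calibration Jacobian, its smallest eigenvalue gives the degradation factor D = λ_min + 1, and D is compared with a threshold. The threshold value used here is an illustrative assumption.

```python
# Sketch of the degradation-factor convergence test for the extrinsic parameters.
import numpy as np

def extrinsics_converged(J: np.ndarray, threshold: float = 100.0) -> bool:
    info = J.T @ J                                # information matrix of the calibration problem
    lambda_min = np.linalg.eigvalsh(info).min()   # eigvalsh: symmetric matrix -> real eigenvalues
    D = lambda_min + 1.0                          # degradation factor
    return D > threshold                          # large D -> well constrained -> extrinsics trusted
```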

2.3. Inter-Frame Registration

After the feature point clouds of two consecutive frames are obtained, they are associated through scan-to-scan matching. The Ceres library is used to solve the resulting least squares problem, whose definition is formulated as follows:
$$\min_{x}\ \frac{1}{2}\sum_{i}\rho_{i}\!\left(\left\lVert f_{i}\left(x_{i_{1}},\ldots,x_{i_{n}}\right)\right\rVert^{2}\right) \quad \mathrm{s.t.}\quad l_{j} \le x_{j} \le u_{j}$$
where $\rho_{i}\!\left(\lVert f_{i}(x_{i_{1}},\ldots,x_{i_{n}})\rVert^{2}\right)$ is the residual block functioning as the cost function and depending on the parameter block $\left(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{n}}\right)$, while $l_{j}$ and $u_{j}$ are the lower and upper limits of the j-th optimization variable, respectively. Typically, these boundaries do not restrict the optimization variable’s range; $l_{j} = -\infty$, $u_{j} = \infty$. Additionally, the loss function is assumed to be the identity, $\rho_{i}(x) = x$. The objective function is then the sum of squares of multiple terms, resulting in an unconstrained nonlinear least squares formulation:
$$\frac{1}{2}\sum_{i}\left\lVert f_{i}\left(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{n}}\right)\right\rVert^{2}$$
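The sketch below mirrors this unconstrained form: each feature correspondence contributes one residual block, here the point-to-line distance of Equation (7), and the pose increment minimizes half the sum of squared residuals. The rotation-vector pose parameterization and scipy solver are substitutions for the Ceres/LM setup described in the paper.

```python
# Sketch of scan-to-scan matching as an unconstrained nonlinear least squares problem.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def edge_residual(x, p, line_a, line_b):
    """Distance from transformed point p to the line through map points line_a, line_b."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    q = R @ p + x[3:]
    num = np.linalg.norm(np.cross(q - line_a, q - line_b))   # parallelogram area
    return num / np.linalg.norm(line_a - line_b)             # area / base length = distance

def match_scan_to_scan(x0, edge_points, line_as, line_bs):
    def residuals(x):
        return np.array([edge_residual(x, p, a, b)
                         for p, a, b in zip(edge_points, line_as, line_bs)])
    return least_squares(residuals, x0).x      # default trust-region solver; LM is also applicable
```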

3. Back-End Optimization and Map Construction

The back-end optimization in this study is based on the graph optimization library GTSAM. The global pose is optimized by constructing a pose graph supplemented with loop closure and ground constraint factors. Additionally, key frames are extracted, and their feature point clouds are matched to the global map for comprehensive map generation and updating.

3.1. Loop Detection Based on Scan Context

In this study, the Scan Context environment descriptor is selected. The initial step involves reorganizing the point cloud using a ring and sector methodology. This procedure maps the 3D point cloud into a 2D feature map, as illustrated in Figure 5.
By segmenting the point cloud into rings and using a small sector for finer division, distinct indexed regions can be located within the two-dimensional feature map. The feature map’s values correspond to the highest reflection point’s elevation in the respective area. This approach extracts a two-dimensional feature map while retaining the spatial structural characteristics of the point cloud, as exemplified in Figure 6.
Upon extracting the Scan Context descriptor, defining a distance calculation formula is necessary to assess the similarity between two descriptors. Given the Scan Context of two point clouds, denoted as Iq and Ic, the cosine distance of the column vectors in the two feature matrices is used as the distance measure, and the result is then normalized by division as expressed in the following:
$$d\left(I^{q}, I^{c}\right) = \frac{1}{N}\sum_{j=1}^{N}\left(1 - \frac{c_{j}^{q}\cdot c_{j}^{c}}{\lVert c_{j}^{q}\rVert\,\lVert c_{j}^{c}\rVert}\right)$$
Directly computing the distance between column vectors is inadequate when the same scene is revisited from a different viewpoint. Consequently, the distance must be computed for all possible column shifts of one Scan Context relative to the other, with the minimum value selected as the final distance.
To ensure efficient searching and real-time performance of the SLAM system, sparse optimization is also necessary. Introducing the concept of the ring key facilitates obtaining a descriptor insensitive to rotation from the column vector in the Scan Context descriptor:
$$\psi\left(r_{i}\right) = \frac{\left\lVert r_{i} \right\rVert_{0}}{N}$$
Given that ring characteristics are unaffected by the vehicle’s orientation, the ring key becomes rotation insensitive. While it contains less information compared to the complete Scan Context descriptor, it enhances candidate frame searching efficiency. By constructing a Kd-tree and selecting the Scan Context corresponding to the multiple historical ring keys most similar to the ring key of the current scene, the nearest scene can be identified using the distance calculation formula described in the preceding section. This entire scene recognition process is depicted in Figure 7.
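An illustrative sketch of this pipeline is given below: the ring/sector descriptor is built from the highest point per cell, the rotation-insensitive ring key (occupancy ratio per ring) is derived, candidates are pre-selected with a Kd-tree over ring keys, and they are scored with the column-shifted cosine distance. The grid sizes (20 rings, 60 sectors, 80 m range) follow the original Scan Context paper and, together with the 0.2 distance threshold, are assumptions here.

```python
# Sketch of Scan Context construction, ring-key search, and loop scoring.
import numpy as np
from scipy.spatial import cKDTree

N_RING, N_SECTOR, MAX_RANGE = 20, 60, 80.0

def scan_context(points: np.ndarray) -> np.ndarray:
    sc = np.zeros((N_RING, N_SECTOR))
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    ring = np.clip((r / MAX_RANGE * N_RING).astype(int), 0, N_RING - 1)
    sector = np.clip((theta / (2 * np.pi) * N_SECTOR).astype(int), 0, N_SECTOR - 1)
    np.maximum.at(sc, (ring, sector), points[:, 2])     # keep the highest elevation in each cell
    return sc

def ring_key(sc: np.ndarray) -> np.ndarray:
    return np.count_nonzero(sc, axis=1) / N_SECTOR       # occupancy ratio of each ring

def sc_distance(sc_q: np.ndarray, sc_c: np.ndarray) -> float:
    best = np.inf
    for shift in range(N_SECTOR):                         # try every column shift (viewpoint rotation)
        shifted = np.roll(sc_c, shift, axis=1)
        num = np.sum(sc_q * shifted, axis=0)
        den = np.linalg.norm(sc_q, axis=0) * np.linalg.norm(shifted, axis=0) + 1e-9
        best = min(best, float(np.mean(1.0 - num / den)))
    return best

def find_loop(query_sc, history_scs, k=10, dist_thresh=0.2):
    tree = cKDTree(np.array([ring_key(s) for s in history_scs]))
    _, idxs = tree.query(ring_key(query_sc), k=min(k, len(history_scs)))
    candidates = np.atleast_1d(idxs)
    scores = [(i, sc_distance(query_sc, history_scs[i])) for i in candidates]
    best_i, best_d = min(scores, key=lambda t: t[1])
    return (best_i, best_d) if best_d < dist_thresh else (None, best_d)
```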

3.2. Z-Axis Error Limitation Based on Ground Constraints

As the vehicle’s travel distance increases, the cumulative error of the z-axis also progressively increases, as indicated by the following:
$$\mathrm{error}_{z} = S \times \sin\phi$$
where S is the actual travel distance of the vehicle and ϕ is the pitch angle.
This paper adopts a ground constraint, extending the planar point cloud features into a global ground plane, to mitigate the drift along the z-axis.
The fitted global ground equation is first initialized:
$$A_{w}x + B_{w}y + C_{w}z + D_{w} = 0$$
RANSAC fitting is concurrently applied to the current frame’s ground point cloud to obtain the ground equation:
$$A_{i}x + B_{i}y + C_{i}z + D_{i} = 0$$
Since the two plane equations are in different coordinate systems, it becomes essential to project the global plane equation PW from the world coordinate system to the radar coordinate system of the current frame i, as expressed in the following equation:
$${P^{w}}' = T_{i}^{w}\,P^{w}$$
Upon obtaining the two plane equations that share a unified coordinate system, the transformation relationship between the two normal vectors comes into play for constructing the residual, as illustrated in Figure 8.
By compensating the two angles shown in Figure 8, the normal vectors of the two planes can be made entirely consistent. The offset between the two planes is then measured, yielding the residual in three dimensions as follows:
$$\alpha = \arctan\frac{y}{x}$$
$$\beta = \arctan\frac{z}{\sqrt{x^{2} + y^{2}}}$$
$$\mathrm{error}_{d} = D_{w} - D_{i}$$
These residual items in all three dimensions are integrated into the least squares problem. Through this process, optimization is conducted collectively with other constraints.
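A hedged sketch of these ground-constraint residuals is given below: the current frame's ground points are fit to a plane with a simple RANSAC-style loop, the global ground plane is expressed in the same frame, and the two-angle plus offset residuals (α, β, error_d) are formed for the back-end optimizer. The sampling loop and its thresholds are illustrative, not the paper's implementation.

```python
# Sketch of ground plane fitting and ground-constraint residual construction.
import numpy as np

def fit_ground_plane(points, iters=100, inlier_dist=0.1, seed=0):
    """Return plane (A, B, C, D) with unit normal such that A x + B y + C z + D = 0."""
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, -1
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-6:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.sum(np.abs(points @ n + d) < inlier_dist)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, np.append(n, d)
    return best_plane

def ground_residuals(global_plane, frame_plane):
    """alpha/beta align the two unit normals; error_d penalizes the plane offset."""
    nw, ni = global_plane[:3], frame_plane[:3]
    alpha = np.arctan2(nw[1], nw[0]) - np.arctan2(ni[1], ni[0])
    beta = (np.arctan2(nw[2], np.hypot(nw[0], nw[1]))
            - np.arctan2(ni[2], np.hypot(ni[0], ni[1])))
    error_d = global_plane[3] - frame_plane[3]
    return np.array([alpha, beta, error_d])
```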

3.3. Map Construction and Update

The map construction in the laser SLAM system outlined in this paper involves registering the feature point cloud of each key frame onto the global map. In comparison to the inter-frame matching of the odometer, this approach boasts higher matching accuracy; however, it is accompanied by a decrease in matching speed. With the continuous expansion of the map, the computational resources required rise significantly. Therefore, the use of key frames, local map construction, and point cloud downsampling is employed to bolster scan-to-map matching speed and prevent excessive computational burden on the overall system performance.
The subsequent step entails the construction and update of the point cloud global map. The specific process is illustrated in Figure 9, where $Q_{k}$ represents the map point cloud constructed at time k, $\bar{Q}_{k+1}$ is the feature point cloud at time k + 1, $T_{k}^{W}$ is the pose estimation of the vehicle relative to the global coordinate system at time k, and $T_{k+1}^{L}$ is the pose estimation of the laser odometer at time k + 1.
Consequently, the pose estimation of the vehicle relative to the global coordinate system at time k + 1 is expressed as follows:
$$T_{k+1}^{W} = T_{k+1}^{L}\,T_{k}^{W}$$
Based on the pose transformation relationship, the feature point cloud of the current frame can be projected into the global coordinate system, as expressed in the following equation:
$$P_{k+1} = T_{k+1}^{W}\,\bar{Q}_{k+1}$$
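A short sketch of this map update step follows: the odometry increment is composed with the previous global pose, the key frame's feature cloud is projected into the global frame, and the map is grown and voxel-downsampled. The composition order follows the equation above; the 0.4 m voxel size is an illustrative assumption.

```python
# Sketch of key-frame map update with voxel downsampling.
import numpy as np

def update_map(global_map, feature_cloud, T_k_w, T_kp1_l, voxel=0.4):
    T_kp1_w = T_kp1_l @ T_k_w                              # pose of frame k+1 in the world frame
    homog = np.hstack([feature_cloud, np.ones((len(feature_cloud), 1))])
    world_points = (T_kp1_w @ homog.T).T[:, :3]            # project key-frame features to the map
    merged = np.vstack([global_map, world_points])
    # Voxel downsampling: keep one representative point per occupied voxel.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[idx], T_kp1_w
```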

4. Real-World Experiment Validation and Result Analysis

4.1. Online Calibration Experiment Results and Analysis

The online calibration experiment conducted in this paper is divided into two parts. The first part verifies the robustness of the calibration algorithm by altering the installation layout of multiple laser radars. The online calibration results are then compared with actual measurements within the same scene. The second part uses diverse calibration algorithms on the same data, comparing the results of this paper’s algorithm with others to validate its accuracy and real-time efficacy.

4.1.1. Comparison and Analysis of Online Calibration Results for Different Lidar Layouts

This section employs six multi-radar layouts, deriving actual inter-radar external parameters through manual measurements. Multiple individuals participated in the measurements, and the mean of all measurements was taken as the ultimate value, aiming to minimize manual measurement errors. The online calibration results and corresponding radar layouts are shown in Table 1.
Upon reviewing Table 1, it is evident that the online calibration outcomes closely align with actual measurements, affirming the efficacy of the calibration algorithm. Further analysis will be conducted to statistically assess deviations between the online calibration results and actual measurements, as shown in Table 2.
Upon analyzing Table 2, it is deduced that the maximum translation error of the online calibration algorithm is 0.0617705 m, and the maximum rotation error is 1.497818°, both falling within acceptable limits. The smaller deviation in translation calibration results can be attributed to the precision of actual translation measurements as well as minimal translation deviation between the two lidars, rendering this paper’s algorithm more capable of achieving accurate calibration results. Notably, the yaw angle exhibits greater deviation among the three rotation angles, primarily due to the lidar’s relatively low vertical resolution and limited vertical direction constraints.

4.1.2. Comparison and Analysis of Online Calibration Algorithm Results for Multiple Lidars

In the experiment, the proposed calibration algorithm was compared with multi_lidar_calibrator [35] and OpenCalib [36]. The data collected within the same scene were used for calibration using each algorithm. Calibration results were then analyzed to discern the strengths and weaknesses of the proposed algorithm within natural scenes. The results of the three calibration algorithms are shown in Figure 10.
Visual assessment of the calibration outcomes unveils varying degrees of ineffectiveness in the other two calibration algorithms, and the proposed algorithm exhibits superior efficacy. By employing the improved RMSE method as described in this paper, along with the visualization results, calibration results from each algorithm are analyzed and compared to assess calibration quality. Statistical results are presented in Table 3.
A comprehensive analysis of the experimental data shows that the proposed calibration algorithm not only ensures accurate point cloud registration but also balances real-time performance: when lidar point cloud data are input at a frequency of 10 Hz, they can be processed promptly. Conversely, the real-time performance of the other two calibration algorithms is less satisfactory, relegating them to offline calibration use. The visual calibration results further show that the point clouds calibrated by the multi_lidar_calibrator algorithm are poorly registered, even though this algorithm yields the smallest original RMSE index, which would nominally indicate the highest accuracy. The improved RMSE index, by contrast, is consistent with the visual results, substantiating the feasibility of evaluating point cloud registration with the improved RMSE metric.

4.2. Experimental Results and Analysis of LiDAR SLAM on a Real Vehicle

In this section, the proposed SLAM algorithm is compared with prominent laser SLAM algorithms. The analysis is conducted with the EVO tool: the absolute pose error (APE) is obtained by comparing the GPS trajectory with the trajectory estimated by each SLAM algorithm, in order to validate the accuracy of the SLAM algorithm proposed in this paper. The trajectory comparison is illustrated in Figure 11.
Based on the preceding analysis, it becomes evident that, apart from the A-LOAM algorithm, the errors along the x and y axes of the other laser SLAM algorithms are relatively small. However, as historical feature points gradually vanish, the A-LOAM algorithm’s absence of loop detection renders it unsuitable for large-scale scene mapping. In the z-axis direction, all algorithms exhibit notable errors; LeGO-LOAM and M-LOAM manifest the most substantial deviations. In contrast, the SLAM algorithm presented in this paper effectively curtails z-axis errors by incorporating ground constraints, leading to the smallest error among the surveyed laser SLAM algorithms.
Table 4 presents the APE values output by the EVO tool for each laser SLAM algorithm, quantifying the conformity between the estimated and actual trajectories through absolute pose evaluation.
These APE statistics provide a tangible insight into the positioning accuracy of each SLAM algorithm. Using the mean APE as a benchmark, the algorithm outlined in this paper emerges as the most adept in trajectory estimation, owing to the integration of constraints that raise the accuracy potential of the laser SLAM system.
RPE pertains to the comparison of relative pose errors across different trajectories through attitude increments. It involves an ongoing juxtaposition between real and estimated trajectories, culminating in a localized trajectory accuracy assessment and a gauge of the SLAM system’s drift. This paper considers both rotational and translational errors. Figure 12 and Table 5 present the RPE values of each laser SLAM algorithm.
The analysis of RPE statistical data unveils that while the absolute pose accuracy of the proposed algorithm surpasses that of the other laser SLAM algorithms, its relative pose accuracy lags behind, exhibiting an approximate discrepancy of 0.3 m compared to other algorithms. This observation suggests that the proposed algorithm carries a certain level of local accuracy drift.

5. Conclusions

This paper designs a laser SLAM system that realizes online external parameter calibration and simultaneous localization and mapping for multiple laser radars, and the feasibility of the scheme is verified through real vehicle experiments. First, point cloud preprocessing is carried out: the motion distortion of the lidar point cloud is corrected based on a uniform motion model, and ground extraction, point cloud segmentation, and feature extraction are completed. Online calibration is then realized by judging the convergence of the multi-lidar external parameters through the degradation factor and by matching the feature point cloud of the auxiliary lidar with the local map of the main lidar, thereby solving the pose transformation relationship between the lidars. In the front-end odometer, high-quality features are screened and fused according to the smoothness and distribution of the multi-radar feature point clouds, and the Ceres library is used for inter-frame matching to solve the estimated pose. At the back end, the GTSAM library is used to build a graph optimization model: a loop closure detection factor based on the Scan Context descriptor is added for global pose optimization, and a ground constraint factor based on plane feature matching reduces the z-axis error of the overall trajectory. Finally, the real vehicle experiment was carried out; the software and hardware platform was built, and data were collected in Tianwenlu, Chang’an University, which verified the feasibility of the scheme.

Author Contributions

Conceptualization, F.W. and L.W.; methodology, X.Z. and Y.H.; software, H.G.; resources, X.Z.; data curation, H.G. and Y.H.; writing—original draft preparation, S.W.; writing—review and editing, L.W. and F.W.; supervision, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Construction of the “Scientist + Engineer” team of QinChuangYuan, Shaanxi Province, grant number 2022KXJ-021.

Institutional Review Board Statement

The study did not involve humans or animals.

Informed Consent Statement

The study did not involve humans.

Data Availability Statement

The data is unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, Y.H.; Jiang, M.; Ma, Z.Y.; Zhong, C. Research progress in key technologies of automobile autonomous vehicle driving. J. Fuzhou Univ. 2021, 49, 691–703. [Google Scholar]
  2. Guo, W.H. Intelligent and connected vehicle technology Roadmap 2.0 generation. Intell. Connect. Car 2020, 6, 10–13. [Google Scholar]
  3. Yu, Z.L. Research on Intelligent Vehicle Positioning Technology Based on Visual Inertial Navigation Fusion. Master’s Thesis, University of Electronic Science and Technology of China, UESTC, Chengdu, China, 2022. [Google Scholar]
  4. Pi, J.H. Visual SLAM Algorithm for a Complex and Dynamic Traffic Environment. Master’s Thesis, Dalian University of Technology, Dalian, China, 2022. [Google Scholar]
  5. Cai, Y.F.; Lu, Z.H.; Li, Y.W.; Chen, L.; Wang, H. A-coupled SLAM system based on multi-sensor fusion. Automob. Eng. 2022, 44, 350–361. [Google Scholar]
  6. Lezki, H.; Yetik, I.S. Localization Using Single Camera and Lidar in GPS-Denied Environments. In Proceedings of the 28th Signal Processing and Communications Applications Conference (SIU), Gaziantep, Turkey, 5–7 October 2020. [Google Scholar]
  7. Wang, H.; Xu, Y.S.; Cai, Y.F.; Chen, L. Summary of multi-target detection technology for intelligent vehicles based on multi-sensor fusion. J. Automot. Saf. Energy Conserv. 2021, 12, 440–455. [Google Scholar]
  8. Feng, M.C.; Gao, X.Q.; Wang, J.S.; Feng, H.Z. Research on the vehicle target shape position fusion algorithm based on stereo vision and lidar. Chin. J. Sci. Instrum. 2021, 42, 210–220. [Google Scholar]
  9. Zhao, Y.C.; Wei, T.X.; Tong, D.; Zhao, J.B. Visual SLAM algorithm for fused YOLOv5s in dynamic scenes. Radio Eng. 2023, 1–10. Available online: http://kns.cnki.net/kcms/detail/13.1097.TN.20230629.1308.002.html (accessed on 6 September 2023).
  10. Guo, P.; Wang, Q.; Cao, Z.; Xia, H. High Precision Odometer and Mapping Algorithm Based on Multi Lidar Fusion. In Proceedings of the International Conference on Image, Vision and Intelligent Systems, Jinan, China, 15–17 August 2022. [Google Scholar]
  11. Cao, M.C.; Wang, J.M. Obstacle Detection for Autonomous Driving Vehicles With Multi-LiDAR Sensor Fusion. J. Dyn. Syst. Meas. Control 2020, 142, 021007. [Google Scholar] [CrossRef]
  12. Hu, Z.Y.; Dong, T.H.; Luo, Y. A vehicle front collision warning system with sensor fusion. J. Chongqing Univ. Technol. 2022, 36, 46–53. [Google Scholar]
  13. Liu, J.Y.; Tang, Z.M.; Wang, A.D.; Shi, Z.X. Detection of negative obstacles in an unstructured environment based on multi-lidar and combined features. Robot 2017, 39, 638–651. [Google Scholar]
  14. Guan, L.M.; Zhang, Q.; Chu, Q.L.; Zhu, J.Y. Algorithm for roadside dual lidar data fusion. Laser J. 2021, 42, 38–44. [Google Scholar]
  15. Wang, H.; Luo, T.; Lu, P.Y. Application of lidar in unmanned vehicles and its key technology analysis. Laser Infrared 2018, 48, 1458–1467. [Google Scholar]
  16. Jiao, J.H.; Yu, Y.; Liao, Q.H.; Ye, H.; Liu, M. Automatic Calibration of Multiple 3D LiDARs in Urban Environments. arXiv 2019, arXiv:1905.04912. [Google Scholar]
  17. Shi, B.; Yu, P.D.; Yang, M.; Wang, C.; Bai, Y.; Yang, F. Extrinsic Calibration of Dual LiDARs Based on Plane Features and Uncertainty Analysis. IEEE Sens. J. 2021, 21, 11117–11130. [Google Scholar] [CrossRef]
  18. He, Y.B.; Ye, B.; Li, H.J.; Zhou, X.Y. Multi-solid-state lidar calibration algorithm based on point cloud registration. Semicond. Photoelectr. 2022, 43, 195–200. [Google Scholar]
  19. Jiao, J.H.; Liao, Q.H.; Zhu, Y.L.; Liu, T.; Yu, Y.; Fan, R.; Wang, L.; Liu, M. A Novel Dual-Lidar Calibration Algorithm Using Planar Surfaces; IEEE: Piscataway Township, NJ, USA, 2019. [Google Scholar]
  20. Jiang, F. External parameter calibration method of mobile robot dual lidar system. Inf. Syst. Eng. 2021, 7, 105–107. [Google Scholar]
  21. Yeul, J.K.; Eun, J.H. Extrinsic Calibration of a Camera and a 2D LiDAR Using a Dummy Camera with IR Cut Filter Removed; IEEE ACCESS: Piscataway Township, NJ, USA, 2020; Volume 8. [Google Scholar]
  22. Wang, K.L.; Guo, L.Z.; Zhu, H.; Bao, C.X.; Wang, X.G. Study on the calibration method of intelligent navigation and multiline lidar. J. Test Technol. 2023, 37, 381–385+393. [Google Scholar]
  23. Zhang, J.; Chen, C.; Sun, J.W. Roadside multi-lidar cooperative sensing and error self-calibration methods. Inf. Comput. 2023, 35, 88–92+99. [Google Scholar]
  24. Liu, X.Y.; Zhang, F. Extrinsic Calibration of Multiple LiDARs of Small FoV in Targetless Environments. IEEE Robot. Autom. Lett. 2021, 6, 2036–2043. [Google Scholar]
  25. Jiao, J.H.; Ye, H.Y.; Zhu, Y.L.; Liu, M. Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration. IEEE Trans. Robot. 2021, 38, 351–371. [Google Scholar]
  26. Yang, X.K. Research on Obstacle Detection Based on Multi-Lidar Fusion. Master’s Thesis, University of Northern Technology, Beijing, China, 2022. [Google Scholar]
  27. Zhang, Z.Q. Research on Multi-Sensor Joint Calibration Algorithm for Intelligent Vehicles. Master’s Thesis, University Of Chongqing, Chongqing, China, 2022. [Google Scholar]
  28. Shan, T.X.; Englot, B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  29. Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181. [Google Scholar]
  30. Wang, H.; Wang, C.; Chen, C.L.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September 2021–1 October 2021. [Google Scholar]
  31. Himmelsbach, M.; Hundelshausen, F.V.; Wuensche, H.J. Fast Segmentation of 3D Point Clouds for Ground Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 560–565. [Google Scholar]
  32. Bogoslavskyi, I.; Stachniss, C. Fast Range Image-based Segmen-tation of Sparse 3D Laser Scans for Online Operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Republic of Korea, 9–14 October 2016; pp. 163–169. [Google Scholar]
  33. Du, P.F. Research on Key Technology of Monocular Vision SLAM Based on Graph Optimization. Master’s Thesis, Zhejiang University, Hangzhou, China, 2018. [Google Scholar]
  34. Chai, M.N.; Liu, Y.S.; Ren, L.J. Two-step loop detection based on laser point cloud NDT feature. Laser Infrared 2020, 50, 17–24. [Google Scholar]
  35. Adamshan. Automatic Calibration Algorithm and Code Examples. Available online: https://blog.csdn.net/AdamShan/article/details/105930565?spm=1001.2014.3001.5501 (accessed on 5 May 2020).
  36. Yan, G.H.; Liu, Z.C.; Wang, C.J.; Shi, C.; Wei, P.; Cai, X.; Ma, T.; Liu, Z.; Zhong, Z.; Liu, Y.; et al. OpenCalib: A Multi-sensor Calibration Toolbox for Autonomous Driving. arXiv 2022, arXiv:2205.14087. [Google Scholar] [CrossRef]
Figure 1. Laser odometer and online external parameter calibration process.
Figure 2. Ground point cloud extraction process.
Figure 3. Point cloud clustering model.
Figure 4. Degradation factor definition model.
Figure 5. Schematic diagram of Scan Context segmentation.
Figure 6. Scan Context two-dimensional feature map.
Figure 7. Loop detection based on Scan Context descriptor.
Figure 8. Ground-constrained residual model.
Figure 9. Map update model.
Figure 10. Comparison of results from various multi-lidar calibration algorithms. (a) multi_lidar_calibrator; (b) OpenCalib; and (c) my_calibration.
Figure 11. Comparison of trajectories of various laser SLAM algorithms with GPS trajectories. (a) Comparison of trajectories on different axes for various laser SLAM algorithms; (b) trajectory in the xy plane; (c) trajectory in the xz plane; and (d) trajectory in the yz plane.
Figure 12. RPE comparison for each laser SLAM algorithm.
Table 1. Online calibration results for different lidar layouts.
Columns: X (m), Y (m), Z (m), Roll (°), Pitch (°), Yaw (°).

Layout 1: The 32-line lidar is placed at the center of the roof, while the 16-line lidar is placed at the center of the front of the vehicle.
Actual measurement values: 0.82, 0, −0.05, 0, 15, 5
Algorithm calibration values: 0.848785, −0.0669597, −0.110832, −1.01514, 15.052825, 4.54818

Layout 2: The 32-line lidar is centered on the roof, and the 16-line lidar is centered on the front of the vehicle.
Actual measurement values: 0.82, 0.45, −0.05, −2, 15, 5
Algorithm calibration values: 0.788523, 0.450865, −0.0857012, −2.90022, 15.0670923, 4.27292

Layout 3: The 32-line lidar is placed at the center of the roof, and the 16-line lidar is placed at the left side of the rear end.
Actual measurement values: −0.63, 0.45, −0.03, 1, −1, −1
Algorithm calibration values: −0.69039, 0.440672, −0.010485, 0.38109943, −0.78497562, −1.66093791

Layout 4: The 32-line lidar is mounted on the center of the roof, and the 16-line lidar is mounted on the front right of the vehicle.
Actual measurement values: 0.82, −0.45, −0.05, 8, 15, −10
Algorithm calibration values: 0.812278, −0.428665, −0.0959522, 7.30276, 16.0154646, −9.31442

Layout 5: The 32-line lidar is placed on the roof in the center, while the 16-line lidar is placed on the rear of the car and to the right.
Actual measurement values: −0.63, −0.45, −0.03, 10, −3, −10
Algorithm calibration values: −0.644347, −0.37646, −0.0785091, 10.6063, −3.16483, −11.497818

Layout 6: The 32-line lidar is mounted on the right side of the roof, and the 16-line lidar is mounted on the left side of the roof.
Actual measurement values: 0, 0.67, −0.02, 0, 0, 0
Algorithm calibration values: 0.0617705, 0.732039, −0.0673341, 0.39993226, −0.10664561, 1.47808788
Table 2. Deviation statistics between calibration and actual measurements for six multi-radar layout algorithms.
Columns: X (m), Y (m), Z (m), Roll (°), Pitch (°), Yaw (°).
Maximum deviation: 0.0617705, 0.07364, 0.060832, 1.01514, 1.0154646, 1.497818
Minimum deviation: 0.007722, 0.000865, 0.019515, 0.3993226, 0.052825, 0.45182
Mean deviation: 0.0340818, 0.03901111, 0.0429739, 0.7062888, 0.27031365, 0.9168873
Table 3. Detailed evaluation indicators for various multi-lidar calibration algorithms.
Columns: Improved RMSE, Original RMSE, Visual Evaluation, Running Time, Corresponding Point Number.
multi_lidar_calibrator: 2.194913, 5.33469, Poor, 14,121 ms, 1559
OpenCalib: 1.985486, 8.4994, Good, 5366.5 ms, 2069
my_calibration: 0.914785, 13.307, Excellent, 98 ms, 3814
Table 4. APE values of different laser SLAM algorithms.
Columns: Max (m), Mean (m), Median (m), Min (m), RMSE (m), Std (m).
A-LOAM: 44.507192, 5.494679, 2.910864, 0.016384, 10.944484, 9.465212
F-LOAM: 6.335769, 2.797792, 2.316942, 0.024018, 3.206701, 1.566937
LeGO-LOAM: 5.727396, 3.054158, 3.192376, 0.061186, 3.252286, 1.117804
M-LOAM: 7.572633, 2.675861, 1.500130, 0.061675, 3.459226, 2.192261
My-SLAM: 3.250923, 2.196747, 2.648033, 0.028497, 2.433783, 1.047665
Table 5. RPE values for different laser SLAM algorithms.
Columns: Max (m), Mean (m), Median (m), Min (m), RMSE (m), Std (m).
A-LOAM: 2.852986, 0.488630, 0.446949, 0.002057, 0.610143, 0.365397
F-LOAM: 2.802553, 0.455393, 0.426244, 0.001329, 0.602286, 0.394164
LeGO-LOAM: 2.807766, 0.508525, 0.436876, 0.004183, 0.625385, 0.364017
M-LOAM: 2.807151, 0.536770, 0.477380, 0.008337, 0.648626, 0.364133
My-SLAM: 2.201508, 0.829093, 0.834060, 0.006764, 0.951827, 0.467525
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
