Article

Extrinsic Calibration of Multiple 3D LiDAR Sensors by the Use of Planar Objects

Department of Mechanical Engineering, Korea University, Seoul 02841, Korea
* Author to whom correspondence should be addressed.
Current address: The 1st R&D Institute–5th Directorate, Agency for Defense Development (ADD), Daejeon 34186, Korea.
Sensors 2022, 22(19), 7234; https://doi.org/10.3390/s22197234
Submission received: 8 August 2022 / Revised: 6 September 2022 / Accepted: 15 September 2022 / Published: 23 September 2022
(This article belongs to the Section Sensors and Robotics)

Abstract

Three-dimensional light detection and ranging (LiDAR) sensors have received much attention in the field of autonomous navigation owing to their accurate, robust, and rich geometric information. Autonomous vehicles are typically equipped with multiple 3D LiDARs because there are many commercially available low-cost 3D LiDARs. Extrinsic calibration of multiple LiDAR sensors is essential in order to obtain consistent geometric information. This paper presents a systematic procedure for the extrinsic calibration of multiple 3D LiDAR sensors using plane objects. At least three independent planes are required within the common field of view of the LiDAR sensors. The planes satisfying the condition can easily be found on objects such as the ground, walls, or columns in indoor and outdoor environments. Therefore, the proposed method does not require environmental modifications such as using artificial calibration objects. Multiple LiDARs typically have different viewpoints to reduce blind spots. This situation increases the difficulty of the extrinsic calibration using conventional registration algorithms. We suggest a plane registration method for cases in which correspondences are not known. The entire calibration process can easily be automated using the proposed registration technique. The presented experimental results clearly show that the proposed method generates more accurate extrinsic parameters than conventional point cloud registration methods.

1. Introduction

Autonomous navigation technology has been applied to robots and vehicles, and the commercialization of autonomous navigation systems operating in urban environments has been accelerating in recent years. Consequently, environmental perception in autonomous systems is becoming increasingly crucial for system and user safety. Therefore, many navigation systems are equipped with a sensor system consisting of multiple sensors of the same type or sensors with different modalities.
Three-dimensional light detection and ranging (LiDAR) sensors provide accurate range measurements and abundant geometric information. A wide range of 3D LiDAR products has been developed in recent years, placing 3D LiDAR sensors among the most preferred in the field of autonomous navigation. Although 3D LiDAR sensors with high vertical resolution and a wide vertical field of view (e.g., Velodyne HDL-64) provide a dense point cloud, their high prices make them difficult to apply to commercial autonomous navigation systems. In this case, 3D LiDAR sensors with low vertical resolution (e.g., Velodyne VLP-16) can act as substitutes because they are considerably cheaper than high-resolution sensors such as the HDL-64; however, they have a narrow vertical field of view. It is essential for an autonomous navigation system to measure a wide range of the environment while lowering the cost of the entire system. Furthermore, there is a need for a sensor system that reduces blind spots. Consequently, multiple 3D LiDAR sensors are often applied to autonomous navigation systems. Because the LiDAR sensors are typically mounted on a robotic or vehicular platform at different locations and in different directions, the transformation between the sensors should be well defined. From this point of view, extrinsic calibration to estimate the relative position and rotation between the sensors should precede the operation of the autonomous navigation system.
The importance of extrinsic calibration is evident for the following reasons: First, when commercializing autonomous navigation systems equipped with multiple sensors, large numbers of systems need to be calibrated in batches. Second, for research platforms, the poses of the sensors are often adjusted to find a suitable configuration. Furthermore, sensors are often attached and detached while inspecting the platform or reconfiguring the sensor systems. Therefore, the extrinsic calibration is not a one-off operation for the systems. Consequently, an easily applicable and accurate extrinsic calibration method is required.
Regarding environmental perception, it is efficient for autonomous navigation systems to cover a wide area with as few sensors as possible, especially when using multiple sensors of the same modality. Therefore, multiple LiDAR sensors are typically installed with considerably different viewpoints so that the sensor system has a wide field of view. Consequently, the overlapping regions between the sensors are reduced. For extrinsic calibration, however, the reduced overlapping regions make it difficult to determine the extrinsic parameters between sensors. For this reason, point cloud registration algorithms [1,2,3,4,5] may fail in sensor configurations with different points of view.
Calibration methods using artificial calibration objects require processes for making the objects and for placing them in the target environments [6,7,8]. To obtain a robust estimation of the extrinsic parameters, the calibration objects should be large enough or sufficient in number. However, artificial calibration objects are generally small, meaning that the measurement points on the objects are only a small fraction of the large point clouds that LiDAR sensors provide. Consequently, the calibration results are sensitive to sensor noise. Furthermore, when the objects cannot be detected automatically, a region of interest sometimes has to be specified manually.
There are several studies on motion-based extrinsic calibration. These methods are based on hand-eye calibration using structure from motion: the extrinsic parameters are calibrated using the motions estimated by each sensor. Because detection of mutual information is not required, these methods are usually applied to sensors with different modalities, such as LiDAR–camera–IMU [9] or LiDAR–camera [10,11]. However, motion-based techniques require the exact trajectory of the vehicle. Although the trajectory can be obtained via an inertial navigation system (INS) or visual odometry, it is difficult to estimate the exact trajectory of the system. Moreover, the accuracy of the trajectory estimation is affected by sensor noise. Thus, trajectory estimation errors may result in errors in the estimated extrinsic parameters.
In this study, we propose a method for extrinsic calibration of multiple 3D LiDAR sensors. The contributions of this paper are threefold. First, the proposed method uses plane objects in the surrounding environment as calibration objects. Specifically, at least three planes whose normal vectors span $\mathbb{R}^3$ are required. Planes satisfying this condition can easily be found on nearby structures (e.g., the ground, walls, or columns). Therefore, the proposed method does not require environmental modifications such as placing artificial calibration objects. Second, we suggest a plane registration method for matching the plane objects detected by each sensor. For sensors with different viewpoints, there are only small overlapping regions, which increases the difficulty of assigning plane correspondences. The proposed registration technique allows the entire calibration process to be automated without human intervention. Third, we consider the uncertainty of the plane features in the optimization process. Noisy planes may exist among the detected plane features, and the reliability of the calibration results can be increased by lowering the weights of the noisy planes.

2. Related Work

Several studies have addressed the extrinsic calibration of one or more LiDAR sensors. One possible approach is matching geometrically distinctive features. Choi et al. [12] proposed a method for estimating the relative pose between two 2D LiDAR sensors. A set of planes was used as the calibration object, and the coplanarity and orthogonality of the scan points were considered to determine the extrinsic parameters. In [13], a method for extrinsic calibration of a set of 2D LiDAR sensors was proposed. This method used observations of a planar surface from different orientations. Their extended research [14] presented a calibration technique that uses perpendicular planes. Line segments were detected on the orthogonal planes, and calibration was performed using the orthogonality and coplanarity of the line segments. Although these studies may be extended to planes detected using 3D LiDAR sensors, considerable modification of the implementation would be required.
An alternative to extrinsic calibration of LiDAR sensors is the use of artificial objects such as reflective tape, poles, or boxes. Underwood et al. [6] proposed a method for estimating the relative pose of a LiDAR sensor and a vehicle in which a vertical pole with reflective tape was used as a calibration object. Similarly, Gao et al. [7] placed several reflective tapes on vertical poles. The relative pose between LiDAR sensors and a vehicle were estimated by matching the reflective targets. In [8], a method for extrinsic calibration of a multi-LiDAR–multi-camera system using a hexahedron box was proposed. LiDAR–LiDAR calibration was performed by applying a singular value decomposition (SVD) to the vertices of the box.
Attempts have also been made to determine the extrinsic parameters using the quality of a 3D point cloud. Levinson and Thrun [15] proposed an unsupervised approach for intrinsic and extrinsic calibration of a high-resolution LiDAR sensor. An energy function was defined under the assumption that neighboring points lie on a contiguous surface. Intrinsic and extrinsic parameters minimizing the energy function were optimized using a grid search. Sheehan et al. [16] proposed a method for calibrating a rotating 3D LiDAR comprising three 2D LiDAR sensors. In their extended research, Maddern et al. [17] proposed an extrinsic calibration method of 2D and 3D LiDAR sensors. Rényi’s quadratic entropy (RQE) was used to measure the quality of a point cloud, while the extrinsic parameters were estimated by maximizing the quality function of the point cloud. However, these methods require an accurate trajectory of the vehicle.
Point cloud registration algorithms, such as iterative closest point (ICP) variants [1,2,3,4] and the normal distributions transform (NDT) [5], can be exploited for the extrinsic calibration of 3D LiDAR sensors. In [18], the development of a LiDAR data set in urban environments is presented. To calibrate the extrinsic parameters between 3D LiDAR sensors, generalized ICP [3] was applied to data from the overlapping region between the sensors; trihedral features, such as the corner of a building, should exist in the overlapping regions. The open-source software Autoware [19] provides implementations of various navigation algorithms for autonomous vehicles and uses NDT for the extrinsic calibration of multiple LiDAR sensors. However, registration may fail if the point clouds are sparse or if the viewpoints of the sensors are significantly different. These studies, therefore, selected points from overlapping regions and applied them to the registration algorithms.

3. Problem Statement and System Overview

This paper describes a calibration method for the extrinsic parameters of multiple LiDAR sensors mounted on a robotic or vehicular platform at different locations and in different directions. The extrinsic parameters refer to the relative position and the relative angle from the reference LiDAR sensor to the other sensors. The purpose of this study is to define the transformation between LiDAR sensors by accurately estimating the six-degrees-of-freedom (DOF) extrinsic parameters $[t_x, t_y, t_z, \phi, \theta, \psi]$, as shown in Figure 1.
There are two assumptions for the proposed method. The first is that at least three non-parallel planes are required within the common field of view of the LiDAR sensors; the normal vectors of the planes must span $\mathbb{R}^3$ to estimate the 6-DOF extrinsic parameters. The second is that the intrinsic parameters of each LiDAR sensor are assumed to be calibrated in advance. The intrinsic parameters relate to the internal optical components of the 3D LiDAR. If there is an error in the poses of the internal components, a distorted point cloud is generated, or the point cloud is entirely shifted from the origin of the LiDAR sensor. In this study, it is assumed that the intrinsic parameters are calibrated in the manufacturing process; consequently, there is neither distortion nor shift in the point clouds.
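A quick way to verify the first assumption on a candidate set of detected planes is to check the rank of the stacked normal vectors. The following is a minimal sketch; the function name and tolerance are illustrative choices, not part of the original method.

```python
import numpy as np

def normals_span_r3(normals, tol=1e-6):
    """Return True if the stacked plane normals span R^3 (rank 3), i.e.,
    the set contains at least three mutually non-parallel planes.
    The tolerance is an illustrative choice."""
    return np.linalg.matrix_rank(np.asarray(normals), tol=tol) == 3

# Example: ground plus two perpendicular walls satisfy the condition.
print(normals_span_r3([[0, 0, 1], [1, 0, 0], [0, 1, 0]]))   # True
print(normals_span_r3([[0, 0, 1], [0, 0, -1], [0, 0, 1]]))  # False (all parallel)
```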
The initial poses of each LiDAR sensor can be defined by a human operator. We define the initial poses as modeling poses. The modeling pose refers to the user-defined extrinsic parameters between LiDAR sensors, which are mounted on a robot or a vehicle platform in different positions and orientations. It can be obtained via a drawing of the sensor mount or by measuring the relative distance and angles between sensors with a ruler and protractor. However, it is difficult to determine the modeling pose because not all detailed drawings of sensor mounts installed in a robotic or vehicular system are publicly available. Even if a detailed drawing is provided, installation errors or mechanical distortion of sensor mounts lead to inaccuracy in the modeling poses. Because the errors between the modeling poses and the actual poses are inevitable, this study aims to reduce these errors.
In this study, we set one of the sensors as the reference frame $R$ for calibration and calculate the relative poses between $R$ and the remaining sensors. If the coordinate frames of the remaining sensors to be calibrated are defined as source frames $S$, the goal of this research is to determine the rotation matrix $R_S^R \in SO(3)$ and the translation vector $t_S^R \in \mathbb{R}^3$.
The proposed method is one approach to pairwise calibration [20,21]. However, calibration between the source LiDAR sensors themselves is not considered because the sensors have the same modality: it is sufficient to take one sensor as the reference and determine the transformations of the remaining LiDAR sensors with respect to it.
Figure 2 presents an outline of the extrinsic calibration process. The transformation between $R$ and $S$ is calculated using plane features $\pi^R$ and $\pi^S$, which are detected by the reference and source LiDAR, respectively. The point cloud measured by each LiDAR sensor is segmented first, and the plane objects to be used for calibration are detected by a planarity filter that considers the variance of the points in the normal direction and the number of points in each segment. For sensors with different points of view, an initial estimation of the extrinsic parameters is performed by iterating correspondence detection and correspondence matching. The primary purpose of the initial estimation is to find the correct corresponding planes among the plane objects measured by each sensor. An iterative plane registration process for finding and matching the corresponding planes is performed in the initial estimation. Instead of the points themselves, the plane parameters (normal, centroid, and distance from the sensor origin) are used for plane registration. Subsequently, the extrinsic parameters are optimized using the point clouds of the corresponding plane set. The optimization process aims to refine the extrinsic parameters obtained by the initial estimation; they are jointly optimized to minimize the error between the measurement points and the planes.

4. Plane Feature Detection

In this section, we briefly describe the plane detection method. First, segmentation of the point clouds is performed. Several segmentation methods are suitable for detecting plane features. A 3D LiDAR provides a dense point cloud in small-scale environments because the LiDAR is not far from the surrounding structures. In this case, we use region growing segmentation [22], which is based on a smoothness constraint on neighboring points and divides the point cloud into smoothly connected regions. In contrast, in large-scale environments, a point cloud dense enough for region growing segmentation cannot be obtained from low-resolution 3D LiDAR sensors. In this case, the dominant planes can be detected by MLESAC [23]. We use MLESAC for ground plane detection because low-resolution 3D LiDAR sensors do not provide enough ground points, and we use the region growing method for the segmentation of off-ground points.
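As a concrete illustration of the ground extraction step, the sketch below uses the RANSAC-based segment_plane routine from Open3D as a stand-in for MLESAC; the file name and thresholds are placeholder assumptions, and the remaining off-ground cloud would then be passed to region growing segmentation.

```python
import open3d as o3d

# Load a single LiDAR scan (file name is a placeholder).
pcd = o3d.io.read_point_cloud("scan.pcd")

# RANSAC plane fit as a stand-in for MLESAC ground detection.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.05,
                                            ransac_n=3,
                                            num_iterations=1000)
a, b, c, d = plane_model                                   # plane: ax + by + cz + d = 0
ground = pcd.select_by_index(inlier_idx)
off_ground = pcd.select_by_index(inlier_idx, invert=True)  # input to region growing
```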
A planarity filter is applied to the segmented point clouds to detect plane features. The filter tests three attributes concerning the distribution of the scan points in a segment. Let $\mathbf{x}_i = (x_i, y_i, z_i)^T$ be a scan point of the segment and $\bar{\mathbf{x}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_i$ be the centroid of the segment. The covariance of the segment is then calculated as $C = \frac{1}{n-1} \sum_{i=1}^{n} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^T$. The eigenvectors of $C$ are the principal components of the segment, and the eigenvalues are the explained variances along the principal components. The attributes of the segment are computed using the sorted eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3 \geq 0$.
The point cloud of a planar structure has eigenvalues $\lambda_1, \lambda_2 \gg \lambda_3$. The first attribute of a segment is the planarity $P_\lambda = (\lambda_2 - \lambda_3)/\lambda_1$, as defined in [24,25]. The remaining attributes are the explained variance $\lambda_3$ and the number of points $n$. A segment considered as a plane feature should satisfy $P_\lambda \geq P_{\lambda,thr}$, $\lambda_3 \leq \lambda_{thr}$, and $n \geq N_{thr}$.
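A minimal numpy sketch of this planarity test is given below; the threshold values and the function name are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np

def is_plane_segment(points, p_lambda_thr=0.6, var_thr=1e-3, n_thr=100):
    """Planarity filter on one segment (points: N x 3 array).

    Thresholds are illustrative placeholders."""
    n = points.shape[0]
    if n < n_thr:
        return False
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)                    # 3 x 3 covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]       # lambda1 >= lambda2 >= lambda3
    l1, l2, l3 = eigvals
    planarity = (l2 - l3) / l1
    return planarity >= p_lambda_thr and l3 <= var_thr
```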

5. Extrinsic Calibration

5.1. Initial Transform Estimation through Plane Registration

The initial extrinsic parameters between 3D LiDAR sensors can be estimated by matching plane correspondences. The correspondences between two consecutive scans can easily be detected using the similarity of the plane parameters. However, if no sensor poses are given or if the modeling pose errors are large, it is difficult to find plane correspondences at once. Furthermore, if the sensors have considerably different points of view, it is difficult to obtain corresponding planes between the point clouds captured by each sensor. Because the LiDAR sensors are mounted on a robot or vehicle platform in different positions and orientations, even if the same wall is measured, different parts of the wall may be measured by each sensor. Moreover, the distance between each sensor and the plane object also differs. Therefore, the plane objects captured by each sensor can have different shapes, sizes, and poses. This situation is especially evident for a LiDAR sensor with low vertical resolution and a narrow vertical field of view. In essence, it is unknown which plane of the reference sensor corresponds to which plane of the source LiDAR sensor. The plane registration corresponds to lines 2–17 of Algorithm 1.
[Algorithm 1 (presented as an image in the original article).]
The proposed method iterates the process of finding correspondences and matching them to determine the extrinsic parameters from unknown correspondences. Several test measures are calculated from the plane parameters to find the correspondences. Let $\pi^R$ and $\pi^S$ be planes detected by the reference and the source LiDAR sensor, respectively. Their plane parameters are $[(\mathbf{n}^R)^T, d^R, (\bar{\mathbf{p}}^R)^T]$ and $[(\mathbf{n}^S)^T, d^S, (\bar{\mathbf{p}}^S)^T]$, which consist of the normal vector $\mathbf{n}$, the perpendicular distance $d$ from the origin of each sensor, and the centroid $\bar{\mathbf{p}}$ of the measured plane. Figure 3 illustrates the test measures calculated from the plane parameters. The test measures are the differences between the planes in the angle of the normal vectors $\epsilon_{ang}$, the distances $\epsilon_{euc}$, and the centroids $\epsilon_{cen}$, which are computed in lines 6–8 of Algorithm 1.
To detect corresponding planes, the similarity distance $D(\pi_i^R, \pi_j^S)$ between the target and query plane, which is the weighted sum of the test measures, is calculated for all possible pairs of planes based on a brute-force search (line 9). $w$ represents the weight of each test measure. Subsequently, the pair of planes with the smallest weighted sum is selected as a plane correspondence (line 12); $c_i$ denotes the corresponding plane of the i-th target plane. If the similarity distance is above a threshold value $D_{max}$, the pair of planes is rejected from the current iteration of the calibration process.
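The brute-force correspondence search can be sketched as follows, assuming the source plane parameters have already been expressed in the reference frame using the current transform estimate and are stored as numpy arrays; the weights and the rejection threshold are illustrative placeholders.

```python
import numpy as np

def find_correspondences(planes_ref, planes_src, w=(1.0, 1.0, 1.0), d_max=1.0):
    """Assign each reference plane its most similar source plane.

    planes_* : lists of dicts with keys 'n' (unit normal), 'd' (offset), 'p' (centroid),
    all stored as numpy arrays. Weights and d_max are illustrative assumptions."""
    matches = []
    for i, pr in enumerate(planes_ref):
        best_j, best_dist = None, np.inf
        for j, ps in enumerate(planes_src):
            eps_ang = np.arccos(np.clip(pr['n'] @ ps['n'], -1.0, 1.0))  # normal angle difference
            eps_euc = abs(pr['d'] - ps['d'])                            # offset difference
            eps_cen = np.linalg.norm(pr['p'] - ps['p'])                 # centroid difference
            dist = w[0] * eps_ang + w[1] * eps_euc + w[2] * eps_cen
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_dist <= d_max:          # reject pairs that are too dissimilar
            matches.append((i, best_j))
    return matches
```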
To calculate the relative transformation between LiDAR sensors, an initial estimation of the extrinsic parameters is performed using the plane parameters of the set of plane correspondences. Let us define the corresponding planes of the reference sensor and the source sensor as $\pi_i^R$ and $\pi_i^S$ $(i = 1, 2, \ldots, C)$, respectively. We then calculate the initial estimate of the transformation $[\tilde{R}_S^R, \tilde{t}_S^R]$. The geometric constraints of the corresponding planes to be matched are given by the following equations:
$R_S^R \mathbf{n}_i^S = \mathbf{n}_i^R$  (1)
$\mathbf{n}_i^R \cdot (R_S^R \bar{\mathbf{p}}_i^S + t_S^R) + d_i^R = 0$  (2)
Equation (1) is used to obtain the rotation matrix $R_S^R$. Since the plane parameters are noisy, $R_S^R$ is obtained via the least-squares solution in line 15. A closed-form solution of line 15 can be obtained using the Kabsch algorithm [26]. Let us define the cross-covariance of the set of normal vectors of the corresponding planes as $H = [\mathbf{n}_1^S, \ldots, \mathbf{n}_C^S][\mathbf{n}_1^R, \ldots, \mathbf{n}_C^R]^T$. If the singular value decomposition (SVD) of $H$ is $U \Sigma V^T$, the rotation matrix is calculated using the following equation:
$\tilde{R}_S^R = V \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \det(VU^T) \end{bmatrix} U^T$  (3)
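A compact numpy realization of this SVD step on the corresponding normals, in the spirit of Equation (3), could look like the following sketch.

```python
import numpy as np

def rotation_from_normals(normals_src, normals_ref):
    """Kabsch-style rotation aligning source plane normals to reference normals.

    normals_* : C x 3 arrays of corresponding unit normals (rows)."""
    H = normals_src.T @ normals_ref                        # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])     # guard against reflections
    return Vt.T @ D @ U.T                                  # R such that R @ n_src ≈ n_ref
```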
The translation vector $\tilde{t}_S^R$ can be obtained by stacking (2) over the correspondences, which yields the following linear system:
$N \tilde{t}_S^R = b, \quad N = \begin{bmatrix} (\mathbf{n}_1^R)^T \\ \vdots \\ (\mathbf{n}_C^R)^T \end{bmatrix}, \quad b = \begin{bmatrix} -(\mathbf{n}_1^R)^T \tilde{R}_S^R \bar{\mathbf{p}}_1^S - d_1^R \\ \vdots \\ -(\mathbf{n}_C^R)^T \tilde{R}_S^R \bar{\mathbf{p}}_C^S - d_C^R \end{bmatrix}$  (4)
The closed-form solution of the least-squares problem in (4) is given in line 16.
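The stacked linear system in (4) can then be solved with an ordinary least-squares routine; the sketch below assumes the plane parameters are stored as numpy arrays and follows the sign convention of (4).

```python
import numpy as np

def translation_from_planes(normals_ref, d_ref, centroids_src, R):
    """Solve N t = b for the translation, in the style of Equation (4).

    normals_ref   : C x 3 reference plane normals
    d_ref         : length-C array of reference plane offsets
    centroids_src : C x 3 centroids of the source planes
    R             : 3 x 3 rotation estimated from the normals"""
    N = normals_ref                                                    # C x 3
    b = -np.einsum('ij,ij->i', normals_ref, centroids_src @ R.T) - d_ref
    t, *_ = np.linalg.lstsq(N, b, rcond=None)
    return t
```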
In (3), the rank of the cross-covariance matrix $H$ should be three in order to obtain a unique solution for the rotation matrix. Similarly, the stacked normal matrix $N$ in (4) should have full column rank, i.e., rank three, to obtain a unique solution for the translation. Consequently, at least three non-parallel planes are required to calculate the relative pose. The plane registration process is repeated until the sum of the similarity distances converges; the initial estimation is therefore completed when the weighted sum $D_k = \sum_{i=1}^{C} D(\pi_i^R, \pi_i^S)$ in the k-th iteration satisfies $|D_{k-1} - D_k| < \epsilon$.

5.2. Optimization of Calibration Parameters

After the plane registration (lines 2–17 of Algorithm 1), the relative transformation between the sensors is optimized using the scan points on the planes. The set of parameters $\Theta = [t_x, t_y, t_z, \phi, \theta, \psi]$, which comprises the translation vector and the Euler angles, is jointly estimated in the optimization process. The distances from the points on a source plane to its corresponding reference plane should be minimized. The optimized transformation $\hat{R}_S^R$ and $\hat{t}_S^R$ can be obtained by minimizing the following cost function:
$(\hat{R}_S^R, \hat{t}_S^R) = \underset{R, t}{\operatorname{argmin}} \sum_{i=1}^{C} w_i E_i(\tilde{R}, \tilde{t}, \pi_i^R, \pi_i^S), \quad E_i(\tilde{R}, \tilde{t}, \pi_i^R, \pi_i^S) = \sum_{\mathbf{p} \in \pi_i^S} \left\| \mathbf{n}_i^R \cdot (\tilde{R}_S^R \mathbf{p}^S + \tilde{t}_S^R) + d_i^R \right\|^2,$  (5)
where $C$ is the number of plane correspondences, and $w_i$ is the weight reflecting the uncertainty of the i-th plane correspondence.
The weight of the correspondence is computed as the inverse of the variance along the normal direction: $w_i = 1/(\sigma_i^S)^2$. In this study, we applied the Levenberg–Marquardt (LM) algorithm [27] to solve the nonlinear least-squares problem. The purpose of the LM method is to iteratively estimate the calibration parameters $\Theta$ that minimize the cost function. Let us define the residual $e_i(\Theta)$ satisfying $E_i(\tilde{R}, \tilde{t}, \pi_i^R, \pi_i^S) = \sum_{\mathbf{p} \in \pi_i^S} \| e_i(\Theta) \|^2$. The Jacobian matrix for the parameter estimation is given as follows:
$J_i = \dfrac{\partial e_i}{\partial \Theta}, \quad e_i = \dfrac{1}{\sigma_i^S} \left( \mathbf{n}_i^R \cdot (R_S^R \mathbf{p}^S + t_S^R) + d_i^R \right)$  (6)
The number of rows of the Jacobian matrix $J$ is $\sum_{i=1}^{M} N_i$, and its column dimension is six, which equals the number of parameters. We set the transformation obtained in Section 5.1 as the initial value of the optimization process, and the parameters are updated in each LM iteration according to the following equation:
$\Theta_{k+1} = \Theta_k - \left( J^T J + \lambda\, \operatorname{diag}(J^T J) \right)^{-1} J^T e$  (7)
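One possible realization of this refinement step hands the weighted point-to-plane residuals to an off-the-shelf LM solver; the sketch below uses scipy.optimize.least_squares, and the 'xyz' Euler convention and the data layout are assumptions rather than the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine(theta0, correspondences):
    """LM refinement of Theta = [tx, ty, tz, phi, theta, psi].

    correspondences: list of (points_src, n_ref, d_ref, sigma) tuples, where
    points_src is an M x 3 array of source points on one corresponding plane."""
    def residuals(theta):
        t, angles = theta[:3], theta[3:]
        R = Rotation.from_euler('xyz', angles).as_matrix()  # Euler order is an assumption
        res = []
        for pts, n, d, sigma in correspondences:
            # weighted point-to-plane distances: (n . (R p + t) + d) / sigma
            res.append((pts @ (R.T @ n) + t @ n + d) / sigma)
        return np.concatenate(res)

    sol = least_squares(residuals, theta0, method='lm')      # Levenberg-Marquardt
    return sol.x
```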

6. Experiments

6.1. Experimental Setup

To validate our calibration approach, we conducted experiments using a robotic and a vehicular system. The systems were equipped with multiple low-resolution 3D LiDAR sensors (Velodyne VLP-16). Two LiDAR sensors were mounted on the robotic platform, as shown in Figure 4a. One was installed horizontally at a height of 1.9 m, and the other was tilted about 22.5° toward the ground at a height of 1.4 m. We set the horizontally mounted LiDAR sensor as the reference frame $R$, and the tilted LiDAR sensor was considered the source frame $S$. The calibration of the relative transformation between $R$ and $S$ was performed in the environment shown on the right side of Figure 4a.
Figure 4b shows the vehicular system and the experimental environment. Similar to the robotic system, the vehicle was equipped with three LiDAR sensors to minimize blind spots; one sensor was attached to each of the top, front, and rear of the vehicle. We set the sensor on the top of the vehicle as the reference frame $R$. The front and rear sensors were set as source frames $S_1$ and $S_2$, respectively. The extrinsic calibration of the vehicular system was performed in an underground parking lot. The plane objects at the front and the rear of the vehicle were used as calibration objects.

6.2. Extrinsic Calibration of Robotics System

For the first step of the extrinsic calibration, we detected the plane features in the target environment and found the correspondences. Figure 5 shows the process of plane correspondence detection. Figure 5a shows the raw point cloud collected from each LiDAR sensor. The ground plane was extracted using MLESAC on account of the sparse point cloud on the ground. Then, we detected the other plane features using region growing segmentation. For the segmentation of the off-ground point cloud, we computed the normal vector of each scan point using its neighboring points. In this study, we set the number of neighboring points used to calculate each point normal to 30. The 30 nearest neighbors of the current seed point were then used to group the points belonging to a smooth surface. We set the thresholds for the smoothness angle and the residual to 10.0° and 10.0 m⁻¹, respectively.
The planarity filter stated in Section 4 was applied to the segmented point clouds. Figure 5b shows the plane features detected by using the planarity filters. The upper figure is the result from the reference sensor, and the lower figure is the result from the source sensor. The colored points represent segments that correspond to the detected planes, and parts other than the plane are expressed in gray.
Because we did not know the correspondences between the planes detected by each sensor, the iterative process of finding and matching correspondences was performed to detect the plane correspondences and simultaneously compute the initial transform $[\tilde{R}_S^R, \tilde{t}_S^R]$. Figure 5c shows the result of deriving the correspondences from the plane segments in Figure 5b. Each plane correspondence is expressed in the same color. There were a total of nine plane correspondences in the target environment.
Figure 6 shows a process of plane registration in a challenging scenario where the initial modeling pose is significantly different from the ground truth. The shape of the point cloud with respect to the ground (orange color) is different for each LiDAR sensor. This is because the reference LiDAR is mounted facing forward, while the source LiDAR is tilted toward the ground. Different plane shapes and errors in modeling poses make it challenging to find a corresponding plane. In this scenario, the point clouds are registered while finding the correct corresponding plane after several iterations by the proposed plane matching algorithm.
Figure 7a,c show the raw point clouds of the target environment before the extrinsic calibration. The blue points represent the point cloud measured by the reference LiDAR sensor, and the red points are from the tilted LiDAR sensor. The green bounding box in Figure 7a corresponds to the environment in Figure 4a. If the transformation between sensors were correct, the blue and red points for planes $\pi_1$–$\pi_4$ would be matched. However, it can be seen that the two point clouds are not aligned. Figure 7c shows the side view of the point clouds, with the ground plane in each point cloud expressed as a dashed line. It can be seen that the point cloud from the tilted LiDAR sensor is inclined by about 22° compared to the point cloud of the reference sensor.
The optimized transform $[\hat{R}_S^R, \hat{t}_S^R]$ is calculated through the LM iteration. The calibration parameters obtained from the initial estimation process were set as the initial values of the optimization process. The resultant transformation was applied to the point cloud of the tilted sensor. The results of the extrinsic calibration are shown in Figure 7b,d, which are the top view and the side view of the point cloud, respectively. In the magnified view of Figure 7b, the blue and red point clouds that correspond to planes $\pi_1$–$\pi_4$ were perfectly matched. Furthermore, the ground surfaces were also aligned with each other, as shown in Figure 7d. Therefore, it was verified that the extrinsic parameters between the two LiDAR sensors were successfully calibrated using the proposed method.

6.3. Extrinsic Calibration of Vehicular System

The modeling poses of the LiDAR sensors were not set, except for a 180° rotation about the z-axis for the rear sensor. The point clouds before calibration are shown in Figure 8a. The blue, red, and green points represent the point clouds acquired from the top, front, and rear sensors, respectively. The upper panel of Figure 8a shows a top view of the point clouds. It was observed that the discrepancies in the point clouds were evident on planes $\pi_\beta$–$\pi_\delta$; the front and rear point clouds were rotated about the z-axis by approximately 2.5°. The middle and lower panels of Figure 8a show side views of the point clouds. The dashed lines represent the ground planes, which correspond to plane $\pi_\alpha$ in Figure 4b. At the center of the reference sensor, the distance between the ground planes of the reference and front sensors was approximately 1.3 m, and the angle difference was about 2.2°. Similarly, for the ground plane of the rear sensor, the differences from the ground plane of the reference sensor were 1.5 m in height and 2.5° in angle.
Extrinsic calibration of the vehicular system was also performed using the proposed method. The entire process was the same as the calibration process for the robotic system. For the plane feature detection, the parameters applied to the segmentation methods and the planarity filter were the same as those for the robotic system. The point clouds transformed by the determined extrinsic parameters are shown in Figure 8b. In the top view, the point clouds that correspond to the side walls $\pi_\beta$–$\pi_\delta$ were perfectly aligned. Furthermore, the ground planes were also matched with each other, as shown in the middle and lower panels of Figure 8b. Therefore, it was verified that the extrinsic parameters of the LiDAR sensors were successfully calibrated using the proposed method.

6.4. Evaluation

To evaluate our calibration approach, we compared the proposed method with ICP [1] and its variants [2,3,4] and with NDT [5]. One of the contributions of this paper is the plane registration technique, which allows the entire calibration process to be automated without human intervention. Conventional work on plane-based extrinsic calibration requires a process in which a user manually assigns corresponding plane pairs [28]; therefore, it was not included in the comparison. There are other conventional works that automatically calibrate extrinsic parameters using GICP [18] and NDT [19]. GICP uses plane-to-plane features based on point normal information for matching. The four ICP variants considered for the evaluation were point-to-point ICP (pt-ICP) [1], point-to-plane ICP (pl-ICP) [2], generalized ICP (GICP) [3,18], and normal ICP (NICP) [4]. Since NICP uses the normal, curvature, and projection distance of a local plane, it can be regarded as a method using plane features. When a raw point cloud was applied to the registration algorithms, alignment often failed due to clutter in the point clouds. In this study, the point clouds of the plane segments detected in Section 4 were therefore applied to the registration algorithms. The point cloud of each sensor was downsampled using a grid size of 0.1 m, and the NDT grid size was set to 2.0 m.
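For reference, a baseline comparison of this kind can be reproduced with an off-the-shelf registration library; the following sketch runs point-to-plane ICP on downsampled plane segments with Open3D, where the file names and the correspondence distance are placeholders rather than the authors' exact settings.

```python
import open3d as o3d

# Sketch of one baseline: point-to-plane ICP on the detected plane segments,
# downsampled to a 0.1 m grid (file names and the 1.0 m threshold are placeholders).
src = o3d.io.read_point_cloud("source_planes.pcd").voxel_down_sample(0.1)
ref = o3d.io.read_point_cloud("reference_planes.pcd").voxel_down_sample(0.1)
for pcd in (src, ref):
    pcd.estimate_normals()  # point-to-plane ICP needs target normals

result = o3d.pipelines.registration.registration_icp(
    src, ref, max_correspondence_distance=1.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)   # 4 x 4 extrinsic estimate from the baseline
```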
Table 1 shows the results for the quantitative evaluation. The quantitative measure for the evaluation was the root mean square error (RMSE) between the plane model and the scan points for all plane correspondences.
The RMSE for the k-th corresponding plane pair is calculated as follows:
$\mathrm{RMSE}_k = \sqrt{ \dfrac{1}{N} \sum_{i=1}^{N} \left( \dfrac{ \mathbf{n}_k^R \cdot \mathbf{p}_i + d_k^R }{ \| \mathbf{n}_k^R \| } \right)^2 }$  (8)
where $\mathbf{n}_k^R$ is the normal vector of the reference plane, $d_k^R$ is the distance from the origin to the reference plane, $\mathbf{p}_i$ is the i-th measurement point belonging to the k-th corresponding plane pair, and $N$ is the total number of measurement points of the pair of corresponding planes.
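Once the calibrated transform has been applied to the source points, the RMSE in (8) reduces to a few lines; a minimal sketch:

```python
import numpy as np

def plane_rmse(points, n_ref, d_ref):
    """RMSE of point-to-plane distances for one corresponding plane pair.

    points : N x 3 array of calibrated points from both sensors
    n_ref, d_ref : reference plane normal and offset"""
    dist = (points @ n_ref + d_ref) / np.linalg.norm(n_ref)
    return np.sqrt(np.mean(dist ** 2))
```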
For all methods, the RMSE was calculated for three cases: the ground plane, off-ground planes, and all planes. The lower value of the RMSE represents a more accurate extrinsic calibration result.
The evaluation result for the robotic system is shown in the row labeled (a) in Table 1. The proposed method showed the lowest RMSE values for all three cases. In particular, the RMSE value for the ground showed a noticeable difference compared with the other registration methods, except for GICP. The RMSE value of all planes measured by the reference LiDAR alone was about 0.0153 m, and the proposed method showed results similar to the planes measured using a single sensor. Among the conventional methods, the RMSE values of GICP were the closest to those of the proposed method; however, the proposed method showed slightly better performance. Although pt-ICP and 3D NDT showed RMSE values for the off-ground planes close to those of the proposed method, their RMSE values for the ground plane were more than twice those of the proposed method. These results occur because the registration accuracy is low owing to the insufficient number of corresponding points on the ground plane. Consequently, their RMSE values for all planes were also twice as large as those of the proposed method. The RMSE values of pl-ICP were more than three times those of the proposed method in all cases.
The evaluation results for the vehicular system are shown in the rows labeled (b) and (c) in Table 1. The proposed method also showed the lowest RMSE values for all scenarios. The RMSE value of all planes measured by the reference LiDAR alone was about 0.0254 m. As with the robotic system, the overall RMSE value of the proposed method, 0.0290 m, is close to that of the reference sensor. GICP showed better performance than the other registration methods; however, in the R to $S_2$ calibration, its RMSE value for the ground was 0.0761 m, four times greater than that of the proposed method. This is because there was little overlap between the reference sensor and the rear sensor in the ground data. The ground RMSE values of pt-ICP, pl-ICP, and NICP were above 0.1 m, which implies that they practically failed in registration.
NICP and NDT failed to match the two sets of point clouds from the vehicular system when no initial translation was given. We therefore additionally set initial translations for the source LiDAR sensors: 2 m for the $S_1$ LiDAR and −2 m for the $S_2$ LiDAR along the x-axis of the reference coordinate system. Even with this initial translation, the RMSE values for the ground planes were more than 0.2 m.
The conventional registration methods showed commonly low accuracy in ground plane matching. The reason is that these methods require a large number of corresponding points in overlapping regions. In the experiments, the VLP-16 provided a sparse point cloud in a single scan. Furthermore, there were only small overlapping regions on the ground surfaces because the points of view of the LiDAR sensors were quite different. Therefore, the number of point correspondences on the ground was smaller than that off the ground. Consequently, the ICP variants and NDT did not achieve convergence on the ground planes. In contrast, the proposed method was robust to sensor configurations in which the points of view of the sensors were quite different, because it utilized the plane models and the variances of the plane correspondences.

7. Conclusions

In this paper, we have presented a method for extrinsic calibration of multiple 3D LiDAR sensors using planar features. The only requirement is at least three independent planes, which can easily be found in surrounding environments. Therefore, the proposed method does not require any environmental modifications. A plane registration method is presented to deal with cases in which the viewpoints of the LiDARs are significantly different. The proposed method iterates the process of assigning correspondences and aligning them to estimate the initial transformation. The uncertainties of the planes are also considered in the optimization process to lower the effect of noisy planes.
We conducted several experiments on a robotic and a vehicular system to validate our calibration algorithm. For evaluation, we compared the proposed method with ICP variants and 3D NDT. The proposed method showed better performance than the conventional registration methods in terms of accuracy. Therefore, we have demonstrated that the proposed method achieves robust performance using sensors with different points of view.

Author Contributions

Conceptualization, H.L. and W.C.; methodology, H.L.; software, H.L.; validation, H.L.; formal analysis, H.L.; investigation, H.L.; resources, W.C.; data curation, H.L.; writing—original draft preparation, H.L.; writing—review and editing, W.C.; visualization, H.L.; supervision, W.C.; project administration, W.C.; funding acquisition, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Industry Core Technology Development Project (20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space) funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea); by the National Research Foundation of Korea (NRF) grant funded by MSIP (NRF-2021R1A2C2007908); and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2022-0-01010, Development of an intelligent manufacturing logistics system based on autonomous transport mobile manipulation).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Besl, P.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
2. Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. In Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9–11 April 1991; pp. 2724–2729.
3. Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. In Proceedings of the Robotics: Science and Systems, Seattle, WA, USA, 28 June–1 July 2009; Volume 2.
4. Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 742–749.
5. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 2743–2748.
6. Underwood, J.; Hill, A.; Scheding, S. Calibration of range sensor pose on mobile platforms. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3866–3871.
7. Gao, C.; Spletzer, J.R. On-line calibration of multiple LIDARs on a mobile vehicle platform. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 279–284.
8. Pusztai, Z.; Eichhardt, I.; Hajder, L. Accurate calibration of multi-lidar-multi-camera systems. Sensors 2018, 18, 2139.
9. Taylor, Z.; Nieto, J. Motion-based calibration of multimodal sensor arrays. In Proceedings of the IEEE International Conference on Robotics and Automation 2015, Seattle, WA, USA, 26–30 May 2015; pp. 4843–4850.
10. Ishikawa, R.; Oishi, T.; Ikeuchi, K. LiDAR and camera calibration using motions estimated by sensor fusion odometry. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 7342–7349.
11. Napier, A.; Corke, P.; Newman, P. Cross-calibration of push-broom 2D LiDARs and cameras in natural scenes. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3679–3684.
12. Choi, D.G.; Bok, Y.; Kim, J.S.; Kweon, I.S. Extrinsic calibration of 2-D LiDARs using two orthogonal planes. IEEE Trans. Robot. 2016, 32, 83–98.
13. Fernández-Moral, E.; González-Jiménez, J.; Arévalo, V. Extrinsic calibration of 2D laser rangefinders from perpendicular plane observations. Int. J. Robot. Res. 2015, 34, 1401–1417.
14. Fernández-Moral, E.; Arévalo, V.; González-Jiménez, J. Extrinsic calibration of a set of 2D laser rangefinders. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 2098–2104.
15. Levinson, J.; Thrun, S. Unsupervised calibration for multi-beam lasers. In Proceedings of the International Symposium on Experimental Robotics, New Delhi, India, 18–21 December 2010; pp. 179–193.
16. Sheehan, M.; Harrison, A.; Newman, P. Self-calibration for a 3D laser. Int. J. Robot. Res. 2012, 31, 675–687.
17. Maddern, W.; Harrison, A.; Newman, P. Lost in translation (and rotation): Rapid extrinsic calibration for 2D and 3D LiDARs. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3096–3102.
18. Jeong, J.; Cho, Y.; Shin, Y.S.; Roh, H.; Kim, A. Complex urban dataset with multi-level sensors from highly diverse urban environments. Int. J. Robot. Res. 2019, 38, 642–657.
19. Kato, S.; Tokunaga, S.; Maruyama, Y.; Maeda, S.; Hirabayashi, M.; Kitsukawa, Y.; Monrroy, A.; Ando, T.; Fujii, Y.; Azumi, T. Autoware on board: Enabling Autonomous vehicles with embedded systems. In Proceedings of the ACM/IEEE International Conference on Cyber-Physical Systems, Porto, Portugal, 11–13 April 2018; pp. 287–296.
20. Santos, V.; Rato, D.; Dias, P.; Oliveira, M. Multi-sensor extrinsic calibration using an extended set of pairwise geometric transformations. Sensors 2020, 20, 6717.
21. He, M.; Zhao, H.; Davoine, F.; Cui, J.; Zha, H. Pairwise LIDAR Calibration Using Multi-Type 3D Geometric Features in Natural Scene. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 1828–1835.
22. Rabbani, T.; van den Heuvel, F.; Vosselman, G. Segmentation of point clouds using smoothness constraints. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
23. Torr, P.H.; Zisserman, A. MLESAC: A new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
24. Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 86–91.
25. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3D LiDAR point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 97–102.
26. Kabsch, W. A solution for the best rotation to relate two sets of vectors. Acta Crystallogr. Sect. A 1976, 32, 922–923.
27. Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441.
28. Jiao, J.; Liao, Q.; Zhu, Y.; Liu, T.; Yu, Y.; Fan, R.; Wang, L.; Liu, M. A novel dual-LiDAR calibration algorithm using planar surfaces. In Proceedings of the IEEE Intelligent Vehicles Symposium, Paris, France, 9–12 June 2019; Volume 2019, pp. 1499–1504.
Figure 1. Problem statement for the proposed method. The aim of this research is to find extrinsic parameters that relate to the transformation between reference and source LiDAR sensors. The proposed method requires at least three non-parallel planes within the common field of view of the LiDAR sensors.
Figure 2. An outline of the proposed method. The calibration first performs a plane feature detection process for segmenting a point cloud and detecting plane objects. After that, an initial estimation of extrinsic parameters is performed through an iterative plane registration. The primary purpose of initial estimation is to find the right corresponding planes. Finally, the extrinsic parameters are refined through optimization using the measurement points within the corresponding planes.
Figure 3. The test measures for assigning plane correspondences using the plane parameters.
Figure 4. Experimental setup: (a) a robotic system; (b) a vehicular system.
Figure 5. Process of plane correspondence detection. Each row corresponds to the reference and source LiDAR sensors. Each column corresponds to (a) raw measurements, (b) plane object detection, and (c) correspondence detection.
Figure 6. Process of plane registration in a challenging scenario where the initial modeling pose is significantly different from the ground truth. Each pair of corresponding planes is marked with the same color and connected by a white line.
Figure 7. Point clouds from the reference (blue dots) and source (red dots) LiDAR sensors of the robotic system. Top view of the point cloud (a) before and (b) after calibration. The magnified areas of the top views represent the environment shown in Figure 4a. Side view of the point cloud (c) before and (d) after calibration. The dashed lines in the side views correspond to the ground planes.
Figure 8. Point clouds from the reference (blue dots), front (red dots), and rear (green dots) LiDAR sensors of the vehicular system. (a) The point clouds before calibration; (b) the point clouds after calibration. Each row of the figures corresponds to the top view (upper), side view of the front (middle), and side view of the rear (lower), respectively. The dashed lines in the side views correspond to the ground planes.
Table 1. Root mean square errors of the plane correspondences. (a) A robotic system, (b) vehicular system (R to S 1 ), (c) vehicular system (R to S 2 ).
Scene (unit: m)     Proposed   pt-ICP    pl-ICP    GICP      NICP      NDT
(a)  ground         0.0144     0.0497    0.0564    0.0161    0.0358    0.0337
     non-ground     0.0175     0.0200    0.0587    0.0175    0.0415    0.0175
     overall        0.0159     0.0391    0.0574    0.0167    0.0385    0.0275
(b)  ground         0.0150     0.2341    0.1015    0.0181    0.2377    0.3328
     non-ground     0.0381     0.0853    0.0512    0.0397    0.1168    0.0564
     overall        0.0290     0.1766    0.0806    0.0308    0.1876    0.2394
(c)  ground         0.0173     0.3308    0.1910    0.0761    0.6734    0.6435
     non-ground     0.0362     0.0570    0.0400    0.0371    0.1433    0.0527
     overall        0.0286     0.2344    0.1363    0.0594    0.4748    0.4453
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Lee, H.; Chung, W. Extrinsic Calibration of Multiple 3D LiDAR Sensors by the Use of Planar Objects. Sensors 2022, 22, 7234. https://doi.org/10.3390/s22197234
