Article

Extrinsic LiDAR/Ground Calibration Method Using 3D Geometrical Plane-Based Estimation

1 Laboratoire d'Informatique Signal et Image de la Côte d'Opale (LISIC), Université du Littoral Côte d'Opale, 59183 Dunkerque, France
2 Remote Sensing Research Center, National Council of Scientific Research (CNRS-L), Mansouriyeh 22411, Lebanon
3 Department of Physics and Electronics, Faculty of Science, Lebanese University, Hadath 11-8281, Lebanon
* Author to whom correspondence should be addressed.
Sensors 2020, 20(10), 2841; https://doi.org/10.3390/s20102841
Submission received: 31 March 2020 / Revised: 11 May 2020 / Accepted: 12 May 2020 / Published: 16 May 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

This paper details a new extrinsic calibration method for scanning laser rangefinders that is precisely focused on geometrical ground plane-based estimation. The method remains efficient in the challenging experimental configuration of a high LiDAR angle of inclination. In this configuration, the calibration of the LiDAR sensor is a key problem that arises in various domains, in particular to guarantee the efficiency of ground surface object detection. The proposed extrinsic calibration method can be summarized by the following steps: ground plane fitting, extrinsic parameter estimation (3D orientation angles and altitude), and extrinsic parameter optimization. The results are presented in terms of precision and robustness against the variation of the LiDAR's orientation and range accuracy, respectively, showing the stability and accuracy of the proposed extrinsic calibration method, which was validated through numerical simulation and real data.

1. Introduction

1.1. Overview

With the evolution of technology, 3D intelligent sensors have opened new challenges in signal processing, notably through their outstanding acquisition performance even in rough environments. In road network maintenance and transportation safety, a central task is detecting and locating road distortions (cracking, patching, potholes, rutting, shoving, etc.). The literature review in [1] presents different automated detection experiments and the extensive research conducted on pavement distress in recent years. This work shows the importance and remarkable progress of 3D sensors compared with other sensors, especially the laser profiler, which is characterized by its high-precision measurement capability, high spatial resolution, and acquisition flexibility.
In a related context, road defects pose a serious danger to traffic and can lead to accidents: some traffic accidents result from the presence of small obstacles or surface damage on the road, one of the major problems that drivers face daily. The key problem in this research concerns the characterization of the road surface by detecting, localizing, and tracking potentially dangerous areas and road defects using a 3D LiDAR sensor. Although 2D LiDAR sensors can also provide 3D data, they require an additional instrument in the form of a tilt unit.
Various promising applications relying on LiDAR sensors have been developed in different fields: intelligent transportation systems, mobile robotics, and connected vehicles. LiDAR is thus a fundamental sensor contributing to multi-vehicle tracking [2], simultaneous localization and mapping [3,4], road and road-boundary detection [5,6], autonomous vehicles [7,8], recognition [9,10], and 3D reconstruction [11,12]. Almost all of these applications have appeared in worldwide competitions such as the DARPA Urban and Grand Challenges [13,14,15,16,17].
LiDAR sensor operation fundamentally relies on a calibration process, which improves defect detection and subsequent processing. Two types of calibration exist: intrinsic and extrinsic. Intrinsic calibration models the relationship between beam generation and measurement of the environment in order to estimate the sensor's internal parameters. Extrinsic calibration, on the other hand, determines the relationship between the sensor frame and the world reference frame through a rotation and translation transformation.
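For concreteness, the extrinsic relationship can be written as p_world = R p_lidar + t. The following minimal Python sketch (our own illustration, not code from the paper) applies such a transformation to a point cloud:

```python
import numpy as np

def to_world_frame(points_lidar: np.ndarray, R: np.ndarray, t) -> np.ndarray:
    """Apply an extrinsic calibration to an (N, 3) cloud: p_world = R @ p_lidar + t.

    R is the 3x3 sensor-to-world rotation and t the 3-vector sensor position.
    """
    return points_lidar @ R.T + np.asarray(t)
```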

1.2. Related Works

Numerous authors have investigated intrinsic and extrinsic calibration methods for LiDAR sensors. An intrinsic calibration technique is presented in [18], where the calibration process is based on an optimization method and the calibration pattern is a wide planar wall on a flat surface scanned with a Velodyne HDL-64E. Glennie and Lichti [19] presented a static calibration technique that derives an optimal solution for the laser's intrinsic calibration parameters by planar feature-based least squares over a minimally constrained network. The study in [20] shows a correlation between the internal operating temperature of the LiDAR and the laser scanner ranging error (an intrinsic parameter); its calibration process uses a planar calibration approach to estimate the internal parameters of the Velodyne VLP-16.
On the other hand, an extrinsic calibration technique is presented in [21]. A flat plane is used for the calibration, and an algorithm is proposed that exploits the inequality, in azimuth, of two rays symmetric with respect to the origin; this inequality is due to the shift angle of the center line. Another extrinsic calibration technique is presented in [22], where the authors worked on a 2D laser scanner mounted on a rotating platform and extracted the rotation axis and radius using a point-plane constraint. In both of these extrinsic methods, the Levenberg–Marquardt optimization method is applied to solve the resulting non-linear least squares problem.
In [23], a numerical algorithm is presented that computes both the intrinsic and extrinsic parameters by minimizing the systematic errors due to the geometric calibration factors. Another approach, introduced in [24], computes the intrinsic and extrinsic parameters of a LiDAR sensor (Velodyne HDL-64E) by unsupervised calibration of each of the multiple laser beams; an optimization function minimizing a point-to-plane iterated closest point criterion is then proposed.

1.3. Proposed Method

In transport applications, many articles use LiDAR to detect and track objects of interest (vehicles, pedestrians, etc.) from 3D measurements. The LiDAR sensor is also used to detect the road, often in addition to camera sensors. In these applications, the idea is to have a thorough view of the driver's environment over the widest possible horizon, which therefore involves a LiDAR sensor with a low angle of inclination (horizontally oriented sensor).
This paper proposes a calibration method (and road plane estimation) that works under difficult experimental conditions (high angle of inclination). Indeed, we aim at developing a calibration method that precisely determines the road plane in the very close vicinity of the vehicle. The long-term idea is to detect road defects while driving on the road network. Although developed with this goal in mind (i.e., a high degree of accuracy in determining the road plane), our method is general enough to be applicable in any wider operational context.
In the context of this study (road defect detection), the LiDAR sensor is rotated toward the ground to increase the density of points covering the defects with the multiple elevation lasers. This considerably modifies the 3D view of the ground scene with respect to the LiDAR frame. Therefore, extrinsic calibration is adopted to transform the LiDAR frame into a global reference frame, turning the ground impact points back into an understandable view of the scene.
To attain the above key objective, a first method was applied by Zaiter et al. [25] on simulation data, which was restricted to the estimation of the Euler angles. In that conference paper, we proposed a first approach to the extrinsic calibration of LRF sensors; it was developed for the Velodyne VLP-16 LiDAR, and the theoretical approach was evaluated on some simulation results.
This paper addresses a new, more flexible extrinsic calibration method compared with previous plane-based methods. The developed approach is generalized to all types of scanning laser rangefinders and now provides an optimized estimation of all extrinsic calibration parameters (angles and height). This global method can be implemented on different LiDAR sensors (low-cost 3D and full 3D) with various range accuracies. In addition, the proposed technique performs well in high-orientation scenarios, a very interesting and challenging configuration that aims to increase the density of ground points. The proposed calibration method can be summarized by the following two-fold contributions: (1) ground plane model estimation; and (2) estimation of the rotation transformation matrix from the world ground reference to the LiDAR sensor frame. The 3D Euler angles (sensor orientation) and the height (sensor altitude above the ground) are the essential extrinsic parameters required to calibrate full 3D LiDAR sensors, in order to improve the capability of road defect detection, as explained in Section 2.1. In addition, the problem is modeled by a 4-DOF (degrees of freedom) transformation, namely 3-DOF rotation and 1-DOF height, instead of a 6-DOF transformation (3-DOF rotation and 3-DOF translation). This modeling choice simplifies the optimization of the extrinsic parameters.
The structure of this paper is as follows. Section 2 presents the correlation between extrinsic parameters and the geometrical pattern reflection on the ground, the rotated multi-laser beams projection modeling on the ground and the associated measurement errors on the 3D points cloud position. Section 3 presents in detail the different steps of the proposed LiDAR/Ground Calibration Method. Then, the method is evaluated on LiDAR’s synthetic and real data in Section 4.

2. LiDAR/Ground Geometrical Impact Modeling

The synthetic data are generated according to the features of a multi-laser rangefinder (3D LiDAR sensor), where the environment impact points are modeled as the intersection between the LiDAR laser beams and the surrounding surfaces. In this work, the LiDAR sensor must be oriented toward the ground to study road defects. Therefore, the LiDAR laser beams are represented as straight lines and the ground surface as a flat plane in a 3D frame.
Depending on the application situation, two concepts can represent the geometrical reflection model between the LiDAR sensor and the ground surface, as shown in Figure 1:
  • Practical orientation concept: The LiDAR laser beams $(d)$ are supposed to be rotated and the ground's real plane $(P_{re})$ is a fixed horizontal plane, as shown in Figure 1b.
  • Scientific orientation concept: The LiDAR laser beams $(d)$ are supposed to be fixed, and the virtual horizontal ground surface $(P_H)$ must be rotated by the inverse of the LiDAR's orientation in the practical concept to obtain the real oblique ground plane $(P_{re})$ in the LiDAR frame, as shown in Figure 1c.

2.1. Extrinsic Parameters vs. Practical Concept

The four extrinsic parameters (altitude and orientation angles) affect the research goals, the orientation being the strongest influence factor on the distribution of ground points, as shown in Figure 2a,b. The proposed calibration method must satisfy two contradictory conditions related to the final research objectives:
  • Goal: The plane-based extrinsic calibration needs a large sparsity area to improve the plane estimation, which requires a high altitude and low orientation angles.
  • Constraint: The final objective of road surface object detection needs high-density points to improve the defect coverage, which requires a low altitude and high orientation angles.
Therefore, a trade-off is needed to optimize the extrinsic parameters (altitude and orientation angles), providing a suitable distribution of coverage points over the ground.
Four geometric view patterns are summarized in three cases depending on the variation of the pitch angle $\phi_x$ with respect to the LiDAR's vertical field of view ($vFOV$), as shown in Figure 2: (1) circle patterns (Figure 2c); (2) a combination of ellipse, parabola, and hyperbola patterns (Figure 2d); and (3) hyperbola patterns (Figure 2e).
The study in this paper focuses on the hyperbola case, as shown in Figure 2e, in order to increase the point density on the ground.

2.2. LiDAR Laser Beams and Oblique Ground Surface Intersection

In the following, the scientific orientation concept is chosen to model the reflections of the laser beams $(d)$ on the real oblique ground surface $(P_{re})$. The LiDAR is therefore supposed to be fixed, and the parametric equations of the fixed straight lines $(d)$ in the LiDAR frame are given by:

$$(d):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} t\tan\alpha \\ t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix} \quad \text{for } 0°<\alpha<90° \text{ or } 270°<\alpha<360°,\ -90°<\beta<90°,\ t\geq 0$$

$$(d):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -t\tan\alpha \\ -t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix} \quad \text{for } 90°<\alpha<270°,\ -90°<\beta<90°,\ t\geq 0$$

where $\alpha$ and $\beta$ describe the azimuth and elevation angles of each laser beam and $t$ is the parameter of the parametric representation.
The virtual horizontal ground plane $(P_H)$ must be rotated by the 3D Euler angles $\psi_z, \theta_y, \phi_x$, so that the equation of the rotated real ground plane $(P_{re})$ is expressed as a function of the horizontal ground plane $(P_H)$ at height $h$ and the rotation matrix $R_{z,y,x}(\psi_z,\theta_y,\phi_x)$. This transformation is expressed as:

$$P_{re} = R_{z,y,x}(\psi_z,\theta_y,\phi_x)\, P_H$$

The parametric and Cartesian equations of the horizontal ground plane $(P_H)$ are expressed as follows:

$$(P_H):\quad \begin{cases} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} t + a w \\ t + b w \\ -h \end{pmatrix} & \text{parametric equation},\quad a, b \in \mathbb{R} \\[4pt] z + h = 0 & \text{Cartesian equation} \end{cases}$$

The rotation matrix $R_{z,y,x}(\psi_z,\theta_y,\phi_x)$ is expressed as:

$$R_{z,y,x}(\psi_z,\theta_y,\phi_x) = R_z(\psi_z) R_y(\theta_y) R_x(\phi_x) = \begin{pmatrix} \cos\psi_z\cos\theta_y & \cos\psi_z\sin\theta_y\sin\phi_x-\sin\psi_z\cos\phi_x & \cos\psi_z\sin\theta_y\cos\phi_x+\sin\psi_z\sin\phi_x \\ \sin\psi_z\cos\theta_y & \sin\psi_z\sin\theta_y\sin\phi_x+\cos\psi_z\cos\phi_x & \sin\psi_z\sin\theta_y\cos\phi_x-\cos\psi_z\sin\phi_x \\ -\sin\theta_y & \cos\theta_y\sin\phi_x & \cos\theta_y\cos\phi_x \end{pmatrix}$$
Therefore, the Cartesian coordinates of the real point cloud $c_{re}$, obtained from the intersection between the fixed straight lines $(d)$ and the rotated real plane $(P_{re})$, are expressed as:

$$(c_{re}):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} t\tan\alpha \\ t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix},\quad t = \frac{(u'l''-u''l')hk + (u''k'-u'k'')hl + (l'k''-l''k')hu}{(k'l''-l'k'')\tan\alpha + (lk''-kl'') + (l'k-lk')\sqrt{\tan^2\alpha+1}\,\tan\beta}$$

for $0°<\alpha<90°$ or $270°<\alpha<360°$, $-90°<\beta<90°$, $t\geq 0$;

$$(c_{re}):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -t\tan\alpha \\ -t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix},\quad t = \frac{(u'l''-u''l')hk + (u''k'-u'k'')hl + (l'k''-l''k')hu}{(k'l''-l'k'')\tan\alpha + (lk''-kl'') - (l'k-lk')\sqrt{\tan^2\alpha+1}\,\tan\beta}$$

for $90°<\alpha<270°$, $-90°<\beta<90°$, $t\geq 0$, where

$$\begin{aligned} k &= \cos\psi_z\cos\theta_y + \cos\psi_z\sin\theta_y\sin\phi_x - \sin\psi_z\cos\phi_x \\ l &= a\cos\psi_z\cos\theta_y + b\cos\psi_z\sin\theta_y\sin\phi_x - b\sin\psi_z\cos\phi_x \\ u &= \cos\psi_z\sin\theta_y\cos\phi_x + \sin\psi_z\sin\phi_x \\ k' &= \sin\psi_z\cos\theta_y + \sin\psi_z\sin\theta_y\sin\phi_x + \cos\psi_z\cos\phi_x \\ l' &= a\sin\psi_z\cos\theta_y + b\sin\psi_z\sin\theta_y\sin\phi_x + b\cos\psi_z\cos\phi_x \\ u' &= \sin\psi_z\sin\theta_y\cos\phi_x - \cos\psi_z\sin\phi_x \\ k'' &= -\sin\theta_y + \cos\theta_y\sin\phi_x \\ l'' &= -a\sin\theta_y + b\cos\theta_y\sin\phi_x \\ u'' &= \cos\theta_y\cos\phi_x \end{aligned} \qquad a, b \in \mathbb{R}$$
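As a sanity check of this geometry, the intersection points can also be computed numerically. The sketch below is our own illustration (not code from the paper); the beam direction convention follows the polar-to-Cartesian transformation of Section 2.3:

```python
import numpy as np

def rot_zyx(psi, theta, phi):
    """R_z(psi) @ R_y(theta) @ R_x(phi), angles in radians (ZYX Euler convention)."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def ground_intersections(alpha, beta, psi, theta, phi, h):
    """Intersect LiDAR beams (azimuth alpha, elevation beta, in radians) with the
    rotated ground plane P_re = R_zyx applied to the plane z = -h.
    Returns the (N, 3) impact points c_re in the LiDAR frame (NaN if no forward hit)."""
    # Unit beam directions in the LiDAR frame.
    d = np.stack([np.cos(beta) * np.sin(alpha),
                  np.cos(beta) * np.cos(alpha),
                  np.sin(beta)], axis=-1)
    R = rot_zyx(psi, theta, phi)
    n = R @ np.array([0.0, 0.0, 1.0])    # normal of the rotated plane
    p0 = R @ np.array([0.0, 0.0, -h])    # a point on the rotated plane
    # Ray p = t * d meets the plane n . (p - p0) = 0 at t = (n . p0) / (n . d).
    t = (n @ p0) / (d @ n)
    t = np.where(t > 0, t, np.nan)       # keep forward intersections only
    return t[:, None] * d
```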

2.3. Error Modeling in Polar and Cartesian Coordinates

In this section, the systematic and random errors $w_\rho, w_\alpha, w_\beta$ are taken into consideration and modeled as additive white Gaussian noise on each of the polar coordinates $\rho, \alpha, \beta$ of the real point cloud $c_{re}$ in Equation (7), where $\rho_w, \alpha_w, \beta_w$ are the real measurements of the range $\rho$, azimuth $\alpha$, and elevation $\beta$, respectively, for each reflecting point:

$$(c_w):\quad \begin{pmatrix} \rho_w \\ \alpha_w \\ \beta_w \end{pmatrix} = \begin{pmatrix} \rho + w_\rho \\ \alpha + w_\alpha \\ \beta + w_\beta \end{pmatrix}$$

The 3D transformation from polar coordinates $\rho_w, \alpha_w, \beta_w$ to Cartesian coordinates $x_w, y_w, z_w$ of the noisy ground point cloud $c_w$ is given by:

$$(c_w):\quad \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} = \begin{pmatrix} \rho_w \cos\beta_w \sin\alpha_w \\ \rho_w \cos\beta_w \cos\alpha_w \\ \rho_w \sin\beta_w \end{pmatrix}$$

The standard deviation of the error can then be propagated from polar to Cartesian parameters in Equation (9), assuming that $\sigma_{x_w}, \sigma_{y_w}, \sigma_{z_w}, \sigma_{\rho_w}, \sigma_{\alpha_w}, \sigma_{\beta_w}$ are, respectively, the standard deviations of the added noise on $x_w, y_w, z_w, \rho_w, \alpha_w, \beta_w$. The terms in the standard deviations $\sigma_{\rho_w}, \sigma_{\alpha_w}, \sigma_{\beta_w}$ with a power higher than two can be neglected in this derivation to obtain the approximation:

$$\begin{pmatrix} \sigma_{x_w}^2 \\ \sigma_{y_w}^2 \\ \sigma_{z_w}^2 \end{pmatrix} \approx \begin{pmatrix} \sigma_{\rho_w}^2 \cos^2\beta_w \sin^2\alpha_w + \rho_w^2\,(\sigma_{\beta_w}^2 \sin^2\beta_w \sin^2\alpha_w + \sigma_{\alpha_w}^2 \cos^2\beta_w \cos^2\alpha_w) \\ \sigma_{\rho_w}^2 \cos^2\beta_w \cos^2\alpha_w + \rho_w^2\,(\sigma_{\beta_w}^2 \sin^2\beta_w \cos^2\alpha_w + \sigma_{\alpha_w}^2 \cos^2\beta_w \sin^2\alpha_w) \\ \sigma_{\rho_w}^2 \sin^2\beta_w + \rho_w^2\,\sigma_{\beta_w}^2 \cos^2\beta_w \end{pmatrix}$$
Glennie et al. [20] showed in particular that the error of a scanning LiDAR sensor is mainly manifested over range. In this type of sensor, angles are not directly measured, but the error is mainly related to the reproducibility of the measurement for a given angle. The hypothesis of neglecting the scanning angle error is a very common assumption in the field of LiDAR detection: it is part of the manufacturers’ specifications and is commonly used in the literature. This is particularly related to the very small influence of the angle reproducibility errors on the range measurement of the object of interest.
In this study, we therefore focus on the range error $\sigma_{\rho_w}$ and neglect the azimuth and elevation errors $\sigma_{\alpha_w}, \sigma_{\beta_w}$ in the simulation data, as given by the constructor. The transformation in Equation (9) then simplifies to:

$$\begin{pmatrix} \sigma_{x_w} \\ \sigma_{y_w} \\ \sigma_{z_w} \end{pmatrix} = \begin{pmatrix} \sigma_{\rho_w} \cos\beta_w \sin\alpha_w \\ \sigma_{\rho_w} \cos\beta_w \cos\alpha_w \\ \sigma_{\rho_w} \sin\beta_w \end{pmatrix}$$
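A minimal sketch of this range-only noise model, under the stated assumption that azimuth and elevation errors are negligible (function names are ours):

```python
import numpy as np

def noisy_cartesian(rho, alpha, beta, sigma_rho, rng=None):
    """Add white Gaussian range noise w_rho ~ N(0, sigma_rho^2) to the polar
    measurements (azimuth/elevation noise neglected) and convert to Cartesian."""
    rng = np.random.default_rng() if rng is None else rng
    rho_w = rho + rng.normal(0.0, sigma_rho, size=np.shape(rho))
    x_w = rho_w * np.cos(beta) * np.sin(alpha)
    y_w = rho_w * np.cos(beta) * np.cos(alpha)
    z_w = rho_w * np.sin(beta)
    return np.stack([x_w, y_w, z_w], axis=-1)
```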

3. LiDAR/Ground Extrinsic Calibration Method

In multi-sensor applications, the data acquired from the different sensors must be fused in one common reference frame. In this application, the calibration of the LiDAR frame scans is necessary to merge them into one world reference frame, in order to increase the point density coverage on the ground, which facilitates road defect detection. Therefore, the extrinsic calibration aims to model the relationship between the LiDAR frame and the world reference frame.
We thus propose the LiDAR/Ground Calibration Method (LGCM) presented in Figure 3. The method includes the following procedures: ground plane fitting by a Least Squares estimator, rotation about an axis by the Rodrigues formula, the Least Squares Conic Algorithm, and height estimation. The proposed method is supplemented by the Levenberg–Marquardt optimization algorithm as opt-LGCM. The main role of the LGCM procedure is to estimate the extrinsic parameters: the Euler rotation angles $\psi_z, \theta_y, \phi_x$ and the height $h$. The opt-LGCM stage is then initialized with these estimated extrinsic parameters in order to optimize them. Finally, the noisy ground points $c_w$ distributed along the real plane $(P_{re})$ are rotated onto the horizontal plane $(P_H)$ by the optimized extrinsic parameters in the frame of the fixed LiDAR.
The proposed method thus consists of two main steps. The first, totally unsupervised step estimates a first value of the orientation angles and height. This first estimate is then used to initialize the optimization step, which seeks the best extrinsic parameters. The developed method is therefore totally unsupervised and does not require a priori knowledge of the orientation of the sensor, for example from a tilt unit.

3.1. Fitting Plane

The first step aims to fit an estimated plane $(P_{est})$ to the rotated noisy ground points $c_w$. The Least Squares estimator is used to obtain the normal vector of the plane $(P_{est})$.
The equation of the estimated plane $(P_{est})$ in the LiDAR frame is expressed by:

$$f(x,y) = z = Ax + By + D + w$$

where $A$, $B$, and $D$ are the plane parameters and $w$ is an additive white Gaussian noise with standard deviation $\sigma_w$.
Therefore, Equation (11) of the estimated plane $(P_{est})$ can be written in linear form as:

$$Z = HO + w$$

where

$$Z = \begin{pmatrix} z(0) & \cdots & z(N-1) \end{pmatrix}^T, \qquad H = \begin{pmatrix} x(0) & y(0) & 1 \\ \vdots & \vdots & \vdots \\ x(N-1) & y(N-1) & 1 \end{pmatrix}, \qquad O = \begin{pmatrix} A & B & D \end{pmatrix}^T, \qquad w = \begin{pmatrix} w(0) & \cdots & w(N-1) \end{pmatrix}^T$$

and $N$ is the number of reflected points.
The Least Squares solution for this linear model is expressed as:

$$\hat{O}_{LS} = (H^T H)^{-1} H^T Z$$
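A compact sketch of this plane-fitting step (our naming; np.linalg.lstsq solves the same normal equations more stably than forming $(H^T H)^{-1}$ explicitly):

```python
import numpy as np

def fit_plane_ls(points: np.ndarray):
    """Least Squares fit of z = A x + B y + D to an (N, 3) cloud.
    Returns (A, B, D); a (non-normalized) normal of the plane is then (-A, -B, 1)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    H = np.column_stack([x, y, np.ones_like(x)])
    # Solves the linear model Z = H O + w in the least squares sense.
    (A, B, D), *_ = np.linalg.lstsq(H, z, rcond=None)
    return A, B, D
```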

3.2. Rotation about Axis

The Rodrigues formula is an efficient rotation transformation that computes the rotation matrix $R_{rod}$ rotating one vector onto another in a 3D frame around a fixed axis vector $Axis$ by a rotation angle $\eta$ [26]. Therefore, after estimating the parameter vector of the oblique estimated plane $(P_{est})$ according to Section 3.1, the next step is to compute the rotation matrix $R_{rod}$ from the normal vector $n_1$ of the oblique estimated plane $(P_{est})$ to the normal vector $n_2$ of the horizontal plane $(P_H)$, which is parallel to the $X_L Y_L$-plane at height $h$ (cf. Figure 1c). The objective of this step is to use the Rodrigues formula to estimate the first two Euler angles, pitch $\hat{\phi}_x$ and roll $\hat{\theta}_y$, and a first partial yaw angle $\hat{\Psi}_{z1}$, from the Rodrigues matrix $R_{rod}$; the calibration in yaw is incomplete at this stage, which is solved in the next step.
Assuming that $n_1$, $n_2$, and $Axis$ are expressed as:

$$n_1 = (-\hat{A}, -\hat{B}, 1), \qquad n_2 = (0, 0, 1), \qquad Axis = (m, n, p) = \frac{n_1 \times n_2}{\| n_1 \times n_2 \|}$$

the Rodrigues rotation formula $R_{rod}$ can then be written as:

$$R_{rod} = I_3 + \sin\eta\, K + (1-\cos\eta)\, K^2$$

where

$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad K = \begin{pmatrix} 0 & -p & n \\ p & 0 & -m \\ -n & m & 0 \end{pmatrix}$$

$$\sin\eta = \frac{\| n_1 \times n_2 \|}{\| n_1 \| \, \| n_2 \|}, \qquad \cos\eta = \frac{n_1 \cdot n_2}{\| n_1 \| \, \| n_2 \|}$$
Now, using Equation (15),

$$R_{rod} = R_{z,y,x}(\hat{\Psi}_{z1}, \hat{\theta}_y, \hat{\phi}_x) = R_z(\hat{\Psi}_{z1})\, R_y(\hat{\theta}_y)\, R_x(\hat{\phi}_x)$$

the Rodrigues matrix $R_{rod}$ provides the computation of $\hat{\Psi}_{z1}$, $\hat{\theta}_y$, and $\hat{\phi}_x$ as expressed in the equations below:

$$\hat{\Psi}_{z1} = \arctan\!\big( (R_{rod})_{21} / (R_{rod})_{11} \big)$$
$$\hat{\theta}_y = -\arcsin\!\big( (R_{rod})_{31} \big)$$
$$\hat{\phi}_x = \arctan\!\big( (R_{rod})_{32} / (R_{rod})_{33} \big)$$

where $ij$ is the index of the matrix element $(R_{rod})_{ij}$. As a graphical result, the noisy ground points $c_w$ are rotated by the Rodrigues matrix $R_{rod}$ into the points $c_{H1}$ distributed along the horizontal plane $(P_H)$ by Equation (19), as shown in Figure 4:

$$c_{H1} = R_{rod}\, c_w$$
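The two operations of this step, building $R_{rod}$ from the estimated normal and extracting $(\hat{\Psi}_{z1}, \hat{\theta}_y, \hat{\phi}_x)$, can be sketched as follows. This is our illustration under the stated assumptions: the normal $n_1 = (-\hat{A}, -\hat{B}, 1)$ follows the plane model $z = Ax + By + D$, and the angle extraction assumes the ZYX factorization above:

```python
import numpy as np

def rodrigues_to_horizontal(A, B):
    """Rodrigues rotation taking the estimated plane normal n1 = (-A, -B, 1)
    onto the horizontal normal n2 = (0, 0, 1); returns the 3x3 matrix R_rod."""
    n1 = np.array([-A, -B, 1.0]); n1 /= np.linalg.norm(n1)
    n2 = np.array([0.0, 0.0, 1.0])
    axis = np.cross(n1, n2)
    sin_eta = np.linalg.norm(axis)
    cos_eta = n1 @ n2
    if sin_eta < 1e-12:                 # plane is already horizontal
        return np.eye(3)
    m, n, p = axis / sin_eta            # unit rotation axis (m, n, p)
    K = np.array([[0, -p, n], [p, 0, -m], [-n, m, 0]])
    return np.eye(3) + sin_eta * K + (1.0 - cos_eta) * (K @ K)

def euler_zyx_from_matrix(R):
    """Extract (Psi_z1, theta_y, phi_x) assuming R = R_z R_y R_x."""
    psi = np.arctan2(R[1, 0], R[0, 0])
    theta = -np.arcsin(R[2, 0])
    phi = np.arctan2(R[2, 1], R[2, 2])
    return psi, theta, phi
```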

3.3. Yaw Angle Estimation

After rotating the noisy points $c_w$ to the horizontal points $c_{H1}$, the second partial yaw angle $\hat{\Psi}_{z2}$ is estimated by Algorithm 1, which we propose in Figure 5, to rotate the points $c_{H1}$ to $c_{H2}$ about the z-axis. This algorithm, called the Least Squares Conic Algorithm (LSCA), takes advantage of the symmetry about the center $s$ of the geometrical impact patterns (hyperbolas, parabolas, and circles) formed by the points $c_{H1}$, as shown in Figure 5. The aim of this part is to compute the yaw angle $\hat{\psi}_z$ from the partial angles $\hat{\Psi}_{z1}$ and $\hat{\Psi}_{z2}$, as shown in Equation (20).
Algorithm 1: Least Squares Conic Algorithm.
   Input: $x, y, z, \alpha, \beta$ of the distributed points $c_{H1}$
   Output: $\hat{\Psi}_{z2}$
1. Fit the lines $(l)$ and $(l')$ that pass through the points of each $\zeta = 10°$ of consecutive azimuth by the Least Squares estimator. The Least Squares solution for this linear model is:
$$\hat{O}_{LS} = (H^T H)^{-1} H^T Y, \qquad Y = \begin{pmatrix} y(0) \\ \vdots \\ y(N-1) \end{pmatrix}, \qquad H = \begin{pmatrix} x(0) & 1 \\ \vdots & \vdots \\ x(N-1) & 1 \end{pmatrix}, \qquad \hat{O}_{LS} = \begin{pmatrix} \hat{m} \\ \hat{b} \end{pmatrix}$$
2. Compute the coordinates of the intersection point $s$ of each pair of symmetric lines $(l)$ and $(l')$. Assume that:
$$(l):\ y = \hat{m}_1 x + \hat{b}_1, \qquad (l'):\ y = \hat{m}_2 x + \hat{b}_2$$
The intersection point $s$ of the straight lines $(l)$ and $(l')$ is then computed as:
$$x_s = \frac{\hat{b}_2 - \hat{b}_1}{\hat{m}_1 - \hat{m}_2}, \qquad y_s = \hat{m}_1 \frac{\hat{b}_2 - \hat{b}_1}{\hat{m}_1 - \hat{m}_2} + \hat{b}_1$$
3. Fit a line $(v)$ that passes through the intersection points $s$ and the origin $O$ by the Least Squares estimator:
$$\hat{O}_{LS} = (H^T H)^{-1} H^T Y, \qquad Y = \begin{pmatrix} y(0) \\ \vdots \\ y(N-1) \end{pmatrix}, \qquad H = \begin{pmatrix} x(0) \\ \vdots \\ x(N-1) \end{pmatrix}, \qquad \hat{O}_{LS} = \hat{m}$$
4. Finally, compute the angle $\hat{\Psi}_{z2}$ formed by the fitted line $(v)$ and the y-axis:
$$\hat{\Psi}_{z2} = \arctan(\hat{m}) - 90° \quad \text{if } \hat{m} > 0$$
$$\hat{\Psi}_{z2} = \arctan(\hat{m}) + 90° \quad \text{if } \hat{m} < 0$$
Therefore, the third Euler angle of rotation (yaw angle) $\hat{\psi}_z$ is computed as follows:

$$\hat{\psi}_z = \hat{\Psi}_{z1} - \hat{\Psi}_{z2}$$

Finally, the points $c_{H1}$ are rotated to the points $c_{H2}$ by an angle $\hat{\Psi}_{z2}$ around the z-axis, as shown in Figure 6:

$$c_{H2} = R_z(\hat{\Psi}_{z2})\, c_{H1}$$
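A sketch of LSCA under our reading of the algorithm: symmetric azimuth sectors are paired as $(\alpha, \alpha + 180°)$, which is an assumption on how the pairs $(l)$, $(l')$ are formed; everything else follows Steps 1–4:

```python
import numpy as np

def lsca_yaw(points_h1: np.ndarray, alpha: np.ndarray, zeta_deg: float = 10.0) -> float:
    """Least Squares Conic Algorithm (sketch): estimate the residual yaw Psi_z2
    (radians) from the symmetry centers of the conic impact patterns of c_H1."""
    x, y = points_h1[:, 0], points_h1[:, 1]
    a = np.degrees(alpha) % 360.0

    def fit_line(mask):
        # Step 1: y = m x + b over one azimuth sector, by Least Squares.
        H = np.column_stack([x[mask], np.ones(mask.sum())])
        (m, b), *_ = np.linalg.lstsq(H, y[mask], rcond=None)
        return m, b

    centers = []
    for lo in np.arange(0.0, 180.0, zeta_deg):
        sector = (a >= lo) & (a < lo + zeta_deg)                    # line (l)
        mirror = (a >= lo + 180.0) & (a < lo + 180.0 + zeta_deg)    # line (l')
        if sector.sum() < 2 or mirror.sum() < 2:
            continue
        m1, b1 = fit_line(sector)
        m2, b2 = fit_line(mirror)
        if np.isclose(m1, m2):
            continue
        xs = (b2 - b1) / (m1 - m2)          # Step 2: intersection point s
        centers.append((xs, m1 * xs + b1))
    cx, cy = np.asarray(centers).T
    # Step 3: line (v) through the centers and the origin (slope only, no intercept).
    m_hat = (cx @ cy) / (cx @ cx)
    # Step 4: angle between (v) and the y-axis.
    return np.arctan(m_hat) - np.pi / 2 if m_hat > 0 else np.arctan(m_hat) + np.pi / 2
```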

3.4. Height Estimation

At the end of the LGCM approach, a suitable way to estimate the height is to compute the mean altitude of the points $c_{H2}$, given the geometrical ground model used in this paper. The estimated height is then expressed as:

$$\hat{h} = -\frac{1}{N} \sum_{i=0}^{N-1} z_i$$

where $N$ is the number of calibrated points.

3.5. Extrinsic Parameters Optimization

The Levenberg–Marquardt algorithm is an optimization algorithm that combines the gradient descent and Gauss–Newton methods [27]. It is a very efficient technique for finding minima and performs well on most non-linear functions. The role of opt-LGCM is therefore to optimize the extrinsic parameters $\psi_z, \theta_y, \phi_x, h$ by the Levenberg–Marquardt algorithm, initialized with the estimated extrinsic parameters $\hat{\psi}_z, \hat{\theta}_y, \hat{\phi}_x, \hat{h}$, to obtain the optimized extrinsic parameters $\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt}, \hat{h}_{opt}$. The optimization minimizes the mean square error $mse$, i.e., the squared difference between the positions of the noisy points $c_w$ and the optimized points $c_{opt}$ in Equation (24). The optimized points $c_{opt}$ are the intersections between the LiDAR beams $(d)$ and the optimized plane $(P_{opt})$, which is the rotation of the horizontal plane $(P_H)$ by the optimized Euler angles at the optimized height $\hat{h}_{opt}$ in Equation (25). In other words, this procedure finds the optimized height and the optimized Euler angles that rotate the horizontal plane $(P_H)$, in the reverse ordering, to fit the noisy points $c_w$ distributed along the oblique real plane $(P_{re})$ with minimum position $mse$.
$$(\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt}, \hat{h}_{opt}) = \underset{(\psi_z, \theta_y, \phi_x, h)}{\arg\min}\ mse$$

The non-linear function $mse$ is expressed by:

$$mse = \frac{1}{m} \sum_{i=1}^{m} \left[ (x_{c_{opt}} - x_{c_w})^2 + (y_{c_{opt}} - y_{c_w})^2 + (z_{c_{opt}} - z_{c_w})^2 \right]$$
The optimized points $c_{opt}$ represent the intersection between the straight lines $(d)$ and the rotated optimized plane $(P_{opt})$, where $(P_{opt})$ is the rotation of the fixed horizontal ground plane $(P_H)$ at height $\hat{h}_{opt}$ by $\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt}$ based on the $R_{x,y,z}$ rotation matrix, as shown in Equation (25):

$$P_{opt} = R_{x,y,z}(\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt})\, P_H$$

where the rotation matrix $R_{x,y,z}$ applies the rotations in the reverse order of $R_{z,y,x}$:

$$R_{x,y,z}(\psi_z, \theta_y, \phi_x) = R_x(\phi_x) R_y(\theta_y) R_z(\psi_z) = \begin{pmatrix} \cos\psi_z\cos\theta_y & -\sin\psi_z\cos\theta_y & \sin\theta_y \\ \cos\psi_z\sin\theta_y\sin\phi_x + \sin\psi_z\cos\phi_x & -\sin\psi_z\sin\theta_y\sin\phi_x + \cos\psi_z\cos\phi_x & -\cos\theta_y\sin\phi_x \\ -\cos\psi_z\sin\theta_y\cos\phi_x + \sin\psi_z\sin\phi_x & \sin\psi_z\sin\theta_y\cos\phi_x + \cos\psi_z\sin\phi_x & \cos\theta_y\cos\phi_x \end{pmatrix}$$
Finally, the Cartesian coordinates of the optimized point cloud $c_{opt}$, rotated in the reverse orientation sense, are estimated from $\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt}, \hat{h}_{opt}$ as expressed below:

$$(c_{opt}):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} t\tan\alpha \\ t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix},\quad t = \frac{(u'l''-u''l')\hat{h}_{opt}k + (u''k'-u'k'')\hat{h}_{opt}l + (l'k''-l''k')\hat{h}_{opt}u}{(k'l''-l'k'')\tan\alpha + (lk''-kl'') + (l'k-lk')\sqrt{\tan^2\alpha+1}\,\tan\beta}$$

for $0°<\alpha<90°$ or $270°<\alpha<360°$, $-90°<\beta<90°$, $t\geq 0$;

$$(c_{opt}):\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -t\tan\alpha \\ -t \\ t\sqrt{\tan^2\alpha+1}\,\tan\beta \end{pmatrix},\quad t = \frac{(u'l''-u''l')\hat{h}_{opt}k + (u''k'-u'k'')\hat{h}_{opt}l + (l'k''-l''k')\hat{h}_{opt}u}{(k'l''-l'k'')\tan\alpha + (lk''-kl'') - (l'k-lk')\sqrt{\tan^2\alpha+1}\,\tan\beta}$$

for $90°<\alpha<270°$, $-90°<\beta<90°$, $t\geq 0$, where the coefficients are now computed from the rows of $R_{x,y,z}$ at the optimized angles (hats denoting optimized values):

$$\begin{aligned} k &= \cos\hat\psi_z\cos\hat\theta_y - \sin\hat\psi_z\cos\hat\theta_y \\ l &= a\cos\hat\psi_z\cos\hat\theta_y - b\sin\hat\psi_z\cos\hat\theta_y \\ u &= \sin\hat\theta_y \\ k' &= \sin\hat\psi_z\cos\hat\phi_x + \cos\hat\psi_z\sin\hat\theta_y\sin\hat\phi_x + \cos\hat\psi_z\cos\hat\phi_x - \sin\hat\psi_z\sin\hat\theta_y\sin\hat\phi_x \\ l' &= a\sin\hat\psi_z\cos\hat\phi_x + a\cos\hat\psi_z\sin\hat\theta_y\sin\hat\phi_x + b\cos\hat\psi_z\cos\hat\phi_x - b\sin\hat\psi_z\sin\hat\theta_y\sin\hat\phi_x \\ u' &= -\cos\hat\theta_y\sin\hat\phi_x \\ k'' &= \sin\hat\psi_z\sin\hat\phi_x - \cos\hat\psi_z\sin\hat\theta_y\cos\hat\phi_x + \cos\hat\psi_z\sin\hat\phi_x + \sin\hat\psi_z\sin\hat\theta_y\cos\hat\phi_x \\ l'' &= a\sin\hat\psi_z\sin\hat\phi_x - a\cos\hat\psi_z\sin\hat\theta_y\cos\hat\phi_x + b\cos\hat\psi_z\sin\hat\phi_x + b\sin\hat\psi_z\sin\hat\theta_y\cos\hat\phi_x \\ u'' &= \cos\hat\theta_y\cos\hat\phi_x \end{aligned} \qquad a, b \in \mathbb{R}$$
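The optimization stage can be sketched with SciPy's Levenberg–Marquardt solver (method="lm" in scipy.optimize.least_squares). This is our illustration, not the authors' code: the residual is the per-point position difference of Equation (24), and the model points are computed as ray/plane intersections as in Section 2.2:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_xyz(psi, theta, phi):
    """Reverse-order rotation R_x(phi) @ R_y(theta) @ R_z(psi)."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rx @ Ry @ Rz

def model_points(params, alpha, beta):
    """Intersections c_opt of the beams with the plane P_opt = R_xyz applied to z = -h."""
    psi, theta, phi, h = params
    d = np.stack([np.cos(beta) * np.sin(alpha),
                  np.cos(beta) * np.cos(alpha),
                  np.sin(beta)], axis=-1)
    R = rot_xyz(psi, theta, phi)
    n = R @ np.array([0.0, 0.0, 1.0])
    p0 = R @ np.array([0.0, 0.0, -h])
    t = (n @ p0) / (d @ n)
    return t[:, None] * d

def opt_lgcm(c_w, alpha, beta, init):
    """Refine (psi_z, theta_y, phi_x, h) with Levenberg-Marquardt, starting from
    the LGCM estimates, by minimizing the point-position residuals."""
    residuals = lambda p: (model_points(p, alpha, beta) - c_w).ravel()
    return least_squares(residuals, x0=np.asarray(init, dtype=float), method="lm").x
```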

4. Experimental Results

The proposed calibration method LGCM was applied to two types of data: simulation data obtained with the modeling presented in Section 2, and real data acquired with a Velodyne VLP-16 LiDAR. The most important features of the Velodyne VLP-16 LiDAR for the modeling are shown in Table 1.
The extrinsic calibration results are presented in terms of precision and robustness. For our application, precision reflects the stability of the method with respect to the variation of the pitch angle $\phi_x$ toward the ground, while robustness reflects the strength of the method with respect to the variation of the range accuracy $\sigma_\rho$ of the measurements.
The evaluation therefore focuses on the point cloud features of the real points $c_{re}$ on the real plane $(P_{re})$, the noisy points $c_w$ distributed along the real plane $(P_{re})$, the estimated points $c_{est}$ on the estimated plane $(P_{est})$ obtained by LGCM, and the optimized points $c_{opt}$ on the optimized plane $(P_{opt})$ obtained by opt-LGCM, as described below (a small helper implementing these metrics is sketched after this list):
  • The real height $h$, the estimated height $\hat{h}$, and the optimized height $\hat{h}_{opt}$.
  • The standard deviation $\sigma_{d_{w/i}}$ of the orthogonal Euclidean distances of the noisy points $c_w$ with respect to the real plane $(P_{re})$, the estimated plane $(P_{est})$, and the optimized plane $(P_{opt})$:
$$\sigma_{d_{w/i}} = \sqrt{\frac{1}{N} \sum \big( d_{w/i} - \overline{d_{w/i}} \big)^2}, \qquad d_{w/i} = \frac{| A_i x_w + B_i y_w + C_i z_w + D_i |}{\sqrt{A_i^2 + B_i^2 + C_i^2}}$$
where $x_w, y_w, z_w$ are the Cartesian coordinates of the noisy points $c_w$; $A_i, B_i, C_i, D_i$ are the coefficient parameters of the planes, $i = \{re, est, opt\}$; and $N$ is the number of impact points.
  • The standard deviation $\sigma_{\rho_{re}/\rho_i}$ of the range differences of the real points $c_{re}$ with respect to the noisy points $c_w$, the estimated points $c_{est}$, and the optimized points $c_{opt}$:
$$\sigma_{\rho_{re}/\rho_i} = \sqrt{\frac{1}{N} \sum \big( (\rho_{re} - \rho_i) - \overline{(\rho_{re} - \rho_i)} \big)^2}$$
where $i = \{w, est, opt\}$ and $N$ is the number of impact points.
  • The standard deviation $\sigma_{\rho_w/\rho_i}$ of the range differences of the noisy points $c_w$ with respect to the real points $c_{re}$, the estimated points $c_{est}$, and the optimized points $c_{opt}$:
$$\sigma_{\rho_w/\rho_i} = \sqrt{\frac{1}{N} \sum \big( (\rho_w - \rho_i) - \overline{(\rho_w - \rho_i)} \big)^2}$$
where $i = \{re, est, opt\}$ and $N$ is the number of impact points.
  • The performance gain $PF_i$, which describes the range accuracy enhancement obtained from the Levenberg–Marquardt optimization algorithm and is defined as:
$$PF_i = \sigma_{\rho_w/\rho_i} - \sigma_{\rho_w/\rho_{re}}$$
where $\sigma_{\rho_w/\rho_{re}}$ is the LiDAR range accuracy and $i = \{est, opt\}$.
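A small helper implementing these evaluation metrics, as referenced above (our naming; $PF_i$ is computed as the difference between the two standard deviations, per the definition above):

```python
import numpy as np

def sigma_d(points: np.ndarray, plane) -> float:
    """Std of the orthogonal distances of an (N, 3) cloud to a plane (A, B, C, D)."""
    A, B, C, D = plane
    d = np.abs(points @ np.array([A, B, C]) + D) / np.sqrt(A**2 + B**2 + C**2)
    return float(np.std(d))

def sigma_range_diff(rho_1: np.ndarray, rho_2: np.ndarray) -> float:
    """Std of the per-point range differences between two matched clouds."""
    return float(np.std(rho_1 - rho_2))

def performance_gain(sigma_w_i: float, sigma_w_re: float) -> float:
    """Gain PF_i: distance of sigma_{rho_w/rho_i} from the LiDAR range accuracy."""
    return sigma_w_i - sigma_w_re
```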

4.1. Simulation Data Results

Using the simulation data, the setups used to validate the proposed calibration method are separated into two categories:
  • In terms of precision: real height $h = 2$ m, roll angle $\theta_y = 2°$, yaw angle $\psi_z = 2°$, and LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}} = 0.03$ m, with respect to the variation of the pitch angle $\phi_x \in [-70°, 70°]$.
  • In terms of robustness: real height $h = 2$ m, pitch angle $\phi_x = 45°$, roll angle $\theta_y = 2°$, and yaw angle $\psi_z = 2°$, with respect to the variation of $\sigma_{\rho_w/\rho_{re}} \in [0, 0.095 \text{ m}]$.

4.1.1. Standard Deviation σ d w / i in Terms of Precision and Robustness

In Figure 7a, the increase of the standard deviation $\sigma_{d_{w/i}}$ along the planes is due to the effect of the LiDAR orientation, through the pitch angle $\phi_x$, on $\sigma_{d_{w/i}}$. Thus, as the pitch angle $\phi_x$ tends to $90°$, the standard deviation $\sigma_{d_{w/i}}$ tends to the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$. In Figure 7b, the increase of the standard deviation $\sigma_{d_{w/i}}$ is due to the increase of the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$. Moreover, Equation (34) describes the relation of $\sigma_{d_{w/i}}$ with $\phi_x$ and $\sigma_{\rho_w/\rho_{re}}$, which explains this increase:

$$\sigma_{d_{w/i}} = \sin(\phi_x + \varphi_k)\, \sigma_{\rho_w/\rho_{re}}$$

where $\varphi_k$ is the elevation angle of each VLP-16 LiDAR laser and $k = \{1, 2, \ldots, 16\}$ is the laser index.
In Figure 7c,d, we can see that the standard deviation $\sigma_{d_{w/opt}}$ is closer to the standard deviation $\sigma_{d_{w/re}}$ than the standard deviation $\sigma_{d_{w/est}}$ is. This shows that the optimized plane $(P_{opt})$ fits the real plane $(P_{re})$ better than the estimated plane $(P_{est})$ does.

4.1.2. Standard Deviation σ ρ r e / ρ i and σ ρ w / ρ i in Terms of Precision and Robustness

In terms of precision and robustness, Figure 8 shows the increasing behavior of the range standard deviations $\sigma_{\rho_{re}/\rho_{est}}$ and $\sigma_{\rho_w/\rho_{est}}$ after the LGCM calibration, due to:
  • The increase of the pitch angle $\phi_x$ on the positive and negative sides decreases the sparsity of the impact points on the ground, which decreases the precision of the plane fitting estimation, as shown in Figure 8a,c.
  • The increase of the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$ decreases the precision of the plane fitting estimation, as shown in Figure 8b,d.
The standard deviation $\sigma_{\rho_{re}/\rho_{opt}}$ is lower than the standard deviation $\sigma_{\rho_{re}/\rho_{est}}$, as shown in Figure 8a,b, which indicates that the optimized points $c_{opt}$ are closer to the real points $c_{re}$ than the estimated points $c_{est}$ are. On the other hand, the standard deviation $\sigma_{\rho_w/\rho_{opt}}$ is closer to the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$ than the standard deviation $\sigma_{\rho_w/\rho_{est}}$ is, as shown in Figure 8c,d, which indicates that the range distributions of the noisy points $c_w$ along the real plane $(P_{re})$ and the optimized plane $(P_{opt})$ are nearly equal.
The negligible standard deviation $\sigma_{\rho_{re}/\rho_{opt}}$ in Figure 8a,b and the coincident standard deviations $\sigma_{\rho_w/\rho_{opt}}$ and $\sigma_{\rho_w/\rho_{re}}$ in Figure 8c,d prove the similarity of the real plane $(P_{re})$ and the optimized plane $(P_{opt})$, compared with the estimated plane $(P_{est})$.

4.1.3. Height Recovering in Terms of Precision and Robustness

In terms of precision and robustness, Figure 9 highlights the recovery of the height parameter: the optimized height $\hat{h}_{opt}$ is closer to the real height $h$ than the estimated height $\hat{h}$ is, which demonstrates the importance of the height optimization and the strength of the Levenberg–Marquardt optimization algorithm.

4.1.4. Performance Gain P F i in Terms of Precision and Robustness

Figure 10 shows the gain in performance of the optimized plane points $c_{opt}$ against the estimated plane points $c_{est}$, relative to the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$ as expressed in Equation (33), with respect to the variation of the pitch angle $\phi_x$ and of the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$. The gain $PF_{opt}$ becomes negligible after the optimization, which means that the standard deviation $\sigma_{\rho_w/\rho_{opt}}$ is much closer to the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$ than $\sigma_{\rho_w/\rho_{est}}$ is. In other words, the optimization recovers the range distribution of the noisy points $c_w$ along the real plane $(P_{re})$, keeping $PF_{opt}$ negligible. The gain feature $PF_i$ thus confirms again that the optimized plane $(P_{opt})$ fits the real plane $(P_{re})$ better than the estimated plane $(P_{est})$ does.

4.2. Real Data Results

The 3D point cloud acquisitions were obtained using a multi-laser rangefinder (Velodyne VLP-16 LiDAR) mounted on a vehicle. To obtain telemetric information about the ground surface and to achieve the application goal, the VLP-16 LiDAR was rotated toward the ground with a pitch angle $\phi_x \approx 70°$ and placed at a height of $h \approx 1.05$ m above the ground surface. The real setup is shown in Figure 11.
The proposed method was applied to two different acquisitions:
  • Acquisition 1: The vehicle was at rest on the road.
  • Acquisition 2: The vehicle was moving at a slow speed on the road.

Standard Deviation σ ρ w / ρ i per LiDAR Frames

In the absence of the real plane $(P_{re})$ when using real data, the results focus on the range distribution of the noisy points $c_w$ along the estimated plane $(P_{est})$ and the optimized plane $(P_{opt})$. The standard deviation curve $\sigma_{\rho_w/\rho_{opt}}$ is clearly lower than $\sigma_{\rho_w/\rho_{est}}$ in the two acquisitions, as shown in Figure 12. The optimization algorithm is thus proven efficient on real data as well, decreasing the range spread of the noisy points $c_w$ along the fitted planes.

4.3. Results Discussion

In general, the results prove the efficiency of the optimization algorithm, represented by the optimized plane $(P_{opt})$ versus the estimated plane $(P_{est})$, when compared with the real plane $(P_{re})$, in terms of precision and robustness. The convergence of the optimization algorithm is granted automatically by suitable initialization parameters, namely the estimated Euler angles $\hat{\psi}_z, \hat{\theta}_y, \hat{\phi}_x$ and the estimated height $\hat{h}$, which are computed in Stage 1 (LGCM) to obtain the estimated plane $(P_{est})$, and then optimized by the Levenberg–Marquardt algorithm (opt-LGCM) in Stage 2 to obtain the optimized Euler angles $\hat{\psi}_{z,opt}, \hat{\theta}_{y,opt}, \hat{\phi}_{x,opt}$, the optimized height $\hat{h}_{opt}$, and thus the optimized plane $(P_{opt})$. Finally, the results show the strength and performance of the method in terms of precision and robustness against the variation of the pitch angle $\phi_x$ and of the LiDAR range accuracy $\sigma_{\rho_w/\rho_{re}}$, respectively, achieving the application's aim, as shown in Figure 13.

5. Conclusions

A new extrinsic LiDAR/Ground calibration method for 3D LiDARs is presented in this paper. The solution relies on a plane-based modeling of the ground, which allows the estimation of the LiDAR's orientation and altitude using the Rodrigues formula, the Least Squares Conic Algorithm for yaw angle estimation, and height estimation. The proposed method (LGCM) is extended to an optimized derivation (opt-LGCM) using the Levenberg–Marquardt algorithm and is shown to be a suitable solution to the LiDAR/Ground calibration problem. It was implemented on synthetic and real LiDAR telemetric data. The results show the performance in terms of precision and robustness against the variation of the LiDAR's orientation and range accuracy, respectively, proving the stability and accuracy of the proposed calibration method.

Author Contributions

Conceptualization and methodology: M.A.Z., R.L. and J.-C.N.; supervision: M.A.Z., R.L., O.B., G.F. and J.-C.N.; writing—original draft preparation: M.A.Z.; writing—review and editing: M.A.Z., R.L., O.B. and J.-C.N.; funding acquisition: G.F. and J.-C.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was co-funded by Univ. Littoral Côte d’Opale (France) and CNRS-L (Lebanon).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Coenen, T.B.; Golroo, A. A review on automated pavement distress detection methods. Cogent Eng. 2017, 4, 1374822. [Google Scholar] [CrossRef]
  2. Fortin, B.; Lherbier, R.; Noyer, J. A Model-Based Joint Detection and Tracking Approach for Multi-Vehicle Tracking With Lidar Sensor. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1883–1895. [Google Scholar] [CrossRef]
  3. Lenac, K.; Kitanov, A.; Cupec, R.; Petrović, I. Fast planar surface 3D SLAM using LIDAR. Robot. Auton. Syst. 2017, 92, 197–220. [Google Scholar] [CrossRef]
  4. Liang, X.; Chen, H.; Li, Y.; Liu, Y. Visual laser-SLAM in large-scale indoor environments. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016; pp. 19–24. [Google Scholar] [CrossRef]
  5. Fernandes, R.; Premebida, C.; Peixoto, P.; Wolf, D.; Nunes, U. Road Detection Using High Resolution LIDAR. In Proceedings of the IEEE Vehicle Power and Propulsion Conference (VPPC), 27–30 October 2014; pp. 1–6. [Google Scholar] [CrossRef]
  6. Zhang, W. LIDAR-based road and road-edge detection. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 845–848. [Google Scholar] [CrossRef]
  7. Yutong, Y.; Liming, F.; Bijun, L. Object detection and tracking using multi-layer laser for autonomous urban driving. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 259–264. [Google Scholar] [CrossRef]
  8. Li, Q.; Zhang, L.; Mao, Q.; Zou, Q.; Zhang, P.; Feng, S.; Ochieng, W. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR. Sensors 2014, 14, 16672–16691. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Cho, M. A Study on the Obstacle Recognition for Autonomous Driving RC Car Using LiDAR and Thermal Infrared Camera. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 544–546. [Google Scholar] [CrossRef]
  10. Nagashima, T.; Nagasaki, T.; Matsubara, H. Object Recognition Method Commonly Usable for LIDARs With Different Vertical Resolution. In Proceedings of the 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 9–12 October 2018; pp. 751–753. [Google Scholar] [CrossRef]
  11. Yang, S.; Fan, Y. 3D Building Scene Reconstruction Based on 3D LiDAR Point Cloud. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), Ilan, Taiwan, 20–22 May 2019; pp. 127–128. [Google Scholar] [CrossRef]
  12. Qi, J.; Gastellu-Etchegorry, J.P.; Yin, T. Reconstruction of 3D Forest Mock-Ups from Airborne LiDAR Data for Multispectral Image Simulation Using DART Model. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 3975–3978. [Google Scholar] [CrossRef]
  13. Clifton, W.E.; Steele, B.; Nelson, G.; Truscott, A.; Itzler, M.; Entwistle, M. Medium altitude airborne Geiger-mode mapping LIDAR system. In Laser Radar Technology and Applications XX; and Atmospheric Propagation XII; Turner, M.D., Kamerman, G.W., Thomas, L.M.W., Spillar, E.J., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2015; Volume 9465, pp. 39–46. [Google Scholar] [CrossRef]
  14. Behringer, R.; Sundareswaran, S.; Gregory, B.; Elsley, R.; Addison, B.; Guthmiller, W.; Daily, R.; Bevly, D. The DARPA grand challenge - development of an autonomous vehicle. IEEE Intell. Veh. Symp. 2004, 2004, 226–231. [Google Scholar] [CrossRef]
  15. Owechko, Y.; Medasani, S.; Korah, T. Automatic recognition of diverse 3-D objects and analysis of large urban scenes using ground and aerial LIDAR sensors. In Proceedings of the CLEO/QELS: 2010 Laser Science to Photonic Applications, San Jose, CA, USA, 16–21 May 2010; pp. 1–2. [Google Scholar] [CrossRef]
  16. Thrun, S. Winning the DARPA Grand Challenge. In Machine Learning: ECML 2006; Fürnkranz, J., Scheffer, T., Spiliopoulou, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; p. 4. [Google Scholar]
  17. Buehler, M.; Iagnemma, K.; Singh, S. Special issue on the 2007 DARPA Urban Challenge, Part II. J. Field Robot. 2008, 25, 567–568. [Google Scholar] [CrossRef]
  18. Muhammad, N.; Lacroix, S. Calibration of a rotating multi-beam lidar. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5648–5653. [Google Scholar] [CrossRef]
  19. Glennie, C.; Lichti, D.D. Static Calibration and Analysis of the Velodyne HDL-64E S2 for High Accuracy Mobile Scanning. Remote Sens. 2010, 2, 1610–1624. [Google Scholar] [CrossRef] [Green Version]
  20. Glennie, C.; Kusari, A.; Facchin, A. Calibration and stability analysis of the VLP-16 laser scanner. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, XL-3/W4, 55–60. [Google Scholar] [CrossRef]
  21. Zeng, Y.; Yu, H.; Dai, H.; Song, S.; Lin, M.; Sun, B.; Jiang, W.; Meng, M.Q.H. An Improved Calibration Method for a Rotating 2D LIDAR System. Sensors 2018, 18. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Kurnianggoro, L.; Dung, H.V.; Jo, K.H. Calibration of a 2D Laser Scanner System and Rotating Platform using a Point-Plane Constraint. Comput. Sci. Inf. Syst. 2015, 12, 307–322. [Google Scholar] [CrossRef]
  23. Atanacio-Jiménez, G.; González-Barbosa, J.; Hurtado-Ramos, J.; Ornelas-Rodríguez, F.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. LIDAR velodyne HDL-64E calibration using pattern planes. Int. J. Adv. Robot. Syst. 2011, 8, 59. [Google Scholar] [CrossRef]
  24. Levinson, J.; Thrun, S. Unsupervised calibration for multi-beam lasers. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 179–193. [Google Scholar]
  25. Zaiter, M.A.; Lherbier, R.; Faour, G.; Bazzi, O.; Noyer, J.C. 3D LiDAR Extrinsic Calibration Method using Ground Plane Model Estimation. In Proceedings of the IEEE 8th International Conference on Connected Vehicles and Expo (ICCVE), Graz, Austria, 4–8 November 2019. [Google Scholar]
  26. Taubin, G. 3D Rotations. IEEE Comput. Graph. Appl. 2011, 31, 84–89. [Google Scholar] [CrossRef] [PubMed]
  27. Duc-Hung, L.; Cong-Kha, P.; Trang, N.T.T.; Tu, B.T. Parameter extraction and optimization using Levenberg-Marquardt algorithm. In Proceedings of the Fourth International Conference on Communications and Electronics (ICCE), Hue, Vietnam, 1–3 August 2012; pp. 434–437. [Google Scholar] [CrossRef]
Figure 1. (a) No orientation; (b) practical orientation concept; and (c) scientific orientation concept.
Figure 2. Large sparsity area vs. high density points.
Figure 3. The proposed extrinsic calibration method block diagram.
Figure 4. (a) Distributed ground noisy points $c_w$ about the real plane $(P_{re})$; and (b) distributed points $c_{H1}$ along the horizontal plane $(P_H)$.
Figure 5. (1) Fitting the lines passing through the points of each $\zeta = 10°$ of consecutive azimuth; (2) intersection points $s$ of each pair of symmetric lines $(l)$ and $(l')$; (3) fitting line $(v)$ that passes through the points $s$ and the origin $O$; and (4) angle $\hat{\Psi}_{z2}$ formed by the line $(v)$ and the y-axis.
Figure 6. (a) Points $c_{H1}$ before LSCA; and (b) points $c_{H2}$ after LSCA, rotated by $\hat{\Psi}_{z2}$ about the z-axis.
Figure 7. The variation of $\sigma_{d_{w/i}}$ in terms of precision and robustness.
Figure 8. The variation of $\sigma_{\rho_{re}/\rho_i}$ and $\sigma_{\rho_w/\rho_i}$ in terms of precision and robustness.
Figure 9. Height recovery in terms of precision and robustness.
Figure 10. The variation of $PF_i$ in terms of precision and robustness.
Figure 11. VLP-16 LiDAR mounted on a vehicle, oriented toward the ground.
Figure 12. The variation of $\sigma_{\rho_w/\rho_i}$ with respect to the LiDAR frame.
Figure 13. Uncalibrated and calibrated LiDAR frames from Acquisitions 1 and 2.
Table 1. VLP-16 features.

| Features | VLP-16 |
| --- | --- |
| Laser beams | 16 |
| Horizontal FOV | 360° |
| Vertical FOV | −15° to +15° |
| Azimuth angular resolution | 0.1°, 0.2°, or 0.4° |
| Elevation angular resolution | 2° |
| Maximum range accuracy $\sigma_\rho$ | 3 cm |
