Article

LiDAR-Stabilised GNSS-IMU Platform Pose Tracking

School of Mechanical and Mining Engineering, The University of Queensland, Brisbane, QLD 4072, Australia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2248; https://doi.org/10.3390/s22062248
Submission received: 8 February 2022 / Revised: 7 March 2022 / Accepted: 11 March 2022 / Published: 14 March 2022
(This article belongs to the Section Navigation and Positioning)

Abstract

The requirement to estimate the six degree-of-freedom pose of a moving platform frequently arises in automation applications. It is common to estimate platform pose by the fusion of global navigation satellite system (GNSS) measurements and translational acceleration and rotational rate measurements from an inertial measurement unit (IMU). This paper considers a specific situation where two GNSS receivers and one IMU are used and gives the full formulation of a Kalman filter-based estimator to do this. A limitation in using this sensor set is the difficulty of obtaining accurate estimates of the degree of freedom corresponding to rotation about the line passing through the two GNSS receiver antenna centres. The GNSS-aided IMU formulation is extended to incorporate LiDAR measurements in both known and unknown environments to stabilise this degree of freedom. The performance of the pose estimator is established by comparing expected LiDAR range measurements with actual range measurements. Distributions of the terrain point-to-model error are shown to improve from 0.27 m mean error to 0.06 m when the GNSS-aided IMU estimator is augmented with LiDAR measurements. This precision is marginally degraded to 0.14 m when the pose estimator is operated in an a priori unknown environment.

1. Introduction

An inertial measurement unit (IMU) aided by global navigation satellite system (GNSS) information is often used to track the six degree-of-freedom (DOF) pose (here, pose refers to the six degree-of-freedom position (x, y, z) and orientation (θ, ϕ, ψ) of the platform) of a moving platform. Solutions come in many forms but are broadly categorised as being tightly or loosely coupled depending on the level of sensor integration. The rationale for using the two sensor types is that GNSS receivers provide low-frequency information while the IMU provides information about higher frequency motion, noting that when stationary, the IMU measures gravity, which can be considered zero-frequency (DC) information. In a typical implementation, measurements from the two sensors are combined using optimal estimation methods, namely Kalman filtering, to maintain an estimate of the pose of the platform frame P as it moves relative to a global frame G.
Where three GNSS receivers with non-collinear antennas are used, the low-frequency pose of the platform can be fully determined to the precision and bandwidth of the receivers. Each GNSS receiver provides positional knowledge of its antenna centre to a precision of approximately 0.05 m at a bandwidth from 0 Hz to 2–5 Hz (with measurement updates typically provided at 10 Hz). Higher frequency motion is then determined from information provided by the IMU, which will typically provide acceleration and angular rate information from 0 Hz to 50 Hz, at update rates of 50 Hz to 100 Hz. Such an arrangement provides a clean division of labour, with the Kalman filter-based pose estimator serving to combine the disparate frequency information provided by the two sensor types.
However, where two GNSS receivers are used, only five of the six degrees of freedom of the platform can be established from the information provided by the GNSS receivers. The platform’s rotational orientation about the line drawn through the antenna centres is not defined. In theory, gravity can be used to resolve this, provided the IMU is placed so that the gravity vector through the sensor origin does not intersect the antenna centre line. In practice, however, for a similarly structured estimator, start-up bias and the conflation of the gravity measurement with accelerations due to platform motion limit the accuracy of pose estimates.
This paper develops and evaluates a method for using three-dimensional LiDAR measurements to stabilise pose estimates for platforms having two GNSS receivers. We consider two situations: (i) where the environment around the platform is a priori known, and (ii) where it is not. LiDAR sensors are often present on robotic platforms as components of a perception system. Their use for stabilising pose estimates becomes relevant where three GNSS receivers are impractical or cost-prohibitive.
Pose estimation is clearly fertile ground for research and has been for some time. Google Scholar identifies no fewer than 4650 publications that contain all of the terms “GNSS”, “IMU”, and “loosely coupled”. A total of 851 of these were published after 2020; the first was published in 1968 [1]. If the term “pose” is included, 1050 papers are identified; 279 of these were published after 2020, while the earliest dates from 1982 [2]. Adding the term “LiDAR” reduces the set to 452, of which 167 have been published since 2020, with the earliest dating from 1981 [3]. These, and the many other papers that use alternative but equivalent terminology, span a variety of approaches and applications, from terrestrial [4,5,6] to aeronautical [7,8,9,10,11] and space [12,13,14,15]. In some instances, the search terms appear in reference to the technology being used for a specific application, e.g., precision agriculture [16,17,18,19] and construction [20,21]. In others, reference is made to implementation and algorithms. These are described at various levels of detail, but often with ambiguity.
Methods for stabilising pose estimates with LiDAR measurements exist and generally fall into two categories. The first are feature-based methods which seek to identify features, sometimes known as landmarks, in LiDAR point clouds and maintain an estimate of their location. Features can be lines or planes [22,23,24], curves [25,26], or corners [27]. Feature-based methods can also match sequential LiDAR scans, solving for the incremental change in pose between scans [28,29,30,31,32].
An alternative approach uses model-based methods. These generally compare measurements from a ranging sensor with a model of the environment, searching for the pose that best fits the measurements to the model.
Ref. [33] propose an airborne solution for aiding a navigation system through the use of a large-scale terrain model and LiDAR sensors. Similarly, Ref. [34] augment pose estimates by fitting LiDAR measurements against a known two-dimensional environment model. Refs. [35,36] show how pose estimates of known objects can be obtained from only LiDAR scans by finding the pose that is most likely among the set of all possible poses.
Other work has focused on the augmentation of pose when GNSS measurements are degraded or unavailable, such as loss of real-time kinematic (RTK) corrections. Ref. [37] develop a solution that uses a known geometric model of the world and that augments GNSS measurements with a pseudo-measurement of the expected height of the receiver. Ref. [38] use a conceptually similar approach, but apply it to aerial vehicles.
Work on model-based methods has also focused on building an environment model simultaneously with localisation within this model. One solution, presented by [39], develops an algorithm for a LiDAR- and GNSS-aided IMU navigation system for use in an airborne platform that only maps the terrain when GNSS and IMU measurements are available and reliable. When the integrity of the GNSS-aided IMU navigation system is compromised, LiDAR observations are fitted to the generated terrain model through an ICP-based method. The authors find that the solution is robust and improves the accuracy of the pose when the GNSS system is in a degraded state.
Ref. [40] present a method that augments a GNSS-aided IMU and odometry navigation system with a scanning LiDAR sensor. The authors estimate the environment using the LiDAR scanner as they move through an urban scene, and remove dynamic obstacles by compressing this model to a ground plane, coloured based on infrared reflectivity from LiDAR return intensities. They identify that their novel navigation system is up to an order of magnitude more accurate than a GNSS-aided IMU and odometry navigation system.
Our rationale for writing this paper is twofold. First, we believe the approach we propose for the integration of LiDAR measurement into a GNSS-aided IMU solution is novel. Second, we have repeatedly sought clear descriptions of the algorithms others have used, only to find missing details or gaps that have frustrated our implementation efforts. Here, we attempt to give a full and complete description of a loosely coupled GNSS-aided IMU navigation system in the expectation that it will be useful to others.

2. Experimental Platform

Figure 1 shows the experimental platform considered in this paper. The machine is called a track-type tractor or bulldozer and has applications in agriculture, construction, and mining. In all these domains, tracking spatial pose is needed for technology applications that include production reporting, guidance, and automation.
Figure 1 shows the coordinate frames in which variables relevant to the development of the pose estimation system are represented. The global frame is denoted as G. Frame P describes the pose of the tractor in Frame G via a transformation $T_P(\cdot,\cdot)$. Sensors are described relative to Frame P: Frame I describes the position and orientation of the IMU; Frame L describes the position and orientation of a LiDAR; and $G_1$ and $G_2$ are the centres of the antennas of two GNSS receivers mounted on the tractor cabin.
The pose of the tractor is defined by the state vector,
$$\mathbf{x} = \begin{bmatrix} \mathbf{p}_P & \dot{\mathbf{p}}_P & \ddot{\mathbf{p}}_P & \boldsymbol{\Phi}_P & \boldsymbol{\omega}_P & \mathbf{b}_a & \mathbf{b}_r \end{bmatrix}^T. \tag{1}$$
The 21 state parameters of the state vector are intended to accommodate a constant acceleration model for translation and a constant rate model for attitude. Here,
  • $\mathbf{p}_P = [x, y, z]^T$ is the coordinate of the origin of P in G;
  • $\dot{\mathbf{p}}_P = [\dot{x}, \dot{y}, \dot{z}]^T$ is the velocity of point $\mathbf{p}_P$ in G;
  • $\ddot{\mathbf{p}}_P = [\ddot{x}, \ddot{y}, \ddot{z}]^T$ is the acceleration of point $\mathbf{p}_P$ in G;
  • $\boldsymbol{\Phi}_P = [\theta, \phi, \psi]^T$ describes the attitude of Frame P in G as roll, pitch, and yaw;
  • $\boldsymbol{\omega}_P = [\omega_x, \omega_y, \omega_z]^T$ describes the rotational velocity of Frame P in G;
  • $\mathbf{b}_a = [b_{\ddot{x}}, b_{\ddot{y}}, b_{\ddot{z}}]^T$ describes the acceleration bias of the IMU in Frame I;
  • $\mathbf{b}_r = [b_{\omega_x}, b_{\omega_y}, b_{\omega_z}]^T$ is the rotational rate bias of the IMU in Frame I.
The transformation matrix $T_P(\boldsymbol{\Phi}_P, \mathbf{p}_P)$ is given by
$$T_P(\boldsymbol{\Phi}_P, \mathbf{p}_P) = \begin{bmatrix} R_P(\boldsymbol{\Phi}_P) & \mathbf{p}_P \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix}, \tag{2}$$
where
$$R_P(\boldsymbol{\Phi}_P) = \begin{bmatrix} c(\psi)c(\phi) & c(\psi)s(\phi)s(\theta) - s(\psi)c(\theta) & c(\psi)s(\phi)c(\theta) + s(\psi)s(\theta) \\ s(\psi)c(\phi) & s(\psi)s(\phi)s(\theta) + c(\psi)c(\theta) & s(\psi)s(\phi)c(\theta) - c(\psi)s(\theta) \\ -s(\phi) & c(\phi)s(\theta) & c(\phi)c(\theta) \end{bmatrix}. \tag{3}$$
Here, $c(\cdot)$ is the cosine function and $s(\cdot)$ is the sine function. Note, $T_P(\cdot,\cdot)$ transforms points in Frame P into Frame G; equally, it describes the location of Frame P in Frame G. The top left $3\times3$ block of the $T_P(\cdot,\cdot)$ matrix is the rotation matrix which aligns Frame P with Frame G.
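To make the convention concrete, the following minimal Python/NumPy sketch (ours, not the authors' code) builds $R_P$ and $T_P$ from the roll, pitch, and yaw angles as in Equations (2) and (3); the function names are illustrative.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """R_P = Rz(yaw) @ Ry(pitch) @ Rx(roll), matching Equation (3)."""
    ct, st = np.cos(roll), np.sin(roll)      # theta: roll
    cp, sp = np.cos(pitch), np.sin(pitch)    # phi:   pitch
    cy, sy = np.cos(yaw), np.sin(yaw)        # psi:   yaw
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def homogeneous_transform(Phi_P, p_P):
    """T_P(Phi_P, p_P): maps points expressed in Frame P into Frame G (Equation (2))."""
    T = np.eye(4)
    T[:3, :3] = rotation_matrix(*Phi_P)
    T[:3, 3] = p_P
    return T

# Example: a point 5 m ahead of the platform origin, platform yawed 90 degrees.
T = homogeneous_transform(np.array([0.0, 0.0, np.pi / 2]), np.array([10.0, 20.0, 0.5]))
print(T @ np.array([5.0, 0.0, 0.0, 1.0]))   # -> approximately [10, 25, 0.5, 1]
```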
The IMU provides measurements at 50 Hz; the GNSS receivers are RTK-enabled and provide observations at 10 Hz. The LiDAR is a Velodyne VLP-16 and provides range measurements from 16 beams, equally spaced across a vertical field of view of 30°, over a 360° horizontal field of view, rotating at 10 Hz. Other details of the sensors are given in Table 1.

3. Pose Estimation Using GNSS, IMU, and LiDAR Measurements

The pose estimator has the structure given in Figure 2. Each block corresponds to a Kalman filter update. On receipt of a sensor measurement, a filter update occurs that depends on the type of the measurement (IMU, GNSS, LiDAR) with the structure designed to accommodate different data rates, including, prospectively, sequences of lost measurements. The structure takes advantage of the high-frequency IMU measurements, updated through an indirect Kalman filter, and the low-frequency GNSS and LiDAR measurements, which are updated through Kalman and Information filters, respectively. The structure depicted in Figure 2 is generally known as an indirect, or error-state, Kalman filter. It is the common method for implementing IMU-aided navigation solutions [42,43,44,45,46,47,48,49,50]. The formulation below assumes that IMU data arrive at a higher rate than either GNSS or LiDAR measurements.
Algorithm 1 gives the algorithm based on updates from the IMU, GNSS, and LiDAR measurements.
Algorithm 1: LiDAR- and GNSS-aided IMU navigation solution algorithm.
Sensors 22 02248 i001

4. Process Model

The three filter updates of Figure 2 share the same process model, which is assumed linear with the discrete-time form
$$\mathbf{x}_{k+1} = F(t_{k+1} - t_k)\,\mathbf{x}_k + \int_{t_k}^{t_{k+1}} F(t_{k+1} - \tau)\, d\boldsymbol{\beta}(\tau). \tag{4}$$
The state transition model $F(\Delta t_{k+1}) = F(t_{k+1} - t_k)$ predicts the platform’s state vector assuming a constant acceleration for translations and a constant rate for rotations. The state transition model also predicts the IMU acceleration and rotational rate biases through use of constants determined from experimental observations (see [51]). The $21 \times 21$ matrix $F(\cdot)$ has the form
$$F(\Delta t_k) = \begin{bmatrix}
I_{3\times3} & \Delta t_k I_{3\times3} & \tfrac{\Delta t_k^2}{2} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & I_{3\times3} & \Delta t_k I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & \Delta t_k I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \tfrac{T_{b_{\ddot{x},\ddot{y},\ddot{z}}}}{T_{b_{\ddot{x},\ddot{y},\ddot{z}}} + \Delta t_k} I_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \tfrac{T_{b_{\omega_x,\omega_y,\omega_z}}}{T_{b_{\omega_x,\omega_y,\omega_z}} + \Delta t_k} I_{3\times3}
\end{bmatrix}. \tag{5}$$
Here, $T_{b_{\ddot{x},\ddot{y},\ddot{z}}}$ describes the empirically determined accelerometer bias parameters, and $T_{b_{\omega_x,\omega_y,\omega_z}}$ the empirically determined gyroscope bias parameters. Observe that
$$\lim_{\Delta t_k \to 0} F(\Delta t_k) = I_{21\times21}. \tag{6}$$
The second term of the right-hand side of Equation (4) describes the process noise which accounts for modelling approximations and model integration errors. We define
$$\mathbf{w}(t_k) = \int_{t_k}^{t_{k+1}} F(t_{k+1} - \tau)\, d\boldsymbol{\beta}(\tau), \tag{7}$$
where $\mathbf{w}(\cdot)$ is a white Gaussian discrete-time process and $\boldsymbol{\beta}(\cdot)$ is Brownian motion with diffusion $\sigma_Q^2(t)$ such that for consecutive measurement times $t_k$ and $t_{k+1}$,
$$\mathrm{E}\left[\boldsymbol{\beta}(t_k) - \boldsymbol{\beta}(t_{k+1})\right] = 0, \tag{8}$$
$$\mathrm{E}\left[\left(\boldsymbol{\beta}(t_k) - \boldsymbol{\beta}(t_{k+1})\right)\left(\boldsymbol{\beta}(t_k) - \boldsymbol{\beta}(t_{k+1})\right)^T\right] = \int_{t_k}^{t_{k+1}} \sigma_Q^2(t)\, dt. \tag{9}$$
If the sample period is assumed to be small compared to the auto-correlation times of the process model modes,
$$\sigma_{Q,k+1}^2 = \mathrm{E}\left[\mathbf{w}(t_k)\,\mathbf{w}^T(t_k)\right] \approx \sigma_Q^2(t_k)\cdot\left(t_{k+1} - t_k\right). \tag{10}$$
Observe that if the diffusion $\sigma_Q^2(t)$ is time-invariant, the process noise covariance, $\sigma_{Q,k}^2$, increases linearly with the time since the last measurement. As this time increases, the process noise grows, so the process model is weighted less in the Kalman filter update and the measurements are weighted more.
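As an illustration of the process model only, the sketch below assembles the block-structured $F(\Delta t_k)$ of Equation (5) and a process noise covariance that grows linearly with $\Delta t_k$, as in Equation (10); the bias time constants and diffusion values are placeholders, not the values used in the paper.

```python
import numpy as np

def state_transition(dt, tau_acc=300.0, tau_gyro=300.0):
    """21x21 block F(dt): constant-acceleration translation, constant-rate attitude,
    first-order decay on the IMU biases (the tau values here are illustrative)."""
    I3 = np.eye(3)
    F = np.eye(21)
    F[0:3, 3:6] = dt * I3                                  # position <- velocity
    F[0:3, 6:9] = 0.5 * dt**2 * I3                         # position <- acceleration
    F[3:6, 6:9] = dt * I3                                  # velocity <- acceleration
    F[9:12, 12:15] = dt * I3                               # attitude <- angular rate
    F[15:18, 15:18] = (tau_acc / (tau_acc + dt)) * I3      # accelerometer bias decay
    F[18:21, 18:21] = (tau_gyro / (tau_gyro + dt)) * I3    # gyro bias decay
    return F

def process_noise(dt, sigma_q=np.full(21, 1e-3)):
    """Process noise covariance growing linearly with dt, as in Equation (10)."""
    return np.diag(sigma_q) * dt

F = state_transition(0.02)          # 50 Hz IMU interval
Q = process_noise(0.02)
print(F.shape, Q.shape)             # (21, 21) (21, 21)
```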

5. Pose State Updates from IMU Measurements

5.1. IMU Measurement Model

The IMU is installed on the platform frame close to the platform’s centre of gravity and is defined by Frame I—see Figure 3. IMU measurements are made in an inertial frame instantaneously aligned with Frame I. The IMU measurement model follows from Equation (3) and takes the form
$$\mathbf{z}_I = h_I(\mathbf{x}) + \mathbf{v}_{z,I}, \tag{11}$$
where
$$\mathbf{z}_I = \begin{bmatrix} \mathbf{a}_I & \boldsymbol{\omega}_I \end{bmatrix}^T = \begin{bmatrix} \ddot{x}_I & \ddot{y}_I & \ddot{z}_I & \dot{\theta}_I & \dot{\phi}_I & \dot{\psi}_I \end{bmatrix}^T \tag{12}$$
and measurement noise $\mathbf{v}_{z,I}$ is assumed to be white and Gaussian with covariance $\sigma_{z,I}^2$—see Table 2.
To determine the function $h_I(\cdot)$, note that the position of the origin of the IMU in the global frame is given by
$$\begin{bmatrix} \mathbf{p}_I(t) \\ 1 \end{bmatrix} = \begin{bmatrix} R_P(t) & \mathbf{p}_P(t) \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} \mathbf{p}_i \\ 1 \end{bmatrix}, \tag{13}$$
where $\mathbf{p}_i$ is the (fixed and known) position of the origin of Frame I in Frame P. The velocity of this point at time $t$ is
$$\begin{bmatrix} \dot{\mathbf{p}}_I(t) \\ 0 \end{bmatrix} = \begin{bmatrix} \dot{R}_P(t) & \dot{\mathbf{p}}_P(t) \\ \mathbf{0}_{1\times3} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{p}_i \\ 1 \end{bmatrix} = \begin{bmatrix} \Omega_P(t) & \mathbf{v}_P(t) \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} \mathbf{p}_I(t) \\ 1 \end{bmatrix}, \tag{14}$$
where $\Omega_P(t)$ is the anti-symmetric matrix representing the angular velocity of Frame P in Frame G. The matrix $\Omega_P(t)$ is
$$\Omega_P = \mathrm{sk}(\boldsymbol{\omega}_P) = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}. \tag{15}$$
The acceleration of the origin of Frame I in Frame G is found by differentiating Equation (14), i.e.,
$$\begin{bmatrix} \ddot{\mathbf{p}}_I(t) \\ 0 \end{bmatrix} = \begin{bmatrix} \ddot{R}_P(t) & \ddot{\mathbf{p}}_P(t) \\ \mathbf{0}_{1\times3} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{p}_i \\ 1 \end{bmatrix}. \tag{16}$$
Noting that if a constant angular rate is assumed (i.e., $\dot{\Omega}_P = 0$),
$$\ddot{R}_P(t) = \dot{\Omega}_P(t)\, R_P(t) + \Omega_P(t)\, \dot{R}_P(t) = \dot{\Omega}_P(t)\, R_P(t) + \Omega_P^2(t)\, R_P(t) = \Omega_P^2(t)\, R_P(t), \tag{17}$$
which gives,
$$\begin{bmatrix} \ddot{\mathbf{p}}_I(t) \\ 0 \end{bmatrix} = \begin{bmatrix} \Omega_P^2(t)\, R_P(t) & \ddot{\mathbf{p}}_P(t) \\ \mathbf{0}_{1\times3} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{p}_i(t) \\ 1 \end{bmatrix}. \tag{18}$$
The measurement equations follow from Equation (18) and can be expressed in terms of the pose state vector:
$$\mathbf{z}_I = h_I(\mathbf{x}) = \begin{bmatrix} \mathbf{a}_I \\ \boldsymbol{\omega}_I \end{bmatrix} = \begin{bmatrix} R_I^T\, R_P^T(\boldsymbol{\Phi}_P)\left( \Omega_P^2(\boldsymbol{\omega}_P)\, R_P(\boldsymbol{\Phi}_P)\, \mathbf{p}_i + \ddot{\mathbf{p}}_P + \mathbf{g} \right) + \mathbf{b}_a \\ R_I^T\, R_P^T(\boldsymbol{\Phi}_P)\, \boldsymbol{\omega}_P + \mathbf{b}_r \end{bmatrix}. \tag{19}$$
Note that the rotation matrix $R_I^T R_P^T(\cdot)$ serves to align the acceleration and rate vectors with Frame I, in which the measurements are made. The vector $\mathbf{b}_r$ is the rate-gyro bias, $\mathbf{b}_a$ is the accelerometer bias, and $\mathbf{g}$ denotes gravitational acceleration.
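A sketch of how Equation (19) might be evaluated in code is given below; it reuses the `rotation_matrix` helper from the Section 2 sketch, the indexing follows the state ordering of Equation (1), and the gravity sign convention is an assumption of this sketch rather than something stated in the paper.

```python
import numpy as np

def skew(w):
    """Anti-symmetric matrix of Equation (15)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def h_imu(x, R_I, p_i, g=np.array([0.0, 0.0, -9.81])):
    """Expected IMU measurement of Equation (19) for the state ordering of Equation (1).
    R_I and p_i locate the IMU frame in the platform frame; the gravity sign
    convention here is an assumption. rotation_matrix is the Section 2 helper."""
    p_dd = x[6:9]                   # platform acceleration in G
    Phi = x[9:12]                   # roll, pitch, yaw
    omega = x[12:15]                # angular rate of Frame P in G
    b_a, b_r = x[15:18], x[18:21]   # accelerometer and rate-gyro biases
    R_P = rotation_matrix(*Phi)
    Omega = skew(omega)
    a_I = R_I.T @ R_P.T @ (Omega @ Omega @ R_P @ p_i + p_dd + g) + b_a
    w_I = R_I.T @ R_P.T @ omega + b_r
    return np.concatenate([a_I, w_I])
```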
The Jacobian of $h_I(\cdot)$ can be found analytically from
$$\nabla h_I(\mathbf{x}) = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_{3\times3} & M_1 & M_2 & I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & R_I^T\, \dfrac{\partial R_P(\boldsymbol{\Phi}_P)}{\partial \boldsymbol{\Phi}_P}\, \boldsymbol{\omega}_P & R_I^T\, R_P^T(\boldsymbol{\Phi}_P) & 0_{3\times3} & I_{3\times3} \end{bmatrix}, \tag{20}$$
where
$$M_1 = R_I^T\, \frac{\partial R_P^T(\boldsymbol{\Phi}_P)}{\partial \boldsymbol{\Phi}_P} \left( \Omega_P^2(\boldsymbol{\omega}_P)\, R_P(\boldsymbol{\Phi}_P)\, \mathbf{p}_i + \ddot{\mathbf{p}}_P + \mathbf{g} \right) + R_I^T\, R_P^T(\boldsymbol{\Phi}_P)\, \Omega_P^2(\boldsymbol{\omega}_P)\, \frac{\partial R_P(\boldsymbol{\Phi}_P)}{\partial \boldsymbol{\Phi}_P}\, \mathbf{p}_i, \tag{21}$$
$$M_2 = R_I^T\, R_P^T(\boldsymbol{\Phi}_P)\, \frac{\partial \Omega_P^2(\boldsymbol{\omega}_P)}{\partial \boldsymbol{\omega}_P}\, R_P(\boldsymbol{\Phi}_P)\, \mathbf{p}_i. \tag{22}$$

5.2. IMU Updates

The approach to updating measurements from the IMU involves estimating the error on the state estimate, $\delta\hat{\mathbf{x}}_{k|k}$, at time $t_k$ given measurements to time $t_k$, and combining this with the propagated (uncorrected) state estimate, $\hat{\mathbf{x}}^{-}_{k|k}$, carried forward from the most recent GNSS or LiDAR update (the superscript $-$ marks quantities not yet corrected by the error state). The updated state estimate, $\hat{\mathbf{x}}_{k|k}$, is then computed as the sum,
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}^{-}_{k|k} + \delta\hat{\mathbf{x}}_{k|k}. \tag{23}$$
The IMU filter is reset after each GNSS or LiDAR update: the propagated state estimate and its covariance are set from the error-corrected state estimate and error-corrected state estimate covariance,
$$\hat{\mathbf{x}}^{-}_{k-1|k-1} = \hat{\mathbf{x}}_{k-1|k-1}, \tag{24}$$
$$\sigma^{2}_{\hat{x}^{-},k-1|k-1} = \sigma^{2}_{\hat{x},k-1|k-1}, \tag{25}$$
and the error state estimate and error state estimate covariance are initialised to zero:
$$\delta\hat{\mathbf{x}}_{k-1|k-1} = \mathbf{0}_{21\times1}, \tag{26}$$
$$\sigma^{2}_{\delta\hat{x},k-1|k-1} = \mathbf{0}_{21\times21}. \tag{27}$$
On receipt of an IMU measurement, the calculation of $\delta\hat{\mathbf{x}}_{k|k}$ starts with the prediction of the state estimate,
$$\hat{\mathbf{x}}^{-}_{k|k-1} = F(\Delta t_k)\, \hat{\mathbf{x}}^{-}_{k-1|k-1}, \tag{28}$$
the error state,
$$\delta\hat{\mathbf{x}}_{k|k-1} = F(\Delta t_k)\, \delta\hat{\mathbf{x}}_{k-1|k-1}, \tag{29}$$
the state covariance,
$$\sigma^{2}_{\hat{x}^{-},k|k-1} = F(\Delta t_k)\, \sigma^{2}_{\hat{x}^{-},k-1|k-1}\, F^T(\Delta t_k), \tag{30}$$
and the error state covariance,
$$\sigma^{2}_{\delta\hat{x},k|k-1} = F(\Delta t_k)\, \sigma^{2}_{\delta\hat{x},k-1|k-1}\, F^T(\Delta t_k) + \sigma^{2}_{Q,k}. \tag{31}$$
The innovation is computed from
$$\boldsymbol{\nu}_{I,k} = \mathbf{z}_{I,k} - \left( h_I(\hat{\mathbf{x}}^{-}_{k|k-1}) + \nabla h_I(\hat{\mathbf{x}}^{-}_{k|k-1})\, \delta\hat{\mathbf{x}}_{k|k-1} \right) = \mathbf{z}_{I,k} - \hat{\mathbf{z}}_{I,\hat{x}^{-}_{k|k-1}} - \hat{\mathbf{z}}_{I,\delta\hat{x}_{k|k-1}}, \tag{32}$$
and the innovation covariance, $\sigma^{2}_{\nu,I,k}$, and Kalman gain, $W_{I,k}$, are found from
$$\sigma^{2}_{\nu,I,k} = \nabla h_I(\hat{\mathbf{x}}^{-}_{k|k-1})\, \sigma^{2}_{\delta\hat{x},k|k-1}\, \nabla h_I^T(\hat{\mathbf{x}}^{-}_{k|k-1}) + \sigma^{2}_{z,I}, \tag{33}$$
$$W_{I,k} = \sigma^{2}_{\delta\hat{x},k|k-1}\, \nabla h_I^T(\hat{\mathbf{x}}^{-}_{k|k-1})\, \left(\sigma^{2}_{\nu,I,k}\right)^{-1}. \tag{34}$$
The corrected error state and the corrected error state covariance are computed from
$$\delta\hat{\mathbf{x}}_{k|k} = \delta\hat{\mathbf{x}}_{k|k-1} + W_{I,k}\, \boldsymbol{\nu}_{I,k}, \tag{35}$$
$$\sigma^{2}_{\delta\hat{x},k|k} = \sigma^{2}_{\delta\hat{x},k|k-1} - W_{I,k}\, \sigma^{2}_{\nu,I,k}\, W_{I,k}^T. \tag{36}$$
The predicted but uncorrected state estimate, $\hat{\mathbf{x}}^{-}_{k|k-1}$, and state estimate covariance, $\sigma^{2}_{\hat{x}^{-},k|k-1}$, are carried forward to the next iteration of the filter,
$$\hat{\mathbf{x}}^{-}_{k|k} = \hat{\mathbf{x}}^{-}_{k|k-1}, \tag{37}$$
$$\sigma^{2}_{\hat{x}^{-},k|k} = \sigma^{2}_{\hat{x}^{-},k|k-1}. \tag{38}$$
The error-corrected state estimate, $\hat{\mathbf{x}}_{k|k}$, which incorporates the newest IMU observation, is computed by adding the error state estimate,
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}^{-}_{k|k} + \delta\hat{\mathbf{x}}_{k|k}, \tag{39}$$
$$\sigma^{2}_{\hat{x},k|k} = \sigma^{2}_{\hat{x}^{-},k|k} + \sigma^{2}_{\delta\hat{x},k|k}. \tag{40}$$
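For readers implementing the error-state update, the following sketch condenses Equations (28)–(40) into a single function; the argument names and the callable interfaces for the models are ours, not the paper's.

```python
import numpy as np

def imu_update(x_prop, P_prop, dx, dP, z_imu, dt, F_fn, Q_fn, h_fn, H_fn, R_imu):
    """One indirect (error-state) EKF step following Equations (28)-(40).
    x_prop/P_prop are the propagated-but-uncorrected state and covariance,
    dx/dP the error state and its covariance; the callable arguments are this
    sketch's way of injecting the models defined earlier in the paper."""
    F = F_fn(dt)
    x_pred = F @ x_prop                        # propagate the uncorrected state
    dx_pred = F @ dx                           # propagate the error state
    P_pred = F @ P_prop @ F.T                  # uncorrected state covariance
    dP_pred = F @ dP @ F.T + Q_fn(dt)          # error state covariance + process noise

    H = H_fn(x_pred)                           # measurement Jacobian
    nu = z_imu - (h_fn(x_pred) + H @ dx_pred)  # innovation
    S = H @ dP_pred @ H.T + R_imu              # innovation covariance
    W = dP_pred @ H.T @ np.linalg.inv(S)       # Kalman gain

    dx_new = dx_pred + W @ nu                  # corrected error state
    dP_new = dP_pred - W @ S @ W.T             # corrected error state covariance

    x_corr = x_pred + dx_new                   # error-corrected state estimate
    P_corr = P_pred + dP_new                   # error-corrected covariance
    return x_pred, P_pred, dx_new, dP_new, x_corr, P_corr
```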
The indirect extended Kalman filter (IEKF) algorithm for IMU updates is detailed in Algorithm 2.    
Algorithm 2: IMUUpdate - An Indirect Extended Kalman Filter (IEKF) algorithm for state estimation using IMU observations.
Sensors 22 02248 i002

6. Pose State Updates from GNSS Measurements

6.1. GNSS Measurement Model

The GNSS measurement model gives the locations of the GNSS receivers in the global frame and takes the form
$$\mathbf{z}_G = h_G(\mathbf{x}) + \mathbf{v}_{z,G}. \tag{41}$$
Here, $h_G(\cdot)$ gives the locations of the GNSS receivers as a function of the state,
$$\mathbf{z}_G = h_G(\mathbf{x}) = \begin{bmatrix} R_P(\boldsymbol{\Phi}_P)\, \mathbf{p}_{G_1} + \mathbf{p}_P \\ R_P(\boldsymbol{\Phi}_P)\, \mathbf{p}_{G_2} + \mathbf{p}_P \end{bmatrix}, \tag{42}$$
where $\mathbf{p}_{G_1}$ and $\mathbf{p}_{G_2}$ are the coordinates of the GNSS antennas in the platform frame (see Figure 4). The measurement vector $\mathbf{z}_G$ is the stacked vector of the antennas’ Cartesian coordinates in Frame G. Equation (42) can readily be extended if a third GNSS receiver were mounted on the platform. The measurement noise, $\mathbf{v}_{z,G}$, is assumed to be white and Gaussian with covariance $\sigma^{2}_{z,G}$ (see Table 3).
The GNSS measurement Jacobian is given by
$$\nabla h_G(\mathbf{x}) = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & \dfrac{\partial R_P^T(\boldsymbol{\Phi}_P)}{\partial \boldsymbol{\Phi}_P}\, \mathbf{p}_{G_1} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ I_{3\times3} & 0_{3\times3} & 0_{3\times3} & \dfrac{\partial R_P^T(\boldsymbol{\Phi}_P)}{\partial \boldsymbol{\Phi}_P}\, \mathbf{p}_{G_2} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{bmatrix}. \tag{43}$$
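A minimal sketch of the GNSS measurement function of Equation (42) follows, again reusing the `rotation_matrix` helper from the Section 2 sketch; the antenna offsets in the example are illustrative.

```python
import numpy as np

def h_gnss(x, p_g1, p_g2):
    """Expected antenna positions in Frame G, stacked as in Equation (42).
    p_g1 and p_g2 are the antenna coordinates in the platform frame."""
    p_P = x[0:3]                    # platform position in G
    Phi = x[9:12]                   # roll, pitch, yaw
    R_P = rotation_matrix(*Phi)     # helper from the Section 2 sketch
    return np.concatenate([R_P @ p_g1 + p_P, R_P @ p_g2 + p_P])

# Example: antennas 1 m either side of the platform origin and 3 m above it.
x = np.zeros(21)
print(h_gnss(x, np.array([0.0, 1.0, 3.0]), np.array([0.0, -1.0, 3.0])))
```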

6.2. GNSS Updates

The estimation of the error-corrected state estimate, $\hat{\mathbf{x}}_{k|k}$, begins with a prediction of the state using the state transition model, $F(\Delta t_k)$, and the previous error-corrected state estimate, $\hat{\mathbf{x}}_{k-1|k-1}$,
$$\hat{\mathbf{x}}_{k|k-1} = F(\Delta t_k)\, \hat{\mathbf{x}}_{k-1|k-1}, \tag{44}$$
and the prediction of the state covariance,
$$\sigma^{2}_{\hat{x},k|k-1} = F(\Delta t_k)\, \sigma^{2}_{\hat{x},k-1|k-1}\, F^T(\Delta t_k) + \sigma^{2}_{Q,k}. \tag{45}$$
The GNSS measurement innovation and innovation covariance are computed from
$$\boldsymbol{\nu}_{G,k} = \mathbf{z}_{G,k} - h_G(\hat{\mathbf{x}}_{k|k-1}) = \mathbf{z}_{G,k} - \hat{\mathbf{z}}_{G,k}, \tag{46}$$
$$\sigma^{2}_{\nu,G,k} = \nabla h_G(\hat{\mathbf{x}}_{k|k-1})\, \sigma^{2}_{\hat{x},k|k-1}\, \nabla h_G^T(\hat{\mathbf{x}}_{k|k-1}) + \sigma^{2}_{z,G}, \tag{47}$$
where $\mathbf{z}_{G,k}$ is the GNSS measurement.
The Kalman gain, $W_{G,k}$, is then computed using
$$W_{G,k} = \sigma^{2}_{\hat{x},k|k-1}\, \nabla h_G^T(\hat{\mathbf{x}}_{k|k-1})\, \left(\sigma^{2}_{\nu,G,k}\right)^{-1}. \tag{48}$$
The error-corrected state estimate, $\hat{\mathbf{x}}_{k|k}$, and error-corrected state estimate covariance, $\sigma^{2}_{\hat{x},k|k}$, are computed using
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + W_{G,k}\, \boldsymbol{\nu}_{G,k}, \tag{49}$$
$$\sigma^{2}_{\hat{x},k|k} = \sigma^{2}_{\hat{x},k|k-1} - W_{G,k}\, \sigma^{2}_{\nu,G,k}\, W_{G,k}^T. \tag{50}$$
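The corresponding EKF cycle, Equations (44)–(50), can be sketched as follows; as before, the argument names and callable interfaces are our own convention.

```python
import numpy as np

def gnss_update(x_prev, P_prev, z_gnss, dt, F_fn, Q_fn, h_fn, H_fn, R_gnss):
    """Standard EKF predict/update step of Section 6.2 (a sketch)."""
    F = F_fn(dt)
    x_pred = F @ x_prev                      # state prediction
    P_pred = F @ P_prev @ F.T + Q_fn(dt)     # covariance prediction

    H = H_fn(x_pred)
    nu = z_gnss - h_fn(x_pred)               # innovation
    S = H @ P_pred @ H.T + R_gnss            # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain

    return x_pred + W @ nu, P_pred - W @ S @ W.T
```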
The GNSS update is given as Algorithm 3.    
Algorithm 3: GNSSUpdate—An extended Kalman filter (EKF) algorithm for state estimation using GNSS observations.
Sensors 22 02248 i003

7. Pose State Updates Using LiDAR Measurements

7.1. LiDAR Measurement Model in a Known Environment

The LiDAR measurement model developed in this section computes the expected range measurements based on the state vector and a triangulated model of the environment, denoted X. The LiDAR measurement model ray-casts from the LiDAR to the environment model X to determine the expected measurement ranges from the LiDAR sensor. In this section, the terrain model X is assumed known; in Section 9, the approach is generalised to unknown terrain.
The LiDAR measurement function is denoted
$$\mathbf{z}_L = h_L(\mathbf{x}, X) + \mathbf{v}_{z,L}, \tag{51}$$
where the LiDAR measurement vector, $\mathbf{z}_L$, is the set of ranges corresponding to each ray from the LiDAR sensor. For $n$ ranges,
$$\mathbf{z}_L = \begin{bmatrix} z_0 & z_1 & \cdots & z_n \end{bmatrix}^T. \tag{52}$$
The measurement noise $\mathbf{v}_{z,L}$ is assumed to be white and Gaussian with covariance $\sigma^{2}_{z,L}$—see Section 7.2.
The LiDAR measurement function, $h_L(\cdot,\cdot)$, determines the expected sensor observations by ray-casting against the known terrain model, $X$, given a pose state estimate (see Figure 5). Each ray emanates from a point $\mathbf{s}_0$ in Frame L known as the sensor origin. The direction of each ray or beam of the LiDAR is defined by a second point $\mathbf{s}_j$, $j = 1, \ldots, n$, with $\|\mathbf{s}_j - \mathbf{s}_0\| = 1$. The locations of these points in Frame G can be found with the transformations
$$\mathbf{q}_0 = T_P(\boldsymbol{\Phi}_P, \mathbf{p}_P)\, T_L\, \mathbf{s}_0, \tag{53}$$
$$\mathbf{q}_j = T_P(\boldsymbol{\Phi}_P, \mathbf{p}_P)\, T_L\, \mathbf{s}_j. \tag{54}$$
Here, $T_L$ describes the location of Frame L in Frame P. This transform is assumed known. See [52] for approaches to finding it.
Each ray is parametrised by a range $r_j$ and can be described by
$$\Gamma(r_j) = \mathbf{q}_0 + r_j\,(\mathbf{q}_j - \mathbf{q}_0). \tag{55}$$
To find $r_j$, it is necessary to first determine the triangle in $X$ which the ray intersects. Denote the vertices of this triangle $\Delta_j = (\boldsymbol{\alpha}_j, \boldsymbol{\beta}_j, \boldsymbol{\gamma}_j)$. Then,
$$r_j = \frac{\mathbf{n}_j \cdot (\boldsymbol{\alpha}_j - \mathbf{q}_0)}{\mathbf{n}_j \cdot (\mathbf{q}_j - \mathbf{q}_0)}, \tag{56}$$
where
$$\mathbf{n}_j = (\boldsymbol{\beta}_j - \boldsymbol{\alpha}_j) \times (\boldsymbol{\gamma}_j - \boldsymbol{\alpha}_j). \tag{57}$$
The ray-triangle intersection is depicted in Figure 6.
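A sketch of the range calculation of Equations (56) and (57) follows; it returns the range to the plane of a given triangle and leaves out the point-in-triangle test used to select the intersected triangle.

```python
import numpy as np

def ray_range(q0, qj, tri):
    """Range along the ray Gamma(r) = q0 + r*(qj - q0) to the plane of the triangle
    tri = (alpha, beta, gamma), per Equations (56) and (57). Returns None for rays
    parallel to the triangle plane."""
    alpha, beta, gamma = tri
    n = np.cross(beta - alpha, gamma - alpha)   # triangle normal, Equation (57)
    denom = n @ (qj - q0)
    if abs(denom) < 1e-12:
        return None
    return (n @ (alpha - q0)) / denom

# Example: a vertical ray 10 m above a horizontal unit triangle.
r = ray_range(np.array([0.2, 0.2, 10.0]), np.array([0.2, 0.2, 9.0]),
              (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
print(r)  # -> 10.0
```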
The measurement model for $n$ ray intersections can be constructed from
$$\mathbf{z}_L = \begin{bmatrix} r_1, & r_2, & \ldots, & r_n \end{bmatrix}^T, \tag{58}$$
and the LiDAR measurement Jacobian from
$$\nabla h_L(\mathbf{x}, X) = \begin{bmatrix} \dfrac{\partial r_1}{\partial \mathbf{p}_P} & 0_{1\times6} & \dfrac{\partial r_1}{\partial \boldsymbol{\Phi}_P} & 0_{1\times9} \\ \dfrac{\partial r_2}{\partial \mathbf{p}_P} & 0_{1\times6} & \dfrac{\partial r_2}{\partial \boldsymbol{\Phi}_P} & 0_{1\times9} \\ \vdots & \vdots & \vdots & \vdots \\ \dfrac{\partial r_n}{\partial \mathbf{p}_P} & 0_{1\times6} & \dfrac{\partial r_n}{\partial \boldsymbol{\Phi}_P} & 0_{1\times9} \end{bmatrix}. \tag{59}$$

7.2. LiDAR Measurement Noise Model

Observations from the LiDAR sensor used in this application are transformed from the LiDAR frame, L, into the global frame, G. However, each of the transforms required to achieve this contains uncertainty. For sensors such as the GNSS receivers and IMU, the uncertainty introduced by these transforms is small, as observations are obtained at the sensor. However, for perception sensors such as LiDAR, which perceive the environment at potentially significant ranges, small transform errors can lead to significant observation errors. For example, a registration error of 1° at 50 m results in ∼0.7 m of endpoint error.
The uncertainty of frame locations and measurements may be propagated from their origin to the point of measurement to determine the effective uncertainty. We consider four sources of uncertainty which we propagate from their origin to the LiDAR ray endpoint (shown in Figure 7). The sources of uncertainty considered are (i) the uncertainty in the global frame location, σ G 2 ; (ii) the estimated location of the platform relative to the global frame, σ P 2 ; (iii) the uncertainty in the LiDAR registration, σ L 2 ; and (iv) the LiDAR’s measurement uncertainty, σ z , L 2 . The Jacobian of each of the relevant transforms is used to transform the uncertainty through the system.
The uncertainty from each of these sources propagates through to the endpoint of the ray as follows:
$$\sigma^{2}_{P,k} = \nabla T_P(\boldsymbol{\Phi}_P, \mathbf{p}_P)\, \sigma^{2}_{G}\, \nabla T_P^T(\boldsymbol{\Phi}_P, \mathbf{p}_P) + \sigma^{2}_{P}, \tag{60}$$
$$\sigma^{2}_{L,k} = \nabla T_L\, \sigma^{2}_{P,k}\, \nabla T_L^T + \sigma^{2}_{L}, \tag{61}$$
$$\sigma^{2}_{z,L,k} = \nabla T_R\, \sigma^{2}_{L,k}\, \nabla T_R^T + \sigma^{2}_{z,L}. \tag{62}$$
Here, the notation $\nabla T_L$ denotes the Jacobian of the homogeneous transform—in this case, the transform from the platform frame to the LiDAR frame.
Figure 7 provides a visual representation of the propagation of uncertainty from the global frame through to the endpoint of the ray. The covariances shown in Figure 7 have been exaggerated for visual clarity.
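The following sketch illustrates first-order covariance propagation through a chain of transforms in the spirit of Equations (60)–(62); the Jacobians and covariance values are placeholders, not the paper's values.

```python
import numpy as np

def propagate_covariance(jacobians, covariances):
    """First-order propagation of uncertainty through a chain of transforms
    (global frame -> platform -> LiDAR -> ray endpoint). jacobians[i] maps errors
    across transform i and covariances[i+1] is the uncertainty added at that stage."""
    sigma = covariances[0]
    for J, added in zip(jacobians, covariances[1:]):
        sigma = J @ sigma @ J.T + added
    return sigma

# Illustrative 3x3 example with identity Jacobians and made-up covariances.
J = [np.eye(3), np.eye(3), np.eye(3)]
C = [np.diag([1e-4, 1e-4, 1e-4]),   # global frame
     np.diag([9e-4, 9e-4, 3e-3]),   # platform pose
     np.diag([1e-4, 1e-4, 1e-4]),   # LiDAR registration
     np.diag([9e-4, 9e-4, 9e-4])]   # range measurement
print(propagate_covariance(J, C).diagonal())
```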

7.3. LiDAR Updates

The number of LiDAR measurements typically exceeds the dimension of the state vector so it is computationally more efficient to use the information form of the extended Kalman filter to perform the pose update from LiDAR measurements [53].
The current state and covariance are transformed into the information space,
$$Y_{k-1|k-1} = \left(\sigma^{2}_{\hat{x},k-1|k-1}\right)^{-1}, \tag{63}$$
$$\hat{\mathbf{y}}_{k-1|k-1} = Y_{k-1|k-1}\, \hat{\mathbf{x}}_{k-1|k-1}. \tag{64}$$
The Fisher information matrix (FIM) and Fisher information vector (FIV) are predicted to the current time step using
$$Y_{k|k-1} = \left( F(\Delta t_k)\, Y_{k-1|k-1}^{-1}\, F^T(\Delta t_k) + \sigma^{2}_{Q,k} \right)^{-1}, \tag{65}$$
$$\hat{\mathbf{y}}_{k|k-1} = Y_{k|k-1}\, F(\Delta t_k)\, \hat{\mathbf{x}}_{k-1|k-1}. \tag{66}$$
The predicted state estimate is obtained from
$$\hat{\mathbf{x}}_{k|k-1} = Y_{k|k-1}^{-1}\, \hat{\mathbf{y}}_{k|k-1}, \tag{67}$$
and the innovation is obtained using the LiDAR observation function, the current predicted state estimate, and the terrain model, $X$:
$$\boldsymbol{\nu}_{L,k} = \mathbf{z}_{L,k} - h_L(\hat{\mathbf{x}}_{k|k-1}, X). \tag{68}$$
The predicted FIV and FIM are corrected using the innovation and the LiDAR range measurements,
$$\hat{\mathbf{y}}_{k|k} = \hat{\mathbf{y}}_{k|k-1} + \nabla h_L^T(\hat{\mathbf{x}}_{k|k-1}, X)\, \left(\sigma^{2}_{z,L}\right)^{-1} \left( \boldsymbol{\nu}_{L,k} + \nabla h_L(\hat{\mathbf{x}}_{k|k-1}, X)\, \hat{\mathbf{x}}_{k|k-1} \right), \tag{69}$$
$$Y_{k|k} = Y_{k|k-1} + \nabla h_L^T(\hat{\mathbf{x}}_{k|k-1}, X)\, \left(\sigma^{2}_{z,L}\right)^{-1}\, \nabla h_L(\hat{\mathbf{x}}_{k|k-1}, X). \tag{70}$$
The FIM and FIV are then transformed back into the state space,
$$\hat{\mathbf{x}}_{k|k} = Y_{k|k}^{-1}\, \hat{\mathbf{y}}_{k|k}, \tag{71}$$
$$\sigma^{2}_{\hat{x},k|k} = Y_{k|k}^{-1}. \tag{72}$$
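A compact sketch of the information-form update of Equations (63)–(72) is given below; the argument names are ours and the matrix inversions are written naively for clarity.

```python
import numpy as np

def lidar_eif_update(x_prev, P_prev, z_lidar, dt, F_fn, Q_fn, h_fn, H_fn, R_lidar):
    """Extended information filter step of Section 7.3 (a sketch). Working in
    information form avoids inverting an n x n innovation covariance when the
    number of LiDAR returns n exceeds the 21 pose states."""
    F = F_fn(dt)
    x_pred = F @ x_prev
    Y_pred = np.linalg.inv(F @ P_prev @ F.T + Q_fn(dt))   # predicted information matrix
    y_pred = Y_pred @ x_pred                               # predicted information vector

    H = H_fn(x_pred)                                       # LiDAR measurement Jacobian
    R_inv = np.linalg.inv(R_lidar)
    nu = z_lidar - h_fn(x_pred)                            # innovation
    y_new = y_pred + H.T @ R_inv @ (nu + H @ x_pred)       # corrected information vector
    Y_new = Y_pred + H.T @ R_inv @ H                       # corrected information matrix

    P_new = np.linalg.inv(Y_new)                           # back to state space
    return P_new @ y_new, P_new
```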
Algorithm 4 shows the LiDAR pose estimation extended information filter (EIF) algorithmically.    
Algorithm 4: LiDARUpdate—An extended information filter (EIF) algorithm for state estimation using LiDAR observations in a known environment.
Sensors 22 02248 i004

8. Evaluation

It is necessary to use indirect methods to evaluate the pose estimator as there is no ground truth pose trajectory. The approach chosen is to survey the environment to obtain a ground truth model and to compare LiDAR measurements with the predicted ranges from ray-casting against this surveyed model. These differences are representative of the error in the pose. The ground truth model was obtained with a Faro Focus3D terrestrial LiDAR scanner [54]. This instrument claims a one-standard-deviation range precision of 0.0011 m. Figure 8 shows the environment model obtained by scanning.
The scene used for evaluation is shown in Figure 9. It comprises a generally flat terrain with piled dirt, forming mounds. The bulldozer was manually moved around in this environment following the path shown in Figure 8. During the execution of this motion, at various points, the platform was pitched and rolled using the bulldozer blade.
Figure 10 shows the point-to-model errors for the GNSS-aided IMU solution and the LiDAR- and GNSS-aided IMU solution in a known environment. A ray having the length of the measurement is cast along the corresponding LiDAR beam at the estimated pose. The distance from the endpoint of the ray to the nearest point of the model is taken as the point-to-model error. The smaller the point-to-model error, the closer the estimated pose is to the true pose. For a perfect pose, a perfect model, and zero measurement error, the point-to-model error is zero.
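A sketch of how the point-to-model error might be computed, using a k-d tree over the surveyed model points, is given below; the data in the example are random stand-ins, not the paper's survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_model_errors(endpoints, model_points):
    """Distance from each LiDAR ray endpoint (cast at the estimated pose) to the
    nearest point of the surveyed ground-truth model."""
    tree = cKDTree(model_points)
    distances, _ = tree.query(endpoints)
    return distances

# Illustrative use with random stand-ins for the survey and the ray endpoints.
model = np.random.rand(10000, 3) * 50.0
rays = model[:100] + np.random.normal(scale=0.05, size=(100, 3))
print(point_to_model_errors(rays, model).mean())
```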
The distribution of point-to-model errors given in Figure 10 shows that the LiDAR- and GNSS-aided IMU solution is more accurate than GNSS aiding alone. The mean point-to-model error is reduced from 0.25 m to 0.06 m. Significantly, the most populated point-to-model error bin for the LiDAR- and GNSS-aided solution is the zero-error bin. This is not unexpected: the positioning of the GNSS antennas provides information that supports good spatial positioning along with roll and yaw estimation. However, no information is available from GNSS measurements about rotation around the axis that passes through the two antenna centres. This rotation contributes heavily to the pitch of the platform. The measurements from the forward-facing 3D LiDAR provide information that stabilises the pitch estimate, accounting for the improved accuracy. The non-zero mean point-to-model error of the GNSS-aided solution is prospectively due to start-up bias in the IMU. It is also noted that the GNSS-aided solution has a broader distribution of error.
The time histories of the estimates of the six degrees of freedom of the platform’s pose are shown in Figure 11. Significantly, both estimators give similar results for x, y, z, and yaw. Most of the difference lies in the estimates of the pitch and roll of the platform. The bias in the pitch estimate of the GNSS-aided solution is attributed to start-up bias in the IMU sensor, which the estimator has been unable to account for; it is present at both the start and end of the motion sequence when the platform is stationary.
The results in this paper were produced on a computer running a Linux-based operating system (Intel Xeon W-2125 CPU, 32 GB of memory). The GNSS-aided IMU filter could update at up to ∼10,000 Hz, much faster than the measurement rate. The LiDAR- and GNSS-aided IMU filter required more computation per time step and was capable of ∼500 Hz.

9. LiDAR Measurement Model in an Unknown Environment

The capability to estimate pose when the environment is known helps to establish the potential performance of the LiDAR- and GNSS-aiding estimator, but is of limited practical usefulness. Most environments of interest in which this platform operates have unknown geometry. The estimator is extended to achieve the benefits of pitch stabilisation provided by the LiDAR measurement by concurrently estimating the pose and the local terrain.
For this purpose, the terrain is modelled as a height grid that divides the ( x , y ) plane into a regular grid. Each grid element, known as a cell, is assigned a height. The points defined by each cell in the height grid can be triangulated to develop a representation of the terrain as a surface. A representation of the triangulated height grid with states h 0 , 0 to h 3 , 3 is shown in Figure 12.
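As an illustration of the height-grid representation, the sketch below triangulates a grid of height states into the two-triangles-per-cell mesh of Figure 12; the cell size and grid dimensions are illustrative.

```python
import numpy as np

def grid_triangles(heights, cell_size=1.0):
    """Triangulate a regular height grid: each grid square is split into two
    triangles whose vertex z-values come directly from the height states."""
    rows, cols = heights.shape
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = np.array([i * cell_size, j * cell_size, heights[i, j]])
            p10 = np.array([(i + 1) * cell_size, j * cell_size, heights[i + 1, j]])
            p01 = np.array([i * cell_size, (j + 1) * cell_size, heights[i, j + 1]])
            p11 = np.array([(i + 1) * cell_size, (j + 1) * cell_size, heights[i + 1, j + 1]])
            tris.append((p00, p10, p01))
            tris.append((p10, p11, p01))
    return tris

# A 4x4 grid of height states h_{0,0} .. h_{3,3} yields 3*3*2 = 18 triangles.
print(len(grid_triangles(np.zeros((4, 4)))))
```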
Each of the $o \cdot p = q$ cell heights in the grid can be considered an additional state for which an estimate is sought. These states, collectively denoted $\mathbf{x}_t$, augment the state vector defined by Equation (1), denoted here as $\mathbf{x}_P$. Here, $\mathbf{x}_t$ takes the form
$$\mathbf{x}_t = \begin{bmatrix} h_{0,0}, & \ldots, & h_{i,j}, & \ldots, & h_{n,m} \end{bmatrix}^T. \tag{73}$$
The subscript $t$ indicates terrain. The additional terrain state estimates are appended to the state vector as follows:
$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_P \\ \mathbf{x}_t \end{bmatrix}. \tag{74}$$
As the terrain model is now part of the state vector, the LiDAR observation function is of the form
$$\mathbf{z}_L = h_L(\mathbf{x}) + \mathbf{v}_{z,L}. \tag{75}$$
The measurement function used to estimate the terrain and pose is identical to that of the previous section. However, expected range measurements are now determined by ray-casting onto the triangulated height grid constructed from the $\mathbf{x}_t$ vector, rather than a provided static model. The z-values (heights) of the vertices of each triangle of the grid, $\boldsymbol{\alpha}_j$, $\boldsymbol{\beta}_j$, and $\boldsymbol{\gamma}_j$, are now defined by elements of $\mathbf{x}_t$. The measurement Jacobian for the terrain states is determined analytically from the partial derivative of the LiDAR range measurement with respect to the z-values of the vertices of the intercepted triangle ($\boldsymbol{\alpha}_j$, $\boldsymbol{\beta}_j$, and $\boldsymbol{\gamma}_j$).
Figure 13 shows that the concurrent terrain and pose estimation system performs slightly worse than the LiDAR- and GNSS-aided IMU solution and significantly better than the GNSS-aided IMU solution. The mean point-to-model error for the concurrent terrain and pose estimation system is 0.14   m . The zero point-to-model error bin is the largest bin, with the error distribution extending to higher point-to-model error values.
Figure 14 shows the environment model generated by the concurrent terrain and pose estimation system. The terrain model is coloured based on the difference between the estimated terrain and the surveyed ground truth model. The terrain estimate shows close agreement with the truth model: generally, state errors are less than 0.1   m but extend out to 0.25   m . The terrain estimate has an RMS error of 0.126   m , which was achieved using a 1 m height grid terrain representation. Generally, areas of large error are present in high gradient locations of the terrain model. These can be difficult to capture using a regular grid terrain representation.
The estimates of the six degrees of freedom of the platform’s pose are shown in Figure 15. The solution generated without a priori knowledge of this environment shows some deviation in both roll and pitch when compared to that obtained using a known environment. The forward-looking orientation of the LiDAR sensor clearly stabilises the pitch estimate; the roll estimate is similar to the GNSS-aided IMU solution. A wider three-dimensional scan is expected to yield better stabilisation of roll. Equally, better roll estimates could be achieved by increasing the space between the GNSS antennas. The yaw, x, y, and z degrees of freedom are similar to those estimated when the environment is known. Overall, the results demonstrate the effectiveness of the approach.
The concurrent terrain and pose estimation filter, again, required more computing per timestep. It was demonstrated to run at ∼ 50 Hz on the target hardware (see Section 8).

10. Conclusions

The main result from this work is to show how LiDAR can be used to stabilise two-antenna GNSS-aided IMU pose estimators in environments with known and unknown geometry. The formulation is structured to accommodate multi-rate and non-synchronous measurements. It enables continuity through short-term periods of lost measurements from one of the sensors. Lost measurements are a characteristic observed on some brands of RTK-GNSS receivers, and the solution described in this paper adapts well to this situation.
We have not explored the possibility of using LiDAR stabilisation in GNSS-denied environments; however, the approach described here could be readily adapted to such situations.
In the paper introduction, we identified that LiDAR- and GNSS-aided IMU solutions can be effective when three-dimensional LiDAR measurements are available, but that three GNSS receivers are impractical or cost-prohibitive. Our recommendation, based on experience, is that if three receivers are possible, they should be used, particularly if an accurate pose solution to high frequencies is required.

Author Contributions

Conceptualization, T.D. and P.M.; methodology, T.D.; software, T.D. and T.P.; validation, T.D.; formal analysis, T.D. and P.M.; investigation, T.D. and T.P.; resources, T.D.; data curation, T.D.; writing—original draft preparation, T.D.; writing—review and editing, P.M. and T.P.; visualization, T.D.; supervision, P.M.; project administration, P.M.; funding acquisition, P.M. All authors have read and agreed to the published version of the manuscript.

Funding

TAD is financially supported by a postgraduate scholarship funded by Australian Coal Association (ACARP) project C27063.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank Richard Hensel and Peter Beasley who conducted the collection of data used in this paper. Access to the Caterpillar D11T Bulldozer platform was facilitated by Caterpillar product distributor WesTrac Pty. Ltd. We acknowledge the outstanding support of Caterpillar Inc. who are a partner in ACARP project C27063.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DC	Direct current, referring to the zero-frequency component of a signal
DOF	Degree(s) of freedom
EKF	Extended Kalman filter
EIF	Extended information filter
FIM	Fisher information matrix
FIV	Fisher information vector
GNSS	Global navigation satellite system
IEKF	Indirect extended Kalman filter
IMU	Inertial measurement unit
LiDAR	Light detection and ranging
RMS	Root mean square
RTK	Real-time kinematic
TTT	Track-type tractor

References

  1. Bryson, A., Jr.; Henrikson, L. Estimation using sampled data containing sequentially correlated noise. J. Spacecr. Rocket. 1968, 5, 662–665. [Google Scholar] [CrossRef]
  2. Lefferts, E.J.; Markley, F.L.; Shuster, M.D. Kalman filtering for spacecraft attitude estimation. J. Guid. Control. Dyn. 1982, 5, 417–429. [Google Scholar] [CrossRef]
  3. Shuster, M.D.; Oh, S.D. Three-axis attitude determination from vector observations. J. Guid. Control. 1981, 4, 70–77. [Google Scholar] [CrossRef]
  4. Li, Y.; Efatmaneshnik, M.; Dempster, A.G. Attitude determination by integration of MEMS inertial sensors and GPS for autonomous agriculture applications. GPS Solut. 2012, 16, 41–52. [Google Scholar] [CrossRef]
  5. Lim, J.; Choi, K.; Cho, J.; Lee, H. Integration of GPS and monocular vision for land vehicle navigation in urban area. Int. J. Automot. Technol. 2017, 18, 345–356. [Google Scholar] [CrossRef]
  6. Gakne, P.V.; O’Keefe, K. Tightly-coupled GNSS/vision using a sky-pointing camera for vehicle navigation in urban areas. Sensors 2018, 18, 1244. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Petritoli, E.; Giagnacovo, T.; Leccese, F. Lightweight GNSS/IRS integrated navigation system for UAV vehicles. In Proceedings of the 2014 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 29–30 May 2014; pp. 56–61. [Google Scholar]
  8. Guo, P.; Li, X.; Gui, Y.; Zhou, X.; Zhang, H.; Zhang, X. Airborne vision-aided landing navigation system for fixed-wing UAV. In Proceedings of the IEEE 2014 12th International Conference on Signal Processing (ICSP), HangZhou, China, 19–23 October 2014; pp. 1215–1220. [Google Scholar]
  9. Zhang, G.; Hsu, L.T. Intelligent GNSS/INS integrated navigation system for a commercial UAV flight control system. Aerosp. Sci. Technol. 2018, 80, 368–380. [Google Scholar] [CrossRef]
  10. Watanabe, Y.; Manecy, A.; Hiba, A.; Nagai, S.; Aoki, S. Vision-integrated navigation system for aircraft final approach in case of GNSS/SBAS or ILS failures. In Proceedings of the AIAA Scitech 2019 Forum, San Diego, CA, USA, 7–11 January 2019; p. 0113. [Google Scholar]
  11. Hecker, P.; Angermann, M.; Bestmann, U.; Dekiert, A.; Wolkow, S. Optical Aircraft Positioning for Monitoring of the Integrated Navigation System during Landing Approach. Gyroscopy Navig. 2019, 10, 216–230. [Google Scholar] [CrossRef]
  12. Bhaskaran, S. Autonomous navigation for deep space missions. In Proceedings of the SpaceOps 2012, American Institute of Aeronautics and Astronautics, Stockholm, Sweden, 11–15 June 2012; p. 1267135. [Google Scholar]
  13. Wang, Y.; Zheng, W.; Sun, S. X-ray pulsar-based navigation system/Sun measurement integrated navigation method for deep space explorer. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2015, 229, 1843–1852. [Google Scholar] [CrossRef]
  14. Tweddle, B.E.; Saenz-Otero, A. Relative computer vision-based navigation for small inspection spacecraft. J. Guid. Control. Dyn. 2015, 38, 969–978. [Google Scholar] [CrossRef] [Green Version]
  15. Gao, Z.; Mu, D.; Zhong, Y.; Gu, C. A strap-down inertial navigation/spectrum red-shift/star sensor (SINS/SRS/SS) autonomous integrated system for spacecraft navigation. Sensors 2018, 18, 2039. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1693–1698. [Google Scholar]
  17. Yin, X.; Du, J.; Noguchi, N.; Yang, T.; Jin, C. Development of autonomous navigation system for rice transplanter. Int. J. Agric. Biol. Eng. 2018, 11, 89–94. [Google Scholar] [CrossRef] [Green Version]
  18. Lan, H.; Elsheikh, M.; Abdelfatah, W.; Wahdan, A.; El-Sheimy, N. Integrated RTK/INS navigation for precision agriculture. In Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019), Miami, FL, USA, 16–20 September 2019; pp. 4076–4086. [Google Scholar]
  19. Zhang, Z.; Li, P.; Zhao, S.; Lv, Z.; Du, F.; An, Y. An adaptive vision navigation algorithm in agricultural IoT system for smart agricultural robots. CMC-Comput. Mater. Contin. 2021, 66, 1043–1056. [Google Scholar] [CrossRef]
  20. Lopesa, J.; Trabancoa, J. Automated control systems for civil construction machinery using RTK-GNSS: An implementation review. Rev. Eng. 2019, 28–33. [Google Scholar]
  21. Shcherbakov, V.; Karpik, A.; Barsuk, M. Automation of Railroad Construction Technology Using Surveying Methods. In Proceedings of the International Scientific Siberian Transport Forum, Novosibirsk, Russia, 22–27 May 2019; Springer International Publishing: Cham, Switzerland, 2019; pp. 199–208. [Google Scholar]
  22. Zhang, X.; Rad, A.B.; Wong, Y.K.K. Sensor fusion of monocular cameras and laser rangefinders for line-based simultaneous localization and mapping (SLAM) tasks in autonomous mobile robots. Sensors 2012, 12, 429–452. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Zhao, S.; Farrell, J.A. 2D LIDAR Aided INS for vehicle positioning in urban environments. In Proceedings of the 2013 IEEE International Conference on Control Applications (CCA), Hyderabad, India, 28–30 August 2013; pp. 376–381. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, S.; Atia, M.M.; Karamat, T.B.; Noureldin, A. A LiDAR-Aided Indoor Navigation System for UGVs. J. Navig. 2015, 68, 253–273. [Google Scholar] [CrossRef] [Green Version]
  25. Núñez, P.; Vázquez-Martín, R.; del Toro, J.; Bandera, A.; Sandoval, F. Natural landmark extraction for mobile robot navigation based on an adaptive curvature estimation. Robot. Auton. Syst. 2008, 56, 247–264. [Google Scholar] [CrossRef]
  26. Vázquez-Martín, R.; Núñez, P.; Bandera, A.; Sandoval, F. Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors. Sensors 2009, 9, 5894–5918. [Google Scholar] [CrossRef] [PubMed]
  27. Im, J.H.; Im, S.H.; Jee, G.I. Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area. Sensors 2016, 16, 1268. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments. Sensors 2015, 15, 16710–16728. [Google Scholar] [CrossRef] [PubMed]
  29. Chen, Y.; Tang, J.; Khoramshahi, E.; Hakala, T.; Kaartinen, H.; Jaakkola, A.; Hyyppä, J.; Zhu, Z.; Chen, R. Scan matching technology for forest navigation with map information. In Proceedings of the IEEE/ION PLANS 2016, Savannah, GA, USA, 11–14 April 2016; pp. 198–203. [Google Scholar]
  30. Niu, X.; Yu, T.; Tang, J.; Chang, L. An online solution of LiDAR scan matching aided inertial navigation system for indoor mobile mapping. Mob. Inf. Syst. 2017, 2017, 4802159. [Google Scholar] [CrossRef] [Green Version]
  31. Donoso, F.; Austin, K.J.; McAree, P.R. How do ICP variants perform when used for scan matching terrain point clouds? Robot. Auton. Syst. 2017, 87, 147–161. [Google Scholar] [CrossRef] [Green Version]
  32. Chang, L.; Niu, X.; Liu, T.; Tang, J.; Qian, C. GNSS/INS/LiDAR-SLAM integrated navigation system based on graph optimization. Remote Sens. 2019, 11, 1009. [Google Scholar] [CrossRef] [Green Version]
  33. de Haag, M.U.; Vadlamani, A.; Campbell, J.; Dickman, J. Application of laser range scanner based terrain referenced navigation systems for aircraft guidance. In Proceedings of the Third IEEE International Workshop on Electronic Design, Test and Applications (DELTA’06), Kuala Lumpur, Malaysia, 17–19 January 2006. [Google Scholar] [CrossRef]
  34. Atia, M.M.; Liu, S.; Nematallah, H.; Karamat, T.B.; Noureldin, A. Integrated Indoor Navigation System for Ground Vehicles with Automatic 3-D Alignment and Position Initialization. IEEE Trans. Veh. Technol. 2015, 64, 1279–1292. [Google Scholar] [CrossRef]
  35. Phillips, T.G.; McAree, P.R. An evidence-based approach to object pose estimation from LiDAR measurements in challenging environments. J. Field Robot. 2018, 35, 921–936. [Google Scholar] [CrossRef]
  36. Phillips, T.; D’Adamo, T.; McAree, P. Maximum Sum of Evidence—An Evidence-Based Solution to Object Pose Estimation in Point Cloud Data. Sensors 2021, 21, 6473. [Google Scholar] [CrossRef] [PubMed]
  37. Danezis, C.; Gikas, V. An iterative LiDAR DEM-aided algorithm for GNSS positioning in obstructed/rapidly undulating environments. Adv. Space Res. 2013, 52, 865–878. [Google Scholar] [CrossRef]
  38. Yun, S.; Lee, Y.J.; Sung, S. IMU/Vision/Lidar integrated navigation system in GNSS denied environments. In Proceedings of the 2013 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2013; pp. 1–10. [Google Scholar] [CrossRef]
  39. Toth, C.; Grejner-Brzezinska, D.A.; Lee, Y.J. Terrain-based navigation: Trajectory recovery from LiDAR data. In Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 760–765. [Google Scholar] [CrossRef]
  40. Levinson, J.; Montemerlo, M.; Thrun, S. Map-based precision vehicle localization in urban environments. In Robotics: Science and Systems; Citeseer: University Park, PA, USA, 2008; Volume 3, pp. 121–128. [Google Scholar] [CrossRef]
  41. Velodyne. Lidar Vlp-16 User Manual and Programming Guide; Velodyne: San Jose, CA, USA, 2016. [Google Scholar]
  42. Trawny, N.; Roumeliotis, S.I. Indirect Kalman filter for 3D attitude estimation. Univ. Minn. Dept. Comp. Sci. Eng. Tech. Rep. 2005, 2. [Google Scholar]
  43. González, R.; Rodriguez, F.; Guzmán, J.; Berenguel, M. Comparative study of localization techniques for mobile robots based on indirect Kalman filter. In IFR International Symposium on Robotics; Citeseer: University Park, PA, USA, 2009; pp. 253–258. [Google Scholar]
  44. Sirtkaya, S.; Seymen, B.; Alatan, A.A. Loosely coupled Kalman filtering for fusion of Visual odometry and inertial navigation. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013; pp. 219–226. [Google Scholar]
  45. Groves, P.D. Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition [Book review]. IEEE Aerosp. Electron. Syst. Mag. 2015, 30, 26–27. [Google Scholar] [CrossRef]
  46. Lee, C.G.; Dao, N.N.; Jang, S.; Kim, D.; Kim, Y.; Cho, S. Gyro drift correction for an indirect Kalman filter based sensor fusion driver. Sensors 2016, 16, 864. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Bryne, T.H.; Hansen, J.M.; Rogne, R.H.; Sokolova, N.; Fossen, T.I.; Johansen, T.A. Nonlinear Observers for Integrated INS∖GNSS Navigation: Implementation Aspects. IEEE Control. Syst. 2017, 37, 59–86. [Google Scholar] [CrossRef] [Green Version]
  48. Koch, D.P.; Wheeler, D.O.; Beard, R.W.; McLain, T.W.; Brink, K.M. Relative multiplicative extended Kalman filter for observable GPS-denied navigation. Int. J. Robot. Res. 2020, 39, 1085–1121. [Google Scholar] [CrossRef]
  49. Sun, W.; Wu, J.; Ding, W.; Duan, S. A Robust Indirect Kalman Filter Based on the Gradient Descent Algorithm for Attitude Estimation During Dynamic Conditions. IEEE Access 2020, 8, 96487–96494. [Google Scholar] [CrossRef]
  50. Abdolkarimi, E.S.; Mosavi, M.R.; Rafatnia, S.; Martín, D. A Hybrid Data Fusion Approach to AI-Assisted Indirect Centralized Integrated SINS/GNSS Navigation System During GNSS Outage. IEEE Access 2021, 9, 100827–100838. [Google Scholar] [CrossRef]
  51. Barshan, B.; Durrant-Whyte, H.F. Inertial Navigation Systems for Mobile Robots. IEEE Trans. Robot. Autom. 1995, 11, 328–342. [Google Scholar] [CrossRef] [Green Version]
  52. D’Adamo, T.A.; Phillips, T.G.; McAree, P.R. Registration of three-dimensional scanning LiDAR sensors: An evaluation of model-based and model-free methods. J. Field Robot. 2018, 35, 1182–1200. [Google Scholar] [CrossRef]
  53. Assimakis, N.; Adam, M.; Douladiris, A. Information Filter and Kalman Filter Comparison: Selection of the Faster Filter. Int. J. Inf. Eng. 2012, 2, 1–5. [Google Scholar]
  54. FARO Technologies. FARO Laser Scanner Focus3D Manual; FARO Technologies: Lake Mary, FL, USA, 2013. [Google Scholar]
Figure 1. The Caterpillar D11T bulldozer considered in this paper, showing the sensors installed on the platform. The platform is equipped with two GNSS receivers, one IMU, and a scanning LiDAR sensor that faces forward and scans around a horizontal axis. The frames referred to in this paper are labelled as the LiDAR frame, L, the IMU frame, I, the platform frame, P, and the global frame, G.
Figure 2. A diagrammatic representation of the proposed loosely coupled LiDAR- and GNSS-aided navigation system.
Figure 3. IMU frame, I, relative to the platform frame, P, and the global frame, G.
Figure 4. The locations of the GNSS antennas, $\mathbf{p}_{G_1}$ and $\mathbf{p}_{G_2}$, relative to the platform frame, P.
Figure 5. The LiDAR measurement model provides the measurement that the LiDAR sensor is expected to observe given the current state vector. The difference between the expected and observed measurements is termed the innovation.
Figure 6. The measurement model for the LiDAR calculates the expected range measurement as the distance from the ray origin to the intercepted triangle of the model.
Figure 7. The location of each of the frames of the platform is uncertain to varying degrees. This uncertainty is propagated through the transforms of the system, originating at the global frame and ending at the ray endpoint. The propagated uncertainty is the effective uncertainty of that ray’s endpoint.
Figure 8. The environment in which the bulldozer platform manoeuvred. The trajectory of the platform is shown as a green line. The platform exercised its rotational degrees of freedom through the use of the blade during this trajectory.
Figure 9. A photo of the environment the platform traversed. The bulldozer platform is visible in the centre of the image. The mound visible to the left of Figure 8 is visible to the right of this image.
Figure 10. The point-to-model error distributions obtained by comparing the LiDAR endpoints with the ground truth terrain model using the GNSS-aided IMU and LiDAR- and GNSS-aided IMU navigation solutions.
Figure 11. A comparison of the results from the GNSS-aided IMU and the LiDAR- and GNSS-aided IMU (known environment) navigation solutions.
Figure 12. The terrain estimate is represented by a triangulated height grid structure. Each cell height is a state which we estimate in the filter.
Figure 13. The point-to-model error distributions obtained by comparing the LiDAR endpoints with the ground truth terrain model for the three navigation solutions presented in this paper.
Figure 14. The error of the triangulated state mesh constructed whilst localising the platform. Here, the error of the terrain model is generally less than 0.05 m; however, there are some regions of larger error. The terrain estimates pictured here are fit for the purpose of terrain mapping in the use case this paper targets.
Figure 15. A comparison of the results from the LiDAR- and GNSS-aided IMU navigation solution both with and without an a priori environment model.
Table 1. Specifications of the LiDAR, IMU, and GNSS sensors used as inputs to the navigation system of this paper.

Velodyne VLP-16 LiDAR sensor [41]: Number of rays: 16; Range uncertainty (1σ): 0.03 m; Horizontal field of view: 360°; Vertical field of view: 30° (±15°); Horizontal resolution: 0.1°–0.4°; Vertical resolution: 2°; Scan rate: 5–20 Hz; Points per second: 300,000.
IMU: Accelerometers: 3 axes, data rate 50 Hz; Gyroscopes: 3 axes, data rate 50 Hz.
GNSS receivers: x uncertainty (1σ): 0.03 m; y uncertainty (1σ): 0.03 m; z uncertainty (1σ): 0.055 m; Signals tracked: L1, L2, L5; Data rate: 10 Hz; Corrections: RTK.
Table 2. The diagonal elements of the IMU measurement covariance matrix, $\sigma^{2}_{z,I}$. The cross-terms are several orders of magnitude smaller than the diagonal elements. These values were determined experimentally from statistical analysis of a stationary data segment of 930 s.

$\ddot{x}_I$ (m/s²)²: 7.8 × 10⁻⁵; $\ddot{y}_I$ (m/s²)²: 3.9 × 10⁻⁵; $\ddot{z}_I$ (m/s²)²: 4.0 × 10⁻⁴; $\dot{\theta}_I$ (°/s)²: 7.6 × 10⁻³; $\dot{\phi}_I$ (°/s)²: 7.6 × 10⁻³; $\dot{\psi}_I$ (°/s)²: 1.4 × 10⁻²
Table 3. The diagonal elements of the GNSS measurement covariance matrix, $\sigma^{2}_{z,G}$. The off-diagonal terms are several orders of magnitude smaller than the diagonal elements. These values were determined from statistical analysis of a 930 s measurement sequence where the platform was stationary.

$x_G$ (m²): 1.9 × 10⁻⁶; $y_G$ (m²): 2.4 × 10⁻⁶; $z_G$ (m²): 2.2 × 10⁻⁵
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
