Article

Improved Cubature Kalman Filtering on Matrix Lie Groups Based on Intrinsic Numerical Integration Error Calibration with Application to Attitude Estimation

School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(4), 265; https://doi.org/10.3390/machines10040265
Submission received: 1 March 2022 / Revised: 4 April 2022 / Accepted: 5 April 2022 / Published: 7 April 2022
(This article belongs to the Special Issue Modeling, Sensor Fusion and Control Techniques in Applied Robotics)

Abstract: This paper investigates the numerical integration error calibration problem in Lie group sigma point filters with the aim of obtaining more accurate estimation results. On the basis of the theoretical framework of the Bayes–Sard quadrature transformation, we first establish a Bayesian estimator on matrix Lie groups for system measurements in Euclidean spaces or on Lie groups. The estimator is then employed to develop a generalized Bayes–Sard cubature Kalman filter on matrix Lie groups that accounts for the additional uncertainty introduced by integration errors and comes in two variants. Building on the maximum likelihood principle, we also derive an adaptive version of the proposed filter for better algorithm flexibility and more precise filtering results. The proposed filters are applied to the quaternion attitude estimation problem, and Monte Carlo numerical simulations show that they achieve better estimation quality than that of the other Lie group filters in the studies mentioned.

1. Introduction

Estimating the state of dynamical systems is a research hotspot and an essential issue frequently faced in many real-life applications, e.g., navigation, target tracking, robotics, and automatic control [1,2,3]. Because system models are imperfect and noise is present, filtering is the most widely used estimation approach. For linear systems with Gaussian uncertainties, the Kalman filter (KF) [4], as the most desirable estimator established on the minimal mean-square-error (MMSE) criterion, can provide an optimal recursive solution. However, when the system is nonlinear, obtaining the optimal closed-form solution is intractable. Researchers have proposed various methods via analytical or numerical approximation to address nonlinear state estimation issues. A successful analytical approximation method is the extended Kalman filter (EKF) [5], which employs a first-order Taylor series approximation to linearize nonlinear dynamics, but may perform poorly with highly nonlinear systems or large initial errors. Moreover, the main drawback of the EKF is that it needs to calculate Jacobians, which is relatively tricky for complicated models.
Instead of dealing with nonlinearities by evaluating Jacobians, another class of nonlinear filtering based on numerical approximation is often preferred. These filters are collectively called sigma point filters, including the unscented Kalman (UKF) [6], Gauss–Hermite–Kalman (GHKF) [7], and cubature Kalman (CKF) [8] filters. Different deterministic sigma points are used in these filtering algorithms to approximate the moments of the probability distribution functions (PDFs). Compared with the EKF, the sigma point Kalman filter can provide better robustness and estimation accuracy, but increases the computational burden. Another problem underlying the sigma point Kalman filter is that errors caused by the traditional numerical quadrature rules may, without compensation, inject bias into the estimated moments. Since ignoring the integral approximation error may lead to overconfident estimates, several studies employed Bayesian quadrature (BQ) and other quadrature approaches based on BQ theory (such as Gaussian process quadrature (GPQ) and Bayes–Sard quadrature (BSQ)) to improve the sigma point Kalman filter [9,10,11]. These approaches are successful because BQ, GPQ, and BSQ rules can quantify the uncertainty in numerical integration.
In the existing literature, most EKFs and sigma point Kalman filters are designed for system dynamical models in Euclidean spaces. Nevertheless, when the system state evolves on a Riemannian manifold, considering the geometry of the manifold has merits, for example, enlarging the convergence domain and increasing the convergence rate [12]. Many works are devoted to carrying Euclidean space filters over to manifolds so as to obtain natural and well-behaved metrics. The authors in [13] generalized the unscented transform and the UKF to Riemannian manifolds, and produced a general optimization framework. By extending the main concepts of the UKF to Riemannian state-space systems, the authors in [14] proposed a series of more widespread and consistent Riemannian UKFs (RiUKFs) that fill the theory gap of the UKF on manifolds. Another generalized method for the UKF on manifolds was presented in [15], which further developed the authors' previous work on the UKF on Lie groups in [16,17].
Implementing KFs on Lie groups, first proposed in [18,19], is another widely studied line of research on filtering algorithms for manifolds. Due to the systems' invariance properties, many studies proposed invariant KFs on Lie groups: the authors in [20,21,22,23] designed intrinsic continuous-discrete invariant EKFs (IEKFs) for continuous-time systems on Lie groups with discrete-time measurements in Euclidean spaces and applied them to attitude estimation and inertial navigation; in [24,25,26], the researchers proposed invariant UKFs (IUKFs). To help practitioners fully understand IEKFs, Ref. [27] provided three typical examples, and [28] presented the right IEKF algorithm for simultaneous localization and mapping (SLAM). Moreover, the authors in [29,30,31,32] devised EKFs on Lie groups for systems with measurements on matrix Lie groups. Other designs proposed EKF extensions for Lie groups [12,33]. On the other hand, the UKF on Lie groups (UKF-LG) was developed in [16,17] for SLAM and sensor fusion. The authors in [34] suggested an impressive UKF based on UKF theory on Lie groups for visual–inertial SLAM. Unlike UKF-LG, in [35], the authors proposed another type of UKF for matrix Lie groups. This filter takes the time propagation step on the Lie algebra, improving computational efficiency. In addition, the authors in [36] introduced an invariant CKF on Lie groups for the attitude estimation problem from vector measurements.
Even though the above works succeeded in extending the sigma point filters to Lie groups, these filters still do not consider the integration errors caused by quadrature rules. Hence, this paper concentrates on calibrating the numerical integration error in sigma point filtering algorithms on matrix Lie groups to improve the filtering estimation quality. Motivated by the previous study on employing the Bayes–Sard quadrature (BSQ) method in Euclidean sigma point filtering, we attempt to extend the BSQ method to filters on matrix Lie groups. Following the theoretical framework of the Bayes–Sard quadrature moment transformation (BSQMT) built on BSQ in [11], we first give a version of the BSQMT that selects cubature points as its sigma points, and then leverage the BSQMT to design our method. Lastly, the proposed method is verified by simulations on an attitude estimation system. An attitude estimation system usually consists of gyroscopes that provide angular velocity measurements and at least two vector sensors (such as accelerometers, magnetometers, star trackers, and sun sensors) that provide at least two nonparallel vector measurements. Our main contributions are as follows.
(1)
Giving a generalized system measurement model covering measurements in both Euclidean spaces and Lie groups, and developing a Bayesian estimator by utilizing BSQMT with the generalized measurement model for the state estimation problem on Lie groups.
(2)
Deriving improved cubature Kalman filtering on matrix Lie groups with BSQMT to calibrate numerical integration errors, and introducing a method with the maximum likelihood principle to calculate adaptive expected model variance.
(3)
Applying the proposed Lie group filtering to quaternion attitude estimation problems, and providing numerical simulations to validate the effectiveness of the proposed filtering.
The rest of this paper is organized as follows. Section 2 reviews the preliminary knowledge of Lie groups. Section 3 outlines a Bayesian estimation method on Lie groups designed by leveraging the BSQMT for two different types of measurements. Section 4 gives detailed derivations of the proposed filtering algorithm on Lie groups. In Section 5, numerical simulations on quaternion attitude estimation demonstrate the effectiveness of the proposed filters. Lastly, in Section 6, we draw conclusions and present further work.

2. Mathematical Preliminary

This section reviews the theory and some basic properties of Lie groups used in this work. The following contents are chiefly based on [22,37] and partly on [15].

2.1. Introduction to Matrix Lie Groups

Matrix Lie group: Let $G \subset \mathbb{R}^{N \times N}$ be a matrix Lie group, i.e., a subset of the invertible square matrices that holds the properties of group identity, multiplication map, and inversion map: $I_{N \times N} \in G$; $\forall \chi_1, \chi_2 \in G$, $\chi_1 \chi_2 \in G$; $\forall \chi \in G$, $\chi^{-1} \in G$; where $I_{N \times N}$ is the identity matrix of $\mathbb{R}^{N \times N}$. The matrix Lie group is also characterized by a smooth manifold structure, so that both the multiplication and inversion maps are smooth operations. Each point $\chi \in G$ can be attached to a tangent space at that point, denoted by $T_\chi G$. The tangent space $T_\chi G$, defined by the derivatives $\dot{\gamma}(0)$ of curves $\gamma$ with $\gamma(0) = \chi$ (see the left plot of Figure 1), is a subvector space of $\mathbb{R}^{N \times N}$ with the same dimension as $G$. Among all tangent spaces of $G$, the tangent space taken at the group identity matrix $I_{N \times N}$, denoted $T_I G$, is called the Lie algebra.
Lie algebra: Let $\mathfrak{g}$ denote the Lie algebra associated with a $d$-dimensional Lie group; there is $T_I G := \mathfrak{g} \subset \mathbb{R}^{N \times N}$. As Lie groups and Lie algebras lack the usual vector-space tools, one needs to identify the Lie algebra $\mathfrak{g}$ with the Euclidean space $\mathbb{R}^d$ for convenient calculation. There is a linear bijection between $\mathfrak{g}$ and $\mathbb{R}^d$, i.e., $[\cdot]^\vee_G : \mathfrak{g} \to \mathbb{R}^d$. This linear bijection is invertible, and its inverse map is defined by $[\cdot]^\wedge_G : \mathbb{R}^d \to \mathfrak{g}$. For example, assume a vector $\eta \in \mathbb{R}^d$ and a Lie algebra element $\eta^\wedge \in \mathfrak{g}$; then, we have $[\eta]^\wedge_G = \eta^\wedge \in \mathfrak{g}$ and $[\eta^\wedge]^\vee_G = \eta \in \mathbb{R}^d$. The middle plot of Figure 1 gives a more intuitive illustration.
Matrix Lie exponential and logarithm maps: The matrix Lie exponential map $\exp_G : \mathbb{R}^d \to G$ and its inverse map $\log_G : G \to \mathbb{R}^d$ (the matrix Lie logarithm map) give two bijections between a neighborhood of $0 \in \mathbb{R}^d$ and a neighborhood of the group identity matrix $I_{N \times N} \in G$. We define the matrix Lie exponential and logarithm maps as $\exp_G(\cdot) := \exp_m([\cdot]^\wedge_G)$ and $\log_G(\cdot) := [\log_m(\cdot)]^\vee_G$, respectively, where $\exp_m$ and $\log_m$ are the matrix exponential and logarithm operations. Here, the matrix exponential and logarithm operations present the link between a Lie group and its unique corresponding Lie algebra: $\exp_m : \mathfrak{g} \to G$ and $\log_m : G \to \mathfrak{g}$. For a vector $\eta \in \mathbb{R}^d$, we have $\exp_G(\eta) = \exp_m(\eta^\wedge)$. All maps among the matrix Lie group $G$, the Lie algebra $\mathfrak{g}$, and the vector space $\mathbb{R}^d$ are illustrated in detail on the right plot of Figure 1.

2.2. Uncertainty on Matrix Lie Groups

Most system noise is assumed to be additive white Gaussian noise in Euclidean spaces. However, the approach of additive Gaussian noise is not applicable to Lie groups, as Lie groups lack the addition operation. To describe Gaussian uncertainties for general Lie groups, we adopted the method in [15,16,34], which first assumes the distribution of uncertainty in the Lie algebra to be Gaussian and then maps Gaussian distribution to the corresponding Lie group by exponential mapping.
Consider a $d$-dimensional random variable $\chi$ evolving on $G$, and let the variable satisfy the prior probability distribution $\chi \sim \mathcal{N}_\phi(\hat{\chi}, P)$. Referring to the definition of the probability distribution for a random variable on a manifold in [15], we give the definition of the probability distribution for $\chi \in G$ as follows:
$\chi = \phi(\hat{\chi}, \eta), \quad \eta \sim \mathcal{N}(0, P),$ (1)
where $\phi : G \times \mathbb{R}^d \to G$ is a smooth function, selected according to the composition of the variable $\chi$, that must satisfy $\phi(\hat{\chi}, 0) = \hat{\chi}$, also named a "retraction"; $\hat{\chi} \in G$ is noise-free and represents the mean of the random variable $\chi$; the operator $\mathcal{N}(\cdot, \cdot)$ denotes a Gaussian distribution in a Euclidean space; $P \in \mathbb{R}^{d \times d}$ is the error covariance matrix associated with the uncertainty perturbation $\eta$; and $\eta \in \mathbb{R}^d$ is a random Gaussian vector in a Euclidean space. Moreover, the distribution $\mathcal{N}_\phi(\cdot, \cdot)$ is not Gaussian.
The specific operation of the function $\phi(\cdot,\cdot)$ is generally determined by the system's geometric structure. Whatever the system geometry, the left and right Lie group multiplications and the matrix Lie group exponential are essential parts of $\phi(\cdot,\cdot)$. Here, with right and left multiplication, we give two simplified definitions of $\phi(\hat{\chi}, \eta)$, denoted by $\phi_R(\hat{\chi}, \eta) = \exp_G(\eta)\hat{\chi}$ and $\phi_L(\hat{\chi}, \eta) = \hat{\chi}\exp_G(\eta)$. Since the composition of the variable evolving on $G$ differs across systems, the expression of $\phi(\cdot,\cdot)$ varies with the system dynamics and application.
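As a concrete illustration of these maps (our own example, not taken from the paper, whose simulations use quaternion attitude kinematics), the sketch below instantiates $\exp_G$, $\log_G$, and the two retractions $\phi_R$, $\phi_L$ for the rotation group SO(3), where $d = N = 3$ and the wedge map sends $\eta \in \mathbb{R}^3$ to a skew-symmetric matrix; all function names are ours.

```python
import numpy as np

def wedge(eta):
    """Wedge map [.]^ : R^3 -> so(3), the skew-symmetric matrix of eta."""
    return np.array([[0.0, -eta[2], eta[1]],
                     [eta[2], 0.0, -eta[0]],
                     [-eta[1], eta[0], 0.0]])

def exp_G(eta):
    """exp_G : R^3 -> SO(3) via the Rodrigues formula."""
    t = np.linalg.norm(eta)
    W = wedge(eta)
    if t < 1e-10:                         # small-angle series expansion
        return np.eye(3) + W + 0.5 * W @ W
    return (np.eye(3) + np.sin(t) / t * W
            + (1.0 - np.cos(t)) / t**2 * W @ W)

def log_G(R):
    """log_G : SO(3) -> R^3 (valid away from rotations by pi)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    t = np.arccos(c)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return 0.5 * v if t < 1e-10 else t / (2.0 * np.sin(t)) * v

# The two simplified retractions and their inverse maps:
phi_R = lambda chi_hat, eta: exp_G(eta) @ chi_hat      # right multiplication
phi_L = lambda chi_hat, eta: chi_hat @ exp_G(eta)      # left multiplication
phi_R_inv = lambda chi_hat, y: log_G(y @ np.linalg.inv(chi_hat))
phi_L_inv = lambda chi_hat, y: log_G(np.linalg.inv(chi_hat) @ y)
```

By construction, $\phi(\hat{\chi}, 0) = \hat{\chi}$, and each inverse map recovers the perturbation exactly, since $\log_G(\exp_G(\eta)) = \eta$ in a neighborhood of the identity, which is what the filter derivations below rely on.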

3. Bayesian Estimation Based on Bayes–Sard Quadrature Moment Transform

This section aims to find a method to calibrate numerical quadrature errors in the classical sigma point filtering on Lie groups. By taking the cubature Kalman filter into consideration and adopting the BSQMT instead of the cubature transform, we propose a Bayesian estimation method with a generalized measurement model.

3.1. Bayes–Sard Quadrature Moment Transform with Cubature Points

The BSQMT [11] is a universal moment transform proposed to accommodate the improper estimation and prediction calibration caused by quadrature error in sigma point filters. The main technique behind the BSQMT is the Bayesian quadrature, which treats the numerical integration process as probabilistic inference and models the integration error by utilizing stochastic process models. Consider the nonlinear vector function
$z = g(x),$ (2)
where the input $x \in \mathbb{R}^d$ is a Gaussian random variable that satisfies the Gaussian distribution $\mathcal{N}(\bar{x}, P)$; $z \in \mathbb{R}^l$ is the output; and $g(\cdot) : \mathbb{R}^d \to \mathbb{R}^l$ denotes a known nonlinear transformation function. A brief summary of the BSQMT algorithm with cubature points is given as follows:
Step 1. Assume that the mean $\bar{x}$ and covariance $P$ of the random input variable $x$ are known. Calculate the third-degree cubature points that match the moments of the random variable:
$\mathcal{X}_i = \bar{x} + S\xi_i, \quad i = 1, \ldots, 2d,$ (3)
where $S$ represents the lower triangular factor in the Cholesky decomposition of $P$, and $\xi_i$ denotes the unit cubature points expressed as
$\xi_i = \begin{cases} \sqrt{d}\, e_i, & i = 1, \ldots, d \\ -\sqrt{d}\, e_{i-d}, & i = d+1, \ldots, 2d, \end{cases}$ (4)
where $e_i \in \mathbb{R}^d$ is the $i$-th column of the identity matrix $I_{d \times d}$.
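Step 1 can be sketched numerically as follows (our own illustration; names are ours). With the equal weights $1/2d$, the points of Eqs. (3) and (4) reproduce the mean and covariance of the input exactly:

```python
import numpy as np

def cubature_points(x_bar, P):
    """Third-degree cubature points X_i = x_bar + S*xi_i, Eqs. (3)-(4)."""
    d = x_bar.size
    S = np.linalg.cholesky(P)                # lower triangular factor of P
    xi = np.sqrt(d) * np.hstack([np.eye(d), -np.eye(d)])   # unit points, d x 2d
    return x_bar[:, None] + S @ xi           # d x 2d array of cubature points

# Demo: equal weights 1/2d recover the first two moments exactly.
x_bar = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])
X = cubature_points(x_bar, P)
w = 1.0 / (2 * x_bar.size)
mean = (w * X).sum(axis=1)
cov = w * (X - x_bar[:, None]) @ (X - x_bar[:, None]).T
```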
Step 2. Compute the quadrature weights: The BSQMT uses a hierarchical Gaussian process (GP) prior model with nonzero mean, and the Bayes–Sard weights depend merely on the choice of sigma points and on the basis function space $\pi := \mathrm{span}\{\psi_1, \ldots, \psi_N\}$. Let $\psi(x) = x^\alpha$, $x \in \mathbb{R}^d$, where $\alpha$ is a multi-index; then, $x^\alpha = x_1^{\alpha_1} \cdots x_d^{\alpha_d}$ and $|\alpha| = \alpha_1 + \cdots + \alpha_d$. Here, we consider a special basis function space for which $N = 2d$ holds.
Lemma 1.
Let $p(x) = \mathcal{N}(x \mid 0, I)$ be the standard normal distribution, choose the $2d$ cubature points by (4), and construct the $2d$-dimensional function space
$\pi = \mathrm{span}\{x_1, \ldots, x_d, x_1^2, \ldots, x_d^2\}.$ (5)
Then, the set of quadrature weights $w_m = \{w_{m,n}\}$, $n = 1, \ldots, 2d$, decides the weighted mean of the state variable, and the quadrature weights coincide with the cubature transform weights $w_{m,n} = 1/2d$.
Proof (Proof of Lemma 1).
See Appendix A.    □
With the third-degree unit cubature points, the quadrature weights are calculated with the following equations:
$w_m = \Psi^{-T}\bar{\psi}, \quad w = \Psi^{-T} W_A \Psi^{-1}, \quad w_c = W_B \Psi^{-1},$ (6)
in which
$\bar{\psi} = \mathbb{E}_\xi[\psi(\xi)], \quad \Psi = [\psi(\xi_1), \ldots, \psi(\xi_{2d})]^T, \quad W_A = \mathbb{E}_\xi[\psi(\xi)\psi(\xi)^T], \quad W_B = \mathbb{E}_\xi[\xi\,\psi(\xi)^T].$ (7)
From Lemma 1, we know that $w_{m,n} = 1/2d$. Thus, we can skip the calculation of $w_m$ in (6).
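As a numerical sanity check of Lemma 1 (our own sketch; the Gaussian moments of the basis (5) under $\xi \sim \mathcal{N}(0, I)$ are written out by hand in the comments), the weights in (6) and (7) can be computed explicitly for the cubature points (4):

```python
import numpy as np

d = 3
xi = np.sqrt(d) * np.hstack([np.eye(d), -np.eye(d)])   # unit cubature points (4)

def psi(x):
    """Basis (5): [x_1, ..., x_d, x_1^2, ..., x_d^2]."""
    return np.concatenate([x, x**2])

# Psi has rows psi(xi_i)^T, Eq. (7).
Psi = np.array([psi(xi[:, i]) for i in range(2 * d)])

# Gaussian moments of the basis under xi ~ N(0, I):
# E[x_i] = 0, E[x_i^2] = 1, E[x_i x_j] = delta_ij,
# E[x_i x_j^2] = 0, E[x_i^2 x_j^2] = 1 + 2*delta_ij.
psi_bar = np.concatenate([np.zeros(d), np.ones(d)])
W_A = np.block([[np.eye(d),        np.zeros((d, d))],
                [np.zeros((d, d)), np.ones((d, d)) + 2 * np.eye(d)]])
W_B = np.hstack([np.eye(d), np.zeros((d, d))])

Psi_inv = np.linalg.inv(Psi)
w_m = Psi_inv.T @ psi_bar          # mean weights, Eq. (6)
w   = Psi_inv.T @ W_A @ Psi_inv    # covariance weight matrix, Eq. (6)
w_c = W_B @ Psi_inv                # cross-covariance weights, Eq. (6)
```

Running this for any $d$ gives $w_m$ with all entries equal to $1/2d$, in agreement with Lemma 1, while $w$ and $w_c$ are the genuinely Bayes–Sard quantities.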
Step 3. Obtain the transformed moments with the following equations:
$m_z = Z w_m,$ (8)
$P_{zz} = Z w Z^T - m_z m_z^T + \bar{\sigma}^2 I,$ (9)
$C_{xz} = S w_c Z^T,$ (10)
where $Z = [g(\mathcal{X}_1), \ldots, g(\mathcal{X}_{2d})]$, and $\bar{\sigma}^2$ is the expected model variance (EMV), whose value partly depends on the selection of the radial basis function (RBF) kernel (the covariance of the GP model based on the unit cubature points) and its parameter. In practice, the EMV sometimes needs to be reparameterized, but we omit its calculation here. See [11,38] for the detailed computation of the EMV and the selection rules of the basis function.
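Steps 1 to 3 can be collected into a single transform routine. The following is our own sketch, in which the EMV $\bar{\sigma}^2$ is passed in as a fixed scalar rather than derived from an RBF kernel as in [11], and the Gaussian moments of the basis are hard-coded:

```python
import numpy as np

def bsq_weights(d):
    """Bayes-Sard weights (6)-(7) for the basis (5) and cubature points (4)."""
    xi = np.sqrt(d) * np.hstack([np.eye(d), -np.eye(d)])
    Psi = np.array([np.concatenate([xi[:, i], xi[:, i] ** 2])
                    for i in range(2 * d)])
    psi_bar = np.concatenate([np.zeros(d), np.ones(d)])       # E[psi(xi)]
    W_A = np.block([[np.eye(d), np.zeros((d, d))],            # E[psi psi^T]
                    [np.zeros((d, d)), np.ones((d, d)) + 2 * np.eye(d)]])
    W_B = np.hstack([np.eye(d), np.zeros((d, d))])            # E[xi psi^T]
    Pi = np.linalg.inv(Psi)
    return Pi.T @ psi_bar, Pi.T @ W_A @ Pi, W_B @ Pi, xi

def bsq_moment_transform(g, x_bar, P, emv):
    """Bayes-Sard quadrature moment transform, Eqs. (3) and (8)-(10)."""
    d = x_bar.size
    w_m, w, w_c, xi = bsq_weights(d)
    S = np.linalg.cholesky(P)
    X = x_bar[:, None] + S @ xi                      # cubature points (3)
    Z = np.column_stack([g(X[:, i]) for i in range(2 * d)])
    m_z = Z @ w_m                                    # mean, Eq. (8)
    P_zz = Z @ w @ Z.T - np.outer(m_z, m_z) + emv * np.eye(Z.shape[0])  # (9)
    C_xz = S @ w_c @ Z.T                             # cross-covariance, (10)
    return m_z, P_zz, C_xz

# Demo: for a linear g and a zero-mean input, the mean and cross-covariance
# are recovered exactly, and P_zz is A P A^T inflated by the EMV term.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])
P = np.diag([0.5, 1.0, 2.0])
m_z, P_zz, C_xz = bsq_moment_transform(lambda x: A @ x, np.zeros(3), P, emv=1e-3)
```

The $\bar{\sigma}^2 I$ term in $P_{zz}$ is exactly the additional integration-error uncertainty that the BSQMT carries and the classical cubature transform omits.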

3.2. Bayesian Estimation on Lie Groups Using BSQMT

In this subsection, we first give a generalized measurement model covering system measurements in two kinds of spaces: Euclidean space and Lie group. Consider a random variable $\chi \in G$ with prior probability density function $p(\chi)$, where $\chi$ follows (1) with known $\hat{\chi}$ and $P$. Assume that one can establish additional information about the random variable via an observation $y$:
$y = h_a(\chi, v) = \begin{cases} h_{a1}(\chi) + v, & \text{for } y \in \mathbb{R}^l \\ \phi(\chi, v), & \text{for } y \in G, \end{cases}$ (11)
where $v$ is white Gaussian noise with distribution $v \sim \mathcal{N}(0, R)$ in the vector space, and its dimension depends on the type of $y$; $h_a(\cdot)$ and $h_{a1}(\cdot)$ represent the observation functions, and $h_a(\cdot)$ satisfies the following mapping:
$h_a(\cdot) : \begin{cases} G \to \mathbb{R}^l, & \text{for } y \in \mathbb{R}^l \\ G \times \mathbb{R}^d \to G, & \text{for } y \in G. \end{cases}$ (12)
The key to addressing the Bayesian estimation problem is to find the parameters $\hat{\chi}^+$ and $P^+$ in the posterior distribution approximated by $p(\chi \mid y) \approx \mathcal{N}_\phi(\hat{\chi}^+, \eta^+)$, $\eta^+ \sim \mathcal{N}(0, P^+)$. Before estimating $\hat{\chi}^+$, the posterior distribution $p(\eta \mid y)$ should be found and approximated using the BSQMT. Whether the measurement is in a vector space or a Lie group, an achievable way is to calculate the posterior estimate $\eta^+$ in the vector space and then map it to the Lie group. Hence, while an available measurement $y$ exists in a Lie group, we need to transform the matrix Lie group measurement into the vector space by utilizing the inverse mapping of $\phi(\cdot,\cdot)$. The inverse map $\phi^{-1}(\cdot,\cdot) : G \to \mathbb{R}^d$ is defined as
$\phi^{-1}(\hat{\chi}, y) = \begin{cases} \phi_L^{-1}(\hat{\chi}, y) = \log_G(\hat{\chi}^{-1} y), & \text{left multiplication} \\ \phi_R^{-1}(\hat{\chi}, y) = \log_G(y \hat{\chi}^{-1}), & \text{right multiplication.} \end{cases}$ (13)
In terms of the left and right multiplications in both $\phi(\cdot,\cdot)$ and $\phi^{-1}(\cdot,\cdot)$, if $y$ exists in a Lie group, there are actually four possible expansions to map it to the vector space. For instance, similar to [16], suppose the measurement function $y = \chi \exp_G(v)$. If $\chi = \hat{\chi}\exp_G(\eta)$, then
$y_L = \phi_L^{-1}(\hat{\chi}, y) = \log_G(\hat{\chi}^{-1}\chi\exp_G(v)) = \log_G(\hat{\chi}^{-1}\hat{\chi}\exp_G(\eta)\exp_G(v)) = \log_G(\exp_G(\eta)\exp_G(v)),$ (14)
else if $\chi = \exp_G(\eta)\hat{\chi}$, there is
$y_L = \phi_L^{-1}(\hat{\chi}, y) = \log_G(\hat{\chi}^{-1}\chi\exp_G(v)) = \log_G(\hat{\chi}^{-1}\exp_G(\eta)\hat{\chi}\exp_G(v)) = \log_G(\exp_G(\mathrm{Ad}_{\hat{\chi}^{-1}}(\eta))\exp_G(v)),$ (15)
where $\mathrm{Ad}_{(\cdot)}$ is the adjoint operator of the Lie group, with $[\mathrm{Ad}_{\hat{\chi}^{-1}}(\eta)]^\wedge = \hat{\chi}^{-1}\eta^\wedge\hat{\chi}$.
In addition, when an available measurement $y$ is in a vector space, it fulfils the following equation:
$y = h_{a1}(\chi) + v = h_{a1}(\phi(\hat{\chi}, \eta)) + v.$ (16)
From (14)-(16), we can see that the remapped measurement is related to $\hat{\chi}$, $\eta$, and $v$. We therefore give a unified measurement equation for the derivation of the generalized nonlinear filtering:
$y' = h(\hat{\chi}, \eta, v),$ (17)
where $y'$ is the new observation and $h(\cdot)$ is the new generalized measurement function.
By using the BSQMT, the propagated measurement cubature points and the measurement mean are computed as
$\hat{y}_i = h(\hat{\chi}, \eta_i, 0), \quad i = 1, \ldots, 2d, \qquad \bar{y} = Y w_m,$ (18)
where the $\eta_i$ are the cubature points, $Y = [\hat{y}_1, \ldots, \hat{y}_{2d}]$, and $w_m$ is the mean weight defined in the above subsection. Afterwards, by calculating the measurement error covariance $P_{yy}$ and the cross-covariance $P_{\eta y}$, we obtain the gain matrix $K$ and the parameters in the approximation of the posterior distribution $p(\eta \mid y) \approx \mathcal{N}(\hat{\eta}, P^+)$:
$K = P_{\eta y}P_{yy}^{-1}, \quad P^+ = P - K P_{yy} K^T, \quad \hat{\eta} = K(y' - \bar{y}).$ (19)
To convert the posterior distribution $p(\eta \mid y)$ into a distribution on the Lie group, we reviewed the analyses and results in [15,16] and obtained
$\hat{\chi}^+ = \phi(\hat{\chi}, \hat{\eta}) = \phi(\hat{\chi}, K(y' - \bar{y})).$ (20)
Lastly, the entire Bayesian estimation procedure is summarized in Algorithm 1.
Algorithm 1 Bayesian estimation on Lie groups using BSQMT.
Input: 
prior state $\hat{\chi}$ and prior covariance $P$, unit cubature points $\xi_i$, $i = 1, 2, \ldots, 2d$, sequence of observations $y$, modified measurement noise covariance $\hat{R}$, Bayes–Sard weights $w_m$, $w$, $w_c$, expected model variance $\bar{\sigma}_h^2$ (calculated by the method in [11])
Output: 
posterior state χ ^ + and associated posterior covariance P +
1:
Calculate the predicted measurement and its associated cross-covariance and innovation covariance:
$\eta_i = \mathrm{chol}(P)\,\xi_i, \quad \hat{y}_i = h(\hat{\chi}, \eta_i, 0), \quad i = 1, \ldots, 2d$
$\bar{y} = Y w_m, \text{ in which } Y = [\hat{y}_1, \ldots, \hat{y}_{2d}]$
$P_{yy} = Y w Y^T - \bar{y}\bar{y}^T + \bar{\sigma}_h^2 I + \hat{R}, \quad P_{\eta y} = \mathrm{chol}(P)\, w_c\, Y^T$
2:
Calculate the gain matrix: $K = P_{\eta y} P_{yy}^{-1}$;
3:
Compute the estimated state and covariance:
$\hat{\chi}^+ = \phi(\hat{\chi}, K(y' - \bar{y})), \text{ where } y' = \begin{cases} y, & \text{for } y \in \mathbb{R}^l \\ \phi^{-1}(\hat{\chi}, y), & \text{for } y \in G \end{cases}$
$P^+ = P - K P_{yy} K^T$

4. Proposed Filtering Algorithm

This section presents a Bayes–Sard cubature Kalman filter on matrix Lie groups, named BSCKF-LG, that is built on numerical integration error calibration with the BSQMT to improve the filtering accuracy. The proposed BSCKF-LG has two versions: one uses left multiplication to define the "retraction" (left BSCKF-LG); the other uses right multiplication (right BSCKF-LG). A systematic method that leverages the maximum likelihood criterion is also proposed to estimate the EMVs. By reconstructing the covariances of the prior state error and the innovation from the innovation sequence in a sliding window, the proposed method estimates the EMVs and feeds them back into the improved filtering to adjust the Kalman gain matrix.

4.1. Improved Cubature Kalman Filtering on Lie Groups with BSQMT

Consider the discrete system dynamics
$\chi_k = f(\chi_{k-1}, \omega_k, n_k), \qquad y_k = h_a(\chi_k, v_k),$ (21)
where $k$ represents the timestamp; $\chi_k$ is the system state living in a matrix Lie group; $n_k \sim \mathcal{N}(0, Q_k)$ and $v_k \sim \mathcal{N}(0, R_k)$ are white Gaussian noise with known covariances in the vector space; and $\omega_k$ is a known input. According to the above section, we first need to remodel the measurement $y_k$ into $y'_k$, where $y'_k$ is in the Euclidean vector space.
The purpose of designing filtering in the Bayesian framework of this model is to approximate the posterior distribution $p(\chi_k \mid y_{1:k})$. The recursive solution to the proposed filtering problem on Lie groups comprises two steps: propagation and update.
Propagation: start by giving the prior probability density in the Bayesian framework:
$p(\chi_{k-1}) \approx \mathcal{N}_\phi(\hat{\chi}^+_{k-1}, \eta^+_{k-1}), \quad \eta^+_{k-1} \sim \mathcal{N}(0, P^+_{k-1}),$ (22)
where $\hat{\chi}^+_{k-1}$ and $P^+_{k-1}$ are the posterior estimation results of the previous moment. Then, try to approximate the propagated state distribution:
$p(\chi_k \mid y_{1:k-1}) \approx \mathcal{N}_\phi(\hat{\chi}^-_k, \eta^-_k), \quad \eta^-_k \sim \mathcal{N}(0, P^-_k).$ (23)
Our work needs to find $\hat{\chi}^-_k$ and $P^-_k$. In the above Bayesian estimation, the posterior state mean $\hat{\chi}^+_k$ and its associated covariance $P^+_k$ are estimated using the uncertainty represented by (1). Here, for Model (21), we still use this uncertainty representation and the associated inverse map, which satisfies
$\phi^{-1}(\hat{\chi}^-_k, \chi_k) = \phi^{-1}(\hat{\chi}^-_k, \phi(\hat{\chi}^-_k, \eta^-_k)) = \log_G(\exp_G(\eta^-_k)) = \eta^-_k + O((\eta^-_k)^2),$ (24)
where $O((\eta^-_k)^2)$ stands for the error term. The proposed filter first propagates the state mean $\hat{\chi}^-_k$ by employing the system's deterministic part (the noiseless state model), that is,
$\hat{\chi}^-_k = f(\hat{\chi}^+_{k-1}, \omega_k, 0).$ (25)
Subsequently, to compute the associated covariance $P^-_k$, we select a set of unit cubature points for the BSQMT method and generate sigma points $\eta^+_{k-1,i}$ in the vector space $\mathbb{R}^d$ through $P^+_{k-1}$. Then, mapping these points to the matrix Lie group and passing the mapped cubature points through the noise-free Model (25), we obtain the propagated sigma points $\chi^i_k$. On the basis of this work and with the defined inverse map $\phi^{-1}(\hat{\chi}^-_k, \chi^i_k)$, $i = 1, \ldots, 2d$, we obtain the prior covariance $P^-_k$ (see Algorithm 2 for details).
Update: in this procedure, whenever an available measurement $y_k$ arrives, we need to approximate the posterior probability distribution
$p(\chi_k \mid y_{1:k}) \approx \mathcal{N}_\phi(\hat{\chi}^+_k, \eta^+_k), \quad \eta^+_k \sim \mathcal{N}(0, P^+_k).$ (26)
Since the measurement model is the same, the Bayesian estimation method in Section 3 is applied to compute the posterior state and its associated covariance. The proposed filtering algorithm executes the propagation step and does not perform the update step until a measurement arrives. Algorithm 2 summarizes the proposed filter in detail.
Algorithm 2 Bayes–Sard cubature Kalman filter (BSCKF-LG).
Input: 
initial conditions $\chi_0$ and $P_0^0$, sequence of inputs $\omega_k$, $k = 1, 2, \ldots$, modified process and measurement noise covariances $\hat{Q}_k$ and $\hat{R}_k$, sequence of observations $y_k$, $k = 1, 2, \ldots$, RBF kernel parameters $\theta_f$ and $\theta_h$ (for computing the EMV), unit cubature points $\xi_i$, $i = 1, 2, \ldots, 2d$
Output: 
posterior state χ ^ k + , posterior covariance P k +
1:
Initialization: $\hat{\chi}_0^+ = \chi_0$ and $P_0^+ = P_0^0$;
2:
Compute the Bayes–Sard quadrature weights $w_m$, $w$, and $w_c$ from (6) and (7), and calculate the EMV $\bar{\sigma}_f^2$ with the method in [11];
3:
for   k = 1 , 2 , 3 , do
Propagation:
4:
Calculate the state mean through the noiseless system model:
$\hat{\chi}_k^- = f(\hat{\chi}_{k-1}^+, \omega_k, 0)$
5:
Generate the propagated sigma points $\chi_k^i$ on the basis of the unit cubature points selected according to (4):
$\eta_{k-1,i}^+ = \mathrm{chol}(P_{k-1}^+)\,\xi_i, \quad \chi_k^i = f(\phi(\hat{\chi}_{k-1}^+, \eta_{k-1,i}^+), \omega_k, 0), \quad i = 1, \ldots, 2d$
6:
Calculate the associated covariance according to the inverse map $\phi^{-1}(\hat{\chi}_k^-, \chi_k^i)$, $i = 1, \ldots, 2d$:
$P_k^- = \Phi_k w \Phi_k^T + \bar{\sigma}_f^2 I + \hat{Q}_k$
where $\Phi_k = [\phi^{-1}(\hat{\chi}_k^-, \chi_k^1), \ldots, \phi^{-1}(\hat{\chi}_k^-, \chi_k^{2d})]$ and $\hat{Q}_k$ is derived from the dynamical equation of $\eta_k$;
Update: (if a measurement $y_k$ arrives, start the update step of the filtering; the BSQ weights and the EMV value are known)
7:
Compute $\hat{\chi}_k^+$ and $P_k^+$ from Algorithm 1 with $\hat{\chi}_k^-$ and $P_k^-$
8:
end for
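To make the propagation step (lines 4-6 of Algorithm 2) concrete, the following sketch runs one propagation on SO(3) with the left retraction $\phi_L(\hat{\chi}, \eta) = \hat{\chi}\exp_G(\eta)$ and the noiseless model $f(\chi, \omega) = \chi\exp_G(\omega\,\Delta t)$. This is our own illustration (the paper's simulations use quaternion attitude dynamics), and all function names are ours:

```python
import numpy as np

def wedge(eta):
    """eta^ : R^3 -> so(3)."""
    return np.array([[0.0, -eta[2], eta[1]],
                     [eta[2], 0.0, -eta[0]],
                     [-eta[1], eta[0], 0.0]])

def exp_G(eta):
    """exp_G : R^3 -> SO(3) via the Rodrigues formula."""
    t = np.linalg.norm(eta)
    W = wedge(eta)
    if t < 1e-10:
        return np.eye(3) + W + 0.5 * W @ W
    return np.eye(3) + np.sin(t) / t * W + (1 - np.cos(t)) / t ** 2 * W @ W

def log_G(R):
    """log_G : SO(3) -> R^3."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    t = np.arccos(c)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return 0.5 * v if t < 1e-10 else t / (2.0 * np.sin(t)) * v

def bsq_cov_weights(d):
    """Unit cubature points (4) and the Bayes-Sard weight matrix w from (6)."""
    xi = np.sqrt(d) * np.hstack([np.eye(d), -np.eye(d)])
    Psi = np.array([np.concatenate([xi[:, i], xi[:, i] ** 2])
                    for i in range(2 * d)])
    W_A = np.block([[np.eye(d), np.zeros((d, d))],
                    [np.zeros((d, d)), np.ones((d, d)) + 2 * np.eye(d)]])
    Pi = np.linalg.inv(Psi)
    return xi, Pi.T @ W_A @ Pi

def propagate(chi_prev, P_prev, omega, dt, Q_hat, emv_f):
    """One propagation step (Algorithm 2, lines 4-6) on SO(3)."""
    d = 3
    xi, w = bsq_cov_weights(d)
    F = exp_G(omega * dt)                  # f(chi, omega) = chi exp_G(omega dt)
    chi_minus = chi_prev @ F               # state mean, Eq. (25)
    L = np.linalg.cholesky(P_prev)
    Phi = np.zeros((d, 2 * d))
    for i in range(2 * d):
        chi_i = chi_prev @ exp_G(L @ xi[:, i]) @ F   # phi_L, then the model
        Phi[:, i] = log_G(chi_minus.T @ chi_i)       # phi_L^{-1}, R^{-1} = R^T
    P_minus = Phi @ w @ Phi.T + emv_f * np.eye(d) + Q_hat
    return chi_minus, P_minus
```

The propagated covariance stays symmetric positive definite because the Bayes–Sard weight matrix $w$ is positive semidefinite and the $\bar{\sigma}_f^2 I + \hat{Q}_k$ terms are positive definite.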
Remark 1.
The propagation of the state mean does not need to utilize the BSQMT (or another moment transformation) because (25) was validated in [16] up to the second order. Instead of generating the sigma points directly from the distribution on the Lie group, all of the sigma point generation is first done in the Euclidean space and then mapped to the Lie group.
Remark 2.
The unscented Kalman filters on Lie groups in [15,16,17] generate sigma points for the process noise and calculate the covariance matrix of the deviation due to noise. Unlike these filters, process and measurement noise covariances Q ^ k and R ^ k in Algorithm 2 are determined by the specified dynamic equation of η k and the remodeled measurement function of y k to save computing resources.
Remark 3.
The BSQMT rules define all of the Bayes–Sard weights. The weights depend only on the selection of the sigma points and the basis function. The proposed filter requires two different kernel parameter values to obtain the EMV values (see [11] for details) because there are two integrated functions. Misspecification of the kernel parameter affects the BSQMT little, so manual parameter tuning of the EMV is feasible in practice. In the following section, we give a method to obtain a time-varying value of the EMV.

4.2. EMV Estimation

In this subsection, we attempt to develop an adaptive BSCKF-LG with time-varying EMVs. To derive the EMV at each instant $k$, we first assume that the prior covariance and the innovation covariance are unknown constants. The innovation vector is denoted as $\tilde{y}_k = y'_k - \bar{y}_k$, and we consider the following log-likelihood function of the remodeled measurements $y'_{k-n+1}, y'_{k-n+2}, \ldots, y'_k$:
$L(P_{yy,k} \text{ or } P^-_k \mid y'_{k-n+1:k}) = \ln\prod_{j=k-n+1}^{k} p(y'_j \mid P_{yy,j} \text{ or } P^-_j) = \ln\prod_{j=k-n+1}^{k} \frac{1}{\sqrt{(2\pi)^l |P_{yy,j}|}}\exp\left(-\frac{1}{2}\tilde{y}_j^T P_{yy,j}^{-1}\tilde{y}_j\right) = -\frac{1}{2}\sum_{j=k-n+1}^{k}\left(\ln|P_{yy,j}| + \tilde{y}_j^T P_{yy,j}^{-1}\tilde{y}_j\right) + C,$ (27)
where $|\cdot|$ denotes the determinant operator, $n$ represents the sliding window size, and $C$ is a constant term. On the basis of (27), the maximum likelihood estimation of the error covariances of the prior state and the prior measurement is derived by
$\{\hat{P}^{ML}_{yy,k}, \hat{P}^{-,ML}_{k}\} = \arg\max L(P_{yy,k} \text{ or } P^-_k \mid y'_{k-n+1:k}).$ (28)
To solve (28), we need to take the partial derivatives of $L(P_{yy,k} \text{ or } P^-_k \mid y'_{k-n+1:k})$ with respect to the estimated prior covariance $\hat{P}^-_k$ and the estimated innovation covariance $\hat{P}_{yy,k}$, and set them equal to zero:
$\frac{\partial L}{\partial \hat{P}_{yy,k}} = 0 \quad \text{and} \quad \frac{\partial L}{\partial \hat{P}^-_k} = 0.$ (29)
Subsequently, the maximum likelihood equations are obtained via Equation (29):
$\frac{\partial L}{\partial [\hat{P}_{yy,k}]_{a_1,b_1}} = -\frac{1}{2}\,\mathrm{tr}\left\{\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^T P_{yy,j}^{-1}\right)\frac{\partial P_{yy,j}}{\partial [\hat{P}_{yy,k}]_{a_1,b_1}}\right\},$ (30)
and
$\frac{\partial L}{\partial [\hat{P}^-_k]_{a_2,b_2}} = -\frac{1}{2}\,\mathrm{tr}\left\{\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^T P_{yy,j}^{-1}\right)\frac{\partial P_{yy,j}}{\partial [\hat{P}^-_k]_{a_2,b_2}}\right\},$ (31)
where $\mathrm{tr}(\cdot)$ denotes the trace operator; $[\hat{P}_{yy,k}]_{a_1,b_1}$ corresponds to the $a_1$-th row, $b_1$-th column of $\hat{P}_{yy,k}$ with $a_1, b_1 \in [1, \ldots, l]$; similarly, $[\hat{P}^-_k]_{a_2,b_2}$ corresponds to the $a_2$-th row, $b_2$-th column of $\hat{P}^-_k$ with $a_2, b_2 \in [1, \ldots, d]$.
Equations (29) and (30) show that
$\mathrm{tr}\left\{\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^T P_{yy,j}^{-1}\right)\frac{\partial P_{yy,j}}{\partial [\hat{P}_{yy,k}]_{a_1,b_1}}\right\} = 0.$ (32)
Therefore, estimating $P_{yy,k}$ is actually a matter of computing the derivative of the innovation covariance with respect to $P_{yy,j}$, $j = k-n+1, \ldots, k$. As stated above, the state errors are mapped from the Lie group into the Euclidean space, and the prior state error covariance in the Euclidean space can be expressed as $P^-_k = \mathbb{E}[\eta^-_k (\eta^-_k)^T]$. In this article, $\eta^-_k = \phi^{-1}(\hat{\chi}^-_k, \chi_k)$ and $\eta^+_k = \phi^{-1}(\hat{\chi}^+_k, \chi_k)$. Using (21) and the formula for $P^-_k$ in Algorithm 2, we first have
$\eta^-_k = \phi^{-1}(f(\hat{\chi}^+_{k-1}, \omega_k, 0), f(\chi_{k-1}, \omega_k, n_k)) = \phi^{-1}(f(\hat{\chi}^+_{k-1}, \omega_k, 0), f(\phi(\hat{\chi}^+_{k-1}, \eta^+_{k-1}), \omega_k, n_k)).$ (33)
Then, expressing $f(\cdot)$ and (33) by the Taylor series expansion about $\eta^+_{k-1}$, the estimation error $\eta^-_k$ becomes
$\eta^-_k = F_k \eta^+_{k-1} + O(\eta^+_{k-1}) + \hat{n}_k,$ (34)
in which $F_k = \partial\phi^{-1}(f(\hat{\chi}^+_{k-1}, \omega_k, 0), f(\phi(\hat{\chi}^+_{k-1}, \eta^+_{k-1}), \omega_k, 0))/\partial\eta^+_{k-1}$, and $O(\eta^+_{k-1})$ represents the second- and higher-order moments of $\eta^+_{k-1}$; $\hat{n}_k$ is derived from the specified dynamic equation of $\eta_k$, and its covariance is $\hat{Q}_k$. To obtain exact equality, we account for the entire higher order of $\eta^+_{k-1}$ by introducing a diagonal matrix $\alpha_k = \mathrm{diag}(\alpha_{1,k}, \ldots, \alpha_{d,k})$, which can scale the approximation error [39,40]:
$\eta^-_k = \alpha_k F_k \eta^+_{k-1} + \hat{n}_k.$ (35)
Similarly, by substituting (18) in $\tilde{y}_k = y'_k - \bar{y}_k$ and expanding $h(\cdot)$ using the Taylor series about $\eta^-_k$, we have
$\tilde{y}_k = H_k \eta^-_k + O(\eta^-_k) + \hat{v}_k,$ (36)
where $H_k = \partial h(\hat{\chi}^-_k, \eta^-_k, 0)/\partial\eta^-_k$, and $O(\eta^-_k)$ is the second- and higher-order moment of $\eta^-_k$; $\hat{v}_k$ is defined by the remodeled measurement formula, and its covariance is $\hat{R}_k$. To consider the entire higher order, we also introduce a diagonal matrix $\beta_k = \mathrm{diag}(\beta_{1,k}, \ldots, \beta_{l,k})$ [39]:
$\tilde{y}_k = \beta_k H_k \eta^-_k + \hat{v}_k.$ (37)
Hence, the prior state error covariance and the innovation covariance can be expressed by
$P^-_k = \alpha_k F_k P^+_{k-1} F_k^T \alpha_k + \hat{Q}_k,$ (38)
$P_{yy,k} = \beta_k H_k P^-_k H_k^T \beta_k + \hat{R}_k.$ (39)
Lastly, for the linear system models (35) and (37), if the Kalman filtering is steady, $P^-_k$ converges to a constant matrix; $P^+_k$, $K_k$, and $P_{yy,k}$ also become constant matrices [41].
On the basis of the above analysis, we assumed that our filtering process inside the sliding window was in a steady state. Consequently, the approximation of P y y , k in the sliding window was around constant, and there exists P y y , j = P y y , k , j = k n + 1 , , k such that P y y , j / P ^ y y , k a 1 , b 1 = P y y , k / P ^ y y , k a 1 , b 1 are 1 for the a 1 row and b 1 column, while other elements are zeros. In light of the trace definition, we simplified (32) to
$$\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^{T} P_{yy,j}^{-1}\right) = 0. \tag{40}$$
Since $P_{yy,j}$ was approximately constant in the window, we pre- and postmultiplied both sides of (40) by $P_{yy,j}$:
$$\hat{P}_{yy,k} = \frac{1}{n}\sum_{j=k-n+1}^{k} P_{yy,j} = \frac{1}{n}\sum_{j=k-n+1}^{k}\tilde{y}_j\tilde{y}_j^{T}. \tag{41}$$
Substituting $P_{yy} = Y w Y^{T} - \bar{y}\bar{y}^{T} + \bar{\sigma}_h^2 I + \hat{R}$ into (41) and letting $\Delta_h = \bar{\sigma}_h^2 I$, the EMV of the measurement function is described as
$$\hat{\Delta}_h = \hat{\bar{\sigma}}_h^2 I = \frac{1}{n}\sum_{j=k-n+1}^{k}\left(\tilde{y}_j\tilde{y}_j^{T} - Y_j w Y_j^{T} + \bar{y}_j\bar{y}_j^{T} - \hat{R}_j\right). \tag{42}$$
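A minimal sketch of the windowed estimator (42), including the non-negativity projection discussed in Remark 4, might look as follows (function and variable names are our assumptions; the Bayes–Sard weight matrix and propagated sigma-point matrices are taken as given):

```python
import numpy as np

def measurement_emv(innov, Y, w, ybar, R_hat):
    """Sliding-window EMV of the measurement function, Eq. (42):
    Delta_h = (1/n) * sum_j (y_j y_j^T - Y_j w Y_j^T + ybar_j ybar_j^T - R_j).
    innov, Y, ybar, R_hat are lists over the window of length n."""
    n = len(innov)
    acc = np.zeros_like(R_hat[0], dtype=float)
    for yt, Yj, yb, Rj in zip(innov, Y, ybar, R_hat):
        acc += np.outer(yt, yt) - Yj @ w @ Yj.T + np.outer(yb, yb) - Rj
    delta = acc / n
    # Remark 4: EMVs are diagonal and non-negative, so keep only the
    # clipped diagonal of the estimate.
    return np.diag(np.clip(np.diag(delta), 0.0, None))
```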
Next, similar steps were employed to estimate the EMV of the system function. Equations (29) and (31) show that
$$\mathrm{tr}\left\{\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^{T} P_{yy,j}^{-1}\right)\frac{\partial P_{yy,j}}{\partial \hat{P}_k^{-(a_2,b_2)}}\right\} = \mathrm{tr}\left\{\sum_{j=k-n+1}^{k}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^{T} P_{yy,j}^{-1}\right) H_k\frac{\partial P_j^{-}}{\partial \hat{P}_k^{-(a_2,b_2)}} H_k^{T}\right\} = 0, \tag{43}$$
where $H_k = P_{\eta y,k}^{T}(P_k^{-})^{-1}$ or $H_k = \mathrm{d}y_k/\mathrm{d}\eta_k$.
Again, since the filtering process was assumed to be stable in the sliding window, approximations of $P_k^{-}$ in the window were almost constant, $P_j^{-} = P_k^{-}$, $j = k-n+1,\ldots,k$, such that $\partial P_j^{-}/\partial \hat{P}_k^{-(a_2,b_2)} = \partial P_k^{-}/\partial \hat{P}_k^{-(a_2,b_2)}$ are matrices whose $(a_2,b_2)$ element is 1, while the other elements are zeros. Equation (43) was premultiplied by $H_j^{T}$ and postmultiplied by $H_j$; then, in terms of the trace definition, we simplified (43) to
$$\sum_{j=k-n+1}^{k} H_j^{T}\left(P_{yy,j}^{-1} - P_{yy,j}^{-1}\tilde{y}_j\tilde{y}_j^{T} P_{yy,j}^{-1}\right) H_j = 0. \tag{44}$$
In sigma point filters, $K_k = P_{\eta y,k} P_{yy,k}^{-1} = P_k^{-} H_k^{T} P_{yy,k}^{-1}$; thus, (44) can be transformed into
$$\sum_{j=k-n+1}^{k}(P_j^{-})^{-1}\left(K_j H_j P_j^{-} - K_j\tilde{y}_j\tilde{y}_j^{T} K_j^{T}\right)(P_j^{-})^{-1} = (P_j^{-})^{-1}\left[\sum_{j=k-n+1}^{k}\left(K_j H_j P_j^{-} - K_j\tilde{y}_j\tilde{y}_j^{T} K_j^{T}\right)\right](P_j^{-})^{-1} = 0. \tag{45}$$
Equation (45) is satisfied if
$$\sum_{j=k-n+1}^{k}\left(K_j H_j P_j^{-} - K_j\tilde{y}_j\tilde{y}_j^{T} K_j^{T}\right) = 0. \tag{46}$$
From Algorithm 2, we know that
$$K_k\tilde{y}_k = \phi^{-1}(\hat{\chi}_k^{-},\hat{\chi}_k^{+}), \qquad P_k^{-} - P_k^{+} = K_k P_{yy,k} K_k^{T} = K_k H_k P_k^{-}. \tag{47}$$
Substituting (47) into (46) readily yields
$$\sum_{j=k-n+1}^{k}\left(P_j^{-} - P_j^{+} - \phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})\,\phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})^{T}\right) = 0, \tag{48}$$
then
$$\sum_{j=k-n+1}^{k} P_j^{-} = \sum_{j=k-n+1}^{k}\left(P_j^{+} + \phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})\,\phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})^{T}\right). \tag{49}$$
With $P_j^{-} = P_k^{-}$, $j < k$, in the window, the maximum-likelihood estimate of the prior error covariance $P_k^{-}$ could be acquired from
$$\hat{P}_k^{-} = \frac{1}{n}\sum_{j=k-n+1}^{k} P_j^{-} = \frac{1}{n}\sum_{j=k-n+1}^{k}\left(P_j^{+} + \phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})\,\phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})^{T}\right). \tag{50}$$
Combining $P_k^{-} = \Phi w \Phi^{T} + \bar{\sigma}_f^2 I + \hat{Q}_k$ with (50), and letting $\Delta_f = \bar{\sigma}_f^2 I$, the estimated EMV could be described as
$$\hat{\Delta}_f = \hat{\bar{\sigma}}_f^2 I = \frac{1}{n}\sum_{j=k-n+1}^{k}\left(P_j^{+} + \phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})\,\phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})^{T} - \Phi_j w \Phi_j^{T} - \hat{Q}_j\right). \tag{51}$$
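Analogously, the process-function EMV (51) can be sketched as below, with `d` denoting the windowed state-update vectors $\phi^{-1}(\hat{\chi}_j^{-},\hat{\chi}_j^{+})$ (all names are our assumptions, not from the paper's code):

```python
import numpy as np

def process_emv(P_post, d, Phi, w, Q_hat):
    """Sliding-window EMV of the system function, Eq. (51):
    Delta_f = (1/n) * sum_j (P_j^+ + d_j d_j^T - Phi_j w Phi_j^T - Q_j).
    P_post, d, Phi, Q_hat are lists over the sliding window."""
    n = len(P_post)
    acc = np.zeros_like(Q_hat[0], dtype=float)
    for Pj, dj, Phij, Qj in zip(P_post, d, Phi, Q_hat):
        acc += Pj + np.outer(dj, dj) - Phij @ w @ Phij.T - Qj
    delta = acc / n
    # Remark 4: keep only the non-negative diagonal of the estimate.
    return np.diag(np.clip(np.diag(delta), 0.0, None))
```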
Remark 4.
As the EMVs must be non-negative ($\bar{\sigma}_f^2 \ge 0$, $\bar{\sigma}_h^2 \ge 0$) and $\Delta_f$, $\Delta_h$ are diagonal matrices, it suffices to estimate only the diagonal elements of the EMV matrices. These diagonal elements need to be non-negative; if some estimated diagonal elements are negative, one should set them to zero.
Remark 5.
Equations (42) and (51) show that the validity of the EMV estimation depends on the accuracy of the process and measurement noise covariances. Filtering accuracy also relies on the noise statistics. In addition, even though (42) and (51) theoretically give adaptive EMVs, they introduce an additional computational burden.

5. Application to Attitude Estimation

In this section, we apply the proposed method to a simulated attitude estimation system and compare it with various filters presented in the cited literature. Consider a rotating rigid body without translation that contains a triaxial gyroscope offering the angular velocity $\omega_m \in \mathbb{R}^3$ at a 100 Hz sampling frequency, a triaxial accelerometer providing the acceleration $f_b \in \mathbb{R}^3$ at a 10 Hz sampling frequency, and a triaxial magnetometer measuring the magnetic field $m_b \in \mathbb{R}^3$ at a 10 Hz sampling frequency. For the spacecraft attitude estimation problem, one can replace the accelerometers and magnetometers with sun sensors and magnetometers [23]. The attitude kinematics in terms of the unit quaternion is described by
$$\dot{q} = \frac{1}{2}\, q \otimes (\omega_m - b_\omega - n_\omega), \qquad \dot{b}_\omega = n_{b_\omega}. \tag{52}$$
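The quaternion kinematics above can be integrated exactly over one gyroscope sampling interval when the angular rate is held constant. The following noise-free sketch illustrates this; the scalar-first Hamilton convention and all names are our assumptions, not the paper's code:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p ⊗ q, scalar-first convention (assumed)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def propagate(q, b_omega, omega_m, dt):
    """One step of the noise-free kinematics: q_dot = 0.5 q ⊗ (omega_m - b),
    with the bias modeled as a random walk held constant over the step."""
    w = omega_m - b_omega
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = w / np.linalg.norm(w)
        dq = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))
    q_next = quat_mul(q, dq)
    return q_next / np.linalg.norm(q_next), b_omega
```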
Let the accelerometer and magnetometer outputs be the observations; the observation equations are expressed by [23,42]
$$f_b = q^{-1}\otimes f_n \otimes q + v_f, \qquad m_b = q^{-1}\otimes m_n \otimes q + v_m, \tag{53}$$
in which $q \in SO(3)$ is the unit quaternion representing the attitude of the body frame with respect to the navigation frame; $b_\omega \in \mathbb{R}^3$ denotes the gyroscope bias; $n_\omega \in \mathbb{R}^3$ and $n_{b_\omega} \in \mathbb{R}^3$ are uncorrelated white Gaussian noises; $f_n \in \mathbb{R}^3$ is the gravity vector in the navigation frame; $m_n \in \mathbb{R}^3$ represents Earth's magnetic field; and $v_f \in \mathbb{R}^3$ and $v_m \in \mathbb{R}^3$ are sensor noises assumed to be white Gaussian. Let $\chi = (q, b_\omega)$, which belongs to the matrix Lie group $G = SO(3)\times\mathbb{R}^3$; with the definition of $\phi(\hat{\chi},\eta)$ in Section 2, we define the right and left estimation errors as follows:
$$(\tilde{q}_R, \tilde{b}_R) = \exp_G(\eta_R) = \left(\hat{q}\otimes q^{-1},\ q\otimes(\hat{b}_\omega - b_\omega)\otimes q^{-1}\right) \quad (\text{right estimation error}), \tag{54}$$
$$(\tilde{q}_L, \tilde{b}_L) = \exp_G(\eta_L) = \left(\hat{q}^{-1}\otimes q,\ b_\omega - \hat{b}_\omega\right) \quad (\text{left estimation error}). \tag{55}$$
Then, the retractions corresponding to the above estimation errors are
$$\phi_R(\hat{\chi},\eta_R) = \exp_G(\eta_R)\,\hat{\chi} = \left(\exp_q(\eta_R^{1:3})\otimes\hat{q},\ q^{-1}\otimes(\hat{b}_\omega - \eta_R^{4:6})\otimes q\right), \tag{56}$$
$$\phi_L(\hat{\chi},\eta_L) = \hat{\chi}\,\exp_G(\eta_L) = \left(\hat{q}\otimes\exp_q(\eta_L^{1:3}),\ \hat{b}_\omega + \eta_L^{4:6}\right). \tag{57}$$
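As an illustration, the left retraction above (the simpler of the two, since it does not involve the true-state quaternion) can be sketched as follows, again assuming a scalar-first Hamilton quaternion convention; all names are ours:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p ⊗ q, scalar-first convention (assumed)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def exp_q(rv):
    """Quaternion exponential of a rotation vector rv in R^3."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * rv / theta))

def phi_left(q_hat, b_hat, eta):
    """Left retraction on G = SO(3) x R^3: chi_hat * exp_G(eta), where
    eta[:3] perturbs the quaternion and eta[3:] perturbs the bias."""
    return quat_mul(q_hat, exp_q(eta[:3])), b_hat + eta[3:]
```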

Numerical Simulations

To evaluate estimation quality, numerical simulations were carried out. In the simulations, we selected the north–east–down frame as the navigation frame. The attitude trajectory, which lasted 200 s, was generated by $\omega = [0.1\cos(0.15t),\ 0.1\sin(0.1t),\ 0.1\cos(0.05t)]^{T}\ \mathrm{rad/s}$. The true trajectory without noise and bias is shown in Figure 2. The gyroscope bias was $b_\omega = [0.012, 0.021, 0.014]^{T}\ \mathrm{rad/s}$, and the gyroscope noise was white Gaussian noise subject to $N(0, 10^{-3}\,(\mathrm{rad/s})^2)$. The two known corresponding measurement vectors in the navigation frame are $f_n = [0, 0, g]^{T}$ and $m_n = [0.3197, 0, 0.4226]^{T}\ \mathrm{G}$, where $g = 9.78\ \mathrm{m/s^2}$ is the gravitational acceleration. The noises of the accelerometer and the magnetometer are white Gaussian and follow $v_f \sim N(0, 2\times 10^{-3}\ g^2)$ and $v_m \sim N(0, 4\times 10^{-3}\ \mathrm{G}^2)$, respectively. Some sensor parameters are provided by [42]. For filtering, we set the initial covariance matrix to $P_0 = \mathrm{blkdiag}(10^{-3} I_{3\times 3}, 10^{-7} I_{3\times 3})$ and the initial gyroscope bias to $\hat{b}_{\omega 0} = [0, 0, 0]^{T}\ \mathrm{rad/s}$. The kernel scale and kernel length that constituted the kernel parameters were selected as 0.001 and 0.3.
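For reproducibility, the reference angular-velocity profile above can be generated as in the following sketch (the function name is our own):

```python
import numpy as np

def omega_true(t):
    """Reference angular velocity (rad/s) generating the 200 s trajectory
    used in Cases 1 and 2."""
    return np.array([0.1 * np.cos(0.15 * t),
                     0.1 * np.sin(0.10 * t),
                     0.1 * np.cos(0.05 * t)])
```

Sampling it at the gyroscope rate of 100 Hz gives `omega_true(k * 0.01)` for step `k`.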
Three cases were considered in our simulations, and in the first two cases, we utilized the trajectory defined above but with different sets of initial attitudes. Case 1 randomly selected the initial attitude from a uniform distribution between $-10^\circ$ and $10^\circ$, and Case 2 randomly chose the initial attitude from a uniform distribution over $[-90^\circ, 90^\circ]$. In Case 3, we used the angular velocity set above, but while $50\ \mathrm{s} < t \le 60\ \mathrm{s}$, the angular velocity was set to $\omega = [5, 0.5, 0.5]^{T}\ \mathrm{rad/s}$; while $100\ \mathrm{s} < t \le 108\ \mathrm{s}$, it was set to $\omega = [0.1, 0.1, 3]^{T}\ \mathrm{rad/s}$; and while $150\ \mathrm{s} < t \le 152\ \mathrm{s}$, it was set to $\omega = [0.1, 5, 0.1]^{T}\ \mathrm{rad/s}$. The initial attitudes of Case 3 were the same as those in Case 1.
For each case, we conducted 100 independent Monte Carlo simulations and utilized the following root-mean-square error (RMSE) and averaged RMSE (ARMSE) of the Euler angles (roll, pitch, yaw) to evaluate estimation quality:
$$\mathrm{RMSE}_{att}(k) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left\| x_k^m - \hat{x}_k^m\right\|^2}, \tag{58}$$
$$\mathrm{ARMSE}_{att} = \sqrt{\frac{1}{MK}\sum_{k=1}^{K}\sum_{m=1}^{M}\left\| x_k^m - \hat{x}_k^m\right\|^2}, \tag{59}$$
where $M$ is the total number of simulations; $K$ is the number of simulation time steps; and $x_k^m$ and $\hat{x}_k^m$ represent the true (simulated) and estimated Euler angles, respectively.
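The two error metrics above can be computed in a vectorized way; the array layout below is our assumption:

```python
import numpy as np

def rmse_armse(x_true, x_est):
    """Per-step RMSE and overall ARMSE of the Euler angles.
    x_true, x_est: arrays of shape (M, K, 3) holding the Euler angles of
    M Monte Carlo runs over K time steps."""
    sq = np.sum((x_true - x_est) ** 2, axis=-1)  # squared error norms, (M, K)
    rmse = np.sqrt(np.mean(sq, axis=0))          # shape (K,), one value per step
    armse = np.sqrt(np.mean(sq))                 # scalar, averaged over M and K
    return rmse, armse
```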
With the simulated system and the initial conditions above, we compared nine different filters on Monte Carlo simulations:
* $SO(3)$-based CKF ($SO(3)$ CKF) that considers the attitude embedded in a special orthogonal group ($SO(3)$) and the gyroscope bias in a vector space;
* right invariant extended Kalman filter (Right-IEKF) and left invariant extended Kalman filter (Left-IEKF) in [21,23], where the gyroscope bias was treated as part of the Lie group structure;
* right and left cubature Kalman filters on Lie groups (Right-CKF-LG and Left-CKF-LG) [36], which can be treated as extensions of UKF-LG in [16];
* Bayes–Sard quadrature cubature Kalman filter ($SO(3)$ BSCKF) derived from [11] by utilizing the same Lie group action as the $SO(3)$ CKF;
* our proposed right and left BSCKFs on Lie groups (Right-BSCKF-LG and Left-BSCKF-LG) in Section 4;
* the proposed adaptive right BSCKF-LG with time-varying EMVs (Right-BSCKF-LG-adaptive).
We set the sliding window size in both the propagation and update steps of the Right-BSCKF-LG-adaptive to be n = 35 in Cases 1 and 2, and n = 15 in Case 3.
Simulation results of Case 1 are illustrated in Figure 3 and Table 1. Figure 3 compares the RMSEs of the Euler angles, while Table 1 summarizes the corresponding ARMSEs. With small initial attitude errors, the proposed BSCKF-type filters with cubature points performed better than other filters did and needed less time to reach a steady state. Right-BSCKF-LG and its adaptive version obviously outperformed the other filters. Simulation results also verified that the performance of Right-IEKF and Right-CKF-LG was similar and slightly better than that of Left-IEKF and Left-CKF-LG under small initial estimation errors. The selection of sliding window size could affect the estimation accuracy of Right-BSCKF-LG-adaptive, and the filtering accuracy of Right-BSCKF-LG-adaptive was not significantly better than that of Right-BSCKF-LG.
Case 2 estimation results under large initial attitude errors are demonstrated in Figure 4 and Table 2. These graphs show that the estimation quality of $SO(3)$ CKF, Left-CKF-LG, and Left-IEKF was poor, especially that of $SO(3)$ CKF. In contrast, Right-CKF-LG and Right-IEKF achieved better accuracy and robustness than those of Left-CKF-LG and Left-IEKF. Again, the BSCKF-type filters performed better. The estimation accuracy of Right-BSCKF-LG and Right-BSCKF-LG-adaptive was better than that of the other filters. Moreover, the accuracy of Right-BSCKF-LG-adaptive improved compared with that of Right-BSCKF-LG. Right-BSCKF-LG-adaptive performed best in the yaw angle and achieved the fastest convergence rate. As seen from the above analyses, the right estimation errors on the Lie group $G = SO(3)\times\mathbb{R}^3$ are more suitable for attitude estimation with vector measurements.
In Case 3, we assumed that some fast motion occurred in the system trajectory. The true trajectory of Case 3 is shown in Figure 5. Estimation results under small initial attitude errors are indicated in Figure 6 and Table 3. Figure 6 shows that the performance of all filters was worse than in Case 1, partly because of the definition domains of the Euler angles. However, the right and left BSCKF-LGs proposed in this paper still outperformed all the other filters, especially the adaptive right BSCKF-LG. Table 3 summarizes the ARMSEs of the Euler angles. Left-BSCKF-LG was slightly more accurate than Right-BSCKF-LG in this case. The accuracy of Right-BSCKF-LG-adaptive was also greatly improved compared with that of Right-BSCKF-LG.
Figure 7 exhibits the computation times of the compared filters. The CKF-LGs and BSCKF-LGs spent more computation time, while the IEKFs spent the least. When the Bayes–Sard weights were calculated in advance according to the known system-state and measurement dimensions, the computation costs of the BSCKF-LGs were almost equal to those of the CKF-LGs. Right-BSCKF-LG-adaptive did not significantly increase the computational burden compared with Right-BSCKF-LG. Overall, the BSCKF-LGs showed more accurate and robust estimation results in this attitude estimation example.

6. Conclusions

This article proposed a generalized Bayesian estimation algorithm on matrix Lie groups to calibrate numerical integration errors in classical sigma point filters on Lie groups using BSQMT theory. This Bayesian estimation is applicable both to measurements in Euclidean space and to those evolving on the Lie group. Afterwards, with the proposed Lie group Bayesian estimator, we presented a Bayes–Sard cubature Kalman filter on Lie groups that comes in two variants. To obtain more accurate estimation, we then developed an approach to calculate adaptive EMVs. Numerical simulation results on quaternion attitude estimation indicated the superiority of our proposed filters over CKFs on Lie groups and invariant EKFs. Future work includes exploring novel suitable methods to compute adaptive EMVs and applying the proposed filtering algorithm to visual–inertial navigation and fast drone navigation.

Author Contributions

Conceptualization, H.G.; methodology, H.G.; software, H.G.; validation, H.G., H.L. and X.H.; formal analysis, H.G. and Y.Z.; investigation, H.G., X.H. and H.L.; resources, H.L., X.H. and Y.Z.; data curation, H.G. and Y.Z.; writing—original draft preparation, H.G.; writing—review and editing, H.G. and X.H.; visualization, H.G. and Y.Z.; supervision, H.L.; project administration, H.G.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China under Grant 62073265.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they also form part of ongoing studies.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Because the dimension of the basis function space is $2d$, the weights are the unique solution of the linear system $\Psi w = \bar{\psi}$. Hence, under the cubature rule,
$$\sum_{n=1}^{2d} w_n\, q(\xi_n) = \int q(x)\, N(x \mid 0, I)\,\mathrm{d}x, \tag{A1}$$
where $q \in \pi$. For the standard normal distribution, let $D \in \{1,\ldots,d\}$; then,
$$\int N(x \mid 0, I)\,\mathrm{d}x = \sum_{n=1}^{2d} w_n = 1,\qquad \int x_D\, N(x \mid 0, I)\,\mathrm{d}x = \sum_{n=1}^{2d} w_n\,\xi_{n,D} = 0,\qquad \int x_D^2\, N(x \mid 0, I)\,\mathrm{d}x = \sum_{n=1}^{2d} w_n\,\xi_{n,D}^2 = 1. \tag{A2}$$
According to the definition of the cubature points, it can be inferred from the above formulas that $w_D = w_{d+D} = 1/(2d)$. Thus, the Bayes–Sard weights equal the third-degree cubature transform weights.
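The lemma can be checked numerically: the third-degree cubature points and equal weights reproduce the three moment conditions used in the proof. A sketch (names are ours):

```python
import numpy as np

def cubature_points_weights(d):
    """Third-degree cubature rule: 2d points sqrt(d) * (+/- e_i),
    each with weight 1/(2d)."""
    xi = np.sqrt(d) * np.vstack([np.eye(d), -np.eye(d)])  # shape (2d, d)
    w = np.full(2 * d, 1.0 / (2 * d))
    return xi, w
```

For any dimension `d`, the weights sum to 1, the weighted points have zero mean, and the weighted squared coordinates sum to 1, matching the N(0, I) moments above.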

References

1. Aligia, D.A.; Roccia, B.A.; Angelo, C.; Magallán, G.; Gonzalez, G.N. An orientation estimation strategy for low cost IMU using a nonlinear Luenberger observer. Measurement 2020, 173, 108664.
2. Reina, G.; Messina, A. Vehicle dynamics estimation via augmented extended Kalman filtering. Measurement 2018, 133, 383–395.
3. Huang, Y.; Zhang, Y.; Bo, X.; Wu, Z.; Chambers, J. A new outlier-robust student’s t based Gaussian approximate filter for cooperative localization. IEEE/ASME Trans. Mechatron. 2017, 22, 2380–2386.
4. Kalman, R. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
5. Sunahara, Y.; Yamashita, K. An approximate method of state estimation for non-linear dynamical systems with state-dependent noise. Int. J. Control 1970, 11, 957–972.
6. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2001, 45, 477–482.
7. Arasaratnam, I.; Haykin, S.; Elliott, R.J. Discrete-time nonlinear filtering algorithms using Gauss–Hermite quadrature. Proc. IEEE 2007, 95, 953–977.
8. Arasaratnam, I.; Haykin, S.; Hurd, T.R. Cubature Kalman filtering for continuous-discrete systems: Theory and simulations. IEEE Trans. Signal Process. 2010, 58, 4977–4993.
9. Prüher, J.; Šimandl, M. Bayesian quadrature in nonlinear filtering. In Proceedings of the 12th International Conference on Informatics in Control, Automation and Robotics, Colmar, France, 21–23 July 2015; pp. 380–387.
10. Prüher, J.; Straka, O. Gaussian process quadrature moment transform. IEEE Trans. Autom. Control 2017, 63, 2844–2854.
11. Prüher, J.; Karvonen, T.; Oates, C.J.; Straka, O.; Särkkä, S. Improved calibration of numerical integration error in sigma-point filters. IEEE Trans. Autom. Control 2020, 66, 1286–1296.
12. Bourmaud, G.; Mégret, R.; Giremus, A.; Berthoumieu, Y. From intrinsic optimization to iterated extended Kalman filtering on Lie groups. J. Math. Imaging Vis. 2016, 55, 284–303.
13. Hauberg, S.; Lauze, F.; Pedersen, K.S. Unscented Kalman filtering on Riemannian manifolds. J. Math. Imaging Vis. 2013, 46, 103–120.
14. Menegaz, H.; Ishihara, J.Y.; Kussaba, H. Unscented Kalman filters for Riemannian state-space systems. IEEE Trans. Autom. Control 2019, 64, 1487–1502.
15. Brossard, M.; Barrau, A.; Bonnabel, S. A code for unscented Kalman filtering on manifolds (UKF-M). In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1487–1502.
16. Brossard, M.; Condomines, J.P. Unscented Kalman filtering on Lie groups. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2485–2491.
17. Brossard, M.; Bonnabel, S.; Barrau, A. Unscented Kalman filter on Lie groups for visual inertial odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 649–655.
18. Bonnabel, S. Left-invariant extended Kalman filter and attitude estimation. In Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 1027–1032.
19. Bonnabel, S.; Martin, P.; Salaun, E. Invariant extended Kalman filter: Theory and application to a velocity-aided attitude estimation problem. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) Held Jointly with the 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 1297–1304.
20. Barrau, A.; Bonnabel, S. Intrinsic filtering on Lie groups with applications to attitude estimation. IEEE Trans. Autom. Control 2015, 60, 436–449.
21. Barrau, A.; Bonnabel, S. The invariant extended Kalman filter as a stable observer. IEEE Trans. Autom. Control 2017, 62, 1797–1812.
22. Barrau, A.; Bonnabel, S. Invariant Kalman filtering. Annu. Rev. Control Robot. Auton. Syst. 2018, 1, 237–257.
23. Gui, H.; de Ruiter, A.H.J. Quaternion invariant extended Kalman filtering for spacecraft attitude estimation. J. Guid. Control Dyn. 2018, 41, 863–878.
24. Condomines, J.P.; Hattenberger, G. Nonlinear state estimation using an invariant unscented Kalman filter. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 19–22 August 2013; p. 4869.
25. Condomines, J.P.; Seren, C.; Hattenberger, G. Invariant unscented Kalman filter with application to attitude estimation. In Proceedings of the IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, Australia, 12–15 December 2017; pp. 2783–2788.
26. Condomines, J.P.; Hattenberger, G. Invariant Unscented Kalman Filtering: A Parametric Formulation Study for Attitude Estimation. 2019. hal-02072456. Available online: https://hal-enac.archives-ouvertes.fr/hal-02072456 (accessed on 4 April 2022).
27. Barrau, A.; Bonnabel, S. Three examples of the stability properties of the invariant extended Kalman filter. IFAC-PapersOnLine 2017, 50, 431–437.
28. Barrau, A.; Bonnabel, S. An EKF-SLAM algorithm with consistency properties. arXiv 2015, arXiv:1510.06263.
29. Bourmaud, G.; Mégret, R.; Giremus, A.; Berthoumieu, Y. Discrete extended Kalman filter on Lie groups. In Proceedings of the 21st European Signal Processing Conference, Marrakech, Morocco, 9–13 September 2013; pp. 1–5.
30. Bourmaud, G.; Mégret, R.; Arnaudon, M.; Giremus, A. Continuous-discrete extended Kalman filter on matrix Lie groups using concentrated Gaussian distributions. J. Math. Imaging Vis. 2015, 51, 209–228.
31. Phogat, K.S.; Chang, D.E. Invariant extended Kalman filter on matrix Lie groups. Automatica 2020, 114, 108812.
32. Phogat, K.S.; Chang, D.E. Discrete-time invariant extended Kalman filter on matrix Lie groups. Int. J. Robust Nonlinear Control 2020, 30, 4449–4462.
33. Teng, Z.; Wu, K.; Su, D.; Huang, S.; Dissanayake, G. An invariant-EKF VINS algorithm for improving consistency. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1578–1585.
34. Brossard, M.; Bonnabel, S.; Barrau, A. Invariant Kalman filtering for visual inertial SLAM. In Proceedings of the International Conference on Information Fusion, Cambridge, UK, 10–13 July 2018; pp. 2021–2028.
35. Sjøberg, A.M.; Egeland, O. Lie algebraic unscented Kalman filter for pose estimation. IEEE Trans. Autom. Control 2022, in press.
36. Guo, H.; Liu, H.; Zhou, Y.; Li, J. Quaternion invariant cubature Kalman filtering for attitude estimation. In Proceedings of the 2020 3rd World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), Shanghai, China, 4–6 December 2020; pp. 67–72.
37. Barrau, A.; Bonnabel, S. Stochastic observers on Lie groups: A tutorial. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 17–19 December 2018; pp. 1264–1269.
38. Karvonen, T.; Oates, C.J.; Särkkä, S. A Bayes–Sard cubature method. arXiv 2018, arXiv:1804.03016.
39. Xiong, K.; Liu, L.; Zhang, H. Modified unscented Kalman filtering and its application in autonomous satellite navigation. Aerosp. Sci. Technol. 2009, 13, 238–246.
40. Hu, G.; Gao, B.; Zhong, Y.; Gu, C. Unscented Kalman filter with process noise covariance estimation for vehicular INS/GPS integration system. Inf. Fusion 2020, 64, 194–204.
41. Wang, J.; Li, M. Covariance regulation based invariant Kalman filtering for attitude estimation on matrix Lie groups. IET Control Theory Appl. 2021, 15, 2017–2025.
42. Stovner, B.B.; Johansen, T.A.; Fossen, T.I.; Schjølberg, I. Attitude estimation by multiplicative exogenous Kalman filter. Automatica 2018, 95, 347–355.
Figure 1. Schematic of mappings among Lie group, Lie algebra, and Euclidean space. Lie group model is oversimplified. Two distinct identifications of tangent space T χ G at χ with tangent space T I G at I N × N afforded by left or right Lie multiplication. Multiplication map includes left and right multiplication.
Figure 2. True trajectory of Euler angles in Cases 1 and 2.
Figure 3. RMSEs of Euler angles in Case 1. (a) RMSEs of roll angle; (b) RMSEs of pitch angle; (c) RMSEs of yaw angle.
Figure 4. RMSEs of Euler angles in Case 2. (a) RMSEs of roll angle; (b) RMSEs of pitch angle; (c) RMSEs of yaw angle.
Figure 5. True trajectory of Euler angles in Case 3.
Figure 6. RMSEs of Euler angles in Case 3. (a) RMSEs of roll angle; (b) RMSEs of pitch angle; (c) RMSEs of yaw angle.
Figure 7. Execution times of different filters.
Table 1. ARMSEs of Euler angles in Case 1.
Filters | ARMSEs of Roll (deg) | ARMSEs of Pitch (deg) | ARMSEs of Yaw (deg)
$SO(3)$ CKF | 1.5172 | 1.5174 | 3.5564
Left-CKF-LG | 0.5011 | 0.8032 | 2.2217
Right-CKF-LG | 0.4571 | 0.7298 | 1.9835
Left-IEKF | 0.5103 | 0.8185 | 2.1791
Right-IEKF | 0.4463 | 0.7226 | 1.9114
$SO(3)$ BSCKF | 0.2505 | 0.2351 | 0.7080
Left-BSCKF-LG | 0.2156 | 0.1701 | 0.5407
Right-BSCKF-LG | 0.2062 | 0.1708 | 0.5277
Right-BSCKF-LG-adaptive | 0.2113 | 0.1745 | 0.4586
Table 2. ARMSEs of Euler angles in Case 2.
Filters | ARMSEs of Roll (deg) | ARMSEs of Pitch (deg) | ARMSEs of Yaw (deg)
$SO(3)$ CKF | 4.3881 | 3.2943 | 10.104
Left-CKF-LG | 3.7219 | 2.6936 | 8.8371
Right-CKF-LG | 1.5143 | 1.1443 | 4.1851
Left-IEKF | 4.2617 | 2.9011 | 9.2021
Right-IEKF | 0.7231 | 0.8164 | 2.4947
$SO(3)$ BSCKF | 1.9847 | 1.3373 | 5.6511
Left-BSCKF-LG | 1.8803 | 1.2971 | 5.4425
Right-BSCKF-LG | 0.6037 | 0.5515 | 1.8587
Right-BSCKF-LG-adaptive | 0.5401 | 0.4604 | 1.4699
Table 3. ARMSEs of Euler angles in Case 3.
Filters | ARMSEs of Roll (deg) | ARMSEs of Pitch (deg) | ARMSEs of Yaw (deg)
$SO(3)$ CKF | 18.5315 | 2.8001 | 15.7173
Left-CKF-LG | 9.2549 | 1.4878 | 12.3105
Right-CKF-LG | 15.2969 | 2.1041 | 13.0327
Left-IEKF | 21.5772 | 2.6352 | 13.7434
Right-IEKF | 21.6621 | 2.3658 | 15.1156
$SO(3)$ BSCKF | 14.3881 | 2.4143 | 17.4939
Left-BSCKF-LG | 3.7793 | 1.2411 | 12.2423
Right-BSCKF-LG | 8.4749 | 1.4365 | 10.6021
Right-BSCKF-LG-adaptive | 3.7641 | 0.5966 | 2.4787
Share and Cite

MDPI and ACS Style

Guo, H.; Zhou, Y.; Liu, H.; Hu, X. Improved Cubature Kalman Filtering on Matrix Lie Groups Based on Intrinsic Numerical Integration Error Calibration with Application to Attitude Estimation. Machines 2022, 10, 265. https://doi.org/10.3390/machines10040265
