Article

The Interacting Multiple Model Filter and Smoother on Boxplus-Manifolds †

Multi-Sensor Interactive Systems Group, University of Bremen, 28359 Bremen, Germany
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in Koller, T.L.; Frese, U. The Interacting Multiple Model Filter on Boxplus-Manifolds. In Proceedings of the 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Karlsruhe, Germany, 14–16 September 2020.
Sensors 2021, 21(12), 4164; https://doi.org/10.3390/s21124164
Submission received: 13 April 2021 / Revised: 7 June 2021 / Accepted: 12 June 2021 / Published: 17 June 2021
(This article belongs to the Special Issue Multisensor Fusion and Integration)

Abstract

Hybrid systems are subject to multiple dynamic models, or so-called modes. To estimate the state, the sequence of modes has to be estimated, which results in an exponential growth of possible sequences. The most prominent solution to handle this is the interacting multiple model filter, which can be extended to smoothing. In this paper, we derive a novel generalization of the interacting multiple model filter and smoother to manifold state spaces, e.g., quaternions, based on the boxplus-method. As part thereof, we propose a linear approximation to the mixing of Gaussians and a Rauch–Tung–Striebel smoother for single models on boxplus-manifolds. The derivation of the smoother equations is based on a generalized definition of Gaussians on boxplus-manifolds. The three novel algorithms are evaluated in a simulation and perform comparably to specialized solutions for quaternions. So far, the benefit of the more principled approach is its generality towards manifold state spaces. The evaluation and generic implementations are published open source.

1. Introduction

The interacting multiple model filter (IMM) is widely used in the field of target tracking. After its original invention [1] for radar-based aircraft tracking [2,3,4], it has been used in various applications such as attitude estimation [5], lane change prediction [6], and sensor fault detection [7].
The IMM is applied when a single dynamical model does not predict the behavior of the system accurately [8]. This is the case when the system dynamics depend on modes, i.e., discrete states that change abruptly. The original IMM runs one Kalman Filter (KF) per mode and fuses the estimates of the filters probabilistically based on the likelihood of the models. Up to now, several adaptations of the IMM have been published to use different nonlinear filters such as the Extended Kalman Filter (EKF) [8], the Unscented Kalman Filter (UKF) [4], or the Particle Filter (PF) [9].
The accuracy of IMM estimates can be improved by smoothing [10]. Smoothing improves the past estimate by propagating the information of measurements backwards [11]. Retrospectively, it uses measurements from the past and future. Similarly to the IMM filter, the IMM smoother uses a smoother for each mode and fuses their estimates probabilistically. Several mixing schemes already exist and further schemes are being developed [12,13].
Typically, nonlinear filters and smoothers are designed to operate on vector spaces ($\mathbb{R}^n$). Thus, it is difficult to maintain manifold structures like the rotation quaternion in the filter. As it is often required to estimate orientations in target tracking, various extensions [14,15,16,17] have been developed to handle rotation quaternions or matrices correctly without destroying their manifold properties, e.g., unit norm or orthogonality. To our knowledge, no such extension exists for either the IMM or the IMM smoother.
For quaternions, one existing approach is to simply normalize the quaternion [5]. However, such an approach usually degrades the performance of the estimator due to erroneous covariance matrices. Apart from that approach, publications avoid quaternions in the state by working in 2D rather than 3D [6] or by using models that operate in world coordinates only, as is common in aircraft tracking [2]. In [18], the so-called delta-quaternion is used in the state, while the quaternion orientation is predicted outside of the IMM.
We see a gap in the literature on how to handle manifolds in the IMM smoother in a principled way. This paper closes the gap by proposing an IMM smoother that can handle manifolds properly. While we use quaternions as an accessible application example, we designed our novel algorithms to handle more general manifolds, e.g., a sphere.
The core difficulty of an IMM smoother on manifolds is the probabilistic mixing of states. In the IMM, the estimates of all filters are mixed in a weighted sum. Unfortunately, the +-operator breaks the manifold structure, which is why a weighted sum cannot be computed directly [14]. Similar problems appear for the covariance, which conceptually assumes additive noise on the state.
To overcome this problem for single-mode filtering on quaternions, the multiplicative EKF (MEKF) [15] and the error state KF (ESKF) [16] were developed. Both methods update the quaternion estimate by quaternion multiplication only, which, in contrast to addition, sustains the manifold structure. The boxplus-method (⊞-method) of Hertzberg et al. [14] generalizes this concept to arbitrary manifolds. It only allows changes to the manifold that do not break its structure. It has gained attention in pose tracking in recent years since it is a general approach to handle manifolds in nonlinear filtering [19,20] and least squares optimization [21]. The ⊞-method encapsulates manifolds as black boxes so that algorithms can handle them generically. Furthermore, it provides the necessary definitions to calculate a weighted sum of Gaussians over manifolds, as is required for the IMM on manifolds.
This paper is an extended version of [22], where we derived an IMM filter that uses the ⊞-method to properly handle the mixing of states and covariances in the manifold case. The method has been proven to perform a first-order correct probabilistic mixing of Gaussians. The additional contribution of this paper is the extension of the IMM filter on ⊞-manifolds to an IMM smoother. To do so, we extend the ⊞-method to Rauch–Tung–Striebel (RTS) smoothing [23].
In this work, we focus on the tangent spaces of manifolds to derive the RTS smoother equations and reinterpret the mixing of covariances with tangent space transforms. To achieve this, we propose a generalized definition of Gaussians on ⊞-manifolds, which clarifies how to transform Gaussians between different tangent spaces. The rigorous consideration of tangent space transformations results in a generic Extended Kalman smoother (EKS) on ⊞-manifolds. The combination of the novel EKS and the probabilistic mixing of Gaussians of [22] directly yields the IMM smoother on ⊞-manifolds based on the scheme of [11].
The remainder of the paper is structured as follows: the theoretical foundation of probabilistic mixing on ⊞-manifolds, with a new derivation of the covariance mixing based on our generalized definition of Gaussians, is presented in Section 2. The resulting IMM filter on ⊞-manifolds is presented in Section 3. As a basis for the IMM smoother, an RTS smoother on ⊞-manifolds is derived in Section 4. The novel IMM smoother is shown in Section 5. In Section 6, we give a simulation example for the new algorithms and analyze their performance compared to state-of-the-art approaches. Finally, we conclude and discuss future work in Section 7.

2. Weighted Sum of Gaussians on ⊞-Manifolds

Usually, the representation of a manifold $\mathcal{S}$ is overparametrized, i.e., represented with more parameters than it has degrees of freedom (DOF). The key idea of the ⊞-method is to allow changes of the manifold only in the direction of the DOFs [14]. This direction is called the tangent space $\mathbb{V} \subseteq \mathbb{R}^{DOF}$. The direction of the DOFs depends on the specific instance of the manifold. Thus, the tangent space is always defined with respect to a reference instance. Small changes to the manifold instance can be expressed in the tangent space and are applied with the operator $\boxplus : \mathcal{S} \times \mathbb{V} \to \mathcal{S}$. The ⊞-operator enforces the manifold structure.
The difference of two manifold instances is also expressed in the tangent space and can be derived by the complementary boxminus-operator $\boxminus : \mathcal{S} \times \mathcal{S} \to \mathbb{V}$. The ⊟-operator calculates the geodesic between two manifold instances, i.e., the shortest path between them on the manifold ($x \boxplus (y \boxminus x) = y$). The quadruplet $\{\mathcal{S}, \boxplus, \boxminus, \mathbb{V}\}$ is called a ⊞-manifold. The operators for commonly used manifolds can be found in [14].
The ⊞-method allows us to compound a state of multiple manifolds and vectors. The $\boxplus/\boxminus$-operators of a compound manifold $x$ simply apply the operators elementwise:
$$x = \{x_1, \dots, x_n\}, \qquad (1)$$
$$x \boxplus \delta = \{x_1 \boxplus \delta_1, \dots, x_n \boxplus \delta_n\}, \qquad (2)$$
$$y \boxminus x = \{y_1 \boxminus x_1, \dots, y_n \boxminus x_n\}. \qquad (3)$$
For vectors, the operators naturally reduce to $+/-$. Thus, the method allows compounding states of manifolds and vectors seamlessly, which is essential for target tracking applications.
One axiomatic property of ⊞-manifolds is that every instance $y$ of the manifold can be represented in the tangent space of any other instance $x$ by $y = x \boxplus (y \boxminus x)$ [14]. Thus, all operations can be defined with respect to the tangent space of $x$, although singularities may arise if $x$ and $y$ are distant. Since the tangent space is a vector space, standard definitions and operations can be used. By choosing the tangent space appropriately, troublesome singularities are avoided. Essentially, this is the engineering "trick" of the ⊞-method.
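To make the operators concrete, the following is a minimal sketch of ⊞ and ⊟ for unit quaternions using Eigen, with the tangent space parametrized as a rotation vector (Euler-angle-axis). The function names are ours for illustration; the authors' open-source implementations provide their own encapsulation of these operators.

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

using Vec3 = Eigen::Vector3d;
using Quat = Eigen::Quaterniond;

// ⊞: apply a small tangent-space change delta (a rotation vector) to x.
// The change is applied by quaternion multiplication, which keeps the unit norm.
Quat boxplus(const Quat &x, const Vec3 &delta) {
    const double angle = delta.norm();
    const Quat dq = (angle < 1e-12)
                        ? Quat::Identity()
                        : Quat(Eigen::AngleAxisd(angle, delta / angle));
    return (x * dq).normalized();
}

// ⊟: geodesic difference of y and x, expressed as a rotation vector in the
// tangent space of x; Eigen's AngleAxis conversion yields the shortest rotation.
Vec3 boxminus(const Quat &y, const Quat &x) {
    const Eigen::AngleAxisd aa(x.conjugate() * y);
    return aa.angle() * aa.axis();
}
```

By construction, boxplus(x, boxminus(y, x)) reproduces y, which is exactly the axiom $y = x \boxplus (y \boxminus x)$ stated above.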
Hertzberg et al. already stated that it is not possible to compute an expected value or a weighted sum of manifolds with the classic definition since the +-operator is undefined [14]. Instead, they derived an implicit definition of the expected value:
$$E(X \boxminus \bar{X}) = 0, \qquad (4)$$
where $E(\cdot)$ computes the expected value, $X$ is a random variable on $\mathcal{S}$, and $\bar{X}$ is the expected value of $X$. Essentially, $X \boxminus \bar{X}$ expresses all manifold elements in the tangent space of $\bar{X}$, which allows us to apply the standard computation of the expected value.
In the IMM, Gaussian distributions are mixed instead of simple instances. Thus, $X$ is a mixture of Gaussians $X_j \sim N(\bar{x}_j, P_j)$ with mean $\bar{x}_j$ and covariance $P_j$. For ⊞-manifolds, the Gaussian is defined as follows:
$$N(\mu, P) := \mu \boxplus N(0, P), \qquad (5)$$
where $P \in \mathbb{R}^{DOF \times DOF}$. The key of this definition is that the covariance of the Gaussian is defined with respect to the tangent space of the mean. Hence, if two Gaussians have different means, their covariances refer to different tangent spaces.
In the manifold case, the weighted sum $\bar{x}$ of the mean values of the Gaussians is not guaranteed to be the expected value $\bar{X}$ of the complete distribution. However, we prove that it is at least a first-order correct approximation of $\bar{X}$.
Theorem 1.
Let $X$ be the mixture of $M$ normally distributed random variables $X_j \sim N(\bar{x}_j, P_j)$ with probabilities $p_j$, i.e., $X \sim \sum_{j=1}^{M} p_j N(\bar{x}_j, P_j)$. Then, the weighted sum $\bar{x}$ of the expected values $\bar{x}_j$ (defined implicitly by $\sum_{j=1}^{M} p_j (\bar{x}_j \boxminus \bar{x}) = 0$) is a first-order correct approximation of the expected value $\bar{X}$ of all elements in $X$.
Proof. 
$$E(X \boxminus \bar{x}) \stackrel{?}{=} E(X \boxminus \bar{X}) = 0, \qquad (6)$$
$$E(X \boxminus \bar{x}) = \int p(x)\,(x \boxminus \bar{x})\,dx = \sum_{j=1}^{M} \int p(x_j)\,(x_j \boxminus \bar{x})\,dx_j, \qquad (7)$$
$$\int p(x_j)\,dx_j = p_j, \qquad (8)$$
where we use the conventions $p(x) = p(X = x)$ and $p(x_j) = p(X_j = x_j)$. Each element $x_j$ can be expressed in the tangent space of the respective mean $\bar{x}_j$ using the axioms of ⊞-manifolds [14]:
$$x_j = \bar{x}_j \boxplus \delta_j, \quad \delta_j = x_j \boxminus \bar{x}_j, \qquad (9)$$
$$E(X \boxminus \bar{x}) = \sum_{j=1}^{M} \int p(x_j)\,\big((\bar{x}_j \boxplus \delta_j) \boxminus \bar{x}\big)\,dx_j. \qquad (10)$$
We approximate the ⊞-operator with a first-order Taylor series around $\delta_j = 0$:
$$E(X \boxminus \bar{x}) \approx \sum_{j=1}^{M} \int p(x_j)\,\big((\bar{x}_j \boxplus 0) \boxminus \bar{x} + J_j \cdot (\delta_j - 0)\big)\,dx_j, \qquad (11)$$
$$J_j = \frac{\partial\,\big((\bar{x}_j \boxplus \delta_j) \boxminus \bar{x}\big)}{\partial \delta_j}\bigg|_{\delta_j = 0}. \qquad (12)$$
Using Equation (8), we can split this into
$$E(X \boxminus \bar{x}) \approx \sum_{j=1}^{M} \bigg(p_j\,(\bar{x}_j \boxminus \bar{x}) + J_j \int p(x_j)\,\delta_j\,dx_j\bigg). \qquad (13)$$
The first summand is 0 by the implicit definition of $\bar{x}$:
$$E(X \boxminus \bar{x}) \approx \sum_{j=1}^{M} J_j \int p(x_j)\,\delta_j\,dx_j. \qquad (14)$$
Using the definition of $\bar{x}_j$,
$$E(X_j \boxminus \bar{x}_j) = 0 = \int p(x_j)\,(x_j \boxminus \bar{x}_j)\,dx_j = \int p(x_j)\,\delta_j\,dx_j, \qquad (15)$$
we can reduce to
$$E(X \boxminus \bar{x}) \approx 0. \qquad (16)$$
Thus, the approximation is first-order correct. □
In the IMM, the mixed distribution is approximated with a Gaussian. Since the covariance of manifolds is expressed in the tangent space, the covariances cannot be mixed as in the original IMM. First, the Gaussians have to be expressed with respect to the same tangent space.
Naturally, Gaussians on ⊞-manifolds are defined in the tangent space of their expected value, as in (5). However, we can retrieve a more general definition of the Gaussian by defining it in the tangent space of an arbitrary reference $r$:
$$N_r\big(E_r(X), \mathrm{Cov}_r(X)\big) = r \boxplus N\big(E_r(X), \mathrm{Cov}_r(X)\big), \qquad (17)$$
$$E_r(X) = E(X \boxminus r) = \int (x \boxminus r)\,p(x)\,dx, \qquad (18)$$
$$\mathrm{Cov}_r(X) = \mathrm{Cov}(X \boxminus r) = \int \big((x \boxminus r) - E(X \boxminus r)\big)\big((x \boxminus r) - E(X \boxminus r)\big)^T\,p(x)\,dx. \qquad (19)$$
Choosing $r = E(X)$ directly yields the definitions of [14]. The standard computation rules for the expectation and covariance do not necessarily hold for this generalization, e.g., $E_r(X) + E_r(Y) \neq E_r(X + Y)$, since the +-operator is undefined on the manifold.
With the new definition, the reference can be chosen arbitrarily, as in Figure 1. To change the reference of the Gaussian from $r_1$ to $r_2$, the Gaussian has to be transformed to the manifold space and, afterwards, to the tangent space of $r_2$:
$$E_{r_2}(X) = E\Big(\big(r_1 \boxplus N(E_{r_1}(X), \mathrm{Cov}_{r_1}(X))\big) \boxminus r_2\Big), \qquad (20)$$
$$\mathrm{Cov}_{r_2}(X) = \mathrm{Cov}\Big(\big(r_1 \boxplus N(E_{r_1}(X), \mathrm{Cov}_{r_1}(X))\big) \boxminus r_2\Big). \qquad (21)$$
As usual, no general closed-form solution exists, since the operators are nonlinear. A first-order Taylor approximation at $E_{r_1}(X)$ yields
$$E_{r_2}(X) \approx \big(r_1 \boxplus E_{r_1}(X)\big) \boxminus r_2, \qquad (22)$$
$$\mathrm{Cov}_{r_2}(X) \approx J\,\mathrm{Cov}_{r_1}(X)\,J^T, \qquad (23)$$
$$J = \frac{\partial\,\big(\big(r_1 \boxplus (E_{r_1}(X) + \delta)\big) \boxminus r_2\big)}{\partial \delta}\bigg|_{\delta = 0}. \qquad (24)$$
To retrieve the representation in Equation (5), the expected value is computed in the manifold space:
$$E(X) = r_2 \boxplus E_{r_2}(X) \approx r_2 \boxplus \big(\big(r_1 \boxplus E_{r_1}(X)\big) \boxminus r_2\big) = r_1 \boxplus E_{r_1}(X). \qquad (25)$$
Two major types of reference transformations occur in state estimation. The first type is the reference transform of a Gaussian expressed in the tangent space of $r_1$ to the tangent space of its expected value $r_2$. This reference transform is required to reestablish the standard form of Gaussians on ⊞-manifolds, as defined in [14]. We call this type "centered transform", since it transforms the Gaussian to the tangent space of its center, i.e., $r_2 = r_1 \boxplus E_{r_1}(X)$.
The second type transforms a Gaussian that is expressed in the tangent space of its expected value, i.e., $E_{r_1}(X) = 0$, to another reference $r_2$. This transform is required to express two Gaussians in the same tangent space to allow further computations. We name it "displaced transform".
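As a concrete illustration, the following sketch implements the linearized reference transform for quaternions, reusing boxplus/boxminus from above. The Jacobian is computed by central differences for brevity (our choice for this sketch; the authors' library uses automatic differentiation instead). Choosing $r_2 = r_1 \boxplus E_{r_1}(X)$ gives the centered transform; the displaced transform is obtained with $E_{r_1}(X) = 0$ and an arbitrary $r_2$.

```cpp
#include <Eigen/Dense>

using Mat3 = Eigen::Matrix3d;

// Numeric Jacobian at delta = 0 of a map f: R^3 -> R^3 between tangent spaces.
template <typename F>
Mat3 jacobian3(F f, double h = 1e-6) {
    Mat3 J;
    for (int c = 0; c < 3; ++c) {
        Vec3 d = Vec3::Zero();
        d(c) = h;
        J.col(c) = (f(d) - f(-d)) / (2.0 * h);  // central differences
    }
    return J;
}

// A Gaussian N_r(mean, cov) expressed in the tangent space of a reference r.
struct TangentGaussian {
    Vec3 mean;
    Mat3 cov;
};

// Re-express a Gaussian given relative to r1 in the tangent space of r2.
TangentGaussian changeReference(const TangentGaussian &g, const Quat &r1,
                                const Quat &r2) {
    auto remap = [&](const Vec3 &d) {
        return boxminus(boxplus(r1, g.mean + d), r2);  // (r1 ⊞ (mean + d)) ⊟ r2
    };
    const Mat3 J = jacobian3(remap);
    return {remap(Vec3::Zero()), J * g.cov * J.transpose()};
}
```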
Using displaced transforms, all Gaussians of a mixture can be expressed in the tangent space of $\bar{x}$. Then, the standard covariance mixing can be applied, which results in
$$P = \sum_{j=1}^{M} p(X_j)\,\Big(\big[\bar{x}_j \boxminus \bar{x}\big] + J_j P_j J_j^T\Big), \qquad (26)$$
where $[\cdot]$ denotes the outer product ($[v] = v v^T$) and $J_j$ is as in (12). The same equation can be derived by linearization of the covariance equation, as in our conference paper [22].
The first summand of (26) expresses the spread of the means in ⊞-terms. The second summand propagates the covariances of the Gaussians to the new mean. The structural difference to the original equation is the transformation of $P_j$ with the Jacobian $J_j$. In the vector case, the Jacobian equals the identity. Thus, this formula is a generalization of the Gaussian mixing of vectors to ⊞-manifolds.

3. An IMM Filter on ⊞-Manifolds

The IMM runs a filter, e.g., an EKF or UKF, for each mode of the system. At every time step, it performs three steps: interaction, filtering, and combination [8]. The interaction mixes the estimates of all filters according to their mode and transition probabilities. The filtering performs the prediction and update of each filter. It calculates the new mode probabilities based on the measurements. The details of this step depend on the chosen filter type. The combination step combines all estimates according to their mode probability to create the output of the IMM, which is the expected state of the system.
We derive the IMM on ⊞-manifolds (⊞-IMM) by two changes:
  • We use the ⊞-EKF [19,20] as the single mode filter. This adapts the filtering step to ⊞-manifolds. The measurement likelihood can be calculated using Gaussians on ⊞-manifolds.
  • We use the probabilistic mixing of Gaussians of Section 2 in the interaction and combination step.
The implicit definition of the expected value on ⊞-manifolds does not give a direct rule to compute it. It can be computed using the iterative algorithm ⊞-WeightedSum, adapted from [14]:
$$\text{Input:}\ \bar{X}_0,\ \bar{x}_j,\ p(X_j)\ \forall j \in [1, M],$$
$$\bar{X}_{k+1} = \bar{X}_k \boxplus \sum_{j=1}^{M} p(X_j)\,(\bar{x}_j \boxminus \bar{X}_k),$$
$$\bar{x} = \lim_{k \to \infty} \bar{X}_k.$$
In practice, the iteration is stopped when the change of the mean becomes small. The convergence behavior depends primarily on the choice of the initial guess $\bar{X}_0$. The algorithm is a generalization of [24] and identical to the computation of the mean on compact Lie groups [25].
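A minimal sketch of this iteration for the quaternion case, reusing the ⊞/⊟ helpers above (the name, iteration cap, and stopping tolerance are ours):

```cpp
#include <vector>

// Iterative weighted mean on the quaternion manifold (⊞-WeightedSum).
Quat weightedSum(Quat xBar,  // initial guess X̄_0
                 const std::vector<Quat> &means,
                 const std::vector<double> &probs,
                 int maxIter = 100, double tol = 1e-9) {
    for (int it = 0; it < maxIter; ++it) {
        Vec3 step = Vec3::Zero();
        for (std::size_t j = 0; j < means.size(); ++j)
            step += probs[j] * boxminus(means[j], xBar);  // Σ p_j (x̄_j ⊟ X̄_k)
        xBar = boxplus(xBar, step);
        if (step.norm() < tol) break;  // stop when the mean barely moves
    }
    return xBar;
}
```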
To shorten the notation of covariance mixing, we define the function ⊞-WeightedCov:
$$\text{Input:}\ \bar{x},\ P_j,\ \bar{x}_j,\ p(X_j)\ \forall j \in [1, M],$$
$$J_j = \frac{\partial\,\big((\bar{x}_j \boxplus \delta) \boxminus \bar{x}\big)}{\partial \delta}\bigg|_{\delta = 0},$$
$$P = \sum_{j=1}^{M} p(X_j)\,\Big(\big[\bar{x}_j \boxminus \bar{x}\big] + J_j P_j J_j^T\Big).$$
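The corresponding sketch of ⊞-WeightedCov for quaternions, with the Jacobians $J_j$ computed numerically via jacobian3 from above:

```cpp
// Covariance of the mixture in the tangent space of the mean xBar, Equation (26).
Mat3 weightedCov(const Quat &xBar, const std::vector<Mat3> &covs,
                 const std::vector<Quat> &means,
                 const std::vector<double> &probs) {
    Mat3 P = Mat3::Zero();
    for (std::size_t j = 0; j < means.size(); ++j) {
        const Vec3 e = boxminus(means[j], xBar);  // spread of the means in ⊞-terms
        const Mat3 J = jacobian3([&](const Vec3 &d) {
            return boxminus(boxplus(means[j], d), xBar);  // J_j as in (12)
        });
        P += probs[j] * (e * e.transpose() + J * covs[j] * J.transpose());
    }
    return P;
}
```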
The ⊞-IMM (see Table 1) can properly handle generic ⊞-manifolds and their covariances. It does not require any ad hoc implementation to mix the states, as it only uses the $\boxplus/\boxminus$-interface of the manifold.
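For illustration, the interaction step of the ⊞-IMM then reduces to calls of these two routines per mode. This is a sketch for a pure quaternion state; the container layout mu[i][j] for the mixing probabilities $\mu_{k|k}^{i|j}$ is our assumption for this example.

```cpp
// ⊞-IMM interaction step: mix the mode estimates with the mixing probabilities.
void interact(std::vector<Quat> &x, std::vector<Mat3> &P,
              const std::vector<std::vector<double>> &mu) {
    const std::size_t M = x.size();
    std::vector<Quat> x0(M);
    std::vector<Mat3> P0(M);
    for (std::size_t j = 0; j < M; ++j) {
        std::vector<double> w(M);
        for (std::size_t i = 0; i < M; ++i) w[i] = mu[i][j];
        x0[j] = weightedSum(x[j], x, w);   // initial guess: the mode's own estimate
        P0[j] = weightedCov(x0[j], P, x, w);
    }
    x = x0;
    P = P0;
}
```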

4. An RTS Smoother on ⊞-Manifolds

Since no single-model smoother for general ⊞-manifolds is available, we derive an RTS Extended Kalman Smoother on ⊞-manifolds (⊞-EKS) in this section.
For filtering algorithms, it is suitable to express the covariance with respect to the expected value, but care must be taken in smoothing. The reason is that different estimates (the filtered, smoothed, and predicted estimates) participate in the smoothing equations. Thus, every covariance in the equations is defined on a different tangent space. They are incompatible unless they are transformed to the same tangent space. Therefore, we derive the smoother equations for ⊞-manifolds with special attention to the tangent spaces used.
Our derivation of the ⊞-EKS is based on the discussion of Singer [26]. We start with the theorem of normal correlation for ⊞-manifolds. The theorem allows us to calculate the conditional probability $p(X \mid Z)$ of two jointly Gaussian distributed random variables $X$ and $Z$.
We express the distributions in two different tangent spaces with references $r$ for $X$ and $\zeta$ for $Z$. This enables us to apply the standard theorem of normal correlation on the tangent spaces around $r$ and $\zeta$:
$$E_r(X \mid Z) = E_r(X) + \mathrm{Cov}_{r\zeta}(X, Z)\,\mathrm{Cov}_\zeta(Z)^{-1}\,\big((Z \boxminus \zeta) - E_\zeta(Z)\big), \qquad (32)$$
$$\mathrm{Cov}_r(X \mid Z) = \mathrm{Cov}_r(X) - \mathrm{Cov}_{r\zeta}(X, Z)\,\mathrm{Cov}_\zeta(Z)^{-1}\,\mathrm{Cov}_{r\zeta}(X, Z)^T. \qquad (33)$$
A formal requirement for this is that $X \boxminus r$ and $Z \boxminus \zeta$ are jointly Gaussian distributed. For ⊞-manifolds, this generally holds only in linear approximation. We still use the notation $=$ instead of $\approx$ to clarify where further approximations are required. All expected values and covariances are expressed in the tangent spaces of $r$ and $\zeta$, and $\mathrm{Cov}_{r\zeta}(X, Z)$ implicitly converts the reference tangent space. As a side remark, choosing $\zeta = E(Z)$ would result in the theorem of normal correlation that is used in current ⊞-estimators [14,20].
The theorem can be applied to any probability space conditioned on some other variables $Y$:
$$E_r(X \mid Z, Y) = E_r(X \mid Y) + \mathrm{Cov}_{r\zeta}(X, Z \mid Y)\,\mathrm{Cov}_\zeta(Z \mid Y)^{-1}\,\big((\underline{Z \mid Y} \boxminus \zeta) - E_\zeta(Z \mid Y)\big), \qquad (34)$$
$$\mathrm{Cov}_r(X \mid Z, Y) = \mathrm{Cov}_r(X \mid Y) - \mathrm{Cov}_{r\zeta}(X, Z \mid Y)\,\mathrm{Cov}_\zeta(Z \mid Y)^{-1}\,\mathrm{Cov}_{r\zeta}(X, Z \mid Y)^T. \qquad (35)$$
For readability, we underline conditional distributions if neither their expected value nor their covariance is meant. Following [26], the variables $X$, $Z$, and $Y$ are interpreted in an unusual fashion: $X$ is the state $x_k$ at time $k$, $Z$ is the state $x_{k+1}$ at time $k+1$, and $Y$ is the vector of all measurements $z_{1:k}$ up to time $k$:
$$E_r(x_k \mid x_{k+1}, z_{1:k}) = E_r(x_k \mid z_{1:k}) + C_k\,\big((\underline{x_{k+1} \mid z_{1:k}} \boxminus \zeta) - E_\zeta(x_{k+1} \mid z_{1:k})\big), \qquad (36)$$
$$\mathrm{Cov}_r(x_k \mid x_{k+1}, z_{1:k}) = \mathrm{Cov}_r(x_k \mid z_{1:k}) - C_k\,\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:k})\,C_k^T, \qquad (37)$$
$$C_k := \mathrm{Cov}_{r\zeta}(x_k, x_{k+1} \mid z_{1:k})\,\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:k})^{-1}, \qquad (38)$$
where $C_k$ is known as the smoother gain. These formulas essentially describe the fusion of the past $z_{1:k}$ with a hypothetically fixed next state $x_{k+1}$. The next state is actually not fixed but distributed, given all measurements $z_{1:N}$. Thus, we have to use the formulas in [26] to calculate the expectation and variance:
$$E_r(x_k \mid z_{1:N}) = E\big(E_r(x_k \mid x_{k+1}, z_{1:k}) \mid z_{1:N}\big), \qquad (39)$$
$$\mathrm{Cov}_r(x_k \mid z_{1:N}) = E\big(\mathrm{Cov}_r(x_k \mid x_{k+1}, z_{1:k}) \mid z_{1:N}\big) + \mathrm{Cov}\big(E_r(x_k \mid x_{k+1}, z_{1:k}) \mid z_{1:N}\big). \qquad (40)$$
Note that only the distribution $\underline{x_{k+1} \mid z_{1:k}}$ has to be conditioned on $z_{1:N}$, since all other values are known constants. The expected value and covariance of (39) can be calculated by standard rules:
$$\begin{aligned} E_r(x_k \mid z_{1:N}) &= E\Big(E_r(x_k \mid z_{1:k}) + C_k\big((\underline{x_{k+1} \mid z_{1:N}} \boxminus \zeta) - E_\zeta(x_{k+1} \mid z_{1:k})\big)\Big) \\ &= E_r(x_k \mid z_{1:k}) + C_k\big(E_\zeta(x_{k+1} \mid z_{1:N}) - E_\zeta(x_{k+1} \mid z_{1:k})\big), \end{aligned} \qquad (41)$$
$$\begin{aligned} \mathrm{Cov}\big(E_r(x_k \mid x_{k+1}, z_{1:k}) \mid z_{1:N}\big) &= \mathrm{Cov}\Big(E_r(x_k \mid z_{1:k}) + C_k\big((\underline{x_{k+1} \mid z_{1:N}} \boxminus \zeta) - E_\zeta(x_{k+1} \mid z_{1:k})\big)\Big) \\ &= \mathrm{Cov}\big(C_k\,(\underline{x_{k+1} \mid z_{1:N}} \boxminus \zeta)\big) = C_k\,\mathrm{Cov}\big(\underline{x_{k+1} \mid z_{1:N}} \boxminus \zeta\big)\,C_k^T \\ &= C_k\,\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:N})\,C_k^T. \end{aligned} \qquad (42)$$
Substituting all into Equation (40) yields
$$\mathrm{Cov}_r(x_k \mid z_{1:N}) = \mathrm{Cov}_r(x_k \mid z_{1:k}) + C_k\,\big(\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:N}) - \mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:k})\big)\,C_k^T. \qquad (43)$$
From here, it is required to choose suitable references $r$ and $\zeta$ to be able to compute the smoothing step. Choosing the references is crucial for the mathematical complexity of the computation. The covariances in the ⊞-IMM all live in the tangent space of the expected value, i.e., the estimate. Thus, they are all in different tangent spaces and have to be transferred to the tangent spaces required by the smoothing step. The choice of the references decides how many transforms have to be calculated. We choose $r = E(x_k \mid z_{1:k})$ and $\zeta = E(x_{k+1} \mid z_{1:k})$. This avoids transforming the covariances $\mathrm{Cov}_r(x_k \mid z_{1:k})$ and $\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:k})$, as they are already available in these tangent spaces. It also reduces the expected value to
$$\begin{aligned} E_r(x_k \mid z_{1:N}) &= E(x_k \mid z_{1:k}) \boxminus E(x_k \mid z_{1:k}) + C_k\Big(E\big(\underline{x_{k+1} \mid z_{1:N}} \boxminus E(x_{k+1} \mid z_{1:k})\big) - E\big(\underline{x_{k+1} \mid z_{1:k}} \boxminus E(x_{k+1} \mid z_{1:k})\big)\Big) \\ &= C_k\,E\big(\underline{x_{k+1} \mid z_{1:N}} \boxminus E(x_{k+1} \mid z_{1:k})\big), \end{aligned} \qquad (44)$$
where the first ⊟-term is trivially zero and the subtracted expectation vanishes by the implicit definition of the expected value. By linear approximation, we retrieve
$$E_r(x_k \mid z_{1:N}) \approx C_k\,\big(E(x_{k+1} \mid z_{1:N}) \boxminus E(x_{k+1} \mid z_{1:k})\big). \qquad (45)$$
The value of $\mathrm{Cov}_\zeta(x_{k+1} \mid z_{1:N})$ can be retrieved by a displaced transform. Finally, we give a formula for $\mathrm{Cov}_{r\zeta}(x_k, x_{k+1} \mid z_{1:k})$ by approximation of
$$\mathrm{Cov}_{r\zeta}(x_k, x_{k+1} \mid z_{1:k}) = \mathrm{Cov}\Big(\underline{x_k \mid z_{1:k}} \boxminus E(x_k \mid z_{1:k}),\ g(\underline{x_k \mid z_{1:k}}) \boxminus E(x_{k+1} \mid z_{1:k})\Big), \qquad (46)$$
where $g$ is the dynamic model function. Approximating the second argument yields
$$\begin{aligned} \mathrm{Cov}_{r\zeta}(x_k, x_{k+1} \mid z_{1:k}) &\approx \mathrm{Cov}\Big(\underline{x_k \mid z_{1:k}} \boxminus E(x_k \mid z_{1:k}),\ g\big(E(x_k \mid z_{1:k})\big) \boxminus E(x_{k+1} \mid z_{1:k}) + F_k\big(\underline{x_k \mid z_{1:k}} \boxminus E(x_k \mid z_{1:k})\big)\Big) \\ &= \mathrm{Cov}_r(x_k \mid z_{1:k})\,F_k^T, \end{aligned} \qquad (47)$$
$$F_k = \frac{\partial\,\big(g\big(E(x_k \mid z_{1:k}) \boxplus \delta\big) \boxminus E(x_{k+1} \mid z_{1:k})\big)}{\partial \delta}\bigg|_{\delta = 0}, \qquad (48)$$
where $F_k$ is commonly known as the state transition matrix.
With the chosen references, the complete smoothing step is (using more common symbols)
$$\hat{x}_{k|N} = \hat{x}_{k|k} \boxplus C_k\big(\hat{x}_{k+1|N} \boxminus \hat{x}_{k+1|k}\big), \qquad (49)$$
$$P_{k|N} = J_k\Big(P_{k|k} + C_k\big(B_k P_{k+1|N} B_k^T - P_{k+1|k}\big)\,C_k^T\Big)\,J_k^T, \qquad (50)$$
$$C_k = P_{k|k}\,F_k^T\,P_{k+1|k}^{-1}, \qquad (51)$$
$$F_k = \frac{\partial\,\big(g(\hat{x}_{k|k} \boxplus \delta) \boxminus \hat{x}_{k+1|k}\big)}{\partial \delta}\bigg|_{\delta = 0}, \qquad (52)$$
$$B_k = \frac{\partial\,\big((\hat{x}_{k+1|N} \boxplus \delta) \boxminus \hat{x}_{k+1|k}\big)}{\partial \delta}\bigg|_{\delta = 0}, \qquad (53)$$
$$J_k = \frac{\partial\,\Big(\big(\hat{x}_{k|k} \boxplus C_k\big((\hat{x}_{k+1|N} \boxminus \hat{x}_{k+1|k}) + \delta\big)\big) \boxminus \hat{x}_{k|N}\Big)}{\partial \delta}\bigg|_{\delta = 0}. \qquad (54)$$
These formulas are structurally similar to the standard RTS formulas [23]:
$$\hat{x}_{k|N} = \hat{x}_{k|k} + C_k\big(\hat{x}_{k+1|N} - \hat{x}_{k+1|k}\big), \qquad (55)$$
$$P_{k|N} = P_{k|k} + C_k\big(P_{k+1|N} - P_{k+1|k}\big)\,C_k^T, \qquad (56)$$
$$C_k = P_{k|k}\,F_k^T\,P_{k+1|k}^{-1}, \qquad (57)$$
$$F_k = \frac{\partial g(x)}{\partial x}\bigg|_{x = \hat{x}_{k|k}}. \qquad (58)$$
In the vector case, the ⊞/⊟-operators reduce to $+/-$ and the computation of the expected value agrees completely. In contrast, the covariance computation shows important differences, namely the transforms with the matrices $J_k$ and $B_k$. $B_k$ transforms the covariance of the smoothed future state to the tangent space of the filtered predicted state. Thus, the matrices $B_k P_{k+1|N} B_k^T$ and $P_{k+1|k}$ are in the same tangent space, which allows them to be subtracted. $J_k$ conducts a centered transform to reestablish the standard Gaussian form. Nevertheless, the covariance reduces to the standard RTS formulas in the vector case as well, since $B_k$ and $J_k$ evaluate to the identity. Therefore, the derived smoothing formulas can be seen as a generalization of the standard RTS formulas.
Note that the choice of the references $r$ and $\zeta$ is crucial for the structure of the final formulas. Other references may alter the performance, since other linearization points are used. Regardless of the choice of the references, the formulas reduce to the standard RTS formulas in the vector case, because all transformation matrices reduce to the identity and the reference $\zeta$ cancels out in the computation of the expected value. Thus, countless possibilities exist to generalize the RTS formulas on ⊞-manifolds.
The presented ⊞-EKS differs from existing RTS smoothers on quaternions [17]. Since smoothing on a quaternion state is a special case of our general ⊞-EKS, the two should coincide. However, the smoothed covariance is calculated differently from the existing smoothers, as they use the vector Equation (56). The existing RTS smoothers ignore that the covariances are defined on different tangent spaces. In practice, the formulas are almost identical for quaternions if the estimates are close to each other. In that case, the ⊞/⊟-operators are almost linear, so that $B_k$ and $J_k$ evaluate close to the identity.
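To make the structure of (49)-(54) tangible, here is a sketch of a single ⊞-EKS smoothing step for a pure quaternion state, reusing the ⊞/⊟ helpers and the numeric jacobian3 from the sketches in Section 2. The struct layout and names are ours, and $F_k$ is assumed to be stored from the filter's prediction step.

```cpp
struct Estimate {
    Quat x;  // mean
    Mat3 P;  // covariance in the tangent space of the mean
};

// One smoothing step: filt = (k|k), pred = (k+1|k), next = (k+1|N).
Estimate rtsStep(const Estimate &filt, const Estimate &pred,
                 const Estimate &next, const Mat3 &Fk) {
    const Mat3 Ck = filt.P * Fk.transpose() * pred.P.inverse();  // gain (51)
    const Vec3 innov = boxminus(next.x, pred.x);

    Estimate out;
    out.x = boxplus(filt.x, Ck * innov);  // smoothed mean (49)

    // B_k: displaced transform of the smoothed future covariance (53).
    const Mat3 Bk = jacobian3([&](const Vec3 &d) {
        return boxminus(boxplus(next.x, d), pred.x);
    });
    // J_k: centered transform into the tangent space of the smoothed mean (54).
    const Mat3 Jk = jacobian3([&](const Vec3 &d) {
        return boxminus(boxplus(filt.x, Ck * (innov + d)), out.x);
    });
    // Smoothed covariance (50).
    out.P = Jk * (filt.P + Ck * (Bk * next.P * Bk.transpose() - pred.P) *
                               Ck.transpose()) * Jk.transpose();
    return out;
}
```

Setting $B_k$ and $J_k$ to the identity in this sketch recovers the behavior of the simplified covariance smoothing (56) discussed in Section 6.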

5. IMM RTS Smoother

Since the IMM performs an arbitrary (but well-chosen) approximation of the hybrid estimation problem, there is no single answer on how to perform the smoothing. Several smoothing schemes exist in the literature [11,12,13]. We choose the scheme of [11], as it requires only one smoothing step per mode and is straightforward to adapt to ⊞-manifolds with our results from Section 2 and Section 4. The authors of [12] pointed out that the chosen scheme uses strong approximations. However, the approach of [12] requires operations that are currently not possible on general ⊞-manifolds.
The IMM RTS smoother of [11] can be adapted to ⊞-manifolds in a similar manner as the IMM filter. The mixture-of-Gaussians approximations are exchanged with the ⊞-WeightedSum and ⊞-WeightedCov algorithms. The mode-matched smoothing is exchanged with the ⊞-EKS from Section 4. This results in the ⊞-RTSIMM smoother (⊞-RTSIMMS), as shown in Table 2.
The ⊞-RTSIMMS generically operates on all differentiable ⊞-manifolds without destroying the manifold properties of the state. To our knowledge, this is the first IMM smoother that can operate properly on quaternions and other manifolds.

6. Example Application and Performance Discussion

We test the new algorithms in a simulated environment to provide first insights into their performance. We choose the following setup inspired by classic radar tracking: A drone flies across known terrain. It has a stereo-camera facing downwards. With the camera, it detects known landmarks in the terrain. The task is to track the position of the drone. Since the camera is mounted on the drone, its measurements are in body coordinates. Hence, it is required to estimate the orientation of the drone to make use of the measurements. The simulated drone flies a trajectory of two straight lines and two $180°$ curves over four visible landmarks.
The drone has two different flight modes. In the first mode, it flies straight with a constant velocity. In the second mode, it flies a curve with a constant angular rate.
We model the dynamics of the drone with the state $x$:
$$x = \begin{pmatrix} q_b^w & p^w & v^w & \omega^w \end{pmatrix}^T, \qquad (59)$$
where $q_b^w$ is the rotation quaternion that rotates a world-frame vector to the body frame, $p^w$ is the position in the world frame, $v^w$ is the velocity in the world frame, and $\omega^w$ is the angular rate in the world frame. The straight dynamic is modeled as follows:
$$g_s(x, \epsilon_s) = \begin{pmatrix} q_b^w \\ p^w + (v^w + \epsilon_s)\,\Delta t \\ v^w + \epsilon_s \\ \omega^w \end{pmatrix}, \qquad (60)$$
where $\Delta t = 0.05\,\mathrm{s}$ is the time difference between time $k$ and $k+1$. The constant turn dynamic is modeled as in [2], with a change for the orientation and angular rate:
$$g_c(x, \epsilon_c) = \begin{pmatrix} q_b^w \cdot \exp\!\big(\tfrac{\Delta t}{2}\,\omega^w\big)^{-1} \\ p^w + (\Delta t\, I_{3\times 3} + B)\,v^w \\ (I_{3\times 3} + A)\,v^w \\ \omega^w + \epsilon_c \end{pmatrix}, \qquad (61)$$
where $\exp(\cdot)$ forms a quaternion from the given Euler-angle-axis [14], and $A, B \in \mathbb{R}^{3\times 3}$ are as given in [2].
For simplification, we assume that the camera measures the position of the landmark in body coordinates instead of pixel coordinates. The measurement model is
$$h(x) = q_b^w \cdot p^w \cdot \big(q_b^w\big)^{-1}. \qquad (62)$$
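In code, this is a one-liner, since Eigen's quaternion-vector product applies exactly this sandwich product (a sketch reusing the Quat/Vec3 aliases from the earlier sketches; the function name is ours):

```cpp
// Measurement model (62): express the world-frame position in body coordinates.
Vec3 measure(const Quat &q_bw, const Vec3 &p_w) {
    return q_bw * p_w;  // Eigen computes q_bw · p_w · q_bw^{-1}
}
```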
The covariance matrices can be found in Appendix A.

6.1. Results

The performance of seven algorithms is evaluated in the simulation:
  • ⊞-EKF: The EKF on ⊞-manifolds, as presented in [20], on the constant turn model (61).
  • ⊞-EKS: The RTS EKS on ⊞-manifolds, as derived in Section 4, on the constant turn model (61).
  • (M)-EKS: The same as the ⊞-EKS, but with the simplified covariance smoothing (56). This algorithm uses the ⊞-EKF for filtering, but its smoothing step is identical to the MEKS or the left-invariant EKS on quaternions [17].
  • ⊞-IMM: The IMM on ⊞-manifolds, as derived in Section 3.
  • N-IMM: An IMM that handles the manifold state like a vector state for mixing purposes. We call this approach naive-mixing: the quaternions are summed up in parameter space and normalized afterwards (as in [5]). It uses the standard IMM equation to sum up the covariances. The mode filters of the N-IMM are ⊞-EKFs, since we want to specifically compare the ⊞-mixing from Section 2 against naive-mixing.
  • ⊞-RTSIMMS: The IMM RTS smoother on ⊞-manifolds, as derived in Section 5.
  • NM-RTSIMMS: An IMM smoother that handles mixing like the N-IMM and uses the (M)-EKS for mode-matched smoothing. This IMM basically combines the approaches of [5] and [17].
The root mean squared error (RMSE) of the position and orientation are used as performance metrics. The ⊟-operator is used to calculate the quaternion estimation error (difference as Euler-angle-axis). In addition, the consistency measure of [27] is computed; it states that the estimate is consistent if it resembles the true probability distribution. Hence, the expectation bias $E(\hat{x} \boxminus x)$ has to be $0$ and the expectation of the squared Mahalanobis distances $E\|\hat{x} \boxminus x\|_P^2$ has to equal the $DOF$ of the state. Both values are computed on the position and orientation, hence $DOF = 6$. The averaged metrics of 100 Monte Carlo runs are shown in Figure 2.
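For reference, a sketch of how the squared Mahalanobis distance of the 6-DOF position/orientation error can be computed per run (names and error stacking order are ours); averaged over the Monte Carlo runs, it should equal $DOF = 6$ for a consistent estimator:

```cpp
// Squared Mahalanobis distance of the stacked ⊟-error (orientation, position).
double squaredMahalanobis(const Vec3 &angErr,  // orientation part of x̂ ⊟ x
                          const Vec3 &posErr,  // position part of x̂ ⊟ x
                          const Eigen::Matrix<double, 6, 6> &P) {
    Eigen::Matrix<double, 6, 1> e;
    e << angErr, posErr;
    const Eigen::Matrix<double, 6, 1> s = P.llt().solve(e);  // s = P^{-1} e
    return e.dot(s);  // e^T P^{-1} e via a Cholesky solve
}
```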
Both IMM smoother variants caused numerical issues in the computation of the mode-matched smoothed covariances. Apparently, the subtraction of covariances results in indefinite matrices. Since the issue arises in both IMM smoothers, it does not appear to be specific to the changed covariance formula (50). To ensure positive definite covariance matrices, the eigenvalues of the smoothed matrices are set to a minimal threshold $\epsilon > 0$. All metrics of the ⊞-RTSIMMS and NM-RTSIMMS are affected by this regularization.
As expected, the RMSEs and biases of the smoother variants are lower than those of their filter counterparts. Remarkably, the single-model algorithms perform better than the multiple-model algorithms in some of the metrics. However, the IMM variants deliver overall more consistent estimates. In the remainder of our discussion, we focus on the comparison between our three novel algorithms and their respective state-of-the-art solutions. The ⊞-EKF metrics are only used to explain different behavior between the single- and multiple-model smoothers.
The ⊞-EKS and (M)-EKS perform equally on the three metrics that concern only the state, since they only differ in the covariance computation. The (M)-EKS is closer to the optimal value $E\|\hat{x} \boxminus x\|_P^2 = 6$ than the ⊞-EKS, which implies a slightly better consistency of the covariances. In contrast, the different covariance computation has an effect on the state for the IMM smoothers, as it influences the model probabilities.
The ⊞-IMM and N-IMM perform identically to the leading digits. Hence, the ⊞-mixing does not change the computation in the presented example. The IMM smoothers perform almost identically as well.
These results are unsatisfying, since the theoretically justified ⊞-mixing and smoothing should yield better accuracy and consistency. Therefore, we analyze the effects of the ⊞-mixing and the changed covariance smoothing equation.

6.2. Discussion: ⊞-Mixing of Gaussians

The error of the naive-mixing is negligible in the presented example. To show this, we try to quantify the error induced by naive-mixing. We calculate the weighted mean of two quaternion Gaussian distributions $q_1$, $q_2$ for different angular differences and different probabilities (see Figure 3a).
The difference is in the range of $10^{-4}$ rad for angular differences below $0.35$ rad (ca. $20°$), for all probabilities. Since the IMM usually operates at small differences between the models, the error of the naive-mixing is negligible for the mean.
Similarly, the effect of the ⊞-mixing on the covariance is small (see Figure 3b). Hence, the two mixing methods differ only for high differences of the mixed quaternions. In the presented simulation example, the angular differences are small, which is why the two mixing methods give equal results.
In general, it is unlikely that the quaternion estimates of the IMM filter differ greatly. The mixing is always performed after the update step. Hence, even big differences of the dynamic models are compensated by the update. Likewise, in the IMM smoother, the mixing is performed on the updated or smoothed estimates.
The ⊞-mixing may perform better for higher differences. To analyze this, ⊞- and naive-mixing are compared to an optimal solution (see Figure 4). Since no closed form solution to mix the Gaussians exists, the optimal solution is obtained numerically. Quaternions are sampled uniformly from the Gaussians to approximate the complete distribution.
The mean and covariance errors of the ⊞-mixing increase with the distance between the Gaussian means. The approaches are almost equal for small differences between the quaternions. At higher differences, ⊞-mixing outperforms naive-mixing. Thus, ⊞-mixing is technically an improvement, but its advantage is negligible due to the small differences in IMM applications.

6.3. Discussion: ⊞-RTS Smoothing

The ⊞-EKS is slightly less consistent than the (M)-EKS. The difference between the ⊞-EKS and (M)-EKS is the reference transformation of the covariances. Therefore, we analyze the performance of the linear approximation of the reference transform.
We compare the conducted linear approximation of the reference transform with a numerically computed optimum, where we distinguish between displaced and centered transforms (see Figure 5). For further comparison, we add the error of no transformation, which shows the behavior of the (M)-EKS. The covariance of the Gaussians has been set to $\Sigma = 0.1\, I_{3\times 3}$.
The error graphs of the centered and displaced transforms clearly differ, which justifies the distinction of the two transform types. The error of the displaced transform (see Figure 5b) is, for high differences of the references, several orders of magnitude higher than the base covariance. The reason is that wrap-around issues arise, since we approach the classic singularity of quaternions close to angles of $\pi$. Note that closeness depends on the chosen covariance: with a higher covariance, the wrap-around issues occur earlier. Since the linear transformation evaluates a single point, the wrap-around is not incorporated. However, such extreme differences of the references do not usually occur during estimation.
The comparison shows that the linear approximation has a smaller error than no transformation. Thus, regarding a single smoothing step, the ⊞-EKS is more precise than the (M)-EKS. Apparently, the improvement of a single step does not improve the whole interval smoothing. To investigate the accumulated effect of the transforms, we plot the ratio of the covariance determinants before and after the reference transform (see Figure 6). In the quaternion case, the displaced transform monotonically increases the covariance and the centered transform reduces it.
In the ⊞-EKS formula, the centered transform is dominant as it encloses the whole Equation (50). Thus, the ⊞-EKS reduces the covariance marginally in comparison with the (M)-EKS. Since the covariance is already too small in the simulation, the ⊞-EKS is slightly less consistent. Otherwise, if the covariances are too high, this effect would yield more consistent results. Thus, this is neither a clear performance degradation nor an improvement.
Noticeably, the IMM smoother variants perform almost identically. Since the IMM has an overall smaller bias than the EKF, the differences between the filtered and smoothed estimates are smaller. Thus, the reduction of the covariance matrices becomes negligible. This result coincides with the analysis of Section 6.2, since the ⊞-mixing of covariances is founded on displaced transforms.
The hypothesis that the bias has a crucial influence can be further substantiated by adapting the noise of the constant turn model to reduce the bias. This is possible since the low, inconsistent noise biases the estimators towards the dynamic model instead of the measurements. Thus, the bias can be drastically reduced, and the difference in consistency vanishes, using an additive noise on the position of $\Sigma = 10\,\Delta t\, I_{3\times 3}$ (see Figure 7).

6.4. Discussion: Wrap-Up

Overall, using ⊞-mixing with a first-order approximation does not give a performance boost for the IMM on quaternions. Instead, its strength is to enable a generic IMM on differentiable ⊞-manifolds. The method encapsulates the manifold properties of the state so it can be treated as a black box. Therefore, the IMM can be implemented independently of the used state representation. It does not require any ad hoc solutions to mix the states.
It has been shown that, in the quaternion case, the reference transforms are negligible if the references are close to each other. The transforms can be dropped to reduce computational and algorithmic complexity. Thus, we recommend to use the approach of [17] for quaternion smoothing. For other manifolds, the reference transform may be relevant to achieve consistent results.
Open source C++ implementations of the generic ⊞-EKS (https://github.com/TomLKoller/Manifold-RTS-Smoother, accessed on 16 June 2021) and the ⊞-IMM and ⊞-RTSIMMS (https://github.com/TomLKoller/Boxplus-IMM, accessed on 16 June 2021) are provided. The implementations use automatic differentiation [28] to calculate all required Jacobians for state mixing and for the internal ⊞-EKF [29]. They use a generic state representation to adapt to other system models. Therefore, they can be used without taking care of the heavy math in this paper. The ⊞-IMM repository also contains the presented simulation example.

7. Conclusions

Three novel estimation algorithms based on the ⊞-method have been presented in this work. All algorithms properly handle the manifold structure of ⊞-manifolds such as quaternions or rotation matrices. Hence, they do not need ad hoc normalization procedures to preserve manifold structures.
The ⊞-IMM, a first-order correct IMM on ⊞-manifolds, has been derived. We described methods to calculate the weighted mean and covariance of Gaussian mixtures on ⊞-manifolds and provided necessary proofs. With these, the ⊞-method is applied to the IMM.
By following the RTS derivation of [26], the new ⊞-EKS has been derived. The ⊞-EKS uses a different smoothing update of the covariance than published smoothers for quaternions, but the expected value is calculated identically.
By combining the ⊞-EKS and the ⊞-IMM, a novel IMM smoother on ⊞-manifolds, ⊞-RTSIMMS, has been derived. To our knowledge, the ⊞-RTSIMMS is the first IMM that enables smoothing on rotation quaternions and other ⊞-manifolds in hybrid estimation.
All algorithms are evaluated on a simulated aircraft tracking example. The evaluation shows that the estimation accuracies are not improved compared to state-of-the-art smoothing methods on quaternions and ad hoc normalization procedures. For other manifolds, a practical advantage may be achieved. The presented algorithms still have high theoretical value as they can be derived from the basic definitions of the expected value and the covariance on ⊞-manifolds. Thus, they are justified, generic algorithms.
This paper extended the family of ⊞-algorithms to hybrid estimation and smoothing. The practical strength of the presented algorithms is their generality. The ⊞-method encapsulates the manifold properties. No further ad hoc implementations are required to perform mixing and smoothing, regardless of the state. Therefore, the algorithms enable the implementation of a generic library that can handle ⊞-manifold states in hybrid estimation and smoothing. A first prototype is published alongside this paper.
The presented methods are only first-order correct, which causes linearization errors. Thus, one may develop higher-order methods or use the unscented transform as approximation, which may improve the consistency of the ⊞-EKS. Presumably, the unscented transform would not significantly reduce the error, as the error compared to the numerical solutions was quite low for small distances anyway. Instead, it should be investigated whether the indefiniteness of the covariance arises from numerical issues or systematic problems in the formulas, e.g., the approximation pointed out by [12].

Author Contributions

Conceptualization, T.L.K.; data curation, T.L.K.; formal analysis, T.L.K.; funding acquisition, U.F.; investigation, T.L.K. and U.F.; methodology, T.L.K. and U.F.; project administration, U.F.; resources, U.F.; software, T.L.K.; supervision, U.F.; validation, T.L.K. and U.F.; visualization, T.L.K.; writing—original draft, T.L.K.; writing—review & editing, T.L.K. and U.F. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Research Foundation (DFG) grant number ZaVI FR 2620/3-1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The implementations and simulations are available at: https://github.com/TomLKoller/Manifold-RTS-Smoother (accessed on 16 June 2021), and https://github.com/TomLKoller/Boxplus-IMM (accessed on 16 June 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IMM: Interacting Multiple Model (Filter)
EKF: Extended Kalman Filter
EKS: Extended Kalman Smoother
RTS: Rauch–Tung–Striebel
RTSIMMS: Rauch–Tung–Striebel Interacting Multiple Model Smoother
MEKF: Multiplicative EKF
MEKS: Multiplicative EKS

Appendix A. Covariances

Covariances of the dynamic models:
$$Q_s(k) = \begin{pmatrix} 10 & 0 & 0 \\ 0 & 10 & 0 \\ 0 & 0 & 10 \end{pmatrix}, \quad Q_c(k) = \begin{pmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{pmatrix}. \qquad (A1)$$
Covariance of the measurement and the transition probabilities:
$$R(k) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad p_{trans} = \begin{pmatrix} 0.95 & 0.05 \\ 0.05 & 0.95 \end{pmatrix}. \qquad (A2)$$

References

  1. Blom, H.A.P. An efficient filter for abruptly changing systems. In Proceedings of the 23rd IEEE Conference on Decision and Control, Las Vegas, NV, USA, 12–14 December 1984; pp. 656–658. [Google Scholar]
  2. Zhan, R.; Wan, J. Passive maneuvering target tracking using 3D constant-turn model. In Proceedings of the 2006 IEEE Conference on Radar, Verona, NY, USA, 24–27 April 2006; p. 8. [Google Scholar]
  3. Glass, J.D.; Blair, W.D.; Bar-Shalom, Y. IMM estimators with unbiased mixing for tracking targets performing coordinated turns. In Proceedings of the 2013 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2013; pp. 1–10. [Google Scholar]
  4. Zhu, W.; Wang, W.; Yuan, G. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking. Sensors 2016, 16, 805. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Wann, C.-D.; Gao, J.-H. Orientation Estimation for Sensor Motion Tracking Using Interacting Multiple Model Filter. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2010, 93, 1565–1568. [Google Scholar] [CrossRef]
  6. Liu, J.; Cai, B.; Wang, J.; Shangguan, W. GNSS/INS-based vehicle lane-change estimation using IMM and lane-level road map. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; pp. 148–153. [Google Scholar]
  7. Sadeghzadeh-Nokhodberiz, N.; Poshtan, J. Distributed interacting multiple filters for fault diagnosis of navigation sensors in a robotic system. IEEE Trans. Syst. Man, Cybern. Syst. 2016, 47, 1383–1393. [Google Scholar] [CrossRef]
  8. Mazor, E.; Averbuch, A.; Bar-Shalom, Y.; Dayan, J. Interacting multiple model methods in target tracking: A survey. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 103–123. [Google Scholar] [CrossRef]
  9. Boers, Y.; Driessen, J.N. Interacting multiple model particle filter. IEE Proc. Radar Sonar Navig. 2003, 150, 344–349. [Google Scholar] [CrossRef]
  10. Ali, W.; Li, Y.; Raja, M.A.Z.; Ahmed, N. Generalized pseudo Bayesian algorithms for tracking of multiple model underwater maneuvering target. Appl. Acoust. 2020, 166, 107345. [Google Scholar] [CrossRef]
  11. Nadarajah, N.; Tharmarasa, R.; McDonald, M.; Kirubarajan, T. IMM forward filtering and backward smoothing for maneuvering target tracking. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2673–2678. [Google Scholar] [CrossRef]
  12. Lopez, R.; Danes, P. Low-complexity IMM smoothing for jump Markov nonlinear systems. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1261–1272. [Google Scholar] [CrossRef] [Green Version]
  13. Li, K.; Ma, X.; Liu, Y.; Yuan, D. An Interacting Multiple Model Smoothing Algorithm for Nonlinear Maneuvering Target Tracking. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–7. [Google Scholar]
  14. Hertzberg, C.; Wagner, R.; Frese, U.; Schröder, L. Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds. Inf. Fusion 2013, 14, 57–77. [Google Scholar] [CrossRef] [Green Version]
  15. Markley, F.L. Attitude error representations for Kalman filtering. J. Guid. Control. Dyn. 2003, 26, 311–317. [Google Scholar] [CrossRef]
  16. Madyastha, V.; Ravindra, V.; Mallikarjunan, S.; Goyal, A. Extended Kalman Filter vs. Error State Kalman Filter for Aircraft Attitude Estimation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, American Institute of Aeronautics and Astronautics, Portland, OR, USA, 8–11 August 2011. [Google Scholar]
  17. van Der Laan, N.; Cohen, M.; Arsenault, J.; Forbes, J.R. The Invariant Rauch-Tung-Striebel Smoother. IEEE Robot. Autom. Lett. 2020, 5, 5067–5074. [Google Scholar] [CrossRef]
  18. Himberg, H.; Motai, Y.; Bradley, A. A multiple model approach to track head orientation with delta quaternions. IEEE Trans. Cybern. 2012, 43, 90–101. [Google Scholar] [CrossRef] [PubMed]
  19. Clemens, J.; Schill, K. Extended Kalman filter with manifold state representation for navigating a maneuverable melting probe. In Proceedings of the 2016 19th International Conference on Information Fusion (FUSION), Heidelberg, Germany, 5–8 July 2016; pp. 1789–1796. [Google Scholar]
  20. Nakath, D.; Clemens, J.; Schill, K. Multi-Sensor Fusion and Active Perception for Autonomous Deep Space Navigation. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 2596–2605. [Google Scholar]
  21. Koller, T.L.; Frese, U. State Observability through Prior Knowledge: Analysis of the Height Map Prior for Track Cycling. Sensors 2020, 20, 2438. [Google Scholar] [CrossRef] [PubMed]
  22. Koller, T.L.; Frese, U. The Interacting Multiple Model Filter on Boxplus-Manifolds. In Proceedings of the 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Karlsruhe, Germany, 14–16 September 2020. [Google Scholar]
  23. Rauch, H.E.; Tung, F.; Striebel, C.T. Maximum likelihood estimates of linear dynamic systems. AIAA J. 1965, 3, 1445–1450. [Google Scholar] [CrossRef]
  24. Hartley, R.; Trumpf, J.; Dai, Y.; Li, H. Rotation averaging. Int. J. Comput. Vis. 2013, 103, 267–305. [Google Scholar] [CrossRef]
  25. Manton, J.H. A globally convergent numerical algorithm for computing the centre of mass on compact Lie groups. In Proceedings of the ICARCV 2004 8th Control, Automation, Robotics and Vision Conference, Kunming, China, 6–9 December 2004; Volume 3, pp. 2211–2216. [Google Scholar]
  26. Singer, H. A Concise Proof of Gaussian Smoothing; FernUniversität in Hagen: Hagen, Germany, 2018. [Google Scholar]
  27. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: New York City, NY, USA, 2004. [Google Scholar]
  28. Hoffmann, P.H. A hitchhiker’s guide to automatic differentiation. Numer. Algorithms 2016, 72, 775–811. [Google Scholar] [CrossRef] [Green Version]
  29. Post, L.; Koller, T.L. An Automatically Differentiating Extended Kalman Filter on Boxplus Manifolds. 2020. Available online: https://github.com/TomLKoller/ADEKF (accessed on 16 June 2021).
Figure 1. The Gaussian distribution X approximated in two tangent spaces on a unit circle manifold.
Figure 2. RMSE and consistency comparison for aircraft tracking in 100 Monte Carlo runs. The optimal value is 0 for $\|E(\hat{x} \boxminus x)\|$ and 6 for $E\|\hat{x} \boxminus x\|_P^2$ (dashed line).
Figure 3. The mean and covariance difference between ⊞- and naive-mixing over the angular differences of $q_1$ and $q_2$. Reprinted with permission from ref. [22]. Copyright 2020 IEEE.
Figure 4. Mean and covariance error of ⊞- and naive-mixing compared to optimal mixing over the angular differences of $q_1$ and $q_2$. Reprinted with permission from ref. [22]. Copyright 2020 IEEE.
Figure 5. The covariance error (Schur norm) on a logarithmic scale of the ⊞-transform and no transform compared to the numerically optimal transformation over the angular differences of $r_1$ and $r_2$.
Figure 6. The covariance response on a logarithmic scale over the angular differences of $r_1$ and $r_2$. The covariance response is the ratio of the transformed and base covariance determinants.
Figure 7. RMSE and consistency comparison of single-model estimators with additional noise on the position.
Table 1. Original IMM [8] vs. ⊞-IMM. Lines prefixed "IMM" belong to the standard IMM only and lines prefixed "⊞-IMM" to the ⊞-IMM; unprefixed lines are shared. $z_k$ may be a ⊞-manifold $\mathcal{D}$. Modified from [22].

State, input, process, and measurement models: $x_k^j \in \mathcal{S}$, $P_k^j \in \mathbb{R}^{DOF \times DOF}$, $u_k \in \mathbb{R}^{\nu}$, $z_k \in \mathcal{D}$; $g^j : \mathcal{S} \times \mathbb{R}^{\nu} \times \mathbb{R}^n \to \mathcal{S}$ with $x_{k+1}^j = g^j(x_k^j, u_k, \epsilon_k^j)$, $\epsilon_k^j \sim N_n(0, Q_k^j)$, $Q_k^j \in \mathbb{R}^{n \times n}$; $h : \mathcal{S} \to \mathcal{D}$ with $z_k = h(x_k) \boxplus Z$, $Z \sim N_d(0, R_k)$, $R_k \in \mathbb{R}^{d \times d}$.

Initialization ($\forall j \in [1, M]$): $\hat{x}_{0|0}^j = x_0$, $P_{0|0}^j = P_0$, $\mu_{0|0}^j = \mu_0^j$.

Interaction ($\forall i, j \in [1, M]$): $c_k^j = \sum_{i=1}^{M} p_{ij}\,\mu_{k|k}^i$, $\mu_{k|k}^{i|j} = p_{ij}\,\mu_{k|k}^i / c_k^j$, where the $p_{ij}$ are transition probabilities.
IMM: $\hat{x}_{k|k}^{0j} = \sum_{i=1}^{M} \hat{x}_{k|k}^i\,\mu_{k|k}^{i|j}$
⊞-IMM: $\hat{x}_{k|k}^{0j} = \boxplus\text{-WeightedSum}_{i \in [1,M]}\big(\hat{x}_{k|k}^j,\, \hat{x}_{k|k}^i,\, \mu_{k|k}^{i|j}\big)$
IMM: $P_{k|k}^{0j} = \sum_{i=1}^{M} \mu_{k|k}^{i|j}\big(P_{k|k}^i + \big[\hat{x}_{k|k}^i - \hat{x}_{k|k}^{0j}\big]\big)$
⊞-IMM: $P_{k|k}^{0j} = \boxplus\text{-WeightedCov}_{i \in [1,M]}\big(\hat{x}_{k|k}^{0j},\, P_{k|k}^i,\, \hat{x}_{k|k}^i,\, \mu_{k|k}^{i|j}\big)$

Filtering: Prediction (⊞-EKF, $\forall j \in [1, M]$):
$\hat{x}_{k+1|k}^j = g^j(\hat{x}_{k|k}^{0j}, u_k, 0)$
$F_k^j = \partial\big(g^j(\hat{x}_{k|k}^{0j} \boxplus \delta, u_k, 0) \boxminus \hat{x}_{k+1|k}^j\big)/\partial\delta\,\big|_{\delta=0}$
$U_k^j = \partial\big(g^j(\hat{x}_{k|k}^{0j}, u_k, \epsilon) \boxminus \hat{x}_{k+1|k}^j\big)/\partial\epsilon\,\big|_{\epsilon=0}$
$P_{k+1|k}^j = F_k^j P_{k|k}^{0j} F_k^{jT} + U_k^j Q_k^j U_k^{jT}$

Filtering: Update (⊞-EKF, $\forall j \in [1, M]$):
$H_k^j = \partial\big(h(\hat{x}_{k+1|k}^j \boxplus \delta) \boxminus h(\hat{x}_{k+1|k}^j)\big)/\partial\delta\,\big|_{\delta=0}$
$S_k^j = H_k^j P_{k+1|k}^j H_k^{jT} + R_k$
$W_k^j = P_{k+1|k}^j H_k^{jT} \big(S_k^j\big)^{-1}$
$r_k^j = z_k \boxminus h\big(\hat{x}_{k+1|k}^j\big)$
$\hat{x}_{k+1|k+1}^j = \hat{x}_{k+1|k}^j \boxplus W_k^j r_k^j$
$J_k^j = \partial\big(\big(\hat{x}_{k+1|k}^j \boxplus (W_k^j r_k^j + \delta)\big) \boxminus \hat{x}_{k+1|k+1}^j\big)/\partial\delta\,\big|_{\delta=0}$
$P_{k+1|k+1}^j = J_k^j\big(P_{k+1|k}^j - W_k^j S_k^j W_k^{jT}\big) J_k^{jT}$
$\Lambda_k^j = N\big(r_k^j;\, 0,\, S_k^j\big)$
$\mu_{k+1|k+1}^j = \frac{1}{c}\,\Lambda_k^j\,c_k^j$, where $c$ is a normalization factor

Combination:
IMM: $\hat{x}_{k+1|k+1} = \sum_{j=1}^{M} \hat{x}_{k+1|k+1}^j\,\mu_{k+1|k+1}^j$
⊞-IMM: $\hat{x}_{k+1|k+1} = \boxplus\text{-WeightedSum}_{j \in [1,M]}\big(\hat{x}_{k|k},\, \hat{x}_{k+1|k+1}^j,\, \mu_{k+1|k+1}^j\big)$
IMM: $P_{k+1|k+1} = \sum_{j=1}^{M} \mu_{k+1|k+1}^j\big(P_{k+1|k+1}^j + \big[\hat{x}_{k+1|k+1}^j - \hat{x}_{k+1|k+1}\big]\big)$
⊞-IMM: $P_{k+1|k+1} = \boxplus\text{-WeightedCov}_{j \in [1,M]}\big(\hat{x}_{k+1|k+1},\, P_{k+1|k+1}^j,\, \hat{x}_{k+1|k+1}^j,\, \mu_{k+1|k+1}^j\big)$
Table 2. The ⊞-RTSIMMS smoothing scheme based on [11].

Backward transition probability: $b_{ij} = \frac{1}{e_i}\,p_{ji}\,\mu_{k|k}^j$, with $e_i = \sum_{l=1}^{M} p_{li}\,\mu_{k|k}^l$

Backward mixing probability: $\mu_{k+1|N}^{i|j} = \frac{1}{d_j}\,b_{ij}\,\mu_{k+1|N}^i$, with $d_j = \sum_{l=1}^{M} b_{lj}\,\mu_{k+1|N}^l$

Backward mixing:
$\hat{x}_{k+1|N}^{0j} = \boxplus\text{-WeightedSum}_{i \in [1,M]}\big(\hat{x}_{k+1|N}^j,\, \hat{x}_{k+1|N}^i,\, \mu_{k+1|N}^{i|j}\big)$
$P_{k+1|N}^{0j} = \boxplus\text{-WeightedCov}_{i \in [1,M]}\big(\hat{x}_{k+1|N}^{0j},\, P_{k+1|N}^i,\, \hat{x}_{k+1|N}^i,\, \mu_{k+1|N}^{i|j}\big)$

Mode-matched smoothing:
$\hat{x}_{k|N}^j = \hat{x}_{k|k}^j \boxplus C_k^j\big(\hat{x}_{k+1|N}^{0j} \boxminus \hat{x}_{k+1|k}^j\big)$
$P_{k|N}^j = J_k^j\big(P_{k|k}^j + C_k^j\big(B_k^j P_{k+1|N}^{0j} B_k^{jT} - P_{k+1|k}^j\big)\,C_k^{jT}\big)\,J_k^{jT}$
$C_k^j = P_{k|k}^j F_k^{jT} \big(P_{k+1|k}^j\big)^{-1}$
$F_k^j = \partial\big(g^j(\hat{x}_{k|k}^j \boxplus \delta) \boxminus \hat{x}_{k+1|k}^j\big)/\partial\delta\,\big|_{\delta=0}$
$B_k^j = \partial\big(\big(\hat{x}_{k+1|N}^{0j} \boxplus \delta\big) \boxminus \hat{x}_{k+1|k}^j\big)/\partial\delta\,\big|_{\delta=0}$
$J_k^j = \partial\big(\big(\hat{x}_{k|k}^j \boxplus C_k^j\big((\hat{x}_{k+1|N}^{0j} \boxminus \hat{x}_{k+1|k}^j) + \delta\big)\big) \boxminus \hat{x}_{k|N}^j\big)/\partial\delta\,\big|_{\delta=0}$

Smoothed mode probability: $\Lambda_{k|N}^j = \sum_{i=1}^{M} p_{ji}\,N\big(\hat{x}_{k+1|N}^i;\, \hat{x}_{k+1|k}^j,\, P_{k+1|k}^j\big)$, $\mu_{k|N}^j = \frac{1}{f}\,\Lambda_{k|N}^j\,\mu_{k|k}^j$, where $f$ is a normalizing factor

Combine smoothed estimate:
$\hat{x}_{k|N} = \boxplus\text{-WeightedSum}_{j \in [1,M]}\big(\hat{x}_{k|k},\, \hat{x}_{k|N}^j,\, \mu_{k|N}^j\big)$
$P_{k|N} = \boxplus\text{-WeightedCov}_{j \in [1,M]}\big(\hat{x}_{k|N},\, P_{k|N}^j,\, \hat{x}_{k|N}^j,\, \mu_{k|N}^j\big)$