Article

Bayesian Nonlinear Filtering via Information Geometric Optimization

College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(12), 655; https://doi.org/10.3390/e19120655
Submission received: 17 October 2017 / Revised: 16 November 2017 / Accepted: 29 November 2017 / Published: 1 December 2017
(This article belongs to the Special Issue Radar and Information Theory)

Abstract

In this paper, Bayesian nonlinear filtering is considered from the viewpoint of information geometry and a novel filtering method is proposed based on information geometric optimization. Under the Bayesian filtering framework, we derive a relationship between the nonlinear characteristics of filtering and the metric tensor of the corresponding statistical manifold. Bayesian joint distributions are used to construct the statistical manifold. In this case, nonlinear filtering can be converted to an optimization problem on the statistical manifold and the adaptive natural gradient descent method is used to seek the optimal estimate. The proposed method provides a general filtering formulation and the Kalman filter, the Extended Kalman filter (EKF) and the Iterated Extended Kalman filter (IEKF) can be seen as special cases of this formulation. The performance of the proposed method is evaluated on a passive target tracking problem and the results demonstrate the superiority of the proposed method compared to various Kalman filter methods.

1. Introduction

Filtering problems arise in various applications such as signal processing, automatic control and financial time series. The goal of nonlinear filtering is to estimate the state of a nonlinear dynamic process based on noisy observations. Over the last several decades, the Kalman filter has become the standard method for linear dynamic systems with linear measurements, for which it provides the optimal solution in closed form. However, it is not suitable for nonlinear filtering problems. In the spirit of the Kalman filter, approximation methods have been proposed for nonlinear filtering, such as the Extended Kalman filter (EKF) [1], the Unscented Kalman filter (UKF) [2], the Gauss–Hermite Kalman filter (GHKF) [3], and the Cubature Kalman filter (CKF) [4]. All these methods can be derived from the Bayesian approach with different approximations [5] for the nonlinear case. Apart from the aforementioned methods, the sequential Monte Carlo technique, which approximates the Bayesian probability density functions (PDFs), is another feasible approach; for instance, particle filtering (PF) [6] uses particle representations of probability distributions. Within the Bayesian framework, nonlinear filtering can be cast as Bayesian filtering, whose procedure consists of two steps: state propagation and measurement update. The state propagation provides the prior information for the state, and the measurement update combines the prior information and the conditional measurement to obtain the posterior PDF of the state. In particular, nonlinear and non-Gaussian conditions make the measurement update more difficult and the posterior PDF intractable.
Because the Bayesian posterior PDF plays a key role in nonlinear filtering, its study has attracted increasing attention over the past few decades. Conventionally, the linear minimum mean square error (LMMSE) estimator and the maximum a posteriori (MAP) estimator have played major roles in approximating the posterior PDF. The LMMSE estimator approximates the posterior mean and covariance matrix by its estimate and its mean square error matrix, respectively; the EKF, UKF and CKF can usually be derived from the LMMSE estimator. Besides, an adaptable recursive method named the recursive update filter (RUF) has been derived based on the LMMSE principle [7,8], which overcomes some of the limitations of the EKF. Unlike the LMMSE, the MAP estimator estimates the posterior mean and obtains the covariance matrix by linearizing the measurement function around the MAP estimate. The well-known iterated EKF (IEKF), which can be derived by Gauss–Newton optimization [9] or the Levenberg–Marquardt (LM) method [10], can be interpreted as a MAP estimator. By employing a variational approach for the MAP optimization, the variational Kalman filter (VKF) [11] has been proposed. By using Newton–Raphson iterative optimization steps to yield an approximate MAP estimate, the generalized iterated Kalman filter (GIKF) [12] algorithm has been presented to handle nonlinear stochastic discrete-time systems with state-dependent multiplicative noise. Generally speaking, MAP estimator methods need iterative procedures to obtain the final estimate, and these iterative methods achieve better performance for nonlinear filtering. Just as the IEKF outperforms the EKF and UKF, as shown by Lefebvre et al. [13], the Iterated UKF (IUKF) [14] performs better than the UKF in estimating the state and the corresponding covariance matrix.
Recently, Morelande and García-Fernández [15] adopted the Kullback–Leibler (KL) divergence as the metric to analyze the difference between the true joint posterior PDF of the state conditional on the measurement and its approximations. This metric can also be used to derive new algorithms. The iterated posterior linearization filter (IPLF) [16] can be seen as an approximate recursive KL divergence minimization procedure. The adaptive unscented Gaussian likelihood approximation filter (AUGLAF) [17] selects the best approximation to the posterior PDF based on the KL divergence. The KL partitioned update Kalman filter (KLPUKF) [18] uses the KL divergence to measure the nonlinearity of the measurement. In essence, the KL divergence, also known as relative entropy, is a quantity from information theory, and such information theoretic optimization criteria have already been applied in signal processing. By utilizing information theoretic quantities to capture higher-order statistics, significant performance improvements can be obtained. Meanwhile, another optimization criterion in information theoretic learning (ITL), the maximum correntropy criterion (MCC), has been introduced for filtering problems. By incorporating this criterion into the existing filtering framework, several new Kalman-type filters have been proposed, such as the maximum correntropy Kalman filter (MCKF) [19], the maximum correntropy unscented Kalman filter (MCUKF) [20], and a robust information filter based on the maximum correntropy criterion [21].
Enlightened by the information theoretic quantities applied in nonlinear filtering, we consider nonlinear filtering from the information geometric viewpoint. Information geometry, originally proposed by Amari [22], has become a mathematical tool for studying manifolds of probability distributions. The combination of information theory and differential geometry opens a new perspective on the geometric structure of information theory and provides a new way to deal with existing statistical problems. In this paper, we study nonlinear filtering using the information geometric method. By using the joint PDFs of the measurement and the state to construct the statistical manifold, the nonlinear characteristics can be represented by geometric quantities such as the metric tensor, and the filtering problem is converted to an optimization problem on the statistical manifold. In this way, nonlinear filtering can be carried out by the information geometric optimization method, which induces an iterative estimation procedure. The natural gradient descent method [23] is used to seek the optimal estimate on the statistical manifold, and the distance defined on the statistical manifold is used to design the stopping criterion that terminates the filtering iterations.
The paper is organized as follows. Firstly, we give a brief description of Bayesian filtering and information geometry in Section 2 and Section 3, respectively. Then, the adaptive natural gradient descent method on the statistical manifold is presented to derive the new nonlinear filtering algorithm in Section 4. Further discussion about the proposed method is given in Section 5, and numerical simulations are carried out to demonstrate its performance in Section 6. Finally, conclusions are drawn in Section 7.

2. Bayesian Filtering

The Bayesian principle provides a general approach to nonlinear filtering, which is called Bayesian filtering [5]. Bayesian filtering represents the state and measurement of the state-space model by probability distributions. The goal of Bayesian filtering is to estimate the state of a nonlinear dynamic process conditional on the measurements. The formulation of Bayesian filtering is
p(x_k \mid y_{k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid y_{k-1}) \, dx_{k-1}   (1)
p(x_k \mid y_k) = \frac{p(y_k \mid x_k) \, p(x_k \mid y_{k-1})}{\int p(y_k \mid x_k) \, p(x_k \mid y_{k-1}) \, dx_k}   (2)
with the probability densities as follows
x_k \mid x_{k-1} \sim p_x\big(x; f(x_{k-1}), Q\big)   (3)
y_k \mid x_k \sim p_y\big(y; h(x_k), R\big)   (4)
which correspond to the general state-space model formulation
x_k = f(x_{k-1}) + u_k   (5)
y_k = h(x_k) + v_k   (6)
where f and h denote the state transition and measurement functions, and the covariance matrices Q and R correspond to the zero-mean Gaussian noises u_k and v_k, respectively.
For the Bayesian filtering problem, Equation (1) represents the state propagation, while Equation (2) represents the measurement update. Because the Bayesian posterior distribution is usually intractable to calculate analytically, optimization methods such as MAP are used to avoid the troublesome integral and address the problem in a computationally feasible way. Compared with the LMMSE method, the advantage of the MAP method is that there is no need to evaluate the integrals in the Bayesian posterior distribution. Further, when we consider these PDFs, information geometry [22] provides an alternative approach, and optimization on the statistical manifold can be used to derive a new filtering algorithm.
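To make the state-space formulation of Equations (5) and (6) concrete, the following minimal sketch simulates one step of a nonlinear state-space model in Python; the particular functions f and h and the noise levels are illustrative assumptions, not the model used later in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear state transition and measurement functions (illustrative only).
def f(x):
    return np.array([x[0] + 0.5 * x[1], 0.99 * x[1]])

def h(x):
    return np.array([np.sqrt(x[0]**2 + x[1]**2)])   # a range-like measurement

Q = np.diag([0.1, 0.01])   # process noise covariance
R = np.diag([0.5])         # measurement noise covariance

x_prev = np.array([1.0, 0.2])
# One step of the state-space model: x_k = f(x_{k-1}) + u_k,  y_k = h(x_k) + v_k
x_k = f(x_prev) + rng.multivariate_normal(np.zeros(2), Q)
y_k = h(x_k) + rng.multivariate_normal(np.zeros(1), R)
print(x_k, y_k)
```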

3. Information Geometry

3.1. Riemannian Metric Tensor

Consider a parameterized family of probability densities S = { p(y | θ), θ = (θ_1, ..., θ_n)^T ∈ Θ }, where y ∈ R^m is a measurable random variable, θ is the parameter to be estimated, and p(y | θ) is the conditional PDF of y given θ. With the parameter θ acting as the coordinate system, S can be regarded as an n-dimensional manifold. When the Fisher information matrix (FIM) [24]
[F(\theta)]_{ij} \triangleq E_{y \mid \theta}\!\left[ \frac{\partial \log p(y \mid \theta)}{\partial \theta_i} \, \frac{\partial \log p(y \mid \theta)}{\partial \theta_j} \right] = -E_{y \mid \theta}\!\left[ \frac{\partial^2 \log p(y \mid \theta)}{\partial \theta_i \, \partial \theta_j} \right]   (7)
is defined as the Riemannian metric tensor, S is a statistical manifold. This metric tensor on a statistical manifold is usually called the Fisher metric tensor [25]. E_{y|θ}[·] denotes the expectation with respect to p(y | θ).
When θ has the prior PDF p(θ), the Bayesian principle can be used to characterize the joint PDF p(y, θ) of y and θ. The Fisher metric tensor for Bayesian problems is defined as follows [24]
[G(\theta)]_{ij} \triangleq -E_{y, \theta}\!\left[ \frac{\partial^2 \log p(y, \theta)}{\partial \theta_i \, \partial \theta_j} \right] = -E_{\theta} E_{y \mid \theta}\!\left[ \frac{\partial^2 \log p(y \mid \theta)}{\partial \theta_i \, \partial \theta_j} \right] - E_{\theta}\!\left[ \frac{\partial^2 \log p(\theta)}{\partial \theta_i \, \partial \theta_j} \right]   (8)
where E_{y,θ}[·] and E_θ[·] denote the expectations with respect to p(y, θ) and p(θ), respectively.
This Fisher metric tensor (8) is the expected Fisher information on the measurement plus the negative Hessian of the log-prior on the parameter. The first part of Equation (8) characterizes the measurement conditional on the parameter, while the second part includes the effect of prior information on the parameter. In other words, the two terms of Equation (8) correspond to the information obtained from the measurement and the prior distribution of the parameter, respectively.
Besides, the Fisher metric tensor is related to the Bayesian posterior Cramér–Rao bound (PCRB), which is applicable to multidimensional nonlinear, possibly non-Gaussian, dynamical systems [26]. From the viewpoint of statistical inference, we can measure the estimation performance by using the Bayesian PCRB. The PCRB on the estimation error has the formulation
E_{y, \theta}\!\left[ \big( \hat{\theta}(y) - \theta \big) \big( \hat{\theta}(y) - \theta \big)^T \right] \succeq G^{-1}(\theta) \triangleq \mathrm{PCRB}   (9)
where θ̂(y) denotes the estimate of θ, and G(θ) is the n × n Fisher information matrix with the elements defined in Equation (8). The inequality means that the difference between the error covariance matrix and G^{-1}(θ) is a positive semidefinite matrix, and it describes the estimation error bound. Usually, in applications, the covariance matrix of the estimate is approximated by the inverse of the Fisher information matrix.
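As an illustration of Equations (8) and (9), the following sketch computes the Bayesian Fisher information and the PCRB for a hypothetical linear-Gaussian measurement model, in which both expectations in Equation (8) are available in closed form; the matrices H, R and Sigma0 are assumed values.

```python
import numpy as np

# Bayesian Fisher information (Equation (8)) for a linear-Gaussian example:
# y = H @ theta + v, v ~ N(0, R), prior theta ~ N(mu0, Sigma0).
H = np.array([[1.0, 0.0], [1.0, 1.0]])   # illustrative measurement matrix
R = np.diag([0.5, 0.2])                  # measurement noise covariance
Sigma0 = np.diag([2.0, 1.0])             # prior covariance

G = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(Sigma0)   # measurement term + prior term
PCRB = np.linalg.inv(G)                                   # bound on the error covariance (Equation (9))
print(PCRB)
```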

3.2. Natural Gradient Descent

The gradient method is a general approach for solving optimization problems. For most problems that estimate the parameter θ in Euclidean space, the gradient descent update is defined as
\hat{\theta}_i = \hat{\theta}_{i-1} - \nabla_{\theta} L(\hat{\theta}_{i-1})   (10)
where ∇_θ L is the gradient of the convex differentiable objective function L, and it determines the update direction of the next iterative step. However, this update is not suitable on a Riemannian manifold because of the curvature of the manifold. Based on the Riemannian metric tensor, the gradient on a Riemannian manifold has been defined as ∇̃_θ L(θ) = G^{-1}(θ) ∇_θ L(θ), known as the natural gradient. Thus, the natural gradient descent update on the Riemannian manifold is
\hat{\theta}_i = \hat{\theta}_{i-1} - \eta_i \, G^{-1}(\hat{\theta}_{i-1}) \nabla_{\theta} L(\hat{\theta}_{i-1})   (11)
where G(θ) is the Riemannian metric tensor associated with the estimated parameter θ, i is the iteration index, and η_i ∈ (0, 1] denotes the step-size parameter of the iterative update. The natural gradient descent multiplies the gradient of the objective function by the inverse of the Riemannian metric tensor; it takes into account the curvature of the manifold, and it has been proven that stepping along the natural gradient direction is the steepest descent on the Riemannian manifold [27]. The natural gradient method has been used in many applications, such as nonlinear estimation [28]. In addition, natural gradient descent is Fisher efficient [23]. Luo et al. [29] analyzed the convergence and bound properties of the natural gradient descent method. After the natural gradient descent method constructs the iterative update procedure, stopping conditions have to be set to obtain the final estimate; usually, the distance between two successive estimates is used for these conditions.
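A minimal sketch of the natural gradient descent update of Equation (11) is given below; the quadratic objective and the constant metric tensor are illustrative assumptions used only to show the iteration.

```python
import numpy as np

def natural_gradient_descent(theta0, grad_L, metric_G, step=0.5, n_iter=20):
    """Natural gradient descent of Equation (11):
    theta_i = theta_{i-1} - eta * G(theta)^{-1} * grad L(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        theta = theta - step * np.linalg.solve(metric_G(theta), grad_L(theta))
    return theta

# Illustrative objective L(theta) = 0.5 * theta^T A theta with a constant metric G.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
G = np.array([[2.0, 0.0], [0.0, 1.0]])
theta_hat = natural_gradient_descent([3.0, -2.0], grad_L=lambda t: A @ t, metric_G=lambda t: G)
print(theta_hat)   # converges toward the minimizer at the origin
```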

3.3. Divergence and Distance

In Euclidean space, the distance can be used to describe the difference between two quantities, and it is defined by the Euclidean norm as ‖Δθ‖ = √(⟨Δθ, Δθ⟩) = √(Δθ^T Δθ). On a Riemannian manifold, the distance is instead defined with the Riemannian metric tensor G(θ) as ‖Δθ‖_{G(θ)} = √(⟨Δθ, Δθ⟩_{G(θ)}) = √(Δθ^T G(θ) Δθ). On a statistical manifold, with the Fisher metric tensor taking the role of the Riemannian metric tensor, Amari [30] defined the squared distance between two nearby distributions p(y, θ) and p(y, θ + Δθ) as
ds^2 = \Delta\theta^T G(\theta) \, \Delta\theta = \langle \Delta\theta, \Delta\theta \rangle_{G(\theta)}   (12)
Besides, the KL divergence provides another means to measure the similarity of two nearby probability distributions. The KL divergence is defined as
D_{\mathrm{KL}}(p \,\|\, q) = \int p(y) \log \frac{p(y)}{q(y)} \, dy   (13)
where p and q denote two probability densities. The KL divergence is also called relative entropy as in information theory. As for the KL divergence, it is a good measure of difference with the desired mathematical properties [31].
Let q approximate p in its neighborhood as q(y, θ) = p(y, θ + Δθ); then a Taylor expansion gives an approximation of the KL divergence by
D_{\mathrm{KL}}\big( p(y, \theta) \,\|\, p(y, \theta + \Delta\theta) \big) = E_{y, \theta}\!\left[ \log \frac{p(y, \theta)}{p(y, \theta + \Delta\theta)} \right]
= E_{y, \theta}[\log p(y, \theta)] - E_{y, \theta}[\log p(y, \theta + \Delta\theta)]
\approx E_{y, \theta}[\log p(y, \theta)] - E_{y, \theta}\!\left[ \log p(y, \theta) + \sum_{i=1}^{n} \frac{\partial \log p(y, \theta)}{\partial \theta_i} \Delta\theta_i + \frac{1}{2} \sum_{i, j=1}^{n} \frac{\partial^2 \log p(y, \theta)}{\partial \theta_i \, \partial \theta_j} \Delta\theta_i \Delta\theta_j \right]
= -E_{y, \theta}\!\left[ \frac{1}{2} \sum_{i, j=1}^{n} \frac{\partial^2 \log p(y, \theta)}{\partial \theta_i \, \partial \theta_j} \Delta\theta_i \Delta\theta_j \right]
= \frac{1}{2} \Delta\theta^T G(\theta) \, \Delta\theta   (14)
where G(θ) denotes the Fisher metric tensor of the statistical manifold and Δθ_i denotes the i-th component of Δθ. In the derivation, the identity E_{y,θ}[∂ log p(y, θ)/∂θ] = 0 [22] has been used. From this relationship, we can note that the KL divergence captures second-order information about the probability density. Compared with Amari's squared distance, the KL divergence behaves locally as one half of the squared distance on the statistical manifold, which coincides with the geodesic distance for infinitesimal displacements [32]. With the help of the divergence and the distance, we can measure how close two estimates are from the viewpoint of the statistical manifold.
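The local relation of Equation (14) can be checked numerically. The following sketch compares the closed-form KL divergence between two nearby univariate Gaussians with one half of the squared Fisher distance; it uses a simple parametric family rather than the Bayesian joint density, and the perturbation sizes are assumed values.

```python
import numpy as np

# Check D_KL(p || q) ≈ 0.5 * dtheta^T G(theta) dtheta (Equation (14)) for two nearby
# univariate Gaussians N(mu, sigma^2) with theta = (mu, sigma).
mu, sigma = 0.0, 1.0
dmu, dsigma = 1e-2, 1e-2

# Closed-form KL between N(mu, sigma^2) and N(mu + dmu, (sigma + dsigma)^2).
s2, t2 = sigma**2, (sigma + dsigma)**2
kl = 0.5 * (np.log(t2 / s2) + (s2 + dmu**2) / t2 - 1.0)

# Fisher metric of the univariate Gaussian in (mu, sigma): diag(1/sigma^2, 2/sigma^2).
G = np.diag([1.0 / sigma**2, 2.0 / sigma**2])
half_ds2 = 0.5 * np.array([dmu, dsigma]) @ G @ np.array([dmu, dsigma])
print(kl, half_ds2)   # the two values agree to leading order
```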
For the particular statistical manifold of multivariate Gaussian distributions, the probability density has the explicit form
p(y; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^m \det(\Sigma)}} \exp\!\left( -\frac{1}{2} (y - \mu)^T \Sigma^{-1} (y - \mu) \right)   (15)
where y is the random variable, μ ∈ R^m and Σ ∈ Sym(m) denote the mean and the covariance matrix, respectively, and Sym(m) is the space of real symmetric m × m positive-definite matrices. The mean and the covariance matrix are the unknown parameters to be estimated. The statistical manifold constructed from this probability density is S = { p(y; μ, Σ), (μ, Σ) ∈ R^m × Sym(m) }. The log-likelihood of the multivariate Gaussian can be written as
\ell = \log p(y; \mu, \Sigma) = -\frac{1}{2} (y - \mu)^T \Sigma^{-1} (y - \mu) - \frac{1}{2} \log \det(\Sigma) - \frac{m}{2} \log(2\pi)   (16)
Considering μ and Σ as mutually independent parameters, the first order partial derivatives of ℓ with respect to μ and Σ are
\frac{\partial \ell}{\partial \mu} = \Sigma^{-1} (y - \mu)   (17)
\frac{\partial \ell}{\partial \Sigma} = -\frac{1}{2} \frac{\partial}{\partial \Sigma} \mathrm{tr}\!\left( \Sigma^{-1} (y - \mu)(y - \mu)^T \right) - \frac{1}{2} \frac{\partial \log \det(\Sigma)}{\partial \Sigma} = \frac{1}{2} \Sigma^{-1} (y - \mu)(y - \mu)^T \Sigma^{-1} - \frac{1}{2} \Sigma^{-1}   (18)
Then, we can compute the second order partial derivatives of ℓ as follows
\frac{\partial^2 \ell}{\partial \mu \, \partial \mu^T} = -\Sigma^{-1}   (19)
\frac{\partial^2 \ell}{\partial \Sigma \, \partial \Sigma^T} = -\Sigma^{-1} \otimes \Sigma^{-1} (y - \mu)(y - \mu)^T \Sigma^{-1} + \frac{1}{2} \Sigma^{-1} \otimes \Sigma^{-1}   (20)
The Fisher metric tensor with respect to μ is
F_{\mu} = -E_y\!\left[ \frac{\partial^2 \ell}{\partial \mu \, \partial \mu^T} \right] = \Sigma^{-1}   (21)
and the Fisher metric tensor with respect to Σ is
F_{\Sigma} = -E_y\!\left[ \frac{\partial^2 \ell}{\partial \Sigma \, \partial \Sigma^T} \right] = \Sigma^{-1} \otimes \Sigma^{-1} \Sigma \, \Sigma^{-1} - \frac{1}{2} \Sigma^{-1} \otimes \Sigma^{-1} = \frac{1}{2} \Sigma^{-1} \otimes \Sigma^{-1}   (22)
where ⊗ denotes the Kronecker product.
With E_y[∂²ℓ/(∂μ ∂Σ^T)] = (E_y[∂²ℓ/(∂Σ ∂μ^T)])^T = 0, we can obtain the distance
ds^2 = \Delta\mu^T \Sigma^{-1} \Delta\mu + \frac{1}{2} \mathrm{vec}(\Delta\Sigma)^T \left( \Sigma^{-1} \otimes \Sigma^{-1} \right) \mathrm{vec}(\Delta\Sigma) = \Delta\mu^T \Sigma^{-1} \Delta\mu + \frac{1}{2} \mathrm{tr}\!\left( \big( \Sigma^{-1} \Delta\Sigma \big)^2 \right)   (23)
where m is the dimension of the data y, and Δμ and ΔΣ denote the variations of μ and Σ, respectively. In the above derivation, the following identities [33] have been used
\mathrm{tr}(ABCD) = \mathrm{vec}(D^T)^T \left( C^T \otimes A \right) \mathrm{vec}(B)   (24)
\frac{\partial \, \mathrm{tr}(A X^{-1} B)}{\partial X} = -X^{-1} B A X^{-1}   (25)
\frac{\partial \log \det(X)}{\partial X} = X^{-1}   (26)
\frac{\partial X^{-1}}{\partial X} = -X^{-T} \otimes X^{-1}   (27)
where A, B, C, D, X are matrices.
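The following sketch evaluates the squared distance of Equation (23) on the multivariate Gaussian manifold; the mean and covariance perturbations are illustrative assumptions.

```python
import numpy as np

# Squared distance of Equation (23) for the multivariate Gaussian manifold:
# ds^2 = dmu^T Sigma^{-1} dmu + 0.5 * tr((Sigma^{-1} dSigma)^2).
def gaussian_ds2(Sigma, dmu, dSigma):
    Sinv = np.linalg.inv(Sigma)
    return dmu @ Sinv @ dmu + 0.5 * np.trace((Sinv @ dSigma) @ (Sinv @ dSigma))

# Illustrative values (assumed, not from the paper).
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
dmu = np.array([0.05, -0.02])
dSigma = np.array([[0.01, 0.0], [0.0, -0.02]])
print(gaussian_ds2(Sigma, dmu, dSigma))
```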
The distance between two points on the statistical manifold corresponds to the KL divergence between the two probability densities. When we compare two points on the statistical manifold, the distance measures their similarity: a shorter distance means a smaller divergence between the two probability densities. This can be used to describe the convergence of the estimation.
Conventionally, the gradient method performs an iterative procedure to estimate the state, and the distance between two estimates is used to measure the convergence of the algorithm. Intuitively, convergence on the statistical manifold means that the difference between the two probability distributions corresponding to two estimates is very small. In the iterative estimation procedure, convergence means that two successive estimates are almost equivalent or, in practice, that the distance between them is less than a certain value.

4. Natural Gradient Descent Filtering

For Bayesian filtering, the posterior distribution plays the key role in the procedure. However, the closed form of the posterior PDF is intractable because of the Bayesian integral, and optimization techniques are usually used to obtain an approximate formulation. In this section, we consider this problem from the information geometric perspective and derive a new filtering method using the information geometric optimization technique.
In Bayesian filtering, the state propagation (Equation (1)) provides the prior information about the state before the measurement update. The prior PDF is Gaussian when the state transition function is linear, whereas it is non-Gaussian when the state function is nonlinear. Here, we focus on the measurement update step and use information geometric optimization for the posterior PDF. As in the usual Gaussian filtering, a Gaussian density is used to approximate the non-Gaussian density in the state propagation step [16], which has the form
p(x_k \mid y_{k-1}) \approx N(x_k; \hat{x}_k, \Sigma_k)   (28)
where x̂_k and Σ_k denote the mean and covariance matrix of the prior PDF after state propagation, i.e., of the state x_k at time k conditioned on the measurements up to time k−1. The measurement likelihood at time k, i.e., the conditional probability density of the measurement y_k given x_k, is
p(y_k \mid x_k) = N\big( y_k; h(x_k), R \big)   (29)
We can obtain the Bayesian posterior probability density by substituting Equations (28) and (29) into Equation (2). Numerical optimization is used to obtain an approximate solution of the Bayesian integral. Similar to the MAP approach, the estimation is reformulated as the optimization
\hat{x}_k = \arg\max_{x_k} p(x_k \mid y_k) = \arg\min_{x_k} L(x_k)   (30)
where L(x_k) = −log p(y_k | x_k) − log p(x_k | y_{k−1}) denotes the negative log-likelihood of the posterior distribution, neglecting the terms independent of x_k.
Consider the statistical manifold S = { p(y_k, x_k), x_k ∈ R^n } constructed by the joint probability density p(y_k, x_k); the natural logarithm maps the statistical manifold to R as log: S → R. Given a point x̂_k, the Fisher metric tensor of S at x̂_k can be calculated as
G(\hat{x}_k) = -E_{y_k, x_k}\!\left[ \frac{\partial^2 \log p(y_k, x_k)}{\partial x_k \, \partial x_k^T} \right]
= E_{x_k}\!\left[ E_{y_k \mid x_k}\!\left[ e_y^T R^{-1} \nabla_{x_k} \nabla_{x_k} h(\hat{x}_k) \right] + \nabla_{x_k} h(\hat{x}_k)^T R^{-1} \nabla_{x_k} h(\hat{x}_k) + \Sigma_k^{-1} \right]
= E_{x_k}\!\left[ \nabla_{x_k} h(\hat{x}_k)^T R^{-1} \nabla_{x_k} h(\hat{x}_k) + \Sigma_k^{-1} \right]
= \nabla_{x_k} h(\hat{x}_k)^T R^{-1} \nabla_{x_k} h(\hat{x}_k) + \Sigma_k^{-1}   (31)
where ∇_{x_k} h(x_k) = ∂h(x_k)/∂x_k ∈ R^{m×n} denotes the first order partial derivative of h with respect to x_k, and e_y = h(x_k) − y_k represents the error of the measurement prediction at the state estimate x_k = x̂_k.
From Equation (31), we can note that the Fisher metric tensor consists of two parts: the measurement information and the prior information of the state. This also means that the curvature of the statistical manifold is affected by the measurement data and the prior information. The effect of the nonlinear measurement is reflected by the first term of the Fisher metric tensor, so Equation (31) establishes the relationship between the nonlinear measurement and the metric tensor.
Since the joint PDFs construct the statistical manifold, the minimization of L(x_k) is converted to an optimization on the statistical manifold for the best estimate. This optimization on the statistical manifold considers the measurement and the state in a unified way, and the natural gradient descent method can be used to seek the optimal estimate on the manifold along geodesic lines.
By computing the first order partial derivative of L(x_k), we can obtain the gradient
\nabla_{x_k} L(x_k) = \nabla_{x_k} h(x_k)^T R^{-1} e_y + \Sigma_k^{-1} e_x   (32)
where e_y = h(x_k) − y_k and e_x = x_k − x̂_k denote the errors of the measurement prediction and the state estimate, respectively.
Since R^{-1} is a symmetric positive-definite matrix, ∇_{x_k} h(x̂_k)^T R^{-1} ∇_{x_k} h(x̂_k) is positive semi-definite. Noting that Σ_k^{-1} is positive definite, G(x_k) is nonsingular. Thus, the natural gradient on the statistical manifold is
G^{-1}(\hat{x}_k) \nabla_{x_k} L(\hat{x}_k) = \left( \nabla_{x_k} h(\hat{x}_k)^T R^{-1} \nabla_{x_k} h(\hat{x}_k) + \Sigma_k^{-1} \right)^{-1} \left( \nabla_{x_k} h(\hat{x}_k)^T R^{-1} e_y + \Sigma_k^{-1} e_x \right)   (33)
With the natural gradient descent on the statistical manifold, we can update the state estimate
\hat{x}_k^+ = \hat{x}_k - \eta \, G^{-1}(\hat{x}_k) \nabla_{x_k} L(\hat{x}_k) = \hat{x}_k - \eta \left( H^T R^{-1} H + \Sigma_k^{-1} \right)^{-1} H^T R^{-1} \left( h(\hat{x}_k) - y_k \right)   (34)
and the covariance matrix is approximated by the inverse of the Fisher metric tensor
\hat{\Sigma}_k = G^{-1}(\hat{x}_k) = \left( H^T R^{-1} H + \Sigma_k^{-1} \right)^{-1}   (35)
where H = ∇_{x_k} h(x_k)|_{x_k = x̂_k} is the Jacobian matrix of the measurement function h.
Usually, one step cannot achieve the best estimate, so more steps must be used to reach the final estimate. The state is updated iteratively in the state space, while the corresponding posterior probabilities move across the statistical manifold.
In summary, we can construct an iterative estimate of the state through the natural gradient descent method
\hat{x}_k^{i+1} = \hat{x}_k^i - \eta_i \, G^{-1}(\hat{x}_k^i) \nabla_{x_k} L(\hat{x}_k^i)
= \hat{x}_k^i - \eta_i \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1} \left[ H_i^T R^{-1} \big( h(\hat{x}_k^i) - y_k \big) + (\hat{\Sigma}_k^i)^{-1} \big( \hat{x}_k^i - \hat{x}_k^0 \big) \right]
= \hat{x}_k^i + \eta_i \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1} \left[ H_i^T R^{-1} \big( y_k - h(\hat{x}_k^i) \big) - (\hat{\Sigma}_k^i)^{-1} \big( \hat{x}_k^i - \hat{x}_k^0 \big) \right]   (36)
which corresponds to moving from p(y_k, x̂_k^i) to p(y_k, x̂_k^{i+1}) on the statistical manifold. In Equation (36), H_i = ∇_{x_k} h(x_k)|_{x_k = x̂_k^i}, and x̂_k^0 = x̂_k is the prior state estimate provided by the state propagation. Meanwhile, the error covariance matrix associated with x̂_k^{i+1} is approximated by the inverse of the Fisher metric tensor
\hat{\Sigma}_k^{i+1} = G^{-1}(\hat{x}_k^i) = \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1}   (37)
Therefore, we obtain the iterative posterior mean and covariance matrix (x̂_k^{i+1}, Σ̂_k^{i+1}). When the stopping criterion is satisfied, the iterative procedure is terminated and the filtered state is obtained.
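The iterative measurement update of Equations (36) and (37) can be summarized in the following sketch. It assumes a generic nonlinear measurement function h with Jacobian jac_h, keeps the step-size fixed and applies the local KL test of Section 4.2 in a simplified form; the adaptive step-size of Section 4.1 would replace the fixed eta.

```python
import numpy as np

def ngdf_measurement_update(x_prior, Sigma_prior, y, h, jac_h, R,
                            eta=1.0, gamma=1.0, max_iter=30):
    """One NGDF measurement update, a sketch of Equations (36)-(37).
    h and jac_h are the nonlinear measurement function and its Jacobian;
    eta is kept fixed and the stop test uses the local KL approximation,
    both placeholders for the adaptive versions of Sections 4.1 and 4.2."""
    Rinv = np.linalg.inv(R)
    x_i, Sigma_i = x_prior.copy(), Sigma_prior.copy()
    for _ in range(max_iter):
        H = jac_h(x_i)
        Sinv = np.linalg.inv(Sigma_i)
        G = H.T @ Rinv @ H + Sinv                                     # Fisher metric, Equation (31)
        grad = H.T @ Rinv @ (h(x_i) - y) + Sinv @ (x_i - x_prior)     # gradient of L, Equation (32)
        dx = -eta * np.linalg.solve(G, grad)                          # natural gradient step, Equation (36)
        x_next = x_i + dx
        Sigma_next = np.linalg.inv(G)                                 # covariance approximation, Equation (37)
        if 0.5 * dx @ G @ dx < gamma / 2:                             # local KL stop, Equations (14) and (41)
            return x_next, Sigma_next
        x_i, Sigma_i = x_next, Sigma_next
    return x_i, Sigma_i
```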

4.1. Adaptive Step-Size

In this iterative procedure, several parameters must be taken into account. One of them is the step-size parameter, which controls the size of the update at each iteration. It can be fixed throughout the whole iterative procedure; alternatively, it can be initialized and adjusted in each iteration. There are many strategies for adjusting the step-size so as to achieve a sufficient decrease during the iterative procedure. In our proposed method, the step-size is obtained by an exact line search [34]
\eta_i = \arg\min_{0 < \eta \le 1} L\!\left( \hat{x}_k^i - \eta \, \tilde{\nabla}_{x_k} L(\hat{x}_k^i) \right)   (38)
where ∇̃_{x_k} L(x̂_k^i) = G^{-1}(x̂_k^i) ∇_{x_k} L(x̂_k^i) denotes the natural gradient. In each iteration, the search direction given by the natural gradient is fixed, and we only need to select the parameter η; usually, the candidates for η can be generated randomly.
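A possible implementation of the line search in Equation (38) is sketched below, using randomly generated candidates for η as suggested in the text; the number of candidates and the lower bound on η are assumptions.

```python
import numpy as np

def line_search_step(L, x_i, nat_grad, n_candidates=20, rng=None):
    """Line search of Equation (38): pick eta in (0, 1] minimizing
    L(x_i - eta * nat_grad), with randomly generated candidates."""
    rng = np.random.default_rng() if rng is None else rng
    etas = rng.uniform(0.0, 1.0, n_candidates)
    etas = np.clip(etas, 1e-3, 1.0)          # keep eta strictly positive
    costs = [L(x_i - eta * nat_grad) for eta in etas]
    return etas[int(np.argmin(costs))]
```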

4.2. Stopping Criterion

In the iterative procedure, the number of steps can be fixed as a constant. However, this does not account for convergence and may lead to unnecessary computational burden. Alternatively, the number of steps is determined by a stopping criterion for the iterative procedure. As in the conventional IEKF, the stopping criterion is that the distance between the state estimates of two successive iterations is smaller than a given constant α, that is,
\| \hat{x}_k^{i+1} - \hat{x}_k^i \|^2 = \langle \hat{x}_k^{i+1} - \hat{x}_k^i, \ \hat{x}_k^{i+1} - \hat{x}_k^i \rangle < \alpha   (39)
While the distance ‖x̂_k^{i+1} − x̂_k^i‖² is defined in Euclidean space, its counterpart on the Riemannian manifold is
\| \hat{x}_k^{i+1} - \hat{x}_k^i \|_G^2 = \langle \hat{x}_k^{i+1} - \hat{x}_k^i, \ \hat{x}_k^{i+1} - \hat{x}_k^i \rangle_G   (40)
We can use this distance to measure convergence on the statistical manifold. Owing to the local equivalence between the manifold distance and the KL divergence, we can also use the KL divergence to measure convergence. Here, we adopt the KL divergence (Equation (14)) instead of the distance in the stopping criterion, i.e.,
D_{\mathrm{KL}}\big( p(y_k, \hat{x}_k^i) \,\|\, p(y_k, \hat{x}_k^{i+1}) \big) < \frac{\gamma}{2}   (41)
Furthermore, this criterion also describes the divergence between the probability densities of two successive iterations: when convergence is achieved, the divergence is very small.
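The following snippet illustrates, on assumed values, how the three stopping quantities relate: the Euclidean distance of Equation (39), the manifold distance of Equation (40), and the local KL approximation used in Equation (41).

```python
import numpy as np

# Comparing the three stopping quantities on one illustrative step.
dx = np.array([0.3, -0.1])                      # hypothetical difference between iterates
G = np.array([[5.0, 1.0], [1.0, 2.0]])          # hypothetical Fisher metric at the iterate

euclid_sq = dx @ dx                              # Equation (39)
manifold_sq = dx @ G @ dx                        # Equation (40)
kl_approx = 0.5 * manifold_sq                    # Equations (14) and (41)
print(euclid_sq, manifold_sq, kl_approx)
```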

5. Discussion

5.1. Comparison with KF

When the state-space model is linear and Gaussian, i.e., f(x_{k−1}) = F x_{k−1} and h(x_k) = H x_k, the Fisher metric is
G(x_k) = H^T R^{-1} H + \Sigma_k^{-1}   (42)
It is independent of the state x_k, which means that the metric tensor is constant across the statistical manifold. Thus, only one step is needed to estimate the state, and a full step is taken by setting η = 1. The natural gradient descent filtering (NGDF) then simplifies to
\hat{x}_k^+ = \hat{x}_k - G^{-1} \nabla_{x_k} L(x_k) \big|_{x_k = \hat{x}_k}
= \hat{x}_k - \left( H^T R^{-1} H + \Sigma_k^{-1} \right)^{-1} \left( H^T R^{-1} e_y + \Sigma_k^{-1} e_x \right) \big|_{x_k = \hat{x}_k}
= \hat{x}_k + \Sigma_k H^T \left( H \Sigma_k H^T + R \right)^{-1} \left( y_k - H \hat{x}_k \right)   (43)
where h(x_k) = H x_k, e_y = h(x_k) − y_k, and e_x = x_k − x̂_k. When the gradient is evaluated at the prior estimate, e_x is zero. Besides, the matrix inversion lemma (Equations (47) and (48)) has been used. Equation (43) is the same as the measurement update of the conventional Kalman filter, with the Kalman gain defined as K = Σ_k H^T (H Σ_k H^T + R)^{-1}. Meanwhile, the error covariance matrix (Equation (37)) can be calculated as
\hat{\Sigma}_k = G^{-1}(x_k) = \left( H^T R^{-1} H + \Sigma_k^{-1} \right)^{-1} = \Sigma_k - \Sigma_k H^T \left( H \Sigma_k H^T + R \right)^{-1} H \Sigma_k = \Sigma_k - K \left( H \Sigma_k H^T + R \right) K^T   (44)
Comparing the state estimate and error covariance matrix with those of the Kalman filter, the NGDF is equivalent to the Kalman filter when the model is linear and Gaussian. Like the EKF, the NGDF makes a linear approximation in the nonlinear case, so it retains a formulation similar to that of the Kalman filter.

5.2. Comparison with IEKF

Furthermore, if we set the step-size parameter to η = 1 in the NGDF, the state update can be reformulated as
\hat{x}_k^{i+1} = \hat{x}_k^i + \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1} \left[ H_i^T R^{-1} \big( y_k - h(\hat{x}_k^i) \big) - (\hat{\Sigma}_k^i)^{-1} \big( \hat{x}_k^i - \hat{x}_k^0 \big) \right]
= \hat{x}_k^i + \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1} H_i^T R^{-1} \big( y_k - h(\hat{x}_k^i) \big) - \left( H_i^T R^{-1} H_i + (\hat{\Sigma}_k^i)^{-1} \right)^{-1} (\hat{\Sigma}_k^i)^{-1} \big( \hat{x}_k^i - \hat{x}_k^0 \big)
= \hat{x}_k^i + \hat{\Sigma}_k^i H_i^T \left( H_i \hat{\Sigma}_k^i H_i^T + R \right)^{-1} \big( y_k - h(\hat{x}_k^i) \big) - \left[ \hat{\Sigma}_k^i - \hat{\Sigma}_k^i H_i^T \left( H_i \hat{\Sigma}_k^i H_i^T + R \right)^{-1} H_i \hat{\Sigma}_k^i \right] (\hat{\Sigma}_k^i)^{-1} \big( \hat{x}_k^i - \hat{x}_k^0 \big)
= \hat{x}_k^0 + \hat{\Sigma}_k^i H_i^T \left( H_i \hat{\Sigma}_k^i H_i^T + R \right)^{-1} \big( y_k - h(\hat{x}_k^i) \big) + \hat{\Sigma}_k^i H_i^T \left( H_i \hat{\Sigma}_k^i H_i^T + R \right)^{-1} H_i \big( \hat{x}_k^i - \hat{x}_k^0 \big)
= \hat{x}_k^0 + K_i \left[ y_k - h(\hat{x}_k^i) + H_i \big( \hat{x}_k^i - \hat{x}_k^0 \big) \right]   (45)
where K_i = Σ̂_k^i H_i^T (H_i Σ̂_k^i H_i^T + R)^{-1}, and the covariance matrix is approximated by the inverse of the Fisher metric tensor
\hat{\Sigma}_k^{i+1} = G^{-1}(\hat{x}_k^i) = \hat{\Sigma}_k^i - \hat{\Sigma}_k^i H_i^T \left( H_i \hat{\Sigma}_k^i H_i^T + R \right)^{-1} H_i \hat{\Sigma}_k^i   (46)
In the above procedure, the two forms of the matrix inversion lemma
\left( H^T R^{-1} H + \Sigma^{-1} \right)^{-1} H^T R^{-1} = \Sigma H^T \left( H \Sigma H^T + R \right)^{-1}   (47)
\left( H^T R^{-1} H + \Sigma^{-1} \right)^{-1} = \Sigma - \Sigma H^T \left( H \Sigma H^T + R \right)^{-1} H \Sigma   (48)
have been used.
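The two forms of the matrix inversion lemma, Equations (47) and (48), can be verified numerically; the following sketch uses randomly generated symmetric positive-definite matrices of illustrative sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical check of the two matrix inversion lemma forms, Equations (47) and (48).
n, m = 4, 3
H = rng.standard_normal((m, n))
A = rng.standard_normal((m, m)); R = A @ A.T + m * np.eye(m)       # SPD measurement covariance
B = rng.standard_normal((n, n)); Sigma = B @ B.T + n * np.eye(n)   # SPD prior covariance

lhs = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(Sigma))
gain_form = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)
print(np.allclose(lhs @ H.T @ np.linalg.inv(R), gain_form))        # Equation (47)
print(np.allclose(lhs, Sigma - gain_form @ H @ Sigma))             # Equation (48)
```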
Obviously, the above iterative procedure of the NGDF is the same as the IEKF when the step-size parameter is set to η = 1. However, the step-size of the NGDF is usually adjusted according to the objective function, and this adjustment can improve the estimation accuracy of the NGDF, as will be illustrated in the following experiments.

5.3. Comparison with GIKF

Apart from the natural gradient method, the Newton–Raphson method is another gradient-based method, and Hu et al. [12] proposed the GIKF based on it. It should be noted that the natural gradient method differs substantially from the Newton–Raphson method: the natural gradient multiplies the gradient by the inverse of the Riemannian metric tensor, while the Newton–Raphson method multiplies the gradient by the inverse of the Hessian matrix. The Riemannian metric tensor is the metric of the underlying space and is independent of the objective function to be minimized, whereas the Hessian matrix depends on the objective function and the parameter coordinates. Thus, the natural gradient, which uses the Riemannian metric tensor instead of the Hessian matrix, is more robust [28].

5.4. Relationship with Riemannian Manifold MCMC

Apart from the MAP technique, the Monte Carlo technique is an alternative method for evaluating the integral in Bayesian filtering. Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) are two widely used approaches. SMC has been applied to filtering problems, where it is also called PF. The MCMC method is a fundamental tool for generating samples from a posterior density in Bayesian data analysis and inference, and it is robust and effective for nonlinear filtering and manifold learning. Recently, geometric concepts have been introduced into the MCMC method: to account for the underlying geometric structure in recursive estimation based on PDFs, the Riemannian manifold is used in the MCMC method. This method was introduced by Girolami and Calderhead [35] as
\theta^{n+1} = \theta^n + \frac{\varepsilon^2}{2} G^{-1}(\theta^n) \nabla_{\theta} \mathcal{L}(\theta^n) + \varepsilon \sqrt{G^{-1}(\theta^n)} \, z^n   (49)
where ε ∈ (0, 1] denotes the integration step-size, 𝓛(θ) denotes the logarithm of the target density, and the random variable satisfies z ~ N(z; 0, I). The sampling mechanism can be rewritten in the Gaussian form N(θ; μ(θ^n, ε), Σ(θ^n, ε)), where
\mu(\theta^n, \varepsilon) = \theta^n + \frac{\varepsilon^2}{2} G^{-1}(\theta^n) \nabla_{\theta} \mathcal{L}(\theta^n)   (50)
\Sigma(\theta^n, \varepsilon) = \varepsilon^2 G^{-1}(\theta^n)   (51)
This is similar to our proposed filtering method: the state update is driven by the natural gradient, and the covariance matrix is approximated by the inverse of the Riemannian metric tensor. Compared with our proposed method, the differences lie in the step-size of the state update and the factor applied to the Riemannian metric tensor. The reason is that Riemannian manifold MCMC is derived from Hamiltonian dynamics and Langevin diffusion, while our proposed method is derived from information geometric optimization on the statistical manifold. In both cases, the metric tensor plays an important role on the Riemannian manifold, and the natural gradient method provides a general approach for optimization.

6. Simulation

In this section, we compare classical filtering methods, including the EKF, IEKF and RUF, with our NGDF method in the application of passive target tracking [14,36]. The EKF has just one step in the update procedure, and the IEKF is an iterative filtering method derived from the Gauss–Newton method. The RUF [7,8] is another iterative filtering method derived from the LMMSE rather than the MAP, and it uses a fixed number of steps.
For the system setting, the state at time instant k is x_k = [x_k, ẋ_k, y_k, ẏ_k]^T, consisting of the position vector [x_k, y_k]^T and the velocity vector [ẋ_k, ẏ_k]^T, while the measurement at the same instant is z_k = [θ_k, θ̇_k, ḟ_k]^T, consisting of the bearing θ_k, the bearing rate θ̇_k, and the Doppler rate ḟ_k.
The state equation is linear and represented as
x_k = F x_{k-1} + G v_k   (52)
where
F = I_2 \otimes \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix}, \qquad G = I_2 \otimes \begin{bmatrix} 0.5 \tau^2 \\ \tau \end{bmatrix}   (53)
where I_2 is a 2 × 2 identity matrix, ⊗ is the Kronecker product, τ is the sampling time, and v_k = [v_{x_k}, v_{y_k}]^T is zero-mean process noise with covariance matrix Q_k.
The measurement equation is
z_k = \begin{bmatrix} \theta_k \\ \dot{\theta}_k \\ \dot{f}_k \end{bmatrix} = \begin{bmatrix} \arctan(y_k / x_k) \\ (\dot{y}_k x_k - \dot{x}_k y_k) / r_k^2 \\ (\dot{y}_k x_k - \dot{x}_k y_k)^2 / (\lambda r_k^3) \end{bmatrix} + \begin{bmatrix} n_{\theta_k} \\ n_{\dot{\theta}_k} \\ n_{\dot{f}_k} \end{bmatrix} \triangleq h(x_k) + n_k   (54)
where r_k = √(x_k² + y_k²), λ denotes the wavelength of the received signal, and n_{θ_k}, n_{θ̇_k}, n_{ḟ_k} are mutually independent zero-mean Gaussian noises with covariance matrix R_k = diag([σ_θ², σ_θ̇², σ_ḟ²]). Here, we treat x_k, y_k as the coordinates of the target and z_k as the measurement, which differs from the notation in the previous sections.
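A sketch of the measurement function h of Equation (54) is given below; np.arctan2 is used as a quadrant-safe variant of arctan(y_k/x_k), and the wavelength value is taken from the simulation settings of this section.

```python
import numpy as np

def h_passive(state, lam=0.1):
    """Measurement function of Equation (54): bearing, bearing rate and Doppler rate
    for the state [x, x_dot, y, y_dot]; lam is the wavelength (0.1 m in Section 6)."""
    x, x_dot, y, y_dot = state
    r = np.hypot(x, y)
    bearing = np.arctan2(y, x)                       # quadrant-safe variant of arctan(y/x)
    bearing_rate = (y_dot * x - x_dot * y) / r**2
    doppler_rate = (y_dot * x - x_dot * y) ** 2 / (lam * r**3)
    return np.array([bearing, bearing_rate, doppler_rate])

print(h_passive(np.array([800.0, 50.0, 300.0, 10.0])))
```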
In this simulation, we consider 200 time steps for tracking, with τ = 0.5 s, λ = 0.1 m, and Q = diag([(9 m/s²)², (2 m/s²)²]). The prior PDF at time 0 is p(x_0) = N(x_0; μ_0, Σ_0), where μ_0 = [800 m, 50 m/s, 300 m, 10 m/s]^T and
\Sigma_0 = \begin{bmatrix} \sigma_x^2 & 0 & \rho \sigma_x \sigma_y & 0 \\ 0 & \sigma_v^2 & 0 & 0 \\ \rho \sigma_x \sigma_y & 0 & \sigma_y^2 & 0 \\ 0 & 0 & 0 & \sigma_v^2 \end{bmatrix}   (55)
where σ_x = 500 m, σ_y = 200 m, ρ = 0.95, and σ_v = 100 m/s.
We carry out the numerical experiments considering two aspects: the first is different initialization parameters of the filtering methods for the same track, and the second is different tracks with different noise levels. The normalized root mean square errors (RMSEs) of position and velocity are used to illustrate the tracking performance. They are defined as follows
\mathrm{RMSE}_{\mathrm{Pos}} = \sqrt{\frac{(x - \hat{x})^2 + (y - \hat{y})^2}{x^2 + y^2}}   (56)
\mathrm{RMSE}_{\mathrm{Vel}} = \sqrt{\frac{(v_x - \hat{v}_x)^2 + (v_y - \hat{v}_y)^2}{v_x^2 + v_y^2}}   (57)
where [x, v_x, y, v_y]^T denotes the true state and [x̂, v̂_x, ŷ, v̂_y]^T is the estimated state. For the filtering methods, we set γ = 1 in the stopping criterion (41) for our proposed method and α = 1 in (39) for the IEKF. In the iterative methods (IEKF, RUF and NGDF), the maximum number of iterative steps is N = 30.
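As a minimal sketch, the normalized RMSEs of Equations (56) and (57) can be computed as follows; the array layout and the averaging over Monte Carlo runs are assumptions about how the results are aggregated.

```python
import numpy as np

def normalized_rmse(truth, estimate):
    """Normalized position and velocity RMSEs of Equations (56) and (57).
    truth, estimate: arrays of shape (runs, 4) ordered as [x, v_x, y, v_y]
    for one time step, averaged over the Monte Carlo runs."""
    ex, evx = truth[:, 0] - estimate[:, 0], truth[:, 1] - estimate[:, 1]
    ey, evy = truth[:, 2] - estimate[:, 2], truth[:, 3] - estimate[:, 3]
    pos = np.sqrt((ex**2 + ey**2) / (truth[:, 0]**2 + truth[:, 2]**2))
    vel = np.sqrt((evx**2 + evy**2) / (truth[:, 1]**2 + truth[:, 3]**2))
    return pos.mean(), vel.mean()
```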
Firstly, we set the measurement noise standard deviations to σ_θ = 2 × 10⁻⁴ rad, σ_θ̇ = 10⁻⁵ rad/s and σ_ḟ = 0.05 Hz/s, and select a track as shown in Figure 1a. We carry out 100 Monte Carlo runs of the filtering and average over the different realizations to obtain the tracking results and the normalized position RMSE. In each Monte Carlo run, the initial filtered state at time 0 is generated randomly according to the prior PDF, so the initial states usually differ between runs. The filtering results are shown in Figure 1. In this figure, we can note that the different initial states only influence the first few steps, after which all the filters perform well. In addition, sharp changes in the trajectory can degrade the tracking performance. In this comparison, the track filtered by our method is closer to the true track than those of the other methods, and the normalized position RMSE of our method is lower than the others. Comparing the iterative filtering methods with the EKF, the iterative methods perform better. This is because the iterative methods use more iterations to handle the nonlinear measurement function, so they treat the nonlinearity more accurately than the single-step processing of the EKF.
Secondly, we consider different tracks with different noise levels. We carry out 100 Monte Carlo runs over the tracks and average over the different realizations of the tracks and the corresponding filtered states. The initial true state is generated randomly according to the prior PDF of the state. Because a random initial state is used for each track and process noise is imposed on the state at each time step, the tracks differ between Monte Carlo runs. In the Bayesian filtering framework we focus on the measurement update, so different measurement noise levels are considered in this paper. We analyze three scenarios that differ in the measurement noise standard deviations:
  • Scenario 1: σ_θ = 2 × 10⁻⁴ rad, σ_θ̇ = 10⁻⁵ rad/s and σ_ḟ = 0.05 Hz/s;
  • Scenario 2: σ_θ = 2 × 10⁻⁶ rad, σ_θ̇ = 10⁻⁵ rad/s and σ_ḟ = 0.05 Hz/s;
  • Scenario 3: σ_θ = 2 × 10⁻⁶ rad, σ_θ̇ = 10⁻⁰·¹ rad/s and σ_ḟ = 10⁻³ Hz/s.
The normalized RMSEs of position and velocity are shown in Figure 2, Figure 3 and Figure 4, and the number of iterative steps is shown in Figure 5. The results in Figure 2, Figure 3 and Figure 4 show that the iterative methods perform better than the EKF, and that our proposed NGDF outperforms the other two iterative methods, IEKF and RUF, in the estimation of both position and velocity. Our method has stable performance, keeping the normalized position RMSE at a lower level, whereas the EKF performs poorly and diverges at some time steps. For the velocity estimation, the performance of all four filtering methods fluctuates considerably. This is limited by the radar measurement: the position can be obtained relatively directly from the measurement, while the velocity is obtained indirectly and also depends on the position estimate, so errors in the position measurement and estimation also affect the velocity estimation. Even in this case, our method fluctuates less than the other methods. From Figure 2, Figure 3 and Figure 4, the EKF fluctuates greatly, meaning that it diverges in many cases and is not suitable for this tracking problem with nonlinear measurements. Meanwhile, the IEKF shows some jump points, which occur when the stopping criterion is satisfied before the iterative estimation has converged. The RUF has a stable performance second only to our method; however, it has a far higher computational burden than our method, the EKF and the IEKF, as shown in Figure 5.
To measure the computational burden, we first compare the number of iterations. The average number of iterations per time step is shown in Figure 5. In the simulation, the EKF uses one step to obtain the filtered state at each time step, i.e., the iteration number is N = 1. The RUF is an iterative method with a fixed iteration number of N = 30. For these three scenarios, the iteration number of the IEKF is about N = 5, while the iteration number of the NGDF is roughly less than N = 4 in Figure 5a,b. In Figure 5c, the number of iterations of the NGDF is larger than that of the IEKF before time step 80, but lower in the subsequent time steps. These results reflect the computational burden: in each time step, the computational burden of the RUF is about 30 times that of the EKF, while the IEKF is about 5 times and the NGDF less than 4 times. Comparing the IEKF and the NGDF, the step-size of the NGDF is adaptive, which leads to fewer steps and more robust estimation, whereas the IEKF uses the full step-size, which may cause fluctuation or delay the stopping time. Besides, we compare the execution times for Scenario 1. The four methods are implemented in Matlab on an Intel Core i7 laptop. The average execution times over the 100 Monte Carlo runs are: EKF (11.5 s), IEKF (38.4 s), NGDF (26.4 s) and RUF (187.6 s). It should be noted that these times are measured over the whole filtering procedure, from the first time step to the last. We can conclude that the lowest computational burden among the iterative methods is achieved by our proposed method.
In this simulation, we also note that the performance may degrade after a certain number of tracking steps. This is caused by the resolution limit of the radar measurement: the resolution limit affects the accuracy of the measurement and thus affects the state estimation. This is why radar tracking is carried out over a limited tracking time interval; beyond this interval, the tracking performance is not reliable. In addition, the noise level influences the track and the estimation. A lower level of state noise tends to make the track more regular, while a lower level of measurement noise makes the measurement more informative. Both can make the estimation more accurate, but there is no simple, deterministic relationship between the noise level and the filtering performance, because of the nonlinear function relating the state and the measurement. As is usual for filtering problems, we have made some simplifications and considered representative cases to compare the filtering methods.
To sum up, in this simulation our method achieves the best filtering performance among the compared methods. Its performance is stable, and the performance gain does not imply a much higher computational burden compared to the other iterative methods.

7. Conclusions

In this paper, we have derived a novel filtering method based on the information geometric approach. The filtering problem is converted to an optimization on the statistical manifold constructed from the joint PDFs of Bayesian filtering, and an adaptive natural gradient descent method is used to search for the optimal estimate. In the filtering procedure, the curvature introduced by the nonlinearity of the observation operator h is carefully accounted for by the Fisher metric tensor. For Bayesian filtering, the Fisher metric tensor consists of the measurement likelihood and the prior information of the state; in the linear case the metric is constant, while in the nonlinear case it varies. The adaptive natural gradient descent technique is then used to derive the iterative filter, and the KL divergence is employed in the stopping criterion. Furthermore, the conventional Kalman filter, the EKF and the IEKF are special cases of our proposed method under certain conditions. With the adaptive step-size and the KL divergence stopping criterion, the proposed method achieves improvements over the EKF, IEKF and RUF.
In Bayesian nonlinear filtering, the posterior density may be non-Gaussian, for two reasons. The first is that non-Gaussian observation densities make the posterior density non-Gaussian; an optimal filter has been proposed to address this problem [37], which modifies the Kalman filter to handle non-Gaussian observation densities. The second is that the nonlinear measurement makes Gaussian densities become non-Gaussian; the Monte Carlo method can be used to solve this problem, at the price of a large computational burden.
From the derivation of our method, it can be seen that information geometric optimization for filtering can be extended to some non-Gaussian cases, such as exponential-family densities. When the density has an analytic form, the metric can be computed, and information geometric optimization can be used to derive the filtering method. Besides, a non-Gaussian density can be approximated by a sum of Gaussian densities, and the method proposed in this paper can then be applied to each Gaussian component. In future work, we will continue to combine information geometric optimization with non-Gaussian filtering and provide approaches for addressing non-Gaussian densities in filtering.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant No. 61601479. The authors gratefully acknowledge the reviewers for their very valuable and insightful comments and suggestions, which have improved the presentation.

Author Contributions

Yubo Li and Yongqiang Cheng put forward the original ideas and performed the research. Xiang Li and Hongqiang Wang conceived and designed the simulations comparing the method with other existing methods. Xiaoqiang Hua discussed the adaptive natural gradient descent method. Yuliang Qin reviewed the paper and provided useful comments. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haug, A. Bayesian Estimation and Tracking: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  2. Julier, S.; Uhlmann, J. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
  3. Arasaratnam, I.; Haykin, S.; Elliott, R. Discrete-Time Nonlinear Filtering Algorithms Using Gauss–Hermite Quadrature. Proc. IEEE 2007, 95, 953–977.
  4. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
  5. Stano, P.; Lendek, Z.; Braaksma, J.; Babuška, R.; Keizer, C.; den Dekker, A. Parametric Bayesian Filters for Nonlinear Stochastic Dynamical Systems: A Survey. IEEE Trans. Cybern. 2013, 43, 1607–1624.
  6. Arulampalam, M.; Maskell, S.; Gordon, N. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  7. Zanetti, R. Recursive Update Filtering for Nonlinear Estimation. IEEE Trans. Autom. Control 2012, 57, 1481–1490.
  8. Zanetti, R. Adaptable Recursive Update Filter. J. Guid. Control Dyn. 2015, 38, 1295–1299.
  9. Bell, B.; Cathey, F. The iterated Kalman filter update as a Gauss–Newton method. IEEE Trans. Autom. Control 1993, 38, 294–297.
  10. Bellaire, R.; Kamen, E.; Zabin, S. A new nonlinear iterated filter with applications to target tracking. In Proceedings of SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation, San Diego, CA, USA, 9–14 July 1995; Volume 2561, pp. 240–251.
  11. Auvinen, H.; Bardsley, J.; Haario, H.; Kauranne, T. The Variational Kalman Filter and an efficient implementation using limited memory BFGS. Int. J. Numer. Methods Fluids 2010, 64, 314–335.
  12. Hu, X.; Bao, M.; Zhang, X.; Guan, L.; Hu, Y. Generalized Iterated Kalman Filter and its Performance Evaluation. IEEE Trans. Signal Process. 2015, 63, 3204–3217.
  13. Lefebvre, T.; Bruyninckx, H.; Schutter, J. Kalman filters for nonlinear systems: A comparison of performance. Int. J. Control 2004, 77, 639–653.
  14. Zhan, R.; Wan, J. Iterated Unscented Kalman Filter for Passive Target Tracking. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1155–1163.
  15. Morelande, M.; García-Fernández, Á. Analysis of Kalman filter approximations for nonlinear measurements. IEEE Trans. Signal Process. 2013, 61, 5477–5484.
  16. García-Fernández, Á.; Svensson, L.; Morelande, M.; Särkkä, S. Posterior Linearization Filter: Principles and Implementation Using Sigma Points. IEEE Trans. Signal Process. 2015, 63, 5561–5573.
  17. García-Fernández, Á.; Morelande, M.; Grajal, J.; Svensson, L. Adaptive unscented Gaussian likelihood approximation filter. Automatica 2015, 54, 166–175.
  18. Raitoharju, M.; García-Fernández, Á.; Piché, R. Kullback–Leibler divergence approach to partitioned update Kalman filter. Signal Process. 2017, 130, 289–298.
  19. Chen, B.; Liu, X.; Zhao, H.; Principe, J. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
  20. Liu, X.; Qu, H.; Zhao, J.; Yue, P.; Wang, M. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation. Sensors 2016, 16, 1530.
  21. Wang, Y.; Zheng, W.; Sun, S.; Li, L. Robust Information Filter Based on Maximum Correntropy Criterion. J. Guid. Control Dyn. 2016, 39, 1124–1129.
  22. Amari, S.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA, 2007.
  23. Amari, S. Natural Gradient Works Efficiently in Learning. Neural Comput. 1998, 10, 251–276.
  24. Van Trees, H.L.; Bell, K.L. Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking; Wiley: Hoboken, NJ, USA, 2007.
  25. Amari, S. Information Geometry and Its Applications; Springer: Tokyo, Japan, 2016.
  26. Tichavský, P.; Muravchik, C.; Nehorai, A. Posterior Cramér–Rao Bounds for Discrete-Time Nonlinear Filtering. IEEE Trans. Signal Process. 1998, 46, 1386–1396.
  27. Raskutti, G.; Mukherjee, S. The Information Geometry of Mirror Descent. IEEE Trans. Inf. Theory 2015, 61, 1451–1457.
  28. Cheng, Y.; Wang, X.; Moran, B. Optimal Nonlinear Estimation in Statistical Manifolds with Application to Sensor Network Localization. Entropy 2017, 19, 308.
  29. Luo, Z.; Liao, D.; Qian, Y. Bound Analysis of Natural Gradient Descent in Stochastic Optimization Setting. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún, México, 4–8 December 2016; pp. 4166–4171.
  30. Amari, S. Information Geometry on Hierarchy of Probability Distributions. IEEE Trans. Inf. Theory 2001, 47, 1701–1711.
  31. Oizumi, M.; Tsuchiya, N.; Amari, S. Unified framework for information integration based on information geometry. Proc. Natl. Acad. Sci. USA 2016, 113, 14817–14822.
  32. Lenglet, C.; Rousson, M.; Deriche, R.; Faugeras, O. Statistics on the Manifold of Multivariate Normal Distributions: Theory and Application to Diffusion Tensor MRI Processing. J. Math. Imaging Vis. 2006, 25, 423–444.
  33. Zhang, X. Matrix Analysis and Applications, 2nd ed.; Tsinghua University Press: Beijing, China, 2013.
  34. Nocedal, J.; Wright, S. Numerical Optimization, 2nd ed.; Springer: New York, NY, USA, 2006.
  35. Girolami, M.; Calderhead, B. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. B 2011, 73, 123–214.
  36. García-Fernández, Á.; Svensson, L. Gaussian MAP Filtering Using Kalman Optimization. IEEE Trans. Autom. Control 2015, 60, 1336–1349.
  37. Masreliez, C. Approximate Non-Gaussian Filtering with Linear State and Observation Relation. IEEE Trans. Autom. Control 1975, 20, 107–110.
Figure 1. The trajectories and normalized RMSE of position: (a) trajectory; (b) normalized position RMSE.
Figure 2. The normalized RMSEs of Scenario 1: (a) normalized position RMSE; (b) normalized velocity RMSE.
Figure 3. The normalized RMSEs of Scenario 2: (a) normalized position RMSE; (b) normalized velocity RMSE.
Figure 4. The normalized RMSEs of Scenario 3: (a) normalized position RMSE; (b) normalized velocity RMSE.
Figure 5. The number of iterations for the three scenarios: (a) Scenario 1; (b) Scenario 2; (c) Scenario 3.
