Article

Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

1 Department of Mathematics and System Science, National University of Defense Technology, Fuyuan Road No. 1, Changsha 410072, China
2 Beijing Institute of Control Engineering, China Academy of Space Technology, Beijing 100080, China
3 Unit 94, PLA 91550, Dalian 116023, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(12), 648; https://doi.org/10.3390/e19120648
Submission received: 15 November 2017 / Revised: 25 November 2017 / Accepted: 26 November 2017 / Published: 29 November 2017
(This article belongs to the Special Issue Entropy in Signal Analysis)

Abstract

The α-jerk model is an effective model for maneuvering target tracking, one of the most critical problems in target tracking. Non-Gaussian noises often arise in the tracking process and usually lead to inconsistency and divergence of the tracking filter. In this paper, a novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter based on the weighted least squares formulation of the maximum correntropy criterion is then derived. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with the Huber-based filter; moreover, since the kernel size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed to adjust this parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the influence of non-Gaussian noise for the α-jerk model.

1. Introduction

Target tracking is a real-time estimation process that obtains the position, velocity and other motion parameters of a tracked target from measurement data. Because of the uncertainty and nonlinearity of both the target's maneuvering motion and the measurement process, many algorithms have been studied and proposed for maneuvering target tracking in the past [1,2]. In summary, research on tracking algorithms has focused on two parts: maneuvering target modeling and nonlinear filter design.
For maneuvering target modeling, many models have been proposed to describe target maneuvers. The Singer model [3], proposed by Singer in 1970, is a classic one. In 1982, Bar-Shalom and Birmiwal [4] proposed a maneuvering target tracking model with maneuver detection. In 1984, Zhou and Kumar [5] proposed the current statistical model based on the Singer model. The jerk model proposed by Mehrotra [6] is another valid method for tracking maneuvering targets. Among the models above, the jerk model has the highest order, and a higher model order can represent higher target mobility. The jerk model broadens the class of target maneuvers it can adapt to and is more suitable for highly maneuvering targets [5]. However, in the traditional jerk model the parameter α must be predefined, whereas it may be time-varying in practice. Therefore, Luo proposed the α-jerk model [7], which treats α as a parameter to be estimated by the expanding-dimension (state-augmentation) method in the real-time filtering process.
For nonlinear filter design, many techniques have been proposed according to different optimization rules, including function approximation, posterior probability density approximation and stochastic model approximation [8,9,10,11]. The Extended Kalman Filter (EKF) is a typical function-approximation technique based on the Taylor series expansion. The Unscented Kalman Filter (UKF) is the most common sampling-based moment approximation [12,13]. The stochastic model approximation [14] can approximate the nonlinear function in some probabilistic sense; however, it has not gained popularity, mainly because the expectations involved are not easy to evaluate. Whatever nonlinear filter method is used, it is applied to state estimation under Gaussian white noise in most cases.
However, in practical applications, especially in maneuvering target tracking systems, non-Gaussian noises do exist and have substantial effects on state estimation. To date, some practical models have been proposed, such as the Gaussian mixture noise model [15], the glint noise model [16] and so on.
In a non-Gaussian process, the performance of many nonlinear filters can break down [17]. However, the filter robustness against non-Gaussian noises can be improved by the following approaches:
  • The first approach is to develop filters directly for systems with non-Gaussian noises. Noise distributions such as heavy-tailed distributions and t-distributions are considered in these filters [18,19]. However, such filters are difficult to extend beyond one dimension, which limits their applicability [20].
  • Approximating the posteriori probability density is another practical approach to handle the non-Gaussian noises. The unscented Kalman filter (UKF) uses the unscented transformation (UT) technique to capture the mean and the covariance of the state estimation with sigma points [21]. The ensemble Kalman filter (EnKF) is a method to approximate the state estimation with a set of samples to handle non-Gaussian noises [22]. Gaussian sum filter (GSF) is an algorithm to obtain the filtering distribution and the predictive distribution recursively approximated as Gaussian mixtures [23,24,25].
  • The Huber-based robust filter is a classical method to handle non-Gaussian noises and outliers [26]. It has been a research focus in recent years and has been applied in the fields of navigation and target tracking [27,28,29,30].
  • A new robust Kalman filter is proposed by Chang [31] in recent years. It handles the outliers based on the hypothesis testing theory, which defines a judging index as the square of the Mahalanobis distance from the observation to its prediction. It can effectively resist the heavy-tailed distribution of the observation noises and the outliers in the actual observations.
  • The multi-sensor data fusion Kalman filter is a fuzzy-logic method proposed by Rodger [32]. It can effectively reduce the computational burden and improve the robustness of the Kalman filter, and it has been applied to a vehicle health maintenance system.
  • Maximum correntropy criterion is the latest optimization criterion that is used for improving Kalman filter. Maximum correntropy Kalman filter (MCKF) is a newly proposed filter to process the non-Gaussian noises [33]. In addition, several improved MCKF algorithms have been proposed and applied on state estimation [34,35,36,37].
The maximum correntropy criterion Kalman filter (MCCKF) [20] is the latest approach to handling non-Gaussian noises with a Gaussian kernel. It differs from the MCKF in both form and derivation. The MCCKF has been shown to achieve higher accuracy than other approaches in a benchmark navigation problem with non-Gaussian noises. However, it lacks an optimal rule for kernel size selection, which is important for its robustness, and it has not yet been applied to maneuvering target tracking. Thus, we adopt the Kalman filter based on the maximum correntropy criterion for the α-jerk model with non-Gaussian noise to realize maneuvering target tracking.
The key contributions of this paper are as follows. Firstly, the robustness of Kalman filter algorithm based on the maximum correntropy criterion is introduced in the presence of the non-Gaussian noises. Then, the influence function of the filter algorithm, for the first time, is analyzed in this paper. Moreover, an optimal kernel size is obtained by a novel adaptive size adjusting skill. Finally, the large outliers and the Gaussian mixture noises in the maneuver target track with α -jerk model are properly handled by MCCKF.
The organization of this paper proceeds as follows. The α -jerk model is briefly reviewed in Section 2. In Section 3, we design the algorithm steps for MCCKF, analyze the robustness performance with the influence function and propose a novel kernel size selection method. In Section 4, simulation results are presented to compare the performances of different filter algorithms with respect to different noise conditions. Finally, conclusions are drawn in Section 5.

2. α -Jerk Model

The jerk model is a classical target tracking model, where "jerk" means the rate of change of acceleration [6]. It is described as follows:

$$\dot{j}(t) = -\alpha j(t) + \omega(t),$$

where $j(t) = \dddot{x}(t)$ is the target jerk, $x(t)$ is the target position, $\alpha$ is the correlation parameter permitting the modeling of different classes of targets, $\omega(t)$ is the process noise with zero mean and variance $\sigma_\omega^2 = 2\alpha\sigma_j^2$, and $\sigma_j^2$ is the variance of the target jerk. In this model, $\alpha$ is a fixed value. However, it is difficult to predefine and may be time-varying in practice.
The α-jerk model treats $\alpha$ as a parameter to be estimated and supposes that $\alpha$ is a state variable, so that it is estimated in real time during the filtering process. Consider the following equation:

$$\dot{\alpha}(t) = \varepsilon(t),$$

where $\varepsilon(t)$ denotes zero-mean Gaussian noise with variance $\sigma_\varepsilon^2$.
Combining Equation (2) with the jerk model [6], and considering only the variable $x$ of the state vector $(x\;\; y\;\; z)^T$, the continuous-time α-jerk model can be written as

$$\frac{d}{dt}\begin{pmatrix} x \\ \dot{x} \\ \ddot{x} \\ \dddot{x} \\ \alpha \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -\alpha & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ \dot{x} \\ \ddot{x} \\ \dddot{x} \\ \alpha \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \\ \omega(t) \\ \varepsilon(t) \end{pmatrix},$$

where $x$, $\dot{x}$, $\ddot{x}$, $\dddot{x}$ represent the position, velocity, acceleration and jerk of the aerial target in the x-direction, respectively. Denote

$$\mathbf{X} = \begin{pmatrix} x & \dot{x} & \ddot{x} & \dddot{x} & \alpha \end{pmatrix}^T, \qquad \mathbf{W}(t) = \begin{pmatrix} 0 & 0 & 0 & \omega(t) & \varepsilon(t) \end{pmatrix}^T$$

and

$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -\alpha & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
The state equation can be expressed as:

$$\dot{\mathbf{X}} = \mathbf{A}\mathbf{X} + \mathbf{W}(t).$$

Equation (6) is discretized as:

$$\mathbf{X}_k = \Phi_{k|k-1}\mathbf{X}_{k-1} + \mathbf{W}_k,$$

where

$$\Phi_{k|k-1} = \exp(\mathbf{A}T),$$

where $T$ is the time interval, $\Phi_{k|k-1}$ is the state transition matrix, and $\mathbf{W}_k$ is the process noise at time $k$.
From Equation (8), the discrete state transition matrix can be written as:

$$\Phi_{k|k-1} = \begin{pmatrix} 1 & T & T^2/2 & p_1 & 0 \\ 0 & 1 & T & q_1 & 0 \\ 0 & 0 & 1 & r_1 & 0 \\ 0 & 0 & 0 & s_1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},$$

where

$$\begin{aligned}
p_1 &= \left(2 - 2\alpha T + \alpha^2 T^2 - 2e^{-\alpha T}\right)/\left(2\alpha^3\right),\\
q_1 &= \left(e^{-\alpha T} - 1 + \alpha T\right)/\alpha^2,\\
r_1 &= \left(1 - e^{-\alpha T}\right)/\alpha,\\
s_1 &= e^{-\alpha T}.
\end{aligned}$$
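As a quick sketch (pure Python, our own function names), the entries of Equation (9) can be computed directly from α and the sampling interval T; note that $p_1 \to T^3/6$ as $\alpha \to 0$, matching the pure integrator-chain limit:

```python
import math

def alpha_jerk_phi_entries(alpha, T):
    """Closed-form entries p1, q1, r1, s1 of the fourth column of the
    discrete transition matrix in Equation (9)."""
    e = math.exp(-alpha * T)
    p1 = (2 - 2 * alpha * T + alpha ** 2 * T ** 2 - 2 * e) / (2 * alpha ** 3)
    q1 = (e - 1 + alpha * T) / alpha ** 2
    r1 = (1 - e) / alpha
    s1 = e
    return p1, q1, r1, s1

def alpha_jerk_phi(alpha, T):
    """Full 5x5 transition matrix for the state (x, xdot, xddot, jerk, alpha)."""
    p1, q1, r1, s1 = alpha_jerk_phi_entries(alpha, T)
    return [[1, T, T ** 2 / 2, p1, 0],
            [0, 1, T,          q1, 0],
            [0, 0, 1,          r1, 0],
            [0, 0, 0,          s1, 0],
            [0, 0, 0,          0,  1]]
```

The matrix is upper-triangular with the maneuver correlation confined to the jerk column, as in Equation (9).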
The variance of the process noise $\mathbf{W}_k$ is

$$\mathbf{Q}_k = \begin{pmatrix}
q_{11}\sigma_\omega^2 & q_{12}\sigma_\omega^2 & q_{13}\sigma_\omega^2 & q_{14}\sigma_\omega^2 & 0\\
q_{21}\sigma_\omega^2 & q_{22}\sigma_\omega^2 & q_{23}\sigma_\omega^2 & q_{24}\sigma_\omega^2 & 0\\
q_{31}\sigma_\omega^2 & q_{32}\sigma_\omega^2 & q_{33}\sigma_\omega^2 & q_{34}\sigma_\omega^2 & 0\\
q_{41}\sigma_\omega^2 & q_{42}\sigma_\omega^2 & q_{43}\sigma_\omega^2 & q_{44}\sigma_\omega^2 & 0\\
0 & 0 & 0 & 0 & T\sigma_\varepsilon^2
\end{pmatrix},$$

where

$$\begin{aligned}
q_{11} &= \frac{1}{2\alpha^7}\left(\frac{\alpha^5 T^5}{10} - \frac{\alpha^4 T^4}{2} + \frac{4\alpha^3 T^3}{3} + 2\alpha T - 2\alpha^2 T^2 - 3 + 4e^{-\alpha T} + 2\alpha^2 T^2 e^{-\alpha T} - e^{-2\alpha T}\right),\\
q_{12} = q_{21} &= \frac{1}{2\alpha^6}\left(1 - 2\alpha T + 2\alpha^2 T^2 - \alpha^3 T^3 + \frac{\alpha^4 T^4}{4} + e^{-2\alpha T} + 2\alpha T e^{-\alpha T} - 2e^{-\alpha T} - \alpha^2 T^2 e^{-\alpha T}\right),\\
q_{13} = q_{31} &= \frac{1}{2\alpha^5}\left(\frac{\alpha^3 T^3}{3} + 2\alpha T - \alpha^2 T^2 - 3 + 4e^{-\alpha T} + \alpha^2 T^2 e^{-\alpha T} - e^{-2\alpha T}\right),\\
q_{14} = q_{41} &= \frac{1}{2\alpha^4}\left(1 - 2e^{-\alpha T} - \alpha^2 T^2 e^{-\alpha T} + e^{-2\alpha T}\right),\\
q_{22} &= \frac{1}{2\alpha^5}\left(1 - e^{-2\alpha T} + \frac{2\alpha^3 T^3}{3} + 2\alpha T - 2\alpha^2 T^2 - 4\alpha T e^{-\alpha T}\right),\\
q_{23} = q_{32} &= \frac{1}{2\alpha^4}\left(1 - 2\alpha T + \alpha^2 T^2 + 2\alpha T e^{-\alpha T} + e^{-2\alpha T} - 2e^{-\alpha T}\right),\\
q_{24} = q_{42} &= \frac{1}{2\alpha^3}\left(1 - 2\alpha T e^{-\alpha T} - e^{-2\alpha T}\right),\\
q_{33} &= \frac{1}{2\alpha^3}\left(4e^{-\alpha T} + 2\alpha T - e^{-2\alpha T} - 3\right),\\
q_{34} = q_{43} &= \frac{1}{2\alpha^2}\left(1 - 2e^{-\alpha T} + e^{-2\alpha T}\right),\\
q_{44} &= \frac{1}{2\alpha}\left(1 - e^{-2\alpha T}\right).
\end{aligned}$$
The measurement equation can be expressed as:

$$\mathbf{Z}_k = \mathbf{H}_k\mathbf{X}_k + \mathbf{V}_k,$$

where $\mathbf{H}_k$ is the measurement matrix at time $k$, and $\mathbf{V}_k$ is the measurement noise, independent of the process noise $\mathbf{W}_k$.

In most situations, $\mathbf{V}_k$ is Gaussian white noise. In practice, however, due to target glint, scattering at the radar tracking equipment, operator faults [16,38] and so on, the measurement noise may be non-Gaussian, or outliers may exist in the measurement data. This can be modeled as

$$\mathbf{V}_k = \mathbf{V}_k^{(1)} + \mathbf{V}_k^{(2)}$$

or

$$\mathbf{V}_k = \mathbf{V}_k^{(1)} + \Delta\mathbf{V}_k,$$

where $\mathbf{V}_k^{(1)}$ denotes Gaussian white noise, $\mathbf{V}_k^{(2)}$ obeys a Gaussian distribution different from that of $\mathbf{V}_k^{(1)}$, and $\Delta\mathbf{V}_k$ denotes the outliers.
In this paper, a new filter algorithm will be proposed to deal with non-Gaussian noises of the forms in Equations (14) and (15).
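As an illustrative sketch (our own helper with arbitrary example parameters), noise of the form in Equation (14) can be generated as a two-component Gaussian mixture:

```python
import random

def mixture_noise(sigma1, sigma2, eps, rng):
    """One sample of Gaussian mixture measurement noise: with probability
    (1 - eps) drawn from N(0, sigma1^2) (nominal component), otherwise
    from N(0, sigma2^2) (contaminating/outlier component)."""
    if rng.random() < eps:
        return rng.gauss(0.0, sigma2)  # contaminating component
    return rng.gauss(0.0, sigma1)      # nominal component

# usage: 10% contamination with a 10x-larger standard deviation
rng = random.Random(0)
samples = [mixture_noise(1.0, 10.0, 0.1, rng) for _ in range(20000)]
```

The resulting variance is $(1-\epsilon)\sigma_1^2 + \epsilon\sigma_2^2$ (here $0.9\cdot 1 + 0.1\cdot 100 = 10.9$), but the distribution is heavy-tailed rather than Gaussian.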

3. Design for Maximum Correntropy Criterion Kalman Filter with Non-Gaussian Noise

3.1. Standard Kalman Filter

From Section 2, the α-jerk model for maneuvering target tracking can be expressed as:

$$\mathbf{X}_k = \Phi_{k|k-1}\mathbf{X}_{k-1} + \mathbf{W}_k, \qquad \mathbf{Z}_k = \mathbf{H}_k\mathbf{X}_k + \mathbf{V}_k.$$

Assume $\mathbf{W}_k$ and $\mathbf{V}_k$ are independent Gaussian white noises satisfying

$$E\left(\mathbf{W}_k\mathbf{W}_j^T\right) = \begin{cases}\mathbf{Q}_k, & k = j,\\ \mathbf{0}, & k\neq j,\end{cases} \qquad E\left(\mathbf{V}_k\mathbf{V}_j^T\right) = \begin{cases}\mathbf{R}_k, & k = j,\\ \mathbf{0}, & k\neq j,\end{cases}$$

where $\mathbf{Q}_k$ and $\mathbf{R}_k$ are known positive definite matrices.
In the KF, the predicted state $\hat{\mathbf{X}}_{k|k-1}$ satisfies

$$E\left[\left(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1}\right)\left(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1}\right)^T\right] = \mathbf{P}_{k|k-1},$$

where $\mathbf{P}_{k|k-1}$ is the predicted covariance. One very reasonable estimate of $\mathbf{X}_k$, taking into account the measurement $\mathbf{Z}_k$, is the weighted least squares estimate, which we call $\hat{\mathbf{X}}_k$. For this estimate, we choose $\hat{\mathbf{X}}_k$ as the value of $\mathbf{X}_k$ that minimizes the quadratic form

$$J = \frac{1}{2}\left[\left(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1}\right)^T\mathbf{P}_{k|k-1}^{-1}\left(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1}\right) + \left(\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k\right)^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k\right)\right].$$

Note that the weighting matrices $\mathbf{P}_{k|k-1}^{-1}$ and $\mathbf{R}_k^{-1}$ are the inverses of the prior expectations of $(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1})(\mathbf{X}_k - \hat{\mathbf{X}}_{k|k-1})^T$ and $(\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k)(\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k)^T$, respectively, and the predicted state is obtained as follows:

$$\hat{\mathbf{X}}_{k|k-1} = \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}.$$

The value $\hat{\mathbf{X}}_k = \arg\min_{\mathbf{X}_k} J(\mathbf{X}_k)$ is the best estimate. A detailed proof that the Kalman filter can be deduced from Equation (19) is given in [39].
Therefore, the Kalman filter can be deduced by solving

$$\frac{\partial J}{\partial \hat{\mathbf{X}}_k} = 0.$$

The Kalman filter equations are then expressed as:

$$\begin{aligned}
\hat{\mathbf{X}}_{k|k-1} &= \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1},\\
\mathbf{P}_{k|k-1} &= \Phi_{k|k-1}\mathbf{P}_{k-1|k-1}\Phi_{k|k-1}^T + \mathbf{Q}_k,\\
\mathbf{K}_k &= \mathbf{P}_{k|k-1}\mathbf{H}_k^T\left(\mathbf{H}_k\mathbf{P}_{k|k-1}\mathbf{H}_k^T + \mathbf{R}_k\right)^{-1},\\
\hat{\mathbf{X}}_k &= \hat{\mathbf{X}}_{k|k-1} + \mathbf{K}_k\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}\right),\\
\mathbf{P}_{k|k} &= \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k|k-1}.
\end{aligned}$$
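As a minimal illustration (a scalar toy example of ours, not the paper's implementation), the predict/update cycle above can be written as:

```python
import random

def kalman_step(x_hat, P, z, phi, q, h, r):
    """One scalar predict/update cycle of the Kalman filter equations above."""
    # prediction
    x_pred = phi * x_hat
    P_pred = phi * P * phi + q
    # gain, state update, covariance update
    K = P_pred * h / (h * P_pred * h + r)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

# usage: estimate a constant (phi = h = 1) from noisy measurements
rng = random.Random(1)
x_true, x_hat, P = 5.0, 0.0, 10.0
for _ in range(200):
    z = x_true + rng.gauss(0.0, 1.0)
    x_hat, P = kalman_step(x_hat, P, z, phi=1.0, q=1e-4, h=1.0, r=1.0)
```

With Gaussian noise, the estimate converges to the true value and the posterior variance P shrinks; the non-Gaussian cases discussed below are exactly where this plain update degrades.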
The standard Kalman filter has been applied in many fields. However, it rests on the assumption that both the process noise and the measurement noise are Gaussian white noise, so it cannot perform well in the non-Gaussian conditions introduced in Section 2. Thus, some robust Kalman filtering algorithms have been developed [40]. In this paper, one of the latest robust Kalman filters, the maximum correntropy criterion Kalman filter, is derived based on correntropy theory. It achieves higher accuracy than several other approaches in the presence of non-Gaussian noises.

3.2. Design of Maximum Correntropy Criterion Kalman Filter

3.2.1. Correntropy

Correntropy [41] defines a metric in the data space and is directly related to information entropy. It induces a new metric in the sample space that is equivalent to the 2-norm distance [42,43,44]. Given two random variables $A, B \in \mathbb{R}$ with joint probability density function (PDF) $p(a,b)$, the correntropy between them is defined as follows:

Definition 1.
Correntropy is a generalized similarity measure between two arbitrary scalar random variables A and B defined by

$$C_\lambda(A,B) = E\left[\kappa_\lambda(A,B)\right] = \iint \kappa_\lambda(a,b)\, p(a,b)\, da\, db,$$

where $E[\cdot]$ is the expectation, $\kappa_\lambda(\cdot,\cdot)$ is any continuous positive definite kernel function, and $\lambda > 0$ is the kernel size.

In this paper, the kernel function is chosen as the Gaussian kernel given by

$$\kappa_\lambda(a,b) = G_\lambda(e) = \exp\left(-\frac{e^2}{2\lambda^2}\right),$$

where $e = a - b$.

In practice, the joint PDF $p(a,b)$ is unknown and only a finite number of samples are available. Therefore, the correntropy can be estimated by the sample mean estimator:

$$\hat{C}_\lambda(A,B) = \frac{1}{N}\sum_{i=1}^{N} G_\lambda(e_i),$$

where $e_i = a_i - b_i$, with $\{(a_i, b_i)\}_{i=1}^{N}$ being $N$ samples drawn from $p(a,b)$.
When $\lambda = 1$, $a_i \in [0, 10]$, $b_i \in [0, 10]$ and the number of samples is 50, the correntropy is as shown in Figure 1.
As can be seen from Figure 1, the correntropy can be used as a similarity measure in the joint space. In addition, when $a_i = b_i$, the correntropy reaches its maximum value. The cost function that embodies the maximum correntropy criterion (MCC) is:

$$J_{MCC} = \max\sum_{i=1}^{N} G_\lambda(e_i).$$
MCC has a probabilistic meaning of maximizing the error probability density at the origin [41], and it has the advantages that it is a local criterion of similarity and it is very useful for non-Gaussian noise [45]. In other words, compared with the mean square error (MSE), MCC is local along the bisector of the joint space, which means that the maximum correntropy value is mainly decided by the kernel function along the line A = B [46].
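As a minimal sketch (our own helper), the sample-mean estimator of Equation (25) with the Gaussian kernel of Equation (24):

```python
import math

def correntropy(a, b, lam):
    """Sample-mean correntropy estimator (Equation (25)) with the
    Gaussian kernel G_lambda(e) = exp(-e^2 / (2 * lambda^2))."""
    return sum(math.exp(-(ai - bi) ** 2 / (2 * lam ** 2))
               for ai, bi in zip(a, b)) / len(a)
```

Identical sequences give a correntropy of 1 (the maximum), and the value decays toward 0 as the errors $e_i$ grow relative to the kernel size.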

3.2.2. Maximum Correntropy Criterion Kalman Filter Algorithm

Similar to Equation (19), the maximum correntropy criterion Kalman filter (MCCKF) is designed based on the MCC theory. We have the correntropy-based objective function:

$$\hat{\mathbf{X}}_k = \arg\max_{\mathbf{X}_k} J(\mathbf{X}_k),$$

where

$$J(\mathbf{X}_k) = G_\lambda\left(\left\|\mathbf{X}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right\|_{\mathbf{P}_{k|k-1}^{-1}}\right) + G_\lambda\left(\left\|\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k\right\|_{\mathbf{R}_k^{-1}}\right),$$

where $\mathbf{R}_k^{-1}$ and $\mathbf{P}_{k|k-1}^{-1}$ are chosen as the weighting matrices.
Setting the derivative of Equation (28) with respect to $\mathbf{X}_k$ to zero gives:

$$G_\lambda\left(\left\|\mathbf{X}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right\|_{\mathbf{P}_{k|k-1}^{-1}}\right)\mathbf{P}_{k|k-1}^{-1}\left(\mathbf{X}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right) - G_\lambda\left(\left\|\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k\right\|_{\mathbf{R}_k^{-1}}\right)\mathbf{H}_k^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\mathbf{X}_k\right) = 0.$$

Therefore, $\hat{\mathbf{X}}_k$ is the solution of Equation (29). Let

$$\mathbf{P}_{k|k-1}^{-1}\left(\hat{\mathbf{X}}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right) = L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right),$$

where $L_k = G_\lambda\left(\left\|\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right\|_{\mathbf{R}_k^{-1}}\right)\Big/\,G_\lambda\left(\left\|\hat{\mathbf{X}}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right\|_{\mathbf{P}_{k|k-1}^{-1}}\right)$. Then, $L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}$ is added and subtracted on the right-hand side of Equation (30) to obtain

$$\mathbf{P}_{k|k-1}^{-1}\left(\hat{\mathbf{X}}_k - \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1}\right) = L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right) + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_{k|k-1} - L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_{k|k-1},$$

$$\mathbf{P}_{k|k-1}^{-1}\hat{\mathbf{X}}_k - \mathbf{P}_{k|k-1}^{-1}\hat{\mathbf{X}}_{k|k-1} = L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{Z}_k - L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_k + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_{k|k-1} - L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\hat{\mathbf{X}}_{k|k-1},$$

$$\left(\mathbf{P}_{k|k-1}^{-1} + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\right)\hat{\mathbf{X}}_k = \left(\mathbf{P}_{k|k-1}^{-1} + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\right)\hat{\mathbf{X}}_{k|k-1} + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}\right),$$

$$\hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k|k-1} + \left(\mathbf{P}_{k|k-1}^{-1} + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\right)^{-1} L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}\right) = \hat{\mathbf{X}}_{k|k-1} + \mathbf{K}_k\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}\right).$$
The MCCKF algorithm can then be derived from Equation (34) as follows:
  • State prediction:
    $$\hat{\mathbf{X}}_{k|k-1} = \Phi_{k|k-1}\hat{\mathbf{X}}_{k-1},$$
  • Covariance prediction:
    $$\mathbf{P}_{k|k-1} = \Phi_{k|k-1}\mathbf{P}_{k-1|k-1}\Phi_{k|k-1}^T + \mathbf{Q}_k,$$
  • Filter gain:
    $$\mathbf{K}_k = \left(\mathbf{P}_{k|k-1}^{-1} + L_k\mathbf{H}_k^T\mathbf{R}_k^{-1}\mathbf{H}_k\right)^{-1} L_k\mathbf{H}_k^T\mathbf{R}_k^{-1},$$
  • State update:
    $$\hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k|k-1} + \mathbf{K}_k\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k|k-1}\right),$$
  • Covariance update:
    $$\mathbf{P}_{k|k} = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k|k-1}\left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)^T + \mathbf{K}_k\mathbf{R}_k\mathbf{K}_k^T.$$
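A scalar sketch of the five steps above (ours, for illustration). Note that $L_k$ formally depends on the updated state $\hat{\mathbf{X}}_k$; the sketch evaluates it at the prediction instead, a one-step simplification of our own rather than a prescription from the paper:

```python
import math

def g(e, lam):
    """Gaussian kernel G_lambda(e) of Equation (24)."""
    return math.exp(-e * e / (2 * lam * lam))

def mcckf_step(x_hat, P, z, phi, q, h, r, lam):
    """One scalar MCCKF cycle following the five steps above. L_k is
    evaluated at the predicted state (where the prediction-residual
    kernel is G(0) = 1), a one-step approximation."""
    x_pred = phi * x_hat
    P_pred = phi * P * phi + q
    innov = z - h * x_pred
    L = g(innov / math.sqrt(r), lam) / g(0.0, lam)
    K = (L * h / r) / (1.0 / P_pred + L * h * h / r)  # scalar filter gain
    x_new = x_pred + K * innov
    P_new = (1 - K * h) ** 2 * P_pred + K * r * K
    return x_new, P_new

# a nominal measurement moves the estimate; a gross outlier barely does
x_norm, _ = mcckf_step(0.0, 1.0, 0.5, 1.0, 0.0, 1.0, 1.0, 2.0)
x_out, _ = mcckf_step(0.0, 1.0, 50.0, 1.0, 0.0, 1.0, 1.0, 2.0)
```

A large innovation drives the kernel ratio $L_k$ toward zero, which collapses the gain and leaves the prediction almost untouched; this is the mechanism behind the robustness analyzed next.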

3.3. Robustness of MCCKF

The advantage of MCCKF is that the algorithm uses a method similar to Huber-based filter (HF) [47], which can resist the non-Gaussian noises because the robust function of HF does not increase rapidly as the estimate residual grows. Next, the algorithm robustness is analyzed with the influence function [48] in theory and is compared with HF through the weighting function.
HF introduces the concept of robustness into maximum likelihood estimation, as proposed by Huber [47]. Recall that the maximum likelihood estimator is built from an observation vector $\mathbf{Z} = (z_1, \ldots, z_N)$ and a state variable $\mathbf{X} = (x_1, \ldots, x_N)$. Assuming that $\mathbf{e} = \mathbf{Z} - \mathbf{H}\mathbf{X}$ obeys a distribution whose density $p(\mathbf{e};\mathbf{X})$ is known except for the parameters $\mathbf{X}$, the MLE of $\mathbf{X}$ can be written as
$$\hat{\mathbf{X}}_{MLE} = \arg\max_{\mathbf{X}} p\left(e_1, \ldots, e_N; \mathbf{X}\right).$$

If $e_1, \ldots, e_N$ are i.i.d. observations, it can be rewritten as

$$\hat{\mathbf{X}}_{MLE} = \arg\max_{\mathbf{X}} \prod_{i=1}^{N} p\left(e_i; \mathbf{X}\right) = \arg\max_{\mathbf{X}} \sum_{i=1}^{N} \ln p\left(e_i; \mathbf{X}\right) = \arg\min_{\mathbf{X}} \sum_{i=1}^{N} -\ln p\left(e_i; \mathbf{X}\right).$$

Assume that $\rho$ is a function satisfying $\rho(e_i;\mathbf{X}) = -a\ln p(e_i;\mathbf{X}) + b$, where $a > 0$ and $b$ is independent of $\mathbf{X}$. Therefore, $\hat{\mathbf{X}}_{MLE}$ satisfies

$$\sum_{i=1}^{N}\rho\left(e_i; \hat{\mathbf{X}}\right) = \sum_{i=1}^{N}\rho\left(z_i - \mathbf{H}x_i\right) = \min_{\mathbf{X}}\sum_{i=1}^{N}\rho\left(e_i; \mathbf{X}\right).$$
For some parametric models, if the errors in the target tracking model are assumed to be purely stochastic, white and Gaussian with zero mean and constant variance $\sigma^2$, then the maximum likelihood estimation can be solved. However, in practice, the measurement equipment is affected by the external environment or breaks down, so measurement outliers cannot be avoided. In addition, some errors obey Gaussian distributions whose means and variances differ from those of the majority of errors. For example, the distribution of one error can be described as follows:
$$F\left(e_i; \mathbf{X}\right) = (1-\upsilon)F_0\left(e_i; \mathbf{X}\right) + \upsilon F_c\left(e_i; \mathbf{X}\right),$$

where $F(\cdot)$ denotes the mixture distribution, $F_0(\cdot)$ is a known normal distribution, $F_c(\cdot)$ is an unknown contaminating distribution, and $\upsilon$ denotes the contamination ratio [49], which is usually much smaller than 1. For Equation (43), it is impractical to solve the maximum likelihood estimation via Equation (42).
For this problem, Huber proposed that $\rho$ can be chosen freely within limits. The method of solving Equation (42) with such a $\rho$ is called M-estimation.
Huber defined the M-estimators as a generalization of maximum likelihood as

$$\min_{\mathbf{X}}\sum_{i=1}^{N}\rho\left(e_i; \mathbf{X}\right) \quad\text{or}\quad \sum_{i=1}^{N}\varphi\left(e_i; \mathbf{X}\right) = 0 \quad\text{with}\quad \varphi\left(e_i; \mathbf{X}\right) = \frac{d\rho\left(e_i; \mathbf{X}\right)}{de_i},$$

where $\rho(e)$ is called the cost or penalty function [48] and must obey the properties:

$$1.\ \rho(e)\ge 0; \quad 2.\ \rho(0)=0; \quad 3.\ \rho(e)=\rho(-e); \quad 4.\ \rho(e_i)\ge\rho(e_j)\ \text{for}\ |e_i|>|e_j|.$$

Let

$$\rho_{MCC}\left(e_i\right) = \frac{1 - \exp\left(-e_i^2/2\lambda^2\right)}{\sqrt{2\pi}\,\lambda}.$$
It is trivial to check that this obeys the properties of Equation (45). Furthermore,

$$\min\sum_{i=1}^{N}\rho_{MCC}\left(e_i\right) = \min\sum_{i=1}^{N}\frac{1 - \exp\left(-e_i^2/2\lambda^2\right)}{\sqrt{2\pi}\,\lambda} \iff \max\sum_{i=1}^{N}\frac{\exp\left(-e_i^2/2\lambda^2\right)}{\sqrt{2\pi}\,\lambda} \iff \max\sum_{i=1}^{N}G_\lambda\left(e_i\right).$$

In other words, there is a connection between MCC and Equation (46) [50,51]:

$$\min_{\mathbf{X}}\sum_{i=1}^{N}\rho_{MCC}\left(e_i\right) \quad\text{is equivalent to}\quad MCC = \max_{\mathbf{X}}\sum_{i=1}^{N}G_\lambda\left(e_i\right).$$

Considering the M-estimation problem, the weighting function $\psi_{MCC}(e_i)$ is defined by

$$\psi_{MCC}\left(e_i\right) = \frac{\rho'_{MCC}\left(e_i\right)}{e_i},$$

where $\rho'_{MCC}(e_i) = \dfrac{d\rho_{MCC}(e_i)}{de_i}$. Therefore,

$$\psi_{MCC}\left(e_i\right) = \frac{\exp\left(-e_i^2/2\lambda^2\right)}{\sqrt{2\pi}\,\lambda^3}.$$
Based on the penalty function and the weighting function above, we will analyze the robustness with influence function and compare it with Huber-filter to prove the superiority of MCCKF.

3.3.1. Influence Function of MCCKF

The influence function (IF) is a quantitative measure to evaluate the robustness of a Kalman filter [52]. The conventional definition of the IF is as follows:

Definition 2.
For any $l \in \mathbb{R}$, where $\mathbb{R}$ is the set of real numbers, let $\delta_l$ be the probability distribution degenerate at $l$ and consider the mixture distribution

$$F = (1-\upsilon)F_0 + \upsilon\delta_l, \qquad 0\le\upsilon\le 1,$$

where $F_0$ is the main distribution and $\upsilon$ is the contamination ratio. If the limit

$$IF\left(l; F_0, T\right) = \lim_{\upsilon\to 0}\frac{T(F) - T(F_0)}{\upsilon} = \left.\frac{dT(F)}{d\upsilon}\right|_{\upsilon=0}$$

exists, where $T(F)$ is a functional of the distribution function $F$, then $IF(l; F_0, T)$ is called the influence function of $T(F)$.
The traditional definition of the IF describes the relative influence of a contaminated measurement on the functional $T$. The smaller the IF, the less sensitive $T$ is to the contaminated measurement. In other words, for a fixed distribution $F$, the variation of the IF reflects the different influences of different measurements. Thus, it can effectively reflect the robustness of a Kalman filter.
However, the traditional form of the IF is sometimes too complex for analyzing the performance of a Kalman filter. To analyze the robustness of the Kalman filter more clearly, Chang [48] derived a new form of the influence function. According to Equation (16), construct the following pseudo-measurement

$$\mathbf{Z}_k^{*} = \begin{pmatrix}\hat{\mathbf{X}}_{k|k-1}\\ \mathbf{Z}_k\end{pmatrix} = \begin{pmatrix}\mathbf{X}_k\\ \mathbf{H}_k\mathbf{X}_k\end{pmatrix} + \begin{pmatrix}\delta\hat{\mathbf{X}}_{k|k-1}\\ \mathbf{V}_k\end{pmatrix} = \mathbf{H}_k^{*}\mathbf{X}_k + \mathbf{V}_k^{*}$$

with

$$E\left(\mathbf{V}_k^{*}\right) = \mathbf{0}, \qquad E\left(\mathbf{V}_k^{*}\mathbf{V}_k^{*T}\right) = \mathbf{R}_k^{*} = \mathbf{P}_{\mathbf{V}_k^{*}} = \begin{pmatrix}\mathbf{P}_{\hat{\mathbf{X}}_{k|k-1},\hat{\mathbf{X}}_{k|k-1}} & \mathbf{O}\\ \mathbf{O} & \mathbf{R}_k\end{pmatrix},$$

with $\mathbf{Z}_k^{*}$ as the measurement and $\mathbf{X}_k$ as the parameter to be estimated. Equation (52) is exactly a classic linear regression problem with known measurement statistics, which can be solved using the weighted least squares method as

$$\hat{\mathbf{X}}_k = \left(\mathbf{H}_k^{*T}\mathbf{R}_k^{*-1}\mathbf{H}_k^{*}\right)^{-1}\mathbf{H}_k^{*T}\mathbf{R}_k^{*-1}\mathbf{Z}_k^{*},$$

$$\mathbf{P}_{\hat{\mathbf{X}}_k\hat{\mathbf{X}}_k} = \left(\mathbf{H}_k^{*T}\mathbf{R}_k^{*-1}\mathbf{H}_k^{*}\right)^{-1}.$$

Define

$$\bar{\mathbf{Z}}_k = \mathbf{R}_k^{*-1/2}\mathbf{Z}_k^{*},$$

$$\bar{\mathbf{H}}_k = \mathbf{R}_k^{*-1/2}\mathbf{H}_k^{*},$$

$$\bar{\mathbf{V}}_k = \mathbf{R}_k^{*-1/2}\mathbf{V}_k^{*},$$

where $\mathbf{R}_k^{*-1/2} = \left(\mathbf{R}_k^{*1/2}\right)^{-1}$ and $\mathbf{R}_k^{*1/2}\left(\mathbf{R}_k^{*1/2}\right)^{T} = \mathbf{R}_k^{*}$. Then,

$$\bar{\mathbf{Z}}_k = \bar{\mathbf{H}}_k\mathbf{X}_k + \bar{\mathbf{V}}_k,$$

$$E\left(\bar{\mathbf{V}}_k\right) = \mathbf{0}, \qquad \mathbf{P}_{\bar{\mathbf{V}}_k,\bar{\mathbf{V}}_k} = \mathbf{I}.$$
For the sake of brevity, the subscript $k$ is omitted and we let $m = n_x + n_y$ in the sequel, where $n_x$ and $n_y$ denote the dimensions of the state vector and the measurement vector. The ML solution to Equation (59) is found by maximizing the likelihood function

$$L\left(\hat{\mathbf{X}}, \mathbf{r}\right) = \prod_{j=1}^{m} F\left(r_j\right),$$

where $\mathbf{r} = \bar{\mathbf{Z}} - \bar{\mathbf{H}}\hat{\mathbf{X}}$ and $F(\cdot)$ denotes the probability distribution. Similar to Equation (41), Equation (61) is equivalent to the following form:

$$J\left(\hat{\mathbf{X}}, \mathbf{r}\right) = -\ln L\left(\hat{\mathbf{X}}, \mathbf{r}\right) = -\sum_{j=1}^{m}\ln p\left(r_j\right) = \sum_{j=1}^{m}\rho\left(r_j\right),$$

where $\rho(\cdot)$ is the cost or penalty function. Then, a necessary condition for the minimizer of Equation (62) is

$$\sum_{j=1}^{m}\rho'\left(r_j\right)\frac{\partial r_j}{\partial\hat{\mathbf{X}}} = \sum_{j=1}^{m}\phi\left(r_j\right)\frac{\partial r_j}{\partial\hat{\mathbf{X}}} = \sum_{j=1}^{m}\phi\left(r_j\right)\bar{\mathbf{h}}_j^T = 0,$$

where

$$\phi = \rho'$$

and $\bar{\mathbf{h}}_j$ is the corresponding column vector of $\bar{\mathbf{H}}^T$ (the measurement matrix). Taking the transpose, Equation (63) is equivalent to

$$\sum_{j=1}^{m}\phi\left(r_j\right)\bar{\mathbf{h}}_j = \mathbf{0}.$$
Define an estimator $E(F)$, indicating that it is a functional of the distribution $F$. Then, based on the description above, the new form of the IF for the distribution in Equation (43) is:

$$IF\left(\mathbf{h}, F_0, E\right) = \lim_{\upsilon\to 0^{+}}\frac{E\left((1-\upsilon)F_0 + \upsilon F_c\right) - E\left(F_0\right)}{\upsilon} = \left.\frac{\partial E\left(F_\upsilon\right)}{\partial\upsilon}\right|_{\upsilon=0},$$

where $\mathbf{h}$ is the generalization of $\bar{\mathbf{h}}_j$, representing the corresponding column vector of $\bar{\mathbf{H}}^T$, and $\upsilon\to 0^{+}$ denotes the limit from the right. It evaluates the infinitesimal influence on the estimator $E$ of any single measurement, with residual $r$, coming from the neighborhood of the distribution $F_0$ through the design matrix $\bar{\mathbf{h}}^T$. Let the method of estimating $\hat{\mathbf{X}}$ from the m-sample data be denoted as $E_m$. Then, according to the relation between $\hat{\mathbf{X}}$ and $\mathbf{r}$, Chang [48] transformed the definition of the IF into another form to obtain the IF of $E_m$; the derivation is given in detail in Ref. [48]. The IF assessing the influence of one of the m elements of the pseudo-measurement vector in Equation (59) is defined as:

$$IF\left(r, \mathbf{h}, F_0, E\right) = \frac{\phi\left(r_j\right)}{E_{F_0}\left[\phi'(r)\right]}\left(\bar{\mathbf{H}}^T\bar{\mathbf{H}}\right)^{-1}\bar{\mathbf{h}}_j, \qquad j = 1, 2, \ldots, m,$$

where $r_j$ is the j-th element of $\mathbf{r}$.
Three requirements about the IF are needed for an estimator to be infinitesimally robust, i.e., boundedness, continuity, and linearity at the center [48].
Next, we will prove that MCCKF satisfies the requirements above.
According to Equation (46), the estimator of MCCKF has the cost function derivative

$$\phi_{MCC}(e) = \rho'_{MCC}(e) = \frac{e}{\sqrt{2\pi}\,\lambda^3}\exp\left(-\frac{e^2}{2\lambda^2}\right)$$

and

$$\phi'_{MCC}(e) = \frac{1}{\sqrt{2\pi}\,\lambda^3}\exp\left(-\frac{e^2}{2\lambda^2}\right) - \frac{e^2}{\sqrt{2\pi}\,\lambda^5}\exp\left(-\frac{e^2}{2\lambda^2}\right).$$

Then,

$$E_{F_0}\left[\phi'_{MCC}(e)\right] = \int_{-\infty}^{+\infty}\phi'_{MCC}(e)\,dF_0 = \frac{\lambda^2}{\sqrt{2\pi}\,\lambda^2\left(\lambda^2+\sigma^2\right)^{3/2}} = \frac{1}{\sqrt{2\pi}\left(\lambda^2+\sigma^2\right)^{3/2}},$$

where $\sigma$ is the standard deviation of the distribution $F_0$. Therefore, we have the IF of MCC expressed as:

$$IF_{MCC}\left(r, \mathbf{h}, F_0, E\right) = \frac{\phi_{MCC}\left(r_j\right)}{E_{F_0}\left[\phi'_{MCC}(r)\right]}\left(\bar{\mathbf{H}}^T\bar{\mathbf{H}}\right)^{-1}\bar{\mathbf{h}}_j,$$

where $E_{F_0}[\phi'_{MCC}(r)]$ is a constant (computed in Equation (70)) and $(\bar{\mathbf{H}}^T\bar{\mathbf{H}})^{-1}\bar{\mathbf{h}}_j$ is a known constant matrix. It is trivial to check that the IF in Equation (71) satisfies continuity and linearity at the center. Next, we prove the boundedness of Equation (71).
According to the statement above, the boundedness of the IF depends on $\phi_{MCC}(r_j)$.
Remark 1.
$\phi_{MCC}(r)$ is a bounded function on the interval $(-\infty, +\infty)$.
Proof. 

$$\phi_{MCC}(r) = \frac{r}{\sqrt{2\pi}\,\lambda^3}\exp\left(-\frac{r^2}{2\lambda^2}\right).$$

It is evident that $\phi_{MCC}(r)$ is a continuous function, so it is bounded on any closed interval. Furthermore, it is an odd function.
  • Fix $\varepsilon > 0$; $\phi_{MCC}(r)$ is bounded on the interval $[-\varepsilon, \varepsilon]$.
  • On the interval $[\varepsilon, +\infty)$, by L'Hôpital's rule [53], the limit of Equation (72) can be transformed as:
    $$\lim_{r\to+\infty}\phi_{MCC}(r) = \lim_{r\to+\infty}\frac{1}{\sqrt{2\pi}\,\lambda^3}\cdot\frac{r}{\exp\left(r^2/2\lambda^2\right)} = \lim_{r\to+\infty}\frac{1}{\sqrt{2\pi}\,\lambda^3}\cdot\frac{\lambda^2}{r\exp\left(r^2/2\lambda^2\right)} = 0.$$
Therefore, $\phi_{MCC}(r)$ is bounded on $[\varepsilon, +\infty)$. Since the function is odd, it is also bounded on $(-\infty, -\varepsilon]$. ☐
Hence, the IF of MCCKF satisfies all three of the requirements of a robust estimator, i.e., boundedness, continuity, and linearity at the center.
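As an independent numerical sanity check (our own, not part of the paper), one can verify both the boundedness of $\phi_{MCC}$ and the closed form $1/\left(\sqrt{2\pi}\left(\lambda^2+\sigma^2\right)^{3/2}\right)$, our simplification of Equation (70), by direct quadrature:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def phi_mcc(e, lam):
    """phi_MCC(e), Equation (68)."""
    return e / (SQRT2PI * lam ** 3) * math.exp(-e * e / (2 * lam * lam))

def dphi_mcc(e, lam):
    """phi'_MCC(e), Equation (69)."""
    c = math.exp(-e * e / (2 * lam * lam)) / SQRT2PI
    return c / lam ** 3 - c * e * e / lam ** 5

# midpoint-rule quadrature of E_{F0}[phi'_MCC(e)] for F0 = N(0, sigma^2)
lam, sigma = 1.5, 2.0
n, lo, hi = 200000, -40.0, 40.0
h = (hi - lo) / n
numeric = sum(dphi_mcc(lo + (i + 0.5) * h, lam)
              * math.exp(-(lo + (i + 0.5) * h) ** 2 / (2 * sigma ** 2))
              / (SQRT2PI * sigma)
              for i in range(n)) * h
closed_form = 1.0 / (SQRT2PI * (lam ** 2 + sigma ** 2) ** 1.5)
```

The quadrature agrees with the closed form to high precision, and $\phi_{MCC}$ vanishes far from the origin, consistent with Remark 1.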
HF is another filter whose robustness is also proved with IF in [48]. Next, we will compare the MCCKF with HF.

3.3.2. Comparison with Huber Filter

In fact, the Huber-based filter (HF) estimates by minimizing the following cost function [54]:

$$J_{HF} = \min\sum_{i=1}^{N}\rho_{HF}\left(e_i\right),$$

where $\rho_{HF}(\cdot)$ is the designed robust function, which is bounded and convex. In other words, a Huber-based filter can resist non-Gaussian noises because the robust function does not increase rapidly as the estimation residual grows. The M-estimation function in Equation (74) has several forms; the Huber M-estimator [46] is a common one:

$$\rho_{Huber}\left(e_i\right) = \begin{cases} e_i^2/2, & |e_i|\le\delta,\\[2pt] \delta|e_i| - \dfrac{\delta^2}{2}, & |e_i|>\delta.\end{cases}$$

Its weighting function is

$$\psi_{Huber}\left(e_i\right) = \begin{cases} 1, & |e_i|\le\delta,\\[2pt] \delta/|e_i|, & |e_i|>\delta,\end{cases}$$
where $\delta$ is the tuning parameter. In a similar manner, Equation (46) can be seen as a new $\rho(\cdot)$ function in which the kernel size $\lambda$ plays a role similar to $\delta$.
Suppose only one measurement is adopted. Figure 2 shows the ψ M C C of the MCC in Equation (49) and ψ Huber in Equation (76) with different errors, in which λ = 1 and δ = 1 , respectively. The result is also presented in Ref. [46].
The weighting functions of MCC and Huber both decrease as the error grows, meaning that both methods reduce the weight of measurements containing large outliers. However, $\psi_{MCC}$ decays to zero faster than $\psi_{Huber}$, so Huber may admit more large outliers into the estimation process than MCC, even though their impact is reduced. In other words, MCC keeps the measurements with smaller errors and discards those with large outliers; it can almost completely remove inappropriate measurements from the estimation process and achieve better performance. That is, a Kalman filter under the MCC improves the robustness.
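The comparison can be reproduced numerically; in this sketch (our own, with $\lambda = \delta = 1$ as in Figure 2) we evaluate the relative weight each rule retains at a large residual:

```python
import math

def psi_mcc(e, lam):
    """MCC weighting function, Equation (50)."""
    return math.exp(-e * e / (2 * lam * lam)) / (math.sqrt(2 * math.pi) * lam ** 3)

def psi_huber(e, delta):
    """Huber weighting function, Equation (76)."""
    return 1.0 if abs(e) <= delta else delta / abs(e)

# relative weight retained at a large residual (normalized by the weight at e = 0)
mcc_ratio = psi_mcc(5.0, 1.0) / psi_mcc(0.0, 1.0)        # essentially zero
huber_ratio = psi_huber(5.0, 1.0) / psi_huber(0.0, 1.0)  # still 0.2
```

At a residual of five kernel widths, the Gaussian-kernel weight is negligible while the Huber weight only falls to 1/5, illustrating why MCC suppresses gross outliers more aggressively.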
From Equation (49), the kernel size λ plays an important role in the weighting function. The robustness is decided by the kernel size.

3.4. Kernel Size Selection

Due to the close relationship between M-estimation and correntropy, choosing an appropriate kernel size for the correntropy criterion becomes practical [54]. In the MCCKF algorithm, for the same input data, different kernel sizes lead to different filter results. If the kernel size is too large, the filtering degrades into linear regression; if it is too small, all the data look distinct and the system cannot generalize to unseen samples falling between the training points [55]. Therefore, the kernel size should be adaptively adjusted like the other parameters.
Not many quantities are accessible for this problem; however, the measurements, the dynamic system model and the noise parameters are. Making use of this information to adapt the kernel size is a new idea. In fact, Cinar et al. [56] proposed that the best kernel sizes lie in the range of the observation errors, and that a cost function taking the observation error as its argument together with a similar kernel size gives better results. Based on this idea, they heuristically chose the kernel size at each time instant to be the observation error, as shown below:

$$\lambda_k = \left\|\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right\|.$$

In this paper, inspired by Equation (77) and combined with the weighted least squares idea, we propose that the kernel size be adjusted as

$$\lambda_{0k} = \sqrt{\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right)^T\mathbf{R}_k^{-1}\left(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_k\right)}.$$
In fact, Equations (77) and (78) correspond to the Euclidean distance and the Mahalanobis distance, respectively. Equation (78), the novel kernel size selection method proposed in this article, can overcome the influence of the covariance thanks to the advantages of the Mahalanobis distance, and it is much more sensitive to small changes in the data.
To illustrate the effect of the kernel size, the weights are plotted in Figure 3 under different error values, for λ in Equation (77) and for λ₀ in Equation (78).
As Figure 3 shows, the weight under λ₀ decays to zero faster than that under λ, so λ₀ admits fewer outliers into the estimation, even though both reduce their impact. In other words, λ₀ keeps the measurements with small errors and removes more inappropriate measurements from the estimation process, which makes it the more suitable choice.

4. Simulation

In this section, we demonstrate the performance of the MCCKF in the presence of large outliers and Gaussian mixture noises. The estimation performance is compared using the following indices:
Residual_k = X_k − X̂_{k|k},  k = 1, …, N,
RMSE = √( (1/M) Σ_{m=1}^{M} (1/N) Σ_{k=1}^{N} ‖X_k − X̂_{k|k}‖² ),
where N is the total number of time steps in each Monte Carlo run and M is the total number of Monte Carlo runs.
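The two indices translate directly into code. A minimal sketch follows; the array shapes and function names are ours, not from the paper.

```python
import numpy as np

def residuals(truth, estimates):
    # Eq. (79): Residual_k = X_k - X_hat_{k|k} at every time step.
    return truth - estimates

def rmse(truth, estimates):
    # Eq. (80): average the squared estimation error over the N steps of each
    # of the M Monte Carlo runs, then take the square root.
    # truth, estimates: arrays of shape (M, N, n_states).
    err = truth - estimates
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=-1))))
```

Summing the squared error over the state dimension before averaging gives the squared norm ‖X_k − X̂_{k|k}‖² used in Equation (80).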

4.1. Simulation Conditions

In this section, simulation experiments are conducted to validate the ability of the proposed algorithm to suppress non-Gaussian noises. The α-jerk model results obtained by the Huber-based filter (HF), ensemble Kalman filter (EnKF), unscented Kalman filter (UKF), Gaussian sum filter (GSF) and maximum correntropy criterion Kalman filter (MCCKF) are compared.
The target tracking parameters are set as follows: the time interval is T = 1 s; M = 500 independent Monte Carlo runs are conducted; the measurement noise covariance is R = diag(2, 2, 2); the standard deviation of the input is σ_ε = 0.7; the standard deviation of the target jerk is σ_j = 0.2; and the state vector in three dimensions is
X = [x, ẋ, ẍ, x⃛, y, ẏ, ÿ, y⃛, z, ż, z̈, z⃛]ᵀ,
which contains the position, velocity, acceleration and jerk in each direction, respectively; the initial true state is
x₀ = 30000, ẋ₀ = 300, ẍ₀ = 1, x⃛₀ = 0; y₀ = 60000, ẏ₀ = 400, ÿ₀ = 1, y⃛₀ = 0; z₀ = 90000, ż₀ = 500, z̈₀ = 1, z⃛₀ = 0;
and the initial covariance matrix of the states is chosen as
P_{0|0} = diag(10000, 1000, 100, 10, 10000, 1000, 100, 10, 10000, 1000, 100, 10).
The initial values in three dimensions are α x = α y = α z = 0.5 .

4.2. The Kernel Size Adaptive Method

To choose the better kernel size adaptive method, a simulation is conducted in this section with Equations (77) and (78), respectively, under the same conditions as Section 4.1. Figure 4 shows the trajectories in a two-dimensional plane under the two kernel size selection methods.
As can be seen from Figure 4 and Figure 5, MCCKF with the new adaptive method achieves higher accuracy than the alternative most of the time.
Table 1, Table 2 and Table 3 show that the newly proposed kernel size selection method (λ₀) indeed achieves higher accuracy overall.

4.3. The Presence of Gaussian Noise

In this section, we apply the MCCKF to the α-jerk model in the presence of Gaussian noises to assess its performance. The simulation parameters and the initial state are the same as in Section 4.1.
To put the behavior of MCCKF in context, we compare it with the EnKF, UKF, GSF and HF algorithms for target tracking with the α-jerk model. Figure 6 shows the real target trajectory and the estimated trajectories of the five filters in a two-dimensional plane. Figure 7 shows the residual defined in Equation (79) for each filter. Table 4, Table 5 and Table 6 summarize the corresponding RMSE of every state variable for the five algorithms in the three directions.
As can be seen from Figure 6 and Figure 7, MCCKF does not have the best performance most of the time; combined with Table 4, Table 5 and Table 6, its performance under purely Gaussian noise may even be slightly worse than that of UKF.

4.4. The Presence of Large Outliers

In this section, we apply the MCCKF to the α-jerk model in the presence of large outliers to verify its robustness. The simulation parameters and the initial state are the same as in Section 4.1. Constant outliers of magnitude 10 are added from 251 s to 260 s and from 351 s to 360 s.
To demonstrate the advantages of MCCKF, we again compare it with the EnKF, UKF, GSF and HF algorithms for target tracking with the α-jerk model. Figure 8 shows the real target trajectory and the estimated trajectories of the five filters in a two-dimensional plane. Figure 9 shows the residual defined in Equation (79) for each filter. Table 7, Table 8 and Table 9 summarize the corresponding RMSE of every state variable for the five algorithms in the three directions.
As can be seen from Figure 8, MCCKF achieves higher accuracy most of the time. Figure 9 shows clearly that EnKF, UKF and GSF diverge during the outlier periods, whereas HF and MCCKF remain stable throughout. Table 7, Table 8 and Table 9 confirm that MCCKF outperforms the other filters, including HF.

4.5. The Presence of Gaussian Mixture Noises

In this section, we apply the MCCKF to the α-jerk model in the presence of Gaussian mixture noise to verify its advantages. The simulation parameters and the initial state are again the same as in Section 4.1. Gaussian mixture noise is added in the measurement process according to the following model:
V_k ∼ (1 − υ) N(μ₁, σ₁²) + υ N(μ₂, σ₂²),
where μ₁ = μ₂ = [0, 0, 0]ᵀ, σ₁² = R, σ₂² = diag(10, 10, 10) and υ = 0.01.
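Such mixture noise is straightforward to sample: draw a Bernoulli(υ) flag at each step and pick the mixture component accordingly. The sketch below uses the parameters above; the function and variable names are ours, not from the paper.

```python
import numpy as np

def sample_mixture_noise(n_steps, rng, upsilon=0.01):
    # V_k ~ (1 - upsilon) * N(0, R) + upsilon * N(0, R_outlier)
    R = np.diag([2.0, 2.0, 2.0])             # nominal measurement covariance (Section 4.1)
    R_outlier = np.diag([10.0, 10.0, 10.0])  # heavy-tailed mixture component
    noise = rng.multivariate_normal(np.zeros(3), R, size=n_steps)
    outlier = rng.random(n_steps) < upsilon  # Bernoulli(upsilon) component selector
    n_out = int(outlier.sum())
    if n_out:
        noise[outlier] = rng.multivariate_normal(np.zeros(3), R_outlier, size=n_out)
    return noise

rng = np.random.default_rng(42)
V = sample_mixture_noise(500, rng)
```

With υ = 0.01, roughly 1% of the measurement noise samples come from the wide-covariance component, producing the occasional heavy-tailed outliers the filter must reject.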
The accuracy of the MCCKF is compared with that of the EnKF, UKF, GSF and HF. Figure 10 shows the real target trajectory and the estimated trajectories of the five filters in a two-dimensional plane. Figure 11 shows the residual defined in Equation (79) for each filter. Table 10, Table 11 and Table 12 summarize the corresponding RMSE of every state variable for the five algorithms in the three directions.
As in the large-outlier case, Figure 10 and Figure 11 show that MCCKF achieves higher accuracy than the other methods most of the time, and Table 10, Table 11 and Table 12 confirm its better overall performance.

5. Conclusions

The α-jerk model, a modified jerk model, can estimate the parameter α in real time during target tracking. In the tracking process, measurement noise is a central concern: large outliers and Gaussian mixture noises are the most common non-Gaussian noises in the measurement process. The Maximum Correntropy Criterion Kalman Filter is derived via the weighted least squares algorithm and shown to be well founded in correntropy theory. Its robustness in state estimation is analyzed with the influence function for the first time, which proves that the filter can overcome the influence of non-Gaussian noises. Furthermore, a novel kernel size selection method is proposed that further improves the robustness of the filter. Simulation results demonstrate that the proposed filter effectively suppresses the non-Gaussian noises of the α-jerk model.

Acknowledgments

This work was supported in part by the National Defense Technology Foundation of China under Grant No. 3101065, the National Natural Science Foundation of China (NSFC) under Grant Nos. 61773021, 61703408, and 61573367 and the Student Research Innovation Scholarship of Hunan Province under Grant No. CX2014B010.

Author Contributions

Bowen Hou proposed the main idea, wrote the draft manuscript and presented the experiments; Zhangming He and Dong Li conceived of the experiments; Xuanying Zhou and Haiyin Zhou analyzed the data and drew the figures and tables; Jiongqi Wang carried out the simulations and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 2004, 39, 1333–1364.
  2. Yang, X.; Xing, K.; Feng, X. Maneuvering target tracking in dense clutter based on particle filtering. Chin. J. Aeronaut. 2011, 24, 171–180.
  3. Singer, R.A. Estimating optimal tracking filter performance for manned maneuvering targets. IEEE Trans. Aerosp. Electron. Syst. 1970, AES-6, 473–483.
  4. Bar-Shalom, Y.; Birmiwal, K. Variable dimension filter for maneuvering target tracking. IEEE Trans. Aerosp. Electron. Syst. 1982, AES-18, 621–629.
  5. Kumar, K.S.P.; Zhou, H. A "current" statistical model and adaptive algorithm for estimating maneuvering targets. J. Guidance Control Dyn. 1984, 7, 596–602.
  6. Mehrotra, K.; Mahapatra, P.R. A jerk model for tracking highly maneuvering targets. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 1094–1105.
  7. Luo, X.B. An α-jerk model for tracking maneuvering targets. Signal Process. 2007, 4, 481–484.
  8. Li, X.R.; Jilkov, V.P. A survey of maneuvering target tracking: Approximation techniques for nonlinear filtering. Proc. SPIE 2004, 5428.
  9. Roth, M.; Hendeby, G.; Fritsche, C.; Gustafsson, F. The ensemble Kalman filter: A signal processing perspective. EURASIP J. Adv. Signal Process. 2017, 2017, 56.
  10. Ammann, N.; Andert, F. Visual navigation for autonomous, precise and safe landing on celestial bodies using unscented Kalman filtering. In Proceedings of the 2017 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2017.
  11. Zhang, C.; Hwang, I. Gaussian sum-based maneuvering target tracking using unmanned aerial vehicle. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Grapevine, TX, USA, 9–13 January 2017; pp. 1–12.
  12. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference, Seattle, WA, USA, 21–23 June 1995; Volume 3, pp. 1628–1632.
  13. Uhlmann, J.K.; Julier, S.J. A new extension of the Kalman filter to nonlinear systems. Proc. SPIE 1997, 3068, 182–193.
  14. Maybeck, P.S. Stochastic Models, Estimation and Control, Volume 3; Academic Press: New York, NY, USA, 1979.
  15. Stein, D.W.J. Detection of random signals in Gaussian mixture noise. In Proceedings of the 1994 28th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 31 October–2 November 1994; Volume 2, pp. 791–795.
  16. Wu, W.R. Target tracking with glint noise. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 174–185.
  17. Plataniotis, K.N.; Androutsos, D.; Venetsanopoulos, A.N. Nonlinear filtering of non-Gaussian noise. J. Intell. Robot. Syst. 1997, 19, 207–231.
  18. Sorenson, H.W.; Alspach, D.L. Recursive Bayesian estimation using Gaussian sums. Automatica 1971, 7, 465–479.
  19. Harvey, A.; Luati, A. Filtering with heavy tails. J. Am. Stat. Assoc. 2014, 109, 1112–1122.
  20. Izanloo, R.; Fakoorian, S.A.; Yazdi, H.S.; Dan, S. Kalman filtering based on the maximum correntropy criterion in the presence of non-Gaussian noise. In Proceedings of the Information Science and Systems, Princeton, NJ, USA, 16–18 March 2016; pp. 500–505.
  21. Wan, E.A.; van der Merwe, R. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium, Lake Louise, AB, Canada, 4 October 2000; pp. 153–158.
  22. Curn, J.; Marinescu, D.; Lacey, G.; Cahill, V. Estimation with non-white Gaussian observation noise using a generalised ensemble Kalman filter. In Proceedings of the IEEE International Symposium on Robotic and Sensors Environments, Magdeburg, Germany, 16–18 November 2012; pp. 85–90.
  23. Kotecha, J.H.; Djuric, P.M. Gaussian sum particle filtering. IEEE Trans. Signal Process. 2003, 51, 2602–2612.
  24. Liu, Y.; Dong, K.; Wang, H.; Liu, J.; He, Y.; Pan, L. Adaptive Gaussian sum squared-root cubature Kalman filter with split-merge scheme for state estimation. Chin. J. Aeronaut. 2014, 27, 1242–1250.
  25. Yin, J.; Zhang, J.; Zhuang, Z. Gaussian sum PHD filtering algorithm for nonlinear non-Gaussian models. Chin. J. Aeronaut. 2008, 21, 341–351.
  26. Rousseeuw, P.J.; Leroy, A.M. Robust Regression and Outlier Detection. J. R. Stat. Soc. 1989, 152, 133–134.
  27. Wang, R.; Xiong, Z.; Liu, J.Y.; Li, R.; Peng, H. SINS/GPS/CNS information fusion system based on improved Huber filter with classified adaptive factors for high-speed UAVs. In Proceedings of the 2012 IEEE/ION Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, USA, 23–26 April 2012; pp. 441–446.
  28. Huang, J.; Yan, B.; Hu, S. Centralized fusion of unscented Kalman filter based on Huber robust method for nonlinear moving target tracking. Math. Probl. Eng. 2015, 2015, 291913.
  29. Chang, L.; Li, K.; Hu, B. Huber's M-estimation-based process uncertainty robust filter for integrated INS/GPS. IEEE Sens. J. 2015, 15, 3367–3374.
  30. Chien-Hao, T.; Lin, S.F.; Dah-Jing, J. Robust Huber-based cubature Kalman filter for GPS navigation processing. J. Navigat. 2016, 70, 527–546.
  31. Chang, G. Robust Kalman filtering based on Mahalanobis distance as outlier judging criterion. J. Geodesy 2014, 88, 391–401.
  32. Rodger, J.A. Toward reducing failure risk in an integrated vehicle health maintenance system: A fuzzy multi-sensor data fusion Kalman filter approach for IVHMS. Expert Syst. Appl. 2012, 39, 9821–9836.
  33. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
  34. Liu, X.; Qu, H.; Zhao, J.; Yue, P.; Wang, M. Maximum correntropy unscented Kalman filter for spacecraft relative state estimation. Sensors 2016, 16, 1530.
  35. Liu, X.; Chen, B.; Xu, B.; Wu, Z.; Honeine, P. Maximum correntropy unscented filter. Int. J. Syst. Sci. 2017, 48, 1607–1615.
  36. Xi, L.; Hua, Q.; Zhao, J.; Chen, B. Extended Kalman filter under maximum correntropy criterion. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 1733–1737.
  37. Wang, J.J.-Y.; Wang, Y.; Jing, B.-Y.; Gao, X. Regularized maximum correntropy machine. Neurocomputing 2015, 160, 85–92.
  38. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part V. Multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321.
  39. Lewis, F.L. Applied optimal control. IEEE Trans. Autom. Control 2003, 17, 186–188.
  40. Jiang, C.; Zhang, S.B.; Zhang, Q.Z. A new adaptive H-infinity filtering algorithm for the GPS/INS integrated navigation. Sensors 2016, 16, 21–27.
  41. Príncipe, J.C. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives; Springer: Berlin, Germany, 2010.
  42. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Príncipe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884.
  43. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Príncipe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727.
  44. Chen, B.; Príncipe, J.C. Maximum correntropy estimation is a smoothed MAP estimation. IEEE Signal Process. Lett. 2012, 19, 491–494.
  45. Liu, W.; Pokharel, P.P.; Príncipe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298.
  46. He, R.; Hu, B.; Yuan, X.; Wang, L. Robust Recognition via Information Theoretic Learning; Springer International Publishing: Berlin, Germany, 2014.
  47. Huber, P.J. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101.
  48. Chang, G.; Liu, M. M-estimator-based robust Kalman filter for systems with process modeling errors and rank deficient measurement models. Nonlinear Dyn. 2015, 80, 1431–1449.
  49. Chang, G.; Xu, T.; Wang, Q. M-estimator for the 3D symmetric Helmert coordinate transformation. J. Geodesy 2017, 2017, 1–12.
  50. He, Y.; Wang, F.; Yang, J.; Rong, H.; Chen, B. Kernel adaptive filtering under generalized maximum correntropy criterion. In Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 1738–1745.
  51. Wang, Y.; Zheng, W.; Sun, S.; Li, L. Robust information filter based on maximum correntropy criterion. J. Guidance Control Dyn. 2016, 39, 1–6.
  52. Huber, P.J. The 1972 Wald lecture. Robust statistics: A review. Ann. Math. Stat. 1972, 43, 1041–1067.
  53. Taylor, A.E. L'Hospital's rule. Am. Math. Mon. 1952, 59, 20–24.
  54. He, R.; Zheng, W.S.; Hu, B.G. Maximum correntropy criterion for robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1561–1576.
  55. Liu, W.; Príncipe, J.C.; Haykin, S.S. Kernel Adaptive Filtering: A Comprehensive Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2010.
  56. Cinar, G.T.; Príncipe, J.C. Hidden state estimation using the correntropy filter with fixed point update and adaptive kernel size. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, Australia, 10–15 June 2012.
Figure 1. Correntropy when λ = 1 .
Figure 2. ψ with different estimation errors in different weighting functions.
Figure 3. ψ with different kernel size selection methods (where λ 0 is the novel kernel size selection method).
Figure 4. Target trajectories of two kernel size adaptive methods on α -jerk model where λ 0 is the novel kernel size adaptive method.
Figure 5. Residual of estimated position with two kernel size adaptive methods where λ 0 is the novel kernel size adaptive method.
Figure 6. Target trajectories of different filter algorithms on the α -jerk model in the presence of Gaussian noises.
Figure 7. Residual of estimated position with different filter algorithms in the presence of Gaussian noises.
Figure 8. Target trajectories of different filter algorithms on α -jerk model in the presence of large outliers.
Figure 9. Residual of estimated position with different filter algorithms in the presence of large outliers.
Figure 10. Target trajectories of different filter algorithms on the α -jerk model in the presence of Gaussian mixture noises.
Figure 11. Residual of estimated position with different filter algorithms in the presence of Gaussian mixture noises.
Table 1. RMSE of every state variable in the x-direction of two different kernel size selection methods.

        Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
λ₀      1.2331         1.3251           0.7766                0.3512        0.2637
λ       1.2821         1.3590           0.7961                0.3531        0.2764
Table 2. RMSE of every state variable in the y-direction of two different kernel size selection methods.

        Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
λ₀      1.2325         1.2203           0.6863                0.3248        0.3232
λ       1.2538         1.2451           0.6905                0.32          0.3232
Table 3. RMSE of every state variable in the z-direction of two different kernel size selection methods.

        Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
λ₀      1.3477         1.3768           0.8037                0.3581        0.7747
λ       1.3747         1.3968           0.8256                0.3682        0.7936
Table 4. RMSE of every state variable in the x-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     7.0350         4.7926           2.3029                0.5543        1.7291
UKF      1.2892         1.3825           0.8940                0.4424        1.6125
GSF      1.5990         1.2220           0.8072                0.4202        1.6125
HF       1.2843         1.2220           0.8072                0.4202        1.6125
MCCKF    1.2901         1.2252           0.8076                0.4201        1.6125
Table 5. RMSE of every state variable in the y-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     3.2812         3.9866           2.1125                0.5019        3.2963
UKF      1.2856         1.5343           0.9418                0.4299        1.1582
GSF      1.5749         1.3181           0.8666                0.4245        1.1582
HF       1.2695         1.3181           0.8666                0.4245        1.1582
MCCKF    1.2695         1.3195           0.8685                0.4245        1.1582
Table 6. RMSE of every state variable in the z-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     4.5702         3.5353           1.9189                0.5649        1.9978
UKF      1.2178         1.4724           0.9319                0.4700        1.1227
GSF      1.5931         1.2563           0.8887                0.4738        1.1227
HF       1.2071         1.2562           0.8887                0.4738        1.1227
MCCKF    1.2125         1.2617           0.8907                0.4741        1.1227
Table 7. RMSE of every state variable in the x-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     6.9390         6.4846           3.7753                0.6530        1.3750
UKF      3.3104         1.7170           0.9083                0.4238        0.9707
GSF      3.0822         1.5156           0.8306                0.4258        0.9707
HF       2.3903         1.4708           1.7734                1.3180        1.0783
MCCKF    1.4252         1.3336           0.8153                0.4245        0.9707
Table 8. RMSE of every state variable in the y-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     3.7977         2.8730           1.7031                0.5630        5.5318
UKF      3.3085         1.6968           0.9586                0.4464        0.3219
GSF      3.5637         1.6479           0.8553                0.4259        0.3219
HF       2.2420         3.2366           1.6525                1.3194        0.8478
MCCKF    1.3119         1.2571           0.8020                0.4244        0.3219
Table 9. RMSE of every state variable in the z-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     11.6641        9.4672           4.7836                0.8071        2.8136
UKF      3.1840         1.6303           0.9613                0.4324        0.6726
GSF      2.8279         1.3632           0.8259                0.4192        0.6726
HF       2.2469         3.2751           1.6901                1.3161        0.9380
MCCKF    1.3696         1.2799           0.8209                0.4188        0.6726
Table 10. RMSE of every state variable in the x-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     6.4009         6.4574           3.3219                0.5756        2.0893
UKF      1.3171         1.5839           1.0426                0.4578        0.9669
GSF      1.6669         1.4316           0.9840                0.4498        0.9669
HF       1.5411         1.4907           0.9834                0.4703        0.9669
MCCKF    1.3080         1.4194           0.9714                0.4497        0.9669
Table 11. RMSE of every state variable in the y-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     5.6661         6.9973           3.8323                0.6798        1.6025
UKF      1.2481         1.4203           0.9216                0.4233        1.5972
GSF      1.6537         1.2715           0.8655                0.412         1.5972
HF       1.2348         1.2715           0.8655                0.4122        1.5972
MCCKF    1.0214         1.2607           0.8226                0.4117        1.5972
Table 12. RMSE of every state variable in the z-direction of EnKF, UKF, GSF, HF and MCCKF algorithms.

         Position (m)   Velocity (m/s)   Acceleration (m/s²)   Jerk (m/s³)   α
EnKF     2.4590         2.6990           1.5505                0.5314        3.0559
UKF      1.2806         1.5766           1.0119                0.4717        0.4726
GSF      1.5664         1.3625           0.9533                0.4678        0.4726
HF       1.2877         1.3625           0.9529                0.4678        0.4726
MCCKF    1.2548         1.3602           0.9134                0.4678        0.4726
