Article

Adaptive Maximum Correntropy Gaussian Filter Based on Variational Bayes

Guoqing Wang, Zhongxing Gao, Yonggang Zhang and Bin Ma
1 College of Automation, Harbin Engineering University, Harbin 150001, China
2 Chinese Ship Research and Design Center, Wuhan 430064, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(6), 1960; https://doi.org/10.3390/s18061960
Submission received: 20 May 2018 / Revised: 11 June 2018 / Accepted: 14 June 2018 / Published: 17 June 2018
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, we investigate the state estimation of systems with non-Gaussian measurement noise of unknown covariance. A novel improved Gaussian filter (GF) is proposed, in which the maximum correntropy criterion (MCC) is used to suppress the pollution of non-Gaussian measurement noise and the noise covariance is estimated online through the variational Bayes (VB) approximation. MCC and VB are integrated through a fixed-point iteration to modify the estimated measurement noise covariance. As a general framework, the proposed algorithm is applicable to both linear and nonlinear systems, with different rules being used to calculate the Gaussian integrals. Experimental results from a target tracking simulation example and a field test of an INS/DVL integrated navigation system show that the proposed algorithm achieves better estimation accuracy than related robust and adaptive algorithms.

1. Introduction

As the benchmark work in state estimation problems, the linear recursive Kalman filter (KF) has been applied in various applications, such as information fusion, system control, integrated navigation, target tracking, and GPS solutions [1,2,3,4]. It has since been extended to nonlinear systems through different ways of approximating the nonlinear functions or filtering distributions. Using the Taylor series to linearize the nonlinear functions, the popular extended Kalman filter (EKF) [5] is obtained. To further improve the estimation accuracy of the EKF, several sigma-point based nonlinear filters have been proposed in recent decades, such as the unscented Kalman filter (UKF) based on the unscented transform [6], the cubature Kalman filter (CKF) based on cubature rules [7], and the divided difference Kalman filter (DDKF) adopting polynomial approximations [8]. All these filters can be regarded as special cases of the Gaussian filter (GF) [9,10,11], where the noise distribution is assumed to be Gaussian.
However, when the measurements are polluted by non-Gaussian noise, such as impulsive interference or outliers, the GF gives degraded estimation results and may even break down [12,13]. Besides computationally intensive methods including the particle filter [14,15], Gaussian sum filter [16], and multiple model filters [17], robust filters, such as Huber's KF (HKF, also known as M-estimation) [18,19,20] and the H∞ filter [21], are also designed to handle contaminated measurements. Although the H∞ filter guarantees a bounded estimation error, it does not perform well under Gaussian noise [22]. Huber's M-estimation is a combined ℓ1 and ℓ2 norm filter that can effectively suppress non-Gaussian noise [18,19,20]. Recently, the information theoretic measure correntropy has been used to handle non-Gaussian noise [23,24,25,26,27,28]. According to the maximum correntropy criterion (MCC), a new robust filter known as the maximum correntropy Kalman filter (MCKF) has been proposed in [24], and it has also been extended to nonlinear systems using the EKF [27] and UKF [25,26,28]. Simulation results show that an MCC based GF (MCGF) may obtain better estimation accuracy than M-estimation when a proper kernel bandwidth is chosen [23,24,25,26]. Even so, both the MCGF and M-estimation still require knowledge of the nominal measurement noise covariance, which may be unknown or time varying in some applications. In this situation, the performance of the MCGF degrades, as shown in our simulation examples.
Traditionally, unknown noise statistics can be estimated by adaptive filters, such as the Sage–Husa filter [29] and the fading memory filter [30]. One drawback of these recursive adaptive filters is that the statistics estimated at the previous time instant influence the current estimation, which is not suitable for measurement noise whose statistics change frequently [31]. The recently proposed variational Bayes (VB) based adaptive filter avoids this limitation through the VB approximation, and VB based GFs (VBGFs) for both linear [32] and nonlinear systems [9,33] have been proposed.
In this paper, we propose an adaptive MCGF based on the VB approximation, which is especially useful for estimating the system state from measurements corrupted by non-Gaussian noise with unknown covariance. Typical applications include low-cost INS/GPS integrated navigation systems [32] and maneuvering target tracking [33]. To overcome the limitation of the MCGF under unknown, time varying measurement noise covariance, the VB method is utilized to improve the adaptivity of the MCGF within a fixed-point iteration framework. As will be demonstrated in our simulation results, the proposed method has better estimation accuracy than related algorithms. Furthermore, various filters can be obtained by using different ways to calculate the Gaussian integrals.
The rest of this paper is organized as follows. In Section 2, after briefly introducing the concept of correntropy, we give the general MCGF algorithm. In Section 3, we explain the main idea of the VB method and the procedure of embedding it into the MCGF to obtain the proposed adaptive MCGF. Section 4 gives the experimental results for a typical target tracking model and an INS/DVL integrated navigation system, comparing the proposed filter with several related algorithms. Conclusions are drawn in the final section.

2. Gaussian Filter Based on the Maximum Correntropy Criterion

2.1. Correntropy

As a kind of similarity measure, the correntropy of random variables X and Y is defined as [24,25,26]
V(X, Y) = E[\kappa(X, Y)] = \int \kappa(x, y) \, \mathrm{d}F_{X,Y}(x, y),
where $E[\cdot]$ denotes expectation, $F_{X,Y}(x,y)$ is the joint distribution function, and $\kappa(x,y)$ represents a Mercer kernel. The most popular choice, the Gaussian kernel, is given as follows:
\kappa(x, y) = G_\sigma(e) = \exp\left(-\frac{e^2}{2\sigma^2}\right),
where $e = x - y$, and $\sigma > 0$ is the kernel bandwidth.
Then, taking the Taylor series expansion of the Gaussian kernel, we obtain
V(X, Y) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n \sigma^{2n} n!} E\left[(X - Y)^{2n}\right].
Obviously, the correntropy contains all the even moments of $X - Y$ weighted by the kernel bandwidth $\sigma$, which enables it to capture higher-order information when applied in signal processing. In practice, we can use sampled data to estimate the correntropy, since the joint distribution function is usually unavailable.
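For readers who want to experiment with this estimator, a minimal NumPy sketch of the Gaussian-kernel sample correntropy is given below; the data and the injected outlier are purely illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel G_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def sample_correntropy(x, y, sigma):
    """Estimate V(X, Y) from paired samples by averaging the kernel of the errors."""
    e = np.asarray(x) - np.asarray(y)
    return float(np.mean(gaussian_kernel(e, sigma)))

# Illustrative data: the correntropy is barely affected by a single large outlier.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + 0.1 * rng.normal(size=1000)
y[0] += 50.0                         # inject one outlier
print(sample_correntropy(x, y, sigma=2.0))
```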

2.2. Maximum Correntropy Gaussian Filter

In this paper, we consider the following nonlinear system with additive noise:
x_i = f_{i-1}(x_{i-1}) + w_{i-1},
z_i = h_i(x_i) + v_i,
where $x_i \in \mathbb{R}^n$ is the system state at time $i$ and $z_i \in \mathbb{R}^m$ denotes the measurement. $f_i(\cdot)$ and $h_i(\cdot)$ represent the known nonlinear functions. In the standard GF, the process noise $w_i$ and the measurement noise $v_i$ are assumed to be zero-mean Gaussian noise sequences with known covariances $Q_i$ and $R_i$. The initial state $x_0$ has known mean $\bar{x}_0$ and covariance $P_0$.
In both the GF and the MCGF, the one-step prediction $\hat{x}_i^-$ and its covariance $P_i^-$ are obtained through:
\hat{x}_i^- = \int f_{i-1}(x_{i-1}) \, N(x_{i-1} \mid \hat{x}_{i-1}, P_{i-1}) \, \mathrm{d}x_{i-1},
P_i^- = \int \left(f_{i-1}(x_{i-1}) - \hat{x}_i^-\right)\left(f_{i-1}(x_{i-1}) - \hat{x}_i^-\right)^T N(x_{i-1} \mid \hat{x}_{i-1}, P_{i-1}) \, \mathrm{d}x_{i-1} + Q_{i-1}.
To further improve the robustness of the GF, the MCC is applied in the derivation of the measurement update of the MCGF. Consider the following regression model, built from the prediction above [25]:
\begin{bmatrix} \hat{x}_i^- \\ z_i \end{bmatrix} = \begin{bmatrix} x_i \\ h_i(x_i) \end{bmatrix} + \begin{bmatrix} \delta x_i \\ v_i \end{bmatrix},
where $\delta x_i = \hat{x}_i^- - x_i$. The covariance of $[\delta x_i^T \;\; v_i^T]^T$ is
\begin{bmatrix} P_i^- & 0 \\ 0 & R_i \end{bmatrix} = \begin{bmatrix} S_{p,i} S_{p,i}^T & 0 \\ 0 & S_{r,i} S_{r,i}^T \end{bmatrix} = S_i S_i^T.
Multiplying both sides of the regression model by $S_i^{-1}$, we obtain
e_i = D_i - W_i,
where $D_i = S_i^{-1} \begin{bmatrix} \hat{x}_i^- \\ z_i \end{bmatrix}$ and $W_i = S_i^{-1} \begin{bmatrix} x_i \\ h_i(x_i) \end{bmatrix}$.
Then, the optimal estimate $\hat{x}_i$ under the MCC can be obtained through the following optimization problem:
\hat{x}_i = \arg\max_{x_i} J_L = \arg\max_{x_i} \sum_{k=1}^{m+n} G_\sigma\left(e_i(k)\right),
where $e_i(k)$ is the $k$-th element of $e_i$.
This optimization problem can be solved by setting the derivative of the cost function to zero:
\frac{\partial J_L}{\partial x_i} = \left(\frac{\partial W_i}{\partial x_i}\right)^T C_i \left(D_i - W_i\right) = 0,
where
C_i = \begin{bmatrix} \tilde{C}_{x,i} & 0 \\ 0 & \tilde{C}_{y,i} \end{bmatrix},
\tilde{C}_{x,i} = \mathrm{diag}\left(G_\sigma(e_i(1)), \ldots, G_\sigma(e_i(n))\right),
\tilde{C}_{y,i} = \mathrm{diag}\left(G_\sigma(e_i(n+1)), \ldots, G_\sigma(e_i(n+m))\right).
Note that $\mathrm{diag}(\cdot)$ is used to denote a diagonal matrix.
Based on the above equation and using the prediction $\hat{x}_i^-$ to replace the $x_i$ contained in $e_i$, the MCGF can be written in a similar way to the GF except for a modified measurement noise covariance [25]:
\tilde{R}_i = S_{r,i} \tilde{C}_{y,i}^{-1} S_{r,i}^T.
Then, $\hat{x}_i$ and its covariance $P_i$ can be obtained through
\hat{x}_i = \hat{x}_i^- + K_i (z_i - \hat{z}_i),
P_i = P_i^- - K_i P_{zz} K_i^T,
where
K_i = P_{xz} P_{zz}^{-1},
\hat{z}_i = \int h_i(x_i) \, N(x_i \mid \hat{x}_i^-, P_i^-) \, \mathrm{d}x_i,
P_{zz} = \int \left(h_i(x_i) - \hat{z}_i\right)\left(h_i(x_i) - \hat{z}_i\right)^T N(x_i \mid \hat{x}_i^-, P_i^-) \, \mathrm{d}x_i + \tilde{R}_i,
P_{xz} = \int \left(x_i - \hat{x}_i^-\right)\left(h_i(x_i) - \hat{z}_i\right)^T N(x_i \mid \hat{x}_i^-, P_i^-) \, \mathrm{d}x_i.
We can easily see that the main difference between the MCGF and the GF is the modified measurement noise covariance, and the MCGF shows excellent estimation performance when the measurements are polluted by outliers or shot noise [24,25,26]. However, it still requires knowledge of the measurement noise covariance. When the covariance changes over time (so that the true covariance differs from the assumed one), the MCGF does not perform well. Therefore, we adopt an adaptive method to further improve the performance of the MCGF in this case.
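To illustrate how this modification reacts to outliers, the sketch below computes $\tilde{R}_i$ from a nominal covariance and an innovation vector; the function name and the numbers are our own, and the residual is evaluated at the predicted measurement, as is done later in the fixed-point iteration.

```python
import numpy as np

def mcc_modified_R(R, innovation, sigma):
    """Inflate the measurement noise covariance R according to the MCC.

    Entries of the normalized residual with large magnitude receive small Gaussian
    kernel weights, which enlarges the corresponding part of R and therefore
    reduces the Kalman gain for suspected outliers.
    """
    S_r = np.linalg.cholesky(R)                        # R = S_r S_r^T
    e_r = np.linalg.solve(S_r, innovation)             # normalized residual
    C_y = np.diag(np.exp(-e_r**2 / (2.0 * sigma**2)))  # kernel weights
    return S_r @ np.linalg.inv(C_y) @ S_r.T            # modified covariance R_tilde

# Illustrative numbers: a large bearing innovation inflates the second diagonal entry.
R = np.diag([0.04, 0.000225])
print(mcc_modified_R(R, innovation=np.array([0.1, 0.5]), sigma=2.0))
```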

3. Variational Bayesian Maximum Correntropy Gaussian Filter

The main idea of state estimation is to obtain the posterior probability density function $p(x_i \mid z_{1:i})$. In the GF, it is obtained through the Gaussian approximation $p(x_i \mid z_{1:i}) \approx N(x_i \mid \hat{x}_i, P_i)$. However, if the measurement noise covariance $R_i$ is unavailable, we need to estimate the joint posterior distribution $p(x_i, R_i \mid z_{1:i})$. This distribution can be approximated by the free-form VB approximation [9,32]:
p(x_i, R_i \mid z_{1:i}) \approx Q(x_i) \, Q(R_i),
where $Q(x_i)$ and $Q(R_i)$ are unknown approximating densities, which can be calculated by minimizing the Kullback–Leibler (KL) divergence between the true distribution and its approximation [9,32]:
Q(x_i) \propto \exp\left( \int \log p(z_i, x_i, R_i \mid z_{1:i-1}) \, Q(R_i) \, \mathrm{d}R_i \right),
Q(R_i) \propto \exp\left( \int \log p(z_i, x_i, R_i \mid z_{1:i-1}) \, Q(x_i) \, \mathrm{d}x_i \right).
According to the VB method, $p(x_i, R_i \mid z_{1:i})$ can be approximated as the product of a Gaussian distribution and an inverse Wishart (IW) distribution [9,32]:
p(x_i, R_i \mid z_{1:i}) \approx N(x_i \mid \hat{x}_i, P_i) \, \mathrm{IW}(R_i \mid v_i, V_i),
where
N(x_i \mid \hat{x}_i, P_i) \propto |P_i|^{-1/2} \exp\left( -\tfrac{1}{2} (x_i - \hat{x}_i)^T P_i^{-1} (x_i - \hat{x}_i) \right),
\mathrm{IW}(R_i \mid v_i, V_i) \propto |R_i|^{-(v_i + n + 1)/2} \exp\left( -\tfrac{1}{2} \mathrm{tr}\left(V_i R_i^{-1}\right) \right),
where $\mathrm{tr}(\cdot)$ is the trace of a matrix, and $v_i$ and $V_i$ are the degrees of freedom parameter and the inverse scale matrix, respectively.
The integrals in the expressions for $Q(x_i)$ and $Q(R_i)$ can be computed as follows [9,32]:
\int \log p(z_i, x_i, R_i \mid z_{1:i-1}) \, Q(R_i) \, \mathrm{d}R_i = -\tfrac{1}{2} (z_i - h_i(x_i))^T \langle R_i^{-1} \rangle_R (z_i - h_i(x_i)) - \tfrac{1}{2} (x_i - \hat{x}_i^-)^T (P_i^-)^{-1} (x_i - \hat{x}_i^-) + C_1,
\int \log p(z_i, x_i, R_i \mid z_{1:i-1}) \, Q(x_i) \, \mathrm{d}x_i = -\tfrac{1}{2} (v_i^- + n + 2) \log |R_i| - \tfrac{1}{2} \mathrm{tr}\left(V_i^- R_i^{-1}\right) - \tfrac{1}{2} \left\langle (z_i - h_i(x_i))^T R_i^{-1} (z_i - h_i(x_i)) \right\rangle_x + C_2,
where $\langle \cdot \rangle_R = \int (\cdot) \, Q(R_i) \, \mathrm{d}R_i$, $\langle \cdot \rangle_x = \int (\cdot) \, Q(x_i) \, \mathrm{d}x_i$, and $C_1$, $C_2$ are constants. Since $Q(R_i) = \mathrm{IW}(R_i \mid v_i, V_i)$, we obtain
\langle R_i^{-1} \rangle_R = (v_i - n - 1) V_i^{-1}.
Besides this, the expectation can be rewritten as
\left\langle (z_i - h_i(x_i))^T R_i^{-1} (z_i - h_i(x_i)) \right\rangle_x = \mathrm{tr}\left\{ \left\langle (z_i - h_i(x_i))(z_i - h_i(x_i))^T \right\rangle_x R_i^{-1} \right\}.
Substituting these two expectations into the above integrals, and matching the parameters of the Gaussian and IW densities, we obtain the following results:
v_i = v_i^- + 1,
T_i = \int \left(h_i(x_i) - \hat{z}_i\right)\left(h_i(x_i) - \hat{z}_i\right)^T N(x_i \mid \hat{x}_i^-, P_i^-) \, \mathrm{d}x_i + (v_i - n - 1)^{-1} V_i,
K_i = P_{xz} T_i^{-1},
\hat{x}_i = \hat{x}_i^- + K_i (z_i - \hat{z}_i),
P_i = P_i^- - K_i T_i K_i^T,
V_i = V_i^- + \int \left(z_i - h_i(x_i)\right)\left(z_i - h_i(x_i)\right)^T N(x_i \mid \hat{x}_i, P_i) \, \mathrm{d}x_i,
where $(v_i - n - 1)^{-1} V_i$ is the estimated measurement noise covariance.
The VB based GF works well for unknown measurement noise covariance. However, when the measurements contain outliers or shot noise, its estimates degrade, as will be shown in our simulation results. To overcome the shortcomings of the MCGF and the VBGF, we combine the advantages of VB and MCC through the fixed-point iteration method, and design the so-called VBMCGF algorithm, which is summarized as follows:
Step 1: Predict.
$\hat{x}_i^-$ and $P_i^-$ are obtained through the GF prediction equations of Section 2.2, and
v_i^- = \rho \left( v_{i-1} - n - 1 \right) + n + 1,
V_i^- = B V_{i-1} B^T,
where $0 < \rho \le 1$ is a forgetting factor; a reasonable choice is $B = \sqrt{\rho}\, I$.
Step 2: Update.
First, set $\hat{x}_i^{(1)} = \hat{x}_i^-$, $P_i^{(1)} = P_i^-$, $v_i = 1 + v_i^-$, and $V_i^{(1)} = V_i^-$. Calculate $\hat{z}_i$ and $P_{xz}$ using the Gaussian integrals given in Section 2.2.
For $j = 1, \ldots, N$, iterate the following equations:
R_i^{(j)} = S_{r,i}^{(j)} \left(S_{r,i}^{(j)}\right)^T = (v_i - n - 1)^{-1} V_i^{(j)},
e_{r,i}^{(j)} = \left(S_{r,i}^{(j)}\right)^{-1} (z_i - \hat{z}_i),
\tilde{C}_{r,i}^{(j)} = \mathrm{diag}\left( G_\sigma\left(e_{r,i}^{(j)}(1)\right), \ldots, G_\sigma\left(e_{r,i}^{(j)}(m)\right) \right),
\tilde{R}_i^{(j)} = S_{r,i}^{(j)} \left(\tilde{C}_{r,i}^{(j)}\right)^{-1} \left(S_{r,i}^{(j)}\right)^T,
T_i^{(j+1)} = \int \left(h_i(x_i) - \hat{z}_i\right)\left(h_i(x_i) - \hat{z}_i\right)^T N(x_i \mid \hat{x}_i^-, P_i^-) \, \mathrm{d}x_i + \tilde{R}_i^{(j)},
K_i^{(j+1)} = P_{xz} \left(T_i^{(j+1)}\right)^{-1},
\hat{x}_i^{(j+1)} = \hat{x}_i^- + K_i^{(j+1)} (z_i - \hat{z}_i),
P_i^{(j+1)} = P_i^- - K_i^{(j+1)} T_i^{(j+1)} \left(K_i^{(j+1)}\right)^T,
V_i^{(j+1)} = V_i^- + \int \left(z_i - h_i(x_i)\right)\left(z_i - h_i(x_i)\right)^T N\left(x_i \mid \hat{x}_i^{(j+1)}, P_i^{(j+1)}\right) \mathrm{d}x_i.
End For. Finally, set $V_i = V_i^{(N+1)}$, $\hat{x}_i = \hat{x}_i^{(N+1)}$, and $P_i = P_i^{(N+1)}$.
The main difference between the proposed VBMCGF and existing GFs lies in the modified measurement noise covariance $\tilde{R}_i^{(j)}$, where VB iterations are used to estimate its value and the MCC is used to modify it in the presence of non-Gaussian noise. The kernel bandwidth $\sigma$ plays an important role in reducing the effect of non-Gaussian noise or outliers. A smaller $\sigma$ makes the filter less sensitive to outliers, but it may affect the convergence performance. In addition, a too large $\sigma$ may cause the VBMCGF to behave like the VBGF (it can be proved that, as $\sigma \to \infty$, the proposed VBMCGF reduces to the VBGF). One possible way to select it is trial and error [24,25,26]. Another important issue is the number of fixed-point iterations; in fact, only a few iterations (e.g., 2 or 3) are enough [31,32].
As a general framework, our filter can be easily implemented according to the actual requirements. For linear systems described by $x_i = F_{i-1} x_{i-1} + w_{i-1}$ and $z_i = H_i x_i + v_i$, the prediction update in the resulting VBMCKF is the same as in the KF:
\hat{x}_i^- = F_{i-1} \hat{x}_{i-1},
P_i^- = F_{i-1} P_{i-1} F_{i-1}^T + Q_{i-1}.
In addition, the $\hat{z}_i$, $P_{xz}$, $T_i^{(j+1)}$, and $V_i^{(j+1)}$ appearing in the VBMCKF reduce to the following expressions:
\hat{z}_i = H_i \hat{x}_i^-,
P_{xz} = P_i^- H_i^T,
T_i^{(j+1)} = H_i P_i^- H_i^T + \tilde{R}_i^{(j)},
V_i^{(j+1)} = V_i^- + H_i P_i^{(j+1)} H_i^T + \left(z_i - H_i \hat{x}_i^{(j+1)}\right)\left(z_i - H_i \hat{x}_i^{(j+1)}\right)^T,
while other steps are the same as the general framework.
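To make this linear special case concrete, the following NumPy sketch implements one prediction/update step of the VBMCKF as summarized in Steps 1 and 2 above. It is a simplified illustration rather than the authors' implementation: the variable names are ours, the inverse Wishart terms use the measurement dimension $m$, and the forgetting step assumes $B = \sqrt{\rho}\, I$, so that $V_i^- = \rho V_{i-1}$.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Elementwise Gaussian kernel exp(-e^2 / (2 sigma^2))."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def vbmckf_step(x, P, v, V, z, F, H, Q, sigma=8.0, rho=0.8, n_iter=3):
    """One prediction/update step of the linear VBMCKF (illustrative sketch).

    x, P : previous state estimate and covariance
    v, V : previous inverse Wishart parameters of the measurement noise covariance
    z    : current measurement;  F, H, Q : system matrices
    sigma: MCC kernel bandwidth; rho: VB forgetting factor; n_iter: fixed-point iterations
    """
    m = H.shape[0]
    # Step 1: prediction of the state and of the inverse Wishart parameters.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    v_pred = rho * (v - m - 1.0) + m + 1.0
    V_pred = rho * V                      # B = sqrt(rho) * I  =>  B V B^T = rho * V
    # Step 2: fixed-point iterations coupling the VB update and the MCC modification.
    z_pred = H @ x_pred
    Pxz = P_pred @ H.T
    v_new = v_pred + 1.0                  # assumed to satisfy v_new > m + 1
    V_j, x_j, P_j = V_pred.copy(), x_pred.copy(), P_pred.copy()
    for _ in range(n_iter):
        R_j = V_j / (v_new - m - 1.0)                 # current estimate of R from the IW mean
        S_r = np.linalg.cholesky(R_j)
        e_r = np.linalg.solve(S_r, z - z_pred)        # normalized innovation
        C_r = np.diag(gaussian_kernel(e_r, sigma))    # MCC kernel weights
        R_tilde = S_r @ np.linalg.inv(C_r) @ S_r.T    # MCC-modified covariance
        T = H @ P_pred @ H.T + R_tilde
        K = Pxz @ np.linalg.inv(T)
        x_j = x_pred + K @ (z - z_pred)
        P_j = P_pred - K @ T @ K.T
        resid = z - H @ x_j
        V_j = V_pred + H @ P_j @ H.T + np.outer(resid, resid)
    return x_j, P_j, v_new, V_j

# Hypothetical usage with a 1D constant-velocity model and a scalar measurement.
F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); x = np.zeros(2); P = np.eye(2)
v, V = 5.0, np.array([[0.04]])
x, P, v, V = vbmckf_step(x, P, v, V, z=np.array([0.3]), F=F, H=H, Q=Q)
```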
When it comes to nonlinear systems, the Gaussian integrals contained in $\hat{x}_i^-$, $P_i^-$, $\hat{z}_i$, $P_{xz}$, $T_i^{(j+1)}$, and $V_i^{(j+1)}$ can be calculated using the Taylor series, the unscented transform, or cubature rules, and the corresponding filters are called VBMCEKF, VBMCUKF, and VBMCCKF, respectively.
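For example, with the third-degree spherical-radial cubature rule the Gaussian integrals are approximated by averaging over 2n deterministically placed points. The helper below is our own minimal sketch (not the paper's code); the range/bearing function at the end is a hypothetical example.

```python
import numpy as np

def cubature_points(x, P):
    """2n equally weighted cubature points of N(x, P) (third-degree spherical-radial rule)."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # shape (n, 2n)
    return x[:, None] + S @ offsets                              # one point per column

def gaussian_integral(func, x, P):
    """Approximate E[func(X)] for X ~ N(x, P) by averaging func over the cubature points."""
    pts = cubature_points(x, P)
    vals = np.array([func(pts[:, k]) for k in range(pts.shape[1])])
    return vals.mean(axis=0)

# Hypothetical example: predicted range/bearing measurement for a 4-dimensional state.
h = lambda s: np.array([np.hypot(s[0], s[2]), np.arctan2(s[2], s[0])])
z_hat = gaussian_integral(h, x=np.array([40.0, 3.0, 10.0, 1.0]),
                          P=np.diag([4.0, 0.01, 4.0, 0.01]))
```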

4. Experimental Results

4.1. Simulation Results of the Target Tracking Model

To illustrate the performance of the proposed algorithm, we first give simulation results for a typical target tracking model, where cubature rules are used to calculate the Gaussian integrals. We compare the estimation accuracy of seven filters under various kinds of measurement noise: CKF [7], MCCKF-1 [26], MCCKF-2 [25], VBCKF [9], HCKF [18], VBHCKF (which adopts Huber's function), and the proposed VBMCCKF. The target tracking example is modeled as [2]:
x_i = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_{i-1} + \begin{bmatrix} T^2/2 & 0 \\ T & 0 \\ 0 & T^2/2 \\ 0 & T \end{bmatrix} w_{i-1},
z_i = \begin{bmatrix} \sqrt{\left(\xi_{x,i} - \xi_0\right)^2 + \left(\eta_{y,i} - \eta_0\right)^2} \\ \arctan\dfrac{\eta_{y,i} - \eta_0}{\xi_{x,i} - \xi_0} \end{bmatrix} + v_i,
where $x_i = [\xi_{x,i} \;\; \dot{\xi}_{x,i} \;\; \eta_{y,i} \;\; \dot{\eta}_{y,i}]^T$ is the system state, with $(\xi_{x,i}, \eta_{y,i})$ and $(\dot{\xi}_{x,i}, \dot{\eta}_{y,i})$ being the position and velocity in the x and y directions. The position of the radar is set as $(\xi_0, \eta_0) = (-100\ \mathrm{m}, -100\ \mathrm{m})$. We set the sampling period $T = 0.1\ \mathrm{s}$ and $Q_i = \mathrm{diag}(0.04\ \mathrm{m^2\,s^{-3}}, 0.04\ \mathrm{m^2\,s^{-3}})$. The initial state is $\hat{x}_0 = \bar{x}_0 = [40\ \mathrm{m} \;\; 3\ \mathrm{m\,s^{-1}} \;\; 10\ \mathrm{m} \;\; 1\ \mathrm{m\,s^{-1}}]^T$ and the initial covariance is $P_0 = \mathrm{diag}(4\ \mathrm{m^2}, 0.01\ \mathrm{m^2\,s^{-2}}, 4\ \mathrm{m^2}, 0.01\ \mathrm{m^2\,s^{-2}})$. The root mean square error (RMSE) and average RMSE (ARMSE) in position or velocity are used to describe the estimation accuracy; for example, the RMSE and ARMSE in position are defined as [2]:
\mathrm{RMSE}_{\mathrm{pos}}(i) = \sqrt{ \frac{1}{M} \sum_{c=1}^{M} \left( \left(\xi_{x,i}^c - \hat{\xi}_{x,i}^c\right)^2 + \left(\eta_{y,i}^c - \hat{\eta}_{y,i}^c\right)^2 \right) },
\mathrm{ARMSE}_{\mathrm{pos}} = \frac{1}{L} \sum_{i=1}^{L} \mathrm{RMSE}_{\mathrm{pos}}(i),
where $(\xi_{x,i}^c, \eta_{y,i}^c)$ and $(\hat{\xi}_{x,i}^c, \hat{\eta}_{y,i}^c)$ are the true and estimated positions in the $c$-th Monte Carlo run, respectively. The RMSE and ARMSE of velocity are defined similarly.
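As a small illustration of how these metrics can be evaluated from stored Monte Carlo runs (the array shapes below are our assumption about how the trajectories are stored):

```python
import numpy as np

def position_rmse(true_xy, est_xy):
    """Position RMSE at every time step over M Monte Carlo runs.

    true_xy, est_xy : arrays of shape (M, L, 2) with the true and estimated
                      (x, y) positions of M runs over L time steps.
    """
    sq_err = np.sum((true_xy - est_xy) ** 2, axis=2)   # shape (M, L)
    return np.sqrt(np.mean(sq_err, axis=0))            # shape (L,)

def armse(rmse):
    """Average RMSE over the whole trajectory."""
    return float(np.mean(rmse))
```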
We consider the following five kinds of measurement noise (a sampling sketch for Case E is given after the list):
Case A: Gaussian distribution,
v_i \sim N\left(0, \mathrm{diag}\left[(0.2\ \mathrm{m})^2, (0.015\ \mathrm{rad})^2\right]\right);
Case B: Time varying measurement noise covariance,
v_i \sim N\left(0, \alpha_i^2 \, \mathrm{diag}\left[(0.2\ \mathrm{m})^2, (0.015\ \mathrm{rad})^2\right]\right);
Case C: Gaussian mixture noise with time varying measurement noise covariance,
v_i \sim 0.8\, N\left(0, \alpha_i^2 \, \mathrm{diag}\left[(0.2\ \mathrm{m})^2, (0.015\ \mathrm{rad})^2\right]\right) + 0.2\, N\left(0, \mathrm{diag}\left[(5\ \mathrm{m})^2, (0.75\ \mathrm{rad})^2\right]\right);
Case D: Time varying measurement noise covariance and shot noise,
v_i \sim [\beta_i, \gamma_i]^T + N\left(0, \alpha_i^2 \, \mathrm{diag}\left[(0.2\ \mathrm{m})^2, (0.015\ \mathrm{rad})^2\right]\right);
Case E: Gaussian mixture noise with time varying measurement noise covariance and shot noise,
v_i \sim [\beta_i, \gamma_i]^T + 0.8\, N\left(0, \alpha_i^2 \, \mathrm{diag}\left[(0.2\ \mathrm{m})^2, (0.015\ \mathrm{rad})^2\right]\right) + 0.2\, N\left(0, \mathrm{diag}\left[(5\ \mathrm{m})^2, (0.75\ \mathrm{rad})^2\right]\right);
where the parameters $\alpha_i$, $\beta_i$, and $\gamma_i$ are given in Figure 1.
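To make the mixture and shot-noise construction explicit, the following sketch draws one sample of $v_i$ for Case E; the values of $\alpha_i$, $\beta_i$, and $\gamma_i$ are placeholders, since the actual profiles are those of Figure 1.

```python
import numpy as np

def sample_case_e(alpha, beta, gamma, rng):
    """Draw one measurement noise sample v_i for Case E.

    With probability 0.8 the Gaussian component with time varying covariance is
    used, otherwise the heavy-tailed component; the shot noise [beta, gamma]^T is
    added on top of the drawn sample.
    """
    if rng.random() < 0.8:
        cov = alpha**2 * np.diag([0.2**2, 0.015**2])
    else:
        cov = np.diag([5.0**2, 0.75**2])
    return np.array([beta, gamma]) + rng.multivariate_normal(np.zeros(2), cov)

rng = np.random.default_rng(1)
v_i = sample_case_e(alpha=2.0, beta=0.0, gamma=0.0, rng=rng)   # placeholder parameters
```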
In these simulations, we use σ = 8 for MCC based algorithms, the commonly used threshold h = 1.345 for Huber’s function, and we set ρ = 0.8 and N = 3 for VB approximations. We perform L = 100 steps and run M = 100 Monte Carlo experiments for each case. The simulation results are plotted in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
Under Gaussian measurement noise with known covariance, as shown in Figure 2, both MCCKF and HCKF have estimation accuracy similar to that of CKF, since they reduce to CKF for suitable choices of their free parameters (e.g., as σ and h tend to infinity). VBCKF and VBMCCKF work slightly worse than CKF because they use only their online estimated measurement noise covariance instead of the true one. In particular, VBHCKF has the worst performance, since the commonly used parameter h = 1.345 for Huber's function does not fit the Gaussian noise situation when combined with the inaccurate online estimated measurement noise covariance. The proposed VBMCCKF works well with the same kernel bandwidth under both Gaussian and non-Gaussian noise, as will be shown in the following cases.
Figure 3, Figure 4, Figure 5 and Figure 6 show the estimation performance of the different algorithms under Cases B–E. It can be clearly seen that CKF has the worst estimation accuracy, since it requires the measurement noise to be Gaussian with known covariance, which is violated in these situations. MCCKF-1 and MCCKF-2 have similar estimation performance, but are slightly worse than HCKF with this kernel bandwidth. As demonstrated in [23,24,25,26,27], MCCKF is able to obtain better estimation accuracy than HCKF with a suitable σ. The estimation results of MCCKF and HCKF do not change much when Gaussian mixture noise or shot noise is added, since they are robust filters. VBHCKF has better estimation results in velocity but worse accuracy in position than HCKF. VBCKF performs much better in Cases B and C, as it is able to estimate the time varying measurement noise covariance online. However, its performance degrades once shot noise is injected in Cases D and E. Among these algorithms, our VBMCCKF has the best estimation accuracy under Cases B–E. It shows adaptivity to an unknown, changing measurement noise covariance and robustness to Gaussian mixture noise and shot noise. Its estimation results are also much better than those of VBHCKF, since the MCC can capture higher-order information than Huber's function. The ARMSEs of these filters under the different noises are also given in Table 1 to clearly show the differences.

4.2. Field Results of Integrated Navigation

To further illustrate the effectiveness of the proposed algorithm, we compare our algorithm and related existing methods using real data collected by a self-developed fiber optic gyroscope inertial navigation system (INS) together with a Doppler velocity log (DVL). The integrated navigation results of a photonic inertial navigation system (PHINS) and GPS are used as the reference. We adopt the loosely coupled method to fuse the information of the INS and the DVL. The state vector is chosen as $x = [\delta L \;\; \delta\lambda \;\; \delta V_E \;\; \delta V_N \;\; \varphi_x \;\; \varphi_y \;\; \varphi_z \;\; \nabla_x \;\; \nabla_y \;\; \nabla_z \;\; \varepsilon_x \;\; \varepsilon_y \;\; \varepsilon_z]^T$, where $\delta L$ and $\delta\lambda$ are the latitude and longitude errors, and $\{\delta V_j, \varphi_j, \nabla_j, \varepsilon_j\}$ are the velocity errors, attitude errors, accelerometer biases, and gyroscope constant drifts, respectively. The subscript $j$ ranges over $\{E, N, x, y, z\}$, where $E$ and $N$ denote the east and north directions in the local-level frame, and $x$, $y$, and $z$ are the three axes of the body frame. The continuous-time system model is then given as follows:
\dot{x}(t) = A(t) x(t) + B(t) w(t),
where $t$ denotes continuous time and $w(t) = [0_{1\times 2} \;\; w_{ax} \;\; w_{ay} \;\; w_{gx} \;\; w_{gy} \;\; w_{gz} \;\; 0_{1\times 5}]^T$ is the process noise, which contains the Gaussian noise of both the accelerometers and the gyroscopes. The detailed elements of the matrices $A(t)$ and $B(t)$ can be found in [34]. The measurement equation is
z(t) = H x(t) + v(t),
where $z(t) = [(V_E^{INS} - V_E^{DVL}) \;\; (V_N^{INS} - V_N^{DVL})]^T$, $H = [0_{2\times 2} \;\; I_{2\times 2} \;\; 0_{2\times 8}]$, and $v(t) = [\delta V_E^{DVL} \;\; \delta V_N^{DVL}]^T$ is the measurement noise arising from the DVL velocity errors. The discretization is then performed before running the filtering algorithms.
We compare the estimation results of KF, VBKF, HKF, MCKF, and VBMCKF, where we choose $h = 1.345$, $\sigma = 4$, $\rho = 0.96$, and $N = 3$. We set $\hat{x}_0 = [0_{1\times 12}]^T$, and the initial covariance is $P_0 = \mathrm{diag}((500/R_e)\ \mathrm{rad}, (500/R_e)\ \mathrm{rad}, 0.5^\circ, 0.5^\circ, 3^\circ, 1\times 10^{-4}\ g, 1\times 10^{-4}\ g, 0.01^\circ/\mathrm{h}, 0.01^\circ/\mathrm{h}, 0.01^\circ/\mathrm{h})$, where $R_e$ is the radius of the Earth and $g$ is the gravitational acceleration. The process noise covariance and measurement noise covariance are set as $Q_i = \mathrm{diag}(0, 0, 1\times 10^{-3}\ g, 1\times 10^{-3}\ g, 0.025^\circ/\mathrm{h}, 0.025^\circ/\mathrm{h}, 0.025^\circ/\mathrm{h}, 0, 0, 0, 0, 0)$ and $R_i = \mathrm{diag}(0.1\ \mathrm{m/s}, 0.1\ \mathrm{m/s})$ according to the parameters of the INS and the DVL.
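The discretization can be carried out, for example, with a first-order approximation over the sampling interval; the sketch below shows one common choice and is not necessarily the exact scheme used in the field test (the matrices and the step length are placeholders).

```python
import numpy as np

def discretize(A, B, Qc, dt):
    """First-order discretization of x_dot = A x + B w over a step of length dt.

    Returns the discrete transition matrix F and an approximate discrete process
    noise covariance Q, where Qc is the covariance of the continuous noise w.
    """
    n = A.shape[0]
    F = np.eye(n) + A * dt          # first-order approximation of expm(A * dt)
    Q = B @ Qc @ B.T * dt           # first-order mapping of the continuous noise
    return F, Q
```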
In this experiment, the system was first tested at anchor for about 50 min, and then the ship started to move. The real velocities of the ship, provided by the commercial INS/GPS integrated navigation system, are shown in Figure 7. The collected data are processed using MATLAB (R2014a, MathWorks, Inc., Natick, MA, USA) on a computer with a 2.50 GHz Intel Core i5-7300HQ CPU and 8 GB memory. The total computational times of KF, VBKF, HKF, MCKF, and VBMCKF are 0.1900 s, 0.4800 s, 0.2640 s, 0.2310 s, and 0.6340 s, respectively. The position and velocity errors of the different filters are given in Figure 8 and Figure 9. The attitude and heading errors of the different filters are quite similar, so we omit them.
It can be seen from Figure 8 and Figure 9 that, when the motion state changes sharply, the proposed VBMCKF algorithm has the smallest estimation errors, with only slightly increased computational time compared with the other estimation methods.

5. Conclusions

In this paper, a novel adaptive MCGF based on VB approximation is proposed. The MCC is used to reduce the effect of non-Gaussian measurement noise and outliers, while we use VB to estimate the unknown measurement noise covariance. Experimental results based on simulation examples and real data show that the proposed algorithm has better estimation accuracy than related robust and adaptive filters.

Author Contributions

W.G. and G.Z. conceived and designed the experiments; W.G. and Z.Y. performed the experiments and analyzed the data; M.B. and Z.Y. contributed experiment tools; W.G. and G.Z. wrote the paper.

Funding

This research was funded by the National Natural Science Foundation of China under grant numbers 61773133 and 61633008, the Natural Science Foundation of Heilongjiang Province under grant number F2016008, the Fundamental Research Funds for the Central Universities under grant number HEUCFM180401, the China Scholarship Council Foundation, and the Ph.D. Student Research and Innovation Foundation of the Fundamental Research Funds for the Central Universities under grant number HEUGIP201807.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Auger, F.; Hilairet, M.; Guerrero, J.M.; Monmasson, E.; Orlowska-Kowalska, T.; Katsura, S. Industrial Applications of the Kalman Filter: A Review. IEEE Trans. Ind. Electron. 2013, 60, 5458–5471. [Google Scholar] [CrossRef] [Green Version]
  2. Wang, G.; Li, N.; Zhang, Y. An Event based Multi-sensor Fusion Algorithm with Deadzone Like Measurements. Inf. Fusion 2018, 42, 111–118. [Google Scholar] [CrossRef]
  3. Wang, G.; Li, N.; Zhang, Y. Diffusion Distributed Kalman Filter over Sensor Networks without Exchanging Raw Measurements. Signal Process. 2017, 132, 1–7. [Google Scholar] [CrossRef]
  4. Huang, Y.; Zhang, Y.; Xu, B.; Wu, Z.; Chambers, J. A New Outlier-Robust Student’s t Based Gaussian Approximate Filter for Cooperative Localization. IEEE ASME Trans. Mechatron. 2017, 22, 2380–2386. [Google Scholar] [CrossRef]
  5. Reif, K.; Günther, S.; Yaz, E.; Unbehauen, R. Stochastic Stability of the Discrete-time Extended Kalman Filter. IEEE Trans. Autom. Control 1999, 44, 714–728. [Google Scholar] [CrossRef]
  6. Julier, S.J.; Uhlmann, J.K. Unscented Filtering and Nonlinear Estimation. Proc. IEEE 2004, 92, 401–422. [Google Scholar] [CrossRef] [Green Version]
  7. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef] [Green Version]
  8. Nørgaard, M.; Poulsen, N.K.; Ravn, O. New Developments in State Estimation for Nonlinear Systems. Automatica 2000, 36, 1627–1638. [Google Scholar] [CrossRef]
  9. Särkkä, S.; Hartikainen, J. Non-linear Noise Adaptive Kalman Filtering via Variational Bayes. In Proceedings of the 2013 IEEE International Workshop on Machine Learning for Signal Processing, Southampton, UK, 14 November 2013; pp. 1–6. [Google Scholar]
  10. Huang, Y.; Zhang, Y.; Wang, X.; Zhao, L. Gaussian Filter for Nonlinear Systems with Correlated Noises at the Same Epoch. Automatica 2015, 60, 122–126. [Google Scholar] [CrossRef]
  11. Wang, G.; Li, N.; Zhang, Y. Hybrid Consensus Sigma Point Approximation Nonlinear Filter Using Statistical Linearization. Trans. Inst. Meas. Control 2018, 40, 2517–2525. [Google Scholar] [CrossRef]
  12. Shmaliy, Y.S.; Zhao, S.; Ahn, C.K. Unbiased Finite Impulse Response Filtering: An Iterative Alternative to Kalman Filtering Ignoring Noise and Initial Conditions. IEEE Control Syst. Mag. 2017, 37, 70–89. [Google Scholar] [CrossRef]
  13. Zhao, S.; Shmaliy, Y.S.; Shi, P.; Ahn, C.K. Fusion Kalman/UFIR Filter for State Estimation with Uncertain Parameters and Noise Statistics. IEEE Trans. Ind. Electron. 2017, 64, 3075–3083. [Google Scholar] [CrossRef]
  14. Kotecha, J.H.; Djuric, P.M. Gaussian Particle Filtering. IEEE Trans. Signal Process. 2003, 51, 2592–2601. [Google Scholar] [CrossRef]
  15. Pak, J.M.; Ahn, C.K.; Shmaliy, Y.S.; Lim, M.T. Improving Reliability of Particle Filter-Based Localization in Wireless Sensor Networks via Hybrid Particle/FIR Filtering. IEEE Trans. Ind. Inform. 2015, 11, 1089–1098. [Google Scholar] [CrossRef]
  16. Alspach, D.; Sorenson, H. Nonlinear Bayesian Estimation Using Gaussian Sum Approximations. IEEE Trans. Autom. Control 1972, 17, 439–448. [Google Scholar] [CrossRef]
  17. Li, X.R.; Jilkov, V.P. Survey of Maneuvering Target Tracking. Part V. Multiple-model Methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321. [Google Scholar]
  18. Chang, L.; Hu, B.; Chang, G.; Li, A. Multiple Outliers Suppression Derivative-Free Filter Based on Unscented Transformation. J. Guid. Control Dyn. 2012, 35, 1902–1907. [Google Scholar] [CrossRef]
  19. Karlgaard, C.D. Nonlinear Regression Huber-Kalman Filtering and Fixed-Interval Smoothing. J. Guid. Control Dyn. 2014, 38, 322–330. [Google Scholar] [CrossRef]
  20. Wu, H.; Chen, S.; Yang, B.; Chen, K. Robust Derivative-Free Cubature Kalman Filter for Bearings-Only Tracking. J. Guid. Control Dyn. 2016, 39, 1866–1871. [Google Scholar] [CrossRef]
  21. Hur, H.; Ahn, H.S. Discrete-Time H∞ Filtering for Mobile Robot Localization Using Wireless Sensor Network. IEEE Sens. J. 2013, 13, 245–252. [Google Scholar] [CrossRef]
  22. Graham, M.C.; How, J.P.; Gustafson, D.E. Robust State Estimation with Sparse Outliers. J. Guid. Control Dyn. 2015, 38, 1229–1240. [Google Scholar] [CrossRef] [Green Version]
  23. Wang, Y.; Zheng, W.; Sun, S.; Li, L. Robust Information Filter Based on Maximum Correntropy Criterion. J. Guid. Control Dyn. 2016, 39, 1126–1131. [Google Scholar] [CrossRef]
  24. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum Correntropy Kalman Filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef]
  25. Liu, X.; Qu, H.; Zhao, J.; Yue, P.; Wang, M. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation. Sensors 2016, 16, 1530. [Google Scholar] [CrossRef] [PubMed]
  26. Liu, X.; Chen, B.D.; Xu, B.; Wu, Z.Z.; Honeine, P. Maximum Correntropy Unscented Filter. Int. J. Syst. Sci. 2017, 48, 1607–1615. [Google Scholar] [CrossRef]
  27. Liu, X.; Qu, H.; Zhao, J.; Chen, B. Extended Kalman Filter under Maximum Correntropy Criterion. In Proceedings of the 2016 International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 1733–1737. [Google Scholar]
  28. Wang, G.; Li, N.; Zhang, Y. Maximum Correntropy Unscented Kalman and Information Filters for Non-Gaussian Measurement Noise. J. Frankl. Inst. 2017, 354, 8659–8677. [Google Scholar] [CrossRef]
  29. Narasimhappa, M.; Rangababu, P.; Sabat, S.L.; Nayak, J. A modified Sage–Husa Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Signal. In Proceedings of the 2012 Annual IEEE India Conference, Kochi, India, 7–9 December 2012; pp. 1266–1271. [Google Scholar]
  30. Wang, Y.; Sun, S.; Li, L. Adaptively Robust Unscented Kalman Filter for Tracking a Maneuvering Vehicle. J. Guid. Control Dyn. 2014, 37, 1696–1701. [Google Scholar] [CrossRef]
  31. Li, K.; Chang, L.; Hu, B. A Variational Bayesian-Based Unscented Kalman Filter with Both Adaptivity and Robustness. IEEE Sens. J. 2016, 16, 6966–6976. [Google Scholar] [CrossRef]
  32. Sarkka, S.; Nummenmaa, A. Recursive Noise Adaptive Kalman Filtering by Variational Bayesian Approximations. IEEE Trans. Autom. Control 2009, 54, 596–600. [Google Scholar] [CrossRef]
  33. Mbalawata, I.S.; Särkkä, S.; Vihola, M.; Haario, H. Adaptive Metropolis Algorithm using Variational Bayesian Adaptive Kalman Filter. Comput. Stat. Data Anal. 2015, 83, 101–115. [Google Scholar] [CrossRef]
  34. Gao, W.; Li, J.; Zhou, G.; Li, Q. Adaptive Kalman Filtering with Recursive Noise Estimator for Integrated SINS/DVL Systems. J. Navig. 2015, 68, 142–161. [Google Scholar] [CrossRef]
Figure 1. The time varying parameters. (a) α_i; (b) β_i and γ_i.
Figure 2. RMSE performances of different filters under Case A. (a) position; (b) velocity.
Figure 3. RMSE performances of different filters under Case B. (a) position; (b) velocity.
Figure 4. RMSE performances of different filters under Case C. (a) position; (b) velocity.
Figure 5. RMSE performances of different filters under Case D. (a) position; (b) velocity.
Figure 6. RMSE performances of different filters under Case E. (a) position; (b) velocity.
Figure 7. Real velocities of this field experiment.
Figure 8. Position errors of different filters.
Figure 9. Velocity errors of different filters.
Table 1. ARMSEs of different filters under five cases.

Algorithms   Case A           Case B           Case C           Case D           Case E
             Pos.     Vel.    Pos.     Vel.    Pos.     Vel.    Pos.     Vel.    Pos.     Vel.
CKF          0.4097   0.0855  2.2930   0.3121  2.2050   0.3053  3.7720   0.7809  3.2960   0.6923
MCCKF-1      0.4077   0.0854  1.4960   0.1959  1.5680   0.2052  1.4090   0.2077  1.5680   0.2100
MCCKF-2      0.4079   0.0853  1.5280   0.1989  1.5990   0.2058  1.4500   0.2098  1.6010   0.2114
HCKF         0.4045   0.0847  1.0990   0.1304  1.1360   0.1390  1.0100   0.1384  1.1030   0.1365
VBHCKF       0.6500   0.1296  1.7180   0.1185  1.6360   0.1260  1.6460   0.1207  1.7590   0.1251
VBCKF        0.4362   0.0880  0.8761   0.0845  0.7828   0.0914  1.1240   0.0977  1.3100   0.1000
VBMCCKF      0.4350   0.0878  0.8633   0.0843  0.7752   0.0909  0.7424   0.0933  0.8236   0.0931
