Article

Globally Optimal Distributed Kalman Filtering for Multisensor Systems with Unknown Inputs

College of Mathematics, Sichuan University, Chengdu 610064, Sichuan, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 2976; https://doi.org/10.3390/s18092976
Submission received: 10 July 2018 / Revised: 30 August 2018 / Accepted: 3 September 2018 / Published: 6 September 2018
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract

In this paper, state estimation for dynamic systems with unknown inputs modeled as an autoregressive AR(1) process is considered. We propose an algorithm that is optimal in the mean square error (MSE) sense, using a difference method to eliminate the unknown inputs. Moreover, we consider state estimation for multisensor dynamic systems with unknown inputs. It is proved that the distributed fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements, and therefore achieves the best performance. The computational complexity of the traditional augmented state algorithm grows with the augmented state dimension, whereas the new algorithm attains the same performance with far fewer computations. Moreover, numerical examples show that the performance of the traditional algorithms depends heavily on the initial value of the unknown inputs: if the estimate of that initial value is largely biased, the traditional algorithms degrade considerably, while the new algorithm still works well because it is independent of the initial value of the unknown input.

1. Introduction

The classic Kalman filter (KF) [1] requires that the model of the dynamic system be accurate. In many realistic situations, however, the model may contain unknown inputs in the process or measurement equations. The problem of estimating the state of a linear time-varying discrete-time system with unknown inputs has been widely studied. One common approach is to treat the unknown inputs as part of the system state and estimate both, which leads to the augmented state Kalman filter (ASKF), whose computational cost increases with the augmented state dimension. In 1969, Friedland [2] proposed a two-stage Kalman filter (TSKF) to reduce the computational complexity of the ASKF; it is optimal when the unknown input is constant. Building on [2], Hsieh et al. proposed an optimal two-stage Kalman filter (OTSKF) for dynamic systems with a random bias in 1999 [3] and a robust two-stage filter for dynamic systems with unknown inputs in 2000 [4]. In [3,4,5], the unknown inputs are assumed to follow an autoregressive AR(1) process, and the two-stage algorithms are optimal in the mean square error (MSE) sense. However, the optimality of the ASKF and OTSKF rests on the premise that the initial value of the unknown input can be estimated correctly; with an incorrect initial value, both filters perform poorly (see Examples 1 and 2 in Section 5), especially when the unknown input is non-stationary, as in [4,5]. Since the exact initial value of the unknown input is hard to know, these approaches need improvement. Many other researchers have also addressed the unknown-input problem in recent years [6,7,8].
Large numbers of sensors are now used in many advanced systems. When the processing center receives all measurements from the local sensors in time, centralized Kalman filtering (CKF) can be applied, and the resulting state estimates are optimal in the MSE sense. Nevertheless, because of limited communication bandwidth and the relatively low survivability of a centralized system in unfavorable conditions, such as military environments, Kalman filtering must first be carried out at every local sensor based on its own observations, and the processed local state estimate is then transmitted to a fusion center. The fusion center fuses the received local estimates to produce a globally optimal or suboptimal state estimate. A large body of work exists on distributed Kalman filtering (DKF). Under certain common conditions, in particular the assumption of cross-independent sensor noises, an optimal DKF fusion was proposed in [9,10,11]; it was proved to coincide with the CKF using all sensor measurements and is therefore globally optimal. A Kalman filtering fusion with feedback was also suggested there, and a rigorous performance analysis of the feedback scheme was presented in [12]. These results hold only when the observation noises are uncorrelated across sensors. Song et al. [13] showed that even when the sensor noises are cross-correlated, the fused state estimate still equals the CKF under a mild condition. Similarly to [13], Luo et al. [14] proposed in 2008 a distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. They proved that, under the mild condition that the expectations of all sensor measurement matrices have full row rank, the distributed fused state estimate is equivalent to the centralized random parameter matrices Kalman filtering using all sensor measurements. To our knowledge, few studies have applied the augmented methods mentioned above to multisensor systems with unknown inputs, mainly because augmentation greatly increases the state dimension and computational complexity of such systems.
In this paper, an estimator for dynamic systems with unknown inputs that is optimal in the MSE sense is proposed. Unlike [2,3,4,5], we eliminate the unknown inputs instead of estimating them. The unknown inputs are assumed to follow an autoregressive AR(1) process and are eliminated by a measurement difference method, which converts the original dynamic system into a remodeled system with correlated process and measurement noises. The new measurement noise is one-step autocorrelated in time and also correlated with the process noise. We propose a globally optimal recursive state estimation algorithm for this remodeled system. Compared with the ASKF and OTSKF, the new algorithm is still optimal in the MSE sense but with a lower computational burden, and its performance does not depend on the initial value of the unknown input. For the multisensor system with unknown inputs, we show that the centralized filter can still be expressed as a linear combination of the local estimates, so the distributed fusion performs exactly as well as the centralized fusion. Numerical examples are given to support our analysis.
The remainder of this paper is organized as follows: the problem is formulated in Section 2; an optimal estimation algorithm for dynamic systems with unknown inputs is derived in Section 3; Section 4 presents a distributed algorithm for multisensor systems with unknown inputs and shows that the fused state estimate equals the centralized Kalman filter using all sensor measurements; simulation examples are given in Section 5; Section 6 summarizes our analysis and discusses future work.

2. Problem Formulation

Consider the discrete-time dynamic system:

x_{k+1} = F_k x_k + ν_k,  (1)

y_k = H_k x_k + A_k d_k + ω_k,  (2)

where x_k ∈ R^m is the system state and y_k ∈ R^n is the measurement vector; the process noise ν_k ∈ R^m and measurement noise ω_k ∈ R^n are zero-mean white noise sequences with covariances:

E(ν_k ν_j^T) = R_{ν_k} δ_{kj},  (3)

E(ω_k ω_j^T) = R_{ω_k} δ_{kj},  (4)

E(ν_k ω_j^T) = 0, ∀ k, j,  (5)

where δ_{kj} = 1 if k = j and δ_{kj} = 0 if k ≠ j.

d_k ∈ R^p is the unknown input. The matrices F_k, H_k and A_k are of appropriate dimensions, and A_k ∈ R^{n×p} is assumed to have full column rank, i.e., rank(A_k) = p. Therefore, A_k^† A_k = I, where the superscript “†” denotes the Moore-Penrose pseudo-inverse. The unknown input d_k is assumed to follow an autoregressive AR(1) process:

d_{k+1} = B_k d_k + ω_{d_k},  (6)

where B_k is nonsingular and ω_{d_k} is a zero-mean white noise sequence with covariance:

E(ω_{d_k} ω_{d_j}^T) = R_{d_k} δ_{kj}.  (7)
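The full-column-rank assumption on A_k, and the left-inverse property A_k^† A_k = I that the derivation relies on, can be checked numerically. A minimal sketch with a hypothetical 3 × 2 matrix:

```python
import numpy as np

# A hypothetical full-column-rank bias-shaping matrix A_k (n = 3, p = 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

A_pinv = np.linalg.pinv(A)   # Moore-Penrose pseudo-inverse, shape p x n

# Full column rank implies the left-inverse property A^† A = I_p.
assert np.allclose(A_pinv @ A, np.eye(2))
```

Note that A A^† is only a projection onto the column space of A, not the identity, which is why full column rank (and not full row rank) is what the difference construction below requires.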
This model is widely considered in [2,3,4,5]. For example, in radar systems the measurement often contains a fixed unknown bias, or a bias that grows gradually as the range increases. Such deviations can be described by Equation (6).
The ASKF and OTSKF are two classic algorithms for this problem. The ASKF treats x_k and d_k as an augmented state and estimates them jointly, while the OTSKF first estimates x_k and d_k separately and then fuses the two estimates to obtain the optimal one. In fact, the unknown inputs can be eliminated directly by a difference method. Define:

z_k = B_k^{-1} A_{k+1}^† y_{k+1} − A_k^† y_k.  (8)
Equations (1) and (2) can then be rewritten as:

x_{k+1} = F_k x_k + ν_k,  (9)

z_k = M_k x_k + u_k,  (10)

where:

M_k = B_k^{-1} A_{k+1}^† H_{k+1} F_k − A_k^† H_k,  (11)

u_k = B_k^{-1} A_{k+1}^† H_{k+1} ν_k + B_k^{-1} ω_{d_k} + B_k^{-1} A_{k+1}^† ω_{k+1} − A_k^† ω_k.  (12)

From Equation (12), it is not difficult to see that the new measurement noise u_k is one-step autocorrelated and correlated with the process noise:

E(u_k ν_j^T) = B_k^{-1} A_{k+1}^† H_{k+1} R_{ν_k} δ_{kj},  (13)

E(u_k u_k^T) = B_k^{-1} A_{k+1}^† H_{k+1} R_{ν_k} H_{k+1}^T (A_{k+1}^†)^T (B_k^{-1})^T + B_k^{-1} R_{d_k} (B_k^{-1})^T + B_k^{-1} A_{k+1}^† R_{ω_{k+1}} (A_{k+1}^†)^T (B_k^{-1})^T + A_k^† R_{ω_k} (A_k^†)^T,  (14)

E(u_k u_j^T) = −A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T δ_{k,j+1},  k ≠ j.  (15)
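The cancellation behind Equations (8)–(12) can be checked numerically. The sketch below, with hypothetical time-invariant matrices and all noises set to zero (so the identity is exact), propagates the system and verifies that z_k = M_k x_k no longer contains d_k:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-invariant matrices: m = 2 states, n = 3 measurements, p = 2 bias terms.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = rng.standard_normal((3, 2))
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # full column rank
B = np.array([[0.9, 0.0], [0.0, 0.8]])               # nonsingular
A_pinv = np.linalg.pinv(A)
B_inv = np.linalg.inv(B)

# M_k of Equation (11) for the time-invariant case.
M = B_inv @ A_pinv @ H @ F - A_pinv @ H

x, d = np.array([1.0, -1.0]), np.array([5.0, 5.0])
states, zs = [], []
for _ in range(20):
    x_next, d_next = F @ x, B @ d
    y_k = H @ x + A @ d                    # current biased measurement
    y_k1 = H @ x_next + A @ d_next         # next biased measurement
    z = B_inv @ A_pinv @ y_k1 - A_pinv @ y_k   # Equation (8)
    states.append(x.copy())
    zs.append(z)
    x, d = x_next, d_next

# With all noises zero, z_k = M x_k holds exactly: d_k has been eliminated.
assert all(np.allclose(zz, M @ xx) for xx, zz in zip(states, zs))
```

With the noises restored, the same algebra yields exactly the noise u_k of Equation (12), which is what makes the remodeled system tractable.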

3. Optimal Estimation for the Remodeled System

The classic Kalman filter assumes that the process and measurement noises are each temporally white and mutually uncorrelated except at the same time instant. The noises in Equations (13)–(15) clearly violate these assumptions. Using recent results on Kalman filtering with correlated noises [15,16,17,18,19,20], we can derive an estimator for the remodeled system (9) and (10) that is optimal in the MSE sense. The recursive state estimate of the new system is given in the following theorem.
Theorem 1.
The globally optimal estimate for the remodeled system (9) and (10) is given by:

x_{k|k} = x_{k|k-1} + J_k L_k^{-1} Δz_k,  (16)

P_{k|k} = P_{k|k-1} − J_k L_k^{-1} J_k^T,  (17)

where:

x_{k|k-1} = F_{k-1} x_{k-1|k-1} + R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} Δz_{k-1},  (18)

P_{k|k-1} = E(x_k − x_{k|k-1})(x_k − x_{k|k-1})^T
= F_{k-1} P_{k-1|k-1} F_{k-1}^T − F_{k-1} J_{k-1} L_{k-1}^{-1} B_{k-1}^{-1} A_k^† H_k R_{ν_{k-1}} − R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} J_{k-1}^T F_{k-1}^T + R_{ν_{k-1}} − R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} B_{k-1}^{-1} A_k^† H_k R_{ν_{k-1}},  (19)

Δz_k = z_k − z_{k|k-1} = z_k − M_k x_{k|k-1} + A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} Δz_{k-1},  (20)

J_k = E(x_k − x_{k|k-1})(z_k − z_{k|k-1})^T = P_{k|k-1} M_k^T + (F_{k-1} J_{k-1} L_{k-1}^{-1} + R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1}) B_{k-1}^{-1} A_k^† R_{ω_k} (A_k^†)^T,

L_k = E(z_k − z_{k|k-1})(z_k − z_{k|k-1})^T = M_k J_k + A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} (J_{k-1}^T F_{k-1}^T + B_{k-1}^{-1} A_k^† H_k R_{ν_{k-1}}) M_k^T + R_{u_k} − A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} B_{k-1}^{-1} A_k^† R_{ω_k} (A_k^†)^T.
Remark 1.
By Theorem 1, the new algorithm presented in this section is optimal in the MSE sense. In theory, the ASKF and OTSKF are also optimal in the MSE sense (see [2,3]). Nevertheless, their optimality depends on the assumption that the initial estimate of the unknown input satisfies d_{0|0} = E(d_0), which is difficult to meet in practice. The numerical examples in Section 5 demonstrate that a wrong initial value of the unknown input greatly degrades their performance. By contrast, the new algorithm keeps its good performance because it does not rely on the initial value of the unknown input.
Remark 2.
A flop is defined as one addition, subtraction, or multiplication. To estimate the complexity of an algorithm, the total number of flops is counted, expressed as a polynomial in the dimensions of the matrices and vectors involved, and simplified by keeping only the leading terms. The complexities of the ASKF, OTSKF and the new algorithm are all O(m^3 + n^3 + p^3 + m^2 n + m n^2 + m^2 p + m p^2 + n^2 p + n p^2 + m n p), i.e., polynomials of the same order. Their computational costs are compared more precisely by numerical examples in Section 5.

4. Multisensor Fusion

The l-sensor dynamic system is given by:

x_{k+1} = F_k x_k + ν_k,  k = 0, 1, …

y_k^i = H_k^i x_k + A_k^i d_k^i + ω_k^i,

d_{k+1}^i = B_k^i d_k^i + ω_{d_k}^i,  i = 1, …, l,  (21)

where x_k ∈ R^m is the system state, y_k^i ∈ R^{n_i} is the measurement vector at the i-th sensor, ν_k ∈ R^m is the process noise, ω_k^i ∈ R^{n_i} is the measurement noise, and d_k^i ∈ R^{p_i} is the unknown input at the i-th sensor. The matrices F_k, H_k^i and A_k^i are of appropriate dimensions.
We assume the system has the following statistical properties:
(1) Every sensor satisfies the assumptions in Section 2.
(2) A_k^i ∈ R^{n_i × p_i} has full column rank, so (A_k^i)^† A_k^i = I.
(3) {ν_k, ω_k^i, ω_{d_k}^j, k = 0, 1, 2, …}, i, j = 1, …, l, is a sequence of independent variables.
Similarly to Equations (9) and (10), Equation (21) can be converted to:

x_{k+1} = F_k x_k + ν_k,  k = 0, 1, …,  (22)

z_k^i = M_k^i x_k + u_k^i,  i = 1, …, l,  (23)

where:

M_k^i = (B_k^i)^{-1} (A_{k+1}^i)^† H_{k+1}^i F_k − (A_k^i)^† H_k^i,

u_k^i = (B_k^i)^{-1} (A_{k+1}^i)^† H_{k+1}^i ν_k + (B_k^i)^{-1} ω_{d_k}^i + (B_k^i)^{-1} (A_{k+1}^i)^† ω_{k+1}^i − (A_k^i)^† ω_k^i.

The stacked measurement equation is:

z_k = M_k x_k + u_k,  (25)

where:

z_k = ((z_k^1)^T, …, (z_k^l)^T)^T,  M_k = ((M_k^1)^T, …, (M_k^l)^T)^T,  u_k = ((u_k^1)^T, …, (u_k^l)^T)^T.
According to Theorem 1, the local Kalman filter at the i-th sensor is:

x_{k|k}^i = x_{k|k-1}^i + J_k^i (L_k^i)^{-1} Δz_k^i,  (24)

x_{k|k-1}^i = F_{k-1} x_{k-1|k-1}^i + R_{ν_{k-1}} (H_k^i)^T ((A_k^i)^†)^T ((B_{k-1}^i)^{-1})^T (L_{k-1}^i)^{-1} Δz_{k-1}^i,

with the filtering error covariance:

P_{k|k}^i = P_{k|k-1}^i − J_k^i (L_k^i)^{-1} (J_k^i)^T,

where:

Δz_k^i = z_k^i − M_k^i x_{k|k-1}^i + (A_k^i)^† R_{ω_k^i} ((A_k^i)^†)^T ((B_{k-1}^i)^{-1})^T (L_{k-1}^i)^{-1} Δz_{k-1}^i,  (26)

J_k^i = E(x_k − x_{k|k-1}^i)(z_k^i − z_{k|k-1}^i)^T,

L_k^i = E(z_k^i − z_{k|k-1}^i)(z_k^i − z_{k|k-1}^i)^T,

P_{k|k-1}^i = E(x_k − x_{k|k-1}^i)(x_k − x_{k|k-1}^i)^T.
According to Theorem 1, the centralized Kalman filter using all sensor data is:

x_{k|k} = x_{k|k-1} + J_k L_k^{-1} Δz_k,  (27)

x_{k|k-1} = F_{k-1} x_{k-1|k-1} + R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} Δz_{k-1},  (28)

with the filtering error covariance:

P_{k|k} = P_{k|k-1} − J_k L_k^{-1} J_k^T,

where:

A_k = diag(A_k^1, …, A_k^l),  B_k = diag(B_k^1, …, B_k^l),

Δz_k = z_k − M_k x_{k|k-1} + A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} Δz_{k-1},  (29)

J_k = E(x_k − x_{k|k-1})(z_k − z_{k|k-1})^T,

L_k = E(z_k − z_{k|k-1})(z_k − z_{k|k-1})^T,

P_{k|k-1} = E(x_k − x_{k|k-1})(x_k − x_{k|k-1})^T,

and diag(·) denotes the block-diagonal matrix with the listed blocks.
Remark 3.
There are two key points in expressing the centralized filter (27) and (28) in terms of the local filters:
(1) Considering the measurement noise of a single sensor in the new system (22) and (23), the sensor noises of the converted system are cross-correlated even when the original sensor noises are mutually independent.
(2) Δz_k in Equation (27) is not stacked directly from the local Δz_k^i of Equation (26) and contains Δz_{k-1} in its expression, which makes our problem more involved than the distributed problems in [9,10,11,12,13,14,21].
Next, we solve these two problems to express the centralized filter (28) in terms of the local filters. We assume that H_k^T has full column rank, so (H_k^T)^† H_k^T = I. Using (28), we get:
Δz_{k-1} = L_{k-1} [(B_{k-1}^{-1})^T]^{-1} [(A_k^†)^T]^† (H_k^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1} − F_{k-1} x_{k-1|k-1}).  (30)
Substituting (29) and (30) into (27), we have:
x_{k|k} = x_{k|k-1} + J_k L_k^{-1} Δz_k
= x_{k|k-1} + J_k L_k^{-1} (z_k − M_k x_{k|k-1} + A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} Δz_{k-1})
= x_{k|k-1} + J_k L_k^{-1} (z_k − M_k x_{k|k-1} + A_k^† R_{ω_k} (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} L_{k-1} [(B_{k-1}^{-1})^T]^{-1} [(A_k^†)^T]^† (H_k^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1} − F_{k-1} x_{k-1|k-1}))
= x_{k|k-1} + J_k L_k^{-1} (z_k − M_k x_{k|k-1} + A_k^† R_{ω_k} (H_k^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1} − F_{k-1} x_{k-1|k-1}))
= x_{k|k-1} − J_k L_k^{-1} M_k x_{k|k-1} + J_k L_k^{-1} A_k^† R_{ω_k} (H_k^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1} − F_{k-1} x_{k-1|k-1}) + J_k L_k^{-1} z_k.  (31)
Using (26), we have:
z_k^i = Δz_k^i + M_k^i x_{k|k-1}^i − (A_k^i)^† R_{ω_k^i} ((A_k^i)^†)^T ((B_{k-1}^i)^{-1})^T (L_{k-1}^i)^{-1} Δz_{k-1}^i.  (32)
We assume that J_k^i ∈ R^{m×n_i} has full column rank, i.e., rank(J_k^i) = n_i, so (J_k^i)^† J_k^i = I. Then, using (24), we get:

Δz_k^i = L_k^i (J_k^i)^† (x_{k|k}^i − x_{k|k-1}^i).  (33)
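The recovery step in Equation (33) only needs the quantities a local sensor transmits. A small numerical sketch with hypothetical J_k^i and L_k^i, showing that the fusion center can reconstruct the local innovation from the local update alone:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n_i = 4, 2
J = rng.standard_normal((m, n_i))     # J_k^i, full column rank (m >= n_i)
L = np.eye(n_i) + 0.1 * rng.standard_normal((n_i, n_i))
L = L @ L.T                           # symmetric positive definite innovation covariance

dz = rng.standard_normal(n_i)         # a local innovation Δz_k^i
# Local update x_{k|k}^i - x_{k|k-1}^i produced by Equation (24).
x_update = J @ np.linalg.solve(L, dz)

# Fusion-center side: recover the innovation via Equation (33).
dz_recovered = L @ (np.linalg.pinv(J) @ x_update)
assert np.allclose(dz_recovered, dz)
```

Since (J_k^i)^† J_k^i = I for a full-column-rank gain, the pseudo-inverse undoes the local update exactly.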
To express the centralized filter x_{k|k} in terms of the local filters, by (25), (32) and (33), we have:
J_k L_k^{-1} z_k = J_k Σ_{i=1}^l L_k^{(i)} z_k^i
= J_k Σ_{i=1}^l L_k^{(i)} (Δz_k^i + M_k^i x_{k|k-1}^i − (A_k^i)^† R_{ω_k^i} ((A_k^i)^†)^T ((B_{k-1}^i)^{-1})^T (L_{k-1}^i)^{-1} Δz_{k-1}^i)
= J_k Σ_{i=1}^l L_k^{(i)} (L_k^i (J_k^i)^† (x_{k|k}^i − x_{k|k-1}^i) + M_k^i x_{k|k-1}^i − (A_k^i)^† R_{ω_k^i} ((A_k^i)^†)^T ((B_{k-1}^i)^{-1})^T (L_{k-1}^i)^{-1} L_{k-1}^i [((B_{k-1}^i)^{-1})^T]^{-1} [((A_k^i)^†)^T]^† ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1}^i − F_{k-1} x_{k-1|k-1}^i))
= J_k Σ_{i=1}^l L_k^{(i)} (L_k^i (J_k^i)^† (x_{k|k}^i − x_{k|k-1}^i) + M_k^i x_{k|k-1}^i − (A_k^i)^† R_{ω_k^i} ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1}^i − F_{k-1} x_{k-1|k-1}^i)),  (34)
where L_k^{(i)} is the i-th column block of L_k^{-1}.
Thus, substituting (34) into (31) yields:
x_{k|k} = x_{k|k-1} − J_k L_k^{-1} M_k x_{k|k-1} + J_k L_k^{-1} A_k^† R_{ω_k} (H_k^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1} − F_{k-1} x_{k-1|k-1}) + J_k Σ_{i=1}^l L_k^{(i)} (L_k^i (J_k^i)^† (x_{k|k}^i − x_{k|k-1}^i) + M_k^i x_{k|k-1}^i − (A_k^i)^† R_{ω_k^i} ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1}^i − F_{k-1} x_{k-1|k-1}^i))
= (I − J_k L_k^{-1} M_k + J_k L_k^{-1} A_k^† R_{ω_k} (H_k^T)^† R_{ν_{k-1}}^{-1}) x_{k|k-1} − J_k L_k^{-1} A_k^† R_{ω_k} (H_k^T)^† R_{ν_{k-1}}^{-1} F_{k-1} x_{k-1|k-1} + J_k Σ_{i=1}^l L_k^{(i)} (L_k^i (J_k^i)^† x_{k|k}^i + (M_k^i − L_k^i (J_k^i)^† − (A_k^i)^† R_{ω_k^i} ((H_k^i)^T)^† R_{ν_{k-1}}^{-1}) x_{k|k-1}^i + (A_k^i)^† R_{ω_k^i} ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} F_{k-1} x_{k-1|k-1}^i).  (35)
Similarly to Equation (35), using Equations (24), (26), (29) and (32), we have:
x_{k|k-1} = F_{k-1} x_{k-1|k-1} − R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} (M_{k-1} x_{k-1|k-2} − A_{k-1}^† R_{ω_{k-1}} (H_{k-1}^T)^† R_{ν_{k-2}}^{-1} (x_{k-1|k-2} − F_{k-2} x_{k-2|k-2})) + R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T Σ_{i=1}^l L_{k-1}^{(i)} (L_{k-1}^i [((B_{k-1}^i)^{-1})^T]^{-1} [((A_k^i)^†)^T]^† ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} (x_{k|k-1}^i − F_{k-1} x_{k-1|k-1}^i) + M_{k-1}^i x_{k-1|k-2}^i − (A_{k-1}^i)^† R_{ω_{k-1}^i} ((H_{k-1}^i)^T)^† R_{ν_{k-2}}^{-1} (x_{k-1|k-2}^i − F_{k-2} x_{k-2|k-2}^i))
= F_{k-1} x_{k-1|k-1} − R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} (M_{k-1} − A_{k-1}^† R_{ω_{k-1}} (H_{k-1}^T)^† R_{ν_{k-2}}^{-1}) x_{k-1|k-2} − R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T L_{k-1}^{-1} A_{k-1}^† R_{ω_{k-1}} (H_{k-1}^T)^† R_{ν_{k-2}}^{-1} F_{k-2} x_{k-2|k-2} + R_{ν_{k-1}} H_k^T (A_k^†)^T (B_{k-1}^{-1})^T Σ_{i=1}^l L_{k-1}^{(i)} (L_{k-1}^i [((B_{k-1}^i)^{-1})^T]^{-1} [((A_k^i)^†)^T]^† ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} x_{k|k-1}^i − L_{k-1}^i [((B_{k-1}^i)^{-1})^T]^{-1} [((A_k^i)^†)^T]^† ((H_k^i)^T)^† R_{ν_{k-1}}^{-1} F_{k-1} x_{k-1|k-1}^i + (M_{k-1}^i − (A_{k-1}^i)^† R_{ω_{k-1}^i} ((H_{k-1}^i)^T)^† R_{ν_{k-2}}^{-1}) x_{k-1|k-2}^i + (A_{k-1}^i)^† R_{ω_{k-1}^i} ((H_{k-1}^i)^T)^† R_{ν_{k-2}}^{-1} F_{k-2} x_{k-2|k-2}^i).  (36)
This means the centralized filter is expressed in terms of the local filters. Therefore, the distributed fused state estimate equals the centralized Kalman filter using all sensor measurements, i.e., it achieves the best performance.
Remark 4.
From this new algorithm, it is easy to see that each local sensor should transmit x_{k|k}^i, x_{k|k-1}^i, P_{k|k}^i and P_{k|k-1}^i to the fusion center to obtain the global fusion result. The augmented methods greatly increase the state dimension and computational complexity of the multisensor system; since the difference method does not increase the state dimension, its computational complexity is lower than that of the augmented methods.

5. Numerical Examples

In this section, several simulations are carried out for dynamic systems with unknown inputs. The unknown input is modeled as d_{k+1} = B_k d_k + ω_{d_k}. The unknown input d_k is a stationary time series if the eigenvalues of B_k are less than one in magnitude; otherwise it is non-stationary. The performance of the new algorithm (denoted Difference KF) in these two cases is discussed in Examples 1 and 2, respectively:
Example 1.
A two-dimensional target tracking problem is considered. The target dynamics are given by Equations (1)–(7). The state transition matrix is:

F_k = [1 1 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1]

and the measurement matrix is:

H_k = [1 0 0 0; 0 0 1 0]

Suppose A_k is the identity matrix of appropriate dimension and B_k = 0.9I; in this case d_k is a stationary time series. The target starts at x_0 = (50, 1, 50, 1)^T and the initial unknown input is d_0 = (5, 5)^T. The noise covariance matrices are:

R_{ν_k} = diag(1, 0.1, 1, 0.1),  R_{ω_k} = I_2,  R_{d_k} = I_2.
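Under these settings, the Example 1 data can be generated with a short script (a sketch of the simulation only; the compared filters themselves are omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

# Example 1 model: constant-velocity motion in two coordinates.
F = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
A = np.eye(2)
B = 0.9 * np.eye(2)
R_nu = np.diag([1.0, 0.1, 1.0, 0.1])
R_omega = np.eye(2)
R_d = np.eye(2)

x = np.array([50.0, 1.0, 50.0, 1.0])
d = np.array([5.0, 5.0])

xs, ys = [], []
for _ in range(100):
    y = H @ x + A @ d + rng.multivariate_normal(np.zeros(2), R_omega)
    xs.append(x)
    ys.append(y)
    x = F @ x + rng.multivariate_normal(np.zeros(4), R_nu)
    d = B @ d + rng.multivariate_normal(np.zeros(2), R_d)
```

The biased measurements `ys` are what all six algorithms below receive; only the Difference KF forms z_k from consecutive pairs of them.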
In the following, the computer time and estimation performance of the ASKF, OTSKF and Difference KF are compared.
  • Computer time
The complexities of the three algorithms were analyzed in Remark 2, which shows they are polynomials of the same order. Table 1 lists the computer time of the three algorithms over 1000 Monte-Carlo runs; the new algorithm is the fastest in this example.
  • Estimation Performances
In [3], Hsieh et al. proved that the OTSKF is equivalent to the ASKF, so the two algorithms give the same tracking results. To keep the figure clear, we compare only the following six algorithms:
Algorithm 1: KF without considering unknown input.
Algorithm 2: ASKF with accurate initial value of unknown input ( d 0 = ( 5 , 5 ) T ) .
Algorithm 3: OTSKF with accurate initial value of unknown input ( d 0 = ( 5 , 5 ) T ) .
Algorithm 4: ASKF with inaccurate initial value of unknown input ( d 0 = ( 0 , 0 ) T ) .
Algorithm 5: ASKF with inaccurate initial value of unknown input ( d 0 = ( 20 , 20 ) T ) .
Algorithm 6: Difference KF without any information about initial value of unknown input.
The initial states of the six algorithms are set to x_{0|0} = x_0, with P_{x_{0|0}} = R_{ν_0} and P_{d_{0|0}} = R_{d_0}. Using 100 Monte-Carlo runs, the estimation performance of an algorithm is evaluated by the second moment of the tracking error:

E_k^2 = (1/100) Σ_{j=1}^{100} ||x_{k|k}(j) − x_k||^2,  k = 1, 2, …, 100.
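The error metric above can be computed as follows (a minimal sketch; `second_moment_error` is a hypothetical helper name):

```python
import numpy as np

def second_moment_error(estimates, truth):
    """E_k^2 = (1/N) * sum_j ||x_{k|k}(j) - x_k||^2 over N Monte-Carlo runs.

    estimates: array of shape (N_runs, T, m); truth: array of shape (T, m).
    Returns an array of length T with the per-step squared tracking error.
    """
    err = estimates - truth[None, :, :]
    return np.mean(np.sum(err ** 2, axis=-1), axis=0)

# Tiny illustration: 3 runs, 2 time steps, 1-dimensional state.
est = np.array([[[1.0], [2.0]],
                [[1.2], [2.2]],
                [[0.8], [1.8]]])
truth = np.array([[1.0], [2.0]])
print(second_moment_error(est, truth))   # -> [0.02666667 0.02666667]
```

At each step the errors are 0, 0.2 and −0.2, so the mean squared error is 0.08/3 ≈ 0.0267.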
Note that the Difference KF uses ( y 1 , y 2 , …, y k , y k+1 ) to estimate x_k at step k, whereas the KF, ASKF and OTSKF use only ( y 1 , y 2 , …, y k ). For a fair comparison, x_{k|k-1} of the Difference KF is compared with x_{k|k} of the other five algorithms. Since d_{k+1} = 0.9 d_k + ω_{d_k}, the influence of the initial value d_0 decays geometrically. The tracking errors of the six methods are compared in Figure 1 and Table 2. Whether or not the initial values of the unknown input in the ASKF and OTSKF are accurate, the tracking results of the six algorithms are almost the same after about 25 steps. However, the Difference KF performs better than the ASKF with an inaccurate initial value in the early stage, which is important in some practical settings: in multi-target tracking, for instance, data association errors and heavy clutter force tracking to restart frequently, so the performance at each restart should be as good as possible.
Example 2.
The dynamic equations are the same as in Example 1, with B_k = I. This model has been considered in [4,5]; here d_k is a non-stationary time series. Non-stationary unknown inputs are common in practice: for an air target, for instance, the unknown radar bias often grows as the distance between the target and the radar changes.
The target starts at x_0 = (50, 1, 50, 1)^T and the initial unknown input is d_0 = (5, 5)^T. The performances of the following six algorithms are compared:
Algorithm 1: KF without considering unknown input.
Algorithm 2: ASKF with accurate initial value of unknown input ( d 0 = ( 5 , 5 ) T ) .
Algorithm 3: OTSKF with accurate initial value of unknown input ( d 0 = ( 5 , 5 ) T ) .
Algorithm 4: ASKF with inaccurate initial value of unknown input ( d 0 = ( 0 , 0 ) T ) .
Algorithm 5: ASKF with inaccurate initial value of unknown input ( d 0 = ( 20 , 20 ) T ) .
Algorithm 6: Difference KF without any information about initial value of unknown input.
Figure 2 and Table 3 compare the tracking errors of the six methods. The new algorithm and the ASKF and OTSKF with accurate initial values of the unknown input are all optimal in the MSE sense, so their performances are nearly indistinguishable. The KF that ignores the unknown input performs worse because it uses no information about it. The numerical results also show that once the initial value of the unknown input is inaccurate, the ASKF degrades; when the initial value is largely biased, the ASKF performs even worse than the KF that ignores the unknown input. This is because d_{k+1} = d_k + ω_{d_k} in this example, so the influence of an incorrect initial value d_0 never decays. Nevertheless, the new algorithm is independent of the initial value of the unknown input and still performs well.
Examples 1 and 2 show that the performance of the Difference KF is almost the same as that of the ASKF and OTSKF with accurate initial values of the unknown input, whereas a largely biased initial value badly degrades the ASKF and OTSKF. Since the exact initial value of the unknown input is hard to obtain, the Difference KF is the better option in practice.
Example 3.
A two-sensor Kalman filtering fusion problem with unknown inputs is considered. The object dynamics and measurement equations are modeled as:
x_{k+1} = F_k x_k + ν_k,  k = 0, 1, …, 100,

y_k^i = H_k^i x_k + A_k^i d_k^i + ω_k^i,

d_{k+1}^i = B_k^i d_k^i + ω_{d_k}^i,  i = 1, 2.
The state transition matrix F_k and the measurement matrices H_k^i are the same as in Example 1; A_k^i and B_k^i are identity matrices of appropriate dimensions. The target starts at x_0 = (50, 1, 50, 1)^T and the initial unknown inputs are d_0^i = (5, 5)^T.
The covariance matrix of the process noise is:

R_{ν_k} = diag(5, 0.1, 5, 0.1),

and the covariance matrices of the measurement noises and the unknown inputs are R_{ω_k^i} = I and R_{d_k^i} = I, i = 1, 2.
The performances of the following three algorithms are compared as follows:
Algorithm 1: Centralized KF without considering unknown input.
Algorithm 2: The centralized fusion of the Difference KF.
Algorithm 3: The distributed fusion of the Difference KF.
The initial states of the three algorithms are set to x_{0|0} = x_0, with initial P_{x_{0|0}}^i = I. Using 100 Monte-Carlo runs, we evaluate the estimation performance of each algorithm by the second moment of the tracking error.
Figure 3 and Table 4 show that the distributed fusion and the centralized fusion of the new algorithm give exactly the same results, and that the new fusion algorithm outperforms the KF that ignores the unknown input. Thus, the distributed algorithm has both global optimality and good survivability in adverse situations.

6. Conclusions

In this paper, state estimation for dynamic systems with unknown inputs modeled as an autoregressive AR(1) process was considered. The main contributions are: (1) a novel algorithm for dynamic systems with unknown inputs, optimal in the MSE sense, obtained by a difference method; its computational burden is lower than that of the ASKF, and its performance is independent of the initial value of the unknown input; (2) a distributed fusion algorithm for multisensor dynamic systems with unknown inputs whose result equals the centralized difference Kalman filter using all sensor measurements.
However, it should be noted that the new algorithm uses y_k and y_{k+1} to estimate x_k, so it is delayed by one step. It can only cope with unknown inputs in the measurement equation, while the ASKF can handle unknown inputs in both the state and measurement equations. Moreover, it is assumed throughout that the unknown input d_k follows an autoregressive AR(1) process. An interesting direction for future research is to extend the difference method to dynamic systems with more general unknown inputs.

Author Contributions

Conceptualization, Y.L. and Y.Z.; Methodology, Y.L.; Software, Y.R.; Validation, Y.L. and Y.Z.; Formal Analysis, Y.R.; Investigation, Y.R.; Resources, Y.L.; Data Curation, Y.R.; Writing-Original Draft Preparation, Y.R.; Writing-Review & Editing, Y.L.; Visualization, Y.R.; Supervision, Y.L.; Project Administration, Y.Z.; Funding Acquisition, Y.L. and Y.Z.

Funding

This research was funded by the NNSF of China (61201065 and 61273074).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–55. [Google Scholar] [CrossRef]
  2. Friedland, B. Treatment of bias in recursive filtering. IEEE Trans. Autom. Control 1969, 14, 359–367. [Google Scholar] [CrossRef]
  3. Hsieh, C.-S.; Chen, F.-C. Optimal solution of the two-stage Kalman estimator. IEEE Trans. Autom. Control 1999, 44, 194–199. [Google Scholar] [CrossRef]
  4. Hsieh, C.-S. Robust two-stage Kalman filters for systems with unknown inputs. IEEE Trans. Autom. Control 2000, 45, 2374–2378. [Google Scholar] [CrossRef]
  5. Alouani, A.T.; Xia, P.; Rice, T.R.; Blair, W.D. A Two-Stage Kalman Estimator For State Estimation In The Presence of Random Bias And For Tracking Maneuvering Targets. In Proceedings of the 30th IEEE Conference on Decision and Control, Brighton, England, UK, 11–13 December 1991. [Google Scholar]
  6. Hsieh, C.S. Extension of unbiased minimum-variance input and state estimation for systems with unknown inputs. Automatica 2009, 45, 2149–2153. [Google Scholar] [CrossRef]
  7. Hsieh, C.S. Optimal filtering for systems with unknown inputs via the descriptor Kalman filtering method. Automatica 2011, 47, 2313–2318. [Google Scholar] [CrossRef]
  8. Hsieh, C.S. State estimation for descriptor systems via the unknown input filtering method. Automatica 2013, 49, 1281–1286. [Google Scholar] [CrossRef]
  9. Chong, C.Y.; Chang, K.C.; Mori, S. Distributed Tracking in Distributed Sensor Networks. In Proceedings of the 1986 American Control Conference, Seattle, WA, USA, 18–20 June 1986. [Google Scholar]
  10. Chong, C.Y.; Mori, S.; Chang, K.C. Distributed Multitarget Multisensor Tracking. In Multitarget-Multisensor Tracking: Advanced Applications; Artech House: Norwood, MA, USA, 1990. [Google Scholar]
  11. Hashmipour, H.R.; Roy, S.; Laub, A.J. Decentralized Structures for Parallel Kalman Filtering. IEEE Trans. Autom. Control 1988, 33, 88–93. [Google Scholar] [CrossRef]
  12. Zhu, Y.M.; You, Z.S.; Zhao, J.; Zhang, K.S.; Li, X.R. The Optimality for the Distributed Kalman Filter with Feedback. Automatica 2001, 37, 1489–1493. [Google Scholar] [CrossRef]
  13. Song, E.; Zhu, Y.; Zhou, J.; You, Z. Optimality Kalman Filtering fusion with cross-correlated sensor noises. Automatica 2007, 43, 1450–1456. [Google Scholar] [CrossRef]
  14. Luo, Y.; Zhu, Y.; Luo, D.; Zhou, J.; Song, E.; Wang, D. Globally Optimal Multisensor Distributed Random Parameter Matrices Kalman Filtering Fusion with Applications. Sensors 2008, 8, 8086–8103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Feng, J.; Wang, Z.; Zeng, M. Distributed weighted robust Kalman filter fusion for uncertain systems with autocorrelated and cross-correlated noises. Inf. Fusion 2013, 14, 78–86. [Google Scholar] [CrossRef]
  16. Jiang, P.; Zhou, J.; Zhu, Y. Globally optimal Kalman filtering with finite-time correlated noises. In Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, GA, USA, 15–17 December 2010. [Google Scholar]
  17. Li, F.; Zhou, J.; Wu, D. Optimal filtering for systems with finite-step autocorrelated noises and multiple packet dropouts. Aerosp. Sci. Technol. 2013, 24, 255–263. [Google Scholar] [CrossRef]
  18. Ren, L.; Luo, Y. Optimal Kalman filtering for systems with unknown inputs. In Proceedings of the 25th Chinese Control and Decision Conference, Guiyang, China, 25–27 May 2013. [Google Scholar]
  19. Sun, S.; Tian, T.; Lin, H. State estimators for systems with random parameter matrices, stochastic nonlinearities, fading measurements and correlated noises. Inf. Sci. 2017, 397–398, 118–136. [Google Scholar] [CrossRef]
  20. Tian, T.; Sun, S.; Li, N. Multi-sensor information fusion estimators for stochastic uncertain systems with correlated noises. Inf. Fusion 2016, 27, 126–137. [Google Scholar] [CrossRef]
  21. Zhu, Y.; Zhou, J.; Shen, X.; Song, E.; Luo, Y. Networked Multisensor Decision and Estimation Fusion Based on Advanced Mathematical Methods; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
Figure 1. Comparison of the six algorithms when d k is a stationary time series.
Figure 2. Comparison of the six algorithms when d k is a non-stationary time series.
Figure 3. Comparison of the three algorithms in the multisensor case.
Table 1. The computer time of the three algorithms.

Algorithm        Computer Time (seconds)
ASKF             16.163026
OTSKF            11.684104
Difference KF     9.274128
Table 2. The average tracking errors of the six methods.

Algorithm                 Average Tracking Error
KF                        3.6843
ASKF, d0 = (5, 5)^T       3.3261
OTSKF, d0 = (5, 5)^T      3.3261
ASKF, d0 = (0, 0)^T       3.5335
ASKF, d0 = (20, 20)^T     3.9229
Difference KF             3.3540
Table 3. The average tracking errors of the six methods.

Algorithm                 Average Tracking Error
KF                        11.8091
ASKF, d0 = (5, 5)^T        9.5470
OTSKF, d0 = (5, 5)^T       9.5470
ASKF, d0 = (0, 0)^T       10.4493
ASKF, d0 = (20, 20)^T     15.8539
Difference KF              9.5787
Table 4. The average tracking errors of the three methods.

Algorithm                    Average Tracking Error
CKF                          10.3720
Centralized Difference KF     9.3582
Distributed Difference KF     9.3582

