Article

Robust Distributed Kalman Filtering: On the Choice of the Local Tolerance

Department of Information Engineering, University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(11), 3244; https://doi.org/10.3390/s20113244
Submission received: 23 April 2020 / Revised: 3 June 2020 / Accepted: 4 June 2020 / Published: 7 June 2020
(This article belongs to the Section Sensor Networks)

Abstract

We propose a distributed Kalman filter for a sensor network under model uncertainty. The distributed scheme is characterized by two communication stages at each time step: in the first stage, the local units exchange their observations and then compute their local estimates; in the final stage, the local units exchange their local estimates and compute the final estimate through a diffusion scheme. Each local estimate is computed so as to be optimal according to the least favorable model belonging to a prescribed local ambiguity set. The latter is a ball, in the Kullback–Leibler topology, about the corresponding nominal local model. We propose a strategy to compute the radius, called the local tolerance, of each local ambiguity set in the sensor network, rather than keeping it constant across the network. Finally, some numerical examples show the effectiveness of the proposed scheme.

1. Introduction

Many modern problems involve a large number of sensors, forming a sensor network, that take measurements from which we would like to infer quantities not accessible to observation, possibly at each node location. These problems can be cast as filtering problems whose solution is given by the Kalman filter. On the other hand, its centralized implementation is very expensive in terms of data transmission; indeed, it requires that all sensors can exchange their measurements. Such a limitation disappears by considering distributed filtering [1,2,3,4,5,6,7,8,9], whose key idea is that the communication among the nodes is limited.
In the simplest distributed strategy, the state estimate of a node (i.e., local unit) is computed by using only the observations from its neighbors. Such a strategy, however, is not very effective. Remarkable progress has been achieved by distributed Kalman filtering with consensus [10,11,12,13] and diffusion strategies [14,15]. These distributed approaches are characterized by several communication stages during each time step. For instance, in the first stage, the local units exchange their observations and then compute their local estimates; in the final stage, the local units exchange their local estimates and compute the final estimate using a consensus or a diffusion scheme. Many important challenges have been addressed in distributed filtering. For instance, the issues of limited observability, network topologies that restrict allowable communications, and communication noise between sensors are considered in [16]; the case in which the sensor network is subject to transmission delays is considered in [17]; and the cases of missing measurements and absence of communication among the nodes are analyzed in [18,19], respectively. It is worth observing that distributed state estimation can also be performed through different principles [20,21,22,23,24,25]. For instance, each node can transmit its measurements to a fusion center, which then computes the state estimate.
An important aspect in filtering applications is that the nominal model does not correspond to the actual one. Risk sensitive Kalman filtering [26,27,28] addresses this problem by penalizing large errors. The severity of this penalization is tuned by the so-called risk sensitivity parameter: the larger the risk sensitivity parameter is, the more large errors are penalized. A refinement of these filters is given by robust Kalman filtering, where the uncertainty is expressed incrementally [29,30,31,32]. More precisely, at each time step, the state estimator minimizes the prediction error according to the least favorable model belonging to a prescribed ambiguity set. The latter is a ball in the Kullback–Leibler (KL) topology whose center is the nominal model. The radius of this ball is called the tolerance, and it represents the discrepancy budget between the actual and the nominal model allowed for the corresponding time step. It is worth noting that the ambiguity set can also be formed using different types of divergence (see, for instance, [33,34,35]).
The problem of distributed Kalman filtering under model uncertainty has been considered as well (see, e.g., [36,37,38]). In the present paper, we consider the distributed robust Kalman filter with diffusion step proposed in [39], where the local estimate of each node is computed using the robust Kalman filter of [29]. In this scenario, the least favorable model is the one used to compute the robust Kalman filter of the global model. Accordingly, we have one ambiguity set corresponding to the global model, which contains the actual model, and the local ambiguity sets corresponding to the nodes of the network. The centers of those balls are known because they are given by the nominal model. The local tolerances corresponding to the local ambiguity sets of the nodes are set equal to the one of the global ambiguity set. On the other hand, the local ambiguity set of a node corresponds to a local model which is just a part of the global model. Accordingly, taking all the local tolerances uniform across the network and equal to the global one may not be the best choice.
The main contribution of this paper is to propose a robust distributed Kalman filter as in [39] in which the local tolerance of each node is customized. In this way, the local tolerance is non-uniform and time-varying across the network. We show through some simulation studies that the performance of the predictions is improved. Moreover, we show that, if the tolerance corresponding to the global ambiguity set is sufficiently small, then the local tolerances across the network converge to constant values at steady state. Accordingly, it is also possible to simplify the distributed scheme by replacing the time-varying tolerances with their steady-state values.
The organization of this paper is as follows. In Section 2, we provide the background about robust Kalman filtering whose uncertainty is expressed incrementally. In Section 3, the distributed robust Kalman filter with local uniform tolerance is reviewed. In Section 4, we introduce the distributed robust Kalman filter with non-uniform local tolerance. In Section 5, we perform some numerical experiments to check the performance of the proposed distributed scheme. In Section 6, we propose an efficient approximation of the distributed robust Kalman filter with non-uniform local tolerance. Finally, in Section 7, we draw the conclusions.
Notation: $[a_{ij}]_{ij}$ denotes the matrix having entry $a_{ij}$ in position $(i,j)$; $A^T$ is the transpose of matrix $A$; and $A > 0$ ($A \ge 0$) means that matrix $A$ is positive (semi)definite. $\operatorname{diag}(A_1 \dots A_n)$ denotes a block-diagonal matrix whose blocks on the main block diagonal are $A_1 \dots A_n$. Given a square matrix $A$, $\operatorname{tr}(A)$ and $|A|$ denote the trace and the determinant of $A$, respectively. Given two matrices $A$ and $B$, $A \otimes B$ denotes their Kronecker product. $x \sim \mathcal{N}(m, K)$ means that $x$ is a Gaussian random vector with mean $m$ and covariance matrix $K$.

2. Background

In this section, we review the robust Kalman filter proposed in [29], which represents the “building block” used throughout the paper. Consider the nominal state-space model
$$x_{t+1} = A x_t + \Gamma_B u_t + r_t, \qquad y_t = C x_t + \Gamma_D u_t$$
where $A \in \mathbb{R}^{n \times n}$, $\Gamma_B \in \mathbb{R}^{n \times (n + pN)}$, $C \in \mathbb{R}^{pN \times n}$, $\Gamma_D \in \mathbb{R}^{pN \times (n + pN)}$, $x_t$ is the state process, $y_t$ is the observation process, $u_t$ is normalized white Gaussian noise (WGN), and $r_t$ is a deterministic signal. It is assumed that $u_t$ is independent of the initial state $x_0 \sim \mathcal{N}(\hat x_0, V_0)$. We also assume that the noise entering the state process and the one entering the observation process are independent, i.e., $\Gamma_B \Gamma_D^T = 0$. Finally, the state-space model in Equation (1) is assumed to be reachable and observable. Let $\phi_t(z_t|x_t)$ denote the nominal transition probability density of $z_t := [\, x_{t+1}^T \ y_t^T \,]^T$ given $x_t$. Notice that $\phi_t(z_t|x_t)$ is Gaussian by construction and is straightforwardly obtained from Equation (1).
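Before introducing the robust filter, it may help to fix ideas with a minimal simulation sketch of the nominal model in Equation (1) (our illustration, not code from [29]; the function and argument names are ours):

```python
# A minimal sketch (ours): simulating the nominal model in Equation (1).
# Note that the state and output equations share the same noise sample u_t.
import numpy as np

def simulate_nominal(A, Gamma_B, C, Gamma_D, r, x0, T, seed=0):
    """x_{t+1} = A x_t + Gamma_B u_t + r_t,  y_t = C x_t + Gamma_D u_t."""
    rng = np.random.default_rng(seed)
    m = Gamma_B.shape[1]
    x, xs, ys = x0, [], []
    for t in range(T):
        u = rng.standard_normal(m)       # normalized WGN u_t
        ys.append(C @ x + Gamma_D @ u)   # measurement taken before the state update
        x = A @ x + Gamma_B @ u + r[t]   # state update with deterministic signal r_t
        xs.append(x)
    return np.array(xs), np.array(ys)
```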
We assume that the (unknown) actual transition probability density $\tilde \phi_t(z_t|x_t)$ belongs to the ambiguity set, which is a closed ball centered at $\phi_t(z_t|x_t)$ in the KL topology:
$$\mathcal{B}_t := \left\{ \tilde\phi_t \ \mathrm{s.t.}\ \tilde{\mathbb{E}}\left[ \log\frac{\tilde\phi_t}{\phi_t}\,\Big|\, Y_{t-1} \right] \le c \right\}$$
with
$$\tilde{\mathbb{E}}\left[ \log\frac{\tilde\phi_t}{\phi_t}\,\Big|\, Y_{t-1} \right] := \int \tilde\phi_t(z_t|x_t)\, \check f_t(x_t|Y_{t-1}) \log\frac{\tilde\phi_t(z_t|x_t)}{\phi_t(z_t|x_t)}\, \mathrm{d}z_t\, \mathrm{d}x_t,$$
$Y_{t-1} := \{\, y_s,\ s = 0 \dots t-1 \,\}$, and $\check f_t(x_t|Y_{t-1}) \sim \mathcal{N}(\hat x_t, V_t)$ is defined as the actual conditional probability density of $x_t$ given $Y_{t-1}$. The modeling mismatch budget allowed for each time step is represented by the parameter $c > 0$, which is called the tolerance. The robust estimator of $x_{t+1}$ given $Y_t$ for the nominal model in Equation (1) is obtained by solving the following minimax problem:
$$\hat x_{t+1} = \operatorname*{argmin}_{g_t \in \mathcal{G}_t}\ \max_{\tilde\phi_t \in \mathcal{B}_t} \tilde{\mathbb{E}}\left[ \| x_{t+1} - g_t(y_t) \|^2 \,\big|\, Y_{t-1} \right]$$
where $\mathcal{G}_t$ is the set of all estimators $g_t$ whose variance is finite under any model in the ambiguity set $\mathcal{B}_t$, and
$$\tilde{\mathbb{E}}\left[ \| x_{t+1} - g_t(y_t) \|^2 \,\big|\, Y_{t-1} \right] := \int \| x_{t+1} - g_t(y_t) \|^2\, \tilde\phi_t(z_t|x_t)\, \check f_t(x_t|Y_{t-1})\, \mathrm{d}z_t\, \mathrm{d}x_t$$
is the estimation error under the transition density $\tilde\phi_t(z_t|x_t)$. In [29], it is proved that the estimator solving the problem in Equation (3) has the following Kalman-like structure:
$$\begin{aligned}
G_t &= A V_t C^T \left( C V_t C^T + \Gamma_D \Gamma_D^T \right)^{-1}\\
\hat x_{t+1} &= A \hat x_t + G_t \left( y_t - C \hat x_t \right) + r_t\\
P_{t+1} &= A \left( V_t^{-1} + C^T (\Gamma_D \Gamma_D^T)^{-1} C \right)^{-1} A^T + \Gamma_B \Gamma_B^T\\
&\text{Find } \theta_t \ \mathrm{s.t.}\ \gamma(P_{t+1}, \theta_t) = c\\
V_{t+1} &= \left( P_{t+1}^{-1} - \theta_t I \right)^{-1}
\end{aligned}$$
where
$$\gamma(P, \theta) := \log\det\left( I - \theta P \right) + \operatorname{tr}\!\left( \left( I - \theta P \right)^{-1} - I \right).$$
The parameter $\theta_t > 0$ is called the risk sensitivity parameter. It is worth noting that, given $P > 0$ and $c > 0$, the equation $\gamma(P, \theta) = c$ always admits a unique solution $\theta$ such that $\theta > 0$ and $P^{-1} - \theta I > 0$. In the special case $c = 0$, i.e., when the nominal model coincides with the actual model, we have $\theta_t = 0$ and thus Equation (4) reduces to the usual Kalman filter.
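Since $\gamma(P, \cdot)$ is continuous and strictly increasing on $(0, 1/\lambda_{\max}(P))$, vanishes at $\theta = 0$, and diverges at the right endpoint, the equation $\gamma(P, \theta) = c$ can be solved numerically by bisection. A minimal sketch (ours, assuming the reconstruction of $\gamma$ given above):

```python
# A minimal sketch (ours) of solving gamma(P, theta) = c by bisection
# on the interval (0, 1/lambda_max(P)), where gamma is strictly increasing.
import numpy as np

def gamma(P, theta):
    """gamma(P, theta) = log det(I - theta P) + tr((I - theta P)^{-1} - I)."""
    n = P.shape[0]
    M = np.eye(n) - theta * P
    _, logdet = np.linalg.slogdet(M)
    return logdet + np.trace(np.linalg.inv(M)) - n

def solve_theta(P, c, tol=1e-12):
    """Unique theta > 0 with gamma(P, theta) = c, for P > 0 and c > 0."""
    lo = 0.0
    hi = (1.0 - 1e-12) / np.max(np.linalg.eigvalsh(P))  # keep I - theta P > 0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gamma(P, mid) < c else (lo, mid)
    return 0.5 * (lo + hi)
```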
Remark 1.
It is worth noting that the robust Kalman filter is well defined also in the case that the ambiguity set of Equation (2) is defined by a time-varying tolerance, i.e., $c_t$ instead of $c$. However, we prefer to keep $c$ constant in Equation (3) because, in order to simplify the setup, in what follows we assume that the actual (global) model is the least favorable solution to Equation (3) with constant tolerance $c$.

3. Distributed Robust Kalman Filtering with Uniform Local Tolerance

In this section, we review the distributed robust Kalman filter presented in [39]. Consider a network of $N$ sensors. Two sensors are connected if they can communicate with each other. Accordingly, every sensor $k$ has a set of neighbors, denoted by $\mathcal{N}_k$. In particular, $k \in \mathcal{N}_k$; that is, each node is connected with itself. The number of neighbors of node $k$ is denoted by $n_k$. The corresponding $N \times N$ adjacency matrix $J = [j_{lk}]_{lk}$ is defined as
$$j_{lk} := \begin{cases} 1, & \text{if } l \in \mathcal{N}_k\\ 0, & \text{otherwise.} \end{cases}$$
We assume that every node collects a measurement $y_{k,t} \in \mathbb{R}^p$ at time $t$ and that the corresponding nominal state-space model is
$$x_{t+1} = A x_t + B w_t + r_t, \qquad y_{k,t} = C_k x_t + D_k v_{k,t}, \qquad k = 1 \dots N$$
where $w_t$ and $v_{k,t}$, with $k = 1 \dots N$, are independent normalized WGNs. It is worth noting that the actual state-space model of each node is unknown. By stacking Equation (5) for every $k$, the sensor network can be rewritten as Equation (1), where:
$$y_t = \begin{bmatrix} y_{1,t}\\ \vdots\\ y_{N,t} \end{bmatrix}, \quad u_t = \begin{bmatrix} w_t\\ v_t \end{bmatrix}, \quad v_t = \begin{bmatrix} v_{1,t}\\ \vdots\\ v_{N,t} \end{bmatrix}, \quad \Gamma_B = \begin{bmatrix} B & 0 \end{bmatrix}, \quad \Gamma_D = \begin{bmatrix} 0 & D \end{bmatrix}, \quad C = \begin{bmatrix} C_1\\ \vdots\\ C_N \end{bmatrix}, \quad D = \operatorname{diag}(D_1, \dots, D_N).$$
Accordingly, Equation (4) represents the centralized robust Kalman filter. Defining $R := D D^T$, $R_l := D_l D_l^T$ with $l = 1 \dots N$, and
$$S_{tot} := C^T R^{-1} C = \sum_{l=1}^{N} C_l^T R_l^{-1} C_l,$$
the Kalman gain for Equation (5) becomes, using the matrix inversion lemma,
$$G_t = A \left( V_t^{-1} + S_{tot} \right)^{-1} C^T R^{-1}.$$
Since the nominal model in Equation (5) does not coincide with the actual one, and each node $k$ can only exploit the information shared by its neighbors $l \in \mathcal{N}_k$, the aim of distributed robust Kalman filtering is to compute a prediction $\hat x_{k,t}$ of the state $x_t$ for every node $k$ using only the local information, while taking the model uncertainty into account. In the case that node $k$ has access to all the measurements across the network, $\hat x_{k,t}$ coincides with Equation (4), which can be written, using the parameterization in Equations (6) and (7), as
$$\begin{aligned}
\hat x_{k,t+1} &= A \hat x_{k,t} + A \left( V_{k,t}^{-1} + S_{tot} \right)^{-1} \sum_{l=1}^{N} C_l^T R_l^{-1} \left( y_{l,t} - C_l \hat x_{k,t} \right) + r_t\\
P_{k,t+1} &= A \left( V_{k,t}^{-1} + S_{tot} \right)^{-1} A^T + B B^T\\
&\text{Find } \theta_{k,t} \ \mathrm{s.t.}\ \gamma(P_{k,t+1}, \theta_{k,t}) = c\\
V_{k,t+1} &= \left( P_{k,t+1}^{-1} - \theta_{k,t} I \right)^{-1}
\end{aligned}$$
where $\hat x_{k,t} = \hat x_t$, $P_{k,t} = P_t$, $V_{k,t} = V_t$, and $\theta_{k,t} = \theta_t$. In the case that not all the measurements in the network are accessible to node $k$, the target is to compute a state prediction $\hat x_{k,t}$ of $x_t$ that is as close as possible to the global state prediction.
Assume that node $k$ can collect the measurements from its neighbors $\mathcal{N}_k$. Then, the corresponding local nominal state-space model is
$$x_{t+1} = A x_t + B w_t + r_t, \qquad y_{l,t} = C_l x_t + D_l v_{l,t}, \qquad l \in \mathcal{N}_k.$$
The latter can be rewritten in the compact form
$$x_{t+1} = A x_t + \Gamma_B u_{k,t}^{loc} + r_t, \qquad y_{k,t}^{loc} = C_k^{loc} x_t + \Gamma_{D_k}^{loc} u_{k,t}^{loc}$$
where $u_{k,t}^{loc} = [\, w_t^T \ (v_{k,t}^{loc})^T \,]^T$ is the input noise and $y_{k,t}^{loc}$ is the output; $v_{k,t}^{loc}$ and $y_{k,t}^{loc}$ are given by stacking $v_{l,t}$ and $y_{l,t}$, with $l \in \mathcal{N}_k$, respectively. Moreover, $C_k^{loc}$ is given by stacking $C_l$ with $l \in \mathcal{N}_k$, $\Gamma_{D_k}^{loc} = [\, 0 \ \ D_k^{loc} \,]$, and $D_k^{loc}$ is a block-diagonal matrix whose main blocks are $D_l$ with $l \in \mathcal{N}_k$. In addition, defining $R_k^{loc} := D_k^{loc} (D_k^{loc})^T$ and $S_k := (C_k^{loc})^T (R_k^{loc})^{-1} C_k^{loc}$, it follows that
$$S_k = \sum_{l \in \mathcal{N}_k} C_l^T R_l^{-1} C_l.$$
We conclude that the one-step-ahead predictor of $x_t$ at node $k$ is similar to the one in Equation (8), but now we need to discard the terms for which $l \notin \mathcal{N}_k$. It is worth noting that the latter represents an intermediate local prediction of $x_{t+1}$ at node $k$, which we denote by $\psi_{k,t+1}$. Since connected nodes can exchange their intermediate predictions, each node can update the prediction at node $k$ in terms of both $\psi_{k,t+1}$ and $\psi_{l,t+1}$ with $l \in \mathcal{N}_k$. More precisely, consider a matrix $W = [w_{lk}]_{lk} \in \mathbb{R}^{N \times N}$ such that
$$w_{lk} \ge 0, \qquad w_{lk} = 0 \ \text{ if } l \notin \mathcal{N}_k, \qquad \sum_{l \in \mathcal{N}_k} w_{lk} = 1.$$
Therefore, the final predicted state at node $k$ is given by means of the so-called diffusion step [14]:
$$\hat x_{k,t+1} = \sum_{l \in \mathcal{N}_k} w_{lk}\, \psi_{l,t+1}.$$
To sum up, in the diffusion scheme, each local unit uses the measurements and the intermediate local predictions from its neighbors. The resulting scheme is explained through Algorithm 1.
Algorithm 1 Distributed robust Kalman filter with uniform local tolerance at time t.
  • Input: $\hat x_{k,t}$, $V_{k,t}$, $y_{k,t}$, $W = [w_{lk}]_{lk}$, with $k = 1 \dots N$
  • Output: $\hat x_{k,t+1}$, $V_{k,t+1}$, with $k = 1 \dots N$
  • Incremental step. Compute at every node k:
    $$\begin{aligned}
    \psi_{k,t+1} &= A \hat x_{k,t} + A \left( V_{k,t}^{-1} + S_k \right)^{-1} \sum_{l \in \mathcal{N}_k} C_l^T R_l^{-1} \left( y_{l,t} - C_l \hat x_{k,t} \right) + r_t\\
    P_{k,t+1} &= A \left( V_{k,t}^{-1} + S_k \right)^{-1} A^T + B B^T\\
    &\text{Find } \theta_{k,t} \ \mathrm{s.t.}\ \gamma(P_{k,t+1}, \theta_{k,t}) = c\\
    V_{k,t+1} &= \left( P_{k,t+1}^{-1} - \theta_{k,t} I \right)^{-1}
    \end{aligned}$$
  • Diffusion step. Compute at every node k:
    $$\hat x_{k,t+1} = \sum_{l \in \mathcal{N}_k} w_{lk}\, \psi_{l,t+1}$$
It is worth noting that $\psi_{k,t}$ is computed by applying the robust Kalman scheme in Equation (4) to the local model in Equation (10). In addition, $c$ is the same for every node; that is, $c$ takes a uniform value over the sensor network. In particular, the tolerance $c$ is the same for both the centralized and the distributed Kalman filter. This strategy for selecting the tolerance does not ensure that the least favorable model computed at node $k$ is compatible with the one of the centralized filter. However, in the case of large deviations of the least favorable model corresponding to the centralized problem, it is very likely that the predictor at node $k$ using Algorithm 1 is better than the one that assumes that the nominal and actual models coincide. Finally, in the case $c = 0$, i.e., when the nominal model coincides with the actual one, Algorithm 1 boils down to the distributed Kalman filter with diffusion step of [14].
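To make the scheme concrete, here is a sketch of one time step of Algorithm 1 at node $k$ (our rendering, reusing `solve_theta` from Section 2; the stacked local quantities are assumed to be precomputed):

```python
# A minimal sketch (ours) of one time step of Algorithm 1 at node k.
# C_loc stacks the C_l and R_loc_inv is the block-diagonal inverse of the
# R_l, l in N_k, so that S_k = C_loc.T @ R_loc_inv @ C_loc.
import numpy as np

def incremental_step(A, B, r_t, C_loc, R_loc_inv, x_hat, V, y_loc, c):
    n = A.shape[0]
    S_k = C_loc.T @ R_loc_inv @ C_loc
    M = np.linalg.inv(np.linalg.inv(V) + S_k)
    psi = A @ x_hat + A @ M @ C_loc.T @ R_loc_inv @ (y_loc - C_loc @ x_hat) + r_t
    P_next = A @ M @ A.T + B @ B.T
    theta = solve_theta(P_next, c)                   # local robustification
    V_next = np.linalg.inv(np.linalg.inv(P_next) - theta * np.eye(n))
    return psi, V_next

def diffusion_step(psis, w):
    """x_hat_{k,t+1} = sum over the neighbors l of w_{lk} psi_{l,t+1}."""
    return sum(w_lk * psi_l for w_lk, psi_l in zip(w, psis))
```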

4. Distributed Robust Kalman Filtering with Non-Uniform Local Tolerance

We investigate the possibility of assigning a different local tolerance to each node; that is, the local tolerance is not uniform across the sensor network. Recall that the least favorable model is given by the minimax problem in Equation (3), with constant tolerance $c$, and the corresponding optimal estimator is given by the centralized robust Kalman filter in Equation (4).
Consider the centralized problem in Equation (3). Let
$$\bar f_t(z_t|Y_{t-1}) = \int \phi_t(z_t|x_t)\, \check f_t(x_t|Y_{t-1})\, \mathrm{d}x_t$$
$$\tilde f_t(z_t|Y_{t-1}) = \int \tilde\phi_t(z_t|x_t)\, \check f_t(x_t|Y_{t-1})\, \mathrm{d}x_t$$
denote the pseudo-nominal and the least favorable conditional probability densities of $z_t$ given the past observations $Y_{t-1}$, respectively. Recall that $\phi_t(z_t|x_t)$ is the nominal transition density of the state-space model in Equation (1), and thus
$$\phi_t(z_t|x_t) \sim \mathcal{N}\!\left( \begin{bmatrix} A x_t + r_t\\ C x_t \end{bmatrix},\ \begin{bmatrix} \Gamma_B\\ \Gamma_D \end{bmatrix} \begin{bmatrix} \Gamma_B^T & \Gamma_D^T \end{bmatrix} \right).$$
Since $\check f_t(x_t|Y_{t-1}) \sim \mathcal{N}(\hat x_t, V_t)$, and in view of Equations (14) and (16), we have
$$\bar f_t(z_t|Y_{t-1}) \sim \mathcal{N}( m_{z_t}, K_{z_t} )$$
where
$$m_{z_t} = \begin{bmatrix} A \hat x_t + r_t\\ C \hat x_t \end{bmatrix}, \qquad K_{z_t} = \begin{bmatrix} A\\ C \end{bmatrix} V_t \begin{bmatrix} A^T & C^T \end{bmatrix} + \begin{bmatrix} \Gamma_B\\ \Gamma_D \end{bmatrix} \begin{bmatrix} \Gamma_B^T & \Gamma_D^T \end{bmatrix}.$$
In [29], it has been shown that the optimal solution $\tilde\phi_t^0(z_t|x_t)$ to Equation (3) is Gaussian. Accordingly, in view of Equation (15), the corresponding least favorable density of $z_t$ given $Y_{t-1}$ is Gaussian:
$$\tilde f_t(z_t|Y_{t-1}) \sim \mathcal{N}( \tilde m_{z_t}, \tilde K_{z_t} ).$$
It is then clear that the minimax problem in Equation (3) can be written by replacing $\phi_t(z_t|x_t)$ and $\tilde\phi_t(z_t|x_t)$ with $\bar f_t(z_t|Y_{t-1})$ and $\tilde f_t(z_t|Y_{t-1})$, respectively. The equivalent minimax problem is
$$\hat x_{t+1} = \operatorname*{argmin}_{g_t \in \mathcal{G}_t}\ \max_{\tilde f_t \in \bar{\mathcal{B}}_t} \int \| x_{t+1} - g_t(y_t) \|^2\, \tilde f_t(z_t|Y_{t-1})\, \mathrm{d}z_t$$
where the ambiguity set is a ball about the pseudo-nominal density $\bar f_t(z_t|Y_{t-1})$,
$$\bar{\mathcal{B}}_t = \left\{ \tilde f_t(z_t|Y_{t-1}) \sim \mathcal{N}( \tilde m_{z_t}, \tilde K_{z_t} ) \ \mathrm{s.t.}\ D_{KL}\left( \tilde f_t \,\|\, \bar f_t \right) \le c \right\}$$
formed by means of the KL divergence between $\tilde f_t(z_t|Y_{t-1})$ and $\bar f_t(z_t|Y_{t-1})$:
$$D_{KL}\left( \tilde f_t \,\|\, \bar f_t \right) = \int \tilde f_t(z_t|Y_{t-1}) \log \frac{\tilde f_t(z_t|Y_{t-1})}{\bar f_t(z_t|Y_{t-1})}\, \mathrm{d}z_t.$$
Since $\tilde f_t(z_t|Y_{t-1})$ and $\bar f_t(z_t|Y_{t-1})$ are Gaussian, we have
$$D_{KL}\left( \tilde f_t \,\|\, \bar f_t \right) = \frac{1}{2} \left\| m_{z_t} - \tilde m_{z_t} \right\|^2_{K_{z_t}^{-1}} + \frac{1}{2} \left( -\log|\tilde K_{z_t}| + \log|K_{z_t}| + \operatorname{tr}\left( \tilde K_{z_t} K_{z_t}^{-1} \right) - (n + p) \right).$$
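In code, this closed form reads as follows (a sketch, ours, with `d` the dimension of $z_t$):

```python
# A minimal sketch (ours) of the closed-form KL divergence between the
# Gaussians f_tilde ~ N(m_tilde, K_tilde) and f_bar ~ N(m, K).
import numpy as np

def kl_gauss(m_tilde, K_tilde, m, K):
    d = K.shape[0]
    K_inv = np.linalg.inv(K)
    dm = m - m_tilde
    _, logdet_K = np.linalg.slogdet(K)
    _, logdet_Kt = np.linalg.slogdet(K_tilde)
    return 0.5 * (dm @ K_inv @ dm
                  - logdet_Kt + logdet_K + np.trace(K_tilde @ K_inv) - d)
```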
It is well known that $D_{KL}(\tilde f_t \| \bar f_t)$ also represents the negative log-likelihood of the model $\bar f_t$ under the actual model $\tilde f_t$ [40,41,42]. Accordingly, $c$ represents an upper bound on the negative log-likelihood, and it can be found as follows. Fix the nominal state-space model $(A, B, C, D)$ and collect the data $(\mathrm{y}^N, \mathrm{u}^N, \mathrm{x}^N)$, where $\mathrm{y}^N = \{ y_1 \dots y_N \}$, $\mathrm{u}^N = \{ u_1 \dots u_N \}$, $\mathrm{x}^N = \{ x_1 \dots x_N \}$. Let $\ell(A, B, C, D;\, \mathrm{y}^N, \mathrm{u}^N, \mathrm{x}^N)$ be the negative log-likelihood of this nominal model given the collected data. Then, fix $c = \ell(A, B, C, D;\, \mathrm{y}^N, \mathrm{u}^N, \mathrm{x}^N)$. Clearly, we need to assume that the state is accessible to observation (or that its estimate is reasonably good) in order to compute $c$.
Theorem 1
(Levy & Nikoukhah [30]). Let $\bar f_t(z_t|Y_{t-1})$ be the nominal density with mean $m_{z_t}$ and covariance matrix $K_{z_t}$ partitioned as
$$K_{z_t} = \begin{bmatrix} K_{x_{t+1}} & K_{x_{t+1}, y_t}\\ K_{y_t, x_{t+1}} & K_{y_t} \end{bmatrix}$$
according to the dimensions of $x_{t+1}$ and $y_t$, respectively. The least favorable density $\tilde f_t^0(z_t|Y_{t-1})$ solving Equation (18) has mean and covariance matrix as follows:
$$\tilde m_{z_t} = m_{z_t}, \qquad \tilde K_{z_t} = \begin{bmatrix} \tilde K_{x_{t+1}} & K_{x_{t+1}, y_t}\\ K_{y_t, x_{t+1}} & K_{y_t} \end{bmatrix}.$$
Let
$$P_{t+1} = K_{x_{t+1}} - K_{x_{t+1}, y_t} K_{y_t}^{-1} K_{y_t, x_{t+1}}, \qquad V_{t+1} = \tilde K_{x_{t+1}} - K_{x_{t+1}, y_t} K_{y_t}^{-1} K_{y_t, x_{t+1}}$$
denote the nominal and least favorable error covariance matrices of $x_{t+1}$ given $Y_t$. Then,
$$V_{t+1} = \left( P_{t+1}^{-1} - \theta_t I \right)^{-1}$$
and $\theta_t > 0$ is the unique value for which
$$D_{KL}\left( \tilde f_t^0 \,\|\, \bar f_t \right) = \frac{1}{2} \left( -\log|\tilde K_{z_t}| + \log|K_{z_t}| + \operatorname{tr}\left( \tilde K_{z_t} K_{z_t}^{-1} \right) - (n + p) \right) = c.$$
The above result provides a way to compute $\tilde f_t^0(z_t|Y_{t-1})$. Indeed, once the centralized robust Kalman filter in Equation (4) has been computed, the mean and covariance matrix of $\tilde f_t^0(z_t|Y_{t-1})$ are given, in view of Equation (17), by
$$\tilde m_{z_t} = \begin{bmatrix} A \hat x_t + r_t\\ C \hat x_t \end{bmatrix}, \qquad \tilde K_{z_t} = \begin{bmatrix} V_{t+1} + K_{x_{t+1}, y_t} K_{y_t}^{-1} K_{y_t, x_{t+1}} & K_{x_{t+1}, y_t}\\ K_{y_t, x_{t+1}} & K_{y_t} \end{bmatrix}$$
where
$$K_{x_{t+1}, y_t} = A V_t C^T, \qquad K_{y_t} = C V_t C^T + \Gamma_D \Gamma_D^T.$$
From $\bar f_t(z_t|Y_{t-1})$ and $\tilde f_t^0(z_t|Y_{t-1})$, we can compute the nominal and least favorable densities for each node. Consider the state-space model in Equation (10) for node $k$. Let $z_{k,t} := [\, x_{t+1}^T \ (y_{k,t}^{loc})^T \,]^T$. Then, the nominal transition probability density at node $k$, in view of Equation (10), is
$$\phi_{k,t}(z_{k,t}|x_t) \sim \mathcal{N}\!\left( \begin{bmatrix} A x_t + r_t\\ C_k^{loc} x_t \end{bmatrix},\ \begin{bmatrix} \Gamma_B\\ \Gamma_{D_k}^{loc} \end{bmatrix} \begin{bmatrix} \Gamma_B^T & (\Gamma_{D_k}^{loc})^T \end{bmatrix} \right).$$
Then,
$$\bar f_{k,t}(z_{k,t}|Y_{t-1}) = \int \phi_{k,t}(z_{k,t}|x_t)\, \check f_t(x_t|Y_{t-1})\, \mathrm{d}x_t$$
denotes the pseudo-nominal conditional probability density of $z_{k,t}$ given the past observations $Y_{t-1}$ at node $k$. Since $\check f_t(x_t|Y_{t-1}) \sim \mathcal{N}(\hat x_t, V_t)$, and in view of Equation (23), we have
$$\bar f_{k,t}(z_{k,t}|Y_{t-1}) \sim \mathcal{N}( m_{z_{k,t}}, K_{z_{k,t}} )$$
where
$$m_{z_{k,t}} = \begin{bmatrix} A \hat x_t + r_t\\ C_k^{loc} \hat x_t \end{bmatrix}, \qquad K_{z_{k,t}} = \begin{bmatrix} K_{x_{t+1}} & K_{x_{t+1}, y_{k,t}}\\ K_{y_{k,t}, x_{t+1}} & K_{y_{k,t}} \end{bmatrix}$$
and
$$K_{x_{t+1}, y_{k,t}} = A V_t (C_k^{loc})^T, \qquad K_{y_{k,t}} = C_k^{loc} V_t (C_k^{loc})^T + \Gamma_{D_k}^{loc} (\Gamma_{D_k}^{loc})^T.$$
Such a result is not surprising: indeed, $\bar f_{k,t}(z_{k,t}|Y_{t-1})$ is given by marginalizing $\bar f_t(z_t|Y_{t-1})$ with respect to $y_{l,t}$ with $l \notin \mathcal{N}_k$. Roughly speaking, this means that $m_{z_{k,t}}$, $K_{x_{t+1}, y_{k,t}}$ and $K_{y_{k,t}}$ are obtained from $m_{z_t}$, $K_{x_{t+1}, y_t}$ and $K_{y_t}$ as follows (a sketch in code is given after the list):
  • $m_{z_{k,t}}$ is the vector obtained from $m_{z_t}$ by deleting the elements from $pl - p + 1$ to $pl$ for any $l \notin \mathcal{N}_k$.
  • $K_{x_{t+1}, y_{k,t}}$ is the matrix obtained from $K_{x_{t+1}, y_t}$ by deleting the columns from $pl - p + 1$ to $pl$ for any $l \notin \mathcal{N}_k$.
  • $K_{y_{k,t}}$ is the matrix obtained from $K_{y_t}$ by deleting the rows and the columns from $pl - p + 1$ to $pl$ for any $l \notin \mathcal{N}_k$.
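A sketch of this selection in code (ours; `keep` collects the 0-based indices of the nodes in $\mathcal{N}_k$ and `p` is the dimension of each $y_{l,t}$):

```python
# A minimal sketch (ours) of the marginalization above: a Gaussian marginal
# is obtained by deleting the rows/columns of the discarded measurements.
import numpy as np

def marginalize(m_z, K_xy, K_y, p, keep):
    idx = np.concatenate([np.arange(l * p, (l + 1) * p) for l in keep])
    n = K_xy.shape[0]                                # dimension of x_{t+1}
    m_loc = np.concatenate([m_z[:n], m_z[n:][idx]])  # keep the x-part and y_l, l in N_k
    return m_loc, K_xy[:, idx], K_y[np.ix_(idx, idx)]
```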
Accordingly, we can compute the least favorable density at node $k$, say $\tilde f_{k,t}^0(z_{k,t}|Y_{t-1})$, by marginalizing $\tilde f_t^0(z_t|Y_{t-1})$ with respect to $y_{l,t}$ with $l \notin \mathcal{N}_k$. Therefore, we have
$$\tilde f_{k,t}^0(z_{k,t}|Y_{t-1}) \sim \mathcal{N}( m_{z_{k,t}}, \tilde K_{z_{k,t}} )$$
with
$$\tilde K_{z_{k,t}} = \begin{bmatrix} \tilde K_{x_{t+1}} & K_{x_{t+1}, y_{k,t}}\\ K_{y_{k,t}, x_{t+1}} & K_{y_{k,t}} \end{bmatrix} = \begin{bmatrix} V_{t+1} + K_{x_{t+1}, y_t} K_{y_t}^{-1} K_{y_t, x_{t+1}} & K_{x_{t+1}, y_{k,t}}\\ K_{y_{k,t}, x_{t+1}} & K_{y_{k,t}} \end{bmatrix}$$
where in the last equality we exploit Equation (22). It remains to design the robust filter that computes the intermediate prediction $\psi_{k,t+1}$.
Remark 2.
At this point, it is worth making a digression about Algorithm 1. The intermediate prediction at node $k$ is the solution to the following minimax problem:
$$\psi_{k,t+1} = \operatorname*{argmin}_{g_{k,t} \in \mathcal{G}_{k,t}}\ \max_{\tilde f_{k,t} \in \bar{\mathcal{B}}_{k,t}} \int \| x_{t+1} - g_{k,t}(y_{k,t}^{loc}) \|^2\, \tilde f_{k,t}(z_{k,t}|Y_{t-1})\, \mathrm{d}z_{k,t}$$
where
$$\bar{\mathcal{B}}_{k,t} := \left\{ \tilde f_{k,t} \ \mathrm{s.t.}\ D_{KL}\left( \tilde f_{k,t} \,\|\, \bar f_{k,t} \right) \le c \right\}$$
and $\mathcal{G}_{k,t}$ is the set of all estimators $g_{k,t}$ whose variance is finite under any model in the ambiguity set $\bar{\mathcal{B}}_{k,t}$. Moreover, in view of Theorem 1, the least favorable density $\tilde f_{k,t}(z_{k,t}|Y_{t-1})$ solving Equation (26) is such that $D_{KL}( \tilde f_{k,t} \| \bar f_{k,t} ) = c$. It is worth noting that the best estimator at node $k$ would be the one constructed from $\tilde f_{k,t}^0$. On the other hand, the problem in Equation (26) implies neither $\tilde f_{k,t}^0 = \tilde f_{k,t}$ nor $D_{KL}( \tilde f_{k,t} \| \bar f_{k,t} ) = D_{KL}( \tilde f_{k,t}^0 \| \bar f_{k,t} )$.
Clearly, one would like to design the intermediate estimator at node $k$ using $\tilde f_{k,t}^0$. However, the latter is not available at node $k$; it is only known by a "central unit", i.e., a unit that knows the global model but neither collects measurements nor computes predictions. Moreover, transmitting the mean and covariance matrix of $\tilde f_{k,t}^0$ would be expensive in terms of transmission costs. As an alternative, we can consider a minimax problem whose least favorable model $\tilde f_{k,t}$ is such that $D_{KL}( \tilde f_{k,t} \| \bar f_{k,t} ) = D_{KL}( \tilde f_{k,t}^0 \| \bar f_{k,t} )$:
$$\psi_{k,t+1} = \operatorname*{argmin}_{g_{k,t} \in \mathcal{G}_{k,t}}\ \max_{\tilde f_{k,t} \in \bar{\mathcal{B}}_{k,t}} \int \| x_{t+1} - g_{k,t}(y_{k,t}^{loc}) \|^2\, \tilde f_{k,t}(z_{k,t}|Y_{t-1})\, \mathrm{d}z_{k,t}$$
where
$$\bar{\mathcal{B}}_{k,t} := \left\{ \tilde f_{k,t} \ \mathrm{s.t.}\ D_{KL}\left( \tilde f_{k,t} \,\|\, \bar f_{k,t} \right) \le c_{k,t} \right\},$$
$$c_{k,t} := \frac{1}{2} \left( -\log|\tilde K_{z_{k,t}}| + \log|K_{z_{k,t}}| + \operatorname{tr}\left( \tilde K_{z_{k,t}} K_{z_{k,t}}^{-1} \right) - (n + p_k) \right),$$
and $p_k$ coincides with the number of rows of $C_k^{loc}$. Under this scheme, the central unit only transmits the local tolerance to each node of the network. The procedure implementing this optimized strategy of distributed robust Kalman filtering is outlined in Algorithm 2.
Algorithm 2  Distributed robust Kalman filter with non-uniform local tolerance at time t.
  • Input: $\hat x_{k,t}$, $V_{k,t}$, $y_{k,t}$, $W = [w_{lk}]_{lk}$, with $k = 1 \dots N$
  • Output: $\hat x_{k,t+1}$, $V_{k,t+1}$, with $k = 1 \dots N$
  • Tolerance update. Using the nominal global model, the central unit computes for every node k:
    $$c_{k,t} = \frac{1}{2} \left( -\log|\tilde K_{z_{k,t}}| + \log|K_{z_{k,t}}| + \operatorname{tr}\left( \tilde K_{z_{k,t}} K_{z_{k,t}}^{-1} \right) - (n + p_k) \right)$$
  • Incremental step. Compute at every node k:
    $$\begin{aligned}
    \psi_{k,t+1} &= A \hat x_{k,t} + A \left( V_{k,t}^{-1} + S_k \right)^{-1} \sum_{l \in \mathcal{N}_k} C_l^T R_l^{-1} \left( y_{l,t} - C_l \hat x_{k,t} \right) + r_t\\
    P_{k,t+1} &= A \left( V_{k,t}^{-1} + S_k \right)^{-1} A^T + B B^T\\
    &\text{Find } \theta_{k,t} \ \mathrm{s.t.}\ \gamma(P_{k,t+1}, \theta_{k,t}) = c_{k,t}\\
    V_{k,t+1} &= \left( P_{k,t+1}^{-1} - \theta_{k,t} I \right)^{-1}
    \end{aligned}$$
  • Diffusion step. Compute at every node k:
    $$\hat x_{k,t+1} = \sum_{l \in \mathcal{N}_k} w_{lk}\, \psi_{l,t+1}$$
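The tolerance update of Algorithm 2 only requires determinants and traces of the local covariance blocks. A sketch (ours), assuming the central unit has already assembled $K_{z_{k,t}}$ and $\tilde K_{z_{k,t}}$ as described above:

```python
# A minimal sketch (ours) of the tolerance update in Algorithm 2: the
# same-mean Gaussian KL divergence between the local least favorable and
# pseudo-nominal densities, as in Equation (30).
import numpy as np

def local_tolerance(K_zk, K_zk_tilde, n, p_k):
    _, logdet = np.linalg.slogdet(K_zk)
    _, logdet_tilde = np.linalg.slogdet(K_zk_tilde)
    K_inv = np.linalg.inv(K_zk)
    return 0.5 * (-logdet_tilde + logdet
                  + np.trace(K_zk_tilde @ K_inv) - (n + p_k))
```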

Least Favorable Performance

We show how to evaluate the performance of the previously introduced distributed algorithm with non-uniform local tolerance and diffusion step with respect to the least favorable model solving the centralized problem in Equation (3). More precisely, we show how to compute the mean and the variance of the prediction error for each node $k$ in the network. In [29,34], it is shown that the least favorable model can be characterized through a state-space model over a finite interval $[0, T]$ as follows. Let $\xi_t = [\, x_t^T \ e_t^T \,]^T$, where $x_t$ is the least favorable state process and $e_t$ is the corresponding prediction error of the centralized robust Kalman filter. Then, the least favorable model takes the form
$$\xi_{t+1} = \check A_t \xi_t + \check B_t \varepsilon_t + \check r_t, \qquad y_t = \check C_t \xi_t + \check D_t \varepsilon_t$$
where $\varepsilon_t$ is normalized WGN, independent of $\hat x_0$, and $\check r_t := [\, r_t^T \ 0 \,]^T$. Moreover,
$$\check A_t := \begin{bmatrix} A & \Gamma_B \Gamma_{H_t}\\ 0 & A - G_t C + \left( \Gamma_B - G_t \Gamma_D \right) \Gamma_{H_t} \end{bmatrix}, \qquad \check B_t := \begin{bmatrix} \Gamma_B \Gamma_{L_t}\\ \left( \Gamma_B - G_t \Gamma_D \right) \Gamma_{L_t} \end{bmatrix}, \qquad \check C_t := \begin{bmatrix} C & \Gamma_D \Gamma_{H_t} \end{bmatrix}, \qquad \check D_t := \Gamma_D \Gamma_{L_t}$$
where $\Gamma_{L_t}$ is such that $K_t = \Gamma_{L_t} \Gamma_{L_t}^T$,
$$K_t := \left( I - \left( \Gamma_B - G_t \Gamma_D \right)^T \left( \Omega_{t+1}^{-1} + \theta_t I \right) \left( \Gamma_B - G_t \Gamma_D \right) \right)^{-1}, \qquad \Gamma_{H_t} := K_t \left( \Gamma_B - G_t \Gamma_D \right)^T \left( \Omega_{t+1}^{-1} + \theta_t I \right) \left( A - G_t C \right).$$
The matrix $\Omega_{t+1}^{-1}$ is computed from the backward recursion
$$\Omega_t^{-1} = \left( A - G_t C \right)^T \left( \Omega_{t+1}^{-1} + \theta_t I \right) \left( A - G_t C \right) + \Gamma_{H_t}^T K_t^{-1} \Gamma_{H_t}$$
where the final point is initialized with $\Omega_{T+1}^{-1} = 0$.
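A sketch of the backward pass (ours, under the reconstruction of the recursion above), run once over $[0, T]$ given the filter quantities $G_t$ and $\theta_t$ from Equation (4); $\Gamma_{L_t}$ can then be taken as a Cholesky factor of $K_t$:

```python
# A minimal sketch (ours) of the backward recursion for Omega_t^{-1},
# initialized at Omega_{T+1}^{-1} = 0, returning K_t and Gamma_{H_t}.
import numpy as np

def least_favorable_backward(A, C, Gamma_B, Gamma_D, G_seq, theta_seq):
    n = A.shape[0]
    Omega_inv = np.zeros((n, n))                        # Omega_{T+1}^{-1} = 0
    Ks, Hs = [], []
    for G, theta in zip(reversed(G_seq), reversed(theta_seq)):
        Acl, Bcl = A - G @ C, Gamma_B - G @ Gamma_D
        M = Omega_inv + theta * np.eye(n)
        K_inv = np.eye(Bcl.shape[1]) - Bcl.T @ M @ Bcl  # K_t^{-1}
        K = np.linalg.inv(K_inv)
        H = K @ Bcl.T @ M @ Acl                         # Gamma_{H_t}
        Omega_inv = Acl.T @ M @ Acl + H.T @ K_inv @ H
        Ks.append(K); Hs.append(H)
    return Ks[::-1], Hs[::-1]                           # reordered forward in time
```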
Let $\tilde x_{k,t} = x_t - \hat x_{k,t}$ denote the least favorable state prediction error of node $k$ at time $t$ using Algorithm 2 or Algorithm 1. Define the vector containing all the errors across the network,
$$\tilde\chi_t := [\, \tilde x_{1,t}^T \ \cdots \ \tilde x_{N,t}^T \,]^T.$$
Following the same reasoning as in [39], it is not difficult to prove that $\tilde\chi_t$ obeys the dynamics
$$\tilde\chi_{t+1} = \mathcal{A}_t \tilde\chi_t + \mathcal{B}_t \varepsilon_t + \mathcal{C}_t e_t$$
where
$$\begin{aligned}
\mathcal{A}_t &:= ( W^T \otimes I ) ( I \otimes A ) \left( \mathcal{V}_t^{-1} + \mathcal{S} \right)^{-1} \mathcal{V}_t^{-1}\\
\mathcal{B}_t &:= -( W^T \otimes I ) ( I \otimes A ) \left( \mathcal{V}_t^{-1} + \mathcal{S} \right)^{-1} ( J^T \otimes I )\, \mathcal{C}^T R^{-1} D L_t + \mathbf{1} \otimes B N_t\\
\mathcal{C}_t &:= -( W^T \otimes I ) ( I \otimes A ) \left( \mathcal{V}_t^{-1} + \mathcal{S} \right)^{-1} ( J^T \otimes I )\, \mathcal{C}^T R^{-1} D H_t + \mathbf{1} \otimes B M_t\\
\mathcal{C} &:= \operatorname{diag}( C_1, \dots, C_N ), \qquad \mathcal{V}_t := \operatorname{diag}( V_{1,t}, \dots, V_{N,t} ), \qquad \mathcal{S} := \operatorname{diag}( S_1, \dots, S_N ),
\end{aligned}$$
$M_t \in \mathbb{R}^{n \times n}$, $H_t \in \mathbb{R}^{pN \times n}$, $N_t \in \mathbb{R}^{n \times (pN + n)}$ and $L_t \in \mathbb{R}^{pN \times (pN + n)}$ are such that $\Gamma_{H_t} = [\, M_t^T \ H_t^T \,]^T$ and $\Gamma_{L_t} = [\, N_t^T \ L_t^T \,]^T$. Finally, $\mathbf{1}$ denotes the vector of ones. Then, we combine Equation (35) with the model for $e_t$ in Equation (34):
$$\eta_{t+1} = F_t \eta_t + G_t \varepsilon_t$$
where $\eta_t := [\, \tilde\chi_t^T \ e_t^T \,]^T$,
$$F_t := \begin{bmatrix} \mathcal{A}_t & \mathcal{C}_t\\ 0 & \left( A - G_t C \right) + \left( \Gamma_B - G_t \Gamma_D \right) \Gamma_{H_t} \end{bmatrix}, \qquad G_t := \begin{bmatrix} \mathcal{B}_t\\ \left( \Gamma_B - G_t \Gamma_D \right) \Gamma_{L_t} \end{bmatrix}.$$
Taking the expectation of Equation (36), we obtain
$$\mathbb{E}[ \eta_{t+1} ] = F_t\, \mathbb{E}[ \eta_t ].$$
In view of the fact that $x_0$ has mean equal to $\hat x_0$ and $\hat x_{k,0} = \hat x_0$ for $k = 1 \dots N$, it is not difficult to see that $\tilde{\mathbb{E}}[ \eta_0 ] = 0$. This implies that $\eta_t$ is a zero-mean stochastic process or, equivalently, that all the predictors are unbiased. Next, we show how to derive the variance of the prediction errors. Let $Q_t = \mathbb{E}[ \eta_t \eta_t^T ]$. In view of the fact that $\varepsilon_t$ is normalized WGN, by Equation (36), $Q_t$ is given by solving the following Lyapunov equation:
$$Q_{t+1} = F_t Q_t F_t^T + G_t G_t^T.$$
We partition $Q_t$ as follows:
$$Q_t = \begin{bmatrix} \mathcal{P}_t & \mathcal{H}_t\\ \mathcal{H}_t^T & \mathcal{R}_t \end{bmatrix}$$
where $\mathcal{P}_t \in \mathbb{R}^{Nn \times Nn}$, $\mathcal{H}_t \in \mathbb{R}^{Nn \times n}$ and $\mathcal{R}_t \in \mathbb{R}^{n \times n}$. Notice that $\mathcal{P}_t$ contains on its main block diagonal the covariance matrices of the estimation error at each node. Accordingly, the least favorable mean square deviation is given by
$$\overline{\mathrm{MSD}}_t := \frac{1}{N} \sum_{k=1}^{N} \mathrm{MSD}_{k,t} = \frac{1}{N} \operatorname{tr}\left( \mathcal{P}_t \right)$$
where $\mathrm{MSD}_{k,t}$ is the variance of the prediction error at node $k$.
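Propagating the Lyapunov equation forward and reading off the top-left block yields the whole $\overline{\mathrm{MSD}}_t$ trajectory; a sketch (ours):

```python
# A minimal sketch (ours) of Equation (39): propagating Q_t and reading off
# the least favorable MSD from the top-left Nn x Nn block of Q_t.
import numpy as np

def msd_trajectory(F_seq, G_seq, Q0, N, n):
    Q, msd = Q0, []
    for F, G in zip(F_seq, G_seq):
        msd.append(np.trace(Q[:N * n, :N * n]) / N)  # MSD_bar_t = tr(P_t)/N
        Q = F @ Q @ F.T + G @ G.T                    # Lyapunov recursion
    return msd
```

Finally, we have the following convergence result for the proposed distributed algorithm.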
Proposition 1.
Let $(A, B)$ be a reachable pair and $(A, C_k^{loc})$ be an observable pair for any $k$. Let $W$ be an arbitrary diffusion matrix satisfying Equation (11). Then, there exists $c > 0$ sufficiently small such that, for any arbitrary initial conditions $V_0 > 0$ and $V_{k,0} > 0$, the sequence $Q_t$, $t \ge 0$, generated by Equation (39) converges to $\bar Q > 0$ over $[\alpha T, \beta T]$ as $T \to \infty$. Moreover, we have $F_t \to \bar F$, $G_t \to \bar G$, and $c_{k,t} \to \bar c_k$. In particular, $\bar Q$ corresponds to the unique solution of the algebraic Lyapunov equation
$$\bar Q = \bar F \bar Q \bar F^T + \bar G \bar G^T$$
with $\bar F$ Schur stable. Accordingly, $\overline{\mathrm{MSD}}_t$ converges over $[\alpha T, \beta T]$ as $T \to \infty$.
Proof. 
First, notice that the observability condition on the pairs $(A, C_k^{loc})$ implies the observability of $(A, C)$. Since the global model is reachable and observable, the robust centralized Kalman filter converges provided that $c$ is sufficiently small (see [43,44]). As a consequence, $V_t \to \bar V > 0$ as $t \to \infty$. Accordingly, in view of Equation (17), $K_{z_t} \to K_z$ and thus, in view of Equation (22), $\tilde K_{z_t} \to \tilde K_z$. Since $K_{z_{k,t}}$ and $\tilde K_{z_{k,t}}$ are submatrices of $K_{z_t}$ and $\tilde K_{z_t}$, respectively, we have that $K_{z_{k,t}} \to K_{z_k}$ and $\tilde K_{z_{k,t}} \to \tilde K_{z_k}$. Accordingly, in view of Equation (30), we have that $c_{k,t} \to \bar c_k$ where
$$\bar c_k := \frac{1}{2} \left( -\log|\tilde K_{z_k}| + \log|K_{z_k}| + \operatorname{tr}\left( \tilde K_{z_k} K_{z_k}^{-1} \right) - (n + p_k) \right).$$
In [30], it has been shown that $V_t \to P_t$ as $c \to 0$, and thus $\tilde K_{z_t} \to K_{z_t}$. Since $K_{z_{k,t}}$ and $\tilde K_{z_{k,t}}$ are submatrices of $K_{z_t}$ and $\tilde K_{z_t}$, respectively, we have that $\tilde K_{z_{k,t}} \to K_{z_{k,t}}$. Accordingly, in view of Equation (30), we have that $c_{k,t} \to 0$ as $c \to 0$.
In view of ([43], Proposition 5.3), we conclude that the robust local Kalman filter at node $k$ converges because the local state-space model is reachable and observable, and $\bar c_k$ is sufficiently small provided that $c$ is sufficiently small as well.
Finally, the remaining part of the proof follows the one in ([39], Section IV-A) (see also [45]). □
It is worth noting that Proposition 1 guarantees that $\bar Q$ is bounded because $\bar F$ is Schur stable. This means that the prediction errors over the network have finite variance, i.e., the Kalman gains of the local filters are stabilizing. The proof above also shows that, in the case $c = 0$, i.e., when the nominal model coincides with the actual one, Algorithm 2 boils down to the distributed Kalman filter with diffusion step proposed in [14].

5. Numerical Examples

In this section, we test the performance of the distributed Kalman filters with uniform versus non-uniform local tolerance. More precisely, we consider the problem in [39] of tracking the position of a projectile from position observations corrupted by noise and coming from a network of $N = 20$ sensors. The network is shown in Figure 1.
The model for the projectile motion is
$$\dot x_t^c = \Phi x_t^c + r_t^c$$
where
$$\Phi = \begin{bmatrix} 0 & 0\\ I_3 & 0 \end{bmatrix},$$
$r_t^c = [\, 0 \ \ 0 \ \ {-g} \ \ 0 \ \ 0 \ \ 0 \,]^T$, with $g = 10$, and $x_t^c = [\, v_{x,t} \ v_{y,t} \ v_{z,t} \ p_{x,t} \ p_{y,t} \ p_{z,t} \,]^T$, with $v$ denoting the velocity and $p$ the position in the three spatial components. We discretize Equation (43) using a sampling time equal to $0.1\,\mathrm{s}$. In this way, the model becomes
$$x_{t+1} = A x_t + r_t$$
where $x_t$ is the sampled version of $x_t^c$, $A = I_6 + 0.1\, \Phi$ and $r_t = \left( 0.1\, I_6 + 0.1^2\, \Phi / 2 \right) r_t^c$. Assuming that every sensor can measure only along two spatial components, the output matrix of the $k$-th node can be of the type
$$C_k = \begin{bmatrix} 0_{3 \times 3} & \operatorname{diag}(1,1,0) \end{bmatrix}, \qquad C_k = \begin{bmatrix} 0_{3 \times 3} & \operatorname{diag}(1,0,1) \end{bmatrix}, \qquad C_k = \begin{bmatrix} 0_{3 \times 3} & \operatorname{diag}(0,1,1) \end{bmatrix}.$$
Every output matrix is assigned randomly to each node, with the constraint that the local state-space model in Equation (10) associated with each node must be observable, so as to guarantee the convergence of the robust Kalman filter at every node. Thus, if any node violates the constraint, all the output matrices are discarded and reassigned. Then, we choose $B = 0.001\, I$ and $R_k = D_k D_k^T = k\, P R_0 P^T$, where $R_0 = 0.5 \cdot \operatorname{diag}(1, 4, 7)$ and $P$ is a permutation matrix randomly generated for each node. The initial state $x_0$ is Gaussian distributed with covariance matrix $P_0 = I$.
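A sketch of this setup (our reconstruction of the experiment, not the authors' code; in particular, the sign of the gravity entry of $r_t^c$ is our assumption):

```python
# A minimal sketch (ours) of the simulation setup: Euler discretization of
# the projectile model with dt = 0.1 s; the -g entry is our assumption.
import numpy as np

dt, g, n = 0.1, 10.0, 6
Phi = np.zeros((n, n))
Phi[3:, :3] = np.eye(3)                       # positions integrate velocities
A = np.eye(n) + dt * Phi                      # A = I_6 + 0.1 Phi
rc = np.array([0.0, 0.0, -g, 0.0, 0.0, 0.0])  # continuous-time forcing r_t^c
r = (dt * np.eye(n) + 0.5 * dt**2 * Phi) @ rc
B = 0.001 * np.eye(n)

def output_matrix(mask):
    """C_k = [0_{3x3}  diag(mask)], with mask a 0/1 triple with two ones."""
    return np.hstack([np.zeros((3, 3)), np.diag(mask)])

C_k = output_matrix((1, 1, 0))                # e.g., a node measuring p_x and p_y
```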
In the numerical simulations, the following Kalman filters are considered: the centralized Kalman filter (KFC); the centralized robust Kalman filter (RKFC); the distributed Kalman filter with diffusion step (KFD) proposed in [14]; the distributed robust Kalman filter with diffusion step and uniform local tolerance (RKFDU) proposed in [39] and reviewed in Section 3; and the distributed robust Kalman filter with diffusion step and non-uniform local tolerance (RKFDNU) proposed in Section 4. For the distributed filters, the diffusion matrix $W$ is defined as
$$w_{lk} = \begin{cases} \dfrac{\alpha_k}{n_l}, & \text{if } l \in \mathcal{N}_k\\ 0, & \text{otherwise} \end{cases}$$
where $n_l$ represents the total number of neighbors of node $l$, while $\alpha_k$ is chosen such that Equation (11) holds.
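A sketch of this construction (ours), starting from the adjacency matrix $J$ with self-loops defined in Section 3:

```python
# A minimal sketch (ours) of the diffusion matrix: w_{lk} = alpha_k / n_l on
# the edges, with alpha_k normalizing column k so that Equation (11) holds.
import numpy as np

def diffusion_matrix(J):
    n_l = J.sum(axis=1)                      # n_l: number of neighbors of node l
    W = J / n_l[:, None]                     # provisional entries 1 / n_l
    return W / W.sum(axis=0, keepdims=True)  # scale each column k by alpha_k
```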

5.1. First Example

We assume that the actual model is contained in the ambiguity set of Equation (2) with $c = 0.02$. Figure 2 shows the least favorable mean square deviation across the network. We notice that $\overline{\mathrm{MSD}}_t$ converges to a steady state for all the distributed versions of the Kalman filter. RKFDNU performs slightly better than RKFDU, and both perform consistently better than KFD. Finally, all of them perform worse than the centralized versions, and RKFC is the best.
However, the situation is more evident when we consider the steady-state least favorable $\mathrm{MSD}_{k,t}$ of each node (see Figure 3a): RKFDNU performs slightly better than RKFDU for the majority of the nodes. There is, moreover, a clear difference for nodes 18 and 19, which are more susceptible to model uncertainty: for them, RKFDNU performs markedly better than RKFDU.
Figure 3b shows the behavior of the local tolerances $c_{k,t}$ over time for RKFDNU. As expected, every $c_{k,t}$ converges to a constant value. However, the latter differs from the tolerance $c$ of the centralized minimax problem.
Finally, Figure 4a,b shows the risk sensitivity parameters $\theta_{k,t}$ at every node for RKFDU and RKFDNU. We can observe that the risk sensitivity parameters of RKFDU take larger values than those of RKFDNU. Accordingly, the inferior performance of RKFDU is due to the fact that its robust local filters are too conservative.

5.2. Second Example

In the second experiment, we consider a larger deviation between the actual model and the nominal one, i.e., we choose $c = 0.06$.
Figure 5 and Figure 6a show the least favorable mean square deviation across the network and for each node at steady state. The situation is similar to the previous one, but the difference among KFD, RKFDU, and RKFDNU is more evident. In particular, the steady-state value of $\mathrm{MSD}_{k,t}$ for $k = 18, 19$ using RKFDNU is clearly better than the ones corresponding to KFD and RKFDU.
In addition, Figure 6b shows the tolerances $c_{k,t}$ at every node over time. As expected, they are higher than the ones obtained with $c = 0.02$. Indeed, the uncertainty is now greater than before, and thus the robust local filters must be more conservative.
Finally, we study how the least favorable MSD of each node correlates with the topology of the sensor network. Figure 7a,b shows two additional sensor networks obtained from the original network of Figure 1 by adding connections to some of the nodes. More precisely, the density of the original network, i.e., the number of connections over all possible connections, is $d_1 = 0.39$; the densities of the networks in Figure 7a,b are $d_2 = 0.48$ and $d_3 = 0.72$, respectively.
Figure 8a,b shows the results obtained by RKFDNU on the three different sensor networks. As expected, increasing the degrees of the nodes, and consequently the number of connections in the network, reduces both the steady-state least favorable MSD of those nodes and the total least favorable MSD across the network. In conclusion, by adding edges, the performance of RKFDNU tends to the one obtained in the centralized case (RKFC), where all the nodes are connected to each other.

6. Efficient Algorithm

Proposition 1 suggests a simplified version of Algorithm 2. Indeed, if $c$ is sufficiently small, then $c_{k,t}$ converges to $\bar c_k$ at steady state for every node of the network. Accordingly, the central unit can compute $\bar c_k$ and transmit it to each node once. In this way, the transmission costs are reduced. The resulting procedure is outlined in Algorithm 3.
Algorithm 3  Efficient distributed robust Kalman filter with non-uniform local tolerance at time t.
  • Input: $\hat x_{k,t}$, $V_{k,t}$, $y_{k,t}$, $W = [w_{lk}]_{lk}$, with $k = 1 \dots N$
  • Output: $\hat x_{k,t+1}$, $V_{k,t+1}$, with $k = 1 \dots N$
  • Incremental step. Compute at every node k:
    $$\begin{aligned}
    \psi_{k,t+1} &= A \hat x_{k,t} + A \left( V_{k,t}^{-1} + S_k \right)^{-1} \sum_{l \in \mathcal{N}_k} C_l^T R_l^{-1} \left( y_{l,t} - C_l \hat x_{k,t} \right) + r_t\\
    P_{k,t+1} &= A \left( V_{k,t}^{-1} + S_k \right)^{-1} A^T + B B^T\\
    &\text{Find } \theta_{k,t} \ \mathrm{s.t.}\ \gamma(P_{k,t+1}, \theta_{k,t}) = \bar c_k\\
    V_{k,t+1} &= \left( P_{k,t+1}^{-1} - \theta_{k,t} I \right)^{-1}
    \end{aligned}$$
  • Diffusion step. Compute at every node k:
    $$\hat x_{k,t+1} = \sum_{l \in \mathcal{N}_k} w_{lk}\, \psi_{l,t+1}$$
We compared this algorithm, hereafter called RKFDNU2, with RKFDNU: in practice, the performance is the same. Figure 9 shows their least favorable mean square deviation across the network in the scenario of Section 5.2 over the first 50 time steps. Finally, Figure 10a,b shows the risk sensitivity parameters for RKFDNU and RKFDNU2, respectively: there is a slight difference. However, we observed that such a difference disappears after 20 time steps. We conclude that the efficient scheme RKFDNU2 represents a good approximation of RKFDNU.
Finally, Table 1 summarizes the performance of RKFC, RKFDU, RKFDNU, and RKFDNU2 obtained with tolerance $c = 0.02$. The reported values are the least favorable MSD across the network at steady state, the local tolerances at steady state averaged over the nodes, the risk sensitivity parameters at steady state averaged over the nodes, and the communications occurring between the central unit and the local nodes over the whole time span. In particular, concerning the communication:
  • in RKFDU, the central unit transmits the uniform tolerance to each node once (at the beginning);
  • in RKFDNU, the central unit transmits the local tolerances to each node at every time step;
  • in RKFDNU2, the central unit transmits the steady-state local tolerances to each node once (at the beginning).

7. Conclusions

In this article, we considered the problem of distributed robust Kalman filtering for a sensor network. More precisely, we considered a distributed scheme with a diffusion step in which each intermediate estimate is designed to be optimal according to the least favorable model belonging to a prescribed local ambiguity set. The latter is a ball about the local nominal model, and the radius of this ball is the local tolerance. We proposed an algorithm in which the local tolerance of each node is different and suitably computed by the central unit. We also considered a more efficient implementation of the algorithm, where the central unit computes and transmits the steady-state local tolerances to every node once. In this way, the communication between the central unit and the local nodes is reduced. Through some numerical examples, we showed that the proposed algorithm performs better than the one with a uniform local tolerance across the network.

Author Contributions

Conceptualization, A.E., F.G., G.G. and M.Z.; methodology, A.E., F.G., G.G. and M.Z.; simulations, A.E., F.G., G.G. and M.Z.; supervision, M.Z. All the authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

Part of this work was supported by MIUR (Italian Ministry of Education) under the initiative "Departments of Excellence" (Law 232/2016).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, H.N.; Wang, H.D. Distributed consensus observers-based H∞ control of dissipative PDE systems using sensor networks. IEEE Trans. Control Netw. Syst. 2015, 2, 112–121.
  2. Orihuela, L.; Millán, P.; Vivas, C.; Rubio, F.R. Distributed control and estimation scheme with applications to process control. IEEE Trans. Control Syst. Technol. 2015, 23, 1563–1570.
  3. Sadamoto, T.; Ishizaki, T.; Imura, J. Average state observers for large-scale network systems. IEEE Trans. Control Netw. Syst. 2017, 4, 761–769.
  4. Wang, S.; Ren, W.; Chen, J. Fully distributed dynamic state estimation with uncertain process models. IEEE Trans. Control Netw. Syst. 2017, 5, 1841–1851.
  5. Dormann, K.; Noack, B.; Hanebeck, U. Optimally distributed Kalman filtering with data-driven communication. Sensors 2018, 18, 1034.
  6. Ruan, Y.; Luo, Y.; Zhu, Y. Globally optimal distributed Kalman filtering for multisensor systems with unknown inputs. Sensors 2018, 18, 2976.
  7. Gao, S.; Chen, P.; Huang, D.; Niu, Q. Stability analysis of multi-sensor Kalman filtering over lossy networks. Sensors 2016, 16, 566.
  8. Kordestani, M.; Samadi, M.; Saif, M. A distributed fault detection and isolation method for multifunctional spoiler system. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems, Windsor, ON, Canada, 5–8 August 2018; pp. 380–383.
  9. Scott, S.; Blocker, A.; Bonassi, F.; Chipman, H.; George, E.; McCulloch, R. Bayes and big data: The consensus Monte Carlo algorithm. EFaBBayes 250th Conf. 2013, 16.
  10. Spanos, D.P.; Olfati-Saber, R.; Murray, R.M. Approximate distributed Kalman filtering in sensor networks with quantifiable performance. In Proceedings of the Fourth International Symposium on Information Processing in Sensor Networks, Boise, ID, USA, 15 April 2005; pp. 133–139.
  11. Olfati-Saber, R. Distributed Kalman filter with embedded consensus filters. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 15 December 2005; pp. 8179–8184.
  12. Olfati-Saber, R. Distributed Kalman filtering for sensor networks. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 5492–5498.
  13. Carli, R.; Chiuso, A.; Schenato, L.; Zampieri, S. Distributed Kalman filtering based on consensus strategies. IEEE J. Sel. Areas Commun. 2008, 26, 622–633.
  14. Cattivelli, F.S.; Sayed, A.H. Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Autom. Control 2010, 55, 2069–2084.
  15. Yang, S.; Huang, T.; Guan, J.; Xiong, Y.; Wang, M. Diffusion strategies for distributed Kalman filter with dynamic topologies in virtualized sensor networks. Mob. Inf. Syst. 2016, 2016, 8695102.
  16. Ji, H.; Lewis, F.L.; Hou, Z.; Mikulski, D. Distributed information weighted Kalman consensus filter for sensor networks. Automatica 2017, 77, 18–30.
  17. Yang, H.; Li, H.; Xia, Y.; Li, L. Distributed Kalman filtering over sensor networks with transmission delays. IEEE Trans. Cybern. 2020, 1–11.
  18. Kordestani, M.; Dehghani, M.; Moshiri, B.; Saif, M. A new fusion estimation method for multi-rate multi-sensor systems with missing measurements. IEEE Access 2020, 8, 47522–47532.
  19. Luengo, D.; Martino, L.; Elvira, V.; Bugallo, M.F. Efficient linear fusion of partial estimators. Digit. Signal Process. 2018, 78, 265–283.
  20. Song, E.; Zhu, Y.; Zhou, J.; You, Z. Optimal Kalman filtering fusion with cross-correlated sensor noises. Automatica 2007, 43, 1450–1456.
  21. Xu, J.; Song, E.; Luo, Y.; Zhu, Y. Optimal distributed Kalman filtering fusion algorithm without invertibility of estimation error and sensor noise covariances. IEEE Signal Process. Lett. 2012, 19, 55–58.
  22. Sun, S.L.; Deng, Z.L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023.
  23. Feng, J.; Zeng, M. Optimal distributed Kalman filtering fusion for a linear dynamic system with cross-correlated noises. Int. J. Syst. Sci. 2012, 43, 385–398.
  24. Andrieu, C.; Doucet, A.; Holenstein, R. Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. B 2010, 72, 269–342.
  25. Martino, L.; Elvira, V.; Camps-Valls, G. Distributed particle Metropolis–Hastings schemes. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Freiburg, Germany, 10–13 June 2018.
  26. Boel, R.; James, M.; Petersen, I. Robustness and risk-sensitive filtering. IEEE Trans. Autom. Control 2002, 47, 451–461.
  27. Hansen, L.; Sargent, T. Robustness; Princeton University Press: Princeton, NJ, USA, 2008.
  28. Levy, B.; Zorzi, M. A contraction analysis of the convergence of risk-sensitive filters. SIAM J. Control Optim. 2016, 54, 2154–2173.
  29. Levy, B.; Nikoukhah, R. Robust state-space filtering under incremental model perturbations subject to a relative entropy tolerance. IEEE Trans. Autom. Control 2013, 58, 682–695.
  30. Levy, B.; Nikoukhah, R. Robust least-squares estimation with a relative entropy constraint. IEEE Trans. Inf. Theory 2004, 50, 89–104.
  31. Zenere, A.; Zorzi, M. On the coupling of model predictive control and robust Kalman filtering. IET Control Theory Appl. 2018, 12, 1873–1881.
  32. Zenere, A.; Zorzi, M. Model predictive control meets robust Kalman filtering. In Proceedings of the 20th IFAC World Congress, Toulouse, France, 9–14 July 2017.
  33. Zorzi, M. On the robustness of the Bayes and Wiener estimators under model uncertainty. Automatica 2017, 83, 133–140.
  34. Zorzi, M. Robust Kalman filtering under model perturbations. IEEE Trans. Autom. Control 2017, 62, 2902–2907.
  35. Abadeh, S.; Nguyen, V.; Kuhn, D.; Esfahani, P. Wasserstein distributionally robust Kalman filtering. In Advances in Neural Information Processing Systems; Montreal, QC, Canada, 2018; pp. 8474–8483.
  36. Shen, B.; Wang, Z.; Hung, Y. Distributed H∞-consensus filtering in sensor networks with multiple missing measurements: The finite-horizon case. Automatica 2010, 46, 1682–1688.
  37. Luo, Y.; Zhu, Y.; Luo, D.; Zhou, J.; Song, E.; Wang, D. Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications. Sensors 2008, 8, 8086–8103.
  38. Huang, J.; Shi, D.; Chen, T. Distributed robust state estimation for sensor networks: A risk-sensitive approach. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018; pp. 6378–6383.
  39. Zorzi, M. Distributed Kalman filtering under model uncertainty. IEEE Trans. Control Netw. Syst. 2019.
  40. Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
  41. Zorzi, M. An interpretation of the dual problem of the THREE-like approaches. Automatica 2015, 62, 87–92.
  42. Zorzi, M. A new family of high-resolution multivariate spectral estimators. IEEE Trans. Autom. Control 2014, 59, 892–904.
  43. Zorzi, M.; Levy, B.C. On the convergence of a risk sensitive like filter. In Proceedings of the 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 4990–4995.
  44. Zorzi, M. Convergence analysis of a family of robust Kalman filters based on the contraction principle. SIAM J. Control Optim. 2017, 55, 3116–3131.
  45. Zorzi, M.; Levy, B. Robust Kalman filtering: Asymptotic analysis of the least favorable model. In Proceedings of the 57th IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018.
Figure 1. Network of 20 sensors for measuring the noisy positions of the projectile.
Figure 2. Least favorable mean square deviation across the network with tolerance $c = 0.02$.
Figure 3. (a) Least favorable mean square deviation for each node at steady state with tolerance $c = 0.02$. (b) Time-varying local tolerances for each node over time with $c = 0.02$.
Figure 4. (a) Risk sensitivity parameter for each node using RKFDU with tolerance $c = 0.02$. (b) Risk sensitivity parameter for each node using RKFDNU with tolerance $c = 0.02$.
Figure 5. Least favorable mean square deviation across the network with tolerance $c = 0.06$.
Figure 6. (a) Least favorable mean square deviation for each node at steady state with tolerance $c = 0.06$. (b) Time-varying local tolerances for each node over time with $c = 0.06$.
Figure 7. (a) Sensor network with density $d_2 = 0.48$. (b) Sensor network with density $d_3 = 0.72$.
Figure 8. Performance of RKFDNU across the three networks compared with RKFC. (a) Least favorable mean square deviation with tolerance $c = 0.06$. (b) Least favorable mean square deviation for each node at steady state with tolerance $c = 0.06$.
Figure 9. Least favorable mean square deviation across the network with tolerance $c = 0.02$.
Figure 10. (a) Risk sensitivity parameter for each node using RKFDNU with tolerance $c = 0.02$. (b) Risk sensitivity parameter for each node using RKFDNU2 with tolerance $c = 0.02$.

Table 1. Summary of the performance of the different algorithms with tolerance $c = 0.02$.

|  | RKFC | RKFDU | RKFDNU | RKFDNU2 |
| --- | --- | --- | --- | --- |
| MSD [dB] | −2.9174 | −1.5553 | −1.6182 | −1.6182 |
| Average tolerance | 0.02 | 0.02 | 0.0144 | 0.0144 |
| Average risk sensitivity parameter | 1.3366 | 0.3764 | 0.3576 | 0.3576 |
| Communication requirement | N.A. | Once at the beginning | Every time step | Once at the beginning |
