Resilient Minimum Entropy Filter Design for Non-Gaussian Stochastic Systems

1 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
2 Control System Center, The University of Manchester, Manchester M13 9PL, UK
* Author to whom correspondence should be addressed.
Entropy 2013, 15(4), 1311-1323; https://doi.org/10.3390/e15041311
Submission received: 1 March 2013 / Revised: 27 March 2013 / Accepted: 2 April 2013 / Published: 10 April 2013

Abstract
In this paper, the resilient minimum entropy filtering problem is investigated for stochastic systems with non-Gaussian disturbances. The goal of the filter design is to guarantee that the entropy of the estimation error decreases monotonically and that the error system is exponentially ultimately bounded in the mean square. Based on the entropy performance function, a filter gain updating algorithm is presented to make the entropy decrease at every sampling instant k. The boundedness of the gain updating law is then analyzed using the kernel density estimation technique. Furthermore, a suboptimal resilient filter gain is designed in terms of a linear matrix inequality (LMI). Finally, a simulation example is given to show the effectiveness of the proposed results.

1. Introduction

The estimation of the state variables of a dynamic system from noisy measurements is one of the fundamental problems in control systems and signal processing. It has been an active topic over the past few decades, and some effective estimation approaches have been developed in the literature, such as Kalman filtering schemes, $H_\infty$ filtering and robust filtering methods; see, e.g., [1,2,3,4]. The celebrated Kalman filter has been proven to be an optimal estimator for linear systems with disturbances described as white noise. That filtering scheme rests on the assumption of an exact and accurate system model as well as perfect knowledge of the statistical properties of the noise sources. Compared with Kalman filtering, the advantage of $H_\infty$ filtering is that the noise sources may be arbitrary signals with bounded energy, so exact statistical information about the external disturbance is not required.
However, many well-established filtering techniques focus only on two quantities, i.e., mean and variance or covariance, as key design targets. Since the output of a nonlinear stochastic system is usually non-Gaussian, mean and variance or covariance are not enough to characterize the output process. Thus, the existing classical stochastic filtering theory may be incomplete for non-Gaussian stochastic systems. Entropy is a scalar quantity that provides a measure of the average "uncertain" information contained in a given probability density function (PDF). When entropy is minimized, all moments of the error PDF are constrained [5,6]. Entropy has been widely used in information theory, thermodynamics and control. The minimum entropy filtering problem has recently received renewed research interest in dealing with the stochastic state estimation problem for non-Gaussian systems. This subject is one of the important topics in the research of stochastic distribution control (SDC). Differing from traditional stochastic control, where only the output mean and variance are considered, stochastic distribution control aims to control the shape of the output probability density functions for non-Gaussian and dynamic stochastic systems; see [7,8] and the references therein. Based on SDC, current studies mainly focus on the problems of probability density function tracking control [9,10,11,12], fault detection and fault isolation of stochastic systems [13,14], parameter estimation [15], etc.
The entropy optimization filtering methodology was studied for non-Gaussian systems in [16], where the concepts of a hybrid PDF and a hybrid entropy are introduced and an optimal filter gain design algorithm is proposed to minimize the hybrid entropy of the estimation error. Using the idea of iterative learning control (ILC), the filter gain in [17] is determined by a gradient ILC tuning law, and the learning rate is studied to guarantee the convergence of the proposed algorithm. In [18], based on the form of the hybrid characteristic function of the conditional estimation error, a new Kullback–Leibler-like performance function is constructed, and an optimal PDF-shaping tracking filter is designed such that the tracking error between the characteristic function of the estimation error and the target characteristic function is minimized.
It should be pointed out that, in the literature mentioned above, only a stochastic stability analysis is given after the recursive solution; direct, analytical stabilization design and other closed-loop performance designs cannot be carried out along with the minimum entropy filtering. To the best of the authors' knowledge, an analytical filtering algorithm that links minimum entropy filtering and stochastic stabilization together has not yet been addressed for non-Gaussian stochastic systems and remains a challenging problem.
Inspired by the aforementioned situation, we aim at solving the resilient minimum entropy filtering problem for stochastic systems subject to non-Gaussian noise. The main contributions of this paper are as follows: (1) a recursive solution for the filter gain updating is proposed such that the entropy of the estimation error decreases strictly, which means the distribution of the error is made as narrow as possible; (2) based on linear matrix inequalities and resilient control theory, a new suboptimal stochastically stabilizing filter gain updating law is proposed to guarantee that the estimation error is stochastically exponentially ultimately bounded in the mean square. Compared with previous works on filtering for non-Gaussian stochastic systems, the advantage of the results presented here is that the relationship between the entropy performance and the stochastic stability, and even other important closed-loop performance measures, can be established directly. In future work, we will discuss other closed-loop performance of non-Gaussian stochastic systems in detail.
The rest of this article is organized as follows. The problem formulation and preliminaries are given in Section 2. Section 3 is dedicated to deriving the filter gain updating algorithm that guarantees the entropy of the error decreases at every sampling time k. The boundedness of the filter gain is analyzed in Section 4. A sufficient condition for the existence of the recursive resilient filter that ensures the exponential ultimate boundedness of the error system is given in Section 5. A numerical example is included in Section 6, followed by concluding remarks.
Notation. The notation in this paper is standard. $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^n$. $A^T$ is the transpose of the matrix $A$, and $\|A\|$ is its operator norm, i.e., $\|A\| = \sup\{\|Ax\| : \|x\| = 1\} = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue. Moreover, $\Omega$ is the sample space and $\mathcal{F}$ is a set of events. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, and let $E\{\cdot\}$ stand for the mathematical expectation operator with respect to the probability measure $P$; the expected value of a random variable $x$ is denoted by $E\{x\}$. $\mathrm{Var}\{\cdot\}$ represents the variance of a random variable. The star $*$ denotes a block obtained by symmetry (the transpose of the corresponding off-diagonal block).

2. Problem Formulation

Consider the following stochastic system:
$$x(k+1) = A x(k) + f(x(k)) + G\omega(k+1), \quad y(k) = C x(k) \tag{1}$$
where $x(k) \in \mathbb{R}^n$ is the state, $y(k) \in \mathbb{R}$ is the output, and $\omega(k) \in \mathbb{R}^p$ is the random disturbance, which can be a non-Gaussian vector. $A$, $C$ and $G$ are known system matrices.
It is assumed that $\omega(k)$ has a known PDF $\gamma_\omega(\xi)$ defined on a known closed interval $[\alpha, \beta]$. This PDF can be obtained by identification methods such as the kernel estimation technique, experimental techniques, or direct physical measurement, so the assumption can be satisfied in many practical systems. Without loss of generality, we assume $E\{\omega(k)\} = 0$ and $E\{\|\omega(k)\|^2\} \le \nu^2$. The nonlinear function $f(\cdot)$ is assumed to be Lipschitz with respect to $x(k)$, which means
$$\|f(x_1) - f(x_2)\| \le \gamma \|x_1 - x_2\| \tag{2}$$
for all $x_1, x_2 \in \mathbb{R}^n$, where $\gamma > 0$.
For the system given by Equation (1), the full-order filter is of the form
$$x_f(k+1) = A x_f(k) + f(x_f(k)) + L_k \left( y(k) - y_f(k) \right), \quad y_f(k) = C x_f(k) \tag{3}$$
where $x_f(k)$ is the state estimate and $L_k$ is the gain to be determined. Let the estimation error be $e(k) = x(k) - x_f(k)$; then it follows from Equations (1) and (3) that
$$e(k+1) = A e(k) - L_k \left( y(k) - y_f(k) \right) + \eta(k) + G\omega(k+1) \triangleq g(L_k, \omega(k+1)) \tag{4}$$
where $\eta(k) = f(x(k)) - f(x_f(k))$ is a nonlinear function. The PDF of the estimation error is defined via
$$P\left( a \le e(k) \le b \right) = \int_a^b \gamma_e(\xi)\, d\xi \tag{5}$$
where $P(a \le e(k) \le b)$ is the probability that the estimation error satisfies $e(k) \in [a, b]^n$ for the given filter gain $L_k$. It can be seen from Equation (4) that the shape of the PDF of the estimation error at time $k$ is governed by the filter gain $L_k$.
Let the initial estimate of the state be taken equal to the known mean $\bar{x}_0$ of the initial state $x(0)$. In order to study the stochastic behavior of the error system Equation (4), the following definition is introduced.
Definition 1 ([19,20]). The dynamics of the estimation error $e(k)$ is exponentially ultimately bounded in the mean square if there exist constants $a \in [0, 1)$, $b > 0$ and $c > 0$ such that for any initial condition $e(0)$,
$$E\{\|e(k)\|^2 \mid e(0)\} \le a^k b + c \tag{6}$$
In this case, the filter in Equation (3) is said to be exponential.
Remark 1. When the error dynamics is exponentially ultimately bounded in the mean square, the estimation error will initially decrease exponentially in the mean square and remain within a certain region in the steady state, again in the mean square sense. The stability bound is defined in terms of the norm $\left( E\{\|e(k)\|^2\} \right)^{1/2}$ of the Hilbert space of random vectors, and is specified by the coefficient $c$.
The main purpose of the proposed filter design scheme can be stated as follows:
(1). To design the filter gain L k such that the dynamics of the estimation error is guaranteed to be stochastically exponentially ultimately bounded in the mean square.
(2). The filter should be designed such that the shape of the PDF of the estimation error is made as narrow as possible.
A narrow distribution generally indicates that the uncertainty of the related random variable is small, i.e., the entropy is small. Taking the control energy into account, we will design $L_k$ such that the following performance function is minimized at every sampling time $k$:
$$J(k) = -R_1 \int_a^b \gamma_e(\xi) \ln\left( \gamma_e(\xi) \right) d\xi + \frac{1}{2} L_k^T R_2 L_k \tag{7}$$
where $R_1 > 0$ and $R_2 > 0$ are weights, and $-\int_a^b \gamma_e(\xi) \ln(\gamma_e(\xi))\, d\xi$ is the entropy of the estimation error.
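As an illustration of how Equation (7) can be evaluated in practice, the sketch below integrates the entropy term numerically on a grid and adds the quadratic gain penalty. It is a minimal numerical sketch, not the paper's algorithm; the function name, the grid resolution, and the Gaussian test density are assumptions made here for illustration.

```python
import numpy as np

def performance_J(gamma_e, a, b, L, R1, R2, n_grid=2000):
    """Numerically evaluate J(k) = -R1 * int_a^b gamma_e ln(gamma_e) dxi
    + 0.5 * L^T R2 L for a given error density gamma_e on [a, b]."""
    xi = np.linspace(a, b, n_grid)
    dx = (b - a) / (n_grid - 1)
    p = np.maximum(gamma_e(xi), 1e-300)       # guard against log(0)
    entropy = -np.sum(p * np.log(p)) * dx     # differential entropy of the error
    return R1 * entropy + 0.5 * L @ R2 @ L

# Example: a zero-mean Gaussian error density, effectively supported on [-1, 1]
sigma = 0.2
gauss = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
L = np.array([0.5, 0.3])
J = performance_J(gauss, -1.0, 1.0, L, R1=1.0, R2=np.eye(2))
```

A narrower density lowers the entropy term, so for a fixed gain the index decreases, which is exactly the shaping objective stated above.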

3. The Minimum Entropy Filter Gain Updating Algorithm

In order to minimize the performance function in Equation (7), the PDF of $e(k)$ is required. Applying probability theory [21], the PDF of $e(k)$ can be formulated as
$$\gamma_e(\xi) = \gamma_\omega\left( g^{-1}(L_k, \xi) \right) \left| \frac{d\, g^{-1}(L_k, \xi)}{d\xi} \right| \tag{8}$$
where $g^{-1}(\cdot)$ is the inverse function of $g(\cdot)$ with respect to the noise term $\omega(k)$. In this case, $\gamma_e(\xi)$ is continuous and first-order differentiable in all its variables.
Then, using Equation (8), the filter design algorithm can be developed by minimizing the performance function Equation (7). Denoting
$$H(k) = -R_1 \int_a^b \gamma_e(\xi) \ln\left( \gamma_e(\xi) \right) d\xi \tag{9}$$
the optimal filtering strategy can be obtained from the equality
$$\frac{\partial \left( H(k) + \frac{1}{2} L_k^T R_2 L_k \right)}{\partial L_k} = 0 \tag{10}$$
To formulate the recursive design procedure, we introduce
$$L_k = L_{k-1} + \Delta L_k \tag{11}$$
where $k = 1, 2, \ldots, N, \ldots$
The function $H(k)$ can be approximated via
$$H(k) = H_{0k} + H_{1k}^T \Delta L_k + \frac{1}{2} \Delta L_k^T H_{2k} \Delta L_k \tag{12}$$
where
$$H_{0k} = H(k)\big|_{L_k = L_{k-1}}, \quad H_{1k} = \frac{\partial H(k)}{\partial L_k}\bigg|_{L_k = L_{k-1}}, \quad H_{2k} = \frac{\partial^2 H(k)}{\partial L_k^2}\bigg|_{L_k = L_{k-1}} \tag{13}$$
Then the following result can be obtained.
Theorem 1. The recursive filter gain design algorithm minimizing the performance function $J(k)$ subject to the estimation model Equation (4) is given by
$$\Delta L_k = -\left( H_{2k} + R_2 \right)^{-1} \left( H_{1k} + R_2 L_{k-1} \right) \tag{14}$$
where the weight matrix $R_2$ satisfies
$$H_{2k} + R_2 > 0 \tag{15}$$
Proof. From Equation (11), it can be seen that
$$L_k^T R_2 L_k = L_{k-1}^T R_2 L_{k-1} + 2 L_{k-1}^T R_2 \Delta L_k + \Delta L_k^T R_2 \Delta L_k \tag{16}$$
Substituting Equations (12) and (16) into Equation (10) yields
$$H_{1k} + H_{2k} \Delta L_k = -R_2 L_{k-1} - R_2 \Delta L_k \tag{17}$$
Then the recursive algorithm in Equation (14) follows for $k = 1, 2, \ldots$
Equation (14) is derived from a necessary condition for optimality. To guarantee sufficiency, the second-order condition
$$\frac{\partial^2 \left( H(k) + \frac{1}{2} L_k^T R_2 L_k \right)}{\partial \Delta L_k^2} > 0 \tag{18}$$
should be satisfied, which holds when Equation (15) holds. □
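To make the recursion in Theorem 1 concrete, the sketch below implements Equation (14) with $H_{1k}$ and $H_{2k}$ replaced by central finite differences of a function $H(\cdot)$ evaluated at $L_{k-1}$. The quadratic surrogate used in place of the true entropy term, the step size, and all names are assumptions for illustration only.

```python
import numpy as np

def update_gain(H, L_prev, R2, eps=1e-5):
    """One step of Delta L_k = -(H2k + R2)^{-1} (H1k + R2 L_{k-1}),
    with the gradient H1k and Hessian H2k of H(.) approximated by
    central differences at L_{k-1}."""
    n = L_prev.size
    H1 = np.zeros(n)
    H2 = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = eps
        H1[i] = (H(L_prev + ei) - H(L_prev - ei)) / (2 * eps)
        for j in range(n):
            ej = np.zeros(n); ej[j] = eps
            H2[i, j] = (H(L_prev + ei + ej) - H(L_prev + ei - ej)
                        - H(L_prev - ei + ej) + H(L_prev - ei - ej)) / (4 * eps**2)
    dL = -np.linalg.solve(H2 + R2, H1 + R2 @ L_prev)   # Equation (14)
    return L_prev + dL

# Toy quadratic surrogate H(L) = ||L - L*||^2 standing in for the entropy term
L_star = np.array([1.0, -2.0])
H = lambda L: float((L - L_star) @ (L - L_star))
L = np.array([0.0, 0.0])
for _ in range(20):
    L = update_gain(H, L, R2=0.1 * np.eye(2))
```

For this quadratic surrogate the iteration settles at the minimizer of $H + \frac{1}{2}L^T R_2 L$, i.e., $(2I + R_2)^{-1} 2 L^\ast$, confirming that the update drives the regularized objective to its stationary point.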

4. The Bound of the Filter Gain

In this section, we study the bound of the filter gain by introducing the kernel density estimation technique for estimating the probability density function. For estimating the density $\gamma_e(\xi)$, [22,23] studied a general class of consistent and asymptotically normal estimators defined as a kernel-weighted average over the empirical distribution:
$$\gamma_e(\xi) = \frac{1}{N} \sum_{i=1}^{N} K_\sigma(e_k - e_i) \tag{19}$$
where the kernel $K_\sigma$ is a Gaussian kernel function.
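A minimal sketch of the estimator in Equation (19) with a Gaussian kernel is given below; the bandwidth, sample source, and evaluation grid are assumptions chosen for illustration.

```python
import numpy as np

def kde(samples, xi, sigma=0.05):
    """Gaussian kernel density estimate: (1/N) * sum_i K_sigma(xi - e_i)."""
    samples = np.asarray(samples)[:, None]    # shape (N, 1)
    z = (xi[None, :] - samples) / sigma       # pairwise standardized differences
    K = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    return K.mean(axis=0)                     # average the kernels over N samples

# Example: estimate the density of simulated, uniformly distributed error samples
rng = np.random.default_rng(0)
errors = rng.uniform(-0.1, 0.1, size=5000)    # stand-in error samples
xi = np.linspace(-0.3, 0.3, 601)
gamma_hat = kde(errors, xi)
```

Because the Gaussian kernel and its derivative are bounded, a bound of the kind denoted by $\rho$ below is immediate for this estimator.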
From Equation (10), we have
$$\frac{\partial J(k)}{\partial L_k} = -R_1 \int_a^b \frac{\partial \gamma_e(\xi)}{\partial L_k} \left( \ln\left( \gamma_e(\xi) \right) + 1 \right) d\xi + R_2 L_k = 0 \tag{20}$$
Then the filter gain of model (1) is bounded as
$$\|L_k\| \le \left\| R_2^{-1} \right\| \cdot R_1 \int_a^b \left\| \frac{\partial \gamma_e(\xi)}{\partial L_k} \right\| \left| \ln\left( \gamma_e(\xi) \right) + 1 \right| d\xi \tag{21}$$
Using Equation (19), the partial derivative in Equation (21) can be written as
$$\frac{\partial \gamma_e(\xi)}{\partial L_k} = \frac{1}{N} \sum_{i=k-N}^{k} \frac{\partial K_\sigma}{\partial e_k} \cdot \frac{\partial e_k}{\partial L_k} \tag{22}$$
From the dynamic equation of the estimation error, Equation (4), we have
$$\frac{\partial e_k}{\partial L_k} = -C e(k-1), \quad e(k) \in [a, b]^n \tag{23}$$
Noting that $e(k) \in [a, b]^n$, we get the following bound on $\frac{\partial \gamma_e(\xi)}{\partial L_k}$ whenever $\left\| \frac{\partial K_\sigma}{\partial e_k} \right\| \le \rho$:
$$\left\| \frac{\partial \gamma_e(\xi)}{\partial L_k} \right\| \le \|C\| \cdot \max\{|a|, |b|\} \cdot \rho \tag{24}$$
Furthermore, since $\gamma_e(\xi)$ is a bounded function and $\xi$ is defined on $[a, b]^n$, the quantity $\left| \ln(\gamma_e(\xi)) + 1 \right|$ is bounded; denote its bound by $\theta$. By Equations (21) and (24), we get
$$\|L_k\| \le (b - a) \cdot \theta \cdot \rho \cdot \|C\| \cdot \left\| R_2^{-1} \right\| \cdot R_1 \cdot \max\{|a|, |b|\} \triangleq \delta \tag{25}$$
Thus,
$$\|\Delta L_k\| = \|L_k - L_{k-1}\| \le 2\delta, \quad \text{or} \quad \Delta L_k^T \Delta L_k \le 4\delta^2 I \triangleq \tau^2 I \tag{26}$$
Remark 2. It should be noted that $K_\sigma$ is a Gaussian kernel function and the error is bounded in the interval $[a, b]^n$. This guarantees that an upper bound $\rho$ on $\left\| \frac{\partial K_\sigma}{\partial e_k} \right\|$ is easy to obtain.

5. Resilient Filter Gain Design

In this section, we show how the resilient filter gain is obtained to guarantee the stochastic stability of the error system in Equation (4). In general, it is difficult to analyze the closed-loop stability of stochastic distribution systems. To overcome this difficulty, using the results of Section 4, we design a gain-variation filter that renders Equation (4) stochastically exponentially ultimately bounded in the mean square. We define the variable gain $L_k$ as follows:
$$L_k = L + \Delta L_k, \quad \Delta L_k^T \Delta L_k \le \tau^2 I \tag{27}$$
where $\Delta L_k$ is given by Equation (14) of Theorem 1 to guarantee the decrease of the performance function Equation (7). In this sense, Equation (27) is a suboptimal recursive strategy that makes the distribution of the estimation error as narrow as possible.
Substituting Equation (27) into Equation (4), the estimation error dynamics can be written as
$$e(k+1) = (A - L_k C) e(k) + \eta(k) + G\omega(k+1) = (A - LC - \Delta L_k C) e(k) + \eta(k) + G\omega(k+1) \tag{28}$$
For the resilient gain design, we introduce the following Lemma.
Lemma 1 ([24]). Given matrices $Y$, $M$ and $N$, the inequality
$$Y + M \Delta N + N^T \Delta^T M^T < 0 \tag{29}$$
holds for all $\Delta$ satisfying $\Delta^T \Delta \le \sigma I$ if and only if there exists a constant $\varepsilon > 0$ such that
$$Y + \varepsilon M M^T + \frac{\sigma}{\varepsilon} N^T N < 0 \tag{30}$$
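Lemma 1 can be checked numerically: if the strengthened inequality of the form (30) holds for some $\varepsilon$, then the perturbed inequality holds for every admissible $\Delta$. The matrices below are simple choices made here for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative data (an assumption for this check): Y = -2I, M = N = I,
# sigma = 0.25 (so the spectral norm of Delta is at most 0.5), eps = 1.
n, sigma_bound, eps = 3, 0.25, 1.0
Y = -2.0 * np.eye(n)
M = np.eye(n)
N = np.eye(n)

# Strengthened inequality Y + eps*M*M^T + (sigma/eps)*N^T*N < 0 holds (= -0.75 I)
S = Y + eps * M @ M.T + (sigma_bound / eps) * N.T @ N
assert np.linalg.eigvalsh(S).max() < 0

# Hence Y + M*Delta*N + N^T*Delta^T*M^T < 0 for random admissible Delta
rng = np.random.default_rng(1)
for _ in range(100):
    D = rng.standard_normal((n, n))
    D *= np.sqrt(sigma_bound) / np.linalg.norm(D, 2)   # spectral norm <= 0.5
    T = Y + M @ D @ N + N.T @ D.T @ M.T
    assert np.linalg.eigvalsh(T).max() < 0
```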
Based on Lemma 1 and the resilient control theory [25], we present the following results to establish the relationship between the entropy performance and the stochastic stability for stochastic distribution control systems.
Theorem 2. Consider the error system in Equation (4) with the filter gain variation given by Equation (26). For given constants $\varepsilon_1 > 0$, $\varepsilon_2 > 0$ and $\gamma > 0$, if there exist a positive scalar $\beta$ and matrices $P > 0$ and $Y$ such that the following inequalities hold:
$$P - \beta I < 0 \tag{31}$$
$$\begin{bmatrix} -P + \beta\gamma^2 I & \beta\gamma I & (1+\varepsilon_1)\left( A^T P - C^T Y^T \right) & 0 & -(1+\varepsilon_1) C^T \\ * & -\varepsilon_1 \beta I & 0 & 0 & 0 \\ * & * & -(1+\varepsilon_1) P & \varepsilon_2 P & 0 \\ * & * & * & -\varepsilon_2 I & 0 \\ * & * & * & * & -\dfrac{\varepsilon_2}{\tau^2} I \end{bmatrix} < 0 \tag{32}$$
then the system in Equation (28) is exponentially ultimately bounded in the mean square, and the filter gain is given by
$$L_k = P^{-1} Y + \Delta L_k \tag{33}$$
where $\Delta L_k$ is defined by Equation (14).
Proof. Define a Lyapunov function candidate for Equation (28) as
$$V(k) = e^T(k) P e(k) \tag{34}$$
where $P > 0$. Since $E\{\omega(k)\} = 0$ and $E\{\|\omega(k)\|^2\} \le \nu^2$, we have
$$\begin{aligned} \Delta V(k) &= E\{V(k+1)\} - V(k) \\ &= e^T(k) (A - LC - \Delta L_k C)^T P (A - LC - \Delta L_k C) e(k) + 2 e^T(k) (A - LC - \Delta L_k C)^T P \eta(k) \\ &\quad + \eta^T(k) P \eta(k) + E\left\{ \omega^T(k) G^T P G \omega(k) \right\} - e^T(k) P e(k) \\ &\le e^T(k) \left[ (1 + \varepsilon_1) (A - LC - \Delta L_k C)^T P (A - LC - \Delta L_k C) - P \right] e(k) \\ &\quad + \left( \varepsilon_1^{-1} + 1 \right) \eta^T(k) P \eta(k) + E\left\{ \omega^T(k) G^T P G \omega(k) \right\} \end{aligned} \tag{35}$$
Noting that $P < \beta I$ and $\|f(x_1) - f(x_2)\| \le \gamma \|x_1 - x_2\|$, we obtain
$$\eta^T(k) P \eta(k) = \left( f(x(k)) - f(x_f(k)) \right)^T P \left( f(x(k)) - f(x_f(k)) \right) \le \beta \gamma^2 e^T(k) e(k) \tag{36}$$
Then, if
$$\Pi_k \triangleq (1 + \varepsilon_1) (A - LC - \Delta L_k C)^T P (A - LC - \Delta L_k C) - P + \left( \varepsilon_1^{-1} + 1 \right) \beta \gamma^2 I < 0 \tag{37}$$
holds, we have
$$\Delta V(k) \le e^T(k)\, \Pi_k\, e(k) + \phi, \quad \text{where } \phi = \nu^2 \cdot \lambda_{\max}\left( G^T P G \right) \tag{38}$$
From Equation (37), there exists a sufficiently small scalar $\theta$ satisfying $0 < \theta < \lambda_{\max}(P)$ and
$$\Pi_k < -\theta I \tag{39}$$
On the other hand,
$$\lambda_{\min}(P)\, E\{\|e(k)\|^2\} \le E\{V(k)\} \le \lambda_{\max}(P)\, E\{\|e(k)\|^2\} \tag{40}$$
Then, it follows from Equations (35) and (38) that
$$E\{V(k+1)\} - E\{V(k)\} \le -\frac{\theta}{\lambda_{\max}(P)}\, E\{V(k)\} + \phi$$
and subsequently,
$$E\{V(k+1)\} \le \hat{\theta}\, E\{V(k)\} + \phi$$
where
$$\hat{\theta} = 1 - \frac{\theta}{\lambda_{\max}(P)}, \quad 0 < \hat{\theta} < 1$$
Therefore,
$$E\{V(k)\} \le \hat{\theta}^k\, E\{V(0)\} + \frac{1 - \hat{\theta}^k}{1 - \hat{\theta}}\, \phi$$
Using the inequality in Equation (40) again, we get
$$E\{\|e(k)\|^2\} \le \hat{\theta}^k\, \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}\, E\{\|e(0)\|^2\} + \frac{\phi}{\lambda_{\min}(P)} \cdot \frac{1 - \hat{\theta}^k}{1 - \hat{\theta}}$$
It then follows directly from Definition 1 that if Equation (38) holds, the dynamics of the error system is exponentially ultimately bounded in the mean square.
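As a quick numeric check of the step from the recursion to the closed-form bound, iterating $v(k+1) = \hat{\theta}\, v(k) + \phi$ reproduces $\hat{\theta}^k v(0) + \phi (1 - \hat{\theta}^k)/(1 - \hat{\theta})$; the numbers below are arbitrary illustrative choices, not taken from the paper.

```python
# Iterate v(k+1) = theta_hat * v(k) + phi and compare with the closed form
theta_hat, phi, v0, K = 0.9, 0.05, 10.0, 50

v = v0
for _ in range(K):
    v = theta_hat * v + phi

closed = theta_hat**K * v0 + phi * (1 - theta_hat**K) / (1 - theta_hat)
assert abs(v - closed) < 1e-9

# As k grows, v(k) approaches the ultimate bound phi / (1 - theta_hat),
# here 0.05 / 0.1 = 0.5, which plays the role of the constant c in Definition 1
```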
Next, we show the equivalence of Equations (32) and (37).
By the Schur complement, Equation (37) is equivalent to the inequality
$$\begin{bmatrix} -P + \beta\gamma^2 I & \beta\gamma I & (1+\varepsilon_1)(A - LC)^T P \\ \beta\gamma I & -\varepsilon_1 \beta I & 0 \\ (1+\varepsilon_1) P (A - LC) & 0 & -(1+\varepsilon_1) P \end{bmatrix} + \begin{bmatrix} -(1+\varepsilon_1) C^T \\ 0 \\ 0 \end{bmatrix} \Delta L_k^T \begin{bmatrix} 0 & 0 & P \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ P \end{bmatrix} \Delta L_k \begin{bmatrix} -(1+\varepsilon_1) C & 0 & 0 \end{bmatrix} < 0$$
Then, from Lemma 1 and $\Delta L_k^T \Delta L_k \le \tau^2 I$, this holds for all admissible $\Delta L_k$ if and only if there exists $\varepsilon_2 > 0$ such that
$$\begin{bmatrix} -P + \beta\gamma^2 I & \beta\gamma I & (1+\varepsilon_1)(A - LC)^T P \\ \beta\gamma I & -\varepsilon_1 \beta I & 0 \\ (1+\varepsilon_1) P (A - LC) & 0 & -(1+\varepsilon_1) P \end{bmatrix} + \varepsilon_2 \begin{bmatrix} 0 \\ 0 \\ P \end{bmatrix} \begin{bmatrix} 0 & 0 & P \end{bmatrix} + \frac{\tau^2}{\varepsilon_2} \begin{bmatrix} -(1+\varepsilon_1) C^T \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} -(1+\varepsilon_1) C & 0 & 0 \end{bmatrix} < 0$$
Applying the Schur complement again, with $Y = PL$, the above inequality is equivalent to Equation (32). This completes the proof. □
Remark 3. In Theorem 2, the filter gain updating law $\Delta L_k$ is a suboptimal recursive strategy that guarantees the strictly decreasing nature of the entropy of the estimation errors, while the overall gain $L_k$ ensures mean square exponential stability. Hence, based on resilient control theory, Theorem 2 establishes the relationship between the entropy performance and the stochastic stability for stochastic distribution control systems.

6. Numerical Example

The proposed resilient filter gain algorithm is demonstrated via a numerical example in this section. Consider the stochastic system given as follows,
$$x(k+1) = \begin{bmatrix} 0.9 & 0.1 \\ 2 & 1.1 \end{bmatrix} x(k) + \begin{bmatrix} 1 \\ 1.25 \sin(x_1(k)) + 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \omega(k+1), \quad y(k) = \begin{bmatrix} 1 & 1 \end{bmatrix} x(k)$$
The Lipschitz constant of the nonlinear part of the system is $\gamma = 1.25$. The PDF of the random disturbance $\omega(k)$ is
$$\gamma_\omega(\xi) = \begin{cases} -\dfrac{3000}{4} \left( \xi^2 - 0.01 \right), & \xi \in [-0.1, 0.1] \\ 0, & \xi \in (-\infty, -0.1) \cup (0.1, +\infty) \end{cases}$$
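The example density can be sanity-checked numerically: written as $750(0.01 - \xi^2)$ on $[-0.1, 0.1]$ it is nonnegative, integrates to one, and has zero mean, consistent with the standing assumption $E\{\omega(k)\} = 0$. The sketch below verifies this and draws samples by rejection sampling; the function name and sample sizes are illustrative assumptions.

```python
import numpy as np

def gamma_omega(xi):
    """Noise density of the example: -(3000/4)*(xi^2 - 0.01) = 750*(0.01 - xi^2)
    on [-0.1, 0.1], and 0 elsewhere."""
    xi = np.asarray(xi, dtype=float)
    return np.where(np.abs(xi) <= 0.1, 750.0 * (0.01 - xi**2), 0.0)

xi = np.linspace(-0.1, 0.1, 20001)
dx = xi[1] - xi[0]
mass = np.sum(gamma_omega(xi)) * dx          # should be ~1 (valid PDF)
mean = np.sum(xi * gamma_omega(xi)) * dx     # should be ~0 (even density)

# Rejection sampling from gamma_omega; the peak value is 750 * 0.01 = 7.5
rng = np.random.default_rng(0)
samples = []
while len(samples) < 1000:
    x = rng.uniform(-0.1, 0.1)
    if rng.uniform(0.0, 7.5) < gamma_omega(x):
        samples.append(x)
```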
According to Equation (3), the designed filter is
$$x_f(k+1) = \begin{bmatrix} 0.9 & 0.1 \\ 2 & 1.1 \end{bmatrix} x_f(k) + \begin{bmatrix} 1 \\ 1.25 \sin(x_{1f}(k)) + 1 \end{bmatrix} + (L + \Delta L_k) \left( y(k) - y_f(k) \right), \quad y_f(k) = \begin{bmatrix} 1 & 1 \end{bmatrix} x_f(k)$$
where $L$ is the solution of Equation (32) and $\Delta L_k$ is given by Equation (14).
By solving Equation (32), we have
$$\beta = 1.2154, \quad P = 10^3 \times \begin{bmatrix} 6.6854 & -0.6550 \\ -0.6550 & 0.1530 \end{bmatrix}, \quad Y = 10^3 \times \begin{bmatrix} 4.6770 \\ 0.2210 \end{bmatrix}, \quad L = \begin{bmatrix} 1.4483 \\ 7.6424 \end{bmatrix}$$
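A quick consistency check on the reported solution: with the off-diagonal entries of $P$ taken as negative (the web rendering appears to have dropped minus signs, so this sign choice is an assumption made here), $P$ is positive definite and $P^{-1}Y$ reproduces the printed gain via Equation (33) up to rounding of the displayed entries.

```python
import numpy as np

# P and Y as printed, with the off-diagonal signs of P assumed negative
# (an assumption: this is the sign choice that reproduces the printed L)
P = 1.0e3 * np.array([[ 6.6854, -0.6550],
                      [-0.6550,  0.1530]])
Y = 1.0e3 * np.array([4.6770, 0.2210])

assert np.all(np.linalg.eigvalsh(P) > 0)     # P > 0, as Theorem 2 requires

L = np.linalg.solve(P, Y)                    # nominal gain L = P^{-1} Y
# L comes out close to the printed [1.4483, 7.6424], up to rounding
```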
The response of the error dynamics is shown in Figure 1; the steady-state estimation error is seen to be bounded. The response of the entropy is displayed in Figure 2, and the 3D-mesh plots of the PDFs of $e_1(k)$ and $e_2(k)$ are given in Figures 3 and 4. These figures demonstrate that the error system is exponentially bounded in the mean square, the entropy of the estimation errors decreases monotonically, and the PDF of the estimation error approaches a narrow, Gaussian-like shape.
Figure 1. Trajectories of the estimation errors.
Figure 2. The entropy of the estimation errors.
Figure 3. The 3D-mesh plot of the PDF of the estimation error $e_1(k)$.
Figure 4. The 3D-mesh plot of the PDF of the estimation error $e_2(k)$.

7. Conclusions

This paper has presented a new method for the PDF-shaping filtering problem for stochastic systems with non-Gaussian noise, investigating both the minimum entropy performance design and the stochastic stabilization problem. The recursive solution for the filter gain is designed such that the distribution of the error is made as narrow and as close to Gaussian as possible. Using Lyapunov theory and taking the bound of the recursive solution into account, a suboptimal filter gain is obtained such that the estimation error system is exponentially ultimately bounded in the mean square. The effectiveness of the proposed approaches is demonstrated by a numerical simulation example.

Acknowledgements

The work was supported by the National Natural Science Foundation of China (Grant Nos. 60925012, 61127007, 61004023, 61074057, 61004021, 61174069).

References

  1. Anderson, B.D.O.; Moore, J.B. Optimal Filtering; Prentice Hall: Englewood Cliffs, NJ, USA, 1979.
  2. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering; Wiley: New York, NY, USA, 1992.
  3. Grimble, M.J.; El-Sayed, A. Solution of the $H_\infty$ optimal linear filtering problem for discrete-time systems. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1092–1104.
  4. Petersen, I.R.; McFarlane, D.C. Optimal guaranteed cost filtering for uncertain discrete-time linear systems. Int. J. Robust Nonlinear Control 1996, 6, 267–280.
  5. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  6. Renyi, A. Some fundamental questions of information theory. Sel. Pap. Alfred Renyi 1976, 2, 526–552.
  7. Wang, H. Bounded Dynamic Stochastic Systems: Modelling and Control; Springer-Verlag: London, UK, 2000.
  8. Guo, L.; Wang, H. Stochastic Distribution Control System Design: A Convex Optimization Approach; Springer-Verlag: London, UK, 2010.
  9. Yue, H.; Wang, H. Minimum entropy control of closed loop tracking errors for dynamic stochastic systems. IEEE Trans. Autom. Control 2003, 48, 118–122.
  10. Yue, H.; Zhou, J.L.; Wang, H. Minimum entropy of B-spline PDF systems with mean constraint. Automatica 2006, 42, 989–994.
  11. Zhang, J.H.; Wang, H. Minimum entropy control of nonlinear ARMA systems over a communication network. Neural Comput. Appl. 2008, 17, 385–390.
  12. Guo, L.; Yin, L. Robust PDF control with guaranteed stability for non-linear stochastic systems under modelling errors. IET Control Theory Appl. 2009, 3, 575–582.
  13. Yin, L.; Guo, L. Fault isolation for multivariate nonlinear non-Gaussian systems using generalized entropy optimization principle. Automatica 2009, 45, 2612–2619.
  14. Yin, L.; Guo, L. Fault detection for NARMAX stochastic systems using entropy optimization principle. In Proceedings of the 2009 Chinese Control and Decision Conference, Guilin, China, 17–19 June 2009; pp. 859–864.
  15. Wang, H.; Wang, A.; Wang, Y. Online estimation algorithm for the unknown probability density functions of random parameters in auto-regression and exogenous stochastic parameter systems. IEE Proc. Control Theory Appl. 2006, 153, 462–468.
  16. Guo, L.; Wang, H. Minimum entropy filtering for multivariate stochastic systems with non-Gaussian noises. IEEE Trans. Autom. Control 2006, 51, 695–700.
  17. Afshar, P.; Yang, F.; Wang, H. ILC-based minimum entropy filter design and implementation for non-Gaussian stochastic systems. IEEE Trans. Control Syst. Technol. 2012, 20, 960–970.
  18. Zhou, J.; Wang, H.; Guo, L.; Chai, T. Distribution function tracking filter design using hybrid characteristic functions. Automatica 2010, 46, 101–109.
  19. Yaz, E.; Azemi, A. Observer design for discrete and continuous non-linear stochastic systems. Int. J. Syst. Sci. 1993, 24, 2289–2302.
  20. Wang, Z.; Lam, J.; Liu, X. Filtering for a class of nonlinear discrete-time stochastic systems with state delays. J. Comput. Appl. Math. 2007, 201, 153–163.
  21. Papoulis, A. Probability, Random Variables and Stochastic Processes; McGraw-Hill: New York, NY, USA, 1993.
  22. Rosenblatt, M. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 1956, 27, 832–837.
  23. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076.
  24. Khargonekar, P.P.; Petersen, I.R.; Zhou, K. Robust stabilization of uncertain systems and $H_\infty$ optimal control. IEEE Trans. Autom. Control 1990, 35, 351–361.
  25. Mahmoud, M.S. Resilient Control of Uncertain Dynamical Systems. In Lecture Notes in Control and Information Sciences; Springer-Verlag: Berlin/Heidelberg, Germany, 2005; Volume 303.

Wang, Y.; Wang, H.; Guo, L. Resilient Minimum Entropy Filter Design for Non-Gaussian Stochastic Systems. Entropy 2013, 15, 1311-1323. https://doi.org/10.3390/e15041311
