Article

Optimal Filtering of Markov Jump Processes Given Observations with State-Dependent Noises: Exact Solution and Stable Numerical Schemes

1 Institute of Informatics Problems of Federal Research Center “Computer Science and Control” RAS, 44/2 Vavilova str., 119333 Moscow, Russia
2 Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, GSP-1, 1-52 Leninskiye Gory, 119991 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(4), 506; https://doi.org/10.3390/math8040506
Submission received: 14 March 2020 / Revised: 29 March 2020 / Accepted: 30 March 2020 / Published: 2 April 2020
(This article belongs to the Special Issue Stability Problems for Stochastic Models: Theory and Applications)

Abstract:
The paper is devoted to the optimal state filtering of finite-state Markov jump processes given indirect continuous-time observations corrupted by Wiener noise. The crucial feature is that the observation noise intensity is a function of the estimated state, which defeats straightforward filtering approaches based on passage to the innovation process and Girsanov's measure change. We propose an equivalent observation transform, which allows the use of the classical nonlinear filtering framework. We obtain the optimal estimate as a solution to a discrete–continuous stochastic differential system with both continuous and counting processes on the right-hand side. For effective computer realization, we present a new class of numerical algorithms based on the exact solution to the optimal filtering problem given time-discretized observations. The proposed estimate approximations are stable, i.e., they have non-negative components and satisfy the normalization condition. We prove assertions characterizing the approximation accuracy as a function of the observation system parameters, the time discretization step, the maximal number of allowed state transitions, and the applied scheme of numerical integration.

1. Introduction

The Wonham filter [1], along with the Kalman–Bucy filter [2], is one of the most practically used filtering algorithms for the states of stochastic differential observation systems. It is applied extensively for signal processing in engineering, communications, finance and economics, biology, medicine, etc. [3,4,5,6]. The filter provides the Mean Square (MS) optimal on-line estimate of a finite-state Markov Jump Process (MJP) given indirect continuous-time observations corrupted by Wiener noise. The elegant algorithm represents the desired estimate as a solution to a Stochastic Differential System (SDS) with continuous random processes on the Right-Hand Side (RHS).
The fundamental condition for the solution to the filtering problem is the independence of the observation noise intensity of the estimated state. It provides the continuity from the right of the natural flow of σ-algebras induced by the observations, with subsequent utilization of the innovation process framework. Violation of this condition breaks these advantages. In the case of state-dependent observation noise, the author of [7] presents the optimal estimate within the class of linear estimates. Further, the authors of [8,9] use filters of a linear structure to solve the $H_2$-optimal state filtering problem. To find the absolutely optimal filtering estimate, one has to make extra efforts. First, for proper utilization of the stochastic analysis framework, one needs to reformulate the optimal filtering problem, “smoothing forward” the flow of σ-algebras induced by the observations. Second, in the case of state-dependent noise, the innovation process contains less information than the original observations. One has to supplement the innovation by the observation quadratic characteristic, which represents a continuous-time noiseless function of the estimated MJP state. In general, optimal filtering given partially noiseless observations is a challenging problem. Its solution can be expressed either as a limit of a sequence of regularized estimates [10] or via additional differentiation of the smooth observation components or their quadratic characteristics [11,12,13,14]. In both cases, one needs to realize a limit passage, which is difficult to implement on a computer.
Even in the traditional setting, the numerical realization of MJP state filtering is a complicated problem. For example, the explicit numerical methods based on the Itô–Taylor expansion applied to the Wonham filter equation diverge: the produced approximations do not meet the component-wise non-negativity condition, and over time the approximation components reach arbitrarily large absolute values. Further in the presentation, we refer to approximations preserving both the component non-negativity and the normalization condition as stable ones.
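The divergence mechanism can be reproduced in a two-state toy computation. The sketch below is our own illustration, not taken from the paper: the rate matrix, observation row f, step h, and increment dY are contrived to expose the effect. One explicit Euler step of the Wonham filter equation drives a component of the "probability" estimate negative while still preserving the normalization, exactly as described above.

```python
import numpy as np

# One explicit Euler step of the Wonham filter equation
#   dx = Lam^T x dt + (diag x - x x^T) f^T G^{-1} (dY - f^T x dt),
# with scalar observation and G = 1. The coarse step h and the large
# increment dY are contrived values chosen to expose the instability.
Lam = np.array([[-1.0, 1.0], [1.0, -1.0]])
f = np.array([1.0, -1.0])
h, dY = 0.5, -5.0

x = np.array([0.9, 0.1])
drift = Lam.T @ x
gain = (np.diag(x) - np.outer(x, x)) @ f
x_euler = x + h * drift + gain * (dY - (f @ x) * h)

# A stable scheme must instead keep the estimate on the simplex;
# clipping and renormalizing is only an ad hoc repair, shown for contrast.
x_clip = np.clip(x_euler, 0.0, None)
x_stable = x_clip / x_clip.sum()
```

Note that the Euler step preserves the normalization (the components still sum to one) yet leaves the simplex, matching the remark above; the clip-and-renormalize repair is not the approach developed in this paper.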
The Wonham filtering equation is a particular case of the nonlinear Kushner–Stratonovich equation. To solve it, one can use various numerical algorithms:
  • the procedures based on the weak approximation of the original processes by Markov chains [15,16],
  • some variants of the splitting methods [17],
  • the robust procedures based on the Clark transform [18,19],
  • the schemes, which represent the conditional probability distributions through the logarithm [20], etc.
All these algorithms are developed for the case of additive observation noise and are based on Girsanov's measure transform. Hence, they are inapplicable to the estimation of an MJP given observations with state-dependent noise.
The goal of the paper is two-fold. First, it presents a theoretical solution to the MS-optimal filtering problem given observations with state-dependent noise. Second, it introduces a new class of stable numerical algorithms for filter realization and investigates their accuracy. We organize the paper as follows. Section 2 contains a description of the observation system under study with state-dependent observation noise, along with the MS-optimal filtering problem statement. To solve the problem, one needs to transform the available observations both to preserve information equivalence and to suit the application of the known results of optimal nonlinear filtering. Section 3 describes both the observation transformation and the SDS defining the optimal filtering estimate. The SDS is discrete–continuous and contains both continuous and counting random processes on the RHS. Previously, the author of the note [21] presented a sketch of the observation transform, but it could not guarantee the uniqueness of the SDS solution.
Section 4 presents a new class of stable numerical algorithms of nonlinear filtering. The main idea is to discretize the original continuous-time observations and then find the MS-optimal filtering estimate given the sampled observations. The authors of [22] use this idea to solve a particular case of the estimation problem, namely the classification of a finite-state random vector given continuous-time observations with multiplicative noise. Section 4.1 contains a general solution to the problem. The corresponding estimate represents a ratio whose numerator and denominator are infinite sums of integrals. They are shift-scale mixtures of Gaussians. The mixing distributions, in turn, describe the occupation time of the system state in each admissible value during the time discretization interval. In Section 4.2, we suggest approximating the estimates by a convergent sequence obtained by bounding the number s of possible state transitions occurring over the discretization interval. We replace the infinite sums in the formula of the optimal estimate by their finite analogs and also investigate the accuracy of the approximations. We refer to these approximations as the analytical ones of the s-th order. One cannot calculate the integrals analytically and has to replace them with integral sums, and this brings an extra error. Section 4.3 analyzes the value of this error and the total distance between the optimal filtering estimate given the discretized observations and its numerical realization. Section 4.4 presents a numerical example that illustrates the conformity of the theoretical estimates and their numerical realization. Section 5 contains discussion and concluding remarks.

2. Continuous-Time Filtering Problem Statement

On the probability triplet with filtration $(\Omega, \mathcal{F}, \mathrm{P}, \{\mathcal{F}_t\}_{t \geqslant 0})$ we consider the observation system
$$X_t = X_0 + \int_0^t \Lambda^\top(s) X_s\,ds + M_t^X, \quad (1)$$
$$Y_t = \int_0^t f(s) X_s\,ds + \int_0^t \sum_{n=1}^N X_s^n G_n^{1/2}(s)\,dW_s. \quad (2)$$
Here
  • $X_t = \mathrm{col}(X_t^1,\ldots,X_t^N) \in S^N$ is an unobservable state, which is a finite-state Markov jump process (MJP) with the state space $S^N \triangleq \{e_1,\ldots,e_N\}$ ($S^N$ stands for the set of all unit coordinate vectors of the Euclidean space $\mathbb{R}^N$), the transition intensity matrix $\Lambda(t)$ and the initial distribution $\pi = \mathrm{col}(\pi^1,\ldots,\pi^N)$; the process $M_t^X$ is an $\mathcal{F}_t$-adapted martingale,
  • $Y_t = \mathrm{col}(Y_t^1,\ldots,Y_t^M) \in \mathbb{R}^M$ is an observation process: $W_t = \mathrm{col}(W_t^1,\ldots,W_t^M) \in \mathbb{R}^M$ is an $\mathcal{F}_t$-adapted standard Wiener process characterizing the observation noise, $f(t)$ is an $M \times N$-dimensional observation matrix, and the collection of $M \times M$-dimensional matrices $\{G_n(t)\}_{n=\overline{1,N}}$ defines the conditional observation noise intensities given $X_t = e_n$.
The natural flow of σ-algebras generated by the observations Y up to the moment t is denoted by $\mathcal{Y}_t \triangleq \sigma\{Y_s : s \in [0,t]\}$, $\mathcal{Y}_0 \triangleq \{\emptyset, \Omega\}$.
The optimal state filtering given the observations Y is to find the Conditional Mathematical Expectation (CME)
$$\widehat{X}_t \triangleq \mathrm{E}\{X_t \mid \mathcal{Y}_{t+}\}. \quad (3)$$

3. Observation Transform and Optimal Filtering Equation

Before deriving the optimal filtering equation, we specify the properties of the observation system (1) and (2).
  • All trajectories of $\{X_t\}_{t\geqslant 0}$ are continuous from the right and have finite limits from the left, i.e., they are càdlàg processes.
  • The nonrandom matrix-valued functions $\Lambda(t)$, $f(t)$ and $\{G_n(t)\}_{n=\overline{1,N}}$ consist of càdlàg components.
  • The noises in Y are uniformly nondegenerate [10], i.e., $G_n(t) \geqslant \alpha I$ for all $1 \leqslant n \leqslant N$, $t \geqslant 0$, and some $\alpha > 0$; here and below, $I$ is a unit matrix of appropriate dimensionality.
  • The processes
$$K^{ij}(t) \triangleq I_{\{0\}}\big(G_i(t) - G_j(t)\big), \quad i,j = \overline{1,N} \quad (4)$$
    have finite variation; here and below, $I_A(x)$ is the indicator function of the set $A$, and $0$ is a zero matrix of appropriate dimensionality.
Conditions 1–3 are standard for filtering problems [10]. They guarantee the proper description of the MJP distribution $\pi(t) \triangleq \mathrm{E}\,X_t$ by the Kolmogorov system $\pi(t) = \pi + \int_0^t \Lambda^\top(s)\pi(s)\,ds$. Condition 4 relates to the quadratic characteristic of the observation process as a key information source in itself. Below we show that the collection of $G_n(\cdot)$, distinct for different n, allows one to restore the state $X_t$ precisely given the available noisy observations. Condition 4 guarantees the local regularity of the time subsets where the $G_n(\cdot)$ coincide and/or differ from each other: one can express them as finite unions of intervals. The condition is not too restrictive: for instance, it is valid when the $G_n(\cdot)$ are piecewise continuous with bounded derivatives.
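The Kolmogorov system above is a linear ODE and is easy to integrate on a fine grid. A minimal sketch (the generator values are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative transition intensity matrix: rows sum to zero,
# off-diagonal entries are nonnegative. Values are ours, for demonstration.
Lam = np.array([[-1.0, 0.6, 0.4],
                [ 0.5, -1.0, 0.5],
                [ 0.3, 0.7, -1.0]])
pi0 = np.array([1.0, 0.0, 0.0])   # start surely in state e_1

def kolmogorov_forward(Lam, pi0, T, steps=10000):
    """Integrate pi'(t) = Lam^T pi(t), pi(0) = pi0, by explicit Euler.
    Since Lam @ 1 = 0, every step preserves the total probability."""
    pi, h = pi0.copy(), T / steps
    for _ in range(steps):
        pi = pi + h * (Lam.T @ pi)
    return pi

pi_T = kolmogorov_forward(Lam, pi0, T=5.0)
```

The normalization is preserved exactly because the columns of $\Lambda^\top$ sum to zero; for a time-varying $\Lambda(t)$ one would evaluate the generator at each grid node.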
Both the system state and observation are special square-integrable semimartingales [6,23] with the predictable characteristics
$$\langle X, X\rangle_t \triangleq X_t X_t^\top - \int_0^t X_{s-}\,dX_s^\top - \int_0^t dX_s\,X_{s-}^\top = \int_0^t \Big(\mathrm{diag}\big(\Lambda^\top(s)X_s\big) - \Lambda^\top(s)\,\mathrm{diag}\,X_s - \mathrm{diag}\,X_s\,\Lambda(s)\Big)\,ds \quad (5)$$
and
$$\langle Y, Y\rangle_t \triangleq Y_t Y_t^\top - \int_0^t Y_s\,dY_s^\top - \int_0^t dY_s\,Y_s^\top = \sum_{n=1}^N \int_0^t X_s^n G_n(s)\,ds. \quad (6)$$
Conditions 1–3 and the properties of $X_t$ guarantee the P-a.s. fulfilment of the following equalities for the one-sided derivatives of $\langle Y, Y\rangle_t$:
$$\frac{d\langle Y, Y\rangle_s}{ds}\Big|_{s=t-} = \sum_{n=1}^N X_{t-}^n G_n(t-) = \sum_{n=1}^N X_t^n G_n(t-), \qquad \frac{d\langle Y, Y\rangle_s}{ds}\Big|_{s=t+} = \sum_{n=1}^N X_t^n \big(G_n(t-) + \Delta G_n(t)\big) = \sum_{n=1}^N X_t^n G_n(t), \quad (7)$$
where $\Delta G_n(t) \triangleq G_n(t) - G_n(t-)$ is the jump of $G_n(t)$. So, if there exists a nonrandom instant $t^* > 0$ such that $\sum_{n=1}^N \pi^n(t^*)\,\Delta G_n(t^*) \ne 0$, then $\mathcal{Y}_{t^*} \subset \mathcal{Y}_{t^*+} = \mathcal{Y}_{t^*} \vee \sigma\big\{\sum_{n=1}^N X_{t^*}^n \Delta G_n(t^*)\big\}$. The strict inclusion means that the flow of σ-subalgebras $\{\mathcal{Y}_t\}_{t\geqslant 0}$ is not necessarily continuous from the right for the considered observations [24]. This is the reason to define the filtering estimate as a CME of $X_t$ with respect to the “smoothed” flow $\mathcal{Y}_{t+}$, for subsequent correct usage of the stochastic analysis framework.
Let us transform the available observations in such a way that the optimal filtering estimate can be derived by the standard methods [6,23]. Initially, the idea of this transform was suggested in [11]. As a result, the authors introduce the pair
$$U_t \triangleq \int_0^t \left(\frac{d\langle Y, Y\rangle_u}{du}\Big|_{u=s+}\right)^{-1/2} dY_s, \quad (8)$$
$$\langle Y, Y\rangle_t = \sum_{n=1}^N \int_0^t X_s^n G_n(s)\,ds. \quad (9)$$
The authors of [11] prove the coincidence of the σ-algebras $\mathcal{Y}_t = \sigma\{U_s,\ 0 \leqslant s \leqslant t\} \vee \sigma\{\langle Y, Y\rangle_s,\ 0 \leqslant s \leqslant t\}$ for general diffusion observation systems. However, they do not pay attention to the continuity of $\{\mathcal{Y}_t\}$ from the right. The authors of [12,14] suggest replacing the observations $\langle Y, Y\rangle_t$ by their derivative
$$Q(t) \triangleq \frac{d\langle Y, Y\rangle_s}{ds}\Big|_{s=t} = \sum_{n=1}^N X_t^n G_n(t). \quad (10)$$
Then one can construct the optimal estimate either by using $Q_t$ as a linear constraint or by differentiating (10) to extract the dynamic noises. The papers [12,14] contain a rather pessimistic conclusion: the number of differentiations is unbounded in the general case of a diffusion observation system. In contrast, we estimate a finite-state MJP and can construct the optimal filtering estimate using Q without additional differentiation.
So, the transformed observations will contain
  • diffusion processes with the unit diffusion,
  • counting stochastic processes,
  • indirect state observations obtained at the nonrandom discrete moments.
The first transformed observation part is the process U t (8), and in view of (2) and (7) it can be rewritten as
$$U_t = \int_0^t \bar f(s) X_s\,ds + \bar W_t, \quad (11)$$
where $\bar f(s) \triangleq \sum_{n=1}^N G_n^{-1/2}(s) f(s)\,\mathrm{diag}(e_n)$ and $\bar W_t$ is an $\mathcal{F}_t$-adapted standard Wiener process [10].
The process $Q_t$ could play the role of the second part of the transformed observations, since $\mathcal{Y}_t = \sigma\{U_s, Q_s,\ s \in [0,t]\}$ [11]; however, the natural flow of σ-algebras generated by the couple (U, Q) is still not continuous from the right. Moreover, the process $Q_t$ is matrix-valued and looks overabundant for the filter derivation. The point is, $Q_t = Q(t, X_t)$ (10) is a function of the finite-set argument $X_t$, and it affects the estimate performance through its complete preimage
$$Q^{-1}(t, Q_t) \triangleq \{e_n \in S^N : G_n(t) = Q_t\}.$$
To go to the preimage we introduce the following transformation of Q t :
$$H_t \triangleq \sum_{n=1}^N I_{\{0\}}\big(Q_t - G_n(t)\big)\,e_n. \quad (12)$$
$H_t$ is a $\mathcal{Y}_t$-adapted vector process with components 0 or 1, but the trajectories of $H_t$ are not càdlàg. Due to the fact that $X_{t-} = X_t$ P-a.s. for each $t \geqslant 0$, the equalities below are valid:
$$H_t = \sum_{n,k=1}^N I_{\{0\}}\big(G_k(t) - G_n(t)\big)\,X_t^k e_n = K(t) X_t = K(t) X_{t-} \quad \mathrm{P}\text{-a.s.}, \quad (13)$$
where K ( t ) is the N × N -dimensional matrix with the components (4).
The function K ( t ) has the following properties.
  • $K(t) = K^\top(t)$ for any $t \geqslant 0$.
  • The number of jumps of $K(\cdot)$ occurring in any finite time interval is finite due to condition 4.
  • $K(t)$ is not necessarily a càdlàg function [25].
  • $\mathrm{P}\{\Delta K(t)\,\Delta X_t \ne 0\} = 0$ for any $t \geqslant 0$.
  • For any $t \geqslant 0$ there exists a transformation $T(t)$ such that the matrix $T(t)K(t)$ is trapezoidal with orthogonal rows and components 0 and 1.
  • $\mathrm{P}\{T(t) H_t \in S^N\} = 1$ for any $t \geqslant 0$.
Let us define a $\mathcal{Y}_{t+}$-adapted process $V_t = \mathrm{col}(V_t^1,\ldots,V_t^N)$ with càdlàg trajectories:
$$V_t \triangleq T(t+) H_{t+}. \quad (14)$$
From (12) and (13) it follows that $V_t = J(t) X_t$ P-a.s., where $J(t) \triangleq T(t+) K(t+)$.
We denote the set of discontinuities of the process V by $\mathcal{D}^V$; $\mathcal{D}^X$ stands for the set of discontinuities of X, and $\mathcal{D}^J$ for the analogous set of the function J. The sets $\mathcal{D}^V$ and $\mathcal{D}^X$ are random; in contrast, $\mathcal{D}^J$ is nonrandom. The process $V_t$ is purely discontinuous, and due to property 4 it can be rewritten in the form
$$V_t = J(0)X_0 + \sum_{\kappa \in \mathcal{D}^V:\,\kappa \leqslant t} \Delta V_\kappa = J(0)X_0 + \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} \Delta J(\kappa)\,X_\kappa + \sum_{\kappa \in \mathcal{D}^V \setminus \mathcal{D}^J:\,\kappa \leqslant t} J(\kappa)\,\Delta X_\kappa$$
$$= J(0)X_0 + \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} \Delta J(\kappa)\,X_\kappa + \sum_{\kappa \in \mathcal{D}^X:\,\kappa \leqslant t} J(\kappa)\,\Delta X_\kappa = \underbrace{J(0)X_0 + \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} \Delta J(\kappa)\,X_\kappa}_{\triangleq\, D_t} + \underbrace{\int_0^t J(s)\,dX_s}_{\triangleq\, R_t}. \quad (15)$$
By the definition, $V_t \in S^N$ for $t \geqslant 0$. The process $D_t$ characterizes the observable jumps at the nonrandom moments caused by the changes of $J(t)$, and $R_t$ is the observable part of the jumps of the state $X_t$, which occur at random instants.
As the second part of the transformed observations, we choose the N-dimensional random process $C_t \triangleq \mathrm{col}(C_t^1,\ldots,C_t^N)$: the component $C_t^n$ counts the jumps of the process $V_t$ into the state $e_n$ occurring at random instants over the interval $[0,t]$:
$$C_t^n = \int_0^t \big(1 - e_n^\top V_{s-}\big)\,e_n^\top\,dR_s. \quad (16)$$
The third part of the transformed observations is the N-dimensional process D t with the jumps at the nonrandom moments.
Lemma 1.
If $\bar{\mathcal{Y}}_t \triangleq \sigma\{(U_s, C_s, D_s),\ s \in [0,t]\}$, then the coincidence $\bar{\mathcal{Y}}_t = \mathcal{Y}_{t+}$ holds for any $t \geqslant 0$.
The correctness of the Lemma assertion follows immediately from the fact that the composite process $(U_t, C_t, D_t)$ is constructed to be $\mathcal{Y}_{t+}$-adapted, and from the one-to-one correspondence of the (U, C, D) and Y paths:
$$U_t = \int_0^t \left(\frac{d\langle Y, Y\rangle_u}{du}\Big|_{u=s+}\right)^{-1/2} dY_s, \qquad C_t = \int_0^t (I - \mathrm{diag}\,V_{s-})\,dV_s - \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} (I - \mathrm{diag}\,V_{\kappa-})\,\Delta V_\kappa,$$
$$D_t = \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} (I - \mathrm{diag}\,V_{\kappa-})\,\Delta V_\kappa, \qquad V_t = T(t+)\,H_{t+}, \qquad H_t \triangleq \sum_{n=1}^N I_{\{0\}}\left(\frac{d\langle Y, Y\rangle_s}{ds}\Big|_{s=t+} - G_n(t)\right) e_n,$$
$$V_t = D_t + \int_0^t \sum_{(i,j):\,i \ne j}^N V_{s-}^i\,(e_j - e_i)\,dC_s^j, \qquad Y_t = \int_0^t \sum_{n=1}^N V_s^n G_n^{1/2}(s)\,dU_s.$$
Below we use the following notations: $\mathbf{1}$ is a row vector of appropriate dimensionality formed by units, $J^n(s) \triangleq e_n^\top J(s)$ is the n-th row of the matrix $J(s)$, and
$$\Gamma^n(s) \triangleq \mathrm{diag}\big(J^n(s)\big)\,\Lambda^\top(s)\,\big(I - \mathrm{diag}\,J^n(s)\big). \quad (17)$$
Lemma 2.
The process $C_t = \mathrm{col}(C_t^1,\ldots,C_t^N)$ has the following properties.
1. 
The n-th component $C_t^n$ allows the martingale representation
$$C_t^n = \int_0^t \mathbf{1}\,\Gamma^n(s)\,X_s\,ds + \int_0^t \big(1 - J^n(s) X_{s-}\big)\,J^n(s)\,dM_s^X. \quad (18)$$
2. 
$[C^n, C^m]_t \equiv 0$ for any $n \ne m$; \quad (19)
$$\langle C^n, C^n\rangle_t = \int_0^t \mathbf{1}\,\Gamma^n(s)\,X_s\,ds. \quad (20)$$
3. 
The innovation processes
$$\nu_t^n \triangleq \int_0^t \big(dC_s^n - \mathbf{1}\,\Gamma^n(s)\,\widehat{X}_s\,ds\big), \quad n = \overline{1,N} \quad (21)$$
are $\bar{\mathcal{Y}}_t$-adapted martingales with the quadratic characteristics
$$\langle \nu^n, \nu^n\rangle_t = \int_0^t \mathbf{1}\,\Gamma^n(s)\,\widehat{X}_s\,ds.$$
Proof of Lemma 2 is given in Appendix A.
Finally, the transformed observations ( U , C , D ) take the form
U t = 0 t f ¯ ( s ) X s d s + W ¯ t , C t n = 0 t 1 Γ n ( s ) X s d s + 0 t ( 1 J n ( s ) X s ) J n ( s ) d M s X , n = 1 , N ¯ , D t = J ( 0 ) X 0 + κ J : κ t Δ J ( κ ) X κ .
Theorem 1.
The optimal filtering estimate X ^ t is a strong solution to the SDS
$$\widehat{X}_t = \big(\mathbf{1}\,\mathrm{diag}(D_0)\,J(0)\,\pi\big)^{+}\,\mathrm{diag}(D_0)\,J(0)\,\pi + \int_0^t \Lambda^\top(s)\,\widehat{X}_s\,ds + \int_0^t \big(\mathrm{diag}\,\widehat{X}_s - \widehat{X}_s \widehat{X}_s^\top\big)\,\bar f^{\,\top}(s)\,d\omega_s$$
$$+ \sum_{n=1}^N \int_0^t \Big(\Gamma^{n\top}(s) - \big(\mathbf{1}\,\Gamma^n(s)\,\widehat{X}_{s-}\big)\,I\Big)\,\widehat{X}_{s-}\,\big(\mathbf{1}\,\Gamma^n(s)\,\widehat{X}_{s-}\big)^{+}\,d\nu_s^n$$
$$+ \sum_{\kappa \in \mathcal{D}^J:\,\kappa \leqslant t} \Big(\big(\Delta D_\kappa^\top\,\Delta J(\kappa)\,\widehat{X}_{\kappa-}\big)^{+}\,\mathrm{diag}(\Delta D_\kappa)\,\Delta J(\kappa) - I\Big)\,\widehat{X}_{\kappa-}, \quad (23)$$
where
$$\omega_t \triangleq U_t - \int_0^t \bar f(s)\,\widehat{X}_s\,ds \quad (24)$$
and $A^{+}$ is the Moore–Penrose pseudoinverse of A. The solution is unique within the class of nonnegative piecewise-continuous $\mathcal{Y}_{t+}$-adapted processes whose discontinuity sets lie in the set of discontinuities of V.
Proof of Theorem 1 is given in Appendix B.
The transformed observations (22), along with Theorem 1, suggest a condition for the exact identifiability of the state $X_t$ given the indirect noisy observations $Y_t$ (2).
Corollary 1.
If for any $n \ne m$ ($n, m = \overline{1,N}$) the inequalities $G_n(s) \ne G_m(s)$ hold almost everywhere on $[0,t]$, then $\widehat{X}_t = X_t$ P-a.s., and $X_t$ is the solution to the SDS (23).
The proof of Corollary 1 is given in Appendix C.

4. Numerical Algorithms of Optimal Filtering

4.1. Optimal Filtering Given Discretized Observations

The previous section contains the stochastic system (23) defining the optimal filtering estimate $\widehat{X}_t$. The problem of its numerical realization seems routine: we should apply the corresponding methods of numerical integration of SDSs with jumps on the RHS [26]. However, this simplicity is illusory. The problem is that the “new” counting observation $C_t$ and the discrete-time one $D_t$ are results of a certain transform of the available observation Y, and this transform includes a limit passage operation. In fact, to obtain $C_t$ we have to estimate/restore the current value of the derivative $\frac{d\langle Y, Y\rangle_t}{dt}$. First, this leads to some time delay needed to accumulate the observations $Y_t$. Second, any pre-limit variant of $C_t$ either has a.s. continuous trajectories or represents their sampling, which demonstrates an oscillating nature. Third, the considered filtering estimate is the CME of the state $X_t$ given the observations Y up to the moment t. The CME has natural properties: its components are a.s. non-negative and satisfy the normalization condition. The estimates and approximations having these properties are referred to in the paper as stable ones. In general, the conventional numerical algorithms do not provide these properties for the calculated approximations. They can preserve the normalization condition only, while the components can have arbitrary signs and absolute values.
In this paper, we present another approach to the numerical realization of the filtering algorithm above. We discretize the available observations Y in time with the increment h and then solve the optimal state filtering problem given the discretized observations. This estimate can be considered an approximation of the one given the initial continuous-time observations. The properties of the CME guarantee the stability of the proposed approximation.
To simplify the derivation of the numerical algorithm and its accuracy analysis, we investigate the time-invariant version of the observation system (1), (2), i.e., $\Lambda(t) \equiv \Lambda$, $f(t) \equiv f$, $G_n(t) \equiv G_n$, $n = \overline{1,N}$. The observations are discretized with the time increment h:
$$Y_r \triangleq \int_{t_{r-1}}^{t_r} f X_s\,ds + \int_{t_{r-1}}^{t_r} \sum_{n=1}^N X_s^n G_n^{1/2}\,dW_s, \quad r \in \mathbb{N}, \quad (25)$$
where $t_r \triangleq rh$ are equidistant time instants. We denote by $\mathcal{Y}_r \triangleq \sigma\{Y_s : 1 \leqslant s \leqslant r\}$ the non-decreasing family of σ-algebras generated by the time-discretized observations; $\mathcal{Y}_0 \triangleq \{\emptyset, \Omega\}$.
The optimal state filtering problem given the discretized observations is to find $\widehat{X}_r \triangleq \mathrm{E}\{X_{t_r} \mid \mathcal{Y}_r\}$.
Let us consider the asymptotics of $\widehat{X}$. We fix some $T > 0$ and consider a condensed sequence of binary meshes $\{rT2^{-n}\}_{r=\overline{1,2^n}}$ with time increments $h_n \triangleq T2^{-n}$ and the corresponding increasing sequence of σ-subalgebras $\{\mathcal{Y}_{2^n}^n\}$: $\mathcal{Y}_{2^n}^n \triangleq \sigma\{Y_r,\ 1 \leqslant r \leqslant 2^n\}$. The observation process $\{Y_t\}$ is separable, hence $\sigma\big(\bigcup_{n=1}^\infty \mathcal{Y}_{2^n}^n\big) = \mathcal{Y}_T$. Then, by the Lévy theorem, $\widehat{X}_{2^n} \triangleq \mathrm{E}\{X_T \mid \mathcal{Y}_{2^n}^n\} \to \mathrm{E}\{X_T \mid \mathcal{Y}_T\} = \mathrm{E}\{X_T \mid \mathcal{Y}_{T+}\} = \widehat{X}_T$ P-a.s. Moreover, since $\mathrm{E}\,\widehat{X}_T \equiv \mathrm{E}\,\widehat{X}_{2^n} = \pi(T)$, the $L_1$-convergence is also true: $\lim_{n\to\infty} \mathrm{E}\,|\widehat{X}_T - \widehat{X}_{2^n}| = 0$. The convergence also holds if we replace the sequence of binary meshes by any condensed sequence with vanishing step. So, we can conclude that optimal filtering given the discretized observations is a way to design stable convergent approximations without the observation transform $Y \to (U, C, D)$ introduced in the previous section.
To derive the filtering formula we use the approach of [27] and the mathematical induction.
In the case r = 0 we have
$$\widehat{X}_0 = \mathrm{E}\{X_0 \mid \mathcal{Y}_0\} = \mathrm{E}\,X_0 = \pi. \quad (26)$$
Let for some $r \in \mathbb{N}$ the estimate $\widehat{X}_{r-1} = \mathrm{E}\{X_{t_{r-1}} \mid \mathcal{Y}_{r-1}\}$ be known. Now we calculate $\widehat{X}_r$ at the next time instant. To do this, we have to specify the joint conditional distribution of $(X_{t_r}, Y_r)$ with respect to $\mathcal{Y}_{r-1}$. From the observation model and ([10] Lemma 7.5) it follows that the conditional distribution of $Y_r$ given the σ-algebra $\mathcal{F}_{t_r}^X \vee \mathcal{Y}_{r-1}$ is Gaussian with the parameters
$$\mathrm{E}\{Y_r \mid \mathcal{F}_{t_r}^X\} = f \upsilon_r, \qquad \mathrm{cov}(Y_r, Y_r \mid \mathcal{F}_{t_r}^X) = \sum_{n=1}^N \upsilon_r^n G_n. \quad (27)$$
Here, $\upsilon_r = \mathrm{col}(\upsilon_r^1,\ldots,\upsilon_r^N) \triangleq \int_{t_{r-1}}^{t_r} X_s\,ds$ is a random vector composed of the occupation times of the process X in each state $e_n$ during the interval $[t_{r-1}, t_r]$.
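The pair $(\upsilon_r, Y_r)$ is easy to sample exactly: simulate the embedded jump chain over one step of length h, accumulate the occupation times, and then draw $Y_r$ from the Gaussian law (27). A sketch under our own naming conventions (G is passed as the list of covariance matrices $G_n$; this is an illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_step(Lam, k, f, G, h):
    """Simulate the MJP on [0, h] starting from state index k; return the
    final state index, the occupation-time vector v (v.sum() == h), and one
    draw of the discretized observation Y_r per (27):
    mean f @ v, covariance sum_n v[n] * G[n]."""
    N = Lam.shape[0]
    t, v = 0.0, np.zeros(N)
    while True:
        rate = -Lam[k, k]
        stay = rng.exponential(1.0 / rate) if rate > 0 else np.inf
        stay = min(stay, h - t)
        v[k] += stay
        t += stay
        if t >= h:
            break
        p = Lam[k].copy(); p[k] = 0.0; p /= p.sum()   # embedded jump chain
        k = rng.choice(N, p=p)
    mean = f @ v
    cov = sum(v[n] * G[n] for n in range(N))
    Y = rng.multivariate_normal(mean, cov)
    return k, v, Y
```

Averaging such draws over many paths is what the Monte Carlo experiment of Section 4.4 does.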
Below in the presentation we use the following notations:
  • $\mathcal{D} \triangleq \{u = \mathrm{col}(u^1,\ldots,u^N) : u^n \geqslant 0,\ \sum_{n=1}^N u^n = h\}$ is the $(N-1)$-dimensional simplex in the space $\mathbb{R}^N$; $\mathcal{D}$ is the distribution support of the vector $\upsilon_r$;
  • $\Pi \triangleq \{\pi = \mathrm{col}(\pi^1,\ldots,\pi^N) : \pi^n \geqslant 0,\ \sum_{n=1}^N \pi^n = 1\}$ is the “probabilistic simplex” formed by the possible values of $\pi$;
  • $N_r^X$ is the random number of transitions of the state $X_t$ occurring on the interval $[t_{r-1}, t_r]$;
  • $a_r^s \triangleq \{\omega \in \Omega : N_r^X(\omega) \leqslant s\}$, $A_r^s \triangleq \bigcap_{q=1}^r a_q^s$;
  • $\rho^{k,\ell,q}(du)$, $\ell = \overline{1,N}$, are the components of the conditional distribution of the vector $X_{t_r} I_{\{q\}}(N_r^X)$ jointly with $\upsilon_r$ given $X_{t_{r-1}} = e_k$, i.e., for any $G \in \mathcal{B}(\mathbb{R}^N)$ the following equality is true:
$$\mathrm{E}\big\{I_G(\upsilon_r)\,I_{\{q\}}(N_r^X)\,X_{t_r}^\ell \mid X_{t_{r-1}} = e_k\big\} = \int_G \rho^{k,\ell,q}(du);$$
  • $\mathcal{N}(y, m, K) \triangleq (2\pi)^{-M/2}\,\mathrm{det}^{-1/2} K\,\exp\big(-\tfrac12\,\|y - m\|_{K^{-1}}^2\big)$ is the M-dimensional Gaussian probability density function (pdf) with the expectation m and nondegenerate covariance matrix K;
  • $\|\alpha\|_K^2 \triangleq \alpha^\top K \alpha$, $\langle \alpha, \beta\rangle_K \triangleq \alpha^\top K \beta$.
The Markov property of $\{(X_{t_r}, Y_r)\}_{r \geqslant 0}$, the formula of total probability, and the Fubini theorem provide the equalities below for any set $A \in \mathcal{B}(\mathbb{R}^M)$:
$$\mathrm{E}\{X_{t_r} I_A(Y_r) \mid \mathcal{Y}_{r-1}\} = \mathrm{E}\big\{\mathrm{E}\{X_{t_r} I_A(Y_r) \mid \mathcal{F}_{t_r}^X \vee \mathcal{Y}_{r-1}\} \mid \mathcal{Y}_{r-1}\big\} = \mathrm{E}\Big\{X_{t_r} \int_A \mathcal{N}\big(y, f\upsilon_r, \textstyle\sum_{p=1}^N \upsilon_r^p G_p\big)\,dy \;\Big|\; \mathcal{Y}_{r-1}\Big\}$$
$$= \mathrm{E}\Big\{\mathrm{E}\Big\{X_{t_r} \int_A \mathcal{N}\big(y, f\upsilon_r, \textstyle\sum_{p=1}^N \upsilon_r^p G_p\big)\,dy \;\Big|\; \sigma\{X_{t_{r-1}}\} \vee \mathcal{Y}_{r-1}\Big\} \;\Big|\; \mathcal{Y}_{r-1}\Big\}$$
$$= \mathrm{E}\Big\{\sum_{\ell=1}^N e_\ell \sum_{q=0}^\infty \sum_{k=1}^N e_k^\top X_{t_{r-1}} \int_{\mathcal{D}} \int_A \mathcal{N}\big(y, fu, \textstyle\sum_{p=1}^N u^p G_p\big)\,dy\;\rho^{k,\ell,q}(du) \;\Big|\; \mathcal{Y}_{r-1}\Big\}$$
$$= \sum_{\ell=1}^N e_\ell \int_A \Big[\sum_{k=1}^N \widehat{X}_{r-1}^k \sum_{q=0}^\infty \int_{\mathcal{D}} \mathcal{N}\big(y, fu, \textstyle\sum_{p=1}^N u^p G_p\big)\,\rho^{k,\ell,q}(du)\Big]\,dy.$$
This means that the integrand in the square brackets defines the conditional distribution of $(X_{t_r}, Y_r)$ given $\mathcal{Y}_{r-1}$. Further, the conditional distribution $\widehat{X}_r$ is defined component-wise by the generalized Bayes rule [10]:
$$\widehat{X}_r^j = \frac{\sum_{k=1}^N \widehat{X}_{r-1}^k \sum_{q=0}^\infty \int_{\mathcal{D}} \mathcal{N}\big(Y_r, fu, \sum_{p=1}^N u^p G_p\big)\,\rho^{k,j,q}(du)}{\sum_{i,\ell=1}^N \widehat{X}_{r-1}^i \sum_{c=0}^\infty \int_{\mathcal{D}} \mathcal{N}\big(Y_r, fv, \sum_{n=1}^N v^n G_n\big)\,\rho^{i,\ell,c}(dv)}, \quad j = \overline{1,N}. \quad (28)$$
So, we have proved the following
Lemma 3.
If for the observation system (1), (2) conditions 1–3 are valid, then the filtering estimate $\widehat{X}_r$ given the discretized observations is defined by (26) at $r = 0$ and by the recursion (28) upon reception of the discretized observation $Y_r$ at the instant $t_r$.
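For intuition on recursion (28), consider its coarsest truncation. If no transition happens on a step (only the term q = 0 is retained), the state stays at some $e_k$, the occupation time is the point $u = h e_k$, and the conditional weight of this event is $e^{\lambda_{kk} h}$; the recursion then collapses to a simple normalized reweighting. The sketch below is our own illustration of this zero-transition case, with our own naming, not the authors' code:

```python
import numpy as np

def gauss_pdf(y, m, K):
    """M-dimensional Gaussian density N(y, m, K)."""
    M = y.shape[0]
    d = y - m
    return np.exp(-0.5 * d @ np.linalg.solve(K, d)) / np.sqrt(
        (2.0 * np.pi) ** M * np.linalg.det(K))

def filter_step_s0(x_prev, Y_r, Lam, f, G, h):
    """One step of the zero-transition (s = 0) truncation of (28):
    rho^{k,k,0} reduces to a point mass at u = h e_k with weight
    exp(Lam[k, k] * h); G is the list of covariance matrices G_n."""
    N = Lam.shape[0]
    w = np.array([x_prev[k] * np.exp(Lam[k, k] * h)
                  * gauss_pdf(Y_r, h * f[:, k], h * G[k])
                  for k in range(N)])
    return w / w.sum()   # normalization keeps the estimate on the simplex
```

By construction, the output has nonnegative components summing to one, i.e., the approximation is stable in the sense defined above.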

4.2. Stable Analytic Approximations

Recursion (28) cannot be realized directly because of the infinite summation both in the numerator and the denominator. We replace them by finite sums; the corresponding vector sequence $\overline{X}_r(s)$, calculated by the formula
$$\overline{X}_r^j(s) = \frac{\sum_{k=1}^N \overline{X}_{r-1}^k(s) \sum_{q=0}^s \int_{\mathcal{D}} \mathcal{N}\big(Y_r, fu, \sum_{p=1}^N u^p G_p\big)\,\rho^{k,j,q}(du)}{\sum_{i,\ell=1}^N \overline{X}_{r-1}^i(s) \sum_{c=0}^s \int_{\mathcal{D}} \mathcal{N}\big(Y_r, fv, \sum_{n=1}^N v^n G_n\big)\,\rho^{i,\ell,c}(dv)}, \quad j = \overline{1,N}, \quad (29)$$
is called the analytic approximation of the s-th order of $\widehat{X}_r$. Obviously, $\overline{X}_r(s)$ is stable.
Let us introduce the following positive random numbers and matrices:
$$\xi_q^{kj} \triangleq \sum_{m=0}^s \int_{\mathcal{D}} \mathcal{N}\big(Y_q, fu, \textstyle\sum_{p=1}^N u^p G_p\big)\,\rho^{k,j,m}(du), \qquad \theta_q^{kj} \triangleq \sum_{m=s+1}^\infty \int_{\mathcal{D}} \mathcal{N}\big(Y_q, fu, \textstyle\sum_{p=1}^N u^p G_p\big)\,\rho^{k,j,m}(du),$$
$$\xi_q \triangleq \big\|\xi_q^{kj}\big\|_{k,j=\overline{1,N}}, \qquad \theta_q \triangleq \big\|\theta_q^{kj}\big\|_{k,j=\overline{1,N}}. \quad (30)$$
The estimates $\widehat{X}_r$ (28) and $\overline{X}_r(s)$ (29) can be rewritten in the recurrent form:
$$\widehat{X}_r = \big(\mathbf{1}\,(\xi_r + \theta_r)^\top \widehat{X}_{r-1}\big)^{-1}\,(\xi_r + \theta_r)^\top \widehat{X}_{r-1}, \quad (31)$$
$$\overline{X}_r(s) = \big(\mathbf{1}\,\xi_r^\top \overline{X}_{r-1}(s)\big)^{-1}\,\xi_r^\top \overline{X}_{r-1}(s). \quad (32)$$
Let us define the global distance [28] between the estimates $\{\overline{X}_r(s)\}$ and $\{\widehat{X}_r\}$ as
$$\Sigma_r(s) \triangleq \sup_{\pi \in \Pi} \mathrm{E}\,\big\|\widehat{X}_r - \overline{X}_r(s)\big\|_1 = \sup_{\pi \in \Pi} \sum_{j=1}^N \mathrm{E}\,\big|\widehat{X}_r^j - \overline{X}_r^j(s)\big|. \quad (33)$$
This natural characteristic shows the maximal expected divergence of the recursions (28) and (29) at the r-th step.
The assertion below defines an upper bound for the characteristic $\Sigma_r(s)$.
Lemma 4.
If the conditions of Lemma 3 are valid, then
$$\Sigma_r(s) \leqslant 2 - 2\left(1 - C_1\,\frac{(\bar\lambda h)^{s+1}}{(s+1)!}\right)^r, \quad (34)$$
where $\bar\lambda \triangleq \max_{1 \leqslant n \leqslant N} |\lambda_{nn}|$, and $C_1 = C_1(h, \bar\lambda) \in (0, 1)$ is the following parameter:
$$C_1 \triangleq e^{-\bar\lambda h}\,\frac{(s+1)!}{(\bar\lambda h)^{s+1}} \sum_{k=s+1}^\infty \frac{(\bar\lambda h)^k}{k!}, \quad (35)$$
which is bounded from above: $C_1\,\frac{(\bar\lambda h)^{s+1}}{(s+1)!} < 1$.
The proof of Lemma 4 is given in Appendix D.
The assertion of the Lemma brings practical benefit. The Lemma does not impose any asymptotic requirements on either the approximation order s or the discretization step h: inequality (34) is universal. In most digital control systems, the data acquisition rate is fixed or bounded from above. There are also extra algorithmic limitations on the rate: the “raw” data should be preprocessed, smoothed, averaged, refined from outliers, etc. For example, utilization of the central limit theorem [29] and the diffusion approximation framework [30] for the renewal processes is legitimate only with significant averaging intervals, whose length depends on the process moments.
Now we fix the time instant T and consider the asymptotics $h \to 0$. In this case $r = T/h$ and
$$\Sigma_{T/h}(s) \leqslant 2 - 2\left(1 - C_1\,\frac{(\bar\lambda h)^{s+1}}{(s+1)!}\right)^{T/h} \leqslant 2\bar\lambda T\,\frac{(\bar\lambda h)^s}{(s+1)!}.$$

4.3. Stable Numerical Approximations

In the recursion (32) we use the integrals $\xi_r^{ij}$, which cannot be calculated analytically. The numerical integration brings some extra approximation error. Let us investigate its effect on the total accuracy of the filter's numerical realization.
The integrals $\xi^{ij}(y)$ are usually approximated by the sums
$$\xi^{ij}(y) \approx \psi^{ij}(y) \triangleq \sum_{\ell=1}^L \mathcal{N}\big(y, f w_\ell, \textstyle\sum_{p=1}^N w_\ell^p G_p\big)\,\varrho_\ell^{ij}, \qquad \psi(y) \triangleq \big\|\psi^{ij}(y)\big\|_{i,j=\overline{1,N}}, \quad (36)$$
which are defined by the collection of the pairs $\{(w_\ell, \varrho_\ell^{ij})\}_{\ell=\overline{1,L}}$. Here, $w_\ell \triangleq \mathrm{col}(w_\ell^1,\ldots,w_\ell^N) \in \mathcal{D}$ are the points, and $\varrho_\ell^{ij} \geqslant 0$ ($\ell = \overline{1,L}$) are the weights: $\sum_{j=1}^N \sum_{\ell=1}^L \varrho_\ell^{ij} \leqslant Q \leqslant 1$.
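As a concrete instance of scheme (36), consider $s = 1$ with the midpoint rectangle rule. The q = 0 term is a point mass at $u = h e_i$ with weight $e^{\lambda_{ii} h}$; for q = 1, a single jump $i \to j$ at time $\tau$ gives $u = \tau e_i + (h - \tau) e_j$ with density $\lambda_{ij}\,e^{\lambda_{ii}\tau + \lambda_{jj}(h-\tau)}$ in $\tau$, integrated with L midpoint nodes. The derivation of these densities and all names below are our own sketch, not the authors' implementation:

```python
import numpy as np

def gauss_pdf(y, m, K):
    """M-dimensional Gaussian density N(y, m, K)."""
    M = y.shape[0]
    d = y - m
    return np.exp(-0.5 * d @ np.linalg.solve(K, d)) / np.sqrt(
        (2.0 * np.pi) ** M * np.linalg.det(K))

def psi_matrix(y, Lam, f, G, h, L=20):
    """Midpoint-rule approximation psi^{ij}(y) of xi^{ij}(y) for s = 1.
    G is the list of covariance matrices G_n."""
    N = Lam.shape[0]
    psi = np.zeros((N, N))
    for i in range(N):
        # q = 0: no transition, occupation time u = h e_i
        psi[i, i] += np.exp(Lam[i, i] * h) * gauss_pdf(y, h * f[:, i], h * G[i])
        # q = 1: one jump i -> j at time tau, u = tau e_i + (h - tau) e_j
        for j in range(N):
            if j == i:
                continue
            for ell in range(L):
                tau = (ell + 0.5) * h / L
                dens = Lam[i, j] * np.exp(Lam[i, i] * tau + Lam[j, j] * (h - tau))
                m = tau * f[:, i] + (h - tau) * f[:, j]
                K = tau * G[i] + (h - tau) * G[j]
                psi[i, j] += (h / L) * dens * gauss_pdf(y, m, K)
    return psi
```

The resulting matrix plugs directly into the normalized recurrent update in place of $\xi_r$, and its entries are nonnegative by construction, so the stability of the scheme is preserved.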
In complete analogy with $\xi_q$ we define the approximations $\psi_q \triangleq \|\psi^{ij}(Y_q)\|_{i,j=\overline{1,N}}$. By construction, the elements of $\psi_q$ are positive random values; hence, the approximation $\widetilde{X}_r$,
$$\widetilde{X}_r \triangleq \big(\mathbf{1}\,\psi_r^\top \widetilde{X}_{r-1}\big)^{-1}\,\psi_r^\top \widetilde{X}_{r-1}, \qquad \widetilde{X}_0 = \pi, \quad (37)$$
is stable. Below we denote the numerical integration errors and their absolute values as follows:
$$\gamma^{kj} \triangleq \psi^{kj} - \xi^{kj}, \qquad \gamma_r \triangleq \big\|\gamma^{kj}(Y_r)\big\|_{k,j=\overline{1,N}}, \quad (38)$$
$$\bar\gamma^{kj} \triangleq \big|\gamma^{kj}\big|, \qquad \bar\gamma_r \triangleq \big\|\,|\gamma^{kj}(Y_r)|\,\big\|_{k,j=\overline{1,N}}. \quad (39)$$
So, the recursion (32) is replaced by the scheme (37), holding the common initial condition π .
Both (32) and (37) are constructed in light of the event $A_r^s$: the state transition numbers do not exceed the threshold s over any subinterval $[t_{q-1}, t_q]$ belonging to $[0, t_r]$. So, the distance between $\widetilde{X}_r$ and $\overline{X}_r(s)$ should be determined taking this event into account. In view of this fact, we propose the pseudo-metric
$$E_r(s) \triangleq \sup_{\pi \in \Pi} \mathrm{E}\big\{I_{A_r^s}(\omega)\,\big\|\widetilde{X}_r - \overline{X}_r(s)\big\|_1\big\} = \sup_{\pi \in \Pi} \sum_{n=1}^N \mathrm{E}\big\{I_{A_r^s}(\omega)\,\big|\widetilde{X}_r^n - \overline{X}_r^n(s)\big|\big\}. \quad (40)$$
This index reflects the maximal divergence of the algorithms (32) and (37) after r steps, started from an arbitrary but common initial condition.
Theorem 2.
If the inequality
$$\max_{i=\overline{1,N}} \sum_{j=1}^N \int_{\mathbb{R}^M} \big|\psi^{ij}(y) - \xi^{ij}(y)\big|\,dy < \delta \quad (41)$$
is true for the numerical integration scheme (36), then the distance $E_r(s)$ is bounded from above:
$$E_r(s) \leqslant 2\,r\,Q^{r-1}\,\delta. \quad (42)$$
The proof of Theorem 2 is given in Appendix E.
The possibility of describing the accuracy of the numerical algorithm for stochastic filtering using only condition (41), which relates purely to the calculus, looks remarkable. Furthermore, if the total weight $Q = \sum_{j,\ell} \varrho_\ell^{ij}$ is separated from unity, i.e., $Q < 1$, then the index $E_r(s)$ is a sublinear function of r, as is the index $\Sigma_r(s)$ of the analytic accuracy. Notably, in the classic numerical algorithms of the SDS solution, the global error grows linearly with respect to the number of steps r [26].
The precision characteristics of the analytical approximation and of its numerical realization should be aggregated into one. If the conditions of Lemma 4 and Theorem 2 are valid, then the local distance (i.e., the distance after one iteration) between the optimal filtering estimate and its numerical approximation can be bounded from above:
$$\tau(s) \triangleq \sup_{\pi \in \Pi} \mathrm{E}\,\big\|\widehat{X}_1 - \widetilde{X}_1\big\|_1 \leqslant \sup_{\pi \in \Pi} \mathrm{E}\Big\{I_{a_1^s}(\omega)\big(\|\widetilde{X}_1 - \overline{X}_1(s)\|_1 + \|\overline{X}_1(s) - \widehat{X}_1\|_1\big) + I_{\bar a_1^s}(\omega)\,\|\widetilde{X}_1 - \widehat{X}_1\|_1\Big\}$$
$$\leqslant 2\,\mathrm{P}\{\bar a_1^s\} + \sup_{\pi \in \Pi} \mathrm{E}\,\big\|\overline{X}_1(s) - \widehat{X}_1\big\|_1 + \sup_{\pi \in \Pi} \mathrm{E}\big\{I_{a_1^s}(\omega)\,\|\widetilde{X}_1 - \overline{X}_1(s)\|_1\big\} = 2\,\mathrm{P}\{\bar a_1^s\} + \Sigma_1(s) + E_1(s) \leqslant \frac{4(\bar\lambda h)^{s+1}}{(s+1)!} + 2\delta. \quad (43)$$
The global distance between X ^ r E X r | Y r and X ˜ r can be bounded in the similar way:
T ( s ) sup π Π E X ^ r X ˜ r 1 4 1 1 ( λ ¯ h ) s + 1 ( s + 1 ) ! r + 2 r Q r 1 δ .
We could choose the parameters ( h , s ) of the analytical approximation and δ of the numerical integration independently each other. However, both the limitation of the computational resources and the accuracy requirements lead to the necessity of the mutual optimization of ( h , s , δ ) .
Let us fix some time horizon T along with the order s of analytical approximation, and consider the asymptotic r , or, equivalently, h = T r 0 . Due to the Bernoulli inequality, and condition 0 < Q 1 we have that
sup π Π E X ˜ T / h X ^ T / h 1 4 1 1 ( λ ¯ h ) s + 1 ( s + 1 ) ! r + 2 r Q r 1 δ 4 r ( λ ¯ h ) s + 1 ( s + 1 ) ! + 2 r Q r 1 δ = = 4 λ ¯ T ( λ ¯ h ) s ( s + 1 ) ! + 2 r Q r 1 δ 2 T 2 λ ¯ ( λ ¯ h ) s ( s + 1 ) ! + δ h .
The first summand in the brackets represents the contribution of the analytical approximation error; the second one reflects the error of the chosen numerical integration scheme. Obviously, the optimal choice of the parameters provides an equal infinitesimal order for both summands, which is possible when $\delta \asymp (\bar{\lambda} h)^{s+1} / \bar{\lambda}$.
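The balancing rule can be checked directly: with $\delta = (\bar{\lambda} h)^{s+1} / \bar{\lambda}$, both summands in the brackets are of order $h^s$, so their ratio does not depend on $h$. A small sketch (our illustration; the names are ours):

```python
from math import factorial

def analytic_term(lam: float, h: float, s: int) -> float:
    """First summand in the brackets: 2*lam*(lam*h)**s / (s+1)!."""
    return 2.0 * lam * (lam * h) ** s / factorial(s + 1)

def integration_term(lam: float, h: float, s: int) -> float:
    """Second summand delta/h under the balanced choice delta = (lam*h)**(s+1)/lam."""
    delta = (lam * h) ** (s + 1) / lam
    return delta / h

lam, s = 2.0, 1
ratios = [analytic_term(lam, h, s) / integration_term(lam, h, s)
          for h in (1e-2, 1e-3, 1e-4)]
print(ratios)  # the ratio stays constant as h -> 0
```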

4.4. Numerical Example

To illustrate the correspondence between the theoretical estimate and its realization, along with the performance of the numerical algorithm, we consider the filtering problem for the observation system (1) and (2) with the following parameters: $t \in [0, 1]$, $N = 3$,
$$\Lambda = \begin{pmatrix} -1.0 & 0.2 & 0.8 \\ 0.8 & -1.0 & 0.2 \\ 0.2 & 0.8 & -1.0 \end{pmatrix}, \quad \pi = \begin{pmatrix} 0.333 \\ 0.333 \\ 0.334 \end{pmatrix}, \quad f = \begin{pmatrix} 0.0 \\ 0.0 \\ 0.0 \end{pmatrix}, \quad G_1 = 1.0, \ G_2 = 4.0, \ G_3 = 9.0.$$
The specified observation system is one with state-dependent noise, and the conditions of Corollary 1 hold, so the optimal filter (23) restores the MJP state precisely from the available noisy observations. Let us verify this theoretical fact using the recursive algorithm (37). We choose the analytical approximation of order $s = 1$ with numerical integration by the simple midpoint rectangle scheme, and calculate the estimate approximations for decreasing time-discretization steps $h = 0.01,\ 0.001,\ 0.0001,\ 0.00001$. We expect a decrease of the estimation error characterized by the mean-square criterion $S_t(h) = \mathsf{E}\left\| X_t - \widetilde{X}_t^h \right\|_2^2$. To calculate the criterion, we use the Monte Carlo method over a test sample of size 1000. Figure 1 presents the corresponding plots of the quality index $S_t(h)$ for various values of $h$.
The determination of the precision order provided by the chosen numerical integration method is beyond the scope of this investigation. Nevertheless, one can see the expected decrease of the estimation error as the time-discretization step decreases. We regard this result as a practical confirmation of both the theoretical assertions and the numerical algorithm.
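The experiment can be reproduced in outline by the following simplified sketch. It is our own illustration rather than the paper's scheme (37): the MJP is simulated on a uniform grid, and a discrete-time Bayesian filter updates the posterior with per-state Gaussian likelihoods of the observation increments (with $f = 0$, the states differ only through the noise intensities $G_i$). The normalization step keeps the approximation stable: non-negative with components summing to one.

```python
import math, random

random.seed(0)

LAM = [[-1.0, 0.2, 0.8], [0.8, -1.0, 0.2], [0.2, 0.8, -1.0]]  # transition intensity matrix
PI0 = [0.333, 0.333, 0.334]  # initial distribution
G = [1.0, 4.0, 9.0]          # state-dependent observation noise intensities
H = 0.001                    # time-discretization step
STEPS = 1000                 # horizon t in [0, 1]

def run_path():
    """Simulate one MJP path with its observations, filter it, and return the
    terminal squared estimation error."""
    u, x = random.random(), 0
    if u > PI0[0]:
        x = 1 if u <= PI0[0] + PI0[1] else 2
    post = PI0[:]
    for _ in range(STEPS):
        # Euler scheme for the MJP: P{i -> j} ~ LAM[i][j]*H for j != i
        u, acc = random.random(), 0.0
        for j in range(3):
            if j != x:
                acc += LAM[x][j] * H
                if u < acc:
                    x = j
                    break
        # observation increment: dY = sqrt(G_x) dW (the drift f is zero)
        dy = math.sqrt(G[x] * H) * random.gauss(0.0, 1.0)
        # prediction: explicit Euler step of the Kolmogorov forward equation
        pred = [post[i] + H * sum(LAM[j][i] * post[j] for j in range(3))
                for i in range(3)]
        # correction: Gaussian likelihood of dy under each state's noise intensity
        lik = [math.exp(-dy * dy / (2.0 * G[i] * H)) / math.sqrt(G[i])
               for i in range(3)]
        w = [max(p, 0.0) * l for p, l in zip(pred, lik)]
        s = sum(w)
        post = [v / s for v in w]  # stable: non-negative, sums to one
    return sum((float(i == x) - post[i]) ** 2 for i in range(3))

mse = sum(run_path() for _ in range(200)) / 200.0
print("terminal mean-square error:", mse)
```

Decreasing `H` (with `STEPS = 1/H`) reproduces qualitatively the descent of $S_t(h)$ observed in Figure 1; the exact convergence rates require the paper's scheme (37).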

5. Conclusions

In this paper, we investigated the optimal filtering problem for MJP states, given indirect noisy continuous-time observations. The observation noise intensity is a function of the estimated state, so the classic Wonham filter cannot be applied to this observation system. To overcome this obstacle, we suggested an observation transform. On the one hand, the transformed observations remain informationally equivalent to the original ones. On the other hand, the "new" observations can be processed by the effective framework of stochastic analysis. We derived the optimal filtering estimate theoretically as the unique strong solution to a discrete-continuous stochastic differential system. The transformed observations include the derivative of the quadratic characteristic, i.e., the result of a limit passage in the stochastic setting; hence, the subsequent numerical realization of the filter becomes challenging. We proposed to approximate the initial continuous-time filtering problem by a sequence of optimal filtering problems given the time-discretized observations. We also involved numerical integration schemes to calculate the integrals appearing in the estimation formula. We proved assertions characterizing the accuracy of the numerical approximation of the filtering estimate, i.e., the distance between the calculated approximation and the optimal discrete-time filtering estimate. The accuracy depends on the observation system parameters, the time-discretization step, the threshold on the number of state transitions during a time step, and the chosen scheme of numerical integration. We suggested a whole class of numerical filtering algorithms, from which one can choose a specific algorithm individually, taking into account the characteristics of the concrete observation system, the accuracy requirements, and the available computing resources.
We do not consider the presented investigation completed. First, the characterization of the distance between the initial optimal continuous-time filtering estimate and its proposed approximation is still an open problem. Second, the theoretical solution to the MJP filtering problem can serve as a basis for numerical schemes of diffusion process filtering, given observations with state-dependent noise. Third, the obtained optimal filtering estimate looks like a springboard toward a solution of the optimal stochastic control problem for Markov jump processes, given both counting and diffusion observations with state-dependent noise. All of this research is in progress.

Author Contributions

Conceptualization, A.B., I.S.; methodology, A.B.; formal analysis and investigation, A.B., I.S.; writing—original draft preparation, A.B.; writing—review and editing, I.S.; supervision, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CME: Conditional mathematical expectation
MJP: Markov jump process
pdf: Probability density function
RHS: Right-hand side
SDS: Stochastic differential system

Appendix A. Proof of Lemma 2

From (14) and (15), the identity $\mathrm{diag}(a)\, b \equiv \mathrm{diag}(b)\, a$, the fact that $J_n(t) \neq J_n(t-)$ at no more than a finite number of points of any finite interval, and property 4 of the function $K(t)$, the following equalities are true:
$$\begin{aligned} C_t^n &= \int_0^t (1 - e_n' V_{s-})\, e_n'\, dR_s = \int_0^t (1 - e_n' V_{s-})\, e_n' J(s) \left( \Lambda'(s) X_s\, ds + dM_s^X \right) \\ &= \int_0^t (1 - J_n'(s) X_{s-})\, J_n'(s) \Lambda'(s) X_s\, ds + \int_0^t (1 - e_n' V_{s-})\, J_n'(s)\, dM_s^X \\ &= \int_0^t J_n'(s) \Lambda'(s) \left( I - \mathrm{diag}\, J_n(s) \right) X_s\, ds + \int_0^t (1 - e_n' V_{s-})\, J_n'(s)\, dM_s^X \\ &= \int_0^t \mathbf{1}' \Gamma^n(s) X_s\, ds + \int_0^t (1 - e_n' V_{s-})\, J_n'(s)\, dM_s^X. \end{aligned}$$
Assertion 1 of Lemma is proved.
The definition of the processes $C_t^n$ ($n = \overline{1,N}$) guarantees their strong orthogonality, i.e., $\mathsf{P}\left\{ \Delta C_t^i\, \Delta C_t^j \neq 0 \right\} = 0$ for any $i \neq j$ and $t \geq 0$, so $[C^i, C^j]_t \equiv 0$.
Let us use (5), (19), and the properties of $X$ and $J_n$ to derive the quadratic characteristic of $C^n$:
$$\begin{aligned} \langle C^n, C^n \rangle_t &= \int_0^t (1 - J_n'(s) X_s)^2\, J_n'(s)\, d\langle X, X \rangle_s\, J_n(s) \\ &= \int_0^t (1 - J_n'(s) X_s)\, J_n'(s) \left( \mathrm{diag}(\Lambda'(s) X_s) - \Lambda'(s)\, \mathrm{diag}\, X_s - \mathrm{diag}(X_s)\, \Lambda(s) \right) J_n(s)\, ds \\ &= \int_0^t (1 - J_n'(s) X_s)\, J_n'(s)\, \mathrm{diag}(J_n(s))\, \Lambda'(s) X_s\, ds = \int_0^t J_n'(s) \Lambda'(s) \left( I - \mathrm{diag}\, J_n(s) \right) X_s\, ds \\ &= \int_0^t \mathbf{1}' \Gamma^n(s) X_s\, ds. \end{aligned}$$
Assertion 2 of Lemma is proved.
If $s$ and $t$ are two arbitrary moments such that $s \leq t$, then
$$\mathsf{E}\left\{ \nu_t^n - \nu_s^n \,|\, \overline{Y}_s \right\} = \mathsf{E}\left\{ \int_s^t J_n'(u) \Lambda'(u) \left( I - \mathrm{diag}\, J_n(u) \right) \mathsf{E}\left\{ X_u - \widehat{X}_u \,|\, \overline{Y}_u \right\} du \,\Big|\, \overline{Y}_s \right\} + \mathsf{E}\left\{ \mathsf{E}\left\{ \int_s^t (1 - J_n'(u) X_{u-})\, J_n'(u)\, dM_u^X \,\Big|\, \mathcal{F}_s \right\} \Big|\, \overline{Y}_s \right\} = 0,$$
i.e., $\nu_t^n$ is a $\overline{Y}_t$-adapted martingale. Note that $\nu_t^n$ is purely discontinuous with unit jumps; hence,
$$\begin{aligned}\ [\nu^n, \nu^n]_t &= \sum_{\tau \leq t} (\Delta \nu_\tau^n)^2 = [C^n, C^n]_t = \sum_{\tau \leq t} (\Delta C_\tau^n)^2 = C_t^n \\ &= \int_0^t J_n'(s) \Lambda'(s) \left( I - \mathrm{diag}\, J_n(s) \right) X_s\, ds + \int_0^t (1 - J_n'(s) X_{s-})\, J_n'(s)\, dM_s^X = \int_0^t \mathbf{1}' \Gamma^n(s) \widehat{X}_s\, ds + \mu_t^0, \end{aligned}$$
where $\mu_t^0$ is some $\overline{Y}_t$-adapted martingale. From the uniqueness of the special semimartingale representation of $[\nu^n, \nu^n]_t$ it follows that $\langle \nu^n, \nu^n \rangle_t = \int_0^t \mathbf{1}' \Gamma^n(s) \widehat{X}_s\, ds$. Lemma 2 is proved. □

Appendix B. Proof of Theorem 1

We use the same approach as in ([6], Part III, Sect. 8.7) to derive the MJP filtering equations. The idea exploits the uniqueness of the representation for a special semimartingale along with the integral representation of a martingale [23].
From the Bayes rule it follows that $\widehat{X}_0 = \mathsf{E}\{ X_0 \,|\, D_0 \} = \left( D_0' J'(0) \pi \right)^+ \mathrm{diag}(D_0)\, J'(0)\, \pi$. Let $\varkappa_{n-1}$ be the random instant of the $(n-1)$-th discrete observation $\Delta D_{\varkappa_{n-1}}$. We investigate the evolution of $X_t$ over the interval $[\varkappa_{n-1}, \varkappa_n)$:
$$X_t = X_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \Lambda'(s) X_s\, ds + M_t^X - M_{\varkappa_{n-1}}^X, \qquad t \in [\varkappa_{n-1}, \varkappa_n).$$
Conditioning both sides of the latter equality on $\overline{Y}_t$, one can show that
$$\widehat{X}_t = \widehat{X}_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \Lambda'(s) \widehat{X}_s\, ds + \mu_t^1,$$
where $\{\mu_t^1\}_{t \in [\varkappa_{n-1}, \varkappa_n)}$ is a $\overline{Y}_t$-adapted martingale. For any $t \in [\varkappa_{n-1}, \varkappa_n)$ the equality $\overline{Y}_t = \overline{Y}_{\varkappa_{n-1}} \vee \sigma\{ U_s,\ s \in (\varkappa_{n-1}, t] \} \vee \sigma\{ C_s^j,\ s \in (\varkappa_{n-1}, t],\ j = \overline{1,N} \}$ holds. The process $\{\omega_t\}$ (24) is a $\overline{Y}_t$-adapted standard Wiener process [10].
The process $U_t$ is a $\overline{Y}_t$-adapted semimartingale with $\mathcal{F}^X$-conditionally-independent increments, while $\{ C_t^j \}_{j=\overline{1,N}}$ are $\overline{Y}_t$-adapted point processes. Hence, the martingale $\mu_t^1$ admits an integral representation ([23], Chap. 4, § 8, Problem 1), i.e.,
$$\widehat{X}_t = \widehat{X}_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \Lambda'(s) \widehat{X}_s\, ds + \int_{\varkappa_{n-1}}^t \alpha_s\, d\omega_s + \int_{\varkappa_{n-1}}^t \sum_{j=1}^N \beta_s^j\, d\nu_s^j,$$
where $\alpha_t$ and $\{ \beta_t^j \}_{j=\overline{1,N}}$ are $\overline{Y}_t$-predictable processes of appropriate dimensionality, which are to be determined.
Due to the generalized Itô rule,
$$X_t U_t = X_{\varkappa_{n-1}} U_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) X_s U_s + \mathrm{diag}(X_s)\, \bar{f}(s) \right) ds + \mu_t^2,$$
where $\mu_t^2$ is an $\mathcal{F}_t$-adapted martingale. Conditioning both sides of the latter equality on $\overline{Y}_t$, we can show that
$$\widehat{X}_t U_t = \widehat{X}_{\varkappa_{n-1}} U_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) \widehat{X}_s U_s + \mathrm{diag}(\widehat{X}_s)\, \bar{f}(s) \right) ds + \mu_t^3,$$
where $\mu_t^3$ is a $\overline{Y}_t$-adapted martingale. On the other hand, using the Itô rule, representation (A3), and the fact that $\omega_t$ is a Wiener process, we can obtain
$$\widehat{X}_t U_t = \widehat{X}_{\varkappa_{n-1}} U_{\varkappa_{n-1}} + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) \widehat{X}_s U_s + \widehat{X}_s \widehat{X}_s' \bar{f}(s) + \alpha_s \right) ds + \mu_t^4,$$
where $\mu_t^4$ is a $\overline{Y}_t$-adapted martingale. One can see that (A4) and (A5) are two representations of the same special semimartingale $\widehat{X}_t U_t$; hence, due to the representation uniqueness, the $\overline{Y}_t$-predictable process $\alpha_t$ should satisfy the equality
$$\int_{\varkappa_{n-1}}^t \mathrm{diag}(\widehat{X}_s)\, \bar{f}(s)\, ds = \int_{\varkappa_{n-1}}^t \left( \widehat{X}_s \widehat{X}_s' \bar{f}(s) + \alpha_s \right) ds,$$
and $\alpha_t$ may be chosen in the form
$$\alpha_t = \left( \mathrm{diag}\, \widehat{X}_t - \widehat{X}_t \widehat{X}_t' \right) \bar{f}(t).$$
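As a sanity check on the form of the obtained gain, note that $\alpha_t$ vanishes whenever $\widehat{X}_t$ degenerates to a coordinate vector $e_n$ (the identity $\mathrm{diag}\, X - X X' \equiv 0$ for unit coordinate vectors, used later in Appendix C), and that $\mathbf{1}' \alpha_t = 0$, so the diffusion term does not destroy the normalization of the estimate. A short sketch with an arbitrary illustrative $\bar{f}$ (our own, not from the paper):

```python
def diffusion_gain(x_hat, f_bar):
    """alpha = (diag(x_hat) - x_hat * x_hat') * f_bar, computed entrywise."""
    n = len(x_hat)
    mat = [[(x_hat[i] if i == j else 0.0) - x_hat[i] * x_hat[j]
            for j in range(n)] for i in range(n)]
    return [sum(mat[i][j] * f_bar[j] for j in range(n)) for i in range(n)]

f_bar = [1.0, -2.0, 3.0]  # illustrative drift column
print(diffusion_gain([1.0, 0.0, 0.0], f_bar))  # degenerate estimate: zero gain
print(diffusion_gain([0.5, 0.3, 0.2], f_bar))  # interior estimate: nonzero gain
```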
Due to the generalized Itô rule, Formulae (5) and (18), and the properties of $X$ and $J^j$, we obtain that
$$X_t C_t^j = X_{\varkappa_{n-1}} C_{\varkappa_{n-1}}^j + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) X_s C_s^j + \Gamma^j(s) X_s \right) ds + \mu_t^5,$$
where $\mu_t^5$ is an $\mathcal{F}_t$-adapted martingale. Conditioning both sides of this equality on $\overline{Y}_t$, we get
$$\widehat{X}_t C_t^j = \widehat{X}_{\varkappa_{n-1}} C_{\varkappa_{n-1}}^j + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) \widehat{X}_s C_s^j + \Gamma^j(s) \widehat{X}_s \right) ds + \mu_t^6,$$
where $\mu_t^6$ is a $\overline{Y}_t$-adapted martingale. On the other hand, using the Itô rule, representation (A3), and the quadratic characteristic (21), we deduce that
$$\widehat{X}_t C_t^j = \widehat{X}_{\varkappa_{n-1}} C_{\varkappa_{n-1}}^j + \int_{\varkappa_{n-1}}^t \left( \Lambda'(s) \widehat{X}_s C_s^j + \widehat{X}_s\, \mathbf{1}' \Gamma^j(s) \widehat{X}_s + \beta_s^j\, \mathbf{1}' \Gamma^j(s) \widehat{X}_s \right) ds + \mu_t^7,$$
where $\mu_t^7$ is a $\overline{Y}_t$-adapted martingale. Since the representations (A7) and (A8) correspond to the same special semimartingale $\widehat{X}_t C_t^j$, we conclude that the process $\beta_s^j$ should satisfy the equality
$$\int_{\varkappa_{n-1}}^t \Gamma^j(s) \widehat{X}_s\, ds = \int_{\varkappa_{n-1}}^t \left( \widehat{X}_s\, \mathbf{1}' \Gamma^j(s) \widehat{X}_s + \beta_s^j\, \mathbf{1}' \Gamma^j(s) \widehat{X}_s \right) ds.$$
Acting as with the coefficient $\alpha_t$, we choose the predictable processes $\beta_t^j$ in the form
$$\beta_t^j = \left[ \Gamma^j(t) - \left( \mathbf{1}' \Gamma^j(t) \widehat{X}_t \right) I \right] \widehat{X}_t \left( \mathbf{1}' \Gamma^j(t) \widehat{X}_t \right)^+, \qquad j = \overline{1,N}.$$
So, on the interval $[\varkappa_{n-1}, \varkappa_n)$ the optimal filtering estimate $\widehat{X}_t$ is described by the SDS
$$\begin{aligned} \widehat{X}_t = \widehat{X}_{\varkappa_{n-1}} &+ \int_{\varkappa_{n-1}}^t \Lambda'(s) \widehat{X}_s\, ds + \int_{\varkappa_{n-1}}^t \left( \mathrm{diag}\, \widehat{X}_s - \widehat{X}_s \widehat{X}_s' \right) \bar{f}(s)\, d\omega_s \\ &+ \sum_{j=1}^N \int_{\varkappa_{n-1}}^t \left[ \Gamma^j(s) - \left( \mathbf{1}' \Gamma^j(s) \widehat{X}_{s-} \right) I \right] \widehat{X}_{s-} \left( \mathbf{1}' \Gamma^j(s) \widehat{X}_{s-} \right)^+ d\nu_s^j. \end{aligned}$$
Since $\mathsf{P}\{ \Delta X_{\varkappa_n} = 0 \} = 1$, Equation (A10) presumes the $\mathsf{P}$-a.s. fulfilment of the equality
$$\begin{aligned} \mathsf{E}\Bigl\{ X_{\varkappa_n} \,\Big|\, \overline{Y}_{\varkappa_{n-1}} &\vee \sigma\{ U_s,\ s \in (\varkappa_{n-1}, \varkappa_n] \} \vee \sigma\{ C_s^j,\ s \in (\varkappa_{n-1}, \varkappa_n],\ j = \overline{1,N} \} \Bigr\} \\ = \widehat{X}_{\varkappa_{n-1}} &+ \int_{\varkappa_{n-1}}^{\varkappa_n} \Lambda'(s) \widehat{X}_s\, ds + \int_{\varkappa_{n-1}}^{\varkappa_n} \left( \mathrm{diag}\, \widehat{X}_s - \widehat{X}_s \widehat{X}_s' \right) \bar{f}(s)\, d\omega_s \\ &+ \sum_{j=1}^N \int_{\varkappa_{n-1}}^{\varkappa_n} \left[ \Gamma^j(s) - \left( \mathbf{1}' \Gamma^j(s) \widehat{X}_{s-} \right) I \right] \widehat{X}_{s-} \left( \mathbf{1}' \Gamma^j(s) \widehat{X}_{s-} \right)^+ d\nu_s^j = \widehat{X}_{\varkappa_n -}. \end{aligned}$$
Finally,
$$\overline{Y}_{\varkappa_n} = \overline{Y}_{\varkappa_{n-1}} \vee \sigma\{ U_s,\ s \in (\varkappa_{n-1}, \varkappa_n] \} \vee \sigma\{ C_s^j,\ s \in (\varkappa_{n-1}, \varkappa_n],\ j = \overline{1,N} \} \vee \sigma\{ \Delta D_{\varkappa_n} \},$$
so, by the Bayes rule, we get that
$$\widehat{X}_{\varkappa_n} = \left( \Delta D_{\varkappa_n}' \Delta J'(\varkappa_n) \widehat{X}_{\varkappa_n -} \right)^+ \mathrm{diag}(\Delta D_{\varkappa_n})\, \Delta J'(\varkappa_n)\, \widehat{X}_{\varkappa_n -}.$$
Equation (23) can be obtained by "gluing" together the local Equations (A10), which describe the evolution of $\widehat{X}_t$ on the intervals $[\varkappa_{n-1}, \varkappa_n)$, and Formula (A11), which describes the estimate correction given the observations arriving at the moments $\varkappa_n$.
The uniqueness of the strong solution within the class of nonnegative piecewise-continuous $Y_{t+}$-adapted processes with discontinuity set lying in $V$ can be proved in complete analogy with ([31], Chap. 9, Theorem 9.2). Theorem 1 is proved. □

Appendix C. Proof of Corollary 1

The conditions of the Corollary guarantee that the elements of $K(t)$ (4) satisfy the equality $K^{nm}(t) = \delta^{nm}$ almost everywhere; hence, $J(t) \equiv I$. This means that in (23) $D_0 = X_0$ $\mathsf{P}$-a.s., i.e., $\widehat{X}_0 = X_0$. Further, from the properties of the transition intensity matrix $\Lambda(\cdot)$ and the identity $J_n(t) \equiv e_n$ it follows that $\Gamma^n(t) = \mathrm{diag}(e_n)\, \overline{\Lambda}'(t)$, where $\overline{\Lambda}(t) \triangleq \Lambda(t) - \lambda(t)$, $\lambda(t) \triangleq \mathrm{diag}\left( \Lambda_{11}(t), \ldots, \Lambda_{NN}(t) \right)$. In this case,
$$C_t = \int_0^t \overline{\Lambda}'(s) X_s\, ds + \int_0^t \left( I - \mathrm{diag}\, X_{s-} \right) dM_s^X,$$
and the $n$-th component of $C_t$ counts the jumps of $X_t$ into the state $e_n$ that occurred on the interval $(0, t]$. This means that $X_t$ is the unique solution to the "purely discontinuous" equation
$$X_t = D_0 + \int_0^t \left( I - X_{s-} \mathbf{1}' \right) dC_s,$$
i.e., the state $X_t$ is measurable with respect to $\sigma\{ D_0,\ C_s,\ 0 \leq s \leq t \}$, so $\widehat{X}_t = X_t$ $\mathsf{P}$-a.s.
Further, we substitute $X_t$ into (23) and verify its validity. To do this, we simplify the RHS of the equality using the explicit forms of $J_n(t)$, $\Gamma^n(t)$, and $C_t$, along with the identities $\mathrm{diag}\, X_t - X_t X_t' \equiv 0$ and $\Delta J(t) \equiv 0$:
$$\begin{aligned} X_t &= D_0 + \int_0^t \Lambda'(s) X_s\, ds \\ &\quad + \sum_{n=1}^N \int_0^t \left[ \mathrm{diag}(e_n)\, \overline{\Lambda}'(s) - \left( e_n' \overline{\Lambda}'(s) X_{s-} \right) I \right] X_{s-} \left( e_n' \overline{\Lambda}'(s) X_{s-} \right)^+ \left( dC_s^n - e_n' \overline{\Lambda}'(s) X_s\, ds \right) \\ &= D_0 + \sum_{n=1}^N \int_0^t \left[ \mathrm{diag}(e_n)\, \overline{\Lambda}'(s) - \left( e_n' \overline{\Lambda}'(s) X_{s-} \right) I \right] X_{s-} \left( e_n' \overline{\Lambda}'(s) X_{s-} \right)^+ dC_s^n. \end{aligned}$$
The properties of counting processes also provide the following implication: if for some set $\mathcal{T} \subseteq [0, T]$ the equality $\int_{\mathcal{T}} e_n' \overline{\Lambda}'(s) X_s\, ds = 0$ holds, then $\int_{\mathcal{T}} dC_s^n = 0$. Hence, the latter transformation can be continued:
$$X_t = D_0 + \sum_{n=1}^N \int_0^t \left( e_n - X_{s-} \right) dC_s^n = D_0 + \int_0^t \left( I - X_{s-} \mathbf{1}' \right) dC_s,$$
which leads to (A12). So, we have verified that under the conditions of Corollary 1 the state $X_t$ is a solution to the filtering Equation (23). Corollary 1 is proved. □

Appendix D. Proof of Lemma 4

Using the notations $\Xi_r \triangleq \xi_1 \xi_2 \cdots \xi_r$ and $\Theta_r \triangleq \theta_1 \theta_2 \cdots \theta_r$, we can rewrite the estimates $\widehat{X}_r$ and $\overline{X}_r(s)$ in the explicit form
$$\widehat{X}_r = \frac{\left( \Xi_r + \Theta_r \right)' \pi}{\mathbf{1}' \left( \Xi_r + \Theta_r \right)' \pi}, \qquad \overline{X}_r(s) = \frac{\Xi_r' \pi}{\mathbf{1}' \Xi_r' \pi}.$$
To simplify the inferences, we omit the index $r$ in $\Xi_r$ and $\Theta_r$. The following relations are valid:
$$\begin{aligned} \mathsf{E}\left\| \widehat{X}_r - \overline{X}_r(s) \right\|_1 &= \mathsf{E}\left\{ \left\| \frac{\left( \Xi + \Theta \right)' \pi}{\mathbf{1}' \left( \Xi + \Theta \right)' \pi} - \frac{\Xi' \pi}{\mathbf{1}' \Xi' \pi} \right\|_1 \right\} = \mathsf{E}\left\{ \frac{\left\| \left( \mathbf{1}' \Xi' \pi \right) \Theta' \pi - \left( \mathbf{1}' \Theta' \pi \right) \Xi' \pi \right\|_1}{\left( \mathbf{1}' \left( \Xi + \Theta \right)' \pi \right) \left( \mathbf{1}' \Xi' \pi \right)} \right\} \\ &\leq \mathsf{E}\left\{ \frac{\left\| \left( \mathbf{1}' \Xi' \pi \right) \Theta' \pi \right\|_1 + \left\| \left( \mathbf{1}' \Theta' \pi \right) \Xi' \pi \right\|_1}{\left( \mathbf{1}' \left( \Xi + \Theta \right)' \pi \right) \left( \mathbf{1}' \Xi' \pi \right)} \right\} = 2\, \mathsf{E}\left\{ \frac{\mathbf{1}' \Theta' \pi}{\mathbf{1}' \left( \Xi + \Theta \right)' \pi} \right\}. \end{aligned}$$
Let us consider the auxiliary estimate $\breve{X}_r \triangleq \mathsf{E}\left\{ X_{t_r} I_{A_r^s}(\omega) \,|\, Y_r \right\}$. From the Bayes rule it follows that $\breve{X}_r = \frac{\Xi' \pi}{\mathbf{1}' \left( \Xi + \Theta \right)' \pi}$ and
$$\widehat{X}_r - \breve{X}_r = \mathsf{E}\left\{ X_{t_r} I_{\overline{A}_r^s}(\omega) \,|\, Y_r \right\} = \frac{\Theta' \pi}{\mathbf{1}' \left( \Xi + \Theta \right)' \pi}.$$
From (A13) and (A14) we deduce that for $r = 1$ and any $\pi \in \Pi$
$$\mathsf{E}\left\| \widehat{X}_1 - \overline{X}_1(s) \right\|_1 \leq 2\, \mathsf{E}\left\| \mathsf{E}\left\{ X_{t_1} I_{\bar{a}_1^s}(\omega) \,|\, Y_1 \right\} \right\|_1 = 2\, \mathsf{E}\left\{ \sum_{n=1}^N \mathsf{E}\left\{ X_{t_1}^n I_{\bar{a}_1^s}(\omega) \,|\, Y_1 \right\} \right\} = 2\, \mathsf{E}\left\{ \mathsf{E}\left\{ I_{\bar{a}_1^s}(\omega) \,|\, Y_1 \right\} \right\} = 2\, \mathsf{P}\{ \bar{a}_1^s \}.$$
The counting process $N_t^X$ has the quadratic characteristic $\langle N^X, N^X \rangle_t = -\int_0^t \sum_{n=1}^N \lambda_{nn}(s) X_s^n\, ds$; hence, the probability $\mathsf{P}\{ \bar{a}_1^s \}$ can be bounded from above as
$$\mathsf{P}\{ \bar{a}_1^s \} \leq e^{-\bar{\lambda} h} \sum_{k=s+1}^{\infty} \frac{(\bar{\lambda} h)^k}{k!} = C_1 \frac{(\bar{\lambda} h)^{s+1}}{(s+1)!}, \qquad C_1 \in (0, 1].$$
Formulae (A15) and (A16) lead to the fact that $\sup_{\pi \in \Pi} \mathsf{E}\left\| \widehat{X}_1 - \overline{X}_1(s) \right\|_1 \leq 2 C_1 \frac{(\bar{\lambda} h)^{s+1}}{(s+1)!}$.
The Markov property of the pair $(X_t, N_t^X)$ and inequality (A16) also allow us to bound the probability $\mathsf{P}\{ \overline{A}_r^s \}$ from above: $\mathsf{P}\{ \overline{A}_r^s \} \leq 1 - \left( 1 - C_1 \frac{(\bar{\lambda} h)^{s+1}}{(s+1)!} \right)^{r}$, which leads to (34). Lemma 4 is proved. □

Appendix E. Proof of Theorem 2

We have $\widetilde{X}_1 = \left( \mathbf{1}' \psi_1' \pi \right)^{-1} \psi_1' \pi$, $\overline{X}_1 = \left( \mathbf{1}' \xi_1' \pi \right)^{-1} \xi_1' \pi$, and $\Delta_1 \triangleq \widetilde{X}_1 - \overline{X}_1(s)$. Using matrix algebra, it is easy to verify that $\left[ \gamma' \pi \mathbf{1}' - \left( \mathbf{1}' \gamma' \pi \right) I \right] \gamma' \pi \equiv 0$. Both estimates are stable; hence, $\| \widetilde{X}_1 \|_1 = \| \overline{X}_1(s) \|_1 = 1$. The following relations are valid:
$$\begin{aligned} \| \Delta_1 \|_1 &= \frac{\left\| \left( \mathbf{1}' \xi_1' \pi \right) \psi_1' \pi - \left( \mathbf{1}' \psi_1' \pi \right) \xi_1' \pi \right\|_1}{\left( \mathbf{1}' \psi_1' \pi \right) \left( \mathbf{1}' \xi_1' \pi \right)} = \frac{\left\| \left( \mathbf{1}' \xi_1' \pi \right) \gamma_1' \pi - \left( \mathbf{1}' \gamma_1' \pi \right) \xi_1' \pi \right\|_1}{\left( \mathbf{1}' \psi_1' \pi \right) \left( \mathbf{1}' \xi_1' \pi \right)} \\ &= \frac{\left\| \left[ \gamma_1' \pi \mathbf{1}' - \left( \mathbf{1}' \gamma_1' \pi \right) I \right] \xi_1' \pi \right\|_1}{\left( \mathbf{1}' \psi_1' \pi \right) \left( \mathbf{1}' \xi_1' \pi \right)} = \frac{\left\| \left[ \gamma_1' \pi \mathbf{1}' - \left( \mathbf{1}' \gamma_1' \pi \right) I \right] \left( \xi_1' \pi + \gamma_1' \pi \right) \right\|_1}{\left( \mathbf{1}' \psi_1' \pi \right) \left( \mathbf{1}' \xi_1' \pi \right)} \\ &= \frac{\left\| \left[ \gamma_1' \pi \mathbf{1}' - \left( \mathbf{1}' \gamma_1' \pi \right) I \right] \widetilde{X}_1 \right\|_1}{\mathbf{1}' \xi_1' \pi} \leq \frac{\left\| \gamma_1' \pi \mathbf{1}' - \left( \mathbf{1}' \gamma_1' \pi \right) I \right\|_1 \left\| \widetilde{X}_1 \right\|_1}{\mathbf{1}' \xi_1' \pi} \leq \frac{2\, \mathbf{1}' \bar{\gamma}_1' \pi}{\mathbf{1}' \xi_1' \pi} = \frac{2 \sum_{i=1}^N \pi_i \sum_{j=1}^N \bar{\gamma}_1^{ij}}{\sum_{k, \ell = 1}^N \xi_1^{k \ell} \pi_k}. \end{aligned}$$
Using the last inequality, (41), and (A20), it can be shown that
$$\mathsf{E}\left\{ I_{a_1^s}(\omega)\, \| \Delta_1 \|_1 \right\} \leq 2 \sum_{i=1}^N \pi_i \int_{\mathbb{R}^M} \sum_{j=1}^N \bar{\gamma}^{ij}(y)\, dy \leq 2\delta.$$
Since the latter inequality is valid for any $\pi \in \Pi$, we have an upper bound for the local distance characteristic:
$$\sup_{\pi \in \Pi} \mathsf{E}\left\{ I_{a_1^s}(\omega) \left\| \widetilde{X}_1 - \overline{X}_1(s) \right\|_1 \right\} \leq 2\delta.$$
Let us define the following products of the random matrices $\xi_r$ and $\psi_r$:
$$\Xi_{q,r} \triangleq \begin{cases} \xi_q \xi_{q+1} \cdots \xi_r, & q \leq r, \\ I, & \text{otherwise}, \end{cases} \qquad \Psi_{q,r} \triangleq \begin{cases} \psi_q \psi_{q+1} \cdots \psi_r, & q \leq r, \\ I, & \text{otherwise}, \end{cases} \qquad \Gamma_{q,r} \triangleq \Psi_{q,r} - \Xi_{q,r}.$$
To proceed with the proof of Theorem 2, we need the following auxiliary lemma.
Lemma A1.
If $\phi_r \triangleq \phi_r(Y_1, \ldots, Y_r)$ is a non-negative $Y_r$-measurable random value, and $\Phi_r \triangleq \phi_r \left( \mathbf{1}' \Xi_{1,r}' \pi \right)^{-1}$, then
$$\mathsf{E}\left\{ I_{A_r^s}(\omega)\, \Phi_r \right\} = \int_{\mathbb{R}^M} \cdots \int_{\mathbb{R}^M} \phi_r(y_1, \ldots, y_r)\, dy_r \cdots dy_1.$$
Proof of Lemma A1. 
We consider a non-negative integrable function $\phi_1 = \phi_1(y) : \mathbb{R}^M \to \mathbb{R}_+$ and the $Y_1$-measurable random value
$$\Phi_1 \triangleq \frac{\phi_1(Y_1)}{\mathbf{1}' \xi_1'(Y_1) \pi} = \phi_1(Y_1) \left( \sum_{i,j=1}^N \sum_{m=0}^{s} \int_D \mathcal{N}\Bigl( Y_1;\ f' u,\ \sum_{p=1}^N u^p G_p \Bigr)\, \rho_{i,j,m}(du)\, \pi_i \right)^{-1}.$$
We find $\mathsf{E}\left\{ I_{a_1^s}(\omega)\, \Phi_1 \right\}$:
$$\mathsf{E}\left\{ I_{a_1^s}(\omega)\, \Phi_1 \right\} = \int_{\mathbb{R}^M} \phi_1(y)\, \frac{\sum_{k,\ell=1}^N \sum_{n=0}^{s} \int_D \mathcal{N}\Bigl( y;\ f' v,\ \sum_{q=1}^N v^q G_q \Bigr)\, \rho_{k,\ell,n}(dv)\, \pi_k}{\sum_{i,j=1}^N \sum_{m=0}^{s} \int_D \mathcal{N}\Bigl( y;\ f' u,\ \sum_{p=1}^N u^p G_p \Bigr)\, \rho_{i,j,m}(du)\, \pi_i}\, dy = \int_{\mathbb{R}^M} \phi_1(y)\, dy.$$
Let us consider a non-negative integrable function $\phi_2 = \phi_2(y_1, y_2) : \mathbb{R}^{2M} \to \mathbb{R}_+$ and the $Y_2$-measurable random value
$$\Phi_2 \triangleq \frac{\phi_2(Y_1, Y_2)}{\mathbf{1}' \Xi_{1,2}'(Y_1, Y_2) \pi} = \phi_2(Y_1, Y_2) \left( \sum_{i, i_2, j = 1}^N \sum_{m_1, m_2 = 0}^{s} \int_D \int_D \mathcal{N}\Bigl( Y_1;\ f' u_1,\ \sum_{p_1=1}^N u_1^{p_1} G_{p_1} \Bigr) \mathcal{N}\Bigl( Y_2;\ f' u_2,\ \sum_{p_2=1}^N u_2^{p_2} G_{p_2} \Bigr)\, \rho_{i, i_2, m_1}(du_1)\, \rho_{i_2, j, m_2}(du_2)\, \pi_i \right)^{-1}.$$
Analogously to the case $r = 1$, the numerator and the denominator under the integral sign cancel, and we find
$$\mathsf{E}\left\{ I_{A_2^s}(\omega)\, \Phi_2 \right\} = \int_{\mathbb{R}^M} \int_{\mathbb{R}^M} \phi_2(y_1, y_2)\, dy_2\, dy_1.$$
The correctness of the Lemma assertion in the general case of $\mathsf{E}\left\{ I_{A_r^s}(\omega)\, \Phi_r \right\}$ can be verified similarly. Lemma A1 is proved. □
Let us derive an upper estimate for the norm of $\Delta_r = \widetilde{X}_r - \overline{X}_r(s)$. From the definitions of $\Xi$, $\Psi$, and $\Gamma$ it follows that
$$\Gamma_{1,r} \triangleq \Psi_{1,r} - \Xi_{1,r} = \sum_{t=1}^{r} \Psi_{1,t-1}\, \gamma_t\, \Psi_{t+1,r}.$$
Making the same inferences as for $\Delta_1$, we can deduce that
$$\| \Delta_r \|_1 \leq \frac{\left\| \Gamma_{1,r}' \pi \mathbf{1}' - \left( \mathbf{1}' \Gamma_{1,r}' \pi \right) I \right\|_1}{\mathbf{1}' \Xi_{1,r}' \pi} \leq 2 \sum_{t=1}^{r} \frac{\mathbf{1}' \Psi_{t+1,r}' \bar{\gamma}_t' \Psi_{1,t-1}' \pi}{\mathbf{1}' \Xi_{1,r}' \pi}.$$
To estimate the contribution of each summand in (A22), we use (A18). To simplify the derivation, we consider the case $r = 3$ and the function $\phi(y_1, y_2, y_3) : \mathbb{R}^{3M} \to \mathbb{R}_+$,
$$\phi(y_1, y_2, y_3) = \mathbf{1}' \psi'(y_3)\, \bar{\gamma}'(y_2)\, \psi'(y_1)\, \pi,$$
and the $Y_3$-measurable random value $\Phi \triangleq \phi(Y_1, Y_2, Y_3) \left( \mathbf{1}' \Xi_{1,3}'(Y_1, Y_2, Y_3) \pi \right)^{-1}$. Let us estimate the mathematical expectation from above:
$$\begin{aligned} \mathsf{E}\left\{ I_{A_3^s}(\omega)\, \Phi \right\} &= \int_{\mathbb{R}^M} \int_{\mathbb{R}^M} \int_{\mathbb{R}^M} \sum_{i,j,k,m=1}^N \pi_i\, \psi_{ij}(y_1)\, \bar{\gamma}_{jk}(y_2)\, \psi_{km}(y_3)\, dy_3\, dy_2\, dy_1 \\ &= \sum_{i,j,k=1}^N \pi_i \sum_{\ell=1}^L \varrho_{ij}^{\ell} \int_{\mathbb{R}^M} \bar{\gamma}_{jk}(y_2)\, dy_2 \sum_{m=1}^N \sum_{n=1}^L \varrho_{km}^{n} \leq Q \sum_{i,j=1}^N \pi_i \sum_{\ell=1}^L \varrho_{ij}^{\ell} \sum_{k=1}^N \int_{\mathbb{R}^M} \bar{\gamma}_{jk}(y_2)\, dy_2 \\ &\leq Q \delta \sum_{i=1}^N \pi_i \sum_{j=1}^N \sum_{\ell=1}^L \varrho_{ij}^{\ell} \leq Q^2 \delta. \end{aligned}$$
Acting in the same way, we can prove that for arbitrary $r \geq 2$ the inequality
$$\mathsf{E}\left\{ I_{A_r^s}(\omega)\, \frac{\mathbf{1}' \Psi_{t+1,r}' \bar{\gamma}_t' \Psi_{1,t-1}' \pi}{\mathbf{1}' \Xi_{1,r}' \pi} \right\} \leq Q^{r-1} \delta$$
is valid for all $r$ summands in the RHS of (A22). Finally, $\mathsf{E}\left\{ I_{A_r^s}(\omega)\, \| \Delta_r \|_1 \right\} \leq 2 r Q^{r-1} \delta$, and the correctness of (42) follows from the fact that the latter inequality is valid for arbitrary $\pi \in \Pi$. Theorem 2 is proved. □

References

  1. Wonham, W.M. Some Applications of Stochastic Differential Equations to Optimal Nonlinear Filtering. J. Soc. Ind. Appl. Math. Series A Control 1964, 2, 347–369.
  2. Kalman, R.E.; Bucy, R.S. New results in linear filtering and prediction theory. Trans. ASME Ser. D J. Basic Eng. 1961, 83, 95–108.
  3. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286.
  4. Ephraim, Y.; Merhav, N. Hidden Markov processes. IEEE Trans. Inf. Theory 2002, 48, 1518–1569.
  5. Cappé, O.; Moulines, E.; Rydén, T. Inference in Hidden Markov Models; Springer: Berlin/Heidelberg, Germany, 2005.
  6. Elliott, R.J.; Moore, J.B.; Aggoun, L. Hidden Markov Models: Estimation and Control; Springer: New York, NY, USA, 1995.
  7. McLane, P.J. Optimal linear filtering for linear systems with state-dependent noise. Int. J. Control 1969, 10, 41–51.
  8. Dragan, V.; Aberkane, S. $H_2$-optimal filtering for continuous-time periodic linear stochastic systems with state-dependent noise. Syst. Control Lett. 2014, 66, 35–42.
  9. Dragan, V.; Morozan, T.; Stoica, A. Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems; Springer: New York, NY, USA, 2010.
  10. Liptser, R.; Shiryaev, A. Statistics of Random Processes II: Applications; Springer: Berlin/Heidelberg, Germany, 2001.
  11. Takeuchi, Y.; Akashi, H. Least-squares state estimation of systems with state-dependent observation noise. Automatica 1985, 21, 303–313.
  12. Joannides, M.; LeGland, F. Nonlinear filtering with continuous time perfect observations and noninformative quadratic variation. In Proceedings of the 36th IEEE Conference on Decision and Control, San Diego, CA, USA, 10–12 December 1997; Volume 2, pp. 1645–1650.
  13. Borisov, A. Optimal filtering in systems with degenerate noise in the observations. Autom. Remote Control 1998, 59, 1526–1537.
  14. Crisan, D.; Kouritzin, M.; Xiong, J. Nonlinear filtering with signal dependent observation noise. Electron. J. Probab. 2009, 14, 1863–1883.
  15. Kushner, H. Probability Methods for Approximations in Stochastic Control and for Elliptic Equations; Academic Press: New York, NY, USA, 1977.
  16. Kushner, H.J.; Dupuis, P.G. Numerical Methods for Stochastic Control Problems in Continuous Time; Springer: Berlin/Heidelberg, Germany, 1992.
  17. Ito, K.; Rozovskii, B. Approximation of the Kushner Equation for Nonlinear Filtering. SIAM J. Control Optim. 2000, 38, 893–915.
  18. Clark, J. The design of robust approximations to the stochastic differential equations of nonlinear filtering. Commun. Syst. Random Proc. Theory 1978, 25, 721–734.
  19. Malcolm, W.P.; Elliott, R.J.; van der Hoek, J. On the numerical stability of time-discretised state estimation via Clark transformations. In Proceedings of the 42nd IEEE Conference on Decision and Control; IEEE: Piscataway, NJ, USA, 2003; Volume 2, pp. 1406–1412.
  20. Yin, G.; Zhang, Q.; Liu, Y. Discrete-time approximation of Wonham filters. J. Control Theory Appl. 2004, 2, 1–10.
  21. Borisov, A.V. Wonham Filtering by Observations with Multiplicative Noises. Autom. Remote Control 2018, 79, 39–50.
  22. Borisov, A.V.; Semenikhin, K.V. State Estimation by Continuous-Time Observations in Multiplicative Noise. IFAC-PapersOnLine 2017, 50, 1601–1606.
  23. Liptser, R.; Shiryaev, A. Theory of Martingales; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 1989.
  24. Stoyanov, J. Counterexamples in Probability; Wiley: Hoboken, NJ, USA, 1997.
  25. Kolmogorov, A.; Fomin, S. Elements of the Theory of Functions and Functional Analysis; Dover: Mineola, NY, USA, 1999.
  26. Platen, E.; Bruti-Liberati, N. Numerical Solution of Stochastic Differential Equations with Jumps in Finance; Springer: Berlin/Heidelberg, Germany, 2010.
  27. Bertsekas, D.P.; Shreve, S.E. Stochastic Optimal Control: The Discrete-Time Case; Academic Press: New York, NY, USA, 1978.
  28. Zolotarev, V. Metric Distances in Spaces of Random Variables and Their Distributions. Math. USSR-Sbornik 1976, 30, 373–401.
  29. Zolotarev, V. Limit Theorems as Stability Theorems. Theory Probab. Appl. 1989, 34, 153–163.
  30. Borovkov, A. Asymptotic Methods in Queueing Theory; John Wiley & Sons: Hoboken, NJ, USA, 1984.
  31. Liptser, R.; Shiryaev, A. Statistics of Random Processes: I. General Theory; Springer: Berlin/Heidelberg, Germany, 2001.
Figure 1. Estimation quality index $S_t(h)$ depending on the time-discretization step $h$.

Share and Cite

Borisov, A.; Sokolov, I. Optimal Filtering of Markov Jump Processes Given Observations with State-Dependent Noises: Exact Solution and Stable Numerical Schemes. Mathematics 2020, 8, 506. https://doi.org/10.3390/math8040506

