Article

Comparative Study of Markov Chain Filtering Schemas for Stabilization of Stochastic Systems under Incomplete Information

Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 44/2 Vavilova Str., 119333 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(18), 3381; https://doi.org/10.3390/math10183381
Submission received: 25 August 2022 / Revised: 12 September 2022 / Accepted: 13 September 2022 / Published: 17 September 2022
(This article belongs to the Special Issue Mathematical Modeling, Optimization and Machine Learning)

Abstract: The object under investigation is a controllable linear stochastic differential system affected by external statistically uncertain piecewise continuous disturbances. They are directly unobservable but are assumed to form a continuous-time Markov chain. The problem is to stabilize the system output with respect to a quadratic optimality criterion. As is known, the separation theorem holds for this system. The goal of the paper is a performance analysis of various numerical schemes applied to the filtering of the external Markov input for system stabilization purposes. The paper briefly presents the theoretical solution to the considered problem of optimal stabilization for systems with Markov jump external disturbances: the conditions providing the separation theorem, the equations of the optimal control, and the ones defining the Wonham filter. It also contains a complex of stable numerical approximations of the filter, designed for time-discretized observations, along with their accuracy characteristics. The approximations of orders 1/2, 1, and 2, along with the classical Euler–Maruyama scheme, are chosen for the comparison of the numerical realization of the Wonham filter. The filtering estimates are used in the practical stabilization of various linear systems of the second order. The numerical experiments confirm the significant influence of the filtering precision on the stabilization performance and the superiority of the proposed stable schemes of numerical filtering.

1. Introduction

The theoretical solution to the optimal filtering problem for the states of stochastic differential systems, together with the development of its effective numerical counterpart, is a traditional field of extensive research [1,2,3,4]. The online calculation of highly precise estimates of the system state given noisy indirect observations is in demand for navigation [5,6,7], telecommunications [8,9,10], finance [11,12,13,14,15], processing in micro- [16] and nano-structures [17], and many other purposes [18,19,20,21,22,23,24]. Moreover, advanced filtering algorithms play a crucial role in the synthesis of control in the case of incomplete information [25,26,27,28,29]. The spectrum of such applications is rather broad [30,31,32,33].
When the optimal control depends on the available observations through the optimal estimate, fulfillment of the separation theorem is presumed [34,35]. The corresponding “lucky cases” are not numerous and primarily include the Linear Quadratic Gaussian (LQG) systems [36]. When the separation theorem fails for some reason, one can use the so-called “separation principle”, i.e., find the conditional optimum within the class of strategies that are functions of the state estimate rather than of the original observations. So, the availability of a highly precise filtering estimate is a significant factor for the successful solution of numerous control problems under incomplete information.
As is known, the state estimate that is optimal in the mean-square sense coincides with the conditional mathematical expectation of the state given the available observations. The estimates that are optimal in the sense of criteria other than the mean-square one are functionals of the corresponding conditional distribution. This distribution, in turn, is a solution to the Kushner–Stratonovich or Zakai stochastic partial differential equations [29,37,38,39,40]. There are only a few cases in which the conditional distribution is a solution to some finite-dimensional closed system of stochastic differential equations (SDEs). The Kalman–Bucy filter [41,42], the Wonham one [2,25], and some particular cases [43] belong to this class. Development of stable numerical realizations of the optimal filters is nontrivial even for linear Gaussian observation systems [44].
The aim of this paper is an investigation of how the filtering performance impacts the quality of the subsequent optimal control under incomplete information. As a testbed, we choose the stabilization of a linear stochastic differential system affected by a statistically uncertain external disturbance, which represents a continuous-time Markov chain (CTMC). The theoretical solution to the stabilization problem is presented in [33], where the separation theorem is proved. The optimal control strategy needs the optimal estimate, and the latter, in turn, is determined by the Wonham filter. The numerical realization of the filter is nontrivial: the classical numerical schemes such as the Euler–Maruyama one [45,46] are unstable in this case. The point is that they do not preserve the non-negativity and normalization properties of the estimate approximations. The authors of [47] suggest a new concept of stable numerical approximations of the filtering estimates: instead of solving the Wonham filter SDE numerically, the optimal filtering algorithm is developed directly for the observations discretized in time. The corresponding estimate is calculated recursively by a variant of the Bayes formula. The formula contains integrals, which are shift-scale mixtures of Gaussians, and one cannot calculate them analytically. The author of [48] suggests various numerical schemes for their calculation together with the corresponding accuracy characteristics. This paper presents a comparison of the numerical filtering schemas in light of their influence on the performance of the subsequent stabilization.
The paper is organized as follows. Section 2 presents the investigated control system affected by the external statistically uncertain piece-wise constant Markov disturbances. We state the problem of optimal stabilization and present the equations describing the optimal stabilization strategy. The separation theorem is valid for the system, so the optimal strategy depends on the observation via the optimal filtering state estimate.
Section 3 introduces the mechanical system, an overhead crane, which serves as a prototype of the controlled stochastic system used for the comparative numerical analysis. We present the general properties of the system, its evolution with and without external disturbances, and the influence of the criterion coefficients on the control character. Section 4 is devoted to the numerical realizations of the Wonham filter. Based on the optimal filtering estimate of the CTMC given the continuous-time observations, we introduce the numerical schemas of its approximation of orders 1/2, 1, and 2 given the time-discretized observations.
Section 5 plays a vital role in the presentation. It contains the results of the numerical experiments, illustrating the influence of the estimation stability on the final performance of the proposed control. We compare the suggested schemas with the classical Euler–Maruyama one. Sections 5.1, 5.2 and 5.3 present the results of the stabilization of the state in the cases of stable, semi-stable, and unstable control systems. All the examples confirm the superiority of the proposed schemas of numerical filtering for subsequent stabilization purposes. Section 6 contains concluding remarks.

2. Problem Formulation and Optimal Control Equations

Below, we briefly present the control problem and its theoretical solution (see [33] for details).
On the canonical probability space with filtration $(\Omega, \mathcal{F}, \mathsf{P}, \{\mathcal{F}_t\}_{t \in [0,T]})$, let us consider the linear SDE
$$dz_t = a_t y_t\,dt + b_t z_t\,dt + c_t u_t\,dt + \sigma_t\,dw_t, \qquad z_0 = Z. \tag{1}$$
Here
  • $z_t \in \mathbb{R}^{n_z}$ is a controllable output stochastic process of the second order;
  • $y_t$ is an uncertain stochastic input, which represents a CTMC with the finite state space $\{e_1, \ldots, e_{n_y}\}$ of unit vectors in the Euclidean space $\mathbb{R}^{n_y}$, with the transition intensity matrix $\Lambda_t$ and the initial distribution $\pi_0 = \mathsf{E}\{y_0\}$;
  • $w_t \in \mathbb{R}^{n_w}$ is a standard Wiener process; $w_t$, $y_t$, and $z_0$ are mutually independent;
  • $u_t \in \mathbb{R}^{n_u}$ is a control, which is a process with a finite second moment;
  • $a_t \in \mathbb{R}^{n_z \times n_y}$, $b_t \in \mathbb{R}^{n_z \times n_z}$, $c_t \in \mathbb{R}^{n_z \times n_u}$, $\sigma_t \in \mathbb{R}^{n_z \times n_w}$ are known nonrandom matrix-valued functions.
The admissible control $u_t = U_t(z_t)$ represents any function of the output $z_t$. The optimal control $U_t = U_t(z)$, $z \in \mathbb{R}^{n_z}$, minimizes the objective function
$$J(U_0^T) = \mathsf{E}\left\{ \int_0^T \left| P_t y_t + Q_t z_t + R_t u_t \right|^2 dt + \left| P_T y_T + Q_T z_T \right|^2 \right\}, \tag{2}$$
where $U_0^T = \{U_t(z),\ 0 \le t \le T\}$; $P_t \in \mathbb{R}^{n_J \times n_y}$, $Q_t \in \mathbb{R}^{n_J \times n_z}$, $R_t \in \mathbb{R}^{n_J \times n_u}$, $0 \le t \le T$, are known bounded matrix-valued functions; $x^\top$ is the transpose of $x$; and $|x| = \sqrt{x^\top x}$ is the Euclidean norm.
Under the standard conditions on (1) and (2) (piecewise continuity and uniform nondegeneracy of $R_t^\top R_t$ and $\sigma_t \sigma_t^\top$), the solution to the optimization problem
$$\widehat U_0^T = \{\widehat U_t(z),\ 0 \le t \le T\} \in \operatorname{argmin} J(U_0^T)$$
takes the form
$$\widehat u_t = -\frac{1}{2}\left(R_t^\top R_t\right)^{-1}\Big( c_t^\top\big( 2\alpha_t \widehat z_t + \beta_t \widehat y_t \big) + 2 R_t^\top \big( P_t \widehat y_t + Q_t \widehat z_t \big)\Big),$$
where
$$\frac{d\alpha_t}{dt} + M_t^{\alpha\top}\alpha_t + \alpha_t M_t^{\alpha} + N_t^{\alpha} - \alpha_t c_t \left(R_t^\top R_t\right)^{-1} c_t^\top \alpha_t = 0, \qquad \alpha_T = Q_T^\top Q_T, \tag{4}$$
$$\frac{d\beta_t}{dt} + \beta_t \Lambda_t + M_t^{\beta} - N_t^{\beta}\beta_t = 0, \qquad \beta_T = 2\,Q_T^\top P_T, \tag{5}$$
$$d\widehat y_t = \Lambda_t^\top \widehat y_t\,dt + \left(\operatorname{diag}(\widehat y_t) - \widehat y_t \widehat y_t^\top\right) a_t^\top \left(\sigma_t\sigma_t^\top\right)^{-1}\left(d\widehat z_t - a_t\widehat y_t\,dt - b_t \widehat z_t\,dt - c_t\widehat u_t\,dt\right), \qquad \widehat y_0 = \mathsf{E}\{y_0\} = \pi_0, \tag{6}$$
where $\widehat z_t$ denotes the optimal trajectory of the state, i.e., the solution to (1) calculated for the control $u_t = \widehat u_t$.
Above, the functions on the right-hand side (RHS) of (4) and (5) have the form
$$M_t^{\alpha} = Q_t^\top R_t\left(R_t^\top R_t\right)^{-1}c_t^\top,$$
$$N_t^{\alpha} = Q_t^\top\left(I - R_t\left(R_t^\top R_t\right)^{-1}R_t^\top\right)Q_t,$$
$$M_t^{\beta} = 2\left(a_t^\top - P_t^\top R_t\left(R_t^\top R_t\right)^{-1}c_t^\top\right)\alpha_t + P_t^\top\left(I - R_t\left(R_t^\top R_t\right)^{-1}R_t^\top\right)Q_t,$$
$$N_t^{\beta} = \left(Q_t^\top R_t + \alpha_t c_t\right)\left(R_t^\top R_t\right)^{-1}c_t^\top,$$
where $I$ is an identity matrix of suitable dimensionality. So, the optimal control is a linear function of the system state $z_t$ and the optimal filtering estimate $\widehat y_t$ of the external input $y_t$. Note that $\widehat y_t$ is defined by the Wonham filter Equation (6) [2,25]. Moreover, Equation (6) determines the conditional mathematical expectation (CME) $\widehat y_t = \mathsf{E}\{y_t \mid \mathcal{F}_t^z\}$ not only for the optimal control $\widehat u_t$ but for any admissible control $u_t$. Here, $\mathcal{F}_t^z = \sigma\{z_\tau(u),\ 0 \le \tau \le t\}$ denotes the natural filtration generated by the state process $z_t(u)$ governed by the control $u$, $\mathcal{F}_t^z \subseteq \mathcal{F}_t \subseteq \mathcal{F}$.
The above relations allow construction of the following stabilization algorithm (a schematic sketch follows the list):
  • solve numerically the ordinary differential Equations (4) and (5) (any stable numerical scheme is valid for this);
  • solve numerically the SDE (6), defining the Wonham filter, and use the resulting estimate in the optimal feedback $\widehat u_t$.
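A minimal sketch of this pipeline in Python/NumPy is given below. It is illustrative only: the right-hand sides of (4) and (5), the feedback gain, the plant step, and the filter update are passed in as callables, and all names are assumptions rather than parts of the original implementation.

```python
def solve_terminal_odes(T, delta, alpha_T, beta_T, rhs_alpha, rhs_beta):
    """Backward Euler pass for the terminal-value ODEs (4) and (5).

    rhs_alpha(t, alpha) and rhs_beta(t, beta, alpha) return the time derivatives;
    any stable ODE scheme (e.g., Runge-Kutta) may replace this simple one.
    """
    n = int(round(T / delta))
    alpha = [None] * (n + 1)
    beta = [None] * (n + 1)
    alpha[n], beta[n] = alpha_T, beta_T
    for i in range(n, 0, -1):
        t = i * delta
        alpha[i - 1] = alpha[i] - delta * rhs_alpha(t, alpha[i])
        beta[i - 1] = beta[i] - delta * rhs_beta(t, beta[i], alpha[i])
    return alpha, beta

def closed_loop(z0, y_hat0, T, delta, feedback, plant_step, filter_update):
    """Closed stabilization loop: feedback in (z, y_hat), then filtering of the input."""
    z, y_hat = z0, y_hat0
    for i in range(int(round(T / delta))):
        t = i * delta
        u = feedback(t, z, y_hat)                      # linear feedback of Section 2
        z_next = plant_step(t, z, u)                   # one Euler-Maruyama step of (1)/(7)
        y_hat = filter_update(t, z, z_next, u, y_hat)  # one of the schemes of Section 4
        z = z_next
    return z
```

The two functions separate the offline part (the backward pass for the gains) from the online part (the filtering and feedback loop), which mirrors the structure of the algorithm above.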
The separation theorem is also valid for the control of a Linear Quadratic Gaussian (LQG) system under incomplete information. There, the optimal control coincides with its analog under complete information, with the unobservable system state replaced by its Kalman–Bucy filtering estimate [49]. The realization of the optimal control in the LQG case amounts to the numerical solution of a pair of Riccati ODEs and a system of linear SDEs. The numerical solution of the Riccati equation can be effectively performed, for example, by a Runge–Kutta scheme, and the realization of the Kalman–Bucy filter is also well developed.
In contrast with the LQG case, the considered optimal stochastic control problem is challenging exactly at the numerical realization stage. The separation theorem is valid for the class of investigated systems, and the optimal estimate represents the solution to a system of nonlinear SDEs. It is well known that the CME of a Markov jump process (MJP) is its conditional distribution, so the trajectories satisfy the non-negativity and normalization conditions. Visually, the instability of a numerical scheme of the Wonham filter realization looks as follows: once an estimate approximation violates the non-negativity, the subsequent trajectory of the filter “blows up” within a few steps, which makes it impossible to synthesize the control strategy. Being aware of this feature, in the numerical example of [33] we chose one of the stable numerical schemes [43,47] of the Wonham filter; however, that example played an illustrative role only. The present article can be treated as the second part of [33]: it presents the way to the practical realization of the stated optimal stochastic control problem.

3. Performance Analysis of Mechanical Actuator

Initially, for the comparative numerical study, we choose a controlled overhead crane model. The crane trolley with a hoist travels along the H-beam. The loaded trolley has significant inertia. The goal is to place it above one of the locations (for example, a railway track). The set of possible locations is known and fixed. The trolley state includes the current coordinate $x_t$ and velocity $v_t$ on the beam. The stabilization system includes a feedback velocity control block, actuated by the pair “coordinate–velocity”. The system also includes an external unobservable input $y_t$ (the signal specifying the number of the requested railway track) and the control $u_t$ itself. So, the overhead crane model has the form
$$dx_t = v_t\,dt, \qquad t \in [0, T],$$
$$dv_t = a x_t\,dt + b v_t\,dt - c\,y_t\,dt + h u_t\,dt + g\,dw_t. \tag{7}$$
The external governing signal $y_t \in \{e_1, e_2, e_3\}$ represents a homogeneous CTMC with the transition intensity matrix (TIM) $\Lambda$ and the initial condition $\pi_0 = e_1$. The constant scalars $a$, $b$, $h$, $g$ and the row vector $c = (c_1, c_2, c_3)$ are known, and $w_t$ is a standard Wiener process. The term $g\,dw_t$ simulates both an inner inaccuracy of the actuator and the random disturbances contaminating the governing signal. The initial condition $(x_0, v_0)$ is a two-dimensional standard Gaussian vector.
One can see that system (7) is stable when $b < 0$ and $b^2 + 4a < 0$, because $\tfrac12\big(b \pm \sqrt{b^2 + 4a}\big)$ are the eigenvalues of the system matrix $\begin{pmatrix} 0 & 1 \\ a & b \end{pmatrix}$.
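A quick numerical check of this condition can be sketched as follows; it assumes the sign reading $a = -1$, $b = -0.5$ of the basic parameter set (8) used below.

```python
import numpy as np

a, b = -1.0, -0.5                          # basic parameter set (8), signs as read above
A = np.array([[0.0, 1.0], [a, b]])         # system matrix of (7) without input and noise
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues, np.all(eigenvalues.real < 0))   # True => the uncontrolled motion is stable
```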
Initially, we choose the following “basic” values for the system and control problem:
$$a = -1, \quad b = -0.5, \quad T = 10, \quad h = 10, \quad g = 0.01, \quad (c_1, c_2, c_3) = (1, 0, 1), \quad \Lambda = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -1 & 0.5 \\ 0 & 0.5 & -0.5 \end{pmatrix}. \tag{8}$$
The vector $C = (c_1/a,\ c_2/a,\ c_3/a)$ forms the coordinates of the railway tracks served by the crane, while the process $Cy_t$ represents the desired “ideal” piecewise continuous movement of the trolley between the tracks.
We illustrate the crane’s functioning in three stages. To do this, we have to vary the parameters above to highlight various features of the estimation and/or control algorithms.
First, we consider the system without both the external governing signal y t and control u t . Second, we model the system with CTMC input y t without the control u t . Third, we consider the system under the action of both the input y t and control u t .
We solve (7) numerically via the Euler–Maruyama scheme with the time increments $\delta = 0.005, 0.001, 0.0001$; meanwhile, we model the CTMC $y_t$ with a smaller time step, namely $\delta_{small} = 0.01\,\delta$.
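A sketch of this simulation step is shown below. It follows the sign reading of (7) adopted above, generates the CTMC on the finer grid $\delta_{small} = 0.01\,\delta$ and subsamples it to the coarse grid; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctmc(Lam, pi0, n_steps, dt):
    """Simulate a CTMC path on a fine grid from its transition intensity matrix."""
    n = Lam.shape[0]
    P = np.eye(n) + dt * Lam                 # one-step transition matrix, valid for small dt
    path = np.empty(n_steps + 1, dtype=int)
    path[0] = rng.choice(n, p=pi0)
    for k in range(n_steps):
        path[k + 1] = rng.choice(n, p=P[path[k]])
    return path

def simulate_crane(a, b, c, h, g, Lam, pi0, T, delta, u=lambda t, x, v, y: 0.0):
    """Euler-Maruyama integration of the crane model (7) driven by a CTMC input."""
    ratio = 100                              # delta_small = 0.01 * delta
    n = int(round(T / delta))
    y_fine = simulate_ctmc(Lam, pi0, n * ratio, delta / ratio)
    y = y_fine[::ratio]                      # the input sampled on the coarse grid
    x, v = rng.standard_normal(), rng.standard_normal()
    xs = [x]
    for i in range(n):
        t = i * delta
        dw = np.sqrt(delta) * rng.standard_normal()
        dv = (a * x + b * v - c[y[i]] + h * u(t, x, v, y[i])) * delta + g * dw
        x, v = x + v * delta, v + dv
        xs.append(x)
    return np.array(xs), y
```

Here `c[y[i]]` plays the role of the term $c\,y_t$, since the unit-vector state $y_t$ is encoded by the index of its active component.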
At the first stage, $c_1 = c_2 = c_3 = h = 0$.
Figure 1 contains typical paths of the coordinate $x_t$ and velocity $v_t$ of the crane trolley in the case where the external input $y_t$ is absent.
The figure confirms the stable character of the system.
Second, we consider the system with the parameters (8), keeping $h = 0$. The simulation is performed for the CTMCs with two variants of the TIM:
$$\Lambda = \begin{pmatrix} -0.05 & 0.05 & 0 \\ 0.005 & -0.01 & 0.005 \\ 0 & 0.5 & -0.5 \end{pmatrix} \quad \text{and} \quad \Lambda = \begin{pmatrix} -5 & 5 & 0 \\ 0.005 & -1.005 & 1.0 \\ 0 & 0.005 & -0.005 \end{pmatrix}.$$
The first TIM provides a long stay of the CTMC in the initial state $e_1$, while the second one generates rapid transitions $e_1 \to e_2$, $e_2 \to e_3$ and a long stay in the last state.
The input CTMC $y_t$ influences the output $z_t$ via the process $Cy_t$ with $C = (c_1/a,\ c_2/a,\ c_3/a)$; the underlying coefficients $c_1$, $c_2$, $c_3$ represent the possible drift accelerations in (7).
Figure 2 contains typical paths of the coordinate and velocity of the crane trolley affected by the CTMCs with the corresponding TIMs, compared with the desired switching process $Cy_t$.
One can see some stabilization of the real trolley coordinate near the “ideal” position $Cy_t$ even without the additional control $u_t$, but the stabilization performance looks poor.
Third, the stabilization can be made more effective by the control $u_t$, which minimizes the functional
$$J(U_0^T) = \mathsf{E}\left\{ \int_0^T \Big( \big( C y_t - x_t \big)^2 + R\,u_t^2 \Big)\,dt \right\},$$
where $R = 0.01$ characterizes the unit control cost.
Figure 3 demonstrates typical paths of the coordinate and velocity of the crane trolley with and without the control $u_t$. The calculations are performed for the parameter set (8): the left subplots correspond to the controlled case with $h = 10$, and the right subplots stand for the uncontrolled case $h = 0$.
Comparing the results demonstrated in Figure 1, Figure 2 and Figure 3, it can be seen that the control remarkably improves the stabilization performance. Without $u_t$, the system only “declares an intention” to pursue the current stabilization position $Cy_t$. By contrast, the assisting control $u_t$ admits a much sharper reaction to the external drift changes.
Fourth, we consider the system with the parameter set (8), reducing the value of $R$ to $R = 0.00001$. The example demonstrates the potential stabilization performance in the case of a low control cost.
Figure 4 illustrates the high performance of the coordinate stabilization and the high-amplitude oscillating character of the corresponding velocity under the optimal control $u_t$.
Note that the value $R$ of the unit control cost does not affect the filtering performance for the external input $y_t$ and matters only in the control procedure. Let us explain this fact from the physical point of view. The trolley has a positive mass. According to Newton’s second law, the faster the actuator must react to the external governing signal, the higher the applied force. However, each resource has its own cost. The coefficient $R$ sets a regulated trade-off between the inaccuracy with which the actuator follows the external input $y_t$ and the total expenses for the assisting control $u_t$. We can track the influence of $R$ even visually. When $R$ is small enough, the actuator follows the external signal relatively accurately, whereas the applied assisting control demonstrates high values and an oscillating nature (see, e.g., Figure 3 (left)). A high unit cost $R$ leads to small values of the assisting control $u_t$ and a low quality of how the trolley coordinate follows the input $y_t$; Figure 3 (right) can serve as an illustration of this fact.
The provided calculations disclose the main issue of the control realization. The point is that the Euler–Maruyama scheme for the numerical solution of the SDE (6) demonstrates an instability, which leads to the divergence of some estimate trajectories. Regardless of the step $\delta$, almost every trajectory $\widehat y_t$ contains points violating the non-negativity conditions $\widehat y_t^{\,1} \ge 0$, $\widehat y_t^{\,2} \ge 0$, $\widehat y_t^{\,3} \ge 0$ or the normalization condition $\widehat y_t^{\,1} + \widehat y_t^{\,2} + \widehat y_t^{\,3} = 1$. The number of such points varies from a single one to several thousands, which leads to the filtering “blowing up” and the inability to synthesize the control. The more such defective estimates there are in a trajectory, the more likely the trajectory diverges rapidly.
If the defects appear rarely, they can be neutralized by a heuristic modification of the Euler–Maruyama scheme. Once the filtering estimate $\widehat y_{t_i}$ violates either the non-negativity or the normalization condition, it is replaced by the CTMC stationary distribution $\pi$ or by the estimate at the previous time instant $\widehat y_{t_{i-1}}$. These modifications help in some cases but are not a panacea, and this fact is shown in Section 5. Further in the presentation we refer to the introduced scheme modifications as $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$, respectively.
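A sketch of the two heuristics (the function names are assumptions): after each Euler–Maruyama update the estimate is checked for non-negativity and normalization and, if the check fails, replaced either by the CTMC stationary distribution (the “lim” variant) or by the previous estimate (the “del” variant).

```python
import numpy as np

def stationary_distribution(Lam):
    """Solve pi * Lam = 0 with sum(pi) = 1 in the least-squares sense."""
    n = Lam.shape[0]
    A = np.vstack([Lam.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def repair_estimate(y_new, y_prev, pi_stationary, mode="lim", tol=1e-9):
    """Heuristic repair of a defective Euler-Maruyama filtering estimate."""
    defective = np.any(y_new < -tol) or abs(y_new.sum() - 1.0) > tol
    if not defective:
        return y_new
    return pi_stationary.copy() if mode == "lim" else y_prev.copy()
```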
The next section presents a stable realization of the Wonham filter [43,47] required for the synthesis of the optimal stabilization strategy $\widehat u_t$.

4. Stable Filtering Algorithms by Discretized Observations

We realize both the filtering and stabilization algorithms with the same constant time increment $\delta$, such that $t_0 = 0$, $t_i = i\delta$, and $T/\delta$ is an integer.
Furthermore, without loss of generality, we suppose for model (1) that $a_t \equiv a_{t_i}$, $b_t \equiv b_{t_i}$ and $\sigma_t \equiv \sigma_{t_i}$ over the discretization intervals $(t_{i-1}, t_i]$; otherwise, the functions $a_t$, $b_t$ and $\sigma_t$ should be approximated by suitable piecewise constant functions.
Let us consider $z_t^0$, which represents a non-anticipative transformation of the observable output $z_t$:
$$z_t^0 = \int_0^t \big( dz_\tau - (b_\tau z_\tau + c_\tau u_\tau)\,d\tau \big) = \int_0^t \big( a_\tau y_\tau\,d\tau + \sigma_\tau\,dw_\tau \big).$$
The process $z_t^0$ does not depend on the control $u_t$, and due to the identity
$$\mathcal{F}_t^z = \sigma\{z_\tau,\ 0 \le \tau \le t\} = \sigma\{z_\tau^0,\ 0 \le \tau \le t\} = \mathcal{F}_t^{z^0}$$
the equality $\widehat y_t = \mathsf{E}\{y_t \mid \mathcal{F}_t^z\} = \mathsf{E}\{y_t \mid \mathcal{F}_t^{z^0}\}$ holds.
Since the numerical realization of the control strategy $u_t$ presumes its action at the time instants $t_i = i\delta$, we have to estimate the Markovian drift $y_t$ at the same moments. We use the transformed observations $z_t^0$ discretized in time with the step $\delta$:
$$\Delta z_{t_i}^0 = \int_{t_{i-1}}^{t_i} \big( a_\tau y_\tau\,d\tau + \sigma_\tau\,dw_\tau \big).$$
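In practice, the increment $\Delta z_{t_i}^0$ is formed from the observed output and the applied control by removing the known part of the drift over one step; a minimal sketch (names are illustrative):

```python
def transformed_increment(z_new, z_old, u_old, b, c, delta):
    """Approximate Delta z^0 over one step of model (1): subtract the known drift."""
    return (z_new - z_old) - (b @ z_old + c @ u_old) * delta
```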
The discretized observations generate the new family of $\sigma$-algebras
$$\mathcal{F}_{t_i}^{\Delta z^0} = \sigma\{\Delta z_{t_j}^0,\ 1 \le j \le i\}.$$
If $\mu_i = \int_{t_{i-1}}^{t_i} y_\tau\,d\tau = (\mu_i^1, \ldots, \mu_i^{n_y})^\top$ is a random vector composed of the occupation times of $y_t$ in each state during the interval $(t_{i-1}, t_i]$, and $\mathcal{N}(z; m, \Sigma)$ is a Gaussian probability density function with the mean $m$ and covariance $\Sigma$, then the estimate $\widehat y_{t_i} = \mathsf{E}\{y_{t_i} \mid \mathcal{F}_{t_i}^{\Delta z^0}\}$ can be calculated by the following recursive procedure [43,47]:
$$\widehat y_{t_i} = \Big( \mathbf{1}^\top \widehat q_{t_i}^{\,\top} \widehat y_{t_{i-1}} \Big)^{-1} \widehat q_{t_i}^{\,\top} \widehat y_{t_{i-1}}, \qquad \widehat y_0 = \pi_0, \tag{10}$$
where $\mathbf{1} = (1, \ldots, 1)^\top \in \mathbb{R}^{n_y}$ is a vector formed by units, and the matrix $\widehat q_{t_i} = \big[ \widehat q_{t_i}^{\,kj} \big]_{k,j = 1, \ldots, n_y}$ consists of the random values
$$\widehat q_{t_i}^{\,kj} = \mathsf{E}\Big\{ \mathcal{N}\big( \Delta z_{t_i}^0;\ a_{t_i}\mu_i,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \big)\, y_{t_i}^j \,\Big|\, y_{t_{i-1}} = e_k \Big\}. \tag{11}$$
The CMEs $\widehat q_{t_i}^{\,kj}$ represent integrals, namely shift/scale mixtures of Gaussians with the distribution of $\mu_i$ as the mixing one. The issue is that this distribution is not absolutely continuous with respect to the Lebesgue measure, so the integral (11) cannot be calculated analytically: we have to do it numerically. In this paper, we compare the following numerical schemes for the calculation of $\widehat q_{t_i}^{\,kj}$:
$$\widehat q_{t_i}^{\,kj} \approx \breve q_{t_i}^{\,\delta\frac12\,kj} = \mathcal{N}\big( \Delta z_{t_i}^0;\ \delta a_{t_i} e_k,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \big)\big( \Delta_{kj} + \delta\lambda_{kj} \big),$$
which is a scheme of the “left” rectangles with the accuracy order 1/2;
$$\widehat q_{t_i}^{\,kj} \approx \breve q_{t_i}^{\,\delta\,kj} = \Delta_{kj}\, e^{\lambda_{kk}\delta}\, \mathcal{N}\big( \Delta z_{t_i}^0;\ \delta a_{t_i} e_k,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \big) + \big( 1 - \Delta_{kj} \big)\,\delta\lambda_{kj}\, e^{\frac{\delta}{2}\left( \lambda_{kk} + \lambda_{jj} \right)}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{\delta}{2} a_{t_i}(e_k + e_j),\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big),$$
which is a scheme of the “midpoint” rectangles with the accuracy order 1;
$$\begin{aligned}
\widehat q_{t_i}^{\,kj} \approx \breve q_{t_i}^{\,\delta 2\,kj} = {} & \Delta_{kj}\, e^{\lambda_{kk}\delta}\, \mathcal{N}\big( \Delta z_{t_i}^0;\ \delta a_{t_i} e_k,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \big) \\
& + \big( 1 - \Delta_{kj} \big) \frac{\delta}{2} \Big[ e^{\lambda_{kk}\frac{(\sqrt3 - 1)\delta}{2\sqrt3} + \lambda_{jj}\frac{(\sqrt3 + 1)\delta}{2\sqrt3}}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{(\sqrt3 - 1)\delta}{2\sqrt3} a_{t_i} e_k + \tfrac{(\sqrt3 + 1)\delta}{2\sqrt3} a_{t_i} e_j,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big) \\
& \qquad + e^{\lambda_{kk}\frac{(\sqrt3 + 1)\delta}{2\sqrt3} + \lambda_{jj}\frac{(\sqrt3 - 1)\delta}{2\sqrt3}}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{(\sqrt3 + 1)\delta}{2\sqrt3} a_{t_i} e_k + \tfrac{(\sqrt3 - 1)\delta}{2\sqrt3} a_{t_i} e_j,\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big) \Big] \\
& + \sum_{\ell:\ \ell \ne j,\ \ell \ne k} \frac{\delta^2}{6} \Big[ e^{\frac{\delta}{6}(\lambda_{kk} + \lambda_{\ell\ell}) + \frac{\delta}{6}(\lambda_{\ell\ell} + \lambda_{jj})}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{\delta}{6} a_{t_i}(e_k + e_\ell + 4 e_j),\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big) \\
& \qquad + e^{\frac{2\delta}{3}(\lambda_{kk} + \lambda_{\ell\ell}) + \frac{\delta}{6}(\lambda_{\ell\ell} + \lambda_{jj})}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{\delta}{6} a_{t_i}(e_k + 4 e_\ell + e_j),\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big) \\
& \qquad + e^{\frac{\delta}{6}(\lambda_{kk} + \lambda_{\ell\ell}) + \frac{2\delta}{3}(\lambda_{\ell\ell} + \lambda_{jj})}\, \mathcal{N}\Big( \Delta z_{t_i}^0;\ \tfrac{\delta}{6} a_{t_i}(4 e_k + e_\ell + e_j),\ \delta\,\sigma_{t_i}\sigma_{t_i}^\top \Big) \Big],
\end{aligned}$$
which is a Gaussian quadrature scheme with the accuracy order 2 (here and in the formulas above, $\Delta_{kj}$ stands for the Kronecker symbol).
In the system under investigation, $\breve q_{t_i}^{\,\delta\frac12\,13} = \breve q_{t_i}^{\,\delta\frac12\,31} = \breve q_{t_i}^{\,\delta\,13} = \breve q_{t_i}^{\,\delta\,31} = \breve q_{t_i}^{\,\delta 2\,13} = \breve q_{t_i}^{\,\delta 2\,31} = 0$ since $\lambda_{13} = \lambda_{31} = 0$, and this fact simplifies the calculations significantly.
For the comparative study we choose the approximations $\breve y_{t_i}^{\,\delta\frac12}$, $\breve y_{t_i}^{\,\delta}$, and $\breve y_{t_i}^{\,\delta 2}$, calculated by (10) and (11) with the schemes $\breve q_{t_i}^{\,\delta\frac12}$, $\breve q_{t_i}^{\,\delta}$, and $\breve q_{t_i}^{\,\delta 2}$, respectively.
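A minimal sketch of the resulting filter for the order-1/2 scheme, combining the reconstruction of (10) and (11) given above (so the exact constants should be checked against [47,48]); it assumes a scalar transformed observation increment, and all names are illustrative.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    """Scalar Gaussian density N(x; mean, var)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def q_order_half(dz0, delta, a_states, Lam, sigma2):
    """Order-1/2 ('left rectangle') approximation of the matrix in (11)."""
    n = Lam.shape[0]
    q = np.empty((n, n))
    for k in range(n):
        dens = gauss_pdf(dz0, delta * a_states[k], delta * sigma2)
        for j in range(n):
            q[k, j] = dens * ((1.0 if k == j else 0.0) + delta * Lam[k, j])
    return q

def filter_step(y_prev, dz0, delta, a_states, Lam, sigma2):
    """One step of the recursion (10): normalize q^T y_prev."""
    q = q_order_half(dz0, delta, a_states, Lam, sigma2)
    unnormalized = q.T @ y_prev
    return unnormalized / unnormalized.sum()
```

By construction, the weights $\Delta_{kj} + \delta\lambda_{kj}$ are non-negative for a sufficiently small $\delta$, so the estimate automatically stays in the probability simplex, which is exactly the property the Euler–Maruyama scheme lacks.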

5. Comparative Numerical Study

Through a series of numerical experiments, we try to answer the question of how the accuracy of the filtering approximation affects the total performance of the control of the linear system (1) or, more precisely, (7). The first set of tests is devoted to the control of the physical system introduced in Section 3. The object of the second set is a semi-stable linear system. The third set returns to the trolley functioning under various complications. Finally, the fourth set is devoted to the control of an unstable linear system.
We perform all experiments with the time increments $\delta = 0.005, 0.001, 0.0001$. On the one hand, the step $\delta$ is inversely proportional to the computational cost of filtering the input $y_t$. On the other hand, a decrease of $\delta$ improves the filtering accuracy and raises the control performance. Hence, the value of $\delta$ is a trade-off between the computational costs and reasonable losses of the control quality. However, in some applied problems the step $\delta$ is constrained from below by the sampling frequency of the actual measuring sensors.
We simulate 1000 random trajectories of the Wiener process $w_t$ and the CTMC $y_t$, which are the same for all experiments. Then we synthesize the optimal stabilization strategy $\widehat u_t$ using various approximations of the Wonham filter estimate and various time steps $\delta$. Finally, the control performance index is a result of Monte Carlo averaging over the simulated trajectories.
We present the obtained results both in tables and figures. For convenient analysis, the headers of all tables contain the system parameters. To characterize the quality of the estimates $\widehat y_{t_i}$, $\breve y_{t_i}^{\,\delta\frac12}$, $\breve y_{t_i}^{\,\delta}$, and $\breve y_{t_i}^{\,\delta 2}$, we calculate the values $\widehat D(\widehat y)$, $\widehat D(\breve y^{\,\delta\frac12})$, $\widehat D(\breve y^{\,\delta})$, and $\widehat D(\breve y^{\,\delta 2})$, where
$$\widehat D(\widetilde y) = \widehat{\mathsf{E}}\left\{ \frac{\delta}{T} \sum_{i=1}^{T/\delta} \big| c\,y_{t_i} - c\,\widetilde y_{t_i} \big|^2 \right\}$$
and $\widehat{\mathsf{E}}$ denotes averaging over the simulated trajectories.
In the case of the Euler–Maruyama scheme divergence for $\widehat y_{t_i}$, we use its stable versions $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$, suggested in Section 3.
To analyze the stabilization quality, we calculate the performance functionals $\widehat J(\widehat u)$, $\widehat J(\breve u^{\,\delta\frac12})$, $\widehat J(\breve u^{\,\delta})$, and $\widehat J(\breve u^{\,\delta 2})$, where $\widehat u_{t_i}$, $\breve u_{t_i}^{\,\delta\frac12}$, $\breve u_{t_i}^{\,\delta}$, $\breve u_{t_i}^{\,\delta 2}$ are the control strategies calculated with the use of the estimates $\widehat y_{t_i}$, $\breve y_{t_i}^{\,\delta\frac12}$, $\breve y_{t_i}^{\,\delta}$, $\breve y_{t_i}^{\,\delta 2}$, respectively, and
$$\widehat J(u) = \widehat{\mathsf{E}}\left\{ \delta \sum_{i=1}^{T/\delta} \Big( \big( C y_{t_i} - x_{t_i} \big)^2 + R\,u_{t_i}^2 \Big) \right\}.$$
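The two sample criteria can be assembled as below, assuming arrays of shape (n_traj, n_steps, n_y) for the simulated and estimated chain and (n_traj, n_steps) for the coordinate and control; a sketch only.

```python
import numpy as np

def d_hat(c, y_true, y_est, delta, T):
    """Sample filtering criterion: time-averaged |c y - c y_est|^2, averaged over paths."""
    err = (y_true - y_est) @ c                  # shape (n_traj, n_steps)
    return np.mean(np.sum(err ** 2, axis=1) * delta / T)

def j_hat(C, y_true, x, u, R, delta):
    """Sample control criterion: delta * sum(|C y - x|^2 + R u^2), averaged over paths."""
    target = y_true @ C                         # desired position C y_t on each path
    return np.mean(np.sum((target - x) ** 2 + R * u ** 2, axis=1) * delta)
```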

5.1. Stable System

We perform the first numerical experiment with the trolley model with the parameters (8) to compare the modified Euler–Maruyama schemes and the stable ones defined by (10) and (11). The calculation results are collected in Table 1.
We do not supply the example with an extra illustration because Figure 3 can serve this purpose. The results of the first experiment lead to the following conclusions.
  • The utilization of the suggested numerical filtering schemes, based on the time discretization of the observations, provides stability of the filtering procedure under all experimental conditions. The superiority is obvious for large steps $\delta$. The offered stable modifications of the Euler–Maruyama scheme demonstrate the expected behavior. For large $\delta$ there are many “divergent” filtering trajectories in need of correction, so both modifications provide a low filtering quality. When $\delta$ decreases, the number of “divergent” trajectories also decreases, and both modifications provide an acceptable filtering quality, although they still lose to the suggested stable schemes. When $\delta$ decreases further, the probability of a “divergent” filtering trajectory vanishes as well, and the quality of all considered numerical filtering schemes becomes identical.
  • In general, one “divergent” trajectory is enough to make the value $\widehat D(\widehat y)$ arbitrarily huge. The absence of such trajectories in the sample of 1000 trajectories for $\delta = 0.0001$ does not guarantee their absence in larger samples: we detect “divergent” trajectories in samples of size $10^7$. Note that the modified Euler–Maruyama schemes provide an acceptable filtering quality in the case of small $\delta$ values.
  • On the set of 1000 simulated trajectories, all suggested stable numerical schemes demonstrate very close performance.
  • The absolute values of all $\widehat D$ and $\widehat J$ are small enough. The reason might be the small noise intensity $g = 0.01$. The calculation error and the moderate sample size seem to make the main contribution to the performance index compared with the theoretical properties of the considered estimates.
The second numerical experiment differs from the first one only in that the value of the parameter $g$ is increased tenfold: $g = 0.1$. The calculation results are collected in Table 2.
The results of the second experiment can lead to the following conclusions.
  • There are no divergent filtering trajectories $\widehat y_{t_i}$ in the simulated sample for any considered step $\delta$. This does not imply their absence in samples of greater size; however, the modifications $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$ give an opportunity to neutralize the divergence without a significant loss of the estimation accuracy.
  • All estimates have moderate quality because of the relatively high noise intensity $g$. We can conclude that under a low signal-to-noise ratio the performance of all considered estimates becomes similar for any considered step $\delta$.
The numerical experiments above might suggest that the benefit of using the stable numerical filtering schemes is questionable. To highlight their utility, we have to involve models with more delicate examples, which expose the value of the stable numerical schemes.

5.2. Semi-Stable System

We now impair the model quality and consider a semi-stable dynamic system. Let us remind the reader that the trolley model (7) keeps its stability for negative $a$ and $b$. The equalities $a = b = 0$ determine the stability bound of the system, and we investigate exactly this case. Obviously, the formula $C = (c_1/a,\ c_2/a,\ c_3/a)$ for the stable states is not valid in this situation. We also make the following changes to the parameters: $c = (1.5, 0.5, 0.5)$, $C = (1, 0, 1)$, $g = 0.01$. The calculation results are collected in Table 3. The results of the third experiment lead to the following conclusions.
  • The proposed stabilization strategy demonstrates good performance even in the semi-stable case.
  • The proposed stable numerical schemes of state filtering demonstrate remarkable superiority compared with the Euler–Maruyama scheme and its modifications. This superiority can be seen for all considered time steps $\delta$.
  • There were five divergent filtering trajectories in the simulated sample, which did not affect the filtering performance dramatically in the case δ = 0.0001 . Nevertheless, even the fact of their existence justifies the usage of the stable filtering scheme.
  • The experiment demonstrates no significant difference in the performance of the stable numerical schemes $\breve y_{t_i}^{\,\delta\frac12}$, $\breve y_{t_i}^{\,\delta}$ and $\breve y_{t_i}^{\,\delta 2}$.
Figure 5 contains typical state trajectories for the semi-stable case, governed by the control law $\widehat u_{t_i}$ calculated from the drift estimates $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$, compared with the trajectory governed by the control law $\breve u_{t_i}^{\,\delta\frac12}$.
Figure 6 contains the uncontrolled variant of the same trajectory.
The numerical experiments with the chosen parameters make it possible to expose the behavior of the offered heuristic modifications of the Euler–Maruyama scheme, $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$, in comparison with the stable scheme $\breve y_{t_i}^{\,\delta\frac12}$ when the original Euler–Maruyama scheme diverges. For the case $\delta = 0.001$, Figure 7 makes it possible to compare the estimates $c\,\widehat y_{t_i}^{\,lim}$, $c\,\widehat y_{t_i}^{\,del}$, and $c\,\breve y_{t_i}^{\,\delta\frac12}$ with the true values $c\,y_{t_i}$ visually. Figure 8 contains the corresponding state trajectories governed by the controls $\widehat u_{t_i}(\widehat y_{t_i}^{\,lim})$, $\widehat u_{t_i}(\widehat y_{t_i}^{\,del})$ and $\breve u_{t_i}^{\,\delta\frac12}$.
The numerical experiments with the semi-stable system allow us to conclude that the proposed stable filtering estimates are very effective for the stabilization control synthesis. The differences between the proposed schemes are still minor in the semi-stable case. The subsection below discusses a situation in which the choice of the particular stable numerical scheme makes some sense.

5.3. Stable System with High-Frequency Changing Drift

The numerical experiments of the last subsection demonstrate that the modified estimates $\widehat y_{t_i}^{\,lim}$ and $\widehat y_{t_i}^{\,del}$ do not “blow up” at any trajectory. Nevertheless, in some cases they “lose” the CTMC $y_t$, as we can see in Figure 7. Hence, we retain the original Euler–Maruyama scheme for the comparative study, keeping in mind the possibility of its “blowing up” divergence.
The experiments above expose several realistic cases for which the proposed stable numerical estimates demonstrate remarkable superiority for the stabilization strategy synthesis compared with the Euler–Maruyama scheme and its modifications. Nevertheless, we have not yet presented a situation in which the suggested stable schemes deliver different estimation quality. To do this, we complicate the conditions of the trolley functioning: namely, we increase the intensity of the CTMC transitions. We do this to confirm the hypothesis that the proposed schemes show different precision when the probability that the CTMC $y_t$ has more than one transition during $(t_{i-1}, t_i]$ is significant. To meet this condition, we have to increase the elements of the TIM $\Lambda$. The frequent input signal transitions lead to a more “active” control. If the control cost $R$ is rather high, then the optimal control is minor and does not affect the system state. Hence, in this experiment we set $R = 0.00001$, as in Section 3 (see Figure 4). Table 4 and Table 5 contain the results of the numerical experiments performed with the transition intensities 10 and 50, respectively.
The results of Table 4 correspond to the expectations. For $\delta = 0.005$ and $\delta = 0.001$, the quality of the estimates of the discretized filters is ordered in accordance with their theoretical convergence rates. It should be noted that the distinction is quite small and can be seen only in the third or fourth significant digit. The case $\delta = 0.0001$ can already be considered marginal. Assuming that the difference may show an upward trend with an increase in the intensity of the jumps, we continue to complicate the operating conditions of the actuator.
Figure 9 illustrates the results corresponding to the transition intensities 10. It contains a typical state trajectory governed by the strategy $\breve u_{t_i}^{\,\delta\frac12}$, indistinguishable from the ones governed by $\breve u_{t_i}^{\,\delta}$ and $\breve u_{t_i}^{\,\delta 2}$. We compare it with the trajectory governed by the “ideal” control $u_{t_i}$ synthesized with complete information about the input $y_t$ [50]. The figure shows that a potential enhancement of the stabilization performance is possible only through an increase in the filtering accuracy.
The plots corresponding to the system with the transition intensities 50 are given in Figure 10.
The calculations presented in Table 5 meet the expectations to the same extent as the previous ones. For $\delta = 0.005$, the distinction in the quality of the estimates of the discretized filters and the corresponding stabilization strategies can be seen in the second or third significant digit. For $\delta = 0.001$, there is the same difference and the same hierarchy of algorithm quality, but already in the third or fourth significant digit. This means that, for such sampling steps, the order of the transition intensities of the input $y_t$ provides quite a lot of realizations with more than one jump within an interval of length $\delta$, which gives an advantage to the higher-order filters. Having achieved this result, it should be noted that, according to Figure 10, the effectiveness of the stabilization strategy itself, even in the case of complete information about the state of the input $y_t$, is extremely low. This is the result of the deterioration of the operating conditions of the actuator to completely unrealistic parameters. The slight superiority of the Euler–Maruyama scheme $\widehat y_{t_i}$ in the case $\delta = 0.0001$ is illusory: it only reflects the fact that the sample of trajectories happens to contain no divergent ones.

5.4. Influence of MJP Dimensionality on Control Performance

The dimensionality of the CTMC state space can also indirectly affect the filtering and stabilization performance. Let us repeat the experiments of the last subsection with an increased CTMC dimensionality: $n_y = 4$. As in the subsection above, we consider two values of the transition intensity: 10 and 50. The numerical results of the experiments are given in Table 6 and Table 7, respectively.
Figure 11 and Figure 12 contain the corresponding typical trajectories.
The obtained results emphasize the necessity of using the stable numerical filtering schemes: even for the minimal time step δ = 0.0001 , the Euler–Maruyama approximation of the Wonham filter “blows up”. The differences between the various stable numerical filtering schemes are still minor.

5.5. Unstable System

To analyze the bundle “stable numerical filtering scheme–optimal stabilization strategy” exhaustively, we finally omit the requirement that system (7) be stable. This means that the inequalities $b < 0$ and $b^2 + 4a < 0$ are no longer valid. The numerical experiments in this subsection have no physical sense; they just illustrate the applicability of the stabilization strategy calculated with the aid of the stable numerical filtering schemes.
Note that the linearity of (7) admits the control synthesis without requiring system stability. If $v_t$ is an optimal strategy for the stable system with $a < 0$ and $b < 0$, then the optimal strategy for the unstable system with “the mirror parameters” $\bar a = -a > 0$ and $\bar b = -b > 0$ is obtained from $v_t$ by symmetry, and the minimal value of the criterion remains the same as for the stable system (see Table 1, Table 2 and Table 3). Hence, we have to consider the case when the parameters $a$ and $b$ have different signs. We perform a numerical experiment with the parameters $a = -1$, $b = 5$, $(c_1, c_2, c_3) = (2.5, 1.5, 1)$; the rest of the parameters coincide with those in (8). Table 8 contains the filtering and stabilization results for various time-discretization steps.
Figure 13 contains the typical system trajectories both in the controlled and uncontrolled cases.
The calculations confirm the effectiveness of the proposed stabilization strategy in the case of an unstable system. When the system is uncontrolled, its state diverges rapidly: the absolute values of the state attain $10^{19}$–$10^{20}$ at the termination moment $T = 10$. The Euler–Maruyama scheme is also unstable for this system. All the proposed stable schemes of numerical filtering demonstrate the same performance.

6. Conclusions

The object of investigation in the paper is the stabilization of an overhead crane affected by uncertain external Markov piecewise constant disturbances. The goal is to analyze the potential application of the stable approximations of the Wonham filter [2,25] to the system state stabilization in the case of incomplete information. Their utilization is intended to neutralize the instability of the Wonham filter approximations delivered by the Euler–Maruyama numerical scheme. The large number of numerical experiments performed leads to the following conclusions.
  • The rational choice of the stable numerical scheme preventing filter divergence is a natural trade-off between the accuracy requirements and the level of statistical uncertainty. In the case of small noise and high-frequency time discretization of the observations, the computational cost of the high-order schemas could outweigh their accuracy gain.
  • One should not underestimate the divergence of the unstable filtering schemas. There is a multitude of mathematical models for which any Euler–Maruyama filtering trajectory diverges.
  • All stable schemas demonstrate consistency, i.e., they converge to the theoretical Wonham filter as the time-discretization step vanishes.
  • The complex of numerical experiments suggests the “midpoint” rectangle schema of order 1 as the preferable one.

Author Contributions

Investigation, A.B. (Alexey Bosov) and A.B. (Andrey Borisov); Methodology, A.B. (Alexey Bosov) and A.B. (Andrey Borisov). All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The research was carried out using the infrastructure of the Shared Research Facilities “High Performance Computing and Big Data” (CKP “Informatics”) of FRC CSC RAS (Moscow).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CME   Conditional Mathematical Expectation
CTMC  Continuous-Time Markov Chain
LQG   Linear Quadratic Gaussian
ODE   Ordinary Differential Equation
RHS   Right Hand Side
SDE   Stochastic Differential Equation
TIM   Transition Intensity Matrix

References

  1. Liptser, R.S.; Shiryaev, A.N. Statistics of Random Processes II Applications; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar] [CrossRef]
  2. Elliott, R.J.; Aggoun, L.; Moore, J.B. Hidden Markov Models: Estimation and Control; Stochastic Modelling and Applied Probability; Springer: New York, NY, USA, 1995. [Google Scholar] [CrossRef]
  3. Anderson, B.; Moore, J. Optimal Filtering; Prentice-Hall Information and System Sciences Series; Prentice-Hall: Hoboken, NJ, USA, 1979. [Google Scholar]
  4. Del Moral, P. Nonlinear Filtering: Interacting Particle Solution. Markov Process. Relat. Fields 1996, 2, 555–580. [Google Scholar] [CrossRef]
  5. Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV Navigation and Localization: A Review. IEEE J. Ocean. Eng. 2014, 39, 131–149. [Google Scholar] [CrossRef]
  6. Ji, C.L.; Zhang, N.; Wang, H.H.; Zheng, C.E. Application of Kalman Filter in AUV Acoustic Navigation. In Applied Mechanics and Materials; Development of Industrial Manufacturing; Trans Tech Publications Ltd.: Baech, Switzerland, 2014; Volume 525, pp. 695–701. [Google Scholar] [CrossRef]
  7. Miller, A.; Miller, B.; Miller, G. AUV position estimation via acoustic seabed profile measurements. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  8. Altman, E.; Avrachenkov, K.; Barakat, C. TCP in presence of bursty losses. Perform. Eval. 2000, 42, 129–147. [Google Scholar] [CrossRef]
  9. Miller, B.M.; Avrachenkov, K.E.; Stepanyan, K.V.; Miller, G.B. Flow Control as a Stochastic Optimal Control Problem with Incomplete Information. Probl. Inf. Transm. 2005, 41, 150–170. [Google Scholar] [CrossRef]
  10. Borisov, A.; Bosov, A.; Miller, G.; Sokolov, I. Partial Diffusion Markov Model of Heterogeneous TCP Link: Optimization with Incomplete Information. Mathematics 2021, 9, 1632. [Google Scholar] [CrossRef]
  11. Platen, E.; Bruti-Liberati, N. Numerical Solution of Stochastic Differential Equations with Jumps in Finance; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar] [CrossRef]
  12. Cvitanić, J.; Liptser, R.; Rozovskii, B. A filtering approach to tracking volatility from prices observed at random times. Ann. Appl. Probab. 2006, 16, 1633–1652. [Google Scholar] [CrossRef]
  13. Ang, A.; Timmermann, A. Regime Changes and Financial Markets. Annu. Rev. Financ. Econ. 2012, 4, 313–337. [Google Scholar] [CrossRef]
  14. Paulsen, J. Risk theory in a stochastic economic environment. Stoch. Process. Their Appl. 1993, 46, 327–361. [Google Scholar] [CrossRef]
  15. Christiansen, M. Multistate models in health insurance. AStA Adv. Stat. Anal. 2012, 96, 155–186. [Google Scholar] [CrossRef]
  16. Akishin, P.; Akishina, E.; Akritas, P.; Antoniou, I.; Ioannovich, J.; Ivanov, V. Stochastic filtering of digital images of skin micro-structure. Comput. Phys. Commun. 2000, 126, 1–11. [Google Scholar] [CrossRef]
  17. Bechhoefer, J. Hidden Markov models for stochastic thermodynamics. New J. Phys. 2015, 17, 075003. [Google Scholar] [CrossRef]
  18. Krogh, A.; Brown, M.; Mian, I.; Sjölander, K.; Haussler, D. Hidden Markov Models in Computational Biology: Applications to Protein Modeling. J. Mol. Biol. 1994, 235, 1501–1531. [Google Scholar] [CrossRef] [PubMed]
  19. Huelsenbeck, J.P.; Larget, B.; Swofford, D. A Compound Poisson Process for Relaxing the Molecular Clock. Genetics 2000, 154, 1879–1892. [Google Scholar] [CrossRef] [PubMed]
  20. Papadopoulos, C.T.; Li, J.; O’Kelly, M.E. A classification and review of timed Markov models of manufacturing systems. Comput. Ind. Eng. 2019, 128, 219–244. [Google Scholar] [CrossRef]
  21. Karchin, R.; Cline, M.; Mandel-Gutfreund, Y.; Karplus, K. Hidden Markov models that use predicted local structure for fold recognition: Alphabets of backbone geometry. Proteins Struct. Funct. Bioinform. 2003, 51, 504–514. [Google Scholar] [CrossRef]
  22. Cauchemez, S.; Carrat, F.; Viboud, C.; Valleron, A.J.; Boëlle, P. A Bayesian MCMC approach to study transmission of influenza: Application to household longitudinal data. Stat. Med. 2004, 23, 3469–3487. [Google Scholar] [CrossRef]
  23. Allen, L.J. An introduction to stochastic epidemic models. In Mathematical Epidemiology; Springer: Berlin/Heidelberg, Germany, 2008; pp. 81–130. [Google Scholar]
  24. Gómez, S.; Arenas, A.; Borge-Holthoefer, J.; Meloni, S.; Moreno, Y. Discrete-time Markov chain approach to contact-based disease spreading in complex networks. EPL (Europhys. Lett.) 2010, 89, 38009. [Google Scholar] [CrossRef]
  25. Wonham, W.M. Some Applications of Stochastic Differential Equations to Optimal Nonlinear Filtering. SIAM J. Control 1965, 2, 347–369. [Google Scholar] [CrossRef]
  26. Kushner, H.; Dupuis, P.G. Numerical Methods for Stochastic Control Problems in Continuous Time; Stochastic Modelling and Applied Probability Series; Springer: New York, NY, USA, 2001; Volume 24. [Google Scholar] [CrossRef]
  27. Bertsekas, D.P. Dynamic Programming and Optimal Control; Athena Scientific: Cambridge, MA, USA, 2013. [Google Scholar]
  28. Fleming, W.H.; Rishel, R.W. Deterministic and Stochastic Optimal Control; Stochastic Modelling and Applied Probability Series; Springer: New York, NY, USA, 1975; Volume 1. [Google Scholar] [CrossRef]
  29. Mortensen, R.E. Stochastic Optimal Control with Noisy Observations. Int. J. Control 1966, 4, 455–464. [Google Scholar] [CrossRef]
  30. Cipra, B.A. Engineers look to Kalman filtering for guidance. SIAM News 1993, 26, 8–9. [Google Scholar]
  31. Johnson, A. LQG applications in the process industries. Chem. Eng. Sci. 1993, 48, 2829–2838. [Google Scholar] [CrossRef]
  32. Mäkilä, P.; Westerlund, T.; Toivonen, H. Constrained linear quadratic gaussian control with process applications. Automatica 1984, 20, 15–29. [Google Scholar] [CrossRef]
  33. Borisov, A.; Bosov, A.; Miller, G. Optimal Stabilization of Linear Stochastic System with Statistically Uncertain Piecewise Constant Drift. Mathematics 2022, 10, 184. [Google Scholar] [CrossRef]
  34. Wonham, W.M. On the Separation Theorem of Stochastic Control. SIAM J. Control 1968, 6, 312–326. [Google Scholar] [CrossRef]
  35. Georgiou, T.T.; Lindquist, A. The Separation Principle in Stochastic Control, Redux. IEEE Trans. Autom. Control 2013, 58, 2481–2494. [Google Scholar] [CrossRef]
  36. Athans, M. The Role and Use of the Stochastic Linear-Quadratic-Gaussian Problem in Control System Design. IEEE Trans. Autom. Control 1971, 16, 529–552. [Google Scholar] [CrossRef]
  37. Stratonovich, R.L. Conditional Markov Processes. Theory Probab. Appl. 1960, 5, 156–178. [Google Scholar] [CrossRef]
  38. Kushner, H.J. On the differential equations satisfied by conditional probability densities of Markov processes with applications. J. Soc. Ind. Appl. Math. Ser. A Control 1964, 2, 106–119. [Google Scholar] [CrossRef]
  39. Duncan, T.E. On the Absolute Continuity of Measures. Ann. Math. Stat. 1970, 41, 30–38. [Google Scholar] [CrossRef]
  40. Zakai, M. On the optimal filtering of diffusion processes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1969, 11, 230–243. [Google Scholar] [CrossRef]
  41. Kálmán, R.E.; Bucy, R.S. New Results in Linear Filtering and Prediction Theory. J. Basic Eng. 1961, 83, 95–108. [Google Scholar] [CrossRef]
  42. Bucy, R.S.; Joseph, P.D. Filtering for Stochastic Processes with Applications to Guidance; American Mathematical Soc.: Providence, RI, USA, 2005; Volume 326. [Google Scholar]
  43. Borisov, A.V. Wonham Filtering by Observations with Multiplicative Noises. Autom. Remote Control 2018, 79, 39–50. [Google Scholar] [CrossRef]
  44. Kloeden, P.; Platen, E. Numerical Solution of Stochastic Differential Equations; Stochastic Modelling and Applied Probability; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar] [CrossRef]
  45. Yin, G.G.; Zhang, Q.; Liu, Y. Discrete-time approximation of Wonham filters. J. Control Theory Appl. 2004, 2, 1–10. [Google Scholar] [CrossRef]
  46. Kushner, H.J. Probability Methods for Approximations in Stochastic Control and for Elliptic Equations; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  47. Borisov, A.; Sokolov, I. Optimal Filtering of Markov Jump Processes Given Observations with State-Dependent Noises: Exact Solution and Stable Numerical Schemes. Mathematics 2020, 8, 506. [Google Scholar] [CrossRef]
  48. Borisov, A.V. L1-optimal filtering of Markov jump processes. II: Numerical analysis of particular realizations schemes. Autom. Remote Control 2020, 81, 2160–2180. [Google Scholar] [CrossRef]
  49. Davis, M. Linear Estimation and Stochastic Control; A Halsted Press Book; Chapman and Hall: London, UK, 1977. [Google Scholar]
  50. Bosov, A.V. Stabilization and Trajectory Tracking of Linear System with Jumping Drift. Autom. Remote Control 2022, 83, 520–535. [Google Scholar] [CrossRef]
Figure 1. Typical state trajectory of the model without the drift $y_t$: 1—coordinate $x_t$; 2—velocity $v_t$.
Figure 2. Typical state trajectory of the model with the “slowly varying” CTMC drift: 1—coordinate $x_t$; 2—velocity $v_t$; 3—drift $Cy_t$.
Figure 3. Typical state trajectories of the model with and without the control $u_t$: 1—coordinate $x_t$; 2—velocity $v_t$; 3—desired “ideal” trolley movement $Cy_t$.
Figure 4. Typical controlled state trajectories with a low control cost: 1—coordinate $x_t$; 2—velocity $v_t$; 3—desired “ideal” trolley movement $Cy_t$.
Figure 5. Typical controlled state trajectories with a low control cost in the semi-stable system: (left) 1—coordinate $x_t$, controlled by $\widehat u_{t_i}(\widehat y_{t_i}^{\,lim})$; 2—coordinate $x_t$, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 3—drift $Cy_t$; (right) 1—coordinate $x_t$, controlled by $\widehat u_{t_i}(\widehat y_{t_i}^{\,del})$; 2—coordinate $x_t$, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 3—drift $Cy_t$.
Figure 6. Typical trajectory “coordinate–velocity” in the uncontrolled case of the semi-stable system: 1—coordinate $x_t$; 2—velocity $v_t$; 3—drift $Cy_t$.
Figure 7. The heuristic modifications of the Euler–Maruyama scheme compared with the stable one and the true input: (left) 1—heuristic estimate $\widehat y_{t_i}^{\,lim}$; 2—stable estimate $\breve y_{t_i}^{\,\delta\frac12}$; 3—true value $c\,y_{t_i}$; (right) 1—heuristic estimate $\widehat y_{t_i}^{\,del}$; 2—stable estimate $\breve y_{t_i}^{\,\delta\frac12}$; 3—true value $c\,y_{t_i}$.
Figure 8. State trajectories governed by the various controls: (left) 1—system state, controlled by $\widehat u_{t_i}(\widehat y_{t_i}^{\,lim})$; 2—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 3—drift $Cy_{t_i}$; (right) 1—system state, controlled by $\widehat u_{t_i}(\widehat y_{t_i}^{\,del})$; 2—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 3—drift $Cy_{t_i}$.
Figure 9. The results with the transition intensities 10: 1—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 2—system state, controlled by $u_{t_i}$; 3—drift $Cy_{t_i}$.
Figure 10. The results with the transition intensities 50: 1—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 2—system state, controlled by $u_{t_i}$; 3—drift $Cy_{t_i}$.
Figure 11. The results with the CTMC dimensionality 4 and the transition intensities 10: 1—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 2—system state, controlled by $u_{t_i}$; 3—drift $Cy_{t_i}$.
Figure 12. The results with the CTMC dimensionality 4 and the transition intensities 50: 1—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 2—system state, controlled by $u_{t_i}$; 3—drift $Cy_{t_i}$.
Figure 13. Typical trajectories of the unstable system: 1—system state, controlled by $\breve u_{t_i}^{\,\delta\frac12}$; 2—uncontrolled system state; 3—drift $Cy_{t_i}$.
Table 1. The results of the first numerical experiment.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.01$, $R = 0.01$, $(c_1, c_2, c_3) = (1, 0, 1)$, $\Lambda = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -1 & 0.5 \\ 0 & 0.5 & -0.5 \end{pmatrix}$.
δ | D̂(ŷ^lim) | D̂(ŷ^del) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û(ŷ^lim)) | Ĵ(û(ŷ^del)) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | 0.2269 | 0.3954 | 0.0496 | 0.0496 | 0.0496 | 3.1876 | 4.5370 | 1.5909 | 1.5913 | 1.5912
0.001 | 0.0539 | 0.0492 | 0.0485 | 0.0485 | 0.0485 | 1.6035 | 1.5631 | 1.5576 | 1.5576 | 1.5575
0.0001 | 0.0493 | 0.0493 | 0.0492 | 0.0492 | 0.0492 | 1.5432 | 1.5432 | 1.5431 | 1.5431 | 1.5431
(For δ = 0.0001 the Euler–Maruyama estimate needs no correction, so the “lim” and “del” columns coincide.)
Table 2. The results of the second numerical experiment.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.1$, $R = 0.01$, $(c_1, c_2, c_3) = (1, 0, 1)$, $\Lambda = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -1 & 0.5 \\ 0 & 0.5 & -0.5 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | 0.1857 | 0.1861 | 0.1861 | 0.0496 | 2.7371 | 2.7409 | 2.7411 | 2.7411
0.001 | 0.1827 | 0.1828 | 0.1828 | 0.1828 | 2.6836 | 2.6845 | 2.6845 | 2.6845
0.0001 | 0.1860 | 0.1861 | 0.1861 | 0.1861 | 2.7126 | 2.7128 | 2.7128 | 2.7128
Table 3. The results of the third numerical experiment.
Parameters: $a = 0$, $b = 0$, $T = 10$, $h = 10$, $g = 0.01$, $R = 0.01$, $(c_1, c_2, c_3) = (1.5, 0.5, 0.5)$, $C = (1, 0, 1)$, $\Lambda = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -1 & 0.5 \\ 0 & 0.5 & -0.5 \end{pmatrix}$.
δ | D̂(ŷ^lim) | D̂(ŷ^del) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û(ŷ^lim)) | Ĵ(û(ŷ^del)) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | 0.6776 | 1.1360 | 0.0557 | 0.0558 | 0.0557 | 4.4154 | 5.1079 | 1.6670 | 1.6672 | 1.6671
0.001 | 0.3500 | 0.0742 | 0.0528 | 0.0528 | 0.0528 | 1.6035 | 1.5631 | 1.6135 | 1.6135 | 1.6135
0.0001 | 0.0527 | 0.0527 | 0.0523 | 0.0523 | 0.0523 | 1.6168 | 1.6168 | 1.6158 | 1.6158 | 1.6158
(For δ = 0.0001 the “lim” and “del” columns coincide.)
Table 4. The results with the transition intensities 10.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.1$, $R = 0.00001$, $(c_1, c_2, c_3) = (1.5, 0.5, 0.5)$, $\Lambda = \begin{pmatrix} -10 & 10 & 0 \\ 10 & -20 & 10 \\ 0 & 10 & -10 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | n/a | 0.3744 | 0.3735 | 0.3733 | n/a | 7.1539 | 7.1169 | 7.1146
0.001 | n/a | 0.3430 | 0.3430 | 0.3430 | n/a | 6.4908 | 6.4894 | 6.4894
0.0001 | 0.3354 | 0.3357 | 0.3357 | 0.3357 | 6.3079 | 6.3095 | 6.3095 | 6.3095
(n/a marks the steps at which the Euler–Maruyama estimate diverges.)
Table 5. The results with the transition intensities 50.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.1$, $R = 0.00001$, $(c_1, c_2, c_3) = (1.5, 0.5, 0.5)$, $\Lambda = \begin{pmatrix} -50 & 50 & 0 \\ 50 & -100 & 50 \\ 0 & 50 & -50 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | n/a | 0.7582 | 0.7470 | 0.7461 | n/a | 10.7081 | 10.6590 | 10.6558
0.001 | n/a | 0.6474 | 0.6470 | 0.6470 | n/a | 10.3166 | 10.3149 | 10.3149
0.0001 | 0.6260 | 0.6277 | 0.6277 | 0.6277 | 10.2395 | 10.2415 | 10.2415 | 10.2415
(n/a marks the steps at which the Euler–Maruyama estimate diverges.)
Table 6. The results with the CTMC dimensionality 4 and the transition intensities 10.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.01$, $R = 0.00001$, $(c_1, c_2, c_3, c_4) = (1.5, 0.5, 0.5, 1.5)$, $\Lambda = \begin{pmatrix} -10 & 10 & 0 & 0 \\ 10 & -20 & 10 & 0 \\ 0 & 10 & -20 & 10 \\ 0 & 0 & 10 & -10 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | n/a | 0.3349 | 0.3345 | 0.3345 | n/a | 6.3259 | 6.2977 | 6.2952
0.001 | n/a | 0.3126 | 0.3125 | 0.3126 | n/a | 5.7229 | 5.7219 | 5.7219
0.0001 | n/a | 0.3063 | 0.3063 | 0.3063 | n/a | 5.5687 | 5.5687 | 5.5687
(n/a marks the steps at which the Euler–Maruyama estimate diverges.)
Table 7. The results with the CTMC dimensionality 4 and the transition intensities 50.
Parameters: $a = -1$, $b = -0.5$, $T = 10$, $h = 10$, $g = 0.01$, $R = 0.00001$, $(c_1, c_2, c_3, c_4) = (1.5, 0.5, 0.5, 1.5)$, $\Lambda = \begin{pmatrix} -50 & 50 & 0 & 0 \\ 50 & -100 & 50 & 0 \\ 0 & 50 & -100 & 50 \\ 0 & 0 & 50 & -50 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | n/a | 0.7011 | 0.6973 | 0.7011 | n/a | 11.9747 | 11.8980 | 11.9165
0.001 | n/a | 0.6431 | 0.6425 | 0.6429 | n/a | 11.1556 | 11.1533 | 11.1544
0.0001 | n/a | 0.6258 | 0.6258 | 0.6258 | n/a | 11.0219 | 11.0219 | 11.0219
(n/a marks the steps at which the Euler–Maruyama estimate diverges.)
Table 8. The stabilization of the unstable system.
Parameters: $a = -1$, $b = 5$, $T = 10$, $h = 10$, $g = 0.01$, $R = 0.01$, $(c_1, c_2, c_3) = (2.5, 1.5, 1)$, $\Lambda = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -1 & 0.5 \\ 0 & 0.5 & -0.5 \end{pmatrix}$.
δ | D̂(ŷ) | D̂(y̆^{δ1/2}) | D̂(y̆^{δ}) | D̂(y̆^{δ2}) | Ĵ(û) | Ĵ(ŭ^{δ1/2}) | Ĵ(ŭ^{δ}) | Ĵ(ŭ^{δ2})
0.005 | n/a | 0.0645 | 0.0646 | 0.0646 | n/a | 4.6579 | 4.6602 | 4.6595
0.001 | n/a | 0.0604 | 0.0604 | 0.0604 | n/a | 4.4521 | 4.4525 | 4.4525
0.0001 | n/a | 0.0594 | 0.0594 | 0.0594 | n/a | 4.4753 | 4.4754 | 4.4754
(n/a marks the steps at which the Euler–Maruyama estimate diverges.)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
