*Article* **Stabilization of Stochastic Dynamical Systems of a Random Structure with Markov Switches and Poisson Perturbations**

**Taras Lukashiv 1,2,3,\*,†, Yuliia Litvinchuk 3,†, Igor V. Malyk 3,†, Anna Golebiewska 2 and Petr V. Nazarov 1,\***


**Abstract:** An optimal control for a dynamical system optimizes a certain objective function. Here, we consider the construction of an optimal control for a stochastic dynamical system with a random structure, Poisson perturbations and random jumps, which makes the system stable in probability. Sufficient conditions for stability in probability are obtained using the second Lyapunov method, in which the construction of the corresponding Lyapunov functions plays an important role. We provide a solution to the problem of optimal stabilization in the general case. For a linear system with a quadratic quality function, we give a method for the synthesis of an optimal control based on the solution of Riccati equations. Finally, in the autonomous case, a system of differential equations is constructed to obtain the unknown matrices used in the construction of an optimal control. The method of a small parameter is justified for the algorithmic search of an optimal control. This approach brings a novel solution to the problem of optimal stabilization for stochastic dynamical systems with a random structure, Markov switches and Poisson perturbations.

**Keywords:** optimal control; Lyapunov function; system of stochastic differential equations; Markov switches; Poisson perturbations

**MSC:** 60J25; 93C73; 93E03; 93E15

#### **1. Introduction**

The main problem considered in this paper is the synthesis of an optimal control for a controlled dynamical system described by a stochastic differential equation (SDE) with Poisson perturbations and external random jumps [1,2]. The importance of this problem is linked to the fact that the dynamics of many real processes cannot be described by continuous models such as ordinary differential equations or Ito stochastic differential equations [3]. More complex systems include the presence of jumps, and these jumps can occur at random time moments *τ_k*, *k* ≥ 1, or at deterministic time moments, *t_m*, *m* ≥ 1. In the first case, the jump-like change can be described by point processes [4,5] or, in a more specific case, by generalized Poisson processes, the dynamics of which are characterized only by the intensity of the jumps. The jumps of the system at deterministic moments of time, *t_m*, can be described by the relation:

$$
\Delta x(t_m) = x(t_m) - x(t_m-) = g(\dots), \tag{1}
$$

where *x*(*t*), *t* ≥ 0, is a random process describing the dynamics of the system, and *g* is a finite-valued function that reflects the magnitude of a jump and depends on the time *t* and on the value of the process *x* at the moment *t_m*−. According to the works of Katz I. Ya. [1], Yasinsky

**Citation:** Lukashiv, T.; Litvinchuk, Y.; Malyk, I.V.; Golebiewska, A.; Nazarov, P.V. Stabilization of Stochastic Dynamical Systems of a Random Structure with Markov Switches and Poisson Perturbations. *Mathematics* **2023**, *11*, 582. https:// doi.org/10.3390/math11030582

Academic Editors: Alexandru Agapie, Denis Enachescu, Vlad Stefan Barbu and Bogdan Iftimie

Received: 6 December 2022 Revised: 13 January 2023 Accepted: 20 January 2023 Published: 22 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

V.K., Yurchenko I.V. and Lukashiv T.O. [6], jumps at deterministic time moments, *t_m*, are quite accurately described by Equation (1). It allows a relatively simple transfer of the basic properties of stochastic systems of differential equations without jumps (*g* ≡ 0) to systems with jumps. Such properties, as will be noted below, include the Markov property of *x*(*t*), *t* ≥ 0, with respect to the natural filtration, and the martingale properties of $|x(t)|^2$, *t* ≥ 0 [7,8]. It should be noted that the description of real dynamical systems is not limited to the Wiener process and point processes (Poisson process). A more general approach is based on the use of semimartingales [9]. The disadvantage of this approach is that it is impossible to link it with the methods used for systems described by ordinary differential equations or stochastic Ito differential equations. The second approach to describing jump processes, *x*(*t*), is based on the use of semi-Markov processes, considered in the works of Korolyuk V.S. [10] and Malyk I.V. [2,11]. In particular, the works of Malyk I.V. are devoted to the convergence of semi-Markov evolutions in the averaging scheme and the diffusion approximation. The results derived in these works, together with the results on large deviations (e.g., [12]), can also be used to investigate the considered problems.

Since we consider generalized classical differential equations, the approaches used will also be classical. The basic research method is based on the Lyapunov methods described in the papers by Katz I. Ya. [1] and Lukashiv T.O. and Malyk I.V. [13]. It should be noted that the application of this method makes it possible to find the optimal control for linear systems with a quadratic quality functional, which also corresponds to classical dynamical systems [14].

It should be noted that a large number of works are devoted to the stability of systems with jumping Markov processes. For example, the works [15,16] consider sufficient conditions for the stability of Ito stochastic differential equations with Markov switching and a variable delay. In [15], this theory found a logical use in modeling neural networks with a decentralized event-triggered mechanism and in finding sufficient conditions for stabilizing the process that describes the dynamics of the neural network. Note that the authors of this work also considered systems of stochastic differential equations in which the deterministic term near *dt* is quasi-linear; that is, the linear component plays the main role in this research. This assumption of quasi-linearity allows, with additional conditions on the value of the nonlinear part, the discovery of sufficient stability conditions for *x*(*t*), *t* ≥ 0, by constructing a suboptimal control *u*(*t*), *t* ≥ 0. Similar results were also obtained in [16], where the authors described a stabilization algorithm based on the construction of a non-fragile event-triggered controller for Ito stochastic differential equations with a varying delay. The authors chose a specific class of admissible controls, which makes it possible to solve the optimization problem of finding a suboptimal control.

The structure of the paper is as follows. In Section 2, we consider the mathematical model of a dynamical system with jumps. It is described by a system of stochastic differential equations with a Poisson integral and external jumps. Sufficient conditions for the existence and uniqueness of the solution of this system are given there. In Section 3, we investigate the stability in probability of the solution *x*(*t*), *t* ≥ 0. In this section, we introduce the notion of the Lyapunov function and prove sufficient conditions for stability in probability (Theorem 1). The algorithm for computing the quality functional, $I_u(y, h, x_0)$, from a known control, *u*(*t*), is given in Section 4. Moreover, we further present sufficient conditions for the existence of an optimal control (Theorem 2), which are based on the existence of a Lyapunov function for the given system. Section 5 considers the construction of an optimal control for linear non-autonomous systems via the coefficients of the system. The optimal control is found by solving auxiliary Riccati equations (Theorem 3). For the analysis of linear autonomous systems, we consider the construction of a quadratic functional. Finally, we formulate sufficient conditions for the existence of an optimal control (Theorem 4) and present the explicit form of such a control in the case of a quadratic quality functional.

#### **2. Task Definition**

On the probability basis $(\Omega, \mathcal{F}, F, \mathbf{P})$ [7], consider a stochastic dynamical system of a random structure given by the Ito stochastic differential equation (SDE) with Poisson perturbations:

$$\begin{aligned}
dx(t) ={}& a(t-, \xi(t-), x(t-), u(t-))dt + \\
&+ b(t-, \xi(t-), x(t-), u(t-))dw(t) + \\
&+ \int_{\mathbb{R}^m} c(t-, \xi(t-), x(t-), u(t-), z)\,\tilde{\nu}(dz, dt), \quad t \in \mathbb{R}_+ \backslash K,
\end{aligned}\tag{2}$$

with Markov switches

$$
\Delta x(t)\big|_{t=t_k} = g(t_k-, \xi(t_k-), \eta_k, x(t_k-)), \quad t_k \in K = \{t_n \uparrow\}, \tag{3}
$$

where $\lim_{n \to +\infty} t_n = +\infty$, and initial conditions


$$x(0) = x_0 \in \mathbb{R}^m, \quad \xi(0) = y \in \mathbf{Y}, \quad \eta_0 = h \in \mathbf{H}. \tag{4}$$

Here, *ξ*(*t*), *t* ≥ 0, is a homogeneous continuous Markov process with a finite number of states $\mathbf{Y} := \{y_1, \dots, y_N\}$ and generator *Q*; $\{\eta_k, k \ge 0\}$ is a Markov chain with values in the space **H** and the transition probability matrix $\mathbb{P}_H$; $x : [0, +\infty) \times \Omega \to \mathbb{R}^m$; *w*(*t*) is an *m*-dimensional standard Wiener process; $\tilde{\nu}(dz, dt) = \nu(dz, dt) - \mathbb{E}\nu(dz, dt)$ is a centered Poisson measure; and the processes *w*, *ν*, *ξ* and *η* are independent [3,7]. We denote by

$$\mathfrak{F}_{t_k} = \sigma\big(\xi(s), w(s), \nu(s, \cdot), \eta_n;\; s \le t_k,\; n \le k\big),$$

the minimal *σ*-algebra with respect to which *ξ*(*t*) is measurable for all *t* ∈ [0, *t_k*] and *η_n* is measurable for *n* ≤ *k*.

The process *x*(*t*), *t* ≥ 0, is càdlàg, and the control $u(t) := u(t, x(t)) : [0, T] \times \mathbb{R}^m \to \mathbb{R}^m$ is an *m*-dimensional measurable function from the class of admissible controls *U* [14].

The following mappings are measurable in the set of variables: $a : \mathbb{R}_+ \times \mathbf{Y} \times \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}^m$, $b : \mathbb{R}_+ \times \mathbf{Y} \times \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}^{m \times m}$, $c : \mathbb{R}_+ \times \mathbf{Y} \times \mathbb{R}^m \times \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}^m$; together with the function $g : \mathbb{R}_+ \times \mathbf{Y} \times \mathbf{H} \times \mathbb{R}^m \to \mathbb{R}^m$, they satisfy the Lipschitz condition

$$\begin{aligned}
|a(t, y, x_1, u) - a(t, y, x_2, u)| &+ |b(t, y, x_1, u) - b(t, y, x_2, u)| + \\
&+ \int_{\mathbb{R}^m} |c(t, y, x_1, u, z) - c(t, y, x_2, u, z)|\,\Pi(dz) + \\
&+ |g(t, y, h, x_1) - g(t, y, h, x_2)| \le L|x_1 - x_2|,
\end{aligned}\tag{5}$$

where Π(*dz*) is defined by $\mathbb{E}\nu(dz, dt) = \Pi(dz)dt$, $L > 0$, $x_1, x_2 \in \mathbb{R}^m$, for all $t \ge 0$, $y \in \mathbf{Y}$, $h \in \mathbf{H}$, and the condition

$$|a(t, y, 0, u)| + |b(t, y, 0, u)| + \int_{\mathbb{R}^m} |c(t, y, 0, u, z)|\,\Pi(dz) + |g(t, y, h, 0)| \le C < \infty. \tag{6}$$

The conditions defined above with respect to the mappings *a*, *b*, *c* and *g* guarantee the existence of a strong solution to Equations (2)–(4), which is unique up to stochastic equivalence [13].
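Although the analysis below is purely analytical, dynamics of the type (2)–(4) are straightforward to explore numerically. The following sketch is a minimal one-dimensional Euler–Maruyama simulation in the spirit of (2)–(4); the coefficients `a`, `b`, `c`, `g`, the generator `Q`, the jump intensity `lam` and the switching moments `t_switch` are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def simulate(x0=1.0, T=5.0, dt=1e-3, lam=2.0, seed=0):
    """Euler-Maruyama sketch of a scalar analogue of (2)-(4):
    drift a, diffusion b, compensated Poisson term c, external jumps g at t_k,
    and a two-state Markov process xi(t) switching the structure."""
    rng = np.random.default_rng(seed)
    Q = np.array([[-1.0, 1.0], [2.0, -2.0]])          # toy generator of xi(t)
    a = lambda y, x: (-1.5 if y == 0 else -0.5) * x   # drift per structure
    b = lambda y, x: 0.2 * x                          # diffusion
    c = lambda y, x: 0.1 * x                          # Poisson jump size
    g = lambda y, x: 0.5 * x                          # external jump (3) at t_k
    t_switch = np.arange(1.0, T, 1.0)                 # deterministic moments K
    n = int(round(T / dt))
    x, y, next_k = x0, 0, 0
    path = np.empty(n + 1)
    path[0] = x0
    for i in range(n):
        t = i * dt
        if next_k < len(t_switch) and t >= t_switch[next_k]:
            x += g(y, x)                              # Delta x(t_k) = g(...)
            next_k += 1
        if rng.random() < -Q[y, y] * dt:              # structural switch of xi
            y = 1 - y
        dw = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)
        x += a(y, x) * dt + b(y, x) * dw + c(y, x) * (dN - lam * dt)
        path[i + 1] = x
    return path

path = simulate()
```

Both drift regimes are contractive and all jump terms vanish at *x* = 0, so the trivial solution *x* ≡ 0 is preserved, as in the stability analysis of the next section.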

Let us denote

$$\mathbf{P}_k((y, h, x), \Gamma \times G \times C) := \mathbf{P}\{(\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \in \Gamma \times G \times C \mid (\xi(t_k), \eta_k, x(t_k)) = (y, h, x)\},$$

the transition probability at the *k*-th step of the Markov chain $(\xi(t_k), \eta_k, x(t_k))$, which determines the solution *x*(*t*) of Equations (2)–(4).

#### **3. Stability in Probability**

Here, we use the definitions from classical works in this area [14,17].

**Definition 1.** *The discrete Lyapunov operator* $(lv_k)(y, h, x)$ *on a sequence of measurable scalar functions* $v_k(y, h, x): \mathbf{Y} \times \mathbf{H} \times \mathbb{R}^m \to \mathbb{R}^1$, $k \in \mathbb{N} \cup \{0\}$, *for the SDE* (2) *with Markov switches* (3) *is defined by the equality:*

$$(lv_k)(y, h, x) := \int_{\mathbf{Y} \times \mathbf{H} \times \mathbb{R}^m} \mathbf{P}_k((y, h, x), du \times dz \times dl)\, v_{k+1}(u, z, l) - v_k(y, h, x), \quad k \ge 0. \tag{7}$$

When applying the second Lyapunov method to the SDE (2) with Markov switches (3), special sequences of the above-mentioned functions $v_k(y, h, x)$, $k \in \mathbb{N}$, are required.
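When the transition probability $\mathbf{P}_k$ has no closed form, the operator (7) can be estimated by Monte-Carlo averaging of $v_{k+1}$ over one transition of the chain $(\xi(t_k), \eta_k, x(t_k))$. The sketch below does this for a toy one-step transition `step` and the candidate function $v_k(y, h, x) = |x|^2$; the transition, the contraction factors `rho` and all numeric values are assumptions for illustration only.

```python
import numpy as np

def discrete_lyapunov_operator(v, step, y, h, x, n_mc=20000, seed=1):
    """Monte-Carlo estimate of (7):
    (l v)(y,h,x) = E[v(xi_{k+1}, eta_{k+1}, x(t_{k+1}))] - v(y,h,x),
    with the expectation over one transition sampled by `step`."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_mc):
        y1, h1, x1 = step(y, h, x, rng)
        total += v(y1, h1, x1)
    return total / n_mc - v(y, h, x)

rho = {0: 0.8, 1: 0.6}                         # per-structure contraction (toy)
def step(y, h, x, rng):
    y1 = y if rng.random() < 0.9 else 1 - y    # structural switch
    x1 = rho[y] * x + 0.05 * rng.normal()      # one-step state update
    return y1, h, x1

v = lambda y, h, x: x * x                      # candidate Lyapunov function
est = discrete_lyapunov_operator(v, step, y=0, h=0, x=1.0)
```

For this toy chain the exact value is $(0.8^2 - 1)\cdot 1 + 0.05^2 = -0.3575$; a negative estimate is consistent with the sign condition (10) used in Theorem 1 below.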

**Definition 2.** *The Lyapunov function for the system of the random structure* (2)*–*(4) *is a sequence of non-negative functions* {*vk*(*y*, *h*, *x*), *k* ≥ 0}, *for which*

*1. for all* $k \ge 0$, $y \in \mathbf{Y}$, $h \in \mathbf{H}$, $x \in \mathbb{R}^m$, *the discrete Lyapunov operator* $(lv_k)(y, h, x)$ (7) *is defined;*

*2. for* $r \to \infty$

$$\bar{v}(r) \equiv \inf_{\substack{k \in \mathbb{N},\, y \in \mathbf{Y}, \\ h \in \mathbf{H},\, |x| \ge r}} v_k(y, h, x) \to \infty;$$

*3. for* $r \to 0$

$$\underline{v}(r) \equiv \sup_{\substack{k \in \mathbb{N},\, y \in \mathbf{Y}, \\ h \in \mathbf{H},\, |x| \le r}} v_k(y, h, x) \to 0.$$

*Moreover,* $\bar{v}(r)$ *and* $\underline{v}(r)$ *are continuous and strictly monotone.*

**Definition 3.** *A system of random structure* (2)*–*(4) *is called stable in probability on the whole if, for any* $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, *one can specify* $\delta > 0$ *such that the inequality* $|x_0| < \delta$ *implies the inequality*

$$\mathbf{P}\left\{\sup_{t \ge 0} |x(t)| > \varepsilon_1\right\} < \varepsilon_2 \tag{8}$$

*for all* $x_0 \in \mathbb{R}^m$, $y \in \mathbf{Y}$, $h \in \mathbf{H}$*.*
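Definition 3 can be checked empirically by frequency estimation of the left-hand side of (8). The sketch below does this for a toy stable scalar SDE $dx = -x\,dt + 0.5x\,dw$ (an assumption, not the system (2)–(4)); the horizon `T` truncates the supremum over $t \ge 0$.

```python
import numpy as np

def prob_sup_exceeds(x0, eps1, n_paths=2000, T=5.0, dt=0.01, seed=2):
    """Frequency estimate of P{sup_{0<=t<=T} |x(t)| > eps1} from (8)
    for dx = -x dt + 0.5 x dw, started at x0 (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.full(n_paths, float(x0))
    sup = np.abs(x).copy()
    for _ in range(n):
        x = x - x * dt + 0.5 * x * rng.normal(0.0, np.sqrt(dt), size=n_paths)
        np.maximum(sup, np.abs(x), out=sup)   # running supremum per path
    return float(np.mean(sup > eps1))

p_small = prob_sup_exceeds(0.01, eps1=0.5)    # |x0| well inside delta
p_large = prob_sup_exceeds(0.45, eps1=0.5)    # |x0| close to eps1
```

Shrinking $|x_0|$ drives the exceedance frequency toward zero, which is exactly the $\delta$–$\varepsilon$ statement of Definition 3.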

For the solution of Equations (2)–(4) on the intervals $[t_k, t_{k+1})$, the following estimate takes place.

**Lemma 1.** *Let the coefficients of Equation* (2)*, a*, *b*, *c and function g, satisfy the Lipschitz condition* (5) *and the uniform boundedness condition* (6)*.*

*Then, for all k* ≥ 0*, the inequality for the strong solution of the Cauchy problem* (2)*–*(4) *holds*

$$\mathbb{E}\left\{\sup_{t_k \le t < t_{k+1}} |x(t)|^2\right\} \le 7\left[\mathbb{E}|x(t_k)|^2 + 3C^2(t_{k+1} - t_k)\right] \exp\{7L^2((t_{k+1} - t_k) + 8)\}. \tag{9}$$

**Proof of Lemma 1.** Using the integral form of the strong solution of Equation (2) [8], for all *t* ∈ [*tk*, *tk*<sup>+</sup>1), *tk* ≥ 0, the following inequality is true:

$$\begin{aligned}
|x(t)| \le{}& |x(t_k)| + \int_{t_k}^t |a(\tau, \xi(\tau), x(\tau), u(\tau)) - a(\tau, \xi(\tau), 0, u(\tau))|\,d\tau + \\
&+ \int_{t_k}^t |a(\tau, \xi(\tau), 0, u(\tau))|\,d\tau + \\
&+ \left|\int_{t_k}^t (b(\tau, \xi(\tau), x(\tau), u(\tau)) - b(\tau, \xi(\tau), 0, u(\tau)))\,dw(\tau)\right| + \\
&+ \left|\int_{t_k}^t b(\tau, \xi(\tau), 0, u(\tau))\,dw(\tau)\right| + \\
&+ \left|\int_{t_k}^t \int_{\mathbb{R}^m} (c(\tau, \xi(\tau), x(\tau), u(\tau), z) - c(\tau, \xi(\tau), 0, u(\tau), z))\,\tilde{\nu}(dz, d\tau)\right| + \\
&+ \left|\int_{t_k}^t \int_{\mathbb{R}^m} c(\tau, \xi(\tau), 0, u(\tau), z)\,\tilde{\nu}(dz, d\tau)\right|.
\end{aligned}$$

Given (5) and (6) and the inequality $\left(\sum_{i=1}^n x_i\right)^2 \le n\sum_{i=1}^n x_i^2$, we get:

$$\begin{aligned}
\sup_{t_k \le t < t_{k+1}} |x(t)|^2 \le{}& 7\Bigg[|x(t_k)|^2 + \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t |a(\tau,\xi(\tau),x(\tau),u(\tau)) - a(\tau,\xi(\tau),0,u(\tau))|\,d\tau\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t |a(\tau,\xi(\tau),0,u(\tau))|\,d\tau\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t (b(\tau,\xi(\tau),x(\tau),u(\tau)) - b(\tau,\xi(\tau),0,u(\tau)))\,dw(\tau)\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t b(\tau,\xi(\tau),0,u(\tau))\,dw(\tau)\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t \int_{\mathbb{R}^m} (c(\tau,\xi(\tau),x(\tau),u(\tau),z) - c(\tau,\xi(\tau),0,u(\tau),z))\,\tilde{\nu}(dz,d\tau)\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t \int_{\mathbb{R}^m} c(\tau,\xi(\tau),0,u(\tau),z)\,\tilde{\nu}(dz,d\tau)\right|^2\Bigg] \le \\
\le{}& 7\Bigg[|x(t_k)|^2 + \sup_{t_k \le t < t_{k+1}} L^2\left|\int_{t_k}^t |x(\tau)|\,d\tau\right|^2 + C^2(t_{k+1} - t_k) + \\
&+ \sup_{t_k \le t < t_{k+1}} L^2\left|\int_{t_k}^t |x(\tau)|\,dw(\tau)\right|^2 + C^2(t_{k+1} - t_k) + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t \int_{\mathbb{R}^m} (c(\tau,\xi(\tau),x(\tau),u(\tau),z) - c(\tau,\xi(\tau),0,u(\tau),z))\,\tilde{\nu}(dz,d\tau)\right|^2 + \\
&+ \sup_{t_k \le t < t_{k+1}}\left|\int_{t_k}^t \int_{\mathbb{R}^m} c(\tau,\xi(\tau),0,u(\tau),z)\,\tilde{\nu}(dz,d\tau)\right|^2\Bigg].
\end{aligned}$$

Introduce the notation:

$$y(t) = \mathbb{E}\left\{\sup_{t_k \le s < t} |x(s)|^2 \Big/ \mathfrak{F}_{t_k}\right\}.$$

Then, according to the last inequality, *y*(*t*) satisfies the relation:

$$y(t) \le 7\left[\mathbb{E}\left\{|x(t_k)|^2 / \mathfrak{F}_{t_k}\right\} + 3C^2(t_{k+1} - t_k) + L^2((t_{k+1} - t_k) + 8) \int_{t_k}^t y(\tau)\,d\tau\right].$$

Using the Gronwall inequality, we obtain the estimate:

$$\mathbb{E}\left\{\sup_{t_k \le t < t_{k+1}} |x(t)|^2 \Big/ \mathfrak{F}_{t_k}\right\} \le 7\left[\mathbb{E}|x(t_k)|^2 + 3C^2(t_{k+1} - t_k)\right] e^{7L^2((t_{k+1} - t_k) + 8)},$$

as required, which completes the proof.

**Remark 1.** *We will consider the stability of the trivial solution x* ≡ 0 *of the system* (2)*–*(4)*; that is, the fulfillment of* (6) *when C* = 0 *[17–19].*

**Theorem 1.** *Let:*

*(1) Interval lengths* [*tk*, *tk*<sup>+</sup>1) *do not exceed* Δ > 0*, i.e.,* |*tk*<sup>+</sup><sup>1</sup> − *tk*| ≤ Δ, *k* ≥ 0;

*(2) The Lipschitz condition is satisfied* (5)*;*

*(3) There exist Lyapunov functions $v_k(y, h, x)$, k* ≥ 0*, such that the following inequality holds:*

$$(lv\_k)(y,h,x) \le 0, k \ge 0. \tag{10}$$

*Then, the system of random structure* (2)*–*(4) *is stable in probability on the whole.*

**Remark 2.** *It should be noted that if condition 1 is not satisfied, the number of jumps* (3) *is finite and the system* (2)*–*(4) *turns into a system without jumps after* max *tk. In this case, we can use the results presented in [1].*

**Proof of Theorem 1.** The conditional expectation of the Lyapunov function is:

$$\mathbb{E}\left\{v_{k+1}(\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \Big/ \mathfrak{F}_{t_k}\right\} = \int_{\mathbf{Y} \times \mathbf{H} \times \mathbb{R}^m} \mathbf{P}_k((\xi(t_k), \eta_k, x(t_k)), du \times dz \times dl)\, v_{k+1}(u, z, l). \tag{11}$$

Then, by the definition of the discrete Lyapunov operator (*lvk*)(*y*, *h*, *x*) (see (7)) and from Equation (11), taking into account (10), we obtain the following inequality:

$$\mathbb{E}\left\{v_{k+1}(\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \Big/ \mathfrak{F}_{t_k}\right\} = v_k(\xi(t_k), \eta_k, x(t_k)) + (lv_k)(\xi(t_k), \eta_k, x(t_k)) \le \underline{v}(|x(t_k)|). \tag{12}$$

From Lemma 1 and the properties of the function *v*¯, it follows that the conditional expectation of the left part of inequality (12) exists.

Using (11) and (12), let us write the discrete Lyapunov operator (*lvk*)(*y*, *h*, *x*), defined by the solutions of (2)–(4):

$$(lv_k)(\xi(t_k), \eta_k, x(t_k)) = \mathbb{E}\left\{v_{k+1}(\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \Big/ \mathfrak{F}_{t_k}\right\} - v_k(\xi(t_k), \eta_k, x(t_k)) \le 0. \tag{13}$$

Then, when *k* ≥ 0, the following inequality is satisfied:

$$\mathbb{E}\left\{v_{k+1}(\xi(t_{k+1}), \eta_{k+1}, x(t_{k+1})) \Big/ \mathfrak{F}_{t_k}\right\} \le v_k(\xi(t_k), \eta_k, x(t_k)).$$

This means that the sequence of random variables $v_k(\xi(t_k), \eta_k, x(t_k))$ is a supermartingale with respect to $\mathfrak{F}_{t_k}$ [5].
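A direct consequence of the supermartingale property is that the expectations $\mathbb{E}\,v_k(\xi(t_k), \eta_k, x(t_k))$ are non-increasing in *k*. The sketch below checks this on a toy switching contraction chain with $v(x) = |x|^2$; the chain and all its parameters are assumptions for illustration, not the system (2)–(4).

```python
import numpy as np

def mean_v_along_chain(n_steps=6, n_paths=20000, seed=5):
    """Means E v(x_k) for v(x) = x^2 along the toy chain
    x_{k+1} = rho(y_k) x_k + 0.05*zeta with a two-state switching y_k."""
    rng = np.random.default_rng(seed)
    rho0, rho1 = 0.8, 0.6                       # per-structure contractions
    x = np.full(n_paths, 1.0)
    y = np.zeros(n_paths, dtype=int)
    means = [float(np.mean(x * x))]
    for _ in range(n_steps):
        switch = rng.random(n_paths) < 0.1      # structural switches
        y = np.where(switch, 1 - y, y)
        r = np.where(y == 0, rho0, rho1)
        x = r * x + 0.05 * rng.normal(size=n_paths)
        means.append(float(np.mean(x * x)))
    return means

means = mean_v_along_chain()
```

The sequence `means` decreases step by step toward the small stationary level set by the additive noise, mirroring the inequality $\mathbb{E}\{v_{k+1}/\mathfrak{F}_{t_k}\} \le v_k$.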

Thus, the following inequality holds:

$$\mathbb{E}\{v_{N+1}(\xi(t_{N+1}), \eta_{N+1}, x(t_{N+1}))\} - \mathbb{E}\{v_n(\xi(t_n), \eta_n, x(t_n))\} = \sum_{k=n}^{N} \mathbb{E}\{(lv_k)(\xi(t_k), \eta_k, x(t_k))\} \le 0.$$

Since the random variable $\sup_{t_k \le t < t_{k+1}} |x(t)|^2$ is independent of the events of the *σ*-algebra $\mathfrak{F}_{t_k}$ [4], then

$$\mathbb{E}\left\{\sup_{t_k \le t < t_{k+1}} |x(t)|^2 \Big/ \mathfrak{F}_{t_k}\right\} = \mathbb{E}\left\{\sup_{t_k \le t < t_{k+1}} |x(t)|^2\right\},$$

i.e., the inequality (9) also holds for the unconditional expectation

$$\mathbb{E}\left\{\sup_{t_k \le t < t_{k+1}} |x(t)|^2\right\} \le 7\,\mathbb{E}|x(t_k)|^2\, e^{7L^2(\Delta + 8)}$$

for *C* = 0, since the stability of the trivial solution is investigated. Then,

$$\begin{aligned}
\mathbf{P}\left\{\sup_{t \ge 0} |x(t)| > \varepsilon_1\right\} &= \mathbf{P}\left\{\sup_{n \in \mathbb{N}} \sup_{t_{n-1} \le t < t_n} |x(t)| > \varepsilon_1\right\} \le \\
&\le \mathbf{P}\left\{\sup_{n \in \mathbb{N}} 7 e^{7L^2(\Delta + 8)} |x(t_{n-1})| > \varepsilon_1\right\} \le \\
&\le \mathbf{P}\left\{\sup_{n \in \mathbb{N}} |x(t_{n-1})| > \frac{\varepsilon_1}{7} e^{-7L^2(\Delta + 8)}\right\} \le \\
&\le \mathbf{P}\left\{\sup_{n \in \mathbb{N}} v_{n-1}(\xi(t_{n-1}), \eta_{n-1}, x(t_{n-1})) \ge \bar{v}\left(\frac{\varepsilon_1}{7} e^{-7L^2(\Delta + 8)}\right)\right\}.
\end{aligned}\tag{14}$$

If $\sup_{k \ge 0} |x(t_k)| \ge r$, then, based on the definition of the Lyapunov function, the following inequality is fulfilled:

$$\sup_{k \ge 0} v_k(\xi(t_k), \eta_k, x(t_k)) \ge \inf_{k \ge 0,\, y \in \mathbf{Y},\, h \in \mathbf{H},\, |x| \ge r} v_k(y, h, x) = \bar{v}(r). \tag{15}$$

Using the inequality for non-negative supermartingales [5,7], we obtain an estimate of the right-hand side of (14):

$$\mathbf{P}\left\{\sup_{n \in \mathbb{N}} v_{n-1}(\xi(t_{n-1}), \eta_{n-1}, x(t_{n-1})) \ge \bar{v}\left(\frac{\varepsilon_1}{7} e^{-7L^2(\Delta + 8)}\right)\right\} \le \frac{v_0(y, h, x_0)}{\bar{v}\left(\frac{\varepsilon_1}{7} e^{-7L^2(\Delta + 8)}\right)} \le \frac{\underline{v}(|x_0|)}{\bar{v}\left(\frac{\varepsilon_1}{7} e^{-7L^2(\Delta + 8)}\right)}. \tag{16}$$

Given the inequality (14), the inequality (16) guarantees that the stability-in-probability inequality (8) holds; that is, the system (2)–(4) is stable in probability on the whole.

#### **4. Stabilization**

The optimal stabilization problem is as follows: for the SDE (2) with switches (3), construct a control *u*(*t*, *x*(*t*)) such that the unperturbed motion *x*(*t*) ≡ 0 of the system (2)–(4) is stable in probability on the whole.

It is assumed that the control, *u*, is determined by the full-feedback principle. In addition, *u*(*t*, *x*) is assumed to be continuous in *t* in the region

$$t \ge 0, \ x \in \mathbb{R}^m, \ y \in \mathbf{Y}, \ h \in \mathbf{H} \tag{17}$$

for every fixed *ξ*(*t*) = *y* ∈ **Y** and *η_k* = *h* ∈ **H**.

It is also assumed that the structure of the system at time *t* ≥ 0, which is independent of the Markov chain *η<sup>k</sup>* (*k* ≥ 0 corresponds to time *tk* ∈ *K*), is known.

Obviously, there is an infinite set of such controls. The control should be chosen from the requirement of the best quality of the transient process, which is expressed in the form of the minimization condition of the functional:

$$I_u(y, h, x_0) := \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\{W(t, x(t), u(t)) / \xi(0) = y, \eta_0 = h, x(0) = x_0\}\,dt, \tag{18}$$

where $W(t, x, u) \ge 0$ is a non-negative function defined in the region $t \ge 0$, $x \in \mathbb{R}^m$, $u \in \mathbb{R}^r$. The algorithm for calculating the functional (18) for a given control, *u*(*t*, *x*), is as follows:
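In practice, the functional (18) can be approximated by Monte-Carlo quadrature over simulated trajectories: truncate the time axis, average *W* along the paths and integrate with a Riemann sum. The sketch below is such an approximation for a quadratic cost of the form (19) with *M* = *D* = 1 and toy scalar dynamics $dx = (x + u)\,dt + 0.2x\,dw$; these choices are assumptions for illustration, not the paper's model.

```python
import numpy as np

def cost_functional(u, x0=1.0, T=8.0, dt=0.01, n_paths=2000, seed=3):
    """Monte-Carlo estimate of a truncated version of (18) with
    W(t, x, u) = x^2 + u^2 and dynamics dx = (x + u) dt + 0.2 x dw."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.full(n_paths, float(x0))
    total = 0.0
    for i in range(n):
        ui = u(i * dt, x)
        total += float(np.mean(x * x + ui * ui)) * dt   # Riemann sum of E{W} dt
        x = x + (x + ui) * dt + 0.2 * x * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return total

J = cost_functional(lambda t, x: -2.0 * x)   # linear feedback u = -2x
```

For this feedback the closed-loop drift is $-x$, so the truncated estimate converges as *T* grows; an unstabilizing choice (e.g. *u* ≡ 0) makes it blow up with *T*.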


**Remark 3.** *For a linear SDE* (2)*, in many cases the quadratic form with respect to the variables x and u is satisfactory*

$$W(t, x, u) = x^T M(t)x + u^T D(t)u, \tag{19}$$

*where M*(*t*) *is a symmetric non-negative definite matrix of size m* × *m and D*(*t*) *is a positive definite matrix of size r* × *r for all t* ≥ 0*.*

**Remark 4.** *Note that according to the feedback principle, M*(*t*) *and D*(*t*) *depend indirectly on the values of ξ*(*t*) *and ηk. Therefore, in the examples below, we will calculate the values of M*(*t*) *and D*(*t*) *for fixed ξ*(*t*) *and ηk.*

The value *I_u*, in the case of the quadratic form of the variables *x* and *u*, evaluates the quality of the transient process quite well on average. The presence of the term $u^T D u$ together with the minimum condition limits the magnitude of the control action $u \in \mathbb{R}^r$.

**Remark 5.** *If the jump condition of the phase trajectory is linear, then the solution of the stabilization problem belongs to the class of controls u*(*t*, *x*) *linear in the phase vector* $x \in \mathbb{R}^m$*. Such problems are called linear-quadratic stabilization problems.*

**Definition 4.** *The control u*0(*t*)*, which satisfies the condition:*

$$I_{u^0}(y, h, x_0) = \min_{u} I_u(y, h, x_0),$$

*where the minimum is taken over all controls continuous in the variables t and x for each ξ*(0) = *y* ∈ **Y** *and η*<sub>0</sub> = *h* ∈ **H***, is called optimal in the sense of optimal stabilization of the strong solution* $x \in \mathbb{R}^m$ *of the system* (2)*–*(4)*.*
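Within a fixed class of admissible controls, the minimum of Definition 4 can be approximated by direct search. The sketch below scans linear feedbacks $u = -kx$ for a toy scalar model $dx = (x + u)\,dt + 0.2x\,dw$ with cost density $W = x^2 + u^2$; the dynamics, the cost and the gain grid are illustrative assumptions, not the paper's system.

```python
import numpy as np

def cost(k_gain, x0=1.0, T=8.0, dt=0.01, n_paths=2000, seed=4):
    """Truncated quadratic cost of the linear feedback u = -k x for the toy
    dynamics dx = (x + u) dt + 0.2 x dw; a fixed seed gives common random
    numbers, so costs of different gains are directly comparable."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.full(n_paths, float(x0))
    total = 0.0
    for i in range(n):
        u = -k_gain * x
        total += float(np.mean(x * x + u * u)) * dt
        x = x + (x + u) * dt + 0.2 * x * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return total

gains = np.linspace(1.5, 4.0, 6)        # candidate class of linear feedbacks
best = min(gains, key=cost)             # crude search for the minimizer
```

For the noiseless analogue, the LQR gain solves $k^2 - 2k - 1 = 0$, i.e. $k^* = 1 + \sqrt{2} \approx 2.41$; the multiplicative noise shifts the minimizer only slightly, so the search should pick the grid point near this value.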

**Theorem 2.** *Let the system* (2)*–*(4) *admit a scalar function* $v^0(t_k, y, h, x)$ *and an r-vector function* $u^0(t, y, h, x) \in \mathbb{R}^r$ *in the region* (17)*, fulfilling the following conditions:*


*2. The function*

$$u_k^0(y, h, x) \equiv u^0(t_k, y, h, x) \in \mathbb{R}^r \tag{20}$$

*is measurable in all arguments for* $0 \le t_k < t_{k+1}$, *k* ≥ 0*;*

*3. The sequence of functions from the criterion* (18) *is positive definite in* $x \in \mathbb{R}^m$*, i.e., for all* $t \in [t_k, t_{k+1})$, *k* ≥ 0*,*

$$W(t, x, u_k^0(y, h, x)) > 0; \tag{21}$$

*4. The sequence of infinitesimal operators* $(lv_k^0)\big|_{u_k^0}$*, calculated for* $u_k^0 \equiv u^0(y, h, x)$*, satisfies the condition, for all* $t \in [t_k, t_{k+1})$*,*

$$(lv_k^0)\big|_{u_k^0} = -W(t, x, u_k^0); \tag{22}$$

*5. The value of* $(lv_k^0) + W(t, x, u)$ *reaches a minimum at* $u = u_k^0$, *k* ≥ 0*, i.e.,*

$$(lv_k^0)\big|_{u_k^0} + W(t, x, u_k^0) = \min_{u \in \mathbb{R}^r}\left\{(lv_k^0)\big|_u + W(t, x, u)\right\} = 0; \tag{23}$$

*6. The series*

$$\sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\{W(t, x(t), u(t)) / x(t_{k-1})\}\,dt < \infty \tag{24}$$

*converges.*

*Then, the control* $u_k^0 \equiv u^0(t_k, y, h, x)$, *k* ≥ 0*, stabilizes the solution of Equations* (2)*–*(4)*. In this case, the equality*

$$\begin{aligned}
v^0(y, h, x_0) &\equiv \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\{W(t, x(t), u^0(t)) / x(t_{k-1})\}\,dt = \\
&= \min_{u \in \mathbb{R}^r} \sum_{k=0}^{\infty} \int_{t_k}^{t_{k+1}} \mathbb{E}\{W(t, x(t), u(t)) / x(t_k)\}\,dt \equiv I_{u^0}(y, h, x_0)
\end{aligned}\tag{25}$$

*holds.*

**Proof of Theorem 2.** I. The stability in probability on the whole of the dynamical system of random structure (2)–(4) for $u \equiv u^0(t_k, x)$, $k \ge 0$, immediately follows from Theorem 1, since the functions $v^0$, for any $t \in [t_k, t_{k+1})$, $k \ge 0$, satisfy the conditions of this theorem.

II. The equality (25) is obviously also a consequence of Theorem 1.

III. We prove by contradiction that the control $u^0(t_k, x)$, $t_k \le t < t_{k+1}$, $k \ge 0$, stabilizes the strong solution of the dynamical system of random structure (2)–(4).

Suppose that there exists a control $u^*(t_k, x) \ne u^0(t_k, x)$ which, when substituted into the SDE (2), realizes a solution $x^*(t)$ with the conditions (3) and (4) such that the inequality

$$I\_{u^\*}\left(\mathcal{Y}, h, \mathbf{x}\_0\right) \preceq I\_{u^0}\left(\mathcal{Y}, h, \mathbf{x}\_0\right). \tag{26}$$

is held.

The fulfilment of conditions (1)–(6) of Theorem 2 will lead to an inequality (see (27)) in contrast to (26).

From the condition (5) (see (23)) follows the inequality:

$$\left(\left.l\mathbf{v}\_k^0\right)\right|\_{\mathbf{u}^\*} \ge -\mathcal{W}(t, \mathbf{x}, \mathbf{u}^\*(t, \mathbf{y}, h, \mathbf{x})).\tag{27}$$

Averaging (27) over random variables {*x*∗(*t*), *ξ*(*t*), *ηk*} over intervals [*tk*, *tk*<sup>+</sup>1), *k* ≥ 0 and integrating over *t* from 0 to *T*, we obtain *n* inequalities:

$$\mathbb{E}\left\{v^{0}(t\_{1},\tilde{\xi}(t\_{1}),\eta\_{k\_{1}},\mathbf{x}^{\*}(t\_{1}))/y\_{1},\eta\_{k\_{1}},\mathbf{x}^{\*}(t\_{1})\right\}-v^{0}(y,h,\mathbf{x}\_{0}) \geq$$

$$\geq -\int\_{0}^{t\_{1}} \mathbb{E}\{W(t,\mathbf{x}^{\*}(t),\mathbf{u}^{\*}(t))/x\_{0}\}dt,\tag{28}$$

$$\mathbb{E}\{v^{0}(t\_{2},\tilde{\xi}(t\_{2}),\eta\_{k\_{2}},\mathbf{x}^{\*}(t\_{2}))/y\_{1},\eta\_{k\_{1}},\mathbf{x}^{\*}(t\_{1})\}-$$

$$-\{v^{0}(t\_{1},\tilde{\xi}(t\_{1}),\eta\_{k\_{1}},\mathbf{x}^{\*}(t\_{1}))/y\_{\prime}h,\mathbf{x}\_{0}\} \geq$$

$$\geq -\int\_{t\_{1}}^{t\_{2}} \mathbb{E}\{W(t,\mathbf{x}^{\*}(t),\mathbf{u}^{\*}(t))/\mathbf{x}^{\*}(t\_{1})\}dt,\tag{29}$$

$$\mathbb{E}\{\upsilon^{0}(t\_{n},\mathsf{f}(t\_{n}),\eta\_{k\_{n}},\mathsf{x}^{\*}(t\_{n})) / \Big/ y\_{n-1}, \eta\_{k\_{n-1}}, \mathsf{x}^{\*}(t\_{n-1})\} -$$

$$-\{\upsilon^{0}(t\_{n-1},\mathsf{f}(t\_{n-1}),\eta\_{k\_{n-1}},\mathsf{x}^{\*}(t\_{n-1})) / \Big/ y\_{n-2}, \eta\_{k\_{n-2}}, \mathsf{x}^{\*}(t\_{n-2})\} \geq$$

$$\geq -\int\_{t\_{n-1}}^{t\_{n}} \mathbb{E}\{W(t,\mathsf{x}^{\*}(t),\mathsf{u}^{\*}(t)) / \mathsf{x}^{\*}(t\_{n-1})\}\tag{30}$$

Taking into account the martingale property of the Lyapunov functions *v*0(*t*, *ξ*(*t*), *h*, *x*∗(*t*)) (see condition (1) of the theorem) due to the system (2)–(4), i.e., by the definition of a martingale, we have *n* equalities with the probability of one being:

$$\mathbb{E}\{\upsilon^0(t\_{k'}\tilde{\mathfrak{z}}(t\_k), \eta\_{k'}, \mathfrak{x}^\*(t\_k))/y\_{k-1'}\eta\_{k-1'}, \mathfrak{x}^\*(t\_{k-1})\} = $$
 
$$= \upsilon^0(t\_{k-1}, \tilde{\mathfrak{z}}(t\_{k-1}), \eta\_{k-1'}, \mathfrak{x}^\*(t\_{k-1})), k = \overline{1, n}. \tag{31}$$

Substituting (31) into the inequalities (28)–(30), we obtain the inequality:

$$\mathbb{E}\{\vartheta^{0}(t\_{n},\xi(t\_{n}),\eta\_{k\_{n}},\mathbf{x}^{\*}(t\_{n}))/t\_{n-1},\xi(t\_{n-1}),\eta\_{k\_{n-1}},\mathbf{x}^{\*}(t\_{n-1})\}-\upsilon^{0}(y,h,\mathbf{x}\_{0}) \ge 0$$

$$\geq -\sum\_{k=0}^{n} \int\_{t\_{k}}^{t\_{k+1}} \mathbb{E}\{\mathcal{W}(t,\mathbf{x}^{\*}(t),\boldsymbol{u}^{\*}(t))/\mathbf{x}^{\*}(t\_{k-1})\}dt \geq$$

$$\geq -\sum\_{k=0}^{\infty} \int\_{t\_{k}}^{\infty} \mathbb{E}\{\mathcal{W}(t,\mathbf{x}^{\*}(t),\boldsymbol{u}^{\*}(t))/\mathbf{x}^{\*}(t\_{k-1})\}dt. \tag{32}$$

According to the assumption (26), it follows that for *tn* → ∞, the integrals on the right-hand side of (32) converge and, taking into account the convergence of the series (24) (condition (6)), we obtain the inequality:

$$v^0(y, h, \mathbf{x}\_0) = I\_{\mathbb{n}^0}(y, h, \mathbf{x}\_0) \le$$

$$\le \sum\_{k=0}^{\infty} \int\_{t\_k}^{\infty} \mathbb{E}\{W(t, \mathbf{x}^\*(t), \boldsymbol{u}^\*(t)) / \boldsymbol{x}^\*(t\_{k-1})\} dt = $$

$$= I\_{\mathbb{n}^\*}(y, h, \mathbf{x}\_0). \tag{33}$$

Indeed, from the convergence of the series (32) (under condition (6)), it follows that the integrands in (33) tend to zero as *<sup>t</sup>* <sup>→</sup> <sup>∞</sup>. In this way, lim*n*→<sup>∞</sup> **<sup>E</sup>**{*v*0(*tn*, *yn*, *<sup>η</sup>kn* , *<sup>x</sup>*∗(*tn*)} <sup>=</sup> 0. Note that it makes sense to consider natural cases when from the condition

$$\mathbb{E}\{W\} \underset{t \to \infty}{\to} 0$$

it follows that **<sup>E</sup>**{*v*0} →*t*→<sup>∞</sup> 0.

Thus, the inequality (33) contradicts the inequality (26). This contradiction proves the statement regarding the optimality of the control *<sup>u</sup>*0(*tk*, *<sup>x</sup>*), *<sup>k</sup>* <sup>≥</sup> 0.

In cases when the Markov process with a finite number of states *ξ*(*tk*) admits a conditional expansion of the conditional transition probability

$$\mathbf{P}\{\omega : \tilde{\xi}(t + \Delta t) = y\_j/\tilde{\xi}(t) = y\_{i\prime} y\_i \neq y\_{\bar{j}}\} = $$
 
$$= q\_{\bar{i}\bar{j}}(t)\Delta t + o(\Delta t), i, j = \overline{1, N}, \tag{34} $$

we obtain an equation that must be satisfied by the optimal Lyapunov functions, *v*<sup>0</sup> *<sup>k</sup>* (*y*, *h*, *x*), and the optimal control, *u*<sup>0</sup> *<sup>k</sup>* (*t*, *x*), ∀*t* ∈ [*tk*, *tk*<sup>+</sup>1).

Note that according to [6,21], the weak infinitesimal operator (7) has the form:

$$(lv\_k)(y, h, \mathbf{x}) = \frac{\partial v\_k(y, h, \mathbf{x})}{\partial t} + (\nabla v\_k(y, h, \mathbf{x}), a(t, y, \mathbf{x}, u)) +$$

$$+ \frac{1}{2} \mathcal{S} p(b^T(t, y, \mathbf{x}, u) \cdot \nabla^2 v\_k(y, h, \mathbf{x}) \cdot b(t, y, \mathbf{x}, u)) +$$

$$+ \int\_{\mathbb{R}^n} [v\_k(y, h, \mathbf{x} + c(t, y, \mathbf{x}, u, z)) - v\_k(y, h, \mathbf{x}) - (\nabla v\_k(y, h, \mathbf{x}))^T \cdot c(t, y, \mathbf{x}, u, z)] \Pi(dz) +$$

$$+ \sum\_{j \neq i}^N [\int\_{\mathbb{R}^n} v\_j(t, \mathbf{x}) p\_{ij}(t, z/x) dz - v\_i(t, \mathbf{x})] q\_{ij},\tag{35}$$

where (·, ·) is a scalar product, ∇*vk* = *∂vk ∂x*<sup>1</sup> ,..., *<sup>∂</sup>vk ∂xm T* , <sup>∇</sup><sup>2</sup>*vk* <sup>=</sup> *∂*<sup>2</sup>*vk ∂xi∂xj m i*,*j*=1 , *k* ≥ 0, "*T*" stands for a transposition, *Sp* is a trace of matrix and *pij*(*t*, *z*/*x*) is a conditional probability density:

$$P\mathfrak{x}(\mathfrak{r}) \in [z, z + dz] / \mathfrak{x}(\mathfrak{r} - 0) = \mathfrak{x} = p\_{\bar{i}\bar{j}}(\mathfrak{r}, z/\mathfrak{x})dz + o(dz)$$

assuming that *ξ*(*τ* − 0) = *yi*, *ξ*(*τ*) = *yj*.

Taking into account Formula (35), the first equation for *v*<sup>0</sup> can be obtained by replacing the left side of (23) with the expression for the averaged infinitesimal operator, (*lv*<sup>0</sup> *k* ) % % *<sup>u</sup>*<sup>∗</sup> [1].

Then, the desired equation at the points (*tk*, *yj*, *ηk*, *x*) has the form:

$$\frac{\partial v\_k^0}{\partial t} + \left( \left( \frac{\partial v\_k^0}{\partial \mathbf{x}} \right)^T \cdot a(t, y, \mathbf{x}, u) \right) + \frac{1}{2} S p \left( \left( b^\top(t, y\_i, \mathbf{x}) \cdot \frac{\partial^2 v\_k^0}{\partial \mathbf{x}^2} \cdot b(t, y\_i, \mathbf{x}) \right) \right) + \dots$$

$$+ \int\_{\mathbb{R}^n} [v\_k^0(\cdot, \cdot, \mathbf{x} + c(t, y, \mathbf{x}, u, z)) - v\_k^0 - (\frac{\partial v\_k^0}{\partial \mathbf{x}})^T \cdot c(t, y, \mathbf{x}, u, z)] \Pi(dz) +$$

$$+ \sum\_{j \neq i}^l \left[ \int\_{-\infty}^{+\infty} v\_j^0(y\_j, h, \mathbf{x}\_j) p\_{ij}(t, z/\mathbf{x}) \, dz - v\_i^0(y\_i, h, \mathbf{x}) \right] q\_{ij}(t) dt +$$

$$+ \mathcal{W}(t, \mathbf{x}, u) = 0. \tag{36}$$

The second equation for optimal control, *u*<sup>0</sup> *<sup>k</sup>* (*t*, *y*, *h*, *x*), can be obtained from (36) by differentiation with respect to the variable *u*, since *u* = *u*<sup>0</sup> delivers the minimum of the left side of (36):

$$\left[ \left( \frac{\partial v^0}{\partial \boldsymbol{x}} \right)^T \cdot \left( \frac{\partial \boldsymbol{a}}{\partial \boldsymbol{u}} \right) + \left( \frac{\partial \mathcal{W}}{\partial \boldsymbol{u}} \right)^T \right] \bigg|\_{\boldsymbol{u} = \boldsymbol{u}\_k^0} = \boldsymbol{0},\tag{37}$$

where *<sup>∂</sup><sup>a</sup> <sup>∂</sup><sup>u</sup>* – *<sup>m</sup>* <sup>×</sup> *<sup>r</sup>*-matrix of Jacobi, stacked with elements <sup>+</sup>*∂an ∂us* , *n* = 1, *m*,*s* = 1,*r* , ; *∂W ∂u* ≡ *∂W ∂u*<sup>1</sup> ,..., *<sup>∂</sup><sup>W</sup> ∂ur* , *k* ≥ 0.

Thus, the problem of optimal stabilization, according to the Theorem 2, consists of solving the complex nonlinear system of Equation (23) with partial derivatives to, determine the unknown Lyapunov functions, *v*<sup>0</sup> *ik* <sup>≡</sup> *<sup>v</sup>*<sup>0</sup> *<sup>k</sup>* (*y*, *h*, *x*), *i* = 1, *l*, *k* ≥ 0.

Note that this system is obtained by eliminating the control *u*<sup>0</sup> *<sup>k</sup>* = *<sup>u</sup>*0(*t*, *<sup>y</sup>*, *<sup>h</sup>*, *<sup>x</sup>*) from Equations (36) and (37).

It is quite difficult to solve such a system; therefore, we will further consider linear stochastic systems for which convenient solution schemes can be constructed.

#### **5. Stabilization of Linear Systems**

Consider a controlled stochastic system defined by a linear Ito's SDE with Markov parameters and Poisson perturbations:

$$d\mathbf{x}(t) = [A(t-,\tilde{\xi}(t-))\mathbf{x}(t-) + B(t-,\tilde{\xi}(t-))u(t-)]dt + \sigma(t-,\tilde{\xi}(t-))\mathbf{x}(t-)dw(t) + \int\_{\mathbb{R}^n} \mathbf{c}(t-)\tilde{\mathbf{x}}(t-)dt,\tag{38}$$

$$+ \int\_{\mathbb{R}^n} \mathbf{c}(t-,\tilde{\xi}(t-),u(t-),z)\mathbf{x}(t-)\tilde{\mathbf{v}}(dz,dt),\ t \in \mathbb{R}\_+\backslash K,\tag{38}$$

with Markov switching

$$\Delta \mathbf{x}(t)\Big|\_{t=t\_k} = \mathbf{g}(t\_k - \mathbf{\bar{s}}(t\_k -), \eta\_{k\prime} \mathbf{x}(t\_k -)), \quad t\_k \in K = \{t\_n \,\,\forall \,\}\tag{39}$$

for lim *<sup>n</sup>*→+<sup>∞</sup> *tn* = +<sup>∞</sup> and initial conditions

$$\mathbf{x}(0) = \mathbf{x}\_0 \in \mathbb{R}^m, \,\,\xi(0) = \boldsymbol{y} \in \mathbf{Y}, \eta\_0 = h \in \mathbf{H}.\tag{40}$$

Here, *A*, *B*, *σ* and *C* are piecewise continuous integrable matrix functions of appropriate dimensions.

Let us assume that the conditions for the jump of the phase vector *<sup>x</sup>* <sup>∈</sup> <sup>R</sup>*<sup>m</sup>* at the moment when *t* = *t* ∗ of the change in the structure of the system due to the transition *ξ*(*t* ∗−) = *yi* in *ξ*(*t* <sup>∗</sup>) = *yj* = *yi* are linear and given in the form:

$$\mathbf{x}(t^\*) = K\_{ij}\mathbf{x}(t^\*-) + \sum\_{s=1}^N \xi\_s \mathbf{Q}\_s \mathbf{x}(t^\*-),\tag{41}$$

where *ξ<sup>s</sup>* := *ξs*(*ω*) are independent random variables for which **E***ξ<sup>s</sup>* = 0, **E***ξ*<sup>2</sup> *<sup>s</sup>* = 1 and *Kij* and *Qs* are given as (*m* × *m*)-matrices.

Note that the equality (41) can replace the general jump conditions [6]:


$$\mathfrak{x}(t^\*) = \mathbb{K}\_{ij}\mathfrak{x}(t^\*-);$$


The quality of the transition process will be estimated by the quadratic functional

$$I\_{\mathbb{U}}(\boldsymbol{y}, h, \mathbf{x}\_{0}) := \sum\_{k=0}^{\infty} \int\_{t\_{k}}^{\infty} \mathbb{E}\left\{\mathbf{x}^{T}(t)M(t)\mathbf{x}(t) + \boldsymbol{u}^{T}(t)D(t)\mathbf{u}(t) \, \bigvee \, \mathbf{y}, h, \mathbf{x}\_{0} \right\} dt,\tag{42}$$

where *M*(*t*) ≥ 0, *D*(*t*) > 0 are symmetric matrices of dimensions (*m* × *m*) and (*r* × *r*), respectively.

According to the Theorem 2, we need to find optimal Lyapunov functions, *v*<sup>0</sup> *<sup>k</sup>* (*y*, *h*, *x*), and a control, *u*<sup>0</sup> *<sup>k</sup>* (*t*, *x*), for ∀*t* ∈ [*tk*, *tk*<sup>+</sup>1), *tk* ∈ *K*, *k* = 0, 1, 2, . . .

The optimal Lyapunov functions are sought in the form:

$$w\_k^0(y, h, \mathbf{x}) = \mathbf{x}^T G(t, y, h)\mathbf{x},\tag{43}$$

where *G*(*t*, *y*, *h*) is a positive-definite symmetric matrix of the size (*m* × *m*).

Hereafter, when *ξ*(*t*) describes a Markov chain with a finite number of states **Y** = {*y*1, *y*2, ... , *yl*}, and *ηk*, *k* ≥ 0 describes a Markov chain with values *hk* in metric space **H** and with transition probability at the *k*-th step **P***k*(*h*, *G*), we introduce the following notation:

$$A\_i(t) := A(t, y\_i), \; B\_i(t) := B(t, y\_i), \; \sigma\_i(t) := \sigma(t, y\_i), \; \mathbb{C}\_i(t, z) := \mathbb{C}(t, y\_i, z),$$

$$G\_{ik}(t) := G(t, y\_i, h\_k), \; \upsilon\_{ik} := \upsilon(y\_i, h\_{k'}x).$$

Let us substitute the functional (43) into Equations (36) and (38) to find an optimal Lyapunov function, *v*<sup>0</sup> *<sup>k</sup>* (*y*, *<sup>h</sup>*, *<sup>x</sup>*), and an optimal control, *<sup>u</sup>*<sup>0</sup> *<sup>k</sup>* (*t*, *x*), for ∀*t* ∈ [*tk*, *tk*<sup>+</sup>1). Given the form of a weak infinitesimal operator (35), we obtain:

$$\mathbf{x}^T(t)\frac{d\mathbf{G}\_{\bar{\mathbf{x}}}(t)}{dt}\mathbf{x}(t) + 2[A\_i(t)\mathbf{x}(t) + B\_i(t)u(t)]\mathbf{G}\_{\bar{\mathbf{x}}}(t)\mathbf{x}(t) +$$

$$+ Sp(\mathbf{x}^T(t)\sigma\_i^T(t)\mathbf{G}\_{\bar{\mathbf{x}}}(t)\sigma\_i(t)\mathbf{x}(t)) + \int\_{\mathbb{R}^n} \mathbf{x}^T(t)\mathbf{C}\_i^T(t,z)\mathbf{G}\_{\bar{\mathbf{x}}}(t)\mathbf{C}\_i(t,z)\mathbf{x}(t)\Pi(dz) +$$

$$+ \mathbf{x}^T(t)\sum\_{j\neq i}^N \left[K\_{ij}^T\mathbf{G}\_{\bar{\mathbf{x}}}(t)K\_{ij} + \sum\_{s=0}^I Q\_s^T\mathbf{G}\_{\bar{\mathbf{x}}}(t)Q\_s - G\_{\bar{\mathbf{x}}}(t)\right]q\_{ij}\mathbf{x}(t) +$$

$$+ \mathbf{x}^T(t)M\_{i\bar{\mathbf{x}}}(t)\mathbf{x}(t) + u^T(t)D\_{i\bar{\mathbf{x}}}(t)u(t) = \mathbf{0},\tag{44}$$

$$2\mathbf{x}^T(t)G\_{ik}(t)B\_i(t) + 2\boldsymbol{\mu}^T(t)D\_{ik}(t) = 0. \tag{45}$$

Note that the partial derivative with respect to *u* of the operator (*lv*) is equal to zero, which confirms the conjecture about constructing an optimal control that does not depend on switching (39) for the system (40).

From (45), we find an optimal control for *ξ*(*t*) = *yi*, when switching (39) *η<sup>k</sup>* = *hk*, *k* ≥ 0,

$$
\mu\_{ik}^{0}(t, \mathbf{x}) = -D\_{ik}^{-1}(t)B\_i^T(t)G\_{ik}(t)\mathbf{x}(t). \tag{46}
$$

Given the matrix equality

$$2\mathbf{x}^T(t)\mathbf{G}\_{ik}(t)A\_i(t)\mathbf{x} = \mathbf{x}^T(t)(\mathbf{G}\_{ik}(t)A\_i(t) + A\_i^T(t)\mathbf{G}\_{ik}(t))\mathbf{x}(t)$$

and excluding *u*<sup>0</sup> *ik* from (44) and equating the resulting matrix with a quadratic form to zero, we can obtain a system of matrix differential equations of Riccati type for finding the matrices *Gik*(*t*), where *i* = 1, 2, . . . , *l*, *k* ≥ 0, corresponding to the interval [*tk*, *tk*<sup>+</sup>1):

$$\frac{dG\_{ik}(t)}{dt} + G\_{ik}(t)A\_i(t) - B\_i(t)D\_{ik}^{-1}(t)B\_i^T(t)G\_{ik}(t)) + \dots$$

$$+Sp(\sigma\_i^T(t)\mathcal{G}\_{ik}(t)\sigma\_i(t)) + \int\_{\mathbb{R}^n} \mathbb{C}\_i^T(t,z)\mathcal{G}\_{ik}(t)\mathcal{C}\_i(t,z)\Pi(dz) +$$

$$+\sum\_{j\neq i}^N \left[K\_{ij}^T\mathcal{G}\_{ik}(t)K\_{ij} + \sum\_{s=0}^l \mathcal{Q}\_s^T\mathcal{G}\_{ik}(t)\mathcal{Q}\_s - \mathcal{G}\_{ik}(t)\right]q\_{ij} + \mathcal{M}\_{ik}(t) = 0,\tag{47}$$

$$\lim\_{t \to \infty} G\_{i\bar{k}}(t) = 0, i = \overline{1, N}, k \ge 0. \tag{48}$$

Thus, we obtain the following statement, which is actually a corollary to Theorem 2.

**Theorem 3.** *Let the system of matrix Equations* (47) *and* (48) *have positive-definite solutions of the order* (*m* × *m*)

$$G\_{1k}(t) > 0,\\ G\_{2k}(t) > 0, \dots, G\_{lk}(t) > 0.$$

*Then, the control* (46) *gives a solution to the problem of optimal stabilization of the system* (38)*–*(40) *with jump condition* (41) *and the criterion of optimality* (42)*.*

#### **6. Stabilization of Autonomous Systems**

Consider the case of an autonomous system that is given by the SDE

$$d\mathbf{x} = [A(\boldsymbol{\xi}(t))\mathbf{x} + B(\boldsymbol{\xi}(t))\mathbf{u}]dt + \sigma(\boldsymbol{\xi}(t))\mathbf{x}dw(t) + \mathbf{C}(\boldsymbol{\xi}(t))\mathbf{x}dN(t), \; t \in \mathbb{R}\_+ \; \mathsf{K},\tag{49}$$

with Markov switching (39) and initial conditions (40). Here, *<sup>x</sup>* <sup>∈</sup> <sup>R</sup>*m*, *<sup>u</sup>* <sup>∈</sup> <sup>R</sup>*<sup>r</sup>* , *A*(*y*), *B*(*y*), *σ*(*y*) and *C*(*y*) are known matrix functions defined in the set **Y** = {*y*1, *y*2, ... , *yk*} of possible values of the Markov chain *ξ*. *N*(*t*), *t* ≥ 0 is a Poisson process with intensity *λ* [4].

In the case of phase vector jumps (41) and the quadratic quality functional (42), the systems (47) and (48) for finding unknown matrices *Gik*, *i* = 1, *N*, *k* ≥ 0, will take the form:

$$G\_{ik}A\_i + A\_i^T G\_{ik} - B\_i D\_{ik}^{-1} B\_i^T G\_{ik} + \sigma\_i^T G\_{ik} \sigma\_i + \cdots$$

$$+ \lambda \mathbf{C}\_i^T G\_{ik} \mathbf{C}\_i +$$

$$\left[ + \sum\_{j \neq i}^{N} \left[ K\_{ij}^T G\_{ik} K\_{ij} + \sum\_{s=0}^{l} Q\_s^T G\_{ik} Q\_s - G\_{ik} \right] q\_{lj} + M\_{ik} = 0, i = \overline{1, N}, k \ge 0. \tag{50}$$

**Remark 6.** *Note that any differential system written in the normal form (such as the system* (38)*, where the dependence of x on t is explicitly indicated) can be reduced to an autonomous system by increasing the number of unknown functions (coordinates) by one.*

#### *Small Parameter Method for Solving the Problem of the Optimal Stabilization*

The algorithmic solution to the problem of optimal stabilization of a linear autonomous system of random structure ((43), (39) and (40)) is achieved by introducing a small parameter [1]. There are two ways to introduce the small parameter:

Case I. Transition probabilities *yi* → *yj* of Markov chains *ξ* are small, i.e., the transition intensities, *qij*, due to the small parameter (*ε* > 0) can be represented as:

$$
\eta\_{ij} = \varepsilon r\_{ij}.\tag{51}
$$

Case II. Small jumps of the phase vector *<sup>x</sup>*(*t*) <sup>∈</sup> *<sup>R</sup>m*, i.e., matrices *Kij* and *Qs* from (41), should be presented in the form:

$$K\_{i\bar{j}} = I + \varepsilon K\_{i\bar{j}\prime} \text{ } Q\_{\mathbb{S}} = \varepsilon Q\_{\mathbb{S}}.\tag{52}$$

In these cases, we will search for the optimal Lyapunov function *v*<sup>0</sup> *<sup>k</sup>* (*y*, *x*, *h*), *k* ≥ 0, in the form of a convergent power series with a base *ε* > 0

$$\upsilon\_k^0(y, h, \mathbf{x}) = \mathbf{x}^T \sum\_{r=0}^{\infty} \varepsilon^r \mathbf{G}^{(r)}(y, h)\mathbf{x}.\tag{53}$$

According to (46), the optimal control, *u*0, should be sought in the form of a convergent series:

$$u\_k^0(y, h, \mathbf{x}) = -[D^{-1}(y)B^T(y) \sum\_{r=0}^{\infty} \mathfrak{e}^r G^{(r)}(y, h)]\mathbf{x}.\tag{54}$$

Case I. Let us substitute the series (53) and (54), taking into account (51), into (44):

$$\mathbf{G}\_{ik}\mathbf{A}\_{i} + (\mathbf{A}\_{i})^{T}\mathbf{G}\_{ik} - \mathbf{B}\_{i}\mathbf{D}\_{ik}^{-1}\mathbf{B}\_{i}^{T}\mathbf{G}\_{ik} + \sigma\_{i}^{T}\mathbf{G}\_{ik}\sigma\_{i} +$$

$$+ \lambda \mathbf{C}\_{i}^{T}\mathbf{G}\_{ik}\mathbf{C}\_{i} +$$

$$+ \sum\_{j \neq i}^{l} \mathbf{K}\_{ij}^{T}\mathbf{G}\_{ik}\mathbf{K}\_{ij} + \sum\_{s=1}^{N} \mathbf{Q}\_{s}^{T}(\mathbf{G}\_{ik}\mathbf{Q}\_{s} - \mathbf{G}\_{ik})\mathbf{c}\_{i}\mathbf{r}\_{ij} + \mathbf{C}\_{ik} = \mathbf{0}; i = \overline{1, l}, k \ge 0.$$

Equating the coefficients at the same powers of *ε* > 0, we get:

$$A\_i^T G\_i^{(0)} + G\_{ik}^{(0)} A\_i - B\_i D\_{ik}^{-1} B\_{ik}^T G\_{ik}^{(0)} +$$

$$+ \sigma\_{ik}^T G\_{ik}^{(0)} \sigma\_{ik} + \lambda C\_i^T G\_{ik}^{(0)} C\_i + M\_{ik} = 0, i = \overline{1, l}, k \ge 0,\tag{55}$$

$$\begin{split} A\_{ik}^T G\_{ik}^{(r)} + G\_{ik}^{(r)} A\_{ik} + \sigma\_i^T G\_{ik}^{(r)} \sigma\_i + \lambda C\_i^T G\_{ik}^{(0)} C\_i &= \\ - \sum\_{j \ne i} (K\_{ik}^T G\_{ik}^{(r-1)} K\_{ij} + \sum\_{s=1}^N Q\_s^T G\_{ik}^{(r-1)} Q\_s - G\_{ik}^{(r-1)}) r\_{ij} + \\ + \sum\_{q=1}^{r-1} B\_i D\_{ik}^{-1} B\_i^T G\_{ik}^{(r-q)}, \end{split} \tag{56}$$

$$r > 1; \tilde{A}\_{ik} \equiv A\_i - B\_i D\_{ik}^{-1} B\_i^T G\_{ik}^{(0)}, i = \overline{1, l}, k \ge 0.$$

Note that the system (55) consists of independent matrix equations which, for a fixed *i* = 1, 2, . . . , *l*, gives a solution to the problem of optimal stabilization of the system

$$d\mathbf{x}(t) = (A\_i\mathbf{x}(t) + B\_i\mathbf{u}(t))dt + \sigma\_i\mathbf{x}(t)dw(t) + \mathbb{C}\_i\mathbf{x}(t)dN(t),\tag{57}$$

with the quality criterion

$$I\_{\boldsymbol{u}}(\boldsymbol{y}, \boldsymbol{h}, \boldsymbol{x}\_{0}) = \sum\_{k=0}^{\infty} \int\_{t\_{k}}^{\infty} \boldsymbol{E} \{ \boldsymbol{x}^{T}(t) \boldsymbol{M}\_{ik} \boldsymbol{x}(t) + \boldsymbol{u}^{T}(t) \boldsymbol{D}\_{ik} \boldsymbol{u}(t) / \boldsymbol{x}\_{0} \} dt,$$

$$\boldsymbol{i} = \overline{\mathbf{1}\_{\prime}} \boldsymbol{l}, \boldsymbol{k} \ge \boldsymbol{0}, \boldsymbol{M}\_{ik} > \boldsymbol{0}, \boldsymbol{D}\_{ik} > \boldsymbol{0}. \tag{58}$$

A necessary and sufficient condition for the solvability of the system (55) is the existence of linear admissible control in the system (57), which provides exponential stability in the mean square of the unperturbed motion of this system [17].

Let us assume that the system of matrix quadratic Equation (55) has a unique positive definite solution, *G*(0) *ik* > 0, *i* = 1, *l*, *k* ≥ 0.

Equation (56) to find *G*(*r*) *ik* > 0,*r* ≥ 1, *k* ≥ 0 is linear, so it has a unique solution for a fixed *i* = 1, *l*, *k* ≥ 0,*r* ≥ 1 and any matrices that are on the right side of (56).

Indeed, the system

$$d\mathbf{x}(t) = \vec{A}\_{ik}\mathbf{x}(t)dt + \sigma\_i \mathbf{x}(t)dw(t) + \mathbb{C}\_i \mathbf{x}(t)dN(t) \tag{59}$$

is obtained by closing the system (57) with the optimal control

$$
\mu\_k^0 = -D\_{ik}^{-1} B\_{ik}^T G\_{ik}^{(0)} \varkappa(t),
$$

which provides exponential stability in the mean square. Then, there is a unique solution to the system (56). Note that in the linear case for autonomous systems, the asymptotic stability is equivalent to the exponential stability [2]. Consider a theorem which originates from the results of this work.

**Theorem 4.** *If a strong solution, x*(*t*)*, of the system* (57) *is exponentially stable in the mean square, then there exists Lyapunov functions vk*(*y*, *h*, *x*), *k* ≥ 0*, which satisfy the conditions:*

$$c\_1 \|\|\mathbf{x}\|\|^2 \le v\_k(y, h, \mathbf{x}) \le c\_1 \|\|\mathbf{x}\|\|^2$$

$$\frac{dE[v\_k]}{dt} \le -c\_3 \|\|\mathbf{x}\|\|^2.$$

Thus, the system of matrix Equations (55) and (56) allows us to consistently find the coefficients *G*(*r*) *ik* > 0 of the corresponding series (53) and (54), starting with a positive solution *G*(0) *ik* > 0, *i* = 1, *l*, *k* ≥ 0 of the system (55).

The next step is to prove the convergence of the series (53) and (54). Without the loss of generality, we simplify notations by fixing *k* ≥ 0. Let *Lr* := max *i* = 1, *l*, *k* ≥ 0 C C C *G*(*r*) *i* C C C. Then, from

(56), it follows that there is a constant *c* > 0, such that for any *r* > 0 the following estimate is correct:

$$L\_r \le c \left[ \sum\_{q=1}^{r-1} L\_q L\_{r-q} + L\_{r-1} \right]. \tag{60}$$

Next, we use the method of majorant series. Consider the quadratic equation

$$
\rho^2 + (a + \varepsilon)\rho + b = 0,\tag{61}
$$

where the coefficients *a* and *b* are chosen such that the power series expansion of one of the roots of this equation is a majorant series for (53).

We obtain

$$
\rho\_{1,2} = -\frac{a+\varepsilon}{2} \pm \sqrt{\frac{(a+\varepsilon)^2}{4} - b} = \sum\_{r=0}^{\infty} \varepsilon^r \rho\_r. \tag{62}
$$

Let us substitute (62) into (61), and equate coefficients at equal powers of *ε*. Then, we get an expression for *ρ<sup>r</sup>* through *ρ*0,..., *ρr*−1:

$$\rho\_r = -\frac{1}{2\rho\_0 + a} \left[ \sum\_{q=1}^{r-1} \rho\_q \rho\_{r-q} + \rho\_{r-1} \right],\tag{63}$$

where *ρ*<sup>0</sup> should be found from the Equation

$$
\rho\_0^2 + a\rho\_0 + b = 0.\tag{64}
$$

Comparing (60) and (63), we find that the series (62) will be major for (53), if we consider

$$c = -\frac{1}{2\rho\_0 + a} > 0; \; \rho\_0 = L\_0 > 0.$$

Thus, the values of the coefficients *a* and *b* in Equation (61) are

$$\begin{aligned} a &= -\left[\frac{1}{\mathfrak{c}} + 2L\_0\right] < 0; \\ b &= \frac{L\_0}{\mathfrak{c}} + L\_0^2 > 0. \end{aligned}$$

Using the known *a* and *b* from (61), we find that the majorant series for (53) will be the expansion one of the roots of (61). This root is such that its values are determined by

$$
\rho\_0 = L\_0 = -\frac{a}{2} - \sqrt{\frac{a^2}{4} - b}.
$$

Convergence of the series (53) for *v*<sup>0</sup> *<sup>k</sup>* (*y*, *h*, *x*) follows from the obvious inequality

$$\left\| \sum\_{r=0}^{\infty} \varepsilon^r G^{(r)}(y, h) \right\| \le \sum\_{r=0}^{\infty} L\_r \varepsilon^r. $$

Thus, we have proved the assertion which is formalized below as Theorem 5:

**Theorem 5.** *1. For* ∀*i* = 1, *l*, *k* ≥ 0*, the system* (57) *has a linear admissible control;*


Case II. Let us substitute the series *Gik* = ∑<sup>∞</sup> *<sup>r</sup>*=<sup>0</sup> *<sup>ε</sup>rG*(*r*) *ik* into (44) and equate the coefficients at the same powers *ε*. Then, taking into account (52), we obtain the following equations:

$$\begin{aligned} \mathbf{G}\_{ik}^{(0)} A\_i + A\_i^T \mathbf{G}\_{ik}^{(0)} + \sigma\_i^T \mathbf{G}\_{ik}^{(0)} \sigma\_i - B\_i \mathbf{D}\_{ik}^{-1} B\_i^T \mathbf{G}\_{ik}^{(0)} + \\\\ + \lambda \mathbf{C}\_i^T \mathbf{G}\_{ik} \mathbf{C}\_i + \sum\_{j \neq i}^l (\mathbf{G}\_{jk}^{(0)} - \mathbf{G}\_{ik}^{(0)}) q\_{ij} + M\_{ik} = \mathbf{0}, k \ge \mathbf{0}, \end{aligned} \tag{65}$$

$$G\_{\rm ik}^{(r)}\,\tilde{A}\_{\rm ik} + \bar{A}\_{\rm ik}^{T}G\_{\rm ik}^{(r)} + \sigma\_{i}^{T}G\_{\rm ik}^{(r)}\,\sigma\_{i} + \lambda\mathcal{C}\_{i}^{T}G\_{\rm ik}\mathcal{C}\_{i} + \sum\_{j \neq i}^{l} (G\_{j\rm k}^{(r)} - G\_{\rm ik}^{(r)})q\_{\rm ij} = \Phi\_{\rm ik}^{(r)},\tag{66}$$

where *<sup>i</sup>* <sup>=</sup> 1, *lk* <sup>≥</sup> 0, *<sup>A</sup>*˜ *ik* <sup>=</sup> *Ai* <sup>−</sup> *BiD*−<sup>1</sup> *ik <sup>B</sup><sup>T</sup> <sup>i</sup> <sup>G</sup>*(0) *ik* ,

$$\begin{aligned} \boldsymbol{\Phi}\_{i\bar{k}}^{(r)} &= \sum\_{q=1}^{r-1} \boldsymbol{B}\_{i} \boldsymbol{D}\_{i\bar{k}}^{-1} \boldsymbol{B}\_{i}^{T} \boldsymbol{G}\_{i\bar{k}}^{(r-q)} - \\\\ -\sum\_{j\neq i}^{l} (\boldsymbol{K}\_{ij}^{T} \boldsymbol{G}\_{j\bar{k}}^{(r-1)} + \boldsymbol{G}\_{j\bar{k}}^{(r-1)} \boldsymbol{K}\_{ij} + \boldsymbol{K}\_{ij}^{T} \boldsymbol{G}\_{j\bar{k}}^{(r-2)} \boldsymbol{K}\_{ij} + \sum\_{s=1}^{N} \boldsymbol{Q}\_{s}^{T} \boldsymbol{G}\_{j\bar{k}}^{(r-2)} \boldsymbol{Q}\_{s}) q\_{i\bar{j}}. \end{aligned}$$

Based on the equations above, the following theorem is correct:

**Theorem 6.** *1. The system of matrix Equation* (65) *has a unique positive definite solution G*(0) *ik* > 0, *i* = 1, *l*; *k* ≥ 0*;*

*2. Jumps of the phase vector x* <sup>∈</sup> *<sup>R</sup><sup>m</sup> satisfy the condition* (52)*.*

*Then, the linear-quadratic optimal stabilization problem* (43)*,* (39) *and* (40) *of minimizing the functional* (42) *has a unique solution, which is given in the form of convergent series* (53) *and* (54)*, and the matrices G*(*r*) *ik* , *i* = 1, *l*;*r* ≥ 1, *k* ≥ 0 *is the only solution to the linear matrix Equation* (66)*.*

#### **7. Model Example**

To illustrate the above theoretical results, consider an example with the following parameters:

• The continuous Markov chain *ξ*(*t*), *t* ≥ 0 is defined by generator

$$Q = \begin{pmatrix} -7 & 7\\ 3 & -3 \end{pmatrix};$$


$$g(t, \xi, \eta, x) = K\_{ij}x = \mathfrak{a}x, Q\_s = 0;$$

where *α* ∈ [−1, 1]. For example, below we use *α* = 0.2;


$$A\_1 = \begin{pmatrix} -2 & 1 & 1 \\ 3 & -3 & 0 \\ 0 & 6 & -2 \end{pmatrix}, A\_2 = \begin{pmatrix} -4 & 8 & 0 \\ 0 & 1 & 2 \\ 3 & -2 & -1 \end{pmatrix};$$

• The values of the matrices *B*(*ξ*) for *ξ*(*t*) ∈ {1, 2} are

$$B\_1 = \begin{pmatrix} 4 & 2 \\ 0 & -2 \\ 1 & 1 \end{pmatrix}, B\_2 = \begin{pmatrix} 0 & 1 \\ 4 & -3 \\ -1 & 1 \end{pmatrix};$$

• The values of the matrices *σ*(*ξ*) for *ξ*(*t*) ∈ {1, 2} are

$$
\sigma\_1 = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 1 \\ -1 & -1 & 1 \end{pmatrix}, \\
\sigma\_2 = \begin{pmatrix} 1 & -1 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}.
$$

• The values of the matrices *C*(*ξ*) for *ξ*(*t*) ∈ {1, 2} are

$$\mathbf{C}\_{1} = \begin{pmatrix} 0.3 & 0.2 & 0.1 \\ -0.2 & 0.4 & 0.8 \\ 0.1 & 0.1 & 0.2 \end{pmatrix}, \mathbf{C}\_{2} = \begin{pmatrix} -0.5 & 0.1 & -0.2 \\ 0.4 & 0.5 & -1.1 \\ -0.7 & -0.6 & 0.3 \end{pmatrix};$$

• The control parameters are

$$M\_{i\bar{k}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}, D\_{i\bar{k}} = \begin{pmatrix} 10 & 0 \\ 0 & 40 \end{pmatrix}.$$

For simplicity, we will assume that the random variables *η<sup>k</sup>* are constants and the solution, *x*(*t*), and optimal control, *u*(*t*), depend only on the random process, *ξ*(*t*).

The main problem of optimal control is the solution of the Riccati Equation (50). There are several basic approaches to finding an approximate solution to this equation. However, in our example, we used the particle swarm optimization method, which allows us to relatively quickly find the solution to Equation (50). The results of finding this equation will be the matrices

$$\mathbf{G}\_1 = \begin{pmatrix} 0.2044 & 0.0943 & 0.0043 \\ 0.1258 & 0.3605 & 0.165 \\ 0.0139 & 0.1575 & 0.3146 \end{pmatrix}, \mathbf{G}\_2 = \begin{pmatrix} 0.1268 & 0.5538 & -0.0962 \\ 0.2533 & 2.8729 & 0.2214 \\ 0.1096 & 0.2075 & 2.0079 \end{pmatrix}.$$

Both solutions are positively defined, so by Theorem 3 there exists an optimal control, which stabilizes system (57) and is defined by

$$\mu(t)\_{\vec{\xi}(t) = i} = -D^{-1}B^T G\_i \mathfrak{x}(t)$$

for *i* ∈ {1, 2}. Two realization of the solution, *x*(*t*), and corresponding control, *u*(*t*), are shown in Figures 1 and 2.

**Figure 1.** Realizations of the solution and optimal control with initial conditions (10, 20, 30).

As we can see from the above examples, the resulting optimal control, *u*(*t*), stabilizes the system, and therefore minimizes the functional *Iu*(*y*, *h*, *x*0). In addition, considering the form of the matrix *D* from (42), we can see that *u*2(*t*) is close to 0, because

$$
\mu^T(t)D\mu(t) = 100\mu\_1^2(t) + 1600\mu\_2^2(t).
$$

In this way, the optimal control found will agree with the given quality functional *Iu*(*y*, *h*, *x*0).

An analysis of the solution showed that there is an optimal control for an arbitrary *α* ∈ [−1, 1]. The |*α*| < 1 case corresponds to the compressive case, since in this case the solution is compressed by the *α* coefficient at each step in the point, *tk*. The |*α*| = 1 case is not compressible, but the existence of an optimal control can be found based on Theorem 3. The case of |*α*| > 1 is not included in the theory of this work, since in this case, the solution

of Equation (50) either does not exist or is not positively defined. This case needs further investigation.

**Figure 2.** Realizations of solution and optimal control with initial conditions (100, 100, 100).

#### **8. Discussion**

In this work, we have obtained sufficient conditions for the existence of an optimal solution for a stochastic dynamical system with jumps, which transform the system to a stable one in probability. The second Lyapunov method was used to investigate the existence of an optimal solution. This method is efficient both for ordinary differential equations (ODE) and for stochastic differential equations (SDE). As it can be seen from the proof of the Theorem 2, the existence of finite bounds for jumps at non-random time moments, *tm* (lim*m*→<sup>∞</sup> *tm* = *T*<sup>∗</sup> < ∞), does not impact the stability of the solution. On the other hand, |*tm*<sup>+</sup> − *tm*| > *δ*, *m* ≥ 1 was used for proving the existence of the optimal control (Theorem 3). This restriction is also present in the works of other authors. Thus, a goal of future work could be to construct an optimal control without the assumption |*tm*<sup>+</sup> − *tm*| > *δ*, *m* ≥ 1, which will considerably expand the scope of the second Lyapunov method.

The limitation of the proposed method is linked to the need for a solution to Riccati´s equations that can be computationally heavy. For small dimensions of *m*, Riccati's equations can be solved either by iteration or by genetic algorithms, but for large dimensions of *m*, only genetic algorithms work.

#### **9. Conclusions**

In this work, we obtained sufficient conditions for the existence of a solution to an optimal stabilization problem for dynamical systems with jumps. We considered the case of a linear system with a quadratic quality functional. We showed that by designing an optimal control that stabilizes the system to a stable one in probability reduces the problem of solving the Riccati equations. Additionally, for a linear autonomous system, the method using a small parameter is substantiated for solving the problem of optimal stabilization. The obtained solutions can be used to describe a stock market in economics, biological systems, including models of response to treatment of cancer, and other complex dynamical systems.

In addition, this work serves as a basis for the study of systems of type (3)–(4) in the presence of a condensation point, i.e.,

$$\lim_{k \to \infty} t_k = t^* < \infty.$$

Systems with this condition serve as mathematical models of real phenomena in which exceptional events accumulate very quickly over a finite period of time, which can lead to a collapse of the system. For example, paper [22] examines a mathematical model of the collapse of the Tacoma Narrows Bridge. However, the authors of that work took into account only deterministic influences and did not include random events affecting the dynamics of the bridge. Considering both deterministic and random influences can provide a more precise picture of such dramatic effects.

**Author Contributions:** Conceptualization, T.L. and I.V.M.; methodology, T.L. and I.V.M.; validation, T.L., Y.L., I.V.M. and P.V.N.; formal analysis, T.L., Y.L., I.V.M. and P.V.N.; writing—original draft preparation, T.L., Y.L. and I.V.M.; writing—review and editing, T.L., P.V.N. and A.G.; supervision, I.V.M. and P.V.N.; project administration, P.V.N. and A.G.; funding acquisition, A.G. and P.V.N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Luxembourg National Research Fund C21/BM/15739125/ DIOMEDES to T.L., P.V.N. and A.G.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We would like to acknowledge the administrations of the Luxembourg Institute of Health (LIH) and the Luxembourg National Research Fund (FNR) for their support in organizing scientific contacts between research groups in Luxembourg and Ukraine.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:

ODE ordinary differential equation

SDE stochastic differential equation

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
