Article

The Linear Quadratic Optimal Control Problem for Stochastic Systems Controlled by Impulses

by Vasile Dragan 1,2,† and Ioan-Lucian Popa 3,4,*,†
1 Institute of Mathematics “Simion Stoilow” of the Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania
2 Academy of the Romanian Scientists, 3 Ilfov, 050044 Bucharest, Romania
3 Department of Computing, Mathematics and Electronics, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
4 Faculty of Mathematics and Computer Science, Transilvania University of Braşov, Iuliu Maniu Street 50, 500091 Braşov, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2024, 16(9), 1170; https://doi.org/10.3390/sym16091170
Submission received: 1 August 2024 / Revised: 28 August 2024 / Accepted: 2 September 2024 / Published: 6 September 2024
(This article belongs to the Special Issue Symmetry in Nonlinear Dynamics and Chaos II)

Abstract

This paper focuses on addressing the linear quadratic (LQ) optimal control problem on an infinite time horizon for stochastic systems controlled by impulses. No constraint regarding the sign of the quadratic functional is applied. That is why our first concern is to find conditions which guarantee that the considered optimal control problem is well posed. Then, when the optimal control problem is well posed, it is natural to look for conditions which guarantee the attainability of the optimal control problem that is being evaluated. The main tool involved in the solution of the problems stated before is a backward jump matrix linear differential equation (BJMLDE) with a Riccati-type jumping operator. This is formulated using the matrix coefficients of the controlled system and the weight matrices of the performance criterion. We show that the well posedness of the optimal control problem under investigation is guaranteed by the existence of the maximal and bounded solution of the associated BJMLDE with a Riccati-type jumping operator. Further, we show that when the associated BJMLDE with a Riccati-type jumping operator has a maximal solution which satisfies a suitable sign condition, then the optimal control problem is attainable if and only if it has an optimal control in a state feedback form, or if and only if the maximal solution of the BJMLDE with a Riccati-type jumping operator is a stabilizing solution. In order to make the paper more self-contained, we present a set of conditions that correspond to the existence of the maximal solution of the BJMLDE satisfying the desired sign condition.

1. Introduction

The study of impulsive systems is of significant theoretical and practical importance. In more formal terms, impulsive systems belong to a category of dynamic systems in which the state evolves according to continuous-time dynamics, except at a number of instants where the state undergoes instantaneous changes. Numerous examples of such systems exist, as referenced in [1,2,3]. These systems serve as valuable models for diverse real-world applications across engineering [4,5,6], environmental science [7], and mathematical finance [8]. For the latest insights into the subject matter, readers can consult the review article [9].
LQ control theory and the associated use of Riccati equations have grown substantially. This expansion includes investigations into both deterministic systems [10,11] and stochastic systems [12,13], marking it as a significant area within control theory. In 2000, Rami, Zhou, and Moore [14] examined the attainability of the LQ problem; specifically, the existence of an optimal control is established using the generalized algebraic Riccati equation. Numerous developments have been made in this area, and we will highlight only a few that are particularly relevant: in [15], the examination revolves around LQ optimal control problems on an infinite horizon with stochastic coefficients. In [16], the investigation centers on optimal quadratic control for an affine equation on an infinite horizon driven by Lévy processes. In [17], the authors examine an LQ optimal control problem on an infinite horizon pertaining to mean-field stochastic differential equations. Also worth considering are references [18,19,20,21].
In [22,23], several criteria for exponential stability in the mean square and mean square stabilizability of linear stochastic systems controlled by impulses are provided.
In the present work, we consider a linear quadratic (LQ) optimal control problem for a dynamical system described by an Itô differential equation controlled by impulses. Often, such an impulse control problem is obtained as an equivalent version of an optimal control problem requiring the minimization of a quadratic criterion along the trajectories of a stochastic dynamical system controlled by piecewise constant controls; see [24]. No constraint regarding the sign of the quadratic functional is applied. That is why our first concern is to find conditions which guarantee that the considered optimal control problem is well posed. Then, when the optimal control problem is well posed, it is natural to look for conditions which guarantee the attainability of the optimal control problem under consideration. The main tool involved in the solution of the problems stated before is a backward jump matrix linear differential equation (BJMLDE) with a Riccati-type jumping operator. This is formulated using the matrix coefficients of the controlled system and the weight matrices of the performance criterion.
It is worth mentioning that in the case of an LQ control problem for a stochastic system controlled by impulses, the BJMLDE with a Riccati-type jumping operator plays the role which the Riccati differential equation or the Riccati algebraic equation, respectively, has in the case of an LQ control problem, where the control laws do not act by impulses. That is why it is natural to introduce the concepts of maximal solution and stabilizing solution of a BJMLDE with a Riccati-type jumping operator.
The principal contributions of the present paper encompass the following:
  • We introduce the concepts of maximal solution and stabilizing solution of the BJMLDE with a Riccati-type jumping operator and provide necessary and sufficient conditions which guarantee the existence of these solutions.
  • We show that the existence of the maximal solution of the BJMLDE with a Riccati-type jumping operator satisfying a suitable sign condition guarantees the well posedness of the optimal control problem under consideration. We introduce the value function $v(t_0, x_0)$ associated to the optimal control problem and compute its value for all initial pairs $(t_0, x_0) \in [0, \infty) \times \mathbb{R}^n$.
  • Under the assumption that the corresponding BJMLDE with a Riccati-type jumping operator has a maximal solution which satisfies a suitable sign condition, the optimal control problem under consideration is attainable if and only if it admits a unique optimal control which is in a state feedback form, or equivalently, if and only if the maximal solution of the BJMLDE with a Riccati-type jumping operator is a stabilizing solution too.
The derivations use the equivalence of the exponential stability in the mean square of a continuous time jump linear stochastic system with the exponential stability of the companion discrete-type linear equation. That is why we do not need to refer to the criteria for the exponential stability in the mean square or to the criteria for stabilizability in the mean square of the linear stochastic systems controlled by impulses existing in the literature, such as those from [22,23].
The rest of the paper is structured as follows: In Section 2, we introduce the problem and provide necessary preliminary concepts. In Section 3, we introduce the backward jump matrix linear differential equation (BJMLDE) with a Riccati-type jumping operator and provide a necessary and sufficient condition for the existence of its maximal and bounded solution (see Theorems 1 and 2). Section 4 is centered on the well posedness (see Theorem 3) and attainability (see Theorem 4) of the optimal control problem. Finally, Section 5 offers the concluding remarks for the paper.

2. The Problem

Consider the impulsive controlled linear stochastic system (ICLSS) described by
$dx(t) = A_0 x(t)\,dt + A_1 x(t)\,dw(t), \quad kh < t \le (k+1)h,$ (1a)
$x(kh^+) = A_0 x(kh) + B_0 u(k) + w_d(k)\big(A_1 x(kh) + B_1 u(k)\big), \quad h > 0 \text{ given},\ k \in \mathbb{Z}_+ = \{0, 1, \ldots\},$ (1b)
$x(0) = x_0,$
where $x(t) \in \mathbb{R}^n$ denotes the state vector of the system and $u(k) \in \mathbb{R}^m$ the control parameters. In (1a), $\{w(t)\}_{t \ge 0}$ is a one-dimensional standard Wiener process (Brownian motion) defined on a given probability space $(\Omega, \mathcal{F}, \mathcal{P})$, and in (1b), $\{w_d(k)\}_{k \in \mathbb{Z}_+}$ is a sequence of independent random variables with zero mean and variance one. For further information on the definition and properties of the Wiener process, the reader may consult [25,26].
Throughout the work, we assume that $\{w(t)\}_{t \ge 0}$ and $\{w_d(k)\}_{k \in \mathbb{Z}_+}$ are independent stochastic processes. For each $t \ge 0$, $\mathcal{F}_t \subset \mathcal{F}$ represents the $\sigma$-algebra generated by the random variables $w(s)$, $w_d(k)$, with $t \ge s \ge 0$ and $t/h > k \ge 0$, augmented with all subsets $A \in \mathcal{F}$ with $\mathcal{P}(A) = 0$.
In (1), the controls $u(k)$ act at the impulsive time instants $\tau_k = kh$, $k = 0, 1, \ldots$. This is why systems of this kind are called impulsive controlled systems.
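To make the dynamics concrete, the following minimal Python sketch simulates one sample path of (1), using the Euler–Maruyama scheme between impulses and the jump update (1b) at each instant $kh$. All numerical values, the feedback rule, and the function names are illustrative placeholders, not part of the paper.

```python
import numpy as np

def simulate_iclss(A0, A1, B0, B1, feedback, x0, h, n_periods, n_sub=200, seed=0):
    """One sample path of the ICLSS (1): jump update (1b) at t = kh,
    Euler-Maruyama discretization of (1a) on (kh, (k+1)h]."""
    rng = np.random.default_rng(seed)
    dt = h / n_sub
    x = np.asarray(x0, dtype=float)
    samples = [x.copy()]                      # values x(kh), k = 0, 1, ...
    for k in range(n_periods):
        u = feedback(k, x)                    # u(k): any user-supplied feedback rule
        wd = rng.standard_normal()            # w_d(k): zero mean, unit variance
        x = A0 @ x + B0 @ u + wd * (A1 @ x + B1 @ u)   # jump Equation (1b)
        for _ in range(n_sub):                # diffusion (1a) between impulses
            dw = np.sqrt(dt) * rng.standard_normal()
            x = x + (A0 @ x) * dt + (A1 @ x) * dw
        samples.append(x.copy())
    return np.array(samples)

# usage sketch (placeholder data): a scalar system with zero feedback
# path = simulate_iclss(np.array([[-1.0]]), np.array([[0.2]]),
#                       np.array([[1.0]]), np.array([[0.0]]),
#                       lambda k, x: np.zeros(1), np.array([1.0]), h=0.5, n_periods=20)
```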
Based on Theorem 5.2.1 from [27], applied on each interval $[kh, (k+1)h]$, we can establish the following.
Proposition 1.
For each initial pair $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$ and for any random vectors $u(k): \Omega \to \mathbb{R}^m$ which are $\mathcal{F}_{kh}$-measurable with
$\mathbb{E}[|u(k)|^2] < \infty,$
the ICLSS (1) has a unique solution $x(t) = x(t; t_0, x_0, u)$, $t \ge t_0$, with the following properties:
(a)
$x(\cdot; t_0, x_0, u)$ is continuous (with probability 1) at any $t \ne kh$ and left continuous at $kh$, for $kh > t_0$;
(b)
$x(\cdot; t_0, x_0, u) \in L^2_{\mathcal{F}}\{[t_0, T], \mathbb{R}^n\}$ for all $T > t_0$;
(c)
$x(t_0; t_0, x_0, u) = x_0$ and $\lim_{t \to kh,\, t > kh} x(t; t_0, x_0, u) = x(kh^+)$, where $x(kh^+)$ is provided by (1b).
We recall that $\mathbb{R}_+ = [0, \infty)$, and $\mathbb{E}[\cdot]$ is used for the mathematical expectation. If $\mathcal{I} \subset \mathbb{R}_+$ is an interval, then $L^2_{\mathcal{F}}\{\mathcal{I}, \mathbb{R}^p\}$ denotes the Hilbert space of stochastic processes $Y = \{y(t)\}_{t \in \mathcal{I}}$ which are non-anticipative with respect to the filtration $\mathcal{F} = \{\mathcal{F}_t\}_{t \in \mathcal{I}}$ and satisfy
$\mathbb{E}\Big[\int_{\mathcal{I}} |y(t)|^2\,dt\Big] < \infty.$
For each initial pair $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$, we denote by $\mathcal{U}_{ad}(t_0, x_0)$ the set of controls $u = \{u(k)\}_{k \ge \kappa(t_0)}$ satisfying the following properties:
(a)
$u(k): \Omega \to \mathbb{R}^m$ are $\mathcal{F}_{kh}$-measurable and
$\sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}[|u(k)|^2] < \infty;$ (2a)
(b)
$\mathbb{E}\Big[\int_{t_0}^{\infty} |x(t; t_0, x_0, u)|^2\,dt\Big] + \sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}[|x(kh; t_0, x_0, u)|^2] < \infty,$
$\lim_{t \to \infty} \mathbb{E}[|x(t; t_0, x_0, u)|^2] = 0.$ (2b)
Here and in the sequel, κ ( t 0 ) is the integer with the property that
$(\kappa(t_0) - 1)h < t_0 \le \kappa(t_0)h.$ (3)
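For reference, with the convention (3) the index $\kappa(t_0)$ is simply the ceiling of $t_0/h$. A one-line helper (hypothetical, used by the numerical sketches later in the paper) is:

```python
import math

def kappa(t0, h):
    """Smallest integer k with (k - 1) h < t0 <= k h, i.e., condition (3)."""
    return math.ceil(t0 / h)
```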
In (2), $x(\cdot; t_0, x_0, u)$, $t \ge t_0$, is the trajectory of the ICLSS (1) starting from $x_0$ at the initial time $t_0$ and determined by the input $u = \{u(k)\}_{k \ge \kappa(t_0)}$.
It is worth noticing that if the ICLSS (1) is mean square stabilizable by the linear state feedback in the sense of Definition 2 from [28], then the set $\mathcal{U}_{ad}(t_0, x_0)$ is not empty for any initial pair $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. In the sequel, $\mathcal{U}_{ad}(t_0, x_0)$ will be called the set of admissible controls associated to the initial pair $(t_0, x_0)$.
To the pair formed by the ICLSS (1) and the admissible controls $\mathcal{U}_{ad}(t_0, x_0)$, we associate the following quadratic functional:
$J(t_0, x_0; u) = \int_{t_0}^{\infty} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt + \sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}\big[x_u^\top(kh) M_d x_u(kh) + 2 x_u^\top(kh) L_d u(k) + u^\top(k) R_d u(k)\big],$ (4)
where $x_u(\cdot) = x(\cdot; t_0, x_0, u)$.
Regarding the matrix coefficients of (1) and the weight matrices from (4), we assume that $A_j \in \mathbb{R}^{n \times n}$, $B_j \in \mathbb{R}^{n \times m}$, $j = 0, 1$, $M, M_d \in \mathcal{S}_n$, $R_d \in \mathcal{S}_m$, and $L_d \in \mathbb{R}^{n \times m}$ are known matrices. Here and in the sequel, $\mathcal{S}_q$ represents the linear space of real symmetric matrices of size $q \times q$.
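Since no sign condition is imposed on $M$, $M_d$, $L_d$, $R_d$, the value of (4) along a given control is most easily inspected numerically. The sketch below is a hedged Monte Carlo estimator of a truncated version of (4); the path generator is assumed to return the continuous-time samples together with the state and control at the impulse instants (all names are illustrative placeholders).

```python
import numpy as np

def estimate_cost(simulate_path, M, Md, Ld, Rd, n_paths=500):
    """Monte Carlo estimate of the cost (4) truncated to the simulated horizon.
    simulate_path(seed) -> (t, x, x_kh, u_k): time grid, state samples on the grid,
    and the state/control values at the impulse instants kh."""
    total = 0.0
    for seed in range(n_paths):
        t, x, x_kh, u_k = simulate_path(seed)
        q = np.einsum('ti,ij,tj->t', x, M, x)                   # x(t)' M x(t) on the grid
        total += np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t))    # trapezoidal rule
        for xk, uk in zip(x_kh, u_k):                           # discrete part of (4)
            total += xk @ Md @ xk + 2.0 * (xk @ Ld @ uk) + uk @ Rd @ uk
    return total / n_paths
```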
The optimal control problem we aim to solve in this work involves finding conditions that ensure the existence of an admissible control $\tilde{u} = \{\tilde{u}(k)\}_{k \ge \kappa(t_0)}$ satisfying the optimality condition
$-\infty < J(t_0, x_0; \tilde{u}) \le J(t_0, x_0; u),$ (5)
for any $u \in \mathcal{U}_{ad}(t_0, x_0)$.
Since we do not impose any condition regarding the sign of the weighting matrices from (4), it is necessary first to find conditions which guarantee that the functional (4) is bounded below on the set of admissible controls $\mathcal{U}_{ad}(t_0, x_0)$.
To this end, we introduce the so-called value function v ( · , · ) associated to the optimal control problem under consideration. We set
$v(t_0, x_0) \stackrel{\Delta}{=} \inf_{u \in \mathcal{U}_{ad}(t_0, x_0)} J(t_0, x_0; u).$ (6)
Definition 1.
We say that the optimal control problem described by the performance criterion (4), the controlled system (1), and the set of admissible controls $\mathcal{U}_{ad}(t_0, x_0)$ is:
(a)
Well posed if
$v(t_0, x_0) > -\infty,$
for any initial pair $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$;
(b)
Attainable if, for each $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$, there exists a control $\tilde{u} \in \mathcal{U}_{ad}(t_0, x_0)$ with
$J(t_0, x_0; \tilde{u}) = v(t_0, x_0) > -\infty.$ (7)
In Section 3, we examine the issue of whether a maximal solution and a stabilizing solution exist for this type of backward jump matrix linear differential equation.
In Section 4, we shall develop conditions which guarantee that the optimal control problem under consideration is well posed and attainable. The primary instrument is a BJMLDE with a Riccati-type jumping operator.

3. BJMLDE with Riccati-Type Jumping Operators

Based on the matrix coefficients of the ICLSS (1) and the weighting matrices provided by the performance criterion (4), we introduce the following BJMLDE with a Riccati-type jumping operator:
$\dot{X}(t) + A_0^\top X(t) + X(t) A_0 + A_1^\top X(t) A_1 + M = 0, \quad kh \le t < (k+1)h,$ (8a)
$X(kh^-) = \sum_{j=0}^{1} A_j^\top X(kh) A_j + M_d - \Big(\sum_{j=0}^{1} A_j^\top X(kh) B_j + L_d\Big)\Big(\sum_{j=0}^{1} B_j^\top X(kh) B_j + R_d\Big)^{-1}\Big(\sum_{j=0}^{1} B_j^\top X(kh) A_j + L_d^\top\Big), \quad k \in \mathbb{Z}_+,$ (8b)
where $X(kh^-) := \lim_{t \to kh,\, t < kh} X(t)$ denotes the left limit of $X(\cdot)$ at the jump instant $kh$.
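For later numerical use, the jumping operator on the right-hand side of (8b) can be coded directly. The sketch below maps the value $X(kh)$ into $X(kh^-)$; the argument convention A = [A_0, A_1], B = [B_0, B_1] is ours, introduced only for illustration.

```python
import numpy as np

def riccati_jump(X_kh, A, B, Md, Ld, Rd):
    """Right-hand side of (8b): returns X(kh^-) given X(kh).
    A = [A0, A1], B = [B0, B1] are the matrices of the jump Equation (1b)."""
    S1 = sum(Aj.T @ X_kh @ Aj for Aj in A)               # sum A_j' X A_j
    S2 = sum(Aj.T @ X_kh @ Bj for Aj, Bj in zip(A, B))   # sum A_j' X B_j
    S3 = sum(Bj.T @ X_kh @ Bj for Bj in B)               # sum B_j' X B_j
    gain_part = np.linalg.solve(S3 + Rd, (S2 + Ld).T)    # (S3 + Rd)^{-1} (S2 + Ld)'
    return S1 + Md - (S2 + Ld) @ gain_part
```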
We denote by $\Gamma^{\Sigma}$ the set of all bounded functions $Z: \mathbb{R}_+ \to \mathcal{S}_n$ satisfying the following:
(a)
$Z(\cdot)$ is differentiable on each interval $(kh, (k+1)h)$ and right continuous at $t_k = kh$, $k \in \mathbb{Z}_+$;
(b)
$Z(\cdot)$ satisfies the following jump matrix linear differential inequalities:
$\dot{Z}(t) + A_0^\top Z(t) + Z(t) A_0 + A_1^\top Z(t) A_1 + M \succeq 0, \quad kh \le t < (k+1)h,$ (9a)
$\begin{pmatrix} -Z(kh^-) + M_d & L_d \\ L_d^\top & R_d \end{pmatrix} + \sum_{j=0}^{1} \begin{pmatrix} A_j & B_j \end{pmatrix}^\top Z(kh) \begin{pmatrix} A_j & B_j \end{pmatrix} \succeq 0,$ (9b)
$\sum_{j=0}^{1} B_j^\top Z(kh) B_j + R_d \succeq \gamma I_m > 0,$ (9c)
for all $k \in \mathbb{Z}_+$ and some $\gamma > 0$.
Remark 1.
Employing the Schur complement technique (Theorem 1 from [29]), one may show that the set $\Gamma^{\Sigma}$ contains all the bounded solutions $t \mapsto X(t): \mathbb{R}_+ \to \mathcal{S}_n$ of the BJMLDE (8) which satisfy the sign condition
$\sum_{j=0}^{1} B_j^\top X(kh) B_j + R_d \succeq \gamma I_m > 0,$
for all $k \in \mathbb{Z}_+$ and some $\gamma > 0$.
A significant role will be played by the subset $\tilde{\Gamma}^{\Sigma} \subset \Gamma^{\Sigma}$ of all bounded functions $Z(\cdot) \in \Gamma^{\Sigma}$ satisfying the stronger condition
$\begin{pmatrix} -Z(kh^-) + M_d & L_d \\ L_d^\top & R_d \end{pmatrix} + \sum_{j=0}^{1} \begin{pmatrix} A_j & B_j \end{pmatrix}^\top Z(kh) \begin{pmatrix} A_j & B_j \end{pmatrix} \succeq \hat{\gamma} I_{n+m} > 0,$ (10)
for all $k \in \mathbb{Z}_+$ and some $\hat{\gamma} > 0$.
Definition 2.
A globally defined solution $t \mapsto X_{max}(t): \mathbb{R}_+ \to \mathcal{S}_n$ of the BJMLDE (8) is called a maximal solution if
$X_{max}(t) \succeq Z(t),$
for all $t \in \mathbb{R}_+$ and any $Z(\cdot) \in \Gamma^{\Sigma}$.
Definition 3.
A globally defined solution $t \mapsto X_s(t): \mathbb{R}_+ \to \mathcal{S}_n$ of the BJMLDE (8) is called the stabilizing solution if the closed-loop jump stochastic linear differential equation (JSLDE)
$dx(t) = A_0 x(t)\,dt + A_1 x(t)\,dw(t), \quad kh < t \le (k+1)h,$
$x(kh^+) = (A_0 + B_0 F_s(kh)) x(kh) + w_d(k)(A_1 + B_1 F_s(kh)) x(kh), \quad k \in \mathbb{Z}_+,$ (11)
is exponentially stable in the mean square (ESMS), where
$F_s(kh) = -\Big(\sum_{j=0}^{1} B_j^\top X_s(kh) B_j + R_d\Big)^{-1}\Big(\sum_{j=0}^{1} B_j^\top X_s(kh) A_j + L_d^\top\Big).$ (12)
Definition 4.
The ICLSS (1) is mean square stabilizable by the linear state feedback if there exists a matrix $F \in \mathbb{R}^{m \times n}$ such that the following closed-loop JSLDE
$dx(t) = A_0 x(t)\,dt + A_1 x(t)\,dw(t), \quad kh < t \le (k+1)h,$
$x(kh^+) = (A_0 + B_0 F) x(kh) + w_d(k)(A_1 + B_1 F) x(kh), \quad k \in \mathbb{Z}_+,$ (13)
is ESMS.
A set of necessary and sufficient criteria for the mean square stabilizability by the linear state feedback of an ICLSS of type (1) is provided by Theorem 6 in [28].
In this section, we shall highlight the necessary and sufficient conditions for the existence of the maximal solution and of the stabilizing solution of a BJMLDE of type (8). To this end, we will appeal to the properties of linear positive operators and of linear operators which define a positive evolution on an ordered Hilbert space.
We recall that $\mathcal{S}_n$ is a finite-dimensional real Hilbert space with the inner product
$\langle X, Y \rangle := \mathrm{Tr}[XY],$ (14)
for all $X, Y \in \mathcal{S}_n$; here, $\mathrm{Tr}[\cdot]$ stands for the trace operator. Furthermore, $\mathcal{S}_n$ becomes an ordered Hilbert space with the order relation $\succeq$ induced by the convex cone
$\mathcal{S}_n^+ := \{X \in \mathcal{S}_n \mid X \text{ is a positive semidefinite matrix}\}.$
Consider the linear operator $X \mapsto \mathcal{L}[X]: \mathcal{S}_n \to \mathcal{S}_n$ defined as
$\mathcal{L}[X] = A_0 X + X A_0^\top + A_1 X A_1^\top,$ (15)
for all $X \in \mathcal{S}_n$. Based on the inner product (14), we consider the adjoint operator $\mathcal{L}^*[\cdot]$, which is described by
$\mathcal{L}^*[X] = A_0^\top X + X A_0 + A_1^\top X A_1,$ (16)
for all $X \in \mathcal{S}_n$. With these notations, the differential Equation (8a) can be expressed in the compact form
$\dot{X}(t) + \mathcal{L}^*[X(t)] + M = 0, \quad kh \le t < (k+1)h.$ (17)
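The adjointness of $\mathcal{L}$ and $\mathcal{L}^*$ with respect to (14) is easy to verify numerically. The following small sanity-check sketch uses random data only; nothing in it comes from the paper.

```python
import numpy as np

def L_op(X, A0, A1):      # operator (15)
    return A0 @ X + X @ A0.T + A1 @ X @ A1.T

def L_star(X, A0, A1):    # operator (16)
    return A0.T @ X + X @ A0 + A1.T @ X @ A1

rng = np.random.default_rng(0)
n = 4
A0, A1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = rng.standard_normal((n, n)); X = X + X.T      # symmetric test matrices
Y = rng.standard_normal((n, n)); Y = Y + Y.T
lhs = np.trace(L_op(X, A0, A1) @ Y)               # <L[X], Y> as in (14)
rhs = np.trace(X @ L_star(Y, A0, A1))             # <X, L*[Y]>
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```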
Let $t \mapsto X(t): \mathbb{R}_+ \to \mathcal{S}_n$ be a globally defined solution of (8). From (17), we obtain that
$X(t) = e^{\mathcal{L}^*((k+1)h-t)}\big[X\big(((k+1)h)^-\big)\big] + \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[M]\,ds, \quad t \in (kh, (k+1)h),\ k \in \mathbb{Z}_+.$ (18)
Letting $t \to kh$, $t > kh$ in (18) and substituting the result in (8b), one obtains that the sequence of the jump values $\{X(kh^-)\}_{k \in \mathbb{Z}_+}$ solves the following generalized discrete-time Riccati-type equation (GDTRE):
$Y(k) = \Pi_1[Y(k+1)] + \tilde{M} - \big(\Pi_2[Y(k+1)] + \tilde{L}\big)\big(\Pi_3[Y(k+1)] + \tilde{R}\big)^{-1}\big(\Pi_2[Y(k+1)] + \tilde{L}\big)^\top,$ (19)
where
$\Pi_1[Y] := \sum_{j=0}^{1} A_j^\top e^{\mathcal{L}^* h}[Y] A_j, \qquad \Pi_2[Y] := \sum_{j=0}^{1} A_j^\top e^{\mathcal{L}^* h}[Y] B_j, \qquad \Pi_3[Y] := \sum_{j=0}^{1} B_j^\top e^{\mathcal{L}^* h}[Y] B_j,$ (20)
$\tilde{M} := M_d + \sum_{j=0}^{1} A_j^\top \Big(\int_0^h e^{\mathcal{L}^* s}[M]\,ds\Big) A_j, \qquad \tilde{L} := L_d + \sum_{j=0}^{1} A_j^\top \Big(\int_0^h e^{\mathcal{L}^* s}[M]\,ds\Big) B_j, \qquad \tilde{R} := R_d + \sum_{j=0}^{1} B_j^\top \Big(\int_0^h e^{\mathcal{L}^* s}[M]\,ds\Big) B_j.$ (21)
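All the data in (20)-(21) reduce to two computable objects: the matrix of $e^{\mathcal{L}^* h}$ in vec coordinates and the integral $\int_0^h e^{\mathcal{L}^* s}[M]\,ds$. The following hedged numerical sketch obtains both with Van Loan's augmented-exponential trick and column-major vectorization; the function names are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def vec(X):
    return X.flatten(order='F')          # column-major vectorization

def unvec(v, n):
    return v.reshape((n, n), order='F')

def interval_data(A0, A1, M, h):
    """Matrix of the operator L* of (16) in vec coordinates, together with
    e^{L*h}[.] and the integral int_0^h e^{L*s}[M] ds used in (20)-(21)."""
    n = A0.shape[0]
    I = np.eye(n)
    Lstar = np.kron(I, A0.T) + np.kron(A0.T, I) + np.kron(A1.T, A1.T)
    # Van Loan augmented exponential: top-left block e^{Lstar h},
    # top-right block (int_0^h e^{Lstar s} ds) vec(M)
    aug = np.zeros((n * n + 1, n * n + 1))
    aug[:n * n, :n * n] = Lstar
    aug[:n * n, -1] = vec(M)
    E = expm(aug * h)
    exp_Lstar_h = E[:n * n, :n * n]
    int_term = unvec(E[:n * n, -1], n)   # int_0^h e^{L*s}[M] ds
    return exp_Lstar_h, int_term

def Pi_blocks(Y, exp_Lstar_h, A, B):
    """Operators Pi_1, Pi_2, Pi_3 of (20) evaluated at Y; A = [A0, A1], B = [B0, B1]."""
    n = A[0].shape[0]
    EY = unvec(exp_Lstar_h @ vec(Y), n)  # e^{L*h}[Y]
    P1 = sum(Aj.T @ EY @ Aj for Aj in A)
    P2 = sum(Aj.T @ EY @ Bj for Aj, Bj in zip(A, B))
    P3 = sum(Bj.T @ EY @ Bj for Bj in B)
    return P1, P2, P3
```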
Employing Theorem 2.6.1 from [30], we obtain the following.
Corollary 1.
If $\mathcal{L}$ and $\mathcal{L}^*$ are the linear operators defined by (15) and (16), then both $e^{\mathcal{L} t}$ and $e^{\mathcal{L}^* t}$, $t \ge 0$, are positive operators on the ordered Hilbert space $(\mathcal{S}_n, \mathcal{S}_n^+)$, that is,
$e^{\mathcal{L} t}[Y] \succeq 0 \quad \text{and} \quad e^{\mathcal{L}^* t}[Y] \succeq 0,$
for all $t \ge 0$, whenever $Y \succeq 0$.
Based on (20), we introduce the linear operator $Y \mapsto \Pi[Y]: \mathcal{S}_n \to \mathcal{S}_{n+m}$ defined by
$\Pi[Y] := \begin{pmatrix} \Pi_1[Y] & \Pi_2[Y] \\ \Pi_2^\top[Y] & \Pi_3[Y] \end{pmatrix},$ (22)
for all $Y \in \mathcal{S}_n$. From (20) and (22), one obtains via Corollary 1 that
$\Pi[Y] \succeq 0 \quad \text{whenever } Y \succeq 0.$
This allows us to conclude that the GDTRE (19) is a special case of the relation 5.8 from [31].
The GDTRE (19) is determined by the pair $\Sigma_d = (\Pi, Q)$, where $\Pi: \mathcal{S}_n \to \mathcal{S}_{n+m}$ is the linear operator defined in (22), while
$Q = \begin{pmatrix} \tilde{M} & \tilde{L} \\ \tilde{L}^\top & \tilde{R} \end{pmatrix},$
with $\tilde{M}$, $\tilde{L}$, and $\tilde{R}$ described in (21). To the pair $\Sigma_d$, we associate the following sets of sequences of symmetric matrices:
$\Gamma^{\Sigma_d} := \{Z = \{Z(k)\}_{k \ge 0} \mid \mathcal{D}[Z](k) \succeq 0,\ \Pi_3[Z(k+1)] + \tilde{R} \succeq \gamma_d I_m > 0,\ k \ge 0\},$ (23)
$\tilde{\Gamma}^{\Sigma_d} := \{Z = \{Z(k)\}_{k \ge 0} \mid \mathcal{D}[Z](k) \succeq \hat{\gamma}_d I_{n+m} > 0,\ \text{for all } k \ge 0\},$ (24)
where
$\mathcal{D}[Z](k) := \begin{pmatrix} -Z(k) & 0 \\ 0 & 0 \end{pmatrix} + \Pi[Z(k+1)] + Q.$ (25)
Lemma 1.
(a)
There exists a one-to-one correspondence between the set $\Gamma^{\Sigma}$ of matrix-valued functions $Z(\cdot)$ which satisfy the linear matrix differential inequalities with jumps (9) and the set $\Gamma^{\Sigma_d}$ of sequences of symmetric matrices $Z = \{Z(k)\}_{k \ge 0}$ which satisfy the linear matrix inequalities (23);
(b)
There exists a one-to-one correspondence between the set $\tilde{\Gamma}^{\Sigma}$ of matrix-valued functions which satisfy the linear matrix inequalities (9a) with jumps described by (10) and the set $\tilde{\Gamma}^{\Sigma_d}$ of matrix-valued sequences $Z = \{Z(k)\}_{k \ge 0}$ which satisfy the linear matrix inequalities (24).
Proof. 
$(a)$ If $Z(\cdot)$ satisfies (9a), we define
$\Delta(t) := \dot{Z}(t) + \mathcal{L}^*[Z(t)] + M.$
So (9a) can be expressed as
$\dot{Z}(t) = -\mathcal{L}^*[Z(t)] - M + \Delta(t), \quad kh \le t < (k+1)h,\ k \in \mathbb{Z}_+,$
where
$\Delta(t) \succeq 0, \quad \text{for all } t \in \mathbb{R}_+.$
Hence, if $Z(\cdot)$ satisfies (9a), it also satisfies
$Z(t) = e^{\mathcal{L}^*((k+1)h-t)}\big[Z\big(((k+1)h)^-\big)\big] + \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[M]\,ds - \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[\Delta(s)]\,ds, \quad kh \le t < (k+1)h,\ k \in \mathbb{Z}_+.$ (26)
Letting $t \to kh$, $t > kh$ in (26) and substituting the result in (9b) and in (9c), we obtain via (20) and (21) that the sequence of the jump values $\{Z(kh^-)\}_{k \in \mathbb{Z}_+}$ lies in $\Gamma^{\Sigma_d}$ because
$\int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[\Delta(s)]\,ds \succeq 0, \quad \text{for all } k \in \mathbb{Z}_+.$
Conversely, if $Y = \{Y(k)\}_{k \ge 0}$ lies in $\Gamma^{\Sigma_d}$, we define
$Z(t) := e^{\mathcal{L}^*((k+1)h-t)}[Y(k+1)] + \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[M]\,ds,$ (27)
for all $kh \le t < (k+1)h$ and any $k \in \mathbb{Z}_+$. Letting $t \to (k+1)h$, $t < (k+1)h$ in (27), we obtain that
$Z\big(((k+1)h)^-\big) = Y(k+1), \quad \text{for all } k \in \mathbb{Z}_+.$ (28)
Differentiating (27) with respect to $t$, we obtain
$\dot{Z}(t) + \mathcal{L}^*[Z(t)] + M = 0, \quad kh \le t < (k+1)h.$
Employing (20), (23), (25), and (27) written for $t = kh$ with (28) written for $k$ replaced by $k-1$, we obtain that $Z(\cdot)$ satisfies (9b) and (9c). Hence, $Z(\cdot)$ defined in (27) lies in $\Gamma^{\Sigma}$. Thus, the proof of $(a)$ is complete.
Part ( b ) may be proved in a similar manner. □
Particularizing Definition 4.7 from [31] on the stabilizability of a sequence of linear operators to the case of the linear operator $\Pi$ defined by (20) and (22), we obtain the following.
Definition 5.
The linear operator $\Pi: \mathcal{S}_n \to \mathcal{S}_{n+m}$ is stabilizable if there exists a matrix $F \in \mathbb{R}^{m \times n}$ with the property that the discrete-time linear equation on the Hilbert space $\mathcal{S}_n$
$S(k+1) = \Pi_F^*[S(k)],$ (29)
is exponentially stable. Here, $\Pi_F^*[\cdot]$ is the adjoint, with respect to the inner product (14), of the linear operator $X \mapsto \Pi_F[X]: \mathcal{S}_n \to \mathcal{S}_n$, where
$\Pi_F[X] := \begin{pmatrix} I_n \\ F \end{pmatrix}^\top \Pi[X] \begin{pmatrix} I_n \\ F \end{pmatrix}.$ (30)
By direct calculation, involving (20), (22), (29), and (30), we obtain the following.
Corollary 2.
For the linear operator specified by (20) and (22), we have the following equivalences:
(i)
The operator Π is stabilizable (in the framework of Definition 5);
(ii)
There exists a matrix $F \in \mathbb{R}^{m \times n}$ with the property that the discrete-time linear equation on the Hilbert space $\mathcal{S}_n$
$S(k+1) = \sum_{j=0}^{1} e^{\mathcal{L} h}\big[(A_j + B_j F)\, S(k)\, (A_j + B_j F)^\top\big],$ (31)
is exponentially stable.
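Corollary 2 (ii) reduces stabilizability to a spectral-radius test: the discrete-time equation (31) is exponentially stable exactly when its matrix representation in vec coordinates has spectral radius smaller than one. A hedged Python sketch for a given candidate gain F (column-major vectorization, consistent with the earlier sketch) is:

```python
import numpy as np
from scipy.linalg import expm

def is_esms_closed_loop(A0, A1, B0, B1, F, h, tol=1e-9):
    """Spectral-radius test for Equation (31):
    S(k+1) = sum_j e^{Lh}[(A_j + B_j F) S(k) (A_j + B_j F)'] is exponentially
    stable iff the vectorized transition matrix has spectral radius < 1."""
    n = A0.shape[0]
    I = np.eye(n)
    Lmat = np.kron(I, A0) + np.kron(A0, I) + np.kron(A1, A1)   # matrix of L in (15)
    T = np.zeros((n * n, n * n))
    for Aj, Bj in ((A0, B0), (A1, B1)):
        C = Aj + Bj @ F
        T += np.kron(C, C)                  # vec(C S C') = (C kron C) vec(S)
    T = expm(Lmat * h) @ T
    rho = np.max(np.abs(np.linalg.eigvals(T)))
    return rho < 1.0 - tol, rho
```

Note that this sketch only tests a given F; it does not search for a stabilizing gain.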
Invoking Proposition 3.1 and Proposition 3.3 from [28] in the case of the JSLDE (13), we obtain, via Corollary 2 and Definitions 4 and 5, the following.
Corollary 3.
The ICLSS (1) is mean square stabilizable by the linear state feedback if and only if the linear operator $\Pi$ associated with it via (20) and (22) is stabilizable.
The following result establishes necessary and sufficient conditions for the existence of the maximal and bounded solution of the BJMLDE (8).
Theorem 1.
If the ICLSS (1) is mean square stabilizable by the linear state feedback, then we have the following equivalences:
(i)
The set $\Gamma^{\Sigma}$ is not empty;
(ii)
The BJMLDE (8) with a Riccati-type jumping operator has a unique maximal and bounded solution $X_{max}: \mathbb{R}_+ \to \mathcal{S}_n$ which satisfies the sign condition
$R_d + \sum_{j=0}^{1} B_j^\top X_{max}(kh) B_j \succeq \nu I_m > 0,$ (32)
for all $k \in \mathbb{Z}_+$. Furthermore, $X_{max}(\cdot)$ is a periodic function of period $h$.
Proof. 
The implication $(ii) \Rightarrow (i)$ follows immediately because, according to Remark 1, the maximal solution, if any, lies in $\Gamma^{\Sigma}$.
In order to prove the implication $(i) \Rightarrow (ii)$, let us remark that Lemma 1 $(a)$ together with Corollary 3 guarantees that, in the case of the GDTRE (19), the assumptions of Theorem 5.3 in [31] are fulfilled. Hence, under the considered assumptions, the GDTRE (19) has a unique maximal solution $Y_{max} \in \mathcal{S}_n$ which satisfies the sign condition
$\Pi_3[Y_{max}] + \tilde{R} > 0.$ (33)
For each $k \ge 0$ and $t \in [kh, (k+1)h)$, we define
$X_{max}(t) := e^{\mathcal{L}^*((k+1)h-t)}[Y_{max}] + \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[M]\,ds.$ (34)
So,
$X_{max}\big(((k+1)h)^-\big) = \lim_{t \to (k+1)h,\, t < (k+1)h} X_{max}(t) = Y_{max}, \quad \text{for all } k = 0, 1, \ldots.$
One shows that (34) defines a periodic function of period $h$ and that (33) becomes (32).
Finally, employing (20), (21), and (34), one sees that $X_{max}(\cdot)$ is a solution of the BJMLDE (8). Employing again Lemma 1 $(a)$, one obtains that $X_{max}(\cdot)$ defined in (34) is precisely the unique bounded and maximal solution of the BJMLDE (8). Thus, the proof ends. □
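Once a candidate $Y_{max}$ is available (for instance, from the LMI computation sketched after Remark 2 below), formula (34) evaluates $X_{max}$ anywhere on $[kh, (k+1)h)$; by $h$-periodicity only $t \bmod h$ matters. The helper below is a hedged sketch reusing vec, unvec, and interval_data from the sketch following (21).

```python
def X_max_at(t, Y_max, A0, A1, M, h):
    """Evaluate (34): X_max(t) = e^{L*((k+1)h - t)}[Y_max] + int_t^{(k+1)h} e^{L*(s-t)}[M] ds."""
    n = A0.shape[0]
    r = h - (t % h)                              # remaining time until the next jump instant
    exp_op, integ = interval_data(A0, A1, M, r)  # e^{L* r} and int_0^r e^{L* s}[M] ds
    return unvec(exp_op @ vec(Y_max), n) + integ
```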
Theorem 2.
The following are equivalent:
(i)
The ICLSS (1) is mean square stabilizable by the linear state feedback, and the set $\tilde{\Gamma}^{\Sigma}$ is not empty;
(ii)
The BJMLDE with a Riccati-type jumping operator (8) has a unique bounded and stabilizing solution $X_s(\cdot): \mathbb{R}_+ \to \mathcal{S}_n$ which satisfies the sign condition
$R_d + \sum_{j=0}^{1} B_j^\top X_s(kh) B_j \succeq \tilde{\nu} I_m > 0,$ (35)
for all $k \in \mathbb{Z}_+$. Further, $X_s(\cdot)$ is a periodic function of period $h$, and it coincides with the maximal and bounded solution $X_{max}(\cdot)$ of (8).
Proof. 
Invoking Lemma 1 $(b)$ together with Corollary 3, we see that we can apply the implication $(i) \Rightarrow (ii)$ of Theorems 5.5 and 5.6 from [31] to deduce that the GDTRE (19) has a stabilizing solution $Y_s$ which satisfies the sign condition
$\tilde{R} + \Pi_3[Y_s] > 0.$
We define
$X_s(t) := e^{\mathcal{L}^*((k+1)h-t)}[Y_s] + \int_t^{(k+1)h} e^{\mathcal{L}^*(s-t)}[M]\,ds,$
for any $k = 0, 1, \ldots$ and all $t \in [kh, (k+1)h)$.
One shows that $X_s(\cdot)$ defined as above is precisely the stabilizing solution of (8). Thus, the proof of the implication $(i) \Rightarrow (ii)$ ends.
To prove $(ii) \Rightarrow (i)$, one takes into account that if $X_s(\cdot)$ is the stabilizing and bounded solution of (8), then the sequence of its jump values $\{X_s(kh^-)\}_{k \ge 0}$ is the stabilizing solution of the GDTRE (19). Reasoning as in the implication $(ii) \Rightarrow (i)$ of Theorem 5.6 from [31], we obtain that the operator $\Pi$ is stabilizable (in the framework of Definition 5) and that the set $\tilde{\Gamma}^{\Sigma_d}$ is not empty.
Invoking again Lemma 1 $(b)$ together with Corollary 3, we may conclude that the set $\tilde{\Gamma}^{\Sigma}$ is not empty and that the ICLSS (1) is mean square stabilizable by the linear state feedback. □
Remark 2.
According to the equivalence $(i) \Leftrightarrow (iii)$ of Proposition 5.2 from [31], applied in the particular case of the GDTRE (19), if the set $\tilde{\Gamma}^{\Sigma_d}$ is not empty, then it necessarily contains constant sequences. That is why, in view of Lemma 1 $(b)$, in order to test whether the set $\tilde{\Gamma}^{\Sigma}$ is not empty, it is sufficient to check whether the linear matrix inequality (LMI) (24)-(25) has a solution which does not depend upon $k \in \mathbb{Z}_+$.
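Remark 2 suggests a direct computational test: look for a constant $Z$ satisfying the LMI (24)-(25). The prototype below goes one step further and maximizes $\mathrm{Tr}(Y)$ over the non-strict constant-sequence LMI set (23), which, by analogy with the SDP characterizations of maximal Riccati solutions in [14,31] and under the assumptions of Theorem 1, should return $Y_{max}$. It relies on cvxpy (an assumption about the available tooling) and on interval_data from the earlier sketch; treat it as a hedged prototype, not the paper's algorithm.

```python
import numpy as np
import cvxpy as cp

def maximal_solution_sdp(exp_Lstar_h, int_term, A, B, Md, Ld, Rd, eps=1e-6):
    """Prototype SDP: maximize tr(Y) over the constant-sequence LMI set (23)-(25).
    exp_Lstar_h, int_term come from interval_data(); A = [A0, A1], B = [B0, B1]."""
    n, m = B[0].shape
    Y = cp.Variable((n, n), symmetric=True)
    # e^{L*h}[Y] as an affine expression (column-major vectorization by hand)
    vec_Y = cp.hstack([Y[:, j] for j in range(n)])
    vec_EY = exp_Lstar_h @ vec_Y
    EY = cp.vstack([vec_EY[j * n:(j + 1) * n] for j in range(n)]).T
    Mt = Md + sum(Aj.T @ int_term @ Aj for Aj in A)                  # (21)
    Lt = Ld + sum(Aj.T @ int_term @ Bj for Aj, Bj in zip(A, B))
    Rt = Rd + sum(Bj.T @ int_term @ Bj for Bj in B)
    P1 = sum(Aj.T @ EY @ Aj for Aj in A)                             # (20)
    P2 = sum(Aj.T @ EY @ Bj for Aj, Bj in zip(A, B))
    P3 = sum(Bj.T @ EY @ Bj for Bj in B)
    D = cp.bmat([[-Y + P1 + Mt, P2 + Lt],
                 [(P2 + Lt).T,  P3 + Rt]])                           # D[Y] of (25)
    constraints = [(D + D.T) / 2 >> 0,                               # symmetrized PSD constraints
                   (P3 + Rt + (P3 + Rt).T) / 2 >> eps * np.eye(m)]
    cp.Problem(cp.Maximize(cp.trace(Y)), constraints).solve()
    return Y.value
```

If the strict version of the LMI (with a positive margin on D) is also feasible, Theorem 2 indicates that the computed solution is, in addition, stabilizing.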

4. Well Posedness and Attainability of the Optimal Control Problem

First, we present an auxiliary result.
Lemma 2.
Let $X(\cdot): \mathcal{I}_X \subset \mathbb{R}_+ \to \mathcal{S}_n$ be a solution of the BJMLDE (8). For each $[t_0, T] \subset \mathcal{I}_X$, we have
$\int_{t_0}^{T} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[x_u^\top(kh) M_d x_u(kh) + 2 x_u^\top(kh) L_d u(k) + u^\top(k) R_d u(k)\big] = x_0^\top X(t_0^-) x_0 - \mathbb{E}[x_u^\top(T) X(T) x_u(T)] + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[(u(k) - F_X(kh) x_u(kh))^\top \mathcal{R}(X(kh)) (u(k) - F_X(kh) x_u(kh))\big],$ (38)
where $\kappa(T)$ follows the definition provided in (3) but with $t_0$ replaced by $T$, and
$\mathcal{R}(X(kh)) := R_d + \sum_{j=0}^{1} B_j^\top X(kh) B_j,$ (39)
and
$F_X(kh) := -\big(\mathcal{R}(X(kh))\big)^{-1}\Big(\sum_{j=0}^{1} B_j^\top X(kh) A_j + L_d^\top\Big).$ (40)
In (38), $x_0^\top X(t_0^-) x_0$ reduces to $x_0^\top X(t_0) x_0$ whenever $t_0 < \kappa(t_0)h$.
Proof. 
Employing the Itô formula (see [27]) for $V(t, x) = x^\top X(t) x$ and the stochastic process $x_u(t)$ satisfying (1a), on intervals of the form $[a_p, b_p] \subset [kh, (k+1)h]$ if $\kappa(t_0) \le k \le \kappa(T) - 2$, or $[a_p, b_p] \subset [(\kappa(T)-1)h, T]$, and letting $a_p \to kh$, $b_p \to (k+1)h$, or $a_p \to (\kappa(T)-1)h$ and $b_p \to T$, we obtain via (8a) that
$\int_{kh}^{(k+1)h} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt = \mathbb{E}[x_u^\top(kh^+) X(kh) x_u(kh^+)] - \mathbb{E}\big[x_u^\top((k+1)h) X\big(((k+1)h)^-\big) x_u((k+1)h)\big],$ (41)
for $\kappa(t_0) \le k \le \kappa(T) - 2$, and
$\int_{(\kappa(T)-1)h}^{T} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt = \mathbb{E}\big[x_u^\top((\kappa(T)-1)h^+) X((\kappa(T)-1)h) x_u((\kappa(T)-1)h^+)\big] - \mathbb{E}[x_u^\top(T) X(T) x_u(T)].$ (42)
From (1b), we obtain that
$\mathbb{E}[x_u^\top(kh^+) X(kh) x_u(kh^+)] = \sum_{j=0}^{1} \mathbb{E}\big[(A_j x_u(kh) + B_j u(k))^\top X(kh) (A_j x_u(kh) + B_j u(k))\big].$
After some computations involving (8b) and the completion of squares technique, we obtain
$\mathbb{E}[x_u^\top(kh^+) X(kh) x_u(kh^+)] = \mathbb{E}[x_u^\top(kh) X(kh^-) x_u(kh)] + \mathbb{E}\big[(u(k) - F_X(kh) x_u(kh))^\top \mathcal{R}(X(kh)) (u(k) - F_X(kh) x_u(kh))\big] - \mathbb{E}\big[x_u^\top(kh) M_d x_u(kh) + 2 x_u^\top(kh) L_d u(k) + u^\top(k) R_d u(k)\big],$ (43)
for $\kappa(t_0) \le k \le \kappa(T) - 1$. Substituting (43) in (41) and (42) and summing from $k = \kappa(t_0)$ to $k = \kappa(T) - 1$, we obtain
$\int_{\kappa(t_0)h}^{T} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[x_u^\top(kh) M_d x_u(kh) + 2 x_u^\top(kh) L_d u(k) + u^\top(k) R_d u(k)\big] = \mathbb{E}\big[x_u^\top(\kappa(t_0)h) X(\kappa(t_0)h^-) x_u(\kappa(t_0)h)\big] - \mathbb{E}[x_u^\top(T) X(T) x_u(T)] + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[(u(k) - F_X(kh) x_u(kh))^\top \mathcal{R}(X(kh)) (u(k) - F_X(kh) x_u(kh))\big].$ (44)
If $t_0 = \kappa(t_0)h$, then (44) is just (38), and in this case the proof is complete.
If $(\kappa(t_0) - 1)h < t_0 < \kappa(t_0)h$, we apply Itô's formula to $V(t, x) = x^\top X(t) x$ and to the stochastic process $x_u(t)$, $t \in [t_0, \kappa(t_0)h]$, and we obtain via (8a) that
$\int_{t_0}^{\kappa(t_0)h} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt = x_0^\top X(t_0) x_0 - \mathbb{E}\big[x_u^\top(\kappa(t_0)h) X(\kappa(t_0)h^-) x_u(\kappa(t_0)h)\big].$
Adding this equality to (44), we obtain (38). Thus, the proof ends. □
The next result provides a sufficient condition for the attainability of the optimal control problem under consideration.
Proposition 2.
Assume that the BJMLDE with a Riccati-type jumping operator (8) has a bounded and stabilizing solution $X_s(\cdot): \mathbb{R}_+ \to \mathcal{S}_n$ satisfying the sign condition (35).
Let us consider the control law $\tilde{u} = \{\tilde{u}(k)\}_{k \ge \kappa(t_0)}$ defined by
$\tilde{u}(k) := F_s(kh)\, \tilde{x}(kh), \quad k \ge \kappa(t_0),$ (45)
where $F_s(kh)$ are the gain matrices associated to the stabilizing solution $X_s(\cdot)$ via (12), and $\tilde{x}(kh)$, $kh \ge t_0$, are the values of the solution of the IVP (11).
Under these conditions, the control defined by (45) lies in $\mathcal{U}_{ad}(t_0, x_0)$ and satisfies the optimality condition (5). The minimal value of the cost functional (4) is
$J(t_0, x_0; \tilde{u}) = x_0^\top X_s(t_0^-) x_0,$
with the remark that $X_s(t_0^-) = X_s(t_0)$ whenever $t_0 < \kappa(t_0)h$.
Proof. 
The fact that the control (45) satisfies the constraints (2) follows immediately from Definition 3 of the stabilizing solution. Since $X_s(\cdot)$ is a bounded function, there exists $\tilde{c} > 0$ with the property that
$|X_s(T)| \le \tilde{c} \quad \text{and} \quad |X_s(T^-)| \le \tilde{c},$
for all $T \ge 0$. Employing (2b), we may infer that
$\lim_{T \to \infty} \mathbb{E}[x_u^\top(T) X_s(T) x_u(T)] = 0,$ (46)
for all $u \in \mathcal{U}_{ad}(t_0, x_0)$ and any $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$.
Writing (38) with $X(\cdot)$ replaced by $X_s(\cdot)$, we obtain via (2a) and (46) that
$J(t_0, x_0; u) = x_0^\top X_s(t_0^-) x_0 + \sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}\big[(u(k) - F_s(kh) x_u(kh))^\top \mathcal{R}(X_s(kh)) (u(k) - F_s(kh) x_u(kh))\big],$ (47)
for all $u \in \mathcal{U}_{ad}(t_0, x_0)$ and $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. By the uniqueness of the solution of the IVP (1) driven by a control $u \in \mathcal{U}_{ad}(t_0, x_0)$, we deduce that, in the case of the control (45), we have
$x_{\tilde{u}}(t) = \tilde{x}(t),$
for all $t \ge t_0$, $\tilde{x}(\cdot)$ being the solution of (11) which satisfies
$\tilde{x}(t_0) = x_0.$
This allows us to conclude that, in the case of the control (45), the equality (47) gives
$J(t_0, x_0; \tilde{u}) = x_0^\top X_s(t_0^-) x_0.$ (48)
From (35), (47), and (48), we conclude that the control (45) satisfies the optimality condition (5), and (48) provides the minimal value of the cost functional (4) over the set of admissible controls. Thus, the proof ends. □
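Since $X_s(\cdot)$ is $h$-periodic, the gain (12) is the same at every impulse instant; it only requires $X_s(kh) = e^{\mathcal{L}^* h}[Y_s] + \int_0^h e^{\mathcal{L}^* s}[M]\,ds$, where $Y_s$ denotes the jump value $X_s(kh^-)$. A hedged helper, reusing vec, unvec, and interval_data from the earlier sketches, follows.

```python
import numpy as np

def stabilizing_gain(Y_s, A0, A1, M, A, B, Ld, Rd, h):
    """Constant feedback gain (12) built from the stabilizing solution of (8).
    Y_s is the jump value X_s(kh^-); A = [A0, A1], B = [B0, B1]."""
    n = A0.shape[0]
    exp_op, integ = interval_data(A0, A1, M, h)
    X_kh = unvec(exp_op @ vec(Y_s), n) + integ          # X_s(kh)
    S3 = Rd + sum(Bj.T @ X_kh @ Bj for Bj in B)
    S2 = Ld + sum(Aj.T @ X_kh @ Bj for Aj, Bj in zip(A, B))
    return -np.linalg.solve(S3, S2.T)                   # F_s = -(S3)^{-1} (S2)'
```

Feeding $u(k) = F_s\, x(kh)$ into the simulation sketch of Section 2 and Monte Carlo-averaging the truncated cost should approximately reproduce $x_0^\top X_s(t_0^-) x_0$, in line with (48).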
The next theorem provides a sufficient condition for the well posedness of the optimal control problem under consideration.
Theorem 3.
Assume the following:
(a)
The ICLSS (1) is mean square stabilizable by the linear state feedback;
(b)
The set $\Gamma^{\Sigma}$ of the functions $Z(\cdot)$ which satisfy the constraints (9) is not empty.
Under these conditions, the optimal control problem described by the performance criterion (4), the controlled system (1), and the set of admissible controls $\mathcal{U}_{ad}(t_0, x_0)$ is well posed. Furthermore,
$v(t_0, x_0) = x_0^\top X_{max}(t_0^-) x_0,$ (49)
for all $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$, where $X_{max}(\cdot)$ is the unique maximal and bounded solution of the BJMLDE (8) which satisfies the sign condition (32). In (49), $X_{max}(t_0^-)$ coincides with $X_{max}(t_0)$ whenever $(\kappa(t_0) - 1)h < t_0 < \kappa(t_0)h$.
Proof. 
If assumption $(a)$ holds, then the set $\mathcal{U}_{ad}(t_0, x_0)$ is not empty for any initial pair $(t_0, x_0)$. Further, Theorem 1 guarantees the existence of the maximal and bounded solution $X_{max}(\cdot)$, which satisfies the sign condition (32) and is a periodic function of period $h$. Applying Lemma 2 with $X_{max}(\cdot)$ instead of $X(\cdot)$, we obtain
$\int_{t_0}^{T} \mathbb{E}[x_u^\top(t) M x_u(t)]\,dt + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[x_u^\top(kh) M_d x_u(kh) + 2 x_u^\top(kh) L_d u(k) + u^\top(k) R_d u(k)\big] = x_0^\top X_{max}(t_0^-) x_0 - \mathbb{E}[x_u^\top(T) X_{max}(T) x_u(T)] + \sum_{k=\kappa(t_0)}^{\kappa(T)-1} \mathbb{E}\big[(u(k) - F_{max}(kh) x_u(kh))^\top \mathcal{R}(X_{max}(kh)) (u(k) - F_{max}(kh) x_u(kh))\big],$ (50)
for all $T > \kappa(t_0)h$ and any $u \in \mathcal{U}_{ad}(t_0, x_0)$, for any $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. Here, $F_{max}(kh)$ is computed as in (40) with $X_{max}(\cdot)$ instead of $X(\cdot)$. Since $X_{max}(\cdot)$ is a bounded function, there exists $\hat{c} > 0$ such that
$|X_{max}(T)| \le \hat{c} \quad \text{and} \quad |X_{max}(T^-)| \le \hat{c},$
for all $T \ge 0$. Based on (2b), we may infer that
$\lim_{T \to \infty} \mathbb{E}[x_u^\top(T) X_{max}(T) x_u(T)] = 0,$ (51)
for all $u \in \mathcal{U}_{ad}(t_0, x_0)$ and an arbitrary initial pair $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. Further, taking the limit for $T \to \infty$ in (50), we obtain via (2a) and (51) that
$J(t_0, x_0; u) = x_0^\top X_{max}(t_0^-) x_0 + \sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}\big[(u(k) - F_{max}(kh) x_u(kh))^\top \mathcal{R}(X_{max}(kh)) (u(k) - F_{max}(kh) x_u(kh))\big],$ (52)
for all $u = \{u(k)\}_{k \ge \kappa(t_0)} \in \mathcal{U}_{ad}(t_0, x_0)$ and arbitrary $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. Bearing in mind (32), we may deduce from (52) that
$J(t_0, x_0; u) \ge x_0^\top X_{max}(t_0^-) x_0,$ (53)
for all $u \in \mathcal{U}_{ad}(t_0, x_0)$. From (53) and (6), we may conclude that
$v(t_0, x_0) \ge x_0^\top X_{max}(t_0^-) x_0,$ (54)
for all $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. Thus, from Definition 1 $(a)$, it follows that the optimal control problem under consideration is well posed.
To end the proof, it remains to show that equality holds in (54). To accomplish this, we consider a sequence $\{\delta_q\}_{q \ge 1}$ with the following properties:
$\delta_q \ge \delta_{q+1} > 0 \quad \text{and} \quad \lim_{q \to \infty} \delta_q = 0.$
We consider the following perturbation of (4):
$J_{\delta_q}(t_0, x_0; u) = J(t_0, x_0; u) + \delta_q \sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}[|x_u(kh)|^2],$ (55)
for all $u = \{u(k)\}_{k \ge \kappa(t_0)} \in \mathcal{U}_{ad}(t_0, x_0)$.
The BJMLDE of type (8) associated to the pair formed by the ICLSS (1) and the quadratic functional (55) is
$\dot{X}(t) + \mathcal{L}^*[X(t)] + M = 0, \quad kh \le t < (k+1)h,$
$X(kh^-) = \mathcal{R}_d(X(kh)) + \delta_q I_n, \quad k \in \mathbb{Z}_+,$ (56)
where $\mathcal{L}^*[\cdot]$ is introduced in (16), while $\mathcal{R}_d(X(kh))$ denotes the right-hand side of (8b).
The accompanying GDTRE of the form (19) associated to (56) is
$Y(k) = \Pi_1[Y(k+1)] + \tilde{M} + \delta_q I_n - \big(\Pi_2[Y(k+1)] + \tilde{L}\big)\big(\Pi_3[Y(k+1)] + \tilde{R}\big)^{-1}\big(\Pi_2[Y(k+1)] + \tilde{L}\big)^\top,$ (57)
where $\Pi_j[\cdot]$, $j = 1, 2, 3$, are defined as in (20) and $\tilde{M}$, $\tilde{L}$, $\tilde{R}$ are those computed in (21). Hence, the GDTRE (57) is described by the pair
$\Sigma_d^{\delta_q} = (\Pi, Q_{\delta_q}),$
where $\Pi: \mathcal{S}_n \to \mathcal{S}_{n+m}$ is the operator defined in (22) and
$Q_{\delta_q} = \begin{pmatrix} \tilde{M} + \delta_q I_n & \tilde{L} \\ \tilde{L}^\top & \tilde{R} \end{pmatrix}, \quad q \ge 1.$
Let $\tilde{\Gamma}^{\Sigma_d^{\delta_q}}$ be the set of sequences of symmetric matrices $Z = \{Z(k)\}_{k \ge 0}$ which satisfy
$\mathcal{D}^{\delta_q}[Z](k) \succeq \gamma I_{n+m} > 0,$ (59)
for all $k \ge 0$, where
$\mathcal{D}^{\delta_q}[Z](k) := \begin{pmatrix} -Z(k) & 0 \\ 0 & 0 \end{pmatrix} + \Pi[Z(k+1)] + Q_{\delta_q}.$ (60)
From Lemma 1 $(a)$, together with (23), (59), and (60), we may infer that if the set $\Gamma^{\Sigma}$ is not empty, then the set $\tilde{\Gamma}^{\Sigma_d^{\delta_q}}$ associated to the BJMLDE (56) is not empty. Invoking the implication $(i) \Rightarrow (ii)$ of Theorem 2, we may conclude that the BJMLDE (56) has a stabilizing solution $X_s^{\delta_q}(\cdot)$ which satisfies the sign condition
$R_d + \sum_{j=0}^{1} B_j^\top X_s^{\delta_q}(kh) B_j \succeq \gamma_1 I_m > 0,$
for all $k \in \mathbb{Z}_+$. One proves via Theorem 5.4 from [31] that
$X_s^{\delta_q}(t) \succeq X_s^{\delta_{q+1}}(t) \succeq X_{max}(t), \quad \text{for all } q \ge 1, \qquad \text{and} \qquad \lim_{q \to \infty} X_s^{\delta_q}(t) = X_{max}(t), \quad t \in \mathbb{R}_+.$ (61)
Let $F_s^{\delta_q}(t)$ be the stabilizing gain matrix associated to the solution $X_s^{\delta_q}(\cdot)$ as in (12). We consider the control law
$\tilde{u}^{\delta_q}(k) := F_s^{\delta_q}(kh)\, \tilde{x}^{\delta_q}(kh),$ (62)
$\tilde{x}^{\delta_q}(\cdot)$ being the solution of the IVP obtained when the control (62) is plugged into (1). Applying Proposition 2 to the optimal control problem described by the cost functional (55), we obtain via (52) and (55) that
$x_0^\top X_{max}(t_0^-) x_0 \le v(t_0, x_0) \le J(t_0, x_0; \tilde{u}^{\delta_q}) \le J_{\delta_q}(t_0, x_0; \tilde{u}^{\delta_q}) = x_0^\top X_s^{\delta_q}(t_0^-) x_0,$ (63)
for all $q \ge 1$. Taking the limit for $q \to \infty$ in (63), we obtain via (61) that
$v(t_0, x_0) = x_0^\top X_{max}(t_0^-) x_0,$
which completes the proof. □
The next result provides a necessary and sufficient condition for the attainability of the optimal control problem under consideration.
Theorem 4.
Under the assumptions of Theorem 3, the following are equivalent:
(i)
The optimal control problem described by the cost functional (4), the controlled system (1), and the set of admissible controls $\mathcal{U}_{ad}(t_0, x_0)$ is attainable.
(ii)
The unique bounded and maximal solution of the BJMLDE (8) is also a stabilizing solution of the BJMLDE (8).
Proof. 
The implication $(ii) \Rightarrow (i)$ follows directly from Proposition 2. It remains to prove the implication $(i) \Rightarrow (ii)$. If the assumptions of Theorem 3 hold, then the BJMLDE (8) has a maximal and bounded solution $X_{max}(\cdot)$ which satisfies the sign condition (32).
Let $\tilde{u} \in \mathcal{U}_{ad}(t_0, x_0)$ satisfy (7). Then, from (7), (49), and (52), we deduce that
$\sum_{k=\kappa(t_0)}^{\infty} \mathbb{E}\big[(\tilde{u}(k) - F_{max}(kh) x_{\tilde{u}}(kh))^\top \mathcal{R}(X_{max}(kh)) (\tilde{u}(k) - F_{max}(kh) x_{\tilde{u}}(kh))\big] = 0.$ (64)
Bearing in mind (32), we may infer that (64) holds if and only if
$\tilde{u}(k) = F_{max}(kh) x_{\tilde{u}}(kh) \quad \text{a.s.},$ (65)
for all $k \ge \kappa(t_0)$. Moreover,
$\lim_{t \to \infty} \mathbb{E}[|x_{\tilde{u}}(t)|^2] = 0,$ (66)
because $\tilde{u} \in \mathcal{U}_{ad}(t_0, x_0)$. Taking into account that $X_{max}(\cdot)$ is a periodic function of period $h$, it follows that
$F_{max}(kh) = F_{max}(0),$
for all $k \in \mathbb{Z}_+$. Hence, (65) becomes
$\tilde{u}(k) = F_{max}(0) x_{\tilde{u}}(kh) \quad \text{a.s.},$ (67)
for all $k \in \mathbb{Z}_+$. By the uniqueness of the solution of an IVP, we deduce that $x_{\tilde{u}}(\cdot)$ coincides with $x(\cdot; t_0, x_0)$, where $x(\cdot; t_0, x_0)$ is the solution of the jump linear stochastic system (JLSS)
$dx(t) = A_0 x(t)\,dt + A_1 x(t)\,dw(t), \quad kh < t \le (k+1)h,$
$x(kh^+) = (A_0 + B_0 F_{max}(0)) x(kh) + w_d(k)(A_1 + B_1 F_{max}(0)) x(kh),$
$x(t_0) = x_0,$ (68)
obtained when (67) is substituted in (1). Thus, (66) yields
$\lim_{t \to \infty} \mathbb{E}[|x(t; t_0, x_0)|^2] = 0,$
for all $(t_0, x_0) \in \mathbb{R}_+ \times \mathbb{R}^n$. This means that the JLSS (68) is asymptotically stable in the mean square (ASMS).
Reasoning as in the proofs of Proposition 2 and Proposition 3 from [28], one may prove that the JLSS (68) is ASMS if and only if the companion discrete-time linear equation (DTLE)
$Z(k+1) = \mathcal{L}_d[Z(k)],$ (69)
is asymptotically stable, where
$\mathcal{L}_d[Z] := \sum_{j=0}^{1} e^{\mathcal{L} h}\big[(A_j + B_j F_{max}(0))\, Z\, (A_j + B_j F_{max}(0))^\top\big].$
Since the DTLE (69) is a linear time-invariant equation, it is asymptotically stable if and only if it is exponentially stable. According to Remark 3 from [28], we deduce that (69) is exponentially stable if and only if the JLSS (68) is ESMS. Thus, the proof ends. □

5. Conclusions

This paper has contributed to the further development of the LQ control problem for stochastic systems controlled by impulses. A sufficient condition for the well posedness of the LQ control problem under consideration is expressed in terms of the existence of a maximal solution satisfying a suitable sign condition of the BJMLDE with a Riccati-type jumping operator constructed based on the matrix coefficients of the controlled system and the weight matrices of the performance criterion. When the condition for the well posedness is fulfilled, the LQ control problem is attainable if and only if the maximal solution of the BJMLDE with a Riccati-type jumping operator is also a stabilizing solution.
According to the terminology introduced in [32] for the deterministic framework, the problem analyzed in the present work is a generalization to the stochastic case of a fixed-endpoint LQ control problem. A remaining challenge for future research is the analysis of the well-posedness and attainability of a free-endpoint LQ control problem for a stochastic system controlled by impulses.
Another challenge for future research is the investigation of the well-posedness and attainability of an LQ optimal control problem for a stochastic system controlled by impulses in which the length of the intervals between two consecutive impulse instants is not constant, being instead time varying or even driven by a stochastic process.
The derivation of efficient numerical procedures for the computation of the maximal solution and of the stabilizing solution (if any) of the BJMLDE with a Riccati-type jumping operator associated to the control problems under consideration is also needed in order to illustrate the applicability of the theoretical developments, both in the constant dwell time case and in the time-varying dwell time case.

Author Contributions

Conceptualization, V.D. and I.-L.P.; methodology, V.D. and I.-L.P.; investigation, V.D. and I.-L.P.; writing—original draft preparation, V.D. and I.-L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, T. Impulsive Control Theory; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  2. Yang, T. Impulsive Systems and Control: Theory and Applications; Nova Science Publishers: Hauppauge, NY, USA, 2001. [Google Scholar]
  3. Chen, X.; Luo, S.; Chen, W.H. Moment observability and output feedback stabilisation for linear stochastic impulsive systems. Int. J. Syst. Sci. 2023, 54, 1015–1032. [Google Scholar] [CrossRef]
  4. Stamova, I.; Stamov, T.; Li, X. Global exponential stability of a class of impulsive cellular neural networks with supremums. Internat. J. Adapt. Control Signal Process 2015, 28, 1227–1239. [Google Scholar] [CrossRef]
  5. Morris, K. Linear-Quadratic Optimal Actuator Location. IEEE Trans. Automat. Contr. 2011, 56, 113–124. [Google Scholar] [CrossRef]
  6. Ji, X.; Lu, J.; Li, X. Pinning Impulsive Synchronization of Complex Dynamical Networks: A Stabilizing Delay Perspective. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 3091–3095. [Google Scholar] [CrossRef]
  7. Newman, C.; Costanza, V. Deterministic impulse control in native forest ecosystems management. J. Optim. Theory Appl. 1990, 66, 173–196. [Google Scholar] [CrossRef]
  8. Korn, R. Some applications of impulse control in mathematical finance. Math. Methods Oper. Res. 1999, 50, 493–518. [Google Scholar] [CrossRef]
  9. Yang, X.; Peng, D.; Lv, X.; Li, X. Recent progress in impulsive control systems. Math. Comput. Simul. 2019, 155, 244–268. [Google Scholar] [CrossRef]
  10. Kalman, R. Contribution to the theory of optimal control. Bol. Soc. Mat. Mex. 1960, 5, 102–119. [Google Scholar]
  11. Anderson, B.; Moore, J.B. Optimal Control: Linear Quadratic Methods; Prentice Hall: Hoboken, NJ, USA, 1990. [Google Scholar]
  12. Wonham, W. On a matrix Riccati equation of stochastic control. SIAM J. Control 1968, 6, 312–325. [Google Scholar] [CrossRef]
  13. Davis, M. Linear Estimation and Stochastic Control; Chapman & Hall: London, UK, 1977. [Google Scholar]
  14. Rami, M.A.; Zhou, X.Y.; Moore, J. Well-posedness and attainability of indefinite stochastic linear quadratic control in infinite time horizon. Syst. Control Lett. 2000, 41, 123–133. [Google Scholar] [CrossRef]
  15. Guatteri, G.; Tessitore, G. Backward stochastic Riccati equations and infinite horizon L-Q optimal control problems with stochastic coefficients. Appl. Math. Optim. 2008, 57, 159–194. [Google Scholar] [CrossRef]
  16. Hu, S. Infinite horizontal optimal quadratic control for an affine equation driven by Levy processes. Chin. Ann. Math. 2013, 34A, 179–204. [Google Scholar]
  17. Huang, J.; Li, X.; Yong, J. A linear-quadratic optimal control problem for mean-field stochastic differential equations in infinite horizon. Math. Optim. Control. 2015, 5, 179–204. [Google Scholar] [CrossRef]
  18. Rami, M.A.; Moore, J.; Zhou, X.Y. Indefinite Stochastic Linear Quadratic Control and Generalized Differential Riccati Equation. SIAM J. Control Optim. 2001, 40, 1296–1311. [Google Scholar] [CrossRef]
  19. Wu, H.; Zhou, X.Y. Characterizing all optimal controls for an indefinite stochastic linear quadratic control problem. IEEE Trans. Autom. Control. 2002, 47, 4042–4078. [Google Scholar]
  20. Chen, X.; Zhou, X.Y. Stochastic linear-quadratic control with conic control constraints on an infinite time horizon. SIAM J. Control. Optim. 2004, 43, 1120–1150. [Google Scholar] [CrossRef]
  21. Sun, J.; Yong, J. Stochastic Linear Quadratic Optimal Control Problems in Infinite Horizon. Appl. Math. Optim. 2018, 78, 145–183. [Google Scholar] [CrossRef]
  22. Briat, C. Stability analysis and stabilization of stochastic linear impulsive, switched and sampled-data systems under dwell-time constraints. Automatica 2016, 74, 279–287. [Google Scholar] [CrossRef]
  23. Briat, C. Stability analysis and stabilization of linear symmetric matrix-valued continuous, discrete, and impulsive dynamical systems — A unified approach for the stability analysis and the stabilization of linear systems. Nonlinear Anal. Hybrid Syst. 2022, 46, 101242. [Google Scholar] [CrossRef]
  24. Drăgan, V.; Popa, I.L.; Aberkane, S. On the stochastic linear quadratic optimal control problem by piecewise constant controls: The infinite horizon time case. Math. Methods Appl. Sci. 2024, 45, 3734–3762. [Google Scholar] [CrossRef]
  25. Lipser, A.; Shiryaev, A. Statistics of Random Processes; Springer: Berlin, Germany, 1977. [Google Scholar]
  26. Friedman, A. Stochastic Differential Equations and Applications; Academic Press: Cambridge, MA, USA, 1975; Volume 1. [Google Scholar]
  27. Øksendal, B. Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  28. Drăgan, V.; Aberkane, S.; Popa, I.L.; Morozan, T. On the stability and mean square stabilization of a class of linear stochastic systems controlled by impulses. Ann. Acad. Rom. Sci. Ser. Math. Appl. 2023, 15, 45–72. [Google Scholar] [CrossRef]
  29. Albert, A. Conditions for positive and nonnegative definiteness in terms of pseudoinverses. SIAM J. Appl. Math. 1969, 17, 434–440. [Google Scholar] [CrossRef]
  30. Drăgan, V.; Morozan, T.; Stoica, A.M. Mathematical Methods in Robust Control of Linear Stochastic Systems; Springer: New York, NY, USA, 2013. [Google Scholar]
  31. Drăgan, V.; Morozan, T.; Stoica, A.M. Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems; Springer: New York, NY, USA, 2010. [Google Scholar]
  32. Vukosavljev, M.; Schoellig, A.P.; Broucke, M.E. The regular indefinite linear quadratic optimal control problem: Stabilizable case. Siam J. Control Optim. 2018, 56, 496–516. [Google Scholar] [CrossRef]