Article

Optimal Feedback Policy for the Tracking Control of Markovian Jump Boolean Control Networks over a Finite Horizon

by Bingquan Chen, Yuyi Xue * and Aiju Shi
School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1332; https://doi.org/10.3390/math13081332
Submission received: 23 March 2025 / Revised: 12 April 2025 / Accepted: 17 April 2025 / Published: 18 April 2025

Abstract: This paper aims to find optimal feedback policies for the tracking control of Markovian jump Boolean control networks (MJBCNs) over a finite horizon. The tracking objective is a predetermined time-varying trajectory of finite length. To minimize the expected total tracking error between the output trajectory of the MJBCN and the reference trajectory, an algorithm is proposed to determine an optimal policy for the system. Furthermore, considering the penalty for control input changes, a new objective function is obtained by weighted summation of the total tracking error and the total variation of the control input. Optimal policies are designed using an algorithm to minimize the expectation of this new objective function. Finally, the methodology is applied to two simplified biological models to demonstrate its effectiveness.

1. Introduction

Boolean networks (BNs) were first put forward by Kauffman as a kind of model for genetic regulatory networks (GRNs) [1]. When a GRN is simulated with a BN, each node of the BN represents a gene, whose state is quantified as 1 or 0, and the genes’ mutual regulatory interactions are characterized by logical functions. Moreover, GRNs are sometimes subject to external drug interventions, which can be interpreted as control inputs in BNs. BNs with control input nodes are termed Boolean control networks (BCNs). There was no unified tool for studying BNs until the semi-tensor product (STP) of matrices was proposed [2], which has greatly facilitated research on many related problems regarding BNs, including stability and stabilization [3,4,5], controllability and observability [6,7,8], state estimation [9,10,11,12], state compression [13], and the output tracking control (OTC) problem [14,15,16,17]. In addition, STP has also been widely adopted in other areas such as nonlinear shift registers [18] and fuzzy relational inequalities [19].
As is well known, gene expression usually involves randomness. Consequently, Shmulevich et al. proposed probabilistic Boolean networks (PBNs) to characterize the uncertainty in GRNs [20,21]. A PBN has several realizations, and at each time step one realization is selected according to a probability distribution. Similarly, probabilistic Boolean control networks (PBCNs) are derived from PBNs by incorporating control inputs into the networks. In addition, a Markov chain was shown to emulate the dynamics of a small GRN well [22]. Furthermore, Markovian jump Boolean control networks (MJBCNs) form another category of stochastic extensions of BNs. As stated in [23], PBCNs can be regarded as a special class of MJBCNs. In an MJBCN, mode transitions are governed by a Markov chain. Recently, some results on MJBCNs have been obtained, such as controllability [23] and stabilization [24,25].
The significance of OTC lies in designing a control under which the system output closely follows a desired reference trajectory. This is valuable in various practical applications such as robotic manipulator control [26] and flight control [27]. By solving the OTC problem, systems can operate stably in dynamic environments, reduce errors, improve performance, and lower costs and energy consumption. In fields such as disease treatment, it can also help precisely regulate drug concentrations and treatment processes, enhancing effectiveness and safety. The finite-time OTC of BCNs was first investigated by Li et al. [14,15]. Moreover, the finite-time OTC of PBCNs has been studied in [28]. In addition, the asymptotic OTC of PBCNs has been addressed by Chen et al. [29]. In these studies, the tracking objective is a constant state or a reference system.
As stated in [30], a useful way to improve the efficiency of bioreactors is to force the states of microalgae to follow a predetermined reference trajectory. Motivated by this, Zhang et al. [16] investigated the optimal (i.e., minimum-error) OTC of BCNs over a finite time horizon, where the tracking objective is a predefined finite-length trajectory. Moreover, in practice, it is difficult to make substantial changes to the control inputs in a short period, or doing so may incur additional costs. For example, in disease treatment, the concentration of a drug in the body typically decreases gradually rather than instantaneously. Thus, a penalty for control input changes was taken into account in [16]. After that, the OTC problems of PBCNs and MJBCNs with respect to a predefined reference trajectory were studied in [31,32,33]. However, no feedback policy was provided for all initial states in [33]; moreover, a penalty for control input changes was not considered in [31,32,33].
In this article, we study the finite horizon OTC of MJBCNs with respect to a predefined reference trajectory with a finite length.
  • The OTC problem is reformulated as an optimal control problem, and then an optimal policy is obtained to minimize the expected total tracking error.
  • A new objective function is constructed by performing a weighted sum of the total tracking error and the total variation of the control input. An optimal policy is given to minimize the expected objective function value.
  • The optimal feedback policies obtained in this paper apply to all initial states. As shown in the examples, the design of policies is based on the specific weightings given to the two objectives (i.e., reducing tracking errors and decreasing input variations).
The rest of the paper is organized as follows. Section 2 introduces some preliminaries. In Section 3, the main results on the optimal finite horizon OTC of MJBCNs are obtained. Two examples are given in Section 4 to demonstrate the results. Finally, concluding remarks are provided in Section 5.

2. Preliminaries

The basic symbols are given in Table 1. Since STP is a generalization of ordinary matrix multiplication [2], the symbol ⋉ is omitted in the sequel as long as there is no ambiguity.
An MJBCN is presented as follows:
$$X_i(t+1)=f_i^{\sigma(t)}\big(X(t),U(t)\big),\quad i=1,\dots,n,\qquad Y_j(t)=h_j\big(X(t)\big),\quad j=1,\dots,q, \qquad (1)$$
where $X(t)=(X_1(t),X_2(t),\dots,X_n(t))\in\mathcal{D}^n$ is the state vector, $U(t)=(U_1(t),U_2(t),\dots,U_m(t))\in\mathcal{D}^m$ is the control input vector, $Y(t)=(Y_1(t),Y_2(t),\dots,Y_q(t))\in\mathcal{D}^q$ is the output vector, and $\sigma(t)\in[1:s]$ is a Markov switching signal with transition probability matrix (TPM) $P=(p_{ij})_{s\times s}$, that is, $\Pr\{\sigma(t+1)=j\mid\sigma(t)=i\}=p_{ij}$, $i,j\in[1:s]$. In addition, all $f_i^k$, $k\in[1:s]$, $i\in[1:n]$, and $h_j$, $j\in[1:q]$, are logical mappings.
Let $x(t)=\ltimes_{i=1}^{n}v(X_i(t))\in\Delta_{2^n}$, $y(t)=\ltimes_{j=1}^{q}v(Y_j(t))\in\Delta_{2^q}$, and $u(t)=\ltimes_{k=1}^{m}v(U_k(t))\in\Delta_{2^m}$. Then MJBCN (1) can be converted into [24]
$$x(t+1)=F_{\sigma(t)}u(t)x(t),\qquad y(t)=Hx(t), \qquad (2)$$
where $F_i\in\mathcal{L}_{2^n\times 2^{n+m}}$, $i\in[1:s]$, and $H\in\mathcal{L}_{2^q\times 2^n}$.
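Since all of the derivations below are carried out with STP and the vector form $v(\cdot)$ of Table 1, a small numerical sketch may help. The following Python snippet is our own illustration (the helper names stp and v are not from the paper) of how the left semi-tensor product and the vector form can be computed with numpy.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product A ⋉ B; reduces to ordinary A @ B when the inner dimensions match."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def v(X):
    """Vector form of a Boolean value (Table 1): v(1) = delta_2^1, v(0) = delta_2^2."""
    return np.array([[1.0], [0.0]]) if X else np.array([[0.0], [1.0]])

# x(t) = v(X_1) ⋉ v(X_2) ⋉ v(X_3) is a canonical vector: X = (1, 0, 1) gives delta_8^3.
x = stp(stp(v(1), v(0)), v(1))
print(np.argmax(x) + 1)   # -> 3
```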
Assumption 1
([35]). $x(t+1)$ and $\sigma(t+1)$ are conditionally independent given $x(t)$, $\sigma(t)$, and $u(t)$ for all $t\geq 0$.
Let $N=s2^n$, $M=2^m$, and $\tilde F=[F_1\ F_2\ \cdots\ F_s]\in\mathcal{L}_{2^n\times NM}$. Introduce the instrumental variables $\omega(t)=\delta_s^{\sigma(t)}\in\Delta_s$ and $z(t)=\omega(t)x(t)\in\Delta_N$. Define the matrix
$$\hat F=\big(\mathbf{1}_M^{\top}\otimes P\otimes\mathbf{1}_{2^n}^{\top}\big)*\big(\tilde F W_{[M,s]}\big), \qquad (3)$$
where $W_{[M,s]}=[I_s\otimes\delta_M^1\ \ I_s\otimes\delta_M^2\ \ \cdots\ \ I_s\otimes\delta_M^M]$ is the swap matrix [2]. Split the matrix $\hat F$ into $M$ blocks of the same size: $\hat F=[\bar F_1\ \bar F_2\ \cdots\ \bar F_M]$.
Proposition 1.
For any $\delta_N^i,\delta_N^j\in\Delta_N$ and $\delta_M^k\in\Delta_M$,
$$\Pr\{z(t+1)=\delta_N^i\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}=[\bar F_k]_{ij}.$$
Proof. 
Suppose $\delta_N^i=\delta_s^{\gamma_i}\delta_{2^n}^{\xi_i}$ and $\delta_N^j=\delta_s^{\gamma_j}\delta_{2^n}^{\xi_j}$. Note that $x(t+1)=\tilde F\omega(t)u(t)x(t)=\tilde F W_{[M,s]}u(t)z(t)$, so
$$\Pr\{x(t+1)=\delta_{2^n}^{\xi_i}\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}=[\tilde F W_{[M,s]}\delta_M^k]_{\xi_i j}.$$
Moreover, note that $\Pr\{\sigma(t+1)=\gamma_i\mid\sigma(t)=\gamma_j\}=[P]_{\gamma_i\gamma_j}$, so
$$\Pr\{\sigma(t+1)=\gamma_i\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}=[P]_{\gamma_i\gamma_j}=[P\otimes\mathbf{1}_{2^n}^{\top}]_{\gamma_i j}.$$
Therefore, under Assumption 1,
$$\begin{aligned}\Pr\{z(t+1)=\delta_N^i\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}&=\Pr\{\sigma(t+1)=\gamma_i,\ x(t+1)=\delta_{2^n}^{\xi_i}\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}\\&=\Pr\{\sigma(t+1)=\gamma_i\mid\sigma(t)=\gamma_j,\ x(t)=\delta_{2^n}^{\xi_j},\ u(t)=\delta_M^k\}\times\Pr\{x(t+1)=\delta_{2^n}^{\xi_i}\mid z(t)=\delta_N^j,\ u(t)=\delta_M^k\}\\&=[P\otimes\mathbf{1}_{2^n}^{\top}]_{\gamma_i j}\times[\tilde F W_{[M,s]}\delta_M^k]_{\xi_i j}=[\bar F_k]_{ij},\end{aligned}$$
which is the claimed equality.    □
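Proposition 1 also suggests a direct way to assemble the blocks $\bar F_k$ numerically. The sketch below is our own; it assumes the TPM convention stated after (1), i.e., $\Pr\{\sigma(t+1)=j\mid\sigma(t)=i\}=p_{ij}$, and deterministic mode matrices $F_i$ stored as 0/1 arrays whose columns are ordered as $u(t)x(t)$. It enumerates modes, states, and inputs instead of using the closed form (3).

```python
import numpy as np

def transition_blocks(F_list, P):
    """Blocks F_bar_k of Proposition 1: F_bar[k][i, j] is the probability that
    z(t+1) = delta_N^(i+1) given z(t) = delta_N^(j+1) and u(t) = delta_M^(k+1)."""
    s = len(F_list)
    two_n = F_list[0].shape[0]
    M = F_list[0].shape[1] // two_n
    N = s * two_n
    F_bar = [np.zeros((N, N)) for _ in range(M)]
    for k in range(M):
        for gj in range(s):                 # current mode gamma_j
            for xj in range(two_n):         # current state xi_j
                j = gj * two_n + xj         # index of z(t) (0-based)
                x_next = F_list[gj][:, k * two_n + xj]   # one-hot next state under mode gamma_j
                for gi in range(s):         # next mode gamma_i
                    F_bar[k][gi * two_n:(gi + 1) * two_n, j] = P[gj, gi] * x_next
    return F_bar
```

Horizontally stacking the blocks, np.hstack(F_bar), then plays the role of $\hat F$ in the recursions of Section 3.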

3. Main Results

3.1. Finite Horizon OTC of MJBCNs

A reference trajectory of length $T\in\mathbb{N}$ is given as follows:
$$y_r(1)=\delta_{2^q}^{r_1},\ y_r(2)=\delta_{2^q}^{r_2},\ \dots,\ y_r(T)=\delta_{2^q}^{r_T}, \qquad (4)$$
where $r_t\in[1:2^q]$, $t=1,\dots,T$.
A policy is a sequence of mappings of the form $\pi=\{\phi_0,\phi_1,\dots,\phi_{T-1}\}$, where $\phi_t:\Delta_N\to\Delta_M$, $t\in[0:T-1]$. For each $\phi_t$ there is a $K_{\phi_t}\in\mathcal{L}_{M\times N}$ such that $\phi_t(z(t))=K_{\phi_t}z(t)$. If the policy $\pi$ is determined, we stipulate
$$u(t)=K_{\phi_t}z(t),\quad t\in[0:T-1]. \qquad (5)$$
Conversely, if the feedback control (5) is given, the policy $\pi$ is also determined. $\Pi$ denotes the set of all policies.
Definition 1.
The reference trajectory (4) is said to be exactly tracked by MJBCN (2) via policy $\pi$ under the initial condition $z(0)=z_0\in\Delta_N$ if
$$\Pr\{y(t)=y_r(t)\mid z(0)=z_0,\ \pi\}=1,\quad t\in[1:T].$$
If the reference output trajectory (4) cannot be exactly tracked, we attempt to find a policy $\pi\in\Pi$ that minimizes the expected total tracking error between the output trajectory of MJBCN (2) and the reference trajectory (4) from $t=1$ to $t=T$.
Given two output state vectors $Y=(Y_1,Y_2,\dots,Y_q)\in\mathcal{D}^q$ and $Y'=(Y_1',Y_2',\dots,Y_q')\in\mathcal{D}^q$, let $y_1=\ltimes_{i=1}^{q}v(Y_i)\in\Delta_{2^q}$ and $y_2=\ltimes_{i=1}^{q}v(Y_i')\in\Delta_{2^q}$. The distance between $y_1$ and $y_2$ (or between $Y$ and $Y'$) is given as follows [16]:
$$d(y_1,y_2)=\sum_{i=1}^{q}|Y_i-Y_i'|. \qquad (6)$$
For example, suppose $Y=(1,1,1)$ and $Y'=(0,1,1)$. Then $y_1=\delta_8^1$ and $y_2=\delta_8^5$, and by (6), $d(y_1,y_2)=1$. Although $Y$ and $y_1$ (likewise $Y'$ and $y_2$) uniquely determine each other, $d(y_1,y_2)$ cannot be observed directly from $y_1=\delta_8^1$ and $y_2=\delta_8^5$. In fact, the Euclidean distance $\|y_1-y_2\|$ is either $0$ or $\sqrt{2}$, which cannot reflect the degree of difference between $Y$ and $Y'$. The distance (6) counts the number of components in which $Y$ and $Y'$ differ, and is therefore better suited to our requirements than the Euclidean distance.
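Because $v(X)=\delta_2^{2-X}$, the index of a canonical vector encodes the Boolean tuple bitwise, so the distance (6) is simply a Hamming distance between indices. A tiny sketch (ours) of this observation:

```python
def d(i, j):
    """Distance (6) between delta_{2^q}^i and delta_{2^q}^j: the number of Boolean
    components in which the underlying tuples differ (i - 1 carries the bits 1 - Y_k)."""
    return bin((i - 1) ^ (j - 1)).count("1")

# Example from the text: Y = (1, 1, 1) -> delta_8^1, Y' = (0, 1, 1) -> delta_8^5.
assert d(1, 5) == 1
```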
The total tracking error between the output trajectory of MJBCN (2) and the reference trajectory (4) is expressed as
$$e(z(0),y,y_r)=\sum_{t=1}^{T}d\big(y(t),y_r(t)\big). \qquad (7)$$
Since the output state vectors of system (2) take only finitely many values, $\mathbb{E}\,e(z(0),y,y_r)=0$ means that MJBCN (2) exactly tracks the reference trajectory (4). We intend to find a policy that minimizes $\mathbb{E}\,e(z(0),y,y_r)$ for all $z(0)\in\Delta_N$.
Define the weight factor vectors
$$C(T)=\big[d(\delta_{2^q}^1,\delta_{2^q}^{r_T})\ \ d(\delta_{2^q}^2,\delta_{2^q}^{r_T})\ \ \cdots\ \ d(\delta_{2^q}^{2^q},\delta_{2^q}^{r_T})\big],\qquad C(t)=\mathbf{1}_M^{\top}\otimes\big[d(\delta_{2^q}^1,\delta_{2^q}^{r_t})\ \ d(\delta_{2^q}^2,\delta_{2^q}^{r_t})\ \ \cdots\ \ d(\delta_{2^q}^{2^q},\delta_{2^q}^{r_t})\big],\quad t\in[1:T-1]. \qquad (8)$$
Then $d(y(t),y_r(t))=C(t)u(t)y(t)$, $t\in[1:T-1]$, and $d(y(T),y_r(T))=C(T)y(T)$. Consequently,
$$e(z(0),y,y_r)=C(T)y(T)+\sum_{t=1}^{T-1}C(t)u(t)y(t)=C(T)H\big(\mathbf{1}_s^{\top}\omega(T)\big)x(T)+\sum_{t=1}^{T-1}C(t)u(t)H\big(\mathbf{1}_s^{\top}\omega(t)\big)x(t)=C(T)H\mathbf{1}_s^{\top}z(T)+\sum_{t=1}^{T-1}C(t)\big(I_M\otimes(H\mathbf{1}_s^{\top})\big)u(t)z(t).$$
When the dimensions of the matrices do not match, we default to using STP.
Update the weight factor vectors:
$$\tilde C(0)=\mathbf{0}_{NM}^{\top},\qquad \tilde C(t)=C(t)\big(I_M\otimes(H\mathbf{1}_s^{\top})\big),\quad t\in[1:T-1],\qquad \tilde C(T)=C(T)H\mathbf{1}_s^{\top}. \qquad (9)$$
Then we have
$$e(z(0),y,y_r)=\tilde C(T)z(T)+\sum_{t=0}^{T-1}\tilde C(t)u(t)z(t). \qquad (10)$$
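For completeness, here is a rough numpy sketch (our own; the helper name weight_rows is not from the paper) of how the weight vectors (8) and (9) can be assembled from the reference indices $r_1,\dots,r_T$ and the output matrix $H$. For Example 1 below it should reproduce $C(4)=[2\ 1\ 1\ 0]$ and the vectors $\tilde C(t)$ listed there.

```python
import numpy as np

def weight_rows(r, q, M, s, H):
    """Weight vectors C(t) of (8) and C_tilde(t) of (9).

    r : reference indices [r_1, ..., r_T] as in (4) (1-based);  H : 2^q x 2^n output matrix."""
    T, two_q, two_n = len(r), 2 ** q, H.shape[1]
    d = lambda i, j: bin((i - 1) ^ (j - 1)).count("1")               # distance (6) on indices
    base = [np.array([d(i, rt) for i in range(1, two_q + 1)], float) for rt in r]
    C = {t: np.kron(np.ones(M), base[t - 1]) for t in range(1, T)}   # C(t) = 1_M^T (x) [...]
    C[T] = base[T - 1]
    Hb = H @ np.kron(np.ones((1, s)), np.eye(two_n))                 # H (1_s^T (x) I_{2^n})
    C_tilde = {0: np.zeros(s * two_n * M), T: C[T] @ Hb}
    C_tilde.update({t: C[t] @ np.kron(np.eye(M), Hb) for t in range(1, T)})
    return C, C_tilde
```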
Minimizing $\mathbb{E}\,e(z(0),y,y_r)$ over the policies can be formulated as the following optimization problem:
$$\min_{\pi\in\Pi}J(z(0),\pi)=\min_{\pi\in\Pi}\mathbb{E}\,e(z(0),y,y_r),\qquad \pi^{*}=\arg\min_{\pi\in\Pi}J(z(0),\pi). \qquad (11)$$
$\pi^{*}$ is called an optimal policy if (11) holds for all $z(0)\in\Delta_N$.
The sub-policies of $\pi=\{\phi_0,\phi_1,\dots,\phi_{T-1}\}$ are denoted by $\pi_k=\{\phi_k,\phi_{k+1},\dots,\phi_{T-1}\}$, $k\in[1:T-1]$. The set of all possible $\pi_k$ is represented by $\Pi_k$. Next, define the optimal values of the optimization problem (11) and its sub-problems as follows:
$$J_0(z(0))=\min_{\pi\in\Pi}J(z(0),\pi),\qquad J_k(z(k))=\min_{\pi_k\in\Pi_k}\mathbb{E}\Big[\tilde C(T)z(T)+\sum_{t=k}^{T-1}\tilde C(t)u(t)z(t)\Big],\quad k\in[1:T-1],\qquad J_T(z(T))=\tilde C(T)z(T). \qquad (12)$$
The expectation in (12) is actually the conditional expectation given $z(k)\in\Delta_N$ and $\pi_k\in\Pi_k$. The following lemma, based on dynamic programming, is given to calculate $J_0(z(0))$ through an iterative process for all $z(0)\in\Delta_N$. The detailed proof is omitted for brevity; it is similar to Lemma 2 of [36] and Theorem 4.1 of [37].
Lemma 1.
For any given $z(t)\in\Delta_N$,
$$J_t(z(t))=\min_{u(t)\in\Delta_M}\mathbb{E}\big[\tilde C(t)u(t)z(t)+J_{t+1}(z(t+1))\big],\quad t\in[0:T-1]. \qquad (13)$$
Moreover, if $u(t)=K_{\phi_t}z(t)$ minimizes the expectation in (13) for all $z(t)\in\Delta_N$ and all $t\in[0:T-1]$, then $\pi=\{\phi_0,\phi_1,\dots,\phi_{T-1}\}$ is an optimal solution of (11).
Specifically, taking $z(t)=\delta_N^j$ in (13), by Proposition 1,
$$J_t(\delta_N^j)=\min_{k\in[1:M]}\Big\{\tilde C(t)\delta_M^k\delta_N^j+\sum_{i=1}^{N}[\bar F_k]_{ij}\,J_{t+1}(\delta_N^i)\Big\},\quad t\in[0:T-1]. \qquad (14)$$
Furthermore, define the optimal value vectors
$$J_t=\big[J_t(\delta_N^1)\ \ J_t(\delta_N^2)\ \ \cdots\ \ J_t(\delta_N^N)\big],\quad t\in[0:T].$$
Then, Equation (14) is equivalent to
$$J_t(\delta_N^j)=\min_{k\in[1:M]}\big[\tilde C(t)+J_{t+1}\hat F\big]_{(k-1)N+j},\quad t\in[0:T-1], \qquad (15)$$
where $[\tilde C(t)+J_{t+1}\hat F]_{(k-1)N+j}$ denotes the $((k-1)N+j)$-th entry of $\tilde C(t)+J_{t+1}\hat F$.
Theorem 1.
The policy π obtained by Algorithm 1 can minimize the expected total tracking error between the output trajectory of MJBCN (2) and the reference trajectory (4) for all of the initial states.
Proof. 
It is easy to verify that the policy $\pi=\{\phi_0,\phi_1,\dots,\phi_{T-1}\}$ obtained by Algorithm 1 satisfies $\mu_j(t)=\arg\min_{k\in[1:M]}[\tilde C(t)+J_{t+1}\hat F]_{(k-1)N+j}$ for all $\delta_N^j\in\Delta_N$ and all $t\in[0:T-1]$, where $K_{\phi_t}:=\delta_M[\mu_1(t)\ \mu_2(t)\ \cdots\ \mu_N(t)]$. That is, $u(t)=K_{\phi_t}z(t)$ minimizes the expectation in (13) for all $z(t)\in\Delta_N$, $t\in[0:T-1]$. Therefore, by Lemma 1, $\pi=\{\phi_0,\phi_1,\dots,\phi_{T-1}\}$ is an optimal solution of (11). In other words, $\pi$ minimizes the expected total tracking error for all of the initial states.    □
Remark 1.
Algorithm 1 calculates $J_t$, $t=T-1,\dots,0$, recursively and elementwise. The calculation of $J_{t+1}\hat F$ has time complexity $O(N^2M)$, while lines 4–11 require $O(NM)$ time, which is negligible compared with $O(N^2M)$. Therefore, Algorithm 1 has at most $O(TN^2M)$ time complexity.
Algorithm 1: Find an optimal solution for (11).
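The pseudocode of Algorithm 1 is given as a figure in the published version. As a rough illustration only, the backward recursion (15), together with the extraction of the feedback matrices $K_{\phi_t}$ used in Theorem 1, could be coded as follows; this is our own sketch, with $\hat F$ and the rows $\tilde C(t)$ assumed to be available as dense numpy arrays (for instance from the earlier snippets).

```python
import numpy as np

def backward_dp(F_hat, C_tilde, T, N, M):
    """Backward recursion (15). Returns the optimal value vector J_0 and the
    feedback matrices K_{phi_t} (as M x N logical matrices), one per time step."""
    J = np.asarray(C_tilde[T], dtype=float)         # J_T = C_tilde(T)
    K = [None] * T
    for t in range(T - 1, -1, -1):
        Q = (C_tilde[t] + J @ F_hat).reshape(M, N)  # entry (k, j) = [C_tilde(t) + J_{t+1} F_hat]_{(k-1)N+j}
        mu = Q.argmin(axis=0)                       # optimal input index for each state j
        K[t] = np.eye(M)[:, mu]                     # K_{phi_t} = delta_M[mu_1(t) ... mu_N(t)]
        J = Q.min(axis=0)                           # J_t
    return J, K
```

Each pass of the loop costs $O(N^2M)$, which matches the complexity estimate in Remark 1.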

3.2. Finite Horizon OTC of MJBCNs with a Penalty for Control Input Changes

Given two control input vectors $U=(U_1,U_2,\dots,U_m)$, $U'=(U_1',U_2',\dots,U_m')\in\mathcal{D}^m$, let $u_1=\ltimes_{i=1}^{m}v(U_i)\in\Delta_M$ and $u_2=\ltimes_{i=1}^{m}v(U_i')\in\Delta_M$. Similar to the distance (6), the distance between $u_1$ and $u_2$ (or between $U$ and $U'$) is given by
$$d_{\mathrm{in}}(u_1,u_2)=\sum_{i=1}^{m}|U_i-U_i'|. \qquad (16)$$
Then, the total variation of the control input of MJBCN (2) within the time period $[0:T-1]$ is expressed as
$$\tilde e(z(0),u)=\sum_{t=1}^{T-1}d_{\mathrm{in}}\big(u(t),u(t-1)\big).$$
Define a penalty factor vector
$$\tilde D=\big[d_{\mathrm{in}}(\delta_M^1,\delta_M^1)\ \ d_{\mathrm{in}}(\delta_M^1,\delta_M^2)\ \ \cdots\ \ d_{\mathrm{in}}(\delta_M^1,\delta_M^M)\ \ d_{\mathrm{in}}(\delta_M^2,\delta_M^1)\ \ \cdots\ \ d_{\mathrm{in}}(\delta_M^M,\delta_M^M)\big],$$
which satisfies $d_{\mathrm{in}}(u_1,u_2)=\tilde D u_1u_2$ for all $u_1,u_2\in\Delta_M$. Then
$$\tilde e(z(0),u)=\sum_{t=1}^{T-1}\tilde D u(t)u(t-1). \qquad (17)$$
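As with (6), the penalty vector $\tilde D$ can be generated directly from the input indices. A short sketch (ours), which for $m=1$ gives $\tilde D=[0\ 1\ 1\ 0]$ as in Example 1:

```python
import numpy as np

def D_tilde(m):
    """Penalty row vector of length M^2 with D_tilde (u1 ⋉ u2) = d_in(u1, u2), M = 2^m."""
    M = 2 ** m
    d_in = lambda i, j: bin((i - 1) ^ (j - 1)).count("1")
    return np.array([d_in(i, j) for i in range(1, M + 1) for j in range(1, M + 1)], float)
```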
By performing a weighted sum of (10) and (17), a new objective function, denoted by $\hat e(z(0),y,y_r,u)$, is obtained as follows:
$$\hat e(z(0),y,y_r,u)=\alpha\cdot e(z(0),y,y_r)+(1-\alpha)\cdot\tilde e(z(0),u)=\alpha\Big(\tilde C(T)z(T)+\sum_{t=1}^{T-1}\tilde C(t)u(t)z(t)\Big)+(1-\alpha)\sum_{t=1}^{T-1}\tilde D u(t)u(t-1), \qquad (18)$$
where $0\leq\alpha\leq 1$. We aim to minimize $\mathbb{E}\,\hat e(z(0),y,y_r,u)$ for all $z(0)\in\Delta_N$. When we are more concerned with the tracking error, we can set $\alpha$ to a larger value. Conversely, if we want to reduce the variation of the control input, we can set $\alpha$ to a smaller value.
Based on the Kronecker product and STP, we can derive
$$\tilde C(T)z(T)=\big(\tilde C(T)\otimes\mathbf{1}_M^{\top}\big)z(T)u(T-1),\qquad \tilde C(t)u(t)z(t)=\big(\tilde C(t)\otimes\mathbf{1}_M^{\top}\big)u(t)z(t)u(t-1),\qquad \tilde D u(t)u(t-1)=\tilde D\big(I_M\otimes\mathbf{1}_N^{\top}\big)u(t)z(t)u(t-1).$$
Introduce another instrumental variable $\hat z(t)$ as follows:
$$\hat z(0)=z(0)\in\Delta_N,\qquad \hat z(t)=z(t)u(t-1)\in\Delta_{NM},\quad t=1,2,\dots$$
Define the weight factor vectors:
$$\hat C(0)=\mathbf{0}_{NM}^{\top},\qquad \hat C(t)=\alpha\,\tilde C(t)\otimes\mathbf{1}_M^{\top}+(1-\alpha)\,\tilde D\big(I_M\otimes\mathbf{1}_N^{\top}\big),\quad t\in[1:T-1],\qquad \hat C(T)=\alpha\,\tilde C(T)\otimes\mathbf{1}_M^{\top}. \qquad (19)$$
Then we have
$$\hat e(z(0),y,y_r,u)=\hat C(T)\hat z(T)+\sum_{t=0}^{T-1}\hat C(t)u(t)\hat z(t). \qquad (20)$$
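A sketch (ours) of how the rows $\hat C(t)$ in (19) can be built from $\tilde C(t)$, $\tilde D$, and $\alpha$; with $\alpha=0.7$ and the data of Example 1 it should reproduce the vectors $\hat C(t)$ listed in Section 4.

```python
import numpy as np

def C_hat_rows(C_tilde, D_tilde, alpha, N, M, T):
    """Weight rows (19) entering the objective (20), indexed by u(t) z(t) u(t-1)."""
    expand = np.kron(np.kron(np.eye(M), np.ones((1, N))), np.eye(M))    # I_M (x) 1_N^T (x) I_M
    C_hat = {0: np.zeros(N * M),
             T: alpha * np.kron(C_tilde[T], np.ones(M))}                # alpha * C_tilde(T) (x) 1_M^T
    for t in range(1, T):
        C_hat[t] = alpha * np.kron(C_tilde[t], np.ones(M)) + (1 - alpha) * (D_tilde @ expand)
    return C_hat
```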
Define two matrices as follows:
$$R=\hat F*\big(I_M\otimes\mathbf{1}_N^{\top}\big),\qquad L=\big(\hat F\otimes\mathbf{1}_M^{\top}\big)*\big(I_M\otimes\mathbf{1}_{NM}^{\top}\big). \qquad (21)$$
Split $R$ and $L$ into $M$ blocks of the same size: $R=[\bar R_1\ \bar R_2\ \cdots\ \bar R_M]$ and $L=[\bar L_1\ \bar L_2\ \cdots\ \bar L_M]$.
Proposition 2.
For any $\delta_{NM}^i\in\Delta_{NM}$, $\delta_N^j\in\Delta_N$, and $\delta_M^k\in\Delta_M$,
$$\Pr\{\hat z(1)=\delta_{NM}^i\mid \hat z(0)=\delta_N^j,\ u(0)=\delta_M^k\}=[\bar R_k]_{ij};$$
for any $\delta_{NM}^i,\delta_{NM}^j\in\Delta_{NM}$ and $\delta_M^k\in\Delta_M$,
$$\Pr\{\hat z(t+1)=\delta_{NM}^i\mid \hat z(t)=\delta_{NM}^j,\ u(t)=\delta_M^k\}=[\bar L_k]_{ij}.$$
Proof. 
The proof is similar to Proposition 1.    □
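Instead of the closed form (21), the blocks $\bar R_k$ and $\bar L_k$ can also be assembled by enumeration directly from Proposition 2, since choosing $u(t)=\delta_M^k$ copies $\delta_M^k$ into the tail of $\hat z(t+1)=z(t+1)u(t)$ while $z(t+1)$ evolves according to $\bar F_k$. A sketch (ours):

```python
import numpy as np

def augmented_blocks(F_hat, N, M):
    """Blocks R_bar_k (NM x N) and L_bar_k (NM x NM) of Proposition 2."""
    R_bar, L_bar = [], []
    for k in range(M):
        Fk = F_hat[:, k * N:(k + 1) * N]                     # F_bar_k
        ek = np.eye(M)[:, [k]]                               # delta_M^k as a column
        Rk = np.kron(Fk, ek)                                 # next z_hat = (next z) (x) delta_M^k
        R_bar.append(Rk)
        L_bar.append(Rk @ np.kron(np.eye(N), np.ones((1, M))))   # independent of u(t-1)
    return R_bar, L_bar
```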
Next, a policy takes the form $\hat\pi=\{\psi_0,\psi_1,\dots,\psi_{T-1}\}$, where $\psi_0:\Delta_N\to\Delta_M$ and $\psi_t:\Delta_{NM}\to\Delta_M$, $t\in[1:T-1]$. There is a $K_{\psi_0}\in\mathcal{L}_{M\times N}$ such that $\psi_0(\hat z(0))=K_{\psi_0}\hat z(0)$, and for each $\psi_t$ with $t\geq 1$ there is a $G_{\psi_t}\in\mathcal{L}_{M\times NM}$ such that $\psi_t(\hat z(t))=G_{\psi_t}\hat z(t)$. Once a policy $\hat\pi$ is given, a feedback control is determined as follows:
$$u(0)=K_{\psi_0}\hat z(0),\qquad u(t)=G_{\psi_t}\hat z(t),\quad t\in[1:T-1]. \qquad (22)$$
The set of all possible $\hat\pi$ is represented by $\hat\Pi$.
Minimizing the expected value of (20) over the policies is equivalent to solving the following optimization problem:
$$\min_{\hat\pi\in\hat\Pi}\hat J(z(0),\hat\pi)=\min_{\hat\pi\in\hat\Pi}\mathbb{E}\,\hat e(z(0),y,y_r,u),\qquad \hat\pi^{*}=\arg\min_{\hat\pi\in\hat\Pi}\hat J(z(0),\hat\pi). \qquad (23)$$
$\hat\pi^{*}$ is called an optimal policy if (23) holds for all $z(0)\in\Delta_N$.
The sub-policies of $\hat\pi=\{\psi_0,\psi_1,\dots,\psi_{T-1}\}$ are denoted by $\hat\pi_k=\{\psi_k,\psi_{k+1},\dots,\psi_{T-1}\}$, $k\in[1:T-1]$. Denote by $\hat\Pi_k$ the set of all possible $\hat\pi_k$. Define the optimal values of the optimization problem (23) and its sub-problems as follows:
$$\hat J_0(z(0))=\min_{\hat\pi\in\hat\Pi}\hat J(z(0),\hat\pi),\qquad \hat J_k(\hat z(k))=\min_{\hat\pi_k\in\hat\Pi_k}\mathbb{E}\Big[\hat C(T)\hat z(T)+\sum_{t=k}^{T-1}\hat C(t)u(t)\hat z(t)\Big],\quad k\in[1:T-1],\qquad \hat J_T(\hat z(T))=\hat C(T)\hat z(T).$$
Similar to Lemma 1, the following lemma is given to determine J ^ 0 ( z ( 0 ) ) through an iterative process for all z ( 0 ) Δ N .
Lemma 2.
For any given $\hat z(0)\in\Delta_N$,
$$\hat J_0(\hat z(0))=\min_{u(0)\in\Delta_M}\mathbb{E}\big[\hat C(0)u(0)\hat z(0)+\hat J_1(\hat z(1))\big], \qquad (24)$$
and for any given $\hat z(t)\in\Delta_{NM}$,
$$\hat J_t(\hat z(t))=\min_{u(t)\in\Delta_M}\mathbb{E}\big[\hat C(t)u(t)\hat z(t)+\hat J_{t+1}(\hat z(t+1))\big],\quad t\in[1:T-1]. \qquad (25)$$
Moreover, if $u(0)=K_{\psi_0}\hat z(0)$ minimizes the expectation in (24) for all $\hat z(0)\in\Delta_N$, and $u(t)=G_{\psi_t}\hat z(t)$ minimizes the expectation in (25) for all $\hat z(t)\in\Delta_{NM}$, $t\in[1:T-1]$, then $\hat\pi=\{\psi_0,\psi_1,\dots,\psi_{T-1}\}$ is an optimal solution of (23).
Define the optimal value vectors
$$\hat J_0=\big[\hat J_0(\delta_N^1)\ \ \hat J_0(\delta_N^2)\ \ \cdots\ \ \hat J_0(\delta_N^N)\big],\qquad \hat J_t=\big[\hat J_t(\delta_{NM}^1)\ \ \hat J_t(\delta_{NM}^2)\ \ \cdots\ \ \hat J_t(\delta_{NM}^{NM})\big],\quad t\in[1:T].$$
Similarly, taking $\hat z(0)=\delta_N^i$ in (24) and $\hat z(t)=\delta_{NM}^j$ in (25), by Proposition 2,
$$\hat J_0(\delta_N^i)=\min_{k\in[1:M]}\big[\hat J_1R\big]_{(k-1)N+i},\qquad \hat J_t(\delta_{NM}^j)=\min_{k\in[1:M]}\big[\hat C(t)+\hat J_{t+1}L\big]_{(k-1)NM+j},\quad t\in[1:T-1].$$
Theorem 2.
The policy π ^ obtained by Algorithm 2 can minimize the expectation of the objective function (20) for all initial states.
Proof. 
Based on Lemma 2, the proof is similar to Theorem 1.    □
Remark 2.
Algorithm 2 calculates $\hat J_t$, $t=T-1,\dots,0$, recursively and elementwise. The calculation of $\hat J_{t+1}L$ has time complexity $O(N^2M^3)$, while lines 4–11 operate in $O(NM^2)$ time. Thus, lines 2–13 require $O(TN^2M^3)$. The time complexity of the remaining steps is clearly lower than $O(TN^2M^3)$. Therefore, Algorithm 2 has at most $O(TN^2M^3)$ time complexity. Comparing Algorithm 1 with Algorithm 2, we observe that Algorithm 1 has the lower time complexity. Therefore, if the penalty for control input changes is not considered in the OTC problem, we prioritize Algorithm 1 to design an optimal policy $\pi$ for MJBCN (2). However, when such a penalty is incorporated, it becomes necessary to employ Algorithm 2 to obtain an optimal policy $\hat\pi$.
Algorithm 2: Calculate an optimal solution for (23).
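Analogously to the earlier sketch for Algorithm 1, the backward recursion used by Algorithm 2 (whose pseudocode is a figure in the published version) can be illustrated as follows. This is our own sketch, with $R=[\bar R_1\ \cdots\ \bar R_M]$, $L=[\bar L_1\ \cdots\ \bar L_M]$, and the rows $\hat C(t)$ assumed to be dense numpy arrays.

```python
import numpy as np

def backward_dp_penalty(R, L, C_hat, T, N, M):
    """Backward recursion behind Algorithm 2. Returns J_hat_0 together with
    the feedback matrices K_{psi_0} and G_{psi_t}, t = 1, ..., T-1."""
    J = np.asarray(C_hat[T], dtype=float)                  # J_hat_T
    G = [None] * T
    for t in range(T - 1, 0, -1):                          # t = T-1, ..., 1
        Q = (C_hat[t] + J @ L).reshape(M, N * M)
        G[t] = np.eye(M)[:, Q.argmin(axis=0)]              # G_{psi_t}
        J = Q.min(axis=0)                                  # J_hat_t
    Q0 = (J @ R).reshape(M, N)                             # C_hat(0) = 0, so only J_hat_1 R remains
    G[0] = np.eye(M)[:, Q0.argmin(axis=0)]                 # K_{psi_0}
    return Q0.min(axis=0), G
```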
Remark 3.
Generally, $u(t)$ is defined from $t=0$. In [16], a virtual variable $u(-1)$ needs to be used. Thus, although time-invariant BCNs were considered there, the optimal finite-horizon OTC problem with a penalty for control input changes has not been completely addressed. In this paper, we define $\hat z(t)$ in a segmented form, which effectively solves this problem.

4. Illustrative Examples

Example 1.
Consider an MJBCN model of the form (1) with 3 internal nodes, 1 input node, 2 output nodes, and 2 realizations [38], where $f_1^1=X_1$, $f_2^1=\neg X_1\wedge X_3$, $f_3^1=X_2\vee U_1$, $f_1^2=(\neg X_1\wedge X_2)\vee U_1$, $f_2^2=X_1\wedge X_2$, $f_3^2=X_1\wedge X_2$, $h_1=X_1$, $h_2=X_2$. The TPM of $\{\sigma(t),\ t\in\mathbb{N}\}$ is assumed to be
$$P=\begin{bmatrix}0 & 1\\ 0.6 & 0.4\end{bmatrix}.$$
Let $x(t)=\ltimes_{i=1}^{3}v(X_i(t))$, $u(t)=v(U_1(t))$, $y(t)=v(Y_1(t))v(Y_2(t))$, $\omega(t)=\delta_2^{\sigma(t)}$, and $z(t)=\omega(t)x(t)$. Then this system can be converted into the form (2) with $F_1=\delta_8[3\ 3\ 3\ 3\ 5\ 7\ 5\ 7\ 3\ 3\ 4\ 4\ 5\ 7\ 6\ 8]$, $F_2=\delta_8[1\ 1\ 4\ 4\ 4\ 4\ 4\ 4\ 5\ 5\ 8\ 8\ 4\ 4\ 8\ 8]$, and $H=\delta_4[1\ 1\ 2\ 2\ 3\ 3\ 4\ 4]$. The calculation results of $\hat F$, $R$, and $L$ are omitted.
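The structure matrices can be rebuilt (and double-checked) from the logical rules by enumerating all input-state pairs in the $u(t)x(t)$ column ordering. The following sketch (ours) reproduces $F_1$ of this example under the mode-1 rules stated above.

```python
def mode1(X1, X2, X3, U1):
    """Mode-1 update rules of Example 1."""
    return X1, int((not X1) and X3), int(X2 or U1)

cols = []
for U1 in (1, 0):                       # u(t) block first: delta_2^1 (U1 = 1), then delta_2^2
    for X1 in (1, 0):
        for X2 in (1, 0):
            for X3 in (1, 0):
                n1, n2, n3 = mode1(X1, X2, X3, U1)
                cols.append((1 - n1) * 4 + (1 - n2) * 2 + (1 - n3) + 1)
print(cols)  # [3, 3, 3, 3, 5, 7, 5, 7, 3, 3, 4, 4, 5, 7, 6, 8], i.e. the columns of F_1
```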
A reference trajectory is given in Table 2. By (8), we can obtain
C ( 4 ) = [ 2 1 1 0 ] , C ( 1 ) = [ 1 2 0 1 1 2 0 1 ] , C ( 2 ) = [ 1 0 2 1 1 0 2 1 ] , C ( 3 ) = [ 0 1 1 2 0 1 1 2 ] .
Next, by (9), we can calculate
C ˜ ( 4 ) = [ 2 2 1 1 1 1 0 0 2 2 1 1 1 1 0 0 ] , C ˜ ( 1 ) = [ 1 1 2 2 0 0 1 1 1 1 2 2 0 0 1 1 1 1 2 2 0 0 1 1 1 1 2 2 0 0 1 1 ] , C ˜ ( 2 ) = [ 1 1 0 0 2 2 1 1 1 1 0 0 2 2 1 1 1 1 0 0 2 2 1 1 1 1 0 0 2 2 1 1 ] , C ˜ ( 3 ) = [ 0 0 1 1 1 1 2 2 0 0 1 1 1 1 2 2 0 0 1 1 1 1 2 2 0 0 1 1 1 1 2 2 ] .
By Algorithm 1, we can successively obtain
J 4 = [ 2 2 1 1 1 1 0 0 2 2 1 1 1 1 0 0 ] , J 3 = [ 1 1 2 2 2 1 3 2 1 1 1 1 2 2 2 2 ] , J 2 = [ 2 2 1 1 4 4 3 3 2 2 1.6 1.6 3.6 3.6 2.6 2.6 ] , J 1 = [ 2.6 2.6 3.6 3.6 3.6 2.6 4.6 3.6 3 3 3.24 3.24 1.24 1.24 2.24 2.24 ] , J 0 = [ 3.24 3.24 3.24 3.24 1.24 2.24 1.24 2.24 2.656 2.656 3.056 3.056 3.456 3.456 3.056 3.056 ] ,
and an optimal policy π = { ϕ 0 , ϕ 1 , ϕ 2 , ϕ 3 } with feedback matrices
K ϕ 0 = δ 2 [ 1 1 1 1 1 1 1 1 2 2 2 2 1 1 2 2 ] , K ϕ 1 = δ 2 [ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ] , K ϕ 2 = δ 2 [ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ] , K ϕ 3 = δ 2 [ 1 1 1 1 1 1 1 1 2 2 2 2 1 1 2 2 ] .
Next, take the penalty for the control input changes into account. Let α = 0.7 . By (16) and (19), we can obtain D ˜ = [ 0 1 1 0 ] and
C ^ ( 4 ) = [ 1.4 1.4 1.4 1.4 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0 0 0 0 1.4 1.4 1.4 1.4 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0 0 0 0 ] , C ^ ( 1 ) = [ 0.7 1 0.7 1 1.4 1.7 1.4 1.7 0 0.3 0 0.3 0.7 1 0.7 1 0.7 1 0.7 1 1.4 1.7 1.4 1.7 0 0.3 0 0.3 0.7 1 0.7 1 1 0.7 1 0.7 1.7 1.4 1.7 1.4 0.3 0 0.3 0 1 0.7 1 0.7 1 0.7 1 0.7 1.7 1.4 1.7 1.4 0.3 0 0.3 0 1 0.7 1 0.7 ] , C ^ ( 2 ) = [ 0.7 1 0.7 1 0 0.3 0 0.3 1.4 1.7 1.4 1.7 0.7 1 0.7 1 0.7 1 0.7 1 0 0.3 0 0.3 1.4 1.7 1.4 1.7 0.7 1 0.7 1 1 0.7 1 0.7 0.3 0 0.3 0 1.7 1.4 1.7 1.4 1 0.7 1 0.7 1 0.7 1 0.7 0.3 0 0.3 0 1.7 1.4 1.7 1.4 1 0.7 1 0.7 ] , C ^ ( 3 ) = [ 0 0.3 0 0.3 0.7 1 0.7 1 0.7 1 0.7 1 1.4 1.7 1.4 1.7 0 0.3 0 0.3 0.7 1 0.7 1 0.7 1 0.7 1 1.4 1.7 1.4 1.7 0.3 0 0.3 0 1 0.7 1 0.7 1 0.7 1 0.7 1.7 1.4 1.7 1.4 0.3 0 0.3 0 1 0.7 1 0.7 1 0.7 1 0.7 1.7 1.4 1.7 1.4 ] .
By Algorithm 2, we can successively get
J ^ 4 = [ 1.4 1.4 1.4 1.4 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0 0 0 0 1.4 1.4 1.4 1.4 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0 0 0 0 ] , J ^ 3 = [ 0.7 0.7 0.7 0.7 1.4 1.4 1.4 1.4 1.4 1.4 0.7 0.7 2.1 2.1 1.4 1.4 1 0.7 1 0.7 1 0.7 1 0.7 1.4 1.4 1.4 1.4 1.7 1.4 1.7 1.4 ] , J ^ 2 = [ 1.7 1.4 1.7 1.4 1 0.7 1 0.7 2.8 2.8 3.1 2.8 2.1 2.1 2.4 2.1 1.52 1.82 1.52 1.82 1.24 1.4 1.24 1.4 2.64 2.52 2.64 2.52 1.94 2.1 1.94 2.1 ] , J ^ 1 = [ 1.94 2.1 1.94 2.1 2.64 2.8 2.64 2.8 2.64 2.52 1.94 2.1 3.34 3.22 2.64 2.8 2.328 2.628 2.328 2.628 2.496 2.796 2.496 2.796 1.096 0.98 1.096 0.98 1.796 2.096 1.796 2.096 ] , J ^ 0 = [ 2.496 2.496 2.496 2.496 0.98 1.796 0.98 1.796 1.904 1.904 2.5184 2.5184 2.5824 2.5824 2.5184 2.5184 ] ,
and an optimal policy π ^ = { ψ 0 , ψ 1 , ψ 2 , ψ 3 } with feedback matrices
K ψ 0 = δ 2 [ 1 1 1 1 2 1 2 1 2 2 2 2 1 1 2 2 ] , G ψ 1 = δ 2 [ 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 1 1 1 1 1 1 1 1 2 1 2 1 1 1 1 ] , G ψ 2 = δ 2 [ 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 1 1 1 1 2 1 2 1 2 1 2 1 2 1 2 ] , G ψ 3 = δ 2 [ 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 2 2 2 2 2 2 2 1 2 1 2 2 2 2 2 ] .
For each given parameter α, we can always obtain the optimal value of E e ^ ( z ( 0 ) , y , y r , u ) and an optimal policy by Algorithm 2. However, these optimal values are not directly comparable across different α settings. Therefore, to evaluate the relative merits of different α values, we compare the performance of E { e ( z ( 0 ) , y , y r ) } and E { e ˜ ( z ( 0 ) , u ) } under their respective optimal policies generated by varying α. This approach allows us to select a preferable α value.
Let $\alpha$ in the objective function (18) take the values $1$, $0.7$, $0.4$, and $0$ in turn; an optimal policy is then determined by Algorithm 2 for each value of $\alpha$. Under each corresponding optimal policy, $\mathbb{E}\{e(z(0),y,y_r)\}$ and $\mathbb{E}\{\tilde e(z(0),u)\}$ for each $z(0)\in\Delta_{16}$ are shown in Figure 1 and Figure 2, respectively, where a horizontal-axis value of $i$ means $z(0)=\delta_{16}^i$, $i\in[1:16]$.
As shown in the figures, increasing α results in a smaller tracking error, whereas reducing α leads to diminished variation in the control input. This aligns with the design intent of the objective function (18). In comparison, we observe that when α = 0.7 , both the tracking error and variation of control input are effectively maintained at satisfactory levels.
Example 2.
Consider an MJBCN model of the form (1) with 9 internal nodes, 2 input nodes, 3 output nodes, and 2 realizations [39], where
f 1 1 = f 1 2 = ¬ X 7 X 3 ; f 2 1 = f 2 2 = X 1 ; f 3 1 = f 3 2 = ¬ U 1 ; f 4 1 = f 4 2 = X 5 X 6 ; f 5 1 = X 5 , f 5 2 = X 5 U 2 ; f 6 1 = f 6 2 = X 1 ; f 7 1 = f 7 2 = ¬ X 4 ¬ X 8 ; f 8 1 = f 8 2 = X 4 X 5 X 9 ; f 9 1 = ¬ U 1 X 9 , f 9 2 = X 9 ; h 1 = ¬ X 1 X 3 ; h 2 = X 6 X 9 ; h 3 = X 2 ¬ X 8 .
The TPM of $\{\sigma(t),\ t\in\mathbb{N}\}$ is assumed to be
$$P=\begin{bmatrix}0.5 & 0.5\\ 0.3 & 0.7\end{bmatrix}.$$
Let $x(t)=\ltimes_{i=1}^{9}v(X_i(t))$, $u(t)=v(U_1(t))v(U_2(t))$, $y(t)=\ltimes_{j=1}^{3}v(Y_j(t))$, $\omega(t)=\delta_2^{\sigma(t)}$, and $z(t)=\omega(t)x(t)$. The transition matrices of this MJBCN are not presented here due to their large dimensions.
A reference trajectory is given in Table 3. Letting $\alpha$ take the values $1$, $0.7$, $0.4$, and $0$ in (18) in turn, we obtain an optimal policy by Algorithm 2 for each value of $\alpha$. Under each corresponding optimal policy, $\mathbb{E}\{e(z(0),y,y_r)\}$ and $\mathbb{E}\{\tilde e(z(0),u)\}$ for each $z(0)\in\Delta_{1024}$ are shown in Figure 3 and Figure 4, respectively, where a horizontal-axis value of $i$ means $z(0)=\delta_{1024}^i$, $i\in[1:1024]$. To avoid visual clutter, the figures display only sparsely sampled data points along the horizontal axis.

5. Conclusions

This paper studied the minimum error OTC of an MJBCN with respect to a predefined trajectory with a finite length, which was transformed into a dynamic optimization problem in terms of the instrumental variable z ( t ) . An optimal policy was designed using an algorithm to minimize the expected total tracking error.
Next, the penalty for control input changes was taken into account. Through the weighted summation of the total tracking error and the total variation of the control input, a new objective function was constructed. The optimal expected value of the objective function and the optimal policy were determined through the dynamic programming of the instrumental variable z ^ ( t ) . A methodology framework diagram of this paper is provided in Figure 5.
Finally, the main results were applied to two simplified biological models. As shown in the examples, the parameter α can be adjusted according to different requirements, and different values of α lead to optimal policies with varying emphasis on the tracking error and the variation of the control input.

Author Contributions

Writing—original draft preparation, B.C.; validation, writing—review and editing, Y.X. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62403253 and 12401642), in part by the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20240604 and BK20240606), and in part by the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications (Grant Nos. NY223195 and NY223198).

Data Availability Statement

All data are included in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kauffman, S.A. Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theor. Biol. 1969, 22, 437–467. [Google Scholar] [CrossRef] [PubMed]
  2. Cheng, D.; Qi, H.; Li, Z. Analysis and Control of Boolean Networks: A Semi-Tensor Product Approach; Springer: London, UK, 2011. [Google Scholar]
  3. Liu, W.; Fu, S.; Zhao, J. Set stability and set stabilization of Boolean control networks avoiding undesirable set. Mathematics 2021, 9, 2864. [Google Scholar] [CrossRef]
  4. Sun, Q.; Li, H. Robust stabilization of impulsive Boolean control networks with function perturbation. Mathematics 2022, 10, 4029. [Google Scholar] [CrossRef]
  5. Deng, L.; Cao, X.; Zhao, J. One-bit function perturbation impact on robust set stability of Boolean networks with disturbances. Mathematics 2024, 12, 2258. [Google Scholar] [CrossRef]
  6. Tang, T.; Ding, X.; Lu, J.; Liu, Y. Improved criteria for controllability of Markovian jump Boolean control networks with time-varying state delays. IEEE Trans. Autom. Control 2024, 69, 7028–7035. [Google Scholar] [CrossRef]
  7. Li, Y.; Feng, J.-E.; Wang, B. Observability of singular Boolean control networks with state delays. J. Franklin Inst. 2022, 359, 331–351. [Google Scholar] [CrossRef]
  8. Li, Y.; Li, H. Relation coarsest partition method to observability of probabilistic Boolean networks. Inf. Sci. 2024, 681, 121221. [Google Scholar] [CrossRef]
  9. Chen, H.; Wang, Z.; Liang, J.; Li, M. State estimation for stochastic time-varying Boolean networks. IEEE Trans. Autom. Control 2020, 65, 5480–5487. [Google Scholar] [CrossRef]
  10. Chen, H.; Wang, Z.; Shen, B.; Liang, J. Model evaluation of the stochastic Boolean control networks. IEEE Trans. Autom. Control 2022, 67, 4146–4153. [Google Scholar] [CrossRef]
  11. Li, Y.; Li, H.; Xiao, G. Luenberger-like observer design and optimal state estimation of logical control networks with stochastic disturbances. IEEE Trans. Autom. Control 2023, 68, 8193–8200. [Google Scholar] [CrossRef]
  12. Li, B.; Pan, Q.; Zhong, J.; Xu, W. Long-run behavior estimation of temporal Boolean networks with multiple data losses. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 15004–15011. [Google Scholar] [CrossRef]
  13. Li, B.; Lu, J.; Xu, W.; Zhong, J. Lossless state compression of Boolean control networks. IEEE Trans. Autom. Control 2024, 69, 4166–4173. [Google Scholar] [CrossRef]
  14. Li, H.; Wang, Y.; Xie, L. Output tracking control of Boolean control networks via state feedback: Constant reference signal case. Automatica 2015, 59, 54–59. [Google Scholar] [CrossRef]
  15. Li, H.; Xie, L.; Wang, Y. Output regulation of Boolean control networks. IEEE Trans. Autom. Control 2017, 62, 2993–2998. [Google Scholar] [CrossRef]
  16. Zhang, Z.; Leifeld, T.; Zhang, P. Finite horizon tracking control of Boolean control networks. IEEE Trans. Autom. Control 2018, 63, 1798–1805. [Google Scholar] [CrossRef]
  17. Zhao, Y.; Zhao, X.; Fu, S.; Xia, J. Robust output tracking of Boolean control networks over finite time. Mathematics 2022, 10, 4078. [Google Scholar] [CrossRef]
  18. Gao, Z.; Feng, J.-E. Research status of nonlinear feedback shift register based on semi-tensor product. Mathematics 2022, 10, 3538. [Google Scholar] [CrossRef]
  19. Wang, S.; Li, H. Resolution of fuzzy relational inequalities with Boolean semi-tensor product composition. Mathematics 2021, 9, 937. [Google Scholar] [CrossRef]
  20. Shmulevich, I.; Dougherty, E.R.; Zhang, W. From Boolean to probabilistic Boolean networks as models of genetic regulatory networks. Proc. IEEE 2002, 90, 1778–1792. [Google Scholar] [CrossRef]
  21. Shmulevich, I.; Dougherty, E.R.; Kim, S.; Zhang, W. Probabilistic Boolean networks: A rule-based uncertainty model for gene regulatory networks. Bioinformatics 2002, 18, 261–274. [Google Scholar] [CrossRef]
  22. Kim, S.; Li, H.; Dougherty, E.R.; Cao, N.; Chen, Y.; Bittner, M.; Suh, E.B. Can Markov chain models mimic biological regulation? J. Biol. Syst. 2002, 10, 337–357. [Google Scholar] [CrossRef]
  23. Meng, M.; Xiao, G.; Zhai, C.; Li, G. Controllability of Markovian jump Boolean control networks. Automatica 2019, 106, 70–76. [Google Scholar] [CrossRef]
  24. Chen, B.; Cao, J.; Lu, G.; Rutkowski, L. Stabilization of Markovian jump Boolean control networks via sampled-data control. IEEE Trans. Cybern. 2022, 52, 10290–10301. [Google Scholar] [CrossRef]
  25. Chen, B.; Cao, J.; Lu, G.; Rutkowski, L. Stabilization of Markovian jump Boolean control networks via event-triggered control. IEEE Trans. Autom. Control 2023, 68, 1215–1222. [Google Scholar] [CrossRef]
  26. Melhem, K.; Wang, W. Global output tracking control of flexible joint robots via factorization of the manipulator mass matrix. IEEE Trans. Robot. 2009, 25, 428–437. [Google Scholar] [CrossRef]
  27. Al-Hiddabi, S.A.; McClamroch, N.H. Tracking and maneuver regulation control for nonlinear nonminimum phase systems: Application to flight control. IEEE Trans. Control Syst. Technol. 2002, 10, 780–792. [Google Scholar] [CrossRef]
  28. Li, H.; Wang, Y.; Guo, P. State feedback based output tracking control of probabilistic Boolean networks. Inf. Sci. 2016, 349–350, 1–11. [Google Scholar] [CrossRef]
  29. Chen, B.; Cao, J.; Luo, Y.; Rutkowski, L. Asymptotic output tracking of probabilistic Boolean control networks. IEEE Trans. Circuits Syst. I. Reg. Papers 2020, 67, 2780–2790. [Google Scholar] [CrossRef]
  30. Abdollahi, J.; Dubljevic, S. Lipid production optimization and optimal control of heterotrophic microalgae fed-batch bioreactor. Chem. Eng. Sci. 2012, 84, 619–627. [Google Scholar] [CrossRef]
  31. Zhang, Q.; Feng, J.-E.; Jiao, T. Finite horizon tracking control of probabilistic Boolean control networks. J. Franklin Inst. 2021, 358, 9909–9928. [Google Scholar] [CrossRef]
  32. Zhang, A.; Li, L.; Li, Y.; Lu, J. Finite-time output tracking of probabilistic Boolean control networks. Appl. Math. Comput. 2021, 411, 126413. [Google Scholar] [CrossRef]
  33. Li, Y.; Li, H.; Xiao, G. Optimal control for reachability of Markov jump switching Boolean control networks subject to output trackability. Int. J. Control 2025, 98, 200–207. [Google Scholar] [CrossRef]
  34. Khatri, C.G.; Rao, C.R. Solutions to some functional equations and their applications to characterization of probability distributions. Sankhyā Indian J. Stat. A 1968, 30, 167–180. [Google Scholar]
  35. Li, C.; Zhang, X.; Feng, J.-E.; Cheng, D. Transition analysis of stochastic logical control networks. IEEE Trans. Autom. Control 2024, 69, 1226–1233. [Google Scholar] [CrossRef]
  36. Liu, Z.; Wang, Y.; Li, H. Two kinds of optimal controls for probabilistic mix-valued logical dynamic networks. Sci. China Inf. Sci. 2014, 57, 1–10. [Google Scholar] [CrossRef]
  37. Wu, Y.; Shen, T. An algebraic expression of finite horizon optimal control algorithm for stochastic logical dynamical systems. Syst. Control Lett. 2015, 82, 108–114. [Google Scholar] [CrossRef]
  38. Meng, M.; Liu, L.; Feng, G. Stability and l1 gain analysis of Boolean networks with Markovian jump parameters. IEEE Trans. Autom. Control 2017, 62, 4222–4228. [Google Scholar] [CrossRef]
  39. Acernese, A.; Yerudkar, A.; Glielmo, L.; Del Vecchio, C. Reinforcement learning approach to feedback stabilization problem of probabilistic Boolean control networks. IEEE Control Syst. Lett. 2021, 5, 337–342. [Google Scholar] [CrossRef]
Figure 1. E{e(z(0), y, y_r)} for each z(0) ∈ Δ_16 under the optimal policy.
Figure 2. E{ẽ(z(0), u)} for each z(0) ∈ Δ_16 under the optimal policy.
Figure 3. E{e(z(0), y, y_r)} for each z(0) ∈ Δ_1024 under the optimal policy.
Figure 4. E{ẽ(z(0), u)} for each z(0) ∈ Δ_1024 under the optimal policy.
Figure 5. Methodology framework diagram.
Table 1. Notations.
$\mathbb{N}$: the set of natural numbers
$\mathcal{D}$, $\mathcal{D}^n$: $\{0,1\}$ and $\mathcal{D}\times\mathcal{D}\times\cdots\times\mathcal{D}$ ($n$ factors)
$I_n$: the $n$-dimensional identity matrix
$\delta_n^i$: the $i$-th column of $I_n$
$\Delta_n$: the set of columns of $I_n$; $\Delta:=\Delta_2$
$v(X)$: the vector form of $X\in\mathcal{D}$, i.e., $v(X)=\delta_2^{2-X}\in\Delta$
$[A]_{ij}$: the $(i,j)$-th entry of the matrix $A$
$\mathrm{Col}_i(A)$: the $i$-th column of the matrix $A$
$[v]_i$: the $i$-th entry of the vector $v$
$[n:m]$: the set of integers $\{n,n+1,\dots,m\}$
$\delta_n[i_1\ i_2\ \cdots\ i_m]$: the logical matrix whose $k$-th column is $\delta_n^{i_k}$
$\mathbb{R}_{n\times m}$: the set of $n\times m$ real matrices
$\mathcal{L}_{n\times m}$: the set of $n\times m$ logical matrices
$\otimes$: Kronecker product
$\ltimes$: semi-tensor product [2]
$*$: Khatri-Rao product [34]
$\ltimes_{i=1}^{n}x_i$: short for $x_1\ltimes x_2\ltimes\cdots\ltimes x_n$
$\mathbf{1}_n$, $\mathbf{0}_n$: the $n$-dimensional column vectors $[1\ 1\ \cdots\ 1]^{\top}$ and $[0\ 0\ \cdots\ 0]^{\top}$
Table 2. Reference trajectory 1.
t:        1            2            3            4
$Y_1^r$:  0            1            1            0
$Y_2^r$:  1            0            1            0
$y_r$:    $\delta_4^3$  $\delta_4^2$  $\delta_4^1$  $\delta_4^4$
Table 3. Reference trajectory 2.
t:        1            2            3            4            5            6
$Y_1^r$:  1            1            1            0            0            0
$Y_2^r$:  1            1            0            1            0            0
$Y_3^r$:  0            1            1            0            1            0
$y_r$:    $\delta_8^2$  $\delta_8^1$  $\delta_8^3$  $\delta_8^6$  $\delta_8^7$  $\delta_8^8$

