Article

Critic Learning-Based Safe Optimal Control for Nonlinear Systems with Asymmetric Input Constraints and Unmatched Disturbances

1 School of Artificial Intelligence, Henan University, Zhengzhou 450000, China
2 School of Software, Henan University, Kaifeng 475000, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(7), 1101; https://doi.org/10.3390/e25071101
Submission received: 29 May 2023 / Revised: 1 July 2023 / Accepted: 7 July 2023 / Published: 24 July 2023
(This article belongs to the Section Complexity)

Abstract
In this paper, the safe optimal control method for continuous-time (CT) nonlinear safety-critical systems with asymmetric input constraints and unmatched disturbances based on adaptive dynamic programming (ADP) is investigated. Initially, a new non-quadratic form function is implemented to effectively handle the asymmetric input constraints. Subsequently, the safe optimal control problem is transformed into a two-player zero-sum game (ZSG) problem to suppress the influence of unmatched disturbances, and a new Hamilton–Jacobi–Isaacs (HJI) equation is introduced by integrating the control barrier function (CBF) with the cost function to penalize unsafe behavior. Moreover, a damping factor is embedded in the CBF to balance safety and optimality. To obtain a safe optimal controller, only one critic neural network (CNN) is utilized to tackle the complex HJI equation, leading to a decreased computational load in contrast to the conventional actor–critic structure. Then, the system state and the parameters of the CNN are proven to be uniformly ultimately bounded (UUB) through the application of the Lyapunov stability method. Lastly, two examples are presented to confirm the efficacy of the presented approach.

1. Introduction

Safety-critical systems are those in which accidents or failures can result in significant consequences, including but not limited to injuries, loss of life, environmental harm, or financial losses. The emergence of safety-critical systems like unmanned aerial vehicles (UAVs) [1,2,3] and robots [4] has led to an increased focus on safety control design within the field of control systems [5,6]. Safety control designs entail control strategies that satisfy safety specifications imposed by environmental limitations or physical limitations of the system. Ignoring safety entails substantial risks to both property and personal security. To address the challenges of safe controller design, researchers have provided some effective approaches [7,8,9,10,11]. The problem of safety in the presence of unmodeled dynamics or disturbances in drones has recently been addressed by designing a robust controller based on a nonlinear estimator in [9]. In ref. [10], neural networks integrated with Lyapunov theory were first applied to critical situations in the automotive sector, and this aspect was subsequently developed in a more systematic way. In ref. [11], a quadratic programming-based method was applied to develop a safe controller. Although this method can guarantee safety locally at every time step, selecting a step size that is too small leads to redundant computations, whereas a step size that is too large causes unsafe behavior, making it challenging to ensure the safety of the system. Hence, it is crucial to identify an appropriate control design method for CT safety-critical systems that can guarantee the safety of the systems.
Recently, the CBF technique has emerged as an effective approach for ensuring the security of safety-critical systems [12,13,14]. The underlying principle of the CBF is to ensure the forward invariance of the safe set. In ref. [15], a safety-oriented reinforcement learning approach was demonstrated, where the CBF was merged into the cost function to assure both the safety and optimality of the system. Typically, the CBF component is contained within the primitive cost function to penalize behavior that violates safety constraints. Reference [16] incorporated damping factors into the CBF and intervened selectively only in the event of safety constraint violations, aiming to reduce disruptions to the optimal controller. Reference [17] introduced the utilization of the CBF and summarized verification approaches for safety-critical control systems. Nevertheless, the methods mentioned above do not take into account the presence of external disturbances, which served as a motivation for the research conducted in this paper.
As disturbances are present in almost all industrial systems and degrade control performance, it is necessary to consider external disturbances in actual projects. Recently, several methods have been proposed to address disturbances [18,19,20,21,22,23,24]. For instance, references [18,19] used $H_\infty$ control to reduce external disturbances in nonlinear systems. Reference [23] combined ADP with sliding mode control to address optimal control problems of CT nonlinear systems subject to uncertain disturbances. Reference [24] cast external disturbances as a ZSG problem, with the control strategy aiming to minimize the cost function and the disturbance strategy striving to maximize it. It is well known that the analytical solution of the HJI equation of the ZSG is difficult to find. Fortunately, with the evolution of optimal control [25,26,27], the ADP approach [28] was employed to approximately tackle the ZSG problem. For example, in reference [29], a new database-based adaptive critic algorithm was presented to study infinite-horizon robust control for nonlinear systems. However, the aforementioned methods fail to consider the capability limitations of the system due to asymmetric input constraints.
Symmetric input constraints have been widely investigated in prior research and tackled with various techniques: the control problem for uncertain impulse systems with input constraints was handled in [30], and integral reinforcement learning with an actor–critic network was used to address the tracking control problem under input constraints in [31,32]. However, there has been relatively little research on the treatment of asymmetric input constraints, which frequently occur in practical systems. Several optimal control methods exist for addressing CT nonlinear systems with asymmetric input constraints. Among them, one used a cost function with adjustable upper and lower limits of integration [33,34,35]; another proposed a switching function [36] to tackle the problem, but it only applies to linear systems. However, none of these results incorporated the CBF into CT nonlinear safety-critical systems to study the safe optimal control problem under asymmetric input constraints with external disturbances.
This study explores the safe and optimal control issue for safety-critical nonlinear systems to reject unmatched external disturbances under the condition of asymmetric input constraints. Unlike other works, a new non-quadratic form function for handling asymmetric input constraints is proposed in this paper. To tackle the challenge posed by unmatched disturbances, a two-player ZSG is put forward to formulate the optimization problem. The ZSG is then addressed by finding the Nash equilibrium point, which is obtained by addressing the HJI equation. However, since solving the HJI equation is challenging, an ADP technique similar to that used in references [37,38,39] is exploited to estimate the solution of the HJI equation. In addition, one single CNN is used instead of a dual actor–critic neural network to diminish the computational complexity in approximating the control policy. Consequently, the optimal control policy is obtained by considering the worst disturbance.
The contributions are outlined mainly as follows:
  • Asymmetric input constraints are considered in the control problem of CT nonlinear safety-critical systems. In addition, this paper proposes a new non-quadratic form function to address the issue of asymmetric input constraints. It is important to note that when applying this approach, the optimal control policy no longer remains at 0, even when the system state reaches the equilibrium point $x = 0$ (see $u^*(x)$ in Equation (15)).
  • This paper adopts the CBF to construct safety constraints and proposes designing a damping coefficient within the CBF to balance the safety and optimality of safety-critical systems based on varying safety requirements in different applications.
  • The safe optimal control problem is turned into the ZSG problem to address unmatched disturbances; then, the optimal control law is gained by tackling the HJI equation using one CNN. Using only one CNN to approximate the solution of the HJI equation effectively reduces the computational burden compared to the actor–critic network. Moreover, the system state and the CNN parameters are demonstrated to be UUB.
The following structure is adopted for this article. Section 2 provides the initial formulation of the problem. Section 3 presents a safe optimal control design for the two-player ZSG problem. Then, in Section 4, an adaptive CNN method for addressing the HJI equation using an online method is proposed, and its stability is verified. Section 5 introduces two examples to demonstrate that the presented approach is effective. Lastly, Section 6 gives conclusions.

2. Problem Statement

Consider the CT nonlinear safety-critical system as
$\dot{x} = F(x) + G(x)u + P(x)v$,
where $x = [x_1, x_2, \ldots, x_n]^T \in C_a \subset \mathbb{R}^n$ indicates the n-dimensional system state vector, $F(x) \in \mathbb{R}^n$ represents the internal dynamics, and $G(x) \in \mathbb{R}^{n \times m}$ and $P(x) \in \mathbb{R}^{n \times q}$ indicate the control and disturbance coefficient matrices, respectively. Additionally, $u \in \mathbb{R}^m$ denotes the m-dimensional control input, constrained to the set $\{u \mid u_{min} \le u \le u_{max}\}$, where $u_{max}$ and $u_{min}$ stand for the upper and lower bounds, respectively, and $v \in \mathbb{R}^q$ is the unmatched disturbance. The paper assumes $F(\cdot)$, $G(\cdot)$, $P(\cdot)$ are Lipschitz continuous with $F(0) = 0$, and that the safety-critical System (1) is stabilizable and controllable. Moreover, we assume there exist two constants $G_M > 0$ and $P_M > 0$ such that $G(x)$ and $P(x)$ are upper bounded, i.e., $\|G(x)\| \le G_M$, $\|P(x)\| \le P_M$, for any $x \in \mathbb{R}^n$.
In addition, it is essential to emphasize that C a represents a safe set for (1). C a is derived from operational restrictions, such as the allowable states of the robot arm, which is mathematically determined by
$C_a = \{x \in \mathbb{R}^n \mid z(x) \ge 0\}$, $\mathrm{int}(C_a) = \{x \in \mathbb{R}^n \mid z(x) > 0\}$, $\partial C_a = \{x \in \mathbb{R}^n \mid z(x) = 0\}$,
where $z(x)$ is continuous with respect to x. The set $\mathrm{int}(C_a)$ denotes the interior of $C_a$, while $\partial C_a$ represents the boundary of $C_a$.
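As a concrete illustration, the set-membership tests implied by (2) can be written directly in terms of a user-chosen $z(x)$. The following Python sketch uses a hypothetical constraint function (keeping the state outside a ball), which is not taken from the paper:

```python
import numpy as np

# Sketch of the set-membership tests implied by (2). The constraint
# function z below is a hypothetical example (keep the state outside a
# ball of radius 0.5), not taken from the paper.

def z(x):
    return float(np.dot(x, x) - 0.25)

def in_safe_set(x):             # x in C_a        <=>  z(x) >= 0
    return z(x) >= 0.0

def in_interior(x):             # x in int(C_a)   <=>  z(x) > 0
    return z(x) > 0.0

def on_boundary(x, tol=1e-9):   # x on the boundary of C_a  <=>  z(x) = 0
    return abs(z(x)) <= tol
```

Any continuous $z$ describing the operational restriction (e.g., allowable robot-arm states) can be substituted without changing the three tests.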
Subsequently, the representation of the infinite horizon cost function from t = 0 for System (1) is given by
$V(x) = \int_0^{\infty} \left[ x(t)^T Q x(t) + U(u) - \Upsilon^2 \|v\|^2 \right] dt$,
where $Q$ is a positive definite matrix, $\|v\|^2 = v^T v$, $\Upsilon > 0$ represents a constant weight coefficient, and $U(u)$ is a non-quadratic form function employed for handling the asymmetric input constraints, determined by
$U(u) = 2 \int_{\Im}^{u} \Psi \tanh^{-1}\!\left(\frac{t - \Im}{\Psi}\right) dt = 2\Psi(u - \Im)\tanh^{-1}\!\left(\frac{u - \Im}{\Psi}\right) + \Psi^2 \ln\!\left(1 - \frac{(u - \Im)^2}{\Psi^2}\right)$,
with $\Psi$ and $\Im$ defined as
$\Psi = \frac{1}{2}(u_{max} - u_{min}), \quad \Im = \frac{1}{2}(u_{max} + u_{min})$,
where $u_{max} \ne -u_{min}$ and $\tanh(z) = (e^z - e^{-z})/(e^z + e^{-z})$ with $z \in \mathbb{R}$.
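To make the role of (4) and (5) concrete, the sketch below evaluates the non-quadratic penalty $U(u)$ both by its closed form and by numerical quadrature of the defining integral; the bounds $u_{min} = -1$, $u_{max} = 2$ are illustrative choices, not prescribed here:

```python
import numpy as np

# Sketch of the non-quadratic penalty U(u) from (4)-(5). The asymmetric
# bounds u_min = -1, u_max = 2 are illustrative, not prescribed here.
u_min, u_max = -1.0, 2.0
Psi = 0.5 * (u_max - u_min)    # 1.5
Iota = 0.5 * (u_max + u_min)   # 0.5, the offset that makes the constraint asymmetric

def U_closed(u):
    # closed form: 2 Psi (u - Iota) atanh((u - Iota)/Psi) + Psi^2 ln(1 - (u - Iota)^2/Psi^2)
    s = (u - Iota) / Psi
    return 2.0 * Psi * (u - Iota) * np.arctanh(s) + Psi**2 * np.log(1.0 - s**2)

def U_numeric(u, n=20001):
    # defining integral: U(u) = 2 * int_{Iota}^{u} Psi * atanh((t - Iota)/Psi) dt
    t = np.linspace(Iota, u, n)
    f = Psi * np.arctanh((t - Iota) / Psi)
    return 2.0 * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
```

Both evaluations agree; $U$ vanishes only at $u = \Im$ (not at $u = 0$), and $U \to \infty$ as $u$ approaches either bound, which is what penalizes constraint violations.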
Remark 1. 
Even though $\tanh(z)$ is symmetric, $U(u)$ in (4) generates asymmetric constraints on the control signal $u^*(x)$ (see $u^*(x)$ in later (15)). This is due to the fact that $\Im \ne 0$ in (4). This feature is different from the symmetric input constraint case.
Additionally, the ultimate objective of this paper is to devise the safe and optimal control input policy for (1), which involves the utilization of the CBF concept. In the upcoming section, this paper presents the concept of the CBF and proposes an ADP-based approach to design the safe and optimal controller.

3. Safe Optimal Control Design

This section presents a detailed explanation of the concept of the CBF. Then, the safe and optimal control problem is converted to the two-player ZSG to overcome the unmatched disturbances, and the CBF is integrated with the cost function without an intermediary to punish unsafe behavior.

3.1. Control Barrier Function

The utilization of the CBF provides a solution to address the safety constraint problem in safety-critical systems. The CBF is a function that is non-negative within the set C a and exhibits divergence to infinity at the edge of C a . As the state x is about to reach the boundary of C a , the condition of negative derivative can bring the system state x back within C a , ensuring that the system state is always confined within C a . To better illustrate the properties of the CBF, the following assumption is given.
Assumption 1. 
The CBF candidate B r ( x ) meets the subsequent three characteristics [40,41]:
(1) 
$B_r(x) \ge 0$, $\forall x \in \mathrm{int}(C_a)$,
(2) 
$B_r(x) \to \infty$ as $x \to \partial C_a$,
(3) 
$B_r(x)$ is monotonically decreasing with respect to $z(x)$, $\forall x \in C_a$.
Moreover, for all $x \in C_a$, the CBF $B_r(x)$ has the following properties:
$\frac{1}{\gamma_1(z(x))} \le B_r(x) \le \frac{1}{\gamma_2(z(x))}, \quad \dot{B}_r(x) \le \gamma_3(z(x))$,
where γ 1 ( · ) , γ 2 ( · ) , and γ 3 ( · ) are class K functions.
Under the premise that Assumption 1 and Equation (6) both hold, a suitable choice for B r ( x ) is ρ y ( x ) / z ( x ) , where y ( x ) represents a special scheduling function determined by the user to allow for flexibility in selecting B r ( x ) . Specifically, y ( x ) ensures that the CBF operates only when the system is close to the unsafe set. ρ > 0 is the damping factor used to balance safety and optimality.
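A minimal sketch of such a candidate $B_r(x) = \rho\, y(x)/z(x)$ follows; the particular $z$ and scheduling function $y$ are hypothetical choices that merely satisfy the stated properties (the barrier blows up as $z \to 0^+$ and fades away from the boundary):

```python
import numpy as np

# Sketch of a CBF candidate B_r(x) = rho * y(x) / z(x). Both z (safe set:
# outside a ball of radius 0.5) and the scheduling function y are
# hypothetical choices consistent with Assumption 1, not from the paper.

rho = 2.0                      # damping factor: larger rho emphasizes safety

def z(x):                      # z > 0 inside the safe set, z = 0 on its boundary
    return float(np.dot(x, x) - 0.25)

def y(x):                      # fades toward 0 far from the boundary, positive near it
    return 1.0 / (1.0 + z(x))

def B_r(x):
    # blows up as z(x) -> 0+ (state approaching the unsafe boundary),
    # decays deep inside the safe set, and scales linearly with rho
    return rho * y(x) / z(x)
```

Scaling `rho` up makes the barrier term dominate the cost sooner, pushing trajectories away from the unsafe set earlier (safety emphasized); scaling it down lets the optimal-control terms dominate (optimality emphasized), matching Remark 2.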
Remark 2. 
In contrast to the previous CBF [16], the ρ chosen here shows a positive correlation with the value of B r ( x ) . The larger the value of ρ, the faster the system state moves away from the unsafe set, and the smaller the value of ρ, the slower the state x moves away from the unsafe set. A smaller value of ρ emphasizes optimality and a larger value of ρ enforces safety.

3.2. Safe and Optimal Control Approach

By augmenting the selected CBF B r ( x ) to the cost function (3), a new refined cost function is obtained, that is,
$V(x) = \int_0^{\infty} \left[ x(t)^T Q x(t) + U(u) - \Upsilon^2 \|v\|^2 + B_r(x) \right] dt$.
Remark 3. 
To ensure the safety of the system, it is assumed that the initial system state x is confined within the set $C_a$. The reason is that $B_r(x)$ increases rapidly as the state x nears the boundary of $C_a$; if the initial state were outside $C_a$, this growth would penalize the convergence behavior itself and prevent the system state from converging.
The conventional control problems can be transformed into two-player ZSG problems. The Nash equilibrium point, i.e., the saddle point ( u * , v * ) can be obtained by addressing the special HJI equation. Then, the optimal cost function is defined by
$V^*(x) = \min_{u} \max_{v} \int_0^{\infty} \left[ x(t)^T Q x(t) + U(u) - \Upsilon^2 \|v\|^2 + B_r(x) \right] dt$.
The purpose of the two-player ZSG problem is to identify a saddle point so that the following inequality can hold:
$V^*(x, u^*, v) \le V^*(x, u^*, v^*) \le V^*(x, u, v^*)$.
Therefore, for the two-player ZSG problem, u * is the optimal control input policy minimizing the cost function, and v * represents the worst disturbance input policy maximizing the cost function.
Definition 1. 
Input policy u is considered admissible in relation to (7) on $\mho \subseteq \mathbb{R}^n$, denoted by $u \in \Psi(\mho)$, if u is continuous on $\mho$, u stabilizes (1) on $\mho$, and (7) is finite for any $x \in \mho$.
For the admissible input policy u ( ) , if Equation (7) is continuously differentiable, computing the gradient of V ( x ) with respect to t on both sides of Equation (7) yields the nonlinear Lyapunov equation as
$0 = \nabla V(x)^T (F(x) + G(x)u + P(x)v) + x(t)^T Q x(t) + U(u) - \Upsilon^2 \|v\|^2 + B_r(x)$,
where $\nabla V(x)$ is the gradient of $V(x)$, with $V(0) = 0$.
Based on the optimal control approach, the HJI equation for the two-player ZSG problem possesses an exclusive solution if there exists a saddle point, that is, if the following conditions hold:
$0 = \min_{u} \max_{v} H(x, u, v, \nabla V^*(x)) = \max_{v} \min_{u} H(x, u, v, \nabla V^*(x))$,
where $H(x, u, v, \nabla V^*(x))$ refers to the Hamiltonian function of the safety-critical system (1), that is,
$H(x, u, v, \nabla V^*(x)) = \nabla V^*(x)^T (F(x) + G(x)u + P(x)v) + x(t)^T Q x(t) + U(u) - \Upsilon^2 \|v\|^2 + B_r(x)$.
By using Equations (11) and (12), the saddle point can be found by addressing two equations as
$u^*(x) = \arg\min_{u} H(x, u, v, \nabla V^*(x))$,
and
$v^*(x) = \arg\max_{v} H(x, u, v, \nabla V^*(x))$.
Thus, the saddle point $(u^*, v^*)$ can be gained as
$u^*(x) = -\Psi \tanh\!\left(\frac{1}{2\Psi} G(x)^T \nabla V^*(x)\right) + \psi$,
and
$v^*(x) = \frac{1}{2\Upsilon^2} P(x)^T \nabla V^*(x)$,
where $\psi = [\Im, \Im, \ldots, \Im]^T \in \mathbb{R}^m$ with $\Im$ given by Equation (5).
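Computationally, once an approximation of $\nabla V^*(x)$ is available, (15) and (16) are closed-form expressions. The sketch below evaluates them for illustrative $G$, $P$, constraint bounds, and $\Upsilon$ (all placeholders, not values from this section); note how the tanh parameterization keeps every component of $u^*$ inside $(\Im - \Psi, \Im + \Psi) = (u_{min}, u_{max})$:

```python
import numpy as np

# Sketch of the saddle-point policies (15)-(16) given a cost gradient
# grad_V approximating grad V*(x). G, P, the bounds, and Upsilon are
# illustrative placeholders.

u_min, u_max = -1.0, 2.0
Psi = 0.5 * (u_max - u_min)
Iota = 0.5 * (u_max + u_min)
Upsilon = 2.0

def u_star(grad_V, G):
    # (15): u*(x) = -Psi * tanh(G(x)^T grad_V / (2 Psi)) + Iota (componentwise),
    # so every component lies in (Iota - Psi, Iota + Psi) = (u_min, u_max)
    return -Psi * np.tanh(G.T @ grad_V / (2.0 * Psi)) + Iota

def v_star(grad_V, P):
    # (16): worst-case disturbance v*(x) = P(x)^T grad_V / (2 Upsilon^2)
    return P.T @ grad_V / (2.0 * Upsilon**2)
```

The minimizing player saturates smoothly against the asymmetric bounds, while the maximizing (disturbance) player is unconstrained and linear in the gradient.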
Remark 4. 
Given that $\Im \ne 0$ from (5), it can be concluded that $u(0) \ne 0$. Therefore, in order to establish the equilibrium point of (1) at $x = 0$, the assumption $G(0) = 0$ is necessary.
Substituting Equations (15) and (16) into Equation (11), the HJI equation can be redefined as
$0 = \nabla V^*(x)^T (F(x) + P(x)v^*) + x(t)^T Q x(t) + U(-\Psi\tanh(T(x)) + \psi) - \Upsilon^2 \|v^*\|^2 - \Psi \nabla V^*(x)^T G(x)\tanh(T(x)) + \nabla V^*(x)^T G(x)\psi + B_r(x)$,
where $T(x) = (1/(2\Psi)) G(x)^T \nabla V^*(x)$ and $V^*(0) = 0$.
For the optimal safe control problem of the ZSG with unmatched external disturbances and asymmetric input constraints, it is necessary to obtain the value corresponding to the optimal cost Function (8) for achieving the optimal control input Policy (15) and the worst disturbance input Policy (16). Therefore, the solution of Equation (17) needs to be obtained. Nevertheless, since Equation (17) represents a nonlinear partial differential equation, it is challenging to find its analytical solution using conventional mathematical approaches. Hence, the solution of this equation is estimated by using the CNN in the next section.

4. Adaptive CNN Design

4.1. Solving the HJI Equation via the CNN

This section designs a CNN to estimate cost function V * ( x ) as
$V^*(x) = W_c^T \delta(x) + \xi(x)$,
where $\xi(x)$ represents the approximation error of the CNN with $\xi(0) = 0$, $W_c \in \mathbb{R}^r$ represents the ideal weight vector of the CNN, $\delta(x) = [\delta_1(x); \delta_2(x); \ldots; \delta_r(x)]$ represents the activation function with $\delta_j(0) = 0$, $j = 1, 2, \ldots, r$, and r is the number of neurons in the CNN.
The gradient of the optimal cost function is
$\nabla V^*(x) = \nabla\delta(x)^T W_c + \nabla\xi(x)$.
Substituting Equation (19) into Equation (15), u * ( x ) can be represented as
$u^*(x) = -\Psi\tanh(\bar{A}(x)) + \xi_{u^*}(x) + \psi$,
where
$\bar{A}(x) = \frac{1}{2\Psi} G(x)^T \nabla\delta(x)^T W_c$,
and
$\xi_{u^*}(x) = -\frac{1}{2}\left(I_m - \Phi(A(x))\right) G(x)^T \nabla\xi(x)$,
with $\Phi(A(x)) = \mathrm{diag}\{\tanh^2(A_l(x))\}$ $(l = 1, 2, \ldots, m)$, where $A(x) = [A_1(x); A_2(x); \ldots; A_m(x)] \in \mathbb{R}^m$ is selected between $\bar{A}(x)$ and $T(x)$ (by the mean value theorem). Then, considering Equation (19), $v^*(x)$ in Equation (16) can be redefined as
$v^*(x) = \frac{1}{2\Upsilon^2} P(x)^T \nabla\delta(x)^T W_c + \xi_{v^*}(x)$,
where $\xi_{v^*}(x) = \frac{1}{2\Upsilon^2} P(x)^T \nabla\xi(x)$.
Similarly, substituting Equation (19) into Equation (17), the HJI equation can be rewritten as
$0 = W_c^T \nabla\delta(x)(F(x) + P(x)v^*) + x^T Q x + U(-\Psi\tanh(\bar{A}(x) + K(x)) + \psi) + B_r(x) - \Psi W_c^T \nabla\delta(x) G(x)\tanh(\bar{A}(x) + K(x)) - \Psi \nabla\xi(x)^T G(x)\tanh(\bar{A}(x) + K(x)) + \nabla\xi(x)^T (F(x) + P(x)v^*) - \Upsilon^2\|v^*\|^2 + (W_c^T \nabla\delta(x) + \nabla\xi(x)^T) G(x)\psi$,
where $K(x) = (1/(2\Psi)) G(x)^T \nabla\xi(x)$.
However, since the ideal CNN weight $W_c$ in Equation (18) is unknown, it cannot be used directly in the control procedure. Hence, the CNN is used to estimate the cost function and its gradient as
$\hat{V}(x) = \hat{W}_c^T \delta(x)$,
$\nabla\hat{V}(x) = \nabla\delta(x)^T \hat{W}_c$,
where $\hat{W}_c$ represents the estimate of $W_c$.
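As a sketch of (25) and (26), the critic below uses quadratic polynomial activations for a three-dimensional state (an illustrative choice; any $\delta$ with $\delta_j(0) = 0$ would do), with the gradient $\nabla\hat{V}(x) = \nabla\delta(x)^T \hat{W}_c$ computed from the feature Jacobian:

```python
import numpy as np

# Sketch of the critic approximation (25)-(26) with quadratic polynomial
# activations for a 3-dimensional state (an illustrative choice).

def delta(x):
    x1, x2, x3 = x
    return np.array([x1*x1, x1*x2, x1*x3, x2*x2, x2*x3, x3*x3])

def grad_delta(x):
    # feature Jacobian d(delta)/dx, shape (r, n) = (6, 3)
    x1, x2, x3 = x
    return np.array([[2*x1, 0.0,  0.0],
                     [x2,   x1,   0.0],
                     [x3,   0.0,  x1],
                     [0.0,  2*x2, 0.0],
                     [0.0,  x3,   x2],
                     [0.0,  0.0,  2*x3]])

def V_hat(x, W_hat):        # (25): V_hat(x) = W_hat^T delta(x)
    return float(W_hat @ delta(x))

def grad_V_hat(x, W_hat):   # (26): grad V_hat(x) = grad_delta(x)^T W_hat
    return grad_delta(x).T @ W_hat
```

Since every $\delta_j(0) = 0$, the approximation automatically satisfies $\hat{V}(0) = 0$, consistent with $V^*(0) = 0$ in (17).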
Therefore, the approximate optimal input and the approximate worst disturbance input become
$\hat{u}^*(x) = -\Psi\tanh\!\left(\frac{1}{2\Psi} G(x)^T \nabla\delta(x)^T \hat{W}_c\right) + \psi$,
and
$\hat{v}^*(x) = \frac{1}{2\Upsilon^2} P(x)^T \nabla\delta(x)^T \hat{W}_c$.
Subsequently, the approximated Hamilton function can be formulated by
$\hat{H}(x, \hat{W}_c, \hat{v}^*) = \hat{W}_c^T \eth + \hat{W}_c^T \nabla\delta(x) G(x)\psi + U(-\Psi\tanh(\Gamma(x)) + \psi) + x^T Q x - \Upsilon^2\|\hat{v}^*\|^2 - \Psi\hat{W}_c^T \nabla\delta(x) G(x)\tanh(\Gamma(x)) + B_r(x)$,
where
$\eth = \nabla\delta(x)(F(x) + P(x)\hat{v}^*)$
and
$\Gamma(x) = \frac{1}{2\Psi} G(x)^T \nabla\delta(x)^T \hat{W}_c$.
The CNN weight estimation error is denoted by
$\tilde{W}_c = W_c - \hat{W}_c$,
and the approximation error ϱ c of the Hamiltonian function is derived as
$\varrho_c = \hat{H}(x, \hat{W}_c, \hat{v}^*) - H(x, u^*, v^*, \nabla V^*(x)) = \hat{H}(x, \hat{W}_c, \hat{v}^*)$.
To achieve $\hat{W}_c \to W_c$, it is necessary to ensure that $\varrho_c \to 0$. Therefore, the chosen target function is $E = \frac{1}{2}\varrho_c^T\varrho_c$, and, with the normalization term $O = 1 + \eth^T\eth$, the weight vector $\hat{W}_c$ is updated by the normalized gradient descent rule
$\dot{\hat{W}}_c = -\frac{\alpha}{O^2}\frac{\partial E}{\partial \hat{W}_c} = -\alpha\frac{\eth}{O^2}\varrho_c$,
with $\alpha > 0$ being the adjustable parameter and $\varrho_c$ defined as in Equation (33).
Using Equations (32) and (34), the dynamics of the weight approximation error $\tilde{W}_c$ can be expressed as
$\dot{\tilde{W}}_c = \alpha\frac{\zeta}{O}\xi_c - \alpha\zeta\zeta^T\tilde{W}_c$,
where $\xi_c = -\nabla\xi(x)^T(F(x) + P(x)\hat{v}^*)$ is the residual error and $\zeta = \eth/O$.
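A discrete-time sketch of the normalized gradient-descent update (34) follows; the Hamiltonian residual $\varrho_c$ and the regressor ð are assumed to be supplied by the caller, and the values of α and the step size are illustrative:

```python
import numpy as np

# Discrete-time sketch of the normalized gradient-descent critic update
# (34). The caller supplies the regressor d (denoted eth in the text) and
# the Hamiltonian residual; alpha and dt are illustrative values.

alpha = 10.0

def critic_update(W_hat, d, residual, dt=1e-3):
    """One Euler step of W_hat_dot = -alpha * (d / O^2) * residual, O = 1 + d^T d."""
    O = 1.0 + float(d @ d)
    return W_hat - dt * alpha * (d / O**2) * residual
```

For a residual that is linear in the weights, repeated steps drive it toward zero, mirroring the role of (34) in making $\varrho_c \to 0$; the $1/O^2$ normalization keeps the effective gain bounded regardless of the regressor magnitude.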

4.2. Stability Analysis

The UUB of both the state x and the CNN parameters in the closed-loop system is demonstrated by utilizing the Lyapunov stability analysis principle in this subsection. First, two assumptions that were also used in [28,42] are required.
Assumption 2. 
The ideal optimal CNN weight vector $W_c$ is upper bounded, i.e., $\|W_c\| \le b_{W_c}$, where $b_{W_c} > 0$ is a constant. Moreover, for any $x$, this paper assumes that there are two known constants $b_\delta > 0$ and $b_{\nabla\delta} > 0$ so that $\|\delta(x)\| \le b_\delta$, $\|\nabla\delta(x)\| \le b_{\nabla\delta}$. Meanwhile, there exist $b_\xi > 0$ and $b_{\nabla\xi} > 0$ so that $\|\xi(x)\| \le b_\xi$, $\|\nabla\xi(x)\| \le b_{\nabla\xi}$ for any $x$.
Assumption 3. 
Let $b_{\xi_{u^*}}$, $b_{\xi_{v^*}}$, $b_{\xi_c}$ be positive constants such that:
(1) 
$\|\xi_{u^*}(x)\| \le b_{\xi_{u^*}}$ for any $x$.
(2) 
$\|\xi_{v^*}(x)\| \le b_{\xi_{v^*}}$ for any $x$.
(3) 
$\|\xi_c\| \le b_{\xi_c}$ for any $x$.
Theorem 1. 
Assuming Assumptions 1–3 are met, consider System (1) with the associated Control (27) and the CNN update rule (34); then all signals in the closed-loop nonlinear system are UUB if the following condition holds:
$\alpha k_{min}(\zeta\zeta^T) - (1/\Upsilon^2)\|\nabla\delta\|^2 P_M^2 > 0$.
Proof. 
We choose the Lyapunov candidate function as follows (for convenience, $V^*(x)$ and $(1/2)\tilde{W}_c^T\tilde{W}_c$ are abbreviated as $L_1$ and $L_2$ below):
$L(t) = V^*(x) + (1/2)\tilde{W}_c^T\tilde{W}_c = L_1 + L_2$.
Taking the derivative of $L_1$ in Equation (37) along System (1) yields
$\dot{L}_1 = \frac{dV^*(x)}{dt} = \nabla V^*(x)^T(F(x) + G(x)\hat{u}^* + P(x)\hat{v}^*) = \nabla V^*(x)^T(F(x) + G(x)u^* + P(x)v^*) + \nabla V^*(x)^T P(x)(\hat{v}^* - v^*) + \nabla V^*(x)^T G(x)(\hat{u}^* - u^*)$.
Then, using Equations (12) and (11), it can be derived as
$\nabla V^*(x)^T(F(x) + G(x)u^* + P(x)v^*) = -x^T Q x - U(u^*) + \Upsilon^2\|v^*\|^2 - B_r(x)$.
Similarly, taking into account Equations (27) and (28), the derived results are
$\nabla V^*(x)^T G(x) = 2\Psi\left(\tanh^{-1}((\psi - u^*)/\Psi)\right)^T$,
and
$\nabla V^*(x)^T P(x) = 2\Upsilon^2 v^{*T}$.
According to Equations (38)–(41), Equation (38) can be rewritten as
$\dot{L}_1 = -x^T Q x + \Lambda_1 + \Lambda_2$,
where $\Lambda_1 = \bar{\omega} - U(u^*)$, $\Lambda_2 = 2\Upsilon^2 v^{*T}\hat{v}^* - \Upsilon^2\|v^*\|^2 - B_r(x)$, and
$\bar{\omega} = 2\Psi\left(\tanh^{-1}((\psi - u^*)/\Psi)\right)^T(\hat{u}^* - u^*)$.
We apply Young’s inequality to Equation (43). Additionally, considering Equations (19), (20), (27), (40) and (41), ω ¯ can be formulated as
$\bar{\omega} \le \Psi^2\left\|\tanh^{-1}((\psi - u^*)/\Psi)\right\|^2 + \|\hat{u}^* - u^*\|^2 = \frac{1}{4}\|G(x)^T\nabla V^*(x)\|^2 + \|\hat{u}^* - u^*\|^2 = \frac{1}{4}\|G(x)^T(\nabla\delta(x)^T W_c + \nabla\xi(x))\|^2 + \|{-\Psi}\tanh(\Gamma(x)) + \Psi\tanh(\bar{A}(x)) - \xi_{u^*}(x)\|^2$.
Furthermore, utilizing Young’s inequality, ω ¯ in Equation (44) further yields
$\bar{\omega} \le 2\|{-\Psi}\tanh(\Gamma(x)) + \Psi\tanh(\bar{A}(x))\|^2 + 2\|\xi_{u^*}(x)\|^2 + \frac{1}{2}\|G(x)^T\nabla\delta(x)^T W_c\|^2 + \frac{1}{2}\|G(x)^T\nabla\xi(x)\|^2 \le 4\left(\|\Psi\tanh(\Gamma(x))\|^2 + \|\Psi\tanh(\bar{A}(x))\|^2\right) + 2\|\xi_{u^*}(x)\|^2 + \frac{1}{2}\|G(x)^T\nabla\delta(x)^T W_c\|^2 + \frac{1}{2}\|G(x)^T\nabla\xi(x)\|^2$.
According to Equations (21) and (31), the following inequalities can be depicted as
$\|\tanh(\Gamma(x))\|^2 = \sum_{l=1}^{m}\tanh^2(\Gamma_l(x)) \le m$
and
$\|\tanh(\bar{A}(x))\|^2 = \sum_{l=1}^{m}\tanh^2(\bar{A}_l(x)) \le m$.
Based on Equation (46) and Assumptions 2 and 3, ω ¯ can be expressed as
$\bar{\omega} \le 8\Psi^2 m + \frac{1}{2}G_M^2\left(\|\nabla\delta\|^2\|W_c\|^2 + \|\nabla\xi\|^2\right) + 2b_{\xi_{u^*}}^2$.
By observing Equations (4) and (5), it can be concluded that $U(u^*) > 0$. Using Young's inequality and Equation (48), the expression of $\Lambda_1$ in Equation (42) can be bounded as
$\Lambda_1 \le 8\Psi^2 m + \frac{1}{2}G_M^2\left(\|\nabla\delta\|^2\|W_c\|^2 + \|\nabla\xi\|^2\right) + 2b_{\xi_{u^*}}^2$.
Similarly, $\Lambda_2$ in Equation (42) can be rewritten as follows (note: from Assumption 1, $B_r(x) \ge 0$):
$\Lambda_2 = 2\Upsilon^2 v^{*T}\hat{v}^* - \Upsilon^2\|v^*\|^2 - B_r(x) \le \Upsilon^2\|v^*\|^2 + \Upsilon^2\|\hat{v}^*\|^2 - \Upsilon^2\|v^*\|^2 - B_r(x) \le \Upsilon^2\|\hat{v}^*\|^2 = (1/(4\Upsilon^2))\|P(x)^T\nabla\delta(x)^T(W_c - \tilde{W}_c)\|^2$.
Meanwhile, using Young's inequality and Assumptions 1 and 3, $\Lambda_2$ in Equation (50) further yields
$\Lambda_2 \le (1/(4\Upsilon^2))P_M^2\|\nabla\delta\|^2\|W_c - \tilde{W}_c\|^2 \le (1/(2\Upsilon^2))P_M^2\|\nabla\delta\|^2\left(\|W_c\|^2 + \|\tilde{W}_c\|^2\right)$.
Hence, by observing Equations (49) and (51), it can be inferred that L ˙ 1 in Equation (42) satisfies
$\dot{L}_1 \le -k_{min}(Q)\|x\|^2 + (1/(2\Upsilon^2))P_M^2\|\nabla\delta\|^2\|W_c\|^2 + 8\Psi^2 m + 2b_{\xi_{u^*}}^2 + (1/2)G_M^2\left(\|\nabla\delta\|^2\|W_c\|^2 + \|\nabla\xi\|^2\right) + (1/(2\Upsilon^2))P_M^2\|\nabla\delta\|^2\|\tilde{W}_c\|^2$.
Then, the derivative of $L_2$ in Equation (37) along the solution of Equation (34) is
$\dot{L}_2 = \tilde{W}_c^T\dot{\tilde{W}}_c = \Lambda_3 - \alpha\tilde{W}_c^T\zeta\zeta^T\tilde{W}_c$,
where $\Lambda_3 = \alpha\tilde{W}_c^T(\zeta/O)\xi_c$.
Immediately after, using Young’s inequality, Λ 3 can be depicted as
$\Lambda_3 \le \frac{\alpha}{2O}\left(\|\zeta^T\tilde{W}_c\|^2 + \|\xi_c\|^2\right) \le \alpha\left(\frac{1}{2}\tilde{W}_c^T\zeta\zeta^T\tilde{W}_c + \frac{1}{2}\|\xi_c\|^2\right)$.
Additionally, with Assumption 3 holding, it can be deduced that L ˙ 2 in Equation (53) satisfies
$\dot{L}_2 \le \frac{1}{2}\left(-\alpha\tilde{W}_c^T\zeta\zeta^T\tilde{W}_c + \alpha\|\xi_c\|^2\right) \le \frac{1}{2}\left(-\alpha k_{min}(\zeta\zeta^T)\|\tilde{W}_c\|^2 + \alpha\|\xi_c\|^2\right)$.
Using Equations (37), (52) and (55), L ˙ can be depicted as
$\dot{L} \le -k_{min}(Q)\|x\|^2 + (1/(2\Upsilon^2))P_M^2\|\nabla\delta\|^2\|W_c\|^2 + (1/2)G_M^2\left(\|\nabla\delta\|^2\|W_c\|^2 + \|\nabla\xi\|^2\right) - (1/2)\left(\alpha k_{min}(\zeta\zeta^T) - (1/\Upsilon^2)P_M^2\|\nabla\delta\|^2\right)\|\tilde{W}_c\|^2 + 8\Psi^2 m + 2b_{\xi_{u^*}}^2 + (\alpha/2)\|\xi_c\|^2$.
Finally, based on Equation (36), $\dot{L} < 0$ holds whenever
$\|x\| > \sqrt{\frac{\alpha\|\xi_c\|^2 + \Xi + A_1 P_M^2}{2k_{min}(Q)}}$,
or
$\|\tilde{W}_c\| > \sqrt{\frac{\alpha\|\xi_c\|^2 + \Xi + A_1 P_M^2}{\alpha k_{min}(\zeta\zeta^T) - A_2 P_M^2}}$,
where $\Xi = G_M^2(\|\nabla\delta\|^2\|W_c\|^2 + \|\nabla\xi\|^2) + 16\Psi^2 m + 4b_{\xi_{u^*}}^2$, $A_1 = (1/\Upsilon^2)\|\nabla\delta\|^2\|W_c\|^2$ and $A_2 = (1/\Upsilon^2)\|\nabla\delta\|^2$.
To summarize, the Lyapunov stability method has been used to demonstrate that the state x of System (1) and $\tilde{W}_c$ are UUB, with Equations (57) and (58) representing their respective bounds. The proof is complete. □

5. Simulation Study

Within this section, two examples are utilized to validate the efficacy of the proposed approach.

5.1. Example 1

Consider the F16 aircraft plant used in [28] as
$\dot{x} = F(x) + G(x)u + P(x)v$,
where $x(t) = [x_1, x_2, x_3]^T \in \mathbb{R}^3$ with $x_0 = [1, 1, 1]^T$ represents the system state vector, in which $x_1$, $x_2$, and $x_3$ represent the attack angle, the pitch rate, and the elevator deflection angle, respectively; u is the control input and v is the disturbance input. The internal dynamics and the control and disturbance coefficient matrices are expressed as
$F(x) = \begin{bmatrix} -1.01887 x_1 + 0.90506 x_2 - 0.00215 x_3 \\ 0.82225 x_1 - 1.07741 x_2 - 0.17555 x_3 \\ -x_3 \end{bmatrix}, \quad G(x) = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad P(x) = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.
The control input u is constrained to be greater than −1 and less than 2. Hence, $\Psi = 1.5$ and $\Im = 0.5$. The danger region is described as a ball with a radius of 0.15 centered at $[0.3, 0.05, -0.05]^T$. The scheduling function $y(x)$ is chosen as
$y(x) = \frac{1.5(x_1 - 0.3)^2 + 0.1(x_2 - 0.05)^2 + 1.2(x_3 + 0.05)^2 - 0.15}{(x_1 - 0.3)^2 + (x_2 - 0.05)^2 + 25(x_3 + 0.05)^2 - 0.15}$.
The function $z(x)$ is chosen as
$z(x) = (x_1 - 0.3)^2 + (x_2 - 0.05)^2 + (x_3 + 0.05)^2 - 0.15$.
In addition, substituting $\Psi$ and $\Im$ into Equation (4), $U(u)$ can be expressed as
$U(u) = 2\Psi(u - \Im)\tanh^{-1}\!\left(\frac{u - \Im}{\Psi}\right) + \Psi^2\ln\!\left(1 - \frac{(u - \Im)^2}{\Psi^2}\right) = 3(u - 0.5)\tanh^{-1}\!\left(\frac{u - 0.5}{1.5}\right) + 2.25\ln\!\left(1 - \frac{(u - 0.5)^2}{2.25}\right)$.
Letting $Q = I_3$ and $\Upsilon = 2$, the cost function for this system is formulated as
$V(x) = \int_0^{\infty}\left[x(t)^T Q x(t) + U(u) - 2^2\|v\|^2 + B_r(x)\right] dt$,
where $B_r(x) = \rho\, y(x)/z(x)$ represents the CBF and $\rho = 2$.
The activation function is given as $\delta(x) = [x_1^2, x_1 x_2, x_1 x_3, x_2^2, x_2 x_3, x_3^2]^T$ and the CNN weight vector is $\hat{W}_c = [\hat{W}_{c1}, \hat{W}_{c2}, \hat{W}_{c3}, \hat{W}_{c4}, \hat{W}_{c5}, \hat{W}_{c6}]^T$. In addition, the adjustable parameter α is set to 10, and the initial parameters of the CNN are all configured as 1. Finally, the probing noise $e^{-0.1t}(0.001)(\sin(t)^2\cos(t) + \sin(2t)^2\cos(0.1t))$ is added to the control input policy for the initial 30 s in order to ensure the persistence of excitation.
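For reference, a hedged sketch of the Example 1 rollout is given below: Euler integration of the F16 plant with a simple saturated feedback standing in as a placeholder for the learned safe optimal controller. Only the asymmetric bounds [−1, 2] come from the example; the plant signs follow the standard F16 short-period model used in [28], and the policy gain is invented for illustration:

```python
import numpy as np

# Hedged sketch of the Example 1 rollout: Euler integration of the F16
# plant under a simple saturated feedback. The policy is a placeholder,
# not the learned safe optimal controller from the paper.

u_min, u_max = -1.0, 2.0
Psi, Iota = 1.5, 0.5

def F(x):
    x1, x2, x3 = x
    return np.array([-1.01887*x1 + 0.90506*x2 - 0.00215*x3,
                      0.82225*x1 - 1.07741*x2 - 0.17555*x3,
                     -x3])

G = np.array([0.0, 0.0, 1.0])
P = np.array([0.0, 0.0, 1.0])

def policy(x):
    # placeholder feedback (gain 1.2 is arbitrary), squashed so that
    # u always stays inside (u_min, u_max)
    return -Psi * np.tanh(1.2 * x[2]) + Iota

def rollout(x0, T=10.0, dt=1e-3, v=0.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (F(x) + G * policy(x) + P * v)
    return x
```

Even with this crude stand-in policy, the tanh parameterization enforces the asymmetric bound by construction, which is the mechanism Figure 4 illustrates for the learned controller.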
Through simulation experiments, Figures 1–7 are obtained. Figure 1 displays that $\hat{W}_c$ converges after the first 10 s, yielding the ideal weight vector $W_c^* = [16.4603, 6.5022, 4.3910, 4.8851, 3.7081, 11.6158]^T$. Figure 2 displays the convergence of the states $x_1$, $x_2$, and $x_3$. Figure 3 displays the danger region, represented by the ball, and the original state trajectory passes through the danger area. However, the system states controlled by the safe optimal controller bypass this ball, and as the damping coefficient ρ increases, the distance between the system states and the dangerous region grows. Figure 3 also shows that as states $x_1$, $x_2$, and $x_3$ gradually approach the danger zone, the convergence of $x_3$ is accelerated due to the CBF term in the cost function. Figure 4 presents the control input u under asymmetric input constraints. The plot reveals that the value of u remains within the specified range, bounded by $u_{max} = 2$ and $u_{min} = -1$, providing evidence that the asymmetric input constraints are implemented successfully. Figure 5 presents the disturbance input v. Figure 6 presents the cost function of the system. It can be seen that when the system states confront the danger area, the cost function changes significantly and eventually converges to zero. The cost function imposes a higher penalty on control actions that violate the asymmetric input constraints or the safety constraints; therefore, its convergence to zero indicates that the system has found the optimal control actions satisfying all the constraints.
In order to further show the efficiency of the presented method, $U(u)$ in Equation (4) is replaced by $u^T R u$ (where $R = I_1$), and the simulation results are illustrated in Figure 7. Comparing Figure 4, where the control input is restricted to the limits of −1 to 2, with Figure 7, the input in the latter case clearly falls outside this range.

5.2. Example 2

We consider the nonlinear system as
$\dot{x} = F(x) + G(x)u + P(x)v$,
where x(t) = [x1, x2]^T ∈ R^2 with x0 = [1, 1]^T represents the system state vector; the internal dynamics, control coefficient, and disturbance coefficient matrices are expressed as
F(x) = [ (1/2)x1 + x2, 2x2 cos(2x1) ]^T, G(x) = [0, x1]^T, P(x) = [0, x1]^T.
As in the F-16 example, the control input u is subject to an asymmetric bound, with a lower bound of −1 and an upper bound of 3. Hence, Ψ = 2 and the offset value is 1. The danger region is described as a circle of radius 0.1 centered at [0.19, −0.12]^T. The function y(x) is chosen as
y(x) = atan( 1 / ( (x1 − 0.19)^2 + (x2 + 0.12)^2 − 0.1 ) ).
The function z(x) is chosen as
z(x) = (x1 − 0.19)^2 + (x2 + 0.12)^2 − 0.1.
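To make the geometry concrete, the two building blocks of the barrier can be sketched numerically (a minimal Python sketch; the constant 0.1 and the center [0.19, −0.12]^T are taken as printed above):

```python
import math

CENTER = (0.19, -0.12)  # center of the danger region, as given above

def z(x1, x2):
    # Squared distance to the danger-zone center minus the constant 0.1,
    # as printed above: positive outside the danger region, negative inside.
    return (x1 - CENTER[0])**2 + (x2 - CENTER[1])**2 - 0.1

def y(x1, x2):
    # atan(1/z): bounded in (-pi/2, pi/2), approaching pi/2 as the state
    # nears the boundary of the danger region from outside.
    return math.atan(1.0 / z(x1, x2))
```

For the initial state [1, 1]^T, z is positive and y is a small positive angle; as a trajectory approaches the boundary from outside (z → 0+), y rises toward π/2, which lets the CBF penalize proximity while remaining bounded.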
In addition, substituting Ψ = 2 and the offset value 1 into Equation (4), U(u) can be expressed as
U(u) = 2Ψ(u − 1) tanh⁻¹((u − 1)/Ψ) + Ψ^2 ln(1 − (u − 1)^2/Ψ^2) = 4(u − 1) tanh⁻¹((u − 1)/2) + 4 ln(1 − (u − 1)^2/4).
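As a quick numerical check of this penalty (a sketch; Ψ = 2 and the offset 1 follow from the bounds −1 and 3 above):

```python
import math

PSI, OFFSET = 2.0, 1.0  # half-width and midpoint of the bounds [-1, 3]

def U(u):
    # Non-quadratic penalty with Psi = 2 and offset 1:
    # U(u) = 4(u - 1) atanh((u - 1)/2) + 4 ln(1 - (u - 1)^2 / 4).
    s = (u - OFFSET) / PSI  # normalized input, valid for -1 < u < 3
    return 2.0 * PSI * (u - OFFSET) * math.atanh(s) + PSI**2 * math.log(1.0 - s * s)
```

U vanishes at the midpoint u = 1, is symmetric about it, and grows steeply (while staying finite) toward the bounds, so the optimal controller is steered away from saturation while respecting u ∈ (−1, 3).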
Letting Q = I_2 and Y = 1.35, the cost function for Equation (62) is formulated as
V(x) = ∫₀^∞ [ x(t)^T Q x(t) + U(u) − 1.35^2 v^2 + B_r(x) ] dt,
where B_r(x) = ρ y(x) z(x) represents the CBF with damping factor ρ = 0.3.
Then, the CNN presented as Equation (18) is applied to solve the HJI equation for Equation (62). The activation function is chosen as δ(x) = [x1^2, x1x2, x2^2, x1^4, x1^3x2, x1^2x2^2, x1x2^3, x2^4]^T, and the CNN weight vector is Ŵ_c = [Ŵ_c1, Ŵ_c2, Ŵ_c3, Ŵ_c4, Ŵ_c5, Ŵ_c6, Ŵ_c7, Ŵ_c8]^T. In addition, the adjustable parameter α is set to 20, and the initial CNN weights are all set to 1. Finally, the probing noise exp(−0.001t) · 0.1 ( sin(t)^2 cos(t) + sin(t)^5 + sin(2t)^2 cos(0.1t) + sin(1.2t)^2 cos(0.5t) ) is added to the control input policy for the initial 30 s.
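The critic approximation in this example is a weighted sum of the polynomial activations, V̂(x) = Ŵ_c^T δ(x), with the value gradient formed from the Jacobian of δ. A minimal sketch (function names are illustrative; the unit initial weights and the activation list are as stated above):

```python
import numpy as np

def delta(x):
    # Polynomial activation vector of the critic for Example 2.
    x1, x2 = x
    return np.array([x1**2, x1*x2, x2**2,
                     x1**4, x1**3*x2, x1**2*x2**2, x1*x2**3, x2**4])

def delta_jac(x):
    # Jacobian d(delta)/dx (8x2); the value gradient used in the HJI
    # equation is grad V_hat(x) = delta_jac(x).T @ W_hat.
    x1, x2 = x
    return np.array([
        [2*x1,        0.0],
        [x2,          x1],
        [0.0,         2*x2],
        [4*x1**3,     0.0],
        [3*x1**2*x2,  x1**3],
        [2*x1*x2**2,  2*x1**2*x2],
        [x2**3,       3*x1*x2**2],
        [0.0,         4*x2**3],
    ])

W_hat = np.ones(8)          # initial critic weights, all set to 1
x0 = np.array([1.0, 1.0])   # initial state as given above
V_hat = W_hat @ delta(x0)           # approximate value at x0
grad_V = delta_jac(x0).T @ W_hat    # approximate value gradient at x0
```

During learning, only this single weight vector is tuned, which is the source of the reduced computational load relative to a dual actor–critic structure.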
The simulation results are shown in Figures 8–14. Figure 8 shows that Ŵ_c converges within the first 10 s to the ideal vector W_c* = [84.6487, 12.2017, 9.5269, 11.7425, 3.0924, 3.4273, 0.5533, 2.0591]^T. Figure 9 shows the convergence of the states x1 and x2. Figure 10 illustrates the relationship between the system states and the dangerous area, revealing that increasing the damping factor ρ enlarges the distance between the state trajectory and the dangerous zone. Evidently, the states x1 and x2 under the safe optimal controller take an alternate route to avoid the dangerous region, while the conventional optimal controller cannot circumvent it. Figure 10 also shows that as x1 and x2 approach the danger zone, the convergence of x2 is accelerated by the joint action of the CBF and the cost function, and an optimal trajectory around the danger zone is recovered. Figure 11 shows the input u under the asymmetric input constraints: u remains within the specified range bounded by u_max = 3 and u_min = −1, confirming that the asymmetric input constraints are enforced. Figure 12 presents the disturbance input v. Figure 13 presents the cost function of the system, which eventually converges to zero. As with the linear system in Example 1, the convergence of the cost to zero indicates that the system has found optimal control actions satisfying both the asymmetric input constraints and the safety constraints.
In this paper, asymmetric input constraints and unmatched disturbances are considered for nonlinear safety-critical systems for the first time, and Equation (4) is used to handle the asymmetric input constraints. To further demonstrate the efficacy of the presented algorithm, as in [14,16,28], the penalty in Equation (4) is replaced by the quadratic form u^T R u (where R = I_1), and the simulation results are shown in Figure 14. Comparing Figure 11 with Figure 14 shows that the control input in Figure 11 remains within the limits of −1 to 3, while the input in Figure 14 clearly exceeds this range.

6. Conclusions

The safe optimal control problem of nonlinear CT safety-critical systems with asymmetric input constraints and unmatched disturbances was addressed. First, a new non-quadratic function was introduced to handle the asymmetric input constraints. Then, the control design was transformed into a two-player ZSG problem to handle the unmatched disturbances. To obtain a safe optimal controller, the CBF was combined directly with the cost function to penalize unsafe behavior. Moreover, a single CNN was applied, reducing the computational complexity relative to the dual actor–critic network. The effectiveness of the proposed method was validated by the simulation results.

Author Contributions

C.Q. and K.J. provided methodology, validation, and writing—original draft preparation; T.Z. provided conceptualization, writing—review; J.Z. provided supervision; C.Q. provided funding support. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Research Project of Henan Province (222102240014).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The authors can confirm that all relevant data are included in the article.

Conflicts of Interest

The authors declare that they have no conflict of interest. All authors have approved the manuscript and agreed with submission to this journal.

References

  1. Yi, X.; Luo, B.; Zhao, Y. Adaptive dynamic programming-based visual servoing control for quadrotor. Neurocomputing 2022, 504, 251–261. [Google Scholar] [CrossRef]
  2. Liscouët, J.; Pollet, F.; Jézégou, J.; Budinger, M.; Delbecq, S.; Moschetta, J. A methodology to integrate reliability into the conceptual design of safety-critical multirotor unmanned aerial vehicles. Aerosp. Sci. Technol. 2022, 127, 107681. [Google Scholar] [CrossRef]
  3. Dou, L.; Cai, S.; Zhang, X.; Su, X.; Zhang, R. Event-triggered-based adaptive dynamic programming for distributed formation control of multi-UAV. J. Frankl. Inst. 2022, 359, 3671–3691. [Google Scholar] [CrossRef]
  4. Molnar, T.; Cosner, R.; Singletary, A.; Ubellacker, W.; Ames, A. Model-free safety-critical control for robotic systems. IEEE Robot. Autom. Lett. 2021, 7, 944–951. [Google Scholar] [CrossRef]
  5. Nguyen, Q.; Sreenath, K. Robust safety-critical control for dynamic robotics. IEEE Trans. Autom. Control 2021, 67, 1073–1088. [Google Scholar] [CrossRef]
  6. Liu, S.; Liu, L.; Yu, Z. Safe reinforcement learning for affine nonlinear systems with state constraints and input saturation using control barrier functions. Neurocomputing 2023, 518, 562–576. [Google Scholar] [CrossRef]
  7. Han, J.; Liu, X.; Wei, X.; Sun, S. A dynamic proportional-integral observer-based nonlinear fault-tolerant controller design for nonlinear system with partially unknown dynamic. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5092–5104. [Google Scholar] [CrossRef]
  8. Ohnishi, M.; Wang, L.; Notomista, G.; Egerstedt, M. Barrier-certified adaptive reinforcement learning with applications to brushbot navigation. IEEE Trans. Robot. 2019, 35, 1186–1205. [Google Scholar] [CrossRef] [Green Version]
  9. Bianchi, D.; Di Gennaro, S.; Di Ferdinando, M.; Acosta Lùa, C. Robust Control of UAV with Disturbances and Uncertainty Estimation. Machines 2023, 11, 352. [Google Scholar] [CrossRef]
  10. Bianchi, D.; Borri, A.; Di Benedetto, M.; Di Gennaro, S. Active Attitude Control of Ground Vehicles with Partially Unknown Model. IFAC-PapersOnLine 2020, 53, 14420–14425. [Google Scholar] [CrossRef]
  11. Ames, A.; Xu, X.; Grizzle, J.; Tabuada, P. Control barrier function based quadratic programs for safety critical systems. IEEE Trans. Autom. Control 2016, 62, 3861–3876. [Google Scholar] [CrossRef]
  12. Wang, H.; Peng, J.; Zhang, F.; Zhang, H.; Wang, Y. High-order control barrier functions-based impedance control of a robotic manipulator with time-varying output constraints. ISA Trans. 2022, 129, 361–369. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, S.; Liu, L.; Yu, Z. Safe reinforcement learning for discrete-time fully cooperative games with partial state and control constraints using control barrier functions. Neurocomputing 2023, 517, 118–132. [Google Scholar] [CrossRef]
  14. Qin, C.; Wang, J.; Zhu, H.; Zhang, J.; Hu, S.; Zhang, D. Neural network-based safe optimal robust control for affine nonlinear systems with unmatched disturbances. Neurocomputing 2022, 506, 228–239. [Google Scholar] [CrossRef]
  15. Xu, X.; Tabuada, P.; Grizzle, J.W.; Ames, A. Robustness of control barrier functions for safety critical control. IFAC-PapersOnLine 2015, 48, 54–61. [Google Scholar] [CrossRef]
  16. Marvi, Z.; Kiumarsi, B. Safe reinforcement learning: A control barrier function optimization approach. Int. J. Robust Nonlinear Control 2021, 31, 1923–1940. [Google Scholar] [CrossRef]
  17. Xiao, W.; Belta, C.; Cassandras, C. Adaptive control barrier functions. IEEE Trans. Autom. Control 2021, 67, 2267–2281. [Google Scholar] [CrossRef]
  18. Modares, H.; Lewis, F.; Sistani, M. Online solution of nonquadratic two-player zero-sum games arising in the H∞ control of constrained input systems. Int. J. Adapt. Control Signal Process. 2014, 28, 232–254. [Google Scholar] [CrossRef]
  19. Qin, C.; Zhu, H.; Wang, J.; Xiao, Q.; Zhang, D. Event-triggered safe control for the zero-sum game of nonlinear safety-critical systems with input saturation. IEEE Access 2022, 10, 40324–40337. [Google Scholar] [CrossRef]
  20. Song, R.; Zhu, L. Stable value iteration for two-player zero-sum game of discrete-time nonlinear systems based on adaptive dynamic programming. Neurocomputing 2019, 340, 180–195. [Google Scholar] [CrossRef]
  21. Lu, W.; Li, Q.; Lu, K.; Lu, Y.; Guo, L.; Yan, W.; Xu, F. Load adaptive PMSM drive system based on an improved ADRC for manipulator joint. IEEE Access 2021, 9, 33369–33384. [Google Scholar] [CrossRef]
  22. Qin, C.; Qiao, X.; Wang, J.; Zhang, D. Robust Trajectory Tracking Control for Continuous-Time Nonlinear Systems with State Constraints and Uncertain Disturbances. Entropy 2022, 24, 816. [Google Scholar] [CrossRef] [PubMed]
  23. Fan, Q.; Yang, G. Adaptive actor–critic design-based integral sliding-mode control for partially unknown nonlinear systems with input disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 165–177. [Google Scholar] [CrossRef]
  24. Yang, X.; He, H. Event-driven H∞-constrained control using adaptive critic learning. IEEE Trans. Cybern. 2020, 51, 4860–4872. [Google Scholar] [CrossRef] [PubMed]
  25. Lewis, F.; Vrabie, D.; Syrmos, V. Optimal Control; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  26. Kiumarsi, B.; Vamvoudakis, K.; Modares, H.; Lewis, F. Optimal and autonomous control using reinforcement learning: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 2042–2062. [Google Scholar] [CrossRef]
  27. Liu, D.; Xue, S.; Zhao, B.; Luo, B.; Wei, Q. Adaptive dynamic programming for control: A survey and recent advances. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 142–160. [Google Scholar] [CrossRef]
  28. Vamvoudakis, K.; Lewis, F. Online actor–critic algorithm to solve the continuous-time infinite horizon optimal control problem. Automatica 2010, 46, 878–888. [Google Scholar] [CrossRef]
  29. Han, H.; Zhang, J.; Yang, H.; Hou, Y.; Qiao, J. Data-driven robust optimal control for nonlinear system with uncertain disturbances. Inf. Sci. 2023, 621, 248–264. [Google Scholar] [CrossRef]
  30. Lou, X.; Zhang, X.; Ye, Q. Robust control for uncertain impulsive systems with input constraints and external disturbance. Int. J. Robust Nonlinear Control 2022, 32, 2330–2343. [Google Scholar] [CrossRef]
  31. Wang, N.; Gao, Y.; Yang, C.; Zhang, X. Reinforcement learning-based finite-time tracking control of an unknown unmanned surface vehicle with input constraints. Neurocomputing 2022, 484, 26–37. [Google Scholar] [CrossRef]
  32. Liu, C.; Zhang, H.; Xiao, G.; Sun, S. Integral reinforcement learning based decentralized optimal tracking control of unknown nonlinear large-scale interconnected systems with constrained-input. Neurocomputing 2019, 323, 1–11. [Google Scholar] [CrossRef]
  33. Yang, X.; Zhao, B. Optimal neuro-control strategy for nonlinear systems with asymmetric input constraints. IEEE/CAA J. Autom. Sin. 2020, 7, 575–583. [Google Scholar] [CrossRef]
  34. Tang, Y.; Yang, X. Robust tracking control with reinforcement learning for nonlinear-constrained systems. Int. J. Robust Nonlinear Control 2022, 32, 9902–9919. [Google Scholar] [CrossRef]
  35. Zhou, W.; Liu, H.; He, H.; Yi, J.; Li, T. Neuro-optimal tracking control for continuous stirred tank reactor with input constraints. IEEE Trans. Ind. Inform. 2018, 15, 4516–4524. [Google Scholar] [CrossRef]
  36. Kong, L.; He, W.; Dong, Y.; Cheng, L.; Yang, C.; Li, Z. Asymmetric bounded neural control for an uncertain robot by state feedback and output feedback. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 1735–1746. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, Y.; Zhao, B.; Liu, D.; Zhang, S. Event-triggered control of discrete-time zero-sum games via deterministic policy gradient adaptive dynamic programming. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 4823–4835. [Google Scholar] [CrossRef]
  38. Zhang, S.; Zhao, B.; Liu, D.; Zhang, Y. Observer-based event-triggered control for zero-sum games of input constrained multi-player nonlinear systems. Neural Netw. 2021, 144, 101–112. [Google Scholar] [CrossRef]
  39. Wei, Q.; Liu, D.; Lin, Q.; Song, R. Adaptive dynamic programming for discrete-time zero-sum games. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 957–969. [Google Scholar] [CrossRef]
  40. Perrusquía, A.; Yu, W. Continuous-time reinforcement learning for robust control under worst-case uncertainty. Int. J. Syst. Sci. 2021, 52, 770–784. [Google Scholar] [CrossRef]
  41. Yang, Y.; Vamvoudakis, K.; Modares, H.; Yin, Y.; Wunsch, D. Safe intermittent reinforcement learning with static and dynamic event generators. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5441–5455. [Google Scholar] [CrossRef]
  42. Fu, Z.; Xie, W.; Rakheja, S.; Na, J. Observer-based adaptive optimal control for unknown singularly perturbed nonlinear systems with input constraints. IEEE/CAA J. Autom. Sin. 2017, 4, 48–57. [Google Scholar] [CrossRef]
Figure 1. Convergence of the CNN weights.
Figure 2. Convergence of system states x1, x2, and x3.
Figure 3. The comparison between the safe and unsafe states.
Figure 4. Control input in the system.
Figure 5. Disturbance input in the system.
Figure 6. The cost function of the system.
Figure 7. Control input without asymmetric input constraints.
Figure 8. Convergence of the CNN weights.
Figure 9. Convergence of system states x1 and x2.
Figure 10. The comparison between the safe and unsafe states.
Figure 11. Control input in the system.
Figure 12. Disturbance input in the system.
Figure 13. The cost function of the system.
Figure 14. Control input without asymmetric input constraints.