
Adaptive Finite-Time-Based Neural Optimal Control of Time-Delayed Wheeled Mobile Robotics Systems

1. The Key Laboratory of Intelligent Control Theory and Application of Liaoning Provincial, Liaoning University of Technology, Jinzhou 121001, China
2. The State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(17), 5462; https://doi.org/10.3390/s24175462
Submission received: 3 April 2024 / Revised: 30 July 2024 / Accepted: 13 August 2024 / Published: 23 August 2024

Abstract

For nonlinear systems with uncertain state time delays, an adaptive neural optimal tracking control method based on finite time is designed. With the help of appropriate Lyapunov–Krasovskii functionals (LKFs), the time-delay problem is handled. A novel nonquadratic Hamilton–Jacobi–Bellman (HJB) function is defined, where finite time is selected as the upper limit of integration. This function contains the state time-delay information while also retaining the basic information. To meet specific requirements, the integral reinforcement learning method is employed to solve the ideal HJB function. Then, a tracking controller is designed to ensure the finite-time convergence and optimization of the controlled system. This involves evaluation and gradient-descent updates of neural network weights within a reinforcement learning architecture. The semi-global practical finite-time stability of the controlled system and the finite-time convergence of the tracking error are guaranteed.

1. Introduction

Adaptive intelligent control algorithms have developed rapidly with the advancement of intelligent approximation techniques, especially neural networks (NNs) and fuzzy logic systems (FLSs), and have achieved a series of excellent research results [1,2,3,4,5,6,7,8,9]. This has also motivated many scholars to explore adaptive control algorithms, laying a solid foundation for applying the corresponding control theory in practical engineering.
Considering that control and decision-making problems are essentially optimization problems, and that optimal control plays a key role in engineering applications, the research on intelligent optimization control algorithms in this paper helps promote practical engineering applications. In view of the importance of optimal control, many scholars have conducted extensive research on optimal control algorithms and obtained notable achievements, mainly via two optimization methods [10,11]: adaptive dynamic programming (ADP) [12] and reinforcement learning (RL) [13].
The ADP approach could realize the online approximation of the optimal target by the recursive numerical method without relying on the control algorithm of the model [14,15,16,17,18,19]. Using NNs, the performance function, designed control laws, and the uncertain part of the nonlinear system could be approximated, which helps solve the HJB function; then, the optimal stability is guaranteed. Similar to the learning mechanism of mammals, the reinforcement learning mechanism aims to regulate both the critic and action adaptive laws in order to control the long-term interaction cost of the environment. The action NNs could modify the action laws, while the critic NNs reduce the virtual energy of the long-term storage function. Thanks to the interoperability of the operating mechanism, refs. [20,21,22,23,24] have made outstanding contributions to online optimization control and model-free optimization control.
Although previous ADP-based methods perform well for non-linear systems without time delays, achieving the ideal control effect on time-delayed non-linear systems is often challenging. Therefore, research on this topic has generated interest among experts and scholars and has achieved preliminary results. However, the time delay in the form of nonlinear interference is a major obstacle to applications of control theory algorithms. Some scholars have paid attention to this and have achieved certain results. Regarding existing methods, there are two main forms of system delay: state and input [25].
State time delays are mainly found in intricate engineering systems, for example, wheeled mobile robot (WMR) systems and chemical engineering, which are hysteresis induced by internal propagation of signals during system motion. With assistance from the Lyapunov–Krasovskii functional (LKF) [25,26,27], the influence caused by the state time delay is overcome, and superior control algorithms are designed.
The contradiction between the infinite-time convergence of existing optimization algorithms and the fast-convergence requirements of actual engineering systems has greatly inhibited the practical application and promotion of intelligent optimization algorithms. Therefore, in recent years, some scholars have focused on balancing convergence speed and convergence domain. Existing breakthrough theoretical results on infinite-time convergence algorithms [28,29,30,31,32,33] have promoted research on finite-time convergence control algorithms to a certain extent, and they also highlight both the necessity of studying such algorithms and the key difficulties faced by finite-time convergence control research.
To meet the finite-time or finite-horizon domain convergence characteristics of actual engineering requirements, some scholars have begun relevant research. For nonlinear discrete systems, researchers use the ADP-based approach to solve the finite-time domain convergence problem [34,35], which greatly stimulates the authors’ research passion for finite-time convergence optimization control algorithms.
Different from finite-horizon convergence, besides guaranteeing the time domain of system convergence, finite-time convergence also increases the speed and accuracy of system convergence. The existing research is still in its infancy, although some studies with excellent performance have been obtained [36,37,38,39,40,41,42,43,44,45]. To date, finite-time optimization algorithms that consider convergence speed and precision as well as energy consumption are essentially absent. Therefore, based on previous research, this paper considers the state time delay and uses the ADP method to effectively resolve the finite-time optimal tracking control problem of the controlled target.
An adaptive finite-time online optimal tracking control method based on neural networks is designed for uncertain nonlinear systems with state time delays. Firstly, the initial nonlinear system is extended to an augmented system, which contains the tracking error and the target expectation information, and a novel discounted performance function is presented. Secondly, a Hamiltonian function is constructed, and appropriate LKFs are used to resolve the state-delay problem. Then, the integral reinforcement learning (IRL) method is introduced to solve the ideal HJB function. Finally, by designing the optimal control strategy and the adaptive laws, and by invoking the semi-global practical finite-time stability (SGPFS) lemma, not only is the influence of the time delay eliminated, but the stability of the uncertain nonlinear system is also guaranteed. The main innovative work includes:
(1)
The time-delay effect is incorporated into the strategy design process to address the finite-time convergence issues.
(2)
The problem caused by the state time delay is solved simultaneously in the optimal control process.
(3)
The optimal control policy guarantees that the target control system achieves optimal control within a finite time.

2. System Description and Preliminaries

Consider the state time-delayed nonlinear system
$$\dot{\beta}(t) = p(\beta(t)) + h(\beta(t-t_1)) + g(t)u(t) + \omega(t)$$
where the delayed dynamics $h(\beta(t-t_1))$ is a known function vector with an unknown time delay $t_1$. For brevity, the time argument $t$ is omitted in the sequel except for the hysteresis term $\beta(t-t_1)$. $g(t)$ denotes the input function, $p(\cdot)$ denotes the state function, $u(t)$ denotes the system control input, and $\omega(t)$ denotes the external perturbation.
To handle the state time delay in system (1), an appropriate LKF is introduced. According to Remark 1 in [26], when the delayed dynamics $h(\cdot)$ are known, one can obtain $h(\beta_1) - h(\beta_2) = \left.\frac{\partial h(\beta)}{\partial \beta}\right|_{\beta=\beta_\kappa}(\beta_1 - \beta_2)$, where $\beta_\kappa = \kappa\beta_1 + (1-\kappa)\beta_2$ and $0 < \kappa < 1$.
The following scientific assumptions are made, and corresponding lemmas are given to ensure that the subsequent design process achieves the expected control objectives.
Assumption 1.
Both functions $p(\cdot)$ and $g(t)$ are continuously differentiable. For the time-delay function $p$, its Jacobian matrix $\partial p(\beta)/\partial \beta$ satisfies the Lipschitz condition $\|\partial p(\beta)/\partial \beta\| \le \eta$ with $\eta \ge 0$.
Assumption 2.
The unknown input transfer function $g(t)$ is bounded as $\underline{g} < \|g\| \le \bar{g}$. Similarly, $\sigma_{\min} \le \|\sigma\| \le \sigma_{\max}$ and $\varphi_{\min} \le \|\varphi\| \le \varphi_{\max}$ bound the functional approximation error $\sigma$ and the hidden-layer activation functions $\varphi$ of the NNs, respectively.
Lemma 1
([44]). For any states $y_i \in \mathbb{R}$, $i = 1, 2, \ldots, m$, if the positive constant satisfies $0 < q < 1$, we have
$$\left(\sum_{i=1}^{m}|y_i|\right)^{q} \le \sum_{i=1}^{m}|y_i|^{q} \le m^{1-q}\left(\sum_{i=1}^{m}|y_i|\right)^{q}$$
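Lemma 1 is a standard power-mean inequality; the short script below checks both bounds numerically. The test vectors and exponents are arbitrary illustrative choices, not values from the paper.

```python
# Numerical check of Lemma 1: for 0 < q < 1,
#   (sum|y_i|)^q  <=  sum|y_i|^q  <=  m^(1-q) * (sum|y_i|)^q.
import random

def lemma1_bounds(y, q):
    s = sum(abs(v) for v in y)
    lower = s ** q                                  # left-hand side
    middle = sum(abs(v) ** q for v in y)            # middle term
    upper = len(y) ** (1 - q) * s ** q              # right-hand side
    return lower, middle, upper

random.seed(0)
for _ in range(100):
    y = [random.uniform(-5, 5) for _ in range(6)]
    q = random.uniform(0.01, 0.99)
    lo, mid, up = lemma1_bounds(y, q)
    assert lo <= mid + 1e-12 and mid <= up + 1e-9
```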
Lemma 2
([39]). For the nonlinear system $\dot{x} = f(x)$, if (3) holds,
$$\dot{L}(x) \le -\iota L^{b}(x) + \sigma, \quad t \ge 0$$
where $L(x)$ is a smooth positive definite function, $\iota > 0$, $0 < b < 1$, and $\sigma > 0$, then the nonlinear system $\dot{x} = f(x)$ is SGPFS.
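The SGPFS condition in Lemma 2 can be illustrated numerically: whenever the Lyapunov derivative satisfies the bound with equality (the worst case), the trajectory enters a residual set of radius $(\sigma/\iota)^{1/b}$ in finite time. All constants below are illustrative assumptions, not values from the paper; the stopping threshold is slightly inflated because the set boundary itself is only approached asymptotically under exact equality.

```python
# Worst-case integration of L_dot = -iota * L^b + sigma, with 0 < b < 1.
iota, b, sigma, L0 = 2.0, 0.5, 0.1, 4.0      # illustrative constants
bound = (sigma / iota) ** (1.0 / b)          # residual-set radius = 0.0025
target = 4 * bound                           # slightly inflated threshold

dt, t, L = 1e-4, 0.0, L0
while L > target and t < 20.0:               # forward-Euler integration
    L += dt * (-iota * L ** b + sigma)
    t += dt
print(f"entered the residual set (L <= {target:.4f}) at t = {t:.2f} s")
```

With these constants the decay is dominated by $-\iota\sqrt{L}$, so the settling time is on the order of $2\sqrt{L_0}/\iota \approx 2$ s rather than asymptotic.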
In this paper, an adaptive NN-based optimal controller $u(t)$ is designed such that the system output $\beta(t)$ tracks $\beta_d(t)$ well in a finite time. Two neural networks are used: the critic NN estimates the long-term utility function, while the action NN ensures the stability of the system and solves for the optimal control input.

3. Controller Design and Stability Analysis

As depicted in Figure 1, in this section we design an optimal controller that ensures optimal control of the system and convergence within a finite time. By transforming the initial system into an augmented system, which contains the tracking error and the target expectation information, a novel discounted performance function is presented. Furthermore, a Hamiltonian function is constructed, and the time-delay problem is solved by using appropriate LKFs. Then, by introducing the IRL method into the Hamiltonian function, a finite-time optimal tracking controller based on neural networks is designed. Finally, the adaptive laws of the critic and action NNs are designed, and the SGPFS of the target system is ensured.

3.1. System Transformation

Considering the nonstrict-feedback nonlinear system (1), we develop a neural-network-based controller that enables the system to follow the desired trajectory. Firstly, the tracking error is designed as
$$z(t) = \beta(t) - \beta_d(t)$$
Differentiating (4) yields
$$\dot{z} = p(\beta(t)) + h(\beta(t-t_1)) + g(t)u(t) + \omega(t) - \dot{\beta}_d(t)$$
Assumption 3.
The given target trajectory $\beta_d$, with initial state $\beta_d(0) = 0$, is bounded, and $\dot{\beta}_d(t)$ can be rewritten in the form of (6) by a command generator function that satisfies the Lipschitz continuity property:
$$\dot{\beta}_d(t) = l(\beta_d(t))$$
The algorithm adopts a new type of discounted performance function, which includes the tracking error, the expected trajectory, and the time-delay terms. Therefore, we construct the following augmented system:
$$\dot{\psi}(t) = F(\psi(t)) + H(\psi(t-t_1)) + G(t)u(t) + W(t)$$
where $\psi(t) = [z(t), \beta_d(t)]^{T}$, $F(\psi(t)) = \begin{bmatrix} F_1(z+\beta_d) - l(\beta_d) \\ l(\beta_d) \end{bmatrix}$, $H(\psi(t-t_1)) = \begin{bmatrix} H_1(z(t-t_1)+\beta_d(t-t_1)) \\ 0 \end{bmatrix}$, $G(t) = \begin{bmatrix} G_1(t) \\ 0 \end{bmatrix}$, and $W(t) = \begin{bmatrix} D_1(t) \\ 0 \end{bmatrix}$.
Furthermore, the novel discounted performance function is
$$L_1(t) = \int_{t}^{t+t_0} e^{-\chi(\tau-t)}\left[\Gamma^{T} Q \Gamma + U(u)\right]d\tau$$
where $\Gamma = [\psi(t), \psi(t-t_1)]^{T}$, $\chi > 0$ is a constant discount factor, $Q = \mathrm{diag}(Q_1, Q_2)$ with each $Q_i$ a positive definite matrix, and $t_1$ satisfies $t \ge t_1$; the semi-global uniform convergence in (7) can be ensured for $t \ge t_1$.
Based on [46,47,48,49,50,51,52], and taking the input constraints into consideration, the nonquadratic functional is proposed as
$$U(u) = 2\int_{0}^{u}\left(\varepsilon \tanh^{-1}(v/\varepsilon)\right)^{T} R\, dv$$
where $\varepsilon$ is the saturation bound, $R = \mathrm{diag}(r_1, r_2)$, and $U(u)$ is a nonquadratic functional.
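In the scalar case the nonquadratic cost above has a closed form, $U(u) = 2\varepsilon r\left[u\,\mathrm{artanh}(u/\varepsilon) + \tfrac{\varepsilon}{2}\ln(1 - u^2/\varepsilon^2)\right]$, which the sketch below verifies against direct numerical quadrature. The values of $\varepsilon$ and $r$ are illustrative, not the paper's design parameters.

```python
# Closed form vs. midpoint-rule quadrature of U(u) = 2 * int_0^u eps*artanh(v/eps)*r dv.
import math

def U_closed(u, eps, r):
    return 2.0 * eps * r * (u * math.atanh(u / eps)
                            + 0.5 * eps * math.log(1.0 - (u / eps) ** 2))

def U_numeric(u, eps, r, n=200000):
    h = u / n                                   # midpoint rule on [0, u]
    return sum(2.0 * eps * math.atanh((i + 0.5) * h / eps) * r * h
               for i in range(n))

eps, r = 1.5, 0.8                               # illustrative saturation bound and weight
for u in (0.3, 0.9, 1.4):                       # all within |u| < eps
    assert abs(U_closed(u, eps, r) - U_numeric(u, eps, r)) < 1e-4
```

Note that the cost grows steeply as $|u| \to \varepsilon$, which is exactly what penalizes operation near the saturation limit.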

3.2. Virtual Control

In this part, based on the Hamiltonian function, which is established based on the discounted performance function, the virtual optimal controller u * t will be designed.
To obtain the tracking Bellman equation, we apply the Leibniz rule to (9) and obtain
$$\dot{L}_1 = \chi L_1 - \left(1-e^{-\chi t_0}\right)\left[\Gamma^{T} Q \Gamma + 2\int_{0}^{u}\left(\varepsilon \tanh^{-1}(v/\varepsilon)\right)^{T} R\, dv\right]$$
Then, moving the right-hand side of (10) to the left-hand side and substituting (8), we finally obtain
$$V = \left(1-e^{-\chi t_0}\right)\left[\Gamma^{T} Q \Gamma + 2\int_{0}^{u}\left(\varepsilon \tanh^{-1}(v/\varepsilon)\right)^{T} R\, dv\right] - \chi L_1 + \frac{\partial L_1}{\partial \Gamma}\left[F(\psi) + Gu + H(\psi(t-t_1)) + W\right] = 0$$
In addition, we design the optimal cost function in (12) as
$$L_1^{*}(t) = \min_{u}\int_{t}^{t+t_0} e^{-\chi(\tau-t)}\left[\Gamma^{T} Q \Gamma + U(u)\right]d\tau$$
and the following condition should be guaranteed:
$$V^{*} = \left(1-e^{-\chi t_0}\right)\left[\Gamma^{T} Q \Gamma + 2\int_{0}^{u^{*}}\left(\varepsilon \tanh^{-1}(v/\varepsilon)\right)^{T} R\, dv\right] - \chi L_1^{*} + \frac{\partial L_1^{*}}{\partial \Gamma}\left[F(\psi) + Gu^{*} + H(\psi(t-t_1)) + W\right] = 0.$$
Based on (11), [53], and the finite-time convergence theory [34], the optimal control input is defined as
$$u = -\varepsilon \tanh\left(l_1 \psi^{2b-1} + \frac{R^{-1} G^{T}}{2\varepsilon\left(1-e^{-\chi t_0}\right)} \frac{\partial L_1}{\partial \Psi}\right).$$
According to (13) and [44,53], the ideal optimal control input is abbreviated as
$$u^{*} = -\varepsilon \tanh\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Psi}\right)$$
where $l_1 = R^{-1}/(2\varepsilon)$ is a positive constant and $\Xi = R^{-1}G^{T}/\left(2\varepsilon\left(1-e^{-\chi t_0}\right)\right)$.
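Evaluating the saturated control law is straightforward: whatever the magnitude of the fractional-power and gradient terms, the $\tanh$ wrapper keeps the input within the saturation bound $\varepsilon$ by construction. The sketch below is for a scalar channel; the sign convention and all numeric values are assumptions for illustration only.

```python
# Scalar sketch of u = -eps * tanh( l1 * ||psi||^(2b-1) + Xi * dL1/dPsi ).
import math

def control_input(psi_norm, grad_L, eps=2.0, b=0.7, l1=0.5, Xi=0.3):
    # Illustrative defaults; 0.5 < b < 1 makes the exponent 2b-1 lie in (0, 1).
    arg = l1 * psi_norm ** (2 * b - 1) + Xi * grad_L
    return -eps * math.tanh(arg)

for psi_norm, grad_L in [(0.1, -4.0), (1.0, 0.5), (10.0, 25.0)]:
    u = control_input(psi_norm, grad_L)
    assert abs(u) <= 2.0        # never exceeds the saturation level eps
```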
Then, together with (14), (9) can be written in the following form:
$$U(u^{*}) = 2\varepsilon^{2}\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right)^{T} \tanh\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right) + \varepsilon^{2} R_0 \ln\left(E_0 - \tanh^{2}\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right)\right)$$
where $R_0 = [R_1, \ldots, R_m]$ and $E_0 = [1, \ldots, 1] \in \mathbb{R}^{m}$.
The Hamiltonian function can then be written as
$$V^{*} = -\chi L_1^{*} + \left(1-e^{-\chi t_0}\right)\psi^{T} Q \psi + 2\varepsilon^{2}\left(1-e^{-\chi t_0}\right) l_1 \psi^{2b-1} \tanh\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right) + \frac{\partial L_1^{*}}{\partial \Gamma}\left[F(\psi) + H(\psi(t-t_1)) + W\right] + \varepsilon^{2} R_0 \ln\left(E_0 - \tanh^{2}\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right)\right) = 0$$
Furthermore, (17) can be written as
$$V^{*} = 2\varepsilon^{2}\left(1-e^{-\chi t_0}\right) l_1 \psi^{2b-1} \tanh\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right) + \frac{\partial L_1^{*}}{\partial \Gamma}\left[F(\psi) + H(\psi) + W - \left(H(\psi) - H(\psi(t-t_1))\right)\right] - \chi L_1^{*} + \left(1-e^{-\chi t_0}\right)\Gamma^{T} Q \Gamma + \varepsilon^{2} R_0 \ln\left(E_0 - \tanh^{2}\left(l_1 \psi^{2b-1} + \Xi \frac{\partial L_1^{*}}{\partial \Gamma}\right)\right) = 0$$
To address the challenges of online tracking control, the optimal value $L_1^{*}$ should be solved from (17); the corresponding optimal control policy $u(L_1^{*})$ is given in (14).

3.3. State Time Delay

Choosing appropriate LKFs solves the problem caused by the state time delay, which lays the foundation for applying the IRL algorithm.
According to Assumption 2 in [36] and Remark 5 in [15], the IRL method can be used to solve for $L_1^{*}$ only when the functions $\Theta_1(t) = F(\psi) + H(\psi)$, $\Theta_2(t) = H(\psi) - H(\psi(t-t_1))$, and $G$ satisfy
$$\|F(\psi) + H(\psi)\| \le b_1 \|\psi\|$$
$$\|H(\psi) - H(\psi(t-t_1))\| \le \sum_{\theta=1}^{n} b_{2,\theta} \|\psi(t-\theta \Delta t)\|$$
$$\|G\| \le b_3$$
where $b_1$, $b_{2,\theta}$, and $b_3$ are positive constants, with $0 < \theta \le n$ and $\Delta t = t_1/n$.
Since the function $F(\psi)$ and the known function $H(\psi)$ satisfy the Lipschitz condition, and by Assumption 2, (19) and (21) can be guaranteed. However, the state time delay $t_1$ is uncertain, so the boundedness in (20) cannot be obtained directly. In addition, because of the uncertain state time delay, the uncertain function $H(\psi(t-t_1))$ cannot be approximated by an NN.
To better complete the controller design, the problem caused by the state time delay is handled first. Define the new function $\Theta_2(t)$ as
$$\Theta_2(t) = H(\psi) - H(\psi(t-t_1))$$
which can be written in the telescoping form
$$\Theta_2(t) = \left[H(\psi) - H(\psi(t-\Delta t))\right] + \left[H(\psi(t-\Delta t)) - H(\psi(t-2\Delta t))\right] + \cdots + \left[H(\psi(t-\theta\Delta t)) - H(\psi(t-(\theta+1)\Delta t))\right] + \cdots + \left[H(\psi(t-(n-1)\Delta t)) - H(\psi(t-t_1))\right]$$
where $\Delta t = t_1/n$, and both $\theta$ and $n$ are positive integers.
By Assumption 1, applying the mean-value theorem to $H(\psi)$, one obtains
$$\Delta\psi(t-\Delta t) = H(\psi(t)) - H(\psi(t-\Delta t)) = \left.\frac{\partial H(\psi)}{\partial \psi}\right|_{\psi=\psi_\eta}\left(\psi(t) - \psi(t-\Delta t)\right)$$
where $\psi_\eta = \eta\psi(t) + (1-\eta)\psi(t-\Delta t)$, $0 < \eta < 1$.
The error function caused by $\Delta t$ can be obtained as
$$\Delta\psi(t-(\theta+1)\Delta t) = \left.\frac{\partial H(\psi)}{\partial \psi}\right|_{\psi=\psi_\eta}\left(\psi(t-\theta\Delta t) - \psi(t-(\theta+1)\Delta t)\right)$$
Defining the augmented delay states as
$$\Delta\bar{\psi} = \left[\Delta\psi(t), \ldots, \Delta\psi(t-\theta\Delta t), \ldots, \Delta\psi(t-t_1)\right]$$
system (24) can then be written as
$$\Delta\dot{\bar{\psi}}(t) = \Pi\left(\Delta\bar{\psi}(t)\right)$$
To guarantee that system (27) is uniformly ultimately bounded (UUB), the following lemma is proposed.
Lemma 3.
Suppose the function $\Pi(\cdot)$ matches the dimension of the state vector, with $\Pi(0) = 0$, and the system
$$\Delta\dot{\psi}(t) = \Pi\left(\Delta\psi(t)\right)$$
converges exponentially to a compact set. Then there exists a Lyapunov function satisfying
$$c_1 \|\Delta\psi(t)\|^{2} \le L_0\left(\Delta\psi(t)\right) \le c_2 \|\Delta\psi(t)\|^{2} + c_3$$
$$\dot{L}_0\left(\Delta\psi(t)\right) \le -c_4 \|\Delta\psi(t)\|^{2} + c_5$$
where $c_i > 0$, $i = 1, 2, \ldots, 5$. In addition, the uniform ultimate boundedness (UUB) of system (28) can be guaranteed.
Proof. 
Inspired by the research in [26], the following proof is given. Defining the initial state of (28) as $\Delta\psi(0) = \Delta\psi(t)$, we obtain
$$\Delta\psi(t-\Delta t) = \Pi\left(\Delta\psi(t)\right),\quad \Delta\psi(t-2\Delta t) = \Pi\left(\Delta\psi(t-\Delta t)\right) = \Pi^{2}\left(\Delta\psi(t)\right),\ \ldots,\ \Delta\psi(t-(\theta+1)\Delta t) = \Pi^{\theta+1}\left(\Delta\psi(t)\right),\ \ldots,\ \Delta\psi(t-t_1) = \Pi^{n}\left(\Delta\psi(t)\right)$$
If the system exponentially converges to a compact set, then
$$\left\|\Pi^{\theta}\left(\Delta\psi(t)\right)\right\| \le a_1 a_2^{\theta}\left\|\Delta\psi(t)\right\| + a_3$$
where the positive constants satisfy $a_1 > 0$, $0 < a_2 < 1$, and $a_3 \ge 0$.
Furthermore, the Lyapunov function is
$$\Delta L_0 = \sum_{\theta=0}^{n-1}\left[\Pi^{\theta}\left(\Delta\psi(t)\right)\right]^{T}\Pi^{\theta}\left(\Delta\psi(t)\right).$$
Substituting (32) into (33), we obtain
$$\Delta L_0 \le 2a_1^{2}\sum_{\theta=0}^{n-1} a_2^{2\theta}\left\|\Delta\psi(t)\right\|^{2} + 2na_3^{2} \le 2a_1^{2}\sum_{\theta=0}^{n-1}\left\|\Delta\psi(t)\right\|^{2} + 2na_3^{2}.$$
Considering the fact that $0 < a_2 < 1$, we obtain
$$c_1\left\|\Delta\psi(t)\right\|^{2} \le \Delta L_0\left(\Delta\psi(t)\right) \le c_2\left\|\Delta\psi(t)\right\|^{2} + c_3$$
where $c_1 = 1$, $c_2 = 2a_1^{2}\left(1 - a_2^{2n}\right)/\left(1 - a_2^{2}\right)$, and $c_3 = 2na_3^{2}$.
Moreover, we can obtain
$$\Delta\dot{L}_0\left(\Delta\psi(t)\right) = \frac{1}{\Delta t}\left[\Delta L_0\left(\Delta\psi(t+\Delta t)\right) - \Delta L_0\left(\Delta\psi(t)\right)\right] = \frac{1}{\Delta t}\left[\sum_{\theta=1}^{n}\left\|\Pi^{\theta}\left(\Delta\psi(t)\right)\right\|^{2} - \sum_{\theta=0}^{n-1}\left\|\Pi^{\theta}\left(\Delta\psi(t)\right)\right\|^{2}\right] \le \frac{1}{\Delta t}\left[\left(a_1 a_2^{n}\left\|\Delta\psi(t)\right\| + a_3\right)^{2} - \left\|\Delta\psi(t)\right\|^{2}\right] \le -\frac{1 - 2a_1^{2}a_2^{2n}}{\Delta t}\left\|\Delta\psi(t)\right\|^{2} + \frac{2a_3^{2}}{\Delta t} \le -c_4\left\|\Delta\psi(t)\right\|^{2} + c_5$$
where $c_4 = \left(1 - 2a_1^{2}a_2^{2n}\right)/\Delta t$ and $c_5 = 2a_3^{2}/\Delta t$. If a sufficiently small time interval $\Delta t$ is chosen, $n$ will be sufficiently large to ensure that $c_4$ is a positive constant.
Based on (35) and (36), one has
$$\Delta\dot{L}_0\left(\Delta\psi(t)\right) \le -\left(1 - 2a_1^{2}a_2^{2n}\right)\left\|\Delta\psi(t)\right\|^{2} + 2a_3^{2} \le -b_1\,\Delta L_0\left(\Delta\psi(t)\right) + b_2$$
where
$$b_1 = \frac{\left(1 - 2a_1^{2}a_2^{2n}\right)\left(1 - a_2^{2}\right)}{2a_1^{2}\left(1 - a_2^{2n}\right)}$$
$$b_2 = \frac{2\left(1 - a_2^{2}\right)\left(1 - 2a_1^{2}a_2^{2n}\right) n a_3^{2}}{2a_1^{2}\left(1 - a_2^{2n}\right)} + 2a_3^{2}$$
Furthermore, we can construct the Lyapunov function for system (23), which is composed of $n$ subsystems similar to (28), and guarantee its UUB:
$$\dot{L}_0\left(\Delta\psi(t)\right) = \sum_{\theta=1}^{n}\Delta\dot{L}_0\left(\Delta\psi(t-\theta\Delta t)\right)$$
Substituting (36) into (40), one has
$$\dot{L}_0\left(\Delta\psi(t)\right) \le -c_4\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2} + c_5$$
Similarly, one has
$$\dot{L}_0\left(\Delta\psi(t)\right) \le -b_1 L_0\left(\Delta\psi(t)\right) + b_2$$
When $n$ and $a_1$ are selected large enough, the ultimate boundedness of $L_0(\Delta\psi(t))$ can be assured for any initial condition $L_0(\Delta\psi(t-t_1))$ within a bounded set, which guarantees the UUB of the system states.
The proof is completed. □

3.4. Critic NN and Value Function Approximation

In summary, the boundedness in (19)–(21) can be obtained. The IRL method is extended to the solution of $L_1^{*}$ in the following.
When the IRL interval is chosen as $T > 0$, (8) can be written as
$$L_1(t-T) = \left(1-e^{-\chi t_0}\right)\int_{t-T}^{t} e^{-\chi(\tau-t+T)}\left[\Gamma^{T} Q \Gamma + U(u)\right]d\tau + e^{-\chi T} L_1(t)$$
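The interval relation above is the IRL Bellman equation: it splits a discounted value integral at $t$, so no system model is needed to evaluate the left-hand side from data. The sketch below checks the split numerically in the standard infinite-horizon form (the paper uses a finite upper limit $t_0$); the reward $\rho(\tau) = e^{-\tau}$ is an illustrative choice with the closed form $L(t) = e^{-t}/(\chi+1)$.

```python
# Check: L(t-T) = int_{t-T}^{t} e^{-chi*(tau-(t-T))} rho(tau) dtau + e^{-chi*T} L(t).
import math

chi, T, t = 0.4, 0.25, 1.0

def L(s):                    # closed-form discounted value for rho(tau) = e^{-tau}
    return math.exp(-s) / (chi + 1.0)

n = 100000                   # midpoint quadrature over [t-T, t]
h = T / n
integral = sum(math.exp(-chi * (i + 0.5) * h)
               * math.exp(-((t - T) + (i + 0.5) * h)) * h
               for i in range(n))

assert abs(L(t - T) - (integral + math.exp(-chi * T) * L(t))) < 1e-8
```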
Assuming that (8) is a continuous smooth function, $L_1$ and its gradient $\partial L_1/\partial \Gamma$ are approximated as
$$L_1 = \omega_c^{T}\varphi_c(\Gamma) + \varepsilon_c$$
$$\frac{\partial L_1}{\partial \Gamma} = \left(\frac{\partial \varphi_c}{\partial \Gamma}\right)^{T}\omega_c + \frac{\partial \varepsilon_c}{\partial \Gamma}$$
where $\omega_c \in \mathbb{R}^{l_c}$ is the constant target weight vector to be estimated online, $l_c$ is the number of neurons in the network, and $\varphi_c$ and $\varepsilon_c$ are the activation function of the critic NN and the approximation error, respectively.
Assumption 4.
The activation function of the critic NN, its approximation error, and their gradients are bounded as $\|\varphi_c\| \le \bar{\varphi}_c$, $\|\varepsilon_c\| \le \bar{\varepsilon}_c$, $\|\partial\varphi_c/\partial\Psi\| \le \bar{\varphi}_{c,0}$, and $\|\partial\varepsilon_c/\partial\Psi\| \le \bar{\varepsilon}_{c,0}$, respectively.
When the IRL interval is $T > 0$, the Bellman residual induced by the critic NN estimate can be expressed as
$$z_B = \int_{t-T}^{t} e^{-\chi(\tau-t+T)}\left(1-e^{-\chi t_0}\right)\left[\Gamma^{T} Q \Gamma + U(u)\right]d\tau + \omega_c^{T}\Delta\varphi_c$$
where
$$\Delta\varphi_c = e^{-\chi T}\varphi_c\left(\Gamma(t)\right) - \varphi_c\left(\Gamma(t-T)\right)$$
The bound $\|z_B\| \le \bar{z}_B$ on (46) can be derived from Assumption 4.
To derive the approximate tracking Bellman function, the critic NN estimate is
$$\hat{L}_1 = \hat{\omega}_c^{T}\varphi_c(\Gamma)$$
where $\hat{\omega}_c$ is the estimate of the critic weight vector $\omega_c$.
Therefore, the estimate of (46) is
$$z_B = r(t) + \hat{\omega}_c^{T}\Delta\varphi_c$$
where the reinforcement learning reward is denoted by
$$r(t) = \int_{t-T}^{t} e^{-\chi(\tau-t+T)}\left(1-e^{-\chi t_0}\right)\left[\Gamma^{T} Q \Gamma + \hat{U}(u)\right]d\tau.$$
To reduce the approximation error, we define the objective function in (51) as
$$Z_B(t) = \frac{1}{2} z_B^{2}(t).$$
Using the gradient descent method, we obtain
$$\dot{\hat{\omega}}_c = -\alpha_c \frac{\Delta\varphi_c}{\left(1 + \Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}} z_B$$
where $\alpha_c$ represents the learning rate of the critic neural network.
Considering $\tilde{\omega}_c = \omega_c - \hat{\omega}_c$, (46), and (49), we have
$$z_B = -\tilde{\omega}_c^{T}\Delta\varphi_c + \varepsilon_B$$
$$\dot{\tilde{\omega}}_c = -\alpha_c \frac{\Delta\varphi_c\Delta\varphi_c^{T}}{\left(1 + \Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\tilde{\omega}_c + \alpha_c \frac{\Delta\varphi_c}{\left(1 + \Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\varepsilon_B$$
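The normalized-gradient critic update can be sketched in a few lines. The synthetic data below is generated from a known "true" weight vector so that the Bellman residual vanishes at the optimum; this is an illustrative assumption (the paper's features come from the WMR example), but it shows the update driving $\hat{\omega}_c$ toward $\omega_c$.

```python
# Discrete-time sketch of the normalized-gradient critic update:
#   w_hat <- w_hat - alpha_c * dphi / (1 + dphi'dphi)^2 * z_B,  z_B = r + w_hat'dphi.
import random

random.seed(1)
w_true = [1.5, -0.7, 0.3]            # hypothetical ideal critic weights
w_hat = [0.0, 0.0, 0.0]
alpha_c = 0.8                        # illustrative learning rate

for _ in range(20000):
    dphi = [random.uniform(-1, 1) for _ in range(3)]
    r = -sum(wt * d for wt, d in zip(w_true, dphi))  # so r + w_true'dphi = 0
    z_B = r + sum(w * d for w, d in zip(w_hat, dphi))
    norm = (1.0 + sum(d * d for d in dphi)) ** 2     # normalization term
    w_hat = [w - alpha_c * d / norm * z_B for w, d in zip(w_hat, dphi)]

assert all(abs(w - wt) < 1e-2 for w, wt in zip(w_hat, w_true))
```

The $(1 + \Delta\varphi_c^{T}\Delta\varphi_c)^{2}$ normalization bounds the effective step size regardless of the feature magnitude, which is what makes a fixed $\alpha_c$ safe.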

3.5. Action NN and Controller Design

According to (45), the optimal control input (14) becomes
$$u = -\varepsilon \tanh\left(l_1 \psi^{2b-1} + \Xi\left[\left(\frac{\partial \varphi_c}{\partial \Gamma}\right)^{T}\omega_c + \frac{\partial \varepsilon_c}{\partial \Gamma}\right]\right)$$
To handle the term $\partial\varepsilon_c/\partial\Psi$ induced in the tracking HJB equation, we note that
$$\int_{t-T}^{t} e^{-\chi(v-t+T)}\dot{\varphi}_c\, dv = \int_{t-T}^{t} e^{-\chi(v-t+T)}\frac{\partial\varphi_c}{\partial\Gamma}\left[\Theta_1 + \Theta_2 + Gu + W\right]dv = \Delta\varphi_c + \chi\int_{t-T}^{t} e^{-\chi(v-t+T)}\varphi_c\, dv$$
In addition, we obtain
$$\Delta\varphi_c = \int_{t-T}^{t} e^{-\chi(v-t+T)}\left[\frac{\partial\varphi_c}{\partial\Gamma}\left(\Theta_1 + \Theta_2 + Gu + W\right) - \chi\varphi_c\right]dv$$
Equation (16) becomes
$$U(u) = \varepsilon^{2} R_0 \ln\left(E_0 - \tanh^{2}\left(l_1\psi^{2b-1} + \Xi\left[\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\omega_c + \frac{\partial\varepsilon_c}{\partial\Gamma}\right]\right)\right) - 2\varepsilon\left(l_1\psi^{2b-1} + \Xi\left[\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\omega_c + \frac{\partial\varepsilon_c}{\partial\Gamma}\right]\right)^{T} u$$
Then, with the abbreviation $A = l_1\psi^{2b-1} + \Xi\left(\partial\varphi_c/\partial\Gamma\right)^{T}\omega_c$, (46) can be rewritten as
$$\int_{t-T}^{t} e^{-\chi(\tau-t+T)}\left\{\left(1-e^{-\chi t_0}\right)\Gamma^{T} Q \Gamma - \chi\omega_c^{T}\varphi_c + 2\varepsilon^{2}\left(1-e^{-\chi t_0}\right) l_1\psi^{2b-1}\tanh(A) + \omega_c^{T}\frac{\partial\varphi_c}{\partial\Gamma}\left(\Theta_1 + \Theta_2 + W\right) + \varepsilon^{2} R_0 \ln\left(E_0 - \tanh^{2}(A)\right)\right\}dv + \varepsilon_{HJB} = 0$$
where
$$\varepsilon_{HJB} = e^{-\chi(\tau-t+T)}\left\{\varepsilon^{2} R_0\left[\ln\left(E_0 - \tanh^{2}\left(A + \Xi\frac{\partial\varepsilon_c}{\partial\Gamma}\right)\right) - \ln\left(E_0 - \tanh^{2}(A)\right)\right] + 2\varepsilon^{2}\left(1-e^{-\chi t_0}\right) l_1\psi^{2b-1}\left[\tanh\left(A + \Xi\frac{\partial\varepsilon_c}{\partial\Gamma}\right) - \tanh(A)\right] + \left(\frac{\partial\varepsilon_c}{\partial\Gamma}\right)^{T}\left(\Theta_1 + \Theta_2 + W\right)\right\}$$
collects the residual terms induced by the NN approximation errors.
The HJB approximation error can be bounded using the bounds on the NN approximation errors. Moreover, once the NN structure is selected, it cannot be changed; the residual can therefore only be reduced by adapting the uncertain NN weights.
Approximating the control input (55) with the critic NN, we have
$$u_1 = -\varepsilon \tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\hat{\omega}_c\right)$$
where $\hat{\omega}_c$ is the estimated value of $\omega_c$.
However, (61) only evaluates the current critic NN weights and cannot by itself keep system (1) stable. Hence, to guarantee the stability of the system and solve for the optimal control strategy, we introduce another NN as the action NN:
$$\hat{u}_1 = -\varepsilon \tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right)$$
where $\hat{\omega}_a$ represents the weight vector of the action neural network, denoting the present estimate of $\omega_c$.
Then, the interval IRL Bellman equation error is estimated as
$$\hat{z}_B = \int_{t-T}^{t} e^{-\chi(\tau-t+T)}\left[\Gamma^{T} Q \Gamma + \hat{U}(u)\right]d\tau + \hat{\omega}_c^{T}\Delta\varphi_c$$
where
$$\hat{U}(u) = 2\int_{0}^{\hat{u}}\left(\varepsilon \tanh^{-1}(v/\varepsilon)\right)^{T} R\, dv$$
Therefore, (52) can be rewritten as
$$\dot{\hat{\omega}}_c = -\alpha_c \frac{\Delta\varphi_c}{\left(1 + \Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\hat{z}_B$$
Defining the input assessment error as
$$z_u = \hat{u}_1 - u_1 = -\varepsilon\left[\tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right) - \tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\hat{\omega}_c\right)\right]$$
to minimize (66), we use the objective
$$Z_u(t) = z_u^{T}(t)\, R\, z_u(t).$$
Using the gradient descent method, we derive
$$\dot{\hat{\omega}}_a = -\alpha_a\left[\bar{\Xi}\frac{\partial\varphi_a}{\partial\Gamma}\,\mathrm{sech}^{2}\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right) z_u + \eta\hat{\omega}_a\right]$$
where $\bar{\Xi} = R\,\Xi$, and $\eta$ is a positive design parameter.
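A scalar sketch of the action-NN adaptation: gradient descent on $Z_u = R z_u^2$ (using $\mathrm{sech}^2 = 1 - \tanh^2$), plus a small leakage term $\eta\hat{\omega}_a$. All constants are illustrative assumptions, and the critic weight is held fixed to play its role; the action weight should settle close to the critic's.

```python
# Minimize Z_u = R * z_u^2 with z_u = -eps*[tanh(c + Xi*phi*w_a) - tanh(c + Xi*phi*w_c)].
import math

eps, Xi, phi, c, R = 1.0, 0.9, 1.2, 0.2, 1.0   # illustrative constants
alpha_a, eta = 0.5, 1e-4                        # step size and small leakage
w_c, w_a = 0.8, -2.0                            # fixed critic weight, initial action weight

for _ in range(5000):
    z_u = -eps * (math.tanh(c + Xi * phi * w_a) - math.tanh(c + Xi * phi * w_c))
    sech2 = 1.0 - math.tanh(c + Xi * phi * w_a) ** 2
    # dZ_u/dw_a = 2*R*z_u * d(z_u)/dw_a, with d(z_u)/dw_a = -eps*sech2*Xi*phi
    grad = 2.0 * R * z_u * (-eps) * sech2 * Xi * phi
    w_a -= alpha_a * (grad + eta * w_a)         # gradient step + leakage

assert abs(w_a - w_c) < 0.05
```

The leakage term (the $\eta\hat{\omega}_a$ in (68)) biases the equilibrium only slightly while preventing weight drift when $z_u \approx 0$.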

3.6. Stability Analysis

According to the proposed lemmas and assumptions, the following theorem analyzes the effectiveness of the proposed algorithm.
Theorem 1.
Based on the definition in [43], Lemmas 1–3, Assumptions 1–4, the proposed control policy (62), and the adaptive laws (65) and (68), the proposed optimal tracking control algorithm ensures that the partially uncertain nonlinear system (1) is SGPFS.
Proof of Theorem 1.
The candidate Lyapunov function is designed as
$$L = L_0 + L_1 + L_2 + L_3 + L_4$$
where $L_0$ satisfies $\dot{L}_0(\Delta\psi(t)) \le -c_4\sum_{\theta=1}^{n}\|\Delta\psi(t-\theta\Delta t)\|^{2} + c_5$ and $L_1$ is the optimal value function given in (8). The remaining terms are
$$L_2 = \sum_{\theta=1}^{n}\Delta\psi(t-\theta\Delta t)^{T} Q_3\,\Delta\psi(t-\theta\Delta t)$$
$$L_3 = \frac{1}{\alpha_c}\tilde{\omega}_c^{T}\tilde{\omega}_c$$
$$L_4 = \frac{1}{\alpha_a}\tilde{\omega}_a^{T}\tilde{\omega}_a$$
Based on (24) and (31), the first derivative of $L_2$ can be given as
$$\dot{L}_2 = 2\sum_{\theta=1}^{n}\Delta\psi(t-\theta\Delta t)^{T} Q_3\,\Delta\dot{\psi}(t-\theta\Delta t) = 2\left[\Delta\psi(t-n\Delta t)^{T} Q_3\,\Delta\psi(t) + \cdots + \Delta\psi(t-\Delta t)^{T} Q_3\,\Delta\psi(t-(n-1)\Delta t) + \Delta\psi(t)^{T} Q_3\,\Delta\psi(t-n\Delta t)\right]$$
Using Young's inequality $\pm a^{T}b_0 \le \mu a^{T}a/2 + b_0^{T}b_0/(2\mu)$, one has
$$\dot{L}_2 \le \mu\Delta\psi(t-n\Delta t)^{T} Q_3\,\Delta\psi(t-n\Delta t) + \frac{1}{\mu}\Delta\psi(t)^{T} Q_3\,\Delta\psi(t) + \cdots + \mu\Delta\psi(t-\Delta t)^{T} Q_3\,\Delta\psi(t-\Delta t) + \frac{1}{\mu}\Delta\psi(t-(n-1)\Delta t)^{T} Q_3\,\Delta\psi(t-(n-1)\Delta t) + \mu\Delta\psi(t)^{T} Q_3\,\Delta\psi(t) + \frac{1}{\mu}\Delta\psi(t-n\Delta t)^{T} Q_3\,\Delta\psi(t-n\Delta t)$$
$$\dot{L}_2 \le c_0\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}$$
where $c_0 = \left(\mu + 1/\mu\right)\left\|Q_3\right\|$.
Based on (44), the critic activation evolves along the system as
$$\dot{\varphi}_c = \frac{\partial\varphi_c}{\partial\Gamma}\left(\Theta_1 + \Theta_2 + Gu + W\right)$$
Considering (57), with the IRL interval chosen small enough, we have $\rho_1\varphi_c = \varphi_c(t-T)$, where $\rho_1 = 1 \pm \rho_0$, $\rho_1 \in U(1, \rho_0)$, and $\rho_0$ is a sufficiently small positive constant. Hence
$$\left(\chi + \frac{1}{T}\right)\varphi_c - \frac{1}{T}e^{-\chi T}\varphi_c(t-T) \approx \frac{\partial\varphi_c}{\partial\Gamma}\left(\Theta_1 + \Theta_2 + Gu + W\right)$$
Then, (76) can be written as
$$\dot{L}_3 = \left(\chi + \frac{1}{T}\right)\hat{\omega}_c^{T}\varphi_c - \frac{1}{T}e^{-\chi T}\hat{\omega}_c^{T}\varphi_c(t-T) \le \left[\left(\chi + \frac{1}{T}\right)^{2} + \frac{1}{T^{2}}e^{-2\chi T}\right]\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\left(\hat{\omega}_c^{T}\varphi_c\right)$$
Based on (54), we have
$$\varepsilon_B = (T - t_1)\,\hat{\omega}_c^{T}\varphi_c + \hat{\omega}_c^{T}\left(e^{-\chi T}\varphi_c - \varphi_c(t-T)\right)$$
Then, the approximation of (54) is
$$\dot{\tilde{\omega}}_c = -\alpha_c\frac{\Delta\varphi_c\Delta\varphi_c^{T}}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\tilde{\omega}_c + \alpha_c\frac{\Delta\varphi_c}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\left[(T - t_1)\,\hat{\omega}_c^{T}\varphi_c + \hat{\omega}_c^{T}\left(e^{-\chi T}\varphi_c - \varphi_c(t-T)\right)\right]$$
And, for the first derivative of (71),
$$\dot{L}_3 = -\frac{\Delta\varphi_c\Delta\varphi_c^{T}}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\tilde{\omega}_c^{T}\tilde{\omega}_c + \left(\omega_c^{T} - \hat{\omega}_c^{T}\right)\frac{\Delta\varphi_c}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\left[(T - t_1)\,\hat{\omega}_c^{T}\varphi_c + \hat{\omega}_c^{T}\left(e^{-\chi T}\varphi_c - \varphi_c(t-T)\right)\right]$$
Using Cauchy's mean value theorem, (81) becomes
$$\dot{L}_3 \le -\frac{\left\|\Delta\varphi_c\right\|^{2}}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\tilde{\omega}_c^{T}\tilde{\omega}_c + \left(\omega_c^{T}\Delta\varphi_c\right)^{T}\omega_c^{T}\Delta\varphi_c - \frac{1}{2}\left(T - t_1 - e^{-\chi T}\rho_1 T^{2}t_1^{2}\right)\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c$$
Based on (68) and (66), we have
$$\dot{\tilde{\omega}}_a = \alpha_a\bar{\Xi}\frac{\partial\varphi_a}{\partial\Gamma}\,\mathrm{sech}^{2}\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right)\left[\tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right) - \tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\hat{\omega}_c\right)\right] - \eta\tilde{\omega}_a + \eta\omega_c$$
Then, the first derivative of $L_4$ is
$$\dot{L}_4 = -\eta\tilde{\omega}_a^{T}\tilde{\omega}_a + \tilde{\omega}_a^{T} J_0$$
where
$$J_0 = \bar{\Xi}\frac{\partial\varphi_a}{\partial\Gamma}\,\mathrm{sech}^{2}\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right)\left[\tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_a}{\partial\Gamma}\right)^{T}\hat{\omega}_a\right) - \tanh\left(l_1\psi^{2b-1} + \Xi\left(\frac{\partial\varphi_c}{\partial\Gamma}\right)^{T}\hat{\omega}_c\right)\right] + \eta\omega_c$$
By using Cauchy's mean value theorem, we have
$$\dot{L}_4 \le -\left(\eta - \frac{1}{2\alpha_a}\right)\tilde{\omega}_a^{T}\tilde{\omega}_a + \frac{\alpha_a}{2}\left\|J_0\right\|^{2}$$
Combining the above, the first derivative of (69) satisfies
$$\dot{L} \le -c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c - c_2\tilde{\omega}_c^{T}\tilde{\omega}_c - c_3\tilde{\omega}_a^{T}\tilde{\omega}_a - c_6\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2} + J_2$$
where $c_\theta > 0$, $\theta = 1, 2, 3$, and
$$c_1 = \frac{T - t_1 - e^{-\chi T}\rho_1 T^{2}t_1^{2}}{2\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}} - \left(\chi + \frac{1}{T}\right)^{2} - \frac{1}{T^{2}}e^{-2\chi T},\quad c_2 = \frac{\left\|\Delta\varphi_c\right\|^{2}}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}},\quad c_3 = \eta - \frac{1}{2\alpha_a},\quad c_6 = c_4 - c_0,$$
$$J_2 = \frac{\left(\omega_c^{T}\Delta\varphi_c\right)^{T}\omega_c^{T}\Delta\varphi_c}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}} + c_5 + \frac{\alpha_a}{2}\left\|J_0\right\|^{2}.$$
To establish finite-time convergence, we add and subtract several fractional-power terms on the right-hand side:
$$\dot{L} \le -c_1\left[\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c\right]^{\beta} + c_1\left[\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c\right]^{\beta} - c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c - c_2\left(\tilde{\omega}_c^{T}\tilde{\omega}_c\right)^{\beta} + c_2\left(\tilde{\omega}_c^{T}\tilde{\omega}_c\right)^{\beta} - c_2\tilde{\omega}_c^{T}\tilde{\omega}_c - c_3\left(\tilde{\omega}_a^{T}\tilde{\omega}_a\right)^{\beta} + c_3\left(\tilde{\omega}_a^{T}\tilde{\omega}_a\right)^{\beta} - c_3\tilde{\omega}_a^{T}\tilde{\omega}_a - c_6\left(\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}\right)^{\beta} + c_6\left(\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}\right)^{\beta} - c_6\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2} + J_2$$
To make system (1) stable in finite time, Lemma 1 is considered. Therefore, the following constant must be positive:
$$\alpha_a > \frac{1}{2\eta}$$
and the following conditions hold:
$$\frac{2}{\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}}\left[\left(\chi + \frac{1}{T}\right)^{2} - \frac{1}{T^{2}}e^{-2\chi T}\right]t_1^{2}T - e^{-\chi T}\rho_1 t_1 + \frac{T}{2} < 0$$
with
$$e^{\chi T} > \rho_1$$
$$T\left(\chi + 1\right) > 1 - e^{-\chi T}$$
then
$$t_1 > \frac{e^{-\chi T}\rho_1 + \sqrt{\left(e^{-\chi T}\rho_1\right)^{2} - 8\left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}\left[\left(\chi + \frac{1}{T}\right)^{2} - \frac{1}{T^{2}}e^{-2\chi T}\right]}}{2 + \left(1+\Delta\varphi_c^{T}\Delta\varphi_c\right)^{2}\left[\left(\chi + \frac{1}{T}\right)^{2} - \frac{1}{T^{2}}e^{-2\chi T}\right]T}.$$
Applying Lemma 1 to $c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c$, $c_2\tilde{\omega}_c^{T}\tilde{\omega}_c$, $c_3\tilde{\omega}_a^{T}\tilde{\omega}_a$, and $c_6\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}$, with $x = 1$, $y$ taken as each of these terms, $\mu_1 = b$, $\mu_2 = 1 - b$, and $l = (1-b)\,b^{b/(1-b)}$, we have
$$\left[c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c\right]^{b} \le b\,l + c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c$$
$$\left(c_2\tilde{\omega}_c^{T}\tilde{\omega}_c\right)^{b} \le b\,l + c_2\tilde{\omega}_c^{T}\tilde{\omega}_c$$
$$\left(c_3\tilde{\omega}_a^{T}\tilde{\omega}_a\right)^{b} \le b\,l + c_3\tilde{\omega}_a^{T}\tilde{\omega}_a$$
$$\left(c_6\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}\right)^{b} \le b\,l + c_6\sum_{\theta=1}^{n}\left\|\Delta\psi(t-\theta\Delta t)\right\|^{2}.$$
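The splitting above relies on a Young-type scalar bound: for $0 < b < 1$ and $y \ge 0$, $y^{b} \le l + y$ with $l = (1-b)\,b^{b/(1-b)}$, where equality holds at the maximizer $y^{*} = b^{1/(1-b)}$ of $y^{b} - y$. A quick numerical check (grid values are illustrative, not from the paper):

```python
# Verify y^b <= l + y on a grid, and equality at y* = b^(1/(1-b)).
b = 0.75
l = (1 - b) * b ** (b / (1 - b))               # = 0.25 * 0.75^3
ys = [i * 0.001 for i in range(20001)]         # y in [0, 20]
assert all(y ** b <= l + y + 1e-12 for y in ys)

y_star = b ** (1 / (1 - b))                    # maximizer of y^b - y
assert abs(y_star ** b - (l + y_star)) < 1e-9  # bound is tight there
```

This is what turns the quadratic negative terms in the Lyapunov derivative into the fractional power $-cL^{b}$ needed for Lemma 2.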
Considering (94)–(97), (88) can be rewritten as
$$\dot{L} \le -\left[c_1\left(\hat{\omega}_c^{T}\varphi_c\right)^{T}\hat{\omega}_c^{T}\varphi_c\right]^{b} - \left(c_2\alpha_c\right)^{b}\left(\frac{1}{\alpha_c}\tilde{\omega}_c^{T}\tilde{\omega}_c\right)^{b} - \left(c_6\left\|Q_3^{-1}\right\|\right)^{b}\left(\sum_{\theta=1}^{n}\Delta\psi(t-\theta\Delta t)^{T} Q_3\,\Delta\psi(t-\theta\Delta t)\right)^{b} - \left(c_3\alpha_a\right)^{b}\left(\frac{1}{\alpha_a}\tilde{\omega}_a^{T}\tilde{\omega}_a\right)^{b} + 4bl + J_2.$$
Inspired by [45], for $\dot{\hat{x}}(t) = -\chi\hat{x}(t) + \kappa\upsilon(t)$, if $\upsilon(t) > 0$ for $t > t_0$, then $x(t) > 0$ for $t > t_0$. Hence
$$\dot{L} \le -cL^{b} + \pi$$
where
$$c = \min\left\{c_1, \left(c_2\alpha_c\right)^{b}, \left(c_3\alpha_a\right)^{b}, \left(c_6\left\|Q_3^{-1}\right\|\right)^{b}\right\}$$
$$\pi = 4bl + J_2.$$
Based on (89)–(93), (99)–(101), and the lemma in [39], all signals in the closed-loop nonstrict-feedback system are bounded and the system is SGPFS for $t > t_1$.
Furthermore, by using (62), we obtain the optimal control strategy, which guarantees the stability of the target nonlinear system under the state time delay, and the tracking error converges to a sufficiently small neighborhood of zero in finite time.
The proof is completed. □

4. Results of Simulation Example

The WMR system [54] in Figure 2 is used to illustrate the effectiveness of the proposed algorithm:
$$m\dot{v}\cos\beta_w - mv\dot{\beta}_w\sin\beta_w + md_w\dot{\phi}^{2} = F_{DP1} + F_{DP2} - f_{DP} - mg\sin\theta\cos o$$
$$I\dot{\omega} = F_{DP1}d_1 - F_{DP2}d_1 - \tau_R$$
where $m$ represents the robot mass, and $I$ denotes the rotational inertia around the motion center. $\beta_w$ is the angle between the robot velocity and the $x_m$ axis, and $o$ is the inclination angle of the ground on which the robot is located.
Then, (102) is rewritten in vector form as
$$M\dot{v} + Vv + G = B\tau - T_{De} - F_R$$
where, according to [55],
$$M = \begin{bmatrix} m\cos\beta_w & 0 \\ 0 & I \end{bmatrix},\quad V = \begin{bmatrix} -m\dot{\beta}_w\sin\beta_w & md_2\dot{\phi} \\ 0 & 0 \end{bmatrix},\quad v = \begin{bmatrix} v \\ \omega \end{bmatrix},\quad F_R = \begin{bmatrix} f_{DP} \\ \tau_R \end{bmatrix},\quad \tau = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix},\quad G = \begin{bmatrix} mg\sin\theta\cos o \\ 0 \end{bmatrix},\quad B = \frac{1}{r_s}\begin{bmatrix} 1 & 1 \\ d_1 & -d_2 \end{bmatrix},$$
$$T_{De} = \begin{bmatrix} \left(R_{C01}(s) + k_{RC1}(s)s_1 + A_{RC1}(s)\cos\left(n_L\vartheta - 2\xi_{01}(s)\right)\right)F_{N1} \\ \left(R_{C02}(s) + k_{RC2}(s)s_2 + A_{RC2}(s)\cos\left(n_L\vartheta - 2\xi_{02}(s)\right)\right)F_{N2} \end{bmatrix}.$$
Considering the symmetry of the mass matrix and incorporating the state time delay, the WMR dynamics can be represented in state-space form:
$$\dot{v}(t) = f(t)v(t) + h\left(v(t-t_1)\right) + g(t)u + \omega(t)$$
where $f(t) = -M^{-1}V$, $g(t) = M^{-1}B$ is an unknown function, and $\omega(t) = -M^{-1}\left(T_{De} + F_R + G\right)$ denotes the lumped resistance torque and unknown disturbances.
The desired trajectories, representing the forward velocity and the steering angular velocity, are given as $x_{d,1} = 1.2 + 0.5\sin(0.05t)$ and $x_{d,2} = 0.5\cos(0.05t)$.
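The reference signals are easy to reproduce for anyone replicating the simulation. The sampling instants below are an arbitrary choice (the signals have period $2\pi/0.05 \approx 125.7$ s).

```python
# Reference trajectories: forward velocity and steering angular velocity.
import math

def x_d(t):
    """Return (x_d1, x_d2) = (1.2 + 0.5*sin(0.05t), 0.5*cos(0.05t))."""
    return 1.2 + 0.5 * math.sin(0.05 * t), 0.5 * math.cos(0.05 * t)

for t in (0.0, 10.0, 62.8):          # 62.8 s is roughly half a period
    v_ref, w_ref = x_d(t)
    print(f"t={t:6.1f}  v_d={v_ref:.3f}  w_d={w_ref:.3f}")
```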
According to the actual WMR system, the initial values are $\beta_w(0) = [0, 0]^{T}$, the NN weights are initialized as $\mathrm{rand}(1,4)$ and $\mathrm{rand}(1,4)$, and $\alpha_c = 0.13$, $\alpha_a = 0.12$, $\lambda = 0.05$, $\gamma = 0.10$, $R = 1$, and $Q = [1, 0; 0, 1]$ in this simulation. The following simulation results are then presented.
With the state time delay handled by appropriate LKFs, the impact of the delay is successfully suppressed. Figures 3 and 4 show that the proposed algorithm achieves good tracking performance.
In addition, the adaptive updates of the critic and the action NNs are reflected in Figure 5 and Figure 6, which confirm the boundedness of the adaptive laws. Moreover, the tracking trajectory of the WMR is shown in Figure 7. Accordingly, every signal in the wheeled mobile robotic system is SGPFS. Compared with previous work [56], with similar control effects, this paper additionally considers finite-time control, and the simulation results achieve finite-time convergence, reflecting the control advantages of the proposed algorithm.

5. Conclusions

A finite-time adaptive online optimal tracking control algorithm was proposed for nonlinear systems with state time delays. By using appropriate LKFs, the issues arising from the time delays were resolved. Then, a novel nonquadratic HJB function was defined with finite time as the upper limit of integration; it incorporates the state time-delay information while retaining the basic cost information. Under the required conditions, the ideal HJB function was solved using the IRL method. Furthermore, SGPFS was guaranteed through the definition of the optimal control policy and the adaptive updates of the critic and action NNs.

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L.; validation, S.L.; formal analysis, S.L.; investigation, S.L.; resources, S.L.; data curation, S.L.; writing—original draft preparation, S.L.; writing—review and editing, S.L., T.R., L.D. and L.L.; visualization, S.L.; supervision, S.L., L.D. and L.L.; project administration, S.L.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China under Grant 62203198 and Grant 62173173, in part by the Doctoral Startup Fund of Liaoning University of Technology under Grant XB2021010, in part by the Liaoning Revitalization Talents Program under Grant XLYC1907050/XLYC2203094 and in part by the key project of “Unveiling the List and Leading the Command” in Liaoning Province under Grant JBGS2022005.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Y.J.; Li, J.; Tong, S.C.; Chen, C.L.P. Neural network control-based adaptive learning design for nonlinear systems with full state constraints. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1562–1571. [Google Scholar] [PubMed]
  2. Wu, C.W.; Liu, J.X.; Xiong, Y.Y.; Wu, L.G. Observer-based adaptive fault-tolerant tracking control of nonlinear nonstrict-feedback systems. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 3022–3033. [Google Scholar] [PubMed]
  3. Gao, T.T.; Liu, Y.J.; Liu, L.; Li, D.P. Adaptive neural network-based control for a class of nonlinear pure-feedback systems with time-varying full state constraints. IEEE/CAA J. Autom. Sin. 2018, 5, 923–933. [Google Scholar]
  4. Li, D.P.; Chen, C.L.P.; Liu, Y.J.; Tong, S.C. Neural network controller design for a class of nonlinear delayed systems with time-varying full-state constraints. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2625–2636. [Google Scholar] [PubMed]
  5. Liu, L.; Wang, Z.S.; Yao, X.S.; Zhang, H.G. Echo state networks based data-driven adaptive fault tolerant control with its application to electromechanical system. IEEE/ASME Trans. Mechatron. 2018, 23, 1372–1382. [Google Scholar]
  6. Ge, S.S.; Hang, C.C.; Zhang, T. Adaptive neural network control of nonlinear systems by state and output feedback. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1999, 29, 818–828. [Google Scholar]
  7. Wen, G.X.; Ge, S.S.; Chen, C.L.P.; Tu, F.W.; Wang, S.N. Adaptive tracking control of surface vessel using optimized backstepping technique. IEEE Trans. Cybern. 2019, 49, 3420–3431. [Google Scholar] [PubMed]
  8. Jagannathan, S.; He, P. Neural-network-based state feedback control of a nonlinear discrete-time system in nonstrict feedback form. IEEE Trans. Neural Netw. 2008, 19, 2073–2087. [Google Scholar] [PubMed]
  9. Liu, L.; Liu, Y.J.; Tong, S.C. Fuzzy based multi-error constraint control for switched nonlinear systems and its applications. IEEE Trans. Fuzzy Syst. 2019, 27, 1519–1531. [Google Scholar]
  10. Bertsekas, D.P.; Tsitsiklis, J.N. Neuro-dynamic programming: An overview. In Proceedings of the 1995 34th IEEE Conference on Decision and Control, New Orleans, LA, USA, 13–15 December 1995; Volume 1, pp. 560–564. [Google Scholar]
  11. Lewis, F.L.; Liu, D. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control; John Wiley & Sons: New York, NY, USA, 2013. [Google Scholar]
  12. Zhang, H.G.; Liu, D.R.; Luo, Y.H.; Wang, D. Adaptive Dynamic Programming for Control: Algorithms and Stability; Springer: London, UK, 2013. [Google Scholar]
  13. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  14. Wen, G.X.; Chen, C.L.P.; Ge, S.S.; Yang, H.L.; Liu, X.G. Optimized adaptive nonlinear tracking control using actor-critic reinforcement learning strategy. IEEE Trans. Ind. Inform. 2019, 15, 4969–4977. [Google Scholar]
  15. Modares, H.; Lewis, F.L. Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning. Automatica 2014, 50, 1780–1792. [Google Scholar]
  16. Vamvoudakis, K.G. Event-triggered optimal adaptive control algorithm for continuous-time nonlinear systems. IEEE/CAA J. Autom. Sin. 2014, 1, 282–293. [Google Scholar]
  17. Wang, D.; Liu, D.R.; Wei, Q.L.; Zhao, D.; Jin, N. Optimal control of unknown nonaffine nonlinear discrete-time systems based on adaptive dynamic programming. Automatica 2012, 48, 1825–1832. [Google Scholar]
  18. Liu, D.R.; Wei, Q.L. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 621–634. [Google Scholar] [PubMed]
  19. Wei, Q.L.; Liu, D.R. A novel iterative adaptive dynamic programming for discrete-time nonlinear systems. IEEE Trans. Autom. Sci. Eng. 2014, 11, 1176–1190. [Google Scholar]
  20. Cao, Y.; Ni, K.; Kawaguchi, T.; Hashimoto, S. Path following for autonomous mobile robots with deep reinforcement learning. Sensors 2024, 24, 561. [Google Scholar] [CrossRef] [PubMed]
  21. Li, S.; Ding, L.; Gao, H.B.; Liu, Y.J.; Li, N.; Deng, Z.Q. Reinforcement learning neural network-based adaptive control for state and input time-delayed wheeled mobile robots. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 4171–4182. [Google Scholar]
  22. Luy, N.T.; Thanh, N.T.; Tri, H.M. Reinforcement learning-based intelligent tracking control for wheeled mobile robot. Trans. Inst. Meas. Control 2014, 36, 868–877. [Google Scholar]
  23. Li, S.; Ding, L.; Gao, H.B.; Liu, Y.J.; Huang, L.; Deng, Z.Q. ADP-based online tracking control of partially uncertain time-delayed nonlinear system and application to wheeled mobile robots. IEEE Trans. Cybern. 2020, 50, 3182–3194. [Google Scholar] [PubMed]
  24. Shih, P.; Kaul, B.C.; Jagannathan, S.; Drallmeier, J.A. Reinforcement-learning-based output-feedback control of nonstrict nonlinear discrete-time systems with application to engine emission control. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39, 1162–1179. [Google Scholar]
  25. Wei, Q.L.; Zhang, H.G.; Liu, D.R.; Zhao, Y. An optimal control scheme for a class of discrete-time nonlinear systems with time delays using adaptive dynamic programming. Acta Autom. Sin. 2010, 36, 121–129. [Google Scholar]
  26. Na, J.; Herrmann, G.; Ren, X.; Barber, P. Adaptive discrete neural observer design for nonlinear systems with unknown time-delay. Int. J. Robust Nonlinear Control 2011, 21, 625–647. [Google Scholar]
  27. Li, D.P.; Liu, Y.J.; Tong, S.C.; Chen, C.L.P.; Li, D.J. Neural networks- based adaptive control for nonlinear state constrained systems with input delay. IEEE Trans. Cybern. 2019, 49, 1249–1258. [Google Scholar] [PubMed]
  28. Chen, C.L.P.; Wen, G.X.; Liu, Y.J.; Wang, F.Y. Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1217–1226. [Google Scholar]
  29. Li, H.; Wang, L.J.; Du, H.P.; Boulkroune, A. Adaptive fuzzy backstepping tracking control for strict-feedback systems with input delay. IEEE Trans. Fuzzy Syst. 2017, 25, 642–652. [Google Scholar]
  30. Wang, D.; Zhou, D.H.; Jin, Y.H.; Qin, S.J. Adaptive generic model control for a class of nonlinear time-varying processes with input time delay. J. Process Control 2004, 14, 517–531. [Google Scholar]
  31. Li, D.P.; Li, D.J. Adaptive neural tracking control for an uncertain state constrained robotic manipulator with unknown time-varying delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 2219–2228. [Google Scholar]
  32. Iglehart, D.L. Optimality of (s, S) policies in the infinite horizon dynamic inventory problem. Manag. Sci. 1963, 9, 259–267. [Google Scholar]
  33. Sondik, E.J. The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. Oper. Res. 1978, 26, 282–304. [Google Scholar]
  34. Keerthi, S.S.; Gilbert, E.G. Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations. J. Optim. Theory Appl. 1988, 57, 265–293. [Google Scholar]
  35. Chen, H.; Allgöwer, F. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica 1998, 34, 1205–1217. [Google Scholar]
  36. Vamvoudakis, K.G.; Lewis, F.L. Online actor–critic algorithm to solve the continuous-time infinite horizon optimal control problem. Automatica 2010, 46, 878–888. [Google Scholar]
  37. Wei, Q.L.; Liu, D.R.; Yang, X. Infinite horizon self-learning optimal control of nonaffine discrete-time nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 866–879. [Google Scholar] [PubMed]
  38. Wang, D.; Liu, D.R.; Wei, Q.L. Finite-horizon neuro-optimal tracking control for a class of discrete-time nonlinear systems using adaptive dynamic programming approach. Neurocomputing 2012, 78, 14–22. [Google Scholar]
  39. Wang, F.Y.; Jin, N.; Liu, D.; Wei, Q.L. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound. IEEE Trans. Neural Netw. 2011, 22, 24–36. [Google Scholar]
  40. Liu, X.; Gao, Z. Robust finite-time fault estimation for stochastic nonlinear systems with Brownian motions. J. Frankl. Inst. 2017, 354, 2500–2523. [Google Scholar]
  41. Liu, L.; Liu, Y.J.; Tong, S.C. Neural networks-based adaptive finite-time fault-tolerant control for a class of strict-feedback switched nonlinear systems. IEEE Trans. Cybern. 2019, 49, 2536–2545. [Google Scholar] [PubMed]
  42. Wang, F.; Zhang, X.; Chen, B.; Lin, C.; Li, X.; Zhang, J. Adaptive finite-time tracking control of switched nonlinear systems. Inf. Sci. 2017, 421, 126–135. [Google Scholar]
  43. Wang, F.; Chen, B.; Liu, X.; Lin, C. Finite-time adaptive fuzzy tracking control design for nonlinear systems. IEEE Trans. Fuzzy Syst. 2018, 26, 1207–1216. [Google Scholar]
  44. Wang, F.; Chen, B.; Lin, C.; Zhang, J.; Meng, X. Adaptive neural network finite-time output feedback control of quantized nonlinear systems. IEEE Trans. Cybern. 2018, 48, 1839–1848. [Google Scholar] [PubMed]
  45. Ren, H.; Ma, H.; Li, H.; Wang, Z. Adaptive fixed-time control of nonlinear mass with actuator faults. IEEE/CAA J. Autom. Sin. 2023, 10, 1252–1262. [Google Scholar]
  46. Wang, N.; Tong, S.C.; Li, Y.M. Observer-based adaptive fuzzy control of nonlinear non-strict feedback system with input delay. Int. J. Fuzzy Syst. 2018, 20, 236–245. [Google Scholar]
  47. Zhu, Z.; Xia, Y.Q.; Fu, M.Y. Attitude stabilization of rigid spacecraft with finite-time convergence. Int. J. Robust Nonlinear Control 2011, 21, 686–702. [Google Scholar]
  48. Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952. [Google Scholar]
  49. Chen, B.; Liu, X.P.; Liu, K.F.; Lin, C. Fuzzy approximation-based adaptive control of nonlinear delayed systems with unknown dead-zone. IEEE Trans. Fuzzy Syst. 2014, 22, 237–248. [Google Scholar]
  50. Qian, C.; Lin, W. Non-Lipschitz continuous stabilizers for nonlinear systems with uncontrollable unstable linearization. Syst. Control Lett. 2001, 42, 185–200. [Google Scholar]
  51. Adhyaru, D.M.; Kar, I.N.; Gopal, M. Bounded robust control of nonlinear systems using neural network–based HJB solution. Neural Comput. Appl. 2011, 20, 91–103. [Google Scholar]
  52. Lyshevski, S.E. Optimal control of nonlinear continuous-time systems: Design of bounded controllers via generalized nonquadratic functionals. In Proceedings of the 1998 American Control Conference. ACC (IEEE Cat. No. 98CH36207), Philadelphia, PA, USA, 26 June 1998; Volume 1, pp. 205–209. [Google Scholar]
  53. Abu-Khalaf, M.; Lewis, F.L. Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach. Automatica 2005, 41, 779–791. [Google Scholar]
  54. Li, S.; Wang, Q.; Ding, L.; An, X.; Gao, H.; Hou, Y.; Deng, Z. Adaptive NN-based finite-time tracking control for wheeled mobile robots with time-varying full state constraints. Neurocomputing 2020, 403, 421–430. [Google Scholar]
  55. Ding, L.; Huang, L.; Li, S.; Gao, H.; Deng, H.; Li, Y.; Liu, G. Definition and application of variable resistance coefficient for wheeled mobile robots on deformable terrain. IEEE Trans. Robot. 2020, 36, 894–909. [Google Scholar]
  56. Li, S.; Li, D.P.; Liu, Y.J. Adaptive neural network tracking design for a class of uncertain nonlinear discrete-time systems with unknown time-delay. Neurocomputing 2015, 168, 152–159. [Google Scholar]
Figure 1. Structure of the finite-time convergence adaptive optimal tracking control algorithm.
Figure 2. The structure of the wheeled mobile robot.
Figure 3. Tracking trajectories of the states.
Figure 4. Tracking errors.
Figure 5. The adaptive laws of the action NNs.
Figure 6. The adaptive laws of the critic NNs.
Figure 7. Tracking trajectories of the position.
