Article

Near-Optimal Tracking Control of Partially Unknown Discrete-Time Nonlinear Systems Based on Radial Basis Function Neural Network

1 School of Automation, Guangxi University of Science and Technology, Liuzhou 545000, China
2 School of Physics and Electrical Engineering, Liupanshui Normal University, Liupanshui 553000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(8), 1146; https://doi.org/10.3390/math12081146
Submission received: 28 February 2024 / Revised: 5 April 2024 / Accepted: 8 April 2024 / Published: 10 April 2024
(This article belongs to the Special Issue Advances in Nonlinear Analysis and Control)

Abstract

This paper proposes an optimal tracking control scheme through adaptive dynamic programming (ADP) for a class of partially unknown discrete-time (DT) nonlinear systems based on a radial basis function neural network (RBF-NN). In order to acquire the unknown system dynamics, we use two RBF-NNs: the first constructs the identifier, and the other directly approximates the steady-state control input, where a novel adaptive law is proposed to update the neural network weights. The optimal feedback control and the cost function are derived via feedforward neural network approximation, and a means of regulating the tracking error is proposed. The critic network and the actor network are trained online to obtain the solution of the associated Hamilton–Jacobi–Bellman (HJB) equation within the ADP framework. Simulations were carried out to verify the effectiveness of the optimal tracking control technique using the neural networks.

1. Introduction

As is widely known, nonlinear system control is an important topic in the control field, especially for discrete-time nonlinear systems, which are difficult to handle with traditional control methods. In recent decades, many different approaches to discrete-time system control have been proposed, such as adaptive control [1], fuzzy control [2], and PID control [3]. Optimal tracking control, one of the effective methods for nonlinear systems, has many practical engineering applications [4,5,6]. Its purpose is to design a control law that not only allows the system to track the desired trajectory but also minimizes a specific performance index. It is therefore of great theoretical significance to explore the optimal tracking control of nonlinear systems. Although dynamic programming is an effective method for solving optimal control problems, it suffers from the "curse of dimensionality" when dealing with relatively complex systems [7,8]. Moreover, the HJB equation derived from the optimal control of nonlinear systems is difficult to solve and has no analytical solution.
On the other hand, neural network control is a common control method for uncertain nonlinear systems. In 1990, Narendra et al. first proposed an artificial neural network (ANN) adaptive control method for nonlinear dynamical systems [9]. Through neural network approximation, an uncertain system can be reconstructed using input and output data. Since then, multilayer neural networks (MNNs) have been successfully applied in pattern recognition and control systems [10]. This has also led to the development of many types of neural networks, including the RBF-NN. In [11], Poggio et al. first proved that the RBF-NN is superior in approximating functions. Studies of RBF-NNs have also shown that these neural networks can approximate any nonlinear function on a compact set with arbitrary accuracy [12,13]. Compared to other ANNs, the RBF-NN does not have the complex structure of networks such as back propagation (BP) networks or recurrent neural networks (RNNs), and its parameters are easier to select [11,14,15]. Its good generalization ability, simple network structure, and avoidance of unnecessarily lengthy computations are advantages that have attracted attention to RBF-NNs [15,16]. Many research results have been published on neural network control for nonlinear systems [17,18,19].
Benefiting from neural networks and reinforcement learning (RL), the difficult problem of solving nonlinear HJB partial differential equations has been addressed. The ADP algorithm, which combines the theory and methods of RL, neural networks, adaptive control, and optimal control, was proposed by Powell to approximate the solution of the HJB equation [20]. As it has developed, ADP has not only come to be considered one of the core methods for solving a diversity of optimal control problems but has also been successfully applied to both continuous-time systems [21,22,23] and discrete-time systems [24,25,26,27,28,29,30,31] to search for solutions of the HJB equations online. In particular, several works have attempted to solve the discrete-time nonlinear optimal regulation problem using the ADP algorithm, such as robust ADP [32,33,34,35], iterative/invariant ADP [36,37,38,39], off-policy RL [40,41,42], and the Q-learning algorithm [40,43].
In the past decades, many relevant studies have been conducted on the optimal tracking control of discrete-time nonlinear systems using the ADP algorithm. However, in the existing literature on optimal tracking of nonlinear discrete-time systems, there is no RBF neural network-based ADP algorithm. In this paper, an optimal tracking control method based on RBF-NNs for discrete-time partially unknown nonlinear systems is proposed. Two RBF neural networks are used to approximate the unknown system dynamics as well as the steady-state control. After transforming the tracking problem into a regulation problem, the critic network and the actor network are used to obtain the nearly optimal feedback control, which allows the online learning process to require only current and past system data.
The contributions of this article are as follows. (1) Unlike the classical technique of NN approximation, we propose a near-optimal tracking control scheme for a class of partially unknown discrete-time nonlinear systems based on RBF-NNs and prove the stability of the system. (2) Compared with [35,39], we additionally use an RBF-NN to directly approximate the steady-state controller of the unknown system. This removes the requirement for a priori knowledge of the controlled system dynamics and the reference system dynamics. Moreover, we propose a novel adaptive law to update the weights of the steady-state controller.
The paper is organized as follows. The problem statement is given in Section 2. The design of the optimal tracking controller for the system with partially unknown nonlinear dynamics is presented in Section 3, which includes the RBF-NN identifier, the RBF-NN steady-state controller, the near-optimal feedback controller, and the stability analysis. Section 4 provides simulation results to validate the proposed control method and details the method comparison. Section 5 draws some conclusions.

2. Problem Statement

Consider the following affine nonlinear discrete-time system [31]:

$$x(k+1) = f[x(k)] + g[x(k)]u(k) \quad (1)$$

where $x(k) \in \mathbb{R}^n$ is the measurable system state and $u(k) \in \mathbb{R}^m$ is the control input. Assume that the nonlinear smooth function $f[x(k)] \in \mathbb{R}^n$ is an unknown drift function, $g[x(k)] \in \mathbb{R}^{n \times m}$ is a known function, and $\|g[x(k)]\|_F \le g_1$, where $\|\cdot\|_F$ denotes the Frobenius norm. In addition, assume that $g[x(k)]$ has a generalized inverse matrix $g[x(k)]^+ \in \mathbb{R}^{m \times n}$ such that $g[x(k)]g[x(k)]^+ = I \in \mathbb{R}^{n \times n}$, where $I$ is the identity matrix. Let $x(0)$ be the initial state.
The reference trajectory is generated by the following bounded command:

$$x_d(k+1) = \varphi(x_d(k)) \quad (2)$$

where $x_d(k) \in \mathbb{R}^n$ and $\varphi(x_d(k)) \in \mathbb{R}^n$; $x_d(k)$ is the reference trajectory, which is only required to be a stable or asymptotically stable state trajectory.
The goal of this paper is to design a controller $u(k)$ that not only ensures that the state of system (1) tracks the reference trajectory but also minimizes a cost function. For the optimal tracking control technique, the cost function is usually considered in quadratic form [4], that is,

$$J(e(k), u(k)) = \sum_{k=0}^{\infty}\left[e^T(k)Qe(k) + u^T(k)Ru(k)\right] \quad (3)$$

where $Q \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{m \times m}$ are symmetric positive definite, and $e(k) = x(k) - x_d(k)$ is the tracking error. For common solutions of tracking problems, the control input consists of two parts, a steady-state input $u_d$ and a feedback input $u_e$ [24]. Next, we discuss how to obtain each part.
The steady-state controller is used to ensure perfect tracking, which is realized under the condition $x(k) = x_d(k)$. For this condition to be fulfilled, the steady-state part of the control, $u_d(k)$, must exist to make $x(k)$ equivalent to $x_d(k)$. By substituting $x_d(k)$ and $u_d(k)$ into system (1), the reference state is

$$x_d(k+1) = f[x_d(k)] + g[x_d(k)]u_d(k) \quad (4)$$

where $x_d(k)$ and $x_d(k+1)$ are bounded states of the reference trajectory to be tracked. If the system dynamics (1) are known, $u_d(k)$ is acquired by

$$u_d(k) = g[x_d(k)]^+\left(x_d(k+1) - f[x_d(k)]\right) \quad (5)$$

where $g[x_d(k)]^+ = \left(g[x_d(k)]^T g[x_d(k)]\right)^{-1} g[x_d(k)]^T$ is the generalized inverse of $g[x_d(k)]$ with $g[x_d(k)]^+ g[x_d(k)] = I$.
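When the dynamics are known, (5) is a one-line computation; the following minimal numerical sketch (the function names are ours) uses the left pseudoinverse:

```python
import numpy as np

def steady_state_control(f, g, x_d, x_d_next):
    """Steady-state input u_d(k) = g[x_d(k)]^+ (x_d(k+1) - f[x_d(k)])."""
    G = g(x_d)                    # n x m input matrix evaluated on the reference
    G_plus = np.linalg.pinv(G)    # (G^T G)^{-1} G^T when G has full column rank
    return G_plus @ (x_d_next - f(x_d))
```

For a square, invertible $g$ this reduces to an exact inverse, and substituting $u_d$ back into (4) reproduces $x_d(k+1)$.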
Remark 1. 
In the subsequent discussion, the RBF network can be used to identify the unknown dynamics of system (1); hence, (5) can be computed.
By using (1) and (4), the tracking error dynamics $e(k)$ are given by

$$e(k+1) = f[x(k)] + g[x(k)]u(k) - x_d(k+1) = f_e(k) + g_e(k)u_e(k) \quad (6)$$

where $f_e(k) = g(e(k)+x_d(k))\,g(x_d(k))^+\left(\varphi(x_d(k)) - f(x_d(k))\right) + f(e(k)+x_d(k)) - \varphi(x_d(k))$, $u_e(k) = u(k) - u_d(k)$, and $g_e(k) = g[x_d(k) + e(k)]$. Here, $u_e(k) \in \mathbb{R}^m$ is the feedback control input, which is designed to stabilize the tracking error dynamics by minimizing the cost function. For $e(k)$ under the control sequence, the cost function can be expressed as the following discrete-time tracking HJB equation

$$J_e(e(k), u_e(k)) = \sum_{j=k}^{\infty}\left[e^T(j)Qe(j) + u_e^T(j)Ru_e(j)\right] = e^T(k)Qe(k) + u_e^T(k)Ru_e(k) + J_e(e(k+1), u_e(k+1)) = r(k) + J_e(e(k+1), u_e(k+1)) \quad (7)$$

where $r(k) = e^T(k)Qe(k) + u_e^T(k)Ru_e(k)$, $J_e(e(k), u_e(k)) > 0$ for $e(k), u_e(k) \ne 0$, and $J_e(e(k+1), u_e(k+1))$ denotes the cost function at the next tracking error state $e(k+1)$. The tracking error $e(k)$ is used in the study of the cost function of the optimal tracking control problem.
In general, the feedback control $u_e(k)$ is found by minimizing (7) through the extremum condition in the optimal control framework [4]. The result is

$$u_e^*(k) = -\frac{1}{2}R^{-1}g_e^T(k)\frac{\partial J(e(k+1))}{\partial e(k+1)} \quad (8)$$
Then, the standard control input is obtained as

$$u^*(k) = u_d(k) + u_e^*(k) \quad (9)$$

where $u_d(k)$ is obtained from (5), and $u_e^*(k)$ is obtained from (8).
As detailed in the subsequent discussion, in order to acquire the unknown dynamics in system (1), we use RBF neural networks to reconstruct the system dynamics. Moreover, since the analytical solution of (7) cannot be found and the curse of dimensionality must be faced, the ADP algorithm is used to approximately solve the HJB Equation (7).
The main results of this paper are based on the following definitions and assumptions [30].
Definition 1. 
A control law $u_e$ is admissible with respect to (7) on the set $\Omega$ if $u_e$ is continuous on a compact set $\Omega_u \subseteq \mathbb{R}^m$ for $e(k) \in \Omega$, $u_e(0) = 0$, and $J(e(0), u_e(\cdot))$ is finite.
Assumption 1. 
System (1) is controllable, and the system state x ( k ) = 0 is in equilibrium under control u ( k ) = 0 . Input control u ( k ) = u ( x ( k ) ) satisfies u ( x ( k ) ) = 0 for x ( k ) = 0 , and the cost function is a positive definite function for any x ( k ) and u ( k ) .
Lemma 1. 
For the tracking error system (6), assume that $u_e(k)$ is an admissible control, the internal dynamics $f_e(k)$ are bounded, and

$$\|f_e(k)\|^2 \le \frac{1}{2}\Gamma\lambda_{\min}(Q)\|e(k)\|^2 + \frac{1}{2}\left(\Gamma\lambda_{\min}(R) - 2g_1^2\right)\|u_e(k)\|^2 \quad (10)$$

where $\lambda_{\min}(R)$ is the minimum eigenvalue of $R$, $\lambda_{\min}(Q)$ is the minimum eigenvalue of $Q$, and $\Gamma > 2g_1^2/\lambda_{\min}(R)$ is a known positive constant. Then, the tracking error system (6) is asymptotically stable.
Proof. 
We consider the following Lyapunov function,

$$V(k) = e^T(k)e(k) + \Gamma J_e(k) \quad (11)$$

where $J_e(k) = J_e(e(k), u_e(k))$ is defined in (7). Taking the first difference of the Lyapunov function yields

$$\Delta V(k) = e^T(k+1)e(k+1) - e^T(k)e(k) + \Gamma\left(J_e(k+1) - J_e(k)\right) \quad (12)$$

Using (6) and (7), we can obtain

$$\Delta V(k) = \left(f_e(k) + g_e(k)u_e(k)\right)^T\left(f_e(k) + g_e(k)u_e(k)\right) - e^T(k)e(k) - \Gamma\left(e^T(k)Qe(k) + u_e^T(k)Ru_e(k)\right) \quad (13)$$

Using the Cauchy–Schwarz inequality yields

$$\Delta V(k) \le 2\|f_e(k)\|^2 - \left(\Gamma\lambda_{\min}(R) - 2g_1^2\right)\|u_e(k)\|^2 - \Gamma\lambda_{\min}(Q)\|e(k)\|^2 - \|e(k)\|^2 \quad (14)$$

For the purpose of asymptotically stabilizing the tracking system (6), i.e., $\Delta V(k) < 0$, it is sufficient to satisfy

$$2\|f_e(k)\|^2 \le \Gamma\lambda_{\min}(Q)\|e(k)\|^2 + \left(\Gamma\lambda_{\min}(R) - 2g_1^2\right)\|u_e(k)\|^2 \quad (15)$$
Thus, Δ V ( k ) < 0 and the asymptotic stability of the tracking error system (6) are proved if the bound in (10) is satisfied. □
Remark 2. 
Lemma 1 shows that under the condition that the internal dynamics f e ( k ) is bounded to satisfy (10), there exists an admissible control u e ( k ) that not only stabilizes the tracking error system (6) on Ω but also guarantees that the cost function J e ( k ) is finite.

3. Optimal Tracking Controller Design with Partially Unknown Dynamics

In this section, firstly, we use an RBF-NN to approximate the unknown system dynamics f[x(k)] and another RBF-NN to approximate the steady-state controller u_d(k). Secondly, two feedforward neural networks are introduced to approximate the cost function and the optimal feedback control u_e(k). Finally, the system stability is proved by selecting an appropriate Lyapunov function.

3.1. RBF-NN Identifier Design

In this subsection, in order to capture the unknown dynamics of system (1), an RBF-NN-based identifier is proposed. Without loss of generality, the unknown dynamics are assumed to be a smooth function within a compact set. Using an RBF-NN, the unknown dynamics of (1) are identified as

$$\hat f(x(k)) = \hat w_f^T(k)h[x(k)] \quad (16)$$

where $\hat w_f(k)$ is the matrix of estimated output weights of the neural network and $h[x(k)]$ is the vector of radial basis functions. The approximation error $\Delta f(x)$ is bounded, $\|\Delta f(x)\| < \varepsilon_f$, where $\varepsilon_f$ is a positive constant.

For any non-zero approximation error $\Delta f(x)$, there exists an optimal weight matrix $w_f^*$ such that

$$f(x(k)) = \hat f(x, w_f^*) + \Delta f(x) \quad (17)$$

where $w_f^*$ is the optimal weight of the identifier, and $\hat f(x, w_f^*) = w_f^{*T}(k)h[x(k)]$. The output weights are updated while the hidden weights remain unchanged during training, so the neural network model identification error is

$$\tilde f(x(k)) = f[x(k)] - \hat f[x(k)] = \hat f(x, w_f^*) + \Delta f[x(k)] - \hat w_f^T(k)h[x(k)] = \tilde w_f^T(k)h[x(k)] + \Delta f[x(k)] \quad (18)$$

where $\tilde w_f(k) = w_f^*(k) - \hat w_f(k)$.
The error function is defined as

$$E(k+1) = \frac{1}{2}\left[\tilde f(x(k))\right]^T\left[\tilde f(x(k))\right] \quad (19)$$

Using the gradient descent method, the weights are updated by

$$\Delta w_{fj}(k+1) = -\eta\frac{\partial E}{\partial w_{fj}} = \eta\left(f(x(k)) - \hat f(x(k))\right)h[x(k)] = \eta\,\tilde f(x(k))\,h[x(k)] \quad (20)$$

and

$$w_{fj}(k) = w_{fj}(k-1) + \Delta w_{fj}(k) \quad (21)$$

where $\eta > 0$ is the learning rate of the identifier.
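The identifier update above can be sketched in a few lines. This is an illustrative stand-alone implementation with Gaussian bases (the names are ours), not the authors' code:

```python
import numpy as np

def rbf_hidden(x, centers, b):
    """Gaussian radial basis vector h(x); centers has shape (l, n), b is a common width."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * b ** 2))

def identifier_step(w_hat, x, f_true, centers, b, eta=0.1):
    """One gradient-descent update of the output weight matrix w_hat (l x n)."""
    h = rbf_hidden(x, centers, b)                 # hidden layer output
    f_hat = w_hat.T @ h                           # identifier output  w^T h
    f_tilde = f_true - f_hat                      # identification error
    w_hat = w_hat + eta * np.outer(h, f_tilde)    # Delta w = eta * f_tilde * h
    return w_hat, f_tilde
```

Repeated updates at a given state shrink the identification error by a factor of $(1 - \eta\|h\|^2)$ per step, which is one way to see why Theorem 1 later requires $\eta \le 1/l_f$.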
Inspired by the work in [36], we must state the following assumptions before proceeding.
Assumption 2. 
The neural network identification error is assumed to have an upper bound, namely

$$\Delta f(x)^T\Delta f(x) \le \tilde w_f^T(k)\tilde w_f(k)\,h^T[x(k)]h[x(k)] \quad (22)$$

3.2. RBF-NN Steady-State Controller Design

We use an RBF-NN to approximate the steady-state control u_d(k) directly; that is, an inverse-dynamics NN is established for this approximation [16,19].
We design the steady-state control $u_d(k)$ through the approximation of the RBF-NN

$$u_d(k) = \hat w_d^T(k)h[x_d(k)] \quad (23)$$

where $\hat w_d$ is the actual neural network weight matrix, $h[x_d(k)]$ is the output of the hidden layer, and $u_d(k)$ is the output of the RBF-NN.
Let the ideal steady-state control $u_d^*(k)$ be

$$u_d^*(k) = w_d^{*T}h[x_d(k)] + \varepsilon_u \quad (24)$$

where $w_d^*$ is the optimal neural network weight matrix and $\varepsilon_u$ is the error vector. Assuming that $x_d(k+1)$ is the reference output of the system at step $k+1$, without considering external disturbances, the control input $u_d^*(k)$ satisfies

$$L[x_d(k), u_d^*(k)] - x_d(k+1) = 0 \quad (25)$$

where $L[x_d(k), u_d^*(k)] = f[x_d(k)] + g[x_d(k)]u_d^*(k)$.
Thus, we can define the error $e_m(k)$ of the approximation state as

$$e_m(k+1) = L[x_d(k), u_d(k)] - x_d(k+1) \quad (26)$$

where $L[x_d(k), u_d(k)] = f[x_d(k)] + g[x_d(k)]u_d(k)$.
Subtracting (24) from (23) yields

$$u_d(k) - u_d^*(k) = \hat w_d^T(k)h[x_d(k)] - w_d^{*T}(k)h[x_d(k)] - \varepsilon_u = \tilde w_d^T(k)h[x_d(k)] - \varepsilon_u \quad (27)$$

where $\tilde w_d(k) = \hat w_d(k) - w_d^*(k)$ is the weight approximation error.
The weights are updated by the following update law:

$$\hat w_d(k+1) = \hat w_d(k) - \gamma\left[h(x_d(k))e_m(k+1) + \sigma\hat w_d(k)\right] \quad (28)$$

where $\gamma > 0$ and $\sigma > 0$ are positive constants.
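A sketch of one update of the steady-state controller, combining the model error (26) with the update law (28). The paper states (28) in compact form; the factor $g(x_d)^T$ below, which maps the state-space error into the input space, is our assumption for a runnable example:

```python
import numpy as np

def steady_controller_step(w_hat, x_d, x_d_next, f, g, centers, b,
                           gamma=0.01, sigma=0.001):
    """One sigma-modified gradient update of the steady-state weights (l x m)."""
    h = np.exp(-np.sum((centers - x_d) ** 2, axis=1) / (2.0 * b ** 2))
    u_d = w_hat.T @ h                               # u_d(k) = w_d^T h(x_d)
    e_m = f(x_d) + g(x_d) @ u_d - x_d_next          # one-step model error (26)
    grad = np.outer(h, g(x_d).T @ e_m)              # error mapped to input space
    w_hat = w_hat - gamma * (grad + sigma * w_hat)  # update law in the spirit of (28)
    return w_hat, e_m
```

The $\sigma$-term keeps the weights bounded, at the price of a small residual in $e_m$.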
Assumption 3. 
Within the set $\Omega_\varepsilon$, the optimal neural network weights $w_d^*$ and the approximation error are bounded:

$$\|w_d^*\| \le w_m, \quad \|\varepsilon_u\| \le \varepsilon_l \quad (29)$$

3.3. Near-Optimal Feedback Controller Design

In this subsection, we present an ADP algorithm based on the Bellman optimality. The goal is to find the optimal approximate feedback control law that minimizes the approximate cost function.
First, considering the HJB Equation (7) and the optimal feedback control (8), the cost function $J_e(e(k), u_e(k))$ is rewritten as $V_i(e(k))$. The initial cost function $V_0(e(k)) = 0$ may not represent the optimal value function. Then, a single control vector $u_e^0(k)$ can be solved from

$$V_0(e(k)) = \min_{u_e(k)}\left\{e^T(k)Qe(k) + u_e^T(k)Ru_e(k) + V_0(e(k+1))\right\} = e^T(k)Qe(k) + \left(u_e^0(k)\right)^T R\, u_e^0(k) \quad (30)$$
Updating the control law yields

$$u_e^1(k) = \arg\min_{u_e(k)}\left\{e^T(k)Qe(k) + u_e^T(k)Ru_e(k) + V_0(e(k+1))\right\} = -\frac{1}{2}R^{-1}g_e^T(k)\frac{\partial V_0(e(k+1))}{\partial e(k+1)} \quad (31)$$
Hence, for $i = 1, 2, \ldots$, the ADP algorithm can be realized as a continuing iterative process through

$$V_i(e(k)) = \min_{u_e(k)}\left\{e^T(k)Qe(k) + u_e^T(k)Ru_e(k) + V_i(e(k+1))\right\} = e^T(k)Qe(k) + \left(u_e^i(k)\right)^T R\, u_e^i(k) + V_i(e(k+1)) \quad (32)$$
and

$$u_e^{i+1}(k) = \arg\min_{u_e(k)}\left\{e^T(k)Qe(k) + u_e^T(k)Ru_e(k) + V_i(e(k+1))\right\} = -\frac{1}{2}R^{-1}g_e^T(k)\frac{\partial V_i(e(k+1))}{\partial e(k+1)} \quad (33)$$
where the index i represents the iteration number of the control law and the cost function, i.e., the number of updates of the internal weight parameters, while the index k represents the time index of the state. It is worth noting that, during the iterative process of the ADP algorithm, the iteration number of the cost function and the control law increases from zero to infinity.
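The value iteration above can be made concrete on a scalar linear error system $e(k+1) = a\,e(k) + b\,u_e(k)$, where a quadratic value $V_i(e) = p_i e^2$ reduces the recursion to a scalar Riccati-like iteration. This toy example is ours, not from the paper:

```python
# Toy illustration of the ADP value iteration on a scalar linear error system
# e(k+1) = a*e(k) + b*u_e(k) with stage cost q*e^2 + r*u^2.  A quadratic value
# V_i(e) = p_i * e^2 turns the min in the recursion into a closed-form gain.

def adp_value_iteration(a, b, q, r, iters=300):
    p = 0.0                                      # V_0 = 0
    k_gain = 0.0
    for _ in range(iters):
        k_gain = p * a * b / (r + p * b ** 2)    # minimizing feedback u = -k*e
        p = q + r * k_gain ** 2 + p * (a - b * k_gain) ** 2
    return p, k_gain
```

For an unstable open loop such as $a = 1.2$, $b = q = r = 1$, the iteration converges to a stabilizing fixed point with closed-loop pole $|a - bk| < 1$.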
To begin the development of the feedback control policy, we used neural networks to construct the critic network and the actor network.
The cost function $V_i(e(k))$ is approximated by the critic network, whose output is denoted as

$$\hat V_i(e(k)) = w_{ci}^T z(\nu_{ci}^T e(k)) + \varepsilon_c(k) \quad (34)$$

where $z(\nu_{ci}^T e(k))$ is the hidden layer function, $w_{ci}$ is the hidden layer weight of the critic network, $\nu_{ci}$ is the input layer weight of the critic network, and $\varepsilon_c(k)$ is the approximation error.
So, we define the prediction error of the critic network as
e c i ( k ) = V ^ i ( e ( k ) ) V i ( e ( k ) )
The error function of the critic network is defined as
E c i ( k ) = 1 2 e c i T ( k ) e c i ( k ) .
Using the gradient descent method, the weights of the critic network are updated by

$$w_{ci}(k+1) = w_{ci}(k) - \alpha_c\left[\frac{\partial E_{ci}(k)}{\partial w_{ci}(k)}\right] \quad (37)$$
where α c > 0 is the learning rate of the critic network.
The input of the actor network is the system error e ( k ) , and the output of the actor network is the optimal feedback control u e ( k ) . The output can be formulated as
u ^ e i ( k ) = w a i T z ( v a i T e ( k ) ) + ε a ( k ) ,
where z ( ν a i T e ( k ) ) is the hidden layer function, w a i is the hidden layer weight of the actor network, ν a i is the input layer weight of the actor network, and ε a ( k ) is the approximation error.
Similar to the critic network, the prediction error of the actor network is defined as
e a i ( k ) = u ^ e i ( k ) u e i ( k )
where u ^ e i ( k ) is approximation optimal feedback control, and u e i ( k ) is the optimal feedback control at the iterative number i.
The error function of the actor network is defined as
E a i ( k ) = 1 2 e a i T ( k ) e a i ( k )
The weights of the actor network are updated in the same way as those of the critic network, using the gradient descent method:

$$w_{ai}(k+1) = w_{ai}(k) - \beta_a\left[\frac{\partial E_{ai}(k)}{\partial w_{ai}(k)}\right] \quad (41)$$

where $\beta_a > 0$ is the learning rate of the actor network, and $i$ is the iteration index of the weight updates.
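The two gradient updates above can be sketched jointly. This stand-alone example, with tanh hidden layers, fixed input-layer weights, and targets supplied externally, is our illustration of one critic/actor training step, not the authors' code:

```python
import numpy as np

def critic_actor_step(w_c, w_a, v_c, v_a, e, V_target, u_target,
                      alpha_c=0.1, beta_a=0.1):
    """One gradient-descent update of the critic and actor output weights;
    the input-layer weights v_c, v_a stay fixed."""
    z_c = np.tanh(v_c.T @ e)           # critic hidden layer (tansig-like)
    z_a = np.tanh(v_a.T @ e)           # actor hidden layer
    e_c = w_c @ z_c - V_target         # critic prediction error
    e_a = w_a.T @ z_a - u_target       # actor prediction error
    w_c = w_c - alpha_c * e_c * z_c    # dE_c/dw_c = e_c * z_c
    w_a = w_a - beta_a * np.outer(z_a, e_a)
    return w_c, w_a, e_c, e_a
```

Each step contracts the prediction errors by a factor of $(1 - \alpha\|z\|^2)$, which is where learning-rate bounds of the form $\alpha_c, \beta_a \le 2/\|z\|^2$ (used in Theorem 1 below) come from.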

3.4. Stability Analysis

In this subsection, we give the stability proof by Lyapunov’s stability theory.
Assumption 4. 
The radial basis function $h(t) = \exp\left(-\frac{\|x(t) - c(t)\|^2}{2b^2}\right)$ has maximum value $h_{max} = 1$, where $c(t)$ is the center point and $b$ is the width of the radial basis function. Assuming the number of neurons is $l \in \{l_f, l_d\}$ for any radial basis vector $h \in \{h[x(k)], h[x_d(k)]\}$, then

$$0 < h_i \le 1, \quad \|h\| \le \sqrt{l}, \quad h^T h = \|h\|^2 \le l \quad (42)$$

Hence, the maximum value of $\|h\|^2$ for a hidden layer with $l$ neurons is $l \in \{l_f, l_d\}$; accordingly, the maximum value of $\|h[x(k)]\|^2$ for the identifier $\hat f(x(k))$ is $l_f$, and the maximum value of $\|h[x_d(k)]\|^2$ for the steady-state controller $u_d(k)$ is $l_d$.
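The bounds in Assumption 4 follow directly from the Gaussian form of the basis functions; a quick numerical check (ours) for an arbitrary state:

```python
import numpy as np

def rbf_vector(x, centers, b):
    """Gaussian radial basis vector; each entry lies in (0, 1]."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * b ** 2))

# For l neurons: 0 < h_i <= 1, ||h|| <= sqrt(l), and h^T h = ||h||^2 <= l.
```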
Lemma 2. 
The relationship between (25) and the weight approximation error (27) satisfies the following equation:

$$\tilde w_d^T(k)h[x_d(k)] = \frac{e_m(k+1)}{L_u} + \varepsilon_u \quad (43)$$

where $e_m(k)$ is the error of the approximation state $x_d(k)$, $L_u = \left.\frac{\partial L}{\partial u}\right|_{u=\xi}$ with $\xi \in [u_d^*(k), u_d(k)]$, and $g_1 \ge L_u > \epsilon > 0$, where $g_1$ and $\epsilon$ are positive constants.
Proof. 
Subtracting $w_d^*$ from both sides of (28), we obtain

$$\tilde w_d(k+1) = \tilde w_d(k) - \gamma\left[h[x_d(k)]e_m(k+1) + \sigma\hat w_d(k)\right] \quad (44)$$

Combining (25) and (27) with the mean value theorem, we can obtain

$$L[x_d(k), u_d(k)] = L\!\left[x_d(k),\, u_d^*(k) + \tilde w_d^T(k)h[x_d(k)] - \varepsilon_u\right] = L[x_d(k), u_d^*(k)] + \left[\tilde w_d^T(k)h[x_d(k)] - \varepsilon_u\right]L_u = x_d(k+1) + \left[\tilde w_d^T(k)h[x_d(k)] - \varepsilon_u\right]L_u \quad (45)$$

Further combining (45) with (26), we can obtain

$$e_m(k+1) = L[x_d(k), u_d(k)] - x_d(k+1) = \left[\tilde w_d^T(k)h[x_d(k)] - \varepsilon_u\right]L_u \quad (46)$$

After rearranging, we can obtain

$$\tilde w_d^T(k)h[x_d(k)] = \frac{e_m(k+1)}{L_u} + \varepsilon_u \quad (47)$$
The proof is completed. □
Lemma 3. 
For simplicity of analysis, using Assumption 3 and Young's inequality, $\varepsilon_u$ and $e_m(k+1)$ satisfy the inequality

$$2\varepsilon_u^T e_m(k+1) \le k_0\|\varepsilon_u\|^2 + \frac{1}{k_0}\|e_m(k+1)\|^2 \quad (48)$$

where $k_0$ is a positive constant.
Theorem 1. 
For the optimal tracking problem (1)–(3), the RBF-NN identifier (16) is used to approximate $f(x(k))$, the steady-state controller $u_d(k)$ is approximated by the RBF-NN (23), and the feedforward networks (34) and (38) are used to approximate the cost function $J(e(k), u(k))$ and the feedback controller $u_e(k)$, respectively. Assume that the parameters satisfy the following inequalities:

$$(a)\ 0 < \eta \le \frac{1}{l_f} \qquad (b)\ 0 < g_1 \le k_0 \qquad (c)\ 0 < (1+\sigma)l_d\gamma \le \frac{1}{g_1} - \frac{1}{k_0} \qquad (d)\ 0 < (l_d+\sigma)\gamma \le 1 \qquad (e)\ \alpha_c \le 2/\|z(\nu_{ci}^T e(k))\|^2 \qquad (f)\ \beta_a \le 2/\|z(\nu_{ai}^T e(k))\|^2 \quad (49)$$

where $\eta$ is the learning rate of the RBF-NN identifier, $\sigma$ and $\gamma$ are the update parameters of the steady-state controller approximation network weights, $\alpha_c$ is the learning rate of the critic network, $\beta_a$ is the learning rate of the actor network, and $z(\nu_{ci}^T e(k))$ and $z(\nu_{ai}^T e(k))$ are the hidden layer functions of the critic network and the actor network. Then, the closed-loop system (6) of the approximation error is asymptotically stable when the parameter estimation errors are bounded.
Proof. 
Consider the following positive definite Lyapunov function candidate

$$J(k) = J_1(k) + J_2(k) + J_3(k) + J_4(k) = \frac{1}{\eta}\tilde w_f^T(k)\tilde w_f(k) + \frac{1}{g_1}e_m^T(k)e_m(k) + \frac{1}{\gamma}\tilde w_d^T(k)\tilde w_d(k) + w_{ci}^T(k)w_{ci}(k) + w_{ai}^T(k)w_{ai}(k) \quad (50)$$

where $J_1(k) = \frac{1}{\eta}\tilde w_f^T(k)\tilde w_f(k)$, $J_2(k) = \frac{1}{g_1}e_m^T(k)e_m(k) + \frac{1}{\gamma}\tilde w_d^T(k)\tilde w_d(k)$, $J_3(k) = w_{ci}^T(k)w_{ci}(k)$, and $J_4(k) = w_{ai}^T(k)w_{ai}(k)$.
Firstly, taking the first difference of $J_1(k) = \frac{1}{\eta}\tilde w_f^T(k)\tilde w_f(k)$ and noting from (20) and (21) that $\tilde w_f(k+1) = \tilde w_f(k) - \eta\,\tilde f(x(k))h[x(k)]$ yields

$$\Delta J_1(k) = J_1(k+1) - J_1(k) = \frac{1}{\eta}\left[\tilde w_f(k) - \eta\tilde f(x(k))h[x(k)]\right]^T\left[\tilde w_f(k) - \eta\tilde f(x(k))h[x(k)]\right] - \frac{1}{\eta}\tilde w_f^T(k)\tilde w_f(k) = \eta\,\tilde f^T(x(k))\tilde f(x(k))\,h^T[x(k)]h[x(k)] - 2\,\tilde f^T(x(k))\,\tilde w_f^T(k)h[x(k)] \quad (51)$$

According to Assumption 2, Assumption 4, and (42), together with $\tilde f(x(k)) = \tilde w_f^T(k)h[x(k)] + \Delta f(x)$ from (18), (51) can be bounded as

$$\Delta J_1(k) \le \eta l_f^2\|\tilde w_f(k)\|^2 - 2l_f\|\tilde w_f(k)\|^2 + \eta l_f^2\|\tilde w_f(k)\|^2 + 2\eta l_f\,\tilde w_f^T(k)h[x(k)]\Delta f(x) - 2\,\tilde w_f^T(k)h[x(k)]\Delta f(x) \le \|\tilde w_f(k)\|^2\left(2\eta l_f^2 - 2l_f\right) + \left(2l_f\eta - 2\right)\tilde w_f^T(k)h[x(k)]\Delta f(x) \le \|\tilde w_f(k)\|^2\left(4l_f^2\eta - 4l_f\right) \quad (52)$$
Next, taking the first difference of $J_2(k) = \frac{1}{g_1}e_m^T(k)e_m(k) + \frac{1}{\gamma}\tilde w_d^T(k)\tilde w_d(k)$ and using (44) yields

$$\Delta J_2(k) = J_2(k+1) - J_2(k) = \frac{1}{g_1}\left[e_m^T(k+1)e_m(k+1) - e_m^T(k)e_m(k)\right] + \frac{1}{\gamma}\left\{\tilde w_d(k) - \gamma\left[h[x_d(k)]e_m(k+1) + \sigma\hat w_d(k)\right]\right\}^T\left\{\tilde w_d(k) - \gamma\left[h[x_d(k)]e_m(k+1) + \sigma\hat w_d(k)\right]\right\} - \frac{1}{\gamma}\tilde w_d^T(k)\tilde w_d(k) = \frac{1}{g_1}\left[e_m^T(k+1)e_m(k+1) - e_m^T(k)e_m(k)\right] - 2\tilde w_d^T(k)h[x_d(k)]e_m(k+1) - 2\sigma\tilde w_d^T(k)\hat w_d(k) + \gamma h^T[x_d(k)]h[x_d(k)]\,e_m^T(k+1)e_m(k+1) + \gamma\sigma^2\hat w_d^T(k)\hat w_d(k) + 2\gamma\sigma\hat w_d^T(k)h[x_d(k)]e_m(k+1) \quad (53)$$

where

$$2\sigma\tilde w_d^T(k)\hat w_d(k) = \sigma\left[\|\tilde w_d(k)\|^2 + \|\hat w_d(k)\|^2 - \|w_d^*\|^2\right], \quad \gamma h^T[x_d(k)]h[x_d(k)]\,e_m^T(k+1)e_m(k+1) \le \gamma l_d\|e_m(k+1)\|^2, \quad 2\gamma\sigma\hat w_d^T(k)h[x_d(k)]e_m(k+1) \le \gamma\sigma l_d\left[\|\hat w_d(k)\|^2 + \|e_m(k+1)\|^2\right], \quad \gamma\sigma^2\hat w_d^T(k)\hat w_d(k) = \gamma\sigma^2\|\hat w_d(k)\|^2 \quad (54)$$

Considering (26) and $g_1 \ge L_u > \epsilon > 0$, we can deduce

$$\frac{1}{g_1} - \frac{2}{L_u} \le \frac{1}{g_1} - \frac{2}{g_1} = -\frac{1}{g_1} < 0 \quad (55)$$

Recalling Lemmas 2 and 3 and substituting (54) into (53) yields

$$\Delta J_2(k) \le -\left[\frac{1}{g_1} - (1+\sigma)l_d\gamma - \frac{1}{k_0}\right]\|e_m(k+1)\|^2 + \sigma\left[(l_d+\sigma)\gamma - 1\right]\|\hat w_d(k)\|^2 - \frac{1}{g_1}\left(\|e_m(k)\|^2 - \beta\right) - \sigma\|\tilde w_d(k)\|^2 \quad (56)$$

where $\beta = g_1(\sigma w_m^2 + k_0\varepsilon_l^2)$ is a positive constant.
Next, we consider the remaining part of the Lyapunov function

$$J_3(k) + J_4(k) = w_{ci}^T(k)w_{ci}(k) + w_{ai}^T(k)w_{ai}(k) \quad (57)$$

Taking its first difference along the critic and actor weight updates yields

$$\Delta J_3(k) + \Delta J_4(k) = \left[w_{ci}^T(k+1)w_{ci}(k+1) + w_{ai}^T(k+1)w_{ai}(k+1)\right] - \left[w_{ci}^T(k)w_{ci}(k) + w_{ai}^T(k)w_{ai}(k)\right] = -\alpha_c\|e_{ci}(k)\|^2\left(2 - \alpha_c\|z(\nu_{ci}^T e(k))\|^2\right) - \beta_a\|e_{ai}(k)\|^2\left(2 - \beta_a\|z(\nu_{ai}^T e(k))\|^2\right) \quad (58)$$
Finally, $\Delta J(k)$ is derived from (52), (56), and (58):

$$\Delta J(k) = \Delta J_1(k) + \Delta J_2(k) + \Delta J_3(k) + \Delta J_4(k) \le -4\|\tilde w_f(k)\|^2\left(l_f - l_f^2\eta\right) - \sigma\|\tilde w_d(k)\|^2 - \left[\frac{1}{g_1} - (1+\sigma)l_d\gamma - \frac{1}{k_0}\right]\|e_m(k+1)\|^2 + \sigma\left[(l_d+\sigma)\gamma - 1\right]\|\hat w_d(k)\|^2 - \frac{1}{g_1}\left(\|e_m(k)\|^2 - \beta\right) - \alpha_c\|e_{ci}(k)\|^2\left(2 - \alpha_c\|z(\nu_{ci}^T e(k))\|^2\right) - \beta_a\|e_{ai}(k)\|^2\left(2 - \beta_a\|z(\nu_{ai}^T e(k))\|^2\right) \quad (59)$$

Based on the above analysis, when the parameters are selected to fulfill the following conditions with $\|e_m(k)\|^2 \ge \beta$,

$$0 < \eta \le \frac{1}{l_f}, \quad 0 < g_1 \le k_0, \quad 0 < (1+\sigma)l_d\gamma \le \frac{1}{g_1} - \frac{1}{k_0}, \quad 0 < (l_d+\sigma)\gamma \le 1, \quad \alpha_c \le 2/\|z(\nu_{ci}^T e(k))\|^2, \quad \beta_a \le 2/\|z(\nu_{ai}^T e(k))\|^2 \quad (60)$$

we can obtain $\Delta J(k) \le 0$. □
The working process of the proposed control technique is shown in Figure 1. As shown in Figure 1, with x(k), u_d(k), and u_e^i(k), the RBF-NN identifier and the steady-state controller provide the estimated error e(k+1). The steady-state controller u_d(k) is driven by the reference trajectory x_d(k). Using the ADP algorithm, we obtain the near-optimal feedback controller û_e^i(k); the actual control u(k) = û_e^i(k) + u_d(k) then generates the system state x(k+1). In addition, from x_d(k) and x(k) the tracking error e(k) is obtained, and e(k+1) follows in turn. In this way, the reconstructed system dynamics are driven to track the reference trajectory.

4. Simulation

In this section, we give the simulation results of our method and compare it with another method [36]. A discrete-time nonlinear system is introduced to demonstrate the effectiveness of the proposed tracking control method. The case is derived from [24]. We assume that the nonlinear smooth function $f \in \mathbb{R}^n$ is an unknown nonlinear drift function and $g \in \mathbb{R}^{n \times m}$ is a known function. The corresponding $f[x(k)]$ and $g[x(k)]$ are given as
$$f[x(k)] = \begin{bmatrix} f_1[x(k)] \\ f_2[x(k)] \end{bmatrix} = \begin{bmatrix} \sin(0.5x_2(k))\,x_1^2(k) \\ \cos(1.4x_2(k))\sin(0.9x_1(k)) \end{bmatrix}, \quad g[x(k)] = \begin{bmatrix} x_1^2(k) + 1.5 & 0.1 \\ 0 & 0.2\left((x_1(k)+x_2(k))^2 + 1\right) \end{bmatrix} \quad (61)$$
The reference trajectory $x_d(k)$ for the above system is defined as

$$x_d(k) = \begin{bmatrix} 0.25\sin(10^{-3}k) \\ 0.25\cos(10^{-3}k) \end{bmatrix} \quad (62)$$

where $u(k) \in \mathbb{R}^2$, $u(k) = [u_1(k), u_2(k)]^T$, and the time axis (s) is obtained by multiplying $k \in (1, \ldots, 10{,}000)$ by $t_s = 0.001$ in the simulation.
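For reference, the benchmark dynamics (61) and trajectory (62) can be coded directly; the entries below follow the text as extracted, so the exact signs are an assumption of this sketch:

```python
import numpy as np

def f_sys(x):
    """Unknown drift term of the benchmark system (61)."""
    return np.array([np.sin(0.5 * x[1]) * x[0] ** 2,
                     np.cos(1.4 * x[1]) * np.sin(0.9 * x[0])])

def g_sys(x):
    """Known input matrix of the benchmark system (61)."""
    return np.array([[x[0] ** 2 + 1.5, 0.1],
                     [0.0, 0.2 * ((x[0] + x[1]) ** 2 + 1.0)]])

def x_ref(k):
    """Reference trajectory (62), sampled at t_s = 0.001 s."""
    return np.array([0.25 * np.sin(1e-3 * k), 0.25 * np.cos(1e-3 * k)])

def step(x, u):
    """One step of x(k+1) = f[x(k)] + g[x(k)] u(k)."""
    return f_sys(x) + g_sys(x) @ u
```

Since $g$ is invertible here, the exact known-dynamics input $u(k) = g(x)^{-1}(x_d(k+1) - f(x))$ keeps the state on the reference, which is the behavior the learned controllers approximate.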

4.1. Simulation Result of the Proposed Method

In this subsection, we give the simulation result for our proposed method.
Firstly, in order to deal with the unknown dynamics, we use two RBF networks to obtain the RBF identifier and the RBF steady-state controller. The RBF networks have a three-layer structure: the input layer has two neurons, the hidden layer has nine neurons, and the output layer has two neurons. The parameters $c_i$ and $b_j$ of the radial basis functions are chosen as

$$c_i = \begin{bmatrix} -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \\ -2 & -1.5 & -1.0 & -0.5 & 0 & 0.5 & 1.0 & 1.5 & 2 \end{bmatrix}, \quad b_j = [b_1, b_2] = [2, 2]$$

and the initial weights $w_0$ are chosen to be random numbers in $(0, 1)$. For the RBF identifier, the weight updating law (21) updates the weights $\hat w_f$, and the unknown function $f$ is identified from the input/output data $x(k)$. For the RBF steady-state controller, the reference trajectory data $x_d$ and the weight updating law (28) are used to update the weights $\hat w_d$ to identify the steady-state controller $u_d$. Because $g_1 \ge L_u = 1$, we can select $g_1 = 5$. According to $0 < g_1 \le k_0$ in Theorem 1, we can select $k_0 = 10$. For the control parameter $\eta$, because the hidden layer has nine neurons, $l = 9$ and $0 < \eta \le 1/l = 1/9$, so we select $\eta = 0.1$. For the control parameters $\gamma$ and $\sigma$, Theorem 1 requires $0 < (1+\sigma)\cdot 9\gamma \le 1/5 - 1/10 = 1/10 = 0.10$ and $0 < (9+\sigma)\gamma \le 1$, so we select $\gamma = 0.01$ and $\sigma = 0.001$. The initial state is set as $x(0) = [0, 0]^T$. We trained the RBF networks with 10,000 steps of acquired data. Figure 2 and Figure 3 show the tracking curves of the RBF-NN identifier approximating the unknown dynamics $\tilde f = [\tilde f_1[x(k)], \tilde f_2[x(k)]]^T$.
Then, based on the Bellman optimality principle underlying the ADP algorithm, Equation (6) was used to obtain the tracking error e and the optimal feedback control u_e to train the critic network and the actor network, respectively. Meanwhile, the obtained standard control input u = û_{ei} + u_d was applied to system (1), and the loop continues until the value function V_i(e(k)) converges and the tracking error e(k) reaches zero. The performance index is selected with Q = I and R = I, where I is the identity matrix of appropriate dimension. The actor network and the critic network use the same parameter settings. The initial weights of the critic and actor networks are randomly chosen in (−10, 10); the input layer has 2 neurons, the hidden layer has 15 neurons, the output layer has 2 neurons, and the learning rate is 0.1. The hidden layer uses the tansig activation function, the output layer uses the linear purelin function, and the networks are trained with the trainlm (Levenberg–Marquardt) algorithm. With these parameter settings, we trained the actor network and the critic network for 5000 training steps to reach the given accuracy of 1 × 10^{−9}. Figure 4 shows the curves of the system control u = [u_1, u_2]^T. In Figure 5 and Figure 6, we can see the curves of the state trajectory x and the reference trajectory x_d.
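The critic/actor loop above can be made concrete on a linear surrogate of the error dynamics, where the critic's quadratic value function V_i(e) = e^T P_i e turns the Bellman recursion into a Riccati update. The A matrix below is an assumed placeholder, and the paper's NN critic and actor are replaced by closed-form counterparts purely to show the structure of the iteration:

```python
import numpy as np

# Value iteration V_{i+1}(e) = min_u [ e'Qe + u'Ru + V_i(e(k+1)) ] on an
# assumed linear surrogate e(k+1) = A e(k) + B u_e(k); with V_i = e'P_i e,
# the critic update is a Riccati recursion and the actor is the minimizer.
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # placeholder error dynamics
B = np.eye(2)
Q = np.eye(2)                             # performance index Q = I
R = np.eye(2)                             # performance index R = I

P = np.zeros((2, 2))                      # V_0 = 0
for i in range(200):                      # loop until V_i converges
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy actor update
    P_new = Q + A.T @ P @ (A - B @ K)                  # critic (Bellman) update
    if np.max(np.abs(P_new - P)) < 1e-9:               # mirrors the 1e-9 accuracy
        P = P_new
        break
    P = P_new

u_e = lambda e: -K @ e                    # near-optimal feedback on the error
```

In the paper, both the value function and this feedback law are instead represented by the 2–15–2 critic and actor networks, but the fixed point being approximated is the same Bellman recursion.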
Based on the above, the simulation results show that the proposed tracking technique achieves a relatively satisfactory tracking performance for partially unknown discrete-time nonlinear systems.

4.2. Comparison with Other Methods

In this subsection, we compare our method with the results in [36], which uses a BP neural network to approximate the unknown system dynamics. In the comparison, we use the same system dynamics and desired tracking trajectory as in (61) and (62), with the initial state x(0) = [0, 0]^T and the performance index R = Q = I.
To begin with, an NN identifier is established by a three-layer BP neural network, which is chosen to have a 4–10–2 structure with four input neurons, ten hidden neurons, and two output neurons. The feedforward neuro-controller is also established by a three-layer BP NN, which is chosen to have a 2–10–2 structure with two input neurons, ten hidden neurons, and two output neurons. For the NN identifier and the feedforward neuro-controller, the parameter settings of the neural networks are identical: the hidden layers use the sigmoidal function tansig, the output layers use the linear function purelin, the learning rate is 0.1, and the initial weights are chosen to be random numbers in (0, 1).
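For reference, the baseline identifier of [36] is an ordinary feedforward network. A minimal sketch of the 4–10–2 tansig/purelin forward pass follows, with random placeholder weights mirroring the cited (0, 1) initialization; it is an illustration of the architecture, not the trained comparison model:

```python
import numpy as np

# 4-10-2 BP identifier sketch: input [x(k); u(k)] in R^4 -> estimate of x(k+1)
rng = np.random.default_rng(1)
W1 = rng.uniform(0.0, 1.0, size=(10, 4))   # input -> hidden weights
b1 = rng.uniform(0.0, 1.0, size=10)
W2 = rng.uniform(0.0, 1.0, size=(2, 10))   # hidden -> output weights
b2 = rng.uniform(0.0, 1.0, size=2)

def tansig(z):
    # MATLAB's tansig is the hyperbolic tangent sigmoid
    return np.tanh(z)

def identifier(xu):
    # hidden layer: tansig; output layer: purelin (identity)
    return W2 @ tansig(W1 @ xu + b1) + b2

y = identifier(np.zeros(4))                 # forward pass on a dummy input
```

Unlike the RBF identifier of Section 4.1, whose output is linear in the trainable weights, this BP network is nonlinear in all its weights, which is one reason its training behaves differently.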
For the actor network and the critic network, we also use the same parameter settings. A 2–15–2 structure is chosen for the critic and actor networks, the initial weights are randomly chosen in (−10, 10), and the learning rate is 0.1. The hidden layer uses the tansig activation function, the output layer uses the linear purelin function, and the networks are trained with the trainlm (Levenberg–Marquardt) algorithm to the given accuracy of 1 × 10^{−9}. In Figure 7 and Figure 8, we can see the curves of the state trajectory x and the reference trajectory x_d using the comparison tracking control method of [36].
Comparing the two methods in Figure 5, Figure 6, Figure 7 and Figure 8, we can see that our method achieves better performance in tracking the reference trajectory.

5. Conclusions

This paper proposes an effective scheme for finding the near-optimal tracking controller for a class of partially unknown discrete-time nonlinear systems based on RBF-NNs. To deal with the unknown dynamics, two RBF-NNs are used to approximate the unknown function and the steady-state controller. Moreover, the ADP algorithm is introduced to obtain the optimal feedback control for the tracking error dynamics, and two feedforward neural networks are utilized to approximate the cost function and the feedback controller. Finally, the simulation results show a relatively satisfactory tracking performance, which verifies the effectiveness of the optimal tracking control technique. In future work, we may consider completely unknown dynamics and event-triggering conditions.

Author Contributions

Methodology, D.X.; software, Y.L.; investigation, J.H.; resources, Y.M. All the authors contributed equally to the development of the research. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61463002, the Guizhou Province Natural Science Foundation of China under Grant No. Qiankehe Fundamentals-ZK[2021] General 322, and the Doctoral Foundation of Guangxi University of Science and Technology Grant No. Xiaokebo 22z04.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhai, D.; Lu, A.-Y.; Dong, J.; Zhang, Q. Adaptive Tracking Control for a Class of Switched Nonlinear Systems under Asynchronous Switching. IEEE Trans. Fuzzy Syst. 2018, 6, 1245–1256. [Google Scholar] [CrossRef]
  2. Zhang, Y.; Wang, X.; Wang, Z. Discrete-Time Adaptive Fuzzy Finite-Time Tracking Control for Uncertain Nonlinear Systems. IEEE Trans. Fuzzy Syst. 2024, 2, 649–659. [Google Scholar] [CrossRef]
  3. Zhao, D.; Wang, Z.; Ho, D.W.C.; Wei, G. Observer-Based PID Security Control for Discrete Time-Delay Systems under Cyber-Attacks. IEEE Trans. Syst. Man Cybern. Syst. 2021, 6, 3926–3938. [Google Scholar] [CrossRef]
  4. Lewis, F.L.; Vrabie, D.; Syrmos, V.L. Optimal Control, 3rd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2012. [Google Scholar]
  5. Mannava, A.; Balakrishnan, S.N.; Tang, L.; Landers, R.G. Optimal tracking control of motion systems. IEEE Trans. Control Syst. Technol. 2012, 20, 1548–1558. [Google Scholar] [CrossRef]
  6. Sharma, R.; Tewari, A. Optimal nonlinear tracking of spacecraft attitude maneuvers. IEEE Trans. Control Syst. Technol. 2013, 12, 677–682. [Google Scholar] [CrossRef]
  7. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  8. Lewis, F.L.; Syrmos, V.L. Optimal Control; Wiley: New York, NY, USA, 1995. [Google Scholar]
  9. Narendra, K.S.; Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1990, 1, 4–27. [Google Scholar] [CrossRef] [PubMed]
  10. Narendra, K.S.; Mukhopadhyay, S. Adaptive control of nonlinear multivariable systems using neural networks. In Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, TX, USA, 15–17 December 1993; pp. 737–752. [Google Scholar]
  11. Poggio, T.; Girosi, F. Networks for approximation and learning. Proc. IEEE 1990, 78, 1481–1497. [Google Scholar] [CrossRef]
  12. Hartman, E.J.; Keeler, J.D.; Kowalski, J.M. Layered Neural Networks with Gaussian Hidden Units as Universal Approximations. Neural Comput. 1990, 2, 210–215. [Google Scholar] [CrossRef]
  13. Park, J. Universal approximation using radial basis function networks. Neural Comput. 1993, 3, 246–257. [Google Scholar] [CrossRef]
  14. Powell, M.J.D. Radial Basis Functions for Multivariable Interpolation: A Review. In Algorithms for Approximation; Mason, J.C., Cox, M.G., Eds.; Clarendon Press: Oxford, UK, 1987; pp. 143–167. [Google Scholar]
  15. Nelles, O.; Isermann, R. A Comparison between RBF Networks and Classical Methods for Identification of Nonlinear Dynamic Systems. In Adaptive Systems in Control and Signal Processing; Pergamon: Oxford, UK, 1995. [Google Scholar]
  16. Ge, S.S.; Zhang, J.; Lee, T.H. Adaptive MNN control for a class of non-affine NARMAX systems with disturbances. Syst. Control Lett. 2004, 53, 1–12. [Google Scholar] [CrossRef]
  17. Kobayashi, H.; Ozawa, R. Adaptive neural network control of tendon-driven mechanisms with elastic tendons. Automatica 2003, 1509–1519. [Google Scholar] [CrossRef]
  18. Zhang, H.; Luo, Y.; Liu, D. Neural-Network-Based Near-Optimal Control for a Class of Discrete-Time Affine Nonlinear Systems with Control Constraints. IEEE Trans. Neural Netw. 2009, 20, 1490–1503. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, J.K. Radial Basis Function (RBF) Neural Network Control for Mechanical Systems: Design, Analysis and Matlab Simulation; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  20. Powell, W. Approximate Dynamic Programming: Solving the Curses of Dimensionality; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  21. Vrabie, D.; Lewis, F. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems. Neural Netw. 2009, 4, 237–246. [Google Scholar] [CrossRef] [PubMed]
  22. Liu, D.; Yang, X.; Li, H. Adaptive optimal control for a class of continuous-time affine nonlinear systems with unknown internal dynamics. Neural Comput. Appl. 2013, 11, 1843–1850. [Google Scholar] [CrossRef]
  23. Bhasin, S.; Kamalapurkar, R.; Johnson, M.; Vamvoudakis, K.; Lewis, F.L.; Dixon, W.E. A novel actor–critic–identifier architecture for approximate optimal control of uncertain nonlinear systems. Automatica 2013, 1, 82–92. [Google Scholar] [CrossRef]
  24. Dierks, T.; Jagannathan, S. Optimal tracking control of affine nonlinear discrete-time systems with unknown internal dynamics. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) Held Jointly with the 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009. [Google Scholar]
  25. Al-Tamimi, A.; Lewis, F.L.; Abu-Khalaf, M. Discrete-time nonlinear HJB solution using approximate dynamic programming: Convergence proof. IEEE Trans. Syst. Man Cybern. B Cybern. 2008, 8, 943–949. [Google Scholar] [CrossRef] [PubMed]
  26. Prokhorov, D.V.; Wunsch, D.C. Adaptive critic designs. IEEE Trans. Neural Netw. 1997, 9, 997–1007. [Google Scholar] [CrossRef] [PubMed]
  27. Luo, Y.; Zhang, H. Approximate optimal control for a class of nonlinear discrete-time systems with saturating actuators. Prog. Natural Sci. 2008, 1023–1029. [Google Scholar] [CrossRef]
  28. Dierks, T.; Jagannathan, S. Online optimal control of nonlinear discrete-time systems using approximate dynamic programming. Control Theory Appl. 2011, 361–369. [Google Scholar] [CrossRef]
  29. Si, J.; Wang, Y.-T. Online learning control by association and reinforcement. IEEE Trans. Neural Netw. 2001, 5, 264–276. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, D.; Wei, Q. Policy Iteration Adaptive Dynamic Programming Algorithm for Discrete-Time Nonlinear Systems. IEEE Trans. Neural Netw. Learn. Syst. 2014, 621–634. [Google Scholar] [CrossRef] [PubMed]
  31. Kiumarsi, B.; Lewis, F.L. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems. IEEE Trans. Neural Netw. Learn. Syst. 2017, 140–151. [Google Scholar] [CrossRef] [PubMed]
  32. Ren, L.; Zhang, G.; Mu, C. Data-based H control for the constrained-input nonlinear systems and its applications in chaotic circuit systems. IEEE Trans. Circuits Syst. 2020, 8, 2791–2802. [Google Scholar] [CrossRef]
  33. Zhao, F.; Gao, W.; Liu, T.; Jiang, Z.-P. Event-triggered robust adaptive dynamic programming with output-feedback for large-scale systems. IEEE Trans. Control Netw. Syst. 2023, 8, 63–74. [Google Scholar] [CrossRef]
  34. Zhang, H.; Cui, L.; Zhang, X.; Luo, Y. Data-Driven Robust Approximate Optimal Tracking Control for Unknown General Nonlinear Systems Using Adaptive Dynamic Programming Method, in IEEE Transactions on Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2011, 11, 2226–2236. [Google Scholar] [CrossRef]
  35. Lin, Q.; Wei, Q.; Liu, D. A novel optimal tracking control scheme for a class of discrete-time nonlinear systems using generalised policy iteration adaptive dynamic programming algorithm. Int. J. Syst. Sci. 2017, 48, 525–534. [Google Scholar] [CrossRef]
  36. Huang, Y.; Liu, D. Neural-network-based optimal tracking control scheme for a class of unknown discrete-time nonlinear systems using iterative ADP algorithm. Neurocomputing 2014, 125, 46–56. [Google Scholar] [CrossRef]
  37. Zhang, Y.; Zhao, B.; Liu, D.; Zhang, S. Event-triggered control of discrete-time zero-sum games via deterministic policy gradient adaptive dynamic programming. IEEE Trans. Syst. Man Cybern. Syst. 2022, 8, 4823–4835. [Google Scholar] [CrossRef]
  38. Zhu, Y.; Zhao, D.; He, H. Invariant adaptive dynamic programming for discrete-time optimal control. IEEE Trans. Syst. Man Cybern. Syst. 2020, 11, 3959–3971. [Google Scholar] [CrossRef]
  39. Zhang, H.; Wei, Q.; Luo, Y. A Novel Infinite-Time Optimal Tracking Control Scheme for a Class of Discrete-Time Nonlinear Systems via the Greedy HDP Iteration Algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2008, 38, 937–942. [Google Scholar] [CrossRef] [PubMed]
  40. Li, J.; Chai, T.; Lewis, F.L.; Ding, Z.; Jiang, Y. Off-Policy Interleaved Q-Learning: Optimal Control for Affine Nonlinear Discrete-Time Systems. IEEE Trans. Neural Netw. Learn. Syst. 2019, 5, 1308–1320. [Google Scholar] [CrossRef]
  41. Sun, C.; Li, X.; Sun, Y. A parallel framework of adaptive dynamic programming algorithm with off-policy learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 8, 3578–3587. [Google Scholar] [CrossRef] [PubMed]
  42. Duan, J.; Guan, Y.; Li, S.E.; Ren, Y.; Sun, Q.; Cheng, B. Distributional soft actor-critic: Off-policy reinforcement learning for addressing value estimation errors. IEEE Trans. Neural Netw. Learn. Syst. 2022, 11, 6584–6598. [Google Scholar] [CrossRef] [PubMed]
  43. Song, S.; Zhu, M.; Dai, X.; Gong, D. Model-Free Optimal Tracking Control of Nonlinear Input-Affine Discrete-Time Systems via an Iterative Deterministic Q-Learning Algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2024, 1, 999–1012. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The structure schematic of the proposed technique.
Figure 2. The unknown function f_1(x) and the approximation of the unknown function f̃_1(x).
Figure 3. The unknown function f_2(x) and the approximation of the unknown function f̃_2(x).
Figure 4. The system control input u_1 and the system control input u_2.
Figure 5. The state trajectory x_1 and the reference trajectory x_{1d} using our tracking control method.
Figure 6. The state trajectory x_2 and the reference trajectory x_{2d} using our tracking control method.
Figure 7. The state trajectory x_1 and the reference trajectory x_{1d} using the comparison tracking control method of [36].
Figure 8. The state trajectory x_2 and the reference trajectory x_{2d} using the comparison tracking control method of [36].
Share and Cite

Huang, J.; Xu, D.; Li, Y.; Ma, Y. Near-Optimal Tracking Control of Partially Unknown Discrete-Time Nonlinear Systems Based on Radial Basis Function Neural Network. Mathematics 2024, 12, 1146. https://doi.org/10.3390/math12081146