Article

Event-Triggered Sliding Mode Neural Network Controller Design for Heterogeneous Multi-Agent Systems

1 School of Information Science and Engineering, Zhejiang Sci-Tech University, Hangzhou 310018, China
2 Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
* Author to whom correspondence should be addressed.
Sensors 2023, 23(7), 3477; https://doi.org/10.3390/s23073477
Submission received: 28 February 2023 / Revised: 20 March 2023 / Accepted: 23 March 2023 / Published: 26 March 2023
(This article belongs to the Section Intelligent Sensors)

Abstract: A class of heterogeneous second-order multi-agent consensus problems is studied, in which an event-triggered method is used to improve the feasibility of the control protocol. The sliding mode control method is used to achieve the robustness of the system. A special type of general radial basis function neural network is applied to estimate the uncertainties. The event-triggered mechanism is introduced to reduce the update frequency of the controller and the communication frequency among the agents. Zeno behavior is avoided by ensuring a lower bound between two adjacent trigger instants. Finally, the simulation results are provided to demonstrate that the time evolution of consensus errors eventually approaches zero. The consensus of multi-agent systems is achieved.

1. Introduction

The consensus problem is a fundamental issue in multi-agent systems (MASs) [1]. In recent years, consensus problems have been widely discussed in many settings, such as fixed topology [2] and switching topology [3,4]. Moreover, many methods for analyzing consensus have been proposed. A class of adaptive fully distributed consensus problems has been investigated for heterogeneous nonlinear MASs [5,6]. The optimization problem of self-balancing robots under depleting battery conditions was analyzed [7]. Additionally, under-actuated systems have been studied using error-magnitude-dependent self-tuning of the cost weighting factor via an adaptive state-space control law [8].
The sliding mode control scheme (SMCS) is superior to other control schemes in that the controller switches modes according to the current state [2,9]. There are various SMC strategies, such as terminal SMC [10,11], dynamic SMC [12], adaptive SMC [13], and so on. SMC is widely used in practical engineering applications for its excellent robust performance [14], for example in stochastic systems [15], robot MASs [16] and heterogeneous nonlinear MASs [17]. Compared with the traditional SMC method, the global sliding mode control scheme (GSMCS) adds an initial compensation term so that the system starts on the sliding surface from the very beginning.
Traditional adaptive control methods require prior information about the model. By exploiting the self-learning ability of neural networks, the controller does not demand much system information; thus, neural networks effectively solve control problems with uncertain models. Their practical applicability then depends on the generalization ability, which determines whether a network is effective or not [18,19,20]. Neural networks have been widely developed in pattern recognition, signal processing, modeling and system control [21,22]. The hidden layer of a neural network adopts an activation function that can approximate arbitrary nonlinear functions. Among such networks, radial basis function neural networks (RBFNN) can effectively improve controller performance when the system has great uncertainty [23]. However, the parameter-estimation burden increases significantly with the number of neural network nodes or fuzzy rules. The minimum parameter learning method was proposed to solve this problem [24]. To address the low processing power of traditional agricultural machinery design systems in analyzing data, a novel intelligent agricultural machinery design system integrating image processing and knowledge reasoning was constructed [25]. The main advantage of the minimum parameter learning method is that only a single parameter needs to be estimated online, regardless of the number of fuzzy rule bases used. Its limitation when used with radial basis function neural networks is that scalar values are used as parameter estimators, which can cause the estimate to deviate considerably from the true value.
It is noted that continuous communication between follower agents in [26] wastes computing resources, so a method to reduce the communication load is urgently needed. The event-triggered mechanism (ETM) was proposed to decrease the load of information transmission. There are four types of ETM: the event-based sampling mechanism [27], the model-based sampling mechanism [28], ETM based on sampled data [29,30] and the self-triggered mechanism [31]. ETM has many applications, such as the consensus of MASs [32,33,34], sliding mode control [35,36] and convex optimization [37,38]. However, Zeno behavior is a serious concern in event-triggered control. If Zeno behavior exists, the control of the system is effectively continuous; the advantages of the ETM in saving network resources and reducing the communication burden are then lost, which means the triggering mechanism is unreasonably designed [39].
Based on the above results, few works study the SMC strategy with the minimum parameter learning method in the framework of heterogeneous MASs. This motivates us to design a global sliding mode controller (GSMC) with an event-triggered controller. Compared with some existing results, the event-triggered mechanism used in this paper is innovatively combined with global sliding mode control and an RBF neural network. Compared with a constant event-trigger threshold, an exponential threshold adapts better to the control law and further reduces the communication load. A strategy based on the minimum parameter learning method is proposed to handle the system uncertainty, and the main contributions can be described as follows:
(a)
Compared with [40], the single agent system is extended to MASs. In actual production, it is difficult to ensure that every agent has the same dynamic performance. Thus, a class of heterogeneous second-order leader–follower MASs is discussed to make the result more practical. A class of distributed control laws is proposed to enable the follower to approach the leader’s trajectory.
(b)
The ETM is inserted into the design of the GSMC. The initial compensation term is utilized in the GSMC so that the system can approach the sliding surface from the beginning. The robustness of heterogeneous nonlinear MASs is then improved owing to the scheme's insensitivity to disturbances, while the communication pressure between agents is decreased by utilizing the ETM.
(c)
The online learning ability of RBFNN has been introduced to deal with the uncertainty. Differing from general RBFNN control methods, the proposed control scheme does not need to update all the hidden layer weights. This method reduces the amount of calculation required for each iteration. The time for the system to reach a stable state is decreased.
This article is organized as follows. Firstly, the preliminaries and problem formulation are described in Section 2. In Section 3, a robust controller that is based on the sliding mode mechanism and ETM is presented. Then, the Zeno behavior is excluded. A simulation example is presented in Section 4 and the effectiveness of the proposed method is verified. Finally, some concluding remarks are provided in Section 5.

2. Preliminaries and System Statement

2.1. Graph Theory

The communication topology of the system is described by $G = (V, E)$. The node set is $V = \{v_1, v_2, \dots, v_n\}$ and $E \subseteq V \times V$ is the edge set. Node $j$ is considered a neighbor of node $i$ if node $i$ can transmit information to node $j$. The matrix $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ denotes the adjacency matrix; if $i = j$, then $a_{ij} = 0$. The Laplacian matrix is defined as $L = [l_{ij}]_{N \times N}$, where $l_{ii} = \sum_{j=1}^{N} a_{ij}$ and the off-diagonal elements are $l_{ij} = -a_{ij}$. A leader–follower system is considered, and the communication topology graph is augmented to $\hat{G} = (\hat{V}, \hat{E})$. $B = \mathrm{diag}\{b_1, b_2, \dots, b_N\}$ is the input matrix, and the Laplacian matrix of the follower graph is denoted $L_B = L + B$. $\|\cdot\|$ denotes the spectral norm of a matrix, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $\|\cdot\|_\infty$ is the infinity norm of a vector. $\bar{\psi}(P)$ is the maximal singular value of a matrix $P$ and $\underline{\psi}(P)$ is the minimal singular value of $P$. $\mathrm{tr}(\cdot)$ is the trace of a matrix. $I_N$ represents the N-dimensional unit column vector.
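The Laplacian construction above can be sketched numerically. The sketch below assumes a hypothetical four-agent topology (the paper's actual graph is given in Figure 2 of Section 4) with the first two followers pinned to the leader:

```python
import numpy as np

# Hypothetical 4-agent adjacency matrix (a_ij = 1 if agent i receives
# information from agent j); the actual topology is shown in Figure 2.
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# Laplacian: l_ii = sum_j a_ij, off-diagonal l_ij = -a_ij.
L = np.diag(A.sum(axis=1)) - A

# Input matrix B marks the followers that receive the leader's state.
B = np.diag([1.0, 1.0, 0.0, 0.0])

# Laplacian of the augmented leader-follower graph.
L_B = L + B

assert np.allclose(L.sum(axis=1), 0)  # rows of a Laplacian sum to zero
```

Pinning at least one follower to the leader makes $L_B$ nonsingular, which is what the controller later inverts.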
There are some necessary lemmas.
Lemma 1.
For $\phi, s \in \mathbb{R}$ and $W, h \in \mathbb{R}^N$, the inequality $sW^{T}h \le \frac{1}{2}s^{2}\phi h^{T}h + \frac{1}{2}$ holds, where $\phi = \|W\|^{2}$.
Lemma 2
([18]). For the sliding mode function $s = c \cdot e + \dot{e}$, a compensating function $q(t)$ is designed, which satisfies the following conditions:
(1) $q(0) = c \cdot e(0) + \dot{e}(0)$;
(2) when $t \to \infty$, $q(t) \to 0$;
(3) $q(t)$ has a bounded first derivative with respect to time,
where $e(0)$ is the initial state error. Then, $s(0) = 0$ is always established.
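Lemma 2's conditions are satisfied, for example, by the exponential compensator $q(t) = q(0)e^{-kt}$ with $q(0) = c\,e(0) + \dot{e}(0)$, which is the form the compensation term takes later in the paper. A minimal numerical sketch, with illustrative values for $c$, $k$ and the initial errors:

```python
import math

# Illustrative values (not from the paper): initial tracking error and rate.
c, k = 2.0, 1.5
e0, de0 = 0.5, -0.3

q0 = c * e0 + de0                      # condition (1): q(0) = c*e(0) + e'(0)
q = lambda t: q0 * math.exp(-k * t)    # decays to 0, bounded derivative

# Global sliding variable at t = 0: s(0) = c*e(0) + e'(0) - q(0) = 0,
# so the system starts on the sliding surface.
s0 = c * e0 + de0 - q(0.0)
assert abs(s0) < 1e-12
assert abs(q(10.0)) < 1e-6             # condition (2): q(t) -> 0 as t grows
```

Starting on the surface is exactly the "global" property of the GSMCS: no reaching phase is needed.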
Lemma 3
([40]). The function $H(\chi)$, $\chi \in [0, +\infty)$, satisfies the Hölder continuity condition: for every $\chi, \nu \in [0, +\infty)$, the inequality
$\|H(\chi) - H(\nu)\| \le \lambda \|\chi - \nu\|$
holds, where λ is a positive constant.
Lemma 4.
If $V: \mathbb{R}^N \to \mathbb{R}$ is a locally positive definite function and $\dot{V} \le 0$ holds in the compact set $\Omega_c = \{x \in \mathbb{R}^N : V(x) \le c\}$, where c is a constant, define $S = \{x \in \Omega_c : \dot{V}(x) = 0\}$. When $t \to \infty$, the trajectory approaches the largest invariant set in $S$. If there is no solution in $S$ other than $x(t) = 0$, the origin is asymptotically stable.

2.2. Problem Formulation

Consider a type of leader–follower MAS with N agents. The dynamics of agent i are denoted as
$\dot{x}_i = v_i, \quad \dot{v}_i = f_i(x) + g_i(x)u_i + d_i(x)$
where $x = [x_i, v_i]^T$ represents the state of the system, $u_i \in \mathbb{R}$ is the control input, $f_i(\cdot)$ and $g_i(\cdot)$ are known continuous smooth functions, and $d_i(\cdot)$ is an unknown smooth uncertain function. $g_i(x) \in \mathbb{R}^+$ is assumed. The dynamics of the leader are expressed as
$\dot{x}_0 = v_0, \quad \dot{v}_0 = f_0(x) + g_0(x)u_0$
The consensus errors are defined as
$e_i^x = \sum_{j=1}^{N} a_{ij}(x_i - x_j) + b_i(x_i - x_0)$
$e_i^v = \sum_{j=1}^{N} a_{ij}(v_i - v_j) + b_i(v_i - v_0)$
The following definitions are provided: $\varepsilon_1 = [e_1^x, e_2^x, \dots, e_N^x]^T$, $\varepsilon_2 = [e_1^v, e_2^v, \dots, e_N^v]^T$, $x = [x_1, x_2, \dots, x_N]^T$, $d = [d_1, d_2, \dots, d_N]^T$, $v = [v_1, v_2, \dots, v_N]^T$, $\breve{x} = x - I_N x_0$, $\breve{v} = v - I_N v_0$, $u = [u_1, u_2, \dots, u_N]^T$, $G = \mathrm{diag}\{g_1, g_2, \dots, g_N\}$ and $B_L = L_B \otimes I_N$.
Then, the global consensus errors are obtained as
$\varepsilon_1 = B_L \breve{x}$
$\varepsilon_2 = B_L \breve{v}$
According to Equations (5) and (6), the following equations are obtained:
$\dot{\varepsilon}_1 = \varepsilon_2$
$\dot{\varepsilon}_2 = B_L \left( F - I_N f_0 + G \cdot u - I_N g_0 \cdot u_0 + D \right)$
where $F = [f_1, f_2, \dots, f_N]^T$ and $D = [d_1, d_2, \dots, d_N]^T$. According to Lemma 3, the global sliding mode function of the i-th agent is proposed as
$\varsigma_i(t) = c_i \cdot e_i^x(t) + e_i^v(t) - \varrho_i(t)$
where $\varrho_i(t) = \varrho_i(0)e^{-k_i t}$, and $e_i^x(0)$ and $e_i^v(0)$ are the initial consensus errors. Then, the derivative with respect to t can be calculated as
$\dot{\varsigma}_i(t) = c_i \dot{e}_i^x(t) + \dot{e}_i^v(t) - \dot{\varrho}_i(t) = f_i + g_i u_i + d_i - \dot{v}_0 + c_i \dot{e}_i^x - \dot{\varrho}_i$
Let $\varsigma(t) = [\varsigma_1(t), \varsigma_2(t), \dots, \varsigma_N(t)]^T$ and $\dot{\varsigma}(t) = [\dot{\varsigma}_1(t), \dot{\varsigma}_2(t), \dots, \dot{\varsigma}_N(t)]^T$; the corresponding sliding mode function and its first-order derivative are designed as
$\varsigma(t) = \omega \cdot \varepsilon_1(t) + \varepsilon_2(t) - \varrho(t)$
$\dot{\varsigma}(t) = \omega \cdot \dot{\varepsilon}_1(t) + \dot{\varepsilon}_2(t) - \dot{\varrho}(t) = \omega \cdot \varepsilon_2 + B_L \left( F + G \cdot u - I_N \dot{v}_0 + D \right) - \dot{\varrho}(t)$
where $\omega = \mathrm{diag}\{c_1, c_2, \dots, c_N\}$. Then, a scheme is proposed that uses neural networks to approximate the uncertainty term of the system (1). The estimation of $d_i(x)$ is denoted as
$d_i(x) = W_i^{*T} \cdot h_i(x) + \sigma_i(x)$
where $W_i^* = [W_{i1}^*, W_{i2}^*, \dots, W_{im}^*]^T \in \mathbb{R}^m$ is the ideal weight vector of the RBFNN of agent i, m is the number of hidden layer nodes, and $\sigma_i(x)$ is the estimation error of the RBFNN. The Gaussian basis functions are
$h_j^i(x) = \exp\left( -\frac{\|x - c_j^i\|^2}{(b_j^i)^2} \right)$
where x is adopted as the input of the neural network and $j = 1, 2, \dots, m$. $c_j^i$ denotes the center of the corresponding receptive field and $b_j^i$ is the width of the Gaussian function. In addition, $h_i(x) = [h_1^i(x), h_2^i(x), \dots, h_m^i(x)]^T$ represents the Gaussian basis function output of agent i. The estimated output of the RBFNN is denoted as
$\hat{\rho}_i(x) = \hat{W}_i^T \cdot h_i(x)$
where $\hat{W}_i$ is the estimated weight vector. Denoting the stacked radial basis function vector, the RBFNN error and the optimal weight matrix as $h(x) = [h_1^T(x), \dots, h_N^T(x)]^T$, $\sigma(x) = [\sigma_1(x), \dots, \sigma_N(x)]^T$ and $W = \mathrm{blkdiag}\{W_1, \dots, W_N\}$, the global output of the RBFNN is written as
$\rho(x) = W^T \cdot h(x) + \sigma$
Meanwhile, the estimation of $\rho(x)$ is
$\hat{\rho}(x) = \hat{W}^T \cdot h(x)$
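A minimal sketch of the RBFNN output above, assuming Gaussian basis functions of the form $h_j(x) = \exp(-\|x - c_j\|^2 / b_j^2)$; the centers and width below mirror the ones later used in the simulation section, while the zero initial weights are illustrative:

```python
import numpy as np

# Minimal RBFNN forward pass: rho_hat(x) = W^T h(x).
def rbf_output(x, centers, widths, weights):
    # h_j(x) = exp(-||x - c_j||^2 / b_j^2), one value per hidden node.
    h = np.exp(-np.sum((x - centers) ** 2, axis=1) / widths ** 2)
    return weights @ h

# Five hidden nodes with centers spread over the state space, width 1.
centers = np.array([[-2.0, -2.0], [-1.0, -1.0], [0.0, 0.0],
                    [1.0, 1.0], [2.0, 2.0]])
widths = np.ones(5)
weights = np.zeros(5)       # estimated weights start at zero

x = np.array([0.1, -0.2])
print(rbf_output(x, centers, widths, weights))  # 0.0 before any adaptation
```

With all weights zero the estimate is zero; adaptation then shapes the weights (or, in the minimum parameter learning variant below, a single scalar) online.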
Instead of the weight matrix, an adaptive variable is defined as $\phi_i = \|W_i\|_2^2$ from the minimum parameter learning method. Meanwhile, $\hat{\phi}_i$ is the estimation of $\phi_i$ and $\tilde{\phi}_i = \hat{\phi}_i - \phi_i$. For convenience, we denote $h_i^T h_i$ as $H_i$. The adaptive law for $\hat{\phi}_i$ is designed as
$\dot{\hat{\phi}}_i = \frac{\gamma}{2} \|B_L\| \varsigma_i^2 H_i - \kappa \gamma \hat{\phi}_i$
where γ is any positive constant and κ is a positive parameter to be designed.
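One Euler step of the scalar adaptive law (17) can be sketched as below, assuming the reconstructed form $\dot{\hat{\phi}}_i = \frac{\gamma}{2}\|B_L\|\varsigma_i^2 H_i - \kappa\gamma\hat{\phi}_i$; all numerical values are illustrative, not taken from the paper:

```python
# One Euler step of the scalar minimum-parameter adaptive law (sketch).
gamma, kappa = 5.0, 0.4
norm_BL = 2.0          # spectral norm of L + B (illustrative)
dt = 0.001

phi_hat = 0.0          # adaptive estimate of ||W_i||^2, starts at zero
s = 0.8                # current sliding variable of agent i
H = 1.2                # H_i = h_i^T h_i for the current input

phi_dot = 0.5 * gamma * norm_BL * s**2 * H - kappa * gamma * phi_hat
phi_hat += dt * phi_dot
print(phi_hat)
```

Note that only this single scalar is updated online per agent, regardless of the number of hidden nodes, which is the point of the minimum parameter learning method.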
Remark 1.
The structure of the RBF neural network is similar to that of a multi-layer forward network. It is generally composed of an input layer, hidden layer and output layer. The first layer is the input layer, which is composed of signal source nodes and transmits signals to the hidden layer. The second layer is the hidden layer, and the transformation function of the hidden layer node is a non-negative nonlinear function with radial symmetry and attenuation to the central point. The third layer is the output layer, which is generally a simple linear function that responds to the input pattern. Figure 1 shows the general structure of the RBF neural network.
Remark 2.
The width of the Gaussian basis function affects the range of network mapping. It is generally designed to be an appropriate value. According to the input of RBFNN, the center point coordinate vector should keep the Gaussian basis function within the valid mapping range.

3. Main Results

3.1. Adaptive Control Law Design with GSMC

In this section, the GSMC using the minimum parameter learning method is designed. The GSMC law forces the target system to follow the ideal sliding mode trajectory. The uncertainties are compensated by using a class of RBFNN methods.
Then, the i-th agent of GSMC is proposed as
$u_i(t) = \sum_{j=1, j \ne i}^{N} a_{ij} u_j(t) + \frac{1}{g_i} \left( -\frac{1}{2}\varsigma_i \hat{\phi}_i H_i + \dot{v}_0 - c_i \dot{e}_i^x - f_i - \eta_i \,\mathrm{sgn}(\varsigma_i) - \mu_i \varsigma_i + \dot{\varrho}_i \right)$
where $\eta_i$ and $\mu_i$ are positive parameters to be designed and $u_j(t)$ represents the control law of the adjacent agent j. Let $u(t) = [u_1(t), \dots, u_N(t)]^T$, $\phi = \mathrm{diag}\{\phi_1, \dots, \phi_N\}$, $\hat{\phi} = \mathrm{diag}\{\hat{\phi}_1, \hat{\phi}_2, \dots, \hat{\phi}_N\}$, $\tilde{\phi} = \mathrm{diag}\{\tilde{\phi}_1, \tilde{\phi}_2, \dots, \tilde{\phi}_N\}$, $\mu = \mathrm{diag}\{\mu_1, \mu_2, \dots, \mu_N\}$ and $\eta = \mathrm{diag}\{\eta_1, \eta_2, \dots, \eta_N\}$. The controller $u_i(t)$ presented in this article consists of several parts. The control input of adjacent agents is included so that the current agent can gradually reduce the adjoint errors; the RBFNN term fits the unknown part of the system; and the global sliding mode terms make the system robust to external disturbances. The control protocol of the systems (1)–(2) is developed as
$u(t) = G^{-1} \cdot \left[ B_L^{-1} \left( -\omega \cdot \varepsilon_2 - \eta \,\mathrm{sgn}(\varsigma) - \mu \varsigma - \frac{1}{2} \varsigma \hat{\phi} h^T h \right) - F + \dot{v}_0 I_N + \dot{\varrho} \right]$
Substituting (20) into (12), (12) is rewritten as
$\dot{\varsigma}(t) = B_L \left( W^T h + \sigma - \frac{1}{2} \hat{\phi} H \varsigma \right) - \eta \,\mathrm{sgn}(\varsigma) - \mu \varsigma$
Theorem 1.
The leader–follower MAS (1)–(2) is considered and the GSMC (19) with the adaptive minimum parameter learning mechanism (17) is designed. When the time approaches infinity, the consensus of the MASs is achieved.
Proof. 
See Appendix A. □

3.2. Adaptive Control Law Design with ETM

To reduce the communication load of the nonlinear system, a GSMC based on the event-triggered mechanism is designed in this section. After the smart sensor receives the current states of the plants, the event-triggered controller compares the current system states with the states at the last triggering instant. The state measurement errors between the current instant and the last event-triggered instant are defined as
$\Delta_i^x = x_i(t_k^i) - x_i(t)$
$\Delta_i^v = v_i(t_k^i) - v_i(t)$
Let $t_k^i$ be the k-th triggering instant of agent i. The measurement errors of the leader are expressed as
$\Delta_0^x = x_0(t_k^0) - x_0(t)$
$\Delta_0^v = v_0(t_k^0) - v_0(t)$
where $t_k^0$ is the moment when the event-triggered controller of the leader has worked k times. The state error between the last triggering instant and the current instant is denoted as $\Delta_i = [\Delta_i^x, \Delta_i^v]^T$. According to Theorem 1, the control strategy is rewritten as
$u_i(t) = u_i(t_k^i), \quad t_{k+1}^i = \inf\left\{ t > t_k^i : \|\Delta_i\| \ge e^{-\alpha t} \ \text{or} \ t - t_k^i > T \right\}$
where α is a designed positive constant, $u_i(t)$ represents the output of the agent i controller, and $u_i(t_k^i)$ is the control held from time $t_k^i$. $t_{k+1}^i$ indicates the next triggering instant. In addition, there is an upper bound T on the interval between the triggering instant $t_k^i$ and the execution time t; the controller does not update until the measurement error satisfies $\|\Delta_i\| \ge e^{-\alpha t}$. Define $\Delta = [\Delta_1, \Delta_2, \dots, \Delta_N]^T$. According to the corresponding control strategy, the control of the i-th agent is rewritten as
$u_i(t) = \sum_{j=1, j \ne i}^{N} a_{ij} u_j(t_k^i) + \frac{1}{g_i(x(t_k^i))} \left( -\frac{1}{2}\varsigma_i(t_k^i) \hat{\phi}(t_k^i) h_i^T(x_i(t_k^i)) h_i(x_i(t_k^i)) + \dot{v}_0(t_k^i) - c_i \dot{e}_i^x(t_k^i) - f_i(x(t_k^i)) - \mu_i \varsigma_i(t_k^i) + \dot{\varrho}_i(t_k^i) \right)$
where $\varsigma_i(t_k^i) = c_i \cdot e_i^x(t_k^i) + e_i^v(t_k^i) - \varrho_i(t_k^i)$ is the sliding mode function of the i-th agent at the triggering instant. Meanwhile, $\dot{v}_0(t_k^i) = f_0(t_k^i) + g_0(t_k^i) u_0(t_k^i)$ is the leader velocity dynamics.
The global function of the controller is calculated as
$u(t) = G^{-1}(x(t_k)) \cdot \left[ B_L^{-1} \left( -\omega \cdot \varepsilon_2(t_k) - \eta \,\mathrm{sgn}(\varsigma(t_k)) - \mu \varsigma(t_k) - \frac{1}{2} \varsigma(t_k) \hat{\phi}(t_k) h^T h \right) - F(x(t_k)) + \dot{v}_0(t_k) + \dot{\varrho}(t_k) \right]$
where
  • $\varepsilon_2(t_k) = [e_1^v(t_k^1), e_2^v(t_k^2), \dots, e_N^v(t_k^N)]^T$ is the corresponding velocity error of the system;
  • $F(x(t_k)) = [f_1(x(t_k^1)), f_2(x(t_k^2)), \dots, f_N(x(t_k^N))]^T$ is the dynamic function term of the system;
  • $G(x(t_k)) = \mathrm{diag}\{g_1(x(t_k^1)), g_2(x(t_k^2)), \dots, g_N(x(t_k^N))\}$ is the dynamic gain of the system;
  • $\dot{v}_0(t_k) = [\dot{v}_0(t_k^1), \dot{v}_0(t_k^2), \dots, \dot{v}_0(t_k^N)]^T$ is the velocity dynamics of the leader at the various event-triggered times;
  • $\varsigma(t_k) = [\varsigma_1(t_k^1), \varsigma_2(t_k^2), \dots, \varsigma_N(t_k^N)]^T$ is the sliding mode function of the system;
  • $\mathrm{sgn}(\varsigma(t_k)) = [\mathrm{sgn}(\varsigma_1(t_k^1)), \mathrm{sgn}(\varsigma_2(t_k^2)), \dots, \mathrm{sgn}(\varsigma_N(t_k^N))]^T$ is the sign function of the sliding mode function;
  • $H(x(t_k)) = [H_1(x(t_k^1)), H_2(x(t_k^2)), \dots, H_N(x(t_k^N))]^T$ is the Gaussian basis function output of the system;
  • $\dot{\varrho}(t_k) = [\dot{\varrho}_1(t_k^1), \dot{\varrho}_2(t_k^2), \dots, \dot{\varrho}_N(t_k^N)]^T$ is the compensation term of the system;
  • $\omega = \mathrm{diag}\{c_1, c_2, \dots, c_N\}$ and $\mu = \mathrm{diag}\{\mu_1, \mu_2, \dots, \mu_N\}$ are the control parameter matrices.
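The triggering rule (32) can be sketched as a simple predicate: an agent transmits when the measurement error reaches the exponential threshold $e^{-\alpha t}$, or when the maximum inter-event time T has elapsed. The parameter values below are illustrative:

```python
import math

# Sketch of the event-trigger test: transmit only when the measurement
# error exceeds the exponential threshold, or the max inter-event time
# has elapsed (alpha and T_max are illustrative values).
def should_trigger(delta_norm, t, t_last, alpha=0.5, T_max=1.0):
    return delta_norm >= math.exp(-alpha * t) or (t - t_last) > T_max

# Early on the threshold is large, so a small error does not trigger ...
assert not should_trigger(0.3, t=0.1, t_last=0.0)
# ... while the same error triggers later, once the threshold has decayed.
assert should_trigger(0.3, t=5.0, t_last=4.5)
```

Because the threshold shrinks over time, triggering becomes denser as the consensus errors themselves shrink, which is how the exponential threshold "adapts to the control law".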
Theorem 2.
Considering the leader–follower MASs (1)–(2), the sliding mode control strategy (19) is proposed with the adaptive minimum parameter learning mechanism (17) and the event-triggered law (32). When the time approaches infinity, the consensus of the systems is achieved. Finally, the sliding mode states of agent i converge to the ultimate bound
$\Omega = \left\{ x_i(t) \in \mathbb{R}, v_i(t) \in \mathbb{R} : |\varsigma_i(t)| \le \vartheta \right\}$
Proof. 
See Appendix B. □
Theorem 3.
Considering the nonlinear systems (1)–(2) with the GSMC (34) and the event-triggered strategy (32), the interval between two triggering instants is $T_k^i = t_{k+1}^i - t_k^i$. Meanwhile, a positive lower bound is given as
$T_k^i \ge \frac{1}{\bar{\lambda}_0} \ln\left( 1 + \frac{\bar{\lambda}_0}{\bar{n}} \right)$
Proof. 
See Appendix C. □

4. Simulations

A simulation of a flexible-joint manipulator system that contains a leader and four followers is presented.
A type of heterogeneous second-order flexible-joint manipulator system is considered, and the control goal is to enable the manipulator with different sizes to follow the same desired trajectory. The physical model is borrowed from [40], where
$I_i \ddot{q}_i + K_i \dot{q}_i + m_i g l_i = u_i^\tau + d_i^\tau$
where $i = 1, 2, 3, 4$; $q_i \in \mathbb{R}$ represents the horizontal angular position of the connecting rod of agent i, and $\dot{q}_i$, $\ddot{q}_i$ are the angular velocity and acceleration of agent i. The moment of inertia is $I_i = 4 m_i l_i^2 / 3$. In addition, $m_i$ is the mass of the i-th manipulator, and $l_i$ is the distance from the center of mass of the manipulator link to the center of the connecting rod. g and $u_i^\tau$ are the acceleration of gravity and the control torque input, respectively. $K_i$ is the viscous friction coefficient of agent i, and $d_i^\tau$ denotes the uncertain external disturbance and the unknown system parameters. In order to reduce the impact of external disturbances, the virtual desired leader is considered as
$I_0 \ddot{q}_0 + K_0 \dot{q}_0 + m_0 g l_0 = u_0^\tau$
where $I_0 = 4 m_0 l_0^2 / 3$ and $K_0$ are the inertial moment and viscous friction coefficient of the leader. The mass of the leader is denoted as $m_0$, and $l_0$ is the distance from the center of mass of the manipulator link to the center of the connecting rod for the leader. The ideal control torque input is designed as $u_0^\tau = \cos(0.1t)$ to force the followers to approach the desired trajectory. The control goal is to enable the followers to approach the position of the leader as quickly as possible.
Denoting $x_i(t) = q_i$ and $v_i(t) = \dot{q}_i$, the dynamic function of the proposed leader–follower MASs is written as
$\dot{x}_i = v_i, \quad \dot{v}_i = f_i(x_i, v_i) + g_i(x_i, v_i) u_i + d_i(x_i, v_i)$
where $f_i(x_i, v_i) = -3 K_i v_i / (4 m_i l_i^2) - 3g / (4 l_i)$ and $g_i(x_i, v_i) = 3 / (4 m_i l_i^2)$, $i = 0, 1, 2, 3, 4$. Then, $d_i(x_i, v_i) = 3 d_i^\tau / (4 m_i l_i^2)$, $i = 1, 2, 3, 4$.
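A small sketch checking the state-space gains derived above from the manipulator model $I_i\ddot{q}_i + K_i\dot{q}_i + m_igl_i = u_i^\tau + d_i^\tau$ with $I_i = 4m_il_i^2/3$; the parameter values match the ones used below in the simulation:

```python
# State-space gains of the flexible-joint model I*q'' + K*q' + m*g*l = u + d
# with I = 4*m*l^2/3 (a sketch using the simulation parameters).
def gains(m, l, K, g=9.8):
    I = 4.0 * m * l**2 / 3.0
    g_i = 1.0 / I                          # input gain: 3 / (4*m*l^2)
    # f collects the friction and gravity torques divided by I
    f = lambda v: (-K * v - m * g * l) / I
    return g_i, f

g1, f1 = gains(m=1.0, l=0.25, K=2.0)
print(g1)   # 12.0 for the leader and first follower (l = 0.25 m)
```

Dividing the torque balance by $I_i$ directly yields $g_i = 3/(4m_il_i^2)$ and the gravity term $-3g/(4l_i)$, so the second-order form (1) follows from the physical model.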
In the communication topology graph in Figure 2, the leader is node zero, and the follower nodes are denoted as node one to node four. The Laplacian matrix is written as
$L = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \end{bmatrix}$
$B = \mathrm{diag}\{b_1, b_2, b_3, b_4\} = \mathrm{diag}\{1, 1, 0, 0\}$ is the input matrix. The dynamic parameters are given as $m_0 = m_1 = m_2 = m_3 = m_4 = 1$ kg; $l_0 = l_1 = 0.25$ m, $l_2 = 0.3$ m, $l_3 = 0.35$ m, $l_4 = 0.4$ m; $K_0 = K_1 = K_2 = K_3 = K_4 = 2$; $g = 9.8$ m/s$^2$. The angular positions and velocities at the initial time are $[x_0(0), v_0(0)]^T = [0, 0]^T$, $[x_1(0), v_1(0)]^T = [0.5, 0.5]^T$, $[x_2(0), v_2(0)]^T = [1, 1]^T$, $[x_3(0), v_3(0)]^T = [1.5, 1.5]^T$, $[x_4(0), v_4(0)]^T = [2, 2]^T$. The number of hidden layer nodes is chosen as 5. Then, the parameters of the RBFNN are given as $c_j^1 = c_j^2 = c_j^3 = c_j^4$, in which $c_1^i = [-2, -2]^T$, $c_2^i = [-1, -1]^T$, $c_3^i = [0, 0]^T$, $c_4^i = [1, 1]^T$, $c_5^i = [2, 2]^T$, and $b_j^i = 1$.
The sliding surface dynamics are written as
$\dot{\varsigma}_i = f_i + g_i u_i + d_i - \dot{v}_0 + c_i \dot{e}_i^x - \dot{\varrho}_i$
Relying on Theorem 2, the continuous control law is proposed as
$u_i^\tau(t) = \sum_{j=1, j \ne i}^{N} a_{ij} u_j^\tau + \left( -\frac{1}{2}\varsigma_i(t_k^i) \hat{\phi}_i(t_k^i) h_i^T(x_i(t_k^i)) h_i(x_i(t_k^i)) + \dot{v}_0(t_k^i) - c_i \dot{e}_i^x(t_k^i) - f_i(x(t_k^i)) - \mu_i \varsigma_i(t_k^i) + \dot{\varrho}_i(t_k^i) \right) / g_i(x(t_k^i))$
where $u_j^\tau$ is the control law of the adjacent agent j. The assumed external disturbance and uncertainty are $d_i^\tau = v_i \sin(x_i) + 0.1 \cos(t)$. Figure 3a and Figure 4a show the response curves of the horizontal angular position and the angular velocity, respectively. From these figures, asymptotic consensus is achieved.
From Table 1, the total number of sampling instants is 10,000, while the numbers of controller executions of the four agents are 4839, 3890, 4366 and 4815, respectively. As a result, the proposed method reduces the communication load by roughly half.
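As a quick sanity check on Table 1, the triggering counts translate into the following communication savings per agent:

```python
# Communication savings implied by Table 1: with 10,000 sampling instants,
# each agent transmits only at its triggering instants.
samples = 10_000
triggers = [4839, 3890, 4366, 4815]
savings = [1 - n / samples for n in triggers]
print([f"{s:.1%}" for s in savings])
```

Each agent skips roughly half or more of the periodic updates, which is the communication-pressure reduction claimed above.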
The results obtained with different event-trigger thresholds are shown in Table 2. From the table, the larger the event-trigger threshold, the more often the event is triggered but the shorter the stabilization time. Considering the stability of operation and the minimum communication pressure, the event-trigger threshold should be selected to be around 1; the threshold selected in this simulation is 0.8. The execution effects of the other agents are similar, so to keep the presentation compact, only the running results of agent 1 are listed in the table. The SMC with the general RBFNN is described as
$\bar{u}_i^r(t) = \sum_{j=1, j \ne i}^{N} a_{ij} \bar{u}_j^r(t_k^i) + \left( -\frac{1}{2}\varsigma_i(t_k^i) \hat{W}^T(t_k^i) h_i(x_i(t_k^i)) + \dot{v}_0(t_k^i) - c_i \dot{e}_i^x(t_k^i) - f_i(x(t_k^i)) - \mu_i \varsigma_i(t_k^i) + \dot{\varrho}_i(t_k^i) \right) / g_i(x(t_k^i))$
The time evolutions of the horizontal angular position and the angular velocity are shown in Figure 3b and Figure 4b, respectively, and the time evolutions of the control output are shown in Figure 5b. Comparing Figure 3a with Figure 3b, the followers take around four seconds to approach the trajectory of the leader with the SMC based on the general RBFNN, whereas the followers require only around two seconds to achieve a similar effect with the proposed scheme. Comparing Figure 4a and Figure 4b, the velocity under the proposed scheme flattens out after two seconds, while in the general RBFNN scheme the velocity still changes considerably at the same time. As a result, the dynamic performance of the systems under the different control schemes is clearly demonstrated.
The event triggering mechanism used in this paper is innovatively combined with global sliding mode control and radial basis function neural networks. Moreover, the exponential event trigger threshold can better adapt to the control law and reduce the communication pressure. A strategy based on the minimum parameter learning method is proposed to solve the system uncertainty. According to Figure 3, Figure 4 and Figure 5, the dynamic performance of the systems under different control schemes is clearly demonstrated.
In order to verify the control performance of the proposed controller under a non-vanishing disturbance, a bounded non-vanishing disturbance is given in the simulation. The status image of the system is given in Figure 6. To compare the performance of the system under different non-vanishing disturbances, the two types of disturbances are set as 10 and 20, respectively. It can be seen from Figure 6 that when the disturbance increases, the fluctuation of the system increases in the initial operating state. The MASs will eventually reach a consensus.
This section has presented numerical simulations to demonstrate the effectiveness of the proposed method. The time evolutions of the consensus errors eventually approach zero, so the consensus of the MASs (1)–(2) is achieved. The results indicate that the designed control protocol is efficient in reducing the adjustment time and overshoot.

5. Conclusions

The consensus of heterogeneous second-order MASs has been analyzed. Firstly, the SMC scheme based on the RBFNN was designed and the robustness of the method was analyzed; the RBFNN is used to approximate and compensate for the uncertainty of the system. Secondly, the ETM is used so that the number of transmissions is reduced significantly, and Zeno behavior is excluded by the control law. Finally, the simulation results show the superiority of the given method, and a comparison between the general RBFNN method and the proposed strategy is performed. However, the sliding mode controller still has shortcomings: due to the discontinuous switching characteristic of the GSMC scheme, chattering occurs in the system. In addition, the agent consensus problem under network attacks will be studied in future work based on the results of [41,42].

Author Contributions

Conceptualization, X.S. and J.G.; validation, X.S. and J.G.; investigation, X.S., J.G. and P.X.L.; data curation, X.S. and J.G.; writing—original draft preparation, X.S.; writing—review and editing, P.X.L. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62073296 and the Zhejiang Province Natural Science Foundation of China under Grant LZ23F030010.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Abbreviation | Complete Phrase
MASs | multi-agent systems
SMCS | sliding mode control scheme
SMC | sliding mode controller
GSMCS | global sliding mode control scheme
GSMC | global sliding mode controller
ETM | event-triggered mechanism
RBFNN | radial basis function neural network

Appendix A. Proof of Theorem 1

Proof. 
We choose the Lyapunov function $L = L_1 + L_2$, where $L_1 = \frac{1}{2} \varsigma^T \varsigma$ and $L_2 = \frac{1}{2\gamma} \mathrm{tr}(\tilde{\phi}^T \tilde{\phi})$. $L_1$ represents the robustness of the sliding mode term and $L_2$ represents the robustness of the minimum parameter learning method.
In the above equation, $\gamma > 0$. Then, taking the time derivative of $L_1$,
$\dot{L}_1 = \varsigma^T \dot{\varsigma} = \varsigma^T \left[ B_L \left( W^T h + \sigma - \frac{1}{2} \hat{\phi} H \varsigma \right) - \eta \,\mathrm{sgn}(\varsigma) - \mu \varsigma \right] \le \|B_L\| \|\sigma\| \|\varsigma\| + \frac{1}{2} N - \frac{1}{2} \|B_L\| \mathrm{tr}\left( \tilde{\phi}^T \varsigma \varsigma^T H \right) - \eta \|\varsigma\| - \mu \|\varsigma\|^2$
According to the definition of the RBFNN parameter $\phi_i$, it is obvious that the time derivative of $\phi_i$ is zero. Thus, taking the derivative of the second term of the Lyapunov–Krasovskii function,
$\dot{L}_2 = \frac{1}{\gamma} \mathrm{tr}\left( \tilde{\phi}^T \dot{\hat{\phi}} \right) = \frac{1}{\gamma} \mathrm{tr}\left( \tilde{\phi}^T \left( \frac{\gamma}{2} \|B_L\| \varsigma \varsigma^T H - \kappa \gamma \hat{\phi} \right) \right) = \frac{1}{2} \|B_L\| \mathrm{tr}\left( \tilde{\phi}^T \varsigma \varsigma^T H \right) - \kappa \,\mathrm{tr}\left( \tilde{\phi}^T \hat{\phi} \right) \le \frac{1}{2} \|B_L\| \mathrm{tr}\left( \tilde{\phi}^T \varsigma \varsigma^T H \right) - \kappa \|\hat{\phi}\|_F \|\tilde{\phi}\|_F$
Substituting (A1), (A2) and (17) into the first derivative of the Lyapunov–Krasovskii function, the following inequality is acquired:
$\dot{L} \le -\mu \|\varsigma\|^2 - \kappa \|\hat{\phi}\|_F \|\tilde{\phi}\|_F + \frac{N}{2} \|B_L\|$
Choosing $\kappa = 2\mu / \gamma$, it follows that
$\dot{L} \le -\mu \|\varsigma\|^2 - \frac{\mu}{\gamma} \left( \|\tilde{\phi}\|_F^2 - \|\phi\|_F^2 \right) + \frac{N}{2} \|B_L\| = -2\mu \left( \frac{1}{2\gamma} \|\tilde{\phi}\|_F^2 + \frac{1}{2} \|\varsigma\|^2 \right) + \frac{N}{2} \|B_L\| + \frac{\kappa}{2} \|\phi\|_F^2 = -2\mu \left( \frac{1}{2} \varsigma^T \varsigma + \frac{1}{2\gamma} \mathrm{tr}(\tilde{\phi}^T \tilde{\phi}) \right) + \frac{N}{2} \|B_L\| + \frac{\kappa}{2} \|\phi\|_F^2 = -2\mu L + Q$
where $Q = \frac{N}{2} \|B_L\| + \frac{\kappa}{2} \|\phi\|_F^2$. L is bounded and the differential inequality can be solved as
$L \le \frac{Q}{2R} + \left( L(0) - \frac{Q}{2R} \right) e^{-2Rt}$
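The step from the differential inequality to its solution is the standard comparison-lemma bound for $\dot{L} \le -2RL + Q$. A numerical sketch with illustrative constants, integrating the comparison system and checking it against the closed form:

```python
import math

# Numerically integrate the comparison system L' = -2R*L + Q and check it
# against the closed form Q/(2R) + (L0 - Q/(2R))*exp(-2R*t).
# R, Q, L0 are illustrative constants, not values from the paper.
R, Q, L0, dt = 1.5, 0.6, 2.0, 1e-4
Lyap, t = L0, 0.0
while t < 3.0:
    Lyap += dt * (-2.0 * R * Lyap + Q)   # forward Euler step
    t += dt

closed = Q / (2 * R) + (L0 - Q / (2 * R)) * math.exp(-2 * R * 3.0)
assert abs(Lyap - closed) < 1e-3
```

As $t \to \infty$ the exponential term vanishes, leaving the ultimate bound $Q/(2R)$, which is exactly the limit computed next.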
where $R = \mu$. Therefore, $\|\tilde{\phi}\|_F$ is bounded, and because $\phi_i$ is a constant, $\|\hat{\phi}\|_F$ is bounded. From the inequality (A5), the states converge to a bounded set as the time tends to infinity and the other signals are ultimately bounded according to the invariance principle:
$\lim_{t \to \infty} L = \frac{Q}{2R} = \frac{\frac{\kappa}{2} \|\phi\|_F^2 + \frac{1}{2} N \|B_L\|}{2R} = \frac{\|\phi\|_F^2}{2\gamma} + \frac{N \|B_L\|}{4R}$
Thus, it is concluded that the trajectories of the system states are asymptotically stable and all signals are ensured to be bounded. In conclusion, the follower agent systems (1) converge to the leader system state (2) as the time approaches infinity. □

Appendix B. Proof of Theorem 2

Proof. 
We choose $L_3 = L_4 + L_5$ as the Lyapunov–Krasovskii function, where $L_4 = \frac{1}{2} \varsigma^T \varsigma$ and $L_5 = \frac{1}{2\gamma} \mathrm{tr}(\tilde{\phi}^T \tilde{\phi})$. $L_4$ represents the robustness of the sliding mode term, and $L_5$ represents the robustness of the minimum parameter learning method.
Then, we take the derivative of  L 4 as follows:
$\dot{L}_4 = \varsigma^T(t) \dot{\varsigma}(t) = \varsigma^T \left[ B_L \left( W^T h + \sigma \right) + \omega \cdot \varepsilon_2(t) - \dot{\varrho}(t) + B_L \left( F(x) + G(x) \cdot u(t) - \dot{V}_0(t) \right) \right] \le \frac{1}{2} \varsigma^T B_L \phi H \varsigma + \frac{N}{2} \|B_L\| - \frac{1}{2} \varsigma^T B_L \hat{\phi} H \varsigma + \varsigma^T B_L \sigma - \eta \|\varsigma\| - \mu \|\varsigma\|^2 \le \|B_L\| \left( \|\sigma\| + \frac{1}{2} N \right) \|\varsigma\| + \frac{1}{2} \|H\|_F \|\tilde{\phi}\|_F \|\varsigma\|^2 - \eta \|\varsigma\| - \mu \|\varsigma\|^2$
Let $\hat{g}_i(x) = g_i^{-1}(x)$. According to the assumptions on $g(x)$, $f(x)$ and $h(x)$, there exist positive constants $\lambda_0^i, \lambda_1^i, \lambda_2^i, \lambda_3^i$ that satisfy the following inequalities:
$\|\hat{g}_i(x) - \hat{g}_i(x(t_k^i))\| \le \lambda_0^i \|x - x(t_k^i)\|$
$\|f_i(x) - f_i(x(t_k^i))\| \le \lambda_1^i \|x - x(t_k^i)\|$
$\|g_i(x) - g_i(x(t_k^i))\| \le \lambda_2^i \|x - x(t_k^i)\|$
$\|H_i(x) - H_i(x(t_k^i))\| \le \lambda_3^i \|x - x(t_k^i)\|$
For a clear analysis, the first time derivative of $L_4$ is divided into several parts. Firstly,
$\varsigma^T B_L F(x) - \varsigma^T B_L G(x) G^{-1}(x(t_k)) B_L^{-1} F(x(t_k)) = \varsigma^T B_L F(x) - \varsigma^T B_L F(x(t_k)) + \varsigma^T B_L F(x(t_k)) - \varsigma^T B_L G(x) G^{-1}(x(t_k)) F(x(t_k)) \le N \|B_L\| \lambda_1 \|\Delta\| \|\varsigma\| + \left( \bar{\psi}(B_L) / \underline{\psi}(B_L) \right) N \lambda_0 \|\Delta\| \|F(x(t_k))\|_F \|\varsigma\| = \zeta_1 \|\varsigma\|$
where $\zeta_1 = N \|B_L\| \lambda_1 \|\Delta\| + \left( \bar{\psi}(B_L) / \underline{\psi}(B_L) \right) N \lambda_0 \|\Delta\| \|F(x(t_k))\|_F$. By using a similar method, (A6) and (A7) are obtained as follows:
$\varsigma^T \omega \cdot \varepsilon_2(t) - \varsigma^T B_L G(x) G^{-1}(x(t_k)) B_L^{-1} \omega \cdot \varepsilon_2(t_k) = \varsigma^T \omega \cdot \varepsilon_2(t) - \varsigma^T \omega \cdot \varepsilon_2(t_k) + \varsigma^T \omega \cdot \varepsilon_2(t_k) - \varsigma^T B_L G(x) G^{-1}(x(t_k)) B_L^{-1} \omega \cdot \varepsilon_2(t_k) \le N \|B_L\| \|\omega\| \left( \|\Delta\| + \|\Delta_0\| \right) \|\varsigma\| + \left( \bar{\psi}(B_L) / \underline{\psi}(B_L) \right) \|\omega\| \|G(x)\|_F \lambda_0 \|\Delta\| \|\varepsilon_2(t_k)\|_F \|\varsigma\| = \zeta_2 \|\varsigma\|$
where $\zeta_2 = N \|B_L\| \|\omega\| \left( \|\Delta\| + \|\Delta_0\| \right) + \left( \bar{\psi}(B_L) / \underline{\psi}(B_L) \right) \|\omega\| \|G(x)\|_F \lambda_0 \|\Delta\| \|\varepsilon_2(t_k)\|_F$.
$$\varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}\varrho(t_k) - \varsigma^T\varrho(t) = \varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}\varrho(t_k) - \varsigma^T\varrho(t_k) + \varsigma^T\varrho(t_k) - \varsigma^T\varrho(t) \le \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|\varsigma\| = \zeta_3\|\varsigma\|$$
where $\zeta_3 = \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\|G(x)\|_F\lambda_0\Delta$. From the definition of $\varepsilon_2(t)$, $\varepsilon_2(t) = B_L\tilde{v}$.
$$\varsigma^T B_L V_0(t) - \varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}V_0(t_k) = \varsigma^T B_L V_0(t) - \varsigma^T B_L V_0(t_k) + \varsigma^T B_L V_0(t_k) - \varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}V_0(t_k)$$
$$\le N\|B_L\|\,\Delta\,\|\varsigma\| + \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|V_0(t_k)\|\,\|\varsigma\| = \zeta_4\|\varsigma\|$$
where $\zeta_4 = N\|B_L\|\Delta + \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\|G(x)\|_F\lambda_0\Delta\|V_0(t_k)\|$. Next, the bound of $\varsigma_i(t) - \varsigma_i(t_k)$ is obtained so that
$$\varsigma_i(t) - \varsigma_i(t_k) = c_i\,e_i^x(t) + e_i^v(t) - \varrho_i(t) - c_i\,e_i^x(t_k) - e_i^v(t_k) + \varrho_i(t_k) \le \|B_L\|\big(c_i(\Delta_i^x + \Delta_0^x) + \Delta_i^v + \Delta_0^v\big) \le \|B_L\|(c_i + 1)(\Delta_i + \Delta_0) = \vartheta_i$$
where $\vartheta_i = \|B_L\|(c_i + 1)(\Delta_i + \Delta_0)$. According to the bounds of $\varsigma_i(t) - \varsigma_i(t_k)$ and $\varsigma_i(t)$, one can obtain
$$\varsigma_i^2(t) - \varsigma_i^2(t_k) = \big(\varsigma_i(t) - \varsigma_i(t_k)\big)\big(\varsigma_i(t) + \varsigma_i(t_k)\big) \le 2\,\vartheta_i\,\|\varsigma(t)\|$$
Then, the bound of $\hat{\phi}_i - \hat{\phi}_i(t_k^i)$ is obtained by integrating the adaptive minimum-parameter-learning mechanism (17) from $t_k^i$ to $t$, where $t \in [t_k^i, t_{k+1}^i)$, so that
$$\hat{\phi}_i - \hat{\phi}_i(t_k^i) = \|B_L\|\,H_i(x)\int_{t_k^i}^{t}\frac{\gamma}{2}\varsigma_i^2(\tau)\,d\tau - k_y\int_{t_k^i}^{t}\hat{\phi}_i(\tau)\,d\tau \le 2\|B_L\|\big(H_i(x_k^i) + \lambda_{3i}\Delta^x\big)\vartheta\,\|\varsigma(t)\|\,T - k_y T\big(\hat{\phi}_i - \hat{\phi}_i(t_k^i)\big)$$
$$\hat{\phi}_i - \hat{\phi}_i(t_k^i) \le \frac{2\,\delta_{1i}\,\vartheta_i\,\|B_L\|\,\|\varsigma(t)\|\,T}{1 + k_y T}$$
where $\delta_{1i} = H_i(x_k^i) + \lambda_{3i}\Delta_i$. Denoting $\bar{g}_i(t) = \hat{\phi}_i H_i \varsigma_i$, the following inequality is written as
$$\bar{g}_i(t) - \bar{g}_i(t_k) = H_i\int_{t_k^i}^{t}\dot{\hat{\phi}}(\tau)\,\varsigma_i(\tau)\,d\tau \le \Big(\delta_{1i}\,\hat{\phi}(t_k^i) + \frac{2\,\delta_{1i}\,\vartheta_i\,\|B_L\|\,\|\varsigma(t)\|\,T}{1 + k_y T}\Big)\vartheta_i\,T$$
Let $\delta_{3i} = \Big(\delta_{1i}\,\hat{\phi}(t_k^i) + \frac{2\,\delta_{1i}\,\vartheta_i\,\|B_L\|\,\|\varsigma(t)\|\,T}{1 + k_y T}\Big)\vartheta_i\,T$. Then
$$\hat{g}_i(x)\,\bar{g}_i(t) - \hat{g}_i(x_k^i)\,\bar{g}_i(t_k^i) = \hat{g}_i(x)\,\bar{g}_i(t) - \hat{g}_i(x)\,\bar{g}_i(t_k^i) + \hat{g}_i(x)\,\bar{g}_i(t_k^i) - \hat{g}_i(x_k^i)\,\bar{g}_i(t_k^i) \le \delta_{3i}\big(\hat{g}_i(x) + \lambda_{0i}\Delta_i\big) + \bar{g}_i(t_k^i)\,\lambda_{0i}\,\Delta_i$$
Thus, the next term of  L ˙ 4 is rewritten as
$$\frac{1}{2}\varsigma^T B_L\hat{\phi}H\varsigma - \frac{1}{2}\varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}\hat{\phi}(t_k)H(x_k)\varsigma(t_k) \le \frac{1}{2}\|B_L\|\,\delta_3\,\|\varsigma\| + \frac{1}{2}\|B_L\|\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|H(x_k)\|_F\,\|\varsigma(t_k)\|\,\|\varsigma\| = \zeta_5\|\varsigma\|$$
where $\zeta_5 = \frac{1}{2}\|B_L\|\delta_3 + \frac{1}{2}\|B_L\|\|G(x)\|_F\lambda_0\Delta\|H(x_k)\|_F\|\varsigma(t_k)\|$. Applying the same transformation to the last two terms of $\dot{L}_3$, (A15) is obtained as
$$\varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}\,\mathrm{sgn}\big(\varsigma(t_k)\big) - \varsigma^T\mathrm{sgn}\big(\varsigma(t)\big) \le 3\,\frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|\varsigma\| - \|\varsigma\| = \zeta_6\|\varsigma\| - \|\varsigma\|$$
where $\zeta_6 = 3\,\frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\|G(x)\|_F\lambda_0\Delta$. In a similar way, (A16) is obtained as
$$\varsigma^T B_L G(x)G^{-1}(x_k)B_L^{-1}\varsigma(t_k) \le \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\,\theta\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|\varsigma\| + \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\,\|G(x)\|_F\,\lambda_0\,\Delta\,\|\varsigma\|^2 - \|\varsigma\|^2 = (\zeta_7 + \zeta_8)\|\varsigma\| - \|\varsigma\|^2$$
where $\zeta_7 = \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\theta\|G(x)\|_F\lambda_0\Delta$ and $\zeta_8 = \frac{\bar{\varphi}(B_L)}{\underline{\varphi}(B_L)}\|G(x)\|_F\lambda_0\Delta\|\varsigma\|$. Relying on (A7), (A13)–(A15) and (A21)–(A24), the derivative of the Lyapunov function is bounded as
$$\dot{L}_3 = \dot{L}_4 + \dot{L}_5 \le \|B_L\|\,\|\sigma\|\,\|\varsigma\| + \frac{1}{2}N + \frac{1}{2}\mathrm{tr}\big(\tilde{\phi}^T\varsigma\varsigma^T H\big) - \eta\|\varsigma\| - \mu\|\varsigma\|^2 + \big(\zeta_1 + \zeta_2 + \zeta_3 + \zeta_4 + \zeta_5 + \zeta_6 + \zeta_7 + \zeta_8\big)\|\varsigma\| + \frac{1}{\gamma}\mathrm{tr}\big\{\tilde{\phi}^T\dot{\hat{\phi}}\big\}$$
Choosing a positive value of $\eta$ as
$$\eta = \zeta_1 + \zeta_2 + \zeta_3 + \zeta_4 + \zeta_5 + \zeta_6 + \zeta_7 + \zeta_8 + \tau$$
ensures that
$$\dot{L}_3 \le -\mu\|\varsigma\|^2 - \kappa\,\|\hat{\phi}\|_F\,\|\tilde{\phi}\|_F + \frac{N}{2}\|B_L\|$$
Choosing $\kappa = 2\mu/\gamma$, we have
$$\dot{L}_3 \le -2\mu L_3 + Q_2$$
where $Q_2 = \frac{N}{2}\|B_L\| + \frac{\kappa}{2}\|\phi\|_F^2$. According to Lemma 1.1 in Ge and Wang (2004), $L_3(t)$ is bounded, and the differential inequality is solved as
$$L_3 \le \frac{Q_2}{2R} + \Big(L_3(0) - \frac{Q_2}{2R}\Big)e^{-2Rt}$$
$$\lim_{t\to\infty} L_3 = \frac{Q_2}{2R} = \frac{\frac{\kappa}{2}\|\phi\|_F^2 + \frac{1}{2}N\|B_L\|}{2R} = \frac{\|\phi\|_F^2}{2\gamma} + \frac{N\|B_L\|}{4R}$$
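These last steps follow the standard comparison-lemma argument: any scalar function satisfying $\dot{L} \le -2RL + Q_2$ is upper-bounded by the solution of the corresponding ODE, which settles at $Q_2/(2R)$. A minimal numerical sketch, using hypothetical values of $R$, $Q_2$ and $L_3(0)$ (the true constants depend on the network topology and controller gains):

```python
import math

# Hypothetical constants for illustration only
R, Q2, L0 = 0.8, 0.5, 3.0
steady = Q2 / (2 * R)  # the ultimate bound Q2/(2R)

def L(t):
    # Closed-form solution of dL/dt = -2*R*L + Q2 with L(0) = L0
    return steady + (L0 - steady) * math.exp(-2 * R * t)

# Forward-Euler integration of the comparison ODE matches the closed form
dt, x, t = 1e-4, L0, 0.0
while t < 10.0:
    x += dt * (-2 * R * x + Q2)
    t += dt
assert abs(x - L(10.0)) < 1e-3       # numerics agree with the closed form
assert abs(L(50.0) - steady) < 1e-6  # trajectory settles at Q2/(2R)
```

The exponential term decays at rate $2R$, so the transient vanishes and only the residual set of radius $Q_2/(2R)$ remains, matching the limit above.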
Similarly, as time reaches infinity, the Lyapunov function approaches a bounded value, and asymptotic stability is achieved from inequality (A29). When the state trajectories approach a neighborhood of the sliding surface, $\mathrm{sign}(\varsigma_i(t)) \ne \mathrm{sign}(\varsigma_i(t_k^i))$ may be observed; we then move to the second case. From the definitions of the measurement errors (22) and (23), it is obtained that
$$\varsigma(t) = \omega\,\varepsilon_1(t) + \varepsilon_2(t) - \varrho(t) = \omega B_L\big(\tilde{x}(t_k) - (\Delta^x + \Delta_0^x)I_N\big) + B_L\big(\tilde{v}(t_k) - (\Delta^v + \Delta_0^v)I_N\big) - \varrho(t)$$
$$= \varsigma(t_k) - \omega B_L(\Delta^x + \Delta_0^x)I_N - B_L(\Delta^v + \Delta_0^v)I_N - \big(\varrho(t) - \varrho(t_k)\big)$$
Then, the bound of the sliding mode function is deduced as
$$\|\varsigma_i(t)\| \le \|B_L\|\big(\omega(\Delta_i^x + \Delta_0^x) + \Delta_i^v + \Delta_0^v\big) \le \|B_L\|(\omega + 1)(\Delta_i + \Delta_0) = \vartheta$$
According to the above analysis, whether or not $\mathrm{sign}(\varsigma_i(t)) = \mathrm{sign}(\varsigma_i(t_k^i))$ holds, the states of the agents are constrained to the bound. □

Appendix C. Proof of Theorem 3

Proof. 
There exists a positive constant $\lambda_{4i}$ satisfying $\varsigma_i^2(t) - \varsigma_i^2(t_k^i) \le \lambda_{4i}(t - t_k^i) = \lambda_{4i}T_k^i$. In addition, there exists a positive constant $\lambda_{5i}$ satisfying $\|\bar{w}_i - \bar{w}_i(x_k^i)\| \le \lambda_{5i}\|x - x_k^i\|$.
$$\frac{d}{dt}\Delta_i(t) = \dot{\Delta}_i(t) \le \big\|v_i(t_k^i) - \Delta_i^v(t) - f_i(x_i) + g(x_i)u_i(t_k^i) + d_i(t)\big\|$$
$$\le \|v_i(t_k^i)\| + \Delta_i(t) + \|g(x_i)\|\bigg(\big\|g^{-1}(x_i)f_i(x_i)\big\| + \sum_{j=1}^{N}\|u_j(t_k^i)\| + \big\|g^{-1}(x_i)d_i(t)\big\| + \Big\|g_i^{-1}\big(x(t_k^i)\big)\Big\|\,\Big\|\frac{1}{2}s_i(t_k^i)\hat{\phi}(t_k^i)h_i^Th_i + v_0(t_k^i) - c_ie_i^x(t_k^i) - f_i\big(x(t_k^i)\big) - \mu_i\varsigma_i(t_k^i) + \varrho_i(t_k^i)\Big\|\bigg)$$
$$\bar{\lambda}_0 = 1 + \|g_i(x)\|\,\lambda_5$$
Relying on (A10)–(A12), the following inequalities are given as
$$\hat{\phi} - \hat{\phi}(t_k^i) = \int_{t_k^i}^{t}\Big(\frac{\gamma}{2}h_i^Th_i\,\varsigma_i^2 - \kappa\gamma\,\hat{\phi}\Big)d\tau \le \frac{\gamma\,T_k^i}{2\,(1 + \kappa\gamma T_k^i)}\Big(h_i^T(x_{ik})h_i(x_{ik}) + \lambda_{3i}\Delta_{ik}\Big)\Big(\varsigma_i^2(t_k^i) + \lambda_{4i}T_k^i\Big) = \delta_4$$
$$\bar{m} - \bar{m}(t_k^i) = \frac{1}{2}h_i^Th_i\int_{t_k^i}^{t}\hat{\phi}_i\,\varsigma_i\,d\tau \le \frac{1}{2}\Big(\lambda_{3i}\Delta_i^x + h_i^T(x_{ik})h_i(x_{ik})\Big)\Big(\hat{\phi}_i(t_k^i) + \delta_4\Big)\Big(\varsigma_i(t_k^i) + \vartheta_i\Big)T_k^i = \delta_5$$
Let $\Lambda(t) = v_0(t) - c_i e_i^x(t) - \mu_i\varsigma_i(t)$, and then
$$\frac{d}{dt}\Delta_i(t) \le \|v_i(t_k^i)\| + \Delta_i(t) + \|g(x_i)\|\Big(L_{6i}\,\Delta_i(t) + \delta_5 + \sum_{j=1}^{N}\|u_j(t_k^i)\|\Big) + \Big(\frac{1}{2} + \big\|g_i^{-1}\big(x(t_k^i)\big)\big\|\Big)\|\Lambda(t_k^i)\| + \varsigma_i(0)$$
For convenience, $\bar{n}$ is defined as
$$\bar{n} = \|v_i(t_k^i)\| + \|g(x_i)\|\Big(\delta_5 + \sum_{j=1}^{N}\|u_j(t_k^i)\|\Big) + \Big(\frac{1}{2} + \big\|g_i^{-1}\big(x(t_k^i)\big)\big\|\Big)\|\Lambda(t_k^i)\| + \varsigma_i(0)$$
The differential inequality is solved as
$$\Delta_i(t) \le \frac{\bar{n}}{\bar{\lambda}_0}\Big(e^{\bar{\lambda}_0(t - t_k^i)} - 1\Big)$$
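Inverting this exponential envelope gives a strictly positive minimum inter-event time: the measurement error needs at least $T = \frac{1}{\bar{\lambda}_0}\ln\big(1 + \bar{\lambda}_0\bar{\Delta}/\bar{n}\big)$ to grow from zero to any triggering threshold $\bar{\Delta}$. A short sketch with hypothetical constants (the actual $\bar{n}$, $\bar{\lambda}_0$ and threshold depend on the agent dynamics and trigger design):

```python
import math

# Hypothetical constants for illustration only
n_bar, lam0, threshold = 2.0, 1.5, 0.1

def error_envelope(tau):
    # Upper bound on the measurement error growth between events:
    # Delta(tau) <= (n_bar / lam0) * (exp(lam0 * tau) - 1)
    return n_bar / lam0 * (math.exp(lam0 * tau) - 1.0)

# Minimum inter-event time: solve error_envelope(T) = threshold for T
T_min = math.log(1.0 + lam0 * threshold / n_bar) / lam0
assert T_min > 0.0  # strictly positive gap between events -> no Zeno behavior
assert abs(error_envelope(T_min) - threshold) < 1e-9
```

Since the envelope starts at zero and grows at a finite rate, no trigger can fire before $T_{\min}$ has elapsed, which is exactly the Zeno-exclusion argument of the theorem.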
It is concluded that a positive lower bound exists between any two triggering instants, and thus Zeno behavior is excluded. □

References

1. DeGroot, M.H. Reaching a consensus. J. Am. Stat. Assoc. 1974, 69, 118–121.
2. Wang, X.H.; Huang, N.J. Finite-time consensus of multi-agent systems driven by hyperbolic partial differential equations via boundary control. Appl. Math. Mech. 2021, 42, 1799–1816.
3. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533.
4. Wu, Z.G.; Xu, Y.; Lu, R.Q.; Wu, Y.Q.; Huang, T.W. Event-triggered control for consensus of multiagent systems with fixed/switching topologies. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1736–1746.
5. Feng, X.Q.; Yang, Y.C.; Wei, D.X. Adaptive fully distributed consensus for a class of heterogeneous nonlinear multi-agent systems. Neurocomputing 2021, 428, 12–18.
6. Dong, Y.A.; Han, Q.L.; Lam, J. Network-based robust H-infinity control of systems with uncertainty. Automatica 2005, 41, 999–1007.
7. Saleem, O.; Rizwan, M.; Shiokolas, P.S.; Ali, B. Genetically optimized ANFIS-based PID controller design for posture-stabilization of self-balancing-robots under depleting battery conditions. Control Eng. Appl. Inform. 2019, 21, 22–33.
8. Saleem, O.; Mahmood-ul-Hasan, K. Adaptive state-space control of under-actuated systems using error-magnitude-dependent self-tuning of cost weighting-factor. Int. J. Control Autom. Syst. 2021, 19, 931–941.
9. Zhang, H.G.; Qu, Q.X.; Xiao, G.Y.; Cui, Y. Optimal guaranteed cost sliding mode control for constrained-input nonlinear systems with matched and unmatched disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2112–2126.
10. Xu, G.H.; Li, M.; Chen, J.; Lai, Q.; Zhao, X.W. Formation tracking control for multi-agent networks with fixed time convergence via terminal sliding mode control approach. Sensors 2021, 21, 1416.
11. Mohammad, S.P.; Fatemeh, S. Hybrid super-twisting fractional-order terminal sliding mode control for rolling spherical robot. Int. J. Control Autom. Syst. 2021, 23, 2343–2358.
12. Ye, M.Y.; Gao, G.Q.; Zhong, J.W. Finite-time stable robust sliding mode dynamic control for parallel robots. Int. J. Control Autom. Syst. 2021, 19, 3026–3036.
13. Qin, J.H.; Zhang, G.S.; Zheng, W.X.; Yu, K. Adaptive sliding mode consensus tracking for second-order nonlinear multiagent systems with actuator faults. IEEE Trans. Cybern. 2019, 49, 1605–1615.
14. Mironova, A.; Mercorelli, P.; Zedler, A. A multi input sliding mode control for Peltier cells using a cold-hot sliding surface. J. Frankl. Inst. 2018, 355, 9351–9373.
15. Yang, Y.; Xu, D.Z.; Ma, T.D.; Su, X.J. Adaptive cooperative terminal sliding mode control for distributed energy storage systems. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 434–443.
16. Yang, R.M.; Zhang, G.Y.; Sun, L.Y. Observer-based finite-time robust control of nonlinear time-delay systems via Hamiltonian function method. Int. J. Control 2020, 94, 3533–3550.
17. Mishra, R.K.; Sinha, A. Event-triggered sliding mode based consensus tracking in second order heterogeneous nonlinear multi-agent system. Eur. J. Control 2019, 45, 30–44.
18. Zheng, Q.H.; Yang, M.Q.; Yang, J.J.; Zhang, Q.R.; Zhang, X.X. Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process. IEEE Access 2018, 6, 15844–15869.
19. Zheng, Q.H.; Zhao, P.H.; Li, Y.; Wang, H.J.; Yang, Y. Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification. Neural Comput. Appl. 2021, 33, 7723–7745.
20. Zhao, M.Y.; Chang, C.H.; Xie, W.B.; Xie, Z.; Hu, J.Y. Cloud shape classification system based on multi-channel CNN and improved FDM. IEEE Access 2020, 8, 44111–44124.
21. Shi, P.; Shen, Q.K. Cooperative control of multi-agent systems with unknown state-dependent controlling effects. IEEE Trans. Autom. Sci. Eng. 2015, 12, 827–834.
22. Hu, Y.Z.; Si, B.L. A reinforcement learning neural network for robotic manipulator control. Neural Comput. 2018, 30, 1983–2004.
23. Wu, Q.H.; Wan, X.J.; Shen, Q.H. Research on dynamic modeling and simulation of axial-flow pumping system based on RBF neural network. Neurocomputing 2016, 12, 200–206.
24. Chen, B.; Liu, X.P.; Liu, K.F.; Lin, C. Direct adaptive fuzzy control of nonlinear strict-feedback systems. Automatica 2009, 45, 1530–1535.
25. Li, C.E.; Tang, Y.C.; Zou, X.J.; Zhang, P.; Lin, J.Q.; Lian, G.P. A novel agricultural machinery intelligent design system based on integrating image processing and knowledge reasoning. Appl. Sci. 2022, 12, 7900.
26. Rezaei, M.H.; Menhaj, M.B. Stationary average consensus for high-order multi-agent systems. IET Control Theory Appl. 2016, 11, 723–731.
27. Xu, Y.; Su, H.Y.; Pan, Y.J.; Wu, Z.G. Stationary average consensus for high-order multi-agent systems. Signal Process. 2013, 93, 1794–1803.
28. Garcia, E.; Cao, Y.C.; Yu, H. Decentralised event-triggered cooperative control with limited communication. Int. J. Control 2013, 86, 1479–1488.
29. Guo, G.; Ding, L.; Han, Q.L. A distributed event-triggered transmission strategy for sampled-data consensus of multi-agent systems. Automatica 2014, 50, 1479–1488.
30. Han, Z.Y.; Tang, W.K.S.; Jia, Q. Event-triggered synchronization for nonlinear multi-agent systems with sampled data. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 3553–3561.
31. Fan, Y.; Liu, L.; Feng, G.; Wang, Y. Self-triggered consensus for multi-agent systems with Zeno-free triggers. IEEE Trans. Autom. Control 2015, 60, 2779–2784.
32. Ma, Y.J.; Li, H.J.; Jun, Z. Output consensus for switched multi-agent systems with bumpless transfer control and event-triggered communication. Inf. Sci. 2021, 544, 585–598.
33. Xing, M.L.; Deng, F.Q. Tracking control for stochastic multi-agent systems based on hybrid event-triggered mechanism. Asian J. Control 2021, 21, 2352–2363.
34. Meng, X.Y.; Xie, L.H.; Soh, Y.C. Event-triggered output regulation of heterogeneous multi-agent networks. IEEE Trans. Autom. Control 2018, 63, 4429–4434.
35. Yang, Y.Z.; Liu, F.; Yang, H.Y.; Li, Y.L. Distributed finite-time integral sliding-mode control for multi-agent systems with multiple disturbances based on nonlinear disturbance observers. J. Syst. Sci. Complex. 2021, 34, 995–1013.
36. Yao, D.Y.; Liu, M.; Lu, R.Q.; Xu, Y. Adaptive sliding mode controller design of Markov jump systems with time-varying actuator faults and partly unknown transition probabilities. Nonlinear Anal. Hybrid Syst. 2018, 28, 105–122.
37. Liu, S.; Xie, L.H.; Quevedo, D.E. Event-triggered quantized communication-based distributed convex optimization. IEEE Trans. Control Netw. Syst. 2018, 5, 167–178.
38. Wang, A.Q.; Liu, L.; Qiu, J.B.; Feng, G. Event-triggered communication and data rate constraint for distributed optimization of multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1908–1919.
39. Wei, L.L.; Chen, M.; Li, T. Disturbance-observer-based formation-containment control for UAVs via distributed adaptive event-triggered mechanisms. J. Frankl. Inst. 2021, 358, 5305–5333.
40. Wang, N.N.; Hao, F. Event-triggered sliding mode control with adaptive neural networks for uncertain nonlinear systems. Neurocomputing 2021, 436, 184–197.
41. Wang, X.; Park, J.H.; Liu, H.; Zhang, X. Cooperative output-feedback secure control of distributed linear cyber-physical systems resist intermittent DoS attacks. IEEE Trans. Cybern. 2021, 51, 4924–4933.
42. Zhu, J.W.; Gu, C.Y.; Ding, S.X.; Zhang, W.A.; Wang, X.; Yu, L. A new observer-based cooperative fault-tolerant tracking control method with application to networked multiaxis motion control system. IEEE Trans. Ind. Electron. 2021, 68, 7422–7432.
Figure 1. The general structure of the RBF neural network.
Figure 2. Graph of the communication topology.
Figure 3. The comparison of the response curves for the horizontal angular position.
Figure 4. The comparison of response curves for horizontal angular velocity.
Figure 5. The comparison of the controllers.
Figure 6. The performance of the system under different non-vanishing disturbances.
Table 1. Communication decrease for each agent.

Controller        Execution Time    Communication Decrease
Time Mechanism    10,000            -
agent 1           4839              51.6%
agent 2           3890              61.1%
agent 3           4366              56.3%
agent 4           4815              51.8%
Table 2. Times of execution under different event-triggering thresholds.

ET Threshold    Execution Time    Communication Decrease    Stabilization Time
1               6686              33.14%                    3.2
0.5             3598              64.02%                    5.9
5               9323              6.77%                     1.5
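As a consistency check on Table 1, the communication-decrease column follows directly from the per-agent execution counts and the 10,000-execution time-triggered baseline:

```python
# decrease = 1 - (event-triggered executions) / (time-triggered executions)
BASELINE = 10_000  # executions under the time-triggered mechanism (Table 1)

table1 = {"agent 1": 4839, "agent 2": 3890, "agent 3": 4366, "agent 4": 4815}
reported = {"agent 1": 51.6, "agent 2": 61.1, "agent 3": 56.3, "agent 4": 51.8}

for agent, execs in table1.items():
    decrease = 100.0 * (1.0 - execs / BASELINE)
    # the table reports one decimal place, so allow a 0.06-point tolerance
    assert abs(decrease - reported[agent]) < 0.06
```

The same formula applied to Table 2 relates each event-triggering threshold's execution count to its communication decrease.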

Shen, X.; Gao, J.; Liu, P.X. Event-Triggered Sliding Mode Neural Network Controller Design for Heterogeneous Multi-Agent Systems. Sensors 2023, 23, 3477. https://doi.org/10.3390/s23073477