Article

Neural Network-Based Robust Bipartite Consensus Tracking Control of Multi-Agent System with Compound Uncertainties and Actuator Faults

1 School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
3 AVIC Chengdu Aircraft Design and Research Institute, Chengdu 610041, China
4 Chinese Aeronautical Establishment, Beijing 100029, China
5 School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(11), 2524; https://doi.org/10.3390/electronics12112524
Submission received: 8 April 2023 / Revised: 29 May 2023 / Accepted: 1 June 2023 / Published: 2 June 2023

Abstract:
This paper addresses the challenging problem of bipartite consensus tracking of multi-agent systems that are subject to compound uncertainties and actuator faults. Specifically, the study considers a leader agent with fractional-order nonlinear dynamics unknown to the followers. In addition, both cooperative and competitive interactions among agents are taken into account. To tackle these issues, the proposed approach employs a fully distributed robust bipartite consensus tracking controller, which integrates a neural network approximator to estimate the uncertainties of the leader and the followers. The adaptive laws of neural network parameters are continuously updated online based on the bipartite consensus tracking error. Furthermore, an adaptive control technique is utilized to generate the fault-tolerant component to mitigate the partial loss caused by actuator effectiveness faults. Compared with the existing works on nonlinear multi-agent systems, we consider the compound uncertainties, actuator faults and cooperative–competition interactions simultaneously. By implementing the proposed control scheme, the robustness of the closed-loop system can be significantly improved. Finally, numerical simulations are performed to validate the effectiveness of the control scheme.

1. Introduction

In recent years, researchers have paid much more attention to coordination control of multi-agent systems due to its benefits, such as low operating costs, high robustness, and flexible scalability [1,2,3]. Although agents can transmit information among themselves via multiple hops, in practice, not every follower has access to the leader’s information due to constraints such as transmission distance or environment in the communication network. Therefore, it is particularly important to consider the problem of distributed control in a non-complete topology. Distributed coordination control refers to agents working cooperatively through decentralized controllers by using local information and limited communication interactions. Consensus plays an important role in the research of distributed coordination control because of its wide range of applications, including sensor networks [4], multi-robots [5], and multi-UAVs [6]. Depending on the number of leader agents, the classical consensus can be divided into two categories: leaderless consensus [7,8,9] and consensus tracking [10,11,12]. As an important part, consensus tracking has become very popular in many fields, such as formation tracking [13], flocking [14], containment [15], etc. Among them, consensus tracking indicates that all followers can asymptotically follow a leader or an average of group leaders.
It is worth noting that most of the current research on distributed coordination control mainly focuses on cooperation between agents and describes the interactions among agents with an unsigned graph [16]. In the 1940s, Heider summarized the pattern of positive and negative interactions between individuals from the triangle of interpersonal relationships. He noticed that in social networks, friend and foe relationships coexist, and the simultaneous existence of cooperation and competition is the systemic norm. Further examples include: (1) biology, where neurons have both facilitating and inhibiting relationships [17]; (2) multimedia, where users hold trust or distrust attitudes toward other users [18]; (3) multi-party states, where there are cooperative and competitive relationships between two or more parties; (4) international relations, where there are cooperative and hostile relationships between states [19]. Unlike unsigned graphs, signed networks can describe the complex relationships mentioned above, with positive edges (sign "+") representing positive relationships between individuals and negative edges (sign "−") representing antagonistic ones. Consequently, the simultaneous existence of cooperation and competition interactions has important implications for the distributed group behavior of multi-agent systems [20]. Therefore, the study of distributed control for multi-agent systems over signed networks has extensive research implications.
Theoretically, the interactions between agents determine the final group behavior [21,22]. Consensus tracking is achieved by relying on cooperation among the agents, and eventually, the states of all followers remain the same as that of the leader. However, considering two groups of agents over signed networks, if there is cooperation among neighboring agents within subgroups and competition among neighboring agents across different subgroups, then eventually the follower agents in the same subgroup as the leader converge to the same state as the leader, while the other follower agents approach the opposite state [23]. This dynamical phenomenon is known as bipartite consensus tracking over signed networks. In recent years, this issue has received significant attention from researchers. In [24], an adaptive bipartite consensus tracking control approach was introduced for second-order multi-agent systems on competition networks, where the agents suffer from unknown disturbances. Furthermore, the interventional bipartite consensus tracking issue of high-order multi-agent systems was addressed in [25], and neural network-based adaptive estimators were designed to approximate and compensate for the nonlinear uncertainties of the agents. The research in [26] then focused on the bipartite tracking consensus issue for linear multi-agent systems with a dynamic leader, where the control input of the leader agent is unknown to each follower agent; a novel non-smooth control scheme was proposed on the basis of the relative states of the agents. In [27], the authors proposed a bipartite tracking consensus protocol for generic linear multi-agent systems over directed cooperation–competition networks and obtained the convergence conditions based on the matrix product technique. Moreover, the authors in [28] presented a novel sub-super-stochastic matrix method for stability analysis of the bipartite tracking control issue over signed networks. Thus, the research results on bipartite consensus tracking were further enriched. It is noteworthy that the aforementioned results rarely study the problem when the leader and the followers are subject to uncertainties simultaneously. Meanwhile, disturbances, such as unknown external disturbances and model uncertainties, are prevalent in the dynamics of leader and follower agents and have a serious impact on the control performance of the multi-agent system. The authors in [29] solved the bipartite consensus tracking problem and designed a robust controller combining a neural network with an extended high-gain observer (called NN-EHGO). However, this control structure is relatively complex, which makes it difficult to apply directly in practice.
Until now, numerous studies have been conducted on bipartite consensus tracking problems for multi-agent systems with integer-order dynamics, such as first-order systems [30], second-order systems [31], and high-order systems [32]. However, in practice, many systems cannot be described by integer-order homogeneous models when the agent works in a complex environment [33], e.g., underwater robots moving through large amounts of viscous material or micro-organisms on the seafloor, or unmanned aerial vehicles flying through a space filled with particles. In contrast, fractional-order systems can capture such system properties well in various fields due to their memory and hereditary characteristics. Additionally, owing to the system's scale and complexity, agents are likely to develop faults that degrade performance or eventually cause the system to fail completely [34]. It is therefore crucial to take the multi-agent system's fault-tolerant performance into account to guarantee long-term stability. Currently, there are limited results available for the bipartite consensus tracking problem of uncertain multi-agent systems when actuator faults are present.
Drawing from the aforementioned discussions, this paper’s primary goal is to resolve the bipartite consensus tracking control issue for multi-agent systems that encounter compound uncertainties and actuator faults. The interaction topology among the agents is represented by a signed graph. Moreover, the leader agent is modeled by an uncertain fractional-order equation, which is unknown to all followers. The following summarizes the primary contributions of this paper:
(1)
A robust fully distributed bipartite consensus tracking control scheme is designed for the multi-agent system with compound uncertainties and actuator faults. According to the property of fractional calculus, a function transformation relation is constructed, which effectively solves the problem of mismatching the derivative orders of the leader and follower dynamic models.
(2)
Neural networks are used to approximate the uncertainties of both the leader and follower agents, while online adaptive laws are employed to continually update the parameters of the neural network. Furthermore, the impact of neural network approximation errors on the performance of bipartite consensus tracking is considered. By introducing an approximation error estimation term into the control input, the accuracy of neural network approximation and the robustness of the controller are effectively enhanced.
(3)
To compensate for the partial loss of the actuator effectiveness problem, fault-tolerant components are built into the bipartite consensus tracking controllers for uncertain second-order multi-agent systems. It is demonstrated that all signals are guaranteed to be bounded, and followers’ states are able to track the leader’s state (or its opposite) as closely as possible.
The remaining sections of the paper are structured as follows: Section 2 provides an introduction to relevant mathematical theories that will be utilized in subsequent sections. Section 3 focuses on the bipartite consensus tracking problem, while Section 4 presents theoretical solutions to the bipartite consensus tracking control problem with zero approximation errors. Additionally, this section further discusses the design of robust bipartite consensus tracking controllers in the presence of neural network approximation errors. In Section 5, numerical simulations are used to validate the effectiveness of the designed control schemes. Finally, in Section 6, the entire paper is concluded, and potential future research topics are discussed.

2. Preliminaries

The following section will cover mathematical concepts associated with graphs, which are used in network connectivity, as well as fractional calculus. Prior to that, several frequently used symbols will be defined.
The notation $\mathrm{diag}_i^n[\beta_i]$ represents a diagonal matrix with diagonal entries $\beta_1, \beta_2, \ldots, \beta_n$, while $\mathrm{col}_i^n[\beta_i]$ creates a column vector with entries $\beta_1^T, \beta_2^T, \ldots, \beta_n^T$ arranged vertically. Alternatively, $\mathrm{col}_n[\beta]$ creates a column vector by stacking $\beta^T$ vertically $n$ times. The all-ones vector is defined as $\mathbf{1}_n \triangleq \mathrm{col}_n[1]$.

2.1. Signed Graph Theories

In this paper, a signed graph is denoted as $G$ and is composed of a node set $V$ and weighted edges $E$. An agent is represented by one node, and an edge from the $i$th agent to the $j$th agent (denoted by $E_{ij}$) indicates that the $j$th agent receives information from the $i$th agent. The graph topology represents the network structure among the agents. The weight of edge $E_{ij}$ is denoted as $a_{ij}$; the $i$th and $j$th agents are considered neighbors if and only if $a_{ij} \neq 0$. Conversely, $a_{ij} = 0$ if and only if the edge does not exist ($i$ and $j$ are not connected) or $i = j$ (a node cannot be considered its own neighbor). The interactions between agents are either cooperative ($a_{ij} = a_{ji} > 0$) or antagonistic ($a_{ij} = a_{ji} < 0$). If $a_{ij} = a_{ji}$ for all the agents, the graph is undirected. A signed graph $G$ is said to be structurally balanced if there is a division of its nodes into two disjoint sets $V_1, V_2$, with $V_1 \cup V_2 = V$ and $V_1 \cap V_2 = \emptyset$, such that positive edges only exist between nodes within the same set, while negative edges only exist between nodes in different sets. For a structurally balanced graph, we define the signature vector $\sigma = \mathrm{col}_i^n[\sigma_i] \in \mathbb{R}^{n \times 1}$, where $\sigma_i = 1$ if $\nu_i \in V_1$ and $\sigma_i = -1$ if $\nu_i \in V_2$. The signed graph considered in this paper is structurally balanced. The two agent subgroups of the multi-agent system will gradually approach two states with opposite signs but the same magnitude. Furthermore, for a structurally balanced graph, when $a_{ij} \neq 0$, it follows that $\sigma_i = \sigma_j \mathrm{sgn}(a_{ij})$.
The definition of the Laplacian matrix L for a signed graph G with n nodes is given by
$L = \mathrm{diag}_i^n\Big[\sum_{j=1}^n |a_{ij}|\Big] - \{a_{ij}\} \in \mathbb{R}^{n \times n}.$
Lemma 1
([35]). For a connected undirected graph with $n$ nodes, the Laplacian matrix $\bar{L}$ can be obtained as $\bar{L} = \mathrm{diag}[\sigma]\,L\,\mathrm{diag}[\sigma]$. The resulting matrix $\bar{L}$ has a single zero eigenvalue, and the eigenvector corresponding to this eigenvalue is $\mathbf{1}_n$. All other eigenvalues of $\bar{L}$ are positive. Therefore, $\bar{L}$ is a positive semi-definite matrix, and it satisfies $\bar{L}\mathbf{1}_n = \mathbf{1}_n^T\bar{L} = 0$.
Based on Lemma 1, for a connected undirected signed graph with $n$ nodes, the Laplacian matrix $L$ has exactly one zero eigenvalue, and its corresponding eigenvector is $\sigma$. This can be expressed mathematically as $L\sigma = \sigma^T L = 0$.
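For readers who wish to verify these graph-theoretic properties numerically, the following sketch builds the signed Laplacian defined above from a weighted adjacency matrix and checks the gauge transformation of Lemma 1. The adjacency matrix and signature vector are illustrative choices (not taken from this paper), assuming a structurally balanced undirected graph.

```python
import numpy as np

# Illustrative signed adjacency matrix of a structurally balanced undirected graph:
# agents {0, 1} cooperate, agents {2, 3} cooperate, and the two subgroups compete.
A = np.array([[0.0,  1.0, -1.0,  0.0],
              [1.0,  0.0,  0.0, -2.0],
              [-1.0, 0.0,  0.0,  1.5],
              [0.0, -2.0,  1.5,  0.0]])

# Signed Laplacian L = diag(sum_j |a_ij|) - {a_ij}.
L = np.diag(np.abs(A).sum(axis=1)) - A

# Signature vector sigma (+1 for subgroup V1, -1 for subgroup V2).
sigma = np.array([1.0, 1.0, -1.0, -1.0])

# Gauge transformation of Lemma 1: L_bar = diag(sigma) L diag(sigma).
L_bar = np.diag(sigma) @ L @ np.diag(sigma)

print(np.linalg.eigvalsh(L_bar))   # one (near-)zero eigenvalue, the rest positive
print(L @ sigma)                   # L * sigma = 0 for a structurally balanced graph
```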

2.2. Fractional Derivatives

Consider a smooth function $f(t)$ and let the fractional order be $p$. Based on the different definitions of fractional-order derivatives, there are three mainstream formulations, namely the Grünwald–Letnikov, Riemann–Liouville, and Caputo derivatives, denoted as ${}_{0}^{GL}D_t^p f(t)$, ${}_{0}^{RL}D_t^p f(t)$, and ${}_{0}^{C}D_t^p f(t)$ with respect to time $t$, respectively. Details of these three definitions of fractional-order derivatives can be found in [36].
For the purpose of notation simplification, $D_t^p f(t)$ is employed to represent any of the previously established derivatives. Presented below is one property pertaining to the fractional derivative $D_t^p f(t)$:
Property 1.
The composition rule [36]:
$D_t^p D_t^q f(t) = D_t^{p+q} f(t).$
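As a rough numerical illustration of these operators (not part of the paper), the Grünwald–Letnikov definition can be truncated on a uniform time grid to approximate $D_t^p f(t)$. The test function, order, and step size below are arbitrary choices used only for a sanity check.

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f_samples, p, h):
    """Truncated Gruenwald-Letnikov approximation of D_t^p f at the last grid point.

    f_samples: values f(t_0), f(t_1), ..., f(t_N) on a uniform grid with step h.
    p: fractional order; h: step size.
    """
    N = len(f_samples) - 1
    # Coefficients c_k = (-1)^k * binom(p, k), computed by the standard recurrence.
    c = np.empty(N + 1)
    c[0] = 1.0
    for k in range(1, N + 1):
        c[k] = c[k - 1] * (1.0 - (p + 1.0) / k)
    # GL sum: h^{-p} * sum_k c_k * f(t_N - k h)
    return h ** (-p) * np.dot(c, f_samples[::-1])

# Sanity check on f(t) = t^2: D_t^p t^2 = Gamma(3)/Gamma(3-p) * t^(2-p).
h, T, p = 1e-3, 1.0, 0.5
t = np.arange(0.0, T + h, h)
approx = gl_fractional_derivative(t ** 2, p, h)
exact = gamma(3) / gamma(3 - p) * T ** (2 - p)
print(approx, exact)  # the two values should be close
```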

3. Robust Bipartite Tracking Consensus Problem Description

Consider a multi-agent system comprised of $n$ follower agents, where each agent is identified by an index $i$ (i.e., $i \in \{1, 2, \ldots, n\} \triangleq \mathcal{I}$). Then, the dynamics of the $i$th follower agent are:
$\dot{x}_i(t) = w_i(t) + f_i(x_i, t), \qquad \dot{w}_i(t) = \varrho_i u_i(t) + g_i(x_i, w_i, t),$
where $x_i(t)$ and $w_i(t)$ represent the position and velocity states, respectively, and $u_i(t)$ denotes the control input. Moreover, the functions $f_i(x, t)$ and $g_i(x, w, t)$ are smooth but unknown to the agents, and they can differ arbitrarily between agents. The agents are connected by an undirected network, and the information shared among neighboring agents is denoted by the output $y_i(t)$.
Assumption 1.
The variable $\varrho_i$ represents the actuator fault (actuator health indicator) for the $i$th agent. When $\varrho_i = 1$, it indicates that the actuator is functioning normally. If $0 < \varrho_i < 1$, it suggests that the actuator has partially lost its actuating power, but is still operational.
On the other hand, a fractional differential equation is used to model the target agent, indexed as 0:
$D_t^q x_0(t) = f_0(x_0, t),$
where the smooth function $f_0(x, t)$ is known only to the leader agent and unknown to all followers, and the fractional order $q$ satisfies $0 < q \leq 2$. The state variable is denoted as $x_0(t)$. The leader agent is the target agent, while all other agents ($i \in \mathcal{I}$) are considered followers. At least one follower can access the output $x_0(t)$. In this paper, $x_i(0)$ and $w_i(0)$ represent the initial states at the start time $t = 0$, which is why $0$ is taken as the base point of $D_t^q$.
We aim to devise a control strategy for the followers that solves the bipartite consensus tracking control problem defined below:
Definition 1
(Bipartite Consensus Tracking Control). If, for all followers $i \in \mathcal{I}$, the state variables $x_i(t)$ converge to the leader's state modulated by the sign $\sigma_i$, i.e., $\lim_{t \to \infty}[x_i(t) - \sigma_i x_0(t)] = 0$ for all $i \in \mathcal{I}$, then the control protocol $u_i(t)$ in (3) is considered to achieve bipartite consensus tracking control for the multi-agent system.
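A minimal simulation sketch of the follower dynamics (3) and the bipartite tracking error of Definition 1 is given below, assuming scalar states and a simple explicit Euler discretization; the functions f and g are placeholders for the (unknown) uncertainties, and the step size is arbitrary.

```python
import numpy as np

def follower_step(x, w, u, rho, f, g, t, dt):
    """One explicit Euler step of the follower dynamics (3).

    rho in (0, 1] is the actuator effectiveness of Assumption 1;
    f(x, t) and g(x, w, t) stand in for the unknown smooth uncertainties.
    """
    x_next = x + dt * (w + f(x, t))
    w_next = w + dt * (rho * u + g(x, w, t))
    return x_next, w_next

def bipartite_tracking_error(x, sigma, x0):
    """Per-agent error x_i - sigma_i * x0 from Definition 1 (should vanish as t grows)."""
    return np.asarray(x) - np.asarray(sigma) * x0
```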
The primary difficulties associated with bipartite consensus tracking control problems include:
(1)
Due to the nonlinearity of the unknown functions, which are part of the agent dynamics, conventional control approaches relying on local feedback linearization are not applicable;
(2)
The actuator fault $\varrho_i$ enters the velocity loop, and $\varrho_i$ and $\varrho_j$ may differ across agents; thus, it is important to design a fault-tolerant bipartite consensus tracking control scheme;
(3)
Since the followers cannot obtain the information of the leader’s dynamics, including its order q, the conventional reference tracking control design cannot be employed;
(4)
When the uncertain components of each agent’s model exhibit significant variations, it becomes unfeasible to perform system stability analysis using a standardized model-based approach.
Remark 1.
Unlike most existing works on nonlinear multi-agent systems that only consider cooperative interactions among agents, this paper considers both cooperative and competitive interactions simultaneously. Specifically, the study focuses on bipartite interactions where each sub-group seeks to cooperate internally while also competing with its counterpart, often found in various real-world applications such as transportation or communication networks.

4. Main Results

This section aims to develop a robust control strategy for achieving bipartite consensus tracking of multi-agent systems that suffer from both compound uncertainties and actuator faults. At first, the leader agent will be modeled using fractional-order nonlinear dynamics, which are unknown to followers. Specifically, neural network-based approximators are designed to estimate uncertainties in both the leader and follower agents. Then, an adaptive control technique will also be adopted to generate a fault-tolerant component that can address partial loss resulting from actuator effectiveness faults. Here, we first describe how to design a neural network-based approximator. Based on the following assumptions, the linearly parameterized neural networks can estimate the values of unknown functions.
Assumption 2.
Linearly parameterized neural networks can be used to model the two unknown functions $f_i(x, t)$ and $g_i(x, w, t)$. These functions are represented on compact sets $\Omega_{fi} \subset \mathbb{R}^2$ and $\Omega_{gi} \subset \mathbb{R}^3$, respectively:
$f_i(x, t) = \phi_{fi}^T(x, t)\theta_{fi} + e_{fi}, \qquad g_i(x, w, t) = \phi_{gi}^T(x, w, t)\theta_{gi} + e_{gi},$
where the basis functions
$\phi_{fi}(x, t) = \mathrm{col}_k^{h_{fi}}[\phi_{fi,k}(x, t)] \in \mathbb{R}^{h_{fi}}, \qquad \phi_{gi}(x, w, t) = \mathrm{col}_k^{h_{gi}}[\phi_{gi,k}(x, w, t)] \in \mathbb{R}^{h_{gi}}$
are both known. Moreover, the parameters $\theta_{fi} = \mathrm{col}_k^{h_{fi}}[\theta_{fi,k}] \in \mathbb{R}^{h_{fi}}$ and $\theta_{gi} = \mathrm{col}_k^{h_{gi}}[\theta_{gi,k}] \in \mathbb{R}^{h_{gi}}$ are unknown constant vectors, while the errors in the neural network approximation are denoted by $e_{fi}$ and $e_{gi}$.
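The linear-in-parameters structure of Assumption 2 is what makes the later adaptive laws tractable: only the parameter vector is adapted, while the basis is fixed. The short sketch below illustrates this structure with a hypothetical basis (Section 5 uses Gaussian RBFs); the feature map phi_example is purely illustrative and not from the paper.

```python
import numpy as np

def linear_nn(phi, theta_hat):
    """Linearly parameterized approximator: f_hat(x, t) = phi(x, t)^T theta_hat."""
    return float(phi @ theta_hat)

# Hypothetical fixed basis; any basis satisfying Assumption 2 on the compact set works.
def phi_example(x, t):
    return np.array([1.0, x, t, x * t, np.sin(x)])

theta_hat = np.zeros(5)                       # updated online by the adaptive laws
f_hat = linear_nn(phi_example(1.2, 0.5), theta_hat)
```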
Assumption 3.
Linearly parameterized neural networks can be used to express the function $\gamma(x, t) \triangleq D_t^{1-q} f_0(x, t)$ on a prescribed compact set $\Omega_{\gamma i} \subset \mathbb{R}^2$:
$\gamma(x, t) = \phi_\gamma^T(x, t)\theta_\gamma + e_\gamma,$
where the known basis function is $\phi_\gamma(x, t) = \mathrm{col}_k^{h_{\gamma i}}[\phi_{\gamma i,k}(x, t)] \in \mathbb{R}^{h_{\gamma i}}$. Besides, $\theta_\gamma$ is defined as $\theta_\gamma = \mathrm{col}_k^{h_{\gamma i}}[\theta_{\gamma i,k}] \in \mathbb{R}^{h_{\gamma i}}$, whose elements are unknown. The neural network approximation error is denoted by $e_\gamma$.
In this paper, we denote the estimate of a quantity $\alpha$ by $\hat{\alpha}$, and its estimation error by $\tilde{\alpha} \triangleq \hat{\alpha} - \alpha$. The $i$th agent's estimates of $f_i(x, t)$, $g_i(x, w, t)$, and $\gamma(x, t)$ are represented by $\hat{f}_i(x, t) \triangleq \phi_{fi}^T(x, t)\hat{\theta}_{fi}$, $\hat{g}_i(x, w, t) \triangleq \phi_{gi}^T(x, w, t)\hat{\theta}_{gi}$, and $\hat{\gamma}_i(x, t) \triangleq \phi_\gamma^T(x, t)\hat{\theta}_{\gamma i}$, respectively. In this way, neural network-based linear expressions can be used to estimate the fractional-order model uncertainty of the leader and to construct feedforward terms in the control inputs, improving the accuracy and reliability of bipartite consensus tracking. The corresponding estimation errors are designated by $\tilde{f}_i(x, t)$, $\tilde{g}_i(x, w, t)$, and $\tilde{\gamma}_i(x, t)$. It is evident that, by defining
$\underline{\tilde{f}}_i(x, t) \triangleq \phi_{fi}^T(x, t)\tilde{\theta}_{fi}, \qquad \underline{\tilde{g}}_i(x, w, t) \triangleq \phi_{gi}^T(x, w, t)\tilde{\theta}_{gi}, \qquad \underline{\tilde{\gamma}}_i(x, t) \triangleq \phi_\gamma^T(x, t)\tilde{\theta}_{\gamma i} = \phi_\gamma^T(x, t)[\hat{\theta}_{\gamma i} - \theta_\gamma],$
the estimation errors can also be expressed as
$\tilde{f}_i(x, t) = \underline{\tilde{f}}_i(x, t) - e_{fi}, \qquad \tilde{g}_i(x, w, t) = \underline{\tilde{g}}_i(x, w, t) - e_{gi}, \qquad \tilde{\gamma}_i(x, t) = \underline{\tilde{\gamma}}_i(x, t) - e_\gamma.$
With reference to the universal approximation theorem of neural networks as presented in previous literature [37], we propose the following assumption:
Assumption 4.
The errors in approximation, denoted by $e_{fi}$, $e_{gi}$, and $e_\gamma$, are bounded by unknown constants $\delta_{fi}$, $\delta_{gi}$, and $\delta_\gamma$ on their respective compact sets $\Omega_{fi}$, $\Omega_{gi}$, and $\Omega_\gamma$. In other words, $|e_{fi}| \leq \delta_{fi}$, $|e_{gi}| \leq \delta_{gi}$, and $|e_{\gamma i}| \leq \delta_{\gamma i}$.
Assumption 5.
The errors in approximation, as defined in Assumptions 2 and 3, are such that $e_{fi} = e_{gi} = e_\gamma = 0$.
Remark 2.
The linear parametric modeling of unknown nonlinear dynamics has been extensively studied in classical adaptive control theory. Assumptions 2 and 3 are satisfied when the basis functions are properly chosen so that the acceptance domain can cover the value domain of the unknown smooth functions.

4.1. Bipartite Consensus Tracking of the Fractional-Order Leader with Zero Approximation Errors

In this section, we will proceed to design a robust bipartite consensus tracking controller under Assumption 5. To achieve this, we first introduce the following variables:
$p_{1i} = x_i - \sigma_i x_0, \qquad p_{2i} = z_{2i} - \sigma_i \hat{\gamma}_i(x_i, t), \qquad z_{2i} = w_i + \phi_{fi}^T(x_i, t)\hat{\theta}_{fi}.$
Taking the time derivative of $p_{2i}$ and substituting the dynamics (3), one gets
$\dot{p}_{2i} = \varrho_i u_i + g_i(x_i, w_i, t) + \big[\phi_{fi}^T(x_i, t)\dot{\hat{\theta}}_{fi} - \sigma_i \phi_\gamma^T(x_i, t)\dot{\hat{\theta}}_{\gamma i}\big] + \big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\dot{x}_i + \big[\partial_t\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_t\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big] = \varrho_i u_i + \phi_{gi}^T(x_i, w_i, t)(\hat{\theta}_{gi} - \tilde{\theta}_{gi}) + \big[\phi_{fi}^T(x_i, t)\dot{\hat{\theta}}_{fi} - \sigma_i \phi_\gamma^T(x_i, t)\dot{\hat{\theta}}_{\gamma i}\big] + \big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\big[z_{2i} - \phi_{fi}^T(x_i, t)\tilde{\theta}_{fi}\big] + \big[\partial_t\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_t\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big].$
The robust bipartite consensus tracking control algorithm is constructed as:
$u_{i0} = -k_\epsilon \epsilon_{1i} - k_p p_{2i} - \phi_{gi}^T(x_i, w_i, t)\hat{\theta}_{gi} - \big[\phi_{fi}^T(x_i, t)\dot{\hat{\theta}}_{fi} - \sigma_i \phi_\gamma^T(x_i, t)\dot{\hat{\theta}}_{\gamma i}\big] - \big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]z_{2i} - \big[\partial_t\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_t\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big], \qquad u_i = \hat{\varrho}_i^* u_{i0},$
where
$\epsilon_{1i} = \sum_{j=1}^n |a_{ij}|\big(x_i - \mathrm{sign}(a_{ij})x_j\big) + |b_i|\big(x_i - \sigma_i x_0\big)$
is the output feedback mechanism that relies on the information transmitted through the network.
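A small numerical sketch of this distributed feedback term is given below, assuming scalar agent states; it simply evaluates the weighted sum over signed neighbor weights and the leader pinning gains $b_i$, and is not part of the paper.

```python
import numpy as np

def epsilon_1(x, x0, A, b, sigma):
    """Distributed output feedback epsilon_1i computed from local information.

    x: follower states, x0: leader state, A: signed adjacency matrix,
    b: leader pinning gains (b_i > 0 only for followers that see the leader),
    sigma: signature vector (+1 / -1) of the structurally balanced graph.
    """
    n = len(x)
    eps = np.zeros(n)
    for i in range(n):
        for j in range(n):
            eps[i] += abs(A[i, j]) * (x[i] - np.sign(A[i, j]) * x[j])
        eps[i] += abs(b[i]) * (x[i] - sigma[i] * x0)
    return eps
```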
Remark 3.
The proposed robust bipartite consensus tracking controller incorporates a neural network approximator to estimate both the uncertainties of the leader and follower agents in multi-agent systems suffering from compound uncertainties. This technique offers improved accuracy, robustness, and adaptability, and can handle complex non-linear dynamics, which is essential for achieving bipartite consensus tracking. Moreover, by updating the neural network parameters online, the approach can learn and adapt to changing system parameters, making it more versatile.
Furthermore, substituting the control input (11), we obtain the derivative of $p_{2i}$ as
$\dot{p}_{2i} = -k_\epsilon \epsilon_{1i} - k_p p_{2i} - \phi_{gi}^T(x_i, w_i, t)\tilde{\theta}_{gi} - \big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\phi_{fi}^T(x_i, t)\tilde{\theta}_{fi} + \varrho_i \tilde{\varrho}_i^* u_{i0}.$
We define the following variables:
$P_1 \triangleq \mathrm{col}_i^n[p_{1i}], \quad P_2 \triangleq \mathrm{col}_i^n[p_{2i}], \quad X_0 \triangleq x_0 \mathbf{1}_n, \quad E_1 \triangleq \mathrm{col}_i^n[\epsilon_{1i}], \quad \hat{R} \triangleq \mathrm{col}_i^n[\sigma_i \hat{\gamma}_i(x_i, t)],$
$\tilde{R} \triangleq \mathrm{col}_i^n[\sigma_i \tilde{\gamma}(x_i, t)], \quad \tilde{\Psi} \triangleq \mathrm{col}_i^n\big\{\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\phi_{fi}^T(x_i, t)\tilde{\theta}_{fi}\big\}.$
Apparently, we have $E_1 = (L + B)P_1$.
Then, we can derive the matrix representation of variables as follows:
$P_1 = X - \mathrm{diag}[\sigma]X_0, \qquad P_2 = Z_2 - \hat{R} = \dot{X} + \tilde{F} - \hat{R},$
with the compact model:
$\dot{P}_1 = \dot{X} - \mathrm{diag}[\sigma]\dot{X}_0, \qquad \dot{P}_2 = -k_\epsilon (L + B)P_1 - k_p P_2 - \tilde{G} - \tilde{\Psi} + T_u.$
Additionally, the adaptive laws governing the neural network approximation parameters are:
$\dot{\hat{\theta}}_{fi} = k_f\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\phi_{fi}(x_i, t)p_{2i} + k_f k_\epsilon \phi_{fi}(x_i, t)\epsilon_{1i}, \qquad \dot{\hat{\theta}}_{\gamma i} = -\sigma_i k_\gamma k_\epsilon \epsilon_{1i}\phi_\gamma(x_i, t), \qquad \dot{\hat{\theta}}_{gi} = k_g \phi_{gi}(x_i, w_i, t)p_{2i}, \qquad \dot{\hat{\varrho}}_i^* = -p_{2i} k_\varrho u_{i0}.$
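To make the online adaptation concrete, the following sketch performs one forward-Euler step of two of the adaptive laws above (for $\hat{\theta}_{gi}$ and $\hat{\varrho}_i^*$), using the signs as reconstructed in the displayed equation; the gains and step size are illustrative, and this is a schematic discretization rather than the paper's implementation.

```python
import numpy as np

def adaptive_update(theta_g_hat, rho_star_hat, phi_g, p2i, u_i0, k_g, k_rho, dt):
    """Forward-Euler step of two adaptive laws (signs as reconstructed above)."""
    theta_g_hat = theta_g_hat + dt * k_g * phi_g * p2i     # theta_g_hat_dot = k_g * phi_g * p2
    rho_star_hat = rho_star_hat - dt * k_rho * p2i * u_i0  # rho_star_hat_dot = -k_rho * p2 * u_i0
    return theta_g_hat, rho_star_hat
```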
Theorem 2
(Bipartite Consensus Tracking Control). Assuming that Assumptions 2 and 3 hold, the control input $u_i$ given by Equation (11), along with the adaptive laws presented in Equation (16), can effectively address the robust bipartite consensus tracking problem outlined in Definition 1, provided that the following condition is met:
$\partial_x \gamma(x, t) < 0.$
Proof. 
In order to demonstrate the stability of the closed-loop system, a Lyapunov function is created in the following form:
$V = \frac{k_\epsilon}{2}P_1^T(L + B)P_1 + \frac{1}{2}P_2^T P_2 + \frac{1}{2k_g}\sum_{i=1}^n \tilde{\theta}_{gi}^T\tilde{\theta}_{gi} + \frac{1}{2k_f}\sum_{i=1}^n \tilde{\theta}_{fi}^T\tilde{\theta}_{fi} + \frac{1}{2k_f}\sum_{i=1}^n \tilde{\theta}_{\gamma i}^T\tilde{\theta}_{\gamma i} + \frac{1}{2k_\varrho}\sum_{i=1}^n \varrho_i \tilde{\varrho}_i^{*T}\tilde{\varrho}_i^*.$
It is clear that $V$ is positive definite. The derivative of $V$ along the trajectories of Equations (15) and (16) can be expressed as:
V ˙ = k ϵ P 1 T ( L + B ) ( X ˙ diag [ σ ] X ˙ 0 ) k ϵ ( X ˙ + F ˜ R ^ ) T ( L + B ) P 1 P 2 T ( G ˜ + Ψ ˜ T u ) + 1 k g i = 1 n θ ˜ g i T θ ^ ˙ g i + 1 k f i = 1 n θ ˜ f i T θ ^ ˙ f i + 1 k f i = 1 n θ ˜ γ i T θ ^ ˙ γ i + 1 2 k ϱ i = 1 n ϱ i ϱ ˜ i * T ϱ ˜ ˙ i * k p P 2 T P 2 = k ϵ ( R diag [ σ ] X ˙ 0 ) T ( L + B ) P 1 P 2 T ( G ˜ + Ψ ˜ T u ) k ϵ ( F ˜ R ˜ ) T E 1 + 1 k g i = 1 n θ ˜ g i T θ ^ ˙ g i + 1 k f i = 1 n θ ˜ f i T θ ^ ˙ f i + 1 k f i = 1 n θ ˜ γ i T θ ^ ˙ γ i + 1 2 k ϱ i = 1 n ϱ i ϱ ˜ i * T ϱ ˜ ˙ i * k p P 2 T P 2 .
According to (16),
P 2 T ( G ˜ + Ψ ˜ T u ) + k ϵ ( F ˜ R ˜ ) T E 1 = 1 k g i = 1 n θ ˜ g i T θ ^ ˙ g i + 1 k f i = 1 n θ ˜ f i T θ ^ ˙ f i + 1 k f i = 1 n θ ˜ γ i T θ ^ ˙ γ i + 1 2 k ϱ i = 1 n ϱ i ϱ ˜ i * T ϱ ˜ ˙ i * ,
holds.
According to the composition rule in (2), we have
$\dot{x}_0 = D_t^{1-q} D_t^q x_0(t) = D_t^{1-q} f_0(x_0, t) = \gamma(x_0, t),$
then we can obtain:
$R - \mathrm{diag}[\sigma]\dot{X}_0 = \mathrm{col}_i^n[\gamma(x_i, t) - \sigma_i \dot{x}_0] = \mathrm{col}_i^n[\gamma(x_i, t) - \sigma_i \gamma(x_0, t)] = \mathrm{col}_i^n[\partial_x\gamma(\xi_i, t)(x_i - \sigma_i x_0)] = \mathrm{diag}_i^n[\partial_x\gamma(\xi_i, t)]P_1 \triangleq -\Xi P_1.$
Here, $\xi_i \in [\min(x_i, \sigma_i x_0), \max(x_i, \sigma_i x_0)]$ is the intermediate point provided by the well-known mean value theorem. According to (17), $\Xi$ is positive definite. Therefore, we can conclude that:
$\dot{V} = -k_\epsilon P_1^T \Xi (L + B) P_1 - k_p P_2^T P_2 \leq 0.$
Hence, $V$ continues to decrease until $P_1$ and $P_2$ become zero, i.e., $P_1 \to 0$ and $P_2 \to 0$. This leads to $P_1(\infty) \to 0$, implying that for all $i \in \mathcal{I}$, $\lim_{t\to\infty}(x_i - \sigma_i x_0) = 0$. Therefore, based on Definition 1, the proof of the theorem is completed. □
Remark 4.
To overcome the issue of partial loss caused by actuator effectiveness faults, this paper proposes a fault-tolerant component using an adaptive control technique. This innovative approach improves not only the robustness but also the safety of the system since losses can be compensated before leading to total or catastrophic failure. Compared to traditional methods such as redundancy or backups, this method reduces computational complexity while offering comparable reliability.

4.2. Bipartite Consensus Tracking of the Fractional-Order Leader in the Presence of Nonzero Approximation Errors

In the preceding section, we formulated a bipartite consensus tracking control strategy for multi-agent systems under the assumption of zero approximation errors. However, Assumption 5 does not necessarily hold. Therefore, we should design the controller for the case in which the neural network approximation errors are not zero. Building on the previous section, we utilize the coordinate transformation $p_{1i}$ and $p_{2i}$ as defined in (9) once more. Nevertheless, due to the existence of approximation errors, the first derivative of $p_{2i}$ will contain two additional terms related to $e_{gi}$ and $e_{fi}$:
p ˙ 2 i = ϱ i u i + ϕ g i T ( x i , w i , t ) ( θ ^ g i θ ˜ g i ) + [ ϕ f i T ( x i , t ) θ ^ ˙ f i σ i ϕ γ T ( x i , t ) θ ^ ˙ γ i ] + [ x ϕ f i T ( x i , t ) θ ^ f i σ i x ϕ γ T ( x i , t ) θ ^ γ i ] [ z 2 i ϕ f i T ( x i , t ) θ ˜ f i ] + [ t ϕ f i T ( x i , t ) θ ^ f i σ i t ϕ γ T ( x i , t ) θ ^ γ i ] + e g i + [ x ϕ f i T ( x i , t ) θ ^ f i σ i x ϕ γ T ( x i , t ) θ ^ γ i ] e f i m i .
As the system cannot obtain the approximation errors, it is not possible to eliminate the term $m_i$ through the control input. Nevertheless, as stated in Assumption 4, we can introduce estimates $(\hat{\delta}_{fi}, \hat{\delta}_{gi}, \hat{\delta}_{\gamma i})$ of the error bounds and devise adaptive laws for them.
Then, the robust bipartite consensus tracking controller u i is designed as:
$u_{i0} = -k_\epsilon \epsilon_{1i} - k_p p_{2i} - \phi_{gi}^T(x_i, w_i, t)\hat{\theta}_{gi} - \big[\phi_{fi}^T(x_i, t)\dot{\hat{\theta}}_{fi} - \sigma_i \phi_\gamma^T(x_i, t)\dot{\hat{\theta}}_{\gamma i}\big] - \big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]z_{2i} - \big[\partial_t\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_t\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big] - \underbrace{\mathrm{rec}(p_{2i})\big\{\big|k_\epsilon \epsilon_{1i} + p_{2i}\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\big|\hat{\delta}_{fi} + |p_{2i}|\hat{\delta}_{gi} + \sigma_i|k_\epsilon \epsilon_{1i}|\hat{\delta}_{\gamma i}\big\}}_{d_i}, \qquad u_i = \hat{\varrho}_i^* u_{i0}.$
The control input (24) consists of four main components: a state feedback term to ensure the basic performance of the system, a neural network fitting feedforward term related to the leader acceleration information to improve the system tracking control performance, a neural network estimation term to eliminate the uncertainty term in the follower model, and a neural network estimation error compensation term to reduce the impact of the approximation error on the estimation performance.
The safe reciprocal function $\mathrm{rec}(\alpha)$ is defined as
$\mathrm{rec}(\alpha) = \begin{cases} \dfrac{1}{\alpha}, & \alpha \neq 0, \\ 0, & \alpha = 0. \end{cases}$
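For completeness, this safe reciprocal is a one-line helper; the sketch below is an assumed direct translation of the piecewise definition.

```python
def rec(alpha: float) -> float:
    """Safe reciprocal used in the compensation term of (24): 1/alpha if alpha != 0, else 0."""
    return 1.0 / alpha if alpha != 0.0 else 0.0
```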
Then the derivative of p 2 i becomes:
p ˙ 2 i = k ϵ ϵ 1 i k p p 2 i ϕ g i T ( x i , w i , t ) θ ˜ g i + m i d i [ x ϕ f i T ( x i , t ) θ ^ f i σ i x ϕ γ T ( x i , t ) θ ^ γ i ] ϕ f i T ( x i , t ) θ ˜ f i + ϱ i ϱ ˜ i * u i 0 .
Denoting the column vectors $M = \mathrm{col}_i^n[m_i]$, $\Delta = \mathrm{col}_i^n[d_i]$, $E_f = \mathrm{col}_i^n[e_{fi}]$, and $E_\gamma = \mathrm{col}_i^n[e_{\gamma i}]$, respectively, one can modify Equations (14) and (15) into
$P_2 = \dot{X} + \tilde{F} - \hat{R} = \dot{X} - R + \tilde{F} - \tilde{R} - E_f + E_\gamma, \qquad \dot{P}_2 = -k_\epsilon(L + B)P_1 - k_p P_2 - \tilde{G} - \tilde{\Psi} + M - \Delta.$
Further adaptive rules for the neural network parameters estimation and approximation error bounds are:
$\dot{\hat{\theta}}_{fi} = k_f\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\phi_{fi}(x_i, t)p_{2i} + k_f k_\epsilon \phi_{fi}(x_i, t)\epsilon_{1i}, \qquad \dot{\hat{\theta}}_{\gamma i} = -\sigma_i k_f k_\epsilon \epsilon_{1i}\phi_\gamma(x_i, t), \qquad \dot{\hat{\theta}}_{gi} = k_g \phi_{gi}(x_i, w_i, t)p_{2i}, \qquad \dot{\hat{\delta}}_{fi} = k_\delta\big|k_\epsilon \epsilon_{1i} + p_{2i}\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]\big|, \qquad \dot{\hat{\delta}}_{gi} = k_\delta|p_{2i}|, \qquad \dot{\hat{\delta}}_{\gamma i} = k_\delta \sigma_i|k_\epsilon \epsilon_{1i}|, \qquad \dot{\hat{\varrho}}_i^* = -p_{2i} k_\varrho u_{i0}.$
Theorem 3
(Bipartite Consensus Tracking Control with nonzero approximation errors). If Equation (17) is satisfied, the bipartite consensus tracking problem described by Definition 1 under Assumptions 2–4 can be achieved by utilizing the adaptive algorithm designed by Equation (28) in combination with the control protocol u i given by (24).
Proof. 
To prove the stability of the closed-loop system, the following positive definite Lyapunov function is chosen:
$V = \frac{k_\epsilon}{2}P_1^T(L + B)P_1 + \frac{1}{2}P_2^T P_2 + \frac{1}{2k_g}\sum_{i=1}^n \tilde{\theta}_{gi}^T\tilde{\theta}_{gi} + \frac{1}{2k_f}\sum_{i=1}^n \tilde{\theta}_{fi}^T\tilde{\theta}_{fi} + \frac{1}{2k_f}\sum_{i=1}^n \tilde{\theta}_{\gamma i}^T\tilde{\theta}_{\gamma i} + \frac{1}{2k_\delta}\sum_{i=1}^n \tilde{\delta}_{gi}^T\tilde{\delta}_{gi} + \frac{1}{2k_\delta}\sum_{i=1}^n \tilde{\delta}_{fi}^T\tilde{\delta}_{fi} + \frac{1}{2k_\delta}\sum_{i=1}^n \tilde{\delta}_{\gamma i}^T\tilde{\delta}_{\gamma i},$
and its derivative is:
V ˙ = k ϵ P 1 T ( L + B ) ( X ˙ X ˙ 0 ) k ϵ ( X ˙ R + F ˜ R ˜ E f + E γ ) T ( L + B ) P 1 P 2 T ( G ˜ + Ψ ˜ ) + P 2 T ( M Δ ) + 1 k g i = 1 n θ ˜ g i T θ ^ ˙ g i + 1 k f i = 1 n θ ˜ f i T θ ^ ˙ f i + 1 k f i = 1 n θ ˜ γ i T θ ^ ˙ γ i + 1 k δ i = 1 n δ ˜ f i T δ ^ ˙ f i + 1 k δ i = 1 n δ ˜ g i T δ ^ ˙ g i + 1 k δ i = 1 n δ ˜ γ i T δ ^ ˙ γ i k p P 2 T P 2 = D 1 + D 2 ,
where
D 1 = k ϵ ( R X ˙ 0 ) T ( L + B ) P 1 P 2 T ( G ˜ + Ψ ˜ ) k ϵ ( F ˜ R ˜ ) T E 1 + 1 k g i = 1 n θ ˜ g i T θ ^ ˙ g i + 1 k f i = 1 n θ ˜ f i T θ ^ ˙ f i + 1 k f i = 1 n θ ˜ γ i T θ ^ ˙ γ i k p P 2 T P 2 ,
and
D 2 = ( k ϵ ( E f E γ ) T E 1 + P 2 T M ) P 2 T Δ + 1 k δ i = 1 n δ ˜ f i T δ ^ ˙ f i + 1 k δ i = 1 n δ ˜ g i T δ ^ ˙ g i + 1 k δ i = 1 n δ ˜ γ i T δ ^ ˙ γ i .
According to Equations (19)–(22) in Theorem 2, one gets
$D_1 = -k_\epsilon P_1^T \Xi (L + B) P_1 - k_p P_2^T P_2 \leq 0.$
For the sake of simplification, define $s_{fi} \triangleq k_\epsilon \epsilon_{1i} + p_{2i}\big[\partial_x\phi_{fi}^T(x_i, t)\hat{\theta}_{fi} - \sigma_i \partial_x\phi_\gamma^T(x_i, t)\hat{\theta}_{\gamma i}\big]$. Accordingly, based on Equations (23), (24), (28) and Assumption 4,
D 2 = i = 1 n s f i e f i | s f i | ( δ ^ f i δ ˜ f i ) + i = 1 n p 2 i e g i | p 2 i | ( δ ^ g i δ ˜ g i ) + i = 1 n k ϵ ϵ 1 i e γ i | k ϵ ϵ 1 i | ( δ ^ γ i δ ˜ γ i ) i = 1 n | s f i | · | e f i | | s f i | · δ f i + i = 1 n | p 2 i | · | e g i | | p 2 i | · δ g i + i = 1 n | k ϵ ϵ 1 i | · | e γ i | | k ϵ ϵ 1 i | · δ γ i 0 ,
holds for almost all time as long as $P_2 \neq 0$; moreover, it is obvious that if $P_2 \equiv 0$ and $E_1 \equiv 0$, one gets $D_2 \equiv 0$. Based on this information, it can be inferred that
$\dot{V} = D_1 + D_2 \leq -k_\epsilon P_1^T \Xi (L + B) P_1 - k_p P_2^T P_2 \leq 0,$
and based on the outcome of Theorem 2, the proof of the theorem is completed. □
Remark 5.
A class of robust control methods for estimating and compensating system uncertainties and disturbances has been extensively studied in the existing literature. The idea of these methods is to use the system input and output information to estimate the uncertainties and treat them as an equivalent input for feedforward compensation, in order to simplify the controller design while reducing the impact of system uncertainties and disturbances. Methods developed based on such ideas include the disturbance observer (DOB) [38] and uncertainty and disturbance estimation (UDE) [39], but these methods are model-based designs and often require an accurate system model to achieve an efficient estimation of unknown uncertainties. In contrast, the approximation capability of neural networks for unknown models can make up for the shortcomings of traditional control methods: they do not rely on accurate mathematical models, use learning and adaptation capabilities to complete the mapping of the control system from input to output, and have strong robustness to system parameter variations and uncertain perturbations. Therefore, the robust control scheme based on neural network estimation designed in this paper is more general than traditional filter-based estimation methods such as DOB and UDE.
Remark 6.
All control protocols in this paper assume that the signed network has the property of structural balance. On this basis, the multi-agent system can be naturally divided into two subgroups and show relative motion trends by the positive and negative nature of the connection weights. If the signed network no longer has the structurally balanced characteristic and cannot be grouped directly, how to design the control protocol for the multi-agent system to achieve bipartite consensus tracking is still an open topic.

5. Simulations

In the following section, we will employ numerical simulations to investigate the theoretical results of the robust bipartite consensus tracking control. To evaluate the effectiveness of the bipartite tracking control algorithm proposed in Theorem 3, we will consider a group of four agents that are interconnected in accordance with the topology illustrated in Figure 1. The dynamics (3) of these agents incorporate non-linear components, as specified by the following equation:
x ˙ 1 = w 1 + cos ( x 1 ) 1 t 2 + 1 , w ˙ 1 = ϱ 1 u 1 + 0.1 sin ( x 1 + w 1 ) e t , x ˙ 2 = w 2 x 2 e t 2 + sin ( x 2 ) , w ˙ 2 = ϱ 2 u 2 0.2 x 2 e w 2 2 t , x ˙ 3 = w 3 + e | x 3 | t 1 , w ˙ 3 = ϱ 3 u 3 x 3 sin ( w 3 ) cos x 3 ( t + 1 ) 2 , x ˙ 4 = w 3 + e x 4 s i n ( x 4 t ) , w ˙ 4 = ϱ 4 u 4 e x 4 t c o s ( w 4 ) 1 .
The leader dynamics model (4) with the uncertain part is defined as follows:
D t q x 0 = 0.3 x 0 + cos ( t 0.7 ) + 0.01 t + 1 .
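A rough sketch of how such a fractional-order leader trajectory can be generated numerically is given below, using an explicit Grünwald–Letnikov scheme; the drift function passed in at the bottom is only a placeholder (it is not the exact $f_0$ given above), and the order, step size, and initial value are illustrative.

```python
import numpy as np

def simulate_fractional_leader(f0, x0_init, q, h, steps):
    """Integrate D_t^q x0 = f0(x0, t) with an explicit Gruenwald-Letnikov scheme.

    Uses x0(t_k) = h^q * f0(x0(t_{k-1}), t_{k-1}) - sum_{j>=1} c_j * x0(t_{k-j}),
    where c_j are the GL binomial coefficients; a rough scheme for illustration only.
    """
    c = np.empty(steps + 1)
    c[0] = 1.0
    for j in range(1, steps + 1):
        c[j] = c[j - 1] * (1.0 - (q + 1.0) / j)
    x = np.empty(steps + 1)
    x[0] = x0_init
    for k in range(1, steps + 1):
        history = np.dot(c[1:k + 1], x[k - 1::-1])      # sum_j c_j * x[k - j]
        x[k] = h ** q * f0(x[k - 1], (k - 1) * h) - history
    return x

# Placeholder leader drift (NOT the exact f0 of the paper, which is given above).
leader = simulate_fractional_leader(lambda x, t: -0.3 * x + np.cos(t) + 0.01 * t + 1.0,
                                    x0_init=3.0, q=1.2, h=0.01, steps=3000)
```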
It can be seen from Figure 1 that only the first agent can obtain the output state information of the leader with the corresponding weight b 1 = 2 , while b 2 = b 3 = b 4 = 0 .
In order to approximate the unknown functions $f$, $g$, and $\gamma$, we utilize radial basis function (RBF) neural networks. Specifically, we opt for the commonly utilized Gaussian functions to act as the basis functions $\phi_{fi}(x, t)$, $\phi_{gi}(x, w, t)$, and $\phi_\gamma(x, t)$ in Equation (5). These Gaussian functions are defined as:
$\phi_{fi,k}(x, t) = e^{-\frac{(x - \mu_{fi}^{x,k})^2 + (t - \mu_{fi}^{t,k})^2}{\eta_{fi,k}^2}}, \; k \in \{1, 2, \ldots, h_{fi}\}, \qquad \phi_{gi,k}(x, w, t) = e^{-\frac{(x - \mu_{gi}^{x,k})^2 + (w - \mu_{gi}^{w,k})^2 + (t - \mu_{gi}^{t,k})^2}{\eta_{gi,k}^2}}, \; k \in \{1, 2, \ldots, h_{gi}\}, \qquad \phi_{\gamma i,k}(x, t) = e^{-\frac{(x - \mu_{\gamma i}^{x,k})^2 + (t - \mu_{\gamma i}^{t,k})^2}{\eta_{\gamma i,k}^2}}, \; k \in \{1, 2, \ldots, h_{\gamma i}\}.$
The aforementioned Gaussian functions are characterized by their center μ * , k and their width η * , k with each k corresponding to a distinct receptive field.
In this simulation, the neural network $\phi_{fi}^T(x, t)\theta_{fi}$ consists of $h_{fi} = 17 \times 17$ nodes, whose centers $(\mu_{fi}^{x,k}, \mu_{fi}^{t,k})$ are uniformly distributed over $[-25, 25] \times [0, 30]$. Additionally, the neural network $\phi_{gi}^T(x, w, t)\theta_{gi}$ contains $h_{gi} = 17 \times 17 \times 17$ nodes, with centers $(\mu_{gi}^{x,k}, \mu_{gi}^{w,k}, \mu_{gi}^{t,k})$ evenly spaced in $[-25, 25] \times [-25, 25] \times [0, 30]$. The widths were selected as $\eta_{fi,k} = \eta_{gi,k} = 6$ and $\eta_{\gamma i,k} = 6$. The number of nodes of the third network was set to $h_{\gamma i} = 17 \times 17$, with centers $(\mu_{\gamma i}^{x,k}, \mu_{\gamma i}^{t,k})$ evenly distributed over $[-25, 25] \times [0, 30]$. For the leader agent, the initial values $x_0(0) = 3$ and $\hat{\theta}_{\gamma i}(0) = 0$ were set.
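The following sketch constructs such a Gaussian RBF feature vector on the $17 \times 17$ grid of centers described above and evaluates the corresponding linear approximator; the grid ranges and the common width of 6 are taken from the text, while the query point and the zero-initialized parameter vector are illustrative.

```python
import numpy as np

# Uniform grid of RBF centers as described above: 17 x 17 over [-25, 25] x [0, 30].
mu_x, mu_t = np.meshgrid(np.linspace(-25.0, 25.0, 17), np.linspace(0.0, 30.0, 17))
mu_x, mu_t = mu_x.ravel(), mu_t.ravel()     # 289 centers
eta = 6.0                                   # common RBF width

def phi_f(x, t):
    """Gaussian RBF feature vector phi_fi(x, t) built from the centers and width above."""
    return np.exp(-((x - mu_x) ** 2 + (t - mu_t) ** 2) / eta ** 2)

theta_f_hat = np.zeros(mu_x.size)           # adapted online by (16)/(28)
f_hat = phi_f(1.0, 0.5) @ theta_f_hat       # f_hat_i(x, t) = phi_fi^T theta_f_hat
```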
The initial conditions of the multi-agent system were set as $X(0) = [3, 1, 1, 1.5]^T$, $W(0) = [2, 3, 2, 1]^T$, $\hat{\theta}_{fi}(0) = \hat{\theta}_{gi}(0) = 0$, and the parameters related to the proposed controller were selected as $k_e = k_z = 10$, $k_f = k_g = k_\delta = k_\varrho = 1$. Moreover, the actuator health indicators were defined as $\varrho_1 = 0.8$, $\varrho_2 = 0.65$, $\varrho_3 = 0.45$, and $\varrho_4 = 0.75$.
Figure 2a illustrates the state trajectories of the multi-agent system under the compound uncertainties and actuator faults. It depicts that the state trajectories of all followers are divided into two groups, where one group tracks the trajectory of the leader, while the other group is symmetric around axis y = 0 . Additionally, the bipartite consensus error displayed in Figure 2b converges gradually to zero, which indicates that the multi-agent system eventually accomplishes the bipartite consensus tracking control. To demonstrate the performance of the neural network approximator, Figure 3a,b and Figure 4a are presented, which reveal the estimation error of the uncertain function f ˜ i ( x , t ) , g ˜ i ( x , w , t ) , and γ ˜ i ( x , t ) obtained by applying the neural network approximator. The results indicate that the final estimation error achieves convergence within a certain range, while the small fluctuations discernible in the figure arise due to the presence of the approximation error. Furthermore, Figure 4b displays the trajectories of the control input. In this designed experiment, by selecting a suitable neural network approximator and utilizing adaptive control technology, the proposed method achieves the estimation of the uncertainties in multi-agent systems. Additionally, the issue of partial loss caused by actuator effectiveness faults is addressed by utilizing adaptive control technology. As a result, robust bipartite consensus tracking control of the multi-agent system is achieved in the presence of compound uncertainties and actuator faults.
In order to compare the bipartite consensus tracking control performance with different feedback gains, four different sets of parameters were considered, as shown in Table 1.
The simulation results are shown in Figure 5. It can be seen that the convergence speed of the bipartite tracking error can be improved by increasing the feedback gain appropriately. However, an excessive feedback gain also increases the transient amount of the control input and burdens the actuator. Therefore, a trade-off between control performance and controller capability is needed to select the appropriate controller parameters.
Moreover, to examine the effectiveness of the followers in tracking different types of fractional-order leaders, further simulations were conducted with the leader dynamic equations D t q x 0 = e | x 0 | 0.5 + sin ( 2 x 0 ) + cos ( t 0.7 ) + 0.01 t + 1 (leader type 2) and D t q x 0 = 0.5 x 0 3 e x 0 + cos ( t 0.7 ) + 0.01 t + 1 (leader type 3), respectively. With the fractional order $q = 0.6$, the simulation results are shown in Figure 6 and Figure 7. In Figure 6, when the leader trajectory has a large slope, i.e., when the leader velocity varies widely, local oscillations appear in the tracking curve. This is because conventional tracking control often assumes that the curvature of the reference trajectory lies within a certain range; if the rate of change is large, the higher-order terms of the reference trajectory become large, while the control input is bounded and cannot be arbitrarily large. Oscillations may therefore appear in the tracking response in such situations.

6. Conclusions

This paper has addressed the problem of distributed bipartite consensus tracking for a class of uncertain second-order multi-agent systems, where the tracking target’s fractional dynamics are unknown. An adaptive control strategy is proposed by using linearly parameterized neural networks to estimate the uncertain components of the models. Mathematical simulations and theoretical proofs are presented to demonstrate the effectiveness of the proposed method. However, the proposed control schemes in this paper are only verified from the perspective of simulation, and further experiments can be conducted for the actual system. In addition, the multi-agent system studied in this paper has only second-order dynamics, but UAVs and unmanned vehicles often exhibit higher-order dynamics, and the system itself is strongly nonlinear. Therefore, in future research, we will focus on extending the control scheme proposed in this paper to solve the problem of robust cooperative control for nonlinear high-order multi-agent systems.

Author Contributions

T.L.: conceptualization, methodology, writing—original draft. M.S. and B.L.: writing—original draft, numerical simulation. K.Q.: validation, writing—review and editing. B.J., Q.H., and H.L.: validation, writing—review and editing, supervision. K.Q.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Natural Science Foundation of Sichuan Province (2022NSFSC0037), the Sichuan Science and Technology Programs (2022JDR0107, 2021YFG0130), the Fundamental Research Funds for the Central Universities (ZYGX2020J020), the Wuhu Science and Technology Plan Project (2022yf23) and the Sichuan Science and Technology Innovation Seedling Project (MZGC20230069) and the Pre-research Project of AVIC Chengdu Aircraft Design and Research Institute.

Conflicts of Interest

No conflict of interest exists in the submission of this manuscript, and all authors approve the manuscript for publication. I would like to declare on behalf of my co-authors that the work described was original research that has not been published previously, and is not under consideration for publication elsewhere, in whole or in part. All the authors listed have approved the manuscript that is enclosed.

References

1. Chen, F.; Ren, W. Multi-agent control: A graph-theoretic perspective. J. Syst. Sci. Complex. 2021, 34, 1973–2002.
2. Zeng, D.; Zhou, J.; Wu, D. Multi-agent flocking formation driven by distributed control with topological specifications. Int. J. Control. 2022, 95, 3226–3240.
3. Liu, T.; Jiang, Z.P. Distributed control of multi-agent systems with pulse-width-modulated controllers. Automatica 2020, 119, 109020.
4. Shi, L.; Zheng, W.X.; Liu, Q.; Liu, Y.; Shao, J. Privacy-Preserving Distributed Iterative Localization for Wireless Sensor Networks. IEEE Trans. Ind. Electron. 2022, 70, 11628–11638.
5. Wu, C.; Zeng, R.; Pan, J.; Wang, C.C.; Liu, Y.J. Plant phenotyping by deep-learning-based planner for multi-robots. IEEE Robot. Autom. Lett. 2019, 4, 3113–3120.
6. Yan, Z.; Han, L.; Li, X.; Dong, X.; Li, Q.; Ren, Z. Event-Triggered Formation Control for Time-delayed Discrete-Time Multi-Agent System Applied to Multi-UAV Formation Flying. J. Frankl. Inst. 2023, 360, 3677–3699.
7. Stamouli, C.J.; Bechlioulis, C.P.; Kyriakopoulos, K.J. Robust dynamic average consensus with prescribed transient and steady state performance. Automatica 2022, 144, 110503.
8. Li, W.; Shi, L.; Shi, M.; Lin, B.; Qin, K. Seeking Velocity-Free Consensus for Multi-Agent Systems With Nonuniform Communication and Measurement Delays. IEEE Trans. Signal Inf. Process. Over Netw. 2023, 9, 295–303.
9. Zhang, J.; Lu, J.; Lou, J. Privacy-preserving average consensus via finite time-varying transformation. IEEE Trans. Netw. Sci. Eng. 2022, 9, 1756–1764.
10. Shao, J.; Shi, L.; Cheng, Y.; Li, T. Asynchronous Tracking Control of Leader–Follower Multiagent Systems With Input Uncertainties Over Switching Signed Digraphs. IEEE Trans. Cybern. 2021, 52, 6379–6390.
11. Cao, Y.; Li, B.; Wen, S.; Huang, T. Consensus tracking of stochastic multi-agent system with actuator faults and switching topologies. Inf. Sci. 2022, 607, 921–930.
12. Long, J.; Wang, W.; Wen, C.; Huang, J.; Lü, J. Output feedback based adaptive consensus tracking for uncertain heterogeneous multi-agent systems with event-triggered communication. Automatica 2022, 136, 110049.
13. Yue, J.; Qin, K.; Shi, M.; Jiang, B.; Li, W.; Shi, L. Event-Trigger-Based Finite-Time Privacy-Preserving Formation Control for Multi-UAV System. Drones 2023, 7, 235.
14. Shi, L.; Cheng, Y.; Shao, J.; Sheng, H.; Liu, Q. Cucker-Smale flocking over cooperation-competition networks. Automatica 2022, 135, 109988.
15. Li, W.; Shi, M.; Shi, L.; Lin, B.; Qin, K. Containment Tracking for Networked Agents Subject to Nonuniform Communication Delays. IEEE Trans. Netw. Sci. Eng. 2023.
16. Ren, J.; Song, Q.; Gao, Y.; Lu, G. Leader-following bipartite consensus of second-order time-delay nonlinear multi-agent systems with event-triggered pinning control under signed digraph. Neurocomputing 2020, 385, 186–196.
17. Parisien, C.; Anderson, C.H.; Eliasmith, C. Solving the problem of negative synaptic weights in cortical models. Neural Comput. 2008, 20, 1473–1494.
18. Guha, R.; Kumar, R.; Raghavan, P.; Tomkins, A. Propagation of trust and distrust. In Proceedings of the 13th International Conference on World Wide Web, New York, NY, USA, 17–20 May 2004; pp. 403–412.
19. Ghosn, F.; Palmer, G.; Bremer, S.A. The MID3 data set, 1993–2001: Procedures, coding rules, and description. Confl. Manag. Peace Sci. 2004, 21, 133–154.
20. Hu, H.X.; Zhou, Q.; Wen, G.; Yu, W.; Kong, W. Robust distributed stabilization of heterogeneous agents over cooperation–competition networks. IEEE Trans. Circuits Syst. II Express Briefs 2019, 67, 1419–1423.
21. Shi, L.; Cheng, Y.; Zhang, X.; Shao, J. Consensus tracking control of discrete-time second-order agents over switching signed digraphs with arbitrary antagonistic relations. Int. J. Robust Nonlinear Control. 2020, 30, 4826–4838.
22. Chen, X.; Yu, H.; Hao, F. Prescribed-time event-triggered bipartite consensus of multiagent systems. IEEE Trans. Cybern. 2020, 52, 2589–2598.
23. Shi, L.; Cheng, Y.; Shao, J.; Zhang, X. Collective behavior of multileader multiagent systems with random interactions over signed digraphs. IEEE Trans. Control. Netw. Syst. 2021, 8, 1394–1405.
24. Hu, J.; Zhu, H. Adaptive bipartite consensus on coopetition networks. Phys. D Nonlinear Phenom. 2015, 307, 14–21.
25. Wu, Y.; Hu, J.; Zhang, Y.; Zeng, Y. Interventional consensus for high-order multi-agent systems with unknown disturbances on coopetition networks. Neurocomputing 2016, 194, 126–134.
26. Wen, G.; Wang, H.; Yu, X.; Yu, W. Bipartite tracking consensus of linear multi-agent systems with a dynamic leader. IEEE Trans. Circuits Syst. II Express Briefs 2017, 65, 1204–1208.
27. Shao, J.; Zheng, W.X.; Shi, L.; Cheng, Y. Bipartite tracking consensus of generic linear agents with discrete-time dynamics over cooperation–competition networks. IEEE Trans. Cybern. 2020, 51, 5225–5235.
28. Shi, L.; Zheng, W.X.; Shao, J.; Cheng, Y. Sub-super-stochastic matrix with applications to bipartite tracking control over signed networks. SIAM J. Control. Optim. 2021, 59, 4563–4589.
29. Li, W.; Qin, K.; Li, G.; Shi, M.; Zhang, X. Robust bipartite tracking consensus of multi-agent systems via neural network combined with extended high-gain observer. ISA Trans. 2022, 136, 31–45.
30. Zhao, M.; Peng, C.; Tian, E. Finite-time and fixed-time bipartite consensus tracking of multi-agent systems with weighted antagonistic interactions. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 68, 426–433.
31. Ning, B.; Han, Q.L.; Zuo, Z. Bipartite consensus tracking for second-order multiagent systems: A time-varying function-based preset-time approach. IEEE Trans. Autom. Control 2020, 66, 2739–2745.
32. Ai, X. Adaptive robust bipartite consensus of high-order uncertain multi-agent systems over cooperation-competition networks. J. Frankl. Inst. 2020, 357, 1813–1831.
33. Shi, M.; Qin, K.; Liang, J.; Liu, J. Distributed control of uncertain multiagent systems for tracking a leader with unknown fractional-order dynamics. Int. J. Robust Nonlinear Control 2019, 29, 2254–2271.
34. Qin, J.; Zhang, G.; Zheng, W.X.; Kang, Y. Adaptive sliding mode consensus tracking for second-order nonlinear multiagent systems with actuator faults. IEEE Trans. Cybern. 2018, 49, 1605–1615.
35. Godsil, C.D.; Royle, G. Algebraic Graph Theory; Springer: Berlin/Heidelberg, Germany, 2001; Volume 8.
36. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999.
37. Lewis, F.L.; Jagannathan, S.; Yesildirek, A. Neural Network Control of Robot Manipulators and Nonlinear Systems; Taylor & Francis, Inc.: London, UK, 1998.
38. Xu, C.; Xu, H.; Su, H.; Liu, C. Disturbance-observer based consensus of linear multi-agent systems with exogenous disturbance under intermittent communication. Neurocomputing 2020, 404, 26–33.
39. Li, W.; Qin, K.; Chen, B.; Lin, B.; Shi, M. Passivity-based distributed tracking control of uncertain agents via a neural network combined with UDE. Neurocomputing 2021, 449, 342–356.
Figure 1. The topology among the agents.
Figure 2. The fractional order $q = 1.2$. (a) State trajectories $x_i(t)$ of the multi-agent system. (b) Bipartite consensus error $x_i(t) - \sigma_i x_0(t)$ trajectories of the multi-agent system.
Figure 3. The fractional order $q = 1.2$. (a) Neural network estimation error for the unknown function $f_i(x_i, t)$. (b) Neural network estimation error for the unknown function $g_i(x_i, w_i, t)$.
Figure 4. The fractional order $q = 1.2$. (a) Neural network estimation error for the unknown function $\gamma_i(x_i, t)$. (b) Trajectories of the control input $u_i(t)$.
Figure 5. Bipartite consensus error $x_i(t) - \sigma_i x_0(t)$ trajectories of the multi-agent system: (a) case 1; (b) case 2; (c) case 3; (d) case 4.
Figure 6. The fractional order $q = 0.6$ and leader type 2. (a) State trajectories $x_i(t)$ of the multi-agent system. (b) Bipartite consensus error $x_i(t) - \sigma_i x_0(t)$ trajectories of the multi-agent system.
Figure 7. The fractional order $q = 0.6$ and leader type 3. (a) State trajectories $x_i(t)$ of the multi-agent system. (b) Bipartite consensus error $x_i(t) - \sigma_i x_0(t)$ trajectories of the multi-agent system.
Table 1. Four different sets of feedback gains $k_e$ and $k_z$.
Case 1: $k_e = 5$, $k_z = 5$; Case 2: $k_e = 15$, $k_z = 5$; Case 3: $k_e = 15$, $k_z = 15$; Case 4: $k_e = 5$, $k_z = 15$.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
