Article

The Distributed Adaptive Bipartite Consensus Tracking Control of Networked Euler–Lagrange Systems with an Application to Quadrotor Drone Groups

1 School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
3 National Laboratory on Adaptive Optics, Chengdu 610209, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(9), 450; https://doi.org/10.3390/drones8090450
Submission received: 12 July 2024 / Revised: 24 August 2024 / Accepted: 28 August 2024 / Published: 1 September 2024
(This article belongs to the Special Issue UAV Trajectory Generation, Optimization and Cooperative Control)

Abstract: Actuator faults and external disturbances, which are inevitable due to material fatigue, operational wear and tear, and unforeseen environmental impacts, pose significant threats to the control reliability and performance of networked systems. Therefore, this paper primarily focuses on the distributed adaptive bipartite consensus tracking control problem of networked Euler–Lagrange systems (ELSs) subject to actuator faults and external disturbances. A robust distributed control scheme is developed by combining an adaptive distributed observer with a neural-network-based tracking controller. On the one hand, a new positive definite diagonal matrix associated with the asymmetric Laplacian matrix is constructed in the distributed observer, which can be used to estimate the leader's information. On the other hand, neural networks are adopted to approximate the lumped uncertainties composed of unknown matrices and external disturbances in the follower model. Adaptive update laws are designed for the unknown parameters in the neural networks and the actuator fault factors to ensure the boundedness of estimation errors. Finally, the proposed control scheme's effectiveness is validated through numerical simulations using two typical ELS models: two-link robot manipulators and quadrotor drones. The simulation results demonstrate the robustness and reliability of the proposed control approach in the presence of actuator faults and external disturbances.

1. Introduction

The cooperative control of multi-agent systems (MASs) has a broad range of potential applications in a variety of areas, including intelligent transportation [1], drone swarms [2], and aerospace [3]. Distributed consensus control is a fundamental issue in the study of cooperative control. Its objective is to enable all agents to reach a consensus on their states or outputs without the need for central coordination [4]. Early research focused on distributed leaderless consensus, where agents only rely on their information and that of their neighbors to make decisions, ultimately achieving consensus on the states of all agents [5]. To enable MASs to follow a desired trajectory, the consensus tracking problem emerged [6]. To address the consensus tracking problem, researchers introduced the concept of the leader, allowing followers to move according to the leader’s trajectory.
In practical applications, systems inevitably encounter external environmental disturbances and modeling uncertainties, so ensuring reliability and robustness is particularly important when studying the consensus tracking control problem of MASs. For example, an adaptive low-gain feedback method was proposed in [7] to deal with the semi-global robust tracking consensus problem for uncertain MASs with input saturation. A new distributed adaptive control protocol based on the back-stepping method was proposed in [8] to achieve asymptotic consensus tracking control of nonlinear high-order MASs affected by mismatched unknown parameters and uncertain external disturbances. The authors of [9] first proposed a new adaptive bounded consensus tracking control scheme for a class of uncertain nonlinear MASs. Furthermore, MASs always suffer from actuator faults, which significantly deteriorate controller performance and may even lead to system divergence without effective countermeasures. Therefore, it is necessary to study fault-tolerant control techniques. In [10], the authors present a solution to the robust consensus tracking control problem for MASs with actuator faults and external disturbances; their approach integrates neural networks with techniques for estimating uncertainties and disturbances. Moreover, the authors of [11] developed a neural-network-based adaptive consensus scheme to address the consensus problem for nonlinear MASs with actuator failure faults and bias faults. In [12], the authors investigated the distributed tracking control problem for general linear MASs with two types of actuator faults: additive and multiplicative.
Most studies in the field of consensus tracking control have focused exclusively on idealized leaders, assuming that the leader's dynamic information is accurate and known. However, in reality, due to the complexity of the environment, the leader's dynamics are often subject to various uncertainties, such as parameter uncertainties, random disturbances, and external influences. Therefore, searching for methods to suppress the effects of uncertainties in leader systems has become a vital issue in this research area. In [13], the authors studied how to achieve robust tracking consensus of MASs with uncertain leaders within a finite time and designed a continuous nonlinear distributed tracking protocol based on relative position information. The authors of [14] studied the tracking consensus problem for higher-order heterogeneous nonlinear MASs under directed signed graphs, where the leader's nonlinear dynamics were unknown. In [15], the authors solved the general nonlinear consensus tracking problem of networked systems with an uncertain leader and imperfect tracking paths. Currently, the majority of research within the field of distributed control has concentrated on linear systems exhibiting either first-order or second-order dynamics. However, the real world contains many nonlinear physical models, which can often be described by Euler–Lagrange equations, such as robotic manipulators [16], fully actuated marine vehicles [17], and spacecraft [18]. An adaptive distributed control strategy was proposed in [19] that enables multiple uncertain ELSs to achieve consensus in the presence of an unknown dynamic leader. The authors of [20] studied the finite-time control problem for MASs with uncertain Euler–Lagrange dynamics and a dynamic leader under a directed communication network.
A distributed continuous estimator and an adaptive control law were proposed in [21] to estimate the uncertainties and solve the distributed consensus tracking problem for networked ELSs.
The research in the above literature primarily focuses on multi-agent ELSs under undirected graph topologies. However, the one-way communication described by directed graph topologies can save communication resources compared to the bidirectional communication described by undirected graph topologies. As shown in [22], the two-way communication mode between agents under undirected graphs imposes high demands on the bandwidth resources of networked systems. Therefore, research on multi-agent networked systems under directed graphs is of significant value. For instance, Bin et al. [23] proposed a distributed continuous algorithm to achieve consensus tracking of ELSs on directed graphs while avoiding the chattering effect. An adaptive consensus control design method was proposed in [24] based on the inverse optimal H∞ control criterion for networked ELSs composed of fully actuated mobile robots under a directed graph. In addition, early research mainly focused on cooperative interaction relationships between agents. However, competitive interaction relationships are also meaningful in research on consensus tracking of networked systems; for example, individuals in a biological community may acquire limited resources through cooperation or competition to maintain their survival. Therefore, it is necessary and meaningful to consider the coexistence of cooperative and competitive relationships between agents. An adaptive protocol based on disturbance compensation techniques under an event-triggered control framework was proposed in [25] to solve the bipartite consensus tracking problem for networked ELSs. An adaptive distributed observer method was proposed in [26] for networked ELSs, which can achieve leader–follower bipartite consensus under system uncertainties and deception attacks.
In [27], the author studied the bipartite consensus control issue for networked ELSs under directed topologies with positive and negative interaction weights and proposed an adaptive bipartite consensus controller for systems with uncertain parameters.
Motivated by the considerations outlined above, this paper explores the adaptive bipartite consensus tracking control for uncertain Euler–Lagrange systems (ELSs) with actuator faults within directed graph structures. The leader agent’s state matrix is characterized by unknown parameters, and its output matrix is unknown to the follower agents. An innovative control strategy is introduced in this paper, integrating a distributed observer with a neural-network-enhanced robust tracking controller. The principal contributions of this work are delineated as follows:
(1)
This paper examines networked ELSs subject to lumped uncertainties, which encompass unknown system matrices, external disturbances, and actuator faults. A neural-network-based adaptive estimator is proposed to offer feedforward compensation for these uncertainties. Furthermore, adaptive updating laws are formulated to guarantee that the estimation errors associated with the lumped uncertainties and actuator faults are bounded, thereby tackling the robust tracking control challenge. In contrast to existing control strategies tailored for ideal systems [11,12,13,15,24], the most significant novelty of this work is the enhanced robustness of unknown systems, enabling the control scheme to be applicable across a range of tasks, including the cooperative control of ELSs in dynamic and complex environments.
(2)
This paper addresses the issue of uncertainties in leader agents, where their higher-order dynamic information is globally unknown. To tackle this challenge, an adaptive distributed observer is utilized to estimate the states, state matrix unknown parameters, and output matrix of the leader agent. The convergence of the observer’s estimates is rigorously proven through Lyapunov function analysis. This approach diverges from related works that presuppose the availability of the leader’s complete dynamic information to the follower agents, such as the state matrix or output matrix [7,11,23,28]. The proposed control scheme’s adaptability enhances its applicability to a wider array of consensus-tracking control tasks, including trajectory tracking in intricate marine environments.
(3)
This paper introduces a novel positive definite diagonal matrix to facilitate the construction of a distributed observer. This innovative design effectively addresses the challenge of asymmetric Laplacian matrices inherent in directed graphs, thereby offering a viable solution for bipartite consensus tracking control of ELSs under general directed graph conditions. In contrast to existing control schemes confined to undirected graph scenarios [4,9,14,15,21], the controller proposed herein demonstrates a capacity to conserve communication resources.
The content of this paper is organized as follows: Section 2 provides an overview of the foundational work and the essential mathematical preliminaries. Section 3 elaborates on the design of the proposed control scheme and substantiates the system's stability through rigorous analysis. Section 4 showcases the results of simulation experiments that validate the theoretical findings. Finally, Section 5 concludes the paper.
Notations: $I_n$ is the identity matrix of dimension $n \times n$. $\otimes$ stands for the Kronecker product. $\mathrm{diag}_i^n[s_i] \triangleq \mathrm{diag}\{s_1,\dots,s_n\}$ defines a diagonal matrix, where $s_i$ is the $i$th diagonal element or block. $\mathrm{col}_i^n[\upsilon_i] \triangleq [\upsilon_1^T,\dots,\upsilon_n^T]^T$ creates a column vector. $0_{m\times n}$ represents the zero matrix of dimension $m \times n$. $\mathbf{1}_N \triangleq [1,\dots,1]^T \in \mathbb{R}^N$. For a matrix $K = [k_1,k_2,\dots,k_n] \in \mathbb{R}^{m\times n}$ with column vectors $k_i \in \mathbb{R}^m$, $\mathrm{Col}(K) \triangleq \mathrm{col}_i^n(k_i) \in \mathbb{R}^{mn}$ is a compound column vector in which the columns $k_i$ of $K$ are serially arranged in one column. $\mathrm{sgn}(\cdot) \in \{+1,-1,0\}$ denotes the sign function.
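As a quick illustration of the stacking operator $\mathrm{Col}(\cdot)$ and the Kronecker product used throughout the paper (the matrices below are arbitrary examples):

```python
import numpy as np

# Illustrative example of Col(.) and the Kronecker product; values arbitrary.
K = np.array([[1, 2, 3],
              [4, 5, 6]])            # K in R^{2x3} with columns k_1, k_2, k_3
col_K = K.T.reshape(-1, 1)           # Col(K): columns of K stacked into one vector
print(col_K.ravel())                 # -> [1 4 2 5 3 6]

I2 = np.eye(2)
print(np.kron(I2, K).shape)          # (I_2 (x) K) has dimension 4 x 6
```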

2. Preliminaries and Problem Formulation

2.1. Graph Theory

This paper considers a network of agents consisting of one leader and N followers. The leader is denoted as agent 0, and the followers are denoted as agents 1 to N. The interactions among the N followers are represented by a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathcal{A})$, where $\mathcal{V} = \{v_1,\dots,v_N\}$ is the set of nodes, $\mathcal{E} \subseteq \mathcal{V}\times\mathcal{V}$ is the set of edges, and $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N\times N}$ is the adjacency matrix. If agent i can access the information of agent j, then $a_{ij} \neq 0$; otherwise, $a_{ij} = 0$. $a_{ij} > 0$ and $a_{ij} < 0$, respectively, represent cooperative and competitive relationships between agents i and j. If $a_{ij} \neq a_{ji}$ for some pair (i, j), the graph is directed; otherwise, it is undirected. The Laplacian matrix is defined as $L = \mathcal{D} - \mathcal{A}$, where $\mathcal{D} = \mathrm{diag}_i^N[\zeta_i]$ with $\zeta_i = \sum_{j=1}^N |a_{ij}|$. The in-degree matrix $\mathcal{D}$ is diagonal, and for a directed graph, its diagonal element $\zeta_i$ is the sum of the absolute values of the entries in the corresponding row of the adjacency matrix. An additional leader agent is introduced, with the node set $\mathcal{V}_0 = \{v_0\}$, and the overall node set is $\bar{\mathcal{V}} = \mathcal{V}\cup\mathcal{V}_0$.
For a structurally balanced signed graph, the node set $\bar{\mathcal{V}}$ can be divided into two mutually exclusive subsets $\bar{\mathcal{V}}_1$ and $\bar{\mathcal{V}}_2$. The signature matrix is defined as $\Delta = \mathrm{diag}_i^N[d_i]$, where $d_i = 1$ if $v_i$ belongs to the same subset as the leader node $v_0$, indicating a cooperative relationship between follower i and the leader; otherwise, $d_i = -1$, indicating an antagonistic relationship. $B = \mathrm{diag}_i^N[b_i]$, where $b_i > 0$ if the ith agent is aware of the leader's information; otherwise, $b_i = 0$. The extended Laplacian matrix is $\bar{L} = L + B$, which describes the communication topology between the leader and the followers.
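The constructions above can be sketched numerically. The example below uses an illustrative four-follower signed topology with symmetric weights for simplicity (the weights, pinning gain, and partition are made up for the example); for a structurally balanced graph, the gauge transform $\Delta L\Delta$ recovers an ordinary nonnegative-weight Laplacian, whose rows sum to zero:

```python
import numpy as np

# Signed adjacency: followers {1,2} cooperate with each other and are
# antagonistic toward {3,4}; all values illustrative.
A = np.array([
    [ 0.0,  1.0, -1.0,  0.0],
    [ 1.0,  0.0,  0.0, -0.5],
    [-1.0,  0.0,  0.0,  1.0],
    [ 0.0, -0.5,  1.0,  0.0],
])
D = np.diag(np.abs(A).sum(axis=1))   # in-degree matrix built from |a_ij|
L = D - A                            # signed Laplacian
b = np.array([1.0, 0.0, 0.0, 0.0])   # only follower 1 is pinned to the leader
L_bar = L + np.diag(b)               # extended Laplacian L + B

# Structural balance check via the signature matrix Delta.
Delta = np.diag([1.0, 1.0, -1.0, -1.0])
print(Delta @ L @ Delta @ np.ones(4))   # -> zeros: Delta*L*Delta is a standard Laplacian
```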

2.2. RBF Neural Network

Radial basis function (RBF) neural networks adjust centers and widths to approximate unknown functions. Figure 1 depicts a typical structure of neural networks, clearly illustrating the weighted process from input to output.
Neural networks often utilize Gaussian functions as their basis functions, which take the form $\phi_{ij}(z) = \exp(-\|z - c_{ij}\|^2/\delta_{ij}^2)$, where z represents the input vector, while $\delta_{ij}$ and $c_{ij}$ are the width and center parameters of the Gaussian function, respectively. The neural network's output is represented by $y = \phi^T(z)\omega$. Crucially, the approximation capability of such networks does not depend on the structural properties of the system being modeled: by employing a sufficient number of Gaussian functions, this architecture can approximate any continuous function to an arbitrary degree of accuracy over a compact set. When designing controllers for systems with model uncertainties or disturbances, leveraging neural networks for estimation can therefore enhance the overall robustness of the control system.
Table 1 lists some important parameters used in this article and their meanings.

2.3. Problem Description

Consider a networked agent system composed of one leader and N followers. The leader is modeled by the following linear system:
$$\dot q = Q(w)q,\qquad y = Eq,$$
where $q\in\mathbb{R}^m$ and $y\in\mathbb{R}^n$ denote the leader's state and output, respectively. $Q(w)$ represents the state matrix, where $w\in\mathbb{R}^l$ is an unknown parameter vector. Moreover, $E\in\mathbb{R}^{n\times m}$ denotes the output matrix, which is unknown to any follower.
The following Euler–Lagrange equation is utilized to describe the dynamics of the ith follower:
$$M_i(q_i)\ddot q_i + C_i(q_i,\dot q_i)\dot q_i + G_i(q_i) = p_iu_i + f_i(q_i,\dot q_i,\ddot q_i),\qquad i = 1,2,\dots,N,$$
where $q_i,\dot q_i,\ddot q_i\in\mathbb{R}^n$ are the generalized coordinate, velocity, and acceleration, respectively. $M_i(q_i)\in\mathbb{R}^{n\times n}$ is the inertia matrix, $C_i(q_i,\dot q_i)\in\mathbb{R}^{n\times n}$ represents the Coriolis and centrifugal terms, $G_i(q_i)\in\mathbb{R}^n$ is the gravitational force vector, $u_i\in\mathbb{R}^n$ is the control input, and $f_i(q_i,\dot q_i,\ddot q_i)\in\mathbb{R}^n$ represents the unknown external disturbance. $p_i$ is an unknown constant, and the term $p_iu_i$ represents the actuator fault. In this paper, it is assumed that $M_i(q_i)$, $C_i(q_i,\dot q_i)$, and $G_i(q_i)$ are unknown to any follower.
The dynamics (2) exhibit the following two properties [30].
Property 1. 
M i ( q i ) is symmetric and positive definite.
Property 2. 
$\dot M_i(q_i) - 2C_i(q_i,\dot q_i)$ is skew-symmetric, i.e., for all $x\in\mathbb{R}^n$, it yields $x^T[\dot M_i(q_i) - 2C_i(q_i,\dot q_i)]x = 0$.
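These two properties can be checked numerically for the standard planar two-link manipulator model (one of the simulation examples mentioned in the abstract); the lumped inertia parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# Standard two-link planar arm: M(q), C(q, dq), and Mdot(q, dq).
a1, a2, a3 = 3.0, 0.5, 0.8   # lumped inertia parameters (illustrative)

def M(q):
    c2 = np.cos(q[1])
    return np.array([[a1 + 2*a2*c2, a3 + a2*c2],
                     [a3 + a2*c2,   a3        ]])

def C(q, dq):
    s2 = np.sin(q[1])
    return np.array([[-a2*s2*dq[1], -a2*s2*(dq[0] + dq[1])],
                     [ a2*s2*dq[0],  0.0                  ]])

def Mdot(q, dq):
    s2 = np.sin(q[1])
    return np.array([[-2*a2*s2*dq[1], -a2*s2*dq[1]],
                     [  -a2*s2*dq[1],  0.0        ]])

q, dq = np.array([0.3, 1.1]), np.array([0.7, -0.4])
S = Mdot(q, dq) - 2*C(q, dq)
print(np.linalg.eigvalsh(M(q)).min() > 0)   # Property 1: M is positive definite
print(np.allclose(S, -S.T))                 # Property 2: Mdot - 2C is skew-symmetric
```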
To realize the bipartite consensus tracking, it is necessary to give the following assumptions.
Assumption 1. 
The communication topology of the networked MASs can be modeled by a connected, structurally balanced, directed graph. In addition, at least one follower is informed of the information given by the leader.
Assumption 2. 
The eigenvalues of $Q(w)$ are all simple with zero real parts.
Under Assumption 2, it is assumed that
$$Q(w) = \mathrm{diag}_i^l[w_iv],\qquad v = \begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}.$$
It can be obtained that $Q(w)$ is a skew-symmetric matrix.
Remark 1. 
Assumption 2 is rather standard in the literature on consensus tracking of networked systems [31,32]. Under Assumption 2, the leader agent (1) can generate a class of signals commonly seen in industry, known as "multitone sinusoidal signals", with varying frequencies, amplitudes, and initial phases, which align closely with real-world scenarios [33,34]. The application scenario of the control scheme in this paper is signal tracking of Euler–Lagrange systems in complex sea conditions. Hence, the control objective of this paper is to make the follower's state $q_i$ and the leader's output $y$ tend to be consistent.
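To illustrate Remark 1, the sketch below builds a two-frequency $Q(w)$ of the block form above and propagates the leader state in closed form; because each block is skew-symmetric, its matrix exponential is a rotation, so $\|q(t)\|$ stays constant and $y = Eq$ is a multitone sinusoid. The frequencies, output matrix, and initial state are illustrative:

```python
import numpy as np

# Leader model (1) under Assumption 2: block-diagonal skew-symmetric Q(w).
w = np.array([1.0, 2.5])                              # frequencies (illustrative)
v = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.kron(np.diag(w), v)                            # Q(w) = diag_i[w_i v]
E = np.array([[1.0, 0.0, 0.5, 0.0]])                  # illustrative 1 x 4 output matrix
q0 = np.array([1.0, 0.0, 0.0, 1.0])

def q_of(tk):
    """Closed-form q(t) = expm(Q t) q0, evaluated block by block."""
    out = np.zeros(4)
    for i, wi in enumerate(w):
        c, s = np.cos(wi * tk), np.sin(wi * tk)
        R = np.array([[c, s], [-s, c]])               # expm(w_i * v * t) is a rotation
        out[2*i:2*i+2] = R @ q0[2*i:2*i+2]
    return out

t = np.linspace(0.0, 10.0, 1001)
states = np.array([q_of(tk) for tk in t])
y = states @ E.T                                      # multitone sinusoidal output
norms = np.linalg.norm(states, axis=1)
print(norms.min(), norms.max())                       # both sqrt(2): ||q|| is preserved
```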
Lemma 1 
([35]). For any $\alpha\in\mathbb{R}^l$ and $\delta,\varpi\in\mathbb{R}^{2l}$, we define
$$\varphi(\varpi) = \begin{bmatrix}\varpi_2 & -\varpi_1 & & & \\ & & \ddots & & \\ & & & \varpi_{2l} & -\varpi_{2l-1}\end{bmatrix}\in\mathbb{R}^{l\times 2l},$$
and then the following equations hold:
$$\varphi(\varpi)\delta = -\varphi(\delta)\varpi,\qquad Q(\alpha)\varpi = \varphi^T(\varpi)\alpha,$$
where $Q(\alpha) = \mathrm{diag}_i^l[\alpha_iv]$, $v = \begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}$, $\alpha = \mathrm{col}_i^l[\alpha_i]$, $\varpi = \mathrm{col}_i^{2l}[\varpi_i]$, and $\delta = \mathrm{col}_i^{2l}[\delta_i]$.
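The extraction of this lemma is partly garbled, so the sign conventions above are a reconstruction (block rows $[\varpi_{2i},\,-\varpi_{2i-1}]$ and $v = [[0,1],[-1,0]]$). Under that reconstruction, both identities can be verified numerically:

```python
import numpy as np

# Numerical check of the two Lemma 1 identities under the reconstructed
# sign conventions; dimensions and random inputs are arbitrary.
rng = np.random.default_rng(1)
l = 3
alpha = rng.standard_normal(l)
varpi = rng.standard_normal(2 * l)
delta = rng.standard_normal(2 * l)

def phi(x):
    """l x 2l block-diagonal matrix with i-th 1x2 block [x_{2i}, -x_{2i-1}]."""
    out = np.zeros((l, 2 * l))
    for i in range(l):
        out[i, 2 * i] = x[2 * i + 1]
        out[i, 2 * i + 1] = -x[2 * i]
    return out

v = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.kron(np.diag(alpha), v)                        # Q(alpha)

print(np.allclose(phi(varpi) @ delta, -phi(delta) @ varpi))  # phi(w)d = -phi(d)w
print(np.allclose(Q @ varpi, phi(varpi).T @ alpha))          # Q(a)w = phi^T(w)a
```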
Lemma 2 
([22]). Under Assumption 1, there exists a positive diagonal matrix $W\in\mathbb{R}^{N\times N}$ such that $W\bar L$ can be diagonalized and possesses eigenvalues that are both real and positive. Moreover, there exists a symmetric positive definite matrix $O\in\mathbb{R}^{N\times N}$ such that
$$J = OW\bar L,\qquad T = JW\bar L + \bar L^TWJ$$
are symmetric positive definite.
Remark 2. 
The analysis of networked systems on undirected graphs hinges upon the symmetry exhibited by specific matrices, such as the Laplacian matrix, which reflects the symmetric topology characterizing undirected graphs. However, the asymmetry of the Laplacian matrix presents a primary challenge when analyzing networked systems on directed graphs. Lemma 2 presents a method for constructing two symmetric matrices, J and O, derived from the asymmetric matrix L ¯ . By employing these two symmetric matrices in the formulation of Lyapunov equations, it becomes feasible to extend the control scheme from undirected graphs to encompass directed graphs.
Definition 1. 
Consider the leader–follower systems (1) and (2). If the equation
$$\lim_{t\to\infty}\big(q_i(t) - d_iy(t)\big) = 0,\qquad i = 1,\dots,N,$$
holds, then the leader–follower MASs (1) and (2) will achieve the bipartite consensus tracking.

3. Neural-Network-Based Control Scheme Design and Analysis

The control scheme comprises a distributed observer and a neural-network-based robust controller. The design and analysis details of the aforementioned two parts are as follows.

3.1. Distributed Observer Design and Stability Analysis

In this part, we will design a distributed observer to estimate the leader’s dynamic information.
Firstly, we define the observation error vectors as
$$e_{qi} = \sum_{j=1}^{N}a_{ij}\big(\hat q_j - \mathrm{sgn}(a_{ij})\hat q_i\big) + b_i(q - \hat q_i),\qquad e_{yi} = \sum_{j=1}^{N}a_{ij}\big(\hat y_j - \mathrm{sgn}(a_{ij})\hat y_i\big) + b_i(y - \hat y_i),$$
where $\hat q_i$ and $\hat y_i$ represent follower i's estimates of q and y.
Then, the distributed observer is constructed in the following way:
$$\dot{\hat q}_i = Q(\hat w_i)\hat q_i + \rho_1\eta_ie_{qi},\qquad \dot{\hat w}_i = -\rho_2\eta_i\varphi(e_{qi})\hat q_i,\qquad \dot{\hat E}_i = \eta_ie_{yi}\hat q_i^T,$$
where $\hat w_i$ and $\hat E_i$ represent the estimates of w and E. The parameters $\rho_1$, $\rho_2$, and $\eta_i$ will be specified later, and $\hat y_i = \hat E_i\hat q_i$.
Remark 3. 
As shown in (6), the distributed observer proposed in this paper estimates the leader’s state q, the unknown parameter w, and the output matrix E, respectively, where w and E are globally unknown. Compared with the works of [28,36,37,38,39,40,41,42,43,44], which assume that some of the leader’s dynamic information, such as the state matrix or the output matrix, is accessible to uninformed agents, observer (6) is more consistent with the characteristics of distributed networked systems and can be applied to tracking scenarios with complex leader dynamics, such as rotating machinery tasks and operations in complex sea conditions.
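The observer can be exercised in a reduced setting. The sketch below assumes the leader's parameter w is known to the followers (dropping the $\hat w_i$ and $\hat E_i$ adaptation) and uses an illustrative cooperative two-follower directed chain; it demonstrates only the state-estimation loop, not the full scheme:

```python
import numpy as np

# Simplified state observer: follower 1 is pinned to the leader, follower 2
# only hears follower 1. All gains and initial conditions are illustrative.
w = np.array([1.0])                                           # one frequency block, m = 2
Q = np.kron(np.diag(w), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # skew-symmetric Q(w)

A = np.array([[0.0, 0.0], [1.0, 0.0]])    # follower adjacency (cooperative weights)
b = np.array([1.0, 0.0])                  # pinning gains b_i
rho1, eta = 2.0, np.array([1.0, 1.0])

dt, steps = 1e-3, 20000
q = np.array([1.0, 0.0])                  # leader state
qh = np.array([[3.0, -2.0], [-1.0, 4.0]]) # follower estimates q_hat_i (rows)
err0 = np.linalg.norm(qh - q)

for _ in range(steps):
    e = np.zeros_like(qh)
    for i in range(2):
        for j in range(2):
            if A[i, j] != 0.0:            # e_qi from the observation-error definition
                e[i] += abs(A[i, j]) * (np.sign(A[i, j]) * qh[j] - qh[i])
        e[i] += b[i] * (q - qh[i])
    qh = qh + dt * (qh @ Q.T + rho1 * eta[:, None] * e)   # Euler step of observer
    q = q + dt * (Q @ q)                                  # Euler step of leader

print(np.linalg.norm(qh - q) / err0)      # estimation error decays essentially to zero
```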
We define the estimation errors as $\tilde q_i = \hat q_i - d_iq$, $\tilde w_i = \hat w_i - w$, and $\tilde E_i = \hat E_i - E$. The objective of designing the observer (6) is to make the following equations hold:
$$\lim_{t\to\infty}\tilde q_i(t) = 0,\qquad \lim_{t\to\infty}\tilde w_i(t) = 0,\qquad \lim_{t\to\infty}\tilde E_i(t) = 0.$$
Let $W = \mathrm{diag}_i^N[\eta_i]$, $e_q = \mathrm{col}_i^N[e_{qi}]$, $\bar q = \mathbf{1}_N\otimes q$, $\bar w = \mathbf{1}_N\otimes w$, $\hat w = \mathrm{col}_i^N[\hat w_i]$, and $\hat q = \mathrm{col}_i^N[\hat q_i]$. Then, we can obtain the result that
$$\dot{\hat q} = \bar Q(\hat w)\hat q + \rho_1(W\otimes I_m)e_q,\qquad \dot{\hat w} = -\rho_2\bar\varphi(e_q)(W\otimes I_m)\hat q,\qquad e_q = -(\bar L\otimes I_m)(\hat q - \bar q),$$
where
$$\bar\varphi(e_q) = \mathrm{diag}_i^N[\varphi(e_{qi})],\qquad \bar Q(\hat w) = \mathrm{diag}_i^N[Q(\hat w_i)].$$
Then, it yields
$$\dot{\hat q} = \bar Q(\hat w)\hat q - \rho_1(W\bar L\otimes I_m)(\hat q - \bar q),\qquad \dot{\hat w} = -\rho_2\bar\varphi(e_q)(W\otimes I_m)\hat q.$$
Remark 4. 
To handle the asymmetry of $\bar L$, we introduce the positive definite diagonal matrix W in (6), which is constructed in Lemma 2. We utilize the two symmetric positive definite matrices J and O to design the Lyapunov Equation (16), thus extending the control scheme from undirected topologies to general directed topologies.
We define $\tilde q = \mathrm{col}_i^N[\tilde q_i]$ and $\tilde w = \mathrm{col}_i^N[\tilde w_i]$. It can be obtained that
$$\dot{\tilde q} = (I_N\otimes Q(w) - \rho_1 W\bar L\otimes I_m)\tilde q + \bar Q(\tilde w)\hat q,\qquad \dot{\tilde w} = -\rho_2\bar\varphi(e_q)(W\otimes I_m)\hat q.$$
According to Lemma 1, we have
$$\dot{\tilde q} = (I_N\otimes Q(w) - \rho_1 W\bar L\otimes I_m)\tilde q + \bar\varphi^T(\hat q)\tilde w,\qquad \dot{\tilde w} = -\rho_2\bar\varphi(\hat q)(W\bar L\otimes I_m)\tilde q,$$
where $\bar\varphi(\hat q) = \mathrm{diag}_i^N[\varphi(\hat q_i)]$. Differentiating $\tilde E_i$, we can obtain the result that
$$\dot{\tilde E}_i = \eta_i\sum_{j=0}^{N}a_{ij}(\tilde E_j - \tilde E_i)qq^T + \chi_i(t),$$
where $\chi_i(t) = \eta_i\sum_{j=0}^{N}a_{ij}(\tilde E_j - \tilde E_i)q\tilde q_i^T + \eta_iEe_{qi}\hat q_i^T + \eta_i\sum_{j=0}^{N}a_{ij}\big(\tilde E_j\tilde q_j\hat q_i^T - \tilde E_i\tilde q_i\hat q_i^T\big)$.
Define the stacked column vectors of the output-matrix observation errors $\varsigma_i = \mathrm{Col}(\tilde E_i)$, $\varsigma_0 = \mathrm{Col}(E)$, and $\xi_i = \mathrm{Col}(\chi_i(t))$; then Equation (11) turns to
$$\dot\varsigma_i = (qq^T\otimes I_n)\eta_i\sum_{j=0}^{N}a_{ij}(\varsigma_j - \varsigma_i) + \xi_i,$$
where $\xi_i(t) = \eta_i\sum_{j=0}^{N}a_{ij}\big[(\tilde q_iq^T\otimes I_n)(\varsigma_j - \varsigma_i) + (\hat q_i\tilde q_j^T\otimes I_n)\varsigma_j - (\hat q_i\tilde q_i^T\otimes I_n)\varsigma_i\big] + \eta_i(\hat q_ie_{qi}^T\otimes I_n)\varsigma_0$. Define $\varsigma = \mathrm{col}_i^N[\varsigma_i]$ and $\xi = \mathrm{col}_i^N[\xi_i]$, which yields
$$\dot\varsigma = -(I_N\otimes(qq^T\otimes I_n))(W\bar L\otimes I_{nm})\varsigma + \xi = -(W\bar L\otimes(qq^T\otimes I_n))\varsigma + \xi.$$
Based on the above coordinate transformations, the remainder of this section investigates the stability of systems (11) and (14), thereby demonstrating the convergence of the observer (6).
Theorem 1. 
Consider systems (1) and (6). Define two compact sets $\Upsilon_0\subset\mathbb{R}^l$ and $\Upsilon_1\subset\mathbb{R}^m$ with $0\in\Upsilon_0$ and $0\in\Upsilon_1$. Under Assumptions 1 and 2, for any $\hat w_i(0)\in\Upsilon_0$, $w\in\Upsilon_0$, $\hat q_i(0)\in\Upsilon_1$, $q(0)\in\Upsilon_1$, and any $\rho_2 > 0$, it yields that
$$\lim_{t\to\infty}\tilde q(t) = 0,\qquad \lim_{t\to\infty}\tilde w(t) = 0.$$
Proof. 
The Lyapunov function is employed as
$$V = \tilde q^T(J\otimes I_m)\tilde q + \rho_2^{-1}\tilde w^T(O\otimes I_l)\tilde w,$$
where $J\in\mathbb{R}^{N\times N}$ and $O\in\mathbb{R}^{N\times N}$ are given in Lemma 2, $\tilde q$ and $\tilde w$ are the estimation errors of q and w, respectively, $I_m$ and $I_l$ are identity matrices, and $\rho_2$ is a positive constant. Then, it yields
$$\dot V = 2\tilde q^T(J\otimes Q(w))\tilde q - \rho_1\tilde q^T\big((JW\bar L + \bar L^TWJ)\otimes I_m\big)\tilde q + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\tilde q)\tilde w + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\bar q)\tilde w + 2\rho_2^{-1}\tilde w^T(O\otimes I_l)\dot{\tilde w}.$$
Given the skew-symmetric property of Q ( w ) and the symmetry of J, it follows that J Q ( w ) is also skew-symmetric. Subsequently, we have
$$\dot V = -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\tilde q)\tilde w + 2\tilde q^T(J\otimes\varphi^T(q))\tilde w + 2\rho_2^{-1}\tilde w^T(O\otimes I_l)\dot{\tilde w}.$$
Substituting (11) into (18), we can obtain the result that
$$\begin{aligned}\dot V &= -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\tilde q)\tilde w + 2\tilde q^T(J\otimes\varphi^T(q))\tilde w - 2\tilde w^T(O\otimes I_l)\bar\varphi(\hat q)(W\bar L\otimes I_m)\tilde q\\ &= -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\tilde q)\tilde w - 2\tilde w^T(O\otimes I_l)\bar\varphi(\tilde q)(W\bar L\otimes I_m)\tilde q - 2\tilde w^T(OW\bar L\otimes\varphi(q))\tilde q + 2\tilde q^T(J\otimes\varphi^T(q))\tilde w.\end{aligned}$$
Based on Lemma 2, it yields
$$\tilde w^T(OW\bar L\otimes\varphi(q))\tilde q = \tilde q^T(J\otimes\varphi^T(q))\tilde w.$$
From Equations (19) and (20), we can obtain the result that
$$\begin{aligned}\dot V &= -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\tilde q^T(J\otimes I_m)\bar\varphi^T(\tilde q)\tilde w - 2\tilde w^T(O\otimes I_l)\bar\varphi(\tilde q)(W\bar L\otimes I_m)\tilde q\\ &\leq -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\|\tilde w\|\,\|\bar\varphi(\tilde q)\|\,\|J\|\,\|\tilde q\| + 2\|\tilde w\|\,\|O\|\,\|\bar\varphi(\tilde q)\|\,\|W\bar L\|\,\|\tilde q\|.\end{aligned}$$
According to Property 9.4.11 in [45], we can obtain the result that $\|\bar\varphi(\tilde q)\| \leq \|\mathrm{Col}[\bar\varphi(\tilde q)]\| = \|\tilde q\|$. Then it yields
$$\dot V \leq -\rho_1\tilde q^T(T\otimes I_m)\tilde q + 2\|\tilde w\|\,\|J\|\,\|\tilde q\|^2 + 2\|\tilde w\|\,\|O\|\,\|W\bar L\|\,\|\tilde q\|^2.$$
Define $r = 2\|J\| + 2\|O\|\,\|W\bar L\|$, and denote the smallest eigenvalue of the matrix T by $\lambda_T$. Then, according to Lemma 2, it follows that
$$\dot V \leq -\big(\rho_1\lambda_T - r\|\tilde w(t)\|\big)\|\tilde q\|^2.$$
According to Lemma 7 in [22], $\|\tilde w(t)\|$ is uniformly bounded. Define $\tilde w^* = \sup_t\|\tilde w(t)\|$ and $\rho_1^* = (\tilde w^*r + 1)/\lambda_T$. Then, we can obtain the result that for $\rho_1 \geq \rho_1^*$,
$$\dot V \leq -\|\tilde q\|^2.$$
Therefore, we can obtain the result that V ˙ is negative semidefinite and lim t q ˜ ( t ) = 0 .
Next, we will demonstrate the convergence of w ˜ ( t ) . According to (10), it yields
$$\ddot{\tilde q} = (I_N\otimes Q(w) - \rho_1W\bar L\otimes I_m)\dot{\tilde q} + \bar\varphi^T(\hat q)\dot{\tilde w} + \bar\varphi^T(\dot{\hat q})\tilde w.$$
Since $\tilde q$, $\tilde w$, and $\hat q$ are uniformly bounded (see Lemma 6 in [22]), according to (9) and (11), $\dot{\tilde q}$, $\dot{\tilde w}$, and $\dot{\hat q}$ are also uniformly bounded. Therefore, $\ddot{\tilde q}$ is uniformly bounded. By applying Barbalat's lemma, we have $\lim_{t\to\infty}\dot{\tilde q}(t) = 0$. From $\lim_{t\to\infty}\tilde q(t) = 0$ and (10), we can obtain the result that
$$\lim_{t\to\infty}\bar\varphi^T(\hat q)\tilde w(t) = 0,\qquad \lim_{t\to\infty}\tilde w^T(t)\bar\varphi(\hat q)\bar\varphi^T(\hat q)\tilde w(t) = 0.$$
From Lemma 7 in [22], it yields that for $\rho_1 \geq \rho_1^*$, the following inequality holds:
$$\varpi_1I_{Nl} \geq \frac{1}{T}\int_t^{t+T}\bar\varphi(\hat q)\bar\varphi^T(\hat q)\,d\tau \geq \varpi_0I_{Nl},\qquad \forall t \geq T_0,$$
where ϖ 1 , ϖ 0 , T 0 , T are positive constants. Then, according to Lemma 1 in [46], we can obtain the result that lim t w ˜ ( t ) = 0 . □
Theorem 2. 
Consider systems (1) and (6). Define two compact sets $\Upsilon_0\subset\mathbb{R}^l$ and $\Upsilon_1\subset\mathbb{R}^m$ with $0\in\Upsilon_0$ and $0\in\Upsilon_1$. Under Assumptions 1 and 2, for any $\hat w_i(0)\in\Upsilon_0$, $w\in\Upsilon_0$, $\hat q_i(0)\in\Upsilon_1$, $q(0)\in\Upsilon_1$, and any $\rho_2 > 0$, it yields the result that $\lim_{t\to\infty}\tilde E_i(t) = 0$.
Proof. 
Firstly, we consider system (14) with $\xi = 0$, i.e.,
$$\dot\varsigma = -\big(W\bar L\otimes(q(t)q^T(t)\otimes I_n)\big)\varsigma.$$
According to Lemma 2, since $W\bar L$ can be diagonalized and possesses eigenvalues that are real and positive, there exists a matrix $M_H$ such that $M_HW\bar LM_H^{-1} = \mathrm{diag}_i^N[\lambda_{ui}] = K_H$, where $\lambda_{ui} > 0$ are the eigenvalues of $W\bar L$. Define $k_h = (M_H\otimes I_{nm})\varsigma$, so Equation (25) can be transformed into
$$\dot k_h = -\big(K_H\otimes(q(t)q^T(t)\otimes I_n)\big)k_h.$$
Define $\bar B = -\big(W\bar L\otimes(q(t)q^T(t)\otimes I_n)\big)$, so Equation (25) can be simplified to
$$\dot\varsigma = \bar B\varsigma.$$
According to Lemma 1 in [32], q ( t ) satisfies the following inequality:
$$b_1I_m \leq \int_t^{t+T_0}q(\tau)q^T(\tau)\,d\tau \leq b_2I_m,$$
where b 1 , b 2 and T 0 are positive constants. Furthermore, we can obtain the result that
$$b_1K_H\otimes I_{nm} \leq \int_t^{t+T_0}K_H\otimes\big(q(\tau)q^T(\tau)\otimes I_n\big)\,d\tau \leq b_2K_H\otimes I_{nm}.$$
From Theorem 1 in [47], it can be deduced that system (26) is asymptotically stable; equivalently, system (25) is also asymptotically stable. Therefore, for any positive definite matrix $F_1(t)$ satisfying $F_1(t) \geq l_3I$, where $l_3$ is a positive constant, there exists a positive definite matrix $Z_1(t)$ satisfying $l_1I \leq Z_1(t) \leq l_2I$, where $l_1$ and $l_2$ are positive constants, such that the following equation holds:
$$\dot Z_1(t) = -Z_1(t)\bar B(t) - \bar B^T(t)Z_1(t) - F_1(t).$$
Next, we analyze system (14) with $\xi \neq 0$. Construct the following Lyapunov function:
$$V_a = \varsigma^TZ_1(t)\varsigma.$$
Then, we can obtain the result that
$$\dot V_a = \varsigma^T\big(\dot Z_1(t) + Z_1(t)\bar B(t) + \bar B^T(t)Z_1(t)\big)\varsigma + 2\varsigma^TZ_1(t)\xi = -\varsigma^TF_1(t)\varsigma + 2\varsigma^TZ_1(t)\xi \leq -l_3\|\varsigma\|^2 + 2\|Z_1(t)\|\,\|\varsigma\|\,\|\xi\|.$$
From Lemma 6 in [22], $\hat q_i$ is uniformly bounded. Furthermore, for $q(0)\in\mathbb{R}^m$ in system (1), we have $q(t) = e^{Q(w)t}q(0)$. Since $Q(w)$ is skew-symmetric under Assumption 2, it yields $\|q(t)\| = \|q(0)\|$. Hence, there exists a sufficiently large constant $q_m$ such that $\|\hat q(t)\| \leq q_m$ and $\|q\| \leq q_m$. According to Equation (13), we have
$$\begin{aligned}\|\xi_i\| &\leq \eta_i\Big(\sum_{j=0}^{N}|a_{ij}|\,\|\tilde q_i\|\,\|q\|\,\|\varsigma_j - \varsigma_i\| + \|\hat q_i\|\,\|e_{qi}\|\,\|\varsigma_0\| + \sum_{j=0}^{N}|a_{ij}|\big(\|\hat q_i\|\,\|\tilde q_j\|\,\|\varsigma_j\| + \|\hat q_i\|\,\|\tilde q_i\|\,\|\varsigma_i\|\big)\Big)\\ &\leq \eta_i\Big(4q_m\|\varsigma\|\,\|\tilde q\|\sum_{j=0}^{N}|a_{ij}| + q_m\|e_{qi}\|\,\|E\|\Big).\end{aligned}$$
Since $\|\xi(t)\| \leq \sum_{i=1}^{N}\|\xi_i(t)\|$, it yields
$$\|\xi(t)\| \leq \|\eta\|\big(\|\varsigma(t)\|\chi_0(t) + \chi_1(t)\big),$$
where $\eta = \mathrm{col}_i^N[\eta_i]$, $\chi_0(t) = 4q_m\|\tilde q(t)\|\sum_{i=1}^{N}\sum_{j=0}^{N}|a_{ij}|$, and $\chi_1(t) = Nq_m\|E\|\,\|e_q(t)\|$. Substituting into Equation (32) and denoting $\alpha = \|\eta\|$, it follows that
$$\begin{aligned}\dot V_a &\leq -l_3\|\varsigma\|^2 + 2\alpha l_2\|\varsigma\|^2\chi_0(t) + 2\alpha l_2\|\varsigma\|\chi_1(t)\\ &\leq -l_3\|\varsigma\|^2 + 2\alpha l_2\|\varsigma\|^2\chi_0(t) + \frac{\alpha l_3}{4}\|\varsigma\|^2 + \frac{4\alpha l_2^2}{l_3}\chi_1^2(t)\\ &= -\frac{(4-\alpha)l_3}{4}\|\varsigma\|^2 + 2\alpha l_2\chi_0(t)\|\varsigma\|^2 + \frac{\alpha\chi_1^2(t)}{w_h}\\ &\leq -\Big(\frac{(4-\alpha)l_3}{4l_2} - \frac{2\alpha l_2\chi_0(t)}{l_1}\Big)V_a + \frac{\alpha\chi_1^2(t)}{w_h},\end{aligned}$$
where $w_h = \frac{l_3}{4l_2^2}$, and from Theorem 1, we have $\lim_{t\to\infty}\chi_1(t) = \lim_{t\to\infty}\chi_0(t) = 0$. Since $l_2$ and $l_3$ are positive constants, as long as $\alpha$ is smaller than 4, $\dot V_a$ becomes negative definite for sufficiently large t. Therefore, we have $\lim_{t\to\infty}\varsigma(t) = 0$.
The proof is completed. □
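Both convergence proofs rest on persistence-of-excitation conditions: windowed integrals of the form $\int q(\tau)q^T(\tau)\,d\tau$ must be uniformly positive definite. A quick numerical check that a multitone leader trajectory satisfies this (frequencies and window length are illustrative):

```python
import numpy as np

# Gram matrix of a two-frequency leader trajectory over one window.
w = np.array([1.0, 2.0])

def q_of(t):
    return np.array([np.cos(w[0]*t), -np.sin(w[0]*t),
                     np.cos(w[1]*t), -np.sin(w[1]*t)])

T0, n = 10.0, 20000
ts = np.linspace(0.0, T0, n)
G = np.zeros((4, 4))
for t in ts:
    qt = q_of(t)
    G += np.outer(qt, qt) * (T0 / n)   # Riemann sum of \int q q^T d tau

print(np.linalg.eigvalsh(G).min())     # strictly positive: persistently exciting
```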
Remark 5. 
According to Theorems 1 and 2, we can obtain the result that the following equations hold: $\lim_{t\to\infty}\tilde q_i = \lim_{t\to\infty}\tilde w_i = \lim_{t\to\infty}\tilde E_i = \lim_{t\to\infty}\tilde y_i = 0$, where $\tilde y_i = \hat y_i - d_iy$, and $\lim_{t\to\infty}e_{qi} = \lim_{t\to\infty}e_{yi} = 0$.

3.2. Robust Tracking Controller Design and Stability Analysis

In this part, we propose a neural-network-based robust controller that drives the tracking error to zero. In the meantime, adaptive update laws are devised to ensure the boundedness of the estimation errors of the lumped uncertainties composed of unknown matrices and external disturbances in the follower model.
Firstly, we define the tracking error vectors as
$$e_{1i} = q_i - \hat y_i, \qquad e_{2i} = \dot q_i - \beta_i,$$
where $\beta_i = \hat E_i Q(\hat w_i)\hat q_i - K_{1i} e_{1i}$ and $K_{1i}$ is a positive definite matrix. Subsequently, we need to estimate the following lumped uncertainty, composed of the unknown matrices and external disturbances in the follower model:
$$\delta_i = -G_i - C_i \beta_i - M_i \dot\beta_i + f_i.$$
For the sake of simplicity, we assume that the lumped uncertainty $\delta_i$ can be formulated over a designated compact set $\Omega_f \subset \mathbb{R}^r$ using a neural network as
$$\delta_i(q_i, \dot q_i) = \phi_i^T(q_i, \dot q_i)\,\omega_i + \varpi_{fi},$$
where $\phi_i^T(q_i, \dot q_i) \in \mathbb{R}^{r \times b}$ is the basis function matrix and $\omega_i \in \mathbb{R}^b$ is an unknown parameter vector. $\varpi_{fi}$ is the neural network estimation error, which is bounded by a vector $\chi_{fi}$, that is, $\|\varpi_{fi}\| \le \|\chi_{fi}\|$. The estimate of $\omega_i$ is represented as $\hat\omega_i(t)$, and the estimate of $\delta_i$ is represented as $\hat\delta_i$. Then, $\hat\delta_i$ can be represented as follows:
$$\hat\delta_i(q_i, \dot q_i) = \phi_i^T(q_i, \dot q_i)\,\hat\omega_i(t).$$
The estimation errors are defined as $\tilde\omega_i = \omega_i - \hat\omega_i$ and $\tilde\delta_i = \delta_i - \hat\delta_i$. To let each follower approximate the leader's dynamics, the compact set of states can be determined from the sensor's initial rough estimate of the distance between the leader and the follower. This initially yields a compact set with a wide range; the range can then be continuously adjusted to refine the neural network model.
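To make the approximation structure concrete, a minimal sketch of the Gaussian RBF estimate $\hat\delta_i = \phi_i^T \hat\omega_i$ is given below, written per output component. The dimensions, centers, and width here are illustrative assumptions, not the settings used later in the simulations.

```python
import numpy as np

# Sketch of the RBF approximation delta_hat = phi^T * omega_hat.
# Sizes, centers, and the width are illustrative placeholders.

def rbf_basis(z, centers, width):
    """Gaussian basis phi(z): one exp(-||z - c_j||^2 / width^2) entry per center."""
    diffs = z[None, :] - centers                          # (b, dim)
    return np.exp(-np.sum(diffs**2, axis=1) / width**2)   # (b,)

rng = np.random.default_rng(0)
dim, b = 4, 6                          # input dimension and number of basis functions
centers = rng.uniform(-5.0, 5.0, size=(b, dim))
omega_hat = np.zeros(b)                # adaptive weight estimate, initialised at zero

z = np.array([0.1, -0.2, 0.05, 0.3])   # e.g. stacked (q_i, qdot_i)
phi = rbf_basis(z, centers, width=2.0)
delta_hat = phi @ omega_hat            # one component of the uncertainty estimate
```

With $\hat\omega_i$ initialized at zero, the estimate starts at zero and is shaped online by the adaptive law (41).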
Remark 6. 
As shown in Equation (37), $\delta_i$ is a lumped uncertainty composed of the unknown matrices and external disturbances in the follower model. Neural networks are employed to estimate $\delta_i$ online, and a feedforward term $\phi_i^T \hat\omega_i$ is designed in the controller (40) to compensate for it, thus improving the robustness of the system.
The neural-network-based robust controller is designed as follows:
$$u_i = -\hat p_i \big( K_{2i} e_{2i} + e_{1i} + \phi_i^T \hat\omega_i + \tau_i\, \mathrm{sgn}(e_{2i}) \big),$$
where $\hat p_i$ is the estimate of $p_i^* = 1/p_i$, $K_{2i}$ is a positive definite matrix, and $\tau_i$ is a positive constant satisfying $\tau_i \ge \|\chi_{fi}\|$.
The adaptive update laws are designed as follows:
$$\dot{\hat\omega}_i = m_f\, \phi_i e_{2i}, \qquad \dot{\hat p}_i = e_{2i}^T \theta_i,$$
where $m_f$ is a positive constant and $\theta_i = K_{2i} e_{2i} + e_{1i} + \phi_i^T \hat\omega_i + \tau_i\,\mathrm{sgn}(e_{2i})$. Then, we define $\tilde p_i = p_i^* - \hat p_i$ and $\bar p_i = p_i \tilde p_i \theta_i$.
Remark 7. 
The design of the neural-network-based robust controller comprises three components. The state feedback term $-K_{2i} e_{2i} - e_{1i}$ realizes the fundamental functions of the controller. The neural network feedforward term $-\phi_i^T \hat\omega_i$ compensates for the lumped uncertainty and enhances the stability of the system. The term $-\tau_i\,\mathrm{sgn}(e_{2i})$ mitigates the impact of the neural network estimation errors on the system.
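The three components above can be sketched as one discretized control step. The gains mirror the values used later in Example 1 ($K_{1i} = K_{2i} = 40 I_2$, $\tau_i = 10$), while the Euler step $h$, the basis values, and the weight update written as $m_f\,\phi_i e_{2i}$ (the dimensionally consistent reading of (41)) are assumptions of this sketch.

```python
import numpy as np

# One Euler step of u_i = -p_hat_i * theta_i with the adaptive updates (41).
# Step size h, the random basis matrix, and the update forms are illustrative.

def control_step(e1, e2, phi, omega_hat, p_hat, K1, K2, tau, m_f, h):
    theta = K2 @ e2 + e1 + phi.T @ omega_hat + tau * np.sign(e2)
    u = -p_hat * theta                              # fault-compensated control input
    omega_hat = omega_hat + h * m_f * (phi @ e2)    # omega_hat_dot = m_f * phi * e2
    p_hat = p_hat + h * float(e2 @ theta)           # p_hat_dot = e2^T * theta
    return u, omega_hat, p_hat

n, b = 2, 6
rng = np.random.default_rng(1)
e1, e2 = np.array([0.1, -0.05]), np.array([0.02, 0.03])
phi = rng.standard_normal((b, n))                   # phi in R^{b x n}, so phi^T in R^{n x b}
u, omega_hat, p_hat = control_step(e1, e2, phi, np.zeros(b), 1.0,
                                   40 * np.eye(n), 40 * np.eye(n),
                                   tau=10.0, m_f=1.0, h=0.01)
```

Because $\hat\omega_i(0) = 0$, the first input is dominated by the state feedback and switching terms; the feedforward contribution grows as the weights adapt.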
To summarize, the control strategy of the closed-loop tracking control system is illustrated in Figure 2. The control scheme proposed in this paper consists of a distributed observer and a neural-network-based robust controller. For the leader model (1) with the uncertain parameter vector $w$ and uncertain output matrix $E$, the distributed observer (6) estimates the leader's global information. On this basis, the estimates $\hat w_i$, $\hat q_i$, $\hat E_i$, and $\hat y_i$ are fed into the neural-network-based robust controller (40). The adaptive update laws (41) then ensure the effectiveness of the neural network estimation of the lumped uncertainty; note that the estimates $\hat w_i$, $\hat q_i$, and $\hat E_i$ obtained through the distributed observer (6) also enter the adaptive laws.
Theorem 3. 
Consider the leader–follower systems (1) and (2). Suppose Assumptions 1 and 2 hold. If the neural-network-based robust controller and the adaptive update laws are designed as (40) and (41), then the leader–follower systems (1) and (2) achieve bipartite consensus tracking.
Proof. 
Differentiating $e_{1i}$ and $e_{2i}$, we obtain
$$\dot e_{1i} = e_{2i} + \beta_i - \dot{\hat y}_i, \qquad \dot e_{2i} = \ddot q_i - \dot\beta_i.$$
Consider the Lyapunov function as
V b i = 1 2 e 1 i T e 1 i .
Differentiating (43) and using (42), it yields
$$\dot V_{bi} = -e_{1i}^T K_{1i} e_{1i} + e_{1i}^T e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big).$$
Substituting (42) into (2), it can be obtained that
$$M_i \dot e_{2i} + C_i e_{2i} = p_i u_i + \delta_i.$$
Then, we consider the second Lyapunov function as
V c i = V b i + 1 2 e 2 i T M i e 2 i .
Then, we have
$$\dot V_{ci} = -e_{1i}^T K_{1i} e_{1i} + e_{1i}^T e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) + \frac{1}{2} e_{2i}^T \dot M_i(q_i) e_{2i} + e_{2i}^T M_i(q_i) \dot e_{2i}.$$
Substituting (45) into (47), and according to Property 2, it follows that
$$\dot V_{ci} = -e_{1i}^T K_{1i} e_{1i} + e_{1i}^T e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) + e_{2i}^T ( p_i u_i + \delta_i ).$$
Then, we consider the third Lyapunov function as
$$V_{di} = V_{ci} + \frac{\tilde\omega_i^T \tilde\omega_i}{2 m_f} + \sum_{i=1}^{N} \frac{p_i \tilde p_i^2}{2}.$$
Then, we have
$$\dot V_{di} = \dot V_{ci} - \frac{\tilde\omega_i^T \dot{\hat\omega}_i}{m_f} - \sum_{i=1}^{N} p_i \tilde p_i \dot{\tilde p}_i.$$
Substituting (38), (40), and (48) into (50), we can obtain the result that
$$\begin{aligned}
\dot V_{di} ={}& -e_{1i}^T K_{1i} e_{1i} + e_{1i}^T e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) \\
&+ e_{2i}^T \big( -K_{2i} e_{2i} - e_{1i} - \phi_i^T \hat\omega_i - \tau_i\, \mathrm{sgn}(e_{2i}) + \phi_i^T \omega_i + \varpi_{fi} + \bar p_i \big) - \sum_{i=1}^{N} p_i \tilde p_i \dot{\tilde p}_i - \frac{\tilde\omega_i^T \dot{\hat\omega}_i}{m_f} \\
={}& -e_{1i}^T K_{1i} e_{1i} + e_{1i}^T e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) - e_{2i}^T K_{2i} e_{2i} - e_{2i}^T e_{1i} - e_{2i}^T \phi_i^T \hat\omega_i \\
&- e_{2i}^T \tau_i\, \mathrm{sgn}(e_{2i}) + e_{2i}^T \phi_i^T \omega_i + e_{2i}^T \varpi_{fi} - \frac{\tilde\omega_i^T \dot{\hat\omega}_i}{m_f} + e_{2i}^T \bar p_i - \sum_{i=1}^{N} p_i \tilde p_i \dot{\tilde p}_i \\
={}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) - e_{2i}^T \tau_i\, \mathrm{sgn}(e_{2i}) \\
&+ e_{2i}^T \phi_i^T \tilde\omega_i + e_{2i}^T \varpi_{fi} - \frac{\tilde\omega_i^T \dot{\hat\omega}_i}{m_f} + e_{2i}^T \bar p_i - \sum_{i=1}^{N} p_i \tilde p_i \dot{\tilde p}_i.
\end{aligned}$$
Substituting (41) into (51), it yields
$$\begin{aligned}
\dot V_{di} ={}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) - e_{2i}^T \tau_i\, \mathrm{sgn}(e_{2i}) + e_{2i}^T \varpi_{fi} \\
\le{}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) - \tau_i \|e_{2i}\| + \|e_{2i}\| \|\varpi_{fi}\| \\
\le{}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big) - \tau_i \|e_{2i}\| + \|e_{2i}\| \|\chi_{fi}\| \\
\le{}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat y}_i \big).
\end{aligned}$$
From (6), we have
$$\begin{aligned}
\dot V_{di} \le{}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i - \dot{\hat E}_i \hat q_i - \hat E_i \dot{\hat q}_i \big) \\
={}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \hat E_i Q(\hat w_i)\hat q_i + \eta_i e_{y_i} \hat q_i^T \hat q_i - \hat E_i Q(\hat w_i)\hat q_i - \rho_1 \eta_i \hat E_i e_{q_i} \big) \\
={}& -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i} + e_{1i}^T \big( \eta_i \hat q_i^T \hat q_i\, e_{y_i} - \rho_1 \eta_i \hat E_i e_{q_i} \big),
\end{aligned}$$
where $e_{q_i}$ and $e_{y_i}$ are the observation errors defined in (5). According to Remark 5, we have $\lim_{t\to\infty} e_{q_i} = \lim_{t\to\infty} e_{y_i} = \lim_{t\to\infty} \tilde y_i = 0$. Since $K_{1i}$ and $K_{2i}$ are positive definite, it yields $\lim_{t\to\infty} \dot V_{di} \le -e_{1i}^T K_{1i} e_{1i} - e_{2i}^T K_{2i} e_{2i}$; that is, $\dot V_{di}$ becomes negative definite as $t$ tends to infinity. Therefore, it yields $\lim_{t\to\infty} e_{1i}(t) = \lim_{t\to\infty} e_{2i}(t) = 0$. Then, we have $\lim_{t\to\infty} e_{1i} = \lim_{t\to\infty} (q_i - \hat y_i) = \lim_{t\to\infty} (q_i - d_i y) = 0$. Based on Definition 1, the leader–follower systems described by Equations (1) and (2) reach bipartite consensus tracking. The proof is finished. □

4. Simulation Results

Numerical simulations are carried out in this section. Two simulation models are employed to verify the designed consensus tracking control protocol: one is a two-link robot manipulator, and the other is a mathematical model of a quadrotor drone. The results are shown below.

4.1. Example 1: Two-Link Robot Manipulator

A two-link robot manipulator benchmark model is used for simulation in this subsection, as shown in Figure 3.
Based on the Euler–Lagrange formulation, the manipulator model is given by
$$M_i(q_i)\ddot q_i + C_i(q_i, \dot q_i)\dot q_i + G_i(q_i) = p_i u_i + f_i(q_i, \dot q_i, \ddot q_i), \quad i = 1, \dots, 4,$$
where
$$q_i = \operatorname{col}[q_{i1}, q_{i2}], \qquad
M_i(q_i) = \begin{pmatrix} d_{i1} + d_{i2} + 2 d_{i3} \cos(q_{i2}) & d_{i2} + d_{i3} \cos(q_{i2}) \\ d_{i2} + d_{i3} \cos(q_{i2}) & d_{i2} \end{pmatrix},$$
$$C_i(q_i, \dot q_i) = \begin{pmatrix} -d_{i3} \sin(q_{i2}) \dot q_{i2} & -d_{i3} \sin(q_{i2}) (\dot q_{i1} + \dot q_{i2}) \\ d_{i3} \sin(q_{i2}) \dot q_{i1} & 0 \end{pmatrix}, \qquad
G_i(q_i) = \begin{pmatrix} d_{i4}\, g \cos(q_{i1}) + d_{i5}\, g \cos(q_{i1} + q_{i2}) \\ d_{i5}\, g \cos(q_{i1} + q_{i2}) \end{pmatrix},$$
$$d_{i1} = J_{i1} + m_{i2} l_{i1}^2, \quad d_{i2} = 0.25\, m_{i2} l_{i2}^2 + J_{i2}, \quad d_{i3} = 0.5\, m_{i2} l_{i1} l_{i2}, \quad d_{i4} = (0.5\, m_{i1} + m_{i2}) l_{i1}, \quad d_{i5} = 0.5\, m_{i2} l_{i2},$$
$$f_i(q_i, \dot q_i, \ddot q_i) = 5\% \cdot \big[ M_i(q_i)\ddot q_i + C_i(q_i, \dot q_i)\dot q_i + G_i(q_i) \big],$$
$g = 9.8$ m/s² is the gravitational acceleration; $m_{i1}$ and $m_{i2}$ are the masses of the links; $l_{i1}$ denotes the length of a link; $l_{i2}$ represents the distance from the center of mass to the joint; $J_{i1}$ and $J_{i2}$ denote the moments of inertia of the links; and $d_{i1}$–$d_{i5}$ are constants appearing in the inertia matrix, the Coriolis and centrifugal matrix, and the gravity vector. $q_i = \operatorname{col}[q_{i1}, q_{i2}]$ collects the joint angles of the two arms of the $i$th manipulator. We consider a group of two-link manipulators consisting of four followers. The communication topology among the manipulators is shown in Figure 4; the first follower can obtain the leader's information. The simulation parameters are given in Table 2. The initial values are selected as follows:
$$q_{11}(0) = 5\pi,\ q_{12}(0) = 5\pi,\ q_{21}(0) = 10\pi,\ q_{22}(0) = 5\pi,\ q_{31}(0) = 15\pi,\ q_{32}(0) = 5\pi,\ q_{41}(0) = 20\pi,\ q_{42}(0) = 5\pi,$$
$$p_1 = 0.98,\ p_2 = 0.95,\ p_3 = 0.9,\ p_4 = 0.92,\ \dot q_{i1}(0) = \dot q_{i2}(0) = 0,\ q(0) = \operatorname{col}(2, 0.6, 2, 0.8, 2, 1).$$
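The manipulator dynamics above can be evaluated numerically as follows. The sketch uses the Manipulator 1 parameters from Table 2; the off-diagonal inertia entries follow the common two-link benchmark form $d_{i2} + d_{i3}\cos(q_{i2})$, an assumption where the extracted expressions are ambiguous.

```python
import numpy as np

# Evaluating M_i, C_i, G_i of the two-link model for one configuration,
# using the Manipulator 1 parameters of Table 2 and g = 9.8 m/s^2.

def two_link_matrices(q, qd, m1, m2, l1, l2, J1, J2, g=9.8):
    d1 = J1 + m2 * l1**2
    d2 = 0.25 * m2 * l2**2 + J2
    d3 = 0.5 * m2 * l1 * l2
    d4 = (0.5 * m1 + m2) * l1
    d5 = 0.5 * m2 * l2
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[d1 + d2 + 2 * d3 * c2, d2 + d3 * c2],
                  [d2 + d3 * c2,          d2          ]])
    C = np.array([[-d3 * s2 * qd[1], -d3 * s2 * (qd[0] + qd[1])],
                  [ d3 * s2 * qd[0],  0.0                      ]])
    G = np.array([d4 * g * np.cos(q[0]) + d5 * g * np.cos(q[0] + q[1]),
                  d5 * g * np.cos(q[0] + q[1])])
    return M, C, G

M, C, G = two_link_matrices(np.array([0.3, 0.5]), np.array([0.1, -0.2]),
                            m1=1.02, m2=1.12, l1=0.98, l2=1.0, J1=0.23, J2=0.41)
```

As expected for an Euler–Lagrange model, the returned inertia matrix is symmetric and positive definite (Property 1 in such formulations).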
The following linear system is used to represent the dynamics of the leader, which can denote a signal generator in complex sea conditions under Assumption 2.
$$\dot q = Q(w) q, \qquad y = E q.$$
The leader’s system matrices are set as:
$$w = \operatorname{col}[2, 5, 8], \qquad E = \begin{pmatrix} 0.1 & 0 & 0.2 & 0 & 0.3 & 0 \\ 0 & 0.3 & 0 & 0.2 & 0 & 0.1 \end{pmatrix}.$$
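The leader $\dot q = Q(w)q$ can be simulated directly. The paper does not spell out $Q(w)$ in this excerpt; the block-diagonal skew-symmetric structure below, with the entries of $w$ as frequencies, is an assumption consistent with Assumption 2's norm preservation ($\|q(t)\| = \|q(0)\|$).

```python
import numpy as np

# Sketch of the marginally stable leader q_dot = Q(w)q.  The block-diagonal
# skew-symmetric Q(w) is an assumed structure, not taken from the paper.

def Q_of_w(w):
    n = 2 * len(w)
    Q = np.zeros((n, n))
    for k, wk in enumerate(w):
        Q[2*k:2*k+2, 2*k:2*k+2] = np.array([[0.0, wk], [-wk, 0.0]])
    return Q

w = np.array([2.0, 5.0, 8.0])
q = np.array([2.0, 0.6, 2.0, 0.8, 2.0, 1.0])   # q(0) from the text
Q = Q_of_w(w)

h, T = 1e-3, 1.0
for _ in range(int(T / h)):                    # RK4 integration of q_dot = Q q
    k1 = Q @ q
    k2 = Q @ (q + 0.5 * h * k1)
    k3 = Q @ (q + 0.5 * h * k2)
    k4 = Q @ (q + h * k3)
    q = q + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

E = np.array([[0.1, 0, 0.2, 0, 0.3, 0],
              [0, 0.3, 0, 0.2, 0, 0.1]])
y = E @ q                                      # leader output y = E q
```

Because $Q(w)$ is skew-symmetric, $\|q(t)\|$ stays constant up to integration error, which matches the boundedness argument used in the proof of Theorem 2.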
The activation functions of each follower are described as
$$\phi_{1i}(z) = \phi_{2i}(z) = [\phi_{i1}(z), \dots, \phi_{i6}(z)]^T.$$
In this paper, we choose Gaussian functions as the activation functions: $\phi_{ij}(z) = \exp\!\big( -\|z - c_{ij}\|^2 / \delta_{ij}^2 \big)$, $j = 1, \dots, 6$, where $z = [q_i^T, \dot q_i^T, \beta_i^T]^T \in \mathbb{R}^6$. We assume that all the followers have the same activation functions. $c_{ij}$ represents the center of the receptive field, evenly distributed in $[-5, 5]^4 \times [-0.5, 0.5]^2$, and $\delta_{ij} = 2$ is the width of the Gaussian function. The initial weights $\hat\omega_{1i}$ and $\hat\omega_{2i}$ are chosen as $\hat\omega_{1i}(0) = \hat\omega_{2i}(0) = 0_{6 \times 2}$. The control parameters are selected as $K_{1i} = K_{2i} = 40 I_2$, $\tau_i = 10$, $\rho_1 = 80$, and $\rho_2 = 60$. We select $W = \operatorname{diag}(\alpha_1, \dots, \alpha_N) = \operatorname{diag}(1, 2, 3, 4)$. To present the numerical simulations intuitively, we define $\bar q_{0i} = \tilde q_i$, $\bar w_i = \tilde w_i$, and $\bar E_i = \tilde E_i$.
Table 3 lists the comparison results between this article and [22].
In Table 3, $\bar e$ represents the average value of the tracking errors, and $\delta$ represents their standard deviation. The comparison shows that the method adopted in this paper yields smaller tracking errors and smaller fluctuation ranges, thereby enhancing the robustness of the system.
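The Table 3 metrics can be computed from a logged tracking-error trajectory as its mean and standard deviation. The error signal and the steady-state window below are synthetic illustrations, not the simulation data of the paper.

```python
import numpy as np

# Computing e_bar (mean) and delta (standard deviation) of a tracking error,
# overall and over an assumed steady-state window (final 20% of the run).

t = np.linspace(0.0, 10.0, 1001)
e11 = 0.5 * np.exp(-2.0 * t) * np.sin(3.0 * t)    # a synthetic decaying error

e_bar = float(np.mean(e11))                       # overall average error
delta = float(np.std(e11))                        # overall standard deviation

tail = e11[t >= 8.0]                              # assumed steady-state window
e_bar_ss, delta_ss = float(np.mean(tail)), float(np.std(tail))
```

For a converging controller, the steady-state statistics are far smaller than the overall ones, which is the pattern visible in Table 3.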
The systems described by Equations (1) and (2) can achieve bipartite consensus tracking, as illustrated in Figure 5. Furthermore, Figure 6 demonstrates the convergence of the tracking errors of the networked agent system to zero. Figure 6 and Figure 7 indicate that the distributed observer (6) can effectively estimate the leader’s dynamic information, including the state q, the unknown parameter w, and the output matrix E. The control inputs shown in Figure 8 indicate that by using neural networks, the estimation errors of the lumped uncertainties δ i are bounded.
To highlight this paper’s contribution, we compare it with article [22], a leading work in the consensus tracking of an uncertain leader under directed graphs. We introduce the same external disturbance f i to the follower model in article [22] and compare the results by observing the figures of tracking errors (Figure 6 and Figure 9).
Remark 8. 
Figure 6 and Figure 9 show that the neural-network-based robust controller proposed in this paper effectively suppresses external disturbances and enhances the robustness of the system. Compared with the control strategy in article [22], the proposed scheme offers notable research significance in real-world applications.

4.2. Example 2: Quadrotor Drone

In this subsection, a mathematical model of a quadrotor drone is utilized for simulation as follows:
$$M_i(\chi_i)\ddot\chi_i + C_i(\chi_i, \dot\chi_i)\dot\chi_i = u_{\chi i} + f_{\chi i},$$
where $\chi_i = [\phi_i, \theta_i, \psi_i]^T$ denotes the attitude vector of the quadrotor rigid body expressed in the inertial frame, $C_i(\chi_i, \dot\chi_i)$ is the Coriolis matrix containing the gyroscopic and centrifugal terms, and $M_i(\chi_i)$ is the inertia matrix. $u_{\chi i}$ is the total control torque for the rotational motion, and $f_{\chi i}$ denotes the disturbance torque. The explicit expressions of $M_i(\chi_i)$ and $C_i(\chi_i, \dot\chi_i)$ can be found in [48].
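One integration step of this attitude model can be sketched as follows. A constant diagonal inertia and a zero Coriolis term are placeholder assumptions for illustration only; the full expressions are given in [48].

```python
import numpy as np

# One explicit Euler step of M_i(chi) chi_dd + C_i chi_d = u + f.
# The diagonal M and zero C below are placeholders, not the model of [48].

def attitude_step(chi, chi_d, u, f, M, C, h):
    chi_dd = np.linalg.solve(M, u + f - C @ chi_d)   # chi_dd = M^{-1}(u + f - C chi_d)
    return chi + h * chi_d, chi_d + h * chi_dd       # explicit Euler update

M = np.diag([0.02, 0.02, 0.04])        # placeholder inertia (kg m^2)
C = np.zeros((3, 3))                   # Coriolis term neglected in this sketch
chi, chi_d = np.zeros(3), np.zeros(3)  # attitude [phi, theta, psi] and its rate
u, f = np.array([1e-3, 0.0, 0.0]), np.zeros(3)
chi, chi_d = attitude_step(chi, chi_d, u, f, M, C, h=0.01)
```

In the closed loop, $u_{\chi i}$ would be produced by the controller (40) driven by the distributed observer's estimates, exactly as in Example 1.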
The following linear system is used to represent the dynamics of the leader, which can denote a signal generator in complex sea conditions under Assumption 2.
$$\dot q = Q(w) q, \qquad y = E q.$$
The leader’s system matrices are set as
$$w = \operatorname{col}[2, 5, 8], \qquad E = \begin{pmatrix} 0.1 & 0 & 0.2 & 0 & 0.3 & 0 \\ 0 & 0.3 & 0 & 0.2 & 0 & 0.1 \\ 0.1 & 0.3 & 0.2 & 0.2 & 0.1 & 0.2 \end{pmatrix}.$$
The rest of the control parameters are the same as in Example 1. We define the tracking error vectors as $\tilde\phi_i = \phi_i - d_i y_1$, $\tilde\theta_i = \theta_i - d_i y_2$, and $\tilde\psi_i = \psi_i - d_i y_3$.
To verify that the proposed control scheme's adaptability extends its applicability to a wider array of consensus tracking control tasks, we conduct an additional set of experiments with a different leader output matrix. The leader's output matrix is set as
$$E = \begin{pmatrix} 0.5 & 0.1 & 0.1 & 0.1 & 0.1 & 0.2 \\ 0.1 & 0.5 & 0.1 & 0.1 & 0.3 & 0.1 \\ 0.1 & 0.1 & 0.5 & 0.1 & 0.1 & 0.3 \end{pmatrix}.$$
As shown in Figure 10, Figure 11, Figure 12 and Figure 13, the systems described by (1) and (54) achieve bipartite consensus tracking. Figure 11 and Figures 13–15 indicate that the attitude tracking errors of the quadrotor drones converge to zero.

5. Conclusions

This paper has investigated the bipartite consensus tracking problem for networked ELSs with an uncertain leader agent across directed topologies. The ELSs under consideration are afflicted by unknown dynamic matrices, external disturbances, and actuator faults. A novel control strategy, integrating a distributed observer with a neural-network-based robust controller, has been devised. Neural networks are employed to approximate the lumped uncertainties, which include unknown matrices and external disturbances within the follower model. The distributed observer is tailored to estimate the leader’s dynamic information. Adaptive update laws have been formulated for the unknown parameters within the neural networks and the actuator fault factors, ensuring the boundedness of the estimation errors. A neural-network-based robust controller is proposed to ensure that tracking errors asymptotically converge to zero. Numerical simulations have been conducted to substantiate the efficacy of the proposed control strategy. In future work, we will delve into the bipartite consensus tracking control scheme with event-triggered communications, which is anticipated to further enhance the efficiency and practicality of the system.

Author Contributions

Z.L. and H.H.: Conceptualization, Methodology, Writing—original draft. M.S. and B.L.: Writing—original draft, Numerical simulation. C.H.: Validation, Writing—review and editing, Supervision. K.Q.: Validation, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dong, J.; Yassine, A.; Armitage, A.; Hossain, M.S. Multi-Agent Reinforcement Learning for Intelligent V2G Integration in Future Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2023, 1, 15974–15983. [Google Scholar] [CrossRef]
  2. Xu, D.; Chen, G. The research on intelligent cooperative combat of UAV cluster with multi-agent reinforcement learning. Aerosp. Syst. 2022, 5, 107–121. [Google Scholar] [CrossRef]
  3. Wang, H.; Tao, J.; Peng, T.; Brintrup, A.; Kosasih, E.E.; Lu, Y.; Tang, R.; Hu, L. Dynamic inventory replenishment strategy for aerospace manufacturing supply chain: Combining reinforcement learning and multi-agent simulation. Int. J. Prod. Res. 2022, 60, 4117–4136. [Google Scholar] [CrossRef]
  4. Peng, X.J.; He, Y.; Chen, W.H.; Liu, Q. Bipartite consensus tracking control for periodically-varying-delayed multi-agent systems with uncertain switching topologies. Commun. Nonlinear Sci. Numer. Simul. 2023, 121, 107226. [Google Scholar] [CrossRef]
  5. Wang, H.; Han, Q.L. Distribution of Roots of Quasi-Polynomials of Neutral Type and Its Application-Part II: Consensus Protocol Design of Multi-Agent Systems Using Delayed State Information. IEEE Trans. Autom. Control 2023, 69, 4058–4065. [Google Scholar] [CrossRef]
  6. Zhao, M.; Peng, C.; Tian, E. Finite-time and fixed-time bipartite consensus tracking of multi-agent systems with weighted antagonistic interactions. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 68, 426–433. [Google Scholar] [CrossRef]
  7. Wang, B.; Chen, W.; Zhang, B. Semi-global robust tracking consensus for multi-agent uncertain systems with input saturation via metamorphic low-gain feedback. Automatica 2019, 103, 363–373. [Google Scholar] [CrossRef]
  8. Wang, W.; Wen, C.; Huang, J. Distributed adaptive asymptotically consensus tracking control of nonlinear multi-agent systems with unknown parameters and uncertain disturbances. Automatica 2017, 77, 133–142. [Google Scholar] [CrossRef]
  9. Wang, X.; Niu, B.; Zhang, J.; Wang, H.; Jiang, Y.; Wang, D. Adaptive Event-Triggered Consensus Tracking Control Schemes for Uncertain Constrained Nonlinear Multi-Agent Systems. IEEE Trans. Autom. Sci. Eng. 2023. [Google Scholar] [CrossRef]
  10. Ren, Z.; Lin, B.; Shi, M.; Li, Z.; Qin, K. UDE-based Consensus Tracking Control of Multi-Agent Systems with Actuator Faults and External Disturbances. In Proceedings of the 2023 6th International Conference on Electronics Technology (ICET), Chengdu, China, 12–15 May 2023; pp. 1282–1288. [Google Scholar]
  11. Qin, J.; Zhang, G.; Zheng, W.X.; Kang, Y. Neural-Network-Based Adaptive Consensus Control for a Class of Nonaffine Nonlinear Multiagent Systems with Actuator Faults. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3633–3644. [Google Scholar]
  12. Bo, Z.; Wei, W.; Hao, Y. Distributed consensus tracking control of linear multi-agent systems with actuator faults. In Proceedings of the 2014 IEEE Conference on Control Applications (CCA), Juan Les Antibes, France, 8–10 October 2014; pp. 2141–2146. [Google Scholar]
  13. Lu, X.; Wang, Y.; Yu, X.; Lai, J. Finite-time control for robust tracking consensus in MASs with an uncertain leader. IEEE Trans. Cybern. 2016, 47, 1210–1223. [Google Scholar] [CrossRef] [PubMed]
  14. Lui, D.G.; Petrillo, A.; Santini, S. Bipartite tracking consensus for high-order heterogeneous uncertain nonlinear multi-agent systems with unknown leader dynamics via adaptive fully-distributed PID control. IEEE Trans. Netw. Sci. Eng. 2022, 10, 1131–1142. [Google Scholar] [CrossRef]
  15. Tahoun, A.; Arafa, M. Adaptive leader–follower control for nonlinear uncertain multi-agent systems with an uncertain leader and unknown tracking paths. ISA Trans. 2022, 131, 61–72. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, G.; Cheng, D. Adaptive fault-tolerant guaranteed performance control for Euler–Lagrange systems with its application to a 2-link robotic manipulator. IEEE Access 2020, 8, 184160–184171. [Google Scholar] [CrossRef]
  17. Liang, Z.; Wen, H.; Yao, B.; Mao, Z.; Lian, L. Adaptive actuator fault and the abnormal value reconstruction for marine vehicles with a class of Euler–Lagrange system. Ocean Eng. 2024, 297, 117095. [Google Scholar] [CrossRef]
  18. Wei, C.; Luo, J.; Dai, H.; Yuan, J. Adaptive model-free constrained control of postcapture flexible spacecraft: A Euler–Lagrange approach. J. Vib. Control 2018, 24, 4885–4903. [Google Scholar] [CrossRef]
  19. Lu, M.; Liu, L. Leader–following consensus of multiple uncertain Euler–Lagrange systems with unknown dynamic leader. IEEE Trans. Autom. Control 2019, 64, 4167–4173. [Google Scholar] [CrossRef]
  20. Dong, Y.; Chen, Z. Fixed-time synchronization of networked uncertain Euler–Lagrange systems. Automatica 2022, 146, 110571. [Google Scholar] [CrossRef]
  21. Mei, J.; Ren, W.; Ma, G. Distributed coordinated tracking with a dynamic leader for multiple Euler–Lagrange systems. IEEE Trans. Autom. Control 2011, 56, 1415–1421. [Google Scholar] [CrossRef]
  22. Wang, S.; Zhang, H.; Chen, Z. Adaptive cooperative tracking and parameter estimation of an uncertain leader over general directed graphs. IEEE Trans. Autom. Control 2022, 68, 3888–3901. [Google Scholar] [CrossRef]
  23. Cheng, B.; Li, Z. Coordinated tracking of Euler–Lagrange systems over directed graphs via distributed continuous controllers. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 8144–8149. [Google Scholar]
  24. Miyasato, Y. Adaptive H-inf consensus control of Euler–Lagrange systems on directed network graph. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–6. [Google Scholar]
  25. Deng, Q.; Peng, Y.; Han, T.; Qu, D. Event-triggered bipartite consensus in networked Euler–Lagrange systems with external disturbance. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 2870–2874. [Google Scholar] [CrossRef]
  26. Li, B.; Han, T.; Xiao, B.; Zhan, X.S.; Yan, H. Leader-following bipartite consensus of multiple uncertain Euler–Lagrange systems under deception attacks. Appl. Math. Comput. 2022, 428, 127227. [Google Scholar] [CrossRef]
  27. Liu, J.; Li, H.; Luo, J. Bipartite consensus in networked Euler–Lagrange systems with uncertain parameters under a cooperation-competition network topology. IEEE Control Syst. Lett. 2019, 3, 494–498. [Google Scholar] [CrossRef]
  28. Huang, J.; Xiang, Z. Leader-following bipartite consensus with disturbance rejection for uncertain multiple Euler–Lagrange systems over signed networks. J. Frankl. Inst. 2021, 358, 7786–7803. [Google Scholar] [CrossRef]
  29. Nardi, F. Neural Network Based Adaptive Algorithms for Nonlinear Control; Georgia Institute of Technology: Atlanta, GA, USA, 2000. [Google Scholar]
  30. Lewis, F.L.; Dawson, D.M.; Abdallah, C.T. Robot Manipulator Control: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar]
  31. Cai, H.; Huang, J. The leader-following consensus for multiple uncertain Euler–Lagrange systems with an adaptive distributed observer. IEEE Trans. Autom. Control 2015, 61, 3152–3157. [Google Scholar] [CrossRef]
  32. Wang, S.; Huang, J. Adaptive leader-following consensus for multiple Euler–Lagrange systems with an uncertain leader system. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2188–2196. [Google Scholar] [CrossRef] [PubMed]
  33. Hsu, L.; Ortega, R.; Damm, G. A globally convergent frequency estimator. IEEE Trans. Autom. Control 1999, 44, 698–713. [Google Scholar] [CrossRef]
  34. Fuller, C.R.; von Flotow, A.H. Active control of sound and vibration. IEEE Control Syst. Mag. 1995, 15, 9–19. [Google Scholar] [CrossRef]
  35. Wang, S.; Huang, J. Cooperative output regulation of linear multi-agent systems subject to an uncertain leader system. Int. J. Control 2021, 94, 952–960. [Google Scholar] [CrossRef]
  36. Zhao, G.; Hua, C. Leader-following consensus of multiagent systems via asynchronous sampled-data control: A hybrid system approach. IEEE Trans. Autom. Control 2021, 67, 2568–2575. [Google Scholar] [CrossRef]
  37. Xu, C.; Xu, H.; Su, H.; Liu, C. Adaptive bipartite consensus of competitive linear multi-agent systems with asynchronous intermittent communication. Int. J. Robust Nonlinear Control 2022, 32, 5120–5140. [Google Scholar] [CrossRef]
  38. Yuan, S.; Yu, C.; Sun, J. Adaptive event-triggered consensus control of linear multi-agent systems with cyber attacks. Neurocomputing 2021, 442, 1–9. [Google Scholar] [CrossRef]
  39. He, C.; Huang, J. Leader-following consensus for multiple Euler–Lagrange systems by distributed position feedback control. IEEE Trans. Autom. Control 2021, 66, 5561–5568. [Google Scholar] [CrossRef]
  40. Li, H.; Liu, C.L.; Zhang, Y.; Chen, Y.Y. Practical fixed-time consensus tracking for multiple Euler–Lagrange systems with stochastic packet losses and input/output constraints. IEEE Syst. J. 2021, 16, 6185–6196. [Google Scholar] [CrossRef]
  41. Cao, W.; Zhang, J.; Ren, W. Leader–follower consensus of linear multi-agent systems with unknown external disturbances. Syst. Control Lett. 2015, 82, 64–70. [Google Scholar] [CrossRef]
  42. Wang, Y.; Yuan, Y.; Liu, J. Finite-time leader-following output consensus for multi-agent systems via extended state observer. Automatica 2021, 124, 109133. [Google Scholar] [CrossRef]
  43. Lu, M.; Han, T.; Wu, J.; Zhan, X.S.; Yan, H. Adaptive bipartite output consensus for heterogeneous multi-agent systems via state/output feedback control. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3455–3459. [Google Scholar] [CrossRef]
  44. Wei, Q.; Wang, X.; Zhong, X.; Wu, N. Consensus control of leader-following multi-agent systems in directed topology with heterogeneous disturbances. IEEE/CAA J. Autom. Sin. 2021, 8, 423–431. [Google Scholar] [CrossRef]
  45. Bernstein, D.S. Matrix Mathematics: Theory, Facts, and Formulas; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  46. Zhang, Q.; Delyon, B. A new approach to adaptive observer design for MIMO systems. In Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148), Arlington, VA, USA, 25–27 June 2001; Volume 2, pp. 1545–1550. [Google Scholar]
  47. Anderson, B. Exponential stability of linear equations arising in adaptive identification. IEEE Trans. Autom. Control 1977, 22, 83–88. [Google Scholar] [CrossRef]
  48. Xiao, B.; Yin, S. A new disturbance attenuation control scheme for quadrotor unmanned aerial vehicles. IEEE Trans. Ind. Inform. 2017, 13, 2922–2932. [Google Scholar] [CrossRef]
Figure 1. RBF neural network structure diagram [29].
Figure 2. Schematic diagram of the closed-loop tracking control system.
Figure 3. The two-link robot manipulator model.
Figure 4. The communication topology among the networked agent system.
Figure 5. The trajectories of (a) q i 1 ( t ) and y 1 ( t ) ; (b) q i 2 ( t ) and y 2 ( t ) .
Figure 6. The trajectories of (a) tracking errors e 1 i ( t ) ; (b) estimation errors q ¯ 0 i ( t ) .
Figure 7. The trajectories of (a) estimation errors w ¯ i ( t ) ; (b) estimation errors E ¯ i ( t ) .
Figure 8. The trajectories of (a) control inputs u i ( t ) ; (b) estimation errors δ ˜ i ( t ) .
Figure 9. The trajectories of the tracking errors in [22].
Figure 10. The trajectories of (a) ϕ i ( t ) and y 1 ( t ) ; (b) θ i ( t ) and y 2 ( t ) .
Figure 11. The trajectories of (a) ψ i ( t ) and y 1 ( t ) ; (b) ϕ ˜ i ( t ) .
Figure 12. The trajectories of (a) ϕ i ( t ) and y 1 ( t ) ; (b) θ i ( t ) and y 2 ( t ) .
Figure 13. The trajectories of (a) ψ i ( t ) and y 1 ( t ) ; (b) ϕ ˜ i ( t ) .
Figure 14. The trajectories of (a) θ ˜ i ( t ) ; (b) ψ ˜ i ( t ) .
Figure 15. The trajectories of (a) θ ˜ i ( t ) ; (b) ψ ˜ i ( t ) .
Table 1. Parameters and their meanings.
| Symbol | Meaning |
| --- | --- |
| $q$ | the leader's state |
| $y$ | the leader's output |
| $Q(w)$ | the leader's state matrix |
| $w$ | an unknown parameter vector |
| $E$ | the leader's output matrix |
| $q_i, \dot q_i, \ddot q_i$ | the generalized coordinate, velocity, and acceleration |
| $M_i(q_i)$ | the inertia matrix |
| $C_i(q_i, \dot q_i)$ | the Coriolis and centrifugal terms |
| $G_i(q_i)$ | the vector of gravitational force |
| $u_i$ | the control input |
| $f_i(q_i, \dot q_i, \ddot q_i)$ | the unknown external disturbance |
| $p_i u_i$ | the actuator fault |
| $\hat q_i$ | the estimate of $q$ |
| $\hat y_i$ | the estimate of $y$ |
| $\hat w_i$ | the estimate of $w$ |
| $\hat E_i$ | the estimate of $E$ |
| $\zeta_i$ | the column vector of output matrix observation errors |
| $\hat p_i$ | the estimate of $p_i^* = 1/p_i$ |
| $e_{1i}, e_{2i}$ | the tracking error vectors |
| $\chi = [\phi, \theta, \psi]^T$ | the attitude vector of the quadrotor rigid body |
| $\tilde q$ | the estimation error of $q$ |
| $\tilde w$ | the estimation error of $w$ |
Table 2. Parameters of two-link manipulators.
| Parameter | Manipulator 1 | Manipulator 2 | Manipulator 3 | Manipulator 4 |
| --- | --- | --- | --- | --- |
| $l_{i1}$ (m) | 0.98 | 1 | 0.96 | 1 |
| $l_{i2}$ (m) | 1 | 0.95 | 1 | 1.02 |
| $m_{i1}$ (kg) | 1.02 | 0.96 | 1.01 | 1.04 |
| $m_{i2}$ (kg) | 1.12 | 1.15 | 1.07 | 1.09 |
| $J_{i1}$ (kg·m²) | 0.23 | 0.21 | 0.19 | 0.21 |
| $J_{i2}$ (kg·m²) | 0.41 | 0.4 | 0.42 | 0.41 |
Table 3. The comparison results.
| Tracking Error | Overall Performance ($\bar e$, $\delta$) | Overall Performance in [22] ($\bar e$, $\delta$) | Steady-State Performance ($\bar e$, $\delta$) | Steady-State Performance in [22] ($\bar e$, $\delta$) |
| --- | --- | --- | --- | --- |
| $e_{11}$ | −0.039, 0.280 | 0.979, 1.475 | −0.006, 0.008 | 1.623, 0.325 |
| $e_{12}$ | 0.065, 0.652 | 0.571, 1.760 | −0.003, 0.009 | 1.360, 0.324 |
| $e_{13}$ | 0.072, 0.972 | 0.473, 1.708 | −0.006, 0.014 | 1.062, 0.322 |
| $e_{14}$ | −0.011, 1.192 | 0.091, 1.595 | 0.006, 0.014 | 1.322, 0.366 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, Z.; He, H.; Han, C.; Lin, B.; Shi, M.; Qin, K. The Distributed Adaptive Bipartite Consensus Tracking Control of Networked Euler–Lagrange Systems with an Application to Quadrotor Drone Groups. Drones 2024, 8, 450. https://doi.org/10.3390/drones8090450


