Article

Adaptive Event-Triggered Consensus of Multi-Agent Systems in Sense of Asymptotic Convergence

by Zhicheng Hou, Zhikang Zhou, Hai Yuan, Weijun Wang, Jian Wang and Zheng Xu
1 School of Automation, Guangdong Polytechnic Normal University, Guangzhou 523952, China
2 Guangzhou Institute of Advanced Technology, Guangzhou 511458, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(2), 339; https://doi.org/10.3390/s24020339
Submission received: 22 November 2023 / Revised: 16 December 2023 / Accepted: 3 January 2024 / Published: 6 January 2024
(This article belongs to the Section Intelligent Sensors)

Abstract

In this paper, the asymptotic consensus control of multi-agent systems with general linear agent dynamics is investigated. A neighbor-based adaptive event-triggering strategy with a dynamic triggering threshold is proposed, which yields a fully distributed control of the multi-agent system that depends only on the states of neighboring agents at their triggering moments. Using the Lyapunov method, we prove that the states of the agents converge asymptotically. In addition, the proposed event-triggering strategy is proven to exclude Zeno behavior. The numerical simulation results illustrate that the agent states achieve consensus in the sense of asymptotic convergence. Furthermore, the proposed strategy is shown to be scalable in the case of a variable number of agents.

1. Introduction

Recent research into multi-agent systems (MASs) paves the way for energy management and scheduling in smart grids [1], robot formation [2], and sensor networks [3,4]. Due to these wide applications, the consensus of MASs has attracted widespread attention [5,6]. As the size of an MAS increases, the limited communication bandwidth and energy resources of the agents (such as mobile robots powered by batteries) become difficult issues that must be addressed in controller design. To deal with these problems, distributed communication was proposed, in which local information is exchanged between neighboring agents [7]. Event-triggered strategies provide an effective means of reducing the communication frequency, and thus the energy consumption, of the agents by transmitting agent states only when a triggering condition is activated [7,8,9,10].
The cooperative control of MASs enables organized agents to accomplish complex tasks. Research in this area has attracted considerable attention for decades [11] and has evolved from centralized to distributed control [12]. Centralized methods normally depend on massive communication, which heavily burdens the communication bandwidth of a large-scale MAS. In addition, a high communication frequency leads to fast energy consumption, which shortens the operating time of battery-powered systems. For these reasons, distributed control has received increasing attention. In most works on distributed control, the agents are assumed to communicate continuously. Furthermore, the agents are assumed to know the topology of the communication network, as in [13,14]. However, such continuous control requires high-frequency communication between agents.
To overcome this drawback, sampled-data control has been proposed, which relies on time-triggered strategies with a predetermined sampling sequence. Recently, a large body of work on sampled-data control has been developed, such as [15,16,17]. In these works, agents need to communicate with their neighbors synchronously and to select an appropriate sampling period, which is difficult to implement in practice.
Event-triggered control, which is more efficient than sampled-data control in reducing unnecessary information transmission, was originally proposed in [18]. This method was then applied to MASs in [19,20,21,22,23]. The critical issue in event-triggered control is to determine the events and the triggering mechanism. The updates of the controller and the exchanges of information occur exclusively when the triggering condition is satisfied. In general, the implementation of event-triggered strategies in MASs depends on the eigenvalues of the Laplacian matrix, which constitute global information associated with the communication graph, as shown in [24,25,26,27,28,29]. To remove this dependence, fully distributed event-triggered control was proposed very recently [30,31], where the consensus error is proven to be uniformly ultimately bounded.
Recently, adaptive control has been successfully applied to MASs. Agents with first-order dynamics were considered in [27]. The control strategy was then extended to agents with general linear dynamics [28]. Within this framework, further issues such as actuator and sensor faults were considered [32]. A static non-smooth event-triggered protocol accounting for bounded uncertainties was investigated in [29]. In [33], external disturbances were considered. Notably, in the aforementioned papers, real-time feedback of the neighbor states is required.
To further reduce the frequency of communication between agents, dynamic event-triggered adaptive control, in which the event-triggering threshold is a dynamic variable, was proposed recently [34,35]. In [34], the dependence on the eigenvalues of the Laplacian matrix remains mandatory. This dependence is removed in [35], where the neighbors broadcast their information at the agents' triggering instants, which increases the communication burden at the triggering moments.
In this paper, we make significant modifications to the control strategy for MASs with linear dynamics proposed in [35]. Differently from the research objective of [23], where the consensus control of discrete-time multi-agent systems with parameter uncertainties was investigated, we focus on adaptive event-triggered control for general linear agents. The main contributions are twofold. First, the proposed control strategy is independent of global information, and the consensus error converges asymptotically, which is different from [31]. Second, continuous reading of and listening to neighbor states is not required for any agent. Each agent broadcasts its information to its neighbors exclusively when its event-triggering condition is satisfied. We also prove that Zeno behavior is excluded. The simulation results show that the consensus error converges asymptotically and that the communication frequency is significantly decreased.
The rest of this paper is organized as follows. In Section 2, some preliminaries on graph theory are given. In Section 3, the adaptive event-triggered strategy is proposed. In Section 4, illustrative numerical simulations are carried out, which demonstrate the effectiveness of the theoretical results. Some conclusions are given in Section 5.

2. Preliminaries

2.1. Problem Statement

We consider a multi-agent system with N agents. Each agent is modeled by general linear dynamics as follows.
$\dot{x}_i(t) = A x_i(t) + B u_i(t), \quad i = 1, 2, \ldots, N,$  (1)
where $x_i(t) \in \mathbb{R}^n$ and $u_i(t) \in \mathbb{R}^m$ represent the state and control input of agent i, respectively. $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are both constant matrices.
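For illustration, the following minimal sketch integrates the agent dynamics (1) with a forward-Euler step; the matrices, the step size dt, and all variable names are placeholder assumptions for illustration only, not values used in the later simulations.

```python
import numpy as np

def agent_step(x_i, u_i, A, B, dt):
    """One forward-Euler step of the agent dynamics (1): x_dot_i = A x_i + B u_i."""
    return x_i + dt * (A @ x_i + B @ u_i)

# Hypothetical double-integrator agent with zero input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
x = np.array([1.0, 0.0])
u = np.array([0.0])
x_next = agent_step(x, u, A, B, dt=0.01)
```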
The objective of this paper is to design a fully distributed event-triggered consensus protocol for a leaderless network of agents with dynamics modeled by (1). The requirements are as follows. The communication is distributed: each agent can only communicate with its neighboring agents and can only obtain their state information. The state variables of all agents ultimately reach asymptotic consensus (the definition of consensus in a multi-agent system is given below). Considering the constraints on communication bandwidth and energy in real systems, it is also required that agents do not communicate in real time.
Definition 1 ([34]). 
Consensus of Multi-Agent Systems. In a multi-agent system with N agents, for any given initial state,
$x(0) = [x_1^T(0), x_2^T(0), \ldots, x_N^T(0)]^T,$
if $\lim_{t \to \infty} \| x_i(t) - x_j(t) \| = 0$ for all $i, j = 1, 2, \ldots, N$, then the states of the agents achieve consensus.
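Definition 1 can be checked numerically by monitoring the largest pairwise state distance; the short sketch below is an illustration under the assumption that the agent states are stacked row-wise in an array.

```python
import numpy as np

def disagreement(x):
    """Maximum pairwise distance max_{i,j} ||x_i - x_j|| for agent states stacked row-wise in x (N x n)."""
    diffs = x[:, None, :] - x[None, :, :]
    return float(np.max(np.linalg.norm(diffs, axis=-1)))

# Consensus in the sense of Definition 1 corresponds to disagreement(x(t)) -> 0 as t grows.
```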

2.2. Graph Theory

In a multi-agent system, the communication between agents can be modeled using graph theory. Agents are represented by nodes, and the communication links between agents are represented by edges. A graph is denoted by $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a non-empty set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. An element of $\mathcal{E}$ is denoted by $(i, j)$, which indicates that node i can send information to node j. In this case, node i is called a neighbor of node j, and node j is an out-neighbor of node i. The set of neighbors of node i is denoted by $N_i = \{ j : (j, i) \in \mathcal{E} \}$, and the number of neighbors is $|N_i|$. If, for any $(i, j) \in \mathcal{E}$, there also exists $(j, i) \in \mathcal{E}$, then the graph is called an undirected graph. An undirected graph is connected if there exists a path (consisting of one or more edges) between every pair of distinct nodes; otherwise, it is not connected. For a graph $\mathcal{G}$, the adjacency matrix is denoted by $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$, whose elements are defined as $a_{ii} = 0$ and, for the off-diagonal elements, $a_{ij} = 1$ if $(i, j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. The Laplacian matrix associated with $\mathcal{G}$ is denoted by $\mathcal{L} = [l_{ij}] \in \mathbb{R}^{N \times N}$, where $l_{ii} = \sum_{j=1}^{N} a_{ij}$ and $l_{ij} = -a_{ij}$ ($i \neq j$). Thus, $|N_i| = l_{ii}$. Two assumptions are given as follows.
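As a concrete illustration of these definitions, the sketch below builds the adjacency and Laplacian matrices of an undirected graph from an edge list and checks connectivity through the second smallest eigenvalue of the Laplacian; the edge list is a made-up example, not one of the topologies used later.

```python
import numpy as np

def laplacian_from_edges(N, edges):
    """Adjacency matrix and Laplacian L = D - A of an undirected graph on N nodes."""
    adj = np.zeros((N, N))
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    lap = np.diag(adj.sum(axis=1)) - adj
    return adj, lap

# Hypothetical 4-node path graph 0-1-2-3.
adj, lap = laplacian_from_edges(4, [(0, 1), (1, 2), (2, 3)])
eigvals = np.sort(np.linalg.eigvalsh(lap))
lambda2 = eigvals[1]            # algebraic connectivity, as in Lemma 1 below
is_connected = lambda2 > 1e-9   # connected iff the second smallest eigenvalue is positive
```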
Assumption 1. 
The graph G is undirected and connected.
Assumption 2. 
(A, B) is stabilizable.
According to Assumption 2, the algebraic Riccati equation $R A + A^T R - R B B^T R + I = 0$ has a solution $R > 0$ [35].
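Numerically, R can be obtained with a standard continuous-time ARE solver; the sketch below uses SciPy with Q = I and a unit input weight, which corresponds exactly to the equation above (the A and B used here are placeholders).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder stabilizable pair (a double integrator), not one of the paper's examples.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Q = I and unit input weight give R A + A^T R - R B B^T R + I = 0.
R = solve_continuous_are(A, B, np.eye(2), np.eye(1))
residual = R @ A + A.T @ R - R @ B @ B.T @ R + np.eye(2)   # should be numerically close to zero
```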
Lemma 1 ([31]). 
The Laplacian matrix has a zero eigenvalue, and the corresponding eigenvector is $\mathbf{1}_N$, a column vector with all elements equal to 1. Moreover, all non-zero eigenvalues of the Laplacian matrix have positive real parts. For an undirected graph, if the graph is connected, its Laplacian matrix $\mathcal{L}$ has a single zero eigenvalue, and the smallest non-zero eigenvalue of $\mathcal{L}$, denoted by $\lambda_2(\mathcal{L})$, satisfies $\lambda_2(\mathcal{L}) = \min_{x \neq 0,\, \mathbf{1}^T x = 0} \frac{x^T \mathcal{L} x}{x^T x}$.

3. Main Results

In this section, an adaptive event-triggered consensus protocol is proposed. A schema of the controller is shown in Figure 1. In general, an event-based consensus protocol mainly consists of an event-based control law and a triggering function for each agent [30]. The design of the control input and the triggering function are given in the sequel.

3.1. The Consensus Control Module

Inspired by [31], we propose the adaptive control input for agent i as follows:
$u_i(t) = E \sum_{j=1}^{N} c_{ij}(t)\, a_{ij} \big( \hat{x}_i(t) - \hat{x}_j(t) \big), \qquad \dot{c}_{ij}(t) = \big( \hat{x}_i(t) - \hat{x}_j(t) \big)^T F \big( \hat{x}_i(t) - \hat{x}_j(t) \big), \quad i = 1, \ldots, N,$  (2)
where $c_{ij}(0) \geq 0$. Matrices $E \in \mathbb{R}^{m \times n}$ and $F = R B B^T R \in \mathbb{R}^{n \times n}$ are feedback gains of the controller. The continuous state of agent i is estimated by the following equation:
$\hat{x}_i(t) = x_i(t_k^i) + g_i(t)\,(t - t_k^i), \quad t \in [t_k^i, t_{k+1}^i),$  (3)
where
$g_i(t) = \begin{cases} 0, & t \leq t_2^i, \\ \dfrac{x_i(t_k^i) - x_i(t_{k-1}^i)}{t_k^i - t_{k-1}^i}, & t > t_2^i. \end{cases}$
Note that x i ( t 1 i ) = x i ( 0 ) .
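The estimator (3) uses only the two most recent broadcast samples of agent i; a minimal sketch is given below, where the argument names are illustrative.

```python
import numpy as np

def estimate_state(t, t_k, x_k, t_prev=None, x_prev=None):
    """Inter-event estimate x_hat_i(t) = x_i(t_k^i) + g_i(t) (t - t_k^i) as in (3).

    Before the second triggering instant the slope g_i is zero (a zero-order hold);
    afterwards it is the finite difference of the last two broadcast states.
    """
    if t_prev is None:
        g = np.zeros_like(x_k)
    else:
        g = (x_k - x_prev) / (t_k - t_prev)
    return x_k + g * (t - t_k)
```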
Remark 1. 
In (2), the adaptive parameter $c_{ij}(t)$ is used to regulate the weights of the communication links of the topology. The adaptation law for $c_{ij}(t)$ is designed based only on the state variables of agents i and j at their triggering moments. Therefore, the controller is fully distributed. In addition, continuous measurement and monitoring of the agent states are avoided.
Remark 2. 
If $c_{ij}(0) \geq 0$, we conclude that $c_{ij}(t) \geq c_{ij}(0) \geq 0$ for all t, since $\dot{c}_{ij} \geq 0$.
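Putting Section 3.1 together, a per-agent sketch of the control input (2) and a forward-Euler update of the adaptive gains is shown below; the data structures (a dictionary of triggered neighbor estimates) and the step size are illustrative assumptions.

```python
import numpy as np

def control_and_adapt(x_hat_i, x_hat_nbrs, c_i, E, F, dt):
    """Control input u_i = E * sum_j c_ij a_ij (x_hat_i - x_hat_j) as in (2), together with an
    Euler update of c_ij_dot = (x_hat_i - x_hat_j)^T F (x_hat_i - x_hat_j), using only the
    latest triggered estimates of the neighbors (a_ij = 1 for j in N_i)."""
    u_i = np.zeros(E.shape[0])
    c_next = dict(c_i)
    for j, x_hat_j in x_hat_nbrs.items():
        d = x_hat_i - x_hat_j
        u_i += c_i[j] * (E @ d)
        c_next[j] = c_i[j] + dt * float(d @ F @ d)
    return u_i, c_next
```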

3.2. The Event-Triggering Protocol

The triggering function is designed as follows:
$f_i(t) = \sum_{j=1}^{N} (1 + c_{ij}) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) - \theta_i, \qquad \dot{\theta}_i = -\rho_i \theta_i - \sigma_i \Big[ \sum_{j=1}^{N} (1 + c_{ij}) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big],$  (4)
where $e_i(t) \triangleq \hat{x}_i(t) - x_i(t)$, $i = 1, \ldots, N$, is the estimation error of the state of agent i. Parameters $\rho_i$ and $\sigma_i$ are positive constant scalars, which need not be equal for different agents. The selection of these parameters is discussed later.
The triggering time for agent i is defined by $t_{k+1}^i \triangleq \inf \{ t > t_k^i \mid f_i(t) \geq 0 \}$, where $t_k^i$ represents the k-th triggering time of agent i, and $f_i(t) \geq 0$ is called the event-triggering condition. Note that $t_1^i = 0$. As can be seen from the event-based consensus framework in Figure 1, the update of the control signal of agent i depends on its own state at its triggering moments and on the states of its neighbors at their latest triggering moments.
In the second equation in (4), by choosing positive scalars $\rho_i$ and $\sigma_i$, we obtain $\dot{\theta}_i > -(\rho_i + \sigma_i)\theta_i$ except at the triggering moments. According to the comparison principle, it holds that $\theta_i(t) > \theta_i(0) \exp[-(\rho_i + \sigma_i) t]$.
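A sketch of the local triggering test (4) for one agent is given below; it assumes that the estimation error, the neighbor estimates, and the adaptive gains are already available locally, and it integrates the threshold dynamics with a forward-Euler step (all names are illustrative).

```python
import numpy as np

def h_i(e_i, x_hat_i, x_hat_nbrs, c_i, F):
    """h_i = sum_j (1 + c_ij) a_ij e_i^T F e_i - (1/8) sum_j a_ij (x_hat_i - x_hat_j)^T F (x_hat_i - x_hat_j),
    with both sums taken over the neighbors of agent i (a_ij = 1 for j in N_i)."""
    quad_e = float(e_i @ F @ e_i)
    h = 0.0
    for j, x_hat_j in x_hat_nbrs.items():
        d = x_hat_i - x_hat_j
        h += (1.0 + c_i[j]) * quad_e - 0.125 * float(d @ F @ d)
    return h

def check_trigger(e_i, x_hat_i, x_hat_nbrs, c_i, F, theta_i, rho_i, sigma_i, dt):
    """Evaluate f_i = h_i - theta_i and integrate theta_dot_i = -rho_i theta_i - sigma_i h_i."""
    h = h_i(e_i, x_hat_i, x_hat_nbrs, c_i, F)
    triggered = (h - theta_i) >= 0.0                         # event condition f_i(t) >= 0
    theta_next = theta_i + dt * (-rho_i * theta_i - sigma_i * h)
    return triggered, theta_next
```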

3.3. Consensus Analysis

In the sequel, we prove that the multi-agent system achieves asymptotic consensus under the proposed protocol and that Zeno behavior is excluded. Let us define the consensus error of agent i as $\xi_i \triangleq x_i - \frac{1}{N} \sum_{j=1}^{N} x_j$. The compact form of $\xi_i$ for all agents is represented by the vector $\xi = [\xi_1^T, \ldots, \xi_N^T]^T$.
Theorem 1. 
Suppose that Assumptions 1 and 2 hold and choose $E = -B^T R$, where $R > 0$ is the solution of the algebraic Riccati equation (ARE) $R A + A^T R - R B B^T R + I = 0$. Then, the consensus error $\xi$ converges to zero asymptotically and the adaptive parameters $c_{ij}$ in Equation (2) are uniformly ultimately bounded, provided that the adaptive protocol (2) satisfies $c_{ij}(0) \geq 0$ and the event-triggering function (4) satisfies $\theta_i(0) > 0$, $\sigma_i < 1$, and $\sigma_i + \rho_i > 1$.
Proof. 
Based on the design of the control input $u_i(t) = E \sum_{j=1}^{N} c_{ij}(t) a_{ij} (\hat{x}_i(t) - \hat{x}_j(t))$ and the ARE, we choose the feedback matrix in the control input as $E = -B^T R$ and the Lyapunov function candidate as follows:
$V_1 = \sum_{i=1}^{N} \xi_i^T R \xi_i.$  (5)
The time derivative of the consensus error yields $\dot{\xi}_i = A \xi_i + B E \sum_{j=1}^{N} c_{ij}(t) a_{ij} (\hat{x}_i - \hat{x}_j)$ (the control inputs cancel in the network average since they sum to zero by symmetry), and $\xi_i - \xi_j = x_i - x_j$.
Then, the derivative of $V_1$ yields:
$\dot{V}_1 = 2 \sum_{i=1}^{N} \xi_i^T R \dot{\xi}_i = \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i + 2 \sum_{i=1}^{N} \xi_i^T R B E \sum_{j=1}^{N} c_{ij} a_{ij} (\hat{x}_i - \hat{x}_j).$  (6)
Since the communication topology of the MAS considered in this paper is undirected, the entries of the adjacency matrix satisfy $a_{ij} = a_{ji}$, and the adaptive gains satisfy $c_{ij}(t) = c_{ji}(t)$. Because $E = -B^T R$ and $F = R B B^T R$, we have
$\sum_{i=1}^{N} \xi_i^T R B E \sum_{j=1}^{N} c_{ij} a_{ij} (\hat{x}_i - \hat{x}_j) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (\xi_i - \xi_j)^T F (\hat{x}_i - \hat{x}_j).$  (7)
By substituting Equation (7) into (6), we can rewrite $\dot{V}_1$ as follows.
$\dot{V}_1 = \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i - \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (\xi_i - \xi_j)^T F (\hat{x}_i - \hat{x}_j).$  (8)
Recalling that $\xi_i - \xi_j = x_i - x_j$ and $e_i = \hat{x}_i - x_i$, we obtain the following equation.
$\xi_i - \xi_j = (\hat{x}_i - \hat{x}_j) - (e_i - e_j).$  (9)
According to (9), we can rewrite (8) as follows.
$\dot{V}_1 = \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i - \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (\hat{x}_i - \hat{x}_j).$  (10)
Using Young’s inequality [31], we can obtain
$\sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (\hat{x}_i - \hat{x}_j) \leq \frac{1}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (11)
Substituting (11) into (10), we can rewrite $\dot{V}_1$ as follows.
$\dot{V}_1 \leq \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i - \frac{3}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (12)
To handle the adaptive gains, we add a second term $V_2$ to (5), which is given as follows.
$V_1 + V_2 = \sum_{i=1}^{N} \xi_i^T R \xi_i + \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{3 a_{ij} (c_{ij} - \alpha)^2}{8},$  (13)
where $\alpha$ is a positive constant. Recalling that $\dot{c}_{ij}(t) = (\hat{x}_i(t) - \hat{x}_j(t))^T F (\hat{x}_i(t) - \hat{x}_j(t))$, $i = 1, \ldots, N$, the time derivative of Equation (13) yields
$\dot{V}_1 + \dot{V}_2 \leq \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i - \frac{3}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha\, a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (14)
Note that
$\sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\xi_i - \xi_j)^T F (\xi_i - \xi_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (e_i - e_j)^T F (e_i - e_j) + 2 \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (x_i - x_j)^T F (e_i - e_j),$  (15)
where
$-\sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (x_i - x_j)^T F (e_i - e_j) \leq \frac{1}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (x_i - x_j)^T F (x_i - x_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (16)
Combining (15) and (16), we can obtain
$\sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \geq \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\xi_i - \xi_j)^T F (\xi_i - \xi_j) - \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (17)
Substituting (17) into (14), we have
$\dot{V}_1 + \dot{V}_2 \leq \sum_{i=1}^{N} \xi_i^T (R A + A^T R) \xi_i - \frac{\alpha}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\xi_i - \xi_j)^T F (\xi_i - \xi_j) + \frac{\alpha}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (e_i - e_j)^T F (e_i - e_j) - \frac{1}{4} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha\, a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) + \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (e_i - e_j).$  (18)
We rewrite Equation (18) as follows.
$\dot{V}_1 + \dot{V}_2 \leq \xi^T \Big[ I_N \otimes (R A + A^T R) - \frac{\alpha}{4} \mathcal{L} \otimes F \Big] \xi + \frac{\alpha}{2} \sum_{i=1}^{N} \Big[ \sum_{j=1}^{N} \Big( 1 + \frac{2}{\alpha} c_{ij} \Big) a_{ij} (e_i - e_j)^T F (e_i - e_j) - \frac{1}{2} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big],$  (19)
where $\xi = [\xi_1^T, \ldots, \xi_N^T]^T$.
Note that
$\sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij} (e_i - e_j)^T F (e_i - e_j) \leq 2 \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij}\, e_i^T F e_i + 2 \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij}\, e_j^T F e_j = 4 \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} a_{ij}\, e_i^T F e_i.$  (20)
Substituting (20) into (19), we obtain
$\dot{V}_1 + \dot{V}_2 \leq \xi^T \Big[ I_N \otimes (R A + A^T R) - \frac{\alpha}{4} \mathcal{L} \otimes F \Big] \xi + 2 \alpha \sum_{i=1}^{N} \Big[ \sum_{j=1}^{N} \Big( 1 + \frac{2}{\alpha} c_{ij} \Big) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big].$  (21)
According to Lemma 1, we have $\xi^T (\mathcal{L} \otimes F) \xi \geq \lambda_2(\mathcal{L})\, \xi^T (I_N \otimes F) \xi$. Then, we can rewrite (21) as follows.
$\dot{V}_1 + \dot{V}_2 \leq \xi^T \Big[ I_N \otimes (R A + A^T R) - \frac{\lambda_2(\mathcal{L})\,\alpha}{4} I_N \otimes F \Big] \xi + 2 \alpha \sum_{i=1}^{N} \Big[ \sum_{j=1}^{N} \Big( 1 + \frac{2}{\alpha} c_{ij} \Big) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big],$  (22)
where $\alpha$ satisfies $\alpha \geq \max \{ 2, 4 / \lambda_2(\mathcal{L}) \}$. Then,
$\dot{V}_1 + \dot{V}_2 \leq \xi^T \big[ I_N \otimes (R A + A^T R) - I_N \otimes F \big] \xi + 2 \alpha \sum_{i=1}^{N} \Big[ \sum_{j=1}^{N} (1 + c_{ij}) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big] \leq -\xi^T \xi + 2 \alpha \sum_{i=1}^{N} \Big[ \sum_{j=1}^{N} (1 + c_{ij}) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j) \Big].$  (23)
We now reconstruct the Lyapunov function by adding a third term $V_3 = 2 \alpha \sum_{i=1}^{N} \theta_i$ to (13), as follows.
$V_1 + V_2 + V_3 = \sum_{i=1}^{N} \xi_i^T R \xi_i + \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{3 a_{ij} (c_{ij} - \alpha)^2}{8} + 2 \alpha \sum_{i=1}^{N} \theta_i,$  (24)
where $\theta_i(0) > 0$. We calculate the derivative of (24), with $h_i = \sum_{j=1}^{N} (1 + c_{ij}) a_{ij}\, e_i^T F e_i - \frac{1}{8} \sum_{j=1}^{N} a_{ij} (\hat{x}_i - \hat{x}_j)^T F (\hat{x}_i - \hat{x}_j)$, which yields
$\dot{V}_1 + \dot{V}_2 + \dot{V}_3 \leq -\xi^T \xi + 2 \alpha \sum_{i=1}^{N} \big[ -\rho_i \theta_i + (1 - \sigma_i) h_i \big],$  (25)
where $\sigma_i < 1$. Considering (4), the inequality (25) yields
$\dot{V}_1 + \dot{V}_2 + \dot{V}_3 \leq -\xi^T \xi + 2 \alpha \sum_{i=1}^{N} (1 - \sigma_i - \rho_i) \theta_i.$  (26)
By choosing $\sigma_i + \rho_i > 1$, we have $\dot{V}_1 + \dot{V}_2 + \dot{V}_3 \leq 0$. The right-hand side of (26) is equal to zero if and only if $\xi = 0$ and $\theta = 0$, where $\theta = [\theta_1, \ldots, \theta_N]^T$. Hence, the largest invariant set is $\Omega = \{ \xi = 0, \theta = 0 \}$. By LaSalle's invariance principle, we conclude that $\xi(t) \to 0$ and $\theta(t) \to 0$ as $t \to \infty$, and that $c_{ij} \to \bar{c}$, where $\bar{c}$ is a finite positive constant. Thus, the consensus of (1) is achieved in the sense of asymptotic convergence. That ends the proof. □
In the sequel, we discuss how Zeno behavior is excluded. Let us define $\Delta_k^i = t_{k+1}^i - t_k^i$.
Theorem 2. 
If the parameters of the controller (2) and the triggering function (4) are selected such that $c_{ij}(0) \geq 0$, $\theta_i(0) > 0$, $\sigma_i < 1$, and $\sigma_i + \rho_i > 1$, then there does not exist any finite positive constant T such that $\lim_{m \to \infty} \sum_{k=0}^{m} \Delta_k^i < T$.
Proof. 
Recall that $e_i(t) \triangleq \hat{x}_i(t) - x_i(t)$, $i = 1, \ldots, N$; then,
$\dot{e}_i = g_i(t) - \Big( A x_i + \sum_{j=1}^{N} c_{ij} a_{ij} B E (\hat{x}_i - \hat{x}_j) \Big).$  (27)
The derivative of $\| e_i(t) \|$ then satisfies
$\frac{d \| e_i(t) \|}{d t} = \frac{e_i^T \dot{e}_i}{\| e_i \|} \leq \| \dot{e}_i \| \leq \| A \| \| e_i \| + \| g_i \| + \Big\| A x_i + \sum_{j=1}^{N} c_{ij} a_{ij} B E (\hat{x}_i - \hat{x}_j) \Big\|,$  (28)
where $t \in [t_k^i, t_{k+1}^i)$. According to Theorem 1, we infer that $\dot{x}_i = A x_i + \sum_{j=1}^{N} c_{ij} a_{ij} B E (\hat{x}_i - \hat{x}_j)$ is bounded. Thus, there exists $\bar{\omega}_i$ satisfying $\bar{\omega}_i > \| g_i \| + \big\| A x_i + \sum_{j=1}^{N} c_{ij} a_{ij} B E (\hat{x}_i - \hat{x}_j) \big\|$.
We proceed by contradiction in the sequel. We first assume that Zeno behavior exists, such that
$\lim_{k \to \infty} t_k^i = \lim_{k \to \infty} t_{k+1}^i < \infty.$  (29)
a.
When $A \neq 0$,
By using the comparison principle, we can rewrite (28) as follows.
$\| e_i(t) \| \leq \frac{\bar{\omega}_i}{\| A \|} \Big( e^{\| A \| (t - t_k^i)} - 1 \Big), \quad t \in [t_k^i, t_{k+1}^i).$  (30)
Therefore,
$\| e_i^T(t) R B \| \leq \frac{\bar{\omega}_i \| R B \|}{\| A \|} \Big( e^{\| A \| (t - t_k^i)} - 1 \Big), \quad t \in [t_k^i, t_{k+1}^i).$  (31)
According to (4), we can infer that, at the triggering instant $t = t_{k+1}^i$,
$\sum_{j=1}^{N} \big( 1 + c_{ij}(t_{k+1}^i) \big) a_{ij}\, e_i(t_{k+1}^i)^T F e_i(t_{k+1}^i) \geq \frac{1}{8} \sum_{j=1}^{N} a_{ij} \big( \hat{x}_i(t_{k+1}^i) - \hat{x}_j(t_{k+1}^i) \big)^T F \big( \hat{x}_i(t_{k+1}^i) - \hat{x}_j(t_{k+1}^i) \big) + \theta_i(0) e^{-(\rho_i + \sigma_i) t_{k+1}^i}.$
Then,
$\| e_i^T(t_{k+1}^i) R B \|^2 \geq \frac{ \frac{1}{8} \sum_{j=1}^{N} \| q_j^T(t_{k+1}^i) R B \|^2 + \theta_i(0) e^{-(\rho_i + \sigma_i) t_{k+1}^i} }{ l_{ii} (1 + \bar{c}) },$
where $q_j(t_{k+1}^i) = \hat{x}_i(t_{k+1}^i) - \hat{x}_j(t_{k+1}^i)$.
According to (29), we have
$\frac{ \frac{1}{8} \sum_{j=1}^{N} \| q_j^T(t_{k+1}^i) R B \|^2 + \theta_i(0) e^{-(\rho_i + \sigma_i) t_{k+1}^i} }{ l_{ii} (1 + \bar{c}) } \leq \Big[ \frac{\bar{\omega}_i \| R B \|}{\| A \|} \Big( e^{\| A \| (t_{k+1}^i - t_k^i)} - 1 \Big) \Big]^2,$
which, as $t_{k+1}^i - t_k^i \to 0$ under assumption (29), implies that $\theta_i(0) < 0$. This contradicts the condition $\theta_i(0) > 0$. Therefore, (29) does not hold; i.e., Zeno behavior does not exist.
b.
When $A = 0$, according to Equations (28) and (29), the bound (31) becomes linear in $t - t_k^i$, and we obtain
$\frac{ \frac{1}{8} \sum_{j=1}^{N} \| q_j^T(t_{k+1}^i) R B \|^2 + \theta_i(0) e^{-(\rho_i + \sigma_i) t_{k+1}^i} }{ l_{ii} (1 + \bar{c}) } \leq \big[ \bar{\omega}_i \| R B \| (t_{k+1}^i - t_k^i) \big]^2,$
which also implies that $\theta_i(0) < 0$, as $t_{k+1}^i - t_k^i \to 0$. This contradicts the condition $\theta_i(0) > 0$. Therefore, (29) does not hold; i.e., Zeno behavior does not exist.
According to the aforementioned analysis, the Zeno behavior of MAS is excluded. That ends the proof. □

4. Simulation Results

In this section, we carry out numerical simulations to validate the proposed theoretical results and compare them with some related works to show the improvements.
We consider the following two scenarios. In the first scenario, a multi-agent system with five agents is considered. The dynamics of each agent are modeled by a third-order linear system (34).
$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$  (34)
The initial states of the five agents are given by x 1 ( 0 ) = [ 0.25 , 0.25 , 0.25 ] T , x 2 ( 0 ) = [ 1.25 , 1.25 , 1.25 ] T , x 3 ( 0 ) = [ 2 , 2 , 2 ] T , x 4 ( 0 ) = [ 2.5 , 2.5 , 2.5 ] T and x 5 ( 0 ) = [ 4 , 4 , 4 ] T .
The communication topology of the MAS is shown in Figure 2. It is evident that the graph is undirected and connected, as its Laplacian yields
$\mathcal{L} = \begin{bmatrix} 3 & -1 & -1 & -1 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ -1 & -1 & 2 & 0 & 0 \\ -1 & 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & -1 & 1 \end{bmatrix}.$
In controller (2), we choose $c_{ij}(0) = 0.25$ if $j \in N_i$. The gain matrices are obtained by solving the ARE $R A + A^T R - R B B^T R + I = 0$, as follows.
$E = \begin{bmatrix} -1.0000 & -2.4142 & -2.4142 \end{bmatrix}, \quad F = \begin{bmatrix} 1.0000 & 2.4142 & 2.4142 \\ 2.4142 & 5.8284 & 5.8284 \\ 2.4142 & 5.8284 & 5.8284 \end{bmatrix}.$
In (4), the initial value of the event-triggering threshold is set to $\theta_i(0) = 0.25$. The scalars $\rho_i$ and $\sigma_i$ are given by $\rho_i = 0.3$ and $\sigma_i = 0.8$ for all $i \in \mathcal{V}$.
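The gain matrices above can be reproduced with a few lines of SciPy; the sketch below assumes Q = I and a unit input weight, matching the ARE used in this paper, and the sign convention E = −BᵀR of Theorem 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

R = solve_continuous_are(A, B, np.eye(3), np.eye(1))
E = -B.T @ R          # expected to match E above (entries of magnitude 1, 2.4142, 2.4142)
F = R @ B @ B.T @ R   # expected to match F above
```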
For comparison, we also carried out simulations using the control strategies proposed in [31,35]. In both simulations, the agent model and initial states are the same as in (34). In [31], the adaptive parameters are also initialized with $c_{ij}(0) = 0.25$ if $j \in N_i$. In [35], the parameters of the triggering threshold are likewise given by $\rho_i = 0.3$ and $\sigma_i = 0.8$ for all $i \in \mathcal{V}$.
In the second scenario, we consider an MAS with eight agents. The dynamics of each agent are modeled as follows.
$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -0.4 & -0.5 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$  (35)
The initial states of the eight agents are given by x 1 ( 0 ) = [ 0.25 , 0.25 , 0.25 ] T , x 2 ( 0 ) = [ 1.5 , 1.5 , 1.5 ] T , x 3 ( 0 ) = [ 1.25 , 1.25 , 1.25 ] T , x 4 ( 0 ) = [ 2 , 2 , 2 ] T , x 5 ( 0 ) = [ 2.5 , 2.5 , 2.5 ] T , x 6 ( 0 ) = [ 3.25 , 3.25 , 3.25 ] T , x 7 ( 0 ) = [ 4.5 , 4.5 , 4.5 ] T and x 8 ( 0 ) = [ 4.75 , 4.75 , 4.75 ] T .
The network topology of the MAS with eight agents is shown in Figure 3. The graph is also undirected and connected, and its Laplacian matrix is as follows.
$\mathcal{L} = \begin{bmatrix} 4 & -1 & -1 & -1 & 0 & -1 & 0 & 0 \\ -1 & 3 & -1 & 0 & 0 & 0 & -1 & 0 \\ -1 & -1 & 4 & -1 & -1 & 0 & 0 & 0 \\ -1 & 0 & -1 & 5 & -1 & -1 & 0 & -1 \\ 0 & 0 & -1 & -1 & 4 & 0 & -1 & -1 \\ -1 & 0 & 0 & -1 & 0 & 2 & 0 & 0 \\ 0 & -1 & 0 & 0 & -1 & 0 & 2 & 0 \\ 0 & 0 & 0 & -1 & -1 & 0 & 0 & 2 \end{bmatrix}.$
In controller (2), we again choose $c_{ij}(0) = 0.25$ if $j \in N_i$. The gain matrices are obtained by solving the ARE $R A + A^T R - R B B^T R + I = 0$, as follows.
$E = \begin{bmatrix} -1.0000 & -1.9956 & -1.7893 \end{bmatrix}, \quad F = \begin{bmatrix} 1.0000 & 1.9956 & 1.7893 \\ 1.9956 & 3.9823 & 3.5707 \\ 1.7893 & 3.5707 & 3.2018 \end{bmatrix}.$
In (4), the initial value of the event-triggering threshold is set to $\theta_i(0) = 0.25$. The scalars $\rho_i$ and $\sigma_i$ are given by $\rho_i = 0.3$ and $\sigma_i = 0.8$ for all $i \in \mathcal{V}$.
For comparison, we again carried out simulations using the control strategies proposed in [31,35]. In both simulations, the agent model and initial states are the same as in (35). In [31], the adaptive parameters are also initialized with $c_{ij}(0) = 0.25$ if $j \in N_i$. In [35], the parameters of the triggering threshold are likewise given by $\rho_i = 0.3$ and $\sigma_i = 0.8$ for all $i \in \mathcal{V}$.
Using the controllers proposed in this paper and in [31,35], a comparison of the first components $\xi_i^{(1)}$, $i \in \mathcal{V}$, of the consensus error $\xi$ is given in Figure 4, where we observe that, with the proposed control, the MAS achieves consensus asymptotically and the convergence speed is faster than in [31,35].
The control outputs of the agents under the three methods are compared in Figure 5. We can see that the control outputs under both the proposed strategy and that of [35] fluctuate less than those under [31]. However, our proposed method produces fewer triggering events than [35], as shown in Figure 6.
Table 1 and Table 2, respectively, show the statistics of the triggering times of five and eight agents, using the three control strategies. According to Table 1 and Table 2, we can observe that the triggering frequency is significantly reduced by using our proposed controller, compared with that in papers [31,35].
In order to show the scalability of the proposed strategy, we consider two scenarios, i.e., one agent joins (leaves) the group of agents at a certain time instant. We reconsider the MAS represented by the topology in Figure 2.
In the first scenario, the 6th agent joins the group at 3 s, and the topology of the MAS becomes that of Figure 7. Using the proposed strategy, the consensus errors $\xi$ are shown in Figure 8, where we observe that the MAS achieves consensus asymptotically even though the 6th agent joins the network at 3 s. In the second scenario, the 2nd agent is disconnected from the other agents at 3 s, and the topology of the MAS becomes that of Figure 9. The consensus error $\xi$ is shown in Figure 10, where we observe that the MAS also achieves consensus asymptotically under the proposed strategy, even though the 2nd agent is disconnected from its neighbors at 3 s.
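In these scalability experiments only the communication graph changes at run time; a minimal sketch of how an agent's neighbor sets (and hence the local sums in (2) and (4)) are updated when a node joins or leaves is given below (the data structure is an illustrative assumption).

```python
def add_agent(neighbors, new_id, links):
    """Register a joining agent and its undirected links; neighbors maps an agent id to a set of neighbor ids."""
    neighbors[new_id] = set()
    for j in links:
        neighbors[new_id].add(j)
        neighbors[j].add(new_id)

def remove_agent(neighbors, agent_id):
    """Disconnect a leaving agent from all of its neighbors."""
    for j in neighbors.pop(agent_id, set()):
        neighbors[j].discard(agent_id)
```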

5. Conclusions and Perspectives

In this paper, we address the consensus problem of multi-agent systems with general linear dynamics. A dynamic event-triggered adaptive control strategy is proposed. Compared with the existing works, the proposed strategy leads to consensus of the agents' states in the sense of asymptotic convergence. Furthermore, it improves the convergence speed and reduces the triggering frequency. The proposed strategy is fully distributed and scalable, and it does not require global information. The agent states achieve consensus asymptotically even if the communication topology switches. Under this strategy, continuous communication among agents and simultaneous broadcasts of the neighbors' information are avoided.
In practice, time delays widely exist in discontinuous communications. The consensus problem with uncertain and stochastic communication delays is an open topic for further study. On the other hand, the actuator saturation of agents should be considered, which introduces nonlinearities into the agent dynamics. In this case, the trade-off between the triggering frequency and the response rapidity should be addressed in future work.

Author Contributions

Z.H. contributed to the conception and idea of the study and completed most of the writing of the manuscript. Z.Z. contributed to the derivation of some of the theoretical results and part of the simulation. H.Y. performed the analysis with constructive discussions. W.W. contributed significantly to the manuscript preparation. J.W. performed the data analyses and part of the writing of the manuscript. Z.X. performed the analysis of the simulation results. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out with the support of the National Key Research and Development Program of China, No. 2018YFA0902900, and the Chinese Universities Industry-Academia-Research Innovation Fund, No. 2021ZYA04001.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mahela, O.P.; Khosravy, M.; Gupta, N.; Khan, B.; Alhelou, H.H.; Mahla, R.; Patel, N.; Siano, P. Comprehensive Overview of Multi-Agent Systems for Controlling Smart Grids. Csee J. Power Energy Syst. 2022, 8, 115–131. [Google Scholar] [CrossRef]
  2. Ge, X.; Han, Q.L. Distributed Formation Control of Networked Multi-Agent Systems Using a Dynamic Event-Triggered Communication Mechanism. IEEE Trans. Ind. Electron. 2017, 64, 8118–8127. [Google Scholar] [CrossRef]
  3. Ge, X.; Han, Q.L. Distributed Sampled-Data Asynchronous H∞ Filtering of Markovian Jump Linear Systems over Sensor Networks. Signal Process. 2016, 127, 86–99. [Google Scholar] [CrossRef]
  4. Wang, J.; Zhang, X.M.; Han, Q.L. Event-Triggered Generalized Dissipativity Filtering for Neural Networks With Time-Varying Delays. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 77–88. [Google Scholar] [CrossRef] [PubMed]
  5. Zhan, J.; Li, X. Consensus in Networked Multiagent Systems With Stochastic Sampling. IEEE Trans. Circuits Syst. II Express Briefs 2017, 64, 982–986. [Google Scholar] [CrossRef]
  6. Wen, G.; Wang, H.; Yu, X.; Yu, W. Bipartite Tracking Consensus of Linear Multi-Agent Systems With a Dynamic Leader. IEEE Trans. Circuits Syst. II Express Briefs 2018, 65, 1204–1208. [Google Scholar] [CrossRef]
  7. Ding, L.; Han, Q.L.; Ge, X.; Zhang, X.M. An Overview of Recent Advances in Event-Triggered Consensus of Multiagent Systems. IEEE Trans. Cybern. 2018, 48, 1110–1123. [Google Scholar] [CrossRef]
  8. Ge, X.; Han, Q.L.; Ding, D.; Zhang, X.M.; Ning, B. A Survey on Recent Advances in Distributed Sampled-Data Cooperative Control of Multi-Agent Systems. Neurocomputing 2018, 275, 1684–1701. [Google Scholar] [CrossRef]
  9. Nowzari, C.; Garcia, E.; Cortés, J. Event-Triggered Communication and Control of Networked Systems for Multi-Agent Consensus. Automatica 2019, 105, 1–27. [Google Scholar] [CrossRef]
  10. Yang, D.; Ren, W.; Liu, X. Decentralized Consensus for Linear Multi-Agent Systems under General Directed Graphs Based on Event-Triggered/Self-Triggered Strategy. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 1983–1988. [Google Scholar] [CrossRef]
  11. Murray, R.M. Recent Research in Cooperative Control of Multivehicle Systems. J. Dyn. Syst. Meas. Control. 2007, 129, 571–583. [Google Scholar] [CrossRef]
  12. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination. IEEE Trans. Ind. Informatics 2013, 9, 427–438. [Google Scholar] [CrossRef]
  13. Li, Z.; Duan, Z.; Chen, G.; Huang, L. Consensus of Multiagent Systems and Synchronization of Complex Networks: A Unified Viewpoint. IEEE Trans. Circuits Syst. Regul. Pap. 2010, 57, 213–224. [Google Scholar] [CrossRef]
  14. Ma, C.; Zhang, J. Necessary and Sufficient Conditions for Consensusability of Linear Multi-Agent Systems. IEEE Trans. Autom. Control. 2010, 55, 1263–1268. [Google Scholar] [CrossRef]
  15. Yu, W.; Zheng, W.X.; Chen, G.; Ren, W.; Cao, J. Second-Order Consensus in Multi-Agent Dynamical Systems with Sampled Position Data. Automatica 2011, 47, 1496–1503. [Google Scholar] [CrossRef]
  16. Wen, G.; Duan, Z.; Yu, W.; Chen, G. Consensus of Multi-Agent Systems with Nonlinear Dynamics and Sampled-Data Information: A Delayed-Input Approach: Consensus of Multi-Agent Systems with Sampled-Data Information. Int. J. Robust Nonlinear Control. 2013, 23, 602–619. [Google Scholar] [CrossRef]
  17. Ding, L.; Guo, G. Sampled-Data Leader-Following Consensus for Nonlinear Multi-Agent Systems with Markovian Switching Topologies and Communication Delay. J. Frankl. Inst. 2015, 352, 369–383. [Google Scholar] [CrossRef]
  18. Tabuada, P. Event-Triggered Real-Time Scheduling of Stabilizing Control Tasks. IEEE Trans. Autom. Control. 2007, 52, 1680–1685. [Google Scholar] [CrossRef]
  19. Dimarogonas, D.V.; Frazzoli, E.; Johansson, K.H. Distributed Event-Triggered Control for Multi-Agent Systems. IEEE Trans. Autom. Control. 2012, 57, 1291–1297. [Google Scholar] [CrossRef]
  20. Seyboth, G.S.; Dimarogonas, D.V.; Johansson, K.H. Event-Based Broadcasting for Multi-Agent Average Consensus. Automatica 2013, 49, 245–252. [Google Scholar] [CrossRef]
  21. Zhu, W.; Jiang, Z.P.; Feng, G. Event-Based Consensus of Multi-Agent Systems with General Linear Models. Automatica 2014, 50, 552–558. [Google Scholar] [CrossRef]
  22. Nowzari, C.; Cortés, J. Distributed Event-Triggered Coordination for Average Consensus on Weight-Balanced Digraphs. Automatica 2016, 68, 237–244. [Google Scholar] [CrossRef]
  23. Liu, X.; Xuan, Y.; Zhang, Z.; Diao, Z.; Mu, Z.; Li, Z. Event-Triggered Consensus for Discrete-Time Multi-agent Systems with Parameter Uncertainties Based on a Predictive Control Scheme. J. Syst. Sci. Complex. 2020, 33, 706–724. [Google Scholar] [CrossRef]
  24. Cheng, Y.; Ugrinovskii, V. Event-Triggered Leader-Following Tracking Control for Multivariable Multi-Agent Systems. Automatica 2016, 70, 204–210. [Google Scholar] [CrossRef]
  25. Hu, W.; Liu, L. Cooperative Output Regulation of Heterogeneous Linear Multi-Agent Systems by Event-Triggered Control. IEEE Trans. Cybern. 2017, 47, 105–116. [Google Scholar] [CrossRef] [PubMed]
  26. Xu, W.; Ho, D.W.C.; Li, L.; Cao, J. Event-Triggered Schemes on Leader-Following Consensus of General Linear Multiagent Systems Under Different Topologies. IEEE Trans. Cybern. 2017, 47, 212–223. [Google Scholar] [CrossRef] [PubMed]
  27. Xie, D.; Xu, S.; Zhang, B.; Li, Y.; Chu, Y. Consensus for Multi-agent Systems with Distributed Adaptive Control and an Event-triggered Communication Strategy. Iet Control. Theory Appl. 2016, 10, 1547–1555. [Google Scholar] [CrossRef]
  28. Zhu, W.; Zhou, Q.; Wang, D. Consensus of Linear Multi-Agent Systems via Adaptive Event-Based Protocols. Neurocomputing 2018, 318, 175–181. [Google Scholar] [CrossRef]
  29. Cheng, B.; Li, Z. Designing Fully Distributed Adaptive Event-Triggered Controllers for Networked Linear Systems With Matched Uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3645–3655. [Google Scholar] [CrossRef]
  30. Li, T.; Qiu, Q.; Zhao, C. A Fully Distributed Protocol with an Event-Triggered Communication Strategy for Second-Order Multi-Agent Systems Consensus with Nonlinear Dynamics. Sensors 2021, 21, 4059. [Google Scholar] [CrossRef]
  31. Cheng, B.; Li, Z. Fully Distributed Event-Triggered Protocols for Linear Multiagent Networks. IEEE Trans. Autom. Control 2019, 64, 1655–1662. [Google Scholar] [CrossRef]
  32. Ye, D.; Chen, M.M.; Yang, H.J. Distributed Adaptive Event-Triggered Fault-Tolerant Consensus of Multiagent Systems With General Linear Dynamics. IEEE Trans. Cybern. 2019, 49, 757–767. [Google Scholar] [CrossRef] [PubMed]
  33. Qian, Y.Y.; Liu, L.; Feng, G. Distributed Event-Triggered Adaptive Control for Consensus of Linear Multi-Agent Systems with External Disturbances. IEEE Trans. Cybern. 2020, 50, 2197–2208. [Google Scholar] [CrossRef] [PubMed]
  34. He, W.; Xu, B.; Han, Q.L.; Qian, F. Adaptive Consensus Control of Linear Multiagent Systems With Dynamic Event-Triggered Strategies. IEEE Trans. Cybern. 2020, 50, 2996–3008. [Google Scholar] [CrossRef] [PubMed]
  35. Liu, K.; Ji, Z. Dynamic Event-Triggered Consensus of General Linear Multi-Agent Systems With Adaptive Strategy. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3440–3444. [Google Scholar] [CrossRef]
Figure 1. Event-triggered strategy framework for agent i.
Figure 2. Undirected network topology of the MASs with five agents.
Figure 3. Undirected network topology of the MASs with eight agents.
Figure 4. First components of consensus errors. (a) First components of consensus errors of the five agents $\xi_i^{(1)}$, $i \in \mathcal{V}$, using the control strategies in [31,35] and in this paper, respectively. (b) First components of consensus errors of the eight agents $\xi_i^{(1)}$, $i \in \mathcal{V}$, using the control strategies in [31,35] and in this paper, respectively.
Figure 5. Control outputs. (a) Control outputs of the five agents, using the strategies in [31,35] and this paper. (b) Control outputs of the eight agents, using the strategies in [31,35] and this paper.
Figure 6. Triggering instants of agent i, $i = 1, \ldots, 5$. (a) The triggering instants of agent i under the control strategy proposed in [35]; (b) the triggering instants of agent i under the control strategy proposed in this paper.
Figure 7. Undirected network topology of the MASs with five agents, where the 6th agent joins at 3 s.
Figure 8. Consensus errors $\xi_i$, $i = 1, \ldots, 6$, under the control strategy of this paper.
Figure 9. Undirected network topology of the MASs with five agents, where the 2nd agent is disconnected at 3 s.
Figure 10. Consensus errors $\xi_i$, $i = 1, \ldots, 5$, under the control strategy of this paper.
Table 1. The number of events driven by the event triggers in the MAS with five agents.

Agents              1      2      3      4      5     Total
Under [31]        2398     92    143   2398    173     5204
Under [35]        3746   2671   2721   2558    555   12,251
Proposed control   999    199    230    990    613     3031
Table 2. The number of events driven by the event triggers in the MAS with eight agents.

Agents              1      2      3      4      5      6      7      8    Total
Under [31]        1529187220152623712913913915145481
Under [35]        2351    822   1420    681    453   1092    394    456    7669
Proposed control   544    462    404    644    481    308    450    516    3809
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
