Article

Distributed Optimization Control for Heterogeneous Multiagent Systems under Directed Topologies

School of Mathematics and Statistics, Shenzhen University, Shenzhen 518060, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1479; https://doi.org/10.3390/math11061479
Submission received: 11 February 2023 / Revised: 10 March 2023 / Accepted: 13 March 2023 / Published: 17 March 2023
(This article belongs to the Special Issue Mathematic Control and Artificial Intelligence)

Abstract
This paper focuses on solutions to the distributed optimization coordination problem (DOCP) for heterogeneous multiagent systems under directed topologies. To begin with, a modified convex optimization problem is proposed, which amounts to a weighted average of the objective function of each agent. Sufficient conditions are established to ensure a unique solution to the DOCP. Then, despite external disruption, a distributed control mechanism is constructed that drives the state of each agent to its auxiliary state in a finite time. Furthermore, it is demonstrated that the outputs of all agents achieve the optimal value, ensuring global convergence. Moreover, the controller design is extended to event-triggered communication, and Zeno behavior is excluded. Finally, a simulation example is offered to exemplify the usefulness of the theoretical conclusions.

1. Introduction

The distributed cooperative control of multiagent systems (MASs) has received substantial attention in recent years due to its wide range of applications in consensus [1,2], the rendezvous of mobile vehicles [3], cooperative monitoring [4,5], the synchronous operation of distant generators in the power grid [6], and other domains. In particular, consensus, as a fundamental collective behavior in MASs, has attracted academics and practitioners alike [7,8,9,10]. On account of the conceivable utilization of this technology in various fields of engineering, physics, mathematics, and sociology, substantial effort has been concentrated on designing distributed protocols that achieve consensus under various considerations.
Amidst the extensive investigation of MASs, the distributed optimization problem (DOP), in which all agents reach the ideal value that minimizes the total of local objective functions [11], is one of the most rudimental and widely studied problems. The DOP is widely used in a variety of real-world applications [12,13]. There have been numerous studies about discrete-time and continuous-time optimization protocols, including the gradient algorithms in [14,15] and the Lagrangian-based algorithms in [16,17,18], to solve the DOP. In [19], distributed subgradient methods were proposed to solve the DOP. The adaptive optimization problem was studied in [20] based on an innovative distributed adaptive algorithm with irregular gradient gains. A fully distributed optimal algorithm was proposed in [21] for the continuous-time MASs. The optimization problem was turned into a tracking problem by estimating the property of the global optimal state in [22].
The traditional DOP was recently extended to the distributed optimal coordination problem (DOCP) in [23,24] by regarding the conception of virtual-objective systems. The DOCP stands for the collaboration of various continuous-time objective systems for optimal overall performance. As pointed out in [24], the existing distributed optimization algorithms cannot be directly applied because of the agents' dynamic behaviors. The DOCP can be resolved by constructing an integrated control law that does not require the optimal solution to be present in the closed-loop system [24,25,26]. The stability of the agents can be assured due to the characteristics of the complete graph Laplacian matrix, which means that the approaches in [24] may not be applicable to directed graphs. With different considerations on gradients and local objective functions, Mao et al. [25] devised a distributed ETC for solving the DOCP with time-varying networks. In [26], a distributed optimization control strategy for heterogeneous agents with disruption was proposed to solve the DOCP with an undirected graph. Li et al. [27] investigated the distributed optimal output consensus of heterogeneous linear multi-agent systems on unbalanced directed networks, in which the system disregards the impact of outside disturbances. The distributed consensus optimization problem of a multi-agent system with a delay on weight-balanced networks was studied in [28], which uses a continuous-time distributed optimization algorithm. In [29], a generalized distributed optimization problem for second-order multi-agent systems over a detail-balanced graph was studied based on a centralized event-firing control algorithm. Despite the effectiveness of the aforementioned approaches, only a few works examine the DOCP for heterogeneous MASs with directed topologies, which remains open and challenging.
Directed topologies are found in the bulk of practical networks, including industrial transmission lines and quotation systems [30]. Synchronization and consensus in directed networks have received a substantial amount of attention so far [31,32]. It is worthwhile to point out that, in the aforementioned works, many approaches developed for the DOCP can be applied to MASs only under an undirected network. A type of distributed coordination algorithms to work out the DOCP with a weight-balanced digraph is proposed in [33], which contains the global information. According to [24,33,34], since the network topology is undirected, the average state of the multiagent system is the optimal value of the DOCP. This feature may not be guaranteed in directed topologies, necessitating a rethinking of the optimization goal in directed topologies.
Among the aforementioned distributed optimization methods, event-triggered communication (ETC) is particularly appealing since it allows for fewer updating instants while still conserving resources. The ETC strategy, as illustrated in [35,36], can reduce communication and computing overhead in networked coupled systems while maintaining control performance. In ETC, instead of using the continuous state to reach a consensus, the interaction between agents is piecewise-constant. Consequently, many problems, such as data congestion and continuous communication, can be mitigated to a large extent. Influenced by this idea, many efforts have been concentrated on developing the DOCP for multiagent systems with ETC [33,34,37,38]. Wu et al. [34] investigated continuous-time optimization by utilizing adaptive event-based methods that rely solely on neighboring agents' relevant information. Deng et al. [37] proposed a distributed optimization algorithm combining gradient measurement and ETC to guarantee the exponential convergence of the system. The fundamental challenge in developing event-triggered control is avoiding Zeno behavior, which can disrupt the controller's normal operation and result in infinitely many triggers within a finite period. The suggested event-triggered algorithm in [39] can handle a broad range of sensor network problems; however, it is not guaranteed to prevent Zeno behavior. In [33,37], Zeno behavior is avoided by placing an upper constraint on the communication frequency.
In this paper, we continue the previous research by looking at the DOCP for heterogeneous MASs with directed topologies, which is known to be quite challenging. Based on output-regulation techniques, some criteria are proposed to ensure that the DOCP under directed topologies is solved. The following are the primary contributions of this paper. (1) Distinct from [24,25,26,33,40], our work investigates the DOCP with directed topologies and does not require the networks to be node-balanced. By exploring the properties of the directed network, a different convex optimization problem is proposed, which implies a weighted average of the objective function of each agent. (2) Compared to the previous works [24,25,26,33,40,41], a novel distributed control law is suggested, which is composed of the solutions of carefully chosen matrix equations. It is demonstrated that all agents’ outputs can attain the ideal value that minimizes the sum of local objective functions, ensuring global convergence. (3) The ETC method is added to the proposed control law. Distinct from [33,37], our work designs two different triggering conditions into our ETC strategy, and the Zeno behavior is precluded without a communication frequency upper restriction.
This paper is organized as follows. Some required definitions and lemmas are provided in Section 2. In Section 3, a different convex optimization problem and some assumptions are proposed. In Section 4, the distributed optimization schemes with continuous communication and ETC are designed for heterogeneous MASs over directed networks. A numerical example is provided in Section 5 to demonstrate the usefulness of the theoretical results. Finally, in Section 6, the conclusion is drawn.

2. Preliminaries

2.1. Notations

Let R, R^n, and R^{n×m} denote the sets of real numbers, real n-dimensional vectors, and real n × m matrices, respectively. R_{>0} denotes the set of positive real numbers. I_n represents the n-dimensional identity matrix, and 1_n stands for the n-dimensional vector with all components equal to 1. ∥·∥ is the induced 2-norm of a matrix or the Euclidean norm of a vector. Given vectors x_1, …, x_N, col(x_1, …, x_N) = (x_1^T, …, x_N^T)^T. A diagonal matrix Σ is denoted by Σ = diag(σ_1, …, σ_N), where σ_i, i ∈ {1, 2, …, N}, is the ith diagonal element. For a matrix A ∈ R^{n×n}, A^T is the transpose of A; A ≻ 0 (or A ⪰ 0) implies that A is positive definite (or positive semidefinite). The symbol ⊗ denotes the Kronecker product of matrices. For a differentiable function f: R^n → R, ∇f is the gradient of f. σ_min(A) represents the smallest singular value of a (possibly nonsquare) matrix A.

2.2. Graph Theory

For a directed network G = (V, E, A) with N agents, V = {1, 2, …, N} is the set of agents and E ⊆ V × V is the set of links. The directed network G is said to be strongly connected if a directed path exists between every pair of nodes i, j ∈ V. The weighted adjacency matrix A = (a_ij)_{N×N} of the directed network G is defined by a_ij > 0 if (i, j) ∈ E and a_ij = 0 if (i, j) ∉ E. The out- and in-degrees of node i ∈ V are σ_i^out = Σ_{j=1}^N a_ji and σ_i^in = Σ_{j=1}^N a_ij, respectively. The degree matrix of the directed network G is defined as Σ = diag(σ_1^in, σ_2^in, …, σ_N^in). The weighted Laplacian matrix associated with the directed network G is defined as L = Σ − A.
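These definitions can be made concrete with a few lines of Python; the three-node directed graph and its weights below are illustrative, not from the paper:

```python
import numpy as np

# A small directed network with illustrative weights:
# a_ij > 0 iff the link (i, j) is in E.
A = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 2.0],
    [0.5, 0.0, 0.0],
])
sigma_in = A.sum(axis=1)      # in-degrees:  sigma_i^in  = sum_j a_ij
sigma_out = A.sum(axis=0)     # out-degrees: sigma_i^out = sum_j a_ji
L = np.diag(sigma_in) - A     # weighted Laplacian L = Sigma - A
print(L @ np.ones(3))         # row sums of L are zero: [0. 0. 0.]
```

By construction, every row of L sums to zero, which is exactly the property L·1_N = 0 used in Lemma 1 below.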
Lemma 1 
([42]). If L is irreducible, then rank(L) = N − 1, and 1_N = (1, 1, …, 1)^T is the right eigenvector of L corresponding to the eigenvalue 0 with multiplicity 1, i.e., L·1_N = 0. Let ξ = (ξ_1, …, ξ_N)^T be the left eigenvector of L corresponding to the eigenvalue 0, i.e., ξ^T L = 0, and its multiplicity is 1. Then, ξ_i > 0, i = 1, 2, …, N. In the following, we always assume that Σ_{i=1}^N ξ_i = 1. Let Ξ = diag(ξ_1, …, ξ_N); then L̂ = (1/2)(ΞL + L^TΞ) is a symmetric matrix with all row sums equal to zero, and it has the zero eigenvalue with algebraic multiplicity one.
Lemma 2 
([43]). If Q ∈ R^{n×n} is such that q_ij = q_ji and q_ii = −Σ_{j=1, j≠i}^{n} q_ij, i, j = 1, …, n, then for all vectors x = (x_1, x_2, …, x_n)^T and y = (y_1, y_2, …, y_n)^T,
x^T Q y = −Σ_{j>i}^{n} q_ij (x_i − x_j)(y_i − y_j).
Lemma 3 
([44]). The general algebraic connectivity of the matrix L under the strongly connected directed network G is defined as
a_δ(L) = min_{z^T ξ = 0, z ≠ 0} (z^T L̂ z) / (z^T Ξ z).
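The quantity a_δ(L) can be evaluated numerically by restricting both quadratic forms to the subspace {z : ξ^T z = 0}. The sketch below does this for a 7-node directed cycle with unit weights, the same topology that appears in Section 5:

```python
import numpy as np
from scipy.linalg import null_space, eigh

# General algebraic connectivity a_delta(L) for the 7-node directed cycle
N = 7
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0            # agent i receives from agent i-1
L = np.diag(A.sum(axis=1)) - A

# left eigenvector xi of L for eigenvalue 0, normalised so sum(xi) = 1
w, V = np.linalg.eig(L.T)
xi = np.real(V[:, np.argmin(np.abs(w))])
xi = xi / xi.sum()

Xi = np.diag(xi)
L_hat = 0.5 * (Xi @ L + L.T @ Xi)

# minimise z^T L_hat z / z^T Xi z subject to xi^T z = 0 by restricting
# both forms to a basis B of the subspace {z : xi^T z = 0}
B = null_space(xi.reshape(1, -1))
a_delta = eigh(B.T @ L_hat @ B, B.T @ Xi @ B, eigvals_only=True)[0]
print(round(float(a_delta), 4))        # 0.3765, the value quoted in Section 5
```

For this cycle, ξ is uniform (ξ_i = 1/7) and the computed minimum matches the value a_δ(L) = 0.3765 used in the simulation section.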
Lemma 4 
([45]). Let the symmetric matrix K be partitioned as
K = [K_1, K_2; K_2^T, K_3],
where K_1 = K_1^T and K_3 = K_3^T. Then, K ≻ 0 is equivalent to either of the following conditions:
(1) K_1 ≻ 0 and K_3 − K_2^T K_1^{−1} K_2 ≻ 0;  (2) K_3 ≻ 0 and K_1 − K_2 K_3^{−1} K_2^T ≻ 0.
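A quick numeric check of condition (1), with illustrative symmetric blocks of our own choosing:

```python
import numpy as np

# Schur-complement characterisation of Lemma 4 on illustrative blocks
K1 = np.array([[2.0, 0.5], [0.5, 1.0]])
K2 = np.array([[0.3], [0.1]])
K3 = np.array([[1.5]])
K = np.block([[K1, K2], [K2.T, K3]])

def is_pd(M):
    """Positive definiteness via the eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

schur = K3 - K2.T @ np.linalg.inv(K1) @ K2
# condition (1): K > 0  iff  K1 > 0 and K3 - K2^T K1^{-1} K2 > 0
assert is_pd(K) == (is_pd(K1) and is_pd(schur))
print(is_pd(K))   # True for this choice of blocks
```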
Lemma 5 
([46]). Consider a continuous, positive definite function F(x): Q → R and the system ẋ = f(x), x ∈ R^m, with f(0_m) = 0_m. The system is locally finite-time stable if there exist a neighborhood Q_0 ⊆ Q of the origin and real numbers α_1 > 0 and α_2 ∈ (0, 1) such that Ḟ(x) + α_1 F^{α_2}(x) ≤ 0 for all x ∈ Q_0 \ {0}. If Q = Q_0 = R^m, the system is globally finite-time stable, and the finite convergence time T satisfies T ≤ F^{1−α_2}(x_0) / (α_1(1 − α_2)).
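The settling-time bound of Lemma 5 is tight for the scalar comparison system Ḟ = −α_1 F^{α_2}, which the following sketch (with illustrative constants) verifies by forward-Euler integration:

```python
# Lemma 5 on the scalar comparison system Vdot = -alpha1 * V**alpha2,
# where the bound T <= V(0)**(1-alpha2) / (alpha1*(1-alpha2)) is tight.
alpha1, alpha2, V0 = 2.0, 0.5, 4.0
T_bound = V0 ** (1 - alpha2) / (alpha1 * (1 - alpha2))   # = 2.0

dt, V, t = 1e-5, V0, 0.0
while V > 1e-8:                       # forward-Euler integration
    V = max(V - dt * alpha1 * V ** alpha2, 0.0)
    t += dt
print(T_bound, round(t, 2))           # bound 2.0, numerical settling ~2.0
```

With α_2 = 1/2, the square root of V decreases at constant rate, so V reaches zero at exactly t = T_bound.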

3. Problem Description

Consider the following equations that explain the dynamics of a group of N heterogeneous agents:
ẋ_i(t) = A_i x_i(t) + B_i(u_i(t) + d_i(t)),  y_i(t) = C_i x_i(t) + D_i u_i(t),  i = 1, 2, …, N,
where x_i ∈ R^{n_i}, y_i ∈ R^q, and u_i ∈ R^{p_i} are the state, the output, and the control input of the ith agent, respectively. A_i ∈ R^{n_i×n_i}, B_i ∈ R^{n_i×p_i}, C_i ∈ R^{q×n_i}, and D_i ∈ R^{q×p_i} are constant matrices. The continuous nonlinear time-varying function d_i(t) ∈ R^{p_i} denotes external disruption.
Assumption 1. 
The directed communication topology is strongly connected.
Assumption 2. 
For any t > 0, there exists a constant d̄ > 0 such that max_i ∥d_i(t)∥ ≤ d̄, i = 1, 2, …, N.
Remark 1. 
Compared with [27,28,29], this paper considers the effects of external disturbances on the system as well as the regulation of control input on the output. Distinct from the network considered in [28,29], the directed network considered in this paper does not require balancing, which renders it more general.
The major goal of this work is to provide a distributed optimization method for each agent that allows all of the agents’ outputs to attain the optimal value y * , which solves the following optimization problem:
min y R q i = 1 N ξ i g i ( y ) ,
where g i : R q R is a local objective function of the ith agent.
Remark 2. 
Unlike the convex optimization problem considered in [22,23,24,25,26] with undirected networks, the DOCP with heterogeneous agent dynamics of directed networks is considered in our work. Combined with the properties of the Laplacian matrix of the directed graph, a new convex optimization problem (2) is proposed, which implies the weighted average of the objective function of each node.
Remark 3. 
The optimization problem (2) considered in this paper has the same form as that in [27,28,29] when ξ_i = 1/N. If we take into account how a weighted average π_i affects the objective function g_i, then the optimization problem min_{y∈R^q} Σ_{i=1}^N π_i g_i(y) can be transformed into min_{y∈R^q} Π Σ_{i=1}^N ξ_i g_i(y) with Π = Σ_{i=1}^N π_i and ξ_i = π_i/Π, which has the same form as problem (2) because Σ_{i=1}^N ξ_i = 1 and Π is a constant.
If we define g ˜ ( y ) = i = 1 N ξ i g i y i , y = col y 1 , , y N , the optimization problem (2) can be reformulated as
min_{y∈R^{Nq}} g̃(y) = min_{y_i∈R^q} Σ_{i=1}^N ξ_i g_i(y_i),
s.t. (L ⊗ I_q) y = 0_{Nq}.   (3)
Assumption 3 
([24]). For each agent i, the local objective function g_i: R^q → R, i = 1, 2, …, N, is strongly convex and differentiable on R^q. Its gradient ∇g_i(z) is locally Lipschitz on R^q and satisfies ∥∇g_i(x) − ∇g_j(y)∥ ≤ κ∥x − y∥ with κ > 0 for all x, y ∈ R^q, i, j = 1, 2, …, N.
Remark 4. 
To guarantee a unique solution to problem (3), the local objective function g_i(y) must be strictly convex. That means that the control input u_i(t) needs to be designed so that the outputs of all agents can attain the optimal value y* = argmin_{y∈R^{Nq}} g̃(y), i.e., g̃(y*) = min_{y∈R^{Nq}} g̃(y) and ∇g̃(y*) = Σ_{i=1}^N ξ_i ∇g_i(y_i*) = 0, where y* = col(y_1*, …, y_N*).
Assumption 4. 
(A_i, B_i) is controllable, σ_min(B_i^T) > 0, and
rank [C_iB_i, D_i; A_iB_i, B_i] = n_i + q, i = 1, 2, …, N.
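The rank condition is easy to verify numerically. The sketch below checks it for the first agent of Section 5 (matrix values as printed there; the block layout [C_iB_i, D_i; A_iB_i, B_i] is the reading assumed here):

```python
import numpy as np

# Rank test of Assumption 4 for agent 1 of Section 5
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0, 1.0], [1.0, 2.0]])
C1 = np.array([[1.0, 1.0]])
D1 = np.array([[1.0, 1.0]])
n1, q = 2, 1

M = np.block([[C1 @ B1, D1], [A1 @ B1, B1]])
print(np.linalg.matrix_rank(M))   # 3 = n1 + q, so Assumption 4 holds
```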
Lemma 6 
([24]). Under Assumption 4, the linear matrix equations
B_i Y_{1i} − Ψ_i = 0_{n_i×q},   (4a)
B_i Y_{2i} + A_i Ψ_i = 0_{n_i×q},   (4b)
C_i Ψ_i + D_i Y_{2i} = I_q, i = 1, …, N,   (4c)
have solution triplets (Y_{1i}, Y_{2i}, Ψ_i), respectively.
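Stacking the unknowns, (4) is one linear system and can be solved by least squares; a zero residual then certifies solvability. The sketch below does this for agent 1 of Section 5, under one consistent reading of the signs in (4) (B_iY_{1i} = Ψ_i, B_iY_{2i} = −A_iΨ_i, C_iΨ_i + D_iY_{2i} = I_q):

```python
import numpy as np

# Solving the matrix equations (4) for agent 1 of Section 5 as one
# stacked linear system in (Y_1i, Y_2i, Psi_i)
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0, 1.0], [1.0, 2.0]])
C1 = np.array([[1.0, 1.0]])
D1 = np.array([[1.0, 1.0]])
n, p, q = 2, 2, 1

Z = np.zeros
M = np.block([
    [B1,        Z((n, p)), -np.eye(n)],   # (4a): B1 Y1 - Psi    = 0
    [Z((n, p)), B1,        A1        ],   # (4b): B1 Y2 + A1 Psi = 0
    [Z((q, p)), D1,        C1        ],   # (4c): C1 Psi + D1 Y2 = I_q
])
b = np.concatenate([Z(n), Z(n), np.ones(q)])
x, *_ = np.linalg.lstsq(M, b, rcond=None)
Y1, Y2, Psi = x[:p], x[p:2 * p], x[2 * p:]
print(np.linalg.norm(M @ x - b))          # ~0: a solution triplet exists
```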

4. Main Results

4.1. DOCP for MASs with Continuous Communications

In this section, by using the output regulation techniques, a distributed control law is proposed to solve the DOCP. According to the solutions of matrices Equation (4), the distributed control method is given by (5)
u_i = K_{1i} x_i + Y_{1i} η̇_i + (Y_{2i} − K_{1i}Ψ_i) η_i − c_1 sgn(K_{2i}(x_i − Ψ_i η_i)),   (5a)
η̇_i = −∇g_i(y_i) − α_1 Σ_{j=1}^N L_ij y_j − α_2 Σ_{j=1}^N L_ij ∫_0^t y_j(σ) dσ,   (5b)
where η i R q is the auxiliary state of agent i; c 1 R > 0 ,   α 1 R > 0 ,   α 2 R > 0 ,   K 1 i R p i × n i , and K 2 i R p i × n i are matrices to be determined; and Y 1 i , Y 2 i , Ψ i are the solutions of ( 4 ) .
Remark 5. 
The first three terms in the controller u_i are designed to ensure that the state x_i approaches the auxiliary state Ψ_i η_i(t); −c_1 sgn(K_{2i}(x_i − Ψ_i η_i)) is the term that removes the influence of the external disruption d_i(t); −∇g_i(y_i) is the gradient term that guides the agents toward the optimum; −α_1 Σ_{j=1}^N L_ij y_j − α_2 Σ_{j=1}^N L_ij ∫_0^t y_j(σ) dσ is the consensus term that ensures all agents converge to the optimal state.
The closed-loop system can be stated by inputting the control input (5) into system (1)
ẋ_i(t) = (A_i + B_i K_{1i}) x_i(t) + B_i Y_{1i} η̇_i + (B_i Y_{2i} − B_i K_{1i}Ψ_i) η_i − c_1 B_i sgn(K_{2i}(x_i(t) − Ψ_i η_i)) + B_i d_i(t).   (6)
Substituting (4a) and (4b) into (6), the following compact form of system (6) can be obtained:
ẋ − Ψη̇ = (A + BK_1)(x − Ψη) − c_1 B sgn(K_2(x − Ψη)) + Bd,   (7)
where x = col(x_1, …, x_N), η = col(η_1, …, η_N), d = col(d_1, …, d_N), A = diag(A_1, …, A_N), B = diag(B_1, …, B_N), K_1 = diag(K_11, …, K_1N), K_2 = diag(K_21, …, K_2N), Ψ = diag(Ψ_1, …, Ψ_N), and ∇g̃(y) = col(∇g_1(y_1), …, ∇g_N(y_N)).
Let δ_x = x − Ψη and A_c = A + BK_1. Then, (7) can be written as
δ̇_x = A_c δ_x − c_1 B sgn(K_2 δ_x) + Bd,   (8)
where K 2 = B T P is the feedback control with matrix P to be determined later. Therefore, we have the following theorem.
Theorem 1. 
Suppose Assumptions 1–4 hold, and choose matrices K_{1i}, i = 1, 2, …, N, such that A_i + B_i K_{1i} is Hurwitz, with Y_{1i}, Y_{2i}, Ψ_i given in (4). Then, δ_x converges to zero in a finite time T_1 if there exist a positive definite matrix P and a constant c_1 > 0 satisfying
c_1 > d̄;   (H1)
P A_c + (A_c)^T P ⪯ 0.   (H2)
Proof. 
Define the Lyapunov function candidate as follows:
V 1 = 1 2 δ x T P δ x .
Taking the derivative of V 1 along with (8) yields
V̇_1 = δ_x^T P (A_c δ_x − c_1 B sgn(K_2 δ_x) + Bd) = (1/2) δ_x^T (P A_c + (A_c)^T P) δ_x + δ_x^T P B d − c_1 δ_x^T P B sgn(B^T P δ_x) ≤ ∥B^T P δ_x∥ d̄ − c_1 ∥B^T P δ_x∥ = −(c_1 − d̄) ∥B^T P δ_x∥ ≤ −√2 (c_1 − d̄) σ_min(B^T) σ_min(P^{1/2}) V_1^{1/2}.
Thus, on the basis of the condition of the theorem, one has
V̇_1 + √2 (c_1 − d̄) σ_min(B^T) σ_min(P^{1/2}) V_1^{1/2} ≤ 0.   (9)
By Lemma 5, one has lim t T 1 V 1 0 . Therefore, δ x converges to zero in a finite time T 1 . According to (9), the convergence time has the following form:
T_1 ≤ √2 V_1^{1/2}(0) / ((c_1 − d̄) σ_min(B^T) σ_min(P^{1/2})).
By (4c), one has y_i = C_i x_i + D_i u_i = η_i for t > T_1, i ∈ V. The control law (5b) can then be written as
ẏ_i = −∇g_i(y_i) − α_1 Σ_{j=1}^N L_ij y_j − α_2 Σ_{j=1}^N L_ij ∫_0^t y_j(σ) dσ.   (10)
Remark 6. 
It is clear from the foregoing analysis that, when t > T_1, x_i = Ψ_i η_i and y_i = η_i, i ∈ V, hold. In fact, the solutions of the carefully designed matrix equations (4) ensure that the outputs of all agents track the auxiliary variables, thereby facilitating the solution of the DOCP (3). To ensure the solvability of the matrix equations, sufficient conditions in terms of constant matrices are constructed. The matrix equations (4) are an important part of the control law (5).
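Condition (H2) of Theorem 1 can be satisfied constructively: for a Hurwitz A_c, solving the Lyapunov equation A_c^T P + P A_c = −I yields a positive definite P. A sketch with an illustrative 2×2 A_c (not a matrix from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Finding a P for condition (H2): with A_c Hurwitz, solve
# A_c^T P + P A_c = -I (illustrative A_c with eigenvalues -1 and -2)
Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = solve_continuous_lyapunov(Ac.T, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)       # P is positive definite
M = P @ Ac + Ac.T @ P                          # equals -I by construction
print(np.allclose(M, -np.eye(2)))              # True: (H2) holds strictly
```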
Let ỹ = Σ_{i=1}^N ξ_i y_i be the weighted average state and δy_i = y_i − ỹ the consensus error, for convenience later in this study. Given that Σ_{j=1}^N L_ij ỹ = 0, (10) is converted into the following form:
ẏ_i = −∇g_i(y_i) − α_1 Σ_{j=1}^N L_ij δy_j − α_2 Σ_{j=1}^N L_ij ∫_0^t δy_j(σ) dσ.   (11)
If we let φ j ( t ) = 0 t δ y j ( σ ) d σ , the control law (5) can be written as
u_i = K_{1i} x_i + Y_{1i} η̇_i + (Y_{2i} − K_{1i}Ψ_i) η_i − c_1 sgn(K_{2i}(x_i − Ψ_i η_i));
ẏ_i = −∇g_i(y_i) − α_1 Σ_{j=1}^N L_ij δy_j − α_2 Σ_{j=1}^N L_ij φ_j(t);
φ ˙ i = δ y i .
Let φ ( t ) = ( φ 1 T ( t ) , , φ N T ( t ) ) T , y = y 1 T , , y N T T , δ y ( t ) = ( δ y 1 T ( t ) , δ y 2 T ( t ) , , δ y N T ( t ) ) T . Then, (12) can be rewritten in a compact form as
ẏ(t) = −∇g̃(y) − α_1 (L ⊗ I_q) δy(t) − α_2 (L ⊗ I_q) φ(t);   (13)
φ̇ = δy(t).
From (13), the equilibrium point ȳ satisfies
0 = −∇g̃(ȳ) − α_1 (L ⊗ I_q) δȳ − α_2 (L ⊗ I_q) φ̄.   (14)
Then, by multiplying both sides of (14) by ξ^T ⊗ I_q and using ξ^T L = 0, one obtains Σ_{i=1}^N ξ_i ∇g_i(ȳ_i) = 0, which means that ȳ is an optimal solution of the DOCP (3).
If M = I_N − 1_N ξ^T, it is easy to check that LM = ML = L. Then, the error system can be obtained as follows:
δẏ(t) = −(M ⊗ I_q) ∇g̃(y) − α_1 (L ⊗ I_q) δy(t) − α_2 (L ⊗ I_q) φ(t);  φ̇(t) = δy(t).   (15)
The compact matrix form can be recast as
[δẏ(t); φ̇(t)] = Q [δy(t); φ(t)] + [−(M ⊗ I_q) ∇g̃(y); 0_{Nq}],
where
Q = [−α_1 (L ⊗ I_q), −α_2 (L ⊗ I_q); I_{Nq}, 0_{Nq}].
We are now in a position to present our major result on closed-loop system (13) convergence.
Theorem 2. 
Suppose Assumptions 1–4 hold, and choose matrices K_{1i}, i = 1, 2, …, N, such that A_i + B_i K_{1i} is Hurwitz, with Y_{1i}, Y_{2i}, Ψ_i given in (4). The control law stated in (12) can be used to solve the DOCP for the multiagent system (1) if
α_1 > (κ + √(κ² + 2(κ + 2) α_2 a_δ(L))) / (2 a_δ(L));   (16)
α_2 > κ / (2 a_δ(L)).   (17)
Proof. 
Consider the following Lyapunov function
V 2 = 1 2 δ y T ( t ) φ T ( t ) R δ y ( t ) φ ( t ) ,
where
R = [Ξ ⊗ I_q, (α_2/α_1)(Ξ ⊗ I_q); (α_2/α_1)(Ξ ⊗ I_q), 2α_2 (L̂ ⊗ I_q)].
The proof is completed in two parts. In the first part, we show that V_2 ≥ 0 and that V_2 = 0 only when δy = 0 and φ(t) = 0. In the second part, we establish that V̇_2 ≤ 0 and that V̇_2 = 0 only if δy = 0 and φ(t) = 0.
Part 1: From Lemma 3, one can obtain
V_2(t) ≥ (1/2) δy^T(t)(Ξ ⊗ I_q) δy(t) + α_2 a_δ(L) φ^T(t)(Ξ ⊗ I_q) φ(t) + (1/2)(α_2/α_1) φ^T(t)(Ξ ⊗ I_q) δy(t) + (1/2)(α_2/α_1) δy^T(t)(Ξ ⊗ I_q) φ(t) = (1/2) [δy(t); φ(t)]^T R̃ [δy(t); φ(t)],
where
R̃ = [Ξ ⊗ I_q, (α_2/α_1)(Ξ ⊗ I_q); (α_2/α_1)(Ξ ⊗ I_q), 2 α_2 a_δ(L)(Ξ ⊗ I_q)].
By Lemma 4 and conditions (16) and (17), we have a_δ(L) ≥ α_2 / (2α_1²). Therefore, R̃ ⪰ 0, and one can conclude that V_2 ≥ 0, with V_2 = 0 only when δy(t) = 0 and φ(t) = 0.
Part 2: Taking the derivative of V 2 along (15) yields
V̇_2 = [δy(t); φ(t)]^T R [−(M ⊗ I_q) ∇g̃(y); 0_{Nq}] + [δy(t); φ(t)]^T R Q [δy(t); φ(t)] = W_1(t) + W_2(t),   (18)
where
W_1(t) = [δy(t); φ(t)]^T R [−(M ⊗ I_q) ∇g̃(y); 0_{Nq}] = −δy^T(t)(ΞM ⊗ I_q) ∇g̃(y) − (α_2/α_1) φ^T(t)(ΞM ⊗ I_q) ∇g̃(y).   (19)
By Lemma 2 and Assumption 3, one can obtain
−δy^T(t)(ΞM ⊗ I_q) ∇g̃(y) ≤ (κ/2) Σ_{i=1}^N Σ_{j≠i} ξ_i ξ_j ∥y_i − y_j∥² ≤ κ Σ_{i=1}^N ξ_i ∥δy_i(t)∥².   (20)
Similarly, we can obtain that
−φ^T(t)(ΞM ⊗ I_q) ∇g̃(y) ≤ (κ/2) Σ_{i=1}^N ξ_i ∥φ_i∥² + (κ/2) Σ_{i=1}^N ξ_i ∥δy_i(t)∥².   (21)
Then, by substituting (20), (21) into (19), one obtains that
W_1(t) ≤ (κ/2)(α_2/α_1) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² + ((κ/2)(α_2/α_1) + κ) Σ_{i=1}^N ξ_i ∥δy_i(t)∥².   (22)
In addition,
(1/2)(R Q + Q^T R) = [−α_1(L̂ ⊗ I_q) + (α_2/α_1)(Ξ ⊗ I_q), 0; 0, −(α_2²/α_1)(L̂ ⊗ I_q)].   (23)
By the definition of a δ ( L ) , it yields that
W_2(t) = [δy(t); φ(t)]^T (1/2)(R Q + Q^T R) [δy(t); φ(t)] = δy^T(t)((α_2/α_1)(Ξ ⊗ I_q) − α_1(L̂ ⊗ I_q)) δy(t) − (α_2²/α_1) φ^T(t)(L̂ ⊗ I_q) φ(t) ≤ ((α_2/α_1) − α_1 a_δ(L)) δy^T(t)(Ξ ⊗ I_q) δy(t) − (α_2²/α_1) a_δ(L) φ^T(t)(Ξ ⊗ I_q) φ(t) = ((α_2/α_1) − α_1 a_δ(L)) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² − (α_2²/α_1) a_δ(L) Σ_{i=1}^N ξ_i ∥φ_i(t)∥².
Then, by substituting (22), (23) into (18) and using (16), (17), one obtains that
V̇_2 ≤ ((κ/2)(α_2/α_1) + κ + (α_2/α_1) − α_1 a_δ(L)) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² + ((κ/2)(α_2/α_1) − (α_2²/α_1) a_δ(L)) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² ≤ 0.
Thus, V̇_2 ≤ 0, and V̇_2 = 0 only when δy(t) = 0 and φ(t) = 0. Therefore, the multiagent system (1) reaches global consensus. Finally, all of the agents' outputs reach the equilibrium point ȳ, and the DOCP (3) is solved. □
Remark 7. 
Control protocol design for the DOCP under a directed network is acknowledged to be challenging. In this paper, we incorporate proportional-integral control into the control strategy (5) and use output regulation techniques so that all of the agents' outputs reach the equilibrium point and the DOCP (3) is solved. Theorem 2 shows that perturbations of the parameters α_1 and α_2 do not change the consensus performance as long as conditions (16) and (17) are satisfied.
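As a numerical sanity check of the reduced dynamics (12), the following sketch integrates them by forward Euler on the 7-node directed cycle of Section 5, but with illustrative quadratic local costs g_i(y) = (y − i)²/2 of our own choosing (so κ = 1), for which the weighted optimum is y* = Σ_i ξ_i·i = 4. The gains α_1 = 10 and α_2 = 7 are the values selected in Section 5:

```python
import numpy as np

# Forward-Euler integration of the reduced dynamics (12) on the 7-node
# directed cycle, with illustrative costs g_i(y) = (y - i)^2 / 2
N = 7
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A
xi = np.full(N, 1.0 / N)              # left null vector of this L

alpha1, alpha2 = 10.0, 7.0
grad = lambda y: y - np.arange(1, N + 1)   # gradients of the local costs

rng = np.random.default_rng(0)
y, phi = rng.normal(size=N), np.zeros(N)
dt = 1e-3
for _ in range(200_000):              # integrate up to t = 200
    dy = y - xi @ y                   # consensus errors delta_y
    y = y + dt * (-grad(y) - alpha1 * L @ dy - alpha2 * L @ phi)
    phi = phi + dt * dy
print(np.round(y, 3))                 # all outputs near the optimum 4.0
```

All seven outputs settle at the weighted optimum, as Theorem 2 predicts for these costs and gains.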

4.2. DOCP for MASs with ETC

Under the communication structure of the control law (12), agents must continuously collect local messages and adjust their control signals, which is costly and would result in a waste of resources. To this end, to tackle problem (3) for the heterogeneous MASs (1), an ETC strategy is developed in which communication is required only at specific time instants, and no Zeno behavior occurs. The ETC law is proposed as
u_i = K_{1i} x_i + Y_{1i} η̇_i + (Y_{2i} − K_{1i}Ψ_i) η_i − c_1 sgn(K_{2i}(x_i − Ψ_i η_i));
ẏ_i = −∇g_i(y_i) − α_1 Σ_{j=1}^N L_ij δy_j(t^i_{k_i(t)}) − α_2 Σ_{j=1}^N L_ij φ_j(t^i_{k_i(t)});
φ ˙ i = δ y i .
where t_k^i is the kth triggering instant of the ith agent and k_i(t) = argmax{k : t_k^i ≤ t}. The triggering criteria, which will be defined later, determine the triggering instants t_1^i, …, t_k^i, … of the ith agent.
For simplicity, for t ∈ [t_k^i, t_{k+1}^i), k = 1, 2, …, we denote θ_i(t) = Σ_{j=1}^N L_ij δy_j(t), ϑ_i(t) = Σ_{j=1}^N L_ij φ_j(t), e_θi(t) = θ_i(t_k^i) − θ_i(t), e_ϑi(t) = ϑ_i(t_k^i) − ϑ_i(t), e_θ(t) = col(e_θ1(t), …, e_θN(t)), and e_ϑ(t) = col(e_ϑ1(t), …, e_ϑN(t)).
Then, one can obtain
δẏ(t) = −(M ⊗ I_q) ∇g̃(y) − α_1 (L ⊗ I_q) δy(t) − α_2 (L ⊗ I_q) φ(t) + α_1 (M ⊗ I_q) e_θ(t) + α_2 (M ⊗ I_q) e_ϑ(t);  φ̇(t) = δy(t).   (25)
Similar to (15) is the following:
[δẏ(t); φ̇(t)] = Q [δy(t); φ(t)] + [Z; 0_{Nq}],
where Z = −(M ⊗ I_q) ∇g̃(y) + α_1 (M ⊗ I_q) e_θ(t) + α_2 (M ⊗ I_q) e_ϑ(t).
When agent i triggers at time t_{k_i}^i, it updates its control value and sends its current state to its out-neighbors; the measurement errors e_θi(t) and e_ϑi(t) are both reset to zero at that instant. Given that agent i is triggered at the instant t_{k_i}^i, the following triggering condition determines its next triggering instant:
t_{k_i+1}^i = inf{t : t > t_{k_i}^i, Π_θi(t) ≥ 0 or Π_ϑi(t) ≥ 0},   (26)
where
Π_θi(t) = ∥e_θi(t)∥² − (1/μ_1) e^{−σ_1 t};  Π_ϑi(t) = ∥e_ϑi(t)∥² − (1/μ_2) e^{−σ_2 t},
where μ_1, μ_2, σ_1, σ_2 are positive constants. An event is triggered for agent i as soon as one of the conditions ∥e_θi(t)∥² ≥ (1/μ_1) e^{−σ_1 t} or ∥e_ϑi(t)∥² ≥ (1/μ_2) e^{−σ_2 t} is fulfilled.
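The trigger test can be sketched in a few lines; the μ and σ values below are illustrative, and the numeric examples simply exercise the two threshold comparisons at t = 0, where each threshold equals 1:

```python
import numpy as np

# Sketch of the trigger test (26): agent i re-samples as soon as either
# squared measurement error reaches its exponentially decaying threshold
mu1, mu2, sigma1, sigma2 = 1.0, 1.0, 0.2, 0.2

def triggered(e_theta_i, e_vartheta_i, t):
    return (np.dot(e_theta_i, e_theta_i) >= np.exp(-sigma1 * t) / mu1 or
            np.dot(e_vartheta_i, e_vartheta_i) >= np.exp(-sigma2 * t) / mu2)

print(triggered(np.array([1.1]), np.array([0.0]), t=0.0))   # True:  1.21 >= 1
print(triggered(np.array([0.3]), np.array([0.2]), t=0.0))   # False: both below
```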
Remark 8. 
Compared with the centralized event-triggering mechanism designed in [29], which needs global information, the distributed event-triggering mechanism designed in this paper needs only the information of neighboring nodes, which effectively reduces the communication burden.
We are now in a position to provide our second main result on system (25) convergence.
Theorem 3. 
Suppose Assumptions 1–4 hold, and choose matrices K_{1i}, i = 1, 2, …, N, such that A_i + B_i K_{1i} is Hurwitz, with Y_{1i}, Y_{2i}, Ψ_i given in (4). With the triggering condition (26), the DOCP for the multiagent system (1) is solved by the event-triggered control law (24) if
α_1 > (κ + √(κ² + 2(κ + 6) α_2 a_δ(L))) / (2 a_δ(L));   (27)
α_2 > (κ + 4) / (2 a_δ(L)).   (28)
Furthermore, the closed-loop system (25) does not exhibit Zeno behavior.
Proof. 
The time derivative of V 2 along the trajectory of (25) can be calculated using the same method as in the demonstration of Theorem 2.
V̇_2 = [δy(t); φ(t)]^T R [Z; 0_{Nq}] + [δy(t); φ(t)]^T R Q [δy(t); φ(t)] = W_1(t) + W_2(t) + W_3(t),   (29)
where W_1(t) and W_2(t) are the same as in Theorem 2.
Let U = ΞM, whose eigenvalues (counting multiplicities) satisfy 0 = χ_1 ≤ χ_2 ≤ ⋯ ≤ χ_N. Let μ_1 = (α_1³/α_2 + α_1α_2) χ_N² and μ_2 = (α_2³/α_1 + α_1α_2) χ_N². Thus, we have
W_3(t) = α_1 δy^T(t)(U ⊗ I_q) e_θ(t) + α_2 δy^T(t)(U ⊗ I_q) e_ϑ(t) + α_2 φ^T(t)(U ⊗ I_q) e_θ(t) + (α_2²/α_1) φ^T(t)(U ⊗ I_q) e_ϑ(t) ≤ 2(α_2/α_1) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² + μ_1 Σ_{i=1}^N ∥e_θi(t)∥² + 2(α_2/α_1) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² + μ_2 Σ_{i=1}^N ∥e_ϑi(t)∥².   (30)
Then, by substituting (22), (23), (30) into (29), one obtains that
V̇_2 ≤ ((κ/2)(α_2/α_1) + κ + 3(α_2/α_1) − α_1 a_δ(L)) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² + ((κ/2)(α_2/α_1) − (α_2²/α_1) a_δ(L) + 2(α_2/α_1)) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² + μ_1 Σ_{i=1}^N ∥e_θi(t)∥² + μ_2 Σ_{i=1}^N ∥e_ϑi(t)∥².
Combining this with the triggering condition (26), we deduce that
V̇_2 ≤ ((κ/2)(α_2/α_1) + κ + 3(α_2/α_1) − α_1 a_δ(L)) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² + ((κ/2)(α_2/α_1) − (α_2²/α_1) a_δ(L) + 2(α_2/α_1)) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² + N e^{−σ_1 t} + N e^{−σ_2 t}.
Let V_3 = V_2 + (N/σ_1) e^{−σ_1 t} + (N/σ_2) e^{−σ_2 t}. By using (27) and (28), one has
V̇_3 = V̇_2 − N e^{−σ_1 t} − N e^{−σ_2 t} ≤ ((κ/2)(α_2/α_1) + κ + 3(α_2/α_1) − α_1 a_δ(L)) Σ_{i=1}^N ξ_i ∥δy_i(t)∥² + ((κ/2)(α_2/α_1) − (α_2²/α_1) a_δ(L) + 2(α_2/α_1)) Σ_{i=1}^N ξ_i ∥φ_i(t)∥² ≤ 0.
Thus, the multiagent system (1) reaches global consensus, and the DOCP (3) is solved.
Then, we prove that Zeno behavior can be excluded. From (24), over the interval [t_k^i, t_{k+1}^i), the upper right-hand Dini derivative of ∥e_θi(t)∥ satisfies
D^+∥e_θi(t)∥ ≤ ∥Σ_{j=1}^N L_ij ∇g_j(y_j) + α_1 Σ_{j=1}^N L_ij θ_j(t_k^i) + α_2 Σ_{j=1}^N L_ij ϑ_j(t_k^i)∥.
By noting that e_θi(t_k^i) = 0, e_θi(t) can be bounded as
∥e_θi(t)∥ ≤ ∫_{t_k^i}^{t} ∥Σ_{j=1}^N L_ij ∇g_j(y_j(τ)) + α_1 Σ_{j=1}^N L_ij θ_j(t_k^i) + α_2 Σ_{j=1}^N L_ij ϑ_j(t_k^i)∥ dτ.
Define t_{k+1}^i as the next triggering instant of agent i. Since d_i(t), i = 1, 2, …, N, is bounded, lim_{t→∞} δy(t) = 0, and lim_{t→∞} φ(t) = 0, one can obtain the boundedness of ∇g_i(y_i), θ_j(t), and ϑ_j(t). Choose h_0, h_1, h_2, ζ ∈ R_{>0} such that ∥∇g_i(y_i)∥ ≤ h_0, ∥θ_j(t)∥ ≤ h_1, ∥ϑ_j(t)∥ ≤ h_2 for i, j = 1, …, N, and max_i |L_ii| ≤ ζ, i = 1, 2, …, N. Thus, it is concluded that
∥e_θi(t)∥ ≤ (2ζh_0 + 2ζα_1h_1 + 2ζα_2h_2)(t − t_k^i).
By noting the triggering condition (26), one obtains
∥e_θi(t)∥² ≤ (1/μ_1) e^{−σ_1 t}, t ∈ [t_k^i, t_{k+1}^i).
Then, a lower bound τ k i of t k + 1 i t k i can be obtained by solving the following inequality
√((1/μ_1) e^{−σ_1(τ_k^i + t_k^i)}) ≤ (2ζh_0 + 2ζα_1h_1 + 2ζα_2h_2) τ_k^i.   (32)
It should be noticed that τ k i always exists and is strictly positive in a finite time by (32). Consequently, no Zeno behavior is exhibited, and this completes the proof. □
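Reading (32) as the crossing point of the decaying threshold and the linear error-growth bound, a lower bound τ_k can be found numerically by bisection. All constants below are illustrative, not the paper's:

```python
import numpy as np

# Bisection for the inter-event lower bound tau_k of (32): the smallest
# tau > 0 at which the linear error growth reaches the decaying threshold
mu1, sigma1, t_k = 1.0, 0.2, 3.0
zeta, h0, h1, h2, alpha1, alpha2 = 1.0, 1.0, 0.5, 0.5, 10.0, 7.0
rate = 2 * zeta * (h0 + alpha1 * h1 + alpha2 * h2)   # growth-rate bound

lo, hi = 0.0, 1.0        # threshold exceeds growth at lo, not at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    threshold = np.sqrt(np.exp(-sigma1 * (mid + t_k)) / mu1)
    if threshold <= rate * mid:
        hi = mid
    else:
        lo = mid
tau_k = hi
print(tau_k > 0)         # True: strictly positive, so no Zeno behavior
```

Since the threshold is decreasing in τ while the growth bound is increasing, the crossing is unique and strictly positive, matching the conclusion of Theorem 3.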
Remark 9. 
By carefully designing the ETC parameters μ_1, μ_2, σ_1, σ_2, it can be ensured that the DOCP is solved and global convergence is guaranteed. The exponential functions in the triggering condition (26) play a significant role in restricting the measurement errors and eliminating Zeno behavior without placing an upper bound on the communication frequency.

5. Illustrative Examples

In this section, a numerical example is given to visualize the theoretical results in Section 4. Consider a heterogeneous multiagent system with seven agents described by (1), where A_1 = [0, 1; 0, 0], B_1 = [0, 1; 1, 2], C_1 = [1, 1], A_2 = [0, 1; 1, 0], B_2 = [0, 2; 1, 1], C_2 = [1, 1], A_3 = [0, 1; 1, 2], B_3 = [1, 0; 3, 1], C_3 = [1, 1], A_4 = [1, 0; 2, 2], B_4 = [1, 2; 3, 4], C_4 = [1, 1], A_5 = [0, 0; 1, 0], B_5 = [1, 2; 1, 0], C_5 = [1, 1], A_{6,7} = [0, 0; 0, 1], B_{6,7} = [1, 2; 1, 4], C_{6,7} = [1, 1], D_{1,2,3} = [1, 1], D_{4,5} = [1, 1], D_{6,7} = [1, 1]. The Laplacian matrix L of the directed and strongly connected network described in Figure 1 is
$$L = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & -1 \\
-1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 1
\end{bmatrix}.$$
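A Laplacian has zero row sums, so the off-diagonal entries of $L$ carry minus signs and the network is a uniformly weighted directed cycle. A quick pure-Python check of this reading (the closing eigenvalue computation uses the circulant structure of $L$ and is consistent with the value $a_\delta(L) = 0.3765$ used later in this section):

```python
import math

# Laplacian of the 7-agent directed cycle in Figure 1: diagonal entries 1,
# a single off-diagonal -1 per row (agent i receives from agent i-1, agent 1 from 7).
N = 7
L = [[0.0] * N for _ in range(N)]
for i in range(N):
    L[i][i] = 1.0
    L[i][(i - 1) % N] = -1.0

# Every row of a Laplacian sums to zero.
assert all(abs(sum(row)) < 1e-12 for row in L)

# Positive left eigenvector of the zero eigenvalue: for a uniform cycle it is
# xi = (1/N, ..., 1/N), i.e. xi^T L = 0.
xi = [1.0 / N] * N
residual = max(abs(sum(xi[i] * L[i][j] for i in range(N))) for j in range(N))

# Nonzero eigenvalues of this circulant matrix are 1 - exp(-2*pi*1j*k/N);
# the smallest nonzero real part is 1 - cos(2*pi/N).
gap = min(1.0 - math.cos(2.0 * math.pi * k / N) for k in range(1, N))
print(residual, round(gap, 4))   # → 0.0 0.3765
```

The uniform left eigenvector means the weighted average in the optimization problem assigns equal weight $\xi_i = 1/7$ to every local objective for this particular topology.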
The local objective function of the $i$th agent is given by $g_i(y) = \sin\left(\tfrac{i}{10}\,y\right)$, $i = 1, 2, \ldots, 7$. As an example, take the bounded external disturbances as follows:
$$d_{1,2} = [\sin(t), \cos(t)]^T, \quad d_{3,4} = [2\cos(t), \sin(t)]^T, \quad d_{5,6,7} = [2\cos(t), \cos(t)]^T.$$
The control law (5) and triggering function (26) parameters are chosen as follows: K 11 = [ 2 , 3 ; 1 , 1 ] , Y 11 = [ 0 ; 0.2 ] , Y 21 = [ 0.8 ; 0.4 ] , Ψ 1 = [ 0.4 ; 0.4 ] ; K 12 = [ 1.5 , 1.5 ; 0.5 , 0.5 ] , Y 12 = [ 0.25 ; 0.25 ] , Y 22 = [ 0.5 ; 0 ] , Ψ 2 = [ 0.5 ; 0 ] ; K 13 = [ 1 , 1 ; 2 , 2 ] , Y 13 = [ 0.333 ; 0 ] , Y 23 = [ 1 ; 1.333 ] , Ψ 3 = [ 0.333 ; 1 ] ; K 14 = [ 2 , 3 ; 1 , 1.5 ] , Y 14 = [ 0 ; 0.09 ] , Y 24 = [ 0.73 ; 0.45 ] , Ψ 4 = [ 0.18 ; 0.36 ] ; K 15 = [ 1 , 1 ; 1 , 0.5 ] , Y 15 = [ 0 ; 0.2 ] , Y 25 = [ 0.4 ; 0.2 ] , Ψ 5 = [ 0.4 ; 0 ] ; K 16 = [ 2 , 2 ; 0.5 , 1 ] , Y 16 = [ 0 ; 0.25 ] , Y 26 = [ 1 ; 0.5 ] , Ψ 6 = [ 0.5 ; 1 ] ; K 17 = [ 2 , 2 ; 0.5 , 1 ] , Y 17 = [ 0 ; 0.25 ] , Y 27 = [ 1 ; 0.5 ] , Ψ 7 = [ 0.5 ; 1 ] , c 1 = 5 , α 1 = 5 , α 2 = 1.5 , σ 1 = σ 2 = 0.2 , a δ ( L ) = 0.3765 . By Theorem 3, we select α 1 = 10 and α 2 = 7 in the control protocols (24). The initial values x ( 0 ) are randomly given, and φ ( 0 ) = 0 .
It is clear from Figure 1 that the network is strongly connected. Figure 2 shows that the consensus errors $\delta_{y_i}(t)$, $i = 1, 2, \ldots, 7$, converge to 0 in finite time, which means that the outputs $y_i(t)$ of all agents reach the optimal value $y^* = 0.4101$. The evolution of the optimization goal is depicted in Figure 3, where the global objective function $\sum_{i=1}^N \xi_i g_i(y_i)$ converges to the global optimal value $\sum_{i=1}^N \xi_i g_i(y^*) = 0.1684$. Therefore, the optimization problem (2) is solved. Figure 4 depicts the triggering instants of all agents, which reveals that the communication is discrete. Thus, no Zeno behavior is displayed.
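The discrete triggering pattern in Figure 4 can be reproduced qualitatively with a much simpler model. The sketch below simulates event-triggered consensus of single integrators (a stand-in for the heterogeneous dynamics (1), not the paper's controller) over the same directed cycle, with a decaying threshold $\frac{1}{\mu_1}e^{-\sigma_1 t}$ mimicking triggering condition (26); $\mu_1 = 1$ is a hypothetical value, and $\sigma_1 = 0.2$ matches the parameters above:

```python
import math

# Event-triggered consensus of 7 single integrators over the directed cycle.
N, dt, T = 7, 1e-3, 30.0
mu1, sigma1 = 1.0, 0.2
x = [math.sin(i) for i in range(N)]   # arbitrary initial states
xhat = x[:]                           # last broadcast states
events = [[0.0] for _ in range(N)]    # triggering instants per agent

t = 0.0
while t < T:
    for i in range(N):
        # Trigger when the measurement error reaches the decaying threshold.
        if abs(x[i] - xhat[i]) >= math.exp(-sigma1 * t) / mu1:
            xhat[i] = x[i]            # broadcast and reset the error
            events[i].append(t)
    # Consensus protocol over the cycle: u_i = -(xhat_i - xhat_{i-1}).
    u = [-(xhat[i] - xhat[(i - 1) % N]) for i in range(N)]
    x = [x[i] + dt * u[i] for i in range(N)]
    t += dt

print(max(x) - min(x) < 0.5)          # → True: near-consensus, as in Figure 2
print([len(e) for e in events])       # finitely many discrete events per agent
```

Between events the broadcast state is held constant, so communication only occurs at the listed instants; because the threshold decays no faster than the consensus dynamics, the inter-event times stay bounded away from zero, mirroring the no-Zeno conclusion of Theorem 3.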

6. Conclusions

Firstly, a control law for the DOCP of heterogeneous MASs with directed topologies was provided. By using output regulation techniques, the proposed control law solves the DOCP and ensures global convergence despite the existence of bounded external disturbances. The proposed control law was then extended to ETC methods, which enable agents to avoid continuous communication. It was demonstrated that, with this algorithm, consensus can be achieved and Zeno behavior is avoided. A numerical example validates the efficiency of the theoretical conclusions. Future work will focus on extending these results to directed networks with switching topologies.

Author Contributions

D.L. performed the data analyses and wrote the manuscript; J.F. contributed to the conception of the study; Y.Z. and J.F. helped perform the analysis with constructive discussions; J.W. performed the experiment. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant 62006159.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Topology of the strongly connected network.
Figure 2. Trajectories of the consensus errors $\delta_{y_i}(t)$, $i = 1, 2, \ldots, 7$.
Figure 3. The sum of the local objective functions $\sum_{i=1}^N \xi_i g_i(y_i)$.
Figure 4. Triggering instants of each agent.

Share and Cite

MDPI and ACS Style

Wang, J.; Liu, D.; Feng, J.; Zhao, Y. Distributed Optimization Control for Heterogeneous Multiagent Systems under Directed Topologies. Mathematics 2023, 11, 1479. https://doi.org/10.3390/math11061479
