Article

Distributed Optimization for Second-Order Multi-Agent Systems over Directed Networks

1
College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
2
Department of Mathematics and Physics, Xinjiang Institute of Engineering, Urumqi 830023, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3803; https://doi.org/10.3390/math10203803
Submission received: 30 August 2022 / Revised: 9 October 2022 / Accepted: 11 October 2022 / Published: 15 October 2022
(This article belongs to the Section Engineering Mathematics)

Abstract

This paper studies a generalized distributed optimization problem for second-order multi-agent systems (MASs) over directed networks. Firstly, an improved distributed continuous-time algorithm is proposed. By using the linear transformation method and Lyapunov stability theory, some conditions are obtained to guarantee that all agents’ states asymptotically reach the optimal solution. Secondly, to reduce unnecessary communication transmission and control cost, an event-triggered algorithm is designed. Moreover, the convergence of the algorithm is proved, and it is shown by strict theoretical analysis that Zeno behavior is avoided. Finally, an example is given to verify the good performance of the proposed algorithms.

1. Introduction

In recent years, MASs [1] with swarm intelligence have been applied in many fields, such as attitude alignment and consensus [2], automated highway systems [3], flocking [4], formation control [5,6], and so on. In practical applications, MASs should optimize a global objective and reach consensus under appropriate control protocols. Different from traditional optimization methods, distributed optimization can effectively avoid dependence on centralized control, improving privacy protection [7], reliability [8], and scalability [9].
The major task of distributed optimization is to design appropriate distributed control protocols, which not only enable agents to achieve efficient swarm behaviors through local collaboration but also optimize the global objective. Therefore, the distributed optimization of MASs is widely used in the maximization of social welfare [10], resource allocation [11], energy management of microgrids [12,13,14,15], and sensor networks [16]. According to different information updating and transmission rules, existing distributed optimization models can generally be divided into two categories: discrete-time models and continuous-time models. Early research mainly focused on discrete-time models [17,18,19,20,21,22,23]. Subsequently, the distributed optimization problem for MASs with continuous-time models was considered, because with the assistance of modern control theory it can effectively avoid the time-complexity analysis required by discrete-time algorithms. At present, some meaningful results have been obtained. For instance, an objective optimization problem for MASs over strongly connected, weight-balanced networks was studied in [24]. Considering the constraints of control input, an optimization algorithm with input saturation was designed in [25]. To shorten the convergence time, finite-time and fixed-time optimization algorithms were proposed in [26,27], respectively.
Most distributed optimization problems have been considered for first-order or second-order MASs. In [28], a first-order distributed optimization problem with an information restriction policy was studied. In [29,30], first-order distributed optimization problems over directed network structures were discussed. In [31,32], some gradient-based algorithms for second-order MASs with undirected networks were proposed. In addition, several distributed optimization problems with exogenous disturbances were considered in [33]. However, most distributed optimization algorithms for second-order MASs have been proposed for undirected network structures, and designing distributed optimization algorithms over directed networks is challenging. This is one of the motivations of this paper.
In practical application, agents are usually limited by communication resources. Continuous communication inevitably leads to high energy consumption. To reduce unnecessary communication, the event-triggered control technique was employed in the design of the distributed optimal control protocol. For example, some event-triggered subgradient and consensus algorithms were proposed to address optimization problems for first-order MASs over undirected networks in [34,35], respectively. In [36], an event-triggered zero-gradient-sum algorithm over directed networks was proposed. Additionally, a distributed optimization problem for second-order MASs with undirected networks was considered based on the event-triggered communication algorithm in [37]. To our knowledge, the distributed optimization problems for second-order MASs over directed networks have not been considered by using an event-triggered communication algorithm.
Based on the above discussion, this paper considers the distributed optimization problem for second-order MASs over directed networks. Both continuous-time and event-triggered communication protocols are proposed. The main contributions of this paper can be broadly summarized as follows:
(1)
Motivated by [38,39], this paper considers a more general distributed optimization problem, which can be regarded as a generalization of the existing ones in [33,37]. The global optimization objective is a weighted sum of local objective functions, which allows the global objective to be adjusted more flexibly. Thus, it has broader application prospects.
(2)
A directed network is constructed based on the considered optimization problem, and a second-order continuous protocol is proposed according to the constructed directed network. Some sufficient conditions are obtained to ensure the solvability of the considered optimization problem.
(3)
In order to reduce the limited resource cost, an event-triggered communication protocol is designed. Combined with the linear transformation method and Lyapunov stability theory, we can prove that the optimization problem can be solved under the proposed protocols.
The remaining parts of this paper are organized as follows. Some preliminaries are presented in Section 2. In Section 3, the continuous-time communication protocol is proposed. In Section 4, an improved event-triggered protocol is given. A simulation example is presented in Section 5. Finally, a conclusion is given in Section 6.
Notations. Let $\mathbb{R}^N$ and $\mathbb{R}^{N \times n}$ denote the $N$-dimensional Euclidean space and the set of $N \times n$ real matrices, respectively. $I_n$ is the $n \times n$ identity matrix. $\mathbf{1}_N$ and $\mathbf{0}_N$ denote the column vectors with $N$ ones and $N$ zeros, respectively. $\otimes$ and $\|\cdot\|$ denote the Kronecker product and the Euclidean norm, respectively. $\nabla f(\cdot)$ represents the gradient of the function $f(\cdot)$. $Q^T$ ($x^T$) is the transpose of the matrix $Q$ (vector $x$), and $Q^{-1}$ is the inverse of the matrix $Q$. $\mathbb{N}^+ = \{1, 2, \dots\}$ denotes the set of positive integers.

2. Preliminaries

2.1. Graph Theory

Consider a MAS with $N$ agents whose interactions are described by a directed graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{A}\}$, in which $\mathcal{V} = \{v_1, v_2, \dots, v_N\}$ is a finite vertex set and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is an edge set. For a directed edge $(v_i, v_j) \in \mathcal{E}$, $v_i$ and $v_j$ represent the tail vertex and head vertex, respectively. The neighbor set of vertex $v_i$ is denoted by $N_i = \{v_j \in \mathcal{V} : (v_j, v_i) \in \mathcal{E}\}$. $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is the weighted adjacency matrix of $\mathcal{G}$, where $a_{ij} > 0$ if $(v_j, v_i) \in \mathcal{E}$, $a_{ij} = 0$ otherwise, and $a_{ii} = 0$. The graph $\mathcal{G}$ is called strongly connected if there is a directed path between any two distinct nodes. The in-degree matrix of $\mathcal{G}$ is defined as $D = \mathrm{diag}\{d_1, d_2, \dots, d_N\}$, where $d_i = \sum_{j \in N_i} a_{ij}$. The corresponding Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ is defined by $L = D - \mathcal{A}$. A digraph $\mathcal{G}$ is called detail-balanced [40] if there exists a positive vector $\vartheta = (\vartheta_1, \vartheta_2, \dots, \vartheta_N)^T$ such that $\vartheta_i a_{ij} = \vartheta_j a_{ji}$ for $i, j = 1, 2, \dots, N$.
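These graph-theoretic objects are easy to check numerically. The sketch below builds the in-degree matrix and Laplacian of a small hypothetical digraph and tests the detail-balance condition; all weights are invented for the example:

```python
import numpy as np

# Hypothetical 3-agent digraph: a_ij > 0 means agent i receives
# information from agent j; the numbers are chosen only for illustration.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
theta = np.array([0.2, 0.4, 0.4])      # candidate detail-balance weights

D = np.diag(A.sum(axis=1))             # in-degree matrix D
L = D - A                              # Laplacian L = D - A

# Detail balance: theta_i * a_ij == theta_j * a_ji for all i, j,
# i.e. diag(theta) @ A is a symmetric matrix.
S = np.diag(theta) @ A
print(np.allclose(S, S.T))             # detail-balanced for this theta?
print(np.allclose(L.sum(axis=1), 0.0)) # Laplacian rows sum to zero
```

The symmetry test is exactly the detail-balance definition written in matrix form, which is also how the weighted graph in Section 2.3 is verified.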

2.2. Basic Definition

Definition 1.
The function $f(\cdot): \mathbb{R}^n \to \mathbb{R}$ is called $s$-strongly convex if for any $x_1, x_2 \in \mathbb{R}^n$ with $x_1 \neq x_2$, there exists $s > 0$ such that $(x_1 - x_2)^T(\nabla f(x_1) - \nabla f(x_2)) \ge s\|x_1 - x_2\|_2^2$.
Definition 2.
The gradient $\nabla f(\cdot)$ of the function $f(\cdot): \mathbb{R}^n \to \mathbb{R}$ is called $m$-Lipschitz if for any $x_1, x_2 \in \mathbb{R}^n$ with $x_1 \neq x_2$, there exists $m > 0$ such that $\|\nabla f(x_1) - \nabla f(x_2)\| \le m\|x_1 - x_2\|$.

2.3. Problem Statement

In this paper, we mainly study the following optimization problem in a distributed manner:
$$\min_{x \in \mathbb{R}^n} \tilde{f}(x) = \sum_{i=1}^N \xi_i f_i(x), \qquad (1)$$
where $x \in \mathbb{R}^n$ is the decision vector and $f_i(\cdot): \mathbb{R}^n \to \mathbb{R}$ is the local cost function of agent $v_i$. $\xi_i \in (0, 1)$ is a weight parameter satisfying $\sum_{i=1}^N \xi_i = 1$, and $\tilde{f}(x)$ is the global objective function. Our aim is to solve the optimization problem (1) by proposing some distributed algorithms.
Remark 1.
In existing works [26,27,33,36,37,41,42], the global optimization problem is given by $\min_{x \in \mathbb{R}^n} \tilde{f}(x) = \sum_{i=1}^N f_i(x)$, whereas problem (1) is considered in this paper. When we choose $\xi_i = \frac{1}{N}$, $i = 1, 2, \dots, N$, problem (1) becomes $\min_{x \in \mathbb{R}^n} \tilde{f}(x) = \frac{1}{N}\sum_{i=1}^N f_i(x)$, which has the same optimal solution as $\min_{x \in \mathbb{R}^n} \tilde{f}(x) = \sum_{i=1}^N f_i(x)$. Therefore, problem (1) can be regarded as a generalization of the existing one. Furthermore, problem (1) is equivalent to the optimization problem $\min_{x \in \mathbb{R}^n} \tilde{F}(x) = \sum_{i=1}^N \theta_i f_i(x)$ with $\theta_i > 0$: if we let $\Theta = \sum_{i=1}^N \theta_i$ and $\xi_i = \theta_i / \Theta$, then $\tilde{F}(x) = \Theta \sum_{i=1}^N \xi_i f_i(x)$ with $\sum_{i=1}^N \xi_i = 1$. As $\Theta$ is a positive constant, $\tilde{F}$ has the same optimal solution as problem (1). Hence, problem (1) is more general and has more extensive applications.
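The scaling argument in Remark 1 can be checked numerically: normalizing the weights $\theta_i$ by $\Theta$ does not move the minimizer. The quadratic local costs below are invented for illustration:

```python
import numpy as np

# Illustrative weighted objective: f_i(x) = (x - c_i)^2 with weights theta.
theta = np.array([1.0, 3.0, 6.0])
c = np.array([0.0, 1.0, 2.0])

def argmin_weighted(w):
    # The minimizer of sum_i w_i (x - c_i)^2 is the w-weighted mean of c.
    return float(np.dot(w, c) / w.sum())

xi = theta / theta.sum()          # normalized weights, sum_i xi_i = 1
print(argmin_weighted(theta))     # same minimizer before normalization...
print(argmin_weighted(xi))        # ...and after, as Remark 1 states
```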
For the optimization problem (1), we use $x_i \in \mathbb{R}^n$ to represent agent $i$'s estimate of the decision vector $x$. Denote $x = [x_1^T, x_2^T, \dots, x_N^T]^T \in \mathbb{R}^{Nn}$. Assume that the network topology $\mathcal{G}$ among agents is directed and strongly connected, and the corresponding Laplacian matrix is denoted by $L$. Then, problem (1) can be written as the following problem:
$$\min_{x \in \mathbb{R}^{Nn}} F(x) = \sum_{i=1}^N \xi_i f_i(x_i), \quad \text{s.t.} \quad (L \otimes I_n)x = 0. \qquad (2)$$
The proof is similar to the one in [43].
To solve the above problem, the network topology $\mathcal{G}$ needs to be designed carefully. Based on the weight parameters $\xi_i$, $i = 1, \dots, N$, we need to construct a directed graph adapted to the distributed optimization problem (2). In [41], a directed detail-balanced network construction method was proposed, which is also feasible in this paper; hence, the specific construction algorithm is omitted here. In the constructed network, the adjacency matrix $\mathcal{A}$ needs to satisfy the following two conditions:
(1).
The directed graph $\mathcal{G}$ is detail-balanced with the weight vector $\xi = [\xi_1, \dots, \xi_N]^T$; that is, $\xi_i a_{ij} = \xi_j a_{ji}$ for $i, j = 1, 2, \dots, N$.
(2).
The directed graph G is strongly connected.
Let $\Xi = \mathrm{diag}\{\xi_1, \xi_2, \dots, \xi_N\}$, and let $\mathcal{G}'$ be the new undirected graph whose adjacency matrix is $\mathcal{A}' = [\xi_i a_{ij}]_{N \times N}$, so that the corresponding Laplacian matrix is $W = \Xi L = [w_{ij}] \in \mathbb{R}^{N \times N}$. Then the matrix $W$ is symmetric, and its eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_N$ satisfy $0 = \lambda_1 < \lambda_2 \le \dots \le \lambda_N$. Define $Q = (q_1, Q_2) \in \mathbb{R}^{N \times N}$, in which
$$q_1 = \frac{\mathbf{1}_N}{\sqrt{N}}, \quad Q_2 \in \mathbb{R}^{N \times (N-1)}, \quad q_1^T Q_2 = \mathbf{0}_{N-1}^T, \quad Q_2^T Q_2 = I_{N-1},$$
and $Q^T Q = I_N$. The matrix $Q$ is an orthogonal matrix satisfying $Q^T W Q = \mathrm{diag}\{\lambda_1, \lambda_2, \dots, \lambda_N\}$.
To analyze the above problem, the following assumptions about the objective functions are widely used; details can be found in [24,33,37].
Assumption 1.
The objective function $f_i(\cdot): \mathbb{R}^n \to \mathbb{R}$ of agent $i$ is differentiable and $s_i$-strongly convex on $\mathbb{R}^n$.
Assumption 2.
The gradient $\nabla f_i(\cdot)$ is $m_i$-Lipschitz.
Assumption 3.
The network topology $\mathcal{G}$ is directed, strongly connected, and $\xi$-detail-balanced.

3. The Continuous-Time Communication Protocol

In this section, we assume that the communication among agents is continuous. Based on the directed graph $\mathcal{G}$, the following algorithm is presented:
$$\dot{x}_i(t) = \xi_i y_i(t),$$
$$\dot{y}_i(t) = -k_1 y_i(t) - k_2 \nabla f_i(x_i(t)) - k_3 h_i(t),$$
$$\dot{h}_i(t) = -k_4 \sum_{j \in N_i} a_{ij}\big(x_j(t) - x_i(t)\big), \quad i = 1, 2, \dots, N, \qquad (3)$$
where $x_i(t), y_i(t) \in \mathbb{R}^n$ denote the position and velocity states of agent $i$, respectively, $k_1$, $k_2$, $k_3$, and $k_4$ are positive parameters, and $h_i(t) \in \mathbb{R}^n$ is an auxiliary variable whose initial value satisfies $\sum_{i=1}^N \xi_i h_i(0) = 0$.
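To illustrate how (3) behaves, the following sketch forward-Euler integrates the protocol for three agents with scalar quadratic costs $f_i(x) = \frac{s_i}{2}(x - c_i)^2$ on a hypothetical detail-balanced digraph. All numbers (graph, weights, gains, step size) are ad hoc demo choices, roughly respecting the smallness of $k_4$ required by Assumption 4 below; the optimizer of $\sum_i \xi_i f_i$ is the weighted mean $\sum_i \xi_i s_i c_i / \sum_i \xi_i s_i$, which gives an easy convergence check:

```python
import numpy as np

# Hypothetical detail-balanced digraph and gains (illustration only).
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
xi = np.array([0.2, 0.4, 0.4])
L = np.diag(A.sum(axis=1)) - A
s, c = np.ones(3), np.array([1.0, 2.0, 3.0])   # f_i(x) = s_i/2 (x - c_i)^2
k1, k2, k3, k4 = 3.0, 1.0, 1.0, 0.1            # small k4, cf. Assumption 4

dt, steps = 0.01, 30000
x = np.array([0.0, 1.0, 4.0])                  # positions
y = np.zeros(3)                                # velocities
h = np.zeros(3)                                # auxiliary, sum_i xi_i h_i(0) = 0

for _ in range(steps):
    grad = s * (x - c)                         # local gradients
    x_new = x + dt * (xi * y)
    y_new = y + dt * (-k1 * y - k2 * grad - k3 * h)
    h_new = h + dt * (k4 * (L @ x))            # dh_i = -k4 sum_j a_ij (x_j - x_i)
    x, y, h = x_new, y_new, h_new

x_star = np.dot(xi * s, c) / np.dot(xi, s)     # closed-form weighted optimum
print(np.abs(x - x_star).max())                # distance to the optimum
```

Note that the Euler update conserves $\sum_i \xi_i h_i(t)$ exactly, since $\xi^T L = 0$ for a detail-balanced graph.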
Remark 2.
The distributed optimal algorithm (3) not only contains the consensus term $-k_4\sum_{j \in N_i} a_{ij}(x_j(t) - x_i(t))$ and the optimization term $-k_2\nabla f_i(x_i(t))$ but also has the auxiliary term $-k_3 h_i(t)$. Motivated by [37,42], the auxiliary variable $h_i(t)$, as the integral of $-k_4\sum_{j \in N_i} a_{ij}(x_j(t) - x_i(t))$, is designed to ensure that the equilibrium point of (3) is the optimal value through the transmission of local information.
Remark 3.
Compared with [37], each term of (3) has a weighting coefficient to efficiently adjust its influence on the distributed optimization algorithm. It is well known that the sensitivity of second-order MASs is higher than that of first-order MASs, so the stability analysis becomes more difficult. The algorithm (3) is suitable for second-order MASs because of its improved flexibility and higher fineness. In addition, different from the algorithm in [42], the dynamics of $y_i(t)$ do not use the position information of the agents in this paper. Thus, the computational cost is lower than that of the existing one.
Because of the strong convexity of the functions $f_i(\cdot)$, the objective function $\tilde{f}(\cdot)$ is also differentiable and strongly convex. Based on Assumption 1, the optimization problem (1) has a unique optimal solution
$$x^* = \arg\min_{x \in \mathbb{R}^n} \tilde{f}(x), \quad \|x^*\| < +\infty.$$
According to the definition of Laplacian L , the distributed optimization algorithm (3) can be rewritten as the following compact form:
$$\dot{x}(t) = (\Xi \otimes I_n)y(t),$$
$$\dot{y}(t) = -k_1 y(t) - k_2 \nabla\bar{f}(x(t)) - k_3 h(t),$$
$$\dot{h}(t) = k_4 (L \otimes I_n)x(t), \qquad (4)$$
where $x(t) = [x_1^T(t), x_2^T(t), \dots, x_N^T(t)]^T$, $y(t) = [y_1^T(t), \dots, y_N^T(t)]^T$, $\nabla\bar{f}(x(t)) = [(\nabla f_1(x_1(t)))^T, \dots, (\nabla f_N(x_N(t)))^T]^T$, and $h(t) = [h_1^T(t), \dots, h_N^T(t)]^T$.
Assumption 4.
For algorithm (3), $\sum_{i=1}^N \xi_i h_i(0) = 0$ is satisfied, and there exist parameters $k_1, k_2 > 0$ and $0 < k_4 < \frac{k_1^3 \epsilon_0^2 (1 - \epsilon_0)}{2 k_3 \lambda_N}$ such that $\pi_1 = k_1 k_2 \epsilon_0 \check{\xi}\check{s} - k_3 k_4 \lambda_N > 0$ and $\pi_2 = k_1(1 - \epsilon_0) > 0$, where $\check{\xi} = \min\{\xi_1, \xi_2, \dots, \xi_N\}$, $\check{s} = \min\{s_1, s_2, \dots, s_N\}$, and $\epsilon_0 > 0$ is a constant.
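The gain conditions are straightforward to verify numerically for a candidate parameter set. A sketch with illustrative values ($\lambda_N$ would come from the constructed $W$; nothing here is prescribed by the paper):

```python
# Hypothetical parameters; lam_N stands for the largest eigenvalue of W = Xi*L.
k1, k2, k3, k4 = 3.0, 1.0, 1.0, 0.1
eps0 = 0.6
xi_min, s_min = 0.2, 1.0          # min weight and min strong-convexity modulus
lam_N = 1.93

pi1 = k1 * k2 * eps0 * xi_min * s_min - k3 * k4 * lam_N
pi2 = k1 * (1 - eps0)
k4_bound = k1**3 * eps0**2 * (1 - eps0) / (2 * k3 * lam_N)

print(pi1 > 0, pi2 > 0, k4 < k4_bound)   # Assumption 4 satisfied?
```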
Lemma 1.
For algorithm (3), if Assumptions 1 and 2 are satisfied, then there is a unique solution for any initial value on [ 0 , + ) .
Proof. 
Based on the strict convexity of the functions $f_i(\cdot)$, one can easily see that at least one solution of algorithm (3) exists. Therefore, we only need to prove the uniqueness of the solution. Let $P(t) = [x^T(t), y^T(t), h^T(t)]^T$; then one can obtain
$$\dot{P}(t) = \begin{bmatrix} (\Xi \otimes I_n) y(t) \\ -k_1 y(t) - k_2 \nabla\bar{f}(x(t)) - k_3 h(t) \\ k_4 (L \otimes I_n) x(t) \end{bmatrix}.$$
Noticing that $\nabla\bar{f}(x(t))$ is Lipschitz continuous, the solution $P(t)$ exists. Assume there are two different solutions of Equation (4), $P_a(t)$ and $P_b(t)$, satisfying the initial condition $P_a(0) = P_b(0)$. Based on Assumptions 1 and 2, we can obtain
$$\begin{aligned} \|\dot{P}_a(t) - \dot{P}_b(t)\| &\le \|\Xi \otimes I_n\|\,\|y_a(t) - y_b(t)\| + k_1\|y_a(t) - y_b(t)\| + k_2\|\nabla\bar{f}(x_a(t)) - \nabla\bar{f}(x_b(t))\| \\ &\quad + k_3\|h_a(t) - h_b(t)\| + k_4\|L \otimes I_n\|\,\|x_a(t) - x_b(t)\| \\ &\le \hat{\xi}\|y_a(t) - y_b(t)\| + k_1\|y_a(t) - y_b(t)\| + k_2\hat{m}\|x_a(t) - x_b(t)\| + k_3\|h_a(t) - h_b(t)\| + k_4\|L\|\,\|x_a(t) - x_b(t)\| \\ &\le \iota\|P_a(t) - P_b(t)\|, \end{aligned}$$
where $\hat{\xi} = \max\{\xi_1, \xi_2, \dots, \xi_N\}$, $\hat{m} = \max\{m_1, m_2, \dots, m_N\}$, and $\iota = \hat{\xi} + k_1 + k_2\hat{m} + k_3 + k_4\|L\|$. Then, it follows that
$$\frac{1}{2}\|P_a(t) - P_b(t)\|^2 = \int_0^t \frac{d}{ds}\Big(\frac{1}{2}\|P_a(s) - P_b(s)\|^2\Big)\,ds \le \int_0^t \iota\|P_a(s) - P_b(s)\|^2\,ds.$$
By using the Gronwall inequality, one has
$$\frac{1}{2}\|P_a(t) - P_b(t)\|^2 \le 0,$$
which means that $P_a(t) = P_b(t)$ for $t \in [0, +\infty)$. Therefore, Lemma 1 is proved. □
If Assumption 1 and the equality $\sum_{i=1}^N \xi_i h_i(0) = 0$ hold, then there exists an equilibrium point $(x^*, y^*, h^*)$ of algorithm (4). Therefore, the following equations are satisfied at $(x^*, y^*, h^*)$:
$$(\Xi \otimes I_n) y^* = 0, \quad -k_1 y^* - k_2 \nabla\bar{f}(x^*) - k_3 h^* = 0, \quad k_4 (L \otimes I_n) x^* = 0. \qquad (5)$$
Since $\mathcal{G}$ is strongly connected and detail-balanced, it holds that $(\xi \otimes I_n)^T (L \otimes I_n) = 0$. Left-multiplying the third equation of (4) by $(\xi \otimes I_n)^T$ gives $(\xi \otimes I_n)^T \dot{h}(t) = k_4 (\xi \otimes I_n)^T (L \otimes I_n) x(t) = 0$, or equivalently, $\sum_{i=1}^N \xi_i \dot{h}_i(t) = 0$. In combination with $\sum_{i=1}^N \xi_i h_i(0) = 0$, it follows that
$$\sum_{i=1}^N \xi_i h_i(t) = \sum_{i=1}^N \xi_i h_i(0) = 0, \quad t \ge 0.$$
Because $\Xi$ is an invertible diagonal matrix, the first equation of (5) implies $y^* = 0$. Substituting $y^* = 0$ into the second equation of (5), left-multiplying by $(\xi \otimes I_n)^T$, and using the conservation property above, it follows that $k_2 (\xi \otimes I_n)^T \nabla\bar{f}(x^*) = 0$, that is, $\sum_{i=1}^N \xi_i \nabla f_i(x_i^*) = 0$. Hence, $x^*$ is an optimal solution of (2).
We define $\hat{x}(t) = x(t) - x^*$, $\hat{y}(t) = y(t) - y^*$, $\hat{h}(t) = h(t) - h^*$, and $\phi(t) = \nabla\bar{f}(\hat{x}(t) + x^*) - \nabla\bar{f}(x^*)$; then system (4) is described as follows:
$$\dot{\hat{x}}(t) = (\Xi \otimes I_n)\hat{y}(t), \quad \dot{\hat{y}}(t) = -k_1\hat{y}(t) - k_2\phi(t) - k_3\hat{h}(t), \quad \dot{\hat{h}}(t) = k_4(L \otimes I_n)\hat{x}(t). \qquad (6)$$
Denoting $\check{x}(t) = (Q^T \otimes I_n)\hat{x}(t)$, $\check{y}(t) = (Q^T \Xi \otimes I_n)\hat{y}(t)$, and $\check{h}(t) = (Q^T \Xi \otimes I_n)\hat{h}(t)$, it has
$$\dot{\check{x}}(t) = \check{y}(t), \quad \dot{\check{y}}(t) = -k_1\check{y}(t) - k_2(Q^T \Xi \otimes I_n)\phi(t) - k_3\check{h}(t), \quad \dot{\check{h}}(t) = k_4(Q^T W Q \otimes I_n)\check{x}(t). \qquad (7)$$
Then, the variables in (7) are partitioned as $\check{x}(t) = (\check{x}_1^T(t), \check{x}_{2:N}^T(t))^T$, $\check{y}(t) = (\check{y}_1^T(t), \check{y}_{2:N}^T(t))^T$, and $\check{h}(t) = (\check{h}_1^T(t), \check{h}_{2:N}^T(t))^T$, in which $\check{x}_1(t), \check{y}_1(t), \check{h}_1(t) \in \mathbb{R}^n$ and $\check{x}_{2:N}(t), \check{y}_{2:N}(t), \check{h}_{2:N}(t) \in \mathbb{R}^{(N-1)n}$. Therefore, equality (7) can be decomposed as
$$\dot{\check{x}}_1(t) = \check{y}_1(t), \quad \dot{\check{y}}_1(t) = -k_1\check{y}_1(t) - k_2(q_1^T \Xi \otimes I_n)\phi(t), \quad \dot{\check{h}}_1(t) = 0,$$
and
$$\dot{\check{x}}_{2:N}(t) = \check{y}_{2:N}(t), \quad \dot{\check{y}}_{2:N}(t) = -k_1\check{y}_{2:N}(t) - k_2(Q_2^T \Xi \otimes I_n)\phi(t) - k_3\check{h}_{2:N}(t), \quad \dot{\check{h}}_{2:N}(t) = k_4(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t).$$
Remark 4.
Different from the dynamics in [37], the parameter $k_3$ is added to adjust the changing rate of $h_i(t)$ in the proposed dynamics, which broadens the applicability of the second-order system. In the meantime, the matrix transformation from system (6) to system (7) resolves the difficulty caused by the asymmetry of the directed network topology.
Theorem 1.
Suppose that Assumptions 1–4 hold, then the optimization problem (1) can be solved via distributed algorithm (3).
Proof. 
Construct the following Lyapunov function
$$V_1(t) = W_1(t) + W_2(t) + k_2 W_3(t), \qquad (8)$$
where
$$W_1(t) = \frac{k_1^2 \epsilon_0}{2}\check{x}_1^T(t)\check{x}_1(t) + k_1\epsilon_0\check{x}_1^T(t)\check{y}_1(t) + \frac{1}{2}\check{y}_1^T(t)\check{y}_1(t),$$
$$\begin{aligned} W_2(t) &= \frac{k_1^2 \epsilon_0}{2}\check{x}_{2:N}^T(t)\check{x}_{2:N}(t) + k_1\epsilon_0\check{x}_{2:N}^T(t)\check{y}_{2:N}(t) + \frac{1}{2}\check{y}_{2:N}^T(t)\check{y}_{2:N}(t) \\ &\quad + k_3\check{x}_{2:N}^T(t)\check{h}_{2:N}(t) + \frac{k_1 k_3 \epsilon_0}{2k_4}\check{h}_{2:N}^T(t)\big((Q_2^T W Q_2)^{-1} \otimes I_n\big)\check{h}_{2:N}(t), \end{aligned}$$
and
$$W_3(t) = F(x(t)) - F(x^*) - (\nabla F(x^*))^T\big(x(t) - x^*\big).$$
Note that $W_1(t), W_2(t) > 0$ if $\epsilon_0 < 1$, and $W_3(t) \ge 0$ with $W_3(t) = 0$ if and only if $\check{x} = 0$. Then $V_1(t) \ge 0$, and $V_1(t) = 0$ if and only if $\check{x} = 0$, $\check{y} = 0$, $\check{h} = 0$. Taking the time derivatives of $W_1(t)$, $W_2(t)$, and $W_3(t)$, it has
$$\begin{aligned} \dot{W}_1(t) &= k_1^2\epsilon_0\check{x}_1^T(t)\dot{\check{x}}_1(t) + k_1\epsilon_0\dot{\check{x}}_1^T(t)\check{y}_1(t) + k_1\epsilon_0\check{x}_1^T(t)\dot{\check{y}}_1(t) + \check{y}_1^T(t)\dot{\check{y}}_1(t) \\ &= -k_1(1-\epsilon_0)\check{y}_1^T(t)\check{y}_1(t) - k_2\check{y}_1^T(t)(q_1^T\Xi \otimes I_n)\phi(t) - k_1 k_2\epsilon_0\check{x}_1^T(t)(q_1^T\Xi \otimes I_n)\phi(t), \\ \dot{W}_2(t) &= k_1^2\epsilon_0\check{x}_{2:N}^T(t)\dot{\check{x}}_{2:N}(t) + k_1\epsilon_0\dot{\check{x}}_{2:N}^T(t)\check{y}_{2:N}(t) + k_1\epsilon_0\check{x}_{2:N}^T(t)\dot{\check{y}}_{2:N}(t) + \check{y}_{2:N}^T(t)\dot{\check{y}}_{2:N}(t) \\ &\quad + k_3\dot{\check{x}}_{2:N}^T(t)\check{h}_{2:N}(t) + k_3\check{x}_{2:N}^T(t)\dot{\check{h}}_{2:N}(t) + \frac{k_1 k_3\epsilon_0}{k_4}\check{h}_{2:N}^T(t)\big((Q_2^T W Q_2)^{-1} \otimes I_n\big)\dot{\check{h}}_{2:N}(t) \\ &= -k_1(1-\epsilon_0)\check{y}_{2:N}^T(t)\check{y}_{2:N}(t) - k_2\check{y}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) - k_1 k_2\epsilon_0\check{x}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) \\ &\quad + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t), \end{aligned}$$
and
$$\begin{aligned} \dot{W}_3(t) &= (\nabla F(x(t)))^T\dot{x}(t) - (\nabla F(x^*))^T\dot{x}(t) \\ &= \sum_{i=1}^N \xi_i(\nabla f_i(\hat{x}_i(t) + x^*))^T\xi_i y_i(t) - \sum_{i=1}^N \xi_i(\nabla f_i(x^*))^T\xi_i y_i(t) \\ &= \sum_{i=1}^N \xi_i\big(\nabla f_i(\hat{x}_i(t) + x^*) - \nabla f_i(x^*)\big)^T\xi_i\hat{y}_i(t) \\ &= \hat{y}^T(t)(\Xi^2 \otimes I_n)\phi(t) = \check{y}^T(t)(Q^T\Xi \otimes I_n)\phi(t). \end{aligned}$$
Therefore, the time derivative of $V_1(t)$ is obtained as
$$\dot{V}_1(t) = \dot{W}_1(t) + \dot{W}_2(t) + k_2\dot{W}_3(t) = -k_1(1-\epsilon_0)\check{y}^T(t)\check{y}(t) - k_1 k_2\epsilon_0\check{x}^T(t)(Q^T\Xi \otimes I_n)\phi(t) + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t).$$
Since
$$k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t) \le k_3 k_4\lambda_N\|\check{x}_{2:N}(t)\|^2,$$
and
$$-k_1 k_2\epsilon_0\check{x}^T(t)(Q^T\Xi \otimes I_n)\phi(t) = -k_1 k_2\epsilon_0\sum_{i=1}^N \xi_i\hat{x}_i^T(t)\big(\nabla f_i(\hat{x}_i(t) + x^*) - \nabla f_i(x^*)\big) \le -k_1 k_2\epsilon_0\sum_{i=1}^N s_i\xi_i\hat{x}_i^T(t)\hat{x}_i(t) \le -k_1 k_2\epsilon_0\check{\xi}\check{s}\|\hat{x}(t)\|^2.$$
As $Q$ is an orthogonal matrix, $\|\check{x}(t)\| = \|\hat{x}(t)\|$, and one has $\|\check{x}(t)\|^2 = \|\check{x}_1(t)\|^2 + \|\check{x}_{2:N}(t)\|^2$.
Therefore, it has that
$$\dot{V}_1(t) \le -k_1(1-\epsilon_0)\|\check{y}(t)\|^2 - k_1 k_2\epsilon_0\check{\xi}\check{s}\|\hat{x}(t)\|^2 + k_3 k_4\lambda_N\|\check{x}_{2:N}(t)\|^2 = -\pi_1\|\check{x}_{2:N}(t)\|^2 - \pi_2\|\check{y}(t)\|^2 - \pi_3\|\check{x}_1(t)\|^2,$$
where $\pi_1 = k_1 k_2\epsilon_0\check{\xi}\check{s} - k_3 k_4\lambda_N$, $\pi_2 = k_1(1-\epsilon_0)$, and $\pi_3 = k_1 k_2\epsilon_0\check{\xi}\check{s}$. Clearly $\pi_3 > 0$, so we only need $\pi_i > 0$, $i = 1, 2$, which hold under the conditions of Theorem 1. Hence $\dot{V}_1(t) \le 0$, and $\dot{V}_1(t) = 0$ if and only if $\check{y} = 0$, $\check{x} = 0$, $\check{h} = 0$. Therefore, all agents' states asymptotically reach the optimal solution. □
Remark 5.
The distributed optimization problem for MASs over directed networks is more difficult than that over undirected networks, mainly in two aspects. The first aspect is the symmetry of the Laplacian matrix of the network topology: the symmetry of the Laplacian of an undirected topology can be fully exploited, as in [33,37], but the Laplacian is usually asymmetric for a directed topology. Therefore, to solve the optimization problem over directed networks, other more complicated methods need to be adopted. The second aspect is that the optimization objective is usually only the plain sum of the local objective functions, without a convex combination of weights. In order to solve the more general weighted distributed optimization problem, the directed network needs to be designed carefully in this paper.
Remark 6.
It is not difficult to see that the above algorithm relies on continuous-time communication, which has distinct shortcomings. In practical applications, a continuous-time algorithm inevitably incurs high communication costs, while bandwidth, transmission energy, and communication frequency are limited by the current industrial level and cannot be used without restraint. Therefore, in the next section, we propose a new algorithm to overcome this drawback.

4. The Event-Triggered Communication Protocol

Based on the directed communication topology $\mathcal{G}$, the following algorithm is presented:
$$\dot{x}_i(t) = \xi_i y_i(t), \quad \dot{y}_i(t) = -k_1 y_i(t) - k_2\nabla f_i(x_i(t)) - k_3 h_i(t), \quad \dot{h}_i(t) = -k_4\sum_{j \in N_i} a_{ij}\big(x_j(t_k) - x_i(t_k)\big), \quad t \in [t_k, t_{k+1}), \qquad (12)$$
where $t_k$ denotes the $k$th event instant, $k \in \mathbb{N}^+$, and $x_i(t_k)$ is the state of agent $i$ at that instant. The other parameters are consistent with those in algorithm (3).
Before analyzing the distributed optimal algorithm, the measurement error of agent $i$ is defined as $e_i(t) = x_i(t_k) - x_i(t)$, $t \in [t_k, t_{k+1})$. With this definition, the distributed optimal algorithm (12) for $t \in [t_k, t_{k+1})$ can be rewritten as
$$\dot{x}_i(t) = \xi_i y_i(t), \quad \dot{y}_i(t) = -k_1 y_i(t) - k_2\nabla f_i(x_i(t)) - k_3 h_i(t), \quad \dot{h}_i(t) = -k_4\sum_{j \in N_i} a_{ij}\big[(x_j(t) - x_i(t)) + (e_j(t) - e_i(t))\big]. \qquad (13)$$
According to the definition of the Laplacian $L$, the distributed optimal algorithm can be rewritten in the following compact matrix–vector form:
$$\dot{x}(t) = (\Xi \otimes I_n) y(t), \quad \dot{y}(t) = -k_1 y(t) - k_2\nabla\bar{f}(x(t)) - k_3 h(t), \quad \dot{h}(t) = k_4(L \otimes I_n)\big(x(t) + e(t)\big), \qquad (14)$$
where $x(t) = [x_1^T(t), \dots, x_N^T(t)]^T$, $y(t) = [y_1^T(t), \dots, y_N^T(t)]^T$, $\nabla\bar{f}(x(t)) = [(\nabla f_1(x_1(t)))^T, \dots, (\nabla f_N(x_N(t)))^T]^T$, $h(t) = [h_1^T(t), \dots, h_N^T(t)]^T$, and $e(t) = [e_1^T(t), \dots, e_N^T(t)]^T$.
We perform a coordinate transformation similar to that before. Define $\hat{x}(t) = x(t) - x^*$, $\hat{y}(t) = y(t) - y^*$, $\hat{h}(t) = h(t) - h^*$, and $\phi(t) = \nabla\bar{f}(x(t)) - \nabla\bar{f}(x^*)$; then the transformed system is described as follows:
$$\dot{\hat{x}}(t) = (\Xi \otimes I_n)\hat{y}(t), \quad \dot{\hat{y}}(t) = -k_1\hat{y}(t) - k_2\phi(t) - k_3\hat{h}(t), \quad \dot{\hat{h}}(t) = k_4(L \otimes I_n)\big(\hat{x}(t) + e(t)\big). \qquad (15)$$
Denote $\check{x}(t) = (Q^T \otimes I_n)\hat{x}(t)$, $\check{y}(t) = (Q^T\Xi \otimes I_n)\hat{y}(t)$, $\check{h}(t) = (Q^T\Xi \otimes I_n)\hat{h}(t)$, and $\check{e}(t) = (Q^T \otimes I_n)e(t)$. Then, we partition the variables as $\check{x}(t) = [\check{x}_1^T(t), \check{x}_{2:N}^T(t)]^T$, $\check{y}(t) = [\check{y}_1^T(t), \check{y}_{2:N}^T(t)]^T$, $\check{h}(t) = [\check{h}_1^T(t), \check{h}_{2:N}^T(t)]^T$, and $\check{e}(t) = [\check{e}_1^T(t), \check{e}_{2:N}^T(t)]^T$, where $\check{x}_1(t), \check{y}_1(t), \check{h}_1(t), \check{e}_1(t) \in \mathbb{R}^n$ and $\check{x}_{2:N}(t), \check{y}_{2:N}(t), \check{h}_{2:N}(t), \check{e}_{2:N}(t) \in \mathbb{R}^{(N-1)n}$. Hence, equality (15) can be rewritten as follows:
$$\dot{\check{x}}_1(t) = \check{y}_1(t), \quad \dot{\check{y}}_1(t) = -k_1\check{y}_1(t) - k_2(q_1^T\Xi \otimes I_n)\phi(t), \quad \dot{\check{h}}_1(t) = 0, \qquad (16)$$
and
$$\dot{\check{x}}_{2:N}(t) = \check{y}_{2:N}(t), \quad \dot{\check{y}}_{2:N}(t) = -k_1\check{y}_{2:N}(t) - k_2(Q_2^T\Xi \otimes I_n)\phi(t) - k_3\check{h}_{2:N}(t), \quad \dot{\check{h}}_{2:N}(t) = k_4(Q_2^T W Q_2 \otimes I_n)\big(\check{x}_{2:N}(t) + \check{e}_{2:N}(t)\big). \qquad (17)$$
Assumption 5.
For MAS (12), $\sum_{i=1}^N \xi_i h_i(0) = 0$ is satisfied, and there exist parameters $k_1, k_2 > 0$ and $0 < k_4 < \frac{k_1^3\epsilon_0^2(1-\epsilon_0)}{2k_3\lambda_N}$ such that
$$\vartheta_1 = k_1(1-\epsilon_0) + \frac{\epsilon k_1}{k_2} - \frac{(\epsilon + k_1 k_2)k_3}{4k_2\zeta_2} - \epsilon k_4\lambda_N\zeta_5 - \frac{\epsilon^2 k_4^2\lambda_N^2}{4\zeta_6} > 0,$$
$$\vartheta_2 = k_3\epsilon - \frac{k_3^2\epsilon_0^2}{4\zeta_1} - \frac{(\epsilon + k_1 k_2)k_3\zeta_2}{k_2} - \epsilon k_2\hat{\xi}\hat{m}\zeta_3 - \frac{\epsilon k_2 k_4\lambda_N}{4\zeta_4} - \frac{\epsilon^2 k_4^2}{4\zeta_7} > 0,$$
$$\sigma_1 = k_1 k_2\epsilon_0\check{\xi}\check{s} - k_1^2\zeta_1 - \frac{\epsilon k_2\hat{\xi}\hat{m}}{4\zeta_3} - \epsilon k_2 k_4\lambda_N\zeta_4 - \frac{\epsilon k_4\lambda_N}{4\zeta_5} - k_3 k_4\lambda_N - k_3 k_4\Big(\frac{1}{2} + \mu_0\Big) > 0,$$
where $\zeta_i$, $i = 1, 2, \dots, 7$, are positive constants, $\hat{\xi} = \max\{\xi_1, \xi_2, \dots, \xi_N\}$, $\check{s} = \min\{s_1, s_2, \dots, s_N\}$, $\hat{m} = \max\{m_1, m_2, \dots, m_N\}$, $\epsilon$ and $\epsilon_0$ are two positive constants, and $\mu_0 > 0$ is a positive number for adjustment. Moreover, the event instants satisfy
$$t_{k+1} = \inf_t\Big\{t > t_k \;\Big|\; \|e(t)\|^2 \ge \eta\sum_{i=1}^N\Big(\sum_{j=1}^N w_{ij}\|x_j(t) - x_i(t)\|^2\Big)\Big\}, \qquad (18)$$
where $\eta = \frac{2k_3 k_4\mu_0}{k_3 k_4\lambda_N + 2(k_1^2\zeta_1 + \zeta_6 + k_2^2\zeta_7)}$.
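A discrete-time sketch of the event-triggered protocol (12) with a trigger of the form (18) looks as follows. The gains and the trigger gain $\eta$ are ad hoc demo values (not derived from Assumption 5), the quadratic costs and graph are invented, and the disagreement sum is computed with the weighted adjacency entries $\xi_i a_{ij}$ of the constructed graph:

```python
import numpy as np

# Hypothetical detail-balanced digraph and ad-hoc gains (illustration only).
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
xi = np.array([0.2, 0.4, 0.4])
L = np.diag(A.sum(axis=1)) - A
s, c = np.ones(3), np.array([1.0, 2.0, 3.0])     # f_i(x) = s_i/2 (x - c_i)^2
k1, k2, k3, k4, eta = 3.0, 1.0, 1.0, 0.1, 0.02

dt, steps = 0.01, 30000
x, y, h = np.array([0.0, 1.0, 4.0]), np.zeros(3), np.zeros(3)
x_event = x.copy()                # states broadcast at the last event instant
n_events = 0
Wadj = xi[:, None] * A            # weighted adjacency entries xi_i * a_ij

for _ in range(steps):
    e = x_event - x               # measurement error e_i(t) = x_i(t_k) - x_i(t)
    disagree = np.sum(Wadj * (x[None, :] - x[:, None]) ** 2)
    if e @ e >= eta * disagree:   # trigger: resample and broadcast the states
        x_event = x.copy()
        n_events += 1
    grad = s * (x - c)
    x, y, h = (x + dt * (xi * y),
               y + dt * (-k1 * y - k2 * grad - k3 * h),
               h + dt * (k4 * (L @ x_event)))  # consensus input held between events

x_star = np.dot(xi * s, c) / np.dot(xi, s)     # weighted optimum of sum_i xi_i f_i
print(np.abs(x - x_star).max(), n_events)
```

Comparing `n_events` against `steps` indicates how much communication the trigger saves; the exclusion of Zeno behavior is a property of the continuous-time analysis, which a fixed-step simulation cannot exhibit by construction.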
Theorem 2.
Suppose that Assumptions 1–3 and 5 hold, then the optimization problem (1) can be solved via distributed algorithm (12).
Proof. 
Choose the same Lyapunov function $V_1(t)$ as in (8). Based on equalities (9)–(11), (16), and (17), taking the time derivatives of $W_1(t)$, $W_2(t)$, and $W_3(t)$ for $t \in [t_k, t_{k+1})$ gives
$$\begin{aligned} \dot{W}_1(t) &= -k_1(1-\epsilon_0)\check{y}_1^T(t)\check{y}_1(t) - k_2\check{y}_1^T(t)(q_1^T\Xi \otimes I_n)\phi(t) - k_1 k_2\epsilon_0\check{x}_1^T(t)(q_1^T\Xi \otimes I_n)\phi(t), \\ \dot{W}_2(t) &= -k_1(1-\epsilon_0)\check{y}_{2:N}^T(t)\check{y}_{2:N}(t) - k_2\check{y}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) - k_1 k_2\epsilon_0\check{x}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) \\ &\quad + k_1 k_3\epsilon_0\check{h}_{2:N}^T(t)\check{e}_{2:N}(t) + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t) + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t), \end{aligned}$$
and
$$\dot{W}_3(t) = \check{y}^T(t)(Q^T\Xi \otimes I_n)\phi(t).$$
Therefore, the time derivative of $V_1(t)$ for $t \in [t_k, t_{k+1})$ is obtained as
$$\begin{aligned} \dot{V}_1(t) &= \dot{W}_1(t) + \dot{W}_2(t) + k_2\dot{W}_3(t) \\ &= -k_1(1-\epsilon_0)\check{y}^T(t)\check{y}(t) - k_1 k_2\epsilon_0\check{x}^T(t)(Q^T\Xi \otimes I_n)\phi(t) + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{x}_{2:N}(t) \\ &\quad + k_1 k_3\epsilon_0\check{h}_{2:N}^T(t)\check{e}_{2:N}(t) + k_3 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t). \end{aligned}$$
By using Young's inequality $ab \le \frac{1}{4\zeta}a^2 + \zeta b^2$ (for $a, b$ and any $\zeta > 0$), one has
$$k_1 k_3\epsilon_0\check{h}_{2:N}^T(t)\check{e}_{2:N}(t) \le \frac{k_3^2\epsilon_0^2}{4\zeta_1}\|\check{h}_{2:N}(t)\|^2 + k_1^2\zeta_1\|\check{e}_{2:N}(t)\|^2.$$
Then, we consider the following Lyapunov function
$$V_2(t) = W_4(t) + \frac{\epsilon}{2k_2}\check{y}_1^T(t)\check{y}_1(t) + \epsilon W_1(t),$$
where
$$W_4(t) = \frac{\epsilon}{2k_2}\check{y}_{2:N}^T(t)\check{y}_{2:N}(t) + \epsilon\check{y}_{2:N}^T(t)\check{h}_{2:N}(t) + \frac{\epsilon k_2}{2}\check{h}_{2:N}^T(t)\check{h}_{2:N}(t).$$
Note that $W_4(t) \ge 0$, and $W_4(t) = 0$ if and only if $\check{y}_{2:N}(t) = 0$ and $\check{h}_{2:N}(t) = 0$. Taking the time derivatives of $W_4(t)$ and $V_2(t)$ for $t \in [t_k, t_{k+1})$, respectively, one has
$$\begin{aligned} \dot{W}_4(t) &= \frac{\epsilon}{k_2}\check{y}_{2:N}^T(t)\dot{\check{y}}_{2:N}(t) + \epsilon\dot{\check{y}}_{2:N}^T(t)\check{h}_{2:N}(t) + \epsilon\check{y}_{2:N}^T(t)\dot{\check{h}}_{2:N}(t) + \epsilon k_2\check{h}_{2:N}^T(t)\dot{\check{h}}_{2:N}(t) \\ &= -\frac{\epsilon k_1}{k_2}\check{y}_{2:N}^T(t)\check{y}_{2:N}(t) - \frac{(\epsilon + k_1 k_2)k_3}{k_2}\check{y}_{2:N}^T(t)\check{h}_{2:N}(t) - \epsilon\check{y}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) - \epsilon k_2\check{h}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) \\ &\quad + \epsilon k_2 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{h}_{2:N}(t) + \epsilon k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{y}_{2:N}(t) + \epsilon k_4\check{y}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t) \\ &\quad - k_3\epsilon\check{h}_{2:N}^T(t)\check{h}_{2:N}(t) + \epsilon k_2 k_4\check{h}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t). \end{aligned}$$
Therefore, the time derivative of $V_2(t)$ for $t \in [t_k, t_{k+1})$ is obtained as
$$\begin{aligned} \dot{V}_2(t) &= \dot{W}_4(t) + \frac{\epsilon}{k_2}\check{y}_1^T(t)\dot{\check{y}}_1(t) + \epsilon\dot{W}_1(t) \\ &= -\frac{\epsilon k_1}{k_2}\check{y}^T(t)\check{y}(t) - \frac{(\epsilon + k_1 k_2)k_3}{k_2}\check{y}_{2:N}^T(t)\check{h}_{2:N}(t) - k_3\epsilon\check{h}_{2:N}^T(t)\check{h}_{2:N}(t) - \epsilon k_2\check{h}_{2:N}^T(t)(Q_2^T\Xi \otimes I_n)\phi(t) \\ &\quad + \epsilon k_2 k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{h}_{2:N}(t) + \epsilon k_4\check{y}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t) + \epsilon k_4\check{x}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{y}_{2:N}(t) \\ &\quad + \epsilon k_2 k_4\check{h}_{2:N}^T(t)(Q_2^T W Q_2 \otimes I_n)\check{e}_{2:N}(t). \end{aligned}$$
By using Young’s inequality, it has
( ϵ + k 1 k 2 ) k 3 k 2 y ˇ 2 : N T ( t ) h ˇ 2 : N ( t ) ( ϵ + k 1 k 2 ) k 3 4 k 2 ζ 2 y ˇ 2 : N ( t ) 2 + ( ϵ + k 1 k 2 ) k 3 ζ 2 k 2 h ˇ 2 : N ( t ) 2 , ϵ k 2 h ˇ 2 : N T ( t ) ( Q 2 T Ξ I n ) ϕ ( t ) ϵ k 2 ξ ^ m ^ 4 ζ 3 x ˇ ( t ) 2 + ϵ k 2 ξ ^ m ^ ζ 3 h ˇ 2 : N ( t ) 2 , ϵ k 2 k 4 x ˇ 2 : N T ( t ) ( Q 2 T W Q 2 I n ) h ˇ 2 : N ( t ) ϵ k 2 k 4 λ N 4 ζ 4 h ˇ 2 : N ( t ) 2 + ϵ k 2 k 4 λ N ζ 4 x ˇ 2 : N ( t ) 2 , ϵ k 4 x ˇ 2 : N T ( t ) ( Q 2 T W Q 2 I n ) y ˇ 2 : N ( t ) ϵ k 4 λ N 4 ζ 5 x ˇ 2 : N ( t ) 2 + ϵ k 4 λ N ζ 5 y ˇ 2 : N ( t ) 2 , ϵ k 4 y ˇ 2 : N T ( t ) ( Q 2 T W Q 2 I n ) e ˇ 2 : N ( t ) ϵ 2 k 4 2 λ N 2 4 ζ 6 y ˇ 2 : N ( t ) 2 + ζ 6 e ˇ 2 : N ( t ) 2 ,
and
$$\epsilon k_2k_4\check{h}_{2:N}^T(t)(Q_2^TWQ_2\otimes I_n)\check{e}_{2:N}(t)\le\frac{\epsilon^2k_4^2}{4\zeta_7}\|\check{h}_{2:N}(t)\|^2+k_2^2\zeta_7\|\check{e}_{2:N}(t)\|^2.$$
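Each bound above is an instance of Young’s inequality $a^Tb\le\frac{1}{4\zeta}\|a\|^2+\zeta\|b\|^2$, valid for any $\zeta>0$. A minimal numerical sanity check of this scalar fact, on arbitrary test vectors unrelated to the paper’s quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

def young_bound(a, b, zeta):
    # a^T b <= ||a||^2/(4*zeta) + zeta*||b||^2 for any zeta > 0, since
    # 0 <= || a/(2*sqrt(zeta)) - sqrt(zeta)*b ||^2 expands to exactly this bound.
    return a @ b <= np.dot(a, a) / (4 * zeta) + zeta * np.dot(b, b) + 1e-12

# Check the inequality on random vectors and several zeta values.
for _ in range(1000):
    a, b = rng.normal(size=5), rng.normal(size=5)
    for zeta in (0.1, 1.0, 10.0):
        assert young_bound(a, b, zeta)
```

Equality holds exactly when $a=2\zeta b$, which is why the choice of each $\zeta_i$ trades off the two squared-norm terms in the estimates above.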
Let $V_3(t)=V_1(t)+V_2(t)$; then it is clear that
$$\begin{aligned}\dot{V}_3(t)=\dot{V}_1(t)+\dot{V}_2(t)\le{}&-\vartheta_1\|\check{y}_{2:N}(t)\|^2-\vartheta_2\|\check{h}_{2:N}(t)\|^2-\vartheta_3\|\check{x}_{2:N}(t)\|^2-\vartheta_4\|\check{y}_1(t)\|^2-\vartheta_5\|\check{x}_1(t)\|^2\\
&+k_3k_4\check{x}_{2:N}^T(t)(Q_2^TWQ_2\otimes I_n)\check{e}_{2:N}(t)+\left(k_1^2\zeta_1+\zeta_6+k_2^2\zeta_7\right)\|\check{e}_{2:N}(t)\|^2,\end{aligned}$$
where $\vartheta_1=k_1(1-\epsilon_0)+\epsilon k_1k_2-\frac{(\epsilon+k_1k_2)k_3}{4k_2\zeta_2}-\epsilon k_4\lambda_N\zeta_5-\frac{\epsilon^2k_4^2\lambda_N^2}{4\zeta_6}$, $\vartheta_2=k_3\epsilon-\frac{k_3^2\epsilon_0^2}{4\zeta_1}-\frac{(\epsilon+k_1k_2)k_3\zeta_2}{k_2}-\epsilon k_2\hat{\xi}\hat{m}\zeta_3-\frac{\epsilon k_2k_4\lambda_N}{4\zeta_4}-\frac{\epsilon^2k_4^2}{4\zeta_7}$, $\vartheta_3=k_1k_2\epsilon_0\check{\xi}\check{s}-k_1^2\zeta_1-\frac{\epsilon k_2\hat{\xi}\hat{m}}{4\zeta_3}-\epsilon k_2k_4\lambda_N\zeta_4-\frac{\epsilon k_4\lambda_N}{4\zeta_5}-k_3k_4\lambda_N$, $\vartheta_4=k_1(1-\epsilon_0)+\epsilon k_1k_2$, and $\vartheta_5=k_1k_2\epsilon_0\check{\xi}\check{s}-k_1^2\zeta_1-\frac{\epsilon k_2\hat{\xi}\hat{m}}{4\zeta_3}$.
Next, we let $\bar{r}(t)=r(t)+\frac{k_1^2\zeta_1+\zeta_6+k_2^2\zeta_7}{k_3k_4}\|\check{e}_{2:N}(t)\|^2$, where $r(t)=\check{x}_{2:N}^T(t)(Q_2^TWQ_2\otimes I_n)\check{e}_{2:N}(t)$. As $\check{x}(t)=(Q^T\otimes I_n)\hat{x}(t)$ and $\check{e}(t)=(Q^T\otimes I_n)e(t)$, one has
$$r(t)=\hat{x}^T(t)(W\otimes I_n)e(t).$$
Moreover, according to the above definition $\hat{x}(t)=x(t)-1_N\otimes x^*$, one has
$$r(t)=\big(x(t)-(1_N\otimes x^*)\big)^T(W\otimes I_n)e(t).$$
Based on $1_N^TW=0$, one has
$$r(t)=x^T(t)(W\otimes I_n)e(t)\le\frac{1}{2}x^T(t)(W\otimes I_n)x(t)+\frac{1}{2}e^T(t)(W\otimes I_n)e(t).$$
Based on $e^T(t)(W\otimes I_n)e(t)\le\lambda_N\|\check{e}(t)\|^2$, let
$$\|\check{e}(t)\|^2\le\frac{2k_3k_4\mu_0\,x^T(t)(W\otimes I_n)x(t)}{k_3k_4\lambda_N+2(k_1^2\zeta_1+\zeta_6+k_2^2\zeta_7)},$$
where μ 0 > 0 is an arbitrary positive parameter.
Therefore, one has
$$\bar{r}(t)\le\left(\frac{1}{2}\lambda_N+\frac{k_1^2\zeta_1+\zeta_6+k_2^2\zeta_7}{k_3k_4}\right)\|\check{e}(t)\|^2+\frac{1}{2}x^T(t)(W\otimes I_n)x(t)\le\left(\frac{1}{2}+\mu_0\right)x^T(t)(W\otimes I_n)x(t).$$
Consequently, it is clear that
$$\dot{V}_3(t)\le-\vartheta_1\|\check{y}_{2:N}(t)\|^2-\vartheta_2\|\check{h}_{2:N}(t)\|^2-\sigma_1\|\check{x}_{2:N}(t)\|^2-\vartheta_4\|\check{y}_1(t)\|^2-\sigma_2\|\check{x}_1(t)\|^2.$$
Therefore, agent $i$ can use the event-triggered sampling control (18) to determine whether to store and update the sampled data.
From the conditions of Theorem 2, $\dot{V}_3(t)\le 0$, and $\dot{V}_3(t)=0$ if and only if $\check{y}(t)=0$, $\check{x}(t)=0$, and $\check{h}_{2:N}(t)=0$. Therefore, the states $x(t)$ of the system asymptotically reach consensus and converge to the optimal solution of the optimization problem. □
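The bound on $r(t)$ used in the proof rests on the elementary fact that $x^TWe\le\frac{1}{2}x^TWx+\frac{1}{2}e^TWe$ whenever $W$ is symmetric positive semidefinite, which follows from expanding $(x-e)^TW(x-e)\ge 0$. A minimal sketch checking this on a small undirected graph Laplacian, used here as a convenient symmetric stand-in for the detail-balanced $W$ of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Laplacian of an undirected path graph on 4 nodes: symmetric, PSD, rows sum to 0.
W = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

assert np.allclose(W.sum(axis=1), 0)            # 1^T W = 0 (W symmetric)
assert np.all(np.linalg.eigvalsh(W) >= -1e-12)  # W is positive semidefinite

# For PSD W: 0 <= (x - e)^T W (x - e) expands to x^T W e <= (x^T W x + e^T W e)/2.
for _ in range(1000):
    x, e = rng.normal(size=4), rng.normal(size=4)
    assert x @ W @ e <= 0.5 * x @ W @ x + 0.5 * e @ W @ e + 1e-10
```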
Remark 7.
Compared with the existing work in [37], the choice of Lyapunov function is different. In our paper, the positive function $W_3(t)$, inspired by the one in [44], is added to relax the restrictions on the related coefficients in the dynamics.
Remark 8.
In this event-triggered algorithm, the global signals $e(t)$ and $x(t)$ are used by each agent in the trigger function. To avoid using global information, the proposed algorithm will be improved in our future work.
Theorem 3.
For the MAS (12), if all conditions of Theorem 2 are satisfied, then there is no Zeno behavior.
Proof. 
For $t\in[t_k,t_{k+1})$, $k\in\mathbb{N}^+$, the upper right-hand derivative of $\|e_i(t)\|$ is given by
$$D^+\|e_i(t)\|\le\|\dot{e}_i(t)\|=\xi_i\|y_i(t)\|.$$
According to equality (12), one has
$$y_i(t)-y_i(t_k)=\int_{t_k}^{t}\big(-k_1\xi_iy_i(s)-k_2\nabla f_i(x_i(s))-k_3h_i(s)\big)\,ds,$$
which, by the variation-of-constants formula, gives
$$y_i(t)=e^{-k_1\xi_i(t-t_k)}y_i(t_k)-e^{-k_1\xi_it}k_2\int_{t_k}^{t}e^{k_1\xi_is}\nabla f_i(x_i(s))\,ds-e^{-k_1\xi_it}k_3\int_{t_k}^{t}e^{k_1\xi_is}h_i(s)\,ds.$$
As the functions $e^{k_1\xi_it}\nabla f_i(x_i(t))$ and $e^{k_1\xi_it}h_i(t)$ are continuous on the closed interval $[t_k,t]$, by the mean value theorem for integrals there exist two points $\alpha$ and $\beta$ in $[t_k,t]$ such that the following two formulas hold, respectively,
$$\int_{t_k}^{t}e^{k_1\xi_is}\nabla f_i(x_i(s))\,ds=e^{k_1\xi_i\alpha}\nabla f_i(x_i(\alpha))(t-t_k),$$
and
$$\int_{t_k}^{t}e^{k_1\xi_is}h_i(s)\,ds=e^{k_1\xi_i\beta}h_i(\beta)(t-t_k).$$
Hence, one has
$$\|y_i(t)\|\le 2\|y_i(t_k)\|+\big(k_2\|\nabla f_i(x_i(\alpha))\|+k_3\|h_i(\beta)\|\big)(t-t_k).$$
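The existence of the intermediate points $\alpha$ and $\beta$ is the mean value theorem for integrals: the average value of a continuous function on $[t_k,t]$ lies between its minimum and maximum and is therefore attained. A numerical illustration with a generic continuous integrand, where the exponent $k$ and the factor $\cos s$ are illustrative placeholders for $e^{k_1\xi_is}$ and $\nabla f_i(x_i(s))$:

```python
import numpy as np

# Mean value theorem for integrals: for continuous g on [a, b] there is an
# alpha in [a, b] with integral(g) = g(alpha)*(b - a). Numerically, the
# average value of g must lie between min(g) and max(g).
k, a, b = 0.8, 0.0, 2.0
s = np.linspace(a, b, 100001)
g = np.exp(k * s) * np.cos(s)                       # continuous integrand on [a, b]
integral = (0.5 * (g[:-1] + g[1:]) * np.diff(s)).sum()   # trapezoid rule
avg = integral / (b - a)                            # average value of g
assert g.min() - 1e-9 <= avg <= g.max() + 1e-9
# alpha exists by the intermediate value theorem; locate one numerically:
alpha = s[np.argmin(np.abs(g - avg))]
assert abs(np.exp(k * alpha) * np.cos(alpha) - avg) < 1e-3
```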
Denote $\Delta_i(t)=\max_{t_k\le s\le t}\big\{\xi_i\big(k_2\|\nabla f_i(x_i(\alpha))\|+k_3\|h_i(\beta)\|\big)(s-t_k)\big\}$.
Since $e_i(t_k)=0$, considering equality (20), we can obtain
$$\|e_i(t)\|\le\big(2\xi_i\|y_i(t_k)\|+\Delta_i(\varpi)\big)(t-t_k),$$
where $\varpi\in[t_k,t]$ is a constant. The event instants of agent $i$ are determined by equality (18), and the next event will not be triggered before $\|e_i(t)\|^2=\eta\sum_{j=1}^{N}w_{ij}\|x_j(t)-x_i(t)\|^2$. Therefore, one has
$$\|e_i(t_{k+1})\|=\sqrt{\eta\sum_{j=1}^{N}w_{ij}\|x_j(t_{k+1})-x_i(t_{k+1})\|^2}\le\big(2\xi_i\|y_i(t_k)\|+\Delta_i(\varpi)\big)(t_{k+1}-t_k).$$
From inequality (23), we can easily obtain that $t_{k+1}-t_k\ge\frac{\sqrt{\eta\sum_{j=1}^{N}w_{ij}\|x_j(t_{k+1})-x_i(t_{k+1})\|^2}}{2\xi_i\|y_i(t_k)\|+\Delta_i(\varpi)}$. As the network topology $\mathcal{G}$ is strongly connected, directed, and detail-balanced,
$$\sum_{j=1}^{N}w_{ij}\|x_j(t_{k+1})-x_i(t_{k+1})\|^2=0$$
if and only if
$$x_j(t_{k+1})-x_i(t_{k+1})=0,\quad i,j=1,2,\dots,N.$$
That means all agents’ states achieve consensus, and the communication between agents is no longer needed. Then, $t_{k+1}-t_k>0$ before all agents achieve consensus. Therefore, Zeno behavior can be excluded. □
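The role of the trigger threshold in keeping events sparse can be illustrated on a toy first-order consensus system that uses the same trigger form $\|e_i\|^2\ge\eta\sum_jw_{ij}\|x_j-x_i\|^2$. The dynamics, gains, and topology below are illustrative assumptions, not the paper’s second-order algorithm (12):

```python
import numpy as np

# Toy event-triggered consensus with trigger ||e_i||^2 >= eta*sum_j w_ij||x_j - x_i||^2.
N, dt, steps, eta = 4, 0.01, 2000, 0.25
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)    # undirected ring, hence detail-balanced
x = np.array([1.0, -0.5, 2.0, 0.3])
xk = x.copy()                                # last broadcast states x_i(t_k)
events = 0
for _ in range(steps):
    e = xk - x                               # measurement error e_i(t) = x_i(t_k) - x_i(t)
    for i in range(N):
        gap = sum(W[i, j] * (xk[j] - xk[i]) ** 2 for j in range(N))
        if e[i] ** 2 >= eta * gap:           # trigger: agent i refreshes its broadcast state
            xk[i] = x[i]
            events += 1
    x = x + dt * (W @ xk - W.sum(axis=1) * xk)   # consensus update with sampled states
disagreement = x.max() - x.min()
assert disagreement < 0.05                   # agents approach consensus
assert 0 < events < N * steps                # far fewer events than continuous sampling
```

Consistent with the proof above, the inter-event gaps stay positive while the agents disagree; only as exact consensus is approached does the threshold collapse to zero.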
Remark 9.
The optimization problem with event-triggered communication and a data rate constraint was investigated in [18]. To preprocess the information, a vector-valued quantizer with finite quantization levels was introduced, which suggests a promising direction for our research. In [18], the considered system is a first-order linear difference equation, and the optimization problem is modeled as the sum of all agents’ local convex cost functions. In our paper, by contrast, we consider a second-order multi-agent system, and the optimization problem is the weighted sum of the agents’ local convex cost functions.

5. Numerical Example

In this section, an economic dispatch example is used to verify the performance of the proposed algorithms (3) and (12).
We consider a power system that contains eight generators and 22 loads. The active power generated by the $i$th generator is denoted by $x_i=[x_{i1},x_{i2}]^T$, and the allocation weight is represented by $\xi_i$, $i=1,2,\dots,8$. The communication topology is shown in Figure 1.
The cost functions are given by the quadratic function [26] $f_i(x_i(t))=\alpha_{i1}x_{i1}^2(t)+\alpha_{i2}x_{i2}^2(t)+\beta_{i1}x_{i1}(t)+\beta_{i2}x_{i2}(t)+\eta_i$, in which $\alpha_{i1}$, $\alpha_{i2}$, $\beta_{i1}$, $\beta_{i2}$, and $\eta_i$ represent the cost coefficients. Therefore, the economic dispatch problem is given by
$$\min_{x\in\mathbb{R}^2}\tilde{f}(x)=\sum_{i=1}^{8}\xi_if_i(x).$$
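Since each $f_i$ is a separable quadratic, the minimizer of (24) has a closed form obtained by setting the weighted gradient to zero componentwise. The sketch below computes it using the coefficient values as parsed from Table 1 (the parsing is an assumption made for illustration; only stationarity of the weighted gradient is checked):

```python
import numpy as np

# Closed-form minimizer of the weighted sum of quadratics in (24). With
# f_i(x) = a1_i*x1^2 + a2_i*x2^2 + b1_i*x1 + b2_i*x2 + eta_i, the objective is
# separable and x* solves sum_i xi_i * grad f_i(x*) = 0 in each component.
# Coefficients below are as parsed from Table 1 (extraction may be imperfect).
xi = np.array([0.0625, 0.125, 0.1875, 0.125, 0.125, 0.1875, 0.0625, 0.125])
a1 = np.array([0.1, 0.2, 0.1, 0.2, 0.1, 0.1, 0.1, 0.1])
a2 = np.array([0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.2, 0.1])
b1 = np.array([0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.2, -0.2])
b2 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, -0.2, 0.1, 0.0])

# Stationarity: sum_i xi_i*(2*a_i*x* + b_i) = 0  =>  x* = -sum(xi*b) / (2*sum(xi*a))
x_star = np.array([-(xi @ b1) / (2 * xi @ a1), -(xi @ b2) / (2 * xi @ a2)])

# Verify the weighted gradient vanishes at x_star.
grad = np.array([xi @ (2 * a1 * x_star[0] + b1), xi @ (2 * a2 * x_star[1] + b2)])
assert np.allclose(grad, 0.0)
```

The constant offsets $\eta_i$ shift the optimal cost but do not affect the minimizer.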
In the simulation, we let $k_1=100$, $k_2=8$, $k_3=0.2$, and $k_4=0.022$ and choose the initial states as $x_1(0)=[1,0.2]^T$, $x_2(0)=[0.3,0.5]^T$, $x_3(0)=[0.4,0.2]^T$, $x_4(0)=[0.3,0.4]^T$, $x_5(0)=[1,0.9]^T$, $x_6(0)=[0.8,0.5]^T$, $x_7(0)=[0.4,0.3]^T$, and $x_8(0)=[0,1]^T$. The cost coefficients and the allocation weights are given in Table 1.
Case 1.
For the optimization problem (24), we use the proposed algorithm (3) with the above parameters. Through calculation, all conditions of Theorem 1 are satisfied. By using MATLAB for simulation, the states of $x_{i1}(t)$ and $x_{i2}(t)$ are given in Figure 2 and Figure 3, respectively. It is noted that all agents’ states reach consensus and gradually converge to the optimal solution $x^*=[0.164,0.105]^T$. Figure 4 and Figure 5 show the trajectories of $y_{i1}(t)$ and $y_{i2}(t)$, respectively; obviously, $y_{i1}(t)$ and $y_{i2}(t)$ gradually converge to 0. Figure 6 and Figure 7 show the evolution trajectories of $h_{i1}(t)$ and $h_{i2}(t)$, respectively. From Figure 6 and Figure 7, it can be seen that the condition $\sum_{i=1}^{8}\xi_ih_i(t)=0$ is always satisfied. Moreover, each $h_{ij}(t)$ gradually converges to a constant; based on the third equation of (3), this means all agents’ states asymptotically reach the same value. The evolution trajectory of the objective function $F(t)$ is shown in Figure 8.
Case 2.
For the optimization problem (24), we use the proposed algorithm (12). All parameters are the same as the ones in Case 1. Through calculation, all conditions of Theorem 2 are also satisfied. The states of $x_{i1}(t)$ and $x_{i2}(t)$ are given in Figure 9 and Figure 10, respectively, where the optimal solution $x^*$ is asymptotically reached. The trajectories of $y_{i1}(t)$ and $y_{i2}(t)$ are presented in Figure 11 and Figure 12, respectively; apparently, they all asymptotically converge to 0. Figure 13 and Figure 14 show the trajectories of $h_{i1}(t)$ and $h_{i2}(t)$, respectively; the condition $\sum_{i=1}^{8}\xi_ih_i(t)=0$ is always satisfied. Figure 15 and Figure 16 describe the trajectories of $x_{i1}(t_k)$ and $x_{i2}(t_k)$, respectively. It can be seen that $x_{i1}(t_k)$ and $x_{i2}(t_k)$ oscillate toward the same value. Figure 17 shows the evolution trajectory of the objective function $F(t)$. Figure 18 shows the event-triggered instants, in which 1 and 0 represent trigger and non-trigger, respectively.
According to the comparison between Cases 1 and 2, both the continuous-time communication algorithm (3) and the event-triggered communication algorithm (12) make the states of the MAS achieve consensus and asymptotically reach the optimal solution of the optimization problem (24); that is, both algorithms are feasible. However, under the same conditions, the event-triggered communication algorithm effectively reduces the agents’ consumption of limited communication resources, at the cost of a slower convergence rate. Therefore, in practical applications, the choice between the two methods depends on the specific problem.
Through simulation, we can verify that the distributed optimization problem (1) can be solved asymptotically under the proposed continuous-time communication and event-triggered control protocols. This means the optimal solution is obtained as $t\to+\infty$; in any finite time, only an approximate solution of the optimization problem can be obtained. However, in some practical applications, such as the economic dispatch of smart grids, the optimal allocation is required within a finite time. Therefore, distributed optimization algorithms with finite-time convergence are worth studying further.

6. Conclusions

In this paper, compared with the existing work, we considered a more general distributed optimization problem for second-order MASs over directed networks. A special directed network was carefully designed, and an improved distributed optimization algorithm with continuous-time communication was proposed. By using Lyapunov stability theory, some conditions were obtained to ensure the asymptotic convergence of the proposed algorithm. Furthermore, we designed an event-triggered control protocol to reduce the communication burden and analyzed the convergence of the event-triggered algorithm; in addition, Zeno behavior is excluded. Finally, some numerical simulations were presented to verify the validity of the results. In our future work, the finite-time distributed optimization problem over directed networks will be considered.

Author Contributions

Conceptualization, F.Y. and Z.Y.; methodology, F.Y. and Z.Y.; software, D.H.; validation, F.Y., Z.Y. and H.J.; formal analysis, F.Y.; investigation, F.Y.; resources, Z.Y.; data curation, F.Y.; writing—original draft preparation, F.Y.; writing—review and editing, Z.Y.; visualization, D.H.; supervision, H.J.; project administration, Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62003289, 62163035), in part by the China Postdoctoral Science Foundation (Grant No. 2021M690400), in part by the Special Project for Local Science and Technology Development Guided by the Central Government (Grant No. ZYYD2022A05), and in part by Xinjiang Key Laboratory of Applied Mathematics (Grant No. XJDX1401).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MASs: multi-agent systems

References

  1. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems; Oxford Press: New York, NY, USA, 1999. [Google Scholar]
  2. Zhao, H.; Wang, L.; Zhou, H.; Du, D. Consensus for a class of sampled-data heterogeneous multi-agent systems. Int. J. Control. Autom. Syst. 2021, 19, 1751–1759. [Google Scholar] [CrossRef]
  3. Bender, J. An overview of systems studies of automated highway systems. IEEE Trans. Veh. Technol. 1991, 40, 82–99. [Google Scholar] [CrossRef]
  4. Xu, Z.; Liu, H.; Liu, Y. Fixed-time leader-following flocking for nonlinear second-order multi-agent systems. IEEE Access 2020, 8, 86262–86271. [Google Scholar] [CrossRef]
  5. Aryankia, K.; Selmic, R. Neuro-adaptive formation control and target tracking for nonlinear multi-agent systems with time-delay. IEEE Control. Syst. Lett. 2021, 5, 791–796. [Google Scholar] [CrossRef]
  6. Li, Q.; Wei, J.; Gou, Q.; Niu, Z. Distributed adaptive fixed-time formation control for second-order multi-agent systems with collision avoidance. Inf. Sci. 2021, 564, 27–44. [Google Scholar] [CrossRef]
  7. Léauté, T.; Faltings, B. Protecting privacy through distributed computation in multi-agent decision making. J. Artif. Intell. Res. 2013, 47, 649–695. [Google Scholar] [CrossRef]
  8. Ke, W.; Wang, S. Reliability evaluation for distributed computing networks with imperfect nodes. IEEE Trans. Reliab. 1997, 46, 342–349. [Google Scholar]
  9. Manjula, K.; Karthikeyan, P. Distributed computing approaches for scalability and high performance. Int. J. Eng. Sci. Technol. 2010, 2, 2328–2336. [Google Scholar]
  10. Fu, Z.; He, X.; Huang, T.; Abu-Rub, H. A distributed continuous time consensus algorithm for maximize social welfare in micro grid. J. Frankl. Inst. 2016, 353, 3966–3984. [Google Scholar] [CrossRef]
  11. Madan, R.; Lall, S. Distributed algorithms for maximum lifetime routing in wireless sensor networks. IEEE Trans. Wirel. Commun. 2004, 5, 2185–2193. [Google Scholar] [CrossRef]
  12. Thirugnanam, K.; Moursi, M.; Khadkikar, V.; Zeineldin, H.; Hosani, M. Energy management of grid interconnected multi-microgrids based on P2P energy exchange: A data driven approach. IEEE Trans. Power Syst. 2021, 36, 1546–1562. [Google Scholar] [CrossRef]
  13. Karavas, C.; Kyriakarakos, G.; Arvanitis, K.; Papadakis, G. A multi-agent decentralized energy management system based on distributed intelligence for the design and control of autonomous polygeneration microgrids. Energy Convers. Manag. 2015, 103, 166–179. [Google Scholar] [CrossRef]
  14. Boglou, V.; Karavas, C.; Karlis, A.; Arvanitis, K. An intelligent decentralized energy management strategy for the optimal electric vehicles’ charging in low-voltage islanded microgrids. Int. J. Energy Res. 2022, 46, 2988–3016. [Google Scholar] [CrossRef]
  15. Karavas, C.; Plakas, K.; Krommydas, K.; Kurashvili, A.; Dikaiakos, C.; Papaioannou, G. A review of wide-area monitoring and damping control systems in Europe. In Proceedings of the 2021 IEEE Madrid PowerTech, Madrid, Spain, 28 June–2 July 2021; pp. 1–6. [Google Scholar]
  16. Zhang, Y.; Lou, Y.; Hong, Y.; Xie, L. Distributed projection-based algorithms for source localization in wireless sensor networks. IEEE Trans. Wirel. Commun. 2015, 14, 3131–3142. [Google Scholar] [CrossRef]
  17. Li, C.; Chen, S.; Li, J.; Wang, F. Distributed multi-step subgradient optimization for multi-agent system. Syst. Control Lett. 2019, 128, 26–33. [Google Scholar] [CrossRef]
  18. Li, H.; Liu, S.; Soh, Y.; Xie, L. Event-triggered communication and data rate constraint for the distributed optimization of multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1908–1919. [Google Scholar] [CrossRef]
  19. Ma, W.; Fu, M.; Cui, P.; Zhang, H.; Li, Z. Finite-time average consensus based approach for distributed convex optimization. Asian J. Control 2020, 22, 323–333. [Google Scholar] [CrossRef]
  20. Nedic, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61. [Google Scholar] [CrossRef]
  21. Nedic, A.; Ozdaglar, A.; Parrilo, P. Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 2010, 55, 922–938. [Google Scholar] [CrossRef]
  22. Shi, W.; Ling, Q.; Wu, G.; Yin, W. Extra: An exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 2015, 25, 944–966. [Google Scholar] [CrossRef] [Green Version]
  23. Khatana, V.; Saraswat, G.; Patel, S.; Salapaka, M. Gradient-consensus method for distributed optimization in directed multi-agent networks. In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; pp. 4689–4694. [Google Scholar]
  24. Kia, S.; Cortés, J.; Martínez, S. Distributed convex optimization via continuous time coordination algorithms with discrete-time communication. Automatica 2015, 55, 254–264. [Google Scholar] [CrossRef] [Green Version]
  25. Xie, Y.; Lin, Z. Global optimal consensus for higher-order multi-agent systems with bounded controls. Automatica 2019, 99, 301–307. [Google Scholar] [CrossRef]
  26. Chen, G.; Li, Z. A fixed-time convergent algorithm for distributed convex optimization in multi-agent systems. Automatica 2018, 95, 539–543. [Google Scholar] [CrossRef]
  27. Lin, P.; Ren, W.; Yang, C.; Gui, W. Distributed continuous-time and discrete-time optimization with nonuniform unbounded convex constraint sets and nonuniform stepsizes. IEEE Trans. Autom. Control 2019, 64, 5148–5155. [Google Scholar] [CrossRef] [Green Version]
  28. Pantoja, A.; Obando, G.; Quijano, N. Distributed optimization with information-constrained population dynamics. J. Frankl. Inst. 2019, 356, 209–236. [Google Scholar] [CrossRef]
  29. Wang, D.; Chen, Y.; Gupta, V.; Lian, J. Distributed constrained optimization for multi-agent systems over a directed graph with piecewise stepsize. J. Frankl. Inst. 2020, 357, 4855–4868. [Google Scholar] [CrossRef]
  30. Wang, D.; Wang, Z.; Chen, M.; Wang, W. Distributed optimization for multi-agent systems with constraints set and communication time-delay over a directed graph. Inf. Sci. 2018, 438, 1–14. [Google Scholar] [CrossRef]
  31. Zhang, Q.; Gong, Z.; Yang, Z.; Chen, Z. Distributed convex optimization for flocking of nonlinear multi-agent systems. Int. J. Control Autom. Syst. 2019, 17, 1177–1183. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Hong, Y. Distributed optimization design for second-order multi-agent systems. In Proceedings of the 33rd Chinese Control Conference, Nanjing, China, 28–30 July 2014; pp. 1755–1760. [Google Scholar]
  33. Tran, N.; Wang, Y.; Yang, W. Distributed optimization problem for double-integrator systems with the presence of the exogenous disturbance. Neurocomputing 2018, 272, 386–395. [Google Scholar] [CrossRef]
  34. Lü, Q.; Li, H.; Liao, X.; Li, H. Geometrical convergence rate for distributed optimization with zero-like-free event-triggered communication scheme and uncoordinated step-sizes. In Proceedings of the 2017 Seventh International Conference on Information Science and Technology, Da Nang, Vietnam, 16–19 April 2017; pp. 351–358. [Google Scholar]
  35. Lü, Q.; Li, H.; Xia, D. Distributed optimization of first-order discrete-time multi-agent systems with event-triggered communication. Neurocomputing 2017, 235, 255–263. [Google Scholar] [CrossRef]
  36. Chen, W.; Ren, W. Event-triggered zero-gradient-sum distributed consensus optimization over directed networks. Automatica 2016, 65, 90–97. [Google Scholar] [CrossRef] [Green Version]
  37. Tran, N.; Wang, Y.; Liu, X.; Xiao, J.; Lei, Y. Distributed optimization problem for second-order multi-agent systems with event-triggered and time-triggered communication. J. Frankl. Inst. 2019, 356, 10196–10215. [Google Scholar] [CrossRef]
  38. Gharesifard, B.; Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2014, 59, 781–786. [Google Scholar] [CrossRef] [Green Version]
  39. Yang, S.; Liu, Q.; Wang, J. Distributed optimization based on a multiagent system in the presence of communication delays. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 717–728. [Google Scholar] [CrossRef]
  40. Wang, L.; Xiao, F. Finite-time consensus problems for networks of dynamic agents. IEEE Trans. Autom. Control 2010, 55, 950–955. [Google Scholar] [CrossRef]
  41. Yu, Z.; Yu, S.; Jiang, H.; Mei, X. Distributed fixed-time optimization for multi-agent systems over a directed network. Nonlinear Dyn. 2021, 103, 775–789. [Google Scholar] [CrossRef]
  42. Zhang, Y.; Hong, Y. Distributed optimization design for high-order multi-agent systems. In Proceedings of the 34th Chinese Control Conference, Hangzhou, China, 28–30 July 2015; pp. 7251–7256. [Google Scholar]
  43. Gharesifard, B.; Cortés, J. Distributed convergence to Nash equilibria by adversarial networks with undirected topologies. In Proceedings of the 2012 American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 5881–5886. [Google Scholar]
  44. Yi, X.; Yao, L.; Yang, T.; George, J.; Johansson, K. Distributed optimization for second-order multi-agent systems with dynamic event-triggered communication. In Proceedings of the IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 17–19 December 2018; pp. 3397–3402. [Google Scholar]
Figure 1. The communication topology.
Figure 2. The states of $x_{i1}(t)$.
Figure 3. The states of $x_{i2}(t)$.
Figure 4. The trajectories of $y_{i1}(t)$.
Figure 5. The trajectories of $y_{i2}(t)$.
Figure 6. The trajectories of $h_{i1}(t)$.
Figure 7. The trajectories of $h_{i2}(t)$.
Figure 8. The cost function $F(t)$.
Figure 9. The states of $x_{i1}(t)$.
Figure 10. The states of $x_{i2}(t)$.
Figure 11. The trajectories of $y_{i1}(t)$.
Figure 12. The trajectories of $y_{i2}(t)$.
Figure 13. The trajectories of $h_{i1}(t)$.
Figure 14. The trajectories of $h_{i2}(t)$.
Figure 15. The trajectories of $x_{i1}(t_k)$.
Figure 16. The trajectories of $x_{i2}(t_k)$.
Figure 17. The cost function $F(t)$.
Figure 18. Event-triggered signals.
Table 1. The coefficients.

No.   α_i1   α_i2   β_i1   β_i2   η_i    ξ_i
1     0.1    0.1    0.2    0      −7.9   0.0625
2     0.2    0.2    0      0      0      0.125
3     0.1    0.1    0.2    0      20     0.1875
4     0.2    0.1    0      0      −7     0.125
5     0.1    0.2    0      0      5      0.125
6     0.1    0.1    0      −0.2   9      0.1875
7     0.1    0.2    0.2    0.1    0      0.0625
8     0.1    0.1    −0.2   0      0      0.125
Yang, F.; Yu, Z.; Huang, D.; Jiang, H. Distributed Optimization for Second-Order Multi-Agent Systems over Directed Networks. Mathematics 2022, 10, 3803. https://doi.org/10.3390/math10203803