Article

Distributed Optimization for Resource Allocation Problem with Dynamic Event-Triggered Strategy

College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(7), 1019; https://doi.org/10.3390/e25071019
Submission received: 11 May 2023 / Revised: 24 June 2023 / Accepted: 29 June 2023 / Published: 4 July 2023
(This article belongs to the Section Complexity)

Abstract
This study aims to unravel the resource allocation problem (RAP) by using a consensus-based distributed optimization algorithm under dynamic event-triggered (DET) strategies. Firstly, based on the multi-agent consensus approach, a novel one-to-all DET strategy is presented to solve the RAP. Secondly, the proposed one-to-all DET strategy is extended to a one-to-one DET strategy, where each agent transmits its state asynchronously to its neighbors. Furthermore, it is proven that the proposed two types of DET strategies do not have Zeno behavior. Finally, numerical simulations are provided to validate and illustrate the effectiveness of the theoretical results.

1. Introduction

With the development of network information technology and the era of artificial intelligence, multi-agent systems (MASs) have received extensive attention in view of their applications in the machining industry [1], synchronous generators [2], microservice-based cloud applications [3], USVs [4], and other fields. It is worth noting that consensus is one of the most fundamental and important problems in MASs, and there have been many studies about it [5,6,7,8]. In essence, in a distributed optimization problem, a group of agents cooperatively minimizes the sum of all the local cost functions by exchanging local information with neighbors. In contrast to conventional consensus, distributed optimization requires both achieving consensus and solving an optimization problem. Up to now, distributed optimization problems have appeared widely in power systems [9], MPC and network flows [10], wireless ad hoc networks [11], etc.
Early distributed optimization problems were mainly solved by centralized optimization algorithms, in which a central node stores all of the information needed to address the optimization problem [12,13]. However, centralized algorithms are unsuitable for large-scale networks: collecting information from all agents requires substantial communication and computational overhead, and the central node constitutes a single point of failure. Consequently, distributed optimization algorithms have emerged. In recent years, distributed optimization algorithms have fallen into two main categories, namely discrete-time algorithms and continuous-time algorithms. More specifically, discrete-time distributed optimization algorithms have been utilized for the optimal solution of saddle point dynamics [14], epidemic control resource allocation [15], and tactical production planning [16]. Additionally, many researchers have recently made substantial explorations of continuous-time distributed optimization algorithms. For instance, a continuous-time optimization model was developed in [17] for source-sink matching in carbon capture and storage systems; in [18], a continuous-time optimization algorithm was applied to power system load distribution; and a distributed continuous-time approximate projection protocol was proposed in [19] for the shortest distance optimization problem.
Many of the above optimization algorithms communicate in continuous time, which leads to frequent protocol updates and unnecessary consumption of communication resources, so it is necessary to economize the system's resources. Applying event-triggered strategies to distributed optimization algorithms [20,21,22,23,24,25,26] is therefore a feasible and promising scheme that can effectively reduce the system's energy waste: communication and protocol updates occur only when the designed event-triggered condition is satisfied, which reduces the cost and burden of communication and computation as well as the collection of gradient information. However, for static event-triggered (SET) mechanisms, whose trigger thresholds are constants independent of time, it is theoretically difficult to rule out Zeno behavior. Furthermore, as the working time increases, the inter-event intervals become smaller, which results in more trigger actions and wastes the system's resources. Consequently, the event-triggered strategy has undergone a paradigm shift from the SET strategy to the dynamic event-triggered (DET) strategy, which introduces an auxiliary parameter for each agent to dynamically adjust its threshold. In most cases, the DET strategy can substantially extend the average inter-event interval, further reducing the consumption of communication resources compared to SET communication. Therefore, the DET strategy has aroused much interest and holds great practical value, as considered in [27,28,29,30,31,32,33]. An improved event-triggered strategy, independent of the initial conditions, was leveraged in [34] to solve the topology separation problems caused by critical communication link failures.
In [35], DET mechanisms were presented for two cases based on nonlinear relative- and absolute-state coupling, and it was proved that continuous communication between agents can be effectively avoided. Under such a DET strategy, each agent transmits information to all neighbors synchronously when its trigger condition is met, which is usually called the one-to-all DET strategy. Nevertheless, the one-to-all DET strategy ignores the possibility that each agent may require different triggering sequences for different neighbors. Therefore, to overcome this limitation, it is essential to design a DET strategy that allows each agent to decide its own triggering sequences and transmit information asynchronously to its neighbors according to event-triggered conditions designed separately for each neighbor, which is referred to as the one-to-one DET strategy. Under the one-to-one DET strategy, an agent is not constrained by any synchronous execution of its neighbors' transmissions, so it can adjust its information transmission more flexibly, especially in the case of cyber-attacks. In [36], under an adaptive DET strategy, a fully distributed observer-based strategy was developed, which guarantees asymptotic consensus and eliminates Zeno behavior.
Note that many distributed optimization algorithms have been leveraged to solve the resource allocation problem (RAP), such as in [37,38,39]. It is therefore natural and significant to combine DET strategies with such algorithms to solve the RAP. Motivated by the above discussions, we investigate distributed optimization algorithms with two novel synchronous and asynchronous DET strategies to address the RAP. The main contributions of this article are as follows.
(1)
This work combines the consensus idea and the one-to-all DET strategy to design a new distributed optimization algorithm for the RAP, in which the algorithm preserves the equality constraint at all times. In addition, unlike the SET strategies of [40,41], the DET strategy in this work has a lower trigger frequency, which means that system resources can be saved.
(2)
In order to improve the flexibility and practicality of the algorithm, the one-to-all DET strategy is extended to a one-to-one DET strategy. Based on this strategy, a distributed optimization algorithm is developed to address the RAP.
(3)
The two types of proposed distributed optimization algorithms only use the information of the decision variable $x_i(t)$, avoiding extra communication among agents, which ingeniously reduces resource consumption, while the algorithm in [42] needs to exchange information about the variables $\phi_i(t)$ and $\zeta_i(t)$. In addition, the internal dynamic parameters introduced in this work are not only effective in solving the RAP, but also crucial in excluding Zeno behavior.
The organization of the remaining parts of this paper is as follows. Some algebraic graph theory preliminaries, a basic definition and assumptions, and the optimization problem formulation are given in Section 2. In Section 3 and Section 4, distributed optimization algorithms under the proposed one-to-all and one-to-one DET strategies are presented to solve the RAP. Furthermore, the proof of the exclusion of Zeno behavior is included. In Section 5, numerical simulation results are given to illustrate the effectiveness of the proposed algorithms. Finally, we show our conclusions and future work direction in Section 6.
Notation 1.
The symbols appearing in this article are listed in Table 1.

2. Preliminaries

2.1. Algebraic Graph Theory

The topology among $n$ nodes can be modeled as a graph $G = (V, E, A)$ consisting of a finite node set $V = \{v_1, v_2, \ldots, v_n\}$, a set of edges $E \subseteq V \times V$, and a weighted adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{n \times n}$, with $a_{ij} > 0$ if $(v_j, v_i) \in E$ and $a_{ij} = 0$ otherwise. Given an edge $(v_j, v_i) \in E$, we refer to $v_j$ as a neighbor of $v_i$; then, $v_j$ and $v_i$ can receive each other's information. The neighbor set of $v_i$ is defined as $N_i = \{v_j \in V : (v_i, v_j) \in E\}$, which does not contain self-edges $(v_i, v_i)$. An undirected graph $G$ is connected if for any vertices $v_i, v_j \in V$ there exists a path connecting $v_i$ and $v_j$. The Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{n \times n}$ is defined by $l_{ii} = \sum_{j=1, j \ne i}^{n} a_{ij}$ and $l_{ij} = -a_{ij}$, $i \ne j$. Furthermore, $\mathbf{1}_n^\top L = \mathbf{0}_n^\top$. The eigenvalues of $L$ can be arranged in non-decreasing order, i.e., $0 = \lambda_1 < \lambda_2 \le \cdots \le \lambda_n$ when $G$ is connected.
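As a quick numerical sanity check of these Laplacian properties, the sketch below builds $L$ for a small undirected graph and verifies $\mathbf{1}_n^\top L = \mathbf{0}_n^\top$ and $0 = \lambda_1 < \lambda_2$ (the 4-node ring is purely illustrative, not the paper's topology):

```python
import numpy as np

# Illustrative 4-node undirected ring; any symmetric adjacency matrix
# with zero diagonal and nonnegative weights would work here.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

# Laplacian: l_ii = sum_{j != i} a_ij and l_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

print(np.allclose(np.ones(4) @ L, 0))        # 1_n^T L = 0_n^T
lam = np.sort(np.linalg.eigvalsh(L))
print(np.isclose(lam[0], 0.0), lam[1] > 0)   # 0 = lambda_1 < lambda_2 (connected graph)
```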

2.2. Problem Statement

In the distributed RAP, we consider a MAS composed of $n$ agents, where each agent has a local quadratic convex cost function $f_i(x_i(t)) : \mathbb{R} \to \mathbb{R}$ given by $f_i(x_i(t)) = \alpha_i x_i^2(t) + \beta_i x_i(t) + \gamma_i$, where the cost coefficients $\alpha_i$, $\beta_i$, and $\gamma_i$ are positive. The global objective function is denoted by $F(x(t)) : \mathbb{R}^n \to \mathbb{R}$. Then, the RAP can be written as the following optimization problem:
$$\min F(x(t)) = \sum_{i=1}^{n} f_i(x_i(t)), \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i(t) = D, \qquad (1)$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^\top$ is the decision variable vector. For convenience, only the case $x_i(t) \in \mathbb{R}$ will be discussed, since the case $x_i(t) \in \mathbb{R}^n$ can be handled similarly by using the Kronecker product. $F(x(t))$ and $f_i(x_i(t))$ stand for the global cost function and the local cost function, respectively. $D \in \mathbb{R}$ represents the global resource constraint. In the economic dispatch problem of smart grids, $x_i(t)$ denotes the output power of generator $i$, and $D$ denotes the total power demand; the equality constraint is then called the demand constraint.
This paper aims to design distributed DET strategies to solve RAP (1). Therefore, we need the following definition and assumptions before further analysis.
Definition 1.
The multi-agent consensus problem is solved if, for any initial states $z_i(0) \in \mathbb{R}^n$,
$$\lim_{t \to \infty} \| z_i(t) - z_j(t) \| = 0, \quad \forall i, j \in N.$$
Assumption 1.
The communication topology is undirected and connected.
Assumption 2.
The local objective functions are twice continuously differentiable and strongly convex.

3. The One-to-All DET Strategy

In this section, we construct the one-to-all DET strategy, which allows each agent to transmit information synchronously. Moreover, a distributed optimization algorithm with the proposed DET is introduced and the consensus is derived, which solves the RAP (1).
For the one-to-all DET, the triggering time sequence is determined by $0 = t_0^i < t_1^i < \cdots < t_s^i < \cdots$. The measurement error of each agent is defined as
$$e_i(t) = \nabla f_i(x_i(t_k^i)) - \nabla f_i(x_i(t)), \quad t \in [t_k^i, t_{k+1}^i), \ i \in N.$$
Then, we propose the one-to-all DET triggering sequence $\{t_k^i\}_{k \in \mathbb{N}}$ as follows:
$$t_{k+1}^i = \inf_{l > t_k^i} \Big\{ l : (e_i(t))^2 - c_i \sum_{j=1}^{n} a_{ij} (\phi_i(t) - \phi_j(t))^2 - \pi_i \Gamma_i(t) \ge 0, \ t \in [t_k^i, l] \Big\}, \qquad (2)$$
where $c_i$ and $\pi_i$, $i \in N$, are positive constants.
Remark 1.
If $\Gamma_i(t) \equiv 0$, the DET strategy reduces to the SET strategy. Then, the one-to-all SET triggering sequence $\{t_k^i\}_{k \in \mathbb{N}}$ is as follows:
$$t_{k+1}^i = \inf_{l > t_k^i} \Big\{ l : (e_i(t))^2 - c_i \sum_{j=1}^{n} a_{ij} (\phi_i(t) - \phi_j(t))^2 \ge 0, \ t \in [t_k^i, l] \Big\}. \qquad (2b)$$
Consequently, the SET strategy is a special case of the DET strategy, and the DET strategy is the more general one. In addition, thanks to the internal dynamic variable in the DET function, it is easier to exclude Zeno behavior than under SET.
Inspired by [43], we design an internal dynamic variable $\Gamma_i(t)$ satisfying
$$\dot{\Gamma}_i(t) = -\psi_i \Gamma_i(t) + \mu_i \Big[ c_i \sum_{j=1}^{n} a_{ij} (\phi_i(t) - \phi_j(t))^2 - (e_i(t))^2 \Big], \qquad (3)$$
where $\Gamma_i(0) > 0$, and $\psi_i$ and $\mu_i = w_i / c_i$ with $w_i \ge 0$, $i \in N$, are positive constants.
Let $\hat{\phi}_i(t) = \nabla f_i(x_i(t_k^i))$ and $\phi_i(t) = \nabla f_i(x_i(t))$. The distributed optimization algorithm is designed as follows to solve the RAP (1):
$$x_i(t) = \zeta_i(t) + x_i(0), \quad \dot{\zeta}_i(t) = \sum_{j=1}^{n} a_{ij} (\hat{\phi}_j(t) - \hat{\phi}_i(t)), \quad \zeta_i(0) = 0, \qquad (4)$$
where $\zeta_i(t)$, $i \in N$, is an auxiliary variable.
According to the distributed optimization algorithm (4), one obtains $\dot{\phi}_i(t) = \frac{\partial^2 f_i(x_i(t))}{\partial x_i^2(t)} \dot{x}_i(t) = 2\alpha_i \dot{\zeta}_i(t) = 2\alpha_i \sum_{j=1}^{n} a_{ij} [\hat{\phi}_j(t) - \hat{\phi}_i(t)]$, where the initial value $\phi_i(0)$ satisfies $\phi_i(0) = 2\alpha_i x_i(0) + \beta_i$.
In addition, $\dot{\phi}_i(t)$ in matrix form can be described as
$$\dot{\phi}(t) = -2 \Lambda L [\phi(t) + e(t)],$$
where $\phi(t) = (\phi_1(t), \phi_2(t), \ldots, \phi_n(t))^\top \in \mathbb{R}^n$, $\Lambda = \mathrm{diag}\{\alpha_1, \ldots, \alpha_n\}$, and $e(t) = (e_1(t), e_2(t), \ldots, e_n(t))^\top$.
Then, the distributed optimization problem is transformed into a multi-agent consensus problem: when $\phi_i(t) = \phi_j(t)$, $\forall i, j \in N$, the RAP (1) is solved. Let $\phi^*$ denote the common value of $\phi_i(t)$ at consensus. The detailed procedure of the one-to-all DET strategy is given as Algorithm 1.
Algorithm 1 Distributed optimization algorithm with the one-to-all DET strategy
Require: Initialize all parameters, such as the states $x_i(0)$ and $\zeta_i(0)$ of agent $i$; the initialization must satisfy $\sum_{i=1}^{n} x_i(0) = D$ and $\zeta_i(0) = 0$. Input the last triggering times $t_k^i$ and states $\hat{\phi}_i(t)$, $i \in N$.
Ensure:
for $t = 0$ to $t_{\mathrm{end}}$ do
   for $i = 1$ to $n$ do
      Compute the measurement error $e_i(t)$.
      Compute the trigger threshold $c_i \sum_{j=1}^{n} a_{ij} (\phi_i(t) - \phi_j(t))^2 + \pi_i \Gamma_i(t)$.
      if trigger condition (2) holds then
         The event is triggered, and the event time is recorded as $t_{k+1}^i$.
         Update the state $\hat{\phi}_i(t)$ of agent $i$ at event time $t_{k+1}^i$.
         Communicate the state $\hat{\phi}_i(t)$ to the neighbor states $\hat{\phi}_j(t)$.
      else
         Hold the state $\hat{\phi}_i(t)$ of agent $i$ for $t$ in the interval $[t_k^i, t_{k+1}^i)$.
      end if
   end for
end for
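A forward-Euler sketch of this procedure is given below. The topology, gains, and cost coefficients are illustrative stand-ins (not the values used in the paper's simulations), and $\mu_i = w_i/c_i$ is taken as in (3); the run checks that the demand constraint is preserved exactly and that the marginal costs $\phi_i(t)$ reach consensus:

```python
import numpy as np

# One-to-all DET simulation of algorithm (4) with trigger (2) and dynamics (3).
# All numbers below are illustrative assumptions, not the paper's Table 2 data.
n, dt, T = 4, 1e-3, 10.0
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
alpha = np.array([0.8, 1.0, 1.2, 0.9])
beta = np.array([2.0, 3.0, 1.0, 2.5])
c = np.full(n, 0.1); pi = np.full(n, 1.0)
psi = np.full(n, 2.0); w = np.full(n, 0.05)
mu = w / c                                    # mu_i = w_i / c_i, as in (3)

x = np.array([30.0, 25.0, 40.0, 50.0]); D = x.sum()   # sum x_i(0) = D
Gamma = np.ones(n)                                    # Gamma_i(0) > 0
grad = lambda x: 2 * alpha * x + beta                 # phi_i(t) = grad f_i(x_i(t))
phi_hat = grad(x).copy()                              # last broadcast gradients
triggers = 0

for _ in range(int(T / dt)):
    p = grad(x)
    e = phi_hat - p                                   # measurement error e_i(t)
    gap = np.array([A[i] @ (p[i] - p) ** 2 for i in range(n)])
    fire = e ** 2 - c * gap - pi * Gamma >= 0         # DET condition (2)
    phi_hat[fire] = p[fire]                           # broadcast on trigger only
    triggers += int(fire.sum())
    # zeta_dot_i = sum_j a_ij (phi_hat_j - phi_hat_i); x_i(t) = zeta_i(t) + x_i(0)
    x += dt * (A @ phi_hat - A.sum(axis=1) * phi_hat)
    Gamma += dt * (-psi * Gamma + mu * (c * gap - e ** 2))

print(np.isclose(x.sum(), D))            # equality constraint preserved
print(np.ptp(grad(x)) < 0.05)            # marginal costs reach consensus
print(0 < triggers < n * int(T / dt))    # events occur, but not at every step
```

Because the adjacency matrix is symmetric, the Euler update changes each $x_i$ by an antisymmetric exchange, so $\sum_i x_i(t) = D$ holds exactly at every step, mirroring Remark 3.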
Remark 2.
For the quadratic original optimization problem with the equality constraint, based on the Lagrange multiplier method, we construct the Lagrangian function as $\mathcal{L}(x(t), \lambda) = \sum_{i=1}^{n} f_i(x_i(t)) - \lambda \big( \sum_{i=1}^{n} x_i(t) - D \big)$. Then, under Assumption 2, $x_i^* = \frac{\lambda^* - \beta_i}{2\alpha_i}$, $i \in N$, is the optimal solution, where $\lambda^*$ is the optimal Lagrange multiplier, if and only if $\frac{\partial f_1(x_1(t))}{\partial x_1(t)} = \frac{\partial f_2(x_2(t))}{\partial x_2(t)} = \cdots = \frac{\partial f_n(x_n(t))}{\partial x_n(t)} = \lambda^*$. Therefore, we need each agent to update its local Lagrange multiplier $\lambda_i \in \mathbb{R}$ so that all $\lambda_i$ reach consensus at the value $\lambda^*$, which means that the optimization problem with the equality constraint is completely transformed into a MAS consensus problem. Therefore, as long as the equation
$$\phi_1(t) = \phi_2(t) = \cdots = \phi_n(t) = \lambda^* = \frac{D + \sum_{i=1}^{n} \frac{\beta_i}{2\alpha_i}}{\sum_{i=1}^{n} \frac{1}{2\alpha_i}}$$
holds, the algorithm achieves consensus and the optimization problem is solved.
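This closed-form optimum is easy to check numerically. The sketch below uses hypothetical cost coefficients (not the paper's Table 2 values) and verifies that $x_i^* = (\lambda^* - \beta_i)/(2\alpha_i)$ meets the demand constraint while equalizing all marginal costs:

```python
import numpy as np

# Hypothetical coefficients for f_i(x) = alpha_i x^2 + beta_i x + gamma_i;
# gamma_i does not affect the optimizer, so it is omitted.
alpha = np.array([0.8, 1.0, 1.2, 0.9])
beta = np.array([2.0, 3.0, 1.0, 2.5])
D = 145.0

# lambda* = (D + sum_i beta_i/(2 alpha_i)) / sum_i 1/(2 alpha_i)
lam_star = (D + np.sum(beta / (2 * alpha))) / np.sum(1 / (2 * alpha))
x_star = (lam_star - beta) / (2 * alpha)

print(np.isclose(x_star.sum(), D))                        # demand constraint met
print(np.allclose(2 * alpha * x_star + beta, lam_star))   # equal marginal costs
```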
Remark 3.
Algorithm (4) only uses the information of the variable $x_i(t)$, which is beneficial for saving communication resources in the case of limited bandwidth. Furthermore, let $\zeta(t) = (\zeta_1(t), \zeta_2(t), \ldots, \zeta_n(t))^\top$; from Assumption 1, i.e., $\mathbf{1}_n^\top L = \mathbf{0}_n^\top$, the proposed zero-initial-value distributed optimization algorithm, i.e., $\zeta(0) = \mathbf{0}_n$, satisfies the equality constraint at all times. The initial values of the algorithm are composed of the decision variable initial value $x(0)$ and the auxiliary variable initial value $\zeta(0) = \mathbf{0}_n$. Then, we can show that for all $t \ge 0$, $\sum_{i=1}^{n} \zeta_i(t) = \sum_{i=1}^{n} \zeta_i(0) = 0$ and $\sum_{i=1}^{n} x_i(t) = \sum_{i=1}^{n} x_i(0)$, because $\sum_{i=1}^{n} \dot{\zeta}_i(t) = -\mathbf{1}_n^\top L \nabla f(x(t)) = 0$. Therefore, when the equation $\sum_{i=1}^{n} x_i(0) = D$ is satisfied, the equality constraint holds at any time.
Theorem 1.
Under Assumptions 1 and 2, assume that $\psi_i \ge \pi_i (2 - w_i / c_i)$ and $0 < c_i \le \frac{1}{4}$; then, the RAP (1) is solved under the distributed optimization algorithm (4) and the DET strategy (2) and (3). Moreover, Zeno behavior is excluded.
Proof. 
Construct the Lyapunov function $W_1(t)$ of the following form:
$$W_1(t) = U(t) + \sum_{i=1}^{n} \Gamma_i(t),$$
where $U(t) = \frac{1}{2} \sum_{i=1}^{n} \frac{1}{\alpha_i} (\phi_i(t) - \phi^*)^2$. The rest of the proof is similar to that of Theorem 2. □

4. The One-to-One DET Strategy

In this section, in consideration of asynchronous transmission needs, the one-to-one DET strategy is introduced; unlike the one-to-all DET strategy, its unique characteristic is that each agent transmits its information to its neighbors asynchronously. Furthermore, based on the one-to-one DET strategy, a more flexible distributed optimization algorithm is presented and consensus is achieved, which also solves the RAP (1). Then, we prove that Zeno behavior will not occur, which ensures that the algorithm is implementable.
For the one-to-one DET strategy, an edge-dependent triggering time sequence is introduced, i.e., $0 = t_0^{ij} < t_1^{ij} < \cdots < t_s^{ij} < \cdots$, which essentially differs from the one-to-all case.
Corresponding to the one-to-one DET case, the measurement error is described as
$$e_{ij}(t) = \nabla f_i(x_i(t_k^{ij})) - \nabla f_i(x_i(t)), \quad i \in N, \ j \in N_i.$$
Then, we propose the one-to-one DET triggering sequence $\{t_k^{ij}\}_{k \in \mathbb{N}}$ as follows:
$$t_{k+1}^{ij} = \inf_{l > t_k^{ij}} \big\{ l : (e_{ij}(t))^2 - c_{ij} (\phi_i(t) - \phi_j(t))^2 - \pi_{ij} \tilde{\Gamma}_{ij}(t) \ge 0, \ t \in [t_k^{ij}, l] \big\}, \qquad (5a)$$
where $c_{ij}$ and $\pi_{ij}$, $i \in N$, $j \in N_i$, are positive constants.
Remark 4.
Similarly, if $\tilde{\Gamma}_{ij}(t) \equiv 0$, the DET strategy reduces to the SET strategy. Then, the one-to-one SET triggering sequence $\{t_k^{ij}\}_{k \in \mathbb{N}}$ is as follows:
$$t_{k+1}^{ij} = \inf_{l > t_k^{ij}} \big\{ l : (e_{ij}(t))^2 - c_{ij} (\phi_i(t) - \phi_j(t))^2 \ge 0, \ t \in [t_k^{ij}, l] \big\}. \qquad (5b)$$
Inspired by [43], we design an internal dynamic variable $\tilde{\Gamma}_{ij}(t)$ satisfying
$$\dot{\tilde{\Gamma}}_{ij}(t) = -\tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) + \tilde{\mu}_{ij} \big[ c_{ij} (\phi_i(t) - \phi_j(t))^2 - (e_{ij}(t))^2 \big], \qquad (6)$$
where $\tilde{\Gamma}_{ij}(0) > 0$, and $\tilde{\psi}_{ij}$ and $\tilde{\mu}_{ij} = \tilde{w}_{ij} / c_{ij}$ with $\tilde{w}_{ij} \ge 0$, $i \in N$, $j \in N_i$, are positive constants. In addition, $\dot{\tilde{\Gamma}}_{ij}(t) \ge -(\tilde{\psi}_{ij} + \tilde{w}_{ij} / c_{ij}) \tilde{\Gamma}_{ij}(t)$ for $t > 0$, and thus $\tilde{\Gamma}_{ij}(t) \ge \tilde{\Gamma}_{ij}(0) \exp\big(-(\tilde{\psi}_{ij} + \tilde{w}_{ij} / c_{ij}) t\big) > 0$.
Let $\hat{\phi}_{ij}(t) = \nabla f_i(x_i(t_k^{ij}))$. The distributed optimization algorithm is determined as follows to solve the RAP (1):
$$x_i(t) = \zeta_i(t) + x_i(0), \quad \dot{\zeta}_i(t) = \sum_{j=1}^{n} a_{ij} (\hat{\phi}_{ji}(t) - \hat{\phi}_{ij}(t)), \quad \zeta_i(0) = 0, \qquad (7)$$
for $i \in N$, $j \in N_i$. In addition, one obtains
$$\dot{\phi}_i(t) = 2\alpha_i \sum_{j=1}^{n} a_{ij} [\hat{\phi}_{ji}(t) - \hat{\phi}_{ij}(t)] = 2\alpha_i \sum_{j=1}^{n} a_{ij} [(\phi_j(t) + e_{ji}(t)) - (\phi_i(t) + e_{ij}(t))], \qquad (8)$$
where the initial value $\phi_i(0)$, $i \in N$, satisfies $\phi_i(0) = 2\alpha_i x_i(0) + \beta_i$. The detailed one-to-one DET procedure is given as Algorithm 2.
Theorem 2.
Under Assumptions 1 and 2, if the parameters $\tilde{\psi}_{ij}$ and $c_{ij}$ in (5a,b) and (6) satisfy $\tilde{\psi}_{ij} \ge \pi_{ij} (2 - \tilde{w}_{ij} / c_{ij})$ and $0 < c_{ij} \le \frac{1}{4}$, then the RAP (1) is solved under the distributed optimization algorithm (7) and the DET strategies (5a,b) and (6). Moreover, Zeno behavior is excluded.
Algorithm 2 Distributed optimization algorithm with the one-to-one DET strategy
Require: Initialize all parameters, such as the states $x_i(0)$ and $\zeta_i(0)$ of agent $i$; the initialization must satisfy $\sum_{i=1}^{n} x_i(0) = D$ and $\zeta_i(0) = 0$. Input the last triggering times $t_k^{ij}$ and states $\hat{\phi}_{ij}(t)$, $i \in N$.
Ensure:
for $t = 0$ to $t_{\mathrm{end}}$ do
   for $i = 1$ to $n$ do
      Compute the measurement error $e_{ij}(t)$.
      Compute the trigger threshold $c_{ij} (\phi_i(t) - \phi_j(t))^2 + \pi_{ij} \tilde{\Gamma}_{ij}(t)$.
      if trigger condition (5a,b) holds then
         The event is triggered, and the event time is recorded as $t_{k+1}^{ij}$.
         Update the state $\hat{\phi}_{ij}(t)$ sent by agent $i$ to agent $j$ at event time $t_{k+1}^{ij}$.
         Communicate the state $\hat{\phi}_{ij}(t)$ with the neighbor state $\hat{\phi}_{ji}(t)$.
      else
         Hold the state $\hat{\phi}_{ij}(t)$ of agent $i$ for $t$ in the interval $[t_k^{ij}, t_{k+1}^{ij})$.
      end if
   end for
end for
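The one-to-one case can be sketched in the same way as before, now with an edge-wise held value $\hat{\phi}_{ij}$ and an edge-wise dynamic variable $\tilde{\Gamma}_{ij}$, so each edge triggers on its own schedule. Again, all gains and coefficients below are illustrative assumptions rather than the paper's simulation values:

```python
import numpy as np

# One-to-one DET sketch of algorithm (7) with trigger (5a) and dynamics (6).
# phi_hat[i, j] is the value agent i last transmitted to neighbor j.
n, dt, T = 4, 1e-3, 10.0
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
alpha = np.array([0.8, 1.0, 1.2, 0.9])
beta = np.array([2.0, 3.0, 1.0, 2.5])
c, pi, psi, w = 0.1, 1.0, 2.0, 0.05         # uniform edge gains (illustrative)
mu = w / c                                   # mu_ij = w_ij / c_ij, as in (6)

x = np.array([30.0, 25.0, 40.0, 50.0]); D = x.sum()
grad = lambda x: 2 * alpha * x + beta
P = grad(x)
phi_hat = np.tile(P[:, None], (1, n))        # start from the true gradients
Gamma = np.ones((n, n))                      # Gamma_ij(0) > 0 (non-edge entries unused)

for _ in range(int(T / dt)):
    P = grad(x)
    E = phi_hat - P[:, None]                               # e_ij(t), row i -> neighbor j
    gap = (P[:, None] - P[None, :]) ** 2                   # (phi_i - phi_j)^2
    fire = (A > 0) & (E ** 2 - c * gap - pi * Gamma >= 0)  # condition (5a), edge-wise
    phi_hat = np.where(fire, P[:, None], phi_hat)          # asynchronous per-edge updates
    # zeta_dot_i = sum_j a_ij (phi_hat_ji - phi_hat_ij)
    x += dt * np.sum(A * (phi_hat.T - phi_hat), axis=1)
    Gamma += dt * (-psi * Gamma + mu * (c * gap - E ** 2))

print(np.isclose(x.sum(), D))     # demand constraint preserved
print(np.ptp(grad(x)) < 0.05)     # marginal costs reach consensus
```

Since the exchange term $a_{ij}(\hat{\phi}_{ji} - \hat{\phi}_{ij})$ is antisymmetric in $(i, j)$, the total $\sum_i x_i(t)$ again stays at $D$ exactly, even though each edge updates on its own asynchronous schedule.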
Proof. 
 
(i)
Define the following Lyapunov function:
$$W(t) = V(t) + \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \tilde{\Gamma}_{ij}(t),$$
where $V(t) = \frac{1}{2} \sum_{i=1}^{n} \frac{1}{\alpha_i} (\phi_i(t) - \phi^*)^2$.
From (8), we have
$$\begin{aligned} \dot{V}(t) &= \sum_{i=1}^{n} \frac{1}{\alpha_i} (\phi_i(t) - \phi^*) \dot{\phi}_i(t) \\ &= \sum_{i=1}^{n} \frac{1}{\alpha_i} (\phi_i(t) - \phi^*) \cdot 2\alpha_i \sum_{j=1}^{n} a_{ij} [(\phi_j(t) + e_{ji}(t)) - (\phi_i(t) + e_{ij}(t))] \\ &= \sum_{i=1}^{n} \sum_{j=1}^{n} 2 a_{ij} (\phi_i(t) - \phi^*) [(\phi_j(t) + e_{ji}(t)) - (\phi_i(t) + e_{ij}(t))] \\ &= \sum_{i=1}^{n} \sum_{j=1}^{n} 2 a_{ij} \phi_i(t) [(\phi_j(t) + e_{ji}(t)) - (\phi_i(t) + e_{ij}(t))] \\ &= \sum_{i=1}^{n} \sum_{j=1}^{n} 2 a_{ij} [\phi_i(t) (\phi_j(t) - \phi_i(t)) + \phi_i(t) (e_{ji}(t) - e_{ij}(t))]. \end{aligned} \qquad (9)$$
Note that
$$\sum_{i=1}^{n} \sum_{j=1}^{n} 2 a_{ij} \phi_i(t) (\phi_j(t) - \phi_i(t)) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} [\phi_i(t) (\phi_j(t) - \phi_i(t)) + \phi_j(t) (\phi_i(t) - \phi_j(t))] = -\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} [\phi_i(t) - \phi_j(t)]^2. \qquad (10)$$
From Young’s inequality, one has
$$\sum_{i=1}^{n} \sum_{j=1}^{n} 2 a_{ij} \phi_i(t) (e_{ji}(t) - e_{ij}(t)) \le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ 2 (e_{ij}(t))^2 + \frac{1}{2} (\phi_i(t) - \phi_j(t))^2 \Big]. \qquad (11)$$
Substituting (10) and (11) into (9) yields
$$\dot{V}(t) \le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ 2 (e_{ij}(t))^2 - \frac{1}{2} (\phi_i(t) - \phi_j(t))^2 \Big]. \qquad (12)$$
According to (12), the derivative of the Lyapunov function $W(t)$ can be bounded as
$$\dot{W}(t) \le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ 2 (e_{ij}(t))^2 - \frac{1}{2} (\phi_i(t) - \phi_j(t))^2 + \dot{\tilde{\Gamma}}_{ij}(t) \Big].$$
Then, we can obtain from (5a,b) and (6) that
$$\begin{aligned} \dot{W}(t) &\le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ 2 (e_{ij}(t))^2 - \frac{1}{2} (\phi_i(t) - \phi_j(t))^2 - \tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) + \tilde{w}_{ij} (\phi_i(t) - \phi_j(t))^2 - \tilde{\mu}_{ij} (e_{ij}(t))^2 \Big] \\ &\le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ (2 - \tilde{w}_{ij}/c_{ij}) (e_{ij}(t))^2 + \Big( \tilde{w}_{ij} - \frac{1}{2} \Big) (\phi_i(t) - \phi_j(t))^2 - \tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) \Big] \\ &\le \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ (2 - \tilde{w}_{ij}/c_{ij}) \big( c_{ij} (\phi_i(t) - \phi_j(t))^2 + \pi_{ij} \tilde{\Gamma}_{ij}(t) \big) + \Big( \tilde{w}_{ij} - \frac{1}{2} \Big) (\phi_i(t) - \phi_j(t))^2 - \tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) \Big] \\ &= \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \Big[ \Big( 2 c_{ij} - \frac{1}{2} \Big) (\phi_i(t) - \phi_j(t))^2 + \big( (2 - \tilde{w}_{ij}/c_{ij}) \pi_{ij} - \tilde{\psi}_{ij} \big) \tilde{\Gamma}_{ij}(t) \Big]. \end{aligned}$$
Since $\tilde{\psi}_{ij} \ge \pi_{ij} (2 - \tilde{w}_{ij}/c_{ij})$, $\pi_{ij} > 0$, $\tilde{\mu}_{ij} = \tilde{w}_{ij}/c_{ij}$, and $0 < c_{ij} \le \frac{1}{4}$, one obtains $\dot{W}(t) \le 0$. This implies that $W(t)$ is non-increasing and that $\phi_i(t) - \phi_j(t)$ and $\tilde{\Gamma}_{ij}(t)$ are bounded. In addition, $\tilde{\Gamma}_{ij}(t) > 0$ for all $t \ge 0$, which leads to $W(t) > 0$.
By LaSalle's invariance principle [44], one obtains $\lim_{t \to \infty} |\phi_i(t) - \phi_j(t)| = 0$ and $\lim_{t \to \infty} \tilde{\Gamma}_{ij}(t) = 0$, $(i, j) \in E$. Thus, the RAP (1) is eventually solved.
(ii)
In this part, we prove by contradiction that Zeno behavior does not occur. Assume that the triggering sequence $\{t_k^{ij}\}_{k \in \mathbb{N}}$ determined by (5a,b) and (6) exhibits Zeno behavior, i.e., it accumulates at some finite time $t_*^{ij}$; then, for any $\varepsilon^* > 0$, there exists $K(\varepsilon^*) \in \mathbb{Z}_+$ such that for any $k \ge K(\varepsilon^*)$, $|t_k^{ij} - t_*^{ij}| < \varepsilon^*$.
Evidently,
$$t_{K(\varepsilon^*)+1}^{ij} - t_{K(\varepsilon^*)}^{ij} < 2 \varepsilon^*. \qquad (13)$$
For $t \in [0, T_*^{ij}]$, there exist $\varepsilon_1, \varepsilon_2 > 0$ such that $|\phi_i(t) - \phi_j(t)| \le \varepsilon_1$ and $|\tilde{\Gamma}_{ij}(t)| \le \varepsilon_2$.
Then, for $t_k^{ij} \le t < t_{k+1}^{ij}$, from (8),
$$D^+ |e_{ij}(t)| \le |\dot{e}_{ij}(t)| = |\dot{\phi}_i(t)| = \Big| 2\alpha_i \sum_{j=1}^{n} a_{ij} (\hat{\phi}_{ji}(t) - \hat{\phi}_{ij}(t)) \Big| \le 2 \bar{\alpha} \Pi_k^{ij},$$
where $\bar{\alpha} = \max_{1 \le i \le n} \{\alpha_i\}$ and $\Pi_k^{ij} = \max_{t \in [t_k^{ij}, t_{k+1}^{ij})} \big| \sum_{j=1}^{n} a_{ij} (\hat{\phi}_{ji}(t) - \hat{\phi}_{ij}(t)) \big|$.
Therefore, for any $t \in [t_k^{ij}, t_{k+1}^{ij})$,
$$(e_{ij}(t))^2 \le 4 \bar{\alpha}^2 (\Pi_k^{ij})^2 (t - t_k^{ij})^2. \qquad (14)$$
By the trigger conditions (5a,b) and (6), when $t = t_{k+1}^{ij}$,
$$(e_{ij}(t_{k+1}^{ij}))^2 = c_{ij} \big( \phi_i(t_{k+1}^{ij}) - \phi_j(t_{k+1}^{ij}) \big)^2 + \pi_{ij} \tilde{\Gamma}_{ij}(t_{k+1}^{ij}) \ge \pi_{ij} \tilde{\Gamma}_{ij}(t_{k+1}^{ij}) > 0.$$
Noting that
$$\dot{\tilde{\Gamma}}_{ij}(t) = -\tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) + \tilde{\mu}_{ij} \big[ c_{ij} (\phi_i(t) - \phi_j(t))^2 - (e_{ij}(t))^2 \big] \ge -\tilde{\psi}_{ij} \tilde{\Gamma}_{ij}(t) - \pi_{ij} \tilde{\Gamma}_{ij}(t) = -(\tilde{\psi}_{ij} + \pi_{ij}) \tilde{\Gamma}_{ij}(t),$$
by the comparison principle, we have
$$\tilde{\Gamma}_{ij}(t) \ge \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) t}, \quad \tilde{\Gamma}_{ij}(t_{k+1}^{ij}) \ge \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) t_{k+1}^{ij}},$$
and hence
$$(e_{ij}(t_{k+1}^{ij}))^2 \ge \pi_{ij} \tilde{\Gamma}_{ij}(t_{k+1}^{ij}) \ge \pi_{ij} \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) t_{k+1}^{ij}}. \qquad (15)$$
Combining (14) and (15), we have
$$\pi_{ij} \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) t_{k+1}^{ij}} \le 4 \bar{\alpha}^2 (\Pi_k^{ij})^2 (t_{k+1}^{ij} - t_k^{ij})^2.$$
Therefore,
$$t_{k+1}^{ij} - t_k^{ij} \ge \frac{\sqrt{\pi_{ij} \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) t_{k+1}^{ij}}}}{2 \bar{\alpha} \Pi_k^{ij}} \ge \frac{\sqrt{\pi_{ij} \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) T_*^{ij}}}}{2 \bar{\alpha} \Pi_k^{ij}}. \qquad (16)$$
For $\varepsilon^* = \frac{\sqrt{\pi_{ij} \tilde{\Gamma}_{ij}(0) e^{-(\tilde{\psi}_{ij} + \pi_{ij}) T_*^{ij}}}}{4 \bar{\alpha} \Pi_k^{ij}} > 0$, it follows from (16) that $t_{K(\varepsilon^*)+1}^{ij} - t_{K(\varepsilon^*)}^{ij} \ge 2 \varepsilon^*$, which contradicts (13). Consequently, there is no Zeno behavior. □
Remark 5.
In contrast to the one-to-all DET strategy in Theorem 1, under the one-to-one DET strategy, the triggering sequences $\{t_k^{ij}\}$ ($j \in N_i$) of each agent are different, which allows the agent to flexibly adjust the information transmitted to each of its neighbors $j \in N_i$. The remarkable feature of the one-to-one DET strategy is that each agent designs its own distinctive triggering instants $t_k^{ij}$, free of any synchronous executions and of requirements such as $t_k^{ij} = t_k^{ji}$ or $t_k^{i j_1} = t_k^{i j_2}$ ($j_1, j_2 \in N_i$). Therefore, in practice, one-to-one DET strategies potentially offer greater flexibility and efficiency in adjusting the transmission of information, which is significant for designing a good DET strategy.
Remark 6.
The proposed algorithms (4) and (7) can effectively solve the RAP, but both of them require $\sum_{i=1}^{n} x_i(0) = D$ and $\zeta_i(0) = 0$, i.e., they come with initialization constraints. In future research, we will consider eliminating this state initialization requirement.

5. Numerical Example

In this section, two numerical examples are provided to illustrate the effectiveness of the theoretical results. The proposed one-to-all and one-to-one DET strategies are applied to the RAP (1) in case 1 and case 2, respectively. Figure 1 depicts the connection topology, which satisfies Assumption 1. The chosen cost coefficients α i , β i , and γ i of the quadratic cost function f i ( x i ( t ) ) = α i x i 2 ( t ) + β i x i ( t ) + γ i are listed in Table 2. The load demand D is assumed to be 145. Then, the initial values of x i ( t ) are selected as x 1 ( 0 ) = 30 , x 2 ( 0 ) = 25 , x 3 ( 0 ) = 40 , x 4 ( 0 ) = 50 , and ζ i ( 0 ) = 0 , i = 1 , 2 , 3 , 4 .
Case 1. One-to-all DET
First, consider the one-to-all DET based on Theorem 1. Given the scalars Γ 1 ( 0 ) = 4 , Γ 2 ( 0 ) = 3 , Γ 3 ( 0 ) = 8 , Γ 4 ( 0 ) = 6 , c 1 = 0.05 , c 2 = 0.1 , c 3 = 0.15 , c 4 = 0.2 , ω 1 = 0.025 , ω 2 = 0.03 , ω 3 = 0.06 , ω 4 = 0.07 , μ 1 = 0.5 , μ 2 = 0.3 , μ 3 = 0.4 , μ 4 = 0.35 , π 1 = 1 , π 2 = 2 , π 3 = 3 , π 4 = 4 , ψ 1 = 1.5 , ψ 2 = 3.4 , ψ 3 = 4.8 , ψ 4 = 6.6 , Figure 2 shows that ϕ i ( t ) converges to ϕ * , which essentially guarantees that all agents reach asymptotic consensus. Then, from Figure 3, f i ( x i ( t ) ) converges to the optimal values. Figure 4 shows the triggering instants of the one-to-all DET strategy. In addition, the equality constraint i = 1 4 x i ( t ) = D can be obtained from Figure 5.
Furthermore, the equality constraint i = 1 4 ζ i ( t ) = 0 can be obtained from Figure 6. The trajectories of x i ( t ) are shown in Figure 7. Moreover, Figure 8 shows the minimum value of F ( x ( t ) ) , where F ( x * ) is the optimal solution of the RAP (1). Figure 9 exhibits the trajectory of the dynamic variable Γ i ( t ) .
Case 2. One-to-one DET
Consider the one-to-one DET based on Theorem 2. Different from case 1, given Γ ˜ 12 ( 0 ) = 1 , Γ ˜ 23 ( 0 ) = 4 , Γ ˜ 34 ( 0 ) = 5 , Γ ˜ 41 ( 0 ) = 10 , Γ ˜ 21 ( 0 ) = 1 , Γ ˜ 32 ( 0 ) = 4 , Γ ˜ 43 ( 0 ) = 5 , Γ ˜ 14 ( 0 ) = 10 , c 12 = 0.05 , c 23 = 0.1 , c 34 = 0.15 , c 41 = 0.2 , c 21 = 0.04 , c 32 = 0.08 , c 43 = 0.12 , c 14 = 0.24 , ω 12 = 0.025 , ω 23 = 0.03 , ω 34 = 0.06 , ω 41 = 0.07 , ω 21 = 0.01 , ω 32 = 0.05 , ω 43 = 0.08 , ω 14 = 0.09 , μ ˜ 12 = 0.5 , μ ˜ 23 = 0.3 , μ ˜ 34 = 0.4 , μ ˜ 41 = 0.35 , μ ˜ 21 = 0.25 , μ ˜ 32 = 0.625 , μ ˜ 43 = 0.67 , μ ˜ 14 = 0.375 , π 12 = 1 , π 23 = 2 , π 34 = 3 , π 41 = 4 , π 21 = 1 , π 32 = 2 , π 43 = 3 , π 14 = 4 , ψ ˜ 12 = 1.5 , ψ ˜ 23 = 3.4 , ψ ˜ 34 = 4.8 , ψ ˜ 41 = 6.6 , ψ ˜ 21 = 1.75 , ψ ˜ 32 = 2.75 , ψ ˜ 43 = 3.99 , ψ ˜ 14 = 6.5 , Figure 10 shows that ϕ i ( t ) converges to ϕ * , which also implies that consensus is indeed achieved. Then, as seen in Figure 11, f i ( x i ( t ) ) converges to the minimum value. Figure 12 shows the triggering instants of the one-to-one DET strategy. In addition, the equation constraint i = 1 4 x i ( t ) = D is guaranteed from Figure 13.
Besides, the equation constraint i = 1 4 ζ i ( t ) = 0 is guaranteed from Figure 14. The motion trajectory of x i ( t ) is shown in Figure 15. Furthermore, Figure 16 depicts the minimum value of F ( x ( t ) ) , where F ( x * ) is the optimal solution of the RAP (1). Figure 17 shows that Γ ˜ i j ( t ) converges to 0 and Γ ˜ i j ( t ) > 0 always holds.
Case 3. DET vs. SET
By letting $\Gamma_i(t) = 0$ in (2) and $\tilde{\Gamma}_{ij}(t) = 0$ in (5a), one obtains the one-to-all and one-to-one SET versions (2b) and (5b). The one-to-all DET and SET strategies are compared in Figure 18 and Figure 19, and the one-to-one DET strategy and the corresponding SET strategy are compared in Figure 20 and Figure 21. Since $\Gamma_i(t) > 0$ in (2) and $\tilde{\Gamma}_{ij}(t) > 0$ in (5a), the DET strategies tend to trigger fewer times than the SET strategies, as displayed in Figure 18, Figure 19, Figure 20 and Figure 21 and in Table 3 and Table 4; that is, DET is beneficial for saving system resources through a slower update frequency.

6. Conclusions

In this paper, two novel DET strategies are combined with distributed optimization algorithms to solve the RAP; they trigger less often than SET strategies. Furthermore, the designed distributed optimization algorithms require agents to exchange only decision-variable information, without transmitting the auxiliary variables to neighboring nodes, which saves the communication energy of the system. In addition, the internal dynamic variables $\Gamma_i(t)$ and $\tilde{\Gamma}_{ij}(t)$ not only help solve the RAP, but also play an important role in excluding Zeno behavior. In the future, we will combine DET strategies to study optimization problems with equality and inequality constraints under directed graphs.

Author Contributions

Conceptualization, F.G. and S.C.; methodology, F.G. and X.C.; software, M.Y.; validation, M.Y. and F.G.; formal analysis, F.G. and X.C.; investigation, F.G.; resources, F.G.; data curation, F.G. and S.C.; writing—original draft preparation, M.Y.; writing—review and editing, F.G. and M.Y.; visualization, M.Y.; supervision, H.J.; project administration, S.C.; funding acquisition, X.C. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant no. 62163035), in part by the China Postdoctoral Science Foundation under Grant 2021M690400, in part by the Special Project for Local Science and Technology Development Guided by the Central Government under Grant ZYYD2022A05, in part by the Xinjiang Key Laboratory of Applied Mathematics under Grant XJDX1401, in part by Tianshan Talent Program (Grant No. 2022TSYCLJ0004), and in part by the National Undergraduate Training Program for Innovation and Entrepreneurship (no. 202210755077).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MASs: Multi-agent systems
SET: Static event-triggered
DET: Dynamic event-triggered
RAP: Resource allocation problem

References

1. Yahouni, Z.; Ladj, A.; Belkadi, F.; Meski, O.; Ritou, M. A smart reporting framework as an application of multi-agent system in machining industry. Int. J. Comput. Integr. Manuf. 2021, 34, 470–486.
2. Sharifi, A.; Sharafian, A.; Ai, Q. Adaptive MLP neural network controller for consensus tracking of multi-agent systems with application to synchronous generators. Expert Syst. Appl. 2021, 184, 115460.
3. Liu, Z.; Yu, H.; Fan, G.; Chen, L. Reliability modelling and optimization for microservice-based cloud application using multi-agent system. IET Commun. 2022, 16, 1182–1199.
4. Males, L.; Sumic, D.; Rosic, M. Applications of multi-agent systems in unmanned surface vessels. Electronics 2022, 11, 3182.
5. Qin, J.; Ma, Q.; Shi, Y.; Wang, L. Recent advances in consensus of multi-agent systems: A brief survey. IEEE Trans. Ind. Electron. 2016, 64, 4972–4983.
6. Amirkhani, A.; Barshooi, A. Consensus in multi-agent systems: A review. Artif. Intell. Rev. 2021, 55, 3897–3935.
7. Yu, W.; Zhou, L.; Yu, X.; Lu, J.; Lu, R. Consensus in multi-agent systems with second-order dynamics and sampled data. IEEE Trans. Ind. Inform. 2012, 9, 2137–2146.
8. Xie, Y.; Lin, Z. Global optimal consensus for multi-agent systems with bounded controls. Syst. Control Lett. 2017, 102, 104–111.
9. Nedić, A.; Liu, J. Distributed optimization for control. Annu. Rev. Control Robot. Auton. Syst. 2018, 1, 77–103.
10. Mota, J.F.C.; Xavier, J.M.F.; Aguiar, P.M.Q.; Püschel, M. Distributed optimization with local domains: Applications in MPC and network flows. IEEE Trans. Autom. Control 2014, 60, 2004–2009.
11. Tychogiorgos, G.; Gkelias, A.; Leung, K. A non-convex distributed optimization framework and its application to wireless ad-hoc networks. IEEE Trans. Wirel. Commun. 2013, 12, 4286–4296.
12. Hasegawa, M.; Hirai, H.; Nagano, K.; Harada, H.; Aihara, K. Optimization for centralized and decentralized cognitive radio networks. Proc. IEEE 2014, 102, 574–584.
13. Jumpasri, N.; Pinsuntia, K.; Woranetsuttikul, K.; Nilsakorn, T.; Khan-ngern, W. Comparison of distributed and centralized control for partial shading in PV parallel based on particle swarm optimization algorithm. In Proceedings of the 2014 International Electrical Engineering Congress (iEECON), Chonburi, Thailand, 19–21 March 2014; pp. 1–4.
14. Gharesifard, B.; Cortes, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2014, 59, 781–786.
15. Ramírez-Llanos, E.; Martinez, S. Distributed discrete-time optimization algorithms with applications to resource allocation in epidemics control. Optim. Control Appl. Methods 2018, 39, 160–180.
16. Díaz-Madroñero, M.; Mula, J.; Peidro, D. A review of discrete-time optimization models for tactical production planning. Int. J. Prod. Res. 2014, 52, 5171–5205.
17. Tan, R.R.; Aviso, K.B.; Bandyopadhyay, S.; Ng, D.K.S. Continuous-time optimization model for source-sink matching in carbon capture and storage systems. Ind. Eng. Chem. Res. 2012, 51, 10015–10020.
18. Yi, P.; Hong, Y.; Liu, F. Distributed gradient algorithm for constrained optimization with application to load sharing in power systems. Syst. Control Lett. 2015, 83, 45–52.
19. Lou, Y.; Hong, Y.; Wang, S. Distributed continuous-time approximate projection protocols for shortest distance optimization problems. Automatica 2016, 69, 289–297.
20. Chen, G.; Yao, D.; Zhou, Q.; Li, H.; Lu, R. Distributed event-triggered formation control of USVs with prescribed performance. J. Syst. Sci. Complex. 2022, 35, 820–838.
21. Zhang, L.; Che, W.W.; Deng, C.; Wu, Z.G. Prescribed performance control for multiagent systems via fuzzy adaptive event-triggered strategy. IEEE Trans. Fuzzy Syst. 2022, 30, 5078–5090.
22. Wang, X.; Zhou, Y.; Huang, T.; Chakrabarti, P. Event-triggered adaptive fault-tolerant control for a class of nonlinear multiagent systems with sensor and actuator faults. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 4203–4214.
23. Ge, C.; Liu, X.; Liu, Y.; Hua, C. Event-triggered exponential synchronization of the switched neural networks with frequent asynchronism. IEEE Trans. Neural Netw. Learn. Syst. 2022.
24. Wang, J.; Yan, Y.; Ma, C.; Liu, Z.; Ma, K.; Chen, L. Fuzzy adaptive event-triggered finite-time constraint control for output-feedback uncertain nonlinear systems. Fuzzy Sets Syst. 2022, 443, 236–257.
25. Li, M.; Long, Y.; Li, T.; Chen, C.L.P. Consensus of linear multi-agent systems by distributed event-triggered strategy with designable minimum inter-event time. Inf. Sci. 2022, 609, 644–659.
26. Zhang, S.; Che, W.; Deng, C. Observer-based event-triggered control for linear MASs under a directed graph and DoS attacks. J. Control Decis. 2022, 9, 384–396.
27. Wu, X.; Mao, B.; Wu, X.; Lu, J. Dynamic event-triggered leader-follower consensus control for multiagent systems. SIAM J. Control Optim. 2022, 60, 189–209.
28. Han, F.; Lao, X.; Li, J.; Wang, M.; Dong, H. Dynamic event-triggered protocol-based distributed secondary control for islanded microgrids. Int. J. Electr. Power Energy Syst. 2022, 137, 107723.
29. Liu, K.; Ji, Z. Dynamic event-triggered consensus of general linear multi-agent systems with adaptive strategy. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3440–3444.
30. Hu, S.; Qiu, J.; Chen, X.; Zhao, F.; Jiang, X. Dynamic event-triggered control for leader-following consensus of multiagent systems with the estimator. IET Control Theory Appl. 2022, 16, 475–484.
31. Liang, D.; Dong, Y. Robust cooperative output regulation of linear uncertain multi-agent systems by distributed event-triggered dynamic feedback control. Neurocomputing 2022, 483, 1–9.
32. Xin, C.; Li, Y.; Niu, B. Event-triggered adaptive fuzzy finite time control of fractional-order non-strict feedback nonlinear systems. J. Syst. Sci. Complex. 2022, 35, 2166–2180.
33. Liu, P.; Xiao, F.; Wei, B. Event-triggered control for multi-agent systems: Event mechanisms for information transmission and controller update. J. Syst. Sci. Complex. 2022, 35, 953–972.
34. Xing, L.; Xu, Q.; Wen, C.; Mishra, Y.C.Y.; Ledwich, G.; Song, Y. Robust event-triggered dynamic average consensus against communication link failures with application to battery control. IEEE Trans. Control Netw. Syst. 2020, 7, 1559–1570.
35. Xu, Y.; Sun, J.; Wang, G.; Wu, Z.G. Dynamic triggering mechanisms for distributed adaptive synchronization control and its application to circuit systems. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 2246–2256.
36. Xu, W.; He, W.; Ho, D.W.C.; Kurths, J. Fully distributed observer-based consensus protocol: Adaptive dynamic event-triggered schemes. Automatica 2022, 139, 110188.
37. Wang, X.; Hong, Y.; Sun, X.; Liu, K. Distributed optimization for resource allocation problems under large delays. IEEE Trans. Ind. Electron. 2019, 66, 9448–9457.
38. Deng, Z.; Chen, T. Distributed algorithm design for constrained resource allocation problems with high-order multi-agent systems. Automatica 2022, 144, 110492.
39. Li, L.; Zhou, Z.; Sun, S.; Wei, M. Distributed optimization of enhanced intercell interference coordination and resource allocation in heterogeneous networks. Int. J. Commun. Syst. 2019, 32, e3915.
40. Weng, S.; Yue, D.; Dou, C. Event-triggered mechanism based distributed optimal frequency regulation of power grid. IET Control Theory Appl. 2019, 13, 2994–3005.
41. Hu, W.; Liu, L.; Feng, G. Consensus of linear multi-agent systems by distributed event-triggered strategy. IEEE Trans. Cybern. 2016, 46, 148–157.
42. Dai, H.; Jia, J.; Yan, L.; Fang, X.; Chen, W. Distributed fixed-time optimization in economic dispatch over directed networks. IEEE Trans. Ind. Inform. 2021, 17, 3011–3019.
43. Hu, W.; Yang, C.; Huang, T.; Gui, W. A distributed dynamic event-triggered control approach to consensus of linear multiagent systems with directed networks. IEEE Trans. Cybern. 2020, 50, 869–874.
44. Lygeros, J.; Johansson, K.H.; Simic, S.N.; Zhang, J.; Sastry, S.S. Dynamical properties of hybrid automata. IEEE Trans. Autom. Control 2003, 48, 2–17.
Figure 1. Connection topology.
Figure 2. Trajectory of ϕ i ( t ) .
Figure 3. State evolution of f i ( x i ( t ) ) .
Figure 4. Event triggering instants under one-to-all DET.
Figure 5. Trajectory of i = 1 4 x i ( t ) .
Figure 6. Trajectory of i = 1 4 ζ i ( t ) .
Figure 7. State evolution of x i ( t ) .
Figure 8. State evolution of F ( x ( t ) ) .
Figure 9. State evolution of Γ i ( t ) .
Figure 10. Trajectory of ϕ i ( t ) .
Figure 11. State evolution of f i ( x i ( t ) ) .
Figure 12. Event triggering instants under one-to-one DET.
Figure 13. Trajectory of i = 1 4 x i ( t ) .
Figure 14. Trajectory of i = 1 4 ζ i ( t ) .
Figure 15. State evolution of x i ( t ) .
Figure 16. State evolution of F ( x ( t ) ) .
Figure 17. State evolution of Γ ˜ i j ( t ) .
Figure 18. Events under one-to-all DET (2a).
Figure 19. Events under one-to-all SET (2b).
Figure 20. Events under one-to-one DET (5a).
Figure 21. Events under one-to-one SET (5b).
Table 1. Notation used in this paper.
R : the set of real numbers
R^n : the n-dimensional Euclidean space
‖ · ‖ : the Euclidean norm or induced matrix 2-norm
N : the index set { 1 , 2 , … , n }
diag { α 1 , α 2 , … , α n } : a diagonal matrix with diagonal entries α i , i = 1 , 2 , … , n
1 n : an n × 1 column vector of all ones
0 n : an n × 1 column vector of all zeros
I n : the n × n identity matrix
A ⊗ B : the Kronecker product of matrices A ∈ R^{m × n} and B ∈ R^{p × q}
D + f ( x 0 ) : the right-hand Dini derivative of f at x 0
∇ f : the gradient of f
Table 2. Cost coefficients.
i    α i    β i    γ i
1    0.5    3      2
2    1.5    4      1
3    3      5      0.5
4    1      2      1.5
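If, as is common in RAP studies with coefficients like those in Table 2, each local cost takes the quadratic form f i ( x i ) = α i x i ² + β i x i + γ i , the optimal allocation can be cross-checked in closed form from the KKT conditions. The sketch below uses this assumed cost form, and the demand value D is a placeholder, since the paper's value is not restated in this excerpt:

```python
import numpy as np

# Cost coefficients from Table 2 (assumed quadratic: f_i(x) = a*x^2 + b*x + c)
alpha = np.array([0.5, 1.5, 3.0, 1.0])
beta  = np.array([3.0, 4.0, 5.0, 2.0])
gamma = np.array([2.0, 1.0, 0.5, 1.5])   # constant terms; do not affect x*
D = 10.0                                  # placeholder total demand

# KKT conditions: all marginal costs equal, 2*alpha_i*x_i + beta_i = lam,
# together with the coupling constraint sum(x) = D.
inv2a = 1.0 / (2.0 * alpha)
lam = (D + np.sum(beta * inv2a)) / np.sum(inv2a)   # common Lagrange multiplier
x_star = (lam - beta) * inv2a                       # optimal allocation
F_star = np.sum(alpha * x_star**2 + beta * x_star + gamma)
```

By construction, x_star sums exactly to D and equalizes the gradients ∇ f i ; this is the benchmark value that the event-triggered algorithms should approach (cf. Figures 13 and 16).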
Table 3. One-to-all DET performance comparison with SET.
Event-triggering strategy: triggering numbers for agents 1, 2, 3, 4
DET: 12, 24, 45, 22
SET: 60, 24, 105, 46
Table 4. One-to-one DET performance comparison with SET.
Event-triggering strategy: triggering numbers for edges 1→2, 2→1, 1→4, 4→1, 2→3, 3→2, 3→4, 4→3
DET: 12161821144285524
SET: 10653392514815810381073
Guo, F.; Chen, X.; Yue, M.; Jiang, H.; Chen, S. Distributed Optimization for Resource Allocation Problem with Dynamic Event-Triggered Strategy. Entropy 2023, 25, 1019. https://doi.org/10.3390/e25071019
