Article

Fixed-Time Optimization of Perturbed Multi-Agent Systems under the Resource Constraints

Department of Automation, Hohai University, Nanjing 211100, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(7), 4527; https://doi.org/10.3390/app13074527
Submission received: 7 March 2023 / Revised: 28 March 2023 / Accepted: 31 March 2023 / Published: 3 April 2023
(This article belongs to the Section Robotics and Automation)

Abstract

In this paper, a novel fixed-time distributed optimization algorithm is proposed to solve the multi-agent system collaborative optimization (MSCO) problem with local inequality constraints, global equality constraints, and unknown disturbances. First, a penalty function method is used to eliminate the local inequality constraints and transform the original problem into a problem without local constraints. Then, a novel three-stage control scheme is designed to achieve robust fixed-time convergence. In the first stage, a fixed-time reaching law is given to completely eliminate the effect of unknown disturbances with the aid of the integral sliding mode control method; in the second stage, a suitable interaction strategy is provided such that the whole system satisfies the global constraints in fixed-time; in the third stage, a fixed-time gradient optimization algorithm for the multi-agent system is presented, with which the states of all the agents converge to the minimum of the global objective in fixed-time. Finally, the effectiveness of the proposed control strategy is verified on a wind farm co-generation problem with 60 wind turbines.

1. Introduction

With the increasing number of clustered and networked engineering applications, many tasks have shifted from monolithic execution to multi-agent system collaboration, and MSCO is an important problem arising from this shift. The ultimate aim of an MSCO problem is to minimize the global objective function of a group of agents under global resource requirements. Each agent has access solely to its own state and objective information, communicating exclusively with neighboring agents. MSCO problems arise in many decision-making tasks, such as economic dispatch in smart grids [1,2,3,4,5], data transmission in wireless sensor networks [6,7,8], and path planning for multiple unmanned vehicles [9,10].
For an MSCO problem, the global objective function is often expressed as a sum of local objective functions that are generally convex and accessible only by the local agents. Unlike conventional distributed optimization problems, the agents are usually assumed to be constrained to some convex sets, which makes such problems considerably harder to handle. To solve them, many discrete-time distributed optimization algorithms [11,12,13] have been proposed. The distributed subgradient descent method [11,12], built on the consensus algorithm, solves the unconstrained convex optimization problem under uniform joint connectivity of time-varying undirected networks; it requires each agent to keep a copy of the global decision variables, which undoubtedly increases the communication and storage burden. The exact first-order algorithm (EXTRA) [13] is a distributed accelerated optimization algorithm that achieves exact convergence with a linear rate under undirected graph connectivity and suitable parameter settings. Since many dynamic systems are modeled in continuous time, many continuous-time distributed optimization algorithms have also been proposed [14,15,16]. The distributed PI algorithm [14], based on the idea of PI control, is proposed for weight-balanced directed graphs, with a stability analysis based on the Lyapunov theorem; that paper analyzes the convergence of distributed optimization from the perspective of continuous-time control systems and gives a design framework for distributed optimization algorithms. A distributed extended monotropic optimization algorithm is proposed in [16] using the extended Lagrangian method, which handles a class of separable structures without requiring the agents' states to have the same dimension.
Compared to exponential-convergence algorithms, fixed-time convergence algorithms have gained increasing attention because their convergence time is independent of the initial state [17]. Fixed-time convergence algorithms for multi-agent systems have achieved some results, as evidenced by [18,19,20,21,22] and related references. In [18], a distributed fixed-time convex optimization algorithm is developed using the projected gradient and penalty function methods, which can be easily implemented in practice. Similarly, a fixed-time economic dispatch algorithm for smart grids is proposed in [19] under a directed communication network. In [20], the authors employ the integral sliding mode control method to eliminate external perturbations in multi-agent systems, thereby achieving a distributed fixed-time economic dispatch algorithm; however, that algorithm does not take the local constraints of the agents into account. In [21], a class of heterogeneous agents with random noise is considered and finite-time consensus control is achieved using a power-integrator approach, which provides a reference for designing distributed control strategies for multi-agent systems with nonlinear dynamics. A fixed-time consensus protocol based on a fuzzy adaptive method is proposed in [22] for the consensus tracking problem.
In this paper, we consider the fixed-time MSCO problem for a continuous-time multi-agent system with unknown external disturbances. Firstly, the local inequality constraints are removed using the penalty function method, resulting in a transformed problem without local constraints, and the solution gap is theoretically analyzed. Secondly, a three-stage fixed-time convergence strategy is designed: the first stage uses the integral sliding mode control to eliminate the external perturbation in fixed-time, the second stage uses the symmetry of the communication network to satisfy the global resource constraint in fixed-time, and the third stage uses the fixed-time gradient optimization algorithm to make all the agents converge to the minimum value of the transformed problem in fixed-time. Finally, the Lyapunov function analysis shows that the system can track the global resource demand and satisfy the local resource constraint in fixed-time. The main contribution of the paper can be summarized as follows:
(1)
The penalty function method is used to remove the local constraints of the agents and the error bound of the approximate solution is analyzed.
(2)
A novel three-stage fixed-time distributed optimization algorithm is proposed to solve the MSCO problem with local inequality constraints, global equality constraints, and unknown disturbances.
(3)
The effectiveness of the proposed algorithm is verified by the engineering case of a wind farm co-generation optimization problem considering the wake effect.
The remainder of the paper is organized as follows: Section 2 gives some preliminary knowledge and formulates the MSCO problem. The main results are presented in Section 3, including the analysis of the optimal solution set and three-stage fixed-time MSCO algorithm. Simulation for wind farm co-generation is provided in Section 4 to validate all the results. Section 5 concludes the whole paper.
Notation 1.
ℝ is the set of real numbers, ℝ_{>0} is the set of positive real numbers, ℝ_{≥0} is the set of non-negative real numbers, and 0_n ∈ ℝ^n is the all-zero column vector. (·)^⊤ represents the transpose, and λ_2(·) denotes the second smallest eigenvalue of a matrix. f ∈ C^k(U, V) means that f : U → V, with U ⊆ ℝ^n and V ⊆ ℝ^m, is k-times continuously differentiable. ∇f : ℝ^n → ℝ^n is the gradient of a function f ∈ C^1(ℝ^n, ℝ). ‖·‖_p is the l_p norm; in particular, ‖·‖ is the Euclidean norm.
Suppose the system has N agents, and the communication topology G = (A, V) between agents is an undirected graph with adjacency matrix A(t) = [a_ij(t)] ∈ ℝ^{N×N}, a_ij ∈ {0, 1}, and node set V = {1, 2, …, N}. The set of neighbor nodes of node i is denoted N_i(t), that is, N_i(t) = {j ∈ V | a_ij(t) = 1}.
sgn(·) is the sign function. Given x = [x_1, …, x_n]^⊤ ∈ ℝ^n, we define the function sign^β : ℝ^n → ℝ^n, β ≥ 0, as
sign^β(x) = x‖x‖^(β−1), x ≠ 0_n,
and sign^β(0_n) = 0_n.
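For concreteness, the sign^β map above can be sketched in code (a minimal NumPy version; for a scalar argument it reduces to sgn(e)|e|^β, the form applied to the scalar errors later in the paper):

```python
import numpy as np

def sign_beta(x, beta):
    """sign^beta(x) = x * ||x||**(beta - 1), with sign^beta(0_n) = 0_n.

    For beta in (0, 1) this softens the discontinuous sign map near the
    origin; for beta > 1 it amplifies large arguments, the combination
    used in fixed-time control laws.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    return x * norm ** (beta - 1.0)
```

For β = 1 the map is the identity, and for β = 0 it normalizes a nonzero x onto the unit sphere.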

2. Preliminary and Problem Description

2.1. Some Related Lemmas

Consider the following system:
ẋ = g(t, x), x(0) = x_0,
where x ∈ ℝ^n and g : ℝ_{≥0} × ℝ^n → ℝ^n is a nonlinear function. Assume that the solution of the system (2) exists, is unique, and depends continuously on the initial value x(0) ∈ ℝ^n. The origin is an equilibrium point of the system (2), i.e., g(t, 0_n) = 0_n.
Definition 1.
If the equilibrium point x = 0_n of the system (2) is globally asymptotically stable and any trajectory of the system (2) reaches the equilibrium point in fixed-time, i.e., x(t, x_0) = 0_n for all t ≥ T(x_0), then x = 0_n is said to be a globally fixed-time stable equilibrium point, where T : ℝ^n → ℝ_{>0} is the settling time.
Definition 2.
If any trajectory of the system (2) reaches the set M ⊆ ℝ^n and stays in M in fixed-time, i.e., x(t, x_0) ∈ M for all t ≥ T(x_0), then M is said to be a global fixed-time attraction domain of the system (2), where T : ℝ^n → ℝ_{>0} is the settling time.
Lemma 1
([17,19]). Suppose there exists a radially unbounded function V ∈ C^1(ℝ^n, ℝ_{≥0}) such that V(x) = 0 ⇔ x ∈ M, and constants c_0 > 0, c_1 > 0, c_2 > 0, 0 < μ < 1, ν > 1. If the time derivative of V along the system (2) satisfies
(1)
V̇(x(t)) ≤ −c_1 V^μ(x(t)) − c_2 V^ν(x(t)), then the set M is a global fixed-time attraction domain of the system (2), with settling time T ≤ 1/(c_1(1 − μ)) + 1/(c_2(ν − 1));
(2)
V̇(x(t)) ≤ −c_0 V(x(t)) − c_1 V^μ(x(t)) − c_2 V^ν(x(t)), then the set M is a global fixed-time attraction domain of the system (2), with T ≤ ln(1 + c_0/c_1)/(c_0(1 − μ)) + ln(1 + c_0/c_2)/(c_0(ν − 1)).
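Lemma 1(1) can be spot-checked numerically by integrating the scalar comparison system V̇ = −c_1V^μ − c_2V^ν from a deliberately huge initial value and confirming that the convergence time stays below the bound; all constants here are illustrative:

```python
# Numerical spot-check of Lemma 1(1): integrate Vdot = -c1*V^mu - c2*V^nu
# with explicit Euler and compare the time V reaches (near) zero with the
# settling-time bound T <= 1/(c1*(1-mu)) + 1/(c2*(nu-1)).
c1, c2, mu, nu = 2.0, 2.0, 0.5, 1.5
T_bound = 1.0 / (c1 * (1.0 - mu)) + 1.0 / (c2 * (nu - 1.0))  # = 2.0 here

dt = 1e-5
V, t = 1e6, 0.0                      # deliberately huge V(0)
while V > 1e-12 and t < 2.0 * T_bound:
    V = max(V + dt * (-c1 * V**mu - c2 * V**nu), 0.0)
    t += dt
```

Despite V(0) = 10^6, V reaches zero before the bound T = 2, illustrating the initial-state-independent settling time.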
Definition 3.
A function f ∈ C^1(ℝ^n, ℝ) is m-strongly convex if there exists m > 0 such that f(y) ≥ f(x) + ∇f(x)^⊤(y − x) + (m/2)‖x − y‖² for all x, y ∈ ℝ^n.
Lemma 2
([23]). For any z_i ∈ ℝ_{≥0}, i ∈ {1, 2, …, N}, the following inequalities hold:
∑_{i=1}^{N} z_i^p ≥ (∑_{i=1}^{N} z_i)^p, 0 < p ≤ 1,
∑_{i=1}^{N} z_i^p ≥ N^(1−p) (∑_{i=1}^{N} z_i)^p, p > 1.
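Both inequalities are easy to verify numerically for arbitrary non-negative z_i (the values below are illustrative):

```python
import numpy as np

# Numerical spot-check of Lemma 2 for random non-negative z_i.
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 5.0, size=20)
N = z.size

for p in (0.3, 0.7, 1.0):        # case 0 < p <= 1
    assert np.sum(z**p) >= np.sum(z)**p
for p in (1.5, 2.0, 3.0):        # case p > 1
    assert np.sum(z**p) >= N**(1.0 - p) * np.sum(z)**p
```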
Lemma 3
([24]). The graph G = (A, V) is an undirected connected graph. The Laplacian matrix L_A ≜ [l_ij] ∈ ℝ^{N×N} of G is given by the following equation:
l_ij = ∑_{k=1, k≠i}^{N} a_ik if i = j, and l_ij = −a_ij if i ≠ j.
The Laplacian matrix L A has the following properties:
(1)
L_A is a symmetric positive semi-definite matrix, L_A 1_N = 0_N, and λ_2(L_A) > 0.
(2)
x^⊤L_A x = (1/2) ∑_{i,j=1}^{N} a_ij(x_i − x_j)²; if 1_N^⊤x = 0, we have x^⊤L_A x ≥ λ_2(L_A) x^⊤x.
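These properties can be checked on a small example, say the Laplacian of a 4-cycle (an arbitrary choice):

```python
import numpy as np

# Check the Laplacian properties of Lemma 3 on a 4-cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # l_ii = sum_k a_ik, l_ij = -a_ij

assert np.allclose(L, L.T)                  # symmetric
assert np.allclose(L @ np.ones(4), 0.0)     # L 1_N = 0_N
lam = np.linalg.eigvalsh(L)                 # eigenvalues in ascending order
assert lam[0] > -1e-12 and lam[1] > 0.0     # PSD, with lambda_2 > 0

x = np.array([1.0, -2.0, 0.5, 0.5])         # satisfies 1_N^T x = 0
assert x @ L @ x >= lam[1] * (x @ x) - 1e-9 # quadratic-form lower bound
```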

2.2. Problem Description

Consider a multi-agent system consisting of N first-order agents whose communication topology is given by the graph G = (A, V). The state of agent i is x_i ∈ ℝ, and the global state of the entire system is x = (x_1, …, x_N)^⊤ ∈ ℝ^N. Agent i's local objective function f_i : ℝ^{|N_i|+1} → ℝ depends only on the local state x_i and the states of the neighboring agents {x_j}_{j∈N_i}. Denote the local augmented state of agent i as x̃_i = (x_i, {x_j}_{j∈N_i}). The global objective function of the system separates into the sum of the agents' local objective functions, i.e., f(x) = ∑_{i=1}^{N} f_i(x̃_i). Consider the following MSCO problem:
min_{x ∈ ℝ^N} f(x) = ∑_{i=1}^{N} f_i(x̃_i), s.t. ∑_{i=1}^{N} x_i = ∑_{i=1}^{N} d_i = D, (x_i − x_i^l)(x_i − x_i^u) ≤ 0, i ∈ V,
where d_i ∈ ℝ denotes the local resource requirement of agent i, D ∈ ℝ is the global resource requirement of the multi-agent system, and x_i^l, x_i^u ∈ ℝ_{≥0} are the known lower and upper local resource bounds of agent i, respectively. A distributed solution of the problem (5) requires that agent i stores only its local state x_i, local objective function f_i, local resource bounds x_i^l and x_i^u, and local resource requirement d_i, and interacts only with its neighboring agents. The set of feasible solutions to the problem (5) is denoted as
X_fe = {x ∈ ℝ^N | x_1 + ⋯ + x_N = D, x_i^l ≤ x_i ≤ x_i^u, i ∈ V}.
The optimal solution set of the problem (5) is denoted as Ω*, and the corresponding optimal objective value is f*.
The dynamics of the first-order agents can be described as
ẋ_i(t) = u_i(t) + ω_i(t), i = 1, 2, …, N,
where u_i(t) ∈ ℝ denotes the control input of agent i and ω_i(t) ∈ ℝ is the unknown external disturbance. The task is to design the control law u_i(t), in the presence of the disturbance ω_i(t), such that the optimal solution set Ω* of the MSCO problem (5) is a global fixed-time attraction domain of the closed-loop system.
Assumption 1.
Graph G is a static undirected connected graph.
Assumption 2.
The local objective function f_i(x̃_i) is continuously differentiable, and the global objective function f(x) ∈ C^1(ℝ^N, ℝ) is m-strongly convex.
Assumption 3
(relaxed Slater condition). There exists
x̄ = (x̄_1, …, x̄_N)^⊤,
such that (x̄_i − x_i^l)(x̄_i − x_i^u) ≤ 0, ∀i ∈ V, and ∑_{i=1}^{N} x̄_i = D hold simultaneously.
Assumption 4.
The l_2 norm of the external disturbance ω_i(t) is bounded, i.e., there exists a constant δ_i ∈ ℝ_{>0} such that ‖ω_i‖ ≤ δ_i, and we write δ̄ = max_i(δ_i).
Remark 1.
Assumptions 1–3 are commonly used in the design of distributed fixed-time optimization algorithms. Assumption 1 simplifies the description of the algorithm; as discussed later, the approach also applies to time-varying topologies and strongly connected directed graphs. Assumptions 2 and 3 guarantee the existence and uniqueness of the solution of the problem (5), which can be verified via the dual of the problem (5), i.e., there exists a unique x* ∈ ℝ^N such that f(x*) = f* is the minimum. Common objective functions in engineering applications, such as the quadratic consumption characteristic curves of generating units in power system economic dispatch and the l_2 loss functions in supervised learning, are consistent with Assumption 2. Assumption 3 is a relaxed Slater condition, which does not require the optimal solution of the problem (5) to be an interior point of the resource-constraint set.
Remark 2.
Under weaker conditions than Assumptions 1–4, some studies have obtained asymptotically convergent algorithms for solving the MSCO problem (5) [13,14,15,16]. Several multi-agent fixed-time optimization algorithms exist that omit the local resource constraints (x_i − x_i^l)(x_i − x_i^u) ≤ 0, i ∈ V. Reference [20] addresses distributed optimization without local resource constraints in the presence of external disturbances. In this paper, we solve the distributed optimization problem (5) in the presence of both external disturbances and local resource constraints of the agents.

3. Main Results

3.1. Optimal Solution Set

Before giving the distributed algorithm for the problem (5), we discuss the properties of its optimal solution. By Assumptions 2 and 3, the optimal solution x* ∈ ℝ^N of the problem (5) satisfies the following KKT optimality conditions [25]:
∂f_i(x̃_i*)/∂x_i + ∑_{j∈N_i} ∂f_j(x̃_j*)/∂x_i + λ* + μ_i*((x_i* − x_i^l) + (x_i* − x_i^u)) = 0,
x_1* + ⋯ + x_N* = D,
(x_i* − x_i^l)(x_i* − x_i^u) ≤ 0,
μ_i*(x_i* − x_i^l)(x_i* − x_i^u) = 0,
where λ* ∈ ℝ and μ_i* ∈ ℝ_{≥0}, i ∈ V. Since f(x) is a strongly convex function, the problem (5) has a unique optimal solution x* [25].
Let g_i(x_i) = (x_i − x_i^l)(x_i − x_i^u), i ∈ V. For handling inequality constraints, Pinar [26] proposed a smooth penalty function of the following form:
p_{i,ε}(g_i(x_i)) = 0, if g_i(x_i) ≤ 0; g_i(x_i)²/(2ε), if 0 ≤ g_i(x_i) ≤ ε; g_i(x_i) − ε/2, if g_i(x_i) ≥ ε.
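A minimal sketch of this smooth penalty applied to the box constraint g_i(x_i) = (x_i − x_i^l)(x_i − x_i^u) ≤ 0 of a single agent; the bounds and ε below are illustrative:

```python
# Sketch of the smooth penalty of Pinar [26] for one agent's box constraint.
def penalty(g, eps):
    """0 for g <= 0, quadratic blend on [0, eps], linear (minus eps/2) beyond."""
    if g <= 0.0:
        return 0.0
    if g <= eps:
        return g * g / (2.0 * eps)
    return g - eps / 2.0

def g_box(x, xl, xu):
    """Box constraint in product form: g(x) <= 0 iff xl <= x <= xu."""
    return (x - xl) * (x - xu)

eps, xl, xu = 0.1, 0.0, 2.0                       # illustrative values
assert penalty(g_box(1.0, xl, xu), eps) == 0.0    # feasible point: no penalty
assert penalty(g_box(2.1, xl, xu), eps) > 0.0     # violation: penalized
```

The quadratic middle piece makes the penalty continuously differentiable at g = 0 and at g = ε, which is what allows gradient-based fixed-time schemes to use it.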
For ε ∈ ℝ_{>0}, the problem (5) can be rewritten as
min_{x ∈ ℝ^N} f^p(x) = ∑_{i=1}^{N} f_i^p(x̃_i),
s.t. x_1 + ⋯ + x_N = D,
where
f_i^p(x̃_i) = f_i(x̃_i) + γ p_{i,ε}(g_i(x_i)), i ∈ V.
Since f(x) is an m-strongly convex function and ∑_{i=1}^{N} p_{i,ε}(g_i(x_i)) is a convex function, f^p(x) is also m-strongly convex, and the solution of the problem (10) is unique. Let the ε-feasible solution set of the problem (5) be
X_fe^ε = {x ∈ ℝ^N | x_1 + ⋯ + x_N = D, g_i(x_i) ≤ ε, i ∈ V}.
For an appropriate value of γ, the optimal solution of the problem (10) lies within the ε-feasible solution set of the problem (5), and its objective value is within an ε-dependent gap of the optimum of the problem (5). More precisely, we have the following result.
Lemma 4
([26,27]). Let x*, λ*, μ_1*, …, μ_N* be a set of solutions to the KKT Equation (8) under Assumptions 2 and 3, and let x̂ be the optimal solution to the problem (10) for some γ, ε ∈ ℝ_{>0}. If we take γ > Nγ*, where γ* = max{μ_1*, …, μ_N*}, then we have
x̂ ∈ X_fe^ε,
0 ≤ f* − f(x̂) ≤ εγN.
Remark 3.
From Lemma 4, it follows that if there is an upper bound for μ_i*, i ∈ V, then the solution of the problem (5) can be approximated by solving the problem (10).
Theorem 1.
Let x*, λ*, μ_1*, …, μ_N* be the solution of the KKT Equation (8) under Assumptions 2 and 3, and denote Φ_i = max_{x̃_i ∈ X̃_fe^i} |∂f_i(x̃_i)/∂x_i|, where X̃_fe^i = {g_i(x_i) ≤ 0, g_j(x_j) ≤ 0, j ∈ N_i}. Then
|λ*| ≤ max{Φ_i + ∑_{j∈N_i} Φ_j}_{i=1}^{N},
max{μ_i*}_{i=1}^{N} ≤ 2 max{Φ_i + ∑_{j∈N_i} Φ_j}_{i=1}^{N} / min{x_i^u − x_i^l}_{i=1}^{N}.
Proof. 
It is straightforward to see that
|∂f_i(x̃_i*)/∂x_i + ∑_{j∈N_i} ∂f_j(x̃_j*)/∂x_i| ≤ max_{x̃_i ∈ X̃_fe^i} |∂f_i(x̃_i)/∂x_i + ∑_{j∈N_i} ∂f_j(x̃_j)/∂x_i| ≤ Φ_i + ∑_{j∈N_i} Φ_j, ∀i ∈ V.
Therefore, the following inequality holds:
−max{Φ_i + ∑_{j∈N_i} Φ_j}_{i=1}^{N} ≤ −∂f_i(x̃_i*)/∂x_i − ∑_{j∈N_i} ∂f_j(x̃_j*)/∂x_i ≤ max{Φ_i + ∑_{j∈N_i} Φ_j}_{i=1}^{N}.
Substituting (15) into the KKT Equation (8) yields the upper bound of |λ*| in (14); the bound on μ_i* then follows from the stationarity condition, since at an active constraint |(x_i* − x_i^l) + (x_i* − x_i^u)| = x_i^u − x_i^l ≥ min{x_i^u − x_i^l}_{i=1}^{N}. □
Remark 4.
To obtain the upper bound of the optimal multiplier μ i , one only needs to know the local objective function f i ( x ˜ i ) and the upper/lower bounds of the resource constraints of both the local and neighboring agents. One can easily design the distributed protocol to find the optimal multiplier, and then set the parameter γ.

3.2. Three-Stage Fixed-Time Robust MSCO Algorithm

By choosing a suitable γ , solving the problem (10) yields an ϵ -approximate solution to the original problem (5). Using the fixed-time integral sliding mode control [28], the fixed-time robust MSCO algorithm for solving the problem (10) under the undirected connected graph is proposed as follows:
u_i = u_{i1} + u_{i2},
u_{i1} = −ξ_1 sign^μ(e_i) − ξ_2 sign^ν(e_i) + η_i,
e_i = x_i − d_i − ∫_0^t η_i dτ,
η_i = ζ_1 ∑_{j∈N_i} sign^μ(ϕ_j − ϕ_i) + ζ_2 ∑_{j∈N_i} sign^ν(ϕ_j − ϕ_i),
ϕ_i = ∂f_i^p/∂x_i + ∑_{j∈N_i} ∂f_j^p/∂x_i,
u_{i2} = −κ_0 s_i − κ_1 sgn(s_i) − κ_2 sign^α(s_i),
s_i = x_i − ∫_0^t u_{i1}(τ) dτ,
where κ_0, κ_1, κ_2, ξ_1, ξ_2, ζ_1, ζ_2 ∈ ℝ_{>0}, κ_1 > δ̄, α > 1, 0 < μ < 1, and ν > 1 are adjustable parameters; s_i denotes the integral sliding manifold; ϕ_i denotes the sum of the local and neighboring gradient information of agent i; and u_{i1}, u_{i2}, e_i, η_i are auxiliary variables.
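As a sanity check on the structure of the law (16), the following is a minimal discrete-time sketch on a hypothetical 3-agent line graph with quadratic local costs f_i = a_i x_i², no neighbor coupling in the costs, and no local box constraints (so ϕ_i = ∂f_i/∂x_i). All gains, the Euler step, and the disturbance are illustrative choices, not the paper's simulation settings:

```python
import numpy as np

def sgn_b(e, b):
    """Componentwise sign^b: sgn(e)*|e|**b."""
    return np.sign(e) * np.abs(e) ** b

# Toy setup: 3 agents on a line graph, costs f_i = a_i*x_i^2, coupling
# constraint x1 + x2 + x3 = D, bounded disturbances. Gains are illustrative.
a = np.array([1.0, 2.0, 4.0])
d = np.array([2.0, 2.0, 2.0]); D = d.sum()
nbrs = {0: [1], 1: [0, 2], 2: [1]}
k0, k1, k2 = 5.0, 2.0, 5.0                # reaching-law gains, k1 > dist. bound
xi1, xi2, z1, z2 = 5.0, 5.0, 5.0, 5.0
alpha, mu, nu = 1.5, 0.5, 1.5
dt = 1e-4
x = np.array([0.0, 1.0, 5.0])             # sum x(0) != D is allowed
int_eta = np.zeros(3); int_u1 = np.zeros(3)

for step in range(int(5.0 / dt)):
    t = step * dt
    phi = 2.0 * a * x                                     # local gradients
    eta = np.array([sum(z1 * sgn_b(phi[j] - phi[i], mu)
                        + z2 * sgn_b(phi[j] - phi[i], nu)
                        for j in nbrs[i]) for i in range(3)])
    e = x - d - int_eta                                   # demand-tracking error
    u1 = -xi1 * sgn_b(e, mu) - xi2 * sgn_b(e, nu) + eta
    s = x - int_u1                                        # integral sliding manifold
    u2 = -k0 * s - k1 * np.sign(s) - k2 * sgn_b(s, alpha)
    w = 0.5 * np.sin(t + np.arange(3))                    # bounded disturbance
    x = x + dt * (u1 + u2 + w)
    int_eta += dt * eta
    int_u1 += dt * u1
```

With these toy settings the run settles near the KKT point of the reduced problem, where ∑x_i = D and the gradients 2a_i x_i reach consensus, mirroring Steps 1–3 of the proof below.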
Remark 5.
The Algorithm (16) is based on a three-stage design. Inspired by integral sliding mode control, the first stage rejects the external disturbance ω_i(t). Using the symmetry of the undirected graph, the second stage ensures that the global resource demand constraint is satisfied. The third stage drives the agents to the global optimum x̂. Since all three stages are completed in fixed-time, the Algorithm (16) drives the agents to solve the problem (10) in fixed-time.
Theorem 2.
Assuming that Assumptions 1–4 hold, the problem (5) can be solved by distributed Algorithm (16) in fixed-time.
Proof. 
The proof can be divided into three steps.
  • Step 1: The reaching law u_{i2}(t) guarantees fixed-time convergence of the sliding manifold s_i, after which the effect of the external perturbation ω_i(t) is totally eliminated.
The dynamics of s_i are obtained from the Algorithm (16):
ṡ_i(t) = ẋ_i(t) − u_{i1}(t) = u_{i2}(t) + ω_i(t) = −κ_0 s_i − κ_1 sgn(s_i) − κ_2 sign^α(s_i) + ω_i(t).
Consider the Lyapunov function V_{1,i}(t) = s_i². The derivative of V_{1,i}(t) along the dynamics (17) is
V̇_{1,i}(t) = 2s_iṡ_i = −2κ_0 s_i² − 2κ_1|s_i| − 2κ_2|s_i|^(α+1) + 2s_iω_i ≤ −2κ_0 s_i² − 2(κ_1 − δ̄)(s_i²)^(1/2) − 2κ_2(s_i²)^((α+1)/2) = −2κ_0 V_{1,i} − 2(κ_1 − δ̄)V_{1,i}^(1/2) − 2κ_2 V_{1,i}^((α+1)/2).
By applying the Cauchy–Schwarz inequality s_iω_i ≤ |s_i||ω_i| together with Assumption 4, inequality (18) holds. Furthermore, since V_{1,i}(t) = 0 implies s_i = 0, Lemma 1 shows that s_i converges to 0 in fixed-time T_1 even in the presence of the external perturbation ω_i(t). As a result, we can derive the following:
T_1 ≤ ln(1 + κ_0/(κ_1 − δ̄))/κ_0 + ln(1 + κ_0/κ_2)/(κ_0(α − 1)).
Thus, we have ẋ_i(t) = u_{i1}(t) for all t ≥ T_1.
  • Step 2: Using the symmetry of the control input u_{i1}, it is proven that the global resource constraint ∑_{i=1}^{N} x_i = D is satisfied in fixed-time.
When t ≥ T_1, the dynamics of x_i degenerate to
ẋ_i = −ξ_1 sign^μ(e_i) − ξ_2 sign^ν(e_i) + η_i.
The dynamics of e_i can then be simplified by Equation (20) as
ė_i = −ξ_1 sign^μ(e_i) − ξ_2 sign^ν(e_i).
At this point, the error e i is decoupled from the state x i . Consider the Lyapunov candidate
V_{2,i}(t) = e_i².
The derivative of V_{2,i}(t) along the subsystem (21) is
V̇_{2,i}(t) = 2e_iė_i = 2e_i(−ξ_1 sign^μ(e_i) − ξ_2 sign^ν(e_i)) = −2ξ_1(e_i²)^((μ+1)/2) − 2ξ_2(e_i²)^((ν+1)/2) = −2ξ_1 V_{2,i}^((μ+1)/2) − 2ξ_2 V_{2,i}^((ν+1)/2).
According to Lemma 1 and the fact that V_{2,i} = 0 ⇔ e_i = 0, the error e_i converges to 0 in fixed-time T_2, with
T_2 ≤ T_1 + 1/(ξ_1(1 − μ)) + 1/(ξ_2(ν − 1)).
Therefore, when t ≥ T_2, e_i = ė_i = 0, and hence
∑_{i=1}^{N} e_i = ∑_{i=1}^{N} x_i − ∑_{i=1}^{N} d_i − ∫_0^t ∑_{i=1}^{N} η_i dτ = 0.
Since the graph G is an undirected connected graph, the pairwise terms cancel by antisymmetry and we have
∑_{i=1}^{N} ∑_{j∈N_i} ζ_1 sign^μ(ϕ_j − ϕ_i) = ∑_{i=1}^{N} ∑_{j∈N_i} ζ_2 sign^ν(ϕ_j − ϕ_i) = 0, i.e., ∑_{i=1}^{N} η_i = 0.
From Equations (25) and (26), we have
∑_{i=1}^{N} x_i = ∑_{i=1}^{N} d_i = D, ∀t ≥ T_2,
which indicates that the global resource requirement D is always satisfied when t T 2 .
  • Step 3: Building on existing unconstrained distributed gradient-based optimization algorithms, a fixed-time scheme is applied to drive f^p(x) to the optimal value f^p(x̂).
We have e_i = 0 when t ≥ T_2, and the dynamics of x_i now degenerate to
ẋ_i = ζ_1 ∑_{j∈N_i} sign^μ(ϕ_j − ϕ_i) + ζ_2 ∑_{j∈N_i} sign^ν(ϕ_j − ϕ_i).
Consider the Lyapunov function
V_3(t) = f^p(x) − f^p(x̂).
It is straightforward to see that V_3(t) ≥ 0 and that V_3(t) = 0 ⇔ x = x̂. The derivative of V_3(t) along the dynamics (28) is
V̇_3(t) = ∑_{i=1}^{N} ϕ_iẋ_i = ζ_1 ∑_{i=1}^{N} ∑_{j∈N_i} ϕ_i sign^μ(ϕ_j − ϕ_i) + ζ_2 ∑_{i=1}^{N} ∑_{j∈N_i} ϕ_i sign^ν(ϕ_j − ϕ_i) = −(1/2)ζ_1 ∑_{i=1}^{N} ∑_{j∈N_i} |ϕ_j − ϕ_i|^(μ+1) − (1/2)ζ_2 ∑_{i=1}^{N} ∑_{j∈N_i} |ϕ_j − ϕ_i|^(ν+1) = −(1/2)ζ_1 ∑_{i=1}^{N} ∑_{j=1}^{N} (a_ij|ϕ_j − ϕ_i|²)^((μ+1)/2) − (1/2)ζ_2 ∑_{i=1}^{N} ∑_{j=1}^{N} (a_ij|ϕ_j − ϕ_i|²)^((ν+1)/2).
Let ρ_ij = a_ij|ϕ_j − ϕ_i|². Based on the conditions 0 < μ < 1 and ν > 1, Lemma 2 yields
V̇_3 = −(1/2)ζ_1 ∑_{i=1}^{N} ∑_{j=1}^{N} ρ_ij^((μ+1)/2) − (1/2)ζ_2 ∑_{i=1}^{N} ∑_{j=1}^{N} ρ_ij^((ν+1)/2) ≤ −(1/2)ζ_1 (∑_{i,j=1}^{N} ρ_ij)^((μ+1)/2) − (1/2)ζ_2 N^(1−ν) (∑_{i,j=1}^{N} ρ_ij)^((ν+1)/2).
Let the average gradient be ϕ_c = (1/N) ∑_{i=1}^{N} ϕ_i and the gradient error be ϕ̃_i = ϕ_i − ϕ_c; then
ρ_ij = a_ij|ϕ̃_j − ϕ̃_i|².
Let the global gradient vector be ϕ = (ϕ_1, …, ϕ_N)^⊤ and the global gradient error vector be ϕ̃ = (ϕ̃_1, …, ϕ̃_N)^⊤, noting that ϕ = ∇f^p(x). Additionally, by Step 2 we know that 1_N^⊤x = D for all t ≥ T_2, and the optimal solution x̂ satisfies the constraint 1_N^⊤x̂ = D; then
ϕ_c 1_N^⊤(x̂ − x) = ϕ_c(1_N^⊤x̂ − 1_N^⊤x) = 0.
Given the m-strong convexity of f^p(x), let f^p* = f^p(x̂); then
f^p* ≥ f^p(x) + ϕ^⊤(x̂ − x) + (m/2)‖x − x̂‖² = f^p(x) + (ϕ − ϕ_c1_N)^⊤(x̂ − x) + (m/2)‖x̂ − x‖² = f^p(x) + ϕ̃^⊤(x̂ − x) + (m/2)‖x − x̂‖².
Denote F(y) = (f^p(x) − f^p*) + ϕ̃^⊤y + (m/2)‖y‖², a quadratic in y whose minimum value is (f^p(x) − f^p*) − ‖ϕ̃‖²/(2m). By the inequality above, F(x̂ − x) ≤ 0, so this minimum is non-positive, which yields
‖ϕ̃‖² = ϕ̃^⊤ϕ̃ ≥ 2m(f^p(x) − f^p*).
By the definition of the gradient error ϕ̃, we have
1_N^⊤ϕ̃ = 0.
From Equation (32) and Lemma 3, we have
∑_{i,j=1}^{N} ρ_ij = ∑_{i,j=1}^{N} a_ij|ϕ̃_j − ϕ̃_i|² = 2ϕ̃^⊤L_Aϕ̃ ≥ 2λ_2(L_A)ϕ̃^⊤ϕ̃ ≥ 4mλ_2(L_A)(f^p(x) − f^p*) = rV_3,
where r = 4mλ_2(L_A). This leads to
V̇_3 ≤ −(1/2)ζ_1 r^((μ+1)/2) V_3^((μ+1)/2) − (1/2)ζ_2 N^(1−ν) r^((ν+1)/2) V_3^((ν+1)/2).
According to Lemma 1, we have V_3 = 0, i.e., f^p(x) = f^p* and x = x̂, for all t ≥ T_3. Furthermore,
T_3 ≤ T_2 + 4/(ζ_1 r^((μ+1)/2)(1 − μ)) + 4/(ζ_2 N^(1−ν) r^((ν+1)/2)(ν − 1)),
i.e., the problem (10) is solved by the algorithm (16) in fixed-time T 3 . □
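The strong-convexity bound ‖ϕ̃‖² ≥ 2m(f^p(x) − f^p*) used in Step 3 can be spot-checked numerically on a toy quadratic objective restricted to the constraint 1_N^⊤x = D (the objective, m, and D below are illustrative, not the paper's example):

```python
import numpy as np

# Toy check: f(x) = sum_i a_i*x_i^2 restricted to sum x = D has
# strong-convexity modulus m = 2*min(a) and constrained optimum
# f* = D^2 / sum(1/a). Verify ||phi_tilde||^2 >= 2m(f(x) - f*),
# where phi_tilde is the gradient minus its average.
rng = np.random.default_rng(1)
a = np.array([1.0, 2.0, 4.0])
D = 6.0
m = 2.0 * a.min()
f_star = D**2 / np.sum(1.0 / a)

for _ in range(100):
    x = rng.normal(size=3)
    x += (D - x.sum()) / 3.0          # project onto the constraint sum x = D
    f = np.sum(a * x**2)
    phi = 2.0 * a * x                 # gradient of f
    phi_t = phi - phi.mean()          # gradient error phi_tilde
    assert phi_t @ phi_t >= 2.0 * m * (f - f_star) - 1e-9
```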
Remark 6.
The local objective function f_i in the MSCO problem solved in this paper not only depends on the local state x_i but may also depend on the states x_j, j ∈ N_i, of the neighboring agents. While common MSCO problems in [18,19,27] assume that the local objective function f_i is a function of x_i only, this paper deals with the more general case.
Remark 7.
Unlike common distributed exponential-convergence algorithms, the Algorithm (16) converges in fixed-time, and the convergence time depends only on the constants in the algorithm and the second smallest eigenvalue of the Laplacian matrix of the communication topology graph, so it can easily be adjusted to achieve different convergence times. Although the communication topology of the multi-agent system in this paper is static, the Algorithm (16) can handle a time-varying topology directly by replacing the second smallest eigenvalue of the Laplacian matrix in Step 3 with its lower bound over the time-varying case, so the proof for the time-varying topology is not repeated.

4. Simulation

This section considers a scenario of distributed cooperative power generation in a wind farm. A wind farm contains many wind turbines, and centralized wind farm scheduling suffers from complex communication links and poor fault tolerance. A distributed generation dispatch method only requires communication links among the wind turbine generators (WTGs) that keep the WTGs in a connected topology, which is robust to topology switching.
In this section, the effectiveness of the Algorithm (16) is tested on the distributed cooperative power tracking of the Princess Amalia wind farm in the Netherlands, the detailed configuration of which can be found in [20]. The wind farm contains 60 wind turbines with a rated power of 2 MW. Due to the wake effect, the power cost function of a wind turbine is affected by the power of the neighboring wind turbines upwind. For a given wind direction, the wake effect for the Princess Amalia wind farm is shown in Figure 1. Take turbine 24 as an example; its generation cost is a function of its own power and the power of turbines 15 and 16. Taking the decision variable x as the vector of wind turbine powers of the wind farm, the generation cost function of the ith wind turbine is f_i = f_{i,i}(x_i) + ∑_{j∈N_i} f_{i,j}(x_i, x_j), where f_{i,i}(x_i) is the ith wind turbine's own generation cost and f_{i,j}(x_i, x_j) is the generation cost of the ith wind turbine affected by the wake of the jth wind turbine. The turbine's own generation cost is a quadratic cost curve, i.e., f_{i,i}(x_i) = c_{2,i}x_i² + c_{1,i}x_i + c_{0,i}. Based on the main characteristics of the wake effect, the generation cost under the influence of adjacent turbines is assumed to be f_{i,j}(x_i, x_j) = k_{i,j}x_ix_j, k_{i,j} ≥ 0. Note that the local generation cost f_i of a turbine under this assumption may be a non-convex function. The generation cost of turbine 24 is f_24 = c_{2,24}x_24² + c_{1,24}x_24 + c_{0,24} + k_{24,15}x_15x_24 + k_{24,16}x_16x_24.
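To make the cost structure concrete, the wake-coupled cost of turbine 24 can be evaluated with hypothetical coefficients; the specific numbers below are illustrative picks from the ranges quoted in the next paragraph, not the values used in the paper's simulation:

```python
# Hypothetical coefficients for turbine 24's wake-coupled generation cost.
c2, c1, c0 = 5.0, 2.5, 100.0
k = 0.1                       # wake-effect coefficient k_{24,15} = k_{24,16}

def f24(x24, x15, x16):
    """f_24 = c2*x24^2 + c1*x24 + c0 + k*x15*x24 + k*x16*x24."""
    return c2 * x24**2 + c1 * x24 + c0 + k * x15 * x24 + k * x16 * x24

base = f24(1.5, 0.0, 0.0)     # upstream turbines idle
coupled = f24(1.5, 1.2, 1.3)  # upstream turbines 15 and 16 producing
assert coupled > base         # upstream power raises turbine 24's cost
```

The bilinear terms are what make the local cost possibly non-convex in the joint variables, even though the farm-wide sum remains strongly convex for small enough k.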
The parameters of the wind turbine cost functions are as follows: c_{2,i} ∈ [4.0, 6.0], c_{1,i} ∈ [2.0, 3.0], c_{0,i} ∈ [95.0, 110.0], i = 1, 2, …, 60, and the wake-effect coefficient is k_{i,j} = 0.1. In this setting, the total generation cost of the wind farm f(x) = ∑_{i=1}^{60} f_i is a strongly convex function. Additionally, consider the matched disturbance ω_i(t) = 0.5 sin(0.1it) + W_{G,i} for the ith wind turbine, where W_{G,i} denotes zero-mean Gaussian noise with a variance of 0.1. The initial values are taken as x_i(0) ∈ [0.8, 1.8]; the total power demand of the wind farm is D = 78 MW, which is cut to 68 MW at t = 2 s and increased to 83 MW at t = 4 s. Note that the algorithm in this paper does not require ∑_{i=1}^{60} x_i(0) = D. Setting the parameters of the Algorithm (16) to [κ_0, κ_1, κ_2, ξ_1, ξ_2, ζ_1, ζ_2, α, μ, ν] = [10, 10, 10, 10, 10, 10, 100, 100, 1.5, 0.5, 1.5], the fixed convergence time can be calculated as T_3 = 1.3856 s.
The convergence curves of the wind turbines are shown in Figure 2, and the total power variation curve of the wind farm is shown in Figure 3. The global resource constraint ∑_{i=1}^{60} x_i = D is satisfied within T_3 for t < 2 s. Moreover, the proposed algorithm responds quickly to the power demand variations at t = 2 s and t = 4 s. The total power generation cost of the wind farm is shown in Figure 4; under the different global resource demands, it converges to the global optimum as labeled in Figure 4. The convergence trajectories of the sliding manifolds s_i are shown in Figure 5, where all the sliding manifolds are reached in fixed-time and the reaching time is shorter than the estimated one.

5. Conclusions

A novel fixed-time distributed optimization algorithm is proposed to solve an MSCO problem with local inequality constraints, global equality constraints, and unknown disturbances. A penalty function method is utilized to eliminate the local constraints, and a novel three-stage control scheme is proposed to achieve robust fixed-time convergence. The proposed algorithm consists of a fixed-time reaching law to eliminate the unknown disturbances, a suitable interaction strategy to satisfy the global constraints, and a fixed-time gradient optimization algorithm to converge to the minimum of the global objective in fixed-time. The effectiveness of the proposed strategy is demonstrated on the wind farm co-generation problem with 60 wind turbines. Our results show the potential of the proposed algorithm in solving MSCO problems. In the future, the following research topics will be considered.
  • Designing a fixed-time proximal gradient algorithm for handling the local constraints;
  • Designing a fixed-time optimization algorithm for multi-agent systems with unmodeled dynamics.

Author Contributions

Writing—original draft preparation, F.W.; validation, Y.C.; supervision, B.W.; writing—review and editing, B.W., F.W., Y.C. and C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grant No. 51777058 and the “SCBS” plan of Jiangsu Province under Grant No. JSSCBS20210243.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used to support this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, D.; Chen, M.; Wang, W. Distributed extremum seeking for optimal resource allocation and its application to economic dispatch in smart grids. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3161–3171.
  2. Zhang, H.; Liang, S.; Liang, J.; Han, Y. Convergence analysis of a distributed gradient algorithm for economic dispatch in smart grids. Int. J. Electr. Power Energy Syst. 2022, 134, 107373.
  3. Mao, S.; Tang, Y.; Dong, Z.; Meng, K.; Dong, Z.Y.; Qian, F. A privacy preserving distributed optimization algorithm for economic dispatch over time-varying directed networks. IEEE Trans. Ind. Inform. 2020, 17, 1689–1701.
  4. Xu, Y.; Dong, Z.; Li, Z.; Liu, Y.; Ding, Z. Distributed optimization for integrated frequency regulation and economic dispatch in microgrids. IEEE Trans. Smart Grid 2021, 12, 4595–4606.
  5. Liu, Q.; Le, X.; Li, K. A distributed optimization algorithm based on multiagent network for economic dispatch with region partitioning. IEEE Trans. Cybern. 2019, 51, 2466–2475.
  6. Kuthadi, V.M.; Selvaraj, R.; Baskar, S.; Shakeel, P.M.; Ranjan, A. Optimized energy management model on data distributing framework of wireless sensor network in IoT system. Wirel. Pers. Commun. 2021, 127, 1377–1403.
  7. Fu, X.; Pace, P.; Aloi, G.; Yang, L.; Fortino, G. Topology optimization against cascading failures on wireless sensor networks using a memetic algorithm. Comput. Netw. 2020, 177, 107327.
  8. Gong, J.; Chang, T.H.; Shen, C.; Chen, X. Flight time minimization of UAV for data collection over wireless sensor networks. IEEE J. Sel. Areas Commun. 2018, 36, 1942–1954.
  9. Huang, S.; Teo, R.S.H.; Tan, K.K. Collision avoidance of multi unmanned aerial vehicles: A review. Annu. Rev. Control 2019, 48, 147–164.
  10. Zhang, D.; Duan, H. Social-class pigeon-inspired optimization and time stamp segmentation for multi-UAV cooperative path planning. Neurocomputing 2018, 313, 229–246.
  11. Nedic, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61.
  12. Johansson, B.; Keviczky, T.; Johansson, M.; Johansson, K.H. Subgradient methods and consensus algorithms for solving convex optimization problems. In Proceedings of the 2008 47th IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 4185–4190.
  13. Shi, W.; Ling, Q.; Wu, G.; Yin, W. EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 2015, 25, 944–966.
  14. Gharesifard, B.; Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2013, 59, 781–786.
  15. Wang, J.; Elia, N. A control perspective for centralized and distributed convex optimization. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 3800–3805.
  16. Zeng, X.; Yi, P.; Hong, Y.; Xie, L. Distributed continuous-time algorithms for nonsmooth extended monotropic optimization problems. SIAM J. Control Optim. 2018, 56, 3973–3993.
  17. Polyakov, A. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 2011, 57, 2106–2110.
  18. Chen, G.; Guo, Z. Initialization-free distributed fixed-time convergent algorithms for optimal resource allocation. IEEE Trans. Syst. Man Cybern. Syst. 2020, 52, 845–854.
  19. Dai, H.; Jia, J.; Yan, L.; Fang, X.; Chen, W. Distributed fixed-time optimization in economic dispatch over directed networks. IEEE Trans. Ind. Inform. 2020, 17, 3011–3019.
  20. Firouzbahrami, M.; Nobakhti, A. Cooperative fixed-time/finite-time distributed robust optimization of multi-agent systems. Automatica 2022, 142, 110358.
  21. Hu, Z.; Ma, L.; Wang, B.; Zou, L.; Bo, Y. Finite-time consensus control for heterogeneous mixed-order nonlinear stochastic multi-agent systems. Syst. Sci. Control Eng. 2021, 9, 405–416.
  22. Yang, T.; Kang, H.; Ma, H. Adaptive fuzzy fixed-time tracking control for switched high-order multi-agent systems with input delay. IEEE Trans. Netw. Sci. Eng. 2022, 9, 3492–3503.
  23. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952.
  24. Mesbahi, M.; Egerstedt, M. Graph Theoretic Methods in Multiagent Networks; Princeton University Press: Princeton, NJ, USA, 2010.
  25. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  26. Pinar, M.Ç.; Zenios, S.A. On smoothing exact penalty functions for convex constrained optimization. SIAM J. Optim. 1994, 4, 486–511.
  27. Kia, S.S. Distributed optimal resource allocation over networked systems and use of an ε-exact penalty function. IFAC-PapersOnLine 2016, 49, 13–18.
  28. Chen, Y.; Wang, B.; Chen, Y.; Wang, Y. Sliding mode control for a class of nonlinear fractional order systems with a fractional fixed-time reaching law. Fractal Fract. 2022, 6, 678.
Figure 1. Schematic diagram of the wake effect on the location of 60 units at Princess Amalia wind farm and at a certain wind direction.
Figure 2. Evaluation of the power set points for Princess Amalia wind farm.
Figure 3. Variation curve of total power of 60 wind turbines in Princess Amalia wind farm.
Figure 4. Variation curve of total generation cost of 60 wind turbines in Princess Amalia wind farm.
Figure 5. Convergence trajectories of sliding manifolds.

Share and Cite

MDPI and ACS Style

Wang, B.; Wang, F.; Chen, Y.; Peng, C. Fixed-Time Optimization of Perturbed Multi-Agent Systems under the Resource Constraints. Appl. Sci. 2023, 13, 4527. https://doi.org/10.3390/app13074527

