Article

Fixed-Time Distributed Optimization for Multi-Agent Systems with Input Delays and External Disturbances

College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4689; https://doi.org/10.3390/math10244689
Submission received: 13 October 2022 / Revised: 7 December 2022 / Accepted: 8 December 2022 / Published: 10 December 2022
(This article belongs to the Topic Distributed Optimization for Control)

Abstract:
This study concentrates on the fixed-time distributed optimization problem for multi-agent systems (MASs) with input delays and external disturbances. First, by adopting the Artstein model reduction technique, the time-delay system is transformed into a delay-free one, and external disturbances are then effectively eliminated by using an integral sliding mode control strategy. Second, a new centralized optimization mechanism is developed that allows all agents to reach the same state in a fixed time and then converge to the optimal value of the global objective function; the optimization problem is also extended to switching topologies. Moreover, as the gradient information of the global objective function is difficult to obtain in advance, we construct a decentralized optimization protocol that enables all agents to reach the same state in a fixed time while solving the global optimization problem. Finally, two numerical simulations are presented to validate the effectiveness and reliability of the developed control strategy.

1. Introduction

The distributed control of multi-agent systems (MASs) has garnered increasing interest because it can describe many complicated problems in industrial domains, such as sensor networks [1], formation control, machine learning, intelligent transportation systems, and so on [2,3,4,5]. Consensus is a fundamental problem in distributed control and refers to a group of agents reaching agreement on certain quantities of interest under some distributed protocol [6]. However, many practical applications, including resource allocation [7], economic dispatch of power systems [8], and smart grids [9], not only require agents to solve problems cooperatively but also to achieve optimal performance. Therefore, the distributed optimization problem has recently become one of the most active research subjects. The purpose of a distributed optimization problem is to establish viable protocols such that all agents cooperatively minimize the sum of their cost functions [10,11]. In general cooperation and consensus mechanisms for MASs, the goal of minimizing the sum of costs is not always assumed, owing to fairness considerations or the difficulty of summing up the costs of different stakeholders. In this paper, we focus on problems for which it is reasonable to assume the cost functions can be summed. Further discussion of the difficulties of summing up cost functions can be found in [12].
Numerous studies have yielded positive outcomes on distributed optimization problems [13,14,15,16,17,18]. Most existing distributed algorithms fall into one of two groups, depending on whether the system is discrete or continuous: discrete-time distributed algorithms and continuous-time ones. In discrete-time systems, the subgradient method was used to seek the optimal solution of a distributed optimization problem in [19]. To achieve a faster convergence rate, several novel algorithms were proposed, such as the gradient-free distributed algorithm [20] and the Newton–Raphson algorithm [21]. Additionally, given the constraints arising in real-world problems, the primal-dual perturbation approach was put forth to handle constrained optimization problems in [22,23]. It should be mentioned that the above research focuses on discrete-time MASs. However, in some applications the state of the physical system changes continuously. As a result, control algorithms for continuous-time MASs [24,25,26,27,28] were proposed based on Lyapunov stability theory. In [24], the distributed optimization problem was explored using the zero-gradient-sum framework. Furthermore, the distributed optimization problem with communication delays and sampled-data delays was studied by utilizing the linear matrix inequality (LMI) technique in [25,26], respectively. From the perspective of convergence time, the aforementioned works [24,25,26] only guarantee convergence in infinite time. Following that, the finite-time distributed optimization problem was considered in [27]; however, the resulting estimate of the convergence time depends on the initial values of the system. To overcome this disadvantage, a fixed-time continuous optimization protocol was introduced in [28].
In practice, limitations on communication bandwidth and agent speed frequently result in time delays in MASs. Furthermore, in real-world applications of distributed optimization, communication delays have a significant impact on the stability of systems. To conserve communication capacity and energy supply, an event-triggered control method was developed to resolve the optimization problem of MASs with communication delays and sampled-data delays in [29]. On the other hand, ambient noise and measurement inaccuracies affect agent dynamics, which may prevent agents from precisely attaining the optimal value in an optimization problem. Therefore, effective approaches to rejecting external disturbances have been developed in several research works. Convex analysis and the internal model technique, for instance, were used to analyze the distributed optimization problem for a class of nonlinear MASs with external disturbances [30]. Likewise, the disturbance observer and the integral sliding mode control approach were also applied to the distributed optimization problem of MASs in the presence of various external disturbances [31,32]. Notably, the existing works [25,29,30,31,32] addressed either time delays or external disturbances, but not both. The fact that these two influencing factors always coexist in real systems motivates this study.
Inspired by the above discussion, this work focuses on the fixed-time distributed optimization problem for multi-agent systems with both input delays and external disturbances. Prior to designing the protocol, the Artstein model reduction method is introduced to cope with the time delay caused by the control input. Then, utilizing an integral sliding mode term, a centralized fixed-time optimization protocol and a distributed one are devised. These control protocols ensure that all agents reach the sliding surface and attain the same state in a fixed time, and then asymptotically converge to the optimal value of the global objective function. The main contributions of this paper are as follows.
(1)
In previous works [25,29,30,31,32], either input delays or external disturbances were considered, but not both, even though these two factors tend to coexist in practice. Therefore, the distributed optimization problem of MASs with both input time delays and external disturbances is considered in this paper, and both fixed and switching topologies are discussed.
(2)
To handle the effects of time delays and external disturbances on the optimization problem, we combine the Artstein model reduction technique and the integral sliding mode control strategy in the design of the control protocols, a combination that has rarely been used in existing works.
(3)
Although distributed optimization problems were considered in the previous works [15,16,17,18,19,20,29,30,31,32], the optimization algorithms there were only asymptotically or finite-time convergent. In this article, we propose two kinds of fixed-time optimization algorithms, which have a fast convergence rate and whose convergence-time estimate is independent of the initial values of the system.
The structure of the rest of this paper is as follows. In Section 2, the problem statement is provided together with a basic introduction to graph theory. Section 3 provides a distributed optimization algorithm and a centralized optimization control protocol. In Section 4, two simulation examples are used to demonstrate the viability of two different types of algorithms. Section 5 concludes the entire paper.
Notation 1.
Let $\mathbb{R}$ and $\mathbb{R}^+$ denote the sets of real numbers and positive real numbers, respectively. For a matrix $A$, $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ stand for its largest and smallest eigenvalues, respectively, and $I_n$ represents the $n \times n$ identity matrix. $x = [x_1, x_2, \ldots, x_n]^T$ represents an $n$-dimensional column vector, $\mathrm{sign}(x)$ is the sign function, and $\mathrm{diag}(\cdot)$ denotes a diagonal matrix. $\|\cdot\|$ and $\|\cdot\|_1$ denote the Euclidean norm and the 1-norm, respectively. $\nabla f$ represents the gradient of $f$.

2. Preliminaries

2.1. Graph Theory

In this subsection, we introduce some basic concepts in graph theory that will be used throughout this paper. More information is available in reference [33].
Let the graph $G = (V, E, A)$ be the communication topology formed by $N$ agents, where $V = \{1, 2, \ldots, N\}$ denotes the node set and $E \subseteq V \times V$ represents the edge set. An edge of $G$ is a pair of nodes $(j, i) \in E$ such that agent $j$ can receive information from agent $i$. $A = [a_{ij}] \in \mathbb{R}^{N \times N}$ represents the adjacency matrix, with $a_{ij} = 1$ if $(j, i) \in E$ and $a_{ij} = 0$ otherwise. $G$ is connected if there is a path between any pair of distinct nodes. The degree matrix is $D = \mathrm{diag}\{d_1, d_2, \ldots, d_N\}$ with $d_i = \sum_{j=1}^N a_{ij}$. The Laplacian matrix of the graph $G$ is $L = D - A = [l_{ij}] \in \mathbb{R}^{N \times N}$.
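A concrete illustration of these definitions may help: the sketch below builds the adjacency, degree, and Laplacian matrices of a small hypothetical 4-node undirected graph (our own example, not a topology used in this paper) and checks the spectral property used later.

```python
import numpy as np

# Hypothetical 4-node undirected graph (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # symmetric since the graph is undirected
D = np.diag(A.sum(axis=1))                  # degree matrix, d_i = sum_j a_ij
L = D - A                                   # Laplacian L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))    # L is symmetric, so eigenvalues are real
print(eigvals)  # smallest eigenvalue is 0; second smallest positive iff connected
```

Each row of $L$ sums to zero, and for a connected graph the algebraic connectivity $\lambda_2$ is strictly positive, which is exactly the quantity exploited in Lemma 2 below.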

2.2. Useful Lemma

Consider the following system:
$$\dot{Z}(t) = H(Z(t), t), \qquad Z(0) = Z_0, \tag{1}$$
where $Z \in \mathbb{R}$ and $H : \mathbb{R} \times \mathbb{R}^+ \to \mathbb{R}$ is a nonlinear function.
Lemma 1
([34]). For any solution of (1), if there exists a Lyapunov function $W(Z(t))$ such that
$$\dot{W}(Z(t)) \le -\varsigma_1 W^{\mu}(Z(t)) - \varsigma_2 W^{v}(Z(t))$$
for $\varsigma_1, \varsigma_2 > 0$, $\mu \in (0, 1)$, $v \in (1, \infty)$, then the origin of system (1) is fixed-time stable, and the settling time $T$ satisfies
$$T \le T_{\max} = \frac{1}{\varsigma_1(1-\mu)} + \frac{1}{\varsigma_2(v-1)}.$$
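The bound of Lemma 1 can be checked numerically. The sketch below integrates a scalar trajectory on which the Lyapunov inequality holds with equality, using illustrative gains of our own choosing, and confirms that $W$ settles to zero before $T_{\max}$ even from a deliberately large initial value.

```python
# Euler integration of W' = -c1*W^mu - c2*W^v with mu in (0,1) and v > 1
# (illustrative gains, not values from the paper).
c1, c2, mu, v = 1.0, 1.0, 0.5, 2.0
T_max = 1.0 / (c1 * (1.0 - mu)) + 1.0 / (c2 * (v - 1.0))  # fixed-time bound

dt, t, W = 1e-5, 0.0, 100.0        # large W(0): the bound does not depend on it
while W > 1e-12 and t < T_max:
    W = max(W - dt * (c1 * W**mu + c2 * W**v), 0.0)
    t += dt

print(t, T_max)  # settling occurs strictly before T_max
```

The $W^v$ term dominates far from the origin and the $W^{\mu}$ term near it, which is why the settling time admits a bound independent of $W(0)$.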
Lemma 2
([6]). Suppose $G$ is an undirected and connected graph with Laplacian matrix $L$, whose eigenvalues satisfy $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_N$. If $\mathbf{1}^T x = 0$, then $x^T L x \ge \lambda_2 x^T x$.
Lemma 3
([35]). Let $\theta_1, \theta_2, \ldots, \theta_N \ge 0$. Then,
$$N^{\varphi(p)(1-p)}\Big(\sum_{i=1}^N \theta_i\Big)^p \le \sum_{i=1}^N \theta_i^p,$$
where $\varphi(p) = 0$ for $0 < p < 1$ and $\varphi(p) = 1$ for $p > 1$.
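Lemma 3 is easy to spot-check numerically; the sketch below tests the inequality on random nonnegative data of our own choosing, for exponents on both sides of $p = 1$.

```python
import numpy as np

# Check N^{phi(p)(1-p)} * (sum theta_i)^p <= sum theta_i^p for several p.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 10.0, size=50)
N = theta.size

for p in (0.3, 0.7, 1.5, 3.0):
    phi = 0.0 if p < 1.0 else 1.0
    lhs = N ** (phi * (1.0 - p)) * theta.sum() ** p
    rhs = (theta ** p).sum()
    assert lhs <= rhs + 1e-9, (p, lhs, rhs)
print("inequality verified for all tested p")
```

For $0 < p < 1$ this is subadditivity of $x \mapsto x^p$; for $p > 1$ it is the power-mean inequality, with the $N^{1-p}$ factor compensating for the number of terms.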

2.3. Problem Statement

Inspired by reference [36], we consider a MAS with both input delays and external disturbances, in which the dynamics of the agents are described as follows:
$$\dot{x}_i(t) = u_i(t-\tau) + \omega_i(t), \quad i = 1, 2, \ldots, N, \tag{4}$$
where $x_i(t) \in \mathbb{R}$, $u_i(t) \in \mathbb{R}$, and $\omega_i(t) \in \mathbb{R}$ are the position state, the control input, and the unknown disturbance of the $i$th agent, respectively, and $\tau > 0$ represents the input delay. Suppose that each agent $i$ has a local objective function $f_i(\cdot)$; the minimization problem for the global objective function is then given as
$$\min F(x) = \sum_{i=1}^N f_i(x), \tag{5}$$
where $F(\cdot)$ represents the global objective function and $x \in \mathbb{R}$ is the decision variable.
Remark 1.
In the existing studies [20,21], the optimization problem was solved for MASs without input delay. Furthermore, the exponential or asymptotic consensus problems were also solved in the works [3,5,21], which were completed in infinite time. However, the fixed-time optimization problem is investigated for MASs with input delays and external disturbances in this study. Especially, when τ = 0 , the optimization problem is similar to the existing one.
The main goal of this research is to design a suitable controller $u_i(t)$ such that all agents cooperatively solve the optimization problem (5). In light of the fact that the problem can be viewed as an optimization problem for a MAS of $N$ identical agents, problem (5) can be rewritten as
$$\min F(x(t)) = \sum_{i=1}^N f_i(x_i(t)) \quad \text{s.t.} \quad \lim_{t \to T}\big(x_i(t) - x_j(t)\big) = 0, \quad i, j = 1, 2, \ldots, N, \tag{6}$$
where $x(t) = [x_1(t), x_2(t), \ldots, x_N(t)]^T$.
Remark 2.
It is worth noting that the state variables do not need to reach consensus in some general optimization problems, such as distributed resource allocation [7] and economic dispatch in power grids [8]. In this research, however, the optimization problem (5) can be converted into (6), so all agents need to reach the same state value and jointly solve the optimization problem in (6). This can be regarded as a consensus optimization problem, and it theoretically achieves a tight integration of the consensus and optimization problems. This combination can resolve several problems in realistic applications and demonstrates the close connection between consensus and optimization.
Before designing the control protocol, the following general assumptions are given.
Assumption 1.
The communication topological graph G is connected and undirected.
Assumption 2.
The external disturbance $\omega_i(t)$ is bounded; that is, $|\omega_i(t)| \le \rho_i$, where $\rho_i$ is a positive constant.
Assumption 3.
The local objective function $f_i(\cdot)$ is twice continuously differentiable, and the global objective function $F(\cdot)$ is convex.

3. Main Result

3.1. Centralized Optimization Protocol

This section puts forward the centralized fixed-time optimization algorithm. The fixed topology case is considered first, followed by the switching topology case. The specific designs are given below.
A. Fixed Topology
The time-delay system (4) needs to be transformed before an optimization algorithm can be designed. Drawing inspiration from [36], the Artstein model reduction method is introduced as follows:
$$\zeta_i(t) = z_i(t) - \bar{z}(t), \qquad z_i(t) = x_i(t) + \int_{t-\tau}^{t} u_i(v)\,dv, \quad i = 1, 2, \ldots, N, \tag{7}$$
where $\bar{z}(t) = (1/N)\sum_{j=1}^N z_j(t)$. A straightforward computation gives
$$\dot{z}_i(t) = u_i(t) + \omega_i(t). \tag{8}$$
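This reduction can be verified numerically. The sketch below (with arbitrary illustrative signals of our own, not the paper's) simulates the delayed dynamics for a single agent, builds $z(t)$ by the transformation above, and confirms that its numerical derivative matches $u(t) + \omega(t)$.

```python
import numpy as np

# One-agent check of the Artstein reduction (illustrative signals only):
# with z(t) = x(t) + int_{t-tau}^{t} u(v) dv and x' = u(t - tau) + w(t),
# the derivative of z should equal u(t) + w(t).
tau, dt, T = 0.5, 1e-4, 3.0
t = np.arange(0.0, T, dt)
u = np.sin(2.0 * t)                 # arbitrary input history (u = 0 for t < 0)
w = 0.1 * np.cos(3.0 * t)           # arbitrary bounded disturbance
d = int(round(tau / dt))            # delay in samples

u_delayed = np.concatenate([np.zeros(d), u[:-d]])
x = np.cumsum((u_delayed + w) * dt)                # Euler integral of x'

cumu = np.concatenate([[0.0], np.cumsum(u * dt)])  # running integral of u
roll = cumu[1:] - np.concatenate([np.zeros(d), cumu[1:-d]])  # int over [t-tau, t]
z = x + roll

zdot = np.gradient(z, dt)
err = np.max(np.abs(zdot[d:-1] - (u + w)[d:-1]))
print(err)  # only a small discretization error remains
```

The delayed input contributions cancel exactly, which is what makes the transformed system delay-free.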
Remark 3.
It is noted that the fixed-time convergence analysis is challenging for MASs with time delay. The Artstein model reduction approach is employed, which can convert the delay system (4) into a delay-free system (8). Additionally, using the fixed-time convergence analysis method, we can resolve the fixed-time distributed optimization problem for MASs with time delay in this study.
The control protocol is designed as
$$u_i(t) = u_{i1}(t) + u_{i2}(t), \quad i = 1, 2, \ldots, N, \tag{9}$$
$$u_{i1}(t) = \begin{cases} 0, & 0 \le t \le T_1;\\[2pt] -\iota_1\sum_{j=1}^N a_{ij}\Omega_{ij}^{\mu}(t) - \iota_2\sum_{j=1}^N a_{ij}\Omega_{ij}(t) - \iota_3\sum_{j=1}^N a_{ij}\,\mathrm{sign}\big(\Omega_{ij}(t)\big), & T_1 < t \le T_2;\\[2pt] -\delta\sum_{j=1}^N \nabla f_j(z_j(t)), & t > T_2; \end{cases} \tag{10}$$
$$u_{i2}(t) = -\gamma_1\big(s_i(t)\big)^{\mu} - \gamma_2 s_i(t) - \gamma_3\,\mathrm{sign}\big(s_i(t)\big), \tag{11}$$
$$s_i(t) = z_i(t) - z_i(0) - \int_0^t u_{i1}(v)\,dv, \tag{12}$$
where $\iota_1, \iota_2, \iota_3, \gamma_1, \gamma_2$, and $\delta$ are positive constants, and $\gamma_3 > \rho$ with $\rho = \max_{1\le i\le N}\{\rho_i\}$. $\Omega_{ij}(t) = z_i(t) - z_j(t)$, and $\mu > 1$ is a ratio of positive odd numbers. $T_1$ and $T_2$ are positive constants to be determined later.
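For concreteness, a minimal sketch of the two protocol terms is given below; the defaults use the parameter values adopted later in the simulation section, while the helper names (`u_i1`, `u_i2`, `grad_f`) are our own and not from the paper.

```python
import numpy as np

def u_i1(t, i, z, A, grad_f, T1, T2,
         iota1=0.5, iota2=1.5, iota3=2.0, mu=7.0/5.0, delta=0.5):
    """Three-phase consensus/optimization term of agent i."""
    if t <= T1:
        return 0.0                          # phase 1: wait for the sliding surface
    if t <= T2:                             # phase 2: fixed-time consensus
        Omega = z[i] - z                    # Omega_ij = z_i - z_j
        return float(-np.sum(A[i] * (iota1 * np.sign(Omega) * np.abs(Omega) ** mu
                                     + iota2 * Omega
                                     + iota3 * np.sign(Omega))))
    # phase 3: centralized gradient step (needs every local gradient)
    return -delta * sum(g(zj) for g, zj in zip(grad_f, z))

def u_i2(s_i, gamma1=1.0, gamma2=1.0, gamma3=1.0, mu=7.0/5.0):
    """Integral sliding mode term; gamma3 must exceed the disturbance bound rho."""
    return (-gamma1 * np.sign(s_i) * np.abs(s_i) ** mu
            - gamma2 * s_i
            - gamma3 * np.sign(s_i))
```

The odd-ratio power $\Omega^{\mu}$ is implemented sign-preservingly as $\mathrm{sign}(\Omega)|\Omega|^{\mu}$, the standard reading for a ratio of positive odd numbers.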
Theorem 1.
If Assumptions 1–3 hold, then consensus can be achieved in a fixed time $T_2$ for MAS (4), and problem (6) can be resolved under the optimization algorithm (9)–(12). Moreover, the settling time $T_2$ satisfies
$$T_2 \le T_1 + \frac{2}{\iota_3\sqrt{\lambda_2(L_2)}} + \frac{4}{\iota_1 N^{1-\mu}\big(4\lambda_2(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}}(\mu-1)}, \tag{13}$$
where $T_1 = \tau + \frac{\sqrt{2}}{\gamma_3 - \rho} + \frac{2}{2^{\frac{\mu+1}{2}}\gamma_1(\mu-1)}$, and $L_{\frac{2}{\mu+1}}$ is the Laplacian matrix of the graph $G_{\frac{2}{\mu+1}}$ with adjacency matrix $A_{\frac{2}{\mu+1}} = [(a_{ij})^{\frac{2}{\mu+1}}]_{N\times N}$.
Proof. 
The proof consists of three steps. The first step is to verify that all agents' states reach the sliding surface in a fixed time, that is, $\lim_{t\to T_1} s_i(t) = \dot{s}_i(t) = 0$. The second step is to prove that all agents achieve consensus in a fixed time $T_2$, i.e., $\lim_{t\to T_2}(z_i(t) - \bar{z}(t)) = 0$, where $\bar{z}(t) = (1/N)\sum_{j=1}^N z_j(t)$. Finally, we prove that $\bar{z}(t)$ converges to the optimal value $z^*(t)$ of the optimization problem (6).
Step 1. Consider the following Lyapunov function candidate:
$$V_1(t) = \frac{1}{2}\sum_{i=1}^N s_i^2(t).$$
Taking the derivative of $V_1(t)$ along (12), we have
$$\begin{aligned}\dot{V}_1(t) &= \sum_{i=1}^N s_i(t)\big(\dot{z}_i(t) - u_{i1}(t)\big) = \sum_{i=1}^N s_i(t)\big(-\gamma_1(s_i(t))^{\mu} - \gamma_2 s_i(t) - \gamma_3\,\mathrm{sign}(s_i(t)) + \omega_i(t)\big)\\ &\le -\gamma_1\sum_{i=1}^N (s_i(t))^{\mu+1} - \gamma_2\sum_{i=1}^N (s_i(t))^2 - \sum_{i=1}^N(\gamma_3 - \rho_i)|s_i(t)|.\end{aligned}$$
By applying Lemma 3, one has
$$\dot{V}_1(t) \le -\gamma_1\Big(\sum_{i=1}^N s_i^2(t)\Big)^{\frac{\mu+1}{2}} - (\gamma_3 - \rho)\Big(\sum_{i=1}^N s_i^2(t)\Big)^{\frac{1}{2}} = -\gamma_1\big(2V_1(t)\big)^{\frac{\mu+1}{2}} - (\gamma_3 - \rho)\big(2V_1(t)\big)^{\frac{1}{2}},$$
where $\rho = \max\{\rho_1, \rho_2, \ldots, \rho_N\}$. According to Lemma 1, it is easy to obtain that $\lim_{t\to T_0} s_i(t) = \dot{s}_i(t) = 0$, where $T_0$ is estimated as
$$T_0 \le \frac{\sqrt{2}}{\gamma_3 - \rho} + \frac{2}{2^{\frac{\mu+1}{2}}\gamma_1(\mu-1)}.$$
Let $T_1 = T_0 + \tau$. Since the sliding surface $s_i(t) = \dot{s}_i(t) = 0$ is reached for $t \ge T_0$, we have $u_{i2}(t) = 0$ for $T_0 < t \le T_1$. Furthermore, $\int_{T_0}^{T_1} u_i(s)\,ds = 0$, that is, $z_i(t) = x_i(t)$.
Step 2. Prove that $\lim_{t\to T_2}(z_i(t) - \bar{z}(t)) = 0$. For $T_1 \le t \le T_2$, $\dot{s}_i(t) = 0$, so we have
$$\dot{z}_i(t) = -\iota_1\sum_{j=1}^N a_{ij}\Omega_{ij}^{\mu}(t) - \iota_2\sum_{j=1}^N a_{ij}\Omega_{ij}(t) - \iota_3\sum_{j=1}^N a_{ij}\,\mathrm{sign}\big(\Omega_{ij}(t)\big).$$
Choose the following Lyapunov function candidate:
$$V_2(t) = \frac{1}{2}\sum_{i=1}^N \zeta_i^2(t).$$
Similarly, taking the derivative of $V_2(t)$ along (8) and (9)–(12), one has
$$\dot{V}_2(t) = \sum_{i=1}^N \zeta_i(t)\dot{\zeta}_i(t) = \sum_{i=1}^N \zeta_i(t)\big(\dot{z}_i(t) - \dot{\bar{z}}(t)\big) = \sum_{i=1}^N \zeta_i(t)\Big(-\iota_1\sum_{j=1}^N a_{ij}\Omega_{ij}^{\mu}(t) - \iota_2\sum_{j=1}^N a_{ij}\Omega_{ij}(t) - \iota_3\sum_{j=1}^N a_{ij}\,\mathrm{sign}\big(\Omega_{ij}(t)\big)\Big) - \sum_{i=1}^N \zeta_i(t)\dot{\bar{z}}(t).$$
According to the definition of $\zeta_i$ in (7), we have $\sum_{i=1}^N \zeta_i(t) = 0$. Then, one has
$$\dot{V}_2(t) = \sum_{i=1}^N \zeta_i(t)\Big(-\iota_1\sum_{j=1}^N a_{ij}\Omega_{ij}^{\mu}(t) - \iota_2\sum_{j=1}^N a_{ij}\Omega_{ij}(t) - \iota_3\sum_{j=1}^N a_{ij}\,\mathrm{sign}\big(\Omega_{ij}(t)\big)\Big).$$
In light of the symmetry $a_{ij} = a_{ji}$ of the adjacency matrix of an undirected graph, we have $\sum_{i=1}^N\sum_{j=1}^N a_{ij}\zeta_i(\zeta_i - \zeta_j) = \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N a_{ij}(\zeta_i - \zeta_j)^2$. Therefore, the derivative of $V_2(t)$ can be rewritten as
$$\begin{aligned}\dot{V}_2(t) &= -\frac{1}{2}\iota_1\sum_{i=1}^N\sum_{j=1}^N a_{ij}\Phi_{ij}^{\mu+1}(t) - \frac{1}{2}\iota_2\sum_{i=1}^N\sum_{j=1}^N a_{ij}\Phi_{ij}^2(t) - \frac{1}{2}\iota_3\sum_{i=1}^N\sum_{j=1}^N a_{ij}\Phi_{ij}(t)\,\mathrm{sign}\big(\Phi_{ij}(t)\big)\\ &\le -\frac{1}{2}\iota_1 N^{1-\mu}\Big(\sum_{i=1}^N\sum_{j=1}^N a_{ij}^{\frac{2}{\mu+1}}|\Phi_{ij}(t)|^2\Big)^{\frac{\mu+1}{2}} - \frac{1}{2}\iota_3\Big(\sum_{i=1}^N\sum_{j=1}^N a_{ij}^2|\Phi_{ij}(t)|^2\Big)^{\frac{1}{2}}, \tag{21}\end{aligned}$$
where $\Phi_{ij}(t) = \zeta_i(t) - \zeta_j(t)$. On the other hand, note that
$$\sum_{i=1}^N\sum_{j=1}^N a_{ij}\Phi_{ij}^2(t) = 2\zeta^T(t) L \zeta(t),$$
where $\zeta(t) = [\zeta_1(t), \zeta_2(t), \ldots, \zeta_N(t)]^T$. Let $G_2$ and $G_{\frac{2}{\mu+1}}$ represent two new network topologies whose adjacency matrices are $A_2 = [a_{ij}^2]_{N\times N}$ and $A_{\frac{2}{\mu+1}} = [a_{ij}^{\frac{2}{\mu+1}}]_{N\times N}$, with corresponding Laplacian matrices $L_2$ and $L_{\frac{2}{\mu+1}}$. By Lemma 2, it yields
$$\sum_{i=1}^N\sum_{j=1}^N a_{ij}^2|\Phi_{ij}(t)|^2 \ge 4\lambda_2(L_2) V_2(t), \tag{22}$$
$$\sum_{i=1}^N\sum_{j=1}^N a_{ij}^{\frac{2}{\mu+1}}|\Phi_{ij}(t)|^2 \ge 4\lambda_2(L_{\frac{2}{\mu+1}}) V_2(t). \tag{23}$$
Combining (22) and (23), inequality (21) can be rewritten as
$$\dot{V}_2(t) \le -\frac{1}{2}\iota_1 N^{1-\mu}\big(4\lambda_2(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}} V_2^{\frac{\mu+1}{2}}(t) - \iota_3\sqrt{\lambda_2(L_2)}\, V_2^{\frac{1}{2}}(t) = -\varsigma_1 V_2^{\frac{1}{2}}(t) - \varsigma_2 V_2^{\frac{\mu+1}{2}}(t),$$
where $\varsigma_1 = \iota_3\sqrt{\lambda_2(L_2)}$ and $\varsigma_2 = \frac{1}{2}\iota_1 N^{1-\mu}\big(4\lambda_2(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}}$. It follows from Lemma 1 that $\lim_{t\to \bar{T}_2} V_2(t) = 0$, and the settling time $\bar{T}_2$ is estimated as $\bar{T}_2 \le T_1 + \frac{2}{\varsigma_1} + \frac{2}{\varsigma_2(\mu-1)}$. Setting $T_2 = \bar{T}_2 + \tau$: since the agents' states satisfy $z_i(t) = z_j(t)$ for $t \ge \bar{T}_2$, we have $u_{i1}(t) = 0$ for $\bar{T}_2 < t \le T_2$. Furthermore, $\int_{\bar{T}_2}^{T_2} u_i(s)\,ds = 0$, which means that all agents achieve consensus for $t \ge T_2$.
Step 3. For $t > T_2$, according to the above analysis, one has
$$\frac{d}{dt}\sum_{i=1}^N f_i(\bar{x}_i(t)) = \sum_{i=1}^N \nabla f_i(\bar{x}_i(t))\frac{d\bar{x}_i(t)}{dt} = \sum_{i=1}^N \nabla f_i(z_i(t))\frac{dz_i(t)}{dt} = \sum_{i=1}^N \nabla f_i(z_i(t))\, u_{i1}(t) = \sum_{i=1}^N \nabla f_i(z(t))\Big(-\delta\sum_{j=1}^N \nabla f_j(z_j(t))\Big). \tag{25}$$
From Equation (25), we can get
$$\frac{d}{dt}\sum_{i=1}^N f_i(\bar{x}(t)) = -\delta\Big\|\sum_{i=1}^N \nabla f_i(z(t))\Big\|^2 \le 0.$$
Because $\sum_{i=1}^N f_i(\bar{x}(t))$ is bounded below, one can obtain that $\lim_{t\to\infty}\sum_{i=1}^N \nabla f_i(\bar{x}(t)) = 0$. Then the optimization problem is solved. The proof is completed. □
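The mechanism of Step 3 can be illustrated in miniature: once consensus holds, the common state follows the centralized gradient flow $\dot{z} = -\delta\sum_j \nabla f_j(z)$, along which the summed cost is nonincreasing. The sketch below uses simple quadratic local costs of our own choosing.

```python
# Gradient flow z' = -delta * sum_j f_j'(z) with illustrative local gradients:
# f1' = z - 1, f2' = 2(z + 2), f3' = z  =>  sum = 4z + 3, minimizer z* = -0.75.
grads = [lambda z: z - 1.0, lambda z: 2.0 * (z + 2.0), lambda z: z]
delta, dt = 0.5, 1e-3

z = 5.0
for _ in range(20000):                       # forward-Euler integration
    z -= dt * delta * sum(g(z) for g in grads)

print(z)  # converges to the minimizer -0.75 of the summed cost
```

The gradient sum vanishes exactly at the minimizer of the global cost, which is the fixed point of the flow.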
Remark 4.
To solve the optimization problem (6) for MASs with input delays and external disturbances, the optimization algorithm is proposed by using the integral sliding mode scheme in (11) and (12). The protocol (9)–(12) is inspired by [32,37]. In [32], the external disturbance is effectively eliminated in finite time by a super-twisting-based integral sliding mode controller; in this paper, we propose a centralized optimization protocol for MASs with time delays, which ensures that all agents' states converge to the same value in a fixed time. In [37], the authors address the average consensus problem of MASs subject to input delays and external disturbances; in our paper, by contrast, the protocols contain both a consensus term and an optimization term so as to solve the optimization problem (6).
B. Switching Topologies
In practical systems, some agents may have to form new network topologies due to the instability of network connections. We therefore also take switching communication topologies into account.
Denote by $\sigma(t) : [0, +\infty) \to \Gamma$ a switching signal, where $\Gamma = \{1, 2, \ldots, M\}$ is a finite set and $M$ is a positive integer, and let $t_0, t_1, \ldots, t_k, \ldots$ be the switching time sequence. Let $G_\Gamma = \{(V, E, A_{\sigma(t)})\}$ be a set of undirected graphs. If the communication topology is fixed during the time interval $[t_j, t_{j+1})$, i.e., $\sigma(t) \in \Gamma$ is constant for $t \in [t_j, t_{j+1})$ and switches at the time $t_j$, then $G_{\sigma(t)} \in G_\Gamma$ is the corresponding graph at time $t$. The Laplacian matrix of the switching topology $G_{\sigma(t)}$ is denoted by $L_{\sigma(t)}$. In the sequel, the fixed-time distributed optimization problem is resolved under switching topologies.
Theorem 2.
If Assumptions 2 and 3 hold and the switching topologies $G_{\sigma(t)}$ are connected, then fixed-time consensus can be achieved for MAS (4) with switching topologies $G_{\sigma(t)}$, and the optimization problem (6) is also resolved under the algorithm (9)–(12). Moreover, the settling time $T_2$ satisfies
$$T_2 \le T_1 + \frac{2}{\iota_3\sqrt{\lambda_2^{\min}(L_2)}} + \frac{4}{\iota_1 N^{1-\mu}\big(4\lambda_2^{\min}(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}}(\mu-1)}, \tag{27}$$
where $T_1 = \frac{\sqrt{2}}{\gamma_3 - \rho} + \frac{2}{2^{\frac{\mu+1}{2}}\gamma_1(\mu-1)} + \tau$, $\lambda_2^{\min}(L_2) = \min\{\lambda_2(L_{2,\sigma(t_0)}), \lambda_2(L_{2,\sigma(t_1)}), \ldots\}$, and $\lambda_2^{\min}(L_{\frac{2}{\mu+1}}) = \min\{\lambda_2(L_{\frac{2}{\mu+1},\sigma(t_0)}), \lambda_2(L_{\frac{2}{\mu+1},\sigma(t_1)}), \ldots\}$. Here $L_{2,\sigma(t)}$ and $L_{\frac{2}{\mu+1},\sigma(t)}$ are the Laplacian matrices at time $t$ of the graphs $G_2$ and $G_{\frac{2}{\mu+1}}$, respectively.
Proof. 
The proof is also divided into three steps: first, prove that $\lim_{t\to T_1} s_i(t) = \dot{s}_i(t) = 0$; second, prove that all agents achieve consensus in a fixed time $T_2$, i.e., $\lim_{t\to T_2}(z_i(t) - \bar{z}(t)) = 0$ with $\bar{z}(t) = (1/N)\sum_{j=1}^N z_j(t)$; finally, verify that the consensus value $\bar{z}(t)$ converges to the optimal value $z^*(t)$ of the optimization problem (6).
Step 1 is the same as in Theorem 1 and is therefore omitted.
Step 2. With the same $V_2(t)$ as in Theorem 1, we obtain
$$\begin{aligned}\dot{V}_2(t) &\le -\iota_3\sqrt{\lambda_2(L_2)}\, V_2^{\frac{1}{2}}(t) - \frac{1}{2}\iota_1 N^{1-\mu}\big(4\lambda_2(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}} V_2^{\frac{\mu+1}{2}}(t)\\ &\le -\iota_3\sqrt{\lambda_2^{\min}(L_2)}\, V_2^{\frac{1}{2}}(t) - \frac{1}{2}\iota_1 N^{1-\mu}\big(4\lambda_2^{\min}(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}} V_2^{\frac{\mu+1}{2}}(t)\\ &= -\bar{\varsigma}_1 V_2^{\frac{1}{2}}(t) - \bar{\varsigma}_2 V_2^{\frac{\mu+1}{2}}(t), \tag{28}\end{aligned}$$
where $\bar{\varsigma}_1 = \iota_3\sqrt{\lambda_2^{\min}(L_2)}$ and $\bar{\varsigma}_2 = \frac{1}{2}\iota_1 N^{1-\mu}\big(4\lambda_2^{\min}(L_{\frac{2}{\mu+1}})\big)^{\frac{\mu+1}{2}}$.
Inequality (28) holds for any $\sigma(t) \in \Gamma$. Based on Lemma 1, it follows that $V_2(t)$ converges to zero in a fixed time. Letting $T_2 = T_1 + \frac{2}{\bar{\varsigma}_1} + \frac{2}{\bar{\varsigma}_2(\mu-1)} + \tau$, we obtain $\lim_{t\to T_2} V_2(t) = 0$.
Step 3. For $t > T_2$, the proof that $\lim_{t\to+\infty} \bar{z}(t) = z^*(t)$ is the same as in Theorem 1. According to the above analysis, the optimization problem (6) can also be resolved under switching topologies. □
Remark 5.
It is worth noting that the gradient method used in this paper cannot be applied directly to general non-holonomic systems [38], so we only consider the distributed optimization problem for first-order integrator systems. However, applications of non-holonomic systems [39] are more in line with actual needs, and we will concentrate on the optimization problem for non-holonomic systems in future work. In addition, $u_i(t)$ is a centralized protocol because it uses the global gradient information $\sum_{j=1}^N \nabla f_j(x_j(t))$, which is usually difficult to obtain in advance. To make up for this drawback, we need to develop distributed optimization algorithms.

3.2. Distributed Optimization Protocol

In this section, to propose the distributed optimization protocol, we devise a distributed estimator for each agent to obtain the gradient sum $\sum_{j=1}^N \nabla f_j(x_j(t))$ of all agents' local cost functions. Combining the developed distributed estimators with the previous design strategy, we design a distributed optimization algorithm to solve the optimization problem (6).
Suppose that each agent is equipped with an estimator that can obtain the gradient of its own objective function and needs to estimate the information of the other agents. Let $y_{ri}(t)$ represent the estimate by the $i$th agent of the gradient $\nabla f_r(x_r(t))$ of the $r$th agent. We use $b_{ri}$ to denote the information transmission strength between the $r$th agent and the $i$th estimator, where $b_{rr} > 0$ and $b_{ri} = 0$ for $i \ne r$, $i, r = 1, 2, \ldots, N$. Denote $L_r = L + B_r$, where $B_r = \mathrm{diag}\{b_{r1}, b_{r2}, \ldots, b_{rN}\}$; then the matrix $L_r$ is positive definite. The distributed estimator is designed as follows:
$$\dot{y}_{ri}(t) = -d_r(t)\,\mathrm{sign}\Big(\sum_{j\in N_i} a_{ij}\Psi_{rij}(t) + b_{ri}\big(y_{ri}(t) - \nabla f_r(z_r(t))\big)\Big) - c_r\Big(\sum_{j\in N_i} a_{ij}\Psi_{rij}(t) + b_{ri}\big(y_{ri}(t) - \nabla f_r(z_r(t))\big)\Big)^{\mu}, \tag{29}$$
where $\mu > 1$, $\Psi_{rij}(t) = y_{ri}(t) - y_{rj}(t)$, $c_r$ is a positive gain, and $d_r(t)$ is a time-varying gain to be determined later.
Theorem 3.
If Assumption 1 holds, then under the designed distributed estimator (29), $y_{ri}(t) \to \nabla f_r(z_r(t))$ in a fixed time for all $i, r = 1, 2, \ldots, N$.
Proof. 
Let the estimator error be $\hat{y}_{ri}(t) = y_{ri}(t) - \nabla f_r(z_r(t))$, and define $\hat{y}_r(t) = [\hat{y}_{r1}(t), \hat{y}_{r2}(t), \ldots, \hat{y}_{rN}(t)]^T$, $\bar{y}_r(t) = L_r\hat{y}_r(t)$, and $\varrho_r(t) = \mathbf{1}_N\nabla^2 f_r(z_r(t))\dot{z}_r(t)$. Choose the following Lyapunov function:
$$V_{3r}(t) = \frac{1}{2}\hat{y}_r^T(t) L_r \hat{y}_r(t).$$
Because $L_r$ is a positive definite matrix, $V_{3r}(t)$ is a positive definite function, and one has
$$\frac{1}{2}\lambda_{\min}(L_r)\|\hat{y}_r(t)\|^2 \le V_{3r}(t) \le \frac{1}{2}\lambda_{\max}(L_r)\|\hat{y}_r(t)\|^2.$$
The derivative of $V_{3r}(t)$ is
$$\dot{V}_{3r}(t) = \bar{y}_r^T(t)\big(-d_r(t)\,\mathrm{sign}(\bar{y}_r(t)) - c_r(\bar{y}_r(t))^{\mu} - \varrho_r(t)\big) \le -c_r N^{\frac{1-\mu}{2}}\|\bar{y}_r(t)\|^{1+\mu} - \big(d_r(t) - \|\varrho_r(t)\|_{\infty}\big)\|\bar{y}_r(t)\|_1.$$
Because $\|\bar{y}_r(t)\|^2 = \hat{y}_r^T(t) L_r^2 \hat{y}_r(t) \ge \lambda_{\min}(L_r)\,\hat{y}_r^T(t) L_r \hat{y}_r(t) = 2\lambda_{\min}(L_r) V_{3r}(t)$ and $\|\bar{y}_r(t)\| \le \|\bar{y}_r(t)\|_1 \le \sqrt{N}\|\bar{y}_r(t)\|$, there is $\|\bar{y}_r(t)\|_1 \ge \sqrt{2\lambda_{\min}(L_r)}\,(V_{3r}(t))^{\frac{1}{2}}$. Choosing $d_r(t) \ge \|\varrho_r(t)\|_{\infty} + c_r'$, where $c_r'$ is a positive constant, we hence have
$$\dot{V}_{3r}(t) \le -c_r N^{\frac{1-\mu}{2}}\big(2\lambda_{\min}(L_r)\big)^{\frac{1+\mu}{2}}\big(V_{3r}(t)\big)^{\frac{1+\mu}{2}} - c_r'\sqrt{2\lambda_{\min}(L_r)}\,\big(V_{3r}(t)\big)^{\frac{1}{2}} = -\alpha_1\big(V_{3r}(t)\big)^{\frac{1+\mu}{2}} - \alpha_2\big(V_{3r}(t)\big)^{\frac{1}{2}}, \tag{33}$$
where $\alpha_1 = c_r N^{\frac{1-\mu}{2}}\big(2\lambda_{\min}(L_r)\big)^{\frac{1+\mu}{2}}$ and $\alpha_2 = c_r'\sqrt{2\lambda_{\min}(L_r)}$. It can be deduced from (33) that $\lim_{t\to T_r}\hat{y}_r(t) = 0$, and the convergence time $T_r$ satisfies $T_r \le \frac{2}{\alpha_1(\mu-1)} + \frac{2}{\alpha_2}$. Let $T_{\max} = \max\{T_r \mid r = 1, 2, \ldots, N\}$; then $y_{ri}(t) = \nabla f_r(z_r(t))$ for $t \ge T_{\max}$. □
By using $y_{ri}(t)$ to estimate the gradient $\nabla f_r(z_r(t))$ of agent $r$ for agent $i$, the term $\sum_{j=1}^N \nabla f_j(z_j(t))$ in the centralized optimization protocol (10) can be replaced by the estimates $y_{ri}(t)$. Therefore, the following distributed optimization protocol is proposed:
$$u_i(t) = u_{i1}(t) + u_{i2}(t), \quad i = 1, 2, \ldots, N, \tag{34}$$
$$u_{i1}(t) = \begin{cases} 0, & 0 \le t \le T_1;\\[2pt] -\iota_1\sum_{j=1}^N a_{ij}\Omega_{ij}^{\mu}(t) - \iota_2\sum_{j=1}^N a_{ij}\Omega_{ij}(t) - \iota_3\sum_{j=1}^N a_{ij}\,\mathrm{sign}\big(\Omega_{ij}(t)\big), & T_1 < t \le \hat{T}_2;\\[2pt] -\delta\sum_{r=1}^N y_{ri}(t), & t > \hat{T}_2; \end{cases} \tag{35}$$
$$\dot{y}_{ri}(t) = \begin{cases} 0, & 0 \le t \le T_1;\\[2pt] -d_r(t)\,\mathrm{sign}\Big(\sum_{j\in N_i} a_{ij}\Psi_{rij}(t) + b_{ri}\big(y_{ri}(t) - \nabla f_r(z_r(t))\big)\Big) - c_r\Big(\sum_{j\in N_i} a_{ij}\Psi_{rij}(t) + b_{ri}\big(y_{ri}(t) - \nabla f_r(z_r(t))\big)\Big)^{\mu}, & t > T_1; \end{cases} \tag{36}$$
$$u_{i2}(t) = -\gamma_1\big(s_i(t)\big)^{\mu} - \gamma_2 s_i(t) - \gamma_3\,\mathrm{sign}\big(s_i(t)\big), \tag{37}$$
$$s_i(t) = z_i(t) - z_i(0) - \int_0^t u_{i1}(v)\,dv, \tag{38}$$
where $\hat{T}_2 = \max\{T_2, T_1 + T_{\max}\}$.
Theorem 4.
If Assumptions 1 and 2 hold, then the proposed distributed protocol (34)–(38) enables all agents of system (4) to reach consensus in a fixed time and to resolve problem (6) asymptotically.
Proof. 
In a way similar to Step 1 of Theorem 1, it is simple to obtain $\lim_{t\to T_1} s_i(t) = \dot{s}_i(t) = 0$. When $t > T_1$, the gradient information of the other agents is estimated efficiently by agent $i$ via the distributed estimator (36). In addition, all estimator values $y_{ri}(t)$ converge to $\nabla f_r(z_r(t))$ by the fixed time $T_{\max} + T_1$ according to Theorem 3. Therefore, $y_{ri}(t) = \nabla f_r(z_r(t))$ for $t \ge \max\{T_2, T_{\max} + T_1\}$, $i, r = 1, 2, \ldots, N$. Based on Theorem 1, all agents' states $x_i(t)$ reach consensus in a fixed time and converge to the optimal solution of the optimization problem (6). □
Remark 6.
There are few works on fixed-time distributed optimization problems for MASs with external disturbances. In this research, by combining the Artstein model reduction technique and the integral sliding mode control strategy, we solve the fixed-time distributed optimization problem for time-delay systems with external disturbances, and the fixed-time convergence of the protocols is rigorously proved. As an improvement over finite-time convergence, the settling-time estimate is unaffected by the initial conditions.
Remark 7.
In Reference [40], finite-time convergence for bilateral teleoperation systems with disturbances and time-varying delays was studied, where the settling time depends on the initial values of the system. In this research, we consider the fixed-time distributed optimization problem for multi-agent systems with disturbances and constant input delays, whose convergence time is independent of the initial values of the system. Inspired by Reference [40], we will consider the fixed-time distributed optimization problem with time-varying delays in future work.

4. Numerical Example

In this section, an economic dispatch example is used to verify the performance of the proposed algorithms (9–12).
Consider a MAS with six agents whose dynamics are described by (4), in which $x_i(t) \in \mathbb{R}$, $\omega_i(t) = 0.3\sin(i\pi t)$, and $\tau = 0.5$ for $i = 1, 2, \ldots, 6$. The optimization problem is described as
$$\min F(x(t)) = \sum_{i=1}^6 f_i(x_i(t)),$$
where the cost functions are $f_1(x_1(t)) = 0.5x_1^2(t) + 5$, $f_2(x_2(t)) = (x_2(t) + 6)^2 + 2$, $f_3(x_3(t)) = 0.5(x_3(t) - 1)^2 + 2$, $f_4(x_4(t)) = 0.5(x_4(t) + 3)^2 - 10$, $f_5(x_5(t)) = x_5^2(t) + \cos(x_5(t)) + 1$, and $f_6(x_6(t)) = x_6^2(t) - \sin(x_6(t)) + 3$. Obviously, Assumptions 2 and 3 hold, with $\rho_i = 0.3$ for $i = 1, 2, \ldots, 6$. Figure 1 displays the local and global objective functions. Additionally, it is simple to verify that the fixed topology graph with 0–1 weights in Figure 2a is connected.
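As a sanity check on this example, the minimizer of the global cost can be located directly from the first-order condition $\sum_{i=1}^6 \nabla f_i(x) = 0$ (reading $f_4$ as a function of $x_4$), which for these costs reduces to $9x + 14 - \sin(x) - \cos(x) = 0$; a simple bisection recovers the root.

```python
import numpy as np

def grad_F(x):
    """Sum of the six local gradients of the example's cost functions."""
    return (x                        # f1' = x
            + 2.0 * (x + 6.0)        # f2' = 2(x + 6)
            + (x - 1.0)              # f3' = x - 1
            + (x + 3.0)              # f4' = x + 3
            + 2.0 * x - np.sin(x)    # f5' = 2x - sin(x)
            + 2.0 * x - np.cos(x))   # f6' = 2x - cos(x)

lo, hi = -2.0, -1.0                  # grad_F(-2) < 0 < grad_F(-1)
for _ in range(60):                  # bisection on the increasing function grad_F
    mid = 0.5 * (lo + hi)
    if grad_F(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)
print(x_star)  # about -1.68: the common value the agents should converge to
```

Since the sum of the quadratic terms dominates, $\mathrm{grad\_F}$ is strictly increasing and the root is unique, consistent with the convexity of $F$ assumed in Assumption 3.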
Case 1. Fixed-time centralized optimization protocol. Under protocol (9), we select the initial states as $x(0) = [5, 2, 1, 2, 3, 4]^T$ and the control parameters $\iota_1 = 0.5$, $\iota_2 = 1.5$, $\iota_3 = 2$, $\mu = 7/5$, $\delta = 0.5$, and $\gamma_1 = \gamma_2 = \gamma_3 = 1$. A simple calculation gives $\lambda_2(L) = \lambda_2(L_2) = \lambda_2(L_{\frac{2}{\mu+1}}) = 1$, and it then follows that $T_1 = 4.7$ and $T_2 = 13.46$. The simulation results are given in Figure 3, Figure 4, Figure 5 and Figure 6. From Figure 3, the proposed algorithm enables the agents' states to converge to the same value $x^* = -1.67$ within the fixed time $T_2 = 13.46$. The evolutions of the control input and of the functions $f_i(x_i(t))$ are shown in Figure 4 and Figure 5, respectively. Although the function $F(z(t))$ reaches a minimum when $t \in [4.7, 13.46)$, it is not the minimum of the optimization problem (5). Figure 6 shows that the optimal value of the cost function is $F(x^*(t)) = 43.91$.
Figure 2 shows the switching topologies: starting from topology (1), the system switches to topology (2) at t = 5 , to topology (3) at t = 5.1 , and finally to topology (4) at t = 5.2 . Notably, all four topology graphs are connected, so λ 2 min ( L ) = λ 2 min ( L 2 ) = λ 2 min ( L 2 μ + 1 ) = 0.76 . Under the same parameters as in the fixed-topology case, we have T 1 = 4.7 and T 2 = 18.12 . Figure 7 shows the state evolution, from which it can be seen that agreement is reached. Figure 8 depicts the evolution of the control input.
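The algebraic connectivity λ 2 ( L ) used in the settling-time bounds is the second-smallest eigenvalue of the graph Laplacian L = D − A . A short Python sketch of this computation follows; the 0–1 adjacency matrix below is a hypothetical connected ring over six agents chosen for illustration, not necessarily the topology of Figure 2:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the Laplacian L = D - A of an undirected graph."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)   # ascending order for symmetric matrices
    return eigvals[1]

# Hypothetical 0-1 adjacency: a ring over six agents (connected).
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0

lam2 = algebraic_connectivity(A)
print(f"lambda_2(L) = {lam2:.4f}")    # ring of 6 nodes: 2 - 2*cos(2*pi/6) = 1
```

For the switching case, the same routine applied to each of the four Laplacians gives the minimum value λ 2 min used in the bound.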
Comparing Figure 3 with Figure 7, one can observe that consensus is still achieved even when the communication topology is switching. Compared with Figure 4, the control input u i ( t ) changes slightly at the switching instants t = 5 , t = 5.1 , and t = 5.2 , as shown in the subgraph of Figure 8.
Case 2. Fixed-time distributed optimization protocol.
Under protocols (34–38), we choose the same control parameters as in Case 1. Furthermore, suppose B r = diag { b r 1 , b r 2 , … , b r 6 } , where b r r = 3 and b r i = 0 for i ≠ r , c r = 2 , and d r ( t ) = f r ( z r ( t ) ) z ˙ r ( t ) + 2 for r = 1 , 2 , … , 6 . By calculation, we obtain T 1 = 4.7 , T 2 = 13.46 , T max = 31.25 , and T ^ = 35.95 . The simulation results are shown in Figure 9, Figure 10, Figure 11 and Figure 12. Figure 9 shows that all agents also reach agreement and converge to the optimal solution of the global optimization problem (6). For t ≥ 4.7 , the settling time for achieving consensus under protocols (34–38) is larger than that under (9–12), because protocol (11) is used to estimate the gradients of the other agents’ objective functions, which enlarges the settling time of algorithms (34–38).
Remark 8.
Although the algorithms proposed in this paper are well verified by the above numerical examples, some shortcomings remain. The estimate of the settling time is rather conservative due to the use of Lemma 1. For example, Figure 3 shows that convergence is achieved at t = 5.42 , whereas the estimated settling time is T 2 = 13.46 . In practice, such a conservative estimate may provide little useful information about the system. To address this issue, more accurate settling-time estimation methods should be further investigated.

5. Conclusions

In this paper, a centralized optimization algorithm was proposed to handle the optimization problem of MASs with both input delays and external disturbances. By using the Artstein model reduction technique, the time-delay system was transformed into a delay-free one, and the external disturbances were effectively eliminated by an integral sliding mode control strategy. By constructing certain distributed estimators, the centralized algorithm was extended to a distributed one. Fixed-time consensus was proved under the proposed algorithms, and the global optimal value is achieved asymptotically. However, the Artstein model reduction technique has certain limitations: it applies only to time-invariant delays, and in this research the control input must be zero after consensus is reached; otherwise, the time-delay system cannot be converted into a delay-free one. In future work, we will investigate the analysis methods of time-delay systems in more detail and seek new techniques to deal with this issue.

Author Contributions

Conceptualization, X.X. and Z.Y.; methodology, X.X. and Z.Y.; software, Z.Y.; validation, X.X., Z.Y. and H.J.; formal analysis, X.X.; investigation, X.X.; resources, Z.Y.; data curation, X.X.; writing—original draft preparation, X.X.; writing—review and editing, Z.Y.; visualization, Z.Y.; supervision, H.J.; project administration, Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62003289, 62163035), in part by the China Postdoctoral Science Foundation (Grant No. 2021M690400), in part by the Special Project for Local Science and Technology Development Guided by the Central Government (Grant No. ZYYD2022A05), and in part by Xinjiang Key Laboratory of Applied Mathematics (Grant No. XJDX1401).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MASs  Multi-agent systems

References

  1. Weiss, G. Multiagent Systems, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2013.
  2. Chen, L.; Sun, Z. Gradient-based bearing-only formation control: An elevation angle approach. Automatica 2022, 141, 1–9.
  3. Deng, H.; Liang, S.; Hong, Y. Distributed continuous-time algorithms for resource allocation problems over weight-balanced digraphs. IEEE Trans. Cybern. 2018, 48, 3116–3125.
  4. Wang, X.; Han, M. Online sequential extreme learning machine with kernels for nonstationary time series prediction. Neurocomputing 2014, 145, 90–97.
  5. Alam, M.; Schiller, E.; Shu, L. Dependable and real-time vehicular communication for intelligent transportation systems. Mob. Netw. Appl. 2018, 23, 1129–1131.
  6. Olfati-Saber, R.; Murray, R. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533.
  7. Zhu, Y.; Ren, W.; Yu, W.; Wen, G. Distributed resource allocation over directed graphs via continuous-time algorithms. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1097–1106.
  8. Yan, Y.; Chen, Z.; Varadharajan, V. Distributed consensus-based economic dispatch in power grids using the Paillier cryptosystem. IEEE Trans. Smart Grid 2021, 12, 3493–3502.
  9. Meng, W.; Wang, X.; Liu, S. Distributed load sharing of an inverter-based microgrid with reduced communication. IEEE Trans. Smart Grid 2018, 9, 1354–1364.
  10. Nedić, A.; Ozdaglar, A.; Parrilo, P. Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 2010, 55, 922–938.
  11. Chen, J.; Sayed, A. Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans. Signal Process. 2012, 60, 4289–4305.
  12. Shoham, Y.; Leyton-Brown, K. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations; Cambridge University Press: Cambridge, UK, 2008.
  13. Wang, Q.; Duan, Z.; Wang, J.; Chen, G. LQ synchronization of discrete-time multiagent systems: A distributed optimization approach. IEEE Trans. Autom. Control 2019, 64, 5183–5190.
  14. Liu, H.; Yu, W. Discrete-time algorithm for distributed unconstrained optimization problem with finite-time computations. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 351–355.
  15. Yu, W.; Liu, H.; Zheng, W.; Zhu, Y. Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs. J. Frankl. Inst. 2021, 134, 1–15.
  16. Zhang, L.; Liu, S. Projected subgradient based distributed convex optimization with transmission noises. Appl. Math. Comput. 2021, 418, 1–12.
  17. Li, Y.; Zhang, H.; Huang, B.; Han, J. A distributed Newton–Raphson-based coordination algorithm for multi-agent optimization with discrete-time communication. Neural Comput. Appl. 2020, 32, 4649–4663.
  18. Wang, C.; Xu, S.; Yuan, D.; Zhang, B.; Zhang, Z. Distributed online convex optimization with a bandit primal-dual mirror descent push-sum algorithm. Neurocomputing 2022, 497, 204–215.
  19. Guo, Z.; Chen, G. Distributed zero-gradient-sum algorithm for convex optimization with time-varying communication delays and switching networks. Int. J. Robust Nonlinear Control 2018, 28, 4900–4915.
  20. Pang, Y.; Hu, G. Gradient-free distributed optimization with exact convergence. Automatica 2021, 144, 110474.
  21. Varagnolo, D.; Zanella, F.; Cenedese, A. Newton–Raphson consensus for distributed convex optimization. IEEE Trans. Autom. Control 2016, 61, 994–1009.
  22. Chang, T.; Nedić, A.; Scaglione, A. Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 2014, 59, 1524–1538.
  23. Jin, B.; Li, H.; Yan, W.; Cao, M. Distributed model predictive control and optimization for linear systems with global constraints and time-varying communication. IEEE Trans. Autom. Control 2021, 66, 3393–3400.
  24. Lu, J.; Tang, C. Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case. IEEE Trans. Autom. Control 2011, 57, 2348–2354.
  25. Yang, S.; Liu, Q.; Wang, J. Distributed optimization based on a multiagent system in the presence of communication delays. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 717–728.
  26. Wang, D.; Wang, Z.; Chen, M.; Wang, W. Distributed optimization for multi-agent systems with constraints set and communication time-delay over a directed graph. Inf. Sci. 2018, 438, 1–14.
  27. Lin, P.; Ren, W.; Farrell, J. Distributed continuous-time optimization: Nonuniform gradient gains, finite-time convergence, and convex constraint set. IEEE Trans. Autom. Control 2017, 62, 2239–2253.
  28. Yu, Z.; Yu, S.; Jiang, H.; Mei, X. Distributed fixed-time optimization for multi-agent systems over a directed network. Nonlinear Dyn. 2021, 103, 775–789.
  29. Liu, P.; Xiao, F.; Wei, B.; Wang, A. Distributed constrained optimization problem of heterogeneous linear multi-agent systems with communication delays. Syst. Control Lett. 2021, 155, 105002.
  30. Wang, X.; Hong, Y.; Ji, H. Distributed optimization for a class of nonlinear multiagent systems with disturbance rejection. IEEE Trans. Cybern. 2016, 46, 1655–1666.
  31. Wang, X.; Li, S.; Wang, G. Distributed optimization for disturbed second-order multiagent systems based on active antidisturbance control. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 2104–2117.
  32. Feng, Z.; Hu, G.; Cassandras, C. Finite-time distributed convex optimization for continuous-time multiagent systems with disturbance rejection. IEEE Trans. Control Netw. Syst. 2020, 7, 686–698.
  33. Godsil, C.; Royle, G. Algebraic Graph Theory; Graduate Texts in Mathematics; Springer: New York, NY, USA, 2001.
  34. Polyakov, A. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 2012, 57, 2106–2110.
  35. Zuo, Z. Nonsingular fixed-time consensus tracking for second-order multi-agent networks. Automatica 2015, 54, 305–309.
  36. Liu, J.; Zhang, Y.; Sun, C.; Yu, Y. Fixed-time consensus of multi-agent systems with input delay and uncertain disturbances via event-triggered control. Inf. Sci. 2019, 480, 261–272.
  37. Liu, J.; Yu, Y.; Xu, Y. Fixed-time average consensus of nonlinear delayed MASs under switching topologies: An event-based triggering approach. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2721–2733.
  38. Vu, V.; Pham, T.; Dao, P. Disturbance observer-based adaptive reinforcement learning for perturbed uncertain surface vessels. ISA Trans. 2022, 130, 277–292.
  39. Zhao, S.; Dimarogonas, D.; Sun, Z.; Bauso, D. A general approach to coordination control of mobile agents with motion constraints. IEEE Trans. Autom. Control 2018, 63, 1509–1516.
  40. Dao, P.; Nguyen, V.; Liu, Y. Finite-time convergence for bilateral teleoperation systems with disturbance and time-varying delays. IET Control Theory Appl. 2021, 15, 1736–1748.
Figure 1. Evolutionary trajectory of local and global functions.
Figure 2. The switching topology graphs.
Figure 3. The states of x i ( t ) for i = 1 , 2 , … , 6 .
Figure 4. The evolution of control input u i ( t ) for i = 1 , 2 , … , 6 .
Figure 5. The trajectories of f i ( x i ( t ) ) for i = 1 , 2 , … , 6 .
Figure 6. The trajectory of F ( x ( t ) ) .
Figure 7. The states of x i ( t ) for i = 1 , 2 , … , 6 .
Figure 8. The evolution of control input u i ( t ) for i = 1 , 2 , … , 6 .
Figure 9. The states of x i ( t ) for i = 1 , 2 , … , 6 .
Figure 10. The evolution of control input u i ( t ) for i = 1 , 2 , … , 6 .
Figure 11. The trajectories of f i ( x i ( t ) ) for i = 1 , 2 , … , 6 .
Figure 12. The trajectory of F ( x ( t ) ) .