Article

Distributed Fault Diagnosis via Iterative Learning for Partial Differential Multi-Agent Systems with Actuators

1 School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Vocational and Technical Education, Guangxi Science & Technology Normal University, Laibin 546199, China
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(7), 955; https://doi.org/10.3390/math12070955
Submission received: 20 February 2024 / Revised: 12 March 2024 / Accepted: 21 March 2024 / Published: 23 March 2024
(This article belongs to the Special Issue Applications of Partial Differential Equations, 2nd Edition)

Abstract

Component failures can lead to performance degradation or even failure in multi-agent systems, thus necessitating the development of fault diagnosis methods. To address the distributed fault diagnosis problem for a class of partial differential multi-agent systems with actuators, a fault estimator is designed by introducing virtual faults into the agents. A P-type iterative learning control protocol is formulated based on the residual signals, aiming to adjust the introduced virtual faults. Through rigorous mathematical analysis utilizing contraction mapping and the Bellman–Gronwall lemma, sufficient conditions for the convergence of this protocol are derived. The results indicate that the learning protocol ensures the tracking of virtual faults to actual faults, thereby facilitating fault diagnosis for the systems. Finally, the effectiveness of the learning protocol is validated through numerical simulation.

1. Introduction

In the past decade, research on multi-agent systems (MASs) has garnered widespread attention [1,2,3,4,5,6]. MASs consist of intelligent entities with sensing and execution capabilities, collaborating through network coupling to collectively solve problems, thereby enhancing the efficiency of problem solving [1,2]. Due to their advantages, MASs find extensive applications in collaborative research areas such as unmanned ground vehicles [3], unmanned boats [4], and drones [5]. A detailed survey of the application domains of MASs is conducted in [6].
The successful completion of complex tasks through cooperation in MASs is contingent upon the normal operation of each agent and the maintenance of regular communication connections between agents [7]. However, unlike the single intelligent systems described by ordinary differential equations (ODEs) in [8,9,10], as the number of agents increases, the long-term operation of MASs becomes more susceptible to failures in practice [11]. Unless faults are promptly diagnosed and addressed after occurrence, the distributed nature of MASs makes faults prone to propagating through the network, leading to MAS paralysis and causing severe economic losses [12]. Consequently, considering the reliability and security of MASs in real-world scenarios, issues related to fault diagnosis have garnered widespread attention [13,14,15,16,17]. For instance, in [13], a sliding mode observer addressed the fault detection problem in MASs subject to disturbances. In [14], a distributed fault diagnosis approach for MASs was proposed based on a belief rule base. Furthermore, in [15], a novel iterative learning control scheme was developed to mitigate the impact of uncertainties and actuator failures in MASs; notably, the scheme relied on estimating both the agent's own state and the neighboring states. Additional research reports on fault diagnosis in MASs can be found in references [16,17].
Iterative learning control (ILC) represents an effective intelligent control strategy for achieving high-precision tracking of systems without accurate models within a finite time interval [18]. It is particularly well suited to systems exhibiting repetitive cyclic tracking characteristics. ILC, initially proposed by Arimoto and others, is a significant branch of learning control [19,20]. The core idea of ILC is to utilize the input–output data obtained from previous iterations, continuously refining the control input from the previous iteration, so as to achieve complete tracking of the desired trajectory within a finite time interval [21,22]. Compared to some traditional control methods, ILC requires minimal prior knowledge and computation, making it well suited to handling dynamic systems with high uncertainty in a straightforward manner [23]. Consequently, ILC has been widely researched in both practical and theoretical analyses of MASs [24,25]. In recent years, ILC has also made significant strides in system fault diagnosis [26,27,28]. For example, in [26], an investigation was conducted on iterative learning fault diagnosis for stochastic repetitive systems with Brownian motion. In [27], a novel fault detection and estimation algorithm based on ILC was proposed to address the challenges of detecting and estimating faults in time-varying uncertain network systems. Subsequently, an effective data-driven ILC scheme was designed for a specific class of network systems with potential faults [28]. Thus, the objective of diagnosing system faults can be achieved by iteratively adjusting the introduced virtual faults through the residual signal using ILC methods. However, as is evident from the references mentioned above [13,14,15,16,17,26,27,28], these studies primarily address fault diagnosis problems for MASs in the time domain only.
To date, most research has yielded novel findings on MASs described by ODEs [29,30,31], with ample consideration given to the temporal evolution of agent states. However, in the real world, the state of MASs is not only time-dependent but also spatially influenced, as observed in examples such as flexible robotic arms, the axial movement of motors, and spacecraft surfaces, as referenced in [32,33,34,35]. Consequently, in recent years, significant efforts have been dedicated to addressing the control issues of MASs modeled by partial differential equations (PDEs) [36,37,38,39]. In [36], considering the spatiotemporal dynamic evolution of agents, a partial differential MAS dynamic model was established through PDEs to describe this behavior. In [37], the consensus problem of a class of nonlinear impulsive partial differential MASs was addressed by applying ILC methods. Furthermore, references [38,39] employed ILC schemes to solve consensus control problems of discretized partial differential MASs. This indicates that ILC can effectively address the coordinated control issues of partial differential MASs. Compared to the previously mentioned ordinary differential MASs, partial differential MASs are more prone to faults due to the spatiotemporal complexity of their states. Research on the fault diagnosis problem in spatiotemporal partial differential MASs, however, remains preliminary, leaving an open and challenging field. In this context, developing reliable fault diagnosis methods specifically tailored to partial differential MASs becomes a critical and urgent task for their advancement.
This study investigates the distributed fault diagnosis problem of a class of linear partial differential MASs with actuators using the ILC method. Firstly, a fault estimator is designed by introducing virtual faults to the agents. Secondly, a P-type ILC protocol is formulated based on the error between the actual systems’ output and the fault estimator’s (residual signal) output. After this control protocol is applied, the introduced virtual faults converge to the actual faults, achieving the goal of fault diagnosis for the partial differential MASs. Finally, the effectiveness of the learning protocol is validated through numerical simulation examples.
The main contributions of this work are summarized in the following key points:
(1)
Unlike the fault diagnosis for ordinary differential MASs discussed in [13,14,15,16,17], this work addresses the fault diagnosis problem for a class of partial differential MASs over a continuous spatiotemporal period. The aim is to tackle the limitations hindering the in-depth development of partial differential MASs.
(2)
Based on the ILC method, a distributed P-type ILC protocol with high-precision tracking performance is constructed to address the fault diagnosis problem of partial differential MASs with actuators. This learning process exhibits strong resistance to disturbances compared to traditional observers.
(3)
Sufficient conditions for fault diagnosis in partial differential MASs are derived through the application of contraction mapping and the Bellman–Gronwall lemma. Despite the intricate spatiotemporal dynamics of the agents, this derivation further enriches the theoretical achievements in fault diagnosis for MASs.
The remaining sections of the article are organized as follows: In Section 2, fundamental knowledge is provided, the problem is formulated under certain assumptions, and a description of the MASs is presented. Section 3 focuses on the design of a suitable fault estimator and ILC protocol. The analysis of the convergence conditions for estimating MAS faults is presented in Section 4. Simulation results and conclusions are provided in Section 5 and Section 6, respectively.

2. Preliminaries and Problem Statement

2.1. Preliminaries

Algebraic graph theory is frequently employed to depict the communication relationships among agents in MASs. Let $\bar{G}=(\bar{V},\bar{E},\bar{A})$ describe the directed graph of the network consisting of $N$ agents in the system. Here, $\bar{V}=\{\bar{v}_1,\bar{v}_2,\dots,\bar{v}_N\}$ is the set of nodes in graph $\bar{G}$, and $\bar{E}\subseteq\bar{V}\times\bar{V}$ is the set of its edges. Moreover, $\bar{e}_{j,i}\in\bar{E}$ implies that there is a communication link from node $j$ to node $i$. $\bar{A}=[\bar{a}_{j,i}]\in\mathbb{R}^{N\times N}$ is the adjacency matrix, where $\bar{a}_{j,i}>0\Leftrightarrow\bar{e}_{j,i}\in\bar{E}$. Define $\bar{D}=\mathrm{diag}(\bar{d}_1,\bar{d}_2,\dots,\bar{d}_N)$ as the degree matrix of graph $\bar{G}$; then, the degree of node $j$ is $\bar{d}_j=\sum_{i\in N_j}\bar{a}_{j,i}$, where $N_j$ is the set of neighbors of agent $j$. The Laplacian matrix of graph $\bar{G}$ is $\bar{L}=\bar{D}-\bar{A}$. Additionally, $\otimes$ denotes the Kronecker product, $I_N$ is the identity matrix, and $C^{\mathrm{T}}$ is the transpose of matrix $C$.
Furthermore, for the $n$-dimensional vector $X=(x_1,x_2,\dots,x_n)^{\mathrm{T}}$, its norm is defined as $\|X\|=\sqrt{\sum_{l=1}^{n}x_l^{2}}$, and the norm of the corresponding $n\times n$ matrix $A$ is $\|A\|=\sqrt{\lambda_{\max}(A^{\mathrm{T}}A)}$, where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue. Let $L^{2}(\Omega)$ be the function space consisting of all measurable functions $p$ satisfying $\|p\|_{L^{2}}^{2}=\int_{\Omega}p^{2}(s)\,ds<\infty$.
If $p_l(s)\in L^{2}(\Omega)$, $l=1,2,\dots,n$, then $P=(p_1,p_2,\dots,p_n)^{\mathrm{T}}$ belongs to the $\mathbb{R}^{n}$-valued space $L^{2}(\Omega)$, and $\|P\|_{L^{2}}=\{\int_{\Omega}P^{\mathrm{T}}(s)P(s)\,ds\}^{1/2}$. For a given positive constant $\lambda$, the $(L^{2},\lambda)$-norm is defined as $\|P\|_{(L^{2},\lambda)}=\sup_{0\le t\le T}\{\|P(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\}$.
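To make the $(L^{2},\lambda)$-norm concrete, the following sketch evaluates it on a discrete space–time grid. The test function $P(s,t)=e^{t}\sin(\pi s)$ and the grid sizes are illustrative choices, not from the paper; a simple Riemann sum stands in for the spatial integral.

```python
import numpy as np

# Discrete sketch of the (L^2, lambda)-norm defined above:
# ||P||_(L2,lambda) = sup_{0<=t<=T} { ||P(.,t)||_{L2}^2 * exp(-lambda*t) }.
def l2_lambda_norm(P, s, t, lam):
    """P[i, j] ~ P(s_j, t_i); a Riemann sum approximates the L2 integral."""
    sq = np.sum(P**2, axis=1) * (s[1] - s[0])   # ||P(.,t)||_{L2}^2 per time step
    return np.max(sq * np.exp(-lam * t))

s = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 101)
P = np.exp(t)[:, None] * np.sin(np.pi * s)[None, :]   # illustrative P(s,t)
val = l2_lambda_norm(P, s, t, lam=4.0)
```

For this choice, $\|P(\cdot,t)\|_{L^{2}}^{2}=0.5\,e^{2t}$, so with $\lambda=4$ the supremum of $0.5\,e^{-2t}$ is attained at $t=0$ and equals $0.5$; the weight $e^{-\lambda t}$ is what later lets a sufficiently large $\lambda$ absorb exponential growth in the convergence proofs.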
The following lemmas lay the foundation for the convergence analysis of the subsequent main results:
Lemma 1.
(Contraction mapping principle [40]). For a non-negative real sequence $\{p_k\}$ satisfying $p_{k+1}\le\theta p_k+\omega_k$, where $0\le\theta<1$ and $\lim_{k\to\infty}\omega_k=0$, it holds that $\lim_{k\to\infty}p_k=0$.
Lemma 2.
(Bellman–Gronwall inequality [41]). Assume $r_1(t)$ and $r_2(t)$ are real-valued continuous functions on the interval $[0,T]$ and $\varsigma_1\ge0$. If $r_1(t)\le\varsigma_3+\int_0^{t}(\varsigma_1 r_1(\tau)+\varsigma_2 r_2(\tau))\,d\tau$, then $r_1(t)\le\varsigma_3 e^{\varsigma_1 t}+\int_0^{t}e^{\varsigma_1(t-\tau)}\varsigma_2 r_2(\tau)\,d\tau$.
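Lemma 1 can be illustrated numerically: any sequence driven by a contraction plus a vanishing disturbance is forced to zero. The parameters below ($\theta=0.6$, $\omega_k=1/(k+1)$, $p_0=5$) are illustrative choices, not from the paper.

```python
# Numerical illustration of Lemma 1 (contraction mapping principle):
# p_{k+1} <= theta * p_k + omega_k with 0 <= theta < 1 and omega_k -> 0
# forces p_k -> 0.
def contraction_sequence(p0, theta, omegas):
    """Iterate p_{k+1} = theta * p_k + omega_k and return the trajectory."""
    traj = [p0]
    for w in omegas:
        traj.append(theta * traj[-1] + w)
    return traj

K = 200
omegas = [1.0 / (k + 1) for k in range(K)]   # vanishing disturbance omega_k
traj = contraction_sequence(5.0, 0.6, omegas)
```

The tail of `traj` shrinks toward zero even though each step adds a fresh disturbance, which is exactly the mechanism used in the proofs of Theorems 1 and 2 below.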

2.2. Problem Statement

Consider a parabolic partial differential MAS composed of $N$ actuator agents that perform repeatable tasks. The dynamic model of the $j$-th actuator agent is as follows:
$$\begin{aligned}\frac{\partial p_j(s,t)}{\partial t}&=H\Delta p_j(s,t)+Ap_j(s,t)+Bu_j(s,t)+Ef_j(s,t),\\ y_j(s,t)&=Cp_j(s,t)+Du_j(s,t)+Ff_j(s,t),\end{aligned}\qquad(1)$$
where $j=1,2,\dots,N$ denotes the label of the actuator agent and $(s,t)\in\Omega\times[0,T]$, with $\Omega$ a smooth and bounded region. The state, input, and output of actuator agent $j$ are denoted by $p_j(s,t)\in\mathbb{R}^{n}$, $u_j(s,t)\in\mathbb{R}^{m}$, and $y_j(s,t)\in\mathbb{R}^{p}$, respectively. The matrix $H$ is a positive, bounded diagonal matrix, specifically $H=\mathrm{diag}[h_1,h_2,\dots,h_n]\in\mathbb{R}^{n\times n}$ with $0<h_l<\infty$. The matrices $A\in\mathbb{R}^{n\times n}$, $B,E\in\mathbb{R}^{n\times m}$, $C\in\mathbb{R}^{p\times n}$, and $D,F\in\mathbb{R}^{p\times m}$ are all bounded. $f_j(s,t)\in\mathbb{R}^{m}$ represents the fault signal of actuator agent $j$, and $\Delta$ is the Laplace operator on the region $\Omega$, i.e., $\Delta=\sum_{l}\partial^{2}/\partial s_l^{2}$. Meanwhile, the boundary and initial conditions of MAS (1) are as follows:
$$\sigma p_j(s,t)+\beta\frac{\partial p_j(s,t)}{\partial v}=0,\quad(s,t)\in\partial\Omega\times[0,T],\qquad(2)$$
$$p_j(s,0)=\varphi_j(s),\quad s\in\Omega,\ j\in\{1,2,\dots,N\},\qquad(3)$$
where $\sigma=\mathrm{diag}[\sigma_1,\sigma_2,\dots,\sigma_n]$, $\sigma_l\ge0$, $\beta=\mathrm{diag}[\beta_1,\beta_2,\dots,\beta_n]$, $\beta_l>0$, and $v$ represents the outward normal vector on the boundary $\partial\Omega$ of the region.
Building upon the strategies outlined in the literature above [15,26,27,28], which utilize ILC schemes for system fault diagnosis, a suitable fault estimator and an appropriate ILC protocol are designed to diagnose faults in MAS (1). To guarantee the convergence of the designed learning protocol, the following assumptions need to be satisfied:
Assumption 1.
The communication graph $\bar{G}$ contains a spanning tree.
Assumption 2.
Throughout the learning process, the boundary and initial conditions are consistently satisfied:
$$\sigma p_{j,k}(s,t)+\beta\frac{\partial p_{j,k}(s,t)}{\partial v}=0,\quad(s,t)\in\partial\Omega\times[0,T],\qquad(4)$$
$$p_{j,k}(s,0)=\varphi_{j,k}(s),\quad s\in\Omega,\ j\in\{1,2,\dots,N\},\ k\in\mathbb{Z}^{+},\qquad(5)$$
where the function $\varphi_{j,k}(s)$ satisfies $\|\varphi_{j,k+1}(s)-\varphi_{j,k}(s)\|_{L^{2}}^{2}\le lr^{k}$, $r\in[0,1)$, $l>0$.
Remark 1.
The mathematical symbols and lemmas in the Preliminaries lay the foundation for the subsequent analysis and description. Based on references [13,14,15,16,17,36,37,38,39], the dynamic model is constructed in the problem statement, addressing the fault diagnosis problem for a class of partial differential MASs. Furthermore, appropriate ILC protocols are designed to diagnose faults within the MAS. To prove the convergence of this learning protocol, mathematical tools such as the $\lambda$-norm, widely utilized in convergence proofs, are employed, as seen in references [36,37,38,39].
Remark 2.
Assumption 1 ensures that there is no isolated actuator agent in MAS (1), guaranteeing the ability to estimate potential faults for each actuator agent. Assumption 2 serves as a fundamental condition in the design of the ILC, ensuring optimal tracking and estimation performance without sacrificing tracking performance, which aligns with the approach taken in many ILC references [26,27,28].

3. Design of Fault Estimation Tracker and Learning Control Protocol

To estimate the magnitude of a fault when MAS (1) experiences one, and under the fulfillment of the aforementioned Assumptions 1–2, the following virtual fault tracking estimator is first designed:
$$\begin{aligned}\frac{\partial\hat{p}_{j,k}(s,t)}{\partial t}&=H\Delta\hat{p}_{j,k}(s,t)+A\hat{p}_{j,k}(s,t)+Bu_j(s,t)+E\hat{f}_{j,k}(s,t)+V(y_j(s,t)-\hat{y}_{j,k}(s,t)),\\ \hat{y}_{j,k}(s,t)&=C\hat{p}_{j,k}(s,t)+Du_j(s,t)+F\hat{f}_{j,k}(s,t),\end{aligned}\qquad(6)$$
where k represents the iteration count, and p ^ j , k ( s , t ) and y ^ j , k ( s , t ) are the state and output estimation values of actuator agent j , respectively. For the sake of convenience, the corresponding symbols are denoted as follows:
$$\tilde{p}_{j,k}(s,t)=p_j(s,t)-\hat{p}_{j,k}(s,t),\qquad(7)$$
$$\tilde{f}_{j,k}(s,t)=f_j(s,t)-\hat{f}_{j,k}(s,t),\qquad(8)$$
$$e_{j,k}(s,t)=y_j(s,t)-\hat{y}_{j,k}(s,t),\qquad(9)$$
where $\tilde{f}_{j,k}(s,t)$ represents the fault error of actuator agent $j$, while $e_{j,k}(s,t)$ corresponds to the difference between the output of actuator agent $j$ and the output of the fault estimator (6), namely, the residual signal. $V\in\mathbb{R}^{n\times p}$ is a predefined gain matrix.
To ensure that the virtual faults of the actuator agents approximate the actual faults through iteration, a P-type ILC protocol is designed based on the aforementioned residual signal $e_{j,k}(s,t)$, as follows:
$$\hat{f}_{j,k+1}(s,t)=\hat{f}_{j,k}(s,t)+\gamma\sum_{i=1}^{N}\bar{a}_{j,i}\big[(y_j(s,t)-\hat{y}_{j,k}(s,t))-(y_i(s,t)-\hat{y}_{i,k}(s,t))\big],\qquad(10)$$
where $\gamma$ is the learning gain matrix. In the actual learning process, the correction of virtual faults by the fault estimator ceases when the actual output and the estimator output of actuator agent $j$ satisfy $\|y_j(\cdot,t)-\hat{y}_{j,k}(\cdot,t)\|_{L^{2}}^{2}\le\varepsilon$, where $\varepsilon$ is a given performance metric.
Inspired by references [15,26,27,28], the corresponding fault estimator and ILC protocol are designed. The core idea is as follows: within the selected optimization region, the residual signal between the actual output of actuator agent $j$ in MAS (1) and the output of the fault tracking estimator is utilized, and the ILC protocol (10) is leveraged to adjust the introduced virtual faults. This adjustment is performed in such a way that the virtual faults gradually approach the true faults of actuator agent $j$ in the MAS along the iteration axis. The ultimate goal is to achieve the estimation of the fault, i.e., $\lim_{k\to\infty}\|f_j(\cdot,t)-\hat{f}_{j,k}(\cdot,t)\|_{L^{2}}^{2}=0$. Simultaneously, this implies $\lim_{k\to\infty}\|y_j(\cdot,t)-\hat{y}_{j,k}(\cdot,t)\|_{L^{2}}^{2}=0$.
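The P-type correction mechanism can be illustrated in a deliberately stripped-down setting. The sketch below is a single-agent, scalar, static stand-in (no PDE dynamics and no neighbor terms, so it is not protocol (10) itself): the residual is $e_k=F(f-\hat{f}_k)$, and the update $\hat{f}_{k+1}=\hat{f}_k+\gamma e_k$ contracts the fault error by the factor $|1-\gamma F|<1$. The numbers $F=0.3$, $\gamma=2$, and the true fault value are illustrative choices.

```python
# Minimal single-agent, scalar sketch of the P-type update idea behind
# protocol (10): f_hat_{k+1} = f_hat_k + gamma * e_k with residual
# e_k = F * (f - f_hat_k), converging when |1 - gamma * F| < 1.
F = 0.3           # scalar stand-in for the output-fault gain matrix F
gamma = 2.0       # learning gain; |1 - gamma * F| = 0.4 < 1
f_true = 1.7      # true (constant) fault, illustrative value
f_hat = 0.0       # virtual fault, iteration k = 0
for k in range(50):
    residual = F * (f_true - f_hat)   # e_k = y - y_hat_k
    f_hat = f_hat + gamma * residual  # P-type iterative correction
```

After 50 iterations the fault error has shrunk by a factor of $0.4^{50}$, mirroring the geometric contraction that Theorem 1 establishes for the full spatiotemporal protocol.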
For the convenience of the subsequent convergence analysis of the fault estimator and control protocol, the corresponding compact forms are provided as follows:
$$\begin{aligned}\frac{\partial\hat{p}_k(s,t)}{\partial t}&=(I_N\otimes H)\Delta\hat{p}_k(s,t)+(I_N\otimes A)\hat{p}_k(s,t)+(I_N\otimes B)u(s,t)\\&\quad+(I_N\otimes E)\hat{f}_k(s,t)+(I_N\otimes V)(y(s,t)-\hat{y}_k(s,t)),\\ \hat{y}_k(s,t)&=(I_N\otimes C)\hat{p}_k(s,t)+(I_N\otimes D)u(s,t)+(I_N\otimes F)\hat{f}_k(s,t),\end{aligned}\qquad(11)$$
where $\hat{p}_k(s,t)=[\hat{p}_{1,k}^{\mathrm{T}}(s,t),\hat{p}_{2,k}^{\mathrm{T}}(s,t),\dots,\hat{p}_{N,k}^{\mathrm{T}}(s,t)]^{\mathrm{T}}\in\mathbb{R}^{Nn}$, $u(s,t)=[u_1^{\mathrm{T}}(s,t),u_2^{\mathrm{T}}(s,t),\dots,u_N^{\mathrm{T}}(s,t)]^{\mathrm{T}}\in\mathbb{R}^{Nm}$, $y(s,t)=[y_1^{\mathrm{T}}(s,t),y_2^{\mathrm{T}}(s,t),\dots,y_N^{\mathrm{T}}(s,t)]^{\mathrm{T}}\in\mathbb{R}^{Np}$, $\hat{y}_k(s,t)=[\hat{y}_{1,k}^{\mathrm{T}}(s,t),\hat{y}_{2,k}^{\mathrm{T}}(s,t),\dots,\hat{y}_{N,k}^{\mathrm{T}}(s,t)]^{\mathrm{T}}\in\mathbb{R}^{Np}$, and $\hat{f}_k(s,t)=[\hat{f}_{1,k}^{\mathrm{T}}(s,t),\hat{f}_{2,k}^{\mathrm{T}}(s,t),\dots,\hat{f}_{N,k}^{\mathrm{T}}(s,t)]^{\mathrm{T}}\in\mathbb{R}^{Nm}$.
$$\hat{f}_{k+1}(s,t)=\hat{f}_k(s,t)+(\bar{L}\otimes\gamma)e_k(s,t),\qquad(12)$$
where $e_k(s,t)=y(s,t)-\hat{y}_k(s,t)$ represents the stacked residual signal of the actuator agents in MAS (1), and $\bar{L}\in\mathbb{R}^{N\times N}$ is the Laplacian matrix.

4. Convergence Analysis of Control Protocol

In order to ensure that the virtual fault estimator's faults approximate the actual faults of the MAS through the previously designed ILC protocol, the conditions of Theorem 1 must be satisfied:
Theorem 1.
Under Assumptions 1–2 for MAS (1) and utilizing the designed fault estimator, when estimating faults occurring in the system through the designed P-type iterative learning control (ILC) protocol, if the learning gain $\gamma$ satisfies the condition $\|I_{Nm}-(\bar{L}\otimes\gamma)(I_N\otimes F)\|^{2}<0.5$, then, as $k\to\infty$, the faults of the virtual fault estimator approach the faults of the actual MAS (1), i.e., $\lim_{k\to\infty}\|f_j(\cdot,t)-\hat{f}_{j,k}(\cdot,t)\|_{L^{2}}=0$, $j=1,2,\dots,N$, $t\in[0,T]$.
Proof .
From the above learning control protocol (12), the following is obtained:
$$f(s,t)-\hat{f}_{k+1}(s,t)=f(s,t)-\hat{f}_k(s,t)-(\bar{L}\otimes\gamma)e_k(s,t).\qquad(13)$$
Then, using Equations (7), (8), and (11), Equation (13) can be further expressed as follows:
$$\begin{aligned}\tilde{f}_{k+1}(s,t)&=\tilde{f}_k(s,t)-(\bar{L}\otimes\gamma)(I_N\otimes C)\tilde{p}_k(s,t)-(\bar{L}\otimes\gamma)(I_N\otimes F)\tilde{f}_k(s,t)\\&=\big(I_{Nm}-(\bar{L}\otimes\gamma)(I_N\otimes F)\big)\tilde{f}_k(s,t)-(\bar{L}\otimes\gamma)(I_N\otimes C)\tilde{p}_k(s,t)\\&=(I_{Nm}-\gamma_F)\tilde{f}_k(s,t)-\gamma_C\tilde{p}_k(s,t),\end{aligned}\qquad(14)$$
where $\gamma_F=(\bar{L}\otimes\gamma)(I_N\otimes F)$ and $\gamma_C=(\bar{L}\otimes\gamma)(I_N\otimes C)$.
From Equation (14), the following inequality holds:
$$\begin{aligned}\tilde{f}_{k+1}^{\mathrm{T}}(s,t)\tilde{f}_{k+1}(s,t)&=\tilde{f}_k^{\mathrm{T}}(s,t)(I_{Nm}-\gamma_F)^{\mathrm{T}}(I_{Nm}-\gamma_F)\tilde{f}_k(s,t)-\tilde{f}_k^{\mathrm{T}}(s,t)(I_{Nm}-\gamma_F)^{\mathrm{T}}\gamma_C\tilde{p}_k(s,t)\\&\quad-\tilde{p}_k^{\mathrm{T}}(s,t)\gamma_C^{\mathrm{T}}(I_{Nm}-\gamma_F)\tilde{f}_k(s,t)+\tilde{p}_k^{\mathrm{T}}(s,t)\gamma_C^{\mathrm{T}}\gamma_C\tilde{p}_k(s,t)\\&\le2\tilde{f}_k^{\mathrm{T}}(s,t)(I_{Nm}-\gamma_F)^{\mathrm{T}}(I_{Nm}-\gamma_F)\tilde{f}_k(s,t)+2\tilde{p}_k^{\mathrm{T}}(s,t)\gamma_C^{\mathrm{T}}\gamma_C\tilde{p}_k(s,t).\end{aligned}\qquad(15)$$
Integrating the inequality (15) with respect to s over Ω , one obtains the following:
$$\|\tilde{f}_{k+1}(\cdot,t)\|_{L^{2}}^{2}\le2\mu\int_{\Omega}\tilde{f}_k^{\mathrm{T}}(s,t)\tilde{f}_k(s,t)\,ds+2\vartheta\int_{\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)\tilde{p}_k(s,t)\,ds\le2\mu\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}+2\vartheta\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2},\qquad(16)$$
where $\mu=\|I_{Nm}-(\bar{L}\otimes\gamma)(I_N\otimes F)\|^{2}$ and $\vartheta=\|(\bar{L}\otimes\gamma)(I_N\otimes C)\|^{2}$.
Next, from Equations (1) and (11), we can obtain the following:
$$\begin{aligned}\frac{\partial p(s,t)}{\partial t}-\frac{\partial\hat{p}_k(s,t)}{\partial t}&=(I_N\otimes H)\Delta(p(s,t)-\hat{p}_k(s,t))+(I_N\otimes A)(p(s,t)-\hat{p}_k(s,t))\\&\quad+(I_N\otimes E)(f(s,t)-\hat{f}_k(s,t))-(I_N\otimes V)(y(s,t)-\hat{y}_k(s,t)).\end{aligned}\qquad(17)$$
By rearranging Equation (17) using the notations from Equations (7) and (8), one can obtain the following:
$$\begin{aligned}\frac{\partial\tilde{p}_k(s,t)}{\partial t}&=(I_N\otimes H)\Delta\tilde{p}_k(s,t)+(I_N\otimes A)\tilde{p}_k(s,t)+(I_N\otimes E)\tilde{f}_k(s,t)\\&\quad-(I_N\otimes V)(I_N\otimes C)\tilde{p}_k(s,t)-(I_N\otimes V)(I_N\otimes F)\tilde{f}_k(s,t)\\&=(I_N\otimes H)\Delta\tilde{p}_k(s,t)+\big[(I_N\otimes A)-(I_N\otimes VC)\big]\tilde{p}_k(s,t)+\big[(I_N\otimes E)-(I_N\otimes VF)\big]\tilde{f}_k(s,t)\\&=(I_N\otimes H)\Delta\tilde{p}_k(s,t)+\mathcal{A}\tilde{p}_k(s,t)+\mathcal{E}\tilde{f}_k(s,t),\end{aligned}\qquad(18)$$
where $\mathcal{A}=(I_N\otimes A)-(I_N\otimes VC)$ and $\mathcal{E}=(I_N\otimes E)-(I_N\otimes VF)$.
Multiplying both sides of Equation (18) by p k Τ ( s , t ) , one can obtain the following:
$$\tilde{p}_k^{\mathrm{T}}(s,t)\frac{\partial\tilde{p}_k(s,t)}{\partial t}=\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes H)\Delta\tilde{p}_k(s,t)+\tilde{p}_k^{\mathrm{T}}(s,t)\mathcal{A}\tilde{p}_k(s,t)+\tilde{p}_k^{\mathrm{T}}(s,t)\mathcal{E}\tilde{f}_k(s,t).\qquad(19)$$
Transposing Equation (18), multiplying both sides by p k ( s , t ) , and combining this with Equation (19), one obtains the following:
$$\frac{\partial(\tilde{p}_k^{\mathrm{T}}(s,t)\tilde{p}_k(s,t))}{\partial t}=2\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes H)\Delta\tilde{p}_k(s,t)+\tilde{p}_k^{\mathrm{T}}(s,t)(\mathcal{A}+\mathcal{A}^{\mathrm{T}})\tilde{p}_k(s,t)+2\tilde{p}_k^{\mathrm{T}}(s,t)\mathcal{E}\tilde{f}_k(s,t).\qquad(20)$$
Integrating Equation (20) with respect to $s$ over $\Omega$, one obtains the following:
$$\frac{d\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}}{dt}=2\int_{\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes H)\Delta\tilde{p}_k(s,t)\,ds+\int_{\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)(\mathcal{A}+\mathcal{A}^{\mathrm{T}})\tilde{p}_k(s,t)\,ds+2\int_{\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)\mathcal{E}\tilde{f}_k(s,t)\,ds.\qquad(21)$$
Using Green’s formula, Equation (21) can be further written as follows:
$$\begin{aligned}\frac{d\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}}{dt}&\le2\oint_{\partial\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes H)\frac{\partial\tilde{p}_k(s,t)}{\partial v}\,d\sigma-2\int_{\Omega}(\nabla\tilde{p}_k(s,t))^{\mathrm{T}}(I_N\otimes H)\nabla\tilde{p}_k(s,t)\,ds\\&\quad+\lambda_{\max}(\mathcal{A}+\mathcal{A}^{\mathrm{T}})\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}+\lambda_{\max}(\mathcal{E})\big[\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}+\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}\big].\end{aligned}\qquad(22)$$
From the boundary conditions of MAS (1), $\frac{\partial\tilde{p}_k(s,t)}{\partial v}=-\beta^{-1}\sigma\tilde{p}_k(s,t)$ holds, and from inequality (22), one obtains the following:
$$\frac{d\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}}{dt}\le-2\oint_{\partial\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes H)(\beta^{-1}\sigma)\tilde{p}_k(s,t)\,d\sigma+\lambda_{\max}(\mathcal{A}+\mathcal{A}^{\mathrm{T}})\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}+\lambda_{\max}(\mathcal{E})\big[\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}+\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}\big]\le\eta_1\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}+\eta_2\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2},\qquad(23)$$
where $\eta_1=\lambda_{\max}(\mathcal{A}+\mathcal{A}^{\mathrm{T}})+\lambda_{\max}(\mathcal{E})$ and $\eta_2=\lambda_{\max}(\mathcal{E})$.
Integrating both sides of inequality (23) with respect to t , and utilizing Lemma 2, one can obtain the following:
$$\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}\le\|\tilde{p}_k(\cdot,0)\|_{L^{2}}^{2}+\int_0^{t}\big(\eta_1\|\tilde{p}_k(\cdot,\tau)\|_{L^{2}}^{2}+\eta_2\|\tilde{f}_k(\cdot,\tau)\|_{L^{2}}^{2}\big)d\tau\le e^{\eta_1 t}\|\tilde{p}_k(\cdot,0)\|_{L^{2}}^{2}+\int_0^{t}\eta_2 e^{\eta_1(t-\tau)}\|\tilde{f}_k(\cdot,\tau)\|_{L^{2}}^{2}\,d\tau.\qquad(24)$$
By choosing an appropriately large $\lambda$ ($\lambda>\eta_1$) and multiplying both sides of inequality (24) by $e^{-\lambda t}$, one obtains the following:
$$\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\le e^{(\eta_1-\lambda)t}\|\tilde{p}_k(\cdot,0)\|_{L^{2}}^{2}+\frac{\eta_2}{\lambda-\eta_1}\|\tilde{f}_k\|_{(L^{2},\lambda)}.\qquad(25)$$
From the initial conditions of MAS (1) and Assumption 2, $\|\tilde{p}_k(\cdot,0)\|_{L^{2}}^{2}\le lr^{k}$ holds. Therefore, inequality (25) can be further written as follows:
$$\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\le lr^{k}+\frac{\eta_2}{\lambda-\eta_1}\|\tilde{f}_k\|_{(L^{2},\lambda)}.\qquad(26)$$
By multiplying both sides of the previous inequality (16) by $e^{-\lambda t}$ and substituting inequality (26) into it, one obtains the following:
$$\|\tilde{f}_{k+1}(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\le2\mu\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}+2\vartheta\Big\{lr^{k}+\frac{\eta_2}{\lambda-\eta_1}\|\tilde{f}_k\|_{(L^{2},\lambda)}\Big\}\le2\vartheta lr^{k}+\Big(2\mu+\frac{2\vartheta\eta_2}{\lambda-\eta_1}\Big)\|\tilde{f}_k\|_{(L^{2},\lambda)},\qquad(27)$$
where $\rho=2\mu+\frac{2\vartheta\eta_2}{\lambda-\eta_1}$. On the one hand, since $0\le r<1$, it follows that $r^{k}\to0$ as $k\to\infty$. On the other hand, the condition of Theorem 1 gives $2\mu<1$, so for a sufficiently large $\lambda$, $\rho<1$ holds. Taking the supremum over $t\in[0,T]$ on the left-hand side of (27) yields $\|\tilde{f}_{k+1}\|_{(L^{2},\lambda)}\le2\vartheta lr^{k}+\rho\|\tilde{f}_k\|_{(L^{2},\lambda)}$; therefore, by Lemma 1, one can conclude the following:
$$\lim_{k\to\infty}\|\tilde{f}_k\|_{(L^{2},\lambda)}=0.\qquad(28)$$
For t [ 0 , T ] , one can obtain the following:
$$\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}=\big(\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\big)e^{\lambda t}\le\|\tilde{f}_k\|_{(L^{2},\lambda)}e^{\lambda T}.\qquad(29)$$
From Formulas (28) and (29), the following expression holds:
$$\lim_{k\to\infty}\|\tilde{f}_k(\cdot,t)\|_{L^{2}}=0.\qquad(30)$$
Since the aforementioned $\tilde{f}_k(\cdot,t)$ is the compact form of $\tilde{f}_{j,k}(\cdot,t)$, Equation (30) implies the following:
$$\lim_{k\to\infty}\|\tilde{f}_{j,k}(\cdot,t)\|_{L^{2}}=0,\quad j=1,2,\dots,N,\ t\in[0,T].\qquad(31)$$
In conclusion, it can be deduced that the $L^{2}$ norm of the fault estimation error of actuator agent $j$ gradually converges to zero along the iteration axis. The proof of Theorem 1 is completed. □
In addition to Theorem 1, to ensure that the output of the virtual fault estimator approximates the system’s actual output through the previously designed ILC protocol, it is necessary to satisfy Theorem 2.
Theorem 2.
Under the same conditions as Theorem 1, the output of the fault estimator approximates the actual output, i.e., $\lim_{k\to\infty}\|y_j(\cdot,t)-\hat{y}_{j,k}(\cdot,t)\|_{L^{2}}=0$, $j=1,2,\dots,N$, $t\in[0,T]$.
Proof .
Next, to analyze the convergence of the output error, one can derive the following from Equation (9):
$$\begin{aligned}e_k^{\mathrm{T}}(s,t)e_k(s,t)&=\tilde{f}_k^{\mathrm{T}}(s,t)(I_N\otimes F)^{\mathrm{T}}(I_N\otimes F)\tilde{f}_k(s,t)+\tilde{f}_k^{\mathrm{T}}(s,t)(I_N\otimes F)^{\mathrm{T}}(I_N\otimes C)\tilde{p}_k(s,t)\\&\quad+\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes C)^{\mathrm{T}}(I_N\otimes F)\tilde{f}_k(s,t)+\tilde{p}_k^{\mathrm{T}}(s,t)(I_N\otimes C)^{\mathrm{T}}(I_N\otimes C)\tilde{p}_k(s,t)\\&\le2\lambda_{\max}[(I_N\otimes F)^{\mathrm{T}}(I_N\otimes F)]\tilde{f}_k^{\mathrm{T}}(s,t)\tilde{f}_k(s,t)+2\lambda_{\max}[(I_N\otimes C)^{\mathrm{T}}(I_N\otimes C)]\tilde{p}_k^{\mathrm{T}}(s,t)\tilde{p}_k(s,t)\\&=2\chi_1\tilde{f}_k^{\mathrm{T}}(s,t)\tilde{f}_k(s,t)+2\chi_2\tilde{p}_k^{\mathrm{T}}(s,t)\tilde{p}_k(s,t),\end{aligned}\qquad(32)$$
where $\chi_1=\lambda_{\max}[(I_N\otimes F)^{\mathrm{T}}(I_N\otimes F)]$ and $\chi_2=\lambda_{\max}[(I_N\otimes C)^{\mathrm{T}}(I_N\otimes C)]$.
By integrating inequality (32) with respect to s over Ω , one can obtain the following:
$$\|e_k(\cdot,t)\|_{L^{2}}^{2}\le2\chi_1\int_{\Omega}\tilde{f}_k^{\mathrm{T}}(s,t)\tilde{f}_k(s,t)\,ds+2\chi_2\int_{\Omega}\tilde{p}_k^{\mathrm{T}}(s,t)\tilde{p}_k(s,t)\,ds\le2\chi_1\|\tilde{f}_k(\cdot,t)\|_{L^{2}}^{2}+2\chi_2\|\tilde{p}_k(\cdot,t)\|_{L^{2}}^{2}.\qquad(33)$$
By multiplying both sides of inequality (33) by $e^{-\lambda t}$ and using inequality (26), one obtains the following:
$$\|e_k(\cdot,t)\|_{L^{2}}^{2}e^{-\lambda t}\le2\chi_2 lr^{k}+\Big(2\chi_1+\frac{2\chi_2\eta_2}{\lambda-\eta_1}\Big)\|\tilde{f}_k\|_{(L^{2},\lambda)}.\qquad(34)$$
Analysis reveals that the right-hand side of inequality (34) is independent of time $t$. Taking the supremum over $t\in[0,T]$ on the left-hand side, the following inequality holds:
$$\|e_k\|_{(L^{2},\lambda)}\le2\chi_2 lr^{k}+\Big(2\chi_1+\frac{2\chi_2\eta_2}{\lambda-\eta_1}\Big)\|\tilde{f}_k\|_{(L^{2},\lambda)}.\qquad(35)$$
Similar to Formulas (27)–(31), one can obtain the following:
$$\lim_{k\to\infty}\|e_{j,k}(\cdot,t)\|_{L^{2}}=0,\quad j=1,2,\dots,N,\ t\in[0,T].\qquad(36)$$
The proof of Theorem 2 is completed. □

5. Numerical Simulation

This section presents numerical simulations to demonstrate the effectiveness of the proposed fault diagnosis method. A class of partial differential MASs with actuators involving four actuator agents is considered, and the communication topology is shown in Figure 1.
Then, the corresponding Laplacian matrix of Figure 1 is as follows:
$$\bar{L}=\begin{bmatrix}3&-1&-1&-1\\-1&2&0&-1\\-1&0&2&-1\\-1&-1&-1&3\end{bmatrix}.$$
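As a quick consistency check, the Laplacian above can be rebuilt from the topology of Figure 1. The adjacency matrix below is inferred from the description that only agents 2 and 3 lack a direct link (an assumption based on the text, since the figure itself is not reproduced here):

```python
import numpy as np

# Reconstruct the Laplacian L = D - A for the Figure 1 topology:
# all agent pairs communicate except agents 2 and 3.
A_bar = np.array([[0, 1, 1, 1],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [1, 1, 1, 0]])
D_bar = np.diag(A_bar.sum(axis=1))   # degree matrix D = diag(d_1, ..., d_4)
L_bar = D_bar - A_bar                # graph Laplacian
```

Every row of `L_bar` sums to zero, as required of a graph Laplacian, and the result matches the matrix given above.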
The dynamic mathematical model and related parameters of the MASs are as follows:
$$\begin{aligned}\frac{\partial p_j(s,t)}{\partial t}&=H\Delta p_j(s,t)+Ap_j(s,t)+Bu_j(s,t)+Ef_j(s,t),\\ y_j(s,t)&=Cp_j(s,t)+Du_j(s,t)+Ff_j(s,t),\end{aligned}\qquad(37)$$
where $j=1,2,3,4$ denotes the label of the actuator agent, $(s,t)\in[0,1]\times[0,1]$, and $H=1$. The remaining parameters are
$$A=\begin{bmatrix}0.2&1.5\\1&0.3\end{bmatrix},\ B=\begin{bmatrix}0&0.1\\0.5&0.6\end{bmatrix},\ E=\begin{bmatrix}0.1&0\\0&0.2\end{bmatrix},\ C=\begin{bmatrix}0.3&0\\0&0.2\end{bmatrix},\ D=\begin{bmatrix}0.3&0\\0&0.2\end{bmatrix},\ F=\begin{bmatrix}0.3&0\\0&0.2\end{bmatrix},\ V=\begin{bmatrix}0.8&0\\0&0.5\end{bmatrix}.$$
After selecting an appropriate learning gain, with the initial parameter set to $r=0$, analysis and calculations reveal that these parameters satisfy the conditions of Theorems 1 and 2. Meanwhile, assuming the occurrence of a fault in MAS (37), it is specified as follows:
$$f_j(s,t)=f_{j,d}=6\sin(12t)\sin(2s),\quad j=1,2,3,4,\qquad(38)$$
where ( s , t ) [ 0 , 1 ] × [ 0 , 1 ] . Moreover, j = 1 , 2 , 3 , 4 denotes the label of the actuator agent.
Utilizing the finite difference method for the differential equations and applying the previously designed fault estimator (11) and ILC protocol (12), the corresponding simulation results are illustrated in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8.
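The simulation pipeline can be sketched as follows. This is a deliberately simplified single-agent, scalar stand-in for MAS (37): scalar coefficients play the roles of the matrix parameters, the grid sizes, gains, and iteration count are illustrative choices rather than the paper's, and the neighbor coupling is omitted. It nonetheless exercises the same three ingredients: an explicit finite-difference solver for the heat-type PDE, the fault estimator with output injection, and the P-type ILC correction of the virtual fault against the true fault $f(s,t)=6\sin(12t)\sin(2s)$.

```python
import numpy as np

# Scalar stand-ins for H, A, E, C, F, V and the learning gain gamma.
h, a, e_c, c, F_out, V, gamma = 1.0, 0.2, 0.1, 0.3, 0.3, 0.8, 2.0
Ns, Nt = 21, 1000
s = np.linspace(0.0, 1.0, Ns)
dx, dt = s[1] - s[0], 1.0 / Nt            # dt < dx^2 / (2h) for stability
t = np.arange(Nt) * dt
f_true = 6.0 * np.sin(12.0 * t)[:, None] * np.sin(2.0 * s)[None, :]

def lap(p):
    """Discrete Laplacian with zero-flux (Neumann) boundaries via ghost points."""
    q = np.empty_like(p)
    q[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    q[0] = 2.0 * (p[1] - p[0]) / dx**2
    q[-1] = 2.0 * (p[-2] - p[-1]) / dx**2
    return q

f_hat = np.zeros((Nt, Ns))                # virtual fault, iteration k = 0
for k in range(30):                       # ILC iterations
    p, p_hat = np.zeros(Ns), np.zeros(Ns)
    e = np.zeros((Nt, Ns))
    for n in range(Nt):                   # explicit finite differences in time
        y = c * p + F_out * f_true[n]
        y_hat = c * p_hat + F_out * f_hat[n]
        e[n] = y - y_hat                  # residual signal
        p = p + dt * (h * lap(p) + a * p + e_c * f_true[n])
        p_hat = p_hat + dt * (h * lap(p_hat) + a * p_hat
                              + e_c * f_hat[n] + V * e[n])
    f_hat = f_hat + gamma * e             # P-type ILC correction
fault_err = np.max(np.abs(f_true - f_hat))
```

With these gains, the per-iteration contraction factor is roughly $|1-\gamma F|$ plus a small state-error contribution, so `fault_err` shrinks geometrically along the iteration axis, mirroring the convergence behavior reported in Figures 2–8.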
Figure 1 depicts the connectivity among the actuator agents, noting that actuator agents 2 and 3 cannot directly exchange information. Figure 2 depicts the surface of the actual faults occurring in the actuator agents of MAS (37), while Figure 3 and Figure 4 illustrate the surfaces of the estimator's faults after 20 and 40 iterations of iterative learning, respectively. Combining Figure 2, Figure 3 and Figure 4, it can be observed that the fault estimation in the estimator approaches the actual faults of the actuator agents.
Figure 5 and Figure 6 present curves depicting the variations in the fault error and output error in MAS (37) with the iteration count, respectively. Figure 7 and Figure 8 present curves depicting the variations in the actuator fault error and output error with the iteration count, respectively. According to the varying curves in Figure 7 and Figure 8, it can be observed that the fault errors and output errors of the four actuator agents gradually converge along the positive direction of the iteration axis. To further observe the variations in these errors, localized zoom-in plots are provided for each of them in Figure 7 and Figure 8, respectively. By the 50th iteration, the fault errors and output errors of all actuator agents can enter the preset error band of 0.01. Therefore, Figure 7 and Figure 8 show that with an increase in the number of iterative learning cycles, the fault estimation in the actuator agent’s estimator gradually converges to the actual fault, and the output also tends to approach the actual output.
Figures 2–8 collectively demonstrate that the designed fault estimator can effectively track the partial differential MASs and learn and approximate faults.

6. Conclusions

This study investigates the fault diagnosis problem for parabolic partial differential MASs with actuators. A distributed virtual fault tracker and a P-type iterative learning fault-tracking protocol based on local measurement information among actuator agents are designed. A theoretical analysis of the learning protocol is rigorously conducted using contraction mapping and the Bellman–Gronwall lemma, providing convergence conditions for the protocol. The effectiveness of the proposed theory and learning protocol is validated through numerical simulation. However, it is worth noting that the study is limited to linear homogeneous MASs. Future research will address these limitations by considering more realistic nonlinear and heterogeneous MASs. Additionally, the applicability of this protocol to fault diagnosis for multiple biomimetic robots and unmanned aerial vehicles will also be explored.

Author Contributions

Conceptualization, C.W. and J.W.; methodology, C.W. and J.W.; software, J.W.; validation, C.W. and Z.Z.; investigation, C.W., J.W. and Z.Z.; resources, J.W.; data curation, J.W.; writing—original draft preparation, C.W.; writing—review and editing, C.W., J.W. and Z.Z.; supervision, C.W. and J.W.; project administration, J.W. and Z.Z.; funding acquisition, C.W. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Young and Middle-aged Teachers' Basic Scientific Research Ability Promotion Project of Guangxi Universities (Grant No. 2021KY0852); the Scientific Research Fund of Guangxi Science & Technology Normal University (GXKS2022ZD001); and the 2023 Demonstration Modern Industry College Construction Project (GXKS2022ON006).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Connections between actuator agents.
Figure 2. Actual fault f_{j,d}(s, t) in actuator agent j of system (37).
Figure 3. Fault estimate f_{j,e}(s, t) (k = 5) in the estimators of the actuator agents.
Figure 4. Fault estimate f_{j,e}(s, t) (k = 40) in the estimators of the actuator agents.
Figure 5. Fault error of system (37): iteration number curve.
Figure 6. Output error of system (37): iteration number curve.
Figure 7. Fault error of the actuator agents: iteration number curve.
Figure 8. Output error of the actuator agents: iteration number curve.

Wang, C.; Zhou, Z.; Wang, J. Distributed Fault Diagnosis via Iterative Learning for Partial Differential Multi-Agent Systems with Actuators. Mathematics 2024, 12, 955. https://doi.org/10.3390/math12070955
