Article

Fixed-Time Aperiodic Intermittent Control for Quasi-Bipartite Synchronization of Competitive Neural Networks

Shimiao Tang, Jiarong Li, Haijun Jiang and Jinling Wang
1 College of Mathematics and System Science, Xinjiang University, Urumqi 830017, China
2 School of Mathematics and Statistics, Yili Normal University, Yining 835000, China
3 College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
Entropy 2024, 26(3), 199; https://doi.org/10.3390/e26030199
Submission received: 25 January 2024 / Revised: 16 February 2024 / Accepted: 23 February 2024 / Published: 26 February 2024

Abstract:
This paper concerns a class of coupled competitive neural networks, subject to disturbance and discontinuous activation functions. To realize the fixed-time quasi-bipartite synchronization, an aperiodic intermittent controller is initially designed. Subsequently, by combining the fixed-time stability theory and nonsmooth analysis, several criteria are established to ensure the bipartite synchronization in fixed time. Moreover, synchronization error bounds and settling time estimates are provided. Finally, numerical simulations are presented to verify the main results.

1. Introduction

Since neural networks are widely used in computer science [1], remote sensing [2], autonomous control systems [3], and other fields, their dynamic behaviors have been extensively studied over the past several decades. It is worth noting that neural networks (NNs) describe dynamics at the level of neural activity. However, it is essential to recognize that the synaptic weights between neurons also change over time [4]. Consequently, Meyer-Bäse et al. [4,5] developed competitive neural networks (CNNs) with different time scales in 1996, which can be viewed as an extension of Hopfield neural networks and cellular neural networks [6,7]. CNNs are defined using two types of state variables: short-term memory (STM) describes rapid neural activity, while long-term memory (LTM) depicts slow, unsupervised synaptic modifications. On the other hand, coupled competitive neural networks (CCNNs) consist of several interconnected subsystems, and due to their complex dynamic behavior, they have garnered significant attention [8,9].
The activation functions of NNs describe the connection between the input and output of a single neuron and are commonly assumed to be continuous. In the high-gain limit, however, the activation function approaches a discontinuous one. As a result, an increasing number of scholars have studied NNs with discontinuous activation functions [10,11,12,13]. On the other hand, the dynamic behaviors of NNs are frequently affected by external disturbances, such as changes in network structure, hardware facilities, and environmental noise. As far as we are aware, few studies take both discontinuous activation functions and external disturbances into account when discussing CCNNs. Therefore, it is both intriguing and challenging to study discontinuous activation functions and external disturbances in CCNNs.
Synchronization means that two or more dynamical systems achieve a common dynamical behavior. The synchronization problem of NNs has garnered significant attention recently due to its wide applicability in communication systems, biological sciences, mechanical engineering, and other domains [14,15,16,17]. However, the synchronization of the aforementioned NNs only considers the cooperative relationships between network nodes. In many practical systems, relationships of competition and cooperation coexist. Therefore, the synchronization issue of NNs with both competitive and cooperative connections between nodes, known as bipartite synchronization, is of crucial importance and has been researched in [18,19]. On the other hand, due to inherent network constraints, complete synchronization may not be achievable, and instead, quasi-synchronization is observed. Quasi-synchronization implies that the synchronization error no longer approaches zero but rather converges to a bounded set. To our knowledge, few papers have addressed the quasi-bipartite synchronization problem of CCNNs.
In addition to asymptotic and exponential synchronization, finite-time synchronization (FETs) has gained widespread attention in recent years as a more practical form of network synchronization. In FETs, the settling time is bounded, but it depends on the initial states of the nodes. To eliminate this dependence on the initial state, fixed-time synchronization (FXTs) was proposed based on fixed-time (FXT) stability theory [20], and its settling time depends only on the system or control parameters. Compared with the rich results for NNs [21,22], studies on the FXTs of CCNNs are still rare. In [23], the authors studied the finite-time bipartite synchronization of delayed CCNNs under quantized control. To our knowledge, there are limited reports on FXT quasi-bipartite synchronization of CCNNs, and further research is needed.
Over the past decades, the control problem of networks has been one of the most widely studied topics, and many useful control methods have been developed, such as adaptive control, sliding mode control, impulsive control, and intermittent control, among others. Intermittent control alternates between periods in which a control input is applied and periods with no control input, making it more economical than continuous control. Hence, the intermittent control strategy has received extensive attention [24,25,26,27]. In [28], the FXT synchronization problem of time-delay complex networks under intermittent pinning control was studied. The authors in [29] addressed FXT and predefined-time cluster lag synchronization of stochastic multi-weighted complex networks via intermittent quantized control. To our knowledge, there is currently no existing literature that addresses the challenging problem of FXT quasi-bipartite synchronization of CCNNs under intermittent control.
Motivated by the analysis above, the primary objective of this paper is to investigate FXT quasi-bipartite synchronization in coupled competitive neural networks. Firstly, the model under consideration incorporates time-varying delays, discontinuous activation functions, and external disturbances simultaneously, rendering it more comprehensive. Secondly, we introduce an FXT aperiodic intermittent control scheme and make, to our knowledge, the first attempt to explore quasi-bipartite synchronization in CCNNs. Furthermore, several robust criteria are established for FXT quasi-bipartite synchronization based on the theory of practical FXT stability. Finally, we provide estimates of the error bounds and settling times.
The paper is organized as follows. Section 2 provides some necessary preliminary knowledge and the model description. Section 3 introduces the main theoretical conclusions. Section 4 provides numerical examples to validate the theoretical conclusions. Section 5 completes our study and discusses future research.
Notations: $\mathbb{R}$ represents the set of real numbers. The $n$-dimensional Euclidean space is represented by $\mathbb{R}^n$. $\mathbb{R}^{n \times m}$ is the set of $n \times m$ real matrices. $\mathbb{N}_+ = \{1, 2, \dots, N\}$. $I_n$ is the identity matrix. $0_n$ denotes the zero matrix. $|P| = (|p_{ij}|)_{n \times n}$. For a symmetric matrix $B$, $\lambda_{\max}(B)$ represents the maximum eigenvalue of $B$. $\mathrm{diag}(\cdot)$ represents the diagonal matrix. $1_n$ denotes the column vector whose elements are all 1. For any $p = (p_1, \dots, p_n)^T \in \mathbb{R}^n$, $\mathrm{sgn}(p) = \mathrm{diag}(\mathrm{sign}(p_1), \dots, \mathrm{sign}(p_n))$, where $\mathrm{sign}(\cdot)$ is the sign function, and $\mathrm{Sign}(D) = (\mathrm{sign}(d_{ij}))_{n \times n}$. The 2-norm of a vector $p$ is denoted by $\|p\|_2$. For a vector $w > 0$ ($< 0$, $\ge 0$, $\le 0$), all components of $w$ are positive (negative, non-negative, non-positive). For vectors $q_1$ and $q_2$, $q_1 < q_2$ ($q_1 \le q_2$) means $q_1 - q_2 < 0$ ($q_1 - q_2 \le 0$). The notation $\otimes$ denotes the Kronecker product. For $k > 0$, $C([-k, 0], \mathbb{R}^n)$ denotes the set of continuous functions from $[-k, 0]$ to $\mathbb{R}^n$.

2. Model Description and Preliminaries

Consider the following competitive neural networks with time-varying delay:
$$
\begin{cases}
\varepsilon \dot{z}_k(t) = -c_k z_k(t) + \sum_{q=1}^{n} a_{kq} f_q(z_q(t)) + \sum_{q=1}^{n} b_{kq} f_q(z_q(t-\tau(t))) + E_k \sum_{l=1}^{p} \omega_l m_{kl}(t), & k = 1, \dots, n,\\
\dot{m}_{kl}(t) = -\bar{c}_k m_{kl}(t) + \bar{a}_k \omega_l f_k(z_k(t)), & l = 1, \dots, p,
\end{cases} \tag{1}
$$
where $z_k(t)$ is the state variable; $m_{kl}(t)$ represents the synaptic efficiency; $c_k > 0$ is the self-feedback coefficient; $a_{kq}$ and $b_{kq}$ are the connection weights and the delayed connection weights, respectively; $f_q(\cdot)$ is the output of a neuron, which is discontinuous; $\omega_l$ is the weight of the external stimulus; $\bar{c}_k$ and $\bar{a}_k$ are given constants; $E_k$ is the intensity of the external stimulus; $\varepsilon$ is the time scale of the STM; and $\tau(t)$ is the time-varying delay.
Remark 1.
Neural network models are often formulated in terms of ordinary differential equations (ODEs), mainly because ODEs are a mathematical tool that effectively describes the dynamic behavior and interactions between neurons. Specifically: first, the state of the neuron needs to be defined. This may include the neuron’s potential, activity, or other relevant variables. These states will be the unknowns of the ODEs. Second, interaction rules need to be established. Rules describing the interactions between neurons are usually expressed in the form of weights and connections. These rules will determine how one neuron affects other neurons. Based on the states of neurons and the rules of interaction, a system of ODEs can be built to describe the dynamic evolution of a neural network. This usually involves differentiating the interaction rules to capture the temporal changes in the system. The ODE system then needs to be provided with initial conditions, which represent the initial state of the neural network at a given moment in time. Finally, the system of ODEs can be solved using numerical methods (e.g., Euler’s method, Runge–Kutta method, etc.) or analytical methods to obtain the state of the neural network at different points in time. In this way, the dynamic behavior of the neural network can be accurately represented in mathematical form. This model of ODEs not only helps in theoretical analysis, but also provides a basis for simulation and emulation, enabling us to better understand the behavior and response of neural networks.
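For illustration, the following minimal Python sketch integrates a two-neuron instance of the compact form (2) with a forward Euler scheme and a constant pre-history; the step size and all parameter values are placeholders chosen for demonstration only, not the values used later in Section 4.

```python
import numpy as np

# Forward-Euler integration of a two-neuron competitive neural network in the
# compact form (2):  dz/dt = -C z + A f(z) + B f(z(t - tau)) + E s,
#                    ds/dt = -Cbar s + Abar f(z).
# All numbers below are illustrative placeholders; the delay is held constant.
n, eps, tau, dt, T = 2, 0.8, 1.0, 0.001, 20.0
C, Cbar = np.diag([2.0, 2.0]) / eps, np.diag([0.5, 0.6])
A = np.array([[1.0, -2.0], [-5.0, 3.0]]) / eps      # hypothetical connection weights
B = np.array([[-1.5, -0.1], [-0.2, -2.5]]) / eps    # hypothetical delayed weights
E, Abar = np.diag([0.01, 0.01]) / eps, np.diag([1.2, 1.0])
f = lambda z: np.sin(z) + 0.01 * np.sign(z)         # discontinuous activation

steps, delay = int(T / dt), int(tau / dt)
z, s = np.zeros((steps + 1, n)), np.zeros((steps + 1, n))
z[0], s[0] = np.array([0.4, 0.6]), np.array([0.7, 0.1])
for k in range(steps):
    z_del = z[max(k - delay, 0)]                    # constant history before t = 0
    dz = -C @ z[k] + A @ f(z[k]) + B @ f(z_del) + E @ s[k]
    ds = -Cbar @ s[k] + Abar @ f(z[k])
    z[k + 1] = z[k] + dt * dz                       # STM (fast) update
    s[k + 1] = s[k] + dt * ds                       # LTM (slow) update
print(z[-1], s[-1])
```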
Let $s(t) = (s_1(t), \dots, s_n(t))^T$ with $s_k(t) = \sum_{l=1}^{p} \omega_l m_{kl}(t)$, and let $\omega = (\omega_1, \dots, \omega_p)^T$. Without loss of generality, suppose $\|\omega\|_2^2 = 1$; then network (1) can be written as
$$
\begin{cases}
\dot{z}(t) = -C z(t) + A f(z(t)) + B f(z(t-\tau(t))) + E s(t),\\
\dot{s}(t) = -\bar{C} s(t) + \bar{A} f(z(t)),
\end{cases} \tag{2}
$$
where $z(t) = (z_1(t), \dots, z_n(t))^T$, $f(z(t)) = (f_1(z_1(t)), \dots, f_n(z_n(t)))^T$, $A = (a_{kq}/\varepsilon)_{n \times n}$, $B = (b_{kq}/\varepsilon)_{n \times n}$, $C = \mathrm{diag}(c_1/\varepsilon, \dots, c_n/\varepsilon)$, $\bar{C} = \mathrm{diag}(\bar{c}_1, \dots, \bar{c}_n)$, $\bar{A} = \mathrm{diag}(\bar{a}_1, \dots, \bar{a}_n)$, and $E = \mathrm{diag}(E_1/\varepsilon, \dots, E_n/\varepsilon)$. The initial values of system (2) are given by $z(t) = \phi_z(t) \in C([-\tau, 0], \mathbb{R}^n)$ and $s(t) = \phi_s(t) \in C([-\tau, 0], \mathbb{R}^n)$.
A class of CCNNs with external disturbances is modeled as follows:
$$
\begin{cases}
\dot{Z}_i(t) = -C Z_i(t) + A f(Z_i(t)) + B f(Z_i(t-\tau(t))) + E S_i(t) + \sum_{j=1}^{N} |d_{ij}| \big( \mathrm{sign}(d_{ij}) Z_j(t) - Z_i(t) \big) + R_i(t) + \Xi_i(t),\\
\dot{S}_i(t) = -\bar{C} S_i(t) + \bar{A} f(Z_i(t)) + \sum_{j=1}^{N} |u_{ij}| \big( \mathrm{sign}(u_{ij}) S_j(t) - S_i(t) \big) + \bar{R}_i(t) + \Xi_i(t),
\end{cases} \quad i \in \mathbb{N}_+, \tag{3}
$$
where $Z_i(t) = (Z_{i1}(t), \dots, Z_{in}(t))^T \in \mathbb{R}^n$ and $S_i(t) = (S_{i1}(t), \dots, S_{in}(t))^T \in \mathbb{R}^n$ are the state variables of the STM and LTM, respectively; $\Xi_i(t) = (\Xi_{i1}(t), \dots, \Xi_{in}(t))^T$ is the external disturbance vector. $D = (d_{ij}) \in \mathbb{R}^{N \times N}$ and $U = (u_{ij}) \in \mathbb{R}^{N \times N}$ are the adjacency matrices associated with the signed graphs $\mathcal{G}(D)$ and $\mathcal{G}(U)$ of the CCNNs, satisfying $d_{ii} = 0$ ($u_{ii} = 0$) for $i \in \mathbb{N}_+$; for $i \neq j$, $d_{ij} \neq 0$ ($u_{ij} \neq 0$) if there is a directed communication link from node $j$ to node $i$, and otherwise $d_{ij} = 0$ ($u_{ij} = 0$). $R_i(t)$ and $\bar{R}_i(t)$ are the controllers to be designed. The initial conditions of network (3) satisfy $Z_i(t) = \varphi_i(t) \in C([-\tau, 0], \mathbb{R}^n)$ and $S_i(t) = \bar{\varphi}_i(t) \in C([-\tau, 0], \mathbb{R}^n)$.
Remark 2.
When $d_{ij} > 0$ ($u_{ij} > 0$), the connection between nodes $i$ and $j$ is cooperative, and the coupling term is given as $d_{ij} ( Z_j(t) - Z_i(t) )$ ($u_{ij} ( S_j(t) - S_i(t) )$). When $d_{ij} < 0$ ($u_{ij} < 0$), the connection between nodes $i$ and $j$ is competitive, and the coupling term is presented as $d_{ij} ( Z_j(t) + Z_i(t) )$ ($u_{ij} ( S_j(t) + S_i(t) )$).
For the convenience of discussion, define $x_i(t) = (Z_i^T(t), S_i^T(t))^T$, $A_1 = (A^T, \bar{A}^T)^T$, $B_1 = (B^T, 0_n^T)^T$, $W_i(t) = (\Xi_i^T(t), \Xi_i^T(t))^T$, $K_i(t) = (R_i^T(t), \bar{R}_i^T(t))^T$, $I = (I_n, 0_n)$, $C_1 = \begin{pmatrix} C & -E \\ 0_n & \bar{C} \end{pmatrix}$, $\mathbf{d}_{ij} = \begin{pmatrix} d_{ij} & 0 \\ 0 & u_{ij} \end{pmatrix} \otimes I_n$, and $D_1 = (\mathbf{d}_{ij}) \in \mathbb{R}^{2nN \times 2nN}$.
Therefore, the coupled competitive neural networks (3) become:
$$
\dot{x}_i(t) = -C_1 x_i(t) + A_1 f(I x_i(t)) + B_1 f(I x_i(t-\tau(t))) + K_i(t) + \sum_{j=1}^{N} |\mathbf{d}_{ij}| \big( \mathrm{Sign}(\mathbf{d}_{ij}) x_j(t) - x_i(t) \big) + W_i(t). \tag{4}
$$
From (2), the tracking target can be described as follows:
$$
\dot{y}(t) = -C_1 y(t) + A_1 f(I y(t)) + B_1 f(I y(t-\tau(t))), \tag{5}
$$
where $y(t) = (z^T(t), s^T(t))^T$ is the state vector.
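For illustration, the block quantities introduced above can be assembled with standard stacking and Kronecker operations. In the following Python sketch the dimensions and the signed adjacency entries are hypothetical placeholders.

```python
import numpy as np

n, N = 2, 3                                          # neuron dimension and node count (placeholders)
C, Cbar = np.diag([2.0, 2.0]), np.diag([0.5, 0.6])
E, Abar = np.diag([0.01, 0.01]), np.diag([1.2, 1.0])
A = np.array([[1.0, 2.0], [5.2, 3.2]])
B = np.array([[4.0, 2.1], [2.0, 3.5]])

# Stacked matrices for x_i = (Z_i^T, S_i^T)^T as defined above.
A1 = np.vstack([A, Abar])                            # A_1 = (A^T, Abar^T)^T, size 2n x n
B1 = np.vstack([B, np.zeros((n, n))])                # B_1 = (B^T, 0_n^T)^T
I_proj = np.hstack([np.eye(n), np.zeros((n, n))])    # I = (I_n, 0_n) extracts Z_i from x_i
C1 = np.block([[C, -E], [np.zeros((n, n)), Cbar]])   # C_1 = [[C, -E], [0_n, Cbar]]

def coupling_block(d_ij, u_ij):
    """Coupling block d_ij = diag(d_ij, u_ij) Kronecker I_n."""
    return np.kron(np.diag([d_ij, u_ij]), np.eye(n))

d = np.array([[0.0, 1.0, -2.0], [1.0, 0.0, 3.0], [-2.0, 3.0, 0.0]])   # hypothetical signed adjacency
u = d.copy()
D1 = np.zeros((2 * n * N, 2 * n * N))
for i in range(N):
    for j in range(N):
        D1[2*n*i:2*n*(i+1), 2*n*j:2*n*(j+1)] = coupling_block(d[i, j], u[i, j])
print(C1.shape, A1.shape, B1.shape, D1.shape)        # (4, 4) (4, 2) (4, 2) (12, 12)
```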
The necessary definitions, lemmas, and assumptions are given below.
Definition 1 
([30]). Consider a system with a discontinuous right-hand side of the form
$$ \dot{\zeta}(t) = F(\zeta(t)), \quad \zeta(0) = \zeta_0, \tag{6} $$
where $\zeta \in \mathbb{R}^n$ and $F(\zeta): \mathbb{R}^n \to \mathbb{R}^n$ is locally bounded and Lebesgue measurable. The function $\zeta(t)$ is said to be a solution in the Filippov sense, defined on an interval $[0, t^*)$ with $t^* \in (0, +\infty]$, if $\zeta(t)$ is absolutely continuous and satisfies the differential inclusion
$$ \dot{\zeta}(t) \in K[F](\zeta(t)), \quad \text{a.e. } t \in [0, t^*), \tag{7} $$
where the set-valued map K [ F ] : R n R n is defined as
$$ K[F](\zeta(t)) = \bigcap_{\delta > 0} \bigcap_{\mu(\Omega) = 0} \overline{\mathrm{co}}\{ F(B(\zeta(t), \delta) \setminus \Omega) \}, $$
where $\overline{\mathrm{co}}$ stands for the convex closure, $\mu(\Omega)$ is the Lebesgue measure of the set $\Omega$, and $B(\zeta(t), \delta)$ denotes the open ball centered at $\zeta(t)$ with radius $\delta$.
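For example, for the scalar sign function the set-valued map above fills in the jump at the origin,
$$ K[\mathrm{sign}](u) = \begin{cases} \{ 1 \}, & u > 0,\\ [-1, 1], & u = 0,\\ \{ -1 \}, & u < 0, \end{cases} $$
so a Filippov solution remains well defined even when the trajectory reaches the discontinuity.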
Definition 2.
The network (4) is said to achieve FXT quasi-bipartite synchronization with network (5) if there is a constant $T_1 > 0$ such that
$$ \lim_{t \to T_1} \| x_i(t) - w_i y(t) \|_2 \le \theta \quad \text{and} \quad \| x_i(t) - w_i y(t) \|_2 \le \theta, \quad \forall t \ge T_1, \ i \in \mathbb{N}_+, $$
where θ is a nonnegative constant.
Definition 3 
([31]). Aperiodically intermittent control is said to have an average control rate $\gamma \in (0, 1)$ if there is $T_\gamma \ge 0$ such that
$$ T_{con}(t, s) \ge \gamma (t - s) - T_\gamma, \quad \forall t > s > t_0, $$
where $T_{con}(t, s)$ denotes the total control-interval length on $[s, t)$ and $T_\gamma$ is called the elasticity number.
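As a simple numerical illustration of Definition 3, the Python sketch below checks whether a candidate pair $(\gamma, T_\gamma)$ satisfies $T_{con}(t, s) \ge \gamma (t - s) - T_\gamma$ on a grid of pairs $(s, t)$; the schedule and the candidate values are assumptions used only for demonstration.

```python
import numpy as np

def control_time(intervals, s, t):
    """Total length of the control intervals [zeta_k, mu_k) intersected with [s, t)."""
    return sum(max(0.0, min(mu, t) - max(zeta, s)) for zeta, mu in intervals)

def check_average_rate(intervals, gamma, T_gamma, horizon, step=0.01):
    """Grid check of T_con(t, s) >= gamma * (t - s) - T_gamma for 0 <= s < t <= horizon."""
    grid = np.arange(0.0, horizon, step)
    for s in grid:
        for t in grid[grid > s]:
            if control_time(intervals, s, t) < gamma * (t - s) - T_gamma - 1e-12:
                return False
    return True

# Hypothetical aperiodic schedule: in each window of length 0.2, control is active on
# [0, 0.05) and [0.1, 0.18), i.e. 0.13 out of 0.2 (an average control rate of 0.65).
schedule = sorted([(0.2 * l, 0.2 * l + 0.05) for l in range(10)] +
                  [(0.2 * l + 0.1, 0.2 * l + 0.18) for l in range(10)])
print(check_average_rate(schedule, gamma=0.65, T_gamma=0.05, horizon=2.0))   # True
```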
Lemma 1 
([32]). If $x_i \ge 0$, $i = 1, \dots, n$, $0 < \xi \le 1$, and $\eta > 1$, then
$$ \sum_{i=1}^{n} x_i^{\xi} \ge \Big( \sum_{i=1}^{n} x_i \Big)^{\xi}, \qquad \sum_{i=1}^{n} x_i^{\eta} \ge n^{1-\eta} \Big( \sum_{i=1}^{n} x_i \Big)^{\eta}. $$
Lemma 2 
([33]). For any $x, y \in \mathbb{R}^n$ and any positive-definite matrix $\Theta \in \mathbb{R}^{n \times n}$, it holds that
$$ 2 x^T y \le x^T \Theta x + y^T \Theta^{-1} y. $$
Lemma 3 
([34]). Assume that there is a Lyapunov function $V(t) \ge 0$ satisfying
$$
\begin{cases}
\dot{V}(t) \le -a_1 V^{q}(t) - a_2 V^{p}(t) - a_3 V(t) + \ell_1, & t \in [\zeta_k, \mu_k),\\
\dot{V}(t) \le a_4 V(t) + \ell_2, & t \in [\mu_k, \zeta_{k+1}),
\end{cases}
$$
in which $a_1, a_2, a_3, a_4, \ell_1, \ell_2$ are positive constants, $0 < p < 1$, and $q > 1$. The system is said to be practically fixed-time stable if $a_4 - \gamma d < 0$ and $(1 - \gamma) d - a_3 < 0$, in which $d > 0$. The settling time $T^*$ satisfies
$$ T^* \le \frac{1 + a_5 (q - 1) T_\gamma}{a_5 (q - 1) \gamma} + \frac{1 + a_6 (1 - p) T_\gamma}{a_6 (1 - p) \gamma}, $$
where $a_5 = a_1 (1 - \phi) \exp\{ T_\gamma (1 - q) d \}$, $a_6 = a_2 (1 - \phi)$, and $0 < \phi < 1$; $\gamma$ and $T_\gamma$ are defined in Definition 3. When $t > T^*$, one has
$$ V(t) \le \max\{ \delta_1, \delta_2, \delta_3 \}, $$
where $\delta_1 = \big( \frac{\ell_1}{\phi a_1} \big)^{\frac{1}{q}}$, $\delta_2 = \big( \frac{\ell_1}{\phi a_2} \big)^{\frac{1}{p}}$, and $\delta_3 = \frac{\ell_2}{d \gamma - a_4}$.
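The quantities appearing in Lemma 3 are closed-form expressions in the constants $a_1, \dots, a_4$, $\ell_1$, $\ell_2$, $p$, $q$, $\gamma$, $T_\gamma$, $d$, and $\phi$. The Python helper below evaluates the settling-time estimate and the ultimate bound for placeholder values that satisfy the stated hypotheses.

```python
import math

def practical_fxt_bounds(a1, a2, a3, a4, l1, l2, p, q, gamma, T_gamma, d, phi):
    """Settling-time estimate T* and ultimate bound max{delta_1, delta_2, delta_3} of Lemma 3."""
    assert 0 < p < 1 < q and 0 < phi < 1 and 0 < gamma < 1
    assert a4 - gamma * d < 0 and (1 - gamma) * d - a3 < 0        # stability conditions
    a5 = a1 * (1 - phi) * math.exp(T_gamma * (1 - q) * d)
    a6 = a2 * (1 - phi)
    T_star = ((1 + a5 * (q - 1) * T_gamma) / (a5 * (q - 1) * gamma)
              + (1 + a6 * (1 - p) * T_gamma) / (a6 * (1 - p) * gamma))
    delta1 = (l1 / (phi * a1)) ** (1.0 / q)
    delta2 = (l1 / (phi * a2)) ** (1.0 / p)
    delta3 = l2 / (d * gamma - a4)
    return T_star, max(delta1, delta2, delta3)

# Placeholder values satisfying the hypotheses of Lemma 3.
print(practical_fxt_bounds(a1=1.0, a2=1.0, a3=60.0, a4=5.0, l1=0.5, l2=0.5,
                           p=0.75, q=2.0, gamma=0.6, T_gamma=0.01, d=100.0, phi=0.5))
```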
Remark 3.
When $\ell_1$ and $\ell_2$ are 0 in Lemma 3, the conclusion still holds; see the proof of Lemma 2 in [35]. In this case, the result based on this lemma is complete synchronization. Hence, this lemma is more general and more widely applicable.
For each $q = 1, \dots, n$, the following assumptions are introduced:
Assumption 1 
([36]). $f_q(\cdot): \mathbb{R} \to \mathbb{R}$ is continuous except on a countable set of isolated points $\{\theta_r^q\}$, where both the left limit $f_q^-(\theta_r^q)$ and the right limit $f_q^+(\theta_r^q)$ exist. In addition, $f_q(\cdot)$ has at most a finite number of discontinuous jump points in each bounded compact set.
Assumption 2
([36]). There exist positive constants l q and h q such that
$$ | \xi_q - \varsigma_q | \le l_q | u - \nu | + h_q, \quad \forall u, \nu \in \mathbb{R}, $$
where $\xi_q \in K[f_q(u)]$ and $\varsigma_q \in K[f_q(\nu)]$, with $K[f_q(\cdot)] = \overline{\mathrm{co}}[f_q(\cdot)] = \big[ \min\{ f_q^-(\cdot), f_q^+(\cdot) \}, \max\{ f_q^-(\cdot), f_q^+(\cdot) \} \big]$.
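For example, the activation $f_q(z) = \sin(z) + 0.01\,\mathrm{sign}(z)$ used later in Section 4 satisfies Assumptions 1 and 2: since every element of $0.01\,K[\mathrm{sign}(\cdot)]$ lies in $[-0.01, 0.01]$, one may take $l_q = 1$ and $h_q = 0.02$, because
$$ | \xi_q - \varsigma_q | \le | \sin u - \sin \nu | + 0.02 \le | u - \nu | + 0.02, \quad \forall u, \nu \in \mathbb{R}. $$
It also satisfies Assumption 5 with $M_q = 1.01$.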
Assumption 3
([23]). The signed graphs are structurally balanced. In other words, the node set $\mathcal{V}$ of $\mathcal{G}$ can be divided into two subsets $\mathcal{V}_1$ and $\mathcal{V}_2$ satisfying $\mathcal{V} = \mathcal{V}_1 \cup \mathcal{V}_2$ and $\mathcal{V}_1 \cap \mathcal{V}_2 = \emptyset$. In addition, the links inside each subset are nonnegative, while the links between the two subsets are negative.
Assumption 4.
The activation functions $f_q(\cdot)$ satisfy
$$ f_q(-z) = -f_q(z), \quad \forall z \in \mathbb{R}. $$
Assumption 5.
There exists a positive constant $M_q$ such that $| f_q(z) | \le M_q$.
Assumption 6.
The external disturbance $\Xi_{ik}(t)$ $(k = 1, \dots, 2n)$ is bounded; that is, there is a positive constant $W_k$ such that $| \Xi_{ik}(t) | \le W_k$.
Assumption 3 implies that there exists a diagonal matrix $\omega = \mathrm{diag}(w_1, \dots, w_N)$, where $w_i = 1$ if node $v_i \in \mathcal{V}_1$ and $w_i = -1$ otherwise. To achieve FXT quasi-bipartite synchronization of the CCNNs, the intermittent controller is designed as follows:
$$
R_i(t) =
\begin{cases}
-\lambda \big( x_i(t) - w_i y(t) \big) - \sigma_1 \mathrm{sgn}\big( x_i(t) - w_i y(t) \big) \big| x_i(t) - w_i y(t) \big|^{\alpha} - \sigma_2 \mathrm{sgn}\big( x_i(t) - w_i y(t) \big) \big| x_i(t) - w_i y(t) \big|^{\beta}, & t \in [\zeta_k, \mu_k),\\
0_{2n}, & t \in [\mu_k, \zeta_{k+1}),
\end{cases} \tag{8}
$$
where $i \in \mathbb{N}_+$, $0 < \alpha < 1$, $\beta > 1$, and $\lambda$, $\sigma_1$, $\sigma_2$ are positive constants.
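A minimal Python sketch of how controller (8) could be evaluated inside a simulation loop is given below; the control schedule, gains, and state values are illustrative assumptions.

```python
import numpy as np

def intermittent_controller(t, x_i, y, w_i, intervals,
                            lam=32.0, sigma1=20.0, sigma2=25.0, alpha=0.96, beta=3.0):
    """Evaluate R_i(t) in (8): feedback on the control intervals, zero input otherwise."""
    err = x_i - w_i * y                                   # x_i(t) - w_i y(t)
    if any(zeta <= t < mu for zeta, mu in intervals):     # work (control) interval
        return (-lam * err
                - sigma1 * np.sign(err) * np.abs(err) ** alpha
                - sigma2 * np.sign(err) * np.abs(err) ** beta)
    return np.zeros_like(err)                             # rest interval: no input

# Usage with a hypothetical schedule and states of dimension 2n = 4.
schedule = [(0.2 * l, 0.2 * l + 0.05) for l in range(50)] + \
           [(0.2 * l + 0.1, 0.2 * l + 0.18) for l in range(50)]
x_i = np.array([1.0, -0.5, 0.2, 0.0])
y = np.array([0.3, 0.1, -0.2, 0.4])
print(intermittent_controller(0.12, x_i, y, w_i=-1, intervals=schedule))
```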
Remark 4.
The intermittent control proposed in this study achieves synchronization within a fixed time, unlike previous intermittent control schemes that only achieve asymptotic results. Moreover, the aperiodic intermittent controller proposed in this study differs from the controllers in [26,27], which take the following forms:
$$
R_i(t) =
\begin{cases}
-r_2 q(e_i(t)) - k_2 \mathrm{sgn}(q(e_i(t))) \big( \| e_i(t) \|^{\alpha} + \| e_i(t) \|^{\beta} \big) - k_2 \Big( \big( \int_{t-\tau}^{t} e_i^T(s) e_i(s) \mathrm{d}s \big)^{\frac{1+\alpha}{2}} + \big( \int_{t-\tau}^{t} e_i^T(s) e_i(s) \mathrm{d}s \big)^{\frac{1+\beta}{2}} \Big) \dfrac{e_i(t)}{\| e_i(t) \|^2}, & mT \le t < (m+\theta) T,\\
-r_2 q(e_i(t)), & (m+\theta) T \le t < (m+1) T,
\end{cases}
$$
and
$$
R_i(t) =
\begin{cases}
-k_i e_i(t) - \alpha \sum_{r=1}^{3} \big( \xi_r^{-1} \sigma_r \int_{t-\tau_r(t)}^{t} e_i^T(s) e_i(s) \mathrm{d}s \big)^{\frac{p+1}{2}} \dfrac{e_i(t)}{\| e_i(t) \|^2} - \beta \sum_{r=1}^{3} \big( \xi_r^{-1} \sigma_r \int_{t-\tau_r(t)}^{t} e_i^T(s) e_i(s) \mathrm{d}s \big)^{\frac{q+1}{2}} \dfrac{e_i(t)}{\| e_i(t) \|^2} - \alpha\, \mathrm{sign}(e_i(t)) | e_i(t) |^{p} - \beta\, \mathrm{sign}(e_i(t)) | e_i(t) |^{q}, & mT \le t < (m+\theta) T,\\
-k_i e_i(t), & (m+\theta) T \le t < (m+1) T.
\end{cases}
$$
In contrast, no linear term needs to be applied on the rest interval. To the best of our knowledge, this is the first approach proposed to achieve FXT quasi-bipartite synchronization for coupled competitive neural networks. Furthermore, aperiodic intermittent control degenerates into periodic intermittent control and continuous control as special cases, which makes it particularly suitable for complex systems that require dynamic and flexible control.
Combined with controller (8), when $t \in [\zeta_k, \mu_k)$, the CCNNs (4) are rewritten as follows:
$$
\begin{aligned}
\dot{x}_i(t) = {} & -C_1 x_i(t) + A_1 f(I x_i(t)) + B_1 f(I x_i(t-\tau(t))) + \sum_{j=1}^{N} \bar{\mathbf{d}}_{ij} x_j(t) + W_i(t) - \lambda \big( x_i(t) - w_i y(t) \big)\\
& - \sigma_1 \mathrm{sgn}\big( x_i(t) - w_i y(t) \big) \big| x_i(t) - w_i y(t) \big|^{\alpha} - \sigma_2 \mathrm{sgn}\big( x_i(t) - w_i y(t) \big) \big| x_i(t) - w_i y(t) \big|^{\beta},
\end{aligned} \tag{9}
$$
where $\bar{D}_1 = (\bar{\mathbf{d}}_{ij}) \in \mathbb{R}^{2nN \times 2nN}$, $\bar{\mathbf{d}}_{ij} = \mathbf{d}_{ij}$ for $i \neq j$, and $\bar{\mathbf{d}}_{ii} = -\sum_{j=1, j \neq i}^{N} |\mathbf{d}_{ij}|$. Then, according to Assumption 3, the CCNNs (9) can be transformed into:
$$
\begin{aligned}
\dot{\tilde{x}}_i(t) = {} & -C_1 \tilde{x}_i(t) + A_1 f(I \tilde{x}_i(t)) + B_1 f(I \tilde{x}_i(t-\tau(t))) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} \tilde{x}_j(t) - \lambda \big( \tilde{x}_i(t) - y(t) \big)\\
& - \sigma_1 \mathrm{sgn}\big( \tilde{x}_i(t) - y(t) \big) \big| \tilde{x}_i(t) - y(t) \big|^{\alpha} - \sigma_2 \mathrm{sgn}\big( \tilde{x}_i(t) - y(t) \big) \big| \tilde{x}_i(t) - y(t) \big|^{\beta} + w_i W_i(t),
\end{aligned} \tag{10}
$$
where $\tilde{x}_i(t) = w_i x_i(t)$, $\tilde{\mathbf{d}}_{ij} = w_i \bar{\mathbf{d}}_{ij} w_j = |\mathbf{d}_{ij}|$ for $i \neq j$, and $\tilde{\mathbf{d}}_{ii} = -\sum_{j=1, j \neq i}^{N} |\mathbf{d}_{ij}|$, $i \in \mathbb{N}_+$.
By a similar analysis, for $t \in [\mu_k, \zeta_{k+1})$, one has
$$
\dot{\tilde{x}}_i(t) = -C_1 \tilde{x}_i(t) + A_1 f(I \tilde{x}_i(t)) + B_1 f(I \tilde{x}_i(t-\tau(t))) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} \tilde{x}_j(t) + w_i W_i(t). \tag{11}
$$
By Definition 1, there is $\varpi(t) = (\varpi_1(t), \dots, \varpi_n(t))^T \in K[f(I y)]$ such that
$$ \dot{y}(t) = -C_1 y(t) + A_1 \varpi(t) + B_1 \varpi(t - \tau(t)). \tag{12} $$
Similarly, there exists at least one measurable function $\hbar_i(t) = (\hbar_{i1}(t), \dots, \hbar_{in}(t))^T \in K[f(I \tilde{x}_i)]$ such that
$$
\begin{cases}
\begin{aligned}
\dot{\tilde{x}}_i(t) = {} & -C_1 \tilde{x}_i(t) + A_1 \hbar_i(t) + B_1 \hbar_i(t-\tau(t)) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} \tilde{x}_j(t) + w_i W_i(t) - \lambda \big( \tilde{x}_i(t) - y(t) \big)\\
& - \sigma_1 \mathrm{sgn}\big( \tilde{x}_i(t) - y(t) \big) \big| \tilde{x}_i(t) - y(t) \big|^{\alpha} - \sigma_2 \mathrm{sgn}\big( \tilde{x}_i(t) - y(t) \big) \big| \tilde{x}_i(t) - y(t) \big|^{\beta},
\end{aligned} & t \in [\zeta_k, \mu_k),\\[4pt]
\dot{\tilde{x}}_i(t) = -C_1 \tilde{x}_i(t) + A_1 \hbar_i(t) + B_1 \hbar_i(t-\tau(t)) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} \tilde{x}_j(t) + w_i W_i(t), & t \in [\mu_k, \zeta_{k+1}).
\end{cases} \tag{13}
$$

3. Main Result

In this part, we mainly study the fixed-time quasi-bipartite synchronization of networks (12) and (13) with external disturbances. According to the fixed-time stability theory, under the designed controller (8), by constructing an appropriate Lyapunov function, some criteria are derived to ensure that networks (12) and (13) achieve fixed-time quasi-bipartite synchronization.
Define the synchronization error $e_i(t) = \tilde{x}_i(t) - y(t)$; then the error dynamical system is:
$$
\begin{cases}
\begin{aligned}
\dot{e}_i(t) = {} & -C_1 e_i(t) + A_1 g_i(t) + B_1 g_i(t-\tau(t)) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} e_j(t) + w_i W_i(t) - \sigma_1 \mathrm{sgn}(e_i(t)) | e_i(t) |^{\alpha}\\
& - \sigma_2 \mathrm{sgn}(e_i(t)) | e_i(t) |^{\beta} - \lambda e_i(t),
\end{aligned} & t \in [\zeta_k, \mu_k),\\[4pt]
\dot{e}_i(t) = -C_1 e_i(t) + A_1 g_i(t) + B_1 g_i(t-\tau(t)) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} e_j(t) + w_i W_i(t), & t \in [\mu_k, \zeta_{k+1}),
\end{cases} \tag{14}
$$
where $g_i(t) = \hbar_i(t) - \varpi(t)$ and $g_i(t - \tau(t)) = \hbar_i(t - \tau(t)) - \varpi(t - \tau(t))$.
Theorem 1.
Based on Assumptions 1–6 and the controller (8), if
$$
\begin{cases}
2 \lambda - \lambda_{\max}(\Phi) - \kappa > 0,\\
\psi + \lambda_{\max}(\Phi) + \kappa > 0,\\
\psi + \lambda_{\max}(\Phi) + \kappa - \gamma d < 0,\\
(1 - \gamma) d - 2 \lambda + \lambda_{\max}(\Phi) + \kappa < 0,
\end{cases}
$$
where $\Phi = -I_N \otimes (C_1 + C_1^T) + I_N \otimes \big( |A_1| L I + (|A_1| L I)^T \big) + (\tilde{D}_1 + \tilde{D}_1^T)$, $\tilde{D}_1 = (\tilde{\mathbf{d}}_{ij}) \in \mathbb{R}^{2nN \times 2nN}$, $\kappa = \kappa_1 + 2 \kappa_2 + \kappa_3$, $\psi$, $\lambda$, $d$ are positive constants, and $\gamma \in (0, 1)$ stands for the average control rate, then quasi-bipartite synchronization between networks (12) and (13) can be ensured in fixed time. The settling time satisfies
$$ T^* \le \frac{2 + a_5 (\beta - 1) T_\gamma}{a_5 (\beta - 1) \gamma} + \frac{2 + a_6 (1 - \alpha) T_\gamma}{a_6 (1 - \alpha) \gamma}, $$
where $a_5 = 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} (1 - \phi) \exp\{ \frac{1}{2} T_\gamma (1 - \beta) d \}$, $a_6 = 2 \sigma_1 (1 - \phi)$, $T_\gamma$ represents the elasticity number, and $0 < \phi < 1$. The state trajectory of (14) converges to the compact set $\Omega = \{ e(t) : \| e(t) \|_2^2 \le \max\{ \delta_1, \delta_2, \delta_3 \} \}$ within $T^*$, where $\delta_1 = \big( \frac{\Delta}{2 \phi \sigma_2 (nN)^{\frac{1-\beta}{2}}} \big)^{\frac{2}{\beta+1}}$, $\delta_2 = \big( \frac{\Delta}{2 \phi \sigma_1} \big)^{\frac{2}{\alpha+1}}$, $\delta_3 = \frac{\Delta}{d \gamma - \tilde{\psi}}$, $\tilde{\psi} = \psi + \lambda_{\max}(\Phi) + \kappa$, and $\Delta = \frac{N}{\kappa_1} (|A_1| h)^T |A_1| h + \frac{2N}{\kappa_2} (|B_1| M)^T |B_1| M + \frac{N}{\kappa_3} W^T W$.
Proof. 
Consider the following Lyapunov function
$$ V(t) = e^T(t) e(t), $$
where $e(t) = (e_1^T(t), \dots, e_N^T(t))^T$.
For $t \in [\zeta_k, \mu_k)$, calculating the derivative of $V(t)$ along the trajectory of the error system (14), one has
$$
\begin{aligned}
\dot{V}(t) = 2 \sum_{i=1}^{N} e_i^T(t) \Big( & -C_1 e_i(t) + A_1 g_i(t) + B_1 g_i(t-\tau(t)) + \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} e_j(t) + w_i W_i(t) - \lambda e_i(t)\\
& - \sigma_1 \mathrm{sgn}(e_i(t)) | e_i(t) |^{\alpha} - \sigma_2 \mathrm{sgn}(e_i(t)) | e_i(t) |^{\beta} \Big).
\end{aligned}
$$
Based on Assumption 2, we obtain
$$ 2 \sum_{i=1}^{N} e_i^T(t) A_1 g_i(t) \le 2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | | \hbar_i(t) - \varpi(t) | \le 2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | \big[ L I | e_i(t) | + h \big], \tag{15} $$
where $L = \mathrm{diag}\{ l_1, \dots, l_n \}$ and $h = (h_1, \dots, h_n)^T$.
By Assumption 5, there is
$$ 2 \sum_{i=1}^{N} e_i^T(t) B_1 g_i(t-\tau(t)) \le 2 \sum_{i=1}^{N} | e_i^T(t) | | B_1 | | \hbar_i(t-\tau(t)) - \varpi(t-\tau(t)) | \le 4 \sum_{i=1}^{N} | e_i^T(t) | | B_1 | M, \tag{16} $$
where $M = (M_1, \dots, M_n)^T$.
From (15) and (16), one has
$$
\begin{aligned}
\dot{V}(t) \le {} & -2 \sum_{i=1}^{N} e_i^T(t) C_1 e_i(t) + 2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | L I | e_i(t) | + 2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | h + 2 \sum_{i=1}^{N} e_i^T(t) \sum_{j=1}^{N} \tilde{\mathbf{d}}_{ij} e_j(t)\\
& + 2 \sum_{i=1}^{N} e_i^T(t) w_i W_i(t) - 2 \lambda \sum_{i=1}^{N} e_i^T(t) e_i(t) - 2 \sigma_1 \sum_{i=1}^{N} e_i^T(t) \mathrm{sgn}(e_i(t)) | e_i(t) |^{\alpha}\\
& - 2 \sigma_2 \sum_{i=1}^{N} e_i^T(t) \mathrm{sgn}(e_i(t)) | e_i(t) |^{\beta} + 4 \sum_{i=1}^{N} | e_i^T(t) | | B_1 | M. 
\end{aligned} \tag{17}
$$
According to Lemma 1, one has
$$ 2 \sigma_1 \sum_{i=1}^{N} e_i^T(t) \mathrm{sgn}(e_i(t)) | e_i(t) |^{\alpha} = 2 \sigma_1 \sum_{i=1}^{N} \sum_{k=1}^{2n} | e_{ik}(t) |^{\alpha+1} \ge 2 \sigma_1 \sum_{i=1}^{N} \Big( \sum_{k=1}^{2n} e_{ik}^2(t) \Big)^{\frac{\alpha+1}{2}} \ge 2 \sigma_1 \Big( \sum_{i=1}^{N} e_i^T(t) e_i(t) \Big)^{\frac{\alpha+1}{2}}, \tag{18} $$
and
$$ 2 \sigma_2 \sum_{i=1}^{N} e_i^T(t) \mathrm{sgn}(e_i(t)) | e_i(t) |^{\beta} = 2 \sigma_2 \sum_{i=1}^{N} \sum_{k=1}^{2n} | e_{ik}(t) |^{\beta+1} = 2 \sigma_2 \sum_{i=1}^{N} \sum_{k=1}^{2n} \big( e_{ik}^2(t) \big)^{\frac{\beta+1}{2}} \ge 2 (2nN)^{\frac{1-\beta}{2}} \sigma_2 \Big( \sum_{i=1}^{N} e_i^T(t) e_i(t) \Big)^{\frac{\beta+1}{2}}. \tag{19} $$
From Lemma 2, it follows that
$$
\begin{aligned}
2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | h & \le \kappa_1 \sum_{i=1}^{N} | e_i^T(t) | | e_i(t) | + \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h,\\
4 \sum_{i=1}^{N} | e_i^T(t) | | B_1 | M & \le 2 \kappa_2 \sum_{i=1}^{N} | e_i^T(t) | | e_i(t) | + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M,
\end{aligned} \tag{20}
$$
and
$$ 2 \sum_{i=1}^{N} e_i^T(t) w_i W_i(t) \le \kappa_3 \sum_{i=1}^{N} e_i^T(t) e_i(t) + \frac{N}{\kappa_3} W^T W, \tag{21} $$
where $W = (W_1, \dots, W_{2n})^T$.
It follows from (17)–(21) that
$$
\begin{aligned}
\dot{V}(t) \le {} & -2 \sum_{i=1}^{N} e_i^T(t) C_1 e_i(t) + 2 \sum_{i=1}^{N} | e_i^T(t) | | A_1 | L I | e_i(t) | + \kappa_1 \sum_{i=1}^{N} | e_i^T(t) | | e_i(t) | + 2 \sum_{i=1}^{N} \sum_{j=1}^{N} e_i^T(t) \tilde{\mathbf{d}}_{ij} e_j(t)\\
& + 2 \kappa_2 \sum_{i=1}^{N} | e_i^T(t) | | e_i(t) | + \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M - 2 \lambda \sum_{i=1}^{N} e_i^T(t) e_i(t)\\
& - 2 \sigma_1 \Big( \sum_{i=1}^{N} e_i^T(t) e_i(t) \Big)^{\frac{\alpha+1}{2}} - 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} \Big( \sum_{i=1}^{N} e_i^T(t) e_i(t) \Big)^{\frac{\beta+1}{2}} + \kappa_3 \sum_{i=1}^{N} e_i^T(t) e_i(t) + \frac{N}{\kappa_3} W^T W\\
\le {} & -2 e^T(t) ( I_N \otimes C_1 ) e(t) + 2 e^T(t) ( I_N \otimes |A_1| L I ) e(t) + \kappa_1 e^T(t) e(t) + 2 e^T(t) \tilde{D}_1 e(t) + 2 \kappa_2 e^T(t) e(t)\\
& - 2 \lambda e^T(t) e(t) - 2 \sigma_1 \big( e^T(t) e(t) \big)^{\frac{\alpha+1}{2}} - 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} \big( e^T(t) e(t) \big)^{\frac{\beta+1}{2}} + \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h\\
& + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M + \kappa_3 e^T(t) e(t) + \frac{N}{\kappa_3} W^T W\\
\le {} & \big( \lambda_{\max}(\Phi) + \kappa_1 + 2 \kappa_2 + \kappa_3 - 2 \lambda \big) e^T(t) e(t) - 2 \sigma_1 \big( e^T(t) e(t) \big)^{\frac{\alpha+1}{2}} - 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} \big( e^T(t) e(t) \big)^{\frac{\beta+1}{2}}\\
& + \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M + \frac{N}{\kappa_3} W^T W,
\end{aligned}
$$
where $\Phi = -I_N \otimes (C_1 + C_1^T) + I_N \otimes \big( |A_1| L I + (|A_1| L I)^T \big) + (\tilde{D}_1 + \tilde{D}_1^T)$.
Therefore,
$$ \dot{V}(t) \le -\lambda_1 V(t) - 2 \sigma_1 V^{\frac{\alpha+1}{2}}(t) - 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} V^{\frac{\beta+1}{2}}(t) + \Delta $$
for $t \in [\zeta_k, \mu_k)$, where $\lambda_1 = 2 \lambda - \lambda_{\max}(\Phi) - \kappa$, $\Delta = \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M + \frac{N}{\kappa_3} W^T W$, and $\kappa = \kappa_1 + 2 \kappa_2 + \kappa_3$.
Then, for t [ μ k , ζ k + 1 ) , we have
$$
\begin{aligned}
\dot{V}(t) \le {} & 2 e^T(t) \big[ -I_N \otimes C_1 + I_N \otimes |A_1| L I + \tilde{D}_1 \big] e(t) + ( \psi + \kappa ) e^T(t) e(t) + \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h\\
& + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M + \frac{N}{\kappa_3} W^T W\\
\le {} & \tilde{\psi} V(t) + \Delta,
\end{aligned}
$$
where $\tilde{\psi} = \lambda_{\max}(\Phi) + \psi + \kappa$.
Based on Lemma 3, the networks (12) and (13) achieve FXT quasi-bipartite synchronization, and the settling time is estimated by $T^*$. Moreover, the synchronization error $e(t)$ converges to $\Omega = \{ e(t) : \| e(t) \|_2^2 \le \max\{ \delta_1, \delta_2, \delta_3 \} \}$ within $T^*$, where $\delta_1 = \big( \frac{\Delta}{2 \phi \sigma_2 (nN)^{\frac{1-\beta}{2}}} \big)^{\frac{2}{\beta+1}}$, $\delta_2 = \big( \frac{\Delta}{2 \phi \sigma_1} \big)^{\frac{2}{\alpha+1}}$, and $\delta_3 = \frac{\Delta}{d \gamma - \tilde{\psi}}$.
The theorem is proven. □
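The four scalar conditions of Theorem 1 are easy to check numerically once $\Phi$ is assembled. The Python sketch below does so for generic inputs; the matrices and constants in the usage lines are placeholders of compatible size, not the data of Section 4.

```python
import numpy as np

def check_theorem1(C1, A1, L, I_proj, D1_tilde, N, lam, psi, d, gamma, kappa1, kappa2, kappa3):
    """Evaluate the four inequalities of Theorem 1; returns (all satisfied, lambda_max(Phi))."""
    ALI = np.abs(A1) @ L @ I_proj                         # |A_1| L I
    Phi = (np.kron(np.eye(N), -(C1 + C1.T))
           + np.kron(np.eye(N), ALI + ALI.T)
           + (D1_tilde + D1_tilde.T))
    lam_max = float(np.max(np.linalg.eigvalsh((Phi + Phi.T) / 2)))
    kappa = kappa1 + 2 * kappa2 + kappa3
    conds = [2 * lam - lam_max - kappa > 0,
             psi + lam_max + kappa > 0,
             psi + lam_max + kappa - gamma * d < 0,
             (1 - gamma) * d - 2 * lam + lam_max + kappa < 0]
    return all(conds), lam_max

# Placeholder usage with small matrices of compatible size (n = 2, N = 3, no coupling).
n, N = 2, 3
C1 = np.block([[np.diag([2.0, 2.0]), -np.diag([0.01, 0.01])],
               [np.zeros((n, n)), np.diag([0.5, 0.6])]])
A1 = np.vstack([np.array([[1.0, 2.0], [5.2, 3.2]]), np.diag([1.2, 1.0])])
L, I_proj = np.eye(n), np.hstack([np.eye(n), np.zeros((n, n))])
D1_tilde = np.zeros((2 * n * N, 2 * n * N))
print(check_theorem1(C1, A1, L, I_proj, D1_tilde, N, lam=32.0, psi=1.0, d=80.0,
                     gamma=0.65, kappa1=0.01, kappa2=10.0, kappa3=1.0))
# All four conditions hold for these placeholder values.
```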
Remark 5.
A previous study [23] investigated finite-time bipartite synchronization of competitive neural networks, in which the settling time depends on the initial state. In contrast, Theorem 1 provides a sufficient condition for achieving FXT quasi-bipartite synchronization in CCNNs, where the settling time no longer depends on the initial state but only on the adjustable controller parameters and the average control rate. Furthermore, our study utilizes a more practical intermittent controller compared to the one used in [23].
In particular, when the external disturbance of the network is zero, the following Corollary 1 is obtained. In this case, the convergence domain of the network is smaller, i.e., the network is closer to complete synchronization without external disturbance.
Corollary 1.
Based on Assumptions 1–5 and the controller (8), if
$$
\begin{cases}
2 \lambda - \lambda_{\max}(\Phi) - \tilde{\kappa} > 0,\\
\varrho_1 + \lambda_{\max}(\Phi) + \tilde{\kappa} > 0,\\
\varrho_1 + \lambda_{\max}(\Phi) + \tilde{\kappa} - \gamma d < 0,\\
(1 - \gamma) d - 2 \lambda + \lambda_{\max}(\Phi) + \tilde{\kappa} < 0,
\end{cases}
$$
where $\Phi = -I_N \otimes (C_1 + C_1^T) + I_N \otimes \big( |A_1| L I + (|A_1| L I)^T \big) + (\tilde{D}_1 + \tilde{D}_1^T)$, $\tilde{\kappa} = \kappa_1 + 2 \kappa_2$, $\lambda$, $\varrho_1$, $d$ are positive constants, and $\gamma \in (0, 1)$ stands for the average control rate, then quasi-bipartite synchronization between networks (12) and (13) can be ensured in fixed time. The settling time satisfies
$$ T \le \frac{2 + a_5 (\beta - 1) T_\gamma}{a_5 (\beta - 1) \gamma} + \frac{2 + a_6 (1 - \alpha) T_\gamma}{a_6 (1 - \alpha) \gamma}, $$
where $a_5 = 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} (1 - \phi) \exp\{ \frac{1}{2} T_\gamma (1 - \beta) d \}$, $a_6 = 2 \sigma_1 (1 - \phi)$, $T_\gamma$ represents the elasticity number, and $0 < \phi < 1$. The state trajectory of (14) converges to the compact set $\tilde{\Omega} = \{ e(t) : \| e(t) \|_2^2 \le \max\{ \tilde{\delta}_1, \tilde{\delta}_2, \tilde{\delta}_3 \} \}$, where $\tilde{\delta}_1 = \big( \frac{\tilde{\Delta}}{2 \phi \sigma_2 (nN)^{\frac{1-\beta}{2}}} \big)^{\frac{2}{\beta+1}}$, $\tilde{\delta}_2 = \big( \frac{\tilde{\Delta}}{2 \phi \sigma_1} \big)^{\frac{2}{\alpha+1}}$, $\tilde{\delta}_3 = \frac{\tilde{\Delta}}{d \gamma - \varrho}$, $\varrho = \varrho_1 + \lambda_{\max}(\Phi) + \tilde{\kappa}$, and $\tilde{\Delta} = \frac{N}{\kappa_1} ( |A_1| h )^T |A_1| h + \frac{2N}{\kappa_2} ( |B_1| M )^T |B_1| M$.
Proof. 
The proof process of Corollary 1 is the same as Theorem 1, and it is not repeated. □
Remark 6.
Unlike prior studies [34,37], our research takes into account the impact of discontinuous activation functions, external disturbances, and competitive relationships between nodes, which more closely mimics real-world networks. Specifically, due to the competitive nature among nodes and the presence of external disturbances, the synchronization method employed in [34] is not directly applicable to achieve FXT quasi-bipartite synchronization. Consequently, our main theorem extends the prior findings of FXT bipartite synchronization and is tailored to suit the aforementioned conditions.
If the time-varying delay, discontinuous activation functions, and external disturbance are not considered, Corollary 2 is obtained. The error of Corollary 2 will converge to 0, and the result is fixed-time complete bipartite synchronization.
Corollary 2.
Based on Assumptions 1–4 and the controller (8), if
$$
\begin{cases}
2 \lambda - \lambda_{\max}(\Phi) > 0,\\
\varrho_1 + \lambda_{\max}(\Phi) > 0,\\
\varrho_1 + \lambda_{\max}(\Phi) - \gamma d < 0,\\
(1 - \gamma) d - 2 \lambda + \lambda_{\max}(\Phi) < 0,
\end{cases}
$$
where $\Phi = -I_N \otimes (C_1 + C_1^T) + I_N \otimes \big( |A_1| L I + (|A_1| L I)^T \big) + (\tilde{D}_1 + \tilde{D}_1^T)$, $\lambda$, $\varrho_1$, $d$ are positive constants, and $\gamma \in (0, 1)$ stands for the average control rate, then bipartite synchronization between networks (12) and (13) can be ensured in fixed time. The settling time satisfies
$$ T \le \frac{2 + a_5 (\beta - 1) T_\gamma}{a_5 (\beta - 1) \gamma} + \frac{2 + a_6 (1 - \alpha) T_\gamma}{a_6 (1 - \alpha) \gamma}, $$
where $a_5 = 2 \sigma_2 (2nN)^{\frac{1-\beta}{2}} (1 - \phi) \exp\{ \frac{1}{2} T_\gamma (1 - \beta) d \}$, $a_6 = 2 \sigma_1 (1 - \phi)$, $T_\gamma$ represents the elasticity number, and $0 < \phi < 1$.
Proof. 
The proof process of Corollary 2 is also similar to the proof process of Theorem 1, and the proof is no longer repeated. □
Remark 7.
It can be seen from Corollary 2 that the time-varying delay, discontinuous activation function, and external disturbance prevent the network from achieving complete synchronization; only quasi-synchronization can be achieved. However, the results of [38] show that competitive neural networks with time-varying delays and discontinuous activation functions can achieve fixed-time complete synchronization rather than quasi-synchronization. The difference stems from the intermittent control strategy adopted here, so achieving complete fixed-time synchronization under intermittent control is a problem worth studying in the future.
Remark 8.
Predefined-time control has emerged as a promising method that allows the synchronization time to be preset independently of the system parameters. Due to its potential in various applications, predefined-time synchronization has become a highly topical research area. However, there is still insufficient research into the predefined-time bipartite synchronization of CCNNs; therefore, further investigation is necessary.

4. Numerical Examples

Two numerical examples are given in this part to demonstrate the validity of the derived theoretical conclusions.
Consider the following network:
$$
\begin{cases}
\dot{z}(t) = -\frac{1}{\varepsilon} C z(t) + \frac{1}{\varepsilon} A f(z(t)) + \frac{1}{\varepsilon} B f(z(t - \tau(t))) + \frac{1}{\varepsilon} E s(t),\\
\dot{s}(t) = -\bar{C} s(t) + \bar{A} f(z(t)),
\end{cases} \tag{22}
$$
where $z(t) = (z_1(t), z_2(t))^T$, $s(t) = (s_1(t), s_2(t))^T$, $y(t) = (z^T(t), s^T(t))^T$, $\varepsilon = 0.83$, $\tau(t) = \frac{2 e^t}{2 (1 + e^t)}$, $f(z(t)) = \big( \sin(z_1(t)) + 0.01\,\mathrm{sign}(z_1(t)), \sin(z_2(t)) + 0.01\,\mathrm{sign}(z_2(t)) \big)^T$, $C = \mathrm{diag}(2, 2)$, $\bar{C} = \mathrm{diag}(0.5, 0.6)$, $\bar{A} = \mathrm{diag}(1.2, 1)$, $E = \mathrm{diag}(0.01, 0.01)$, $A = [1, 2; 5.2, 3.2]$, and $B = [4, 2.1; 2, 3.5]$. Figure 1 shows the chaotic trajectories of network (22) with initial values $z(0) = (0.4, 0.6)^T$ and $s(0) = (0.7, 0.1)^T$.
The coupled competitive neural network with seven nodes is given as follows:
$$
\begin{cases}
\dot{x}_i(t) = -\frac{1}{\varepsilon} C x_i(t) + \frac{1}{\varepsilon} A f(x_i(t)) + \frac{1}{\varepsilon} B f(x_i(t - \tau(t))) + \frac{1}{\varepsilon} E y_i(t) + \sum_{j=1}^{7} |d_{ij}| \big( \mathrm{sign}(d_{ij}) x_j(t) - x_i(t) \big) + R_i(t) + \Xi_i(t),\\
\dot{y}_i(t) = -\bar{C} y_i(t) + \bar{A} f(x_i(t)) + \sum_{j=1}^{7} |u_{ij}| \big( \mathrm{sign}(u_{ij}) y_j(t) - y_i(t) \big) + \bar{R}_i(t) + \Xi_i(t),
\end{cases} \quad i = 1, 2, \dots, 7, \tag{23}
$$
where $x_i(t) = (x_{i1}(t), x_{i2}(t))^T$, $y_i(t) = (y_{i1}(t), y_{i2}(t))^T$, $\mathbf{x}_i(t) = (x_i^T(t), y_i^T(t))^T$, $\Xi_{i1}(t) = 0.2 \cos(t)$, and $\Xi_{i2}(t) = 0.1 \sin(2t)$.
The topology of the network (23) is presented in Figure 2. Let $\mathcal{V}_1 = \{1, 2, 3\}$ and $\mathcal{V}_2 = \{4, 5, 6, 7\}$, and take $\omega = \mathrm{diag}(1, 1, 1, -1, -1, -1, -1)$ and $(e_i^T(t), \hat{e}_i^T(t))^T = \mathbf{x}_i(t) - w_i y(t)$. This partition satisfies the definition of structural balance for a signed graph.
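Structural balance can be verified numerically by checking that the gauge transform $\omega D \omega$ leaves no negative off-diagonal entries. In the Python sketch below the signed adjacency matrix is a hypothetical stand-in, since the edge weights of Figure 2 are not listed in the text.

```python
import numpy as np

def is_structurally_balanced(D, first_subset):
    """Check that w D w has no negative links, with w_i = +1 on first_subset and -1 elsewhere."""
    w = np.array([1.0 if i in first_subset else -1.0 for i in range(D.shape[0])])
    Dt = np.diag(w) @ D @ np.diag(w)
    off_diag = ~np.eye(D.shape[0], dtype=bool)
    return bool(np.all(Dt[off_diag] >= 0))

# Hypothetical 7-node signed adjacency: positive links inside {1,2,3} and inside {4,...,7},
# negative links across the two subsets (node indices shifted to 0,...,6 here).
D = np.zeros((7, 7))
D[0, 1] = D[1, 2] = D[4, 3] = D[5, 4] = D[6, 5] = 1.0    # cooperative links
D[3, 2] = D[2, 6] = -1.0                                  # competitive links across subsets
print(is_structurally_balanced(D, first_subset={0, 1, 2}))   # True
```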
The control intervals of the intermittent controller (8) are designed as follows:
$$ \bigcup_{k=0}^{+\infty} [\zeta_k, \mu_k) = \bigcup_{l=0}^{+\infty} \Big( [0.2 l, 0.2 l + 0.05) \cup [0.2 l + 0.1, 0.2 l + 0.18) \Big). $$
Set the initial values of network (23) to be $x_1(v) = (1.9, 2.8)^T$, $x_2(v) = (1.3, 1.4)^T$, $x_3(v) = (2.2, 1.4)^T$, $x_4(v) = (2.1, 2.9)^T$, $x_5(v) = (2.4, 1.8)^T$, $x_6(v) = (1.2, 0.9)^T$, $x_7(v) = (1.3, 0.2)^T$, $y_1(v) = (2.6, 0.1)^T$, $y_2(v) = (0.4, 0.1)^T$, $y_3(v) = (0.1, 0.9)^T$, $y_4(v) = (0.4, 1.6)^T$, $y_5(v) = (0.5, 1.6)^T$, $y_6(v) = (0.1, 0.8)^T$, $y_7(v) = (1.7, 0.7)^T$, for $v \in [-1, 0]$. By simple calculation, $\gamma = 0.65$, and we take $\lambda = 32$. Choose $\alpha = 0.96$, $\beta = 3$, $\sigma_1 = 20$, $\sigma_2 = 25$, $\kappa_1 = 0.01$, $\kappa_2 = 10$, $\phi = 0.5$, $\kappa_3 = 1$, $d = 101$, and $T_\gamma = 0.0001$. By Theorem 1, the networks (22) and (23) achieve FXT quasi-bipartite synchronization, with $T^* = 5.59$ s and an error bound of 2.97.
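As a sanity check, the settling-time estimate of Theorem 1 can be recomputed directly from the parameters chosen above; the short Python script below reproduces $T^* \approx 5.59$ s (the error bound additionally depends on the disturbance-related constant $\Delta$, which is not recomputed here).

```python
import math

# Parameters chosen in this example (n = 2 neurons, N = 7 nodes).
n, N = 2, 7
alpha, beta, sigma1, sigma2 = 0.96, 3.0, 20.0, 25.0
gamma, d, phi, T_gamma = 0.65, 101.0, 0.5, 0.0001

a5 = 2 * sigma2 * (2 * n * N) ** ((1 - beta) / 2) * (1 - phi) \
     * math.exp(0.5 * T_gamma * (1 - beta) * d)
a6 = 2 * sigma1 * (1 - phi)
T_star = ((2 + a5 * (beta - 1) * T_gamma) / (a5 * (beta - 1) * gamma)
          + (2 + a6 * (1 - alpha) * T_gamma) / (a6 * (1 - alpha) * gamma))
print(round(T_star, 2))   # approximately 5.59
```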
Under the controller (8), Figure 3a,b depict the trajectories of the first and second components of the STM, and Figure 3c,d depict the trajectories of the first and second components of the LTM. The time evolution of the synchronization errors $e_i(t) = x_i(t) - w_i z(t)$ and $\hat{e}_i(t) = y_i(t) - w_i s(t)$ between systems (22) and (23) is depicted in Figure 4. Define $E_1(t) = \| e(t) \|_2^2$ and $E_2(t) = \| \hat{e}(t) \|_2^2$, in which $e(t) = (e_1(t), \dots, e_7(t))^T$ and $\hat{e}(t) = (\hat{e}_1(t), \dots, \hat{e}_7(t))^T$. Figure 5a,b show that the synchronization errors eventually converge to a bounded region. Together with Figure 3 and Figure 4, it can be seen that systems (22) and (23) under the intermittent controller (8) achieve quasi-bipartite synchronization in fixed time, which coincides with the conclusion of Theorem 1.

5. Conclusions

In this study, the problem of FXT quasi-bipartite synchronization for CCNNs is considered via an intermittent control strategy. Compared with existing FXT intermittent control strategies, the linear term on the rest interval is removed, which makes our control method simpler and more economical. In addition, the influence of discontinuous activation functions, external disturbances, and competitive relationships between nodes is considered in the synchronization analysis, which makes the obtained criteria more general. Note that we can implement control measures on all nodes to achieve FXT synchronization when the network topology is known. In practical scenarios, however, it is often either infeasible or unnecessary to control every single node. Therefore, intermittent pinning control will be considered in forthcoming research.

Author Contributions

Methodology, S.T. and J.L.; formal analysis, S.T.; resources, J.W.; writing—original draft preparation, S.T.; writing—review and editing, S.T. and J.L.; supervision, H.J. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Special Project for Local Science and Technology Development Guided by the Central Government (Grant No. ZYYD2022A05), the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grants No. 2021D01C113, No. 2021D01D10), the National Natural Science Foundation of People’s Republic of China (Grants No. 62006196, No. 62163035), Tianshan Talent Program (Grant No. 2022TSYCLJ0004) and Xinjiang Key Laboratory of Applied Mathematics (Grant No. XJDX1401).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cai, Q.; Alam, S.; Pratama, M.; Liu, J. Robustness evaluation of multipartite complex networks based on percolation theory. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 6244–6257. [Google Scholar] [CrossRef]
  2. Hu, J.; Hu, P.; Kang, X.; Zhang, H.; Fan, S. Pan-sharpening via multiscale dynamic convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2231–2244. [Google Scholar] [CrossRef]
  3. Li, X.; Pal, N.; Li, H.; Huang, T. Intermittent event-triggered exponential stabilization for state dependent switched fuzzy neural networks with mixed delays. IEEE Trans. Fuzzy Syst. 2022, 30, 3312–3321. [Google Scholar] [CrossRef]
  4. Meyer-Bäse, A.; Ohl, F.; Scheich, H. Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput. 1996, 8, 1731–1742. [Google Scholar] [CrossRef] [PubMed]
  5. Meyer-Bäse, A.; Pilyugin, S.; Wismüller, A.; Foo, S. Local exponential stability of competitive neural networks with different time scales. Eng. Appl. Artif. Intell. 2004, 17, 227–232. [Google Scholar] [CrossRef]
  6. Chen, T. Global exponential stability of delayed Hopfield neural networks. Neural Netw. 2001, 14, 977–980. [Google Scholar] [CrossRef] [PubMed]
  7. Arik, S.; Tavsanoglu, V. On the global asymptotic stability of delayed cellular neural networks. IEEE Trans. Circuits Syst. I-Regul. Pap. 2000, 47, 571–574. [Google Scholar] [CrossRef]
  8. Shi, Y.; Zhu, P. Synchronization of stochastic competitive neural networks with different timescales and reaction-diffusion terms. Neural Comput. 2014, 9, 2005–2024. [Google Scholar] [CrossRef]
  9. Yang, W.; Wang, Y.; Shen, Y.; Pan, L. Cluster synchronization of coupled delayed competitive neural networks with two time scales. Nonlinear Dyn. 2017, 90, 2767–2782. [Google Scholar] [CrossRef]
  10. Wei, C.; Wang, X.; Hui, M.; Zeng, Z. Quasi-synchronization of fractional multiweighted coupled neural networks via aperiodic intermittent control. IEEE Trans. Cybern. 2023, 54, 1671–1684. [Google Scholar] [CrossRef]
  11. He, Z.; Li, C.; Cao, C.; Li, H. Periodicity and global exponential periodic synchronization of delayed neural networks with discontinuous activations and impulsive perturbations. Neurocomputing 2021, 431, 111–127. [Google Scholar] [CrossRef]
  12. Xiang, J.; Ren, J.; Tan, M. Stability analysis for memristor-based stochastic multi-layer neural networks with coupling disturbance. Chaos Solitons Fractals 2022, 165, 112771. [Google Scholar] [CrossRef]
  13. Hu, J.; Tan, H.; Zeng, C. Global exponential stability of delayed complex-valued neural networks with discontinuous activation functions. Neurocomputing 2020, 416, 1–11. [Google Scholar] [CrossRef]
  14. Han, Z.; Chen, N.; Wei, X.; Yuan, M.; Li, H. Projective synchronization of delayed uncertain coupled memristive neural networks and their application. Entropy 2023, 25, 1241. [Google Scholar] [CrossRef]
  15. Peng, H.; Lu, R.; Shi, P. Synchronization control for coupled delayed neural networks with time-varying coupling via markov pinning strategy. IEEE Syst. J. 2022, 16, 4071–4081. [Google Scholar] [CrossRef]
  16. Cao, Y.; Zhao, L.; Wen, S.; Huang, T. Lag H synchronization of coupled neural networks with multiple state couplings and multiple delayed state couplings. Neural Netw. 2022, 151, 143–155. [Google Scholar] [CrossRef] [PubMed]
  17. Sheng, Y.; Gong, H.; Zeng, Z. Global synchronization of complex-valued neural networks with unbounded time-varying delays. Neural Netw. 2023, 162, 309–317. [Google Scholar] [CrossRef]
  18. Zhu, S.; Bao, H.; Cao, J. Bipartite synchronization of coupled delayed neural networks with cooperative-competitive interaction via event-triggered control. Physics A 2022, 600, 127586. [Google Scholar] [CrossRef]
  19. Mao, K.; Liu, X.; Cao, J.; Hu, Y. Finite-time bipartite synchronization of coupled neural networks with uncertain parameters. Physics A 2022, 585, 126431. [Google Scholar] [CrossRef]
  20. Polyakov, A. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 2012, 57, 2106–2110. [Google Scholar] [CrossRef]
  21. Li, N.; Wu, X.; Feng, J.; Xu, Y.; Lü, J. Fixed-time synchronization of coupled neural networks with discontinuous activation and mismatched parameters. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2470–2482. [Google Scholar] [CrossRef]
  22. Gan, Q.; Li, L.; Yang, J.; Qin, Y.; Meng, M. Improved results on fixed-/preassigned-time synchronization for memristive complex-valued neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 5542–5556. [Google Scholar] [CrossRef]
  23. Zou, Y.; Su, H.; Tang, R.; Yang, X. Finite-time bipartite synchronization of switched competitive neural networks with time delay via quantized control. ISA Trans. 2022, 10, 156–165. [Google Scholar] [CrossRef]
  24. Ren, Y.; Jiang, H.; Li, J.; Lu, B. Finite-time synchronization of stochastic complex networks with random coupling delay via quantized aperiodically intermittent control. Neurocomputing 2021, 420, 337–348. [Google Scholar] [CrossRef]
  25. Zhang, L.; Zhong, J.; Lu, J. Intermittent control for finite-time synchronization of fractional-order complex networks. Neural Netw. 2021, 144, 11–20. [Google Scholar] [CrossRef] [PubMed]
  26. Zhou, W.; Hu, Y.; Liu, X.; Cao, J. Finite-time adaptive synchronization of coupled uncertain neural networks via intermittent control. Physics A 2022, 596, 127107. [Google Scholar] [CrossRef]
  27. Gan, Q.; Xiao, F.; Sheng, H. Fixed-time outer synchronization of hybrid-coupled delayed complex networks via periodically semi-intermittent control. J. Frankl. Inst.-Eng. Appl. Math. 2019, 356, 6656–6677. [Google Scholar] [CrossRef]
  28. Yan, D.; Chen, J.; Cao, J. Fixed-time pinning synchronization for delayed complex networks under completely intermittent control. J. Frankl. Inst.-Eng. Appl. Math. 2022, 359, 7708–7732. [Google Scholar]
  29. Qin, X.; Jiang, H.; Qiu, J.; Hu, C.; Ren, Y. Strictly intermittent quantized control for fixed/predefined-time cluster lag synchronization of stochastic multi-weighted complex networks. Neural Netw. 2023, 158, 258–271. [Google Scholar] [CrossRef]
  30. Pu, H.; Li, F. Finite-/fixed-time synchronization for Cohen-Grossberg neural networks with discontinuous or continuous activations via periodically switching control. Cogn. Neurodyn. 2022, 16, 195–213. [Google Scholar] [CrossRef]
  31. Guo, Y.; Duan, M.; Wang, P. Input-to-state stabilization of semilinear systems via aperiodically intermittent event-triggered control. IEEE Trans. Control Netw. Syst. 2022, 9, 731–741. [Google Scholar] [CrossRef]
  32. Yang, W.; Wang, Y.; Morărescu, I.; Liu, X.; Huang, Y. Fixed-time synchronization of competitive neural networks with multiple time scales. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 4133–4138. [Google Scholar] [CrossRef] [PubMed]
  33. Su, H.; Luo, R.; Huang, M.; Fu, J. Practical fixed time active control scheme for synchronization of a class of chaotic neural systems with external disturbances. Chaos Solitons Fractals 2022, 157, 111917. [Google Scholar] [CrossRef]
  34. Liu, J.; Wu, Y.; Xue, L.; Liu, J. A new intermittent control approach to practical fixed-time consensus with input delay. IEEE Trans. Circuits Syst. II-Express Briefs 2023, 70, 2186–2190. [Google Scholar] [CrossRef]
  35. Wu, Y.; Sun, Z.; Ran, G.; Xue, L. Intermittent Control for Fixed-Time Synchronization of Coupled Networks. IEEE-CAA J. Autom. Sin. 2023, 10, 1488–1490. [Google Scholar] [CrossRef]
  36. Zou, Y.; Yang, X.; Tang, R.; Cheng, Z. Finite-time quantized synchronization of coupled discontinuous competitive neural networks with proportional delay and impulsive effects. J. Frankl. Inst.-Eng. Appl. Math. 2020, 357, 11136–11152. [Google Scholar] [CrossRef]
  37. Zhao, Y.; Ren, S.; Kurths, J. Finite-time and fixed-time synchronization for a class of memristor-based competitive neural networks with different time scales. Chaos Solitons Fractals 2021, 148, 111033. [Google Scholar] [CrossRef]
  38. Zheng, C.; Hu, C.; Yu, J.; Jiang, H. Fixed-time synchronization of discontinuous competitive neural networks with time-varying delays. Neural Netw. 2022, 153, 192–203. [Google Scholar] [CrossRef]
Figure 1. (a) Chaotic trajectory of $z(t)$; (b) chaotic trajectory of $s(t)$.
Figure 2. (a) Topology structure of STM in network (23); (b) topology structure of LTM in network (23).
Figure 3. (a) Trajectories of the first component of $x_i(t)$; (b) trajectories of the second component of $x_i(t)$; (c) trajectories of the first component of $y_i(t)$; (d) trajectories of the second component of $y_i(t)$.
Figure 4. (a) Trajectories of the first component of $e_i(t)$; (b) trajectories of the second component of $e_i(t)$; (c) trajectories of the first component of $\hat{e}_i(t)$; (d) trajectories of the second component of $\hat{e}_i(t)$.
Figure 5. (a) Trajectory of $E_1(t)$ with $E_1(t) = \| e(t) \|_2^2$; (b) trajectory of $E_2(t)$ with $E_2(t) = \| \hat{e}(t) \|_2^2$.