1. Introduction
Since neural networks are widely used in computer science [1], remote sensing [2], autonomous control systems [3], and other fields, their dynamic behaviors have been extensively studied over the past several decades. Neural networks (NNs) model dynamics at the level of neural activity; however, it is essential to recognize that the synaptic weights between neurons also change over time [4]. Consequently, Meyer-Bäse et al. [5] developed competitive neural networks (CNNs) with different time scales in 1996, which can be viewed as an extension of Hopfield neural networks and cellular neural networks [6,7]. CNNs are defined using two types of state variables: short-term memory (STM) describes rapid neural activity, while long-term memory (LTM) depicts slow, unsupervised synaptic modifications. On the other hand, coupled competitive neural networks (CCNNs) consist of several interconnected subsystems, and owing to their complex dynamic behavior they have garnered significant attention [8,9].
The activation functions of NNs are widely recognized for describing the connection between the input and output of a single neuron, and they are commonly assumed to be continuous. In the high-gain limit, however, the activation function approaches a discontinuous one. As a result, an increasing number of scholars have conducted considerable research on NNs with discontinuous activation functions [10,11,12,13]. On the other hand, the dynamic behaviors of NNs are frequently affected by external disturbances, such as changes in network structure, hardware conditions, and environmental noise. As far as we are aware, few studies take both discontinuous activation functions and external disturbances into account when discussing CCNNs. Therefore, it is both intriguing and challenging to study CCNNs with discontinuous activation functions and external disturbances.
Synchronization means that two or more dynamical systems achieve a common dynamical behavior. The synchronization problem of NNs has garnered significant attention recently due to its wide applicability in communication systems, biological sciences, mechanical engineering, and other domains [14,15,16,17]. However, the synchronization results mentioned above only consider cooperative relationships between network nodes, whereas in many practical systems competition and cooperation coexist. Therefore, the synchronization of NNs with both competitive and cooperative connections between nodes, known as bipartite synchronization, is of crucial importance and has been studied in [18,19]. On the other hand, due to inherent network constraints, complete synchronization may not be achievable, and only quasi-synchronization is observed. Quasi-synchronization means that the synchronization error no longer approaches zero but instead converges to a bounded set. To our knowledge, few papers have addressed the quasi-bipartite synchronization problem of CCNNs.
In addition to asymptotic and exponential synchronization, finite-time synchronization (FETs) has gained widespread attention in recent years as a more practical form of network synchronization. In FETs, the settling time is bounded, but it depends on the initial states of the nodes. To eliminate this dependence on the initial state, fixed-time synchronization (FXTs) was proposed based on fixed-time (FXT) stability [20], and its settling time depends only on the system or control parameters. Compared with the rich results on NNs [21,22], the FXTs of CCNNs has rarely been explored. In [23], the authors studied the finite-time bipartite synchronization of delayed CCNNs under quantized control. To our knowledge, there are only limited reports on FXT quasi-bipartite synchronization of CCNNs, and further research is needed.
Over the past decades, the control of networks has been one of the most widely studied topics, and many useful control methods have been developed, such as adaptive control, sliding mode control, impulsive control, and intermittent control, among others. Intermittent control alternates periods in which a control input is applied with periods in which no input is applied, making it more economical than continuous control. Hence, the intermittent control strategy has received extensive attention [24,25,26,27]. In [28], the FXT synchronization problem of time-delay complex networks under intermittent pinning control was studied. The authors of [29] solved the FXT and predefined-time cluster lag synchronization of stochastic multi-weighted complex networks via intermittent quantized control. To our knowledge, there is currently no literature that addresses the challenging problem of FXT quasi-bipartite synchronization of CCNNs under intermittent control.
Motivated by the above analysis, the primary objective of this paper is to investigate FXT quasi-bipartite synchronization of coupled competitive neural networks. Firstly, the model under consideration incorporates time-varying delays, discontinuous activation functions, and external disturbances simultaneously, rendering it more comprehensive. Secondly, we introduce a novel FXT aperiodically intermittent control scheme, making a first attempt to explore quasi-bipartite synchronization of CCNNs. Furthermore, some robust criteria for FXT quasi-bipartite synchronization are established based on the theory of practical FXT stability. Finally, we provide estimates of the error bounds and settling times.
The paper is organized as follows. Section 2 provides the necessary preliminaries and the model description. Section 3 presents the main theoretical results. Section 4 provides numerical examples to validate the theoretical conclusions. Section 5 concludes our study and discusses future research.
Notations: ℝ represents the set of real numbers, ℝ^n the n-dimensional Euclidean space, and ℝ^{m×n} the set of m×n real matrices. I_n is the identity matrix and 0 denotes the zero matrix. For a symmetric matrix B, λ_max(B) represents the maximum eigenvalue of B. diag(·) represents a diagonal matrix. 1_N denotes the column vector whose elements are all 1. sign(·) denotes the sign function. The 2-norm of a vector p is denoted by ‖p‖. For a vector x, x > 0 (< 0, ≥ 0, ≤ 0) means that all components of x are positive (negative, non-negative, non-positive); for vectors x and y, x ≤ y denotes the componentwise inequality. The notation ⊗ denotes the Kronecker product. C([−τ, 0], ℝ^n) denotes the set of continuous functions from [−τ, 0] to ℝ^n.
2. Model Description and Preliminaries
Consider the following competitive neural networks with time-varying delay:
where the state variable describes the neural activity (STM) and the synaptic efficiency describes the slow synaptic modification (LTM); the self-feedback coefficient, the connection weights, and the delayed connection weights characterize the interconnections among neurons; the neuron output (activation function) is discontinuous; the weight and the intensity of the external stimulus are given constants; a time-scale constant reflects the different time scales of the two states; and the delay is time-varying.
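For orientation, the classical competitive neural network with different time scales of Meyer-Bäse et al. [5], augmented with a time-varying delay term as in later delayed CNN models, is usually written in the following form; the symbols below are generic placeholders and are not claimed to match the notation of system (1):
\[
\begin{aligned}
\varepsilon\,\dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{n} b_{ij}\, f_j\big(x_j(t)\big)
 + \sum_{j=1}^{n} c_{ij}\, f_j\big(x_j(t-\tau(t))\big)
 + B_i \sum_{l=1}^{P} m_{il}(t)\, y_l, \\
\dot{m}_{il}(t) &= -m_{il}(t) + y_l\, f_i\big(x_i(t)\big),
\end{aligned}
\]
where the first equation is the STM dynamics and the second the LTM dynamics: x_i is the neuron state, m_{il} the synaptic efficiency, a_i the self-feedback coefficient, b_{ij} and c_{ij} the connection and delayed connection weights, f_j the (possibly discontinuous) activation, B_i the stimulus weight, y_l the constant stimulus intensity, ε > 0 the time scale, and τ(t) the time-varying delay.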
Remark 1. Neural network models are often formulated in terms of ordinary differential equations (ODEs), mainly because ODEs are a mathematical tool that effectively describes the dynamic behavior and interactions between neurons. Specifically: first, the state of the neuron needs to be defined. This may include the neuron’s potential, activity, or other relevant variables. These states will be the unknowns of the ODEs. Second, interaction rules need to be established. Rules describing the interactions between neurons are usually expressed in the form of weights and connections. These rules will determine how one neuron affects other neurons. Based on the states of neurons and the rules of interaction, a system of ODEs can be built to describe the dynamic evolution of a neural network. This usually involves differentiating the interaction rules to capture the temporal changes in the system. The ODE system then needs to be provided with initial conditions, which represent the initial state of the neural network at a given moment in time. Finally, the system of ODEs can be solved using numerical methods (e.g., Euler’s method, Runge–Kutta method, etc.) or analytical methods to obtain the state of the neural network at different points in time. In this way, the dynamic behavior of the neural network can be accurately represented in mathematical form. This model of ODEs not only helps in theoretical analysis, but also provides a basis for simulation and emulation, enabling us to better understand the behavior and response of neural networks.
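To make the numerical-solution step mentioned in Remark 1 concrete, the following minimal Python sketch integrates a small Hopfield-type network dx/dt = −a x + W tanh(x) + I with the forward Euler method; the model, weights, step size, and initial state are illustrative placeholders, not the system studied later in this paper.

import numpy as np

# Minimal sketch (not the model of this paper): forward-Euler integration of a small
# Hopfield-type network dx/dt = -a*x + W*tanh(x) + I, illustrating the numerical
# solution step described in Remark 1. All values below are illustrative placeholders.
def simulate_euler(a, W, I, x0, dt=1e-3, T=10.0):
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = np.empty((steps + 1, x.size))
    traj[0] = x
    for k in range(steps):
        dx = -a * x + W @ np.tanh(x) + I   # right-hand side of the ODE
        x = x + dt * dx                    # forward Euler step
        traj[k + 1] = x
    return traj

if __name__ == "__main__":
    W = np.array([[2.0, -0.1], [-5.0, 3.0]])   # illustrative connection weights
    trajectory = simulate_euler(a=1.0, W=W, I=np.zeros(2), x0=[0.1, -0.2])
    print(trajectory[-1])                      # state after T = 10 time units

A smaller step size or a higher-order scheme (e.g., Runge-Kutta) can be substituted in the same loop when more accuracy is needed.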
By introducing the usual auxiliary variables and assuming, without loss of generality, a suitable normalization, network (1) can be rewritten in the compact form (2), in which the transformed states, weights, and delay terms are defined from those of (1). The initial values of system (2) are specified on the corresponding delay interval.
A class of CCNNs with external disturbances is modeled as follows:
where the two state variables of node i correspond to its STM and LTM levels, respectively, and the external disturbance vector acts on each node. The adjacency matrices associated with the signed graphs of the CCNNs have zero diagonal entries; an off-diagonal entry is nonzero if there is a directed communication link from node j to node i and is zero otherwise. The controllers are to be designed, and the initial conditions of network (3) are specified on the corresponding delay interval.
Remark 2. When the corresponding coupling weight is positive, the connection between nodes i and j is cooperative; when it is negative, the connection between nodes i and j is competitive, and the coupling term changes form accordingly.
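For illustration, in many signed-graph coupled networks the two cases in Remark 2 correspond to coupling terms of the following form, written here with a generic coupling weight a_{ij} and node states x_i, x_j that are not claimed to be the notation of (3):
\[
\begin{aligned}
a_{ij} > 0:&\quad a_{ij}\big(x_j(t) - x_i(t)\big) \quad &&\text{(cooperative, diffusive coupling)},\\
a_{ij} < 0:&\quad a_{ij}\big(x_j(t) + x_i(t)\big) = -|a_{ij}|\big(x_j(t) + x_i(t)\big) \quad &&\text{(competitive coupling)}.
\end{aligned}
\]
Under this convention, positive weights drive neighboring states together, while negative weights drive them toward opposite values, which is the mechanism behind bipartite synchronization.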
For the convenience of discussion, the following compact notations are introduced.
Therefore, the coupled competitive neural networks (3) become:
From (2), the tracking target can be described as follows:
where the corresponding state vector denotes the target trajectory.
The necessary definitions, lemmas, and assumptions are given below.
Definition 1 ([30]). Consider a system with a discontinuous right-hand side of the form dx/dt = g(x(t)), where g is locally bounded and Lebesgue measurable. A function x(·) is said to be a solution in the Filippov sense on a given interval if it is absolutely continuous and satisfies the differential inclusion dx/dt ∈ K[g](x(t)) for almost all t, where the set-valued map is defined as K[g](x) = ∩_{δ>0} ∩_{μ(Ω)=0} co̅[g(B(x, δ)\Ω)], in which co̅ stands for the convex closure, μ(Ω) is the Lebesgue measure of the set Ω, and B(x, δ) denotes the open ball centered at x with radius δ.
Definition 2. The network (4) is said to achieve FXT quasi-bipartite synchronization with network (5) if there is a constant settling time, independent of the initial states, within which the synchronization error enters and remains in a bounded region of radius θ, where θ is a nonnegative constant.
Definition 3 ([31]). Aperiodically intermittent control is said to have an average control rate if there is an elasticity number such that the total control interval length on any time interval is bounded from below by the average rate times the interval length minus the elasticity number.
Lemma 1 ([32]). If the given nonnegative scalars and exponents satisfy the stated conditions, then the corresponding power-sum inequalities hold.
Lemma 2 ([33]). For any vectors of compatible dimension and any positive-definite matrix, the standard quadratic (Young-type) bound holds.
Lemma 3 ([34]). Assume that there is a Lyapunov function that satisfies suitable differential inequalities on the control intervals and on the rest intervals, in which the coefficients are positive constants and the exponents satisfy the usual fixed-time conditions. Then the system is practically fixed-time stable provided the stated coefficient conditions hold, and the settling time admits an explicit upper bound expressed in terms of these coefficients, the average control rate, and the elasticity number defined in Definition 3. When the perturbation constant vanishes, the bound simplifies accordingly.
Remark 3. Lemma 3 remains true when the perturbation constants are zero; see the proof of Lemma 2 in [35]. In this case, the result obtained from this lemma is complete synchronization, so Lemma 3 is more general and more widely applicable.
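For readability, the forms of Definition 3 and Lemmas 1 and 2 that are most commonly used in fixed-time synchronization analyses are sketched below; these are assumed, typical statements rather than verbatim quotations of [31,32,33], and ρ, γ_0, a_i, p, q, Q are placeholder symbols.
\[
\begin{aligned}
&\text{(cf. Definition 3)} && \big|\,\text{control time in } [s,t]\,\big| \ \ge\ \rho\,(t-s) - \gamma_0
 \quad \text{for all } t \ge s \ge 0,\ \ \rho \in (0,1],\ \gamma_0 \ge 0,\\
&\text{(cf. Lemma 1)} && \sum_{i=1}^{n} a_i^{\,p} \ \ge\ \Big(\sum_{i=1}^{n} a_i\Big)^{p}\ \ (0<p\le 1),
 \qquad \sum_{i=1}^{n} a_i^{\,q} \ \ge\ n^{1-q}\Big(\sum_{i=1}^{n} a_i\Big)^{q}\ \ (q>1),\ \ a_i \ge 0,\\
&\text{(cf. Lemma 2)} && 2x^{\mathsf T} y \ \le\ x^{\mathsf T} Q x + y^{\mathsf T} Q^{-1} y,
 \qquad x, y \in \mathbb{R}^{n},\ \ Q = Q^{\mathsf T} \succ 0.
\end{aligned}
\]
The first inequality formalizes the idea that, up to the elasticity number γ_0, at least a fraction ρ of every time window is spent under control; the latter two inequalities are the standard tools for handling the power-law terms and cross terms in fixed-time Lyapunov estimates.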
The following assumptions are introduced for each index:
Assumption 1 ([36]). Each activation function is continuous except on a countable set of isolated points, at which both the left limit and the right limit exist. In addition, it has at most finitely many discontinuous jump points in each bounded compact set.
Assumption 2 ([36]). There exist positive constants such that the activation functions satisfy the stated growth condition.
Assumption 3 ([23]). The signed graphs are structurally balanced. In other words, the node set of G can be divided into two subsets, inducing two unsigned subgraphs, whose union is the whole node set and whose intersection is empty. In addition, the links inside each subset are nonnegative, while the links between the two subsets are negative.
Assumption 4. The activation functions satisfy the stated additional condition.
Assumption 5. There exists a positive constant that bounds the time-varying delay.
Assumption 6. The external disturbance is bounded; that is, there is a positive constant that bounds its norm.
Assumption 3 implies that there exists a diagonal matrix whose diagonal entries equal 1 for the nodes in one subset of the partition and −1 for the nodes in the other subset. To achieve FXT quasi-bipartite synchronization of the CCNNs, the intermittent controller (8) is designed so that it is active only on the control intervals, where the control gains and exponents appearing in (8) are positive constants.
Remark 4. Unlike previous intermittent control schemes, which only achieve asymptotic results, the intermittent control proposed in this study accomplishes synchronization within a fixed time. Moreover, the aperiodically intermittent controller proposed here differs from the controllers in [26,27] in that a linear term does not need to be set in the rest interval. This approach is proposed here for the first time to achieve FXT quasi-bipartite synchronization of coupled competitive neural networks. Furthermore, aperiodic intermittent control degenerates into periodic intermittent control and continuous control as special cases, which makes it particularly suitable for complex systems that require dynamic and flexible control.
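As a point of reference, FXT aperiodically intermittent controllers in the literature often take the following structure, with the control switched off on the rest intervals as described in the Introduction; the gains k_1, k_2, k_3, k_4, the exponents 0 < μ < 1 < ν, and the error e_i(t) are generic symbols (interpreted componentwise for vector errors), and the actual controller (8) may differ in its terms:
\[
u_i(t) =
\begin{cases}
-k_1 e_i(t) - k_2\,\mathrm{sign}\big(e_i(t)\big)\,\big|e_i(t)\big|^{\mu} - k_3\,\mathrm{sign}\big(e_i(t)\big)\,\big|e_i(t)\big|^{\nu} - k_4\,\mathrm{sign}\big(e_i(t)\big), & t \ \text{in a control interval},\\
0, & t \ \text{in a rest interval}.
\end{cases}
\]
In such designs, the power terms with exponents μ < 1 and ν > 1 are the ingredients that typically yield fixed-time convergence, while the discontinuous sign term is typically used to dominate bounded disturbances and the jumps of discontinuous activations.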
Combined with controller (8), when t lies in a control interval, the CCNNs (4) can be rewritten as follows:
where the new coefficients are defined from (4) and the controller (8) for each node. Then, according to Assumption 3, the CCNNs (9) can be transformed, via the diagonal gauge matrix associated with the structural balance, into an equivalent network whose transformed states and coefficients are defined accordingly. By a similar analysis, the corresponding form is obtained when t lies in a rest interval. By Definition 1, there exists a measurable function (a measurable selection of the Filippov set-valued activation map) such that the transformed dynamics hold almost everywhere, and similarly there exists at least one measurable function for the target system; this yields the systems (12) and (13) analyzed in the next section.
3. Main Results
In this section, we study the fixed-time quasi-bipartite synchronization of networks (12) and (13) with external disturbances. Based on fixed-time stability theory, under the designed controller (8) and by constructing an appropriate Lyapunov function, criteria are derived to ensure that networks (12) and (13) achieve fixed-time quasi-bipartite synchronization.
Define the synchronization error between networks (12) and (13); then the error dynamical system (14) is obtained, in which the stacked error and disturbance terms are defined accordingly.
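For reference, in bipartite synchronization studies the error is commonly defined through the gauge signs of the structurally balanced partition; a typical form, in assumed notation that is not necessarily the paper's, is
\[
e_i(t) = z_i(t) - \sigma_i\, s(t), \qquad
\sigma_i =
\begin{cases}
+1, & i \in \mathcal{V}_1,\\
-1, & i \in \mathcal{V}_2,
\end{cases}
\]
where z_i denotes the state of node i, s the target trajectory, and V_1, V_2 the two subsets from Assumption 3; nodes in V_2 then track the target with opposite sign, which is the bipartite feature of the synchronization.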
Theorem 1. Suppose that Assumptions 1–6 hold and the controller (8) is applied. If the stated parameter conditions are satisfied, where the constants involved (including ψ, λ, and d) are positive and the average control rate is as in Definition 3, then quasi-bipartite synchronization between networks (12) and (13) is ensured in fixed time. The settling time satisfies the stated bound, in which the elasticity number is defined in Definition 3, and the state trajectory of (14) converges within the settling time to a compact set whose size is determined by the parameters above.
Proof. Consider the following Lyapunov function.
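A choice that is common in comparable proofs for competitive neural networks sums the squared STM and LTM error components over all nodes; in assumed notation that is not necessarily that of this proof,
\[
V(t) = \sum_{i=1}^{N} \Big( e_{x,i}^{\mathsf T}(t)\, e_{x,i}(t) + e_{s,i}^{\mathsf T}(t)\, e_{s,i}(t) \Big),
\]
where e_{x,i} and e_{s,i} denote the STM and LTM error components of node i; the actual Lyapunov function used in the proof may include additional weighting constants.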
For t in a control interval, calculate the derivative of V along the trajectories of the error system (14). Based on Assumption 2 and Assumption 5, the activation-dependent and delay-dependent terms can be estimated, and combining (15) and (16) gives a bound on the derivative of V. According to Lemma 1 and Lemma 2, the remaining power-sum and cross terms can also be bounded. It then follows from (17)–(21) that, on the control intervals, the derivative of V satisfies a differential inequality of the form required by Lemma 3. Then, for t in a rest interval, a corresponding inequality for the derivative of V is obtained. Based on Lemma 3, the networks (12) and (13) achieve FXT quasi-bipartite synchronization, the settling time is estimated as stated in Theorem 1, and the system error converges to the stated compact set within T.
The theorem is proven. □
Remark 5. A previous study [23] focused on finite-time bipartite synchronization of competitive neural networks, in which the settling time depends on the initial state. In contrast, Theorem 1 provides a sufficient condition for achieving FXT quasi-bipartite synchronization of CCNNs in which the settling time no longer depends on the initial state, but only on the adjustable controller parameters and the average control rate. Furthermore, our study utilizes a more practical intermittent controller compared to the one used in [23]. In particular, when the external disturbance of the network is zero, the following Corollary 1 is obtained; in that case the convergence domain of the network is smaller, i.e., closer to complete synchronization.
Corollary 1. Suppose that Assumptions 1–5 hold and the controller (8) is applied. If the corresponding parameter conditions are satisfied, where the constants involved (including λ and d) are positive and the average control rate is as in Definition 3, then quasi-bipartite synchronization between networks (12) and (13) is ensured in fixed time. The settling time satisfies the corresponding bound, in which the elasticity number is defined in Definition 3, and the state trajectory of (14) converges to a compact set determined by the parameters.
Proof. The proof of Corollary 1 is the same as that of Theorem 1 and is not repeated. □
Remark 6. Unlike prior studies [34,37], our research takes into account the impact of discontinuous activation functions, external disturbances, and competitive relationships between nodes, which more closely mimics real-world networks. Specifically, due to the competitive nature among nodes and the presence of external disturbances, the synchronization method employed in [34] is not directly applicable to achieve FXT quasi-bipartite synchronization. Consequently, our main theorem extends the prior findings of FXT bipartite synchronization and is tailored to suit the aforementioned conditions. If the time-varying delay, discontinuous activation functions, and external disturbance are not considered, Corollary 2 is obtained. The error of Corollary 2 will converge to 0, and the result is fixed-time complete bipartite synchronization.
Corollary 2. Suppose that Assumptions 1–4 hold and the controller (8) is applied. If the corresponding parameter conditions are satisfied, where the constants involved (including λ and d) are positive and the average control rate is as in Definition 3, then bipartite synchronization between networks (12) and (13) is ensured in fixed time, and the settling time satisfies the corresponding bound, in which the elasticity number is defined in Definition 3.
Proof. The proof of Corollary 2 is also similar to that of Theorem 1 and is not repeated. □
Remark 7. It can be seen from Corollary 2 that the time-varying delay, the discontinuous activation functions, and the external disturbance prevent the network from achieving complete synchronization, so that only quasi-synchronization can be achieved. However, the results of [38] show that competitive neural networks with time-varying delays and discontinuous activation functions can achieve fixed-time complete synchronization rather than quasi-synchronization. The analysis indicates that this gap is caused by the intermittent control strategy; hence, achieving complete fixed-time synchronization under intermittent control is a problem worth studying in the future.
Remark 8. Predefined-time control has emerged as a promising method that allows the synchronization time to be pre-set independently of the system parameters. Due to its potential in various applications, predefined-time synchronization has become a highly topical research area. However, there is still insufficient research into the predefined-time bipartite synchronization of CCNNs, so further investigation is necessary.
4. Numerical Examples
Two numerical examples are given in this part to demonstrate the validity of the derived theoretical conclusions.
Consider the following network:
where the system parameters and the discontinuous activation functions of network (22) are specified numerically. Figure 1 shows the chaotic trajectories of network (22) under the given initial values.
The coupled competitive neural network with seven nodes is given as follows:
where the coupling strengths and the adjacency matrices of the signed graphs are specified numerically. The topology of network (23) is presented in Figure 2. The node set is divided into two subsets in accordance with the definition of a structurally balanced signed graph. The control intervals of the intermittent controller (8) are chosen as specified.
The initial values of network (23) and the controller parameters are chosen as specified, and the quantities required by Theorem 1 are obtained by simple calculation. By Theorem 1, networks (22) and (23) achieve FXT quasi-bipartite synchronization, and the settling time estimate (in seconds) together with an error bound of 2.97 is obtained.
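Since the concrete control intervals are specified separately, the following Python sketch illustrates, with hypothetical work intervals, how the average control rate and elasticity number of an aperiodically intermittent schedule (in the sense of Definition 3) can be checked numerically; the intervals, horizon, and elasticity value below are placeholders, not the values used in this example.

import numpy as np

# Hedged sketch: numerically checking the average control rate of an aperiodically
# intermittent schedule in the sense of Definition 3. The work intervals, horizon,
# and elasticity number below are hypothetical placeholders, not the values of (8).
work_intervals = [(0.0, 0.7), (1.0, 1.9), (2.3, 3.1), (3.4, 4.3)]   # (start, end) of control

def control_length(s, t, intervals):
    """Total length of the control (work) intervals intersected with [s, t]."""
    return sum(max(0.0, min(t, b) - max(s, a)) for a, b in intervals)

def empirical_rate(intervals, horizon, elasticity):
    """Largest rate rho <= 1 such that control_length(s, t) >= rho*(t - s) - elasticity
    holds for all grid points 0 <= s < t <= horizon."""
    grid = np.linspace(0.0, horizon, 200)
    rho = 1.0
    for i, s in enumerate(grid):
        for t in grid[i + 1:]:
            rho = min(rho, (control_length(s, t, intervals) + elasticity) / (t - s))
    return rho

print(empirical_rate(work_intervals, horizon=4.3, elasticity=0.3))

A larger empirical rate (closer to 1) corresponds to more frequent control and hence, in view of Theorem 1, to a smaller settling-time estimate.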
Under the controller (8), Figure 3a,b depict the trajectories of the first and second components of the STM states, and Figure 3c,d depict the trajectories of the first and second components of the LTM states. The time evolution of the synchronization errors between systems (22) and (23) is depicted in Figure 4. Defining the aggregate norms of the STM and LTM synchronization errors, Figure 5a,b show that these errors eventually converge within a bounded region. Together with Figure 3 and Figure 4, it can be seen that systems (22) and (23) under the intermittent controller (8) achieve quasi-bipartite synchronization in fixed time, which coincides with the conclusion of Theorem 1.