Article

Stochastic Synchronization of Impulsive Reaction–Diffusion BAM Neural Networks at a Fixed and Predetermined Time

by
Rouzimaimaiti Mahemuti
1,2,*,
Ehmet Kasim
3 and
Hayrengul Sadik
3
1
School of Information Technology and Engineering, Guangzhou College of Commerce, Guangzhou 511363, China
2
Guangdong Provincial Key Laboratory of Computational Science and Material Design, Southern University of Science and Technology, Shenzhen 518055, China
3
College of Mathematics and Systems Science, Xinjiang University, Urumqi 830017, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(8), 1204; https://doi.org/10.3390/math12081204
Submission received: 3 March 2024 / Revised: 6 April 2024 / Accepted: 15 April 2024 / Published: 17 April 2024
(This article belongs to the Special Issue Research on Dynamical Systems and Differential Equations)

Abstract

:
This paper discusses the synchronization problem of impulsive stochastic bidirectional associative memory neural networks with a diffusion term, focusing on fixed-time (FXT) and predefined-time (PDT) synchronization. First, several more relaxed lemmas are introduced for the FXT and PDT stability of general impulsive nonlinear systems. A controller that does not require a sign function is then proposed to ensure that the synchronization error converges to zero within a predetermined time. The controller designed in this paper serves the additional purpose of avoiding an unreliable inequality in the proof of the main results. Next, to guarantee FXT and PDT synchronization of the drive–response systems, this paper employs the Lyapunov function method and derives sufficient conditions. Finally, a numerical simulation is presented to validate the theoretical results.

1. Introduction

Bidirectional associative memory (BAM) neural networks are a type of artificial neural network model that can be used to recognize and classify patterns in input data. They can be applied to tasks such as pattern recognition, image classification, speech recognition, and signal processing [1,2,3,4,5]. In BAM networks, information can be stored and retrieved bidirectionally, meaning that patterns can be associated in both forward and backward directions [1]. BAM networks are particularly useful for establishing and retrieving associations between patterns. They can be used to build associative memory systems in which patterns can be stored and retrieved based on their associations with other patterns [6].
The reaction–diffusion equation is a powerful tool for modeling and understanding dynamic systems that involve both diffusion and chemical reactions. Its applications span various scientific disciplines and have practical implications in fields such as pattern formation, biology, chemistry, physics, and image processing [7]. In addition, the reaction–diffusion term plays a crucial role in BAM neural networks. The reaction–diffusion term allows for the dynamic evolution of the network’s activity, and is responsible for the learning and recall processes. For instance, the reaction–diffusion term enables the network to store the associations between input and output patterns [8] as well as to recall them accurately, making it a powerful tool in applications such as pattern recognition and associative memory tasks. Over the past decade, there has been a significant amount of research on reaction–diffusion neural networks. In one study [9], the authors focused on achieving global exponential synchronization in delayed BAM neural networks with reaction–diffusion terms. Another study [10] developed an adaptive pinning controller to ensure tracking synchronization for a specific class of neural networks with coupled reaction–diffusion terms. Similarly, in [11] the authors examined the general decay synchronization of delayed reaction–diffusion BAM neural networks through the use of a nonlinear controller. In [12], the authors investigated the synchronization of delayed fractional reaction–diffusion neural networks with mixed boundary conditions.
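To make the diffusion mechanism concrete, the following sketch (not taken from the paper) performs explicit finite-difference steps for a 1-D state obeying u_t = D u_xx − a u with zero Dirichlet boundary values, mirroring the boundary conditions imposed on System (1) below; all parameter values are illustrative placeholders chosen so that D·dt/dx² < 1/2 for numerical stability.

```python
import numpy as np

# Illustrative sketch (assumed values, not the paper's): one explicit
# finite-difference step for u_t = D u_xx - a u with zero Dirichlet boundaries.
def rd_step(u, D=0.1, a=0.5, dx=0.1, dt=0.001):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # discrete Laplacian
    u_new = u + dt * (D * lap - a * u)
    u_new[0] = u_new[-1] = 0.0                            # Dirichlet boundary
    return u_new

u = np.zeros(51)
u[25] = 1.0                    # localized initial bump
for _ in range(100):
    u = rd_step(u)
print(bool(0.0 < u.max() < 1.0))  # diffusion and decay flatten the bump
```

The same update pattern, applied per neuron and coupled through the activation terms, underlies the space-discretized simulations typically used to illustrate systems such as (1).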
On the other hand, there has been growing interest among researchers in studying how neural networks can synchronize under the influence of random disturbances. The introduction of these disturbances enhances the network’s resilience to noise and fluctuations in the input data [13]. This stochastic perturbation technique improves the network’s capacity to generalize and make precise predictions in uncertain scenarios [14], ultimately leading to more accurate and reliable pattern recognition outcomes. By incorporating randomness into the system, BAM neural networks can exhibit superior performance and adaptability across various applications [14,15,16,17,18].
In the natural course of motion, it is inevitable to experience abrupt changes. Changes that occur in a very short period of time in comparison to the overall movement are referred to as impulse effects. These effects play a vital role in the operation of BAM neural networks. When a particular pattern is introduced to the network, it spreads through the interconnected nodes, activating the appropriate nodes and modifying their states. This enables the network to remember and identify similar patterns in future instances. It is one of the key capabilities of impulsive neural networks, making them highly valuable in various fields including image encryption, time series forecasting, and natural language processing [19,20,21].
Following the introduction of the concept of synchronization in neural networks by Hertz et al. [22], research in computational neuroscience has increasingly focused on this area. Numerous studies have been conducted on synchronization. In [23], the authors carried out an investigation of general decay synchronization in time-delayed BAM neural networks using a nonlinear feedback controller. Subsequently, other researchers studied general decay and switching synchronization in reaction–diffusion BAM neural networks with time delay [11,24]. This work has been extended by incorporating stochastic phenomena into reaction–diffusion BAM neural networks [25,26,27]. However, in terms of real engineering applications, achieving synchronization in finite time is more desirable. Recently, finite-time (FNT) synchronization, both with and without reaction–diffusion, has gained attention among researchers [28,29,30,31]. FXT synchronization, which accomplishes synchronization within a fixed time, has become particularly popular [32,33]. In [32], the authors studied FXT synchronization for neural networks. In another reference [33], FXT synchronization was successfully applied to impulsive neural networks with stochastic perturbations within a predefined time. The application of fixed-time synchronization in image encryption and machine learning can improve the security and stability of algorithms. For example, in the field of image encryption, the time factor can be introduced into the encryption process by setting a fixed time interval, making it more difficult to predict. Additionally, fixed-time synchronization can be used to control the inference speed of a model, ensuring that it completes the inference task within a fixed amount of time. However, in FXT synchronization the settling time is determined by the system parameters and cannot be assigned in advance. Researchers have recently made significant progress towards overcoming this limitation [33,34,35,36,37].
In [34], the authors proposed PDT synchronization for neural networks, which guarantees convergence of the error to zero within a specified time. In [36], the authors explored the FXT and PDT lag synchronization of complex-valued BAM neural networks with random disturbances using the non-separation method. However, very few studies have investigated the combined effects of reaction–diffusion terms, stochastic perturbations, and impulse effects in BAM neural networks. These factors can greatly improve the network’s robustness against data noise and enhance its predictive capabilities.
In this paper, inspired by the aforementioned observations, we investigate the PDT synchronization of stochastic impulsive reaction–diffusion BAM neural networks. The key contributions of this study are as follows. We are the first to investigate FXT and PDT synchronization in BAM neural networks incorporating stochastic perturbations, impulsive effects, and reaction–diffusion terms. The reaction-diffusion term is a versatile tool for modeling dynamic systems, finding applications in pattern formation, physics, image processing, etc. Stochastic perturbation techniques improve network generalization and prediction in uncertain scenarios, essential for information processing, machine learning, and image encryption. BAM neural networks handle the abrupt changes crucial for image encryption, time series forecasting, and natural language processing. Combining these techniques enhances network robustness and predictive capabilities for applications in machine learning, image encryption, etc. In addition, we propose a new controller to design simple criteria for achieving PDT synchronization in BAM neural networks with stochastic perturbations, reaction–diffusion terms, and general impulsive effects. Finally, we demonstrate that the PDT synchronization approach is robust against variations in the parameter settings and initial conditions. To effectively showcase these novel and valuable contributions, Table 1 presents a comparative analysis of this paper with respect to previous works, where R and C correspond to the set of real numbers and the set of complex numbers, respectively.
We have structured the remainder of this paper as follows. Section 2 introduces essential definitions, lemmas, and details of the considered systems. These elements are crucial to proving the main results presented in the subsequent section. Section 3 presents the design for a novel controller aimed at achieving PDT synchronization of impulsive reaction–diffusion BAM neural networks with stochastic perturbations. In Section 4, an example is provided to assess the effectiveness of the theoretical results proposed in this paper. Finally, Section 5 provides a concise conclusion summarizing the key points discussed in this paper.

2. Preliminaries

Throughout this paper, we define the index sets I = {1, 2, …, n}, J = {1, 2, …, m}, and H = {1, 2, …, l}, where n and m are the numbers of neurons in the two layers and l is the dimension of the space. We consider the following nonlinear impulsive stochastic reaction–diffusion BAM neural networks:
$$
\begin{aligned}
dv_i(t) &= \Big[ \sum_{h=1}^{l} D_{ih}\,\frac{\partial^2 v_i(t,x)}{\partial x_h^2} - a_i v_i(t,x) + \sum_{j=1}^{m} b_{ij} f_j(w_j(t,x)) + I_i \Big]\, dt + \varrho_i(t, v_i(t,x))\, d\omega(t), \\
dw_j(t) &= \Big[ \sum_{h=1}^{l} \bar{D}_{jh}\,\frac{\partial^2 w_j(t,x)}{\partial x_h^2} - \bar{a}_j w_j(t,x) + \sum_{i=1}^{n} \bar{b}_{ji} g_i(v_i(t,x)) + J_j \Big]\, dt + \bar{\varrho}_j(t, w_j(t,x))\, d\omega(t), \\
\Delta v_i(t_\tau, x) &= (\xi_i - 1)\, v_i(t_\tau^-, x), \qquad \Delta w_j(t_\tau, x) = (\bar{\xi}_j - 1)\, w_j(t_\tau^-, x)
\end{aligned}
$$
for i ∈ I, j ∈ J, and h ∈ H, where τ is a positive integer, i and j are indices representing the ith and jth neuron, respectively, the vector x = (x_1, x_2, …, x_l)^T represents the spatial location within a bounded compact set Ω with smooth boundary ∂Ω, the superscript T denotes the transpose, v_i(t, x) ∈ R and w_j(t, x) ∈ R are the state variables of the ith and jth neuron at time t and location x, D_{ih} and D̄_{jh} are the diffusion coefficients, the positive constants a_i and ā_j are the self-inhibition rates, the constants b_{ij} and b̄_{ji} are the synaptic connection weights, I_i and J_j are the biases (inputs) of the neurons, the functions f_j(·) and g_i(·) are the activation functions of the neurons, the functions ϱ_i(t, ·): R_+ × R → R^k and ϱ̄_j(t, ·): R_+ × R → R^k are the noise intensity functions, ω(t) is the k-dimensional Brownian motion introduced in [33], and ξ_i > 0 and ξ̄_j > 0 are constants. For all τ ∈ N, v_i(t_τ^−, x) = lim_{t → t_τ − 0} v_i(t, x) = v_i(t_τ, x), v_i(t_τ^+, x) = lim_{t → t_τ + 0} v_i(t, x), w_j(t_τ^−, x) = lim_{t → t_τ − 0} w_j(t, x) = w_j(t_τ, x), and w_j(t_τ^+, x) = lim_{t → t_τ + 0} w_j(t, x) describe the impulse jumps occurring at the impulse moments t_τ, where {t_τ, τ = 1, 2, …} is a strictly increasing sequence satisfying t_τ → +∞ as τ → +∞. The initial and boundary conditions of System (1) are as follows:
$$v_i(0, x) = v_i^0(x), \quad w_j(0, x) = w_j^0(x), \quad x \in \Omega,$$
$$v_i(t, x) = 0, \quad w_j(t, x) = 0, \quad (t, x) \in (t_0, +\infty) \times \partial\Omega,$$
for i I and j J , where v i 0 ( · ) and w j 0 ( · ) are bounded continuous functions. The construction of the dynamical system is shown in Figure 1.
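As a hedged numerical sketch of the drive dynamics, the following Euler–Maruyama discretization simulates a two-neuron-per-layer version of System (1) in one spatial dimension. All parameter values (a, ā, B, B̄, D, σ) are placeholders rather than the paper's, tanh is an assumed Lipschitz activation, and the bias terms and impulsive jumps are omitted for brevity.

```python
import numpy as np

# Euler-Maruyama sketch of a space-discretized drive system like (1).
# Assumed illustrative parameters; biases I_i, J_j and impulses omitted.
rng = np.random.default_rng(0)
n, m, N, dx, dt = 2, 2, 50, 0.1, 1e-4
a = np.array([1.0, 1.2]); a_bar = np.array([1.1, 0.9])   # self-inhibition
B = np.array([[0.3, -0.2], [0.1, 0.4]])                   # weights b_ij
Bb = np.array([[0.2, 0.1], [-0.3, 0.2]])                  # weights b_bar_ji
D, sigma = 0.05, 0.01                                     # diffusion, noise
f = g = np.tanh                                           # Lipschitz activations

def lap(u):                                               # Dirichlet Laplacian
    out = np.zeros_like(u)
    out[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
    return out

v = 0.1 * rng.standard_normal((n, N)); v[:, [0, -1]] = 0.0
w = 0.1 * rng.standard_normal((m, N)); w[:, [0, -1]] = 0.0
for _ in range(200):
    dW = np.sqrt(dt) * rng.standard_normal((n, N))
    dWb = np.sqrt(dt) * rng.standard_normal((m, N))
    v = v + dt * (D * lap(v) - a[:, None] * v + B @ f(w)) + sigma * v * dW
    w = w + dt * (D * lap(w) - a_bar[:, None] * w + Bb @ g(v)) + sigma * w * dWb
    v[:, [0, -1]] = 0.0; w[:, [0, -1]] = 0.0              # boundary condition
print(bool(np.isfinite(v).all() and np.isfinite(w).all()))
```

The multiplicative noise form σ·v ensures the perturbation vanishes at the origin, in the spirit of the noise intensity functions assumed below.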
Now, we focus on the response system for the drive system in (1):
$$
\begin{aligned}
d\tilde v_i(t) &= \Big[ \sum_{h=1}^{l} D_{ih}\,\frac{\partial^2 \tilde v_i(t,x)}{\partial x_h^2} - a_i \tilde v_i(t,x) + \sum_{j=1}^{m} b_{ij} f_j(\tilde w_j(t,x)) + I_i + u_i(t,x) \Big]\, dt + \varrho_i(t, \tilde v_i(t,x))\, d\omega(t), \quad t \ne t_\tau, \\
d\tilde w_j(t) &= \Big[ \sum_{h=1}^{l} \bar D_{jh}\,\frac{\partial^2 \tilde w_j(t,x)}{\partial x_h^2} - \bar a_j \tilde w_j(t,x) + \sum_{i=1}^{n} \bar b_{ji} g_i(\tilde v_i(t,x)) + J_j + \bar u_j(t,x) \Big]\, dt + \bar\varrho_j(t, \tilde w_j(t,x))\, d\omega(t), \quad t \ne t_\tau, \\
\Delta \tilde v_i(t_\tau, x) &= \tilde v_i(t_\tau^+, x) - \tilde v_i(t_\tau^-, x) = (\xi_i - 1)\, \tilde v_i(t_\tau^-, x), \qquad \Delta \tilde w_j(t_\tau, x) = \tilde w_j(t_\tau^+, x) - \tilde w_j(t_\tau^-, x) = (\bar\xi_j - 1)\, \tilde w_j(t_\tau^-, x),
\end{aligned}
$$
where ṽ_i(t, x) ∈ R and w̃_j(t, x) ∈ R are the state variables of System (2). In the upcoming section, the controllers (referred to as u_i(t, x) and ū_j(t, x)) are designed, where i ∈ I, j ∈ J. The initial and boundary conditions of System (2) are provided by
$$\tilde v_i(0, x) = \tilde v_i^0(x), \quad \tilde w_j(0, x) = \tilde w_j^0(x), \quad x \in \Omega,$$
$$\tilde v_i(t, x) = 0, \quad \tilde w_j(t, x) = 0, \quad (t, x) \in (t_0, +\infty) \times \partial\Omega,$$
for i I and j J , where v ˜ i 0 ( · ) and w ˜ j 0 ( · ) are bounded continuous functions. Letting i I and j J , the following assumptions hold throughout this paper.
Assumption 1.
The activation functions f j ( · ) and g i ( · ) of the neurons satisfy the Lipschitz condition; in other words, for any real numbers x 1 and x 2 there exist L j f > 0 and L i g > 0 such that
$$|f_j(x_1) - f_j(x_2)| \le L_j^f |x_1 - x_2|, \qquad |g_i(x_1) - g_i(x_2)| \le L_i^g |x_1 - x_2|.$$
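A quick numerical spot-check of Assumption 1 for a typical choice of activation (assumed here, not mandated by the paper): tanh is globally Lipschitz with constant L = 1, since |tanh′(x)| = 1 − tanh²(x) ≤ 1.

```python
import numpy as np

# Spot-check that tanh satisfies the Lipschitz condition of Assumption 1
# with constant L = 1 on random pairs of points (illustrative only).
rng = np.random.default_rng(1)
x1 = rng.uniform(-5.0, 5.0, 1000)
x2 = rng.uniform(-5.0, 5.0, 1000)
ratios = np.abs(np.tanh(x1) - np.tanh(x2)) / np.abs(x1 - x2)
print(bool(ratios.max() <= 1.0 + 1e-12))
```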
Assumption 2.
There exist η_i > 0 and η̄_j > 0 such that the noise intensity functions ϱ_i(t, ·) and ϱ̄_j(t, ·) satisfy the following inequalities:
$$[\varrho_i(t, x_1) - \varrho_i(t, x_2)]^T [\varrho_i(t, x_1) - \varrho_i(t, x_2)] \le \eta_i (x_1 - x_2)^2,$$
$$[\bar\varrho_j(t, x_1) - \bar\varrho_j(t, x_2)]^T [\bar\varrho_j(t, x_1) - \bar\varrho_j(t, x_2)] \le \bar\eta_j (x_1 - x_2)^2$$
for all real numbers x 1 and x 2 .
The error system between the drive–response systems in (1) and (2) can be expressed as
$$
\begin{aligned}
de_i(t) &= \Big[ \sum_{h=1}^{l} D_{ih}\,\frac{\partial^2 e_i(t,x)}{\partial x_h^2} - a_i e_i(t,x) + \sum_{j=1}^{m} b_{ij} F_j(\varepsilon_j(t,x)) + u_i(t,x) \Big]\, dt + \rho_i(t, e_i(t,x))\, d\omega(t), \quad t \ne t_\tau, \\
d\varepsilon_j(t) &= \Big[ \sum_{h=1}^{l} \bar D_{jh}\,\frac{\partial^2 \varepsilon_j(t,x)}{\partial x_h^2} - \bar a_j \varepsilon_j(t,x) + \sum_{i=1}^{n} \bar b_{ji} G_i(e_i(t,x)) + \bar u_j(t,x) \Big]\, dt + \bar\rho_j(t, \varepsilon_j(t,x))\, d\omega(t), \quad t \ne t_\tau, \\
\Delta e_i(t_\tau, x) &= e_i(t_\tau^+, x) - e_i(t_\tau^-, x) = (\xi_i - 1)\, e_i(t_\tau^-, x), \qquad \Delta \varepsilon_j(t_\tau, x) = \varepsilon_j(t_\tau^+, x) - \varepsilon_j(t_\tau^-, x) = (\bar\xi_j - 1)\, \varepsilon_j(t_\tau^-, x),
\end{aligned}
$$
where
$$
\begin{aligned}
e_i(t,x) &= \tilde v_i(t,x) - v_i(t,x), & \varepsilon_j(t,x) &= \tilde w_j(t,x) - w_j(t,x), \\
F_j(\varepsilon_j(t,x)) &= f_j(\tilde w_j(t,x)) - f_j(w_j(t,x)), & G_i(e_i(t,x)) &= g_i(\tilde v_i(t,x)) - g_i(v_i(t,x)), \\
\rho_i(t, e_i(t,x)) &= \varrho_i(t, \tilde v_i(t,x)) - \varrho_i(t, v_i(t,x)), & \bar\rho_j(t, \varepsilon_j(t,x)) &= \bar\varrho_j(t, \tilde w_j(t,x)) - \bar\varrho_j(t, w_j(t,x))
\end{aligned}
$$
for i I and j J .
Before delving into the main results, we first consider the following system:
$$
d\Phi(t) = \Upsilon(t, \Phi(t))\, dt + \rho(t, \Phi(t))\, d\omega(t), \quad t \ne t_\tau, \qquad \Phi(0) = \Phi_0, \qquad \Delta\Phi|_{t = t_\tau} = \Lambda(t_\tau, \Phi(t_\tau^-)), \quad \tau \in \mathbb{N},
$$
where Φ(t) denotes the state vector of the system in R^n. The functions Υ: R_+ × R^n → R^n and ρ: R_+ × R^n → R^n are continuous and predetermined, with Υ(0, Φ(0)) = 0 and ρ(0, Φ(0)) = 0. Finally, Λ: R_+ × R^n → R^n is a function that is both continuously differentiable and locally Lipschitzian, and satisfies Λ(t, 0) = 0.
Definition 1
([39]). Let N_κ(t_1, t_2) denote the number of impulses occurring in the time interval (t_1, t_2), and assume that there exist N_0 > 0 and ν_τ > 0 such that
$$\frac{t_2 - t_1}{\nu_\tau} - N_0 \le N_\kappa(t_1, t_2) \le \frac{t_2 - t_1}{\nu_\tau} + N_0.$$
Then, ν_τ is called the average impulse interval of the sequence κ = {t_τ}_{τ ∈ N}.
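Definition 1 can be illustrated numerically. For an evenly spaced impulse sequence t_τ = 0.1τ (values chosen here purely for illustration), the impulse count over any window satisfies the two-sided bound with average impulse interval ν_τ = 0.1 and any N_0 ≥ 1.

```python
# Illustrative check of the average-impulse-interval bound in Definition 1
# for the evenly spaced sequence t_tau = 0.1 * tau (assumed example values).
nu, N0 = 0.1, 1.0
impulses = [0.1 * k for k in range(1, 1001)]

def N_count(t1, t2):
    # number of impulses strictly inside (t1, t2)
    return sum(t1 < t < t2 for t in impulses)

ok = all(
    (t2 - t1) / nu - N0 <= N_count(t1, t2) <= (t2 - t1) / nu + N0
    for (t1, t2) in [(0.05, 5.0), (1.0, 7.3), (2.2, 9.9)]
)
print(ok)
```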
Definition 2
([40]). The zero solution of System (4) is considered to be stochastic FXT stable if the solution Φ ( t , Φ 0 ) with initial condition Φ 0 R n satisfies the following conditions:
1.
Pro{T(Φ_0, κ) < ∞} = 1 holds for any non-zero initial condition Φ_0 ∈ R^n, where T(Φ_0, κ) = inf{t | Φ(t, Φ_0) = 0} is the settling-time (ST) function;
2.
For any μ ∈ (0, 1) and τ > 0, there exists a δ = δ(μ, τ) > 0 such that Pro{|Φ(t, Φ_0)| < τ for all t ≥ 0} ≥ 1 − μ whenever |Φ_0| < δ;
3.
E(T(Φ_0, κ)) ≤ T_ε for any Φ_0 ∈ R^n, where E(T(Φ_0, κ)) is the expected value of T(Φ_0, κ) and T_ε is a positive constant.
After explaining the conditions for FXT stability as stated in Definition 2, we now proceed to introduce the definition of PDT stability.
Definition 3
([41]). The zero solution of System (4) is called PDT stable in probability if it is FXT stable in probability for any initial value Φ_0 ∈ R^n and satisfies T(Φ_0, κ) ≤ T_c for any given positive constant T_c.
In Definition 3, T_c is the pre-assigned time, κ is defined in Definition 1, and T(Φ_0, κ) is the ST function introduced in Definition 2. Next, we introduce several lemmas that are useful for the main results.
Lemma 1
([42]). Assume that 0 < p ≤ 1, q > 1, and μ_l > 0 for l = 1, 2, …, k; then, the following inequalities hold true:
$$\sum_{l=1}^{k} \mu_l^{\,p} \ge \Big(\sum_{l=1}^{k} \mu_l\Big)^{\!p}, \qquad \sum_{l=1}^{k} \mu_l^{\,q} \ge k^{1-q} \Big(\sum_{l=1}^{k} \mu_l\Big)^{\!q}.$$
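Both inequalities of Lemma 1 are easy to verify numerically on sample data (the values below are arbitrary positive numbers chosen for illustration):

```python
import numpy as np

# Numeric check of the two inequalities in Lemma 1 on arbitrary positive values:
# sum(mu^p) >= (sum mu)^p for 0 < p <= 1, and sum(mu^q) >= k^(1-q)(sum mu)^q for q > 1.
mu = np.array([0.4, 1.3, 2.7, 0.9])
k = len(mu)
p, q = 0.5, 2.0
print(bool(np.sum(mu**p) >= np.sum(mu)**p))
print(bool(np.sum(mu**q) >= k**(1 - q) * np.sum(mu)**q))
```

These elementary bounds are what later allow the split Lyapunov terms V_1 and V_2 to be recombined into powers of V = V_1 + V_2.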
Lemma 2
([33,43]). If there is a Lyapunov function V ( t , ϕ ( t ) ) such that it satisfies the following:
1.
$\varepsilon_1 \|\phi(t)\|^2 \le V(t, \phi(t)) \le \varepsilon_2 \|\phi(t)\|^2$ for all $t \in R_+$, $\phi \in R^n$;
2.
$\pounds V(t, \phi(t)) \le K V(t, \phi(t)) - \mu V^p(t, \phi(t)) - \lambda V^q(t, \phi(t))$ for $t \ne t_\tau$, and $V(t_\tau^+, \phi(t_\tau^+)) \le \Lambda V(t_\tau^-, \phi(t_\tau^-))$ for $t = t_\tau$,
where ε_1, ε_2, K, μ, λ, Λ are positive scalars, 0 < p < 1, q > 1, and K < min{μ, λ, −(ln Λ)/ν_τ}, then System (4) is stochastic FXT stable with the ST
$$T_1 = \frac{1}{\eta(1-q)} \ln\!\Big(1 - \frac{\eta}{\lambda}\,\varpi\Big) + \frac{1}{\eta(1-p)} \ln\frac{\mu}{\mu - \eta \pi^2},$$
where $\varpi = \Lambda^{\tau_0 (1-q)\,\mathrm{sign}(1-\Lambda)}$, $\pi = \Lambda^{\tau_0 (1-p)\,\mathrm{sign}(1-\Lambda)}$, and $\eta = K + (\ln\Lambda)/\nu_\tau$.
Lemma 3
([33]). If there is a Lyapunov function V ( t , ϕ ( t ) ) such that it satisfies the following:
1.
$\varepsilon_1 \|\phi(t)\|^2 \le V(t, \phi(t)) \le \varepsilon_2 \|\phi(t)\|^2$ for all $t \in R_+$, $\phi \in R^n$;
2.
$\pounds V(t, \phi(t)) \le \frac{T_0}{T_c}\big(K V(t, \phi(t)) - \mu V^p(t, \phi(t)) - \lambda V^q(t, \phi(t))\big)$ for $t \ne t_\tau$, and $V(t_\tau^+, \phi(t_\tau^+)) \le \Lambda V(t_\tau^-, \phi(t_\tau^-))$ for $t = t_\tau$,
where $T_c$ is the predefined-time parameter, $T_0 = \frac{1}{\lambda(q-1)\varpi} + \frac{\pi^2}{\mu(1-p)}$, $\varpi = \Lambda^{\tau_0(1-q)\,\mathrm{sign}(1-\Lambda)}$, $\pi = \Lambda^{\tau_0(1-p)\,\mathrm{sign}(1-\Lambda)}$, ε_1, ε_2, K, μ, λ, Λ, p, q are positive scalars, 0 < p < 1, q > 1, and $K < \min\{\mu, \lambda, -\frac{T_c}{T_0}\frac{\ln\Lambda}{\nu_\tau}\}$, then System (4) is stochastic PDT stable with predefined time $T_c$.

3. Main Results

In this section, we present two important theorems for the FXT synchronization and PDT synchronization of the drive system (1) and the response system (2). We design the external control inputs u i ( t , x ) and u ¯ j ( t , x ) as
$$
u_i(t,x) = \begin{cases} -\alpha_i\, \hat e^{\,p-1}(t)\, e_i(t,x) - \beta_i\, \hat e^{\,q-1}(t)\, e_i(t,x), & \hat e(t) \ne 0, \\ 0, & \hat e(t) = 0, \end{cases}
\qquad
\bar u_j(t,x) = \begin{cases} -\gamma_j\, \hat\varepsilon^{\,p-1}(t)\, \varepsilon_j(t,x) - \delta_j\, \hat\varepsilon^{\,q-1}(t)\, \varepsilon_j(t,x), & \hat\varepsilon(t) \ne 0, \\ 0, & \hat\varepsilon(t) = 0, \end{cases}
$$
where α i , β i , γ j , δ j are positive scalars for i I and j J , while p and q satisfy 0 < p < 1 and q > 1 . Here, e ^ ( t ) and ε ^ ( t ) are defined as
$$\hat e(t) = \left( \int_\Omega \sum_{i=1}^{n} e_i^2(t,x)\, dx \right)^{\!1/2}, \qquad \hat\varepsilon(t) = \left( \int_\Omega \sum_{j=1}^{m} \varepsilon_j^2(t,x)\, dx \right)^{\!1/2}.$$
We define
$$y_i = -\lambda_i - 2a_i + \eta_i + \sum_{j=1}^{m}\big(L_j^f |b_{ij}| + L_i^g |\bar b_{ji}|\big), \qquad z_j = -\bar\lambda_j - 2\bar a_j + \bar\eta_j + \sum_{i=1}^{n}\big(L_j^f |b_{ij}| + L_i^g |\bar b_{ji}|\big),$$
for i I and j J ; then, we have the following theorem.
Theorem 1.
If Assumptions 1 and 2 are satisfied, then System (2) can achieve FXT stochastic synchronization with System (1) under Controller (7) provided that the inequality $k < \min\{\psi, \varphi, -\frac{\ln\zeta}{\nu_\tau}\}$ is met, with the ST
$$T_2 = \frac{2}{\eta(1-q)} \ln\!\Big(1 - \frac{\eta}{\varphi}\,\varpi\Big) + \frac{2}{\eta(1-p)} \ln\frac{\psi}{\psi - \eta \pi^2},$$
where $k = \max\{\max_{i \in I}\{y_i\}, \max_{j \in J}\{z_j\}\}$, $\psi = 2\min\{\min_{i \in I}\{\alpha_i\}, \min_{j \in J}\{\gamma_j\}\}$, $\varphi = 2^{\frac{3-q}{2}}\min\{\min_{i \in I}\{\beta_i\}, \min_{j \in J}\{\delta_j\}\}$, $\zeta = \max\{\max_{i \in I}\{\xi_i^2\}, \max_{j \in J}\{\bar\xi_j^2\}\}$, $\varpi = \zeta^{\tau_0 \frac{1-q}{2}\,\mathrm{sign}(1-\zeta)}$, $\pi = \zeta^{\tau_0 \frac{1-p}{2}\,\mathrm{sign}(1-\zeta)}$, and $\eta = k + (\ln\zeta)/\nu_\tau$.
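The gain condition and the settling-time bound of Theorem 1 can be evaluated numerically. In the sketch below, every parameter value is a hypothetical placeholder (not from the paper's example), and the closed form used for T_2 is our reading of the theorem's formula.

```python
import numpy as np

# Hypothetical evaluation of Theorem 1's gain condition and ST bound T2.
# All parameter values are illustrative assumptions, not the paper's.
p, q, tau0 = 0.5, 2.0, 1.0
k_ = 0.3                                   # assumed max{y_i, z_j}
psi = 2.0 * 1.0                            # 2 * min{alpha_i, gamma_j}
phi = 2.0**((3.0 - q) / 2.0) * 1.5         # 2^((3-q)/2) * min{beta_i, delta_j}
zeta, nu_tau = 0.81, 0.5                   # impulse strength < 1, avg interval
eta = k_ + np.log(zeta) / nu_tau
varpi = zeta**(tau0 * (1.0 - q) / 2.0 * np.sign(1.0 - zeta))
pi_ = zeta**(tau0 * (1.0 - p) / 2.0 * np.sign(1.0 - zeta))
assert k_ < min(psi, phi, -np.log(zeta) / nu_tau)   # Theorem 1 gain condition
T2 = (2.0 / (eta * (1.0 - q))) * np.log(1.0 - (eta / phi) * varpi) \
   + (2.0 / (eta * (1.0 - p))) * np.log(psi / (psi - eta * pi_**2))
print(bool(np.isfinite(T2) and T2 > 0))
```

With these stabilizing impulses (ζ < 1), η is negative and both logarithmic terms contribute a positive, finite settling-time bound.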
Proof. 
Consider the following Lyapunov function:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
$$V_1(t) = \int_\Omega \sum_{i=1}^{n} e_i^2(t,x)\, dx, \qquad V_2(t) = \int_\Omega \sum_{j=1}^{m} \varepsilon_j^2(t,x)\, dx.$$
We can calculate £V_1(t) along the trajectories of System (3) for t ≠ t_τ as
$$\pounds V_1(t) = \int_\Omega \sum_{i=1}^{n} \bigg\{ 2 e_i(t,x) \Big[ \sum_{h=1}^{l} D_{ih}\, \frac{\partial^2 e_i(t,x)}{\partial x_h^2} - a_i e_i(t,x) + \sum_{j=1}^{m} b_{ij} F_j(\varepsilon_j(t,x)) + u_i(t,x) \Big] + \rho_i^2(t, e_i(t,x)) \bigg\}\, dx.$$
Utilizing Assumption 1 and the inequality $2 a_1 a_2 \le a_1^2 + a_2^2$ for any constants $a_1$ and $a_2$, it can be shown that the following inequality is valid:
$$\sum_{i=1}^{n} \sum_{j=1}^{m} 2 e_i(t,x)\, b_{ij} F_j(\varepsilon_j(t,x)) \le \sum_{i=1}^{n} \sum_{j=1}^{m} 2 L_j^f |b_{ij}|\, |e_i(t,x)|\, |\varepsilon_j(t,x)| \le \sum_{i=1}^{n} \sum_{j=1}^{m} L_j^f |b_{ij}| \big( e_i^2(t,x) + \varepsilon_j^2(t,x) \big).$$
Furthermore,
$$-\int_\Omega \sum_{i=1}^{n} 2 e_i(t,x)\, \alpha_i\, \hat e^{\,p-1}(t)\, e_i(t,x)\, dx = -2\, \hat e^{\,p-1}(t) \int_\Omega \sum_{i=1}^{n} \alpha_i e_i^2(t,x)\, dx \le -2 \min_{i \in I}\{\alpha_i\}\, V_1^{\frac{p+1}{2}}(t).$$
Similarly,
$$-\int_\Omega \sum_{i=1}^{n} 2 e_i(t,x)\, \beta_i\, \hat e^{\,q-1}(t)\, e_i(t,x)\, dx = -2\, \hat e^{\,q-1}(t) \int_\Omega \sum_{i=1}^{n} \beta_i e_i^2(t,x)\, dx \le -2 \min_{i \in I}\{\beta_i\}\, V_1^{\frac{q+1}{2}}(t).$$
Next, according to Assumption 2, we can obtain
$$\sum_{i=1}^{n} \big[ \rho_i(t, e_i(t,x)) \big]^2 \le \sum_{i=1}^{n} \eta_i\, e_i^2(t,x).$$
Finally, to consider the diffusion term, we can apply Green’s identities and the boundary condition of the model to obtain
Ω 2 e i ( t , x ) h = 1 l D i h 2 e i ( t , x ) x h 2 d x 2 D i Ω e i ( t , x ) Δ e i ( t , x ) d x = 2 D i Ω e i ( t , x ) e i ( t , x ) · ν d S 2 D i Ω e i ( t , x ) · e i ( t , x ) d x = 2 D i Ω h = 1 l e i ( t , x ) x h 2 d x ,
where D_i = min_h {D_{ih}}. Using the Poincaré inequality, there exist constants l_h > 0 for h ∈ H such that the following holds true:
$$\int_\Omega 2 e_i(t,x) \sum_{h=1}^{l} D_{ih}\, \frac{\partial^2 e_i(t,x)}{\partial x_h^2}\, dx \le -2 D_i \int_\Omega \sum_{h=1}^{l} \Big( \frac{\partial e_i(t,x)}{\partial x_h} \Big)^2 dx \le -2 D_i \sum_{h=1}^{l} \frac{1}{l_h} \int_\Omega e_i^2(t,x)\, dx = -\lambda_i \int_\Omega e_i^2(t,x)\, dx,$$
where $\lambda_i = \sum_{h=1}^{l} \frac{2 D_i}{l_h}$.
Substituting (11)–(15) into (10), we have
£ V 1 ( t ) Ω i = 1 n ( λ i 2 a i + η i ) e i 2 ( t , x ) + j = 1 m L j f | b i j | ( e i 2 ( t , x ) + ε j 2 ( t , x ) ) d x 2 min i I { α i } V 1 p + 1 2 ( t ) 2 min i I { β i } V 1 q + 1 2 ( t ) .
Similarly, we have the following inequality for V 2 ( t ) :
£ V 2 ( t ) Ω j = 1 m ( λ ¯ j 2 a ¯ j + η ¯ j ) ε j 2 ( t , x ) + i = 1 n L i g | b ¯ j i | ( e i 2 ( t , x ) + ε j 2 ( t , x ) ) d x 2 min j J { γ j } V 2 p + 1 2 ( t ) 2 min j J { δ j } V 2 q + 1 2 ( t ) .
Therefore, we can find
£ V ( t ) = £ V 1 ( t ) + £ V 2 ( t ) Ω [ i = 1 n λ i 2 a i + η i + j = 1 m ( L j f | b i j | + L i g | b ¯ j i | ) e i 2 ( t , x ) + j = 1 m λ ¯ j 2 a ¯ j + η ¯ j + j = 1 m L j f ( | b i j | + L i g | b ¯ j i | ) ε j 2 ( t , x ) ] d x 2 min i I { α i } V 1 p + 1 2 ( t ) 2 min i I { β i } V 1 q + 1 2 ( t ) 2 min j J { γ j } V 2 p + 1 2 ( t ) 2 min j J { δ j } V 2 q + 1 2 ( t ) Ω max i I { y i } i = 1 n e i 2 ( t , x ) + max j J { z j } j = 1 m ε j 2 ( t , x ) d x 2 min min i I { α i } , min j J { γ j } V 1 p + 1 2 ( t ) + V 2 p + 1 2 ( t ) 2 min min i I { β i } , min j J { δ j } V 1 q + 1 2 ( t ) + V 2 q + 1 2 ( t ) .
Applying Lemma 1, we have
£ V ( t ) Ω max max i I { y i } , max j J { z j } i = 1 n e i 2 ( t , x ) + j = 1 m ε j 2 ( t , x ) d x 2 min min i I { α i } , min j J { γ j } V p + 1 2 ( t ) 2 3 q 2 min min i I { β i } , min j J { δ j } V q + 1 2 ( t ) .
Then, we can deduce that
$$\pounds V(t) \le k V(t) - \psi V^{\frac{p+1}{2}}(t) - \varphi V^{\frac{q+1}{2}}(t).$$
Using the impulse condition of System (3) at instant t = t τ , we then have
V ( t τ + ) = V 1 ( t τ + ) + V 2 ( t τ + ) = Ω i = 1 n e i 2 ( t τ + , x ) + j = 1 m ε j 2 ( t τ + , x ) d x = Ω i = 1 n ξ i 2 e i 2 ( t τ , x ) + j = 1 m ξ ¯ j 2 ε j 2 ( t τ , x ) d x max max i I { ξ i 2 } , max j J { ξ ¯ j 2 } Ω i = 1 n e i 2 ( t τ , x ) + j = 1 m z j 2 ( t τ , x ) d x = ζ V ( t τ ) .
Therefore, based on Equations (18) and (19) and according to Lemma 2, the drive–response system in (1) and (2) achieves FXT stochastic synchronization under the controller in (7). The proof is completed. □
Remark 1.
In previous studies [37,38], the authors demonstrated FXT synchronization in BAM neural networks incorporating a reaction–diffusion term. In the course of their work, they employed the inequality $\big(\int_X u\, dx\big)^p \le \int_X u^p\, dx$ for functions $u$ defined on $X$ and $p \ge 1$, with $X$ a bounded compact set with smooth boundary $\partial X$. However, they did not establish a sufficient condition for the validity of this inequality when $0 \le p < 1$. In the present paper, we introduce a controller into our proof that does not rely on the aforementioned inequality.
We consider the PDT synchronization of the given drive–response system in (1) and (2). We design the external control inputs u i ( t , x ) and u ¯ j ( t , x ) as
$$
u_i(t,x) = \begin{cases} -\dfrac{T_0}{T_c}\big( \alpha_i\, \hat e^{\,p-1}(t)\, e_i(t,x) + \beta_i\, \hat e^{\,q-1}(t)\, e_i(t,x) \big), & \hat e(t) \ne 0, \\ 0, & \hat e(t) = 0, \end{cases}
\qquad
\bar u_j(t,x) = \begin{cases} -\dfrac{T_0}{T_c}\big( \gamma_j\, \hat\varepsilon^{\,p-1}(t)\, \varepsilon_j(t,x) + \delta_j\, \hat\varepsilon^{\,q-1}(t)\, \varepsilon_j(t,x) \big), & \hat\varepsilon(t) \ne 0, \\ 0, & \hat\varepsilon(t) = 0. \end{cases}
$$
Theorem 2.
If Assumptions 1 and 2 are satisfied, then System (2) can achieve PDT stochastic synchronization with System (1) under Controller (20) provided that the following inequality is met:
$$k < \min\Big\{ \psi,\ \varphi,\ -\frac{T_c}{T_0}\frac{\ln\zeta}{\nu_\tau} \Big\},$$
where $k = \frac{T_c}{T_0}\max\{\max_{i \in I}\{y_i\}, \max_{j \in J}\{z_j\}\}$, $\psi = 2\min\{\min_{i \in I}\{\alpha_i\}, \min_{j \in J}\{\gamma_j\}\}$, $\varphi = 2^{\frac{3-q}{2}}\min\{\min_{i \in I}\{\beta_i\}, \min_{j \in J}\{\delta_j\}\}$, $\zeta = \max\{\max_{i \in I}\{\xi_i^2\}, \max_{j \in J}\{\bar\xi_j^2\}\}$, $T_c$ is the predefined-time parameter, $T_0 = \frac{2}{\varphi(q-1)\varpi} + \frac{2\pi^2}{\psi(1-p)}$, $\varpi = \zeta^{\tau_0 \frac{1-q}{2}\,\mathrm{sign}(1-\zeta)}$, and $\pi = \zeta^{\tau_0 \frac{1-p}{2}\,\mathrm{sign}(1-\zeta)}$.
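The PDT gain condition of Theorem 2 can likewise be checked numerically. The sketch below uses our reading of the T_0 expression with hypothetical gains and an assumed user-chosen predefined time T_c; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical check of Theorem 2's gain condition; the T0 expression is our
# reading of the theorem, and all parameter values are illustrative.
p, q, tau0 = 0.5, 2.0, 1.0
psi = 2.0 * 1.0                              # 2 * min{alpha_i, gamma_j}
phi = 2.0**((3.0 - q) / 2.0) * 1.5           # 2^((3-q)/2) * min{beta_i, delta_j}
zeta, nu_tau = 0.81, 0.5                     # impulse strength, avg interval
Tc = 3.0                                     # user-assigned predefined time
varpi = zeta**(tau0 * (1.0 - q) / 2.0 * np.sign(1.0 - zeta))
pi_ = zeta**(tau0 * (1.0 - p) / 2.0 * np.sign(1.0 - zeta))
T0 = 2.0 / (phi * (q - 1.0) * varpi) + 2.0 * pi_**2 / (psi * (1.0 - p))
k_ = (Tc / T0) * 0.2                         # k = (Tc/T0) * max{y_i, z_j} (assumed)
print(bool(k_ < min(psi, phi, -(Tc / T0) * np.log(zeta) / nu_tau)))
```

A key practical point is that T_c is chosen by the user; the gain condition then tells us how strong the control gains must be relative to the system constants for the error to vanish by time T_c.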
Proof. 
To prove Theorem 2, we can consider the following Lyapunov function:
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where V_1(t) and V_2(t) are defined in Equation (9).
Similarly to Theorem 1, we compute £V_1(t) along the trajectories of System (3) for t ≠ t_τ as follows:
£ V 1 ( t ) = Ω i = 1 n 2 e i ( t , x ) [ h = 1 l 2 e i ( t , x ) D i h 2 e i ( t , x ) x h 2 a i e i ( t , x ) + j = 1 m b i j F j ( ε j ( t , x ) ) u i ( t , x ) ] + ρ i 2 ( t , e i ( t , x ) ) d x .
In a similar way to Theorem 1, it can be shown that the following inequality is valid:
$$-\frac{T_0}{T_c} \int_\Omega \sum_{i=1}^{n} 2 e_i(t,x)\, \alpha_i\, \hat e^{\,p-1}(t)\, e_i(t,x)\, dx = -2\frac{T_0}{T_c}\, \hat e^{\,p-1}(t) \int_\Omega \sum_{i=1}^{n} \alpha_i e_i^2(t,x)\, dx \le -2\frac{T_0}{T_c} \min_{i \in I}\{\alpha_i\}\, V_1^{\frac{p+1}{2}}(t).$$
Similarly,
$$-\frac{T_0}{T_c} \int_\Omega \sum_{i=1}^{n} 2 e_i(t,x)\, \beta_i\, \hat e^{\,q-1}(t)\, e_i(t,x)\, dx = -2\frac{T_0}{T_c}\, \hat e^{\,q-1}(t) \int_\Omega \sum_{i=1}^{n} \beta_i e_i^2(t,x)\, dx \le -2\frac{T_0}{T_c} \min_{i \in I}\{\beta_i\}\, V_1^{\frac{q+1}{2}}(t).$$
Then, substituting (11), (14), (15), (24) and (25) into (23), we have
£ V 1 ( t ) Ω i = 1 n ( λ i 2 a i + η i ) e i 2 ( t , x ) + j = 1 m L j f | b i j | ( e i 2 ( t , x ) + ε j 2 ( t , x ) ) d x 2 T 0 T c min i I { α i } V 1 p + 1 2 ( t ) 2 T 0 T c min i I { β i } V 1 q + 1 2 ( t ) .
Similarly, we have
£ V 2 ( t ) Ω j = 1 m ( λ ¯ j 2 a ¯ j + η ¯ j ) ε j 2 ( t , x ) + i = 1 n L i g | b ¯ j i | ( e i 2 ( t , x ) + ε j 2 ( t , x ) ) d x 2 T 0 T c min j J { γ j } V 2 p + 1 2 ( t ) 2 T 0 T c min j J { δ j } V 2 q + 1 2 ( t ) .
Therefore, we can find
£ V ( t ) = £ V 1 ( t ) + £ V 2 ( t ) Ω [ i = 1 n λ i 2 a i + η i + j = 1 m ( L j f | b i j | + L i g | b ¯ j i | ) e i 2 ( t , x ) + j = 1 m λ ¯ j 2 a ¯ j + η ¯ j + j = 1 m L j f ( | b i j | + L i g | b ¯ j i | ) ε j 2 ( t , x ) ] d x 2 T 0 T c min i I { α i } V 1 p + 1 2 ( t ) 2 T 0 T c min i I { β i } V 1 q + 1 2 ( t ) 2 T 0 T c min j J { γ j } V 2 p + 1 2 ( t ) 2 T 0 T c min j J { δ j } V 2 q + 1 2 ( t ) Ω max i I { y i } i = 1 n e i 2 ( t , x ) + max j J { z j } j = 1 m ε j 2 ( t , x ) d x 2 T 0 T c min min i I { α i } , min j J { γ j } V 1 p + 1 2 ( t ) + V 2 p + 1 2 ( t ) 2 T 0 T c min min i I { β i } , min j J { δ j } V 1 q + 1 2 ( t ) + V 2 q + 1 2 ( t ) .
Applying Lemma 1, we have
£ V ( t ) T 0 T c [ T c T 0 max max i I { y i } , max j J { z j } Ω i = 1 n e i 2 ( t , x ) + j = 1 m ε j 2 ( t , x ) d x 2 min min i I { α i } , min j J { γ j } V p + 1 2 ( t ) 2 3 q 2 min min i I { β i } , min j J { δ j } V q + 1 2 ( t ) ] .
Then, we can deduce that
$$\pounds V(t) \le \frac{T_0}{T_c}\Big( k V(t) - \psi V^{\frac{p+1}{2}}(t) - \varphi V^{\frac{q+1}{2}}(t) \Big).$$
Therefore, based on Equations (19) and (28) and according to Lemma 3, the drive–response system in (1) and (2) achieves PDT stochastic synchronization with predefined time T_c under the controller in (20). The proof is completed. □
Remark 2.
Designing a suitable controller is crucial when studying the synchronization of neural network systems. In this paper, we have proposed a novel controller that is simpler than the one in [18]. Our controller consists of only two terms, yet it still achieves high-quality PDT synchronization for the drive–response system in (1) and (2). This simplicity makes our controller more applicable in practical scenarios and can potentially reduce control costs.
In Theorem 2, we discussed the PDT synchronization of BAM neural networks involving reaction–diffusion, impulsive, and stochastic effects. However, if we remove the reaction–diffusion term from the drive systems in (1) and (2), these systems can be regarded as impulsive BAM neural networks with stochastic perturbations:
$$
\begin{aligned}
dv_i(t) &= \Big[ -a_i v_i(t) + \sum_{j=1}^{m} b_{ij} f_j(w_j(t)) + I_i \Big]\, dt + \varrho_i(t, v_i(t))\, d\omega(t), \quad t \ne t_\tau, \\
dw_j(t) &= \Big[ -\bar a_j w_j(t) + \sum_{i=1}^{n} \bar b_{ji} g_i(v_i(t)) + J_j \Big]\, dt + \bar\varrho_j(t, w_j(t))\, d\omega(t), \quad t \ne t_\tau, \\
\Delta v_i(t_\tau) &= v_i(t_\tau^+) - v_i(t_\tau^-) = (\xi_i - 1)\, v_i(t_\tau^-), \qquad \Delta w_j(t_\tau) = w_j(t_\tau^+) - w_j(t_\tau^-) = (\bar\xi_j - 1)\, w_j(t_\tau^-),
\end{aligned}
$$
$$
\begin{aligned}
d\tilde v_i(t) &= \Big[ -a_i \tilde v_i(t) + \sum_{j=1}^{m} b_{ij} f_j(\tilde w_j(t)) + I_i + u_i(t) \Big]\, dt + \varrho_i(t, \tilde v_i(t))\, d\omega(t), \quad t \ne t_\tau, \\
d\tilde w_j(t) &= \Big[ -\bar a_j \tilde w_j(t) + \sum_{i=1}^{n} \bar b_{ji} g_i(\tilde v_i(t)) + J_j + v_j(t) \Big]\, dt + \bar\varrho_j(t, \tilde w_j(t))\, d\omega(t), \quad t \ne t_\tau, \\
\Delta \tilde v_i(t_\tau) &= \tilde v_i(t_\tau^+) - \tilde v_i(t_\tau^-) = (\xi_i - 1)\, \tilde v_i(t_\tau^-), \qquad \Delta \tilde w_j(t_\tau) = \tilde w_j(t_\tau^+) - \tilde w_j(t_\tau^-) = (\bar\xi_j - 1)\, \tilde w_j(t_\tau^-).
\end{aligned}
$$
The relevance of PDT synchronization in the mentioned systems was considered in Theorem 2, Corollary 3, and Corollary 4 of reference [33]. This simplification allows us to focus solely on the impulsive dynamics and the stochastic influences on the synchronization behavior.
Taking controllers u i ( t ) and v j ( t ) in System (30) as follows:
u i ( t ) = − T 0 T c [ α i sign ( e i ( t ) ) | e i ( t ) | p + β i sign ( e i ( t ) ) | e i ( t ) | q ] , v j ( t ) = − T 0 T c [ γ j sign ( ε j ( t ) ) | ε j ( t ) | p + δ j sign ( ε j ( t ) ) | ε j ( t ) | q ] ,
where α i , β i , γ j , δ j , p , and q are positive constants for i ∈ I , j ∈ J , with 0 < p < 1 and q > 1 ; then, letting
y ˇ i = − 2 a i + η i + ∑ j = 1 m ( L j f | b i j | + L i g | b ¯ j i | ) , z ˇ j = − 2 a ¯ j + η ¯ j + ∑ i = 1 n ( L j f | b i j | + L i g | b ¯ j i | ) , k ˇ = max { max i ∈ I { y ˇ i } , max j ∈ J { z ˇ j } } ,
the following result holds true.
Corollary 1.
Suppose that Assumptions 1 and 2 hold true and that the control gains in (31) satisfy the inequality k ˇ < min { ψ , φ , T c T 0 ln ζ ν τ } . Then, the drive–response system in (29) and (30) achieves PDT stochastic synchronization under the controller in (31).
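For intuition, the two-term controller in (31) can be evaluated numerically as follows. This is a minimal sketch, not the authors' code: the function name and vectorized form are ours, and we read each power term as sign(e)|e|^p, which is continuous at zero whenever p > 0.

```python
import numpy as np

def pdt_controller(e, alpha, beta, p, q, T0, Tc):
    """Two-term power-law feedback of the form used in (31):
    u = -(T0/Tc) * (alpha*sign(e)|e|^p + beta*sign(e)|e|^q).
    The 0 < p < 1 term dominates near the origin, the q > 1 term
    dominates far from it; the map is continuous at e = 0."""
    e = np.asarray(e, dtype=float)
    return -(T0 / Tc) * (alpha * np.sign(e) * np.abs(e) ** p
                         + beta * np.sign(e) * np.abs(e) ** q)
```

Odd symmetry (u(−e) = −u(e)) and u(0) = 0 follow directly, so no chattering arises once the synchronization error reaches zero.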
If the original system in (1) and (2) does not have any stochastic perturbations, then the following drive system can be derived:
d v i ( t ) = [ ∑ h = 1 l D i h ∂ 2 v i ( t , x ) ∂ x h 2 − a i v i ( t , x ) + ∑ j = 1 m b i j f j ( w j ( t , x ) ) + I i ] d t , t ≠ t τ , d w j ( t ) = [ ∑ h = 1 l D ¯ j h ∂ 2 w j ( t , x ) ∂ x h 2 − a ¯ j w j ( t , x ) + ∑ i = 1 n b ¯ j i g i ( v i ( t , x ) ) + J j ] d t , t ≠ t τ , Δ v i ( t τ , x ) = ( ξ i − 1 ) v i ( t τ , x ) , Δ w j ( t τ , x ) = ( ξ ¯ j − 1 ) w j ( t τ , x )
with the corresponding response system
d v ˜ i ( t ) = [ ∑ h = 1 l D i h ∂ 2 v ˜ i ( t , x ) ∂ x h 2 − a i v ˜ i ( t , x ) + ∑ j = 1 m b i j f j ( w ˜ j ( t , x ) ) + I i + u i ( t , x ) ] d t , t ≠ t τ , d w ˜ j ( t ) = [ ∑ h = 1 l D ¯ j h ∂ 2 w ˜ j ( t , x ) ∂ x h 2 − a ¯ j w ˜ j ( t , x ) + ∑ i = 1 n b ¯ j i g i ( v ˜ i ( t , x ) ) + J j + u ¯ j ( t , x ) ] d t , t ≠ t τ , Δ v ˜ i ( t τ , x ) = ( ξ i − 1 ) v ˜ i ( t τ , x ) , Δ w ˜ j ( t τ , x ) = ( ξ ¯ j − 1 ) w ˜ j ( t τ , x ) .
Now, denoting
y ¯ i = λ i − 2 a i + ∑ j = 1 m ( L j f | b i j | + L i g | b ¯ j i | ) , z ¯ j = λ ¯ j − 2 a ¯ j + ∑ i = 1 n ( L j f | b i j | + L i g | b ¯ j i | ) , k ¯ = max { max i ∈ I { y ¯ i } , max j ∈ J { z ¯ j } } ,
we have the following result.
Corollary 2.
Suppose that Assumptions 1 and 2 are satisfied. Then, the drive–response system in (32) and (33) exhibits PDT synchronization in probability under the controller in (7) if the control gains α i , β i , γ j , and δ j satisfy the inequality k ¯ < min { ψ , φ , T c T 0 ln ζ ν τ } .
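When simulating the reaction–diffusion systems in (32) and (33), the spatial term ∂²v/∂x² must be discretized. A common choice is a second-order central difference; the sketch below is ours and, purely for illustration, leaves the two boundary entries at zero — the paper's actual boundary conditions should be substituted in practice.

```python
import numpy as np

def laplacian_1d(v, dx):
    """Central-difference approximation of v_xx on a uniform grid.
    Interior points use (v[i+1] - 2*v[i] + v[i-1]) / dx**2; the two
    boundary entries are kept at zero as a placeholder."""
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx ** 2
    return lap
```

For v(x) = sin(πx) this reproduces v_xx = −π² sin(πx) at the interior points to O(dx²).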
Remark 3.
In previous works [32,33,34,41], the authors successfully achieved FXT and PDT synchronization of neural networks with or without stochastic perturbations by designing controllers that incorporate a discontinuous sign function. However, when the synchronization error approaches zero, the chattering caused by this discontinuity can degrade synchronization performance. To address this issue, in Theorems 1 and 2 and in Corollary 2 we have proposed a novel continuous controller that avoids the discontinuous sign function.
Remark 4.
In previous studies [32,33,34,38,41], researchers successfully achieved FXT and PDT synchronization in various types of impulsive neural networks. However, there has been a lack of research on PDT synchronization in stochastic impulsive reaction–diffusion BAM neural networks. In this paper, we have addressed this gap by proposing a new controller that achieves FXT and PDT synchronization of BAM neural networks with stochastic perturbations, reaction–diffusion terms, and general impulsive effects. The results obtained in this study therefore have broader applicability.

4. Numerical Examples

In this section, we compare the numerical results with our theoretical predictions and evaluate the effectiveness of the proposed methods and models by implementing the Euler–Maruyama method in Python.
Example 1.
Consider the following impulsive stochastic BAMNNs with reaction–diffusion term
d v i ( t ) = [ D i ∂ 2 v i ( t , x ) ∂ x 2 − a i v i ( t , x ) + ∑ j = 1 3 b i j f j ( w j ( t , x ) ) + I i ] d t + ϱ i ( t , v i ( t , x ) ) d ω ( t ) , t ≠ t τ , d w j ( t ) = [ D ¯ j ∂ 2 w j ( t , x ) ∂ x 2 − a ¯ j w j ( t , x ) + ∑ i = 1 3 b ¯ j i g i ( v i ( t , x ) ) + J j ] d t + ϱ ¯ j ( t , w j ( t , x ) ) d ω ( t ) , t ≠ t τ , Δ v i ( t τ + , x ) = ξ i v i ( t τ , x ) , Δ w j ( t τ + , x ) = ξ ¯ j w j ( t τ , x )
in a one-dimensional case for space Ω, where D i = D ¯ j = 1 , h l = 5 , a 1 = a 2 = a 3 = 1.222 , a ¯ 1 = a ¯ 2 = a ¯ 3 = 1.1280 , b 11 = 1.8150 , b 12 = 4.6464 , b 13 = 4.6464 , b 21 = 4.6464 , b 22 = 1.5972 , b 23 = 6.3888 , b 31 = 4.6464 , b 32 = 6.3888 , b 33 = 1.4520 , b ¯ 11 = 1.9800 , b ¯ 12 = 5.0688 , b ¯ 13 = 5.0688 , b ¯ 21 = 5.0688 , b ¯ 22 = 1.7424 , b ¯ 23 = 6.9696 , b ¯ 31 = 5.0688 , b ¯ 32 = 6.9696 , b ¯ 33 = 1.5840 , { I i } = { J j } = 0 , { ξ i } = { ξ ¯ j } = 0.58 , and where f j ( u ) = tanh ( u ) , g i ( u ) = tanh ( u ) , ϱ 1 ( u ) = ϱ 2 ( u ) = ϱ 3 ( u ) = 0.21 u , and ϱ ¯ 1 ( u ) = ϱ ¯ 2 ( u ) = ϱ ¯ 3 ( u ) = 0.21 u . Figures 2 and 3 show the time evolution of System (34) with initial values r 1 = 0.0517 sin ( 3 x / 5 ) , r 2 = 0.2146 sin ( 3 x / 5 ) , r 3 = 0.4863 sin ( 3 x / 5 ) , s 1 = 1.0239 sin ( 3 x / 5 ) , s 2 = 1.1400 sin ( 3 x / 5 ) , s 3 = 2.7070 sin ( 3 x / 5 ) , indicating that the system has a chaotic attractor.
The response system of System (34) is
d v ˜ i ( t ) = [ D i ∂ 2 v ˜ i ( t , x ) ∂ x 2 − a i v ˜ i ( t , x ) + ∑ j = 1 m b i j f j ( w ˜ j ( t , x ) ) + I i + u i ( t , x ) ] d t + ϱ i ( t , v ˜ i ( t , x ) ) d ω ( t ) , t ≠ t τ , d w ˜ j ( t ) = [ D ¯ j ∂ 2 w ˜ j ( t , x ) ∂ x 2 − a ¯ j w ˜ j ( t , x ) + ∑ i = 1 n b ¯ j i g i ( v ˜ i ( t , x ) ) + J j + u ¯ j ( t , x ) ] d t + ϱ ¯ j ( t , w ˜ j ( t , x ) ) d ω ( t ) , t ≠ t τ , Δ v ˜ i ( t τ + , x ) = ξ i v ˜ i ( t τ , x ) , Δ w ˜ j ( t τ + , x ) = ξ ¯ j w ˜ j ( t τ , x ) ,
where k 1 = max i ∈ I 1 { y i } = 0.027 , k 2 = max j ∈ J 1 { z j } = 0.161 , k = T c T 0 max { k 1 , k 2 } = 0.1378 , L j f = L i g = 1 , η i = η ¯ j = 0.1 , and i ∈ I 1 = { 1 , 2 , 3 } , j ∈ J 1 = { 1 , 2 , 3 } . Setting the parameters α 1 = α 2 = α 3 = 6 , β 1 = β 2 = β 3 = 6.3 , γ 1 = γ 2 = γ 3 = 4.8 , δ 1 = δ 2 = δ 3 = 4.2 , p = 0.4 , and q = 1.6 and taking the preassigned time T c = 0.75 , Assumptions 1 and 2 hold true and the inequality k < min { ψ , φ , T c T 0 ln ζ ν τ } = min { 9.6 , 6.8229 , 0.9333 } is satisfied, where T 0 = 0.8755 . As a result, according to Theorem 2, the response system converges to the drive system within the preassigned time T c = 0.75 , which is less than T 2 = 0.8251 . This validates the effectiveness of the controller in achieving predefined-time synchronization. The simulation results in Figure 4 and Figure 5 provide further evidence of the synchronization and robustness of the system.
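The gain condition of Theorem 2 can be checked directly from the numbers reported in this example; the quick sanity check below uses only those stated values (the variable names are ours).

```python
# Numeric check of the gain condition of Theorem 2 with Example 1's values.
k1, k2 = 0.027, 0.161              # k1 = max_i {y_i}, k2 = max_j {z_j}
Tc, T0 = 0.75, 0.8755              # preassigned time and scaling constant
k = (Tc / T0) * max(k1, k2)        # scaled coupling bound, approx. 0.138
bound = min(9.6, 6.8229, 0.9333)   # min{psi, phi, (Tc/T0) ln(zeta)/(nu tau)}
assert k < bound                   # the sufficient condition is satisfied
```

Since k is well below the bound of 0.9333, the sufficient condition holds with a comfortable margin.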
Remark 5.
From Figure 2, it is evident that the BAM neural networks exhibit hyperchaotic attractors due to the combined effects of impulses and random noise, leading to enhanced associative memory capabilities. Figures 3 and 4 further illustrate that the error system converges to zero within the preassigned time T c , as stated in Theorem 2. This preassigned time is a tighter bound than the settling-time estimate T 2 obtained in Theorem 1. Compared with the previous works referenced in [38,39,41,43], the findings in this paper offer a more practical and applicable approach.

5. Conclusions

In this paper, we have investigated the FXT and PDT synchronization of impulsive reaction–diffusion BAM neural networks with stochastic perturbations. We began by reviewing relevant background on FXT and PDT synchronization and on stochastic neural networks. Building upon previous research in these areas, we combined the effects of impulses and stochastic perturbations in BAM neural networks. In addition, we designed a simple continuous controller for the system and, using the Lyapunov function method, derived sufficient conditions ensuring FXT and PDT synchronization of the drive–response systems. Finally, we provided a numerical example to verify the theoretical results obtained with the proposed model. However, fuzziness is often unavoidable in dynamical systems; it allows a network to handle uncertain and ambiguous information, thereby improving its robustness and adaptability [31,38,41]. In this paper, we did not consider fuzzy terms. In future work, we may concentrate on the synchronization of BAM neural networks with fuzzy terms as a means of enhancing the applicability of our dynamical system.

Author Contributions

Methodology, R.M.; Software, H.S.; Formal analysis, E.K.; Writing—original draft, R.M.; Writing—review and editing, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the fund of the Guangdong Provincial Key Laboratory of Computational Science and Material Design (Grant No. 2019B030301001).

Data Availability Statement

There is no data associated with this paper.

Conflicts of Interest

The authors declare that they have no competing interests regarding the publication of this article.

References

1. Kosko, B. Adaptive bi-directional associative memories. Appl. Opt. 1987, 26, 4947–4960.
2. Kosko, B. Bi-directional associative memories. IEEE Trans. Syst. Man Cybern. 1988, 18, 49–60.
3. Hasan, S.M.R.; Siong, N.K. A VLSI BAM neural network chip for pattern recognition applications. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 164–168.
4. Wang, L.; Jiang, M.; Liu, R.; Tang, X. Comparison BAM and discrete Hopfield networks with CPN for processing of noisy data. In Proceedings of the 2008 9th International Conference on Signal Processing, Beijing, China, 26–29 October 2008; pp. 1708–1711.
5. Wang, W.; Wang, X.; Luo, X. Finite-time projective synchronization of memristor-based BAM neural networks and applications in image encryption. IEEE Access 2018, 6, 56457–56476.
6. Li, S.; Li, H.; Wang, J. A new bi-directional associative memory model based on BAM networks. Neurocomputing 2009, 72, 2408–2414.
7. Demirkaya, O.; Asyali, M.; Sahoo, P. Image Processing with MATLAB: Applications in Medicine and Biology; CRC Press: Boca Raton, FL, USA, 2009.
8. Song, Q.; Cao, J. Global exponential robust stability of Cohen–Grossberg neural network with time-varying delays and reaction-diffusion terms. J. Frankl. Inst. 2006, 343, 705–719.
9. Zhang, W.; Li, J. Global exponential synchronization of delayed BAM neural networks with reaction-diffusion terms and the Neumann boundary conditions. Bound. Value Probl. 2012, 2012, 2.
10. Zhang, H.; Pal, N.R. Distributed adaptive tracking synchronization coupled reaction-diffusion neural network. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1462–1475.
11. Mahemuti, R.; Halik, A.; Abdurahman, A. General decay synchronization of delayed BAM neural networks with reaction-diffusion terms. Adv. Differ. Equ. 2020, 2020, 457.
12. Sun, Y.; Hu, C.; Yu, J.; Shi, T. Synchronization of fractional-order reaction-diffusion neural networks via mixed boundary control. Appl. Math. Comput. 2023, 450, 127982.
13. Li, X.; Song, S. Research on synchronization of chaotic delayed neural networks with stochastic perturbation using impulsive control method. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 3892–3900.
14. Rogers, L.C.G. Arbitrage with fractional Brownian motion. Math. Financ. 1997, 7, 95–105.
15. Comte, F.; Renault, E. Long memory continuous time models. J. Econom. 1996, 73, 101–149.
16. Torres, J.J.; Muñoz, M.A.; Cortés, J.M.; Mejías, J.F. Special issue on emergent effects in stochastic neural networks with application to learning and information processing. Neurocomputing 2021, 461, 632–634.
17. Liu, Y.; Liu, S.; Wang, Y.; Lombardi, F.; Han, J. A survey of stochastic computing neural networks for machine learning applications. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2809–2824.
18. Wang, X.; Cao, J.; Zhou, X.; Liu, Y.; Yan, Y.; Wang, J. A novel framework of prescribed time/fixed time/finite time stochastic synchronization control of neural networks and its application in image encryption. Neural Netw. 2023, 165, 755–773.
19. Liu, Y.; Liu, H.; Zhang, B.; Wu, G. Extraction of if-then rules from trained neural network and its application to earthquake prediction. In Proceedings of the Third IEEE International Conference on Cognitive Informatics, Victoria, BC, Canada, 17 August 2004; pp. 109–115.
20. Hu, B.; Guan, Z.; Chen, G.; Lewis, F.L. Multistability of delayed hybrid impulsive neural networks with application to associative memories. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1537–1551.
21. Li, H.; Li, C.; Ouyang, D.; Nguang, S.K. Impulsive synchronization of unbounded delayed inertial neural networks with actuator saturation and sampled-data control and its application to image encryption. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1460–1473.
22. Hertz, J.; Krogh, A.; Palmer, R.G. Introduction to the Theory of Neural Computation; CRC Press: Boca Raton, FL, USA, 1991.
23. Sader, M.; Abdurahman, A.; Jiang, H. General decay synchronization of delayed BAM neural networks via nonlinear feedback control. Appl. Math. Comput. 2018, 337, 302–314.
24. Hu, D.; Tan, J.; Shi, K.; Ding, K. Switching synchronization of reaction-diffusion neural networks with time-varying delays. Chaos Solitons Fractals 2022, 155, 111766.
25. Wei, T.; Lin, P.; Zhu, Q.; Wang, L.; Wang, Y. Dynamical behavior of nonautonomous stochastic reaction-diffusion neural-network models. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1575–1580.
26. Wei, T.; Xie, X.; Li, X. Input-to-state stability of delayed reaction-diffusion neural networks with multiple impulses. AIMS Math. 2020, 6, 5786–5860.
27. Chen, W.; Ren, G.; Yu, Y.; Yuan, X. Quasi-synchronization of heterogeneous stochastic coupled reaction-diffusion neural networks with mixed time-varying delays via boundary control. J. Frankl. Inst. 2023, 360, 10080–10099.
28. Velmurugan, G.; Rakkiyappan, R.; Cao, J. Finite-time synchronization of fractional-order memristor-based neural networks with time delays. Neural Netw. 2016, 73, 36–46.
29. Zhang, Y.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Zheng, M.; Zhao, H. Finite-time synchronization for memristor-based BAM neural networks with stochastic perturbations and time-varying delays. Int. J. Robust Nonlinear Control 2018, 28, 5118–5139.
30. Wang, J.; Zhang, X.; Wu, X.; Huang, T.; Wang, Q. Finite-time passivity and synchronization of coupled reaction–diffusion neural networks with multiple weights. IEEE Trans. Cybern. 2019, 49, 3385–3397.
31. Xu, Y.; Liu, W.; Wu, Y.; Li, W. Finite-time synchronization of fractional-order fuzzy time-varying coupled neural networks subject to reaction–diffusion. IEEE Trans. Fuzzy Syst. 2023, 31, 3423–3432.
32. Wan, Y.; Cao, J.; Wen, G. Robust fixed-time synchronization of delayed Cohen–Grossberg neural networks. Neural Netw. 2016, 73, 86–94.
33. You, J.; Abdurahman, A.; Sadik, H. Fixed/predefined-time synchronization of complex-valued stochastic BAM neural networks with stabilizing and destabilizing impulse. Mathematics 2022, 10, 4384.
34. Lin, L.; Wang, Q.; He, B.; Chen, Y.; Peng, X.; Mei, R. Adaptive predefined-time synchronization of two different fractional-order chaotic systems with time-delay. IEEE Access 2021, 9, 31908–31920.
35. Wu, J.; Wang, X.; Liu, W. Smooth control steering global predefined-time synchronization for a class of nonlinear systems. IEEE Control Syst. Lett. 2023, 7, 1255–1260.
36. Abdurahman, A.; Abdusaimaiti, M.; Jiang, H. Fixed/predefined-time lag synchronization of complex-valued BAM neural networks with stochastic perturbations. Appl. Math. Comput. 2023, 444, 127811.
37. Li, R.; Cao, J.; Alsaedi, A.; Alsaadi, F. Exponential and fixed-time synchronization of Cohen–Grossberg neural networks with time-varying delays and reaction-diffusion terms. Appl. Math. Comput. 2017, 313, 37–51.
38. Sadik, H.; Abdurahman, A.; Tohti, R. Fixed-time synchronization of reaction-diffusion fuzzy neural networks with stochastic perturbations. Mathematics 2023, 11, 1493.
39. Lee, L.; Liu, Y.; Liang, J.; Cai, X. Finite time stability of nonlinear impulsive systems and its applications in sampled-data systems. ISA Trans. 2015, 57, 172–178.
40. Yin, J.; Khoo, S.; Man, Z. Finite-time stability and instability of stochastic nonlinear systems. Automatica 2011, 47, 2671–2677.
41. Abudusaimaiti, M.; Abdurahman, A.; Jiang, H. Fixed/predefined-time synchronization of fuzzy neural networks with stochastic perturbations. Chaos Solitons Fractals 2022, 154, 111596.
42. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952.
43. Li, H.; Li, C.; Huang, T. Fixed-time stability and stabilization of impulsive dynamical systems. J. Frankl. Inst. 2017, 354, 8626–8644.
Figure 1. Construction of a stochastic impulsive BAM neural network.
Figure 2. The chaotic attractor of System (34).
Figure 3. The chaotic attractor of System (34) when x = 4 is fixed; v ( t , x ) (left), w ( t , x ) (right).
Figure 4. The evolution diagram of e 1 ( t , x ) (left), e 2 ( t , x ) (middle), and e 3 ( t , x ) (right).
Figure 5. The evolution diagram of ε 1 ( t , x ) (left), ε 2 ( t , x ) (middle), and ε 3 ( t , x ) (right).
Table 1. Comparative analysis with previous works.

Ref. | Synchronization Type | Reaction–Diffusion Term | Impulse Effect | Stochastic Perturbation | Number Field
[11] | General decay | with | without | without | R
[24] | Switching | with | without | without | R
[27] | Quasi | with | without | with | R
[31] | FNT | with | without | without | R
[33] | FXT/PDT | without | with | with | C
[36] | FXT/PDT | without | without | with | C
[38] | FXT | with | without | with | R
This paper | FXT/PDT | with | with | with | R