Article

Finite-Time Synchronization for Stochastic Fractional-Order Memristive BAM Neural Networks with Multiple Delays

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(9), 678; https://doi.org/10.3390/fractalfract7090678
Submission received: 13 June 2023 / Revised: 4 September 2023 / Accepted: 7 September 2023 / Published: 10 September 2023
(This article belongs to the Special Issue Advances in Fractional-Order Neural Networks, Volume II)

Abstract

This paper studies the finite-time synchronization problem of fractional-order stochastic memristive bidirectional associative memory neural networks (MBAMNNs) with discontinuous jumps. A novel criterion for finite-time synchronization is obtained by utilizing the properties of a quadratic fractional-order Gronwall inequality with time delay and the comparison principle. This criterion provides a new approach to analyze the finite-time synchronization problem of neural networks with stochasticity. Finally, numerical simulations are provided to demonstrate the effectiveness and superiority of the obtained results.

1. Introduction

Artificial intelligence has been an active field of research, and neural networks have emerged as a prominent branch due to their intelligence characteristics and potential for real-world applications. Neural networks have revolutionized the field of artificial intelligence by enabling computers to process and analyze large volumes of complex data with remarkable accuracy. These models are inspired by the structure and processes of the brain, in which interconnected neurons communicate using electrical signals. Neural networks are characterized by their ability to learn from data and improve their performance over time without being explicitly programmed. Neural networks have evolved over time, with various models developed to address different types of problems, for example, Cohen–Grossberg neural networks [1], Hopfield neural networks [2], and cellular neural networks [3].
Kosko’s bidirectional associative memory neural networks (BAMNNs) are a noteworthy extension of traditional single-layer neural networks [4]. BAMNNs consist of two layers of neurons that are not interconnected within their own layer. In contrast, the neurons in different layers are fully connected, allowing for bidirectional information flow between the two layers. This unique structure enables BAMNNs to function as both input and output layers, providing powerful information storage and associative memory capabilities. In signal processing, BAMNNs can be used to filter signals or extract features, while in pattern recognition, they can classify images or recognize speech. In optimization, BAMNNs can be used to identify the optimal solution, while in automatic control, they can be used to regulate or stabilize a system [5,6]. The progress of artificial intelligence and the evolution of neural networks have created novel opportunities to tackle intricate issues in diverse domains, paving the way for problem-solving approaches that were previously unattainable. The BAMNNs' unique architecture and capabilities make them a powerful tool for engineering applications, and it is anticipated that this technology will maintain its importance in future research and development and continue to make substantial contributions to various fields.
Due to the restriction of network size and synaptic elements, the functions of artificial neural networks are greatly limited. If the common connection weights and self-feedback connection weights of BAMNNs are implemented by memristors [7], then the model can be built in a circuit. Memristors [8,9] are nonlinear two-terminal circuit elements. Their unique features have led to their widespread use and potential application in a variety of fields, including artificial intelligence, data storage, and neuromorphic computing. Adding memristors to neural networks makes it possible for artificial neural networks to simulate the human brain on a circuit, which makes research on memristive neural networks more meaningful. Therefore, the resistance in traditional neural networks is replaced by a memristor, and memristor-based BAMNNs (MBAMNNs) are formed. Compared with traditional neural networks, memristive neural networks have stronger learning and associative memory abilities, allowing for more efficient processing and storage of information, thereby improving the efficiency and accuracy of artificial intelligence [10,11,12]. Additionally, due to the nonlinear characteristics of memristors and their applications in circuits, memristive neural networks also have lower energy consumption and higher speed. Therefore, the development of memristive neural networks has broad application prospects in the field of artificial intelligence.
However, due to the limitation of amplifier conversion speed, the phenomenon of time delay in neural network systems is inevitable. Research indicates that the presence of time delay is a significant factor contributing to complex dynamic behaviors such as system instability and chaos [13]. To enhance the versatility and efficiency of BAMNNs, Ding and Huang [14] developed a novel BAMNNs model in 2006, focusing on the global exponential stability of the equilibrium point and its characteristics in this model. Their work has had a positive impact on the development of time-delay BAMNNs [15,16,17,18].
Fractional calculus extends the traditional differentiation and integration operations to non-integer orders [19] and has been introduced into neural networks to capture the characteristics of memory and inheritance [20,21,22]. The emergence of fractional-order calculus has spurred the development of neural networks [8,9,23,24], which have found applications in diverse areas, including signal detection, fault diagnosis, optimization analysis, associative memory, and risk assessment. Fractional-order memristive neural networks (FMNNs) are a specific type of fractional-order neural network whose stability properties have been widely studied. For instance, scholars have investigated the asymptotic stability of FMNNs with delay using Caputo fractional differentiation and the properties of Filippov solutions [25], and have also investigated the asymptotic stability of FMNNs with delay by leveraging the properties of Filippov solutions and the Leibniz theorem [26].
As one of the significant research directions in the field of nonlinear systems, synchronization includes quasi-consistent synchronization [27], projective synchronization [28], full synchronization [29], Mittag-Leffler synchronization [30], global synchronization [31], and many other types. Additionally, it is widely used in cryptography [32], image encryption [33], and secure communication [34]. In engineering applications, one wants to realize synchronization as soon as possible, so the concept of finite-time synchronization was proposed. Due to its ability to achieve faster convergence speed in network systems, finite-time synchronization has become a crucial aspect in developing effective control strategies for realizing system stability or synchronization [35,36].
This paper addresses the challenge of achieving finite-time synchronization. The definition of finite-time synchronization used in this article requires that the synchronization error remain within a certain range over a limited time interval. However, dealing with the time delay term in this context is challenging. Previous studies have utilized the Hölder inequality [37] and the generalized Gronwall inequality [38,39] to address the finite-time synchronization problem of fractional-order time-delay neural networks, providing valuable insights into the problem. In contrast, this paper proposes a new criterion based on the quadratic fractional-order Gronwall inequality with time delay and the comparison principle, offering a fresh perspective on the problem.
This paper presents significant contributions towards the study of finite-time synchronization in fractional-order stochastic MBAMNNs with time delay. The key contributions are as follows:
(1)
We improved Lemma 2 in [39] by deriving a quadratic fractional-order Gronwall inequality with time delay, which is a crucial tool for analyzing the finite-time synchronization problem in stochastic neural networks.
(2)
A novel criterion for achieving finite-time synchronization is proposed, which allows for the computation of the required synchronization time T. This criterion provides a new approach to analyze finite-time synchronization and has the potential to be widely applicable in the field of neural networks research.
The paper is structured as follows: Section 2 introduces relevant concepts and presents the neural networks model used in this study. Section 3 proposes a novel quadratic fractional-order Gronwall inequality that takes time delay into account. This inequality is useful for studying the finite-time synchronization problem in fractional-order stochastic MBAMNNs with time delay, and by utilizing differential inclusion and set-valued mapping theory, a new criterion for determining the required time T for finite-time synchronization is derived. Section 4 provides a numerical example that demonstrates the effectiveness of the proposed results. Finally, suggestions for future research are presented.

2. Preliminaries and Model

This section provides an overview of the necessary preliminaries related to fractional-order derivatives and the model of fractional-order stochastic MBAMNNs. We begin by introducing the fundamental concepts related to fractional-order derivatives and then move on to describe the fractional-order stochastic MBAMNNs model. Additionally, the definition of finite-time synchronization is provided.

2.1. Preliminaries

Notations: The norm and absolute value of vectors and matrices are defined as follows. Let N and R denote the sets of positive integers and real numbers, respectively.
The norm of a vector $e_x(t) = (e_{x,1}(t), e_{x,2}(t), \ldots, e_{x,n}(t)) \in C([0,+\infty), \mathbb{R}^n)$ is given by $\|e_x(t)\| = \sum_{\kappa=1}^{n} |e_{x,\kappa}(t)|$. Similarly, the norm of a vector $e_y(t) = (e_{y,1}(t), e_{y,2}(t), \ldots, e_{y,m}(t)) \in C([0,+\infty), \mathbb{R}^m)$ is defined as $\|e_y(t)\| = \sum_{\iota=1}^{m} |e_{y,\iota}(t)|$, $n, m \in N$. The induced norm of a matrix $A$ is denoted by $\|A\| = \max_{1 \le \iota \le n} \sum_{\kappa=1}^{n} |a_{\kappa\iota}|$. The absolute value of a vector $x(t) \in \mathbb{R}^n$ is defined as $|x(t)| = (|x_1(t)|, |x_2(t)|, \ldots, |x_n(t)|)^T$.
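As a small numerical companion to the definitions above, the vector 1-norm and the induced matrix norm (maximum absolute column sum) can be computed as follows. This is a sketch in plain Python; the function names are ours, not the paper's.

```python
def vec_norm(e):
    """Vector norm used in this paper: sum of absolute entries (1-norm)."""
    return sum(abs(v) for v in e)

def mat_norm(A):
    """Induced matrix norm used in this paper: maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

e = [1.0, -2.0, 0.5]
A = [[1.0, -3.0],
     [2.0, 0.5]]
print(vec_norm(e))  # 3.5
print(mat_norm(A))  # 3.5  (column sums are 3.0 and 3.5)
```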
Following that, we provide a review and introduction of several definitions and lemmas related to fractional calculus.
Definition 1
([40]). The fractional-order integral of a function $\kappa(t)$ with order $\alpha$ can be defined as:
$$I_{t_0}^{\alpha} \kappa(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\theta)^{\alpha-1} \kappa(\theta)\, \mathrm{d}\theta,$$
where $0 < \alpha < 1$, $t \ge t_0$, and $\Gamma(\alpha) = \int_{0}^{\infty} t^{\alpha-1} e^{-t}\, \mathrm{d}t$.
In particular, $I_{t_0}^{\alpha}\, {}^{c}D_{t_0}^{\alpha} \kappa(t) = \kappa(t) - \kappa(t_0)$ for $0 < \alpha < 1$.
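The fractional integral in Definition 1 can be approximated numerically with a product-rectangle rule: the integrand $\kappa$ is taken constant on each subinterval while the singular kernel $(t-\theta)^{\alpha-1}$ is integrated exactly. The sketch below (our own helper, not from the paper) checks the scheme against the closed form $I^{\alpha} 1 = t^{\alpha}/\Gamma(\alpha+1)$, for which the rule is exact.

```python
import math

def frac_integral(f, t, alpha, n=2000):
    """Riemann-Liouville fractional integral I^alpha f(t) on [0, t] via a
    product rectangle rule: f is frozen on each cell and the singular
    kernel (t - theta)^(alpha - 1) is integrated in closed form."""
    h = t / n
    total = 0.0
    for j in range(n):
        tj, tj1 = j * h, (j + 1) * h
        # integral of (t - theta)^(alpha-1) over [tj, tj1]
        w = ((t - tj) ** alpha - (t - tj1) ** alpha) / alpha
        total += f(tj) * w
    return total / math.gamma(alpha)

t, alpha = 1.0, 0.5
approx = frac_integral(lambda s: 1.0, t, alpha)
exact = t ** alpha / math.gamma(alpha + 1.0)
print(abs(approx - exact) < 1e-9)  # True: exact for constant integrands
```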
Definition 2
([40]). Suppose $\kappa(t) \in C^{\iota}([0,+\infty), \mathbb{R})$, where $\iota$ is a positive integer. The Caputo derivative of order $\alpha$ of the function $\kappa(t)$ can be expressed as:
$${}^{c}D_{t_0}^{\alpha} \kappa(t) = \frac{1}{\Gamma(\iota-\alpha)} \int_{t_0}^{t} (t-\theta)^{\iota-\alpha-1} \kappa^{(\iota)}(\theta)\, \mathrm{d}\theta,$$
where $\iota - 1 < \alpha < \iota$, $\iota \in N$.
For convenience, we write $D^{\alpha}$ for ${}^{c}D_{0}^{\alpha}$.
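For $0 < \alpha < 1$, the Caputo derivative of Definition 2 is commonly discretized with the L1 scheme: $\kappa'$ is replaced by forward differences on a uniform grid and the kernel is integrated exactly on each cell. The sketch below (our own, with our grid choices) checks it against the known value $D^{\alpha} t = t^{1-\alpha}/\Gamma(2-\alpha)$, which the L1 scheme reproduces exactly because the differences telescope.

```python
import math

def caputo_l1(f, t, alpha, n=400):
    """L1 discretization of the Caputo derivative of order 0 < alpha < 1
    at time t, using n uniform steps on [0, t]."""
    h = t / n
    c = h ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for j in range(n):
        # exact integral of the kernel over cell j, up to the factor c
        b = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
        total += b * (f((j + 1) * h) - f(j * h))
    return c * total

t, alpha = 2.0, 0.8
approx = caputo_l1(lambda s: s, t, alpha)
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
print(abs(approx - exact) < 1e-9)  # True for linear f
```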

2.2. Model

We investigate a kind of fractional-order differential equation that captures the dynamics of fractional-order stochastic MBAMNNs with time delays. These equations are viewed as the driving system (1) that models the interactions between neurons in MBAMNNs and accounts for the influence of discontinuous jumps and time delays. Through examining the stability and analytical solutions of these equations, this study aims to enhance the comprehension of the behavior of MBAMNNs, ultimately leading to more comprehensive analysis for practical applications of this model.
$$\begin{cases}
D^{\alpha} x_{\kappa}(t) = -u_{\kappa}(x_{\kappa}(t)) x_{\kappa}(t) + \sum\limits_{\iota=1}^{m} a_{\iota\kappa}(x_{\kappa}(t)) f_{\iota}(y_{\iota}(t)) + \sum\limits_{\iota=1}^{m} b_{\iota\kappa}(x_{\kappa}(t-\tau_1)) f_{\iota}(y_{\iota}(t-\tau_2))\\
\qquad\qquad\quad + \sum\limits_{\iota=1}^{m} r_{\iota}(y_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa}, \quad \kappa = 1, 2, \ldots, n,\\
D^{\alpha} y_{\iota}(t) = -v_{\iota}(y_{\iota}(t)) y_{\iota}(t) + \sum\limits_{\kappa=1}^{n} c_{\kappa\iota}(y_{\iota}(t)) g_{\kappa}(x_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} d_{\kappa\iota}(y_{\iota}(t-\tau_2)) g_{\kappa}(x_{\kappa}(t-\tau_1))\\
\qquad\qquad\quad + \sum\limits_{\kappa=1}^{n} s_{\kappa}(x_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota}, \quad \iota = 1, 2, \ldots, m,
\end{cases} \tag{1}$$
where 0 < α < 1 , t [ 0 , + ) .
In this system, the positive parameters u κ ( x κ ( t ) ) and v ι ( y ι ( t ) ) represent the rates of neuron self-inhibition, whereas x κ ( t ) and y ι ( t ) denote the state variables of the κ -th and ι -th neuron, respectively. The activation functions without time delay are denoted by f ι ( y ι ( t ) ) and g κ ( x κ ( t ) ) , and those with time delay are denoted by f ι ( y ι ( t τ 2 ) ) and g κ ( x κ ( t τ 1 ) ) . The neural connection memristive weights matrices are represented by a ι κ ( x κ ( t ) ) , b ι κ ( x κ ( t τ 1 ) ) , c κ ι ( y ι ( t ) ) , and d κ ι ( y ι ( t τ 2 ) ) . Stochastic terms representing Brownian motion are denoted by r ι ( y ι ( t τ 2 ) ) d B ( t ) and s κ ( x κ ( t τ 1 ) ) d B ( t ) . The constant input vectors are represented by I κ and J ι . The time delay parameters τ 1 and τ 2 satisfy 0 τ 1 τ and 0 τ 2 τ , where τ is a constant.
The initial conditions of the fractional-order stochastic MBAMNNs (1) are given by $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$, $y(t) = (y_1(t), y_2(t), \ldots, y_m(t))^T$, where $x_{\kappa}(r) = \phi_{\kappa}(r) \in C([-\tau_1, 0], \mathbb{R})$ and $y_{\iota}(r) = \psi_{\iota}(r) \in C([-\tau_2, 0], \mathbb{R})$. Here, $\phi_{\kappa}(r)$ and $\psi_{\iota}(r)$ are continuous functions on $[-\tau_1, 0]$ and $[-\tau_2, 0]$.
Then, the corresponding system of drive system (1) is given by:
$$\begin{cases}
D^{\alpha} \breve{x}_{\kappa}(t) = -u_{\kappa}(\breve{x}_{\kappa}(t)) \breve{x}_{\kappa}(t) + \sum\limits_{\iota=1}^{m} a_{\iota\kappa}(\breve{x}_{\kappa}(t)) f_{\iota}(\breve{y}_{\iota}(t)) + \sum\limits_{\iota=1}^{m} b_{\iota\kappa}(\breve{x}_{\kappa}(t-\tau_1)) f_{\iota}(\breve{y}_{\iota}(t-\tau_2))\\
\qquad\qquad\quad + \sum\limits_{\iota=1}^{m} r_{\iota}(\breve{y}_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa} + \gamma_{\kappa}, \quad \kappa = 1, 2, \ldots, n,\\
D^{\alpha} \breve{y}_{\iota}(t) = -v_{\iota}(\breve{y}_{\iota}(t)) \breve{y}_{\iota}(t) + \sum\limits_{\kappa=1}^{n} c_{\kappa\iota}(\breve{y}_{\iota}(t)) g_{\kappa}(\breve{x}_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} d_{\kappa\iota}(\breve{y}_{\iota}(t-\tau_2)) g_{\kappa}(\breve{x}_{\kappa}(t-\tau_1))\\
\qquad\qquad\quad + \sum\limits_{\kappa=1}^{n} s_{\kappa}(\breve{x}_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota} + \eta_{\iota}, \quad \iota = 1, 2, \ldots, m.
\end{cases} \tag{2}$$
The initial conditions of the corresponding system (2) are x ˘ κ ( r ) = ϕ ˘ κ ( r ) C ( [ τ 1 , 0 ] , R n ) , y ˘ ι ( r ) = ψ ˘ ι ( r ) C ( [ τ 2 , 0 ] , R m ) ; γ κ and η ι are the following controllers:
$$\gamma_{\kappa} = -\zeta_{\kappa} \left( \breve{x}_{\kappa}(t) - x_{\kappa}(t) \right), \qquad \eta_{\iota} = -\zeta_{\iota} \left( \breve{y}_{\iota}(t) - y_{\iota}(t) \right),$$
where ζ κ and ζ ι are both positive numbers called the control gain.
The synchronization error, as defined by systems (1) and (2), can be expressed as:
$$\begin{cases}
e_{x,\kappa}(t) = \breve{x}_{\kappa}(t) - x_{\kappa}(t), & t \in [t_0, T],\\
e_{y,\iota}(t) = \breve{y}_{\iota}(t) - y_{\iota}(t), & t \in [t_0, T],\\
\varphi_{x,\kappa}(r) = \breve{\phi}_{\kappa}(r) - \phi_{\kappa}(r), & r \in [t_0 - \tau_1, t_0],\\
\breve{\varphi}_{y,\iota}(r) = \breve{\psi}_{\iota}(r) - \psi_{\iota}(r), & r \in [t_0 - \tau_2, t_0].
\end{cases}$$
Then, we obtain the synchronization error $\|e_x(t)\| + \|e_y(t)\|$, and denote $\varphi = \sup_{r \in [-\tau_1, 0]} \|\varphi_{x}(r)\|$, $\breve{\varphi} = \sup_{r \in [-\tau_2, 0]} \|\breve{\varphi}_{y}(r)\|$.
Definition 3.
If there is a real number T > 0 such that for any t > T and ϵ > 0 , the synchronization error satisfies:
$$\|e_x(t)\| + \|e_y(t)\| \le \epsilon, \quad T < t < +\infty,$$
then, it can be inferred that the drive system (1) and the response system (2) achieve finite-time synchronization at T.
Remark 1.
To obtain a sufficient condition for achieving finite-time synchronization between systems (1) and (2), it is necessary to identify an appropriate evaluation function $\chi(t)$. This function should satisfy $\|e_x(t)\| + \|e_y(t)\| \le \chi(t) \le \epsilon$.

3. Main Results

This section presents a novel approach to obtain the evaluation function $\chi_1(t)$ by improving the quadratic fractional-order Gronwall inequality with time delay. We then utilize Theorem 1 to convert this inequality into a format that is consistent with Lemma 2, enabling us to derive a synchronization criterion for the drive system (1) and the corresponding system (2) in finite time. Specifically, the application of Lemma 2 leads to the novel criterion for finite-time synchronization.
The quadratic fractional-order Gronwall inequality with time delay is given below.
Lemma 1
([41]). Let $t \in [t_0, T]$, $u(t), v(t) \in C([t_0, T], \mathbb{R})$, and $\kappa(t) \in C^1([t_0, T], \mathbb{R})$ satisfy:
$$\kappa'(t) \le u(t) \kappa(t) + v(t), \quad t \in [t_0, T], \qquad \kappa(t_0) \le \kappa_0, \quad \kappa_0 \in \mathbb{R}.$$
Then, we have:
$$\kappa(t) \le \kappa_0 \exp\left( \int_{t_0}^{t} u(\theta)\, \mathrm{d}\theta \right) + \int_{t_0}^{t} \exp\left( \int_{\theta}^{t} u(s)\, \mathrm{d}s \right) v(\theta)\, \mathrm{d}\theta.$$
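For constant coefficients, the Gronwall bound of Lemma 1 can be evaluated in closed form, and the equality case $\kappa' = u\kappa + v$ attains it exactly. The sketch below (our own numerical check, with our choice of $u = 0.5$, $v = 1$, $\kappa_0 = 1$) integrates the equality case with Euler's method and confirms the trajectory never exceeds the bound.

```python
import math

# Lemma 1 with constant u(t) = 0.5, v(t) = 1 and kappa(0) = 1.
u, v, k0 = 0.5, 1.0, 1.0

def bound(t):
    """Gronwall bound: k0*exp(u t) + int_0^t exp(u (t - s)) v ds,
    evaluated in closed form for constant u, v."""
    return k0 * math.exp(u * t) + (v / u) * (math.exp(u * t) - 1.0)

# Euler integration of the equality case kappa' = u*kappa + v.
h, steps = 1e-4, 10000
k, t = k0, 0.0
for _ in range(steps):
    k += h * (u * k + v)
    t += h

print(k <= bound(t) + 1e-6)  # True: the trajectory stays below the bound
```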
Lemma 2.
Let T ( 0 , + ) and ω ( t ) , a ( t ) , b ( t ) , σ ( t ) and υ ( t ) be continuous functions that are nonnegative and defined on [ t 0 , T ] . Let ϕ ( t ) be a nonnegative continuous function defined on [ t 0 Ω , t 0 ] and suppose:
$$\omega^2(t) \le 2\sigma^2(t) + \upsilon^2(t) \left( \int_{t_0}^{t} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right]^{p} \mathrm{d}\theta \right)^{\frac{1}{p}}, \quad t \in [t_0, T], \qquad \omega(t) \le \phi(t), \quad t \in [t_0 - \Omega, t_0].$$
Assume σ ( t ) and υ ( t ) are nondecreasing on [ t 0 , T ] , ϕ ( t ) is nondecreasing on [ t 0 Ω , t 0 ] , and ϕ ( t 0 ) = σ ( t 0 ) . Then, the following hold:
(1) 
If σ ( t ) > υ ( t ) , then:
$$\omega^2(t) \le \sigma^2(t) \left\{ 2 + \left[ \exp\left( \int_{t_0}^{t} 2^{p-1} \sigma^{2p}(t) \left[ 2a(\theta) + b(\theta) \right]^{p} \mathrm{d}\theta \right) - 1 \right]^{\frac{1}{p}} \right\}.$$
(2) 
If σ ( t ) υ ( t ) , then:
$$\omega^2(t) \le \upsilon^2(t) \left\{ 2 + \left[ \exp\left( \int_{t_0}^{t} 2^{p-1} \upsilon^{2p}(t) \left[ 2a(\theta) + b(\theta) \right]^{p} \mathrm{d}\theta \right) - 1 \right]^{\frac{1}{p}} \right\},$$
where $t \in [t_0, T]$, and $\Omega > 0$, $p \ge 1$ are constants.
Proof .
See the Appendix A. □
Theorem 1.
Assume T ( 0 , + ) and non-negative continuous functions ω ( t ) , σ ( t ) , υ ( t ) , a ( t ) , and b ( t ) are defined on [ t 0 , T ] and ϕ ( t ) is a non-negative continuous function defined on [ t 0 Ω , t 0 ] , which satisfy:
$$\omega^2(t) \le 2\sigma^2(t) + \frac{\upsilon(t)}{\Gamma^2(\alpha)} \int_{t_0}^{t} (t-\theta)^{2(\alpha-1)} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right] \mathrm{d}\theta, \quad t \in [t_0, T], \qquad \omega(t) \le \phi(t), \quad t \in [t_0 - \Omega, t_0].$$
Let σ ( t ) and υ ( t ) be nondecreasing functions on [ t 0 , T ] , and ϕ ( t ) be a nondecreasing function on [ t 0 Ω , t 0 ] with ϕ ( t 0 ) = σ ( t 0 ) . We have the following results:
(1) 
If σ ( t ) > H ( t ) , then:
$$\omega^2(t) \le \sigma^2(t) \left\{ 2 + \left[ \exp\left( \int_{t_0}^{t} 2^{p-1} \sigma^{2p}(t) \left[ 2a(\theta) + b(\theta) \right]^{p} \mathrm{d}\theta \right) - 1 \right]^{\frac{1}{p}} \right\}.$$
(2) 
If σ ( t ) H ( t ) , then:
$$\omega^2(t) \le H^2(t) \left\{ 2 + \left[ \exp\left( \int_{t_0}^{t} 2^{p-1} \upsilon^{2p}(t) \left[ 2a(\theta) + b(\theta) \right]^{p} \mathrm{d}\theta \right) - 1 \right]^{\frac{1}{p}} \right\},$$
where $t \in [t_0, T]$, $p, q > 0$ satisfy $\frac{1}{p} + \frac{1}{q} = 1$, $\alpha > \frac{1+p}{2p}$, and $H^2(t) = \frac{\upsilon(t) (t-t_0)^{2(\alpha-1)+\frac{1}{q}}}{\Gamma^2(\alpha) \left( 2q(\alpha-1)+1 \right)^{\frac{1}{q}}}$.
Proof .
By using the Hölder inequality, it follows that:
$$\begin{aligned}
\omega^2(t) &\le 2\sigma^2(t) + \frac{\upsilon(t)}{\Gamma^2(\alpha)} \int_{t_0}^{t} (t-\theta)^{2(\alpha-1)} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right] \mathrm{d}\theta\\
&\le 2\sigma^2(t) + \frac{\upsilon(t)}{\Gamma^2(\alpha)} \left( \int_{t_0}^{t} (t-\theta)^{2q(\alpha-1)}\, \mathrm{d}\theta \right)^{\frac{1}{q}} \left( \int_{t_0}^{t} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right]^{p} \mathrm{d}\theta \right)^{\frac{1}{p}}\\
&\le 2\sigma^2(t) + \frac{\upsilon(t) (t-t_0)^{2(\alpha-1)+\frac{1}{q}}}{\Gamma^2(\alpha) \left[ 2q(\alpha-1)+1 \right]^{\frac{1}{q}}} \left( \int_{t_0}^{t} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right]^{p} \mathrm{d}\theta \right)^{\frac{1}{p}}.
\end{aligned}$$
Since $\alpha > \frac{1+p}{2p}$ and $H^2(t) = \frac{\upsilon(t) (t-t_0)^{2(\alpha-1)+\frac{1}{q}}}{\Gamma^2(\alpha) \left( 2q(\alpha-1)+1 \right)^{\frac{1}{q}}}$, the above inequality can be simplified as follows:
$$\omega^2(t) \le 2\sigma^2(t) + H^2(t) \left\{ \int_{t_0}^{t} \left[ a(\theta) \omega^2(\theta) + b(\theta) \omega^2(\theta - \Omega) \right]^{p} \mathrm{d}\theta \right\}^{\frac{1}{p}}.$$
Then, using Lemma 2, the proof is completed. □
To analyze the solutions of the discontinuous systems (1) and (2), Filippov regularization is used: the equations are transformed into differential inclusions via set-valued maps.
The drive system (1) can then be expressed as a differential inclusion, which allows us to study the behavior of the system even when it experiences discontinuities or impulses.
In this way, Filippov regularization provides a rigorous and systematic framework for analyzing the solutions of the discontinuous systems (1) and (2), yielding insight into their behavior and informing their design and operation.
According to the definition of set-valued maps, let:
$$co[u_{\kappa}(\varsigma)] = \begin{cases} \dot{u}_{\kappa}, & |\varsigma| < \mathring{T},\\ \ddot{u}_{\kappa}, & |\varsigma| > \mathring{T},\\ co\{\dot{u}_{\kappa}, \ddot{u}_{\kappa}\}, & |\varsigma| = \mathring{T}, \end{cases} \qquad co[a_{\iota\kappa}(\varsigma)] = \begin{cases} \dot{a}_{\iota\kappa}, & |\varsigma| < \mathring{T},\\ \ddot{a}_{\iota\kappa}, & |\varsigma| > \mathring{T},\\ co\{\dot{a}_{\iota\kappa}, \ddot{a}_{\iota\kappa}\}, & |\varsigma| = \mathring{T}, \end{cases} \qquad co[b_{\iota\kappa}(\varsigma)] = \begin{cases} \dot{b}_{\iota\kappa}, & |\varsigma| < \mathring{T},\\ \ddot{b}_{\iota\kappa}, & |\varsigma| > \mathring{T},\\ co\{\dot{b}_{\iota\kappa}, \ddot{b}_{\iota\kappa}\}, & |\varsigma| = \mathring{T}, \end{cases}$$
$$co[v_{\iota}(\varsigma)] = \begin{cases} \dot{v}_{\iota}, & |\varsigma| < \mathring{T},\\ \ddot{v}_{\iota}, & |\varsigma| > \mathring{T},\\ co\{\dot{v}_{\iota}, \ddot{v}_{\iota}\}, & |\varsigma| = \mathring{T}, \end{cases} \qquad co[c_{\kappa\iota}(\varsigma)] = \begin{cases} \dot{c}_{\kappa\iota}, & |\varsigma| < \mathring{T},\\ \ddot{c}_{\kappa\iota}, & |\varsigma| > \mathring{T},\\ co\{\dot{c}_{\kappa\iota}, \ddot{c}_{\kappa\iota}\}, & |\varsigma| = \mathring{T}, \end{cases} \qquad co[d_{\kappa\iota}(\varsigma)] = \begin{cases} \dot{d}_{\kappa\iota}, & |\varsigma| < \mathring{T},\\ \ddot{d}_{\kappa\iota}, & |\varsigma| > \mathring{T},\\ co\{\dot{d}_{\kappa\iota}, \ddot{d}_{\kappa\iota}\}, & |\varsigma| = \mathring{T}, \end{cases}$$
where $\varsigma \in \mathbb{R}$; the switching jump $\mathring{T}$ is a positive constant; and $\dot{u}_{\kappa}, \ddot{u}_{\kappa}, \dot{a}_{\iota\kappa}, \ddot{a}_{\iota\kappa}, \dot{b}_{\iota\kappa}, \ddot{b}_{\iota\kappa}, \dot{v}_{\iota}, \ddot{v}_{\iota}, \dot{c}_{\kappa\iota}, \ddot{c}_{\kappa\iota}, \dot{d}_{\kappa\iota}, \ddot{d}_{\kappa\iota}$ are all constants. $co[u_{\kappa}(\varsigma)]$, $co[a_{\iota\kappa}(\varsigma)]$, $co[b_{\iota\kappa}(\varsigma)]$, $co[v_{\iota}(\varsigma)]$, $co[c_{\kappa\iota}(\varsigma)]$, and $co[d_{\kappa\iota}(\varsigma)]$ are all compact, closed, and convex.
According to the theory of differential inclusions, we have:
$$\begin{cases}
D^{\alpha} x_{\kappa}(t) \in -co[u_{\kappa}(x_{\kappa}(t))] x_{\kappa}(t) + \sum\limits_{\iota=1}^{m} co[a_{\iota\kappa}(x_{\kappa}(t))] f_{\iota}(y_{\iota}(t)) + \sum\limits_{\iota=1}^{m} co[b_{\iota\kappa}(x_{\kappa}(t-\tau_1))] f_{\iota}(y_{\iota}(t-\tau_2))\\
\qquad\qquad\quad + \sum\limits_{\iota=1}^{m} r_{\iota}(y_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa},\\
D^{\alpha} y_{\iota}(t) \in -co[v_{\iota}(y_{\iota}(t))] y_{\iota}(t) + \sum\limits_{\kappa=1}^{n} co[c_{\kappa\iota}(y_{\iota}(t))] g_{\kappa}(x_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} co[d_{\kappa\iota}(y_{\iota}(t-\tau_2))] g_{\kappa}(x_{\kappa}(t-\tau_1))\\
\qquad\qquad\quad + \sum\limits_{\kappa=1}^{n} s_{\kappa}(x_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota}.
\end{cases}$$
Then, let:
$$\acute{u}_{\kappa}(\varsigma) \in co[u_{\kappa}(\varsigma)], \quad \acute{a}_{\iota\kappa}(\varsigma) \in co[a_{\iota\kappa}(\varsigma)], \quad \acute{b}_{\iota\kappa}(\varsigma) \in co[b_{\iota\kappa}(\varsigma)], \quad \acute{v}_{\iota}(\varsigma) \in co[v_{\iota}(\varsigma)], \quad \acute{c}_{\kappa\iota}(\varsigma) \in co[c_{\kappa\iota}(\varsigma)], \quad \acute{d}_{\kappa\iota}(\varsigma) \in co[d_{\kappa\iota}(\varsigma)].$$
By modifying the drive system (1), we can achieve:
$$\begin{cases}
D^{\alpha} x_{\kappa}(t) = -\acute{u}_{\kappa}(x_{\kappa}(t)) x_{\kappa}(t) + \sum\limits_{\iota=1}^{m} \acute{a}_{\iota\kappa}(x_{\kappa}(t)) f_{\iota}(y_{\iota}(t)) + \sum\limits_{\iota=1}^{m} \acute{b}_{\iota\kappa}(x_{\kappa}(t-\tau_1)) f_{\iota}(y_{\iota}(t-\tau_2)) + \sum\limits_{\iota=1}^{m} r_{\iota}(y_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa},\\
D^{\alpha} y_{\iota}(t) = -\acute{v}_{\iota}(y_{\iota}(t)) y_{\iota}(t) + \sum\limits_{\kappa=1}^{n} \acute{c}_{\kappa\iota}(y_{\iota}(t)) g_{\kappa}(x_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} \acute{d}_{\kappa\iota}(y_{\iota}(t-\tau_2)) g_{\kappa}(x_{\kappa}(t-\tau_1)) + \sum\limits_{\kappa=1}^{n} s_{\kappa}(x_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota}.
\end{cases}$$
Similarly, let:
$$\grave{u}_{\kappa}(\varsigma) \in co[u_{\kappa}(\varsigma)], \quad \grave{a}_{\iota\kappa}(\varsigma) \in co[a_{\iota\kappa}(\varsigma)], \quad \grave{b}_{\iota\kappa}(\varsigma) \in co[b_{\iota\kappa}(\varsigma)], \quad \grave{v}_{\iota}(\varsigma) \in co[v_{\iota}(\varsigma)], \quad \grave{c}_{\kappa\iota}(\varsigma) \in co[c_{\kappa\iota}(\varsigma)], \quad \grave{d}_{\kappa\iota}(\varsigma) \in co[d_{\kappa\iota}(\varsigma)].$$
By employing a similar method, we can modify the corresponding system (2) as follows:
$$\begin{cases}
D^{\alpha} \breve{x}_{\kappa}(t) = -\grave{u}_{\kappa}(\breve{x}_{\kappa}(t)) \breve{x}_{\kappa}(t) + \sum\limits_{\iota=1}^{m} \grave{a}_{\iota\kappa}(\breve{x}_{\kappa}(t)) f_{\iota}(\breve{y}_{\iota}(t)) + \sum\limits_{\iota=1}^{m} \grave{b}_{\iota\kappa}(\breve{x}_{\kappa}(t-\tau_1)) f_{\iota}(\breve{y}_{\iota}(t-\tau_2)) + \sum\limits_{\iota=1}^{m} r_{\iota}(\breve{y}_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa} + \gamma_{\kappa},\\
D^{\alpha} \breve{y}_{\iota}(t) = -\grave{v}_{\iota}(\breve{y}_{\iota}(t)) \breve{y}_{\iota}(t) + \sum\limits_{\kappa=1}^{n} \grave{c}_{\kappa\iota}(\breve{y}_{\iota}(t)) g_{\kappa}(\breve{x}_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} \grave{d}_{\kappa\iota}(\breve{y}_{\iota}(t-\tau_2)) g_{\kappa}(\breve{x}_{\kappa}(t-\tau_1)) + \sum\limits_{\kappa=1}^{n} s_{\kappa}(\breve{x}_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota} + \eta_{\iota}.
\end{cases}$$
Assumption 1.
Let the function $f_{\iota}$ satisfy the Lipschitz condition; namely, there exists a positive constant $f_{\iota}^{*}$ such that:
$$|f_{\iota}(t_1) - f_{\iota}(t_2)| \le f_{\iota}^{*} |t_1 - t_2|,$$
where $t_1, t_2 \in \mathbb{R}$. Assume the functions $g_{\kappa}$, $s_{\kappa}$, and $r_{\iota}$ likewise satisfy the Lipschitz condition, with constants $g_{\kappa}^{*}$, $s_{\kappa}^{*}$, and $r_{\iota}^{*}$, respectively.
Assumption 2.
$f_{\iota}(\pm\mathring{T}) = g_{\kappa}(\pm\mathring{T}) = 0$.
Lemma 3
([30]). Under Assumptions 1 and 2, we know that for any $\grave{a}_{\kappa\iota}, \acute{a}_{\kappa\iota} \in co[a_{\kappa\iota}(\varrho)]$,
$$\left| \grave{a}_{\kappa\iota}(t_2) f_{\kappa}(t_2) - \acute{a}_{\kappa\iota}(t_1) f_{\kappa}(t_1) \right| \le a_{\kappa\iota}^{*} f_{\kappa}^{*} |t_2 - t_1|,$$
where $a_{\kappa\iota}^{*} = \max\{|\ddot{a}_{\kappa\iota}|, |\dot{a}_{\kappa\iota}|\}$.
The synchronization error system, by Assumption 1 and Lemma 3, can be expressed as:
$$\begin{cases}
D^{\alpha} e_{x,\kappa}(t) \le (u_{\kappa}^{*} + \zeta_{\kappa}) |e_{x,\kappa}(t)| + \sum\limits_{\iota=1}^{m} a_{\iota\kappa}^{*} f_{\iota}^{*} |e_{y,\iota}(t)| + \sum\limits_{\iota=1}^{m} b_{\iota\kappa}^{*} f_{\iota}^{*} |\tilde{e}_{y,\iota}(t)| + \sum\limits_{\iota=1}^{m} r_{\iota}^{*} \tilde{e}_{y,\iota}(t)\, \mathrm{d}B(t),\\
D^{\alpha} e_{y,\iota}(t) \le (v_{\iota}^{*} + \zeta_{\iota}) |e_{y,\iota}(t)| + \sum\limits_{\kappa=1}^{n} c_{\kappa\iota}^{*} g_{\kappa}^{*} |e_{x,\kappa}(t)| + \sum\limits_{\kappa=1}^{n} d_{\kappa\iota}^{*} g_{\kappa}^{*} |\tilde{e}_{x,\kappa}(t)| + \sum\limits_{\kappa=1}^{n} s_{\kappa}^{*} \tilde{e}_{x,\kappa}(t)\, \mathrm{d}B(t),
\end{cases} \tag{6}$$
where $u_{\kappa}^{*} = \max\{|\ddot{u}_{\kappa}|, |\dot{u}_{\kappa}|\}$, $a_{\iota\kappa}^{*} = \max\{|\ddot{a}_{\iota\kappa}|, |\dot{a}_{\iota\kappa}|\}$, $b_{\iota\kappa}^{*} = \max\{|\ddot{b}_{\iota\kappa}|, |\dot{b}_{\iota\kappa}|\}$, $v_{\iota}^{*} = \max\{|\ddot{v}_{\iota}|, |\dot{v}_{\iota}|\}$, $c_{\kappa\iota}^{*} = \max\{|\ddot{c}_{\kappa\iota}|, |\dot{c}_{\kappa\iota}|\}$, $d_{\kappa\iota}^{*} = \max\{|\ddot{d}_{\kappa\iota}|, |\dot{d}_{\kappa\iota}|\}$, and $\tilde{e}_{y,\iota}(t) = e_{y,\iota}(t-\tau_2)$, $\tilde{e}_{x,\kappa}(t) = e_{x,\kappa}(t-\tau_1)$.
For the sake of convenience, we can express inequality (6) as:
$$\begin{cases}
D^{\alpha} e_{x}(t) \le U e_{x}(t) + A F e_{y}(t) + B F \tilde{e}_{y}(t) + R \tilde{e}_{y}(t)\, \mathrm{d}B(t),\\
D^{\alpha} e_{y}(t) \le V e_{y}(t) + C G e_{x}(t) + D G \tilde{e}_{x}(t) + S \tilde{e}_{x}(t)\, \mathrm{d}B(t),
\end{cases}$$
where $U = \mathrm{diag}\{u_1^{*}+\zeta_1, u_2^{*}+\zeta_2, \ldots, u_n^{*}+\zeta_n\}$, $V = \mathrm{diag}\{v_1^{*}+\zeta_1, v_2^{*}+\zeta_2, \ldots, v_m^{*}+\zeta_m\}$, $A = (a_{\iota\kappa}^{*})_{m\times n}$, $B = (b_{\iota\kappa}^{*})_{m\times n}$, $C = (c_{\kappa\iota}^{*})_{n\times m}$, $D = (d_{\kappa\iota}^{*})_{n\times m}$, $F = \max_{1\le\iota\le m}\{f_{\iota}^{*}\}$, $G = \max_{1\le\kappa\le n}\{g_{\kappa}^{*}\}$, $R = \max_{1\le\iota\le m}\{r_{\iota}^{*}\}$, and $S = \max_{1\le\kappa\le n}\{s_{\kappa}^{*}\}$.
Remark 2.
To ensure that $\|e_x(t)\| + \|e_y(t)\| \le \chi(t) \le \epsilon$, we can simply find an evaluation function $\chi_1(t)$ that satisfies $\max\{\|e_x(t)\|, \|e_y(t)\|\} \le \chi_1(t)$ with $\chi_1(t) \le \frac{\epsilon}{2}$. This will guarantee that $\|e_x(t)\| + \|e_y(t)\|$ remains below $\epsilon$, and it will also ensure that $\chi(t)$ is never less than $\max\{\|e_x(t)\|, \|e_y(t)\|\}$.
Lemma 4
([42]). (Burkholder–Davis–Gundy inequality) For any $\Phi \in L_{\mathcal{F}_t}^{p}([\tau, 0]; H)$, then:
$$\mathbb{E}\left[ \sup_{t_0 \le t \le T} \left| \int_{t_0}^{t} \Phi(u)\, \mathrm{d}\omega(u) \right|^{p} \right] \le c_p\, \mathbb{E}\left[ \int_{t_0}^{T} |\Phi(u)|^{2}\, \mathrm{d}u \right]^{\frac{p}{2}},$$
where $c_p = \left( \dfrac{p^{p+1}}{2(p-1)^{p-1}} \right)^{\frac{p}{2}}$ $(p \ge 2)$.
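The Burkholder–Davis–Gundy constant $c_p$ of Lemma 4 is elementary to evaluate; the short sketch below (our own helper) computes it and confirms the commonly quoted value $c_2 = 4$ for $p = 2$.

```python
def bdg_constant(p):
    """BDG constant c_p = (p^(p+1) / (2 (p-1)^(p-1)))^(p/2), for p >= 2."""
    return (p ** (p + 1) / (2.0 * (p - 1) ** (p - 1))) ** (p / 2.0)

print(bdg_constant(2))  # 4.0, since (2^3 / (2 * 1))^(2/2) = 4
```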
Theorem 2.
Suppose Assumption 1, Assumption 2, and the following conditions are satisfied.
(1) 
If ϕ > H ( t ) , then:
$$\phi^{2} \left\{ 2 + \left[ \exp\left( 2^{p-1} \phi^{2p} \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\} \le \frac{\epsilon^{2}}{4}.$$
(2) 
If ϕ H ( t ) , then:
$$H^{2}(t) \left\{ 2 + \left[ \exp\left( 2^{p-1} H^{2p}(t) \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\} \le \frac{\epsilon^{2}}{4},$$
where $t \in [0, T]$, $H^{2}(t) = \frac{4 t^{2\alpha-1-\frac{1}{p}}}{\Gamma^{2}(\alpha) \left( 2q(\alpha-1)+1 \right)^{\frac{1}{q}}}$, $\epsilon > 0$, $\frac{1+p}{2p} < \alpha < 1$, and $\frac{1}{p} + \frac{1}{q} = 1$ ($p, q \in N$).
Then, the drive system (1) and the corresponding system (2) are finite-time synchronized.
Proof. 
By Definition 1, for 0 < α < 1 , we can obtain the following integral inequalities:
$$\begin{cases}
e_{x}(t) \le e_{x}(0) + \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_{t_0}^{t} (t-\theta)^{\alpha-1} R \tilde{e}_{y}(\theta)\, \mathrm{d}B(\theta) + \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ U e_{x}(\theta) + A F e_{y}(\theta) + B F \tilde{e}_{y}(\theta) \right] \mathrm{d}\theta,\\
e_{y}(t) \le e_{y}(0) + \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_{t_0}^{t} (t-\theta)^{\alpha-1} S \tilde{e}_{x}(\theta)\, \mathrm{d}B(\theta) + \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ V e_{y}(\theta) + C G e_{x}(\theta) + D G \tilde{e}_{x}(\theta) \right] \mathrm{d}\theta.
\end{cases} \tag{8}$$
By taking the norm on both sides of inequality (8) simultaneously, we obtain:
$$\|e_{x}(t)\| \le \|e_{x}(0)\| + \frac{1}{\Gamma(\alpha)} \left\| \int_{t_0}^{t} (t-\theta)^{\alpha-1} R \tilde{e}_{y}(\theta)\, \mathrm{d}B(\theta) \right\| + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ \|U\| \|e_{x}(\theta)\| + \|A\| F \|e_{y}(\theta)\| + \|B\| F \|\tilde{e}_{y}(\theta)\| \right] \mathrm{d}\theta,$$
$$\|e_{y}(t)\| \le \|e_{y}(0)\| + \frac{1}{\Gamma(\alpha)} \left\| \int_{t_0}^{t} (t-\theta)^{\alpha-1} S \tilde{e}_{x}(\theta)\, \mathrm{d}B(\theta) \right\| + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ \|V\| \|e_{y}(\theta)\| + \|C\| G \|e_{x}(\theta)\| + \|D\| G \|\tilde{e}_{x}(\theta)\| \right] \mathrm{d}\theta.$$
Without loss of generality, we assume $\|e_x(t)\| > \|e_y(t)\|$; by using Lemma 4, we get:
$$\begin{aligned}
\|e_{x}(t)\|^{2} &\le \left( \|e_{x}(0)\| + \frac{1}{\Gamma(\alpha)} \left\| \int_{t_0}^{t} (t-\theta)^{\alpha-1} R \tilde{e}_{y}(\theta)\, \mathrm{d}B(\theta) \right\| + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ \|U\| \|e_{x}(\theta)\| + \|A\| F \|e_{y}(\theta)\| + \|B\| F \|\tilde{e}_{y}(\theta)\| \right] \mathrm{d}\theta \right)^{2}\\
&\le 2\|e_{x}(0)\|^{2} + 2\left( \frac{1}{\Gamma(\alpha)} \left\| \int_{t_0}^{t} (t-\theta)^{\alpha-1} R \tilde{e}_{y}(\theta)\, \mathrm{d}B(\theta) \right\| + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\theta)^{\alpha-1} \left[ \|U\| \|e_{x}(\theta)\| + \|A\| F \|e_{y}(\theta)\| + \|B\| F \|\tilde{e}_{y}(\theta)\| \right] \mathrm{d}\theta \right)^{2}\\
&\le 2\|e_{x}(0)\|^{2} + \frac{16}{\Gamma^{2}(\alpha)} \int_{t_0}^{t} (t-\theta)^{2(\alpha-1)} R^{2} \|\tilde{e}_{x}(\theta)\|^{2}\, \mathrm{d}\theta + \frac{8}{\Gamma^{2}(\alpha)} \int_{t_0}^{t} (t-\theta)^{2(\alpha-1)} \left[ (\|U\| + \|A\|F)^{2} \|e_{x}(\theta)\|^{2} + \|B\|^{2} F^{2} \|\tilde{e}_{x}(\theta)\|^{2} \right] \mathrm{d}\theta\\
&\le 2\|e_{x}(0)\|^{2} + \frac{8}{\Gamma^{2}(\alpha)} \int_{t_0}^{t} (t-\theta)^{2(\alpha-1)} \left[ (\|U\| + \|A\|F)^{2} \|e_{x}(\theta)\|^{2} + \left( \|B\|^{2} F^{2} + 2R^{2} \right) \|\tilde{e}_{x}(\theta)\|^{2} \right] \mathrm{d}\theta.
\end{aligned}$$
Let $\omega(t) = \|e_x(t)\|$, $\sigma(t) = \|e_x(0)\| = \phi$, $\upsilon(t) = 8$, $a(t) = (\|U\| + \|A\|F)^{2}$, and $b(t) = \|B\|^{2}F^{2} + 2R^{2}$, with initial time $t_0 = 0$ and $\Omega = \tau$. It is easy to see that all of these functions are nonnegative and continuous. Additionally, $\sigma(t)$ and $\upsilon(t)$ are nondecreasing on $[t_0, T]$, $\phi(t)$ is nondecreasing on $[t_0 - \Omega, t_0]$, and $\sigma(t_0) = \phi(t_0)$.
By using Theorem 1, we can obtain the following results:
(1)
If ϕ > H ( t ) , then:
$$\|e_{x}(t)\|^{2} \le \phi^{2} \left\{ 2 + \left[ \exp\left( 2^{p-1} \phi^{2p} \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\}.$$
(2)
If ϕ H ( t ) , then:
$$\|e_{x}(t)\|^{2} \le H^{2}(t) \left\{ 2 + \left[ \exp\left( 2^{p-1} H^{2p}(t) \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\},$$
where $H^{2}(t) = \frac{4 t^{2\alpha-1-\frac{1}{p}}}{\Gamma^{2}(\alpha) \left( 2q(\alpha-1)+1 \right)^{\frac{1}{q}}}$.
Therefore, based on the hypothesis conditions, it can be concluded that systems (1) and (2) can achieve synchronization. □
When $\|e_x(t)\| = \max\{\|e_x(t)\|, \|e_y(t)\|\}$, Remark 2 indicates that the evaluation function $\chi_1(t)$ can be determined as follows:
(1)
If ϕ > H ( t ) , then:
$$\chi_1^{2}(t) = \phi^{2} \left\{ 2 + \left[ \exp\left( 2^{p-1} \phi^{2p} \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\}.$$
(2)
If ϕ H ( t ) , then:
$$\chi_1^{2}(t) = H^{2}(t) \left\{ 2 + \left[ \exp\left( 2^{p-1} H^{2p}(t) \left[ 2(\|U\| + \|A\|F)^{2} + \|B\|^{2}F^{2} + 2R^{2} \right]^{p} t \right) - 1 \right]^{\frac{1}{p}} \right\}.$$
In Section 4, we will present numerical examples to provide a more visual demonstration of the finite-time synchronization achieved between systems (1) and (2).

4. Numerical Examples

Compared to conventional neural networks, neural networks incorporating stochasticity possess greater adaptability and robustness in achieving finite-time synchronization [43,44]. This is because stochasticity can increase the complexity of the system, endowing it with enhanced fault tolerance and adaptability, thus facilitating more efficient adaptation to diverse environments and application scenarios. Moreover, neural networks with stochasticity exhibit advantageous characteristics in handling nonlinear and complex problems. In practical applications, the parameters and states of neural network systems are often uncertain. Stochasticity can more effectively model such uncertainty and bolster the reliability of neural network systems, thereby elevating their performance in practical applications. Therefore, investigating neural networks with stochasticity may contribute to enhancing the application capabilities and performance of neural networks.
We illustrate the practical application of Theorem 2 in achieving finite-time synchronization between systems (1) and (2) through a numerical example, thereby validating the effectiveness of the proposed synchronization method. Specifically, the example involves simulating the behavior of the systems with varying initial conditions and analyzing the resulting trajectories. The insights gained from this example serve as evidence of the practical relevance of the finite-time synchronization approach presented in this paper.
Example 1.
Consider the fractional-order stochastic MBAMNNs with time delay:
$$\begin{cases}
D^{\alpha} x_{\kappa}(t) = -u_{\kappa}(x_{\kappa}(t)) x_{\kappa}(t) + \sum\limits_{\iota=1}^{m} a_{\iota\kappa}(x_{\kappa}(t)) f_{\iota}(y_{\iota}(t)) + \sum\limits_{\iota=1}^{m} b_{\iota\kappa}(x_{\kappa}(t-\tau_1)) f_{\iota}(y_{\iota}(t-\tau_2)) + \sum\limits_{\iota=1}^{m} r_{\iota}(y_{\iota}(t-\tau_2))\, \mathrm{d}B(t) + I_{\kappa},\\
D^{\alpha} y_{\iota}(t) = -v_{\iota}(y_{\iota}(t)) y_{\iota}(t) + \sum\limits_{\kappa=1}^{n} c_{\kappa\iota}(y_{\iota}(t)) g_{\kappa}(x_{\kappa}(t)) + \sum\limits_{\kappa=1}^{n} d_{\kappa\iota}(y_{\iota}(t-\tau_2)) g_{\kappa}(x_{\kappa}(t-\tau_1)) + \sum\limits_{\kappa=1}^{n} s_{\kappa}(x_{\kappa}(t-\tau_1))\, \mathrm{d}B(t) + J_{\iota},
\end{cases}$$
where:
$$u_1(x_1) = \begin{cases} 0.03, & |x_1| \le 3/4,\\ -0.03, & |x_1| > 3/4, \end{cases} \qquad v_1(y_1) = \begin{cases} 0.05, & |y_1| \le 3/4,\\ -0.05, & |y_1| > 3/4, \end{cases}$$
$$a_{11}(x_1) = \begin{cases} 0.1, & |x_1| \le 3/4,\\ -0.1, & |x_1| > 3/4, \end{cases} \qquad b_{11}(\tilde{x}_1) = \begin{cases} 0.2, & |x_1| \le 3/4,\\ -0.2, & |x_1| > 3/4, \end{cases}$$
$$c_{11}(y_1) = \begin{cases} 0.2, & |y_1| \le 3/4,\\ -0.2, & |y_1| > 3/4, \end{cases} \qquad d_{11}(\tilde{y}_1) = \begin{cases} 0.3, & |y_1| \le 3/4,\\ -0.3, & |y_1| > 3/4. \end{cases}$$
Let n = m = 1 , τ 1 = τ 2 = 0.1 , g κ ( x κ ) = sinh ( x κ 3 4 ) , f ι ( y ι ) = sinh ( y ι 3 4 ) , r ι ( y ι ) = 1 10 sinh ( r ι 3 4 ) , s κ ( x κ ) = 1 10 sinh ( s κ 3 4 ) , I 1 = J 1 = 1 , x ( 0 ) = 0.3 , y ( 0 ) = 0.5 . Let ζ κ = 0.01 , ζ ι = 0.02 , x ˘ ( 0 ) = 0.2 and y ˘ ( 0 ) = 0.6 in the corresponding system (2).
Let p = 2, q = 2, α = 0.8 > (1 + p)/(2p), and ϵ = 4, so that ϵ/2 = 2. It is easy to calculate that φ = 0.1, φ̆ = 0.1, U = 0.02, V = 0.03, F = G = 1, S = R = 1/10, A = 0.1, B = 0.2, C = 0.2, and D = 0.3. For systems (1) and (2), the norms of the synchronization errors e_x(t) and e_y(t) are illustrated in Figure 1, the trajectory of ‖e_x(t)‖ + ‖e_y(t)‖ is depicted in Figure 2, and the trajectories of their squared values are shown in Figure 3. Finally, based on Figure 4 and Theorem 2, we can deduce the finite-time synchronization time T ≈ 0.5062.
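For readers who wish to reproduce sample paths like those in the figures for Example 1, a minimal simulation sketch is given below. The scheme (a Grünwald–Letnikov discretization of the Caputo derivative with a crude Euler–Maruyama treatment of the dB(t) term), the step size h, the sign convention of the memristive switching across the threshold 3/4, and the constant pre-history are all our own assumptions, not the code used by the authors.

```python
import numpy as np

# Sketch only: integrate the drive system (1) of Example 1 (n = m = 1).
rng = np.random.default_rng(1)
alpha, h, T, tau = 0.8, 0.001, 1.0, 0.1
N = int(round(T / h))
lag = int(round(tau / h))                  # tau_1 = tau_2 = 0.1 in steps

# Gruenwald-Letnikov weights c_j = (-1)^j * binom(alpha, j), via recursion
c = np.ones(N + 1)
for j in range(1, N + 1):
    c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)

def sw(v, amp, thr=0.75):
    """Memristive coefficient: +amp when |state| <= 3/4, -amp otherwise (assumed)."""
    return amp if abs(v) <= thr else -amp

act = lambda v: np.sinh(v - 0.75)          # f and g from Example 1
x0, y0 = 0.3, 0.5
zx = np.zeros(N + 1)                       # zx = x - x0: the GL scheme applied to
zy = np.zeros(N + 1)                       # the deviation realizes the Caputo derivative
for k in range(N):
    xk, yk = x0 + zx[k], y0 + zy[k]
    xd = x0 + zx[max(k - lag, 0)]          # constant pre-history assumption
    yd = y0 + zy[max(k - lag, 0)]
    dB = rng.normal(0.0, np.sqrt(h))       # Brownian increment
    Fx = -sw(xk, 0.03) * xk + sw(xk, 0.1) * act(yk) + sw(xd, 0.2) * act(yd) + 1.0
    Fy = -sw(yk, 0.05) * yk + sw(yk, 0.2) * act(xk) + sw(yd, 0.3) * act(xd) + 1.0
    gx = 0.1 * np.sinh(yd - 0.75)          # diffusion r(y(t - tau_2))
    gy = 0.1 * np.sinh(xd - 0.75)          # diffusion s(x(t - tau_1))
    hist_x = np.dot(c[1:k + 2], zx[k::-1]) # GL memory term
    hist_y = np.dot(c[1:k + 2], zy[k::-1])
    zx[k + 1] = h**alpha * Fx + h**(alpha - 1) * gx * dB - hist_x
    zy[k + 1] = h**alpha * Fy + h**(alpha - 1) * gy * dB - hist_y
x, y = x0 + zx, y0 + zy                    # sample paths of the drive system
```

The response system (2) would be marched in the same way with its own initial data, after which the error norms plotted in the figures can be formed pointwise.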
Example 2.
Consider the fractional-order stochastic MBAMNNs with time delay:
\[
\left\{
\begin{aligned}
D^{\alpha}x_{\kappa}(t)&=-u_{\kappa}(x_{\kappa}(t))x_{\kappa}(t)+\sum_{\iota=1}^{m}a_{\iota\kappa}(x_{\kappa}(t))f_{\iota}(y_{\iota}(t))+\sum_{\iota=1}^{m}b_{\iota\kappa}(x_{\kappa}(t-\tau_{1}))f_{\iota}(y_{\iota}(t-\tau_{2}))\\
&\quad+\sum_{\iota=1}^{m}r_{\iota}(y_{\iota}(t-\tau_{2}))\,dB(t)+I_{\kappa},\\
D^{\alpha}y_{\iota}(t)&=-v_{\iota}(y_{\iota}(t))y_{\iota}(t)+\sum_{\kappa=1}^{n}c_{\kappa\iota}(y_{\iota}(t))g_{\kappa}(x_{\kappa}(t))+\sum_{\kappa=1}^{n}d_{\kappa\iota}(y_{\iota}(t-\tau_{2}))g_{\kappa}(x_{\kappa}(t-\tau_{1}))\\
&\quad+\sum_{\kappa=1}^{n}s_{\kappa}(x_{\kappa}(t-\tau_{1}))\,dB(t)+J_{\iota},
\end{aligned}
\right.
\]
where:
\[
\begin{array}{ll}
u_{1}(x_{1})=\begin{cases}0.6,&|x_{1}|\le 7/10,\\-0.6,&|x_{1}|>7/10,\end{cases}&
u_{2}(x_{2})=\begin{cases}0.4,&|x_{2}|\le 7/10,\\-0.4,&|x_{2}|>7/10,\end{cases}\\[1ex]
v_{1}(y_{1})=\begin{cases}0.2,&|y_{1}|\le 7/10,\\-0.2,&|y_{1}|>7/10,\end{cases}&
v_{2}(y_{2})=\begin{cases}0.3,&|y_{2}|\le 7/10,\\-0.3,&|y_{2}|>7/10,\end{cases}\\[1ex]
a_{11}(x_{1})=\begin{cases}0.1,&|x_{1}|\le 7/10,\\-0.1,&|x_{1}|>7/10,\end{cases}&
a_{12}(x_{2})=\begin{cases}0.1,&|x_{2}|\le 7/10,\\-0.1,&|x_{2}|>7/10,\end{cases}\\[1ex]
a_{21}(x_{1})=\begin{cases}0.2,&|x_{1}|\le 7/10,\\-0.2,&|x_{1}|>7/10,\end{cases}&
a_{22}(x_{2})=\begin{cases}0.2,&|x_{2}|\le 7/10,\\-0.2,&|x_{2}|>7/10,\end{cases}\\[1ex]
b_{11}(\tilde{x}_{1})=\begin{cases}0.2,&|\tilde{x}_{1}|\le 7/10,\\-0.2,&|\tilde{x}_{1}|>7/10,\end{cases}&
b_{12}(\tilde{x}_{2})=\begin{cases}0.2,&|\tilde{x}_{2}|\le 7/10,\\-0.2,&|\tilde{x}_{2}|>7/10,\end{cases}\\[1ex]
b_{21}(\tilde{x}_{1})=\begin{cases}0.3,&|\tilde{x}_{1}|\le 7/10,\\-0.3,&|\tilde{x}_{1}|>7/10,\end{cases}&
b_{22}(\tilde{x}_{2})=\begin{cases}0.3,&|\tilde{x}_{2}|\le 7/10,\\-0.3,&|\tilde{x}_{2}|>7/10,\end{cases}\\[1ex]
c_{11}(y_{1})=\begin{cases}0.2,&|y_{1}|\le 7/10,\\-0.2,&|y_{1}|>7/10,\end{cases}&
c_{12}(y_{2})=\begin{cases}0.2,&|y_{2}|\le 7/10,\\-0.2,&|y_{2}|>7/10,\end{cases}\\[1ex]
c_{21}(y_{1})=\begin{cases}0.1,&|y_{1}|\le 7/10,\\-0.1,&|y_{1}|>7/10,\end{cases}&
c_{22}(y_{2})=\begin{cases}0.2,&|y_{2}|\le 7/10,\\-0.2,&|y_{2}|>7/10,\end{cases}\\[1ex]
d_{11}(\tilde{y}_{1})=\begin{cases}0.3,&|\tilde{y}_{1}|\le 7/10,\\-0.3,&|\tilde{y}_{1}|>7/10,\end{cases}&
d_{12}(\tilde{y}_{2})=\begin{cases}0.3,&|\tilde{y}_{2}|\le 7/10,\\-0.3,&|\tilde{y}_{2}|>7/10,\end{cases}\\[1ex]
d_{21}(\tilde{y}_{1})=\begin{cases}0.2,&|\tilde{y}_{1}|\le 7/10,\\-0.2,&|\tilde{y}_{1}|>7/10,\end{cases}&
d_{22}(\tilde{y}_{2})=\begin{cases}0.3,&|\tilde{y}_{2}|\le 7/10,\\-0.3,&|\tilde{y}_{2}|>7/10.\end{cases}
\end{array}
\]
Let n = m = 2, τ_1 = τ_2 = 0.2, g_κ(x_κ) = tanh(x_κ − 7/10), f_ι(y_ι) = tanh(y_ι − 7/10), r_ι(y_ι) = (1/10) tanh(y_ι − 7/10), s_κ(x_κ) = (1/10) tanh(x_κ − 7/10), I_1 = I_2 = J_1 = J_2 = 2, x(0) = (0.1, 0.2)^T, and y(0) = (0.1, 0.2)^T.
Then, let ζ κ = ( 0.5 , 0.5 ) T , ζ ι = ( 0.1 , 0.2 ) T , x ˘ ( 0 ) = ( 0.2 , 0.3 ) T and y ˘ ( 0 ) = ( 0.2 , 0.3 ) T in the corresponding system (2).
Let p = 2, q = 2, α = 0.8 > (1 + p)/(2p), and ϵ = 4, so that ϵ/2 = 2. It is easy to calculate that φ = 0.2, φ̆ = 0.2, U = 0.1, V = 0.1, F = G = 1, S = R = 1/10, A = 0.3, B = 0.5, C = 0.4, and D = 0.6. The trajectories of e_{x,1}(t), e_{y,1}(t), e_{x,2}(t), and e_{y,2}(t) are shown in Figure 5 and Figure 6. Figure 7 and Figure 8 depict the time evolution of the synchronization errors ‖e_x(t)‖ and ‖e_y(t)‖, as well as the squares of their magnitudes. Finally, based on Figure 9 and Theorem 2, we can deduce the finite-time synchronization time T ≈ 0.3177.
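The constants quoted above can be cross-checked mechanically. The following fragment is our own sanity check: interpreting A, B, C, D as the largest absolute column sums of the switching amplitudes of a, b, c, d is an inference from the stated values, not a definition given in this section.

```python
import numpy as np

# Cross-check (our own inference): the constants A, B, C, D stated in
# Example 2 coincide with the maximum absolute column sums of the
# switching amplitudes of a, b, c, d.
a = np.array([[0.1, 0.1],    # rows: iota, columns: kappa (amplitudes |a_ik|)
              [0.2, 0.2]])
b = np.array([[0.2, 0.2],
              [0.3, 0.3]])
c = np.array([[0.2, 0.2],    # rows: kappa, columns: iota (amplitudes |c_ki|)
              [0.1, 0.2]])
d = np.array([[0.3, 0.3],
              [0.2, 0.3]])

A, B = np.abs(a).sum(axis=0).max(), np.abs(b).sum(axis=0).max()
C, D = np.abs(c).sum(axis=0).max(), np.abs(d).sum(axis=0).max()

p = 2
assert 0.8 > (1 + p) / (2 * p)   # alpha = 0.8 exceeds (1+p)/(2p) = 0.75
print(A, B, C, D)                # matches 0.3, 0.5, 0.4, 0.6 up to rounding
```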

5. Conclusions

We have extended the fractional-order Gronwall inequality originally proposed in [39] and applied it to the finite-time synchronization of fractional-order stochastic MBAMNNs. Illustrative examples were then presented to demonstrate the effectiveness of the proposed approach. It should be noted, however, that we have only analyzed the finite-time synchronization of continuous neural network systems and have not treated discontinuous neural networks with impulses. Hence, we will investigate the dynamical behaviors of fractional-order neural networks with impulses in future work.

Author Contributions

Conceptualization, L.C.; Methodology, Y.Z.; Investigation, M.G. and X.L.; Writing—original draft, M.G.; Writing—review & editing, L.C. and X.L.; Supervision, Y.Z. All authors read and approved the final manuscript.

Funding

This research was funded by the Shandong Provincial Natural Science Foundation under grants ZR2020MA006 and ZR2022LLZ003, and by the Introduction and Cultivation Project of Young and Innovative Talents in Universities of Shandong Province.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our thanks to the anonymous referees and the editor for their constructive comments and suggestions, which greatly improved this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 2

Proof. 
Define a function d ( t ) by:
\[
d(t)=\int_{t_0}^{t}\left[a(\theta)\omega^{2}(\theta)+b(\theta)\omega^{2}(\theta-\Omega)\right]^{p}d\theta,\quad t\in[t_0,T].
\]
As shown in inequality (3), we get:
\[
\omega^{2}(t)\le 2\sigma^{2}(t)+\upsilon^{2}(t)\,d^{\frac{1}{p}}(t),\quad t\in[t_0,T].
\]
Subsequently, we proceed with a segmented analysis of the variable t.
(I)
For t ∈ [t_0, t_0 + Ω], d(t_0) = 0; then:
\[
\begin{aligned}
d'(t)&=\left[a(t)\omega^{2}(t)+b(t)\omega^{2}(t-\Omega)\right]^{p}
\le\left\{a(t)\left[2\sigma^{2}(t)+\upsilon^{2}(t)d^{\frac{1}{p}}(t)\right]+b(t)\phi^{2}(t-\Omega)\right\}^{p}\\
&\le 2^{p-1}\left\{\left[2a(t)\sigma^{2}(t)+b(t)\phi^{2}(t-\Omega)\right]^{p}+a^{p}(t)\upsilon^{2p}(t)d(t)\right\}\\
&=2^{p-1}\left[2a(t)\sigma^{2}(t)+b(t)\phi^{2}(t-\Omega)\right]^{p}+2^{p-1}a^{p}(t)\upsilon^{2p}(t)d(t).
\end{aligned}
\]
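The chain of estimates above repeatedly uses the elementary convexity bound (u + v)^p ≤ 2^{p−1}(u^p + v^p) for u, v ≥ 0 and p ≥ 1. A quick randomized check (our own illustration, not part of the proof) confirms it numerically:

```python
import numpy as np

# Randomized check of (u + v)^p <= 2^(p-1) * (u^p + v^p) for u, v >= 0 and
# p >= 1, which follows from convexity of t -> t^p; illustration only.
rng = np.random.default_rng(0)
for _ in range(10_000):
    u, v = rng.uniform(0.0, 10.0, size=2)
    p = rng.uniform(1.0, 5.0)
    # small relative margin guards against floating-point rounding at u == v
    assert (u + v) ** p <= 2 ** (p - 1) * (u ** p + v ** p) * (1 + 1e-12)
```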
Since d ( t 0 ) = 0 , from Lemma 1 it follows that:
\[
d(t)\le\int_{t_0}^{t}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta.
\]
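The Gronwall-type step invoked here via Lemma 1 can be illustrated numerically: marching the equality case D'(t) = g(t) + f(t)D(t), D(t_0) = 0, with an explicit Euler scheme keeps the solution below the integral bound ∫ g(θ) exp(∫_θ^t f(s) ds) dθ. The sample coefficients f, g and the step size below are our own choices, for illustration only.

```python
import numpy as np

# Toy illustration of the Gronwall comparison: if D'(t) <= g(t) + f(t) D(t)
# with D(0) = 0 and f, g >= 0, then D(t) <= int_0^t g(th) exp(int_th^t f) dth.
h, n = 1e-3, 2000
t = np.arange(n + 1) * h
f = 0.5 + 0.1 * t                 # nonnegative sample coefficient
g = np.exp(-t)                    # nonnegative sample forcing
D = np.zeros(n + 1)
for k in range(n):
    D[k + 1] = D[k] + h * (g[k] + f[k] * D[k])     # Euler, equality case
F = np.cumsum(f) * h              # F_k ~ int_0^{t_k} f(s) ds
bound = np.array([h * np.sum(g[:k + 1] * np.exp(F[k] - F[:k + 1]))
                  for k in range(n + 1)])
assert np.all(D <= bound + 1e-12)  # Euler solution sits below the bound
```

The assertion holds exactly here because each Euler factor 1 + h·f is below e^{h·f}, so the discrete solution is dominated by the discretized integral bound.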
Substituting inequality (A2) into inequality (A1), we can obtain:
\[
\omega^{2}(t)\le\upsilon^{2}(t)\left\{\int_{t_0}^{t}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta\right\}^{\frac{1}{p}}+2\sigma^{2}(t).
\]
As we know, σ(t) and υ(t) are nondecreasing on [t_0, T], ϕ(t) is nondecreasing on [t_0 − Ω, t_0], and σ(t_0) = ϕ(t_0). Hence, it can be observed that ϕ(t − Ω) ≤ σ(t) on [t_0, t_0 + Ω]. Now, we discuss the following cases.
(1)
If σ ( t ) > υ ( t ) , then:
\[
\begin{aligned}
\omega^{2}(t)&\le\upsilon^{2}(t)\left\{\int_{t_0}^{t}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta\right\}^{\frac{1}{p}}+2\sigma^{2}(t)\\
&\le\sigma^{2}(t)\left\{\int_{t_0}^{t}2^{p-1}\sigma^{2p}(t)\left[2a(\theta)+b(\theta)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}\left[2a(s)+b(s)\right]^{p}\sigma^{2p}(t)\,ds\right)d\theta\right\}^{\frac{1}{p}}+2\sigma^{2}(t)\\
&\le\sigma^{2}(t)\left\{2+\left[\exp\left(\int_{t_0}^{t}2^{p-1}\sigma^{2p}(t)\left[b(\theta)+2a(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}\right\}.
\end{aligned}
\]
(2)
If σ ( t ) υ ( t ) , similar to case (1), we obtain:
\[
\omega^{2}(t)\le\upsilon^{2}(t)\left\{2+\left[\exp\left(\int_{t_0}^{t}2^{p-1}\upsilon^{2p}(t)\left[b(\theta)+2a(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}\right\}.
\]
(II)
For t ∈ [t_0 + Ω, T], we get:
\[
\begin{aligned}
d'(t)&=\left[a(t)\omega^{2}(t)+b(t)\omega^{2}(t-\Omega)\right]^{p}\\
&\le\left\{a(t)\left[2\sigma^{2}(t)+d^{\frac{1}{p}}(t)\upsilon^{2}(t)\right]+b(t)\left[2\sigma^{2}(t-\Omega)+d^{\frac{1}{p}}(t-\Omega)\sigma^{2}(t-\Omega)\right]\right\}^{p}\\
&\le\left\{\left[\upsilon^{2}(t)a(t)+\sigma^{2}(t-\Omega)b(t)\right]d^{\frac{1}{p}}(t)+2\sigma^{2}(t)a(t)+2\sigma^{2}(t-\Omega)b(t)\right\}^{p}\\
&\le 2^{p-1}\left[\upsilon^{2}(t)a(t)+\sigma^{2}(t-\Omega)b(t)\right]^{p}d(t)+2^{2p-1}\left[\sigma^{2}(t)a(t)+\sigma^{2}(t-\Omega)b(t)\right]^{p}.
\end{aligned}
\]
From inequality (A2), we arrive at:
\[
d(t_0+\Omega)\le\int_{t_0}^{t_0+\Omega}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t_0+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta.
\]
Thus, by utilizing inequalities (A3) and (A4), along with Lemma 1, we obtain:
\[
\begin{aligned}
d(t)\le{}& d(t_0+\Omega)\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\left[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}d\theta\right)\\
&+\int_{t_0+\Omega}^{t}2^{2p-1}\left[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}\left[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\right]^{p}ds\right)d\theta\\
\le{}&\int_{t_0}^{t_0+\Omega}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t_0+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta\\
&\times\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\left[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}d\theta\right)\\
&+\int_{t_0+\Omega}^{t}2^{2p-1}\left[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}\left[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\right]^{p}ds\right)d\theta.
\end{aligned}
\]
Substituting the above inequality into (A1), we have:
\[
\begin{aligned}
\omega^{2}(t)\le{}&2\sigma^{2}(t)+\upsilon^{2}(t)\Bigg\{\int_{t_0}^{t_0+\Omega}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t_0+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta\\
&\times\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\left[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}d\theta\right)\\
&+\int_{t_0+\Omega}^{t}2^{2p-1}\left[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}\left[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\right]^{p}ds\right)d\theta\Bigg\}^{\frac{1}{p}}.
\end{aligned}
\]
Similarly, we notice that σ(t) and υ(t) are nondecreasing on [t_0, T], ϕ(t) is nondecreasing on [t_0 − Ω, t_0], and σ(t_0) = ϕ(t_0). Hence, it can be observed that ϕ(t − Ω) ≤ σ(t) on [t_0, t_0 + Ω]. We discuss the following two cases.
(1)
If σ ( t ) > υ ( t ) , then:
\[
\begin{aligned}
\omega^{2}(t)\le{}&\upsilon^{2}(t)\Bigg\{\int_{t_0}^{t_0+\Omega}2^{p-1}\left[2a(\theta)\sigma^{2}(\theta)+b(\theta)\phi^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t_0+\Omega}2^{p-1}a^{p}(s)\upsilon^{2p}(s)\,ds\right)d\theta\\
&\times\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\left[a(\theta)\upsilon^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}d\theta\right)\\
&+\int_{t_0+\Omega}^{t}2^{2p-1}\left[a(\theta)\sigma^{2}(\theta)+b(\theta)\sigma^{2}(\theta-\Omega)\right]^{p}\exp\left(\int_{\theta}^{t}2^{p-1}\left[a(s)\upsilon^{2}(s)+b(s)\upsilon^{2}(s-\Omega)\right]^{p}ds\right)d\theta\Bigg\}^{\frac{1}{p}}+2\sigma^{2}(t)\\
\le{}&2\sigma^{2}(t)+\sigma^{2}(t)\Bigg\{\left[\exp\left(\int_{t_0}^{t_0+\Omega}2^{p-1}\sigma^{2p}(t)\left[2a(\theta)+b(\theta)\right]^{p}d\theta\right)-1\right]\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\sigma^{2p}(t)\left[a(\theta)+b(\theta)\right]^{p}d\theta\right)\\
&+\exp\left(\int_{t_0+\Omega}^{t}2^{p-1}\sigma^{2p}(t)\left[a(\theta)+b(\theta)\right]^{p}d\theta\right)-1\Bigg\}^{\frac{1}{p}}\\
\le{}&\sigma^{2}(t)\left[\exp\left(\int_{t_0}^{t}2^{p-1}\sigma^{2p}(t)\left[b(\theta)+2a(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}+2\sigma^{2}(t).
\end{aligned}
\]
(2)
If σ ( t ) υ ( t ) , then:
\[
\omega^{2}(t)\le\upsilon^{2}(t)\left\{2+\left[\exp\left(\int_{t_0}^{t}2^{p-1}\upsilon^{2p}(t)\left[b(\theta)+2a(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}\right\}.
\]
Based on the above analysis, we get the following results:
(1)
If σ ( t ) > υ ( t ) , then:
\[
\omega^{2}(t)\le\sigma^{2}(t)\left\{2+\left[\exp\left(\int_{t_0}^{t}2^{p-1}\sigma^{2p}(t)\left[2a(\theta)+b(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}\right\},\quad t\in[t_0,T].
\]
(2)
If σ ( t ) υ ( t ) , then:
\[
\omega^{2}(t)\le\upsilon^{2}(t)\left\{2+\left[\exp\left(\int_{t_0}^{t}2^{p-1}\upsilon^{2p}(t)\left[2a(\theta)+b(\theta)\right]^{p}d\theta\right)-1\right]^{\frac{1}{p}}\right\},\quad t\in[t_0,T].
\]

References

  1. Cohen, M.A.; Grossberg, S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybernet. 1983, SMC-13, 815–826. [Google Scholar] [CrossRef]
  2. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed]
  3. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuit. Syst. 1988, 35, 1257–1272. [Google Scholar] [CrossRef]
  4. Kosko, B. Bidirectional associative memories. IEEE Trans. Syst. Man Cybernet. 1988, 18, 49–60. [Google Scholar] [CrossRef]
  5. Liu, A.; Zhao, H.; Wang, Q.; Niu, S.; Gao, X.; Chen, C.; Li, L. A new predefined-time stability theorem and its application in the synchronization of memristive complex-valued BAM neural networks. Neural Netw. 2022, 153, 152–163. [Google Scholar] [CrossRef]
  6. Zhang, T.; Li, Y. Global exponential stability of discrete-time almost automorphic Caputo–Fabrizio BAM fuzzy neural networks via exponential Euler technique. Knowl.-Based Syst. 2022, 246, 108675. [Google Scholar] [CrossRef]
  7. Xiao, J.; Zhong, S.; Li, Y.; Xu, F. Finite-time Mittag–Leffler synchronization of fractional-order memristive BAM neural networks with time delays. Neurocomputing 2017, 219, 431–439. [Google Scholar] [CrossRef]
  8. Zhang, W.; Zhang, H.; Cao, J.; Alsaadi, F.E.; Chen, D. Synchronization in uncertain fractional-order memristive complex-valued neural networks with multiple time delays. Neural Netw. 2019, 110, 186–198. [Google Scholar] [CrossRef]
  9. Zhang, W.; Zhang, H.; Cao, J.; Zhang, H.; Chen, D. Synchronization of delayed fractional-order complex-valued neural networks with leakage delay. Physical A 2020, 556, 124710. [Google Scholar] [CrossRef]
  10. Lim, D.H.; Wu, S.; Zhao, R.; Lee, J.; Jeong, H.; Shi, L. Spontaneous sparse learning for PCM-based memristor neural networks. Nat. Commun. 2021, 12, 319. [Google Scholar] [CrossRef]
  11. Li, C.; Yang, Y.; Yang, X.; Zi, X.; Xiao, F. A tristable locally active memristor and its application in Hopfield neural network. Nonlinear Dyn. 2022, 108, 1697–1717. [Google Scholar]
  12. Ding, S.; Wang, N.; Bao, H.; Chen, B.; Wu, H.; Xu, Q. Memristor synapse-coupled piecewise-linear simplified Hopfield neural network: Dynamics analysis and circuit implementation. Chaos Solitons Fractals 2023, 166, 112899. [Google Scholar]
  13. Marcus, C.M.; Westervelt, R.M. Stability of analog neural networks with delay. Phys. Rev. A 1989, 39, 347. [Google Scholar]
  14. Ding, K.E.; Huang, N.J. Global robust exponential stability of interval general BAM neural network with delays. Neural Process. Lett. 2006, 23, 171–182. [Google Scholar] [CrossRef]
  15. Zhang, Z.; Yang, Y.; Huang, Y. Global exponential stability of interval general BAM neural networks with reaction–diffusion terms and multiple time-varying delays. Neural Netw. 2011, 24, 457–465. [Google Scholar]
  16. Wang, D.; Huang, L.; Tang, L. Dissipativity and synchronization of generalized BAM neural networks with multivariate discontinuous activations. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3815–3827. [Google Scholar]
  17. Xu, C.; Li, P.; Pang, Y. Global exponential stability for interval general bidirectional associative memory (BAM) neural networks with proportional delays. Math. Methods Appl. Sci. 2016, 39, 5720–5731. [Google Scholar] [CrossRef]
  18. Duan, L. Existence and global exponential stability of pseudo almost periodic solutions of a general delayed BAM neural networks. J. Syst. Sci. Complex. 2018, 31, 608–620. [Google Scholar] [CrossRef]
  19. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
  20. Magin, R.L. Fractional calculus models of complex dynamics in biological tissues. Comput. Math. Appl. 2010, 59, 1585–1593. [Google Scholar] [CrossRef]
  21. Li, L.; Wang, X.; Li, C.; Feng, Y. Exponential synchronizationlike criterion for state-dependent impulsive dynamical networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1025–1033. [Google Scholar] [CrossRef]
  22. Picozzi, S.; West, B.J. Fractional Langevin model of memory in financial markets. Phys. Rev. E 2001, 66, 046118. [Google Scholar] [CrossRef] [PubMed]
  23. Kaslik, E.; Sivasundaram, S. Nonlinear dynamics and chaos in fractional-order neural networks. Neural Netw. 2012, 32, 245–256. [Google Scholar] [PubMed]
  24. Ding, D.; You, Z.; Hu, Y.; Yang, Z.; Ding, L. Finite-time synchronization of delayed fractional-order quaternion-valued memristor-based neural networks. Int. J. Mod. Phys. B. 2021, 35, 2150032. [Google Scholar] [CrossRef]
  25. Chen, J.; Jiang, M. Stability of memristor-based fractional-order neural networks with mixed time-delay and impulsive. Neural Process. Lett. 2023, 55, 4697–4718. [Google Scholar] [CrossRef]
  26. Chen, J.; Chen, B.; Zeng, Z. Global asymptotic stability and adaptive ultimate Mittag–Leffler synchronization for a fractional-order complex-valued memristive neural networks with delays. IEEE Trans. Syst. Man Cybernet. 2018, 49, 2519–2535. [Google Scholar] [CrossRef]
  27. Yang, X.; Li, C.; Huang, T.; Song, Q.; Chen, X. Quasi-uniform synchronization of fractional-order memristor-based neural networks with delay. Neurocomputing 2017, 234, 205–215. [Google Scholar] [CrossRef]
  28. Yang, S.; Yu, J.; Hu, C.; Jiang, H. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks. Neural Netw. 2018, 104, 104–113. [Google Scholar] [CrossRef]
  29. Chen, L.; Wu, R.; Cao, J.; Liu, J.-B. Stability and synchronization of memristor-based fractional-order delayed neural networks. Neural Netw. 2015, 71, 37–44. [Google Scholar] [CrossRef]
  30. Chen, J.; Zeng, Z.; Jiang, P. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks. Neural Netw. 2014, 51, 1–8. [Google Scholar] [CrossRef]
  31. Yang, X.; Li, C.; Huang, T.; Song, Q.; Huang, J. Synchronization of fractional-order memristor-based complex-valued neural networks with uncertain parameters and time delays. Chaos Solitons Fractals 2018, 110, 105–123. [Google Scholar] [CrossRef]
  32. Muthukumar, P.; Balasubramaniam, P. Feedback synchronization of the fractional order reverse butterfly-shaped chaotic system and its application to digital cryptography. Nonlinear Dyn. 2013, 74, 1169–1181. [Google Scholar] [CrossRef]
  33. Wen, S.; Zeng, Z.; Huang, T.; Meng, Q.; Yao, W. Lag synchronization of switched neural networks via neural activation function and applications in image encryption. IEEE Trans. Neural Netw. Learn. Syst. 2015, 7, 1493–1502. [Google Scholar] [CrossRef]
  34. Alimi, A.M.; Aouiti, C.; Assali, E.A. Finite-time and fixed-time synchronization of a class of inertial neural networks with multi-proportional delays and its application to secure communication. Neurocomputing 2019, 332, 29–43. [Google Scholar] [CrossRef]
  35. Ni, J.; Liu, L.; Liu, C.; Hu, X.; Li, S. Fast fixed-time nonsingular terminal sliding mode control and its application to chaos suppression in power system. IEEE Trans. Circ. Syst. II-Express Briefs 2017, 64, 151–155. [Google Scholar] [CrossRef]
  36. Zhang, D.; Cheng, J.; Cao, J.; Zhang, D. Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties. Appl. Math. Comput. 2019, 344, 230–242. [Google Scholar] [CrossRef]
  37. Pratap, A.; Raja, R.; Cao, J.; Alsaadi, R.F.E. Further synchronization in finite time analysis for time-varying delayed fractional order memristive competitive neural networks with leakage delay. Neurocomputing 2018, 317, 110–126. [Google Scholar] [CrossRef]
  38. Ye, H.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328, 1075–1081. [Google Scholar] [CrossRef]
  39. Du, F.; Lu, J.G. New criterion for finite-time synchronization of fractional-order memristor-based neural networks with time delay. Appl. Math. Comput. 2021, 389, 125616. [Google Scholar] [CrossRef]
  40. Kilbas, A.A.; Marzan, S.A. Nonlinear differential equations with the Caputo fractional derivative in the space of continuously differentiable functions. Differ. Equ. 2005, 41, 84–89. [Google Scholar] [CrossRef]
  41. Bainov, D.D.; Simeonov, P.S. Integral Inequalities and Applications; Springer: New York, NY, USA, 1992. [Google Scholar]
  42. Rao, R.; Pu, Z. Stability analysis for impulsive stochastic fuzzy p-laplace dynamic equations under neumann or dirichlet boundary condition. Bound. Value Probl. 2013, 2013, 133. [Google Scholar] [CrossRef]
  43. Abuqaddom, I.; Mahafzah, B.A.; Faris, H. Oriented stochastic loss descent algorithm to train very deep multi-layer neural networks without vanishing gradients. Knowl.-Based Syst. 2021, 230, 107391. [Google Scholar] [CrossRef]
  44. Xu, D.; Liu, Y.; Liu, M. Finite-time synchronization of multi-coupling stochastic fuzzy neural networks with mixed delays via feedback control. Fuzzy Sets Syst. 2021, 411, 85–104. [Google Scholar] [CrossRef]
Figure 1. The errors e x ( t ) and e y ( t ) are computed for α = 0.8 .
Figure 2. The errors e x ( t ) + e y ( t ) are computed for α = 0.8 .
Figure 3. The errors e x ( t ) 2 and e y ( t ) 2 are computed for α = 0.8 .
Figure 4. The evaluation function χ 1 ( t ) with α = 0.8 .
Figure 5. The errors e x , κ ( t ) and e y , ι ( t ) are computed for κ , ι = 1 and with α = 0.8 .
Figure 6. The errors e x , κ ( t ) and e y , ι ( t ) are computed for κ , ι = 2 and with α = 0.8 .
Figure 7. Both systems (1) and (2) have errors that can be quantified by the magnitudes of e x ( t ) and e y ( t ) .
Figure 8. The square of the errors in both systems (1) and (2) can be measured by e x ( t ) 2 and e y ( t ) 2 .
Figure 9. The evaluation function χ 1 ( t ) with α = 0.8 .
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, L.; Gong, M.; Zhao, Y.; Liu, X. Finite-Time Synchronization for Stochastic Fractional-Order Memristive BAM Neural Networks with Multiple Delays. Fractal Fract. 2023, 7, 678. https://doi.org/10.3390/fractalfract7090678

