Article

Stability of Delay Hopfield Neural Networks with Generalized Riemann–Liouville Type Fractional Derivative

by
Ravi P. Agarwal
1,* and
Snezhana Hristova
2
1
Department of Mathematics, Texas A & M University-Kingsville, Kingsville, TX 78363, USA
2
Faculty of Mathematics and Informatics, Plovdiv University, Tzar Asen 24, 4000 Plovdiv, Bulgaria
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(8), 1146; https://doi.org/10.3390/e25081146
Submission received: 7 June 2023 / Revised: 22 June 2023 / Accepted: 28 July 2023 / Published: 31 July 2023
(This article belongs to the Special Issue Fractional Calculus and Fractional Dynamics)

Abstract

The general delay Hopfield neural network is studied. We consider the case of time-varying delays, continuously distributed delays, time-varying coefficients, and a special type of Riemann–Liouville fractional derivative (GRLFD) with an exponential kernel. The kernels of the fractional integral and the fractional derivative in this paper are Sonine kernels and satisfy the first and the second fundamental theorems of calculus. The presence of delays and of the GRLFD in the model requires a special type of initial condition. The applied GRLFD also requires a special definition of the equilibrium of the model. A constant equilibrium of the model is defined. An inequality for Lyapunov-type convex functions with the applied GRLFD is proved. It is combined with the Razumikhin method to study stability properties of the equilibrium of the model. As a particular case, we apply quadratic Lyapunov functions. We prove some comparison results for Lyapunov functions deeply connected with the applied GRLFD and use them to obtain exponential bounds of the solutions. These bounds are satisfied on intervals excluding the initial time. The convergence of any solution of the model to the equilibrium at infinity is also proved. An example illustrating the importance of our theoretical results is included.

1. Introduction

Stability properties of Hopfield neural networks modeled by various types of derivatives have been studied in the literature. The different types of derivatives applied, with their specific properties, can adequately describe different behaviors of the dynamics of the neurons. For example, fractional derivatives have typical memory properties, and Riemann–Liouville type fractional derivatives have a singularity at the initial time point, so they can model some abnormalities in the behavior of the neurons. We refer to [1] for the Caputo fractional derivative, [2] for the conformable fractional derivative, ref. [3] for ordinary derivatives and time-varying delays, and refs. [4,5] for the Caputo fractional derivative with delays. In the case when the Riemann–Liouville fractional derivative is applied, several stability results have also been published (see, for example, refs. [6,7,8,9] for delays and ref. [10] for random impulses). A good review of neural networks with classical fractional derivatives is given in [11].
In the last few decades, many different definitions of fractional integrals and derivatives have been proposed. One way to generalize the classical ones is to use Sonine kernels (see, for example, [12]). General fractional integrals and derivatives of Riemann–Liouville type were introduced and studied by Luchko [13]. Applying the Sonine kernel $h_\alpha(t) = \frac{t^{\alpha-1}}{\Gamma(\alpha)}$, the classical Riemann–Liouville fractional integral (RLI) and Riemann–Liouville fractional derivative (RLFD) can be written in the following form:
$$ I^{RL} f(t) = \int_0^t h_\alpha(t-s)\, f(s)\,ds, \quad t>0, $$
$$ D^{RL} f(t) = \frac{d}{dt}\int_0^t h_{1-\alpha}(t-s)\, f(s)\,ds. $$
Note that the kernel $h_\alpha(t)$ satisfies the classical Sonine condition
$$ (h_\alpha * h_{1-\alpha})(t) = \{1\}, \quad t>0,\ \alpha\in(0,1). $$
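This condition can be verified directly (a short check added here for completeness) using the Beta function:
$$ (h_\alpha * h_{1-\alpha})(t) = \frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^t (t-s)^{\alpha-1}\, s^{-\alpha}\,ds = \frac{B(1-\alpha,\alpha)}{\Gamma(\alpha)\Gamma(1-\alpha)} = 1, \quad t>0, $$
since the substitution $s = tu$ turns the integral into $\int_0^1 (1-u)^{\alpha-1} u^{-\alpha}\,du = B(1-\alpha,\alpha) = \Gamma(\alpha)\Gamma(1-\alpha)$.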
Recently, in [14,15], an integral and two types of derivatives, of Caputo and of Riemann–Liouville type, respectively, were introduced; they were called generalized proportional fractional integrals and derivatives. The main characteristic of these derivatives is the exponential kernel in the corresponding integrals. Their main disadvantage is the lack of mutual invertibility: for these derivatives, the first and the second fundamental theorems are not satisfied. In this paper, based on the ideas of the fractional integrals and derivatives with Sonine kernels given and studied in [12], we define a couple of exponential Sonine kernels. These kernels are a particular case of the ones studied in [16] (see Formulas (7.5) and (7.6)). We use them to define a generalized fractional integral and derivative of Riemann–Liouville type. The main advantage of these integrals and derivatives is that they satisfy the first and the second fundamental theorems (see Theorems 4 and 5 of [17]); therefore, they form a general fractional calculus. Another advantage is that the classical Caputo and Riemann–Liouville fractional derivatives are obtained as particular cases. We prove an inequality for the defined general fractional derivative of convex Lyapunov functions. This inequality is applied to study stability properties of delay differential equations. We then set up an appropriate initial value problem for a model of neural networks with time-varying delays and distributed delays whose dynamics are modeled by the defined general fractional derivative of Riemann–Liouville type.
We consider Hopfield neural networks with both time-varying delays and distributed delays, as well as time-varying coefficients and external inputs. The dynamics of the units are modeled by the GRLFD. Two types of initial conditions are set up. These initial conditions are connected with the singularity of the applied derivative at the initial time point, which coincides with the lower limit of the integral. The equilibrium of the model is defined in an appropriate way and its stability properties are studied. This equilibrium is deeply connected with the applied derivative.
By employing the Razumikhin method and appropriate Lyapunov functions, we obtain several exponential upper bounds of the solutions. The obtained results are valid on intervals excluding the initial time, which is a singular point of the solutions. We prove the convergence of the solutions to the equilibrium at infinity. The applied derivative contains the classical Riemann–Liouville fractional derivative as a particular case, so all results obtained in this paper generalize those for the Hopfield model with the classical fractional derivative.
The innovations of this article can be described as follows:
-
We define generalized fractional integral and generalized Riemann–Liouville fractional derivative based on exponential Sonine kernels which satisfy the first and the second fundamental theorems;
-
We prove an inequality for general convex Lyapunov functions with generalized Riemann–Liouville fractional derivative;
-
We propose a generalized Hopfield neural network model with both time-varying delays and distributed delays. The dynamics of the units are described by the special generalization of the Riemann–Liouville fractional derivative;
-
We define the equilibrium of the studied model. It depends significantly on the delays and the applied type of the derivative;
-
We use modified Razumikhin conditions deeply connected with the presence of the exponential kernel in the applied derivative;
-
We obtain an exponential bound of the solutions of the model about the equilibrium. This bound is valid only on intervals excluding the initial time point;
-
We prove sufficient conditions for the convergence of any solution of the model to the equilibrium.
The rest of the paper is organized as follows. The basic definitions of the general fractional integral and derivative are given in Section 2. The basic inequality for convex Lyapunov-type functions with the generalized fractional derivative of Riemann–Liouville type is proved in Section 3. Some stability properties of the solutions of delay differential equations with the fractional derivative defined in this paper are proved in Section 4. These results are based on the application of a modified Razumikhin method. Section 5 is devoted to the study of the generalized Riemann–Liouville delay Hopfield neural network model. The model is defined and initial conditions are set up. The equilibrium is defined in an appropriate way; it depends significantly on the applied fractional derivative. Quadratic Lyapunov-like functions and the modified Razumikhin method are applied to obtain exponential bounds of the solutions of the model. These bounds are obtained on an interval excluding the initial time point. The convergence of any solution to the equilibrium is studied. An example illustrating the theoretical results is elaborated in the next section. Some concluding comments are stated in the last section.

2. Some Basic Definitions and Preliminaries

The classical fractional integrals and derivatives, as well as their basic properties, are well presented in the classical books; see, for example, [18,19,20]. In recent decades, many generalizations of these fractional derivatives have been proposed. Some of them are equivalent to the classical ones, whereas others are genuine generalizations. One approach is to include an exponential factor in the kernel of the derivative and the integral; the resulting operators are called tempered fractional integrals and derivatives (see, for example, their applications to stochastic processes [21]).
Recall that a pair of functions $M, K$ (Sonine kernels) satisfies the relation
$$ (M * K)(t) = \int_0^t K(s)\, M(t-s)\,ds = \{1\}, \quad t>0, $$
where $\{1\}$ is the function identically equal to 1 for $t \ge 0$ (for more details, see [16,17]).
If the functions $(M, K)$ are Sonine kernels, then the generalized fractional integral is defined by
$$ I_{(M)}^{RL} f(t) = \int_0^t M(t-s)\, f(s)\,ds $$
and the generalized fractional derivative of Riemann–Liouville type is defined by
$$ D_{(K)}^{RL} f(t) = \frac{d}{dt}\int_0^t K(t-s)\, f(s)\,ds. $$
Consider the following pair of functions:
$$ M(t) = \frac{(\lambda t)^{\alpha-1}}{\Gamma(\alpha)}\, e^{-|\lambda-1|t} = \frac{\lambda^{\alpha-1}}{\Gamma(\alpha)}\, t^{\alpha-1}\, e^{-|\lambda-1|t} \qquad (4) $$
and
$$ K(t) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( t^{-\alpha}\, e^{-|\lambda-1|t} + |\lambda-1| \int_0^t u^{-\alpha}\, e^{-|\lambda-1|u}\,du \right), \qquad (5) $$
where $\alpha\in(0,1)$, $\lambda>0$.
Both functions M , K are Sonine kernels. Indeed, we have
$$ K(t-s) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)} + |\lambda-1|\int_0^{t-s} u^{-\alpha} e^{-|\lambda-1|u}\,du \right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)} + |\lambda-1|\int_s^{t} (v-s)^{-\alpha} e^{-|\lambda-1|(v-s)}\,dv \right), \quad 0 < s \le t, $$
and applying that $\int_0^t s^{\alpha-1}(t-s)^{-\alpha}\,ds = \Gamma(\alpha)\Gamma(1-\alpha)$, we obtain
$$ \int_0^t M(s)\, K(t-s)\,ds = \frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\left( \int_0^t s^{\alpha-1} e^{-|\lambda-1|t} (t-s)^{-\alpha}\,ds + |\lambda-1|\int_0^t s^{\alpha-1}\int_s^t (v-s)^{-\alpha} e^{-|\lambda-1|v}\,dv\,ds \right) = e^{-|\lambda-1|t} + \frac{|\lambda-1|}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^t e^{-|\lambda-1|v}\int_0^v s^{\alpha-1}(v-s)^{-\alpha}\,ds\,dv = e^{-|\lambda-1|t} + |\lambda-1|\int_0^t e^{-|\lambda-1|v}\,dv = 1, \quad t>0. $$
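As a quick numerical cross-check of the Sonine property of the kernels (4) and (5) (a sketch added here, not part of the original paper; the values of alpha, lambda and the time points are illustrative), the convolution of M and K can be evaluated by quadrature; up to quadrature error near the endpoint singularities, the result should be close to 1:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, lam = 0.3, 0.5                     # illustrative values (assumed, not taken from the paper)
c = abs(lam - 1.0)

def M(t):
    # kernel M(t) from (4)
    return lam ** (alpha - 1) / gamma(alpha) * t ** (alpha - 1) * np.exp(-c * t)

def K(t):
    # kernel K(t) from (5); the inner integral is evaluated by quadrature
    inner, _ = quad(lambda u: u ** (-alpha) * np.exp(-c * u), 0.0, t)
    return lam ** (1 - alpha) / gamma(1 - alpha) * (t ** (-alpha) * np.exp(-c * t) + c * inner)

for t in (0.5, 1.0, 3.0):
    # split the convolution integral at t/2 so each endpoint singularity is handled separately
    left, _ = quad(lambda s: M(s) * K(t - s), 0.0, t / 2)
    right, _ = quad(lambda s: M(s) * K(t - s), t / 2, t)
    print(t, left + right)                # should be close to 1 (Sonine condition)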
Also, we obtain
$$ \frac{d}{dt}\int_0^t K(t-s)\,\nu(s)\,ds = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\frac{d}{dt}\left( \int_0^t \left[ (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)} + |\lambda-1|\int_s^t (v-s)^{-\alpha} e^{-|\lambda-1|(v-s)}\,dv \right]\nu(s)\,ds \right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\frac{d}{dt}\left( \int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,\nu(s)\,ds + |\lambda-1|\int_0^t\int_s^t (v-s)^{-\alpha} e^{-|\lambda-1|(v-s)}\,\nu(s)\,dv\,ds \right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\frac{d}{dt}\left( \int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,\nu(s)\,ds + |\lambda-1|\int_0^t\int_0^v (v-s)^{-\alpha} e^{-|\lambda-1|(v-s)}\,\nu(s)\,ds\,dv \right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,\nu(s)\,ds + |\lambda-1|\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,\nu(s)\,ds \right), \quad t>0. \qquad (8) $$
Now, we will define the generalized fractional integral (GFI) and generalized Riemann–Liouville fractional derivative (GRLFD) applying the Sonine kernels M , K given by (4) and (5) and using Equation (8).
Definition 1. 
The generalized fractional integral (GFI) of a function $\upsilon : [0,A] \to \mathbb{R}$, $A \le \infty$, with $\lambda>0$, $\alpha>0$, is defined by
$$ {}_0I_t^{\alpha,\lambda}\,\upsilon(t) = \int_0^t M(t-s)\,\upsilon(s)\,ds = \frac{\lambda^{\alpha-1}}{\Gamma(\alpha)}\int_0^t (t-s)^{\alpha-1}\, e^{-|\lambda-1|(t-s)}\,\upsilon(s)\,ds, \quad t\in(0,A]. \qquad (9) $$
Definition 2. 
The generalized Riemann–Liouville fractional derivative (GRLFD) of a function $\upsilon : [0,A] \to \mathbb{R}$, $A \le \infty$, with $\lambda>0$, $\alpha\in(0,1)$, is defined by
$$ {}_0^{RL}D_t^{\alpha,\lambda}\,\upsilon(t) = \frac{d}{dt}\int_0^t K(t-s)\,\upsilon(s)\,ds = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\int_0^t (t-s)^{-\alpha}\, e^{-|\lambda-1|(t-s)}\,\upsilon(s)\,ds + |\lambda-1|\int_0^t (t-s)^{-\alpha}\, e^{-|\lambda-1|(t-s)}\,\upsilon(s)\,ds \right), \quad t\in(0,A]. \qquad (10) $$
Remark 1. 
From Equations (9) and (10), in the particular case $\lambda = 1$, we obtain the classical Riemann–Liouville fractional integral (RLI) and Riemann–Liouville fractional derivative (RLFD) (see, for example, [18,19,20]).
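Written out (a short illustration added here), for $\lambda = 1$ the exponential factor equals 1 and $\lambda^{\alpha-1} = \lambda^{1-\alpha} = 1$, so (9) and (10) become
$$ {}_0I_t^{\alpha,1}\,\upsilon(t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-s)^{\alpha-1}\,\upsilon(s)\,ds, \qquad {}_0^{RL}D_t^{\alpha,1}\,\upsilon(t) = \frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int_0^t (t-s)^{-\alpha}\,\upsilon(s)\,ds, $$
which are exactly the classical RLI and RLFD.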
Remark 2. 
The kernels defined and applied above are a particular case of the Sonine kernels studied in [17]; therefore, the GFI and GRLFD defined by Definitions 1 and 2, respectively, satisfy the first and the second fundamental theorems (see Theorem 4 and Theorem 5 of [17]), i.e.,
$$ {}_0^{RL}D_t^{\alpha,\lambda}\;{}_0I_t^{\alpha,\lambda}\,\upsilon(t) = \upsilon(t), \quad t>0, $$
and
$$ {}_0I_t^{\alpha,\lambda}\;{}_0^{RL}D_t^{\alpha,\lambda}\,\upsilon(t) = \upsilon(t), \quad t>0. $$
Remark 3. 
In [14,15], the authors defined another type of integrals and derivatives with exponential kernels. The GFI and GRLFD defined above are similar to those operators, but, according to Remark 2, the GFI and GRLFD given by Definitions 1 and 2 form a fractional calculus (which is not true for the operators defined in [14,15]).
Proposition 1. 
For $\lambda > 0$, $\alpha\in(0,1)$ we have
$$ {}_0^{RL}D_t^{\alpha,\lambda}\left(e^{-|\lambda-1|t}\, t^{\alpha-1}\right) = 0, \quad t>0, $$
and
$$ {}_0^{RL}D_t^{\alpha,\lambda}(1) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( e^{-|\lambda-1|t}\, t^{-\alpha} + |\lambda-1|^{\alpha}\,\gamma\bigl(1-\alpha,\,|\lambda-1|\,t\bigr)\right), \quad t>0. $$
Proof. 
From Definition 2 we obtain
$$ {}_0^{RL}D_t^{\alpha,\lambda}\left(e^{-|\lambda-1|t}\, t^{\alpha-1}\right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\, e^{-|\lambda-1|s} s^{\alpha-1}\,ds + |\lambda-1|\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\, e^{-|\lambda-1|s} s^{\alpha-1}\,ds\right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\left[e^{-|\lambda-1|t}\int_0^t (t-s)^{-\alpha} s^{\alpha-1}\,ds\right] + |\lambda-1|\, e^{-|\lambda-1|t}\int_0^t (t-s)^{-\alpha} s^{\alpha-1}\,ds\right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\left[e^{-|\lambda-1|t}\,\Gamma(\alpha)\Gamma(1-\alpha)\right] + |\lambda-1|\, e^{-|\lambda-1|t}\,\Gamma(\alpha)\Gamma(1-\alpha)\right) = 0, \quad t\in(0,A]. $$
We use the equalities
$$ \int_a^T e^{-|\lambda-1|(T-s)}\,(T-s)^{-\alpha}\,ds = |\lambda-1|^{\alpha-1}\,\gamma\bigl(1-\alpha,\,|\lambda-1|(T-a)\bigr), \quad 0\le a\le T, \qquad (14) $$
with the lower incomplete gamma function $\gamma(\beta,z)$, and
$$ \frac{d}{dT}\int_a^T e^{-|\lambda-1|(T-s)}\,(T-s)^{-\alpha}\,ds = e^{-|\lambda-1|(T-a)}\,(T-a)^{-\alpha}, \quad 0\le a\le T, \qquad (15) $$
and we obtain
$$ {}_0^{RL}D_t^{\alpha,\lambda}(1) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( \frac{d}{dt}\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,ds + |\lambda-1|\int_0^t (t-s)^{-\alpha} e^{-|\lambda-1|(t-s)}\,ds\right) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( e^{-|\lambda-1|t}\, t^{-\alpha} + |\lambda-1|^{\alpha}\,\gamma\bigl(1-\alpha,\,|\lambda-1|\,t\bigr)\right), \quad t\in(0,A]. \qquad (16) $$
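The equalities (14) and (16) can also be checked numerically. The following sketch (added here, not part of the original proof; the parameter values are illustrative and the time derivative in Definition 2 is approximated by central differences) compares quadrature values with the closed forms; the printed pairs should agree to several decimal places:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

# illustrative values (assumed); t is a sample point, h a step for the central difference
alpha, lam, t, h = 0.3, 0.5, 2.0, 1e-4
c = abs(lam - 1.0)

def N(T):
    # N(T) = integral_0^T (T-s)^(-alpha) e^(-c(T-s)) ds, computed with the substitution u = T - s
    val, _ = quad(lambda u: u ** (-alpha) * np.exp(-c * u), 0.0, T)
    return val

gamma_lower = gamma(1 - alpha) * gammainc(1 - alpha, c * t)   # lower incomplete gamma(1-alpha, c t)

# equality (14): N(t) should equal |lambda-1|^(alpha-1) * gamma_lower
print("(14):", N(t), "vs", c ** (alpha - 1) * gamma_lower)

# equality (16): GRLFD of the constant 1, once via Definition 2 (derivative by central
# differences) and once via the closed form
numeric = lam ** (1 - alpha) / gamma(1 - alpha) * ((N(t + h) - N(t - h)) / (2 * h) + c * N(t))
closed = lam ** (1 - alpha) / gamma(1 - alpha) * (np.exp(-c * t) * t ** (-alpha) + c ** alpha * gamma_lower)
print("(16):", numeric, "vs", closed)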

3. Inequalities for the Generalized Riemann–Liouville Fractional Derivative

We will use the following class of multivariable functions:
$$ \Omega = \bigl\{ V \in C^2(\mathbb{R}^n, \mathbb{R}) :\ V(0)=0,\ V(\gamma x + (1-\gamma)y) \le \gamma V(x) + (1-\gamma)V(y)\ \text{for}\ \gamma\in[0,1],\ x,y\in\mathbb{R}^n \bigr\}. $$
Remark 4. 
The function $V \in \Omega$ iff $V \in C^2(\mathbb{R}^n, \mathbb{R})$ and $V(y) \ge V(x) + \sum_{i=1}^n \frac{\partial V(x)}{\partial x_i}\,(y_i - x_i)$ for all $x,y\in\mathbb{R}^n$, $x=(x_1,x_2,\dots,x_n)$, $y=(y_1,y_2,\dots,y_n)$.
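For example (an illustration added here), for the quadratic function $V(x) = \sum_{k=1}^n x_k^2$ the inequality of Remark 4 reads $\sum_{k=1}^n y_k^2 \ge \sum_{k=1}^n x_k^2 + \sum_{k=1}^n 2x_k(y_k - x_k)$, which is equivalent to $\sum_{k=1}^n (y_k - x_k)^2 \ge 0$ and therefore always holds, so $V \in \Omega$.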
Define the set of functions:
$$ C_{\alpha,\lambda}([0,A], \mathbb{R}^n) = \bigl\{ \upsilon : [0,A] \to \mathbb{R}^n :\ {}_0^{RL}D_t^{\alpha,\lambda}\,\upsilon(t)\ \text{exists and}\ \bigl|{}_0^{RL}D_t^{\alpha,\lambda}\,\upsilon(t)\bigr| < \infty\ \text{for}\ t\in(0,A] \bigr\}. $$
Lemma 1. 
Suppose the functions $V \in \Omega$, $x \in C_{\alpha,\lambda}([0,b], \mathbb{R}^n)$, $b \le \infty$, $x=(x_1,x_2,\dots,x_n)$, and $V(x(\cdot)) \in C_{\alpha,\lambda}([0,b],[0,\infty))$. Then, the inequality
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, V(x(t)) \le \sum_{k=1}^n {}_0^{RL}D_t^{\alpha,\lambda}\, x_k(t)\,\frac{\partial V(x(t))}{\partial x_k}, \quad t\in(0,b] \qquad (17) $$
holds.
Proof. 
Fix an arbitrary point $T\in(0,b]$. The inequality (17) is equivalent to
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, V(x(t))\Big|_{t=T} - \sum_{k=1}^n {}_0^{RL}D_t^{\alpha,\lambda}\, x_k(t)\Big|_{t=T}\,\frac{\partial V(x(T))}{\partial x_k} \le 0. $$
We have
$$ x_k(s) = x_k(0) + \int_0^s \frac{d}{d\xi}\, x_k(\xi)\,d\xi, \quad k=1,2,\dots,n,\ s\in[0,T], \qquad (19) $$
and
$$ V(x(s)) = V(x(0)) + \sum_{k=1}^n \int_0^s \frac{\partial V(x(\xi))}{\partial x_k}\, x_k'(\xi)\,d\xi, \quad s\in[0,T]. \qquad (20) $$
From Definition 2 and equalities (19) and (20), we obtain
Γ ( 1 α ) λ 1 α 0 R L D t q , ρ V ( x ( t ) ) | t = T k = 1 n V ( x ( T ) ) x k 0 R L D t q , ρ x k ( t ) | t = T = d d T 0 T ( T s ) α e | λ 1 | ( T s ) V ( x ( 0 ) ) + k = 1 n 0 s V ( x ( ξ ) ) x k x k ( ξ ) d ξ d s + | λ 1 | 0 T ( T s ) α e | λ 1 | ( T s ) V ( x ( 0 ) ) + k = 1 n 0 s V ( x ( ξ ) ) x k x k ( ξ ) d ξ d s k = 1 n V ( x ( T ) ) x k d d T 0 T ( T s ) α e | λ 1 | ( T s ) x k ( 0 ) + 0 s x k ( ξ ) d ξ d s | λ 1 | k = 1 n V ( x ( T ) ) x k 0 T ( T s ) α e | λ 1 | ( T s ) x k ( 0 ) + 0 s x k ( ξ ) d ξ d s = V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) d d T 0 T ( T s ) α e | λ 1 | ( T s ) d s + d d T 0 T ( T s ) α e | λ 1 | ( T s ) k = 1 n 0 s V ( x ( ξ ) ) x k x k ( ξ ) d ξ d s k = 1 n V ( x ( T ) ) x k d d T 0 T ( T s ) α e | λ 1 | ( T s ) ) 0 s x k ( ξ ) d ξ d s + | λ 1 | V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) 0 T ( T s ) α e | λ 1 | ( T s ) d s + | λ 1 | 0 T ( T s ) α e | λ 1 | ( T s ) k = 1 n 0 s V ( x ( ξ ) ) x k x k ( ξ ) d ξ d s | λ 1 | k = 1 n V ( x ( T ) ) x k 0 T ( T s ) α e | λ 1 | ( T s ) 0 s x k ( ξ ) d ξ d s .
We apply (19), (20), (14), (15), and the equality
$$ \int_0^T \int_0^s f(s,\xi)\,d\xi\,ds = \int_0^T \int_\xi^T f(s,\xi)\,ds\,d\xi $$
for $f(s,\xi) = e^{-|\lambda-1|(T-s)}(T-s)^{-\alpha}\,\frac{\partial V(x(\xi))}{\partial x_k}\, x_k'(\xi)$ or $f(s,\xi) = e^{-|\lambda-1|(T-s)}(T-s)^{-\alpha}\, x_k'(\xi)$ to equality (21), and we obtain
Γ ( 1 α ) λ 1 α 0 R L D t q , ρ V ( x ( t ) ) | t = T k = 1 n V ( x ( T ) ) x k 0 R L D t q , ρ x k ( t ) | t = T = V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) e | λ 1 | T T α + k = 1 n d d T 0 T ξ T ( T s ) α e | λ 1 | ( T s ) V ( x ( ξ ) ) x k x k ( ξ ) d s d ξ k = 1 n V ( x ( T ) ) x k d d T 0 T ξ T ( T s ) α e | λ 1 | ( T s ) ) x k ( ξ ) d s d ξ + | λ 1 | V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) 0 T ( T s ) α e | λ 1 | ( T s ) d s + | λ 1 | 0 T k = 1 n V ( x ( ξ ) ) x k V ( x ( T ) ) x k x k ( ξ ) ξ T ( T s ) α e | λ 1 | ( T s ) d s d ξ .
Note that we have
$$ \frac{d}{dT}\int_0^T \int_\xi^T \frac{\partial V(x(\xi))}{\partial x_k}\, x_k'(\xi)\, e^{-|\lambda-1|(T-s)} (T-s)^{-\alpha}\,ds\,d\xi = \int_0^T \frac{\partial V(x(\xi))}{\partial x_k}\, x_k'(\xi)\,\frac{d}{dT}\int_\xi^T e^{-|\lambda-1|(T-s)} (T-s)^{-\alpha}\,ds\,d\xi = \int_0^T \frac{\partial V(x(\xi))}{\partial x_k}\, x_k'(\xi)\, e^{-|\lambda-1|(T-\xi)} (T-\xi)^{-\alpha}\,d\xi \qquad (23) $$
and
$$ \frac{d}{dT}\int_0^T x_k'(\xi)\int_\xi^T e^{-|\lambda-1|(T-s)} (T-s)^{-\alpha}\,ds\,d\xi = \int_0^T x_k'(\xi)\, e^{-|\lambda-1|(T-\xi)} (T-\xi)^{-\alpha}\,d\xi. \qquad (24) $$
We substitute (23) and (24) in (22) and we obtain
Γ ( 1 α ) λ 1 α 0 R L D t q , ρ V ( x ( t ) ) | t = T k = 1 n V ( x ( T ) ) x k 0 R L D t q , ρ x k ( t ) | t = T = V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) e | λ 1 | T T α + 0 T k = 1 n V ( x ( ξ ) ) x k V ( x ( T ) ) x k x k ( ξ ) e | λ 1 | ( T ξ ) ( T ξ ) α d ξ + | λ 1 | V ( x ( 0 ) ) V ( x ( T ) ) x k x k ( 0 ) 0 T ( T s ) α e ( λ 1 ) ( T s ) d s + | λ 1 | 0 T k = 1 n V ( x ( ξ ) ) x k V ( x ( T ) ) x k x k ( ξ ) ξ T ( T s ) α e | λ 1 | ( T s ) d s d ξ .
We define the function $P(s) = V(x(s)) - V(x(T)) - \sum_{k=1}^n \frac{\partial V(x(T))}{\partial x_k}\,[x_k(s) - x_k(T)]$ for $s\in[0,T]$. From $V\in\Omega$ it follows that $P(s) \ge 0$ for all $s\in[0,T]$, $P(T)=0$, and $\frac{dP(s)}{ds} = \sum_{k=1}^n \left(\frac{\partial V(x(s))}{\partial x_k} - \frac{\partial V(x(T))}{\partial x_k}\right) x_k'(s)$.
We integrate by parts and use the equalities $\lim_{s\to T^-}\frac{P(s)}{(T-s)^{\alpha}} = \lim_{s\to T^-}\frac{P''(s)}{\alpha(\alpha-1)(T-s)^{\alpha-2}} = 0$ and $\frac{d}{ds}\Bigl[e^{-|\lambda-1|(T-s)}(T-s)^{-\alpha}\Bigr] = e^{-|\lambda-1|(T-s)}(T-s)^{-\alpha}\bigl(|\lambda-1| + \alpha(T-s)^{-1}\bigr)$, and we obtain
$$ \int_0^T P'(\xi)\int_\xi^T (T-s)^{-\alpha} e^{-|\lambda-1|(T-s)}\,ds\,d\xi = P(\xi)\int_\xi^T (T-s)^{-\alpha} e^{-|\lambda-1|(T-s)}\,ds\,\Big|_{\xi=0}^{\xi=T} - \int_0^T P(\xi)\,\frac{d}{d\xi}\int_\xi^T (T-s)^{-\alpha} e^{-|\lambda-1|(T-s)}\,ds\,d\xi = -P(0)\int_0^T (T-s)^{-\alpha} e^{-|\lambda-1|(T-s)}\,ds + \int_0^T P(\xi)\,(T-\xi)^{-\alpha} e^{-|\lambda-1|(T-\xi)}\,d\xi \ge -P(0)\int_0^T (T-s)^{-\alpha} e^{-|\lambda-1|(T-s)}\,ds, \qquad (26) $$
and
$$ \int_0^T P'(\xi)\,(T-\xi)^{-\alpha} e^{-|\lambda-1|(T-\xi)}\,d\xi = P(\xi)\,(T-\xi)^{-\alpha} e^{-|\lambda-1|(T-\xi)}\,\Big|_{\xi=0}^{\xi=T} - \int_0^T P(\xi)\,\frac{d}{d\xi}\Bigl[(T-\xi)^{-\alpha} e^{-|\lambda-1|(T-\xi)}\Bigr]\,d\xi = -P(0)\,T^{-\alpha} e^{-|\lambda-1|T} - \int_0^T P(\xi)\Bigl(\alpha(T-\xi)^{-\alpha-1} e^{-|\lambda-1|(T-\xi)} + |\lambda-1|(T-\xi)^{-\alpha} e^{-|\lambda-1|(T-\xi)}\Bigr)\,d\xi \le -P(0)\,T^{-\alpha} e^{-|\lambda-1|T}. \qquad (27) $$
From $V\in\Omega$ and Remark 4 with $y=0$, it follows that $-V(x(T)) + \sum_{k=1}^n \frac{\partial V(x(T))}{\partial x_k}\, x_k(T) \ge 0$; thus, from (25), (27), and (26) we obtain
Γ ( 1 α ) λ 1 α 0 R L D t q , ρ V ( x ( t ) ) | t = T k = 1 n V ( x ( T ) ) x k 0 R L D t q , ρ x k ( t ) | t = T = P ( 0 ) e | λ 1 | T T α V ( x ( T ) ) + k = 1 n V ( x ( T ) ) x k x k ( T ) e | λ 1 | T T α + 0 T P ( ξ ) e | λ 1 | ( T ξ ) ( T ξ ) α d ξ + | λ 1 | P ( 0 ) 0 T ( T s ) α e ( λ 1 ) ( T s ) d s + | λ 1 | V ( x ( T ) ) V ( x ( T ) ) x k x k ( T ) 0 T ( T s ) α e ( λ 1 ) ( T s ) d s + | λ 1 | 0 T P ( ξ ) ξ T ( T s ) α e | λ 1 | ( T s ) d s d ξ P ( 0 ) e | λ 1 | T T α V ( x ( T ) ) + k = 1 n V ( x ( T ) ) x k x k ( T ) e | λ 1 | T T α P ( 0 ) e | λ 1 | T T α + | λ 1 | P ( 0 ) 0 T ( T s ) α e ( λ 1 ) ( T s ) d s | λ 1 | V ( x ( T ) ) + V ( x ( T ) ) x k x k ( T ) 0 T ( T s ) α e ( λ 1 ) ( T s ) d s | λ 1 | P ( 0 ) 0 T ( T s ) α e | λ 1 | ( T s ) d s 0 .
In the case $V(x) = \sum_{k=1}^n x_k^2$ for $x\in\mathbb{R}^n$, $x=(x_1,x_2,\dots,x_n)$, we obtain as a particular case of Lemma 1 the following result.
Corollary 1. 
Let the function $\nu \in C_{\alpha,\lambda}([0,A], \mathbb{R}^n)$, $0 < A \le \infty$, and $\nu^T\nu \in C_{\alpha,\lambda}([0,A], \mathbb{R})$. Then, the inequality ${}_0^{RL}D_t^{\alpha,\lambda}\bigl(\nu^T\nu\bigr)(t) \le 2\,\nu^T(t)\;{}_0^{RL}D_t^{\alpha,\lambda}\,\nu(t)$ for $t\in(0,A]$ holds.
In our further study, we will use the following result, whose proof is very similar to the one in [22] and is therefore omitted.
Lemma 2. 
Let the function $\upsilon \in C([0,A], \mathbb{R})$, $0 < A < \infty$, be Lipschitz, and let there exist a point $T\in(0,A]$ such that $\upsilon(T) = 0$ and $\upsilon(t) < 0$ for $0 \le t < T$. Then, if the GRLFD of $\upsilon$ exists for $t = T$ with $\alpha\in(0,1)$, $\lambda>0$, the inequality $\bigl({}_0^{RL}D_t^{\alpha,\lambda}\upsilon\bigr)(t)\big|_{t=T} \ge 0$ holds.
Remark 5. 
Note that a similar result to the one in Lemma 2 is proved in [23] for Riemann–Liouville fractional derivatives.

4. Some Results for Delay Differential Equations with GRLFD

Consider the delay nonlinear differential equation with GRLFD
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, y(t) = f(t, y_t)\quad \text{for } t>0, \qquad (29) $$
with initial conditions
$$ y(t) = \phi(t)\ \ \text{for } t\in[-\tau, 0),\qquad {}_0I_t^{1-\alpha,\lambda}\, y(t)\Big|_{t=0+} = \phi(0), \qquad (30) $$
where $\alpha\in(0,1)$, $\lambda>0$, $y_t(\sigma) = y(t+\sigma)$ for $\sigma\in[-\tau,0]$, the initial function $\phi : [-\tau,0]\to\mathbb{R}^n$, and $f : \mathbb{R}_+\times\mathbb{R}^n\to\mathbb{R}^n$.
We will assume that the initial value problem (29), (30) has a solution $y(t;\phi) \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ for any initial function $\phi \in C([-\tau,0], \mathbb{R}^n)$.
For any vector $\xi = (\xi_1,\xi_2,\dots,\xi_n)$, we will use the norm $\|\xi\| = \sqrt{\sum_{i=1}^n \xi_i^2}$.
In $C([-\tau,0], \mathbb{R}^n)$ we will use the norm $\|\phi\|_0 = \max_{t\in[-\tau,0]}\|\phi(t)\|$.
Theorem 1. 
Let there exist a continuous, locally Lipschitz function $V : \mathbb{R}^n \to [0,\infty)$ such that
(i) 
There exists an increasing function $a \in C([0,\infty),[0,\infty))$, $a(0)=0$, such that $a(\|\xi\|) \le V(\xi)$ for $\xi\in\mathbb{R}^n$;
(ii) 
For any solution $y \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ of (29), (30), the following conditions are satisfied:
-
There exists an increasing function $g \in C([0,\infty), \mathbb{R})$, $g(0)=0$, such that the inequality
$$ \lim_{t\to 0+} e^{|\lambda-1|t}\, t^{1-\alpha}\, V(y(t)) \le g(\|\phi(0)\|) $$
holds;
-
For any point $T > 0$ such that
$$ e^{|\lambda-1|(T+\sigma)}\,(T+\sigma)^{1-\alpha}\, V(y(T+\sigma)) < e^{|\lambda-1|T}\, T^{1-\alpha}\, V(y(T)) \quad \text{for } \sigma\in(-\min\{T,\tau\}, 0), $$
the GRLFD ${}_0^{RL}D_t^{\alpha,\lambda}\, V(y(t))\big|_{t=T}$ exists, and the inequality
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, V(y(t))\Big|_{t=T} < 0 $$
holds.
Then, there exists a point $T_\alpha > 0$ such that for any solution of (29), (30), the inequality
$$ \|x(t)\| < a^{-1}\bigl( g(\|\phi\|_0)\, e^{-|\lambda-1|t} \bigr) \quad \text{for } t > T_\alpha $$
holds.
Proof. 
Let x ( t ) be a solution of (29), (30) with the initial function ϕ C ( [ τ , 0 ] , R n ) .
From condition (ii), we obtain
$$ \lim_{\sigma\to 0+} e^{|\lambda-1|\sigma}\,\sigma^{1-\alpha}\, V(x(\sigma)) \le g(\|\phi(0)\|) \le g(\|\phi\|_0). $$
Therefore, there exists a number $\delta > 0$ such that
$$ V(x(t)) < e^{-|\lambda-1|t}\, t^{\alpha-1}\, g(\|\phi\|_0) \quad \text{for } t\in(0,\delta). $$
We define the function $H(t) = g(\|\phi\|_0)\, e^{-|\lambda-1|t}\, t^{\alpha-1} \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}_+)$ with $\lim_{t\to\infty} H(t) = 0$. Then, there exists $T_\alpha > 0$ such that $t^{\alpha-1} < 1$ for $t > T_\alpha$, and, thus,
$$ H(t) < g(\|\phi\|_0)\, e^{-|\lambda-1|t} \quad \text{for } t > T_\alpha. \qquad (33) $$
Consider the function $m(t) = V(x(t)) \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}_+)$.
We will prove that
$$ m(t) < H(t), \quad t>0. \qquad (34) $$
Assume inequality (34) is not true for all $t > 0$. Then, there exists a point $\eta \ge \delta > 0$ such that
$$ m(\eta) = H(\eta), \quad \text{and} \quad m(t) < H(t),\ t\in(0,\eta). \qquad (35) $$
Thus, $m(t) - H(t) \in C_{\alpha,\lambda}([0,\eta], \mathbb{R})$. According to Lemma 2 with $T = \eta$, $\nu(t) \equiv m(t) - H(t)$, the inequality ${}_0^{RL}D_t^{\alpha,\lambda}\bigl(m(t) - H(t)\bigr)\big|_{t=\eta} \ge 0$ holds. From Proposition 1, we obtain ${}_0^{RL}D_t^{\alpha,\lambda}\bigl(e^{-|\lambda-1|t}\, t^{\alpha-1}\bigr) = 0$ and, therefore,
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, m(t)\Big|_{t=\eta} = {}_0^{RL}D_t^{\alpha,\lambda}\bigl(m(t) - H(t)\bigr)\Big|_{t=\eta} \ge 0. \qquad (36) $$
Case 1. Let $\eta > \tau$. Then, $\min\{\eta,\tau\} = \tau$. From (35), it follows that
$$ e^{|\lambda-1|t}\, t^{1-\alpha}\, m(t) < \varepsilon := e^{|\lambda-1|\eta}\,\eta^{1-\alpha}\, m(\eta), \quad t\in(0,\eta), $$
or
$$ e^{|\lambda-1|(\eta+\sigma)}\,(\eta+\sigma)^{1-\alpha}\, V(x(\eta+\sigma)) < e^{|\lambda-1|\eta}\,\eta^{1-\alpha}\, V(x(\eta)), \quad \sigma\in(-\tau, 0). $$
According to condition (ii), the inequality
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, V(x(t))\Big|_{t=\eta} < 0 \qquad (37) $$
holds.
The inequality (37) contradicts (36).
Case 2. Let $\eta \le \tau$. Then, $\min\{\eta,\tau\} = \eta$. From (35), it follows that
$$ e^{|\lambda-1|\eta}\,\eta^{1-\alpha}\, m(\eta) = \varepsilon > e^{|\lambda-1|t}\, t^{1-\alpha}\, m(t), \quad t\in(0,\eta), $$
or
$$ e^{|\lambda-1|(\eta+\sigma)}\,(\eta+\sigma)^{1-\alpha}\, V(x(\eta+\sigma)) < e^{|\lambda-1|\eta}\,\eta^{1-\alpha}\, m(\eta) = e^{|\lambda-1|\eta}\,\eta^{1-\alpha}\, V(x(\eta)) $$
for $\sigma\in(-\eta, 0)$. Similar to Case 1, we obtain a contradiction.
From inequalities (33), (34) and condition (i), it follows that
$$ a(\|x(t)\|) \le V(x(t)) = m(t) < H(t) < g(\|\phi\|_0)\, e^{-|\lambda-1|t} \quad \text{for } t > T_\alpha. $$
This proves the claim of Theorem 1. □
Using the fact that, for any fixed $T \ge 0$, the function $Q(\sigma) = e^{|\lambda-1|(T+\sigma)}\,(T+\sigma)^{1-\alpha}$ is increasing for $\sigma\in(-\min\{T,\tau\}, 0)$, we obtain the following result.
Corollary 2. 
Let condition (i) of Theorem 1 be satisfied and:
(ii) *
For any solution $y \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ of (29), (30), the following conditions are satisfied:
-
There exists an increasing function $g \in C([0,\infty), \mathbb{R})$, $g(0)=0$, such that the inequality
$$ e^{|\lambda-1|t}\, t^{1-\alpha}\, V(y(t))\Big|_{t=0+} = \lim_{t\to 0+} e^{|\lambda-1|t}\, t^{1-\alpha}\, V(y(t)) \le g(\|\phi(0)\|) $$
holds;
-
For any point T > 0 such that
$$ V(y(T+\sigma)) < V(y(T)) \quad \text{for } \sigma\in(-\min\{T,\tau\}, 0), $$
the GRLFD 0 R L D t α , λ V ( y ( t ) ) | t = T exists and the inequality
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, V(y(t))\Big|_{t=T} < 0 $$
holds.
Then, any solution of (29), (30) satisfies the inequality
$$ \|x(t)\| < a^{-1}\bigl( g(\|\phi\|_0)\, e^{-|\lambda-1|t} \bigr) \quad \text{for } t > T_\alpha. $$
In the case that the Lyapunov function is a quadratic one, we obtain the following result.
Corollary 3. 
Let, for any solution $y \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ of (29), (30), $y^T y \in C_{\alpha,\lambda}([0,\infty), \mathbb{R})$ hold and
-
There exists an increasing function $g \in C([0,\infty), \mathbb{R})$, $g(0)=0$, such that the inequality
$$ \lim_{t\to 0+} e^{|\lambda-1|t}\, t^{1-\alpha}\, y^T(t)\, y(t) \le g\bigl(\|\phi(0)\|^2\bigr) $$
holds;
-
For any point T > 0 such that
$$ y^T(T+\sigma)\, y(T+\sigma) < y^T(T)\, y(T) \quad \text{for } \sigma\in(-\min\{T,\tau\}, 0), $$
the GRLFD ${}_0^{RL}D_t^{\alpha,\lambda}\, y^T(t)\, y(t)\big|_{t=T}$ exists and the inequality
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, y^T(t)\, y(t)\Big|_{t=T} < 0 $$
holds.
Then, there exists T α > 0 such that
$$ \|y(t)\| < \sqrt{g\Bigl(\max_{t\in[-\tau,0]}\|\phi(t)\|^2\Bigr)}\; e^{-0.5|\lambda-1|t} \quad \text{for } t > T_\alpha. $$
The proof follows from Theorem 1 by taking the Lyapunov function $V(x) = \sum_{i=1}^n x_i^2$ and using $a(u) \equiv u^2$.

5. Delayed Model of Neural Networks by GRLFD

5.1. Model Description

We will consider the general model of Hopfield neural network with the GRLFD and time-varying delays and distributed delays:
$$ {}_0^{RL}D_t^{\alpha,\lambda}\, u_i(t) = -A_i(t)\, u_i(t) + \sum_{k=1}^n a_{i,k}(t)\, f_k(u_k(t)) + \sum_{k=1}^n b_{i,k}(t)\, g_k\bigl(u_k(t-\kappa(t))\bigr) + \sum_{k=1}^n c_{i,k}(t) \int_{t-\Theta(t)}^t h_k(u_k(s))\,ds + I_i(t), \quad t>0,\ i=1,2,\dots,n, \qquad (40) $$
where $u_i(t)$, $i=1,2,\dots,n$, are the state variables of the $i$-th neuron at time $t > 0$; $a_{ij}(t)$, $b_{ij}(t)$, $c_{ij}(t)$ represent the strengths of the neuron interconnections at time $t$ (assumed to vary in time); $n$ is the number of units in the neural network; $\alpha\in(0,1)$, $\lambda>0$; $f_j(u)$, $g_j(u)$, and $h_j(u)$ are the activation functions of the $j$-th neuron; $\kappa(t)$ is the time-varying delay and $\Theta(t)$ is the length of the interval of the distributed delay, with $0 \le \kappa(t) \le \kappa$, $0 \le \Theta(t) \le \Theta$, and $\tau = \max\{\kappa, \Theta\}$; and $I_i(t)$ are the external inputs at time $t$.
The initial time interval is $[-\tau, 0]$. The applied GRLFD leads to a singularity of the solutions at the initial time 0. This requires an appropriate definition of the initial conditions. Some authors (see, for example, [24] for the RLFD) used an integral of the type ${}_0I_t^{1-\alpha} u(t)$ for $t\in[-\tau,0]$ in the initial condition, but this integral is defined only for $t$ greater than the lower limit, which is 0 in our case.
We will use the initial conditions (30).
We use the assumptions:
A1.
The functions $A_i \in C(\mathbb{R}, [\mu_i, \infty))$, where $\mu_i$, $i=1,2,\dots,n$, are positive constants.
A2.
The activation functions $f_i, g_i, h_i \in C(\mathbb{R}, \mathbb{R})$ are Lipschitz with constants $\alpha_i, \beta_i, \gamma_i$, $i=1,2,\dots,n$, respectively, i.e.,
$$ |f_i(x) - f_i(y)| \le \alpha_i |x-y|, \quad |g_i(x) - g_i(y)| \le \beta_i |x-y|, \quad |h_i(x) - h_i(y)| \le \gamma_i |x-y|, \quad x,y\in\mathbb{R}. $$
A3.
The functions $a_{ij}, b_{ij}, c_{ij} \in C([0,\infty), \mathbb{R})$, $i,j=1,2,\dots,n$.

5.2. Equilibrium of the Model

We define the equilibrium as a constant vector.
We also need to use the equality (16).
Definition 3. 
The constant vector $V^* = (C_1, C_2, \dots, C_n)$ is called an equilibrium of (40) if the equalities
$$ C_i\,\frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\left( e^{-|\lambda-1|t}\, t^{-\alpha} + |\lambda-1|^{\alpha}\,\gamma\bigl(1-\alpha,\,|\lambda-1|\,t\bigr)\right) = -A_i(t)\, C_i + \sum_{k=1}^n \Bigl( a_{i,k}(t)\, f_k(C_k) + b_{i,k}(t)\, g_k(C_k) + c_{i,k}(t)\,\Theta(t)\, h_k(C_k)\Bigr) + I_i(t) \quad \text{for } t\ge 0,\ i=1,2,\dots,n \qquad (41) $$
hold.
Remark 6. 
The equilibrium V * satisfies the initial condition
$$ {}_0I_t^{1-\alpha,\lambda}\, C_k = \frac{\lambda^{-\alpha}\, C_k}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha}\, e^{-|\lambda-1|(t-s)}\,ds = \frac{\lambda^{-\alpha}\, C_k}{\Gamma(1-\alpha)}\,|\lambda-1|^{\alpha-1}\,\gamma\bigl(1-\alpha,\,|\lambda-1|\,t\bigr)\Big|_{t=0+} = 0. $$
Remark 7. 
If $f_i(0) = g_i(0) = h_i(0) = 0$, $i=1,2,\dots,n$, and there is no external input, then the model (40) has a zero equilibrium.
Remark 8. 
In most cases in the literature, Hopfield neural networks are studied with constant coefficients $a_{ik}, b_{ik}, c_{ik}$ and constant external inputs $I_i$. In the case when the RLFD or the GRLFD is applied and all coefficients are constant, the only equilibrium is zero. This is not the case when at least one of the coefficients varies in time.
Let $V^*$ be an equilibrium of (40). We change the variables $\nu_i(t) = u_i(t) - C_i$, $t \ge 0$, in system (40). Then, applying (41), we obtain
$$ {}_0^{RL}D_t^{\alpha,\lambda}\,\nu_i(t) = -A_i(t)\,\nu_i(t) + \sum_{k=1}^n a_{i,k}(t)\, F_k(\nu_k(t)) + \sum_{k=1}^n b_{i,k}(t)\, G_k\bigl(\nu_k(t-\kappa(t))\bigr) + \sum_{k=1}^n c_{i,k}(t) \int_{t-\Theta(t)}^t H_k(\nu_k(s))\,ds, \quad t>0,\ i=1,2,\dots,n, \qquad (42) $$
with initial conditions (30), where
$$ F_i(x) = f_i(x + C_i) - f_i(C_i), \quad G_i(x) = g_i(x + C_i) - g_i(C_i), \quad H_i(x) = h_i(x + C_i) - h_i(C_i). $$
Note that the system (42) has a zero solution (with zero initial function).
Remark 9. 
If assumption A2 is satisfied, then
$$ |F_i(x)| \le \alpha_i |x|, \quad |G_i(x)| \le \beta_i |x|, \quad |H_i(x)| \le \gamma_i |x|, \quad x\in\mathbb{R}. $$

5.3. Stability of the Model by Lyapunov Functions and Razumikhin Method

Note that the singularity at the initial time of the solutions of differential equations with the GRLFD and the RLFD requires this point to be excluded, so that some stability properties are obtained on an interval not containing the initial time. This is totally different from the case of the Caputo type fractional derivative or a derivative of integer order. Some authors apply the RLFD but do not exclude the initial time, despite the fact that, for order $\alpha\in(0,1)$, the expressions $t^{-\alpha}$ and $t^{\alpha-1}$ are not bounded for points close enough to the initial time 0 (see, for example, [6,25,26]). The main concepts of stability for differential equations with the RLFD are discussed and studied in [27].
In connection with the applied GRLFD, we need to define the exponential stability of the equilibrium on an interval excluding the initial time 0.
Definition 4. 
The constant equilibrium $V^*$ of (40) is called exponentially stable in time if there exist a number $C > 0$, a point $T > 0$, and an increasing function $\Xi \in C(\mathbb{R}_+, \mathbb{R}_+)$ such that any solution $y(t)$ of (40), (30) satisfies
$$ \|y(t) - V^*\| \le \Xi\bigl(\|\phi\|_0\bigr)\, e^{-C|\lambda-1|t}, \quad t \ge T. $$
Note that if a function is RL integrable (or generalized fractional integrable), it is not necessarily squared RL integrable (or squared generalized fractional integrable).
Example 1. 
Let $\lambda = 0.5$, $\alpha = 0.3$, and $\nu(t) = t^{-0.8}$. Then, the integrals $\int_0^1 e^{-0.5(1-s)}(1-s)^{-0.3} s^{-0.8}\,ds$ and $\int_0^1 (1-s)^{-0.3} s^{-0.8}\,ds$ exist, but the corresponding integrals of the squared function $\nu^2(t) = t^{-1.6}$, i.e., $\int_0^1 e^{-0.5(1-s)}(1-s)^{-0.3} s^{-1.6}\,ds$ and $\int_0^1 (1-s)^{-0.3} s^{-1.6}\,ds$, do not exist.
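This behavior is easy to observe numerically. The following sketch (added here, not part of the original example) evaluates the second pair of integrals from a lower cut-off eps: the column for nu stabilizes as eps decreases, while the column for nu squared grows without bound:

from scipy.integrate import quad

def cut_off_integral(power, eps):
    # integral of (1-s)^(-0.3) * s^power over [eps, 1], split at 0.5 so that each
    # endpoint singularity is handled by a separate adaptive call
    a, _ = quad(lambda s: (1 - s) ** (-0.3) * s ** power, eps, 0.5)
    b, _ = quad(lambda s: (1 - s) ** (-0.3) * s ** power, 0.5, 1.0)
    return a + b

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:.0e}  nu: {cut_off_integral(-0.8, eps):8.4f}   nu^2: {cut_off_integral(-1.6, eps):12.2f}")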
When applying the squared Lyapunov function, i.e., the product $u^T(t)\, u(t)$ for the solution $u(t)$ of the model (40), it is therefore necessary to assume that any solution is squared RL integrable (or squared generalized fractional integrable) on the whole time interval of consideration. This is a strong restriction on the solutions of the model (see, for example, Theorems 3.1 and 3.2 of [24], where the Lyapunov functional $V_1$ is used without assuming squared integrability of the solution).
Now we will apply quadratic Lyapunov functions to study the stability properties of the model (40), (30).
Theorem 2. 
Let the assumptions A1–A3 be satisfied and
1.
The fractional neural model (40) has an equilibrium $V^* = (C_1, C_2, \dots, C_n)$.
2.
Any solution $u \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ is squared fractionally integrable, i.e., $u^T u \in C_{\alpha,\lambda}([0,\infty), \mathbb{R})$.
3.
For all t 0 , the inequalities
$$ \sum_{k=1}^n \Bigl\{ \bigl( \max_{i=1,2,\dots,n} |a_{i,k}(t)| + \max_{i=1,2,\dots,n} |a_{k,i}(t)| \bigr)\alpha + \bigl( \max_{i=1,2,\dots,n} |b_{i,k}(t)| + \max_{i=1,2,\dots,n} |b_{k,i}(t)| \bigr)\beta + \bigl( \max_{i=1,2,\dots,n} |c_{i,k}(t)| + \max_{i=1,2,\dots,n} |c_{k,i}(t)| \bigr)\tau\gamma \Bigr\} < 2\min\{\mu_i,\ i=1,2,\dots,n\} $$
hold, where $\alpha = \max_{i=1,2,\dots,n}\alpha_i$, $\beta = \max_{i=1,2,\dots,n}\beta_i$, and $\gamma = \max_{i=1,2,\dots,n}\gamma_i$.
Then, the equilibrium V * is exponentially stable in time, i.e., there exists a point T α > 0 such that any solution u C α , λ ( [ 0 , ) , R n ) of (40), (30) satisfies the inequality
$$ \|u(t) - V^*\| < \sum_{i=1}^n \max_{t\in[-\tau,0]}|\phi_i(t)|\;\frac{\lambda^{1-\alpha}}{\Gamma(\alpha)}\; e^{-|\lambda-1|t} \quad \text{for } t > T_\alpha. $$
Proof. 
Consider the Lyapunov function $V(x) = 0.5\, x^T x = 0.5\sum_{i=1}^n x_i^2$, $x\in\mathbb{R}^n$, $x=(x_1,x_2,\dots,x_n)$.
Let $u(t) \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ be a solution of (40), (30). Consider the system (42) with initial conditions (30). We will study the stability of its zero solution. Let $\nu_i(t) = u_i(t) - C_i$, $i=1,2,\dots,n$. Then, $\nu \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ is a solution of (42), (30) and ${}_0^{RL}D_t^{\alpha,\lambda}\, V(\nu(t)) = 0.5\sum_{i=1}^n {}_0^{RL}D_t^{\alpha,\lambda}\,\nu_i^2(t)$, $t>0$.
Let the point $t > 0$ be such that $\sum_{i=1}^n \nu_i^2(t+\sigma) < \sum_{i=1}^n \nu_i^2(t)$ for $\sigma\in(-\min\{\tau,t\}, 0)$.
We apply Corollary 1, assumptions A1–A3, and the inequality $ab \le 0.5a^2 + 0.5b^2$, and we obtain
0 R L D t α , λ V ( ν ( t ) ) i = 1 n μ i ν i 2 ( t ) + 0.5 i = 1 n ν i 2 ( t ) k = 1 n | a i , k ( t ) | α k + 0.5 i = 1 n k = 1 n | a i , k ( t ) | α k ν k 2 ( t ) + 0.5 i = 1 n ν i 2 ( t ) k = 1 n | b i , k ( t ) | β k + 0.5 i = 1 n k = 1 n | b i , k ( t ) | β k ν k 2 ( t ξ ( t ) ) + 0.5 i = 1 n k = 1 n | c i , k ( t ) | ν i 2 ( t ) Θ ( t ) γ k + 0.5 i = 1 n k = 1 n | c i , k ( t ) | t Θ ( t ) t γ k ν k 2 ( s ) d s i = 1 n μ i ν i 2 ( t ) + 0.5 i = 1 n ν i 2 ( t ) k = 1 n max i = 1 , 2 , , n | a i , k ( t ) | α + 0.5 i = 1 n ν i 2 ( t ) k = 1 n max i = 1 , 2 , , n | a k , i ( t ) | α + 0.5 i = 1 n ν i 2 ( t ) k = 1 n max i = 1 , 2 , , n | b i , k ( t ) | β + 0.5 i = 1 n ν i 2 ( t ξ ( t ) ) k = 1 n max i = 1 , 2 , , n | b k , i ( t ) | β + 0.5 i = 1 n ν i 2 ( t ) k = 1 n max i = 1 , 2 , , n | c i , k ( t ) | τ γ + 0.5 t Θ ( t ) t i = 1 n ν i 2 ( s ) d s k = 1 n max i = 1 , 2 , , n | c k , i ( t ) | γ 0.5 i = 1 n ( 2 μ i + k = 1 n ( max i = 1 , 2 , , n | a i , k ( t ) | α + max i = 1 , 2 , , n | a k , i ( t ) | α + max i = 1 , 2 , , n | b i , k ( t ) | β + max i = 1 , 2 , , n | b k , i ( t ) | β + max i = 1 , 2 , , n | c i , k ( t ) | τ γ + τ max i = 1 , 2 , , n | c k , i ( t ) | γ ) ) ν i 2 ( t ) < 0 .
From Corollary 3, the claim of Theorem 2 follows. □
Corollary 4. 
Let the conditions of Theorem 2 be satisfied. Then, any solution $u \in C_{\alpha,\lambda}([0,\infty), \mathbb{R}^n)$ of (40), (30) satisfies $\lim_{t\to\infty} u_i(t) = C_i$, $i=1,2,\dots,n$, i.e., any solution of the model (40) approaches the equilibrium at infinity.

6. Applications

Example 2. 
Consider the following neural network with three neurons and the GRLFD:
$$ {}_0^{RL}D_t^{0.3,\,0.5}\, u_i(t) = -A_i(t)\, u_i(t) + \sum_{k=1}^3 a_{i,k}(t)\, f_k(u_k(t)) + \sum_{k=1}^3 b_{i,k}(t)\, g_k(u_k(t-2)) + \sum_{k=1}^3 c_{i,k}(t) \int_{t-e^{-t}}^t h_k(u_k(s))\,ds + I_i(t), \quad t>0,\ i=1,2,3, \qquad (43) $$
with $\lambda = 0.5$, $\alpha = 0.3$, $\kappa(t) = 2$, $\Theta(t) = e^{-t}$, $\tau = 2$, coefficients $A_1(t) = 2$, $A_2(t) = 0.5\,e^{t}$, $A_3(t) = 1 + 0.5\,e^{t}$, and, therefore, $\mu_1 = 2$, $\mu_2 = 0.5$, $\mu_3 = 1.5$.
The activation functions $f_1(x) = g_1(x) = h_1(x) = \frac{x}{1+e^{-x}}$ are the Swish function with constants $\alpha_1 = \beta_1 = \gamma_1 = 1.1$, $f_2(x) = g_2(x) = h_2(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$ are the tanh function with constants $\alpha_2 = \beta_2 = \gamma_2 = 1$, and $f_3(x) = g_3(x) = h_3(x) = 0.5(|x+1| - |x-1|)$ with $\alpha_3 = \beta_3 = \gamma_3 = 1$; the external inputs are given by
$$ I_1(t) = 2 + \frac{0.5^{0.5}}{\Gamma(0.7)}\Bigl( e^{-0.5t}\, t^{-0.3} + 0.5^{0.3}\,\gamma(0.5,\, 0.5t)\Bigr), \qquad I_2(t) = -0.1\, e^{-t}\,\frac{e}{1+e}, \qquad I_3(t) = -0.001\,\frac{e}{1+e}, $$
and the matrices of the strengths of the interconnections are
$$ \{a_{i,k}(t)\} = \begin{pmatrix} 0.01 & 0 & 0.02 \\ 0.01\, e^{-t} & 0.01 & 0 \\ 0 & 0.01\frac{t}{1+t} & 0 \end{pmatrix}, \qquad \{b_{i,k}(t)\} = \begin{pmatrix} 0.01 & 0.1 & 0.1 \\ 0.1\, e^{-t} & 0 & 0.1\, e^{-2t} \\ 0 & 0.001\sin(t) & 0 \end{pmatrix}, $$
$$ \{c_{i,k}(t)\} = \begin{pmatrix} 0 & 0.01 & 0.01\, e^{-t} \\ 0.01 & 0.0022\frac{t}{1+t} & 0 \\ 0.001 & 0 & 0.005\, e^{-t} \end{pmatrix}, $$
with $\sum_{k=1}^3 \max_{i=1,2,3} |b_{k,i}(t)| = 0.1 + 0.1\, e^{-t} + 0.05\,|\sin(t)| \le 0.25$ and $\sum_{k=1}^3 \max_{i=1,2,3} |c_{k,i}(t)| = 0.01 + 0.01 + 0.005\, e^{-t} \le 0.025$.
Then, the inequality
$$ \sum_{k=1}^3 \Bigl\{ \bigl( \max_{i=1,2,3} |a_{i,k}(t)| + \max_{i=1,2,3} |a_{k,i}(t)| \bigr)\cdot 1.1 + \bigl( \max_{i=1,2,3} |b_{i,k}(t)| + \max_{i=1,2,3} |b_{k,i}(t)| \bigr)\cdot 1.1 + \bigl( \max_{i=1,2,3} |c_{i,k}(t)| + \max_{i=1,2,3} |c_{k,i}(t)| \bigr)\cdot 2.2 \Bigr\} \le (0.04 + 0.04)\cdot 1.1 + (0.25 + 0.3)\cdot 1.1 + (0.025 + 0.025)\cdot 2.2 = 0.803 < 2\min\{\mu_1,\mu_2,\mu_3\} = 1 $$
is satisfied, i.e., condition 3 of Theorem 2 is satisfied.
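Condition 3 can also be verified numerically. The following sketch (added here; it uses the matrices and Lipschitz constants as reconstructed above) evaluates the left-hand side of condition 3 of Theorem 2 on a grid of time points and compares the worst value with $2\min\{\mu_1,\mu_2,\mu_3\} = 1$:

import numpy as np

alpha_L, beta_L, gamma_L, tau = 1.1, 1.1, 1.1, 2.0        # Lipschitz constants and delay bound
mu = np.array([2.0, 0.5, 1.5])

def A(t):
    return np.array([[0.01, 0.0, 0.02],
                     [0.01 * np.exp(-t), 0.01, 0.0],
                     [0.0, 0.01 * t / (1 + t), 0.0]])

def B(t):
    return np.array([[0.01, 0.1, 0.1],
                     [0.1 * np.exp(-t), 0.0, 0.1 * np.exp(-2 * t)],
                     [0.0, 0.001 * np.sin(t), 0.0]])

def C(t):
    return np.array([[0.0, 0.01, 0.01 * np.exp(-t)],
                     [0.01, 0.0022 * t / (1 + t), 0.0],
                     [0.001, 0.0, 0.005 * np.exp(-t)]])

def lhs(t):
    a, b, c = np.abs(A(t)), np.abs(B(t)), np.abs(C(t))
    col = lambda m: m.max(axis=0).sum()    # sum over k of max_i |m_{i,k}|
    row = lambda m: m.max(axis=1).sum()    # sum over k of max_i |m_{k,i}|
    return (col(a) + row(a)) * alpha_L + (col(b) + row(b)) * beta_L + (col(c) + row(c)) * tau * gamma_L

worst = max(lhs(t) for t in np.linspace(0.0, 50.0, 5001))
print(worst, "<", 2 * mu.min(), ":", worst < 2 * mu.min())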
The model (43) has an equilibrium $V^* = (1, 0, 0)$ because $f_1(1) = g_1(1) = h_1(1) = \frac{e}{1+e}$, $f_2(0) = g_2(0) = h_2(0) = 0$, $f_3(0) = g_3(0) = h_3(0) = 0$, and the equalities
$$ \frac{0.5^{0.5}}{\Gamma(0.7)}\Bigl( e^{-0.5t}\, t^{-0.3} + 0.5^{0.3}\,\gamma(0.5,\, 0.5t)\Bigr) = -A_1(t) + \bigl( a_{1,1}(t) + b_{1,1}(t) + c_{1,1}(t)\, e^{-t}\bigr)\frac{e}{1+e} + I_1(t), $$
$$ 0 = \bigl( a_{2,1}(t) + b_{2,1}(t) + c_{2,1}(t)\, e^{-t}\bigr)\frac{e}{1+e} + I_2(t), $$
$$ 0 = \bigl( a_{3,1}(t) + b_{3,1}(t) + c_{3,1}(t)\, e^{-t}\bigr)\frac{e}{1+e} + I_3(t) \quad \text{for } t\ge 0 $$
hold.
According to Theorem 2, the equilibrium of (43) is exponentially stable in time, i.e., every solution $(u_1(\cdot), u_2(\cdot), u_3(\cdot))$ of (43) satisfies the inequality
$$ \sqrt{(u_1(t)-1)^2 + u_2^2(t) + u_3^2(t)} \le \sum_{i=1}^3 \max_{t\in[-2,0]}|\phi_i(t)|\;\frac{0.5^{0.7}}{\Gamma(0.3)}\; e^{-0.5t}, \quad t > T_\alpha = 1, $$
where $T_\alpha = 1$ is determined by the requirement $t^{\alpha-1} = t^{-0.7} < 1$ for $t > T_\alpha$.
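The Lipschitz constants used for the activation functions can be checked numerically as well. The following sketch (added here, not part of the original example) estimates the maximal slope of each activation function on a fine grid; the Swish function gives approximately 1.0998, which is below the constant 1.1 used above, and the other two functions give 1:

import numpy as np

x = np.linspace(-10, 10, 200001)

def num_lipschitz(f):
    y = f(x)
    return np.max(np.abs(np.diff(y) / np.diff(x)))         # maximal slope on a fine grid

swish = lambda z: z / (1 + np.exp(-z))
tanh_ = np.tanh
piecewise = lambda z: 0.5 * (np.abs(z + 1) - np.abs(z - 1))

print(num_lipschitz(swish), num_lipschitz(tanh_), num_lipschitz(piecewise))
# approximately 1.0998, 1.0, 1.0 -- consistent with the constants 1.1, 1, 1 used above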

7. Conclusions

The main aim of the paper is to study Hopfield neural networks with both time-varying delays and distributed delays. An important aspect of our study is that we consider the general case of time-varying coefficients and external inputs. The dynamics of the units are modeled by an appropriately defined generalized fractional derivative of Riemann–Liouville type satisfying the first and the second fundamental theorems of fractional calculus. This derivative is applied to adequately model behavior with anomalies at the initial time point. An exponential type of stability is defined; this stability excludes the initial time because of the singularity of the solutions there. Quadratic Lyapunov functions and the Razumikhin method are applied. The theoretical results are illustrated with an example. The main inequality for general convex Lyapunov functions can be applied to the theoretical study of the stability behavior of fractional differential equations with the applied generalized fractional derivative, as well as to the study of the stability of the equilibria of many models involving this type of fractional derivative.

Author Contributions

Conceptualization, R.P.A. and S.H.; Validation, R.P.A. and S.H.; Formal analysis, R.P.A. and S.H.; Writing—original draft, R.P.A. and S.H.; Writing—review & editing, R.P.A. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Bulgarian National Science Fund grant number KP-06-PN62/1.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, S.; Yu, Y.; Wang, H. Mittag-Leffler stability of fractional-order Hopfield neural networks. Nonlinear Anal. Hybrid Syst. 2015, 16, 104–121. [Google Scholar] [CrossRef]
  2. Kutahyalıoglu, A.; Karakoc, F. Exponential stability of Hopfield neural networks with conformable fractional derivative. Neurocomputing 2021, 456, 263–267. [Google Scholar] [CrossRef]
  3. Tian, Y.; Wang, Z. Stability analysis for delayed neural networks: A fractional-order function method. Neurocomputing 2021, 464, 282–289. [Google Scholar] [CrossRef]
  4. Wang, H.; Yu, Y.; Wen, G.; Zhang, S. Stability Analysis of Fractional-Order Neural Networks with Time Delay. Neural Process Lett. 2015, 42, 479–500. [Google Scholar] [CrossRef]
  5. Wang, H.; Yu, Y.; Wen, G. Stability analysis of fractional-order Hopfield neural networks with time delays. Neural Netw. 2014, 55, 98–109. [Google Scholar] [CrossRef]
  6. Zhang, H.; Ye, R.; Cao, J.; Ahmed, A.; Li, X.; Wan, Y. Lyapunov functional approach to stability analysis of Riemann-Liouville fractional neural networks with time-varying delays. Asian J. Control 2017, 20, 1938–1951. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Li, J.; Zhu, S.; Wang, H. Asymptotical stability and synchronization of Riemann–Liouville fractional delayed neural networks. Comp. Appl. Math. 2023, 42, 20. [Google Scholar] [CrossRef]
  8. Hristova, S.; Tersian, S.; Terzieva, R. Lipschitz Stability in Time for Riemann-Liouville Fractional Differential Equations. Fractal Fract. 2021, 5, 37. [Google Scholar] [CrossRef]
  9. Agarwal, R.; Hristova, S.; O’Regan, D. Practical stability for Riemann–Liouville delay fractional differential equations. Arab. J. Math. 2021, 10, 271–283. [Google Scholar] [CrossRef]
  10. Agarwal, R.; Hristova, S.; O’Regan, D.; Kopanov, P. Mean-square stability of Riemann–Liouville fractional Hopfield’s graded response neural networks with random impulses. Adv. Differ. Equ. 2021, 2021, 98. [Google Scholar] [CrossRef]
  11. Viera-Martin, E.; Gómez-Aguilar, J.F.; Solís-Pérez, J.E.; Hernández-Pérez, J.A.; Escobar-Jiménez, R.F. Artificial neural networks: A practical review of applications involving fractional calculus. Eur. Phys. J. Spec. Top. 2022, 231, 2059–2095. [Google Scholar] [CrossRef] [PubMed]
  12. Luchko, Y. General fractional integrals and derivatives with the Sonine kernels. Mathematics 2021, 9, 594. [Google Scholar] [CrossRef]
  13. Luchko, Y. Fractional differential equations with the general fractional derivatives of arbitrary order in the Riemann-Liouville sense. Mathematics 2022, 10, 849. [Google Scholar] [CrossRef]
  14. Jarad, F.; Abdeljawad, T.; Alzabut, J. Generalized fractional derivatives generated by a class of local proportional derivatives. Eur. Phys. J. Spec. Top. 2017, 226, 3457–3471. [Google Scholar] [CrossRef]
  15. Jarad, F.; Abdeljawad, T. Generalized fractional derivatives and Laplace transform. Discret. Contin. Dyn. Syst. Ser. S 2020, 13, 709–722. [Google Scholar] [CrossRef] [Green Version]
  16. Samko, S.G.; Cardoso, R.P. Integral equations of the first kind of Sonine type. Intern. J. Math. Math. Sci. 2003, 57, 238394. [Google Scholar] [CrossRef] [Green Version]
  17. Luchko, Y. General fractional integrals and derivatives of arbitrary order. Symmetry 2021, 13, 755. [Google Scholar] [CrossRef]
  18. Das, S. Functional Fractional Calculus; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  19. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  20. Samko, G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach: Washington, DC, USA, 1993. [Google Scholar]
  21. Sabzikar, F.; Meerschaert, M.M.; Chen, J. Tempered fractional calculus. J. Comput. Phys. 2015, 293, 14–28. [Google Scholar] [CrossRef] [Green Version]
  22. Agarwal, R.P.; Hristova, S.; O’Regan, D. Mittag-Leffler Type Stability of BAM Neural Networks Modeled by the Generalized Proportional Riemann-Liouville Fractional Derivative. Axioms 2023, 12, 588. [Google Scholar] [CrossRef]
  23. Devi, J.V.; Rae, F.A.M.; Drici, Z. Variational Lyapunov method for fractional differential equations. Comput. Math. Appl. 2012, 64, 2982–2989. [Google Scholar] [CrossRef] [Green Version]
  24. Wu, X.; Liu, S.; Wang, Y. Stability analysis of Riemann-Liouville fractional-order neural networks with reaction-diffusion terms and mixed time-varying delays. Neurocomputing 2021, 431, 169–178. [Google Scholar] [CrossRef]
  25. Alidousti, J.; Ghaziani, R.K.; Eshkaftaki, A.B. Stability analysis of nonlinear fractional differential order systems with Caputo and Riemann–Liouville derivatives. Turk. J. Math. 2017, 41, 1260–1278. [Google Scholar] [CrossRef]
  26. Qin, Z.; Wu, R.; Lu, Y. Stability analysis of fractional order systems with the Riemann–Liouville derivative. Syst. Sci. Control Eng. Open Access J. 2014, 2, 727–731. [Google Scholar] [CrossRef] [Green Version]
  27. Agarwal, R.; Hristova, S.; O’Regan, D. Stability Concepts of Riemann-Liouville Fractional-Order Delay Nonlinear Systems. Mathematics 2021, 9, 435. [Google Scholar] [CrossRef]