Article

Cohen–Grossberg Neural Network Delay Models with Fractional Derivatives with Respect to Another Function—Theoretical Bounds of the Solutions

by
Ravi Agarwal
1,
Snezhana Hristova
2,* and
Donal O’Regan
3
1
Department of Mathematics and Systems Engineering, Florida Institute of Technology, Melbourne, FL 32901, USA
2
Faculty of Mathematics and Informatics, Plovdiv University “P. Hilendarski”, 4000 Plovdiv, Bulgaria
3
School of Mathematical and Statistical Sciences, University of Galway, H91 TK33 Galway, Ireland
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 605; https://doi.org/10.3390/axioms13090605
Submission received: 3 August 2024 / Revised: 27 August 2024 / Accepted: 2 September 2024 / Published: 5 September 2024

Abstract

The Cohen–Grossberg neural network is studied in the case when the dynamics of the neurons is modeled by a Riemann–Liouville fractional derivative with respect to another function, and an appropriate initial condition is set up. Inequalities for both the quadratic function and the absolute value function and for their fractional derivatives with respect to another function are proved; they are based on an appropriate modification of the Razumikhin method. These inequalities are applied to obtain bounds on the norms of any solution of the model; in particular, we apply the squared norm and the absolute value norm. These bounds depend significantly on the function applied in the fractional derivative. We study the asymptotic behavior of the solutions of the model. When the function applied in the fractional derivative increases without bound, the norms of the solution approach zero. When this function is equal to the current time, the studied problem reduces to the model with the classical Riemann–Liouville fractional derivative, and the obtained results give sufficient conditions for the asymptotic behavior of the solutions of the corresponding model. When the function applied in the fractional derivative is bounded, we obtain a finite bound for the solutions of the model; this bound depends on the initial function, and the solution does not approach zero. An example illustrating the theoretical results is given.

1. Introduction

Many different types of neural network models are considered in the literature, and various properties of the behavior of the neurons have been proved by many authors. Most of the studied models use ordinary derivatives to describe the dynamics of the neurons. For example, recently, in [1], stability properties and Hopf bifurcation for a neural network model in which two rings share a common node are investigated; in [2], the mean square exponential stability of delayed Markovian jumping reaction–diffusion Hopfield neural networks with uncertain transition rates is studied. Recently, the theoretical development of fractional calculus and its applications to differential equations has given us the opportunity to widen the area of modeling neural networks by describing the dynamics of neurons with fractional derivatives. For example, in [3], a 2D memory-based Caputo fractional model is considered and the combined impact of calcium influx and efflux on nerve cells is studied.
The Cohen–Grossberg (CG) neural network model has been studied by many authors. For example, the model with ordinary derivatives and both time-varying delays and continuously distributed delays is investigated in [4], and a bibliographic analysis of fractional neural networks is given in [5]. Most of the studied fractional CG models consider various types of Caputo fractional derivatives, since the initial conditions are similar to the ones for ordinary derivatives. Various qualitative properties of the solutions of CG fractional models with Caputo-type derivatives are known in the literature. For example, finite-time stability for a delay model is studied in [6,7], almost periodicity under variable impulsive perturbations is considered in [8], stability for a delayed model with uncertainties is investigated in [9], synchronization for an impulsive model is given in [10], finite-time synchronization of fractional-order fuzzy neural networks via nonlinear feedback control is studied in [11], exponential and power-type stability for the model with distributed and discrete delays is studied by Lyapunov-type functionals in [12], and stability for the time-varying delay model is studied by the flexible terminal interpolation method in [13].
In the case when the neural network has an anomaly at the initial time point, the Caputo-type fractional derivative cannot be applied, and the Riemann–Liouville fractional derivative is more appropriate for adequately modeling the dynamics of the neurons. Neural networks with Riemann–Liouville fractional derivatives are studied, for example, in [14] (synchronization), [15] (stability for a model with a reaction–diffusion term), and [16] (synchronization stability).
In this paper, we investigate the asymptotic behavior of the neurons for the Cohen–Grossberg neural network with dynamics modeled by the Riemann–Liouville (RL) fractional derivative. More generally, we apply the RL fractional derivative with respect to another function (DwrtF), and we study the general model with delays. By employing the modified Razumikhin method, we obtain upper bounds on the norms of the solutions. In particular, we study the behavior of the norms of the state variables when the squared norm and the absolute value norm are applied. These results are valid on intervals excluding the initial time, which is a singular point for the solutions. The bounds depend on both the initial function and the function involved in the DwrtF. In the case when the function applied in the fractional derivative increases without bound, we prove that the norms of any solution of the model approach zero. For example, when this function is g(t) = t, the studied fractional derivative reduces to the classical RL fractional derivative, and our results are related to some published results ([14,17]). In the case when the function applied in the fractional derivative approaches a finite number, we prove that the norms of any solution of the model do not approach zero. An example is given illustrating the theoretical results and the influence of the function involved in the DwrtF on the behavior of the solutions.
The main contributions in the paper can be summarized as follows:
-
The initial value problem for the Cohen–Grossberg neural network with time-varying delays and modeled by DwrtF is set up.
-
Two types of bounds on the norms of the solutions of the model with DwrtF are obtained by applying an appropriate modification of the Razumikhin method (with absolute value and quadratic Lyapunov functions).
-
Bounds for both norms (quadratic norm and absolute value norm) of the solutions of the CG fractional model are obtained.
-
The significant influence of the applied function in the fractional derivative on the asymptotic behavior of the neurons is considered.

2. Notes on Fractional Calculus

The fractional integral with respect to another function was defined in 1970 by Osler [18]. Later, this definition was refined and given in a more convenient form in [19,20], and it has been studied and applied by many authors; see, for example, [21,22,23].
We will present the basic definitions and some properties of Riemann–Liouville fractional integrals and derivatives with respect to a function.
Let 0 < B be a given number.
We introduce the assumption (H):
(H). The function $g \in C^1([0,B],(0,\infty))$ is a strictly increasing function.
Definition 1
([19,20]). The Riemann–Liouville fractional integral with respect to another function $g(\cdot)$ (IwrtF) of order $\mu>0$ of a given function $y:[0,B]\to\mathbb{R}$ is defined by
$$ {}_0I^{\mu}_{g(t)}\, y(t) = \frac{1}{\Gamma(\mu)}\int_0^t \big(g(t)-g(s)\big)^{\mu-1} g'(s)\, y(s)\, ds, \quad t\in(0,B], $$
and the Riemann–Liouville fractional derivative with respect to another function $g(\cdot)$ (DwrtF) of order $\mu\in(0,1)$ of a given function $y:[0,B]\to\mathbb{R}$ is defined by
$$ {}_0^{RL}D^{\mu}_{g(t)}\, y(t) = \frac{1}{g'(t)}\,\frac{d}{dt}\; {}_0I^{1-\mu}_{g(t)}\, y(t) = \frac{1}{g'(t)\,\Gamma(1-\mu)}\,\frac{d}{dt}\int_0^t \big(g(t)-g(s)\big)^{-\mu} g'(s)\, y(s)\, ds, \quad t\in(0,B], $$
where the function $g(\cdot)$ satisfies condition (H).
The integral (IwrtF) and the derivative (DwrtF) of a vector-valued function are defined component-wise.
We will use the following vector space:
$$ C^{\mu}_g\big((0,B),\mathbb{R}^n\big) = \Big\{ f:[0,B]\to\mathbb{R}^n \ :\ {}_0^{RL}D^{\mu}_{g(t)}\, f(t) \text{ exists for } t\in(0,B] \Big\}. $$
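To make Definition 1 concrete, here is a small numerical sketch (an illustration, not code from the paper): it approximates the IwrtF by freezing $y$ on each cell of a uniform grid and integrating the kernel $(g(t)-g(s))^{\mu-1}g'(s)$ exactly on each cell, so no explicit derivative of $g$ is needed. The helper name frac_integral_wrt_g, the grid size, and the test function $g(t)=t^2+1$ are our own assumptions.

```python
import numpy as np
from math import gamma

def frac_integral_wrt_g(y, g, t, mu, n=4000):
    """Product-integration approximation of the IwrtF of Definition 1:
    y is frozen at the left node of each cell while the kernel
    (g(t)-g(s))**(mu-1) * g'(s) is integrated exactly on the cell."""
    s = np.linspace(0.0, t, n + 1)
    gs = g(s)
    # exact cell integrals of the kernel:
    # ((g(t)-g(s_i))**mu - (g(t)-g(s_{i+1}))**mu) / mu
    w = ((gs[-1] - gs[:-1])**mu - (gs[-1] - gs[1:])**mu) / mu
    return np.dot(w, y(s[:-1])) / gamma(mu)

# sanity check: for y(s) = 1 the exact value is (g(t)-g(0))**mu / Gamma(mu+1)
g = lambda s: s**2 + 1.0            # strictly increasing, so condition (H) holds
mu, t = 0.3, 1.5
approx = frac_integral_wrt_g(lambda s: np.ones_like(s), g, t, mu)
exact = (g(t) - g(0.0))**mu / gamma(mu + 1)
print(approx, exact)                 # the two values agree
```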
Lemma 1
(Lemma 11 [24]). Suppose the function $v=(v_1,v_2,\dots,v_n)$ is such that $v_k\in C^{\mu}_g((0,B),\mathbb{R})$ and $v_k^2\in C^{\mu}_g((0,B),\mathbb{R})$, $k=1,2,\dots,n$. Then the inequality
$$ \sum_{k=1}^n {}_0^{RL}D^{\mu}_{g(t)}\, v_k^2(t) \le 2\sum_{k=1}^n v_k(t)\; {}_0^{RL}D^{\mu}_{g(t)}\, v_k(t), \quad t\in(0,B], $$
holds.
Lemma 2
(Corollary 4 [24]). Suppose the function $v=(v_1,v_2,\dots,v_n)$ is such that $v_k\in C^{\mu}_g((0,B),\mathbb{R})$, $k=1,2,\dots,n$. Then for any point $t\in(0,B]$ such that $v_k(t)\neq 0$, $k=1,2,\dots,n$, the inequality
$$ \sum_{k=1}^n {}_0^{RL}D^{\mu}_{g(t)}\, |v_k(t)| \le 2\sum_{k=1}^n \operatorname{sign}\big(v_k(t)\big)\; {}_0^{RL}D^{\mu}_{g(t)}\, v_k(t) $$
holds.
We will prove a result for the applied DwrtF, which will be applied in the main proofs.
Lemma 3.
Let $\omega\in C([0,B],\mathbb{R})$ and let there exist a point $T\in(0,B]$ such that $\omega(T)=0$ and $\omega(t)<0$ for $t\in(0,T)$. Then, if the DwrtF of $\omega$ exists at $t=T$ with $\mu\in(0,1]$, the inequality ${}_0^{RL}D^{\mu}_{g(t)}\,\omega(t)\big|_{t=T} \ge 0$ holds.
Proof. 
Let $H(t) = \int_0^t \big(g(t)-g(s)\big)^{-\mu} g'(s)\,\omega(s)\,ds$ for $t\in(0,B]$. According to (2), we have
$$ {}_0^{RL}D^{\mu}_{g(t)}\,\omega(t)\Big|_{t=T} = \frac{1}{g'(T)\,\Gamma(1-\mu)}\,\lim_{h\to 0^+}\frac{H(T+h)-H(T)}{h}. $$
Applying condition (H), we get
$$ \begin{aligned} H(T+h)-H(T) &= \int_0^{T+h}\big(g(T+h)-g(s)\big)^{-\mu} g'(s)\,\omega(s)\,ds - \int_0^{T}\big(g(T)-g(s)\big)^{-\mu} g'(s)\,\omega(s)\,ds \\ &= \int_0^{T}\Big[\big(g(T+h)-g(s)\big)^{-\mu} - \big(g(T)-g(s)\big)^{-\mu}\Big] g'(s)\,\omega(s)\,ds + \int_T^{T+h}\big(g(T+h)-g(s)\big)^{-\mu} g'(s)\,\omega(s)\,ds \\ &\ge \int_0^{T}\Big[\big(g(T+h)-g(s)\big)^{-\mu} - \big(g(T)-g(s)\big)^{-\mu}\Big] g'(s)\,\omega(s)\,ds. \end{aligned} $$
From (6), we obtain
$$ \lim_{h\to 0^+}\frac{H(T+h)-H(T)}{h} \ge \int_0^{T}\lim_{h\to 0^+}\frac{\big(g(T+h)-g(s)\big)^{-\mu} - \big(g(T)-g(s)\big)^{-\mu}}{h}\; g'(s)\,\omega(s)\,ds = \int_0^{T}\frac{\partial}{\partial T}\big(g(T)-g(s)\big)^{-\mu}\; g'(s)\,\omega(s)\,ds \ge 0. \quad \square $$

3. Some Results for Delay Differential Equations with DwrtF

Consider the following nonlinear delay differential equation with DwrtF
$$ {}_0^{RL}D^{\mu}_{g(t)}\, y(t) = F(t, y(t), y_t) \quad \text{for } t>0, $$
with initial conditions
$$ y(t) = \phi(t) \ \ \text{for } t\in[-\tau,0), \qquad \lim_{x\to 0^+}\, {}_0I^{1-\mu}_{g(x)}\, y(x) = \phi(0), $$
where $\mu\in(0,1)$, $y_t(\Theta)=y(t+\Theta)$, $\Theta\in[-\tau,0]$, $\phi:[-\tau,0]\to\mathbb{R}^n$, and $F:\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$, $F=(F_1,F_2,\dots,F_n)$.
We will assume that, for any initial function $\phi\in C([-\tau,0],\mathbb{R}^n)$, the problem (8) and (9) has a solution $y(t;\phi)\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$.
The applied Riemann–Liouville type fractional derivative of order between 0 and 1 and the delay in the equation require special types of initial conditions such as (9). We will prove these initial conditions can be presented in an equivalent limit form. This form will be used later in the modified Razumikhin method.
Lemma 4.
Let $\mu\in(0,1)$, let condition (H) be satisfied, and let $y\in C^{\mu}_g([0,\infty),\mathbb{R})$.
(i) 
Suppose
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y(t) = c < \infty. $$
Then ${}_0I^{1-\mu}_{g(t)}\, y(t)\big|_{t=0^+} = c\,\Gamma(\mu)$.
(ii) 
Let ${}_0I^{1-\mu}_{g(t)}\, y(t)\big|_{t=0^+} = b < \infty$.
If $\lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y(t)$ exists, then
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y(t) = \frac{b}{\Gamma(\mu)}. $$
Proof. 
Suppose (10) holds. Let $\varepsilon>0$ be arbitrary. Then there exists a number $\delta>0$ such that
$$ \Big|\big(g(t)-g(0)\big)^{1-\mu}\, y(t) - c\Big| < \varepsilon, \quad t\in(0,\delta). $$
Using ${}_0I^{1-\mu}_{g(t)}\big(g(t)-g(0)\big)^{\mu-1} = \Gamma(\mu)$, we obtain
$$ \Big|{}_0I^{1-\mu}_{g(t)}\, y(t) - c\,\Gamma(\mu)\Big| = \Big|{}_0I^{1-\mu}_{g(t)}\, y(t) - {}_0I^{1-\mu}_{g(t)}\big(g(t)-g(0)\big)^{\mu-1} c\Big| = \Big|{}_0I^{1-\mu}_{g(t)}\Big[\big(g(t)-g(0)\big)^{\mu-1}\Big(\big(g(t)-g(0)\big)^{1-\mu}\, y(t) - c\Big)\Big]\Big| < \varepsilon\,\Gamma(\mu), \quad t\in(0,\delta). $$
Inequality (12) proves ${}_0I^{1-\mu}_{g(t)}\, y(t)\big|_{t=0^+} = c\,\Gamma(\mu)$.
Let ${}_0I^{1-\mu}_{g(t)}\, y(t)\big|_{t=0^+} = b$ and suppose $\lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y(t) = c < \infty$ exists. Then, from the above, $b = c\,\Gamma(\mu)$, and so $c = b/\Gamma(\mu)$. □
For any vector $x\in\mathbb{R}^n$, $x=(x_1,x_2,\dots,x_n)$, we will use the norms
$$ \|x\|_1 = \sum_{i=1}^n |x_i| $$
and
$$ \|x\|_2 = \sum_{i=1}^n x_i^2 . $$
For any vectors $u,v\in\mathbb{R}^n$, $u=(u_1,u_2,\dots,u_n)$, $v=(v_1,v_2,\dots,v_n)$, we use the dot product $u\cdot v = \sum_{i=1}^n u_i v_i$.
For any $\phi\in C([-\tau,0],\mathbb{R}^n)$, we denote
$$ \|\phi\|_{0,1} = \max_{t\in[-\tau,0]}\|\phi(t)\|_1 = \max_{t\in[-\tau,0]}\sum_{i=1}^n |\phi_i(t)| $$
and
$$ \|\phi\|_{0,2} = \max_{t\in[-\tau,0]}\|\phi(t)\|_2 = \max_{t\in[-\tau,0]}\sum_{i=1}^n \phi_i^2(t). $$
Remark 1.
According to Lemma 4, the second equality ${}_0I^{1-\mu}_{g(t)}\, y(t)\big|_{t=0^+} = \phi(0)$ in the initial condition (9) could be replaced by its equivalent limit form $\lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y(t) = \frac{\phi(0)}{\Gamma(\mu)}$.
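A hedged numerical illustration of Lemma 4 and Remark 1 (our own check, not part of the paper): for $y(t)=c\,(g(t)-g(0))^{\mu-1}$ the weighted limit in (10) equals $c$, so the fractional integral of order $1-\mu$ should stay near $c\,\Gamma(\mu)$ for every $t>0$. The choice $g(t)=e^t$ and the use of mpmath's tanh–sinh quadrature (which copes with the endpoint singularities) are assumptions of the sketch.

```python
import mpmath as mp

mp.mp.dps = 30
mu, c = mp.mpf('0.4'), mp.mpf(2)
g  = lambda s: mp.e**s                      # strictly increasing, so condition (H) holds
dg = lambda s: mp.e**s                      # g'(s)
y  = lambda s: c * (g(s) - g(0))**(mu - 1)  # then (g(t)-g(0))**(1-mu) * y(t) = c for all t

def I_wrt_g(f, t, nu):
    """IwrtF of order nu of f at time t, computed by tanh-sinh quadrature."""
    integrand = lambda s: (g(t) - g(s))**(nu - 1) * dg(s) * f(s)
    return mp.quad(integrand, [0, t], maxdegree=8) / mp.gamma(nu)

for t in (mp.mpf('0.001'), mp.mpf('0.1'), mp.mpf(1)):
    # the second and third printed values should stay close for every t
    print(t, I_wrt_g(y, t, 1 - mu), c * mp.gamma(mu))
```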
We will use the quadratic Lyapunov function and the modified Razumikhin method to obtain a bound on the quadratic norm of the solution.
Theorem 1.
Suppose that condition (H) is satisfied and, for any solution $y\in C^{\mu}_g((0,\infty),\mathbb{R}^n)$ of (8) and (9), the following conditions hold:
(a) 
$y^T y \in C^{\mu}_g((0,\infty),[0,\infty))$.
(b) 
there exists a function $G\in C([0,\infty),[0,\infty))$ such that the inequality
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, y^T(t)\,y(t) < G\big(\|\phi\|_{0,2}\big) $$
holds;
(c) 
for any point $t>0$ such that
$$ \big(g(t+\Theta)-g(0)\big)^{1-\mu}\, y^T(t+\Theta)\,y(t+\Theta) < \big(g(t)-g(0)\big)^{1-\mu}\, y^T(t)\,y(t) \quad \text{for } \Theta\in\big(-\min\{t,\tau\},0\big), $$
the inequality
$$ y(t)\cdot F(t,y(t),y_t) < 0 $$
holds, where $u\cdot v$ is the dot product of the vectors $u,v\in\mathbb{R}^n$.
Then, any solution of (8) and (9) satisfies the inequality
$$ \|y(t)\|_2 < G\big(\|\phi\|_{0,2}\big)\,\big(g(t)-g(0)\big)^{\mu-1} \quad \text{for } t>0. $$
Proof. 
Let y ( t ) = y ( t ; ϕ ) be a solution of (8) and (9).
From condition (b), there exists a number $\delta>0$ such that
$$ y^T(t)\,y(t) < \big(g(t)-g(0)\big)^{\mu-1}\, G\big(\|\phi\|_{0,2}\big) \quad \text{for } t\in(0,\delta). $$
Consider the function $\Lambda(t) = G\big(\|\phi\|_{0,2}\big)\,\big(g(t)-g(0)\big)^{\mu-1}$, $t\in(0,\infty)$. From $\mu\in(0,1)$ and condition (H), the function $\Lambda(\cdot)$ is a decreasing function on $(0,\infty)$. From Definition 1, we have ${}_0^{RL}D^{\mu}_{g(t)}\big(g(t)-g(0)\big)^{\mu-1} = 0$ and, therefore, ${}_0^{RL}D^{\mu}_{g(t)}\,\Lambda(t) = 0$.
We now prove that
$$ y^T(t)\,y(t) < \Lambda(t), \quad t>0. $$
Note that inequality (17) holds for $t\in(0,\delta)$ according to inequality (16). Assume inequality (17) is not true for all $t>0$. Therefore, there exists a point $\xi\ge\delta>0$ such that
$$ y^T(\xi)\,y(\xi) = \Lambda(\xi) \quad \text{and} \quad y^T(t)\,y(t) < \Lambda(t), \ t\in(0,\xi). $$
According to Lemma 3 with $T=\xi$ and $\omega(t) = y^T(t)\,y(t) - \Lambda(t)$, the inequality
$$ {}_0^{RL}D^{\mu}_{g(t)}\Big(y^T(t)\,y(t) - \Lambda(t)\Big)\Big|_{t=\xi} \ge 0 $$
holds, and therefore,
$$ {}_0^{RL}D^{\mu}_{g(t)}\, y^T(t)\,y(t)\Big|_{t=\xi} = {}_0^{RL}D^{\mu}_{g(t)}\Big(y^T(t)\,y(t) - \Lambda(t)\Big)\Big|_{t=\xi} \ge 0. $$
Apply Lemma 1 to inequality (19), use (8), and obtain
$$ y(t)\cdot F(t,y(t),y_t)\big|_{t=\xi} \ge 0. $$
Case 1. Let $\xi>\tau$. Then $\min\{\xi,\tau\}=\tau$ and, for $\Theta\in[-\tau,0)$, we have $\xi+\Theta\in(0,\xi)$. From the definition of the function $\Lambda(\cdot)$ and Equation (18), it follows that
$$ \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\, y^T(\xi+\Theta)\,y(\xi+\Theta) < \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\,\Lambda(\xi+\Theta) = G\big(\|\phi\|_{0,2}\big) = \big(g(\xi)-g(0)\big)^{1-\mu}\,\Lambda(\xi) = \big(g(\xi)-g(0)\big)^{1-\mu}\, y^T(\xi)\,y(\xi), \quad \Theta\in[-\tau,0). $$
According to condition (c) for t = ξ , the inequality
y ( ξ ) · F ( ξ , y ( ξ ) , y ξ ) < 0
holds.
The inequality (22) contradicts (20).
Case 2. Let $\xi\in(0,\tau]$. Then $\min\{\xi,\tau\}=\xi$ and, for $\Theta\in(-\xi,0)$, we have $\xi+\Theta\in(0,\xi)$.
From the choice of the point $\xi$, the definition of the function $\Lambda(\cdot)$, and (18), we get the equality
$$ \big(g(\xi)-g(0)\big)^{1-\mu}\, \|y(\xi)\|_2 = G\big(\|\phi\|_{0,2}\big), $$
and the inequalities
$$ \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\, y^T(\xi+\Theta)\,y(\xi+\Theta) < \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\,\Lambda(\xi+\Theta) = G\big(\|\phi\|_{0,2}\big) = \big(g(\xi)-g(0)\big)^{1-\mu}\,\Lambda(\xi) = \big(g(\xi)-g(0)\big)^{1-\mu}\, y^T(\xi)\,y(\xi) \quad \text{for } \Theta\in(-\xi,0). $$
Therefore, inequality (13) holds for t = ξ .
According to condition (c) for t = ξ the inequality (22) holds. The obtained contradiction proves inequality (17).
Inequality (17) proves the claim of Theorem 1. □
Remark 2.
Note that if the inequality
$$ y^T(t+\Theta)\,y(t+\Theta) < y^T(t)\,y(t), \quad \Theta\in\big(-\min\{t,\tau\},0\big), $$
holds, then inequality (13) is satisfied. Therefore, inequality (13) in Theorem 1 could be replaced by (24).
Remark 3.
The bound (15) is not satisfied for the initial time point zero.
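As a small numerical illustration of Remark 3 (the numbers below are assumptions, not taken from the paper), the bound of Theorem 1 is the envelope $\Lambda(t)=G(\|\phi\|_{0,2})\,(g(t)-g(0))^{\mu-1}$: it is unbounded as $t\to 0^+$, and its long-time behavior is governed entirely by the choice of $g$.

```python
# Illustration only: evaluate the Theorem 1 envelope for one admissible g.
import numpy as np

mu, G_phi = 0.3, 1.0                 # assumed order and assumed value of G(||phi||_{0,2})
g = lambda t: t**2 + 1.0             # one admissible g, increasing without bound

for t in (1e-6, 1e-3, 0.1, 1.0, 10.0, 100.0):
    Lam = G_phi * (g(t) - g(0.0))**(mu - 1.0)
    print(f"t = {t:10.2e}   bound on ||y(t)||_2 = {Lam:.4e}")
# the envelope blows up near t = 0 (Remark 3) and decays like t**(2*(mu-1)) as t grows
```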
We will use the Lyapunov function with absolute values and the modified Razumikhin method to obtain a bound on the absolute values norm of the solution.
Theorem 2.
Suppose that condition (H) is satisfied and, for any solution $y\in C^{\mu}_g((0,\infty),\mathbb{R}^n)$ of (8) and (9), the following conditions hold:
(a) 
$\|y(\cdot)\|_1 \in C^{\mu}_g((0,\infty),[0,\infty))$.
(b) 
there exists a function $G\in C([0,\infty),[0,\infty))$ such that the inequality
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{1-\mu}\, \|y(t)\|_1 < G\big(\|\phi\|_{0,1}\big) $$
holds;
(c) 
for any point $t>0$ such that
$$ \big(g(t+\Theta)-g(0)\big)^{1-\mu}\, \|y(t+\Theta)\|_1 < \big(g(t)-g(0)\big)^{1-\mu}\, \|y(t)\|_1 \quad \text{for } \Theta\in\big(-\min\{t,\tau\},0\big), $$
the inequality
$$ \operatorname{sign} y(t)\cdot F(t,y(t),y_t) < 0 $$
holds, where $\operatorname{sign} u = (\operatorname{sign} u_1,\operatorname{sign} u_2,\dots,\operatorname{sign} u_n)$ and $u\cdot v$ is the dot product of the vectors $u,v\in\mathbb{R}^n$.
Then, any solution of (8) and (9) satisfies the inequality
$$ \|y(t)\|_1 < G\big(\|\phi\|_{0,1}\big)\,\big(g(t)-g(0)\big)^{\mu-1} \quad \text{for } t>0. $$
Proof. 
Let y ( t ) = y ( t ; ϕ ) be a solution of (8) and (9).
From condition (b), there exists a number $\delta>0$ such that
$$ \|y(t)\|_1 < \big(g(t)-g(0)\big)^{\mu-1}\, G\big(\|\phi\|_{0,1}\big) \quad \text{for } t\in(0,\delta). $$
Similarly to the proof of Theorem 1, we define the function
$$ \Lambda(t) = G\big(\|\phi\|_{0,1}\big)\,\big(g(t)-g(0)\big)^{\mu-1} \quad \text{for } t\in(0,\infty). $$
We now prove that
$$ \|y(t)\|_1 < \Lambda(t) \quad \text{for } t>0. $$
Note that inequality (29) holds for $t\in(0,\delta)$ according to inequality (28). Assume inequality (29) is not true for all $t>0$. Therefore, there exists a point $\xi\ge\delta>0$ such that
$$ \|y(\xi)\|_1 = \Lambda(\xi) \quad \text{and} \quad \|y(t)\|_1 < \Lambda(t) \ \text{for } t\in(0,\xi). $$
From the choice of the point $\xi$, the definition of the function $\Lambda(\cdot)$, and (30), we have the equality
$$ \big(g(\xi)-g(0)\big)^{1-\mu}\, \|y(\xi)\|_1 = G\big(\|\phi\|_{0,1}\big), $$
and also the inequality (consider $\xi>\tau$ and $\xi\in(0,\tau]$ separately)
$$ \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\, \|y(\xi+\Theta)\|_1 < \big(g(\xi+\Theta)-g(0)\big)^{1-\mu}\,\Lambda(\xi+\Theta) = G\big(\|\phi\|_{0,1}\big) = \big(g(\xi)-g(0)\big)^{1-\mu}\,\Lambda(\xi) = \big(g(\xi)-g(0)\big)^{1-\mu}\, \|y(\xi)\|_1 \quad \text{for } \Theta\in\big(-\min\{\xi,\tau\},0\big). $$
Therefore, inequality (25) holds for $t=\xi$.
According to condition (c) for $t=\xi$, the inequality
$$ \operatorname{sign} y(\xi)\cdot F(\xi,y(\xi),y_\xi) < 0 $$
holds.
If we assume $y(\xi)=0$, i.e., $y_k(\xi)=0$, $k=1,2,\dots,n$, then $\|y(\xi)\|_1=0$, and substituting into inequality (25) for $t=\xi$ we obtain a contradiction (its left-hand side is nonnegative). Therefore, $y(\xi)\neq 0$.
According to Lemma 3 with $T=\xi$ and $\omega(t)=\|y(t)\|_1 - \Lambda(t)$, the inequality
$$ {}_0^{RL}D^{\mu}_{g(t)}\Big(\|y(t)\|_1 - \Lambda(t)\Big)\Big|_{t=\xi} \ge 0 $$
holds, and therefore,
$$ {}_0^{RL}D^{\mu}_{g(t)}\sum_{k=1}^n |y_k(t)|\,\Big|_{t=\xi} = {}_0^{RL}D^{\mu}_{g(t)}\, \|y(t)\|_1\Big|_{t=\xi} = {}_0^{RL}D^{\mu}_{g(t)}\Big(\|y(t)\|_1 - \Lambda(t)\Big)\Big|_{t=\xi} \ge 0. $$
Apply Lemma 2 for $t=\xi$, $v\equiv y$, use inequality (19) and Equation (8) for $t=\xi$, and obtain
$$ 0 \le 0.5\sum_{k=1}^n {}_0^{RL}D^{\mu}_{g(t)}\, |y_k(t)|\,\Big|_{t=\xi} \le \sum_{k=1}^n \operatorname{sign}\big(y_k(t)\big)\; {}_0^{RL}D^{\mu}_{g(t)}\, y_k(t)\Big|_{t=\xi} = \sum_{k=1}^n \operatorname{sign}\big(y_k(t)\big)\, F_k(t,y(t),y_t)\Big|_{t=\xi} = \operatorname{sign} y(t)\cdot F(t,y(t),y_t)\Big|_{t=\xi}. $$
Inequality (33) contradicts (31).
Inequality (29) proves the claim of Theorem 2. □
Remark 4.
If, for a point $t>0$ and all $k=1,2,\dots,n$, the inequalities
$$ \big(g(t+\Theta)-g(0)\big)^{1-\mu}\, |y_k(t+\Theta)| < \big(g(t)-g(0)\big)^{1-\mu}\, |y_k(t)|, \quad \Theta\in\big(-\min\{t,\tau\},0\big), $$
hold, then inequality (25) is satisfied, where $y(t)=(y_1(t),y_2(t),\dots,y_n(t))$.

4. Asymptotic Properties of the Delay CG-Fractional Model with DwrtF

We will apply the theoretical results obtained in Section 3 to a particular neural network model with a variable delay in which the dynamics of the neurons is described by DwrtF. Accordingly, we will set up appropriate initial conditions for the model.
Consider the general model of the Cohen–Grossberg neural network with delays and dynamics described by DwrtF (CGFM)
$$ {}_0^{RL}D^{\mu}_{g(t)}\, y_i(t) = \Upsilon_i\big(y_i(t)\big)\Big[-\Omega_i\big(y_i(t)\big) + \sum_{k=1}^n \alpha_{i,k}(t)\,F_k\big(y_k(t)\big) + \sum_{k=1}^n \beta_{i,k}(t)\,P_k\big(y_{k,t}\big)\Big], \quad t>0, \ i=1,2,\dots,n, $$
where $y_i(t)$, $i=1,2,\dots,n$, are the state variables of the $i$-th neuron at time $t>0$, $\Upsilon_i(x)$ are the amplification functions, $\Omega_i(x)$ are the behaved functions, $\alpha_{i,j}(t)$ and $\beta_{i,j}(t)$ represent the strengths of the neuron interconnections at time $t$ (assuming they are time-varying), $n$ is the number of units in the neural network, $\mu\in(0,1)$, $F_j(u)$ and $P_j(u)$ denote the activation functions of the $j$-th neuron, and $y_{k,t}(\Theta)=y_k(t+\Theta)$, $\Theta\in[-\tau,0]$, $k=1,2,\dots,n$.
The initial time interval is [ τ , 0 ] . The applied DwrtF leads to a singularity of the solutions at the initial time 0. This requires an appropriate definition of the initial conditions associated with the model (34):
$$ {}_0I^{1-\mu}_{g(t)}\, y_i(t)\big|_{t=0^+} = \phi_i(0), \qquad y_i(t) = \phi_i(t) \ \ \text{for } t\in[-\tau,0), $$
where $\phi_i\in C([-\tau,0],\mathbb{R})$, $i=1,2,\dots,n$.
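For readers who prefer code to formulas, the right-hand side of (34) can be read as the following sketch (the naming and the assumption of a single constant delay are ours; the time stepping of the RL derivative with respect to $g$ is a separate issue not addressed here):

```python
# A concrete reading of the right-hand side of the CGFM (34); an illustrative
# sketch, not the authors' implementation.
import numpy as np

def cgfm_rhs(t, y, y_delayed, Upsilon, Omega, F, P, alpha, beta):
    """Value of Upsilon_i(y_i) * [ -Omega_i(y_i) + sum_k alpha_ik(t) F_k(y_k)
                                   + sum_k beta_ik(t) P_k(y_k(t - tau)) ] for every i.
    y, y_delayed : current state y(t) and delayed state y(t - tau), length n
    Upsilon, Omega, F, P : lists of scalar functions, one per neuron
    alpha, beta : callables returning the n x n interconnection matrices at time t."""
    n = len(y)
    Fy = np.array([F[k](y[k]) for k in range(n)])
    Py = np.array([P[k](y_delayed[k]) for k in range(n)])
    out = np.empty(n)
    for i in range(n):
        out[i] = Upsilon[i](y[i]) * (-Omega[i](y[i])
                                     + alpha(t)[i] @ Fy
                                     + beta(t)[i] @ Py)
    return out
```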
We will introduce the following assumptions:
(A1)
The functions $\Upsilon_i\in C(\mathbb{R},[\eta_i,\kappa_i])$, where $\eta_i,\kappa_i$, $i=1,2,\dots,n$, are positive constants.
(A2)
The functions $F_i,P_i\in C(\mathbb{R},\mathbb{R})$ with $F_i(0)=0$, $P_i(0)=0$, and there exist positive constants $\rho_i,\zeta_i$, $i=1,2,\dots,n$, such that
$$ |F_i(x)-F_i(y)| \le \rho_i|x-y|, \quad |P_i(x)-P_i(y)| \le \zeta_i|x-y|, \quad x,y\in\mathbb{R}. $$
(A3)
The functions $\alpha_{i,j},\beta_{i,j}\in C([0,\infty),\mathbb{R})$, $i,j=1,2,\dots,n$.
(A4)
The functions $\Omega_i\in C(\mathbb{R},\mathbb{R})$ with $\Omega_i(0)=0$, and there exist positive constants $\xi_i$ such that $x\,\Omega_i(x) \ge \xi_i x^2$, $x\in\mathbb{R}$.
(A5)
The functions $\Omega_i\in C(\mathbb{R},\mathbb{R})$ with $\Omega_i(0)=0$, and there exist positive constants $\xi_i$ such that $(\operatorname{sign} x)\,\Omega_i(x) \ge \xi_i|x|$, $x\in\mathbb{R}$.
The goal is to study asymptotic properties of the state variables of the model (34) with initial conditions (35). According to theoretical results in Section 3, we will obtain two types of bounds on the norms of the state variables. These bounds depend on the properties of the state variables and on which norm could be applied to them.
In the case when the square of the solution of the CGFM has a DwrtF, i.e., $y^T y\in C^{\mu}_g((0,\infty),\mathbb{R})$, we obtain the following result:
Theorem 3.
Suppose the assumptions (H) and (A1)–(A4) are satisfied and, for all $k=1,2,\dots,n$ and $t\ge 0$, the inequalities
$$ \kappa_k\sum_{j=1}^n\Big(\rho_j|\alpha_{k,j}(t)| + \zeta_j|\beta_{k,j}(t)|\Big) + \rho_k\sum_{j=1}^n \kappa_j|\alpha_{j,k}(t)| + \zeta_k\sum_{j=1}^n \kappa_j|\beta_{j,k}(t)| < 2\eta_k\xi_k $$
hold.
Then, for any solution $y\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$ of the CGFM (34) and (35) such that $y^T y\in C^{\mu}_g([0,\infty),\mathbb{R})$ and which satisfies condition (b) of Theorem 1, the inequality
$$ \|y(t)\|_2 < G\big(\|\phi\|_{0,2}\big)\,\big(g(t)-g(0)\big)^{\mu-1} \quad \text{for } t\in(0,\infty) $$
holds.
Proof. 
Let the function $y(\cdot)\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$ be a solution of the CGFM (34) and (35) such that $y^T y\in C^{\mu}_g([0,\infty),\mathbb{R})$ and which satisfies condition (b) of Theorem 1.
Define the function $F$, $F=(F_1,F_2,\dots,F_n)$, by
$$ F_k(t,y(t),y_t) = \Upsilon_k\big(y_k(t)\big)\Big[-\Omega_k\big(y_k(t)\big) + \sum_{j=1}^n \alpha_{k,j}(t)\,F_j\big(y_j(t)\big) + \sum_{j=1}^n \beta_{k,j}(t)\,P_j\big(y_{j,t}\big)\Big], \quad t\ge 0. $$
Let the point $t>0$ be such that
$$ \big(g(t+\Theta)-g(0)\big)^{1-\mu}\, y^T(t+\Theta)\,y(t+\Theta) < \big(g(t)-g(0)\big)^{1-\mu}\, y^T(t)\,y(t) \quad \text{for } \Theta\in\big(-\min\{t,\tau\},0\big). $$
Then we obtain
$$ \begin{aligned} y(t)\cdot F(t,y(t),y_t) &= \sum_{k=1}^n y_k(t)\,F_k(t,y(t),y_t) \\ &= \sum_{k=1}^n y_k(t)\,\Upsilon_k\big(y_k(t)\big)\Big[-\Omega_k\big(y_k(t)\big) + \sum_{j=1}^n \alpha_{k,j}(t)\,F_j\big(y_j(t)\big) + \sum_{j=1}^n \beta_{k,j}(t)\,P_j\big(y_{j,t}\big)\Big] \\ &\le -\sum_{k=1}^n \eta_k\xi_k\, y_k^2(t) + \sum_{k=1}^n |y_k(t)|\,\kappa_k\sum_{j=1}^n \rho_j|\alpha_{k,j}(t)|\,|y_j(t)| + \sum_{k=1}^n |y_k(t)|\,\kappa_k\sum_{j=1}^n \zeta_j|\beta_{k,j}(t)|\,|y_j(t)| \\ &\le -\sum_{k=1}^n \eta_k\xi_k\, y_k^2(t) + 0.5\sum_{k=1}^n \kappa_k\sum_{j=1}^n\Big(\rho_j|\alpha_{k,j}(t)| + \zeta_j|\beta_{k,j}(t)|\Big)\,y_k^2(t) + 0.5\sum_{k=1}^n \kappa_k\sum_{j=1}^n\Big(\rho_j|\alpha_{k,j}(t)| + \zeta_j|\beta_{k,j}(t)|\Big)\,y_j^2(t) \\ &= 0.5\sum_{k=1}^n\Big(-2\eta_k\xi_k + \kappa_k\sum_{j=1}^n\big(\rho_j|\alpha_{k,j}(t)| + \zeta_j|\beta_{k,j}(t)|\big) + \rho_k\sum_{j=1}^n \kappa_j|\alpha_{j,k}(t)| + \zeta_k\sum_{j=1}^n \kappa_j|\beta_{j,k}(t)|\Big)\,y_k^2(t). \end{aligned} $$
From inequalities (36) and Theorem 1, we have the claim of Theorem 3. □
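The sufficient condition (36) only involves bounds on the interconnection strengths and the constants of (A1)–(A4), so it can be checked mechanically. The following helper is a sketch under our own assumptions (the function name, the replacement of $|\alpha_{k,j}(t)|$, $|\beta_{k,j}(t)|$ by constant upper bounds, and the toy two-neuron numbers are not from the paper):

```python
# Hedged helper that checks condition (36) of Theorem 3 when the interconnection
# strengths are replaced by constant upper bounds A[i][j] >= sup_t |alpha_{i,j}(t)|
# and B[i][j] >= sup_t |beta_{i,j}(t)|.
import numpy as np

def check_condition_36(eta, kappa, xi, rho, zeta, A, B):
    """Return (all_ok, lhs) where lhs[k] =
    kappa_k*sum_j(rho_j*A[k,j] + zeta_j*B[k,j])
      + rho_k*sum_j kappa_j*A[j,k] + zeta_k*sum_j kappa_j*B[j,k],
    and all_ok is True when lhs[k] < 2*eta_k*xi_k for every k."""
    eta, kappa, xi, rho, zeta = map(np.asarray, (eta, kappa, xi, rho, zeta))
    A, B = np.asarray(A, float), np.asarray(B, float)
    lhs = (kappa * (A @ rho + B @ zeta)      # kappa_k * sum_j (rho_j A_kj + zeta_j B_kj)
           + rho * (A.T @ kappa)             # rho_k  * sum_j kappa_j A_jk
           + zeta * (B.T @ kappa))           # zeta_k * sum_j kappa_j B_jk
    return np.all(lhs < 2.0 * eta * xi), lhs

# a made-up two-neuron illustration (all numbers are assumptions)
ok, lhs = check_condition_36(eta=[1.0, 1.2], kappa=[1.5, 1.4], xi=[1.0, 1.0],
                             rho=[0.5, 0.5], zeta=[0.5, 0.5],
                             A=[[0.1, 0.05], [0.0, 0.1]], B=[[0.05, 0.0], [0.1, 0.05]])
print(ok, lhs)   # True here, so the bound of Theorem 3 applies to this toy data
```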
In the case when the absolute values of the components of the solution of the CGFM have a DwrtF, we obtain the following result:
Theorem 4.
Suppose the assumptions (H), (A1)–(A3), and (A5) are satisfied and, for all $k=1,2,\dots,n$ and $t\ge 0$, the inequalities
$$ \rho_k\sum_{j=1}^n \kappa_j|\alpha_{j,k}(t)| + \zeta_k\sum_{j=1}^n \kappa_j|\beta_{j,k}(t)| < \eta_k\xi_k $$
hold.
Then, for any solution $y(\cdot)\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$, $y=(y_1,y_2,\dots,y_n)$, of the CGFM (34) and (35) such that $|y_k(\cdot)|\in C^{\mu}_g([0,\infty),\mathbb{R})$, $k=1,2,\dots,n$, and which satisfies condition (b) of Theorem 2, the inequality
$$ \|y(t)\|_1 < G\big(\|\phi\|_{0,1}\big)\,\big(g(t)-g(0)\big)^{\mu-1} \quad \text{for } t\in(0,\infty) $$
holds.
Proof. 
Let the function $y(\cdot)\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$ be a solution of the CGFM (34) and (35) such that $|y_k(\cdot)|\in C^{\mu}_g([0,\infty),\mathbb{R})$, $k=1,2,\dots,n$, and which satisfies condition (b) of Theorem 2.
Define the function $F$, $F=(F_1,F_2,\dots,F_n)$, by (37).
Let the point $t>0$ be such that
$$ \big(g(t+\Theta)-g(0)\big)^{1-\mu}\, \|y(t+\Theta)\|_1 < \big(g(t)-g(0)\big)^{1-\mu}\, \|y(t)\|_1 \quad \text{for } \Theta\in\big(-\min\{t,\tau\},0\big). $$
Then, by assumption (A5), we obtain
$$ \begin{aligned} \operatorname{sign} y(t)\cdot F(t,y(t),y_t) &= \sum_{k=1}^n \operatorname{sign} y_k(t)\,F_k(t,y(t),y_t) \\ &= \sum_{k=1}^n \operatorname{sign} y_k(t)\,\Upsilon_k\big(y_k(t)\big)\Big[-\Omega_k\big(y_k(t)\big) + \sum_{j=1}^n \alpha_{k,j}(t)\,F_j\big(y_j(t)\big) + \sum_{j=1}^n \beta_{k,j}(t)\,P_j\big(y_{j,t}\big)\Big] \\ &\le -\sum_{k=1}^n \eta_k\xi_k\,|y_k(t)| + \sum_{k=1}^n \kappa_k\sum_{j=1}^n\Big(\rho_j|\alpha_{k,j}(t)| + \zeta_j|\beta_{k,j}(t)|\Big)\,|y_j(t)| \\ &= -\sum_{k=1}^n \eta_k\xi_k\,|y_k(t)| + \sum_{k=1}^n\sum_{j=1}^n \kappa_j\Big(\rho_k|\alpha_{j,k}(t)| + \zeta_k|\beta_{j,k}(t)|\Big)\,|y_k(t)| \\ &= \sum_{k=1}^n\Big(-\eta_k\xi_k + \rho_k\sum_{j=1}^n \kappa_j|\alpha_{j,k}(t)| + \zeta_k\sum_{j=1}^n \kappa_j|\beta_{j,k}(t)|\Big)\,|y_k(t)|. \end{aligned} $$
From inequalities (39) and (40), inequality (26) holds. According to Theorem 2, we obtain the claim of Theorem 4. □
Remark 5.
Note that all sufficient conditions given in Theorem 1 do not depend on the fractional order. At the same time, the bound of the norm of the solution depends significantly on the function applied in the definition of the fractional derivative. This function has an influence on the behavior of the solution.
Corollary 1.
Suppose the conditions of Theorem 3 (Theorem 4) are fulfilled and the function $g$ in the DwrtF (2) satisfies $\lim_{t\to\infty} g(t)=\infty$. Then any solution $y\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$ of (34) and (35) satisfies $\lim_{t\to\infty} y(t)=0$.
Remark 6.
Since in the case g ( t ) = t , DwrtF defined by (2) reduces to the Riemann–Liouville fractional derivative, in this case, Theorems 2 and 3 give us sufficient conditions for stability of the solution of the reduced CG model (see, for example, [25,26,27]).
Corollary 2.
Suppose the conditions of Theorem 3 (Theorem 4) are fulfilled and the function $g$ in the DwrtF (2) satisfies $\lim_{t\to\infty} g(t) = C < \infty$.
Then any solution $y\in C^{\mu}_g([0,\infty),\mathbb{R}^n)$ of the CG model (34) and (35) does not approach zero.
Remark 7.
If $g(t)=\int_0^t e^{-x^2}\,dx$ in the DwrtF (2), then $\lim_{t\to\infty}\int_0^t e^{-x^2}\,dx = 0.5\sqrt{\pi}$, and the solution of the corresponding fractional CG model does not approach zero.

5. Application

Consider the following CG model with three neurons and delays, with the state dynamics modeled by DwrtF:
$$ {}_0^{RL}D^{0.3}_{g(t)}\, u_i(t) = \Upsilon_i(t)\Big[-\Omega_i\big(u_i(t)\big) + \sum_{k=1}^3 \alpha_{i,k}(t)\,F_k\big(u_k(t)\big) + \sum_{k=1}^3 \beta_{i,k}(t)\,P_k\big(u_k(t-1)\big)\Big] \quad \text{for } t>0, \ i=1,2,3, $$
with $\mu=0.3$, $\xi(t)=1$, $\tau=1$, $\Upsilon_1(t)=2$, $\Upsilon_2(t)=0.5(1+e^{-t})$, $\Upsilon_3(t)=1+0.05\,e^{-t}$, and $\Omega_i(u)=u\,e^{|u|}$, $u\in\mathbb{R}$, $i=1,2,3$.
Then $\eta_1=\kappa_1=2$, $\kappa_2=0.5$, $\eta_2=1$, $\kappa_3=1$, $\eta_3=1.05$, and $(\operatorname{sign} u)\,\Omega_i(u)\ge|u|$ for $u\in\mathbb{R}$, i.e., $\xi_i=1$, $i=1,2,3$.
Let the activation functions $F_1(x)=P_1(x)=\frac{x}{1+e^{-x}}$ be the Swish function with constants $\rho_1=\zeta_1=1.1$, let $F_2(x)=P_2(x)=\frac{e^x-e^{-x}}{e^x+e^{-x}}$ be the tanh function with constants $\rho_2=\zeta_2=1$, and let $F_3(x)=P_3(x)=0.5\big(|x+1|-|x-1|\big)$ with $\rho_3=\zeta_3=1$.
Let the matrices of the strengths of the interconnections at time $t$ be $A(t)=\{\alpha_{i,k}(t)\}_{i,k=1,2,3}$ and $B(t)=\{\beta_{i,k}(t)\}_{i,k=1,2,3}$. They are given by
$$ A(t) = \begin{pmatrix} 0.01 & 0 & 0.02 \\ 0.01\,e^{-t} & 0.01 & 0 \\ 0 & 0.01\,\dfrac{t}{1+t} & 0 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 0.01 & 0.1 & 0.1 \\ 0.1\,e^{-t} & 0 & 0.1\,e^{-2t} \\ 0 & 0.001\,\sin(t) & 0 \end{pmatrix}, $$
with
$$ \rho_1\sum_{j=1}^3 \kappa_j|\alpha_{j,1}(t)| + \zeta_1\sum_{j=1}^3 \kappa_j|\beta_{j,1}(t)| \le 1.1\,(2\cdot 0.01 + 0.5\cdot 0.01 + 1\cdot 0) + 1.1\,(2\cdot 0.01 + 0.5\cdot 0.1 + 1\cdot 0) < \eta_1\xi_1 = 2\cdot 1, $$
$$ \rho_2\sum_{j=1}^3 \kappa_j|\alpha_{j,2}(t)| + \zeta_2\sum_{j=1}^3 \kappa_j|\beta_{j,2}(t)| \le 1\cdot(2\cdot 0 + 0.5\cdot 0.01 + 1\cdot 0.01) + 1\cdot(2\cdot 0.1 + 0.5\cdot 0 + 1\cdot 0.001) < \eta_2\xi_2 = 1\cdot 1, $$
$$ \rho_3\sum_{j=1}^3 \kappa_j|\alpha_{j,3}(t)| + \zeta_3\sum_{j=1}^3 \kappa_j|\beta_{j,3}(t)| \le 1\cdot(2\cdot 0.02 + 0.5\cdot 0 + 1\cdot 0) + 1\cdot(2\cdot 0.1 + 0.5\cdot 0.1 + 1\cdot 0) < \eta_3\xi_3 = 1.05\cdot 1. $$
Therefore, for all i = 1 , 2 , 3 , inequalities (39) hold.
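The verification above uses the suprema of the time-varying entries. As a hedged cross-check (our own sketch, not the authors' code), the left-hand side of (39) can also be evaluated directly on a time grid:

```python
# Numerical cross-check of inequalities (39) for the example matrices A(t), B(t).
import numpy as np

kappa = np.array([2.0, 0.5, 1.0])
eta   = np.array([2.0, 1.0, 1.05])
xi    = np.array([1.0, 1.0, 1.0])
rho   = np.array([1.1, 1.0, 1.0])
zeta  = np.array([1.1, 1.0, 1.0])

def A(t):
    return np.array([[0.01,            0.0,             0.02],
                     [0.01*np.exp(-t), 0.01,            0.0 ],
                     [0.0,             0.01*t/(1.0+t),  0.0 ]])

def B(t):
    return np.array([[0.01,            0.1,             0.1 ],
                     [0.1*np.exp(-t),  0.0,             0.1*np.exp(-2*t)],
                     [0.0,             0.001*np.sin(t), 0.0 ]])

worst = np.zeros(3)
for t in np.linspace(0.0, 50.0, 2001):
    # k-th entry: rho_k * sum_j kappa_j|alpha_jk(t)| + zeta_k * sum_j kappa_j|beta_jk(t)|
    lhs = rho * (np.abs(A(t)).T @ kappa) + zeta * (np.abs(B(t)).T @ kappa)
    worst = np.maximum(worst, lhs)

print("sup_t LHS of (39):", worst)      # approximately [0.1045, 0.216, 0.29]
print("eta * xi          :", eta * xi)  # [2.0, 1.0, 1.05], so (39) holds
```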
Let the function $y\in C^{0.3}_g([0,\infty),\mathbb{R}^3)$, $y=(y_1,y_2,y_3)$, be a solution of the CGFM (41) and (35) (with $n=3$, $\tau=1$, and initial function $\phi\not\equiv 0$) such that $|y_k|\in C^{0.3}_g([0,\infty),\mathbb{R})$, $k=1,2,3$.
According to Lemma 4 and the first equality in the initial condition (35), we get
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{0.7}\,|y_k(t)| = \frac{|\phi_k(0)|}{\Gamma(0.3)}, \quad k=1,2,3, $$
and
$$ \lim_{t\to 0^+}\big(g(t)-g(0)\big)^{0.7}\,\|y(t)\|_1 = \sum_{k=1}^3 \frac{|\phi_k(0)|}{\Gamma(0.3)} \le \frac{\|\phi\|_{0,1}}{\Gamma(0.3)} < \|\phi\|_{0,1}, $$
i.e., condition (b) of Theorem 2 is satisfied with $G(u)\equiv u$, $u\in[0,\infty)$.
According to Theorem 4, the inequality
$$ \|y(t)\|_1 = |y_1(t)|+|y_2(t)|+|y_3(t)| < \max_{s\in[-1,0]}\Big(|\phi_1(s)|+|\phi_2(s)|+|\phi_3(s)|\Big)\,\big(g(t)-g(0)\big)^{0.3-1} \quad \text{for } t>0 $$
holds.
In the case $g(t)=t^2$, the solution of (41) and (35) satisfies $\lim_{t\to\infty}\|y(t)\|_1 = 0$.
In the case $g(t)=\int_0^t e^{-x^2}\,dx$, according to Remark 7, the solution of (41) and (35) satisfies $\limsup_{t\to\infty}\|y(t)\|_1 \le \max_{s\in[-1,0]}\big(|\phi_1(s)|+|\phi_2(s)|+|\phi_3(s)|\big)\,\big(0.5\sqrt{\pi}\big)^{0.3-1}$, and this bound is zero only in the case of the zero initial function.
Therefore, the choice of the function $g(\cdot)$ involved in the DwrtF has a huge influence on the asymptotic behavior of the solution of the CGFM.
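A short numerical illustration of the last two cases (the value used for $\|\phi\|_{0,1}$ is an assumption): evaluating the Theorem 4 envelope $\|\phi\|_{0,1}\,(g(t)-g(0))^{0.3-1}$ for the two choices of $g$ shows the different asymptotics.

```python
# Compare the decay of the Theorem 4 bound for the two choices of g in the example.
import math

phi_bound = 1.0                                              # assumed ||phi||_{0,1}
g_unbounded = lambda t: t**2
g_bounded   = lambda t: 0.5*math.sqrt(math.pi)*math.erf(t)   # = int_0^t exp(-x^2) dx

for t in (1.0, 5.0, 25.0, 125.0):
    b1 = phi_bound * (g_unbounded(t) - g_unbounded(0.0))**(0.3 - 1.0)
    b2 = phi_bound * (g_bounded(t)   - g_bounded(0.0))**(0.3 - 1.0)
    print(f"t = {t:7.1f}   bound (g = t^2): {b1:.3e}   bound (g = erf-type): {b2:.3e}")
# the first column of bounds tends to 0; the second tends to (0.5*sqrt(pi))**(-0.7) ~ 1.09
```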

6. Conclusions

The main aim of this paper is to study the asymptotic behavior of the state variables of the neurons in the CGFM in the case when the dynamics of the neurons is described by DwrtF. Additionally, to study the influence of past behavior on the present one, we introduce a delay. We apply two types of Lyapunov functions, the quadratic one and the absolute value one, to obtain bounds on the solutions and to study their asymptotic behavior. The presence of a delay requires an appropriate modification of the Razumikhin method for the DwrtF of the applied Lyapunov functions. We prove that the behavior of the state variables depends significantly on the function involved in the fractional derivative. It is proved that this function can significantly change the asymptotic behavior of the state variables. This gives wider opportunities for modeling the behavior of the neurons in neural networks by choosing an appropriate function in the DwrtF.

Author Contributions

Conceptualization, R.A., S.H. and D.O.; methodology, R.A., S.H. and D.O.; validation, R.A., S.H. and D.O.; formal analysis, R.A., S.H. and D.O.; investigation, R.A., S.H. and D.O.; writing—original draft preparation, R.A., S.H. and D.O.; writing—review and editing, R.A., S.H. and D.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Bulgarian National Science Fund under Project KP-06-N62/1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

For a better understanding of the paper, we present the list of all abbreviations used in this paper:
CG — Cohen–Grossberg neural network model;
RL — Riemann–Liouville;
DwrtF — Riemann–Liouville fractional derivative with respect to another function;
IwrtF — Riemann–Liouville fractional integral with respect to another function;
CGFM — Cohen–Grossberg neural network model with delays and dynamics described by the Riemann–Liouville fractional derivative with respect to another function.

References

  1. Xing, R.; Xiao, M.; Zhang, Y.; Qiu, J. Stability and Hopf bifurcation analysis of an (n+m)-neuron double-ring neural network model with multiple time delays. J. Syst. Sci. Complex. 2022, 35, 159–178. [Google Scholar] [CrossRef]
  2. Kao, Y.; Shao, C.; Chen, X. Stability of high-order delayed Markovian jumping reaction-diffusion HNNs with uncertain transition rates. Appl. Math. Comput. 2021, 389, 125559. [Google Scholar]
  3. Joshi, H.; Jha, B.K. 2D memory-based mathematical analysis for the combined impact of calcium influx and efflux on nerve cells. Comput. Math. Appl. 2023, 134, 33–44. [Google Scholar] [CrossRef]
  4. Song, Q.; Cao, J. Stability analysis of Cohen–Grossberg neural network with both time-varying and continuously distributed delays. J. Comput. Appl. Math. 2006, 197, 188–203. [Google Scholar] [CrossRef]
  5. Viera-Martin, E.; Gómez-Aguilar, J.F.; Solís-Pérez, J.E.; Hernández-Pérez, J.A.; Escobar-Jiménez, R.F. Artificial neural networks: A practical review of applications involving fractional calculus. Eur. Phys. J. Spec. Top. 2022, 231, 2059–2095. [Google Scholar] [CrossRef] [PubMed]
  6. Ke, Y.; Miao, C. Stability analysis of fractional-order Cohen–Grossberg neural networks with time delay. Int. J. Comput. Math. 2024, 92, 1102–1113. [Google Scholar] [CrossRef]
  7. Du, F.; Lu, J.-G. New results on finite-time stability of fractional-order Cohen–Grossberg neural networks with time delays. Asian J. Control 2022, 24, 2328–2337. [Google Scholar] [CrossRef]
  8. Stamova, I.; Sotirov, S.; Sotirova, E.; Stamov, G. Impulsive Fractional Cohen–Grossberg Neural Networks: Almost Periodicity Analysis. Fractal Fract. 2021, 5, 78. [Google Scholar] [CrossRef]
  9. Aravind, R.V.; Balasubramaniam, P. Stability criteria for memristor-based delayed fractional-order Cohen–Grossberg neural networks with uncertainties. J. Comput. Appl. Math. 2023, 420, 114764. [Google Scholar] [CrossRef]
  10. Agarwal, R.; Hristova, S. Impulsive Memristive Cohen–Grossberg Neural Networks Modeled by Short Term Generalized Proportional Caputo Fractional Derivative and Synchronization Analysis. Mathematics 2022, 10, 2355. [Google Scholar] [CrossRef]
  11. Li, H.-L.; Hu, C.; Zhang, L.; Jiang, H.; Cao, J. Complete and finite-time synchronization of fractional-order fuzzy neural networks via nonlinear feedback control. Fuzzy Sets Syst. 2022, 443, 50–69. [Google Scholar] [CrossRef]
  12. Kassim, M.D.; Tatar, N.E. General stability for a Cohen–Grossberg neural network system. Arab. J. Math. 2024, 13, 133–147. [Google Scholar] [CrossRef]
  13. Li, B.; Sun, Y. Stability analysis of Cohen–Grossberg neural networks with time-varying delay by flexible terminal interpolation method. AIMS Math. 2023, 8, 17744–17764. [Google Scholar] [CrossRef]
  14. Gu, Y.; Wang, H.; Yu, Y. Stability and synchronization for Riemann–Liouville fractional-order time-delayed inertial neural networks. Neurocomputing 2019, 340, 270–280. [Google Scholar] [CrossRef]
  15. Wu, X.; Liu, S.; Wang, Y. Stability analysis of Riemann–Liouville fractional-order neural networks with reaction-diffusion terms and mixed time-varying delays. Neurocomputing 2021, 431, 169–178. [Google Scholar] [CrossRef]
  16. Zhang, H.; Ye, M.; Ye, R.; Cao, J. Synchronization stability of Riemann–Liouville fractional delay-coupled complex neural networks. Phys. A Stat. Mech. Appl. 2018, 508, 155–165. [Google Scholar] [CrossRef]
  17. Agarwal, R.P.; Hristova, S.; O’Regan, D. Lyapunov functions and stability properties of fractional Cohen–Grossberg neural network models with delays. Fractal Fract. 2023, 7, 732. [Google Scholar] [CrossRef]
  18. Osler, T.J. Leibniz rule for fractional derivatives generalized and an application to infinite series. SIAM J. Appl. Math. 1970, 18, 658–674. [Google Scholar] [CrossRef]
  19. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: New York, NY, USA; London, UK, 1993. [Google Scholar]
  20. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  21. Fahad, H.M.; Fernandez, A. Operational calculus for the Riemann–Liouville fractional derivative with respect to a function and its applications. Fract. Calc. Appl. Anal. 2021, 24, 518–540. [Google Scholar] [CrossRef]
  22. Almeida, R. A Caputo fractional derivative of a function with respect to another function. Commun. Nonl. Sci. Numer. Simul. 2017, 44, 460–481. [Google Scholar] [CrossRef]
  23. Nieto, J.J.; Alghanmi, M.; Ahmad, B.; Alsaedi, A.; Alharbi, B. On fractional integrals and derivatives of a function with respect to another function. Fractals 2023, 31, 2340066. [Google Scholar] [CrossRef]
  24. Agarwal, R.P.; Hristova, S.; O’Regan, D. Inequalities for Riemann–Liouville-type fractional derivatives of convex Lyapunov functions and applications to stability theory. Mathematics 2023, 11, 3859. [Google Scholar] [CrossRef]
  25. Korkmaz, E.; Ozdemir, A.; Yildirim, K. Asymptotical stability of Riemann–Liouville nonlinear fractional neutral neural networks with time-varying delays. J. Math. 2022, 2022, 6832472. [Google Scholar] [CrossRef]
  26. Altun, Y. Further results on the asymptotic stability of Riemann–Liouville fractional neutral systems with variable delays. Adv. Diff. Equ. 2019, 2019, 437. [Google Scholar] [CrossRef]
  27. Liu, S.; Wu, X.; Zhou, X.F.; Jiang, W. Asymptotical stability of Riemann–Liouville fractional nonlinear systems. Nonlinear Dyn. 2016, 86, 65–71. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
