Article

On Variable-Order Fractional Discrete Neural Networks: Solvability and Stability

1 Department of Mathematics and Computer Science, University of Larbi Ben M'hidi, Oum El Bouaghi 04000, Algeria
2 Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman 20550, United Arab Emirates
3 Dipartimento Ingegneria Innovazione, Università del Salento, 73100 Lecce, Italy
4 Department of Mathematics, Faculty of Science and Technology, Irbid National University, Irbid 2600, Jordan
5 Department of Mathematics, Faculty of Science, University of Jordan, Amman 11942, Jordan
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(2), 119; https://doi.org/10.3390/fractalfract6020119
Submission received: 18 January 2022 / Revised: 13 February 2022 / Accepted: 14 February 2022 / Published: 18 February 2022

Abstract:
Few papers have been published to date regarding the stability of neural networks described by fractional difference operators. This paper makes a contribution to the topic by presenting a variable-order fractional discrete neural network model and by proving its Ulam–Hyers stability. In particular, two novel theorems are illustrated, one regarding the existence of the solution for the proposed variable-order network and the other regarding its Ulam–Hyers stability. Finally, numerical simulations of three-dimensional and two-dimensional variable-order fractional neural networks were carried out to highlight the effectiveness of the conceived theoretical approach.

1. Introduction

Fractional calculus deals with non-integer-order differential equations or non-integer-order difference equations [1]. It is worth noting that, while the first known reference to fractional derivatives can be found in the correspondence of G. W. Leibniz and the Marquis de l'Hospital in 1695, fractional difference operators were introduced only in 1974 [2]. Consequently, non-integer-order discrete-time systems have received less attention in the literature than non-integer-order continuous-time systems [1].
Referring to fractional continuous/discrete systems, some manuscripts in the literature have investigated their stability properties [3,4,5,6,7,8,9]. For example, in [3], an extension of the Mikhailov stability criterion to fractional discrete systems described by the nabla Grünwald–Letnikov operator was introduced. The stability methods in [3] are efficient from the computational point of view and can be applied to both commensurate and incommensurate systems. In [4], a novel definition of the Mittag–Leffler stability for discrete fractional systems was introduced. In particular, some useful criteria were developed in order to ensure the stability of nabla discrete fractional systems for both the Caputo definition and the Riemann–Liouville definition. In [5], the stability and synchronization for some fractional-order difference equations were investigated. In [9], the fractional delay discrete systems were studied using a discrete delayed Mittag–Leffler matrix function. Moreover, a criterion on the finite-time stability of fractional delay difference equations with constant coefficients was derived.
It is worth noting that very few papers have been published in the literature regarding the Ulam–Hyers stability in fractional discrete systems [10,11,12]. For example, in [10], the Ulam–Hyers stability of linear and non-linear nabla fractional Caputo difference equations on finite intervals was investigated. In [11], the Ulam–Hyers stability for Riemann–Liouville fractional nabla difference equations was studied. In [12], the Hyers–Ulam stability of an inverted pendulum modeled by a discrete fractional Duffing equation was investigated.
Among the different types of non-integer-order discrete-time dynamical systems, an important class is represented by neural networks described by non-integer-order difference operators [13,14,15,16,17,18,19,20,21]. Even though the stability of fractional discrete neural networks is a prerequisite for their successful applications [13], few papers have been published to date on the topic. For example, in [14], the global Mittag–Leffler stability and the finite-time stability of fractional discrete neural networks (in the complex field) were studied for two different types of activation functions. In [15], a fractional discrete quaternion-valued memristive neural network was introduced. Then, a sufficient condition to ensure the quasi-stability of the network equilibrium point was illustrated. In [16], Lyapunov’s direct method was utilized to ensure the stability of fractional discrete complex-valued neural networks, whereas in [17], the stability and synchronization of non-integer-order discrete neural networks with time delays were studied. In [18], the fixed-point theorem was exploited to derive some stability conditions for fractional difference equations. Then, the obtained stability results were applied to fractional discrete neural networks with and without delay. In [19], the exponential stability of non-integer-order discrete quaternion-valued neural networks was investigated. In particular, the Lyapunov–Krasovskii functional and matrix inequality were utilized to prove the exponential stability of the network equilibrium point. In [20], the Arzela–Ascoli theorem was exploited to ensure the finite-time stability of fractional discrete complex-valued neural networks. Finally, referring to variable-order fractional discrete neural networks (i.e., networks where the fractional order changes over discrete time), an attempt to study their stability was carried out in [21]. However, the results in [21] were related to the Mittag–Leffler stability. No result is available in the literature regarding the Ulam–Hyers stability of fractional discrete neural networks, including those with a variable order.
Motivated by this gap in the literature, this paper makes a contribution to the topic of the stability of fractional discrete neural networks by presenting a variable-order model based on the Caputo h-difference operator and by proving its Ulam–Hyers stability. In particular, two novel theorems are illustrated, one regarding the existence of the solution for the proposed variable-order network and the other regarding its Ulam–Hyers stability. The paper is organized as follows. In Section 2, the background on fractional discrete calculus is provided. In Section 3, the equations of the proposed variable-order fractional discrete neural network are introduced. In Section 4, a novel theorem is presented, with the aim to prove the existence of the solution for the considered neural network model. In Section 5, the Ulam–Hyers stability of the proposed variable-order fractional discrete neural network is proven by means of a novel theorem. Finally, in Section 6, numerical simulations of three-dimensional and two-dimensional variable-order fractional neural networks are carried out to highlight the effectiveness of the conceived theoretical approach.

2. Background on Fractional Discrete Calculus

The following definitions for discrete fractional calculus are introduced.
Definition 1
(Fractional sum [22]). Let $x:(h\mathbb{N})_a\to\mathbb{R}$ and $v>0$ be given, with starting point $a$. The $v$th-order h-sum is given by:

$$ {}_h\Delta_a^{-v}x(t)=\frac{h}{\Gamma(v)}\sum_{s=\frac{a}{h}}^{\frac{t}{h}-v}\bigl(t-\sigma(sh)\bigr)_h^{(v-1)}x(sh),\qquad \sigma(sh)=(s+1)h,\quad t\in(h\mathbb{N})_{a+vh}, $$

where the h-falling factorial function is defined as:

$$ t_h^{(v)}=h^{v}\,\frac{\Gamma\bigl(\frac{t}{h}+1\bigr)}{\Gamma\bigl(\frac{t}{h}+1-v\bigr)},\qquad t,v\in\mathbb{R}, $$

while $(h\mathbb{N})_{a+(1-v)h}=\{a+(1-v)h,\,a+(2-v)h,\,\dots\}$.
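To make these definitions concrete, the following minimal Python sketch (our own illustration, not from the paper; the helper names `h_falling_factorial` and `fractional_h_sum` are hypothetical) evaluates the h-falling factorial and the $v$th-order h-sum of a sampled sequence:

```python
import math

def h_falling_factorial(t, v, h):
    # t_h^(v) = h^v * Gamma(t/h + 1) / Gamma(t/h + 1 - v)
    return h**v * math.gamma(t / h + 1) / math.gamma(t / h + 1 - v)

def fractional_h_sum(x, v, h, a=0.0):
    """v-th order h-sum of samples x[s] given at times a, a+h, a+2h, ...
    Returns the values at t = a + v*h + (n-1)*h, following Definition 1."""
    out = []
    for n in range(1, len(x) + 1):
        t = a + v * h + (n - 1) * h          # t in (hN)_{a+vh}
        acc = 0.0
        for s in range(n):                   # s*h runs over a, ..., t - v*h
            sh = a + s * h
            sigma = sh + h                   # sigma(sh) = (s+1)h
            acc += h_falling_factorial(t - sigma, v - 1, h) * x[s]
        out.append(acc * h / math.gamma(v))
    return out
```

As a quick sanity check, for $v=1$ and $h=1$ the kernel weights collapse to 1 and the routine returns the ordinary cumulative sum.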
Definition 2
(Caputo delta difference [22]). For $x(t)$ defined on $(h\mathbb{N})_a$ and $\alpha>0$, the Caputo-like difference is defined by:

$$ {}_h^{C}\Delta_a^{\alpha}x(t)={}_h\Delta_a^{-(m-\alpha)}\Delta_h^{m}x(t),\qquad t\in(h\mathbb{N})_{a+(m-\alpha)h}, $$

where $\Delta_h x(t)=\frac{x(t+h)-x(t)}{h}$ and $m=[\alpha]+1$.
If the fractional order $\alpha$ equals a positive integer $m$, then we have the following definition:

$$ {}_h^{C}\Delta_a^{\alpha}x(t)=\Delta_h^{m}x(t),\qquad t\in(h\mathbb{N})_a. $$
Lemma 1
([22]). In what follows, some useful properties used in this paper are reported:
  • Discrete Leibniz integral law:
    $$ {}_h\Delta_{a+(1-v)h}^{-v}\,{}_h^{C}\Delta_a^{v}x(t)=x(t)-x(a),\qquad 0<v\le 1,\quad t\in(h\mathbb{N})_{a+h}; $$
  • Fractional Caputo difference of a constant c:
    $$ {}_h^{C}\Delta_a^{v}c=0,\qquad 0<v\le 1; $$
  • Delta difference of the h-falling factorial function:
    $$ \Delta_s\,(t-sh)_h^{(v)}=-v\,\bigl(t-\sigma(sh)\bigr)_h^{(v-1)}. $$
Lemma 2
(Arzela–Ascoli’s theorem [23]). A bounded, uniformly Cauchy subset Ω of Banach space E is relatively compact.
Lemma 3
(Krasnoselskii fixed-point theorem [23]). Let Ω be a nonempty, closed, convex, and bounded subset of a Banach space E. Suppose that S and T are two operators such that:
(i) S is a contraction;
(ii) for any $x,y\in\Omega$, $Sx+Ty\in\Omega$;
(iii) T is continuous, and $T(\Omega)$ is relatively compact.
Then, there exists $x\in\Omega$ such that $Sx+Tx=x$.

3. Variable-Order Fractional Discrete Neural Network

Discrete-time fractional-order neural networks are well suited to modeling the dynamics of fractional-order discrete non-linear systems. Indeed, since they can approximate such dynamics to any required precision, they are regarded as candidates for a generic, parametric, non-linear model of a large class of discrete non-linear systems with fractional order. In this paper, we explore a type of variable-order fractional discrete neural network described by:
$$ {}_h^{C}\Delta_{t_{kl}}^{v_k}x(t)=-A\,x(t+v_kh)+B\,f\bigl(t,\,x(t+v_kh)\bigr),\qquad(1) $$
where (1) can be written over the successive intervals as:
$$ \begin{cases} {}_h^{C}\Delta_{t_0}^{v_0}x(t)=-A\,x(t+v_0h)+B\,f(t,x(t+v_0h)), & t\in\{t_0+(1-v_0)h,\dots,t_l-v_0h\},\\ {}_h^{C}\Delta_{t_l}^{v_1}x(t)=-A\,x(t+v_1h)+B\,f(t,x(t+v_1h)), & t\in\{t_l+(1-v_1)h,\dots,t_{2l}-v_1h\},\\ \qquad\vdots\\ {}_h^{C}\Delta_{t_{kl}}^{v_k}x(t)=-A\,x(t+v_kh)+B\,f(t,x(t+v_kh)), & t\in\{t_{kl}+(1-v_k)h,\dots,t_{(k+1)l}-v_kh\}, \end{cases}\qquad(2) $$
in which $t\in(h\mathbb{N})_{t_{kl}+(1-v_k)h}$, $0<v_k\le 1$, $k=0,\dots,m-1$, where m is the number of intervals, ${}_h^{C}\Delta_{t_{kl}}^{v_k}$ denotes the variable-order fractional Caputo h-difference operator of order $v_k$, $x(t)=(x_1(t),\dots,x_p(t))^{T}\in\mathbb{R}^p$ is the state of the network at time t, p is the dimension, $A=\mathrm{diag}(a_1,\dots,a_p)$ with $a_i>0$ is the matrix describing the rate at which each neuron resets its potential to the resting state when disconnected from the network, $B\in\mathbb{R}^{p\times p}$ corresponds to the connection weights, and, finally, $f(t,x(t))\in C\bigl((h\mathbb{N})_{t_{kl}+(1-v_k)h},\mathbb{R}^p\bigr)$ is the activation function.
Lemma 4.
A function $x(t)$ is called a solution of (1) if it satisfies:
$$ x(t)=\begin{cases} x(t_0)+\dfrac{h}{\Gamma(v_0)}\displaystyle\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t}{h}-v_0}\bigl(t-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr], & t\in\{t_0+h,\dots,t_l\},\\[2ex] x(t_0)+\displaystyle\sum_{n=1}^{m-1}\dfrac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\[1ex] \qquad+\dfrac{h}{\Gamma(v_{m-1})}\displaystyle\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr], & t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}. \end{cases}\qquad(3) $$
Proof. 
Actually, the system (2) is equivalent to:
$$ \begin{cases} x(t)=x(t_0)+{}_h\Delta_{t_0+(1-v_0)h}^{-v_0}\bigl[-Ax(t)+Bf(t,x(t))\bigr], & t\in\{t_0+h,\dots,t_l\},\\ x(t)=x(t_l)+{}_h\Delta_{t_l+(1-v_1)h}^{-v_1}\bigl[-Ax(t)+Bf(t,x(t))\bigr], & t\in\{t_l+h,\dots,t_{2l}\},\\ \qquad\vdots\\ x(t)=x(t_{(m-1)l})+{}_h\Delta_{t_{(m-1)l}+(1-v_{m-1})h}^{-v_{m-1}}\bigl[-Ax(t)+Bf(t,x(t))\bigr], & t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}. \end{cases} $$
For $k=0$ and $t\in\{t_0+(1-v_0)h,\dots,t_l-v_0h\}$, the solution of (1) can be expressed as:
$$ x(t)=x(t_0)+{}_h\Delta_{t_0+(1-v_0)h}^{-v_0}\bigl[-Ax(t)+Bf(t,x(t))\bigr]. $$
We can further obtain:
$$ x(t)=x(t_0)+\frac{h}{\Gamma(v_0)}\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t}{h}-v_0}\bigl(t-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr], $$
and we can take
$$ x(t_l)=x(t_0)+\frac{h}{\Gamma(v_0)}\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t_l}{h}-v_0}\bigl(t_l-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr] $$
as the initial condition of:
$$ {}_h^{C}\Delta_{t_l}^{v_1}x(t)=-A\,x(t+v_1h)+B\,f(t,x(t+v_1h)),\qquad t\in\{t_l+(1-v_1)h,\dots,t_{2l}-v_1h\}. $$
Using the same approach, we can obtain the expression of $x(t)$ for any $t\in\{t_l+(1-v_1)h,\dots,t_{2l}-v_1h\}$. In other words, we have:
$$ \begin{aligned} x(t)&=x(t_l)+\frac{h}{\Gamma(v_1)}\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t}{h}-v_1}\bigl(t-\sigma(sh)\bigr)_h^{(v_1-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &=x(t_0)+\frac{h}{\Gamma(v_0)}\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t_l}{h}-v_0}\bigl(t_l-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\quad+\frac{h}{\Gamma(v_1)}\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t}{h}-v_1}\bigl(t-\sigma(sh)\bigr)_h^{(v_1-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]. \end{aligned} $$
Now, for any $t\in\{t_{kl}+(1-v_k)h,\dots,t_{(k+1)l}-v_kh\}$, $k=2,3,\dots,m-1$, we can similarly derive the expression of $x(t)$ reported in (3),
and hence, the proof is completed. □

4. Existence of the Solution

The following operator can now be defined:
$$ \begin{aligned} Px(t)&=x(t_0)+\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\quad+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr],\qquad t\in(h\mathbb{N})_{a+h}. \end{aligned} $$
It is easily concluded that x is a solution of (1) if and only if x is a fixed point of the operator P. We adopt Krasnoselskii's fixed-point theorem (Lemma 3) to establish the existence results. For any $t\in(h\mathbb{N})_{a+h}$, we can define the following two operators:
$$ Tx(t)=\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr] $$
and:
$$ Sx(t)=x(t_0)+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]. $$
Now, we introduce the following assumptions:
$(A_1)$ For all $t\in(h\mathbb{N})_{t_{(m-1)l}+(1-v_{m-1})h}$, $f(t,x)$ represents a continuous function with respect to x, and there exists a constant $L\in\mathbb{R}^{+}$ such that:
$$ \|f(t,x)-f(t,y)\|\le L\,\|x-y\|; $$
$(A_2)$ There exists a constant $M\in\mathbb{R}^{+}$ with $M<1$, where:
$$ M=\bigl[M_A+LM_B\bigr]\left(\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\right) $$
and:
$$ M_A=\|A\|,\qquad M_B=\|B\|. $$
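Since checking $(A_2)$ numerically is exactly what the examples in Section 6 do, a short sketch of how M can be evaluated may be useful. The following Python fragment is our own illustration (the helper name `assumption_A2_constant` is hypothetical, and the sup is taken over the finite time grid, as in the definition):

```python
import math

def assumption_A2_constant(t_pts, v, h, M_A, M_B, L):
    """Evaluate M from assumption (A2).
    t_pts = [t_0, t_l, t_{2l}, ..., t_{ml}]: interval endpoints,
    v = [v_0, ..., v_{m-1}]: fractional order on each interval."""
    def ff(t, order):  # h-falling factorial t_h^(order)
        return h**order * math.gamma(t / h + 1) / math.gamma(t / h + 1 - order)

    m = len(v)
    total = sum(ff(t_pts[n] + (v[n - 1] - 1) * h - t_pts[n - 1], v[n - 1])
                / math.gamma(1 + v[n - 1]) for n in range(1, m))
    # sup over the grid t in {t_{(m-1)l} + h, ..., t_{ml}}
    steps = int(round((t_pts[m] - t_pts[m - 1]) / h))
    sup_term = max(ff(t_pts[m - 1] + k * h + (v[m - 1] - 1) * h - t_pts[m - 1],
                      v[m - 1]) / math.gamma(1 + v[m - 1])
                   for k in range(1, steps + 1))
    return (M_A + L * M_B) * (total + sup_term)
```

The exact value naturally depends on the interval endpoints $t_{kl}$, which are fixed by the simulation setup of each example.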
Now, a novel theorem is illustrated, which proves the existence of the solution of the variable-order fractional discrete neural network (1).
Theorem 1.
Assume $(A_1)$ and $(A_2)$ hold. For any $x\in\mathbb{R}^p$, if there exists a positive $\kappa$ such that $\|x(t_0)\|\le\kappa$, then the system (1) has at least one bounded solution in $\Omega=\{x\in\mathbb{R}^p:\|x\|\le r\}$ if the following condition holds:
$$ r\ge\frac{\kappa}{1-M}. $$
Proof. 
Let $x:(h\mathbb{N})_a\to\mathbb{R}^p$, let $|x(t)|$ denote the $l_\infty$ norm of the vector $x(t)$, i.e., $|x(t)|=\max_{i=1,\dots,p}|x_i(t)|$, and let the supremum norm $\|x\|=\sup_{t\in(h\mathbb{N})_a}|x(t)|$ be defined on the set Ω. The induced matrix norm is $\|B\|=\max_{i=1,\dots,p}\sum_{j=1}^{p}|b_{ij}|$. It is obvious that Ω is a nonempty, closed, bounded, and convex subset of $\mathbb{R}^p$. The proof proceeds in four steps:
Step 1.
We show that S maps Ω into Ω. For any $x\in\Omega$, we have:
$$ \begin{aligned} \|Sx(t)\|&\le\|x(t_0)\|+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl\|-Ax(sh)+Bf(sh,x(sh))\bigr\|\\ &\le\kappa+\bigl[M_A+LM_B\bigr]\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\,\|x\|\\ &\le\kappa+\bigl[M_A+LM_B\bigr]\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\,r\le r. \end{aligned} $$
This implies S Ω Ω ;
Step 2.
We need to prove that S is continuous. Let $\{x_n\}$ be a sequence in Ω satisfying $x_n\to x$ as $n\to+\infty$. Then, we can obtain:
$$ \begin{aligned} \|Sx_n(t)-Sx(t)\|&\le\|x_n(t_0)-x(t_0)\|\\ &\quad+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl\|-A\bigl(x_n(sh)-x(sh)\bigr)+B\bigl(f(sh,x_n(sh))-f(sh,x(sh))\bigr)\bigr\|\\ &\le\|x_n(t_0)-x(t_0)\|+\bigl[M_A+LM_B\bigr]\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\,\|x_n-x\|. \end{aligned} $$
Then, we can conclude that $\|Sx_n(t)-Sx(t)\|\to 0$ as $n\to+\infty$, which implies that S is continuous;
Step 3.
We show that $S\Omega$ is relatively compact. We choose $t_1,t_2\in\{t_{kl}+h,\dots,t_{(k+1)l}\}$, $k=1,2,\dots,m-1$, with $t_1<t_2$. Then, we have:
$$ \begin{aligned} \bigl\|Sx(t_1)-Sx(t_2)\bigr\|&=\frac{h}{\Gamma(v_{m-1})}\Biggl\|\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t_1}{h}-v_{m-1}}\bigl(t_1-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\qquad-\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t_2}{h}-v_{m-1}}\bigl(t_2-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\Biggr\|\\ &=\frac{h}{\Gamma(v_{m-1})}\Biggl\|\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t_1}{h}-v_{m-1}}\Bigl[\bigl(t_1-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}-\bigl(t_2-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\Bigr]\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\qquad-\sum_{s=\frac{t_1}{h}+1-v_{m-1}}^{\frac{t_2}{h}-v_{m-1}}\bigl(t_2-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\Biggr\|\\ &\le\frac{h}{\Gamma(v_{m-1})}\bigl[M_A+LM_B\bigr]\Biggl(\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t_1}{h}-v_{m-1}}\Bigl|\bigl(t_1-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}-\bigl(t_2-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\Bigr|\\ &\qquad+\sum_{s=\frac{t_1}{h}+1-v_{m-1}}^{\frac{t_2}{h}-v_{m-1}}\bigl(t_2-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\Biggr)\,r\;\longrightarrow\;0\quad\text{as }t_1\to t_2. \end{aligned} $$
This implies that $\{Sx:x\in\Omega\}$ is a bounded and uniformly Cauchy subset of E, and together with Arzela–Ascoli's theorem (Lemma 2), we obtain that $S\Omega$ is relatively compact;
Step 4.
We choose a fixed $y\in\Omega$ and set $x=Tx+Sy$ for all $k=0,1,2,\dots,m-1$. Then, we have:
$$ \begin{aligned} \|x\|&\le\|Tx\|+\|Sy\|\\ &\le\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl\|-Ax(sh)+Bf(sh,x(sh))\bigr\|\\ &\quad+\|y(t_0)\|+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl\|-Ay(sh)+Bf(sh,y(sh))\bigr\|\\ &\le\kappa+\bigl[M_A+LM_B\bigr]\Biggl(\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\Biggr)r\le r. \end{aligned} $$
Therefore, $x\in\Omega$. Finally, we prove that the operator T is a contraction mapping. For $x(t),y(t)\in\Omega$, taking the norm of $Tx(t)-Ty(t)$ yields:
$$ \begin{aligned} \|Tx(t)-Ty(t)\|&=\Biggl\|\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-A\bigl(x(sh)-y(sh)\bigr)+B\bigl(f(sh,x(sh))-f(sh,y(sh))\bigr)\bigr]\Biggr\|\\ &\le\bigl[M_A+LM_B\bigr]\,\|x-y\|\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}<\|x-y\|. \end{aligned} $$
According to $(A_2)$, it can be concluded that the operator T is a contraction mapping. From Lemma 3, $P=T+S$ has a fixed point in Ω, which is a solution of (1). □
Remark 1.
The goal of Theorem 1 is to establish the fixed-point results needed in this section. By constructing the set Ω, assuming that the initial condition is bounded (which is necessary), and establishing an essential condition that relies on the assumptions $(A_1)$ and $(A_2)$, we ensure the existence of at least one solution when Krasnoselskii's fixed-point theorem is applied.

5. Ulam–Hyers Stability of a Variable-Order Fractional Discrete Neural Network

Definition 3
(Ulam–Hyers stability [10]). We say that (1) is Ulam–Hyers stable if there exists $c>0$ such that, for arbitrary $\epsilon>0$, if $x\in\mathbb{R}^p$ satisfies:
$$ \bigl\|{}_h^{C}\Delta_{t_{kl}}^{v_k}x(t)+A\,x(t+v_kh)-B\,f\bigl(t,x(t+v_kh)\bigr)\bigr\|\le\epsilon,\qquad(4) $$
for $t\in(h\mathbb{N})_{t_{kl}+(1-v_k)h}$, $0<v_k\le1$, $k=0,\dots,m-1$, then there exists a solution y of (1) satisfying:
$$ \|x(t)-y(t)\|\le c\,\epsilon. $$
First, the following lemma is proven.
Lemma 5.
If x solves (4), then:
$$ \Biggl\|x(t)-x(t_0)-\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ \qquad-\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\Biggr\|\le d\,\epsilon, $$
where:
$$ d=\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}. $$
Proof. 
If $x(t)$ satisfies (4), there exists a function $g(t)$ with $\|g(t)\|\le\epsilon$ such that:
$$ {}_h^{C}\Delta_{t_{kl}}^{v_k}x(t)+A\,x(t+v_kh)-B\,f\bigl(t,x(t+v_kh)\bigr)=g(t),\qquad t\in\{t_{kl}+h,\dots,t_{(k+1)l}\},\quad k=0,1,2,\dots,m-1, $$
which means:
$$ \begin{cases} {}_h^{C}\Delta_{t_0}^{v_0}x(t)+A\,x(t+v_0h)-B\,f(t,x(t+v_0h))=g(t), & t\in\{t_0+h,\dots,t_l\},\\ {}_h^{C}\Delta_{t_l}^{v_1}x(t)+A\,x(t+v_1h)-B\,f(t,x(t+v_1h))=g(t), & t\in\{t_l+h,\dots,t_{2l}\},\\ \qquad\vdots\\ {}_h^{C}\Delta_{t_{(m-1)l}}^{v_{m-1}}x(t)+A\,x(t+v_{m-1}h)-B\,f(t,x(t+v_{m-1}h))=g(t), & t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}. \end{cases} $$
That is equivalent to:
$$ \begin{cases} x(t)-x(t_0)-\dfrac{h}{\Gamma(v_0)}\displaystyle\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t}{h}-v_0}\bigl(t-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]=\dfrac{h}{\Gamma(v_0)}\displaystyle\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t}{h}-v_0}\bigl(t-\sigma(sh)\bigr)_h^{(v_0-1)}g(sh),\\ x(t)-x(t_l)-\dfrac{h}{\Gamma(v_1)}\displaystyle\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t}{h}-v_1}\bigl(t-\sigma(sh)\bigr)_h^{(v_1-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]=\dfrac{h}{\Gamma(v_1)}\displaystyle\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t}{h}-v_1}\bigl(t-\sigma(sh)\bigr)_h^{(v_1-1)}g(sh),\\ \qquad\vdots\\ x(t)-x(t_{(m-1)l})-\dfrac{h}{\Gamma(v_{m-1})}\displaystyle\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]=\dfrac{h}{\Gamma(v_{m-1})}\displaystyle\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}g(sh). \end{cases} $$
Therefore, we have:
$$ \begin{cases} x(t_l)=x(t_0)+\dfrac{h}{\Gamma(v_0)}\displaystyle\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t_l}{h}-v_0}\bigl(t_l-\sigma(sh)\bigr)_h^{(v_0-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]+\dfrac{h}{\Gamma(v_0)}\displaystyle\sum_{s=\frac{t_0}{h}+1-v_0}^{\frac{t_l}{h}-v_0}\bigl(t_l-\sigma(sh)\bigr)_h^{(v_0-1)}g(sh),\\ x(t_{2l})=x(t_l)+\dfrac{h}{\Gamma(v_1)}\displaystyle\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t_{2l}}{h}-v_1}\bigl(t_{2l}-\sigma(sh)\bigr)_h^{(v_1-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]+\dfrac{h}{\Gamma(v_1)}\displaystyle\sum_{s=\frac{t_l}{h}+1-v_1}^{\frac{t_{2l}}{h}-v_1}\bigl(t_{2l}-\sigma(sh)\bigr)_h^{(v_1-1)}g(sh),\\ \qquad\vdots\\ x(t)=x(t_{(m-1)l})+\dfrac{h}{\Gamma(v_{m-1})}\displaystyle\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]+\dfrac{h}{\Gamma(v_{m-1})}\displaystyle\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}g(sh), \end{cases} $$
which leads us to the following equality:
$$ \begin{aligned} &x(t)-x(t_0)-\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\qquad-\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &=\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}g(sh)+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}g(sh). \end{aligned} $$
Now, by taking the norm of each side, one can have:
$$ \begin{aligned} &\Biggl\|x(t)-x(t_0)-\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\qquad-\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\Biggr\|\\ &=\Biggl\|\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}g(sh)+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}g(sh)\Biggr\|\\ &\le\Biggl(\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\Biggr)\|g\|\\ &\le\Biggl(\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\Biggr)\epsilon, $$
\end{aligned} $$
which completes the proof. □
Now, a novel theorem is illustrated, which proves the Ulam–Hyers stability of the variable-order fractional discrete neural network (1).
Theorem 2.
Under Assumptions $(A_1)$ and $(A_2)$, the system (1) is Ulam–Hyers stable.
Proof. 
It is clear that the solution y of (1) satisfies:
$$ \begin{aligned} y(t)&=x(t_0)+\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ay(sh)+Bf(sh,y(sh))\bigr]\\ &\quad+\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ay(sh)+Bf(sh,y(sh))\bigr]. \end{aligned} $$
Therefore, we can have:
$$ \begin{aligned} \|x(t)-y(t)\|&=\Biggl\|x(t)-x(t_0)-\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ay(sh)+Bf(sh,y(sh))\bigr]\\ &\qquad-\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ay(sh)+Bf(sh,y(sh))\bigr]\Biggr\|\\ &\le\Biggl\|x(t)-x(t_0)-\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\\ &\qquad-\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-Ax(sh)+Bf(sh,x(sh))\bigr]\Biggr\|\\ &\quad+\Biggl\|\sum_{n=1}^{m-1}\frac{h}{\Gamma(v_{n-1})}\sum_{s=\frac{t_{(n-1)l}}{h}+1-v_{n-1}}^{\frac{t_{nl}}{h}-v_{n-1}}\bigl(t_{nl}-\sigma(sh)\bigr)_h^{(v_{n-1}-1)}\bigl[-A\bigl(x(sh)-y(sh)\bigr)+B\bigl(f(sh,x(sh))-f(sh,y(sh))\bigr)\bigr]\Biggr\|\\ &\quad+\Biggl\|\frac{h}{\Gamma(v_{m-1})}\sum_{s=\frac{t_{(m-1)l}}{h}+1-v_{m-1}}^{\frac{t}{h}-v_{m-1}}\bigl(t-\sigma(sh)\bigr)_h^{(v_{m-1}-1)}\bigl[-A\bigl(x(sh)-y(sh)\bigr)+B\bigl(f(sh,x(sh))-f(sh,y(sh))\bigr)\bigr]\Biggr\|\\ &\le d\,\epsilon+\bigl[M_A+LM_B\bigr]\Biggl(\sum_{n=1}^{m-1}\frac{\bigl(t_{nl}+(v_{n-1}-1)h-t_{(n-1)l}\bigr)_h^{(v_{n-1})}}{\Gamma(1+v_{n-1})}+\sup_{t\in\{t_{(m-1)l}+h,\dots,t_{ml}\}}\frac{\bigl(t+(v_{m-1}-1)h-t_{(m-1)l}\bigr)_h^{(v_{m-1})}}{\Gamma(1+v_{m-1})}\Biggr)\|x(t)-y(t)\|, \end{aligned} $$
so that $\|x(t)-y(t)\|\le\frac{d}{1-M}\,\epsilon=c\,\epsilon$,
where
$$ c=\frac{d}{1-M}. $$
Thus, we consequently conclude that (1) is Ulam–Hyers stable. □

6. Numerical Simulations

Example 1.
Let us consider the following fractional variable-order neural network:
$$ \begin{cases} {}_h^{C}\Delta_{t_0}^{v_0}x(t)=-A\,x(t+v_0h)+B\tanh\bigl(x(t+v_0h)\bigr), & t\in\{t_0+(1-v_0)h,\dots,t_l-v_0h\},\\ {}_h^{C}\Delta_{t_l}^{v_1}x(t)=-A\,x(t+v_1h)+B\tanh\bigl(x(t+v_1h)\bigr), & t\in\{t_l+(1-v_1)h,\dots,t_{2l}-v_1h\},\\ {}_h^{C}\Delta_{t_{2l}}^{v_2}x(t)=-A\,x(t+v_2h)+B\tanh\bigl(x(t+v_2h)\bigr), & t\in\{t_{2l}+(1-v_2)h,\dots,t_{3l}-v_2h\}, \end{cases}\qquad(5) $$
where $h=0.5$, $k=0,1,2$, $t\in(h\mathbb{N})_{t_{kl}+(1-v_k)h}$, $(v_0,v_1,v_2)=(0.05,0.1,0.15)$, $\tanh(x(t))=(\tanh(x_1(t)),\tanh(x_2(t)),\tanh(x_3(t)))^{T}$, and where:
$$ A=\begin{pmatrix}0.2&0&0\\0&0.2&0\\0&0&0.2\end{pmatrix},\qquad B=\begin{pmatrix}0.002&0.004&0.0015\\0.002&0.001&0.002\\0.0025&0.0015&0.003\end{pmatrix},\qquad x(0)=\begin{pmatrix}0.9\\0.6\\0.3\end{pmatrix}. $$
We note that the assumptions $(A_1)$ and $(A_2)$ are satisfied for $M_A=0.2$ and $M_B=0.0075$. This results in $M=0.9866<1$. Then, the conditions of Theorem 1 are fulfilled for $r\ge 67.16$, which indicates the existence of at least one solution. Theorem 2 is also satisfied for $c=\frac{d}{1-M}$ with $d=4.7547$, and hence, the system (5) is Ulam–Hyers stable. We use the following numerical formulas, along with the initial condition $x(0)=(0.9,0.6,0.3)^{T}$, to obtain the numerical solution in Figure 1, which shows the stability of the considered neural network:
$$ \begin{cases} x(t)=x(t_0)+\dfrac{h^{v_0}}{\Gamma(v_0)}\displaystyle\sum_{j=t_0+1}^{t}\dfrac{\Gamma(t-j+v_0)}{\Gamma(t-j+1)}\bigl[-A\,x(j)+B\tanh(x(j))\bigr],\\ x(t)=x(t_l)+\dfrac{h^{v_1}}{\Gamma(v_1)}\displaystyle\sum_{j=t_l+1}^{t}\dfrac{\Gamma(t-j+v_1)}{\Gamma(t-j+1)}\bigl[-A\,x(j)+B\tanh(x(j))\bigr],\\ x(t)=x(t_{2l})+\dfrac{h^{v_2}}{\Gamma(v_2)}\displaystyle\sum_{j=t_{2l}+1}^{t}\dfrac{\Gamma(t-j+v_2)}{\Gamma(t-j+1)}\bigl[-A\,x(j)+B\tanh(x(j))\bigr]. \end{cases} $$
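For reproducibility, the following Python sketch implements these recursions in explicit form. It is our own illustration, not the authors' code: the state on the right-hand side is evaluated at the previous step $x(j-1)$ to keep the scheme explicit, and the number of steps per interval is an assumption, since the paper does not state the interval length l.

```python
import numpy as np
from math import gamma

def simulate(A, B, f, x0, orders, steps_per_interval, h):
    """Variable-order scheme: on each interval the order v_k is constant and
    x(n) = x(start) + h^v_k / Gamma(v_k)
           * sum_{j=start+1}^{n} Gamma(n-j+v_k)/Gamma(n-j+1) * (-A x + B f(x))."""
    x = [np.asarray(x0, dtype=float)]
    start = 0
    for v in orders:                          # one fractional order per interval
        x_start = x[start]
        for n in range(start + 1, start + steps_per_interval + 1):
            acc = np.zeros_like(x_start)
            for j in range(start + 1, n + 1):
                w = gamma(n - j + v) / gamma(n - j + 1)   # discrete kernel weight
                acc += w * (-A @ x[j - 1] + B @ f(x[j - 1]))
            x.append(x_start + h**v / gamma(v) * acc)
        start += steps_per_interval
    return np.array(x)

# Example 1 data: h = 0.5, (v0, v1, v2) = (0.05, 0.1, 0.15); the interval
# length (50 steps) is our assumption, not a value given in the paper.
A = 0.2 * np.eye(3)
B = np.array([[0.002,  0.004,  0.0015],
              [0.002,  0.001,  0.002 ],
              [0.0025, 0.0015, 0.003 ]])
x = simulate(A, B, np.tanh, [0.9, 0.6, 0.3],
             orders=(0.05, 0.1, 0.15), steps_per_interval=50, h=0.5)
```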
Example 2.
Consider the following two-dimensional fractional discrete-time neural network:
$$ {}_h^{C}\Delta_{t_{kl}}^{v_k}x(t)=-A\,x(t+v_kh)+B\sin\bigl(x(t+v_kh)\bigr),\qquad 0<v_k\le1,\quad t\in(h\mathbb{N})_{t_{kl}+(1-v_k)h},\quad k=0,1,2,3,\qquad(6) $$
where $\sin(x(t))=(\sin(x_1(t)),\sin(x_2(t)))^{T}$. Suppose $x(0)=(1,1)^{T}$, $(v_0,v_1,v_2,v_3)=(0.5,0.4,0.3,0.2)$, $h=1.3$, $m=4$, and:
$$ A=\begin{pmatrix}0.046&0\\0&0.046\end{pmatrix},\qquad B=\begin{pmatrix}0.0003&0\\0.0004&0.0002\end{pmatrix}. $$
We can discuss the stability over different time domains. The parameters satisfy $(A_1)$ and $(A_2)$ for $M_A=0.046$, $M_B=0.0006$, and $M=0.9858<1$. All the conditions in Theorem 1 for the existence of the solution are valid with $r\ge 70.42$. Theorem 2 also holds true for the constant $d=21.155$ and $c=\frac{d}{1-M}$, which guarantees that the discrete-time neural network (6) is Ulam–Hyers stable. The numerical solution of the system is illustrated in Figure 2.
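Under the same illustrative `simulate` helper sketched for Example 1 (again with an assumed number of steps per interval), Example 2 can be reproduced with `np.sin` as the activation:

```python
# Example 2 data: h = 1.3, four intervals; 25 steps per interval is our assumption.
A = 0.046 * np.eye(2)
B = np.array([[0.0003, 0.0   ],
              [0.0004, 0.0002]])
x = simulate(A, B, np.sin, [1.0, 1.0],
             orders=(0.5, 0.4, 0.3, 0.2), steps_per_interval=25, h=1.3)
```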
Example 3.
Consider the following neural network:
$$ \begin{cases} {}_h^{C}\Delta_{t_{kl}}^{v_k}x_1(t)=-a_1x_1(t+v_kh)+b_{11}\sin\bigl(x_1(t+v_kh)\bigr)+b_{12}\sin\bigl(x_2(t+v_kh)\bigr),\\ {}_h^{C}\Delta_{t_{kl}}^{v_k}x_2(t)=-a_2x_2(t+v_kh)+b_{21}\sin\bigl(x_1(t+v_kh)\bigr)+b_{22}\sin\bigl(x_2(t+v_kh)\bigr), \end{cases}\qquad(7) $$
where $t\in(h\mathbb{N})_{t_{kl}+(1-v_k)h}$, $m=4$, and $0<v_k\le1$. We consider in this example the following two cases:
Case 1.
Let $t\in[0,173]$, $h=1.9$, $(v_0,v_1,v_2,v_3)=(0.8,0.6,0.3,0.4)$, and:
$$ A=\begin{pmatrix}0.015&0\\0&0.015\end{pmatrix},\qquad B=\begin{pmatrix}0.0003&0\\0.0004&0.0002\end{pmatrix},\qquad x(0)=\begin{pmatrix}0.001\\0.0009\end{pmatrix}. $$
Here, from the given data, we obtain $M_A=0.015$ and $M_B=0.0006$. Clearly, the assumptions $(A_1)$ and $(A_2)$ hold with $M=0.9212<1$. Thus, all the conditions of Theorems 1 and 2 are satisfied. Therefore, the system (7) has at least one solution that is Ulam–Hyers stable, with $r\ge 0.0127$, $d=59.051$, and $c=\frac{d}{1-M}$;
Case 2.
We take $t\in[0,120]$, $h=1.25$, $(v_0,v_1,v_2,v_3)=(0.05,0.05,0.05,0.05)$, and:
$$ A=\begin{pmatrix}0.1&0\\0&0.1\end{pmatrix},\qquad B=\begin{pmatrix}0.03&0.02\\0.01&0.06\end{pmatrix},\qquad x(0)=\begin{pmatrix}0.1\\0.1\end{pmatrix}. $$
As observed, $(A_1)$ and $(A_2)$ are valid for $M_A=0.1$, $M_B=0.07$, and $M=0.9212$. With $r\ge 1.269$, Theorem 1 is valid, which implies the existence of the solution. We can easily confirm that the neural network (7) is Ulam–Hyers stable, since there is a constant $c=\frac{d}{1-M}$ with $d=5.419$, and hence, Theorem 2 holds.
The stability is shown in Figure 3 and Figure 4 using the following numerical formulas:
$$ \begin{cases} x_1(t)=x_1(t_{kl})+\dfrac{h^{v_k}}{\Gamma(v_k)}\displaystyle\sum_{j=t_{kl}+1}^{t}\dfrac{\Gamma(t-j+v_k)}{\Gamma(t-j+1)}\bigl[-a_1x_1(j)+b_{11}\sin(x_1(j))+b_{12}\sin(x_2(j))\bigr],\\ x_2(t)=x_2(t_{kl})+\dfrac{h^{v_k}}{\Gamma(v_k)}\displaystyle\sum_{j=t_{kl}+1}^{t}\dfrac{\Gamma(t-j+v_k)}{\Gamma(t-j+1)}\bigl[-a_2x_2(j)+b_{21}\sin(x_1(j))+b_{22}\sin(x_2(j))\bigr]. \end{cases} $$
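As a final illustration, Case 2 of Example 3 fits the same hypothetical `simulate` helper; here the step count follows from the stated domain, since $t\in[0,120]$ with $h=1.25$ gives 96 steps split over the $m=4$ intervals:

```python
# Example 3, Case 2: t in [0, 120], h = 1.25 -> 96 steps over 4 intervals.
A = 0.1 * np.eye(2)
B = np.array([[0.03, 0.02],
              [0.01, 0.06]])
x = simulate(A, B, np.sin, [0.1, 0.1],
             orders=(0.05, 0.05, 0.05, 0.05), steps_per_interval=24, h=1.25)
```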
In this paper, we addressed the existence and stability of a novel type of discrete fractional neural network with a variable order. Compared with the existing literature on the techniques used in this work, Reference [10] provided Ulam–Hyers stability results for Caputo nabla fractional difference equations in both the linear and non-linear cases using the generalized Gronwall inequality, while Reference [11] investigated the Hyers–Ulam stability of linear fractional nabla difference equations with the help of the discrete Laplace transform. Furthermore, the existence of solutions for an initial-value discrete fractional Duffing equation with a forcing term, using the Krasnoselskii fixed-point theorem, and its Hyers–Ulam stability were shown in [12]. According to the literature reviewed above, it should be noted that the model addressed in this work is far more complex than the ones explored previously, and this type of discrete neural network with the variable-order fractional Caputo h-difference operator has not previously been investigated. As a consequence, when compared to [10,11,12], all of the results and discussions in this work are novel.

7. Conclusions

This paper makes a contribution to the topic of the stability of fractional discrete neural networks by presenting a variable-order network model and by proving its Ulam–Hyers stability. Specifically, two novel theorems were proven: one theorem regards the existence of the solution for the proposed variable-order model, whereas the other regards its Ulam–Hyers stability. Finally, numerical simulations were carried out to show the effectiveness of the theoretical approach developed herein.

Author Contributions

Conceptualization, A.H. and I.M.B.; methodology, A.O.; software, T.-E.O.; validation, G.G., S.M.; formal analysis, A.O.; investigation, T.-E.O.; resources, G.G.; data curation, A.H.; writing—original draft preparation, A.O.; writing—review and editing, I.M.B.; visualization, S.M.; supervision, G.G.; project administration, S.M.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000.
  2. Goodrich, C.; Peterson, A.C. Discrete Fractional Calculus; Springer: Berlin/Heidelberg, Germany, 2015.
  3. Stanisławski, R.; Latawiec, K.J. A modified Mikhailov stability criterion for a class of discrete-time noncommensurate fractional-order systems. Commun. Nonlinear Sci. Numer. Simul. 2021, 96, 105697.
  4. Wei, Y.; Wei, Y.; Chen, Y.; Wang, Y. Mittag–Leffler stability of nabla discrete fractional-order dynamic systems. Nonlinear Dyn. 2020, 101, 407–417.
  5. Khennaoui, A.-A.; Ouannas, A.; Bendoukha, S.; Grassi, G.; Wang, X.; Pham, V.-T.; Alsaadi, F.E. Chaos, control, and synchronization in some fractional-order difference equations. Adv. Differ. Equ. 2019, 2019, 412.
  6. Humphries, U.; Rajchakit, G.; Kaewmesri, P.; Chanthorn, P.; Sriraman, R.; Samidurai, R.; Lim, C.P. Global Stability Analysis of Fractional-Order Quaternion-Valued Bidirectional Associative Memory Neural Networks. Mathematics 2020, 8, 801.
  7. Ratchagit, K. Asymptotic stability of delay-difference system of Hopfield neural networks via matrix inequalities and application. Int. J. Neural Syst. 2007, 17, 425–430.
  8. Pratap, A.; Raja, R.; Alzabut, J.; Cao, J.; Rajchakit, G.; Huang, C. Mittag–Leffler stability and adaptive impulsive synchronization of fractional order neural networks in quaternion field. Math. Methods Appl. Sci. 2020, 43, 6223–6253.
  9. Du, F.; Jia, B. Finite time stability of fractional delay difference systems: A discrete delayed Mittag–Leffler matrix function approach. Chaos Solitons Fractals 2020, 141, 110430.
  10. Chen, C.; Bohner, M.; Jia, B. Ulam–Hyers stability of Caputo fractional difference equations. Math. Methods Appl. Sci. 2019, 42, 7461–7470.
  11. Jonnalagadda, J.M. Hyers–Ulam stability of fractional nabla difference equations. Int. J. Anal. 2016, 2016, 1–5.
  12. Selvam, A.G.M.; Baleanu, D.; Alzabut, J.; Vignesh, D.; Abbas, S. On Hyers–Ulam Mittag–Leffler stability of discrete fractional Duffing equation with application on inverted pendulum. Adv. Differ. Equ. 2020, 2020, 456.
  13. Sun, H.; Zhang, Y.; Baleanu, D.; Chen, W.; Chen, Y. A new collection of real world applications of fractional calculus in science and engineering. Commun. Nonlinear Sci. Numer. Simul. 2018, 64, 213–231.
  14. Pratap, A.; Raja, R.; Cao, J.; Huang, C.; Niezabitowski, M.; Bagdasar, O. Stability of discrete-time fractional-order time-delayed neural networks in complex field. Math. Methods Appl. Sci. 2021, 44, 419–440.
  15. Li, R.; Cao, J.; Xue, C.; Manivannan, R. Quasi-stability and quasi-synchronization control of quaternion-valued fractional-order discrete-time memristive neural networks. Appl. Math. Comput. 2021, 395, 125851.
  16. You, X.; Song, Q.; Zhao, Z. Global Mittag–Leffler stability and synchronization of discrete-time fractional-order complex-valued neural networks with time delay. Neural Netw. 2020, 122, 382–394.
  17. Gu, Y.; Wang, H.; Yu, Y. Synchronization for fractional-order discrete-time neural networks with time delays. Appl. Math. Comput. 2020, 372, 124995.
  18. Wu, G.-C.; Abdeljawad, T.; Liu, J.; Baleanu, D.; Wu, K.-T. Mittag–Leffler Stability Analysis of Fractional Discrete-Time Neural Networks via Fixed Point Technique; Institute of Mathematics and Informatics: Sofia, Bulgaria, 2019.
  19. You, X.; Dian, S.; Guo, R.; Li, S. Exponential stability analysis for discrete-time quaternion-valued neural networks with leakage delay and discrete time-varying delays. Neurocomputing 2021, 430, 71–81.
  20. You, X.; Song, Q.; Zhao, Z. Existence and finite-time stability of discrete fractional-order complex-valued neural networks with time delays. Neural Netw. 2020, 123, 248–260.
  21. Huang, L.-L.; Park, J.H.; Wu, G.-C.; Mo, Z.-W. Variable-order fractional discrete-time recurrent neural networks. J. Comput. Appl. Math. 2020, 370, 112633.
  22. Baleanu, D.; Wu, G.-C.; Bai, Y.-R.; Chen, F.-L. Stability analysis of Caputo-like discrete fractional systems. Commun. Nonlinear Sci. Numer. Simul. 2017, 48, 520–530.
  23. Burton, T.A.; Furumochi, T. Krasnoselskii's fixed point theorem and stability. Nonlinear Anal. Theory Methods Appl. 2002, 49, 445–454.
Figure 1. Numerical solution of the variable-order neural network (5) (red: v = 0.05; green: v = 0.1; blue: v = 0.15).
Figure 2. The numerical solution of the neural network (6).
Figure 3. Numerical solution of the neural network (7), Case 1.
Figure 4. Numerical solution of the neural network (7), Case 2.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
