Article

A Novel Semi-Analytical Scheme to Deal with Fractional Partial Differential Equations (PDEs) of Variable-Order

by
Samad Kheybari
1,2,*,
Farzaneh Alizadeh
1,2,3,
Mohammad Taghi Darvishi
4,
Kamyar Hosseini
2,5,* and
Evren Hincal
1,2,3
1
Faculty of Art and Science, University of Kyrenia, TRNC, Mersin 10, Kyrenia 99320, Turkey
2
Mathematics Research Center, Near East University, TRNC, Mersin 10, Nicosia 99138, Turkey
3
Department of Mathematics, Near East University, TRNC, Mersin 10, Nicosia 99138, Turkey
4
Department of Mathematics, Razi University, Kermanshah 67149, Iran
5
Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102 2801, Lebanon
*
Authors to whom correspondence should be addressed.
Fractal Fract. 2024, 8(7), 425; https://doi.org/10.3390/fractalfract8070425
Submission received: 17 May 2024 / Revised: 11 July 2024 / Accepted: 18 July 2024 / Published: 20 July 2024
(This article belongs to the Special Issue Spectral Methods for Fractional Functional Models)

Abstract

This article introduces a new numerical algorithm dedicated to solving the most general form of variable-order fractional partial differential models. Both the temporal and spatial orders of the derivatives are allowed to vary. A combination of shifted Chebyshev polynomials is used to approximate the solution of such equations. The coefficients of this combination are treated as functions of time and are obtained using the collocation method. The theoretical aspects of the method are investigated, and its efficiency is then demonstrated by solving several test problems.

1. Introduction

Fractional-order integration and differentiation constitute the essence of fractional calculus. Since the middle of the previous century, fractional calculus has played an important role in many different areas; several of these areas are surveyed in the book by Diethelm [1]. In fractional calculus, the order of the integration or differentiation operator is usually a constant, positive, real number, e.g., α.
In 1993, Ross and Samko [2] introduced fractional derivative and integration operators of variable order. The order of their operators was a function of t, i.e., α(t), so the order of the operator can vary from point to point. In fact, they extended the constant-order fractional derivative to a variable-order one. Their work was followed by other researchers, most of whom worked on the theoretical aspects of the subject. Samko [3] presented a thorough investigation of fractional integration and differentiation of variable order. Further, Sun et al. [4] performed a comparative analysis between constant-order and variable-order fractional models. A useful review of variable-order fractional calculus is given in [5], and an account of the variable-order fractional calculus of variations can be found in [6]. Chechkin et al. [7] obtained a time-fractional equation with a spatially varying order of the time differentiation by starting from a continuous-time random walk approach with a space-dependent waiting-time probability density function. Zheng et al. [8] investigated the solution regularity of a well-posed variable-order time-fractional reaction–diffusion problem whose kernel is of Mittag–Leffler type. It must be noted that in practical problems, a constant-order fractional derivative cannot adequately capture the variability of memory with respect to time; this limitation motivates the use of variable-order fractional operators. Guo and Zheng [9] recently investigated a variable-order time-fractional diffusion equation with a Mittag–Leffler kernel.
Variable-order fractional partial differential equations (VOFPDEs) play crucial roles in two important areas, namely, mechanics [10] and viscoelasticity [11]. Therefore, there have been several efforts to solve VOFPDEs by different methods, e.g., the operational matrix method [12], approximation by Jacobi [13] and Chebyshev polynomials [14], the spectral method [15], approximation by Legendre wavelets [16], the finite difference method [17], the reproducing kernel approach [18], and approximation by Bernstein polynomials [19].
The following general form for a space-time VOFPDE is considered in this article:
$$ {}_{0}^{C}D_t^{\alpha(t)} v(x,t) + \sum_{i=1}^{n_1} a_i(x,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} v(x,t) + \sum_{i=1}^{n_2} b_i(x,t)\, {}_{0}^{C}D_t^{\beta_i(t)}\, {}_{a}^{C}D_x^{\lambda_i(x)} v(x,t) = f(x,t), \quad (1) $$
and the initial/boundary conditions are considered as:
$$ v(x,0) = h_0(x), \quad \frac{\partial^i v(x,0)}{\partial t^i} = h_i(x), \quad i = 1, 2, \ldots, n-1, \quad a \le x \le b, \qquad v(a,t) = g_0(t), \quad v(b,t) = g_1(t), \quad t \in [0,T], \quad (2) $$
where $\alpha(t) \in (n-1, n]$, $\alpha_i(t), \beta_i(t) \in (n_i-1, n_i]$, $\lambda_i(x) \in (0,1]$ or $\lambda_i(x) \in (1,2]$, and $n \in \mathbb{N}$. Also, $a_i(x,t)$, $b_i(x,t)$, $f(x,t)$, $h_0(x)$, $h_i(x)$, $g_0(t)$, and $g_1(t)$ are sufficiently smooth known functions.
To approximate the solution of Equation (1), a combination of shifted Chebyshev polynomials is employed, which yields a semi-analytical solution. The unknown coefficients of this combination are determined through the collocation method.

2. Background Information

Within this section, we outline crucial definitions and notations that are fundamental for our subsequent discussions and analyses.

2.1. Fractional Derivative Operator

Here, we focus on the variable-order Caputo fractional derivative operator, whose order depends on time or space. The emphasis on variable-order fractional operators stems from their capacity to provide more accurate descriptions of numerous complex real-world problems. Some comparative investigations which characterize the importance of dealing with these types of operators rather than constant-order fractional derivatives were provided in [20,21,22]. Driven by the aforementioned considerations and motivations, the following definition for the variable-order Caputo fractional derivative can be stated.
Definition 1. 
The variable-order Caputo fractional derivative is defined as:
$$ {}_{a}^{C}D_x^{\alpha(x)} v(x,t) = \begin{cases} \dfrac{1}{\Gamma(n-\alpha(x))} \displaystyle\int_a^x (x-\tau)^{n-\alpha(x)-1}\, \dfrac{\partial^n v(\tau,t)}{\partial \tau^n}\, d\tau, & \alpha(x) \in (n-1, n), \\[2mm] \dfrac{\partial^n v(x,t)}{\partial x^n}, & \alpha(x) = n, \end{cases} $$
wherein $\Gamma(\cdot)$ denotes the well-known Gamma function.
Remark 1. 
By referring to Definition 1 and taking $v(x,t) = (x-a)^{\ell}$, we have:
$$ {}_{a}^{C}D_x^{\alpha(x)} (x-a)^{\ell} = \begin{cases} \dfrac{\Gamma(\ell+1)}{\Gamma(\ell-\alpha(x)+1)}\, (x-a)^{\ell-\alpha(x)}, & (\ell \in \mathbb{N}_0 \ \text{and}\ \ell \ge n) \ \text{or}\ (\ell \notin \mathbb{N}_0 \ \text{and}\ \ell > n-1), \\[2mm] 0, & \ell \in \mathbb{N}_0 \ \text{and}\ \ell < n, \end{cases} $$
where $n-1 < \alpha(x) \le n$ and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$.
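As a quick illustration of Remark 1, the closed form can be checked against direct quadrature of the integral in Definition 1. The following Python sketch is our own illustration (not part of the original paper); the order function α(x), the endpoint a, and the exponent ℓ are arbitrary choices, with α(x) ∈ (0, 1] so that n = 1.

```python
from scipy.integrate import quad
from scipy.special import gamma

a, ell = 0.0, 3                       # left endpoint and monomial exponent (assumed values)
alpha = lambda x: 0.5 + 0.3 * x       # assumed variable order, alpha(x) in (0, 1], so n = 1

def caputo_quadrature(x):
    """Variable-order Caputo derivative of (x - a)^ell at x, via the integral in Definition 1 (n = 1)."""
    al = alpha(x)
    # QUADPACK's algebraic weight handles the (x - tau)^(-al) end-point singularity
    val, _ = quad(lambda tau: ell * (tau - a) ** (ell - 1), a, x, weight='alg', wvar=(0.0, -al))
    return val / gamma(1.0 - al)

def caputo_closed_form(x):
    """Closed form of Remark 1: Gamma(ell+1)/Gamma(ell-alpha(x)+1) * (x - a)^(ell - alpha(x))."""
    al = alpha(x)
    return gamma(ell + 1) / gamma(ell - al + 1) * (x - a) ** (ell - al)

x0 = 0.7
print(caputo_quadrature(x0), caputo_closed_form(x0))   # the two values agree to quadrature accuracy
```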

2.2. Shifted Chebyshev Polynomials

Numerous studies in the past have extensively investigated the operational matrices of fractional-order derivatives, including the articles cited in [23,24]. In this present research, we adopt a similar approach to derive the Caputo derivative operator with variable-order using shifted Chebyshev polynomials. Therefore, the following special framework of weighted L 2 -spaces is considered:
$$ L^2_{\omega}(\Lambda) = \left\{ f : \Lambda \to \mathbb{R} \ \middle|\ \int_{\Lambda} |f(x)|^2\, \omega(x)\, dx < \infty, \ \text{and}\ f \ \text{is measurable} \right\}, $$
where the weight function $\omega(x)$ is measurable and positive, and $\Lambda \subseteq \mathbb{R}$. Additionally, the relevant scalar product and norm are defined as follows:
$$ (f,g)_{\omega} = \int_{\Lambda} f(x)\, g(x)\, \omega(x)\, dx, \quad \text{and} \quad \|f\|_{\omega,2} = \left( \int_{\Lambda} f(x)^2\, \omega(x)\, dx \right)^{1/2}. $$
It must be noted that the orthogonality between functions f , g L ω 2 ( Λ ) is established when their inner product is zero. To have a family of orthogonal polynomials { P k ( x ) | k N 0 } , each of degree k, one can apply the well-known Gram–Schmidt algorithm on the family of standard polynomial basis functions { x j | j N 0 } .
Citing [25], it is established that for any function $v \in L^2_{\omega}(\Lambda)$, there is a unique best-approximating polynomial of degree not greater than N, which can be explicitly obtained as:
$$ v_N^*(x) = \sum_{i=0}^{N} \frac{(P_i, v)_{\omega}}{\|P_i\|_{\omega,2}^2}\, P_i(x). $$
Among orthogonal polynomials, the Chebyshev polynomials, defined on the interval [−1, 1], are of particular significance. Their unique characteristics in approximation theory and computational fields have sparked considerable interest and prompted extensive study. The Chebyshev polynomials of the first kind, denoted by $T_n(x)$, $n \in \mathbb{N} \cup \{0\}$, are eigenfunctions of the singular Sturm–Liouville problem:
$$ (1-x^2)\, y'' - x\, y' + n^2 y = 0, \quad n \in \mathbb{N} \cup \{0\}, \quad x \in (-1,1). $$
Additionally, the Chebyshev polynomials of the first kind can be derived using the following recurrence formula:
$$ T_n(x) = 2x\, T_{n-1}(x) - T_{n-2}(x), \quad \text{for } n = 2, 3, \ldots, $$
where the starting polynomials of this family are the zero-order polynomial $T_0(x) = 1$ and the first-order polynomial $T_1(x) = x$. Considering $\omega(x) = \frac{1}{\sqrt{1-x^2}}$ and $\Lambda = [-1,1]$, these polynomials are orthogonal, that is,
$$ (T_n, T_k)_{\omega} = \begin{cases} 0, & n \ne k, \\ \pi/2, & n = k \ne 0, \\ \pi, & n = k = 0. \end{cases} $$
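The recurrence and the orthogonality relation are easy to verify numerically. The sketch below is our own check (the number of Gauss–Chebyshev nodes is an arbitrary choice): it builds $T_n$ by the three-term recurrence and approximates $(T_n, T_k)_\omega$ by Gauss–Chebyshev quadrature.

```python
import numpy as np

def chebyshev_T(n, x):
    """First-kind Chebyshev polynomial T_n(x) via the three-term recurrence."""
    t_prev, t_curr = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(2, n + 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

# Gauss-Chebyshev quadrature: int_{-1}^{1} f(x)/sqrt(1-x^2) dx ~= (pi/m) * sum_i f(x_i)
m = 200
nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))

for n in range(4):
    for k in range(4):
        inner = np.pi / m * np.sum(chebyshev_T(n, nodes) * chebyshev_T(k, nodes))
        print(n, k, round(inner, 10))   # ~pi for n=k=0, ~pi/2 for n=k!=0, ~0 otherwise
```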
Chebyshev polynomials play a fundamental role in approximation theory by utilizing their roots as nodes in polynomial interpolation. This approach mitigates Runge's phenomenon, resulting in an interpolation polynomial that closely approximates the optimal polynomial for a continuous function under the maximum norm. Moreover, Chebyshev polynomials provide a stable representation exclusively within the interval (−1, 1). Within this interval, Chebyshev polynomials form a complete set, enabling any square-integrable function $f(x)$ to be expanded as a series using Chebyshev polynomials as the basis, i.e., $f(x) = \sum_{n=0}^{+\infty} c_n T_n(x)$. For a square-integrable function $f(x)$ that is infinitely differentiable on the interval (−1, 1), the coefficients $c_n$ in the Chebyshev expansion diminish exponentially as n increases. This phenomenon occurs because Chebyshev polynomials are eigenfunctions of a singular Sturm–Liouville problem [26]. While Laguerre, Legendre, and Hermite polynomials are also eigenfunctions of Sturm–Liouville problems, Chebyshev polynomials handle boundary conditions more effectively [26]. Consequently, if the function $f(x)$ is well behaved within (−1, 1), a relatively small number of terms will suffice to represent the function accurately.
To utilize these polynomials on a general interval [a, b], the shifted Chebyshev polynomials (SCPs) are introduced through the variable transformation $x = \frac{b-a}{2} z + \frac{b+a}{2}$. After implementing this mapping, the resulting shifted Chebyshev polynomials are denoted by $T_n^*(x)$ and are written as:
$$ T_n^*(x) = T_n\!\left( \frac{2x - (a+b)}{b-a} \right), \quad x \in [a,b], \quad n \in \mathbb{N}_0. \quad (3) $$
As can be seen from Equation (3), the polynomials $T_n^*(x)$ have the following properties:
$$ T_n^*(a) = (-1)^n, \quad \text{and} \quad T_n^*(b) = 1, \quad \text{for } n \in \mathbb{N}_0. $$
The orthogonality property of $T_n(x)$, along with the change of variable $z = \frac{2x - (a+b)}{b-a}$, establishes the orthogonality of the SCPs, i.e.,
$$ (T_n, T_m)_{\omega} = \int_{-1}^{1} \frac{T_n(z)\, T_m(z)}{\sqrt{1-z^2}}\, dz = \int_a^b \frac{T_n^*(x)\, T_m^*(x)}{\sqrt{(b-x)(x-a)}}\, dx = (T_n^*, T_m^*)_{\omega^*}, $$
where the corresponding weight function for the SCPs is $\omega^*(x) := \frac{1}{\sqrt{(b-x)(x-a)}}$.
The following theorem demonstrates the existence of the best approximating polynomials for any function v L ω * 2 ( [ a , b ] ) .
Theorem 1. 
Let Π N = span { x m | m = 0 , 1 , , N } , and v ( x ) L ω * 2 ( [ a , b ] ) be an arbitrary function. The optimal polynomial v N * ( x ) Π N to approximate v ( x ) can be expressed by the following combination of SCPs:
$$ v_N^*(x) = \sum_{k=0}^{N} \frac{(T_k^*, v)_{\omega^*}}{\|T_k^*\|_{\omega^*,2}^2}\, T_k^*(x), $$
such that it satisfies:
$$ \|v_N^* - v\|_{\omega^*,2} = \inf_{v_N \in \Pi_N} \|v_N - v\|_{\omega^*,2}. $$
Proof. 
Refer to Theorem 3.14 in [25] for the proof.    □
Theorem 2. 
In the $\omega^*$-norm, the error of the optimal polynomial $v_N^*(x) \in \Pi_N$ approximating a sufficiently smooth function $v(x)$ defined on the interval [a, b] admits the upper bound:
$$ \|v_N^* - v\|_{\omega^*,2} \le \frac{\sqrt{\pi}}{4}\, \frac{(b-a)^{N+1}}{(N+1)!}\, m_{N+1}, $$
where
$$ \left| \frac{d^{N+1} v(x)}{dx^{N+1}} \right| \le m_{N+1}, \quad x \in [a,b]. $$
Proof. 
Owing to the first theorem, the optimal polynomial v N * ( x ) to approximate v ( x ) is obtained as:
$$ v_N^*(x) = \sum_{k=0}^{N} \frac{(T_k^*, v)_{\omega^*}}{\|T_k^*\|_{\omega^*,2}^2}\, T_k^*(x). $$
Furthermore, for any v N ( x ) Π N , the polynomial v N * ( x ) fulfills the following inequality:
$$ \|v_N^* - v\|_{\omega^*,2} \le \|v_N - v\|_{\omega^*,2}. \quad (4) $$
Let $v_N(x)$ be the Taylor expansion of $v(x)$ around $a$, truncated to its first $N+1$ terms, given by:
$$ v_N(x) = \sum_{k=0}^{N} \frac{(x-a)^k}{k!}\, v^{(k)}(a), \quad \text{for } x \in [a,b]. $$
By employing the variable transformation $z = \left( \frac{x-a}{b-x} \right)^{1/2}$ and considering Inequality (4), we can derive the following:
$$ \|v_N^* - v\|_{\omega^*,2} \le \|v_N - v\|_{\omega^*,2} = \left\| \frac{v^{(N+1)}(\zeta(x))}{(N+1)!}\, (x-a)^{N+1} \right\|_{\omega^*,2} \le \frac{m_{N+1}}{(N+1)!}\, \left\| (x-a)^{N+1} \right\|_{\omega^*,2} = \frac{m_{N+1}}{(N+1)!} \left( \int_a^b \frac{(x-a)^{2N+2}}{\sqrt{(b-x)(x-a)}}\, dx \right)^{1/2} = \frac{m_{N+1}}{(N+1)!} \left( \int_0^{+\infty} \frac{2\, (b-a)^{2N+2}\, z^{4N+4}}{(1+z^2)^{2N+3}}\, dz \right)^{1/2} = \frac{m_{N+1}}{(N+1)!} \left( \sqrt{\pi}\, (b-a)^{2N+2}\, \frac{\Gamma\!\left(2N+\frac{5}{2}\right)}{\Gamma(2N+3)} \right)^{1/2} \le \frac{\sqrt{\pi}}{4}\, \frac{(b-a)^{N+1}}{(N+1)!}\, m_{N+1}, $$
that provides our desired outcome.    □
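Theorems 1 and 2 can be illustrated numerically by computing the SCP expansion coefficients of a smooth function with Gauss–Chebyshev quadrature and comparing the weighted L² error of the truncated series with the stated bound. The sketch below is our own illustration under assumed choices (test function v(x) = eˣ on [0, 1], quadrature size m, and the bound constant taken as quoted in Theorem 2 with m_{N+1} = e).

```python
import numpy as np
from math import factorial, pi, sqrt, e

a, b, N = 0.0, 1.0, 8
v = np.exp                                                  # assumed test function v(x) = e^x

m = 400                                                     # number of Gauss-Chebyshev nodes
z = np.cos((2 * np.arange(1, m + 1) - 1) * pi / (2 * m))    # nodes on [-1, 1]
x = (b + a) / 2 + (b - a) / 2 * z                           # mapped nodes on [a, b]

def T(n, s):
    """First-kind Chebyshev polynomial T_n(s) via the three-term recurrence."""
    t0, t1 = np.ones_like(s), s
    for _ in range(n):
        t0, t1 = t1, 2 * s * t1 - t0
    return t0

# SCP coefficients c_k = (T_k^*, v)_{omega^*} / ||T_k^*||^2  (||T_0^*||^2 = pi, else pi/2)
coef = [(pi / m) * np.sum(T(k, z) * v(x)) / (pi if k == 0 else pi / 2) for k in range(N + 1)]

# weighted L2 error of the truncated SCP series, evaluated with the same quadrature
v_N = sum(c * T(k, z) for k, c in enumerate(coef))
err = sqrt(pi / m * np.sum((v_N - v(x)) ** 2))

bound = sqrt(pi) / 4 * (b - a) ** (N + 1) / factorial(N + 1) * e   # m_{N+1} = e on [0, 1]
print(err, bound)    # the computed error falls below the quoted bound
```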
Corollary 1. 
If v ( x ) is a sufficiently smooth function on interval [ a , b ] , and ϵ > 0 is an arbitrary real number, then we can establish the following results:
(i) 
For all N N 0 , where N 0 N is sufficiently large, the following relation holds for the error of the optimal approximating polynomial v N * ( x ) of v ( x ) :
$$ \|v_N^* - v\|_{\omega^*,2} < \epsilon. $$
(ii) 
As $k \to +\infty$, the following limit holds:
$$ \lim_{k \to +\infty} \frac{(T_k^*, v)_{\omega^*}}{\|T_k^*\|_{\omega^*,2}^2} = 0. \quad (5) $$
Proof. (i) 
Theorem 2 implies that:
$$ \lim_{N \to +\infty} \|v_N^* - v\|_{\omega^*,2} \le \lim_{N \to +\infty} \frac{\sqrt{\pi}}{4}\, \frac{(b-a)^{N+1}}{(N+1)!}\, m_{N+1} = 0, $$
which directly implies the corollary’s statement.
(ii) 
By part (i), we have
$$ \lim_{N \to +\infty} \|v_N^* - v\|_{\omega^*,2} = 0 \ \Longrightarrow\ v(x) = \lim_{N \to +\infty} \sum_{k=0}^{N} \frac{(T_k^*, v)_{\omega^*}}{\|T_k^*\|_{\omega^*,2}^2}\, T_k^*(x). $$
The convergence of $\sum_{k=0}^{N} \frac{(T_k^*, v)_{\omega^*}}{\|T_k^*\|_{\omega^*,2}^2}\, T_k^*(x)$ to $v(x)$ as $N \to +\infty$ is equivalent to the establishment of Equation (5).
Therefore, the proof is accomplished.    □

2.3. Expansion for Variable-Order Caputo Fractional Derivatives

It is well known that T N ( z ) for N N 0 can be analytically expressed as:
$$ T_N(z) = (-1)^N + \sum_{k=1}^{N} (-1)^{N-k}\, \frac{N}{k} \binom{N+k-1}{N-k}\, 2^{k-1}\, (z+1)^k, \quad z \in [-1,1]. $$
To apply these polynomials over [a, b], the straightforward change of variable $z = \frac{2x - (a+b)}{b-a}$ is employed. This transformation leads to the following analytical form of the SCPs for $N \in \mathbb{N}_0$:
$$ T_N^*(x) = T_N\!\left( \frac{2x - (a+b)}{b-a} \right) = (-1)^N + \sum_{m=1}^{N} (-1)^{N-m}\, \frac{2^{2m-1}\, N}{m\, (b-a)^m} \binom{N+m-1}{N-m}\, (x-a)^m. \quad (6) $$
Introducing the subsequent formula for the coefficients of (6):
$$ \gamma_{N,m} = \begin{cases} (-1)^{N-m}\, \dfrac{2^{2m-1}\, N}{m\, (b-a)^m} \dbinom{N+m-1}{N-m}, & 0 < m \le N, \\[2mm] (-1)^N, & m = 0, \end{cases} $$
we can reformulate (6) as:
$$ T_N^*(x) = \sum_{m=0}^{N} \gamma_{N,m}\, (x-a)^m, \quad N \in \mathbb{N}_0, \quad a \le x \le b. $$
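The coefficients γ_{N,m} are straightforward to tabulate. The following sketch is our own check (N, a, and b are arbitrary): it evaluates the power form $\sum_m \gamma_{N,m}(x-a)^m$ and compares it with $T_N\!\left(\frac{2x-(a+b)}{b-a}\right)$ computed via the cosine representation.

```python
import numpy as np
from math import comb

def gamma_coeffs(N, a, b):
    """Coefficients gamma_{N,m} of the power-form expansion of the shifted Chebyshev T_N^*."""
    g = [(-1.0) ** N]                                    # m = 0 term
    for m in range(1, N + 1):
        g.append((-1.0) ** (N - m) * 2.0 ** (2 * m - 1) * N
                 / (m * (b - a) ** m) * comb(N + m - 1, N - m))
    return np.array(g)

a, b, N = 1.0, 3.0, 5
g = gamma_coeffs(N, a, b)

x = np.linspace(a, b, 7)
power_form = sum(g[m] * (x - a) ** m for m in range(N + 1))
direct = np.cos(N * np.arccos((2 * x - (a + b)) / (b - a)))   # T_N(z) = cos(N arccos z), z in [-1, 1]
print(np.max(np.abs(power_form - direct)))                    # ~0: both forms coincide
```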
Assuming $v(x) \in L^2_{\omega^*}([a,b])$, it can be expanded in terms of SCPs as:
$$ v(x) = \sum_{m=0}^{+\infty} c_m^*\, T_m^*(x) = \sum_{m=0}^{+\infty} \sum_{k=0}^{m} c_m^*\, \gamma_{m,k}\, (x-a)^k, $$
where $c_m^* = \frac{(T_m^*, v)_{\omega^*}}{\|T_m^*\|_{\omega^*,2}^2}$. In practice, a truncated series of this expansion with $N+1$ terms is commonly used:
$$ v(x) \approx \sum_{m=0}^{N} \sum_{k=0}^{m} c_m^*\, \gamma_{m,k}\, (x-a)^k. \quad (7) $$
Differentiating both sides of Equation (7) with respect to x using a Caputo fractional derivative of variable-order α ( x ) yields:
$$ {}_{a}^{C}D_x^{\alpha(x)} v(x) = \sum_{m=n}^{N} \sum_{k=n}^{m} c_m^*\, \gamma_{m,k}\, \frac{\Gamma(k+1)}{\Gamma(k-\alpha(x)+1)}\, (x-a)^{k-\alpha(x)}, \quad \alpha(x) \in (n-1, n], \quad x \in [a,b]. $$
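A direct implementation of this differentiation rule is sketched below. It is our own illustration, not code from the paper; the truncated coefficients c_m*, the order function α(x), and the interval are made-up inputs, and the order is assumed to lie in (0, 1] so that n = 1.

```python
from math import comb
from scipy.special import gamma

def gamma_coeff(m, k, a, b):
    """gamma_{m,k}: power-basis coefficients of the shifted Chebyshev polynomial T_m^*."""
    if k == 0:
        return (-1.0) ** m
    return (-1.0) ** (m - k) * 2.0 ** (2 * k - 1) * m / (k * (b - a) ** k) * comb(m + k - 1, m - k)

def caputo_of_expansion(x, c_star, alpha, a, b, n=1):
    """Variable-order Caputo derivative (order in (n-1, n]) of sum_m c_m^* T_m^*(x)."""
    N = len(c_star) - 1
    al = alpha(x)
    total = 0.0
    for m in range(n, N + 1):
        for k in range(n, m + 1):
            total += (c_star[m] * gamma_coeff(m, k, a, b)
                      * gamma(k + 1) / gamma(k - al + 1) * (x - a) ** (k - al))
    return total

a, b = 0.0, 1.0
c_star = [1.0, 0.5, -0.25, 0.1]            # made-up truncated SCP coefficients (N = 3)
alpha = lambda x: 0.6 + 0.3 * x            # assumed variable order in (0, 1], so n = 1
print(caputo_of_expansion(0.4, c_star, alpha, a, b))
```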

3. Algorithm of Solution for the Main Equation

The presented method begins with an approximation of the function $v(x,t)$ of the following form:
$$ v(x,t) = \sum_{m=0}^{N} c_m^*(t)\, T_m^*(x), \quad \text{for } x \in [a,b], \ t \in [0,T]. \quad (8) $$
Upon inserting (8) into the main Equation (1), we obtain:
$$ \sum_{m=0}^{N} \left[ {}_{0}^{C}D_t^{\alpha(t)} c_m^*(t) + \sum_{i=1}^{n_1} a_i(x,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} c_m^*(t) \right] T_m^*(x) + \sum_{i=1}^{n_2} b_i(x,t) \sum_{m=0}^{N} {}_{0}^{C}D_t^{\beta_i(t)} c_m^*(t)\, {}_{a}^{C}D_x^{\lambda_i(x)} T_m^*(x) = f(x,t). \quad (9) $$
By substituting the spatial derivatives of order $\lambda_i(x)$ of $T_m^*(x)$ into (9), we have:
$$ \sum_{m=0}^{N} \left[ {}_{0}^{C}D_t^{\alpha(t)} c_m^*(t) + \sum_{i=1}^{n_1} a_i(x,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} c_m^*(t) \right] T_m^*(x) + \sum_{i=1}^{n_2} b_i(x,t) \sum_{m=i}^{N} \sum_{k=i}^{m} \gamma_{m,k}\, \frac{\Gamma(k+1)\, {}_{0}^{C}D_t^{\beta_i(t)} c_m^*(t)}{\Gamma(k-\lambda_i(x)+1)}\, (x-a)^{k-\lambda_i(x)} = f(x,t), $$
wherein $i - 1 < \lambda_i(x) \le i$. Furthermore, the initial conditions (2) can be considered as:
$$ v(x,0) = h_0(x) \ \Longrightarrow\ \sum_{m=0}^{N} c_m^*(0)\, T_m^*(x) = h_0(x), \qquad \frac{\partial^k v(x,0)}{\partial t^k} = h_k(x) \ \Longrightarrow\ \sum_{m=0}^{N} \frac{d^k c_m^*(0)}{dt^k}\, T_m^*(x) = h_k(x), \quad k = 1, \ldots, n-1, \quad (10) $$
where $c_m^*(0) = \frac{(T_m^*, h_0)_{\omega^*}}{\|T_m^*\|_{\omega^*,2}^2}$ and $\frac{d^k c_m^*(0)}{dt^k} = \frac{(T_m^*, h_k)_{\omega^*}}{\|T_m^*\|_{\omega^*,2}^2}$. Also, the boundary conditions (2) can be considered as follows:
$$ v(a,t) = g_0(t) \ \Longrightarrow\ \sum_{m=0}^{N} c_m^*(t)\, T_m^*(a) = g_0(t) \ \Longrightarrow\ \sum_{m=0}^{N} (-1)^m c_m^*(t) = g_0(t), \qquad v(b,t) = g_1(t) \ \Longrightarrow\ \sum_{m=0}^{N} c_m^*(t)\, T_m^*(b) = g_1(t) \ \Longrightarrow\ \sum_{m=0}^{N} c_m^*(t) = g_1(t). \quad (11) $$
Furthermore, solving the boundary conditions (11) in terms of c N 1 * ( t ) and c N * ( t ) results in:
$$ c_{N-1}^*(t) = \begin{cases} -\displaystyle\sum_{m=0}^{\frac{N-4}{2}} c_{2m+1}^*(t) + \dfrac{1}{2}\left( g_1(t) - g_0(t) \right), & N \in E, \\[3mm] -\displaystyle\sum_{m=0}^{\frac{N-3}{2}} c_{2m}^*(t) + \dfrac{1}{2}\left( g_0(t) + g_1(t) \right), & N \in O, \end{cases} \qquad c_{N}^*(t) = \begin{cases} -\displaystyle\sum_{m=0}^{\frac{N-2}{2}} c_{2m}^*(t) + \dfrac{1}{2}\left( g_0(t) + g_1(t) \right), & N \in E, \\[3mm] -\displaystyle\sum_{m=0}^{\frac{N-3}{2}} c_{2m+1}^*(t) + \dfrac{1}{2}\left( g_1(t) - g_0(t) \right), & N \in O, \end{cases} $$
where $E$ and $O$, respectively, indicate the sets of even and odd integers. Clearly, both $c_{N-1}^*(t)$ and $c_N^*(t)$ depend on the remaining unknown components $\{ c_m^*(t) \mid 0 \le m \le N-2 \}$, which form an independent set. To determine these unknown components, we utilize the Chebyshev collocation method along the spatial variable, which yields
$$ \sum_{m=0}^{N} \left[ {}_{0}^{C}D_t^{\alpha(t)} c_m^*(t) + \sum_{i=1}^{n_1} a_i(x_j,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} c_m^*(t) \right] T_m^*(x_j) + \sum_{i=1}^{n_2} b_i(x_j,t) \sum_{m=i}^{N} \sum_{k=i}^{m} \gamma_{m,k}\, \frac{\Gamma(k+1)\, {}_{0}^{C}D_t^{\beta_i(t)} c_m^*(t)}{\Gamma(k-\lambda_i(x_j)+1)}\, (x_j-a)^{k-\lambda_i(x_j)} = f(x_j,t), \quad (12) $$
wherein $j = 0, 1, \ldots, N-2$ and the $x_j$ represent the following Chebyshev collocation points over the interval [a, b]:
$$ x_j = \frac{b+a}{2} + \frac{b-a}{2} \cos\!\left( \frac{j\pi}{N-2} \right), \quad j = 0, 1, 2, \ldots, N-2. $$
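Generating these collocation points is a one-line task; the helper below is our own utility (the function name and the sample arguments are illustrative only).

```python
import numpy as np

def chebyshev_collocation_points(a, b, N):
    """Points x_j = (b+a)/2 + (b-a)/2 * cos(j*pi/(N-2)), j = 0, ..., N-2, as used in the scheme."""
    j = np.arange(N - 1)                      # j = 0, 1, ..., N-2
    return (b + a) / 2 + (b - a) / 2 * np.cos(j * np.pi / (N - 2))

print(chebyshev_collocation_points(0.0, 10.0, 8))   # N = 8 gives the 7 points in [0, 10]
```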
Note that the number of collocation points used matches the number of independent unknown components in the set $\{ c_m^*(t) \mid 0 \le m \le N-2 \}$.
To simplify matters, we can rephrase (12) given the conditions (10) as
$$ T_j^*(x_j)\, {}_{0}^{C}D_t^{\alpha(t)} c_j^*(t) = -F_j(\mathbf{C}^*, t) + f(x_j, t), \quad j = 0, 1, 2, \ldots, N-2, \qquad c_j^*(0) = \frac{(T_j^*, h_0)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad c_j^{*(k)}(0) = \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad k = 1, 2, \ldots, n-1, \quad (13) $$
wherein $\mathbf{C}^* = \left( c_0^*(t), c_1^*(t), c_2^*(t), \ldots, c_N^*(t) \right)$, and
$$ F_j(\mathbf{C}^*, t) = \sum_{\substack{m=0 \\ m \ne j}}^{N} \left[ {}_{0}^{C}D_t^{\alpha(t)} c_m^*(t) + \sum_{i=1}^{n_1} a_i(x_j,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} c_m^*(t) \right] T_m^*(x_j) + \sum_{i=1}^{n_2} b_i(x_j,t) \sum_{m=i}^{N} \sum_{k=i}^{m} \gamma_{m,k}\, \frac{\Gamma(k+1)\, {}_{0}^{C}D_t^{\beta_i(t)} c_m^*(t)}{\Gamma(k-\lambda_i(x_j)+1)}\, (x_j-a)^{k-\lambda_i(x_j)}. $$
Since solving (13) depends on having { c m * ( t ) | 0 m N 2 } , we proceed to approximate them as:
$$ c_j^*(t) \approx c_{j,M}^*(t) = \mu_j(t) + \sum_{i=0}^{M} r_{i,j}\, \rho_i(t), \quad j = 0, 1, 2, \ldots, N-2. \quad (14) $$
In this context, μ j ( t ) and { ρ i ( t ) | 0 i M } stand as particular solutions to the auxiliary differential equations, established to meet the initial conditions of (13) for c j , M * ( t ) . To ensure that (14) conforms to the initial conditions of (13), it is required that:
$$ c_{j,M}^*(0) = \mu_j(0) + \sum_{i=0}^{M} r_{i,j}\, \rho_i(0) = \frac{(T_j^*, h_0)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \qquad c_{j,M}^{*(k)}(0) = \mu_j^{(k)}(0) + \sum_{i=0}^{M} r_{i,j}\, \rho_i^{(k)}(0) = \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad j = 0, 1, 2, \ldots, N-2, \quad k = 1, 2, \ldots, n-1. $$
Hence, we adopt the subsequent initial conditions for μ j ( t ) and { ρ i ( t ) | 0 i M } as:
$$ \mu_j(0) = \frac{(T_j^*, h_0)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad \rho_i(0) = 0, \quad \mu_j^{(k)}(0) = \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad \rho_i^{(k)}(0) = 0, \quad j = 0, 1, \ldots, N-2, \quad k = 1, 2, \ldots, n-1, \quad i = 0, 1, \ldots, M. $$
To further elaborate, the above-mentioned conditions are adequate to ensure that $c_{j,M}^*(t)$ complies with the initial conditions of (13). Consider $V = \{ t^{i\delta} \mid 0 \le i \le M, \ \delta \in (0,1] \}$ as a set of basis functions defined over the interval [0, T]. We now determine $\{ \mu_j(t) \mid 0 \le j \le N-2 \}$ and $\{ \rho_i(t) \mid 0 \le i \le M \}$ by solving the following auxiliary problems:
$$ \begin{cases} {}_{0}^{C}D_t^{\alpha(t)} \mu_j(t) = \dfrac{f(x_j,t)}{T_j^*(x_j)}, & j = 0, 1, 2, \ldots, N-2, \\[2mm] \mu_j(0) = \dfrac{(T_j^*, h_0)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, \quad \mu_j^{(k)}(0) = \dfrac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}, & k = 1, 2, \ldots, n-1, \end{cases} \qquad \text{and} \qquad \begin{cases} {}_{0}^{C}D_t^{\alpha(t)} \rho_i(t) = \dfrac{t^{i\delta}}{T_j^*(x_j)}, & i = 0, 1, 2, \ldots, M, \\[2mm] \rho_i(0) = 0, \quad \rho_i^{(k)}(0) = 0, & k = 1, 2, \ldots, n-1. \end{cases} $$
Hence, solutions of μ j ( t ) and ρ i ( t ) can be explicitly represented as follows:
$$ \mu_j(t) = \int_0^t \frac{(t-\tau)^{\alpha(t)-1}\, f(x_j,\tau)}{\Gamma(\alpha(t))\, T_j^*(x_j)}\, d\tau + \sum_{k=0}^{n-1} \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}\, t^k, \qquad \rho_i(t) = \frac{\Gamma(i\delta+1)\, t^{i\delta+\alpha(t)}}{\Gamma(i\delta+\alpha(t)+1)\, T_j^*(x_j)}, \quad j = 0, 1, \ldots, N-2, \quad i = 0, 1, \ldots, M. \quad (15) $$
Considering (14) and (15) for j = 0 , 1 , , N 2 , we derive:
$$ c_{j,M}^*(t) = \int_0^t \frac{(t-\tau)^{\alpha(t)-1}\, f(x_j,\tau)}{\Gamma(\alpha(t))\, T_j^*(x_j)}\, d\tau + \sum_{k=0}^{n-1} \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}\, t^k + \sum_{i=0}^{M} r_{i,j}\, \frac{\Gamma(i\delta+1)\, t^{i\delta+\alpha(t)}}{\Gamma(i\delta+\alpha(t)+1)\, T_j^*(x_j)}. $$
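The building blocks μ_j(t) and ρ_i(t) of (15) can be evaluated directly: μ_j by numerical quadrature of the weakly singular integral and ρ_i from its closed form. The sketch below follows the formulas in (15) but is our own illustration; the right-hand side f, the order α(t), the node x_j, the value T_j*(x_j), and the initial-condition coefficients are placeholder inputs.

```python
from scipy.integrate import quad
from scipy.special import gamma
import numpy as np

def mu_j(t, alpha, f, x_j, T_j_at_xj, ic_coeffs=(0.0,)):
    """mu_j(t) from (15); the weakly singular integral is regularized with s = (t - tau)^alpha(t)."""
    al = alpha(t)
    if t == 0.0:
        return ic_coeffs[0]
    g = lambda s: f(x_j, t - s ** (1.0 / al))             # substitution s = (t - tau)^al
    val, _ = quad(g, 0.0, t ** al)
    integral = val / al
    poly = sum(c * t ** k for k, c in enumerate(ic_coeffs))   # sum_k <T_j*, h_k>/||T_j*||^2 * t^k
    return integral / (gamma(al) * T_j_at_xj) + poly

def rho_i(t, i, delta, alpha, T_j_at_xj):
    """rho_i(t) from (15): Gamma(i*delta+1) t^(i*delta+alpha(t)) / (Gamma(i*delta+alpha(t)+1) T_j^*(x_j))."""
    al = alpha(t)
    return gamma(i * delta + 1) * t ** (i * delta + al) / (gamma(i * delta + al + 1) * T_j_at_xj)

# placeholder data for a single collocation node
alpha = lambda t: 0.8 + 0.2 * t            # assumed variable order in (0, 1]
f = lambda x, t: np.sin(np.pi * x) * t     # assumed right-hand side
print(mu_j(0.5, alpha, f, x_j=0.3, T_j_at_xj=1.0),
      rho_i(0.5, 2, 0.4, alpha, T_j_at_xj=1.0))
```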
To aid clarity, let us introduce the following notations:
$$ \mathbf{r} = \left( r_{0,0}, \ldots, r_{0,N-2}, \ldots, r_{M,0}, \ldots, r_{M,N-2} \right), \qquad \mathbf{C}_M^* = \left( c_{0,M}^*(t), c_{1,M}^*(t), \ldots, c_{N,M}^*(t) \right). $$
The final stage of the presented method entails determining suitable values for r . These components are crucial for deriving the approximate solution to (13). To accomplish this, the residual functions are generated by substituting c j , M * ( t ) into (13) as follows:
$$ R_j(\mathbf{r}; t) = T_j^*(x_j)\, {}_{0}^{C}D_t^{\alpha(t)} c_{j,M}^*(t) + F_j(\mathbf{C}_M^*, t) - f(x_j, t), \quad 0 \le t \le T, \quad j = 0, 1, \ldots, N-2. \quad (16) $$
The residual functions in (16) are required to vanish at the specific collocation points $t_i = \frac{T}{2}\left( 1 + \cos\frac{\pi i}{M} \right)$, $0 \le i \le M$. Therefore, the values of the unknown components of $\mathbf{r}$ can be ascertained by solving the following system:
$$ R_j(\mathbf{r}; t_i) = 0, \quad j = 0, 1, 2, \ldots, N-2, \quad \text{and} \quad i = 0, 1, \ldots, M. $$
Thus, the approximation $\mathbf{C}_M^*$ of $\mathbf{C}^*$ is acquired via the computed values of the elements of $\mathbf{r}$. Furthermore, $v(x,t)$ is calculated by substituting $\mathbf{C}^*$ with $\mathbf{C}_M^*$ in (8).
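In practice, the unknown vector r is recovered from the square system R_j(r; t_i) = 0. The fragment below is only a schematic sketch of that final step: residual_at(r, j, i) stands for a user-supplied routine evaluating R_j(r; t_i) for the problem at hand (it is not defined in the paper), and a generic root finder is used, which also covers the linear case.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_for_r(residual_at, N, M):
    """Solve R_j(r; t_i) = 0 for j = 0..N-2, i = 0..M, with r flattened to length (M+1)*(N-1).

    residual_at(r, j, i) is assumed to return the scalar residual R_j(r; t_i)."""
    n_unknowns = (M + 1) * (N - 1)

    def system(r_flat):
        return np.array([residual_at(r_flat, j, i)
                         for i in range(M + 1) for j in range(N - 1)])

    r0 = np.zeros(n_unknowns)                 # zero initial guess
    return fsolve(system, r0)

# usage (schematic): r = solve_for_r(residual_at, N=8, M=8)
```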

Algorithm

In the current section, a detailed presentation of the algorithm for the established scheme is provided. Algorithm  1 is applicable for solving Equation (1).
Algorithm 1: Proposed method.
  • Require:  N > 2 , M N , δ ( 0 , 1 ] .
  • for  j = 0 : N 2  do
  •     % Derive the M-th term approximation for c j * ( t )
  •      $c_j^*(t) \approx c_{j,M}^*(t) = \int_0^t \frac{(t-\tau)^{\alpha(t)-1} f(x_j,\tau)}{\Gamma(\alpha(t))\, T_j^*(x_j)}\, d\tau + \sum_{k=0}^{n-1} \frac{(T_j^*, h_k)_{\omega^*}}{\|T_j^*\|_{\omega^*,2}^2}\, t^k + \sum_{i=0}^{M} r_{i,j}\, \frac{\Gamma(i\delta+1)\, t^{i\delta+\alpha(t)}}{\Gamma(i\delta+\alpha(t)+1)\, T_j^*(x_j)}$.
  • end for
  • if N is an even number, then
  •      $c_{N-1}^*(t) \approx c_{N-1,M}^*(t) = -\sum_{m=0}^{\frac{N-4}{2}} c_{2m+1}^*(t) + \frac{1}{2}\left( g_1(t) - g_0(t) \right)$, $\quad c_N^*(t) \approx c_{N,M}^*(t) = -\sum_{m=0}^{\frac{N-2}{2}} c_{2m}^*(t) + \frac{1}{2}\left( g_0(t) + g_1(t) \right)$.
  • else
  •      $c_{N-1}^*(t) \approx c_{N-1,M}^*(t) = -\sum_{m=0}^{\frac{N-3}{2}} c_{2m}^*(t) + \frac{1}{2}\left( g_0(t) + g_1(t) \right)$, $\quad c_N^*(t) \approx c_{N,M}^*(t) = -\sum_{m=0}^{\frac{N-3}{2}} c_{2m+1}^*(t) + \frac{1}{2}\left( g_1(t) - g_0(t) \right)$.
  • end if
  • for  j = 0 : N 2  do
  •     % Generate the residual functions:
  •      $R_j(\mathbf{r}; t) = T_j^*(x_j)\, {}_{0}^{C}D_t^{\alpha(t)} c_{j,M}^*(t) + F_j(\mathbf{C}_M^*, t) - f(x_j, t)$.
  • end for
  • for  i = 0 : M   do
  •      $t_i = \frac{T}{2}\left( 1 + \cos\frac{i\pi}{M} \right)$.
  • end for
  • for  i = 0 : M  do
  •     for  j = 0 : N 2  do
  •         % Generate equations by calculating the residual functions at the collocation points:
  •          R j ( r ; t i ) = 0 .
  •     end for
  • end for
  • Find the solution to the resulting system R j ( r ; t i ) = 0 and obtain the values of the unknown coefficients { r i , j | 0 i M , 0 j N 2 } .
  • Utilize the acquired values of { r i , j | 0 i M , 0 j N 2 } to form { c j , M * ( t ) | 0 j N 2 } .
  • Formulate the approximate solution of (1) as $v(x,t) \approx \sum_{j=0}^{N} c_{j,M}^*(t)\, T_j^*(x)$, for $a \le x \le b$ and $0 \le t \le T$.

4. Convergence Analysis

Based on Algorithm 1, the approximate solution is
$$ v(x,t) \approx \sum_{j=0}^{N} c_{j,M}^*(t)\, T_j^*(x) = v_{app}^{N,M}(x,t). $$
By substituting v a p p N , M ( x , t ) in (1), we can establish the expected residual function for the governing equation as:
$$ E_{N,M}(x,t,\mathbf{r}) = {}_{0}^{C}D_t^{\alpha(t)} v_{app}^{N,M}(x,t) + \sum_{i=1}^{n_1} a_i(x,t)\, {}_{0}^{C}D_t^{\alpha_i(t)} v_{app}^{N,M}(x,t) + \sum_{i=1}^{n_2} b_i(x,t)\, {}_{0}^{C}D_t^{\beta_i(t)}\, {}_{a}^{C}D_x^{\lambda_i(x)} v_{app}^{N,M}(x,t) - f(x,t). $$
Next, we proceed by collocating $E_{N,M}(x,t,\mathbf{r})$ at the zeros of $T_{N-1}^*(x)$, and define $E_{N,M}^j(t,\mathbf{r}) := E_{N,M}(x_j,t,\mathbf{r})$.
The ensuing theorem establishes that $\mathbf{r}$ can be obtained in such a way that $|E_{N,M}^j(t,\mathbf{r})|$ is made arbitrarily small.
Theorem 3. 
Let $\{ a_i(x,t) \mid 1 \le i \le n_1 \}$, $\{ b_i(x,t) \mid 1 \le i \le n_2 \}$, and $f(x,t)$ be continuous functions defined on $[a,b] \times [0,T]$. Then, for any given positive ϵ, there is a positive integer $M_0$ such that:
$$ \forall M \ge M_0, \quad 0 \le t \le T: \quad \left| E_{N,M}^j(t,\mathbf{r}) \right| < \epsilon. $$
Proof. 
Given the aforementioned assumptions, $\{ E_{N,M}^j(t,\mathbf{r}) \mid 0 \le j \le N-2 \}$ is a set of continuous functions. Consequently, this allows us to expand $E_{N,M}^j(t,\mathbf{r})$ in terms of the SCPs as:
$$ E_{N,M}^j(t,\mathbf{r}) = \sum_{i=0}^{\infty} c_{i,j}\, T_i^*(t), \quad j = 0, 1, 2, \ldots, N-2, $$
where { c i , j | 0 j N 2 , i 0 } depends on the components of r . Additionally, it can be determined through the following expression:
$$ c_{i,j} = \frac{(T_i^*, E_{N,M}^j)_{\omega^*}}{\|T_i^*\|_{\omega^*,2}^2} = \frac{2 - \delta_{0,i}}{\pi} \int_0^T E_{N,M}^j(t,\mathbf{r})\, T_i^*(t)\, \omega^*(t)\, dt. $$
Suppose that we set
$$ c_{i,j} = 0, \quad \text{for } j = 0, 1, 2, \ldots, N-2, \quad \text{and} \quad i = 0, 1, \ldots, M. \quad (17) $$
The solution of system (17) provides the values of $r_{i,j}$ for $j = 0, 1, 2, \ldots, N-2$ and $i = 0, 1, \ldots, M$. As $|T_i^*(t)| \le 1$, we can derive the following inequality:
$$ \left| E_{N,M}^j(t,\mathbf{r}) \right| = \left| \sum_{i=M+1}^{\infty} c_{i,j}\, T_i^*(t) \right| \le \sum_{i=M+1}^{\infty} |c_{i,j}|, \quad j = 0, 1, 2, \ldots, N-2. $$
Since the shifted Chebyshev expansion of $E_{N,M}^j(t,\mathbf{r})$ converges, for any positive ϵ there is a positive integer $M_0$ such that for each $M \ge M_0$, we have:
$$ \left| E_{N,M}^j(t,\mathbf{r}) \right| \le \sum_{i=M+1}^{\infty} |c_{i,j}| < \epsilon, \quad j = 0, 1, 2, \ldots, N-2. $$
Therefore, the proof is concluded. □

5. Numerical Illustrations

In this section, we employ our presented scheme to solve some test problems to demonstrate its notable performance. To enhance the error analysis for each problem, we introduce the following convergence indicators, accompanied by their corresponding experimental convergence rates:
  • Reference error: $R.E.(N,M) = \max_{a \le x \le b,\ 0 \le t \le T} \left| v(x,t) - v_{app}^{N,M}(x,t) \right|$;
  • Maximum absolute residual function: $M.R.E.(N,M) = \max_{a \le x \le b,\ 0 \le t \le T} \left| E_{N,M}(x,t,\mathbf{r}) \right|$;
  • Relative error: $Rel.E.(N,M) = \dfrac{\sum_{i=1}^{N} \sum_{j=1}^{M} \left( v(x_i,t_j) - v_{app}^{N,M}(x_i,t_j) \right)^2}{\sum_{i=1}^{N} \sum_{j=1}^{M} v(x_i,t_j)^2}$,
    where $x_i = a + \frac{(b-a)\, i}{N}$ and $t_j = \frac{j\, T}{M}$.
If we designate ε to represent any of the aforementioned error indicators, then its corresponding experimental convergence rate is determined by:
$$ r_{n,m}^{\varepsilon}(N,M) = \log_2 \frac{\varepsilon(N,M)}{\varepsilon(N+n, M+m)}. $$
In practice, $r_{n,m}^{\varepsilon}(N,M) = k$ signifies that as the number of SCPs in (8) rises from N to N + n, and the number of approximating terms $\rho_i(t)$ rises from M to M + m, the maximum value of ε diminishes by a factor of $2^k$.
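Evaluating this rate from two successive error values is a one-liner; as an illustration, the snippet below reproduces the first rate reported in Table 2 from its first two R.E. entries.

```python
import math

def experimental_rate(err_coarse, err_fine):
    """r = log2(eps(N, M) / eps(N+n, M+m)) for two successive error indicators."""
    return math.log2(err_coarse / err_fine)

print(experimental_rate(3.3790e-3, 3.9247e-5))   # ~6.43, matching the first rate in Table 2
```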
Remark 2. 
The maximum absolute residual function M . R . E . ( N , M ) can be used to evaluate the accuracy of the approximate solution. If the exact solution is unavailable, this error indicator is applicable. When M . R . E . ( N , M ) 0 , the error of the approximate solution is deemed negligible.
Problem 1. 
As a first challenge, we employ our method on the ensuing one-dimensional time-fractional diffusion equation:
$$ {}_{0}^{C}D_t^{\alpha(t)} v(x,t) = k\, \frac{\partial^2 v(x,t)}{\partial x^2} + \left( \frac{2\, t^{2-\alpha(t)}}{\Gamma(3-\alpha(t))} + \frac{k\, \pi^2\, t^2}{L^2} \right) \sin\!\left( \frac{\pi x}{L} \right), \quad 0 < \alpha(t) < 1, \quad (18) $$
subject to the following conditions:
$$ v(x,0) = 0, \quad 0 \le x \le L, \qquad v(0,t) = v(L,t) = 0, \quad 0 \le t \le T. $$
The analytical solution of Equation (18) is $v(x,t) = t^2 \sin\!\left( \frac{\pi x}{L} \right)$. We solve this equation for $\alpha(t) = 0.8 + 0.2\, t/T$, $k = 0.01$, $L = 10$, and $T = 0.5$. In Table 1, the absolute errors of our results at $x = 5$ are compared with those documented in [27]. These results underscore the superior precision of our algorithm over the alternative methods. Table 2 presents the convergence analysis of our algorithm, detailing the error indicators and the corresponding experimental order of convergence of our computed results. Additionally, Figure 1 depicts the error indicators evaluated by the proposed method for the parameters $N = 23$, $M = 8$, and $\delta = 0.4$.
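As a sanity check of the manufactured problem (our own verification, not one of the reported experiments), one can confirm that the exact solution v(x,t) = t² sin(πx/L) satisfies (18) by applying the Caputo power rule of Remark 1; the reading α(t) = 0.8 + 0.2t/T is assumed here.

```python
import numpy as np
from scipy.special import gamma

k, L, T = 0.01, 10.0, 0.5
alpha = lambda t: 0.8 + 0.2 * t / T          # assumed reading of the order used in Problem 1

def lhs(x, t):
    """Caputo derivative of v = t^2 sin(pi x / L), using D_t^a t^2 = 2 t^(2-a)/Gamma(3-a)."""
    a = alpha(t)
    return 2 * t ** (2 - a) / gamma(3 - a) * np.sin(np.pi * x / L)

def rhs(x, t):
    """Right-hand side of (18) evaluated at the exact solution."""
    a = alpha(t)
    v_xx = -(np.pi / L) ** 2 * t ** 2 * np.sin(np.pi * x / L)
    forcing = (2 * t ** (2 - a) / gamma(3 - a) + k * np.pi ** 2 * t ** 2 / L ** 2) * np.sin(np.pi * x / L)
    return k * v_xx + forcing

x, t = 5.0, 0.3
print(lhs(x, t) - rhs(x, t))   # ~0: the exact solution satisfies (18)
```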
Problem 2. 
For the second test problem, we apply our approach to the following variable-order fractional Burgers' equation, a one-dimensional linear inhomogeneous equation. Burgers' equation is used extensively in the simulation of phenomena such as gas dynamics, fluid mechanics, traffic flow, and turbulence:
$$ {}_{0}^{C}D_t^{\alpha(t)} v(x,t) + \frac{\partial v(x,t)}{\partial x} = \frac{\partial^2 v(x,t)}{\partial x^2} + \frac{2\, t^{2-\alpha(t)}}{\Gamma(3-\alpha(t))} + 2x - 2, \quad 0 < \alpha(t) < 1, \quad (19) $$
subject to the following initial and boundary conditions:
$$ v(x,0) = x^2, \quad 0 \le x \le 1, \qquad v(0,t) = t^2, \quad v(1,t) = 1 + t^2, \quad 0 \le t \le 1. $$
The analytical solution of Equation (19) is $v(x,t) = x^2 + t^2$. In our numerical results, we assume that $\alpha(t) = \frac{50t + 49}{100}$. Figure 2 shows the error indicators evaluated through our approach, with parameters set as $N = 8$, $M = 8$, and $\delta = 0.4$. Additionally, in Table 3, the reference error, maximum absolute residual error, and relative error of our results across various values of N and M are reported.
Problem 3. 
This problem involves a multi-term variable-order time-fractional PDE, as shown below:
$$ {}_{0}^{C}D_t^{\frac{25+t^2}{20}} v(x,t) + {}_{0}^{C}D_t^{\frac{24+\cos t}{20}} v(x,t) + {}_{0}^{C}D_t^{\frac{23+t}{20}} v(x,t) + {}_{0}^{C}D_t^{\frac{22+\sin t}{20}} v(x,t) = \frac{\partial^2 v(x,t)}{\partial x^2} + f(x,t), \quad (20) $$
with the following conditions:
$$ v(x,0) = 0, \quad \frac{\partial v(x,0)}{\partial t} = 0, \quad -1 \le x \le 1, \qquad v(-1,t) = v(1,t) = \left( \operatorname{sech}(0.9) + \operatorname{sech}(1.1) \right) t^2, \quad 0 \le t \le 1. $$
Equation (20) possesses the exact solution $v(x,t) = \left( \operatorname{sech}(x-0.1) + \operatorname{sech}(x+0.1) \right) t^2$, with the following $f(x,t)$:
$$ f(x,t) = 2 \left( \operatorname{sech}(x-0.1) + \operatorname{sech}(x+0.1) \right) \left[ \frac{t^{\frac{15-t^2}{20}}}{\Gamma\!\left( \frac{35-t^2}{20} \right)} + \frac{t^{\frac{16-\cos t}{20}}}{\Gamma\!\left( \frac{36-\cos t}{20} \right)} + \frac{t^{\frac{17-t}{20}}}{\Gamma\!\left( \frac{37-t}{20} \right)} + \frac{t^{\frac{18-\sin t}{20}}}{\Gamma\!\left( \frac{38-\sin t}{20} \right)} - \frac{t^2}{2} \right] + 2 t^2 \left( \operatorname{sech}^3(x-0.1) + \operatorname{sech}^3(x+0.1) \right). $$
In Table 4, the reference error, maximum absolute residual error, and relative error of our results across various values of N and M are reported. Furthermore, the relative errors are juxtaposed with those documented in [28]. Table 5 illustrates the convergence analysis of our algorithm, providing error indicators and the corresponding experimental order of convergence for our approximations. Additionally, Figure 3 exhibits graphical depictions of error indicators assessed by our proposed method, utilizing parameters N = 28 , M = 28 , and δ = 0.25 .
Problem 4. 
Our last variable-order time fractional PDE is:
$$ {}_{0}^{C}D_t^{\frac{38+t}{20}} v(x,t) + {}_{0}^{C}D_t^{\frac{8+\sin t}{5}} v(x,t) = \frac{\partial^2 v(x,t)}{\partial x^2} + f(x,t), \quad 0 \le x \le 1, \quad 0 \le t \le 1, \quad (21) $$
with the following conditions:
$$ v(x,0) = \frac{\partial v(x,0)}{\partial t} = 0, \quad 0 \le x \le 1, \qquad v(0,t) = \exp(-4)\, t^2, \quad v(1,t) = \exp(-64)\, t^2, \quad 0 \le t \le 1. $$
Equation (21) possesses the exact solution $v(x,t) = t^2 \exp\!\left( -100 (x-0.2)^2 \right)$, with the following $f(x,t)$:
$$ f(x,t) = 2 \exp\!\left( -4 (5x-1)^2 \right) \left[ \frac{t^{\frac{2-t}{20}}}{\Gamma\!\left( \frac{22-t}{20} \right)} + \frac{t^{\frac{2-\sin t}{5}}}{\Gamma\!\left( \frac{7-\sin t}{5} \right)} - 100\, t^2 \left( 200 x^2 - 80 x + 7 \right) \right]. $$
In Table 6, the reference error, maximum absolute residual error, and relative error of our results across various values of N and M are reported. Furthermore, the relative errors are juxtaposed with those documented in [28]. Table 7 illustrates the convergence analysis of our algorithm, providing error indicators and the corresponding experimental order of convergence for our approximations. Additionally, Figure 4 exhibits graphical depictions of error indicators assessed by our proposed method, utilizing parameters N = 25 , M = 25 , and δ = 1 .
Based on the findings presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, it is evident that the maximum absolute values of error indicators exhibit a rapid decrease with increasing values of N or M. Moreover, the reported experimental convergence order in the tables underscores the efficacy and dependability of the proposed method.

6. Conclusions

We presented a novel and efficient algorithm for dealing with space–time variable-order fractional PDEs in their most general form. This encompasses a comprehensive investigation of these equations and their associated initial and boundary conditions, formulated in a generalized framework. Our method approximates the solution of the governing equation by a combination of shifted Chebyshev polynomials with time-dependent coefficients. These coefficients were determined by incorporating the boundary conditions and applying the collocation method. Theoretical analysis and numerical experiments corroborated the effectiveness and efficiency of our approach. Consequently, our method is a suitable algorithm for addressing a wide range of differential equations.

Author Contributions

Conceptualization, S.K. and F.A.; Methodology, S.K. and F.A.; Software, S.K. and F.A.; Formal analysis, M.T.D.; Writing—original draft, S.K., F.A. and M.T.D.; Writing—review & editing, K.H. and E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of the study are available upon reasonable request from the authors.

Acknowledgments

The authors deeply appreciate the valuable feedback from the esteemed reviewers and the respected editor, which has greatly enhanced the quality of this paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Diethelm, K. The Analysis of Fractional Differential Equations; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  2. Ross, B.; Samko, S. Integration and differentiation to a variable fractional order. Integral Transform. Spec. Funct. 1993, 1, 277–300. [Google Scholar] [CrossRef]
  3. Samko, S. Fractional integration and differentiation of variable order. Anal. Math. 1995, 21, 213–236. [Google Scholar] [CrossRef]
  4. Sun, H.; Chen, W.; Wei, H.; Chen, Y. A comparative study of constant-order and variable-order fractional models in characterizing memory property of systems. Eur. Phys. J. Spec. Top. 2011, 193, 185–192. [Google Scholar] [CrossRef]
  5. Samko, S. Fractional integration and differentiation of variable order: An overview. Nonlinear Dyn. 2013, 71, 653–662. [Google Scholar] [CrossRef]
  6. Almeida, R.; Tavares, D.; Torres, D.F.M. The Variable-Order Fractional Calculus of Variations; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  7. Chechkin, A.; Gorenflo, R.; Sokolov, I. Fractional diffusion in inhomogeneous media. J. Phys. A Math. Gen. 2005, 38, L679–L684. [Google Scholar] [CrossRef]
  8. Zheng, X.; Wang, H.; Fu, H. Analysis of a physically-relevant variable-order time-fractional reaction–diffusion model with Mittag–Leffler kernel. Appl. Math. Lett. 2021, 112, 106804. [Google Scholar] [CrossRef]
  9. Guo, X.; Zheng, X. Variable-order time-fractional diffusion equation with Mittag–Leffler kernel: Regularity analysis and uniqueness of determining variable order. Z. Angew. Math. Phys. 2023, 74, 64–68. [Google Scholar] [CrossRef]
  10. Coimbra, C. Mechanics with variable-order differential operators. Ann. Phys. 2001, 12, 692–703. [Google Scholar] [CrossRef]
  11. Orosco, J.; Coimbra, C. On the control and stability of variable-order mechanical systems. Nonlinear Dyn. 2016, 86, 695–710. [Google Scholar] [CrossRef]
  12. Ganji, R.M.; Jafari, H.; Adem, A.R. A numerical scheme to solve variable order diffusion-wave equations. Therm. Sci. 2019, 23, S2063–S2071. [Google Scholar] [CrossRef]
  13. Ganji, R.; Jafari, H. A numerical approach for multi-variable orders differential equations using Jacobi polynomials. Int. J. Appl. Comput. Math. 2019, 5, 34. [Google Scholar] [CrossRef]
  14. Ganji, R.; Jafari, H.; Baleanu, D. A new approach for solving multi variable orders differential equations with Mittag–Leffler kernel. Chaos Solitons Fractals 2020, 130, 109405. [Google Scholar] [CrossRef]
  15. Doha, E.H.; Abdelkawy, M.A.; Amin, A.Z.M.; Baleanu, D. Spectral technique for solving variable-order fractional Volterra integro-differential equations. Numer. Methods Partial Differ. Equ. 2018, 34, 1659–1677. [Google Scholar] [CrossRef]
  16. Chen, Y.-M.; Wei, Y.-Q.; Liu, D.-Y.; Yu, H. Numerical solution for a class of nonlinear variable order fractional differential equations with Legendre wavelets. Appl. Math. Lett. 2015, 46, 83–88. [Google Scholar] [CrossRef]
  17. Xu, Y.; Ertürk, V.S. A finite difference technique for solving variable-order fractional integro-differential equations. Bull. Iran. Math. Soc. 2014, 40, 699–712. [Google Scholar]
  18. Yang, J.; Yao, H.; Wu, B. An efficient numerical method for variable order fractional functional differential equation. Appl. Math. Lett. 2018, 76, 221–226. [Google Scholar] [CrossRef]
  19. Jafari, H.; Tajadodi, H.; Ganji, R.M. A numerical approach for solving variable order differential equations based on Bernstein polynomials. Comput. Math. Methods 2019, 1, e1055. [Google Scholar] [CrossRef]
  20. Tavares, D.; Almeida, R.; Torres, D.F. Caputo derivatives of fractional variable order: Numerical approximations. Commun. Nonlinear Sci. Numer. Simul. 2016, 35, 69–87. [Google Scholar] [CrossRef]
  21. Santamaria, F.; Wils, S.; Schutter, E.D.; Augustine, G.J. Anomalous diffusion in Purkinje cell dendrites caused by spines. Neuron 2006, 52, 635–648. [Google Scholar] [CrossRef]
  22. Sun, H.; Chen, W.; Chen, Y. Variable-order fractional differential operators in anomalous diffusion modeling. Phys. A Stat. Mech. Appl. 2009, 388, 4586–4592. [Google Scholar] [CrossRef]
  23. Kheybari, S. Numerical algorithm to Caputo type time-space fractional partial differential equations with variable coefficients. Math. Comput. Simul. 2021, 182, 66–85. [Google Scholar] [CrossRef]
  24. Kheybari, S.; Darvishi, M.T.; Hashemi, M.S. A semi-analytical approach to Caputo type time-fractional modified anomalous sub-diffusion equations. Appl. Numer. Math. 2020, 158, 103–122. [Google Scholar] [CrossRef]
  25. Shen, J.; Tang, T.; Wang, L.-L. Spectral Methods: Algorithms, Analysis and Applications; Springer: Berlin/Heidelberg, Germany, 2011; Volume 41. [Google Scholar]
  26. Gottlieb, D.; Orszag, S.A. Numerical Analysis of Spectral Methods, Theory and Applications; SIAM-CBMS: Philadelphia, PA, USA, 1977. [Google Scholar] [CrossRef]
  27. Sun, H.; Chen, W.; Li, C.; Chen, Y. Finite difference schemes for variable-order time fractional diffusion equation. Int. J. Bifurc. Chaos 2012, 22, 1250085. [Google Scholar] [CrossRef]
  28. Tian, X.; Reutskiy, S.Y.; Fu, Z.-J. A novel meshless collocation solver for solving multi-term variable-order time fractional PDEs. Eng. Comput. 2021, 38, 1527–1538. [Google Scholar] [CrossRef]
Figure 1. Graphical representations of error indicators for Problem 1 with the specified parameters $(N, M, \delta) = (23, 8, 0.4)$. (a) Logarithmic plot of the absolute error function: $\log_{10} \left| v(x,t) - v_{app}^{N,M}(x,t) \right|$. (b) Logarithmic plot of the absolute residual function: $\log_{10} \left| E_{N,M}(x,t,\mathbf{r}) \right|$.
Figure 2. Graphical representations of error indicators for Problem 2 with the specified parameters $(N, M, \delta) = (8, 8, 0.4)$. (a) Logarithmic plot of the absolute error function: $\log_{10} \left| v(x,t) - v_{app}^{N,M}(x,t) \right|$. (b) Logarithmic plot of the absolute residual function: $\log_{10} \left| E_{N,M}(x,t,\mathbf{r}) \right|$.
Figure 3. Graphical representations of error indicators for Problem 3 with the specified parameters $(N, M, \delta) = (28, 28, 0.25)$. (a) Logarithmic plot of the absolute error function: $\log_{10} \left| v(x,t) - v_{app}^{N,M}(x,t) \right|$. (b) Logarithmic plot of the absolute residual function: $\log_{10} \left| E_{N,M}(x,t,\mathbf{r}) \right|$.
Figure 4. Graphical representations of error indicators for Problem 4 with the specified parameters $(N, M, \delta) = (25, 25, 1)$. (a) Logarithmic plot of the absolute error function: $\log_{10} \left| v(x,t) - v_{app}^{N,M}(x,t) \right|$. (b) Logarithmic plot of the absolute residual function: $\log_{10} \left| E_{N,M}(x,t,\mathbf{r}) \right|$.
Table 1. Comparative analysis of absolute errors at x = 5 for the equation detailed in Problem 1.
| t | Presented method (M = 8, δ = 0.4), N = 8 | Presented method (M = 8, δ = 0.4), N = 16 | Explicit scheme [27] | Implicit scheme [27] | Crank–Nicolson scheme [27] |
| 0.1 | 3.44 × 10^{-12} | 6.18 × 10^{-22} | 1.20 × 10^{-04} | 5.74 × 10^{-04} | 4.18 × 10^{-04} |
| 0.2 | 2.40 × 10^{-11} | 4.30 × 10^{-21} | 3.56 × 10^{-03} | 1.30 × 10^{-03} | 6.98 × 10^{-04} |
| 0.3 | 7.48 × 10^{-11} | 1.33 × 10^{-20} | 1.26 × 10^{-02} | 2.45 × 10^{-03} | 5.68 × 10^{-04} |
| 0.4 | 1.68 × 10^{-10} | 2.98 × 10^{-20} | 2.90 × 10^{-02} | 4.28 × 10^{-03} | 2.33 × 10^{-04} |
| 0.5 | 3.14 × 10^{-10} | 5.55 × 10^{-20} | 5.43 × 10^{-02} | 7.12 × 10^{-03} | 2.03 × 10^{-03} |
Table 2. Error indicators and their corresponding convergence orders associated with our obtained results for Problem 1.
| M | N | R.E.(N, M) | r_{2,0}^{R.E.}(N, M) | M.R.E.(N, M) | r_{2,0}^{M.R.E.}(N, M) |
| 8 | 4 | 3.3790 × 10^{-03} | 6.4279 | 1.0656 × 10^{-02} | 6.4475 |
| 8 | 6 | 3.9247 × 10^{-05} | 7.0499 | 1.2210 × 10^{-04} | 7.1165 |
| 8 | 8 | 2.9619 × 10^{-07} | 7.5739 | 8.7989 × 10^{-07} | 7.7512 |
| 8 | 10 | 1.5545 × 10^{-09} | 7.8778 | 4.0840 × 10^{-09} | 8.1624 |
| 8 | 12 | 6.6088 × 10^{-12} | 8.1984 | 1.4255 × 10^{-11} | 8.6177 |
| 8 | 14 | 2.2498 × 10^{-14} | 8.4095 | 3.6289 × 10^{-14} | 8.9109 |
| 8 | 16 | 6.6166 × 10^{-17} | 8.4291 | 7.5394 × 10^{-17} | 8.8769 |
| 8 | 18 | 1.9196 × 10^{-19} | 9.1585 | 1.6037 × 10^{-19} | 9.9178 |
| 8 | 20 | 3.3592 × 10^{-22} | 9.1056 | 1.6579 × 10^{-22} | 9.7552 |
| 8 | 22 | 6.0977 × 10^{-25} | | 1.9185 × 10^{-25} | |
Table 3. Error indicators associated with our obtained results for Problem 2.
| δ | N | M | R.E.(N, M) | M.R.E.(N, M) | Rel.E.(N, M) |
| 0.4 | 4 | 1 | 3.5059 × 10^{-01} | 5.3833 × 10^{+01} | 3.7283 × 10^{-02} |
| 0.4 | 4 | 2 | 3.0927 × 10^{-02} | 4.6816 × 10^{+00} | 5.6831 × 10^{-06} |
| 0.4 | 4 | 3 | 6.0000 × 10^{-30} | 5.8067 × 10^{-29} | 2.5572 × 10^{-59} |
| 0.4 | 4 | 4 | 5.1000 × 10^{-30} | 1.2173 × 10^{-28} | 1.1066 × 10^{-59} |
| 0.4 | 6 | 1 | 4.0426 × 10^{-01} | 2.9841 × 10^{+02} | 3.7557 × 10^{-02} |
| 0.4 | 6 | 2 | 3.7206 × 10^{-02} | 2.5806 × 10^{+01} | 1.9739 × 10^{-04} |
| 0.4 | 6 | 3 | 7.4300 × 10^{-29} | 9.5601 × 10^{-27} | 4.1602 × 10^{-58} |
| 0.4 | 6 | 4 | 6.4300 × 10^{-29} | 7.4819 × 10^{-27} | 7.1249 × 10^{-58} |
| 0.4 | 8 | 1 | 3.9392 × 10^{-01} | 9.7281 × 10^{+02} | 3.1186 × 10^{-02} |
| 0.4 | 8 | 2 | 3.5865 × 10^{-02} | 8.4053 × 10^{+01} | 1.6343 × 10^{-04} |
| 0.4 | 8 | 3 | 6.9800 × 10^{-27} | 1.9503 × 10^{-25} | 3.2995 × 10^{-54} |
| 0.4 | 8 | 4 | 5.0207 × 10^{-27} | 3.7009 × 10^{-25} | 2.6485 × 10^{-54} |
Table 4. Error indicators of our approximation and relative error in comparison with [28] for Problem 3. The first four columns refer to the presented method with δ = 0.25; the last two columns refer to the meshless collocation method [28] with δ = 0.25.
| (N, M) | R.E.(N, M) | M.R.E.(N, M) | Rel.E.(N, M) | (N, k) [28] | Rel.E.(N, k) [28] |
| (8, 8) | 3.1877 × 10^{-03} | 5.5490 × 10^{-02} | 4.6816 × 10^{-07} | (10, 4) | 3.96 × 10^{-05} |
| (12, 12) | 3.4328 × 10^{-05} | 5.9762 × 10^{-04} | 3.3557 × 10^{-11} | (20, 4) | 6.14 × 10^{-06} |
| (16, 16) | 3.9025 × 10^{-07} | 6.7939 × 10^{-06} | 3.2339 × 10^{-15} | (40, 4) | 1.12 × 10^{-06} |
| (20, 20) | 2.6614 × 10^{-08} | 4.6333 × 10^{-07} | 1.4053 × 10^{-17} | (80, 4) | 1.69 × 10^{-07} |
| (24, 24) | 5.5820 × 10^{-10} | 9.7178 × 10^{-09} | 5.6482 × 10^{-21} | (160, 4) | 2.32 × 10^{-08} |
| (28, 28) | 1.4360 × 10^{-11} | 1.8167 × 10^{-10} | 2.6389 × 10^{-24} | (320, 4) | 2.99 × 10^{-09} |
Table 5. Error indicators and their corresponding convergence orders associated with our obtained results for Problem 3.
| δ | (N, M) | R.E.(N, M) | r_{2,2}^{R.E.}(N, M) | M.R.E.(N, M) | r_{2,2}^{M.R.E.}(N, M) |
| 0.25 | (8, 8) | 3.1877 × 10^{-03} | 3.0451 | 5.5490 × 10^{-02} | 3.0449 |
| 0.25 | (10, 10) | 3.8621 × 10^{-04} | 3.4919 | 6.7236 × 10^{-03} | 3.4919 |
| 0.25 | (12, 12) | 3.4328 × 10^{-05} | 2.5247 | 5.9762 × 10^{-04} | 5.8467 |
| 0.25 | (14, 14) | 5.9654 × 10^{-07} | 3.9341 | 1.0385 × 10^{-05} | 0.6122 |
| 0.25 | (16, 16) | 3.9025 × 10^{-07} | 1.3480 | 6.7939 × 10^{-06} | 1.3480 |
| 0.25 | (18, 18) | 1.5330 × 10^{-07} | 2.5261 | 2.6689 × 10^{-06} | 2.5261 |
| 0.25 | (20, 20) | 2.6614 × 10^{-08} | 2.7561 | 4.6333 × 10^{-07} | 2.7561 |
| 0.25 | (22, 22) | 3.9396 × 10^{-09} | 2.8192 | 6.8584 × 10^{-08} | 2.8192 |
| 0.25 | (24, 24) | 5.5820 × 10^{-10} | | 9.7178 × 10^{-09} | |
Table 6. Error indicators of our approximation and relative error in comparison with [28] for Problem 4. The first four columns refer to the presented method with δ = 1; the last two columns refer to the meshless collocation method [28] with δ = 1.
| (N, M) | R.E.(N, M) | M.R.E.(N, M) | Rel.E.(N, M) | (N, k) [28] | Rel.E.(N, k) [28] |
| (8, 8) | 1.6177 × 10^{-03} | 5.4111 × 10^{-02} | 2.5990 × 10^{-06} | (16, 5) | 7.09 × 10^{-04} |
| (12, 12) | 1.4687 × 10^{-06} | 5.4567 × 10^{-05} | 3.2460 × 10^{-12} | (32, 5) | 1.19 × 10^{-04} |
| (16, 16) | 1.0544 × 10^{-09} | 8.8332 × 10^{-08} | 9.0815 × 10^{-19} | (64, 5) | 3.31 × 10^{-05} |
| (20, 20) | 9.3629 × 10^{-13} | 1.3710 × 10^{-10} | 1.7478 × 10^{-24} | (128, 5) | 2.11 × 10^{-06} |
| (24, 24) | 8.3417 × 10^{-16} | 4.1830 × 10^{-13} | 1.2302 × 10^{-30} | (256, 5) | 5.04 × 10^{-07} |
Table 7. Error indicators and their corresponding convergence orders associated with our obtained results for Problem 4.
| δ | (N, M) | R.E.(N, M) | r_{2,2}^{R.E.}(N, M) | M.R.E.(N, M) | r_{2,2}^{M.R.E.}(N, M) |
| 1 | (8, 8) | 1.6177 × 10^{-03} | 2.1598 | 5.4111 × 10^{-02} | 2.7559 |
| 1 | (10, 10) | 3.6203 × 10^{-04} | 7.9454 | 8.0108 × 10^{-03} | 7.1978 |
| 1 | (12, 12) | 1.4687 × 10^{-06} | 4.8439 | 5.4567 × 10^{-05} | 4.7305 |
| 1 | (14, 14) | 5.1142 × 10^{-08} | 5.6000 | 2.0554 × 10^{-06} | 4.5403 |
| 1 | (16, 16) | 1.0544 × 10^{-09} | 5.4143 | 8.8332 × 10^{-08} | 4.3594 |
| 1 | (18, 18) | 2.4725 × 10^{-11} | 4.7229 | 4.3035 × 10^{-09} | 4.9722 |
| 1 | (20, 20) | 9.3629 × 10^{-13} | 7.4682 | 1.3710 × 10^{-10} | 2.9650 |
| 1 | (22, 22) | 5.2877 × 10^{-15} | 2.6642 | 1.7558 × 10^{-11} | 5.3914 |
| 1 | (24, 24) | 8.3417 × 10^{-16} | | 4.1830 × 10^{-13} | |