
Numerical Methods for Caputo–Hadamard Fractional Differential Equations with Graded and Non-Uniform Meshes

1 Department of Mathematical and Physical Sciences, University of Chester, Chester CH1 4BJ, UK
2 Department of Mathematics, Lvliang University, Lvliang 033000, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(21), 2728; https://doi.org/10.3390/math9212728
Submission received: 9 September 2021 / Revised: 17 October 2021 / Accepted: 25 October 2021 / Published: 27 October 2021
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

We consider the predictor-corrector numerical methods for solving Caputo–Hadamard fractional differential equations with the graded meshes $\log t_j = \log a + \log\frac{t_N}{a}\left(\frac{j}{N}\right)^r$, $j = 0, 1, 2, \dots, N$, with $a \ge 1$ and $r \ge 1$, where $\log a = \log t_0 < \log t_1 < \dots < \log t_N = \log T$ is a partition of $[\log t_0, \log T]$. We also consider the rectangular and trapezoidal methods for solving Caputo–Hadamard fractional differential equations with the non-uniform meshes $\log t_j = \log a + \log\frac{t_N}{a}\frac{j(j+1)}{N(N+1)}$, $j = 0, 1, 2, \dots, N$. Under the weak smoothness assumptions on the Caputo–Hadamard fractional derivative, e.g., ${}^{CH}D^{\alpha}_{a,t} y(t) \notin C^1[a,T]$ with $\alpha \in (0,2)$, the optimal convergence orders of the proposed numerical methods are obtained by choosing a suitable graded mesh ratio $r \ge 1$. Numerical examples are given to show that the numerical results are consistent with the theoretical findings.

1. Introduction

Recently, fractional differential equations have become an active research area due to their applications in a wide range of fields including mechanics, computer science, and biology [1,2,3,4]. Different kinds of fractional derivatives, e.g., Caputo, Riemann–Liouville, and Riesz, have been studied extensively in the literature. The Hadamard fractional derivative is also very important and is used to model various physical problems [5,6,7,8,9,10,11].
The Hadamard fractional derivative was introduced as early as 1892 [12]. More recently, a Caputo-type modification of the Hadamard derivative, known as the Caputo–Hadamard derivative, was suggested [8]. The aim of this paper is to study and analyze some useful numerical methods for solving Caputo–Hadamard fractional differential equations with graded and non-uniform meshes under weak smoothness assumptions on the Caputo–Hadamard fractional derivative, e.g., ${}^{CH}D^{\alpha}_{a,t} y(t) \notin C^1[a,T]$ with $\alpha \in (0,2)$.
We thus consider the following Caputo–Hadamard fractional differential equation, with $\alpha > 0$, [8]
$${}^{CH}D^{\alpha}_{a,t} y(t) = f(t, y(t)), \quad 1 \le a \le t \le T, \qquad \delta^k y(a) = y_a^{(k)}, \quad k = 0, 1, \dots, \lceil\alpha\rceil - 1, \qquad (1)$$
where $f(t,y)$ is a nonlinear function with respect to $y \in \mathbb{R}$, the initial values $y_a^{(k)}$ are given, and $n - 1 < \alpha < n$ for $n = 1, 2, 3, \dots$. Here the fractional derivative ${}^{CH}D^{\alpha}_{a,t}$ denotes the Caputo–Hadamard derivative defined by
$${}^{CH}D^{\alpha}_{a,t} y(t) = \frac{1}{\Gamma(n - \alpha)} \int_a^t \left(\log\frac{t}{s}\right)^{n - \alpha - 1} \delta^n y(s)\,\frac{ds}{s}, \quad t \ge a \ge 1,$$
with $\delta^n y(s) = \big(s\frac{d}{ds}\big)^n y(s)$ and $n = \lceil\alpha\rceil$, where $\lceil\alpha\rceil$ denotes the smallest integer greater than or equal to $\alpha$ [8].
To make sure that (1) has a unique solution, we assume that the function $f$ is continuous and satisfies the following Lipschitz condition with respect to the second variable $y$ [7,13]:
$$|f(t, y_1) - f(t, y_2)| \le L\,|y_1 - y_2| \quad \text{for some } L > 0 \text{ and all } y_1, y_2 \in \mathbb{R}.$$
For some recent existence and uniqueness results for Caputo–Hadamard fractional differential equations, the readers can refer to [14,15,16] and the references therein.
It is well known that Equation (1) is equivalent to the following Volterra integral equation, with $\alpha > 0$ [5]:
$$y(t) = \sum_{\nu=0}^{\lceil\alpha\rceil - 1} \frac{y_a^{(\nu)}}{\nu!} \left(\log\frac{t}{a}\right)^{\nu} + \frac{1}{\Gamma(\alpha)} \int_a^t \left(\log\frac{t}{s}\right)^{\alpha-1} f(s, y(s))\,\frac{ds}{s}. \qquad (3)$$
Let us review some numerical methods for solving (1). Gohar et al. [7] studied the existence and uniqueness of the solution of (1), and Euler and predictor-corrector methods were considered. Gohar et al. [13] further considered the rectangular, trapezoidal, and predictor-corrector methods for solving (1) with uniform meshes under the smoothness assumption ${}^{CH}D^{\alpha}_{a,t} y(t) \in C^2[a,T]$ with $\alpha \in (0,1)$. There are also some numerical methods for solving Caputo–Hadamard time fractional partial differential equations [7,17]. In this paper, we shall assume that ${}^{CH}D^{\alpha}_{a,t} y(t) \notin C^2[a,T]$ with $\alpha \in (0,2)$, and assume that ${}^{CH}D^{\alpha}_{a,t} y(t)$ behaves as $\left(\log\frac{t}{a}\right)^{\sigma}$ with $\sigma \in (0,1)$, which implies that the derivatives of ${}^{CH}D^{\alpha}_{a,t} y(t)$ have singularities at $\log a$. In this case, we cannot expect the numerical methods with uniform meshes to have the optimal convergence orders. To obtain the optimal convergence orders, we shall use the graded and non-uniform meshes as in Liu et al. [18,19] for solving Caputo fractional differential equations. We shall show that the predictor-corrector method has the optimal convergence orders with the graded meshes $\log t_j = \log a + \log\frac{t_N}{a}\left(\frac{j}{N}\right)^r$, $j = 0, 1, 2, \dots, N$, for some suitable $r \ge 1$. We also show that the rectangular and trapezoidal methods have the optimal convergence orders with the non-uniform meshes $\log t_j = \log a + \log\frac{t_N}{a}\frac{j(j+1)}{N(N+1)}$, $j = 0, 1, 2, \dots, N$.
For some recent works on numerical methods for solving fractional differential equations with graded and non-uniform meshes, we refer to [17,20,21,22]. In particular, Stynes et al. [23,24] applied a graded mesh in a finite difference method for solving subdiffusion equations whose solutions are not sufficiently smooth. Liu et al. [18,19] applied a graded mesh for solving Caputo fractional differential equations by using a fractional Adams method under the assumption that the solution is not sufficiently smooth. The aim of this work is to extend the ideas in Liu et al. [18,19] for solving Caputo fractional differential equations to the Caputo–Hadamard fractional differential equations.
The paper is organized as follows. In Section 2 we consider the error estimates of the predictor-corrector method for solving (1) with graded meshes. In Section 3 we consider the error estimates of the rectangular and trapezoidal methods for solving (1) with non-uniform meshes. In Section 4 we provide several numerical examples which support the theoretical conclusions made in Section 2 and Section 3.
Throughout this paper, we denote by $C$ a generic constant depending on $y$, $T$, and $\alpha$, but independent of $t > 0$ and $N$, which may be different at different occurrences.

2. Predictor-Corrector Method with Graded Meshes

In this section, we shall consider the error estimates of the predictor-corrector method for solving (1) with graded meshes. We first recall the following smoothness properties of the solutions to (1).
Theorem 1
([25]). Let $\alpha > 0$. Assume that $f \in C^2(G)$, where $G$ is a suitable set. Define $\hat{v} = \lceil 1/\alpha \rceil - 1$. Then there exist a function $\phi \in C^1[a,T]$ and some constants $c_1, c_2, \dots, c_{\hat{v}} \in \mathbb{R}$ such that the solution $y$ of (1) can be expressed in the form
$$y(t) = \phi(t) + c_1 \left(\log\frac{t}{a}\right)^{\alpha} + c_2 \left(\log\frac{t}{a}\right)^{2\alpha} + \dots + c_{\hat{v}} \left(\log\frac{t}{a}\right)^{\hat{v}\alpha}.$$
For example, when $0 < \alpha < 1$ and $f \in C^2(G)$, we have $\hat{v} = \lceil 1/\alpha \rceil - 1 \ge 1$ and
$$y = c \left(\log\frac{t}{a}\right)^{\alpha} + \text{smoother terms}.$$
This implies that the solution $y$ of (1) behaves as $(\log\frac{t}{a})^{\alpha}$, $0 < \alpha < 1$; as such, $y \notin C^2[a,T]$.
Theorem 2
([25]). If $y \in C^m[a,T]$ for some $m \in \mathbb{N}$ and $0 < \alpha < m$, then
$${}^{CH}D^{\alpha}_{a,t} y(t) = \Phi(t) + \sum_{l=0}^{m - \lceil\alpha\rceil - 1} \frac{\delta^{l + \lceil\alpha\rceil} y(a)}{\Gamma(\lceil\alpha\rceil - \alpha + l + 1)} \left(\log\frac{t}{a}\right)^{\lceil\alpha\rceil - \alpha + l},$$
where $\Phi \in C^{m - \lceil\alpha\rceil}[a,T]$ and $\delta^n y(s) = \big(s\frac{d}{ds}\big)^n y(s)$ with $n \in \mathbb{N}$.
By the above two theorems, if one of $y$ and ${}^{CH}D^{\alpha}_{a,t} y(t)$ is sufficiently smooth, then the other is not sufficiently smooth unless some special conditions are met.
Recall that, by (3), the solution of (1) can be written in the following form, with $\alpha \in (0,1)$ and $y_a = y_a^{(0)}$:
$$y(t) = y_a + \frac{1}{\Gamma(\alpha)} \int_a^t \left(\log\frac{t}{s}\right)^{\alpha-1}\, {}^{CH}D^{\alpha}_{a,s} y(s)\,\frac{ds}{s}.$$
Therefore, it is natural to introduce the following smoothness assumptions for the fractional derivative ${}^{CH}D^{\alpha}_{a,t} y(t)$ in (1).
Assumption 1.
Let $0 < \sigma < 1$ and $\alpha > 0$. Let $y$ be the solution of (1). Assume that ${}^{CH}D^{\alpha}_{a,t} y(t)$ can be expressed as a function of $\log t$; that is, there exists a smooth function $G_a : [0, \infty) \to \mathbb{R}$ such that
$$G_a(\log t) := {}^{CH}D^{\alpha}_{a,t} y(t) \in C^2(a, T].$$
Further, we assume that $G_a(\cdot)$ satisfies the following smoothness assumptions, with $1 \le a \le t \le T$:
$$|G_a'(\log t)| \le C (\log t - \log a)^{\sigma - 1}, \quad |G_a''(\log t)| \le C (\log t - \log a)^{\sigma - 2}, \qquad (6)$$
where $G_a'(\cdot)$ and $G_a''(\cdot)$ denote the first and second order derivatives of $G_a$, respectively.
Denote
$$g_a(t) := G_a(\log t), \quad 1 \le a \le t \le T.$$
We then have
$$\delta g_a(t) := \Big(t\frac{d}{dt}\Big) g_a(t) = G_a'(\log t), \qquad \delta^2 g_a(t) := \Big(t\frac{d}{dt}\Big)^2 g_a(t) = \Big(t\frac{d}{dt}\Big)\Big(t\frac{d g_a}{dt}\Big) = G_a''(\log t). \qquad (7)$$
Hence the assumptions (6) are equivalent to, with $1 \le a \le t \le T$,
$$|\delta g_a(t)| \le C \left(\log\frac{t}{a}\right)^{\sigma-1}, \quad |\delta^2 g_a(t)| \le C \left(\log\frac{t}{a}\right)^{\sigma-2},$$
which are similar to the smoothness assumptions given in Liu et al. [18] for the Caputo fractional derivative ${}^{C}D^{\alpha}_{0,t} y(t)$.
Remark 1.
Assumption 1 describes the behavior of $g_a(t)$ near $t = a$ and implies that $g_a(t)$ has a singularity near $t = a$. It is obvious that $g_a \notin C^2[a,T]$. For example, we may choose $g_a(t) = \left(\log\frac{t}{a}\right)^{\sigma}$ with $0 < \sigma < 1$.
Let $N$ be a positive integer and let $a = t_0 < t_1 < \dots < t_N = T$ be a partition of $[a, T]$. We define the following graded mesh on $[\log a, \log T]$ with
$$\log a = \log t_0 < \log t_1 < \dots < \log t_N = \log T,$$
such that, with $r \ge 1$,
$$\frac{\log t_j - \log a}{\log t_N - \log a} = \left(\frac{j}{N}\right)^r,$$
which implies that
$$\log t_j = \log a + \log\frac{t_N}{a} \left(\frac{j}{N}\right)^r.$$
When $j = N$ we have $\log t_N = \log T$. Further, we have
$$\log t_{j+1} - \log t_j = \log\frac{t_{j+1}}{t_j} = \log\frac{T}{a}\left[\left(\frac{j+1}{N}\right)^r - \left(\frac{j}{N}\right)^r\right].$$
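As an illustration, here is a minimal sketch of this graded mesh construction (Python with numpy; the function name graded_log_mesh is ours, not from the paper):

```python
import numpy as np

def graded_log_mesh(a, T, N, r):
    """Nodes t_j with log t_j = log a + log(T/a) * (j/N)^r, j = 0..N."""
    j = np.arange(N + 1)
    return np.exp(np.log(a) + np.log(T / a) * (j / N) ** r)

# r = 1 gives a uniform mesh in log t; r > 1 clusters nodes near t = a,
# where delta g_a(t) may blow up under Assumption 1.
print(graded_log_mesh(1.0, 2.0, 8, 2.0))
```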
Denote by $y_k \approx y(t_k)$, $k = 0, 1, 2, \dots, N$, the approximation of $y(t_k)$. Let us introduce the different numerical methods for solving (3) with $\alpha \in (0,1)$ below. Similarly, we may define the numerical methods for solving (3) with $\alpha \ge 1$. The fractional rectangular method for solving (3) is defined as
$$y_{k+1} = y_0 + \sum_{j=0}^{k} b_{j,k+1}\, f(t_j, y_j), \qquad (9)$$
where the weights $b_{j,k+1}$ are defined as
$$b_{j,k+1} = \frac{1}{\Gamma(\alpha+1)}\left[\left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha} - \left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha}\right], \quad j = 0, 1, 2, \dots, k. \qquad (10)$$
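A short sketch of these weights as we read (10) (Python; the closed form is the exact integral of $(\log(t_{k+1}/s))^{\alpha-1}/\Gamma(\alpha)$ over each subinterval $[t_j, t_{j+1}]$ after substituting $u = \log s$):

```python
import numpy as np
from math import gamma

def rect_weights(log_t, k, alpha):
    """b_{j,k+1}, j = 0..k, computed from the log-mesh points log t_j."""
    d = log_t[k + 1] - log_t[: k + 2]          # log(t_{k+1}/t_j), j = 0..k+1
    return (d[:-1] ** alpha - d[1:] ** alpha) / gamma(alpha + 1)

# Sanity check: the weights telescope to (log(t_{k+1}/a))^alpha / Gamma(alpha+1).
log_t = np.log(1.0) + np.log(2.0) * (np.arange(9) / 8) ** 2
b = rect_weights(log_t, 7, 0.6)
print(b.sum(), (log_t[8] - log_t[0]) ** 0.6 / gamma(1.6))
```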
The fractional trapezoidal method for solving (3) is defined as
$$y_{k+1} = y_0 + \sum_{j=0}^{k+1} a_{j,k+1}\, f(t_j, y_j), \qquad (11)$$
where the weights $a_{j,k+1}$, $j = 0, 1, 2, \dots, k+1$, are defined as
$$a_{j,k+1} = \frac{1}{\Gamma(\alpha+2)} \begin{cases} \dfrac{1}{\log\frac{t_1}{a}}\, A_0, & j = 0, \\[6pt] \dfrac{1}{\log\frac{t_{j+1}}{t_j}}\, A_j + \dfrac{1}{\log\frac{t_{j-1}}{t_j}}\, B_j, & j = 1, 2, \dots, k, \\[6pt] \left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha}, & j = k+1, \end{cases} \qquad (12)$$
with
$$A_j = \left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha+1} - \left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha+1} + (\alpha+1)\log\frac{t_{j+1}}{t_j}\left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha}, \quad j = 0, 1, \dots, k,$$
$$B_j = \left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha+1} - \left(\log\frac{t_{k+1}}{t_{j-1}}\right)^{\alpha+1} + (\alpha+1)\log\frac{t_j}{t_{j-1}}\left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha}, \quad j = 1, 2, \dots, k.$$
The predictor-corrector Adams method for solving (3) is defined as, with $\alpha \in (0,1)$ and $k = 0, 1, \dots, N-1$,
$$y^P_{k+1} = y_0 + \sum_{j=0}^{k} b_{j,k+1}\, f(t_j, y_j), \qquad y_{k+1} = y_0 + \sum_{j=0}^{k} a_{j,k+1}\, f(t_j, y_j) + a_{k+1,k+1}\, f(t_{k+1}, y^P_{k+1}), \qquad (13)$$
where the weights $b_{j,k+1}$ and $a_{j,k+1}$ are defined as above.
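Putting the pieces together, the following self-contained sketch implements one reading of the predictor-corrector scheme (13) (Python). Rather than transcribing $A_j$ and $B_j$, the corrector weights are computed by exact product integration of the piecewise-linear interpolant on the log grid, which is the construction that (12) encodes; the function names are ours.

```python
import numpy as np
from math import gamma

def rect_weights(x, k, alpha):
    # b_{j,k+1}: exact integrals of (x_{k+1}-u)^(alpha-1)/Gamma(alpha), u = log s
    d = x[k + 1] - x[: k + 2]
    return (d[:-1] ** alpha - d[1:] ** alpha) / gamma(alpha + 1)

def trap_weights(x, k, alpha):
    # a_{j,k+1}, j = 0..k+1: product integration of the piecewise-linear
    # interpolant against (x_{k+1}-u)^(alpha-1)/Gamma(alpha) on [x_0, x_{k+1}]
    w = np.zeros(k + 2)
    d = x[k + 1] - x[: k + 2]                  # d_j = x_{k+1} - x_j >= 0
    for j in range(k + 1):
        h = x[j + 1] - x[j]
        I0 = (d[j] ** alpha - d[j + 1] ** alpha) / alpha
        I1 = (d[j] ** (alpha + 1) - d[j + 1] ** (alpha + 1)) / (alpha + 1)
        w[j] += (I1 - d[j + 1] * I0) / h       # weight of the hat function at x_j
        w[j + 1] += (d[j] * I0 - I1) / h       # weight of the hat function at x_{j+1}
    return w / gamma(alpha)

def predictor_corrector(f, a, T, N, r, alpha, y0):
    """Scheme (13) on the graded mesh log t_j = log a + log(T/a)(j/N)^r, 0 < alpha < 1."""
    x = np.log(a) + np.log(T / a) * (np.arange(N + 1) / N) ** r
    t = np.exp(x)
    y = np.zeros(N + 1); y[0] = y0
    fv = np.zeros(N + 1); fv[0] = f(t[0], y0)
    for k in range(N):
        b = rect_weights(x, k, alpha)
        aw = trap_weights(x, k, alpha)
        y_p = y0 + b @ fv[: k + 1]                                   # predictor
        y[k + 1] = y0 + aw[: k + 1] @ fv[: k + 1] + aw[k + 1] * f(t[k + 1], y_p)
        fv[k + 1] = f(t[k + 1], y[k + 1])
    return t, y
```

As a check on the construction, the last corrector weight produced by trap_weights equals $(\log\frac{t_{k+1}}{t_k})^{\alpha}/\Gamma(\alpha+2)$, matching the $j = k+1$ case of (12). For instance, `predictor_corrector(lambda t, y: -y, 1.0, 2.0, 64, 2.0, 0.8, 1.0)` approximates Example 3 below.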
Assuming that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1, we shall prove the following error estimate.
Theorem 3.
Assume that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1. Further assume that $y(t_j)$ and $y_j$ are the solutions of (3) and (13), respectively.
  • If $0 < \alpha \le 1$, then we have
$$\max_{0 \le j \le N} |y(t_j) - y_j| \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-r(\sigma+\alpha)} \log N, & \text{if } r(\sigma+\alpha) = 1+\alpha, \\ C N^{-(1+\alpha)}, & \text{if } r(\sigma+\alpha) > 1+\alpha. \end{cases}$$
  • If $\alpha > 1$, then we have
$$\max_{0 \le j \le N} |y(t_j) - y_j| \le \begin{cases} C N^{-r(1+\sigma)}, & \text{if } r(1+\sigma) < 2, \\ C N^{-2} \log N, & \text{if } r(1+\sigma) = 2, \\ C N^{-2}, & \text{if } r(1+\sigma) > 2. \end{cases}$$

Proof of Theorem 3

In this subsection, we shall prove Theorem 3. To this end, we start by proving some preliminary lemmas. In Lemma 1 we estimate the error between $g_a(s)$ and the piecewise linear interpolant $P_1(s)$ for both $0 < \alpha \le 1$ and $\alpha > 1$; this will be used to estimate one of the terms in the main proof.
Lemma 1.
Assume that $g_a(t)$ satisfies Assumption 1.
  • If $0 < \alpha \le 1$, then we have
$$\left|\int_a^{t_{k+1}} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \big(g_a(s) - P_1(s)\big)\,\frac{ds}{s}\right| \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 2, \\ C N^{-2} \log N, & \text{if } r(\sigma+\alpha) = 2, \\ C N^{-2}, & \text{if } r(\sigma+\alpha) > 2. \end{cases}$$
  • If $\alpha > 1$, then we have
$$\left|\int_a^{t_{k+1}} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \big(g_a(s) - P_1(s)\big)\,\frac{ds}{s}\right| \le \begin{cases} C N^{-r(1+\sigma)}, & \text{if } r(1+\sigma) < 2, \\ C N^{-2} \log N, & \text{if } r(1+\sigma) = 2, \\ C N^{-2}, & \text{if } r(1+\sigma) > 2, \end{cases}$$
where $P_1(s)$ is the piecewise linear interpolant defined by, for $s \in [t_j, t_{j+1}]$,
$$P_1(s) = \frac{\log\frac{s}{t_{j+1}}}{\log\frac{t_j}{t_{j+1}}}\, g_a(t_j) + \frac{\log\frac{s}{t_j}}{\log\frac{t_{j+1}}{t_j}}\, g_a(t_{j+1}).$$
Proof. 
Note that, with $k = 0, 1, 2, \dots, N-1$,
$$\int_a^{t_{k+1}} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \big(g_a(s) - P_1(s)\big)\,\frac{ds}{s} = \left(\int_a^{t_1} + \sum_{j=1}^{k-1}\int_{t_j}^{t_{j+1}} + \int_{t_k}^{t_{k+1}}\right) \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \big(g_a(s) - P_1(s)\big)\,\frac{ds}{s} = I_1 + I_2 + I_3.$$
For $I_1$, we have
$$I_1 = \int_a^{t_1} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \big(g_a(s) - P_1(s)\big)\,\frac{ds}{s}.$$
Note that, with $s \in [a, t_1]$,
$$g_a(s) - P_1(s) = g_a(s) - \frac{\log s - \log t_1}{\log a - \log t_1}\, g_a(a) - \frac{\log s - \log a}{\log t_1 - \log a}\, g_a(t_1) = \frac{\log s - \log t_1}{\log a - \log t_1}\big(g_a(s) - g_a(a)\big) + \frac{\log s - \log a}{\log t_1 - \log a}\big(g_a(s) - g_a(t_1)\big) = \frac{\log s - \log t_1}{\log a - \log t_1}\int_a^s G_a'(\log\tau)\,d\log\tau + \frac{\log s - \log a}{\log t_1 - \log a}\int_{t_1}^s G_a'(\log\tau)\,d\log\tau, \qquad (14)$$
which implies that, by Assumption 1,
$$|g_a(s) - P_1(s)| \le \int_a^s |G_a'(\log\tau)|\,d\log\tau + \int_s^{t_1} |G_a'(\log\tau)|\,d\log\tau \le C\int_a^s \left(\log\frac{\tau}{a}\right)^{\sigma-1} d\log\frac{\tau}{a} + C\int_s^{t_1} \left(\log\frac{\tau}{a}\right)^{\sigma-1} d\log\frac{\tau}{a} \le C\left(\log\frac{s}{a}\right)^{\sigma} + C\left(\log\frac{t_1}{a}\right)^{\sigma}.$$
Thus we have, by (14),
$$|I_1| \le C\int_a^{t_1} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\left(\log\frac{s}{a}\right)^{\sigma}\frac{ds}{s} + C\int_a^{t_1} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma}\frac{ds}{s}.$$
Note that there exists a constant $C > 0$ such that
$$\log\frac{t_{k+1}}{t_1} \le \log\frac{t_{k+1}}{a} \le C \log\frac{t_{k+1}}{t_1}, \quad k = 1, 2, \dots, N-1,$$
which follows from
$$1 \le \frac{\log\frac{t_{k+1}}{a}}{\log\frac{t_{k+1}}{t_1}} = \frac{\left(\frac{k+1}{N}\right)^r}{\left(\frac{k+1}{N}\right)^r - \left(\frac{1}{N}\right)^r} = 1 + \frac{1}{(k+1)^r - 1} \le 1 + \frac{1}{2^r - 1} \le C.$$
Thus we have, for $0 < \alpha \le 1$,
$$|I_1| \le C\left(\log\frac{t_{k+1}}{t_1}\right)^{\alpha-1}\int_a^{t_1}\left(\log\frac{s}{a}\right)^{\sigma}\frac{ds}{s} + C\left(\log\frac{t_{k+1}}{t_1}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} \le C\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} \le C\left(\log\frac{t_k}{a}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} = C\left(\log\frac{T}{a}\right)^{\alpha-1}\left(\frac{k}{N}\right)^{r(\alpha-1)}\left(\log\frac{T}{a}\right)^{\sigma+1}\left(\frac{1}{N}\right)^{r(\sigma+1)} = C\, k^{r(\alpha-1)} N^{-r(\alpha+\sigma)} \le C N^{-r(\alpha+\sigma)}.$$
For $\alpha > 1$, we have
$$|I_1| \le C\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\int_a^{t_1}\left(\log\frac{s}{a}\right)^{\sigma}\frac{ds}{s} + C\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} \le C\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} \le C\left(\log\frac{T}{a}\right)^{\alpha-1}\left(\log\frac{T}{a}\right)^{\sigma+1}\left(\frac{1}{N}\right)^{r(\sigma+1)} \le C N^{-r(1+\sigma)}.$$
For $I_2$ we have, with $\xi_j \in (t_j, t_{j+1})$, $j = 1, 2, \dots, k-1$ and $k = 2, 3, \dots, N-1$,
$$|I_2| = \frac{1}{2}\left|\sum_{j=1}^{k-1}\int_{t_j}^{t_{j+1}} \left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} \delta^2 g_a(\xi_j)\,\log\frac{s}{t_j}\,\log\frac{s}{t_{j+1}}\,\frac{ds}{s}\right|,$$
where we have used the following fact, with $s \in (t_j, t_{j+1})$:
$$g_a(s) - \left[\frac{\log s - \log t_{j+1}}{\log t_j - \log t_{j+1}}\, g_a(t_j) + \frac{\log s - \log t_j}{\log t_{j+1} - \log t_j}\, g_a(t_{j+1})\right] = \frac{1}{2!}\,\delta^2 g_a(\xi_j)\,(\log s - \log t_j)(\log s - \log t_{j+1}),$$
which can be seen easily by noting $g_a(s) = G_a(\log s)$ and (7).
By Assumption 1 and by using [24] (Section 5.2), we have, with $k \ge 4$,
$$|I_2| \le C\sum_{j=1}^{k-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-2}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-2}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} + C\sum_{j=\lceil\frac{k-1}{2}\rceil}^{k-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-2}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} = I_{21} + I_{22},$$
where $\lceil\cdot\rceil$ denotes the ceiling function as before. For each of these sums we shall consider the cases $0 < \alpha \le 1$ and $\alpha > 1$.
For $I_{21}$, when $0 < \alpha \le 1$, we have, with $k \ge 4$,
$$I_{21} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-2}\left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha-1}\log\frac{t_{j+1}}{t_j} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^3\left(\log\frac{t_j}{a}\right)^{\sigma-2}\left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha-1}.$$
Note that, with $\xi_j \in [j, j+1]$, $j = 1, 2, \dots, \lceil\frac{k-1}{2}\rceil - 1$,
$$\log\frac{t_{j+1}}{t_j} = \log\frac{t_N}{a}\cdot\frac{(j+1)^r - j^r}{N^r} = \log\frac{t_N}{a}\cdot\frac{r\,\xi_j^{r-1}}{N^r} \le C\,\frac{r(j+1)^{r-1}}{N^r} \le C\,\frac{j^{r-1}}{N^r},$$
and
$$\left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha-1} = \left(\log\frac{t_N}{a}\right)^{\alpha-1}\left(\frac{N^r}{(k+1)^r - (j+1)^r}\right)^{1-\alpha} \le \left(\log\frac{t_N}{a}\right)^{\alpha-1}\left(\frac{N^r}{(k+1)^r - \left(\frac{k+1}{2}\right)^r}\right)^{1-\alpha} \le C\left(\frac{N^r}{(k+1)^r}\right)^{1-\alpha} \le C\,(N/k)^{r(1-\alpha)}.$$
Thus, with $k \ge 4$,
$$I_{21} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\frac{j^{r-1}}{N^r}\right)^3 (j/N)^{r(\sigma-2)} (N/k)^{r(1-\alpha)} = C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\alpha+\sigma)-3}\, N^{-r(\sigma+\alpha)}\, (j/k)^{r(1-\alpha)} \le C N^{-r(\sigma+\alpha)}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\alpha+\sigma)-3}.$$
Case 1: if $r(\sigma+\alpha) < 2$, we have
$$I_{21} \le C N^{-r(\sigma+\alpha)}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\sigma+\alpha)-3} \le C N^{-r(\sigma+\alpha)}.$$
Case 2: if $r(\sigma+\alpha) = 2$, we have
$$I_{21} \le C N^{-2}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{-1} \le C N^{-2}\left(1 + \frac{1}{2} + \dots + \frac{1}{N}\right) \le C N^{-2}\log N.$$
Case 3: if $r(\sigma+\alpha) > 2$, we have
$$I_{21} \le C N^{-r(\sigma+\alpha)}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\sigma+\alpha)-3} \le C N^{-r(\sigma+\alpha)}\, k^{r(\sigma+\alpha)-2} = C\,(k/N)^{r(\sigma+\alpha)-2}\, N^{-2} \le C N^{-2}.$$
Thus, for $0 < \alpha \le 1$,
$$I_{21} \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 2, \\ C N^{-2}\log N, & \text{if } r(\sigma+\alpha) = 2, \\ C N^{-2}, & \text{if } r(\sigma+\alpha) > 2. \end{cases}$$
Next we take the case $\alpha > 1$. We have, with $k \ge 4$,
$$I_{21} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-2}\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\log\frac{t_{j+1}}{t_j} \le C\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\frac{j^{r-1}}{N^r}\right)^3 (j/N)^{r(\sigma-2)} \le C N^{-r(1+\sigma)}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(1+\sigma)-3}.$$
Thus, for $\alpha > 1$,
$$I_{21} \le \begin{cases} C N^{-r(1+\sigma)}, & \text{if } r(1+\sigma) < 2, \\ C N^{-2}\log N, & \text{if } r(1+\sigma) = 2, \\ C N^{-2}, & \text{if } r(1+\sigma) > 2. \end{cases}$$
For $I_{22}$, noting that with $\lceil\frac{k-1}{2}\rceil \le j \le k-1$ and $k \ge 2$,
$$\left(\log\frac{t_j}{a}\right)^{\sigma-2} = \left(\log\frac{t_N}{a}\right)^{\sigma-2}(j/N)^{r(\sigma-2)} = \left(\log\frac{t_N}{a}\right)^{\sigma-2}(N/j)^{r(2-\sigma)} \le C\,(N/k)^{r(2-\sigma)},$$
we obtain
$$I_{22} \le C\sum_{j=\lceil\frac{k-1}{2}\rceil}^{k-1}\left(\frac{k^{r-1}}{N^r}\right)^2 (N/k)^{r(2-\sigma)}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} \le C\, k^{r\sigma-2}\, N^{-r\sigma}\int_{t_{\lceil\frac{k-1}{2}\rceil}}^{t_k}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s}.$$
Note that
$$\int_{t_{\lceil\frac{k-1}{2}\rceil}}^{t_k}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} = \frac{1}{\alpha}\left[\left(\log\frac{t_{k+1}}{t_{\lceil\frac{k-1}{2}\rceil}}\right)^{\alpha} - \left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha}\right] \le \frac{1}{\alpha}\left(\log\frac{t_{k+1}}{a}\right)^{\alpha} = \frac{1}{\alpha}\left(\log\frac{t_N}{a}\right)^{\alpha}\left(\frac{k+1}{N}\right)^{r\alpha} \le C\,(k/N)^{r\alpha},$$
so we get, with $k \ge 2$ and $\alpha > 0$,
$$I_{22} \le C\, k^{r\sigma-2}\, N^{-r\sigma}\,(k/N)^{r\alpha} = C N^{-r(\sigma+\alpha)}\, k^{r(\sigma+\alpha)-2} \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 2, \\ C N^{-2}, & \text{if } r(\sigma+\alpha) \ge 2. \end{cases}$$
For $I_3$, we have, with $\xi_k \in (t_k, t_{k+1})$, $k = 1, 2, \dots, N-1$,
$$|I_3| = \left|\int_{t_k}^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s) - P_1(s)\big)\frac{ds}{s}\right| = \frac{1}{2}\left|\int_{t_k}^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\delta^2 g_a(\xi_k)\,\log\frac{s}{t_k}\,\log\frac{s}{t_{k+1}}\,\frac{ds}{s}\right|.$$
By Assumption 1, we then have, with $\alpha > 0$,
$$|I_3| \le C\left(\log\frac{t_{k+1}}{t_k}\right)^2\left(\log\frac{t_k}{a}\right)^{\sigma-2}\int_{t_k}^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} = C\left(\log\frac{t_{k+1}}{t_k}\right)^2\left(\log\frac{t_k}{a}\right)^{\sigma-2}\frac{1}{\alpha}\left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha} = C\left(\log\frac{t_{k+1}}{t_k}\right)^{2+\alpha}\left(\log\frac{t_k}{a}\right)^{\sigma-2} \le C\left(\log\frac{t_N}{a}\right)^{2+\alpha}\left(\frac{k^{r-1}}{N^r}\right)^{2+\alpha}\left(\log\frac{t_N}{a}\right)^{\sigma-2}(k/N)^{r(\sigma-2)} = C\, k^{r(\alpha+\sigma)-2-\alpha}\, N^{-r(\alpha+\sigma)} \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 2+\alpha, \\ C N^{-(2+\alpha)}, & \text{if } r(\sigma+\alpha) \ge 2+\alpha. \end{cases}$$
Clearly the bound for $I_3$ is sharper than the bound for $I_{21}$. Together these estimates complete the proof of this lemma. □
In Lemma 2 below, we state that the weights $a_{j,k+1}$ and $b_{j,k+1}$ are positive for all values of $j$.
Lemma 2.
Let $\alpha > 0$. We have
  • $a_{j,k+1} > 0$, $j = 0, 1, 2, \dots, k+1$, where $a_{j,k+1}$ are the weights defined in (12);
  • $b_{j,k+1} > 0$, $j = 0, 1, 2, \dots, k$, where $b_{j,k+1}$ are the weights defined in (10).
Proof. 
The proof is straightforward, so we omit it here. □
For Lemma 3, we find an upper bound for $a_{k+1,k+1}$. This will be used in the main proof when addressing the $a_{k+1,k+1}$ term.
Lemma 3.
Let $\alpha > 0$. We have, with $k = 0, 1, 2, \dots, N-1$,
$$a_{k+1,k+1} \le C N^{-r\alpha}\, k^{(r-1)\alpha},$$
where a k + 1 , k + 1 is defined in (12).
Proof. 
We have, by (12), with $\xi_k \in (k, k+1)$,
$$a_{k+1,k+1} = \frac{1}{\Gamma(\alpha+2)}\left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha} \le C\left(\log\frac{t_N}{a}\right)^{\alpha} N^{-r\alpha}\big((k+1)^r - k^r\big)^{\alpha} = C N^{-r\alpha}\big(r\,\xi_k^{r-1}\big)^{\alpha} \le C N^{-r\alpha}\big(r(k+1)^{r-1}\big)^{\alpha} \le C N^{-r\alpha}\, k^{(r-1)\alpha}.$$
 □
In Lemma 4 we estimate the error between $g_a(s)$ and the piecewise constant function $P_0(s)$, weighted by $a_{k+1,k+1}$, for both $0 < \alpha \le 1$ and $\alpha > 1$. This will be used to estimate one of the terms in the main proof.
Lemma 4.
Assume that $g_a(t)$ satisfies Assumption 1.
  • If $0 < \alpha \le 1$, then we have
$$a_{k+1,k+1}\left|\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s) - P_0(s)\big)\frac{ds}{s}\right| \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-r(\sigma+\alpha)}\log N, & \text{if } r(\sigma+\alpha) = 1+\alpha, \\ C N^{-1-\alpha}, & \text{if } r(\sigma+\alpha) > 1+\alpha. \end{cases}$$
  • If $\alpha > 1$, then we have
$$a_{k+1,k+1}\left|\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s) - P_0(s)\big)\frac{ds}{s}\right| \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-1-\alpha}, & \text{if } r(\sigma+\alpha) \ge 1+\alpha, \end{cases}$$
where $P_0(s)$ is the piecewise constant function defined by, with $j = 0, 1, 2, \dots, k$,
$$P_0(s) = g_a(t_j), \quad s \in [t_j, t_{j+1}].$$
Proof. 
The proof is similar to the proof of Lemma 1. Note that
$$a_{k+1,k+1}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s)-P_0(s)\big)\frac{ds}{s} = a_{k+1,k+1}\left(\int_a^{t_1} + \sum_{j=1}^{k-1}\int_{t_j}^{t_{j+1}} + \int_{t_k}^{t_{k+1}}\right)\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s)-P_0(s)\big)\frac{ds}{s} = I_1 + I_2 + I_3.$$
For $I_1$, by Assumption 1, we have
$$|g_a(s)| = |G_a(\log s)| \le C\left(\log\frac{s}{a}\right)^{\sigma}, \qquad |P_0(s)| = |g_a(a)| = 0.$$
Hence we get
$$|I_1| \le a_{k+1,k+1}\left[\int_a^{t_1}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}|g_a(s)|\frac{ds}{s} + \int_a^{t_1}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}|P_0(s)|\frac{ds}{s}\right] \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\int_a^{t_1}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\left(\log\frac{s}{a}\right)^{\sigma}\frac{ds}{s}.$$
If $0 < \alpha \le 1$, we have
$$|I_1| \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\log\frac{t_{k+1}}{t_1}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\log\frac{T}{a}\right)^{\alpha-1}\left(\frac{k+1}{N}\right)^{r(\alpha-1)}\left(\log\frac{T}{a}\right)^{\sigma+1}\left(\frac{1}{N}\right)^{r(\sigma+1)} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\big(C N^{-r(\alpha+\sigma)}\big) = C\,(k/N)^{r\alpha}\, k^{-\alpha}\,\big(C N^{-r(\alpha+\sigma)}\big) \le C N^{-r(\alpha+\sigma)}.$$
If $\alpha > 1$, we have
$$|I_1| \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}\left(\log\frac{t_1}{a}\right)^{\sigma+1} = \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\log\frac{T}{a}\right)^{\alpha-1}\left(\frac{k+1}{N}\right)^{r(\alpha-1)}\left(\log\frac{T}{a}\right)^{\sigma+1}\left(\frac{1}{N}\right)^{r(\sigma+1)} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\big(C N^{-r(1+\sigma)}\big) \le C\,(k/N)^{(r-1)\alpha}\, N^{-\alpha}\, N^{-r(1+\sigma)} \le C N^{-r(1+\sigma)-\alpha} \le C N^{-1-\alpha}.$$
For $I_2$, we have, with $\xi_j \in (t_j, t_{j+1})$, $j = 1, 2, \dots, k-1$,
$$|I_2| \le a_{k+1,k+1}\sum_{j=1}^{k-1}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big|\delta g_a(\xi_j)\big|\,\log\frac{s}{t_j}\,\frac{ds}{s}.$$
Hence, by Assumption 1,
$$|I_2| \le C\, a_{k+1,k+1}\left(\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} + \sum_{j=\lceil\frac{k-1}{2}\rceil}^{k-1}\right)\log\frac{t_{j+1}}{t_j}\left(\log\frac{t_j}{a}\right)^{\sigma-1}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} = I_{21} + I_{22}.$$
For $I_{21}$, if $0 < \alpha \le 1$, then we have, with $k \ge 4$,
$$I_{21} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-1}\left(\log\frac{t_{k+1}}{t_{j+1}}\right)^{\alpha-1} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\frac{j^{r-1}}{N^r}\right)^2 (j/N)^{r(\sigma-1)} (N/k)^{r(1-\alpha)} \le C\,(k/N)^{r\alpha}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\alpha+\sigma)-2-\alpha}\,(j/k)^{\alpha}\,(j/k)^{r(1-\alpha)}\, N^{-r(\alpha+\sigma)} \le C N^{-r(\alpha+\sigma)}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r(\alpha+\sigma)-2-\alpha} \le \begin{cases} C N^{-r(\alpha+\sigma)}, & \text{if } r(\alpha+\sigma) < 1+\alpha, \\ C N^{-r(\alpha+\sigma)}\log N, & \text{if } r(\alpha+\sigma) = 1+\alpha, \\ C N^{-1-\alpha}, & \text{if } r(\alpha+\sigma) > 1+\alpha. \end{cases}$$
If $\alpha > 1$, we have
$$I_{21} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\log\frac{t_{j+1}}{t_j}\right)^2\left(\log\frac{t_j}{a}\right)^{\sigma-1}\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1}\left(\frac{j^{r-1}}{N^r}\right)^2 (j/N)^{r(\sigma-1)} = C\,(k/N)^{(r-1)\alpha}\, N^{-\alpha}\, N^{-r\sigma-r}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r+r\sigma-2} \le C N^{-\alpha-r\sigma-r}\sum_{j=1}^{\lceil\frac{k-1}{2}\rceil-1} j^{r+r\sigma-2}.$$
Note that $r + r\sigma - 2 > -1$ for any $r \ge 1$. Hence, we have
$$I_{21} \le C N^{-\alpha-r\sigma-r}\, k^{r+r\sigma-1} = C\,(k/N)^{r+r\sigma-1}\, N^{-1-\alpha} \le C N^{-1-\alpha}.$$
For $I_{22}$, we have
$$I_{22} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=\lceil\frac{k-1}{2}\rceil}^{k-1}\log\frac{t_{j+1}}{t_j}\left(\log\frac{t_j}{a}\right)^{\sigma-1}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s}.$$
Noting that, with $\lceil\frac{k-1}{2}\rceil \le j \le k-1$ and $k \ge 2$,
$$\left(\log\frac{t_j}{a}\right)^{\sigma-1} = \left(\log\frac{t_N}{a}\right)^{\sigma-1}(j/N)^{r(\sigma-1)} = \left(\log\frac{t_N}{a}\right)^{\sigma-1}(N/j)^{r(1-\sigma)} \le C\,(N/k)^{r(1-\sigma)},$$
we have, with $\alpha > 0$,
$$I_{22} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=\lceil\frac{k-1}{2}\rceil}^{k-1}\left(C\,\frac{k^{r-1}}{N^r}\right)(N/k)^{r(1-\sigma)}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\, k^{r\sigma-1}\, N^{-r\sigma}\,(k/N)^{r\alpha} \le C\, k^{r(\sigma+\alpha)-1-\alpha}\, N^{-r(\sigma+\alpha)} \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-1-\alpha}, & \text{if } r(\sigma+\alpha) \ge 1+\alpha. \end{cases}$$
For $I_3$, we have, with $\alpha > 0$,
$$|I_3| \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\log\frac{t_{k+1}}{t_k}\left(\log\frac{t_k}{a}\right)^{\sigma-1}\frac{1}{\alpha}\left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha} \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\log\frac{t_{k+1}}{t_k}\right)^{\alpha+1}\left(\log\frac{t_k}{a}\right)^{\sigma-1}.$$
Further, we have
$$|I_3| \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\left(\frac{k^{r-1}}{N^r}\right)^{1+\alpha}(k/N)^{r(\sigma-1)} = C\,(k/N)^{r\alpha}\, k^{-\alpha}\, k^{r(\alpha+\sigma)-\alpha-1}\, N^{-r(\alpha+\sigma)} \le C\, k^{r(\alpha+\sigma)-\alpha-1}\, N^{-r(\alpha+\sigma)} \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-1-\alpha}, & \text{if } r(\sigma+\alpha) \ge 1+\alpha. \end{cases}$$
Together these estimates complete the proof of this Lemma. □
For Lemma 5, we are attempting to find an upper bound for the sum of our weights. This will be used in the main proof when simplifying several terms.
Lemma 5.
Let $\alpha > 0$. There exists a positive constant $C$ such that
$$\sum_{j=0}^{k} a_{j,k+1} \le C\left(\log\frac{T}{a}\right)^{\alpha}, \qquad (21)$$
$$\sum_{j=0}^{k} b_{j,k+1} \le C\left(\log\frac{T}{a}\right)^{\alpha}, \qquad (22)$$
where $a_{j,k+1}$ and $b_{j,k+1}$, $j = 0, 1, 2, \dots, k$, are defined by (12) and (10), respectively.
Proof. 
We only prove (21); the proof of (22) is similar. Note that
$$\frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} = \sum_{j=0}^{k+1} a_{j,k+1}\, g_a(t_j) + R_1,$$
where $R_1$ is the remainder term. Letting $g_a(s) = 1$ (for which $R_1 = 0$), we have
$$\sum_{j=0}^{k+1} a_{j,k+1} = \frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\frac{ds}{s} = \frac{1}{\Gamma(\alpha+1)}\left(\log\frac{t_{k+1}}{a}\right)^{\alpha} \le C\left(\log\frac{T}{a}\right)^{\alpha}.$$
Thus, (21) follows from the fact that $a_{k+1,k+1} > 0$ by Lemma 2. □
We will now use the above lemmas to prove the error estimates of Theorem 3.
Proof of Theorem 3.
Subtracting (13) from (3), we have
$$y(t_{k+1}) - y_{k+1} = \frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(f(s,y(s)) - P_1(s)\big)\frac{ds}{s} + \sum_{j=0}^{k} a_{j,k+1}\big(f(t_j, y(t_j)) - f(t_j, y_j)\big) + a_{k+1,k+1}\big(f(t_{k+1}, y(t_{k+1})) - f(t_{k+1}, y^P_{k+1})\big) = \frac{1}{\Gamma(\alpha)}\, I + II + III.$$
The term I is estimated by Lemma 1. For II, we have, by Lemma 2 and the Lipschitz condition of f,
$$|II| = \left|\sum_{j=0}^{k} a_{j,k+1}\big(f(t_j,y(t_j)) - f(t_j,y_j)\big)\right| \le \sum_{j=0}^{k} a_{j,k+1}\big|f(t_j,y(t_j)) - f(t_j,y_j)\big| \le L\sum_{j=0}^{k} a_{j,k+1}\,|y(t_j) - y_j|.$$
For $III$, we have, by Lemma 2 and the Lipschitz condition of $f$,
$$|III| = \big|a_{k+1,k+1}\big(f(t_{k+1},y(t_{k+1})) - f(t_{k+1},y^P_{k+1})\big)\big| \le a_{k+1,k+1}\, L\,\big|y(t_{k+1}) - y^P_{k+1}\big|.$$
Note that
$$y(t_{k+1}) - y^P_{k+1} = \frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(f(s,y(s)) - P_0(s)\big)\frac{ds}{s} + \sum_{j=0}^{k} b_{j,k+1}\big(f(t_j,y(t_j)) - f(t_j,y_j)\big).$$
Thus,
$$|III| \le C\, a_{k+1,k+1}\, L\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big|f(s,y(s)) - P_0(s)\big|\frac{ds}{s} + C\, a_{k+1,k+1}\, L\sum_{j=0}^{k} b_{j,k+1}\big|f(t_j,y(t_j)) - f(t_j,y_j)\big| = III_1 + III_2.$$
The term $III_1$ is estimated by Lemma 4. For $III_2$, we have, by Lemma 3 and the Lipschitz condition of $f$,
$$III_2 \le C\, a_{k+1,k+1}\, L\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j)-y_j| \le \big(C N^{-r\alpha} k^{(r-1)\alpha}\big)\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j)-y_j| = C\,(k/N)^{(r-1)\alpha}\, N^{-\alpha}\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j)-y_j| \le C N^{-\alpha}\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j)-y_j|.$$
Hence, we obtain
$$|y(t_{k+1}) - y_{k+1}| \le C\,|I| + C\sum_{j=0}^{k} a_{j,k+1}\,|y(t_j)-y_j| + C\,|III_1| + C N^{-\alpha}\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j)-y_j|.$$
The rest of the proof is exactly the same as the proof of [18] (Theorem 1.4). The proof of Theorem 3 is complete. □

3. Rectangular and Trapezoidal Methods with Non-Uniform Meshes

In this section, we will consider the error estimates for the fractional rectangular and trapezoidal methods for solving (1). These results are based on the error estimates proposed by Liu et al. [19]. First, we will introduce the non-uniform meshes for solving (1).
Let $N$ be a positive integer and let $a = t_0 < t_1 < \dots < t_N = T$ be a partition of $[a, T]$. We define the following non-uniform mesh on $[\log a, \log T]$ with
$$\log a = \log t_0 < \log t_1 < \dots < \log t_N = \log T,$$
such that
$$\frac{\log t_j - \log a}{\log t_N - \log a} = \frac{j(j+1)}{N(N+1)},$$
which implies that
$$\log t_j = \log a + \log\frac{t_N}{a}\,\frac{j(j+1)}{N(N+1)}.$$
When $j = 0$ we have $\log t_0 = \log a$, and when $j = N$ we have $\log t_N = \log T$. Further, we have
$$\tau_j := \log t_{j+1} - \log t_j = \log\frac{t_{j+1}}{t_j} = \frac{2(j+1)}{N(N+1)}\log\frac{t_N}{a}.$$
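A minimal sketch of this mesh (Python; note that, unlike the graded mesh, there is no tunable exponent, and the steps $\tau_j$ increase with $j$, as required in Lemma 7 below):

```python
import numpy as np

def nonuniform_log_mesh(a, T, N):
    """Nodes t_j with log t_j = log a + log(T/a) * j(j+1)/(N(N+1))."""
    j = np.arange(N + 1)
    return np.exp(np.log(a) + np.log(T / a) * j * (j + 1) / (N * (N + 1)))

t = nonuniform_log_mesh(1.0, 2.0, 8)
tau = np.diff(np.log(t))   # tau_j = 2(j+1) log(T/a) / (N(N+1)), increasing in j
print(tau)
```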

3.1. Rectangular Method

In this subsection, we prove the following error estimate for the rectangular method over the given non-uniform mesh.
Theorem 4.
Assume that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1. Further assume that $y(t_j)$ and $y_j$ are the solutions of (3) and (9), respectively.
  • If $0 < \alpha \le 1$, then we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-2(\sigma+\alpha)}, & \text{if } 0 < 2(\sigma+\alpha) < 1, \\ C N^{-2(\sigma+\alpha)}\log N, & \text{if } 2(\sigma+\alpha) = 1, \\ C N^{-1}, & \text{if } 2(\sigma+\alpha) > 1. \end{cases}$$
  • If $\alpha > 1$, then we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le C N^{-1}.$$
To prove Theorem 4, we need some preliminary lemmas. We only state the lemmas without proofs, since the proofs are similar to those in Liu et al. [19]. Lemma 6 gives a key quadrature estimate which we will use in the main proof.
Lemma 6.
Assume that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1.
  • If $0 < \alpha \le 1$, then we have, with $k = 0, 1, 2, \dots, N-1$ and $N \ge 1$,
$$\left|\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k} b_{j,k+1}\, g_a(t_j)\right| \le \begin{cases} C N^{-2(\sigma+\alpha)}, & \text{if } 0 < 2(\sigma+\alpha) < 1, \\ C N^{-2(\sigma+\alpha)}\log N, & \text{if } 2(\sigma+\alpha) = 1, \\ C N^{-1}, & \text{if } 2(\sigma+\alpha) > 1. \end{cases}$$
  • If $1 < \alpha < 2$, then we have
$$\left|\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k} b_{j,k+1}\, g_a(t_j)\right| \le C N^{-1}.$$
In Lemma 7 we find some upper bounds for the weights $b_{j,k+1}$ and $a_{j,k+1}$.
Lemma 7.
If $\alpha > 0$, $k$ is a non-negative integer, and $\tau_j \le \tau_{j+1}$, $j = 0, 1, \dots, k-1$, then the weights $b_{j,k+1}$ and $a_{j,k+1}$ defined by (10) and (12) satisfy the following estimates:
$$b_{j,k+1} \le C_\alpha\, \tau_j \left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha-1}, \quad j = 0, 1, 2, \dots, k,$$
and
$$a_{j,k+1} \le C_\alpha \begin{cases} \tau_0\left(\log\frac{t_{k+1}}{a}\right)^{\alpha-1}, & j = 0, \\ \tau_j\left(\log\frac{t_{k+1}}{t_j}\right)^{\alpha-1} + \tau_{j-1}\left(\log\frac{t_{k+1}}{t_{j-1}}\right)^{\alpha-1}, & j = 1, 2, \dots, k+1, \end{cases}$$
where $C_\alpha = \frac{1}{\Gamma(\alpha+1)}\max\{2, \alpha\}$.
In Lemma 8 we will give an adapted Gronwall inequality to be used in the main results.
Lemma 8.
Assume that $\alpha, C_0, T > 0$ and $b_{j,k} = C_0\,\tau_j\left(\log\frac{t_k}{t_j}\right)^{\alpha-1}$, $j = 0, 1, 2, \dots, k-1$, for $a = t_0 < t_1 < \dots < t_k < \dots < t_N = T$, $k = 1, 2, \dots, N$, where $N$ is a positive integer and $\tau_j = \log\frac{t_{j+1}}{t_j}$. Let $g_0$ be positive and let the sequence $\{\psi_k\}$ satisfy
$$\psi_0 \le g_0, \qquad \psi_k \le \sum_{j=1}^{k-1} b_{j,k}\,\psi_j + g_0;$$
then
$$\psi_k \le C\, g_0, \quad k = 1, 2, \dots, N.$$
Proof of Theorem 4.
For $k = 0, 1, 2, \dots, N-1$, we have
$$|y(t_{k+1}) - y_{k+1}| = \left|\frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k} b_{j,k+1}\, f(t_j, y_j)\right| \le \frac{1}{\Gamma(\alpha)}\left|\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\big(g_a(s) - g_a(t_j)\big)\frac{ds}{s}\right| + \left|\sum_{j=0}^{k} b_{j,k+1}\big(g_a(t_j) - f(t_j, y_j)\big)\right| = I + II.$$
The first term $I$ can be estimated by Lemma 6. For $II$, we can apply Lemma 2 and the Lipschitz condition of $f$, noting that $g_a(t_j) = f(t_j, y(t_j))$:
$$II = \left|\sum_{j=0}^{k} b_{j,k+1}\big(g_a(t_j) - f(t_j,y_j)\big)\right| \le L\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j) - y_j|.$$
Substituting into the above, we get
$$|y(t_{k+1}) - y_{k+1}| \le I + L\sum_{j=0}^{k} b_{j,k+1}\,|y(t_j) - y_j|.$$
By applying Lemma 8, we get
$$|y(t_{k+1}) - y_{k+1}| \le C\, I.$$
This completes the proof of Theorem 4. □

3.2. Trapezoid Formula

In this subsection we consider the error estimates of the trapezoidal method over the non-uniform mesh. We shall prove the following theorem.
Theorem 5.
Assume that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1. Further assume that $y(t_j)$ and $y_j$ are the solutions of (3) and (11), respectively.
  • If $0 < \alpha \le 1$, then we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-2(\sigma+\alpha)}, & \text{if } 0 < 2(\sigma+\alpha) < 2, \\ C N^{-2(\sigma+\alpha)}\log N, & \text{if } 2(\sigma+\alpha) = 2, \\ C N^{-2}, & \text{if } 2(\sigma+\alpha) > 2. \end{cases}$$
  • If $1 < \alpha < 2$, then we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le C N^{-2}.$$
To prove Theorem 5, we need the following lemma, which gives a key quadrature estimate used in the main proof.
Lemma 9.
Assume that $g_a(t) := {}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1.
  • If $0 < \alpha \le 1$, then we have, with $k = 0, 1, 2, \dots, N-1$ and $N \ge 1$,
$$\left|\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k+1} a_{j,k+1}\, g_a(t_j)\right| \le \begin{cases} C N^{-2(\sigma+\alpha)}, & \text{if } 0 < 2(\sigma+\alpha) < 2, \\ C N^{-2(\sigma+\alpha)}\log N, & \text{if } 2(\sigma+\alpha) = 2, \\ C N^{-2}, & \text{if } 2(\sigma+\alpha) > 2. \end{cases}$$
  • If $1 < \alpha < 2$, then we have
$$\left|\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k+1} a_{j,k+1}\, g_a(t_j)\right| \le C N^{-2}.$$
Proof of Theorem 5.
For $k = 0, 1, 2, \dots, N-1$, we have
$$|y(t_{k+1}) - y_{k+1}| = \left|\frac{1}{\Gamma(\alpha)}\int_a^{t_{k+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1} g_a(s)\,\frac{ds}{s} - \sum_{j=0}^{k+1} a_{j,k+1}\, f(t_j, y_j)\right| \le \left|\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\left(\log\frac{t_{k+1}}{s}\right)^{\alpha-1}\left[g_a(s) - \frac{\log s - \log t_{j+1}}{\log t_j - \log t_{j+1}}\, g_a(t_j) - \frac{\log s - \log t_j}{\log t_{j+1} - \log t_j}\, g_a(t_{j+1})\right]\frac{ds}{s}\right| + \left|\sum_{j=0}^{k+1} a_{j,k+1}\big(g_a(t_j) - f(t_j, y_j)\big)\right| = I + II.$$
The term $I$ is estimated by Lemma 9. For $II$ we can apply Lemma 7 and the Lipschitz condition of $f$:
$$II = \left|\sum_{j=0}^{k+1} a_{j,k+1}\big(g_a(t_j) - f(t_j,y_j)\big)\right| \le L\sum_{j=0}^{k+1} a_{j,k+1}\,|y(t_j) - y_j|.$$
Thus we obtain
$$|y(t_{k+1}) - y_{k+1}| \le I + L\sum_{j=0}^{k+1} a_{j,k+1}\,|y(t_j) - y_j|.$$
By using the corresponding Gronwall inequality (Lemma 8), we have $|y(t_{k+1}) - y_{k+1}| \le C\, I$. This completes the proof of Theorem 5. □

4. Numerical Examples

In this section, we consider some numerical examples to confirm the theoretical results obtained in the previous sections. For simplicity, all the examples below take $0 < \alpha < 1$; the methods and results may be adapted to $\alpha > 1$.
Example 1.
Consider the following nonlinear fractional differential equation, with $\alpha \in (0,1)$ and $a = 1$:
$${}^{CH}D^{\alpha}_{a,t} y(t) = f(t, y), \quad 1 \le a < t \le T, \qquad y(a) = 0, \qquad (23)$$
where
$$f(t,y) = \frac{\Gamma(6)}{\Gamma(6-\alpha)}(\log t)^{5-\alpha} - \frac{\Gamma(5)}{\Gamma(5-\alpha)}(\log t)^{4-\alpha} + 2\,\frac{\Gamma(4)}{\Gamma(4-\alpha)}(\log t)^{3-\alpha} - y^2 + \big[(\log t)^5 - (\log t)^4 + 2(\log t)^3\big]^2.$$
The exact solution of this equation is $y(t) = (\log t)^5 - (\log t)^4 + 2(\log t)^3$. We will solve Example 1 over the interval $[1,2]$. Let $N$ be a positive integer and let $\log a = \log t_0 < \log t_1 < \dots < \log t_N = \log T$ be the graded mesh on the interval $[\log a, \log T]$, defined as $\log t_j = \log a + \log\frac{T}{a}\,(j/N)^r$ for $j = 0, 1, 2, \dots, N$ with $r \ge 1$. Therefore, we have by Theorem 3,
$$\|e_N\| := \max_{0\le j\le N}|y(t_j) - y_j| \le C N^{-(1+\alpha)}. \qquad (24)$$
In Table 1 we show the maximum absolute error and the experimental order of convergence (EOC) for the predictor-corrector method at varying $\alpha$ and $N$ values. For each $0 < \alpha < 1$, we have chosen $N = 10 \times 2^l$, $l = 0, 1, 2, \dots, 7$. For this example we have taken $r = 1$. The maximum absolute errors $\|e_N\|$ were obtained as above with respect to $N$, and we calculate the experimental order of convergence (EOC) as $\log_2\frac{\|e_N\|}{\|e_{2N}\|}$.
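The EOC computation itself is a one-liner; a small sketch (Python), checked here against the first three $\alpha = 0.8$ entries of Table 1:

```python
import numpy as np

def eoc(errors):
    """EOC between successive refinements N -> 2N: log2(||e_N|| / ||e_{2N}||)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

print(eoc([9.960e-3, 2.761e-3, 7.617e-4]))   # approximately [1.851, 1.858]
```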
As we can see, the EOCs for this example are almost $O(N^{-(1+\alpha)})$, as predicted by Theorem 3. Since the solution of this FODE is sufficiently smooth, any value of $r \ge 1$ gives the optimal convergence order above. As we are using $r = 1$, the mesh is uniform in $\log t$, so we can compare these results with the methods introduced by Gohar et al. [13]; we obtain similar results.
In Figure 1, we have plotted the order of convergence for Example 1. From Equation (24) we have, with $h = 1/N$,
$$\log_2 \|e_N\| \le \log_2 C + \log_2 N^{-(1+\alpha)} = \log_2 C + (1+\alpha)\log_2 h.$$
Let $y = \log_2\|e_N\|$ and $x = \log_2 h$. We then plotted $y$ against $x$ for $h = \frac{1}{5\times 2^l}$, $l = 0, 1, \dots, 7$; the gradient of the graph then equals the EOC. To compare this with the theoretical order of convergence, we have also plotted the straight line $y = (1+\alpha)x$. For Figure 1 we chose $\alpha = 0.8$. We observe that the two lines are parallel, so we conclude that the order of convergence of this predictor-corrector method is $O(h^{1+\alpha})$.
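Such a plot can be produced along the following lines (Python with matplotlib; the error values below are placeholders for the measured $\|e_N\|$):

```python
import numpy as np
import matplotlib.pyplot as plt

alpha = 0.8
h = 1.0 / (5 * 2 ** np.arange(8))       # h = 1/(5 * 2^l), l = 0, ..., 7
errors = 0.5 * h ** (1 + alpha)         # placeholder for the measured errors

x, y = np.log2(h), np.log2(errors)
plt.plot(x, y, "o-", label="measured EOC")
plt.plot(x, (1 + alpha) * x, "--", label="slope 1 + alpha")
plt.xlabel("log2 h"); plt.ylabel("log2 ||e_N||")
plt.legend(); plt.show()
```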
Example 2.
Consider the following nonlinear fractional differential equation, with $\alpha, \beta \in (0,1)$ and $a = 1$:
$${}^{CH}D^{\alpha}_{a,t} y(t) = f(t, y), \quad 1 \le a < t \le T, \qquad y(a) = 0, \qquad (25)$$
where
$$f(t,y) = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}(\log t)^{\beta-\alpha} + (\log t)^{2\beta} - y^2.$$
We will solve Example 2 over the interval $[1,2]$. The exact solution of this equation is $y = \left(\log\frac{t}{a}\right)^{\beta}$, and ${}^{CH}D^{\alpha}_{a,t} y(t) = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}(\log t)^{\beta-\alpha}$. This implies that ${}^{CH}D^{\alpha}_{a,t} y(t)$ behaves as $(\log t)^{\beta-\alpha}$, so ${}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1. We use the same graded mesh as in Example 1. Therefore, we have by Theorem 3, with $\sigma = \beta - \alpha$,
$$\|e_N\| := \max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-r\beta}, & \text{if } r < \frac{1+\alpha}{\beta}, \\ C N^{-r\beta}\log N, & \text{if } r = \frac{1+\alpha}{\beta}, \\ C N^{-(1+\alpha)}, & \text{if } r > \frac{1+\alpha}{\beta}. \end{cases}$$
In Table 2, Table 3 and Table 4 we show the EOC for the predictor-corrector method with varying values of $\alpha$ and with $r = 1$ and $r = \frac{1+\alpha}{\beta}$. With a fixed $\beta = 0.9$ we obtain the EOC and maximum absolute error for increasing values of $N$. We see that the EOC is almost $r\beta = 0.9$ when $r = 1$, and almost $1+\alpha$ when $r = \frac{1+\alpha}{\beta}$.
When $r = 1$, we are using a uniform mesh, and the EOC obtained is the same as that obtained by Gohar et al. [13]. Comparing these with the results of the graded mesh with $r = \frac{1+\alpha}{\beta}$, we see that a higher EOC is obtained and the optimal order of convergence is recovered.
In Figure 2, we have plotted the order of convergence for Example 2 when $r = \frac{1+\alpha}{\beta}$ and $\alpha = 0.8$. The plot is produced in the same way as Figure 1, and we have also plotted the straight line $y = (1+\alpha)x$. We observe that the two lines are parallel, so we conclude that the order of convergence of this predictor-corrector method is $O(h^{1+\alpha})$.
Example 3.
Consider the following linear fractional differential equation, with $\alpha \in (0,1)$ and $a = 1$:
$${}^{CH}D^{\alpha}_{a,t} y(t) + y(t) = 0, \quad 1 \le a < t \le T, \qquad y(a) = 1. \qquad (27)$$
The exact solution of this FODE is $y(t) = E_{\alpha,1}\big(-(\log t)^{\alpha}\big)$, and therefore ${}^{CH}D^{\alpha}_{a,t} y(t) = -E_{\alpha,1}\big(-(\log t)^{\alpha}\big)$, where $E_{\alpha,\gamma}(z)$ is the Mittag–Leffler function
$$E_{\alpha,\gamma}(z) = \sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha k + \gamma)}, \quad \alpha, \gamma > 0.$$
Therefore
$${}^{CH}D^{\alpha}_{a,t} y(t) = -\sum_{k=0}^{\infty}\frac{\big(-(\log t)^{\alpha}\big)^k}{\Gamma(\alpha k + 1)} = -1 + \frac{(\log t)^{\alpha}}{\Gamma(\alpha+1)} - \frac{(\log t)^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots, \quad \alpha > 0.$$
This shows that ${}^{CH}D^{\alpha}_{a,t} y(t)$ behaves as $c + c\,(\log t)^{\alpha}$, so ${}^{CH}D^{\alpha}_{a,t} y(t)$ satisfies Assumption 1. Therefore, with $\sigma = \alpha$, we have by Theorem 3,
$$\|e_N\| := \max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-2r\alpha}, & \text{if } r < \frac{1+\alpha}{2\alpha}, \\ C N^{-2r\alpha}\log N, & \text{if } r = \frac{1+\alpha}{2\alpha}, \\ C N^{-(1+\alpha)}, & \text{if } r > \frac{1+\alpha}{2\alpha}. \end{cases}$$
We solve this equation over the same graded mesh as in Example 1 with varying $r$ values. In Table 5, Table 6 and Table 7, we have calculated the EOC and maximum absolute error with respect to increasing $N$ and with $r = 1$ and $r = \frac{1+\alpha}{2\alpha}$. The experimental orders of convergence are almost $O(N^{-2r\alpha})$ if we choose $r = 1$ and almost $O(N^{-(1+\alpha)})$ if we choose $r = \frac{1+\alpha}{2\alpha}$. Once again, when we use a graded mesh with the optimal $r$ value, we get a higher order of convergence than that obtained with the uniform mesh at $r = 1$.
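To measure these errors, the reference solution of (27) needs the Mittag–Leffler function. A minimal sketch by direct series truncation (Python; the parameter beta here is the second Mittag–Leffler index $\gamma$, not the $\beta$ of Example 2, and truncation is adequate for the small arguments $|z| \le (\log 2)^{\alpha}$ arising on $[1,2]$, though not a robust general-purpose evaluator):

```python
from math import gamma, log

def mittag_leffler(z, alpha, beta=1.0, K=60):
    """Truncated series E_{alpha,beta}(z) = sum_{k=0}^{K} z^k / Gamma(alpha*k + beta)."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(K + 1))

def y_exact(t, alpha):
    """Exact solution of Example 3: y(t) = E_{alpha,1}(-(log t)^alpha)."""
    return mittag_leffler(-(log(t) ** alpha), alpha)

print(y_exact(2.0, 0.8))
```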
In Figure 3, we have plotted the order of convergence for Example 3 when $r = \frac{1+\alpha}{2\alpha}$ and $\alpha = 0.8$. The plot is produced in the same way as Figure 1, and we have also plotted the straight line $y = (1+\alpha)x$. We observe that the two lines are parallel, so we conclude that the order of convergence of this predictor-corrector method is $O(h^{1+\alpha})$ when the graded mesh ratio $r$ is chosen suitably.
Example 4.
In this example we apply the rectangular and trapezoidal methods to (27). Let $N$ be a positive integer and let $\log t_j = \log a + \log\frac{t_N}{a}\,\frac{j(j+1)}{N(N+1)}$, $j = 0, 1, 2, \dots, N$, be the non-uniform mesh on the interval $[\log a, \log T]$. We use $a = 1$ and $T = 2$.
In Table 8, we have calculated the EOC and maximum absolute error with respect to increasing $N$ and with $\alpha = 0.2, 0.4, 0.6$ for the rectangular method. By once again using the fact that $\sigma = \alpha$ and applying Theorem 4, we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-4\alpha}, & \text{if } 0 < 4\alpha < 1, \\ C N^{-4\alpha}\log N, & \text{if } 4\alpha = 1, \\ C N^{-1}, & \text{if } 4\alpha > 1. \end{cases}$$
The experimental orders of convergence are almost $O(N^{-4\alpha})$ if $\alpha < 0.25$ and almost $O(N^{-1})$ if $\alpha \ge 0.25$. This confirms the theoretical error estimates of Section 3. In Table 9, we have used the same method to solve (27) but with the uniform mesh. This shows how a larger EOC is achieved when using a non-uniform mesh rather than a uniform mesh.
In Table 10, we have calculated the EOC and maximum absolute error with respect to increasing $N$ and with $\alpha = 0.2, 0.4, 0.6$ for the trapezoidal method. By once again using the fact that $\sigma = \alpha$ and applying Theorem 5, we have
$$\max_{0\le j\le N}|y(t_j) - y_j| \le \begin{cases} C N^{-4\alpha}, & \text{if } 0 < 4\alpha < 2, \\ C N^{-4\alpha}\log N, & \text{if } 4\alpha = 2, \\ C N^{-2}, & \text{if } 4\alpha > 2. \end{cases}$$
The experimental orders of convergence are almost $O(N^{-4\alpha})$ if $\alpha < 0.5$ and almost $O(N^{-2})$ if $\alpha \ge 0.5$. This confirms the theoretical error estimates of Section 3. In Table 11, we have used the same method to solve (27) but with the uniform mesh. This again shows how a larger EOC is achieved when using a non-uniform mesh rather than a uniform mesh.

5. Conclusions

In this paper we propose several numerical methods for solving Caputo–Hadamard fractional differential equations with graded and non-uniform meshes. We first introduce a predictor-corrector method and prove error estimates over a graded mesh, showing that the optimal convergence orders can be recovered when the solutions are not sufficiently smooth. We then establish error estimates for the fractional rectangular and fractional trapezoidal methods with some non-uniform meshes. Finally, we present several numerical simulations supporting the theoretical results on the convergence orders and error estimates of the above methods.

Author Contributions

We have equal contributions to this work. C.W.H.G. considered the theoretical analysis and wrote the original version of the work. Y.L. considered the theoretical analysis and performed the numerical simulation. Y.Y. introduced and guided this research topic. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010.
  2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204.
  3. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: New York, NY, USA, 1974.
  4. Podlubny, I. Fractional Differential Equations; Mathematics in Science and Engineering; Academic Press: San Diego, CA, USA, 1999.
  5. Adjabi, Y.; Jarad, F.; Baleanu, D.; Abdeljawad, T. On Cauchy problems with Caputo Hadamard fractional derivatives. J. Comput. Anal. Appl. 2016, 21, 661–681.
  6. Garra, R.; Mainardi, F.; Spada, G. A generalization of the Lomnitz logarithmic creep law via Hadamard fractional calculus. Chaos Solitons Fractals 2017, 102, 333–338.
  7. Gohar, M.; Li, C.; Yin, C. On Caputo–Hadamard fractional differential equations. Int. J. Comput. Math. 2020, 97, 1459–1483.
  8. Jarad, F.; Abdeljawad, T.; Baleanu, D. Caputo-type modification of the Hadamard fractional derivatives. Adv. Differ. Equ. 2012, 2012, 142.
  9. Kilbas, A.A. Hadamard-type fractional calculus. J. Korean Math. Soc. 2001, 38, 1191–1204.
  10. Li, C.; Cai, M. Theory and Numerical Approximations of Fractional Integrals and Derivatives; SIAM: Philadelphia, PA, USA, 2019.
  11. Ma, L. On the kinetics of Hadamard-type fractional differential systems. Fract. Calc. Appl. Anal. 2020, 23, 553–570.
  12. Hadamard, J. Essai sur l'étude des fonctions données par leur développement de Taylor. J. Pure Appl. Math. 1892, 4, 101–186.
  13. Gohar, M.; Li, C.; Li, Z. Finite difference methods for Caputo–Hadamard fractional differential equations. Mediterr. J. Math. 2020, 17, 194.
  14. Abbas, S.; Benchohra, M.; Hamidi, N.; Henderson, J. Caputo–Hadamard fractional differential equations in Banach spaces. Fract. Calc. Appl. Anal. 2018, 21, 1027–1045.
  15. Samei, M.E.; Hedayati, V.; Rezapour, S. Existence results for a fractional hybrid differential inclusion with Caputo–Hadamard type fractional derivative. Adv. Differ. Equ. 2019, 2019, 163.
  16. Ardjouni, A. Existence and uniqueness of positive solutions for nonlinear Caputo–Hadamard fractional differential equations. Proyecciones 2021, 40, 139–152.
  17. Li, C.; Li, Z.; Wang, Z. Mathematical analysis and the local discontinuous Galerkin method for Caputo–Hadamard fractional partial differential equation. J. Sci. Comput. 2020, 85, 41.
  18. Liu, Y.; Roberts, J.; Yan, Y. Detailed error analysis for a fractional Adams method with graded meshes. Numer. Algorithms 2018, 78, 1195–1216.
  19. Liu, Y.; Roberts, J.; Yan, Y. A note on finite difference methods for nonlinear fractional differential equations with non-uniform meshes. Int. J. Comput. Math. 2018, 95, 1151–1169.
  20. Li, C.; Yi, Q.; Chen, A. Finite difference methods with non-uniform meshes for nonlinear fractional differential equations. J. Comput. Phys. 2016, 316, 614–631.
  21. Diethelm, K. Generalized compound quadrature formulae for finite-part integrals. IMA J. Numer. Anal. 1997, 17, 479–493.
  22. Zhang, Y.; Sun, Z.; Liao, H. Finite difference methods for the time fractional diffusion equation on non-uniform meshes. J. Comput. Phys. 2014, 265, 195–210.
  23. Stynes, M. Too much regularity may force too much uniqueness. Fract. Calc. Appl. Anal. 2016, 19, 1554–1562.
  24. Stynes, M.; O'Riordan, E.; Gracia, J.L. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. SIAM J. Numer. Anal. 2017, 55, 1057–1079.
  25. Lubich, C. Runge–Kutta theory for Volterra and Abel integral equations of the second kind. Math. Comput. 1983, 41, 87–102.
Figure 1. Graph showing the experimental order of convergence (EOC) at $T = 2$ in Example 1 with $\alpha = 0.8$.
Figure 2. Graph showing the experimental order of convergence (EOC) at $T = 2$ in Example 2 with $\alpha = 0.8$ and $r = \frac{1+\alpha}{\beta}$.
Figure 3. Graph showing the experimental order of convergence (EOC) at $T = 2$ in Example 3 with $\alpha = 0.8$ and $r = \frac{1+\alpha}{2\alpha}$.
Table 1. Table showing the maximum absolute error and EOC for solving (23) using the predictor-corrector method.

N     α = 0.4         EOC     α = 0.6         EOC     α = 0.8         EOC
10    3.475 × 10^-2           1.734 × 10^-2           9.960 × 10^-3
20    1.263 × 10^-2   1.460   5.427 × 10^-3   1.676   2.761 × 10^-3   1.851
40    4.446 × 10^-3   1.507   1.686 × 10^-3   1.687   7.617 × 10^-4   1.858
80    1.562 × 10^-3   1.509   5.275 × 10^-4   1.676   2.106 × 10^-4   1.854
160   5.543 × 10^-4   1.495   1.668 × 10^-4   1.661   5.850 × 10^-5   1.848
320   1.992 × 10^-4   1.477   5.328 × 10^-5   1.646   1.632 × 10^-5   1.842
640   7.241 × 10^-5   1.460   1.716 × 10^-5   1.635   4.568 × 10^-6   1.837
1280  2.657 × 10^-5   1.446   5.562 × 10^-6   1.625   1.283 × 10^-6   1.832
Table 2. Table showing the maximum absolute error and EOC for solving (25) using the predictor-corrector method for α = 0.4, β = 0.9.

N     r = 1           EOC     r = (1+α)/β     EOC
10    1.100 × 10^-2           1.858 × 10^-2
20    5.635 × 10^-3   0.965   6.141 × 10^-3   1.598
40    3.177 × 10^-3   0.827   2.048 × 10^-3   1.584
80    1.737 × 10^-3   0.871   7.009 × 10^-4   1.547
160   9.380 × 10^-4   0.889   2.457 × 10^-4   1.512
320   5.043 × 10^-4   0.895   8.780 × 10^-5   1.485
640   2.706 × 10^-4   0.898   3.184 × 10^-5   1.464
1280  1.451 × 10^-4   0.899   1.167 × 10^-5   1.448
Table 3. Table showing the maximum absolute error and EOC for solving (25) using the predictor-corrector scheme for α = 0.6, β = 0.9.

N     r = 1           EOC     r = (1+α)/β     EOC
10    2.151 × 10^-2           6.370 × 10^-3
20    1.193 × 10^-2   0.851   1.922 × 10^-3   1.728
40    6.468 × 10^-3   0.883   5.954 × 10^-4   1.691
80    3.480 × 10^-3   0.894   1.888 × 10^-4   1.657
160   1.868 × 10^-3   0.898   6.083 × 10^-5   1.634
320   1.001 × 10^-3   0.899   1.980 × 10^-5   1.620
640   5.368 × 10^-4   0.900   6.482 × 10^-6   1.611
1280  2.877 × 10^-4   0.900   2.130 × 10^-6   1.605
Table 4. Table showing the maximum absolute error and EOC for solving (25) using the predictor-corrector method for α = 0.8, β = 0.9.

N     r = 1           EOC     r = (1+α)/β     EOC
10    3.536 × 10^-2           4.523 × 10^-3
20    1.916 × 10^-2   0.884   1.299 × 10^-3   1.800
40    1.030 × 10^-2   0.895   3.731 × 10^-4   1.800
80    5.528 × 10^-3   0.898   1.071 × 10^-4   1.800
160   2.963 × 10^-3   0.900   3.077 × 10^-5   1.800
320   1.588 × 10^-3   0.900   8.836 × 10^-6   1.800
640   8.510 × 10^-4   0.900   2.537 × 10^-6   1.800
1280  4.561 × 10^-4   0.900   7.287 × 10^-7   1.800
Table 5. Table showing the maximum absolute error and EOC for solving (27) using the predictor-corrector method for α = 0.4.

N     r = 1           EOC      r = (1+α)/(2α)  EOC
10    9.399 × 10^-3            3.677 × 10^-3
20    2.049 × 10^-3   2.197    1.234 × 10^-3   1.575
40    4.752 × 10^-4   2.108    4.687 × 10^-4   1.397
80    1.000 × 10^-3   −1.074   2.116 × 10^-4   1.147
160   9.226 × 10^-4   0.116    8.834 × 10^-5   1.260
320   6.885 × 10^-4   0.422    3.542 × 10^-5   1.319
640   4.670 × 10^-4   0.560    1.388 × 10^-5   1.352
1280  3.002 × 10^-4   0.637    5.367 × 10^-6   1.371
Table 6. Table showing the maximum absolute error and EOC for solving (27) using the predictor-corrector method for α = 0.6.

N     r = 1           EOC      r = (1+α)/(2α)  EOC
10    6.864 × 10^-4            1.512 × 10^-3
20    9.020 × 10^-4   −0.394   4.756 × 10^-4   1.669
40    5.967 × 10^-4   0.645    1.766 × 10^-4   1.429
80    3.767 × 10^-4   0.914    6.423 × 10^-5   1.459
160   1.495 × 10^-4   1.034    2.233 × 10^-5   1.524
320   6.982 × 10^-5   1.098    7.587 × 10^-6   1.558
640   3.177 × 10^-5   1.136    2.545 × 10^-6   1.576
1280  1.423 × 10^-5   1.159    8.473 × 10^-7   1.586
Table 7. Table showing the maximum absolute error and EOC for solving (27) using the predictor-corrector method for α = 0.8.

N     r = 1           EOC     r = (1+α)/(2α)  EOC
10    4.175 × 10^-4           6.100 × 10^-4
20    1.700 × 10^-4   1.297   1.717 × 10^-4   1.829
40    7.021 × 10^-5   1.275   4.972 × 10^-5   1.788
80    2.589 × 10^-5   1.439   1.459 × 10^-5   1.769
160   9.062 × 10^-6   1.514   4.308 × 10^-6   1.760
320   3.089 × 10^-6   1.553   1.274 × 10^-6   1.758
640   1.038 × 10^-6   1.574   3.766 × 10^-7   1.758
1280  3.459 × 10^-7   1.585   1.111 × 10^-7   1.760
Table 8. Table showing the maximum absolute error and EOC for solving (27) using the rectangular method on the non-uniform mesh.

N     α = 0.2         EOC     α = 0.4         EOC     α = 0.6         EOC
40    7.919 × 10^-2           8.348 × 10^-3           2.852 × 10^-3
80    4.843 × 10^-2   0.710   2.869 × 10^-3   1.141   1.404 × 10^-3   1.023
160   2.921 × 10^-2   0.730   9.688 × 10^-4   1.166   6.951 × 10^-4   1.014
320   1.742 × 10^-2   0.745   3.239 × 10^-4   1.181   3.454 × 10^-4   1.009
640   1.030 × 10^-2   0.758   1.491 × 10^-4   1.119   1.720 × 10^-4   1.006
1280  6.053 × 10^-3   0.767   7.336 × 10^-5   1.023   8.577 × 10^-5   1.004
Table 9. Table showing the maximum absolute error and EOC for solving (27) using the rectangular method on a uniform mesh.

N     α = 0.2         EOC     α = 0.4         EOC     α = 0.6         EOC
40    1.734 × 10^-1           4.650 × 10^-2           9.971 × 10^-3
80    1.375 × 10^-1   0.335   2.795 × 10^-2   0.735   4.475 × 10^-3   1.156
160   1.085 × 10^-1   0.342   1.661 × 10^-2   0.751   1.986 × 10^-3   1.172
320   8.519 × 10^-2   0.348   9.793 × 10^-3   0.762   8.750 × 10^-4   1.182
640   6.667 × 10^-2   0.354   5.737 × 10^-3   0.771   3.839 × 10^-4   1.189
1280  5.199 × 10^-2   0.359   3.345 × 10^-3   0.778   1.728 × 10^-4   1.152
Table 10. Table showing the maximum absolute error and EOC for solving (27) using the trapezoidal method on the non-uniform mesh.

N     α = 0.2         EOC     α = 0.4         EOC     α = 0.6         EOC
40    8.193 × 10^-3           1.266 × 10^-3           9.466 × 10^-5
80    5.211 × 10^-3   0.653   4.391 × 10^-4   1.527   1.832 × 10^-5   2.370
160   3.241 × 10^-3   0.685   1.491 × 10^-4   1.559   3.506 × 10^-6   2.385
320   1.981 × 10^-3   0.711   5.000 × 10^-5   1.577   6.675 × 10^-7   2.393
640   1.193 × 10^-3   0.731   1.664 × 10^-5   1.587   1.321 × 10^-7   2.338
1280  7.110 × 10^-4   0.747   5.517 × 10^-6   1.593   3.300 × 10^-8   2.003
Table 11. Table showing the maximum absolute error and EOC for solving (27) using the trapezoidal method on a uniform mesh.

N     α = 0.2         EOC     α = 0.4         EOC     α = 0.6         EOC
40    1.640 × 10^-2           6.803 × 10^-3           7.617 × 10^-4
80    1.341 × 10^-2   0.291   4.150 × 10^-3   0.713   2.106 × 10^-4   1.854
160   1.087 × 10^-2   0.302   2.494 × 10^-3   0.735   5.850 × 10^-5   1.848
320   8.754 × 10^-3   0.313   1.482 × 10^-3   0.751   1.632 × 10^-5   1.842
640   7.001 × 10^-3   0.322   8.733 × 10^-4   0.763   4.568 × 10^-6   1.837
1280  5.567 × 10^-3   0.331   5.115 × 10^-4   0.719   1.283 × 10^-6   1.832