Article

Construction of Fractional Pseudospectral Differentiation Matrices with Applications

School of Mathematics and Statistics, Shandong University of Technology, Zibo 255049, China
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(5), 305; https://doi.org/10.3390/axioms13050305
Submission received: 27 March 2024 / Revised: 1 May 2024 / Accepted: 2 May 2024 / Published: 4 May 2024
(This article belongs to the Special Issue Fractional Calculus and the Applied Analysis)

Abstract:
Differentiation matrices are an important tool in the implementation of the spectral collocation method for solving various types of problems involving differential operators. The fractional differentiation of Jacobi orthogonal polynomials can be expressed explicitly through Jacobi–Jacobi transformations between two index pairs. In the current paper, an algorithm is presented to construct a fractional differentiation matrix with a matrix representation for the Riemann–Liouville, Caputo and Riesz derivatives, which makes the computation stable and efficient. Applications of the fractional differentiation matrix with the spectral collocation method to various problems, including fractional eigenvalue problems and fractional ordinary and partial differential equations, are presented to show the effectiveness of the method.

1. Introduction

A differentiation matrix plays a key role in implementing the spectral collocation method (also known as the pseudospectral method) to obtain numerical solutions to problems such as partial differential equations. The efficient computation of the differentiation matrix of integer order is discussed in [1,2,3,4,5] and a Matlab suite is introduced in [6].
During the past several decades, the theory and applications of fractional calculus have developed rapidly [7,8,9,10,11]. Researchers note that fractional calculus has broad and significant application prospects [8,12,13,14]. Numerous differential equation models that involve fractional calculus operators, referring to fractional differential equations, have emerged in various areas, including physics, chemistry, engineering and even finance and the social sciences.
Publications devoted to numerical solutions to fractional differential equations are numerous. Among them, the spectral collocation method is highlighted as one of the most important numerical methods. Zayernouri et al. [15] studied fractional spectral collocation methods for linear and nonlinear variable-order fractional partial differential equations. Jiao et al. [16] suggested fractional collocation methods using a fractional Birkhoff interpolation basis for fractional initial and boundary value problems. Dabiri et al. [17] presented a modified Chebyshev differentiation matrix and utilized it to solve fractional differential equations. Al-Mdallal et al. [18] presented a fractional-order Legendre collocation method to solve fractional initial value problems. Gholami et al. [19] derived a new pseudospectral integration matrix to solve fractional differential equations. Wu et al. [20] applied fractional differentiation matrices in solving Caputo fractional differential equations. Moreover, the spectral collocation method has been applied to solve tempered fractional differential equations [21,22,23], variable-order Fokker–Planck equations [24] and Caputo–Hadamard fractional differential equations [25].
The spectral collocation method approximates the unknown solution to a fractional differential equation by classical orthogonal polynomials and enforces the equation on the numerical solution at collocation points (usually Gauss-type quadrature nodes). Fractional differentiation in physical space is performed by the differentiation matrix, a discrete version of the continuous fractional derivative operator. Therefore, a crucial step in implementing the spectral collocation method is to form the differentiation matrix. One way to construct a fractional differentiation matrix for the spectral collocation method is based on the following formula for Jacobi polynomials (see Equation (3.96) of p. 71, [26]):
$$P_n^{(\alpha,\beta)}(x)=\sum_{k=0}^{n}\frac{\Gamma(n+\alpha+1)\,\Gamma(n+k+\alpha+\beta+1)}{k!\,(n-k)!\,\Gamma(k+\alpha+1)\,\Gamma(n+\alpha+\beta+1)}\left(\frac{x-1}{2}\right)^{k},$$
which suffers from roundoff error [27,28]. Another way is based on the well-known three-term recurrence relation of Jacobi polynomials (Equation (7) in the next section), which gives a fast and stable evaluation of a fractional differentiation matrix [29]. Similar recurrence relations have been derived to evaluate the differentiation matrix for various problems [21,23,24,25]. Apart from these works, there has been little research on efficiently evaluating the fractional collocation differentiation matrix.
The key points in implementing the spectral collocation method are to evaluate the collocation differentiation matrix stably and efficiently and to solve the resulting discrete system. The aim of this work is to present an algorithm to compute the collocation differentiation matrices for fractional integrals and derivatives. In effect, we present a representation of the fractional differentiation matrix as a product of several special matrices. This representation gives a direct way to form differentiation matrices with relatively few operations and is easy to program. Another benefit of this representation is that the inverses of the differentiation matrices can be obtained at the same time, which makes the discrete system easy to solve. To show the effectiveness of the algorithm, we apply the fractional differentiation matrix with the Jacobi spectral collocation method to a fractional eigenvalue problem, fractional ordinary differential equations and fractional partial differential equations. Our results thus provide an alternative way to compute fractional collocation differentiation matrices. We expect that our findings will contribute to further applications of the spectral collocation method to fractional-order problems.
The main contribution of this work is to represent some fractional differentiation matrices as a product of several special matrices. This representation gives not only a direct, fast and stable algorithm of fractional differentiation matrices, but also more information which can be used for inverse or preconditioning.
This paper is organized as follows: We recall several definitions of fractional calculus and Jacobi polynomials with some basic properties in Section 2. We describe the pseudospectral differentiation matrix and its properties in Section 3. We present a representation of the fractional differentiation matrix in Section 4. We also discuss the distribution of the spectrum of the fractional differentiation matrix in this section. We apply the fractional differentiation matrix to fractional eigenvalue problems in Section 5. We apply the fractional differentiation matrix with the Jacobi spectral collocation method to fractional differential equations in Section 6. We finish with the conclusions in Section 7.

2. Preliminaries

2.1. Definitions of Fractional Calculus

In this subsection, we recall some definitions from fractional calculus, including the widely used Riemann–Liouville integral, the Riemann–Liouville derivative, the Caputo derivative and the Riesz derivative. We also present some basic properties of these notions.
Definition 1. 
For $z>0$, Euler's Gamma function $\Gamma(z)$ is defined as
$$\Gamma(z)=\int_0^{\infty}t^{z-1}e^{-t}\,dt.$$
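As a quick sanity check of this definition, the integral can be evaluated numerically and compared with a library implementation. The following Python sketch is our own illustration (not part of the original paper) and assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_quad(z):
    # evaluate the defining integral of Euler's Gamma function numerically
    val, _ = quad(lambda t: t ** (z - 1) * np.exp(-t), 0, np.inf)
    return val

# Gamma(1/2) = sqrt(pi); the quadrature handles the integrable singularity at t = 0
print(gamma_quad(0.5) - np.sqrt(np.pi))
```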
Definition 2 
([7,8]). For a function $f(x)$ on $(a,b)\subseteq\mathbb{R}$, the $\mu$th order left- and right-sided Riemann–Liouville integrals are defined, respectively, as
$${}_{RL}D_{a,x}^{-\mu}f(x)=\frac{1}{\Gamma(\mu)}\int_a^x(x-\eta)^{\mu-1}f(\eta)\,d\eta,\qquad \mu>0,$$
and
$${}_{RL}D_{x,b}^{-\mu}f(x)=\frac{1}{\Gamma(\mu)}\int_x^b(\eta-x)^{\mu-1}f(\eta)\,d\eta,\qquad \mu>0.$$
Lemma 1 
([8]). For $\mu,\nu>0$, the Riemann–Liouville integral operator has the following properties:
$${}_{RL}D_{a,x}^{-\mu}\big[{}_{RL}D_{a,x}^{-\nu}f(x)\big]={}_{RL}D_{a,x}^{-(\mu+\nu)}f(x),\qquad {}_{RL}D_{x,b}^{-\mu}\big[{}_{RL}D_{x,b}^{-\nu}f(x)\big]={}_{RL}D_{x,b}^{-(\mu+\nu)}f(x).$$
Lemma 2 
([11]). For $n>-1$, we have
$${}_{RL}D_{a,x}^{-\mu}(x-a)^{n}=\frac{\Gamma(n+1)}{\Gamma(n+\mu+1)}(x-a)^{n+\mu},\qquad {}_{RL}D_{x,b}^{-\mu}(b-x)^{n}=\frac{\Gamma(n+1)}{\Gamma(n+\mu+1)}(b-x)^{n+\mu}.$$
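Lemma 2 is easy to verify numerically. The following sketch (our own illustration, assuming SciPy's adaptive quadrature copes with the weakly singular kernel at the upper endpoint) checks the left-sided power rule.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rl_integral(f, a, x, mu):
    # left-sided Riemann-Liouville integral of order mu by adaptive quadrature
    val, _ = quad(lambda eta: (x - eta) ** (mu - 1) * f(eta), a, x)
    return val / gamma(mu)

a, mu, n, x = 0.0, 0.5, 2, 0.8
exact = gamma(n + 1) / gamma(n + mu + 1) * (x - a) ** (n + mu)   # Lemma 2
approx = rl_integral(lambda t: (t - a) ** n, a, x, mu)
print(abs(approx - exact))
```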
Definition 3 
([7,8]). For a function $f(x)$ on $(a,b)\subseteq\mathbb{R}$, the $\mu$th order left- and right-sided Riemann–Liouville derivatives are defined as
$${}_{RL}D_{a,x}^{\mu}f(x)=\frac{1}{\Gamma(m-\mu)}\left(\frac{d}{dx}\right)^{m}\int_a^x(x-\eta)^{m-\mu-1}f(\eta)\,d\eta$$
and
$${}_{RL}D_{x,b}^{\mu}f(x)=\frac{1}{\Gamma(m-\mu)}\left(-\frac{d}{dx}\right)^{m}\int_x^b(\eta-x)^{m-\mu-1}f(\eta)\,d\eta,$$
respectively. Here and in the subsequent sections, $m$ is a positive integer satisfying $m-1<\mu<m$, $m\in\mathbb{N}$.
In Definition 2, the given function $f(x)$ can be absolutely continuous or of bounded variation, or, even more generally, in $L^p(a,b)$ with $p\ge 1$. As for $f(x)$ in Definitions 3 and 4, it can be taken in the absolutely continuous function space $AC^m(a,b)$. In the following discussions, we always assume that $f(x)$ satisfies the aforementioned conditions, so that its Riemann–Liouville integrals and its Riemann–Liouville and Caputo derivatives exist.
Lemma 3 
([8]). For $\mu>0$, the following properties are valid:
$${}_{RL}D_{a,x}^{\mu}\big({}_{RL}D_{a,x}^{-\mu}f(x)\big)=f(x),\qquad {}_{RL}D_{x,b}^{\mu}\big({}_{RL}D_{x,b}^{-\mu}f(x)\big)=f(x).$$
Moreover,
$${}_{RL}D_{a,x}^{-\mu}\big({}_{RL}D_{a,x}^{\mu}f(x)\big)=f(x)-\sum_{j=1}^{m}\frac{\big[{}_{RL}D_{a,x}^{\mu-j}f\big](a)}{\Gamma(\mu-j+1)}(x-a)^{\mu-j},$$
$${}_{RL}D_{x,b}^{-\mu}\big({}_{RL}D_{x,b}^{\mu}f(x)\big)=f(x)-\sum_{j=1}^{m}\frac{(-1)^{m-j}\big[{}_{RL}D_{x,b}^{\mu-j}f\big](b)}{\Gamma(\mu-j+1)}(b-x)^{\mu-j}.$$
Definition 4 
([7,8]). For a function $f(x)$ defined on $(a,b)\subseteq\mathbb{R}$, the $\mu$th order left- and right-sided Caputo derivatives are defined, respectively, as
$${}_{C}D_{a,x}^{\mu}f(x)=\frac{1}{\Gamma(m-\mu)}\int_a^x(x-\eta)^{m-\mu-1}f^{(m)}(\eta)\,d\eta$$
and
$${}_{C}D_{x,b}^{\mu}f(x)=\frac{(-1)^m}{\Gamma(m-\mu)}\int_x^b(\eta-x)^{m-\mu-1}f^{(m)}(\eta)\,d\eta.$$
Lemma 4 
([7]). A link between the Riemann–Liouville and Caputo derivatives is given by
$${}_{C}D_{a,x}^{\mu}f(x)={}_{RL}D_{a,x}^{\mu}f(x)-\sum_{j=1}^{m}\frac{f^{(j-1)}(a)}{\Gamma(j-\mu)}(x-a)^{j-\mu-1},$$
$${}_{C}D_{x,b}^{\mu}f(x)={}_{RL}D_{x,b}^{\mu}f(x)-\sum_{j=1}^{m}\frac{(-1)^{j-1}f^{(j-1)}(b)}{\Gamma(j-\mu)}(b-x)^{j-\mu-1}.$$
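For $0<\mu<1$ (so $m=1$), Lemma 4 can be checked by hand using the closed-form power rules for the two derivatives. The following hypothetical worked example (ours, not the paper's) uses $f(x)=x^2+1$ on $(0,b)$ with $a=0$, for which $f(0)=1$.

```python
from math import gamma

mu, x = 0.4, 0.7

# closed-form values of the two derivatives of f(x) = x**2 + 1 at x, a = 0:
rl = gamma(3) / gamma(3 - mu) * x ** (2 - mu) + x ** (-mu) / gamma(1 - mu)  # Riemann-Liouville
cap = gamma(3) / gamma(3 - mu) * x ** (2 - mu)                              # Caputo

# Lemma 4 with m = 1: Caputo = RL - f(0) * x**(-mu) / Gamma(1 - mu)
link = rl - 1.0 * x ** (-mu) / gamma(1 - mu)
print(abs(cap - link))
```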
Definition 5 
([30,31]). Let $0<\mu<2$, $\mu\neq 1$. The Riesz derivative is defined as
$${}_{RZ}D_x^{\mu}f(x)=-\frac{1}{2\cos\frac{\pi\mu}{2}}\big[{}_{RL}D_{a,x}^{\mu}f(x)+{}_{RL}D_{x,b}^{\mu}f(x)\big].$$
Definition 6 
([11]). The Mittag–Leffler function with two parameters is defined by
$$E_{\lambda,\rho}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\lambda k+\rho)},\qquad \lambda,\rho>0,\ z\in\mathbb{C},$$
which is analytic on the whole complex plane.
The Mittag–Leffler function can be efficiently numerically evaluated [32].
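For moderate $|z|$, a direct truncation of the defining series is already usable; the sketch below is only an illustration (robust algorithms such as that of [32] are needed for large arguments), with the number of terms $K$ an assumed parameter.

```python
import math

def mittag_leffler(z, lam, rho, K=80):
    # truncated power series for E_{lam,rho}(z); adequate only for moderate |z|
    return sum(z ** k / math.gamma(lam * k + rho) for k in range(K))

# sanity checks: E_{1,1}(z) = exp(z), E_{2,1}(z) = cosh(sqrt(z)) for z > 0
print(mittag_leffler(1.0, 1.0, 1.0) - math.e)
```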

2.2. Jacobi Polynomials

Let $I:=[-1,1]$ and let $\mathcal{P}_N$ be the collection of all algebraic polynomials defined on $I$ of degree at most $N$. The Pochhammer symbol, for $c\in\mathbb{R}$ and $j\in\mathbb{N}$, is defined by
$$(c)_0=1,\qquad (c)_j:=c(c+1)\cdots(c+j-1)=\frac{\Gamma(c+j)}{\Gamma(c)},\quad j>0.$$
Jacobi polynomials $P_n^{(\alpha,\beta)}(s)$, $s\in I$, with parameters $\alpha,\beta\in\mathbb{R}$ ([33]) are defined as
$$P_n^{(\alpha,\beta)}(s)=\frac{(\alpha+1)_n}{n!}+\sum_{j=1}^{n-1}\frac{(n+\alpha+\beta+1)_j(\alpha+j+1)_{n-j}}{j!\,(n-j)!}\left(\frac{s-1}{2}\right)^{j}+\frac{(n+\alpha+\beta+1)_n}{n!}\left(\frac{s-1}{2}\right)^{n},\quad n\ge 1,$$
and $P_0^{(\alpha,\beta)}(s)=1$.
The well-known three-term recurrence relation of the Jacobi polynomials $P_n^{(\alpha,\beta)}(s)$ with parameters $\alpha,\beta\in\mathbb{R}$ holds for $-(\alpha+\beta+1)\notin\mathbb{N}^{+}$:
$$P_{n+1}^{(\alpha,\beta)}(s)=\big(A_n^{\alpha,\beta}s-B_n^{\alpha,\beta}\big)P_n^{(\alpha,\beta)}(s)-C_n^{\alpha,\beta}P_{n-1}^{(\alpha,\beta)}(s),\quad n\ge 1,$$
$$P_0^{(\alpha,\beta)}(s)=1,\qquad P_1^{(\alpha,\beta)}(s)=\frac{\alpha+\beta+2}{2}s+\frac{\alpha-\beta}{2},$$
where
$$A_n^{\alpha,\beta}=\frac{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}{2(n+1)(n+\alpha+\beta+1)},\quad B_n^{\alpha,\beta}=\frac{(\beta^2-\alpha^2)(2n+\alpha+\beta+1)}{2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)},\quad C_n^{\alpha,\beta}=\frac{(n+\alpha)(n+\beta)(2n+\alpha+\beta+2)}{(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)}.$$
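A minimal Python sketch of this recurrence (our own illustration, not the paper's code) evaluates $P_n^{(\alpha,\beta)}(s)$ and can be cross-checked against SciPy's `eval_jacobi`.

```python
from scipy.special import eval_jacobi

def jacobi_recurrence(n, alpha, beta, s):
    # evaluate P_n^{(alpha,beta)}(s) via the three-term recurrence
    if n == 0:
        return 1.0
    Pm1 = 1.0
    P = 0.5 * (alpha + beta + 2) * s + 0.5 * (alpha - beta)
    for k in range(1, n):
        A = (2*k + alpha + beta + 1) * (2*k + alpha + beta + 2) \
            / (2 * (k + 1) * (k + alpha + beta + 1))
        B = (beta**2 - alpha**2) * (2*k + alpha + beta + 1) \
            / (2 * (k + 1) * (k + alpha + beta + 1) * (2*k + alpha + beta))
        C = (k + alpha) * (k + beta) * (2*k + alpha + beta + 2) \
            / ((k + 1) * (k + alpha + beta + 1) * (2*k + alpha + beta))
        Pm1, P = P, (A * s - B) * P - C * Pm1
    return P

print(jacobi_recurrence(5, 0.3, -0.2, 0.4) - eval_jacobi(5, 0.3, -0.2, 0.4))
```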
For $\alpha,\beta>-1$, the Jacobi polynomials are orthogonal with respect to the weight $\omega^{\alpha,\beta}(s)=(1-s)^{\alpha}(1+s)^{\beta}$, namely
$$\big(P_n^{(\alpha,\beta)},P_m^{(\alpha,\beta)}\big)_{\omega^{\alpha,\beta}}:=\int_{-1}^{1}P_n^{(\alpha,\beta)}(s)P_m^{(\alpha,\beta)}(s)\,\omega^{\alpha,\beta}(s)\,ds=\gamma_n^{\alpha,\beta}\delta_{mn},$$
where
$$\gamma_n^{\alpha,\beta}=\frac{2^{\alpha+\beta+1}\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\,n!\,\Gamma(n+\alpha+\beta+1)},$$
and $\delta_{mn}$ is the Kronecker symbol, i.e.,
$$\delta_{mn}=\begin{cases}1,& m=n,\\ 0,&\text{otherwise}.\end{cases}$$
Gauss-type quadratures hold for $\alpha,\beta>-1$. Let $\{x_j^{\alpha,\beta},\omega_j^{\alpha,\beta}\}_{j=0}^{N}$ be the Jacobi–Gauss–Lobatto nodes and weights (the definition and computation of these nodes and weights can be found in [26], pp. 83–86). Then,
$$\int_{-1}^{1}p(s)\,\omega^{\alpha,\beta}(s)\,ds=\sum_{j=0}^{N}p\big(x_j^{\alpha,\beta}\big)\,\omega_j^{\alpha,\beta},\qquad \forall\,p\in\mathcal{P}_{2N-1}.$$
Some additional properties of Jacobi polynomials can be found in [26,33].
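The orthogonality relation and the constant $\gamma_n^{\alpha,\beta}$ can be verified with Gauss–Jacobi quadrature. The following sketch (illustrative; SciPy's `roots_jacobi` returns interior Gauss nodes and weights for the weight $(1-s)^{\alpha}(1+s)^{\beta}$, which suffices here since the integrands are polynomials of low degree) checks both.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma

alpha, beta, N = 0.5, -0.3, 20
s, w = roots_jacobi(N, alpha, beta)   # exact for polynomial integrands of degree <= 2N - 1

def inner(n, m):
    # discrete version of the weighted inner product (P_n, P_m)
    return np.sum(w * eval_jacobi(n, alpha, beta, s) * eval_jacobi(m, alpha, beta, s))

def gamma_n(n):
    # the orthogonality constant gamma_n^{alpha,beta}
    return (2 ** (alpha + beta + 1) * gamma(n + alpha + 1) * gamma(n + beta + 1)
            / ((2 * n + alpha + beta + 1) * gamma(n + 1) * gamma(n + alpha + beta + 1)))

print(inner(3, 5))                 # orthogonality: essentially zero
print(inner(4, 4) - gamma_n(4))    # norm matches gamma_4
```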
A transformation of the Jacobi polynomials from index pair ( α 1 , β 1 ) to ( α 2 , β 2 ) is also easy to perform [34].
Theorem 1. 
Let $\alpha_1,\beta_1>-1$. If
$$\sum_{k=0}^{n}a_kP_k^{(\alpha_1,\beta_1)}(s)=\sum_{k=0}^{n}b_kP_k^{(\alpha_2,\beta_2)}(s),$$
then there exists a unique transform matrix $T_n^{J_2\to J_1}$ with $J_1=(\alpha_1,\beta_1)$ and $J_2=(\alpha_2,\beta_2)$ such that
$$a=T_n^{J_2\to J_1}b,$$
where $a=[a_0,\dots,a_n]^T$, $b=[b_0,\dots,b_n]^T$, and the $(i,j)$-th entry of $T_n^{J_2\to J_1}$, denoted $t_{i,j}^{J_1,J_2}$, can be generated by
$$t_{i,j}^{J_1,J_2}=0,\quad j<i,$$
$$t_{i,j}^{J_1,J_2}=\tilde{a}_{i,j}^{J_1,J_2}t_{i+1,j-1}^{J_1,J_2}+\tilde{b}_{i,j}^{J_1,J_2}t_{i,j-1}^{J_1,J_2}+\tilde{c}_{i,j}^{J_1,J_2}t_{i-1,j-1}^{J_1,J_2}-C_{j-1}^{J_2}t_{i,j-2}^{J_1,J_2},\quad j\ge i\ge 1,$$
$$t_{0,j}^{J_1,J_2}=\tilde{a}_{0,j}^{J_1,J_2}t_{1,j-1}^{J_1,J_2}+\tilde{b}_{0,j}^{J_1,J_2}t_{0,j-1}^{J_1,J_2}-C_{j-1}^{J_2}t_{0,j-2}^{J_1,J_2},\quad j>1,$$
$$t_{0,1}^{J_1,J_2}=\tilde{b}_{0,1}^{J_1,J_2}t_{0,0}^{J_1,J_2},\qquad t_{0,0}^{J_1,J_2}=1,$$
with
$$\tilde{a}_{i,j}^{J_1,J_2}=A_{j-1}^{J_2}\frac{1}{A_i^{J_1}}\frac{\gamma_{i+1}^{J_1}}{\gamma_i^{J_1}}=\frac{2(i+\alpha_1+1)(i+\beta_1+1)}{(2i+\alpha_1+\beta_1+2)(2i+\alpha_1+\beta_1+3)}A_{j-1}^{J_2},$$
$$\tilde{b}_{i,j}^{J_1,J_2}=A_{j-1}^{J_2}\frac{B_i^{J_1}}{A_i^{J_1}}-B_{j-1}^{J_2}=\frac{\beta_1^2-\alpha_1^2}{(2i+\alpha_1+\beta_1)(2i+\alpha_1+\beta_1+2)}A_{j-1}^{J_2}-B_{j-1}^{J_2},$$
$$\tilde{c}_{i,j}^{J_1,J_2}=A_{j-1}^{J_2}\frac{C_i^{J_1}}{A_i^{J_1}}\frac{\gamma_{i-1}^{J_1}}{\gamma_i^{J_1}}=\frac{2i(i+\alpha_1+\beta_1)}{(2i+\alpha_1+\beta_1-1)(2i+\alpha_1+\beta_1)}A_{j-1}^{J_2}.$$
Proof. 
The existence and uniqueness of the transform $T_n^{J_2\to J_1}$ are clear. From the orthogonality (8) of the Jacobi polynomials, we have
$$a_i\big(P_i^{J_1},P_i^{J_1}\big)_{\omega^{J_1}}=\sum_{k=0}^{n}b_k\big(P_i^{J_1},P_k^{J_2}\big)_{\omega^{J_1}}.$$
Hence, we have
$$t_{i,j}^{J_1,J_2}=\frac{1}{\gamma_i^{J_1}}\big(P_i^{J_1},P_j^{J_2}\big)_{\omega^{J_1}}.$$
It is clear that $t_{i,j}^{J_1,J_2}=0$ if $i>j$. For $i\le j$ $(i>0,\ j>1)$, we have
$$\begin{aligned}\big(P_i^{J_1},P_j^{J_2}\big)_{\omega^{J_1}}&=A_{j-1}^{J_2}\big(xP_i^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-B_{j-1}^{J_2}\big(P_i^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-C_{j-1}^{J_2}\big(P_i^{J_1},P_{j-2}^{J_2}\big)_{\omega^{J_1}}\\
&=\frac{A_{j-1}^{J_2}}{A_i^{J_1}}\big(P_{i+1}^{J_1}+B_i^{J_1}P_i^{J_1}+C_i^{J_1}P_{i-1}^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-B_{j-1}^{J_2}\big(P_i^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-C_{j-1}^{J_2}\big(P_i^{J_1},P_{j-2}^{J_2}\big)_{\omega^{J_1}}\\
&=\frac{A_{j-1}^{J_2}}{A_i^{J_1}}\big(P_{i+1}^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}+\Big(A_{j-1}^{J_2}\frac{B_i^{J_1}}{A_i^{J_1}}-B_{j-1}^{J_2}\Big)\big(P_i^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-C_{j-1}^{J_2}\big(P_i^{J_1},P_{j-2}^{J_2}\big)_{\omega^{J_1}}+\frac{A_{j-1}^{J_2}C_i^{J_1}}{A_i^{J_1}}\big(P_{i-1}^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}.\end{aligned}$$
Then, we have
$$t_{i,j}^{J_1,J_2}=A_{j-1}^{J_2}\frac{1}{A_i^{J_1}}\frac{\gamma_{i+1}^{J_1}}{\gamma_i^{J_1}}t_{i+1,j-1}^{J_1,J_2}+\Big(A_{j-1}^{J_2}\frac{B_i^{J_1}}{A_i^{J_1}}-B_{j-1}^{J_2}\Big)t_{i,j-1}^{J_1,J_2}-C_{j-1}^{J_2}t_{i,j-2}^{J_1,J_2}+A_{j-1}^{J_2}\frac{C_i^{J_1}}{A_i^{J_1}}\frac{\gamma_{i-1}^{J_1}}{\gamma_i^{J_1}}t_{i-1,j-1}^{J_1,J_2}.$$
For $i=0$ and $j>1$, we have
$$\begin{aligned}\big(P_0^{J_1},P_j^{J_2}\big)_{\omega^{J_1}}&=A_{j-1}^{J_2}\big(xP_0^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-B_{j-1}^{J_2}\big(P_0^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-C_{j-1}^{J_2}\big(P_0^{J_1},P_{j-2}^{J_2}\big)_{\omega^{J_1}}\\
&=\frac{2A_{j-1}^{J_2}}{\alpha_1+\beta_1+2}\big(P_1^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}+\Big(A_{j-1}^{J_2}\frac{\beta_1-\alpha_1}{\alpha_1+\beta_1+2}-B_{j-1}^{J_2}\Big)\big(P_0^{J_1},P_{j-1}^{J_2}\big)_{\omega^{J_1}}-C_{j-1}^{J_2}\big(P_0^{J_1},P_{j-2}^{J_2}\big)_{\omega^{J_1}}.\end{aligned}$$
Thus,
$$t_{0,j}^{J_1,J_2}=A_{j-1}^{J_2}\frac{1}{A_0^{J_1}}\frac{\gamma_1^{J_1}}{\gamma_0^{J_1}}t_{1,j-1}^{J_1,J_2}+\Big(A_{j-1}^{J_2}\frac{B_0^{J_1}}{A_0^{J_1}}-B_{j-1}^{J_2}\Big)t_{0,j-1}^{J_1,J_2}-C_{j-1}^{J_2}t_{0,j-2}^{J_1,J_2}.$$
It is easy to determine that $t_{0,0}^{J_1,J_2}=1$, and
$$t_{0,1}^{J_1,J_2}=A_0^{J_2}\frac{B_0^{J_1}}{A_0^{J_1}}-B_0^{J_2}.$$
This completes the proof. □
Remark 1. 
The transform matrix $T_n^{J_2\to J_1}$ is upper triangular, and
$$\big(T_n^{J_2\to J_1}\big)^{-1}=T_n^{J_1\to J_2}.$$
It is clear that $T_n^{J_1\to J_1}=I_n$ is the identity matrix of size $(n+1)\times(n+1)$. Since only the orthogonality of $P_n^{J_1}(x)$ with respect to $\omega^{J_1}$ is used in the derivation, the condition $\alpha_1,\beta_1>-1$ ensures that the transform is valid.
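Besides the recurrence of Theorem 1, the entries $t_{i,j}^{J_1,J_2}=(P_i^{J_1},P_j^{J_2})_{\omega^{J_1}}/\gamma_i^{J_1}$ from the proof can be computed directly by Gauss–Jacobi quadrature, which gives a convenient cross-check. The following sketch is our own illustration of that projection formula (not the paper's algorithm); it builds $T$, verifies that both expansions agree, and confirms upper-triangularity.

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma

def transform_matrix(n, J1, J2):
    # t_ij = (P_i^{J1}, P_j^{J2})_{w^{J1}} / gamma_i^{J1}, by Gauss-Jacobi quadrature
    a1, b1 = J1
    s, w = roots_jacobi(n + 1, a1, b1)   # exact for integrands of degree <= 2n + 1
    T = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        gi = (2 ** (a1 + b1 + 1) * gamma(i + a1 + 1) * gamma(i + b1 + 1)
              / ((2 * i + a1 + b1 + 1) * gamma(i + 1) * gamma(i + a1 + b1 + 1)))
        for j in range(n + 1):
            T[i, j] = np.sum(w * eval_jacobi(i, a1, b1, s) * eval_jacobi(j, *J2, s)) / gi
    return T

n, J1, J2 = 6, (0.0, 0.0), (0.5, 0.5)
T = transform_matrix(n, J1, J2)
b = np.random.default_rng(0).standard_normal(n + 1)
a = T @ b                              # a = T b converts J2-coefficients to J1-coefficients
x = np.linspace(-1, 1, 7)
lhs = sum(a[k] * eval_jacobi(k, *J1, x) for k in range(n + 1))
rhs = sum(b[k] * eval_jacobi(k, *J2, x) for k in range(n + 1))
print(np.max(np.abs(lhs - rhs)))
```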
The following results are very useful.
Lemma 5 
([35]). Let $\mu>0$, $\alpha\in\mathbb{R}$, $\beta>-1$. Then, the following relations hold:
$${}_{RL}D_{-1,s}^{-\mu}\big\{(1+s)^{\beta}P_n^{(\alpha,\beta)}(s)\big\}=\frac{\Gamma(n+\beta+1)}{\Gamma(n+\beta+\mu+1)}(1+s)^{\beta+\mu}P_n^{(\alpha-\mu,\beta+\mu)}(s),$$
$${}_{RL}D_{-1,s}^{\mu}\big\{(1+s)^{\beta+\mu}P_n^{(\alpha-\mu,\beta+\mu)}(s)\big\}=\frac{\Gamma(n+\beta+\mu+1)}{\Gamma(n+\beta+1)}(1+s)^{\beta}P_n^{(\alpha,\beta)}(s),$$
and the following relations hold for $\alpha>-1$, $\beta\in\mathbb{R}$:
$${}_{RL}D_{s,1}^{-\mu}\big\{(1-s)^{\alpha}P_n^{(\alpha,\beta)}(s)\big\}=\frac{\Gamma(n+\alpha+1)}{\Gamma(n+\alpha+\mu+1)}(1-s)^{\alpha+\mu}P_n^{(\alpha+\mu,\beta-\mu)}(s),$$
$${}_{RL}D_{s,1}^{\mu}\big\{(1-s)^{\alpha+\mu}P_n^{(\alpha+\mu,\beta-\mu)}(s)\big\}=\frac{\Gamma(n+\alpha+\mu+1)}{\Gamma(n+\alpha+1)}(1-s)^{\alpha}P_n^{(\alpha,\beta)}(s).$$

3. Pseudospectral Differentiation/Integration Matrix

To set up a pseudospectral differentiation/integration matrix, let $\{x_j\}_{j=0}^{N}\subset[a,b]$, and interpolate the unknown function $f(x)$ at the nodes $\{x_j\}_{j=0}^{N}$ with Lagrange basis functions as
$$f(x)\approx f_N(x)=\sum_{j=0}^{N}f(x_j)L_j(x),$$
where
$$L_j(x)=\prod_{i=0,\,i\neq j}^{N}\frac{x-x_i}{x_j-x_i},\qquad j=0,1,\dots,N.$$
The pseudospectral differentiation/integration matrix is derived by applying an integral or derivative operator to $L_j(x)$ at the nodes $\{x_j\}_{j=0}^{N}$. In order to impose an initial or boundary value condition, the endpoints $a$ and $b$ can be included in $\{x_j\}_{j=0}^{N}$. For efficient implementation of the pseudospectral method, Gauss-type quadrature nodes, e.g., Gauss–Lobatto or Gauss–Radau points (the definition and computation of these points can be found in [26], pp. 80–86), are usually employed. With this in mind, we take $a=-1$, $b=1$ and $x_j=x_j^{\alpha,\beta}$ as the Jacobi–Gauss–Lobatto nodes in increasing order ($x_0=-1$, $x_N=1$), which are the zeros of $(1-x^2)\,\partial_xP_N^{\alpha,\beta}(x)$.
Definition 7. 
Let $\mathcal{F}^{\mu}$ be a fractional operator (here, one of the Riemann–Liouville integrals and derivatives, the Caputo derivatives and the Riesz derivative) of order $\mu>0$. The discretized matrix $F^{\mu}$ corresponding to $\mathcal{F}^{\mu}$ is the $(N+1)\times(N+1)$ matrix whose $(i,j)$-th entry is given by
$$(F^{\mu})_{ij}=[\mathcal{F}^{\mu}L_j](x_i),\qquad i,j=0,1,\dots,N.$$
Then, the matrices ${}_{RL}D_l^{-\mu}$ and ${}_{RL}D_r^{-\mu}$ are the left- and right-sided Riemann–Liouville fractional integration matrices obtained by replacing $\mathcal{F}^{\mu}$ with ${}_{RL}D_{-1,x}^{-\mu}$ and ${}_{RL}D_{x,1}^{-\mu}$; ${}_{RL}D_l^{\mu}$, ${}_{RL}D_r^{\mu}$, ${}_{C}D_l^{\mu}$, ${}_{C}D_r^{\mu}$ are the left- and right-sided fractional differentiation matrices in the sense of Riemann–Liouville and Caputo obtained by replacing $\mathcal{F}^{\mu}$ with ${}_{RL}D_{-1,x}^{\mu}$, ${}_{RL}D_{x,1}^{\mu}$, ${}_{C}D_{-1,x}^{\mu}$ and ${}_{C}D_{x,1}^{\mu}$; and ${}_{RZ}D^{\mu}$ is the Riesz fractional differentiation matrix obtained by replacing $\mathcal{F}^{\mu}$ with ${}_{RZ}D_x^{\mu}$, respectively.
Theorem 2. 
The first row of ${}_{RL}D_l^{-\mu}$, ${}_{C}D_l^{\mu}$ and the last row of ${}_{RL}D_r^{-\mu}$, ${}_{C}D_r^{\mu}$ are all zeros; i.e., for $j=0,\dots,N$,
$$({}_{RL}D_l^{-\mu})_{0,j}=({}_{RL}D_r^{-\mu})_{N,j}=({}_{C}D_l^{\mu})_{0,j}=({}_{C}D_r^{\mu})_{N,j}=0,$$
and $({}_{RL}D_l^{\mu})_{0,0}=({}_{RL}D_r^{\mu})_{N,N}=\infty$. Moreover, if $0<\mu<1$, we have
$$({}_{RL}D_l^{\mu})_{i,j}=({}_{C}D_l^{\mu})_{i,j},\qquad j\neq 0,$$
and
$$({}_{RL}D_r^{\mu})_{i,j}=({}_{C}D_r^{\mu})_{i,j},\qquad j\neq N.$$
Proof. 
For $j=0,1,\dots,N$, we expand $L_j(x)$ as
$$L_j(x)=\sum_{k=0}^{N}h_{kj}(x+1)^{k}=\sum_{k=0}^{N}\bar{h}_{kj}(1-x)^{k}.$$
Then,
$$({}_{RL}D_l^{-\mu})_{0,j}=[{}_{RL}D_{-1,x}^{-\mu}L_j](x_0)=\sum_{k=0}^{N}h_{kj}\frac{\Gamma(k+1)}{\Gamma(k+\mu+1)}(x+1)^{k+\mu}\Big|_{x=-1}=0,$$
and
$$({}_{C}D_l^{\mu})_{0,j}=[{}_{C}D_{-1,x}^{\mu}L_j](x_0)=\sum_{k>\mu}^{N}h_{kj}\frac{\Gamma(k+1)}{\Gamma(k-\mu+1)}(x+1)^{k-\mu}\Big|_{x=-1}=0.$$
Consider $L_0(x)$. Since $L_0(x_0)=1$,
$$L_0(x)=1+\sum_{k=1}^{N}b_k(1+x)^{k}$$
with some coefficients $b_k$. Hence,
$${}_{RL}D_{-1,x}^{\mu}L_0(x)=\frac{(1+x)^{-\mu}}{\Gamma(1-\mu)}+\sum_{k=1}^{N}b_k\frac{\Gamma(k+1)}{\Gamma(k-\mu+1)}(1+x)^{k-\mu}.$$
Letting $x\to-1^{+}$, one has $({}_{RL}D_l^{\mu})_{0,0}=\infty$ for all $\mu>0$.
Since $L_j(x_0)=0$ for $j=1,\dots,N$, one has $[{}_{C}D_{-1,x}^{\mu}L_j](x)=[{}_{RL}D_{-1,x}^{\mu}L_j](x)$ if $0<\mu<1$ from Lemma 4. Then, relation (13) is obtained.
By considering the second expression of (15), the desired equalities for the right-sided operators can be derived in a similar way. □
The above theorem states that if 0 < μ < 1 , the difference between   R L D l μ and   C D l μ lies only in the entries of the first column, and the difference between   R L D r μ and   C D r μ lies only in the entries of the last column.
We collect the coefficients of (15) in two matrices $H_l$ and $H_r$ as
$$(H_l)_{ij}=h_{ij},\qquad (H_r)_{ij}=\bar{h}_{ij}.$$
Let us introduce three $(N+1)$-element vectors as
$$(v_l^{\mu})_i=(x_i+1)^{\mu},\qquad (v_r^{\mu})_i=(1-x_i)^{\mu},\qquad (c^{\mu})_i=\frac{\Gamma(i+1)}{\Gamma(i+\mu+1)},$$
and two $(N+1)\times(N+1)$ matrices
$$(B_l)_{ij}=(1+x_i)^{j},\qquad (B_r)_{ij}=(1-x_i)^{j}.$$
Theorem 3. 
The following representations of the pseudospectral integration matrix are valid:
$${}_{RL}D_l^{-\mu}=\mathrm{diag}(v_l^{\mu})\,B_l\,\mathrm{diag}(c^{\mu})\,H_l,\qquad {}_{RL}D_r^{-\mu}=\mathrm{diag}(v_r^{\mu})\,B_r\,\mathrm{diag}(c^{\mu})\,H_r,$$
where $\mathrm{diag}(\cdot)$ denotes the diagonal matrix with the vector in brackets on its diagonal. Similarly, by replacing $\mu$ with $-\mu$, the following representations of the pseudospectral differentiation matrix are valid:
$${}_{RL}D_l^{\mu}=\mathrm{diag}(v_l^{-\mu})\,B_l\,\mathrm{diag}(c^{-\mu})\,H_l,\qquad {}_{RL}D_r^{\mu}=\mathrm{diag}(v_r^{-\mu})\,B_r\,\mathrm{diag}(c^{-\mu})\,H_r.$$
Proof. 
From the first expression of (15) and Lemma 2, we have
$$({}_{RL}D_l^{-\mu})_{ij}=\sum_{k=0}^{N}h_{kj}\frac{\Gamma(k+1)}{\Gamma(k+\mu+1)}(x_i+1)^{\mu+k}=(x_i+1)^{\mu}\sum_{k=0}^{N}(x_i+1)^{k}\frac{\Gamma(k+1)}{\Gamma(k+\mu+1)}h_{kj}.$$
Hence, we have the first equality of (19). The second equality of (19) can be obtained in a similar way. Because
$${}_{RL}D_{-1,x}^{\mu}(x+1)^{m}=\frac{\Gamma(m+1)}{\Gamma(m-\mu+1)}(x+1)^{m-\mu},\qquad {}_{RL}D_{x,1}^{\mu}(1-x)^{m}=\frac{\Gamma(m+1)}{\Gamma(m-\mu+1)}(1-x)^{m-\mu},$$
the two equalities (20) are clear. □
In fact, since the Lagrange basis function L j ( x ) satisfies L j ( x i ) = δ i j , we also have the inverse relation as
$$B_lH_l=I,\qquad B_rH_r=I,$$
where I is the identity matrix of ( N + 1 ) × ( N + 1 ) .
Noting that $(\mathrm{diag}(v_l^{\mu}))^{-1}=\mathrm{diag}(v_l^{-\mu})$, from Theorem 3 and the relation (21), the inverses of the pseudospectral integration/differentiation matrices may be formally written as
$$({}_{RL}D_l^{-\mu})^{-1}=B_l\,(\mathrm{diag}(c^{\mu}))^{-1}H_l\,\mathrm{diag}(v_l^{-\mu}),\qquad ({}_{RL}D_r^{-\mu})^{-1}=B_r\,(\mathrm{diag}(c^{\mu}))^{-1}H_r\,\mathrm{diag}(v_r^{-\mu}),$$
and
$$({}_{RL}D_l^{\mu})^{-1}=B_l\,(\mathrm{diag}(c^{-\mu}))^{-1}H_l\,\mathrm{diag}(v_l^{\mu}),\qquad ({}_{RL}D_r^{\mu})^{-1}=B_r\,(\mathrm{diag}(c^{-\mu}))^{-1}H_r\,\mathrm{diag}(v_r^{\mu}).$$
Remark 2. 
Some remarks are listed as follows.
  • From Lemma 1, we expect
$$({}_{RL}D_l^{-\mu})({}_{RL}D_l^{-\nu})={}_{RL}D_l^{-(\mu+\nu)},\qquad {}_{RL}D_r^{-\mu}\,{}_{RL}D_r^{-\nu}={}_{RL}D_r^{-(\mu+\nu)}.$$
    Similarly, according to Lemma 3, we can expect
$$({}_{RL}D_l^{\mu})({}_{RL}D_l^{-\mu})=I,\qquad ({}_{RL}D_r^{\mu})({}_{RL}D_r^{-\mu})=I.$$
    However, these equalities cannot be verified from Theorem 3 and the above-mentioned equalities. In fact, the first two equalities are not true according to numerical tests.
  • It is worth noting that
$$(\mathrm{diag}(c^{\mu}))^{-1}\neq \mathrm{diag}(c^{-\mu}),\qquad (\mathrm{diag}(c^{-\mu}))^{-1}\neq \mathrm{diag}(c^{\mu}).$$
  • We point out that the two matrices $B_l$ and $B_r$ are of Vandermonde type.
  • From Theorem 2, the first entry of $v_l^{-\mu}$ and the last entry of $v_r^{-\mu}$ are not well defined for $\mu>0$, which reflects the endpoint singularity of the fractional differential operator. To avoid this issue, the nodes $\{x_j\}_{j=0}^{N}$ can be altered to Jacobi–Gauss-type nodes, which exclude the endpoints.
  • As stated above, the matrices $\mathrm{diag}(v_l^{\mu})$ and $\mathrm{diag}(v_r^{\mu})$ are singular (and some entries of $\mathrm{diag}(v_l^{-\mu})$ and $\mathrm{diag}(v_r^{-\mu})$ are not well defined). Thus, the inverses of the singular matrices in (22) and (23) should be understood as generalized inverses or pseudo-inverses.
Theorem 4. 
From Definition 7, the following relation holds:
$${}_{RZ}D^{\mu}=-\frac{1}{2\cos\frac{\pi\mu}{2}}\big[{}_{RL}D_l^{\mu}+{}_{RL}D_r^{\mu}\big].$$
As stated in Remark 2, the decompositions in Theorem 3 involve the Vandermonde-type matrices $B_l$ and $B_r$ (as well as their inverses $H_l$ and $H_r$), which causes problems in implementing the collocation method: the condition numbers of $B_l$ and $B_r$ grow rapidly as $N$ increases.
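This growth is easy to observe numerically. The short sketch below (with Chebyshev–Gauss–Lobatto nodes as an assumed, illustrative choice) prints the condition number of $B_l$ for increasing $N$.

```python
import numpy as np

def cond_Bl(N):
    # condition number of (B_l)_{ij} = (1 + x_i)^j at Chebyshev-Gauss-Lobatto nodes
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    B_l = (1 + x)[:, None] ** np.arange(N + 1)
    return np.linalg.cond(B_l)

for N in (4, 8, 16, 32):
    print(N, cond_Bl(N))   # the condition number grows rapidly with N
```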

4. Representation of Fractional Integration/Differentiation Matrices

In this section, we present a novel representation of the above-mentioned fractional integration/differentiation matrices, which also gives a stable method to compute these matrices and their inverses. Let us consider expanding L j ( x ) as
$$L_j(x)=\sum_{k=0}^{N}l_{kj}^{\alpha,\beta}P_k^{(\alpha,\beta)}(x)=\sum_{k=0}^{N}l_{kj}^{0,0}P_k^{(0,0)}(x),$$
where $P_k^{(\alpha,\beta)}(x)$ is the Jacobi orthogonal polynomial of degree $k$.
Denote
$$(L^{\alpha,\beta})_{ij}=l_{ij}^{\alpha,\beta},\qquad (L^{0,0})_{ij}=l_{ij}^{0,0},\qquad (P^{\alpha,\beta})_{ij}=P_j^{(\alpha,\beta)}(x_i).$$
From Theorem 1, we have
$$L^{0,0}=T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta}.$$
We also have, from [26],
$$l_{ij}^{\alpha,\beta}=\frac{P_j^{(\alpha,\beta)}(x_i)\,\omega_i}{\gamma_j^{\alpha,\beta}},\quad j=0,1,\dots,N-1,\qquad l_{iN}^{\alpha,\beta}=\frac{P_N^{(\alpha,\beta)}(x_i)\,\omega_i}{\big(2+\frac{\alpha+\beta+1}{N}\big)\gamma_N^{\alpha,\beta}},$$
where $\{x_i,\omega_i\}_{i=0}^{N}$ denotes the nodes and weights of the Jacobi–Gauss–Lobatto quadrature. Additionally, it is easy to see that
$$[L^{\alpha,\beta}]^{-1}=[P^{\alpha,\beta}]^{T}.$$

4.1. Riemann–Liouville Fractional Integral and Derivative

Here, we have the following representations.
Theorem 5. 
The following representations of the pseudospectral integration matrices are valid for $\alpha,\beta>-1$:
$${}_{RL}D_l^{-\mu}=\mathrm{diag}(v_l^{\mu})\,P^{-\mu,\mu}\,\mathrm{diag}(c^{\mu})\,T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta},\qquad {}_{RL}D_r^{-\mu}=\mathrm{diag}(v_r^{\mu})\,P^{\mu,-\mu}\,\mathrm{diag}(c^{\mu})\,T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta}.$$
Moreover, the following representations of the differentiation matrices are valid for $\alpha,\beta>-1$:
$${}_{RL}D_l^{\mu}=\mathrm{diag}(v_l^{-\mu})\,P^{\mu,-\mu}\,\mathrm{diag}(c^{-\mu})\,T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta},\qquad {}_{RL}D_r^{\mu}=\mathrm{diag}(v_r^{-\mu})\,P^{-\mu,\mu}\,\mathrm{diag}(c^{-\mu})\,T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta}.$$
Proof. 
From (24) and Lemma 5, one has
$${}_{RL}D_{-1,x}^{-\mu}[L_j(x)]=\sum_{k=0}^{N}l_{kj}^{0,0}\,{}_{RL}D_{-1,x}^{-\mu}[P_k^{(0,0)}(x)]=\sum_{k=0}^{N}(1+x)^{\mu}P_k^{(-\mu,\mu)}(x)\frac{\Gamma(k+1)}{\Gamma(k+\mu+1)}\,l_{kj}^{0,0}.$$
Then,
$${}_{RL}D_l^{-\mu}=\mathrm{diag}(v_l^{\mu})\,P^{-\mu,\mu}\,\mathrm{diag}(c^{\mu})\,L^{0,0}.$$
Thanks to (26), the first equality of (28) is obtained. The second equality of (28) is derived in a similar way. From (24) and Lemma 5, one also has
$${}_{RL}D_{-1,x}^{\mu}[L_j(x)]=\sum_{k=0}^{N}l_{kj}^{0,0}\,{}_{RL}D_{-1,x}^{\mu}[P_k^{(0,0)}(x)]=\sum_{k=0}^{N}(1+x)^{-\mu}P_k^{(\mu,-\mu)}(x)\frac{\Gamma(k+1)}{\Gamma(k-\mu+1)}\,l_{kj}^{0,0}.$$
Then,
$${}_{RL}D_l^{\mu}=\mathrm{diag}(v_l^{-\mu})\,P^{\mu,-\mu}\,\mathrm{diag}(c^{-\mu})\,L^{0,0}.$$
The first equality of (29) is derived from (26). The second equality of (29) is derived in a similar way. □
In Theorem 5, the fractional integration and differentiation matrices are expressed as products of five matrices of size $(N+1)\times(N+1)$. We emphasize that the matrices $\mathrm{diag}(v_l^{\mu})$ and $\mathrm{diag}(v_r^{\mu})$ are not invertible because the first (respectively, last) diagonal entry vanishes. In addition, the matrices $\mathrm{diag}(v_l^{-\mu})$ and $\mathrm{diag}(v_r^{-\mu})$ are not well defined, since the corresponding diagonal entry is of the form $1/0$. In the following, the inverse of a singular matrix should be understood as a generalized inverse or pseudo-inverse.
Theorem 6. 
Let $P:=P^{\alpha,\beta}$, $L:=L^{\alpha,\beta}$ and $\alpha,\beta>-1$. The following representations of the inverses of the pseudospectral integration matrices are valid:
$$({}_{RL}D_l^{-\mu})^{-1}=P^{T}\,T_N^{(0,0)\to(\alpha,\beta)}\,(\mathrm{diag}(c^{\mu}))^{-1}\,T_N^{(\alpha,\beta)\to(-\mu,\mu)}\,L^{T}\,\mathrm{diag}(v_l^{-\mu}),$$
$$({}_{RL}D_r^{-\mu})^{-1}=P^{T}\,T_N^{(0,0)\to(\alpha,\beta)}\,(\mathrm{diag}(c^{\mu}))^{-1}\,T_N^{(\alpha,\beta)\to(\mu,-\mu)}\,L^{T}\,\mathrm{diag}(v_r^{-\mu}).$$
Moreover, for the pseudospectral differentiation matrices,
$$({}_{RL}D_l^{\mu})^{-1}=P^{T}\,T_N^{(0,0)\to(\alpha,\beta)}\,(\mathrm{diag}(c^{-\mu}))^{-1}\,T_N^{(\alpha,\beta)\to(\mu,-\mu)}\,L^{T}\,\mathrm{diag}(v_l^{\mu}),$$
$$({}_{RL}D_r^{\mu})^{-1}=P^{T}\,T_N^{(0,0)\to(\alpha,\beta)}\,(\mathrm{diag}(c^{-\mu}))^{-1}\,T_N^{(\alpha,\beta)\to(-\mu,\mu)}\,L^{T}\,\mathrm{diag}(v_r^{\mu}).$$
Proof. 
From Theorem 1, we know that
$$P_j^{(\mu,-\mu)}(x)=\sum_{k=0}^{N}\big(T_N^{(\mu,-\mu)\to(\alpha,\beta)}\big)_{kj}P_k^{(\alpha,\beta)}(x).$$
Then,
$$P^{\mu,-\mu}=P^{\alpha,\beta}\,T_N^{(\mu,-\mu)\to(\alpha,\beta)}.$$
Noting that
$$\big(T_N^{(\alpha_1,\beta_1)\to(\alpha,\beta)}\big)^{-1}=T_N^{(\alpha,\beta)\to(\alpha_1,\beta_1)},\qquad (L^{\alpha,\beta})^{-1}=(P^{\alpha,\beta})^{T},$$
it is clear that
$$(P^{\mu,-\mu})^{-1}=T_N^{(\alpha,\beta)\to(\mu,-\mu)}\,(L^{\alpha,\beta})^{T}.$$
Similarly,
$$(P^{-\mu,\mu})^{-1}=T_N^{(\alpha,\beta)\to(-\mu,\mu)}\,(L^{\alpha,\beta})^{T}.$$
Moreover, we know that
$$(\mathrm{diag}(v_l^{\mu}))^{-1}=\mathrm{diag}(v_l^{-\mu}),\qquad (\mathrm{diag}(v_r^{\mu}))^{-1}=\mathrm{diag}(v_r^{-\mu}).$$
Then, the equalities (30) and (31) are easily derived from Theorem 5. □
Then, the equalities (30) and (31) are easily derived from Theorem 5. □
Remark 3. 
In Theorems 5 and 6, the matrices $\mathrm{diag}(v_l^{\pm\mu})$, $\mathrm{diag}(v_r^{\pm\mu})$ and $\mathrm{diag}(c^{\pm\mu})$ are diagonal, and $T_N^{(\alpha_1,\beta_1)\to(\alpha,\beta)}$ is upper triangular; this means that the fractional differentiation/integration matrices and their inverses can be computed rapidly and stably.

4.2. Caputo Derivative

Theorem 7. 
The following representations of the pseudospectral differentiation matrix are valid for $\alpha,\beta>-1$:
$${}_{C}D_l^{\mu}=\mathrm{diag}(v_l^{m-\mu})\,\bar{P}^{\mu-m,m-\mu}\,\mathrm{diag}(\bar{c}^{\,m-\mu})\,T_{N-m}^{(\alpha+m,\beta+m)\to(0,0)}\,\bar{D}_m\,\bar{L}^{\alpha,\beta},$$
$${}_{C}D_r^{\mu}=\mathrm{diag}(v_r^{m-\mu})\,\bar{P}^{m-\mu,\mu-m}\,\mathrm{diag}(\bar{c}^{\,m-\mu})\,T_{N-m}^{(\alpha+m,\beta+m)\to(0,0)}\,\bar{D}_m\,\bar{L}^{\alpha,\beta},$$
where $\bar{D}_m=\mathrm{diag}\big([d_{m,m}^{\alpha,\beta},d_{m+1,m}^{\alpha,\beta},\dots,d_{N,m}^{\alpha,\beta}]\big)$ with $d_{k,m}^{\alpha,\beta}=\frac{\Gamma(k+m+\alpha+\beta+1)}{2^{m}\Gamma(k+\alpha+\beta+1)}$, $(\bar{P}^{a,b})_{i,j}=P_j^{(a,b)}(x_i)$ and $(\bar{L}^{\alpha,\beta})_{j,i}=l_{j+m,i}^{\alpha,\beta}$ for $i=0,\dots,N$; $j=0,\dots,N-m$.
Proof. 
Differentiating (24) $m$ times, one has
$$\frac{d^m}{dx^m}L_j(x)=\sum_{k=m}^{N}d_{k,m}^{\alpha,\beta}\,l_{kj}^{\alpha,\beta}\,P_{k-m}^{(\alpha+m,\beta+m)}(x).$$
Let $b=[b_m,\dots,b_N]^T$ with $b_k=d_{k,m}^{\alpha,\beta}l_{kj}^{\alpha,\beta}$ and $a=[a_0,\dots,a_{N-m}]^T$. Then, one uses $a=T_{N-m}^{(\alpha+m,\beta+m)\to(0,0)}b$ to obtain
$$\frac{d^m}{dx^m}L_j(x)=\sum_{k=0}^{N-m}a_kP_k^{(0,0)}(x).$$
Furthermore, we have
$${}_{C}D_{-1,x}^{\mu}L_j(x)=\sum_{k=0}^{N-m}a_k\,{}_{RL}D_{-1,x}^{-(m-\mu)}P_k^{(0,0)}(x)=\sum_{k=0}^{N-m}a_k\frac{\Gamma(k+1)}{\Gamma(k+m-\mu+1)}(1+x)^{m-\mu}P_k^{(\mu-m,m-\mu)}(x).$$
Taking $x=x_i$ (the Jacobi–Gauss–Lobatto nodes as before) for $i=0,1,\dots,N$, we see that the $j$-th column of ${}_{C}D_l^{\mu}$ equals
$$\mathrm{diag}(v_l^{m-\mu})\,\bar{P}^{\mu-m,m-\mu}\,\mathrm{diag}(\bar{c}^{\,m-\mu})\,a.$$
Combining this with $a=T_{N-m}^{(\alpha+m,\beta+m)\to(0,0)}b$ and the fact that $b$ is the $j$-th column of $\bar{D}_m\bar{L}^{\alpha,\beta}$, we derive the first equality of (32).
The second equality of (32) can be obtained in a similar way. □
It is clear that ${}_{C}D_l^{\mu}$ (or ${}_{C}D_r^{\mu}$) is singular because the sizes of $\bar{P}^{\mu-m,m-\mu}$ (or $\bar{P}^{m-\mu,\mu-m}$) and $\bar{L}^{\alpha,\beta}$ are $(N+1)\times(N-m+1)$ and $(N-m+1)\times(N+1)$, respectively, where $m\ge 1$ ($m-1<\mu<m$) is an integer.
Two particular cases from Theorem 7 are interesting.
1.
If $\mu=0$, then $m=0$, and $\mathrm{diag}(v_l^{0})$, $\mathrm{diag}(v_r^{0})$, $\mathrm{diag}(c^{0})$ and $\bar{D}_0$ are all identity matrices. Here, we also have $\bar{P}^{0,0}=P^{0,0}$ and $\bar{L}^{\alpha,\beta}=L^{\alpha,\beta}$. Then,
$${}_{C}D_l^{0}={}_{C}D_r^{0}=P^{0,0}\,T_N^{(\alpha,\beta)\to(0,0)}\,L^{\alpha,\beta}=[P^{\alpha,\beta}]^{T}L^{\alpha,\beta}=I.$$
2.
If $\mu=k$ is a positive integer, then $m=\mu$, and $\mathrm{diag}(v_l^{0})$, $\mathrm{diag}(v_r^{0})$ and $\mathrm{diag}(\bar{c}^{\,0})$ are all identity matrices (but of size $(N-m+1)\times(N-m+1)$). Then,
$${}_{C}D_l^{k}={}_{C}D_r^{k}=\bar{P}^{0,0}\,T_{N-k}^{(\alpha+k,\beta+k)\to(0,0)}\,\bar{D}_k\,\bar{L}^{\alpha,\beta}.$$
Because $\bar{P}^{0,0}\,T_{N-k}^{(\alpha+k,\beta+k)\to(0,0)}=[\bar{P}^{\alpha,\beta}]^{T}=[\bar{L}^{\alpha,\beta}]^{-1}$, we can obtain the well-known relation for the differentiation matrix of integer order:
$${}_{C}D_l^{k}={}_{C}D_r^{k}=({}_{C}D_l^{1})^{k}=({}_{C}D_r^{1})^{k}.$$
We investigate the eigenvalues of the fractional differentiation matrices obtained above, which are closely related to the stability of the numerical method. The interior part of the fractional differentiation matrices is employed, taking the boundary conditions into consideration. Tests are performed for all of the fractional derivatives defined in Section 2.1 with various $\mu$, $N$, $\alpha$, $\beta$. We report some results as follows, with special emphasis on the Legendre ($\alpha=\beta=0$) and Chebyshev ($\alpha=\beta=-0.5$) cases.
Since   C D l μ is a real matrix, it has complex conjugate pair eigenvalues. Figure 1 shows the distribution of the spectrum of   C D l μ for Chebyshev–Gauss–Lobatto points of N = 64 with different μ = 0.1 , 0.3 , 0.5 , 0.7 , 0.9 , 1 . It is observed that the real parts of the eigenvalues are positive for all cases. It is well known that the differentiation matrix   C D 1 of first order ( μ = 1 ) with Chebyshev collocation points is semi-positive definite for every N N (property 5.3 of [17]). Hence, it is reasonable to estimate that   C D l μ is semi-positive definite for all 0 < μ 1 , N N . Similar conclusions are obtained for Legendre–Gauss–Lobatto points, whose eigenvalues are plotted in Figure 2. For fixed μ = 0.5 , Figure 3 and Figure 4 demonstrate that, with increasing N, the eigenvalues are more scattered away from the axis in both Chebyshev–Gauss–Lobatto and Legendre–Gauss–Lobatto cases.
Figure 5 shows the spectrum of ^{RL}D_l^μ for Legendre–Gauss–Lobatto points with N = 64 and μ = 1.1, 1.3, 1.5, 1.7, 1.9 and 2. The real parts of the eigenvalues are negative in all cases. Hence, we conjecture that ^{RL}D_l^μ is negative semi-definite for all 1 < μ ≤ 2. For fixed μ = 1.5, Figure 6 demonstrates that, as N increases, the eigenvalues again become more scattered away from the real axis.
It has been verified that the Riesz fractional derivative is self-adjoint and positive definite [36]. Figure 7 shows the spectrum of ^{RZ}D^μ with μ = 1.1, 1.3, 1.5, 1.7, 1.9 for Jacobi–Gauss–Lobatto points with α = β = 0.8 and N = 64. Figure 8 shows the spectrum of ^{RZ}D^μ for Jacobi–Gauss–Lobatto points with α = 0.7, β = 1.9 and N = 64. The real parts of the eigenvalues are negative in all cases.

5. Applications in Fractional Eigenvalue Problems

Eigenvalue problems are of importance in the theory and application of partial differential equations. Duan et al. [37] studied fractional eigenvalue problems. Reutskiy [38] presented a numerical method to solve fractional eigenvalue problems based on external excitation and the backward substitution method. He et al. [39] computed second-order fractional eigenvalue problems with the Jacobi–Davidson method. Gupta and Ranta [40] applied the Legendre wavelet method to solve fractional eigenvalue problems.
Example 1. 
Let 1 < μ < 2 . Consider the boundary value problem
^C D_{0,x}^μ u(x) + λ u(x) = 0,   p u(0) − r u′(0) = 0,   q u(1) + s u′(1) = 0,
where p, q, r, s ≥ 0 such that p² + r² ≠ 0 and q² + s² ≠ 0. Our test is for p = q = 1, r = s = 0. This problem is also solved in [37,38,39,40]. For μ = 1.8, the first nine eigenvalues are listed in Table 1, together with the results given in [37,38,39,40] for comparison. Meanwhile, Table 2 lists the first six eigenvalues for μ = 1.6, which can be compared with the results in [37,38,40].
When μ = 2, the analytic eigenvalues of the problem are λ_n = (nπ)², n = 1, 2, … (see [39]). The first six eigenvalues evaluated with the Chebyshev spectral collocation method (α = β = −0.5) and the errors between the numerical and analytic values are listed in the last two columns of Table 3. We also consider what happens as μ approaches 2: the first six eigenvalues for μ = 1.9, 1.999, 1.9999, 1.99999, 1.999999 are listed in Tables 3 and 4. The corresponding eigenvalues move increasingly closer to those of μ = 2.
For μ = 1.1, 1.3, 1.5, the first six eigenvalues are listed in Table 5; they are comparable with the results in [40], agreeing to at least 5–6 digits after the decimal point. The eigenvalues for other values of μ are likewise comparable with the results in [40], so we do not list them here.
From the results in Tables 1–5, our method is shown to be reliable for the evaluation of eigenvalues.
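The μ = 2 column of Table 3 (λ_n = (nπ)²) can be reproduced with an ordinary Chebyshev second-derivative matrix; the sketch below is a stand-in for the paper's fractional construction, using the standard first-order matrix squared and the affine map from [−1, 1] to the problem domain (0, 1):

```python
import numpy as np

def cheb_diff_matrix(N):
    """First-order Chebyshev collocation differentiation matrix."""
    idx = np.arange(N + 1)
    x = np.cos(np.pi * idx / N)
    c = np.where((idx == 0) | (idx == N), 2.0, 1.0) * (-1.0) ** idx
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 40
D, _ = cheb_diff_matrix(N)
# Mapping [-1,1] -> [0,1] scales d^2/dx^2 by 4; Dirichlet conditions
# u(0) = u(1) = 0 are imposed by dropping the first/last rows and columns.
L = -4.0 * (D @ D)[1:-1, 1:-1]
lam = np.sort(np.linalg.eigvals(L).real)
# The smallest eigenvalues approximate (n*pi)^2, n = 1, 2, 3, ...
exact = (np.arange(1, 4) * np.pi) ** 2
assert np.allclose(lam[:3], exact, rtol=1e-8)
```

Only the lowest portion of the discrete spectrum is accurate; the large eigenvalues are spurious, as is typical for collocation discretizations.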
Example 2. 
Let 1 < μ < 2 . Consider the eigenvalue problem of the Riesz derivative
^{RZ}D_x^μ u(x) + λ u(x) = 0,   u(−1) = 0,   u(1) = 0.
We solve the eigenvalue problem (34) using the Jacobi spectral collocation method. It is known that the Riesz fractional operator is self-adjoint and positive definite in a proper Sobolev space [36], and this fact is confirmed by our numerical tests. We also observe that the discrete Riesz fractional differentiation matrix is strictly diagonally dominant with a negative diagonal for every 1 < μ < 2, α, β > −1 and N. The same problem has been studied with the Jacobi–Galerkin spectral method [36]. The first five eigenvalues obtained with the proposed method are listed in Tables 6 and 7, together with the eigenvalues given in [36] for comparison.
From the results in Tables 6 and 7, we conclude that the presented method is reliable for solving eigenvalue problems.

6. Applications in Fractional Differential Equations

6.1. Fractional Initial Value Problems

The basic fractional initial value problem reads
^C D_{a,x}^μ u(x) = f(u, x),   m − 1 < μ < m,   u^{(k)}(a) = u_k,   k = 0, …, m − 1.
For the well-posedness of (35), we refer to [11]. For 0 < μ ≤ 1, the discrete system of (35) reads
D μ u = f ( u , x )
with the collocation point vector x = [x_1, …, x_N]^T, the unknown vector u that approximates the exact solution at the collocation points, and the differentiation matrix D^μ corresponding to the fractional derivative ^C D_{a,x}^μ. The numerical solution u_N(x), with u_N(x_i) = u_i, is obtained by solving this system.
Since exact solutions are known in the following examples, the error E between the exact and numerical solutions is measured by
E := max_{1 ≤ i ≤ N} |u(t_i) − u_N(t_i)|,
where u N ( t ) is the numerical solution and t i denotes the mapped collocation points (Jacobi–Gauss–Lobatto points). The numerical convergence order is estimated with
CO := (log E(N_1) − log E(N_2)) / (log N_2 − log N_1),
where N_1 and N_2 are two different numbers of degrees of freedom. We use CO as a numerical measure of the convergence order; that is, the error decays like O(N^{−CO}).
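For concreteness, the order estimate can be coded as a small helper; the values plugged in below are two entries from the σ = 1.2 column of Table 9, for which the estimate should land near 2σ = 2.4:

```python
import math

def convergence_order(E1, N1, E2, N2):
    """Estimated order CO from errors at two resolutions,
    assuming E(N) ~ C * N**(-CO)."""
    return (math.log(E1) - math.log(E2)) / (math.log(N2) - math.log(N1))

# Spot check with Table 9 (sigma = 1.2 column, N = 16 and N = 24):
co = convergence_order(2.410e-4, 16, 9.146e-5, 24)
print(round(co, 2))   # -> 2.39
```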
Example 3. 
Let 0 < μ < 1 . Consider the scalar linear fractional differential equation
^C D_{0,t}^μ u(t) = f(t),   t ∈ (0, T],   u(0) = u_0.
The right-hand-side term f ( t ) is chosen so that the exact solution satisfies one of the following cases:
C11. 
u(t) = Σ_{k=1}^{5} t^{kσ}/k,   σ > 0,   t ∈ (0, 2],   u_0 = 0.
C12. 
u(t) = t sin(t),   t ∈ (0, 2π],   u_0 = 0.
C13. 
u(t) = E_{μ,1}(t^μ),   t ∈ (0, 3],   u_0 = 1.
For case C11, the source function is exactly f(t) = Σ_{k=1}^{5} Γ(kσ + 1) t^{kσ−μ} / (k Γ(kσ − μ + 1)). Since the solution u has low regularity when σ > 0 is a small non-integer, the convergence order is limited. The errors E and convergence orders CO are listed in Tables 8 and 9, which show clearly that the convergence order in this case is O(N^{−2σ}) when σ > 0 is a small non-integer. When σ is an integer, the solution u is smooth and belongs exactly to P_{5σ}; hence, N = 5σ suffices to resolve the problem. Table 10 lists the errors E, which show that machine precision is essentially achieved already for N = 5σ.
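The exact solution and source term of case C11 can be assembled directly from the Gamma-function formula above; the helper names below are ours, and the μ = 0 and μ = 1 limits give a quick consistency check (the formula then reduces to u itself and to the classical derivative, respectively):

```python
import math

def u_exact(t, sigma):
    """Case C11 solution: u(t) = sum_{k=1}^{5} t**(k*sigma) / k."""
    return sum(t ** (k * sigma) / k for k in range(1, 6))

def f_source(t, sigma, mu):
    """Term-by-term Caputo derivative of u_exact, using
    D^mu t**p = Gamma(p+1) / Gamma(p-mu+1) * t**(p-mu)."""
    return sum(math.gamma(k * sigma + 1) * t ** (k * sigma - mu)
               / (k * math.gamma(k * sigma - mu + 1)) for k in range(1, 6))

# mu = 0 returns u itself; for sigma = 1, mu = 1 it gives
# u'(t) = 1 + t + t**2 + t**3 + t**4, which equals 31 at t = 2.
assert abs(f_source(1.5, 2.5, 0.0) - u_exact(1.5, 2.5)) < 1e-9
assert abs(f_source(2.0, 1.0, 1.0) - 31.0) < 1e-9
```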
For case C12, the solution is smooth, so we expect the convergence to be exponential. The errors E in Table 11 show "spectral accuracy".
For case C13, the solution u has low regularity because the term t^μ is involved, so the convergence order should be low. Tables 12 and 13 list the errors E and convergence orders CO, which show clearly that the convergence order is O(N^{−2μ}).
Example 4. 
Let 0 < μ < 1 . Consider the linear fractional differential system
^C D_{0,t}^μ u_1(t) = u_2(t),   ^C D_{0,t}^μ u_2(t) = −u_1(t) + 1,   0 < t ≤ 10
with the initial values u_1(0) = u_2(0) = 0. For μ = 1, the solution of (38) is u_1(t) = 1 − cos(t), u_2(t) = sin(t). Applying the Legendre spectral collocation method with N = 160 produces maximum errors E_{1,∞} = 2.3412 × 10^−12 and E_{2,∞} = 2.2929 × 10^−12. We plot the curves of the numerical solution u_1(t) for different μ in Figure 9, and the phase-plane portrait of u_1(t) and u_2(t) in Figure 10. The results show good agreement with the figures in [17].
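For μ = 1 the stated exact solution can be verified directly against the integer-order system u_1′ = u_2, u_2′ = −u_1 + 1; a minimal central-difference spot check (the step size h below is ours):

```python
import math

u1 = lambda t: 1.0 - math.cos(t)   # stated exact solution for mu = 1
u2 = lambda t: math.sin(t)

h = 1e-6  # finite-difference step for the spot check
for t in (0.5, 1.0, 2.0):
    d_u1 = (u1(t + h) - u1(t - h)) / (2 * h)   # approximates u1'(t)
    d_u2 = (u2(t + h) - u2(t - h)) / (2 * h)   # approximates u2'(t)
    assert abs(d_u1 - u2(t)) < 1e-8            # u1' = u2
    assert abs(d_u2 - (-u1(t) + 1.0)) < 1e-8   # u2' = -u1 + 1
```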

6.2. Fractional Boundary Value Problems

Let 1 < μ < 2 . As a benchmark fractional boundary value problem, we consider the one-dimensional fractional Helmholtz equation as
λ² u(x) − D_x^μ u(x) = f(x),   x ∈ (a, b),   u(a) = u(b) = 0,
where the fractional derivative operator D x μ will be specified in the following examples. The discrete system of (39) in the spectral collocation method reads
(λ² I − D^μ) u = f(x)
with the (N − 1) × (N − 1) identity matrix I, the fractional differentiation matrix D^μ (with its first and last rows and columns removed) corresponding to the fractional derivative operator, the unknown vector u = u_N(x), and the interior collocation points x = [x_1, …, x_{N−1}]^T (mapped into (a, b)).
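In the classical limit μ = 2, the discrete system (λ²I − D^μ)u = f can be assembled and tested with an ordinary Chebyshev matrix (a stand-in sketch, not the paper's fractional construction), using the manufactured solution u(x) = sin(πx) on (−1, 1):

```python
import numpy as np

def cheb_diff_matrix(N):
    """First-order Chebyshev collocation differentiation matrix."""
    idx = np.arange(N + 1)
    x = np.cos(np.pi * idx / N)
    c = np.where((idx == 0) | (idx == N), 2.0, 1.0) * (-1.0) ** idx
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, lam = 24, 1.0
D, x = cheb_diff_matrix(N)
xi = x[1:-1]                              # interior collocation points
D2 = (D @ D)[1:-1, 1:-1]                  # D^2 with boundary rows/cols removed
u_exact = np.sin(np.pi * xi)
f = (lam**2 + np.pi**2) * u_exact         # lam^2*u - u'' for u = sin(pi*x)
u = np.linalg.solve(lam**2 * np.eye(N - 1) - D2, f)
err = np.max(np.abs(u - u_exact))
assert err < 1e-8                         # spectral accuracy for smooth u
```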
Example 5. 
Consider Equation (39) with the Caputo derivative: D x μ =   C D a , x μ . The source term f ( x ) is chosen so that the exact solution satisfies one of the following cases:
C21. 
u(x) = x^σ − x^{2σ},   σ > 0,   a = 0,   b = 1.
C22. 
u(x) = sin(πx),   a = −1,   b = 1.
For case C21, since the solution u has low regularity when σ > 0 is a small non-integer, the convergence order is limited. The errors E are plotted in Figures 11 and 12 for different N, σ and μ. Figure 12 shows convergence of first order, and overall the convergence order depends on both σ and μ. A possible estimate of the convergence order is O(N^{−2(σ−μ+1)}) when σ > 0 is a small non-integer.
For case C22, the source function is approximated by
f(x) ≈ Σ_{k=1}^{L} (−1)^k π^{2k+1} / Γ(2k + 2 − μ) · (x + 1)^{2k+1−μ}
with a sufficiently large integer L (here, L = 50). Since the solution is smooth, we expect the convergence to be exponential. Figures 13 and 14 plot the errors E, which show "spectral accuracy".
Example 6. 
Consider Equation (39) with the Riesz derivative D x μ =   R Z D x μ . The numerical test is performed for the fractional Poisson equation in two cases:
C31. 
f(x) = 1,   λ = 0,   a = −1,   b = 1.
C32. 
f(x) = sin(πx),   λ = 0,   a = −1,   b = 1.
We solve the fractional Poisson problems with the Riesz derivative by employing the Legendre spectral collocation method (α = β = 0) with N = 64. The profiles of the numerical solutions are plotted in Figures 15 and 16. They match the curves in [41] (Figures 1 and 4 (left)) very well.

6.3. Fractional Initial Boundary Value Problems

The fractional Burgers equation is a generalized model describing the propagation of weak nonlinearities, such as the acoustic phenomenon of a sound wave travelling through a gas-filled tube. Many publications present analytical and numerical solutions of the Riemann–Liouville and Caputo fractional Burgers equations (see [27,42]). One fractional Burgers equation (FBE) in one-dimensional form reads
∂_t u + u ∂_x u = ϵ D_x^μ u,   (x, t) ∈ (a, b) × (0, T]
with the boundary conditions u(a, t) = u(b, t) = 0 and the initial profile u(x, 0) = u_0(x). For the time discretization, we employ a semi-implicit scheme with step size τ, namely the two-step Crank–Nicolson/leapfrog scheme. The full discretization then reads:
(I − ϵτ D^μ) u^{n+1} = (I + ϵτ D^μ) u^{n−1} − 2τ (diag(u^n) D) u^n,   n ≥ 1,
u^1 = (I + ϵτ D^μ) u^0 − τ (diag(u^0) D) u^0,
u^0 = u_0(x),
where D is the first-order differentiation matrix. In the following examples, we always take α = β = 0, N = 360 and τ = 10^−3.
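The time-stepping loop can be sketched generically; here D_mu and D1 stand for the (interior) fractional and first-order differentiation matrices, which are assumed given, and the 1×1 linear sanity check at the end is ours:

```python
import numpy as np

def cn_leapfrog(D_mu, D1, u0, eps, tau, nsteps):
    """Two-step Crank-Nicolson/leapfrog scheme for u_t + u u_x = eps*D^mu u:
    the diffusion term is treated implicitly (CN over 2*tau), the
    nonlinear advection term explicitly by leapfrog."""
    n = len(u0)
    A = np.eye(n) - eps * tau * D_mu       # implicit left-hand operator
    B = np.eye(n) + eps * tau * D_mu
    u_prev = np.asarray(u0, dtype=float)
    # startup step, as in the scheme above:
    u_curr = B @ u_prev - tau * (np.diag(u_prev) @ D1) @ u_prev
    for _ in range(1, nsteps):
        rhs = B @ u_prev - 2.0 * tau * (np.diag(u_curr) @ D1) @ u_curr
        u_prev, u_curr = u_curr, np.linalg.solve(A, rhs)
    return u_curr

# Sanity check on a 1x1 linear problem u' = -u (D_mu = [[-1]], no advection):
u_end = cn_leapfrog(np.array([[-1.0]]), np.zeros((1, 1)),
                    np.array([1.0]), 1.0, 1e-3, 1000)
assert abs(u_end[0] - np.exp(-1.0)) < 1e-4
```

In practice D_mu would be the interior Riesz or Caputo differentiation matrix of this paper and D1 the first-order matrix, with the boundary rows and columns removed.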
Example 7. 
Consider Equation (40) with the Caputo derivative: D x μ =   C D x μ . The numerical test is performed for two cases of initial profiles:
C41. 
u_0(x) = sin(πx),   a = −1,   b = 1.
C42. 
u_0(x) = exp(−2x²),   a = −7,   b = 7.
We first consider the initial profile with two peaks, i.e., case C41. The surface of the numerical solution is plotted in Figure 17 for μ = 1.5 and T = 1 . The evolution of the numerical solution is observed. The numerical solutions of the FBE at time t = 1 are plotted in Figure 18 for different values of fractional order μ = 1.1 , 1.2 , 1.3 , 1.5 , 1.8 .
For case C42, the surface of the numerical solution for μ = 1.5 , T = 1 is plotted in Figure 19. The evolution of the numerical solution is observed. The numerical solutions of the FBE at time t = 1 are plotted in Figure 20 for different values of fractional order μ = 1.1 , 1.2 , 1.3 , 1.5 , 1.8 .
Example 8. 
Consider Equation (40) with the Riesz derivative: D_x^μ = ^{RZ}D_x^μ. The numerical test is performed for the same two initial profiles as in Example 7.
We proceed in the same way as in the above example. We first consider the initial profile with two peaks, i.e., case C41. The surface of the numerical solution is plotted in Figure 21 for μ = 1.5 , T = 1 . The evolution of the numerical solution is observed. The numerical solutions of the FBE at time t = 1 are plotted in Figure 22 for different values of fractional order μ = 1.1 , 1.2 , 1.3 , 1.5 , 1.8 .
For case C42, the surface of the numerical solution is plotted in Figure 23 for μ = 1.5 , T = 1 . The evolution of the numerical solution is observed. The numerical solutions of the FBE at time t = 1 are plotted in Figure 24 for different values of fractional order μ = 1.1 , 1.2 , 1.3 , 1.5 , 1.8 .

7. Conclusions

A fractional differentiation matrix is constructed by employing the Jacobi–Jacobi transformation between two indexes (α_1, β_1) and (α_2, β_2). In effect, the fractional differentiation matrix is given as a product of special matrices. With the aid of this representation, the fractional differentiation matrix can be evaluated in a stable, fast and efficient manner. The representation gives a direct way to form differentiation matrices with relatively few operations and is easy to program. Another benefit is that the inverses of the differentiation matrices are obtained at the same time, which makes the discrete systems easy to solve. We develop applications of the fractional differentiation matrix with the Jacobi spectral collocation method to fractional eigenvalue problems and fractional initial and boundary value problems, as well as fractional partial differential equations. Our numerical experiments involve the Riemann–Liouville, Caputo and Riesz derivatives, and all of them demonstrate that the algorithm is efficient. In addition, our results provide an alternative option for computing fractional collocation differentiation matrices. We expect that our findings can contribute to further applications of the spectral collocation method to fractional-order problems.
The effectiveness of the suggested method needs further exploration. We will investigate the complexity of the method and perform comparisons with other methods in the future. We also expect to apply the method to fractional differential equations in high-dimensional domains.

Author Contributions

Conceptualization, T.Z.; methodology, T.Z.; software, W.L. and H.M.; validation, W.L., H.M. and T.Z.; formal analysis, T.Z.; investigation, W.L. and H.M.; resources, T.Z.; data curation, W.L. and H.M.; writing—original draft preparation, T.Z.; writing—review and editing, T.Z.; visualization, W.L., H.M. and T.Z.; supervision, T.Z.; project administration, T.Z.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Costa, B.; Don, W.S. On the computation of high order pseudospectral derivatives. Appl. Numer. Math. 2000, 33, 151–159. [Google Scholar] [CrossRef]
  2. Don, W.S.; Solomonoff, A. Accuracy and speed in computing the Chebyshev collocation derivative. SIAM J. Sci. Comput. 1995, 16, 1253–1268. [Google Scholar] [CrossRef]
  3. Elbarbary, E.M.E.; El-Sayed, S.M. Higher order pseudospectral differentiation matrices. Appl. Numer. Math. 2005, 55, 425–438. [Google Scholar] [CrossRef]
  4. Solomonoff, A. A fast algorithm for spectral differentiation. J. Comput. Phys. 1992, 98, 174–177. [Google Scholar] [CrossRef]
  5. Welfert, B.D. Generation of pseudospectral differentiation matrices I. SIAM J. Numer. Anal. 1997, 34, 1640–1657. [Google Scholar] [CrossRef]
  6. Weideman, J.A.C.; Reddy, S.C. A MATLAB differentiation matrix suite. ACM Trans. Math. Softw. 2000, 26, 465–519. [Google Scholar] [CrossRef]
  7. Diethelm, K. The Analysis of Fractional Differential Equations; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  8. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  9. Li, C.P.; Cai, M. Theory and Numerical Approximations of Fractional Integrals and Derivatives; SIAM: Philadelphia, PA, USA, 2019. [Google Scholar]
  10. Li, C.P.; Zeng, F.H. Numerical Methods for Fractional Calculus; Chapman and Hall/CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  11. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
  12. Deng, W.H.; Hou, R.; Wang, W.L.; Xu, P.B. Modeling Anomalous Diffusion: From Statistics to Mathematics; World Scientific: Singapore, 2020. [Google Scholar]
  13. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000. [Google Scholar]
  14. Tarasov, V.E. Fractional Dynamics: Application of Fractional Calculus to Dynamics of Particles, Fields and Media; Higher Education Press: Beijing, China, 2010. [Google Scholar]
  15. Zayernouri, M.; Karniadakis, G.E. Fractional spectral collocation methods for linear and nonlinear variable order FPDEs. J. Comput. Phys. 2015, 293, 312–338. [Google Scholar] [CrossRef]
  16. Jiao, Y.J.; Wang, L.L.; Huang, C. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis. J. Comput. Phys. 2016, 305, 1–28. [Google Scholar] [CrossRef]
  17. Dabiri, A.; Butcher, E.A. Efficient modified Chebyshev differentiation matrices for fractional differential equations. Commun. Nonlinear Sci. Numer. Simulat. 2017, 50, 284–310. [Google Scholar] [CrossRef]
  18. Al-Mdallal, Q.M.; Omer, A.S.A. Fractional-order Legendre-collocation method for solving fractional initial value problems. Appl. Math. Comput. 2018, 321, 74–84. [Google Scholar] [CrossRef]
19. Gholami, S.; Babolian, E.; Javidi, M. Fractional pseudospectral integration/differentiation matrix and fractional differential equations. Appl. Math. Comput. 2019, 343, 314–327. [Google Scholar] [CrossRef]
  20. Wu, Z.S.; Zhang, X.X.; Wang, J.H.; Zeng, X.Y. Applications of fractional differentiation matrices in solving Caputo fractional differential equations. Fractal Fract. 2023, 7, 374. [Google Scholar] [CrossRef]
  21. Zhao, T.G. Efficient spectral collocation method for tempered fractional differential equations. Fractal Fract. 2023, 7, 277. [Google Scholar] [CrossRef]
  22. Dahy, S.A.; El-Hawary, H.M.; Alaa Fahim, A.; Aboelenen, T. High-order spectral collocation method using tempered fractional Sturm–Liouville eigenproblems. Comput. Appl. Math. 2023, 42, 338. [Google Scholar] [CrossRef]
  23. Zhao, T.G.; Zhao, L.J. Efficient Jacobian spectral collocation method for spatio-dependent temporal tempered fractional Feynman-Kac equation. Commun. Appl. Math. Comput. 2024; to appear. [Google Scholar]
  24. Zhao, T.G.; Zhao, L.J. Jacobian spectral collocation method for spatio-temporal coupled Fokker–Planck equation with variable-order fractional derivative. Commun. Nonlinear Sci. Numer. Simulat. 2023, 124, 107305. [Google Scholar] [CrossRef]
  25. Zhao, T.G.; Li, C.P.; Li, D.X. Efficient spectral collocation method for fractional differential equation with Caputo-Hadamard derivative. Frac. Calc. Appl. Anal. 2023, 26, 2902–2927. [Google Scholar] [CrossRef]
  26. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  27. Wu, Q.Q.; Zeng, X.Y. Jacobi collocation methods for solving generalized space-fractional Burgers’ equations. Commun. Appl. Math. Comput. 2020, 2, 305–318. [Google Scholar] [CrossRef]
  28. Doha, E.H.; Bhrawy, A.H.; Ezz-Eldien, S.S. A new Jacobi operational matrix: An application for solving fractional differential equations. Appl. Math. Model. 2012, 36, 4931–4943. [Google Scholar] [CrossRef]
29. Li, C.P.; Zeng, F.H.; Liu, F.W. Spectral approximations to the fractional integral and derivative. Frac. Calc. Appl. Anal. 2012, 15, 383–406. [Google Scholar] [CrossRef]
  30. Cai, M.; Li, C.P. Regularity of the solution to Riesz-type fractional differential equation. Integral Transform. Spec. Funct. 2019, 30, 711–742. [Google Scholar] [CrossRef]
  31. Cai, M.; Li, C.P. On Riesz derivative. Fract. Calc. Appl. Anal. 2019, 22, 287–301. [Google Scholar] [CrossRef]
32. Garrappa, R. Numerical evaluation of two and three parameter Mittag–Leffler functions. SIAM J. Numer. Anal. 2015, 53, 1350–1369. [Google Scholar] [CrossRef]
  33. Szegő, G. Orthogonal Polynomials, 4th ed.; American Mathematical Society: Providence, RI, USA, 1975. [Google Scholar]
  34. Shen, J.; Wang, Y.W.; Xia, J.L. Fast structured Jacobi-Jacobi transforms. Math. Comput. 2019, 88, 1743–1772. [Google Scholar] [CrossRef]
  35. Chen, S.; Shen, J.; Wang, L.L. Generalized Jacobi functions and their applications to fractional differential equations. Math. Comput. 2016, 85, 1603–1638. [Google Scholar] [CrossRef]
  36. Chen, L.Z.; Mao, Z.P.; Li, H.Y. Jacobi-Galerkin spectral method for eigenvalue problems of Riesz fractional differential equations. arXiv 2018, arXiv:1803.03556. [Google Scholar] [CrossRef]
  37. Duan, J.S.; Wang, Z.; Liu, Y.L.; Qiu, X. Eigenvalue problems for fractional ordinary differential equations. Chaos Solitons Fractals 2013, 46, 46–53. [Google Scholar] [CrossRef]
  38. Reutskiy, S.Y. A novel method for solving second order fractional eigenvalue problems. J. Comput. Appl. Math. 2016, 306, 133–153. [Google Scholar] [CrossRef]
  39. He, Y.; Zuo, Q. Jacobi-Davidson method for the second order fractional eigenvalue problems. Chaos Solitons Fractals 2021, 143, 110614. [Google Scholar] [CrossRef]
  40. Gupta, S.; Ranta, S. Legendre wavelet based numerical approach for solving a fractional eigenvalue problem. Chaos Solitons Fractals 2022, 155, 111647. [Google Scholar] [CrossRef]
41. Lischke, A.; Pang, G.F.; Gulian, M.; Song, F.Y.; Glusa, C.; Zheng, X.N.; Mao, Z.P.; Cai, W.; Meerschaert, M.M.; Ainsworth, M.; et al. What is the fractional Laplacian? A comparative review with new results. J. Comput. Phys. 2020, 404, 109009. [Google Scholar] [CrossRef]
  42. Mao, Z.P.; Karniadakis, G.E. Fractional Burgers equation with nonlinear non-locality: Spectral vanishing viscosity and local discontinuous Galerkin methods. J. Comput. Phys. 2017, 336, 143–163. [Google Scholar] [CrossRef]
Figure 1. Eigenvalues of ^C D_l^μ: N = 64, α = β = −0.5.
Figure 2. Eigenvalues of ^C D_l^μ: N = 64, α = β = 0.
Figure 3. Eigenvalues of ^C D_l^{0.5}: α = β = −0.5.
Figure 4. Eigenvalues of ^C D_l^{0.5}: α = β = 0.
Figure 5. Eigenvalues of ^{RL}D_l^μ: α = β = 0.
Figure 6. Eigenvalues of ^{RL}D_l^{1.5}: α = β = 0.
Figure 7. Eigenvalues of ^{RZ}D^μ: α = β = 0.8.
Figure 8. Eigenvalues of ^{RZ}D^μ: α = 0.7, β = 1.9.
Figure 9. Curves of the numerical solution of Example 4.
Figure 10. Phase-plane portrait of u_1(t) and u_2(t) of Example 4.
Figure 11. Errors of C21 of Example 5.
Figure 12. Errors of C21 of Example 5.
Figure 13. Errors of C22 of Example 5.
Figure 14. Errors of C22 of Example 5.
Figure 15. Numerical solution of C31 of Example 6 with N = 64, α = β = 0.
Figure 16. Numerical solution of C32 of Example 6 with N = 64, α = β = 0.
Figure 17. Numerical solution of C41 of Example 7 with u_0(x) = sin(πx).
Figure 18. Numerical solution of C41 of Example 7 with u_0(x) = sin(πx).
Figure 19. Numerical solution of C42 of Example 7 with u_0(x) = exp(−x²).
Figure 20. Numerical solution of C42 of Example 7 with u_0(x) = exp(−x²).
Figure 21. Numerical solution of Example 8 with u_0(x) = sin(πx).
Figure 22. Numerical solution of Example 8 with u_0(x) = sin(πx).
Figure 23. Numerical solution of Example 8 with u_0(x) = exp(−x²).
Figure 24. Numerical solution of Example 8 with u_0(x) = exp(−x²).
Table 1. The first nine eigenvalues of Example 1, computed with the Chebyshev collocation method (α = β = −0.5) with μ = 1.8 and N = 200.
λ | Our Method | [40] | [38] | [39] | [37]
λ_1 | 9.45685689126 | 9.4568568891 | 9.4568568892 | 9.4997 | 9.4569
λ_2 | 28.47687912479 | 28.4768791170 | 28.4768791186 | 28.5116 | 28.4769
λ_3 | 62.20037779983 | 62.2003777331 | 62.2003777529 | 62.3239 | 62.2004
λ_4 | 97.06323747284 | 97.0632373708 | 97.0632377552 | 97.0896 | 97.0632
λ_5 | 155.45013805266 | 155.4501373840 | 155.4499080962 | 155.6972 | -
λ_6 | 196.59593024267 | 196.5959302453 | 196.5986985127 | 196.5152 | -
λ_7 | 301.52706976868 | - | - | 304.19 + 3.00i | -
λ_8 | 306.72685026127 | - | - | 304.19 + 3.00i | -
λ_9 | 461.179 + 43.050i | - | - | 461.24 + 43.29i | -
Table 2. The first six eigenvalues of Example 1, computed with the Chebyshev collocation method (α = β = −0.5) with μ = 1.6 and N = 200.
λ | Our Method | [40] | [38] | [37]
λ_1 | 13.420474051 | 13.42047399 | 13.4204739885 | 13.4205
λ_2 | 14.645442473 | 14.64544252 | 14.6454425351 | 14.6454
λ_3 | 47.292859 + 18.850956i | 47.292858 + 18.850956i | - | -
λ_4 | 91.705190 + 43.625498i | 91.705189 + 43.625496i | - | -
λ_5 | 145.569415 + 75.805031i | 145.569416 + 75.805025i | - | -
λ_6 | 207.859129 + 114.486222i | 207.859147 + 114.486204i | - | -
Table 3. The first six eigenvalues of Example 1, computed with the Chebyshev collocation method (α = β = −0.5) with μ = 1.9999, 1.99999, 1.999999 and N = 200.
λ | μ = 1.9999 | μ = 1.99999 | μ = 1.999999 | μ = 2 | Error
λ_1 | 9.8691292203 | 9.8695568729 | 9.8695996482 | 9.86960440108 | 1.073 × 10^−11
λ_2 | 39.4719645919 | 39.4777722446 | 39.4783530678 | 39.47841760434 | 1.995 × 10^−11
λ_3 | 88.8081881526 | 88.8246142309 | 88.8262570696 | 88.82643960980 | 7.148 × 10^−12
λ_4 | 157.8754859530 | 157.9098514892 | 157.9132885198 | 157.91367041743 | 8.527 × 10^−14
λ_5 | 246.6748284659 | 246.7335809315 | 246.7394571083 | 246.74011002723 | 1.336 × 10^−12
λ_6 | 355.2042026720 | 355.2956013900 | 355.3047427196 | 355.30575843924 | 2.035 × 10^−11
Table 4. The first six eigenvalues of Example 1, computed with the Chebyshev collocation method (α = β = −0.5) with different μ and N = 200.
λ | μ = 1.9 | μ = 1.9 [40] | μ = 1.999 | μ = 1.999 [39]
λ_1 | 9.5141431295 | 9.5141431288 | 9.8648626632 | 9.8648
λ_2 | 33.5956714125 | 33.5956714089 | 39.4139459491 | 39.4139
λ_3 | 73.0390172335 | 73.0390172163 | 88.6441581077 | 88.6442
λ_4 | 124.4185311384 | 124.4185311250 | 157.5323069955 | 157.5325
λ_5 | 191.1460514291 | 191.1460515330 | 246.0882332022 | 246.0886
λ_6 | 267.9451997398 | 267.9452006460 | 354.2916713396 | 354.2922
Table 5. The first six eigenvalues of Example 1, computed with the Chebyshev collocation method (α = β = −0.5) with μ = 1.1, 1.3, 1.5 and N = 200.
λ | μ = 1.1 | μ = 1.3 | μ = 1.5
λ_1 | 1.3531852 + 7.2497408i | 5.4727766 + 8.2732647i | 11.1466676 + 6.1222836i
λ_2 | 2.8098353 + 15.7308531i | 13.3926574 + 21.8112244i | 33.3136683 + 23.1298984i
λ_3 | 4.3033960 + 24.6907235i | 22.3712536 + 37.9533918i | 61.1959645 + 46.6510167i
λ_4 | 5.8291184 + 33.9686708i | 32.1686781 + 55.9626406i | 93.8181576 + 75.3667762i
λ_5 | 7.3824934 + 43.4883769i | 42.6472214 + 75.4712271i | 130.5448587 + 108.4998614i
λ_6 | 8.9599335 + 53.2043853i | 53.7149929 + 96.2501648i | 170.9467051 + 145.5427194i
Table 6. The first five eigenvalues of Example 2, computed with the Legendre collocation method (α = β = 0) with μ = 1.2, 1.4 and N = 200.
λ | μ = 1.2, Our Method | μ = 1.2, [36] | μ = 1.4, Our Method | μ = 1.4, [36]
λ_1 | 1.297024021884 | 1.296995777 | 1.483262055566 | 1.4832334320
λ_2 | 3.486806460504 | 3.486730536 | 4.458260013435 | 4.4581739838
λ_3 | 5.911808693986 | 5.911679975 | 8.150874006594 | 8.1507167266
λ_4 | 8.534627231336 | 8.534441423 | 12.424593370123 | 12.424353637
λ_5 | 11.292675855564 | 11.29243001 | 17.162678802344 | 17.162347657
Table 7. The first five eigenvalues of Example 2, computed with the Legendre collocation method (α = β = 0) with μ = 1.6, 1.8 and N = 200.
λ | μ = 1.6, Our Method | μ = 1.6, [36] | μ = 1.8, Our Method | μ = 1.8, [36]
λ_1 | 1.728321890005 | 1.72829595710 | 2.048752746738 | 2.04873498313
λ_2 | 5.756434650807 | 5.75634828003 | 7.503181981160 | 7.50311692608
λ_3 | 11.312063027525 | 11.3118933010 | 15.800031154322 | 15.7998941633
λ_4 | 18.177615608424 | 18.1773428791 | 26.724474991011 | 26.7242432849
λ_5 | 26.187596954514 | 26.1872040516 | 40.114581604547 | 40.1142338051
Table 8. The error E and convergence order CO of C11 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5) for σ = 2.5.
N | μ = 0.2: E, CO | μ = 0.4: E, CO | μ = 0.6: E, CO | μ = 0.8: E, CO
12 | 6.434 × 10^−6, - | 1.329 × 10^−5, - | 2.627 × 10^−5, - | 5.384 × 10^−5, -
20 | 2.356 × 10^−7, 6.47 | 5.692 × 10^−7, 6.17 | 1.109 × 10^−6, 6.20 | 1.968 × 10^−6, 6.48
28 | 4.373 × 10^−8, 5.01 | 1.055 × 10^−7, 5.01 | 2.055 × 10^−7, 5.01 | 3.648 × 10^−7, 5.01
36 | 1.244 × 10^−8, 5.00 | 2.998 × 10^−8, 5.01 | 5.840 × 10^−8, 5.01 | 1.037 × 10^−7, 5.01
44 | 4.559 × 10^−9, 5.00 | 1.099 × 10^−8, 5.00 | 2.140 × 10^−8, 5.00 | 3.800 × 10^−8, 5.00
52 | 1.977 × 10^−9, 5.00 | 4.764 × 10^−9, 5.00 | 9.280 × 10^−9, 5.00 | 1.648 × 10^−8, 5.00
60 | 9.664 × 10^−10, 5.00 | 2.328 × 10^−9, 5.00 | 4.536 × 10^−9, 5.00 | 8.054 × 10^−9, 5.00
68 | 5.168 × 10^−10, 5.00 | 1.245 × 10^−9, 5.00 | 2.426 × 10^−9, 5.00 | 4.307 × 10^−9, 5.00
Table 9. The error E and convergence order CO of C11 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5) for μ = 0.5.
N | σ = 0.5: E, CO | σ = 1.2: E, CO | σ = 1.8: E, CO | σ = 2.2: E, CO
8 | 4.044 × 10^−2, - | 1.236 × 10^−3, - | 1.263 × 10^−3, - | 7.994 × 10^−2, -
16 | 2.036 × 10^−2, 0.99 | 2.410 × 10^−4, 2.36 | 1.811 × 10^−5, 6.12 | 4.055 × 10^−6, 14.27
24 | 1.359 × 10^−2, 1.00 | 9.146 × 10^−5, 2.39 | 4.193 × 10^−6, 3.61 | 6.775 × 10^−7, 4.41
32 | 1.020 × 10^−2, 1.00 | 4.591 × 10^−5, 2.40 | 1.487 × 10^−6, 3.60 | 1.908 × 10^−7, 4.41
40 | 8.158 × 10^−3, 1.00 | 2.689 × 10^−5, 2.40 | 6.658 × 10^−7, 3.60 | 7.143 × 10^−8, 4.40
48 | 6.799 × 10^−3, 1.00 | 1.737 × 10^−5, 2.40 | 3.453 × 10^−7, 3.60 | 3.201 × 10^−8, 4.40
56 | 5.828 × 10^−3, 1.00 | 1.200 × 10^−5, 2.40 | 1.982 × 10^−7, 3.60 | 1.624 × 10^−8, 4.40
64 | 5.100 × 10^−3, 1.00 | 8.708 × 10^−6, 2.40 | 1.226 × 10^−7, 3.60 | 9.024 × 10^−9, 4.40
Table 10. The error E of C11 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5).

| N (σ) | μ = 0.1 | μ = 0.3 | μ = 0.5 | μ = 0.7 | μ = 0.9 | μ = 0.99 |
|---|---|---|---|---|---|---|
| 5 (1) | 1.776 × 10^−14 | 1.599 × 10^−14 | 1.954 × 10^−14 | 1.776 × 10^−14 | 2.309 × 10^−14 | 1.599 × 10^−14 |
| 10 (2) | 2.387 × 10^−12 | 2.132 × 10^−12 | 2.103 × 10^−12 | 2.558 × 10^−12 | 2.217 × 10^−12 | 2.103 × 10^−12 |
| 15 (3) | 5.912 × 10^−11 | 2.728 × 10^−11 | 2.819 × 10^−11 | 6.003 × 10^−11 | 2.728 × 10^−11 | 3.547 × 10^−11 |
Table 11. The error E of C12 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5).

| N | μ = 0.1 | μ = 0.3 | μ = 0.5 | μ = 0.7 | μ = 0.9 | μ = 0.99 |
|---|---|---|---|---|---|---|
| 6 | 9.557 × 10^−3 | 2.978 × 10^−2 | 5.208 × 10^−2 | 8.299 × 10^−2 | 1.377 × 10^−1 | 1.931 × 10^−1 |
| 10 | 7.758 × 10^−6 | 2.475 × 10^−5 | 4.212 × 10^−5 | 6.440 × 10^−5 | 1.072 × 10^−4 | 1.626 × 10^−4 |
| 14 | 1.522 × 10^−9 | 4.709 × 10^−9 | 8.159 × 10^−9 | 1.212 × 10^−8 | 2.043 × 10^−8 | 3.113 × 10^−8 |
| 18 | 2.749 × 10^−13 | 5.519 × 10^−13 | 5.405 × 10^−13 | 1.002 × 10^−12 | 1.333 × 10^−12 | 2.055 × 10^−12 |
| 22 | 2.398 × 10^−13 | 5.959 × 10^−13 | 3.545 × 10^−14 | 1.024 × 10^−12 | 1.420 × 10^−13 | 1.631 × 10^−13 |
Table 12. The error E and convergence order CO of C13 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5).

| N | E (μ = 0.2) | CO | E (μ = 0.3) | CO | E (μ = 0.5) | CO | E (μ = 0.6) | CO |
|---|---|---|---|---|---|---|---|---|
| 4 | 6.129 × 10^−2 | - | 8.253 × 10^−2 | - | 1.039 × 10^−1 | - | 1.028 × 10^−1 | - |
| 8 | 5.106 × 10^−2 | 0.26 | 6.075 × 10^−2 | 0.44 | 5.514 × 10^−2 | 0.91 | 4.442 × 10^−2 | 1.21 |
| 16 | 4.185 × 10^−2 | 0.29 | 4.324 × 10^−2 | 0.49 | 2.803 × 10^−2 | 0.98 | 1.894 × 10^−2 | 1.23 |
| 32 | 3.379 × 10^−2 | 0.31 | 3.002 × 10^−2 | 0.53 | 1.408 × 10^−2 | 0.99 | 8.145 × 10^−3 | 1.22 |
| 64 | 2.693 × 10^−2 | 0.33 | 2.049 × 10^−2 | 0.55 | 7.047 × 10^−3 | 1.00 | 3.525 × 10^−3 | 1.21 |
| 128 | 2.123 × 10^−2 | 0.34 | 1.383 × 10^−2 | 0.57 | 3.524 × 10^−3 | 1.00 | 1.530 × 10^−3 | 1.20 |
| 256 | 1.659 × 10^−2 | 0.36 | 9.261 × 10^−3 | 0.58 | 1.762 × 10^−3 | 1.00 | 6.653 × 10^−4 | 1.20 |
| 512 | 1.288 × 10^−2 | 0.37 | 6.170 × 10^−3 | 0.59 | 8.812 × 10^−4 | 1.00 | 2.895 × 10^−4 | 1.20 |
Table 13. The error E and convergence order CO of C13 in Example 3, computed with the Chebyshev collocation method (α = β = −0.5).

| N | E (μ = 0.7) | CO | E (μ = 0.8) | CO | E (μ = 0.9) | CO | E (μ = 0.99) | CO |
|---|---|---|---|---|---|---|---|---|
| 4 | 9.283 × 10^−2 | - | 7.376 × 10^−2 | - | 4.563 × 10^−2 | - | 1.312 × 10^−2 | - |
| 8 | 3.180 × 10^−2 | 1.55 | 1.939 × 10^−2 | 1.93 | 8.566 × 10^−3 | 2.41 | 7.498 × 10^−4 | 4.13 |
| 16 | 1.144 × 10^−2 | 1.47 | 5.948 × 10^−3 | 1.70 | 2.268 × 10^−3 | 1.92 | 1.754 × 10^−4 | 2.10 |
| 32 | 4.247 × 10^−3 | 1.43 | 1.918 × 10^−3 | 1.63 | 6.379 × 10^−4 | 1.83 | 4.370 × 10^−5 | 2.01 |
| 64 | 1.597 × 10^−3 | 1.41 | 6.280 × 10^−4 | 1.61 | 1.821 × 10^−4 | 1.81 | 1.103 × 10^−5 | 1.99 |
| 128 | 6.032 × 10^−4 | 1.40 | 2.066 × 10^−4 | 1.60 | 5.221 × 10^−5 | 1.80 | 2.793 × 10^−6 | 1.98 |
| 256 | 2.283 × 10^−4 | 1.40 | 6.811 × 10^−5 | 1.60 | 1.499 × 10^−5 | 1.80 | 7.077 × 10^−7 | 1.98 |
| 512 | 8.647 × 10^−5 | 1.40 | 2.246 × 10^−5 | 1.60 | 4.303 × 10^−6 | 1.80 | 1.794 × 10^−7 | 1.98 |
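Reading the asymptotic orders for C13 across both μ ranges, the tabulated CO at the largest N appears to settle near 2μ for every μ (e.g., 1.00 for μ = 0.5 and 1.98 for μ = 0.99). This is an empirical observation on the tabulated data, not a rate proved in the paper; a quick sanity check of the pattern:

```python
# Observed convergence order CO at N = 512 for C13, read off the last rows
# of Tables 12 and 13, keyed by the fractional order mu. The comparison
# against 2*mu is our empirical reading of the data, not a proven rate.
observed = {0.2: 0.37, 0.3: 0.59, 0.5: 1.00, 0.6: 1.20,
            0.7: 1.40, 0.8: 1.60, 0.9: 1.80, 0.99: 1.98}
for mu, co in observed.items():
    # Each tabulated order lies within 0.05 of 2*mu (the smallest mu
    # values are still slowly approaching the limit from below).
    assert abs(co - 2 * mu) <= 0.05, (mu, co)
print("tabulated orders agree with CO ~ 2*mu to within 0.05")
```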
Li, W.; Ma, H.; Zhao, T. Construction of Fractional Pseudospectral Differentiation Matrices with Applications. Axioms 2024, 13, 305. https://doi.org/10.3390/axioms13050305
