Article

Efficient Preconditioning Based on Scaled Tridiagonal and Toeplitz-like Splitting Iteration Method for Conservative Space Fractional Diffusion Equations

1 School of Data Science, Fudan University, Shanghai 200433, China
2 Shanghai Key Laboratory for Contemporary Applied Mathematics, Fudan University, Shanghai 200433, China
Mathematics 2024, 12(15), 2405; https://doi.org/10.3390/math12152405
Submission received: 12 July 2024 / Revised: 29 July 2024 / Accepted: 1 August 2024 / Published: 2 August 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
The purpose of this work is to study the efficient numerical solvers for time-dependent conservative space fractional diffusion equations. Specifically, for the discretized Toeplitz-like linear system, we aim to study efficient preconditioning based on a matrix-splitting iteration method. We propose a scaled tridiagonal and Toeplitz-like splitting iteration method. Its asymptotic convergence property is first established. Further, based on the induced preconditioner, a fast circulant-like preconditioner is developed to accelerate the convergence of the Krylov Subspace iteration methods. Theoretical results suggest that the fast preconditioner can inherit the effectiveness of the original induced preconditioner. Numerical results also demonstrate its efficiency.

1. Introduction

We consider the time-dependent conservative space fractional diffusion equation (conservative SFDE), which was proposed in [1]. It is of the form
$$\frac{\partial u(x,t)}{\partial t} = \frac{\partial}{\partial x}\left[ d(x,t)\left( \omega\,\frac{\partial^{1-\beta}}{\partial x^{1-\beta}} - (1-\omega)\,\frac{\partial^{1-\beta}}{\partial(-x)^{1-\beta}} \right) u(x,t) \right] + f(x,t), \quad x \in \Omega = (a,b),\ t \in [0,T],$$
$$u(x,0) = u_0(x), \quad x \in [a,b], \qquad u(a,t) = u(b,t) = 0, \quad t \in [0,T], \tag{1}$$
where $d(x,t)$ is a variable positive bounded diffusion coefficient, $f(x,t)$ is the source term, $0 \le \omega \le 1$ is a weight parameter, and $\frac{\partial^{1-\beta} u(x,t)}{\partial x^{1-\beta}}$, $\frac{\partial^{1-\beta} u(x,t)}{\partial (-x)^{1-\beta}}$ ($0 < \beta < 1$) are the left and right Riemann–Liouville fractional derivatives [2], respectively, with
$$\frac{\partial^{1-\beta} u(x,t)}{\partial x^{1-\beta}} = \frac{1}{\Gamma(\beta)}\,\frac{\partial}{\partial x}\int_a^x (x-s)^{\beta-1}\, u(s,t)\, ds, \qquad \frac{\partial^{1-\beta} u(x,t)}{\partial (-x)^{1-\beta}} = -\frac{1}{\Gamma(\beta)}\,\frac{\partial}{\partial x}\int_x^b (s-x)^{\beta-1}\, u(s,t)\, ds,$$
where Γ ( · ) is the Gamma function.
The conservative SFDE (1) can be derived by combining the conservation of mass with the fractional Fick's law; for the detailed derivation, one can refer to [3,4,5]. It has been recognized that such an SFDE provides an appropriate description of anomalous diffusion in important physical applications, such as solute transport in groundwater and subsurface flow [6,7,8], where the classical integer-order diffusion equation typically fails [3,4,9,10,11]. Also, in the setting of a variable diffusion coefficient, the conservative SFDE appears both physically and experimentally more sound than the SFDE in non-conservative form [3,6,8].
For the solution of SFDEs, analytical solutions can rarely be obtained by tools such as the Fourier, Laplace, and Mellin transforms [10]. Therefore, it is necessary and significant to consider numerical solvers. For the numerical solution of the conservative SFDE, the finite volume method (FVM) is typically considered in order to guarantee the local mass conservation property [1,11,12]. However, due to the non-local nature of the fractional operator, the discretized coefficient matrix is dense. As a result, a direct solver needs $O(N^3)$ computational cost and $O(N^2)$ storage, where $N$ is the problem size, which is quite expensive. Such a computational challenge is common in solving all kinds of discretized SFDEs. Nevertheless, the coefficient matrix of an SFDE discretized on a uniform mesh generally possesses a Toeplitz-like structure, so it can be multiplied with a vector in $O(N \log N)$ complexity and stored in $O(N)$ memory. This fact was first revealed by [10], and motivated by this, Krylov Subspace iteration methods [13] have become popular solvers.
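As background for the complexity claims above, the Toeplitz matrix–vector product can be carried out with FFTs by embedding the Toeplitz matrix into a circulant matrix of twice the size. The following NumPy sketch (illustrative code, not from the paper) shows this standard technique:

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply an N x N Toeplitz matrix (first column `col`, first row `row`,
    with col[0] == row[0]) by a vector x in O(N log N) via circulant embedding."""
    n = len(x)
    # First column of the 2N x 2N circulant matrix that embeds the Toeplitz matrix.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Sanity check against a dense Toeplitz product.
rng = np.random.default_rng(0)
n = 64
col, row = rng.standard_normal(n), rng.standard_normal(n)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec(col, row, x), T @ x)
```

Storing only the first column and first row gives the $O(N)$ memory footprint mentioned above.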
By now, it has been widely recognized that efficient preconditioning is indispensable for the successful implementation of Krylov Subspace iteration methods. There have been quite a few studies on preconditioning for SFDEs in non-conservative form; for instance, the circulant preconditioner [14], the circulant-based approximate inverse preconditioner [15], the banded preconditioners [16,17], the matrix-splitting-based preconditioners [18,19,20,21,22,23,24,25], and the $\tau$-matrix-based preconditioners [26]. However, research specifically into preconditioning for conservative SFDEs, in particular those with variable diffusion coefficients, is still at an early stage. The related works are as follows. In [1], aiming at the Toeplitz-like discrete linear systems of the conservative SFDE (1), Pan et al. proposed a circulant-based approximate inverse (CAI) preconditioner. Both theoretical analysis and numerical results demonstrate the efficiency of the CAI preconditioner, provided that the diffusion coefficient is sufficiently smooth. Meanwhile, Pan et al. [27] also developed an efficient scaled circulant preconditioner for a steady-state conservative SFDE. Moreover, Donatelli et al. [28] studied two banded preconditioners, as well as a multigrid solver, for steady-state Riesz SFDEs in conservative form.
The main aim of this paper is to propose and develop an efficient matrix-splitting-based preconditioning method for the FVM-based discretized linear system of the conservative SFDE (1). Preconditioners arising from matrix-splitting iteration methods have been very popular for a variety of non-conservative SFDEs [18,19,20,21,22,23,24,25]. They share the advantages of economical implementation and well-founded theoretical analysis. Also, they can be effective in challenging cases of diffusion coefficients, such as those with weak smoothness. However, these matrix-splitting ideas are not applicable to the discretized coefficient matrix of interest here, due to its more sophisticated structure and entries.
In order to inherit the advantages, but overcome the limitations, of preconditioning based on matrix-splitting iteration methods for non-conservative SFDEs, we propose a two-step matrix-splitting iteration method based on the scaled tridiagonal and Toeplitz-like splitting (STTS) technique. We establish the asymptotic convergence properties of the STTS iteration method. Further, founded upon the induced STTS preconditioner, a fast STTS (FSTTS) preconditioner is obtained by approximating the involved Toeplitz matrix with a certain circulant matrix, so as to accelerate the convergence of Krylov Subspace iteration methods. We theoretically demonstrate that the FSTTS preconditioned matrix differs from the STTS preconditioned matrix only by a low-rank matrix plus a small-norm matrix, which suggests that the FSTTS preconditioner inherits the preconditioning property of the STTS preconditioner well. Numerical results demonstrate that the FSTTS preconditioner is very effective.
The novel features of this work are summarized as follows. To the best of our knowledge, we are the first to develop a preconditioner based on a matrix-splitting iteration method that is tailored for the conservative SFDE (1). We provide sufficient new theoretical and numerical results to support the effectiveness of the FSTTS preconditioner. Compared with the CAI preconditioner [1], already proposed for solving (1), our method overcomes its theoretical limitations regarding the smoothness requirement on the diffusion coefficient, as well as the restriction between the fractional order and the weight parameter. The numerical results also highlight that the proposed FSTTS preconditioner is evidently more efficient than the CAI preconditioner.
The rest of the paper is organized as follows. In Section 2, we present the FVM-based discrete linear systems shown in [1]. In Section 3, we propose the STTS iteration method and establish its convergence properties. In Section 4, we develop the FSTTS preconditioner, and provide corresponding theoretical analysis. In Section 5, we carry out numerical experiments to illustrate the actual performance of the FSTTS preconditioner. Finally, in Section 6, the conclusions are given.

2. The FVM-Based Discrete Linear System

Let Δ t = T / M t be the size of the time step, where M t is a positive integer. We define a temporal partition t m = m Δ t for m = 0 , 1 , , M t . Let h = ( b a ) / ( N + 1 ) be the size of the spatial grid, where N is a positive integer. We define a spatial partition x i = a + i h for i = 0 , 1 , , N + 1 .
Using a standard first-order difference quotient to discretize the first-order time derivative in (1), and based on the Crank–Nicolson scheme, we obtain
$$\frac{u(x,t_m) - u(x,t_{m-1})}{\Delta t} - \frac{1}{2}\,\frac{\partial}{\partial x}\left[ d(x,t_m)\left( \omega\,\frac{\partial^{1-\beta}}{\partial x^{1-\beta}} - (1-\omega)\,\frac{\partial^{1-\beta}}{\partial(-x)^{1-\beta}} \right) u(x,t_m) \right] = \frac{1}{2}\,\frac{\partial}{\partial x}\left[ d(x,t_{m-1})\left( \omega\,\frac{\partial^{1-\beta}}{\partial x^{1-\beta}} - (1-\omega)\,\frac{\partial^{1-\beta}}{\partial(-x)^{1-\beta}} \right) u(x,t_{m-1}) \right] + f\!\left(x, t_{m-1/2}\right), \tag{2}$$
where $t_{m-1/2} = t_{m-1} + \frac{\Delta t}{2}$.
For $1 \le i \le N$, by integrating both sides of (2) over $[x_{i-1/2},\, x_{i+1/2}]$, where $x_{i-1/2} = (x_{i-1} + x_i)/2$, we obtain
$$\frac{1}{\Delta t}\int_{x_{i-1/2}}^{x_{i+1/2}} u(x,t_m)\, dx - \frac{1}{2}\left[ d(x,t_m)\left( \omega\,\frac{\partial^{1-\beta}}{\partial x^{1-\beta}} - (1-\omega)\,\frac{\partial^{1-\beta}}{\partial(-x)^{1-\beta}} \right) u(x,t_m) \right]_{x_{i-1/2}}^{x_{i+1/2}} = \frac{1}{2}\left[ d(x,t_{m-1})\left( \omega\,\frac{\partial^{1-\beta}}{\partial x^{1-\beta}} - (1-\omega)\,\frac{\partial^{1-\beta}}{\partial(-x)^{1-\beta}} \right) u(x,t_{m-1}) \right]_{x_{i-1/2}}^{x_{i+1/2}} + \frac{1}{\Delta t}\int_{x_{i-1/2}}^{x_{i+1/2}} u(x,t_{m-1})\, dx + \int_{x_{i-1/2}}^{x_{i+1/2}} f\!\left(x, t_{m-1/2}\right) dx.$$
Let $S_h(a,b)$ be the space of continuous piecewise-linear functions with respect to the spatial partition that vanish at the boundaries $x = a$ and $x = b$, and let $\phi_j(x)$, $j = 1, 2, \ldots, N$, be the nodal basis functions. The approximate solution $u_h(x,t_m) \in S_h(a,b)$ is expressed as
$$u_h(x,t_m) = \sum_{j=1}^{N} u_j^m\, \phi_j(x).$$
Then, the corresponding finite volume scheme can be formulated as the linear systems
$$A^m u^m := \left(\eta M + B^m\right) u^m = \left(\eta M - B^{m-1}\right) u^{m-1} + f^m =: b, \quad m = 1, 2, \ldots, M_t. \tag{3}$$
In Equation (3), $u^m = \left[u_1^m, u_2^m, \ldots, u_N^m\right]^T$ and $f^m = \left[f_1^m, f_2^m, \ldots, f_N^m\right]^T$ with
$$f_i^m = \frac{2}{\tau}\int_{x_{i-1/2}}^{x_{i+1/2}} f\!\left(x, t_{m-1/2}\right) dx,$$
$\eta = \frac{2h}{\tau \Delta t}$ with $\tau = \frac{h^{1-\beta}}{\Gamma(\beta+1)}$, $M = \frac{1}{8}\operatorname{tridiag}(1, 6, 1)$, and $B^m$ is a Toeplitz-like matrix of the form
$$B^m = \omega\left( D_l^m T_l - D_r^m T_r \right) + (1-\omega)\left( D_r^m T_l^T - D_l^m T_r^T \right),$$
where D l m , D r m are two diagonal matrices given by
$$D_l^m = \operatorname{diag}\left\{ d\!\left(x_{i-1/2}, t_m\right) \right\}_{i=1}^{N}, \qquad D_r^m = \operatorname{diag}\left\{ d\!\left(x_{i+1/2}, t_m\right) \right\}_{i=1}^{N},$$
and T l , T r are two Toeplitz matrices given by
$$T_l = \begin{pmatrix} q_0 & & & \\ q_1 & q_0 & & \\ \vdots & \ddots & \ddots & \\ q_{N-1} & \cdots & q_1 & q_0 \end{pmatrix}, \qquad T_r = \begin{pmatrix} q_1 & q_0 & & \\ q_2 & \ddots & \ddots & \\ \vdots & \ddots & \ddots & q_0 \\ q_N & \cdots & q_2 & q_1 \end{pmatrix},$$
with
$$q_k = \begin{cases} \left(\frac{1}{2}\right)^{\beta}, & k = 0, \\[4pt] \left(\frac{3}{2}\right)^{\beta} - \left(\frac{1}{2}\right)^{\beta-1}, & k = 1, \\[4pt] \left(k+\frac{1}{2}\right)^{\beta} - 2\left(k-\frac{1}{2}\right)^{\beta} + \left(k-\frac{3}{2}\right)^{\beta}, & k = 2, \ldots, N. \end{cases} \tag{5}$$
Next, we review some basic properties of the Toeplitz matrices T l and T r , which will be used in the following sections.
The coefficients q k in T l and T r have the following properties [1,27].
Proposition 1. 
Let $q_k$ be defined by (5), with $0 < \beta < 1$. It follows that
(1) 
$q_2 < q_3 < \cdots < q_k < \cdots < 0$;
(2) 
$q_0 - q_1 \ge 0$ and $q_2 - q_3 < q_3 - q_4 < \cdots < q_k - q_{k+1} < \cdots < 0$;
(3) 
$\lim_{k \to \infty} |q_k| = 0$ and $\lim_{k \to \infty} |q_k - q_{k+1}| = 0$;
(4) 
There exists a constant $c_q > 0$ such that $|q_k| \le c_q\, k^{\beta - 2}$ for $k = 2, 3, \ldots$.
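Assuming the expression for $q_k$ in (5), the statements of Proposition 1 are easy to verify numerically; a small NumPy sketch (illustrative, not from the paper):

```python
import numpy as np

def q_coeffs(beta, N):
    # q_0, ..., q_N as defined in (5).
    q = np.empty(N + 1)
    q[0] = 0.5 ** beta
    q[1] = 1.5 ** beta - 0.5 ** (beta - 1)
    k = np.arange(2, N + 1)
    q[2:] = (k + 0.5) ** beta - 2.0 * (k - 0.5) ** beta + (k - 1.5) ** beta
    return q

for beta in (0.2, 0.5, 0.8):
    q = q_coeffs(beta, 1000)
    assert q[0] - q[1] >= 0                    # property (2), first part
    assert np.all(q[2:] < 0)                   # property (1): q_k < 0 for k >= 2
    assert np.all(np.diff(q[2:]) > 0)          # property (1): q_2 < q_3 < ...
    dq = q[2:-1] - q[3:]                       # q_k - q_{k+1}, k >= 2
    assert np.all(dq < 0) and np.all(np.diff(dq) > 0)   # property (2), second part
    assert abs(q[-1]) < 1e-3                   # property (3): q_k -> 0
```

The decay check at the end is consistent with property (4), since $|q_k| \le c_q k^{\beta-2}$ vanishes as $k \to \infty$.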
Let $T := T_l - T_r$, and define the Toeplitz matrix
$$T_\omega := \omega T + (1-\omega) T^T.$$
The first column and the first row of $T_\omega$ are given by $b_{\mathrm{col}} = [b_0, b_1, \ldots, b_{N-1}]^T$ and $b_{\mathrm{row}} = [b_0, b_{-1}, \ldots, b_{-(N-1)}]^T$, respectively, where
$$b_0 = q_0 - q_1, \qquad b_1 = \omega(q_1 - q_2) - (1-\omega)\, q_0, \qquad b_{-1} = (1-\omega)(q_1 - q_2) - \omega\, q_0,$$
$$b_k = \omega(q_k - q_{k+1}), \qquad b_{-k} = (1-\omega)(q_k - q_{k+1}), \qquad k = 2, 3, \ldots, N-1.$$
Then, we have the following property [27].
Proposition 2. 
Let $0 < \beta < 1$ and $0 \le \omega \le 1$. Denote by $\beta^*$ the unique root of
$$h(\beta) = 3^{\beta+1} - 5^{\beta} - 3, \quad \beta \in (0,1), \tag{7}$$
which is approximately $0.51888$. It follows that $T_\omega$ is strictly diagonally dominant if and only if one of the following statements holds.
(1) 
0 < β β * ;
(2) 
$\beta^* < \beta < 1$ and $1 - \omega^* \le \omega \le \omega^*$, where
$$\omega^* = \frac{q_0}{q_0 + q_1 - q_2}.$$

3. The STTS Iteration Method

In this section, we construct the STTS method to approximately solve the discrete linear system (3) and establish its asymptotic convergence theory.
For convenience, we omit the superscript “m” for the mth time step, and denote the coefficient matrix A m in (3) by
$$A = \eta M + \left[\omega\left(D_l T_l - D_r T_r\right) + (1-\omega)\left(D_r T_l^T - D_l T_r^T\right)\right]. \tag{8}$$
First, as in [1], we approximate the coefficient matrix A by
$$\tilde A := \eta M + D\left[\omega\left(T_l - T_r\right) + (1-\omega)\left(T_l^T - T_r^T\right)\right] = \eta M + D\, T_\omega, \tag{9}$$
where $D = \operatorname{diag}(d_1, d_2, \ldots, d_N)$ with $d_i = d(x_i)$. Here, $d(x)$ is used to denote $d(x, t_m)$.
Note that
$$\tilde A - A = (D - D_l)\left(\omega T_l - (1-\omega)\, T_r^T\right) - (D - D_r)\left(\omega T_r - (1-\omega)\, T_l^T\right).$$
Suppose that $d(x)$ is continuous on $[a,b]$; then, given any $\epsilon > 0$, we have
$$\left| d(x_i) - d\!\left(x_{i \pm 1/2}\right) \right| < \epsilon,$$
for sufficiently small spatial grid-size h, which leads to
$$\|D - D_l\|_2 < \epsilon, \qquad \|D - D_r\|_2 < \epsilon.$$
Meanwhile, according to [1], we can show that
$$\|T_l\|_2 < c_\beta, \qquad \|T_r\|_2 < c_\beta,$$
where $c_\beta = \left(\frac{1}{2}\right)^{\beta}\left(3^{\beta} - 2\right) + \left(\frac{3}{2}\right)^{\beta}$. Therefore,
$$\|\tilde A - A\|_2 \le \left(\|D - D_l\|_2 + \|D - D_r\|_2\right)\left(\omega \|T_l\|_2 + (1-\omega)\|T_r^T\|_2\right) < 2 c_\beta\, \epsilon, \tag{11}$$
which indicates that $\tilde A$ will be close to $A$ as $h$ becomes sufficiently small or, equivalently, as the matrix size $N$ becomes sufficiently large.
Next, we construct the following two-step matrix-splitting iteration method, called the STTS iteration method, for solving the linear system $\tilde A x = b$, so as to obtain an approximate solution of the linear system $A x = b$.
The STTS iteration method: given an initial guess $u^{(0)} \in \mathbb{C}^N$, for $k = 0, 1, 2, \ldots$, until the iteration sequence $\{u^{(k)}\} \subset \mathbb{C}^N$ converges, compute the next iterate $u^{(k+1)} \in \mathbb{C}^N$ according to the following procedure:
$$\begin{cases} (\alpha D + \eta M)\, u^{(k+\frac{1}{2})} = \left(\alpha D - D T_\omega\right) u^{(k)} + b, \\[4pt] \left(\alpha D + D T_\omega\right) u^{(k+1)} = (\alpha D - \eta M)\, u^{(k+\frac{1}{2})} + b, \end{cases} \tag{12}$$
where α is a prescribed positive constant.
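To make the two half-steps concrete, the sketch below runs the STTS iteration (12) on a small synthetic instance of $\tilde A = \eta M + D T_\omega$ and checks that the iterates converge to the solution of $\tilde A u = b$. All parameter values ($\beta$, $\omega$, $\eta$, $\alpha$, and the diffusion diagonal) are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

beta, omega, N, eta, alpha = 0.5, 0.5, 16, 1.0, 1.0

# Coefficients q_0, ..., q_N from (5) and the Toeplitz matrix T_omega.
q = np.empty(N + 1)
q[0] = 0.5 ** beta
q[1] = 1.5 ** beta - 0.5 ** (beta - 1)
k = np.arange(2, N + 1)
q[2:] = (k + 0.5) ** beta - 2 * (k - 0.5) ** beta + (k - 1.5) ** beta
dq = q[:-1] - q[1:]                          # q_k - q_{k+1}
b_col = np.concatenate([[dq[0]], [omega * dq[1] - (1 - omega) * q[0]],
                        omega * dq[2:N]])
b_row = np.concatenate([[dq[0]], [(1 - omega) * dq[1] - omega * q[0]],
                        (1 - omega) * dq[2:N]])
T_omega = np.array([[b_col[i - j] if i >= j else b_row[j - i]
                     for j in range(N)] for i in range(N)])

M = (np.diag(6.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / 8.0
D = np.diag(1.0 + np.linspace(0.0, 1.0, N))  # a positive "diffusion" diagonal
A_tilde = eta * M + D @ T_omega
b = np.ones(N)

# The two half-steps of the STTS iteration (12).
u = np.zeros(N)
for _ in range(2000):
    u_half = np.linalg.solve(alpha * D + eta * M,
                             (alpha * D - D @ T_omega) @ u + b)
    u = np.linalg.solve(alpha * D + D @ T_omega,
                        (alpha * D - eta * M) @ u_half + b)
    if np.linalg.norm(A_tilde @ u - b) <= 1e-10 * np.linalg.norm(b):
        break

assert np.linalg.norm(A_tilde @ u - b) <= 1e-8 * np.linalg.norm(b)
```

In a practical large-scale implementation the dense solves above would of course be replaced by the tridiagonal and Toeplitz techniques discussed later; the dense version is only meant to exhibit the structure of the two half-steps.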
By straightforward computation, the two half-steps in (12) can be combined into the standard stationary iteration
$$M(\alpha)\, u^{(k+1)} = N(\alpha)\, u^{(k)} + b,$$
where
$$M(\alpha) = \frac{1}{2\alpha}(\alpha D + \eta M)(\alpha I + T_\omega), \qquad N(\alpha) = \frac{1}{2\alpha}(\alpha D + \eta M)\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega).$$
$\tilde A = M(\alpha) - N(\alpha)$ forms a matrix splitting of the matrix $\tilde A$, which is called the STTS splitting. The iteration matrix of the STTS iteration method is given by
$$L(\alpha) := M(\alpha)^{-1} N(\alpha) = (\alpha I + T_\omega)^{-1}\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega).$$
Below, we establish the convergence property of the STTS iteration method. The following theorem demonstrates that, as the matrix size $N$ becomes sufficiently large (indicating that the spatial grid is refined enough), the spectral radius $\rho(L(\alpha))$ of the STTS iteration matrix is less than one for any positive $\alpha$, provided that the diffusion coefficient $d(x)$ is continuous on $[a,b]$, which proves the convergence of the STTS iteration method.
Theorem 1. 
Suppose that the diffusion coefficient $d(x)$ is continuous on $[a,b]$ with $0 < d_{\min} \le d(x) \le d_{\max}$. Then, there exists an integer $k_0 > 0$ such that, for $N > k_0$, $\rho(L(\alpha)) < 1$.
Proof. 
It follows that the iteration matrix L ( α ) of the STTS iteration method is similar to the matrix
$$\tilde L(\alpha) := \left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1},$$
and it holds that
$$\rho(L(\alpha)) = \rho(\tilde L(\alpha)) \le \left\|\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_2 \le \left\|\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_2 \left\|(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_2.$$
As $T_\omega$ is positive definite [27], it follows from [29] (Lemma 2.1) that
$$\left\|(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_2 < 1.$$
For the matrix $D^{-1} M$, observe that
$$\tilde M := D^{-1} M + \left(D^{-1} M\right)^T = \frac{1}{8}\begin{pmatrix} \frac{12}{d_1} & \frac{1}{d_1}+\frac{1}{d_2} & & & \\ \frac{1}{d_1}+\frac{1}{d_2} & \frac{12}{d_2} & \frac{1}{d_2}+\frac{1}{d_3} & & \\ & \ddots & \ddots & \ddots & \\ & & \frac{1}{d_{N-2}}+\frac{1}{d_{N-1}} & \frac{12}{d_{N-1}} & \frac{1}{d_{N-1}}+\frac{1}{d_N} \\ & & & \frac{1}{d_{N-1}}+\frac{1}{d_N} & \frac{12}{d_N} \end{pmatrix}.$$
For $2 \le i \le N-1$, it holds that
$$\frac{12}{d_i} - \left(\frac{1}{d_{i-1}} + \frac{1}{d_i}\right) - \left(\frac{1}{d_i} + \frac{1}{d_{i+1}}\right) = \frac{10}{d_i} - \frac{1}{d_{i-1}} - \frac{1}{d_{i+1}} = \frac{5 d_{i-1} - d_i}{d_i\, d_{i-1}} + \frac{5 d_{i+1} - d_i}{d_i\, d_{i+1}}.$$
Also, it holds that
$$\frac{12}{d_1} - \left(\frac{1}{d_1} + \frac{1}{d_2}\right) = \frac{11 d_2 - d_1}{d_1 d_2}, \qquad \frac{12}{d_N} - \left(\frac{1}{d_{N-1}} + \frac{1}{d_N}\right) = \frac{11 d_{N-1} - d_N}{d_N\, d_{N-1}}.$$
Now, let $\epsilon = d_{\min}$. As $d(x)$ is continuous on $[a,b]$, there exists $\delta_h > 0$ such that $|d(x) - d(y)| < \epsilon$ for any $x, y \in [a,b]$ with $|x - y| < \delta_h$. Hence, there exists an integer $k_0 > 0$ such that, for $N > k_0$, it holds that
$$\frac{5 d_{i-1} - d_i}{d_i\, d_{i-1}} + \frac{5 d_{i+1} - d_i}{d_i\, d_{i+1}} > \frac{4 d_{i-1} - \epsilon}{d_i\, d_{i-1}} + \frac{4 d_{i+1} - \epsilon}{d_i\, d_{i+1}} \ge \frac{4 d_{\min} - \epsilon}{d_i\, d_{i-1}} + \frac{4 d_{\min} - \epsilon}{d_i\, d_{i+1}} > 0,$$
$$\frac{11 d_2 - d_1}{d_1 d_2} > \frac{10 d_2 - \epsilon}{d_1 d_2} \ge \frac{10 d_{\min} - \epsilon}{d_1 d_2} > 0, \qquad \frac{11 d_{N-1} - d_N}{d_N\, d_{N-1}} > \frac{10 d_{N-1} - \epsilon}{d_N\, d_{N-1}} \ge \frac{10 d_{\min} - \epsilon}{d_N\, d_{N-1}} > 0,$$
from which we know that $\tilde M$ is strictly diagonally dominant. Also note that $\tilde M$ is a symmetric matrix with positive diagonal entries. Hence, we learn from the Gershgorin disc theorem that the eigenvalues of $\tilde M$ are all real and positive. Therefore, $\tilde M$ is symmetric positive definite, and $D^{-1} M$ is positive definite. Further applying [29] (Lemma 2.1), we obtain
$$\left\|\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_2 < 1.$$
Finally, combining (16), (17), and (18), we conclude that $\rho(L(\alpha)) < 1$. □
In addition, when T ω is strictly diagonally dominant, which is indicated by Proposition 2, we can derive the following upper bounds of the asymptotic convergence rate.
Theorem 2. 
Suppose that T ω is strictly diagonally dominant. Denote
$$\alpha_1 = \frac{3 - 3^{\beta}}{2^{\beta}}, \qquad \alpha_2 = \frac{3\eta}{4 d_{\min}}.$$
Then, the following statements follow.
(a) 
For any positive continuous diffusion coefficient d ( x ) on [ a , b ] , there exists an integer k 0 > 0 , such that for N > k 0 and α > α 1 , it holds that
$$\rho(L(\alpha)) < \frac{\alpha - |q_N|}{\alpha + |q_N|} < 1.$$
(b) 
For any positive bounded diffusion coefficient d ( x ) on [ a , b ] , with 0 < d min d ( x ) d max , it holds that
$$\rho(L(\alpha)) < \frac{2 d_{\max}\alpha - \eta}{2 d_{\max}\alpha + \eta}\cdot\frac{\alpha - |q_N|}{\alpha + |q_N|} < 1,$$
for α > max { α 1 , α 2 } .
Proof. 
We first prove (a).
From the proof of Theorem 1, we know that $\rho(L(\alpha)) = \rho(\tilde L(\alpha))$, and there exists an integer $k_0 > 0$ such that, for $N > k_0$,
$$\rho(\tilde L(\alpha)) \le \left\|\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_2 \left\|(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_2 < \left\|(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_2 \le \left\|\alpha I - T_\omega\right\|_1^{1/2} \left\|\alpha I - T_\omega\right\|_\infty^{1/2} \left\|(\alpha I + T_\omega)^{-1}\right\|_1^{1/2} \left\|(\alpha I + T_\omega)^{-1}\right\|_\infty^{1/2}.$$
As T ω is strictly diagonally dominant, it follows from the proof of [27] (Lemma 3.5, Lemma 3.6) that
$$\sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) = q_0 - q_1 + q_N,$$
which leads to
$$b_0 - \sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) = -q_N = |q_N|.$$
Then, for $\alpha > b_0 = \alpha_1$, it holds that
$$\|\alpha I - T_\omega\|_\infty = \max_{1\le i\le N}\left( |\alpha - b_0| + \sum_{j=1}^{i-1}|b_j| + \sum_{j=1}^{N-i}|b_{-j}| \right) \le (\alpha - b_0) + \sum_{j=1}^{N-1}\left(|b_j| + |b_{-j}|\right) = \alpha + q_N = \alpha - |q_N|.$$
Analogously, we can show that, for $\alpha > \alpha_1$,
$$\|\alpha I - T_\omega\|_1 \le \alpha - |q_N|.$$
Moreover, as $T_\omega$ is strictly diagonally dominant, it holds that
$$\left\|(\alpha I + T_\omega)^{-1}\right\|_\infty \le \frac{1}{\min\limits_{1\le i\le N}\left( \alpha + b_0 - \sum_{j=1}^{i-1}|b_j| - \sum_{j=1}^{N-i}|b_{-j}| \right)} \le \frac{1}{\alpha + b_0 - \sum_{j=1}^{N-1}\left(|b_j| + |b_{-j}|\right)} = \frac{1}{\alpha + |q_N|}.$$
Analogously, we can also show that
$$\left\|(\alpha I + T_\omega)^{-1}\right\|_1 \le \frac{1}{\alpha + |q_N|}.$$
Now, combining (21), (22), (23), (24), and (25), we determine that
$$\rho(L(\alpha)) = \rho(\tilde L(\alpha)) < \frac{\alpha - |q_N|}{\alpha + |q_N|} < 1.$$
In the following, we prove (b).
Note that
$$\rho(\tilde L(\alpha)) \le \left\|\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_\infty \left\|(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}\right\|_\infty \le \left\|\alpha I - \eta D^{-1} M\right\|_\infty \left\|\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_\infty \left\|\alpha I - T_\omega\right\|_\infty \left\|(\alpha I + T_\omega)^{-1}\right\|_\infty.$$
Now, on the one hand, for $\alpha > \frac{3\eta}{4 d_{\min}} = \alpha_2$, it holds that
$$\left\|\alpha I - \eta D^{-1} M\right\|_\infty = \max_{1\le i\le N}\left( \left|\alpha - \eta\, m_{ii}/d_i\right| + \sum_{j \ne i} \eta\, |m_{ij}|/d_i \right) = \max_{1\le i\le N}\left( \alpha - \eta\Big(m_{ii} - \sum_{j \ne i} |m_{ij}|\Big)\Big/d_i \right) \le \alpha - \frac{\eta}{2 d_{\max}}.$$
On the other hand, as $\alpha I + \eta D^{-1} M$ is strictly diagonally dominant, it holds that
$$\left\|\left(\alpha I + \eta D^{-1} M\right)^{-1}\right\|_\infty \le \frac{1}{\min\limits_{1\le i\le N}\left( \left|\alpha + \eta\, m_{ii}/d_i\right| - \sum_{j\ne i} \eta\, |m_{ij}|/d_i \right)} = \frac{1}{\min\limits_{1\le i\le N}\left( \alpha + \eta\Big(m_{ii} - \sum_{j\ne i}|m_{ij}|\Big)\Big/d_i \right)} \le \frac{1}{\alpha + \frac{\eta}{2 d_{\max}}} = \frac{2 d_{\max}}{2 d_{\max}\alpha + \eta}.$$
Hence, combining (23), (25), (26), (27), and (28), we conclude that
$$\rho(L(\alpha)) = \rho(\tilde L(\alpha)) < \frac{2 d_{\max}\alpha - \eta}{2 d_{\max}\alpha + \eta}\cdot\frac{\alpha - |q_N|}{\alpha + |q_N|} < 1,$$
for α > max { α 1 , α 2 } . □

4. The FSTTS Preconditioning

While the STTS iteration method targets an approximate solution of the linear system (3) by solving $\tilde A x = b$, where $\tilde A$ given by (9) is an approximation of the original coefficient matrix $A$ given by (8), we can apply the STTS preconditioner
$$M(\alpha) = \frac{1}{2\alpha}(\alpha D + \eta M)(\alpha I + T_\omega),$$
induced from the STTS iteration method to accelerate the convergence of Krylov Subspace iteration methods for solving the original linear systems (3).
When implementing the preconditioner M ( α ) , we need to solve the two linear systems
$$(\alpha D + \eta M)\, y = b, \qquad (\alpha I + T_\omega)\, x = y.$$
Note that α D + η M is a tridiagonal matrix; therefore, solving the first linear system requires O ( N ) storage and O ( N ) computational cost, which is cheap.
The storage and computational cost of solving the second linear system can be expensive. To reduce the cost, we consider approximating the Toeplitz matrix $T_\omega$ by a circulant matrix $C_\omega$, such as the Strang circulant approximation [30] of $T_\omega$. Then, we instead solve $(\alpha I + C_\omega)\, x = y$, which only requires $O(N)$ storage cost and $O(N \log N)$ computational cost. By doing so, a circulant-based variant $\tilde M(\alpha)$ of the STTS preconditioner is obtained as
$$\tilde M(\alpha) = \frac{1}{2\alpha}(\alpha D + \eta M)(\alpha I + C_\omega),$$
and we call it the fast STTS (FSTTS) preconditioner.
Now, the computational complexity of implementing the FSTTS preconditioner is $O(N + N \log N)$. Note also that the Toeplitz-like coefficient matrix of (3) can be multiplied with any vector in $O(N \log N)$ computational complexity. Hence, when the FSTTS preconditioner is applied to Krylov Subspace iteration methods for solving each discrete linear system (3), the computational complexity per iteration is $O(N + N \log N)$. Additionally, the storage cost is $O(N)$.
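The two solves behind the FSTTS preconditioner can be sketched as follows: a Thomas (tridiagonal) solve for $(\alpha D + \eta M)\, y = r$ and an FFT-based solve for $(\alpha I + C_\omega)\, x = y$. The code below is an illustrative NumPy implementation; the test data at the end are hypothetical, not from the paper:

```python
import numpy as np

def tridiag_solve(sub, diag, sup, b):
    # Thomas algorithm: O(N) solve of a tridiagonal system.
    n = len(b)
    c, d = np.empty(n - 1), np.empty(n)
    c[0] = sup[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        w = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / w
        d[i] = (b[i] - sub[i - 1] * d[i - 1]) / w
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def circulant_solve(c, b):
    # O(N log N) solve of C x = b, where C is circulant with first column c.
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

def apply_fstts(alpha, eta, d, c_omega, r):
    # z = M~(alpha)^{-1} r with M~(alpha) = (1/(2 alpha)) (alpha D + eta M)(alpha I + C_omega).
    n = len(r)
    off = (eta / 8.0) * np.ones(n - 1)     # off-diagonals of eta*M
    diag = alpha * d + 0.75 * eta          # alpha*d_i + eta*6/8
    y = tridiag_solve(off, diag, off, r)
    c = c_omega.copy()
    c[0] += alpha
    return 2.0 * alpha * circulant_solve(c, y)

# Check against a dense solve on a small hypothetical example.
rng = np.random.default_rng(1)
n, alpha, eta = 8, 1.5, 0.8
d = 1.0 + rng.random(n)
c_omega = 0.1 * rng.standard_normal(n)
c_omega[0] = 2.0                            # keep alpha*I + C_omega nonsingular
r = rng.standard_normal(n)
M = (np.diag(6.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / 8.0
C = np.array([[c_omega[(i - j) % n] for j in range(n)] for i in range(n)])
M_tilde = (np.diag(alpha * d) + eta * M) @ (alpha * np.eye(n) + C) / (2.0 * alpha)
assert np.allclose(apply_fstts(alpha, eta, d, c_omega, r),
                   np.linalg.solve(M_tilde, r))
```

The circulant solve works because a circulant matrix is diagonalized by the discrete Fourier transform, with eigenvalues given by the FFT of its first column.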
In the following, we analyze the preconditioned matrix $\tilde M(\alpha)^{-1} A$.
To this end, we first give some estimates required in the analysis.
Lemma 1. 
It holds that
$$\left\|(\alpha D + \eta M)^{-1}\right\|_2 \le \frac{2}{\eta + 2\alpha d_{\min}}.$$
Proof. 
Obviously, $\alpha D + \eta M$ is strictly diagonally dominant, and it holds that
$$\left\|(\alpha D + \eta M)^{-1}\right\|_2 \le \left\|(\alpha D + \eta M)^{-1}\right\|_1^{1/2} \left\|(\alpha D + \eta M)^{-1}\right\|_\infty^{1/2} \le \frac{1}{\min\limits_{1\le i\le N}\left(\alpha d_i + \eta/2\right)} = \frac{2}{\eta + 2\alpha d_{\min}}. \qquad \square$$
Lemma 2. 
It holds that
$$\left\|(\alpha I + C_\omega)^{-1}\right\|_2 < \frac{1}{\alpha - 2^{1-\beta}}, \tag{30}$$
for $\alpha > 2^{1-\beta}$. In particular, when $T_\omega$ is strictly diagonally dominant, it holds that
$$\left\|(\alpha I + C_\omega)^{-1}\right\|_2 \le \frac{1}{\alpha}, \tag{31}$$
for any $\alpha > 0$.
Proof. 
Without loss of generality, we assume N is odd.
Since $C_\omega$ is the Strang circulant approximation of $T_\omega$, the first column of $C_\omega$ is given by
$$\left[ c_\omega(0), c_\omega(1), \ldots, c_\omega(N-1) \right]^T,$$
where
$$c_\omega(k) = \begin{cases} q_0 - q_1, & k = 0, \\ \omega(q_1 - q_2) - (1-\omega)\, q_0, & k = 1, \\ \omega(q_k - q_{k+1}), & 2 \le k \le \frac{N-1}{2}, \\ (1-\omega)(q_{N-k} - q_{N-k+1}), & \frac{N+1}{2} \le k \le N-2, \\ (1-\omega)(q_1 - q_2) - \omega\, q_0, & k = N-1. \end{cases}$$
Note that
$$\sum_{j=2}^{N-2} |c_\omega(j)| = -\sum_{j=2}^{N-2} c_\omega(j) = -q_2 + q_{\frac{N+1}{2}} \le -q_2. \tag{32}$$
If $c_\omega(1)\, c_\omega(N-1) \ge 0$, we have
$$|c_\omega(1)| + |c_\omega(N-1)| = |c_\omega(1) + c_\omega(N-1)| = q_0 + q_2 - q_1,$$
and, together with (32), we obtain
$$\sum_{j=1}^{N-1} |c_\omega(j)| \le q_0 - q_1, \tag{33}$$
which implies that α I + C ω is strictly diagonally dominant. Therefore, it holds that
$$\left\|(\alpha I + C_\omega)^{-1}\right\|_2 \le \left\|(\alpha I + C_\omega)^{-1}\right\|_1^{1/2} \left\|(\alpha I + C_\omega)^{-1}\right\|_\infty^{1/2} \le \frac{1}{\alpha + c_\omega(0) - \sum_{j=1}^{N-1}|c_\omega(j)|} \le \frac{1}{\alpha + (q_0 - q_1) - (q_0 - q_1)} = \frac{1}{\alpha}. \tag{34}$$
If $c_\omega(1)\, c_\omega(N-1) < 0$, as
$$q_0 + q_1 - q_2 = \left(\frac{1}{2}\right)^{\beta}\left(3^{\beta+1} - 5^{\beta} - 2\right) > 0,$$
we have
$$|c_\omega(1)| + |c_\omega(N-1)| = |c_\omega(1) - c_\omega(N-1)| \le |2\omega - 1|\,(q_0 + q_1 - q_2).$$
Hence, it follows from (32) that
$$c_\omega(0) - \sum_{j=1}^{N-1}|c_\omega(j)| \ge q_0 - q_1 - |2\omega - 1|\,(q_0 + q_1 - q_2) + q_2 \ge q_0 - q_1 - (q_0 + q_1 - q_2) + q_2 = -2(q_1 - q_2) = -2^{1-\beta}\, h(\beta),$$
where h ( β ) is given by (7). As h ( β ) < 1 , we know that α I + C ω is strictly diagonally dominant for α > 2 1 β , in which case, it holds that
$$\left\|(\alpha I + C_\omega)^{-1}\right\|_2 \le \left\|(\alpha I + C_\omega)^{-1}\right\|_1^{1/2} \left\|(\alpha I + C_\omega)^{-1}\right\|_\infty^{1/2} \le \frac{1}{\alpha - 2^{1-\beta}\, h(\beta)} < \frac{1}{\alpha - 2^{1-\beta}}. \tag{35}$$
Now, combining (34) and (35), (30) is proven.
In particular, when T ω is strictly diagonally dominant, it is easy to see that C ω is also strictly diagonally dominant. Hence,
$$\sum_{j=1}^{N-1} |c_\omega(j)| \le q_0 - q_1,$$
which recovers (33), and we can conclude that, for any $\alpha > 0$,
$$\left\|(\alpha I + C_\omega)^{-1}\right\|_2 \le \frac{1}{\alpha},$$
which proves (31). □
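The Strang circulant column used in this proof can be generated mechanically: keep the central diagonals of $T_\omega$ and wrap them around. The sketch below builds $c_\omega$ both ways, from Strang's rule applied to $b_{\mathrm{col}}$, $b_{\mathrm{row}}$ and from the closed-form expression above, and checks that they agree (illustrative code with arbitrarily chosen $\beta$ and $\omega$):

```python
import numpy as np

beta, omega, N = 0.6, 0.7, 9          # N odd, as assumed in the proof
q = np.empty(N + 1)
q[0] = 0.5 ** beta
q[1] = 1.5 ** beta - 0.5 ** (beta - 1)
k = np.arange(2, N + 1)
q[2:] = (k + 0.5) ** beta - 2 * (k - 0.5) ** beta + (k - 1.5) ** beta
dq = q[:-1] - q[1:]                   # q_k - q_{k+1}

b_col = np.concatenate([[dq[0]], [omega * dq[1] - (1 - omega) * q[0]],
                        omega * dq[2:N]])
b_row = np.concatenate([[dq[0]], [(1 - omega) * dq[1] - omega * q[0]],
                        (1 - omega) * dq[2:N]])

# Strang's rule: c_k = b_k for k <= (N-1)/2, and c_k = b_{k-N} otherwise.
m = (N - 1) // 2
c = np.empty(N)
c[:m + 1] = b_col[:m + 1]
for j in range(m + 1, N):
    c[j] = b_row[N - j]

# Closed-form expression for c_omega(k), as in the proof.
c_formula = np.empty(N)
c_formula[0] = dq[0]
c_formula[1] = omega * dq[1] - (1 - omega) * q[0]
for j in range(2, m + 1):
    c_formula[j] = omega * dq[j]
for j in range(m + 1, N - 1):
    c_formula[j] = (1 - omega) * dq[N - j]
c_formula[N - 1] = (1 - omega) * dq[1] - omega * q[0]

assert np.allclose(c, c_formula)
```

This makes explicit why only $O(N)$ storage is needed for $C_\omega$: the whole matrix is determined by this single column.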
By combination of Lemma 1 and Lemma 2, we directly obtain the following lemma.
Lemma 3. 
It holds that
$$\left\|\tilde M(\alpha)^{-1}\right\|_2 \le \frac{4\alpha}{\left(\alpha - 2^{1-\beta}\right)\left(\eta + 2\alpha d_{\min}\right)},$$
for $\alpha > 2^{1-\beta}$. In particular, when $T_\omega$ is strictly diagonally dominant, it holds that
$$\left\|\tilde M(\alpha)^{-1}\right\|_2 \le \frac{4}{\eta + 2\alpha d_{\min}},$$
for any $\alpha > 0$.
Lemma 4. 
It holds that
$$\left\|\alpha I + T_\omega\right\|_2 \le \alpha + 2^{1-\beta}\left(4 - 3^{\beta}\right).$$
Also, it holds that
$$\left\|(\alpha I + T_\omega)^{-1}\right\|_2 \le \frac{1}{\alpha - 2^{1-\beta}},$$
for $\alpha > 2^{1-\beta}$. Moreover, when $T_\omega$ is strictly diagonally dominant, it holds that
$$\left\|\alpha I + T_\omega\right\|_2 \le \alpha + 2^{1-\beta}\left(3 - 3^{\beta}\right), \qquad \left\|(\alpha I + T_\omega)^{-1}\right\|_2 \le \frac{1}{\alpha},$$
for any $\alpha > 0$.
Proof. 
We first consider the case that T ω is not strictly diagonally dominant.
From Proposition 2, we know that $T_\omega$ is not strictly diagonally dominant if and only if $\beta^* < \beta < 1$ and $\omega > \omega^*$, or $\beta^* < \beta < 1$ and $\omega < 1 - \omega^*$.
If $\beta^* < \beta < 1$ and $\omega > \omega^*$, it holds that
$$\sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) = \left[\omega(q_0 + q_1 - q_2) - q_0\right] + \left[\omega(q_0 + q_1 - q_2) - (q_1 - q_2)\right] - q_2 + q_N = 2\omega(q_0 + q_1 - q_2) - q_0 - q_1 + q_N \le 2(q_0 + q_1 - q_2) - (q_0 + q_1) = q_0 - q_1 + 2(q_1 - q_2).$$
If $\beta^* < \beta < 1$ and $\omega < 1 - \omega^*$, it holds that
$$\sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) = \left[q_0 - \omega(q_0 + q_1 - q_2)\right] + \left[(q_1 - q_2) - \omega(q_0 + q_1 - q_2)\right] - q_2 + q_N \le (1 - 2\omega)(q_0 + q_1 - q_2) - q_2 \le (q_0 + q_1 - q_2) - q_2 = q_0 - q_1 + 2(q_1 - q_2).$$
Therefore, we derive that
$$\left\|\alpha I + T_\omega\right\|_2 \le \left\|\alpha I + T_\omega\right\|_1^{1/2} \left\|\alpha I + T_\omega\right\|_\infty^{1/2} \le \alpha + b_0 + \sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) \le \alpha + 2(q_0 - q_2) \le \alpha + 2^{1-\beta}\left(4 - 3^{\beta}\right).$$
Also, for $\alpha > 2^{1-\beta}$, we derive that
$$\left\|(\alpha I + T_\omega)^{-1}\right\|_2 \le \left\|(\alpha I + T_\omega)^{-1}\right\|_1^{1/2} \left\|(\alpha I + T_\omega)^{-1}\right\|_\infty^{1/2} \le \frac{1}{\alpha + b_0 - \sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right)} \le \frac{1}{\alpha - 2(q_1 - q_2)} = \frac{1}{\alpha - 2^{1-\beta}\, h(\beta)} < \frac{1}{\alpha - 2^{1-\beta}}.$$
In the case where $T_\omega$ is strictly diagonally dominant, from the proof of Theorem 2, we know that
$$\left\|(\alpha I + T_\omega)^{-1}\right\|_2 \le \left\|(\alpha I + T_\omega)^{-1}\right\|_1^{1/2} \left\|(\alpha I + T_\omega)^{-1}\right\|_\infty^{1/2} \le \frac{1}{\alpha + |q_N|} < \frac{1}{\alpha}.$$
Also, it holds that
$$\left\|\alpha I + T_\omega\right\|_2 \le \left\|\alpha I + T_\omega\right\|_1^{1/2} \left\|\alpha I + T_\omega\right\|_\infty^{1/2} \le \alpha + b_0 + \sum_{k=1}^{N-1}\left(|b_k| + |b_{-k}|\right) \le \alpha + 2(q_0 - q_1) = \alpha + 2^{1-\beta}\left(3 - 3^{\beta}\right). \qquad \square$$
Next, we characterize the approximation of M ( α ) by M ˜ ( α ) .
Lemma 5. 
Given any $\epsilon > 0$, there exists a positive integer $k_0$ such that, for $N > 2k_0$ and $\alpha > 2^{1-\beta}$, it holds that
$$\tilde M(\alpha)^{-1} M(\alpha) = I + \tilde E(\alpha) + \tilde F(\alpha), \tag{39}$$
with
$$\operatorname{rank}(\tilde E(\alpha)) \le k_0, \qquad \left\|\tilde F(\alpha)\right\|_2 \le \frac{\epsilon}{\alpha - 2^{1-\beta}}.$$
Moreover, if $T_\omega$ is strictly diagonally dominant, (39) holds with
$$\operatorname{rank}(\tilde E(\alpha)) \le k_0, \qquad \left\|\tilde F(\alpha)\right\|_2 \le \frac{\epsilon}{\alpha},$$
for any $\alpha > 0$.
Proof. 
From [27] (Lemma 3.8), we know that, for any given ϵ > 0 , there exists a positive integer k 0 such that, for N > 2 k 0 , it holds that
$$T_\omega - C_\omega = E_1 + F_1,$$
where
$$\operatorname{rank}(E_1) \le k_0, \qquad \|F_1\|_1 = \|F_1\|_\infty < \epsilon.$$
Now, let
$$\tilde E(\alpha) = (\alpha I + C_\omega)^{-1} E_1, \qquad \tilde F(\alpha) = (\alpha I + C_\omega)^{-1} F_1.$$
Then, we have
$$\tilde M(\alpha)^{-1} M(\alpha) - I = (\alpha I + C_\omega)^{-1}(\alpha I + T_\omega) - I = (\alpha I + C_\omega)^{-1}(T_\omega - C_\omega) = \tilde E(\alpha) + \tilde F(\alpha).$$
Obviously, we have $\operatorname{rank}(\tilde E(\alpha)) \le k_0$, and it follows from Lemma 2 that
$$\left\|\tilde F(\alpha)\right\|_2 \le \left\|(\alpha I + C_\omega)^{-1}\right\|_2 \|F_1\|_2 \le \frac{1}{\alpha - 2^{1-\beta}}\, \|F_1\|_1^{1/2}\, \|F_1\|_\infty^{1/2} \le \frac{\epsilon}{\alpha - 2^{1-\beta}},$$
for $\alpha > 2^{1-\beta}$. Moreover, if $T_\omega$ is strictly diagonally dominant, it holds that
$$\left\|\tilde F(\alpha)\right\|_2 \le \frac{1}{\alpha}\, \|F_1\|_2 \le \frac{\epsilon}{\alpha},$$
for any α > 0 . □
Finally, we characterize the difference between the FSTTS preconditioned matrix $\tilde M(\alpha)^{-1} A$ and the STTS preconditioned matrix $M(\alpha)^{-1} A$.
Theorem 3. 
Denote
$$\varpi_{11} = 2\alpha + 2^{2-\beta}\left(4 - 3^{\beta}\right), \quad \varpi_{12} = \frac{\alpha - 2^{1-\beta}}{\eta + 2\alpha d_{\min}}, \quad \varpi_{21} = 2\alpha + 2^{2-\beta}\left(3 - 3^{\beta}\right), \quad \varpi_{22} = \frac{\alpha}{\eta + 2\alpha d_{\min}}.$$
Given any $\epsilon > 0$, there exists a positive integer $N_0$ such that, for $N > N_0$ and $\alpha > 2^{1-\beta}$, it holds that
$$\tilde M(\alpha)^{-1} A = M(\alpha)^{-1} \tilde A + E(\alpha) + F(\alpha), \tag{40}$$
with
$$\operatorname{rank}(E(\alpha)) \le N_0, \qquad \|F(\alpha)\|_2 \le \frac{\varpi_{11} + 4\alpha\, \varpi_{12}}{\left(\alpha - 2^{1-\beta}\right)^2}\, \epsilon.$$
In particular, if $T_\omega$ is strictly diagonally dominant, (40) holds with
$$\operatorname{rank}(E(\alpha)) \le N_0, \qquad \|F(\alpha)\|_2 \le \frac{\varpi_{21} + 4\alpha\, \varpi_{22}}{\alpha^2}\, \epsilon,$$
for any $\alpha > 0$.
Proof. 
Observe that
$$M(\alpha)^{-1} \tilde A = I - M(\alpha)^{-1} N(\alpha) = I - (\alpha I + T_\omega)^{-1}\left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega) = (\alpha I + T_\omega)^{-1}\left(I - \tilde L(\alpha)\right)(\alpha I + T_\omega), \tag{41}$$
where
$$\tilde L(\alpha) = \left(\alpha I - \eta D^{-1} M\right)\left(\alpha I + \eta D^{-1} M\right)^{-1}(\alpha I - T_\omega)(\alpha I + T_\omega)^{-1}.$$
According to the proof of Theorem 1, there exists an integer $k_0 > 0$ such that, for $N > k_0$, it holds that $\|\tilde L(\alpha)\|_2 < 1$. Then, for $\alpha > 2^{1-\beta}$, it follows from (41) and Lemma 4 that
$$\left\|M(\alpha)^{-1} \tilde A\right\|_2 \le \left\|(\alpha I + T_\omega)^{-1}\right\|_2 \left\|I - \tilde L(\alpha)\right\|_2 \left\|\alpha I + T_\omega\right\|_2 \le \left\|(\alpha I + T_\omega)^{-1}\right\|_2 \left\|\alpha I + T_\omega\right\|_2 \left(1 + \|\tilde L(\alpha)\|_2\right) \le \frac{2\alpha + 2^{2-\beta}\left(4 - 3^{\beta}\right)}{\alpha - 2^{1-\beta}} = \frac{\varpi_{11}}{\alpha - 2^{1-\beta}}. \tag{42}$$
Hence, combining (42) and Lemma 5, there exists an integer $k_1 > 0$ such that, for $N > 2k_1$ and $\alpha > 2^{1-\beta}$, it holds that
$$\tilde M(\alpha)^{-1} \tilde A = \tilde M(\alpha)^{-1} M(\alpha)\, M(\alpha)^{-1} \tilde A = \left(I + \tilde E(\alpha) + \tilde F(\alpha)\right) M(\alpha)^{-1} \tilde A = M(\alpha)^{-1} \tilde A + \hat E(\alpha) + \hat F(\alpha), \tag{43}$$
where
$$\hat E(\alpha) = \tilde E(\alpha)\, M(\alpha)^{-1} \tilde A, \qquad \hat F(\alpha) = \tilde F(\alpha)\, M(\alpha)^{-1} \tilde A,$$
satisfying $\operatorname{rank}(\hat E(\alpha)) \le k_1$, and
$$\left\|\hat F(\alpha)\right\|_2 \le \left\|\tilde F(\alpha)\right\|_2 \left\|M(\alpha)^{-1} \tilde A\right\|_2 \le \frac{\varpi_{11}\, \epsilon}{\left(\alpha - 2^{1-\beta}\right)^2}.$$
Further, denote $R = A - \tilde A$. Then, according to (11), there exists an integer $k_2$ such that, for $N > k_2$, it holds that $\|R\|_2 < \epsilon$.
Therefore, for $\alpha > 2^{1-\beta}$, when $N > N_0 = \max\{k_0, 2k_1, k_2\}$, it follows from Lemma 3 and (43) that
$$\tilde M(\alpha)^{-1} A = \tilde M(\alpha)^{-1}\left(\tilde A + A - \tilde A\right) = \tilde M(\alpha)^{-1} \tilde A + \tilde M(\alpha)^{-1} R = M(\alpha)^{-1} \tilde A + E(\alpha) + F(\alpha),$$
where
$$E(\alpha) = \hat E(\alpha), \qquad F(\alpha) = \hat F(\alpha) + \tilde M(\alpha)^{-1} R,$$
satisfying $\operatorname{rank}(E(\alpha)) \le k_1 \le N_0$, and
$$\|F(\alpha)\|_2 \le \left\|\hat F(\alpha)\right\|_2 + \left\|\tilde M(\alpha)^{-1}\right\|_2 \|R\|_2 \le \frac{\varpi_{11}\, \epsilon}{\left(\alpha - 2^{1-\beta}\right)^2} + \frac{4\alpha\, \epsilon}{\left(\alpha - 2^{1-\beta}\right)\left(\eta + 2\alpha d_{\min}\right)} = \frac{\varpi_{11} + 4\alpha\, \varpi_{12}}{\left(\alpha - 2^{1-\beta}\right)^2}\, \epsilon.$$
In particular, when $T_\omega$ is strictly diagonally dominant, for any $\alpha > 0$, it follows from Lemmas 3–5 that
$$\left\|M(\alpha)^{-1} \tilde A\right\|_2 \le \frac{2\alpha + 2^{2-\beta}\left(3 - 3^{\beta}\right)}{\alpha} = \frac{\varpi_{21}}{\alpha}, \qquad \left\|\tilde M(\alpha)^{-1}\right\|_2 \le \frac{4}{\eta + 2\alpha d_{\min}} = \frac{4\varpi_{22}}{\alpha}, \qquad \left\|\tilde F(\alpha)\right\|_2 \le \frac{\epsilon}{\alpha}.$$
Therefore, by the same arguments as above, we can determine that $\operatorname{rank}(E(\alpha)) \le k_1 \le N_0$, and
$$\|F(\alpha)\|_2 \le \left\|\hat F(\alpha)\right\|_2 + \left\|\tilde M(\alpha)^{-1}\right\|_2 \|R\|_2 \le \left\|\tilde F(\alpha)\right\|_2 \left\|M(\alpha)^{-1} \tilde A\right\|_2 + \left\|\tilde M(\alpha)^{-1}\right\|_2 \|R\|_2 \le \frac{\varpi_{21} + 4\alpha\, \varpi_{22}}{\alpha^2}\, \epsilon. \qquad \square$$
Theorem 3 suggests that the FSTTS preconditioned matrix $\tilde M(\alpha)^{-1} A$ can be well approximated by the STTS preconditioned matrix $M(\alpha)^{-1} A$, in the sense that they differ only by a low-rank matrix plus a small-norm matrix. Hence, the FSTTS preconditioner can be expected to inherit the preconditioning property of the STTS preconditioner and to accelerate the convergence of Krylov Subspace iteration methods well.

5. Numerical Experiments

In this section, we carry out numerical experiments to test the performance of the FSTTS preconditioner. We employ the right preconditioned GMRES method to solve the discrete linear systems (3) of all of the time steps. The preconditioned GMRES method equipped with the FSTTS preconditioner is denoted by GMRES-FSTTS. The initial guesses are set as
$$v_0 = \begin{cases} u^0, & m = 1, \\ u^{m-1}, & m > 1. \end{cases}$$
The stopping criterion is that
$$\frac{\|r_k\|_2}{\|r_0\|_2} \le 10^{-7},$$
where $r_k$ is the residual vector after $k$ iterations and $r_0$ is the initial residual vector; alternatively, the iteration is terminated when the number of iteration steps exceeds 1000 or when the total CPU time exceeds 4 h. All of the experiments are carried out using MATLAB (version R2023a) on a desktop computer with a 3.00 GHz central processing unit (Intel(R) Core(TM) i7-9700F CPU), 16.0 GB memory, and the Windows 10 operating system.
With respect to the choice of the parameter α involved in the FSTTS preconditioner, although the optimal values that lead to the best actual performance at each time step are too difficult to obtain, it is typically feasible, as suggested by [18,19,22], to carry out a moderate amount of experiments to determine an experimentally optimal value that ensures the effectiveness of matrix-splitting-based preconditioners. In our experiments, we observe that using the parameter that minimizes the number of iterations at the first time step suffices to yield satisfactory performance. It turns out that, in all our experiments, by first setting the initial test parameter to a small magnitude around $10^{-6}$, then gradually enlarging it within a certain candidate set for further tests, and finally selecting the best one, we can always quickly obtain ideal performance.
For comparison, we test the performance of the CAI preconditioner [1]. The corresponding preconditioned GMRES method is denoted by “GMRES-CAI(ℓ)”, where ℓ denotes the number of interpolation points. As in [1], we take ℓ = 4, 6 in our experiments. Additionally, we test the Strang circulant preconditioner for the Toeplitz matrix
$$ \eta M + \omega \left( \tilde{d}_l T_l - \tilde{d}_r T_r \right) + (1-\omega) \left( \tilde{d}_r T_l^{T} - \tilde{d}_l T_r^{T} \right), $$
where d̃_l and d̃_r are the mean values of the diagonals of D_l and D_r. The corresponding preconditioned GMRES method is denoted by “GMRES-C”. Moreover, we also test the GMRES method without preconditioning, denoted simply by “GMRES”.
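For a concrete illustration of how such a circulant preconditioner is built and applied in O(n log n) operations, the following sketch forms the Strang circulant of a Toeplitz matrix (given by its first column and first row) and solves with it by FFT diagonalization. The routine names are ours, not from the article.

```python
import numpy as np

def strang_circulant_eigs(t_col, t_row):
    """Eigenvalues (FFT of the first column) of the Strang circulant
    approximation to the Toeplitz matrix with first column t_col and
    first row t_row (t_col[0] == t_row[0])."""
    n = len(t_col)
    c = np.empty(n, dtype=complex)
    for k in range(n):
        # keep the central diagonals of T, wrap the outer ones around
        c[k] = t_col[k] if k <= n // 2 else t_row[n - k]
    return np.fft.fft(c)

def apply_circulant_inverse(eigs, v):
    """Solve C x = v via the diagonalization C = F^{-1} diag(eigs) F."""
    return np.real(np.fft.ifft(np.fft.fft(v) / eigs))
```

Because applying (or inverting) a circulant matrix costs only a few FFTs, such preconditioners are cheap; their quality, however, hinges on how well the Toeplitz part with averaged coefficients approximates the true coefficient matrix, which is exactly what fails in the examples below.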
Example 1. 
Consider the SFDE (1) with ( x , t ) ∈ [ 0 , 1 ] × [ 0 , 1 ] and the diffusion coefficient
$$ d(x,t) = \frac{1}{(x+0.001)^2} + e^{8x} + 10 t^2. $$
The exact solution is u ( x , t ) = e^t x^2 ( 1 − x )^2, and the source term f ( x , t ) is accordingly given by
$$ f(x,t) = e^{t} x^{2}(1-x)^{2} - \frac{e^{t} d(x,t)}{\Gamma(1+\beta)} \left[ \omega g_1(x) + (1-\omega) g_1(1-x) \right] - \frac{e^{t}}{\Gamma(1+\beta)} \left( -\frac{2}{(x+0.001)^{3}} + 8 e^{8x} \right) \left[ \omega g_1(x) - (1-\omega) g_2(1-x) \right], $$
where
$$ g_1(x) = 2 x^{\beta} - \frac{12 x^{\beta+1}}{\beta+1} + \frac{24 x^{\beta+2}}{(\beta+1)(\beta+2)} $$
and
$$ g_2(x) = \frac{2 (1-x)^{\beta+1}}{\beta+1} - \frac{12 (1-x)^{\beta+2}}{(\beta+1)(\beta+2)} + \frac{24 (1-x)^{\beta+3}}{(\beta+1)(\beta+2)(\beta+3)}. $$
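For reproducibility, the data of Example 1 can be evaluated directly. This sketch codes the diffusion coefficient d, the exact solution u, and the auxiliary functions g1 and g2 exactly as printed above; the source term f, which combines them, is omitted because its bracketing in the printed formula is harder to pin down.

```python
import numpy as np

def d_coeff(x, t):
    """Diffusion coefficient of Example 1: 1/(x+0.001)^2 + e^{8x} + 10 t^2."""
    return 1.0 / (x + 0.001) ** 2 + np.exp(8.0 * x) + 10.0 * t ** 2

def u_exact(x, t):
    """Exact solution u(x,t) = e^t x^2 (1-x)^2."""
    return np.exp(t) * x ** 2 * (1.0 - x) ** 2

def g1(x, beta):
    return (2.0 * x ** beta
            - 12.0 * x ** (beta + 1) / (beta + 1)
            + 24.0 * x ** (beta + 2) / ((beta + 1) * (beta + 2)))

def g2(x, beta):
    return (2.0 * (1 - x) ** (beta + 1) / (beta + 1)
            - 12.0 * (1 - x) ** (beta + 2) / ((beta + 1) * (beta + 2))
            + 24.0 * (1 - x) ** (beta + 3) / ((beta + 1) * (beta + 2) * (beta + 3)))
```

Note that u vanishes at both endpoints for all t, consistent with the homogeneous boundary conditions in (1), and that d blows up like a second-order singularity as x approaches −0.001, which is what degrades the mean-value circulant approximation below.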
For Example 1, the parameters α_exp of the FSTTS preconditioner used in the GMRES-FSTTS method are shown in Table 1. The numerical results of all tested preconditioners with different β and ω are reported in Table 2, Table 3 and Table 4. In these tables, “N” denotes the number of spatial grid points, “M” denotes the number of time steps, “Iter” denotes the average number of iterations required to solve the linear system at each time step, and “CPU” denotes the total CPU time in seconds for solving the linear systems at all time steps. We remark that the “CPU” figures include the time spent preparing the related matrix information and constructing the preconditioner; compared with the time spent solving the linear systems, this part is almost negligible. An entry “-” for both “Iter” and “CPU” indicates that the number of iteration steps at some time step exceeded 1000, which is regarded as failure to converge. An entry “>4 h” under “CPU” indicates that the total computing time exceeded 4 h and the computation was terminated; in this case, the corresponding “Iter” is not recorded and is left as “-”.
From the results of GMRES-FSTTS, we see that the FSTTS preconditioner exhibits excellent performance for all cases of β and ω, in both iteration steps and CPU time, and it clearly outperforms the CAI preconditioners, whose efficiency depends on the smoothness of the diffusion coefficient. Note that, in this example, the diffusion coefficient d ( x , t ) is continuous but close to a second-order singularity near x = 0. The results of GMRES-C show that the circulant preconditioner is very ineffective. Its poor performance is expected: the diffusion coefficient values obviously cannot be well approximated by their mean values, which leads to a poor approximation of the coefficient matrix. Moreover, the GMRES method without preconditioning converges extremely slowly and, as the matrix size increases, fails to reach the prescribed tolerance within 1000 steps.
Example 2. 
In this example, we replace the diffusion coefficient d ( x , t ) in Example 1 with
$$ d(x,t) = \frac{1}{(x+0.001)^2} + \frac{1}{(1.001-x)^2} + 10 t^2, $$
and compute the source term f ( x , t ) accordingly. All other data are the same as in Example 1.
For Example 2, the parameters α_exp of the FSTTS preconditioner used in the GMRES-FSTTS method are shown in Table 5. The numerical results of all tested preconditioners with different β and ω are reported in Table 6, Table 7 and Table 8.
We see that the FSTTS preconditioner still converges quite quickly for all cases of β and ω, and it evidently outperforms the CAI preconditioner. Note that, in this example, the diffusion coefficient d ( x , t ) is continuous but close to two second-order singularities, near x = 0 and x = 1. The FSTTS preconditioner also performs much better than the circulant preconditioner. In addition, the GMRES method without preconditioning again converges extremely slowly.

6. Conclusions

Based on the STTS iteration method, we have developed an efficient FSTTS preconditioner for the discrete linear system arising from the finite volume discretization of the conservative SFDE (1). We have established the convergence of the STTS iteration method, which guarantees the effectiveness of the induced STTS preconditioner. Further, we have shown theoretically that the FSTTS preconditioner, a fast approximation of the STTS preconditioner, inherits its preconditioning property well. Numerical results have demonstrated that the FSTTS preconditioner significantly accelerates the convergence of the GMRES method, and that it can be much more efficient than the CAI and circulant preconditioners.
Several challenging problems deserve further exploration. First, we will study how to design more intelligent and efficient strategies for identifying near-optimal parameters α in the FSTTS preconditioner. Second, we will investigate efficient preconditioning techniques for two- and three-dimensional conservative SFDEs with variable diffusion coefficients. Note that, in the multi-dimensional case, the discretized coefficient matrix is a sum of diagonal-times-block-Toeplitz-with-Toeplitz-blocks (BTTB) matrices; the structure is therefore much more complicated, and much more difficult to handle, than in one dimension.

Funding

This research is supported by National Natural Science Foundation of China (92370105).

Data Availability Statement

The data that support the findings of this study are available from the author upon reasonable request.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Pan, J.Y.; Ng, M.K.; Wang, H. Fast iterative solvers for linear systems arising from time-dependent space fractional diffusion equations. SIAM J. Sci. Comput. 2016, 38, 2806–2826. [Google Scholar] [CrossRef]
  2. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  3. Meerschaert, M.M.; Sikorskii, A. Stochastic Models for Fractional Calculus; De Gruyter: Berlin, Germany, 2012. [Google Scholar]
  4. Schumer, R.; Benson, D.A.; Meerschaert, M.M.; Wheatcraft, S.W. Eulerian derivation of the fractional advection-dispersion equation. J. Contam. Hydrol. 2001, 48, 69–88. [Google Scholar] [CrossRef] [PubMed]
  5. Mao, Z.P.; Shen, J. Efficient spectral-Galerkin methods for fractional partial differential equations with variable coefficients. J. Comput. Phys. 2016, 307, 243–261. [Google Scholar] [CrossRef]
  6. Huang, G.H.; Huang, Q.Z.; Zhan, H.B. Evidence of one-dimensional scale-dependent fractional advection-dispersion. J. Contam. Hydrol. 2006, 85, 53–71. [Google Scholar] [CrossRef] [PubMed]
  7. Sun, L.W.; Qiu, H.; Wu, C.H.; Niu, J.; Hu, B.X. A review of applications of fractional advection-dispersion equations for anomalous solute transport in surface and subsurface water. WIRES Water 2020, 7, e1448. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Benson, D.A.; Meerschaert, M.M.; LaBolle, E.M. Space-fractional advection-dispersion equations with variable parameters: Diverse formulas, numerical solutions, and application to the Macrodispersion Experiment site data. Water Resour. Res. 2007, 43, W05439. [Google Scholar] [CrossRef]
  9. Metzler, R.; Klafter, J. The random walk’s guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 2000, 339, 1–77. [Google Scholar] [CrossRef]
  10. Wang, H.; Wang, K.; Sircar, T. A direct O(Nlog2N) finite difference method for fractional diffusion equations. J. Comput. Phys. 2010, 229, 8095–8104. [Google Scholar] [CrossRef]
  11. Wang, H.; Du, N. A superfast-preconditioned iterative method for steady-state fractional diffusion equations. J. Comput. Phys. 2013, 240, 49–57. [Google Scholar] [CrossRef]
  12. Wang, H.; Cheng, A.; Wang, K. Fast finite volume methods for space-fractional diffusion equations. Discrete Contin. Dyn. Syst. Ser. B 2015, 20, 1427–1441. [Google Scholar] [CrossRef]
  13. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  14. Lei, S.; Sun, H.W. A circulant preconditioner for fractional diffusion equations. J. Comput. Phys. 2013, 242, 715–725. [Google Scholar] [CrossRef]
  15. Pan, J.Y.; Ke, R.H.; Ng, M.K.; Sun, H.W. Preconditioning techniques for diagonal-times-Toeplitz matrices in fractional diffusion equations. SIAM J. Sci. Comput. 2014, 36, 2698–2719. [Google Scholar] [CrossRef]
  16. Lin, F.R.; Yang, S.W.; Jin, X.Q. Preconditioned iterative methods for fractional diffusion equation. J. Comput. Phys. 2014, 256, 109–117. [Google Scholar] [CrossRef]
  17. Donatelli, M.; Mazza, M.; Serra-Capizzano, S. Spectral analysis and structure preserving preconditioners for fractional diffusion equations. J. Comput. Phys. 2016, 307, 262–279. [Google Scholar] [CrossRef]
  18. Bai, Z.Z.; Lu, K.Y.; Pan, J.Y. Diagonal and Toeplitz splitting iteration methods for diagonal-plus-Toeplitz linear systems from spatial fractional diffusion equations. Numer. Linear Algebra Appl. 2017, 24, e2093. [Google Scholar] [CrossRef]
  19. Bai, Z.Z. Respectively scaled HSS iteration methods for solving discretized spatial fractional diffusion equations. Numer. Linear Algebra Appl. 2018, 25, e2157. [Google Scholar] [CrossRef]
  20. Bai, Z.Z.; Lu, K.Y. On regularized Hermitian splitting iteration method for solving discretized almost-isotropic spatial fractional diffusion equation. Numer. Linear Algebra Appl. 2020, 27, e2274. [Google Scholar] [CrossRef]
  21. Chen, F.; Li, T.Y.; Muratova, G.V. Lopsided scaled HSS preconditioning for steady-state space-fractional diffusion equation. Calcolo 2021, 58, 26. [Google Scholar] [CrossRef]
  22. Lu, K.Y.; Xie, D.X.; Chen, F.; Muratova, G.V. Dominant Hermitian splitting iteration method for discrete space-fractional diffusion equations. Appl. Numer. Math. 2021, 164, 15–28. [Google Scholar] [CrossRef]
  23. Shao, X.H.; Kang, C.B. Modified DTS Iteration methods for spatial fractional diffusion equations. Mathematics 2023, 11, 931. [Google Scholar] [CrossRef]
  24. Tang, S.P.; Huang, Y.M. A matrix splitting preconditioning method for solving the discretized tempered fractional diffusion equations. Numer. Algor. 2023, 92, 1311–1333. [Google Scholar] [CrossRef]
  25. Tang, S.P.; Huang, Y.M. A fast preconditioning iterative method for solving the discretized second-order space-fractional advection-diffusion equations. J. Comput. Appl. Math. 2024, 438, 115513. [Google Scholar] [CrossRef]
  26. Barakitis, N.; Ekström, S.; Vassalos, P. Preconditioners for fractional diffusion equations based on the spectral symbol. Numer. Linear Algebra Appl. 2022, 29, e2441. [Google Scholar] [CrossRef]
  27. Pan, J.Y.; Ng, M.K.; Wang, H. Fast preconditioned iterative methods for finite volume discretization of steady-state space-fractional diffusion equations. Numer. Algor. 2017, 74, 153–173. [Google Scholar] [CrossRef]
  28. Donatelli, M.; Mazza, M.; Serra-Capizzano, S. Spectral analysis and multigrid methods for finite volume approximations of space-fractional diffusion equations. SIAM J. Sci. Comput. 2018, 40, 4007–4039. [Google Scholar] [CrossRef]
  29. Bai, Z.Z.; Golub, G.H.; Lu, L.Z.; Yin, J.F. Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 2005, 26, 844–863. [Google Scholar] [CrossRef]
  30. Chan, R.H.; Jin, X.Q. An Introduction to Iterative Toeplitz Solvers; SIAM: Philadelphia, PA, USA, 2007. [Google Scholar]
Table 1. The parameters α_exp of the FSTTS preconditioner used in Example 1.

| β | ω | N = 2^10 | N = 2^11 | N = 2^12 | N = 2^13 |
|-----|-----|-------------|-------------|-------------|-------------|
| 0.2 | 0.2 | 1.0 × 10^−4 | 1.0 × 10^−4 | 3.0 × 10^−5 | 1.0 × 10^−5 |
|     | 0.5 | 3.0 × 10^−4 | 8.0 × 10^−5 | 3.0 × 10^−5 | 1.0 × 10^−5 |
|     | 0.8 | 3.0 × 10^−4 | 1.0 × 10^−4 | 8.0 × 10^−5 | 1.0 × 10^−5 |
| 0.5 | 0.2 | 8.0 × 10^−4 | 5.0 × 10^−4 | 1.0 × 10^−4 | 1.0 × 10^−4 |
|     | 0.5 | 8.0 × 10^−4 | 3.0 × 10^−4 | 1.0 × 10^−4 | 1.0 × 10^−4 |
|     | 0.8 | 8.0 × 10^−4 | 3.0 × 10^−4 | 1.0 × 10^−4 | 8.0 × 10^−5 |
| 0.8 | 0.2 | 5.0 × 10^−3 | 3.0 × 10^−3 | 3.0 × 10^−3 | 1.0 × 10^−3 |
|     | 0.5 | 3.0 × 10^−3 | 1.0 × 10^−3 | 8.0 × 10^−4 | 5.0 × 10^−4 |
|     | 0.8 | 3.0 × 10^−3 | 1.0 × 10^−3 | 8.0 × 10^−4 | 3.0 × 10^−4 |
Table 2. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 1 with β = 0.2.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|---------------|------------------|---------------|---------------|--------------|
| 0.2 | 2^10 | 703.84/427.48 | 511.10/233.52 | 26.14/11.95 | 26.69/12.88 | 10.65/8.38 |
|     | 2^11 | -/- | 732.36/1877.91 | 30.31/42.21 | 30.76/45.59 | 11.84/27.83 |
|     | 2^12 | -/- | -/>4 h | 34.03/184.83 | 34.69/200.06 | 11.85/104.23 |
|     | 2^13 | -/- | -/>4 h | 37.99/776.51 | 39.38/857.65 | 12.38/399.87 |
| 0.5 | 2^10 | 693.18/418.65 | 521.82/242.75 | 26.65/12.28 | 27.19/13.17 | 11.77/8.96 |
|     | 2^11 | -/- | 749.75/1936.79 | 31.00/43.13 | 31.79/46.95 | 11.41/27.82 |
|     | 2^12 | -/- | -/>4 h | 35.27/190.23 | 35.78/204.72 | 12.26/104.99 |
|     | 2^13 | -/- | -/>4 h | 39.04/788.70 | 40.24/869.20 | 12.92/405.71 |
| 0.8 | 2^10 | 854.70/609.57 | 540.42/256.18 | 27.07/12.34 | 27.92/13.32 | 12.96/9.10 |
|     | 2^11 | -/- | 799.48/2167.05 | 32.25/44.07 | 33.15/47.89 | 12.80/28.94 |
|     | 2^12 | -/- | -/>4 h | 37.47/198.84 | 38.36/216.82 | 14.83/113.72 |
|     | 2^13 | -/- | -/>4 h | 43.05/863.17 | 42.75/920.64 | 13.46/413.34 |
Table 3. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 1 with β = 0.5.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|----------------|-------------------|---------------|---------------|--------------|
| 0.2 | 2^10 | 555.63/284.53 | 465.79/201.61 | 25.77/12.07 | 26.89/13.06 | 11.02/8.74 |
|     | 2^11 | -/- | 641.31/1450.95 | 30.63/42.98 | 31.94/47.31 | 11.47/28.48 |
|     | 2^12 | -/- | -/>4 h | 33.52/182.19 | 35.66/204.96 | 11.79/104.52 |
|     | 2^13 | -/- | -/>4 h | 34.70/723.78 | 36.35/805.91 | 12.53/401.70 |
| 0.5 | 2^10 | 388.99/156.53 | 432.47/178.60 | 27.18/12.38 | 28.78/13.61 | 11.44/8.75 |
|     | 2^11 | 585.41/1272.15 | 559.89/1128.09 | 31.55/43.35 | 33.05/47.73 | 11.14/27.73 |
|     | 2^12 | -/>4 h | 612.63/9945.58 | 35.16/190.83 | 37.33/214.31 | 11.73/104.71 |
|     | 2^13 | -/- | -/>4 h | 36.78/754.90 | 38.52/844.51 | 12.78/406.89 |
| 0.8 | 2^10 | 673.79/398.30 | 459.44/197.00 | 28.06/12.65 | 30.01/13.93 | 12.60/8.97 |
|     | 2^11 | -/- | 590.13/1241.24 | 33.46/45.27 | 35.06/50.05 | 12.21/28.78 |
|     | 2^12 | -/- | 630.82/10,614.58 | 40.07/211.07 | 39.32/221.54 | 12.67/107.34 |
|     | 2^13 | -/- | -/>4 h | 41.29/833.12 | 41.86/898.99 | 13.45/413.57 |
Table 4. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 1 with β = 0.8.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|------------------|-------------------|----------------|----------------|--------------|
| 0.2 | 2^10 | 509.81/245.89 | 504.95/247.94 | 27.50/12.35 | 29.40/13.64 | 13.71/9.16 |
|     | 2^11 | 834.56/2506.23 | 726.25/1970.07 | 35.33/46.57 | 36.91/51.50 | 14.01/29.64 |
|     | 2^12 | -/- | -/- | 45.01/232.38 | 46.02/253.01 | 16.63/116.98 |
|     | 2^13 | -/- | -/- | 60.53/1194.64 | 55.65/1172.74 | 14.50/423.72 |
| 0.5 | 2^10 | 155.51/42.24 | 298.09/106.86 | 29.07/12.83 | 29.25/13.73 | 13.00/9.04 |
|     | 2^11 | 194.70/209.09 | 334.47/491.47 | 36.00/46.98 | 35.90/50.47 | 12.00/28.74 |
|     | 2^12 | 255.54/2016.48 | 327.92/3187.91 | 43.59/228.20 | 43.26/241.24 | 12.17/105.87 |
|     | 2^13 | 349.54/14,013.41 | 325.12/12,640.19 | 49.03/968.40 | 48.41/1026.94 | 12.89/407.51 |
| 0.8 | 2^10 | 584.32/309.61 | 346.76/134.81 | 31.06/13.24 | 31.28/14.33 | 15.00/9.30 |
|     | 2^11 | 936.92/3041.66 | 396.44/650.48 | 38.13/48.69 | 37.77/52.25 | 13.99/29.82 |
|     | 2^12 | -/- | 383.32/4224.66 | 45.25/234.82 | 44.77/249.13 | 13.95/110.93 |
|     | 2^13 | -/- | -/>4 h | 51.58/1011.95 | 47.64/1006.62 | 13.80/417.82 |
Table 5. The parameters α_exp of the FSTTS preconditioner used in Example 2.

| β | ω | N = 2^10 | N = 2^11 | N = 2^12 | N = 2^13 |
|-----|-----|-------------|-------------|-------------|-------------|
| 0.2 | 0.2 | 1.0 × 10^−3 | 3.0 × 10^−4 | 1.0 × 10^−4 | 5.0 × 10^−5 |
|     | 0.5 | 5.0 × 10^−4 | 1.0 × 10^−4 | 1.0 × 10^−4 | 8.0 × 10^−5 |
|     | 0.8 | 1.0 × 10^−3 | 3.0 × 10^−4 | 1.0 × 10^−4 | 5.0 × 10^−5 |
| 0.5 | 0.2 | 1.0 × 10^−3 | 5.0 × 10^−4 | 3.0 × 10^−4 | 1.0 × 10^−4 |
|     | 0.5 | 1.0 × 10^−3 | 5.0 × 10^−4 | 3.0 × 10^−4 | 3.0 × 10^−4 |
|     | 0.8 | 1.0 × 10^−3 | 5.0 × 10^−4 | 3.0 × 10^−4 | 1.0 × 10^−4 |
| 0.8 | 0.2 | 8.0 × 10^−3 | 5.0 × 10^−3 | 3.0 × 10^−3 | 1.0 × 10^−3 |
|     | 0.5 | 3.0 × 10^−3 | 3.0 × 10^−3 | 1.0 × 10^−3 | 1.0 × 10^−3 |
|     | 0.8 | 8.0 × 10^−3 | 5.0 × 10^−3 | 3.0 × 10^−3 | 1.0 × 10^−3 |
Table 6. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 2 with β = 0.2.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|----------------|-----------------|---------------|---------------|--------------|
| 0.2 | 2^10 | 465.29/211.35 | 47.89/14.90 | 22.98/11.45 | 21.20/11.49 | 12.59/9.00 |
|     | 2^11 | 778.09/2160.57 | 60.74/58.85 | 26.85/39.93 | 25.14/41.11 | 12.24/28.98 |
|     | 2^12 | -/- | 80.12/376.12 | 33.61/183.48 | 30.79/182.63 | 13.06/108.53 |
|     | 2^13 | -/- | 107.08/2094.77 | 39.76/798.37 | 35.89/789.94 | 13.96/419.68 |
| 0.5 | 2^10 | 410.35/170.67 | 46.63/14.33 | 20.52/10.65 | 18.72/10.92 | 9.89/8.48 |
|     | 2^11 | 675.79/1653.26 | 59.86/57.99 | 23.62/37.21 | 21.03/37.23 | 10.28/27.21 |
|     | 2^12 | -/- | 79.13/368.65 | 27.16/157.19 | 23.19/150.52 | 11.18/102.86 |
|     | 2^13 | -/- | 106.59/2073.76 | 35.64/734.62 | 26.92/639.94 | 12.28/402.89 |
| 0.8 | 2^10 | 465.34/210.77 | 47.89/14.72 | 22.98/11.32 | 21.20/11.42 | 12.59/9.00 |
|     | 2^11 | 778.10/2150.17 | 60.74/58.70 | 26.86/39.77 | 25.14/40.84 | 12.24/28.56 |
|     | 2^12 | -/- | 80.12/379.04 | 33.61/183.37 | 30.79/183.06 | 13.06/108.16 |
|     | 2^13 | -/- | 107.08/2083.89 | 39.75/807.17 | 35.90/791.85 | 13.96/419.01 |
Table 7. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 2 with β = 0.5.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|----------------|-----------------|---------------|---------------|--------------|
| 0.2 | 2^10 | 331.93/121.53 | 44.92/14.10 | 28.30/12.65 | 25.03/12.62 | 12.79/9.15 |
|     | 2^11 | 543.88/1109.77 | 55.32/54.51 | 32.68/44.42 | 29.13/44.41 | 13.00/29.12 |
|     | 2^12 | -/>4 h | 70.91/328.81 | 37.85/202.58 | 32.45/189.74 | 13.78/110.53 |
|     | 2^13 | -/- | 92.61/1698.28 | 45.32/901.85 | 35.69/791.87 | 14.70/430.67 |
| 0.5 | 2^10 | 222.31/67.97 | 40.16/13.87 | 27.10/12.76 | 22.74/12.21 | 9.04/8.69 |
|     | 2^11 | 348.22/510.53 | 50.85/51.59 | 31.44/43.83 | 25.50/41.56 | 9.69/27.74 |
|     | 2^12 | 571.43/8772.68 | 66.59/305.53 | 34.91/188.07 | 27.76/169.63 | 10.70/103.00 |
|     | 2^13 | -/>4 h | 88.03/1606.13 | 38.92/791.54 | 30.21/694.24 | 12.55/407.40 |
| 0.8 | 2^10 | 331.79/122.36 | 44.92/14.52 | 28.30/13.03 | 25.03/12.98 | 12.79/9.50 |
|     | 2^11 | 543.82/1109.87 | 55.32/54.86 | 32.68/45.12 | 29.13/44.83 | 13.00/29.66 |
|     | 2^12 | -/>4 h | 70.91/322.89 | 37.85/202.84 | 32.45/193.57 | 13.78/111.37 |
|     | 2^13 | -/- | 92.61/1715.61 | 45.32/907.83 | 35.69/791.49 | 14.70/431.93 |
Table 8. The iteration steps and CPU time (each entry reads Iter/CPU) of GMRES, GMRES-C, GMRES-CAI(4), GMRES-CAI(6), and GMRES-FSTTS for Example 2 with β = 0.8.

| ω | M = N | GMRES | GMRES-C | GMRES-CAI(4) | GMRES-CAI(6) | GMRES-FSTTS |
|-----|-------|----------------|-----------------|-----------------|----------------|--------------|
| 0.2 | 2^10 | 277.26/92.65 | 49.81/15.62 | 44.54/17.10 | 34.18/15.50 | 15.03/9.89 |
|     | 2^11 | 450.74/793.56 | 58.94/58.20 | 58.73/68.65 | 42.55/57.62 | 15.99/31.88 |
|     | 2^12 | -/>4 h | 73.31/341.06 | 77.53/412.28 | 51.57/284.06 | 16.00/118.11 |
|     | 2^13 | -/- | 95.17/1783.11 | 100.60/2205.26 | 63.16/1347.85 | 15.91/445.49 |
| 0.5 | 2^10 | 115.84/30.40 | 33.54/12.47 | 36.37/15.02 | 29.38/14.19 | 9.58/9.00 |
|     | 2^11 | 172.40/176.92 | 41.96/45.62 | 45.04/55.87 | 35.57/51.18 | 10.06/28.10 |
|     | 2^12 | 266.01/2131.05 | 54.46/244.96 | 53.37/273.71 | 41.14/231.07 | 10.55/103.16 |
|     | 2^13 | -/>4 h | 71.72/1244.47 | 61.67/1225.81 | 46.05/980.35 | 11.85/398.66 |
| 0.8 | 2^10 | 277.28/92.77 | 49.81/15.56 | 44.54/17.10 | 34.18/15.45 | 15.03/9.93 |
|     | 2^11 | 450.70/793.03 | 58.94/57.86 | 58.73/68.27 | 42.55/57.20 | 15.99/31.57 |
|     | 2^12 | -/>4 h | 73.31/338.73 | 77.53/408.09 | 51.57/282.93 | 16.00/117.86 |
|     | 2^13 | -/- | 95.17/1771.59 | 100.60/2206.63 | 63.16/1354.17 | 15.91/446.15 |

Share and Cite

MDPI and ACS Style

Guo, X. Efficient Preconditioning Based on Scaled Tridiagonal and Toeplitz-like Splitting Iteration Method for Conservative Space Fractional Diffusion Equations. Mathematics 2024, 12, 2405. https://doi.org/10.3390/math12152405
