Article

A Note on the Convergence of Multigrid Methods for the Riesz–Space Equation and an Application to Image Deblurring

1 Dipartimento di Scienza e Alta Tecnologia, Università dell’Insubria, Via Valleggio 11, 22100 Como, Italy
2 Dipartimento di Matematica, Università Degli Studi di Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome, Italy
3 Facoltà di Informatica, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6900 Lugano, Switzerland
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1916; https://doi.org/10.3390/math12121916
Submission received: 18 March 2024 / Revised: 8 May 2024 / Accepted: 12 June 2024 / Published: 20 June 2024
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Understanding)

Abstract:
In recent decades, a remarkable amount of research has been carried out regarding fast solvers for the large linear systems resulting from various discretizations of fractional differential equations (FDEs). In the current work, we focus on multigrid methods for a Riesz–space FDE, for which the theoretical convergence analysis available in the relevant literature is currently limited to the two-grid method. Here we provide a detailed theoretical convergence study in the multilevel setting. Moreover, we discuss its use combined with a band approximation, and we compare the results with both τ and circulant preconditionings. The numerical tests include 2D problems as well as the extension to the case of a Riesz FDE with variable coefficients. Finally, we investigate the use of a Riesz–space FDE in a variational model for image deblurring, comparing the performance of specific preconditioning strategies.
MSC:
65F10; 65F08; 65F22

1. Introduction

In the present work, we are interested in the fast numerical solution of special large linear systems stemming from the approximation of a Riesz fractional diffusion equation (RFDE). It is known that, when using standard finite difference schemes on equispaced gridding in the presence of one-dimensional problems, we end up with a dense real symmetric Toeplitz linear system, whose dense structure is inherited from the non-locality of the underlying symmetric operator. We study a one-dimensional RFDE discretized with a numerical scheme with precision order one mainly for the simplicity of the presentation. Nevertheless, the same analysis can be carried out either when using higher-order discretization schemes [1,2] or when dealing with more complex higher-dimensional fractional models [3,4].
While the matrix–vector product involving dense Toeplitz matrices has a cost of $O(M\log M)$, with a moderate constant hidden in the big O and with M being the matrix size, the solution of the linear system can be of higher complexity in the presence of ill-conditioning. In connection with the last issue, it is worth recalling that the Toeplitz structure carries a lot of spectral information (see [5] and references therein). As shown in [6], the Euclidean conditioning of the coefficient matrices grows exactly as $M^{\alpha}$. Therefore, the classical conjugate gradient (CG) method would need a number of iterations exactly proportional to $M^{\alpha/2}$ to converge within a given precision.
From this perspective, a large number of preconditioning strategies for the preconditioned CG (PCG) have been proposed. One possibility is given by band approximations of the coefficient matrices; as discussed in [6,7], this does not ensure convergence within a number of iterations independent of M, but it still proves to be efficient for some values of the fractional order, and its cost is linear in the matrix size. As an alternative, the circulant proposal given in [8] has a cost proportional to that of the matrix–vector product and ensures fast convergence for 1D problems with constant coefficients. Another algebraic preconditioner proven to be optimal is the τ preconditioning explored in [9,10]. In a multidimensional setting, the multilevel circulants are not well suited, while the τ preconditioning still shows optimal behavior; see [10] and references therein. Among other effective preconditioners, a band preconditioner with diagonal compensation is proposed in [11] for handling variable coefficients.
Other approaches to the efficient solution of the ill-conditioned linear systems we are interested in include multigrid methods. Optimal multigrid proposals for fractional operators are given, e.g., in [6,7,12,13,14]. However, a theoretical convergence analysis of such multigrids is currently limited to the two-grid method.
Here we provide a detailed theoretical convergence study in the case of multigrid methods. To our knowledge, this is the first time in the literature that a multigrid convergence analysis is provided for RFDEs. Moreover, since multigrid methods are often applied as preconditioners, and often to an approximation of the coefficient matrix instead of to the coefficient matrix itself, along the same lines as [7] we discuss a band approximation to be solved with a Galerkin approach, which guarantees a linear cost in M instead of $O(M\log M)$ and inherits the multigrid optimality. We compare the resulting methods with both τ and circulant preconditioning.
Finally, in accordance with the topic of this Special Issue, “Mathematical Methods for Image Processing and Understanding”, we investigate the use of an RFDE in a variational model for image deblurring. Indeed, fractional-order diffusion models have recently received much attention in imaging applications; see, e.g., [15,16]. We propose a matrix–algebra preconditioner combining the structural information of the blurring operator and of the RFDE. The effectiveness of the preconditioner is explored by varying the regularization parameter, showing that it can reduce the computational cost. On the other hand, regarding this particular application, we explain why multigrid methods require further investigation.
The paper is organized as follows. In Section 2, we present the problem, the chosen numerical approximation, and the resulting linear systems. The main spectral features of the considered coefficient matrices are also recalled. In Section 3, we describe the essentials of multigrid methods and give the description of our proposal and its analysis. In Section 4, we discuss its use applied to a band approximation of the coefficient matrix. Section 5 contains a wide set of numerical experiments that compare the multigrid with matrix–algebra-based preconditioners and include 2D problems as well as the extension to the case of an RFDE with variable coefficients. Section 6 is devoted to applying the best preconditioning strategy to an image deblurring problem. Future research directions and conclusions are given in Section 7.

2. Preliminaries

The current section briefly introduces the continuous problem, its numerical approximation scheme, and the resulting linear systems. The main spectral features of the considered coefficient matrices are recalled as well.
As anticipated in the introduction, we consider the following one-dimensional Riesz fractional diffusion equation (1D-RFDE), given by
$$-d\,\frac{\partial^{\alpha} u(x)}{\partial |x|^{\alpha}} = m(x), \qquad x \in \Omega = [a,b],$$
with boundary conditions
$$u(x) = 0, \qquad x \in \partial\Omega,$$
where $d > 0$ is the diffusion coefficient, $m(x)$ is the source term, and $\frac{\partial^{\alpha} u(x)}{\partial |x|^{\alpha}}$ is the 1D-Riesz fractional derivative for $\alpha \in (1,2)$, defined as
$$\frac{\partial^{\alpha} u(x)}{\partial |x|^{\alpha}} = c(\alpha)\left({}_{a}D_{x}^{\alpha}u(x) + {}_{x}D_{b}^{\alpha}u(x)\right),$$
with $c(\alpha) = \frac{-1}{2\cos\left(\frac{\alpha\pi}{2}\right)} > 0$, while ${}_{a}D_{x}^{\alpha}u(x)$ and ${}_{x}D_{b}^{\alpha}u(x)$ are the left and right Riemann–Liouville fractional derivatives; see, e.g., [17,18].
For the approximation of the problem, we define the uniform space partition on the interval [ a , b ] , that is
$$x_i = a + ih, \qquad h = \frac{b-a}{M+1}, \qquad i = 1,\ldots,M.$$
Here, the left and right fractional derivatives in (3) are numerically approximated by the shifted Grünwald difference formula [3] as
$${}_{a}D_{x}^{\alpha}u(x_i) = \frac{1}{h^{\alpha}}\sum_{k=0}^{i+1} g_k^{(\alpha)}\,u(x_{i-k+1}) + O(h), \qquad {}_{x}D_{b}^{\alpha}u(x_i) = \frac{1}{h^{\alpha}}\sum_{k=0}^{M-i+2} g_k^{(\alpha)}\,u(x_{i+k-1}) + O(h),$$
with g k ( α ) being the alternating fractional binomial coefficients, whose expression is
$$g_k^{(\alpha)} = (-1)^k\binom{\alpha}{k} = \frac{(-1)^k}{k!}\,\alpha(\alpha-1)\cdots(\alpha-k+1), \qquad k \in \mathbb{N}_0.$$
By applying the approximation (5) to Equation (1) and using (3) and (6), we obtain the required scheme, as follows:
$$-\frac{d\,c(\alpha)}{h^{\alpha}}\left(\sum_{k=0}^{i+1} g_k^{(\alpha)}\,u_{i-k+1} + \sum_{k=0}^{M-i+2} g_k^{(\alpha)}\,u_{i+k-1}\right) = m_i, \qquad i = 1,\ldots,M.$$
By defining $u = [u_1, u_2, \ldots, u_M]^{T}$ and $m = [m_1, m_2, \ldots, m_M]^{T}$, the resulting approximation Equation (7) can be written in matrix form as
$$-\bar c\left(G_M^{\alpha} + (G_M^{\alpha})^{T}\right)u = m$$
and, by defining $\mathcal{G}_M^{\alpha} = G_M^{\alpha} + (G_M^{\alpha})^{T}$, we have
$$A_M^{\alpha}\,u = m,$$
with $A_M^{\alpha} = -\bar c\,\mathcal{G}_M^{\alpha}$, $\bar c = \frac{d\,c(\alpha)}{h^{\alpha}}$, and
$$G_M^{\alpha} = \begin{pmatrix} g_1^{(\alpha)} & g_0^{(\alpha)} & 0 & \cdots & 0 \\ g_2^{(\alpha)} & g_1^{(\alpha)} & g_0^{(\alpha)} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ g_{M-1}^{(\alpha)} & \cdots & g_2^{(\alpha)} & g_1^{(\alpha)} & g_0^{(\alpha)} \\ g_M^{(\alpha)} & g_{M-1}^{(\alpha)} & \cdots & g_2^{(\alpha)} & g_1^{(\alpha)} \end{pmatrix} \in \mathbb{R}^{M\times M}$$
being a lower Hessenberg Toeplitz matrix. The fractional binomial coefficients $g_k^{(\alpha)}$ satisfy a few properties, summarized in the following Lemma 1 (see [3,19,20]).
Lemma 1.
For 1 < α < 2 , the coefficients g k ( α ) defined in (6) satisfy
$$g_0^{(\alpha)} = 1, \qquad g_1^{(\alpha)} = -\alpha, \qquad g_0^{(\alpha)} > g_2^{(\alpha)} > g_3^{(\alpha)} > \cdots > 0,$$
$$\sum_{k=0}^{\infty} g_k^{(\alpha)} = 0, \qquad \sum_{k=0}^{n} g_k^{(\alpha)} < 0, \qquad \text{for } n \ge 1.$$
Using Lemma 1, it has been proven in [20] that the coefficient matrix A M α is strictly diagonally dominant and hence invertible. To determine its generating function and study its spectral distribution, tools on Toeplitz matrices have been used (see [6]). For the notion of spectral distribution in the Weyl sense with special attention to Toeplitz structures and their variants, see [21,22,23,24,25]. A short review of these results follows.
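For the reader's convenience, the following minimal MATLAB sketch (not taken from the paper) builds the coefficients $g_k^{(\alpha)}$ through the recurrence $g_0^{(\alpha)} = 1$, $g_k^{(\alpha)} = \left(1 - \frac{\alpha+1}{k}\right)g_{k-1}^{(\alpha)}$ and assembles the matrices of Equations (8)–(10); the sign in the last line is an assumption on our part, chosen so that the assembled matrix is symmetric positive definite, in agreement with the discussion above.

alpha = 1.5;  M = 64;                            % example order and size
g = zeros(M+1,1);  g(1) = 1;                     % g(k+1) stores g_k^(alpha)
for k = 1:M
    g(k+1) = (1 - (alpha+1)/k)*g(k);             % g_k = (1-(alpha+1)/k) g_{k-1}
end
GM = toeplitz(g(2:M+1), [g(2); g(1); zeros(M-2,1)]);   % lower Hessenberg matrix (10)
calpha = -1/(2*cos(alpha*pi/2));                 % c(alpha) > 0 for alpha in (1,2)
A = -calpha*(GM + GM');                          % h^alpha A_M^alpha with d = 1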
Definition 1
([5]). Given $f \in L^1(-\pi,\pi)$ and its Fourier coefficients
$$a_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,e^{-\iota k x}\,dx, \qquad \iota^2 = -1, \quad k \in \mathbb{Z},$$
we define $T_M(f) \in \mathbb{C}^{M\times M}$ as the Toeplitz matrix
$$T_M(f) = \begin{pmatrix} a_0 & a_{-1} & a_{-2} & \cdots & a_{1-M} \\ a_1 & a_0 & a_{-1} & \ddots & \vdots \\ a_2 & a_1 & a_0 & \ddots & a_{-2} \\ \vdots & \ddots & \ddots & \ddots & a_{-1} \\ a_{M-1} & \cdots & a_2 & a_1 & a_0 \end{pmatrix}$$
generated by f. In addition, the Toeplitz sequence $\{T_M(f)\}_{M\in\mathbb{N}}$ is called the family of Toeplitz matrices generated by f.
For instance, the tridiagonal Toeplitz matrix
$$G_M^2 = \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}$$
associated with a finite difference discretization of the Laplacian operator is generated by
$$l(x) = 2 - 2\cos(x).$$
Note that, for the Toeplitz matrices $T_M(f)$, $M \in \mathbb{N}$, in (12) to have a generating function associated with the whole Toeplitz sequence, there must exist $f \in L^1(-\pi,\pi)$ for which relationship (11) holds for every $k \in \mathbb{Z}$.
In the case where the partial Fourier sum
$$\sum_{k=-M+1}^{M-1} a_k\,e^{\iota k x}$$
converges in infinity norm, f coincides with its sum and is a continuous $2\pi$-periodic function, given the Banach structure of this space equipped with the infinity norm. A sufficient condition is that $\sum_{k=-\infty}^{\infty}|a_k| < \infty$, i.e., the generating function belongs to the Wiener class, which is a closed sub-algebra of the continuous $2\pi$-periodic functions.
Now, according to (9), we define
$$b_{M,\alpha}(x) = \sum_{k=0}^{M} g_k^{(\alpha)}\,e^{\iota(k-1)x}.$$
Hence, the partial Fourier sum associated to $\mathcal{G}_M^{\alpha}$ is
$$b_{M,\alpha}(x) + \overline{b_{M,\alpha}(x)} = 2\sum_{k=0}^{M} g_k^{(\alpha)}\cos((k-1)x).$$
Fixing $d = 1$, from (9) it is evident that the matrix $h^{\alpha}A_M^{\alpha} = -c(\alpha)\,\mathcal{G}_M^{\alpha}$ is symmetric, Toeplitz, and generated by the real-valued even function on $[-\pi,\pi]$ defined in the following lemma.
Lemma 2
([6,26]). The generating function of $h^{\alpha}A_M^{\alpha} = -c(\alpha)\,\mathcal{G}_M^{\alpha}$ is
$$f_{\alpha}(x) = -c(\alpha)\,2^{\alpha+1}\sin^{\alpha}\!\left(\frac{x}{2}\right)\cos\!\left(x\left(1-\frac{\alpha}{2}\right)+\frac{\alpha\pi}{2}\right), \qquad x \in [0,\pi].$$
  • For $\alpha = 2$, $f_2(x) = l(x) = 2 - 2\cos x$.
  • For $\alpha = 1$, $f_1(x) = \frac{1}{2}\,l(x)$.
As observed in [6], the function $f_{\alpha}(x)$ has a unique zero of order α at $x = 0$ for $1 < \alpha \le 2$.
By using the extremal spectral properties of Toeplitz matrix sequences (see, e.g., [27,28]) and by combining it with Lemma 2, for α ( 1 , 2 ) , the obtained spectral information can be summarized in the following items (see again [6]):
  • any eigenvalue λ of $h^{\alpha}A_M^{\alpha}$ belongs to the open interval $(0, r)$, with $0 = \min f_{\alpha}$ and $r = \max f_{\alpha} > 0$;
  • $\lambda_{\max}(h^{\alpha}A_M^{\alpha})$ is monotonically strictly increasing and converges to $r = \max f_{\alpha}$ as M tends to infinity;
  • $\lambda_{\min}(h^{\alpha}A_M^{\alpha})$ is monotonically strictly decreasing and converges to $0 = \min f_{\alpha}$ as M tends to infinity;
  • by taking into account that the zero of the generating function $f_{\alpha}$ has exactly order $\alpha \in (1,2]$, it holds that
    $$\lambda_{\min}(h^{\alpha}A_M^{\alpha}) \sim \frac{1}{M^{\alpha}},$$
    where, for non-negative sequences $\alpha_n$, $\beta_n$, the writing $\alpha_n \sim \beta_n$ means that there exist real positive constants $0 < c \le C < \infty$, independent of n, such that $c\,\alpha_n \le \beta_n \le C\,\alpha_n$, $\forall n \in \mathbb{N}$.
As a consequence of the third and fourth items reported above, the Euclidean conditioning of $h^{\alpha}A_M^{\alpha}$ grows exactly as $M^{\alpha}$.
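A quick numerical check of this growth can be carried out with a few MATLAB lines (a sketch, not from the paper; the sign convention is the one assumed in the sketch above):

alpha = 1.5;  calpha = -1/(2*cos(alpha*pi/2));
for M = [64 128 256 512]
    g = zeros(M+1,1);  g(1) = 1;
    for k = 1:M, g(k+1) = (1 - (alpha+1)/k)*g(k); end
    GM = toeplitz(g(2:end), [g(2); g(1); zeros(M-2,1)]);
    A  = -calpha*(GM + GM');                     % h^alpha A_M^alpha, d = 1
    fprintf('M = %4d   cond_2(A)/M^alpha = %.3f\n', M, cond(A)/M^alpha);
end

The printed ratio settles to an approximately constant value as M doubles, in agreement with the $M^{\alpha}$ growth.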
Therefore, classical stationary methods fail to be optimal, in the sense that we cannot expect convergence within a given precision in a number of iterations bounded by a constant independent of M: for instance, the Gauss–Seidel iteration would need a number of iterations exactly proportional to $M^{\alpha}$. In addition, even Krylov methods suffer from the latter asymptotic ill-conditioning. In fact, looking at the estimates by Axelsson and Lindskog [29], $O(M^{\alpha/2})$ iterations would be needed by a standard CG for reaching a solution within a fixed accuracy. In the present context of a zero of non-integer order, even preconditioning by band–Toeplitz matrices will not ensure optimality, as discussed in [6] and detailed in Section 4. On the other hand, when using circulant or τ preconditioning, the spectral equivalence is guaranteed also for a non-integer order of the zero at $x = 0$, provided the order is bounded by 2 [8,9,10]. In the latter case, the distance in rank is smaller with respect to other matrix algebras. In a multidimensional setting, only the τ preconditioning leads to optimality of the PCG under the same constraint on the order of the zero, while multilevel circulants are not well suited. Finally, since we also consider problems in imaging, we recall that the τ algebra emerges in this context as well when using high-precision boundary conditions (BCs) such as the anti-reflective BCs [30].

3. Multigrid Methods and Level Independency

Multigrid methods (MGMs) have already proven to be effective solvers, as well as valid preconditioners for Krylov methods, when numerically approaching FDEs [7,12,13]. A multigrid method combines two iterative methods, called smoother and coarse grid correction [31]. The coarser matrices can either be obtained by projection (Galerkin approach) or by rediscretization (geometric approach). In the case where only a single coarser level is considered, we obtain two-grid methods (TGMs), while in the presence of more levels we have multigrid methods: in particular, the V-cycle when only one recursive call is performed and the W-cycle when two recursive calls are applied.
Deepening the analysis performed in [6,13], in this section we investigate the convergence of multigrid methods in Galerkin form for the discretized problem (8).
To apply the Galerkin multigrid to a given linear system $A_M x = b$, for $M_0 = M > M_1 > M_2 > \cdots > M_{\ell} > 0$, let us define the grid transfer operators $P_k \in \mathbb{R}^{M_{k+1}\times M_k}$ as
$$P_k = K_k\,T_{M_k}(p_k), \qquad k = 0,\ldots,\ell-1,$$
where $K_k$ is the down-sampling operator and $p_k$ is a properly chosen polynomial. The coarser matrices are then
$$A_{M_{k+1}} = P_k\,A_{M_k}\,P_k^{T}, \qquad k = 0,\ldots,\ell-1.$$
If, at the recursive level k, we simply apply one post-smoothing iteration of a stationary method having iteration matrix $S_k$, the TGM iteration matrix at level k is given by
$$TGM_k = S_k\left(I_{M_k} - P_k^{T}\left(P_k A_{M_k} P_k^{T}\right)^{-1}P_k A_{M_k}\right).$$
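Before turning to the convergence analysis, a minimal MATLAB sketch (not the authors' implementation) makes the Galerkin construction above concrete: weighted Jacobi is used as pre- and post-smoother, the grid transfer is built from $T_{M_k}(p)$ with $p(x) = 1 + \cos x$ followed by down-sampling, and the scaling constants $C_{k,\alpha}$ of the next subsection are omitted because they cancel in the coarse grid correction.

function x = vcycle(A, b, x, omega, coarsest)      % save as vcycle.m
    M = size(A,1);
    if M <= coarsest, x = A\b; return; end          % direct solve on coarsest grid
    D = diag(A);                                    % Jacobi diagonal (a vector)
    x = x + omega*((b - A*x)./D);                   % pre-smoothing (weighted Jacobi)
    P = spdiags(repmat([1 2 1]/2, M, 1), -1:1, M, M);  % T_M(p), p(x) = 1 + cos(x)
    K = speye(M);  K = K(2:2:M-1, :);               % down-sampling operator
    R = K*P;                                        % grid transfer P_k
    Ac = R*A*R';                                    % Galerkin coarse matrix (16)
    ec = vcycle(Ac, R*(b - A*x), zeros(size(Ac,1),1), omega, coarsest);
    x  = x + R'*ec;                                 % coarse grid correction
    x  = x + omega*((b - A*x)./D);                  % post-smoothing
end

For instance, with A and alpha from the sketch of Section 2 and a right-hand side m, the call u = zeros(size(A,1),1); for it = 1:20, u = vcycle(A, m, u, 2^(2-alpha)*alpha/3, 15); end applies twenty V-cycles with the damping parameter proposed in (31) below.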
The following theorem states the convergence of TGM at a generic recursion level k.
Theorem 1
(Ruge–Stüben [32]). Let $A_{M_k}$ be a symmetric positive definite matrix, $D_k$ the diagonal matrix having as main diagonal that of $A_{M_k}$, and $S_k$ the post-smoothing iteration matrix. Assume that there exists $\sigma_k > 0$ such that
$$\|S_k u\|_{A_{M_k}}^2 \le \|u\|_{A_{M_k}}^2 - \sigma_k\,\|u\|_{A_{M_k}D_k^{-1}A_{M_k}}^2, \qquad \forall u \in \mathbb{R}^{M_k},$$
and there exists $\delta_k > 0$ such that
$$\min_{y\in\mathbb{R}^{M_{k+1}}}\|u - P_k^{T}y\|_{D_k}^2 \le \delta_k\,\|u\|_{A_{M_k}}^2, \qquad \forall u \in \mathbb{R}^{M_k};$$
then, $\delta_k > \sigma_k$ and
$$\|TGM_k\|_{A_{M_k}} \le \sqrt{1 - \frac{\sigma_k}{\delta_k}}.$$
The inequalities in Equations (17) and (18) are well known as the smoothing property and the approximation property, respectively. To prove the multigrid convergence (recursive application of the TGM), it is enough to prove that the assumptions of Theorem 1 are satisfied for each $k = 0,1,\ldots,\ell-1$. On the other hand, to guarantee a W-cycle convergence within a number of iterations independent of the size M [33], we have to prove that the following level independency condition
$$\liminf_{k}\,\frac{\sigma_k}{\delta_k} = \beta > 0$$
holds with β independent of k.
Concerning (multilevel) Toeplitz matrices, multigrid methods have been extensively investigated in the literature [34,35,36]. While the smoothing property can be easily carried on for a simple smoother like weighted Jacobi, the approximation property usually requires a detailed analysis based on a generalization of the local Fourier analysis.
If we now apply the multigrid with Galerkin approach to the solution of (8) and refer to the matrices at the coarser level k as A M k α , they are still Toeplitz and, based on the theoretical analysis developed in [36,37] for τ matrices, are associated to the following symbols:
$$f_{k+1,\alpha}(x) = \frac{1}{2}\left[f_{k,\alpha}\!\left(\frac{x}{2}\right)p_{k,\alpha}^2\!\left(\frac{x}{2}\right) + f_{k,\alpha}\!\left(\pi - \frac{x}{2}\right)p_{k,\alpha}^2\!\left(\pi - \frac{x}{2}\right)\right], \qquad k = 0,\ldots,\ell-1.$$
Since the function f 0 , α = f α has only a zero at the origin of order smaller or equal to 2, according to the convergence analysis of multigrid methods in [36], we define the symbols of the projectors P k in (15) as
$$p_k(x) = p_{k,\alpha}(x), \qquad \text{where } p_{k,\alpha}(x) = C_{k+1,\alpha}\,p(x) = C_{k+1,\alpha}\,(1+\cos x),$$
and C k + 1 , α is a constant chosen, such that f k + 1 , α ( π ) = f k , α ( π ) .
We observe that the symbols f k , α at each level are non-negative and they vanish only at the origin with order α . Therefore, the symbols p k , α in (21) and f k + 1 , α in (20) satisfy the approximation property (18) because
$$\lim_{x\to 0}\frac{p_{k,\alpha}(x+\pi)^2}{f_{k+1,\alpha}(x)} = \gamma_k < \infty$$
at each level $k = 0,\ldots,\ell-1$.
Remark 1.
Since δ k is proportional to γ k , the level independency condition (19) requires that
$$\limsup_{k}\,\gamma_k = \gamma < \infty$$
with γ independent of k.
For the smoothing property (17), in order to ensure $\sigma_k > \sigma > 0$ for all k, the choice of $C_{k+1,\alpha}$ as a constant such that $f_{k+1,\alpha}(\pi) = f_{k,\alpha}(\pi)$ will be crucial.
In conclusion, to prove the level independency condition (19), we need a detailed analysis of the symbol f k + 1 , α , which is performed in the next subsection.

3.1. Behavior of Symbols f k , α

The analysis of f k + 1 , α in (20) requires a detailed study of the constant C k + 1 , α in p k , α chosen imposing the condition
$$f_{k+1,\alpha}(\pi) = f_{k,\alpha}(\pi) = c(\alpha)\,2^{\alpha+1}, \qquad k = 0,1,\ldots,\ell-1,$$
because $f_{0,\alpha}(\pi) = c(\alpha)\,2^{\alpha+1}$.
According to the classical analysis, in the case of the discrete Laplacian, i.e., $l(x) = 2 - 2\cos x$, the value of $C_{k+1,2}$ at each level is $\sqrt{2}$.
Proposition 1.
Let $f_{0,2}(x) = l(x)$, where $l(x) = 2 - 2\cos x$, and $p_{k,2}(x) = C_{k+1,2}\,p(x)$ with $C_{k+1,2} = \sqrt{2}$. Then, for $f_{k+1,2}$ computed according to (20), it holds
$$f_{k+1,2}(x) = l(x), \qquad k = 0,1,\ldots,\ell-1.$$
Proof. 
Since $p_{k,2}(x) = \sqrt{2}\,p(x) = \sqrt{2}\,(1+\cos x)$, using relation (20), for $k = 0$ it holds
$$f_{1,2}(x) = l\!\left(\frac{x}{2}\right)p^2\!\left(\frac{x}{2}\right) + l\!\left(\pi - \frac{x}{2}\right)p^2\!\left(\pi - \frac{x}{2}\right) = l(x).$$
Since the symbol $p_{k,2}$ of the projector is the same at each level, by the recursive relation we obtain the required result
$$f_{\ell,2}(x) = \cdots = f_{1,2}(x) = f_{0,2}(x) = l(x).$$ □
For $\alpha \in (1,2]$, the value of $C_{k+1,\alpha}$, computed such that $f_{k+1,\alpha}(\pi) = f_{k,\alpha}(\pi)$, is
$$C_{k+1,\alpha}^2 = \frac{f_{k+1,\alpha}(\pi)}{L_{k+1,\alpha}(\pi)} = \frac{2^{\alpha+1}}{L_{k+1,\alpha}(\pi)}, \qquad k = 0,1,\ldots,\ell-1,$$
where
$$L_{k+1,\alpha}(x) = \frac{1}{2}\left[f_{k,\alpha}\!\left(\frac{x}{2}\right)p^2\!\left(\frac{x}{2}\right) + f_{k,\alpha}\!\left(\pi - \frac{x}{2}\right)p^2\!\left(\pi - \frac{x}{2}\right)\right].$$
Since $L_{k+1,\alpha}(\pi) = f_{k,\alpha}\!\left(\frac{\pi}{2}\right)$ converges to 4 as k diverges, we have
$$\lim_{k\to\infty} C_{k,\alpha} = 2^{\frac{\alpha-1}{2}}, \qquad \text{for } \alpha \in (1,2],$$
as can be seen in Figure 1. Moreover, the convergence of C k , α is quite fast, especially for α close to 2.
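The behavior of $C_{k,\alpha}$ can be reproduced with a short MATLAB sketch (not from the paper; it assumes the expression of $f_{\alpha}$ reconstructed in Lemma 2 and the recurrence (20) with the scaling (25)):

alpha = 1.5;
calpha = -1/(2*cos(alpha*pi/2));
p  = @(x) 1 + cos(x);
fk = @(x) -calpha*2^(alpha+1)*sin(x/2).^alpha ...
          .*cos(x*(1 - alpha/2) + alpha*pi/2);        % f_{0,alpha}
for k = 0:7
    C2 = fk(pi)/fk(pi/2);                             % (25), since L_{k+1,alpha}(pi) = f_{k,alpha}(pi/2)
    fprintf('k = %d   C_{k+1,alpha} = %.4f\n', k, sqrt(C2));
    fprev = fk;                                       % freeze the current level
    fk = @(x) 0.5*C2*( fprev(x/2).*p(x/2).^2 + ...
                       fprev(pi - x/2).*p(pi - x/2).^2 );   % recurrence (20)
end
fprintf('limit 2^((alpha-1)/2) = %.4f\n', 2^((alpha-1)/2));

The printed values quickly approach the limit in (26), in agreement with Figure 1.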
We observe that the sequence of the coarser symbols satisfies
$$\frac{f_{\ell,\alpha}(x)}{c(\alpha)} \ge \frac{f_{\ell-1,\alpha}(x)}{c(\alpha)} \ge \cdots \ge \frac{f_{0,\alpha}(x)}{c(\alpha)} \ge l(x),$$
for any $\alpha \in (1,2)$, as can be seen in Figure 2. Combining this fact with
$$\|f_{k,\alpha}\|_{\infty} = f_{k,\alpha}(\pi) = f_{0,\alpha}(\pi) = c(\alpha)\,2^{\alpha+1}, \qquad k = 1,\ldots,\ell,$$
for any $\alpha \in (1,2)$, we obtain that the ill-conditioning of the coarser linear systems decreases when moving to coarser grids.

3.2. Smoothing Property

If weighted Jacobi is used as a smoother, then the smoothing property (17) is satisfied whenever the smoother converges [32,34]. Since the matrix A M k α is symmetric positive definite, the weighted Jacobi method
$$S_k = I_{M_k} - \omega_k\,D_k^{-1}A_{M_k}^{\alpha}$$
is well defined, with $D_k$ being the diagonal of $A_{M_k}^{\alpha}$. Moreover, it converges for $0 < \omega_k < 2/\rho(D_k^{-1}A_{M_k}^{\alpha})$, where $\rho(D_k^{-1}A_{M_k}^{\alpha})$ is the spectral radius of $D_k^{-1}A_{M_k}^{\alpha}$.
When $A_{M_k}^{\alpha} = T_{M_k}(f_{0,\alpha})$, as for the geometric approach, $D_k = a_0^{(0)}I_{M_k}$, where $a_0^{(0)}$ is the Fourier coefficient of order zero of $f_{0,\alpha}$; thus, it holds
$$S_k = I_{M_k} - \frac{\omega_k}{2c(\alpha)\alpha}\,A_{M_k}^{\alpha},$$
since $a_0^{(0)} = 2c(\alpha)\alpha$.
Lemma 3.
Let $A_{M_k}^{\alpha} = T_{M_k}(f_{0,\alpha})$ with $f_{0,\alpha} = f_{\alpha}$ defined as in (14), and let $S_k$ be defined as in (27). If we choose
$$0 < \omega_k < \frac{\alpha}{2^{\alpha-1}},$$
then there exists $\sigma_k$ such that the smoothing property in (17) holds true.
Moreover, choosing
$$\omega_k = \frac{2^{2-\alpha}\alpha}{3},$$
it holds
$$\sigma_k \ge \sigma = \frac{2^{3-\alpha}\alpha}{9} > 0.$$
Proof. 
Since $f_{0,\alpha}$ is non-negative, from Lemma 4.2 in [13], it follows that
$$0 < \omega_k < \frac{2\,a_0^{(0)}}{\|f_{0,\alpha}\|_{\infty}} = \frac{\alpha}{2^{\alpha-1}},$$
since $\|f_{0,\alpha}\|_{\infty} = f_{0,\alpha}(\pi) = c(\alpha)\,2^{\alpha+1}$. Moreover, for $\omega_k$ chosen as in (29), the condition
$$\left(I_{M_k} - \frac{\omega_k}{2c(\alpha)\alpha}A_{M_k}^{\alpha}\right)^2 \preceq I_{M_k} - \frac{\sigma_k}{2c(\alpha)\alpha}A_{M_k}^{\alpha}$$
in the proof of Lemma 4.2 in [13] is satisfied for all $\sigma_k \le \sigma$ in (30). □
From the previous lemma, following the same idea as given in [38], we propose to use the following Jacobi parameter
$$\omega_k = \omega = \frac{2^{2-\alpha}\alpha}{3}, \qquad \forall\, k \ge 0,$$
which provides a good convergence rate as confirmed in the numerical results in Section 5.
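As a quick sanity check (a sketch, not from the paper), formula (31) reproduces the damping values used later in Table 1:

alpha = [1.2 1.5 1.8];
omega = 2.^(2 - alpha).*alpha/3      % approximately 0.6964, 0.7071, 0.6892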

3.3. Approximation Property and Level Independency

According to Remark 1, the uniform boundedness of $\delta_k$ is ensured by proving Equation (23).
First of all, we prove that, for $k \in \mathbb{N}_0$, the sequence of functions $g_{k,\alpha}(x) = \frac{p_{k,\alpha}^2(\pi - x)}{f_{k,\alpha}(x)}$ is monotonically increasing in x. Recalling the expression of $p_{k,\alpha}$ in (21) and differentiating $g_{k,\alpha}$, we have
$$\frac{d}{dx}g_{k,\alpha}(x) = \frac{C_{k+1,\alpha}^2\,(1-\cos x)}{f_{k,\alpha}^2(x)}\left[2\sin x\,f_{k,\alpha}(x) - (1-\cos x)\frac{d}{dx}f_{k,\alpha}(x)\right],$$
and hence it is equivalent to proving that
$$\hat g_{k,\alpha}(x) = 2\sin x\,f_{k,\alpha}(x) - (1-\cos x)\frac{d}{dx}f_{k,\alpha}(x)$$
is non-negative for all $k \in \mathbb{N}_0$. This result can be proved by induction on k.
For $k = 0$, replacing the expression of $f_{0,\alpha}$ (see Equation (14)) in $\hat g_{k,\alpha}$, by direct computation we obtain $\hat g_{0,\alpha}(x) \ge 0$, $\forall x \in [0,\pi]$.
Assuming that for $k = n$ it holds $\hat g_{n,\alpha}(x) \ge 0$, $\forall x \in [0,\pi]$, we have to prove that the same is true for $k = n+1$.
Applying the recurrence (20) to $f_{n+1,\alpha}$, we have
$$\hat g_{n+1,\alpha}(x) \;\propto\; p^2\!\left(\frac{x}{2}\right)S_1(x) - p^2\!\left(\pi - \frac{x}{2}\right)S_2(x),$$
where $p(x) = 1 + \cos(x)$, as defined in (21), and
$$S_1(x) = \sin\frac{x}{2}\,f_{n,\alpha}\!\left(\frac{x}{2}\right) - \frac{1}{2}\left(1-\cos\frac{x}{2}\right)\frac{d}{dx}f_{n,\alpha}\!\left(\frac{x}{2}\right), \qquad S_2(x) = \sin\frac{x}{2}\,f_{n,\alpha}\!\left(\pi-\frac{x}{2}\right) + \frac{1}{2}\left(1+\cos\frac{x}{2}\right)\frac{d}{dx}f_{n,\alpha}\!\left(\pi-\frac{x}{2}\right).$$
Thanks to the inductive assumption and direct computation, we obtain the desired result $\hat g_{n+1,\alpha}(x) \ge 0$, $\forall x \in [0,\pi]$, i.e.,
$$\frac{d}{dx}g_{k,\alpha}(x) \ge 0, \qquad \forall k \in \mathbb{N}_0.$$
Figure 3 depicts the function g k , α for different values of k and α .
In conclusion, by the monotonicity of $g_{k,\alpha}(x)$, for all $k \ge 0$, we have
$$\max_{x\in[0,\pi]} g_{k,\alpha}(x) = g_{k,\alpha}(\pi) = \frac{p_{k,\alpha}^2(0)}{f_{k,\alpha}(\pi)} = \frac{4\,C_{k,\alpha}^2}{c(\alpha)\,2^{\alpha+1}},$$
and since the sequence $C_{k,\alpha}$ is bounded thanks to Equation (26) (see also Figure 1), we have that $\{\gamma_k\}_k$, and hence $\{\delta_k\}_k$, is bounded.
Combining the previous result on δ k with Equation (30) for the smoothing analysis, it holds (19), i.e., the level independency is satisfied.

4. Band Approximation

Multigrid methods are often applied as preconditioners for Krylov methods, and often to an approximation of $A_M^{\alpha}$ instead of the original coefficient matrix. Along the same lines as [7], in this section we discuss a band approximation of $A_M^{\alpha}$ to combine with the multigrid method discussed in the previous section. The advantage of using a band matrix is the possibility of applying the Galerkin approach with a linear cost in M instead of $O(M\log M)$, while inheriting the V-cycle optimality from the results in [37].
Starting from the truncated Fourier sum of order s of the symbol $f_{\alpha}$,
$$g_s(x) = 2\sum_{k=0}^{s} g_k^{(\alpha)}\cos((k-1)x),$$
we consider as band approximation of $A_M^{\alpha}$ the associated Toeplitz matrix $B_M^{\alpha}(s) = T_M(g_s)$, explicitly given by
$$B_M^{\alpha}(s) = \begin{pmatrix} 2g_1^{(\alpha)} & g_0^{(\alpha)}+g_2^{(\alpha)} & g_3^{(\alpha)} & \cdots & g_s^{(\alpha)} & & \\ g_0^{(\alpha)}+g_2^{(\alpha)} & 2g_1^{(\alpha)} & g_0^{(\alpha)}+g_2^{(\alpha)} & \ddots & & \ddots & \\ g_3^{(\alpha)} & \ddots & \ddots & \ddots & \ddots & & g_s^{(\alpha)} \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ g_s^{(\alpha)} & & \ddots & \ddots & \ddots & \ddots & g_3^{(\alpha)} \\ & \ddots & & \ddots & g_0^{(\alpha)}+g_2^{(\alpha)} & 2g_1^{(\alpha)} & g_0^{(\alpha)}+g_2^{(\alpha)} \\ & & g_s^{(\alpha)} & \cdots & g_3^{(\alpha)} & g_0^{(\alpha)}+g_2^{(\alpha)} & 2g_1^{(\alpha)} \end{pmatrix} \in \mathbb{R}^{M\times M}.$$
The emerging banded matrix is then defined as
$$\tilde A_M^{\alpha}(s) = -\bar c\,B_M^{\alpha}(s).$$
As already discussed in [7,13], concerning the application of our multigrid method to a linear system with the coefficient matrix $\tilde A_M^{\alpha}(s)$, we have the following features thanks to the results in [35,37]:
  • The computation of the coarser matrices by the Galerkin approach (16), using the projector defined in (15), preserves the band structure at the coarser levels; see ([35] [Proposition 2]).
  • If the function $g_s$ is non-negative, then the V-cycle is optimal, i.e., it has a constant convergence rate independent of the size M and a computational cost per iteration proportional to $O(sM)$, that is $O(M\log M)$ with the choice of s such that $M = 2^s - 1$.
About the spectral features of preconditioned Toeplitz matrices with Toeplitz preconditioners, we recall the well-known results of both localization and distribution type in the sense of Weyl in the following theorem (see [5]).
Theorem 2.
Let f be real-valued, 2 π -periodic, and continuous and g be non-negative, 2 π -periodic, continuous, with max g > 0 . Then,
  • T M ( g ) is positive definite for every M;
  • if $r = \inf\frac{f}{g}$, $R = \sup\frac{f}{g}$ with $r < R$, then all the eigenvalues of $T_M^{-1}(g)T_M(f)$ lie in the open set $(r, R)$ for every M;
  • $\{T_M(f)\}_M$, $\{T_M(g)\}_M$, $\{T_M^{-1}(g)T_M(f)\}_M$ are distributed in the Weyl sense as $f$, $g$, $\frac{f}{g}$, respectively, i.e., for all F in the set of continuous functions with compact support over $\mathbb{C}$ it holds
    $$\lim_{M\to\infty}\frac{1}{M}\sum_{j=1}^{M}F\left(\lambda_j(A_M)\right) = \frac{1}{b-a}\int_{a}^{b}F(\kappa(t))\,dt,$$
    with $A_M = T_M(f), T_M(g), T_M^{-1}(g)T_M(f)$, $\kappa = f, g, \frac{f}{g}$, respectively, $\lambda_j(A_M)$ the eigenvalues of $A_M$, and $[a,b] = [-\pi,\pi]$.
In other words, a good Toeplitz preconditioner for a Toeplitz matrix generated by f should be generated by a function g such that $\frac{f}{g}$ is close to 1 in a certain metric, for instance in the $L^{\infty}$ or in the $L^1$ norm.
For the band preconditioner in (33), Figure 4 shows the graphs of $g = g_s$ and $f = f_{\alpha}$ for two different values of α, while the differences $\delta_s = f_{\alpha} - g_s$ are depicted in Figure 5 for some values of s. Obviously, the quality of the approximation improves as s increases, but it is already good enough for small s, in particular when α approaches the value 2 (see the scale of the y-axis in Figure 5).
More importantly, in the light of Theorem 2, Figure 6 depicts the functions $\kappa_s - 1$ with $\kappa_s = \frac{f_{\alpha}}{g_s}$. This quantity measures the relative approximation, which is the one determining the quality of the preconditioning, since $\kappa_s - 1$ on $[0,\pi]$ is the distribution function of the shifted preconditioned matrix sequence
$$\left\{T_M^{-1}(g)\,T_M(f) - I_M\right\}_M, \qquad g = g_s, \quad f = f_{\alpha},$$
in the sense of Weyl and in accordance again with Theorem 2. Note that the function $\kappa_s - 1$ is almost zero except around the zero of $f_{\alpha}$ at the origin.
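A minimal MATLAB experiment (a sketch under the sign convention assumed earlier, not the paper's test suite) illustrates the effect of the band preconditioner; for brevity, the banded matrix is inverted here by a sparse Cholesky factorization rather than by the V-cycle of Section 3.

alpha = 1.8;  M = 1023;  s = 9;
g = zeros(M+1,1);  g(1) = 1;
for k = 1:M, g(k+1) = (1 - (alpha+1)/k)*g(k); end
calpha = -1/(2*cos(alpha*pi/2));
G = toeplitz(g(2:end), [g(2); g(1); zeros(M-2,1)]);
A = -calpha*(G + G');                           % h^alpha A_M^alpha
t = A(:,1);                                     % first column of the symmetric Toeplitz A
B = sparse(toeplitz([t(1:s); zeros(M-s,1)]));   % band approximation of bandwidth s
R = chol(B);                                    % B is diagonally dominant, hence SPD
b = A*ones(M,1);                                % right-hand side with known solution
[u,~,~,it] = pcg(A, b, 1e-8, 400, R', R);
fprintf('band-preconditioned PCG: %d iterations\n', it);

Removing the last two arguments of pcg gives the unpreconditioned iteration count for comparison.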

5. Numerical Examples

In this section, we present some numerical examples, to verify the effectiveness of the MGM used both as a solver and preconditioner. In what follows:
  • $V_{\nu_1}^{\nu_2}$ consists of a Galerkin V-cycle with linear interpolation as grid transfer operator and $\nu_1$ and $\nu_2$ iterations of pre- and post-smoother (weighted Jacobi), respectively;
  • $P_C$ and $P_S$ represent the Chan [39] and Strang circulant preconditioners, respectively;
  • $P_sV_{\nu_1}^{\nu_2}$ denotes the banded preconditioner, defined as in Equation (33) and inverted using the MGM with Galerkin approach $V_{\nu_1}^{\nu_2}$;
  • $PV_{\nu_1}^{\nu_2}$ and $P\widetilde{V}_{\nu_1}^{\nu_2}$ are multigrid preconditioners, based on the Galerkin and the geometric approach, respectively.
The aforementioned preconditioners are combined with CG and GMRES computationally performed using built-in pcg and gmres Matlab functions. The stopping criterion is chosen as
$$\frac{\|r_k\|}{\|r_0\|} < 10^{-8},$$
where $\|\cdot\|$ denotes the Euclidean norm and $r_k$ is the residual vector at the k-th iteration. The initial guess is fixed as the zero vector. All the numerical experiments are run using MATLAB R2021a on an HP 17-cp0000nl computer with an AMD Ryzen 7 5700U with Radeon Graphics CPU and 16 GB of RAM.
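For the circulant preconditioners listed above, the following MATLAB sketch (not the authors' code) shows how the Strang preconditioner of a symmetric Toeplitz matrix can be applied inside pcg through FFTs:

alpha = 1.5;  M = 1024;
g = zeros(M+1,1);  g(1) = 1;
for k = 1:M, g(k+1) = (1 - (alpha+1)/k)*g(k); end
calpha = -1/(2*cos(alpha*pi/2));
G = toeplitz(g(2:end), [g(2); g(1); zeros(M-2,1)]);
A = -calpha*(G + G');  b = A*ones(M,1);
t = A(:,1);
c = [t(1:floor(M/2)+1); flipud(t(2:ceil(M/2)))];   % Strang wrap-around first column
ev = real(fft(c));                                 % eigenvalues of the circulant
Pinv = @(r) real(ifft(fft(r)./ev));                % applies P_S^{-1} in O(M log M)
[u,~,~,it] = pcg(A, b, 1e-8, 400, Pinv);
fprintf('Strang-preconditioned PCG: %d iterations\n', it);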
The following examples are selected to validate the theoretical analysis. Specifically, Example 1 is a 1D problem, which helps in determining the optimal settings for the multigrid method and circulant preconditioner. Example 2 is a 2D constant coefficient example, which extends the finding from 1D to a multidimensional context. Example 3 is particularly noteworthy as it involves a 2D problem with non-constant coefficients. This example demonstrates the robustness of multigrid methods compared to matrix–algebra (circulant and τ ) preconditioners, when dealing with varying diffusion coefficients.
Example 1.
Consider the following 1D-RFDE given in Equation (1), with Ω = [ 0 , 1 ] and the source term built from the exact solution
$$u(x) = x^2(1-x)^2.$$
Table 1 shows the number of iterations of the proposed multigrid methods and preconditioners for three different values of α .
Regarding multigrid methods, the observations are as follows:
  • The Galerkin approach is robust, even as V-cycle, in agreement with the previous theoretical analysis.
  • The geometric approach is robust only when α > 1.5 .
  • The robustness of the geometric multigrid can be improved by using it as a preconditioner, as seen in the column P ˜ V 1 1 . In particular, it is even more robust than the band multigrid preconditioner P s V 1 1 .
When comparing the circulant preconditioners, the Strang preconditioner P S is preferable to the optimal Chan preconditioner P C .

2D Problems

Here, we extend our study to the two-dimensional variable-coefficient RFDE, given by
$$-c(x,y)\,\frac{\partial^{\alpha} u(x,y)}{\partial |x|^{\alpha}} - e(x,y)\,\frac{\partial^{\beta} u(x,y)}{\partial |y|^{\beta}} = m(x,y), \qquad (x,y) \in \Omega = [a_1,b_1]\times[a_2,b_2],$$
with boundary condition
$$u(x,y) = 0, \qquad (x,y) \in \partial\Omega,$$
where $c(x,y)$, $e(x,y)$ are non-negative diffusion coefficients, $m(x,y)$ is the source term, and $\frac{\partial^{\gamma} u(x,y)}{\partial |x|^{\gamma}}$, $\frac{\partial^{\gamma} u(x,y)}{\partial |y|^{\gamma}}$ are the Riesz fractional derivatives for $\gamma \in \{\alpha,\beta\}$, $\alpha,\beta \in (1,2)$. In order to define the uniform partition of the spatial domain Ω, we fix $M_1, M_2 \in \mathbb{N}$ and define
$$x_j = a_1 + j\,h_x, \quad h_x = \frac{b_1-a_1}{M_1+1}, \quad j = 0,\ldots,M_1+1, \qquad y_j = a_2 + j\,h_y, \quad h_y = \frac{b_2-a_2}{M_2+1}, \quad j = 0,\ldots,M_2+1.$$
Let us now introduce the following M-dimensional vectors, with M = M 1 M 2 :
$$u = [u_{1,1}, u_{2,1},\ldots,u_{M_1,1}, u_{1,2}, u_{2,2},\ldots,u_{M_1,2},\ldots,u_{1,M_2}, u_{2,M_2},\ldots,u_{M_1,M_2}]^{T},$$
$$c = [c_{1,1}, c_{2,1},\ldots,c_{M_1,1}, c_{1,2}, c_{2,2},\ldots,c_{M_1,2},\ldots,c_{1,M_2}, c_{2,M_2},\ldots,c_{M_1,M_2}]^{T},$$
$$e = [e_{1,1}, e_{2,1},\ldots,e_{M_1,1}, e_{1,2}, e_{2,2},\ldots,e_{M_1,2},\ldots,e_{1,M_2}, e_{2,M_2},\ldots,e_{M_1,M_2}]^{T},$$
$$m = [m_{1,1}, m_{2,1},\ldots,m_{M_1,1}, m_{1,2}, m_{2,2},\ldots,m_{M_1,2},\ldots,m_{1,M_2}, m_{2,M_2},\ldots,m_{M_1,M_2}]^{T},$$
where $u_{i,j} = u(x_i,y_j)$, $c_{i,j} = c(x_i,y_j)$, and $e_{i,j} = e(x_i,y_j)$.
Using the shifted Grünwald difference formula for both fractional derivatives in x and y leads to the matrix–vector form
$$A_M^{(\alpha,\beta)}\,u = m,$$
with the following (diagonal times two-level Toeplitz structured) coefficient matrix
$$A_M^{(\alpha,\beta)} = -\bar c_{\alpha}\,C\left(I_{M_2}\otimes\left(G_{M_1}^{\alpha} + (G_{M_1}^{\alpha})^{T}\right)\right) - \bar c_{\beta}\,E\left(\left(G_{M_2}^{\beta} + (G_{M_2}^{\beta})^{T}\right)\otimes I_{M_1}\right),$$
where $C = \mathrm{diag}(c)$, $E = \mathrm{diag}(e)$, $\bar c_{\alpha} = \frac{c(\alpha)}{h_x^{\alpha}}$, $\bar c_{\beta} = \frac{c(\beta)}{h_y^{\beta}}$, and $c(\gamma)$, $G_{M_i}^{\gamma}$ are defined as in Section 2.
Note that the coefficient matrix $A_M^{(\alpha,\beta)}$ is a strictly diagonally dominant M-matrix. Moreover, when the diffusion coefficients are constant and equal, the coefficient matrix $A_M^{(\alpha,\beta)}$ is a symmetric positive definite block Toeplitz with Toeplitz blocks (BTTB) matrix.
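The assembly of $A_M^{(\alpha,\beta)}$ can be sketched in a few MATLAB lines (not the authors' code; the sign convention matches the 1D sketch of Section 2 and is an assumption on our part):

alpha = 1.5;  beta = 1.5;  M1 = 32;  M2 = 32;
a1 = 0;  b1 = 1;  a2 = 0;  b2 = 1;
hx = (b1 - a1)/(M1 + 1);  hy = (b2 - a2)/(M2 + 1);
x = a1 + (1:M1)'*hx;  y = a2 + (1:M2)'*hy;
cfun = @(x,y) 1 + 0*x;  efun = @(x,y) 1 + x.*y;    % sample diffusion coefficients
[X,Y] = ndgrid(x,y);                               % x varies fastest, as in (38)
C = spdiags(cfun(X(:),Y(:)), 0, M1*M2, M1*M2);
E = spdiags(efun(X(:),Y(:)), 0, M1*M2, M1*M2);
Gx = grun(alpha, M1);  Gy = grun(beta, M2);
ca = (-1/(2*cos(alpha*pi/2)))/hx^alpha;            % \bar{c}_alpha
cb = (-1/(2*cos(beta *pi/2)))/hy^beta;             % \bar{c}_beta
A  = -ca*C*kron(speye(M2), Gx + Gx') - cb*E*kron(Gy + Gy', speye(M1));

function G = grun(gam, M)                          % 1D shifted Grünwald matrix (10)
    g = zeros(M+1,1);  g(1) = 1;
    for k = 1:M, g(k+1) = (1 - (gam+1)/k)*g(k); end
    G = sparse(toeplitz(g(2:end), [g(2); g(1); zeros(M-2,1)]));
end

When saved as a script, the local function grun must appear at the end of the file, as above.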
In the following examples, we compare the performance of the MGMs, banded, circulant, and τ -based preconditioners.
  • Multigrid preconditioners $PV_1^1$ and $P\widetilde{V}_1^1$. In both cases, the grid transfer operator is bilinear interpolation, weighted Jacobi is used as smoother, and one iteration of the V-cycle is performed to approximate the inverse of $A_M^{(\alpha,\beta)}$. Following the 1D case, the relaxation parameter ω of Jacobi is computed as
    $$\omega = \frac{4}{5}\,\zeta,$$
    where $\zeta = \frac{2\hat F_0}{\|F_{\alpha,\beta}(x,y)\|_{\infty}}$, with $\hat F_0$ being the first Fourier coefficient of $F_{\alpha,\beta}(x,y)$, the symbol of $A_M^{(\alpha,\beta)}$ (see [13] for more details).
  • According to the 1D numerical results, the Strang circulant preconditioner outperforms the optimal Chan preconditioner. Therefore, in the following, we consider only the two-level Strang circulant preconditioner defined as
    $$P_S = -\bar c_{\alpha}\,c_{av}\left(I_{M_2}\otimes\left(S(G_{M_1}^{\alpha}) + S(G_{M_1}^{\alpha})^{T}\right)\right) - \bar c_{\beta}\,e_{av}\left(\left(S(G_{M_2}^{\beta}) + S(G_{M_2}^{\beta})^{T}\right)\otimes I_{M_1}\right),$$
    where $c_{av} = \mathrm{mean}(c)$, $e_{av} = \mathrm{mean}(e)$, and $S(T_M)$ is the Strang circulant approximation of the Toeplitz matrix $T_M$.
  • As in the 1D case, here we extend the banded preconditioner $P_sV_1^1$ strategy to the 2D setting. To approximate the inverse of $\tilde A_M^{(\alpha,\beta)}(s)$, we perform $V_1^1$ iterations of the V-cycle with Galerkin approach, weighted Jacobi as smoother, and bilinear interpolation as grid transfer operator.
    The 2D banded matrix is defined as
    $$\tilde A_M^{(\alpha,\beta)}(s) = -\bar c_{\alpha}\,C\left(I_{M_2}\otimes\left(G_{M_1}^{\alpha,s} + (G_{M_1}^{\alpha,s})^{T}\right)\right) - \bar c_{\beta}\,E\left(\left(G_{M_2}^{\beta,s} + (G_{M_2}^{\beta,s})^{T}\right)\otimes I_{M_1}\right),$$
    where $G_{M_i}^{\gamma,s}$ denotes the banded truncation of $G_{M_i}^{\gamma}$.
  • $P_{\tau}$ denotes the τ-based preconditioner (a small one-dimensional verification sketch is reported after this list). For the symmetric Toeplitz matrix $\mathcal{G}_{M_i}^{\gamma} = G_{M_i}^{\gamma} + (G_{M_i}^{\gamma})^{T}$, with $i = 1, 2$, the τ matrix is
    $$\tau(\mathcal{G}_{M_i}^{\gamma}) = \mathcal{G}_{M_i}^{\gamma} - H_{M_i},$$
    where $H_{M_i}$ is a Hankel matrix, whose entries along the antidiagonals are constant and equal to
    $$[g_3^{(\gamma)}, g_4^{(\gamma)},\ldots,g_{M_i}^{(\gamma)}, 0, 0, 0, g_{M_i}^{(\gamma)},\ldots,g_4^{(\gamma)}, g_3^{(\gamma)}].$$
    Thanks to [40], the τ matrix can be diagonalized as follows:
    $$\tau(\mathcal{G}_{M_i}^{\gamma}) = S_{M_i}\,D_{M_i}\,S_{M_i},$$
    where $D_{M_i} = \mathrm{diag}(\lambda(\mathcal{G}_{M_i}^{\gamma} - H_{M_i}))$ and $S_{M_i}$ is the sine transform matrix whose entries are
    $$\left[S_{M_i}\right]_{i,j} = \sqrt{\frac{2}{M_i+1}}\,\sin\left(\frac{ij\pi}{M_i+1}\right), \qquad i,j = 1,\ldots,M_i.$$
    The τ-based preconditioner for the resulting linear system (39) is
    $$P_{\tau} = \bar D_M\,S_M\,D_M\,S_M,$$
    with $S_M = S_{M_1}\otimes S_{M_2}$, $D_M = I_{M_2}\otimes D_{M_1} + D_{M_2}\otimes I_{M_1}$, and $\bar D_M$ the diagonal matrix that contains the average of the diffusion coefficients. From the computational point of view, $P_{\tau}$ is extremely suitable because the matrix–vector product can be performed in $O(M\log M)$ operations through the discrete sine transform (DST) algorithm.
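The following MATLAB sketch (dense and purely illustrative, not the authors' code) verifies in 1D that the τ correction built from the Hankel matrix (47) is indeed diagonalized by the sine transform (48):

alpha = 1.5;  M = 32;
g = zeros(M+1,1);  g(1) = 1;
for k = 1:M, g(k+1) = (1 - (alpha+1)/k)*g(k); end
G = toeplitz(g(2:end), [g(2); g(1); zeros(M-2,1)]);
G = G + G';                                        % symmetric Toeplitz factor
h = [g(4:M+1); 0; 0; 0; flipud(g(4:M+1))];         % antidiagonal entries of (47)
H = hankel(h(1:M), h(M:2*M-1));
T = G - H;                                         % tau(G) = G - H
S = sqrt(2/(M+1))*sin((1:M)'*(1:M)*pi/(M+1));      % sine transform matrix (48)
D = S*T*S;                                         % should be numerically diagonal
fprintf('off-diagonal residue: %.2e\n', norm(D - diag(diag(D)), 'fro'));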
Example 2.
In this example, we consider the following 2D-RFDE given in Equation (37), with c ( x , y ) = e ( x , y ) = 1 and Ω = [ 0 , 1 ] × [ 0 , 1 ] . The source term and exact solution are given as
$$u(x,y) = x^2(1-x)^2\,y^2(1-y)^2,$$
and
$$m(x,y) = \frac{1}{\cos\left(\frac{\alpha\pi}{2}\right)}\,y^2(1-y)^2\left[\frac{2}{\Gamma(3-\alpha)}\left(x^{2-\alpha}+(1-x)^{2-\alpha}\right) - \frac{12}{\Gamma(4-\alpha)}\left(x^{3-\alpha}+(1-x)^{3-\alpha}\right) + \frac{24}{\Gamma(5-\alpha)}\left(x^{4-\alpha}+(1-x)^{4-\alpha}\right)\right] + \frac{1}{\cos\left(\frac{\beta\pi}{2}\right)}\,x^2(1-x)^2\left[\frac{2}{\Gamma(3-\beta)}\left(y^{2-\beta}+(1-y)^{2-\beta}\right) - \frac{12}{\Gamma(4-\beta)}\left(y^{3-\beta}+(1-y)^{3-\beta}\right) + \frac{24}{\Gamma(5-\beta)}\left(y^{4-\beta}+(1-y)^{4-\beta}\right)\right].$$
Table 2 presents the number of iterations and CPU times required by the proposed multigrid methods and preconditioners. Since we have constant coefficients c ( x , y ) = e ( x , y ) = 1 , the τ preconditioner P τ stands out as the most robust choice, both in terms of iteration number and CPU time. The geometric multigrid used as a preconditioner is also a good option, although the robustness of multigrid methods declines in the case of anisotropic problems, i.e., when ( α , β ) = ( 1.7 , 1.9 ) . In such a case, it is advisable to adopt the strategies proposed in [7].
Example 3.
Here, we consider the following 2D-RFDE (37) with Ω = [ 0 , 5 ] × [ 0 , 5 ] and the diffusion coefficients are specified as
$$c(x,y) = 1, \qquad e(x,y) = 1 + xy.$$
The source term is built from the exact solution given as
$$u(x,y) = x^4(5-x)^4\,y^4(5-y)^4.$$
Due to the nonsymmetry of the coefficient matrix caused by diffusion coefficients, the CG method has been replaced with GMRES. Now the geometric multigrid preconditioner provides a good and robust alternative to the τ preconditioner, especially when the multigrid method is applied to the band approximation ( P s V 1 1 ) (Table 3).

6. Application to Image Deblurring

Fractional differential operators have been investigated to enhance diffusion in image denoising and deblurring [16,41]. In this section, we investigate the effectiveness of the previous preconditioning strategies to the Tikhonov regularization problem
$$\min_{u\in\mathbb{R}^M}\ \|B_M u - m\|_2^2 + \mu\,u^{T}A_M^{(\alpha,\beta)}u,$$
where m is the noisy and blurred observed image, B M is the discrete convolution operator associated with the blurring phenomenon, and μ > 0 is the regularization parameter.
The solution of the least squares problem (50) can be obtained by solving the linear system
$$\left(B_M^{T}B_M + \mu\,A_M^{(\alpha,\beta)}\right)u = B_M^{T}m,$$
by the preconditioned CG method, since the coefficient matrix is positive definite. The matrix $B_M$ exhibits a block Toeplitz with Toeplitz blocks structure like the matrix $A_M^{(\alpha,\beta)}$, but its spectral behavior is completely different. Indeed, the ill-conditioned subspace of $B_M$ is very large and substantially intersects the high frequencies. Therefore, the coefficient matrix $B_M^{T}B_M + \mu A_M^{(\alpha,\beta)}$ has a double source of ill-conditioning, involving both low and high frequencies. As a result, the theoretical analysis previously employed cannot be directly applied. In conclusion, the application of multigrid methods presents considerable challenges, as noted in [42], necessitating further investigation in the future. Here, we show the effectiveness of the τ preconditioner in comparison with the Strang circulant preconditioner and with CG without preconditioning.
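In practice, the system (51) is solved matrix-free; the sketch below is illustrative only, and blur, blurT, mobs, and A are hypothetical placeholders for the blurring operator, its transpose, the observed image, and the regularization matrix of Equation (40), respectively:

mu   = 1e-4;                                   % regularization parameter
Afun = @(u) blurT(blur(u)) + mu*(A*u);         % applies B'*B + mu*A (placeholders)
rhs  = blurT(mobs(:));                         % B' times the observed data
[u,~,~,it] = pcg(Afun, rhs, 1e-6, 200);        % unpreconditioned baseline

The circulant and τ preconditioners enter as an additional function-handle argument of pcg, exactly as in the 1D sketches of Section 5.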
We consider the true satellite image of 128 × 128 pixels in Figure 7a. The observed image in Figure 7b is affected by a Gaussian blur and 5 % of white Gaussian noise. We choose the fractional derivative α = β = 1.1 and the diffusion coefficients c ( x , y ) = e ( x , y ) = 1 . The matrix A M ( α , β ) is defined as in (40), while the Strang circulant and the τ preconditioners are defined in (42) and (46), respectively. Since the quality of the reconstruction depends on the model (50), all the different solvers provide a similar reconstruction. The reconstructed image for μ = 10 4 is shown in Figure 7c.
Various strategies exist for estimating the parameter μ (see, e.g., [43]), but such an estimation is beyond the scope of this work. Instead, we test different values of μ. The linear system (51) is solved using the built-in pcg Matlab function with a tolerance of $10^{-6}$. Table 4 reports the number of iterations and the related restoration error for various values of μ. It is noteworthy that the τ preconditioner demonstrates a significant speedup compared to both CG without preconditioning and PCG with the Strang circulant preconditioner for all relevant values of $\mu \in [10^{-5}, 10^{-3}]$.
We should also point out that the considered images show a black background and hence all the various BCs are equivalent in terms of the precision of the reconstruction. However, in a general setting, we observe that the most precise BCs are related to the τ algebra; see [30].

7. Conclusions

We discussed the convergence analysis of the multigrid method applied to a Riesz problem and compared it with matrix–algebra-based preconditioners. The numerical results demonstrate that multigrid methods are competitive in scenarios with variable diffusion coefficients. This motivates further investigation into multigrid methods for 3D RFDEs [44], as these methods do not suffer from issues related to dimensionality in contrast to matrix–algebra preconditioners.
An application of the latter to an image deblurring problem with Tikhonov regularization is provided. Certain matrix–algebra preconditioners are successfully adapted, while the application of multigrid methods to this specific problem warrants further investigation in the future. This specific application will be considered in future research works, especially considering numerical methods for nonlinear models that require solving a linear problem as an inner step.

Author Contributions

Conceptualization and writing—original draft, D.A., M.D., M.M., S.S.-C. and K.T. All authors have read and agreed to the published version of the manuscript.

Funding

The work of the authors is supported by the GNCS-INdAM project CUP E53C22001930001 and project CUP E53C23001670001. The work of the second author is partially funded by MUR—PRIN 2022, grant number 2022ANC8HL. The third author acknowledges the MUR Excellence Department Project MatMod@TOV awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C23000330006. The work of the fourth author is funded by the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union’s Horizon 2020 research and innovation program and Belgium, France, Germany, and Switzerland. Furthermore, the fourth author is grateful for the support of the Laboratory of Theory, Economics and Systems—Department of Computer Science at Athens University of Economics and Business.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, A.; Kumar, S.; Vigo-Aguiar, J. High-order schemes and their error analysis for generalized variable coefficients fractional reaction–diffusion equations. Math. Methods Appl. Sci. 2023, 46, 16521–16541. [Google Scholar] [CrossRef]
  2. Tian, W.; Zhou, H.; Deng, W. A class of second order difference approximations for solving space fractional diffusion equations. Math. Comput. 2015, 84, 1703–1727. [Google Scholar] [CrossRef]
  3. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for fractional advection–dispersion flow equations. J. Comput. Appl. Math. 2004, 172, 65–77. [Google Scholar] [CrossRef]
  4. Li, T.Y.; Chen, F.; Sun, H.W.; Sun, T. Preconditioning technique based on sine transformation for nonlocal Helmholtz equations with fractional Laplacian. J. Sci. Comput. 2023, 97, 17. [Google Scholar] [CrossRef]
  5. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Chams, Switzerland, 2017; Volume 1. [Google Scholar]
  6. Donatelli, M.; Mazza, M.; Serra-Capizzano, S. Spectral analysis and structure preserving preconditioners for fractional diffusion equations. J. Comput. Phys. 2016, 307, 262–279. [Google Scholar] [CrossRef]
  7. Donatelli, M.; Krause, R.; Mazza, M.; Trotti, K. Multigrid preconditioners for anisotropic space-fractional diffusion equations. Adv. Comput. Math. 2020, 46, 1–31. [Google Scholar] [CrossRef]
  8. Lei, S.L.; Sun, H.W. A circulant preconditioner for fractional diffusion equations. J. Comput. Phys. 2013, 242, 715–725. [Google Scholar] [CrossRef]
  9. Huang, X.; Lin, X.L.; Ng, M.K.; Sun, H.W. Spectral analysis for preconditioning of multidimensional Riesz fractional diffusion equations. Numer. Math. Theory Methods Appl. 2022, 15, 565–591. [Google Scholar] [CrossRef]
  10. Barakitis, N.; Ekström, S.E.; Vassalos, P. Preconditioners for fractional diffusion equations based on the spectral symbol. Numer. Linear Algebra Appl. 2022, 29, e2441. [Google Scholar] [CrossRef]
  11. She, Z.H.; Lao, C.-X.; Yang, H.; Lin, F.R. Banded preconditioners for Riesz space fractional diffusion equations. J. Sci. Comput. 2021, 86, 31. [Google Scholar] [CrossRef]
  12. Pang, H.K.; Sun, H.W. Multigrid method for fractional diffusion equations. J. Comput. Phys. 2012, 231, 693–703. [Google Scholar] [CrossRef]
  13. Moghaderi, H.; Dehghan, M.; Donatelli, M.; Mazza, M. Spectral analysis and multigrid preconditioners for two-dimensional space-fractional diffusion equations. J. Comput. Phys. 2017, 350, 992–1011. [Google Scholar] [CrossRef]
  14. Pan, K.; Sun, H.W.; Xu, Y.; Xu, Y. An efficient multigrid solver for two-dimensional spatial fractional diffusion equations with variable coefficients. Appl. Math. Comput. 2021, 402, 126091. [Google Scholar] [CrossRef]
  15. Bai, J.; Feng, X.C. Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502. [Google Scholar] [CrossRef]
  16. Yang, Q.; Chen, D.; Zhao, T.; Chen, Y. Fractional calculus in image processing: A review. Fract. Calc. Appl. Anal. 2016, 19, 1222–1249. [Google Scholar] [CrossRef]
  17. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. In Mathematics in Science and Engineering; Academic Press: New York, NY, USA, 1998; Volume 198. [Google Scholar]
  18. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
  19. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90. [Google Scholar] [CrossRef]
  20. Wang, H.; Wang, K.; Sircar, T. A direct O(Nlog2N) finite difference method for fractional diffusion equations. J. Comput. Phys. 2010, 229, 8095–8104. [Google Scholar] [CrossRef]
  21. Böttcher, A.; Silbermann, B. Introduction to Large Truncated Toeplitz Matrices; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  22. Widom, H. Szegö’s limit theorem: The higher-dimensional matrix case. J. Funct. Anal. 1980, 39, 182–198. [Google Scholar] [CrossRef]
  23. Tilli, P. A note on the spectral distribution of Toeplitz matrices. Linear Multilinear Algebra 1998, 45, 147–159. [Google Scholar] [CrossRef]
  24. Tilli, P. Locally Toeplitz sequences: Spectral properties and applications. Linear Algebra Its Appl. 1998, 278, 91–120. [Google Scholar]
  25. Tyrtyshnikov, E.E. A unifying approach to some old and new theorems on distribution and clustering. Linear Algebra Its Appl. 1998, 232, 1–43. [Google Scholar] [CrossRef]
  26. Pang, H.K.; Sun, H.W. Fast numerical contour integral method for fractional diffusion equations. J. Sci. Comput. 2016, 66, 41–66. [Google Scholar] [CrossRef]
  27. Serra-Capizzano, S. On the extreme spectral properties of Toeplitz matrices generated L1 functions with several minima/maxima. BIT 1996, 36, 135–142. [Google Scholar] [CrossRef]
  28. Böttcher, A.; Grudsky, S.M. On the condition numbers of large semidefinite Toeplitz matrices. Linear Algebra Its Appl. 1998, 279, 285–301. [Google Scholar] [CrossRef]
  29. Axelsson, O.; Lindskog, G. On the rate of convergence of the preconditioned conjugate gradient method. Numer. Math. 1986, 48, 499–523. [Google Scholar] [CrossRef]
  30. Serra-Capizzano, S. A note on antireflective boundary conditions and fast deblurring models. SIAM J. Sci. Comput. 2004, 25, 1307–1325. [Google Scholar] [CrossRef]
  31. Hackbusch, W. Multi-Grid Methods and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 4. [Google Scholar]
  32. Ruge, J.W.; Stüben, K. Algebraic multigrid. In Multigrid Methods; SIAM: Philadelphia, PA, USA, 1987; pp. 73–130. [Google Scholar]
  33. Trottenberg, U.; Oosterlee, C.W.; Schuller, A. Multigrid; Elsevier: Amsterdam, The Netherlands, 2000. [Google Scholar]
  34. Chan, R.H.; Chang, Q.S.; Sun, H.W. Multigrid method for ill-conditioned symmetric Toeplitz systems. SIAM J. Sci. Comput. 1998, 19, 516–529. [Google Scholar]
  35. Aricò, A.; Donatelli, M. A V-cycle multigrid for multilevel matrix–algebras: Proof of optimality. Numer. Math. 2007, 105, 511–547. [Google Scholar] [CrossRef]
  36. Fiorentino, G.; Serra-Capizzano, S. Multigrid methods for Toeplitz matrices. Calcolo 1991, 28, 283–305. [Google Scholar] [CrossRef]
  37. Aricò, A.; Donatelli, M.; Serra-Capizzano, S. V-cycle optimal convergence for certain (multilevel) structured linear systems. SIAM J. Matrix Anal. Appl. 2004, 26, 186–214. [Google Scholar] [CrossRef]
  38. Ahmad, D.; Donatelli, M.; Mazza, M.; Serra-Capizzano, S.; Trotti, K. A smoothing analysis for multigrid methods applied to tempered fractional problems. Linear Multilinear Algebra 2023. [Google Scholar] [CrossRef]
  39. Chan, T.F. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 1988, 9, 766–771. [Google Scholar] [CrossRef]
  40. Bini, D.; Capovani, M. Spectral and computational properties of band symmetric Toeplitz matrices. Linear Algebra Its Appl. 1983, 52, 99–126. [Google Scholar] [CrossRef]
  41. Antil, H.; Bartels, S. Spectral approximation of fractional PDEs in image processing and phase field modeling. Comput. Methods Appl. Math. 2017, 17, 661–678. [Google Scholar] [CrossRef]
  42. Donatelli, M. A multigrid for image deblurring with Tikhonov regularization. Numer. Linear Algebra Appl. 2005, 12, 715–729. [Google Scholar] [CrossRef]
  43. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion; SIAM: Philadelphia, PA, USA, 1998. [Google Scholar]
  44. Guo, L.; Zhao, X.L.; Gu, X.M.; Zhao, Y.L.; Zheng, Y.B.; Huang, T.Z. Three-dimensional fractional total variation regularized tensor optimized model for image deblurring. Appl. Math. Comput. 2021, 404, 126224. [Google Scholar]
Figure 1. Plot of $C_{k,\alpha}$ vs. the level k, for different values of α.
Figure 2. Plots of $f_{k+1,\alpha}/c(\alpha)$ for α = 1.5 and k = 0, …, 8, with $C_{k,\alpha}$ computed according to (25).
Figure 3. Plot of $g_{n,\alpha}(x)$ for different values of α. (a) α = 1.2. (b) α = 1.5. (c) α = 1.8.
Figure 4. Plots of f and $g_s$. (a) α = 1.2. (b) α = 1.8.
Figure 5. Plots of $\delta_s$. (a) α = 1.2. (b) α = 1.8.
Figure 6. Plots of $\kappa_s - 1$. (a) α = 1.2. (b) α = 1.8.
Figure 7. Image deblurring example: (a) true image, (b) observed image, (c) restored image for $\mu = 10^{-4}$.
Table 1. Example 1: Number of iterations for α = 1.2, 1.5, and 1.8 with ω = 0.6964, 0.7071, and 0.6892, respectively.
Galerkin Geometric
TGM V-CycleTGMV-CyclePreconditioners
α M + 1 CG V 0 1 V 1 0 V 1 1 V 0 1 V 1 0 V 1 1 V 0 1 V 1 0 V 1 1 V 0 1 V 1 0 V 1 1 s P s V 1 1 P V 1 1 P ˜ V 1 1 P C P S
2 6 3217179171791616133738317861195
2 7 631616916171016161343433478612106
1.2 2 8 11016169161710161612484837910612126
2 9 17816169161810161612525240913712136
2 10 279151581618111515115556421118712147
2 6 3217179171710171710232216776895
2 7 6217179161791717102524177868115
1.5 2 8 1111717916171017171027261991068137
2 9 192161691618101616928282091469147
2 10 3281616916181016169303020111979168
2 6 321717101717111717101821137877106
2 7 641717101718111717101920137978136
1.8 2 8 12617171017181117171020221391078157
2 9 238171791719111717921231491478177
2 10 4481717918201217179222414111878217
Table 2. Example 2: Number of iterations for different values of α and β with $M_1 = M_2$.
GalerkinGeometricPreconditioners
( α , β ) M 1 + 1 CG V 1 1 T   ( s ) V 1 1 T   ( s ) P V 1 1 T   ( s ) P ˜ V 1 1 T   ( s ) P s V 1 1 T   ( s ) P τ T   ( s ) P S T   ( s )
2 3 1517 0.022 18 0.043 7 0.070 8 0.091 8 0.076 4 0.072 9 0.087
( 1.1 , 1.2 ) 2 4 3118 0.026 28 0.073 9 0.078 11 0.110 9 0.079 5 0.082 11 0.098
ω = 0.83 2 5 5717 0.036 36 0.134 9 0.083 13 0.144 10 0.085 6 0.091 13 0.101
2 6 9314 0.126 43 0.322 9 0.167 14 0.229 11 0.121 7 0.100 17 0.114
2 3 1014 0.021 14 0.040 7 0.069 7 0.087 7 0.075 5 0.077 7 0.084
( 1.5 , 1.5 ) 2 4 2414 0.023 17 0.064 8 0.076 8 0.111 8 0.080 5 0.083 9 0.092
ω = 0.85 2 5 4414 0.035 19 0.083 8 0.082 8 0.127 9 0.079 6 0.088 12 0.102
2 6 7814 0.121 21 0.194 8 0.151 9 0.178 10 0.119 6 0.102 13 0.111
2 3 1617 0.022 18 0.047 8 0.071 7 0.090 8 0.075 5 0.080 9 0.086
( 1.7 , 1.9 ) 2 4 3321 0.027 22 0.070 10 0.079 10 0.114 10 0.081 5 0.082 12 0.099
ω = 0.83 2 5 6624 0.041 26 0.114 11 0.089 11 0.139 11 0.085 6 0.091 15 0.105
2 6 12727 0.205 30 0.235 12 0.181 12 0.215 13 0.128 6 0.104 19 0.130
Table 3. Example 3: Number of iterations for different values of α and β with M 1 = M 2 .
Table 3. Example 3: Number of iterations for different values of α and β with M 1 = M 2 .
( α , β ) M 1 + 1 GMRES P ˜ V 1 1 T (s) P s V 1 1 T (s) P τ T (s) P S T (s)
2 3 3211 0.116 11 0.094 18 0.109 21 0.115
( 1.1 , 1.2 ) 2 4 7215 0.151 14 0.103 23 0.123 29 0.132
ω = 0.83 2 5 13918 0.192 17 0.112 25 0.143 37 0.152
2 6 25119 0.318 19 0.188 26 0.212 44 0.260
2 3 3511 0.118 12 0.096 18 0.111 22 0.118
( 1.5 , 1.5 ) 2 4 7616 0.154 16 0.109 22 0.122 31 0.135
ω = 0.85 2 5 15019 0.197 19 0.119 25 0.147 38 0.153
2 6 27820 0.327 21 0.194 26 0.216 44 0.267
2 3 3612 0.120 12 0.095 18 0.110 22 0.119
( 1.9 , 1.9 ) 2 4 8717 0.157 17 0.112 22 0.121 29 0.133
ω = 0.82 2 5 19020 0.209 20 0.120 24 0.141 35 0.150
2 6 39720 0.325 20 0.190 24 0.210 41 0.257
Table 4. Image deblurring example: number of iterations and relative restoration error (RRE) for different values of μ.
μ CG P τ P S RRE
10 3 123121.54 × 10−1
10 4 15491.12 × 10−1
10 5 366161.15 × 10−1
10 6 9310442.21 × 10−1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
