Article

A Multilevel Iteration Method for Solving a Coupled Integral Equation Model in Image Restoration

1 School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China
2 Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou 510275, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 346; https://doi.org/10.3390/math8030346
Submission received: 9 February 2020 / Revised: 27 February 2020 / Accepted: 1 March 2020 / Published: 4 March 2020

Abstract:
The problem of out-of-focus image restoration can be modeled as an ill-posed integral equation, which can be regularized into an equation of the second kind by the Tikhonov method. The multiscale collocation method with a compression strategy has already been developed to discretize this well-posed equation. However, computing the integrals and solving the resulting large multiscale collocation system are two time-consuming processes. To overcome these difficulties, we propose a fully discrete multiscale collocation method that uses an integral approximation strategy to compute the integrals, which efficiently converts the integral operation into a matrix operation and reduces costs. In addition, we propose a multilevel iteration method (MIM) for solving the fully discrete integral equation obtained from the integral approximation strategy. We present the stopping criterion and the computational complexity of the MIM. Furthermore, an a posteriori parameter choice strategy is developed for this method, and the final convergence order is evaluated. We present three numerical experiments to demonstrate the performance and computational efficiency of the proposed methods.

1. Introduction

Continuous integral equations are often used to model practical problems in image processing. However, the corresponding discrete models are often used instead, because they are much easier and more convenient to implement. Discrete models are piecewise constant approximations of the integral equation models, and they introduce a bottleneck model error that no subsequent image processing method can remove. To overcome this accuracy deficiency of conventional discrete reconstruction models, we restore images directly with continuous models, which agree better with the underlying physics. This idea first appeared in [1] and was widely used later, for example in [2,3,4]. Besides making more sense physically, continuous models can be discretized with higher-order accuracy, which significantly decreases the model error compared with piecewise constant discretization, especially in the field of image enlargement.
Many researchers have made great contributions to the solution of integral equations; nevertheless, many difficulties remain. First, integral operators are compact on Banach spaces because integral kernels are normally smooth, so the solutions of the corresponding integral equations do not depend continuously on the known data. To overcome this problem, the Tikhonov [5] and Lavrentiev [6] regularization methods were proposed, which regularize the ill-posed integral equation into an integral equation of the second kind. Second, the equation produced by the Tikhonov regularization method contains a composition of integral operators, which greatly increases the computation time. For this reason, a coupled system involving only a single integral operator was proposed in [7] to reduce the high computational cost of the original second-kind equation, which involves the composition of integral operators. The collocation method [8] and the Galerkin method [9] were proposed to discretize the coupled system, the collocation method being much easier to implement. The third problem is that discretizing the coupled system by the collocation method generates a full coefficient matrix. To overcome this issue, Chen et al. [10] represented the integral operator in a multiscale basis and obtained a numerically sparse coefficient matrix; a matrix compression technique [1,11] then approximates this matrix without affecting the existing convergence order. Finally, an appropriate choice of the regularization parameter (see [12,13,14,15,16,17]) is a crucial process that must balance approximation accuracy against well-posedness.
The purpose of this paper is to solve the second-kind integral equation efficiently on the basis of these previous achievements. Although the equivalent coupled system reduces the computational complexity caused by the composition of the original integral operator, this is insufficient, because much computing time is still required to evaluate the integrals. Therefore, inspired by [18], we propose a fully discrete multiscale collocation method that uses an integral approximation strategy to compute the integrals. The idea of this strategy is to use the Gaussian quadrature formula to compute the sparse coefficient matrix efficiently: the piecewise Gauss–Legendre quadrature turns the calculation of the integrals into a matrix operation, which tremendously reduces the computation time. Another challenging issue is that directly solving the large, fully discrete system obtained from the matrix compression strategy is time-consuming. Inspired by [19], we propose a multilevel iteration method (MIM) for solving this large, fully discrete coupled system, and we present its computational complexity. We also propose a stopping criterion for the iteration process and prove that it maintains the existing convergence rate. Finally, we adopt an a posteriori choice of the regularization parameter related to the MIM and show that it leads to an optimal convergence order.
The remainder of this paper is organized as follows. In Section 2, an overview flowchart is displayed first, and then the integral equation model of the first kind for reconstructing an out-of-focus image is reviewed; an equivalent coupled system is then deduced. In Section 3, we present the fast multiscale collocation method for discretizing the coupled system using piecewise polynomial spaces, together with a compression strategy that generates a sparse coefficient matrix and makes the coupled equation easily solvable. We then propose an integral approximation strategy for computing the nonzero entries of the compressed coefficient matrix, which turns the integral operation into a matrix computation, and we provide a convergence analysis of the proposed method. In Section 4, we propose a multilevel iteration method, matched to the multiscale method, for solving the integral equations, together with a complete analysis of the convergence rate of the corresponding approximate solution. An a posteriori choice of the regularization parameter, related to the multilevel iteration method, is presented in Section 5, and we prove that the MIM combined with this a posteriori parameter choice makes our solution optimal. In Section 6, we report three comparative tests to verify the efficiency of the two proposed methods. The first test compares the computing efficiency of assembling the coefficient matrix with the integral approximation strategy and with the numerical quadrature scheme of [1]. The other two tests compare the performance of the MIM with the Gaussian elimination method and the multilevel augmentation method, respectively. These tests reveal the high efficiency of our proposed methods.

2. The Integral Equation Model for Image Restoration

In this section, we first depict the overall process of reconstructing out-of-focus images and then describe some common notation used throughout the paper. Finally, in the second subsection, we formulate our image restoration problem as an integral equation.

2.1. System Overview

We present an overview flowchart for reconstructing an out-of-focus image in Figure 1. The process includes four main parts: modeling, discretization, the parameter choice rule, and solving the equation.
We formulate the input out-of-focus image problem as a Tikhonov-regularized integral equation, described in the next subsection. Solving this integral equation requires discretization, for which we propose a fully discrete multiscale collocation method based on an integral approximation strategy; it works by converting the calculation of the integrals into a matrix operation. The remaining two parts are the parameter choice rule and the solution of the discrete problem. In practice these two parts are executed as a whole, but we describe them separately because the combined process is too complicated to present at once: Section 4 displays the multilevel iteration method under the assumption that a good regularization parameter has already been selected, and Section 5 describes how this regularization parameter is chosen.
Some notation is needed. $\mathbb{R}^d$ denotes $d$-dimensional Euclidean space, and $\Omega\subset\mathbb{R}^2$ and $E\subset\mathbb{R}$ denote subsets of the corresponding Euclidean spaces. $L^\infty$ is the space $L^p$ with $p=\infty$, which is a Banach space. We use $x$, possibly with upper and lower indices, to denote the solution of an equation; similarly, we use $y$, possibly with indices, to denote the known data of an equation. The operator $\mathcal{K}$ represents the blurring kernel operator, and $\mathcal{K}^*$ its adjoint. The notation $A\oplus B$ denotes the direct sum of the spaces $A$ and $B$ when $A\perp B$. The notation $a\sim b$ means that $a$ and $b$ are of the same order. $\mathcal{R}(\cdot)$ denotes the range of an operator. Finally, extended and new notation not declared here is defined in the context where it first appears.

2.2. Model Definition

In this subsection, we describe an integral equation model obtained from a reformulation of the Tikhonov regularization equation for image restoration. In addition, an equivalent coupled system is developed for solving this integral equation quickly and efficiently.
Assume that the whole image is a support set; in general, images are rectangular. Let $\Omega\subset\mathbb{R}^2$ denote the image domain, which is also the support set. Image restoration can be modeled by a continuous integral equation of the first kind,
$$\mathcal{K}x = y, \quad (1)$$
where $y$ is the blurred image and we aim to reconstruct from it a clearer image $x:\Omega\to\mathbb{R}$, which is therefore called the reconstructed image. $\mathcal{K}$ is the linear compact operator defined in terms of a kernel $g$ by
$$(\mathcal{K}x)(\mathbf{u}) := \int_\Omega g(\mathbf{u},\mathbf{u}')\,x(\mathbf{u}')\,d\mathbf{u}', \qquad \mathbf{u}\in\Omega.$$
Let $\Omega = E\times E$ with $E := [0,1]$; an image not supported on $[0,1]\times[0,1]$ can be brought there by scaling and shifting. According to [20], an out-of-focus image is usually modeled with the kernel
$$g(\mathbf{u},\mathbf{u}') = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(u-u')^2 + (v-v')^2}{2\sigma^2}\right), \quad (3)$$
where $\mathbf{u} := (u,v)$, $\mathbf{u}' := (u',v')\in\Omega$, and $\sigma$ is the parameter characterizing the level of ambiguity in the kernel.
The blurring kernel (3) is two-dimensional, symmetric, and separable: it factors into the product of two univariate Gaussian kernels. That is to say, solving Equation (1) is equivalent to solving the following system:
$$\frac{1}{\sqrt{2\pi}\,\sigma}\int_0^1 \exp\left(-\frac{(v-v')^2}{2\sigma^2}\right) x_1(u,v')\,dv' = y(u,v),$$
$$\frac{1}{\sqrt{2\pi}\,\sigma}\int_0^1 \exp\left(-\frac{(u-u')^2}{2\sigma^2}\right) x_2(u',v)\,du' = x_1(u,v). \quad (4)$$
In this system, $x_2$ is the final solution of the reconstruction and $x_1$ is an intermediate result. The two equations mean that we can first process all the rows of the image (first equation) and then all the columns (second equation): the procedure of dealing with a two-dimensional image is reduced to processing rows and then columns. Furthermore, as can be seen from the two equations in (4), the rows and columns of the image are processed in the same way, and both steps can be formulated as a one-dimensional integral equation:
$$\mathcal{K}x = y. \quad (5)$$
For $x\in X := L^\infty(E)$, $\mathcal{K}$ is the linear compact operator defined by
$$(\mathcal{K}x)(u) := \int_E k(u,u')\,x(u')\,du', \qquad u\in E,$$
with
$$k(u,u') = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(u-u')^2}{2\sigma^2}\right), \qquad u,u'\in E,$$
and $y\in X$ is the observed function. Hence, we focus on Equation (5) in what follows.
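The following minimal sketch (ours, not code from the paper; the midpoint discretization, grid size, and $\sigma$ are arbitrary illustration choices) makes the separability concrete: a single 1D kernel matrix blurs the rows and then the columns, which coincides with a direct separable 2D blur.

```python
import numpy as np

# Illustrative sketch (not from the paper): discretize the 1D Gaussian
# kernel k(u, u') on a uniform grid and blur rows, then columns, as in (4).
sigma, N = 0.05, 128
t = (np.arange(N) + 0.5) / N                       # midpoints of [0, 1]
K1 = np.exp(-(t[:, None] - t[None, :])**2 / (2 * sigma**2))
K1 /= np.sqrt(2 * np.pi) * sigma
K1 /= N                                            # midpoint quadrature weight 1/N

rng = np.random.default_rng(0)
x = rng.random((N, N))                             # stand-in for a clear image

x1 = x @ K1.T             # first equation of (4): blur along each row
y  = K1 @ x1              # second equation of (4): blur along each column
y_direct = K1 @ x @ K1.T  # one-shot separable blur; identical result
assert np.allclose(y, y_direct)
```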
Equation (5) is ill-posed, which makes it difficult to solve, but the Tikhonov regularization method can handle this problem. Applying it, we get the equation
$$(\alpha I + \mathcal{K}^*\mathcal{K})\,x_\alpha = \mathcal{K}^* y, \quad (6)$$
where $\alpha > 0$ is the regularization parameter and $\mathcal{K}^*$ is the adjoint operator of $\mathcal{K}$. It is easy to prove that the operator $\alpha I + \mathcal{K}^*\mathcal{K}$ is invertible. In practice the data are noisy: we observe $y^\delta$ with
$$\|y - y^\delta\| \le \delta$$
for a noise level $\delta$, and $x_\alpha^\delta$ denotes the solution of Equation (6) when $y$ is replaced by $y^\delta$. Since the kernel $k$ is symmetric, $\mathcal{K}$ is self-adjoint; that is, $\mathcal{K}^* = \mathcal{K}$. Note that Equation (6) involves $\mathcal{K}^*\mathcal{K}$, which is defined by a twofold integral and is tremendously expensive to compute. By letting $v_\alpha^\delta := (y^\delta - \mathcal{K}x_\alpha^\delta)/\sqrt{\alpha}$, the authors of [7] split Equation (6) into the coupled system
$$\sqrt{\alpha}\,x_\alpha^\delta - \mathcal{K}^* v_\alpha^\delta = 0, \qquad \mathcal{K}x_\alpha^\delta + \sqrt{\alpha}\,v_\alpha^\delta = y^\delta. \quad (8)$$
The two formulations were proven equivalent in [7], and the advantage is that system (8) involves only a single integral operator. Thus, instead of Equation (6), we solve system (8) to reduce the computational difficulties.
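The following minimal sketch (ours; the crude midpoint discretization and the values of $\sigma$, $N$, $\alpha$ are arbitrary, and this is not the multiscale scheme of Section 3) verifies on a small example that the coupled system (8) reproduces the Tikhonov solution of Equation (6):

```python
import numpy as np

# Sketch: solve the coupled system (8) for a crude discretization of K
# and check equivalence with the Tikhonov normal equation (6).
sigma, N, alpha = 0.05, 100, 1e-3
t = (np.arange(N) + 0.5) / N
K = np.exp(-(t[:, None] - t[None, :])**2 / (2*sigma**2)) / (np.sqrt(2*np.pi)*sigma) / N

y = K @ np.sin(2 * np.pi * t)          # synthetic blurred data
I = np.eye(N)
ra = np.sqrt(alpha)

# Coupled system (8): sqrt(alpha)*x - K^T v = 0,  K x + sqrt(alpha)*v = y
A = np.block([[ra * I, -K.T], [K, ra * I]])
xv = np.linalg.solve(A, np.concatenate([np.zeros(N), y]))
x_coupled = xv[:N]

# Tikhonov equation (6): (alpha*I + K^T K) x = K^T y
x_tik = np.linalg.solve(alpha * I + K.T @ K, K.T @ y)
assert np.allclose(x_coupled, x_tik)
```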
In the next section, we apply a fully discrete method to solve system (8).

3. Fully Discrete Multiscale Collocation Method

In this section, a multiscale method is reviewed first, and then we develop a fully discrete formulation of the multiscale collocation method using an integral approximation strategy. By using this strategy, we turn the computation of the integral into a matrix operation, which will greatly reduce the calculation time.

3.1. Multiscale Collocation Method

Let us begin with a quick review of the fast collocation method [10,21,22,23,24] for solving system (8).
We denote by $\mathbb{N}$ the set of natural numbers and set $\mathbb{N}_+ := \mathbb{N}\setminus\{0\}$. For $n\in\mathbb{N}_+$, let $\mathbb{Z}_n := \{0,1,2,\dots,n-1\}$. Let $X_n$, $n\in\mathbb{N}$, denote subspaces of $L^\infty(E)$, and let $W_0 := X_0$ and $X_{n+1} = X_n\oplus W_{n+1}$, $n\in\mathbb{N}$. More explicitly,
$$X_n = W_0\oplus W_1\oplus\cdots\oplus W_n.$$
A basis construction for $X_n$ can be found in [25,26]. Suppose that $\{e_{ij}\}$ is an orthogonal basis of $W_i$, and let $U_n := \{(i,j) : i\in\mathbb{Z}_{n+1},\ j\in\mathbb{Z}_{w(i)}\}$, where $w(n)$ denotes the dimension of $W_n$. According to this decomposition, we get
$$X_n = \operatorname{span}\{e_{ij} : (i,j)\in U_n\}.$$
Similarly, the multiscale collocation functionals and the collocation points are important. Suppose that $\{\ell_{ij}\}$ are the collocation functionals, with a corresponding sequence of collocation points $\{v_{ij} : (i,j)\in U_n\}$ in $E$. We define the interpolation projection $\mathcal{P}_n : L^\infty(E)\to X_n$, $n\in\mathbb{N}$, and the orthogonal projection $\mathcal{Q}_n : L^2(E)\to X_n$, $n\in\mathbb{N}$.
For $n\in\mathbb{N}$, let $\mathcal{K}_n := \mathcal{P}_n\mathcal{K}|_{X_n}$. The multiscale collocation method for system (8) seeks the solutions
$$v_{\alpha,n}^\delta := \sum_{(i,j)\in U_n} v_{ij}^{\alpha,n}\,e_{ij} \qquad\text{and}\qquad x_{\alpha,n}^\delta := \sum_{(i,j)\in U_n} x_{ij}^{\alpha,n}\,e_{ij}$$
of the system
$$\sqrt{\alpha}\,x_{\alpha,n}^\delta - \mathcal{K}_n^{*}v_{\alpha,n}^\delta = 0, \qquad \mathcal{K}_n x_{\alpha,n}^\delta + \sqrt{\alpha}\,v_{\alpha,n}^\delta = \mathcal{P}_n y^\delta. \quad (9)$$
For $(i,j), (i',j')\in U_n$, we introduce the three definitions
$$\mathbf{E}_n := [\langle\ell_{i'j'}, e_{ij}\rangle], \qquad \mathbf{K}_n := [\langle\ell_{i'j'}, \mathcal{K}e_{ij}\rangle], \qquad \mathbf{y}_n^\delta := [\langle\ell_{i'j'}, y^\delta\rangle].$$
Then system (9) has the matrix form
$$\begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_n & -\mathbf{K}_n \\ \mathbf{K}_n & \sqrt{\alpha}\,\mathbf{E}_n \end{bmatrix}\begin{bmatrix} \mathbf{x}_{\alpha,n}^\delta \\ \mathbf{v}_{\alpha,n}^\delta \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{y}_n^\delta \end{bmatrix}. \quad (10)$$
Note that $\mathbf{K}_n$ is a dense matrix, and its compression is an important ingredient of our method. Following the compression strategy of [7], we obtain the compressed matrix $\overline{\mathbf{K}}_n$, whose blocks are
$$(\overline{\mathbf{K}}_n)_{i'i} = \begin{cases} [\langle\ell_{i'j'}, \mathcal{K}e_{ij}\rangle], & i + i' \le n, \\ \mathbf{0}, & \text{otherwise}. \end{cases}$$
Thus, substituting $\overline{\mathbf{K}}_n$ for $\mathbf{K}_n$, we finally need to solve the equation
$$\begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_n & -\overline{\mathbf{K}}_n \\ \overline{\mathbf{K}}_n & \sqrt{\alpha}\,\mathbf{E}_n \end{bmatrix}\begin{bmatrix} \mathbf{x}_{\alpha,n}^\delta \\ \mathbf{v}_{\alpha,n}^\delta \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{y}_n^\delta \end{bmatrix}. \quad (11)$$
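The block truncation pattern behind $\overline{\mathbf{K}}_n$ can be visualized with the sketch below (ours; the dyadic block dimensions $w(0)=2$, $w(i)=2^i$ are an assumption matching piecewise linear multiscale bases, and the true sizes depend on the chosen basis):

```python
import numpy as np

# Block-truncation pattern of the compression strategy (illustrative).
def compression_mask(n):
    w = [2] + [2**i for i in range(1, n + 1)]     # assumed block dimensions
    off = np.concatenate([[0], np.cumsum(w)])     # block offsets
    s = off[-1]
    mask = np.zeros((s, s), dtype=bool)
    for ip in range(n + 1):                       # block row index i'
        for i in range(n + 1):                    # block column index i
            if ip + i <= n:                       # keep block only when i' + i <= n
                mask[off[ip]:off[ip + 1], off[i]:off[i + 1]] = True
    return mask

mask = compression_mask(6)
print("kept:", mask.sum(), "of", mask.size)       # roughly O(s log s) of s**2 entries
```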

3.2. Integral Approximation Strategy

The dominant cost of forming system (11) lies in evaluating the entries of the integral operator matrix $\mathbf{K}_n$. Next, we focus on this problem and develop an integral approximation strategy, based on the Gaussian quadrature formula, to solve this coupled system.
Note that the integral operator applied to a basis function is
$$(\mathcal{K}e_{ij})(u) = \int_E k(u,u')\,e_{ij}(u')\,du',$$
and the orthogonal basis functions $\{e_{ij} : (i,j)\in U_n\}$ are all piecewise polynomial functions. Thus, a piecewise Gauss–Legendre quadrature is used here.
Integral approximation strategy: Consistent with the supports of these basis functions, we divide $E$ equally into $\mu^n$ parts. Let $E_q := [q\mu^{-n}, (q+1)\mu^{-n}]$ for $q\in\mathbb{Z}_{\mu^n}$; then $E = \bigcup_{q\in\mathbb{Z}_{\mu^n}} E_q$, and each basis function is continuous on each $E_q$, $q\in\mathbb{Z}_{\mu^n}$. We choose $m > 1$ Gaussian points in each part $E_q$ and let $\gamma = 2m - 1$. All Gaussian points, taken in order, form the set of piecewise Gaussian points $G$. Let $g(n) := |G|$; then $g(n) = m\mu^n$. The integral restricted to $E_q$, with degree of accuracy $\gamma$, can be written as
$$(\mathcal{K}e_{ij})(u)\big|_{E_q} \approx \sum_{t=qm+1}^{(q+1)m} \rho_t\,k(u, g_t)\,e_{ij}(g_t), \qquad u\in E_q,$$
where $g_t$, $t = qm+1,\dots,(q+1)m$, are the Gaussian points in $E_q$ and $\rho_t$ is the weight corresponding to $g_t$. Furthermore, the integral over $E$ with accuracy $\gamma$ is
$$(\mathcal{K}e_{ij})(u)\big|_E \approx \sum_{q\in\mathbb{Z}_{\mu^n}} (\mathcal{K}e_{ij})(u)\big|_{E_q},$$
which can be written as a vector multiplication
$$(\mathcal{K}e_{ij})(u)\big|_E \approx \big[k(u, g_1), k(u, g_2), \dots, k(u, g_{g(n)})\big]\,\big[w_{ij}(g_1), w_{ij}(g_2), \dots, w_{ij}(g_{g(n)})\big]^T,$$
where $w_{ij}(g_t) := \rho_t\,e_{ij}(g_t)$.
Furthermore, as shown in Figure 2, we can use this strategy to approximate $\mathbf{K}_n$ by the matrix product
$$\mathbf{K}_n \approx \mathbf{L}_n\,\mathbf{K}_{s(n)\times g(n)}\,\mathbf{W}_{g(n)\times s(n)}, \quad (13)$$
where $\mathbf{W}_{g(n)\times s(n)}$ is the weighted basis-function matrix: it records all basis functions $\{e_{ij}, (i,j)\in U_n\}$ evaluated at all piecewise Gaussian points of $G$ and multiplied by the weights $\rho_t$. The matrix $\mathbf{K}_{s(n)\times g(n)}$ collects the kernel values at the Gaussian points, and $\mathbf{L}_n$ denotes the matrix representation of the point evaluation functionals $\ell_{i'j'}$; for the details of $\mathbf{L}_n$, we refer to [26].
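As a concrete illustration of this strategy, the sketch below (ours; the collocation points and the single stand-in basis function are placeholders for the multiscale basis of [25,26]) builds the piecewise Gauss–Legendre points and weights and evaluates all integrals at once as a matrix product, mirroring Equation (13):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

mu, n, m, sigma = 2, 5, 3, 0.05
parts = mu**n
gx, gw = leggauss(m)                                # nodes/weights on [-1, 1]
a = np.arange(parts) / parts                        # left endpoints of the E_q
g = (a[:, None] + (gx[None, :] + 1) / (2 * parts)).ravel()  # all Gauss points G
rho = np.tile(gw / (2 * parts), parts)              # matching weights

def k(u, up):                                       # 1D Gaussian kernel
    return np.exp(-(u - up)**2 / (2 * sigma**2)) / (np.sqrt(2*np.pi) * sigma)

# pts stands in for the collocation points, and B[t, j] = e_j(g_t) stands
# in for the basis values at the Gauss points (both depend on the basis).
pts = (np.arange(parts) + 0.5) / parts              # stand-in collocation points
B = np.ones((g.size, 1))                            # stand-in single basis function

Kmat = k(pts[:, None], g[None, :])                  # kernel values k(v_p, g_t)
W = rho[:, None] * B                                # weights times basis values
approx = Kmat @ W                                   # all quadratures at once
```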
To combine this with the matrix compression strategy, we write Equation (13) in block form, indexed by $i', i\in\mathbb{Z}_{n+1}$. This yields the matrix $\widetilde{\mathbf{K}}_n$, which approximates $\overline{\mathbf{K}}_n$ via the integral approximation strategy:
$$(\widetilde{\mathbf{K}}_n)_{i'i} = \begin{cases} \big(\mathbf{L}_n\,\mathbf{K}_{s(n)\times g(n)}\,\mathbf{W}_{g(n)\times s(n)}\big)_{i'i}, & i + i' \le n, \\ \mathbf{0}, & \text{otherwise}. \end{cases} \quad (14)$$
Then, Equation (10) becomes the fully discrete multiscale collocation equation
$$\begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_n & -\widetilde{\mathbf{K}}_n \\ \widetilde{\mathbf{K}}_n & \sqrt{\alpha}\,\mathbf{E}_n \end{bmatrix}\begin{bmatrix} \mathbf{x}_{\alpha,n}^\delta \\ \mathbf{v}_{\alpha,n}^\delta \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{y}_n^\delta \end{bmatrix}, \quad (15)$$
where $\widetilde{\mathbf{K}}_n$ is given in Equation (14).
Let $\mathcal{A} := \mathcal{K}^{*}\mathcal{K}$ and $\mathcal{A}_n := \mathcal{K}_n^{*}\mathcal{K}_n$. Let $\overline{\mathcal{K}}_n$ be the operator corresponding to the matrix $\overline{\mathbf{K}}_n$ of the compression strategy, and define $\overline{\mathcal{A}}_n := \overline{\mathcal{K}}_n^{*}\overline{\mathcal{K}}_n$. Likewise, let $\widetilde{\mathcal{K}}_n$ be the operator corresponding to the matrix $\widetilde{\mathbf{K}}_n$ of the integral approximation and compression strategies, and define $\widetilde{\mathcal{A}}_n := \widetilde{\mathcal{K}}_n^{*}\widetilde{\mathcal{K}}_n$. Then, corresponding to the matrix equation (15), system (9) becomes
$$\sqrt{\alpha}\,\tilde{x}_{\alpha,n}^\delta - \widetilde{\mathcal{K}}_n^{*}\tilde{v}_{\alpha,n}^\delta = 0, \qquad \widetilde{\mathcal{K}}_n\tilde{x}_{\alpha,n}^\delta + \sqrt{\alpha}\,\tilde{v}_{\alpha,n}^\delta = \mathcal{P}_n y^\delta. \quad (16)$$
By adopting the integral approximation strategy, the computation of the integrals becomes the matrix multiplication in Equation (14), which greatly reduces the computational cost.
Next, we estimate the convergence order of the fast multiscale collocation method combined with the integral approximation strategy. Note that the generic constant $c$ below may take different values in different places unless explicitly stated.
We assume that there exists a constant $M$ such that $\|\mathcal{K}\|_{L^2(E)\to L^\infty(E)} \le M$. According to [5,12], for any $\alpha > 0$, $\alpha I + \mathcal{K}^{*}\mathcal{K}$ is invertible, and we have the inequalities
$$\|(\alpha I + \mathcal{K}^{*}\mathcal{K})^{-1}\| \le \frac{\sqrt{\alpha} + M/2}{\alpha^{3/2}} \qquad\text{and}\qquad \|(\alpha I + \mathcal{K}^{*}\mathcal{K})^{-1}\mathcal{K}^{*}\|_{L^2(E)\to L^\infty(E)} \le \frac{M}{\alpha}. \quad (17)$$
Assume that $\hat{x}$ is the exact solution of Equation (1) and that $y\in\mathcal{R}(\mathcal{K})$.
(H1) $\hat{x}\in\mathcal{R}(\mathcal{A}^{\nu}\mathcal{K}^{*})$ with $0 < \nu \le 1$; that is, there exists an $\omega\in L^\infty(E)$ such that $\hat{x} = \mathcal{A}^{\nu}\mathcal{K}^{*}\omega$.
Following [5,27], if hypothesis (H1) holds, then we have the convergence rate
$$\|\hat{x} - x_\alpha\| = O(\alpha^\nu). \quad (18)$$
Suppose that $c_0 > 0$. In order to estimate the convergence rate of the integral approximation strategy, we propose the following hypothesis:
(H2) For all $n\in\mathbb{N}$ and some positive constant $r$,
$$\|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}\| \le c_0\,\mu^{-rn}, \qquad \|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}^{*}\| \le c_0\,\mu^{-rn}, \qquad \|\mathcal{K}(\mathcal{I} - \mathcal{Q}_n)\| \le c_0\,\mu^{-rn},$$
$$\|\mathcal{K}^{*}(\mathcal{I} - \mathcal{Q}_n)\| \le c_0\,\mu^{-rn}, \qquad \|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}\|_{L^2(E)\to L^\infty(E)} \le c_0\,\mu^{-rn}, \qquad \|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}^{*}\|_{L^2(E)\to L^\infty(E)} \le c_0\,\mu^{-rn}.$$
When $\mathcal{K} = \mathcal{K}^{*}$ is a Fredholm integral operator whose kernel possesses an $r$th derivative, this hypothesis holds true.
Following [28], we state the remark below.
Remark 1.
Let $E_j = [j\mu^{-n}, (j+1)\mu^{-n}]$ for $j\in\mathbb{Z}_{\mu^n}$. The Gauss–Legendre quadrature of a function $f\in H^{2m}[0,1]$ on $E_j$ is
$$\int_{E_j} f(\mu)\,d\mu \approx \sum_{i=1}^m \rho_i\,f(\mu_i), \qquad \mu_i\in E_j,\ i = 1,\dots,m,$$
where $\mu_i$, $1 \le i \le m$, are the Gaussian points on $E_j$. The remainder of the Gauss–Legendre quadrature is given by
$$r_m(f) = \frac{f^{(2m)}(\xi)}{(2m)!}\int_{E_j}\omega^2(\mu)\,d\mu,$$
where $\omega(\mu) := (\mu - \mu_1)(\mu - \mu_2)\cdots(\mu - \mu_m)$ and $\big|\frac{f^{(2m)}(\xi)}{(2m)!}\big| \le M$ for $\xi\in E$. We can conclude that
$$|r_m(f)| \le M\,\mu^{-n}\mu^{-2mn}.$$
Note that when $m$ Gaussian points are used, the degree of accuracy of the Gauss–Legendre quadrature is $\gamma = 2m - 1$. Corresponding to Remark 1, we give the following proposition.
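A quick numerical check of this accuracy statement (our sketch; the smooth integrand is an arbitrary test function) follows: with $m = 2$ points per subinterval and $\mu = 2$, the composite error over all $\mu^n$ pieces decays like $2^{-4n}$, consistent with the per-interval bound of Remark 1.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Composite (piecewise) Gauss-Legendre quadrature on [0, 1].
def piecewise_gl(f, parts, m):
    gx, gw = leggauss(m)
    a = np.arange(parts) / parts
    pts = a[:, None] + (gx[None, :] + 1) / (2 * parts)
    return np.sum(gw[None, :] / (2 * parts) * f(pts))

f = lambda t: np.exp(np.sin(2 * np.pi * t))        # smooth test integrand
exact = piecewise_gl(f, 4096, 10)                  # high-accuracy reference
for n in range(1, 6):
    err = abs(piecewise_gl(f, 2**n, 2) - exact)    # mu = 2, m = 2
    print(n, err)                                  # error shrinks ~ 2**(-4n)
```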
Proposition 1.
Assume that the quadrature accuracy satisfies
$$r > 2m. \quad (19)$$
Then we obtain the conclusion that
$$\|\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\| \le c\,\mu^{-(\gamma+1)n}.$$
Proof. 
Since $r > 2m$, the kernel of $\mathcal{K}$ belongs to $H^{2m}[0,1]$. Because polynomial functions are infinitely differentiable, we get $\mathcal{K}e_{ij}\in H^{2m}[0,1]$ for $(i,j)\in U_n$. From Remark 1, we gain the following inequality:
$$\sup_{j\in\mathbb{Z}_{\mu^n}}\Big|(\mathcal{K}e_{ij})(\cdot) - \sum_{q\in\mathbb{Z}_{\mu^n}}\sum_{t=qm+1}^{(q+1)m}\rho_t\,k(\cdot, g_t)\,e_{ij}(g_t)\Big| \le c\,\mu^{-n}\mu^{-(\gamma+1)n}. \quad (21)$$
Summing over all $\mu^n$ subintervals, we can infer from inequality (21) that
$$\|\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\| \le c\,\mu^{-(\gamma+1)n}.$$
Thus, the proof is complete. □
Note that the Gaussian function is infinitely differentiable; thus, assumption (19) is easy to satisfy.
Theorem 1.
Let $c_0 > 0$ and $\beta(n,\gamma) = \max\{\mu^{-(\gamma+1)n},\ n\mu^{-rn/2}\}$. If hypothesis (H2) and condition (19) hold, then for $n\in\mathbb{N}$,
$$\|\mathcal{A} - \widetilde{\mathcal{A}}_n\| \le c_0\,\beta(n,\gamma).$$
Proof. 
Since $\|\mathcal{A} - \widetilde{\mathcal{A}}_n\| \le \|\mathcal{A} - \overline{\mathcal{A}}_n\| + \|\overline{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\|$, we can prove the theorem by estimating $\|\mathcal{A} - \overline{\mathcal{A}}_n\|$ and $\|\overline{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\|$ separately.
If hypothesis (H2) holds, then according to Lemma 3.2 of [22], for all $n\in\mathbb{N}$,
$$\|\mathcal{A} - \overline{\mathcal{A}}_n\| \le c\,n\mu^{-rn/2} \qquad\text{and}\qquad \|\mathcal{K}_n - \overline{\mathcal{K}}_n\| \le c\,n\mu^{-rn}.$$
From condition (19) and Proposition 1, we have $\|\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n\| \le c\,\mu^{-(\gamma+1)n}$, and likewise $\|\overline{\mathcal{K}}_n^{*} - \widetilde{\mathcal{K}}_n^{*}\| \le c\,\mu^{-(\gamma+1)n}$. Since $\overline{\mathcal{K}}_n^{*}$ and $\widetilde{\mathcal{K}}_n$ are uniformly bounded, from the identity
$$\overline{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n = \overline{\mathcal{K}}_n^{*}\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n^{*}\widetilde{\mathcal{K}}_n = \overline{\mathcal{K}}_n^{*}(\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n) + (\overline{\mathcal{K}}_n^{*} - \widetilde{\mathcal{K}}_n^{*})\widetilde{\mathcal{K}}_n$$
we obtain $\|\overline{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\| \le c\,\mu^{-(\gamma+1)n}$. Because $\beta(n,\gamma) = \max\{\mu^{-(\gamma+1)n},\ n\mu^{-rn/2}\}$, we get the inequality
$$\|\mathcal{A} - \widetilde{\mathcal{A}}_n\| \le \|\mathcal{A} - \overline{\mathcal{A}}_n\| + \|\overline{\mathcal{A}}_n - \widetilde{\mathcal{A}}_n\| \le c_0\,\beta(n,\gamma).$$
This completes the proof. □
Lemma 1.
Assume that hypothesis (H2) holds, and let $c_0$ be the same constant as in Theorem 1. If the parameters $n$ and $\gamma$ are chosen so that
$$\beta(n,\gamma) \le \frac{1}{2c_0}\cdot\frac{\alpha^{3/2}}{\sqrt{\alpha} + M/2}, \quad (24)$$
then $\alpha I + \widetilde{\mathcal{A}}_n$ is invertible. In addition,
$$\|(\alpha I + \widetilde{\mathcal{A}}_n)^{-1}\| \le \frac{2\sqrt{\alpha} + M}{\alpha^{3/2}}. \quad (25)$$
Proof. 
According to a known perturbation result in [29], $\alpha I + \widetilde{\mathcal{A}}_n$ is invertible and
$$\|(\alpha I + \widetilde{\mathcal{A}}_n)^{-1}\| \le \frac{\|(\alpha I + \mathcal{A})^{-1}\|}{1 - \|(\alpha I + \mathcal{A})^{-1}\|\,\|\mathcal{A} - \widetilde{\mathcal{A}}_n\|}.$$
Estimate (25) follows from this bound, condition (24), estimate (17), and Theorem 1. □
Next, we give two lemmas. Their proofs are similar to those in [22], so they are omitted here.
Lemma 2.
Assume that hypothesis (H2) and assumption (19) hold, $\hat{x}\in\mathcal{R}(\mathcal{K}^{*})$, and inequality (24) is satisfied. Then, for $n\in\mathbb{N}$ and some constant $c > 0$,
$$\|x_\alpha - \tilde{x}_{\alpha,n}\| \le c\,\frac{n\mu^{-rn}}{\alpha^{3/2}}.$$
Lemma 3.
Suppose that hypothesis (H2) holds and inequality (24) is satisfied. Then, for $n\in\mathbb{N}$ and some constant $c > 0$,
$$\|\tilde{x}_{\alpha,n} - \tilde{x}_{\alpha,n}^\delta\| \le c\left(\frac{\delta}{\alpha} + \frac{\mu^{-rn}\delta}{\alpha^{3/2}}\right).$$
For the remainder of this section, we estimate the error bound for $\|\hat{x} - \tilde{x}_{\alpha,n}^\delta\|$.
Theorem 2.
Suppose that hypotheses (H1) and (H2) and assumption (19) hold, $\hat{x}\in\mathcal{R}(\mathcal{K}^{*})$, and inequality (24) is satisfied. Then, for some $c_1 > 0$,
$$\|\hat{x} - \tilde{x}_{\alpha,n}^\delta\| \le c_1\left(\alpha^\nu + \frac{\delta}{\alpha} + \frac{n\mu^{-rn}}{\alpha^{3/2}}\right).$$
Proof. 
From the triangle inequality, we have
$$\|\hat{x} - \tilde{x}_{\alpha,n}^\delta\| \le \|\hat{x} - x_\alpha\| + \|x_\alpha - \tilde{x}_{\alpha,n}\| + \|\tilde{x}_{\alpha,n} - \tilde{x}_{\alpha,n}^\delta\|.$$
The estimate in the theorem follows directly from this bound, inequality (18), and Lemmas 2 and 3. □

4. Multilevel Iteration Method

In general, we solve Equation (16) while choosing the regularization parameter; once the parameter selection is finished, the equation is solved. In practice these two processes form a whole and occur simultaneously, but we split them in the presentation for clarity. In this section, we present the multilevel iteration method (MIM) for a fixed $\alpha$, assuming that this parameter has already been well selected; the next section gives the regularization parameter choice rule.
Below, we first describe the multilevel iteration method, then present the computational complexity of the algorithm, and finally prove the error estimate.

4.1. Multilevel Iteration Method

After obtaining the matrices $\mathbf{E}_n$, $\widetilde{\mathbf{K}}_n$, and $\mathbf{y}_n^\delta$, we begin to solve Equation (15). Inverting this system directly would require considerable time, so we use a MIM instead of direct inversion to obtain a fast algorithm.
First, we introduce the MIM for the coupled system (16). We now assume that the fixed parameter $\alpha$ has already been selected according to the rule in Section 5.
Using the decomposition of the solution space, for $n = k + m$ with $k, m\in\mathbb{N}_+$, we write the solutions $\tilde{x}_{\alpha,n}^\delta\in X_n$ and $\tilde{v}_{\alpha,n}^\delta\in X_n$ of system (16) as two block vectors
$$\tilde{x}_{\alpha,n}^\delta = [(\tilde{x}_{\alpha,n}^\delta)_0, (\tilde{x}_{\alpha,n}^\delta)_1, \dots, (\tilde{x}_{\alpha,n}^\delta)_m]^T, \qquad \tilde{v}_{\alpha,n}^\delta = [(\tilde{v}_{\alpha,n}^\delta)_0, (\tilde{v}_{\alpha,n}^\delta)_1, \dots, (\tilde{v}_{\alpha,n}^\delta)_m]^T,$$
where $(\tilde{x}_{\alpha,n}^\delta)_0\in X_k$, $(\tilde{v}_{\alpha,n}^\delta)_0\in X_k$, and $(\tilde{x}_{\alpha,n}^\delta)_j\in W_{k+j}$, $(\tilde{v}_{\alpha,n}^\delta)_j\in W_{k+j}$ for $j = 1,2,\dots,m$. The operator $\widetilde{\mathcal{K}}_{k+m}$ also has the block form
$$\widetilde{\mathcal{K}}_{k+m} := \begin{bmatrix} \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \\ \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_{k+1}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_k & \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_{k+m}\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \end{bmatrix}.$$
The MIM for solving the coupled system (16) is stated as Algorithm 1. Let
$$\widetilde{\mathcal{K}}_{k+m}^{L} := \mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m} \qquad\text{and}\qquad \widetilde{\mathcal{K}}_{k+m}^{H} := (\mathcal{P}_{k+m} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}.$$
This splits the operator into $\widetilde{\mathcal{K}}_{k+m} = \widetilde{\mathcal{K}}_{k+m}^{L} + \widetilde{\mathcal{K}}_{k+m}^{H}$, its low- and high-frequency parts, respectively. Similarly, we obtain $\widetilde{\mathcal{K}}_{k+m}^{*L}$ and $\widetilde{\mathcal{K}}_{k+m}^{*H}$ by splitting $\widetilde{\mathcal{K}}_{k+m}^{*}$ in an analogous way. Accordingly, the coupled system (16) can be written as
$$\sqrt{\alpha}\,\tilde{x}_{\alpha,k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{*L}\tilde{v}_{\alpha,k+m}^\delta = \widetilde{\mathcal{K}}_{k+m}^{*H}\tilde{v}_{\alpha,k+m}^\delta, \qquad \widetilde{\mathcal{K}}_{k+m}^{L}\tilde{x}_{\alpha,k+m}^\delta + \sqrt{\alpha}\,\tilde{v}_{\alpha,k+m}^\delta = y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{H}\tilde{x}_{\alpha,k+m}^\delta. \quad (29)$$
Algorithm 1: Multilevel Iteration Method (MIM).
Step 1 (Initialization). For a fixed $k > 0$ and a fixed $\alpha > 0$, solve (16) with $n = k$ exactly to obtain $\tilde{x}_{\alpha,k}^\delta, \tilde{v}_{\alpha,k}^\delta\in X_k$.
Step 2 (Projection). Set $\tilde{x}_{\alpha,k+m}^{\delta,0} := \tilde{x}_{\alpha,k}^\delta\oplus 0\in X_{k+m}$ and $\tilde{v}_{\alpha,k+m}^{\delta,0} := \tilde{v}_{\alpha,k}^\delta\oplus 0\in X_{k+m}$; then compute $\widetilde{\mathcal{K}}_{k+m}^{L}$, $\widetilde{\mathcal{K}}_{k+m}^{H}$, $\widetilde{\mathcal{K}}_{k+m}^{*L}$, and $\widetilde{\mathcal{K}}_{k+m}^{*H}$ separately.
Step 3 (Update). For each $\ell\in\mathbb{N}_+$, find $\tilde{x}_{\alpha,k+m}^{\delta,\ell} = [(\tilde{x}_{\alpha,k+m}^{\delta,\ell})_0, (\tilde{x}_{\alpha,k+m}^{\delta,\ell})_1, \dots, (\tilde{x}_{\alpha,k+m}^{\delta,\ell})_m]^T\in X_{k+m}$ and $\tilde{v}_{\alpha,k+m}^{\delta,\ell} = [(\tilde{v}_{\alpha,k+m}^{\delta,\ell})_0, (\tilde{v}_{\alpha,k+m}^{\delta,\ell})_1, \dots, (\tilde{v}_{\alpha,k+m}^{\delta,\ell})_m]^T\in X_{k+m}$, with $(\tilde{x}_{\alpha,k+m}^{\delta,\ell})_0\in X_k$, $(\tilde{x}_{\alpha,k+m}^{\delta,\ell})_j\in W_{k+j}$, $(\tilde{v}_{\alpha,k+m}^{\delta,\ell})_0\in X_k$, and $(\tilde{v}_{\alpha,k+m}^{\delta,\ell})_j\in W_{k+j}$, $j = 1,2,\dots,m$, from the iteration
$$\sqrt{\alpha}\,\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \widetilde{\mathcal{K}}_{k+m}^{*L}\tilde{v}_{\alpha,k+m}^{\delta,\ell} = \widetilde{\mathcal{K}}_{k+m}^{*H}\tilde{v}_{\alpha,k+m}^{\delta,\ell-1}, \qquad \widetilde{\mathcal{K}}_{k+m}^{L}\tilde{x}_{\alpha,k+m}^{\delta,\ell} + \sqrt{\alpha}\,\tilde{v}_{\alpha,k+m}^{\delta,\ell} = y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{H}\tilde{x}_{\alpha,k+m}^{\delta,\ell-1}.$$
Step 4 (Stopping criterion). Stop the iteration when $\|\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\| < \delta$ and $\|\tilde{v}_{\alpha,k+m}^{\delta,\ell} - \tilde{v}_{\alpha,k+m}^{\delta,\ell-1}\| < \delta$.
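To make the structure of Algorithm 1 concrete, here is a compact sketch (ours, under simplifying assumptions: $\mathbf{E}_n = \mathbf{I}$, dense matrices, and the adjoint represented by the transpose). It freezes the low-frequency part on the left-hand side, lags the high-frequency part by one iteration, and stops by the criterion of Step 4. The genuine algorithm additionally exploits the block structure (see (30) and (31) below), so that only the small level-$k$ matrix $\mathbf{R}_k$ is ever inverted.

```python
import numpy as np

def mim(Kt, y, alpha, s_k, tol, max_iter=200):
    # Low/high frequency split: P_k keeps the first s_k rows.
    s = Kt.shape[0]
    ra = np.sqrt(alpha)
    KL = Kt.copy(); KL[s_k:, :] = 0.0              # K~^L
    KH = Kt - KL                                   # K~^H
    KsL = Kt.T.copy(); KsL[s_k:, :] = 0.0          # low part of the adjoint
    KsH = Kt.T - KsL                               # high part of the adjoint
    I = np.eye(s)
    M = np.block([[ra * I, -KsL], [KL, ra * I]])   # frozen low-frequency system
    x = np.zeros(s); v = np.zeros(s)
    for _ in range(max_iter):
        rhs = np.concatenate([KsH @ v, y - KH @ x])  # lagged high-frequency terms
        xv = np.linalg.solve(M, rhs)
        x_new, v_new = xv[:s], xv[s:]
        if max(np.linalg.norm(x_new - x),
               np.linalg.norm(v_new - v)) < tol:     # Step 4
            return x_new, v_new
        x, v = x_new, v_new
    return x, v
```

Conditions (40) and (41) below ensure that this fixed-point iteration converges.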

4.2. Computation Complexity

We now turn to studying the computational complexity of the algorithm; specifically, we estimate the number of multiplications used in the method. To this end, we write the iterative equation of Algorithm 1 in matrix representation form. First, we introduce the block matrices $\widetilde{\mathbf{K}}_{i'i} := [\widetilde{\mathbf{K}}_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}]$ and define $\widetilde{\mathbf{K}}_n := [\widetilde{\mathbf{K}}_{i'i} : i', i\in\mathbb{Z}_{n+1}]$. Moreover, for a fixed $k\in\mathbb{N}$, we define the blocks $\widetilde{\mathbf{K}}_{0,0}^k := \widetilde{\mathbf{K}}_k$ and, for $s, s'\in\mathbb{N}_+$, $\widetilde{\mathbf{K}}_{0,s}^k := [\widetilde{\mathbf{K}}_{i'i} : i'\in\mathbb{Z}_{k+1},\ i = k+s]$, $\widetilde{\mathbf{K}}_{s,0}^k := [\widetilde{\mathbf{K}}_{i'i} : i' = k+s,\ i\in\mathbb{Z}_{k+1}]$, and $\widetilde{\mathbf{K}}_{s,s'}^k := \widetilde{\mathbf{K}}_{k+s,k+s'}$. From these definitions, we write $\widetilde{\mathbf{K}}_{k+m} = [\widetilde{\mathbf{K}}_{i,i'}^k : i, i'\in\mathbb{Z}_{m+1}]$. We also partition the matrix $\mathbf{E}_n$ in the same way, which we omit here.
Then the matrix representations of the operators $\widetilde{\mathcal{K}}_{k+m}^{L}$ and $\widetilde{\mathcal{K}}_{k+m}^{H}$ are
$$\widetilde{\mathbf{K}}_{k,m}^{L} := \begin{bmatrix} \widetilde{\mathbf{K}}_{0,0}^k & \widetilde{\mathbf{K}}_{0,1}^k & \cdots & \widetilde{\mathbf{K}}_{0,m}^k \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \vdots & \vdots & & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \end{bmatrix} \qquad\text{and}\qquad \widetilde{\mathbf{K}}_{k,m}^{H} := \begin{bmatrix} \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \widetilde{\mathbf{K}}_{1,0}^k & \widetilde{\mathbf{K}}_{1,1}^k & \cdots & \widetilde{\mathbf{K}}_{1,m}^k \\ \vdots & \vdots & & \vdots \\ \widetilde{\mathbf{K}}_{m,0}^k & \widetilde{\mathbf{K}}_{m,1}^k & \cdots & \widetilde{\mathbf{K}}_{m,m}^k \end{bmatrix}, \qquad\text{respectively}.$$
We also write the coefficient vectors of $\tilde{x}_{\alpha,k+m}^{\delta,\ell}$ and $\tilde{v}_{\alpha,k+m}^{\delta,\ell}$ at the $\ell$th iteration as $\tilde{\mathbf{x}}_{k,m}^{\ell} := [\tilde{\mathbf{x}}_i^{\ell} : i\in\mathbb{Z}_{m+1}]$ and $\tilde{\mathbf{v}}_{k,m}^{\ell} := [\tilde{\mathbf{v}}_i^{\ell} : i\in\mathbb{Z}_{m+1}]$, and we define $\mathbf{y}_{k+m}^\delta := [\mathbf{y}_i^\delta : i\in\mathbb{Z}_{m+1}]$. In this block form, the updates of Algorithm 1 become
$$\tilde{\mathbf{x}}_i^{\ell} = \frac{1}{\sqrt{\alpha}}\sum_{j=0}^m \widetilde{\mathbf{K}}_{i,j}^k\,\tilde{\mathbf{v}}_j^{\ell-1} - \sum_{j=i+1}^m \mathbf{E}_{i,j}^k\,\tilde{\mathbf{x}}_j^{\ell}, \qquad i = m, m-1, \dots, 1, \quad (30)$$
$$\tilde{\mathbf{v}}_i^{\ell} = \frac{1}{\sqrt{\alpha}}\,\mathbf{y}_i^\delta - \frac{1}{\sqrt{\alpha}}\sum_{j=0}^m \widetilde{\mathbf{K}}_{i,j}^k\,\tilde{\mathbf{x}}_j^{\ell-1} - \sum_{j=i+1}^m \mathbf{E}_{i,j}^k\,\tilde{\mathbf{v}}_j^{\ell}, \qquad i = m, m-1, \dots, 1, \quad (31)$$
and, for $i = 0$,
$$\begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_k & -\widetilde{\mathbf{K}}_k \\ \widetilde{\mathbf{K}}_k & \sqrt{\alpha}\,\mathbf{E}_k \end{bmatrix}\begin{bmatrix} \tilde{\mathbf{x}}_0^{\ell} \\ \tilde{\mathbf{v}}_0^{\ell} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \mathbf{y}_0^\delta \end{bmatrix} - \sum_{j=1}^m \begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_{0,j}^k & -\widetilde{\mathbf{K}}_{0,j}^k \\ \widetilde{\mathbf{K}}_{0,j}^k & \sqrt{\alpha}\,\mathbf{E}_{0,j}^k \end{bmatrix}\begin{bmatrix} \tilde{\mathbf{x}}_j^{\ell} \\ \tilde{\mathbf{v}}_j^{\ell} \end{bmatrix}.$$
For a matrix $\mathbf{A}$, we denote by $\mathcal{N}(\mathbf{A})$ the number of nonzero entries of $\mathbf{A}$. Let
$$\mathbf{R}_k = \begin{bmatrix} \sqrt{\alpha}\,\mathbf{E}_k & -\widetilde{\mathbf{K}}_k \\ \widetilde{\mathbf{K}}_k & \sqrt{\alpha}\,\mathbf{E}_k \end{bmatrix}.$$
For $\ell > 0$, we need $2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m})$ multiplications together with one application of $\mathbf{R}_k^{-1}$ to obtain $\tilde{\mathbf{x}}_i^{\ell}$ and $\tilde{\mathbf{v}}_i^{\ell}$, $i = m, m-1, \dots, 1, 0$, from Equations (30) and (31). However, over all iterations the inverse $\mathbf{R}_k^{-1}$ needs to be formed only once; we assume that this inversion requires $M(k)$ multiplications. Hence, the number of multiplications for computing $\tilde{\mathbf{x}}_i^{\ell}$ and $\tilde{\mathbf{v}}_i^{\ell}$ from $\tilde{\mathbf{x}}_i^{\ell-1}$ and $\tilde{\mathbf{v}}_i^{\ell-1}$, $i = m, m-1, \dots, 1, 0$, is
$$N_{k,m} = 2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m}) + M(k).$$
In addition, only the same inverse $\mathbf{R}_k^{-1}$ is needed to obtain $\tilde{\mathbf{x}}_{k,m}^{0}$ and $\tilde{\mathbf{v}}_{k,m}^{0}$ in the first step of Algorithm 1. We summarize the above discussion in the following proposition.
Proposition 2.
The total number of multiplications required to obtain $\tilde{\mathbf{x}}_{k,m}^{\ell}$ and $\tilde{\mathbf{v}}_{k,m}^{\ell}$ is
$$M(k) + \ell\,\big[2\mathcal{N}(\widetilde{\mathbf{K}}_{k+m}) + 2\mathcal{N}(\mathbf{E}_{k+m})\big].$$
Note that solving the coupled system (16) directly would require the inverse $\mathbf{R}_{k+m}^{-1}$, whereas the MIM only requires the much smaller inverse $\mathbf{R}_k^{-1}$. This is the key factor that leads to a fast algorithm.
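The following back-of-the-envelope sketch (ours; dyadic block sizes are assumed, $\mathcal{N}(\mathbf{E}_{k+m})$ is crudely set equal to $\mathcal{N}(\widetilde{\mathbf{K}}_{k+m})$, and $M(k)$ is taken as a naive dense-inversion count for the $2s(k)\times 2s(k)$ matrix $\mathbf{R}_k$) evaluates the count of Proposition 2:

```python
# Rough evaluation of the multiplication count in Proposition 2 (illustrative).
def mim_multiplications(k, m, iters):
    n = k + m
    w = [2] + [2**i for i in range(1, n + 1)]        # assumed block dimensions
    s_k = sum(w[: k + 1])                            # dimension of X_k
    nnz_K = sum(w[i] * w[j]                          # kept blocks: i + j <= n
                for i in range(n + 1) for j in range(n + 1) if i + j <= n)
    M_k = (2 * s_k)**3                               # one-time inversion of R_k
    return M_k + iters * (2 * nnz_K + 2 * nnz_K)

# A direct solve of (15) would instead factor the full 2 s(n) x 2 s(n) system.
print(mim_multiplications(k=5, m=5, iters=2))
```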
In the next subsection, we estimate the error $\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\|$.

4.3. Error Estimation

From the triangle inequality
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta}\| + \|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\|,$$
we only need to estimate $\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\|$. Let
$$\mathcal{D}_{k,m} := \widetilde{\mathcal{K}}_n^{*}\widetilde{\mathcal{K}}_{k+m}^{H} + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{L}, \qquad \mathcal{F}_{k,m}(\alpha) := \alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{L}.$$
Then $\alpha I + \widetilde{\mathcal{K}}_n^{*}\widetilde{\mathcal{K}}_n = \mathcal{D}_{k,m} + \mathcal{F}_{k,m}(\alpha)$.
Lemma 4.
Suppose that hypotheses (H1) and (H2) and condition (24) hold. Then $\|\mathcal{D}_{k,m}\|\to 0$ as $k\to\infty$ for every $m\in\mathbb{N}_+$, and there exists $N\in\mathbb{N}_+$ such that for $k > N$ and $m\in\mathbb{N}_+$,
$$\|\mathcal{D}_{k,m}\| < \frac{\alpha^{3/2}}{2\sqrt{\alpha} + M} \qquad\text{and}\qquad \|\mathcal{F}_{k,m}^{-1}(\alpha)\| \le \frac{1}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha} + M} - \|\mathcal{D}_{k,m}\|}. \quad (36)$$
Proof. 
If these hypotheses hold, then there exists a constant $c_2 > 0$ such that
$$\|(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_n\| \le \|(\mathcal{I} - \mathcal{P}_k)\mathcal{K}_n\| + \|(\mathcal{I} - \mathcal{P}_k)(\mathcal{K}_n - \overline{\mathcal{K}}_n)\| + \|(\mathcal{I} - \mathcal{P}_k)(\overline{\mathcal{K}}_n - \widetilde{\mathcal{K}}_n)\| \le c_2\,\beta(k,\gamma),$$
and the same bound holds for $(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_n^{*}$. Rewriting $\mathcal{D}_{k,m}$ in terms of these quantities, we obtain
$$\|\mathcal{D}_{k,m}\| \le \|\widetilde{\mathcal{K}}_n^{*}\|\,\|\mathcal{P}_{k+m}(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_n\| + \|\mathcal{P}_{k+m}(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_n^{*}\|\,\|\widetilde{\mathcal{K}}_n^{L}\| \to 0 \quad\text{as } k\to\infty, \quad (37)$$
so $\|\mathcal{D}_{k,m}\| < \alpha^{3/2}/(2\sqrt{\alpha} + M)$ for all sufficiently large $k$. From the definition of $\mathcal{F}_{k,m}(\alpha)$, estimate (17), and Theorem 1, for any $u\in X_{k+m}$ we have
$$\|\mathcal{F}_{k,m}(\alpha)u\| \ge \|(\alpha I + \widetilde{\mathcal{K}}_n^{*}\widetilde{\mathcal{K}}_n)u\| - \|\mathcal{D}_{k,m}u\| \ge \left(\frac{\alpha^{3/2}}{2\sqrt{\alpha} + M} - \|\mathcal{D}_{k,m}\|\right)\|u\|.$$
This, together with Equation (37), proves the first part of (36); the second part then follows immediately. □
A corollary (cf. Corollary 9.9, page 337 of [10]) confirms that the condition numbers $\operatorname{cond}(\alpha I + \widetilde{\mathcal{K}}_n^{*L}\widetilde{\mathcal{K}}_n^{L})$ and $\operatorname{cond}(\alpha I + \widetilde{\mathcal{K}}_n)$ are of the same order. In other words, the MIM will not ruin the well-conditioning of the original multiscale method. Selecting an appropriate $k$ is nevertheless important, since it influences the convergence of the iteration process. It follows from the iterative equation of Algorithm 1 that
$$(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{L})\,\tilde{x}_{\alpha,k+m}^{\delta,\ell} = \widetilde{\mathcal{K}}_{k+m}^{*L}\big(y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{H}\tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\big) + \widetilde{\mathcal{K}}_{k+m}^{*H}\big(y_{k+m}^\delta - \widetilde{\mathcal{K}}_{k+m}^{H}\tilde{x}_{\alpha,k+m}^{\delta,\ell-2} - \widetilde{\mathcal{K}}_{k+m}^{L}\tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\big). \quad (39)$$
Generally, in order to make the iteration process convergent, we choose $k$ such that
$$\big\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{L})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{H}\big\| < 1 \quad (40)$$
and
$$\big\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{L})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{H}\big\| < 1. \quad (41)$$
Theorem 3.
Suppose that hypothesis (H2) and condition (24) hold. Let
$$\tilde{\gamma}_{\alpha,k,m,\ell} := c_3\,t_k^{\ell/2}(\alpha)\left(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\right), \quad (42)$$
where
$$t_k(\alpha) = \frac{\mu^{-rk}}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha} + M} - \|\mathcal{D}_{k,m}\|}.$$
Then there exists $N_0\in\mathbb{N}_+$ such that for $k > N_0$ and $m\in\mathbb{N}_+$, we obtain the result
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \tilde{\gamma}_{\alpha,k,m,\ell}. \quad (43)$$
Proof. 
We prove this result by induction on $\ell$.
First, we prove that Equation (43) holds when $\ell = 0$. It is apparent that $\tilde{x}_{\alpha,k+m}^{\delta,0} = \tilde{x}_{\alpha,k}^{\delta}$. Therefore, from Lemmas 2 and 3, there exists a constant $c$ such that
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k}^{\delta}\| \le \|x_\alpha - \tilde{x}_{\alpha,k+m}^{\delta}\| + \|x_\alpha - \tilde{x}_{\alpha,k}^{\delta}\| \le c\left(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\right),$$
which gives Equation (43) for $\ell = 0$.
Meanwhile, from Equation (29) we can obtain
$$\big(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{L} + \widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{H} + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{L} + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{H}\big)\,\tilde{x}_{\alpha,k+m}^{\delta} = \widetilde{\mathcal{K}}_{k+m}^{*}\,y_{k+m}^\delta. \quad (45)$$
Subtracting (39) from (45), we have
$$\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta} = \mathcal{F}_{k,m}^{-1}(\alpha)\sum_{i=1}^2 w_i,$$
where
$$w_1 = \big(\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{H} + \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{L}\big)\big(\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\big)$$
and
$$w_2 = \widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{H}\big(\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-2}\big).$$
On the basis of Lemma 4, there exists $N_0\in\mathbb{N}_+$ such that for $k > N_0$,
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \|\mathcal{F}_{k,m}^{-1}(\alpha)\|\sum_{i=1}^2\|w_i\|.$$
Next, we estimate $\|\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{H}\|$, $\|\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{H}\|$, and $\|\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{L}\|$. From hypothesis (H2), we have
$$\|\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{H}\| \le \|(\mathcal{P}_{k+m} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}^{*}\mathcal{Q}_{k+m}\|\,\|(\mathcal{P}_{k+m} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\| \le \|\mathcal{P}_{k+m}(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}^{*}\|\,\|\mathcal{P}_{k+m}(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\| \le c\,\mu^{-2rk}$$
and
$$\|\widetilde{\mathcal{K}}_{k+m}^{*L}\widetilde{\mathcal{K}}_{k+m}^{H}\| \le \|\mathcal{P}_k\widetilde{\mathcal{K}}_{k+m}^{*}\mathcal{Q}_{k+m}\|\,\|(\mathcal{P}_{k+m} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\mathcal{Q}_{k+m}\| \le M\,\|\mathcal{P}_{k+m}(\mathcal{I} - \mathcal{P}_k)\widetilde{\mathcal{K}}_{k+m}\| \le c\,\mu^{-rk},$$
with the same bound for $\|\widetilde{\mathcal{K}}_{k+m}^{*H}\widetilde{\mathcal{K}}_{k+m}^{L}\|$. Then we get
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c\,\mu^{-rk}\,\|\mathcal{F}_{k,m}^{-1}(\alpha)\|\,\big(\tilde{\gamma}_{\alpha,k,m,\ell-1} + \tilde{\gamma}_{\alpha,k,m,\ell-2}\big) \le c\,t_k(\alpha)\,\big(\tilde{\gamma}_{\alpha,k,m,\ell-1} + \tilde{\gamma}_{\alpha,k,m,\ell-2}\big). \quad (50)$$
We now assume that Equation (43) holds for $\ell = q - 1$, $q\in\mathbb{N}_+$, and prove that it also holds for $\ell = q$. Following from conditions (40) and (41), we obtain
$$t_k(\alpha) = \frac{\mu^{-rk}}{\dfrac{\alpha^{3/2}}{2\sqrt{\alpha} + M} - \|\mathcal{D}_{k,m}\|} < 1. \quad (51)$$
From Equation (50) and inequality (51), we obtain
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,q}\| \le c\,t_k(\alpha)\big[c\,t_k^{(q-1)/2}(\alpha) + c\,t_k^{(q-2)/2}(\alpha)\big]\left(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\right) = c\big[c\,t_k^{(q+1)/2}(\alpha) + c\,t_k^{q/2}(\alpha)\big]\left(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\right) \le c_3\,t_k^{q/2}(\alpha)\left(\frac{\delta}{\alpha} + \frac{k\mu^{-rk}}{\alpha^{3/2}}\right) = \tilde{\gamma}_{\alpha,k,m,q},$$
which completes the proof of this theorem. □
Theorem 4.
Suppose that hypotheses (H1) and (H2) and assumption (19) hold, the parameters $n$ and $\gamma$ are chosen to satisfy condition (24), and the integer $k$ is chosen to satisfy conditions (40) and (41). Then there exists $N_0\in\mathbb{N}_+$ such that for $k > N_0$ and $m\in\mathbb{N}_+$,
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c_4\left[\alpha^\nu + \frac{\delta}{\alpha} + \frac{n\mu^{-rn}}{\alpha^{3/2}} + t_k^{\ell/2}(\alpha)\,\frac{k\mu^{-rk}}{\alpha^{3/2}}\right]. \quad (53)$$
Moreover, if the parameters $k$ and $m$ are chosen on the basis of $n := k + m$, and the number of iterations $\ell$ satisfies
$$\frac{n\mu^{-rn}}{\alpha^{3/2}} + t_k^{\ell/2}(\alpha)\,\frac{k\mu^{-rk}}{\alpha^{3/2}} \le \frac{2\delta}{\alpha}, \quad (54)$$
then
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le 3c_4\left(\alpha^\nu + \frac{\delta}{\alpha}\right). \quad (55)$$
Proof. 
From the triangle inequality, we have
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta}\| + \|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\|.$$
Combining Theorems 2 and 3 yields inequality (53). If the three parameters $n$, $\gamma$, and $k$ are chosen to satisfy conditions (40), (41), and (54), together with $m := n - k$, then inequality (53) gives
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c_4\left(\alpha^\nu + \frac{3\delta}{\alpha}\right) \le 3c_4\left(\alpha^\nu + \frac{\delta}{\alpha}\right),$$
which completes the proof. □
Note that
$$\|\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\| \le \|\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta}\| + \|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\| \le \tilde{\gamma}_{\alpha,k,m,\ell} + \tilde{\gamma}_{\alpha,k,m,\ell-1},$$
and $\tilde{\gamma}_{\alpha,k,m,\ell}$ decreases exponentially in $\ell$. Therefore, the stopping criterion is reached after finitely many steps. In the next remark, we show that the stopping criterion of Algorithm 1 can guarantee the convergence rate of Equation (55).
Remark 2.
Suppose that hypotheses (H1) and (H2) and assumption (19) hold, the parameters $n$ and $\gamma$ are selected to satisfy condition (24), and $n$ satisfies $n\mu^{-rn}/\alpha^{3/2} \le \delta/\alpha$. If the iteration stops when
$$\|\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta,\ell-1}\| < \delta \qquad\text{and}\qquad \|\tilde{v}_{\alpha,k+m}^{\delta,\ell} - \tilde{v}_{\alpha,k+m}^{\delta,\ell-1}\| < \delta, \quad (57)$$
then the estimation error satisfies
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c\left(\alpha^\nu + \frac{\delta}{\alpha}\right).$$
Proof. 
From the triangle inequality, we get
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta}\| + \|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\|.$$
On the one hand, following from the iterative equation of Algorithm 1, we can write $\tilde{x}_{\alpha,k+m}^{\delta,\ell}$ as
$$\tilde{x}_{\alpha,k+m}^{\delta,\ell} = (\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m})^{-1}\Big[\sqrt{\alpha}\,\widetilde{\mathcal{K}}_{k+m}^{*H}\big(\tilde{v}_{\alpha,k+m}^{\delta,\ell-1} - \tilde{v}_{\alpha,k+m}^{\delta,\ell}\big) - \widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m}^{H}\big(\tilde{x}_{\alpha,k+m}^{\delta,\ell-1} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\big) + \widetilde{\mathcal{K}}_{k+m}^{*}y^\delta\Big]. \quad (59)$$
On the other hand,
$$\tilde{x}_{\alpha,k+m}^{\delta} = (\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*}y^\delta. \quad (60)$$
Combining Equations (59) and (60), we have
$$\|\tilde{x}_{\alpha,k+m}^{\delta,\ell} - \tilde{x}_{\alpha,k+m}^{\delta}\| \le \sqrt{\alpha}\,\big\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*H}\big\|\,\big\|\tilde{v}_{\alpha,k+m}^{\delta,\ell-1} - \tilde{v}_{\alpha,k+m}^{\delta,\ell}\big\| + \big\|(\alpha I + \widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m})^{-1}\widetilde{\mathcal{K}}_{k+m}^{*}\widetilde{\mathcal{K}}_{k+m}^{H}\big\|\,\big\|\tilde{x}_{\alpha,k+m}^{\delta,\ell-1} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\big\|.$$
From the right inequality of (17) and the stopping criterion (57), we can obtain
$$\|\tilde{x}_{\alpha,k+m}^{\delta} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c\,\frac{\delta}{\alpha}.$$
According to Theorem 2, the above inequality, and $n\mu^{-rn}/\alpha^{3/2} \le \delta/\alpha$, we get the final result:
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le c\left(\alpha^\nu + \frac{\delta}{\alpha}\right). \qquad \square$$

5. Regularization Parameter Choice Strategies

Choosing an appropriate regularization parameter $\alpha$ is an important step in solving the ill-posed integral equation. We present an a posteriori parameter choice strategy [14] for our proposed method.
For any given $\alpha > 0$, we choose the parameters $k(\alpha)$, $m(\alpha)$, $n(\alpha)$, and $\gamma$ according to Theorem 4. Following [14], we introduce the two increasing continuous functions
$$\varphi(\alpha) := 3c_4\,\alpha^\nu \qquad\text{and}\qquad \psi(\alpha) := \frac{\alpha}{3c_4}, \quad (63)$$
with $\varphi(0) = 0$, so that the error bound (55) can be written as
$$\|\hat{x} - \tilde{x}_{\alpha,k+m}^{\delta,\ell}\| \le \varphi(\alpha) + \frac{\delta}{\psi(\alpha)}. \quad (64)$$
Then $\alpha = \alpha_{opt} := (\varphi\psi)^{-1}(\delta)$ would be the best choice. For constants $q_0 > 1$ and $\rho > 0$, let the positive integer $N$ be determined by
$$\rho\,\delta\,q_0^{N-1} \le 1 < \rho\,\delta\,q_0^{N},$$
and define the set $\Delta_N := \{\alpha_i := \rho\,\delta\,q_0^{i} : i = 0,1,\dots,N\}$ together with
$$M(\Delta_N) := \{\alpha_i : \alpha_i\in\Delta_N,\ \varphi(\alpha_i) \le \delta/\psi(\alpha_i)\}.$$
Obviously, $\alpha^{*} := \max\{\alpha_i : \alpha_i\in M(\Delta_N)\}$ could serve as an approximation of the optimal regularization parameter, but the function $\varphi(\alpha)$ involves the unknown smoothness order $\nu$. Therefore, it is infeasible to use $M(\Delta_N)$ directly, and a small modification is necessary. We next present a rule for choosing the regularization parameter $\alpha$.
Rule 1.
As suggested in [14], we choose the parameter $\alpha := \alpha_+ = \max\{\alpha_j : \alpha_j\in M_+(\Delta_N)\}$ as an approximation of $\alpha_{opt}$, where
$$M_+(\Delta_N) := \Big\{\alpha_j : \alpha_j\in\Delta_N,\ \big\|\tilde{x}_{\alpha_j,n(\alpha_j)}^{\delta,\ell} - \tilde{x}_{\alpha_i,n(\alpha_i)}^{\delta,\ell}\big\| \le \frac{4\delta}{\psi(\alpha_i)},\ i = 0,1,\dots,j\Big\}. \quad (65)$$
This is an a posteriori parameter choice strategy that does not involve the smoothness order $\nu$. We next present a crucial lemma, which can be found in Theorem 2.1 of [14].
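A sketch of Rule 1 in code (ours; `solve` is a hypothetical callback that runs Algorithm 1 with the parameters $k(\alpha)$, $m(\alpha)$, $n(\alpha)$ attached to $\alpha$, and the values of $\rho$, $q_0$, $c_4$ are illustrative):

```python
import numpy as np

def choose_alpha(solve, delta, rho=1.0, q0=1.5, c4=1.0):
    # Grid Delta_N with rho*delta*q0**(N-1) <= 1 < rho*delta*q0**N.
    N = 0
    while rho * delta * q0**N <= 1.0:
        N += 1
    alphas = [rho * delta * q0**i for i in range(N + 1)]
    psi = lambda a: a / (3.0 * c4)
    xs = []
    alpha_plus = alphas[0]
    for j, aj in enumerate(alphas):
        xj = solve(aj)                       # hypothetical MIM run at alpha_j
        ok = all(np.linalg.norm(xj - xs[i]) <= 4 * delta / psi(alphas[i])
                 for i in range(j))          # condition defining M_+(Delta_N)
        xs.append(xj)
        if ok:
            alpha_plus = aj                  # largest admissible alpha so far
        else:
            break
    return alpha_plus
```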
Lemma 5.
Suppose that the bound (64) holds, and let $k(\alpha_+)$ and $m(\alpha_+)$ be the integers corresponding to $\alpha_+$. If $M(\Delta_N)\neq\emptyset$, $\Delta_N\setminus M(\Delta_N)\neq\emptyset$, and $\psi(\alpha_i) \le c\,\psi(\alpha_{i-1})$ for $i = 1,2,\dots,N$ and some $c > 0$, then we get the result
$$\|\hat{x} - \tilde{x}_{\alpha_+,\,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\| \le 6c\,\varphi\big((\varphi\psi)^{-1}(\delta)\big). \quad (66)$$
Proof. 
This lemma can be obtained from [7] with a slight modification; we omit the details of the proof. □
Then, we estimate the convergence order when the parameter $\alpha := \alpha_+$ is chosen.
Theorem 5.
Suppose that hypotheses (H1) and (H2) and assumption (19) hold. If the regularization parameter is chosen according to Rule 1, then the approximate solution $\tilde{x}_{\alpha_+,\,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}$ satisfies
$$\|\hat{x} - \tilde{x}_{\alpha_+,\,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\| = O\big(\delta^{\frac{\nu}{\nu+1}}\big) \qquad\text{as } \delta\to 0. \quad (67)$$
Proof. 
Substituting the definitions (63) into Equation (66), we obtain Equation (67); thus, it suffices to verify the hypotheses of Lemma 5. From the definition of $\Delta_N$, we get $\psi(\alpha_i) = q_0\,\psi(\alpha_{i-1})$. On the one hand, $\varphi(\alpha_0)\psi(\alpha_0) = (\rho\delta q_0^{0})^{\nu+1} = (\rho\delta)^{\nu+1}$ and $\rho^{\nu+1}\delta^{\nu} \le 1$, so we obtain $\varphi(\alpha_0) \le \delta/\psi(\alpha_0)$. On the other hand, $\varphi(\alpha_N)\psi(\alpha_N) = (\rho\delta q_0^{N})^{\nu+1} > 1 \ge \delta$, so $\varphi(\alpha_N) > \delta/\psi(\alpha_N)$. In sum, we conclude that $\alpha_0\in M(\Delta_N)$ and $\alpha_N\notin M(\Delta_N)$; therefore $M(\Delta_N)\neq\emptyset$, and at the same time $\Delta_N\setminus M(\Delta_N)\neq\emptyset$. The conditions of Lemma 5 are thus met, and there exists a constant $c_5$ such that
$$\|\hat{x} - \tilde{x}_{\alpha_+,\,k(\alpha_+)+m(\alpha_+)}^{\delta,\ell}\| \le 6q_0\,\varphi\big((\varphi\psi)^{-1}(\delta)\big) = c_5\,\delta^{\frac{\nu}{\nu+1}}.$$
The proof is complete. □

6. Numerical Experiments

In this section, three numerical examples are presented for the restoration of out-of-focus images to verify the performance of the fully discrete multiscale collocation method and the multilevel iteration method. Matlab was used to conduct the simulations, and all examples below were run on a computer with a 3.00 GHz CPU and 8 GB of memory.
Using the integral equation model requires transforming the discrete matrix (the observed image) into a continuous function; we use the method of [1] directly. Assume that the size of the image is $r_o\times c_o$ and that the pixels lie on the grid $\{(i/r_o, j/c_o) : i = 0,1,\dots,r_o;\ j = 0,1,\dots,c_o\}$. The function representing the image is
$$h(u,v) := \sum_{i=0}^{r_o}\sum_{j=0}^{c_o}\bar{h}_{ij}\,\phi_{i,r_o}(u)\,\phi_{j,c_o}(v), \quad (69)$$
where $\bar{h}_{ij}$ is the old pixel value. For a positive integer $s$ and $l = 0,1,\dots,s$,
$$\phi_{l,s}(t) = \begin{cases} st - (l-1), & (l-1)/s < t \le l/s, \\ (l+1) - st, & l/s < t \le (l+1)/s, \\ 0, & \text{otherwise}. \end{cases}$$
Then Equation (1) becomes
$$y(u,v) = \sum_{i=0}^{r_o}\sum_{j=0}^{c_o}\bar{h}_{ij}\int_0^1\!\!\int_0^1 \frac{1}{2\pi\sigma^2}\,e^{-\frac{(u-u')^2 + (v-v')^2}{2\sigma^2}}\,\phi_{i,r_o}(u')\,\phi_{j,c_o}(v')\,du'\,dv'.$$
The noise level is defined as $\delta = \|y\|\cdot e/100$ for a noise percentage $e$. Note that we employ the piecewise linear basis functions of [26] for simplicity in the following examples.
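For reference, the hat functions and the intensity function (69) can be implemented directly (our sketch):

```python
import numpy as np

# The hat functions phi_{l,s} and the continuous intensity function (69).
def phi(l, s, t):
    t = np.asarray(t, dtype=float)
    up = np.where((t > (l - 1) / s) & (t <= l / s), s * t - (l - 1), 0.0)
    dn = np.where((t > l / s) & (t <= (l + 1) / s), (l + 1) - s * t, 0.0)
    return up + dn

def h(img, u, v):
    ro, co = img.shape[0] - 1, img.shape[1] - 1   # pixels on an (ro+1)x(co+1) grid
    val = 0.0
    for i in range(ro + 1):
        for j in range(co + 1):
            val += img[i, j] * phi(i, ro, u) * phi(j, co, v)
    return val

img = np.arange(9.0).reshape(3, 3)                # tiny stand-in image
print(h(img, 0.5, 0.5))                           # interpolates the pixel values
```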
Example 1.
In our first experiment, we verified the effectiveness of the integral approximation strategy for the coupled integral equation. Setting the initial level to $n = 0$, we measured the computing time in seconds needed to generate the coefficient matrix $\mathbf{K}_n$ of the coupled operator equation (9) for $n$ ranging from 5 to 10; see Table 1. For comparison, we repeated the experiment with the numerical quadrature scheme of [1]. In this example, both integration methods were set to the same accuracy, $\gamma = 3$. For each value of $n$, $T_N$ denotes the computing time of the numerical quadrature scheme of [1], and $T_I$ denotes the computing time of the proposed integral approximation strategy. As Table 1 shows, the proposed integral approximation strategy uses far less time to generate the coefficient matrix $\mathbf{K}_n$, which demonstrates that it is an efficient fast algorithm.
Example 2.
The second simulation was executed to verify the efficiency of the proposed multilevel iteration method. Figure 3a shows the original clear 'building' image of size $256\times 384$, and Figure 3b shows the blurred image with $\sigma = 0.01$ in the blurring kernel and noise level $\delta = 0.01$. For comparison, we also conducted experiments using the Gaussian elimination method (GEM) of [28] and the multilevel augmentation method (MAM) of [10]. $T_{GEM}$, $T_{MAM}$, and $T_{MIM}$ denote the computing times of the Gaussian elimination method, the multilevel augmentation method, and the multilevel iteration method, respectively. The values of $n$ listed in Table 2 range from 5 to 10, and in this case the continuous intensity function (69) is needed. As shown in Section 2, two one-dimensional integral equations are solved to recover the blurred image; the computing time therefore denotes the total time needed to solve the coupled equation (8) twice. In our MIM experiments, every run stopped at the second iteration, which is very fast thanks to the good initial values $\tilde{x}_{\alpha,k}^\delta$ and $\tilde{v}_{\alpha,k}^\delta$ from Step 1 of Algorithm 1. Figure 3c–e show the reconstructed images of (b) using the GEM, the MAM, and the MIM, and Table 2 and Figure 4 report the computing times of the three methods. Overall, the MIM uses the least computing time; all results show that the proposed multilevel iteration method requires fewer computational resources, and the difference is pronounced when the dimension indicator $n$ is large.
Example 3.
In this example, we demonstrate that the restoration quality of the MIM matches that of the alternative method. As shown in Figure 5b–d, we consider the restoration of the 'Lena' image of size $257\times 257$, blurred with $\sigma = 0.01$ in the kernel and various noise levels $\delta$; when $\delta = 0$, the image is noise free. We introduce the peak signal-to-noise ratio (PSNR) to evaluate the restored and blurred images. For comparison, we solved the corresponding integral equation with the proposed multilevel iteration method and with the Gaussian elimination method, using piecewise linear polynomial basis functions at $n = 8$. Tables 3 and 4 list $\alpha_+$ obtained from the parameter choice of Rule 1, the PSNR value of the blurred image (PB), the PSNR value of the reconstructed image (PR), and the corresponding time to solve the equation using the GEM and the MIM, respectively, with noise level $\delta$ ranging from 0 to 0.15. Figure 5e–j show the reconstructed images for the different methods at noise levels 0, 0.03, and 0.1. Comparing the numerical results in Tables 3 and 4, we conclude that there is almost no difference in the PSNR of the images reconstructed by the GEM and the MIM; if anything, the MIM performs slightly better in most cases.
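For completeness, the PSNR used above is the standard one (our sketch; the peak value 255 assumes 8-bit images):

```python
import numpy as np

def psnr(clean, restored):
    # Peak signal-to-noise ratio in dB for 8-bit images.
    mse = np.mean((clean.astype(float) - restored.astype(float))**2)
    return 10.0 * np.log10(255.0**2 / mse)
```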
Example 2, combined with Example 3, shows that the MIM uses less computation time than the alternative methods while performing equally well. Therefore, we conclude that the multilevel iteration method is an effective fast algorithm for solving the integral equation in image restoration.

7. Conclusions

In this paper, we formulated the problem of image restoration as an integral equation and proposed two fast algorithms for solving it. The first is the fully discrete multiscale collocation method, which converts the calculation of the integrals into a matrix operation. The second is the multilevel iteration method, which guarantees a solution of optimal order. All examples verify that the proposed methods are accurate and efficient compared with the alternative strategies. In the future, we will continue to search for faster and more efficient methods.

Author Contributions

B.Z. completed the analysis and designed the paper; H.Y. managed the project and reviewed the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by NSFC under grant 11571386.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lu, Y.; Shen, L.; Xu, Y. Integral equation models for image restoration: High accuracy methods and fast algorithms. Inverse Probl. 2010, 26, 045006.
2. Chan, R.H.; Chan, T.F.; Shen, L.; Shen, Z. Wavelet algorithms for high-resolution image reconstruction. SIAM J. Sci. Comput. 2003, 24, 1408–1432.
3. Jiang, Y.; Li, S.; Xu, Y. A Higher-Order Polynomial Method for SPECT Reconstruction. IEEE Trans. Med. Imaging 2019, 38, 1271–1283.
4. Liu, Y.; Shen, L.; Xu, Y.; Yang, H. A collocation method solving integral equation models for image restoration. J. Integral Eq. Appl. 2016, 28, 263–307.
5. Groetsch, C.W. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind; Research Notes in Mathematics; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984.
6. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Mathematics and Its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1996.
7. Chen, Z.; Ding, S.; Xu, Y.; Yang, H. Multiscale collocation methods for ill-posed integral equations via a coupled system. Inverse Probl. 2012, 28, 025006.
8. Russell, R.D.; Shampine, L.F. A collocation method for boundary value problems. Numer. Math. 1972, 19, 1–28.
9. Krasnosel'skii, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitskii, Y.B.; Stetsenko, V.Y. Approximate Solution of Operator Equations; Wolters-Noordhoff: Groningen, The Netherlands, 1972.
10. Chen, Z.; Micchelli, C.A.; Xu, Y. Multiscale Methods for Fredholm Integral Equations; Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 2015.
11. Micchelli, C.A.; Xu, Y.; Zhao, Y. Wavelet Galerkin methods for second-kind integral equations. J. Comput. Appl. Math. 1997, 86, 251–270.
12. Groetsch, C.W. Convergence analysis of a regularized degenerate kernel method for Fredholm integral equations of the first kind. Integral Eq. Oper. Theory 1990, 13, 67–75.
13. Hämarik, U.; Raus, T. About the balancing principle for choice of the regularization parameter. Numer. Funct. Anal. Optim. 2009, 30, 951–970.
14. Pereverzev, S.; Schock, E. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM J. Numer. Anal. 2005, 43, 2060–2076.
15. Bauer, F.; Kindermann, S. The quasi-optimality criterion for classical inverse problems. Inverse Probl. 2008, 24, 035002.
16. Jin, Q.; Wang, W. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule. Inverse Probl. 2018, 34, 035001.
17. Rajan, M.P. Convergence analysis of a regularized approximation for solving Fredholm integral equations of the first kind. J. Math. Anal. Appl. 2003, 279, 522–530.
18. Ma, Y.; Xu, Y. Computing integrals involved the Gaussian function with a small standard deviation. J. Sci. Comput. 2019, 78, 1744–1767.
19. Luo, X.; Fan, L.; Wu, Y.; Li, F. Fast multi-level iteration methods with compression technique for solving ill-posed integral equations. J. Comput. Appl. Math. 2014, 256, 131–151.
20. Gonzalez, R.C.; Wintz, P. Digital Image Processing; Addison-Wesley: Reading, MA, USA, 1977.
21. Chen, Z.; Micchelli, C.A.; Xu, Y. A construction of interpolating wavelets on invariant sets. Math. Comp. 1999, 68, 1569–1587.
22. Chen, Z.; Xu, Y.; Yang, H. Fast collocation methods for solving ill-posed integral equations of the first kind. Inverse Probl. 2008, 24, 065007.
23. Chen, Z.; Xu, Y.; Yang, H. A multilevel augmentation method for solving ill-posed operator equations. Inverse Probl. 2006, 22, 155–174.
24. Chen, Z.; Micchelli, C.A.; Xu, Y. Fast collocation methods for second kind integral equations. SIAM J. Numer. Anal. 2002, 40, 344–375.
25. Micchelli, C.A.; Xu, Y. Reconstruction and decomposition algorithms for biorthogonal multiwavelets. Multidimens. Syst. Signal Process. 1997, 8, 31–69.
26. Fang, W.; Lu, M. A fast collocation method for an inverse boundary value problem. Int. J. Numer. Methods Eng. 2004, 59, 1563–1585.
27. Groetsch, C.W. Uniform convergence of regularization methods for Fredholm equations of the first kind. J. Aust. Math. Soc. Ser. A 1985, 39, 282–286.
28. Kincaid, D.; Cheney, W. Numerical Analysis: Mathematics of Scientific Computing; Brooks/Cole: Pacific Grove, CA, USA, 1991.
29. Taylor, A.E.; Lay, D.C. Introduction to Functional Analysis, 2nd ed.; Krieger: Melbourne, FL, USA, 1986.
Figure 1. Overview flowchart.
Figure 2. The components of the matrix $\mathbf{K}_n$.
Figure 3. (a) The original clear 'building' image. (b) Blurred image with $\sigma = 0.01$ and $\delta = 0.01$. (c–e) Reconstructed images of (b) with the GEM, the MAM, and the MIM.
Figure 4. The computing time of the GEM, the MAM, and the MIM.
Figure 5. (a) Original clear 'Lena' image. (b–d) Blurred images with $\sigma = 0.01$ in the kernel and noise levels $\delta = 0, 0.03, 0.1$, respectively. (e–g) Reconstructed images of (b–d), respectively, using the GEM. (h–j) Reconstructed images of (b–d), respectively, using the MIM.
Table 1. Computing time (in seconds) for generating the matrices $\mathbf{K}_n$.

n      5      6      7      8      9      10
T_N    0.31   0.58   1.17   2.42   5.45   13.98
T_I    0.002  0.004  0.013  0.047  0.190  2.14
Table 2. The computing time (in seconds) of the GEM, the MAM, and the MIM.

n        5        6        7        8        9         10
T_GEM    0.0131   0.0645   0.3535   2.4250   15.8140   100.6399
T_MAM    0.0103   0.0528   0.3454   2.2584   13.8898   82.9874
T_MIM    0.0084   0.0500   0.3478   1.8875   11.0880   64.6845
Table 3. Performance of the Gaussian elimination method.

delta    0             0.01      0.03      0.05      0.10      0.15
alpha_+  4.3980e-07    0.0067    0.0167    0.0256    0.04      0.05
PB       23.0173       22.9552   22.4792   21.6726   19.1102   16.7175
PR       31.9072       25.8306   24.5039   23.6348   21.9135   20.7536
Table 4. Performance of the multilevel iteration method.

delta    0             0.01      0.03      0.05      0.10      0.15
alpha_+  6.6570e-06    0.009     0.0199    0.0240    0.0394    0.0526
PB       23.0173       22.9528   22.4801   21.6692   19.1201   16.7044
PR       30.9030       25.9479   24.5537   23.6281   21.9105   20.7557

