Article

Total Fractional-Order Variation-Based Constraint Image Deblurring Problem

Shahid Saleem, Shahbaz Ahmad and Junseok Kim
1 Abdus Salam School of Mathematical Sciences (AS-SMS), Government College University, Lahore 54000, Pakistan
2 Department of Mathematics, Korea University, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2869; https://doi.org/10.3390/math11132869
Submission received: 16 May 2023 / Revised: 10 June 2023 / Accepted: 20 June 2023 / Published: 26 June 2023
(This article belongs to the Special Issue Advances of Mathematical Image Processing)

Abstract

When deblurring an image, ensuring that the restored intensities are strictly non-negative is crucial. However, current numerical techniques often fail to consistently produce favorable results, leading to negative intensities that contribute to significant dark regions in the restored images. To address this, our study proposes a mathematical model for non-blind image deblurring based on total fractional-order variational principles. Our proposed model not only guarantees strictly positive intensity values but also imposes limits on the intensities within a specified range. By removing negative intensities or constraining them within the prescribed range, we can significantly enhance the quality of deblurred images. The key concept in this paper involves converting the constrained total fractional-order variational-based image deblurring problem into an unconstrained one through the introduction of the augmented Lagrangian method. To facilitate this conversion and improve convergence, we describe new numerical algorithms and introduce a novel circulant preconditioned matrix. This matrix effectively overcomes the slow convergence typically encountered when using the conjugate gradient method within the augmented Lagrangian framework. Our proposed approach is validated through computational tests, demonstrating its effectiveness and viability in practical applications.

1. Introduction

In image processing, image deblurring is an attractive topic due to its practical applications in robot vision [1], remote sensing [2], medical image processing [3], virtual reality [4], astronomical imaging [5], and many other fields. The mathematical relationship between the original image u and the blurry image z is as follows:
z = K u + ϵ ,
where ε denotes a noise function and K denotes the blurring operator:
$$(Ku)(x) = \int_{\Omega} k(x,y)\, u(y)\, dy, \qquad x \in \Omega,$$
where k(x, y) = ϕ(x − y) is referred to as a translation-invariant kernel or a point spread function (PSF). The task of recovering u and K from z is called the deconvolution problem. If the blurring operator K is given, then the corresponding approach is referred to as non-blind deconvolution [6,7,8,9,10,11,12,13]. However, when the blurring operator is unknown, the corresponding approach is referred to as blind deconvolution [14,15,16,17,18,19,20]. In this paper, our primary focus is on non-blind deconvolution, where K is a compact operator. However, recovering u from z is challenging because problem (1) is an unstable (ill-posed) inverse problem [21,22,23]. To address this issue, researchers have extensively explored energy minimization models for image deblurring, which have attracted significant attention over the past few decades.
$$\min_{u \in C} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, R(u),$$
where C represents a constraint set, R(u) is a regularization functional, α̃ > 0 is a smoothing parameter that balances the data-fitting and smoothing terms, and ∗ is the 2-D convolution operator. When applying these techniques to noisy and blurred images, researchers must overcome two major challenges: the first is dealing with the non-linearity, and the second is solving the massive matrix system involved.
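As a concrete illustration of the degradation model (1) and (2), the following minimal Python sketch blurs a synthetic test image with a Gaussian PSF and adds Gaussian noise. It is not the authors' code (their experiments use MATLAB); the image, kernel width, and noise level are arbitrary illustrative choices.

```python
# Illustrative sketch of the degradation model z = K u + eps, where K is
# convolution with a translation-invariant PSF phi(x - y). Placeholder choices.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
u_true = rng.random((256, 256))                  # stand-in for a test image

# Normalized Gaussian point spread function (PSF).
x = np.arange(-7, 8)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2.0 * 3.0**2))
psf /= psf.sum()

# Blurred, noisy observation z = k * u + eps.
z = fftconvolve(u_true, psf, mode="same") + 0.01 * rng.standard_normal(u_true.shape)
```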

1.1. Related Works

Non-blind deconvolution poses significant challenges as an ill-posed inverse problem. Numerous techniques for deconvolution have been developed to address these challenges by incorporating different image priors to regularize the solution. One such example is the utilization of the Tikhonov regularization model [21,22],
$$\min_{u \in C} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{Tik},$$
where $\|u\|_{Tik} = \int_{\Omega} |\nabla u|^2 \, d\Omega$. The Tikhonov model involves least-squares estimation, which often leads to excessively smoothed image reconstructions. Owing to its edge-preserving property, the total variation (TV) model [23,24,25] has become the most widely used non-linear energy minimization model for image deblurring.
$$\min_{u \in C} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TV,\beta},$$
where $\|u\|_{TV,\beta} = \int_{\Omega} |\nabla u|_{\beta} \, d\Omega$ and $|\nabla u|_{\beta} = \sqrt{u_x^2 + u_y^2 + \beta}$. Here, β > 0 is used to make the functional $\|u\|_{TV,\beta}$ differentiable at zero. The TV model possesses numerous advantageous features; however, it has one significant flaw: it tends to transform smooth functions into piecewise constant functions, which produces staircase effects and makes the restored images appear blocky. To mitigate these staircase effects, total fractional-order variation (TFOV)-based models [26,27,28,29,30,31,32] have been proposed as an alternative that addresses this limitation of the TV model and reduces the undesirable staircase artifacts.
$$\min_{u \in C} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta},$$
where α represents the order of the fractional derivative, $\|u\|_{TFOV,\beta} = \int_{\Omega} \sqrt{|\nabla^{\alpha} u|^2 + \beta^2} \, d\Omega$, and $|\nabla^{\alpha} u|^2 = (D_x^{\alpha} u)^2 + (D_y^{\alpha} u)^2$, where $D_x^{\alpha}$ and $D_y^{\alpha}$ are the fractional derivative operators along the x and y directions, respectively. The x-direction derivative is also denoted by $D_{a,x}^{\alpha}$, where a and x are the lower and upper bounds of the integral. In this notation, 0 < n − 1 < α < n, where n = [α] + 1 and [·] denotes the greatest integer function. Various definitions have been proposed for fractional-order derivatives [33,34,35,36]. Regularization models based on TFOV are known for their exceptional efficiency: they preserve edges in the recovered images while simultaneously eliminating the undesirable staircase effect.
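For reference, the sketch below evaluates the smoothed TV term ‖u‖_{TV,β} on a pixel grid with forward differences; the TFOV term is evaluated analogously once the Grünwald-type fractional difference matrices of Section 4 are available. The function name and discretization details are illustrative assumptions.

```python
# Sketch: discrete smoothed TV regularizer ||u||_{TV,beta} using forward
# differences (last row/column replicated). Illustrative only.
import numpy as np

def tv_beta(u: np.ndarray, beta: float = 0.1, h: float = 1.0) -> float:
    ux = np.diff(u, axis=1, append=u[:, -1:]) / h   # u_x by forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :]) / h   # u_y by forward differences
    return float(np.sum(np.sqrt(ux**2 + uy**2 + beta)) * h * h)
```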
In recent years, TFOV-based image processing methods have attracted increasing attention, and some basic conclusions have been drawn in the areas of image denoising, edge detection, and reconstruction [31,37,38]. Mathieu et al. [37] proposed an edge detection method based on fractional differentiation, which effectively enhances image details such as edges and textures. Tian et al. [38] proposed a fractional-order adaptive regularization primal–dual algorithm for image denoising. Furthermore, Zhang et al. [31] proposed a TFOV model for image restoration, demonstrating its efficacy in suppressing the staircase effect. These studies have shown that, compared with first-order and second-order total variation methods, TFOV can represent image textures more accurately and delicately. More recently, Fairag et al. [39] and Guo et al. [30] have incorporated the TFOV model within the framework of image deblurring problems, further highlighting the applicability and potential of TFOV-based approaches.

1.2. Scope of the Paper

Particularly in astronomical imaging, image deblurring frequently requires that the restored image have strictly non-negative intensities [40,41,42,43,44]. However, it has been noted that solutions obtained with current techniques may not always produce favorable outcomes. Images in which many pixels have intensity values equal or close to zero are known as images with negative intensities or black space. In this research, we provide a model for TFOV-based image deblurring that guarantees strictly positive image intensities. The suggested model additionally restricts the image intensity values, maintaining them within a specified range. Removing negative intensities, or confining them within the prescribed range, also contributes to improving the quality of deblurred images. The main idea behind this paper is to convert the TFOV-based constrained image deblurring problem into an unconstrained one and then introduce a Lagrange multiplier. Optimization problems in computer vision and image processing have been successfully addressed by augmented Lagrangian methods [45,46,47], which have demonstrated superior speed compared with other numerical techniques. It has been shown that the original nontrivial minimization problem can be broken down into a number of straightforward and quick-to-solve subproblems using augmented Lagrangian methods; some of these have closed-form solutions, while others can be solved quickly using tools such as the fast Fourier transform (FFT). In our augmented Lagrangian method, the solution of one of the subproblems requires the conjugate gradient (CG) method. However, the CG method exhibits slow convergence due to an ill-conditioned matrix system. To overcome this slow convergence, we introduce a new preconditioned matrix in this paper.
The main contributions of this paper are as follows: (i) it presents the one-sided and two-sided constraint methods for the TFOV-based constrained image deblurring problem; (ii) the proposed methods limit the upper boundary of the image intensity values, maintaining them within a specified range, while also guaranteeing strictly positive results; (iii) it presents a new circulant preconditioned matrix to improve the convergence of the CG method within the augmented Lagrangian method; and (iv) the proposed methods generate high-quality restored images compared to the most recent existing TFOV-based image deblurring methods.
This paper is organized as follows. We discuss one-sided and two-sided constraint problems in Section 2. Section 3 presents the Euler–Lagrange equations. Section 4 presents the cell discretization and the matrix system of the model; the proposed preconditioned matrix is also presented in Section 4. The numerical application of our approaches is presented in Section 5. Section 6 contains the conclusions regarding the suggested methods, and Appendix A provides the proof of Theorem 1.

2. Constraint Image Deblurring Problem

In this section, we present a model for TFOV-based image deblurring that guarantees strictly positive image intensities as an outcome. The proposed model also imposes restrictions on the upper boundary of image intensity values, constraining them within a specified range. The main idea of this model is to convert the TFOV-based constrained image deblurring problem into an unconstrained problem and subsequently introduce a Lagrange multiplier.

2.1. One-Sided Constraint Problem

Consider the constrained image deblurring problem
$$\min_{u} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta}$$
subject to: u ≥ 0.
We convert the inequality constraint (8) into an equality constraint by introducing a function γ:
$$-u + \gamma^2 = 0.$$
Now, Equations (7) and (8) become
$$\min_{u} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta}$$
subject to: $-u + \gamma^2 = 0.$
We have that u is a local (global) minimum of Equations (7) and (8) if and only if (u, γ), where γ = √u, is a local (global) minimum of Equations (10) and (11). Now, consider the augmented Lagrangian functional for Equations (10) and (11), defined for a penalty parameter c > 0 and multiplier λ by
$$f_c(u, \gamma, \lambda) = \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta} + \int_{\Omega} \lambda\,(-u + \gamma^2)\, d\Omega + \frac{c}{2} \int_{\Omega} (-u + \gamma^2)^2 \, d\Omega.$$
We are interested in minimizing the augmented Lagrangian (12) with respect to (u, γ) for different λ and c. Observe that the minimizer of f_c(u, γ, λ) with respect to γ can be found explicitly for each fixed u [48]. Setting w = γ², the minimization with respect to γ is equivalent to
$$\min_{w} \int_{\Omega} \Big[ \lambda\,(-u + w) + \frac{c}{2}\,(-u + w)^2 \Big] \, d\Omega.$$
The integrand above is quadratic in w. Its unconstrained (global) minimum, at which the derivative vanishes, is
$$w = u - \lambda/c.$$
Since w = γ² must be non-negative, the solution to Equation (13) is
$$w^{*} = \max\{0,\; u - \lambda/c\}.$$
Substituting w* into the functional (12) gives
$$f_c(u, \lambda) = \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta} + \int_{\Omega} \lambda\,\big(-u + \max\{0,\, u - \lambda/c\}\big)\, d\Omega + \frac{c}{2} \int_{\Omega} \big(-u + \max\{0,\, u - \lambda/c\}\big)^2 \, d\Omega.$$
Now, the method of multipliers [49,50] can be described as follows: for a given multiplier λ^(k) and penalty parameter c^(k), we minimize f_{c^(k)}(u, γ, λ^(k)) to obtain u^(k) and γ^(k); subsequently, we set
$$\lambda^{(k+1)} = \lambda^{(k)} + c^{(k)} \max\{-u^{(k)},\; -\lambda^{(k)}/c^{(k)}\}.$$
In order to minimize the functional (12), we first pick a value for c and a function λ; then we compute w* using Equation (15). Next, we compute u by minimizing the functional (12). This suggests the one-sided constraint method (OSCM) given in Algorithm 1; a Python sketch of the corresponding outer loop follows the algorithm listing.
Algorithm 1 One-sided constraint method
function: [u] = OnesideConstraint(c, λ, u, k)
  • Set:  c^(0) = c, λ^(0) = λ
  • Set:  u^(0) = u
  • Set:  w^(0) = max{0, u^(0) − λ^(0)/c^(0)}
  • For m = 1, 2, …
  •    Find u^(m):  min_u f_{c^(m−1)}(u, w^(m−1), λ^(m−1))
  •    Set:  λ^(m) = λ^(m−1) + c^(m−1) max{−u^(m−1), −λ^(m−1)/c^(m−1)}
  •    Test: Stopping criteria
  •    Set:  c^(m) = d c^(m−1)
  •    Set:  w^(m+1) = max{0, u^(m) − λ^(m)/c^(m)}
  • end
  • Set:  u = u^(m)
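A minimal Python sketch of the OSCM outer loop is given below. It assumes a routine solve_u(w, lam, c) that carries out the inner minimization of f_c over u (for instance, with the PCG solver of Section 4); the function names, the feasibility-based stopping test, and the default parameters are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the OSCM outer iteration (Algorithm 1). `solve_u` is a hypothetical
# inner solver that minimizes f_c(u, w, lam) over u for fixed w, lam, c.
import numpy as np

def oscm(z, solve_u, c0=8.5e-5, d=5.0, max_outer=20):
    u = z.copy()
    lam = c0 * z                                   # lambda^(0) = c^(0) z (Section 5)
    c = c0
    w = np.maximum(0.0, u - lam / c)               # slack variable w^(0)
    for m in range(max_outer):
        u = solve_u(w, lam, c)                     # inner minimization over u
        lam = lam + c * np.maximum(-u, -lam / c)   # multiplier update
        if not np.any(u < 0):                      # simple feasibility-based stop
            break
        c = d * c                                  # grow the penalty parameter
        w = np.maximum(0.0, u - lam / c)           # refresh the slack variable
    return u
```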

2.2. Two-Sided Constraint Problem

Next, we take the case where pixel values of digital images must lie in a specific interval [ a 1 , a 2 ] . For instance, for 8-bit images, the interval is [ a 1 , a 2 ] = [ 0 , 255 ] . We consider solving the constrained model:
$$\min_{u} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta}$$
subject to: $a_1 \leq u \leq a_2.$
First, we convert the inequality (18) into two equalities
$$-u + a_1 + \gamma_1^2 = 0 \quad \text{and} \quad u - a_2 + \gamma_2^2 = 0.$$
Then, Equations (17) and (18) become
$$\min_{u} \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta}$$
subject to: $-u + a_1 + \gamma_1^2 = 0,$
$u - a_2 + \gamma_2^2 = 0.$
Let us consider the augmented Lagrangian functional for Equations (20)–(22) defined for a positive penalty parameter c > 0 and multipliers λ 1 , λ 2 by
$$g_c(u, \gamma_1, \gamma_2, \lambda_1, \lambda_2) = \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta} + \int_{\Omega} \lambda_1\,(-u + a_1 + \gamma_1^2)\, d\Omega + \int_{\Omega} \lambda_2\,(u - a_2 + \gamma_2^2)\, d\Omega + \frac{c}{2} \int_{\Omega} \Big[ (-u + a_1 + \gamma_1^2)^2 + (u - a_2 + \gamma_2^2)^2 \Big] \, d\Omega.$$
Now, we want to minimize augmented Lagrangian (23) with respect to ( u , γ 1 , γ 2 ) for different λ 1 , λ 2 , and c.
Similar to the one-sided constraint case, the minimizers of g_c with respect to γ₁ and γ₂ can be found explicitly for each fixed u. Setting w₁ = γ₁² and w₂ = γ₂², the minimizations with respect to γ₁ and γ₂ are equivalent to
$$\min_{w_1} \int_{\Omega} \Big[ \lambda_1\,(-u + a_1 + w_1) + \frac{c}{2}\,(-u + a_1 + w_1)^2 \Big] \, d\Omega,$$
$$\min_{w_2} \int_{\Omega} \Big[ \lambda_2\,(u - a_2 + w_2) + \frac{c}{2}\,(u - a_2 + w_2)^2 \Big] \, d\Omega.$$
Hence, the solutions of Equations (24) and (25) are
$$w_1^{*} = \max\{0,\; u - \lambda_1/c - a_1\},$$
$$w_2^{*} = \max\{0,\; -u - \lambda_2/c + a_2\}.$$
The method of multipliers can be described as follows: given multipliers λ₁^(k), λ₂^(k) and a penalty parameter c^(k), we minimize g_{c^(k)} to obtain u^(k), γ₁^(k), and γ₂^(k); then we set
$$\lambda_1^{(k+1)} = \lambda_1^{(k)} + c^{(k)} \max\{a_1 - u^{(k)},\; -\lambda_1^{(k)}/c^{(k)}\},$$
$$\lambda_2^{(k+1)} = \lambda_2^{(k)} + c^{(k)} \max\{u^{(k)} - a_2,\; -\lambda_2^{(k)}/c^{(k)}\}.$$
In order to minimize the functional (23), we first select a value for c and a function λ₁ and compute w₁* using Equation (26). After that, we choose a function λ₂ and compute w₂* using Equation (27). Next, we compute u by minimizing the functional (23). This two-sided constraint method (TSCM) is explained in Algorithm 2; a sketch of the corresponding slack and multiplier updates follows the algorithm listing.
Algorithm 2 Two-sided constraint method
function: [u] = TwosidesConstraint(c, λ₁, λ₂, u, k)
  1. Set:  c^(0) = c, λ₁^(0) = λ₁, λ₂^(0) = λ₂
  2. Set:  u^(0) = u
  3. Set:  w₁^(0) = max{0, u^(0) − λ₁^(0)/c^(0) − a₁}
  4. Set:  w₂^(0) = max{0, −u^(0) − λ₂^(0)/c^(0) + a₂}
  5. For m = 1, 2, …
  6.     Find u^(m):  min_u g_{c^(m−1)}(u, w₁^(m−1), w₂^(m−1), λ₁^(m−1), λ₂^(m−1))
  7.     Set:  λ₁^(m) = λ₁^(m−1) + c^(m−1) max{a₁ − u^(m−1), −λ₁^(m−1)/c^(m−1)}
  8.     Set:  λ₂^(m) = λ₂^(m−1) + c^(m−1) max{u^(m−1) − a₂, −λ₂^(m−1)/c^(m−1)}
  9.     Test: Stopping criteria
  10.    Set:  c^(m) = d c^(m−1)
  11.    Set:  w₁^(m) = max{0, u^(m−1) − λ₁^(m−1)/c^(m−1) − a₁}
  12.    Set:  w₂^(m) = max{0, −u^(m−1) − λ₂^(m−1)/c^(m−1) + a₂}
  13. end
Set:  u = u^(m)
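The corresponding slack and multiplier updates of the TSCM can be sketched as follows; the signs match Equations (26) and (27) and the multiplier updates above, the helper name is hypothetical, and the inner minimization over u is again delegated to a separate solver.

```python
# Sketch of the TSCM slack and multiplier updates for the box constraint
# a1 <= u <= a2 (Algorithm 2, steps 3-4, 7-8, 11-12). Illustrative only.
import numpy as np

def tscm_updates(u, lam1, lam2, c, a1=0.0, a2=255.0):
    w1 = np.maximum(0.0, u - lam1 / c - a1)              # slack for u >= a1
    w2 = np.maximum(0.0, -u - lam2 / c + a2)             # slack for u <= a2
    lam1_new = lam1 + c * np.maximum(a1 - u, -lam1 / c)  # update for lambda_1
    lam2_new = lam2 + c * np.maximum(u - a2, -lam2 / c)  # update for lambda_2
    return w1, w2, lam1_new, lam2_new
```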
In both algorithms (OSCM and TSCM), we have to compute u by minimizing the functionals (12) and (23), respectively. In both cases, we require Euler–Lagrange equations to compute u. The Euler–Lagrange equations for u are the same for both cases.

3. Euler–Lagrange Equations

This section includes the presentation of the Euler–Lagrange equations connected with TFOV for the image deblurring problem.
Theorem 1. 
For the functional given in Equation (12) and α ∈ (1, 2), the Euler–Lagrange equations are
$$K^{*}(Ku - z) + \tilde{\alpha}\, L^{\alpha}(u)\, u = 0 \quad \text{on } \Omega,$$
$$D^{\alpha-2}\!\left( \frac{\nabla^{\alpha} u}{\sqrt{|\nabla^{\alpha} u|^2 + \beta^2}} \right)\! \cdot \eta = 0, \qquad D^{\alpha-1}\!\left( \frac{\nabla^{\alpha} u}{\sqrt{|\nabla^{\alpha} u|^2 + \beta^2}} \right)\! \cdot \eta = 0 \quad \text{on } \partial\Omega,$$
where η is the unit outward normal vector, K* is the adjoint of the integral operator K, and the non-linear differential operator L^α(u) is defined as follows:
$$L^{\alpha}(u)\, w = (-1)^{n}\, \nabla^{\alpha} \cdot \left( \frac{\nabla^{\alpha} w}{\sqrt{|\nabla^{\alpha} u|^2 + \beta^2}} \right) + c\, w.$$
Proof. 
The proof is given in Appendix A.    □
Note that Equation (30) can be written as follows:
$$K^{*} K u + \tilde{\alpha}\, \nabla^{\alpha} \cdot v + c\, u = K^{*} z,$$
$$-\nabla^{\alpha} u + \sqrt{|\nabla^{\alpha} u|^2 + \beta}\; v = 0,$$
with the dual, or flux, variable
$$v = \frac{\nabla^{\alpha} u}{\sqrt{|\nabla^{\alpha} u|^2 + \beta}}.$$
We apply the Galerkin method to Equations (32) and (33) together with midpoint quadrature for the integral term and the cell-centered finite difference method for the derivative part.

4. Numerical Implementation

First, we present the discretization of our proposed model. The computational domain Ω = (0, 1) × (0, 1) is divided into N² equal squares (cells), where N represents the number of equispaced partitions in the x or y direction. We follow the same discretization approach as in [31,51]. Next, let (x_k, y_l), k, l = 0, 1, …, N + 1, be the discrete points of the image domain Ω. We assume that u satisfies a homogeneous Dirichlet boundary condition. To discretize the fractional derivative of order α at the inner point (x_k, y_l) (for k, l = 0, 1, …, N) in the x-direction, we employ the shifted Grünwald approximation approach [52]:
$$D^{\alpha} f(x_k, y_l) = \frac{\delta_0^{\alpha} f(x_k, y_l)}{h^{\alpha}} + O(h) = \frac{1}{2}\left[ \frac{\delta_-^{\alpha} f(x_k, y_l)}{h^{\alpha}} + \frac{\delta_+^{\alpha} f(x_k, y_l)}{h^{\alpha}} \right] + O(h) = \frac{1}{2 h^{\alpha}} \left[ \sum_{j=0}^{k+1} \omega_j^{\alpha} f_{k-j+1}^{\,l} + \sum_{j=0}^{N-k+2} \omega_j^{\alpha} f_{k+j-1}^{\,l} \right] + O(h),$$
which is applicable to both Riemann–Liouville and Caputo derivatives [53,54]. Here, $f_s^{\,l} = f_{s,l}$ and $\omega_j^{\alpha} = (-1)^{j} \binom{\alpha}{j}$, j = 0, 1, …, N, with $\omega_0^{\alpha} = 1$ and $\omega_j^{\alpha} = \left(1 - \frac{1+\alpha}{j}\right) \omega_{j-1}^{\alpha}$ for j > 0. From Equation (35), one can observe that the first-order approximation of $D_{[a,b]}^{\alpha} f(x_k, y_l)$ along the x-direction at the point (x_k, y_l) is a linear combination of the N + 2 values $f_0^{\,l}, f_1^{\,l}, \ldots, f_N^{\,l}, f_{N+1}^{\,l}$ with fixed y_l. After using the homogeneous boundary condition in the matrix representation of the fractional-order derivative, all N equations of the fractional derivatives in the x-direction in Equation (35) can be expressed as follows:
$$\begin{pmatrix} \delta_0^{\alpha} f(x_1, y_l) \\ \delta_0^{\alpha} f(x_2, y_l) \\ \vdots \\ \delta_0^{\alpha} f(x_N, y_l) \end{pmatrix} = \underbrace{\frac{1}{2 h^{\alpha}} \begin{pmatrix} 2\omega_1^{\alpha} & \omega_0^{\alpha} + \omega_2^{\alpha} & \omega_3^{\alpha} & \cdots & \omega_N^{\alpha} \\ \omega_0^{\alpha} + \omega_2^{\alpha} & 2\omega_1^{\alpha} & \ddots & \ddots & \vdots \\ \omega_3^{\alpha} & \ddots & \ddots & \ddots & \omega_3^{\alpha} \\ \vdots & \ddots & \ddots & 2\omega_1^{\alpha} & \omega_0^{\alpha} + \omega_2^{\alpha} \\ \omega_N^{\alpha} & \cdots & \omega_3^{\alpha} & \omega_0^{\alpha} + \omega_2^{\alpha} & 2\omega_1^{\alpha} \end{pmatrix}}_{B_{\alpha}^{N}} \begin{pmatrix} f_1^{\,l} \\ f_2^{\,l} \\ \vdots \\ f_N^{\,l} \end{pmatrix}.$$
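The following sketch builds the Grünwald–Letnikov weights by the recursion stated above, assembles the symmetric Toeplitz matrix $B_{\alpha}^{N}$ displayed here, and forms the two-dimensional operators $B_x^{\alpha} = I_N \otimes B_{\alpha}^{N}$ and $B_y^{\alpha} = B_{\alpha}^{N} \otimes I_N$ used below. The grid size and fractional order are illustrative choices, not the paper's settings.

```python
# Sketch: Grunwald-Letnikov weights w_j^alpha, the Toeplitz matrix B_alpha^N,
# and the 2-D operators B_x^alpha = I (x) B and B_y^alpha = B (x) I.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import toeplitz

def grunwald_weights(alpha: float, n: int) -> np.ndarray:
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]   # w_j = (1 - (1+alpha)/j) w_{j-1}
    return w

def B_alpha(alpha: float, N: int, h: float) -> np.ndarray:
    w = grunwald_weights(alpha, N + 1)
    col = np.zeros(N)
    col[0] = 2.0 * w[1]                 # diagonal entry 2*w_1
    col[1] = w[0] + w[2]                # first off-diagonal w_0 + w_2
    col[2:] = w[3:N + 1]                # remaining entries w_3, ..., w_N
    return toeplitz(col) / (2.0 * h**alpha)

N, alpha, h = 64, 1.8, 1.0 / 64
B = B_alpha(alpha, N, h)
Bx = sp.kron(sp.identity(N), sp.csr_matrix(B))    # alpha-order differences in x
By = sp.kron(sp.csr_matrix(B), sp.identity(N))    # alpha-order differences in y
```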
By the definition of the fractional derivative (35), for any 1 < α < 2, the coefficients $\omega_k^{\alpha}$ possess the properties given below:
(1) $\omega_0^{\alpha} = 1$, $\omega_1^{\alpha} = -\alpha < 0$, $1 \geq \omega_2^{\alpha} \geq \omega_3^{\alpha} \geq \cdots \geq 0$;
(2) $\sum_{k=0}^{\infty} \omega_k^{\alpha} = 0$, $\sum_{k=0}^{m} \omega_k^{\alpha} < 0$ (m ≥ 1).
By applying the Gershgorin circle theorem, it can be concluded that the matrix $B_{\alpha}^{N}$ is a symmetric negative-definite Toeplitz matrix (i.e., $-B_{\alpha}^{N}$ is a symmetric positive-definite Toeplitz matrix). Let $U \in \mathbb{R}^{N \times N}$ represent the solution matrix at all nodes $(k h_x, l h_y)$, k, l = 1, …, N, corresponding to the spatial discretization nodes in the x and y directions. The ordered solution vector of U is denoted by $u \in \mathbb{R}^{N^2 \times 1}$. The discrete and direct analogue of differentiation for an arbitrary order-α derivative is
$$u_x^{\alpha} = (I_N \otimes B_{\alpha}^{N})\, u = B_x^{\alpha}\, u.$$
In the same way, the order-α derivative of u(x, y) in the y-direction at these nodes is estimated using
$$u_y^{\alpha} = (B_{\alpha}^{N} \otimes I_N)\, u = B_y^{\alpha}\, u,$$
where
$$u_x^{\alpha} = (u_{11}^{\alpha}, \ldots, u_{N1}^{\alpha}, u_{12}^{\alpha}, \ldots, u_{NN}^{\alpha})^{T}, \qquad u_y^{\alpha} = (u_{11}^{\alpha}, \ldots, u_{1N}^{\alpha}, u_{21}^{\alpha}, \ldots, u_{NN}^{\alpha})^{T},$$
$u = (u_{11}, u_{12}, \ldots, u_{NN})^{T}$, and ⊗ represents the Kronecker product. The α-th-order derivative $u_x^{\alpha}$ of u(x, y) along all x-direction nodes in Ω can be represented by the matrix $B_{\alpha}^{N} U$. For further details on the discretization, we recommend References [35,54]. The fractional discretization mentioned above utilizes the cell-centered finite difference method (CCFDM) and takes advantage of the fact that $(-1)^{n} \nabla^{\alpha}\cdot$ is the adjoint operator of $\nabla^{\alpha}$. Consequently, Equations (32) and (33) yield the following system:
$$V + K_h U = Z, \qquad K_h^{*} V - \tilde{\alpha}\, (L_{\alpha}^{h} U^{m})\, U^{m+1} = 0, \qquad m = 0, 1, 2, \ldots, N_F,$$
where N_F is the number of fixed-point iterations (FPI) used to linearize the non-linear square-root term in (34). The matrix K_h is obtained by using the midpoint quadrature rule for the integral operator as follows:
$$(Ku)(x_i, y_j) \approx [K_h U]_{ij}, \qquad i, j = 1, 2, \ldots, N,$$
with entries $[K_h]_{ij,lm} = h^2\, k(x_i - x_j,\, y_l - y_m)$. Since the lexicographical ordering is used, K_h is a block Toeplitz with Toeplitz blocks (BTTB) matrix. The discrete scheme of the matrix $L_{\alpha}^{h} U$ is given by
$$L_{\alpha}(U^{m})\, U^{m+1} = \big[ B_N \big( D_1(U^{m}) \circ (B_N U^{m+1}) \big) \big] + \big[ D_2(U^{m}) \circ (U^{m+1} B_M) \big] B_N + c\, I_N,$$
where ∘ is the point-wise multiplication, m is the m-th fixed-point iteration, and U is an N × N -sized reshaped matrix of the vector u. D 1 ( U m ) and D 2 ( U m ) are the diagonals of the Hadamard inverses of B x α ( U m ) and B y α ( U m ) , respectively. I N is the identity matrix.
Now, if we eliminate V from the system (39), then we have the following primal system of the TFOV-based image deblurring model:
( K h * K h + α ˜ L α ( U m ) ) U m + 1 = K h * Z .
If we use a simple total variation (TV) regularization functional, then we have the following similar primal form:
( K h * K h + α ˜ L h T V ( U m ) ) U m + 1 = K h * Z ,
where
$$L_h^{TV}(U^{m}) = G_h^{*}\, H_h^{-1}(U^{m})\, G_h.$$
The operator $L_h^{TV}(U^{m})$ is derived from the discretization of the total variation functional; the details can be found in Reference [22]. The matrix $G_h$ has the following structure:
$$G_h = \frac{1}{h} \begin{pmatrix} G_1 \\ G_2 \end{pmatrix},$$
where both $G_1$ and $G_2$ are of size $n_x(n_x - 1) \times n_x^2$, and
$$G_1 = F \otimes \tilde{I} \quad \text{and} \quad G_2 = \tilde{I} \otimes F.$$
$$F = \begin{pmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{pmatrix}$$
is a matrix of size $(n_x - 1) \times n_x$. $H_h$ is a diagonal matrix obtained from the discretization of the expression $\sqrt{|\nabla u|^2 + \beta^2}$, which has the following block structure:
$$H_h = \begin{pmatrix} H_x & 0 \\ 0 & H_y \end{pmatrix},$$
where $H_x$ is of size $(n_x - 1) \times n_x$ and $H_y$ is of size $n_x \times (n_x - 1)$.
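A sketch of this first-order difference structure is given below, reading Ĩ as an identity matrix of compatible size (our interpretation of the extracted notation) and using sparse storage; it is meant only to make the Kronecker structure of G_h concrete, not to reproduce the authors' code.

```python
# Sketch: the (n_x - 1) x n_x difference matrix F and the Kronecker factors
# G_1 = F (x) I~, G_2 = I~ (x) F, stacked into G_h = (1/h) [G_1; G_2].
# I~ is assumed here to be the n_x x n_x identity matrix.
import scipy.sparse as sp

def build_Gh(nx: int, h: float):
    e = sp.eye(nx, format="csr")
    F = e[1:, :] - e[:-1, :]                 # rows with the [-1, 1] stencil
    G1 = sp.kron(F, sp.identity(nx))
    G2 = sp.kron(sp.identity(nx), F)
    return sp.vstack([G1, G2]) / h
```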
To obtain the value of our primal variable u, one needs to solve the matrix system (42). As for the remaining variables and Lagrange multipliers, we solve them directly after discretizing them at the grid points ( x i , y j ) .
As mentioned earlier, to compute the value of u, we need to solve the matrix system (42), which is a nonlinear system. The Hessian matrix Λ = K_h^* K_h + α̃ L_α(U^m) of the system (42) is extremely large and tends to be ill-conditioned when α̃ is small, primarily because the eigenvalues of K_h cluster around zero [23]. Although K_h^* K_h is a full matrix, the fast Fourier transform (FFT) can be used to evaluate K_h^* K_h U in O(n_x log n_x) operations [23] because the blurring kernel is translation-invariant. An advantageous aspect is that the Hessian matrix is symmetric positive definite (SPD).
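Because the kernel is translation-invariant, the action of K_h and K_h^* can be applied matrix-free with FFT-based convolution, so the product K_h^* K_h U never requires forming the full matrix. The sketch below uses zero padding (SciPy's same-mode convolution), which is one common boundary treatment and not necessarily the exact one used in the paper's experiments.

```python
# Sketch: matrix-free evaluation of K_h^* K_h u in O(n log n) using FFT-based
# convolution; the adjoint of convolution with `psf` is correlation with the
# flipped PSF (for the zero-padded boundary treatment assumed here).
import numpy as np
from scipy.signal import fftconvolve

def KtK_apply(u_img: np.ndarray, psf: np.ndarray) -> np.ndarray:
    Ku = fftconvolve(u_img, psf, mode="same")              # K u
    return fftconvolve(Ku, psf[::-1, ::-1], mode="same")   # K^* (K u)
```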
Therefore, the conjugate gradient (CG) method is suitable for solving the system (42). However, the CG method can have a slow convergence rate for large and ill-conditioned systems. In order to achieve faster convergence, we use the preconditioned conjugate gradient (PCG) method [55,56,57]. Here, we introduce our SPD circulant preconditioned matrix P of Strang-type [58].
$$P = \tilde{K_h}^{*} \tilde{K_h} + \tilde{\alpha}\, \mathrm{diag}\big(L_h^{TV}(U^{m})\big),$$
where $\tilde{K_h}$ is a circulant approximation of the matrix $K_h$, and $\mathrm{diag}(L_h^{TV}(U^{m}))$ denotes the diagonal part of $L_h^{TV}(U^{m})$. This is summarized in Algorithm 3.
Algorithm 3 The PCG Method
function: [U] = PCG(α̃, U, K, Z)
  • Set:  U^(0) = U, on mesh Ω_h
  • For m = 1, 2, …
  •     Find U^(m):  A_m U^(m+1) = b_m
  •     Set:  A_m = K_h^* K_h + α̃ L_h^α(U^(m))
  •     Set:  b_m = K_h^* Z
  •     Set:  P = K̃_h^* K̃_h + α̃ diag(L_h^TV(U^(m)))
  •     Test: Stopping criteria
  • end
Set:  U = U^(m)
While applying PCG to Equation (42), we need to invert the preconditioner matrix P. The inversion of the first term $\tilde{K_h}^{*}\tilde{K_h}$ requires O(n_x log n_x) floating-point operations using FFTs [23]. As for the second term in P, which is a diagonal matrix, inversion is not problematic.
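The sketch below shows how such a preconditioner can be plugged into SciPy's conjugate gradient routine through LinearOperator objects. For simplicity, the space-variant diagonal term diag(L_h^TV) is replaced by its mean value so that the whole preconditioner is diagonalized by the 2-D FFT; this is a simplification of the matrix P above, not the authors' exact construction, and KtK_apply / L_apply are assumed matrix-free routines for the two Hessian terms.

```python
# Sketch: PCG for the primal system with an FFT-diagonalized circulant-type
# preconditioner. Simplification: diag(L_h^TV) is approximated by its mean, so
# applying P^{-1} reduces to a pointwise division in the Fourier domain.
# Assumes the PSF array is wrapped so that its center sits at index (0, 0).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def build_operators(psf, dtv_diag, alpha_t, shape, KtK_apply, L_apply):
    n = shape[0] * shape[1]
    psf_hat = np.fft.fft2(psf, s=shape)                    # spectrum of circulant K~
    p_hat = np.abs(psf_hat) ** 2 + alpha_t * dtv_diag.mean()

    def hess_mv(u_vec):                                    # Lambda u = K*K u + alpha~ L u
        U = u_vec.reshape(shape)
        return (KtK_apply(U) + alpha_t * L_apply(U)).ravel()

    def prec_mv(r_vec):                                    # approximate P^{-1} r via FFT
        R = r_vec.reshape(shape)
        return np.real(np.fft.ifft2(np.fft.fft2(R) / p_hat)).ravel()

    A = LinearOperator((n, n), matvec=hess_mv)
    M = LinearOperator((n, n), matvec=prec_mv)
    return A, M

# usage (b is the right-hand side K_h^* Z as an image):
# A, M = build_operators(psf, dtv_diag, alpha_t, b.shape, KtK_apply, L_apply)
# x, info = cg(A, b.ravel(), M=M)
```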
Now, let the eigenvalues of $K_h^{*} K_h$, $L_h^{\alpha}$, and $L_h^{TV}$ be $\lambda_i^{K}$, $\lambda_i^{\alpha}$, and $\lambda_i^{TV}$, respectively, where $\lambda_i^{K} \to 0$ and $\lambda_i^{\alpha} \to \infty$ as $i \to \infty$. Consequently, the eigenvalues of $P^{-1}\Lambda$ are given by
$$\eta_i = \frac{\lambda_i^{K} + \tilde{\alpha}\, \lambda_i^{\alpha}}{\lambda_i^{K} + \tilde{\alpha}\, \lambda_i^{dTV}},$$
where $\lambda_i^{dTV}$ are the eigenvalues of $\mathrm{diag}(L_h^{TV}(U^{m}))$. It is evident that $\eta_i \to 1$ as $i \to \infty$ because $\lambda_i^{dTV} \approx \lambda_i^{TV} \approx \lambda_i^{\alpha}$. Therefore, the spectrum of $P^{-1}\Lambda$ is more favorable than that of the Hessian matrix Λ. The flowchart of our proposed method is illustrated in Figure 1.

5. Numerical Experiments

In this section, we use our algorithms to solve the unconstrained TFOV problem (5) and compare the results with the one-sided constrained problem (7) and (8) and the two-sided constrained problem (17) and (18). We conduct several sets of experiments using different digital images. The algorithm code is written in MATLAB, and all computational experiments are performed on an Intel(R) Core(TM) i7-4510U CPU @ 2.60 GHz. To evaluate the quality of the restored image, we use the peak signal-to-noise ratio (PSNR) in decibels (dB) [59] and the structural similarity index measure (SSIM) [60]; higher PSNR and SSIM values indicate a higher quality of the restored image. The degree to which the algorithms satisfy the constraints is measured by counting the number of pixels with negative values for the one-sided constraint method and the number of pixels outside the range [0, 255] for the two-sided constraint method. To determine suitable values of the initial parameters c^(0) and d, we performed computations on the Barbara image. We observed numerically (see Figure 2) that the optimum ranges for the initial parameters c^(0) and d of the augmented Lagrangian method are c^(0) ∈ [1 × 10⁻⁵, 1 × 10⁻⁴] and d ∈ [4, 10], respectively. For our experiments, we choose c^(0) = 0.000085 and d = 5. We set λ^(0) = λ₁^(0) = λ₂^(0) = c^(0) z and u^(0) = z. The values α = 1.8, α̃ = 1 × 10⁻⁸, and β = 0.1 are chosen according to [39]. In all experiments, the stopping criterion for the numerical iterations is ‖b − A x_k‖ < tol · ‖b‖, where x_k = (v_k, u_k) is the solution vector at the k-th iteration. The results are presented in figures and tables.
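For reference, the quality metrics used throughout the experiments can be computed as in the sketch below (scikit-image is one possible choice of implementation; the data_range argument must match the intensity scale of the images).

```python
# Sketch: PSNR and SSIM of a restored image against the exact image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(u_exact, u_restored, data_range=255.0):
    psnr = peak_signal_noise_ratio(u_exact, u_restored, data_range=data_range)
    ssim = structural_similarity(u_exact, u_restored, data_range=data_range)
    return psnr, ssim
```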
Example 1. 
Here, we compare Algorithm 1 (OSCM) with an unconstrained TFOV-based deblurring technique. We used the Barbara image, which presents a challenge due to the combination of a large-scale cartoon element (the face) with a small-scale texture (the shirt). The restored images are shown in Figure 3, with each subfigure having a size of 256 × 256. Table 1 lists the PSNR, SSIM, and the number of negative pixels in the experiment. In this example, the test images are blurred using the PSF given in Figure 4, which is a circular Gaussian kernel. The stopping criterion of the computational algorithm is based on a tolerance of tol = 1 × 10⁻⁷.
Remark 1. 
  • From Figure 3, it can be observed that the deblurred images produced by the OSCM exhibit significantly better quality compared to the unconstrained method.
  • In Table 1, one can observe that the PSNR and SSIM values of the OSCM are considerably higher than those of the unconstrained method. The OSCM identifies negative pixels and reduces their number as the iterations progress, removing them entirely in just 12 iterations and resulting in a clear, blur-free image.
Example 2. 
Here, we compare Algorithm 2 (TSCM) with the unconstrained TFOV-based deblurring method. We used the Moon image, which is a non-texture image. The restored images are shown in Figure 5, each of size 256 × 256. Table 2 lists the PSNR, SSIM, and the number of pixels outside the interval [0, 255]. In this example, we used the PSF given in Figure 4. The stopping criterion of the computational algorithm is based on a tolerance of tol = 1 × 10⁻⁷.
Remark 2. 
  • From Figure 5, it can be seen that the deblurred images created by the TSCM are much better than those of the unconstrained method. This can also be verified with the data given in Table 2.
  • In Table 2, one can observe that the PSNR and SSIM values of TSCM are considerably higher than the PSNR and SSIM values of the unconstrained method. The TSCM identifies pixels that are outside the given range of [ 0 , 255 ] and modifies them as the number of iterations increases.
Example 3. 
Here, we compare our algorithms (OSCM and TSCM) with the TFOV-based methods of Fairag et al. [39], who proposed the one-level method (OLM) and the two-level method (TLM) for the TFOV-based image deblurring problem. Two different kinds of images are used in this investigation: Goldhills (a real image) and Cameraman (a complex image). Figure 6 and Figure 7 display the various facets of each image. Each subfigure has a size of 512 × 512. In this example, test images are blurred by a Gaussian kernel (PSF). Additionally, Gaussian noise with mean μ = 0.01 and variance σ² = 0.5 is added to the images. For the TFOV-based algorithms (OLM and TLM), we used α = 1.8, λ_{L^α} = 1 × 10⁻¹⁶, and β = 0.1. The Level-II parameter λ_J in TLM is calculated according to the formula given in Reference [39]. OLM and TLM also use the CG method for the numerical solution. The stopping criterion of the computational algorithm is based on a tolerance of tol = 1 × 10⁻⁷. Table 3 contains all the information related to this experiment.
Remark 3. 
  • One can see from Figure 6 and Figure 7 that our algorithms (OSCM and TSCM) produce results of slightly higher quality compared to other methods.
  • From Table 3, it can be observed that our algorithms (OSCM and TSCM) consistently achieve higher PSNR and SSIM values than the other methods (OLM and TLM) for all images. Although the TLM produces its results more quickly, its PSNR and SSIM values are inferior to those of OSCM and TSCM.
    While requiring less time than OLM, our algorithms produce significantly better quality than the other methods. For example, for the Goldhills image, OLM and TLM require 1005.2589 and 526.5476 s, respectively, to achieve PSNR/SSIM values of 33.1589/0.7704 and 33.1458/0.7690, whereas OSCM and TSCM take 896.4058 and 909.5469 s, respectively, to achieve higher PSNR/SSIM values of 34.8945/0.7788 and 34.8965/0.7759. Similar trends can be observed for the Cameraman image. Therefore, our algorithms (OSCM and TSCM) produce high-quality deblurred images compared to the other methods.
Example 4. 
Here, we again compare our algorithms (OSCM and TSCM) with the TFOV-based methods (OLM and TLM) proposed by Fairag et al. [39]. We used a Pepper image (a non-texture image) for this comparison. Figure 8 displays various facets of the given image, with each subfigure having a size of 512 × 512. In this example, the test images are blurred by a motion kernel (PSF) with a motion length of l = 256 and an angle of motion θ = 15°. Salt-and-pepper noise with a small density of 0.01 is added to the blurry image. For the TFOV-based algorithms (OLM and TLM), we used α = 1.8, λ_{L^α} = 1 × 10⁻¹², and β = 0.1. The Level-II parameter λ_J in TLM is calculated according to the formula given in [39]. The stopping criterion of the computational algorithm is based on a tolerance of tol = 1 × 10⁻⁶. Table 4 contains all the information related to this experiment.
Remark 4. 
  • One can see from Figure 8 that our algorithms (OSCM and TSCM) produce results of slightly higher quality compared to other methods.
  • From Table 4, it can be observed that, for all images, our algorithms (OSCM and TSCM) achieve higher PSNR values than the other methods (OLM and TLM). Although the TLM computes its results more quickly, its quality is inferior to that of OSCM and TSCM. Therefore, our algorithms (OSCM and TSCM) produce superior-quality deblurred images when compared with the other methods.
Example 5. 
In this example, we used two satellite images from Reference [61]. The blurred images were corrupted by Poisson noise and blurring artifacts. The blurring process was conducted using the fspecial('gaussian', 9, sqrt(3)) kernel. Dealing with the addition of Poisson noise in the images proves to be particularly challenging for most deblurring methods. Imaging modalities like this often exhibit the presence of Poisson noise, primarily arising from photon counting. Simultaneously, blurring is an inevitable outcome resulting from the physical mechanism of an imaging system, accurately represented as the convolution of the image with a point spread function. For comparison, we used the non-blind fractional-order total variation-based method (NFOV) [61] and the Richardson–Lucy algorithm with total variation regularization (RLTV) [62]. The restored images of Galaxy are shown in Figure 9, each with a size of 256 × 256. The restored images of Satel are shown in Figure 10, each with a size of 128 × 128. For the NFOV and RLTV methods, the parameters are set according to [61]. Table 5 lists the information about this experiment.
Remark 5. 
  • From Figure 9 and Figure 10 and Table 5, one can observe that all of the methods generated nearly identical results. However, OSCM and TSCM exhibit better PSNR and SSIM values compared to all other methods. This demonstrates the effectiveness of the OSCM and TSCM in generating high-quality images.
Example 6. 
In this example, we used four different images from the dataset of Levin et al. [63]. The kegen(N, 100, 5) kernel was used for blurring. For comparison, we used TV, OLM, TLM, RLTV, NFOV, OSCM, and TSCM. The restored images are shown in Figure 11, Figure 12, Figure 13 and Figure 14; each is of size 255 × 255. Table 6 lists the information of this experiment.
Remark 6. 
  • From Figure 11, Figure 12, Figure 13 and Figure 14, and Table 6, it is evident that our methods, OSCM and TSCM, consistently produce better values for PSNR and SSIM when compared to all other methods. These results demonstrate the strong performance of the OSCM and TSCM, which consistently generate high-quality images. A comparison of PSNR and SSIM values for different methods using Levin’s dataset is depicted in Figure 15.

6. Conclusions

For the image deblurring problem, we presented the OSCM and TSCM using a TFOV regularization functional. In addition to guaranteeing strictly positive outcomes, both OSCM and TSCM impose upper limits on the image intensity levels, maintaining them within a predetermined range. We applied our proposed approaches in numerical tests on various image types, including synthetic, real, complex, satellite, and non-texture images. To evaluate our algorithms, we also used images from Levin's dataset [63]. We compared the OSCM and TSCM with the most recent TFOV-based approaches, namely TV, OLM, TLM, RLTV, and NFOV. The numerical tests demonstrated the efficiency of our proposed techniques. In the future, we plan to extend the OSCM and TSCM to other computationally expensive regularization functionals, such as the mean curvature functional. Additionally, we aim to design a constrained model within a similar framework for the blind image deblurring problem. It is worth noting that, under specific conditions, the proposed techniques can be implemented in other image processing models.

Author Contributions

Conceptualization, S.A. and S.S.; methodology, S.A. and S.S.; software, S.A.; validation, J.K., S.A. and S.S.; formal analysis, S.A. and S.S.; investigation, S.A. and S.S.; resources, S.A. and S.S.; data curation, S.A. and S.S.; writing—original draft preparation, S.S.; supervision, S.A. and J.K.; writing—review and editing, S.A., S.S. and J.K.; funding acquisition, J.K. All authors have read and agreed to the manuscript.

Funding

This research was supported by the Brain Korea 21 (BK 21) FOUR program of the Ministry of Education.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The corresponding author (J.K.) was supported by the Brain Korea 21 (BK 21) FOUR from the Ministry of Education. The authors thank the reviewers for the constructive comments on the revision of this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1. 
From Equation (12), let the functional F α ( u ) be
$$F^{\alpha}(u) = \int_{\Omega} (k \ast u - z)^2 \, d\Omega + \tilde{\alpha}\, \|u\|_{TFOV,\beta} + \int_{\Omega} \lambda\,(-u + \gamma^2)\, d\Omega + \frac{c}{2} \int_{\Omega} (-u + \gamma^2)^2 \, d\Omega.$$
Let $\nu \in W_1^{\alpha}(\Omega) = \{ v \in L^1(\Omega) : \|v\|_{W_1^{\alpha}(\Omega)} = \int_{\Omega} |v| \, dx + \int_{\Omega} |\nabla^{\alpha} v| \, dx < +\infty \}$. Then, for $u \in W_1^{\alpha}(\Omega) \cap BV^{\alpha}(\Omega)$, the first-order Gâteaux derivative of the functional $F^{\alpha}(u)$ in the direction ν is
$$F^{\alpha\prime}(u)\,\nu = \lim_{t \to 0} \frac{F^{\alpha}(u + t\nu) - F^{\alpha}(u)}{t} = \lim_{t \to 0} \frac{G_1(u + t\nu) - G_1(u)}{t} + \lim_{t \to 0} \frac{G_2(u + t\nu) - G_2(u)}{t} + \lim_{t \to 0} \frac{G_3(u + t\nu) - G_3(u)}{t},$$
where $G_1(u) = \frac{1}{2} \int_{\Omega} (K u - z)^2 \, dx$, $G_2(u) = \tilde{\alpha}\, J_{TV_{\beta}}^{\alpha}(u)$, and $G_3(u) = \int_{\Omega} \big[ \lambda\,(-u + \gamma^2) + \frac{c}{2}(-u + \gamma^2)^2 \big] \, d\Omega$. By using the Taylor series in the direction of t, we have
$$F^{\alpha\prime}(u)\,\nu = \int_{\Omega} K^{*}(K u - z)\, \nu \, dx + \int_{\Omega} (W \cdot \nabla^{\alpha} \nu) \, dx + \int_{\Omega} c\, u\, \nu \, dx,$$
where $W = \tilde{\alpha}\, \dfrac{\nabla^{\alpha} u}{\sqrt{|\nabla^{\alpha} u|^2 + \beta^2}}$. Now, by using α-order integration by parts [31], we have
$$\int_{\Omega} (W \cdot \nabla^{\alpha} \nu) \, dx = (-1)^{n} \int_{\Omega} \big( \nu\; {}^{C}\mathrm{div}^{\alpha} W \big) \, dx - \sum_{j=0}^{n-1} (-1)^{j} \int_{0}^{1} \left[ D_{[a,x]}^{\alpha + j - n} W_1\, \frac{\partial^{\,n-j-1} \nu(x)}{\partial x_1^{\,n-j-1}} \right]_{x_1 = 0}^{x_1 = 1} dx_2 - \sum_{j=0}^{n-1} (-1)^{j} \int_{0}^{1} \left[ D_{[x,b]}^{\alpha + j - n} W_2\, \frac{\partial^{\,n-j-1} \nu(x)}{\partial x_2^{\,n-j-1}} \right]_{x_2 = 0}^{x_2 = 1} dx_1,$$
where we know that for 1 < α < 2 , n = 2 .
Case-I: If $u(x)|_{\partial\Omega} = b_1(x)$ and $\frac{\partial u(x)}{\partial n}\big|_{\partial\Omega} = b_2(x)$, then $(u(x) + t\nu(x))|_{\partial\Omega} = b_1(x)$ and $\frac{\partial (u(x) + t\nu(x))}{\partial n}\big|_{\partial\Omega} = b_2(x)$. It therefore suffices to take $\nu \in C_0^1(\Omega, \mathbb{R})$ (the space of first-order differentiable functions vanishing at the boundary), which implies
$$\frac{\partial^{\,i} \nu(x)}{\partial n^{\,i}}\bigg|_{\partial\Omega} = 0, \quad i = 0, 1,$$
$$\frac{\partial^{\,n-j-1} \nu(x)}{\partial x_1^{\,n-j-1}}\bigg|_{x_1 = 0,1} = \frac{\partial^{\,n-j-1} \nu(x)}{\partial x_2^{\,n-j-1}}\bigg|_{x_2 = 0,1} = 0, \quad n - j - 1 = 0, 1.$$
Hence, Equation (A2) reduces to Equation (30).
Case-II: If $\nu \in W_1^{\alpha}(\Omega)$, then
$$\frac{\partial^{\,n-j-1} \nu(x)}{\partial x_1^{\,n-j-1}}\bigg|_{x_1 = 0,1} \neq 0, \qquad \frac{\partial^{\,n-j-1} \nu(x)}{\partial x_2^{\,n-j-1}}\bigg|_{x_2 = 0,1} \neq 0, \quad n - j - 1 = 0, 1.$$
Therefore, the boundary terms in Equation (A3) can only vanish if
$$D_{[a,x]}^{\alpha + j - n} W_1 \big|_{x_1 = 0,1} = D_{[x,b]}^{\alpha + j - n} W_2 \big|_{x_2 = 0,1} = 0,$$
i.e.,
$$D^{\alpha + j - n} W \cdot \eta = 0, \quad j = 0, 1.$$
This concludes the proof.  □

References

  1. Gultekin, G.K.; Saranli, A. Feature detection performance based benchmarking of motion deblurring methods: Applications to vision for legged robots. Image Vis. Comput. 2019, 82, 26–38. [Google Scholar] [CrossRef]
  2. Zhang, Z.; Zheng, L.; Piao, Y.; Tao, S.; Xu, W.; Gao, T.; Wu, X. Blind remote sensing image deblurring using local binary pattern prior. Remote Sens. 2022, 14, 1276. [Google Scholar] [CrossRef]
  3. Hansen, M.S.; Sørensen, T.S. Gadgetron: An open source framework for medical image reconstruction. Magn. Reson. Med. 2013, 69, 1768–1776. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Maimone, A.; Fuchs, H. Reducing interference between multiple structured light depth sensors using motion. In Proceedings of the 2012 IEEE Virtual Reality Workshops (VRW), Costa Mesa, CA, USA, 4–8 March 2012; pp. 51–54. [Google Scholar]
  5. Bonettini, S.; Landi, G.; Piccolomini, E.L.; Zanni, L. Scaling techniques for gradient projection-type methods in astronomical image deblurring. Int. J. Comput. Math. 2013, 90, 9–29. [Google Scholar] [CrossRef]
  6. Choi, N.R. A Comparative Study of Non-Blind and Blind Deconvolution of Ultrasound Images; University of Southern California: Los Angeles, CA, USA, 2014. [Google Scholar]
  7. Inampudi, S.; Vani, S.; TB, R. Image Restoration using Non-Blind Deconvolution Approach—A Comparison. Int. J. Electron. Commun. Eng. Technol. 2019, 10, 9–16. [Google Scholar] [CrossRef]
  8. Tao, S.; Dong, W.; Feng, H.; Xu, Z.; Li, Q. Non-blind image deconvolution using natural image gradient prior. Optik 2013, 124, 6599–6605. [Google Scholar] [CrossRef]
  9. Xiong, N.; Liu, R.W.; Liang, M.; Wu, D.; Liu, Z.; Wu, H. Effective alternating direction optimization methods for sparsity-constrained blind image deblurring. Sensors 2017, 17, 174. [Google Scholar] [CrossRef] [Green Version]
  10. Chu, Y.; Zhang, X.; Liu, H. Decoupling Induction and Multi-Order Attention Drop-Out Gating Based Joint Motion Deblurring and Image Super-Resolution. Mathematics 2022, 10, 1837. [Google Scholar] [CrossRef]
  11. Qi, S.; Zhang, Y.; Wang, C.; Lan, R. Representing Blurred Image without Deblurring. Mathematics 2023, 11, 2239. [Google Scholar] [CrossRef]
  12. Zhang, K.; Ren, W.; Luo, W.; Lai, W.S.; Stenger, B.; Yang, M.H.; Li, H. Deep image deblurring: A survey. Int. J. Comput. Vis. 2022, 130, 2103–2130. [Google Scholar] [CrossRef]
  13. Awwal, A.M.; Wang, L.; Kumam, P.; Mohammad, H. A two-step spectral gradient projection method for system of nonlinear monotone equations and image deblurring problems. Symmetry 2020, 12, 874. [Google Scholar] [CrossRef]
  14. Campisi, P.; Egiazarian, K. Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  15. Ge, X.; Tan, J.; Zhang, L.; Qian, Y. Blind image deconvolution via salient edge selection and mean curvature regularization. Signal Process. 2022, 190, 108336. [Google Scholar] [CrossRef]
  16. Li, L.; Pan, J.; Lai, W.S.; Gao, C.; Sang, N.; Yang, M.H. Learning a discriminative prior for blind image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 8–23 June 2018; pp. 6616–6625. [Google Scholar]
  17. Yang, D.; Wu, X.; Yin, H. Blind Image Deblurring via a Novel Sparse Channel Prior. Mathematics 2022, 10, 1238. [Google Scholar] [CrossRef]
  18. Sharif, S.; Naqvi, R.A.; Mehmood, Z.; Hussain, J.; Ali, A.; Lee, S.W. MedDeblur: Medical Image Deblurring with Residual Dense Spatial-Asymmetric Attention. Mathematics 2022, 11, 115. [Google Scholar] [CrossRef]
  19. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A simple local minimal intensity prior and an improved algorithm for blind image deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2923–2937. [Google Scholar] [CrossRef]
  20. Lata, M.A.; Ghosh, S.; Bobi, F.; Yousuf, M.A. Novel method to assess motion blur kernel parameters and comparative study of restoration techniques using different image layouts. In Proceedings of the 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, Bangladesh, 13–14 May 2016; pp. 367–372. [Google Scholar]
  21. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217–1229. [Google Scholar] [CrossRef]
  22. Tikhonov, A.N. Regularization of incorrectly posed problems. Soviet Math. Dokl. 1963, 4, 1624–1627. [Google Scholar]
  23. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, J.; Ma, R.; Zeng, X.; Liu, W.; Wang, M.; Chen, H. An efficient non-convex total variation approach for image deblurring and denoising. Appl. Math. Comput. 2021, 397, 125977. [Google Scholar] [CrossRef]
  25. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  26. Chan, R.; Lanza, A.; Morigi, S.; Sgallari, F. An adaptive strategy for the restoration of textured images using fractional order regularization. Numer. Math. Theory Methods Appl. 2013, 6, 276–296. [Google Scholar] [CrossRef]
  27. Chen, D.; Chen, Y.; Xue, D. Fractional-order total variation image restoration based on primal-dual algorithm. Abstr. Appl. Anal. 2013, 2013, 585310. [Google Scholar] [CrossRef] [Green Version]
  28. Chen, D.; Chen, Y.; Xue, D. Three fractional-order TV-L2 models for image denoising. J. Comput. Inf. Syst. 2013, 9, 4773–4780. [Google Scholar]
  29. Chen, D.; Sun, S.; Zhang, C.; Chen, Y.; Xue, D. Fractional-order TV-L2 model for image denoising. Cent. Eur. J. Phys. 2013, 11, 1414–1422. [Google Scholar] [CrossRef] [Green Version]
  30. Guo, L.; Zhao, X.L.; Gu, X.M.; Zhao, Y.L.; Zheng, Y.B.; Huang, T.Z. Three-dimensional fractional total variation regularized tensor optimized model for image deblurring. Appl. Math. Comput. 2021, 404, 126224. [Google Scholar] [CrossRef]
  31. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518. [Google Scholar] [CrossRef] [Green Version]
  32. Zhao, X.L.; Huang, T.Z.; Gu, X.M.; Deng, L.J. Vector extrapolation based Landweber method for discrete ill-posed problems. Math. Probl. Eng. 2017, 2017, 1375716. [Google Scholar] [CrossRef] [Green Version]
  33. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: New York, NY, USA, 1993. [Google Scholar]
  34. Oldham, K.; Spanier, J. The Fractional Calculus Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974; Volume 111. [Google Scholar]
  35. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: Cambridge, MA, USA, 1998; Volume 198. [Google Scholar]
  36. De Oliveira, E.C.; Tenreiro Machado, J.A. A review of definitions for fractional derivatives and integral. Math. Probl. Eng. 2014, 2014, 238459. [Google Scholar] [CrossRef] [Green Version]
  37. Mathieu, B.; Melchior, P.; Oustaloup, A.; Ceyral, C. Fractional differentiation for edge detection. Signal Process. 2003, 83, 2421–2432. [Google Scholar] [CrossRef]
  38. Tian, D.; Xue, D.; Wang, D. A fractional-order adaptive regularization primal–dual algorithm for image denoising. Inf. Sci. 2015, 296, 147–159. [Google Scholar] [CrossRef]
  39. Fairag, F.; Al-Mahdi, A.; Ahmad, S. Two-level method for the total fractional-order variation model in image deblurring problem. Numer. Algorithms 2020, 85, 931–950. [Google Scholar] [CrossRef]
  40. Bardsley, J.M.; Vogel, C.R. A nonnegatively constrained convex programming method for image reconstruction. SIAM J. Sci. Comput. 2004, 25, 1326–1343. [Google Scholar] [CrossRef]
  41. Calvetti, D.; Landi, G.; Reichel, L.; Sgallari, F. Non-negativity and iterative methods for ill-posed problems. Inverse Probl. 2004, 20, 1747. [Google Scholar] [CrossRef]
  42. Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M. Nonnegative least-squares image deblurring: Improved gradient projection approaches. Inverse Probl. 2009, 26, 025004. [Google Scholar] [CrossRef]
  43. Chan, R.H.; Tao, M.; Yuan, X. Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM J. Imaging Sci. 2013, 6, 680–697. [Google Scholar] [CrossRef] [Green Version]
  44. Williams, B.M.; Chen, K.; Harding, S.P. A new constrained total variational deblurring model and its fast algorithm. Numer. Algorithms 2015, 69, 415–441. [Google Scholar] [CrossRef]
  45. Chan, S.H.; Khoshabeh, R.; Gibson, K.B.; Gill, P.E.; Nguyen, T.Q. An augmented Lagrangian method for total variation video restoration. IEEE Trans. Image Process. 2011, 20, 3097–3111. [Google Scholar] [CrossRef]
  46. Tai, X.C.; Hahn, J.; Chung, G.J. A fast algorithm for Euler’s elastica model using augmented Lagrangian method. SIAM J. Imaging Sci. 2011, 4, 313–344. [Google Scholar] [CrossRef]
  47. Wu, C.; Tai, X.C. Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 2010, 3, 300–339. [Google Scholar] [CrossRef] [Green Version]
  48. Fletcher, R. Practical Methods of Optimization; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  49. Hestenes, M.R. Multiplier and gradient methods. J. Optim. Theory Appl. 1969, 4, 303–320. [Google Scholar] [CrossRef]
  50. Powell, M.J.D. A method for nonlinear constraints in minimization problems. In Optimization; Academic Press: Cambridge, MA, USA, 1969; pp. 283–298. [Google Scholar]
  51. Zhang, J.; Chen, K. Variational image registration by a total fractional-order variation model. J. Comput. Phys. 2015, 293, 442–461. [Google Scholar] [CrossRef] [Green Version]
  52. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90. [Google Scholar] [CrossRef]
  53. Podlubny, I.; Chechkin, A.; Skovranek, T.; Chen, Y.; Jara, B.M.V. Matrix approach to discrete fractional calculus II: Partial fractional differential equations. J. Comput. Phys. 2009, 228, 3137–3153. [Google Scholar] [CrossRef] [Green Version]
  54. Wang, H.; Du, N. Fast solution methods for space-fractional diffusion equations. J. Comput. Appl. Math. 2014, 255, 376–383. [Google Scholar] [CrossRef]
  55. Chan, R.H.; Ng, K.P. Toeplitz preconditioners for Hermitian Toeplitz systems. Linear Algebra Its Appl. 1993, 190, 181–208. [Google Scholar] [CrossRef] [Green Version]
  56. Chan, T.F. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 1988, 9, 766–771. [Google Scholar] [CrossRef]
  57. Salkuyeh, D.K.; Masoudi, M.; Hezari, D. On the generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 2015, 48, 55–61. [Google Scholar] [CrossRef] [Green Version]
  58. Ng, M.K. Iterative Methods for Toeplitz Systems; Oxford University Press: London, UK, 2004. [Google Scholar]
  59. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef] [Green Version]
  60. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  61. Chowdhury, M.R.; Qin, J.; Lou, Y. Non-blind and blind deconvolution under poisson noise using fractional-order total variation. J. Math. Imaging Vis. 2020, 62, 1238–1255. [Google Scholar] [CrossRef]
  62. Dupé, F.X.; Fadili, M.J.; Starck, J.L. Image deconvolution under Poisson noise using sparse representations and proximal thresholding iteration. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 761–764. [Google Scholar]
  63. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
Figure 1. Flow chart of TFOV-based constraint image deblurring method.
Figure 2. PSNR for Barbara image against different values of parameters.
Figure 3. (a) is the exact image. (b) is the blurry image. (c) is the image deblurred by the unconstrained method. The images in (d–o) are deblurred by the OSCM at iterations k = 1 to k = 12, respectively.
Figure 4. The circular Gaussian Kernel.
Figure 5. (a) is the exact image. (b) is the blurry image. (c) is the image deblurred by the unconstrained method. The images in (d–s) are deblurred by the TSCM at iterations k = 1 to k = 16, respectively.
Figure 6. (a) is the exact image, (b) is the blurry image, (c) is the image deblurred by OLM, (d) is the image deblurred by TLM, (e) is the image deblurred by OSCM, and (f) is the image deblurred by TSCM.
Figure 7. (a) is the exact image, (b) is the blurry image, (c) is the image deblurred by OLM, (d) is the image deblurred by TLM, (e) is the image deblurred by OSCM, and (f) is the image deblurred by TSCM.
Figure 8. (a) is the exact image, (b) is the blurry image, (c) is the image deblurred by OLM, (d) is the image deblurred by TLM, (e) is the image deblurred by OSCM, and (f) is the image deblurred by TSCM.
Figure 9. Galaxy image: (a) The blurry image, (b) deblurred image by the RLTV method, (c) deblurred image by the NFOV method, (d) deblurred image by the OSCM and (e) deblurred image by the TSCM.
Figure 10. Satel image: (a) the blurry image, (b) image deblurred using the RLTV method, (c) image deblurred using the NFOV method, (d) image deblurred using OSCM and (e) image deblurred by the TSCM.
Figure 11. Image 1: (a) Exact image, (b) blurry image, (c) image deblurred by the TV method, (d) image deblurred by the OLM, (e) image deblurred by the TLM, (f) image deblurred by the RLTV method, (g) image deblurred by the NFOV method, (h) image deblurred by the OSCM and (i) deblurred image by the TSCM.
Figure 12. Image 2: (a) Exact image, (b) blurry image, (c) deblurred image by the TV method, (d) deblurred image by the OLM, (e) deblurred image by the TLM, (f) deblurred image by the RLTV method, (g) deblurred image by the NFOV method, (h) deblurred image by the OSCM and (i) deblurred image by the TSCM.
Figure 13. Image 3: (a) Exact image, (b) blurry image, (c) image deblurred by the TV method, (d) image deblurred by the OLM method, (e) image deblurred by the TLM, (f) image deblurred by the RLTV method, (g) image deblurred by the NFOV method, (h) image deblurred by the OSCM method, and (i) deblurred image by the TSCM.
Figure 14. Image 4: (a) Exact image, (b) blurry image, (c) image deblurred by the TV method, (d) image deblurred by the OLM method, (e) image deblurred by the TLM, (f) image deblurred by the RLTV method, (g) image deblurred by the NFOV method, (h) image deblurred by the OSCM method, and (i) deblurred image by the TSCM.
Figure 15. Comparison of PSNR and SSIM values for different methods using Levin's dataset.
Table 1. The PSNR, SSIM, and number of negative pixels for Example 1.

Method | k | c_k | PSNR | SSIM | Negative Pixels
Blurred | – | – | 25.5453 | 0.7212 | –
Unconstrained | – | – | 42.7844 | 0.9834 | –
Constrained | 1 | 1 × 10⁻⁵ | 35.2141 | 0.9479 | 538
Constrained | 2 | 4.8 × 10⁻⁵ | 40.0719 | 0.9775 | 229
Constrained | 3 | 2.4 × 10⁻⁴ | 42.1297 | 0.9849 | 140
Constrained | 4 | 1.2 × 10⁻³ | 42.8566 | 0.9871 | 115
Constrained | 5 | 5.9 × 10⁻³ | 44.4585 | 0.9912 | 77
Constrained | 6 | 3.0 × 10⁻² | 45.5307 | 0.9935 | 56
Constrained | 7 | 1.5 × 10⁻¹ | 45.6246 | 0.9936 | 55
Constrained | 8 | 7.4 × 10⁻¹ | 46.4088 | 0.9950 | 34
Constrained | 9 | 3.7 × 10⁰ | 46.5475 | 0.9951 | 25
Constrained | 10 | 1.9 × 10¹ | 46.5729 | 0.9953 | 14
Constrained | 11 | 9.3 × 10¹ | 46.5579 | 0.9953 | 3
Constrained | 12 | 4.6 × 10² | 46.5578 | 0.9952 | 0
Table 2. The PSNR, SSIM, and number of pixels outside the interval [0, 255] for Example 2.

Method | k | c_k | PSNR | SSIM | Pixels Outside [0, 255]
Blurred | – | – | 25.7620 | 0.8750 | –
Unconstrained | – | – | 51.6217 | 0.9932 | –
Constrained | 1 | 1 × 10⁻⁵ | 47.8605 | 0.9892 | 929
Constrained | 2 | 2.9 × 10⁻⁵ | 50.7834 | 0.9941 | 379
Constrained | 3 | 8.6 × 10⁻⁵ | 50.7834 | 0.9959 | 379
Constrained | 4 | 2.6 × 10⁻⁴ | 50.7834 | 0.9960 | 379
Constrained | 5 | 7.7 × 10⁻⁴ | 50.7834 | 0.9961 | 379
Constrained | 6 | 2.3 × 10⁻³ | 50.7834 | 0.9961 | 379
Constrained | 7 | 6.9 × 10⁻³ | 50.7834 | 0.9962 | 379
Constrained | 8 | 2.1 × 10⁻² | 50.7834 | 0.9964 | 379
Constrained | 9 | 6.2 × 10⁻² | 50.7834 | 0.9969 | 379
Constrained | 10 | 1.9 × 10⁻¹ | 52.3107 | 0.9973 | 202
Constrained | 11 | 5.6 × 10⁻¹ | 53.2440 | 0.9974 | 112
Constrained | 12 | 1.7 × 10⁰ | 53.7802 | 0.9974 | 72
Constrained | 13 | 5.0 × 10⁰ | 54.0844 | 0.9979 | 46
Constrained | 14 | 1.5 × 10¹ | 54.2197 | 0.9978 | 24
Constrained | 15 | 4.5 × 10¹ | 54.2212 | 0.9979 | 12
Constrained | 16 | 1.4 × 10² | 54.3087 | 0.9980 | 0
Table 3. PSNR, SSIM, and CPU-Time comparison of different methods for Example 3.

Image | Metric | Blurred | OLM | TLM | OSCM | TSCM
Goldhills | PSNR | 23.1256 | 33.1589 | 33.1458 | 34.8945 | 34.8965
Goldhills | SSIM | 0.5687 | 0.7704 | 0.7690 | 0.7788 | 0.7759
Goldhills | CPU-Time (s) | – | 1005.2589 | 526.5476 | 896.4058 | 909.5469
Cameraman | PSNR | 23.5693 | 43.4561 | 43.5489 | 46.0056 | 45.9967
Cameraman | SSIM | 0.7524 | 0.7047 | 0.9113 | 0.9186 | 0.9121
Cameraman | CPU-Time (s) | – | 592.3464 | 345.2675 | 512.3641 | 526.3428
Table 4. PSNR, SSIM, and CPU-Time comparison of different methods for Example 4.

Image | Metric | Blurred | OLM | TLM | OSCM | TSCM
Pepper | PSNR | 23.1579 | 45.2366 | 45.4559 | 46.2973 | 46.3012
Pepper | SSIM | 0.7103 | 0.8395 | 0.8425 | 0.8438 | 0.8442
Pepper | CPU-Time (s) | – | 880.2645 | 524.7881 | 764.5225 | 791.2988
Table 5. PSNR and SSIM comparison of different methods for Example 5.

Method | Galaxy PSNR | Galaxy SSIM | Satel PSNR | Satel SSIM
Blurred | 20.6620 | 0.6712 | 20.4559 | 0.7994
RLTV | 23.8769 | 0.7560 | 22.2881 | 0.8731
NFOV | 24.1417 | 0.8222 | 22.7439 | 0.8759
OSCM | 25.0424 | 0.8409 | 24.1290 | 0.8829
TSCM | 25.0519 | 0.8425 | 24.1952 | 0.8837
Table 6. PSNR and SSIM comparison of different methods for Example 6.

Method | Img1 PSNR | Img1 SSIM | Img2 PSNR | Img2 SSIM | Img3 PSNR | Img3 SSIM | Img4 PSNR | Img4 SSIM
Blurred | 20.6620 | 0.6712 | 20.4559 | 0.7994 | 20.0261 | 0.6126 | 21.8748 | 0.6910
TV | 23.8769 | 0.7560 | 22.2881 | 0.8731 | 35.1520 | 0.9417 | 36.6872 | 0.9656
OLM | 23.8769 | 0.7560 | 22.2881 | 0.8731 | 41.9824 | 0.9723 | 42.7467 | 0.9878
TLM | 23.8769 | 0.7560 | 22.2881 | 0.8731 | 41.9562 | 0.9799 | 42.8641 | 0.9864
RLTV | 23.8769 | 0.7560 | 22.2881 | 0.8731 | 39.7634 | 0.9719 | 42.8737 | 0.9869
NFOV | 24.1417 | 0.8222 | 22.7439 | 0.8759 | 41.1822 | 0.9782 | 41.6221 | 0.9834
OSCM | 25.0424 | 0.8409 | 24.1290 | 0.8829 | 42.3956 | 0.9826 | 43.5442 | 0.9885
TSCM | 25.0519 | 0.8425 | 24.1952 | 0.8837 | 41.7253 | 0.9803 | 43.5522 | 0.9886

