Article

A Novel Fractional-Order Non-Convex TVα,p Model in Image Deblurring

1 School of Mathematics and Information Sciences, Nanchang Hangkong University, Nanchang 330063, China
2 Department of Mathematics, Harbin Institute of Technology, Weihai 264209, China
3 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(10), 567; https://doi.org/10.3390/fractalfract8100567
Submission received: 2 September 2024 / Revised: 26 September 2024 / Accepted: 27 September 2024 / Published: 28 September 2024

Abstract: In this paper, we propose a non-convex fractional-order model for image deblurring problems. In the new model, fractional-order gradients are introduced to preserve detailed features, and a source term with a blurring kernel is used for deblurring, which enables the model to handle various blurring scenarios. Additionally, we devise an algorithm that maintains the non-expansiveness of the support set of the image gradients, which serves as a critical component of our approach to image deblurring. After approximate linearization, the algorithm can be implemented easily, and standard image processing tools such as the fast Fourier transform can be utilized. Global convergence is also established. Moreover, we show that the proposed deblurring algorithm possesses an edge-preservation property. Compared with several existing classic models, the proposed method maintains a good balance between detail preservation, edge preservation, and deblurring; in particular, it improves PSNR and SSIM by 0.9733 and 0.0111, respectively.

1. Introduction

Image deblurring is a critical concern in image processing, as noted in [1,2,3,4]. To date, it has found substantial applications in a variety of fields, such as communications, biomedical engineering, and medical image analysis. However, due to its ill-posed nature, no explicit closed-form solution exists. In general, the image blurring process is modeled as the convolution of a clear image with a shift-invariant kernel plus noise, i.e.,
$$f = Ku + n,$$
where K is a blurring operator, n is additive noise, and u is the original clean image. This is a prototypical ill-posed inverse problem.
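To make the degradation model concrete, the following minimal sketch simulates $f = Ku + n$ for a Gaussian point spread function under periodic boundary conditions; the kernel, noise level, and test image here are illustrative assumptions rather than the experimental settings of Section 6.

```python
# Minimal sketch of the degradation model f = K u + n (illustrative settings only).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
u = rng.random((256, 256))                         # stand-in for a clean image in [0, 1]
Ku = gaussian_filter(u, sigma=2.0, mode="wrap")    # shift-invariant blurring operator K
n = 1e-3 * rng.standard_normal(u.shape)            # additive Gaussian noise
f = Ku + n                                         # observed blurry, noisy image
```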
Over the recent period, a variety of methods and models have been deployed to regularize the ill-conditioned nature of this inverse problem. The most classic method among them is the total variation (TV) model proposed by Rudin, Osher, and Fatemi (ROF) in [5]. The discrete form of the model is as follows:
$$\min_{u\in\mathbb{R}^N} \sum_{i=1}^{N}\|D_i u\|_2 + \frac{\lambda}{2}\|Ku - f\|_2^2,$$
where $\lambda > 0$ is a tuning parameter, $N = n\times n$ is the number of pixels of an $n\times n$ image, and $D_i u$ denotes the gradient of $u$ at pixel $i$. The TV model preserves object boundaries and sharp edges, which are usually crucial features to recover. However, due to the non-linearity and non-differentiability of the TV functional, TV-based models are computationally more challenging to implement. To overcome this difficulty, numerous numerical methods have been proposed, such as the fast total variation minimization (FastTVMM) method [2] and the first-order primal-dual method [6]; these methods effectively handle the computational difficulties induced by the non-linearity and non-differentiability of the total variation functional.
In addition to the variational approach, numerous alternative methodologies have also advanced substantially. Examples include approximation with suitable basis functions, as in the expansive domain of wavelet approximation, or smoothing operations with filters such as Gaussian convolutions. One of the most representative models is the wavelet frame-based model (WFBM) proposed by Cai, Dong, and Shen in [1]. This model explicitly treats the image as a piecewise smooth function and simultaneously infers both the target image to be restored and its discontinuity set. However, the WFBM can also lead to oscillatory phenomena, akin to pseudo-Gibbs artifacts, and spurious noise spikes in the vicinity of image discontinuities. Traditional local regularization strategies, including wavelet-based and TV-based methods, emphasize spatial adaptability from the perspective of local smoothness. The limitation of such local methods is that important structures in the image, such as textures and edges, exhibit non-local dependencies or regularities (e.g., within texture regions or among similar patches along the edge direction).
In [7], Hintermüller et al. proposed an image restoration model named TV$^p$ ($0 < p < 1$). The model can be described as follows:
$$\min_{u\in\mathbb{R}^N} \sum_{i=1}^{N}\|D_i u\|_2^p + \frac{\lambda}{2}\|Ku - f\|_2^2.$$
The TVp model, due to its characteristics of preserving edges and enhancing contrast, has been successfully applied to tasks related to image restoration. In [8], Bai et al. introduced a new model based on fractional-order total variation (FOTV) for image restoration. The specific form of this model is as follows:
$$\min_{u\in\mathbb{R}^N} \sum_{i=1}^{N}\|D_i^\alpha u\|_2 + \frac{\lambda}{2}\|Ku - f\|_2^2,$$
where $\alpha$ is a fractional order belonging to the interval $(1,2)$. Due to the non-locality and self-similarity of fractional-order gradients, they can preserve texture details in images very effectively.
In recent years, a significant amount of research has been conducted by scholars focusing on image restoration techniques. In [3], an image restoration model utilizing non-convex and non-smooth total generalized variation (NNTGV) was presented as described below:
$$\min_{u,w\in\mathbb{R}^N} \lambda\|Ku - f\|_2^2 + \alpha_1\int_\Omega\varphi(|\nabla u - w|)\,dx + \alpha_2\int_\Omega\varphi(|\nabla w|)\,dx.$$
Here, the positive parameters $\alpha_1$ and $\alpha_2$ act as control variables, and $\varphi$ denotes a non-convex, non-smooth potential function. $\lambda$ is a tuning parameter that balances the regularization and fidelity terms. The NNTGV model alleviates the penalty on large-variation patterns of $u$ while encouraging the penalty on small-variation patterns; therefore, the NNTGV model demonstrates excellent edge-preserving performance. Additionally, in [4], Huang et al. proposed an innovative image restoration framework that combines quaternion dictionary learning with saturation-value total variation (QDSV) regularization, which can successfully restore real natural images.
Inspired by the methods mentioned in (2) and (3) above, we study a novel deblurring framework based on a class of fractional-order gradients and non-convex exponent. The proposed model can simultaneously inherit the advantages of fractional-order gradients preserving texture and non-convex exponent preserving edge. Below is the outline of the proposed model:
$$\min_{u\in\mathbb{R}^N} \sum_{i=1}^{N}\|D_i^\alpha u\|_2^p + \frac{\lambda}{2}\|Ku - f\|_2^2,$$
where $0 < p < 1$ and $\alpha$ is a fractional-order parameter in the interval $(1, 2)$. We will refer to model (5) as the TVα,p model. The objective function uses the non-convex TV$^p$-type term as the regularizer, which preserves edges, while the fractional-order gradient preserves more details; a balance between deblurring and edge preservation is achieved by adjusting the regularization parameter. $\|Ku - f\|_2^2$ stands for the fidelity term, which ensures a certain form of "consistency" between the deblurred image and the observed blurry image. As noted above, traditional local regularization strategies, including wavelet-based and TV-based methods, emphasize spatial adaptation from the perspective of local smoothness. The limitation of such local approaches is that important structures in the image, such as edges and textures, exhibit non-local dependencies or regularities (e.g., similar patches along the edge direction or within texture regions). In contrast, the fractional-order gradient operator, owing to its non-locality and self-similarity, can better model the texture details of images and thus better preserve image textures with complex internal structures.
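For concreteness, the following sketch evaluates the TVα,p objective in (5). The callables `blur`, `grad_x`, and `grad_y` stand for the blurring operator K and the two components of the fractional gradient $D^\alpha$; they are hypothetical placeholders, not routines defined in the paper.

```python
import numpy as np

def tv_alpha_p_energy(u, f, blur, grad_x, grad_y, lam=5e4, p=0.5):
    """Evaluate F(u) = sum_i ||D_i^alpha u||_2^p + lam/2 * ||K u - f||_2^2
    for caller-supplied operators (a sketch; `blur`, `grad_x`, `grad_y`
    are placeholder callables implementing K and D^alpha)."""
    gx, gy = grad_x(u), grad_y(u)
    grad_mag = np.sqrt(gx**2 + gy**2)             # ||D_i^alpha u||_2 at every pixel
    reg = np.sum(grad_mag**p)                     # non-convex regularizer, 0 < p < 1
    fid = 0.5 * lam * np.sum((blur(u) - f)**2)    # fidelity term
    return reg + fid
```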
The contributions of this work are summarized as follows:
  • A novel framework based on fractional-order non-smooth and non-Lipschitz optimization problems is developed for the image deblurring problem.
  • Relying on the principles of first-order optimality conditions for non-Lipschitz regularization, we provide a solution algorithm that focuses on the support set of image gradients under constraints.
  • We show that the proposed algorithm converges globally. A lower bound theory for the iterative sequence is proven, indicating the algorithm’s advantage in recovering images with edge and detail preservation.
  • The proposed method can easily handle various types of blurring. This is because the exponential regularization term in the model allows our method to be completely independent of assumptions about the spatial distribution of the blur kernel.
The rest of the paper is organized as follows. In Section 2, we provide preliminaries about this paper. In Section 3, we introduce the problem and propose the main algorithm. In Section 4, we provide the convergence analysis. In Section 5, we present implementation details. Experimental results are demonstrated in Section 6. Finally, we share a conclusion in Section 7.

2. Preliminaries

In this section, we will briefly review the relevant definitions of fractional TV and fractional-order derivatives. Later in this section, we present some notation used in the paper.

2.1. Fractional-Order Total Variation

Fractional-order derivatives can be regarded as a generalization of integer-order derivatives. Currently, there are three prevalent definitions, namely the Riemann–Liouville (RL), the Grünwald–Letnikov (GL), and the Caputo definitions. In this paper, we adopt the GL definition. For more details about the other two definitions, readers can refer to the literature [9]. Let $\alpha \in \mathbb{R}^+$ satisfy $0 \le n-1 \le \alpha < n$ for an integer $n$. The $\alpha$-order fractional derivative at a point $x \in \mathbb{R}$ is denoted by ${}_aD_x^\alpha$, where $a$ and $x$ are the start and end points of integration in a one-dimensional computational domain.
The left and right GL fractional-order derivatives are
$${}_a^G D_x^\alpha v(x) = \lim_{h\to 0}\frac{1}{h^\alpha}\sum_{j=0}^{\left[\frac{x-a}{h}\right]}(-1)^j C_j^\alpha\, v(x - jh),$$
and
$${}_x^G D_b^\alpha v(x) = \lim_{h\to 0}\frac{1}{h^\alpha}\sum_{j=0}^{\left[\frac{b-x}{h}\right]}(-1)^j C_j^\alpha\, v(x + jh).$$
The symbol $\Gamma(\cdot)$ denotes the gamma function, while $C_j^\alpha = \Gamma(\alpha+1)/[\Gamma(j+1)\Gamma(\alpha-j+1)]$ is the generalized binomial coefficient. In this paper, $[\theta]$ denotes the greatest integer less than or equal to $\theta$, so that $\theta - 1 < [\theta] \le \theta$. In particular, if $\alpha = 1$, then $C_j^1 = 0$ for $j \ge 2$; at this point, Equation (7) degenerates into a first-order derivative. Based on Equations (6) and (7), the Grünwald–Letnikov (central) fractional-order derivative can be defined by
$${}_a^G D_b^\alpha v(x) = \frac{1}{2}\left({}_a^G D_x^\alpha v(x) - {}_x^G D_b^\alpha v(x)\right).$$
To illustrate the characteristics of fractional methods in image processing, we review the total α-order variation, as documented in [10,11]. For some other applications of fractional-order methods, readers can refer to references [12,13].
Definition 1.
The definition of the total α-order variation applied to u is given by
$$TV^\alpha(u) := \sup_{\phi\in T}\sum_{i}^{N}\sum_{j}^{N}\left(u\,\mathrm{div}^\alpha\phi\right)_{i,j},$$
where
$$T = \left\{\phi\in C_0^l(\Omega,\mathbb{R}^2)\ \middle|\ \|\phi(x)\|_{1,2}\le 1\ \text{for all}\ x\in\Omega\right\},$$
$\mathrm{div}^\alpha\phi = \sum_{i=1}^{d}\frac{\partial^\alpha\phi_i}{\partial x_i^\alpha}$, and $\frac{\partial^\alpha\phi_i}{\partial x_i^\alpha}$ represents the fractional-order derivative of $\phi_i$ along the $x_i$ direction. Based on the α-bounded variation (BV), the α-BV norm is defined by
$$\|u\|_{BV^\alpha} = \|u\|_{L^1} + TV^\alpha(u),$$
and the set of functions exhibiting α-bounded variation over the region Ω is
$$BV^\alpha(\Omega) := \left\{u\in L^1(\Omega)\ \middle|\ TV^\alpha(u) < +\infty\right\}.$$
For any positive integer $p$, let $W_p^\alpha(\Omega) = \{u\in L^p(\Omega)\ |\ \|u\|_{W_p^\alpha(\Omega)} < \infty\}$ be the function space equipped with the norm $\|u\|_{W_p^\alpha(\Omega)} = \left(\sum_{i}^{N}\sum_{j}^{N}|u_{i,j}|^p + \sum_{i}^{N}\sum_{j}^{N}|(\nabla^\alpha u)_{i,j}|^p\right)^{1/p}$, where $\nabla^\alpha u = \left({}_a^G D_x^\alpha u,\ {}_a^G D_y^\alpha u\right)^T$.
Lemma 1
(see [10]). If $u\in W_1^\alpha(\Omega)$, then $TV^\alpha(u) = \sum_{i}^{N}\sum_{j}^{N}\left|(\nabla^\alpha u)_{i,j}\right|$.
Next, we briefly review the definition of the GL fractional-order derivative [14] of a real function $f(x)$; its specific form is as follows:
$$D^\alpha f(x) = \lim_{h\to 0^+}\sum_{k\ge 0}(-1)^k C_k^\alpha\frac{f(x - kh)}{h^\alpha}, \quad \alpha > 0,$$
where $C_k^\alpha$ is defined as in (6). From the definition of $C_k^\alpha$, it is easy to see that, for fixed $\alpha$, $C_k^\alpha$ converges quickly to 0 as $k\to\infty$. Taking $h = 1$, we obtain the finite fractional-order difference:
$$D^\alpha f(x) = \sum_{k=0}^{K-1}(-1)^k C_k^\alpha f(x - k).$$
In particular, when $\alpha$ is set to 1, Equation (12) degenerates into the usual first-order forward difference.
In addition, we introduce the finite difference (FD) discretization of fractional-order derivatives. Before delving into the discretization, we define a spatial partition $(x_i, y_j)$ of the image domain $\Omega$, encompassing all indices $i, j$ from 0 through $N+1$, and impose a zero Dirichlet boundary condition on $u$. We then consider the discretization of the $\alpha$-order derivative at every interior point $(x_i, y_j)$, for $i, j$ from 0 through $N$, along the $x$- and $y$-directions:
$$D_x^\alpha f(x_i, y_j) = \frac{1}{2}\left[\sum_{k=0}^{i+1}w_k^\alpha f(x_{i-k+1}, y_j) + \sum_{k=0}^{N-i+2}w_k^\alpha f(x_{i+k-1}, y_j)\right],$$
$$D_y^\alpha f(x_i, y_j) = \frac{1}{2}\left[\sum_{k=0}^{j+1}w_k^\alpha f(x_i, y_{j-k+1}) + \sum_{k=0}^{N-j+2}w_k^\alpha f(x_i, y_{j+k-1})\right],$$
where $k = 0, 1, \ldots$, and $w_k^\alpha$ can be computed by the following recursive formula:
$$w_0^\alpha = 1; \qquad w_k^\alpha = \left(1 - \frac{1+\alpha}{k}\right)w_{k-1}^\alpha, \quad k > 0.$$
In real-world scenarios, zero boundary conditions are incorporated into the matrix approximation used to compute the fractional derivatives. Therefore, Equations (13) and (14) can be represented uniformly in matrix form (denote $w = w_0^\alpha + w_2^\alpha$):
$$\begin{bmatrix} D_x^\alpha f(x_1,y_j)\\ D_x^\alpha f(x_2,y_j)\\ D_x^\alpha f(x_3,y_j)\\ \vdots\\ D_x^\alpha f(x_N,y_j)\end{bmatrix} = \underbrace{\frac{1}{2}\begin{bmatrix} 2w_1^\alpha & w & w_3^\alpha & \cdots & w_N^\alpha\\ w & 2w_1^\alpha & w & \ddots & \vdots\\ w_3^\alpha & w & 2w_1^\alpha & \ddots & w_3^\alpha\\ \vdots & \ddots & \ddots & \ddots & w\\ w_N^\alpha & \cdots & w_3^\alpha & w & 2w_1^\alpha\end{bmatrix}}_{B_N^\alpha}\ \underbrace{\begin{bmatrix} f(x_1,y_j)\\ f(x_2,y_j)\\ f(x_3,y_j)\\ \vdots\\ f(x_N,y_j)\end{bmatrix}}_{f}.$$
Denote by $W$ the $N\times N$ matrix assembling the values at all nodes $(ih, jh)$ for $i, j = 1, 2, \ldots, N$, which aligns with the spatial discretization points in both the $x$- and $y$-directions. Then, we can easily obtain $D_x^\alpha W = B_N^\alpha W$. Likewise, the $\alpha$th-order derivative of $w(x, y)$, with $(x, y)$ in the domain $[h, Nh]$, along the $y$-direction can be approximated by $D_y^\alpha W = W(B_N^\alpha)^T$.
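As an illustration of the discretization above, the sketch below computes the GL weights $w_k^\alpha$ from the recursion and checks that $\alpha = 1$ recovers the first-order difference weights; it also assembles a simple one-sided difference matrix with zero Dirichlet boundary. The banded symmetric structure of $B_N^\alpha$, which combines left and right stencils, follows the text; the one-sided matrix here is only a simplified, assumed illustration.

```python
# Sketch: Gruenwald-Letnikov weights and a one-sided difference matrix (illustrative).
import numpy as np

def gl_weights(alpha, K):
    """Weights w_k^alpha from w_0 = 1, w_k = (1 - (1+alpha)/k) * w_{k-1}."""
    w = np.empty(K)
    w[0] = 1.0
    for k in range(1, K):
        w[k] = (1.0 - (1.0 + alpha) / k) * w[k - 1]
    return w

def gl_forward_matrix(alpha, N):
    """Lower-triangular matrix applying the truncated left GL difference
    D^alpha f(x_i) ~ sum_k w_k^alpha f(x_{i-k}), with zero Dirichlet boundary."""
    w = gl_weights(alpha, N)
    B = np.zeros((N, N))
    for i in range(N):
        for k in range(i + 1):
            B[i, i - k] = w[k]
    return B

print(gl_weights(1.0, 5))   # [ 1. -1.  0.  0.  0.]  -> first-order forward difference
print(gl_weights(1.4, 5))   # fractional weights decaying toward zero
```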

2.2. Some Notation

We use $J = \{1, 2, \ldots, N\}$ to denote the set of all pixel indices of $u$. $D_i = (D_i^x, D_i^y)^T \in \mathbb{R}^{2\times N}$ denotes the discrete gradient operator at pixel $i$, where $D_i^x$ and $D_i^y$ are the discrete gradient operators along the horizontal and vertical directions, respectively. $K$ refers to a real-valued matrix, and $K^T$ is its transpose. To keep the notation concise, we use $\|\cdot\|$ to denote the $\ell_2$-norm. Throughout this work, we define
$$\Omega_1(u) = \{i\in J : \|D_i u\| \ne 0\},$$
where $u\in\mathbb{R}^N$, and denote $\Omega_0(u) = J\setminus\Omega_1(u)$. For a set $\Xi$, the symbol $|\Xi|$ stands for its cardinality.
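The support sets defined above can be computed directly from the gradient magnitudes. The helper below is a minimal illustration, assuming the two gradient components are supplied as arrays of the same shape as $u$ (an assumption made for simplicity).

```python
# Sketch: support sets Omega_1(u) and Omega_0(u) from gradient components.
import numpy as np

def support_sets(gx, gy, tol=0.0):
    grad_norm = np.hypot(gx, gy)                 # ||D_i u||_2 per pixel
    omega1 = np.flatnonzero(grad_norm > tol)     # indices with nonzero gradient
    omega0 = np.flatnonzero(grad_norm <= tol)    # complement J \ Omega_1
    return omega1, omega0
```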

3. The Problem and the Algorithm

3.1. The Problem

To better analyze the non-convex TVα,p model (5) with fractional-order gradients, we consider the following variational problem:
$$\min_{u\in\mathbb{R}^N} F(u) = \sum_{i=1}^{N}\phi(\|D_i^\alpha u\|_2) + \frac{\lambda}{2}\|Ku - f\|_2^2,$$
where $\phi(t) = t^p$.
Assumption 1.
(i) $\ker K \cap \left(\bigcap_{i=1}^{N}\ker D_i\right) = \{0\}$.
(ii) $\phi : [0,+\infty)\to[0,+\infty)$ is concave and coercive in the sense of Definition 3.25 in [15], and $\phi(0) = 0$.
(iii) $\phi$ is $C^1$ on $(0,+\infty)$ with $\phi'(t) > 0$ for all $t\in(0,+\infty)$, and $\phi'(t)\to+\infty$ as $t\to 0^+$.
(iv) For any $\theta > 0$, $\phi'$ is uniformly Lipschitz continuous on $(\theta,+\infty)$.
Remark 1.
(a) If Assumptions 1(i) and (ii) hold true, then F ( u ) becomes coercive, which ensures the existence of a minimizer for problem (15).
(b) Assumption 1(iii) indicates that the function ϕ does not satisfy a Lipschitz condition.
(c) We present a few examples of functions that meet the criteria outlined in Assumptions 1(ii), (iii), and (iv): $\phi_1(t) = t^{1/2} + 2t$; $\phi_2(t) = t^{1/2} + t^{1/3}$; $\phi_3(t) = \log(t^p + 1)$, $p\in(0,1)$; $\phi_4(t) = t^p$, $p\in(0,1)$.
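As a small illustration of these potential functions, the snippet below implements $\phi_3$ and $\phi_4$ together with the derivative of $\phi_4$, whose blow-up at $0^+$ reflects Assumption 1(iii). The value $p = 0.5$ follows the experimental setting used later in the paper.

```python
# Illustrative potential functions from Remark 1(c); phi4 is the choice used here.
import numpy as np

def phi4(t, p=0.5):
    return t**p

def dphi4(t, p=0.5):
    return p * t**(p - 1.0)      # -> +infinity as t -> 0+, matching Assumption 1(iii)

def phi3(t, p=0.5):
    return np.log(t**p + 1.0)
```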
The following lemma can be directly obtained by Assumption 1(ii) and (iii).
Lemma 2
(see [16]). For every $x \ge 0$ and $\bar{x} > 0$, the following inequality holds:
$$\phi(x) \le \phi(\bar{x}) + \phi'(\bar{x})(x - \bar{x}).$$

3.2. The Algorithm

In this subsection, we briefly analyze the relevant theory of algorithms for solving the minimization problem (15). Given that problem (15) is inherently non-smooth and does not satisfy a Lipschitz condition, designing an effective algorithm is particularly challenging. Most existing methods are approximation methods, or rather, smoothing approximation methods [17,18]. In this paper, we design an algorithm with global convergence. In addition, we provide some results that, although fundamental, form the core of this paper.
Proposition 1.
If $t^*$ is a local minimizer of $F$ in (15), and Assumptions 1(ii) and (iii) hold, then there exists a value $\zeta > 0$ such that
$$\text{either}\quad \|D_i t^*\| = 0 \quad\text{or}\quad \|D_i t^*\| > \zeta, \quad \forall i\in J.$$
Proof. 
The detailed proof is similar to the proof of Proposition 1 in Ref. [16].    □
Remark 2.
(i) Proposition 1 of [16] shows that the above proposition still holds even if $t^*$ is only a critical point of $F(u)$ (Remark 3(b) in [16]).
(ii) If $\phi\in C^2$ on $(0,+\infty)$ and $\phi'(0^+) = \infty$, a uniform lower bound is established in ([19], Theorem 3.1).
(iii) From Theorem 2.2 in [20], the lower bound theory relies only on the first-order optimality conditions; therefore, $\phi$ need not be restricted to $C^2$. Our lower bound is related to the value $F(t^*)$, whereas the lower bound in [19] is uniform.
Proposition 1 indicates that if we find a point $\bar{t}$ in close proximity to the local minimizer $t^*$, such that $\|D_i t^* - D_i\bar{t}\| \le \|D_i\|\,\|t^* - \bar{t}\| < \zeta$, $\forall i\in J$, then $\|D_i t^*\| = 0$ whenever $D_i\bar{t} = 0$. This implies that the support does not expand from $\bar{t}$ to $t^*$. Inspired by this observation, we present the iterative support shrinking Algorithm 1 below.
Algorithm 1 ISSA: Iterative Support Shrinking Algorithm.
1. Input λ , K, f. Initialize u 0 = f .
2. In each round for k = 0 , 1 , , calculate u k + 1 by addressing
$$\min_{u\in\mathbb{R}^N} F_k(u) := \sum_{i\in\Omega_1^k}\phi(\|D_i^\alpha u\|) + \frac{\lambda}{2}\|Ku - f\|_2^2, \quad \text{s.t. } D_i^\alpha u = 0,\ i\in\Omega_0^k,$$
where $\Omega_0^k = \Omega_0(u^k)$ and $\Omega_1^k = \Omega_1(u^k)$, $\phi : [0,+\infty)\to[0,+\infty)$ is a potential function, and $\lambda > 0$ is a tuning parameter.
In general, the above optimization problem (16) is difficult to solve. Thus, we introduce an iterative method that employs support shrinking and proximal linearization (Algorithm 2). In simple terms, the original potential is replaced by its first-order Taylor expansion together with a proximal term.
Algorithm 2 ISSAPL: Iterative Support Shrinking Algorithm with Proximal Linearization.
1. Input λ , K, f, ρ > 0 . Initialize u 0 = f .
2. In each round for k = 0 , 1 , , calculate u k + 1 by addressing
$$\min_{u\in\mathbb{R}^N} Q_k(u) := \sum_{i\in\Omega_1^k}\phi'(\|D_i^\alpha u^k\|)\,\|D_i^\alpha u\| + \frac{\rho}{2}\|u - u^k\|_2^2 + \frac{\lambda}{2}\|Ku - f\|_2^2, \quad \text{s.t. } D_i^\alpha u = 0,\ i\in\Omega_0^k,$$
where $\Omega_0^k = \Omega_0(u^k)$ and $\Omega_1^k = \Omega_1(u^k)$, $1/\rho > 0$ is the step size, and $\phi'$ is uniformly Lipschitz continuous on $(\theta,+\infty)$ for any $\theta > 0$.
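The following schematic sketch shows one way to organize the ISSAPL outer loop of Algorithm 2. Here `solve_Qk` is a placeholder for an inner solver of the proximally linearized subproblem (17), for instance the ADMM of Section 5, and `grad_norm` and `dphi` are assumed helpers returning $\|D_i^\alpha u\|$ per pixel and $\phi'$; none of these names come from the paper.

```python
# Schematic ISSAPL outer loop (Algorithm 2); solve_Qk, grad_norm, dphi are placeholders.
import numpy as np

def issapl(f, grad_norm, solve_Qk, dphi, rho=1e-10, max_outer=500, tol=1e-5):
    u = f.copy()                                    # initialization u^0 = f
    for k in range(max_outer):
        gnorm = grad_norm(u)                        # ||D_i^alpha u^k|| per pixel
        omega1 = gnorm > 0                          # support set Omega_1^k
        weights = np.zeros_like(gnorm)
        weights[omega1] = dphi(gnorm[omega1])       # phi'(||D_i^alpha u^k||) on Omega_1^k
        u_new = solve_Qk(u, weights, omega1, rho)   # minimize Q_k s.t. D_i^alpha u = 0 on Omega_0^k
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u
```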

4. Convergence Analysis

In this section, we will analyze the global convergence of the iterative support shrinking algorithm with proximal linearization. The sequence { u k } emerges from executing an iterative algorithm that combines support shrinkage with proximal linearization, where Ω 0 k = Ω 0 ( u k ) and Ω 1 k = Ω 1 ( u k ) .

4.1. Basic Convergence Properties

Before presenting our main results, we first introduce some basic convergence properties. These properties are essential for proving the convergence.
Lemma 3.
For the constraint sets in the iterative support shrinking algorithm with proximal linearization, it is easy to see that $\Omega_0^k \subseteq \Omega_0^{k+1}$ and that $\Omega_0^k$ converges in a finite number of steps, i.e.,
$$\exists K, \ \text{such that}\ \Omega_0^k = \Omega_0^K, \quad \forall k \ge K.$$
Proof. 
Since $u^{k+1}$ is the solution of the optimization problem (17), $D_i^\alpha u^{k+1} = 0$ holds for all $i\in\Omega_0^k$. In addition, according to the definition of $\Omega_0(u^{k+1})$, $D_i^\alpha u^{k+1} = 0$ means $i\in\Omega_0^{k+1}$. Thus, the nested behavior $\Omega_0^k\subseteq\Omega_0^{k+1}$ is established; that is, $\{\Omega_0^k\}$ is an increasing sequence of subsets of $J$. Since $J$ is finite, $\Omega_0^k$ converges within a finite number of steps.    □
We denote $\bar\Omega_0 = \Omega_0^K$ and $\bar\Omega_1 = \Omega_1^K$. According to Lemma 3 above, problem (16) can be rewritten as
$$\min_{u\in\mathbb{R}^N} \bar F(u) := \sum_{i\in\bar\Omega_1}\phi(\|D_i^\alpha u\|) + \frac{\lambda}{2}\|Ku - f\|_2^2, \quad \text{s.t. } D_i^\alpha u = 0,\ i\in\bar\Omega_0,$$
and problem (17) may be recast as
$$\min_{u\in\mathbb{R}^N} \bar Q_k(u) := \frac{\rho}{2}\|u - u^k\|_2^2 + \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\,\|D_i^\alpha u\| + \frac{\lambda}{2}\|Ku - f\|_2^2, \quad \text{s.t. } D_i^\alpha u = 0,\ i\in\bar\Omega_0.$$
To better analyze $(\bar F)$ and $(\bar Q_k)$, we transform problems (19) and (20) into unconstrained forms. Since $D_i^\alpha u = 0$ is a special linear constraint in (19) and (20), $u$ can be reconstructed through
$$u = E_w u_w,$$
where $u_w\in\mathbb{R}^{N_w}$ ($N_w \le N$) collects the free (non-zero) entries of $u$, and $E_w\in\mathbb{R}^{N\times N_w}$ depends on $u_w$. The choice of $u_w$ is not unique; for convenience, we fix $u_w = Pu$, where $P\in\mathbb{R}^{N_w\times N}$. According to the Cauchy–Schwarz inequality,
$$\|u_w\| \le \|u\| \le \|E_w\|\,\|u_w\|.$$
Combining Equation (21), problem (19) can be rewritten as follows:
$$\min_{u_w\in\mathbb{R}^{N_w}} \bar F_w(u_w) := \sum_{i\in\bar\Omega_1}\phi(\|D_i^\alpha E_w u_w\|) + \frac{\lambda}{2}\|K E_w u_w - f\|_2^2.$$
These two problems are equivalent, i.e.,
$$u_w^*\ \text{solves (23) and}\ u^* = E_w u_w^* \iff u^*\ \text{solves (19) and}\ u_w^* = P u^*.$$
Correspondingly, when $k \ge K$, define
$$\bar Q_k^w(u_w) := \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\,\|D_i^\alpha E_w u_w\| + \frac{\rho}{2}\|E_w u_w - u^k\|^2 + \frac{\lambda}{2}\|K E_w u_w - f\|^2,$$
and
$$\min_{u_w\in\mathbb{R}^{N_w}} \bar Q_k^w(u_w).$$
Then (24) and (17) are equivalent, i.e.,
$$u_w^{k+1}\ \text{solves (24)} \iff u^{k+1}\ \text{solves (17)}, \quad \forall k \ge K.$$
To analyze convergence, we provide several fundamental results that will be utilized later. Combining Equations (15)–(18) and (22), the following relations can be verified:
$$F_k(u^{k+j}) = F(u^{k+j}), \quad \forall k, j \ge 0,$$
$$\bar F_w(u_w^k) = \bar F(u^k) = F(u^k), \quad \forall k \ge K.$$
When $k \ge K$, Equation (21) gives $\|D_i^\alpha E_w u_w^k\| = \|D_i^\alpha u^k\| \ne 0$, $\forall i\in\bar\Omega_1$. As a result, $\bar Q_k^w(u_w)$ is differentiable at $u_w^k$. Then, leveraging the first-order optimality conditions, we can deduce
$$\nabla\bar Q_k^w(u_w^{k+1}) = 0.$$
That is,
$$\begin{aligned}0 &= \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha E_w u_w^{k+1}}{\|D_i^\alpha E_w u_w^{k+1}\|} + \lambda(KE_w)^T(KE_w u_w^{k+1} - f) + \rho E_w^T(E_w u_w^{k+1} - u^k)\\ &= \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|} + \rho E_w^T(u^{k+1} - u^k) + \lambda(KE_w)^T(Ku^{k+1} - f).\end{aligned}$$
Lemma 4.
The sequence $\{F(u^k)\}$ is decreasing and satisfies
$$\frac{\rho}{2}\|u^{k+1} - u^k\|^2 \le F(u^k) - F(u^{k+1}), \quad \forall k \ge 0.$$
Proof. 
By Lemma 2, we can obtain
$$\phi(\|D_i^\alpha u\|) \le \phi(\|D_i^\alpha u^k\|) + \phi'(\|D_i^\alpha u^k\|)\left(\|D_i^\alpha u\| - \|D_i^\alpha u^k\|\right), \quad \forall i\in\Omega_1^k.$$
Therefore,
$$\begin{aligned}F_k(u) + \frac{\rho}{2}\|u - u^k\|^2 &= \sum_{i\in\Omega_1^k}\phi(\|D_i^\alpha u\|) + \frac{\rho}{2}\|u - u^k\|^2 + \frac{\lambda}{2}\|Ku - f\|^2\\ &\le \sum_{i\in\Omega_1^k}\phi(\|D_i^\alpha u^k\|) + \sum_{i\in\Omega_1^k}\phi'(\|D_i^\alpha u^k\|)\left(\|D_i^\alpha u\| - \|D_i^\alpha u^k\|\right) + \frac{\rho}{2}\|u - u^k\|^2 + \frac{\lambda}{2}\|Ku - f\|^2\\ &= Q_k(u) + \sum_{i\in\Omega_1^k}\left(\phi(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^k\|)\|D_i^\alpha u^k\|\right).\end{aligned}$$
We have
$$Q_k(u^k) + \sum_{i\in\Omega_1^k}\left(\phi(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^k\|)\|D_i^\alpha u^k\|\right) = \sum_{i\in\Omega_1^k}\phi(\|D_i^\alpha u^k\|) + \frac{\lambda}{2}\|Ku^k - f\|^2 = \sum_{i=1}^{N}\phi(\|D_i^\alpha u^k\|) + \frac{\lambda}{2}\|Ku^k - f\|^2 = F(u^k).$$
It follows that
$$\begin{aligned}F(u^{k+1}) + \frac{\rho}{2}\|u^{k+1} - u^k\|^2 &= F_k(u^{k+1}) + \frac{\rho}{2}\|u^{k+1} - u^k\|^2\\ &\le Q_k(u^{k+1}) + \sum_{i\in\Omega_1^k}\left(\phi(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^k\|)\|D_i^\alpha u^k\|\right)\\ &\le Q_k(u^k) + \sum_{i\in\Omega_1^k}\left(\phi(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^k\|)\|D_i^\alpha u^k\|\right) = F(u^k),\end{aligned}$$
where we use $\phi(0) = 0$ and the fact that $u^{k+1}$ minimizes (17). Thus, the lemma holds.    □
Lemma 5.
Given that the sequence $\{u^k\}$ is generated by Algorithm 2, it is bounded and satisfies
$$\sum_{k=1}^{\infty}\|u^{k+1} - u^k\|^2 < \infty,$$
and $\lim_{k\to\infty}\|u^{k+1} - u^k\| = 0$ holds.
Proof. 
From Lemma 4 and $F(u) \ge 0$, we conclude that $\{F(u^k)\}$ is bounded and convergent. Since $F(u)$ is coercive (Remark 1(a)), $\{u^k\}$ is bounded. Summing inequality (28) from $k = 0$ to $N-1$ for a positive integer $N$, we obtain
$$\sum_{k=0}^{N-1}\|u^{k+1} - u^k\|^2 \le \frac{2}{\rho}\left(F(u^0) - F(u^N)\right) \le \frac{2}{\rho}F(u^0).$$
Therefore, letting $N\to\infty$ in the above inequality yields (32). Since the series $\sum_k\|u^{k+1} - u^k\|^2$ of non-negative terms converges, the limit $\lim_{k\to\infty}\|u^{k+1} - u^k\| = 0$ follows.    □

4.2. Global Convergence

Lemma 6.
If Assumptions 1(ii) and (iii) are satisfied, then there exists a positive constant $\zeta$ such that
$$\text{either}\quad \|D_i^\alpha u^k\| = 0 \quad\text{or}\quad \|D_i^\alpha u^k\| > \zeta, \quad \forall k \ge K,\ i\in J.$$
Proof. 
According to Lemma 3, it suffices to show that there exists a constant $\zeta > 0$ such that
$$\|D_i^\alpha u^k\| > \zeta, \quad \forall i\in\bar\Omega_1.$$
From Equation (27), for any $v_w\in\mathbb{R}^{N_w}$ we have
$$\begin{aligned}0 &= \left\langle\sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|} + \lambda(KE_w)^T(Ku^{k+1} - f) + \rho E_w^T(u^{k+1} - u^k),\ v_w\right\rangle\\ &= \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\left\langle\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|},\ D_i^\alpha E_w v_w\right\rangle + \rho\langle u^{k+1} - u^k,\ E_w v_w\rangle + \lambda\langle Ku^{k+1} - f,\ KE_w v_w\rangle.\end{aligned}$$
Write $v = E_w v_w\in\mathbb{R}^N$. From (21), it can be obtained that $D_i^\alpha v = 0$, $\forall i\in\bar\Omega_0$. Define $\ker(\bar\Omega_0) := \{u\in\mathbb{R}^N : D_i^\alpha u = 0,\ \forall i\in\bar\Omega_0\}$. The above equation can be equivalently formulated as: for any $v\in\ker(\bar\Omega_0)$,
$$\left|\sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\left\langle\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|},\ D_i^\alpha v\right\rangle\right| = \left|\rho\langle u^k - u^{k+1},\ v\rangle + \lambda\langle f - Ku^{k+1},\ Kv\rangle\right| \le \rho\|u^k - u^{k+1}\|\cdot\|v\| + \lambda\left(\|f\| + \|K\|\cdot\|u^{k+1}\|\right)\|K\|\cdot\|v\|.$$
According to Lemma 5, the sequence $\{u^k\}$ is bounded and $\|u^k - u^{k+1}\| \to 0$ as $k\to\infty$; hence there exists a positive $\delta$, independent of $k$, such that
$$\left|\sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\left\langle\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|},\ D_i^\alpha v\right\rangle\right| \le \delta\|v\|.$$
Let $\Xi^{k+1} := \{\mu : \mu = \|D_i^\alpha u^{k+1}\|,\ i\in\bar\Omega_1\}$. Then $l := |\Xi^{k+1}| \le |\bar\Omega_1| \le N$; write $\Xi^{k+1} = \{\mu_1, \mu_2, \ldots, \mu_l\}$, where $\mu_1 > \mu_2 > \cdots > \mu_l$. Define $J_i = \{j\in J : \|D_j^\alpha u^{k+1}\| = \mu_i\}\subseteq\bar\Omega_1$. It suffices to find lower bounds for $\mu_1, \mu_2, \ldots, \mu_l$.
Next, we prove the lemma by mathematical induction. First, we establish a lower bound for $\mu_1$. Choose $v^1\in\mathbb{R}^N$ such that
$$D_i^\alpha v^1 = \frac{1}{\mu_1}D_i^\alpha u^{k+1}, \quad \forall i\in J.$$
As shown in the previous part of the proof, $v^1\in\ker(\bar\Omega_0)$, since the image corresponding to $u^{k+1}/\mu_1$ satisfies (34), i.e., (34) is conservative. Additionally, $\frac{1}{\mu_1}|(D_i^\alpha)_x u^{k+1}| \le 1$ and $\frac{1}{\mu_1}|(D_i^\alpha)_y u^{k+1}| \le 1$. Let $v_1^1 = 0$, so that $v^1$ is determined. According to ([16], Lemma 6), $|v_i^1| < N$, $\forall i\in J$, and
$$\|v^1\|^2 < \gamma := \frac{N(N+1)(2N+1)}{6}$$
hold. On the other hand, it follows from (34) that
$$\left\langle D_i^\alpha u^{k+1},\ D_i^\alpha v^1\right\rangle = \frac{1}{\mu_1}\|D_i^\alpha u^{k+1}\|^2.$$
From inequality (33), it can be obtained that
$$\sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\mu_1} \le \delta\|v^1\|.$$
Based on Assumptions 1(iii) and (iv), the following inequality holds:
$$\delta\gamma > \delta\|v^1\| \ge \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\mu_1} \ge \sum_{i\in J_1}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\mu_1} = \sum_{i\in J_1}\phi'(\|D_i^\alpha u^k\|) \ge \phi'(\|D_i^\alpha u^k\|), \quad \forall i\in J_1.$$
Since $\phi'(0^+) = +\infty$, the constant
$$\bar\zeta_1 = \inf\{t > 0 : \phi'(t) = \delta\gamma\}$$
is finite and independent of $k$. Since $\phi$ is concave (see Assumption 1(ii)), $\phi'$ is non-increasing, and hence $\|D_i^\alpha u^k\| > \bar\zeta_1$, $\forall i\in J_1$. In addition, $\|D_i^\alpha u^k - D_i^\alpha u^{k+1}\| \le \|D_i^\alpha\|\cdot\|u^k - u^{k+1}\| \to 0$. Therefore, there exists $K_1 > K$ such that when $k > K_1$, we have $\mu_1 > \zeta_1 := \bar\zeta_1/2$.
Assume that for some $s\in\{2,\ldots,l\}$, there exist $K_{s-1} > K$, $\bar\zeta_{s-1} > 0$, and $\zeta_{s-1} > 0$ such that when $k > K_{s-1}$,
$$\|D_i^\alpha u^k\| > \bar\zeta_{s-1},\ \forall i\in\bar J := \bigcup_{r=1}^{s-1}J_r; \qquad \mu_r > \zeta_{s-1},\ \forall r\in\{1,\ldots,s-1\}.$$
The task now is to establish a lower bound for $\mu_s$.
Choose $v^s\in\mathbb{R}^N$ to satisfy
$$D_i^\alpha v^s = \frac{1}{\mu_s}D_i^\alpha u^{k+1}, \quad \forall i\in J\setminus\bar J.$$
As before, $v^s\in\ker(\bar\Omega_0)$, since the image corresponding to $u^{k+1}/\mu_s$ satisfies (36), so that (36) is conservative. In addition, $\frac{1}{\mu_s}|(D_i^\alpha)_x u^{k+1}| \le 1$ and $\frac{1}{\mu_s}|(D_i^\alpha)_y u^{k+1}| \le 1$, $\forall i\in J\setminus\bar J$. Let $\bar J_1,\ldots,\bar J_h$ be the connected components of $J\setminus\bar J$. Construct $v^s$ as follows:
$$v_{t_i}^s = 0\ \text{for a given index}\ t_i\in B_1(\bar J_i),\ i\in\{1,\ldots,h\}; \qquad v_i^s = 0,\ \forall i\in J\setminus B_1(J\setminus\bar J).$$
According to ([16], Lemma 6), it can be obtained that $\|v^s\|^2 < \gamma$ and $|v_i^s| < N$, $\forall i\in J$. Thus, for any $i\in J$, $|(D_i^\alpha)_x v^s| < 2N$, $|(D_i^\alpha)_y v^s| < 2N$, and
$$\|D_i^\alpha v^s\| = \sqrt{|(D_i^\alpha)_x v^s|^2 + |(D_i^\alpha)_y v^s|^2} < 2\sqrt{2}N.$$
On the other hand, from (36) it can be obtained that
$$\left\langle D_i^\alpha u^{k+1},\ D_i^\alpha v^s\right\rangle = \frac{1}{\mu_s}\|D_i^\alpha u^{k+1}\|^2, \quad \forall i\in J\setminus\bar J.$$
From inequality (33), it can be concluded that
$$\begin{aligned}\delta\gamma > \delta\|v^s\| &\ge \left|\sum_{i\in\bar\Omega_1\setminus\bar J}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\mu_s} + \sum_{i\in\bar J}\phi'(\|D_i^\alpha u^k\|)\left\langle\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|},\ D_i^\alpha v^s\right\rangle\right|\\ &\ge \sum_{i\in J_s}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\mu_s} - \sum_{i\in\bar J}\phi'(\|D_i^\alpha u^k\|)\frac{\|D_i^\alpha u^{k+1}\|}{\|D_i^\alpha u^{k+1}\|}\|D_i^\alpha v^s\|\\ &> \sum_{i\in J_s}\phi'(\|D_i^\alpha u^k\|) - 2\sqrt{2}N\sum_{i\in\bar J}\phi'(\|D_i^\alpha u^k\|).\end{aligned}$$
The last inequality in (38) follows from (37). From inequality (35), it can be concluded that
$$\sum_{i\in\bar J}\phi'(\|D_i^\alpha u^k\|) \le \sum_{i\in\bar J}\phi'(\bar\zeta_{s-1}) \le N\phi'(\bar\zeta_{s-1}).$$
From the previous statement, it can be inferred that there exists a positive number $M$ such that $\phi'(\|D_i^\alpha u^k\|) < M$. Combined with inequality (38), we have
$$\delta\gamma + 2\sqrt{2}N^2\phi'(\bar\zeta_{s-1}) > \sum_{i\in J_s}\phi'(\|D_i^\alpha u^k\|) \ge \phi'(\|D_i^\alpha u^k\|), \quad \forall i\in J_s.$$
Since $\phi'(0^+) = +\infty$, the constant
$$\tilde\zeta_s = \inf\left\{t > 0 : \phi'(t) = \delta\gamma + 2\sqrt{2}N^2\phi'(\bar\zeta_{s-1})\right\}$$
is well-defined and finite. Then, it is easy to see that $\|D_i^\alpha u^k\| > \tilde\zeta_s$, $\forall i\in J_s$. Similarly, there exists $K_s > K_{s-1}$ such that when $k > K_s$, $\mu_s > \tilde\zeta_s/2$ holds. Define $\bar\zeta_s := \min\{\tilde\zeta_s,\ \bar\zeta_{s-1}\}$ and $\zeta_s := \min\{\tilde\zeta_s/2,\ \zeta_{s-1}\}$. Combining (35), it can be obtained that
$$\|D_i^\alpha u^k\| > \bar\zeta_s,\ \forall i\in\bigcup_{r=1}^{s}J_r; \qquad \mu_r > \zeta_s,\ \forall r\in\{1,\ldots,s\}.$$
Finally, there exists $K_N > K$ such that when $k > K_N$, $\|D_i^\alpha u^k\| > \zeta := \bar\zeta_N$, $\forall i\in\Omega_1^K$. This completes the proof.    □
Lemma 7.
For $k > K$, $\bar F_w(u_w)$ is differentiable at $u_w^k$ and there exists a constant $\Delta > 0$ such that
$$\|\nabla\bar F_w(u_w^k)\| \le \Delta\|u_w^k - u_w^{k-1}\|.$$
Proof. 
When $k > K$, Lemma 3 gives $\|D_i^\alpha E_w u_w^k\| = \|D_i^\alpha u^k\| \ne 0$, $\forall i\in\bar\Omega_1$. Thus, $\bar F_w(u_w)$ is differentiable at $u_w^k$ and
$$\nabla\bar F_w(u_w^k) = \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha E_w u_w^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha E_w u_w^k}{\|D_i^\alpha E_w u_w^k\|} + \lambda(KE_w)^T(KE_w u_w^k - f) = \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^k}{\|D_i^\alpha u^k\|} + \lambda(KE_w)^T(Ku^k - f).$$
Additionally, we have
$$\bar Q_k^w(u_w) := \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)\,\|D_i^\alpha E_w u_w\| + \frac{\lambda}{2}\|KE_w u_w - f\|^2 + \frac{\rho}{2}\|E_w u_w - u^k\|^2.$$
Combining the above two equations, we can easily deduce
$$0 = \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^k\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^{k+1}}{\|D_i^\alpha u^{k+1}\|} + \rho E_w^T(u^{k+1} - u^k) + \lambda(KE_w)^T(Ku^{k+1} - f),$$
where u w k + 1 is the solution of Q ¯ k w . Therefore,
$$0 = \sum_{i\in\bar\Omega_1}\phi'(\|D_i^\alpha u^{k-1}\|)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^k}{\|D_i^\alpha u^k\|} + \rho E_w^T(u^k - u^{k-1}) + \lambda(KE_w)^T(Ku^k - f).$$
Combining the two equations, we can obtain
$$\nabla\bar F_w(u_w^k) = \sum_{i\in\bar\Omega_1}\left(\phi'(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^{k-1}\|)\right)(D_i^\alpha E_w)^T\frac{D_i^\alpha u^k}{\|D_i^\alpha u^k\|} + \rho E_w^T(u^{k-1} - u^k).$$
Hence,
$$\|\nabla\bar F_w(u_w^k)\| \le \sum_{i\in\bar\Omega_1}\left|\phi'(\|D_i^\alpha u^k\|) - \phi'(\|D_i^\alpha u^{k-1}\|)\right|\,\|D_i^\alpha E_w\|\,\frac{\|D_i^\alpha u^k\|}{\|D_i^\alpha u^k\|} + \rho\|E_w\|\,\|u^{k-1} - u^k\|.$$
Combining Assumption 1(iv) (with Lipschitz constant $L_\theta$) and the monotonicity of $F(u^k)$ in Lemma 4, we can deduce
$$\left|\phi'(\|D_i^\alpha u^{k+1}\|) - \phi'(\|D_i^\alpha u^k\|)\right| \le L_\theta\left|\,\|D_i^\alpha u^{k+1}\| - \|D_i^\alpha u^k\|\,\right| \le L_\theta\|D_i^\alpha\|\,\|u^{k+1} - u^k\|.$$
Therefore,
$$\|\nabla\bar F_w(u_w^k)\| \le \sum_{i\in\bar\Omega_1}L_\theta\|D_i^\alpha\|^2\|E_w\|\,\|u^k - u^{k-1}\| + \rho\|E_w\|\,\|u^k - u^{k-1}\| \le \|E_w\|^2\left(L_\theta\sum_{i\in J}\|D_i^\alpha\|^2 + \rho\right)\|u_w^k - u_w^{k-1}\| = \Delta\|u_w^k - u_w^{k-1}\|,$$
where $\Delta := \|E_w\|^2\left(L_\theta\sum_{i\in J}\|D_i^\alpha\|^2 + \rho\right)$. This completes the proof.    □
Below, we present the main theorem of the paper.
Theorem 1.
(Global convergence) Suppose Assumption 1 holds and $\phi(t)$ is definable in an o-minimal structure [16]. If $\{u^k\}$ is a sequence generated by Algorithm 2, then $\{u^k\}$ converges to a first-order optimal point $u^*$ of $\bar F(u)$ (see (19) for details).
Proof. 
According to the definitions of the Kurdyka–Łojasiewicz function and o-minimal structures in Section 3.3 of [16], when $\phi(t)$ is definable in an o-minimal structure, $\bar F_w(u_w)$ is a Kurdyka–Łojasiewicz function [16]. By Lemma 5, $\{u^k\}$ is bounded, and since $\|u_w\| \le \|u\| \le \|E_w\|\,\|u_w\|$, $\{u_w^k\}$ is also bounded. Since $\bar F_w(u_w)$ is continuous, there exist $\tilde u_w$ and a subsequence $\{u_w^{k_t}\}$ satisfying
$$u_w^{k_t} \to \tilde u_w \quad\text{and}\quad \bar F_w(u_w^{k_t}) \to \bar F_w(\tilde u_w), \quad\text{as}\ t\to\infty.$$
Combining (28), (32), and (39), it can be inferred from ([21], Theorem 2.9) that the sequence $\{u_w^k\}$ converges to $u_w^*$, a critical point of the functional $\bar F_w$. Therefore, $\{u^k\}$ converges to $u^* = E_w u_w^*$. Since $(\bar F_w)$ results from eliminating the linear constraints of $(\bar F)$, $u^*$ satisfies the first-order optimality condition of $(\bar F)$.    □

5. Algorithm Implementation

In this section, we describe the implementation details for the optimization problem (17). This problem is convex with linear constraints; therefore, we use ADMM to solve it. To avoid notational confusion, we introduce a new variable $v$ to replace $u$. Let
$$w_i^k = \phi'(\|D_i^\alpha u^k\|).$$
In addition, we introduce the variable $z = (z_i)$, where $z_i\in\mathbb{R}^2$, $i\in\Omega_1^k$. Then $(Q_k)$ in (17) can be rewritten as
$$\min_{v,z}\ \sum_{i\in\Omega_1^k}\omega_i^k\|z_i\| + \frac{\rho}{2}\|v - u^k\|_2^2 + \frac{\lambda}{2}\|Kv - f\|^2 \quad \text{s.t. } z_i = D_i^\alpha v,\ i\in\Omega_1^k;\quad D_i^\alpha v = 0,\ i\in\Omega_0^k.$$
The augmented Lagrangian corresponding to the optimization problem (41) is
$$\mathcal{L}(v,z;\mu) = \sum_{i\in\Omega_1^k}\omega_i^k\|z_i\| + \frac{\rho}{2}\|v - u^k\|_2^2 + \frac{\lambda}{2}\|Kv - f\|^2 + \frac{\beta_1}{2}\sum_{i\in\Omega_1^k}\|z_i - D_i^\alpha v\|^2 + \sum_{i\in\Omega_1^k}\langle\mu_i,\ D_i^\alpha v - z_i\rangle + \frac{\beta_2}{2}\sum_{i\in\Omega_0^k}\|D_i^\alpha v\|^2 + \sum_{i\in\Omega_0^k}\langle\mu_i,\ D_i^\alpha v\rangle,$$
where $\mu_i\in\mathbb{R}^2$, $\forall i\in J$, are the Lagrange multipliers, $\beta_1, \beta_2$ are two positive penalty parameters, and $\langle\cdot,\cdot\rangle$ denotes the inner product of two vectors. Applying ADMM yields the following Algorithm 3.
Algorithm 3 ADMM for solving ( Q k ) in (17)
Require: 
$u^k$, $w_i^k$, $\Omega_0^k$, $\Omega_1^k$, $\alpha$, $p$, $\sigma$, $k \ge 1$, $\rho$, $\lambda$, $\beta_1$, $\beta_2$. Initialize: $v^0 = u^k$, $\mu^0 = 0$, and $t = 0$.
  • For each $t \ge 0$, compute
$$z^{t+1} \in \arg\min_z \mathcal{L}(v^t, z;\mu^t); \qquad v^{t+1} \in \arg\min_v \mathcal{L}(v, z^{t+1};\mu^t); \qquad \mu_i^{t+1} = \begin{cases}\mu_i^t + \beta_2 D_i^\alpha v^{t+1}, & \text{if } i\in\Omega_0^k,\\ \mu_i^t + \beta_1\left(D_i^\alpha v^{t+1} - z_i^{t+1}\right), & \text{if } i\in\Omega_1^k.\end{cases}$$
  • If the termination condition is not met, set $t = t + 1$ and repeat the above iteration. Otherwise, output $u^{k+1} = v^{t+1}$.
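A compact sketch of the ADMM iteration in Algorithm 3 is given below, assuming $\beta_1 = \beta_2 = \beta$ as adopted in the next paragraph. `update_z` and `update_v` are placeholders for the z- and v-subproblem solvers derived in the remainder of this section (they are closures capturing the fixed data f, K, $u^k$, $\lambda$, $\rho$, $w_i^k$), and `Dalpha` applies the fractional gradient; all names are illustrative.

```python
# Schematic ADMM loop for (Q_k); update_z, update_v, Dalpha are placeholder callables.
import numpy as np

def admm_Qk(u_k, Dalpha, update_z, update_v, beta, omega1, max_iter=500, tol=1e-4):
    v = u_k.copy()                             # v^0 = u^k
    mu = np.zeros_like(Dalpha(v))              # mu^0 = 0, one 2-vector per pixel
    for t in range(max_iter):
        z = update_z(v, mu)                    # 2D shrinkage on Omega_1^k
        v_new = update_v(z, mu)                # FFT-based least-squares solve
        Dv = Dalpha(v_new)
        resid = Dv.copy()
        resid[omega1] -= z[omega1]             # D_i^alpha v - z_i on Omega_1^k; D_i^alpha v on Omega_0^k
        mu = mu + beta * resid                 # multiplier update
        if np.linalg.norm(v_new - v) <= tol * np.linalg.norm(v):
            return v_new
        v = v_new
    return v
```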
For the convenience of calculation, we assume that β 1 = β 2 = β . The z- and v-subproblems that constitute part of Algorithm 3 are evaluated through the processes outlined below.
1. The $z$-subproblem in (42) is equivalent to the following optimization problem:
$$z^{t+1} \in \arg\min_z\ \sum_{i\in\Omega_1^k}\left(\omega_i^k\|z_i\| + \frac{\beta}{2}\left\|z_i - \left(D_i^\alpha v^t + \frac{1}{\beta}\mu_i^t\right)\right\|^2\right).$$
The solution of this problem can be obtained by the two-dimensional shrinkage operator:
$$z_i^{t+1} = \max\left\{\left\|D_i^\alpha v^t + \frac{1}{\beta}\mu_i^t\right\| - \frac{\omega_i^k}{\beta},\ 0\right\}\frac{D_i^\alpha v^t + \frac{1}{\beta}\mu_i^t}{\left\|D_i^\alpha v^t + \frac{1}{\beta}\mu_i^t\right\|}, \quad i\in\Omega_1^k,$$
with the convention that 0 · ( 0 / 0 ) is considered equivalent to 0.
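The two-dimensional shrinkage step (44) can be implemented as in the following illustrative sketch. Here `q` collects the vectors $D_i^\alpha v^t + \mu_i^t/\beta$ for $i\in\Omega_1^k$ as an (m, 2) array, and the guard on zero norms encodes the convention $0\cdot(0/0) = 0$.

```python
# Sketch of the 2D shrinkage operator for the z-subproblem.
import numpy as np

def shrink2d(q, w, beta):
    norm_q = np.linalg.norm(q, axis=1)                  # ||q_i||_2
    scale = np.maximum(norm_q - w / beta, 0.0)          # max{||q_i|| - w_i/beta, 0}
    out = np.zeros_like(q)
    nz = norm_q > 0
    out[nz] = (scale[nz] / norm_q[nz])[:, None] * q[nz] # convention 0*(0/0) = 0 when norm is 0
    return out
```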
2. The $v$-subproblem in (42) is difficult to solve directly because it lacks a convenient structure; however, after a simple reformulation, the FFT can still be used. We introduce $\tilde z = (z_j)$ with $z_j = 0\in\mathbb{R}^2$, $\forall j\in\Omega_0^k$, and define $\bar z^{t+1} = (z^{t+1}, \tilde z^{t+1})$. The $v$-subproblem in (42) then reduces to the following form:
$$v^{t+1} \in \arg\min_v\left\{\frac{\rho}{2}\|v - u^k\|^2 + \frac{\lambda}{2}\|Kv - f\|^2 + \frac{\beta}{2}\sum_{i\in J}\|D_i^\alpha v\|^2 - \beta\sum_{i\in J}\langle\bar z_i^{t+1},\ D_i^\alpha v\rangle + \sum_{i\in J}\langle\mu_i^t,\ D_i^\alpha v\rangle\right\}.$$
The above problem is a least squares problem, and the corresponding normal equation is
$$\left(\beta(D^\alpha)^T D^\alpha + \lambda K^T K + \rho I\right)v = (D^\alpha)^T\left(\beta\bar z^{t+1} - \mu^t\right) + \lambda K^T f + \rho u^k,$$
where
$$D^\alpha = \begin{bmatrix}(D^\alpha)_x\\ (D^\alpha)_y\end{bmatrix}\in\mathbb{R}^{2N\times N}, \quad\text{with}\quad (D^\alpha)_x = \begin{bmatrix}(D_1^\alpha)_x\\ \vdots\\ (D_N^\alpha)_x\end{bmatrix} \quad\text{and}\quad (D^\alpha)_y = \begin{bmatrix}(D_1^\alpha)_y\\ \vdots\\ (D_N^\alpha)_y\end{bmatrix}.$$
Here, $D^\alpha$ is the $\alpha$-order fractional derivative operator, and $(D^\alpha)_x$ and $(D^\alpha)_y$ are its components along the $x$- and $y$-directions, respectively. Since both $K^TK$ and $(D^\alpha)^TD^\alpha$ are block-circulant matrices with circulant blocks, they can be diagonalized by the two-dimensional discrete Fourier transform. Therefore, Equation (46) can be solved via the FFT. Readers can refer to reference [22] for detailed calculations.
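A minimal sketch of solving the normal Equation (46) by FFT is given below, under the assumption of periodic boundary conditions so that K and $D^\alpha$ are block circulant. The arguments `ker_K`, `ker_dx`, and `ker_dy` denote the corresponding convolution kernels (placed with the wrap-around convention) and are illustrative names; `rhs` is the right-hand side of (46) assembled in the spatial domain.

```python
# Sketch: FFT-based solve of (beta*(D^a)^T D^a + lam*K^T K + rho*I) v = rhs.
import numpy as np

def fft_solve(rhs, ker_K, ker_dx, ker_dy, beta, lam, rho, shape):
    FK = np.fft.fft2(ker_K, s=shape)                 # eigenvalues of K
    FDx = np.fft.fft2(ker_dx, s=shape)               # eigenvalues of (D^alpha)_x
    FDy = np.fft.fft2(ker_dy, s=shape)               # eigenvalues of (D^alpha)_y
    denom = (beta * (np.abs(FDx)**2 + np.abs(FDy)**2)
             + lam * np.abs(FK)**2 + rho)            # diagonalized left-hand side
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```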

6. Numerical Experiments

In this section, numerical examples will be provided to demonstrate the effectiveness of the proposed model and algorithm. All experiments were conducted on a Lenovo laptop equipped with an Intel 2.5 GHz CPU and 4 GB of memory.

6.1. Experimental Settings

We selected eight images, including “Lena”, “Chimpanzee”, “Cameraman”, “Parrot”, “Twocircle”, “Logo”, “Aerial”, and “Butterfly”, as the test images (see Figure 1). In the experiment, six deblurring scenarios, which serve as benchmarks in many publications, were considered. Table 1 summarizes the point spread function (PSF) for each scenario. The results were compared with several existing classic image restoration methods: the FastTVMM model [2], the WFBM model [1], the NNTGV model [3], the QDSV model [4], TVp,q(x) [23], and D-TGV [24].
Signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) are commonly used to measure the quality of restored images. Here, we terminate the outer loop of ISSAPL by checking whether the stopping criterion
$$\frac{\|u^{k+1} - u^k\|_2}{\|u^{k+1}\|_2} \le \epsilon_{\mathrm{outer}}$$
has been met, and the termination of the nested ADMM loop is determined by checking whether
$$\frac{\|v^{t+1} - v^t\|_2}{\|v^t\|_2} \le \epsilon_{\mathrm{inner}},$$
where $\epsilon_{\mathrm{inner}}$ and $\epsilon_{\mathrm{outer}}$ are two given positive constants. In this paper, for the proposed method, we set $\beta = 10$, $\rho = 10^{-10}$, $p = 0.5$, and $\alpha = 1.4$. On the other hand, both the maximum inner and maximum outer iteration numbers are set to 500. For all numerical methods, we set $\lambda = 5\times 10^{4}$, $\epsilon_{\mathrm{inner}} = 10^{-4}$, and $\epsilon_{\mathrm{outer}} = 10^{-5}$.
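The two stopping rules above translate directly into code; the helpers below are an illustrative transcription, with the tolerances quoted in this subsection used as defaults.

```python
# Relative-change stopping criteria for the outer (ISSAPL) and inner (ADMM) loops.
import numpy as np

def outer_converged(u_new, u_old, eps_outer=1e-5):
    return np.linalg.norm(u_new - u_old) <= eps_outer * np.linalg.norm(u_new)

def inner_converged(v_new, v_old, eps_inner=1e-4):
    return np.linalg.norm(v_new - v_old) <= eps_inner * np.linalg.norm(v_old)
```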

6.2. Experimental Results

In Figure 2, Figure 3, Figure 4 and Figure 5, we compare the proposed method with two classic competing methods, FastTVMM [2] and WFBM [1]. Figure 2 and Figure 3 display the recovery of the Lena image degraded by Scenario I and Gaussian noise with zero mean and a standard deviation of 10^{-3}. In Figure 4 and Figure 5, the restoration concerns the Chimpanzee image degraded by Scenario II and Gaussian noise with zero mean and a standard deviation of 10^{-3}. The competing methods tend to oversmooth fine details, for example, the eyes and eyelashes of Lena in Figure 2 and Figure 3, or the hair on the face of the chimpanzee in Figure 4 and Figure 5; readers can observe Lena’s eye-corner area in Figure 2 and Figure 3 and the circled face area in Figure 4 and Figure 5.
FastTVMM [2] is widely recognized as a traditional and effective method for addressing image blurring issues. It performs admirably in the restoration of smooth regions, achieving high precision, but its performance in preserving image details is not satisfactory. WFBM [1] is a newer image deblurring model based on wavelet frames, which explicitly treats images as piecewise smooth functions. Contrasting with these existing methods, our approach has a notable advantage in precisely restoring subtle edge details, such as the feather texture on the hat and the fine strands of eyelashes, as evidenced in Figure 2 and Figure 3. To put it differently, our methodology avoids the common pitfall of producing artifacts in areas containing edges, setting it apart from competing approaches.
For the Lena images (see Figure 2 and Figure 3), the proposed method is significantly superior to the other two methods. For the Chimpanzee images (see Figure 4 and Figure 5), the subjective quality of the deblurring results obtained by the proposed method is significantly better than that of the competing schemes. The fundamental reason for this is that the integer-order derivative operator is a local operator that only considers the relationship between a pixel and its neighboring pixels, while ignoring the correlation of global information in the image. The texture features of images often exhibit obvious non-locality and self-similarity. Therefore, as a non-local extension of integer-order calculus, the proposed fractional-order model has a good texture-preservation property. Precisely, the non-local feature of the fractional-order derivative in the model is instrumental in safeguarding texture integrity.
Table 2 provides the comparison of the corresponding PSNR and SSIM values for Figure 2, Figure 3, Figure 4 and Figure 5. As evidenced by the data in Table 2, the proposed methodology shows strong capabilities in terms of both PSNR and SSIM. From the last row of Table 2, it can be seen that the proposed method has the highest average values in both metrics. Specifically, compared to the FastTVMM method, the proposed method improves the PSNR by 0.9733 (i.e., 32.3494 minus 31.3761) and the SSIM by 0.0111 (i.e., 0.8752 minus 0.8641).
The effectiveness of the proposed model in addressing noise and blur image restoration is evaluated and compared in Figure 6 and Figure 7. Figure 6a is damaged by Scenario III and zero-mean Gaussian noise with a standard deviation of 2. Figure 7a is damaged by Scenario IV and zero-mean Gaussian noise with a standard deviation of 2. As shown in Figure 6b and Figure 7b, despite the impressive denoising prowess displayed by the NNTGV [3] technique, a substantial amount of detail is sacrificed, prominently seen in the absence of intricate features, including the house located farthest to the right in the visual frame, and the restoration results show an oversmooth phenomenon (such as the lawn below the image being almost smoothed out). Compared with the NNTGV [3] method, although the QDSV [4] method has made significant improvements in detail preservation, like the NNTGV [3] method, the QDSV [4] method still lacks satisfactory deblurring results. Readers wishing for a more detailed analysis are invited to closely inspect the representation of the grassland as shown in Figure 6c and Figure 7c. In contrast, although the proposed method generates some residual noise in the results, it has clearer visual results. For more details, readers can observe the grassland in Figure 6d and Figure 7d. The reason for the presence of noise in Figure 6d and Figure 7d is that the proposed model requires both denoising and deblurring, and the blurring operator is ill-conditioned. In general, when dealing with simultaneous deblurring and denoising tasks, the images requiring restoration are commonly impaired by the presence of Gaussian noise with a mean value of zero and a standard deviation set at 0.001. Therefore, the Gaussian noise interfering with Figure 6a and Figure 7a is very large. It is for this reason that the results illustrated in Figure 6d and Figure 7d exhibit a notable presence of noise.
Overall, the NNTGV [3] method has impressive denoising performance but suffers from a severe loss of detail. Compared with NNTGV [3], the QDSV [4] approach preserves intricate details better; however, much like NNTGV [3], QDSV [4] still falls short of a satisfactory deblurring effect. In comparison, our method achieves a compromise between denoising and detail preservation: the restored images retain rich details at the cost of a very small amount of residual noise.
Figure 8 shows the convergence behavior of the iterative support shrinking algorithm. Specifically, Figure 8a shows the history of the energy value $F(u^k)$ versus the outer iteration number, and Figure 8b shows the history of $\|u^{k+1} - u^k\|$ versus the outer iteration number. We can clearly observe that as the outer iteration number $k$ increases, $F(u^k)$ decreases and converges, which validates the theoretical result in Lemma 4. Meanwhile, $\|u^{k+1} - u^k\|$ converging to 0 corroborates the theoretical finding in Lemma 5.
To demonstrate that the proposed model restores well images contaminated by different blurring scenarios, we present the restoration results of the proposed method for images contaminated by Scenario V and Scenario VI in Figure 9. Based on the results of restoring images contaminated by different scenarios, we found that the proposed method applies to a wide range of images and blur kernels. In fact, the proposed method has extensive utility across various fields in the real world. Specifically, the proposed method performs well not only in natural image restoration (see Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7), but also in restoring piecewise constant images (see Figure 9). In Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17, we show the restoration results of our method on two test images and six blurring kernels. These figures further verify that our method is suitable for a wide range of images and blur kernels. This is because the exponential regularization term in the model allows our scheme to be completely independent of any assumptions about the spatial distribution of the kernel. However, the proposed method still faces some challenges; namely, its effectiveness is not ideal for high noise levels. The reason lies in the dual requirement of the proposed model to address both noise and blur, with the blurring operator being inherently ill-conditioned. Moreover, we also provide the values of SNR and RMSE in the annotations of Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17. Most of our results perform well in terms of SNR and RMSE, which further validates the advantages of the proposed method.
In addition, we also compared our method with methods proposed in the past two years. The experimental results are displayed in Figure 18 and Figure 19. From Figure 18c and Figure 19c and the corresponding enlarged images, it can be clearly seen that the TVp,q(x) method produces results with a significant loss of texture details. The deblurring results obtained using the D-TGV method are visually too smooth and accompanied by obvious artifacts; the specific results can be seen in Figure 18d and Figure 19d. In contrast, the proposed method achieves higher visual quality, including preserving details, preventing oversmoothing, and avoiding artifacts.
To better demonstrate the effectiveness of the proposed method, we provide computational complexity metrics. The specific values of each metric are recorded in Table 3. From Table 3, we can clearly see that the single iteration time of the three images with a size of 512 × 512 (i.e., Figure 10a–Figure 12a) exceeds 10 s, the single iteration time of the three images with a size of 256 × 256 (i.e., Figure 13a, Figure 14a, and Figure 17a) exceeds 1 s, and the single iteration time of the two smaller images (i.e., Figure 15a and Figure 16a) is less than 1 s. Similarly, the total iteration time and total number of iterations follow the same pattern. In addition, we provide the time required to obtain restoration results using the proposed method and the other methods; the reported data are the times required to restore Figure 10a–Figure 17a, and the specific values are recorded in Table 4 below. From Table 4, we clearly find that, except for the FastTVMM method, the proposed method requires the least amount of time to restore blurry images. The FastTVMM method requires less time because it uses an alternating iteration algorithm; however, its restoration results exhibit oversmoothing, resulting in the loss of texture details, which is not the result we want to obtain.
Finally, we also present the deblurring results obtained using the proposed method with different fractional-orders in Figure 20. From Figure 20, we can clearly see that the proposed method maintains different levels of detail in deblurring images using different fractional-orders. The reason for this phenomenon is determined by the internal structure of the image itself.

6.3. Discussion

In this paper, we propose a novel fractional-order non-convex TVα,p model for image deblurring. The fractional-order derivatives we introduce have non-locality and self-similarity, which can effectively model the texture information of images; therefore, the method produces results that preserve image texture. In contrast, integer-order derivative operators are local operators that only consider the relationship between a pixel and its neighboring pixels, while ignoring the correlation of global information in the image. Therefore, methods modeled on integer-order gradients generally produce oversmoothed results and fail to restore texture and other details in the image. Compared with several classic methods and several recent methods, the experimental results verify the effectiveness of the proposed method.
However, there are also some drawbacks. Firstly, the proposed model cannot handle blurry images with too much noise well. If the blurred image contains too much noise, the deblurring results of the proposed model will also contain significant noise. This is because deblurring is essentially an inverse problem that involves solving an ill-conditioned equation. This inverse problem is numerically unstable, meaning that even small errors can be significantly amplified. In addition, in the presence of a large amount of noise, deblurring operations may amplify the noise, resulting in noticeable noise traces in the restored image. To demonstrate more intuitively the impact of different noise levels on the deblurring performance of the proposed method, we conducted a set of experiments in Figure 21. From Figure 21, we can clearly see that when the noise level $\sigma \ge 4\times 10^{-3}$, the proposed method produces unsatisfactory deblurring results. Secondly, the proposed model is tested on a specific type of dataset (i.e., natural scenery), which may limit its direct application in other fields (e.g., medical imaging, satellite imagery). Moreover, for images under extreme conditions (such as extremely low light or high dynamic range scenes), the recovery results generated by the proposed model are not ideal. To demonstrate more intuitively the impact of extreme conditions (taking darkness levels as an example) on the deblurring effect of the proposed method, we conducted a set of experiments in Figure 22. From Figure 22, we can clearly see that when the darkness level is 3 or higher, the proposed method produces unsatisfactory deblurring results.
It is worth mentioning that the proposed model also has a wide range of applications, including virtual reality and augmented reality (VR/AR), nature conservation and documentation, artistic and commercial uses, and wildlife photography. High-quality, deblurred nature images can be used to create more immersive VR/AR experiences, such as virtual tours of natural landscapes, which can be beneficial for education and tourism. By providing clearer and more detailed images, our model can support the documentation of biodiversity, assist in species identification, and help in creating high-resolution databases for ecological research. Deblurred nature images can also find applications in the creation of high-quality prints, digital art, and commercial photography, enhancing the visual appeal and marketability of nature-related media. The proposed method can improve the clarity of images taken in natural settings, where motion blur is common due to the unpredictable movements of animals. This can aid in wildlife monitoring, research, and conservation efforts.
In our future work, we plan to investigate adaptive fractional-order models. Additionally, we will also study the characteristics of images under extreme conditions (such as extreme low light, high dynamic range scenes) and design reasonable methods to handle different blurs and noise. Moreover, it is essential to develop more efficient algorithms to enhance the computational performance of the model.

7. Conclusions

In this paper, we propose a novel non-convex, fractional-order variational model for image restoration. Furthermore, an iterative support shrinking algorithm is proposed to solve the proposed variational model, and the algorithm’s global convergence is established. The proposed fractional-order variational model can flexibly adjust to handle images contaminated by different blur kernels. Compared with other state-of-the-art methods, the experimental results demonstrate the effectiveness of the proposed method. Future work includes accelerating our method and generalizing it to blind noise removal. Automatically deciding which parameter would be optimal for an input image is also an interesting topic worthy of future study.

Author Contributions

Conceptualization, B.C., X.D. and Y.T.; methodology, B.C. and X.D.; investigation, B.C. and Y.T.; writing—original draft preparation, B.C.; writing—review and editing, X.D. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China (12401559, 12061045), the Jiangxi Provincial Natural Science Foundation (20224ACB211004), and Nanchang Hangkong University Doctoral Research Initiation Fund (EA202307255). The authors thank the referees for their helpful suggestions that result in the improvement of the paper.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Cai, J.; Dong, B.; Shen, Z. Image restoration: A wavelet frame based model for piecewise smooth functions and beyond. Appl. Comput. Harmon. Anal. 2016, 41, 94–138.
  2. Huang, Y.; Ng, M.; Wen, Y. A fast total variation minimization method for image restoration. Multiscale Model. Simul. 2008, 7, 774–795.
  3. Zhang, H.; Tang, L.; Zhuang, F.; Xiang, C.; Li, C. Nonconvex and nonsmooth total generalized variation model for image restoration. Signal Process. 2017, 143, 69–85.
  4. Huang, C.; Ng, M.; Wu, T.; Zeng, T. Quaternion dictionary learning and saturation-value total variation-based color image restoration. IEEE Trans. Multimedia 2021, 24, 3769–3781.
  5. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation-based noise removal algorithms. Physica D 1992, 60, 259–268.
  6. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145.
  7. Hintermuller, H.; Sidky, Y.; Wu, T. Nonconvex TVq-models in image restoration: Analysis and a trust-region regularization-based superlinearly convergent solver. SIAM J. Imaging Sci. 2013, 6, 1358–1415.
  8. Bai, J.; Feng, X. Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502.
  9. Oldham, K.; Spanier, J. The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974; Volume 111, pp. 1–28.
  10. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 4, 2487–2518.
  11. Chen, B.; Guo, Z.; Yao, W.; Ding, X.; Zhang, D. A novel low-light enhancement via fractional-order and low-rank regularized Retinex model. Comp. Appl. Math. 2023, 42, 1–28.
  12. Arbi, A. Robust model predictive control for fractional-order descriptor systems with uncertainty. Fract. Calc. Appl. Anal. 2024, 27, 173–189.
  13. Sabir, Z.; Hashem, A.F.; Arbi, A.; Abdelkawy, M.A. Designing a Bayesian regularization approach to solve the fractional Layla and Majnun system. Mathematics 2023, 11, 3792.
  14. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. Math. Sci. Eng. 1999, 198, 313–335.
  15. Rockafellar, R.; Wets, R. Variational Analysis; Springer: Berlin, Germany, 2009; Volume 317.
  16. Zeng, C.; Jia, R.; Wu, C. An iterative support shrinking algorithm for non-Lipschitz optimization in image restoration. J. Math. Imaging Vis. 2019, 61, 122–139.
  17. Kiwiel, K.C. Convergence of the gradient sampling algorithm for nonsmooth nonconvex optimization. SIAM J. Optim. 2007, 18, 379–388.
  18. Burke, J.V.; Lewis, A.S.; Overton, M.L. A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM J. Optim. 2005, 15, 751–779.
  19. Zeng, C.; Wu, C. On the edge recovery property of nonconvex nonsmooth regularization in image restoration. SIAM J. Numer. Anal. 2018, 56, 1168–1182.
  20. Chen, X.; Xu, F.; Ye, Y. Lower bound theory of nonzero entries in solutions of ℓ2–ℓp minimization. SIAM J. Sci. Comput. 2010, 32, 2832–2852.
  21. Attouch, H.; Bolte, J.; Svaiter, B. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 2013, 137, 91–129.
  22. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 3, 248–272.
  23. Chen, B.; Yao, W.J.; Wu, B.Y.; Ding, X.H. A novel variable exponent non-convex TVp,q(x) model in image restoration. Appl. Math. Lett. 2023, 145, 108791.
  24. Zou, T.; Li, G.; Ma, G.; Zhao, Z.; Li, Z. A derivative fidelity-based total generalized variation method for image restoration. Mathematics 2022, 10, 3942.
Figure 1. Test images: (a) Lena (512 × 512), (b) Chimpanzee (512 × 512), (c) Cameraman (256 × 256), (d) Parrot (256 × 256), (e) Twocircle (64 × 64), (f) Logo (122 × 107), (g) Aerial (512 × 512), (h) Butterfly (256 × 256).
Figure 2. Deblurring of the Lena image, Scenario I. (a) Input. (b) Results by FastTVMM [2]. (c) Results by WFBM [1]. (d) Proposed model.
Figure 3. Deblurring of the Lena image, Scenario II. (a) Input. (b) Results by FastTVMM [2]. (c) Results by WFBM [1]. (d) Proposed model.
Figure 4. Deblurring of the Chimpanzee image, Scenario I. (a) Input. (b) Results by FastTVMM [2]. (c) Results by WFBM [1]. (d) Proposed model.
Figure 5. Deblurring of the Chimpanzee image, Scenario II. (a) Input. (b) Results by FastTVMM [2]. (c) Results by WFBM [1]. (d) Proposed model.
Figure 6. (a–d) Noisy blurred image (Scenario III, corrupted by noise of standard deviation σ = 2), results by NNTGV [3], results by QDSV [4], and results by the proposed model.
Figure 7. (a–d) Noisy blurred image (Scenario IV, corrupted by noise of standard deviation σ = 2), results by NNTGV [3], results by QDSV [4], and results by the proposed model.
Figure 8. (a) History of F(u^k) vs. the outer iteration number; (b) history of ‖u^(k+1) − u^k‖ vs. the outer iteration number.
Figure 9. The first row shows blurred Twocircle and Logo images with different kernels from left to right: (a) Scenario V, (b) Scenario V, (c) Scenario VI, (d) Scenario VI. The four images in the second row (i.e., (e–h)) are the recovered results of the proposed method.
Figure 10. The first row shows blurred Lena images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 11. The first row shows blurred Chimpanzee images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 12. The first row shows blurred Aerial images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 13. The first row shows blurred Cameraman images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 14. The first row shows blurred Parrot images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 15. The first row shows blurred Twocircle images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 16. The first row shows blurred Logo images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 17. The first row shows blurred Butterfly images with different kernels from left to right: (a) Scenario I, (b) Scenario II, (c) Scenario III, (d) Scenario IV, (e) Scenario V, (f) Scenario VI. The noise carried by these blurred images is Gaussian noise with zero mean and standard deviation 10⁻³. The six images in the second row are the recovered results of the proposed method. The SNR and RMSE values of the restored images are displayed in the image annotations.
Figure 18. The five Parrot images in the first row from left to right are: (a) original image, (b) blurred image (Scenario V), (c) deblurred image by TVp,q(x) [23], (d) D-TGV [24], (e) our method. The images in the second row are enlarged views of the corresponding images in the first row.
Figure 19. The five Butterfly images in the first row from left to right are: (a) original image, (b) blurred image (Scenario II), (c) deblurred image by TVp,q(x) [23], (d) D-TGV [24], (e) our method. The images in the second row are enlarged views of the corresponding images in the first row.
Figure 20. (a–d) Blurred Aerial images with Scenario I. Bottom row: the deblurring results generated by the proposed model with different fractional orders.
Figure 21. Top row: blurred images (Scenario I) with different noise levels: (a) σ = 10⁻³, (b) σ = 2 × 10⁻³, (c) σ = 3 × 10⁻³, (d) σ = 4 × 10⁻³, (e) σ = 5 × 10⁻³. (f–j): the deblurring results generated by the proposed model.
Figure 22. Top row: blurred images (Scenario I) with different darkness levels: (a) 1, (b) 2, (c) 3, (d) 4, (e) 5. (f–j): the deblurring results generated by the proposed model.
Table 1. The six blurring kernels (PSFs) used in the experiments.

Scenario | PSF
I | 9 × 9 Gaussian PSF with standard deviation 10
II | fspecial('motion', 20, 45)
III | [1 4 6 4 1]^T [1 4 6 4 1]/256
IV | fspecial('average', 3)
V | fspecial('disk', 3)
VI | [1 1 1 1 1]^T [1 1 1 1 1]/25
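For concreteness, the six PSFs in Table 1 can be generated with the short MATLAB sketch below (it assumes the Image Processing Toolbox for fspecial; the variable names are ours and purely illustrative):

% Illustrative construction of the six blur kernels in Table 1.
h1 = fspecial('gaussian', [9 9], 10);     % Scenario I
h2 = fspecial('motion', 20, 45);          % Scenario II
h3 = [1 4 6 4 1]' * [1 4 6 4 1] / 256;    % Scenario III
h4 = fspecial('average', 3);              % Scenario IV
h5 = fspecial('disk', 3);                 % Scenario V
h6 = ones(5) / 25;                        % Scenario VI
% Each kernel sums to one, so blurring preserves the mean intensity:
cellfun(@(h) sum(h(:)), {h1, h2, h3, h4, h5, h6})

Each scenario is then simulated by convolving the test image with the corresponding kernel and, where stated, adding Gaussian noise.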
Table 2. PSNR and SSIM values for the experiments shown in Figure 2, Figure 3, Figure 4 and Figure 5.

Images/Average | Model | PSNR | SSIM
“Lena” (I) | FastTVMM [2] | 30.4405 | 0.8199
“Lena” (I) | WFBM [1] | 28.7705 | 0.7732
“Lena” (I) | Ours | 30.7822 | 0.8558
“Lena” (II) | FastTVMM [2] | 31.7987 | 0.8481
“Lena” (II) | WFBM [1] | 29.3450 | 0.7972
“Lena” (II) | Ours | 31.9129 | 0.8490
“Chimpanzee” (I) | FastTVMM [2] | 31.0760 | 0.8922
“Chimpanzee” (I) | WFBM [1] | 30.4258 | 0.8841
“Chimpanzee” (I) | Ours | 33.3872 | 0.8973
“Chimpanzee” (II) | FastTVMM [2] | 32.1892 | 0.8965
“Chimpanzee” (II) | WFBM [1] | 31.3118 | 0.8755
“Chimpanzee” (II) | Ours | 33.3153 | 0.8988
“Average” | FastTVMM [2] | 31.3761 | 0.8641
“Average” | WFBM [1] | 29.9633 | 0.8325
“Average” | Ours | 32.3494 | 0.8752
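For reference, the “Average” row of Table 2 shows that the proposed method improves on the stronger of the two competing methods (FastTVMM [2]) by 32.3494 − 31.3761 = 0.9733 dB in PSNR and by 0.8752 − 0.8641 = 0.0111 in SSIM.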
Table 3. Time, iteration and FSIM values for the experiments shown in Figure 10a, Figure 11a, Figure 12a, Figure 13a, Figure 14a, Figure 15a, Figure 16a and Figure 17a.

Images | Time (s) (Single) | Iteration | Time (s) (Total) | FSIM
Figure 10a | 10.13 | 25 | 253.25 | 0.8335
Figure 11a | 15.54 | 20 | 310.93 | 0.8417
Figure 12a | 21.72 | 18 | 350.99 | 0.8308
Figure 13a | 2.89 | 13 | 37.69 | 0.8636
Figure 14a | 1.95 | 15 | 29.31 | 0.8278
Figure 15a | 0.086 | 5 | 0.43 | 0.8836
Figure 16a | 0.36 | 4 | 1.43 | 0.8915
Figure 17a | 2.76 | 15 | 41.38 | 0.8553
Table 4. The time (in seconds) required to obtain restoration results for different test images and different methods.

Images | FastTVMM | WFBM | NNTGV | QDSV | TVp,q(x) | D-TGV | Our
Figure 10a | 5.24 | 349.89 | 271.74 | 391.25 | 288.23 | 264.12 | 253.25
Figure 11a | 3.26 | 373.32 | 380.97 | 486.31 | 394.29 | 353.43 | 310.93
Figure 12a | 5.12 | 390.52 | 410.26 | 462.28 | 423.16 | 384.32 | 350.99
Figure 13a | 0.85 | 47.09 | 64.15 | 214.37 | 61.35 | 53.22 | 37.69
Figure 14a | 0.56 | 64.50 | 44.98 | 189.16 | 72.67 | 67.92 | 29.31
Figure 15a | 0.15 | 15.87 | 11.41 | 57.24 | 31.15 | 11.31 | 0.43
Figure 16a | 0.54 | 43.28 | 14.94 | 73.19 | 64.46 | 13.95 | 1.43
Figure 17a | 0.99 | 71.75 | 55.74 | 267.35 | 65.47 | 87.45 | 41.38
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
