Article

Simultaneous Patch-Group Sparse Coding with Dual-Weighted $\ell_p$ Minimization for Image Restoration

1 Artificial Intelligence Industrial Technology Research Institute, Nanjing Institute of Technology, Nanjing 211167, China
2 Jiangsu Engineering Research Center of IntelliSense Technology and System, Nanjing Institute of Technology, Nanjing 211167, China
* Authors to whom correspondence should be addressed.
Micromachines 2021, 12(10), 1205; https://doi.org/10.3390/mi12101205
Submission received: 15 September 2021 / Revised: 27 September 2021 / Accepted: 27 September 2021 / Published: 1 October 2021

Abstract:
Sparse coding (SC) models have proven to be powerful tools for image restoration tasks; representative examples are patch sparse coding (PSC) and group sparse coding (GSC). However, these two kinds of SC models have their respective drawbacks: PSC tends to generate visually annoying blocking artifacts, while GSC models usually produce over-smoothing effects. Moreover, conventional $\ell_1$ minimization-based convex regularization is usually employed as a standard scheme for estimating sparse signals, but it cannot achieve an accurate sparse solution in many realistic situations. In this paper, we propose a novel approach for image restoration via simultaneous patch-group sparse coding (SPG-SC) with dual-weighted $\ell_p$ minimization. Specifically, in contrast to existing SC-based methods, the proposed SPG-SC performs local sparse coding and nonlocal sparse representation simultaneously. A dual-weighted $\ell_p$ minimization-based non-convex regularization is proposed to improve the sparse representation capability of the proposed SPG-SC. To make the optimization tractable, a non-convex generalized iteration shrinkage algorithm based on the alternating direction method of multipliers (ADMM) framework is developed to solve the proposed SPG-SC model. Extensive experimental results on two image restoration tasks, image inpainting and image deblurring, demonstrate that the proposed SPG-SC outperforms many state-of-the-art algorithms in terms of both objective and perceptual quality.

1. Introduction

As an important task in the field of image processing, image restoration has attracted considerable interest from researchers and has been widely applied in areas such as medical image analysis [1], remote sensing [2] and digital photography [3]. The goal of image restoration is to reconstruct a high-quality image from its degraded (e.g., noisy, blurred or pixel-missing) observation. This is typically an ill-posed inverse problem and can be mathematically modeled as
$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \tag{1}$$
where $\mathbf{x}$ and $\mathbf{y}$ are lexicographically stacked representations of the original image and the degraded observation, respectively, $\mathbf{H}$ stands for a non-invertible degradation matrix, and $\mathbf{n}$ is usually assumed to be additive white Gaussian noise. By selecting specific values for $\mathbf{H}$, the model in Equation (1) can represent different image restoration tasks. For instance, when $\mathbf{H}$ is an identity matrix, Equation (1) represents a simple image denoising problem [4,5]; when $\mathbf{H}$ is a blur matrix, it is an image deblurring problem [6,7]; when $\mathbf{H}$ is a diagonal matrix whose diagonal entries are either 1 or 0 (keeping or discarding the corresponding pixels), it denotes an image inpainting problem [8,9]; and when $\mathbf{H}$ is a random projection matrix, it is an image compressive sensing recovery problem [10,11]. In this paper, we mainly focus on the image inpainting and image deblurring problems.
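To make the role of $\mathbf{H}$ concrete, the following minimal NumPy sketch constructs the two degradations used in this paper (a 0/1 inpainting mask, and the $9 \times 9$ uniform and $\sigma = 1.6$ Gaussian blur kernels of Section 4.1); the array names and sizes are illustrative and are not taken from the authors' released code.

```python
# Illustrative degradations matching Equation (1); names/sizes are ours.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

rng = np.random.default_rng(0)
x = rng.random((64, 64))                          # stand-in original image in [0, 1]

# Inpainting: H is diagonal with 0/1 entries, i.e., an element-wise mask.
mask = (rng.random(x.shape) > 0.8).astype(float)  # keep ~20% of the pixels
y_inpaint = mask * x

# Deblurring: H applies a 9x9 uniform kernel or a Gaussian kernel (sigma = 1.6),
# followed by additive white Gaussian noise n with sigma_n = 2 (on a 0-255 scale).
sigma_n = 2.0 / 255.0
y_gauss = gaussian_filter(x, sigma=1.6) + sigma_n * rng.standard_normal(x.shape)
y_uniform = uniform_filter(x, size=9) + sigma_n * rng.standard_normal(x.shape)
```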
Due to the ill-posed nature of image restoration, a prior on the original image $\mathbf{x}$ is usually employed to regularize the solution space and thus obtain a high-quality reconstructed image. In general, image prior-based regularization for image restoration can be expressed by the following minimization problem,
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{2}\left\|\mathbf{y} - \mathbf{H}\mathbf{x}\right\|_2^2 + \lambda \Phi(\mathbf{x}), \tag{2}$$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm and the first term in Equation (2) is the fidelity term. $\Phi(\mathbf{x})$ represents the regularization term, which encodes the employed image priors, and $\lambda$ is a regularization parameter that balances these two terms. To tackle the ill-posed image restoration problem, image prior knowledge plays a critical role in enhancing the performance of restoration algorithms; in other words, it is vital to devise an effective regularization model in Equation (2) that reflects the image prior information. During recent decades, various image prior-based regularization models have been proposed in the literature to depict the statistical features of natural images [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22].
Early regularization models mainly consider priors at the pixel level, such as total variation (TV) [13,14], which in effect assumes that the natural image gradient obeys a Laplacian distribution. TV-based methods remove noise artifacts effectively, but they often erase fine details and tend to over-smooth images due to the piecewise-constant assumption [6,7].
Another crucial line of work models priors on image patches. One representative approach is the sparse coding (SC) model, which is generally classified into two categories: patch sparse coding (PSC) [15,16,17,23,24] and group sparse coding (GSC) [5,10,11,18,19,20,21,25,26,27]. PSC usually assumes that each patch of an image can be accurately represented, over a dictionary of atoms, by a sparse coefficient vector whose entries are mostly zero or close to zero. A series of dictionary learning approaches have been proposed and applied to image restoration and other image processing tasks [17,28,29,30]. For example, the well-known K-SVD dictionary learning method [17] has achieved promising performance in numerous applications, ranging from image denoising to computer vision [31,32]. Mairal et al. [28] proposed an online dictionary learning (ODL) approach for various machine learning and image processing tasks. However, in image restoration, PSC is usually unstable and tends to generate visually annoying blocking artifacts [33]. Moreover, the PSC model not only requires expensive computation to learn a dictionary, but also commonly takes no account of the correlation between similar patches [10,11,18,19].
To overcome the above-mentioned disadvantages of PSC, recent advances in GSC, inspired by the success of the nonlocal self-similarity (NSS) prior in images [4], exploit nonlocal similar patch groups as the basic unit for SC and have shown great potential in a variety of image restoration tasks [5,10,11,18,19,21,27,34,35,36]. For instance, a very popular method is BM3D [5], which exploits the NSS prior to construct 3D arrays and applies transform-domain filtering to them; to the best of our knowledge, it is the earliest method to use both the NSS prior and sparsity for image restoration. Mairal et al. [18] advanced the idea of NSS through GSC. LPG-PCA [35] computed statistical parameters for PCA learning using nonlocal similar patches as data samples. Dong et al. [36] proposed joint local and nonlocal sparsity constraints for image restoration. Zhang et al. [10] exploited the NSS prior for image restoration under a group-based sparse representation framework. Since the matrix formed by nonlocal similar patches in a natural image is of low rank, the authors of [37,38,39,40,41,42,43] transformed image restoration into low-rank matrix approximation problems and achieved highly competitive reconstruction results. Although the GSC model has achieved great success in miscellaneous image restoration applications, it tends to smooth out some fine details of the reconstructed images [43]. Furthermore, the SC-based image restoration problem is naturally modeled using the $\ell_0$-norm penalty [44]. However, since $\ell_0$ minimization is NP-hard, it is commonly relaxed to tractable alternatives. A widely used scheme replaces $\ell_0$ minimization with its convex $\ell_1$ counterpart, which serves as a standard scheme for estimating sparse signals, and many optimization algorithms have been developed to solve the $\ell_1$ minimization problem [45,46,47]. However, $\ell_1$ minimization cannot achieve an accurate sparse solution in many practical situations, including image restoration problems [10,11,19,48]. For this reason, weighted $\ell_1$ minimization [51], $\ell_p$ minimization [50], and even weighted $\ell_p$ minimization [11,19], which has been shown to achieve better sparse solutions, have been proposed for estimating sparse signals in practice.
Bearing the above concerns in mind, this paper proposes a novel approach for image restoration via simultaneous patch-group sparse coding (SPG-SC) with dual-weighted $\ell_p$ minimization. The local and nonlocal sparse representations exploited synchronously in SPG-SC can eliminate the blocking artifacts and over-smoothing that often occur in PSC- or GSC-based methods. Moreover, a new dual-weighted $\ell_p$ minimization-based non-convex regularization is presented, which aims to enhance the sparse representation capability of the proposed SPG-SC framework in image restoration tasks. The major contributions of this paper are summarized as follows. First, compared with existing SC-based methods, the proposed SPG-SC exploits local sparsity and nonlocal sparse representation simultaneously. Second, to improve the sparse representation capability of the proposed SPG-SC model, a dual-weighted $\ell_p$ minimization-based non-convex regularization is proposed. Third, to make the optimization tractable, we develop a non-convex generalized iteration shrinkage algorithm based on the alternating direction method of multipliers (ADMM) framework to solve the proposed SPG-SC model. Experimental results demonstrate that, on the image inpainting and image deblurring tasks, the proposed SPG-SC outperforms many state-of-the-art methods both quantitatively and qualitatively.
The remainder of this paper is organized as follows. Section 2 reviews related work on sparse coding for image processing. Section 3 presents the proposed sparse coding model, SPG-SC, for image restoration. Section 4 describes the experimental results for image inpainting and image deblurring. Finally, concluding remarks are provided in Section 5.

2. Sparse Coding for Image Processing

2.1. Patch Sparse Coding

SC exhibits promising performance for various image processing tasks [15,16,17,23]; it assumes that an image can be spanned by a set of bases or dictionary atoms in a transform domain. According to [15], the basic unit of SC for images is the patch. Mathematically, denote an image by $\mathbf{x} \in \mathbb{R}^N$, and let $\mathbf{x}_i = \mathbf{R}_i \mathbf{x}$, $i = 1, \dots, n$, denote an image patch of size $b \times b$ extracted at location $i$, where $\mathbf{R}_i$ represents the matrix extracting the patch $\mathbf{x}_i$ from $\mathbf{x}$. Given a dictionary $\mathbf{D} \in \mathbb{R}^{b \times M}$, $b \leq M$, sparse coding of each patch $\mathbf{x}_i$ seeks a sparse vector $\boldsymbol{\alpha}_i$ such that $\mathbf{x}_i \approx \mathbf{D}\boldsymbol{\alpha}_i$; most of the elements of $\boldsymbol{\alpha}_i$ are zero. In general, the sparse coding problem of $\mathbf{x}_i$ over $\mathbf{D}$ is solved via the following optimization problem,
$$\hat{\boldsymbol{\alpha}}_i = \arg\min_{\boldsymbol{\alpha}_i} \frac{1}{2}\left\|\mathbf{x}_i - \mathbf{D}\boldsymbol{\alpha}_i\right\|_2^2 + \lambda \left\|\boldsymbol{\alpha}_i\right\|_0, \tag{3}$$
where $\lambda$ is a non-negative parameter balancing the fidelity and sparsity terms, and $\|\cdot\|_0$ is the $\ell_0$-norm (quasi-norm), which counts the number of nonzero elements of $\boldsymbol{\alpha}_i$. The whole image $\mathbf{x}$ can then be sparsely represented by the set of sparse codes $\{\boldsymbol{\alpha}_i\}_{i=1}^n$. Concatenating the $n$ patches, let $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_n] \in \mathbb{R}^{b \times n}$ denote all the patches extracted from the image. Since $\mathbf{D}$ is shared by these patches, we thus have
$$\hat{\mathbf{A}} = \arg\min_{\mathbf{A}} \frac{1}{2}\left\|\mathbf{X} - \mathbf{D}\mathbf{A}\right\|_F^2 + \lambda \left\|\mathbf{A}\right\|_0, \tag{4}$$
where $\|\cdot\|_F$ represents the Frobenius norm, $\mathbf{A} = [\boldsymbol{\alpha}_1, \dots, \boldsymbol{\alpha}_n] \in \mathbb{R}^{M \times n}$ is the sparse coefficient matrix, and the $\ell_0$-norm is imposed on each column of $\mathbf{A}$ (corresponding to each patch).
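As a concrete illustration of Equations (3) and (4), the sketch below extracts overlapping patches and codes them against an orthonormal 2D-DCT dictionary, for which the $\ell_0$ problem has an exact hard-thresholding solution. The paper itself uses learned dictionaries, so the fixed DCT basis here is purely our simplifying assumption for the example.

```python
# Minimal PSC sketch: assumes an orthonormal 2D-DCT dictionary (our choice).
import numpy as np
from scipy.fftpack import dct

def dct_dictionary(b=8):
    # Orthonormal (b*b) x (b*b) dictionary from the separable 2D-DCT basis.
    d1 = dct(np.eye(b), norm='ortho', axis=0)
    return np.kron(d1, d1)

def extract_patches(img, b=8, stride=4):
    # Vectorized overlapping b x b patches as columns, X of shape (b*b, n).
    H, W = img.shape
    return np.stack([img[i:i + b, j:j + b].ravel()
                     for i in range(0, H - b + 1, stride)
                     for j in range(0, W - b + 1, stride)], axis=1)

def psc_code(X, D, lam):
    # With orthonormal D, argmin 0.5*||x - D a||^2 + lam*||a||_0 per column
    # is hard thresholding of D^T x at sqrt(2*lam).
    A = D.T @ X
    A[np.abs(A) < np.sqrt(2.0 * lam)] = 0.0
    return A
```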

2.2. Group Sparse Coding

Instead of using a single patch as the basic unit as in PSC, GSC employs the patch group as its unit. In this subsection, we briefly introduce the GSC model [10,11,18,19,52]. Specifically, an image $\mathbf{x}$ is first divided into $n$ overlapping patches $\mathbf{x}_i$ of size $b \times b$, $i = 1, \dots, n$. Second, in contrast to PSC, for each patch $\mathbf{x}_i$ we search for the $m$ patches most similar to it within an $L \times L$ window to form a patch group $\mathbf{X}_{G_i}$, denoted by $\mathbf{X}_{G_i} = \{\mathbf{x}_{i,1}, \dots, \mathbf{x}_{i,m}\}$, where $\mathbf{x}_{i,m}$ denotes the $m$-th similar patch (column vector) of the $i$-th patch group. It is worth noting that the K-nearest neighbor (KNN) algorithm [53] is used here to search for similar patches. Finally, similar to PSC, given a dictionary $\mathbf{D}_{G_i} \in \mathbb{R}^{b \times K}$, each group $\mathbf{X}_{G_i}$ can be sparsely represented as $\mathbf{B}_{G_i} = \mathbf{D}_{G_i}^{T}\mathbf{X}_{G_i}$, which can be obtained by solving the following $\ell_0$ minimization problem,
$$\hat{\mathbf{B}}_{G_i} = \arg\min_{\mathbf{B}_{G_i}} \frac{1}{2}\left\|\mathbf{X}_{G_i} - \mathbf{D}_{G_i}\mathbf{B}_{G_i}\right\|_F^2 + \lambda \left\|\mathbf{B}_{G_i}\right\|_0, \tag{5}$$
where $\mathbf{B}_{G_i}$ is the group sparse coefficient of each group $\mathbf{X}_{G_i}$, and the $\ell_0$-norm is imposed on each column of $\mathbf{B}_{G_i}$. To put all groups in one shot, we define $\mathbf{Q}_i \in \mathbb{R}^{n \times m}$ as the operator that searches for and extracts the similar patches of the $i$-th patch, i.e., $\mathbf{X}_{G_i} = \mathbf{X}\mathbf{Q}_i$. Concatenating the $n$ patch groups, we thus have
$$\mathbf{X}_G = \mathbf{X}[\mathbf{Q}_1, \dots, \mathbf{Q}_n] = \mathbf{X}\mathbf{Q} \in \mathbb{R}^{b \times (mn)}. \tag{6}$$
Since each patch group has its own dictionary in GSC and these dictionaries are not necessarily shared, let
$$\mathbf{D}_G = [\mathbf{D}_{G_1}, \dots, \mathbf{D}_{G_n}] \in \mathbb{R}^{b \times (nK)}, \tag{7}$$
$$\bar{\mathbf{B}}_G = [\bar{\mathbf{B}}_{G_1}, \dots, \bar{\mathbf{B}}_{G_n}] \in \mathbb{R}^{(nK) \times (mn)}, \tag{8}$$
where each $\bar{\mathbf{B}}_{G_i} \in \mathbb{R}^{nK \times m}$ is an expanded (taller, with more rows) version of $\mathbf{B}_{G_i} \in \mathbb{R}^{K \times m}$, with $\mathbf{B}_{G_i}$ in the corresponding locations (from the $((i-1)K+1)$-th row to the $(iK)$-th row) and zeros elsewhere, i.e., aligned with $\mathbf{D}_{G_i}$ in $\mathbf{D}_G$. The GSC problem to be solved now becomes
$$\hat{\bar{\mathbf{B}}}_G = \arg\min_{\bar{\mathbf{B}}_G} \frac{1}{2}\left\|\mathbf{X}_G - \mathbf{D}_G\bar{\mathbf{B}}_G\right\|_F^2 + \lambda \left\|\bar{\mathbf{B}}_G\right\|_0, \tag{9}$$
where the $\ell_0$-norm is again imposed on each column; this convention holds for the remaining derivations in this paper. Please note that both $\mathbf{X}$ in PSC and $\mathbf{X}_G$ in GSC are constructed from the same original image $\mathbf{x}$.
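A minimal sketch of the patch-group construction just described: for each reference patch, the $m$ patches most similar to it (in Euclidean distance) within an $L \times L$ search window are gathered into $\mathbf{X}_{G_i}$. The helper below is hypothetical; it only illustrates the KNN search of [53].

```python
# Hedged sketch of patch-group construction for GSC; names are ours.
import numpy as np

def build_group(X, coords, i, m=60, L=25):
    # X: (b*b, n) patch matrix; coords: (n, 2) top-left coordinates of patches.
    ci = coords[i]
    in_window = np.all(np.abs(coords - ci) <= L // 2, axis=1)  # L x L window
    cand = np.flatnonzero(in_window)
    d = np.sum((X[:, cand] - X[:, [i]]) ** 2, axis=0)          # squared distances
    nearest = cand[np.argsort(d)[:m]]                          # m most similar patches
    return X[:, nearest]                                       # group X_Gi, shape (b*b, m)
```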

3. Image Restoration Using Simultaneous Patch-Group Sparse Coding with Dual-Weighted p Minimization

As mentioned before, the PSC model usually leads to visually annoying blocking artifacts, while the GSC model is apt to produce over-smoothing effects in various image restoration tasks. In this section, to cope with these problems, we propose a novel simultaneous patch-group sparse coding (SPG-SC) model for image restoration, rather than using the PSC model in Equation (4) or the GSC model in Equation (9) individually.
Before doing so, we first make some preliminary transformations to connect the PSC model in Equation (4) with the GSC model in Equation (9). Specifically, recall that each patch (column) in the patch group $\mathbf{X}_G$ comes from $\mathbf{X}$ and can be sparsely represented by Equation (4). Hence, besides the sparse coding in Equation (9), we also have
$$\mathbf{X}_G = \mathbf{D}\mathbf{A}_G, \tag{10}$$
where $\mathbf{A}_G \in \mathbb{R}^{M \times (mn)}$ is composed of the corresponding columns of $\mathbf{A}$; that is, $\mathbf{A}_G$ is an expanded version of $\mathbf{A}$ in Equation (4), in which each column is replicated $m$ times according to the patch search underlying $\mathbf{X}_G$. In this case, similar to Equation (4), $\mathbf{A}_G$ can be obtained by solving
$$\hat{\mathbf{A}}_G = \arg\min_{\mathbf{A}_G} \frac{1}{2}\left\|\mathbf{X}_G - \mathbf{D}\mathbf{A}_G\right\|_F^2 + \lambda \left\|\mathbf{A}_G\right\|_0. \tag{11}$$

3.1. Modeling of Simultaneous Patch-Group Sparse Coding for Image Restoration

Now, integrating the PSC model in Equation (11) and the GSC model in Equation (9) into the regularization-based framework of Equation (2), the proposed SPG-SC model for image restoration can be written as follows:
$$(\hat{\mathbf{X}}_G, \hat{\mathbf{A}}_G, \hat{\bar{\mathbf{B}}}_G) = \arg\min_{\mathbf{X}_G, \mathbf{A}_G, \bar{\mathbf{B}}_G} \frac{1}{2}\left\|\mathbf{Y}_G - \mathbf{H}_G\mathbf{X}_G\right\|_F^2 + \frac{\mu_1}{2}\left\|\mathbf{X}_G - \mathbf{D}\mathbf{A}_G\right\|_F^2 + \lambda \left\|\mathbf{A}_G\right\|_0 + \frac{\mu_2}{2}\left\|\mathbf{X}_G - \mathbf{D}_G\bar{\mathbf{B}}_G\right\|_F^2 + \rho \left\|\bar{\mathbf{B}}_G\right\|_0, \tag{12}$$
where $\rho$ plays the same role for the GSC term as $\lambda$ does in Equation (9). $\mathbf{Y}_G$ is obtained from $\mathbf{y}$ by the same procedure as $\mathbf{X}_G$, and similarly $\mathbf{H}_G$ is obtained from $\mathbf{H}$. Here we introduce the parameters $\mu_1$ and $\mu_2$ to make the solution of Equation (12) more tractable.
However, since $\ell_0$ minimization is discontinuous and NP-hard, solving Equation (12) is a difficult combinatorial optimization problem. For this reason, $\ell_0$ minimization is usually relaxed to $\ell_1$ minimization to make the optimization tractable. Unfortunately, in some practical cases, such as image restoration problems [10,11,19,48], $\ell_1$ minimization is only an approximation of $\ell_0$ minimization and cannot obtain the desired sparse solution. Therefore, inspired by Zha's work [11], which demonstrated that weighted $\ell_p$ minimization can achieve a better sparse solution than existing schemes such as $\ell_1$ minimization [49], $\ell_p$ minimization [50] and weighted $\ell_1$ minimization [51], we propose a new dual-weighted $\ell_p$ minimization within our SPG-SC framework for image restoration. Concretely, Equation (12) can be rewritten as
$$(\hat{\mathbf{X}}_G, \hat{\mathbf{A}}_G, \hat{\bar{\mathbf{B}}}_G) = \arg\min_{\mathbf{X}_G, \mathbf{A}_G, \bar{\mathbf{B}}_G} \frac{1}{2}\left\|\mathbf{Y}_G - \mathbf{H}_G\mathbf{X}_G\right\|_F^2 + \frac{\mu_1}{2}\left\|\mathbf{X}_G - \mathbf{D}\mathbf{A}_G\right\|_F^2 + \lambda \left\|\mathbf{W}_G \circ \mathbf{A}_G\right\|_p + \frac{\mu_2}{2}\left\|\mathbf{X}_G - \mathbf{D}_G\bar{\mathbf{B}}_G\right\|_F^2 + \rho \left\|\mathbf{K}_G \circ \bar{\mathbf{B}}_G\right\|_p, \tag{13}$$
where $\circ$ is the element-wise (Hadamard) product of two matrices, and $\|\cdot\|_p$ denotes the $\ell_p$-norm used to characterize the sparsity of the sparse coefficients. It is worth noting that although we apply the $\ell_p$-norm to matrices here, it is imposed on each column of the sparse coefficient matrices. $\mathbf{W}_G$ and $\mathbf{K}_G$ are the weights for the sparse coefficients $\mathbf{A}_G$ and $\bar{\mathbf{B}}_G$, respectively; the weights can improve the representation capability of the sparse coefficients [51].
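For clarity, the snippet below evaluates a weighted $\ell_p$ penalty of the form used in Equation (13), under the common reading that the column-wise $\ell_p$ norms are summed; whether the power is taken inside or outside the sum is our assumption, since the text does not spell out the convention.

```python
# Hedged sketch: column-wise weighted lp penalty as in Eq. (13) (convention ours).
import numpy as np

def weighted_lp(W, A, p):
    # ||W o A||_p applied to each column, then summed over columns.
    col_norms = np.sum((W * np.abs(A)) ** p, axis=0) ** (1.0 / p)
    return np.sum(col_norms)
```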

3.2. Generalized Iteration Shrinkage Algorithm Based on the ADMM Framework to Solve the Proposed SPG-SC Model

Since the objective function in Equation (13) is a large-scale non-convex optimization problem, we adopt a generalized iteration shrinkage algorithm based on the ADMM framework [54,55] to make the optimization tractable. This scheme has been demonstrated to be quite effective, allowing each sub-problem to be solved efficiently. Specifically, Equation (13) can be translated into the following five iterative steps under the ADMM scheme:
$$\mathbf{X}_G^{(t+1)} = \arg\min_{\mathbf{X}_G} \frac{1}{2}\left\|\mathbf{Y}_G - \mathbf{H}_G\mathbf{X}_G\right\|_F^2 + \frac{\mu_1}{2}\left\|\mathbf{X}_G - \mathbf{D}\mathbf{A}_G^{(t)} - \mathbf{C}^{(t)}\right\|_F^2 + \frac{\mu_2}{2}\left\|\mathbf{X}_G - \mathbf{D}_G\bar{\mathbf{B}}_G^{(t)} - \mathbf{J}^{(t)}\right\|_F^2, \tag{14}$$
$$\mathbf{A}_G^{(t+1)} = \arg\min_{\mathbf{A}_G} \lambda \left\|\mathbf{W}_G \circ \mathbf{A}_G\right\|_p + \frac{\mu_1}{2}\left\|\mathbf{X}_G^{(t+1)} - \mathbf{D}\mathbf{A}_G - \mathbf{C}^{(t)}\right\|_F^2, \tag{15}$$
$$\bar{\mathbf{B}}_G^{(t+1)} = \arg\min_{\bar{\mathbf{B}}_G} \rho \left\|\mathbf{K}_G \circ \bar{\mathbf{B}}_G\right\|_p + \frac{\mu_2}{2}\left\|\mathbf{X}_G^{(t+1)} - \mathbf{D}_G\bar{\mathbf{B}}_G - \mathbf{J}^{(t)}\right\|_F^2, \tag{16}$$
$$\mathbf{C}^{(t+1)} = \mathbf{C}^{(t)} - \left(\mathbf{X}_G^{(t+1)} - \mathbf{D}\mathbf{A}_G^{(t+1)}\right), \tag{17}$$
$$\mathbf{J}^{(t+1)} = \mathbf{J}^{(t)} - \left(\mathbf{X}_G^{(t+1)} - \mathbf{D}_G\bar{\mathbf{B}}_G^{(t+1)}\right). \tag{18}$$
Obviously, the minimization of Equation (13) decomposes into three sub-problems, namely the $\mathbf{X}_G$, $\mathbf{A}_G$ and $\bar{\mathbf{B}}_G$ sub-problems. Fortunately, each sub-problem admits an efficient solution, as discussed in the following subsections. Moreover, each sub-problem can be solved patch by patch for image restoration. Concretely, take the $i$-th patch $\mathbf{x}_i$ as an example, with $\mathbf{y}_i = \mathbf{H}_i\mathbf{x}_i$, where $\mathbf{H}_i$ represents the degradation matrix for the $i$-th patch. Recall that $\mathbf{A}_G$ is an expanded version of $\mathbf{A}$ in the PSC model and $\mathbf{x}_i = \mathbf{D}\boldsymbol{\alpha}_i$; once $\boldsymbol{\alpha}_i$ is solved, we can straightforwardly obtain $\mathbf{A}_G$. In the GSC model, let $\boldsymbol{\beta}_i$ concatenate all the group coefficients involving the $i$-th patch, so that $\mathbf{x}_i = \mathbf{D}_G\boldsymbol{\beta}_i$. We then solve the problem for each patch, omitting the superscript $t$ for conciseness. More specifically, we translate the $\mathbf{A}_G$ sub-problem into $\{\boldsymbol{\alpha}_i\}_{i=1}^n$ sub-problems, the $\bar{\mathbf{B}}_G$ sub-problem into $\{\mathbf{B}_{G_i}\}_{i=1}^n$ sub-problems, and the $\mathbf{X}_G$ sub-problem into $\{\mathbf{x}_i\}_{i=1}^n$ sub-problems.
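Before deriving each solver, the runnable toy below shows the control flow of the five steps in Equations (14)-(18), deliberately simplified to identity dictionaries, $\mathbf{H}_G = \mathbf{I}$ and $p = 1$ (plain soft thresholding) so that it stays self-contained; the paper's actual per-step solvers are derived in the following subsections.

```python
# Toy ADMM loop mirroring Eqs. (14)-(18); simplifications (D = D_G = H_G = I,
# p = 1) are ours, purely to illustrate the splitting.
import numpy as np

def soft(v, t):
    # Soft thresholding: the p = 1 special case of the GST operator used later.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 60))                # stands in for Y_G
mu1, mu2, lam, rho = 0.1, 0.1, 0.05, 0.05
X = Y.copy()
A = np.zeros_like(X); B = np.zeros_like(X)       # PSC and GSC codes
C = np.zeros_like(X); J = np.zeros_like(X)       # auxiliary variables
for t in range(50):
    X = (Y + mu1 * (A + C) + mu2 * (B + J)) / (1.0 + mu1 + mu2)  # Eq. (14)
    A = soft(X - C, lam / mu1)                   # Eq. (15) with p = 1
    B = soft(X - J, rho / mu2)                   # Eq. (16) with p = 1
    C = C - (X - A)                              # Eq. (17)
    J = J - (X - B)                              # Eq. (18)
```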

3.2.1. X G Sub-Problem

Given $\mathbf{A}_G$ and $\bar{\mathbf{B}}_G$, the $\mathbf{X}_G$ sub-problem in Equation (14) becomes, for each patch $\mathbf{x}_i$,
$$\hat{\mathbf{x}}_i = \arg\min_{\mathbf{x}_i} \frac{1}{2}\left\|\mathbf{y}_i - \mathbf{H}_i\mathbf{x}_i\right\|_2^2 + \frac{\mu_1}{2}\left\|\mathbf{x}_i - \mathbf{D}\boldsymbol{\alpha}_i - \mathbf{c}_i\right\|_2^2 + \frac{\mu_2}{2}\left\|\mathbf{x}_i - \mathbf{D}_G\boldsymbol{\beta}_i - \mathbf{j}_i\right\|_2^2. \tag{19}$$
Equation (19) is a strictly convex quadratic minimization problem and therefore admits a closed-form solution for $\mathbf{x}_i$, which can be expressed as
$$\hat{\mathbf{x}}_i = \left(\mathbf{H}_i^{T}\mathbf{H}_i + (\mu_1 + \mu_2)\mathbf{I}\right)^{-1}\left(\mathbf{H}_i^{T}\mathbf{y}_i + \mu_1(\mathbf{D}\boldsymbol{\alpha}_i + \mathbf{c}_i) + \mu_2(\mathbf{D}_G\boldsymbol{\beta}_i + \mathbf{j}_i)\right), \tag{20}$$
where $\mathbf{I}$ is an identity matrix of the appropriate dimensions, and $\mathbf{c}_i$ and $\mathbf{j}_i$ are the corresponding elements of $\mathbf{C}$ and $\mathbf{J}$, respectively. It is worth noting that each $\mathbf{x}_i$ is jointly estimated in Equation (20) using both PSC ($\mathbf{A}_G$ in Equation (11)) and GSC ($\bar{\mathbf{B}}_G$ in Equation (9)) in one shot. In our experiments, we observe that this joint estimation plays a key role in the performance improvement of the proposed SPG-SC model for image restoration (see Section 4 for more details).
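A minimal sketch of the closed-form update in Equation (20) for one patch; the argument names are ours. For inpainting, $\mathbf{H}_i^{T}\mathbf{H}_i$ is diagonal and the solve reduces to element-wise division, but the generic linear solve below covers all cases.

```python
# Hedged sketch of Eq. (20): joint PSC/GSC patch estimate (names are ours).
import numpy as np

def update_patch(y_i, H_i, D_alpha_i, c_i, DG_beta_i, j_i, mu1, mu2):
    n = D_alpha_i.size
    lhs = H_i.T @ H_i + (mu1 + mu2) * np.eye(n)
    rhs = H_i.T @ y_i + mu1 * (D_alpha_i + c_i) + mu2 * (DG_beta_i + j_i)
    return np.linalg.solve(lhs, rhs)          # x_i as in Eq. (20)
```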

3.2.2. A G Sub-Problem

As mentioned before, $\mathbf{A}_G$ is an expanded version of $\mathbf{A}$, so $\mathbf{A}_G$ can be obtained by solving the $\mathbf{A}$ sub-problem. Based on Equation (15), for the $i$-th patch, the $\boldsymbol{\alpha}_i$ sub-problem can be rewritten as
$$\hat{\boldsymbol{\alpha}}_i = \arg\min_{\boldsymbol{\alpha}_i} \frac{1}{2}\left\|\mathbf{r}_i - \mathbf{D}\boldsymbol{\alpha}_i\right\|_2^2 + \frac{\lambda}{\mu_1}\left\|\mathbf{w}_i \circ \boldsymbol{\alpha}_i\right\|_p, \tag{21}$$
where $\mathbf{r}_i = \mathbf{x}_i - \mathbf{c}_i$. Equation (21) is a sparse coding problem that can be solved by a variety of non-convex algorithms [50,56,57,58]. Devising an effective dictionary, however, is crucial for solving the $\mathbf{A}_G$ sub-problem. In general, one can learn an over-complete dictionary from a natural image dataset at high computational cost [28,30], but SC over an over-complete dictionary is potentially unstable and tends to produce visually annoying blocking artifacts in image restoration [33]. In this PSC problem, to achieve a more stable and sparser representation for each patch, we instead learn principal component analysis (PCA)-based sub-dictionaries [59] for the $\mathbf{A}_G$ sub-problem. Specifically, we first define $\mathbf{R} = \mathbf{X}_G - \mathbf{C}$ in Equation (15) as a good approximation of $\mathbf{D}\mathbf{A}_G$. Second, we extract image patches from the observation $\mathbf{R}$ and use the K-means algorithm to generate $q$ clusters [6]. Finally, we learn $q$ PCA sub-dictionaries, one from each cluster, namely $\mathbf{D} = [\mathbf{D}_1, \dots, \mathbf{D}_q]$, and one PCA sub-dictionary is then adaptively selected for each given patch.
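The following sketch mirrors that construction (K-means clustering of the patches of $\mathbf{R}$, then one PCA basis per cluster); scikit-learn's KMeans is assumed for the clustering step, which is our choice rather than necessarily the authors'.

```python
# Hedged sketch of the PCA sub-dictionary learning described above.
import numpy as np
from sklearn.cluster import KMeans

def learn_pca_dictionaries(R_patches, q=10):
    # R_patches: (n, b*b) array whose rows are vectorized patches of R.
    labels = KMeans(n_clusters=q, n_init=10).fit_predict(R_patches)
    dicts = []
    for k in range(q):
        P = R_patches[labels == k]
        P = P - P.mean(axis=0)               # center the cluster
        # Right singular vectors give the PCA basis of the cluster.
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        dicts.append(Vt.T)                   # orthonormal sub-dictionary D_k
    return dicts, labels
```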
Now, recalling Equation (21), the generalized soft-thresholding (GST) algorithm [57] is adopted to solve it; GST is efficient to implement and converges to an accurate solution. Concretely, for fixed $\mathbf{D}$, $\lambda$, $\mathbf{w}_i$ and $p$, the solution of Equation (21) can be computed as
$$\hat{\boldsymbol{\alpha}}_i = \operatorname{GST}\left(\mathbf{D}^{-1}\mathbf{r}_i, \frac{\lambda}{\mu_1}\mathbf{w}_i, p\right). \tag{22}$$
For more details about the GST algorithm, please refer to [57]. This procedure is applied to all patches to obtain $\hat{\mathbf{A}}_G$, the final solution of the $\mathbf{A}_G$ sub-problem in Equation (15). The detailed setting of the weight $\mathbf{w}_i$ is given in Section 3.3.
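For completeness, a sketch of the GST operator of [57] as we understand it: it solves the scalar problem $\min_x \frac{1}{2}(x-y)^2 + \lambda|x|^p$ element-wise via a zero-threshold test followed by a short fixed-point iteration. The iteration count and the small guard constant are our additions.

```python
# Hedged sketch of generalized soft-thresholding (GST) [57]; assumes lam > 0.
import numpy as np

def gst(y, lam, p, iters=3):
    lam = np.broadcast_to(np.asarray(lam, dtype=float), np.shape(y))
    # Below this threshold the element-wise minimizer is exactly zero [57].
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    a = np.abs(y)
    nz = a > tau
    t = a.copy()
    for _ in range(iters):  # fixed-point iteration x <- |y| - lam*p*x^(p-1)
        t = a - lam * p * np.maximum(t, 1e-12) ** (p - 1.0)
    return np.sign(y) * np.where(nz, t, 0.0)
```

For $p = 1$ the threshold reduces to $\lambda$ and the operator coincides with ordinary soft thresholding, which matches the toy loop shown earlier.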

3.2.3. B ¯ G Sub-Problem

Given $\mathbf{X}_G$, and according to Equation (16), the $\bar{\mathbf{B}}_G$ sub-problem can be rewritten as
$$\hat{\bar{\mathbf{B}}}_G = \arg\min_{\bar{\mathbf{B}}_G} \frac{1}{2}\left\|\mathbf{R}_G - \mathbf{D}_G\bar{\mathbf{B}}_G\right\|_F^2 + \frac{\rho}{\mu_2}\left\|\mathbf{K}_G \circ \bar{\mathbf{B}}_G\right\|_p, \tag{23}$$
where $\mathbf{R}_G = \mathbf{X}_G - \mathbf{J}$.
Recalling the relationships among $\bar{\mathbf{B}}_G$, $\mathbf{B}_G$, $\mathbf{B}_{G_i}$ and $\boldsymbol{\beta}_i$, for each patch we can obtain the other three quantities after solving any one of them. Now, instead of treating each patch as the basic unit as in the $\mathbf{A}_G$ sub-problem, we consider each patch group. For the $i$-th patch group, we have the following minimization problem,
$$\hat{\mathbf{B}}_{G_i} = \arg\min_{\mathbf{B}_{G_i}} \frac{1}{2}\left\|\mathbf{R}_{G_i} - \mathbf{D}_{G_i}\mathbf{B}_{G_i}\right\|_F^2 + \frac{\rho}{\mu_2}\left\|\mathbf{K}_{G_i} \circ \mathbf{B}_{G_i}\right\|_p, \tag{24}$$
where $\mathbf{K}_{G_i}$ is a weight assigned to each patch group $\mathbf{R}_{G_i}$; each weight matrix $\mathbf{K}_{G_i}$ can enhance the representation ability of the group sparse coefficient $\mathbf{B}_{G_i}$ [51]. Similar to the $\mathbf{A}_G$ sub-problem, an important issue in solving the $\bar{\mathbf{B}}_G$ sub-problem is the selection of the dictionary. To better adapt to local image structures, instead of learning an over-complete dictionary for each patch group as in [18], and inspired by [10,60,61], we learn a singular value decomposition (SVD)-based sub-dictionary for each patch group. Specifically, we apply the SVD to $\mathbf{R}_{G_i}$,
$$\mathbf{R}_{G_i} = \mathbf{U}_{G_i}\mathbf{G}_{G_i}\mathbf{V}_{G_i}^{T} = \sum_{j=1}^{c} g_{i,j}\,\mathbf{u}_{i,j}\mathbf{v}_{i,j}^{T}, \tag{25}$$
where $\mathbf{G}_{G_i} = \operatorname{diag}(g_{i,1}, \dots, g_{i,c})$ is a diagonal matrix, $c = \min(b, m)$, $j = 1, \dots, c$, and $\mathbf{u}_{i,j}$, $\mathbf{v}_{i,j}$ are the columns of $\mathbf{U}_{G_i}$ and $\mathbf{V}_{G_i}$, respectively.
Following this, we define each dictionary atom $\mathbf{d}_{i,j}$ of the adaptive dictionary $\mathbf{D}_{G_i}$ for each group $\mathbf{R}_{G_i}$ as
$$\mathbf{d}_{i,j} = \mathbf{u}_{i,j}\mathbf{v}_{i,j}^{T}, \quad j = 1, \dots, c. \tag{26}$$
We have therefore learned an adaptive dictionary, i.e.,
$$\mathbf{D}_{G_i} = [\mathbf{d}_{i,1}, \dots, \mathbf{d}_{i,c}]. \tag{27}$$
One can observe that the devised SVD-based dictionary learning method needs only one SVD operation per patch group.
Owing to the orthogonality of the dictionary $\mathbf{D}_{G_i}$, Equation (24) can be rewritten as
$$\hat{\mathbf{B}}_{G_i} = \arg\min_{\mathbf{B}_{G_i}} \frac{1}{2}\left\|\mathbf{G}_{G_i} - \mathbf{B}_{G_i}\right\|_F^2 + \frac{\rho}{\mu_2}\left\|\mathbf{K}_{G_i} \circ \mathbf{B}_{G_i}\right\|_p = \arg\min_{\boldsymbol{\beta}_i} \frac{1}{2}\left\|\mathbf{g}_i - \boldsymbol{\beta}_i\right\|_2^2 + \frac{\rho}{\mu_2}\left\|\mathbf{k}_i \circ \boldsymbol{\beta}_i\right\|_p, \tag{28}$$
where $\mathbf{R}_{G_i} = \mathbf{D}_{G_i}\mathbf{G}_{G_i}$, and $\mathbf{g}_i$, $\boldsymbol{\beta}_i$ and $\mathbf{k}_i$ denote the vectorized forms of the matrices $\mathbf{G}_{G_i}$, $\mathbf{B}_{G_i}$ and $\mathbf{K}_{G_i}$, respectively.
To solve Equation (28) efficiently, we again invoke the aforementioned GST algorithm [57]. Concretely, a closed-form solution of Equation (28) can be obtained as follows:
$$\hat{\boldsymbol{\beta}}_i = \operatorname{GST}\left(\mathbf{g}_i, \frac{\rho}{\mu_2}\mathbf{k}_i, p\right). \tag{29}$$
This process is performed across all $n$ patch groups to obtain $\hat{\bar{\mathbf{B}}}_G$, the final solution of the $\bar{\mathbf{B}}_G$ sub-problem in Equation (16).
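Putting Equations (25)-(29) together, the group sub-problem amounts to one SVD plus GST shrinkage of the singular values. The sketch below reuses the gst helper from the previous sketch and returns the reconstructed group $\mathbf{D}_{G_i}\mathbf{B}_{G_i}$; the function name and interface are again ours.

```python
# Hedged sketch of Eqs. (25)-(29): SVD-adaptive dictionary + GST shrinkage.
# Reuses gst(...) from the earlier sketch.
import numpy as np

def group_code(R_Gi, k_i, rho, mu2, p):
    # One SVD per patch group supplies the adaptive dictionary (Eqs. (25)-(27)).
    U, g, Vt = np.linalg.svd(R_Gi, full_matrices=False)
    beta = gst(g, rho * k_i / mu2, p)        # Eq. (29) on the singular values
    return U @ np.diag(beta) @ Vt            # reconstructed group D_Gi * B_Gi
```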

3.3. Setting the Weight and Regularization Parameter

Inspired by [51], large weights can be used to discourage nonzero entries in the recovered signal, while small weights can be used to encourage them; in other words, the weights should be inversely proportional to the magnitudes of the sparse coefficients. Therefore, we set the weight $\mathbf{w}_i$ in the $\mathbf{A}_G$ sub-problem as follows:
$$\mathbf{w}_i = \frac{1}{|\boldsymbol{\alpha}_i| + \varepsilon_P}, \tag{30}$$
where $\varepsilon_P$ is a positive constant.
Similarly, the weight $\mathbf{K}_{G_i}$ in the $\bar{\mathbf{B}}_G$ sub-problem is set as
$$\mathbf{K}_{G_i} = \frac{1}{|\mathbf{B}_{G_i}| + \varepsilon_G}, \tag{31}$$
where $\varepsilon_G$ is a positive constant.
Both $\lambda$ and $\rho$ are regularization parameters. To make the proposed image restoration algorithm perform stably, we set the regularization parameters $\lambda$ and $\rho$ adaptively. Specifically, inspired by [62], $\lambda$ in Equation (21) for the $\mathbf{A}_G$ sub-problem is set to
$$\lambda = \frac{2\sqrt{2}\,\eta\,\sigma_n^2}{\delta_i + \epsilon_P}, \tag{32}$$
where $\sigma_n$ denotes the noise standard deviation, $\delta_i$ denotes the estimated standard deviation of the sparse coefficients of the nonlocal similar patches in the $j$-th cluster [6], and $\eta$, $\epsilon_P$ are positive constants.
Similar to the setting of $\lambda$, the parameter $\rho$ in Equation (24) for the $\bar{\mathbf{B}}_G$ sub-problem is set as
$$\rho = \frac{2\sqrt{2}\,\tau\,\sigma_n^2}{\sigma_i + \epsilon_G}, \tag{33}$$
where $\sigma_i$ is the estimated standard deviation of the group sparse coefficient of each patch group $\mathbf{R}_{G_i}$, and $\tau$, $\epsilon_G$ are positive constants.
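A compact sketch of the update rules in Equations (30)-(33); coeff stands for the current sparse coefficients and coeff_std for the estimated standard deviation $\delta_i$ or $\sigma_i$, all names being ours and the $2\sqrt{2}$ factor following our reading of Equations (32) and (33).

```python
# Hedged sketch of the adaptive weights and regularization parameters.
import numpy as np

def weight(coeff, eps):
    # Eqs. (30)/(31): weights inversely proportional to coefficient magnitude.
    return 1.0 / (np.abs(coeff) + eps)

def reg_param(scale, sigma_n, coeff_std, eps):
    # Eq. (32) with scale = eta (gives lambda), Eq. (33) with scale = tau (rho).
    return 2.0 * np.sqrt(2.0) * scale * sigma_n ** 2 / (coeff_std + eps)
```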

3.4. Summary of the Proposed Algorithm

So far, we have solved the three sub-problems $\mathbf{X}_G$, $\mathbf{A}_G$ and $\bar{\mathbf{B}}_G$ of the proposed SPG-SC model using a non-convex generalized iteration shrinkage algorithm under the ADMM framework. In practice, an effective solution can be obtained for each separate sub-problem, which keeps the whole algorithm efficient and effective. The complete description of the proposed dual-weighted $\ell_p$ minimization-based SPG-SC model for image restoration is given in Algorithm 1.
Algorithm 1 Image Restoration Using the SPG-SC Model.
Require: The observed image $\mathbf{y}$ and the measurement matrix $\mathbf{H}$.
1: Initialize $\mathbf{X}_G^{(0)} = \mathbf{0}$, $\mathbf{A}_G^{(0)} = \mathbf{0}$, $\bar{\mathbf{B}}_G^{(0)} = \mathbf{0}$, $\mathbf{C} = \mathbf{0}$ and $\mathbf{J} = \mathbf{0}$.
2: Set the parameters $b$, $m$, $L$, $\mu_1$, $\mu_2$, $\eta$, $\tau$, $p$, $t$, $\sigma_n$, $\varepsilon_P$, $\varepsilon_G$, $\epsilon_P$ and $\epsilon_G$.
3: for $t = 0$ to Max-Iter do
4:   Update $\mathbf{X}_G^{(t+1)}$ by Equation (20);
5:   $\mathbf{R}^{(t+1)} = \mathbf{X}_G^{(t+1)} - \mathbf{C}^{(t)}$;
6:   Construct the dictionary $\mathbf{D}$ from $\mathbf{R}^{(t+1)}$ using the K-means algorithm and PCA;
7:   for each patch $\mathbf{r}_i$ do
8:     Choose the best-matching PCA dictionary $\mathbf{D}_i$ for $\mathbf{r}_i$;
9:     Compute $\boldsymbol{\alpha}_i^{(t)}$ by $\mathbf{D}_i^{-1}\mathbf{r}_i$;
10:    Update $\mathbf{w}_i^{(t+1)}$ by Equation (30);
11:    Update $\lambda$ by Equation (32);
12:    Update $\boldsymbol{\alpha}_i^{(t+1)}$ by Equation (22);
13:  end for
14:  $\mathbf{R}_G^{(t+1)} = \mathbf{X}_G^{(t+1)} - \mathbf{J}^{(t)}$;
15:  for each patch group $\mathbf{R}_{G_i}$ do
16:    Construct the dictionary $\mathbf{D}_{G_i}$ by Equation (27);
17:    Compute $\mathbf{B}_{G_i}^{(t)}$ by $\mathbf{D}_{G_i}^{-1}\mathbf{R}_{G_i}$;
18:    Update $\mathbf{K}_{G_i}^{(t+1)}$ by Equation (31);
19:    Update $\rho$ by Equation (33);
20:    Update $\mathbf{B}_{G_i}^{(t+1)}$ by Equation (29);
21:  end for
22:  Update $\mathbf{A}_G^{(t+1)}$ by concatenating all $\boldsymbol{\alpha}_i$;
23:  Update $\mathbf{D}^{(t+1)}$ by concatenating all $\mathbf{D}_i$;
24:  Update $\bar{\mathbf{B}}_G^{(t+1)}$ by concatenating all $\mathbf{B}_{G_i}$;
25:  Update $\mathbf{D}_G^{(t+1)}$ by concatenating all $\mathbf{D}_{G_i}$;
26:  Update $\mathbf{C}^{(t+1)}$ by Equation (17);
27:  Update $\mathbf{J}^{(t+1)}$ by Equation (18);
28: end for
29: Output: the final restored image $\hat{\mathbf{x}}$, obtained by aggregating the patches in $\mathbf{X}_G$.

4. Experimental Results

In this section, extensive experimental results are reported to illustrate the effectiveness of the proposed SPG-SC-based image restoration algorithm. We consider two standard image restoration problems, image inpainting and image deblurring. The test images are shown in Figure 1. Both the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) [63] are used to evaluate the different image restoration algorithms objectively. The following criterion is chosen as the stopping rule for the proposed SPG-SC-based image restoration algorithm:
$$\frac{\left\|\hat{\mathbf{x}}^{(t)} - \hat{\mathbf{x}}^{(t-1)}\right\|_2^2}{\left\|\hat{\mathbf{x}}^{(t-1)}\right\|_2^2} < \xi, \tag{34}$$
where $\xi$ is a small tolerance. The source code of our SPG-SC-based image restoration algorithm is available at: https://drive.google.com/open?id=1-nD7Mkb6Kn1TWzzxk5Pg886loIX_yb8P.
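A one-function sketch of the stopping test in Equation (34); the guard against a zero denominator is our addition.

```python
# Hedged sketch of the relative-change stopping criterion, Eq. (34).
import numpy as np

def converged(x_t, x_prev, xi):
    num = np.sum((x_t - x_prev) ** 2)
    den = max(np.sum(x_prev ** 2), 1e-30)    # guard against division by zero
    return (num / den) < xi
```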

4.1. Parameter Setting

Before presenting the experimental results, we briefly introduce the parameter settings of the proposed image restoration algorithms. Specifically, we consider two scenarios of interest in image inpainting, namely random pixel corruption and text inlayed. The parameters of the proposed SPG-SC model for image inpainting are set as follows. The size of each patch $b \times b$ is set to $8 \times 8$, the search window for similar patches $L \times L$ is set to $25 \times 25$, $\sigma_n = 2$, and the number of similar patches $m$ is set to 60. The four small constants ($\varepsilon_P$, $\varepsilon_G$, $\epsilon_P$, $\epsilon_G$) are set to (0.1, 0.1, $e^{-14}$, 0.4) to avoid division by zero. The parameters ($\mu_1$, $\mu_2$, $\eta$, $\tau$, $p$, $\xi$) are set to (0.00009, 0.0007, 0.8, 0.6, 0.7, 0.0030), (0.0001, 0.0007, 0.4, 1.1, 0.6, 0.0032), (0.0001, 0.0009, 1, 1, 0.55, 0.0024), (0.0002, 0.01, 0.9, 0.3, 1, 0.0004), (0.0001, 0.04, 0.7, 0.9, 1, 0.0001) and (0.0001, 0.0007, 0.4, 1.1, 0.6, 0.0015) for 90%, 80%, 70%, 60%, 50% pixels missing and text inlayed, respectively.
For image deblurring, two types of blur kernels are considered in this paper, a $9 \times 9$ uniform kernel and a Gaussian kernel with standard deviation 1.6. Each blurred image is generated by first applying a blur kernel to the original image and then adding white Gaussian noise with standard deviation $\sigma_n = 2$. The parameter settings of our proposed SPG-SC model for image deblurring are as follows. The size of each patch $b \times b$ is set to $8 \times 8$ and the search window $L \times L$ is set to $20 \times 20$. The number of similar patches $m$ is set to 60. The four small constants ($\varepsilon_P$, $\varepsilon_G$, $\epsilon_P$, $\epsilon_G$) take the same values as in the image inpainting task. The parameters ($\mu_1$, $\mu_2$, $\eta$, $\tau$, $p$, $\xi$) are set to (0.0003, 0.03, 0.2, 1.2, 0.75, 0.00018) and (0.0001, 0.02, 0.1, 0.8, 0.9, 0.00012) for the uniform kernel and the Gaussian kernel, respectively. Moreover, a detailed discussion on how to choose the best power $p$ is provided in Section 4.5.

4.2. Image Inpainting

In this subsection, we apply the proposed SPG-SC algorithm to the image inpainting task. We compare it with nine advanced methods: BPFA [64], IPPO [65], ISD-SB [66], JSM [7], Aloha [60], NGS [67], BKSVD [68], WNNM [40] and TSLRA [69]. It is worth noting that nonlocal redundancies are used in the IPPO, JSM, Aloha, NGS, WNNM and TSLRA methods. BKSVD is a typical PSC method, while JSM and NGS are based on GSC models. WNNM (in this paper, we apply the WNNM model together with the ADMM algorithm to the image inpainting and image deblurring tasks) exploits a low-rank prior via the nuclear norm and achieves state-of-the-art denoising results. TSLRA is also a low-rank method that delivers state-of-the-art image inpainting performance.
The PSNR and SSIM comparison results for a collection of 13 color images in the cases of 80%, 70%, 60%, 50% pixels missing and text inlayed are shown in Table 1 and Table 2, respectively, with the best results highlighted in bold. As can be seen from Table 1, the proposed SPG-SC achieves better results than the other image inpainting methods. On average, SPG-SC enjoys a PSNR gain of 2.79 dB over BPFA, 1.26 dB over IPPO, 6.22 dB over ISD-SB, 1.42 dB over JSM, 1.86 dB over Aloha, 3.56 dB over NGS, 3.52 dB over BKSVD, 0.45 dB over WNNM and 1.82 dB over TSLRA. In particular, under 80%, 70%, 60% and 50% pixels missing, the proposed SPG-SC consistently outperforms the other competing methods for all test images in terms of PSNR. Regarding SSIM, the proposed SPG-SC also achieves better performance than all competing methods in most cases. Concretely, in terms of SSIM, the proposed SPG-SC achieves average gains of 0.0398, 0.0122, 0.1135, 0.0169, 0.0216, 0.0457, 0.0600, 0.0060 and 0.0193 over BPFA, IPPO, ISD-SB, JSM, Aloha, NGS, BKSVD, WNNM and TSLRA, respectively, over all cases.
Apart from the objective measures discussed above, human visual perception is ultimately the judge of image quality, so subjective quality is also critical in evaluating an image restoration algorithm. We show visual comparisons of the images Zebra and Light with 80% pixels missing in Figure 2 and Figure 3, respectively, and a visual comparison of the image Tower with text inlayed in Figure 4. On the whole, we can observe that the BPFA, ISD-SB, JSM, NGS and BKSVD methods are all prone to producing visually annoying blocking artifacts, while the IPPO, Aloha, WNNM and TSLRA methods often over-smooth the images and lose some details. By contrast, our proposed SPG-SC approach preserves fine details and suppresses undesirable visual artifacts more effectively than all competing methods.

4.3. Image Deblurring

In this subsection, we describe the experimental results of the proposed SPG-SC-based image deblurring. We compare it with leading non-blind deblurring methods, including BM3D [70], L0-ABS [71], ASDS [59], EPLL [72], NCSR [6], JSM [7], L2-r-L0 [73], WNNM [40] and NLNCDR [74]. Please note that BM3D, ASDS, EPLL, NCSR, JSM, WNNM and NLNCDR use image NSS priors. BM3D is a well-known method that uses the GSC model for image restoration and delivers state-of-the-art denoising results. NCSR exploits a sparsity residual model under the GSC framework and is one of the state-of-the-art image deblurring algorithms. ASDS and JSM jointly consider local and nonlocal sparsity constraints.
Table 3 reports the PSNR and SSIM results of these approaches for a collection of 14 color images. It can be observed that our proposed SPG-SC achieves better PSNR and SSIM results than the other competing methods in most cases. Specifically, on average, SPG-SC achieves gains of {0.69 dB, 1.08 dB, 0.53 dB, 3.61 dB, 0.16 dB, 1.65 dB, 0.66 dB, 0.09 dB, 0.99 dB} in PSNR and {0.0267, 0.0317, 0.0307, 0.0456, 0.0107, 0.0989, 0.0198, 0.0047, 0.0482} in SSIM over BM3D, L0-ABS, ASDS, EPLL, NCSR, JSM, L2-r-L0, WNNM and NLNCDR, respectively, over all cases.
We also show visual comparisons for image deblurring in Figure 5 and Figure 6. One can clearly see that the ASDS, NCSR, JSM and NLNCDR methods still suffer from undesirable visual artifacts, such as ringing artifacts, while the BM3D, L0-ABS, EPLL, L2-r-L0 and WNNM methods are apt to produce over-smoothing effects, with some details lost in the recovered images. Compared with the competing methods, the proposed SPG-SC not only produces visually pleasant results, but also preserves image details and textures with higher accuracy.
In summary, our proposed SPG-SC approach keeps a good balance between artifact removal and detail preservation, which we attribute to the following aspects: (1) local sparsity and nonlocal sparse representation are considered simultaneously in the proposed SPG-SC method; (2) the dual-weighted $\ell_p$ minimization further enhances the sparse representation capability of the SPG-SC model; and (3) the two devised sub-dictionaries better adapt to local image structures.

4.4. Algorithm Convergence

Since the objective function in Equation (13), which comprises two $\ell_p$ norms, is non-convex, it is quite difficult to provide a theoretical proof of global convergence. In this subsection, we instead give empirical evidence of the convergence of the proposed SPG-SC algorithm. Concretely, the convergence of the proposed SPG-SC algorithm is shown in Figure 7: Figure 7a plots the evolution of PSNR versus the number of iterations for image inpainting with 80% pixels missing (for the images Barbara, Butterfly, Fence and Lily), and Figure 7b plots the evolution of PSNR versus the number of iterations for image deblurring with the Gaussian kernel (for the images Lily, Agaric, Corn and Flowers). We can clearly see that, as the number of iterations grows, the PSNR curves of the recovered images increase monotonically and ultimately become flat and stable. We thus conclude that the proposed SPG-SC algorithm exhibits good convergence behavior.

4.5. Suitable Setting of the Power p

In this subsection, we show how the best performance of the proposed SPG-SC-based image restoration algorithm is obtained by selecting a suitable value of the power $p$. To show the influence of the power $p$ introduced by the proposed dual-weighted $\ell_p$ minimization, we randomly choose 20 images (size: $256 \times 256$) from the Berkeley Segmentation Dataset 200 (BSD200) [75] as test images and run the proposed algorithm with different $p$ values on the image inpainting and image deblurring tasks. The average PSNR results for $p$ values from 0.05 to 1 in steps of 0.05 are shown in Figure 8. From Figure 8a-f, the best PSNR results for image inpainting are achieved by selecting $p$ = 0.70, 0.60, 0.55, 1, 1 and 0.60 for 90%, 80%, 70%, 60%, 50% pixels missing and text inlayed, respectively. From Figure 8g,h, the best reconstruction performance for image deblurring is obtained by choosing $p$ = 0.75 and 0.90 for the uniform kernel and the Gaussian kernel, respectively. We therefore adopt these $p$ values in the corresponding experiments.

5. Conclusions

This paper proposed a novel method for image restoration via simultaneous patch-group sparse coding (SPG-SC) with dual-weighted $\ell_p$ minimization. Compared with existing sparse coding-based methods, the proposed SPG-SC considers local sparsity and nonlocal sparse representation simultaneously. We proposed a dual-weighted $\ell_p$ minimization-based non-convex regularization to improve the sparse representation capability of the SPG-SC model. Two sub-dictionaries were used to better adapt to local image structures, rather than learning an over-complete dictionary from a natural image dataset at high computational cost. To make the optimization tractable, we developed a non-convex generalized iteration shrinkage algorithm based on the alternating direction method of multipliers (ADMM) framework to solve the proposed SPG-SC model. Experimental results on two image restoration applications, image inpainting and image deblurring, demonstrated that the proposed SPG-SC achieves better results than many state-of-the-art algorithms and exhibits a good convergence property.

Author Contributions

Conceptualization, J.Z.; Data curation, L.J.; Formal analysis, Y.T.; Writing—original draft, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62002160, 62072238 and 61703201, in part by the China Postdoctoral Science Foundation (Grant No. 2019M651698), and in part by the Science Foundation of Nanjing Institute of Technology (Grant Nos. YKJ201861 and ZKJ202003).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, H.; He, X.; Tao, D.; Tang, Y.; Wang, R. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognit. 2018, 79, 130–146.
2. Huang, B.; Song, H. Spatiotemporal reflectance fusion via sparse representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716.
3. Ji, H.; Huang, S.; Shen, Z.; Xu, Y. Robust video restoration by joint sparse and low rank matrix approximation. SIAM J. Imag. Sci. 2011, 4, 1122–1142.
4. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 60–65.
5. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Imag. Process. 2007, 16, 2080–2095.
6. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally Centralized Sparse Representation for Image Restoration. IEEE Trans. Imag. Process. 2013, 22, 1620–1630.
7. Zhang, J.; Zhao, D.; Xiong, R.; Ma, S.; Gao, W. Image restoration using joint statistical modeling in a space-transform domain. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 915–928.
8. Ding, D.; Ram, S.; Rodriguez, J.J. Perceptually aware image inpainting. Pattern Recognit. 2018, 83, 174–184.
9. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Joint patch-group based sparse representation for image inpainting. Proc. Mach. Learn. Res. 2018, 95, 145.
10. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Imag. Process. 2014, 23, 3336–3351.
11. Zha, Z.; Liu, X.; Huang, X.; Shi, H.; Xu, Y.; Wang, Q.; Tang, L.; Zhang, X. Analyzing the group sparsity based on the rank minimization methods. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 883–888.
12. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group Sparsity Residual Constraint With Non-Local Priors for Image Restoration. IEEE Trans. Imag. Process. 2020, 29, 8960–8975.
13. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
14. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489.
15. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Imag. Process. 2006, 15, 3736–3745.
16. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Imag. Process. 2010, 19, 2861–2873.
17. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
18. Mairal, J.; Bach, F.R.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 54–62.
19. Wang, Q.; Zhang, X.; Wu, Y.; Tang, L.; Zha, Z. Nonconvex Weighted $\ell_p$ Minimization Based Group Sparse Representation Framework for Image Denoising. IEEE Signal Process. Lett. 2017, 24, 1686–1690.
20. Yang, B.; Ma, A.J.; Yuen, P.C. Learning domain-shared group-sparse representation for unsupervised domain adaptation. Pattern Recognit. 2018, 81, 615–632.
21. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the 2015 International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 244–252.
22. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. A Hybrid Structural Sparsification Error Model for Image Restoration. IEEE Trans. Neural Netw. Learn. Syst. 2021, 3, 1–15.
23. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 210–227.
24. Wen, B.; Ravishankar, S.; Bresler, Y. Structured overcomplete sparsifying transform learning with convergence guarantees and applications. Int. J. Comput. Vis. 2015, 114, 137–167.
25. Gao, S.; Chia, L.T.; Tsang, I.W.H.; Ren, Z. Concurrent single-label image classification and annotation via efficient multi-layer group sparse coding. IEEE Trans. Multimed. 2014, 16, 762–771.
26. Wen, B.; Li, Y.; Bresler, Y. When sparsity meets low-rankness: Transform learning with non-local low-rank constraint for image restoration. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2297–2301.
27. Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image restoration via simultaneous nonlocal self-similarity priors. IEEE Trans. Imag. Process. 2020, 29, 8561–8576.
28. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 689–696.
29. Hu, J.; Tan, Y.P. Nonlinear dictionary learning with application to image classification. Pattern Recognit. 2018, 75, 282–291.
30. Wang, S.; Zhang, L.; Liang, Y.; Pan, Q. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2216–2223.
31. Zhang, Q.; Li, B. Discriminative K-SVD for dictionary learning in face recognition. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2691–2698.
32. Budianto; Lun, D.P.K. Robust Fringe Projection Profilometry via Sparse Representation. IEEE Trans. Imag. Process. 2016, 25, 1726–1739.
33. Elad, M.; Yavneh, I. A Plurality of Sparse Representations Is Better Than the Sparsest One Alone. IEEE Trans. Inf. Theory 2009, 55, 4701–4714.
34. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. A benchmark for sparse coding: When group sparsity meets rank minimization. IEEE Trans. Imag. Process. 2020, 29, 5094–5109.
35. Zhang, L.; Dong, W.; Zhang, D.; Shi, G. Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognit. 2010, 43, 1531–1549.
36. Dong, W.; Zhang, L.; Shi, G. Centralized sparse representation for image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1259–1266.
37. Wang, S.; Zhang, L.; Liang, Y. Nonlocal spectral prior model for low-level vision. In Proceedings of the 2012 Asian Conference on Computer Vision, Daejeon, Korea, 5–9 November 2012; pp. 231–244.
38. Zha, Z.; Yuan, X.; Zhou, J.T.; Zhou, J.; Wen, B.; Zhu, C. The power of triply complementary priors for image compressive sensing. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 983–987.
39. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Imag. Process. 2012, 22, 700–711.
40. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
41. Zhang, J.; Xiong, R.; Zhao, C.; Zhang, Y.; Ma, S.; Gao, W. CONCOLOR: Constrained non-convex low-rank model for image deblocking. IEEE Trans. Imag. Process. 2016, 25, 1246–1259.
42. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. From rank estimation to rank approximation: Rank residual constraint for image restoration. IEEE Trans. Imag. Process. 2019, 29, 3254–3269.
43. Li, M.; Liu, J.; Xiong, Z.; Sun, X.; Guo, Z. Marlow: A joint multiplanar autoregressive and low-rank approach for image completion. In Proceedings of the 2016 European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 819–834.
44. Shi, J.; Ren, X.; Dai, G.; Wang, J.; Zhang, Z. A non-convex relaxation approach to sparse dictionary learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011; pp. 1809–1816.
45. Bioucas-Dias, J.M.; Figueiredo, M.A. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Imag. Process. 2007, 16, 2992–3004.
46. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Imag. Process. 2009, 18, 2419–2434.
47. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imag. Sci. 2009, 2, 323–343.
48. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Imag. Process. 2014, 23, 3618–3632.
49. Donoho, D.L. For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829.
50. Lyu, Q.; Lin, Z.; She, Y.; Zhang, C. A comparison of typical $\ell_p$ minimization algorithms. Neurocomputing 2013, 119, 413–424.
51. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted $\ell_1$ minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
52. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Reconciliation of group sparsity and low-rank models for image restoration. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6.
53. Keller, J.M.; Gray, M.R.; Givens, J.A. A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 580–585.
54. He, B.; Liao, L.Z.; Han, D.; Yang, H. A new inexact alternating directions method for monotone variational inequalities. Math. Program. 2002, 92, 103–118.
55. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
56. Chartrand, R.; Wohlberg, B. A nonconvex ADMM algorithm for group sparsity with sparse groups. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6009–6013.
57. Zuo, W.; Meng, D.; Zhang, L.; Feng, X.; Zhang, D. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 3–6 December 2013; pp. 217–224.
58. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027.
59. Dong, W.; Zhang, L.; Shi, G.; Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Imag. Process. 2011, 20, 1838–1857.
60. Jin, K.H.; Ye, J.C. Annihilating filter-based low-rank Hankel matrix approach for image inpainting. IEEE Trans. Imag. Process. 2015, 24, 3498–3511.
61. Quan, Y.; Huang, Y.; Ji, H. Dynamic texture recognition via orthogonal tensor dictionary learning. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 73–81.
62. Chang, S.G.; Yu, B.; Vetterli, M. Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. Imag. Process. 2000, 9, 1532–1546.
63. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Imag. Process. 2004, 13, 600–612.
64. Zhou, M.; Chen, H.; Paisley, J.; Ren, L.; Li, L.; Xing, Z.; Dunson, D.; Sapiro, G.; Carin, L. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images. IEEE Trans. Imag. Process. 2011, 21, 130–144.
65. Ram, I.; Elad, M.; Cohen, I. Image processing using smooth ordering of its patches. IEEE Trans. Imag. Process. 2013, 22, 2764–2774.
66. He, L.; Wang, Y. Iterative support detection-based split Bregman method for wavelet frame-based image inpainting. IEEE Trans. Imag. Process. 2014, 23, 5470–5485.
67. Liu, H.; Xiong, R.; Zhang, X.; Zhang, Y.; Ma, S.; Gao, W. Nonlocal Gradient Sparsity Regularization for Image Restoration. IEEE Trans. Circuits Syst. Video Technol. 2017, 27, 1909–1921.
68. Serra, J.G.; Testa, M.; Molina, R.; Katsaggelos, A.K. Bayesian K-SVD using fast variational inference. IEEE Trans. Imag. Process. 2017, 26, 3344–3359.
69. Guo, Q.; Gao, S.; Zhang, X.; Yin, Y.; Zhang, C. Patch-Based Image Inpainting via Two-Stage Low Rank Approximation. IEEE Trans. Vis. Comput. Graph. 2018, 24, 2023–2036.
70. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image restoration by sparse 3D transform-domain collaborative filtering. In Image Processing: Algorithms and Systems VI; Proc. SPIE 2008, 6812, 681207.
71. Portilla, J. Image restoration through l0 analysis-based sparse optimization in tight frames. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3909–3912.
72. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
73. Portilla, J.; Tristan-Vega, A.; Selesnick, I.W. Efficient and robust image restoration using multiple-feature L2-relaxed sparse analysis priors. IEEE Trans. Imag. Process. 2015, 24, 5046–5059.
74. Liu, H.; Tan, S. Image Regularizations Based on the Sparsity of Corner Points. IEEE Trans. Imag. Process. 2019, 28, 72–87.
75. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916.
Figure 1. The 22 color images used in our experiments. Top row, from left to right: Mickey, Barbara, Bear, Butterfly, Fence, Haight, Lake, Lena, Light, Leaves, Lily. Bottom row, from left to right: Pepper, Starfish, Man, Tower, Flowers, Nanna, Corn, Agaric, Monk, Zebra, Fireman.
Figure 2. Visual comparison of Zebra by image inpainting with 80% missing pixels. (a) Original image; (b) Degraded image with 80% pixels missing; (c) BPFA [64] (PSNR = 20.90 dB, SSIM = 0.7160); (d) IPPO [65] (PSNR = 22.71 dB, SSIM = 0.7744); (e) ISD-SB [66] (PSNR = 18.41 dB, SSIM = 0.5899); (f) JSM [7] (PSNR = 21.88 dB, SSIM = 0.7556); (g) Aloha [60] (PSNR = 22.72 dB, SSIM = 0.7720); (h) NGS [67] (PSNR = 20.49 dB, SSIM = 0.7132); (i) BKSVD [68] (PSNR = 19.37 dB, SSIM = 0.6912); (j) WNNM [40] (PSNR = 22.67 dB, SSIM = 0.7958); (k) TSLRA [69] (PSNR = 22.37 dB, SSIM = 0.7572); (l) SPG-SC (PSNR = 23.06 dB, SSIM = 0.7966).
Figure 3. Visual comparison of Light by image inpainting with 80% missing pixels. (a) Original image; (b) Degraded image with 80% pixels missing; (c) BPFA [64] (PSNR = 19.26 dB, SSIM = 0.6285); (d) IPPO [65] (PSNR = 21.49 dB, SSIM = 0.7827); (e) ISD-SB [66] (PSNR = 17.48 dB, SSIM = 0.4902); (f) JSM [7] (PSNR = 20.23 dB, SSIM = 0.7254); (g) Aloha [60] (PSNR = 21.50 dB, SSIM = 0.7734); (h) NGS [67] (PSNR = 18.52 dB, SSIM = 0.6041); (i) BKSVD [68] (PSNR = 18.77 dB, SSIM = 0.5792); (j) WNNM [40] (PSNR = 22.09 dB, SSIM = 0.8236); (k) TSLRA [69] (PSNR = 21.73 dB, SSIM = 0.7780); (l) SPG-SC (PSNR = 22.43 dB, SSIM = 0.8318).
Figure 4. Visual comparison of Tower by image inpainting with text inlaid. (a) Original image; (b) Degraded image with text inlaid; (c) BPFA [64] (PSNR = 30.94 dB, SSIM = 0.9530); (d) IPPO [65] (PSNR = 31.91 dB, SSIM = 0.9685); (e) ISD-SB [66] (PSNR = 28.81 dB, SSIM = 0.9345); (f) JSM [7] (PSNR = 32.48 dB, SSIM = 0.9696); (g) Aloha [60] (PSNR = 30.34 dB, SSIM = 0.9550); (h) NGS [67] (PSNR = 30.21 dB, SSIM = 0.9465); (i) BKSVD [68] (PSNR = 30.35 dB, SSIM = 0.9413); (j) WNNM [40] (PSNR = 32.70 dB, SSIM = 0.9727); (k) TSLRA [69] (PSNR = 31.43 dB, SSIM = 0.9633); (l) SPG-SC (PSNR = 33.23 dB, SSIM = 0.9728).
Figure 5. Visual comparison of Lily by image deblurring with uniform kernel. (a) Original image; (b) Noisy and blurred image (9 × 9 uniform kernel, σn = 2); (c) BM3D [70] (PSNR = 28.58 dB, SSIM = 0.8119); (d) L0-ABS [71] (PSNR = 28.05 dB, SSIM = 0.8004); (e) ASDS [59] (PSNR = 29.21 dB, SSIM = 0.8290); (f) EPLL [72] (PSNR = 27.04 dB, SSIM = 0.7981); (g) NCSR [6] (PSNR = 29.39 dB, SSIM = 0.8393); (h) JSM [7] (PSNR = 26.97 dB, SSIM = 0.6924); (i) L2-r-L0 [73] (PSNR = 28.47 dB, SSIM = 0.8155); (j) WNNM [40] (PSNR = 29.25 dB, SSIM = 0.8406); (k) NLNCDR [74] (PSNR = 28.69 dB, SSIM = 0.8039); (l) SPG-SC (PSNR = 29.40 dB, SSIM = 0.8476).
Figure 6. Visual comparison of Agaric by image deblurring with Gaussian kernel. (a) Original image; (b) Noisy and blurred image (fspecial(‘gaussian’, 25, 1.6), σn = 2); (c) BM3D [70] (PSNR = 30.34 dB, SSIM = 0.8368); (d) L0-ABS [71] (PSNR = 30.07 dB, SSIM = 0.8392); (e) ASDS [59] (PSNR = 30.06 dB, SSIM = 0.8113); (f) EPLL [72] (PSNR = 27.69 dB, SSIM = 0.8118); (g) NCSR [6] (PSNR = 30.56 dB, SSIM = 0.8466); (h) JSM [7] (PSNR = 29.96 dB, SSIM = 0.8087); (i) L2-r-L0 [73] (PSNR = 30.30 dB, SSIM = 0.8458); (j) WNNM [40] (PSNR = 30.68 dB, SSIM = 0.8570); (k) NLNCDR [74] (PSNR = 29.92 dB, SSIM = 0.8081); (l) SPG-SC (PSNR = 30.84 dB, SSIM = 0.8603).
Figure 7. Convergence analysis of the proposed algorithm. (a) PSNR results versus iteration number for image inpainting with 80% pixels missing. (b) PSNR results versus iteration number for image deblurring with Gaussian kernel.
Figure 8. Influence of the power p on the image restoration tasks. (a–f) PSNR values versus p for image inpainting; (g,h) PSNR values versus p for image deblurring.
Table 1. PSNR (dB) comparison of BPFA [64], IPPO [65], ISD-SB [66], JSM [7], Aloha [60], NGS [67], BKSVD [68], WNNM [40], TSLRA [69] and SPG-SC for image inpainting.

Pixels Missing = 80%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 24.53 | 25.11 | 24.04 | 26.24 | 19.42 | 23.78 | 29.50 | 19.26 | 27.30 | 29.58 | 26.79 | 23.94 | 20.90 | 24.65
IPPO [65] | 26.33 | 28.32 | 25.13 | 27.98 | 20.90 | 25.56 | 30.64 | 21.49 | 28.33 | 30.48 | 26.30 | 24.50 | 22.71 | 26.05
ISD-SB [66] | 22.25 | 22.35 | 18.57 | 21.39 | 17.00 | 18.70 | 26.01 | 17.48 | 24.53 | 25.19 | 23.00 | 21.47 | 18.41 | 21.26
JSM [7] | 26.09 | 26.95 | 25.57 | 28.59 | 21.37 | 26.18 | 30.46 | 20.23 | 27.99 | 30.48 | 27.07 | 24.59 | 21.88 | 25.96
Aloha [60] | 25.33 | 29.59 | 24.88 | 28.88 | 20.62 | 25.90 | 30.89 | 21.50 | 27.70 | 29.95 | 26.33 | 23.88 | 22.72 | 26.01
NGS [67] | 24.50 | 23.88 | 23.85 | 25.26 | 18.76 | 23.87 | 28.87 | 18.52 | 27.08 | 29.35 | 26.17 | 23.47 | 20.49 | 24.16
BKSVD [68] | 23.72 | 25.21 | 22.00 | 24.20 | 18.83 | 22.05 | 28.16 | 18.77 | 26.49 | 27.75 | 25.36 | 22.93 | 19.37 | 23.45
WNNM [40] | 26.66 | 30.49 | 26.46 | 29.74 | 21.43 | 27.10 | 30.99 | 22.09 | 28.94 | 30.74 | 27.66 | 24.60 | 22.67 | 26.89
TSLRA [69] | 25.71 | 28.22 | 25.32 | 28.83 | 20.85 | 25.47 | 30.58 | 21.73 | 28.17 | 29.31 | 26.84 | 24.26 | 22.37 | 25.97
SPG-SC | 26.75 | 30.69 | 26.47 | 29.80 | 21.83 | 27.16 | 31.41 | 22.43 | 28.94 | 31.55 | 28.03 | 24.64 | 23.06 | 27.14

Pixels Missing = 70%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 26.16 | 28.32 | 26.68 | 28.87 | 21.46 | 26.98 | 31.62 | 21.58 | 29.30 | 31.74 | 28.93 | 25.66 | 22.78 | 26.93
IPPO [65] | 28.59 | 30.89 | 27.68 | 30.08 | 23.02 | 28.58 | 32.97 | 23.47 | 30.28 | 33.05 | 28.91 | 26.11 | 24.76 | 28.34
ISD-SB [66] | 24.40 | 23.56 | 22.65 | 23.16 | 18.89 | 21.85 | 28.16 | 18.70 | 26.46 | 28.37 | 25.09 | 23.18 | 20.17 | 23.43
JSM [7] | 28.25 | 30.48 | 27.97 | 30.46 | 23.01 | 29.28 | 32.69 | 23.12 | 29.83 | 33.47 | 29.36 | 26.64 | 23.95 | 28.35
Aloha [60] | 27.11 | 32.40 | 27.29 | 30.57 | 22.12 | 29.04 | 32.80 | 23.17 | 29.58 | 32.76 | 28.22 | 25.77 | 24.55 | 28.11
NGS [67] | 26.68 | 26.11 | 26.36 | 27.32 | 21.03 | 26.44 | 30.77 | 20.78 | 28.83 | 31.59 | 28.35 | 25.22 | 22.71 | 26.32
BKSVD [68] | 26.17 | 27.58 | 25.00 | 28.35 | 21.12 | 25.29 | 30.96 | 20.85 | 28.65 | 30.96 | 27.79 | 25.07 | 23.06 | 26.22
WNNM [40] | 29.16 | 33.05 | 29.19 | 31.55 | 23.56 | 30.55 | 33.32 | 24.00 | 30.79 | 33.49 | 30.07 | 26.61 | 24.75 | 29.24
TSLRA [69] | 27.64 | 30.79 | 27.76 | 30.75 | 22.61 | 28.03 | 32.64 | 23.43 | 29.92 | 32.72 | 28.78 | 26.05 | 24.25 | 28.11
SPG-SC | 29.29 | 34.20 | 29.22 | 31.80 | 23.96 | 30.64 | 33.68 | 24.30 | 31.26 | 34.79 | 30.57 | 26.81 | 25.22 | 29.67

Pixels Missing = 60%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 27.83 | 31.06 | 28.88 | 30.79 | 23.33 | 29.83 | 33.54 | 23.62 | 31.35 | 34.20 | 30.98 | 27.28 | 24.53 | 29.02
IPPO [65] | 30.76 | 33.55 | 29.85 | 32.14 | 25.34 | 30.88 | 34.89 | 25.13 | 32.17 | 35.16 | 31.09 | 27.81 | 26.79 | 30.43
ISD-SB [66] | 26.59 | 24.86 | 25.07 | 25.30 | 21.02 | 24.55 | 30.52 | 19.81 | 28.23 | 30.68 | 27.36 | 24.95 | 22.34 | 25.48
JSM [7] | 29.85 | 33.21 | 29.83 | 32.23 | 24.70 | 31.47 | 34.56 | 24.83 | 31.59 | 35.47 | 31.40 | 28.09 | 25.90 | 30.24
Aloha [60] | 28.59 | 35.13 | 29.16 | 32.33 | 23.58 | 31.41 | 34.72 | 24.47 | 31.47 | 35.00 | 30.19 | 27.16 | 26.24 | 29.96
NGS [67] | 28.09 | 28.24 | 28.37 | 30.11 | 22.81 | 28.87 | 32.81 | 22.78 | 30.53 | 33.59 | 30.26 | 27.04 | 24.39 | 28.30
BKSVD [68] | 28.53 | 29.86 | 27.70 | 30.72 | 23.39 | 28.61 | 33.48 | 23.00 | 31.00 | 33.44 | 29.99 | 26.68 | 25.27 | 28.59
WNNM [40] | 31.23 | 35.61 | 31.27 | 33.18 | 25.87 | 32.89 | 35.06 | 25.43 | 32.80 | 35.49 | 32.28 | 28.10 | 27.07 | 31.25
TSLRA [69] | 29.28 | 33.37 | 29.42 | 32.32 | 24.21 | 30.19 | 34.26 | 24.80 | 31.55 | 34.96 | 30.69 | 27.60 | 25.92 | 29.89
SPG-SC | 31.44 | 36.82 | 31.60 | 33.66 | 26.34 | 33.56 | 36.01 | 25.68 | 33.34 | 36.87 | 33.05 | 28.55 | 27.33 | 31.87

Pixels Missing = 50%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 29.43 | 34.01 | 30.98 | 32.82 | 25.40 | 32.79 | 35.61 | 25.73 | 33.41 | 36.44 | 33.13 | 28.83 | 26.37 | 31.15
IPPO [65] | 32.74 | 35.91 | 31.69 | 33.95 | 27.53 | 33.32 | 36.50 | 26.70 | 34.04 | 36.91 | 33.10 | 29.57 | 28.42 | 32.34
ISD-SB [66] | 27.96 | 26.57 | 27.76 | 27.60 | 22.92 | 26.97 | 32.04 | 21.17 | 30.05 | 32.43 | 29.16 | 26.75 | 23.91 | 27.33
JSM [7] | 31.96 | 35.87 | 31.47 | 33.75 | 26.67 | 33.78 | 36.39 | 26.48 | 33.46 | 37.35 | 33.24 | 29.48 | 27.77 | 32.13
Aloha [60] | 30.33 | 37.46 | 30.78 | 33.79 | 25.16 | 34.01 | 36.41 | 25.84 | 33.33 | 36.88 | 31.85 | 28.71 | 27.67 | 31.71
NGS [67] | 29.75 | 30.93 | 30.28 | 32.00 | 24.50 | 31.23 | 34.56 | 24.62 | 32.31 | 35.59 | 32.10 | 28.53 | 26.03 | 30.19
BKSVD [68] | 29.95 | 33.58 | 29.64 | 32.44 | 25.23 | 31.25 | 35.44 | 24.68 | 32.93 | 35.87 | 31.99 | 28.27 | 26.97 | 30.63
WNNM [40] | 33.67 | 37.47 | 33.00 | 34.53 | 28.36 | 35.41 | 36.80 | 27.28 | 34.74 | 37.26 | 34.27 | 29.83 | 29.07 | 33.21
TSLRA [69] | 31.00 | 35.74 | 31.01 | 33.89 | 26.02 | 32.56 | 35.52 | 26.27 | 33.20 | 36.61 | 32.44 | 29.14 | 27.67 | 31.62
SPG-SC | 34.00 | 39.11 | 33.25 | 35.22 | 28.90 | 36.26 | 37.81 | 27.37 | 35.42 | 38.60 | 34.98 | 30.20 | 29.33 | 33.88

Text Inlaid
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 31.70 | 34.27 | 31.71 | 32.23 | 26.64 | 31.78 | 35.27 | 28.63 | 35.18 | 37.50 | 33.88 | 30.94 | 27.04 | 32.06
IPPO [65] | 34.04 | 37.65 | 33.98 | 35.10 | 29.10 | 35.26 | 37.29 | 29.92 | 36.67 | 39.42 | 35.35 | 31.91 | 29.99 | 34.28
ISD-SB [66] | 29.96 | 30.43 | 28.09 | 27.62 | 24.61 | 27.64 | 33.13 | 24.94 | 32.72 | 34.70 | 31.38 | 28.81 | 24.96 | 29.15
JSM [7] | 32.99 | 37.79 | 33.19 | 35.41 | 28.69 | 35.40 | 36.98 | 29.65 | 35.67 | 39.27 | 35.17 | 32.48 | 29.11 | 33.98
Aloha [60] | 30.49 | 39.16 | 31.58 | 34.94 | 26.21 | 34.74 | 36.03 | 28.38 | 34.47 | 37.40 | 32.06 | 30.34 | 28.87 | 32.67
NGS [67] | 31.10 | 33.57 | 31.78 | 28.73 | 26.16 | 30.05 | 34.71 | 27.26 | 34.00 | 35.61 | 33.02 | 30.21 | 26.24 | 30.96
BKSVD [68] | 31.43 | 35.16 | 29.09 | 31.69 | 26.59 | 29.74 | 34.66 | 27.77 | 34.01 | 34.90 | 32.83 | 30.35 | 27.91 | 31.24
WNNM [40] | 34.51 | 39.58 | 34.50 | 36.25 | 29.93 | 36.34 | 37.31 | 30.31 | 36.68 | 39.73 | 36.18 | 32.70 | 29.86 | 34.91
TSLRA [69] | 32.43 | 37.78 | 32.64 | 35.23 | 28.21 | 33.66 | 32.76 | 29.41 | 35.42 | 37.02 | 34.50 | 31.43 | 28.95 | 33.03
SPG-SC | 34.87 | 39.94 | 34.54 | 37.07 | 30.43 | 36.24 | 37.73 | 30.26 | 36.56 | 39.84 | 36.27 | 33.23 | 30.46 | 35.19
Table 2. SSIM comparison of BPFA [64], IPPO [65], ISD-SB [66], JSM [7], Aloha [60], NGS [67], BKSVD [68], WNNM [40], TSLRA [69] and SPG-SC for image inpainting.

Pixels Missing = 80%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 0.8117 | 0.8042 | 0.8517 | 0.7960 | 0.7307 | 0.8557 | 0.8899 | 0.6285 | 0.8234 | 0.9127 | 0.8379 | 0.7790 | 0.7160 | 0.8029
IPPO [65] | 0.8678 | 0.8834 | 0.8995 | 0.8614 | 0.8251 | 0.9119 | 0.9085 | 0.7827 | 0.8587 | 0.9238 | 0.8243 | 0.8217 | 0.7744 | 0.8572
ISD-SB [66] | 0.7506 | 0.6442 | 0.7363 | 0.5994 | 0.6403 | 0.6941 | 0.8071 | 0.4902 | 0.7059 | 0.8335 | 0.7035 | 0.6595 | 0.5899 | 0.6811
JSM [7] | 0.8598 | 0.8354 | 0.9026 | 0.8530 | 0.8320 | 0.9213 | 0.8988 | 0.7254 | 0.8418 | 0.9224 | 0.8383 | 0.8257 | 0.7556 | 0.8471
Aloha [60] | 0.8300 | 0.9118 | 0.8805 | 0.8699 | 0.7955 | 0.9085 | 0.9095 | 0.7734 | 0.8402 | 0.9177 | 0.8217 | 0.8090 | 0.7720 | 0.8492
NGS [67] | 0.8230 | 0.7594 | 0.8635 | 0.7898 | 0.7351 | 0.8687 | 0.8767 | 0.6041 | 0.8174 | 0.9117 | 0.8272 | 0.7783 | 0.7132 | 0.7976
BKSVD [68] | 0.7713 | 0.7912 | 0.7817 | 0.7833 | 0.6951 | 0.7782 | 0.8500 | 0.5792 | 0.7804 | 0.8759 | 0.7741 | 0.7344 | 0.6912 | 0.7605
WNNM [40] | 0.8738 | 0.9148 | 0.9184 | 0.8717 | 0.8526 | 0.9319 | 0.8968 | 0.8236 | 0.8615 | 0.9146 | 0.8435 | 0.8426 | 0.7958 | 0.8724
TSLRA [69] | 0.8536 | 0.8786 | 0.8928 | 0.8679 | 0.8119 | 0.9029 | 0.9001 | 0.7780 | 0.8460 | 0.9166 | 0.8311 | 0.8078 | 0.7572 | 0.8496
SPG-SC | 0.8770 | 0.9162 | 0.9241 | 0.8780 | 0.8555 | 0.9375 | 0.9171 | 0.8318 | 0.8719 | 0.9353 | 0.8660 | 0.8365 | 0.7966 | 0.8803

Pixels Missing = 70%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 0.8661 | 0.8919 | 0.9124 | 0.8726 | 0.8269 | 0.9276 | 0.9269 | 0.7864 | 0.8856 | 0.9435 | 0.8942 | 0.8519 | 0.8042 | 0.8762
IPPO [65] | 0.9151 | 0.9334 | 0.9356 | 0.9042 | 0.8878 | 0.9538 | 0.9422 | 0.8612 | 0.9088 | 0.9518 | 0.8923 | 0.8771 | 0.8498 | 0.9087
ISD-SB [66] | 0.8251 | 0.7259 | 0.8541 | 0.7131 | 0.7430 | 0.8222 | 0.8647 | 0.6222 | 0.7955 | 0.8971 | 0.7957 | 0.7508 | 0.7047 | 0.7780
JSM [7] | 0.9064 | 0.9228 | 0.9377 | 0.8996 | 0.8831 | 0.9581 | 0.9354 | 0.8528 | 0.8935 | 0.9534 | 0.8954 | 0.8860 | 0.8344 | 0.9045
Aloha [60] | 0.8797 | 0.9505 | 0.9205 | 0.9105 | 0.8557 | 0.9549 | 0.9420 | 0.8496 | 0.8934 | 0.9493 | 0.8793 | 0.8738 | 0.8432 | 0.9002
NGS [67] | 0.8791 | 0.8556 | 0.9145 | 0.8607 | 0.8303 | 0.9233 | 0.9145 | 0.7538 | 0.8728 | 0.9414 | 0.8856 | 0.8478 | 0.8063 | 0.8681
BKSVD [68] | 0.8509 | 0.8775 | 0.8753 | 0.8615 | 0.8013 | 0.8896 | 0.9094 | 0.7331 | 0.8579 | 0.9264 | 0.8552 | 0.8247 | 0.7941 | 0.8505
WNNM [40] | 0.9195 | 0.9449 | 0.9473 | 0.9098 | 0.9037 | 0.9641 | 0.9358 | 0.8860 | 0.9052 | 0.9454 | 0.8997 | 0.8962 | 0.8584 | 0.9166
TSLRA [69] | 0.8993 | 0.9298 | 0.9347 | 0.9071 | 0.8717 | 0.9452 | 0.9367 | 0.8502 | 0.8949 | 0.9499 | 0.8862 | 0.8705 | 0.8321 | 0.9006
SPG-SC | 0.9227 | 0.9582 | 0.9520 | 0.9201 | 0.9065 | 0.9696 | 0.9489 | 0.8902 | 0.9208 | 0.9618 | 0.9167 | 0.8965 | 0.8665 | 0.9254

Pixels Missing = 60%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 0.9033 | 0.9394 | 0.9436 | 0.9125 | 0.8844 | 0.9615 | 0.9498 | 0.8662 | 0.9260 | 0.9620 | 0.9280 | 0.8971 | 0.8647 | 0.9184
IPPO [65] | 0.9425 | 0.9598 | 0.9566 | 0.9346 | 0.9287 | 0.9726 | 0.9601 | 0.9057 | 0.9394 | 0.9672 | 0.9290 | 0.9146 | 0.9001 | 0.9393
ISD-SB [66] | 0.8749 | 0.7969 | 0.9017 | 0.7994 | 0.8260 | 0.8953 | 0.9049 | 0.7097 | 0.8552 | 0.9291 | 0.8587 | 0.8225 | 0.7994 | 0.8441
JSM [7] | 0.9327 | 0.9554 | 0.9570 | 0.9296 | 0.9195 | 0.9751 | 0.9557 | 0.9010 | 0.9286 | 0.9682 | 0.9293 | 0.9182 | 0.8856 | 0.9351
Aloha [60] | 0.9127 | 0.9697 | 0.9428 | 0.9385 | 0.8968 | 0.9736 | 0.9594 | 0.8910 | 0.9288 | 0.9657 | 0.9171 | 0.9072 | 0.8899 | 0.9302
NGS [67] | 0.9119 | 0.9099 | 0.9451 | 0.9086 | 0.8842 | 0.9556 | 0.9443 | 0.8452 | 0.9115 | 0.9602 | 0.9222 | 0.8982 | 0.8630 | 0.9123
BKSVD [68] | 0.9050 | 0.9324 | 0.9266 | 0.9075 | 0.8780 | 0.9480 | 0.9409 | 0.8414 | 0.9121 | 0.9515 | 0.9059 | 0.8790 | 0.8626 | 0.9070
WNNM [40] | 0.9450 | 0.9651 | 0.9630 | 0.9376 | 0.9381 | 0.9780 | 0.9525 | 0.9175 | 0.9379 | 0.9604 | 0.9315 | 0.9241 | 0.9060 | 0.9428
TSLRA [69] | 0.9263 | 0.9577 | 0.9531 | 0.9343 | 0.9104 | 0.9666 | 0.9555 | 0.8934 | 0.9282 | 0.9654 | 0.9231 | 0.9086 | 0.8830 | 0.9312
SPG-SC | 0.9485 | 0.9750 | 0.9677 | 0.9461 | 0.9401 | 0.9840 | 0.9667 | 0.9209 | 0.9500 | 0.9739 | 0.9453 | 0.9282 | 0.9106 | 0.9505

Pixels Missing = 50%
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 0.9312 | 0.9633 | 0.9617 | 0.9390 | 0.9226 | 0.9795 | 0.9658 | 0.9169 | 0.9516 | 0.9723 | 0.9510 | 0.9260 | 0.9076 | 0.9453
IPPO [65] | 0.9606 | 0.9749 | 0.9697 | 0.9550 | 0.9540 | 0.9832 | 0.9723 | 0.9350 | 0.9599 | 0.9769 | 0.9531 | 0.9413 | 0.9301 | 0.9589
ISD-SB [66] | 0.9077 | 0.8562 | 0.9364 | 0.8658 | 0.8792 | 0.9355 | 0.9298 | 0.7968 | 0.8989 | 0.9484 | 0.9002 | 0.8756 | 0.8558 | 0.8913
JSM [7] | 0.9537 | 0.9741 | 0.9695 | 0.9502 | 0.9459 | 0.9846 | 0.9707 | 0.9322 | 0.9528 | 0.9775 | 0.9518 | 0.9411 | 0.9218 | 0.9558
Aloha [60] | 0.9371 | 0.9815 | 0.9580 | 0.9555 | 0.9244 | 0.9850 | 0.9719 | 0.9212 | 0.9525 | 0.9764 | 0.9418 | 0.9323 | 0.9197 | 0.9505
NGS [67] | 0.9386 | 0.9469 | 0.9630 | 0.9376 | 0.9190 | 0.9734 | 0.9625 | 0.8983 | 0.9408 | 0.9735 | 0.9464 | 0.9296 | 0.9035 | 0.9410
BKSVD [68] | 0.9302 | 0.9561 | 0.9487 | 0.9317 | 0.9146 | 0.9713 | 0.9576 | 0.8926 | 0.9409 | 0.9630 | 0.9341 | 0.9100 | 0.9003 | 0.9347
WNNM [40] | 0.9651 | 0.9756 | 0.9765 | 0.9541 | 0.9610 | 0.9868 | 0.9674 | 0.9447 | 0.9592 | 0.9718 | 0.9535 | 0.9473 | 0.9369 | 0.9615
TSLRA [69] | 0.9480 | 0.9743 | 0.9672 | 0.9536 | 0.9395 | 0.9803 | 0.9702 | 0.9261 | 0.9518 | 0.9758 | 0.9490 | 0.9361 | 0.9192 | 0.9532
SPG-SC | 0.9661 | 0.9845 | 0.9768 | 0.9621 | 0.9623 | 0.9906 | 0.9776 | 0.9465 | 0.9678 | 0.9814 | 0.9631 | 0.9495 | 0.9396 | 0.9668

Text Inlaid
Images | Mickey | Barbara | Butterfly | Fence | Haight | Leaves | Lena | Light | Lily | Pepper | Starfish | Tower | Zebra | Average
BPFA [64] | 0.9605 | 0.9658 | 0.9695 | 0.9555 | 0.9482 | 0.9721 | 0.9688 | 0.9530 | 0.9660 | 0.9782 | 0.9630 | 0.9530 | 0.9362 | 0.9608
IPPO [65] | 0.9778 | 0.9841 | 0.9850 | 0.9764 | 0.9751 | 0.9892 | 0.9811 | 0.9675 | 0.9771 | 0.9881 | 0.9755 | 0.9685 | 0.9587 | 0.9772
ISD-SB [66] | 0.9502 | 0.9336 | 0.9586 | 0.9195 | 0.9358 | 0.9501 | 0.9549 | 0.9077 | 0.9456 | 0.9721 | 0.9475 | 0.9345 | 0.9141 | 0.9403
JSM [7] | 0.9727 | 0.9828 | 0.9836 | 0.9747 | 0.9729 | 0.9892 | 0.9792 | 0.9655 | 0.9715 | 0.9870 | 0.9732 | 0.9696 | 0.9535 | 0.9750
Aloha [60] | 0.9530 | 0.9858 | 0.9653 | 0.9723 | 0.9492 | 0.9853 | 0.9755 | 0.9541 | 0.9630 | 0.9803 | 0.9528 | 0.9550 | 0.9429 | 0.9642
NGS [67] | 0.9527 | 0.9633 | 0.9759 | 0.9412 | 0.9523 | 0.9558 | 0.9654 | 0.9396 | 0.9551 | 0.9639 | 0.9633 | 0.9465 | 0.9337 | 0.9545
BKSVD [68] | 0.9474 | 0.9639 | 0.9463 | 0.9479 | 0.9423 | 0.9551 | 0.9584 | 0.9413 | 0.9524 | 0.9610 | 0.9479 | 0.9413 | 0.9359 | 0.9493
WNNM [40] | 0.9784 | 0.9866 | 0.9865 | 0.9789 | 0.9785 | 0.9902 | 0.9806 | 0.9714 | 0.9769 | 0.9876 | 0.9762 | 0.9727 | 0.9604 | 0.9788
TSLRA [69] | 0.9690 | 0.9829 | 0.9801 | 0.9725 | 0.9691 | 0.9843 | 0.9695 | 0.9621 | 0.9696 | 0.9832 | 0.9697 | 0.9633 | 0.9494 | 0.9711
SPG-SC | 0.9779 | 0.9878 | 0.9869 | 0.9797 | 0.9784 | 0.9908 | 0.9819 | 0.9708 | 0.9765 | 0.9884 | 0.9778 | 0.9728 | 0.9607 | 0.9793
Table 3. PSNR (dB) and SSIM comparison of BM3D [70], L0-ABS [71], ASDS [59], EPLL [72], NCSR [6], JSM [7], L2-r-L0 [73], WNNM [40], NLNCDR [74] and SPG-SC for image deblurring. For each method, the first row lists PSNR (dB) and the second row lists SSIM.

9 × 9 Uniform Kernel, σn = 2
Images | Barbara | Bear | Fence | Lake | Lena | Lily | Flowers | Nanna | Corn | Agaric | Monk | Zebra | Man | Fireman | Average
BM3D [70] | 26.89 | 30.49 | 28.94 | 27.32 | 30.35 | 28.58 | 28.54 | 26.42 | 26.75 | 29.02 | 34.33 | 23.68 | 27.31 | 26.53 | 28.22
BM3D [70] (SSIM) | 0.7814 | 0.8074 | 0.8325 | 0.8230 | 0.8563 | 0.8119 | 0.8022 | 0.8001 | 0.8406 | 0.7695 | 0.8979 | 0.7561 | 0.7331 | 0.7435 | 0.8040
L0-ABS [71] | 25.57 | 30.84 | 27.41 | 27.33 | 30.15 | 28.05 | 28.42 | 25.99 | 26.13 | 28.74 | 34.46 | 22.54 | 27.15 | 26.47 | 27.80
L0-ABS [71] (SSIM) | 0.7344 | 0.8246 | 0.7990 | 0.8289 | 0.8597 | 0.8004 | 0.7999 | 0.7925 | 0.8201 | 0.7629 | 0.9033 | 0.7353 | 0.7290 | 0.7490 | 0.7956
ASDS [59] | 26.86 | 31.27 | 29.48 | 27.88 | 31.22 | 29.21 | 29.10 | 27.01 | 27.31 | 29.52 | 35.38 | 24.17 | 27.78 | 27.32 | 28.82
ASDS [59] (SSIM) | 0.7938 | 0.8333 | 0.8468 | 0.8344 | 0.8795 | 0.8290 | 0.8159 | 0.8261 | 0.8525 | 0.7954 | 0.9185 | 0.7844 | 0.7682 | 0.7801 | 0.8256
EPLL [72] | 23.64 | 28.84 | 25.69 | 25.09 | 28.10 | 27.04 | 26.33 | 24.04 | 24.54 | 28.05 | 33.34 | 22.46 | 25.53 | 25.15 | 26.27
EPLL [72] (SSIM) | 0.7308 | 0.8250 | 0.7917 | 0.8287 | 0.8634 | 0.7981 | 0.8006 | 0.7961 | 0.8169 | 0.7585 | 0.9139 | 0.7364 | 0.7193 | 0.7423 | 0.7944
NCSR [6] | 27.10 | 31.14 | 29.84 | 28.12 | 31.27 | 29.39 | 29.29 | 27.07 | 27.89 | 29.56 | 35.04 | 24.64 | 27.91 | 27.40 | 28.98
NCSR [6] (SSIM) | 0.7988 | 0.8264 | 0.8569 | 0.8472 | 0.8760 | 0.8393 | 0.8276 | 0.8286 | 0.8699 | 0.7980 | 0.9028 | 0.7984 | 0.7747 | 0.7857 | 0.8307
JSM [7] | 25.72 | 28.32 | 27.26 | 26.22 | 28.05 | 26.97 | 27.15 | 25.47 | 25.69 | 27.42 | 29.99 | 23.32 | 26.36 | 25.71 | 26.69
JSM [7] (SSIM) | 0.6953 | 0.6612 | 0.7456 | 0.7000 | 0.6953 | 0.6924 | 0.6524 | 0.7179 | 0.7751 | 0.6652 | 0.6698 | 0.7055 | 0.6589 | 0.6807 | 0.6940
L2-r-L0 [73] | 26.07 | 31.10 | 27.92 | 27.88 | 30.44 | 28.47 | 28.73 | 26.52 | 27.00 | 29.08 | 35.04 | 23.35 | 27.47 | 26.77 | 28.27
L2-r-L0 [73] (SSIM) | 0.7610 | 0.8324 | 0.8167 | 0.8457 | 0.8712 | 0.8155 | 0.8125 | 0.8145 | 0.8479 | 0.7815 | 0.9185 | 0.7642 | 0.7468 | 0.7626 | 0.8136
WNNM [40] | 27.23 | 31.33 | 30.16 | 28.17 | 31.35 | 29.25 | 29.21 | 26.86 | 28.22 | 29.52 | 35.53 | 24.33 | 27.68 | 27.23 | 29.01
WNNM [40] (SSIM) | 0.8065 | 0.8389 | 0.8570 | 0.8571 | 0.8898 | 0.8406 | 0.8336 | 0.8277 | 0.8825 | 0.7973 | 0.9257 | 0.7846 | 0.7602 | 0.7813 | 0.8345
NLNCDR [74] | 26.22 | 30.75 | 28.23 | 27.58 | 30.44 | 28.69 | 28.73 | 26.59 | 26.68 | 29.16 | 33.73 | 23.36 | 27.65 | 27.02 | 28.20
NLNCDR [74] (SSIM) | 0.7552 | 0.8005 | 0.8181 | 0.8087 | 0.8380 | 0.8039 | 0.7889 | 0.8026 | 0.8300 | 0.7715 | 0.8581 | 0.7548 | 0.7512 | 0.7647 | 0.7962
SPG-SC | 27.51 | 31.37 | 30.12 | 28.21 | 31.42 | 29.40 | 29.35 | 27.02 | 27.96 | 29.61 | 35.80 | 24.60 | 27.90 | 27.24 | 29.11
SPG-SC (SSIM) | 0.8187 | 0.8408 | 0.8619 | 0.8592 | 0.8914 | 0.8476 | 0.8431 | 0.8357 | 0.8775 | 0.8074 | 0.9287 | 0.7990 | 0.7796 | 0.7888 | 0.8414

Gaussian Kernel: fspecial(‘gaussian’, 25, 1.6), σn = 2
Images | Barbara | Bear | Fence | Lake | Lena | Lily | Flowers | Nanna | Corn | Agaric | Monk | Zebra | Man | Fireman | Average
BM3D [70] | 25.77 | 31.99 | 27.31 | 29.17 | 32.24 | 30.41 | 29.84 | 27.92 | 28.91 | 30.34 | 36.91 | 24.64 | 28.00 | 27.80 | 29.37
BM3D [70] (SSIM) | 0.7987 | 0.8618 | 0.7978 | 0.8836 | 0.9028 | 0.8701 | 0.8592 | 0.8652 | 0.8970 | 0.8368 | 0.9337 | 0.8127 | 0.7733 | 0.8138 | 0.8505
L0-ABS [71] | 23.86 | 32.03 | 26.07 | 29.06 | 32.15 | 30.54 | 29.61 | 27.45 | 28.75 | 30.07 | 37.38 | 24.01 | 27.69 | 27.56 | 29.02
L0-ABS [71] (SSIM) | 0.7151 | 0.8776 | 0.7747 | 0.8934 | 0.9115 | 0.8801 | 0.8612 | 0.8660 | 0.9067 | 0.8392 | 0.9474 | 0.8055 | 0.7785 | 0.8236 | 0.8486
ASDS [59] | 25.33 | 31.60 | 26.96 | 29.01 | 31.91 | 30.28 | 29.63 | 27.96 | 29.15 | 30.06 | 35.10 | 24.71 | 27.94 | 27.87 | 29.11
ASDS [59] (SSIM) | 0.7586 | 0.8246 | 0.7710 | 0.8450 | 0.8679 | 0.8368 | 0.8102 | 0.8460 | 0.8863 | 0.8113 | 0.8744 | 0.7997 | 0.7550 | 0.8042 | 0.8208
EPLL [72] | 22.09 | 27.67 | 23.87 | 22.61 | 28.01 | 26.67 | 25.09 | 23.82 | 23.89 | 27.69 | 34.37 | 22.58 | 24.81 | 23.72 | 25.49
EPLL [72] (SSIM) | 0.6914 | 0.8570 | 0.7517 | 0.8562 | 0.8976 | 0.8519 | 0.8389 | 0.8322 | 0.8623 | 0.8118 | 0.9428 | 0.7813 | 0.7487 | 0.7854 | 0.8221
NCSR [6] | 25.93 | 32.24 | 27.41 | 29.46 | 32.65 | 30.81 | 30.20 | 28.22 | 29.69 | 30.56 | 36.92 | 25.05 | 28.25 | 28.15 | 29.68
NCSR [6] (SSIM) | 0.7854 | 0.8621 | 0.8051 | 0.8865 | 0.9036 | 0.8773 | 0.8616 | 0.8681 | 0.9079 | 0.8466 | 0.9257 | 0.8279 | 0.7902 | 0.8294 | 0.8555
JSM [7] | 25.89 | 31.31 | 27.08 | 28.97 | 31.48 | 30.03 | 29.52 | 27.84 | 29.01 | 29.96 | 34.43 | 24.66 | 27.88 | 27.76 | 28.99
JSM [7] (SSIM) | 0.7592 | 0.8135 | 0.7719 | 0.8461 | 0.8487 | 0.8343 | 0.8082 | 0.8399 | 0.8861 | 0.8087 | 0.8481 | 0.8008 | 0.7543 | 0.8051 | 0.8161
L2-r-L0 [73] | 24.18 | 32.44 | 26.50 | 29.51 | 32.53 | 30.53 | 29.97 | 27.97 | 29.40 | 30.30 | 37.78 | 24.23 | 28.12 | 27.87 | 29.38
L2-r-L0 [73] (SSIM) | 0.7301 | 0.8802 | 0.7850 | 0.8996 | 0.9176 | 0.8813 | 0.8656 | 0.8707 | 0.9111 | 0.8458 | 0.9543 | 0.8126 | 0.7843 | 0.8244 | 0.8545
WNNM [40] | 25.51 | 32.62 | 27.46 | 29.65 | 33.00 | 30.90 | 30.17 | 28.16 | 29.93 | 30.68 | 38.10 | 24.53 | 28.24 | 28.14 | 29.79
WNNM [40] (SSIM) | 0.7669 | 0.8837 | 0.8061 | 0.9017 | 0.9239 | 0.8886 | 0.8741 | 0.8764 | 0.9193 | 0.8570 | 0.9559 | 0.8183 | 0.7897 | 0.8336 | 0.8639
NLNCDR [74] | 24.43 | 31.19 | 26.67 | 28.99 | 31.42 | 30.17 | 29.47 | 27.71 | 28.88 | 29.92 | 34.32 | 24.46 | 27.84 | 27.81 | 28.81
NLNCDR [74] (SSIM) | 0.7295 | 0.8139 | 0.7654 | 0.8490 | 0.8515 | 0.8401 | 0.8132 | 0.8419 | 0.8881 | 0.8081 | 0.8502 | 0.7959 | 0.7587 | 0.8063 | 0.8151
SPG-SC | 26.08 | 32.59 | 27.50 | 29.67 | 33.03 | 30.91 | 30.27 | 28.25 | 29.85 | 30.84 | 37.98 | 24.77 | 28.31 | 28.20 | 29.88
SPG-SC (SSIM) | 0.7894 | 0.8804 | 0.8104 | 0.9002 | 0.9213 | 0.8880 | 0.8751 | 0.8779 | 0.9173 | 0.8603 | 0.9516 | 0.8261 | 0.7955 | 0.8355 | 0.8664
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
