Article

Learning the Hybrid Nonlocal Self-Similarity Prior for Image Restoration

School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1412; https://doi.org/10.3390/math12091412
Submission received: 1 April 2024 / Revised: 22 April 2024 / Accepted: 3 May 2024 / Published: 6 May 2024

Abstract
As an immensely important characteristic of natural images, the nonlocal self-similarity (NSS) prior has demonstrated great promise in a variety of inverse problems. Unfortunately, most current methods utilize either the internal or the external NSS prior, learned from the degraded image or from training images. The former is inevitably disturbed by degradation, while the latter is not adapted to the image to be restored. To mitigate such problems, this work proposes to learn a hybrid NSS (HNSS) prior from both the internal image and external training images and employs it in image restoration tasks. To this end, we first learn internal and external NSS priors from the measured image and from high-quality image sets, respectively. Then, with the learned priors, an efficient method, involving only singular value decomposition (SVD) and a simple weighting scheme, is developed to learn the HNSS prior for patch groups. Subsequently, taking the learned HNSS prior as the dictionary, we formulate a structural sparse representation model with adaptive regularization parameters, called HNSS-SSR, for image restoration, and a general and efficient image restoration algorithm is developed via an alternating minimization strategy. The experimental results indicate that the proposed HNSS-SSR-based restoration method outperforms many existing competing algorithms in terms of PSNR and SSIM values.

1. Introduction

Along with the advancement of various optical technologies and sensors, images have become one of the most important carriers of information. Unfortunately, image degradation is inevitable during acquisition, transmission, and storage because of defects in the imaging system and interference from various external factors. Therefore, image restoration, which strives to reconstruct the underlying uncorrupted image $x$ from the corrupted measurement $y$, is essential in many fields of science and engineering. In general, the image degradation process is modeled as:
$$y = \Phi x + v, \qquad (1)$$
where $\Phi$ denotes the degradation operator and $v$ represents additive white Gaussian noise. In Equation (1), different settings of $\Phi$ correspond to different image restoration problems. Specifically, when $\Phi$ is an identity matrix, Equation (1) becomes image denoising [1]; when $\Phi$ is a blurring matrix, Equation (1) converts to image deblurring [2,3]; and when $\Phi$ is a random projection matrix, Equation (1) denotes image compressive sensing [4,5].
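For concreteness, the three settings of $\Phi$ can be instantiated as in the short sketch below; the operators, image size, and noise level here are illustrative assumptions, not the exact experimental configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
x = rng.random((64, 64))              # stand-in for a clean image x
sigma = 10.0 / 255.0                  # assumed noise level

# Denoising: Phi is the identity matrix.
y_denoise = x + sigma * rng.standard_normal(x.shape)

# Deblurring: Phi is a blurring operator (a 9x9 uniform kernel, as in the
# deblurring experiments of Section 4.2).
y_deblur = uniform_filter(x, size=9) + sigma * rng.standard_normal(x.shape)

# Compressive sensing: Phi is a random projection matrix.
M = 1024                              # assumed number of measurements
Phi = rng.standard_normal((M, x.size)) / np.sqrt(M)
y_cs = Phi @ x.ravel() + sigma * rng.standard_normal(M)
```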
As image restoration in Equation (1) is a typically ill-posed linear inverse problem, an image prior is often required to constrain the solution space. Specifically, from the standpoint of maximum a posteriori (MAP) estimation, the latent high-quality image can be inferred by solving the following regularization problem [2]:
$$\hat{x} = \arg\min_{x} \|y - \Phi x\|_2^2 + \eta \Psi(x), \qquad (2)$$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm, $\|y - \Phi x\|_2^2$ is the fidelity term associated with Gaussian noise, $\Psi(x)$ is the regularization term that relies on the image prior, and $\eta$ is employed to balance these two terms.
Due to the curse of dimensionality, it is almost impossible to model a whole image directly. A remedy is to use the image patch as the basic unit of modeling [6,7]. Thus, over the past few decades, patch-based priors have been extensively studied and have achieved favorable image restoration performance, such as patch-based sparse representation [8,9,10,11,12,13] and patch-based image modeling [6,14,15,16]. Recently, deep learning has also been adopted to learn image priors in a supervised manner and has produced promising results in various image restoration applications [17,18,19,20,21,22]. Both the model-based and deep learning-based approaches mentioned above, however, are dedicated to mining the local properties of images, and their performance is limited because they largely neglect the self-similarity and nonlocal properties of images [1,23,24]. In addition, deep learning methods require a training set consisting of extensive degraded/high-quality image pairs for supervised learning, which renders them difficult to apply or causes undesirable artifacts in some tasks, such as medical imaging and remote sensing [25,26].
As is well known, natural images have rich self-repeating structures in nonlocal regions, i.e., the so-called nonlocal self-similarity (NSS) prior [27,28]. Compared to the patch-based prior, the NSS prior enables us to cluster nonlocal patches with similar patterns over the whole image and use such similar patch groups as the basic unit for restoration, which is especially helpful for recovering image structures [27,28,29], such as textures and edges. Given the great success of the nonlocal means (NLM) [27] method, a series of NSS prior-based methods have been developed successively and have shown impressive restoration effects. These approaches can be broadly summarized into three clusters, i.e., filter-based methods [27,29,30], patch group-based sparse representation methods [4,23,31,32,33,34,35,36,37,38], and low-rank approximation-based methods [2,39,40,41,42,43,44]. Apart from focusing on the internal NSS prior of the corrupted image, some recent approaches have paid attention to exploiting the external NSS prior learned from high-quality natural images [28,45,46]. For example, Xu et al. [28] developed a patch group prior-based denoising (PGPD) method for learning dictionaries from a natural image corpus. Liu et al. [46] formulated an external NSS prior-based group sparsity mixture model for image denoising. Although the aforementioned NSS prior-based methods have shown their potential in recovering image structures, exploiting the internal NSS of the observed image often suffers from overfitting the data corruption [5], while the external NSS prior learned from training images is not well adapted to the image to be recovered [47].
To rectify the weakness of using a single NSS prior, some more recent works proposed to jointly utilize internal and external NSS priors [1,3,5,7,24,26,48,49]. For instance, Zha et al. [1] developed a denoising method based on sparse residuals by using an external NSS prior. Liu et al. [49] proposed a group sparse representation-based super-resolution algorithm to leverage internal and external correlations. Zha et al. [24] proposed to simultaneously use internal and external NSS priors for image restoration. Yuan et al. [3] formulated a joint group dictionary-based structural sparse model for image restoration. Zha et al. [5] developed a hybrid structural sparsification error model to jointly exploit internal and external NSS priors. Yuan et al. [7] suggested the joint use of a low-rank prior and an external NSS prior. These methods have led to promising restoration results, since the complementary information of internal and external NSS priors is exploited.
Unlike the above works, in this paper, we propose to learn a hybrid NSS (HNSS) prior for image restoration. In particular, most existing works concentrate on how to jointly utilize two priors, i.e., the internal and external NSS priors, while this work focuses on developing a new paradigm that learns one HNSS prior from both the internal degraded image and external natural image sets and applies the learned prior to image restoration. The technical route of our proposed method is thus quite different from the existing works above. The flowchart of our method is presented in Figure 1. To the best of our knowledge, how to learn an HNSS prior remains an unsolved problem, and this paper takes a first step toward solving it. We summarize the main contributions as follows:
  • We develop a flexible yet simple approach to learn the HNSS prior from both internal degraded and external natural image sets.
  • An HNSS prior-based structural sparse representation (HNSS-SSR) model with adaptive regularization parameters is formulated for image restoration.
  • A general and efficient image restoration algorithm is developed by employing an alternating minimization strategy to solve the resulting image restoration problem.
  • Extensive experimental results indicate that our proposed HNSS-SSR model outperforms many existing competing algorithms in terms of quantitative and qualitative quality.
The remainder of this paper is organized as follows. Section 2 elaborates on how to learn the HNSS prior. Section 3 formulates an HNSS-SSR model for image restoration. Section 4 presents the experimental results, followed by the conclusion of this paper in Section 5.

2. Learning the Hybrid Nonlocal Self-Similarity Prior

Here, internal and external NSS priors are first learned from the observed image and from training image sets, respectively. Specifically, the Gaussian mixture model (GMM) is employed to learn both priors, since Zoran and Weiss [14,50] have shown that GMMs can learn priors more effectively, i.e., obtaining higher log-likelihood scores and better denoising performance, than other common methods. On this basis, the HNSS prior is then learned via singular value decomposition (SVD) and an efficient yet simple weighting method.

2.1. Learning the Internal NSS Prior from a Degraded Image

Given a degraded image $y$, our desired goal is to learn the NSS prior of its corresponding latent high-quality image $x$, i.e., the internal NSS prior. However, since the underlying original image is unknown, it is first initialized to the degraded image, i.e., $x = y$. Then, we divide $x$ into $N$ overlapped local patches $x_i$ of size $\sqrt{m} \times \sqrt{m}$, and the $n$ most similar patches for each $x_i$ are found to construct a similar patch group $\bar{X}_i = \{x_{i,j}\}_{j=1}^{n}$, where $x_{i,j}$ is a vectorized image patch. Specifically, for patch $x_i$, we compute the Euclidean distance between it and each patch, i.e., $s_{i,j} = \|x_i - x_j\|_2^2$, $j = 1, \ldots, N$, and then select the $n$ patches with the smallest distances as similar patches. In practice, this can be done via the K-Nearest Neighbor (KNN) [51] method.
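A sketch of this grouping step is given below; restricting the KNN search to a local window around the reference patch is a common speed-up that we assume here, and all names and default values are ours rather than the authors'.

```python
import numpy as np

def extract_patch_group(x, i0, j0, psize=7, n=70, window=20):
    """Collect the n patches most similar (in Euclidean distance) to the
    reference patch whose top-left corner is (i0, j0). Searching only a
    local window is a common speed-up; names and defaults are assumptions."""
    H, W = x.shape
    ref = x[i0:i0 + psize, j0:j0 + psize].ravel()
    patches, corners, dists = [], [], []
    for i in range(max(0, i0 - window), min(H - psize, i0 + window) + 1):
        for j in range(max(0, j0 - window), min(W - psize, j0 + window) + 1):
            p = x[i:i + psize, j:j + psize].ravel()
            patches.append(p)
            corners.append((i, j))
            dists.append(np.sum((ref - p) ** 2))   # s_{i,j} = ||x_i - x_j||_2^2
    order = np.argsort(dists)[:n]                  # n smallest distances (KNN)
    X = np.stack([patches[k] for k in order], axis=1)  # columns are x_{i,j}
    return X, [corners[k] for k in order]
```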
In view of its great success in modeling image patches [6,14,15] and patch groups [28,52], GMM with finite Gaussian components is adopted in this paper to learn both internal and external NSS priors (which will be introduced in the next subsection). As a result, the following likelihood:
$$p(\bar{X}_i) = \sum_{k=1}^{K_I} \pi_{k,I} \prod_{j=1}^{n} \mathcal{N}(x_{i,j} \mid \mu_{k,I}, \Sigma_{k,I}), \qquad (3)$$
is employed for each patch group $\bar{X}_i$ to learn the internal NSS prior, where $K_I$ is a hyperparameter denoting the total number of Gaussian components; $\mu_{k,I}$, $\Sigma_{k,I}$, and $\pi_{k,I}$ denote the mean vector, covariance matrix, and weight of the $k$-th Gaussian component, respectively; and $\sum_{k=1}^{K_I} \pi_{k,I} = 1$. Regarding all patch groups as independent samples [1,28,52], the overall log-likelihood function for learning the internal NSS prior can be given as:
$$\ln \mathcal{L}_I = \sum_{i=1}^{N} \ln \sum_{k=1}^{K_I} \pi_{k,I} \prod_{j=1}^{n} \mathcal{N}(x_{i,j} \mid \mu_{k,I}, \Sigma_{k,I}). \qquad (4)$$
By maximizing Equation (4) over all patch groups $\{\bar{X}_i\}_{i=1}^{N}$, the parameters of the GMM can be learned, which describe the internal NSS prior. Note that the subscript $I$ is used to indicate the internal NSS prior.
However, different patch groups contain different fine-scale details of the image to be recovered. Accordingly, in this paper, when learning the internal NSS prior, instead of directly optimizing Equation (4), we assign an exclusive Gaussian component to each patch group, i.e.,
$$p(k \mid \bar{X}_i, \mu_{k,I}, \Sigma_{k,I}) = \begin{cases} 1, & k = i, \\ 0, & \text{otherwise}. \end{cases} \qquad (5)$$
Hence, the total number of Gaussian components for learning the internal NSS prior is naturally set as $K_I = N$, and for each patch group $\bar{X}_i$, its corresponding $\mu_{i,I}$ and $\Sigma_{i,I}$ are obtained by the following maximum likelihood (ML) estimate [15,52]:
$$(\mu_{i,I}, \Sigma_{i,I}) = \arg\max_{\mu_{i,I}, \Sigma_{i,I}} \log p(\bar{X}_i \mid \mu_{i,I}, \Sigma_{i,I}). \qquad (6)$$
Specifically, $\mu_{i,I}$ and $\Sigma_{i,I}$ can be estimated as:
$$\mu_{i,I} = \frac{1}{n} \sum_{j=1}^{n} x_{i,j}, \qquad (7)$$
$$\Sigma_{i,I} = \frac{1}{n} \sum_{j=1}^{n} (x_{i,j} - \mu_{i,I})(x_{i,j} - \mu_{i,I})^T. \qquad (8)$$
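In code, the ML estimates of Equations (7) and (8) reduce to a per-group sample mean and sample covariance; a minimal sketch (naming is ours):

```python
import numpy as np

def internal_prior(X):
    """ML estimates for one patch group (Eqs. (7)-(8)).
    X has shape (m, n): columns are the vectorized similar patches x_{i,j}."""
    mu = X.mean(axis=1)                     # Eq. (7): group mean
    Xc = X - mu[:, None]
    Sigma = (Xc @ Xc.T) / X.shape[1]        # Eq. (8): sample covariance
    return mu, Sigma
```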

2.2. Learning the External NSS Prior from a Natural Image Corpus

With a set of pre-collected natural images, a total of $L$ similar patch groups are first extracted to form an external training patch group set, denoted as $\{\bar{Z}_l\}_{l=1}^{L}$, where $\bar{Z}_l = \{z_{l,j}\}_{j=1}^{d}$, $z_{l,j}$ is the $j$-th vectorized patch of patch group $\bar{Z}_l$, and $d$ is the number of similar patches. As in Equation (4), using the GMM, the log-likelihood function over the training set $\{\bar{Z}_l\}_{l=1}^{L}$ for learning the external NSS prior is formulated as:
$$\ln \mathcal{L}_E = \sum_{l=1}^{L} \ln \sum_{k=1}^{K_E} \pi_{k,E} \prod_{j=1}^{d} \mathcal{N}(z_{l,j} \mid \mu_{k,E}, \Sigma_{k,E}), \qquad (9)$$
where the subscript E is used to indicate the external NSS prior, and the other variables have meanings similar to those in Equation (4).
Instead of capturing fine-scale details of the image to be recovered, the aim of the external NSS prior is to learn the rich structural information of natural images, such as edges with different orientations and contours with various shapes. As a result, the Expectation-Maximization (EM) algorithm [53] is adopted to maximize Equation (9). In the E-step, the posterior probability and mixing weight of the $k$-th component are updated as follows:
$$p(k \mid \bar{Z}_l, \mu_{k,E}, \Sigma_{k,E}) = \frac{\pi_{k,E} \prod_{j=1}^{d} \mathcal{N}(z_{l,j} \mid \mu_{k,E}, \Sigma_{k,E})}{\sum_{i=1}^{K_E} \pi_{i,E} \prod_{j=1}^{d} \mathcal{N}(z_{l,j} \mid \mu_{i,E}, \Sigma_{i,E})}, \qquad (10)$$
$$q_k = \sum_{l=1}^{L} p(k \mid \bar{Z}_l, \mu_{k,E}, \Sigma_{k,E}), \qquad (11)$$
$$\pi_{k,E} = \frac{q_k}{L}. \qquad (12)$$
In the M-step, the k-th Gaussian component is calculated as:
$$\mu_{k,E} = \frac{\sum_{l=1}^{L} p(k \mid \bar{Z}_l, \mu_{k,E}, \Sigma_{k,E}) \sum_{j=1}^{d} z_{l,j}}{q_k}, \qquad (13)$$
$$\Sigma_{k,E} = \frac{\sum_{l=1}^{L} p(k \mid \bar{Z}_l, \mu_{k,E}, \Sigma_{k,E}) \sum_{j=1}^{d} (z_{l,j} - \mu_{k,E})(z_{l,j} - \mu_{k,E})^T}{q_k}. \qquad (14)$$
The external NSS prior can be progressively learned by performing the above two steps alternately until convergence; please refer to [53] for more details on the EM algorithm. In practice, it is notable that, since the internal NSS prior has already captured the main background information of the image to be recovered, i.e., $\{\mu_{i,I}\}_{i=1}^{N}$, there is no need to learn it again from the training images. Therefore, all patch groups in $\{\bar{Z}_l\}_{l=1}^{L}$ are preprocessed by mean subtraction, and $\mu_{k,E}$ in Equation (13) is naturally set to $\mathbf{0}$. This mean subtraction operation can also greatly reduce the total number of mixture components that need to be learned [1,28].
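A compact EM sketch for this zero-mean patch-group GMM is given below. The log-domain E-step, the covariance jitter `reg`, and the normalization of the covariance update by the total number of patches assigned to a component are standard numerical conventions we adopt here, not details taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def learn_external_prior(groups, K=32, iters=30, reg=1e-4, seed=0):
    """EM sketch for the zero-mean patch-group GMM (Eqs. (9)-(14) with
    mu_{k,E} = 0 after mean subtraction). `groups` is a list of (m, d)
    arrays of mean-subtracted patch groups; K = 32 follows Section 4."""
    rng = np.random.default_rng(seed)
    m, d = groups[0].shape
    pi = np.full(K, 1.0 / K)
    Sigma = np.stack([(1.0 + rng.random()) * np.eye(m) for _ in range(K)])
    for _ in range(iters):
        # E-step (Eq. (10)): log p(k | Z_l), accumulated over the d patches.
        logR = np.zeros((len(groups), K))
        for k in range(K):
            mvn = multivariate_normal(mean=np.zeros(m), cov=Sigma[k])
            for l, Z in enumerate(groups):
                logR[l, k] = np.log(pi[k]) + mvn.logpdf(Z.T).sum()
        logR -= logR.max(axis=1, keepdims=True)
        R = np.exp(logR)
        R /= R.sum(axis=1, keepdims=True)
        q = R.sum(axis=0)                       # Eq. (11)
        pi = q / len(groups)                    # Eq. (12)
        # M-step (Eq. (14) with zero mean), normalized by the patch count.
        for k in range(K):
            S = sum(R[l, k] * (Z @ Z.T) for l, Z in enumerate(groups))
            Sigma[k] = S / (q[k] * d) + reg * np.eye(m)
    return pi, Sigma
```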

2.3. Learning the Hybrid NSS Prior for Patch Groups

Now, for each patch group of the image to be recovered, we learn the HNSS prior from its corresponding internal and external NSS priors. As described in Section 2.1, the Gaussian component with parameters $\mu_{i,I}$ and $\Sigma_{i,I}$ depicts the internal NSS prior of $\bar{X}_i$, and the most suitable external NSS prior for $\bar{X}_i$ is determined by calculating the MAP probability:
$$k = \arg\max_{v} \frac{\prod_{j=1}^{n} \mathcal{N}(x_{i,j} - \mu_{i,I} \mid \mathbf{0}, \Sigma_{v,E} + \sigma^2 I)}{\sum_{l=1}^{K_E} \prod_{j=1}^{n} \mathcal{N}(x_{i,j} - \mu_{i,I} \mid \mathbf{0}, \Sigma_{l,E} + \sigma^2 I)}, \qquad (15)$$
where $I$ denotes the identity matrix. The selected Gaussian component is parameterized by $\mathbf{0}$ and $\Sigma_{k,E}$.
Next, to better characterize the structure and detail information contained in patch group $\bar{X}_i$, we first learn a set of internal and external bases by performing SVD on $\Sigma_{i,I}$ and $\Sigma_{k,E}$, respectively:
$$\Sigma_{i,I} = D_{i,I} S_{i,I} D_{i,I}^T, \qquad (16)$$
$$\Sigma_{k,E} = D_{k,E} S_{k,E} D_{k,E}^T. \qquad (17)$$
With the internal NSS prior $(\mu_{i,I}, D_{i,I})$ and the external NSS prior $D_{k,E}$, an improved HNSS prior for $\bar{X}_i$ can then be learned in the following form:
$$\mu_{i,H} = \mu_{i,I}, \qquad D_{i,H} = D_{k,E}\,\mathrm{diag}(w_k) + D_{i,I}\,\mathrm{diag}(\mathbf{1} - w_k), \qquad (18)$$
where $w_k = [w_{k,1}, \ldots, w_{k,r}, \ldots, w_{k,m}]^T$ with $0 \le w_{k,r} \le 1$. One can see that Equation (18) provides a simple yet flexible way to learn the HNSS prior. Specifically, a weighting scheme that allows different weights to be assigned to different bases is employed, and Equation (18) reduces to the internal or the external prior by setting $w_k = \mathbf{0}$ or $w_k = \mathbf{1}$, respectively.
As shown in Equation (18), the problem becomes how to learn $w_k$. A straightforward approach is to set $w_k = 0.5$, but this treats each basis equally. However, since $D_{k,E}$ is learned from external natural images and represents the $k$-th subspace of the external NSS prior, it is beneficial for recovering common latent structures but cannot adapt to the given image. Conversely, $D_{i,I}$ can characterize the fine-scale details that are particular to the degraded image, but its common structures are disturbed by degradation. As a result, different weights should be assigned to different bases. In fact, the SVD in Equation (17) has already provided such weights implicitly. It is well known that the singular values in $S_{k,E}$ characterize the properties of the singular vectors in $D_{k,E}$: singular vectors with large singular values capture the main structure of the image, while singular vectors with small singular values represent fine-scale details. Hence, in this work, each weight is computed as follows:
$$w_{k,r} = \frac{s_{r,E}}{\sum_{p=1}^{m} s_{p,E}}, \qquad (19)$$
where $s_{r,E}$ is the $r$-th singular value of $S_{k,E}$.
By learning the HNSS prior for each patch group in the above manner, the HNSS prior for the whole image can be formed as $\{(\mu_{i,H}, D_{i,H})\}_{i=1}^{N}$. In the next section, the learned prior is used for image restoration.
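Putting Equations (15)-(19) together, the per-group HNSS prior construction could be sketched as follows. Since the covariances are symmetric positive semi-definite, the SVD of Equations (16)-(17) coincides with the eigendecomposition used here; function and argument names are ours.

```python
import numpy as np

def learn_hnss_prior(mu_I, Sigma_I, Sigma_E_list, X, sigma):
    """Learn (mu_{i,H}, D_{i,H}) for one patch group X of shape (m, n)."""
    m, n = X.shape
    Xc = X - mu_I[:, None]
    # Eq. (15): pick the component maximizing the likelihood of the centered
    # group (log domain; the shared normalizer in Eq. (15) can be dropped).
    scores = []
    for Sig_E in Sigma_E_list:
        C = Sig_E + sigma**2 * np.eye(m)
        _, logdet = np.linalg.slogdet(C)
        quad = np.einsum('ij,ji->', Xc.T @ np.linalg.inv(C), Xc)
        scores.append(-0.5 * (n * logdet + quad))
    k = int(np.argmax(scores))
    # Eqs. (16)-(17): bases and singular values from the two covariances.
    sI, DI = np.linalg.eigh(Sigma_I)
    sE, DE = np.linalg.eigh(Sigma_E_list[k])
    DI, DE = DI[:, ::-1], DE[:, ::-1]           # descending order
    sE = np.maximum(sE[::-1], 0.0)
    # Eq. (19): weights proportional to the external singular values.
    w = sE / sE.sum()
    # Eq. (18): hybrid dictionary blending the external and internal bases.
    D_H = DE @ np.diag(w) + DI @ np.diag(1.0 - w)
    return mu_I, D_H, k
```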

3. Image Restoration via the Hybrid NSS Prior

In this section, we first formulate an HNSS prior-based structural sparse representation (HNSS-SSR) model with adaptive regularization parameters and then develop a general restoration algorithm by applying it to image restoration.

3.1. HNSS Prior-Based Structural Sparse Representation

As described in Section 2, the learned HNSS prior can well characterize the common structures and fine-scale details of the given image. On the other hand, structural sparse representation has exhibited notable success in many image restoration tasks [1,4,23,24,28,35]. As a result, we incorporate the learned HNSS prior into the structural sparse representation. Specifically, by using the learned HNSS prior as the dictionary, the proposed HNSS-SSR model is formulated as:
$$\hat{A}_i = \arg\min_{A_i} \|X_i - \Gamma_i - D_{i,H} A_i\|_F^2 + \lambda_i^T \|A_i\|_1, \qquad (20)$$
where $\|\cdot\|_F^2$ denotes the Frobenius norm, $X_i = [x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,n}] \in \mathbb{R}^{m \times n}$ is the matrix form of $\bar{X}_i$, $\Gamma_i = [\gamma_{i,1}, \ldots, \gamma_{i,j}, \ldots, \gamma_{i,n}]$ with $\gamma_{i,j} = \mu_{i,H}$, $A_i$ stands for the group sparse coefficient, $\|A_i\|_1$ denotes the vector obtained by applying the $\ell_1$-norm to each row of $A_i$, and $\lambda_i = [\lambda_{i,1}, \ldots, \lambda_{i,r}, \ldots, \lambda_{i,m}]^T$ is a regularization parameter vector with non-negative $\lambda_{i,r}$. Note that, since $X_i$ contains similar patches, the same regularization parameter $\lambda_{i,r}$ is assigned to the coefficients associated with the $r$-th atom in $D_{i,H}$.
To make the proposed HNSS-SSR model more stable, we connect the sparse estimation problem in Equation (20) with the MAP estimation problem to adaptively update the regularization parameters. Concretely, for a given patch group $X_i = \Gamma_i + D_{i,H} A_i + v$, where $v \sim \mathcal{N}(0, \sigma^2)$ is the Gaussian noise, we can form the MAP estimation of $A_i$ as:
$$\hat{A}_i = \arg\min_{A_i} \frac{1}{2\sigma^2} \|X_i - \Gamma_i - D_{i,H} A_i\|_F^2 - \ln p(A_i). \qquad (21)$$
In the literature, the i.i.d. Laplacian distribution is usually used to characterize the statistical properties of sparse coefficients [1,11,13,28,35,47]. Hence, by imposing a Laplacian distribution with the same parameter on the coefficients associated with the same atom of $D_{i,H}$, $p(A_i)$ can be written as:
$$p(A_i) = \prod_{r=1}^{m} \prod_{j=1}^{n} \frac{1}{\sqrt{2}\,\theta_{i,r}} \exp\!\left(-\frac{\sqrt{2}}{\theta_{i,r}} |\alpha_{i,r,j}|\right), \qquad (22)$$
where $\alpha_{i,r,j}$ is the $(r,j)$-th element of $A_i$, and $\theta_{i,r}$ is the estimated standard deviation of $\{\alpha_{i,r,j}\}_{j=1}^{n}$ [13,39]. Substituting Equation (22) into Equation (21) and discarding constant terms, we have the following:
$$\hat{A}_i = \arg\min_{A_i} \frac{1}{2\sigma^2} \|X_i - \Gamma_i - D_{i,H} A_i\|_F^2 + \sum_{r=1}^{m} \sum_{j=1}^{n} \frac{\sqrt{2}}{\theta_{i,r}} |\alpha_{i,r,j}|. \qquad (23)$$
By connecting Equation (20) with Equation (23), each $\lambda_{i,r}$ can be adaptively calculated as follows:
$$\lambda_{i,r} = \frac{2\sqrt{2}\,\sigma^2}{\theta_{i,r} + \varepsilon}, \qquad (24)$$
where $\varepsilon$ is a small constant for numerical stability.
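Given a current estimate of the coefficients, Equation (24) is a one-liner; the sketch below assumes the row-wise standard deviations of $A_i$ serve as the $\theta_{i,r}$ estimates, as stated above.

```python
import numpy as np

def adaptive_lambda(A, sigma, eps=1e-8):
    """Eq. (24): per-atom regularization parameters lambda_{i,r} from the
    row-wise standard deviations of the coefficient matrix A (m x n)."""
    theta = A.std(axis=1)                           # theta_{i,r}
    return (2.0 * np.sqrt(2.0) * sigma**2) / (theta + eps)
```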
Once the group sparse coefficient $\hat{A}_i$ is estimated by solving Equation (20), the corresponding patch group can be reconstructed as:
$$\hat{X}_i = D_{i,H} \hat{A}_i + \Gamma_i. \qquad (25)$$

3.2. Image Restoration

The proposed HNSS-SSR model is now used for image restoration tasks, and we develop a general restoration algorithm. Specifically, by embedding our proposed HNSS-SSR of Equation (20) into the regularization problem of Equation (2), the HNSS-SSR-based restoration framework can be first formulated as:
$$(\hat{x}, \{\hat{A}_i\}_{i=1}^{N}) = \arg\min_{x, \{A_i\}_{i=1}^{N}} \|y - \Phi x\|_2^2 + \eta \sum_{i=1}^{N} \left( \|R_i(x) - \Gamma_i - D_{i,H} A_i\|_F^2 + \lambda_i^T \|A_i\|_1 \right), \qquad (26)$$
where $R_i(x) = [R_{i,1}x, \ldots, R_{i,n}x]$ denotes the patch group extraction operation, and $R_{i,j}$ is a patch extraction matrix. With the learned HNSS prior, the proposed restoration framework in Equation (26) can both adapt to the image to be recovered and mitigate overfitting to the degradation.
Then, we employ the alternating minimization strategy to efficiently solve Equation (26). In particular, Equation (26) can be decomposed into an $x$ sub-problem and a set of $A_i$ sub-problems, each of which can be solved efficiently.

3.2.1. Solving the A i Sub-Problem

Given $x$, Equation (26) reduces to the following $A_i$ sub-problem:
$$\{\hat{A}_i\}_{i=1}^{N} = \arg\min_{\{A_i\}_{i=1}^{N}} \sum_{i=1}^{N} \left( \|R_i(x) - \Gamma_i - D_{i,H} A_i\|_F^2 + \lambda_i^T \|A_i\|_1 \right), \qquad (27)$$
which consists of a series of HNSS-SSR problems of the form proposed in Equation (20). As a result, we adopt the Iterative Soft Thresholding Algorithm (ISTA) [54] to update $A_i$, i.e.,
$$\hat{A}_i = \mathcal{S}_{\lambda_i / 2c}\!\left( \hat{A}_i - \frac{1}{c} D_{i,H}^T \left( D_{i,H} \hat{A}_i - R_i(x) + \Gamma_i \right) \right), \qquad (28)$$
where $c$ represents the squared spectral norm of $D_{i,H}$, and $\mathcal{S}_{\lambda_i/2c}$ is the soft-thresholding operator:
$$\mathcal{S}_{\lambda}(C) = \mathrm{sgn}(C) \odot \max(|C| - \lambda h^T, 0), \qquad (29)$$
where $h \in \mathbb{R}^{n \times 1}$ is a vector with all elements equal to 1, and $\odot$ represents element-wise multiplication. Note that ISTA has been proven to converge to a global optimum of such convex problems.
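A minimal sketch of the ISTA update in Equations (28) and (29); the fixed iteration count and naming are our assumptions.

```python
import numpy as np

def ista_step(A, D, B, lam, n_iter=10):
    """ISTA for one patch group (Eqs. (28)-(29)). D is the dictionary
    D_{i,H}, B = R_i(x) - Gamma_i is the centered patch group, and lam is
    the regularization vector lambda_i of length m."""
    c = np.linalg.norm(D, 2) ** 2                   # squared spectral norm
    thr = (lam / (2.0 * c))[:, None]                # lambda_i / 2c per row
    for _ in range(n_iter):
        G = A - (D.T @ (D @ A - B)) / c             # gradient step
        A = np.sign(G) * np.maximum(np.abs(G) - thr, 0.0)  # Eq. (29)
    return A
```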

3.2.2. Solving the x Sub-Problem

Given the updated $A_i$, let $\hat{X}_i = D_{i,H} \hat{A}_i + \Gamma_i$; we then obtain the $x$ sub-problem as follows:
$$\hat{x} = \arg\min_{x} \|y - \Phi x\|_2^2 + \eta \sum_{i=1}^{N} \|R_i(x) - \hat{X}_i\|_F^2, \qquad (30)$$
which admits the following closed-form solution:
$$\hat{x} = \left( \Phi^T \Phi + \eta \sum_{i=1}^{N} \sum_{j=1}^{n} R_{i,j}^T R_{i,j} \right)^{-1} \left( \Phi^T y + \eta \sum_{i=1}^{N} \sum_{j=1}^{n} R_{i,j}^T \hat{x}_{i,j} \right), \qquad (31)$$
where $\hat{x}_{i,j}$ stands for the $j$-th column vector of $\hat{X}_i$.
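For denoising ($\Phi = I$), the matrix inverted in Equation (31) is diagonal, so the update reduces to a weighted average of the noisy pixel and every reconstructed patch pixel covering it; a sketch of this special case (the bookkeeping conventions are ours):

```python
import numpy as np

def update_x_denoising(y, patch_groups, coords, psize, eta):
    """Eq. (31) with Phi = I. `patch_groups[i]` is the reconstructed group
    X_hat_i of shape (psize*psize, n); `coords[i]` lists the top-left
    corners (r, c) of its patches in the image."""
    num = y.astype(float).copy()        # Phi^T y contribution
    den = np.ones_like(num)             # diagonal of Phi^T Phi = I
    for X_hat, corners in zip(patch_groups, coords):
        for j, (r, c) in enumerate(corners):
            patch = X_hat[:, j].reshape(psize, psize)
            num[r:r + psize, c:c + psize] += eta * patch   # R_{i,j}^T x_hat
            den[r:r + psize, c:c + psize] += eta           # R_{i,j}^T R_{i,j}
    return num / den
```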
In practice, higher performance can be achieved by alternately solving the above $A_i$ and $x$ sub-problems $T$ times. To mitigate the effect of degradation on prior learning, in the $t$-th iteration, the output $x^{t-1}$ of the previous iteration is used to update the HNSS prior. Furthermore, to stabilize the solution, the iterative regularization strategy [55] is employed to estimate $\sigma$ in each iteration as follows:
$$\sigma_t = \gamma \sqrt{\sigma^2 - \|x^{t-1} - y\|_2^2}, \qquad (32)$$
where γ denotes a constant. To conclude, Algorithm 1 fully summarizes our proposed HNSS-SSR-based restoration algorithm.
Algorithm 1 HNSS-SSR-based Image Restoration
Input: Degraded image $y$, measurement matrix $\Phi$, and the external NSS prior GMM model.
Output: The restored image $\hat{x}$.
1: Initialization: Set $\hat{x}^0 = y$; set parameters $m$, $n$, $T$, $\gamma$, $\eta$, and $\sigma$.
2: for $t = 1$ to $T$ do
3:   Compute $\sigma_t$ by Equation (32);
4:   Perform a KNN search on $\hat{x}^{t-1}$ to obtain $\{X_i^{t-1}\}$;
5:   for each $X_i^{t-1}$ do
6:     Learn the internal NSS prior by Equations (7) and (8);
7:     Select the most suitable external prior by Equation (15);
8:     Learn the HNSS prior $(\mu_{i,H}, D_{i,H})$ by Equations (16)–(19);
9:     Update $\lambda_i$ by Equation (24);
10:    Update $\hat{A}_i$ by Equation (28);
11:    Recover $\hat{X}_i$ by Equation (25);
12:   end for
13:   Reconstruct $\hat{x}^t$ by Equation (31);
14: end for
15: Return the final restored image $\hat{x}^T$.
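For orientation, a denoising-only skeleton of Algorithm 1 is sketched below, wired from the sketches given earlier (`extract_patch_group`, `internal_prior`, `learn_hnss_prior`, `adaptive_lambda`, `ista_step`, `update_x_denoising`). The helpers `estimate_noise_std` and `reference_patch_grid` and the `ext_prior` container are hypothetical, and normalizing Equation (32) by the pixel count is our reading of that update.

```python
import numpy as np

def hnss_ssr_denoise(y, ext_prior, T=8, gamma=0.70, eta=0.14, psize=7, n=70):
    """Skeleton of Algorithm 1 for denoising; defaults follow the
    sigma <= 20 setting of Section 4.1."""
    x = y.astype(float).copy()                       # Step 1: x^0 = y
    sigma0 = estimate_noise_std(y)                   # hypothetical helper
    for t in range(T):
        # Step 3, Eq. (32): iterative regularization of the noise level.
        sigma_t = gamma * np.sqrt(max(sigma0**2 - np.mean((x - y) ** 2), 0.0))
        groups, coords = [], []
        for (r, c) in reference_patch_grid(x, psize):    # hypothetical helper
            X, corners = extract_patch_group(x, r, c, psize, n)   # Step 4
            mu, Sigma = internal_prior(X)                         # Step 6
            mu_H, D_H, _ = learn_hnss_prior(mu, Sigma,            # Steps 7-8
                                            ext_prior.Sigmas, X, sigma_t)
            B = X - mu_H[:, None]
            A = D_H.T @ B                            # warm start for ISTA
            lam = adaptive_lambda(A, sigma_t)        # Step 9, Eq. (24)
            A = ista_step(A, D_H, B, lam)            # Step 10, Eq. (28)
            groups.append(D_H @ A + mu_H[:, None])   # Step 11, Eq. (25)
            coords.append(corners)
        x = update_x_denoising(y, groups, coords, psize, eta)  # Step 13
    return x
```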

4. Experimental Results

Here, we conduct image denoising and deblurring experiments to demonstrate the validity of our learned HNSS prior and proposed restoration algorithm. Figure 2 illustrates the 16 test images used in this work. As the human visual system is most sensitive to variations in luminance, the restoration of color images is performed only on the luminance channel. To objectively assess the different restoration algorithms, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [56] are jointly used as evaluation metrics. For fair comparisons, we run the source codes released by the authors to obtain the restoration results of the other competing approaches. In external NSS prior learning, the total number of Gaussian components $K_E$ and the number of similar patches $d$ are set to 32 and 10, respectively. The patch groups for learning were extracted from the Kodak PhotoCD dataset (http://r0k.us/graphics/kodak/, accessed on 13 September 2022).
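For reference, both metrics have standard implementations in scikit-image; a small evaluation sketch assuming 8-bit grayscale images stored as NumPy arrays:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, restored):
    """Compute the PSNR/SSIM pair reported in the tables below."""
    psnr = peak_signal_noise_ratio(clean, restored, data_range=255)
    ssim = structural_similarity(clean, restored, data_range=255)
    return psnr, ssim
```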

4.1. Image Denoising

This subsection performs image denoising experiments using our proposed HNSS-SSR restoration algorithm. It is worth noting that image denoising is an ideal benchmark for evaluating image priors and restoration algorithms. The noisy observations are generated by disturbing the test images with additive white Gaussian noise. The detailed parameter settings for the denoising experiments are given below. The image patch size $\sqrt{m} \times \sqrt{m}$ is set to $7 \times 7$, $8 \times 8$, and $9 \times 9$ for $\sigma \le 30$, $30 < \sigma \le 60$, and $\sigma > 60$, respectively. The number of similar patches $n$, scaling factor $\gamma$, and number of iterations $T$ are set to $(70, 0.70, 8)$, $(90, 0.68, 8)$, $(120, 0.65, 8)$, and $(140, 0.64, 10)$ for $\sigma \le 20$, $20 < \sigma \le 40$, $40 < \sigma \le 60$, and $\sigma > 60$, respectively. We empirically fix the regularization parameter $\eta = 0.14$ for all cases.
To objectively demonstrate its denoising capability, our proposed HNSS-SSR is first compared with several existing high-performing denoising algorithms, including BM3D [29], NCSR [13], PGPD [28], GSRC-ENSS [1], RRC [41], and SNSS [24]. Among them, BM3D, NCSR, and RRC utilize the internal NSS prior, while PGPD uses the external NSS prior. Moreover, GSRC-ENSS and SNSS jointly use internal and external NSS priors and achieve superior denoising effects. Table 1 and Table 2 list the denoising results of the various competing approaches. It is clear that our proposed HNSS-SSR delivers admirable denoising performance. Specifically, in Table 1, one can observe that our proposed HNSS-SSR has the highest PSNR in the majority of cases. Furthermore, in terms of average PSNR, our proposed HNSS-SSR enjoys a performance gain over BM3D of 0.35 dB, over NCSR of 0.50 dB, over PGPD of 0.18 dB, over GSRC-ENSS of 0.25 dB, over RRC of 0.22 dB, and over SNSS of 0.17 dB. In Table 2, it can be observed that the SSIM results of the proposed HNSS-SSR exceed those of the other competing approaches in most cases. In terms of average SSIM, our proposed HNSS-SSR realizes gains of 0.0112–0.0278, 0.0116–0.0228, 0.0044–0.0254, 0.0149–0.0225, 0.0058–0.0170, and 0.0075–0.0133 over the six denoising methods mentioned above, respectively. Moreover, the visual denoising results of the various approaches are presented in Figure 3 and Figure 4. From Figure 3, we can observe that the comparison methods tend to over-smooth edge details. In Figure 4, it can be observed that the comparison algorithms not only tend to smooth the latent structure but also suffer from different degrees of undesired artifacts. In contrast, our proposed HNSS-SSR is highly effective at recovering the latent structure and fine-scale details while suppressing artifacts.
We also evaluate the proposed HNSS-SSR on the BSD68 dataset [57]. In addition to the above methods, two recently proposed methods with excellent denoising performance, i.e., GSMM [46] and LRENSS [7], are also used to compare with our method. Table 3 lists the corresponding PSNR and SSIM results. Note that the denoising results of GSMM are quoted from Reference [46]. From Table 3, it can be seen that the proposed HNSS-SSR consistently outperforms all other methods except LRENSS. Furthermore, the denoising results of the proposed HNSS-SSR are comparable to LRENSS in terms of PSNR and SSIM.
The validity of our proposed HNSS-SSR is further demonstrated by comparison with deep learning-based denoising approaches. Specifically, we evaluate our proposed HNSS-SSR, TNRD [19], and S2S [58] on the Set12 dataset [20]. The average PSNR and SSIM results are listed in Table 4. It can be seen that the proposed HNSS-SSR outperforms TNRD and S2S across the board. In particular, the proposed HNSS-SSR achieves average PSNR gains of {0.19 dB, 0.43 dB} and average SSIM gains of {0.0072, 0.0196} over TNRD and S2S, respectively.

4.2. Image Deblurring

In this subsection, we apply the proposed HNSS-SSR to image deblurring. Following prior works [13,24], we adopt a uniform blur kernel of size $9 \times 9$ and a Gaussian kernel with standard deviation 1.6 to assess all deblurring approaches. Each test image is first blurred by a blur kernel and then corrupted by additive white Gaussian noise with standard deviation $\sqrt{2}$ to generate the degraded image. In the deblurring experiments, we set $(\sqrt{m} \times \sqrt{m}, n, T, \eta, \gamma)$ to $(6 \times 6, 30, 200, 0.04, 1)$.
The deblurring performance of our proposed HNSS-SSR is verified by comparing it with several leading methods, including BM3D [59], EPLL [14], NCSR [13], JSM [60], MS-EPLL [6], and SNSS [24]. Note that BM3D, EPLL, and NCSR are three typical deblurring approaches, while JSM, MS-EPLL, and SNSS are recently developed algorithms with advanced performance. A single NSS prior is utilized by all comparison methods except SNSS, which uses both internal and external NSS priors. The deblurring results of the different algorithms are presented in Table 5 and Table 6. We can observe that our proposed HNSS-SSR achieves the highest PSNR and SSIM in most cases compared to the other competing deblurring approaches, and only SNSS is slightly better than the proposed HNSS-SSR in individual cases. Furthermore, for uniform blur, the proposed HNSS-SSR achieves {1.35 dB, 3.25 dB, 0.33 dB, 3.16 dB, 2.85 dB, 0.18 dB} average PSNR gains and {0.0391, 0.0391, 0.0135, 0.1672, 0.0340, 0.0030} average SSIM gains over BM3D, EPLL, NCSR, JSM, MS-EPLL, and SNSS, respectively. For Gaussian blur, our proposed HNSS-SSR achieves {1.22 dB, 5.51 dB, 0.67 dB, 1.64 dB, 4.78 dB, 0.35 dB} average PSNR gains and {0.0265, 0.0440, 0.0214, 0.0675, 0.0408, 0.0034} average SSIM gains over the same methods. The visual deblurring results of the different approaches are presented in Figure 5 and Figure 6. It can be clearly observed that BM3D, NCSR, JSM, and MS-EPLL produce many unpleasant artifacts, while EPLL and SNSS cause over-smoothing. In comparison, our proposed HNSS-SSR method effectively eliminates artifacts while delivering visually pleasing results.
The proposed HNSS-SSR is also tested on the Set14 dataset [61], and compared with the recently proposed JGD-SSR model [3] and LRENSS prior [7]. Note that JGD-SSR jointly utilizes the internal and external NSS priors, while LRENSS jointly utilizes the low-rank prior and external NSS prior. The average PSNR and SSIM results are listed in Table 7. It can be seen that the proposed HNSS-SSR has performance comparable to JGD-SSR and LRENSS and has considerable PSNR and SSIM gains compared to other methods.
The benefit of our proposed HNSS-SSR is further evidenced by comparison with deep learning-based approaches, specifically RED [62], IRCNN [63], and H-PnP [64], on the Set14 dataset [61]. Table 8 presents the deblurring results. One can clearly see that our proposed HNSS-SSR substantially outperforms RED. Meanwhile, the proposed HNSS-SSR not only yields PSNR results comparable to IRCNN and H-PnP but also achieves the best SSIM results. It is well known that SSIM is more consistent with human vision than PSNR, so SSIM usually leads to a more objective quantitative evaluation [56]. In particular, the SSIM gains of our proposed HNSS-SSR over RED, IRCNN, and H-PnP are 0.0081, 0.0045, and 0.0038, respectively.

4.3. Computational Time

In this subsection, we report the running times of the different denoising and deblurring methods on a $256 \times 256$ image in Table 9. All methods are tested on a PC with an Intel Core i7-9700 3.00 GHz CPU under MATLAB 2019a. Note that the experimental results of GSMM are obtained from Reference [46], so its running time is not reported here. One can see that, for image denoising, the proposed HNSS-SSR is slower than only BM3D and PGPD, and for image deblurring, the proposed HNSS-SSR is faster than SNSS and LRENSS.

5. Conclusions

This paper proposed to learn a new NSS prior, namely the HNSS prior, from both internal and external image data and applied it to the image restoration problem. Two GMMs depicting the internal and external NSS priors were first learned from the degraded observation and natural image sets, respectively. Subsequently, based on the learned internal and external priors, the HNSS prior, which better characterizes image structure and detail information, was efficiently learned via SVD and a simple weighting method. An HNSS prior-based structural sparse representation (HNSS-SSR) model with adaptive regularization parameters was then formulated for the image restoration problem. Further, we adopted an alternating minimization strategy to solve the corresponding restoration problem, resulting in a general restoration algorithm. Experimental results have validated that, compared to many classical and state-of-the-art approaches, our proposed HNSS-SSR algorithm not only provides better visual results but also yields competitive PSNR and SSIM metrics.

Author Contributions

Conceptualization, W.Y. and H.L.; methodology, W.Y. and H.L.; software, W.Y.; resources, H.L., L.L. and W.W.; writing—original draft preparation, W.Y.; writing—review and editing, W.Y., H.L., L.L. and W.W.; supervision, H.L., L.L. and W.W.; funding acquisition, H.L., L.L. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 92270117, U2034209, and 62376214, the Natural Science Basic Research Program of Shaanxi under Grant 2023-JC-YB-533, and the Construction Project of Qin Chuangyuan Scientists and Engineers in Shaanxi Province under Grant 2024QCY-KXJ-160.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zha, Z.; Zhang, X.; Wang, Q.; Bai, Y.; Chen, Y.; Tang, L.; Liu, X. Group sparsity residual constraint for image denoising with external nonlocal self-similarity prior. Neurocomputing 2018, 275, 2294–2306.
  2. Yuan, W.; Liu, H.; Liang, L.; Xie, G.; Zhang, Y.; Liu, D. Rank minimization via adaptive hybrid norm for image restoration. Signal Process. 2023, 206, 108926.
  3. Yuan, W.; Liu, H.; Liang, L. Joint group dictionary-based structural sparse representation for image restoration. Digit. Signal Process. 2023, 137, 104029.
  4. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351.
  5. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. A hybrid structural sparsification error model for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4451–4465.
  6. Papyan, V.; Elad, M. Multi-scale patch-based image restoration. IEEE Trans. Image Process. 2016, 25, 249–261.
  7. Yuan, W.; Liu, H.; Liang, L.; Wang, W.; Liu, D. Image restoration via joint low-rank and external nonlocal self-similarity prior. Signal Process. 2024, 215, 109284.
  8. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
  9. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  10. Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Trans. Image Process. 2007, 17, 53–69.
  11. Dong, W.; Zhang, L.; Shi, G.; Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 2011, 20, 1838–1857.
  12. Gai, S. Theory of reduced biquaternion sparse representation and its applications. Expert Syst. Appl. 2023, 213, 119245.
  13. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630.
  14. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
  15. Yu, G.; Sapiro, G.; Mallat, S. Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. IEEE Trans. Image Process. 2012, 21, 2481–2499.
  16. Colak, O.; Eksioglu, E.M. On the fly image denoising using patch ordering. Expert Syst. Appl. 2022, 190, 116192.
  17. Burger, H.C.; Schuler, C.J.; Harmeling, S. Image denoising: Can plain neural networks compete with BM3D? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2392–2399.
  18. Wang, J.; Wang, Z.; Yang, A. Iterative dual CNNs for image deblurring. Mathematics 2022, 10, 3891.
  19. Chen, Y.; Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1256–1272.
  20. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
  21. Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; Timofte, R. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6360–6376.
  22. Li, X.; Wang, J.; Liu, X. Deep successive convex approximation for image super-resolution. Mathematics 2023, 11, 651.
  23. Yuan, W.; Liu, H.; Liang, L. Image restoration via exponential scale mixture-based simultaneous sparse prior. IET Image Process. 2022, 16, 3268–3283.
  24. Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image restoration via simultaneous nonlocal self-similarity priors. IEEE Trans. Image Process. 2020, 29, 8561–8576.
  25. Zha, Z.; Yuan, X.; Wen, B.; Zhang, J.; Zhou, J.; Zhu, C. Simultaneous nonlocal self-similarity prior for image denoising. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; pp. 1119–1123.
  26. Wen, B.; Li, Y.; Bresler, Y. Image recovery via transform learning and low-rank modeling: The power of complementary regularizers. IEEE Trans. Image Process. 2020, 29, 5310–5323.
  27. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65.
  28. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 244–252.
  29. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  30. Hou, Y.; Xu, J.; Liu, M.; Liu, G.; Liu, L.; Zhu, F.; Shao, L. NLH: A blind pixel-level non-local method for real-world image denoising. IEEE Trans. Image Process. 2020, 29, 5121–5135.
  31. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
  32. Dong, W.; Shi, G.; Ma, Y.; Li, X. Image restoration via simultaneous sparse coding: Where structured sparsity meets Gaussian scale mixture. Int. J. Comput. Vis. 2015, 114, 217–232.
  33. Yuan, W.; Liu, H.; Liang, L.; Wang, W.; Liu, D. A hybrid structural sparse model for image restoration. Opt. Laser Technol. 2022, 171, 110401.
  34. Ou, Y.; Swamy, M.; Luo, J.; Li, B. Single image denoising via multi-scale weighted group sparse coding. Signal Process. 2022, 200, 108650.
  35. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group sparsity residual constraint with non-local priors for image restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975.
  36. Zha, Z.; Yuan, X.; Wen, B.; Zhang, J.; Zhou, J.; Zhu, C. Image restoration using joint patch-group-based sparse representation. IEEE Trans. Image Process. 2020, 29, 7735–7750.
  37. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Image restoration via reconciliation of group sparsity and low-rank models. IEEE Trans. Image Process. 2021, 30, 5223–5238.
  38. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. Low-rankness guided group sparse representation for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 7593–7607.
  39. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2012, 22, 700–711.
  40. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
  41. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. From rank estimation to rank approximation: Rank residual constraint for image restoration. IEEE Trans. Image Process. 2019, 29, 3254–3269.
  42. Chen, J.F.; Wang, Q.W.; Song, G.J.; Li, T. Quaternion matrix factorization for low-rank quaternion matrix completion. Mathematics 2023, 11, 2144.
  43. Xu, C.; Liu, X.; Zheng, J.; Shen, L.; Jiang, Q.; Lu, J. Nonlocal low-rank regularized two-phase approach for mixed noise removal. Inverse Probl. 2021, 37, 085001.
  44. Lu, J.; Xu, C.; Hu, Z.; Liu, X.; Jiang, Q.; Meng, D.; Lin, Z. A new nonlocal low-rank regularization method with applications to magnetic resonance image denoising. Inverse Probl. 2022, 38, 065012.
  45. Li, H.; Jia, X.; Zhang, L. Clustering based content and color adaptive tone mapping. Comput. Vis. Image Underst. 2018, 168, 37–49.
  46. Liu, H.; Li, L.; Lu, J.; Tan, S. Group sparsity mixture model and its application on image denoising. IEEE Trans. Image Process. 2022, 31, 5677–5690.
  47. Xu, J.; Zhang, L.; Zhang, D. External prior guided internal prior learning for real-world noisy image denoising. IEEE Trans. Image Process. 2018, 27, 2996–3010.
  48. Yue, H.; Sun, X.; Yang, J.; Wu, F. CID: Combined image denoising in spatial and frequency domains using Web images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2933–2940.
  49. Liu, J.; Yang, W.; Zhang, X.; Guo, Z. Retrieval compensated group structured sparsity for image super-resolution. IEEE Trans. Multimed. 2017, 19, 302–316.
  50. Zoran, D.; Weiss, Y. Natural images, Gaussian mixtures and dead leaves. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; Volume 25.
  51. Keller, J.M.; Gray, M.R.; Givens, J.A. A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 580–585.
  52. Niknejad, M.; Rabbani, H.; Babaie-Zadeh, M. Image restoration using Gaussian mixture models with spatially constrained patch clustering. IEEE Trans. Image Process. 2015, 24, 3624–3636.
  53. Wu, C.J. On the convergence properties of the EM algorithm. Ann. Stat. 1983, 11, 95–103.
  54. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  55. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489.
  56. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  57. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916.
  58. Quan, Y.; Chen, M.; Pang, T.; Ji, H. Self2Self with dropout: Learning self-supervised denoising from single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 1890–1898.
  59. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image restoration by sparse 3D transform-domain collaborative filtering. In Proceedings of the Image Processing: Algorithms and Systems VI, San Jose, CA, USA, 28–29 January 2008; pp. 62–73.
  60. Zhang, J.; Zhao, D.; Xiong, R.; Ma, S.; Gao, W. Image restoration using joint statistical modeling in a space-transform domain. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 915–928.
  61. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the Curves and Surfaces: 7th International Conference, Avignon, France, 24–30 June 2010; pp. 711–730.
  62. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844.
  63. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938.
  64. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.T.; Zhou, J.; Zhu, C. Triply complementary priors for image restoration. IEEE Trans. Image Process. 2021, 30, 5819–5834.
Figure 1. The flowchart of the proposed method.
Figure 2. Test images in experiments.
Figure 3. Denoising visual results for Starfish with $\sigma = 50$. (a) Original image; (b) BM3D [29] (PSNR = 25.04 dB, SSIM = 0.7433); (c) NCSR [13] (PSNR = 25.09 dB, SSIM = 0.7453); (d) PGPD [28] (PSNR = 25.11 dB, SSIM = 0.7454); (e) GSRC-ENSS [1] (PSNR = 25.44 dB, SSIM = 0.7606); (f) RRC [41] (PSNR = 25.34 dB, SSIM = 0.7589); (g) SNSS [24] (PSNR = 25.25 dB, SSIM = 0.7491); (h) HNSS-SSR (PSNR = 25.53 dB, SSIM = 0.7671).
Figure 4. Denoising visual results for Leaves with $\sigma = 75$. (a) Original image; (b) BM3D [29] (PSNR = 22.49 dB, SSIM = 0.8072); (c) NCSR [13] (PSNR = 22.60 dB, SSIM = 0.8233); (d) PGPD [28] (PSNR = 22.61 dB, SSIM = 0.8121); (e) GSRC-ENSS [1] (PSNR = 22.90 dB, SSIM = 0.8339); (f) RRC [41] (PSNR = 22.91 dB, SSIM = 0.8377); (g) SNSS [24] (PSNR = 22.98 dB, SSIM = 0.8365); (h) HNSS-SSR (PSNR = 23.17 dB, SSIM = 0.8465).
Figure 5. Deblurring results for Lake with uniform kernel. (a) Original image; (b) BM3D [59] (PSNR = 27.32 dB, SSIM = 0.8230); (c) EPLL [14] (PSNR = 25.12 dB, SSIM = 0.8285); (d) NCSR [13] (PSNR = 28.12 dB, SSIM = 0.8471); (e) JSM [60] (PSNR = 25.90 dB, SSIM = 0.7021); (f) MS-EPLL [6] (PSNR = 25.74 dB, SSIM = 0.8288); (g) SNSS [24] (PSNR = 28.06 dB, SSIM = 0.8538); (h) HNSS-SSR (PSNR = 28.41 dB, SSIM = 0.8609).
Figure 6. Deblurring results for Flowers with Gaussian kernel. (a) Original image; (b) BM3D [59] (PSNR = 29.84 dB, SSIM = 0.8592); (c) EPLL [14] (PSNR = 25.14 dB, SSIM = 0.8397); (d) NCSR [13] (PSNR = 30.20 dB, SSIM = 0.8617); (e) JSM [60] (PSNR = 29.51 dB, SSIM = 0.8081); (f) MS-EPLL [6] (PSNR = 27.20 dB, SSIM = 0.8569); (g) SNSS [24] (PSNR = 30.25 dB, SSIM = 0.8773); (h) HNSS-SSR (PSNR = 30.52 dB, SSIM = 0.8827).
Table 1. PSNR (dB) comparison of BM3D [29], NCSR [13], PGPD [28], GSRC-ENSS [1], RRC [41], SNSS [24], and HNSS-SSR for image denoising.

Left block: $\sigma = 30$; right block: $\sigma = 50$.

| Images | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bear | 28.89 | 28.76 | 29.01 | 28.78 | 28.89 | 28.96 | 29.09 | 26.82 | 26.71 | 26.81 | 26.67 | 26.74 | 26.77 | 26.84 |
| Bike | 25.91 | 25.94 | 26.11 | 25.91 | 26.11 | 26.06 | 26.42 | 23.00 | 23.05 | 23.39 | 23.23 | 23.36 | 23.39 | 23.57 |
| Buddhist | 31.87 | 31.45 | 31.82 | 31.54 | 31.81 | 31.64 | 31.82 | 29.48 | 29.09 | 29.36 | 29.05 | 29.43 | 29.19 | 29.38 |
| Butterfly | 27.55 | 27.94 | 27.74 | 28.23 | 28.27 | 28.18 | 28.59 | 24.79 | 25.05 | 25.21 | 25.64 | 25.59 | 25.51 | 25.83 |
| Cameraman | 28.64 | 28.58 | 28.54 | 28.20 | 28.43 | 28.58 | 28.75 | 26.13 | 26.15 | 26.46 | 26.30 | 26.27 | 26.39 | 26.48 |
| Corn | 26.59 | 26.83 | 26.72 | 27.15 | 27.02 | 26.91 | 27.35 | 23.76 | 23.77 | 23.77 | 24.39 | 24.22 | 24.20 | 24.54 |
| Cowboy | 27.61 | 27.56 | 27.66 | 27.65 | 27.73 | 27.67 | 27.92 | 24.75 | 24.74 | 25.05 | 25.02 | 25.03 | 25.08 | 25.21 |
| Flower | 27.97 | 27.91 | 28.11 | 28.10 | 28.12 | 28.14 | 28.47 | 25.49 | 25.32 | 25.64 | 25.63 | 25.72 | 25.83 | 26.00 |
| Flowers | 27.84 | 27.66 | 28.04 | 27.83 | 27.96 | 27.99 | 28.29 | 25.39 | 25.10 | 25.51 | 25.38 | 25.47 | 25.51 | 25.59 |
| Girls | 26.29 | 26.25 | 26.44 | 26.26 | 26.28 | 26.29 | 26.61 | 23.66 | 23.57 | 23.90 | 23.70 | 23.78 | 23.88 | 24.03 |
| Hat | 29.77 | 29.79 | 29.91 | 29.58 | 29.87 | 29.87 | 30.21 | 27.60 | 27.46 | 27.88 | 27.67 | 27.91 | 27.97 | 28.06 |
| Lake | 26.74 | 26.76 | 26.90 | 26.98 | 26.89 | 26.83 | 27.14 | 24.29 | 24.19 | 24.49 | 24.51 | 24.48 | 24.44 | 24.63 |
| Leaves | 27.81 | 28.14 | 27.99 | 28.15 | 28.35 | 28.25 | 28.69 | 24.68 | 24.95 | 25.02 | 25.23 | 25.30 | 25.25 | 25.52 |
| Lena | 29.68 | 29.57 | 29.81 | 29.65 | 29.88 | 29.82 | 29.96 | 27.14 | 27.18 | 27.38 | 27.12 | 27.39 | 27.41 | 27.49 |
| Plants | 30.70 | 30.26 | 30.73 | 30.50 | 30.90 | 30.87 | 30.96 | 28.11 | 27.66 | 28.25 | 27.87 | 28.32 | 28.38 | 28.29 |
| Starfish | 27.65 | 27.77 | 27.67 | 28.03 | 27.95 | 27.81 | 28.19 | 25.04 | 25.09 | 25.11 | 25.44 | 25.34 | 25.25 | 25.53 |
| Average | 28.22 | 28.20 | 28.46 | 28.28 | 28.40 | 28.37 | 28.65 | 25.63 | 25.57 | 25.92 | 25.80 | 25.90 | 25.90 | 26.06 |

Left block: $\sigma = 75$; right block: $\sigma = 100$.

| Images | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bear | 25.34 | 25.13 | 25.30 | 25.27 | 25.13 | 25.13 | 25.28 | 24.28 | 24.08 | 24.35 | 24.25 | 24.10 | 23.97 | 24.20 |
| Bike | 21.12 | 21.01 | 21.42 | 21.33 | 21.32 | 21.47 | 21.60 | 19.94 | 19.68 | 20.09 | 19.91 | 20.01 | 20.22 | 20.33 |
| Buddhist | 27.56 | 27.10 | 27.51 | 27.29 | 27.42 | 27.19 | 27.50 | 26.22 | 25.81 | 26.21 | 26.11 | 26.18 | 25.84 | 26.06 |
| Butterfly | 22.83 | 22.95 | 23.03 | 23.51 | 23.35 | 23.41 | 23.72 | 21.38 | 21.31 | 21.48 | 22.06 | 21.77 | 22.03 | 22.23 |
| Cameraman | 24.33 | 24.23 | 24.64 | 24.52 | 24.46 | 24.59 | 24.71 | 23.08 | 22.93 | 23.23 | 23.22 | 23.02 | 23.40 | 23.46 |
| Corn | 21.83 | 21.68 | 21.75 | 22.20 | 21.99 | 22.08 | 22.42 | 20.54 | 20.26 | 20.49 | 20.80 | 20.55 | 20.71 | 20.99 |
| Cowboy | 22.88 | 22.65 | 23.04 | 23.04 | 23.02 | 23.11 | 23.23 | 21.68 | 21.26 | 21.71 | 21.69 | 21.60 | 21.81 | 21.91 |
| Flower | 23.82 | 23.50 | 23.83 | 23.87 | 23.77 | 24.06 | 24.11 | 22.66 | 22.23 | 22.66 | 22.50 | 22.46 | 22.73 | 22.77 |
| Flowers | 23.99 | 23.47 | 24.00 | 23.76 | 23.86 | 23.97 | 23.95 | 23.12 | 22.49 | 23.15 | 22.83 | 22.83 | 22.90 | 22.77 |
| Girls | 22.06 | 21.86 | 22.15 | 22.02 | 21.95 | 22.13 | 22.26 | 21.04 | 20.73 | 21.07 | 20.88 | 20.71 | 21.03 | 21.11 |
| Hat | 26.08 | 25.89 | 26.30 | 26.23 | 26.49 | 26.53 | 26.60 | 25.00 | 24.74 | 25.18 | 25.21 | 25.27 | 25.50 | 25.44 |
| Lake | 22.63 | 22.50 | 22.76 | 22.71 | 22.64 | 22.61 | 22.81 | 21.56 | 21.38 | 21.64 | 21.63 | 21.37 | 21.55 | 21.64 |
| Leaves | 22.49 | 22.60 | 22.61 | 22.90 | 22.91 | 22.98 | 23.17 | 20.90 | 20.86 | 20.95 | 21.46 | 21.22 | 21.48 | 21.54 |
| Lena | 25.38 | 25.23 | 25.51 | 25.49 | 25.55 | 25.66 | 25.78 | 24.08 | 23.82 | 24.22 | 24.30 | 24.35 | 24.54 | 24.56 |
| Plants | 26.25 | 25.75 | 26.34 | 26.03 | 26.40 | 26.39 | 26.39 | 24.98 | 24.48 | 25.07 | 24.71 | 24.91 | 25.08 | 25.02 |
| Starfish | 23.27 | 23.20 | 23.23 | 23.45 | 23.32 | 23.32 | 23.57 | 22.10 | 21.91 | 22.08 | 22.10 | 21.98 | 22.08 | 22.25 |
| Average | 23.87 | 23.67 | 24.00 | 23.98 | 23.97 | 24.04 | 24.19 | 22.66 | 22.37 | 22.70 | 22.73 | 22.65 | 22.80 | 22.89 |
Table 2. SSIM comparison of BM3D [29], NCSR [13], PGPD [28], GSRC-ENSS [1], RRC [41], SNSS [24], and HNSS-SSR for image denoising.
Table 2. SSIM comparison of BM3D [29], NCSR [13], PGPD [28], GSRC-ENSS [1], RRC [41], SNSS [24], and HNSS-SSR for image denoising.
σ = 30 || σ = 50 (SSIM)
Image | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR || BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR
Bear | 0.7807 | 0.7780 | 0.7822 | 0.7784 | 0.7817 | 0.7815 | 0.7889 || 0.7111 | 0.7110 | 0.7113 | 0.7100 | 0.7169 | 0.7123 | 0.7187
Bike | 0.8269 | 0.8203 | 0.8290 | 0.8194 | 0.8247 | 0.8208 | 0.8393 || 0.7146 | 0.7073 | 0.7262 | 0.7157 | 0.7285 | 0.7250 | 0.7360
Buddhist | 0.8702 | 0.8672 | 0.8664 | 0.8623 | 0.8705 | 0.8673 | 0.8706 || 0.8170 | 0.8177 | 0.8087 | 0.8048 | 0.8194 | 0.8167 | 0.8202
Butterfly | 0.9019 | 0.9073 | 0.9047 | 0.9092 | 0.9164 | 0.9143 | 0.9184 || 0.8440 | 0.8565 | 0.8574 | 0.8658 | 0.8729 | 0.8704 | 0.8755
Cameraman | 0.8373 | 0.8394 | 0.8259 | 0.8204 | 0.8281 | 0.8285 | 0.8378 || 0.7828 | 0.7835 | 0.7774 | 0.7732 | 0.7801 | 0.7843 | 0.7883
Corn | 0.8679 | 0.8716 | 0.8712 | 0.8793 | 0.8787 | 0.8741 | 0.8856 || 0.7774 | 0.7786 | 0.7793 | 0.8052 | 0.8041 | 0.7982 | 0.8137
Cowboy | 0.8558 | 0.8544 | 0.8553 | 0.8540 | 0.8580 | 0.8520 | 0.8614 || 0.7837 | 0.7833 | 0.7882 | 0.7879 | 0.7968 | 0.7913 | 0.7978
Flower | 0.8194 | 0.8176 | 0.8217 | 0.8214 | 0.8240 | 0.8230 | 0.8369 || 0.7283 | 0.7222 | 0.7331 | 0.7340 | 0.7413 | 0.7446 | 0.7552
Flowers | 0.7950 | 0.7868 | 0.7980 | 0.7935 | 0.7989 | 0.7992 | 0.8122 || 0.6963 | 0.6885 | 0.6994 | 0.6949 | 0.7103 | 0.7061 | 0.7150
Girls | 0.8065 | 0.8023 | 0.8089 | 0.8011 | 0.8001 | 0.7961 | 0.8152 || 0.7029 | 0.6962 | 0.7129 | 0.7044 | 0.7118 | 0.7096 | 0.7217
Hat | 0.8326 | 0.8411 | 0.8319 | 0.8225 | 0.8360 | 0.8338 | 0.8456 || 0.7737 | 0.7776 | 0.7775 | 0.7710 | 0.7879 | 0.7883 | 0.7925
Lake | 0.8287 | 0.8290 | 0.8298 | 0.8327 | 0.8323 | 0.8250 | 0.8418 || 0.7433 | 0.7431 | 0.7489 | 0.7515 | 0.7571 | 0.7482 | 0.7653
Leaves | 0.9278 | 0.9324 | 0.9301 | 0.9343 | 0.9366 | 0.9337 | 0.9415 || 0.8680 | 0.8794 | 0.8793 | 0.8888 | 0.8910 | 0.8888 | 0.8977
Lena | 0.8619 | 0.8637 | 0.8663 | 0.8625 | 0.8712 | 0.8675 | 0.8749 || 0.7971 | 0.8069 | 0.8047 | 0.7974 | 0.8125 | 0.8096 | 0.8182
Plants | 0.8373 | 0.8297 | 0.8372 | 0.8346 | 0.8459 | 0.8461 | 0.8477 || 0.7669 | 0.7602 | 0.7672 | 0.7585 | 0.7789 | 0.7878 | 0.7796
Starfish | 0.8289 | 0.8305 | 0.8276 | 0.8351 | 0.8304 | 0.8258 | 0.8397 || 0.7433 | 0.7453 | 0.7454 | 0.7606 | 0.7589 | 0.7491 | 0.7671
Average | 0.8424 | 0.8420 | 0.8492 | 0.8413 | 0.8458 | 0.8430 | 0.8536 || 0.7657 | 0.7661 | 0.7778 | 0.7702 | 0.7793 | 0.7769 | 0.7851
σ = 75 || σ = 100 (SSIM)
Image | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR || BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | HNSS-SSR
Bear | 0.6538 | 0.6604 | 0.6532 | 0.6597 | 0.6619 | 0.6555 | 0.6645 || 0.6110 | 0.6260 | 0.6087 | 0.6179 | 0.6273 | 0.6177 | 0.6277
Bike | 0.6166 | 0.6056 | 0.6263 | 0.6208 | 0.6254 | 0.6311 | 0.6396 || 0.5460 | 0.5293 | 0.5470 | 0.5366 | 0.5478 | 0.5618 | 0.5696
Buddhist | 0.7576 | 0.7707 | 0.7567 | 0.7557 | 0.7684 | 0.7647 | 0.7746 || 0.7111 | 0.7360 | 0.7062 | 0.7093 | 0.7383 | 0.7285 | 0.7348
Butterfly | 0.7882 | 0.8121 | 0.8005 | 0.8188 | 0.8274 | 0.8262 | 0.8324 || 0.7348 | 0.7638 | 0.7449 | 0.7777 | 0.7834 | 0.7904 | 0.7947
Cameraman | 0.7340 | 0.7413 | 0.7301 | 0.7251 | 0.7214 | 0.7445 | 0.7466 || 0.6928 | 0.7057 | 0.6776 | 0.6816 | 0.6553 | 0.7130 | 0.7111
Corn | 0.6839 | 0.6769 | 0.6792 | 0.7114 | 0.7044 | 0.7000 | 0.7275 || 0.6036 | 0.5837 | 0.5954 | 0.6236 | 0.6110 | 0.6137 | 0.6467
Cowboy | 0.7143 | 0.7126 | 0.7188 | 0.7201 | 0.7313 | 0.7277 | 0.7335 || 0.6589 | 0.6559 | 0.6552 | 0.6578 | 0.6739 | 0.6746 | 0.6793
Flower | 0.6482 | 0.6417 | 0.6472 | 0.6541 | 0.6499 | 0.6698 | 0.6728 || 0.5862 | 0.5763 | 0.5803 | 0.5795 | 0.5846 | 0.6047 | 0.6070
Flowers | 0.6269 | 0.6176 | 0.6274 | 0.6199 | 0.6334 | 0.6356 | 0.6399 || 0.5848 | 0.5747 | 0.5779 | 0.5707 | 0.5690 | 0.5855 | 0.5885
Girls | 0.6223 | 0.6156 | 0.6272 | 0.6248 | 0.6203 | 0.6299 | 0.6413 || 0.5651 | 0.5567 | 0.5639 | 0.5620 | 0.5505 | 0.5721 | 0.5828
Hat | 0.7238 | 0.7367 | 0.7294 | 0.7325 | 0.7504 | 0.7530 | 0.7557 || 0.6833 | 0.7048 | 0.6813 | 0.6922 | 0.7170 | 0.7242 | 0.7232
Lake | 0.6716 | 0.6739 | 0.6764 | 0.6786 | 0.6822 | 0.6731 | 0.6918 || 0.6178 | 0.6229 | 0.6173 | 0.6223 | 0.6233 | 0.6231 | 0.6403
Leaves | 0.8072 | 0.8233 | 0.8121 | 0.8339 | 0.8377 | 0.8365 | 0.8465 || 0.7482 | 0.7627 | 0.7467 | 0.7883 | 0.7811 | 0.7900 | 0.7986
Lena | 0.7359 | 0.7488 | 0.7424 | 0.7426 | 0.7565 | 0.7588 | 0.7657 || 0.6815 | 0.6989 | 0.6855 | 0.6945 | 0.7178 | 0.7205 | 0.7208
Plants | 0.7006 | 0.7008 | 0.7014 | 0.6970 | 0.7172 | 0.7252 | 0.7180 || 0.6525 | 0.6593 | 0.6475 | 0.6428 | 0.6680 | 0.6776 | 0.6737
Starfish | 0.6670 | 0.6695 | 0.6637 | 0.6807 | 0.6741 | 0.6691 | 0.6900 || 0.6053 | 0.6068 | 0.6021 | 0.6111 | 0.6081 | 0.6112 | 0.6288
Average | 0.6970 | 0.7005 | 0.7070 | 0.7047 | 0.7101 | 0.7125 | 0.7213 || 0.6427 | 0.6477 | 0.6451 | 0.6480 | 0.6535 | 0.6630 | 0.6705
Table 3. Average denoising result comparison of BM3D [29], NCSR [13], PGPD [28], GSRC-ENSS [1], RRC [41], SNSS [24], GSMM [46], LRENSS [7], and HNSS-SSR on the BSD68 dataset [57].
Each cell: PSNR (dB) / SSIM.
Methods | BM3D | NCSR | PGPD | GSRC-ENSS | RRC | SNSS | GSMM | LRENSS | HNSS-SSR
σ = 15 | 31.08 / 0.8722 | 31.19 / 0.8770 | 31.13 / 0.8696 | 31.06 / 0.8670 | 31.06 / 0.8644 | 31.29 / 0.8765 | 31.32 / 0.8804 | 31.36 / 0.8819 | 31.37 / 0.8829
σ = 25 | 28.56 / 0.8016 | 28.62 / 0.8045 | 28.62 / 0.7994 | 28.55 / 0.7985 | 28.56 / 0.7936 | 28.72 / 0.8007 | 28.80 / 0.8108 | 28.87 / 0.8122 | 28.85 / 0.8108
σ = 50 | 25.62 / 0.6866 | 25.59 / 0.6864 | 25.75 / 0.6870 | 25.61 / 0.6815 | 25.67 / 0.6840 | 25.73 / 0.6876 | 25.85 / 0.6959 | 25.90 / 0.7018 | 25.87 / 0.7012
Average | 28.42 / 0.7868 | 28.37 / 0.7893 | 28.50 / 0.7853 | 28.41 / 0.7823 | 28.43 / 0.7806 | 28.58 / 0.7883 | 28.66 / 0.7957 | 28.71 / 0.7986 | 28.70 / 0.8013
Table 4. Average denoising result comparison of TNRD [19], S2S [58], and HNSS-SSR on the Set12 dataset [20].
Each cell: PSNR (dB) / SSIM.
Methods | σ = 15 | σ = 25 | σ = 50 | Average
TNRD | 32.51 / 0.8967 | 30.06 / 0.8520 | 26.81 / 0.7666 | 29.78 / 0.8384
S2S | 32.09 / 0.8894 | 30.04 / 0.8493 | 26.50 / 0.7392 | 29.54 / 0.8260
HNSS-SSR | 32.64 / 0.8999 | 30.24 / 0.8566 | 27.02 / 0.7803 | 29.97 / 0.8456
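For reference, the PSNR (dB) and SSIM entries reported in the tables above follow their standard definitions. The snippet below is a minimal sketch of how such entries can be computed with scikit-image's stock metric implementations; the file names are placeholders, and this is not the authors' evaluation code.

```python
# Minimal sketch of how the PSNR/SSIM entries above are typically computed.
# Assumes 8-bit grayscale images; uses scikit-image's stock metrics rather
# than the authors' own evaluation code. File names are placeholders.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = io.imread("clean.png").astype(np.float64)        # ground-truth image
restored = io.imread("restored.png").astype(np.float64)  # restored estimate

# data_range=255 matches the 8-bit convention used throughout the tables.
psnr = peak_signal_noise_ratio(clean, restored, data_range=255)
ssim = structural_similarity(clean, restored, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```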
Table 5. PSNR comparison of BM3D [59], EPLL [14], NCSR [13], JSM [60], MS-EPLL [6], SNSS [24], and HNSS-SSR for image deblurring.
Uniform Blur, σ = 2 || Gaussian Blur, σ = 2 (PSNR, dB)
Image | BM3D | EPLL | NCSR | JSM | MS-EPLL | SNSS | HNSS-SSR || BM3D | EPLL | NCSR | JSM | MS-EPLL | SNSS | HNSS-SSR
Bear | 30.49 | 28.87 | 31.14 | 28.09 | 29.15 | 31.37 | 31.48 || 31.99 | 27.63 | 32.25 | 31.30 | 29.99 | 32.66 | 32.82
Bike | 24.57 | 23.19 | 25.41 | 23.89 | 23.92 | 25.47 | 25.26 || 26.65 | 22.92 | 26.98 | 26.65 | 23.49 | 26.90 | 27.22
Buddhist | 34.33 | 33.44 | 35.02 | 29.93 | 33.29 | 35.35 | 35.59 || 36.91 | 34.46 | 36.90 | 34.42 | 33.55 | 38.24 | 38.35
Butterfly | 26.80 | 24.44 | 28.83 | 25.65 | 25.26 | 29.14 | 29.52 || 28.58 | 22.00 | 29.78 | 28.79 | 22.75 | 30.20 | 31.00
Cameraman | 27.30 | 26.02 | 28.59 | 26.20 | 26.82 | 28.67 | 28.67 || 27.46 | 26.62 | 28.31 | 27.45 | 27.43 | 28.13 | 28.24
Corn | 26.75 | 24.56 | 27.87 | 25.55 | 25.26 | 28.24 | 28.58 || 28.91 | 23.89 | 29.69 | 29.00 | 24.42 | 30.08 | 30.45
Cowboy | 27.19 | 25.93 | 27.99 | 25.90 | 26.54 | 28.09 | 28.02 || 28.05 | 24.86 | 28.45 | 27.95 | 26.59 | 28.47 | 28.65
Flower | 28.58 | 27.04 | 29.38 | 26.88 | 27.61 | 29.37 | 29.55 || 30.41 | 26.64 | 30.82 | 30.01 | 27.34 | 31.08 | 31.42
Flowers | 28.54 | 26.31 | 29.28 | 26.87 | 26.74 | 29.31 | 29.42 || 29.84 | 25.14 | 30.20 | 29.51 | 27.20 | 30.25 | 30.52
Girls | 26.47 | 24.00 | 27.15 | 25.29 | 24.00 | 27.22 | 27.34 || 27.82 | 22.70 | 28.11 | 27.72 | 23.21 | 28.15 | 28.50
Hat | 30.63 | 29.22 | 31.30 | 28.23 | 29.44 | 31.45 | 31.60 || 31.78 | 29.20 | 32.24 | 31.06 | 28.04 | 32.53 | 32.76
Lake | 27.32 | 25.12 | 28.12 | 25.90 | 25.74 | 28.06 | 28.41 || 29.17 | 22.60 | 29.48 | 28.91 | 26.23 | 29.63 | 29.91
Leaves | 26.89 | 23.46 | 28.98 | 25.48 | 23.48 | 29.08 | 29.60 || 29.00 | 21.38 | 30.34 | 29.16 | 21.53 | 30.69 | 31.63
Lena | 30.35 | 28.13 | 31.26 | 28.00 | 28.46 | 31.32 | 31.53 || 32.24 | 28.00 | 32.67 | 31.46 | 26.64 | 33.02 | 33.33
Plants | 32.07 | 29.83 | 33.12 | 28.88 | 29.58 | 33.52 | 33.78 || 33.99 | 30.18 | 34.65 | 32.87 | 31.35 | 35.59 | 35.93
Starfish | 28.08 | 26.32 | 29.20 | 26.63 | 27.08 | 29.42 | 29.63 || 30.20 | 26.20 | 30.98 | 30.08 | 26.35 | 31.40 | 31.76
Average | 28.52 | 26.62 | 29.54 | 26.71 | 27.02 | 29.69 | 29.87 || 30.19 | 25.90 | 30.74 | 29.77 | 26.63 | 31.06 | 31.41
Table 6. SSIM comparison of BM3D [59], EPLL [14], NCSR [13], JSM [60], MS-EPLL [6], SNSS [24], and HNSS-SSR for image deblurring.
Uniform Blur, σ = 2 || Gaussian Blur, σ = 2 (SSIM)
Image | BM3D | EPLL | NCSR | JSM | MS-EPLL | SNSS | HNSS-SSR || BM3D | EPLL | NCSR | JSM | MS-EPLL | SNSS | HNSS-SSR
Bear | 0.8074 | 0.8251 | 0.8269 | 0.6621 | 0.8263 | 0.8386 | 0.8405 || 0.8618 | 0.8560 | 0.8618 | 0.8134 | 0.8673 | 0.8836 | 0.8852
Bike | 0.7589 | 0.7393 | 0.7996 | 0.7046 | 0.7741 | 0.8081 | 0.8019 || 0.8511 | 0.8082 | 0.8599 | 0.8403 | 0.8274 | 0.8678 | 0.8730
Buddhist | 0.8979 | 0.9158 | 0.9026 | 0.6701 | 0.8926 | 0.9205 | 0.9246 || 0.9337 | 0.9434 | 0.9256 | 0.8481 | 0.9297 | 0.9583 | 0.9563
Butterfly | 0.8714 | 0.8743 | 0.9076 | 0.7629 | 0.8852 | 0.9212 | 0.9251 || 0.9157 | 0.8840 | 0.9220 | 0.8814 | 0.8922 | 0.9422 | 0.9470
Cameraman | 0.8258 | 0.8345 | 0.8568 | 0.6731 | 0.8256 | 0.8592 | 0.8631 || 0.8416 | 0.8486 | 0.8547 | 0.7845 | 0.8271 | 0.8732 | 0.8756
Corn | 0.8406 | 0.8175 | 0.8692 | 0.7753 | 0.8324 | 0.8844 | 0.8909 || 0.8970 | 0.8619 | 0.9079 | 0.8860 | 0.8678 | 0.9221 | 0.9264
Cowboy | 0.8452 | 0.8544 | 0.8668 | 0.7181 | 0.8580 | 0.8766 | 0.8776 || 0.8861 | 0.8698 | 0.8880 | 0.8452 | 0.8838 | 0.9031 | 0.9045
Flower | 0.8119 | 0.7984 | 0.8392 | 0.6937 | 0.8173 | 0.8445 | 0.8484 || 0.8701 | 0.8511 | 0.8773 | 0.8340 | 0.8608 | 0.8925 | 0.8975
Flowers | 0.8022 | 0.7980 | 0.8273 | 0.6553 | 0.8105 | 0.8402 | 0.8402 || 0.8592 | 0.8397 | 0.8617 | 0.8081 | 0.8569 | 0.8773 | 0.8827
Girls | 0.7907 | 0.7853 | 0.8216 | 0.7240 | 0.8081 | 0.8307 | 0.8310 || 0.8537 | 0.8203 | 0.8626 | 0.8404 | 0.8353 | 0.8732 | 0.8780
Hat | 0.8427 | 0.8435 | 0.8505 | 0.6428 | 0.8220 | 0.8597 | 0.8645 || 0.8637 | 0.8673 | 0.8674 | 0.7938 | 0.8384 | 0.8909 | 0.8930
Lake | 0.8230 | 0.8285 | 0.8471 | 0.7021 | 0.8288 | 0.8538 | 0.8609 || 0.8836 | 0.8566 | 0.8865 | 0.8457 | 0.8633 | 0.9021 | 0.9061
Leaves | 0.8947 | 0.8792 | 0.9345 | 0.8179 | 0.8950 | 0.9410 | 0.9470 || 0.9338 | 0.8922 | 0.9452 | 0.9153 | 0.8986 | 0.9587 | 0.9654
Lena | 0.8563 | 0.8649 | 0.8753 | 0.6966 | 0.8606 | 0.8862 | 0.8903 || 0.9028 | 0.8976 | 0.9036 | 0.8485 | 0.8944 | 0.9246 | 0.9267
Plants | 0.8563 | 0.8636 | 0.8745 | 0.6707 | 0.8579 | 0.8932 | 0.8969 || 0.9042 | 0.9000 | 0.9057 | 0.8405 | 0.9011 | 0.9336 | 0.9349
Starfish | 0.8178 | 0.8205 | 0.8521 | 0.7238 | 0.8290 | 0.8621 | 0.8652 || 0.8849 | 0.8653 | 0.8937 | 0.8612 | 0.8696 | 0.9094 | 0.9136
Average | 0.8339 | 0.8339 | 0.8595 | 0.7058 | 0.8390 | 0.8700 | 0.8730 || 0.8839 | 0.8664 | 0.8890 | 0.8429 | 0.8696 | 0.9070 | 0.9104
Table 7. Average deblurring result comparison of BM3D [59], EPLL [14], NCSR [13], JSM [60], MS-EPLL [6], SNSS [24], JGD-SSR [3], LRENSS [7], and HNSS-SSR on the Set14 dataset [61].
Each cell: PSNR (dB) / SSIM.
Methods | BM3D | EPLL | NCSR | JSM | MS-EPLL | SNSS | JGD-SSR | LRENSS | HNSS-SSR
Uniform | 29.13 / 0.8026 | 27.23 / 0.7979 | 30.03 / 0.8239 | 27.22 / 0.6819 | 27.26 / 0.8050 | 30.00 / 0.8222 | 30.38 / 0.8294 | 30.25 / 0.8308 | 30.34 / 0.8289
Gaussian | 30.20 / 0.8544 | 27.21 / 0.8371 | 30.74 / 0.8529 | 29.86 / 0.8080 | 28.69 / 0.8434 | 30.96 / 0.8631 | 31.35 / 0.8683 | 31.30 / 0.8703 | 31.38 / 0.8697
Average | 29.67 / 0.8285 | 27.22 / 0.8175 | 30.39 / 0.8384 | 28.54 / 0.7450 | 28.19 / 0.8242 | 30.48 / 0.8427 | 30.87 / 0.8489 | 30.78 / 0.8506 | 30.86 / 0.8493
Table 8. Average deblurring result comparison of RED [62], IRCNN [63], H-PnP [64], and HNSS-SSR on the Set14 dataset [61].
Each cell: PSNR (dB) / SSIM.
Methods | Uniform Blur | Gaussian Blur | Average
RED | 30.03 / 0.8238 | 30.91 / 0.8566 | 30.47 / 0.8402
IRCNN | 30.30 / 0.8281 | 31.29 / 0.8596 | 30.78 / 0.8438
H-PnP | 30.25 / 0.8238 | 31.33 / 0.8651 | 30.79 / 0.8445
HNSS-SSR | 30.34 / 0.8289 | 31.38 / 0.8697 | 30.86 / 0.8493
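The two deblurring settings evaluated in Tables 5-8 can be synthesized from a clean image by blurring it and adding white Gaussian noise with σ = 2, following the degradation model y = Φx + v. The sketch below illustrates this; the kernel parameters used here (a 9 × 9 uniform kernel and a Gaussian kernel with standard deviation 1.6) are common choices in this literature and are assumptions, since the tables do not restate them.

```python
# Sketch of synthesizing the two deblurring test settings, y = blur(x) + noise,
# evaluated in Tables 5-8. The kernel parameters below (9x9 uniform kernel;
# Gaussian kernel with standard deviation 1.6) are common choices in this
# literature and are assumptions here, not values taken from the tables.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(0)
x = np.clip(rng.normal(128.0, 40.0, (256, 256)), 0, 255)  # stand-in test image

sigma_n = 2.0  # additive Gaussian noise level (sigma = 2 in Tables 5-8)
y_uniform = uniform_filter(x, size=9) + rng.normal(0.0, sigma_n, x.shape)
y_gaussian = gaussian_filter(x, sigma=1.6) + rng.normal(0.0, sigma_n, x.shape)
```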
Table 9. Running time in seconds (s) of different denoising and deblurring methods.
Image Denoising (σ = 50)
Methods | BM3D [29] | NCSR [13] | PGPD [28] | GSRC-ENSS [1] | RRC [41] | SNSS [24] | GSMM [46] | LRENSS [7] | HNSS-SSR
Time (s) | 0.8 | 224.3 | 8.3 | 369.2 | 226.6 | 602.1 | - | 108.6 | 49.4
Image Deblurring
Methods | BM3D [59] | EPLL [14] | NCSR [13] | JSM [60] | MS-EPLL [6] | SNSS [24] | JGD-SSR [3] | LRENSS [7] | HNSS-SSR
Time (s) | 0.9 | 49.7 | 98.1 | 158.9 | 214.2 | 4830.4 | 405.8 | 707.6 | 690.3