Article

Image Compressive Sensing via Hybrid Nonlocal Sparsity Regularization

State Key Lab of Integrated Services Networks, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(19), 5666; https://doi.org/10.3390/s20195666
Submission received: 19 August 2020 / Revised: 21 September 2020 / Accepted: 29 September 2020 / Published: 3 October 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

This paper focuses on image compressive sensing (CS). As intrinsic properties of natural images, nonlocal self-similarity and sparse representation have been widely used in various image processing tasks. Most existing image CS methods apply either a self-adaptive dictionary (e.g., a principal component analysis (PCA) dictionary or a singular value decomposition (SVD) dictionary) or a fixed dictionary (e.g., the discrete cosine transform (DCT), discrete wavelet transform (DWT), or Curvelet) as the sparse basis, but a single dictionary cannot fully exploit the sparsity of images. In this paper, a Hybrid NonLocal Sparsity Regularization (HNLSR) is developed and applied to image compressive sensing. The proposed HNLSR measures nonlocal sparsity in the 2D and 3D transform domains simultaneously, utilizing both a self-adaptive singular value decomposition (SVD) dictionary and a fixed 3D transform. We use an efficient alternating minimization method to solve the optimization problem. Experimental results demonstrate that the proposed method outperforms existing methods in both objective evaluation and visual quality.

1. Introduction

As a joint framework for sampling and compression, compressive sensing (CS) [1,2] shows that if a signal is sparse in some domain, it can be perfectly reconstructed from far fewer samples than the Nyquist rate requires. This characteristic gives CS great potential in signal acquisition and processing. First, as the number of samples is greatly reduced, devices with limited sensor size can obtain high-definition information using low-definition sensors. Figure 1 shows the architecture of the single-pixel camera [3]: with a sensor of only one pixel, this system can acquire a complete image. Second, the CS framework transfers the computational burden to the decoding side. For energy-limited applications, such as wireless sensor networks, this advantage can greatly extend the life cycle of the nodes. As the encoding side is simplified, the performance of the system depends largely on the decoding side, namely, the "Recovery method" part in Figure 1. This paper focuses on the recovery method of image CS. Owing to the advantages mentioned above, CS has been applied in many fields, such as digital imaging [3], background subtraction [4], medical imaging [5], and remote sensing [6].
In the framework of compressive sensing, a one-dimensional sparse signal can be reconstructed by solving an $L_0$-norm minimization problem. Since $L_0$-norm minimization is non-convex and NP-hard, the $L_0$-norm is often replaced by the $L_1$-norm. It has been proved that the two norms are equivalent in most cases [2], and many CS recovery methods have been proposed, such as the iterative thresholding algorithm [7], orthogonal matching pursuit [8], and the split Bregman algorithm [9].
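As a concrete illustration of this family of solvers, the following minimal sketch (Python/NumPy; the toy problem, names, and parameter values are ours, not from the paper) applies iterative soft thresholding to the unconstrained $L_1$ formulation:

```python
import numpy as np

def ista(y, A, lam=0.01, n_iter=200):
    """Minimal ISTA sketch for min_a 0.5*||y - A a||_2^2 + lam*||a||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = a - A.T @ (A @ a - y) / L          # gradient step on the fidelity term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return a

# Toy example: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = ista(A @ x, A)
```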
For image compressive sensing, the key issue is how to exploit the intrinsic prior information of images. As the model of prior knowledge has a significant impact on the performance of image compressive sensing algorithms, many kinds of regularization have been developed. Conventional regularization terms, such as the Mumford–Shah (MS) model [10] and total variation (TV) [7,11,12,13], are established under the assumption that images are locally smooth. For example, Li et al. [13] proposed a TV-based CS algorithm and developed an efficient augmented Lagrangian method to solve it. Candes et al. [11] enhanced the sparsity of the TV norm via a weighting strategy. However, these regularizations only consider the local smoothness of images and cannot restore details and textures well. The TV norm also favors piecewise-constant solutions, resulting in oversmoothing. To overcome this problem and improve performance, many compressive sensing methods utilize the prior information of transform coefficients [14,15,16]. Kim et al. [15] modeled the statistical dependencies between transform coefficients with a Gaussian Scale Mixture (GSM) and achieved better reconstruction performance.
In the past few years, sparse representation has emerged and demonstrated good performance in various image processing tasks [17,18,19,20,21]. The purpose of sparse representation is to represent a signal with as few atoms as possible in a learned over-complete dictionary. Compared with a fixed dictionary, a learned dictionary can better express the sparsity of images. However, dictionaries are generally learned from external clean images, and the learning may suffer from high computational complexity.
Recently, inspired by nonlocal means (NLM) [22], many algorithms based on nonlocal self-similarity have been proposed [23,24,25,26,27,28,29]. Dabov et al. proposed the Block-Matching and 3D filtering (BM3D) algorithm for image denoising [23]. In BM3D, similar patches in a degraded image are grouped into 3D arrays, and collaborative filtering is performed in the 3D transform domain. Egiazarian et al. [24] extended BM3D to compressive sensing and proposed BM3D-CS. Zhang et al. [26] proposed a structural group sparsity representation (SGSR) model to enforce image sparsity in an adaptive SVD domain. Dong et al. [28] proposed a nonlocal low-rank regularization (NLR) to exploit self-similarity and applied it to the reconstruction of photographic and MRI images. In [29], Zha et al. incorporated a non-convex penalty function into group sparse representation and obtained state-of-the-art reconstruction performance. Gao et al. [30] proposed to use Z-score standardization to improve the sparse representation ability of patch groups. Keshavarzian et al. [31] proposed to utilize principal component analysis (PCA) to learn a dictionary for each group and introduced non-convex $LL_p$-norm regularization to better promote the sparsity of the patch group coefficients. In [32], an internal self-adaptive dictionary and an external learned dictionary were used to encode a patch group alternately, achieving better performance than a single dictionary.
Another idea is to exploit both local sparsity and nonlocal self-similarity [33,34,35,36,37]. For example, Zhang et al. [33] combined local anisotropic total variation with nonlocal 3D sparsity and named the result the Collaborative Sparsity Measure (CoSM). Different from the work in [33], Eslahi et al. [37] used the curvelet transform to enforce local patterns. In [34], Dong et al. utilized a local patch-based sparsity constraint and a nonlocal self-similarity constraint to balance the trade-off between adaptation and robustness. Zhou et al. [38] proposed a data-adaptive kernel regressor to extract local structure and used a nonlocal means filter to enforce nonlocal information.
With the development of deep learning, many convolutional neural network (CNN)-based image compressive sensing algorithms have been proposed. For example, Kulkarni et al. [39] proposed a non-iterative and parallelizable CNN architecture to obtain an initial recovery and fed it into an off-the-shelf denoiser to obtain the final image. Zhang et al. [40] cast the Iterative Shrinkage-Thresholding Algorithm (ISTA) into a CNN framework and developed an effective strategy to solve it. In [41], low-rank tensor factor analysis was utilized to capture nonlocal correlation, and a deep convolutional architecture was adopted to accelerate the matrix inversion in CS. DR2-Net [42] utilized a linear mapping to reconstruct a preliminary image and used residual learning to further promote the reconstruction quality. Yang et al. [43] unrolled the Alternating Direction Method of Multipliers (ADMM) into a deep architecture and proposed ADMM-CSNet. Zhang et al. [44] proposed an optimization-inspired explicable deep network, OPINE-Net, whose parameters are all learned end-to-end using back-propagation.
In this paper, we propose a Hybrid NonLocal Sparsity Regularization (HNLSR) for image compressive sensing. First, different from the methods mentioned above, two nonlocal self-similarity constraints are applied simultaneously to exploit the intrinsic sparsity of images. Second, fixed dictionaries are universal, while learned dictionaries adapt better to the image itself; to take advantage of both, a fixed 3D transform and a 2D self-adaptive dictionary are utilized. Finally, for the non-convex HNLSR model, we use the split Bregman method to divide it into several subproblems, making it easier and more efficient to solve. The flowchart is illustrated in Figure 2. Experimental results show that, compared with both model-based and deep learning-based algorithms, the proposed HNLSR-CS demonstrates superior performance.
The remainder of this paper is organized as follows. Section 2 introduces the related works. In Section 3, we present the proposed method. The experiment and analysis are elaborated in Section 4. Section 5 concludes the paper.

2. Related Work

2.1. Compressive Sensing

For an $n$-dimensional signal $x \in \mathbb{R}^n$, its CS measurements can be expressed as
$$y = \Phi x$$
where $y \in \mathbb{R}^m$ and $\Phi \in \mathbb{R}^{m \times n}$ ($m \ll n$). $\Phi$ is the measurement matrix, which meets the restricted isometry property (RIP) [1]. If $x$ is sparse in a transform domain $\Psi \in \mathbb{R}^{n \times n}$, namely, $x = \Psi\alpha$, the reconstruction of $x$ can be formulated as
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad y = \Phi\Psi\alpha$$
where $\|\alpha\|_0$ is the $L_0$-norm that counts the nonzero elements in $\alpha$.
The unconstrained Lagrangian form of Equation (2) is
$$\hat{\alpha} = \arg\min_{\alpha} \frac{1}{2}\|y - \Phi\Psi\alpha\|_2^2 + \lambda\|\alpha\|_0$$
where $\lambda$ is the regularization parameter. After obtaining the solution of Equation (3), $x$ can be restored by $\hat{x} = \Psi\hat{\alpha}$.
For image compressive sensing, the optimization problem can be written as
$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - \Phi x\|_2^2 + R(x)$$
where $x$ stands for an image, $\Phi$ is the measurement matrix, and $R(x)$ is the regularization term that exploits the intrinsic prior information of images.

2.2. Sparse Representation and Group-Based Sparsity

An image $x \in \mathbb{R}^N$ can be divided into many overlapping patches. Suppose a patch $x_i$ of size $\sqrt{n} \times \sqrt{n}$ at location $i$, $i = 1, 2, \ldots, N$. Sparse representation means that this patch can be represented over a redundant dictionary $D_i$:
$$\hat{\alpha}_i = \arg\min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad x_i = D_i\alpha_i$$
Nonlocal self-similarity means that a patch has many similar patches at other positions [18,22,23]. We search for its $(m-1)$ best-matched patches and stack them into a data matrix $x_{G_i} \in \mathbb{R}^{n \times m}$, where each column of $x_{G_i}$ is a vectorized similar patch, so we have
$$x_{G_i} = R_{G_i}(x)$$
where the subscript $G_i$ is the index of the group, $R_{G_i}$ is an operator that extracts all the similar patches, and $x_{G_i}$ is a patch group. Given a proper dictionary $D_{G_i}$, this group can be expressed as
$$\hat{\alpha}_{G_i} = \arg\min_{\alpha_{G_i}} \|\alpha_{G_i}\|_0 \quad \text{s.t.} \quad x_{G_i} = D_{G_i}\alpha_{G_i}$$
where $\hat{\alpha}_{G_i}$ is the sparse coefficient. After obtaining $\hat{\alpha}_{G_i}$, the whole image can be reconstructed via [45]
$$x \approx \left(\sum_{i=1}^{N} R_{G_i}^{T}\,\mathbf{1}_{n \times m}\right)^{-1} \odot \left(\sum_{i=1}^{N} R_{G_i}^{T}\, D_{G_i}\hat{\alpha}_{G_i}\right)$$
where $\mathbf{1}_{n \times m}$ is an $n \times m$ matrix with all elements equal to 1, and the inverse and the product $\odot$ are taken element-wise. Equation (8) means that we can restore the image by putting patches back to their original locations and averaging the overlaps pixel by pixel.
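As a concrete illustration of the put-back-and-average step in Equation (8), here is a minimal sketch (Python/NumPy; the function names and parameters are ours) of extracting overlapping patches and aggregating them back; block matching into groups is omitted for brevity:

```python
import numpy as np

def extract_patches(img, p=8, step=4):
    """Collect overlapping p x p patches as columns of a matrix (cf. R_Gi)."""
    H, W = img.shape
    locs = [(i, j) for i in range(0, H - p + 1, step)
                   for j in range(0, W - p + 1, step)]
    cols = np.stack([img[i:i + p, j:j + p].ravel() for i, j in locs], axis=1)
    return cols, locs

def aggregate_patches(cols, locs, shape, p=8):
    """Put patches back and average the overlaps pixel by pixel (cf. Equation (8)).

    Assumes p and step are chosen so that every pixel is covered at least once.
    """
    acc = np.zeros(shape)   # sum of patch contributions at each pixel
    cnt = np.zeros(shape)   # how many patches cover each pixel
    for k, (i, j) in enumerate(locs):
        acc[i:i + p, j:j + p] += cols[:, k].reshape(p, p)
        cnt[i:i + p, j:j + p] += 1.0
    return acc / cnt
```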

2.3. Nonlocal Self-Similarity in 3D Transform Domain

Dabov et al. proposed the well-known BM3D [23] for image denoising, and self-similarity in the 3D transform domain has attracted great attention since then [24,33,37]. For a patch $x_i$ of size $\sqrt{n} \times \sqrt{n}$, after searching for its $(m-1)$ similar patches, they are stacked into a 3D array $Z$ of size $\sqrt{n} \times \sqrt{n} \times m$. Next, a 3D transform is performed to get the transform coefficients
$$\mathcal{T}_{3D}(Z) = \Theta$$
where $\mathcal{T}_{3D}(\cdot)$ is a transform operator and $\Theta$ are the coefficients. Since these coefficients are considered sparse, they are shrunk by thresholding (e.g., soft or hard thresholding). Then, the inverse 3D transform is applied to the shrunken coefficients to generate the estimated group, and these estimates are returned to their original positions. Nonlocal 3D sparsity can exploit a high degree of sparsity in images and preserves details and differences between patches well.
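The shrinkage step can be sketched as follows; for simplicity, a separable 3D DCT stands in for the 2D DCT plus 1D Haar combination used later in this paper, and this is only an illustrative sketch, not the full BM3D pipeline:

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter(group_3d, thr):
    """Hard-threshold a stack of m similar patches (shape (p, p, m)) in a
    3D DCT domain; a sketch of the BM3D-style shrinkage step."""
    coef = dctn(group_3d, norm="ortho")    # T_3D(Z) = Theta
    coef[np.abs(coef) < thr] = 0.0         # keep only significant coefficients
    return idctn(coef, norm="ortho")       # inverse 3D transform
```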

2.4. Split Bregman Iteration

The split Bregman iteration (SBI) [9] was proposed to solve various optimization problems. Consider a constrained problem:
$$\min_{x,z} H(x) + G(z) \quad \text{s.t.} \quad x = Kz$$
where $H: \mathbb{R}^N \to \mathbb{R}$ and $G: \mathbb{R}^M \to \mathbb{R}$ are convex functions and $K \in \mathbb{R}^{N \times M}$. This optimization problem can be solved efficiently by Algorithm 1. Under the SBI framework, since $x$ and $z$ are coupled only through the constraint, the problem can be split into two subproblems (Steps 3 and 4). The rationale is that each of Steps 3 and 4 solves for only one variable at a time, which is much easier than the original problem.
Algorithm 1 Split Bregman Iteration (SBI).
1: Set $\mu$, $k = 0$, $x^{(0)} = 0$, $z^{(0)} = 0$, $d^{(0)} = 0$.
2: repeat
3: $x^{(k+1)} = \arg\min_x H(x) + \frac{\mu}{2}\|x - Kz^{(k)} - d^{(k)}\|_2^2$;
4: $z^{(k+1)} = \arg\min_z G(z) + \frac{\mu}{2}\|x^{(k+1)} - Kz - d^{(k)}\|_2^2$;
5: $d^{(k+1)} = d^{(k)} - (x^{(k+1)} - Kz^{(k+1)})$;
6: $k = k + 1$;
7: until the stopping criterion is satisfied.
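A minimal code skeleton of Algorithm 1, assuming the two subproblem solvers are supplied as callables (the signatures are ours, for illustration only):

```python
def split_bregman(solve_x, solve_z, K, z0, d0, n_iter=50):
    """Skeleton of Algorithm 1 (SBI).

    solve_x(z, d): returns argmin_x H(x) + (mu/2)*||x - K z - d||^2  (Step 3)
    solve_z(x, d): returns argmin_z G(z) + (mu/2)*||x - K z - d||^2  (Step 4)
    """
    z, d = z0, d0
    for _ in range(n_iter):
        x = solve_x(z, d)      # Step 3: update x with z fixed
        z = solve_z(x, d)      # Step 4: update z with x fixed
        d = d - (x - K @ z)    # Step 5: Bregman variable update
    return x, z
```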

3. Proposed Method

3.1. Hybrid Nonlocal Sparsity Regularization (HNLSR)

Integrating two different kinds of nonlocal regularization, we propose a Hybrid Nonlocal Sparsity Regularization (HNLSR), which can be expressed as
$$R_{HNLSR}(x) = \lambda\|\alpha\|_0 + \tau\|\mathcal{T}_{3D}(Z(x))\|_1 = \lambda\|\alpha\|_0 + \tau\|\Theta\|_1$$
where $\alpha$ are the coefficients under a 2D sparse dictionary, $\lambda$ and $\tau$ are regularization parameters, and $Z(x)$ is the 3D form of $x$. The proposed regularization has two advantages:
  • It constrains sparsity in both the 2D and 3D domains, which means it can better exploit the intrinsic nonlocal similarity of images.
  • We use a self-adaptive dictionary as the 2D sparse basis and a fixed 3D transform to measure sparsity in a higher-dimensional space. The two different kinds of dictionaries improve the robustness of the regularization.
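As a small illustration, evaluating the regularizer for given coefficients is straightforward (a sketch with our own function name, not part of the original method description):

```python
import numpy as np

def r_hnlsr(alpha, theta, lam, tau):
    """Evaluate R_HNLSR = lam*||alpha||_0 + tau*||Theta||_1 for given coefficients."""
    return lam * np.count_nonzero(alpha) + tau * np.abs(theta).sum()
```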
Next, we will apply the proposed HNLSR to image compressive sensing and show how to solve the optimization problem.

3.2. Image CS via HNLSR

Incorporating Equation (11) into Equation (4), the proposed optimization problem for image CS is expressed as
$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - \Phi x\|_2^2 + \lambda\|\alpha\|_0 + \tau\|\Theta\|_1$$
where $\lambda$ and $\tau$ are regularization parameters. We use the SBI framework to solve this optimization problem. Define $H(x) = \frac{1}{2}\|y - \Phi x\|_2^2$ and $G(z) = \lambda\|\alpha\|_0 + \tau\|\Theta\|_1$; the corresponding $K$ is built from the two sparse dictionaries. Invoking Line 3 in Algorithm 1, we obtain
$$x^{(k+1)} = \arg\min_{x} \frac{1}{2}\|y - \Phi x\|_2^2 + \frac{\mu}{2}\left\|x - Dz^{(k)} - d^{(k)}\right\|_2^2 = \arg\min_{x} \frac{1}{2}\|y - \Phi x\|_2^2 + \frac{\mu}{2}\left\|\begin{bmatrix} x \\ x \end{bmatrix} - \begin{bmatrix} D_{2D} & 0 \\ 0 & D_{3D} \end{bmatrix}\begin{bmatrix} \alpha^{(k)} \\ \Theta^{(k)} \end{bmatrix} - \begin{bmatrix} b^{(k)} \\ c^{(k)} \end{bmatrix}\right\|_2^2$$
where $d^{(k)} = \begin{bmatrix} b^{(k)} \\ c^{(k)} \end{bmatrix}$. Splitting the second term in Equation (13), we have
$$x^{(k+1)} = \arg\min_{x} \frac{1}{2}\|y - \Phi x\|_2^2 + \frac{\mu_1}{2}\left\|x - D_{2D}\alpha^{(k)} - b^{(k)}\right\|_2^2 + \frac{\mu_2}{2}\left\|x - D_{3D}\Theta^{(k)} - c^{(k)}\right\|_2^2$$
Then we apply Line 4, and Equation (12) is transformed into
$$z^{(k+1)} = \arg\min_{z} G(z) + \frac{\mu}{2}\left\|x^{(k+1)} - Dz - d^{(k)}\right\|_2^2 = \arg\min_{\alpha,\,\Theta} \lambda\|\alpha\|_0 + \tau\|\Theta\|_1 + \frac{\mu}{2}\left\|\begin{bmatrix} x^{(k+1)} \\ x^{(k+1)} \end{bmatrix} - \begin{bmatrix} D_{2D} & 0 \\ 0 & D_{3D} \end{bmatrix}\begin{bmatrix} \alpha \\ \Theta \end{bmatrix} - \begin{bmatrix} b^{(k)} \\ c^{(k)} \end{bmatrix}\right\|_2^2$$
Finally, $b$ and $c$ can be calculated by
$$b^{(k+1)} = b^{(k)} - \left(x^{(k+1)} - D_{2D}\alpha^{(k+1)}\right)$$
$$c^{(k+1)} = c^{(k)} - \left(x^{(k+1)} - D_{3D}\Theta^{(k+1)}\right)$$
Therefore, the minimization problem of Equation (12) is divided into several subproblems and the solution to each subproblem will be discussed below.

3.2.1. x-Subproblem

Given $\alpha$, $\Theta$, $b$, and $c$, Equation (14) is a convex quadratic optimization problem, which we can solve efficiently with the gradient descent method
$$x^{(k+1)} = x^{(k)} - \eta^{(k)}\, g^{(k)}$$
where $g^{(k)}$ is the gradient of the objective in Equation (14)
$$g^{(k)} = \Phi^T\Phi x^{(k)} - \Phi^T y + \mu_1\left(x^{(k)} - D_{2D}\alpha^{(k)} - b^{(k)}\right) + \mu_2\left(x^{(k)} - D_{3D}\Theta^{(k)} - c^{(k)}\right)$$
and $\eta^{(k)}$ is the optimal step size, calculated via
$$\eta = \frac{g^T g}{g^T \Phi^T \Phi g + (\mu_1 + \mu_2)\, g^T g}$$
The superscript k of g is omitted for conciseness.
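A sketch of this update in code (assuming vectorized images and a dense measurement matrix for illustration; in the block-based scheme of Section 4, $\Phi$ is effectively block-diagonal, and the helper name and arguments are ours):

```python
import numpy as np

def update_x(x, y, Phi, u2, u3, mu1, mu2):
    """One gradient step for the x-subproblem.

    u2 = D_2D @ alpha + b and u3 = D_3D @ Theta + c are treated as
    fixed (vectorized) images within this step.
    """
    g = Phi.T @ (Phi @ x - y) + mu1 * (x - u2) + mu2 * (x - u3)  # gradient
    Pg = Phi @ g
    eta = (g @ g) / (Pg @ Pg + (mu1 + mu2) * (g @ g))            # optimal step size
    return x - eta * g
```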

3.2.2. z-Subproblem

Given $x$, $b$, and $c$, the $z$-subproblem Equation (15) can be divided into two formulas
$$\alpha^{(k+1)} = \arg\min_{\alpha} \lambda\|\alpha\|_0 + \frac{\mu_1}{2}\left\|x^{(k+1)} - D_{2D}\alpha - b^{(k)}\right\|_2^2$$
$$\Theta^{(k+1)} = \arg\min_{\Theta} \tau\|\Theta\|_1 + \frac{\mu_2}{2}\left\|x^{(k+1)} - D_{3D}\Theta - c^{(k)}\right\|_2^2$$
Let us define $\bar{x}^{(k+1)} = x^{(k+1)} - b^{(k)}$, where $\bar{x}^{(k+1)}$ can be seen as a noisy observation of $x^{(k+1)}$. Therefore, Equation (21) can be rewritten as
$$\alpha^{(k+1)} = \arg\min_{\alpha} \lambda\|\alpha\|_0 + \frac{\mu_1}{2}\left\|\bar{x}^{(k+1)} - D_{2D}\alpha\right\|_2^2$$
As the patch group is the basic unit of sparse coding, this problem can be split into several subproblems, one per group, where the coefficients of that group are the variables to be solved. Therefore, Equation (23) can be solved by
$$\min_{\alpha} \sum_{m=1}^{n} \frac{1}{2}\left\|\bar{x}_{G_m}^{(k+1)} - D_{2D_m}\alpha_{G_m}\right\|_F^2 + \theta\|\alpha_{G_m}\|_0$$
where $\theta = \lambda/\mu_1$, $\bar{x}_{G_m}^{(k+1)}$ is an image patch group, and $D_{2D_m}$ and $\alpha_{G_m}$ are the corresponding dictionary and sparse coefficients. For every group, we adopt the singular value decomposition (SVD) to generate the 2D dictionary. Applying the SVD to a group $\bar{x}_{G_m}$, we have
$$\bar{x}_{G_m} = U_{G_m}\Sigma_{G_m}V_{G_m}^T$$
where $\Sigma_{G_m}$ is the diagonal matrix of singular values. The dictionary is then defined as
$$D_{G_m} = U_{G_m}V_{G_m}^T$$
Therefore, each optimization problem in Equation (24) has a closed-form solution
$$\hat{\alpha}_{G_m} = \mathrm{hard}\left(\Sigma_{G_m}, \sqrt{2\theta}\right) = \Sigma_{G_m} \odot \mathbf{1}\left(\mathrm{abs}(\Sigma_{G_m}) \geq \sqrt{2\theta}\right)$$
where $\mathrm{hard}(\cdot)$ is the hard-thresholding function and $\odot$ stands for the element-wise product operator.
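A per-group sketch of Equations (25)–(27) (the function name is ours; the threshold follows the closed-form solution above):

```python
import numpy as np

def solve_alpha_group(xbar_group, theta):
    """Closed-form group solution: hard-threshold the singular values.

    xbar_group is the n x m matrix of similar patches; returns the
    thresholded singular values and the corresponding group estimate.
    """
    U, s, Vt = np.linalg.svd(xbar_group, full_matrices=False)
    s_hat = s * (np.abs(s) >= np.sqrt(2.0 * theta))  # hard(Sigma, sqrt(2*theta))
    return s_hat, (U * s_hat) @ Vt                   # reassemble D_Gm alpha_hat
```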
Similar to the $\alpha$-subproblem, we define $\hat{x}^{(k+1)} = x^{(k+1)} - c^{(k)}$. Considering that every overlapping image patch appears with equal probability, we can solve Equation (22) by
$$\min_{\Theta} \sum_{m=1}^{n} \frac{1}{2}\left\|\hat{x}_{G_m}^{(k+1)} - D_{3D}\Theta_{G_m}\right\|_F^2 + \frac{\tau}{\mu_2}\|\Theta_{G_m}\|_1$$
where $\hat{x}_{G_m}^{(k+1)}$ is a 3D patch array. This problem can be seen as a filtering problem in the transform domain. Invoking the Bayesian framework [21], the maximum a posteriori (MAP) estimate of $\Theta_{G_m}$ given $\hat{x}_{G_m}^{(k+1)}$ is
$$\Theta_{G_m} = \arg\max_{\Theta_{G_m}} \log P\left(\Theta_{G_m} \mid \hat{x}_{G_m}^{(k+1)}\right) = \arg\max_{\Theta_{G_m}} \left\{\log P\left(\hat{x}_{G_m}^{(k+1)} \mid \Theta_{G_m}\right) + \log P(\Theta_{G_m})\right\}$$
Assume that $\hat{x}_{G_m}^{(k+1)}$ is disturbed by Gaussian noise with standard deviation $\sigma_n$ and that $\Theta_{G_m}$ follows an i.i.d. Laplacian distribution
$$P(\Theta_{G_m}) = \prod_{i} \frac{1}{\sqrt{2}\,\sigma_i} \exp\left(-\frac{\sqrt{2}\,|\Theta_{G_m,i}|}{\sigma_i}\right)$$
where $\sigma_i$ is the standard deviation of $\Theta_{G_m,i}$. Substituting Equation (30) into Equation (29), we obtain
$$\arg\min_{\Theta_{G_m}} \frac{1}{2}\left\|\hat{x}_{G_m}^{(k+1)} - D_{3D}\Theta_{G_m}\right\|_F^2 + 2\sqrt{2}\,\sigma_n^2 \sum_{i=1}^{l} \frac{|\Theta_{G_m,i}|}{\sigma_i}$$
From the above analysis, we know that $\frac{\tau}{\mu_2} = \frac{2\sqrt{2}\,\sigma_n^2}{\sigma_i}$, and Equation (31) can be solved by the soft-thresholding function
$$\Theta_{G_m} = \mathrm{sgn}\left(D_{3D}^T\hat{x}_{G_m}^{(k+1)}\right) \odot \max\left(\left|D_{3D}^T\hat{x}_{G_m}^{(k+1)}\right| - \frac{2\sqrt{2}\,\sigma_n^2}{\sigma_i},\; 0\right)$$
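A sketch of this soft-thresholding step (the function name is ours; sigma_i may be a scalar or an array broadcastable against the coefficients):

```python
import numpy as np

def solve_theta_group(coef, sigma_n, sigma_i):
    """Soft-threshold transform coefficients coef = D_3D^T x_hat for one group."""
    thr = 2.0 * np.sqrt(2.0) * sigma_n ** 2 / sigma_i   # per-coefficient threshold
    return np.sign(coef) * np.maximum(np.abs(coef) - thr, 0.0)
```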
The proposed method for image compressive sensing is summarized in Algorithm 2.
Algorithm 2 Image compressive sensing via HNLSR.
1: Input: measurement $y$, measurement matrix $\Phi$.
   Initialization: (1) set $k$, $\lambda$, $\tau$, $b$, $c$, $m$, $\sigma_n$; (2) estimate an initial image $x_{init}$.
2: Compute $x$ via Eq. (19);
3: for each patch do
4:  (1) Perform block matching and form the patch group;
5:  (2) Generate a dictionary for the patch group via Eq. (25);
6:  (3) Compute $\alpha$ via Eq. (27);
7: end for
8: for each patch do
9:  (1) Search for similar patches and arrange them as 3D arrays;
10:  (2) Compute $\Theta$ via Eq. (32);
11: end for
12: Update $b$ via Eq. (16);
13: Update $c$ via Eq. (17);
14: $k = k + 1$; repeat Steps 2–13 until the stopping criterion is met.
15: Output: reconstructed image $x^*$.

4. Experimental Results

4.1. Implementation Details

This section presents the performance of the proposed HNLSR method. In our experiments, eight commonly used images (shown in Figure 3), all of size 256 × 256, are used to test the reconstruction performance of the algorithms. In the measurement phase, an image is divided into blocks of size 32 × 32, and a Gaussian matrix is applied to generate measurements for each block. In the reconstruction phase, the size of the overlapping patches is 8 × 8. The step size, i.e., the distance between two adjacent patches in the horizontal or vertical direction, is set to 4. For every image patch, we search for its 59 most similar patches in a 20 × 20 window. ($\mu_1$, $\mu_2$) is set to (0.0025, 0.0025), (0.0025, 0.00025), and (0.0025, 0.0001) when the sampling rate is 0.1, 0.2, and 0.3, respectively. The 3D dictionary is composed of a 2D DCT and a 1D Haar wavelet. The maximum iteration number is 120. We use the peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) [46] as performance evaluation indices. All experiments are performed in Matlab R2017a on a computer with an Intel Core i5-6500 CPU at 3.2 GHz, 8 GB of memory, and the Windows 10 operating system.
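For reference, the block-based Gaussian measurement described here can be sketched as follows (in Python/NumPy rather than the Matlab used in our experiments; the row normalization and seeding are assumptions for reproducibility, not details from the paper):

```python
import numpy as np

def block_cs_measure(img, block=32, rate=0.1, seed=0):
    """Measure each block x block tile with one shared Gaussian matrix.

    Returns the measurement matrix Phi and one measurement vector per block
    (stacked as columns). Assumes image dimensions are multiples of block.
    """
    rng = np.random.default_rng(seed)
    n = block * block
    m = int(round(rate * n))
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
    H, W = img.shape
    cols = [img[i:i + block, j:j + block].ravel()
            for i in range(0, H, block) for j in range(0, W, block)]
    y = np.stack([Phi @ c for c in cols], axis=1)
    return Phi, y
```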

4.2. Comparison with State-of-the-Art Methods

We compare our method with six representative methods: MH-BCS [47], RCoS [33], ALSB [27], GSR [45], JASR [37], and GSR-NCR [29]. MH-BCS uses residuals in the measurement domain and multihypothesis predictions to improve reconstruction quality; RCoS utilizes nonlocal 3D sparsity and local 2D sparsity (namely, total variation (TV)) to explore the intrinsic properties of images; ALSB is a patch-based sparse representation method; JASR employs the discrete curvelet transform (DCuT) to constrain local sparsity and combines it with nonlocal 3D sparsity; GSR is an extended version of SGSR [26]. Both GSR and GSR-NCR are group-based methods; their difference is that GSR uses the $L_0$-norm to constrain the sparse coefficients, while GSR-NCR uses a non-convex $L_p$-norm. GSR and GSR-NCR are considered state-of-the-art methods. The PSNR and FSIM results are shown in Table 1 and Table 2, respectively, and the best result for each sampling rate is marked in bold.
Compared with MH-BCS, methods based on nonlocal self-similarity have obvious performance advantages. As a patch-based algorithm, ALSB is inferior to the other methods in most cases. JASR performs better than RCoS because DCuT depicts local characteristics better than TV. Compared with methods using fixed dictionaries (namely, RCoS and JASR), methods using self-adaptive dictionaries perform better in general. The proposed method combines a fixed dictionary with a self-adaptive dictionary and achieves the best performance in most cases.
Some visual comparisons are illustrated in Figure 4, Figure 5, Figure 6 and Figure 7. In Figure 4, MH-BCS clearly generates the worst result. ALSB, GSR, and GSR-NCR suffer from artifacts in the water surface area. RCoS and JASR produce better results, but the edge of the tripod is slightly blurry. In Figure 5, the other methods produce undesirable traces in the blank area, while the result of the proposed method is not only clean in the blank area but also has relatively sharp leaf edges. In Figure 6, MH-BCS, RCoS, and ALSB produce unexpected noise in the white area around the eyes, and the pattern around the eyes is clearest in the result of the proposed method. In terms of visual quality, the proposed method evidently outperforms the other methods.
We also compare HNLSR-CS with three representative deep learning methods: ReconNet [39], ISTA-Net+ [40], and DR2-Net [42]. We use the pretrained models for testing, and the PSNR and FSIM results are reported in Table 3 and Table 4, with the best results highlighted in bold. The proposed method obtains the best results in most cases.
Some visual comparisons are shown in Figure 8 and Figure 9. In Figure 8, ReconNet, ISTA-Net+, and DR2-Net all suffer from block effects, while the proposed method recovers the best details. In Figure 9, ReconNet and DR2-Net still show some block artifacts; ISTA-Net+ has the best PSNR but produces some undesirable artifacts, resulting in a worse FSIM than ours. These results further confirm the superiority of the proposed method.

4.3. Effect of the Number of Similar Patches

In this section, we discuss how the number of similar patches affects the performance of the method. With other variables fixed, we vary the number of similar patches from 30 to 90 at intervals of 10. The comparisons are shown in Figure 10. All three curves are relatively stable, which means that the performance is not sensitive to the number of similar patches. Considering both the performance and the complexity of the method, we set the number of similar patches to 60.

4.4. Convergence

As Equation (12) is non-convex, it is difficult to give a theoretical proof of the convergence of the proposed method, so we show its stability empirically. Figure 11 shows the curves of PSNR versus iteration number for four images at sampling rates of 0.2 and 0.3, respectively. As the iteration number increases, the PSNR changes drastically at the beginning and then gradually becomes stable. This illustrates the good convergence behavior of the proposed method.

5. Conclusions and Future Work

This paper proposes a Hybrid Nonlocal Sparsity Regularization (HNLSR) method for image compressive sensing. Different from existing methods, the proposed HNLSR does not rely on the local sparsity of images but uses two dictionaries to explore nonlocal self-similarity. The 2D dictionary is self-generated, and the 3D dictionary is fixed, combining the adaptability and versatility of the two kinds of dictionaries. An effective framework based on SBI is presented to solve the optimization problem, and the convergence and stability of the proposed method are demonstrated empirically. Experimental results show that, compared with methods based on combined local and nonlocal regularization or on a single nonlocal regularization, the proposed method performs better than most existing image compressive sensing methods in both objective quality assessment and visual quality.
As multiple dictionaries can improve performance, several research directions are worth pursuing, for example, learning different dictionaries for different areas of an image (e.g., smooth and textured areas), or learning multi-scale dictionaries and selecting among them adaptively. Our future work includes extending the proposed method to other image processing tasks (e.g., denoising, deblocking, and deblurring) and to high-dimensional data (e.g., videos and multispectral images). For high-dimensional or multi-frame data, how to collect similar patches (intra- or inter-frame) is also a problem to be solved.

Author Contributions

Conceptualization, L.L.; methodology, L.L.; software, L.L.; validation, L.L. and S.X.; formal analysis, L.L. and Y.Z.; investigation, L.L.; resources, S.X.; data curation, L.L. and Y.Z.; writing—original draft preparation, L.L.; writing—review and editing, S.X. and Y.Z.; visualization, L.L.; supervision, S.X.; project administration, L.L.; funding acquisition, S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC No. 61372069), National Defense Pre-research Foundation, the SRF for ROCS, SEM (JY0600090102), the “111” project of China (No. B08038), and the Fundamental Research Funds for the Central Universities.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive comments, which greatly improved the quality of the paper. We also thank the authors of [27,29,33,37,39,40,42,45,47] for providing their code.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  3. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
  4. Cevher, V.; Sankaranarayanan, A.; Duarte, M.F.; Reddy, D.; Baraniuk, R.G.; Chellappa, R. Compressive sensing for background subtraction. In Proceedings of the European Conference on Computer Vision (ECCV), Marseille, France, 12–18 October 2008; pp. 155–168.
  5. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82.
  6. Alonso, M.T.; López-Dekker, P.; Mallorquí, J.J. A novel strategy for radar imaging based on compressive sensing. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4285–4295.
  7. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  8. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  9. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
  10. Mumford, D.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685.
  11. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted L1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  12. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272.
  13. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
  14. He, L.; Carin, L. Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans. Signal Process. 2009, 57, 3488–3497.
  15. Kim, Y.; Nadar, M.S.; Bilgin, A. Compressed sensing using a Gaussian scale mixtures model in wavelet domain. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 12–15 September 2010; pp. 3365–3368.
  16. He, L.; Chen, H.; Carin, L. Tree-structured compressive sensing with variational Bayesian analysis. IEEE Signal Process. Lett. 2009, 17, 233–236.
  17. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
  18. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  19. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 210–227.
  20. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
  21. Dong, W.; Zhang, L.; Shi, G.; Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 2011, 20, 1838–1857.
  22. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 60–65.
  23. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  24. Egiazarian, K.; Foi, A.; Katkovnik, V. Compressed sensing image reconstruction via recursive spatially adaptive filtering. In Proceedings of the IEEE International Conference on Image Processing (ICIP), San Antonio, TX, USA, 16–19 September 2007; pp. I-549–I-552.
  25. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2012, 22, 1620–1630.
  26. Zhang, J.; Zhao, D.; Jiang, F.; Gao, W. Structural group sparse representation for image compressive sensing recovery. In Proceedings of the IEEE Data Compression Conference (DCC), Snowbird, UT, USA, 20–22 March 2013; pp. 331–340.
  27. Zhang, J.; Zhao, C.; Zhao, D.; Gao, W. Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 2014, 103, 114–126.
  28. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632.
  29. Zha, Z.; Zhang, X.; Wang, Q.; Tang, L.; Liu, X. Group-based sparse representation for image compressive sensing reconstruction with non-convex regularization. Neurocomputing 2018, 296, 55–63.
  30. Gao, Z.; Ding, L.; Xiong, Q.; Gong, Z.; Xiong, C. Image compressive sensing reconstruction based on z-score standardized group sparse representation. IEEE Access 2019, 7, 90640–90651.
  31. Keshavarzian, R.; Aghagolzadeh, A.; Rezaii, T. LLp norm regularization based group sparse representation for image compressed sensing recovery. Signal Process. Image Commun. 2019, 78, 477–493.
  32. Li, L.; Xiao, S.; Zhao, Y. Joint group and residual sparse coding for image compressive sensing. Neurocomputing 2020, 405, 72–84.
  33. Zhang, J.; Zhao, D.; Zhao, C.; Xiong, R.; Ma, S.; Gao, W. Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerg. Sel. Top. Circuits Syst. 2012, 2, 380–391.
  34. Dong, W.; Shi, G.; Li, X.; Zhang, L.; Wu, X. Image reconstruction with locally adaptive sparsity and nonlocal robust regularization. Signal Process. Image Commun. 2012, 27, 1109–1122.
  35. Dong, W.; Yang, X.; Shi, G. Compressive sensing via reweighted TV and nonlocal sparsity regularisation. Electron. Lett. 2013, 49, 184–186.
  36. Zhang, J.; Liu, S.; Xiong, R.; Ma, S.; Zhao, D. Improved total variation based image compressive sensing recovery by nonlocal regularization. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2823–2839.
  37. Eslahi, N.; Aghagolzadeh, A. Compressive sensing image restoration using adaptive curvelet thresholding and nonlocal sparse regularization. IEEE Trans. Image Process. 2016, 25, 3126–3140.
  38. Zhou, Y.; Guo, H. Collaborative block compressed sensing reconstruction with dual-domain sparse representation. Inf. Sci. 2019, 472, 77–93.
  39. Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. ReconNet: Non-iterative reconstruction of images from compressively sensed measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 449–458.
  40. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1828–1837.
  41. Zhang, X.; Yuan, X.; Carin, L. Nonlocal low-rank tensor factor analysis for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 8232–8241.
  42. Yao, H.; Dai, F.; Zhang, S.; Zhang, Y.; Tian, Q.; Xu, C. DR2-Net: Deep Residual Reconstruction Network for image compressive sensing. Neurocomputing 2019, 359, 483–493.
  43. Yang, Y.; Sun, J.; Li, H.; Xu, Z. ADMM-CSNet: A deep learning approach for image compressive sensing. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 521–538.
  44. Zhang, J.; Zhao, C.; Gao, W. Optimization-inspired compact deep compressive sensing. IEEE J. Sel. Top. Signal Process. 2020, 14, 765–774.
  45. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351.
  46. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
  47. Chen, C.; Tramel, E.W.; Fowler, J.E. Compressed-sensing recovery of images and video using multihypothesis predictions. In Proceedings of the Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 1193–1198.
Figure 1. Architecture of the single-pixel camera [3].
Figure 2. Flowchart of the proposed HNLSR-CS.
Figure 3. Eight test images. (a) Boats. (b) Cameraman. (c) Fingerprint. (d) Leaves. (e) Lena. (f) Monarch. (g) Parrots. (h) Peppers.
Figure 4. Reconstruction of Cameraman with sampling rate = 0.1. (a) Original image; (b) MH (PSNR = 22.13 dB, FSIM = 0.7692); (c) RCoS (PSNR = 22.97 dB, FSIM = 0.7942); (d) ALSB (PSNR = 22.97 dB, FSIM = 0.8021); (e) GSR (PSNR = 22.89 dB, FSIM = 0.8154); (f) JASR (PSNR = 23.54 dB, FSIM = 0.8139); (g) GSR-NCR (PSNR = 22.50 dB, FSIM = 0.8012); (h) Proposed HNLSR (PSNR = 24.67 dB, FSIM = 0.8408).
Figure 5. Reconstruction of Leaves with sampling rate = 0.1. (a) Original image; (b) MH (PSNR = 20.89 dB, FSIM = 0.7634); (c) RCoS (PSNR = 22.38 dB, FSIM = 0.8632); (d) ALSB (PSNR = 21.32 dB, FSIM = 0.7916); (e) GSR (PSNR = 23.22 dB, FSIM = 0.8755); (f) JASR (PSNR = 23.62 dB, FSIM = 0.8799); (g) GSR-NCR (PSNR = 22.26 dB, FSIM = 0.8408); (h) Proposed HNLSR (PSNR = 24.54 dB, FSIM = 0.8984).
Figure 6. Reconstruction of Parrots with sampling rate = 0.2. (a) Original image; (b) MH (PSNR = 29.23 dB, FSIM = 0.9405); (c) RCoS (PSNR = 28.61 dB, FSIM = 0.9311); (d) ALSB (PSNR = 29.73 dB, FSIM = 0.9460); (e) GSR (PSNR = 31.17 dB, FSIM = 0.9524); (f) JASR (PSNR = 31.09 dB, FSIM = 0.9478); (g) GSR-NCR (PSNR = 30.18 dB, FSIM = 0.9435); (h) Proposed HNLSR (PSNR = 31.41 dB, FSIM = 0.9526).
Figure 7. Reconstruction of Lena with sampling rate = 0.3. (a) Original image; (b) MH (PSNR = 31.99 dB, FSIM = 0.9538); (c) RCoS (PSNR = 32.41 dB, FSIM = 0.9555); (d) ALSB (PSNR = 33.30 dB, FSIM = 0.9650); (e) GSR (PSNR = 34.17 dB, FSIM = 0.9716); (f) JASR (PSNR = 34.05 dB, FSIM = 0.9677); (g) GSR-NCR (PSNR = 33.94 dB, FSIM = 0.9715); (h) Proposed HNLSR (PSNR = 34.27 dB, FSIM = 0.9716).
Figure 8. Reconstruction of Monarch with sampling rate = 0.1. (a) Original image; (b) ReconNet (PSNR = 22.14 dB, FSIM = 0.7840); (c) ISTA-Net+ (PSNR = 27.23 dB, FSIM = 0.8862); (d) DR2-Net (PSNR = 23.73 dB, FSIM = 0.8282); (e) Proposed HNLSR (PSNR = 27.91 dB, FSIM = 0.8962).
Figure 9. Reconstruction of Monarch with sampling rate = 0.1. (a) Original image; (b) ReconNet (PSNR = 21.11 dB, FSIM = 0.7406); (c) ISTA-Net+ (PSNR = 26.58 dB, FSIM = 0.8816); (d) DR2-Net (PSNR = 23.10 dB, FSIM = 0.8184); (e) Proposed HNLSR (PSNR = 26.26 dB, FSIM = 0.8907).
Figure 10. Performance comparison with different numbers of similar patches for three test images at a sampling rate of 0.2.
Figure 11. Evolution of PSNR versus iteration number for four test images. (a) Sampling rate = 0.2; (b) sampling rate = 0.3.
Table 1. PSNR (dB) comparison of six representative methods and the proposed method.

Rate | Method          | Boats | C.man | F.print | Leaves | Lena  | Monarch | Parrots | Peppers | Average
0.1  | MH-BCS          | 26.11 | 22.13 | 20.08   | 20.89  | 26.13 | 23.19   | 25.34   | 25.00   | 23.61
     | RCoS            | 27.85 | 22.97 | 16.30   | 22.38  | 27.53 | 25.56   | 25.60   | 27.41   | 24.45
     | ALSB            | 28.12 | 22.97 | 20.68   | 21.32  | 27.04 | 24.34   | 26.03   | 26.67   | 24.65
     | GSR             | 28.30 | 22.89 | 20.27   | 23.22  | 27.56 | 25.29   | 26.37   | 26.91   | 25.10
     | JASR            | 28.59 | 23.54 | 21.04   | 23.62  | 27.90 | 25.83   | 26.76   | 27.60   | 25.61
     | GSR-NCR         | 27.96 | 22.50 | 20.50   | 22.26  | 27.02 | 24.67   | 26.03   | 26.37   | 24.66
     | Proposed HNLSR  | 28.77 | 24.67 | 21.12   | 24.54  | 28.04 | 26.26   | 27.22   | 27.91   | 26.07
0.2  | MH-BCS          | 29.91 | 25.88 | 23.17   | 25.14  | 29.81 | 27.10   | 29.23   | 28.45   | 27.34
     | RCoS            | 31.42 | 25.68 | 19.64   | 27.22  | 30.36 | 29.60   | 28.61   | 30.87   | 27.93
     | ALSB            | 33.27 | 26.65 | 23.64   | 26.97  | 30.73 | 28.30   | 29.73   | 29.87   | 28.65
     | GSR             | 33.69 | 27.17 | 23.85   | 30.54  | 31.36 | 30.78   | 31.17   | 30.83   | 29.92
     | JASR            | 32.70 | 27.75 | 23.98   | 30.24  | 31.19 | 30.60   | 31.09   | 31.06   | 29.83
     | GSR-NCR         | 33.30 | 26.30 | 23.67   | 29.03  | 30.87 | 29.46   | 30.18   | 30.46   | 29.16
     | Proposed HNLSR  | 33.89 | 28.34 | 24.03   | 30.97  | 31.57 | 31.17   | 31.41   | 31.19   | 30.32
0.3  | MH-BCS          | 32.25 | 28.08 | 24.73   | 27.63  | 31.99 | 27.10   | 31.01   | 30.30   | 29.14
     | RCoS            | 34.32 | 27.98 | 22.74   | 30.92  | 32.41 | 32.53   | 30.53   | 32.65   | 30.51
     | ALSB            | 36.59 | 29.01 | 25.81   | 31.01  | 33.30 | 31.41   | 31.98   | 32.13   | 31.41
     | GSR             | 36.91 | 29.62 | 26.20   | 34.46  | 34.17 | 34.25   | 33.81   | 33.02   | 32.81
     | JASR            | 36.08 | 29.93 | 26.21   | 33.70  | 34.05 | 33.63   | 33.10   | 33.09   | 32.47
     | GSR-NCR         | 37.27 | 29.37 | 26.35   | 34.95  | 33.94 | 34.68   | 33.07   | 32.86   | 32.81
     | Proposed HNLSR  | 36.94 | 30.01 | 26.27   | 34.54  | 34.27 | 34.27   | 33.93   | 33.18   | 32.93
Table 2. FSIM comparison of six representative methods and the proposed method.

Rate | Method          | Boats  | C.man  | F.print | Leaves | Lena   | Monarch | Parrots | Peppers | Average
0.1  | MH-BCS          | 0.8489 | 0.7692 | 0.8512  | 0.7634 | 0.8913 | 0.7912  | 0.8981  | 0.8489  | 0.8328
     | RCoS            | 0.8765 | 0.7942 | 0.6027  | 0.8632 | 0.8863 | 0.8757  | 0.8919  | 0.8794  | 0.8337
     | ALSB            | 0.8934 | 0.8021 | 0.8682  | 0.7916 | 0.8965 | 0.8251  | 0.9105  | 0.8735  | 0.8576
     | GSR             | 0.9027 | 0.8154 | 0.8691  | 0.8755 | 0.9147 | 0.8673  | 0.9229  | 0.8859  | 0.8817
     | JASR            | 0.9035 | 0.8139 | 0.8722  | 0.8799 | 0.9107 | 0.8822  | 0.9176  | 0.8918  | 0.8840
     | GSR-NCR         | 0.8980 | 0.8012 | 0.8688  | 0.8408 | 0.9106 | 0.8318  | 0.9190  | 0.8733  | 0.8679
     | Proposed HNLSR  | 0.9042 | 0.8408 | 0.8622  | 0.8984 | 0.9092 | 0.8907  | 0.9204  | 0.8962  | 0.8903
0.2  | MH-BCS          | 0.9159 | 0.8552 | 0.9103  | 0.8577 | 0.9348 | 0.8751  | 0.9405  | 0.9036  | 0.8991
     | RCoS            | 0.9348 | 0.8645 | 0.7923  | 0.9307 | 0.9331 | 0.9314  | 0.9311  | 0.9281  | 0.9058
     | ALSB            | 0.9522 | 0.8759 | 0.9208  | 0.9069 | 0.9440 | 0.8907  | 0.9460  | 0.9228  | 0.9199
     | GSR             | 0.9581 | 0.8946 | 0.9254  | 0.9559 | 0.9537 | 0.9411  | 0.9524  | 0.9332  | 0.9393
     | JASR            | 0.9458 | 0.8961 | 0.9256  | 0.9516 | 0.9434 | 0.9409  | 0.9478  | 0.9342  | 0.9342
     | GSR-NCR         | 0.9526 | 0.8797 | 0.9225  | 0.9430 | 0.9470 | 0.9216  | 0.9435  | 0.9268  | 0.9296
     | Proposed HNLSR  | 0.9589 | 0.9096 | 0.9271  | 0.9586 | 0.9545 | 0.9454  | 0.9526  | 0.9364  | 0.9429
0.3  | MH-BCS          | 0.9439 | 0.8938 | 0.9331  | 0.8961 | 0.9538 | 0.8990  | 0.9563  | 0.9269  | 0.9254
     | RCoS            | 0.9615 | 0.9089 | 0.8937  | 0.9579 | 0.9555 | 0.9555  | 0.9501  | 0.9472  | 0.9413
     | ALSB            | 0.9748 | 0.9190 | 0.9471  | 0.9508 | 0.9650 | 0.9303  | 0.9620  | 0.9455  | 0.9493
     | GSR             | 0.9770 | 0.9325 | 0.9520  | 0.9765 | 0.9716 | 0.9636  | 0.9668  | 0.9513  | 0.9614
     | JASR            | 0.9723 | 0.9311 | 0.9510  | 0.9719 | 0.9677 | 0.9610  | 0.9623  | 0.9505  | 0.9585
     | GSR-NCR         | 0.9783 | 0.9305 | 0.9534  | 0.9799 | 0.9715 | 0.9668  | 0.9660  | 0.9501  | 0.9621
     | Proposed HNLSR  | 0.9772 | 0.9366 | 0.9523  | 0.9769 | 0.9716 | 0.9639  | 0.9670  | 0.9525  | 0.9623
Table 3. PSNR (dB) comparison of deep learning methods and the proposed method.

Rate | Method          | Boats | C.man | F.print | Leaves | Lena  | Monarch | Parrots | Peppers | Average
0.04 | ReconNet        | 21.36 | 19.26 | 14.67   | 15.40  | 21.28 | 18.19   | 20.27   | 19.56   | 18.75
     | ISTA-Net+       | 22.23 | 20.45 | 14.99   | 16.38  | 22.64 | 19.54   | 21.97   | 21.47   | 19.96
     | DR2-Net         | 22.11 | 19.84 | 15.04   | 16.29  | 22.13 | 18.93   | 21.16   | 20.31   | 19.48
     | Proposed HNLSR  | 23.22 | 21.09 | 14.90   | 18.08  | 24.52 | 20.49   | 23.46   | 23.49   | 21.16
0.1  | ReconNet        | 24.15 | 21.28 | 15.84   | 18.35  | 23.83 | 21.11   | 22.63   | 22.14   | 21.17
     | ISTA-Net+       | 27.44 | 23.66 | 17.47   | 23.44  | 27.65 | 26.58   | 26.58   | 27.23   | 25.01
     | DR2-Net         | 25.58 | 22.46 | 17.21   | 20.26  | 25.39 | 23.10   | 23.94   | 23.73   | 22.71
     | Proposed HNLSR  | 28.77 | 24.67 | 21.12   | 24.54  | 28.04 | 26.26   | 27.22   | 27.91   | 26.07
0.25 | ReconNet        | 27.30 | 23.15 | 19.10   | 21.91  | 26.54 | 24.32   | 25.59   | 24.77   | 24.09
     | ISTA-Net+       | 33.71 | 29.19 | 23.47   | 31.96  | 32.70 | 33.41   | 31.99   | 32.70   | 31.14
     | DR2-Net         | 30.09 | 25.62 | 21.63   | 25.65  | 29.42 | 27.95   | 28.73   | 28.49   | 27.20
     | Proposed HNLSR  | 35.44 | 29.34 | 25.14   | 33.25  | 33.08 | 33.36   | 32.79   | 32.41   | 31.85
Table 4. FSIM comparison of deep learning methods and the proposed method.

Rate | Method          | Boats  | C.man  | F.print | Leaves | Lena   | Monarch | Parrots | Peppers | Average
0.04 | ReconNet        | 0.7310 | 0.6954 | 0.5873  | 0.6122 | 0.7641 | 0.6833  | 0.7835  | 0.7327  | 0.6987
     | ISTA-Net+       | 0.7616 | 0.7300 | 0.5781  | 0.6876 | 0.8003 | 0.7403  | 0.8235  | 0.7806  | 0.7378
     | DR2-Net         | 0.7574 | 0.7134 | 0.6013  | 0.6770 | 0.7869 | 0.7217  | 0.7991  | 0.7587  | 0.7269
     | Proposed HNLSR  | 0.7805 | 0.7501 | 0.5717  | 0.7617 | 0.8389 | 0.7734  | 0.8732  | 0.8136  | 0.7704
0.1  | ReconNet        | 0.7910 | 0.7440 | 0.6714  | 0.6835 | 0.8137 | 0.7406  | 0.8285  | 0.7840  | 0.7571
     | ISTA-Net+       | 0.8756 | 0.8289 | 0.7007  | 0.8760 | 0.8967 | 0.8816  | 0.9062  | 0.8862  | 0.8565
     | DR2-Net         | 0.8415 | 0.7896 | 0.7305  | 0.7948 | 0.8488 | 0.8184  | 0.8605  | 0.8282  | 0.8140
     | Proposed HNLSR  | 0.9042 | 0.8408 | 0.8622  | 0.8984 | 0.9092 | 0.8907  | 0.9204  | 0.8962  | 0.8903
0.25 | ReconNet        | 0.8730 | 0.8030 | 0.8166  | 0.7765 | 0.8765 | 0.8152  | 0.8801  | 0.8460  | 0.8359
     | ISTA-Net+       | 0.9575 | 0.9205 | 0.9111  | 0.9623 | 0.9583 | 0.9607  | 0.9560  | 0.9491  | 0.9469
     | DR2-Net         | 0.9198 | 0.8575 | 0.8793  | 0.8902 | 0.9200 | 0.8989  | 0.9204  | 0.9034  | 0.8987
     | Proposed HNLSR  | 0.9699 | 0.9244 | 0.9427  | 0.9715 | 0.9646 | 0.9605  | 0.9602  | 0.9469  | 0.9551
