
Adaptive Algorithm on Block-Compressive Sensing and Noisy Data Estimation

1 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 School of Electronic and Information, Suzhou University of Science and Technology, Suzhou 215009, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(7), 753; https://doi.org/10.3390/electronics8070753
Submission received: 14 May 2019 / Revised: 28 June 2019 / Accepted: 29 June 2019 / Published: 3 July 2019

Abstract

In this paper, an altered adaptive algorithm for block-compressive sensing (BCS) is developed using saliency and error analysis. We observe that the performance of BCS can be improved by rational block partitioning and an uneven sampling ratio, as well as by adopting error analysis in the reconstruction process. The weighted mean information entropy is adopted as the basis for the partitioning of BCS, which results in a flexible block group. Furthermore, a synthetic feature (SF) based on local saliency and variance is introduced for step-less adaptive sampling, which works well in distinguishing and sampling smooth blocks and detail blocks. An error analysis method is used to estimate the optimal number of iterations in sparse reconstruction. Based on the above points, an altered adaptive block-compressive sensing algorithm with flexible partitioning and error analysis is proposed in this article. On the one hand, it provides a feasible solution for the partitioning and sampling of an image; on the other hand, it changes the iteration stop condition of reconstruction and thereby improves the quality of the reconstructed image. The experimental results verify the effectiveness of the proposed algorithm and show a clear improvement in the Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Gradient Magnitude Similarity Deviation (GMSD), and Block Effect Index (BEI) indexes.

1. Introduction

The traditional Nyquist sampling theorem states that the sampling frequency of a signal must be more than twice its highest frequency to ensure that the original signal can be completely reconstructed from the sampled values. Compressive sensing (CS) theory breaks through this limitation in signal acquisition and can reconstruct a high-dimensional sparse or compressible signal from lower-dimensional measurements [1]. As an alternative to the Nyquist sampling theorem, CS theory is being widely studied, especially in image processing. Research on CS theory mainly focuses on several important aspects, such as sparse representation, measurement matrix construction, and the reconstruction algorithm [2,3]. The main research hotspot of sparse representation is how to construct a sparse dictionary of an orthogonal system and an over-complete dictionary for suboptimal approximation [4,5]. The construction of the measurement matrix mainly includes the universal random measurement matrix and the improved deterministic measurement matrix [6]. Research on the reconstruction algorithm mainly focuses on the suboptimal solution problem and training algorithms based on self-learning [7,8]. With the advancement of research and application of CS theory, especially in 2D or 3D image processing, CS technology faces several challenges, including the dimensionality disaster in computation and the storage problem that grows with the geometric scale of images. To address these challenges, researchers have proposed many fast compressive sensing algorithms to reduce the computational cost, and the block-compressive sensing (BCS) algorithm to solve the storage problem [9,10,11,12]. This article builds on the analysis of these two points.
The CS recovery algorithms for images can mainly be divided into convex optimization algorithms, non-convex algorithms, and hybrid algorithms. The convex optimization algorithms include basis pursuit (BP), greedy basis pursuit (GBP), iterative hard threshold (IHT), etc. Non-convex algorithms include orthogonal matching pursuit (OMP), subspace pursuit (SP), iteratively reweighted least squares (IRLS), etc. The hybrid algorithms include sparse Fourier description (SF), chain pursuit (CP), heavy hitters on steroids pursuit (HHSP), and other mixed algorithms [13,14,15]. The convex optimization algorithms based on $l_1$ minimization achieve good reconstruction quality, but with large computational complexity and high time complexity. Compared with convex optimization algorithms, non-convex algorithms such as greedy pursuit, which are based on $l_0$ minimization, run quickly with slightly lower accuracy and can still meet the general requirements of practical applications. In addition, the iterative threshold method has been widely used in both classes with excellent performance. However, the iterative threshold method is sensitive to the selection of the threshold and the initial value of the iteration, which affects the efficiency and accuracy of the algorithm [16,17]. The thresholds in this process are often selected using simple error values (absolute or relative) or a fixed number of iterations as the stopping criterion, which does not guarantee an optimal algorithm [18,19].
The focus of this paper is on three aspects: block partitioning under weighted information entropy, adaptive sampling based on synthetic features, and iterative reconstruction through error analysis. The mean information entropy (MIE) and texture saliency (TS) are introduced in block partitioning to provide a basis for improving the algorithm. The adaptive sampling part mainly improves overall image quality by designing the block sampling rate by means of variance and local saliency (LS). The iterative reconstruction part mainly uses the relationship among three errors to determine the number of iterations required for the best reconstructed image under different noise backgrounds. Based on the above points, this paper proposes an altered adaptive block-compressive sensing algorithm with flexible partitioning and error analysis, called FE-ABCS.
The remainder of this paper is organized as follows. Section 2 reviews the preliminaries of BCS. Section 3 presents the problem formulation and important factors. The structure of the proposed FE-ABCS algorithm is described in Section 4. Section 5 reports the experiments and analyzes the results to show the benefit of FE-ABCS. The paper concludes with Section 6.

2. Preliminaries

2.1. Compressive Sensing

The theory of compressive sensing derives from the sparsity of natural signals, which can be sparsely represented under a certain sparse transform basis, enabling direct sampling of sparse signals (sampling and compressing simultaneously). Let $s$ be the sparse representation of an original digital signal $x$, obtained by the transformation of a sparse basis $\Psi$ with K sparse coefficients, and let $x$ be observed by a measurement matrix $\Phi$; then the observed signal $y$ can be expressed as:
$$y = \Phi x = \Phi\Psi s = \Omega s$$

where $x \in \mathbb{R}^N$, $s \in \mathbb{R}^N$, and $y \in \mathbb{R}^M$. Consequently, $\Omega \in \mathbb{R}^{M\times N}$ is the product of the matrices $\Phi \in \mathbb{R}^{M\times N}$ and $\Psi \in \mathbb{R}^{N\times N}$, named the sensing matrix, and the value of M is much less than N according to compressive sensing theory.
The reconstruction process is an NP-hard problem that restores the N-dimensional original signal from the M-dimensional measurement through nonlinear projection and cannot be solved directly. Candès et al. pointed out that the number M must satisfy $M = O(K\log(N))$ in order to reconstruct the N-dimensional signal $x$ accurately, and that the sensing matrix $\Omega$ must satisfy the Restricted Isometry Property (RIP) [20]. Furthermore, earlier theory proved that the original signal $x$ can be accurately reconstructed from the measured value $y$ by solving the $l_0$-norm optimization problem:
$$\hat{x} = \Psi\hat{s}, \qquad \hat{s} = \arg\min \|s\|_0 \quad \text{s.t.} \quad y = \Phi x = \Phi\Psi s$$

In the above formula, $\|\cdot\|_0$ is the $l_0$ norm of a vector, which represents the number of non-zero elements in the vector.
With the wide application of CS technology, especially to 2D/3D image signals, a computational dimensionality problem inevitably arises (the amount of calculation grows with the square/cube of the dimensions), and CS technology itself does not directly overcome it. It is therefore necessary to introduce block partitioning and parallel processing, i.e., the BCS algorithm, to improve the universality of the approach.

2.2. Block-Compressive Sensing (BCS)

The traditional method of BCS used in image signal processing is to segment the image and process the sub-images in parallel, reducing the cost of storage and calculation. Suppose the original image ($I_W \times I_H$) has $N = W \times H$ pixels in total, the observation is M-dimensional, and the total sampling rate is defined as TSR = M/N. In normal BCS processing, the image is divided into small blocks of size $B \times B$, each of which is sampled with the same operator. Let $x_i$ represent the vectorized signal of the i-th block obtained by raster scanning; the output vector $y_i$ of the BCS measurement can then be written as:
$$y_i = \Phi_B x_i$$

where $\Phi_B$ is an $m \times n$ matrix with $n = B^2$ and $m = n \cdot TSR$. The matrix $\Phi_B$ is usually taken as an orthonormalized i.i.d. Gaussian matrix. For the whole image, the equivalent sampling operator $\Phi$ in (1) is thus a block-diagonal matrix of the following form:

$$\Phi = \begin{bmatrix} \Phi_B & & \\ & \ddots & \\ & & \Phi_B \end{bmatrix}$$
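To make the block-wise measurement concrete, the following Python sketch (our illustrative code, not the authors' implementation; a plain i.i.d. Gaussian operator stands in for the orthonormalized one) samples each $B \times B$ block of a grayscale image with the same operator:

```python
import numpy as np

def bcs_measure(image, B=16, tsr=0.5, seed=0):
    """Block-compressive sensing measurement: every B x B block is
    sampled with the same m x n operator (n = B^2, m = n * TSR)."""
    H, W = image.shape
    n = B * B
    m = int(round(n * tsr))
    rng = np.random.default_rng(seed)
    # i.i.d. Gaussian measurement operator shared by all blocks
    # (the paper additionally orthonormalizes it)
    phi_B = rng.standard_normal((m, n)) / np.sqrt(m)
    measurements = []
    for r in range(0, H, B):
        for c in range(0, W, B):
            x_i = image[r:r+B, c:c+B].reshape(-1)   # raster-scan the block
            measurements.append(phi_B @ x_i)        # y_i = Phi_B x_i
    return np.array(measurements), phi_B

# A 256 x 256 image yields (256/16)^2 = 256 blocks, each compressed
# from 256 pixels to 128 measurements at TSR = 0.5.
```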

2.3. Problems of BCS

The BCS algorithm described above, which divides the image into multiple sub-images to solve the storage problem, reduces the scale of the measurement matrix on the one hand and facilitates parallel processing of the sub-images on the other. However, BCS still has the following problems that need to be investigated and solved:
  • Most existing BCS studies do not perform a useful analysis of image partitioning and then segment according to the analysis result [21,22]. The common partitioning method (n = B × B) of BCS only considers reducing the computational complexity and the storage space, without considering the integrity of the algorithm and other potential effects, such as providing a better foundation for subsequent sampling and reconstruction by combining the structural features and the information entropy of the image.
  • The basic sampling method used in BCS is to sample each sub-block uniformly at the total sampling rate (TSR), while adaptive sampling methods select different sampling rates according to a sampling feature of each sub-block [23]. Specifically, a detail block is allocated a larger sampling rate and a smooth block a smaller one, thereby improving the overall quality of the reconstructed image at the same TSR. The crux is that studies of the criteria (features) used to assign adaptive sampling rates are rarely seen in recent articles.
  • Although there are many studies on improving BCS iterative reconstruction algorithms [24], few articles focus on optimizing performance from the aspect of the iteration stop criterion in the image reconstruction process, especially under a noisy background.
In addition, improvements to BCS also include blockiness elimination and engineering implementation of the algorithm. Finally, although BCS technology still has some open issues, its advantages have led to wide application in optical/remote-sensing imaging, medical imaging, wireless sensor networks, and so on [25].

3. Problem Formulation and Important Factors

3.1. Flexible Partitioning by Mean Information Entropy (MIE) and Texture Structure (TS)

Reasonable block partitioning reduces the information entropy (IE) of each sub-block, improving the performance of the BCS algorithm at the same total sampling rate (TSR) and ultimately the quality of the entire reconstructed image. In this paper, we adopt flexible partitioning with image sub-block shape $n = row \times column = l \times h$, instead of the primary shape $n = B \times B$, to remove the blindness of image partitioning with the help of the texture structure (TS) and the mean information entropy (MIE). The expression of TS is based on the gray-tone spatial-dependence matrices and the angular second moment (ASM) [26,27]. The value of TS is defined as follows using the ASM:
$$g_{TS} = \sum_{i=0}^{255}\sum_{j=0}^{255}\{p(i,j,d,a)\}^2, \qquad p(i,j,d,a) = P(i,j,d,a)/R$$

where $P(i,j,d,a)$ is the (i,j)-th entry of a gray-tone spatial-dependence matrix, $p(i,j,d,a)$ is its normalized form, $(i,j,d,a)$ denotes a neighboring pixel pair with distance $d$, orientation $a$, and gray values $(i,j)$ in the image, and $R$ denotes the number of neighboring resolution cell pairs. The MIE of the whole image is defined as:

$$g_{MIE} = \frac{1}{N/n}\sum_{i=1}^{N/n}\left(-\sum_{j=0}^{255} e_{i,j}\log_2 e_{i,j}\right) = \frac{1}{T_1}\sum_{i=1}^{T_1}\left(-\sum_{j=0}^{255} e_{i,j}\log_2 e_{i,j}\right)$$

where $e_{i,j}$ is the proportion of pixels with gray value $j$ in the i-th sub-image, and $T_1$ is the number of sub-images.
Suppose the flexible partitioning of BCS is reasonable: increasing the similarity between pixels within each sub-block and reducing the MIE over all sub-blocks will inevitably reduce the difficulty of image sampling and recovery, which means the flexible partitioning itself is a process of reducing order and rank. Figure 1 shows the effect of different partitioning methods on the MIE of four 256 × 256-pixel test grayscale images with 256 gray levels, when the number of pixels per sub-image is limited to 256. The abscissa represents the different 2-base partitioning modes, and the ordinate represents the MIE of the whole image under each mode. Figure 1 indicates that images with different structures reach their minimum MIE at different partitioning points, which is used in flexible partitioning as the basis for block segmentation.
Furthermore, MIE-guided partitioning only considers the pixel level of the image, i.e., the gray-scale distribution, without considering spatial-level optimization, i.e., the texture structure. In fact, TS information is also very important for image restoration algorithms. Therefore, this paper combines $g_{MIE}$ with $g_{TS}$ to provide the basis for flexible partitioning, namely the weighted MIE (WM), defined as follows:
$$g_{FB} = c_{TS} \times g_{MIE} = f(g_{TS}) \times g_{MIE}$$

where $c_{TS}$ is the weighting coefficient and $f(\cdot)$ is the weighting coefficient function, whose value is related to the TS information $g_{TS}$.
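As an illustration of the partitioning search, the sketch below (assumed helper code, with an 8-bit image whose side lengths are divisible by every candidate shape) scores each 2-base block shape $l \times h$ with $l \cdot h = n$ by its mean information entropy and keeps the shape minimizing the weighted score; the TS weighting coefficient is left as a caller-supplied function:

```python
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of the gray-level histogram of one block."""
    hist = np.bincount(block.ravel().astype(np.int64), minlength=levels)
    p = hist[hist > 0] / block.size
    return -np.sum(p * np.log2(p))

def mean_information_entropy(image, l, h):
    """g_MIE: average entropy over all l x h sub-blocks of the image."""
    H, W = image.shape
    ents = [block_entropy(image[r:r+l, c:c+h])
            for r in range(0, H, l) for c in range(0, W, h)]
    return float(np.mean(ents))

def flexible_partition(image, n=256, c_ts=lambda l, h: 1.0):
    """Score every 2-base shape l x h with l*h = n by the weighted MIE
    g_FB = c_TS * g_MIE and return the minimizing shape."""
    b = int(np.log2(n))
    shapes = [(2**j, 2**(b - j)) for j in range(b + 1)]   # 1 x n ... n x 1
    scores = {s: c_ts(*s) * mean_information_entropy(image, *s) for s in shapes}
    return min(scores, key=scores.get)
```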

3.2. Adaptive Sampling with Variance and Local Salient Factor

The selection of the feature used to distinguish detail blocks from smooth blocks is very important in the process of adaptive sampling. Information entropy, variance, and local standard deviation are often used individually as such criteria. Each has shortcomings when used alone: information entropy only reflects the probability of the gray distribution, variance is only related to the degree of dispersion of the pixels, and local standard deviation only focuses on the spatial distribution of the pixels. Moreover, in the previous literature [28] the adaptive sampling rate is mostly set by segmented adaptive sampling rather than step-less adaptive sampling, which leads to discontinuity of the sampling rate and inadequate utilization of the distinguishing feature.
In order to overcome the shortcomings of individual features, this paper uses a synthetic feature to distinguish between smooth blocks and detail blocks. The synthetic feature for adaptive sampling is defined as:

$$J(x_i) = L(x_i)^{\lambda_1} \times D(x_i)^{\lambda_2}$$

where $D(x_i)$ and $L(x_i)$ denote the variance and the local salient factor of the i-th sub-image, and $\lambda_1$ and $\lambda_2$ are the corresponding weighting coefficients. The variance and local salient factor of a sub-block are expressed as:

$$D(x_i) = \frac{1}{n}\sum_{j=1}^{n}(x_{ij}-\mu_i)^2, \qquad L(x_i) = \frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{q}\frac{|x_{ijk}-x_{ij}|}{x_{ij}}$$

where $x_{ij}$ is the gray value of the j-th pixel in the i-th sub-image, $\mu_i$ is the gray mean of the i-th sub-block, $x_{ijk}$ is the gray value of the k-th pixel in the salient-operator domain around the center pixel $x_{ij}$, and $q$ is the number of pixels in that domain. The synthetic feature $J(x_i)$ not only reflects the degree of dispersion and relative difference of sub-image pixels, but also embodies the relationship between sensory amount and physical quantity in Weber's Law [29].
To avoid the disadvantages of segmented adaptive sampling, step-less adaptive sampling is adopted here, following [30,31,32]. The key point of step-less adaptive sampling is how to select a continuous sampling rate accurately based on the synthetic feature. This is achieved by setting a sampling rate factor ($\eta_{SR}$) based on the relationship between the sensory amount and the physical quantity in Weber's Law. The sampling rate factor ($\eta_{SR}$) and the step-less adaptive sampling rate ($c_{SR}$) are defined as follows:

$$\eta_{SR}(x_i) = \frac{\log_2 J(x_i)}{\frac{1}{T_1}\sum_{j=1}^{T_1}\log_2 J(x_j)}$$

$$c_{SR}(x_i) = \eta_{SR}(x_i) \times TSR$$

where TSR is the total sampling rate of the whole image.
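A minimal sketch of the synthetic feature and the step-less rate assignment (our illustrative code; a 3 × 3 salient-operator domain and small regularizing constants are assumptions):

```python
import numpy as np

def synthetic_feature(block, lam1=1.0, lam2=2.0, eps=1e-6):
    """J(x_i) = L(x_i)^lam1 * D(x_i)^lam2: variance D combined with the
    local salient factor L (Weber-style relative differences)."""
    x = block.astype(np.float64) + eps        # avoid division by zero
    D = x.var()                               # variance of the sub-block
    pad = np.pad(x, 1, mode='edge')
    L = np.zeros_like(x)
    for dr in (-1, 0, 1):                     # 3 x 3 salient-operator domain
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nbr = pad[1+dr:1+dr+x.shape[0], 1+dc:1+dc+x.shape[1]]
            L += np.abs(nbr - x) / x          # |x_ijk - x_ij| / x_ij
    L = L.mean()                              # average over the n pixels
    return (L ** lam1) * (D ** lam2)

def stepless_rates(blocks, tsr):
    """eta_SR(x_i) = log2 J(x_i) / mean_j(log2 J(x_j)); c_SR = eta_SR * TSR."""
    logJ = np.log2(np.array([synthetic_feature(b) for b in blocks]) + 1e-12)
    return (logJ / logJ.mean()) * tsr
```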

3.3. Error Estimation and Iterative Stop Criterion in Reconstruction Process

The goal of the reconstruction process is to provide a good representation of the original signal $x = [x_1, x_2, \ldots, x_N]^T$, $x \in \mathbb{R}^N$.
Given the noisy observed output ($\tilde{y}$) and a finite sparsity level (K), the performance of reconstruction is usually measured by the similarity or the error function between $x^*$ and $x$. In addition, the reconstruction method, whether a convex or non-convex optimization algorithm, needs to solve the NP-hard problem by linear programming (LP), wherein the number of correlation vectors is crucial. Therefore, error estimation and the selection of the number of correlation vectors are two important factors of reconstruction. Especially in some non-convex restoration algorithms, such as OMP, the number of selected correlation vectors is linearly related to the number of iterations ($v$) of the algorithm. These two points (error estimation and optimal iteration) are discussed below.

3.3.1. Reconstruction Error Estimation in Noisy Background

In the second section, Equation (1) described the relationship between the original signal and the noiseless observed signal, but actual observation always takes place against a noise background, so the observed signal in a noisy environment is given by:
$$\tilde{y} = \Phi x + w = \Phi\Psi s + w = \Omega s + w$$

where $\tilde{y}$ is the observed output in the noisy environment, and $w$ is additive white Gaussian noise (AWGN) with zero mean and standard deviation $\sigma_w$. The M-dimensional AWGN $w$ is independent of the signal $x$. We discuss the reconstruction error in two steps: the first step confirms the entire reconstruction model, and the second derives the relevant reconstruction error.
Since the original signal ($x$) itself is not sparse but is K-sparse under the sparse basis ($\Psi$), we have:
$$s = \Psi^{-1}x, \qquad s = [s_1, s_2, \ldots, s_k, \ldots, s_N]^T$$

where $[s_1, s_2, \ldots, s_N]^T$ is a vector of length N with only K non-zero elements, i.e., the remaining N−K elements are zero or much smaller than any of the K non-zero elements. Assuming without loss of generality that the first K elements of the sparse representation $s$ are the non-zero ones, we can write:

$$s = \begin{bmatrix} s_K \\ s_{N-K} \end{bmatrix}$$
where $s_K$ is a K-dimensional vector and $s_{N-K}$ is a vector of length N−K. The actual observed signal, obtained from Equations (13) and (15), can be described as:

$$\tilde{y} = y + w = \Omega s + w = [\Omega_K\ \ \Omega_{N-K}]\begin{bmatrix} s_K \\ s_{N-K} \end{bmatrix} + w = \Omega_K s_K + \Omega_{N-K} s_{N-K} + w$$

where $\Omega = [\omega_1 \cdots \omega_K\ \omega_{K+1} \cdots \omega_N]$ is an $M \times N$ matrix composed of N column vectors of dimension M.
In order to estimate the error of the recovery algorithm accurately, we define three error functions using the $l_2$ norm:

$$\text{Original data error:}\quad e_x = \frac{1}{N}\|x - x^*\|_2^2$$
$$\text{Observed data error:}\quad e_y = \frac{1}{M}\|y - y^*\|_2^2$$
$$\text{Sparse data error:}\quad e_s = \frac{1}{N}\|s - s^*\|_2^2$$
where $x^*$, $y^*$, $s^*$ represent the reconstructed values of $x$, $y$, $s$, respectively, obtained by maximum likelihood (ML) estimation using $l_0$ minimization. The number of iterations of the restoration algorithm is $v$, which equals the number of selected correlation vectors. In addition, in the process of solving $s^*$ via the pseudo-inverse, which is based on the least-squares algorithm, the value of $v$ is smaller than M. Using Equations (13) and (15), the expressions of $x^*$, $y^*$, $s^*$ are:
$$s^* = \begin{bmatrix} s_v^* \\ s_{N-v}^* \end{bmatrix} = \begin{bmatrix} s_v^* \\ 0_{N-v} \end{bmatrix} = \begin{bmatrix} \Omega_v^+\tilde{y} \\ 0_{N-v} \end{bmatrix} = \begin{bmatrix} \Omega_v^+(\Omega_v s_v + \Omega_{N-v}s_{N-v} + w) \\ 0_{N-v} \end{bmatrix} = \begin{bmatrix} s_v + \Omega_v^+(\Omega_{N-v}s_{N-v} + w) \\ 0_{N-v} \end{bmatrix}$$

$$x^* = \Psi s^*$$
$$y^* = \Omega s^*$$

where $\Omega_v^+$ is the pseudo-inverse of $\Omega_v$: $\Omega_v^+ = (\Omega_v^T\Omega_v)^{-1}\Omega_v^T$.
Using Equations (20)–(22), the three error functions can be rewritten as:

$$e_x = \frac{1}{N}\left\|\Psi_{N-v}s_{N-v} - \Psi_v\Omega_v^+(\Omega_{N-v}s_{N-v} + w)\right\|_2^2$$
$$e_y = \frac{1}{M}\left\|\Omega_{N-v}s_{N-v} - \Omega_v\Omega_v^+(\Omega_{N-v}s_{N-v} + w)\right\|_2^2$$
$$e_s = \frac{1}{N}\left\|\begin{bmatrix} \Omega_v^+(\Omega_{N-v}s_{N-v} + w) \\ s_{N-v} \end{bmatrix}\right\|_2^2$$
According to the definitions of $\Psi$, $\Omega$ and the RIP, we know:

$$e_s = e_x$$
$$(1-\delta_K)\,e_s \le e_y \le (1+\delta_K)\,e_s$$

where $\delta_K \in (0,1)$ is a coefficient associated with $\Omega$ and K. According to the Gershgorin circle theorem [33], $\delta_K = (K-1)\mu(\Omega)$ for all $K < \mu(\Omega)^{-1}$, where $\mu(\Omega)$ denotes the coherence of $\Omega$:

$$\mu(\Omega) = \max_{1\le i<j\le N}\frac{|\langle\omega_i,\omega_j\rangle|}{\|\omega_i\|_2\,\|\omega_j\|_2}$$
Using Equations (26) and (27), the bounds on the original data error are:

$$\frac{1}{1+\delta_K}\,e_y \le e_x \le \frac{1}{1-\delta_K}\,e_y$$
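The coherence itself is cheap to evaluate; a small sketch (illustrative helper, not from the paper):

```python
import numpy as np

def coherence(Omega):
    """mu(Omega): largest absolute normalized inner product between
    distinct columns of the sensing matrix."""
    cols = Omega / np.linalg.norm(Omega, axis=0)
    G = np.abs(cols.T @ cols)          # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)           # exclude the i == j pairs
    return G.max()

# With delta_K = (K - 1) * coherence(Omega), valid for K < 1/mu(Omega),
# e_x is bounded between e_y / (1 + delta_K) and e_y / (1 - delta_K).
```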
Therefore, from the above analysis we conclude that the three errors are consistent, and minimizing any one of them is equivalent to minimizing the others. Considering computational complexity and reliability ($e_x$ is too complicated, $e_s$ has insufficient dimensions), $e_y$ is used as the target of the optimization function in the recovery algorithm.

3.3.2. Optimal Iterative Recovery of Image in Noisy Background

The optimal iterative recovery of an image discussed in this paper refers to the case where the error function of the image is smallest, namely the minimization of $e_y$ in $l_2$-norm form:

$$v_{opt} = \left\{v \,\middle|\, \arg\min_v e_y\right\}$$

$$\arg\min e_y = \arg\min \frac{1}{M}\left\|G_v\Omega_{N-v}s_{N-v} - C_v w\right\|_2^2$$

$$G_v = I - \Omega_v\Omega_v^+, \qquad C_v = \Omega_v\Omega_v^+$$

where $G_v$ is a projection matrix of rank $M-v$ and $C_v$ is a projection matrix of rank $v$. Since the projection matrices $G_v$ and $C_v$ in Equation (30) are orthogonal, the inner product of the two vectors $G_v\Omega_{N-v}s_{N-v}$ and $C_v w$ is zero, and therefore:

$$e_y = \frac{1}{M}\|C_v w\|_2^2 + \frac{1}{M}\|G_v\Omega_{N-v}s_{N-v}\|_2^2 = e_y^w + e_y^s$$
According to [34], the noise component $e_y^w$ of the observed data error is a scaled Chi-square random variable with $v$ degrees of freedom, and the expected value and variance of $e_y$ are as follows:

$$\frac{M}{\sigma_w^2}\,e_y^w \sim \chi_v^2$$

$$E(e_y) = \frac{v}{M}\sigma_w^2 + \frac{1}{M}\|G_v\Omega_{N-v}s_{N-v}\|_2^2$$

$$\mathrm{var}(e_y) = \frac{2v}{M^2}(\sigma_w^2)^2$$
The expected value of $e_y$ has two parts. The first part, $\frac{v}{M}\sigma_w^2$, is noise-related and increases with the number $v$. The second part, $\frac{1}{M}\|G_v\Omega_{N-v}s_{N-v}\|_2^2$, is a function of the unstable micro elements $s_{N-v}$ and decreases as $v$ increases. Therefore, the observed data error $e_y$ exhibits the usual bias-variance tradeoff.
Because the term $e_y^s$ cannot be observed, the optimal number of iterations $v_{opt}$ cannot be found by minimizing $e_y$ directly. As a result, a second bias-variance tradeoff $e_y^*$ is introduced to provide probabilistic bounds on $e_y^s$, using the noisy output $\tilde{y}$ instead of the noiseless output $y$:

$$e_y^* = \frac{1}{M}\|\tilde{y} - y^*\|_2^2 = \frac{1}{M}\|G_v\Omega_{N-v}s_{N-v} + G_v w\|_2^2$$
According to [35], the second observed data error $e_y^*$ is a scaled Chi-square random variable with $M-v$ degrees of freedom, and its expected value and variance are:

$$\frac{M}{\sigma_w^2}\,e_y^* \sim \chi_{M-v}^2$$

$$E(e_y^*) = \frac{M-v}{M}\sigma_w^2 + \frac{1}{M}\|G_v\Omega_{N-v}s_{N-v}\|_2^2 = \frac{M-v}{M}\sigma_w^2 + e_y^s$$

$$\mathrm{var}(e_y^*) = \frac{2(M-v)}{M^2}(\sigma_w^2)^2 + \frac{4\sigma_w^2}{M^2}\|G_v\Omega_{N-v}s_{N-v}\|_2^2$$
Thus we can derive probabilistic bounds for the observed data error $e_y$ using the probability distributions of the two Chi-square random variables:

$$\underline{e_y(p_1,p_2)} \le e_y \le \overline{e_y(p_1,p_2)}$$
where $p_1$ is the confidence probability on the observed data error $e_y$, and $p_2$ is the validation probability on the second observed data error $e_y^*$. As both errors satisfy Chi-square distributions, a Gaussian approximation can be used to estimate them. The confidence probability $p_1$ and validation probability $p_2$ are then:

$$p_1 = Q(\alpha) = \int_{-\alpha}^{\alpha}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}\,dx$$

$$p_2 = Q(\beta) = \int_{-\beta}^{\beta}\frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}}\,dx$$
where $\alpha$ and $\beta$ denote the tuning parameters of the confidence and validation probabilities, respectively. Furthermore, the worst case is considered when minimizing $e_y$, i.e., the minimum of the upper bound of $e_y$ is computed:

$$v_{opt} = \left\{v\,\middle|\,\arg\min_v \overline{e_y}\right\} = \left\{v\,\middle|\,\arg\min_v \overline{e_y(p_1,p_2)}\right\} = \left\{v\,\middle|\,\arg\min_v \overline{e_y(\alpha,\beta)}\right\} = \left\{v\,\middle|\,\arg\min_v\left(\frac{2v-M}{M}\sigma_w^2 + e_y^* + \alpha\frac{\sqrt{2v}}{M}\sigma_w^2 + \beta\sqrt{\mathrm{var}(e_y^*)}\right)\right\}$$
Normally, based on the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), the optimum number of iterations can be chosen as follows:

AIC: set $\alpha = \beta = 0$:

$$\overline{e_y} = \overline{e_y(\alpha=0,\,\beta=0)} = \left(\frac{2v}{M} - 1\right)\sigma_w^2 + e_y^*$$

BIC: set $\alpha = \sqrt{2v}\,\log M$ and $\beta = 0$:

$$\overline{e_y} = \overline{e_y(\alpha=\sqrt{2v}\log M,\,\beta=0)} = \left(\frac{(2+2\log M)\,v}{M} - 1\right)\sigma_w^2 + e_y^*$$
where $e_y^*$ can be calculated from the noisy observation data and the reconstruction algorithm.
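To illustrate how this stopping rule can be wired into OMP (our sketch, not the authors' code), one runs the greedy selection up to a maximum order, tracks $e_y^*$ at every iteration, and returns the solution of the order minimizing the BIC-style bound above:

```python
import numpy as np

def omp_bic(Omega, y_noisy, sigma_w, v_max=None):
    """OMP that keeps the iterate minimizing the BIC-style upper bound
    ((2 + 2*log M) * v / M - 1) * sigma_w^2 + e_y*(v)."""
    M, N = Omega.shape
    v_max = v_max or M // 2
    r, support = y_noisy.copy(), []
    best_bound, best_s = np.inf, np.zeros(N)
    for v in range(1, v_max + 1):
        k = int(np.argmax(np.abs(Omega.T @ r)))       # most correlated atom
        if k not in support:
            support.append(k)
        sub = Omega[:, support]
        s_v, *_ = np.linalg.lstsq(sub, y_noisy, rcond=None)
        r = y_noisy - sub @ s_v                        # residual
        e_y_star = np.sum(r ** 2) / M                  # (1/M)||y~ - y*||^2
        bound = ((2 + 2 * np.log(M)) * v / M - 1) * sigma_w ** 2 + e_y_star
        if bound < best_bound:
            best_bound = bound
            best_s = np.zeros(N)
            best_s[support] = s_v
    return best_s                                      # s* at v = v_opt
```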

3.3.3. Application of Error Estimation on BCS

The proposed algorithm (FE-ABCS) is based on block-compressive sensing, so the optimal number of iterations ($v_{opt}$) derived in the previous section requires a per-block variant:

$$v_{opt}^i = \left\{v_i\,\middle|\,\arg\min_{v_i}\overline{e_{y_i}}\right\} = \left\{v_i\,\middle|\,\arg\min_{v_i}\overline{e_{y_i}(p_1^i,p_2^i)}\right\} = \left\{v_i\,\middle|\,\arg\min_{v_i}\overline{e_{y_i}(\alpha_i,\beta_i)}\right\} = \left\{v_i\,\middle|\,\arg\min_{v_i}\left(\frac{2v_i - m_i}{m_i}\sigma_{w_i}^2 + \alpha_i\frac{\sqrt{2v_i}}{m_i}\sigma_{w_i}^2 + \beta_i\sqrt{\mathrm{var}(e_{y_i}^*)} + e_{y_i}^*\right)\right\}$$

where $i = 1, 2, \ldots, T_1$ indexes the sub-images. The values of $\alpha_i$ and $\beta_i$ can likewise be chosen according to the AIC and BIC criteria.

4. The Proposed Algorithm (FE-ABCS)

With the knowledge presented in the previous section, the altered ABCS (FE-ABCS) is proposed for the recovery of block sparse signals in noiseless or noisy backgrounds. The workflow of the proposed algorithm is presented in Section 4.1, while the specific parameter settings are introduced in Section 4.2.

4.1. The Workflow and Pseudocode of FE-ABCS

In order to better express the idea of the proposed algorithm, the workflows of the typical BCS algorithm and of the FE-ABCS algorithm are both presented in Figure 2.
According to Figure 2, compared with the traditional BCS algorithm, the main innovations of this paper are the following:
  • Flexible partitioning: using the weighted MIE as the partitioning basis to reduce the average complexity of the sub-images in both the pixel domain and the spatial domain;
  • Adaptive sampling: adopting the synthetic feature and step-less sampling to ensure a reasonable sampling rate for each subgraph;
  • Optimal number of iterations: using the error estimation method to ensure minimum-error output of the reconstructed image in a noisy background.
Furthermore, since the FE-ABCS algorithm is based on an iterative recovery algorithm, especially a non-convex one, this paper uses the OMP algorithm as the basic algorithm without loss of generality. The full pseudocode of the proposed algorithm is presented below.
FE-ABCS Algorithm based on OMP (Orthogonal Matching Pursuit)
1: Input: original image I, total sampling rate TSR, sub-image dimension n (n = 2^b, b = 2, 4, 6, …), sparse matrix Ψ ∈ R^(n×n), initialized measurement matrix Φ ∈ R^(n×n)
2: Initialization: {x | x ∈ I and x ∈ R^N}; T1 = N/n; // quantity of sub-images
   T2 = 1 + log2(n); // types of flexible partitioning
 step 1: flexible partitioning (FP)
3: for j = 1, …, T2 do
4:   l_j × h_j = 2^(j−1) × 2^(T2−j); {I_i, i = 1, …, T1}_j ← I; {x_i, i = 1, …, T1}_j ← x;
5:   g_MIE^j = MIE({x_i, i = 1, …, T1}_j); c_TS^j = (f(g_TS))_j = (f([g_TS^H, g_TS^V]))_j;
6:   g_FB^j = c_TS^j · g_MIE^j; // weighted MIE, basis of FP
7: end for
8: j_opt = arg min_j {g_FB^j, j = 1, …, T2};
9: l × h = 2^(j_opt−1) × 2^(T2−j_opt); {I_i, i = 1, …, T1} ← I; {x_i, i = 1, …, T1} ← x;
 step 2: adaptive sampling (AS)
10: for i = 1, …, T1 do
11:   D(x_i) ← x_i; L(x_i) ← I_i; J(x_i) = L(x_i)^λ1 · D(x_i)^λ2; // synthetic feature (J), basis of AS
12: end for
13: η({x_i, i = 1, …, T1}) = log2 J({x_i, i = 1, …, T1}) / ((1/T1) Σ_{i=1}^{T1} log2 J(x_i));
14: c_SR({x_i, i = 1, …, T1}) = η({x_i, i = 1, …, T1}) · TSR; // c_SR: AS ratio of the sub-images
    {m_i, i = 1, …, T1} = c_SR({x_i, i = 1, …, T1}) · n;
15: Φ = (φ_1, …, φ_n)^T, χ_i = randperm(n), Φ_χi = Φ(χ_i, :);
16: {Φ_i, i = 1, …, T1} = {Φ_χi([1, …, m_i], :), i = 1, …, T1};
17: {y_i, i = 1, …, T1} = {Φ_i · x_i, i = 1, …, T1};
 step 3: restoring based on error estimation
18: {ỹ_i, i = 1, …, T1} = {y_i + w_i, i = 1, …, T1}; // w_i: m_i-dimensional AWGN; w_i = 0 in the noiseless case
19: {Ω_i, i = 1, …, T1} = {Φ_i · Ψ, i = 1, …, T1};
20: for i = 1, …, T1 do
21:   Ω_i = {ω_i^1, …, ω_i^n}, r = ỹ_i, A = ∅, s* = 0_n; // ω_i^j, j = 1, …, n: column vectors of Ω_i
22:   v_opt^i = {v_i | arg min_{v_i} ē_{y_i}}; // compute the optimal iteration number of the sub-image
23:   for j = 1, …, v_opt^i do
24:     k = arg max_j |⟨r, ω_i^j⟩|;
25:     A = A ∪ {k};
26:     r = ỹ_i − Ω_i(:, A) · [Ω_i(:, A)]^+ · ỹ_i;
27:   end for
28:   s_i* = [Ω_i(:, A)]^+ · ỹ_i; // s_i*: reconstructed sparse representation
29:   x_i* = Ψ · s_i*; // x_i*: reconstructed original signal
30: end for
31: x* = {x_i*, i = 1, …, T1}, I_r* = {x*, l, h}; // I_r*: reconstructed image without filtering
 step 4: multimode filtering (MF)
32: if (BEI ≥ BEI*) // BEI*: threshold of the block effect
33:   I_r* = deblock(I_r*);
34: end if
35: if (TSR ≤ TSR*) // TSR*: threshold of the TSR
36:   I_F* = wienerfilter(I_r*);
37: else I_F* = medfilter(I_r*);
38: end if
39: I* = I_F* // I*: reconstructed image with MF

4.2. Specific Parameter Setting of FE-ABCS

4.2.1. Setting of the Weighting Coefficient c T S

The most important step in achieving flexible partitioning is finding the minimum of the weighted MIE, where the design of the weighting coefficient function is the most critical point. Therefore, this section focuses on the specific design of the function that ensures optimal partitioning of the image:
$$c_{TS} = f(g_{TS}) = \{(f(g_{TS}))_j,\ j=1,\ldots,T_2\} = \{(f([g_{TS}^H, g_{TS}^V]))_j,\ j=1,\ldots,T_2\} = \begin{cases} \mathrm{ones}(1,T_2), & g_{TS}^H \le G_{TS}\ \&\ g_{TS}^V \le G_{TS} \\ [0,1,\ldots,b]/(b/2), & g_{TS}^H \le G_{TS}\ \&\ g_{TS}^V > G_{TS} \\ [b,\ldots,1,0]/(b/2), & g_{TS}^H > G_{TS}\ \&\ g_{TS}^V \le G_{TS} \\ \dfrac{[b,\ \log_2(n/2+2),\ \log_2(n/4+4),\ \ldots,\ \log_2(4+n/4),\ \log_2(2+n/2),\ b]}{b/2+1}, & g_{TS}^H > G_{TS}\ \&\ g_{TS}^V > G_{TS} \end{cases}$$

where $g_{TS}^H$ and $g_{TS}^V$ represent the values of the horizontal and vertical TS computed with the ASM, and $G_{TS}$ represents the threshold at which the TS feature value reaches a significant degree.

4.2.2. Setting of the Adaptive Sampling Rate c S R

In the process of adaptive observation, the most important thing is to design a reasonable random observation matrix whose dimension is constrained by the adaptive sampling rate, so that each sub-image is sampled according to its complexity. Therefore, the setting of $c_{SR} = \{c_{SR}(x_i), i=1,\ldots,T_1\}$ is crucial, and its basic form is determined by the synthetic feature ($J = \{J(x_i), i=1,\ldots,T_1\}$) and the sampling rate factor ($\eta_{SR} = \{\eta_{SR}(x_i), i=1,\ldots,T_1\}$).
The definition of $J(x_i)$ is completed by setting the corresponding weighting coefficients $\lambda_1$ and $\lambda_2$. This article obtains optimized values for $\lambda_1$ and $\lambda_2$ through analysis and partial verification experiments: $\lambda_1 = 1$ and $\lambda_2 = 2$.
The purpose of setting $\eta_{SR}(x_i)$ is to establish the mapping between $J(x_i)$ and $c_{SR}$ through Equations (10) and (11). However, the mapping established by Equation (10) does not consider a minimum sampling rate. In the proposed algorithm, a minimum sampling rate factor (MSRF) is introduced to improve performance; that is, the function between $\eta_{SR}(x_i)$ and $J(x_i)$ is modified as follows (see the sketch after this list).
  • Initial value calculation of $\eta_{SR}(x_i)$: obtain the initial value of the sampling factor from Equation (10).
  • Judgment of $\eta_{SR}(x_i)$ against the MSRF ($\eta_{\min}$): if the sampling rate factors of all image sub-blocks meet the minimum threshold requirement ($\eta_{SR}(x_i) > \eta_{\min}$ for all $i \in \{1,2,\ldots,T_1\}$), no modification is needed; otherwise, modify them.
  • Modification of $\eta_{SR}(x_i)$: if $\eta_{SR}(x_i) \le \eta_{\min}$, set $\eta_{SR}(x_i) = \eta_{\min}$; if $\eta_{SR}(x_i) > \eta_{\min}$, use the following equation to modify the value:
$$\eta_{SR}(x_i) = \left(1 + \frac{(1-\eta_{\min})(T_1 - T_1')}{T_1'}\right)\frac{\log_2 J(x_i)}{\frac{1}{T_1}\sum_{j=1}^{T_1}\log_2 J(x_j)}$$

    where $T_1'$ is the number of sub-images that meet the minimum threshold requirement.
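For illustration, one budget-preserving reading of this modification (our interpretation; the rescaling of the compliant blocks only approximates the factor in the equation above) clamps the low factors and renormalizes the rest so that the total number of measurements is unchanged:

```python
import numpy as np

def apply_msrf(eta, eta_min):
    """Clamp sampling-rate factors below eta_min and rescale the remaining
    factors so the average stays 1 (total measurement budget preserved).
    One plausible reading of the modification; the exact rescaling factor
    in the equation above may differ."""
    eta = np.asarray(eta, dtype=float)
    low = eta <= eta_min
    if not low.any():
        return eta                           # all blocks already compliant
    out = eta.copy()
    out[low] = eta_min
    budget = eta.size - low.sum() * eta_min  # factor budget left for the rest
    out[~low] *= budget / out[~low].sum()
    return out
```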

4.2.3. Setting of the Iteration Stop Condition v o p t

The focus of the proposed algorithm in the iterative reconstruction part is to achieve the best reconstruction effect by choosing $v_{opt}$ for the actual noisy background. Combining BIC and BCS, the calculation formula for the optimal iteration number of the proposed algorithm is:

$$v_{opt} = \{v_{opt}^i,\ i=1,\ldots,T_1\} = \left\{v_i\,\middle|\,\arg\min_{v_i}\left(\frac{(2+2\log m_i)\,v_i - m_i}{m_i}\sigma_{w_i}^2 + e_{y_i}^*\right),\ i=1,\ldots,T_1\right\}$$

5. Experiments and Results Analysis

In order to evaluate the FE-ABCS algorithm, experimental verification was performed in three scenarios. This paper first discusses the performance of the improvements from flexible partitioning and adaptive sampling in the absence of noise; it then discusses how to use the optimal number of iterations to suppress the noise effect and achieve the best quality (composite indicator) under noisy conditions. Finally, the differences between the proposed algorithm and non-CS image compression algorithms are analyzed. The experiments were carried out in the MATLAB R2016b environment, and 20 typical grayscale images with 256 × 256 resolution were used for testing, selected from the LIVE Image Quality Assessment Database, the SIPI Image Database, the BSDS500 database, and other standard digital image processing test databases. The performance indicators are the Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Gradient Magnitude Similarity Deviation (GMSD), Block Effect Index (BEI), and Computational Cost (CC), defined as follows:
The PSNR indicator measures the amplitude error between the reconstructed image and the original image and is the most common and widely used objective measure of image quality:

$$PSNR = 20 \times \log_{10}\left(\frac{255}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - x_i^*)^2}}\right)$$

where $x_i$ and $x_i^*$ are the gray values of the i-th pixel of the original image and the reconstructed image.
The SSIM indicator measures the similarity between the reconstructed image and the original image:

$$SSIM = \frac{(2\mu_x\mu_{x^*} + c_1)(2\sigma_{xx^*} + c_2)}{(\mu_x^2 + \mu_{x^*}^2 + c_1)(\sigma_x^2 + \sigma_{x^*}^2 + c_2)}$$

where $\mu_x$ and $\mu_{x^*}$ are the means of $x$ and $x^*$, $\sigma_x$ and $\sigma_{x^*}$ are the standard deviations of $x$ and $x^*$, $\sigma_{xx^*}$ is the covariance of $x$ and $x^*$, the constants are $c_1 = (0.01L)^2$ and $c_2 = (0.03L)^2$, and $L$ is the range of the pixel values.
The GMSD indicator characterizes the degree of distortion of the reconstructed image; the larger the value, the worse the quality of the reconstructed image:

$$GMSD = \mathrm{std}\left(\{GMS(i) \mid i=1,\ldots,N\}\right) = \mathrm{std}\left(\left\{\frac{2m_x(i)\,m_{x^*}(i) + c_3}{m_x^2(i) + m_{x^*}^2(i) + c_3}\ \middle|\ i=1,\ldots,N\right\}\right)$$

where $\mathrm{std}(\cdot)$ is the standard deviation operator, GMS is the gradient magnitude similarity between $x$ and $x^*$, $m_x(i) = \sqrt{(h_H * x(i))^2 + (h_V * x(i))^2}$ and $m_{x^*}(i) = \sqrt{(h_H * x^*(i))^2 + (h_V * x^*(i))^2}$ are the gradient magnitudes of $x(i)$ and $x^*(i)$, $h_H$ and $h_V$ are the horizontal and vertical Prewitt operators, and $c_3$ is an adjustment constant.
The main purpose of introducing BEI is to evaluate the blockiness of the algorithm under noisy conditions; the larger the value, the more obvious the block effect:
$$BEI = \log_2\left[\frac{\dfrac{\mathrm{sum}(\mathrm{edge}(x^*))}{\mathrm{sum}(\mathrm{edge}(x))} + \mathrm{sum}(|\mathrm{edge}(x^*) - \mathrm{edge}(x)|)}{2}\right]$$
where $\mathrm{edge}(\cdot)$ denotes the edge-acquisition function of the image, $\mathrm{sum}(\cdot)$ counts the edge points of an image, and $|\cdot|$ is the absolute value operator.
The Computational Cost is introduced to measure the efficiency of the algorithm and is usually represented by the Computation Time (CT); the smaller the CT, the higher the efficiency of the algorithm:

$$CT = t_{end} - t_{start}$$

where $t_{start}$ and $t_{end}$ indicate the start time and end time, respectively.
In addition, the sparse basis and the random measurement matrices use the discrete cosine orthogonal basis and orthogonal symmetric Toeplitz matrices [36,37], respectively.
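To make the less standard indicators reproducible, here is a minimal Python sketch of PSNR and GMSD, assuming Prewitt kernels and an arbitrarily chosen $c_3$ (BEI is omitted since it depends on the choice of edge detector):

```python
import numpy as np
from scipy.ndimage import convolve

def psnr(x, x_rec):
    """PSNR for 8-bit images: 20 * log10(255 / sqrt(MSE))."""
    mse = np.mean((x.astype(float) - x_rec.astype(float)) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))

def gmsd(x, x_rec, c3=170.0):
    """GMSD: standard deviation of the gradient magnitude similarity map.
    The value of c3 is an assumption, not taken from the paper."""
    hH = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0  # Prewitt, horizontal
    hV = hH.T                                                   # Prewitt, vertical
    def grad_mag(img):
        img = img.astype(float)
        return np.sqrt(convolve(img, hH) ** 2 + convolve(img, hV) ** 2)
    m_x, m_r = grad_mag(x), grad_mag(x_rec)
    gms = (2 * m_x * m_r + c3) / (m_x ** 2 + m_r ** 2 + c3)
    return gms.std()
```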

5.1. Experiment and Analysis without Noise

5.1.1. Performance Comparison of Various Algorithms

In order to verify the superiority of the proposed ABCS algorithm, this paper mainly uses OMP as the basic reconstruction algorithm. On top of OMP, eight BCS algorithms (including the proposed algorithm with flexible partitioning and adaptive sampling) are listed, and their performance under different overall sampling rates is compared in Table 1. In this experiment, four common grayscale standard images are used for performance testing; the dimension of the subgraph and the number of reconstruction iterations are fixed to 256 and one quarter of the measurement dimension, respectively.
These eight BCS algorithms are named M-B_C, M-B_S, M-FB_MIE, M-FB_WM, M-B_C-A_S, M-FB_WM-A_I, M-FB_WM-A_V, and M-FB_WM-A_S, which in turn represent BCS with a fixed column block, BCS with a fixed square block, BCS with flexible partitioning by MIE, BCS with flexible partitioning by WM, BCS with a fixed column block and IE-adaptive sampling, BCS with WM-flexible partitioning and IE-adaptive sampling, BCS with WM-flexible partitioning and variance-adaptive sampling, and BCS with WM-flexible partitioning and SF-adaptive sampling (the form FE-ABCS takes in the absence of noise). Comparing the data in Table 1 leads to the following conclusions:
  • Analysis of the performance indicators of the first four algorithms shows that BCS with a fixed column block is inferior to BCS with a fixed square block, because square partitioning makes good use of the intra-block correlation. MIE-based partitioning minimizes the average information entropy of the sub-images. However, when the overall image has obvious texture characteristics, using MIE alone as the partitioning basis may not achieve a good effect, whereas the BCS algorithm based on weighted MIE combined with the overall texture feature achieves better performance indicators.
  • Comparing the adaptive BCS algorithms under different features in Table 1, variance is clearly superior to IE among the single features, because variance captures not only the dispersion of the gray distribution but also the relative difference of the individual gray distributions of the sub-images. In addition, the synthetic feature (combined with local saliency) outperforms any single feature, mainly because it considers both the overall difference between subgraphs and the inner local difference within subgraphs.
  • Combining the experimental results of the eight BCS algorithms in Table 1 reveals that using adaptive sampling or flexible partitioning alone does not provide the best results, whereas the proposed algorithm combining the two steps yields a significant improvement in both PSNR and SSIM.
Figure 3 shows the reconstructed images of Cameraman using the above eight BCS algorithms and the multimode filter at an overall sampling rate of 0.5. Compared with the other algorithms, graph (i), reconstructed by the proposed algorithm, has good quality both in performance indicators and in subjective vision. Adding multimode filtering improved the performance of all eight BCS algorithms. Comparing the corresponding data (SR = 0.5) in Figure 3 and Table 1, PSNR shows a certain improvement after adding multimode filtering, and so does SSIM for the first six BCS algorithms, but not for the last two. The reason is that the adaptive sampling rates of the latter two algorithms are both related to the variance (the larger the variance, the higher the sampling rate), and SSIM is related to both the variance and the covariance. In addition, to filter out high-frequency noise, the filtering process also loses some high-frequency components of the signal itself, components that contribute to SSIM. Therefore, the latter two algorithms reduce the SSIM of images with many high-frequency components (the SSIM values of graphs (h) and (i) in Figure 3 are slightly smaller than the corresponding values in Table 1), but for most images without much high-frequency information, the SSIM value is improved.
Secondly, in order to evaluate the effectiveness and universality of the proposed algorithm, the IRLS and BP reconstruction algorithms were adopted in addition to OMP and combined with the proposed method to generate eight BCS algorithms each. Table 2 shows the experimental data of the above two families of algorithms tested on the standard image Lena. The data show that the proposed method brings a certain improvement to the BCS algorithms based on IRLS and BP, although at a slightly higher computation time due to the added complexity of the proposed algorithm itself.
Furthermore, comparative experiments combining the proposed algorithm with different image reconstruction algorithms (OMP, IRLS, BP, and SP) were also carried out. Figure 4 records the results for the standard image Lena under TSR = 0.4 and TSR = 0.6, respectively. The experimental data show that the proposal combined with these four algorithms differs little in the PSNR and SSIM indexes. In terms of the GMSD index, however, IRLS and BP are clearly better than OMP and SP. In terms of computation time, BP, which is based on the $l_1$ norm, performs significantly worse than the other three, consistent with the discussion in Section 1.

5.1.2. Parametric Analysis of the Proposed Algorithm

The main points of the proposed algorithm involve the design and verification of the weighting coefficients ($c_{TS}$, $\lambda_1$, $\lambda_2$) and the minimum sampling rate factor ($\eta_{\min}$). The design of the three weighting coefficients was introduced in Section 4.2, and its effect on performance was reflected in the comparison of the eight algorithms in Section 5.1.1. Here, only the selection and effect of $\eta_{\min}$ remain to be researched, and the influence of $\eta_{\min}$ on the PSNR under different TSR values is analyzed.
Figure 5 shows the analysis of the test image Lena regarding the correlation between PSNR, TSR, and the MSRF ($\eta_{\min}$). It can be seen from Figure 5 that the optimal value of the minimum sampling rate factor (OMSRF), i.e., the value maximizing the PSNR of Lena's recovered image, decreases as the TSR increases. In addition, the gray value in Figure 5 represents the revised PSNR of the recovered image ($PSNR^* = PSNR - \overline{PSNR}$).
Many other test images were then analyzed to verify the relationship between the TSR and the OMSRF ($\eta_{opt} = \{\eta_{\min} \mid \arg\max_{\eta_{\min}}(PSNR(x, \eta_{\min}))\}$); the experimental results for eight typical test images are shown in Figure 6. According to the data, a reasonable setting of the MSRF ($\eta_{opt}$) in the algorithm can be obtained by curve fitting. A simple baseline fit is used in the proposed algorithm of this article: $\eta_{opt} = 0.1 + 6 \times (0.8 - TSR)/7$.
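For convenience, that baseline fit can be written directly as a function (coefficients exactly as quoted above):

```python
def optimal_msrf(tsr):
    """Baseline linear fit of the optimal minimum sampling rate factor:
    eta_opt = 0.1 + 6 * (0.8 - TSR) / 7 (curve fit reported for Figure 6)."""
    return 0.1 + 6.0 * (0.8 - tsr) / 7.0
```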

5.2. Experiment and Analysis Under Noisy Conditions

5.2.1. Effect Analysis of Different Iteration Stop Conditions on Performance

In the noiseless case, the larger the iteration number ($v$) of the reconstruction algorithm, the better the reconstructed image. Under noisy conditions, however, the quality of the reconstructed image does not vary monotonically with $v$, as analyzed in Section 3.3. The usual iteration stop conditions are: (1) using the sparsity ($\varsigma$) of the signal as the stopping condition, i.e., a fixed number of iterations ($v_{stop1} = \varsigma \cdot m$); and (2) using a differential threshold ($\gamma$) on the recovered value as the stopping condition, i.e., stopping when the difference between two adjacent iterative outputs falls below the threshold ($v_{stop2} = \{v \mid \arg\min_v(\|y_{v-1}^* - y_v^*\| \le \gamma)\}$). Since neither method guarantees optimal recovery of the original signal in a noisy background, the innovation of the FE-ABCS algorithm is to make up for this deficiency with a constraint ($v_{opt}$) based on error analysis that ensures the best iterative reconstruction. The rationality of the proposed scheme is verified experimentally in this section; without loss of generality, OMP is used as the basic reconstruction algorithm, as in Section 5.1.
The experimental results for the test image Lena with different iteration stop conditions under different noise backgrounds are recorded in Table 3. Noise-std denotes the standard deviation of the added Gaussian noise. The overall trend in Table 3 shows that selecting $v_{opt}$ yields better performance than selecting $v_{stop1}$ as the stop condition for iterative reconstruction, and this advantage becomes especially pronounced as the Noise-std increases.
In addition, to comprehensively evaluate the impact of different iteration stop conditions on reconstructed images, this paper combines the above three indicators into a composite index PSB (PSB = PSNR × SSIM/BEI). The relationship between the PSB of the reconstructed image and the Noise-std was studied under different iteration conditions, as was the relationship between PSB and TSR. Figure 7 shows the correspondence between PSB, Noise-std, and TSR under six different iteration stop conditions for Lena. Figure 7a shows that, compared with the other five sparsity-based ($\varsigma$) settings, the $v_{opt}$-based error analysis reconstruction generally performs well under different noise backgrounds. Similarly, Figure 7b shows that the $v_{opt}$-based error analysis reconstruction has advantages over the other settings at different total sampling rates.
Furthermore, the differential threshold (γ)-based stop condition and the $v_{opt}$-based error analysis stop condition were compared. Two standard test images and two real-world images were adopted for the comparative experiment under Noise-std = 20 and TSR = 0.5. The experimental results show that the $v_{opt}$-based error analysis reconstruction has a significant advantage over the γ-based reconstruction in both PSNR and PSGBC (another composite index: PSGBC = PSNR × SSIM/GMSD/BEI/CT), as can be seen from Table 4, although there is a slight loss in BEI. Figure 8 shows the reconstructed images of these four images with the differential threshold (γ) and error analysis ($v_{opt}$) as the iteration stop conditions.

5.2.2. Impact of Noise-Std and TSR on v o p t

Since $v_{opt}$ is important to the proposed algorithm, it is necessary to analyze its influencing factors. According to Equation (44), $v_{opt}$ is mainly determined by the measurement dimension of the signal and the added noise intensity under the BIC condition. In this section, the test image Lena is divided into 256 sub-images, and the relationship between the optimal iteration stop condition ($v_{opt}^i$) of each sub-image, the TSR, and the Noise-std is analyzed; the experimental results are recorded in Figure 9. Figure 9a shows that the correlation between $v_{opt}$ and TSR is small, while Figure 9b shows that $v_{opt}$ is strongly correlated with Noise-std: the larger the Noise-std, the smaller the $v_{opt}$.

5.3. Application and Comparison Experiment of FE-ABCS Algorithm in Image Compression

5.3.1. Application of FE-ABCS Algorithm in Image Compression

Although the FE-ABCS algorithm belongs to CS theory, which is mainly used for the reconstruction of sparse images at low sampling rates, the algorithm can also be used for image compression after modification. The purpose of conventional image compression algorithms (such as JPEG, JPEG2000, TIFF, and PNG) is to reduce the amount of data while maintaining a certain image quality through quantization and encoding. Therefore, quantization and encoding modules are added to the FE-ABCS of Figure 2b to form a new algorithm for image compression, shown in Figure 10 and named FE-ABCS-QC.
To demonstrate the difference between the proposed algorithm and traditional image compression, the JPEG2000 algorithm is selected as the comparison algorithm without loss of generality and is shown in Figure 11. Comparing Figures 10 and 11, the FDWT and IDWT modules of JPEG2000 are replaced by the observing and restoring modules of the proposal, respectively, and the dimensions of the input and output signals differ in the observing and restoring modules (M < T1 × n = N), unlike FDWT and IDWT, whose input and output dimensions are the same (both T1 × n = N). These differences give the proposed algorithm a larger compression ratio (CR) and fewer bits per pixel (bpp) than JPEG2000 under the same quantization and encoding conditions.

5.3.2. Comparison Experiment between the Proposed Algorithm and the JPEG2000 Algorithm

In general, image compression algorithms are evaluated by their rate-distortion performance. For the comparison of the FE-ABCS-QC and JPEG2000 algorithms, the PSNR, SSIM, and GMSD indicators are adopted in this section. In addition, the Rate (bpp) of the two algorithms is defined as:

$$Rate = \frac{K^*}{N}$$

where $K^*$ is the number of bits in the code stream after encoding, and N is the number of pixels in the original image.
To compare the performance of the two algorithms, multiple standard images were tested; Table 5 records the complete experimental data at various rates using Lena and Monarch as test images. At the same time, Figure 12 illustrates the relationship between PSNR (used as the main distortion index) and the Rate for the two test images. Based on the objective data of Table 5 and Figure 12, the advantage of the FE-ABCS-QC algorithm over JPEG2000 grows with the rate: at small rates, JPEG2000 is superior to FE-ABCS-QC, while at medium and slightly larger rates, JPEG2000 is not as good as FE-ABCS-QC.
Furthermore, the experimental results are also recorded as images in addition to the objective data. Figure 13 compares the compressed image restoration effects of the two algorithms at bpp = 0.25 using Bikes as the test image. Comparing (b) and (c) of Figure 13, the image generated by the FE-ABCS-QC algorithm is slightly better than that of the JPEG2000 algorithm, both in objective data and in subjective perception.
Finally, the following conclusions can be drawn from the experimental data and theoretical analysis:
  • Small Rate (bpp): the FE-ABCS-QC algorithm performs worse than JPEG2000 here because the small value of M, which changes with the Rate, causes the observing process to fail to cover the overall information of the image.
  • Medium or slightly larger Rate (bpp): FE-ABCS-QC performs better than JPEG2000 in this situation because an appropriate M ensures complete acquisition of the image information while still providing a certain compression ratio, generating a better basis for quantization and encoding.
  • Large Rate (bpp): this case is not considered for FE-ABCS-QC because the algorithm belongs to the CS family and requires M << N.

6. Conclusions

Based on the traditional block-compressive sensing model, an improved algorithm (FE-ABCS) was proposed in this paper, and its overall workflow and key points were specified. Compared with the traditional BCS algorithm, first, flexible partitioning was adopted to improve the rationality of the block segmentation; second, the synthetic feature was used to provide a more reasonable adaptive sampling basis for each sub-image block; and finally, error analysis was added to the iterative reconstruction process to minimize the error between the reconstructed and original signals in a noisy background. The experimental results show that the proposed algorithm improves image quality in both noiseless and noisy backgrounds, especially the composite index of the reconstructed image under a noisy background, which benefits the practical application of the BCS algorithm and the application of FE-ABCS in image compression.

Author Contributions

Conceptualization, methodology, software, validation and writing, Y.Z.; data curation and visualization, Q.S. and Y.Z.; formal analysis, W.L. and Y.Z.; supervision, project administration and funding acquisition, W.L.

Funding

This work was supported by the National Natural Science Foundation of China (61471191).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  2. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  3. Shi, G.; Liu, D.; Gao, D. Advances in theory and application of compressed sensing. Acta Electron. Sin. 2009, 37, 1070–1081. [Google Scholar]
  4. Sun, Y.; Xiao, L.; Wei, Z. Representations of images by a multi-component Gabor perception dictionary. Acta Electron. Sin. 2008, 34, 1379–1387. [Google Scholar]
  5. Xu, J.; Zhang, Z. Self-adaptive image sparse representation algorithm based on clustering and its application. Acta Photonica Sin. 2011, 40, 316–320. [Google Scholar]
  6. Wang, G.; Niu, M.; Gao, J.; Fu, F. Deterministic constructions of compressed sensing matrices based on affine singular linear space over finite fields. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2018, 101, 1957–1963. [Google Scholar] [CrossRef]
  7. Li, S.; Wei, D. A survey on compressive sensing. Acta Autom. Sin. 2009, 35, 1369–1377. [Google Scholar] [CrossRef]
  8. Palangi, H.; Ward, R.; Deng, L. Distributed compressive sensing: A deep learning approach. IEEE Trans. Signal Process. 2016, 64, 4504–4518. [Google Scholar] [CrossRef]
  9. Chen, C.; He, L.; Li, H.; Huang, J. Fast iteratively reweighted least squares algorithms for analysis-based sparse reconstruction. Med. Image Anal. 2018, 49, 141–152. [Google Scholar] [CrossRef]
  10. Gan, L. Block compressed sensing of natural images. In Proceedings of the 15th International Conference on Digital Signal Processing, Cardiff, UK, 1–4 July 2007; pp. 403–406. [Google Scholar]
  11. Unde, A.S.; Deepthi, P.P. Fast BCS-FOCUSS and DBCS-FOCUSS with augmented Lagrangian and minimum residual methods. J. Vis. Commun. Image Represent. 2018, 52, 92–100. [Google Scholar] [CrossRef]
  12. Kim, S.; Yun, U.; Jang, J.; Seo, G.; Kang, J.; Lee, H.N.; Lee, M. Reduced computational complexity orthogonal matching pursuit using a novel partitioned inversion technique for compressive sensing. Electronics 2018, 7, 206. [Google Scholar] [CrossRef]
  13. Qi, R.; Yang, D.; Zhang, Y.; Li, H. On recovery of block sparse signals via block generalized orthogonal matching pursuit. Signal Process. 2018, 153, 34–46. [Google Scholar] [CrossRef]
  14. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  15. Lotfi, M.; Vidyasagar, M. A fast noniterative algorithm for compressive sensing using binary measurement matrices. IEEE Trans. Signal Process. 2018, 66, 4079–4089. [Google Scholar] [CrossRef]
  16. Yang, J.; Zhang, Y. Alternating direction algorithms for l1 problems in compressive sensing. SIAM J. Sci. Comput. 2011, 33, 250–278. [Google Scholar] [CrossRef]
  17. Yin, H.; Liu, Z.; Chai, Y.; Jiao, X. Survey of compressed sensing. Control Decis. 2013, 28, 1441–1445. [Google Scholar]
  18. Dinh, K.Q.; Jeon, B. Iterative weighted recovery for block-based compressive sensing of image/video at a low subrate. IEEE Trans. Circ. Syst. Video Technol. 2017, 27, 2294–2308. [Google Scholar] [CrossRef]
  19. Liu, L.; Xie, Z.; Yang, C. A novel iterative thresholding algorithm based on plug-and-play priors for compressive sampling. Future Internet 2017, 9, 24. [Google Scholar]
  20. Wang, Y.; Wang, J.; Xu, Z. Restricted p-isometry properties of nonconvex block-sparse compressed sensing. Signal Process. 2014, 104, 1188–1196. [Google Scholar] [CrossRef]
  21. Mahdi, S.; Tohid, Y.R.; Mohammad, A.T.; Amir, R.; Azam, K. Block sparse signal recovery in compressed sensing: Optimum active block selection and within-block sparsity order estimation. Circuits Syst. Signal Process. 2018, 37, 1649–1668. [Google Scholar]
  22. Wang, R.; Jiao, L.; Liu, F.; Yang, S. Block-based adaptive compressed sensing of image using texture information. Acta Electron. Sin. 2013, 41, 1506–1514. [Google Scholar]
  23. Unde, A.S.; Deepthi, P.P. Block compressive sensing: Individual and joint reconstruction of correlated images. J. Vis. Commun. Image Represent. 2017, 44, 187–197. [Google Scholar]
  24. Liu, Q.; Wei, Q.; Miao, X.J. Blocked image compression and reconstruction algorithm based on compressed sensing. Sci. Sin. 2014, 44, 1036–1047. [Google Scholar]
  25. Wang, H.L.; Wang, S.; Liu, W.Y. An overview of compressed sensing implementation and application. J. Detect. Control 2014, 36, 53–61. [Google Scholar]
  26. Xiao, D.; Xin, C.; Zhang, T.; Zhu, H.; Li, X. Saliency texture structure descriptor and its application in pedestrian detection. J. Softw. 2014, 25, 675–689. [Google Scholar]
  27. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Texture features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef]
  28. Cao, Y.; Bai, S.; Cao, M. Image compression sampling based on adaptive block compressed sensing. J. Image Graph. 2016, 21, 416–424. [Google Scholar]
  29. Shen, J. Weber’s law and weberized TV restoration. Phys. D Nonlinear Phenom. 2003, 175, 241–251. [Google Scholar] [CrossRef]
  30. Li, R.; Cheng, Y.; Li, L.; Chang, L. An adaptive blocking compression sensing for image compression. J. Zhejiang Univ. Technol. 2018, 46, 392–395. [Google Scholar]
  31. Liu, H.; Wang, C.; Chen, Y. FBG spectral compression and reconstruction method based on segmented adaptive sampling compressed sensing. Chin. J. Lasers 2018, 45, 279–286. [Google Scholar]
  32. Li, R.; Gan, Z.; Zhu, X. Smoothed projected Landweber image compressed sensing reconstruction using hard thresholding based on principal components analysis. J. Image Graph. 2013, 18, 504–514. [Google Scholar]
  33. Gershgorin, S. Über die Abgrenzung der Eigenwerte einer Matrix. Izv. Akad. Nauk. SSSR Ser. Math. 1931, 1, 749–754. [Google Scholar]
  34. Beheshti, S.; Dahleh, M.A. Noisy data and impulse response estimation. IEEE Trans. Signal Process. 2010, 58, 510–521. [Google Scholar] [CrossRef]
  35. Beheshti, S.; Dahleh, M.A. A new information-theoretic approach to signal denoising and best basis selection. IEEE Trans. Signal Process. 2005, 53, 3613–3624. [Google Scholar] [CrossRef]
  36. Bottcher, A. Orthogonal symmetric Toeplitz matrices. Complex Anal. Oper. Theory 2008, 2, 285–298. [Google Scholar] [CrossRef]
  37. Duan, G.; Hu, W.; Wang, J. Research on the natural image super-resolution reconstruction algorithm based on compressive perception theory and deep learning model. Neurocomputing 2016, 208, 117–126. [Google Scholar] [CrossRef]
Figure 1. Effect of different partitioning methods on mean information entropy (MIE) of images (the abscissa represents flexible partitioning with shape $n = l \times h = 2^{i-1} \times 2^{9-i} = 256$).
Figure 2. The workflow of two block-compressive sensing (BCS) algorithms: (a) typical BCS algorithm, (b) FE-ABCS algorithm.
Figure 3. Reconstructed images of Cameraman and performance indicators with different BCS algorithms (TSR = 0.5).
Figure 4. The comparison of the proposed algorithm with four reconstruction algorithms (OMP, IRLS, BP, SP): (a) TSR = 0.4, (b) TSR = 0.6.
Figure 5. Correlation between PSNR*, MSRF, and TSR of test image Lena.
Figure 6. Correlation between OMSRF ($\eta_{opt}$) and TSR.
Figure 7. The correlation between PSB, Noise-std, and TSR under the six different iteration stop conditions of Lena: (a) PSB changes with Noise-std, (b) PSB changes with TSR.
Figure 8. Iterative reconstruction images based on $\gamma$ and $v_{opt}$ at the condition of Noise-std = 20 and TSR = 0.5.
Figure 9. Correlation between $v_{opt}$, TSR, and Noise-std of sub-images: (a) Noise-std = 20, (b) TSR = 0.4.
Figure 10. The workflow of the FE-ABCS-QC algorithm.
Figure 11. The workflow of the JPEG2000 algorithm.
Figure 12. Rate-distortion performance for JPEG2000 and FE-ABCS-QC: (a) Lena, (b) Monarch.
Figure 13. The two algorithms' comparison on test image Bikes at bpp = 0.25: (a) original image, (b) JPEG2000 image (PSNR = 29.80, SSIM = 0.9069, GMSD = 0.1964), (c) FE-ABCS-QC image (PSNR = 30.50, SSIM = 0.9366, GMSD = 0.1574).
Table 1. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) of reconstructed images with eight BCS algorithms based on OMP (TSR = total sampling rate). Each cell lists PSNR/SSIM.

| Images | Algorithms | TSR = 0.2 | TSR = 0.3 | TSR = 0.4 | TSR = 0.5 | TSR = 0.6 |
|---|---|---|---|---|---|---|
| Lena | M-B_C | 29.0945/0.7684 | 29.8095/0.8720 | 30.6667/0.9281 | 31.8944/0.9572 | 32.9854/0.9719 |
| | M-B_S | 31.1390/0.8866 | 31.7478/0.9227 | 32.4328/0.9480 | 33.2460/0.9651 | 34.2614/0.9763 |
| | M-FB_MIE | 30.7091/0.8613 | 31.2850/0.9115 | 32.0093/0.9413 | 32.9032/0.9600 | 33.8147/0.9737 |
| | M-FB_WM | 31.1636/0.8880 | 31.7524/0.9236 | 32.4623/0.9479 | 33.2906/0.9645 | 34.2691/0.9763 |
| | M-B_C-A_I | 29.1187/0.7838 | 29.8433/0.8803 | 30.8898/0.9305 | 32.0023/0.9577 | 33.2193/0.9732 |
| | M-FB_WM-A_I | 31.1763/0.8967 | 31.8872/0.9344 | 32.7542/0.9584 | 33.7353/0.9732 | 34.8647/0.9827 |
| | M-FB_WM-A_V | 31.2286/0.9087 | 32.0579/0.9433 | 33.0168/0.9643 | 34.1341/0.9775 | 35.4341/0.9856 |
| | M-FB_WM-A_S | 31.3609/0.9138 | 32.0943/0.9487 | 33.1958/0.9681 | 34.3334/0.9807 | 35.8423/0.9878 |
| Goldhill | M-B_C | 28.4533/0.7747 | 28.9144/0.8718 | 29.3894/0.9080 | 29.7706/0.9315 | 30.2421/0.9495 |
| | M-B_S | 29.5494/0.8785 | 29.9517/0.9089 | 30.3330/0.9341 | 30.8857/0.9514 | 31.4439/0.9640 |
| | M-FB_MIE | 29.7012/0.8882 | 29.9811/0.9154 | 30.4465/0.9364 | 30.9347/0.9516 | 31.5153/0.9642 |
| | M-FB_WM | 29.7029/0.8867 | 30.0277/0.9151 | 30.4827/0.9361 | 30.9555/0.9516 | 31.5333/0.9649 |
| | M-B_C-A_I | 28.4436/0.7809 | 28.8691/0.8693 | 29.3048/0.9089 | 29.7046/0.9321 | 30.2355/0.9499 |
| | M-FB_WM-A_I | 29.6708/0.8918 | 30.0833/0.9215 | 30.5120/0.9424 | 31.0667/0.9574 | 31.6899/0.9697 |
| | M-FB_WM-A_V | 29.5370/0.8957 | 30.0891/0.9253 | 30.5379/0.9456 | 31.0922/0.9607 | 31.8011/0.9724 |
| | M-FB_WM-A_S | 29.7786/0.8975 | 30.1482/0.9272 | 30.5689/0.9472 | 31.1310/0.9622 | 31.8379/0.9736 |
| Cameraman | M-B_C | 28.5347/0.7787 | 29.0078/0.8559 | 29.3971/0.9051 | 29.9417/0.9379 | 30.6612/0.9592 |
| | M-B_S | 31.1796/0.8763 | 31.4929/0.9121 | 31.9203/0.9391 | 32.3009/0.9581 | 32.7879/0.9704 |
| | M-FB_MIE | 31.1487/0.8782 | 31.5067/0.9123 | 31.8644/0.9403 | 32.3170/0.9577 | 32.7946/0.9703 |
| | M-FB_WM | 31.2118/0.8675 | 31.4645/0.9072 | 31.8130/0.9365 | 32.2050/0.9559 | 32.6811/0.9686 |
| | M-B_C-A_I | 28.5669/0.7852 | 28.8807/0.8612 | 29.3928/0.9164 | 29.9924/0.9461 | 30.6130/0.9639 |
| | M-FB_WM-A_I | 31.2554/0.8901 | 31.5975/0.9296 | 32.0955/0.9533 | 32.6859/0.9701 | 33.4007/0.9802 |
| | M-FB_WM-A_V | 31.2869/0.9085 | 31.8762/0.9550 | 32.5052/0.9746 | 33.3531/0.9848 | 34.4449/0.9904 |
| | M-FB_WM-A_S | 31.3916/0.9287 | 31.9731/0.9621 | 32.6508/0.9790 | 33.6779/0.9877 | 34.8958/0.9918 |
| Couple | M-B_C | 28.6592/0.7582 | 29.0162/0.8557 | 29.5471/0.9109 | 30.2260/0.9440 | 30.9136/0.9640 |
| | M-B_S | 30.1529/0.8912 | 30.6910/0.9289 | 31.2853/0.9541 | 31.9693/0.9695 | 32.7464/0.9796 |
| | M-FB_MIE | 30.1920/0.8895 | 30.7257/0.9282 | 31.2948/0.9531 | 31.9509/0.9692 | 32.7424/0.9794 |
| | M-FB_WM | 30.1357/0.8917 | 30.6890/0.9259 | 31.3185/0.9539 | 31.9520/0.9691 | 32.7622/0.9793 |
| | M-B_C-A_I | 28.5694/0.7428 | 29.0442/0.8589 | 29.5828/0.9088 | 30.2127/0.9444 | 30.9839/0.9642 |
| | M-FB_WM-A_I | 30.2105/0.9027 | 30.7783/0.9413 | 31.4680/0.9630 | 32.3143/0.9759 | 33.2604/0.9840 |
| | M-FB_WM-A_V | 30.1896/0.9099 | 30.8541/0.9454 | 31.4990/0.9670 | 32.3769/0.9792 | 33.3260/0.9864 |
| | M-FB_WM-A_S | 30.3340/0.9117 | 30.9047/0.9475 | 31.5496/0.9686 | 32.3788/0.9798 | 33.3561/0.9869 |
Table 2. The PSNR and SSIM of reconstructed images with eight BCS algorithms based on iteratively reweighted least squares (IRLS) and basis pursuit (BP). The first four data columns are for TSR = 0.4, the last four for TSR = 0.6.

| Restoring Method | Algorithms | PSNR (0.4) | SSIM (0.4) | GMSD (0.4) | CT (0.4) | PSNR (0.6) | SSIM (0.6) | GMSD (0.6) | CT (0.6) |
|---|---|---|---|---|---|---|---|---|---|
| IRLS | R-B_C | 32.38 | 0.9361 | 0.1943 | 5.729 | 33.42 | 0.9790 | 0.1348 | 13.97 |
| | R-B_S | 32.67 | 0.9634 | 0.1468 | 5.928 | 35.08 | 0.9843 | 0.0987 | 14.94 |
| | R-FB_MIE | 32.14 | 0.9593 | 0.1658 | 5.986 | 34.44 | 0.9825 | 0.1094 | 13.79 |
| | R-FB_WM | 32.46 | 0.9631 | 0.1441 | 6.071 | 34.80 | 0.9841 | 0.0992 | 14.25 |
| | R-B_C-A_I | 30.55 | 0.9460 | 0.1882 | 6.011 | 34.08 | 0.9825 | 0.1238 | 14.46 |
| | R-FB_WM-A_I | 33.05 | 0.9714 | 0.1346 | 6.507 | 36.01 | 0.9894 | 0.0863 | 14.98 |
| | R-FB_WM-A_V | 33.25 | 0.9751 | 0.1216 | 6.994 | 36.71 | 0.9914 | 0.0691 | 17.34 |
| | R-FB_WM-A_S | 33.59 | 0.9787 | 0.1188 | 7.456 | 37.23 | 0.9927 | 0.0661 | 19.38 |
| BP | P-B_C | 30.56 | 0.9380 | 0.1984 | 33.47 | 33.35 | 0.9787 | 0.1378 | 69.67 |
| | P-B_S | 32.72 | 0.9638 | 0.1484 | 34.22 | 34.61 | 0.9823 | 0.1072 | 71.00 |
| | P-FB_MIE | 32.00 | 0.9531 | 0.1627 | 34.04 | 34.14 | 0.9810 | 0.1149 | 68.78 |
| | P-FB_WM | 32.82 | 0.9635 | 0.1512 | 35.08 | 34.57 | 0.9832 | 0.1070 | 69.48 |
| | P-B_C-A_I | 30.72 | 0.9428 | 0.1973 | 33.84 | 33.57 | 0.9795 | 0.1335 | 70.27 |
| | P-FB_WM-A_I | 33.01 | 0.9705 | 0.1442 | 35.63 | 35.70 | 0.9884 | 0.0888 | 70.73 |
| | P-FB_WM-A_V | 33.32 | 0.9750 | 0.1277 | 36.98 | 36.37 | 0.9909 | 0.0742 | 72.59 |
| | P-FB_WM-A_S | 33.49 | 0.9773 | 0.1210 | 37.85 | 37.45 | 0.9932 | 0.0662 | 74.41 |
Table 3. The experimental results of Lena at different stop conditions and noise backgrounds (TSR = 0.4). For each Noise-std, the rows list PSNR, SSIM, GMSD, BEI, and CT.

| Noise-std | Index | $v_{\varsigma}$ = 0.1 | $v_{\varsigma}$ = 0.2 | $v_{\varsigma}$ = 0.3 | $v_{\varsigma}$ = 0.4 | $v_{\varsigma}$ = 0.5 | $v_{opt}$ |
|---|---|---|---|---|---|---|---|
| 5 | PSNR | 32.95 | 32.71 | 32.39 | 32.12 | 31.96 | 32.48 |
| | SSIM | 0.961 | 0.965 | 0.961 | 0.957 | 0.955 | 0.960 |
| | GMSD | 0.165 | 0.160 | 0.169 | 0.173 | 0.178 | 0.171 |
| | BEI | 10.63 | 10.25 | 10.04 | 9.92 | 9.98 | 10.09 |
| | CT | 0.741 | 0.949 | 1.166 | 1.567 | 1.911 | 1.481 |
| 10 | PSNR | 32.40 | 31.81 | 31.30 | 31.07 | 30.92 | 32.23 |
| | SSIM | 0.957 | 0.955 | 0.949 | 0.941 | 0.937 | 0.956 |
| | GMSD | 0.169 | 0.179 | 0.190 | 0.194 | 0.200 | 0.177 |
| | BEI | 10.29 | 10.04 | 10.14 | 10.15 | 10.20 | 10.21 |
| | CT | 0.741 | 0.922 | 1.233 | 1.637 | 1.961 | 0.926 |
| 15 | PSNR | 31.63 | 30.87 | 30.46 | 30.27 | 30.12 | 31.81 |
| | SSIM | 0.948 | 0.937 | 0.926 | 0.917 | 0.912 | 0.949 |
| | GMSD | 0.180 | 0.202 | 0.213 | 0.220 | 0.223 | 0.185 |
| | BEI | 10.32 | 10.25 | 10.32 | 10.38 | 10.39 | 10.25 |
| | CT | 0.727 | 0.933 | 1.154 | 1.550 | 2.063 | 0.798 |
| 20 | PSNR | 30.97 | 30.08 | 29.83 | 29.66 | 29.61 | 31.48 |
| | SSIM | 0.936 | 0.916 | 0.898 | 0.887 | 0.879 | 0.941 |
| | GMSD | 0.197 | 0.219 | 0.236 | 0.244 | 0.247 | 0.191 |
| | BEI | 10.45 | 10.34 | 10.39 | 10.43 | 10.43 | 10.50 |
| | CT | 0.720 | 0.914 | 1.203 | 1.476 | 2.348 | 0.739 |
| 30 | PSNR | 30.03 | 29.34 | 29.04 | 28.94 | 28.89 | 30.75 |
| | SSIM | 0.901 | 0.862 | 0.832 | 0.816 | 0.803 | 0.920 |
| | GMSD | 0.227 | 0.257 | 0.266 | 0.272 | 0.275 | 0.204 |
| | BEI | 10.59 | 10.52 | 10.54 | 10.55 | 10.67 | 10.62 |
| | CT | 0.901 | 0.947 | 1.237 | 1.500 | 1.996 | 0.690 |
| 40 | PSNR | 29.38 | 28.77 | 28.57 | 28.50 | 28.45 | 30.14 |
| | SSIM | 0.856 | 0.795 | 0.756 | 0.734 | 0.717 | 0.899 |
| | GMSD | 0.252 | 0.277 | 0.286 | 0.290 | 0.291 | 0.221 |
| | BEI | 10.79 | 10.57 | 10.71 | 10.68 | 10.67 | 10.72 |
| | CT | 0.736 | 0.791 | 1.169 | 1.534 | 2.047 | 0.678 |
Table 4. The performance indexes of test images under different iterative stop conditions.

| Stop Condition | Index | Lena | Baboon | Flowers | Oriental Gate |
|---|---|---|---|---|---|
| $\gamma$ = 300 | PSNR | 32.17 | 29.84 | 31.40 | 33.05 |
| | SSIM | 0.9562 | 0.8703 | 0.9744 | 0.9698 |
| | GMSD | 0.1661 | 0.2095 | 0.1812 | 0.1636 |
| | BEI | 10.16 | 10.72 | 9.866 | 8.845 |
| | CT | 0.8310 | 0.8550 | 0.8440 | 1.050 |
| | PSGBC | 21.93 | 13.52 | 20.28 | 21.09 |
| $\gamma$ = 1 | PSNR | 31.99 | 29.75 | 31.50 | 32.66 |
| | SSIM | 0.9574 | 0.8731 | 0.9768 | 0.9693 |
| | GMSD | 0.1638 | 0.1738 | 0.1571 | 0.1459 |
| | BEI | 10.01 | 10.58 | 9.583 | 8.745 |
| | CT | 2.905 | 2.704 | 2.823 | 3.578 |
| | PSGBC | 6.429 | 5.224 | 7.240 | 6.934 |
| $v_{opt}$ | PSNR | 32.75 | 30.03 | 31.78 | 33.15 |
| | SSIM | 0.9603 | 0.8729 | 0.9771 | 0.9637 |
| | GMSD | 0.1471 | 0.1956 | 0.1538 | 0.1471 |
| | BEI | 9.898 | 10.68 | 9.693 | 9.025 |
| | CT | 0.7630 | 0.8900 | 0.8500 | 0.8740 |
| | PSGBC | 28.31 | 14.10 | 24.50 | 27.53 |
Table 5. The comparison results of different test images at various rates (bits per pixel, bpp) for the JPEG2000 algorithm and the FE-ABCS-QC algorithm. Each cell lists PSNR/SSIM/GMSD; ∆P/∆S/∆G are the differences (FE-ABCS-QC minus JPEG2000).

| Test Image | Rate | JPEG2000 (PSNR/SSIM/GMSD) | FE-ABCS-QC (PSNR/SSIM/GMSD) | ∆P/∆S/∆G |
|---|---|---|---|---|
| Lena | bpp = 0.0625 | 31.64/0.9387/0.1842 | 30.58/0.7341/0.2478 | −1.06/−0.2046/0.0636 |
| | bpp = 0.125 | 33.38/0.9697/0.1399 | 32.79/0.9413/0.1702 | −0.59/−0.0284/0.0303 |
| | bpp = 0.2 | 34.59/0.9807/0.1161 | 34.20/0.9710/0.1339 | −0.39/−0.0097/0.0178 |
| | bpp = 0.25 | 35.25/0.9850/0.0996 | 37.80/0.9932/0.0612 | 2.55/0.0082/−0.0384 |
| | bpp = 0.3 | 35.73/0.9875/0.0917 | 38.28/0.9941/0.0554 | 2.55/0.0066/−0.0363 |
| Monarch | bpp = 0.0625 | 30.56/0.8184/0.2335 | 27.52/0.3615/0.2726 | −3.04/−0.4569/0.0391 |
| | bpp = 0.125 | 31.31/0.9074/0.1881 | 29.12/0.6388/0.2568 | −2.19/−0.2686/0.0687 |
| | bpp = 0.2 | 32.49/0.9466/0.1554 | 32.91/0.9473/0.1507 | 0.42/0.0007/−0.0047 |
| | bpp = 0.25 | 32.81/0.9572/0.1476 | 35.77/0.9886/0.0682 | 2.96/0.0314/−0.0794 |
| | bpp = 0.3 | 33.40/0.9679/0.1305 | 36.17/0.9896/0.0664 | 2.77/0.0217/−0.0641 |
