4.2.2. Setting of the Adaptive Sampling Rate *cSR*

In the process of adaptive observation, the most important task is to design a reasonable random observation matrix, whose dimension must be constrained by the adaptive sampling rate so that each sub-image is sampled according to its complexity. Therefore, the setting of $c_{SR} = \{c_{SR}(\mathbf{x}_i),\ i = 1, \cdots, T_1\}$ is crucial, and its basic form is mainly determined by the synthetic feature $J = \{J(\mathbf{x}_i),\ i = 1, \cdots, T_1\}$ and the sampling rate factor $\eta_{SR} = \{\eta_{SR}(\mathbf{x}_i),\ i = 1, \cdots, T_1\}$.
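To make the role of $c_{SR}$ concrete, the following sketch allocates a total sampling rate across sub-images in proportion to the logarithm of a per-block synthetic feature $J$, with a minimum-rate floor $\eta_{\min}$. The function name and the exact normalization are ours, chosen so that the mean per-block rate equals the total budget; the paper's precise form is given by Equation (49) below.

```python
import numpy as np

def allocate_sampling_rates(J, total_sr, eta_min=0.3):
    """Spread the total sampling rate over sub-images in proportion to
    log2 of their synthetic feature, with a per-block floor.
    Follows the spirit of Equation (49); the paper's exact
    normalization differs."""
    J = np.asarray(J, dtype=float)
    w = np.log2(1.0 + J)              # log-domain weights, safe at J = 0
    w = w / w.sum()                   # proportional shares, sum to 1
    csr = eta_min * total_sr + (1.0 - eta_min) * total_sr * len(J) * w
    # mean(csr) == total_sr, so the overall sampling budget is preserved
    return csr
```

Blocks with larger $J$ (more complex content) receive a higher per-block rate, while the floor keeps even flat blocks observable.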

The definition of $J(\mathbf{x}_i)$ is implemented by setting the corresponding weighting coefficients $\lambda_1$ and $\lambda_2$. This article obtains optimized values for $\lambda_1$ and $\lambda_2$ through analysis and partial verification experiments: $\lambda_1 = 1$ and $\lambda_2 = 2$.
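A minimal sketch of such a weighted synthetic feature is shown below. The two component features here (gray-level entropy and normalized variance) are stand-ins chosen by us for illustration; the paper's actual component features are defined in its earlier equations, and only the weighting structure and the values $\lambda_1 = 1$, $\lambda_2 = 2$ come from the text.

```python
import numpy as np

def information_entropy(block):
    """Shannon entropy of the gray-level histogram of a sub-image."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def synthetic_feature(block, lam1=1.0, lam2=2.0):
    """Weighted combination J(x_i) = lam1 * f1 + lam2 * f2.
    f1/f2 (entropy and normalized variance) are illustrative
    placeholders for the paper's component features."""
    f1 = information_entropy(block)
    f2 = np.var(np.asarray(block, dtype=float) / 255.0)
    return lam1 * f1 + lam2 * f2
```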

The purpose of setting $\eta_{SR}(\mathbf{x}_i)$ is to establish the mapping between $J(\mathbf{x}_i)$ and $c_{SR}$ through Equations (10) and (11). However, the mapping established by Equation (10) does not consider the minimum sampling rate. In fact, the proposed algorithm introduces a minimum sampling rate factor (MSRF) to improve performance; that is, the function relating $\eta_{SR}(\mathbf{x}_i)$ to $J(\mathbf{x}_i)$ is modified as follows.


$$\eta_{SR}(\mathbf{x}_i) = \left(1 + (1 - \eta_{\min}) \frac{T_1 - T_1'}{T_1'}\right) \frac{\log_2 J(\mathbf{x}_i)}{\frac{T_1}{T_1'} \sum_{j=1}^{T_1} \log_2 J(\mathbf{x}_j)} \tag{49}$$

where $T_1'$ is the number of sub-images that meet the requirement of the minimum threshold.

4.2.3. Setting of the Iteration Stop Condition *vopt*

In the iterative reconstruction part, the focus of the proposed algorithm is to obtain the best possible reconstructed image in a realistic noisy background by choosing *vopt*. This paper combines BIC and BCS to derive the formula for the optimal iteration number of the proposed algorithm:

$$v\_{opt} = \left\{ v\_{opt}^{i}, i = 1, \cdots, T\_{1} \right\} = \left\{ v^{i} \middle| \operatorname\*{argmin}\_{v^{i}} \left( \frac{(2 + \sqrt{2} \log m^{i}) v^{i} - m^{i}}{m^{i}} \sigma\_{w^{i}}^{2} + \epsilon\_{y\_{i}}^{\*} \right), i = 1, \cdots, T\_{1} \right\}. \tag{50}$$
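Equation (50) trades a BIC-style penalty that grows with the iteration count against the data-fit error $\epsilon_{y_i}^*$, which shrinks as iterations proceed. A minimal sketch of this selection is given below; we assume the per-iteration residual errors have been tracked during reconstruction (e.g., inside an OMP loop), which the paper does not spell out.

```python
import numpy as np

def optimal_iterations(residual_errors, m, sigma_w2):
    """Pick v minimizing the criterion of Equation (50).
    residual_errors[v-1] is the data-fit error eps*_y after v
    iterations (assumed tracked during reconstruction); m is the
    measurement dimension, sigma_w2 the noise variance."""
    v = np.arange(1, len(residual_errors) + 1)
    # BIC-style penalty: grows linearly with the iteration count v
    penalty = ((2.0 + np.sqrt(2.0) * np.log(m)) * v - m) / m * sigma_w2
    crit = penalty + np.asarray(residual_errors, dtype=float)
    return int(v[np.argmin(crit)])
```

With a decaying residual curve, the minimizer falls strictly between "too few" and "too many" iterations, which is exactly the behavior Section 5.2 verifies experimentally.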

### **5. Experiments and Results Analysis**

In order to evaluate the FE-ABCS algorithm, experimental verification is performed in three scenarios. This paper first discusses the performance of the improved algorithm with flexible partitioning and adaptive sampling in the absence of noise; secondly, it discusses how the optimal iteration number can be combined to eliminate the noise effect and achieve the best quality (composite indicator) under noisy conditions. Finally, the differences between the proposed algorithm and other non-CS image compression algorithms are analyzed. The experiments were carried out in the MATLAB R2016b environment, and 20 typical grayscale images with 256 × 256 resolution were used for testing, selected from the LIVE Image Quality Assessment Database, the SIPI Image Database, the BSDS500 Database, and other standard digital image processing test databases. The performance indicators are Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Gradient Magnitude Similarity Deviation (GMSD), Block Effect Index (BEI), and Computational Cost (CC). These five performance indicators are defined as follows:

The PSNR indicator measures the amplitude error between the reconstructed image and the original image and is the most common and widely used objective measure of image quality:

$$PSNR = 20 \times \log\_{10}(\frac{255}{\sqrt{\frac{1}{N}\sum\_{i=1}^{N}(x\_i - x\_i^\*)^2}}) \tag{51}$$

where $x_i$ and $x_i^*$ are the gray values of the $i$-th pixel of the reconstructed image and the original image, respectively, and $N$ is the number of pixels.
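Equation (51) translates directly into code; a minimal sketch (function name ours):

```python
import numpy as np

def psnr(x, x_star, peak=255.0):
    """PSNR of Equation (51): log ratio of the peak gray value
    to the root-mean-square error between the two images."""
    x = np.asarray(x, dtype=float)
    x_star = np.asarray(x_star, dtype=float)
    mse = np.mean((x - x_star) ** 2)
    return 20.0 * np.log10(peak / np.sqrt(mse))
```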

The SSIM indicator is adopted to indicate similarity between the reconstructed image and the original image:

$$SSIM = \frac{(2\mu\_{\text{x}}\mu\_{\text{x}^\*} + c\_1)(2\sigma\_{\text{xx}^\*} + c\_2)}{(\mu\_{\text{x}}^2 + \mu\_{\text{x}^\*}^2 + c\_1)(\sigma\_{\text{x}}^2 + \sigma\_{\text{x}^\*}^2 + c\_2)}\tag{52}$$

where $\mu_x$ and $\mu_{x^*}$ are the means of $x$ and $x^*$, $\sigma_x$ and $\sigma_{x^*}$ are the standard deviations of $x$ and $x^*$, $\sigma_{xx^*}$ is the covariance of $x$ and $x^*$, the constants are $c_1 = (0.01L)^2$ and $c_2 = (0.03L)^2$, and $L$ is the dynamic range of pixel values.
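A minimal sketch of Equation (52), computed over the whole image in a single window (practical SSIM implementations average this statistic over local windows; the function name is ours):

```python
import numpy as np

def ssim_global(x, x_star, L=255.0):
    """Single-window SSIM of Equation (52) over the whole image."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(x_star, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()     # sigma_{xx*}
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizing constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images give SSIM = 1, the upper bound of the index.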

The GMSD indicator is mainly used to characterize the degree of distortion of the reconstructed image. The larger the value, the worse the quality of the reconstructed image:

$$\text{GMSD} = \text{std}(\{\text{GMS}(i) | i = 1, \cdots, N\}) = \text{std}(\left\{ \frac{2m\_{\text{x}}(i)m\_{\text{x}^\*}(i) + c\_3}{m\_{\text{x}}^2(i) + m\_{\text{x}^\*}^2(i) + c\_3} | i = 1, \cdots, N\right\}) \tag{53}$$

where $std(\cdot)$ is the standard deviation operator, $GMS$ is the gradient magnitude similarity between $x$ and $x^*$, $m_x(i) = \sqrt{(h_H \otimes x(i))^2 + (h_V \otimes x(i))^2}$ and $m_{x^*}(i) = \sqrt{(h_H \otimes x^*(i))^2 + (h_V \otimes x^*(i))^2}$ are the gradient magnitudes of $x(i)$ and $x^*(i)$, $h_H$ and $h_V$ represent the Prewitt operators in the horizontal and vertical directions, and $c_3$ is an adjustment constant.
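A minimal sketch of Equation (53) using Prewitt gradients; the text leaves $c_3$ as an unspecified adjustment constant, so the value 170 below (used in the original GMSD literature) is an assumption, as are the function names:

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt operators for the horizontal (h_H) and vertical (h_V) gradients
H_PREWITT = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
V_PREWITT = H_PREWITT.T

def gmsd(x, x_star, c3=170.0):
    """GMSD of Equation (53): standard deviation of the per-pixel
    gradient magnitude similarity map."""
    def grad_mag(img):
        img = np.asarray(img, dtype=float)
        gh = convolve(img, H_PREWITT, mode='nearest')
        gv = convolve(img, V_PREWITT, mode='nearest')
        return np.sqrt(gh ** 2 + gv ** 2)
    mx, my = grad_mag(x), grad_mag(x_star)
    gms = (2 * mx * my + c3) / (mx ** 2 + my ** 2 + c3)
    return gms.std()
```

Identical images give a constant GMS map and hence GMSD = 0; any localized distortion raises the deviation.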

The main purpose of introducing BEI is to evaluate the blockiness of the reconstructed image under noisy conditions; the larger the value, the more obvious the block effect:

$$BEI = \log\_2\left[\frac{sum(\text{edge}(\mathbf{x}^\*)) - sum(\text{edge}(\mathbf{x})) + sum(\left| \text{edge}(\mathbf{x}^\*) - \text{edge}(\mathbf{x}) \right|)}{2}\right] \tag{54}$$

where $edge(\cdot)$ denotes the edge-detection function applied to the image, $sum(\cdot)$ counts all edge points of an edge map, and $|\cdot|$ is the absolute value operator.
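A minimal sketch of Equation (54). The paper does not specify which edge detector $edge(\cdot)$ uses, so the simple gradient-threshold detector below is a placeholder, and the `max(extra, 2)` guard (keeping the logarithm defined when the two edge maps coincide) is our addition:

```python
import numpy as np

def simple_edges(img, thresh=30.0):
    """Binary edge map via horizontal/vertical differences; a stand-in
    for the unspecified edge(.) function of Equation (54)."""
    img = np.asarray(img, dtype=float)
    gh = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gv = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return (np.maximum(gh, gv) > thresh).astype(int)

def bei(x, x_star, thresh=30.0):
    """Block Effect Index of Equation (54)."""
    e, e_star = simple_edges(x, thresh), simple_edges(x_star, thresh)
    extra = e_star.sum() - e.sum() + np.abs(e_star - e).sum()
    # guard (our assumption): identical edge maps give BEI = 0
    return np.log2(max(extra, 2) / 2.0)
```

Spurious block-boundary edges in the reconstruction inflate `extra` and therefore the index.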

The Computational Cost indicator measures the efficiency of the algorithm and is usually represented by the Computation Time (CT); the smaller the CT, the more efficient the algorithm:

$$CT = t\_{end} - t\_{start} \tag{55}$$

where, *tstart* and *tend* indicate the start time and end time, respectively.

In addition, the sparse basis and the random measurement matrices use discrete cosine orthogonal basis and orthogonal symmetric Toeplitz matrices [36,37], respectively.

### *5.1. Experiment and Analysis without Noise*

### 5.1.1. Performance Comparison of Various Algorithms

In order to verify the superiority of the proposed ABCS algorithm, this paper mainly uses the OMP algorithm as the basic reconstruction algorithm. Based on the OMP reconstruction algorithm, eight BCS algorithms (including the proposed algorithm with flexible partitioning and adaptive sampling) are listed, and their performance under different overall sampling rates is compared, as shown in Table 1. In this experiment, four standard grayscale images are used for performance testing; the sub-image dimension is fixed at 256, and the number of reconstruction iterations is limited to one quarter of the measurement dimension.

These eight BCS algorithms are named M-B\_C, M-B\_S, M-FB\_MIE, M-FB\_WM, M-B\_C-A\_S, M-FB\_WM-A\_I, M-FB\_WM-A\_V, and M-FB\_WM-A\_S, which in turn represent BCS with a fixed column block, BCS with a fixed square block, BCS with flexible partitioning by MIE, BCS with flexible partitioning by WM, BCS with a fixed column block and IE-adaptive sampling, BCS with WM-flexible partitioning and IE-adaptive sampling, BCS with WM-flexible partitioning and variance-adaptive sampling, and BCS with WM-flexible partitioning and SF-adaptive sampling (a form of the FE-ABCS algorithm in the absence of noise). Comparing the data in Table 1, the following observations can be made:



**Table 1.** The Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) of reconstructed images with eight BCS algorithms based on OMP. (TSR = total sampling rate).

Figure 3 shows the reconstructed images of Cameraman using the above eight BCS algorithms and a multimode filter at an overall sampling rate of 0.5. Compared with the other algorithms, image (i), reconstructed by the proposed algorithm, has good quality both in performance indicators and in subjective vision. Adding multimode filtering improves the performance of all eight BCS algorithms. When comparing the corresponding data (SR = 0.5) in Figure 3 and Table 1, PSNR shows a certain improvement after adding multimode filtering, and so does SSIM for the first six BCS algorithms, but not for the latter two. The reason is that the adaptive sampling rates of the latter two algorithms are both related to the variance (the larger the variance, the higher the sampling rate), while SSIM depends on both the variance and the covariance. In addition, while filtering out high-frequency noise, the filtering process also loses some of the signal's own high-frequency components, which otherwise contribute to the improvement of SSIM. Therefore, the latter two algorithms reduce the SSIM for images with many high-frequency components (the SSIM values of images (h) and (i) in Figure 3 are slightly smaller than the corresponding values in Table 1), but for most images without much high-frequency information, the SSIM is improved.

**Figure 3.** Reconstructed images of Cameraman and performance indicators with different BCS algorithms (TSR = 0.5).

Secondly, in order to evaluate the effectiveness and universality of the proposed algorithm, the IRLS and BP reconstruction algorithms were also adopted, in addition to the OMP baseline, and combined with the proposed method to generate eight BCS algorithms each. Table 2 shows the experimental data of the above two groups of algorithms tested with the standard image Lena. From the resulting data, the proposed method yields a certain improvement for the BCS algorithms based on IRLS and BP, although it incurs a slightly higher computation time due to the added complexity of the proposed algorithm itself.


**Table 2.** The PSNR and SSIM of reconstructed images with eight BCS algorithms based on iteratively reweighted least square (IRLS) and basis pursuit (BP).

Furthermore, comparative experiments of the proposed algorithm combined with different image reconstruction algorithms (OMP, IRLS, BP, and SP) were also carried out. Figure 4 records the data of these experiments, tested with the standard image Lena under the conditions TSR = 0.4 and TSR = 0.6, respectively. The experimental data show that the proposed method exhibits little difference in the PSNR and SSIM indexes across the four algorithms. However, in terms of the GMSD index, the IRLS and BP algorithms are clearly better than OMP and SP. In terms of computation time, BP, which is based on the *l*<sup>1</sup> norm, performs significantly worse than the other three, consistent with the content of Section 1.

**Figure 4.** The comparison of the proposed algorithm with 4 reconstruction algorithms (OMP, IRLS, BP, SP): (**a**) TSR = 0.4, (**b**) TSR = 0.6.

### 5.1.2. Parametric Analysis of the Proposed Algorithm

The main parameters of the proposed algorithm involve the design and verification of the weighting coefficients (*cTS*, λ1, λ2) and the minimum sampling rate factor (ηmin). The design of the three weighting coefficients was introduced in Section 4.2, and their effect on performance was reflected in the comparison of the eight algorithms in Section 5.1.1. Here, only the selection and effect of ηmin need to be studied, and the influence of ηmin on PSNR under different TSR is analyzed.

Figure 5 shows the analysis results for the test image Lena on the correlation between PSNR, TSR, and MSRF (ηmin). It can be seen from Figure 5 that the optimal value of the minimum sampling rate factor (OMSRF), i.e., the value maximizing the PSNR of Lena's recovered image, decreases as the TSR increases. In addition, the gray value in Figure 5 represents the revised PSNR of the recovered image ($PSNR^* = PSNR - \overline{PSNR}$).

**Figure 5.** Correlation between PSNR\* , MSRF, and TSR of test image Lena.

Then, many other test images were analyzed in this paper to verify the relationship between TSR and OMSRF ($\eta_{opt} = \operatorname*{argmax}_{\eta_{\min}} PSNR(\mathbf{x}, \eta_{\min})$), and the experimental results of eight typical test images are shown in Figure 6. According to the data, a reasonable setting of the MSRF ($\eta_{opt}$) in the algorithm can be obtained by curve fitting. A simple baseline fit is used in the proposed algorithm of this article: $\eta_{opt} = 0.1 + 6 \times (0.8 - TSR)/7$.

**Figure 6.** Correlation between OMSRF (η*opt*) and TSR.
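The baseline fit above can be evaluated directly; a minimal sketch (the formula is from the text, the function name is ours):

```python
def omsrf(tsr):
    """Baseline fit of the optimal minimum sampling rate factor:
    eta_opt = 0.1 + 6 * (0.8 - TSR) / 7, decreasing in TSR."""
    return 0.1 + 6.0 * (0.8 - tsr) / 7.0
```

For example, at TSR = 0.8 the fit returns the floor value 0.1, and lower total sampling rates call for a larger minimum-rate factor.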

### *5.2. Experiment and Analysis Under Noisy Conditions*

5.2.1. Effect Analysis of Different Iteration Stop Conditions on Performance

In the noiseless case, the larger the iteration number (*v*) of the reconstruction algorithm, the better the reconstructed image. But under noisy conditions, the quality of the reconstructed image does not change monotonically with increasing *v*, as analyzed in Section 3.3. The usual iteration stop conditions are: (1) using the sparsity (ς) of the signal as the stopping condition, i.e., a fixed number of iterations ($v_{stop1} = \varsigma \cdot m$), and (2) using a differential threshold (γ) on the recovered value, i.e., stopping once the difference between two adjacent iterative outputs falls below the threshold ($v_{stop2} = \min\{v \mid \|y^*_{v-1} - y^*_v\| \le \gamma\}$). Since neither method can guarantee optimal recovery of the original signal in a noisy background, the innovation of the FE-ABCS algorithm is to compensate for this deficiency with a constraint (*vopt*) based on error analysis that ensures the best iterative reconstruction. The rationality of the proposed scheme is verified experimentally in this section; without loss of generality, OMP is used as the basic reconstruction algorithm, as in Section 5.1.
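The two conventional stop conditions can be sketched as predicates checked inside a reconstruction loop (function names and default values are ours for illustration):

```python
import numpy as np

def stop_by_sparsity(v, m, zeta=0.25):
    """Conventional condition 1: fixed iteration count
    v_stop1 = zeta * m (zeta is the assumed sparsity ratio)."""
    return v >= int(zeta * m)

def stop_by_threshold(y_prev, y_curr, gamma=1e-3):
    """Conventional condition 2: stop when two adjacent iterative
    outputs differ by less than the threshold gamma."""
    diff = np.asarray(y_curr, dtype=float) - np.asarray(y_prev, dtype=float)
    return np.linalg.norm(diff) <= gamma
```

Neither predicate consults the noise level, which is why the error-analysis condition *vopt* of Equation (50) can outperform both in noisy backgrounds.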

The specific experimental results of the test image Lena under different iteration stop conditions and different noise backgrounds are recorded in Table 3. The value of Noise-std represents the standard deviation of the added Gaussian noise. From the overall trend of Table 3, selecting *vopt* as the stop condition for iterative reconstruction performs better than selecting *vstop1*. This advantage becomes especially pronounced as the Noise-std increases.


**Table 3.** The experimental results of Lena at different stop conditions and noise background (TSR = 0.4).

In addition, in order to comprehensively evaluate the impact of different iteration stop conditions on the performance of reconstructed images, this paper combines the above three indicators into a composite index PSB (PSB = PSNR × SSIM/BEI) for evaluating the quality of reconstructed images. The relationship between the PSB of the reconstructed image and Noise-std under different iteration conditions was studied, as was the relationship between PSB and TSR. Figure 7 shows the corresponding relationships between PSB, Noise-std, and TSR under the six different iteration stop conditions for Lena. It can be seen from Figure 7a that, compared with the five sparsity-based (ς) stop conditions, the *vopt*-based error-analysis reconstruction generally performs well under different noise backgrounds. Similarly, Figure 7b shows that the *vopt*-based error-analysis reconstruction retains this advantage at different total sampling rates.

**Figure 7.** The correlation between the PSB, Noise-std, and TSR under the six different iteration stop conditions of Lena: (**a**) PSB changes with Noise-std, (**b**) PSB changes with TSR.

Furthermore, the differential threshold (γ)-based reconstruction algorithm and the *vopt*-based error-analysis reconstruction algorithm were compared in this article. Two standard test images and two real-world images were adopted for the comparative experiment under the conditions Noise-std = 20 and TSR = 0.5. The experimental results show that the *vopt*-based error-analysis reconstruction has a significant advantage over the γ-based reconstruction in both PSNR and PSGBC (another composite index: PSGBC = PSNR × SSIM/GMSD/BEI/CT), as can be seen from Table 4, although there is a slight loss in BEI. Figure 8 shows the reconstructed images of these four images with the differential threshold (γ) and error analysis (*vopt*) as the iterative stop condition.


**Table 4.** The performance indexes of test images under different iterative stop conditions.

**Figure 8.** Iterative reconstruction images based on γ and *vopt* at the condition of Noise-std = 20 and TSR = 0.5.

### 5.2.2. Impact of Noise-Std and TSR on *vopt*

Since *vopt* is important to the proposed algorithm, it is necessary to analyze its influencing factors. According to Equation (44), *vopt* is mainly determined by the measurement dimension of the signal and the added noise intensity under the BIC condition. In this section, the test image Lena is divided into 256 sub-images, the relationship between the optimal iterative stop condition ($v^i_{opt}$) of each sub-image, the TSR, and the Noise-std is analyzed, and the experimental results are recorded in Figure 9. It can be seen from Figure 9a that the correlation between *vopt* and TSR is small, but Figure 9b shows that *vopt* is strongly correlated with Noise-std; that is, the larger the Noise-std, the smaller the *vopt*.

**Figure 9.** Correlation between *vopt*, TSR, and Noise-std of sub-images: (**a**) Noise-std = 20, (**b**) TSR = 0.4.

### *5.3. Application and Comparison Experiment of FE-ABCS Algorithm in Image Compression*

### 5.3.1. Application of FE-ABCS Algorithm in Image Compression

Although the FE-ABCS algorithm belongs to CS theory, which is mainly used for the reconstruction of sparse images at low sampling rates, the algorithm can also be used for image compression after modification. The purpose of conventional image compression algorithms (such as JPEG, JPEG2000, TIFF, and PNG) is to reduce the amount of data while maintaining a certain image quality through quantization and encoding. Therefore, quantization and encoding modules are added to the FE-ABCS of Figure 2b to form a new algorithm for image compression, which is shown in Figure 10 and named FE-ABCS-QC.

**Figure 10.** The workflow of the FE-ABCS-QC algorithm.

In order to demonstrate the difference between the proposed algorithm and traditional image compression, the JPEG2000 algorithm is selected as the comparison algorithm without loss of generality; its workflow is shown in Figure 11. Comparing Figures 10 and 11, the FDWT and IDWT modules of the JPEG2000 algorithm are replaced by the observing and restoring modules of the proposal, respectively. Moreover, in the observing and restoring modules the input and output dimensions differ (*M* < *T*<sup>1</sup> × *n* = *N*), unlike the FDWT and IDWT modules, whose input and output dimensions are the same (both *T*<sup>1</sup> × *n* = *N*). These differences give the proposed algorithm a larger compression ratio (CR) and smaller bits per pixel (bpp) than JPEG2000 under the same quantization and encoding conditions.

**Figure 11.** The workflow of the JPEG2000 algorithm.

5.3.2. Comparison Experiment between the Proposed Algorithm and the JPEG2000 Algorithm

In general, image compression algorithms are evaluated by rate-distortion performance. To compare the FE-ABCS-QC and JPEG2000 algorithms, the PSNR, SSIM, and GMSD indicators are adopted in this section. In addition, the Rate (bpp) of the above two algorithms is defined as follows:

$$Rate = \frac{K^\*}{N} \tag{56}$$

where, *K\** is the number of bits in the code stream after encoding, and *N* is the number of pixels in the original image.

In order to compare the performance of the above two algorithms, multiple standard images were tested, and Table 5 records the complete experimental data for the two algorithms at various rates when using Lena and Monarch as test images. At the same time, the relationship between PSNR (used as the main distortion evaluation index) and the Rate of the two test images is illustrated in Figure 12. Based on the objective data of Table 5 and Figure 12, the advantage of the FE-ABCS-QC algorithm over the JPEG2000 algorithm grows with the rate; that is, at small rates the JPEG2000 algorithm is superior to the FE-ABCS-QC algorithm, while at medium and larger rates the opposite holds.

**Table 5.** The comparison results of different test-images under the various conditions (bits per pixel (bpp)) based on the JPEG2000 algorithm and the FE-ABCS-QC algorithm.


**Figure 12.** Rate-distortion performance for JPEG2000 and FE-ABCS-QC: (**a**) Lena, (**b**) Monarch.

Furthermore, the experimental results are recorded as images in addition to the objective data comparison. Figure 13 compares the two algorithms' compressed-image restoration effects at bpp = 0.25 using Bikes as the test image. Comparing (b) and (c) of Figure 13, the image generated by the FE-ABCS-QC algorithm is slightly better than that of the JPEG2000 algorithm, in terms of both objective data and subjective perception.

**Figure 13.** The two algorithms' comparison of test image Bikes at the condition of bpp = 0.25: (**a**) original image, (**b**) JPEG2000 image (PSNR = 29.80, SSIM = 0.9069, GMSD = 0.1964), (**c**) image by the FE-ABCS-QC algorithm (PSNR = 30.50, SSIM = 0.9366, GMSD = 0.1574).

Finally, the following conclusions can be drawn from the experimental data and theoretical analysis.


### **6. Conclusions**

Based on the traditional block compressed sensing model, an improved algorithm (FE-ABCS) was proposed in this paper, and its overall workflow and key points were specified. Compared with the traditional BCS algorithm, a flexible partition was first adopted to improve the rationality of partitioning; secondly, the synthetic feature was used to provide a more reasonable adaptive sampling basis for each sub-image block; and finally, error analysis was added to the iterative reconstruction process to minimize the error between the reconstructed and original signals in a noisy background. The experimental results show that the proposed algorithm improves image quality in both noiseless and noisy backgrounds, especially the composite index of the reconstructed image under noise, which benefits the practical application of the BCS algorithm and the application of FE-ABCS in image compression.

**Author Contributions:** Conceptualization, methodology, software, validation and writing, Y.Z.; data curation and visualization, Q.S. and Y.Z.; formal analysis, W.L. and Y.Z.; supervision, project administration and funding acquisition, W.L.

**Funding:** This work was supported by the National Natural Science Foundation of China (61471191).

**Conflicts of Interest:** The authors declare no conflict of interest.
