### **1. Introduction**

The traditional Nyquist sampling theorem states that a signal must be sampled at more than twice its highest frequency to guarantee that the original signal can be completely reconstructed from the sampled values. Compressive sensing (CS) theory breaks through this traditional limitation in signal acquisition and can reconstruct a high-dimensional sparse or compressible signal from lower-dimensional measurements [1]. As an alternative to Nyquist sampling, CS theory is being widely studied, especially in image processing. Research on CS theory focuses on several important aspects: sparse representation, measurement-matrix construction, and reconstruction algorithms [2,3]. In sparse representation, the main research hotspot is how to construct sparse dictionaries over orthogonal systems and over-complete dictionaries for suboptimal approximation [4,5]. Measurement-matrix construction mainly covers universal random measurement matrices and improved deterministic measurement matrices [6]. Research on reconstruction algorithms concentrates on suboptimal-solution methods and training algorithms based on self-learning [7,8]. As research and application of CS theory advance, especially in 2D and 3D image processing, CS technology faces several challenges, including the computational curse of dimensionality and the storage problem, both of which grow with the geometric scale of the images. To address these challenges, researchers have proposed many fast compressive sensing algorithms to reduce computation cost and block-compressive sensing (BCS) algorithms to reduce storage cost [9–12]. This article builds on the analysis of these two points.

CS recovery algorithms for images can be broadly divided into convex optimization algorithms, non-convex algorithms, and hybrid algorithms. Convex optimization algorithms include basis pursuit (BP), greedy basis pursuit (GBP), iterative hard thresholding (IHT), etc. Non-convex algorithms include orthogonal matching pursuit (OMP), subspace pursuit (SP), iteratively reweighted least squares (IRLS), etc. Hybrid algorithms include sparse Fourier description (SF), chain pursuit (CP), heavy hitters on steroids pursuit (HHSP), and other mixed algorithms [13–15]. Convex optimization algorithms based on *l*1 minimization achieve good reconstruction quality, but at the cost of large computational complexity and high time complexity. Compared with convex optimization algorithms, non-convex algorithms such as greedy pursuit, which are based on *l*0 minimization, run quickly with slightly lower accuracy and can still meet the general requirements of practical applications. In addition, the iterative thresholding method has been widely used within both classes with excellent performance. However, the iterative thresholding method is sensitive to the choice of threshold and the initial value of the iteration, which affects the efficiency and accuracy of the algorithm [16,17]. The thresholds in this process are often chosen using simple error values (absolute or relative) or a fixed number of iterations as the stopping criterion, which does not guarantee an optimal algorithm [18,19].
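As an illustration of the greedy pursuit family mentioned above, the following is a minimal NumPy sketch of OMP, assuming a generic measurement matrix `Phi` and a known sparsity level `K`; it is an expository sketch, not the implementation used in this paper.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-6):
    """Greedy OMP: recover a K-sparse s such that y ≈ Phi @ s."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    s_hat = np.zeros(N)
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit of the coefficients on the current support.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    s_hat[support] = coeffs
    return s_hat
```

The per-iteration cost is dominated by the correlation step and a small least-squares solve, which is why greedy methods run much faster than *l*1 solvers at moderate sparsity.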

The focus of this paper is on three aspects: block partitioning under weighted information entropy, adaptive sampling based on synthetic features, and iterative reconstruction guided by error analysis. Mean information entropy (MIE) and texture saliency (TS) are introduced into the block partitioning to provide a basis for improving the algorithm. The adaptive sampling part improves overall image quality by designing per-block sampling rates from the variance and local saliency (LS). The iterative reconstruction part uses the relationship among three errors to determine the number of iterations required for the best reconstructed image under different noise backgrounds. Based on the above points, this paper proposes an improved adaptive block-compressive sensing algorithm with flexible partitioning and error analysis, called FE-ABCS.
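To make the adaptive-sampling idea concrete, here is an illustrative Python sketch that allocates per-block sampling rates in proportion to block variance alone. The helper `allocate_rates` and its proportional rule are simplifications invented for exposition; FE-ABCS itself combines variance with local saliency.

```python
import numpy as np

def allocate_rates(blocks, total_rate, floor=0.01, ceil=1.0):
    """Distribute a total sampling rate over image blocks in proportion
    to their variance, so detail-rich blocks receive more measurements.
    (Simplified stand-in: the paper also weights by local saliency.)"""
    var = np.array([b.var() for b in blocks], dtype=float)
    if var.sum() == 0:                          # flat image: uniform rates
        return np.full(len(blocks), total_rate)
    rates = total_rate * len(blocks) * var / var.sum()
    return np.clip(rates, floor, ceil)          # keep rates physically valid
```

The clipping step reflects a practical constraint: every block needs at least a few measurements, and no block can be sampled above rate 1.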

The remainder of this paper is organized as follows. Section 2 covers the preliminaries of BCS. Section 3 presents the problem formulation and important factors. The structure of the proposed FE-ABCS algorithm is presented in Section 4. Section 5 reports the experiments and result analysis demonstrating the benefits of FE-ABCS. The paper concludes with Section 6.

### **2. Preliminaries**

### *2.1. Compressive Sensing*

The theory of compressive sensing derives from the sparsity of natural signals: they can be sparsely represented under a suitable sparse transform basis, which enables direct sampling of sparse signals (sampling and compressing simultaneously). Let *s* be the sparse representation of an original digital signal *x*, obtained through the sparse basis Ψ with *K* nonzero coefficients, and let *x* be observed by a measurement matrix Φ. The observation signal *y* can then be expressed as:

$$\mathbf{y} = \Phi \mathbf{x} = \Phi \Psi \mathbf{s} = \Omega \mathbf{s} \tag{1}$$

where *x* ∈ *R<sup>N</sup>*, *s* ∈ *R<sup>N</sup>*, and *y* ∈ *R<sup>M</sup>*. Consequently, Ω ∈ *R<sup>M×N</sup>*, the product of the matrices Φ ∈ *R<sup>M×N</sup>* and Ψ ∈ *R<sup>N×N</sup>*, is named the sensing matrix, and *M* is much smaller than *N* according to compressive sensing theory.
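A minimal numerical check of (1), assuming for illustration an orthonormal DCT-II matrix as the sparse basis Ψ and an i.i.d. Gaussian Φ (both common but here hypothetical choices):

```python
import numpy as np

N, M, K = 64, 16, 3
rng = np.random.default_rng(1)

# Orthonormal DCT-II matrix; its transpose serves as the sparse basis Psi,
# so x = Psi @ s synthesizes a signal from K DCT coefficients.
k = np.arange(N)[:, None]
n = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)
Psi = C.T

s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)  # K-sparse
x = Psi @ s                                      # original signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
y = Phi @ x                                      # M-dimensional observation
Omega = Phi @ Psi                                # sensing matrix
```

By linearity, the same *y* is obtained whether Φ is applied to *x* or Ω = ΦΨ is applied to *s*, which is exactly the identity in (1).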

The reconstruction process, which restores the *N*-dimensional original signal from the *M*-dimensional measurement through nonlinear projection, is an NP-hard problem and cannot be solved directly. Candès et al. pointed out that *M* must satisfy *M* = *O*(*K* log(*N*)) in order to reconstruct the *N*-dimensional signal *x* accurately, and the sensing matrix Ω must satisfy the Restricted Isometry Property (RIP) [20]. Furthermore, earlier theory proved that the original signal *x* can be accurately reconstructed from the measurement *y* by solving the *l*<sup>0</sup> norm optimization problem:

$$\hat{\mathbf{s}} = \arg\min \|\mathbf{s}\|\_0 \quad \text{s.t.} \quad \mathbf{y} = \Phi \mathbf{x} = \Phi \Psi \mathbf{s} \tag{2}$$

In the above formula, ‖·‖<sub>0</sub> is the *l*<sup>0</sup> norm of a vector, which represents the number of non-zero elements in the vector.

With the wide application of CS technology, especially to 2D/3D image signal processing, a computational curse of dimensionality inevitably arises (the amount of computation grows with the square or cube of the dimension), and CS technology itself cannot directly overcome it. It is therefore necessary to introduce block partitioning and parallel processing, that is, the BCS algorithm, to improve the universality of the approach.

### *2.2. Block-Compressive Sensing (BCS)*

The traditional BCS method in image signal processing segments the image and processes the sub-images in parallel to reduce storage and computation costs. Suppose the original image (*I<sub>W</sub>* × *I<sub>H</sub>*) has *N* = *W* × *H* pixels in total, the observation is *M*-dimensional, and the total sampling rate is defined as *TSR* = *M*/*N*. In normal BCS processing, the image is divided into small blocks of size *B* × *B*, each of which is sampled with the same operator. Let *x<sub>i</sub>* represent the vectorized signal of the *i*-th block obtained by raster scanning; the output vector *y<sub>i</sub>* of the BCS measurement can then be written as:

$$y_i = \Phi_B x_i \tag{3}$$

where Φ*<sub>B</sub>* is an *m* × *n* matrix with *n* = *B*<sup>2</sup> and *m* = *n*·*TSR*. The matrix Φ*<sub>B</sub>* is usually taken as an orthonormalized i.i.d. Gaussian matrix. For the whole image, the equivalent sampling operator Φ in (1) is thus a block-diagonal matrix of the following form:

$$
\Phi = \begin{bmatrix}
\Phi_B & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \Phi_B
\end{bmatrix} \tag{4}
$$
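The per-block measurement (3) and its block-diagonal equivalent (4) can be sketched as follows; `bcs_measure` is a hypothetical helper, with the orthonormalized Gaussian Φ<sub>B</sub> built via a QR decomposition:

```python
import numpy as np

def bcs_measure(image, B, tsr, rng):
    """Sample an image block-by-block with one shared operator Phi_B,
    as in Eq. (3); equivalent to applying the block-diagonal Phi of
    Eq. (4) to the raster-scanned whole image."""
    H, W = image.shape
    n = B * B
    m = max(1, round(n * tsr))
    # Orthonormalized i.i.d. Gaussian operator: keep m orthonormal rows.
    G = rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(G)
    Phi_B = Q[:m, :]
    measurements = []
    for r in range(0, H, B):
        for c in range(0, W, B):
            x_i = image[r:r+B, c:c+B].reshape(-1)  # raster-scan vectorization
            measurements.append(Phi_B @ x_i)       # y_i = Phi_B x_i
    return np.stack(measurements), Phi_B
```

Because every block shares the same Φ<sub>B</sub>, only one *m* × *n* matrix is stored instead of a full *M* × *N* operator, which is the storage saving BCS is designed for.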

### *2.3. Problems of BCS*

The BCS algorithm described above, which divides the image into multiple sub-images to solve the storage problem, reduces the scale of the measurement matrix on the one hand and facilitates parallel processing of the sub-images on the other. However, BCS still has problems that need to be investigated and solved:


• Although there are many studies on improving the BCS iterative reconstruction algorithm [24], few articles focus on optimizing the performance of the algorithm from the aspect of the iteration stopping criterion in the image reconstruction process, especially under noisy backgrounds.

In addition, improvements to BCS also include blockiness (blocking-artifact) elimination and engineering implementation of the algorithm. Finally, although some aspects of BCS technology remain to be solved, its advantages have led to wide application in optical/remote-sensing imaging, medical imaging, wireless sensor networks, and so on [25].

### **3. Problem Formulation and Important Factors**
