*Article* **Constrained Backtracking Matching Pursuit Algorithm for Image Reconstruction in Compressed Sensing**

**Xue Bi <sup>1,\*</sup>, Lu Leng <sup>2,3,\*</sup>, Cheonshik Kim <sup>4,\*</sup>, Xinwen Liu <sup>5</sup>, Yajun Du <sup>6</sup> and Feng Liu <sup>5</sup>**


**Abstract:** Image reconstruction based on sparse constraints is an important research topic in compressed sensing. Sparsity adaptive matching pursuit (SAMP) is a greedy pursuit reconstruction algorithm that reconstructs signals without prior information on the sparsity level and can offer better reconstruction performance than other greedy pursuit algorithms. However, SAMP remains sensitive to the selection of the step size, especially at high sub-sampling ratios. To solve this problem, this paper proposes a constrained backtracking matching pursuit (CBMP) algorithm for image reconstruction. A composite strategy, including two kinds of constraints, effectively controls the increment of the estimated sparsity level at different stages and accurately estimates the true support set of images. Based on an analysis of the relationship between the signal and the measurement, an energy criterion is proposed as one constraint, and the four-to-one practical rule is improved as an extra constraint. Comprehensive experimental results demonstrate that the proposed CBMP yields better performance and greater stability than other greedy pursuit algorithms for image reconstruction.

**Keywords:** constrained backtracking matching pursuit; sparse reconstruction; compressed sensing; greedy pursuit algorithm; image processing

#### **1. Introduction**

Image reconstruction is a significant application of multimedia signal processing. Compressed sensing (CS) is a technique that reconstructs sparse or compressible signals from under-determined random linear measurements. Over the past few decades, CS has been widely applied to image processing, including image reconstruction [1–5] and acquisition [6–8].

Various algorithms have been proposed for CS-based signal reconstruction with sparse constraints [9], which can be categorized into three classes. The first class is non-convex optimization [10], such as re-weighted *l*<sub>1</sub> norm minimization [11] and *l<sub>q</sub>* norm minimization [12]. However, non-convex optimization is a non-deterministic polynomial (NP)-hard problem, which is hard to solve. The second class focuses on convex optimization based on the minimization of the *l*<sub>1</sub> norm. The basis pursuit (BP) algorithm is typically used for convex optimization, but its *l*<sub>1</sub> norm-based cost function is sometimes not differentiable. It also involves high computational complexity, thus limiting its practical applications [13–15].

The third category includes a set of greedy pursuit algorithms, which are easy to implement and have low computational complexity [13–21]. Specifically, orthogonal matching pursuit (OMP) [15–17], stage-wise OMP (StOMP) [18], and regularized orthogonal matching pursuit (ROMP) [19,20] have been proposed. The reconstruction complexity of basic greedy pursuit algorithms is roughly O(*k*MN), which is much lower than that of the BP algorithm.

**Citation:** Bi, X.; Leng, L.; Kim, C.; Liu, X.; Du, Y.; Liu, F. Constrained Backtracking Matching Pursuit Algorithm for Image Reconstruction in Compressed Sensing. *Appl. Sci.* **2021**, *11*, 1435. https://doi.org/10.3390/app11041435

Academic Editor: Andrés Márquez. Received: 17 January 2021; Accepted: 2 February 2021; Published: 5 February 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

While the greedy pursuit algorithms show superiority in easy implementation and computational efficiency, they typically require additional measurements for reconstruction and lack stable reconstruction capability. The problem is alleviated when backtracking is introduced. For example, the subspace pursuit (SP) algorithm [21] and compressive sampling matching pursuit (CoSaMP) algorithm [22] have been proposed based on the backtracking scheme. The difference between SP and CoSaMP is that the latter chooses 2*k* indices to combine with the estimated support set from the previous iteration. However, it is necessary to know the sparsity level *k* of the signal before applying SP and CoSaMP. Indeed, it is impractical to know the accurate sparsity level *k* of unknown signals in advance.

Subsequently, sparsity adaptive matching pursuit (SAMP), which can recover signals without knowing the sparsity level, was proposed by Do et al. [23]. It alternately estimates the sparsity level, when the residue's energy increases between two consecutive stages, and updates the support set of the signal using a fixed and small step size. SAMP has apparent advantages when processing one-dimensional sparse signals. However, since one is used as the initial step size, the small step size significantly affects the result and efficiency of reconstruction when processing high-dimensional signals. To further improve the reconstruction performance, an energy-based adaptive matching pursuit (EAMP) has been proposed [24]. One limitation of EAMP is that it only focuses on binary signal reconstruction. Rasha et al. used the structured Wilkinson matrix as the measurement matrix to improve the efficiency of SAMP [25]. More recently, the improved generalized sparsity adaptive matching pursuit (IGSAMP) algorithm has been proposed [26]. This algorithm uses a nonlinear step size to approximate the sparsity level, and only a small initial step size can be selected. Meanwhile, it requires careful parameter selection and does not address the sensitivity to large step sizes.

To improve the reconstruction performance of the sparsity adaptive matching pursuit algorithm and make it less sensitive to the step size, we propose a compositely constrained backtracking matching pursuit (CBMP) algorithm for image reconstruction. The main contributions of this paper are summarized as follows.


#### **2. Preliminaries**

*2.1. A Review of Compressed Sensing*

CS compresses the signal at the time of sampling while maintaining the ability to reconstruct the original signal. For a signal $x \in R^N$ that has at most *k* nonzero components in some basis Ψ, the compressed signal *y* is obtained through the following linear transform:

$$y = \Phi \Psi \mathbf{x} \tag{1}$$

where *y* is an *M* × 1 vector and Φ denotes an *M* × *N* random measurement matrix with $M \ll N$.

Since *M* is much smaller than *N*, reconstructing *x* from the measurements *y* requires solving an underdetermined set of linear equations. Thus, the CS reconstruction is generally an ill-posed problem. To guarantee an exact reconstruction of every *k*-sparse signal, one of the most important assumptions of CS is that the measurement matrix Φ satisfies the restricted isometry property (RIP) [27,28] with parameters (*k*, *δ*) [29–31].
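The measurement model of Equation (1) can be set up in a few lines (a toy sketch with illustrative dimensions of our choosing; for simplicity, Ψ is taken as the identity so that the signal is sparse in the canonical basis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's experimental values):
# signal length N, number of measurements M << N, sparsity level k.
N, M, k = 256, 64, 8

# k-sparse signal x: k nonzero entries at random positions.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix Phi, a common choice that satisfies
# the RIP with high probability when M is large enough relative to k.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Linear measurement y = Phi x: M equations in N unknowns (underdetermined).
y = Phi @ x
assert y.shape == (M,) and M < N
```

The reconstruction problem is then to recover the *k*-sparse `x` from the shorter vector `y` and the known matrix `Phi`.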

$$(1 - \delta\_k) \|\mathbf{x}\|\_2^2 \le \|\Phi \mathbf{x}\|\_2^2 \le (1 + \delta\_k) \|\mathbf{x}\|\_2^2 \tag{2}$$

where *δ<sup>k</sup>* is the RIP constant and 0 < *δ<sup>k</sup>* < 1, *k* < *M*.

When a matrix satisfies the RIP, the lengths of all sufficiently sparse vectors are approximately preserved under the matrix transformation [29]. In [19,21], it was demonstrated that if $\delta_{2k} < \sqrt{2} - 1$, then the signal can be exactly reconstructed via a finite number of iterations.

The CS reconstruction aims to find the sparsest possible solution that satisfies Equation (1). Then, the CS model [1,31] is represented as:

$$\min \|\Psi \mathbf{x}\|\_0 \quad \text{subject to} \quad y = \Phi \Psi \mathbf{x} \tag{3}$$

where $\|\Psi x\|_0$ is the *l*<sub>0</sub> norm and denotes the number of nonzero components of Ψ*x*.

#### *2.2. A Review of the Greedy Pursuit Algorithms*

Among the reconstruction algorithms used in CS, the greedy pursuit algorithms are the most widely used due to their easy implementation and low computational complexity.

The goal of greedy pursuit algorithms is to find the support set of the unknown signal. After finding the support set, the signal can be reconstructed by solving a least squares problem [31–33]. Suppose the index of the optimal support set is $J \in \{1, 2, \ldots, N\}$ and $z^*$ satisfies $y = z^* \phi_J$, where $\phi_J$ is the *J*-th column of Φ. Then, the error function *e*(*j*) is:

$$\begin{aligned} e(j) &= \min_{z} \left\| z \phi_j - y \right\|_2^2 = \min_{z} \left[ \left( \phi_j^T \phi_j \right) z^2 - 2 \left( \phi_j^T y \right) z + y^T y \right] \\ &= \min_{z} \left( \phi_j^T \phi_j \right) \left( z - \frac{\phi_j^T y}{\phi_j^T \phi_j} \right)^2 + y^T y - \frac{\left( \phi_j^T y \right)^2}{\phi_j^T \phi_j} \end{aligned}$$

Minimizing *e*(*j*) over *z* gives the optimal solution:

$$z^* = \begin{cases} \dfrac{\phi_J^T y}{\|\phi_J\|_2^2}, & j = J \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
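As a quick numerical check of Equation (4), the closed-form minimizer $z^* = \phi_J^T y / \|\phi_J\|_2^2$ can be compared against nearby values of *z* (a sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
phi_j = rng.standard_normal(32)   # one column of Phi
y = rng.standard_normal(32)       # measurement vector

# Closed-form minimizer of e(j) = min_z ||z*phi_j - y||_2^2, per Equation (4).
z_star = (phi_j @ y) / (phi_j @ phi_j)

def err(z):
    """The quadratic objective ||z*phi_j - y||_2^2."""
    return np.sum((z * phi_j - y) ** 2)

# e(z) is a parabola in z, so z_star should beat every other candidate.
for z in np.linspace(z_star - 1.0, z_star + 1.0, 101):
    assert err(z_star) <= err(z) + 1e-12
```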

The matching pursuit (MP) algorithm is one of the most classical and primitive greedy pursuit algorithms. As described in Equation (4), only the column *J* minimizing the error function is selected in each iteration of the MP algorithm [32]. Later on, the OMP algorithm [15] was developed based on the MP algorithm. In OMP, indices are searched that correspond to the most significant correlations between the measurement matrix and the residual. In each iteration, one or more coordinates are selected and added to the support set; these coordinates correspond to the columns of the observation matrix with the largest correlation with the residual. The optimization iterates until the termination condition is satisfied. Finally, the pseudoinverse of the observation matrix restricted to the obtained support set is used for signal reconstruction.
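The greedy selection loop described above can be sketched as a minimal OMP implementation (our illustration, not the paper's code; the tolerance and interface are our choices):

```python
import numpy as np

def omp(Phi, y, budget, tol=1e-6):
    """Orthogonal matching pursuit sketch: one column of Phi is added to the
    support set per iteration, followed by a least squares re-fit."""
    M, N = Phi.shape
    r = y.copy()                      # residual
    support = []                      # estimated support set
    x_hat = np.zeros(N)
    for _ in range(budget):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in support:
            support.append(j)
        # Least squares over the selected columns (pseudoinverse step).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        r = y - Phi @ x_hat           # update the residual
        if np.linalg.norm(r) < tol:   # termination condition
            break
    return x_hat, support
```

With a Gaussian measurement matrix and enough measurements (the regime the RIP discussion above describes), the loop typically identifies the true support and the residual drops to numerical zero.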

Greedy pursuit algorithms adopted in CS include OMP [15–17], StOMP [18], ROMP [19,20], SP [21], CoSaMP [22], SAMP [23], EAMP [24], and IGSAMP [26]. Utilizing certain criteria, they approximate the sparse signals iteratively: each algorithm iteratively computes the estimated support set of the signal, adding one or several coordinates per iteration. In particular, OMP adds only one column of Φ to the estimated support set per iteration, whereas StOMP uses a hard threshold to choose several columns to add. In both algorithms, once columns are selected, they remain in the support set permanently; wrong selections cannot be rectified in later iterations.

These greedy pursuit algorithms required more measurements for exact reconstruction and lacked stable reconstruction capability until the backtracking idea was introduced in SP [21] and CoSaMP [22]. By refining the last estimated support set, the backtracking scheme allows wrong coordinates selected in previous iterations to be eliminated. The candidate set introduced into the greedy pursuit algorithm is the key point of the backtracking. However, both SP [21] and CoSaMP [22] require prior knowledge of the sparsity level *k*, which is impractical to obtain in advance. SAMP [23], on the other hand, was put forward to gradually approach the sparsity level by accumulation with a step size. The SAMP algorithm is shown in Figure 1 and Algorithm 1.

**Figure 1.** The pipeline of sparsity adaptive matching pursuit (SAMP) [23].

#### **Algorithm 1** Sparsity adaptive matching pursuit (SAMP) algorithm

#### **Input:**

*M* × *N* measurement matrix Φ, measurement vector *y*, step size *s*

#### **Initialization:**

$\hat{x} = 0$ {trivial initialization}, $r_0 = y$ {initial residue}, $T_0 = \emptyset$ {the estimated support set}, *L* = *s* {size of the support set}, *j* = 1 {stage index}, *i* = 1 {iteration index}.

#### **Repeat:**

1. Preliminary test: find the *L* matched indices from Φ based on the correlation between Φ and $r_{i-1}$, that is, $D_i = \max(|\Phi^T r_{i-1}|, L)$.
2. Union operation: make the candidate list $U_i = T_{i-1} \cup D_i$.
3. Final test: $F = \max(|\Phi_{U_i}^{\dagger} y|, L)$, and compute the residue $r = y - \Phi_F \Phi_F^{\dagger} y$.
4. If the halting condition is true, then quit the iteration;
   else if $\|r\|_2 \ge \|r_{i-1}\|_2$, then *j* = *j* + 1 {update the stage index}, *L* = *j* × *s* {update the size of the support set};
   else $T_i = F$ {update the support set}, $r_i = r$ {update the residual}, *i* = *i* + 1.

end if

Until the halting condition is true;

**Output:** $\hat{x} = \Phi_T^{\dagger} y$ {a sparse reconstruction computed by the least squares algorithm}

SAMP uses the "divide and conquer" principle stage-by-stage to estimate the sparsity level and the true support set of the target signals. SAMP applies two tests, namely the preliminary test and the final test, to estimate the signal's support set. The preliminary test selects the *L* elements corresponding to the largest correlations between the residual and the measurement matrix, denoted by $D_i = \max(|\Phi^T r_{i-1}|, L)$. After the preliminary test, a candidate list U is created as the union of the list chosen in the preliminary test and the support set of the previous iteration, represented by $U_i = T_{i-1} \cup D_i$. The final test first solves a least squares problem to obtain $x_{U_i}$ and then chooses the subset of the *L* largest elements of $x_{U_i}$; this subset of coordinates serves as the support set of the current iteration. The residual is finally updated by subtracting from the measurement vector *y* its projection onto the subspace spanned by the columns in the support set. The pseudo-code of SAMP is summarized in Algorithm 1.

$\Phi^{\dagger} = (\Phi^T \Phi)^{-1} \Phi^T$ represents the pseudo-inverse of Φ, in which $\Phi^T$ denotes the transposition of Φ. The main innovation of SAMP is that the increment of the residual is used as the criterion to judge the sparsity level by accumulating with the step size. As previously mentioned, SAMP uses a fixed step size, to which the reconstruction performance is sensitive [23]. Specifically, when SAMP is applied to two-dimensional images, the selection of the step size seriously affects the image reconstruction performance due to the lack of flexibility and adaptation in the sparsity level update stage. As shown in Figure 2, the reconstruction performance is affected by the step size. When the step size *s* = 64, the peak signal-to-noise ratio (PSNR) is 24.04 dB, whereas when *s* = 512, the PSNR is 28.44 dB.
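Algorithm 1 can be condensed into a short sketch (a simplified illustration under the noiseless model, not the authors' implementation; the halting tolerance and iteration cap are our choices):

```python
import numpy as np

def samp(Phi, y, step, halt_tol=1e-5, max_iter=500):
    """Sparsity adaptive matching pursuit sketch: the support set size L
    grows by `step` each stage, and backtracking refines the support."""
    M, N = Phi.shape
    T = np.array([], dtype=int)       # estimated support set
    r = y.copy()                      # residual
    L = step                          # current support set size
    for _ in range(max_iter):
        # Preliminary test: L columns most correlated with the residual.
        D = np.argsort(np.abs(Phi.T @ r))[-L:]
        U = np.union1d(T, D)          # candidate list (union operation)
        # Final test: least squares on the candidate list, keep L largest.
        coef, *_ = np.linalg.lstsq(Phi[:, U], y, rcond=None)
        F = U[np.argsort(np.abs(coef))[-L:]]
        coef_F, *_ = np.linalg.lstsq(Phi[:, F], y, rcond=None)
        r_new = y - Phi[:, F] @ coef_F
        if np.linalg.norm(r_new) < halt_tol:   # halting condition
            T = F
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += step                 # residual stagnated: next stage
        else:
            T, r = F, r_new           # update support set and residual
    x_hat = np.zeros(N)
    coef_T, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
    x_hat[T] = coef_T
    return x_hat
```

The stage transition (`L += step`) is exactly the point where the fixed step size matters: too small a step means many stages for large sparsity levels, while too large a step overshoots the true sparsity level.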

**Figure 2.** Performance of SAMP vs. different step sizes: (**a**) *s* = 64, PSNR = 24.04 dB; (**b**) *s* = 512, PSNR = 28.44 dB.

A variable step size was then proposed in EAMP [24], but it focuses on one-dimensional sparse binary signal reconstruction. Recently, IGSAMP [26] was proposed to improve SAMP; however, it requires carefully chosen parameters to control its variable nonlinear step size during reconstruction and does not address the sensitivity to the step size. In this paper, we propose an improved adaptive greedy algorithm whose signal reconstruction performance is relatively insensitive to the step size.

#### **3. The Constrained Backtracking Matching Pursuit Algorithm for Image Reconstruction**

To overcome the sensitivity to the step size and improve the reconstruction performance of greedy pursuit algorithms, we propose the CBMP algorithm, which introduces restrictions into the backtracking stage and thereby provides more flexibility as the algorithm gradually approaches the true sparsity level of the unknown signal. The main steps of CBMP are described as follows:

Since the signal to be reconstructed is a two-dimensional image, its sparsity level is relatively large; the sparsity level estimation is therefore divided into a large step size stage and a small step size stage. In the large step size stage, the step size doubles at each stage, $s_j = 2 \times s_{j-1}$, where *j* denotes the stage index. In the small step size stage, the increment of the sparsity level is fixed and equals the step size of the previous stage.
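The two-stage schedule can be sketched as follows (a toy illustration of the update rule only; the initial step and stage counts are arbitrary):

```python
def step_schedule(s0, n_large, n_small):
    """Step sizes produced by CBMP's two stages: the large stage doubles
    the step (s_j = 2 * s_{j-1}); the small stage keeps it fixed at the
    last large-stage value."""
    steps, s = [], s0
    for _ in range(n_large):      # large step size estimation stage
        s = 2 * s
        steps.append(s)
    for _ in range(n_small):      # small step size estimation stage
        steps.append(s)           # fixed, equal to the previous stage's step
    return steps

# e.g. step_schedule(64, 3, 2) -> [128, 256, 512, 512, 512]
```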

Due to the advantages of combining information and improving accuracy [34,35], a composite strategy is proposed to effectively control the increment of the estimated sparsity level in the two stages. It includes two constraints controlled by the parameters *a* and *b*, which are required in the backtracking threshold operation of CBMP, as described in Algorithm 2. The theoretical support for the composite strategy is clarified as follows:

**Theorem 1.** *Let $x \in R^N$ be a sparse signal and y be a measurement vector. If the measurement matrix* Φ *satisfies the RIP, then* $\|x\|_2^2 > \frac{1}{\sqrt{2}} \|y\|_2^2$. *The proof is presented in Appendix A.*

#### **Algorithm 2** The proposed CBMP algorithm

#### **Input:**

*M* × *N* measurement matrix Φ, measurement vector *y*, step size $s_0$

#### **Initialization:**

$x = 0$ {trivial initialization}; $y_r^0 = y$ {initial residue}; $T_0 = \emptyset$ {the estimated support set}; $L_0 = s_0$ {size of the support set (sparsity level)}; *j* = 1 {stage index}; *i* = 1 {iteration index}; $U_0 = \emptyset$ {union set}

#### **Repeat the following steps until the stopping condition holds:**

1. Preliminary test: $v = \Phi^T y_r^{i-1}$; find the matched set $D_i$ of $L_{j-1}$ indices corresponding to the largest absolute values of $v$, that is, $D_i = \max(|\Phi^T y_r^{i-1}|, L_{j-1})$.

2. Union operation: broaden the selection space and make the candidate list $U_i = T_{i-1} \cup D_i$, $x_{U_i} = \Phi_{U_i}^{\dagger} y$.

3. Final test: obtain the vector $x_F^i$: find the matched indices $F_i$ based on the largest absolute values of $x_{U_i}$, that is, $F_i = \max(|x_{U_i}|, L_{j-1})$, $x_F^i = \Phi_{F_i}^{\dagger} y$, and compute the residue $r_F^i = y - \Phi_{F_i} x_F^i$.

4. If $\|x_F^i\|_2^2 \le a \|y\|_2^2$ and size$(U_i) < b \times M$, then shift into the large step size estimation stage: $s_j = 2 \times s_{j-1}$, $L_j = L_{j-1} + s_j$, $y_r^i = y_r^{i-1}$, *j* = *j* + 1, *i* = *i* + 1, then shift into step 1.

5. Else if $\|r_F^i\|_2 > \|y_r^{i-1}\|_2$, then shift into the small step size estimation stage: $s_j = s_{j-1}$, $L_j = L_{j-1} + s_j$, $y_r^i = y_r^{i-1}$, *j* = *j* + 1, *i* = *i* + 1, then shift into step 1.

6. Otherwise, shift into the stage that updates the support set based on the current estimated sparsity level: $y_r^i = r_F^i$, $L_j = L_{j-1}$, $T_i = F_i$, *i* = *i* + 1, then shift into step 1.

**Output:** $x = \Phi_T^{\dagger} y$ {a sparse reconstruction computed by the least squares algorithm}

According to Theorem 1, the energy of the original signal *x* is greater than $\frac{1}{\sqrt{2}}$ times that of the measurement vector *y*, that is, $\|x\|_2^2 > \frac{1}{\sqrt{2}} \|y\|_2^2$. Different step sizes are used in CBMP; in particular, the estimated sparsity level is far smaller than the true one at the early stage. Based on this theorem, the energy criterion can be improved by introducing a parameter *a* to constrain the reconstruction stages.

Inspired by the "four-to-one" practical rule proposed in [27], which states that the measurement number should be four times the signal sparsity level for reliable signal reconstruction, we introduce this rule into CBMP and use the parameter *b* to constrain the estimation stage. In CBMP, $U_i$ is the union of a new matched set and the estimated support set of the previous iteration, and *M* is the row number of the measurement matrix. The relationship between the parameters *a* and *b* is analyzed in Section 4.
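The two constraints can be combined into a single test (a sketch following the backtracking threshold of Algorithm 2; the function name and interface are ours):

```python
import numpy as np

def use_large_step(x_F, y, U_size, M, a, b):
    """Composite constraint of CBMP: stay in the large step size stage only
    while (i) the energy criterion ||x_F||_2^2 <= a * ||y||_2^2 holds, i.e.,
    the estimate is still far below the energy bound of Theorem 1, and
    (ii) the candidate set is small relative to the measurement count,
    size(U) < b * M (the improved four-to-one rule)."""
    energy_ok = np.sum(x_F ** 2) <= a * np.sum(y ** 2)
    size_ok = U_size < b * M
    return bool(energy_ok and size_ok)

# Setting analyzed in Section 4: a = 1/(16*sqrt(2)) and b = 2*a.
```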

Figure 3 shows the flowchart of the CBMP algorithm. The reconstruction process is divided into the sparsity level update stage and the support set update stage. In detail, the sparsity level update stage includes both the large and small step size update stages. In the early stage of reconstruction, the estimated sparsity level is far less than the true one, so large step sizes are adopted to estimate the sparsity level. As the iterations proceed, once the threshold condition is satisfied, the algorithm enters the small step size stage. The reason why CBMP achieves better reconstruction performance than SAMP is its superior capability in handling wrong indices (atoms). When the currently estimated sparsity level is far less than the true one, false indices are easily added to the candidate support set, and these false indices are difficult to eliminate in later iterations. Therefore, at the beginning of the iterations, a large step size allows such false indices to be filtered out.

**Figure 3.** The flowchart of the constrained backtracking matching pursuit (CBMP) algorithm.

#### **4. Experimental Results**

Several experiments were conducted to illustrate the performance of the CBMP algorithm. The proposed CBMP was compared with SAMP [23] and IGSAMP [26]. The halting condition used by these algorithms was $\|y_r\|_2 \le 10^{-5}$. For a fair comparison, the same initial step size was used by CBMP, SAMP, and IGSAMP. It should be noted that the reconstruction results shown in the simulation experiments of SAMP and IGSAMP [23,26] were obtained with a small step size (*s* = 1). In practical applications, when two-dimensional images are stacked into long one-dimensional vectors, the sparsity level in the transform domain is far greater than one; correspondingly, the step sizes of the proposed algorithm were relatively large. The step sizes used in the experiments were 64, 128, 256, and 512. Different sampling rates were used to demonstrate the reconstruction performance of CBMP. The wavelet transform was chosen as the sparse basis to represent images. The quality of the recovered images was measured by the peak signal-to-noise ratio (PSNR), which is expressed as:

$$\text{MSE} = \frac{1}{M \times N} \sum\_{i=0}^{M-1} \sum\_{j=0}^{N-1} |I(i,j) - \hat{I}(i,j)|^2 \tag{5}$$

$$\text{PSNR}(I, \hat{I}) = 10 \log\_{10} \left( \frac{\text{MAX}^2}{\text{MSE}} \right) \tag{6}$$

where *M* = *N* = 512, *I*(*i*, *j*) denotes the original value of the test image at position (*i*, *j*), and $\hat{I}(i, j)$ denotes the reconstructed value at position (*i*, *j*). The maximum pixel intensity is given as MAX. All images here are expressed using 8 bit intensity values per pixel, so the peak intensity is 255. The experimental configuration was as follows: the CPU was an Intel® Core™ i5-7200U at 2.50 GHz, and the size of the RAM was 8 GB. The programming language used for the experiments was MATLAB.
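Equations (5) and (6) translate directly into code (a minimal sketch; it uses the standard squared-peak form $10 \log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$, which is consistent with the dB values reported here):

```python
import numpy as np

def psnr(I, I_hat, max_val=255.0):
    """Peak signal-to-noise ratio per Equations (5) and (6):
    MSE averaged over all M x N pixels, then 10*log10(MAX^2 / MSE)."""
    mse = np.mean((I.astype(np.float64) - I_hat.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a reconstruction off by 16 gray levels at every pixel gives an MSE of 256 and a PSNR of about 24 dB, in the same range as the SAMP result of Figure 2a.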

According to Theorem 1, $\|x\|_2^2 > \frac{1}{\sqrt{2}} \|y\|_2^2$. In CBMP, $x_F$ should gradually approach the true signal, and $x_F$ is much smaller than *x* at the beginning. Since there are two update stages, the threshold parameter *a* is constrained to be within $\frac{1}{4\sqrt{2}}$ (i.e., $\frac{1}{2} \times \frac{1}{2} \times \frac{1}{\sqrt{2}}$). Our experiments demonstrate that the threshold parameters *a* and *b* do not distinctly affect the reconstruction performance as long as $a \le \frac{1}{4\sqrt{2}}$. In CBMP, the support set obtained by the current iteration is constrained by the parameter *a* in the step size update stage, while $U_i$ is the union of the estimated support set of the previous iteration and the currently selected support set. Therefore, the relationship between these two parameters is set as *b* = 2*a*. The two parameters play different roles: *a* is applied after the final test, while *b* corresponds to the union operation after the preliminary test.

The relationship between the threshold parameters and reconstruction performance is shown in Figures 4 and 5; SAMP and IGSAMP were also tested. Two standard images, "Lena" and "Peppers", were reconstructed to test the reconstruction performance of different parameter pairs (*a*, *b*). For a fair comparison, the sampling rate was 0.4, and the same initial step sizes, chosen from 64 to 512, were used. From Figure 4, we can see that the reconstruction performance of CBMP with different threshold parameters is better than that of SAMP and IGSAMP. For example, when $a = \frac{1}{16\sqrt{2}}$, all the PSNR values of CBMP with different initial step sizes are greater than 33.5 dB, whereas with an initial step size of 512, the maximum PSNR of SAMP is less than 32 dB and that of IGSAMP is less than 32.5 dB. Therefore, CBMP offers better reconstruction performance than SAMP and IGSAMP. From Figure 5, it can be seen that if $a \le \frac{1}{4\sqrt{2}}$, all the PSNR values of CBMP with different step sizes are greater than 31 dB. Therefore, the threshold operation is necessary for the improvement of greedy pursuit algorithms, and the threshold parameters do not distinctly affect the reconstruction performance as long as they satisfy the constrained condition. Meanwhile, the reconstruction performance of CBMP with $a = \frac{1}{16\sqrt{2}}$ is better than the others; thus, this value of *a* is regarded as optimal in CBMP.

Tables 1 and 2 compare CBMP, SAMP, and IGSAMP in terms of the reconstruction performance (PSNR) on the Lena image with different sampling ratios and initial step sizes. Tables 3 and 4 compare CBMP, SAMP, and IGSAMP in terms of the reconstruction performance (PSNR) on the Peppers image with different sampling ratios and initial step sizes.

In Table 1, when the sampling ratio is 0.3, every PSNR value of the CBMP algorithm is greater than that of SAMP and IGSAMP. For example, with an initial step size of 64, the PSNR values of SAMP and IGSAMP are 25.45 dB and 26.23 dB, respectively, but the PSNR value of CBMP is 32.13 dB. Table 2 shows the PSNR values of SAMP, IGSAMP, and CBMP with the same sampling ratio of 0.4 and different step sizes. The PSNR values of SAMP with different initial step sizes range from 26.99 dB to 31.60 dB, and those of IGSAMP increase from 27.32 dB to 32.46 dB. However, the PSNR values of CBMP are greater than those of SAMP and IGSAMP, achieving an average of 33.9675 dB.

Similarly, Table 3 shows the PSNR values of the Peppers image obtained by SAMP, IGSAMP, and CBMP when the sampling ratio is 0.3. Every PSNR value of the CBMP algorithm is greater than that of SAMP and IGSAMP. Table 4 shows the PSNR values of the Peppers image obtained by SAMP, IGSAMP, and CBMP when the sampling ratio is 0.4, with different step sizes. For example, with an initial step size of 512, the PSNR values of SAMP and IGSAMP are 30.67 dB and 32.75 dB, respectively, while the PSNR value of CBMP is 32.84 dB. Therefore, CBMP achieves better reconstruction performance with different sampling ratios and initial step sizes.

**Figure 4.** PSNR (dB) under different initial step sizes of the Lena image.

**Table 1.** PSNR (dB) comparison of the Lena image when the sampling ratio is 0.3.

**Table 2.** PSNR (dB) comparison of the Lena image when the sampling ratio is 0.4.


**Table 3.** PSNR (dB) comparison of the Peppers image when the sampling ratio is 0.3.


**Figure 5.** PSNR (dB) under different initial step sizes of the Peppers image.


**Table 4.** PSNR (dB) comparison of the Peppers image when the sampling ratio is 0.4.

Finally, the reconstructed results of the Lena image using SAMP, IGSAMP, and CBMP are shown in Figures 6 and 7. The sampling rate is 0.3; different step sizes are used. The reconstructed results of the Peppers image using SAMP, IGSAMP, and CBMP are shown in Figures 8 and 9.

(**a**) SAMP (**b**) IGSAMP (**c**) CBMP

**Figure 6.** Reconstructed results of the Lena image by SAMP, IGSAMP, and CBMP with the initial step size of 64.

(**a**) SAMP (**b**) IGSAMP (**c**) CBMP

**Figure 7.** Reconstructed results of the Lena image by SAMP, IGSAMP, and CBMP with the initial step size of 512.

**Figure 8.** Reconstructed results of the Peppers image by SAMP, IGSAMP, and CBMP with the initial step size of 64.

**Figure 9.** Reconstructed results of the Peppers image by SAMP, IGSAMP, and CBMP with the initial step size of 512.

In our tests, CBMP outperforms SAMP and IGSAMP in terms of both visual effect and PSNR, regardless of the initial step size value. Moreover, the reconstruction performance of CBMP is stable across different step sizes. For example, Figures 8a and 9a show clearly different visual reconstruction results when the initial step size is 64 and 512, respectively, and the same observation holds for Figures 8b and 9b. By contrast, the difference between Figures 8c and 9c, obtained with initial step sizes of 64 and 512, respectively, is hardly noticeable. Therefore, it can be concluded that the CBMP algorithm is relatively insensitive to the step size.

#### **5. Conclusions**

In this paper, a constrained backtracking matching pursuit algorithm is proposed for image reconstruction using compressed sensing. A composite strategy, including two constraints, is adopted to effectively control the increment of the estimated sparsity level at different stages and accurately estimate the true support set of the image to be reconstructed. On the one hand, the energy criterion between the estimated signal and the measurement is used as a constraint. On the other hand, the four-to-one practical rule is considered and improved as another control. Due to these composite mechanisms, the proposed algorithm outperforms other greedy pursuit algorithms, including SAMP and IGSAMP. In particular, CBMP offers a stable reconstruction performance that is insensitive to the initial step size. In future work, the CBMP algorithm will be applied to neural network framework-based signal reconstruction, including medical image reconstruction.

**Author Contributions:** Conceptualization, X.B. and L.L.; Formal analysis, L.L.; Funding acquisition, L.L., X.B. and Y.D.; Investigation, X.B. and L.L.; Methodology, X.B.; Supervision, C.K., Y.D. and F.L.; Writing original draft, X.B.; Writing review—editing, X.B., L.L., C.K., X.L. and F.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work is supported by the National Natural Science Foundation of China (Nos. 61866028, 61872298), the Chunhui Project of the Ministry of Education Project Foundation of China (No. Z2017075), the Sichuan Provincial Department of Education Foundation (No. 17ZB0416) and the Key Project Foundation of Xihua University (No. Z1520908).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

Let $x \in R^N$ be a sparse signal and *y* be a measurement vector. If the measurement matrix Φ satisfies the RIP, then $\|x\|_2^2 > \frac{1}{\sqrt{2}} \|y\|_2^2$.

**Proof.** From the right-hand side of the RIP, one has:

$$\|\Phi x\|_2^2 \le (1 + \delta_k) \|x\|_2^2 \tag{A1}$$

Furthermore, $\|y\|_2^2 \le (1 + \delta_k) \|x\|_2^2$, and:

$$\frac{\|y\|_2^2}{1 + \delta_k} \le \|x\|_2^2 \tag{A2}$$

According to the monotonicity of $\delta_k$ [21], for two integers $k < k'$, $\delta_k < \delta_{k'}$. In particular, $\delta_k < \delta_{2k}$, so $1 + \delta_k < 1 + \delta_{2k}$ and:

$$\frac{1}{1 + \delta_{2k}} < \frac{1}{1 + \delta_k}$$

Then:

$$\frac{\|y\|_2^2}{1 + \delta_{2k}} < \frac{\|y\|_2^2}{1 + \delta_k} \tag{A3}$$

Combining (A2) with (A3):

$$\frac{\|y\|_2^2}{1 + \delta_{2k}} < \frac{\|y\|_2^2}{1 + \delta_k} \le \|x\|_2^2 \tag{A4}$$

Based on the demonstration in SP [21] and the RIP [30], $0 < \delta_{2k} < \sqrt{2} - 1$ is a sufficient condition for signal reconstruction in CS. Then $1 < 1 + \delta_{2k} < \sqrt{2}$, so:

$$\frac{\|y\|_2^2}{\sqrt{2}} < \frac{\|y\|_2^2}{1 + \delta_{2k}} < \|x\|_2^2$$

Therefore:

$$\|x\|_2^2 > \frac{1}{\sqrt{2}} \|y\|_2^2. \qquad \Box$$

**References**

