### **6. Complexity Analysis**

In this section, we compare the computational complexity of our sparse coding and dictionary pair learning algorithms with that of their conventional sparse-model counterparts.

We first analyze the complexity of the main components of the sparse coding (SC) and dictionary updating (DU) algorithms. For SC, given a set of training samples $\mathbf{X} \in \mathbb{R}^{N \times L}$, the complexity of BtOMP for computing

$$\hat{\mathbf{Y}} = \arg\min_{\mathbf{Y}} \|\mathbf{X} - \mathbf{\Phi}\mathbf{Y}\|_F^2 + \gamma_1 \|\mathbf{Y} - S_\lambda(\mathbf{\Psi}^T \mathbf{X})\|_F^2 + \gamma_2 \|\mathbf{Y}\|_0$$

is $O(K^2 ML)$, where $K$ is the target sparsity, and the complexity of the thresholding step for computing

$$\hat{\lambda} = \arg\min_{\lambda} \|\mathbf{Y} - S_\lambda(\mathbf{\Psi}^T \mathbf{X})\|_F^2$$

is $O(NML)$; these two operations dominate the cost of the SC step at each iteration. The sparse coefficients $\mathbf{Y} \in \mathbb{R}^{M \times L}$ and the threshold values $\lambda$ are computed with the dictionaries $\mathbf{\Phi} \in \mathbb{R}^{N \times M}$ and $\mathbf{\Psi} \in \mathbb{R}^{N \times M}$ fixed. Correspondingly, the traditional sparse coefficients $\mathbf{B} \in \mathbb{R}^{M \times L}$ are sparsely approximated over a dictionary $\mathbf{D} \in \mathbb{R}^{N \times M}$ with a computational complexity of $O(K^2 ML)$.
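To make the $O(NML)$ cost of the thresholding step concrete, the following sketch evaluates $\hat{\lambda} = \arg\min_{\lambda} \|\mathbf{Y} - S_\lambda(\mathbf{\Psi}^T \mathbf{X})\|_F^2$ over a candidate grid. It assumes $S_\lambda$ is the elementwise soft-thresholding operator and that $\lambda$ is searched over a finite set of candidates; the paper's actual minimization may differ.

```python
import numpy as np

def soft_threshold(Z, lam):
    """Elementwise soft-thresholding S_lambda(Z) (assumed form of the operator)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

def fit_threshold(Y, Psi, X, candidates):
    """Pick lambda minimizing ||Y - S_lambda(Psi^T X)||_F^2 over a candidate grid.

    The O(NML) cost comes from the product Psi^T X; evaluating each
    candidate afterwards costs only O(ML).
    """
    Z = Psi.T @ X  # O(NML): the dominant operation of the threshold step
    errs = [np.linalg.norm(Y - soft_threshold(Z, lam), "fro") ** 2
            for lam in candidates]
    return candidates[int(np.argmin(errs))]
```

Note that $\mathbf{\Psi}^T \mathbf{X}$ is computed once outside the candidate loop, which is why the matrix product, not the search over $\lambda$, sets the asymptotic cost.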

For DU, given the training samples $\mathbf{X} \in \mathbb{R}^{N \times L}$, we learn a pair of dictionaries $\mathbf{\Phi} \in \mathbb{R}^{N \times M}$ and $\mathbf{\Psi} \in \mathbb{R}^{N \times M}$. We update $\mathbf{\Psi}$ via Problem (34) with a computational complexity of $O(N^2 L)$. To update $\mathbf{\Phi}$, we compute the gradient via Equation (39) with a computational complexity of $O(NML)$ and the step size via Equation (40) with a computational complexity of $O(rNML)$, where $r$ is the number of gradient-descent iterations. For traditional dictionary learning, the corresponding training set is $\mathbf{X} \in \mathbb{R}^{N \times L}$ and the dictionary $\mathbf{D} \in \mathbb{R}^{N \times M}$ is updated by rank-1 SVD decompositions with a computational complexity of $O(KML)$.
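The $O(rNML)$ cost of the $\mathbf{\Phi}$ update can be illustrated with a generic gradient-descent sketch on the data term $\|\mathbf{X} - \mathbf{\Phi}\mathbf{Y}\|_F^2$. This is only an assumption-laden stand-in: the paper's actual gradient (Equation (39)) and step-size rule (Equation (40)) contain terms not reproduced here, and the fixed step below is a conservative choice derived from the Lipschitz constant rather than the paper's rule.

```python
import numpy as np

def update_phi(Phi, X, Y, n_steps=10, lr=None):
    """Gradient-descent refinement of Phi for the data term ||X - Phi Y||_F^2.

    A generic sketch of the O(rNML) update: each of the r = n_steps
    iterations costs O(NML) for the residual and gradient products.
    """
    if lr is None:
        # Conservative fixed step from the Lipschitz constant 2*||Y Y^T||_2
        # (an assumed stand-in for the paper's step-size rule, Eq. (40)).
        lr = 1.0 / (2.0 * np.linalg.norm(Y @ Y.T, 2) + 1e-12)
    for _ in range(n_steps):
        R = X - Phi @ Y                    # O(NML) residual
        Phi = Phi + 2.0 * lr * (R @ Y.T)   # gradient step, O(NML)
    return Phi
```

Because the gradient $-2(\mathbf{X} - \mathbf{\Phi}\mathbf{Y})\mathbf{Y}^T$ is dominated by two $N \times M$ by $M \times L$ products, each iteration is $O(NML)$, matching the stated per-step cost.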

### **7. Experimental Results**

We demonstrate the effectiveness of our SSM-NTF model by first discussing the convergence of our dictionary pair learning algorithm and then evaluating its performance on natural and piecewise-constant image denoising, super-resolution, and image inpainting.
