*3.2. Transform Learning for the DRTBF Model*

As there is no existing algorithm for solving problem (13), we apply the alternating direction method (ADM) and divide (13) into two subproblems: a sparse coding phase, which updates the sparse coefficients **Y** and the threshold values *λ* (Section 3.2.1), and a transform operator pair update phase, which computes **Φ** and **Ψ** (Section 3.2.2).

#### 3.2.1. Sparse Coding Phase

This subsection presents the sparse coding method for the proposed DRTBF model, in which the sparse coefficients of **Y** are obtained by OMP, and the threshold values *λ* are obtained by a designed elementwise method.

The **Y** Subproblem

The pursuit of **Y** is equivalent to solving the following problem with fixed **Φ**, **Ψ**, and *λ*:

$$\hat{\mathbf{Y}} = \operatorname\*{arg\,min}\_{\mathbf{Y}} \left\| \mathbf{X} - \boldsymbol{\Phi} \mathbf{Y} \right\|\_{F}^{2} + \eta\_{1} \left\| \mathbf{Y} - \mathcal{S}\_{\lambda} (\boldsymbol{\Psi}^{T} \mathbf{X}) \right\|\_{F}^{2} + \eta\_{2} \left\| \mathbf{Y} \right\|\_{0}, \tag{14}$$

which can be solved by OMP [14,34], as (14) is easily converted to the classical synthesis sparse coding form $\min\_{\mathbf{Y}} \|\mathbf{Z} - \mathbf{D}\mathbf{Y}\|\_F^2$ subject to a sparsity constraint on $\|\mathbf{Y}\|\_0$, where $\mathbf{Z} = \begin{bmatrix}\mathbf{X} \\ \sqrt{\eta\_1}\,\mathcal{S}\_{\lambda}(\boldsymbol{\Psi}^T\mathbf{X})\end{bmatrix}$ and $\mathbf{D} = \begin{bmatrix}\boldsymbol{\Phi} \\ \sqrt{\eta\_1}\,\mathbf{I}\end{bmatrix}$.
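The stacked construction above can be sketched as follows. This is a minimal NumPy illustration, not the implementation of [14,34]: it assumes a per-column sparsity level `s` (the sparsity bound is not fixed in this excerpt) and uses a simple greedy OMP.

```python
import numpy as np

def omp(D, z, s):
    """Greedy orthogonal matching pursuit: approximate
    argmin_y ||z - D y||_2 with at most s non-zero entries in y."""
    residual = z.copy()
    support = []
    y = np.zeros(D.shape[1])
    for _ in range(s):
        # Pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares refit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], z, rcond=None)
        y[:] = 0.0
        y[support] = coef
        residual = z - D @ y
    return y

def solve_Y(X, Phi, S_lam_PsiX, eta1, s):
    """Y-subproblem (14) in stacked synthesis form min_Y ||Z - D Y||_F^2,
    where Z = [X; sqrt(eta1) * S_lambda(Psi^T X)] and D = [Phi; sqrt(eta1) * I],
    solved column by column with sparsity level s (an assumed parameter)."""
    M = Phi.shape[1]
    Z = np.vstack([X, np.sqrt(eta1) * S_lam_PsiX])
    D = np.vstack([Phi, np.sqrt(eta1) * np.eye(M)])
    return np.column_stack([omp(D, Z[:, j], s) for j in range(Z.shape[1])])
```

Stacking the data-fidelity and coefficient-penalty terms vertically is what makes the two squared Frobenius terms of (14) collapse into the single residual $\|\mathbf{Z} - \mathbf{D}\mathbf{Y}\|\_F^2$.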

The *λ* Subproblem

With fixed **Φ**, **Ψ**, and **Y**, finding *λ* is equivalent to solving the following problem

$$\hat{\lambda} = \operatorname\*{arg\,min}\_{\lambda} \|\mathbf{Y} - \mathcal{S}\_{\lambda}(\boldsymbol{\Psi}^T \mathbf{X})\|\_{F}^{2}, \tag{15}$$

which can be decomposed into *M* individual optimization problems $\operatorname{arg\,min}\_{\lambda\_i} \|\mathbf{y}\_i - \mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{X})\|\_2^2$, $i = 1, \ldots, M$, where $\mathbf{y}\_i$ denotes the *i*th row of **Y**. Denoting $\mathcal{J}\_i := \operatorname{supp}(\mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{X}))$ as the set of indices of the non-zero elements of $\mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{X})$, we have

$$\begin{aligned} \mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{x}\_j) &= \boldsymbol{\psi}\_i^T \mathbf{x}\_j, \,\forall j \in \mathcal{J}\_i,\\ \mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{x}\_j) &= 0, \,\forall j \in \{1, \ldots, L\} \setminus \mathcal{J}\_i. \end{aligned}$$
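The two cases above (pass-through on the support, zero off it) are the defining property of an elementwise hard-thresholding operator; a minimal sketch under that assumption:

```python
import numpy as np

def hard_threshold(v, lam):
    """Elementwise hard thresholding: entries with |v_j| > lam are kept
    unchanged, the rest are set to zero, so S_lam(v) = v on its support
    and 0 elsewhere, matching the two cases in the text."""
    return np.where(np.abs(v) > lam, v, 0.0)

def support(v, lam):
    """Index set J = supp(S_lam(v)) of the non-zero entries."""
    return np.flatnonzero(np.abs(v) > lam)
```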

As the cardinality of J*<sup>i</sup>* depends on *λi*, we transform (15) to another optimization problem:

$$\hat{\lambda}\_i = \operatorname\*{arg\,min}\_{\lambda\_i} \underbrace{\sum\_{j \in \{1, \dots, L\} \setminus \mathcal{J}\_i} y\_{ij}^2}\_{f(\lambda\_i)} + \underbrace{\sum\_{j \in \mathcal{J}\_i} (y\_{ij} - \boldsymbol{\psi}\_i^T \mathbf{x}\_j)^2}\_{g(\lambda\_i)}, \tag{16}$$

where $y\_{ij}$ denotes the $(i, j)$th entry of **Y** and $\mathbf{x}\_j$ denotes the *j*th column of **X**. Denote $l(\lambda\_i)$ as

$$l(\lambda\_i) = \sum\_{j=1}^{L} \left( y\_{ij} - \mathcal{S}\_{\lambda\_i}(\boldsymbol{\psi}\_i^T \mathbf{x}\_j) \right)^2 = f(\lambda\_i) + g(\lambda\_i). \tag{17}$$

We observe that $f(\lambda\_i)$ is monotonically increasing in $\lambda\_i$ and that $g(\lambda\_i)$ is monotonically decreasing. We take $\boldsymbol{\psi}\_i^T \mathbf{x}\_j$, $j = 1, 2, \ldots, L$ as candidate thresholds and compute all the values of $f(\lambda\_i) + g(\lambda\_i)$. The optimal $\lambda\_i$ then lies in the interval determined by $\boldsymbol{\psi}\_i^T \mathbf{x}\_k$ and $\boldsymbol{\psi}\_i^T \mathbf{x}\_l$, which correspond to the smallest and the second smallest values of $f(\lambda\_i) + g(\lambda\_i)$, respectively, and any value in this interval can be selected. The algorithm for the threshold is summarized as Algorithm 1.
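Algorithm 1 itself is not reproduced in this excerpt; the candidate search it summarizes can be sketched as follows, assuming hard thresholding and, for simplicity, returning the single best candidate rather than a value inside the optimal interval:

```python
import numpy as np

def pick_threshold(y_i, c_i):
    """Select lambda_i for one row: y_i is the i-th row of Y,
    c_i = psi_i^T X is the i-th row of Psi^T X (both of length L).
    Each candidate lam in {|c_ij|} splits the indices into the support
    J_i = {j : |c_ij| > lam} and its complement, giving
      f(lam) = sum_{j not in J_i} y_ij^2        (increasing in lam),
      g(lam) = sum_{j in J_i} (y_ij - c_ij)^2   (decreasing in lam)."""
    best_lam, best_val = 0.0, np.inf
    for lam in np.abs(c_i):
        on = np.abs(c_i) > lam                  # support J_i for this lam
        f = np.sum(y_i[~on] ** 2)
        g = np.sum((y_i[on] - c_i[on]) ** 2)
        if f + g < best_val:
            best_lam, best_val = lam, f + g
    return best_lam
```

Scanning the $L$ candidates per row keeps the update exact up to the candidate grid, since $f + g$ only changes value when $\lambda\_i$ crosses one of the magnitudes $|\boldsymbol{\psi}\_i^T \mathbf{x}\_j|$.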

#### 3.2.2. Transform Pair Update Phase
