*2.1. Image Completion with Tucker Decomposition*

For the $N$-th order tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \dots \times I_N}$, the Tucker decomposition [58] with the ranks $\boldsymbol{J} = (J_1, J_2, \dots, J_N)$ can be formulated as

$$\mathcal{Y} = \mathcal{G} \times\_1 \mathbf{U}^{(1)} \times\_2 \mathbf{U}^{(2)} \times\_3 \dots \times\_N \mathbf{U}^{(N)},\tag{3}$$

where $\mathcal{G} = [g_{j_1,\dots,j_N}] \in \mathbb{R}^{J_1 \times J_2 \times \dots \times J_N}$ with $J_n \le I_n$ is the core tensor, and $\mathbf{U}^{(n)} = [\mathbf{u}_1^{(n)}, \dots, \mathbf{u}_{J_n}^{(n)}] = [u_{i_n,j_n}] \in \mathbb{R}^{I_n \times J_n}$ for $n = 1, \dots, N$ is the factor matrix capturing the features across the $n$-th mode of $\mathcal{Y}$. The operator $\times_n$ stands for the standard tensor-matrix contraction along the $n$-th mode, which is defined as

$$\left[\mathcal{G}\times\_n \mathbf{U}^{(n)}\right]\_{j\_1,\ldots,j\_{n-1},i\_n,j\_{n+1},\ldots,j\_N} = \sum\_{j\_n=1}^{J\_n} g\_{j\_1,\ldots,j\_N} u^{(n)}\_{i\_n,j\_n}.\tag{4}$$
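As a concrete illustration (not part of the original text), the mode-$n$ product in (4) and the full model (3) can be sketched in NumPy; the function names `mode_n_product` and `tucker_reconstruct` are our own:

```python
import numpy as np

def mode_n_product(G, U, n):
    """Contract tensor G with matrix U along mode n, as in Eq. (4):
    the n-th axis of G (size J_n) is replaced by an axis of size I_n."""
    Gn = np.moveaxis(G, n, 0)               # bring mode n to the front: (J_n, ...)
    out = np.tensordot(U, Gn, axes=(1, 0))  # sum over j_n: (I_n, ...)
    return np.moveaxis(out, 0, n)           # move the new axis back to position n

def tucker_reconstruct(G, factors):
    """Apply G x_1 U^(1) x_2 ... x_N U^(N), as in Eq. (3)."""
    Y = G
    for n, U in enumerate(factors):
        Y = mode_n_product(Y, U, n)
    return Y
```

For a third-order example with core size $(2, 3, 4)$ and factor matrices of sizes $5 \times 2$, $6 \times 3$, $7 \times 4$, the reconstruction yields a $5 \times 6 \times 7$ tensor, matching the dimensions in (3).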

The optimization problem for low-rank image completion with the Tucker decomposition can be formulated as follows:

$$\min\_{\mathcal{Z}, \mathcal{G}, \mathbf{U}^{(1)}, \dots, \mathbf{U}^{(N)}} \frac{1}{2} ||\mathcal{Z} - \mathcal{Y}||\_F^2 + \Phi(\mathcal{G}, \{\mathbf{U}^{(n)}\}), \quad \text{s.t.} \quad \mathcal{Z}\_{\Omega} = \mathcal{M}\_{\Omega}, \ \mathcal{Z}\_{\Omega} \ge \mathbf{0},$$

$$||\mathbf{u}\_{j\_n}^{(n)}|| = 1, \quad j\_n = 1, \dots, J\_n, \quad n = 1, \dots, N, \tag{5}$$

where $\mathcal{Y}$ is given by model (3), and $\Phi(\cdot)$ is a penalty function that imposes the desired constraints on the core tensor $\mathcal{G}$ and the factor matrices $\{\mathbf{U}^{(n)}\}$. The projection $\mathcal{Z}_{\Omega} = \mathcal{M}_{\Omega}$ means that $z_{i_1,\dots,i_N}$ is replaced with $m_{i_1,\dots,i_N}$ if $\omega_{i_1,\dots,i_N} = 1$, and left unchanged otherwise. Assuming $J_n < I_n$ for all $n$, the tensor $\mathcal{Y}$ in (3) has a low Tucker rank.
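The projection onto the observed entries has a direct elementwise form; a minimal NumPy sketch (with hypothetical small arrays, where `omega` is the binary observation mask):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.random((4, 4))            # current estimate
M = rng.random((4, 4))            # data tensor, valid only on observed entries
omega = rng.random((4, 4)) < 0.5  # binary mask: True where omega_{i_1,...,i_N} = 1

# Projection Z_Omega = M_Omega: observed entries are replaced by the data,
# unobserved entries are left unchanged.
Z_proj = np.where(omega, M, Z)
```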

Problem (5) can be solved by performing iterative updates, computing a Tucker decomposition in each step. The Tucker decomposition can be computed in many ways, depending on the constraints imposed on the estimated factors. If nonnegativity constraints are used (as specified), any nonnegative least-squares (NNLS) solver can be applied within the alternating optimization scheme. Neglecting the cost of the NNLS solver itself and the cost of computing the core tensor $\mathcal{G}$ in (3), the total computational complexity of approximating the solution to (5) in $K$ iterations can be roughly estimated as $\mathcal{O}\left(K \sum_{n=1}^{N} J_n \prod_{p=1}^{N} I_p\right)$.
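To make the iterative scheme concrete, the following is a simplified sketch of such a completion loop. It is not the paper's method: it ignores the penalty $\Phi(\cdot)$ and the nonnegativity and unit-norm constraints, and replaces the constrained solver with a plain truncated HOSVD in each step; all function names are our own.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: mode n becomes the rows, all other modes the columns."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def hosvd_approx(Z, ranks):
    """Rank-(J_1,...,J_N) truncated HOSVD approximation of Z (unconstrained)."""
    # Leading left singular vectors of each mode-n unfolding give U^(n).
    factors = []
    for n, Jn in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(Z, n), full_matrices=False)
        factors.append(U[:, :Jn])
    # Core: G = Z x_1 U^(1)^T ... x_N U^(N)^T.
    G = Z
    for n, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=(1, 0)), 0, n)
    # Reconstruction: Y = G x_1 U^(1) ... x_N U^(N), as in Eq. (3).
    Y = G
    for n, U in enumerate(factors):
        Y = np.moveaxis(np.tensordot(U, np.moveaxis(Y, n, 0), axes=(1, 0)), 0, n)
    return Y

def tucker_completion(M, omega, ranks, K=50):
    """Alternate a low-Tucker-rank fit with the projection Z_Omega = M_Omega."""
    Z = np.where(omega, M, M[omega].mean())  # initialize missing entries
    for _ in range(K):
        Y = hosvd_approx(Z, ranks)  # low-rank model of the current estimate
        Z = np.where(omega, M, Y)   # re-impose the observed entries
    return Z
```

Each iteration is dominated by the per-mode factorizations and contractions over the full tensor, which is consistent with the $\sum_n J_n \prod_p I_p$ per-iteration term in the complexity estimate above.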
