*2.2. RBF Interpolation*

The RBF interpolation [59] is a commonly used mesh-free method for approximating unstructured and high-dimensional data with high-order interpolants. Given a set of $I$ distinct data points $(\mathbf{x}\_i, y\_i)$ for $i = 1, \ldots, I$, $\forall i: \mathbf{x}\_i \in \mathbb{R}^P$, $y\_i \in \mathbb{R}$, the aim is to find an approximating function $y(\mathbf{x})$, referred to as the interpolant, that satisfies the condition $\forall i: y(\mathbf{x}\_i) = y\_i$. For RBFs $\psi : [0, \infty) \rightarrow \mathbb{R}$, the interpolant takes the form

$$y(\mathbf{x}) = \sum\_{j=1}^{J} w\_j \psi\left(||\mathbf{x} - \mathbf{x}\_j||\right),\tag{6}$$

where $\{w\_j\}$ are real-valued weighting coefficients. For $I$ data points, we have

$$y\_i = \sum\_{j=1}^{I} w\_j \psi\left(||\mathbf{x}\_i - \mathbf{x}\_j||\right), \quad \text{where} \quad i = 1, \dots, I \tag{7}$$

The weighting coefficients $\{w\_j\}$ can be computed from the system of linear equations:

$$y = \Psi w,\tag{8}$$

where $\boldsymbol{y} = [y\_1, \ldots, y\_I]^T \in \mathbb{R}^I$, $\boldsymbol{\Psi} = \left[\psi\left(||\mathbf{x}\_i - \mathbf{x}\_j||\right)\right] \in \mathbb{R}^{I \times J}$, and $\boldsymbol{w} = [w\_1, \ldots, w\_J]^T \in \mathbb{R}^J$. If $I \geq J$ and $\boldsymbol{\Psi}$ is a full-rank matrix, system (8) can be solved with any linear least-squares solver.
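To make the construction concrete, the following minimal sketch builds the system matrix $\boldsymbol{\Psi}$ of (8) and solves for the weights; the Gaussian kernel, the point set, and the sampled target function are illustrative assumptions, not choices prescribed above.

```python
import numpy as np

# A minimal sketch of eqs. (6)-(8): fit RBF weights on scattered data and
# check the interpolation condition (7). The Gaussian kernel psi and the
# test function are illustrative assumptions.

def fit_rbf_weights(X, y, psi):
    """Solve Psi w = y (eq. (8)) with centers placed at the data points."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # ||x_i - x_j||
    w, *_ = np.linalg.lstsq(psi(D), y, rcond=None)
    return w

def interpolant(X_new, X_centers, w, psi):
    """Evaluate y(x) of eq. (6) at new points."""
    D = np.linalg.norm(X_new[:, None, :] - X_centers[None, :, :], axis=-1)
    return psi(D) @ w

psi = lambda r: np.exp(-r**2)                  # Gaussian RBF (assumed)
X = np.random.default_rng(0).random((50, 2))   # I = 50 points, P = 2
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2         # sampled target values
w = fit_rbf_weights(X, y, psi)
print(np.abs(interpolant(X, X, w, psi) - y).max())  # ~0 up to conditioning
```

Here the centers coincide with the data points, so $J = I$ and $\boldsymbol{\Psi}$ is square; with $I > J$ the same call returns the least-squares fit.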

#### **3. Proposed Algorithm**

For interpolation of an $N$-way incomplete tensor $\mathcal{M}$, the points $\mathbf{x}\_i$ and $\mathbf{x}\_j$ in (7) are expressed by index values: $\mathbf{x}\_i = [i\_1, i\_2, \ldots, i\_N] \in \mathbb{R}^N$ and $\mathbf{x}\_j = [j\_1, j\_2, \ldots, j\_N] \in \mathbb{R}^N$. The distance measure in (6) can also be regarded in a wider sense, and hence the $l\_2$ norm can be replaced with a distance function $D(\mathbf{x}\_i, \mathbf{x}\_j)$. Any distance function satisfies the following conditions: $D(\mathbf{x}, \mathbf{x}) = 0$; $\forall \mathbf{x} \neq \mathbf{y}: D(\mathbf{x}, \mathbf{y}) > 0$; $D(\mathbf{x}, \mathbf{y}) = D(\mathbf{y}, \mathbf{x})$; and $D(\mathbf{x}, \mathbf{z}) \leq D(\mathbf{x}, \mathbf{y}) + D(\mathbf{y}, \mathbf{z})$. In the $N$-dimensional space, any data point $y\_{i\_1, \ldots, i\_N}$ can be modelled with the interpolant

$$y\_{i\_1, i\_2, \ldots, i\_N} = \sum\_{j=1}^{J} w\_j \psi\left( -\frac{D(\mathbf{x}\_i, \mathbf{x}\_j)}{\tau} \right) = \sum\_{j\_1=1}^{J\_1} \sum\_{j\_2=1}^{J\_2} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, j\_2, \ldots, j\_N} \psi\left( -\frac{D([i\_1, i\_2, \ldots, i\_N], [j\_1, j\_2, \ldots, j\_N])}{\tau} \right),\tag{9}$$

where *τ* > 0 is a scaling factor.

Let $D([i\_1, i\_2, \ldots, i\_N], [j\_1, j\_2, \ldots, j\_N])$ be an additively separable distance function, i.e.,

$$D([i\_1, i\_2, \ldots, i\_N], [j\_1, j\_2, \ldots, j\_N]) = \sum\_{n=1}^N d^{(n)}(i\_n, j\_n),\tag{10}$$

where $d^{(n)}(i\_n, j\_n)$ is a one-variable function that expresses the distance metric for the pair $(i\_n, j\_n)$ in the $n$-th mode. We also assume that the RBF $\psi$ is multiplicatively separable, that is,

$$\psi\left( \sum\_{n=1}^{N} \xi\_n \right) = \prod\_{n=1}^{N} \psi^{(n)}(\xi\_n).\tag{11}$$

Considering separability conditions (10) and (11), model (9) can be reformulated as follows:

$$\begin{aligned} y\_{i\_1, \ldots, i\_N} &= \sum\_{j\_1=1}^{J\_1} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, \ldots, j\_N} \, \psi\left( -\frac{\sum\_{n=1}^{N} d^{(n)}(i\_n, j\_n)}{\tau} \right) \\ &= \sum\_{j\_1=1}^{J\_1} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, \ldots, j\_N} \prod\_{n=1}^{N} \psi^{(n)}\left( -\frac{d^{(n)}(i\_n, j\_n)}{\tau} \right) = \sum\_{j\_1=1}^{J\_1} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, \ldots, j\_N} \prod\_{n=1}^{N} f\_{i\_n j\_n}^{(n)} \\ &= \sum\_{j\_N=1}^{J\_N} \cdots \left( \sum\_{j\_1=1}^{J\_1} w\_{j\_1, \ldots, j\_N} f\_{i\_1 j\_1}^{(1)} \right) \cdots f\_{i\_N j\_N}^{(N)}, \end{aligned}\tag{12}$$

where $f\_{i\_n j\_n}^{(n)} = \psi^{(n)}\left( -\frac{d^{(n)}(i\_n, j\_n)}{\tau} \right)$.


Following the standard tensor-matrix contraction rule given in (4), formula (12) can be presented in the following form:

$$\mathcal{Y} = \mathcal{W} \times\_1 \mathbf{F}^{(1)} \times\_2 \ldots \times\_N \mathbf{F}^{(N)},\tag{13}$$

where $\mathcal{Y} = [y\_{i\_1, \ldots, i\_N}] \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$, $\mathcal{W} = [w\_{j\_1, \ldots, j\_N}] \in \mathbb{R}^{J\_1 \times \ldots \times J\_N}$, and $\forall n: \mathbf{F}^{(n)} = [f\_{i\_n j\_n}^{(n)}] \in \mathbb{R}^{I\_n \times J\_n}$.

**Remark 1.** *The model given in (13) has a form identical to model (3), where $\mathcal{W}$ is the core tensor and $\{\mathbf{F}^{(n)}\}$ are the factor matrices. Hence, the RBF interpolation model in $N$-dimensional space, in which separability conditions (10) and (11) are satisfied, boils down to the Tucker decomposition model, where the factor matrices $\{\mathbf{F}^{(n)}\}$ are determined beforehand by radial functions.*
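As a quick illustration of Remark 1, the sketch below assembles per-mode factor matrices from a separable kernel and evaluates (13) as a sequence of mode-$n$ products; the kernel $\exp(-|i\_n - j\_n|/\tau)$ and all sizes are illustrative assumptions.

```python
import numpy as np

# A sketch of model (13): the separable RBF interpolant as a Tucker
# contraction Y = W x_1 F^(1) ... x_N F^(N). Kernel and sizes are assumed.

def mode_n_product(T, M, n):
    """Tensor-matrix contraction along mode n (rule (4))."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

I_dims, tau = (8, 9, 3), 1.0
# f^(n)_{i_n j_n} = psi^(n)(-d^(n)(i_n, j_n)/tau) with d = |i - j| (assumed)
F = [np.exp(-np.abs(np.subtract.outer(np.arange(1, I + 1),
                                      np.arange(1, I + 1))) / tau)
     for I in I_dims]                        # here J_n = I_n
W = np.random.default_rng(0).random(I_dims)  # core tensor of weights

Y = W
for n, Fn in enumerate(F):                   # Y = W x_1 F^(1) ... x_N F^(N)
    Y = mode_n_product(Y, Fn, n)
print(Y.shape)                               # (8, 9, 3)
```

Evaluating the model this way costs a sequence of small matrix products instead of forming the full $\prod\_n I\_n \times \prod\_n J\_n$ Kronecker matrix.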

If the distance function is expressed by the exponential $d^{(n)}(i\_n, j\_n) = i\_n^{j\_n - 1}$, $\tau = 1$, and $\psi(\xi) = -\xi$, then $f\_{i\_n j\_n}^{(n)} = \psi^{(n)}\left( -\frac{d^{(n)}(i\_n, j\_n)}{\tau} \right) = i\_n^{j\_n - 1}$, and model (12) takes the form of the standard multivariate polynomial regression:

$$y\_{i\_1, \ldots, i\_N} = \sum\_{j\_1=1}^{J\_1} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, \ldots, j\_N} \prod\_{n=1}^N (i\_n)^{j\_n - 1},\tag{14}$$

where the numbers $(J\_1, \ldots, J\_N)$ determine the degrees of the polynomial with respect to each mode, i.e., $\forall n: J\_n = D\_n + 1$, where $D\_n$ is the degree of the polynomial along the $n$-th mode. Thus, the multivariate polynomial regression can also be presented in the form of the Tucker decomposition model.
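For instance, for $N = 2$ model (14) can be evaluated with Vandermonde-type factor matrices; the sizes and degrees in the sketch below are illustrative assumptions.

```python
import numpy as np

# A sketch of eq. (14) for N = 2: multivariate polynomial regression as a
# Tucker model with Vandermonde factor matrices. Sizes/degrees are assumed.

I_dims, degrees = (8, 9), (2, 3)                 # D_n, with J_n = D_n + 1
P = [np.vander(np.arange(1, I + 1), D + 1, increasing=True).astype(float)
     for I, D in zip(I_dims, degrees)]           # rows [1, i_n, ..., i_n^D_n]
W = np.random.default_rng(0).random((degrees[0] + 1, degrees[1] + 1))

# y_{i1 i2} = sum_{j1 j2} w_{j1 j2} * i1^(j1-1) * i2^(j2-1)
Y = np.einsum('ia,jb,ab->ij', P[0], P[1], W)
print(Y.shape)                                   # (8, 9)
```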

The RBF $\psi$ can take various forms, such as Gaussian, polyharmonic splines, multiquadrics, inverse multiquadrics, and inverse quadratics [59]. The Gaussian RBF (GRBF), expressed by $\psi(\xi) = \exp\{\xi\}$, is commonly used for interpolation; however, it cannot be used for constructing an interpolant with polynomial precision. For example, the GRBF does not approximate a linear function $y(\mathbf{x})$ well. Hence, the polynomial regression in (14) is more suitable for linear or slowly varying functions, but it often yields underestimated interpolants if the polynomial degree is too low. Increasing the degree, in turn, leads to an ill-conditioned regression problem and unstable least-squares estimates. To tackle this problem, both interpolation approaches can be combined, which leads to the following interpolation model:

$$y\_{i\_1, \ldots, i\_N} = \sum\_{j\_1=1}^{J\_1} \cdots \sum\_{j\_N=1}^{J\_N} w\_{j\_1, \ldots, j\_N} \prod\_{n=1}^N \exp\left( -\frac{d^{(n)}(i\_n, j\_n)}{\tau} \right) + \sum\_{r\_1=1}^{R\_1} \cdots \sum\_{r\_N=1}^{R\_N} c\_{r\_1, \ldots, r\_N} \prod\_{n=1}^N (i\_n)^{r\_n - 1}.\tag{15}$$

Model (15) can be equivalently presented in the tensor-matrix contraction form:

$$\mathcal{Y} = \mathcal{W} \times\_1 \mathbf{F}^{(1)} \times\_2 \ldots \times\_N \mathbf{F}^{(N)} + \mathcal{C} \times\_1 \mathbf{P}^{(1)} \times\_2 \ldots \times\_N \mathbf{P}^{(N)},\tag{16}$$

where $\mathcal{C} = [c\_{r\_1, \ldots, r\_N}] \in \mathbb{R}^{R\_1 \times \ldots \times R\_N}$ is the core tensor of the polynomial regression model, and $\forall n: \mathbf{P}^{(n)} = [p\_{i\_n r\_n}^{(n)}] \in \mathbb{R}^{I\_n \times R\_n}$ is the respective factor matrix with $p\_{i\_n r\_n}^{(n)} = i\_n^{r\_n - 1}$, i.e., its rows take the form $[1, i\_n, i\_n^2, \ldots, i\_n^{R\_n - 1}]$. System (16) has $\prod\_{n=1}^N J\_n + \prod\_{n=1}^N R\_n$ variables and only $\prod\_{n=1}^N I\_n$ equations, which makes its solution ambiguous under the assumption that $\forall n: J\_n = I\_n$. To relax this problem, a side condition is imposed:

$$\mathcal{W} \times\_1 \mathbf{P}^{(1)T} \times\_2 \dots \times\_N \mathbf{P}^{(N)T} = 0.\tag{17}$$

Vectorizing models (16) and (17) and performing straightforward mathematical operations, we get

$$\begin{bmatrix} \boldsymbol{y} \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{F} & \mathbf{P} \\ \mathbf{P}^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \boldsymbol{w} \\ \boldsymbol{c} \end{bmatrix},\tag{18}$$

where $\boldsymbol{y} = \text{vec}(\mathcal{Y}) \in \mathbb{R}^{\prod\_{n=1}^N I\_n}$, $\mathbf{F} = \mathbf{F}^{(N)} \otimes \ldots \otimes \mathbf{F}^{(1)} \in \mathbb{R}^{\prod\_{n=1}^N I\_n \times \prod\_{n=1}^N J\_n}$, $\mathbf{P} = \mathbf{P}^{(N)} \otimes \ldots \otimes \mathbf{P}^{(1)} \in \mathbb{R}^{\prod\_{n=1}^N I\_n \times \prod\_{n=1}^N R\_n}$, $\boldsymbol{w} = \text{vec}(\mathcal{W}) \in \mathbb{R}^{\prod\_{n=1}^N J\_n}$, and $\boldsymbol{c} = \text{vec}(\mathcal{C}) \in \mathbb{R}^{\prod\_{n=1}^N R\_n}$. The symbol $\otimes$ denotes the Kronecker product, and $\text{vec}(\cdot)$ is the vectorization operator.
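The sketch below verifies, for $N = 2$ and illustrative sizes, the vectorization identity behind (18), $\text{vec}(\mathcal{W} \times\_1 \mathbf{F}^{(1)} \times\_2 \mathbf{F}^{(2)}) = (\mathbf{F}^{(2)} \otimes \mathbf{F}^{(1)})\,\text{vec}(\mathcal{W})$, and assembles the block system matrix.

```python
import numpy as np

# A sketch of the vectorized model (18) for N = 2. All sizes are assumed;
# column-major ('F') flattening matches the vec(.) convention used here.

rng = np.random.default_rng(0)
F1, F2 = rng.random((6, 6)), rng.random((5, 5))      # factor matrices
P1 = np.vander(np.arange(1, 7), 3, increasing=True).astype(float)
P2 = np.vander(np.arange(1, 6), 3, increasing=True).astype(float)
W = rng.random((6, 5))                               # core tensor (matrix)

Y = F1 @ W @ F2.T                                    # W x_1 F^(1) x_2 F^(2)
F = np.kron(F2, F1)                                  # F = F^(2) kron F^(1)
print(np.allclose(Y.flatten(order='F'), F @ W.flatten(order='F')))  # True

P = np.kron(P2, P1)                                  # 30 x 9 polynomial part
A = np.block([[F, P],
              [P.T, np.zeros((P.shape[1], P.shape[1]))]])  # matrix of (18)
print(A.shape)                                       # (39, 39)
```

For realistic image sizes, the explicit Kronecker products are only formed per subtensor, as discussed further below.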

To apply model (16) to image completion, let $\hat{\mathcal{Y}}$ be obtained according to Definition 1. System (18) can be applied to model the non-zero entries in $\hat{\mathcal{Y}}$, using the following transformations: $\hat{\boldsymbol{y}} = \text{vec}(\hat{\mathcal{Y}}) = \mathbf{S}\,\text{vec}(\mathcal{Y})$, $\hat{\mathbf{F}} = \mathbf{S}\mathbf{F}\mathbf{S}$, $\hat{\mathbf{P}} = \mathbf{S}\mathbf{P}$, and $\hat{\boldsymbol{w}} = \mathbf{S}\boldsymbol{w}$, where $\mathbf{S} = \text{diag}\{\text{vec}(\Omega)\} \in \mathbb{R}^{\prod\_{p=1}^N I\_p \times \prod\_{p=1}^N I\_p}$ is a diagonal matrix with binary values on the main diagonal. The matrices $\hat{\mathbf{F}}$ and $\hat{\mathbf{P}}$ have $|\bar{\Omega}|$ zero-value rows, and $\hat{\mathbf{F}}$ also has the same number of zero-value columns. After removing the zero-value rows and columns from the transformed system, the observed entries in $\mathcal{M}$ can be expressed by the model

$$\begin{bmatrix} \boldsymbol{y}(\omega) \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{F}(\omega, \omega) & \mathbf{P}(\omega, :) \\ \mathbf{P}(\omega, :)^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \boldsymbol{w}(\omega) \\ \boldsymbol{c} \end{bmatrix},\tag{19}$$

where $\boldsymbol{y}(\omega) = \text{vec}(\mathcal{Y}(\Omega)) \in \mathbb{R}^{|\Omega|}$ is a vectorized version of the observed entries in $\mathcal{M}$, $\mathbf{F}(\omega, \omega) \in \mathbb{R}^{|\Omega| \times |\Omega|}$ is the submatrix of $\hat{\mathbf{F}}$ obtained by selecting its non-zero rows and columns, $\mathbf{P}(\omega, :) \in \mathbb{R}^{|\Omega| \times \prod\_{n=1}^N R\_n}$ is obtained from $\hat{\mathbf{P}}$ by removing all zero-value rows, and $\boldsymbol{w}(\omega) = \text{vec}(\mathcal{W}(\Omega)) \in \mathbb{R}^{|\Omega|}$, taking into account rule (2). Note that $|\Omega| \ll \prod\_{n=1}^N I\_n$ if the number of missing entries in $\mathcal{M}$ is relatively large.

The matrix $\mathbf{F}(\omega, \omega)$ is positive-definite because it is generated by a GRBF. The matrix $\mathbf{P}(\omega, :)$ might be ill-conditioned if polynomials of high degree are used, but since the second term in (16) only serves to better approximate linear relationships, there is no need for high degrees. In this approach, $\forall n: R\_n = 3$, which corresponds to second-degree polynomials. The system matrix in (19) is therefore symmetric and nonsingular, and any least-squares (LS) solver can be used to compute the vectors $\boldsymbol{w}(\omega)$ and $\boldsymbol{c}$ from (19), given $\boldsymbol{y}(\omega)$.

System (18) can also be used to compute the missing entries in $\mathcal{M}$. Given the estimates of $\boldsymbol{w}(\omega)$ and $\boldsymbol{c}$, the missing entries can be obtained from the following system of linear equations:

$$\begin{bmatrix} \bar{\boldsymbol{y}} \\ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{F}(\bar{\omega}, \omega) & \mathbf{P}(\bar{\omega}, :) \\ \mathbf{P}(\omega, :)^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \boldsymbol{w}(\omega) \\ \boldsymbol{c} \end{bmatrix},\tag{20}$$

where $\mathbf{F}(\bar{\omega}, \omega) \in \mathbb{R}^{|\bar{\Omega}| \times |\Omega|}$ is obtained from $\mathbf{F}$ by selecting the rows indexed by $\bar{\Omega}$ and the columns indexed by $\Omega$, and the vector $\bar{\boldsymbol{y}} = \boldsymbol{y}(\bar{\omega})$ contains the estimates of the missing entries in $\mathcal{M}$. Hence, the completed image is expressed by

$$\mathcal{Y} = [y\_{i\_1, \ldots, i\_N}], \quad \text{where} \quad y\_{i\_1, \ldots, i\_N} = \begin{cases} m\_{i\_1, \ldots, i\_N} & \text{if } \omega\_{i\_1, \ldots, i\_N} = 1, \\ \bar{y}\_{i\_1, \ldots, i\_N} & \text{otherwise.} \end{cases}\tag{21}$$
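Putting (19)-(21) together for the simplest case $N = 1$, the sketch below masks a signal, solves the saddle system for $\boldsymbol{w}(\omega)$ and $\boldsymbol{c}$, and fills in the missing samples; the kernel, $\tau$, the degree $R = 3$, and the test signal are illustrative assumptions.

```python
import numpy as np

# A self-contained sketch of eqs. (19)-(21) for N = 1: complete a partially
# observed 1-D signal. Kernel, tau, R and the test signal are assumed.

rng = np.random.default_rng(0)
I, R, tau = 40, 3, 2.0
t = np.arange(1, I + 1)
m = np.sin(t / 5.0)                               # ground-truth signal
omega = rng.random(I) > 0.5                       # True = observed entry

F = np.exp(-np.abs(np.subtract.outer(t, t)) / tau)    # GRBF on indices
P = np.vander(t, R, increasing=True).astype(float)    # rows [1, i, i^2]

# Eq. (19): restrict to observed rows/columns and solve the saddle system.
F_oo, P_o = F[np.ix_(omega, omega)], P[omega, :]
A = np.block([[F_oo, P_o], [P_o.T, np.zeros((R, R))]])
rhs = np.concatenate([m[omega], np.zeros(R)])
sol = np.linalg.solve(A, rhs)
w_o, c = sol[:omega.sum()], sol[omega.sum():]

# Eq. (20): evaluate the model at the missing indices.
y_bar = F[np.ix_(~omega, omega)] @ w_o + P[~omega, :] @ c

# Eq. (21): keep observed entries, insert the estimates elsewhere.
y = np.where(omega, m, 0.0)
y[~omega] = y_bar
print(np.abs(y - np.sin(t / 5.0)).max())          # small completion error
```

The same steps carry over to $N > 1$ once $\mathbf{F}$ and $\mathbf{P}$ are formed as Kronecker products, as in (18).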

The system matrix in (19) has the order of $|\Omega| + \prod\_{n=1}^N R\_n$. Applying Gaussian elimination to (19), the computational complexity amounts to $\mathcal{O}\left( (|\Omega| + \prod\_{n=1}^N R\_n)^3 \right)$, which is relatively large if the number of missing entries in $\mathcal{M}$ is small. It is therefore necessary to reduce the computational complexity of the LS problem associated with (19). To tackle this issue, let the input tensor $\hat{\mathcal{Y}} \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$ be partitioned into small overlapping subtensors $\{\hat{\mathcal{Y}}^{(s\_1, \ldots, s\_N)}\}$, where $\forall n: s\_n = 1, \ldots, S\_n$, and $S\_n$ is the number of partitions of $\hat{\mathcal{Y}}$ along its $n$-th mode. The total number of subtensors is equal to $S\_Y = \prod\_{n=1}^N S\_n$. Each subtensor can be expressed by $\hat{\mathcal{Y}}^{(s\_1, \ldots, s\_N)} = [\hat{y}\_{\Gamma(s\_1), \ldots, \Gamma(s\_N)}] \in \mathbb{R}^{L\_1 \times \ldots \times L\_N}$, where $\forall n: L\_n = \left\lceil \frac{I\_n}{S\_n} \right\rceil \leq I\_n$. The set $\Gamma(s\_n)$ contains the indices of the entries in $\hat{\mathcal{Y}}$ that belong to $\hat{\mathcal{Y}}^{(s\_1, \ldots, s\_N)}$ along its $n$-th mode, and it can be expressed by

$$\forall n: \Gamma(s\_n) = \begin{cases} \{\gamma(s\_n) + 1, \ldots, \gamma(s\_n) + L\_n\} \in \mathbb{N}^{L\_n} & \text{for} \quad s\_n < S\_n, \\ \{\gamma(S\_n), \ldots, I\_n\} & \text{for} \quad s\_n = S\_n, \end{cases}\tag{22}$$

where $\gamma(s\_n) = (s\_n - 1)(L\_n - \eta\_n)$. The parameter $\eta\_n = \left\lfloor \frac{\theta\_n L\_n}{100} \right\rfloor$ determines the number of overlapping pixels along the $n$-th mode, and $\theta\_n \in [0, 99]$ expresses the percentage of overlap along the $n$-th mode.
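A sketch of rule (22) for one mode is given below; it assumes 1-based indices and $L\_n = \lceil I\_n / S\_n \rceil$, as above.

```python
import numpy as np

# A sketch of the overlapping partitioning rule (22) along one mode, with
# gamma(s_n) = (s_n - 1)(L_n - eta_n) and eta_n = floor(theta_n * L_n / 100).

def partition_indices(I_n, S_n, theta_n):
    """Return the 1-based index sets Gamma(s_n), s_n = 1, ..., S_n."""
    L_n = int(np.ceil(I_n / S_n))                    # subtensor length
    eta_n = int(theta_n * L_n // 100)                # overlap in pixels
    gamma = lambda s: (s - 1) * (L_n - eta_n)
    return [np.arange(gamma(s) + 1, gamma(s) + L_n + 1) if s < S_n
            else np.arange(gamma(S_n), I_n + 1)      # last block ends at I_n
            for s in range(1, S_n + 1)]

for Gamma in partition_indices(I_n=10, S_n=3, theta_n=25):
    print(Gamma)      # [1..4], [4..7], [6..10]: overlapping blocks
```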

The proposed methodology for image completion should be applied separately to each subtensor. The missing pixels are then completed using only a very limited volume of the input tensor, mostly restricted to their nearest neighborhood. This strategy resembles PDE-based image inpainting, but it is much more flexible for highly scattered known pixels and allows us to reduce the computational cost dramatically. Moreover, the factor matrices $\{\mathbf{F}^{(n)}\}$ and $\{\mathbf{P}^{(n)}\}$ in (16) are expressed by radial functions, and hence can be precomputed before running the subtensor partitioning procedure. In RBF-based interpolation methods, the samples or pixels close to the boundary of the region of interest are usually not well approximated; however, due to the overlapping, boundary perturbation effects in our approach are considerably relaxed.

The pseudo-code of the proposed image completion algorithm is presented in Algorithm 1.

#### **Algorithm 1: Tensorial Interpolation for Image Completion (TI-IC)**

**Input:** $\mathcal{M} \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$ – input tensor; $\Omega = [\omega\_{i\_1, \ldots, i\_N}] \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$ – binary tensor of indicators distinguishing the observed entries ($\omega\_{i\_1, \ldots, i\_N} = 1$) from the missing ones; $d(\cdot, \cdot)$ – distance metric; $\tau$ – scaling factor; $R = \{R\_1, \ldots, R\_N\}$ – degrees of the interpolation polynomials; $\{S\_1, \ldots, S\_N\}$ – partitioning rates; $\{\theta\_1, \ldots, \theta\_N\}$ – rates of overlapping (in percent).

**Output:** $\mathcal{Y} \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$ – completed tensor

1. **for** $n = 1, \ldots, N$ **do**
2. Compute the factor matrices $\mathbf{F}^{(n)}$ using the metric $d$, and $\mathbf{P}^{(n)}$ using the set $R$.
3. Set $\mathbf{F} = \mathbf{F}^{(N)} \otimes \ldots \otimes \mathbf{F}^{(1)}$ and $\mathbf{P} = \mathbf{P}^{(N)} \otimes \ldots \otimes \mathbf{P}^{(1)}$.
4. **foreach** $(s\_1, \ldots, s\_N)$ **do**
5. Create $\hat{\mathcal{Y}}$ from $\mathcal{M}$ using (1), and select $\hat{\mathcal{Y}}^{(s\_1, \ldots, s\_N)} = [\hat{y}\_{\Gamma(s\_1), \ldots, \Gamma(s\_N)}] \in \mathbb{R}^{L\_1 \times \ldots \times L\_N}$ from $\hat{\mathcal{Y}}$ and $\Omega^{(s\_1, \ldots, s\_N)} = [\omega\_{\Gamma(s\_1), \ldots, \Gamma(s\_N)}] \in \mathbb{R}^{L\_1 \times \ldots \times L\_N}$ from $\Omega$, using rule (22).
6. Compute $\omega^{(s\_1, \ldots, s\_N)} = \text{vec}(\Omega^{(s\_1, \ldots, s\_N)})$, $\bar{\omega}^{(s\_1, \ldots, s\_N)} = \mathbf{1} - \omega^{(s\_1, \ldots, s\_N)}$, and $\boldsymbol{y}(\omega^{(s\_1, \ldots, s\_N)}) = \text{vec}(\hat{\mathcal{Y}}^{(s\_1, \ldots, s\_N)})$.
7. Select $\mathbf{F}(\omega^{(s\_1, \ldots, s\_N)}, \omega^{(s\_1, \ldots, s\_N)}) \in \mathbb{R}^{|\Omega^{(s\_1, \ldots, s\_N)}| \times |\Omega^{(s\_1, \ldots, s\_N)}|}$ and $\mathbf{F}(\bar{\omega}^{(s\_1, \ldots, s\_N)}, \omega^{(s\_1, \ldots, s\_N)}) \in \mathbb{R}^{|\bar{\Omega}^{(s\_1, \ldots, s\_N)}| \times |\Omega^{(s\_1, \ldots, s\_N)}|}$ from $\mathbf{F}$, and $\mathbf{P}(\omega^{(s\_1, \ldots, s\_N)}, :) \in \mathbb{R}^{|\Omega^{(s\_1, \ldots, s\_N)}| \times \prod\_{n=1}^N R\_n}$ from $\mathbf{P}$.
8. Compute $\boldsymbol{w}(\omega^{(s\_1, \ldots, s\_N)})$ and $\boldsymbol{c}$ by solving system (19).
9. Given $\boldsymbol{w}(\omega^{(s\_1, \ldots, s\_N)})$ and $\boldsymbol{c}$, compute $\bar{\boldsymbol{y}}^{(s\_1, \ldots, s\_N)}$ from (20), and construct $\mathcal{Y}^{(s\_1, \ldots, s\_N)}$ using (21).
10. Assemble $\mathcal{Y} = [\mathcal{Y}^{(s\_1, \ldots, s\_N)}] \in \mathbb{R}^{I\_1 \times \ldots \times I\_N}$.
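A condensed Python sketch of the whole procedure for $N = 2$ is given below. It reuses the `partition_indices` helper from the previous sketch; the index kernel, $\tau$, $R\_n = 3$, and the simple overwriting of overlapping pixels are all illustrative assumptions rather than the exact implementation.

```python
import numpy as np

# A condensed sketch of Algorithm 1 (TI-IC) for N = 2. Each overlapping
# block is completed independently via eqs. (19)-(21); the Kronecker
# structure lets us restrict F and P per block from the per-mode factors.
# Overlapping pixels are simply overwritten here, a simplification.

def ti_ic_2d(M, Omega, tau=1.0, R=(3, 3), S=(2, 2), theta=(25, 25)):
    I1, I2 = M.shape
    # Steps 1-3: per-mode GRBF and Vandermonde factor matrices (assumed d).
    f = [np.exp(-np.abs(np.subtract.outer(np.arange(I), np.arange(I))) / tau)
         for I in (I1, I2)]
    p = [np.vander(np.arange(1, I + 1), r, increasing=True).astype(float)
         for I, r in zip((I1, I2), R)]
    Y = M.astype(float).copy()
    # Steps 4-10: loop over the overlapping subtensors of rule (22).
    for g1 in partition_indices(I1, S[0], theta[0]):
        for g2 in partition_indices(I2, S[1], theta[1]):
            a, b = g1 - 1, g2 - 1                    # 0-based index sets
            Fs = np.kron(f[1][np.ix_(b, b)], f[0][np.ix_(a, a)])
            Ps = np.kron(p[1][b, :], p[0][a, :])
            ys = M[np.ix_(a, b)].astype(float).flatten(order='F')
            obs = Omega[np.ix_(a, b)].flatten(order='F').astype(bool)
            n_o, n_p = int(obs.sum()), Ps.shape[1]
            A = np.block([[Fs[np.ix_(obs, obs)], Ps[obs]],
                          [Ps[obs].T, np.zeros((n_p, n_p))]])   # eq. (19)
            sol, *_ = np.linalg.lstsq(
                A, np.concatenate([ys[obs], np.zeros(n_p)]), rcond=None)
            w_o, c = sol[:n_o], sol[n_o:]
            ys[~obs] = Fs[np.ix_(~obs, obs)] @ w_o + Ps[~obs] @ c  # eq. (20)
            Y[np.ix_(a, b)] = ys.reshape(len(a), len(b), order='F')  # (21)
    return Y
```

Averaging the overlapped regions instead of overwriting them would better exploit the overlap discussed above.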

**Remark 2.** *The computational complexity of calculating the matrices $\mathbf{F}$ and $\mathbf{P}$ in Algorithm 1 amounts to $\mathcal{O}\left( \prod\_{n=1}^N I\_n (J\_n + R\_n) \right)$. Assuming Gaussian elimination is used for solving system (19), we have $\mathcal{O}\left( S\_Y \left( |\Omega^{(s\_1, \ldots, s\_N)}| + \prod\_{n=1}^N R\_n \right)^3 \right)$ for $S\_Y$ subtensors, and $\mathcal{O}\left( S\_Y \left( |\bar{\Omega}^{(s\_1, \ldots, s\_N)}| + \prod\_{n=1}^N R\_n \right)\left( |\Omega^{(s\_1, \ldots, s\_N)}| + \prod\_{n=1}^N R\_n \right) \right)$ for computing $\bar{\boldsymbol{y}}^{(s\_1, \ldots, s\_N)}$ from (20). Let $|\Omega| = \xi \prod\_{n=1}^N I\_n$ and $\forall (s\_1, \ldots, s\_N): |\Omega^{(s\_1, \ldots, s\_N)}| = \xi \prod\_{n=1}^N L\_n$, where $0 \leq \xi \leq 1$. Neglecting the matrix $\mathbf{P}$ in (19), because it is much smaller than $\mathbf{F}$, the computational complexity for solving system (19) can be roughly estimated as $\mathcal{O}\left( S\_Y |\Omega^{(s\_1, \ldots, s\_N)}|^3 \right) = \mathcal{O}\left( S\_Y \xi^3 \left( \prod\_{n=1}^N L\_n \right)^3 \right) \approx \mathcal{O}\left( \xi^3 \prod\_{n=1}^N \frac{I\_n^3}{S\_n^2} \right)$. Note that the computational complexity for solving the same system without partitioning the tensor $\mathcal{Y}$ into subtensors, under the same assumption, can be roughly estimated as $\mathcal{O}\left( |\Omega|^3 \right) = \mathcal{O}\left( \xi^3 \prod\_{n=1}^N I\_n^3 \right)$. Hence, the partitioning strategy decreases the computational complexity $S\_Y^2$ times with respect to the non-partitioned system, and if $\forall n: J\_n = I\_n$, the complexity of precomputing the matrix $\mathbf{F}$ might predominate.*
