**1. Introduction**

An important challenge of recent years has been the extraction of the main information contained in the large datasets, measurements, and observations that appear in signal processing, hyperspectral image processing, data mining, and machine learning. Due to the increasing volume of data required by these applications, approximate low-rank matrix and tensor factorizations play a fundamental role in extracting latent components. The idea is to replace the initial large, and possibly noisy and ill-conditioned, original data by a lower-dimensional approximate representation obtained via a matrix or multi-way array factorization or decomposition. Principal Component Analysis (PCA) is a widely used technique for image recognition and identification. In the matrix case, it involves the computation of eigenvalue or singular value decompositions. In the tensor case, even though various factorization techniques have been developed over the last decades (the higher-order SVD (HOSVD), the Candecomp–Parafac (CP) decomposition and the Tucker decomposition), the recent tensor SVDs (t-SVD and c-SVD), based on the tensor t-product or c-product, offer a matrix-like framework for third-order tensors; see [1–15] for more details on recent work related to tensors and applications.

**Citation:** Hached, M.; Jbilou, K.; Koukouvinos, C.; Mitrouli, M. A Multidimensional Principal Component Analysis via the C-Product Golub–Kahan–SVD for Classification and Face Recognition. *Mathematics* **2021**, *9*, 1249. https://doi.org/10.3390/math9111249

Academic Editor: Luca Gemignani

Received: 11 May 2021 Accepted: 25 May 2021 Published: 29 May 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In the present work, we consider third-order tensors, which can be defined as three-dimensional arrays of data. As our study is based on the cosine transform product, we limit this work to third-order tensors.

For a given 3-mode tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we denote by $x_{i_1,i_2,i_3}$ the element $(i_1, i_2, i_3)$ of the tensor $\mathcal{X}$. A fiber is defined by fixing all the indexes except one. An element $\mathbf{c} \in \mathbb{R}^{1 \times 1 \times n}$ is called a tubal-scalar or simply a tube of length $n$. For more details, refer to [1,2].

#### **2. Definitions and Notations**

#### *2.1. Discrete Cosine Transformation*

In this subsection, we recall some definitions and properties of the discrete cosine transformation and the c-product of tensors. During recent years, many advances were made in order to establish a rigorous framework enabling the treatment of problems in which the data is stored in three-way tensors, without having to resort to matricization [1,8]. One of the most important features of such a framework is the definition of a tensor-tensor product such as the t-product, based on the Fast Fourier Transform (FFT). For applications such as image processing, the tensor-tensor product based on the Discrete Cosine Transformation (DCT) has shown to be an interesting alternative to the FFT. We now give some basic facts on the DCT and its associated tensor-tensor product. The DCT of a vector $v \in \mathbb{R}^n$ is defined by

$$
\tilde{v} = C_n v \in \mathbb{R}^n, \tag{1}
$$

where *Cn* is the *n* × *n* discrete cosine transform matrix with entries

$$(\mathbb{C}\_n)\_{ij} = \sqrt{\frac{2 - \delta\_{i1}}{n}} \cos \left( \frac{(i - 1)(2j - 1)\pi}{2n} \right) \quad 1 \le i, j \le n$$

where $\delta_{ij}$ is the Kronecker delta; see p. 150 in [16] for more details. It is known that the matrix $C_n$ is orthogonal, i.e., $C_n^T C_n = C_n C_n^T = I_n$; see [17]. Furthermore, for any vector $v \in \mathbb{R}^n$, the matrix-vector multiplication $C_n v$ can be computed in $\mathcal{O}(n \log(n))$ operations. Moreover, the authors of [17] showed that a certain class of Toeplitz-plus-Hankel matrices can be diagonalized by $C_n$. More precisely, we have

$$\text{Cl}\_n\text{th}(\upsilon)\,\text{C}\_n^{-1} = \text{Diag}(\tilde{\upsilon}),\tag{2}$$

where

$$\mathtt{th}(v) = \underbrace{\begin{pmatrix} v_1 & v_2 & \dots & v_n \\ v_2 & v_1 & \dots & v_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ v_n & v_{n-1} & \dots & v_1 \end{pmatrix}}_{\text{Toeplitz}} + \underbrace{\begin{pmatrix} v_2 & \dots & v_n & 0 \\ \vdots & \iddots & \iddots & v_n \\ v_n & \iddots & \iddots & \vdots \\ 0 & v_n & \dots & v_2 \end{pmatrix}}_{\text{Hankel}}$$

and $\operatorname{Diag}(\tilde{v})$ is the diagonal matrix whose $i$-th diagonal element is $\tilde{v}_i$.
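These properties are easy to check numerically. The following Python/NumPy sketch (illustrative only; the helper names `dct_matrix` and `th` are ours) builds $C_n$ from the formula above, verifies its orthogonality, compares it with SciPy's fast DCT routine, and checks the diagonalization (2):

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import toeplitz, hankel

def dct_matrix(n):
    """Orthogonal DCT matrix C_n with entries
    sqrt((2 - delta_{i1})/n) * cos((i-1)(2j-1)pi/(2n)) (1-based i, j)."""
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    C = np.sqrt(2.0 / n) * np.cos((i - 1) * (2 * j - 1) * np.pi / (2 * n))
    C[0, :] /= np.sqrt(2.0)            # first row: delta_{i1} = 1
    return C

def th(v):
    """Toeplitz-plus-Hankel matrix th(v) as displayed above."""
    T = toeplitz(v)                    # symmetric Toeplitz part
    col = np.r_[v[1:], 0.0]            # Hankel part: first column (v2, ..., vn, 0)
    row = np.r_[0.0, v[:0:-1]]         # last row (0, vn, ..., v2)
    return T + hankel(col, row)

n = 8
C = dct_matrix(n)
assert np.allclose(C @ C.T, np.eye(n))            # C_n is orthogonal

v = np.random.default_rng(0).standard_normal(n)
assert np.allclose(C @ v, dct(v, norm='ortho'))   # matches the fast O(n log n) routine

D = C @ th(v) @ C.T                               # C_n th(v) C_n^{-1}, Equation (2)
assert np.allclose(D, np.diag(np.diag(D)))        # D is diagonal
```

The last assertion is exactly Equation (2): since $C_n$ is orthogonal, $C_n^{-1} = C_n^T$, and conjugating $\mathtt{th}(v)$ by $C_n$ yields a diagonal matrix.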

#### *2.2. Definitions and Properties of the Cosine Product*

In this subsection, we briefly review some concepts and notations which play a central role in the elaboration of tensor global iterative methods based on the c-product; see [18] for more details on the c-product.

Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a real-valued third-order tensor; then the operation mat and its inverse ten are defined by

$$\mathtt{mat}(\mathcal{A}) = \underbrace{\begin{pmatrix} A_1 & A_2 & \dots & A_{n_3} \\ A_2 & A_1 & \dots & A_{n_3-1} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n_3} & A_{n_3-1} & \dots & A_1 \end{pmatrix}}_{\text{Block Toeplitz}} + \underbrace{\begin{pmatrix} A_2 & \dots & A_{n_3} & 0 \\ \vdots & \iddots & \iddots & A_{n_3} \\ A_{n_3} & \iddots & \iddots & \vdots \\ 0 & A_{n_3} & \dots & A_2 \end{pmatrix}}_{\text{Block Hankel}} \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}$$

and the inverse operation denoted by ten is simply defined by

$$\mathfrak{ten}(\mathfrak{mat}(\mathcal{A})) = \mathcal{A}.$$

Let us denote by $\widetilde{\mathcal{A}}$ the tensor obtained by applying the DCT to all the tubes of the tensor $\mathcal{A}$. This operation and its inverse are implemented in MATLAB by the commands dct and idct as

$$
\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\ ], 3), \quad \text{and} \quad \mathtt{idct}(\widetilde{\mathcal{A}}, [\ ], 3) = \mathcal{A},
$$

where idct denotes the Inverse Discrete Cosine Transform.


**Remark 1.** *Notice that the tensor* $\widetilde{\mathcal{A}}$ *can be computed by using the 3-mode product defined in [2] as follows:*

$$
\widetilde{\mathcal{A}} = \mathcal{A} \times_3 M,
$$

*where $M$ is the $n_3 \times n_3$ invertible matrix given by*


$$M = W^{-1} C_{n_3} (I + Z),$$

*where $C_{n_3}$ denotes the $n_3 \times n_3$ Discrete Cosine Transform (DCT) matrix, $W = \operatorname{diag}(C_{n_3}(:, 1))$ is the diagonal matrix made of the first column of the DCT matrix, $Z$ is the $n_3 \times n_3$ upshift matrix, which can be computed in MATLAB as $Z = \mathtt{diag}(\mathtt{ones}(n_3 - 1, 1), 1)$, and $I$ is the $n_3 \times n_3$ identity matrix; see [18] for more details.*

Let **A** be the matrix

$$\mathbf{A} = \begin{pmatrix} A^{(1)} \\ & A^{(2)} \\ & & \ddots \\ & & & A^{(n\_3)} \end{pmatrix} \in \mathbb{R}^{n\_3 n\_1 \times n\_3 n\_2} \tag{3}$$

where the matrices $\widetilde{A}^{(i)}$ are the frontal slices of the tensor $\widetilde{\mathcal{A}}$. The block matrix mat($\mathcal{A}$) can also be block diagonalized by using the DCT matrix as follows

$$(C_{n_3} \otimes I_{n_1}) \, \mathtt{mat}(\mathcal{A}) \left( C_{n_3}^T \otimes I_{n_2} \right) = \mathbf{A} \tag{4}$$

**Definition 1.** *The c-product of two tensors* $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ *and* $\mathcal{B} \in \mathbb{R}^{n_2 \times m \times n_3}$ *is the* $n_1 \times m \times n_3$ *tensor defined by:*

$$\mathcal{A} \star\_{\mathbb{C}} \mathcal{B} = \mathsf{ten}(\mathsf{mat}(\mathcal{A}) \mathsf{mat}(\mathcal{B})) .$$

Notice that from Equation (3), we can show that the product $\mathcal{C} = \mathcal{A} \star_c \mathcal{B}$ is equivalent to $\mathbf{C} = \mathbf{A}\,\mathbf{B}$. Algorithm 1 allows us to compute, in an efficient way, the c-product of the tensors $\mathcal{A}$ and $\mathcal{B}$; see [18].

**Algorithm 1** Computing the c-product.

**Inputs**: $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times m \times n_3}$. **Output**: $\mathcal{C} = \mathcal{A} \star_c \mathcal{B} \in \mathbb{R}^{n_1 \times m \times n_3}$.

1. Compute $\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\ ], 3)$ and $\widetilde{\mathcal{B}} = \mathtt{dct}(\mathcal{B}, [\ ], 3)$.


2. Compute each frontal slice of $\widetilde{\mathcal{C}}$ by $\widetilde{C}^{(i)} = \widetilde{A}^{(i)} \widetilde{B}^{(i)}$, for $i = 1, \dots, n_3$.

3. Compute $\mathcal{C} = \mathtt{idct}(\widetilde{\mathcal{C}}, [\ ], 3)$.
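Algorithm 1 can be sketched in Python using SciPy's fast DCT along the third mode (a hedged illustration, not the authors' MATLAB code; the function name `c_product` is ours, and `einsum` performs the slice-wise frontal products of step 2):

```python
import numpy as np
from scipy.fft import dct, idct

def c_product(A, B):
    """Transform-domain tensor product following Algorithm 1:
    DCT along the third mode, slice-wise matrix products, inverse DCT."""
    A_hat = dct(A, norm='ortho', axis=2)
    B_hat = dct(B, norm='ortho', axis=2)
    # frontal-slice products in the transform domain: C^(i) = A^(i) B^(i)
    C_hat = np.einsum('ijk,jlk->ilk', A_hat, B_hat)
    return idct(C_hat, norm='ortho', axis=2)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
C = c_product(A, B)
assert C.shape == (4, 2, 5)

# the product is associative, since slice-wise matrix products are
D = rng.standard_normal((2, 6, 5))
assert np.allclose(c_product(c_product(A, B), D), c_product(A, c_product(B, D)))
```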


Next, we give some definitions and remarks on the c-product and related topics.


**Definition 2.** *The identity tensor* $\mathcal{I}_{n_1 n_1 n_3}$ *is the tensor such that each frontal slice of* $\widetilde{\mathcal{I}}_{n_1 n_1 n_3} = \mathtt{dct}(\mathcal{I}_{n_1 n_1 n_3}, [\ ], 3)$ *is the identity matrix* $I_{n_1}$*.*

*An n*1 × *n*1 × *n*3 *tensor* A *is said to be invertible if there exists a tensor* B *of order n*1 × *n*1 × *n*3 *such that*

$$\mathcal{A} \star_c \mathcal{B} = \mathcal{I}_{n_1 n_1 n_3} \qquad \text{and} \qquad \mathcal{B} \star_c \mathcal{A} = \mathcal{I}_{n_1 n_1 n_3}.$$

*In that case, we denote* $\mathcal{B} = \mathcal{A}^{-1}$*. It is clear that* $\mathcal{A}$ *is invertible if and only if* mat($\mathcal{A}$) *is invertible. The inner scalar product is defined by*

$$
\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \sum_{i_3=1}^{n_3} a_{i_1 i_2 i_3} b_{i_1 i_2 i_3},
$$

*and its corresponding norm is given by* $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$*. An* $n_1 \times n_1 \times n_3$ *tensor* $\mathcal{Q}$ *is said to be orthogonal if* $\mathcal{Q}^T \star_c \mathcal{Q} = \mathcal{Q} \star_c \mathcal{Q}^T = \mathcal{I}_{n_1 n_1 n_3}$.

**Definition 3** ([1])**.** *A tensor is called f-diagonal if its frontal slices are diagonal matrices. It is called upper triangular if all its frontal slices are upper triangular.*

Next we recall the Tensor Singular Value Decomposition of a tensor (Algorithm 2); more details can be found in [19].

**Theorem 1.** *Let* A *be an n*1 × *n*2 × *n*3 *real-valued tensor. Then* A *can be factored as follows*

$$\mathcal{A} = \mathcal{U} \star\_{\mathcal{C}} \mathcal{S} \star\_{\mathcal{C}} \mathcal{V}^{T},\tag{5}$$


*where* $\mathcal{U}$ *and* $\mathcal{V}$ *are orthogonal tensors of order* $(n_1, n_1, n_3)$ *and* $(n_2, n_2, n_3)$*, respectively, and* $\mathcal{S}$ *is an f-diagonal tensor of order* $(n_1, n_2, n_3)$*. This factorization is called the Tensor Singular Value Decomposition (c-SVD) of the tensor* $\mathcal{A}$*.*

**Algorithm 2** The Tensor SVD (c-SVD).

**Input**: $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. **Output**: $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{S}$.

1. Compute $\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\ ], 3)$.
2. Compute each frontal slice of $\widetilde{\mathcal{U}}$, $\widetilde{\mathcal{V}}$ and $\widetilde{\mathcal{S}}$ from $\widetilde{\mathcal{A}}$ as follows:
   (a) for $i = 1, \dots, n_3$: $[\widetilde{U}^{(i)}, \widetilde{S}^{(i)}, \widetilde{V}^{(i)}] = svd(\widetilde{A}^{(i)})$
   (b) end for
3. Compute $\mathcal{U} = \mathtt{idct}(\widetilde{\mathcal{U}}, [\ ], 3)$, $\mathcal{S} = \mathtt{idct}(\widetilde{\mathcal{S}}, [\ ], 3)$ and $\mathcal{V} = \mathtt{idct}(\widetilde{\mathcal{V}}, [\ ], 3)$.
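A minimal Python sketch of Algorithm 2 (function names `c_prod` and `c_svd` are ours; the c-transpose used in the check simply transposes each frontal slice, which commutes with the mode-3 DCT):

```python
import numpy as np
from scipy.fft import dct, idct

def c_prod(A, B):
    """Slice-wise product in the DCT domain (Algorithm 1)."""
    Ah = dct(A, norm='ortho', axis=2)
    Bh = dct(B, norm='ortho', axis=2)
    return idct(np.einsum('ijk,jlk->ilk', Ah, Bh), norm='ortho', axis=2)

def c_svd(A):
    """c-SVD following Algorithm 2: slice-wise SVDs in the DCT domain."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ah = dct(A, norm='ortho', axis=2)
    Uh = np.zeros((n1, n1, n3))
    Sh = np.zeros((n1, n2, n3))
    Vh = np.zeros((n2, n2, n3))
    for i in range(n3):
        u, s, vt = np.linalg.svd(Ah[:, :, i])
        Uh[:, :, i], Vh[:, :, i] = u, vt.T
        Sh[:r, :r, i] = np.diag(s)           # f-diagonal slice
    back = lambda X: idct(X, norm='ortho', axis=2)
    return back(Uh), back(Sh), back(Vh)

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4, 3))
U, S, V = c_svd(A)

# A = U *_c S *_c V^T: the c-transpose flips each frontal slice
VT = V.transpose(1, 0, 2)
assert np.allclose(c_prod(c_prod(U, S), VT), A)
```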

**Remark 2.** *As for the t-product [19], we can show that if* $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ *is a c-SVD of the tensor* $\mathcal{A}$*, then we have*

$$\sum\_{k=1}^{n\_3} A\_k = \left(\sum\_{k=1}^{n\_3} \mathcal{U}\_k\right) \left(\sum\_{k=1}^{n\_3} S\_k\right) \left(\sum\_{k=1}^{n\_3} V\_k^T\right),\tag{6}$$

*where Ak, Uk, Sk and Vk are the frontal slices of the tensors* A*,* U*,* S *and* V*, respectively, and*

$$\mathcal{A} = \sum\_{i=1}^{\min(n\_1, n\_2)} \mathcal{U}(:, i, :) \ast\_c \mathcal{S}(i, i, :) \ast\_c \mathcal{V}(:, i, :)^T. \tag{7}$$

**Theorem 2.** *Let* $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ *be given by* (5)*, and define, for* $k \le \min(n_1, n_2)$*, the tensor*

$$\mathcal{A}\_{k} = \sum\_{i=1}^{k} \mathcal{U}(:, i,:) \, \star\_{c} \mathcal{S}(i, i,:) \, \star\_{c} \mathcal{V}(:, i,:)^{T} \,. \tag{8}$$

*Then*

$$\mathcal{A}_k = \arg\min_{\mathcal{X} \in \mathbb{M}} \|\mathcal{X} - \mathcal{A}\|_F, \tag{9}$$

*where* $\mathbb{M} = \{\mathcal{X} \star_c \mathcal{Y};\ \mathcal{X} \in \mathbb{R}^{n_1 \times k \times n_3},\ \mathcal{Y} \in \mathbb{R}^{k \times n_2 \times n_3}\}$*.*

Note that when *n*3 = 1 this theorem reduces to the well known Eckart–Young theorem for matrices [20].
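In this matrix special case, the statement can be checked numerically: the Frobenius error of the SVD truncation equals the norm of the discarded singular values, and no other rank-$k$ matrix does better. A NumPy sketch (illustrative, with random data):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 20))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

# Eckart-Young: the Frobenius error is the norm of the discarded singular values
err = np.linalg.norm(A - A_k, 'fro')
assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))

# no random rank-k matrix X @ Y should beat the optimum
for _ in range(100):
    X = rng.standard_normal((30, k))
    Y = rng.standard_normal((k, 20))
    assert np.linalg.norm(A - X @ Y, 'fro') >= err
```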

**Definition 4** (**The tensor tubal-rank**)**.** *Let* $\mathcal{A}$ *be an* $n_1 \times n_2 \times n_3$ *tensor and consider its c-SVD* $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$*. The tensor tubal rank of* $\mathcal{A}$*, denoted by* $\operatorname{rank}_t(\mathcal{A})$*, is defined as the number of non-zero tubes of the f-diagonal tensor* $\mathcal{S}$*, i.e.,*

$$\operatorname{rank}_t(\mathcal{A}) = \#\{i : \mathcal{S}(i, i, :) \neq 0\}.$$

**Definition 5.** *The multi-rank of the tensor* $\mathcal{A}$ *is a vector* $p \in \mathbb{R}^{n_3}$ *whose i-th element equals the rank of the i-th frontal slice of* $\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\ ], 3)$*, i.e.,*

$$p(i) = \operatorname{rank}(\widetilde{A}^{(i)}), \ i = 1, \ldots, n_3.$$

The well known QR matrix decomposition can also be extended to the tensor case; see [19].

**Theorem 3.** *Let* A *be a real-valued tensor of order n*1 × *n*2 × *n*3*. Then* A *can be factored as follows*

$$\mathcal{A} = \mathbb{Q} \star\_{\mathbb{C}} \mathcal{R}, \tag{10}$$

*where* $\mathcal{Q}$ *is an* $n_1 \times n_1 \times n_3$ *orthogonal tensor and* $\mathcal{R}$ *is an* $n_1 \times n_2 \times n_3$ *f-upper triangular tensor.*

#### **3. Tensor Principal Component Analysis for Face Recognition**

Principal Component Analysis (PCA) is a widely used technique in image classification and face recognition. Many approaches involve a conversion of color images to grayscale in order to reduce the training cost. Nevertheless, for some applications, color is an important feature, and tensor-based approaches offer the possibility to take it into account. Moreover, especially in the case of facial recognition, this allows the treatment of enriched databases including, for instance, additional biometric information. However, one has to bear in mind that the computational cost is an important issue, as the volume of data can be very large. We first recall some background facts on the matrix-based approach.

#### *3.1. The Matrix Case*

One of the simplest and most effective PCA approaches used in face recognition systems is the so-called eigenface approach. This approach transforms faces into a small set of essential characteristics, the eigenfaces, which are the main components of the initial set of learning images (training set). Recognition is done by projecting a test image onto the eigenface subspace, after which the person is classified by comparing their position in the eigenface space with the positions of known individuals. The advantage of this approach over other face recognition strategies resides in its simplicity, speed, and insensitivity to small or gradual changes on the face.

The process is defined as follows: consider a set of training faces $I_1, I_2, \dots, I_p$, all of the same size $n \times m$. Each face $I_i$ is transformed into a vector $x_i$ using the operation $vec$: $x_i = vec(I_i)$. These vectors are the columns of the $nm \times p$ matrix

$$X = [x_1, \dots, x_p].$$

We compute the average image $\mu = \frac{1}{p}\sum_{i=1}^{p} x_i$, set $\bar{x}_i = x_i - \mu$, and consider the new matrices

$$
\bar{X} = [\bar{x}_1, \dots, \bar{x}_p], \quad \text{and} \quad C = \bar{X}\bar{X}^T.
$$

Notice that the $nm \times nm$ covariance matrix $C = \bar{X}\bar{X}^T$ can be very large. Therefore, the computation of its $nm$ eigenvalues and the corresponding eigenvectors (eigenfaces) can be very difficult. To circumvent this issue, we instead consider the smaller $p \times p$ matrix $L = \bar{X}^T\bar{X}$.

Let $v_i$ be an eigenvector of $L$; then $L v_i = \bar{X}^T\bar{X} v_i = \lambda_i v_i$ and

$$
\bar{X} L v_i = \bar{X}\bar{X}^T \bar{X} v_i = \lambda_i \bar{X} v_i,
$$

which shows that $\bar{X} v_i$ is an eigenvector of the covariance matrix $C = \bar{X}\bar{X}^T$.

The $p$ eigenvectors of $L = \bar{X}^T\bar{X}$ are then used to find the $p$ eigenvectors $u_i = \bar{X} v_i$ of $C$ that form the eigenface space. We keep only the $k$ eigenvectors corresponding to the $k$ largest eigenvalues (eigenfaces corresponding to small eigenvalues can be omitted, as they explain only a small part of the characteristic features of the faces).
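The trick above (diagonalizing the small $p \times p$ matrix $L$ instead of the $nm \times nm$ covariance matrix $C$) can be sketched as follows, with random data standing in for face images:

```python
import numpy as np

rng = np.random.default_rng(5)
nm, p = 1000, 12                            # nm = image size, p = number of faces
X = rng.standard_normal((nm, p))
Xbar = X - X.mean(axis=1, keepdims=True)    # subtract the average image

# eigendecomposition of the small p x p matrix L = Xbar^T Xbar
L = Xbar.T @ Xbar
lam, V = np.linalg.eigh(L)                  # ascending eigenvalues
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# map to eigenvectors of the large covariance matrix C = Xbar Xbar^T
U = Xbar @ V
U /= np.linalg.norm(U, axis=0)              # normalize the eigenfaces

# each u_i is indeed an eigenvector of C with the same eigenvalue
C = Xbar @ Xbar.T
for i in range(p - 1):                      # last eigenvalue is ~0 (mean removed)
    assert np.allclose(C @ U[:, i], lam[i] * U[:, i])
```

This computes eigenvectors of a $12 \times 12$ matrix instead of a $1000 \times 1000$ one, exactly as the derivation above allows.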

The next step consists of projecting each image of the training sample onto the eigenface space spanned by the orthogonal vectors *u*1,..., *uk*:

$$\mathbb{U}_k = \operatorname{span}\{u_1, \dots, u_k\}, \quad \text{with } U_k = [u_1, \dots, u_k].$$

The matrix $U_k U_k^T$ is an orthogonal projector onto the subspace $\mathbb{U}_k$. A face image can be projected onto this face space as $y_i = U_k^T(x_i - \mu)$.

We now give the steps of an image classification process based on this approach:

Let $x = vec(I)$ be a test vector-image and project it onto the face space to get $y = U_k^T(x - \mu)$. Notice that the reconstructed image is given by

$$x^r = U_k y + \mu.$$

Compute the Euclidean distance

$$\epsilon_i = \|y - y_i\|, \ i = 1, \dots, k.$$

A face is classified as belonging to the class $l$ when the minimum $\epsilon_l$ is below some chosen threshold $\theta$. Set

$$\theta = \frac{1}{2} \max_{i,j} \|y_i - y_j\|, \quad i, j = 1, \dots, k,$$

and let $\epsilon$ be the distance between the original test image $x$ and its reconstructed image $x^r$: $\epsilon = \|x - x^r\|$. Then the standard decision rule distinguishes three cases: if $\epsilon \ge \theta$, the input image is not a face; if $\epsilon < \theta$ and $\epsilon_i \ge \theta$ for all $i$, it is a new (unknown) face; and if $\epsilon < \theta$ and $\min_i \epsilon_i < \theta$, the image belongs to the class realizing the minimum.
We now give some basic facts on the relation between the singular value decomposition (SVD) and PCA in this context:

Consider the Singular Value Decomposition of the matrix $\bar{X}$ as

$$\bar{X} = U\Sigma V^T = \sum_{i=1}^{p} \sigma_i u_i v_i^T$$

where $U$ and $V$ are orthonormal matrices of sizes $nm$ and $p$, respectively. The singular values $\sigma_i$ are the square roots of the eigenvalues of the matrix $L = \bar{X}^T\bar{X}$, the $u_i$'s are the left singular vectors and the $v_i$'s are the right singular vectors. We have

$$L = \bar{X}^T \bar{X} = V \Delta V^T; \ \Delta = \operatorname{diag}(\sigma_1^2, \dots, \sigma_p^2)$$

which is the eigendecomposition of the matrix $L$, and

$$C = \bar{X}\bar{X}^T = U D U^T; \; D = \operatorname{diag}(\sigma_1^2, \dots, \sigma_p^2, 0, \dots, 0).$$

In the PCA method, the projected eigenface space is then generated by the first $k$ columns $u_1, \dots, u_k$ of the orthogonal matrix $U$ derived from the SVD of the matrix $\bar{X}$.

As only a small number *k* of the largest singular values are needed in PCA, we can use the well known Golub–Kahan algorithm to compute these wanted singular values and the corresponding singular vectors to define the projected subspace.
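For instance, a Krylov-type solver such as SciPy's `svds` (a Lanczos/Golub–Kahan-type method) recovers the $k$ largest singular triplets without forming the full SVD; a sketch with random data:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(6)
Xbar = rng.standard_normal((2000, 50))      # tall training matrix

k = 5
# svds uses an iterative Lanczos-type method and returns only
# the k requested singular triplets
Uk, sk, Vkt = svds(Xbar, k=k)
sk = sk[::-1]                               # svds returns ascending order

# they match the k largest singular values of the full SVD
s_full = np.linalg.svd(Xbar, compute_uv=False)
assert np.allclose(sk, s_full[:k])
```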

In the next section, we explain how the SVD based PCA can be extended to tensors and propose an algorithm for facial recognition in this context.

#### **4. The Tensor Golub–Kahan Method**

As explained in the previous section, it is important to take into account the potentially large size of datasets, especially for the training process. The idea of extending the matrix Golub–Kahan bidiagonalization algorithm to the tensor context has been explored in recent years for large and sparse tensors [21]. In [1], the authors established the foundations of a remarkable theoretical framework for tensor decompositions in association with the tensor t- and c-products, making it possible to generalize the main notions of linear algebra to tensors.

#### *4.1. The Tensor C-Global Golub–Kahan Algorithm*

Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a tensor and let $s \ge 1$ be an integer. The Tensor c-global Golub–Kahan bidiagonalization algorithm (associated with the c-product) is described in Algorithm 3.

#### **Algorithm 3** The Tensor Global Golub–Kahan algorithm (TGGKA).


Let *Ck* be the *k* × *k* upper bidiagonal matrix defined by

$$\mathbf{C}_{k} = \begin{bmatrix} \alpha_{1} & \beta_{1} & & & \\ & \alpha_{2} & \beta_{2} & & \\ & & \ddots & \ddots & \\ & & & \alpha_{k-1} & \beta_{k-1} \\ & & & & \alpha_{k} \end{bmatrix}. \tag{11}$$

Let $\mathbb{V}_k$ and $\mathcal{A} \star_c \mathbb{V}_k$ be the $(n_2 \times (sk) \times n_3)$ and $(n_1 \times (sk) \times n_3)$ tensors with frontal slices $\mathcal{V}_1, \dots, \mathcal{V}_k$ and $\mathcal{A} \star_c \mathcal{V}_1, \dots, \mathcal{A} \star_c \mathcal{V}_k$, respectively, and let $\mathbb{U}_k$ and $\mathcal{A}^T \star_c \mathbb{U}_k$ be the $(n_1 \times (sk) \times n_3)$ and $(n_2 \times (sk) \times n_3)$ tensors with frontal slices $\mathcal{U}_1, \dots, \mathcal{U}_k$ and $\mathcal{A}^T \star_c \mathcal{U}_1, \dots, \mathcal{A}^T \star_c \mathcal{U}_k$, respectively. We set

$$\mathbb{V}\_k := [\mathbb{V}\_1, \dots, \mathbb{V}\_k], \quad \text{and} \quad \mathcal{A} \ast\_{\mathbb{C}} \mathbb{V}\_k := [\mathcal{A} \ast\_{\mathbb{C}} \mathbb{V}\_1, \dots, \mathcal{A} \ast\_{\mathbb{C}} \mathbb{V}\_k], \tag{12}$$

$$\mathbb{U}\_{k} := [\mathcal{U}\_{1}, \dots, \mathcal{U}\_{k}], \quad \text{and} \quad \mathcal{A}^{T} \star\_{\mathbb{c}} \mathbb{U}\_{k} := [\mathcal{A}^{T} \star\_{\mathbb{c}} \mathcal{U}\_{1}, \dots, \mathcal{A}^{T} \star\_{\mathbb{c}} \mathcal{U}\_{k}], \tag{13}$$

with

$$\hat{\mathbf{C}}_{k}^{T} = \begin{bmatrix} \mathbf{C}_{k}^{T} \\ \beta_{k} e_{k}^{T} \end{bmatrix} \in \mathbb{R}^{(k+1) \times k}, \quad e_{k} = (0, 0, \ldots, 0, 1)^{T} \in \mathbb{R}^{k}.$$

Then, we have the following results [13].

**Proposition 1.** *The tensors produced by the tensor c-global Golub–Kahan algorithm satisfy the following relations*

$$\mathcal{A} \star_c \mathbb{V}_{k} = \mathbb{U}_{k} \circledast \mathbf{C}_{k}, \tag{14}$$

$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \circledast \hat{\mathbf{C}}_k^T \tag{15}$$

$$= \mathbb{V}_{k} \circledast \mathbf{C}_{k}^{T} + \beta_{k} \big[ \mathcal{O}_{n_2 \times s \times n_3}, \dots, \mathcal{O}_{n_2 \times s \times n_3}, \mathcal{V}_{k+1} \big], \tag{16}$$

*where the product is defined by:*

$$\mathbb{V}_k \circledast y = \sum_{j=1}^{k} y_j \mathcal{V}_j, \quad y = (y_1, \dots, y_k)^T \in \mathbb{R}^k.$$

We set the following notation:

$$\mathbb{U}_k \circledast \mathbf{C}_k = \left[ \mathbb{U}_k \circledast \mathbf{C}_k^1, \dots, \mathbb{U}_k \circledast \mathbf{C}_k^k \right],$$

where $\mathbf{C}_k^i$ is the $i$-th column of the matrix $\mathbf{C}_k$.

We note that, since the matrix $\mathbf{C}_k$ is bidiagonal, $T_k = \mathbf{C}_k^T \mathbf{C}_k$ is symmetric and tridiagonal, and thus Algorithm 3 computes the same information as the tensor global Lanczos algorithm applied to $\mathcal{A}^T \star_c \mathcal{A}$.
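For reference, a textbook version of the matrix Golub–Kahan bidiagonalization that these tensor algorithms generalize can be sketched as follows (this is the classical matrix iteration, not the authors' tensor algorithm; names are ours):

```python
import numpy as np

def golub_kahan(A, v1, k):
    """k steps of Golub-Kahan bidiagonalization:
    A V_k = U_k B_k, with B_k upper bidiagonal."""
    m, n = A.shape
    V = np.zeros((n, k))
    U = np.zeros((m, k))
    B = np.zeros((k, k))
    v = v1 / np.linalg.norm(v1)
    beta = 0.0
    u_prev = np.zeros(m)
    for j in range(k):
        V[:, j] = v
        u = A @ v - beta * u_prev            # orthogonalize against previous u
        alpha = np.linalg.norm(u)
        u /= alpha
        U[:, j] = u
        B[j, j] = alpha                      # diagonal entry alpha_j
        w = A.T @ u - alpha * v              # orthogonalize against current v
        beta = np.linalg.norm(w)
        if j + 1 < k:
            B[j, j + 1] = beta               # superdiagonal entry beta_j
        v = w / beta
        u_prev = u
    return U, B, V

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 25))
U, B, V = golub_kahan(A, rng.standard_normal(25), 6)

# the fundamental relation A V_k = U_k B_k
assert np.allclose(A @ V, U @ B)
# B^T B is symmetric tridiagonal: Lanczos applied to A^T A
T = B.T @ B
assert np.allclose(T, np.triu(np.tril(T, 1), -1))
```

The second assertion is the observation above: one step of Golub–Kahan bidiagonalization carries the same information as the Lanczos tridiagonalization of $A^T A$.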

#### *4.2. Tensor Tubal Golub–Kahan Bidiagonalisation Algorithm*

First, we introduce some new products that will be useful in this section.

**Definition 6** ([13])**.** *Let* $\mathbf{a} \in \mathbb{R}^{1 \times 1 \times n_3}$ *and* $\mathcal{B} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$*; the tube fiber tensor product* $\mathbf{a} \ast \mathcal{B}$ *is the* $n_1 \times n_2 \times n_3$ *tensor defined by*

$$\mathbf{a} \ast \mathcal{B} = \begin{pmatrix} \mathbf{a} \star_c \mathcal{B}(1, 1, :) & \dots & \mathbf{a} \star_c \mathcal{B}(1, n_2, :) \\ \vdots & \ddots & \vdots \\ \mathbf{a} \star_c \mathcal{B}(n_1, 1, :) & \dots & \mathbf{a} \star_c \mathcal{B}(n_1, n_2, :) \end{pmatrix}$$

**Definition 7** ([13])**.** *Let* A ∈ R*<sup>n</sup>*1×*m*1×*n*3 *,* B ∈ R*<sup>n</sup>*1×*m*2×*n*3 *,* C ∈ R*<sup>n</sup>*2×*m*1×*n*3 *and* D ∈ R*<sup>n</sup>*2×*m*2×*n*3 *be tensors. The block tensor*

$$\left[\begin{array}{cc} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{array}\right] \in \mathbb{R}^{(n\_1+n\_2)\times(m\_1+m\_2)\times n\_3}$$

*is defined by assembling the frontal slices of the four tensors.*

**Definition 8.** *Let* $\mathcal{A} = [\mathcal{A}_1, \dots, \mathcal{A}_{n_2}] \in \mathbb{R}^{n_1 \times n_2 \times n_3}$*, where* $\mathcal{A}_i \in \mathbb{R}^{n_1 \times 1 \times n_3}$*. We denote by* TVect$(\mathcal{A})$ *the tensor vectorization operator* $\mathbb{R}^{n_1 \times n_2 \times n_3} \to \mathbb{R}^{n_1 n_2 \times 1 \times n_3}$ *obtained by stacking the lateral slices* $\mathcal{A}_i$ *of* $\mathcal{A}$*, for* $i = 1, \dots, n_2$*:*

$$\mathtt{TVect}(\mathcal{A}) = \begin{pmatrix} \mathcal{A}_1 \\ \mathcal{A}_2 \\ \vdots \\ \mathcal{A}_{n_2} \end{pmatrix} \in \mathbb{R}^{n_1 n_2 \times 1 \times n_3}$$

**Remark 3.** *The* TVect *operator transforms a given tensor into a lateral slice. It is easy to see that when* $n_3 = 1$*, the* TVect *operator coincides with the operation* vec*, which transforms a matrix into a vector.*

**Proposition 2.** *Let* $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$*; then*

$$\|\mathcal{A}\|_F = \|\mathtt{TVect}(\mathcal{A})\|_F.$$

**Definition 9.** *Let* $\mathcal{A} = [\mathcal{A}_1, \dots, \mathcal{A}_{n_2}] \in \mathbb{R}^{n_1 \times n_2 \times n_3}$*, where* $\mathcal{A}_i \in \mathbb{R}^{n_1 \times 1 \times n_3}$*. We define the range space of* $\mathcal{A}$*, denoted by* Range$(\mathcal{A})$*, as the c-linear span of the lateral slices of* $\mathcal{A}$*:*

$$\operatorname{Range}(\mathcal{A}) = \left\{ \mathcal{A}_1 \star_c \mathbf{a}_1 + \dots + \mathcal{A}_{n_2} \star_c \mathbf{a}_{n_2} \ \big| \ \mathbf{a}_i \in \mathbb{R}^{1 \times 1 \times n_3} \right\} \tag{17}$$

**Definition 10** ([14])**.** *Let* $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ *and* $\mathcal{B} \in \mathbb{R}^{m_1 \times m_2 \times n_3}$*; the c-Kronecker product* $\mathcal{A} \odot \mathcal{B}$ *of* $\mathcal{A}$ *and* $\mathcal{B}$ *is the* $n_1 m_1 \times n_2 m_2 \times n_3$ *tensor whose transformed tensor* $\widetilde{\mathcal{A} \odot \mathcal{B}}$ *has i-th frontal slice given by:*

$$(\widetilde{\mathcal{A} \odot \mathcal{B}})^{(i)} = \widetilde{A}^{(i)} \otimes \widetilde{B}^{(i)}, \ i = 1, \dots, n_3,$$

*where* $\widetilde{A}^{(i)}$ *and* $\widetilde{B}^{(i)}$ *are the i-th frontal slices of the tensors* $\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\ ], 3)$ *and* $\widetilde{\mathcal{B}} = \mathtt{dct}(\mathcal{B}, [\ ], 3)$*, respectively.*

We now introduce a normalization algorithm allowing us to decompose a non-zero tensor $\mathcal{C} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ as

$$\mathcal{C} = \mathbf{a} \ast \mathcal{Q}, \quad \text{with } \langle \mathcal{Q}, \mathcal{Q} \rangle = \mathbf{e},$$

where $\mathbf{a} \in \mathbb{R}^{1 \times 1 \times n_3}$ is an invertible tube fiber, $\mathcal{Q} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, and $\mathbf{e}$ is the tube fiber of $\mathbb{R}^{1 \times 1 \times n_3}$ defined by unfold($\mathbf{e}$) = $(1, 0, 0, \dots, 0)^T$. This procedure is described in Algorithm 4.



Next, we give the Tensor Tube Global Golub–Kahan (TTGGKA) algorithm. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a tensor and let $s \ge 1$ be an integer. The Tensor Tube Global Golub–Kahan bidiagonalization process is described in Algorithm 5.




Let $\mathcal{C}_k$ be the $k \times k \times n_3$ upper bidiagonal tensor (each frontal slice of $\mathcal{C}_k$ is a bidiagonal matrix) and $\check{\mathcal{C}}_k$ the $k \times (k+1) \times n_3$ tensor defined by


Let $\mathbb{V}_k$ and $\mathcal{A} \star_c \mathbb{V}_k$ be the $(n_2 \times (sk) \times n_3)$ and $(n_1 \times (sk) \times n_3)$ tensors with frontal slices $\mathcal{V}_1, \dots, \mathcal{V}_k$ and $\mathcal{A} \star_c \mathcal{V}_1, \dots, \mathcal{A} \star_c \mathcal{V}_k$, respectively, and let $\mathbb{U}_k$ and $\mathcal{A}^T \star_c \mathbb{U}_k$ be the $(n_1 \times (sk) \times n_3)$ and $(n_2 \times (sk) \times n_3)$ tensors with frontal slices $\mathcal{U}_1, \dots, \mathcal{U}_k$ and $\mathcal{A}^T \star_c \mathcal{U}_1, \dots, \mathcal{A}^T \star_c \mathcal{U}_k$, respectively. We set

$$\mathbb{V}_k := [\mathcal{V}_1, \dots, \mathcal{V}_k], \quad \text{and} \quad \mathcal{A} \star_c \mathbb{V}_k := [\mathcal{A} \star_c \mathcal{V}_1, \dots, \mathcal{A} \star_c \mathcal{V}_k], \tag{19}$$

$$\mathbb{U}\_{k} := [\mathcal{U}\_{1}, \dots, \mathcal{U}\_{k}], \quad \text{and} \quad \mathcal{A}^{T} \ast\_{\mathbb{C}} \mathbb{U}\_{k} := [\mathcal{A}^{T} \ast\_{\mathbb{C}} \mathcal{U}\_{1}, \dots, \mathcal{A}^{T} \ast\_{\mathbb{C}} \mathcal{U}\_{k}], \tag{20}$$

Then, we have the following results.

**Proposition 3.** *The tensors produced by the tensor TTGGKA algorithm satisfy the following relations*

$$\mathcal{A} \star_c \mathbb{V}_k = \mathbb{U}_k \star_c (\mathcal{C}_k \odot \mathcal{I}_{ssn_3}), \tag{21}$$

$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \star_c (\check{\mathcal{C}}_k^T \odot \mathcal{I}_{ssn_3}) \tag{22}$$

$$= \mathbb{V}_k \star_c (\mathcal{C}_k^T \odot \mathcal{I}_{ssn_3}) + \mathcal{V}_{k+1} \star_c \big( (\mathbf{b}_k \star_c \mathbf{e}_{1,k,:}) \odot \mathcal{I}_{ssn_3} \big), \tag{23}$$

*where* $\mathbf{e}_{1,k,:} \in \mathbb{R}^{1 \times k \times n_3}$ *is the tensor with* 1 *in the* $(1, k, 1)$ *position and zeros elsewhere,* $\mathcal{I}_{ssn_3} \in \mathbb{R}^{s \times s \times n_3}$ *is the identity tensor, and* $\mathbf{b}_k$ *is the fiber tube in the* $(k, k+1, :)$ *position of the tensor* $\check{\mathcal{C}}_k$*.*

#### **5. The Tensor Tubal PCA Method**

In this section, we describe a tensor-SVD-based PCA method for third-order tensors, which naturally arise in problems involving images such as facial recognition. As in the matrix case, we consider a set of $N$ training images, each one encoded as an $n_1 \times n_2 \times n_3$ real tensor $\mathcal{I}_i$, $1 \le i \le N$. In the case of RGB images, each frontal slice contains the encoding of one color layer ($n_3 = 3$), but in order to store additional features, the case $n_3 > 3$ can be contemplated.

Let us consider one training image $\mathcal{I}_{i_0}$. Each one of the $n_3$ frontal slices $I^{(j)}_{i_0}$ of $\mathcal{I}_{i_0}$ is reshaped into a column vector $vec(I^{(j)}_{i_0})$ of length $L = n_1 \times n_2$, and we form an $L \times 1 \times n_3$ tensor $\mathcal{X}_{i_0}$ defined by $\mathcal{X}_{i_0}(:, :, j) = vec(I^{(j)}_{i_0})$. Applying this procedure to each training image, we obtain $N$ tensors $\mathcal{X}_i$ of size $L \times 1 \times n_3$. The average image tensor is defined as $\bar{\mathcal{X}} = \frac{1}{N}\sum_{i=1}^{N} \mathcal{X}_i$,

and we define the $L \times N \times n_3$ training tensor $\mathcal{X} = [\bar{\mathcal{X}}_1, \dots, \bar{\mathcal{X}}_N]$, where $\bar{\mathcal{X}}_i = \mathcal{X}_i - \bar{\mathcal{X}}$. Let us now consider the c-SVD decomposition $\mathcal{X} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ of $\mathcal{X}$, where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $L \times L \times n_3$ and $N \times N \times n_3$, respectively, and $\mathcal{S}$ is an f-diagonal tensor of size $L \times N \times n_3$.

In the matrix context, it is known that just a few singular values suffice to capture the main features of an image. Therefore, applying this idea to each one of the three color layers, an RGB image can be approximated by a low tubal rank tensor. Let us consider an image tensor $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and its c-SVD decomposition $\mathcal{S} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$. Choosing an integer $r$ such that $r \le \min(n_1, n_2)$, we can approximate $\mathcal{S}$ by the tubal rank $r$ tensor

$$\mathcal{S}_r = \sum_{i=1}^{r} \mathcal{U}(:, i, :) \star_c \mathcal{S}(i, i, :) \star_c \mathcal{V}(:, i, :)^T.$$

In Figure 1, we display a $512 \times 512$ RGB image and the images obtained for various truncation indices. On the left part, we plot the singular values of one color layer of the RGB tensor (the exact same behaviour is observed on the two other layers). The rapid decrease of the singular values explains the good quality of the compressed images, even for small truncation indices.

Applying this idea to our problem, we want to obtain truncated tensor SVDs of the training tensor $\mathcal{X}$ without computing the whole c-SVD. After $k$ iterations of the TTGGKA algorithm (for the case $s = 1$), we obtain three tensors $\mathbb{U}_k \in \mathbb{R}^{n_1 \times k \times n_3}$, $\mathbb{V}_{k+1} \in \mathbb{R}^{n_2 \times (k+1) \times n_3}$ and $\check{\mathcal{C}}_k \in \mathbb{R}^{k \times (k+1) \times n_3}$, as defined in Equation (21), such that

$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \star_c \check{\mathcal{C}}_k^T.$$

Let $\check{\mathcal{C}}_k = \Phi \star_c \Sigma \star_c \Psi^T$ be the c-SVD of $\check{\mathcal{C}}_k$, noticing that $\check{\mathcal{C}}_k \in \mathbb{R}^{k \times (k+1) \times n_3}$ is much smaller than $\mathcal{X}$. Then the first tubal singular values and the left tubal singular tensors of $\mathcal{X}$ are given by $\Sigma(i, i, :)$ and $\mathbb{U}_k \star_c \Phi(:, i, :)$, respectively, for $i \le k$; see [1] for more details.

**Figure 1.** Image compression.

In order to illustrate the ability to approximate the first singular elements of a tensor using the TTGGKA algorithm, we considered a $900 \times 900 \times 3$ real tensor $\mathcal{A}$ whose frontal slices were matrices generated by a finite difference discretization of differential operators. In Figure 2, we display the error on the first diagonal coefficient $\mathcal{S}(1, 1, 1)$ of the first frontal slice as a function of the number of iterations of the Tensor Tube Golub–Kahan algorithm, where $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ is the c-SVD of $\mathcal{A}$.

**Figure 2.** |Σ(1, 1, 1) − S(1, 1, 1)| *vs.* number of TTGGKA iterations *k*.

In Table 1, we reported the errors, in the tensor Frobenius norm, on the singular tubes as a function of the number *k* of iterations of the Tensor Tube Golub–Kahan algorithm.



The same behaviour was observed on all the other frontal slices. This example illustrates the ability of the TTGGKA algorithm to approximate the largest singular tubes.

The projection space is generated by the lateral slices of the tensor $\mathcal{P} = \mathcal{U}_k \star_c \Phi(:, 1:k, :) \in \mathbb{R}^{n_1 \times k \times n_3}$ derived from the TTGGKA algorithm and the c-SVD decomposition of the bidiagonal tensor $\widetilde{\mathcal{C}}_k$, i.e., the c-linear span of the first *k* lateral slices of $\mathcal{P}$; see [1,19] for more details.
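For readers who wish to experiment, the c-product itself can be sketched as a discrete cosine transform along the third mode, a facewise (frontal-slice by frontal-slice) matrix product in the transform domain, and an inverse transform. This follows a common convention for $\star_c$; the exact DCT type and scaling used in [1] may differ, so the snippet below is an illustrative assumption rather than the paper's definition:

```python
import numpy as np
from scipy.fft import dct, idct

def c_product(A, B):
    """Sketch of the c-product of third-order tensors A and B."""
    # move both tensors to the transform domain along mode 3
    Ah = dct(A, type=2, axis=2, norm='ortho')
    Bh = dct(B, type=2, axis=2, norm='ortho')
    # facewise matrix products: C^(k) = A^(k) @ B^(k)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    # back to the original domain
    return idct(Ch, type=2, axis=2, norm='ortho')
```

For $n_3 = 1$ the transform is the identity and the c-product reduces to the ordinary matrix product, which gives a quick sanity check.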

The steps of the Tensor Tubal PCA algorithm for face recognition, which finds the closest image in the training database for a given image $I_0$, are summarized in Algorithm 6:

#### **Algorithm 6** The Tensor Tubal PCA algorithm (TTPCA).

1. **Inputs:** Training image tensor $\mathcal{X}$ (*N* images), mean image tensor $\bar{\mathcal{X}}$, test image $I_0$, truncation index *r*, *k* = number of iterations of the TTGGKA algorithm (*k* ≥ *r*).




In the next section, we consider image identification problems on various databases.

#### **6. Numerical Tests**

In this section, we consider three examples of image identification. In the case of grayscale images, the global version of Golub–Kahan was used to compute the dominant singular values in order to perform a PCA on the data. For the two other situations, we used the Tensor Tubal PCA (TTPCA) method based on the Tube Global Golub–Kahan (TTGGKA) algorithm in order to perform facial recognition on RGB images. The tests were performed with Matlab 2019a, on an Intel i5 laptop with 16 GB of memory. We considered various truncation indices *r* for which the recognition rates were computed. We also reported the CPU time for the training process.

#### *6.1. Example 1*

In this example, we considered the MNIST database of handwritten digits [22]. The database contains two subsets of 28 × 28 grayscale images (60,000 training images and 10,000 test images). A sample is shown in Figure 3. Each image was vectorized as a vector of length 28 × 28 = 784 and, following the process described in Section 3.1, we formed the training and the test matrices of sizes 784 × 60,000 and 784 × 10,000, respectively.

**Figure 3.** First 16 images of MNIST training subset.

Both matrices were centered by subtracting the mean training image, and the Golub–Kahan algorithm was used to generate an approximation of the *r* dominant singular values $s_i$ and left singular vectors $u_i$, $i = 1, \dots, r$.

Let $\mathbb{U}_r$ denote the subspace spanned by the columns of $U_r = [u_1, \dots, u_r]$. Let $t$ be a test image and $\hat{t}_r = U_r^T t$ its projection onto $\mathbb{U}_r$. The closest image in the training dataset is determined by computing

$$i^{\ast} = \arg\min_{i=1,\dots,60{,}000} \|\hat{t}_r - \hat{X}_r(:, i)\|,$$

where $\hat{X}_r = U_r^T X$.

For various truncation indices *r*, we tested each image of the test subset and computed the recognition rate (a test is successful if the digit is correctly identified). The results, plotted in Figure 4, show that a good level of accuracy is obtained with only a few approximate singular values. Given the large size of the training matrix, this validates the interest of computing only a few singular values with the Golub–Kahan algorithm.
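The projection-and-matching rule above takes only a few lines of NumPy. In this sketch the dominant singular vectors are obtained with a full SVD for simplicity, where the actual method would use the Golub–Kahan approximation; the function name is ours:

```python
import numpy as np

def pca_classify(X, t, r):
    """Return the index of the training column of X closest to the
    test vector t after projection onto the r dominant left
    singular vectors of the centered training matrix."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                        # centered training matrix
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Ur = U[:, :r]                        # r dominant left singular vectors
    X_hat = Ur.T @ Xc                    # projected training images
    t_hat = Ur.T @ (t - mean[:, 0])      # projected test image
    return int(np.argmin(np.linalg.norm(X_hat - t_hat[:, None], axis=0)))
```

Classifying a test image then reduces to one small matrix-vector product and a nearest-neighbour search in the *r*-dimensional projected space.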

**Figure 4.** Identification rates for different truncation indices *r*.

#### *6.2. Example 2*

In this example, we used the Georgia Tech database GTDB_crop [23], which contains 750 face images of 50 persons in different illumination conditions, facial expressions and face orientations, as shown in Figure 5. The RGB JPEG images were resized to 100 × 100 × 3 tensors.

**Figure 5.** Fifteen pictures of one individual in the database.

Each image file is coded as a 100 × 100 × 3 tensor and transformed into a 10,000 × 1 × 3 tensor as explained in the previous section. We built the training and test tensors as follows: from the 15 pictures of each person in the database, five pictures were randomly chosen and stored in the test folder and the 10 remaining pictures were used for the training tensor. Hence, the database was partitioned into two subsets containing 250 and 500 items, respectively, at each iteration of the simulation.
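The construction of the training tensor from the image files can be sketched as follows (array shapes follow the text; the helper name is ours):

```python
import numpy as np

def build_training_tensor(images):
    """Stack h x w x 3 images as lateral slices of an
    (h*w) x N x 3 training tensor."""
    # each image becomes an (h*w) x 1 x 3 lateral slice
    slices = [im.reshape(-1, 1, 3) for im in images]
    return np.concatenate(slices, axis=1)
```

For the Georgia Tech setting, 500 images of size 100 × 100 × 3 would give a 10,000 × 500 × 3 training tensor, with each image recoverable as one lateral slice.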

We applied the TTGGKA-based Algorithm 6 for various truncation indices. In Figure 6, we represented a test image (top left position), the closest image in the database (top right), the mean image of the training database (bottom left) and the eigenface associated with the test image (bottom right).

**Figure 6.** Test image, closest image, mean image and eigenface.

In order to compute the recognition rate, we ran 100 simulations, counted the number of successes (a test is successful if the person is correctly identified) and reported the best identification rates as a function of the truncation index *r* in Figure 7.

**Figure 7.** Identification rates for different truncation indices *r*.

The results match the performances reported in the literature [24] for this database and confirm the interest of a Golub–Kahan strategy: in terms of training, the Tube Tensor PCA algorithm required only 5 s instead of 25 s when using a c-SVD.

#### *6.3. Example 3*

In this example, we used the larger AR face database (cropped version, Face crops) [9], which contains 2600 bitmap pictures of human faces (50 males and 50 females, 26 pictures per person) with different facial expressions, lighting conditions and face orientations. The bitmap pictures were resized to 100 × 100 JPEG images. The same protocol as in the previous example was followed: we partitioned the set of images into two subsets. Out of the 26 pictures per person, 6 were randomly chosen as test images and the remaining 20 were put into the training folder. The training process took 24 s, whereas it would have taken 81.5 s using a c-SVD. A test image, the closest match in the dataset, the mean image and the associated eigenface are shown in Figure 8.

**Figure 8.** Test image, closest image, mean image and eigenface.

We applied our approach (TTPCA) to the 10,000 × 2000 × 3 training tensor X and plotted the recognition rate as a function of the truncation index in Figure 9.

**Figure 9.** Identification rates for different truncation indices *r*.

For all examples, it is worth noticing that, as expected in face identification problems, only a few of the largest singular elements suffice to capture the main features of an image. This makes Golub–Kahan based strategies such as the TTPCA method an interesting choice.

## **7. Conclusions**

In this manuscript, we focused on two types of Golub–Kahan factorizations. We used recent advances in the field of tensor factorization and showed that this approach is efficient for image identification. The main feature of this approach resides in the ability of the Global Golub–Kahan algorithms to approximate the dominant singular elements of a training matrix or tensor without needing to compute the SVD. This is particularly important as the matrices and tensors involved in this type of application can be very large. Moreover, when color has to be taken into account, this approach does not require a conversion to grayscale, which can be very important for some applications. In a future work, we would like to study the feasibility of implementing the promising randomized PCA approaches in the Golub–Kahan tensor algorithm in order to improve the computational cost of the training process in the case of very large datasets.

**Author Contributions:** Conceptualization, M.H., K.J., C.K. and M.M.; methodology, M.H. and K.J.; software, M.H.; validation, M.H., K.J., C.K. and M.M.; writing—original draft preparation, M.H., K.J., C.K. and M.M.; writing—review and editing, M.H., K.J., C.K. and M.M.; visualization, M.H., K.J., C.K. and M.M.; supervision, K.J.; project administration, M.H. and K.J. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Mustapha Hached acknowledges support from the Labex CEMPI (ANR-11- LABX-0007-01).

**Conflicts of Interest:** The authors declare no conflict of interest.
