Abstract
In this work, we present a new hierarchical decomposition aimed at the decorrelation of a cubical tensor of size N = 2^n, based on the 3D Frequency-Ordered Hierarchical KLT (3D FO-HKLT). The decomposition is executed in three consecutive stages. In the first stage, after adaptive directional vectorization (ADV) of the input tensor, the vectors are processed through the one-dimensional FO-Adaptive HKLT (FO-AHKLT), and after folding, the first intermediate tensor is calculated. In the second stage, FO-AHKLT is applied to the vectors obtained after ADV of the first intermediate tensor, and after folding, the second intermediate tensor is calculated. In the third stage, ADV followed by FO-AHKLT is applied to the vectors obtained from the second intermediate tensor, and the output tensor is obtained. The orientation of the vectors calculated from each tensor can be horizontal, vertical or lateral. The best orientation is chosen through analysis of their covariance matrix, based on its symmetry properties. The kernel of FO-AHKLT is the optimal decorrelating KLT with a matrix of size 2 × 2. To achieve higher decorrelation of the decomposition components, the direction of the vectors obtained after unfolding of the input tensor in each of the three consecutive stages is chosen adaptively. The computational complexity of FO-AHKLT is shown to be lower than that of the Hierarchical Tucker and Tensor Train decompositions.
1. Introduction
The main tensor decompositions can be divided into two groups.
The first group comprises decompositions executed in the spatial domain of the tensor. These are the well-known Canonical Polyadic Decomposition (CPD), Higher-Order Singular Value Decomposition (HOSVD) [1,2,3,4], Tensor Train (TT) decomposition [5], Hierarchical Tucker (H-Tucker) algorithm [6] and some of their modifications [7,8], based on the calculation of the tensor eigenvalues and eigenvectors. Their most important feature is that they are optimal with respect to the minimization of the mean square approximation error caused by the “truncation” of the low-energy components. The calculation of the retained components is based on iterative methods [9,10] that need a relatively small number of mathematical operations to achieve the requested accuracy. Hierarchical tensor decompositions based on the H-Tucker algorithm are presented in [8,11]. The compositional hierarchical tensor factorization introduced in [8] disentangles the hierarchical causal structure of object image formation, but its computational complexity (or Complexity) is not presented. In [11], a TT-based hierarchical decomposition of high-order tensors is offered, based on the Tensor-Train Hierarchical SVD (TT-HSVD). This approach permits parallel processing, which significantly accelerates the computation. Unlike the TT-SVD algorithm, TT-HSVD applies SVDs to matrices of smaller dimensions, which results in its lower Complexity.
The second group comprises tensor decompositions performed in the transform domain, which use reversible 3D linear orthogonal transforms such as the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT), etc. [12,13,14]. This approach is distinguished by its flexibility regarding the choice of the transform based on the processed data contents.
In this work, we present alternative new hierarchical 3D tensor decompositions based on the well-known statistical orthogonal Karhunen–Loeve Transform (KLT) [15,16]. They are close to the optimal KLT (which ensures full decorrelation of the decomposition components), but do not need iterations and have lower computational complexity. As a basis, we present here the decomposition called 3D Frequency-Ordered Adaptive Hierarchical Karhunen–Loeve Transform (3D FO-AHKLT), whose efficiency is enhanced through adaptive directional tensor vectorization (ADV).
In Section 2, we present the method for 3D hierarchical adaptive transform based on the one-dimensional Frequency-Ordered Adaptive Hierarchical Karhunen–Loeve Transform (FO-AHKLT). Section 3 gives the details of the enhancement of the cubical tensor decomposition efficiency through correlation analysis, Section 4 presents the adaptive control for each level of FO-AHKLT, and Section 5 explains the related algorithm. In Section 6, we analyze the computational complexity of the new approaches compared to that of the well-known H-Tucker and TT decompositions; Section 7 contains the conclusions.
2. Method for 3D Adaptive Frequency-Ordered Hierarchical KLT of a Cubical Tensor
The proposed method for the decomposition of a third-order cubical tensor is based on the 3D Frequency-Ordered Hierarchical Karhunen–Loeve Transform (3D FO-HKLT), defined by the relation below [17]:
Here, N = 2^n is the size of the tensor with nonnegative components x(i, j, k). The coefficients are the elements of the spectrum tensor S, which is of the same size as X. Each coefficient represents the weight of the corresponding basic tensor. Each basic tensor is represented as the outer product of three vectors:
Here, “∘” denotes the outer product of two column-vectors, and the basic vectors are obtained after the execution of the three stages of 3D FO-HKLT. The main part of the tensor power is concentrated in the first decomposition components (as given in Equation (1)), and their high decorrelation is achieved. The kernel of 3D FO-HKLT, denoted KLT2×2, is the KLT with a transform matrix of size 2 × 2.
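To make the kernel concrete, the sketch below applies a 2 × 2 KLT to paired samples. It assumes the textbook construction (a rotation by the angle that diagonalizes the 2 × 2 covariance matrix of the pairs); the exact coefficient conventions of Equations (8) and (9) may differ:

```python
import numpy as np

def klt2x2(x1, x2):
    """Direct 2x2 KLT: rotate each pair (x1[s], x2[s]) by the angle that
    diagonalizes the 2x2 covariance matrix of the paired samples."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    k1 = np.var(x1)                                    # variance of x1
    k2 = np.var(x2)                                    # variance of x2
    k3 = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))  # covariance of x1, x2
    # the two-argument arctangent plays the role of the extended arctan
    theta = 0.5 * np.arctan2(2.0 * k3, k1 - k2)
    c, s = np.cos(theta), np.sin(theta)
    y1 = c * x1 + s * x2    # high-energy ("low-frequency") component
    y2 = -s * x1 + c * x2   # residual ("high-frequency") component
    return y1, y2
```

For fully correlated pairs, the angle approaches π/4 and the rotation matrix coincides with the 2 × 2 Walsh–Hadamard matrix, matching the special case noted around Equation (10).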
The decomposition comprises three consecutive stages, arranged in accordance with the correlation analysis of the tensor elements. One example decomposition of the tensor X of size N × N × N (for N = 8) is shown in Figure 1. After applying the 3D FO-AHKLT on the input tensor X, it is sequentially transformed into the intermediate tensors E, F and the output tensor S. The 3D FO-AHKLT is separable, which permits its execution through the one-dimensional FO-AHKLT, whose graph for the case N = 8 is shown in Figure 2. As a result, the tensor X is transformed into the first intermediate tensor E of the same size.
Figure 1.
Sequence of the 3D FO-AHKLT execution stages for a tensor of size 8 × 8 × 8, for the case Δ(1) > Δ(2) > Δ(3), based on FO-AHKLT. KLT2×2 is applied on each 2-component vector for q = 1, 2, 3, 4.
Figure 2.
Graphs of the 3-level direct FO-AHKLT of the tensor X of size 8 × 8 × 8, with ADV.
In the general case, FO-HKLT is executed for the N-dimensional vectors, N² in total, oriented in the horizontal (u = 1), vertical (u = 2) or lateral (u = 3) direction. These vectors are defined as a result of the ADV of the tensor. The choice of the orientation direction of the vectors for s = 1, 2, …, N² is defined through analysis of their covariance matrices Kx(u). Before the calculation of FO-HKLT for an N-dimensional vector, it is divided into N/2 two-component vectors. For the case N = 8, each vector is divided into 4 vectors, for q = 1, 2, 3, 4. In this case, the total number of two-component vectors is 4N² = 256, and they are divided into 4 sub-groups of N² = 64 vectors each.
Let xs and es be the input and the output column-vectors of N = 2^n components, respectively, where n is the number of FO-HKLT hierarchical levels. The relation between the vectors es and xs is [17]:
where is the FO-HKLT matrix; of size 2^n × 2^n is the permutation matrix for the last level n of FO-HKLT, and is the product of n sparse transform matrices for p = 1, 2, 3, …, n. Each matrix is defined as follows:
for p = 1, 2, 3, …, n, where “⊕” denotes the direct sum of matrices.
In the equations, figures and text below, the abbreviations and are used. Here:
for i = 1, 2 (the averaging operator).
If β2 = 2k3(p,q) and β1 are two real numbers, the extended arctan function εarctan(β1, β2) is mapped onto (−π, π] by adding π [18]:
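As an illustration only: the extended arctan behaves like the two-argument arctangent, which adds or subtracts π when β2 < 0 so that the quadrant of the point (β2, β1) is preserved. This is an assumed reading of the convention in [18], not a verbatim reproduction of it:

```python
import math

def earctan(b1, b2):
    # Extended arctangent of b1/b2, mapped onto (-pi, pi]: pi is added
    # (or subtracted) to arctan(b1/b2) when b2 < 0, so the quadrant of
    # the point (b2, b1) is preserved.
    return math.atan2(b1, b2)
```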
In particular, the direct and inverse KLT2×2 of the elements x1s and x2s, which have the same spatial position in the couple of matrices X1 and X2 of size N × N (valid for the Adaptive KLT2×2), are:
or and for s = 1, 2, …, N².
Here, y1s and y2s are the corresponding elements in the couple of transformed matrices Y1 and Y2 (each of size N × N).
In particular, for θ = π/4, the relations (8) and (9) become as follows:
Hence, in this case, the KLT2×2 coincides with the Walsh–Hadamard Transform (WHT).
In the level n, the components of the column-vectors (respectively, the components of matrices Yn,k for k = 0, 1, …, N − 1) are rearranged. For this, the permutation matrix is used.
From the components of the column-vectors, the frequency-ordered matrices Er (i.e., E0, E1, …, EN−1) are obtained, calculated in accordance with the relation between their sequential number r and the sequential number k (of the matrices Yn,k).
The relation that defines the matrix is:
- The binary code of the sequential decimal number k = 0, 1, …, 2^n − 1 of the component Yn,k is arranged in reverse order, for 0 ≤ i ≤ n − 1;
- The so-obtained code is transformed from Gray code into binary code, in accordance with the operations for 0 ≤ i ≤ n − 2. Here, “⊕” denotes the operation “exclusive OR”.
The decimal number defines the sequential number of Er for r = 0, 1, …, 2^n − 1, which (before the rearrangement) corresponded to Yn,k with sequential number k.
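The two-step rearrangement described above can be sketched directly: reverse the n-bit code of k, then convert the result from Gray code to binary. For n = 3, this reproduces the classical sequency (frequency) ordering of the eight Walsh–Hadamard components:

```python
def fo_index(k, n):
    """Sequential number r of the frequency-ordered matrix E_r that
    corresponds to the component Y_{n,k} with sequential number k."""
    # Step 1: reverse the n-bit binary code of k
    rev = int(format(k, f"0{n}b")[::-1], 2)
    # Step 2: treat the reversed code as a Gray code and convert it to
    # binary: b = g XOR (g >> 1) XOR (g >> 2) XOR ...
    b, g = rev, rev >> 1
    while g:
        b ^= g
        g >>= 1
    return b
```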
For N = 8 (n = 3), p = 1, 2, 3 and q = 1, 2, 3, 4, FO-HKLT is executed in three consecutive levels. The transform matrix of size 8 × 8 is decomposed in accordance with the relation:
Since , the transform is executed as follows:
The operations above correspond to the calculations in hierarchical levels p = 1, 2, 3 of FO-HKLT. In this case, the transform matrices are defined by the equations:
where for q = 1, 2, 3, 4 and p = 1;
where: for q = 1, 2, 3, 4 and p = 2;
where for q = 1, 2, 3, 4 and p = 3; .
The permutation matrix in the level p = 3 is:
The vector for m = 0, 1, …, 7 comprises the elements of the mth row of the matrix for the stage S1 of 3D FO-HKLT. The vectors and for v, l = 0, 1, …, 7 respectively comprise the elements of rows v and l in matrices and , for 3D FO-HKLT stages S2 and S3.
3. Enhancement of the 3D FO-HKLT Efficiency, Based on Correlation Analysis
To achieve higher efficiency of the 3D tensor decomposition based on the 3D FO-HKLT, we use here the correlation relations between the tensor components in the orthogonal directions x, y, z. To detect the direction in which these relations are the strongest, correlation analysis is used. It is based on the analysis of the covariance matrices of the vectors obtained through unfolding mode u = 1, 2, 3 of the tensor.
3.1. Analysis of the Covariance Matrices
To estimate the correlation of the vectors for 3D FO-AHKLT, we use the ratio between the sum of the squares of the coefficients placed outside the main diagonal of the covariance matrix and the sum of the squares of those on the diagonal:
The relation above takes into account the symmetry of the coefficients with respect to the main diagonal of the covariance matrix. The value of this ratio is maximum for the highest correlation of the vectors of the same orientation, u.
3.2. Choice of Vectors’ Orientation for Adaptive Directional Tensor Vectorization
The choice of the orientation u = 1, 2, 3 of the vectors in each stage Su of 3D FO-AHKLT, which ensures maximum decorrelation of the tensor, is based on the relations between the coefficients Δ(u) given in Table 1. Figure 3a–c shows the vectors with horizontal, vertical and lateral orientation, correspondingly, obtained through unfolding mode u = 1, 2, 3 of a tensor of size 4 × 4 × 4. The couples of elements of the vectors for q = 1, 2, on which KLT2×2 is applied in the lowest level of FO-HKLT, are colored in red and white.
Table 1.
Adaptive choice of the orientation of vectors for u = 1, 2, 3, based on the correlation analysis.
Figure 3.
Orientations of vectors for u = 1, 2, 3 and N = 4 after unfolding mode u = 1, 2, 3 of the tensor : (a) horizontal; (b) vertical; (c) lateral.
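A sketch of the criterion from Section 3.1 together with the orientation choice: the function below doubles the squared upper-triangle coefficients (using the symmetry k_ij = k_ji) and divides by the squared diagonal ones, and the unfolding mode with the largest ratio is preferred. The exact normalization of Equation (17) and the decision rules of Table 1 may differ from this reading:

```python
import numpy as np

def delta(K):
    """Off-diagonal-to-diagonal energy ratio of a covariance matrix K."""
    off = 2.0 * np.sum(np.triu(K, 1) ** 2)  # k_ij = k_ji: double the upper triangle
    diag = np.sum(np.diag(K) ** 2)
    return off / diag

def best_orientation(X):
    """Unfolding mode u = 1, 2, 3 whose N-dimensional fibers have the
    most strongly correlated components."""
    deltas = []
    for u in range(3):
        V = np.moveaxis(X, u, -1).reshape(-1, X.shape[u])  # N^2 vectors of length N
        deltas.append(delta(np.cov(V.T)))                  # N x N covariance matrix
    return int(np.argmax(deltas)) + 1
```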
3.3. Evaluation of the Decorrelation Properties of FO-AHKLT
The qualities of the one-dimensional FO-AHKLT correspond to the decorrelation degree of the output eigenimages, which defines the number of needed execution levels, n. As an example, we give here the evaluation of the decorrelation properties of the transform for the case N = 8 and n = 3. In the hierarchical levels p = 1, 2, 3 of FO-AHKLT, the covariance matrix of size 8 × 8, which represents the transformed 8-component vectors, is:
for s = 1, 2, …, 64.
Here, the sub-matrices of size 2 × 2 (for q = 1, 2, 3, 4) are defined as:
These are the covariance matrices of the transformed vectors in the group q for the level p, and are the eigenvalues of the covariance matrices. Respectively,
are the mutual covariance matrices of size 2 × 2 of the two-component vectors and for the couples of groups q and k from the level p.
After the rearrangement that follows the decomposition level p = 3, the vectors are transformed into vectors . To evaluate the result of FO-AHKLT, the covariance matrix of the vectors is analyzed, from which the achieved decorrelation degree in the level is defined.
4. Adaptive Control for Each Level of FO-AHKLT
After the execution of FO-AHKLT in the current level p, the N-dimensional vectors for p = 1, 2, …, n (n = log2 N) and s = 1, 2, …, N² are obtained through concatenation from the transformed two-component vectors of each group q = 1, 2, …, N/2. At this moment, the decision whether to continue with the next level of FO-AHKLT or to stop must be taken. For this, the covariance matrix of the vectors is analyzed for s = 1, 2, …, S, which defines the achieved decorrelation. If the decorrelation is full, this matrix is diagonal, and the algorithm is stopped. The proposed adaptive control of FO-AHKLT also permits the process to stop earlier, even if full decorrelation is not achieved, when the result is satisfactory. The decision to stop FO-AHKLT in the current level p is defined by the relation:
Here, is the (i, j)th element of the matrix, and δ1 is a threshold of small value, set in advance. If the condition is satisfied, the calculations stop. Otherwise, the processing continues with the next FO-AHKLT level, p + 1. When the calculations for the second level are finished, the result is checked again, but in this case with the elements of the covariance matrix of the vectors obtained at that level, etc.
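A sketch of this check, assuming condition (21) compares the magnitude of each off-diagonal element of the covariance matrix against δ1 (one plausible reading of the stopping rule):

```python
import numpy as np

def stop_level(K, delta1):
    """True when every off-diagonal element |k_ij| of the covariance
    matrix K of the level-p vectors is below the threshold delta1,
    i.e., the vectors are already decorrelated closely enough."""
    off = K - np.diag(np.diag(K))   # zero the main diagonal
    return bool(np.all(np.abs(off) < delta1))
```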
Taking into account the condition (21), the FO-AHKLT matrix defined in Equation (3) turns into:
for p = 2, 3, …, n, q = 1, 2, …, (N/2) and .
For the level p = 1 only, the relations (22) and (23) are transformed as follows:
where .
In these relations, δ2 is a threshold of small value, set in advance. Condition (21), together with the adaptive KLT2×2 performed in accordance with relations (23)–(26), permits reducing the number of calculations without worsening the decorrelation properties of the 3D FO-AHKLT.
5. Algorithm 3D FO-AHKLT
On the basis of the 3D FO-HKLT tensor decomposition, together with the adaptive control, the directional vectorization of the input, intermediate and output tensors, and the correlation analysis, the algorithm called 3D FO-AHKLT is developed as Algorithm 1:
| Algorithm 1 |
| Input: Third-order tensor of size N × N × N (N = 2^n) with elements x(i, j, k), and the threshold values δ1, δ2; |
| The steps of the algorithm are given below: |
| 1. Unfolding of the tensor (mode u = 1, 2, 3), thereby obtaining the corresponding groups of vectors, . |
| 2. Calculation of the covariance matrices of the vectors , for u = 1, 2, 3. |
| 3. Calculation of the coefficients for u = 1, 2, 3, using the matrices . |
| 4. Setting the sequence of stages Su1, Su2, Su3 for the execution of 3D FO-AHKLT, chosen in accordance with the relations between coefficients . |
| 5. Start of stage Su1, which comprises: |
| 5.1. Execution of FO-AHKLT for the vectors with orientation u1, in correspondence with the calculation graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the transformed vectors. In each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied; |
| 5.2. Folding of the tensor E of size N × N × N, by using the vectors ; |
| 5.3. Unfolding of the tensor E (mode u2), thereby obtaining the vectors with orientation u2, for . |
| 6. Start of stage Su2, which comprises: |
| 6.1. Execution of FO-AHKLT for the vectors with orientation u2, in correspondence with the computational graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the transformed vectors. In each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied; |
| 6.2. Folding the tensor F of size N × N × N, by using the vectors ; |
| 6.3. Unfolding the tensor F (mode u3), thereby obtaining the vectors , with orientation u3, for ; |
| 7. Start of stage Su3, which comprises: |
| 7.1. Execution of FO-AHKLT for the vectors with orientation u3, in correspondence with the computational graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the transformed vectors. In each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied; |
| 7.2. Folding of the tensor S of size N × N × N (N = 2^n) by using the vectors. The elements of the tensor S are the spectrum coefficients s(m, v, l). |
| 8. Calculation of the transform matrices for each stage t = 1, 2, 3 of the 3D FO-AHKLT, in accordance with the relations below: |
| 9. Determination of the basic vectors for m, v, l = 0, 1, 2, …, N − 1, which are the rows of the corresponding matrices. With this, the decomposition of the tensor in correspondence with Equation (1) is finished. |
| Output: Spectral tensor S, whose elements s(m, v, l) are the coefficients of the 3D FO-AHKLT. |
As a result of the algorithm execution:
- The main part of the tensor energy is concentrated into a small number of coefficients s(m, v, l) of the spectrum tensor S, for m, v, l = 0, 1, 2;
- The decomposition components of the tensor are uncorrelated.
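The unfold/transform/fold data flow of steps 1–7 can be sketched end to end. The sketch below substitutes the fast Walsh–Hadamard transform for the FO-AHKLT kernel (its fixed-angle special case, Equation (10)) and processes the three modes in a fixed rather than adaptive order, so it illustrates only the structure of the algorithm, not its adaptive behavior:

```python
import numpy as np

def fwht(V):
    """Orthonormal fast Walsh-Hadamard transform along the last axis
    (the length must be a power of two)."""
    V = V.astype(float).copy()
    n = V.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = V[..., i:i + h].copy()
            b = V[..., i + h:i + 2 * h].copy()
            V[..., i:i + h] = a + b
            V[..., i + h:i + 2 * h] = a - b
        h *= 2
    return V / np.sqrt(n)

def three_stage_transform(X):
    """Stages S1..S3: unfold into N^2 vectors along mode u, transform each
    vector, fold back; the adaptive version orders the modes by Delta(u)."""
    S = np.asarray(X, dtype=float)
    N = S.shape[0]
    for u in range(3):
        V = np.moveaxis(S, u, -1).reshape(-1, N)    # unfolding (mode u)
        V = fwht(V)                                 # 1D transform of each vector
        S = np.moveaxis(V.reshape(S.shape), -1, u)  # folding
    return S
```

For a constant input tensor, the entire energy lands in the single coefficient s(0, 0, 0), illustrating the energy-concentration property listed above.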
6. Comparative Evaluation of the Computational Complexity
For the computation of the covariance matrix of size N × N for N = 2^n, in correspondence with Equation (17), additions and multiplications are needed. Then, the total number of the operations needed for the calculation of is:
For the n levels of the algorithm FO-HKLT, we obtain:
For the calculation of Δ(u) in correspondence with Equation (17), additions and multiplications are needed. Then,
Hence,
In accordance with [17], the computational complexity (or Complexity) of FO-HKLT is defined by taking into account the number of needed additions and multiplications. Hence, the total number of operations needed for the 3D FO-HKLT execution is:
The total number of operations needed for the execution of the algorithm 3D FO-AHKLT (without taking into consideration the possibility to stop the processing earlier than the last level) is:
For the H-Tucker and TT decomposition of a cubical tensor of size N = 2^n, in accordance with [17], the number of needed operations is:
Compared to H-Tucker and TT, the Complexity of the new decomposition decreases as n grows. In the general case, the relative Complexity of 3D FO-AHKLT with respect to the H-Tucker and TT decompositions is evaluated in accordance with the relations below:
For example, for n = 8 the results are: and .
For the case when the new decomposition must have minimum Complexity, the transform kernel KLT2×2 could be replaced by the WHT of the same size (2 × 2), which, in correspondence with Equation (10), is the particular case of KLT2×2 for a fixed value of the rotation angle. According to [19], the Complexity of the 3D Fast Walsh–Hadamard Transform (3D-FWHT), evaluated through the total number of operations, is defined by the relation:
Then, for the Complexity of 3D-FWHT with ADV (3D-AFWHT), we obtain:
The relative Complexity of 3D FO-AHKLT with respect to 3D-AFWHT is defined by the relation:
Hence, for the example value n = 8, it is , and the relative Complexities of H-Tucker and TT vs. 3D-AFWHT are and , correspondingly.
Table 2 gives the values of the coefficients calculated in accordance with Equations (36), (37) and (40), for n = 2, 3, …, 10. The table shows that for n > 5, the values of the first two coefficients are higher than 1 and grow quickly together with n, while for the same values of n, the third coefficient increases only slightly, in the range from 4 up to 6.
Table 2.
Relative Complexity of H-Tucker and TT vs. 3D FO-AHKLT and of 3D FO-AHKLT vs. 3D-AFWHT.
From the comparison follow the conclusions below:
- The new hierarchical decomposition has low Complexity, which grows with n more slowly than those of the H-Tucker and TT decompositions;
- Significant reduction in the decomposition Complexity could be achieved through replacement of the kernel KLT2×2 by WHT2×2. In this particular case, the decrease in the decomposition Complexity results in a lower decorrelation degree;
- For the case n = 8, the Complexity of the algorithm 3D-AFWHT is approximately 6 times lower than that of the 3D FO-AHKLT;
- The Complexity of algorithms 3D FO-AHKLT and 3D-AFWHT was evaluated without taking into consideration the use of the adaptive KLT2×2 and the possibility to stop the execution prior to the maximum level n. Equations (35) and (40) give the maximum values of Complexity used for the evaluation of the compared algorithms.
7. Conclusions
This work presented the new algorithm 3D FO-AHKLT, aimed at the decorrelation of the elements of a cubical tensor of size N = 2^n. The Complexity of the algorithm was evaluated and compared with that of other, similar algorithms, and its efficiency was shown. The main qualities of the tensor decomposition 3D FO-AHKLT are:
- Efficient decorrelation of the calculated components;
- Concentration of the tensor energy into a small number of decomposition components;
- Lack of iterations;
- Low Complexity;
- The capacity for parallel recursive implementation, which reduces the needed memory volume;
- The capacity for additional significant Complexity reduction through the use of the algorithm 3D-AFWHT, depending on the needs of the implemented application.
The presented algorithms 3D FO-AHKLT and 3D-AFWHT could be generalized for tensors with three different dimensions. The choice between the offered hierarchical 3D decompositions depends on the requirements and limitations on their Complexity imposed by the application area.
Future investigations of 3D FO-AHKLT and 3D-AFWHT will be aimed at the evaluation of their characteristics compared to well-known tensor decompositions, in order to define the best settings and the most efficient applications for tensor image compression, feature space reduction, filtering, analysis, search and recognition of multidimensional visual information, deep learning, etc. The future development of the presented algorithms will be aimed at applications related to tree tensor networks [20], multiway array (tensor) data analysis [21,22], tensor decompositions in neural networks for tree-structured data, etc.
Author Contributions
Conceptualization, R.K.K., R.P.M. and R.A.K.; methodology, formal analysis, R.A.K.; investigation, R.P.M.; writing—original draft preparation, R.K.K.; writing—review and editing, R.A.K.; visualization, R.K.K. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the Bulgarian National Science Fund: Project No. KP-06-H27/16: “Development of efficient methods and algorithms for tensor-based processing and analysis of multidimensional images with application in interdisciplinary areas”.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
References
- Liu, Y.; Liu, J.; Long, Z.; Zhu, C. Tensor Decomposition. In Tensor Computation for Data Analysis; Springer: Cham, Switzerland, 2021; pp. 19–57.
- Ji, Y.; Wang, Q.; Li, X.; Liu, J. A Survey on Tensor Techniques and Applications in Machine Learning. IEEE Access 2019, 7, 162950–162990.
- Cichocki, A.; Mandic, D.; Phan, A.; Caiafa, C.; Zhou, G.; Zhao, Q.; De Lathauwer, L. Tensor Decompositions for Signal Processing Applications: From Two-Way to Multiway Component Analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
- Sidiropoulos, N.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor Decomposition for Signal Processing and Machine Learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582.
- Oseledets, I. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317.
- Grasedyck, L. Hierarchical Singular Value Decomposition of Tensors. SIAM J. Matrix Anal. Appl. 2010, 31, 2029–2054.
- Ozdemir, A.; Zare, A.; Iwen, M.; Aviyente, S. Extension of PCA to Higher Order Data Structures: An Introduction to Tensors, Tensor Decompositions, and Tensor PCA. Proc. IEEE 2018, 106, 1341–1358.
- Vasilescu, M.; Kim, E. Compositional Hierarchical Tensor Factorization: Representing Hierarchical Intrinsic and Extrinsic Causal Factors. In Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD): Tensor Methods for Emerging Data Science Challenges, Anchorage, AK, USA, 2–4 August 2019; p. 9.
- Wang, P.; Lu, C. Tensor Decomposition via Simultaneous Power Iteration. In Proceedings of the 34th International Conference on Machine Learning (PMLR), Sydney, Australia, 6–9 August 2017; Volume 70, pp. 3665–3673.
- Ishteva, M.; Absil, P.; Van Dooren, P. Jacobi Algorithm for the Best Low Multilinear Rank Approximation of Symmetric Tensors. SIAM J. Matrix Anal. Appl. 2013, 34, 651–672.
- Zniyed, Y.; Boyer, R.; Almeida, A.; Favier, G. A TT-based Hierarchical Framework for Decomposing High-Order Tensors. SIAM J. Sci. Comput. 2020, 42, A822–A848.
- Kernfeld, E.; Kilmer, M.; Aeron, S. Tensor–tensor products with invertible linear transforms. Linear Algebra Appl. 2015, 485, 545–570.
- Keegan, K.; Vishwanath, T.; Xu, Y. A Tensor SVD-based Classification Algorithm Applied to fMRI Data. arXiv 2021, arXiv:2111.00587.
- Rao, K.; Kim, D.; Hwang, J. Fast Fourier Transform: Algorithms and Applications; Springer: Dordrecht, The Netherlands, 2010; pp. 166–170.
- Ahmed, N.; Rao, K. Orthogonal Transforms for Digital Signal Processing; Springer: Berlin/Heidelberg, Germany, 1975; pp. 86–91.
- Kountchev, R.; Kountcheva, R. Adaptive Hierarchical KL-based Transform: Algorithms and Applications. In Computer Vision in Advanced Control Systems: Mathematical Theory; Favorskaya, M., Jain, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 1, pp. 91–136.
- Kountchev, R.; Mironov, R.; Kountcheva, R. Complexity Estimation of Cubical Tensor Represented through 3D Frequency-Ordered Hierarchical KLT. Symmetry 2020, 12, 1605.
- Lu, H.; Kpalma, K.; Ronsin, J. Motion descriptors for micro-expression recognition. Signal Process. Image Commun. 2018, 67, 108–117.
- Kountchev, R.; Mironov, R.; Kountcheva, R. Hierarchical Cubical Tensor Decomposition through Low Complexity Orthogonal Transforms. Symmetry 2020, 12, 864.
- Cichocki, A.; Phan, H.; Zhao, Q.; Lee, N.; Oseledets, I.; Sugiyama, M.; Mandic, D. Tensor networks for dimensionality reduction and large-scale optimizations: Part 2. Applications and future perspectives. Found. Trends Mach. Learn. 2017, 9, 431–673.
- Abhyankar, S.S.; Christensen, C.; Sundaram, G.; Sathaye, A.M. Algebra, Arithmetic and Geometry with Applications: Papers from Shreeram S. Abhyankar's 70th Birthday Conference; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; pp. 457–472.
- Peters, J. Computational Geometry, Topology and Physics of Digital Images with Applications: Shape Complexes, Optical Vortex Nerves and Proximities; Springer Nature: Cham, Switzerland, 2019; pp. 1–115.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).


