Article

Numerical Methods Combining Symmetry and Sparsity for the Calculation of Homogeneous Polynomials Defined by Tensors

by
Ting Zhang
School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
Symmetry 2025, 17(5), 664; https://doi.org/10.3390/sym17050664
Submission received: 17 March 2025 / Revised: 20 April 2025 / Accepted: 25 April 2025 / Published: 27 April 2025

Abstract

The homogeneous polynomial defined by a tensor, $\mathcal{A}\mathbf{x}^{m-1}$ for $\mathbf{x} \in \mathbb{R}^n$, appears in many recent problems in tensor analysis and optimization, including the tensor eigenvalue problem, the tensor equation, the tensor complementarity problem, the tensor eigenvalue complementarity problem, the tensor variational inequality problem, and the least element problem of polynomial inequalities defined by a tensor, among others. However, conventional computation methods use the definition directly and neglect the structural characteristics of homogeneous polynomials involving tensors, leading to a high computational burden (especially in iterative algorithms or large-scale problems). This motivates the need for efficient methods that reduce the complexity of the relevant algorithms. First, exploiting the symmetry of each monomial in the canonical basis of homogeneous polynomials, we propose a calculation method that replaces the original tensor with its merge tensor, thus reducing the computational cost. Second, we propose a calculation method that additionally exploits sparsity to further reduce the computational cost. Finally, a simplified algorithm that avoids duplicate calculations is proposed. Extensive numerical experiments demonstrate the effectiveness of the proposed methods, which can be embedded into algorithms used by the tensor optimization community, improving computational efficiency in magnetic resonance imaging, n-person non-cooperative games, the calculation of molecular orbitals, and so on.

1. Introduction

For any positive integer $n$, we use $[n]$ to denote the set $\{1, 2, \dots, n\}$. Throughout this paper, we assume that $l$, $m$, and $n$ are positive integers with $m, n \ge 2$, unless otherwise specified. We denote the set of all $n$-dimensional real vectors by $\mathbb{R}^n$, the set of all $n$-dimensional non-negative vectors by $\mathbb{R}^n_+$, and the set of all $m \times n$-dimensional real matrices by $\mathbb{R}^{m \times n}$. We use the bold lowercase letter $\mathbf{x} := (x_i) \in \mathbb{R}^n$ to denote an $n$-dimensional vector with components $x_i \in \mathbb{R}$, where $i \in [n]$, and the uppercase letter $A := (a_{ij}) \in \mathbb{R}^{m \times n}$ to denote an $m \times n$-dimensional matrix with entries $a_{ij} \in \mathbb{R}$, where $i \in [m]$ and $j \in [n]$.
A real tensor $\mathcal{A}$ of order $m$ and dimension $n_1 \times \dots \times n_m$ means $\mathcal{A} = (a_{i_1 i_2 \cdots i_m})$, where $a_{i_1 i_2 \cdots i_m} \in \mathbb{R}$ for any $i_j \in [n_j]$ with $j \in [m]$.
We use $\mathbb{R}^{l \times [m-1, n]}$ to denote the set of $m$-order $l \times n_2 \times \dots \times n_m$-dimensional real tensors with $n_2 = \dots = n_m = n$. In particular, if $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ with $l = n$, then $\mathcal{A}$ is called an $m$-th order $n$-dimensional real tensor, and the set of all $m$-th order $n$-dimensional real tensors is denoted by $\mathbb{R}^{[m, n]}$. A tensor is said to be non-negative if all its entries are non-negative. For any $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$, $a_{i_1 i_2 \cdots i_m}$ is said to be a diagonal entry when $i_1 = i_2 = \dots = i_m$ and an off-diagonal entry otherwise. If $\mathcal{A} \in \mathbb{R}^{[m, n]}$, then it is said to be an identity tensor if all its diagonal entries are one and all its off-diagonal entries are zero, and we denote it by $\mathcal{E}$.
Tensors and homogeneous polynomials are two closely related concepts in mathematics, particularly in algebraic geometry, differential geometry, and physics. A tensor defines a homogeneous polynomial; conversely, a homogeneous polynomial can be defined by a tensor, but not uniquely (i.e., it can be defined by different tensors). If the tensor is required to be symmetric, then a homogeneous polynomial is defined by a unique symmetric tensor. Given $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$, for any $\mathbf{x} \in \mathbb{R}^n$, the vector of $(m-1)$-degree homogeneous polynomials defined by the tensor $\mathcal{A}$, denoted by $\mathcal{A}\mathbf{x}^{m-1} \in \mathbb{R}^l$, is defined componentwise by
$(\mathcal{A}\mathbf{x}^{m-1})_i := \sum_{i_2, \dots, i_m \in [n]} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m}, \quad i \in [l];$ (1)
when $l = n$, the $m$-degree homogeneous polynomial defined by the tensor $\mathcal{A}$, denoted by $\mathcal{A}\mathbf{x}^m \in \mathbb{R}$, is defined as
$\mathcal{A}\mathbf{x}^m := \sum_{i_1, \dots, i_m \in [n]} a_{i_1 \cdots i_m} x_{i_1} \cdots x_{i_m}.$
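To make the cost of the direct approach concrete, the following is a minimal MATLAB sketch that evaluates $\mathcal{A}\mathbf{x}^{m-1}$ from definition (1) for a tensor stored as a full $l \times n \times \dots \times n$ array; the function name and the odometer-style index loop are illustrative choices and are not taken from the paper.
% Minimal sketch: evaluate A x^(m-1) directly from definition (1).
% Assumes A is a full l x n x ... x n MATLAB array; illustrative only.
function y = tensor_vec_naive(A, x)
    sz = size(A);                 % [l, n, ..., n]
    l  = sz(1);
    m  = numel(sz);               % order of the tensor
    n  = sz(2);
    y  = zeros(l, 1);
    idx = ones(1, m - 1);         % multi-index (i_2, ..., i_m)
    for k = 1:n^(m - 1)
        xprod = prod(x(idx));     % x_{i_2} * ... * x_{i_m}
        sub   = num2cell(idx);
        for i = 1:l
            y(i) = y(i) + A(i, sub{:}) * xprod;   % a_{i i_2 ... i_m} x_{i_2} ... x_{i_m}
        end
        for d = 1:m - 1           % advance the multi-index (odometer style)
            idx(d) = idx(d) + 1;
            if idx(d) <= n, break; else, idx(d) = 1; end
        end
    end
end
The cost of this direct evaluation is proportional to $l\,n^{m-1}$, which motivates the reductions developed in Sections 3 and 4.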
Over the past decade, tensors and their related problems have been studied extensively, and many of them involve the calculation of $\mathcal{A}\mathbf{x}^{m-1}$. Some of these problems are as follows.
(a)
Tensor eigenvalue problem [1,2,3,4,5]. Given $\mathcal{A} \in \mathbb{R}^{[m, n]}$, an H-eigenvalue of $\mathcal{A}$ is a real number $\lambda$ for which
$\mathcal{A}\mathbf{x}^{m-1} = \lambda \mathbf{x}^{[m-1]}$
admits a non-zero solution $\mathbf{x} \in \mathbb{R}^n$, where $\mathbf{x}^{[m-1]} := (x_1^{m-1}, \dots, x_n^{m-1})^\top$. Meanwhile, a Z-eigenvalue of $\mathcal{A}$ is a real number $\lambda$ for which
$\mathcal{A}\mathbf{x}^{m-1} = \lambda \mathbf{x}$
admits a non-zero solution $\mathbf{x} \in \mathbb{R}^n$.
(b)
Tensor equation [6,7,8,9,10,11,12,13,14]. Given $\mathcal{A} \in \mathbb{R}^{[m, n]}$ and $\mathbf{b} \in \mathbb{R}^n$, this problem involves finding a vector $\mathbf{x} \in \mathbb{R}^n$ such that
$\mathcal{A}\mathbf{x}^{m-1} = \mathbf{b}.$
(c)
Tensor complementarity problem [15,16,17,18,19,20,21]. Given $\mathcal{A} \in \mathbb{R}^{[m, n]}$ and $\mathbf{q} \in \mathbb{R}^n$, the tensor complementarity problem involves finding a vector $\mathbf{x} \in \mathbb{R}^n$ such that
$0 \le \mathbf{x} \perp (\mathcal{A}\mathbf{x}^{m-1} + \mathbf{q}) \ge 0.$
See also the survey papers [22,23,24].
(d)
Tensor eigenvalue complementarity problem [25,26,27,28,29,30]. Given $\mathcal{A} \in \mathbb{R}^{[m, n]}$, the tensor eigenvalue complementarity problem involves finding a real number $\lambda$ and a vector $\mathbf{x} \in \mathbb{R}^n$ such that
$0 \le \mathbf{x} \perp (\lambda \mathcal{E}\mathbf{x}^{m-1} - \mathcal{A}\mathbf{x}^{m-1}) \ge 0,$
where $\mathcal{E}$ is an identity tensor.
(e)
Tensor variational inequality problem [31]. Given $\mathcal{A} \in \mathbb{R}^{[m, n]}$, $\mathbf{q} \in \mathbb{R}^n$, and a non-empty set $\Omega \subseteq \mathbb{R}^n$, the tensor variational inequality problem involves finding a vector $\mathbf{x} \in \mathbb{R}^n$ such that
$\langle \mathbf{y} - \mathbf{x},\ \mathcal{A}\mathbf{x}^{m-1} + \mathbf{q} \rangle \ge 0 \quad \text{for all } \mathbf{y} \in \Omega.$
(f)
Least element problem for the set defined by polynomial inequalities [32]. Let $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ be a Z-tensor (i.e., all its off-diagonal entries are non-positive) and $\mathbf{q} \in \mathbb{R}^n$. Then, there exists an element $\mathbf{u} \in S := \{ \mathbf{x} \in \mathbb{R}^n : \mathbf{x} \ge 0,\ \mathcal{A}\mathbf{x}^{m-1} \ge \mathbf{q} \}$ such that $\mathbf{u} \le \mathbf{x}$ for all $\mathbf{x} \in S$. This problem is related to the design of algorithms to find the least element $\mathbf{u}$.
For any $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$, $\mathcal{A}$ has $l n^{m-1}$ entries; hence, as $n$ or $m$ becomes large, the computational cost of evaluating $\mathcal{A}\mathbf{x}^{m-1}$ for $\mathbf{x} \in \mathbb{R}^n$ grows rapidly, so large-scale problems pose huge computational challenges. Moreover, in the iterative algorithms used to solve the above problems (a)-(f), it is necessary to repeatedly calculate the values of $\mathcal{A}\mathbf{x}^{m-1}$ at different points $\mathbf{x} \in \mathbb{R}^n$. Therefore, determining how to calculate $\mathcal{A}\mathbf{x}^{m-1}$ effectively is an important problem in the context of tensor analysis and optimization.
The purpose of this study is to determine how to effectively calculate the vector of $(m-1)$-degree homogeneous polynomials $\mathcal{A}\mathbf{x}^{m-1}$ for $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ and $\mathbf{x} \in \mathbb{R}^n$. We aim to reduce the associated computational cost by dissecting the symmetry of each monomial in the canonical basis of homogeneous polynomials and by leveraging the sparsity of the involved tensors. Our main contributions are as follows:
  • Firstly, taking into account the symmetry of the monomials composing homogeneous polynomials, we replace the original tensors with their merge tensors, significantly reducing computational costs.
  • Secondly, considering sparsity, we design algorithms for the calculation of $\mathcal{A}\mathbf{x}^{m-1}$ that operate on the non-zero elements and their positions. When searching for the non-zero elements and their positions in the merge tensor, we also utilize symmetry.
  • Thirdly, a simplified algorithm is proposed to avoid duplicate calculations, which further reduces the computational cost.
The rest of this paper is organized as follows. In Section 2, we introduce some symbols, concepts, and results that will be used in the subsequent analyses. In Section 3, we determine the computational complexity of calculating $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ without considering sparsity. In Section 4, we design algorithms for calculating $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ when taking sparsity into consideration and present a simplified algorithm for the calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. Preliminary numerical results are reported in Section 5, and concluding remarks are given in Section 6.

2. Preliminaries

2.1. Notations

Let the index set $T \subseteq [n]$ be non-empty. We use $|T|$ to denote the cardinality of the set $T$. For any $\mathbf{x} \in \mathbb{R}^n$, $\mathbf{x}_T \in \mathbb{R}^{|T|}$ denotes the sub-vector of $\mathbf{x}$ containing the components corresponding to the indices in $T$. For any $A \in \mathbb{R}^{m \times n}$, $A_T = (a_{ij})$ denotes the sub-matrix of $A$ composed of its rows corresponding to the indices in $T$; that is, the entries $a_{ij}$ of $A_T$ satisfy $i \in T$ and $j \in [n]$.

2.2. Merge Tensor

For any $i_1 \in [l]$ and $i_2, \dots, i_m \in [n]$, we define
$\delta_{i_1 i_2 \cdots i_m} = \begin{cases} 1 & \text{if } i_1 = i_2 = \dots = i_m, \\ 0 & \text{otherwise.} \end{cases}$
For any given indices $i_2, \dots, i_m \in [n]$, we use $P_{i_2 \cdots i_m}$ to denote the set of all permutations of the index arrangement $i_2 \cdots i_m$. For any $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$, $R_i(\mathcal{A}) \in \mathbb{R}^{[m-1, n]}$ denotes the $i$-th row-subtensor of $\mathcal{A}$, with entries $(R_i(\mathcal{A}))_{i_2 \cdots i_m} = a_{i i_2 \cdots i_m}$ for any $i \in [l]$.
The following concept was introduced in [32].
Definition 1.
For an arbitrary given tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$, we call $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$ the merge tensor under permutation of $\mathcal{A}$ if, for any $i_1 \in [l]$ and $i_2, \dots, i_m \in [n]$,
$\hat{a}_{i_1 i_2 \cdots i_m} = \begin{cases} a_{i_1 i_2 \cdots i_m} & \text{if } \delta_{i_1 i_2 \cdots i_m} = 1, \\ \sum_{j_2 \cdots j_m \in P_{i_2 \cdots i_m}} a_{i_1 j_2 \cdots j_m} & \text{if } \delta_{i_1 i_2 \cdots i_m} = 0 \text{ and } i_2 \le \dots \le i_m, \\ 0 & \text{otherwise.} \end{cases}$
In the following, we use the term merge tensor instead of merge tensor under permutation for the sake of simplicity.
It is easy to see that, by utilizing the symmetry of the monomial $x_{i_2} x_{i_3} \cdots x_{i_m}$, each entry of the merge tensor $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$ is the sum of the entries of the corresponding original tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ over the permutations in $P_{i_2 i_3 \cdots i_m}$. From Definition 1, for any given tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$, it is obvious that
$\mathcal{A}\mathbf{x}^{m-1} = \hat{\mathcal{A}}\mathbf{x}^{m-1}, \quad \forall\, \mathbf{x} \in \mathbb{R}^n.$ (2)
We use the following simple example to illustrate this point.
Example 1.
Let $\mathcal{A} = (a_{i_1 i_2 i_3}) \in \mathbb{R}^{[3, 2]}$, where $a_{111} = 5$, $a_{112} = -3$, $a_{121} = 1$, $a_{122} = -1$, $a_{211} = 2$, $a_{212} = 2$, $a_{221} = -4$, and $a_{222} = 4$.
Let $\hat{\mathcal{A}} = (\hat{a}_{i_1 i_2 i_3}) \in \mathbb{R}^{[3, 2]}$, where $\hat{a}_{111} = 5$, $\hat{a}_{112} = -2$, $\hat{a}_{121} = 0$, $\hat{a}_{122} = -1$, $\hat{a}_{211} = 2$, $\hat{a}_{212} = -2$, $\hat{a}_{221} = 0$, and $\hat{a}_{222} = 4$.
It is easy to see that $\mathcal{A}\mathbf{x}^2 = \hat{\mathcal{A}}\mathbf{x}^2$ for all $\mathbf{x} \in \mathbb{R}^2$. Specifically, we have
$\mathcal{A}\mathbf{x}^2 = \begin{pmatrix} 5x_1^2 - 3x_1x_2 + x_2x_1 - x_2^2 \\ 2x_1^2 + 2x_1x_2 - 4x_2x_1 + 4x_2^2 \end{pmatrix} = \begin{pmatrix} 5x_1^2 - 2x_1x_2 - x_2^2 \\ 2x_1^2 - 2x_1x_2 + 4x_2^2 \end{pmatrix} = \hat{\mathcal{A}}\mathbf{x}^2.$
For any given tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ and $\mathbf{x} \in \mathbb{R}^n$, it follows from (2) that we may obtain the value of $\mathcal{A}\mathbf{x}^{m-1}$ by computing $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ for any $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$ satisfying (2). As the merge tensor $\hat{\mathcal{A}}$ given in Definition 1 is one of the sparsest tensors among all tensors satisfying (2), we compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ instead of $\mathcal{A}\mathbf{x}^{m-1}$, where $\hat{\mathcal{A}}$ is the merge tensor of $\mathcal{A}$, as this greatly reduces the cost of calculation. In addition, we exploit the sparsity of $\hat{\mathcal{A}}$ to further reduce the computational cost. These aspects are investigated in the following sections.
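As a small illustration of Definition 1 and identity (2), the following MATLAB sketch builds the merge tensor of a random third-order tensor (the case $m = 3$, where only the last two indices are permuted) and checks that $\mathcal{A}\mathbf{x}^2 = \hat{\mathcal{A}}\mathbf{x}^2$; the variable names are ours, and for general $m$ the last $m-1$ indices would be sorted and accumulated, as done in Procedure 2 below.
% Merge a random third-order tensor (Definition 1 with m = 3) and check (2).
l = 2; n = 3;
A = randn(l, n, n);
Ahat = zeros(l, n, n);
for i = 1:l
    for i2 = 1:n
        for i3 = i2:n                                   % keep only sorted pairs i2 <= i3
            if i2 == i3
                Ahat(i, i2, i3) = A(i, i2, i3);
            else
                Ahat(i, i2, i3) = A(i, i2, i3) + A(i, i3, i2);   % merge the two permutations
            end
        end
    end
end
x = randn(n, 1);
y = zeros(l, 1); yhat = zeros(l, 1);
for i = 1:l
    y(i)    = x' * squeeze(A(i, :, :))    * x;          % (A x^2)_i
    yhat(i) = x' * squeeze(Ahat(i, :, :)) * x;          % (Ahat x^2)_i
end
disp(norm(y - yhat))                                     % zero up to rounding error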

3. Calculation of $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ Without Considering Sparsity

For any given $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$ and $\mathbf{x} \in \mathbb{R}^n$, let $\hat{\mathcal{A}} = (\hat{a}_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$ be the merge tensor of $\mathcal{A}$. By (2), we have $\mathcal{A}\mathbf{x}^{m-1} = \hat{\mathcal{A}}\mathbf{x}^{m-1}$ for any $\mathbf{x} \in \mathbb{R}^n$. In this section, we show the difference between $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ when they are computed directly according to their definitions.
On the one hand, for any $i \in [l]$, as there are at most $n^{m-1}$ non-zero entries in the row-subtensor $R_i(\mathcal{A}) \in \mathbb{R}^{[m-1, n]}$, the computational complexity of $(\mathcal{A}\mathbf{x}^{m-1})_i$ is $O(n^{m-1})$ when we compute $(\mathcal{A}\mathbf{x}^{m-1})_i$ directly using (1).
On the other hand, we define
$r := \binom{n + (m-1) - 1}{m-1}$ (3)
and, for any $\mathbf{x} \in \mathbb{R}^n$, we define the monomial vector $[\mathbf{x}]_{m-1} \in \mathbb{R}^r$ as
$[\mathbf{x}]_{m-1} = \big( x_1^{m-1},\ x_1^{m-2}x_2,\ \dots,\ x_1^{m-2}x_n,\ x_1^{m-3}x_2^2,\ x_1^{m-3}x_2x_3,\ \dots,\ x_1^{m-3}x_2x_n,\ x_1^{m-3}x_3^2,\ \dots,\ x_1^{m-3}x_n^2,\ \dots,\ x_1x_2^{m-2},\ x_1x_2^{m-3}x_3,\ \dots,\ x_1x_2^{m-3}x_n,\ \dots,\ x_1x_3^{m-2},\ \dots,\ x_1x_n^{m-2},\ \dots,\ x_2^{m-1},\ x_2^{m-2}x_3,\ \dots,\ x_2^{m-2}x_n,\ x_2^{m-3}x_3^2,\ x_2^{m-3}x_3x_4,\ \dots,\ x_3^{m-1},\ \dots,\ x_n^{m-1} \big)^\top,$
which represents the canonical basis of the vector space of $(m-1)$-degree homogeneous polynomials in $\mathbf{x}$ with real coefficients. For any $i \in [l]$, $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ can be written as
$(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i = \langle \hat{\mathbf{a}}, [\mathbf{x}]_{m-1} \rangle,$ (4)
where $\hat{\mathbf{a}} \in \mathbb{R}^r$ denotes the coefficient vector corresponding to the basis vector $[\mathbf{x}]_{m-1}$. For any $i \in [l]$, it is worth noting that there are at most $\binom{n + (m-1) - 1}{m-1}$ non-zero entries in the row-subtensor $R_i(\hat{\mathcal{A}}) \in \mathbb{R}^{[m-1, n]}$, so that $\hat{\mathbf{a}}$ has at most $\binom{n + (m-1) - 1}{m-1}$ non-zero elements. Hence, the computational complexity of $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ is $O\big(\binom{n + (m-1) - 1}{m-1}\big)$ when we compute $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ directly using (4).
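For example, with $m = 4$ and $n = 100$, a row-subtensor has $n^{m-1} = 10^6$ entries, whereas $\binom{n+m-2}{m-1} = \binom{102}{3} = 171{,}700$; more generally, $\binom{n+m-2}{m-1} \approx n^{m-1}/(m-1)!$ for large $n$, so working with the merge tensor reduces the number of potentially non-zero entries per row-subtensor by a factor of roughly $(m-1)!$.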
The difference between $(\mathcal{A}\mathbf{x}^{m-1})_i$ and $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ when they are computed using the above-mentioned methods is illustrated in Figure 1.
It can be seen that, while the computational cost of both $(\mathcal{A}\mathbf{x}^{m-1})_i$ and $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ increases with $n$, the gap between the two gradually widens; this phenomenon is particularly clear when $m$ is larger. From the above analysis, we can see that, for any given tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$, if its merge tensor $\hat{\mathcal{A}} = (\hat{a}_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$ differs from $\mathcal{A}$ itself, then calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ to obtain the value of $\mathcal{A}\mathbf{x}^{m-1}$ for any $\mathbf{x} \in \mathbb{R}^n$ can greatly reduce the computational cost compared to calculating $\mathcal{A}\mathbf{x}^{m-1}$ directly.

4. Calculation of $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ When Considering Sparsity

In many practical calculations, the tensors considered are usually sparse. In this section, let the tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ be given, and let $\hat{\mathcal{A}}$ be the merge tensor of $\mathcal{A}$. For $\mathbf{x} \in \mathbb{R}^n$, we propose methods to calculate $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ by exploiting the sparsity of the tensors $\mathcal{A}$ and $\hat{\mathcal{A}}$. It is obvious that the calculation of both $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ is trivial when $\mathbf{x}$ is a zero vector or a vector of all ones. Thus, in the following, we assume that $\mathbf{x}$ is neither a zero vector nor a vector of all ones.

4.1. Calculation of $\mathcal{A}\mathbf{x}^{m-1}$

For any $i \in [l]$, the $i$-th component of $\mathcal{A}\mathbf{x}^{m-1}$ is equivalent to
$(\mathcal{A}\mathbf{x}^{m-1})_i = \begin{cases} \sum_{(i_2, \dots, i_m) \in \Delta_i} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} & \text{if } \Delta_i \neq \emptyset, \\ 0 & \text{if } \Delta_i = \emptyset, \end{cases}$ (5)
where $\Delta_i := \{ (i_2, \dots, i_m) : a_{i i_2 \cdots i_m} \neq 0,\ i_j \in [n],\ j = 2, \dots, m \}$. We divide this section into two parts.
Part 1. Identification of the non-zero elements and their positions in the tensor $\mathcal{A}$.
Suppose that $\mathcal{A}$ has $p$ non-zero entries with $p \le l n^{m-1}$. We denote the vector of all non-zero entries of $\mathcal{A}$ and the corresponding index matrix by
$\mathbf{v} := \begin{pmatrix} v_1 \\ \vdots \\ v_p \end{pmatrix} \in \mathbb{R}^p \quad \text{and} \quad S := \begin{pmatrix} (i_1)_1 & \cdots & (i_m)_1 \\ \vdots & & \vdots \\ (i_1)_p & \cdots & (i_m)_p \end{pmatrix} = \begin{pmatrix} s_1 \\ \vdots \\ s_p \end{pmatrix} \in \mathbb{R}^{p \times m},$ (6)
where $s_j \in \mathbb{R}^{1 \times m}$ is the index corresponding to the non-zero entry $v_j$ of $\mathcal{A}$ (i.e., $v_j = a_{s_j}$ for any $j \in [p]$).
The above discussion can be summarized in the following procedure:
Procedure 1
(Calculation of the vector $\mathbf{v}$ and matrix $S$).
(S0) 
Input: tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1) 
Compute the vector $\mathbf{v}$ and matrix $S$ by (6) (for any given tensor $\mathcal{A}$, we can easily obtain the vector $\mathbf{v}$ of all its non-zero entries and the corresponding index matrix $S$ using the MATLAB R2023b command 'find').
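For a tensor stored as a full multidimensional array, Procedure 1 can be realized with the built-in functions find and ind2sub, roughly as in the following sketch (the helper name is ours):
% Sketch of Procedure 1: extract the non-zero values v and the index matrix S of (6).
function [v, S] = nonzeros_and_indices(A)
    lin = find(A);                       % linear indices of the non-zero entries
    v   = A(lin);                        % p x 1 vector of non-zero values
    m   = ndims(A);
    sub = cell(1, m);
    [sub{:}] = ind2sub(size(A), lin);    % linear indices -> subscripts (i_1, ..., i_m)
    S = [sub{:}];                        % p x m index matrix, one row per non-zero entry
end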
Part 2. Calculation of $\mathcal{A}\mathbf{x}^{m-1}$ based on the vector $\mathbf{v}$ and matrix $S$.
For any fixed $i \in [l]$, we select the rows of the index matrix $S$ whose first component equals $i$ and denote the set of the corresponding row numbers by $I_i$; that is,
$I_i := \{ j \in [p] : (s_j)_1 = i \}.$ (7)
If $I_i = \emptyset$ for some $i \in [l]$, then obviously $\Delta_i = \emptyset$; hence, we have $(\mathcal{A}\mathbf{x}^{m-1})_i = 0$. Next, we consider the case $I_i \neq \emptyset$ for $i \in [l]$. In this case, for $s_j \in \mathbb{R}^{1 \times m}$ with $j \in I_i$ and $i \in [l]$, we use $(s_j)_{[m] \setminus \{1\}} \in \mathbb{R}^{1 \times (m-1)}$ to denote the sub-vector of $s_j$ obtained by discarding its first component.
Then, we have
$(\mathcal{A}\mathbf{x}^{m-1})_i = \sum_{j \in I_i} v_j\, \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}} := \sum_{j \in I_i} v_j\, x_{((s_j)_{[m] \setminus \{1\}})_1} \cdots x_{((s_j)_{[m] \setminus \{1\}})_{m-1}}, \quad i \in [l].$
Now, for any $i \in [l]$, (5) becomes
$(\mathcal{A}\mathbf{x}^{m-1})_i = \begin{cases} \sum_{j \in I_i} v_j\, \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}} & \text{if } I_i \neq \emptyset, \\ 0 & \text{if } I_i = \emptyset. \end{cases}$ (8)
We can thus obtain the following algorithm:
Algorithm 1. Calculation of $\mathcal{A}\mathbf{x}^{m-1}$ based on the vector $\mathbf{v}$ and matrix $S$
(S0)
Input: vector $\mathbf{v}$ and matrix $S$ for the tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1)
For all $i \in [l]$, compute $(\mathcal{A}\mathbf{x}^{m-1})_i$ by (8).
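A direct MATLAB realization of Algorithm 1 accumulates the contribution of each non-zero entry into the component selected by its first index, which is equivalent to grouping the rows of $S$ by the sets $I_i$ in (7) and (8); the function name is ours:
% Sketch of Algorithm 1: evaluate A x^(m-1) from v and S; l is the first dimension of A.
function y = axm1_from_vS(v, S, x, l)
    y = zeros(l, 1);
    for j = 1:numel(v)
        i    = S(j, 1);                  % first index (s_j)_1 selects the component
        mono = prod(x(S(j, 2:end)));     % monomial x_{i_2} * ... * x_{i_m}
        y(i) = y(i) + v(j) * mono;       % accumulate v_j times the monomial
    end
end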

4.2. Calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$

In this section, based on the vector $\mathbf{v}$ and matrix $S$ defined by (6) in Section 4.1, we find the vector of all non-zero entries of $\hat{\mathcal{A}}$ and the corresponding index matrix, denoted by $\hat{\mathbf{v}}$ and $\hat{S}$, respectively. Then, we provide a method to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. We divide this section into two parts.
Part 1. Identification of the non-zero elements and their positions in the tensor $\hat{\mathcal{A}}$.
Suppose that $\hat{\mathcal{A}}$ has $q$ non-zero entries with $q \le l r$, where $r$ is defined by (3). Then, $\hat{\mathbf{v}} \in \mathbb{R}^q$ and $\hat{S} \in \mathbb{R}^{q \times m}$. To obtain $\hat{\mathbf{v}}$ and $\hat{S}$, we execute a simple operation on the vector $\mathbf{v} \in \mathbb{R}^p$ and matrix $S \in \mathbb{R}^{p \times m}$ defined by (6).
For any given $i \in [l]$, $I_i = \{ j \in [p] : (s_j)_1 = i \}$ is defined by (7); that is, $I_i$ is the set of numbers of the rows of $S$ whose first element equals $i$.
First, for any given $i \in [l]$, for simplicity, we use
$S^i := S_{I_i} \in \mathbb{R}^{|I_i| \times m} \quad \text{and} \quad \mathbf{v}^i := \mathbf{v}_{I_i} \in \mathbb{R}^{|I_i|}$ (9)
to denote the sub-matrix composed of the rows of $S$ and the sub-vector composed of the elements of $\mathbf{v}$ corresponding to the index set $I_i$ (to reduce storage, we delete all the rows corresponding to the index set $I_i$ from the matrix $S$ and the vector $\mathbf{v}$ after obtaining the sub-matrix $S^i$ and sub-vector $\mathbf{v}^i$). Here, we have $\sum_{i=1}^{l} |I_i| = p$.
Second, for each row of the matrix $S^i$ with $i \in [l]$, we rearrange the last $m-1$ elements in increasing order. We denote the matrix obtained through this partial rearrangement by $\ddot{S}^i$.
Third, for any $i \in [l]$, we assume that there are $r_i$ different rows in $\ddot{S}^i \in \mathbb{R}^{|I_i| \times m}$ and denote the matrix composed of these different rows by $\bar{S}^i \in \mathbb{R}^{r_i \times m}$. Moreover, we denote by $\bar{\mathbf{v}}^i \in \mathbb{R}^{r_i}$ the vector whose $t$-th component is defined by
$(\bar{v}^i)_t := \sum_{j \in K_t^i} v_j, \quad t \in [r_i],$
where $K_t^i := \{ j \in [|I_i|] : \ddot{s}_j = \bar{s}_t \}$, with $\ddot{s}_j$ being the $j$-th row of the matrix $\ddot{S}^i$ and $\bar{s}_t$ being the $t$-th row of the matrix $\bar{S}^i$ (note that these entries can be added together by utilizing symmetry).
Fourth, for any $i \in [l]$, we use $\hat{\mathbf{v}}^i$ to denote the sub-vector of $\bar{\mathbf{v}}^i$ obtained by deleting all zero elements of $\bar{\mathbf{v}}^i$, and $\hat{S}^i$ to denote the sub-matrix of $\bar{S}^i$ obtained by deleting all rows of $\bar{S}^i$ corresponding to the zero elements of $\bar{\mathbf{v}}^i$.
In summary, we can obtain the matrix $\hat{S}$ and vector $\hat{\mathbf{v}}$ as
$\hat{S} := \begin{pmatrix} \hat{S}^1 \\ \vdots \\ \hat{S}^l \end{pmatrix} \in \mathbb{R}^{q \times m} \quad \text{and} \quad \hat{\mathbf{v}} := \begin{pmatrix} \hat{\mathbf{v}}^1 \\ \vdots \\ \hat{\mathbf{v}}^l \end{pmatrix} \in \mathbb{R}^q,$ (10)
as summarized in Procedure 2:
Procedure 2
(Compute the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$ for the merge tensor $\hat{\mathcal{A}}$).
(S0) 
Input: vector $\mathbf{v}$ and matrix $S$ for the tensor $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1) 
Compute the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$ for the merge tensor $\hat{\mathcal{A}}$ by (10).
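In MATLAB, the four steps of Procedure 2 can be condensed by sorting the last $m-1$ indices of every row of $S$, grouping identical rows with unique, and summing the corresponding values with accumarray. The following is a sketch under the assumption that $\mathbf{v}$ is a column vector; the ordering of the output rows may differ from the row-by-row construction above, although the resulting set of (index, value) pairs is the same, and the function name is ours.
% Sketch of Procedure 2: build vhat and Shat of (10) from v and S of (6).
function [vhat, Shat] = merge_vS(v, S)
    Ssorted = [S(:, 1), sort(S(:, 2:end), 2)];   % sort i_2, ..., i_m within each row
    [rows, ~, ic] = unique(Ssorted, 'rows');     % group rows that coincide after sorting
    vsum = accumarray(ic, v);                    % sum the merged entries
    keep = (vsum ~= 0);                          % discard entries that cancel to zero
    vhat = vsum(keep);
    Shat = rows(keep, :);
end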
Part 2. Calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ based on the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$.
Based on the above discussion, we provide the following algorithm to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ for any given tensor $\mathcal{A}$ and vector $\mathbf{x}$.
Algorithm 2. Calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ based on the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$
(S0)
Input: vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$ for the tensor $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1)
For all $i \in [l]$, compute $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ by Algorithm 1, with $\mathcal{A}$ replaced by $\hat{\mathcal{A}}$ and $\mathbf{v}$ and $S$ replaced by $\hat{\mathbf{v}}$ and $\hat{S}$, respectively.

4.3. Simplification of Calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$

To further reduce computational costs, in this section we propose a simplified algorithm for calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. From (8), for any $i \in [l]$ with $I_i \neq \emptyset$, instead of calculating $\sum_{j \in I_i} v_j \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}}$ directly, we can also first calculate $z_j = \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}}$ for all $j \in I_i$ and then calculate $\sum_{j \in I_i} v_j z_j$. These two calculation methods are equivalent; the difference lies in the order of multiplication.
Now, we further consider the latter case. For any $i, k \in [l]$ with $I_i \neq \emptyset$, $I_k \neq \emptyset$, and $i < k$, $S^i$ and $S^k$ are defined by (9). Let $S^i_{[m] \setminus \{1\}}$ (or $S^k_{[m] \setminus \{1\}}$) be the sub-matrix of $S^i$ (or $S^k$) obtained by removing its first column; thus, for $j \in I_i \cup I_k$, the row vector $(s_j)_{[m] \setminus \{1\}}$ defined in Section 4.1 is a row of the matrix $S^i_{[m] \setminus \{1\}}$ or $S^k_{[m] \setminus \{1\}}$.
When the matrices $S^i_{[m] \setminus \{1\}}$ and $S^k_{[m] \setminus \{1\}}$ are regarded as sets of row vectors, their intersection may be non-empty, which means that duplicate rows may appear. Let $\mathbf{z}^i \in \mathbb{R}^{|I_i|}$ (or $\mathbf{z}^k \in \mathbb{R}^{|I_k|}$) be the vector whose $j$-th element is defined as $z^i_j = \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}}$ for $j \in I_i$ (or $z^k_j = \mathbf{x}_{(s_j)_{[m] \setminus \{1\}}}$ for $j \in I_k$), and let $\mathbf{v}^i$ (or $\mathbf{v}^k$) be defined by (9). For any $i, k \in [l]$ with $I_i \neq \emptyset$, $I_k \neq \emptyset$, and $i < k$, we first calculate $\mathbf{z}^i$ and obtain $(\mathcal{A}\mathbf{x}^{m-1})_i = (\mathbf{v}^i)^\top \mathbf{z}^i$. Then, we calculate $\mathbf{z}^k$ and obtain $(\mathcal{A}\mathbf{x}^{m-1})_k = (\mathbf{v}^k)^\top \mathbf{z}^k$. However, when we calculate $\mathbf{z}^k$, some of its elements may already have been calculated in $\mathbf{z}^i$. Therefore, we utilize symmetry so that each monomial in the canonical basis is computed at most once.
We use the following simple example to illustrate the above description.
Example 2.
Let $\mathcal{A} = (a_{i_1 i_2 i_3}) \in \mathbb{R}^{[3, 3]}$, where $a_{111} = 5$, $a_{112} = -3$, $a_{121} = 1$, $a_{122} = 1$, $a_{113} = -4$, $a_{131} = 2$, $a_{123} = 2$, $a_{132} = 1$, $a_{133} = 2$, $a_{211} = 2$, $a_{212} = 4$, $a_{221} = -3$, $a_{233} = 5$, $a_{322} = 2$, $a_{313} = 4$, $a_{331} = -2$, and $a_{333} = -1$.
It is easy to calculate that
$\mathcal{A}\mathbf{x}^2 = \begin{pmatrix} 5x_1^2 - 3x_1x_2 + x_2x_1 + x_2^2 - 4x_1x_3 + 2x_3x_1 + 2x_2x_3 + x_3x_2 + 2x_3^2 \\ 2x_1^2 + 4x_1x_2 - 3x_2x_1 + 5x_3^2 \\ 2x_2^2 + 4x_1x_3 - 2x_3x_1 - x_3^2 \end{pmatrix}$
for a given $\mathbf{x} \in \mathbb{R}^3$. When we calculate the first component of $\mathcal{A}\mathbf{x}^2$, we first need to calculate the vector $\mathbf{z}^1 = (x_1^2, x_1x_2, x_2x_1, x_2^2, x_1x_3, x_3x_1, x_2x_3, x_3x_2, x_3^2)^\top$ and then take the inner product of the vectors $\mathbf{v}^1 = (5, -3, 1, 1, -4, 2, 2, 1, 2)^\top$ and $\mathbf{z}^1$. However, when we calculate the second component of $\mathcal{A}\mathbf{x}^2$, the elements of the vector $\mathbf{z}^2 = (x_1^2, x_1x_2, x_2x_1, x_3^2)^\top$ have already been calculated in $\mathbf{z}^1$, and the same applies to the third component of $\mathcal{A}\mathbf{x}^2$.
According to the above analysis, a similar discussion applies to the calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. In the following, we introduce the details of a process for calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ that avoids duplicate calculations. Based on the matrix $\hat{S}$ defined by (10) in Section 4.2, we can obtain $\hat{S}_{[m] \setminus \{1\}}$ by removing its first column and identify the different rows of $\hat{S}_{[m] \setminus \{1\}}$, collected in a matrix denoted by $\tilde{S}_{[m] \setminus \{1\}}$ (for any given matrix $A$, we can easily obtain the matrix $C$ containing all its different rows using the MATLAB R2023b command 'unique'). Then, for any $i \in [l]$, we can find $\hat{\mathbf{v}}^i$ and the corresponding index matrix $\hat{S}^i$. Based on $\hat{S}^i_{[m] \setminus \{1\}}$ and $\tilde{S}_{[m] \setminus \{1\}}$, we can obtain the intersection of these two sets of rows and the index of each row of $\hat{S}^i_{[m] \setminus \{1\}}$ in $\tilde{S}_{[m] \setminus \{1\}}$. Finally, we calculate the vector $\mathbf{z}$ whose $t$-th element is defined as $z_t = \mathbf{x}_{(\tilde{s}_t)_{[m] \setminus \{1\}}}$, where $(\tilde{s}_t)_{[m] \setminus \{1\}}$ is the $t$-th row of $\tilde{S}_{[m] \setminus \{1\}}$; thus, for any $i \in [l]$, $\mathbf{z}^i$ can be obtained directly from $\mathbf{z}$ and the index of $\hat{S}^i_{[m] \setminus \{1\}}$ in $\tilde{S}_{[m] \setminus \{1\}}$. Then, $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i = (\hat{\mathbf{v}}^i)^\top \mathbf{z}^i$ can be calculated. We divide this section into three parts.
Part 1. Identification of the different rows of $\hat{S}$ after removing its first column.
Let $\hat{S}_{[m] \setminus \{1\}}$ be the sub-matrix of $\hat{S}$ obtained by removing its first column. Then, we can find all the different rows of $\hat{S}_{[m] \setminus \{1\}}$, and we denote the matrix composed of these different rows by $\tilde{S}_{[m] \setminus \{1\}}$. Suppose that there are $s$ different rows of $\hat{S}_{[m] \setminus \{1\}}$. Then, $\tilde{S}_{[m] \setminus \{1\}} \in \mathbb{R}^{s \times (m-1)}$.
Part 2. Identification of the intersection of $\hat{S}^i_{[m] \setminus \{1\}}$ and $\tilde{S}_{[m] \setminus \{1\}}$ with respect to rows and the corresponding row index vector of $\hat{S}^i_{[m] \setminus \{1\}}$ in $\tilde{S}_{[m] \setminus \{1\}}$ for any $i \in [l]$.
For any given $i \in [l]$, $\hat{S}^i$ and $\hat{\mathbf{v}}^i$ are defined by (10); they are the sub-matrix composed of rows of the matrix $\hat{S}$ and the sub-vector composed of elements of the vector $\hat{\mathbf{v}}$, respectively. Let $\hat{S}^i_{[m] \setminus \{1\}}$ be the sub-matrix of $\hat{S}^i$ obtained by discarding its first column. Then, we can obtain the intersection of $\hat{S}^i_{[m] \setminus \{1\}}$ and $\tilde{S}_{[m] \setminus \{1\}}$ and thereby the index set of the rows of $\hat{S}^i_{[m] \setminus \{1\}}$ in $\tilde{S}_{[m] \setminus \{1\}}$. We denote this index set by $J_i$, which is characterized by the relationship
$\hat{S}^i_{[m] \setminus \{1\}} = (\tilde{S}_{[m] \setminus \{1\}})_{J_i}.$ (11)
Then, we can obtain the index set $J$ as follows:
$J := \begin{pmatrix} J_1 \\ J_2 \\ \vdots \\ J_l \end{pmatrix} \in \mathbb{R}^q.$
These two parts can be summarized in the following procedure:
Procedure 3
(Compute the index set $J$ for the tensor $\hat{\mathcal{A}}$).
(S0) 
Input: matrix $\hat{S}$ for the tensor $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1) 
Compute $J$ by (11).
Part 3. Calculation of $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ based on the matrix $\tilde{S}_{[m] \setminus \{1\}}$, vector $\hat{\mathbf{v}}^i$, and index set $J_i$ for any $i \in [l]$.
Let $(\tilde{s}_t)_{[m] \setminus \{1\}}$ be the $t$-th ($t \in [s]$) row of $\tilde{S}_{[m] \setminus \{1\}}$. We first calculate the vector $\mathbf{z}$ based on $\mathbf{x}$ and $\tilde{S}_{[m] \setminus \{1\}}$, as follows:
$z_t = \mathbf{x}_{(\tilde{s}_t)_{[m] \setminus \{1\}}} = x_{((\tilde{s}_t)_{[m] \setminus \{1\}})_1} \cdots x_{((\tilde{s}_t)_{[m] \setminus \{1\}})_{m-1}}, \quad t \in [s].$
For any $i \in [l]$, we can obtain the corresponding $\hat{\mathbf{v}}^i$ and the index set $J_i$. Thus, $\mathbf{z}^i$ can be obtained directly from $\mathbf{z}$ and $J_i$; that is,
$\mathbf{z}^i = \mathbf{z}_{J_i}, \quad i \in [l],$
and then $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i = (\hat{\mathbf{v}}^i)^\top \mathbf{z}^i$ can be calculated.
Now, the $i$-th component of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ is equivalent to
$(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i = \begin{cases} (\hat{\mathbf{v}}^i)^\top \mathbf{z}_{J_i} & \text{if } I_i \neq \emptyset, \\ 0 & \text{if } I_i = \emptyset. \end{cases}$ (12)
Based on the above discussion, we give the following simplified algorithm to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ for any given tensor $\mathcal{A}$ and vector $\mathbf{x}$.
Algorithm 3. Simplified calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ based on the vector $\hat{\mathbf{v}}$ and index set $J$
(S0)
Input: vector $\hat{\mathbf{v}}$ and index set $J$ for the tensor $\hat{\mathcal{A}} \in \mathbb{R}^{l \times [m-1, n]}$.
(S1)
For all $i \in [l]$, compute $(\hat{\mathcal{A}}\mathbf{x}^{m-1})_i$ by (12).
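The following MATLAB sketch combines Procedure 3 and Algorithm 3: the third output of unique(...,'rows') plays the role of the index set $J$, each distinct monomial is evaluated exactly once, and the per-component inner products of (12) are formed with accumarray; the function name is ours.
% Sketch of Procedure 3 + Algorithm 3: evaluate Ahat x^(m-1) from vhat and Shat.
function y = axm1_simplified(vhat, Shat, x, l)
    [Stilde, ~, J] = unique(Shat(:, 2:end), 'rows');   % distinct index rows and map J
    s = size(Stilde, 1);
    z = zeros(s, 1);
    for t = 1:s
        z(t) = prod(x(Stilde(t, :)));                  % z_t: each monomial computed once
    end
    y = accumarray(Shat(:, 1), vhat .* z(J), [l, 1]);  % (vhat^i)' * z^i for every i in [l]
end
In an iterative method, Stilde and J would be computed once and then reused across iterations, as discussed in Section 5.4.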

5. Numerical Experiments

In this section, we implement the methods proposed in Section 4 to calculate $\mathcal{A}\mathbf{x}^{m-1}$ and $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ and conduct a series of experiments to verify the effectiveness of the proposed methods.
All the experiments were performed using MATLAB R2023b on a laptop computer with an Intel Core i7-10710U at 1.1 GHz and 16 GB of RAM.

5.1. Performance of Algorithm 1 (or Algorithm 2) for Computing $\mathcal{A}\mathbf{x}^{m-1}$ (or $\hat{\mathcal{A}}\mathbf{x}^{m-1}$) in the Situation That $\mathbf{v}$ and $S$ (or $\hat{\mathbf{v}}$ and $\hat{S}$) Are Known

According to the description in Section 4, the difference between Algorithms 1 and 2 lies only in their inputs. In this section, we verify the efficiency of Algorithms 1 and 2 through a comparison with 'ttv' in the tensor toolbox [33]. In addition, $\hat{\mathbf{v}}$ and $\hat{S}$ are sometimes easily obtained in practical applications, and the tensor may even be given directly in terms of $\hat{\mathbf{v}}$ and $\hat{S}$; in this case, we can use Algorithm 2 to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ directly, without conducting Procedures 1 and 2. In the following example, to demonstrate the performance of Algorithm 1 (or Algorithm 2), we construct a tensor whose merge tensor is itself, so that Algorithms 1 and 2 coincide.
Example 3.
Let $\mathbf{x} = 2\mathbf{e} \in \mathbb{R}^n$ with $\mathbf{e} = (1, \dots, 1)^\top$ and let $\mathcal{A} = (a_{i_1 i_2 i_3 i_4}) \in \mathbb{R}^{[4, n]}$ be a tensor generated with entries such that, for any $i \in [n - 10]$,
a i i ( i + 1 ) ( i + 1 ) = a i ( i + 1 ) ( i + 3 ) ( i + 4 ) = a i ( i + 2 ) ( i + 5 ) ( i + 5 ) = a i ( i + 2 ) ( i + 5 ) ( i + 8 ) = a i ( i + 3 ) ( i + 4 ) ( i + 5 ) = 1 4 , a i ( i + 4 ) ( i + 4 ) ( i + 6 ) = a i ( i + 5 ) ( i + 8 ) ( i + 9 ) = a i ( i + 6 ) ( i + 8 ) ( i + 9 ) = a i ( i + 7 ) ( i + 9 ) ( i + 10 ) = a i ( i + 8 ) ( i + 9 ) ( i + 9 ) = 1 4 ; a ( i + 1 ) i ( i + 1 ) ( i + 6 ) = a ( i + 1 ) ( i + 1 ) ( i + 1 ) ( i + 6 ) = a ( i + 1 ) ( i + 2 ) ( i + 3 ) ( i + 6 ) = a ( i + 1 ) ( i + 3 ) ( i + 6 ) ( i + 6 ) = a ( i + 1 ) ( i + 5 ) ( i + 5 ) ( i + 6 ) = 1 10 , a ( i + 1 ) ( i + 6 ) ( i + 7 ) ( i + 9 ) = a ( i + 1 ) ( i + 7 ) ( i + 7 ) ( i + 8 ) = a ( i + 1 ) ( i + 8 ) ( i + 9 ) ( i + 9 ) = a ( i + 1 ) ( i + 8 ) ( i + 9 ) ( i + 10 ) = 1 10 ; a ( i + 2 ) i ( i + 1 ) ( i + 6 ) = a ( i + 2 ) ( i + 1 ) ( i + 1 ) ( i + 6 ) = a ( i + 2 ) ( i + 2 ) ( i + 3 ) ( i + 6 ) = a ( i + 2 ) ( i + 3 ) ( i + 6 ) ( i + 6 ) = a ( i + 2 ) ( i + 4 ) ( i + 5 ) ( i + 6 ) = 1 10 , a ( i + 2 ) ( i + 5 ) ( i + 5 ) ( i + 6 ) = a ( i + 2 ) ( i + 6 ) ( i + 7 ) ( i + 9 ) = a ( i + 2 ) ( i + 7 ) ( i + 7 ) ( i + 10 ) = a ( i + 2 ) ( i + 8 ) ( i + 9 ) ( i + 9 ) = a ( i + 2 ) ( i + 8 ) ( i + 9 ) ( i + 10 ) = 1 10 ; a ( i + 3 ) i ( i + 2 ) ( i + 4 ) = a ( i + 3 ) ( i + 1 ) ( i + 1 ) ( i + 3 ) = a ( i + 3 ) ( i + 2 ) ( i + 2 ) ( i + 5 ) = a ( i + 3 ) ( i + 3 ) ( i + 4 ) ( i + 5 ) = a ( i + 3 ) ( i + 4 ) ( i + 5 ) ( i + 7 ) = 1 5 , a ( i + 3 ) ( i + 5 ) ( i + 5 ) ( i + 8 ) = a ( i + 3 ) ( i + 6 ) ( i + 7 ) ( i + 7 ) = a ( i + 3 ) ( i + 7 ) ( i + 7 ) ( i + 9 ) = a ( i + 3 ) ( i + 8 ) ( i + 9 ) ( i + 9 ) = a ( i + 3 ) ( i + 8 ) ( i + 9 ) ( i + 10 ) = 1 5 ; a ( i + 4 ) i ( i + 1 ) ( i + 5 ) = a ( i + 4 ) i ( i + 2 ) ( i + 5 ) = a ( i + 4 ) ( i + 1 ) ( i + 3 ) ( i + 5 ) = a ( i + 4 ) ( i + 2 ) ( i + 3 ) ( i + 5 ) = 1 4 , a ( i + 4 ) ( i + 3 ) ( i + 5 ) ( i + 7 ) = a ( i + 4 ) ( i + 5 ) ( i + 7 ) ( i + 8 ) = a ( i + 4 ) ( i + 6 ) ( i + 6 ) ( i + 8 ) = a ( i + 4 ) ( i + 8 ) ( i + 8 ) ( i + 9 ) = 1 4 ; a ( i + 5 ) ( i + 1 ) ( i + 2 ) ( i + 5 ) = a ( i + 5 ) ( i + 1 ) ( i + 3 ) ( i + 5 ) = a ( i + 5 ) ( i + 2 ) ( i + 3 ) ( i + 5 ) = a ( i + 5 ) ( i + 3 ) ( i + 4 ) ( i + 5 ) = a ( i + 5 ) ( i + 4 ) ( i + 4 ) ( i + 5 ) = 1 5 , a ( i + 5 ) ( i + 5 ) ( i + 5 ) ( i + 8 ) = a ( i + 5 ) ( i + 6 ) ( i + 8 ) ( i + 9 ) = a ( i + 5 ) ( i + 7 ) ( i + 8 ) ( i + 9 ) = a ( i + 5 ) ( i + 8 ) ( i + 9 ) ( i + 9 ) = 1 5 ; a ( i + 6 ) i ( i + 1 ) ( i + 6 ) = a ( i + 6 ) ( i + 1 ) ( i + 2 ) ( i + 9 ) = a ( i + 6 ) ( i + 2 ) ( i + 3 ) ( i + 8 ) = a ( i + 6 ) ( i + 2 ) ( i + 4 ) ( i + 9 ) = a ( i + 6 ) ( i + 3 ) ( i + 5 ) ( i + 10 ) = 1 5 , a ( i + 6 ) ( i + 4 ) ( i + 5 ) ( i + 7 ) = a ( i + 6 ) ( i + 5 ) ( i + 6 ) ( i + 9 ) = a ( i + 6 ) ( i + 6 ) ( i + 6 ) ( i + 9 ) = a ( i + 6 ) ( i + 6 ) ( i + 8 ) ( i + 9 ) = a ( i + 6 ) ( i + 7 ) ( i + 9 ) ( i + 9 ) = 1 5 ; a ( i + 7 ) i ( i + 1 ) ( i + 4 ) = a ( i + 7 ) ( i + 1 ) ( i + 4 ) ( i + 5 ) = a ( i + 7 ) ( i + 2 ) ( i + 3 ) ( i + 6 ) = a ( i + 7 ) ( i + 3 ) ( i + 5 ) ( i + 6 ) = a ( i + 7 ) ( i + 4 ) ( i + 6 ) ( i + 6 ) = 1 4 , a ( i + 7 ) ( i + 5 ) ( i + 7 ) ( i + 7 ) = a ( i + 7 ) ( i + 6 ) ( i + 7 ) ( i + 7 ) = a ( i + 7 ) ( i + 6 ) ( i + 8 ) ( i + 9 ) = a ( i + 7 ) ( i + 8 ) ( i + 8 ) ( i + 9 ) = 1 4 ; a ( i + 8 ) i i ( i + 1 ) = a ( i + 8 ) ( i + 1 ) ( i + 2 ) ( i + 2 ) = a ( i + 8 ) ( i + 2 ) ( i + 4 ) ( i + 5 ) = a ( i + 8 ) ( i + 3 ) ( i + 4 ) ( i + 5 ) = a ( i + 8 ) ( i + 4 ) ( i + 6 ) ( i + 8 ) = 1 5 , a ( i + 8 ) ( i + 4 ) ( i + 7 ) ( i + 8 ) = a ( i + 8 ) ( i + 5 ) ( i + 6 ) ( i + 
7 ) = a ( i + 8 ) ( i + 6 ) ( i + 7 ) ( i + 9 ) = a ( i + 8 ) ( i + 7 ) ( i + 7 ) ( i + 8 ) = 1 5 ; a ( i + 9 ) i ( i + 2 ) ( i + 3 ) = a ( i + 9 ) ( i + 1 ) ( i + 1 ) ( i + 2 ) = a ( i + 9 ) ( i + 2 ) ( i + 4 ) ( i + 8 ) = a ( i + 9 ) ( i + 3 ) ( i + 3 ) ( i + 5 ) = a ( i + 9 ) ( i + 4 ) ( i + 5 ) ( i + 6 ) = 1 10 , a ( i + 9 ) ( i + 4 ) ( i + 7 ) ( i + 8 ) = a ( i + 9 ) ( i + 5 ) ( i + 6 ) ( i + 7 ) = a ( i + 9 ) ( i + 6 ) ( i + 7 ) ( i + 8 ) = a ( i + 9 ) ( i + 7 ) ( i + 8 ) ( i + 9 ) = a ( i + 9 ) ( i + 8 ) ( i + 9 ) ( i + 9 ) = 1 10 ;
with all other entries being 0.
Obviously, the tensor $\mathcal{A}$ given in Example 3 is equal to $\hat{\mathcal{A}}$ and, so, $\hat{\mathbf{v}}$ and $\hat{S}$ can be obtained easily via Procedure 1. Next, we apply both Algorithm 2 and 'ttv' to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. Note that, before executing Algorithm 2, we need to use Procedure 1 to obtain $\hat{\mathbf{v}}$ and $\hat{S}$, while 'ttv' can be applied directly to the tensor $\hat{\mathcal{A}}$. Moreover, we use $\mathrm{CPU}_{\hat{v},\hat{S}}$, $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$, and $\mathrm{CPU}_{\mathrm{ttv}}$ to denote the CPU time cost of Procedure 1, Algorithm 2, and 'ttv' in the tensor toolbox, respectively. The numerical experimental results for different values of $n$ are presented in Table 1, where $p$ is the number of non-zero elements and $sr$ represents the sparsity ratio.
From Table 1, we can see that our Algorithm 2 significantly outperforms 'ttv' in the tensor toolbox for instances with a low sparsity ratio. However, our algorithm requires additional time to find $\hat{\mathbf{v}}$ and $\hat{S}$ via Procedure 1, which can be implemented as an off-line process. Moreover, as $n$ increases, Procedure 1 and Algorithm 2 show clear advantages. In the case $n = 300$, the MATLAB function 'tensor' ran out of memory.
Furthermore, the tensors involved in Table 1 are stored in the general (dense) way. When the tensors in Example 3 are stored by $\hat{\mathbf{v}}$ and $\hat{S}$, Procedure 1 does not need to be executed. Therefore, we only need to directly compare Algorithm 2 and 'ttv'. It is worth noting that 'ttv' in the tensor toolbox is an overloaded function: the function 'ttv' invoked with general tensor arguments differs from that invoked with sparse tensor arguments. When the tensor is stored by $\hat{\mathbf{v}}$ and $\hat{S}$ (that is, in the sparse structure), it is fairer to use 'ttv' with sparse tensors as arguments for the comparison. The numerical experimental results for different $n$ are shown in Table 2, where we use $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$ to denote the CPU time cost of 'ttv' with sparse tensors as arguments, and the other notation is the same as in Table 1. Without loss of generality, in the following, if the tensors involved have the general structure, we execute the function 'ttv' in the tensor toolbox with general tensors as arguments (denoted by 'ttv'), while, if the tensors involved have the sparse structure, we execute the function 'ttv' in the tensor toolbox with sparse tensors as arguments (denoted by 's-ttv').
From Table 2, we can see that, when the tensors are stored using $\hat{\mathbf{v}}$ and $\hat{S}$ or in the sparse structure, both Algorithm 2 and 's-ttv' can solve problems of larger scale, and Algorithm 2 and 's-ttv' perform equally well. When $n = 2000$, Algorithm 2 performed better.

5.2. Verification of the Advantages of Calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ After Merging

In order to verify the advantages of merging, we compare the numerical performance of Algorithm 2 with that of 's-ttv', both starting from $\mathbf{v}$ and $S$.
Example 4.
Let $\mathbf{x} = 2\mathbf{e} \in \mathbb{R}^n$ with $\mathbf{e} = (1, \dots, 1)^\top$ and let $\mathcal{A} = (a_{i_1 i_2 i_3 i_4 i_5}) \in \mathbb{R}^{[5, n]}$ be a tensor generated with entries satisfying
a i i i i i = 1 , a i i i i ( i + 2 ) = 2 , a i i i ( i + 2 ) i = 3 , a i i ( i + 2 ) i i = 1 , a i ( i + 2 ) i i i = 1 , a i ( i + 1 ) ( i + 1 ) ( i + 2 ) ( i + 2 ) = 1 , a i ( i + 2 ) ( i + 2 ) ( i + 1 ) ( i + 1 ) = 2 , i { 1 , 4 , 7 , } ; a i i i i i = 1 , a i ( i 1 ) ( i 1 ) i i = 2 , a i ( i 1 ) i ( i 1 ) i = 2 , a i i ( i 1 ) ( i 1 ) i = 2 , a i i i ( i 1 ) ( i 1 ) = 2 , a i i i ( i + 1 ) ( i + 1 ) = 2 , a i i ( i + 1 ) ( i + 1 ) i = 2 , a i ( i + 1 ) ( i + 1 ) i i = 2 , i { 2 , 5 , 8 , } ; a i i i i i = 1 , a i ( i 2 ) ( i 2 ) i i = 3 , a i i i ( i 2 ) ( i 2 ) = 2 , a i i ( i 2 ) ( i 2 ) i = 3 , a i ( i 2 ) ( i 2 ) ( i 1 ) ( i 1 ) = 1 2 , a i ( i 1 ) ( i 1 ) ( i 2 ) ( i 2 ) = 1 4 , i { 3 , 6 , 9 , } ;
with all other entries being 0.
Obviously, the tensor $\mathcal{A}$ given in Example 4 is very sparse, and its merge tensor $\hat{\mathcal{A}}$ reduces the number of non-zero elements by more than half. Suppose that the tensors in Example 4 are stored in the sparse structure; then, we can directly apply Procedure 2 and Algorithm 2 to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$, while 's-ttv' is used to compute $\mathcal{A}\mathbf{x}^{m-1}$. Moreover, we use $\mathrm{CPU}_{\hat{v},\hat{S}}$, $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$, and $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$ to denote the CPU time cost of Procedure 2, Algorithm 2, and 's-ttv' in the tensor toolbox, respectively. The numerical experimental results for different values of $n$ are shown in Table 3, where $p$ is the number of non-zero elements of $\mathcal{A}$, and $q$ is the number of non-zero elements of $\hat{\mathcal{A}}$.
From Table 3, it can be seen that both Algorithm 2 and 's-ttv' can solve problems of different scales with remarkable computational efficiency. For small-scale problems, Algorithm 2 and 's-ttv' take almost no CPU time, while, for large-scale problems, Algorithm 2 takes less CPU time than 's-ttv'. In addition, although Procedure 2 may take additional CPU time for large-scale problems, it can be performed in an offline manner in practical applications.

5.3. Comparison of Algorithms 1 and 2 with s-ttv

For any given $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ and $\mathbf{x} \in \mathbb{R}^n$ with different $(l, m, n)$, we used 's-ttv' and Algorithm 1 to compute the tensor–vector product $\mathcal{A}\mathbf{x}^{m-1}$ and used Algorithm 2 to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$, where 's-ttv' was implemented using the tensor toolbox [33], consistent with the description in Section 5.1.
Example 5.
Let $\mathbf{x} = 2\mathbf{e} \in \mathbb{R}^n$ with $\mathbf{e} = (1, \dots, 1)^\top$ and let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$ be a sparse tensor generated by sptenrand with sparsity ratio $sr$; that is, $\mathcal{A}$ has $sr \times l n^{m-1}$ non-zero entries uniformly distributed in $(0, 1)$.
To demonstrate the performance of the different algorithms, we report the average CPU time cost over 10 random experiments. In particular, we use $\mathrm{CPU}^{v,S}_{\mathcal{A}x^{m-1}}$, $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$, and $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$ to denote the average CPU time cost of Algorithm 1, Algorithm 2, and 's-ttv', respectively. Moreover, to present the computational cost of the pre-processing procedures clearly, we use $\mathrm{CPU}_{v,S}$ to denote the average CPU time cost for computing $\mathbf{v}$ and $S$ in Procedure 1, and we use $\mathrm{CPU}_{\hat{v},\hat{S}}$ to denote the average CPU time cost for computing the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$ of the merge tensor $\hat{\mathcal{A}}$ in Procedure 2. Here, $p$ is the average number of non-zero elements of the original tensors, and $q$ is the average number of non-zero elements of the corresponding merge tensors. The numerical results are shown in Table 4.
From Table 4, we can see that, when calculating $\mathcal{A}\mathbf{x}^{m-1}$, Algorithm 1 is faster than 's-ttv' in the tensor toolbox for high-order examples, while, for low-order examples, Algorithm 1 and 's-ttv' perform equally well. In general, Algorithm 2 is faster than both Algorithm 1 and 's-ttv'. In our examples, we generated a series of sparse tensors, so Procedure 1, used to calculate $\mathbf{v}$ and $S$, is basically not time-consuming due to the special storage structure of sparse tensors, while Procedure 2, used to calculate $\hat{\mathbf{v}}$ and $\hat{S}$, takes a relatively long time. As discussed in Section 4, we need to implement Procedures 1 and 2 to obtain $\hat{\mathbf{v}}$ and $\hat{S}$ before executing Algorithm 2.
In many practical applications, we would need to compute $\mathcal{A}\mathbf{x}^{m-1}$ repeatedly if we used 's-ttv' in the tensor toolbox. However, in our proposed methods, we only use Procedures 1 and 2 once to obtain $\hat{\mathbf{v}}$ and $\hat{S}$ and then call Algorithm 2 repeatedly, thus greatly shortening the computation time. Therefore, our proposed methods may not have advantages in a single calculation but have significant advantages in repeated calculations.

5.4. Comparison of Algorithms 2 and 3

In this section, in order to demonstrate the effectiveness of the simplified strategy presented in Section 4.3, we demonstrate the numerical performance of Algorithms 2 and 3 based on $\hat{\mathbf{v}}$ and $\hat{S}$.
Example 6.
Let $\mathbf{x} = 2\mathbf{e} \in \mathbb{R}^n$ with $\mathbf{e} = (1, \dots, 1)^\top$ and let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{l \times [m-1, n]}$ be a sparse tensor generated by sptenrand with sparsity ratio $sr$. Then, we can use Procedure 2 to obtain the vector of all non-zero entries $\hat{\mathbf{v}}$ and the corresponding index matrix $\hat{S}$.
In the following, for any given $\mathcal{A} \in \mathbb{R}^{l \times [m-1, n]}$ and $\mathbf{x} \in \mathbb{R}^n$ with different $(l, m, n)$, we use Algorithms 2 and 3 to compute the tensor–vector product $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ based on $\hat{\mathbf{v}}$ and $\hat{S}$. In addition, we use 's-ttv' to compute $\mathcal{A}\mathbf{x}^{m-1}$ as a baseline for comparison.
To demonstrate the performance of the different algorithms, we show the average CPU time cost over 10 random experiments. We use $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$, $\mathrm{CPU}^{J}_{\hat{\mathcal{A}}x^{m-1}}$, and $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$ to denote the average CPU time cost of Algorithm 2, Algorithm 3, and 's-ttv', respectively. As calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ with Algorithm 3 requires an additional procedure, in order to present the computational cost of this pre-processing step clearly, we use $\mathrm{CPU}_{J}$ to denote the average CPU time cost for computing the index set $J$ based on $\hat{\mathbf{v}}$ and $\hat{S}$ for the merge tensor $\hat{\mathcal{A}}$ via Procedure 3. Here, $q$ is the average number of non-zero elements of the corresponding merge tensors (i.e., the average number of rows of $\hat{S}$), and $s$ is the average number of different rows of $\hat{S}$ after discarding its first column. The numerical results are presented in Table 5.
The total number of multiplications is $q(m-1)$ when we calculate $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ using Algorithm 2, while it is $s(m-2) + q$ when we calculate $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ with Algorithm 3. From Table 5, we can see that Algorithm 3 avoids a significant number of multiplications. Regarding the CPU time cost, Algorithm 3 is faster than Algorithm 2 for the different $(l, m, n)$ and sparsity ratios $sr$ considered.
Compared to Algorithm 2, Algorithm 3 takes additional time to find the index set $J$ using Procedure 3 before execution. Similar to the analysis in Section 5.3, when we need to compute $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ iteratively, Procedure 3 only needs to be executed once to obtain $J$, after which Algorithm 3 is called repeatedly, thus greatly reducing the computational time.

5.5. Finding the Least Element for the Set Defined by Polynomial Inequalities

In this section, we consider the least element problem for the set defined by polynomial inequalities [32], in which the proposed iterative algorithm for finding the least element of the considered set involves the calculation of $\mathcal{A}\mathbf{x}^{m-1}$. In order to verify the advantage of our Algorithm 2 over 'ttv' and 's-ttv' in the tensor toolbox [33], we conducted numerical experiments using Example 5.3 from [32]:
Example 7
(Example 5.3 in [32]). Consider the set $S := \{ \mathbf{x} \in \mathbb{R}^{2n+10} : \mathbf{x} \ge 0 \text{ and } \mathcal{A}\mathbf{x}^3 \ge \mathbf{b} \}$, where
b = b 1 b 2 b 3 R 3 n + 10 with b i = i 3 if i { 1 , , n } , 2 i 3 if i { n + 11 , , 2 n + 10 } , i 3 if i { 2 n + 11 , , 3 n + 10 } ,
with b 2 = ( 10 , 12 , 20 , 11.0279 , 20.5699 , 10 , 10 , 10 , 10 , 10 ) R 10 , and  A = ( a i 1 i 2 i 3 i 4 ) R ( 3 n + 10 ) × [ 3 , 2 n + 10 ] with diagonal entries
a i i i i = 1 if i { 1 , , n + 10 } { n + 2 } , 2 if i = n + 2 , 1 if i { n + 11 , , 2 n + 10 } ,
and off-diagonal entries
a 1 ( n + 1 ) ( n + 1 ) ( n + 2 ) = 4 , a 1 ( n + 1 ) ( n + 2 ) ( n + 1 ) = 2 , a 1 ( n + 2 ) ( n + 1 ) ( n + 1 ) = 2 ; a 2 ( n + 2 ) ( n + 3 ) ( n + 3 ) = 1 ; a i 1 ( n + 11 ) ( n + 11 ) = a i ( n + 2 ) ( n + 4 ) ( n + 4 ) = 2 , a i ( n + 11 ) ( n + 11 ) 1 = a i ( n + 4 ) ( n + 2 ) ( n + 4 ) = 3 , for any i { 1 , 2 , 5 , 10 , , n 5 } ; a ( n + 1 ) 112 = a ( n + 1 ) 122 = 2 , a ( n + 1 ) ( n + 1 ) ( n + 1 ) ( n + 3 ) = 1 , a ( n + 1 ) ( n + 1 ) ( n + 3 ) ( n + 3 ) = 1 ; a ( n + 2 ) 111 = a ( n + 2 ) 223 = 1 , a ( n + 2 ) ( n + 2 ) ( n + 2 ) ( n + 3 ) = 1 , a ( n + 2 ) 113 = 4 ; a ( n + 3 ) 111 = 1 , a ( n + 3 ) 223 = 1 , a ( n + 3 ) 112 = 4 ; a ( n + 4 ) 112 = a ( n + 4 ) 113 = 2 , a ( n + 4 ) ( n + 4 ) ( n + 4 ) ( n + 5 ) = 1 , a ( n + 4 ) ( n + 4 ) ( n + 5 ) ( n + 5 ) = 1 ; a ( n + 5 ) 11 n = 4 n , a ( n + 5 ) 12 n = 1 ; a ( n + 6 ) 111 = 2 , a ( n + 6 ) ( n + 6 ) ( n + 6 ) ( n + 7 ) = 3 ; a ( n + 7 ) 112 = 1 , a ( n + 7 ) ( n + 6 ) ( n + 7 ) ( n + 7 ) = 1 ; a ( n + 8 ) 11 n = 1 n , a ( n + 8 ) ( n + 6 ) ( n + 8 ) ( n + 8 ) = 2 ; a ( n + 9 ) 12 n = 1 n , a ( n + 9 ) ( n + 8 ) ( n + 8 ) ( n + 9 ) = 1 ; a ( n + 10 ) 112 = 1 , a ( n + 10 ) ( n + 7 ) ( n + 7 ) ( n + 8 ) = 2 ; a i 11 n = 1 n , a i ( n + 6 ) ( n + 6 ) i = 2 , for any i { n + 11 , , 2 n + 10 } ; a i 11 ( n + 1 ) = a i ( n + 1 ) ( n + 1 ) ( i n 5 ) = 1 , a i ( i n 5 ) ( i n 5 ) ( i n ) = 2 , for any i { 2 n + 11 , , 3 n + 10 } ;
while all other entries are 0. Here, we use $\lfloor x \rfloor$ to denote the floor of $x \in \mathbb{R}$.
From the definition of the tensor $\mathcal{A}$ in Example 7, the merge tensor $\hat{\mathcal{A}}$ can easily be obtained. We used the algorithm proposed in [32] to find the least element of the set $S$ involving the merge tensor $\hat{\mathcal{A}}$, where we replaced 'ttv', used in the fixed point iteration process of the algorithm proposed in [32], with 's-ttv' and with our Algorithm 2. We compared the original algorithm in [32] with the versions in which 'ttv' is replaced by 's-ttv' and by Algorithm 2. The numerical results are shown in Table 6, where It represents the number of iterations, and It(fp) represents the number of iterations in the fixed point method. $\mathrm{CPU}_{\mathrm{ttv}}$, $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$, and $\mathrm{CPU}_{\hat{\mathcal{A}}x^{m-1}}$ represent the CPU times of the respective algorithms. $\mathrm{CPU(fp)}_{\mathrm{ttv}}$, $\mathrm{CPU(fp)}_{\mathrm{s\text{-}ttv}}$, and $\mathrm{CPU(fp)}_{\hat{\mathcal{A}}x^{m-1}}$ represent the CPU times of the fixed point methods used in the respective algorithms. Here, $\mathbf{x}^{k+1}$ is the least element, $\mathrm{Val} := \min\{ (\mathcal{A}(\mathbf{x}^{k+1})^{m-1} - \mathbf{b})_i : i \in [3n+10] \}$, and the symbol "+" means $\mathbf{x}^{k+1} \ge 0$. It is worth noting that $\mathrm{CPU}_{\hat{\mathcal{A}}x^{m-1}}$ and $\mathrm{CPU(fp)}_{\hat{\mathcal{A}}x^{m-1}}$ include the CPU time associated with Procedure 1 for calculating the vector $\hat{\mathbf{v}}$ and matrix $\hat{S}$.
From Table 6, it can be seen that replacing ttv with s-ttv and Algorithm 2 greatly reduced the CPU time of the iterative algorithm due to the sparsity of the example. Moreover, Algorithm 2 always performed better than s-ttv for different values of n, especially when n is relatively small.

5.6. Tensor Complementarity Problems with the Implicit Z-Tensors

To further validate the effectiveness of embedding our Algorithm 2 into iterative algorithms, we consider tensor complementarity problems with implicit Z-tensors [20], in which the proposed fixed point iterative algorithm involves the calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$. We conducted numerical experiments using Example 4 in [20]:
Example 8
(Example 4 in [20]). Consider TCP$(\mathcal{A}, \mathbf{q})$, where $\mathcal{A} = (a_{i_1 i_2 i_3 i_4 i_5}) \in \mathbb{R}^{[5, n]}$ with diagonal entries $a_{iiiii} = 1$ for any $i \in [n] \setminus \{3, 6, 9, \dots\}$ and $a_{iiiii} = -1$ for any $i \in \{3, 6, 9, \dots\}$, non-diagonal entries
a i i i i ( i + 2 ) = 2 , a i ( i + 2 ) i i i = 5 , a i ( i + 1 ) ( i + 1 ) ( i + 2 ) ( i + 2 ) = 1 , i { 1 , 4 , 7 , } , a i ( i 1 ) ( i 1 ) i i = 2 , a i i i ( i 1 ) ( i 1 ) = 6 , a i i i ( i + 1 ) ( i + 1 ) = 2 , i { 2 , 5 , 8 , } , a i ( i 2 ) ( i 2 ) i i = 3 , a i i i ( i 2 ) ( i 2 ) = 5 , a i ( i 2 ) ( i 2 ) ( i 1 ) ( i 1 ) = 1 4 , i { 3 , 6 , 9 , } ,
with all other entries being 0; furthermore, q R n with q i = 1 for i { 1 , 4 , 7 , } , q i = 0 for i { 2 , 5 , 8 , } and q i = 16 for i { 3 , 6 , 9 , } .
The tensor $\mathcal{A}$ in Example 8 is an implicit Z-tensor, and we can easily obtain the merge tensor $\hat{\mathcal{A}}$. We embedded our Algorithm 2 into the fixed point iterative algorithm in [20] and compared it with the algorithm presented in [20], in which the calculation of $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ uses the tensor toolbox [33]. The numerical results are shown in Table 7, where $\mathbf{x}^0$ represents the initial solution, Iteration represents the number of iterations, and $\mathrm{CPU}_{\hat{\mathcal{A}}x^{m-1}}$ and $\mathrm{CPU}_{\mathrm{ttv}}$ represent the CPU time of the whole algorithm embedded with our Algorithm 2 and of that presented in [20], respectively. Furthermore, Res represents the natural residual.
It can be seen that Algorithm 2 outperformed the algorithm in [20] for different initial solutions and values of n. Algorithm 2 greatly reduced the computational time for the fixed point iteration algorithm, especially when n is larger.
To demonstrate more intuitively the difference between calculating $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ with our Algorithm 2 and with the tensor toolbox within the fixed point iterative algorithm in [20], we took the initial solution $\mathbf{x}^0 = (5, 0, 0, 5, 0, 0, \dots)$ as an example and determined the total computational time of the iterative algorithm with each of the two techniques, as shown in Figure 2. The bar chart shows the computational time for $n$ equal to 10, 20, 30, 40, and 50, while the line chart shows the difference in computational time between the two versions.
From Figure 2, we can see that our Algorithm 2 and the tensor toolbox have similar, short computational times when $n$ is relatively small. The computational time of our algorithm does not increase significantly with increasing $n$, while that of the tensor toolbox version increases rapidly. Therefore, the proposed algorithm has better application value for large-scale problems.

6. Concluding Remarks

Many tensor-related problems involve the calculation of the vector of $(m-1)$-degree homogeneous polynomials defined by a tensor, $\mathcal{A}\mathbf{x}^{m-1}$. In this study, taking symmetry and sparsity properties into consideration, we proposed efficient algorithms that avoid the large amount of computation inherent to existing approaches. Specifically, utilizing the symmetry of the monomials in the canonical basis of homogeneous polynomials, we proposed a method to calculate $\mathcal{A}\mathbf{x}^{m-1}$ using the merge tensor of the involved tensor in place of the original tensor, thus reducing the computational cost. Then, an algorithm was designed that additionally exploits sparsity to further reduce the computational cost. Moreover, through an analysis of the calculation details, a simplified algorithm that avoids duplicate calculations was proposed. Finally, the results of extensive numerical experiments verified the effectiveness of the proposed methods.
There are still some aspects worthy of further study. First, although Procedure 2 or Procedure 3 only need to be executed once (or offline) for iterative algorithms, they take a relatively long time; as such, further accelerating the speed of these procedures would be desirable. Second, it is clear that more entries can be merged for sparser tensors, thus making our algorithms faster; therefore, can we determine the range of sparsity that is suitable for our algorithms? Finally, in addition to considering computation time, storage space may also be taken into account. Future investigations may help to address these outstanding questions.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Hu, S.; Huang, Z.H.; Ling, C.; Qi, L. On determinants and eigenvalue theory of tensors. J. Symb. Comput. 2013, 50, 508–531. [Google Scholar] [CrossRef]
  2. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef]
  3. Liu, P.; Liu, G.; Lv, H. Power function method for finding the spectral radius of weakly irreducible nonnegative tensors. Symmetry 2022, 14, 2157. [Google Scholar] [CrossRef]
  4. Lin, H.; Zheng, L.; Zhou, B. Largest and least H-eigenvalues of symmetric tensors and hypergraphs. Linear Multilinear Algebr. 2025, 1–27. [Google Scholar] [CrossRef]
  5. Pakmanesh, M.; Afshin, H.; Hajarian, M. Normalized Newton method to solve generalized tensor eigenvalue problems. Numer. Linear Algebra Appl. 2024, 31, e2547. [Google Scholar] [CrossRef]
  6. Bai, X.L.; He, H.J.; Ling, C.; Zhou, G. A nonnegativity preserving algorithm for multilinear systems with nonsingular M-tensors. Numer. Algor. 2021, 87, 1301–1320. [Google Scholar] [CrossRef]
  7. Ning, J.; Xie, Y.; Yao, J. Efficient splitting methods for solving tensor absolute value equation. Symmetry 2022, 14, 387. [Google Scholar] [CrossRef]
  8. Jiang, Z.; Li, J. Solving tensor absolute value equation. Appl. Numer. Math. 2021, 170, 255–268. [Google Scholar] [CrossRef]
  9. Ding, W.; Wei, Y. Solving multi-linear systems with M-tensors. J. Sci. Comput. 2016, 68, 689–715. [Google Scholar] [CrossRef]
  10. Han, L. A homotopy method for solving multilinear systems with M-tensors. Appl. Math. Lett. 2017, 69, 49–54. [Google Scholar] [CrossRef]
  11. He, H.; Ling, C.; Qi, L.; Zhou, G. A globally and quadratically convergent algorithm for solving multilinear systems with M-tensors. J. Sci. Comput. 2018, 76, 1718–1741. [Google Scholar] [CrossRef]
  12. Li, D.H.; Guan, H.B.; Wang, X.Z. Finding a nonnegative solution to an M-tensor equation. Pac. J. Optim. 2020, 16, 419–440. [Google Scholar]
  13. Liu, D.; Li, W.; Vong, S.W. The tensor splitting with application to solve multi-linear systems. J. Comput. Appl. Math. 2018, 330, 75–94. [Google Scholar] [CrossRef]
  14. Xie, Z.J.; Jin, X.Q.; Wei, Y.M. Tensor methods for solving symmetric M-tensor systems. J. Sci. Comput. 2018, 74, 412–425. [Google Scholar] [CrossRef]
  15. Bai, X.L.; Huang, Z.H.; Wang, Y. Global uniqueness and solvability for tensor complementarity problems. J. Optim. Theory Appl. 2016, 170, 72–84. [Google Scholar] [CrossRef]
  16. Che, M.; Qi, L.; Wei, Y. Positive-definite tensors to nonlinear complementarity problems. J. Optim. Theory Appl. 2016, 168, 475–487. [Google Scholar] [CrossRef]
  17. Huang, Z.H.; Qi, L. Formulating an n-person noncooperative game as a tensor complementarity problem. Comput. Optim. Appl. 2017, 66, 557–576. [Google Scholar] [CrossRef]
  18. Song, Y.; Qi, L. Properties of some classes of structured tensors. J. Optim. Theory Appl. 2015, 165, 854–873. [Google Scholar] [CrossRef]
  19. Song, Y.; Qi, L. Tensor complementarity problem and semi-positive tensors. J. Optim. Theory Appl. 2016, 169, 1069–1078. [Google Scholar] [CrossRef]
  20. Huang, Z.H.; Li, Y.F.; Wang, Y. A fixed point iterative method for tensor complementarity problems with the implicit Z-tensors. J. Glob. Optim. 2023, 86, 495–520. [Google Scholar] [CrossRef]
  21. Jia, Q.; Huang, Z.H.; Wang, Y. Generalized multilinear games and vertical tensor complementarity problems. J. Optim. Theory Appl. 2024, 200, 602–633. [Google Scholar] [CrossRef]
  22. Huang, Z.H.; Qi, L. Tensor complementarity problems, Part I: Basic theory. J. Optim. Theory Appl. 2019, 183, 1–23. [Google Scholar] [CrossRef]
  23. Huang, Z.H.; Qi, L. Tensor complementarity problems, Part III: Applications. J. Optim. Theory Appl. 2019, 183, 771–791. [Google Scholar] [CrossRef]
  24. Qi, L.; Huang, Z.H. Tensor complementarity problems, Part II: Solution methods. J. Optim. Theory Appl. 2019, 183, 365–385. [Google Scholar] [CrossRef]
  25. Fan, J.; Nie, J.; Zhou, A. Tensor eigenvalue complementarity problems. Math. Program. 2018, 170, 507–539. [Google Scholar] [CrossRef]
  26. Ling, C.; He, H.; Qi, L. On the cone eigenvalue complementarity problem for higher-order tensors. Comput. Optim. Appl. 2016, 63, 143–168. [Google Scholar] [CrossRef]
  27. Ling, C.; He, H.; Qi, L. Higher-degree eigenvalue complementarity problems for tensors. Comput. Optim. Appl. 2016, 64, 149–176. [Google Scholar] [CrossRef]
  28. Song, Y.; Qi, L. Eigenvalue analysis of constrained minimization problem for homogeneous polynomial. J. Glob. Optim. 2016, 64, 563–575. [Google Scholar] [CrossRef]
  29. Xu, Y.; Huang, Z.H. Pareto eigenvalue inclusion intervals for tensors. J. Ind. Manag. Optim. 2023, 19, 2123–2139. [Google Scholar] [CrossRef]
  30. Zhang, L.; Chen, C. A Newton-type algorithm for the tensor eigenvalue complementarity problem and some applications. Math. Comput. 2021, 90, 215–231. [Google Scholar] [CrossRef]
  31. Wang, Y.; Huang, Z.H.; Qi, L. Global uniqueness and solvability of tensor variational inequalities. J. Optim. Theory Appl. 2018, 177, 137–152. [Google Scholar] [CrossRef]
  32. Huang, Z.H.; Li, Y.F.; Miao, X.H. Finding the least element of a nonnegative solution set of a class of polynomial inequalities. SIAM J. Matrix Anal. Appl. 2023, 44, 530–558. [Google Scholar] [CrossRef]
  33. Bader, B.W.; Kolda, T.G.; Dunlavy, D.M. Tensor Toolbox for MATLAB, Version 3.6. 28 September 2023. Available online: www.tensortoolbox.org (accessed on 15 February 2025).
Figure 1. Evolution of the difference $n^{m-1} - \binom{n+(m-1)-1}{m-1}$ with respect to $n$ for different $m$.
Figure 2. Comparison of the iterative algorithm in [20] using different techniques.
Table 1. Numerical results for computing $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ in Example 3.
n | p | sr | $\mathrm{CPU}_{\hat{v},\hat{S}}$ | $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathrm{CPU}_{\mathrm{ttv}}$
50 | 3078 | 4.92 × 10^{-4} | 0.0156 | 0.0000 | 0.0000
100 | 6878 | 6.88 × 10^{-5} | 1.3750 | 0.0000 | 0.3906
120 | 8398 | 4.05 × 10^{-5} | 1.7344 | 0.0000 | 0.7031
150 | 10,678 | 2.11 × 10^{-5} | 2.1563 | 0.0000 | 1.7031
200 | 14,478 | 9.05 × 10^{-6} | 7.4688 | 0.0000 | 5.7969
220 | 15,998 | 6.83 × 10^{-6} | 22.5781 | 0.0000 | 85.1250
250 | 18,278 | 4.68 × 10^{-6} | 44.3125 | 0.0313 | 171.7656
300 | 22,078 | 2.73 × 10^{-6} | - | - | -
Table 2. Numerical results for computing $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ in Example 3 with the sparse structure.
n | p | sr | $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$
50 | 3078 | 4.92 × 10^{-4} | 0.0000 | 0.0000
100 | 6878 | 6.88 × 10^{-5} | 0.0000 | 0.0000
120 | 8398 | 4.05 × 10^{-5} | 0.0000 | 0.0000
150 | 10,678 | 2.11 × 10^{-5} | 0.0000 | 0.0313
200 | 14,478 | 9.05 × 10^{-6} | 0.0000 | 0.0469
220 | 15,998 | 6.83 × 10^{-6} | 0.0000 | 0.0000
250 | 18,278 | 4.68 × 10^{-6} | 0.0000 | 0.0313
300 | 22,078 | 2.73 × 10^{-6} | 0.0313 | 0.0000
500 | 37,278 | 5.96 × 10^{-7} | 0.0000 | 0.0156
1000 | 75,728 | 7.53 × 10^{-8} | 0.0000 | 0.0156
2000 | 151,278 | 9.45 × 10^{-9} | 0.0313 | 0.1094
Table 3. Numerical results for computing $\mathcal{A}\mathbf{x}^{m-1}$ in Example 4.
n | p | q | $\mathrm{CPU}_{\hat{v},\hat{S}}$ | $\mathrm{CPU}^{\hat{v},\hat{S}}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$
100 | 694 | 298 | 0.0000 | 0.0000 | 0.0000
1000 | 6994 | 2998 | 0.0000 | 0.0000 | 0.0000
2000 | 13,992 | 5997 | 0.0000 | 0.0000 | 0.0000
10,000 | 69,994 | 29,998 | 0.0000 | 0.0000 | 0.0313
50,000 | 349,992 | 149,997 | 0.0625 | 0.0156 | 0.0938
100,000 | 699,994 | 299,998 | 0.2656 | 0.0313 | 0.1094
200,000 | 1,399,992 | 599,997 | 0.5156 | 0.0313 | 0.2188
Table 4. Numerical results for computing $\mathcal{A}\mathbf{x}^{m-1}$ in Example 5 with different values of $sr$.
( l , m , n ) , sr p / q CPU s - ttv CPU v , S CPU Axm 1 v , S CPU v ^ , S ^ CPU A ^ xm 1 v ^ , S ^
(50, 4, 50), 50%2,459,310/1,027,410.60.17500.00470.18441.50470.0906
(80, 4, 100)31,477,918.2/12,909,783.73.45780.00164.592235.17031.0750
(100, 4, 100)39,347,945/16,138,6143.10310.06092.937536.55161.2438
(30, 5, 30)9,561,777.4/1,221,955.81.94380.01560.87199.93440.1328
(10, 6, 10)393,401/19,875.40.05630.00000.04840.77190.0094
(10, 6, 20)12,590,775.4/424,5852.49840.00001.457818.37190.0391
(50, 4, 50), 20%1,133,057/740,778.50.06880.00310.08440.68590.0422
(80, 4, 100)14,501,746/9,398,356.52.21090.00001.206316.59840.8078
(100, 4, 100)18,127,358/11,748,3511.46090.03441.460916.44210.9344
(30, 5, 30)4,404,639.6/1,171,318.71.51250.01090.45314.90310.1313
(10, 6, 10)181,208.3/19,414.90.04380.00000.08440.34690.0000
(10, 6, 20)5,800,478.7/422,239.61.65160.00000.67037.50780.0375
(50, 4, 50), 10%594,732.1/474,245.90.04840.00000.06880.35000.0297
(80, 4, 100)7,613,230.3/6,042,749.71.62030.00000.58287.76090.4828
(100, 4, 100)9,516,266/7,553,460.20.77810.02340.78288.10470.6250
(30, 5, 30)2,312,471.9/1,017,4020.98590.00310.24533.25470.1125
(10, 6, 10)95,190.2/18,359.30.00940.00000.02970.17500.0094
(10, 6, 20)3,045,135.4/414,376.11.30630.00000.40004.07030.0578
(50, 4, 50), 5%304,809.5/271,144.40.01560.00150.01880.22030.0125
(80, 4, 100)3,901,636.1/3,463,587.51.32190.01410.32814.33280.2563
(100, 4, 100)4,876,798/4,329,426.80.45780.00780.44693.88440.3609
(30, 5, 30)1,185,079.8/747,613.90.54840.00160.13442.12660.0781
(10, 6, 10)48,769.4/16,144.30.01250.00000.00000.09530.0094
(10, 6, 20)1,560,658.4/389,369.20.86410.00000.54847.50780.0437
(50, 4, 50), 1%62,188.1/60,713.80.02190.00000.01880.13280.0094
(80, 4, 100)796,011.4/776,7600.21090.00000.22811.54530.0859
(100, 4, 100)995,026.8/971,036.70.06880.00160.07970.59220.0875
(30, 5, 30)241,783.1/218,543.70.03910.00000.02660.14840.0141
(10, 6, 10)9951.4/7275.20.00160.00000.00000.00780.0000
(10, 6, 20)318,414.5/211,318.50.16880.00000.15630.77970.1047
Table 5. Numerical results for computing $\mathcal{A}\mathbf{x}^{m-1}$ in Example 6 based on $\hat{\mathbf{v}}$ and $\hat{S}$.
( l , m , n ) , sr q / q CPU s - ttv CPU A ^ xm 1 v ^ , S ^ CPU J CPU A ^ xm 1 J
(50, 4, 50), 50%1,027,344.4/22,1000.70160.06881.21250.0313
(80, 4, 100)12,909,783.7/171,7003.49221.148420.88910.8984
(100, 4, 100)16,137,621.3/171,7004.20161.451628.46251.2314
(30, 5, 30)1,221,928.7/40,9202.02970.13752.04840.0594
(10, 6, 10)19,878.7/20020.05470.00310.05470.0000
(10, 6, 20)424,585/42,503.82.33750.05310.73750.0188
(50, 4, 50), 20%740,674.6/22,1000.30630.04061.02970.0281
(80, 4, 100)9,398,356.5/171,7002.11410.685912.67810.5781
(100, 4, 100)11,748,329.8/171,7002.44840.867216.86880.6141
(30, 5, 30)1,171,271.9/40,9201.40000.10161.70310.0391
(10, 6, 10)19,399.1/2000.50.02810.00310.02660.0000
(10, 6, 20)422,239.6/42,501.21.71560.04220.72500.0188
(50, 4, 50), 10%474,356.7/22,099.80.14060.11721.02810.0156
(80, 4, 100)6,042,749.7/171,7001.70940.510911.21560.4125
(100, 4, 100)7,553,286.2/171,7001.73280.565611.91250.3469
(30, 5, 30)1,017,688.9/40,918.40.83910.08911.59690.0375
(10, 6, 10)18,358.6/1997.70.01560.00310.06250.0000
(10, 6, 20)414,376.1/42,493.61.38910.05470.64380.0031
(50, 4, 50), 5%271,201.6/22,093.40.04530.03280.92660.0094
(80, 4, 100)3,463,587.5/171,6981.16250.27197.14530.1609
(100, 4, 100)4,329,140.1/171,6991.38590.31258.84840.2391
(30, 5, 30)747,763.8/40,911.30.48130.06251.11880.0297
(10, 6, 10)16,135.6/19880.01560.00000.04060.0000
(10, 6, 20)389,369.2/42,456.20.78750.03280.56560.0125
(50, 4, 50), 1%60,719.4/20,555.90.00940.00310.42500.0000
(80, 4, 100)776,760/169,413.80.20310.16564.77030.0438
(100, 4, 100)970,963.6/170,773.70.25000.07345.70160.0344
(30, 5, 30)218,543.7/40,215.50.08590.05941.07970.0063
(10, 6, 10)7275.2/1837.30.00160.00000.01410.0000
(10, 6, 20)211,318.5/41,432.70.10630.08590.86560.0313
Table 6. Numerical results of Example 7 for different values of n.
n | It | It(fp) | $\mathrm{CPU}_{\mathrm{ttv}}$/$\mathrm{CPU}_{\mathrm{s\text{-}ttv}}$/$\mathrm{CPU}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathrm{CPU(fp)}_{\mathrm{ttv}}$/$\mathrm{CPU(fp)}_{\mathrm{s\text{-}ttv}}$/$\mathrm{CPU(fp)}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathbf{x}^{k+1}$ | Val
10 | 39 | 15 | 4.5313/2.1719/0.2969 | 4.3281/1.9688/0.0938 | + | −2.07 × 10^{-11}
20 | 39 | 14 | 4.8750/2.3281/0.3750 | 4.5938/2.0781/0.1875 | + | −2.46 × 10^{-11}
50 | 39 | 13 | 34.1094/3.9063/2.1563 | 32.4219/2.1250/0.3750 | + | −2.62 × 10^{-10}
100 | 39 | 13 | 468.9375/39.5156/36.2969 | 433.4063/0.9375/0.5000 | + | −1.75 × 10^{-9}
120 | 39 | 13 | 995.7344/84.4375/79.2813 | 913.2969/1.4063/0.7969 | + | −2.79 × 10^{-9}
150 | 39 | 7 | 2809.6875/258.2656/256.9063 | 2552.2188/5.3750/4.1719 | + | −5.59 × 10^{-9}
Table 7. Numerical results for computing $\hat{\mathcal{A}}\mathbf{x}^{m-1}$ in Example 8.
n | $\mathbf{x}^0$ | Iteration | $\mathrm{CPU}_{\hat{\mathcal{A}}x^{m-1}}$ | $\mathrm{CPU}_{\mathrm{ttv}}$ | Res
10 | (5, 3, 0, 5, 3, 0, …) | 54 | 0.0781 | 0.1719 | 6.65 × 10^{-13}
20 | (5, 3, 0, 5, 3, 0, …) | 55 | 0.1094 | 1.3281 | 5.26 × 10^{-13}
50 | (5, 3, 0, 5, 3, 0, …) | 55 | 0.2656 | 9.8281 | 8.20 × 10^{-13}
80 | (5, 3, 0, 5, 3, 0, …) | 56 | 0.5625 | 88.3906 | 5.54 × 10^{-13}
10 | (5, 0, 0, 5, 0, 0, …) | 42 | 0.0312 | 0.1250 | 9.27 × 10^{-13}
20 | (5, 0, 0, 5, 0, 0, …) | 43 | 0.0625 | 0.9375 | 6.13 × 10^{-13}
50 | (5, 0, 0, 5, 0, 0, …) | 43 | 0.1094 | 7.7344 | 9.56 × 10^{-13}
80 | (5, 0, 0, 5, 0, 0, …) | 44 | 0.3438 | 81.5156 | 6.00 × 10^{-13}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
