Article

Efficient Splitting Methods for Solving Tensor Absolute Value Equation

1 College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350108, China
2 School of Big Data, Fuzhou University of International Studies and Trade, Fuzhou 350202, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(2), 387; https://doi.org/10.3390/sym14020387
Submission received: 29 December 2021 / Revised: 23 January 2022 / Accepted: 8 February 2022 / Published: 15 February 2022

Abstract

The tensor absolute value equation (TAVE) is an interesting class of structured multilinear systems. In this article, from the perspective of numerical algebra, we first propose a tensor-type successive over-relaxation (SOR) method (called TSOR) and a tensor-type accelerated over-relaxation (AOR) method (called TAOR) for solving tensor absolute value equations. Furthermore, a preconditioned tensor splitting method is also applied to this class of structured multilinear systems. Numerical experiments demonstrate the efficiency of the presented methods.

1. Introduction

In this paper, we focus on the following tensor absolute value equation (TAVE):
$\mathcal{A} x^{m-1} - |x|^{[m-1]} = b, \qquad (1)$
where $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ is an order-$m$, dimension-$n$ tensor, $b \in \mathbb{R}^n$ is a vector, and $|x|^{[m-1]}$ is the $n$-dimensional vector
$|x|^{[m-1]} := (|x_1|^{m-1}, |x_2|^{m-1}, \ldots, |x_n|^{m-1})^T \in \mathbb{R}^n, \qquad (2)$
with $x \in \mathbb{R}^n$ the unknown vector.
It is obvious that (1) reduces to the well-known absolute value equation (AVE)
$Ax - |x| = b \qquad (3)$
when $m = 2$ and $A$ is a matrix of appropriate dimensions. In fact, (1) can also be regarded as the multilinear system
$\mathcal{F} x^{m-1} = b \qquad (4)$
with the special structure $\mathcal{F} = \mathcal{A} - D_x \cdot \mathcal{I}_m$, where
$D_x = \mathrm{diag}(\mathrm{sgn}(x)) \in \mathbb{R}^{n \times n}, \qquad \mathcal{I}_m \in \mathbb{R}^{[m,n]}. \qquad (5)$
The symbol '$\cdot$' denotes the matrix-tensor product (see details in the next section).
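As a quick illustration of the notation above, the left-hand side of (1) can be evaluated in a few lines of NumPy. This is only a sketch: the tensor is stored as an $m$-way array, and `tensor_apply` and `tave_residual` are hypothetical helper names of ours.

```python
import numpy as np

def tensor_apply(A, x):
    """Compute A x^{m-1}: contract the order-m tensor A with x along all
    modes except the first, yielding a vector in R^n."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x  # matmul with a vector contracts the last axis
    return out

def tave_residual(A, x, b):
    """Residual of the TAVE  A x^{m-1} - |x|^{[m-1]} = b."""
    m = A.ndim
    return tensor_apply(A, x) - np.abs(x) ** (m - 1) - b
```

For the identity tensor $\mathcal{I}_m$, `tensor_apply` returns exactly $x^{[m-1]}$, so the residual vanishes at $b = 0$ regardless of the signs of $x$.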
An essential problem in pure and applied mathematics is solving various linear or nonlinear systems. Fast and efficient solution techniques for multilinear systems [1,2,3,4,5,6] are becoming increasingly significant in science and engineering due to their extensive applications. In particular, approaches based on higher-order tensors [7,8,9,10,11,12,13,14,15] for data analysis have been widely studied in the era of Big Data.
Song and Qi [16] introduced a class of complementarity problems named tensor complementarity problems. It has been shown that the well-known absolute value equation (AVE) [17] is equivalent to a generalized linear complementarity problem [18] (see [19,20] for more details).
Normally, it is hard to obtain exact solutions by direct methods even for general linear systems of moderate size, which has spurred the development of various iterative strategies. Many works have investigated fast solvers for multilinear systems. Ding and Wei [21,22] proposed classical iterative methods, such as the Jacobi, Gauss–Seidel and Newton methods, by translating (4) into an optimization problem. In general, the computational cost of the Newton method is expensive on account of the matrix inversion involved. Han [23] then investigated a homotopy method based on the Euler–Newton prediction–correction technique to solve multilinear systems with nonsymmetric M-tensors, which demonstrated better convergence performance than the Newton method. The tensor splitting method and its convergence results have been studied by Liu, Li et al. [24]. Furthermore, comparison results for splitting iterations for solving multilinear systems were widely investigated in [25,26]. By contrast, studies of the tensor absolute value equation (TAVE) (1) are relatively rare in the existing literature. Du et al. [27] introduced the tensor absolute value equation (TAVE) and presented the Levenberg–Marquardt method for solving (1) from the viewpoint of optimization theory. Ling et al. [28] provided conditions for the existence of solutions of TAVE with the help of degree theory; moreover, by fixed point theory, the system of TAVE has at least one solution under some workable assumptions.
Fixed-point methods and splitting techniques are also applicable to problems with low-rank tensor structure, e.g., the dynamical or steady-state multicomponent coagulation equation [29,30]. Recently, a novel approach was proposed to solve TAVE in which an interval tensor is introduced as the main tool for proving the unique solvability of TAVE, and a sufficient condition for TAVE to have a unique solution was presented [31]. TAVE can also be related to global optimization via tensor formats, e.g., the tensor train [32]; in this case, the tensor can be regarded as diagonal, and the global maximum is exactly the largest eigenvalue of the diagonalized tensor. To our knowledge, few works have addressed TAVE from the splitting iteration perspective. Motivated by this, we study efficient splitting iteration methods for TAVE: a tensor-type successive over-relaxation method (TSOR) and a tensor-type accelerated over-relaxation method (TAOR). Furthermore, a preconditioner is also constructed for solving TAVE (1).
The remainder of this paper is organized as follows. In Section 2, some basic and useful notations are briefly reviewed. In Section 3, we propose the tensor splitting iteration schemes TAOR and TSOR for solving TAVE, and establish their convergence conditions in detail. Numerical experiments are provided in Section 4 to illustrate the advantages of the presented iteration methods. Finally, a conclusion is given in Section 5.

2. Preliminaries

Let $A \in \mathbb{R}^{[2,n]}$ and $\mathcal{B} \in \mathbb{R}^{[k,n]}$. The matrix-tensor product $\mathcal{C} = A \cdot \mathcal{B} \in \mathbb{R}^{[k,n]}$ is defined by
$c_{j i_2 \cdots i_k} = \sum_{j_2 = 1}^{n} a_{j j_2} b_{j_2 i_2 \cdots i_k}.$
The above formula can also be written as
$\mathcal{C}_{(1)} = (A \cdot \mathcal{B})_{(1)} = A \mathcal{B}_{(1)},$
where $\mathcal{C}_{(1)}$ and $\mathcal{B}_{(1)}$ are the matrices obtained by flattening $\mathcal{C}$ and $\mathcal{B}$ along the first index. For more details, see [33,34].
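The mode-1 unfolding identity above is easy to check numerically. The sketch below assumes the tensor is stored as a $k$-way NumPy array and that both sides use the same (C-order) flattening of the trailing indices:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))        # order-2 tensor (a matrix)
B = rng.standard_normal((n, n, n))     # order-3 tensor

# C = A . B : contract A's second index with B's first index
C = np.einsum('ij,jkl->ikl', A, B)

# Matrix form of the same product: C_(1) = A B_(1), flattened along the first index
C1 = A @ B.reshape(n, -1)
assert np.allclose(C.reshape(n, -1), C1)
```

The identity holds for any fixed flattening convention of the trailing indices, since the contraction only involves the first index of $\mathcal{B}$.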
Definition 1. 
(See [35].) Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. Then the majorization matrix $M(\mathcal{A})$ of $\mathcal{A}$ is the $n \times n$ matrix with entries
$M(\mathcal{A})_{ij} = a_{i j \cdots j}, \quad i, j = 1, 2, \ldots, n.$
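Definition 1 translates directly into code. The helper name below is ours; the tensor is stored as an $m$-way array:

```python
import numpy as np

def majorization(A):
    """Majorization matrix of an order-m tensor: M(A)[i, j] = a_{i j j ... j}."""
    n = A.shape[0]
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = A[(i,) + (j,) * (A.ndim - 1)]
    return M
```

In particular, the majorization matrix of the identity tensor $\mathcal{I}_m$ is the $n \times n$ identity matrix, which is why plain identity blocks appear in the block reformulation of Section 3.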
Definition 2. 
(See [25].) Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A} = M(\mathcal{A}) \cdot \mathcal{I}_m$, we call $M(\mathcal{A})^{-1}$ the order-2 left-inverse of the tensor $\mathcal{A}$, and $\mathcal{A}$ is said to be left-nonsingular, where $\mathcal{I}_m$ is the identity tensor with all diagonal entries equal to 1.
Definition 3. 
(See [25].) Let $\mathcal{A}, \mathcal{E}, \mathcal{F} \in \mathbb{R}^{[m,n]}$. Then $\mathcal{A} = \mathcal{E} - \mathcal{F}$ is called a splitting of the tensor $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular; a regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1} \geq 0$ and $\mathcal{F} \geq 0$ (here $\leq$ and $\geq$ are elementwise); a weak regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1} \mathcal{F} \geq 0$; and a convergent splitting if the spectral radius of $M(\mathcal{E})^{-1} \mathcal{F}$ is less than 1, i.e., $\rho(M(\mathcal{E})^{-1} \mathcal{F}) < 1$.
Definition 4. 
(See [36].) Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an eigenvalue-eigenvector pair of the tensor $\mathcal{A}$ if it satisfies
$\mathcal{A} x^{m-1} = \lambda x^{[m-1]},$
where $x^{[m-1]} = (x_1^{m-1}, x_2^{m-1}, \ldots, x_n^{m-1})^T$. $(\lambda, x)$ is called an H-eigenpair if both $\lambda$ and the vector $x$ are real.
Definition 5. 
Let $\rho(\mathcal{A}) = \max\{|\lambda| : \lambda \in \sigma(\mathcal{A})\}$ denote the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.
Definition 6. 
(See [37].) Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. $\mathcal{A}$ is called a Z-tensor if all its off-diagonal entries are non-positive. $\mathcal{A}$ is called an M-tensor if there exist a non-negative tensor $\mathcal{B}$ and a real number $\eta \geq \rho(\mathcal{B})$ such that
$\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}.$
If $\eta > \rho(\mathcal{B})$, then $\mathcal{A}$ is called a strong M-tensor.

3. Reformulation and Tensor Splitting Iteration

We first transform the TAVE (1) into the equivalent form
$\mathcal{A} x^{m-1} - \mathcal{I}_m y^{m-1} = b, \qquad -|x|^{[m-1]} + \mathcal{I}_m y^{m-1} = 0. \qquad (10)$
In other words,
$Hz = f, \qquad (11)$
where
$H = \begin{pmatrix} M(\mathcal{A}) & -M(\mathcal{I}_m) \\ -D_x & M(\mathcal{I}_m) \end{pmatrix} \in \mathbb{R}^{2n \times 2n}, \quad z = \begin{pmatrix} \mathcal{I}_m x^{m-1} \\ \mathcal{I}_m y^{m-1} \end{pmatrix} \in \mathbb{R}^{2n}, \quad f = \begin{pmatrix} b \\ 0 \end{pmatrix} \in \mathbb{R}^{2n},$
$\mathcal{I}_m$ is the $m$-order, $n$-dimensional identity tensor whose entries equal 1 if and only if $i_1 = \cdots = i_m$ and 0 otherwise, $M(\cdot)$ is the majorization matrix of Definition 1, and the sign matrix $D_x$ is defined by (5).
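For $m = 2$ the blocks are plain matrices ($M(A) = A$, $M(\mathcal{I}_2) = I$), so the reformulation can be illustrated concretely. The numbers below are our own toy data, chosen so that the solution is $x = (1, -1)^T$ with sign matrix $D_x = \mathrm{diag}(1, -1)$:

```python
import numpy as np

A = np.array([[3.0, -1.0], [0.0, 2.0]])
b = np.array([3.0, -3.0])
Dx = np.diag([1.0, -1.0])           # assumed sign pattern of the solution
I = np.eye(2)

# H z = f with z = (x; y): the first block row is A x - y = b,
# the second block row enforces y = D_x x = |x|.
H = np.block([[A, -I], [-Dx, I]])
f = np.concatenate([b, np.zeros(2)])
z = np.linalg.solve(H, f)
x, y = z[:2], z[2:]
```

Here the solve returns $x = (1, -1)^T$ and $y = |x| = (1, 1)^T$, and one checks $Ax - |x| = b$: the block system reproduces the TAVE solution whenever the assumed sign pattern $D_x$ matches that of the actual solution.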
If we consider the tensor splitting [38,39]
$\mathcal{B} = \mathcal{E} - \mathcal{F},$
then it is easy to establish an iterative method for solving the multilinear system
$\mathcal{B} x^{m-1} = b.$
Clearly, the above system can be written as
$\mathcal{E} x^{m-1} = \mathcal{F} x^{m-1} + b,$
i.e.,
$\mathcal{I}_m x^{m-1} = M(\mathcal{E})^{-1} \mathcal{F} x^{m-1} + M(\mathcal{E})^{-1} b.$
Here, we use the order-2 left-nonsingularity of the tensor $\mathcal{E}$, and $\mathcal{I}_m$ is an identity tensor of appropriate order.
Moreover, consider the splitting
$\mathcal{A} = \mathcal{A}_D - \mathcal{A}_L - \mathcal{A}_U,$
where $\mathcal{A}_D$ is the diagonal tensor, and $\mathcal{A}_L$ and $\mathcal{A}_U$ are the strictly lower and upper triangular tensors generated by the tensor $\mathcal{A}$ in (10) [25].
Applying the AOR iteration to (11) gives rise to the TAOR scheme
$(H_D - \sigma H_L) z^{(k+1)} = [(1 - \omega) H_D + (\omega - \sigma) H_L + \omega H_U] z^{(k)} + \omega f,$
where the parameters satisfy $0 < \omega < 2$, $\sigma > 0$, and
$H_D = \begin{pmatrix} M(\mathcal{A}_D) & 0 \\ 0 & M(\mathcal{I}_m) \end{pmatrix}, \quad H_L = \begin{pmatrix} M(\mathcal{A}_L) & 0 \\ D_x & 0 \end{pmatrix}, \quad H_U = \begin{pmatrix} M(\mathcal{A}_U) & M(\mathcal{I}_m) \\ 0 & 0 \end{pmatrix}.$
Thus, we have
$\begin{pmatrix} M(\mathcal{A}_D) - \sigma M(\mathcal{A}_L) & 0 \\ -\sigma D_x & M(\mathcal{I}_m) \end{pmatrix} \begin{pmatrix} \mathcal{I}_m (x^{(k+1)})^{m-1} \\ \mathcal{I}_m (y^{(k+1)})^{m-1} \end{pmatrix} = \begin{pmatrix} (1-\omega) M(\mathcal{A}_D) + (\omega - \sigma) M(\mathcal{A}_L) + \omega M(\mathcal{A}_U) & \omega M(\mathcal{I}_m) \\ (\omega - \sigma) D_x & (1-\omega) M(\mathcal{I}_m) \end{pmatrix} \begin{pmatrix} \mathcal{I}_m (x^{(k)})^{m-1} \\ \mathcal{I}_m (y^{(k)})^{m-1} \end{pmatrix} + \omega f,$
which can be written componentwise as
$\mathcal{I}_m (x^{(k+1)})^{m-1} = [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} \{ [(1-\omega) \mathcal{A}_D + (\omega - \sigma) \mathcal{A}_L + \omega \mathcal{A}_U] (x^{(k)})^{m-1} + \omega (\mathcal{I}_m (y^{(k)})^{m-1} + b) \},$
$\mathcal{I}_m (y^{(k+1)})^{m-1} = (\omega - \sigma) |x^{(k)}|^{[m-1]} + (1-\omega) \mathcal{I}_m (y^{(k)})^{m-1} + \sigma |x^{(k+1)}|^{[m-1]}.$
Based on the above analysis, Algorithm 1 is proposed formally.
Algorithm 1 Tensor AOR splitting iterative method (TAOR) for TAVE
Step 1. Input the vector $b$ and the tensor $\mathcal{A}$. Given a precision $\varepsilon > 0$, parameters $0 < \omega < 2$, $\sigma > 0$, and initial vectors $x^{(0)}, y^{(0)}$, set $k := 0$.
Step 2. If $\|\mathcal{A} (x^{(k)})^{m-1} - |x^{(k)}|^{[m-1]} - b\|_2 < \varepsilon$, stop; otherwise, go to Step 3.
Step 3. Compute the iterates
$x^{(k+1)} = \big( [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} \{ [(1-\omega) \mathcal{A}_D + (\omega-\sigma) \mathcal{A}_L + \omega \mathcal{A}_U] (x^{(k)})^{m-1} + \omega (\mathcal{I}_m (y^{(k)})^{m-1} + b) \} \big)^{[\frac{1}{m-1}]},$
$y^{(k+1)} = \big( (\omega - \sigma) |x^{(k)}|^{[m-1]} + (1-\omega) \mathcal{I}_m (y^{(k)})^{m-1} + \sigma |x^{(k+1)}|^{[m-1]} \big)^{[\frac{1}{m-1}]}.$
Step 4. Set $k := k + 1$ and return to Step 2.
Remark 1. 
If we take $\sigma = \omega$ in the TAOR scheme, the TAOR method reduces to the TSOR method with the iterates
$x^{(k+1)} = \big( [M(\mathcal{A}_D - \omega \mathcal{A}_L)]^{-1} \{ [(1-\omega) \mathcal{A}_D + \omega \mathcal{A}_U] (x^{(k)})^{m-1} + \omega (\mathcal{I}_m (y^{(k)})^{m-1} + b) \} \big)^{[\frac{1}{m-1}]},$
$y^{(k+1)} = \big( (1-\omega) \mathcal{I}_m (y^{(k)})^{m-1} + \omega |x^{(k+1)}|^{[m-1]} \big)^{[\frac{1}{m-1}]}.$
Remark 2. 
Furthermore, let us consider a preconditioned TAVE
$P \cdot \mathcal{A} x^{m-1} = P c,$
where $c = \mathcal{I}_m (y^{(k)})^{m-1} + b$. From the splitting
$\hat{\mathcal{A}} := P \cdot \mathcal{A} = P \cdot \mathcal{A}_D - P \cdot \mathcal{A}_L - P \cdot \mathcal{A}_U = \hat{\mathcal{A}}_D - \hat{\mathcal{A}}_L - \hat{\mathcal{A}}_U,$
it follows that
$x^{(k+1)} = \big( [M(\hat{\mathcal{A}}_D - \omega \hat{\mathcal{A}}_L)]^{-1} \{ [(1-\omega) \hat{\mathcal{A}}_D + \omega \hat{\mathcal{A}}_U] (x^{(k)})^{m-1} + \omega P (\mathcal{I}_m (y^{(k)})^{m-1} + b) \} \big)^{[\frac{1}{m-1}]},$
$y^{(k+1)} = \big( (1-\omega) \mathcal{I}_m (y^{(k)})^{m-1} + \omega |x^{(k+1)}|^{[m-1]} \big)^{[\frac{1}{m-1}]}.$
In the numerical experiments section, a preconditioner of this type will be introduced to further accelerate Algorithm 1.
Next, we collect some critical lemmas [21,27] on the existence of solutions of TAVE.
Lemma 1. 
Let $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If $\mathcal{A}$ is a strong M-tensor, then for every positive vector $b$, the multilinear system of equations $\mathcal{A} x^{m-1} = b$ has a unique positive solution.
Lemma 2. 
Let $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be a Z-tensor. Then it is a strong M-tensor if and only if the multilinear system of equations $\mathcal{A} x^{m-1} = b$ has a unique positive solution for every positive vector $b$.
Lemma 3. 
Let $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be an M-tensor and $b \geq 0$. If there exists $v \geq 0$ such that $\mathcal{A} v^{m-1} \geq b$, then the multilinear system of equations $\mathcal{A} x^{m-1} = b$ has a non-negative solution.
Lemma 4. 
Let $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. If $\mathcal{A}$ can be written as $\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}$ with $\mathcal{B} \geq 0$ and $\eta > \rho(\mathcal{B}) + 1$, then for every positive vector $b$, TAVE (10) has a unique positive solution.
Lemma 5. 
Consider the real quadratic equation $x^2 - p x + q = 0$, where $p$ and $q$ are real numbers. Both roots of the equation are less than one in modulus if and only if $|q| < 1$ and $|p| < 1 + q$.
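Lemma 5 is a standard Jury-type criterion and is easy to sanity-check numerically; the sketch below (function name ours) compares it against roots computed by NumPy:

```python
import numpy as np

def both_roots_inside_unit_disk(p, q):
    """Lemma 5: both roots of x^2 - p x + q = 0 have modulus < 1
    iff |q| < 1 and |p| < 1 + q."""
    return abs(q) < 1 and abs(p) < 1 + q

# cross-check against numerically computed roots
for p, q in [(0.5, 0.3), (2.5, 0.3), (0.0, -1.5), (1.0, 0.2)]:
    roots = np.roots([1.0, -p, q])
    assert both_roots_inside_unit_disk(p, q) == bool(np.all(np.abs(roots) < 1))
```

The first pair gives a complex-conjugate root pair of modulus $\sqrt{q} \approx 0.55$, while the second violates $|p| < 1 + q$ and indeed has a root near $2.37$.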
For the sake of the convergence analysis, we define the iteration errors
$e_x^{(k)} := \mathcal{I}_m (x^*)^{m-1} - \mathcal{I}_m (x^{(k)})^{m-1} = (x^*)^{[m-1]} - (x^{(k)})^{[m-1]},$
$e_y^{(k)} := \mathcal{I}_m (y^*)^{m-1} - \mathcal{I}_m (y^{(k)})^{m-1} = (y^*)^{[m-1]} - (y^{(k)})^{[m-1]}.$
Theorem 1. 
Let $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$. Suppose $\mathcal{A}$ can be written as $\mathcal{A} = \eta \mathcal{I}_m - \mathcal{B}$ with $\mathcal{B} \geq 0$, $\eta > \rho(\mathcal{B}) + 1$, and $b > 0$. Then the iterative sequence $\{x^{(k)}\}$ $(k = 0, 1, \ldots)$ generated by Algorithm 1 converges to the unique positive solution $x^*$ of TAVE (10) provided
$|\kappa \theta_1 - \theta_2| < 1, \qquad |\kappa + \theta_1 + \theta_3| < 1 + \kappa \theta_1 - \theta_2,$
where
$\theta_1 = |1 - \omega|, \quad \theta_2 = \omega \rho |\omega - \sigma|, \quad \theta_3 = \omega \sigma \rho,$
and
$\kappa = \big\| [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} [(1-\omega) M(\mathcal{A}_D) + (\omega - \sigma) M(\mathcal{A}_L) + \omega M(\mathcal{A}_U)] \big\|,$
$\rho = \big\| [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} \big\|.$
Proof. 
Clearly, the existence of the solution $x^*$ of TAVE is guaranteed by Lemmas 1–4.
Subtracting the fixed-point identities satisfied by $(x^*, y^*)$ from the iteration schemes of Algorithm 1 generates
$e_x^{(k+1)} = [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} \{ [(1-\omega) M(\mathcal{A}_D) + (\omega-\sigma) M(\mathcal{A}_L) + \omega M(\mathcal{A}_U)] e_x^{(k)} + \omega e_y^{(k)} \}, \quad e_y^{(k+1)} = (\omega - \sigma)(|x^*|^{[m-1]} - |x^{(k)}|^{[m-1]}) + (1-\omega) e_y^{(k)} + \sigma (|x^*|^{[m-1]} - |x^{(k+1)}|^{[m-1]}). \qquad (25)$
By (25), we have
$\|e_x^{(k+1)}\| \le \big\| [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} [(1-\omega) M(\mathcal{A}_D) + (\omega-\sigma) M(\mathcal{A}_L) + \omega M(\mathcal{A}_U)] \big\| \, \|e_x^{(k)}\| + \omega \big\| [M(\mathcal{A}_D - \sigma \mathcal{A}_L)]^{-1} \big\| \, \|e_y^{(k)}\| = \kappa \|e_x^{(k)}\| + \omega \rho \|e_y^{(k)}\|, \qquad (26)$
$\|e_y^{(k+1)}\| \le |\omega - \sigma| \, \big\| |x^*|^{[m-1]} - |x^{(k)}|^{[m-1]} \big\| + |1-\omega| \, \|e_y^{(k)}\| + \sigma \big\| |x^*|^{[m-1]} - |x^{(k+1)}|^{[m-1]} \big\| \le |\omega - \sigma| \|e_x^{(k)}\| + |1-\omega| \|e_y^{(k)}\| + \sigma \|e_x^{(k+1)}\|. \qquad (27)$
Based on (26) and (27), this can be reformulated as
$\|e_y^{(k+1)}\| \le (|\omega - \sigma| + \sigma \kappa) \|e_x^{(k)}\| + (|1-\omega| + \sigma \omega \rho) \|e_y^{(k)}\|. \qquad (28)$
It follows from (26) and (28) that
$\begin{pmatrix} \|e_x^{(k+1)}\| \\ \|e_y^{(k+1)}\| \end{pmatrix} \le \begin{pmatrix} \kappa & \omega\rho \\ |\omega-\sigma| + \sigma\kappa & |1-\omega| + \sigma\omega\rho \end{pmatrix} \begin{pmatrix} \|e_x^{(k)}\| \\ \|e_y^{(k)}\| \end{pmatrix} \le \cdots \le \begin{pmatrix} \kappa & \omega\rho \\ |\omega-\sigma| + \sigma\kappa & |1-\omega| + \sigma\omega\rho \end{pmatrix}^{k+1} \begin{pmatrix} \|e_x^{(0)}\| \\ \|e_y^{(0)}\| \end{pmatrix}.$
Let
$G = \begin{pmatrix} \kappa & \omega\rho \\ |\omega-\sigma| + \sigma\kappa & |1-\omega| + \sigma\omega\rho \end{pmatrix}.$
When $\rho(G) < 1$, we have $\lim_{k \to \infty} G^{k+1} = 0$; that is,
$\lim_{k \to \infty} \|e_x^{(k)}\| = 0, \qquad \lim_{k \to \infty} \|e_y^{(k)}\| = 0,$
in other words,
$\lim_{k \to \infty} x^{(k)} = x^*, \qquad \lim_{k \to \infty} y^{(k)} = y^*.$
It remains to verify the inequality $\rho(G) < 1$. Let $(\lambda, v)$ be an eigenpair of the matrix $G$. The eigenvalue equation
$G v = \lambda v$
implies
$\kappa v_1 + \omega\rho v_2 = \lambda v_1, \qquad (|\omega - \sigma| + \sigma\kappa) v_1 + (|1-\omega| + \sigma\omega\rho) v_2 = \lambda v_2,$
where $v = (v_1, v_2)^T$. A direct calculation generates
$\lambda^2 - [\kappa + |1-\omega| + \omega\sigma\rho] \lambda + \kappa|1-\omega| - \omega\rho|\omega - \sigma| = 0.$
According to Lemma 5, all eigenvalues $\lambda$ are less than one in modulus if and only if
$|\kappa\theta_1 - \theta_2| < 1, \qquad |\kappa + \theta_1 + \theta_3| < 1 + \kappa\theta_1 - \theta_2.$
This completes the proof. □

4. Numerical Experiments

In this section, numerical examples are discussed to validate the effectiveness of the proposed tensor AOR method ('TAOR', Algorithm 1) and the tensor inexact Levenberg–Marquardt-type method ('TILM', see [27]) for solving the tensor absolute value equation. We compare their convergence performance by the number of iteration steps (denoted 'IT'), the elapsed CPU time in seconds (denoted 'CPU'), and the residual error (denoted 'ERR') defined by
$\mathrm{ERR} := \big\| \mathcal{A} (x^{(k)})^{m-1} - |x^{(k)}|^{[m-1]} - b \big\|_2.$
All computations start from $x^{(0)} = 0$ and $y^{(0)} = 0$ and terminate once the ERR at the current iterate $x^{(k)}$ falls below $10^{-10}$ or the number of iterations exceeds $k_{\max} = 1000$.
In what follows, we consider the tensor preconditioned splitting of (10):
$P \cdot \mathcal{A} = \hat{\mathcal{A}} = \hat{\mathcal{A}}_D - \hat{\mathcal{A}}_L - \hat{\mathcal{A}}_U,$
where $\hat{\mathcal{A}}_D = A_D \cdot \mathcal{I}_m$, $\hat{\mathcal{A}}_L = A_L \cdot \mathcal{I}_m$, $\hat{\mathcal{A}}_U = A_U \cdot \mathcal{I}_m$; here $A_D$, $A_L$, $A_U$ are the diagonal part and the strictly lower and strictly upper triangular parts of $M(P \cdot \mathcal{A})$, and $\mathcal{I}_m$ is the identity tensor as previously mentioned.
A preconditioner $P_\alpha = I + S_\alpha$ (a variant of the form in [40]) is considered, where
$S_\alpha = \begin{pmatrix} 0 & -\alpha_1 a_{122} & 0 & \cdots & 0 \\ -\alpha_1 a_{211} & 0 & -\alpha_2 a_{233} & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & -\alpha_{n-1} a_{n-1,n,n} \\ 0 & \cdots & 0 & -\alpha_{n-1} a_{n,n-1,n-1} & 0 \end{pmatrix},$
$\alpha_i = 0.01$ for $i = 1, 2, \ldots, n-1$, and $I$ is the identity matrix of appropriate dimension.
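Under the reconstruction above, the preconditioner can be assembled as follows. This is a sketch: the helper name is ours, and the sign convention follows the modified Gauss–Seidel preconditioner of Kohno et al. [40], which this variant adapts:

```python
import numpy as np

def precond(A, alpha=0.01):
    """P_alpha = I + S_alpha for an order-m tensor A, where S_alpha holds
    the scaled near-diagonal entries a_{i,i+1,...,i+1} and a_{i+1,i,...,i}."""
    m, n = A.ndim, A.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        S[i, i + 1] = -alpha * A[(i,) + (i + 1,) * (m - 1)]
        S[i + 1, i] = -alpha * A[(i + 1,) + (i,) * (m - 1)]
    return np.eye(n) + S
```

The resulting $P_\alpha$ is a tridiagonal matrix with unit diagonal, so forming $P_\alpha \cdot \mathcal{A}$ adds only $O(n)$ extra work per mode-1 fiber.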
All numerical experiments were carried out with MATLAB R2011b (7.13) on a PC equipped with an Intel(R) Core(TM) i7-2670QM CPU running at 2.20 GHz with 8 GB of RAM under the Windows 7 operating system.
Example 1. 
First, consider the tensor absolute value Equation (10) with a strong M-tensor $\mathcal{A}$ in three different cases.
Case 1. $\mathcal{A} = \eta \mathcal{I}_3 - \mathcal{B}$, where $\mathcal{B} \in \mathbb{R}^{[3,n]}$ is a non-negative tensor with $b_{i_1 i_2 i_3} = |\tan(i_1 + i_2 + i_3)|$.
Case 2. $\mathcal{A} = n^3 \mathcal{I}_3 - \mathcal{B}$, where $\mathcal{B} \in \mathbb{R}^{[3,n]}$ is a non-negative tensor with $b_{i_1 i_2 i_3} = |\sin(i_1 + i_2 + i_3)|$.
Case 3. $\mathcal{A} = s \mathcal{I}_3 - \mathcal{B}$, where $\mathcal{B} \in \mathbb{R}^{[3,n]}$ is generated randomly by MATLAB, and $s = (1 + \delta) \max_{i = 1, 2, \ldots, n} (\mathcal{B} e^2)_i$, $e = (1, 1, \ldots, 1)^T$.
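The Case 3 construction works because, for a non-negative tensor, the largest row sum $(\mathcal{B} e^{m-1})_i$ bounds $\rho(\mathcal{B})$ from above, so any $s$ strictly above it yields a strong M-tensor. A sketch of the construction (NumPy standing in for MATLAB, with $\delta = 0.5$ as our own illustrative value):

```python
import numpy as np

m, n, delta = 3, 4, 0.5
rng = np.random.default_rng(7)
B = rng.random((n,) * m)                    # non-negative random tensor

e = np.ones(n)
row_sums = B
for _ in range(m - 1):                      # row_sums = B e^{m-1}
    row_sums = row_sums @ e
s = (1 + delta) * row_sums.max()            # s > rho(B), hence a strong M-tensor

I3 = np.zeros((n,) * m)
for i in range(n):
    I3[(i,) * m] = 1.0
A = s * I3 - B
```

Since $s$ exceeds every row sum, the diagonal entry $s - b_{ii\cdots i}$ dominates the off-diagonal mass of each slice, which is the elementary reason the resulting $\mathcal{A}$ is a strong M-tensor.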
We provide three cases with different tensors $\mathcal{A}$ and $\mathcal{B}$ of various sizes. The parameters $\sigma, \omega, \eta, \delta$ and the sizes $n$ are specified in the tables and figures.
The numerical results are shown in Table 1, Table 2 and Table 3 and Figure 1 and Figure 2. From these results, we can see that TAOR and TILM are both efficient methods, and Case 3 appears to be the most favorable setting. From the viewpoint of the iteration parameters, a good choice of σ and ω may be close to 1.2. Once the parameters σ and ω are both set to 1, TAOR reduces to TSOR, shown in the TSOR rows of Table 1 and Table 2. Judging by all convergence performances, TSOR is also a quite efficient approach. However, TAOR appears more efficient than the TILM and TSOR methods in all aspects, owing to the flexible selection of parameters. In particular, in terms of iteration count and elapsed CPU time, TAOR is distinctly superior to TILM, despite the ERR of the latter descending rapidly in the last few steps. Furthermore, the residual curves in Figure 1 and Figure 2 confirm the desired convergence behavior of the established methods.
Example 2. 
Consider the following TAVE:
$\mathcal{A} x^{m-1} - |x|^{[m-1]} = b,$
where $\mathcal{A} = 2 \mathcal{I}_m - \vartheta \mathcal{B}$ with
$\mathcal{B}(:,:,1) = \begin{pmatrix} 0.580 & 0.2432 & 0.1429 \\ 0 & 0.4109 & 0.0701 \\ 0.4190 & 0.3459 & 0.7870 \end{pmatrix}, \quad \mathcal{B}(:,:,2) = \begin{pmatrix} 0.4708 & 0.1330 & 0.0327 \\ 0.1341 & 0.5450 & 0.2042 \\ 0.3951 & 0.3220 & 0.7631 \end{pmatrix},$
$\mathcal{B}(:,:,3) = \begin{pmatrix} 0.4381 & 0.1003 & 0 \\ 0.0229 & 0.4338 & 0.0930 \\ 0.5390 & 0.4659 & 0.9070 \end{pmatrix},$
and $\mathcal{I}_m$ is the identity tensor of order 3 and dimension 3. In this example, we set $x^* = (1, 1, 1)^T$ and generate the right-hand side $b$ accordingly.
In this example, we discuss the probable optimal parameters of TAOR. In order to guarantee that $\mathcal{A}$ is a strong M-tensor, we set $\vartheta = 0.1$. All numerical results are depicted in Figure 3, Figure 4, Figure 5 and Figure 6.
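The data of Example 2 can be reproduced directly. The sketch below rebuilds $\mathcal{B}$ from its frontal slices, forms $\mathcal{A} = 2\mathcal{I}_m - \vartheta\mathcal{B}$ with $\vartheta = 0.1$, and generates $b$ from the chosen solution $x^* = (1,1,1)^T$:

```python
import numpy as np

B = np.zeros((3, 3, 3))
B[:, :, 0] = [[0.580, 0.2432, 0.1429],
              [0.0,   0.4109, 0.0701],
              [0.4190, 0.3459, 0.7870]]
B[:, :, 1] = [[0.4708, 0.1330, 0.0327],
              [0.1341, 0.5450, 0.2042],
              [0.3951, 0.3220, 0.7631]]
B[:, :, 2] = [[0.4381, 0.1003, 0.0],
              [0.0229, 0.4338, 0.0930],
              [0.5390, 0.4659, 0.9070]]

theta = 0.1
I3 = np.zeros((3, 3, 3))
for i in range(3):
    I3[i, i, i] = 1.0
A = 2.0 * I3 - theta * B

x_star = np.ones(3)
# b = A x^2 - |x|^{[2]} evaluated at x*, so x* solves the TAVE by construction
b = np.einsum('ijk,j,k->i', A, x_star, x_star) - np.abs(x_star) ** 2
```

Every entry of the generated $b$ is positive here, consistent with the unique-positive-solution setting of Theorem 1.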
From Figure 3, Figure 4, Figure 5 and Figure 6, we find that when the parameter ω is fixed in the interval [0.8, 1.2], the other parameter σ performs best near 1.2 or 1.3. IT and CPU show similar trends as the parameters vary. Note that when ω = 1.2, IT and CPU remain stable for σ between 0.4 and 1.6. These results show a trend similar to that of Example 1.

5. Conclusions

In this paper, the tensor splitting iteration schemes TAOR and TSOR are proposed for solving the tensor absolute value equation (TAVE); they can be regarded as generalizations of the AOR and SOR methods for linear systems. Efficient preconditioning techniques are provided to improve the efficiency of solving TAVE. Meanwhile, we attempt to provide probable selections of optimal parameters through tentative examples. The proposed approaches are demonstrated to be superior to the existing method (of which there are few at present) under some conditions, as validated in the numerical experiments section. Theoretically optimal parameters will be considered in our future research work.

Author Contributions

All authors contributed equally and significantly in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

The project is supported by Fujian Natural Science Foundation (Grant No. 2019J01879) and Key Reform in Education (Grant No. FBJG20200310).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this article.

References

  1. Cipolla, S.; Redivo-Zaglia, M.; Tudisco, F. Extrapolation methods for fixed-point multilinear PageRank computations. Numer. Linear Algebra Appl. 2020, 27, e2280. [Google Scholar] [CrossRef] [Green Version]
  2. He, H.; Ling, C.; Qi, L.; Zhou, G. A Globally and Quadratically Convergent Algorithm for Solving Multilinear Systems with M-tensors. J. Sci. Comput. 2018, 76, 1718–1741. [Google Scholar] [CrossRef]
  3. Lv, C.Q.; Ma, C.F. A Levenberg-Marquardt method for solving semi-symmetric tensor equations. J. Comput. Appl. Math. 2018, 332, 13–25. [Google Scholar] [CrossRef]
  4. Wang, X.; Che, M.L.; Wei, Y.M. Neural networks based approach solving multi-linear systems with M-tensors. Neurocomputing 2019, 351, 33–42. [Google Scholar] [CrossRef]
  5. Xie, Z.J.; Jin, X.Q.; Wei, Y.M. Tensor Methods for Solving Symmetric M-tensor Systems. J. Sci. Comput. 2018, 74, 412–425. [Google Scholar] [CrossRef]
  6. Xie, Z.J.; Jin, X.Q.; Wei, Y.M. A fast algorithm for solving circulant tensor systems. Linear Multilinear Algebra 2017, 65, 1894–1904. [Google Scholar] [CrossRef]
  7. El Ichi, A.; Jbilou, K.; Sadaka, R. Tensor Global Extrapolation Methods Using the n-Mode and the Einstein Products. Mathematics 2020, 8, 1298. [Google Scholar] [CrossRef]
  8. Kolda, T.G. Multilinear Operators for Higher-Order Decompositions; Technical Report SAND2006-2081; Sandia National Laboratories: Albuquerque, NM, USA; Livermore, CA, USA, 2006. [Google Scholar]
  9. Liu, D.D.; Li, W.; Vong, S.W. Relaxation methods for solving the tensor equation arising from the higher-order Markov chains. Numer. Linear Algebra Appl. 2019, 25, e2260. [Google Scholar] [CrossRef]
  10. Heyouni, M.; Saberi-Movahed, F.; Tajaddini, A. A tensor format for the generalized Hessenberg method for solving Sylvester tensor equations. J. Comput. Appl. Math. 2020, 377, 112878. [Google Scholar] [CrossRef]
  11. Reichel, L.; Ugwu, U.O. Tensor Arnoldi-Tikhonov and GMRES-Type Methods for Ill-Posed Problems with a t-Product Structure. J. Sci. Comput. 2022, 90, 1–39. [Google Scholar] [CrossRef]
  12. Saberi-Movahed, F.; Tajaddini, A.; Heyouni, M.; Elbouyahyaoui, L. Some iterative approaches for Sylvester tensor equations, Part I: A tensor format of truncated Loose Simpler GMRES. Appl. Numer. Math. 2022, 172, 428–445. [Google Scholar] [CrossRef]
  13. Song, Y.S.; Qi, L.Q. Spectral properties of positively homogeneous operators induced by higher order tensors. SIAM J. Matrix Anal. Appl. 2013, 34, 1581–1595. [Google Scholar] [CrossRef]
  14. Xie, Y.J.; Ke, Y.F. Neural network approaches based on new NCP-functions for solving tensor complementarity problem. J. Appl. Math. Comput. 2021, 4, 1–21. [Google Scholar] [CrossRef]
  15. Zhang, L.P.; Qi, L.Q.; Zhou, G.L. M-tensors and some applications. SIAM J. Matrix Anal. Appl. 2014, 35, 437–452. [Google Scholar] [CrossRef]
  16. Song, Y.S.; Qi, L.Q. Properties of some classes of structured tensors. J. Optim. Theory Appl. 2015, 165, 854–873. [Google Scholar] [CrossRef] [Green Version]
  17. Ke, Y.F.; Ma, C.F. SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 311, 195–202. [Google Scholar] [CrossRef]
  18. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef] [Green Version]
  19. Che, M.L.; Qi, L.Q.; Wei, Y.M. Positive definite tensors to nonlinear complementarity problems. J. Optim. Theory Appl. 2016, 168, 475–487. [Google Scholar] [CrossRef] [Green Version]
  20. Chen, Z.; Qi, L. A semismooth Newton method for tensor eigenvalue complementarity problem. Comput. Optim. Appl. 2016, 65, 109–126. [Google Scholar] [CrossRef] [Green Version]
  21. Ding, W.Y.; Wei, Y.M. Solving multilinear systems with M-tensors. J. Sci. Comput. 2016, 68, 689–715. [Google Scholar] [CrossRef]
  22. Ding, W.Y.; Wei, Y.M. Generalized tensor eigenvalue problems. SIAM J. Matrix Anal. Appl. 2015, 36, 1073–1099. [Google Scholar] [CrossRef]
  23. Han, L.X. A homotopy method for solving multilinear systems with M-tensors. Appl. Math. Lett. 2017, 69, 49–54. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, D.D.; Li, W.; Vong, S.W. The tensor splitting with application to solve multi-linear systems. J. Comput. Appl. Math. 2018, 330, 75–94. [Google Scholar] [CrossRef]
  25. Li, W.; Liu, D.D.; Vong, S.W. Comparison results for splitting iterations for solving multi-linear systems. Appl. Numer. Math. 2018, 134, 105–121. [Google Scholar] [CrossRef]
  26. Li, W.; Ng, M.K. On the limiting probability distribution of a transition probability tensor. Linear Multilinear Algebra 2014, 62, 362–385. [Google Scholar] [CrossRef] [Green Version]
  27. Du, S.; Zhang, L.; Chen, C.; Qi, L. Tensor absolute value equations. Sci. China Math. 2018, 61, 1695–1710. [Google Scholar] [CrossRef] [Green Version]
  28. Ling, C.; Yan, W.; He, H.; Qi, L. Further study on tensor absolute value equations. Sci. China Math. 2019, 63, 2137–2156. [Google Scholar] [CrossRef] [Green Version]
  29. Matveev, S.A.; Zheltkov, D.A.; Tyrtyshnikov, E.E.; Smirnov, A.P. Tensor train versus Monte Carlo for the multicomponent Smoluchowski coagulation equation. J. Comput. Phys. 2016, 316, 164–179. [Google Scholar] [CrossRef]
  30. Smirnov, A.P.; Matveev, S.A.; Zheltkov, D.A.; Tyrtyshnikov, E.E. Fast and accurate finite-difference method solving multicomponent Smoluchowski coagulation equation with source and sink terms. Procedia Comput. Sci. 2016, 80, 2141–2146. [Google Scholar] [CrossRef] [Green Version]
  31. Jiang, Z.; Li, J. Solving tensor absolute value equation. Appl. Numer. Math. 2021, 170, 255–268. [Google Scholar] [CrossRef]
  32. Zheltkov, D.; Tyrtyshnikov, E. Global optimization based on TT-decomposition. Russ. J. Numer. Anal. Math. Model. 2020, 35, 247–261. [Google Scholar] [CrossRef]
  33. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  34. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  35. Pearson, K. Essentially positive tensors. Int. J. Algebra 2010, 4, 421–427. [Google Scholar]
  36. Qi, L.Q. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef] [Green Version]
  37. Che, M.L.; Qi, L.Q.; Wei, Y.M. M-tensors and nonsingular M-tensors. Linear Algebra Appl. 2013, 439, 3264–3278. [Google Scholar]
  38. Li, D.H.; Xie, S.; Xu, R.H. Splitting methods for tensor equations. Numer. Linear Algebra Appl. 2017, 24, e2102. [Google Scholar] [CrossRef]
  39. Qi, L.Q.; Luo, Z. Tensor Analysis: Spectral Theory and Special Tensors; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2017. [Google Scholar]
  40. Kohno, T.; Kotakemori, H.; Niki, H.; Usui, M. Improving the modified Gauss-Seidel method for Z-matrices. Linear Algebra Appl. 1997, 267, 113–123. [Google Scholar] [CrossRef]
Figure 1. The residual ERR for TAOR and TILM methods with different cases in Example 1, σ = 1.1, ω = 1.3, n = 5.
Figure 2. The residual ERR for TAOR and TILM methods with different cases in Example 1, σ = 1.2, ω = 1.2 and n = 10.
Figure 3. IT (a) and CPU (b) versus parameter σ, ω = 0.6 in Example 2.
Figure 4. IT (a) and CPU (b) versus parameter σ, ω = 0.8 in Example 2.
Figure 5. IT (a) and CPU (b) versus parameter σ, ω = 1.0 in Example 2.
Figure 6. IT (a) and CPU (b) versus parameter σ, ω = 1.2 in Example 2.
Table 1. Preconditioned numerical results for Example 1, n = 5.

Method                    Metric   Case 1        Case 2        Case 3
TAOR, σ = 1.1, ω = 1.3    IT       18            6             5
                          CPU      0.1413        0.1118        0.0743
                          RES      3.2455e-11    2.6210e-12    6.7591e-12
TAOR, σ = 0.1, ω = 1.1    IT       15            5             5
                          CPU      0.1215        0.1098        0.0713
                          RES      2.8875e-11    9.1778e-13    3.9930e-13
TSOR, σ = 1.0, ω = 1.0    IT       20            6             5
                          CPU      0.1662        0.1098        0.0914
                          RES      4.0899e-11    1.8440e-12    6.6225e-12
TILM                      IT       34            25            28
                          CPU      0.4920        0.3498        0.4861
                          RES      1.4108e-11    5.5511e-12    1.1809e-12
Table 2. Preconditioned numerical results for Example 1, n = 10.

Method                    Metric   Case 1        Case 2        Case 3
TAOR, σ = 1.2, ω = 1.2    IT       10            4             6
                          CPU      0.1295        0.1223        0.0807
                          RES      1.4571e-12    6.0335e-14    4.2387e-12
TAOR, σ = 0.2, ω = 1.3    IT       10            10            9
                          CPU      0.1894        0.1466        0.0960
                          RES      1.4571e-12    6.7092e-12    1.9524e-11
TSOR, σ = 1.0, ω = 1.0    IT       10            5             5
                          CPU      0.1503        0.1098        0.0914
                          RES      1.4571e-12    7.0335e-14    6.6225e-12
TILM                      IT       45            43            35
                          CPU      0.6000        0.5883        0.5320
                          RES      7.1598e-11    6.8692e-16    4.9104e-13
Table 3. Preconditioned numerical results for Example 1, n = 50.

Method                    Metric   Case 1        Case 2        Case 3
TAOR, σ = 0.8, ω = 1.3    IT       13            6             8
                          CPU      0.4183        0.3241        0.2626
                          RES      1.6573e-11    7.0635e-13    5.2853e-11
TSOR, σ = 0.8, ω = 0.8    IT       13            7             9
                          CPU      0.6902        0.5926        0.4531
                          RES      1.5732e-11    8.4235e-13    6.8265e-11
TILM                      IT       48            46            39
                          CPU      1.835         1.3452        1.2921
                          RES      8.1845e-10    7.9655e-13    4.9104e-11
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ning, J.; Xie, Y.; Yao, J. Efficient Splitting Methods for Solving Tensor Absolute Value Equation. Symmetry 2022, 14, 387. https://doi.org/10.3390/sym14020387

