Article

Modified Picard-like Method for Solving Absolute Value Equations

Yuan Liang 1 and Chaoqian Li 2,*

1 School of Mathematics and Computer Science, Yunnan Minzu University, Kunming 650504, China
2 School of Mathematics and Statistics, Yunnan University, Kunming 650504, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 848; https://doi.org/10.3390/math11040848
Submission received: 19 December 2022 / Revised: 23 January 2023 / Accepted: 30 January 2023 / Published: 7 February 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

We present a modified Picard-like method to solve absolute value equations by equivalently expressing the implicit fixed-point equation form of the absolute value equations as a two-by-two block nonlinear equation. This unifies some existing matrix splitting algorithms and improves the efficiency of the algorithm by introducing the parameter ω. For the choice of ω in the new method, we give a way to determine the quasi-optimal values. Numerical examples are given to show the feasibility of the proposed method. It is also shown that the new method is better than those proposed by Ke and Ma in 2017 and Dehghan and Shirilord in 2020.

1. Introduction

The absolute value equation (AVE)
$$Ax - B|x| = b, \qquad (1)$$
where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^{n}$, and $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^{T}$, was introduced by Rohn [1], and has attracted the attention of many scholars, such as Mangasarian [2,3], Mangasarian and Meyer [4], Noor et al. [5,6,7], and Ketabchi and Moosaei [8,9,10,11], because it can be used as an important tool in the optimization field, such as in the complementarity problem, linear programming, and convex quadratic programming [12]. When $B = I$, (1) becomes
$$Ax - |x| = b, \qquad (2)$$
which can be obtained from a class of ordinary differential equations [5],
$$\frac{d^{2}x}{dt^{2}} - |x| = (a - t^{2}), \qquad 0 \le t \le 1.$$
Obviously, when B is the zero matrix, AVE (1) reduces to a linear system. In general, the matrix B in AVE (1) is supposed to be nonzero.
One of the important problems in the AVE is the existence and uniqueness of its solution. There are a variety of existence and uniqueness results; for instance, in [13], Rohn proved that if $\sigma_{\max}(A^{-1}B) < 1$ or $\sigma_{\max}(|A^{-1}B|) < 1$ (in this paper, we consider $\|A^{-1}B\| < 1$, where $\|\cdot\|$ denotes the 2-norm), then AVE (1) has a unique solution for any $b \in \mathbb{R}^{n}$, where $\sigma_{\max}(\cdot)$ denotes the largest singular value of a matrix; in [14], Wu and Guo proved that if $A$ can be expressed as $A = \beta I + M$, where $\beta \ge 1$ and $M$ is a nonsingular M-matrix, then AVE (1) has a unique solution for any vector $b$; in addition, in [15], Wu and Li proved that AVE (1) has a unique solution for any vector $b$ if and only if the matrix $A + I - 2D$ is nonsingular for every diagonal matrix $D = \mathrm{diag}(d_i)$ with $0 \le d_i \le 1$. We refer to [4,16] for other sufficient or necessary conditions for the existence and uniqueness results.
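The sufficient condition $\|A^{-1}B\| < 1$ adopted in this paper is easy to test numerically. The following minimal Python/NumPy sketch (the function name is ours, not from the paper) illustrates such a check:

```python
import numpy as np

def ave_solvability_check(A, B):
    """Check the sufficient condition ||A^{-1}B||_2 < 1 used in this paper.

    If it holds, AVE (1) has a unique solution for every right-hand side b
    (the condition is sufficient, not necessary).
    """
    X = np.linalg.solve(A, B)          # X = A^{-1} B
    return np.linalg.norm(X, 2) < 1.0  # 2-norm (largest singular value)
```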
In this paper, we focus on another problem in the AVE (1), that is, how to solve the AVE (1). In fact, various numerical methods have been developed; for instance, Mangasarian, in [17], proposed a generalized Newton method for solving the AVE, which generated a sequence formally stated as
$$(A - D(x^{(k)}))\, x^{(k+1)} = b,$$
where the diagonal matrix $D(x) := \mathrm{diag}(\mathrm{sgn}(x))$, $x \in \mathbb{R}^{n}$. It was proved to be convergent under certain conditions; Lian et al., in [18], further studied the generalized Newton method and obtained weaker convergence results; Wang et al., in [19], proposed some modified Newton-type iteration methods for generalized absolute value equations. Since the convergence requirements of Newton's method are strict, Edalatpour et al., in [20], proposed a generalization of the Gauss–Seidel iteration method; in addition, Ke and Ma, in [21], proposed an SOR-like iteration method by equivalently reformulating the AVE (1) as the two-by-two block nonlinear equation
$$Ax - By = b, \qquad -|x| + y = 0. \qquad (3)$$
Guo, Wu, and Li, in [22], presented some new convergence conditions obtained from the involved iteration matrix of the SOR-like iteration method in [21]; also based on (3), Li and Wu, in [23], improved the SOR-like iteration method proposed by Ke and Ma in [21] and obtained a modified SOR-like iteration method; Ali et al. proposed two modified generalized Gauss–Seidel (MGGS) iteration techniques for solving the AVE (1) in [24]. To accelerate the convergence, in [25], Salkuyeh proposed the Picard-HSS iteration method for the AVE; Zheng extended this method to the Picard-HSS-SOR method in [26]; in [27], Lv and Ma proposed Picard methods for solving the AVE by combining matrix splitting iteration algorithms, such as the Jacobi, SSOR, or SAOR; Dehghan et al., in [28], proposed the following matrix multisplitting Picard-iterative method (PIM), see Algorithm 1, under the condition $\sigma_{\min}(A) > n\,\sigma_{\max}(B)$. In addition, for the case where $A$ is an M-matrix, in [29], Ali et al. presented two new generalized Gauss–Seidel iteration methods; finally, in [30], Yu et al. proposed an inverse-free dynamical system to solve AVE (1). We also refer to [8,9,10,14,25,31,32,33,34,35] for other methods of finding the solution to the AVE.
Algorithm 1: Picard Iteration Method (PIM).
Suppose that $x^{(0)} \in \mathbb{R}^{n}$ is an initial guess for the solution of (3), $\epsilon$ is the target accuracy, and $A = M_i - N_i$ ($i = 1, 2, \ldots, p$, where $p$ is a positive integer) is a splitting of the matrix $A$ such that $M_i^{-1}$ exists. We compute $x^{(k+1)} \in \mathbb{R}^{n}$ by using the following iteration:
1. Input $x^{(0)}$;
2. Solve the following equations to obtain $x^{(k, t_k)}$, $k = 0, 1, 2, \ldots$, $t = 0, 1, \ldots, t_k - 1$, where $t_k$ is a positive integer:
$$\begin{aligned}
x^{(k,\, t + \frac{1}{p})} &= M_1^{-1} N_1 x^{(k,\, t)} + M_1^{-1} B |x^{(k)}| + M_1^{-1} b,\\
x^{(k,\, t + \frac{2}{p})} &= M_2^{-1} N_2 x^{(k,\, t + \frac{1}{p})} + M_2^{-1} B |x^{(k)}| + M_2^{-1} b,\\
x^{(k,\, t + \frac{3}{p})} &= M_3^{-1} N_3 x^{(k,\, t + \frac{2}{p})} + M_3^{-1} B |x^{(k)}| + M_3^{-1} b,\\
&\ \,\vdots\\
x^{(k,\, t + 1)} &= M_p^{-1} N_p x^{(k,\, t + 1 - \frac{1}{p})} + M_p^{-1} B |x^{(k)}| + M_p^{-1} b;
\end{aligned}$$
3. Set $x^{(k+1)} = x^{(k, t_k)}$;
4. End the iteration when $\|b - A x^{(k+1)} + B|x^{(k+1)}|\| \le \epsilon$;
5. Output $x^{(k+1)}$.
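For readers who wish to experiment, a minimal Python/NumPy sketch of Algorithm 1 is given below. The splittings $A = M_i - N_i$ are passed in as explicit matrix pairs; the function name and interface are ours, not part of the original algorithm statement.

```python
import numpy as np

def picard_iteration_method(A, B, b, splittings, t_k=1, eps=1e-8, max_iter=500):
    """Sketch of Algorithm 1 (PIM); `splittings` is a list of (M_i, N_i) with A = M_i - N_i."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        abs_x = np.abs(x)                        # |x^{(k)}| is frozen during the inner sweeps
        z = x.copy()
        for _ in range(t_k):                     # t_k inner sweeps ...
            for M_i, N_i in splittings:          # ... each running through the p splittings
                z = np.linalg.solve(M_i, N_i @ z + B @ abs_x + b)
        x = z                                    # x^{(k+1)} = x^{(k, t_k)}
        if np.linalg.norm(b - A @ x + B @ np.abs(x)) <= eps:
            break
    return x
```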
The main contributions of this paper are listed as follows.
(1) Based on a splitting of the two-by-two block coefficient matrix, the Modified Picard-Like Iteration Method (MPIM) is proposed for solving AVE (1). It unifies some existing matrix splitting algorithms, such as the algorithms in [21,23,24,25,26,28];
(2) Compared with Algorithm 1, we introduce a parameter, ω , to accelerate the convergence of the proposed MPIM method. The quasi-optimal parameters for the MPIM method are also given for various cases.
The present paper proceeds as follows. In Section 2, based on the equivalent form (3), we present a general matrix splitting method for solving the AVE (1). It unifies some existing matrix splitting methods. Furthermore, by combining this with the Picard technique, we propose an MPIM method. In Section 3, some numerical examples are given to show the feasibility and efficiency of the new methods.

2. Modified Picard-like Iteration Method

In this section, based on (3), we first give a general matrix splitting iteration method. By combining this with the Picard technique, we propose a Modified Picard-Like iteration method named MPIM. We also discuss the choice of the quasi-optimal parameter of the MPIM.

2.1. General Matrix Splitting Iteration

As stated in Section 1, by letting $y = |x|$, AVE (1) is equivalent to the form (3). Note that the matrix form of (3) can be written as $\hat{A}\hat{X} = \hat{b}$, where
$$\hat{A} = \begin{pmatrix} A & -B \\ -D(x) & I \end{pmatrix}, \qquad \hat{X} = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad \hat{b} = \begin{pmatrix} b \\ 0 \end{pmatrix},$$
and $x, y \in \mathbb{R}^{n}$. Hence, if we let $A = M - N$, where $M^{-1}$ exists, then $\hat{A} = \hat{M} - \hat{N}$, where
$$\hat{M} = \begin{pmatrix} M & 0 \\ -D(x) & \omega^{-1} I \end{pmatrix}, \qquad \hat{N} = \begin{pmatrix} N & B \\ 0 & (\omega^{-1} - 1) I \end{pmatrix}.$$
Furthermore, the form (3) can be reformulated as
$$\begin{pmatrix} x \\ y \end{pmatrix} = \hat{M}^{-1}\hat{N} \begin{pmatrix} x \\ y \end{pmatrix} + \hat{M}^{-1} \begin{pmatrix} b \\ 0 \end{pmatrix}. \qquad (4)$$
This provides an iteration method to solve AVE (1) as below, see Algorithm 2.
Algorithm 2: General Splitting Iteration Method (GSIM).
Suppose that $x^{(0)}, y^{(0)} \in \mathbb{R}^{n}$ is an initial guess for the solution of (3), $\epsilon$ is the target accuracy, $\omega > 0$, and $A = M - N$ is a splitting of the matrix $A$. We compute $x^{(k+1)} \in \mathbb{R}^{n}$ by using the following iteration:
1. Input $x^{(0)}$.
2. Solve the following equations to obtain $x^{(k+1)}$, $k = 0, 1, 2, \ldots$:
$$\begin{aligned}
x^{(k+1)} &= M^{-1} N x^{(k)} + M^{-1} B y^{(k)} + M^{-1} b,\\
y^{(k+1)} &= \omega |x^{(k+1)}| + (1 - \omega) y^{(k)}.
\end{aligned} \qquad (5)$$
3. End the iteration when $\|b - A x^{(k+1)} + B|x^{(k+1)}|\| \le \epsilon$.
4. Output $x^{(k+1)}$.
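The following short Python/NumPy sketch mirrors Algorithm 2 step by step. The names are ours, and starting with $y^{(0)} = |x^{(0)}|$ is an assumption we make for the sketch, since the algorithm itself only fixes $x^{(0)}$ and $y^{(0)}$ as a generic initial guess.

```python
import numpy as np

def general_splitting_iteration(M, N, B, b, omega, eps=1e-8, max_iter=500):
    """Sketch of Algorithm 2 (GSIM) with A = M - N, M nonsingular, omega > 0."""
    A = M - N
    x = np.zeros_like(b, dtype=float)
    y = np.abs(x)                                   # assumed initial y^{(0)}
    for _ in range(max_iter):
        x = np.linalg.solve(M, N @ x + B @ y + b)   # x-update of (5)
        y = omega * np.abs(x) + (1.0 - omega) * y   # y-update of (5)
        if np.linalg.norm(b - A @ x + B @ np.abs(x)) <= eps:
            break
    return x, y
```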
 Remark 1. 
The GSIM unifies some existing matrix splitting algorithms for the AVE. For example, if $M = \omega^{-1} A$ and $B = I$, then it becomes the SOR-like method (SOR) proposed by Ke and Ma in [21]; if $M = \omega^{-1} D - L$ and $B = I$, where $D = \mathrm{diag}(A)$, then it becomes the Modified SOR-like method (M-SOR) proposed by Li and Wu in [23]; if $M = \beta^{-1}(\theta^{-1} I + D - \lambda L)$, $B = I$, and $\omega = 1$, then it becomes the MGGS by Ali et al. in [24]; if $M = D$, then we obtain a new algorithm named the Modified Jacobi iteration method; if $M = [(2 - \gamma)\gamma]^{-1}[D - \gamma(L + U) + \gamma^{2} L D^{-1} U]$, then we obtain another new algorithm named the Modified SSOR-like method (M-SSOR). It should be pointed out here that by choosing other splittings of the matrix $A$, more methods can be extracted.
Next, a theoretical analysis of the convergence for the GSIM is performed according to the following result, which can be found in [23].
 Lemma 1 
([23]). Let $\lambda$ be any root of the quadratic equation $\lambda^{2} - b\lambda + d = 0$ with $b, d \in \mathbb{R}$. Then $|\lambda| < 1$ if and only if $|d| < 1$ and $|b| < 1 + d$.
 Theorem 1. 
Let $A \in \mathbb{R}^{n\times n}$ be nonsingular and $b \in \mathbb{R}^{n}$, and let $B$, $M$, $N$, and $\omega$ be defined as in Algorithm 2. Denote $P = \|M^{-1}N\|$, $Q = \|M^{-1}B\|$, $s = 1 - \omega$, and $L = \omega Q$.
If
$$P|s| < 1, \qquad L < (1 - |s|)(1 - P),$$
then
$$\lim_{k \to +\infty} \|e_x^{(k)}\| = 0, \qquad \lim_{k \to +\infty} \|e_y^{(k)}\| = 0,$$
where
$$e_x^{(k)} = x^{*} - x^{(k)}, \qquad e_y^{(k)} = y^{*} - y^{(k)}, \qquad (7)$$
where $(x^{*}, y^{*})$ is the solution pair of (4), and $(x^{(k)}, y^{(k)})$ is generated by the iteration scheme (5).
 Proof. 
According to (5) and (7), we obtain
$$e_x^{(k+1)} = M^{-1}N\, e_x^{(k)} + M^{-1}B\, e_y^{(k)}, \qquad e_y^{(k+1)} = s\, e_y^{(k)} + \omega\,\big(|x^{*}| - |x^{(k+1)}|\big).$$
This implies
$$\|e_x^{(k+1)}\| \le P \|e_x^{(k)}\| + Q \|e_y^{(k)}\|,$$
and
$$\|e_y^{(k+1)}\| \le |s|\,\|e_y^{(k)}\| + |\omega|\,\big\| |x^{*}| - |x^{(k+1)}| \big\| \le |s|\,\|e_y^{(k)}\| + |\omega|\,\|x^{*} - x^{(k+1)}\| = |s|\,\|e_y^{(k)}\| + |\omega|\,\|e_x^{(k+1)}\|.$$
Thus,
$$\begin{pmatrix} 1 & 0 \\ -\omega & 1 \end{pmatrix} \begin{pmatrix} \|e_x^{(k+1)}\| \\ \|e_y^{(k+1)}\| \end{pmatrix} \le \begin{pmatrix} P & Q \\ 0 & |s| \end{pmatrix} \begin{pmatrix} \|e_x^{(k)}\| \\ \|e_y^{(k)}\| \end{pmatrix}. \qquad (9)$$
Since $\omega > 0$, and the inverse of the matrix $\begin{pmatrix} 1 & 0 \\ -\omega & 1 \end{pmatrix}$ is
$$\begin{pmatrix} 1 & 0 \\ \omega & 1 \end{pmatrix} \ge 0,$$
by Inequality (9), we have
$$\begin{pmatrix} \|e_x^{(k+1)}\| \\ \|e_y^{(k+1)}\| \end{pmatrix} \le \begin{pmatrix} P & Q \\ \omega P & |s| + \omega Q \end{pmatrix} \begin{pmatrix} \|e_x^{(k)}\| \\ \|e_y^{(k)}\| \end{pmatrix} \le \begin{pmatrix} P & Q \\ \omega P & |s| + \omega Q \end{pmatrix}^{2} \begin{pmatrix} \|e_x^{(k-1)}\| \\ \|e_y^{(k-1)}\| \end{pmatrix} \le \cdots \le \begin{pmatrix} P & Q \\ \omega P & |s| + \omega Q \end{pmatrix}^{k+1} \begin{pmatrix} \|e_x^{(0)}\| \\ \|e_y^{(0)}\| \end{pmatrix}.$$
Let
$$T = \begin{pmatrix} P & Q \\ \omega P & |s| + \omega Q \end{pmatrix}.$$
Then the spectral radius $\rho(T) = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } T\} < 1$. In fact, assume that $\lambda$ is an eigenvalue of the matrix $T$; then
$$(\lambda - P)(\lambda - |s| - \omega Q) - \omega P Q = 0,$$
which is equivalent to
$$\lambda^{2} - (|s| + \omega Q + P)\lambda + P|s| = 0. \qquad (10)$$
By applying Lemma 1 to (10), $|\lambda| < 1$ is equivalent to
$$P|s| < 1, \qquad L < (1 - |s|)(1 - P).$$
Therefore, $\rho(T) < 1$; consequently,
$$\lim_{k \to +\infty} \|e_x^{(k)}\| = 0, \qquad \lim_{k \to +\infty} \|e_y^{(k)}\| = 0.$$
The proof is complete.    □
Theorem 1 tells us that the iteration sequence ( x ( k ) , y ( k ) ) produced by the GSIM can converge to the solution of (3) under some conditions. This also holds true for all the methods in Remark 1.
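Before running the GSIM, the sufficient condition of Theorem 1 can be checked numerically for a candidate splitting. A minimal Python/NumPy sketch (the function name is ours) is:

```python
import numpy as np

def gsim_condition_holds(M, N, B, omega):
    """Check the sufficient condition of Theorem 1 for the splitting A = M - N."""
    P = np.linalg.norm(np.linalg.solve(M, N), 2)   # P = ||M^{-1} N||
    Q = np.linalg.norm(np.linalg.solve(M, B), 2)   # Q = ||M^{-1} B||
    s = 1.0 - omega
    return P * abs(s) < 1.0 and omega * Q < (1.0 - abs(s)) * (1.0 - P)
```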

2.2. Modified Picard-like Iteration Method

In Section 2.1, if, instead of a single splitting, we add further inner iteration steps of the form
$$x^{(k+1)} = M_i^{-1} N_i x^{(k)} + M_i^{-1} B y^{(k)} + M_i^{-1} b, \qquad i = 1, 2, 3, \ldots,$$
to the $x$-update in (5), then we obtain the MPIM (see Algorithm 3).
 Remark 2. 
The MPIM unifies some existing Picard-like methods for the AVE. For example, if $M_1 = \gamma^{-1}D - L$ and $M_2 = \gamma^{-1}D - U$, where $\gamma$ is a positive constant and $\omega = 1$, then this method reduces to the Picard-SSOR method in [28]; if $M_1 = \gamma^{-1}(D - \mu L)$ and $M_2 = \gamma^{-1}(D - \mu U)$, where $\gamma$ and $\mu$ are positive constants and $\omega = 1$, then this method reduces to the Picard-SAOR method in [28]; if $M_1 = \gamma I + H$ and $M_2 = \gamma I + S$, where $H = \frac{1}{2}(A + A^{H})$, $S = \frac{1}{2}(A - A^{H})$, $\gamma$ is a positive constant, and $\omega = 1$, then this method reduces to the Picard-HSS method in [25]; if $M_1 = \gamma I + H$ and $M_2 = \gamma I + S$, where $H = \frac{1}{2}(A + A^{H})$, $S = \frac{1}{2}(A - A^{H})$, $\gamma$ is a positive constant, and $\omega \in (0, 2)$, then this method reduces to the Picard-HSS-SOR method in [26]. Obviously, by choosing other splittings of the matrix $A$, more methods can also be extracted.
Now, we consider the convergence of the MPIM. In fact, similar to the computation in [28], the iteration (20) in Step 2 of Algorithm 3 can be rewritten as
$$x^{(k+1)} = \Theta_p^{t_k} x^{(k)} + (I - \Theta_p^{t_k}) A^{-1} B y^{(k)} + (I - \Theta_p^{t_k}) A^{-1} b,$$
where $\Theta_p = \prod_{l=1}^{p} M_l^{-1} N_l$. Thus, the corresponding iteration matrix of the MPIM is
$$\hat{M}^{-1}\hat{N} = \begin{pmatrix} \Theta_p^{t_k} & (I - \Theta_p^{t_k}) A^{-1} B \\ \omega D \Theta_p^{t_k} & \omega D (I - \Theta_p^{t_k}) A^{-1} B + (1 - \omega) I \end{pmatrix}.$$
We use the notation presented in Section 2.1 and set $P = \|\Theta_p^{t_k}\|$, $Q = \|(I - \Theta_p^{t_k}) A^{-1} B\|$, $s = 1 - \omega$, and $L = \omega Q$. Obviously, as with the GSIM, the MPIM is convergent if
$$P|s| = \|\Theta_p^{t_k}\|\,|1 - \omega| < 1, \qquad L = \omega Q < (1 - |1 - \omega|)(1 - \|\Theta_p^{t_k}\|) = (1 - |s|)(1 - P). \qquad (14)$$
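Condition (14) can also be verified numerically for a concrete choice of splittings, $t_k$, and $\omega$. A rough Python/NumPy sketch (helper names are ours) is:

```python
import numpy as np

def mpim_condition_holds(A, B, splittings, t_k, omega):
    """Check condition (14) for the MPIM with Theta_p = prod_{l=1}^p M_l^{-1} N_l."""
    n = A.shape[0]
    Theta_p = np.eye(n)
    for M_l, N_l in splittings:
        Theta_p = np.linalg.solve(M_l, N_l) @ Theta_p
    Theta_pt = np.linalg.matrix_power(Theta_p, t_k)
    P = np.linalg.norm(Theta_pt, 2)                                        # ||Theta_p^{t_k}||
    Q = np.linalg.norm((np.eye(n) - Theta_pt) @ np.linalg.solve(A, B), 2)  # ||(I - Theta_p^{t_k}) A^{-1} B||
    s = 1.0 - omega
    return P * abs(s) < 1.0 and omega * Q < (1.0 - abs(s)) * (1.0 - P)
```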

2.3. Range of ω

As stated in [28], the PIM may not be convergent if the matrices $A$ and $B$ do not satisfy
$$n\,\sigma_{\max}(B) < \sigma_{\min}(A). \qquad (15)$$
Next, we analyze in detail how to select $\omega$ such that the MPIM converges even when (15) does not hold.
 Theorem 2. 
Suppose that
$$0 < \|\Theta_p^{t_k}\| < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|}; \qquad (16)$$
if $\omega \in (0, 1)$, then (14) holds, i.e., the MPIM is convergent.
 Proof. 
Note that
$$\|\Theta_p^{t_k}\| < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|}$$
implies
$$(1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| < 1 - \|\Theta_p^{t_k}\|,$$
and
$$\|(I - \Theta_p^{t_k}) A^{-1} B\| \le \|I - \Theta_p^{t_k}\|\,\|A^{-1}B\| \le (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|.$$
Hence,
$$Q = \|(I - \Theta_p^{t_k}) A^{-1} B\| < 1 - \|\Theta_p^{t_k}\|.$$
This means
$$L = \omega Q \le |\omega| Q < (1 - |1 - \omega|)(1 - \|\Theta_p^{t_k}\|) = (1 - |s|)(1 - P),$$
since $\omega \in (0, 1)$. Furthermore, because
$$-\frac{2\|A^{-1}B\|}{1 - \|A^{-1}B\|} < 0 < \omega,$$
we obtain
$$\frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} < \frac{1}{1 - \omega},$$
i.e.,
$$\|\Theta_p^{t_k}\| < \frac{1}{1 - \omega};$$
hence,
$$P|s| < 1.$$
Therefore, for $\omega \in (0, 1)$, (16) implies (14). The proof is complete.    □
 Theorem 3. 
Suppose that
$$0 < \|\Theta_p^{t_k}\| < \frac{2 - \omega - \omega\|A^{-1}B\|}{2 - \omega + \omega\|A^{-1}B\|}; \qquad (17)$$
if $\omega \in \left[1, \dfrac{2}{1 + \|A^{-1}B\|}\right)$, then (14) holds, i.e., the MPIM is convergent.
 Proof. 
Note that (17) is equivalent to
$$\omega (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| < 2 - \omega + (\omega - 2)\|\Theta_p^{t_k}\|,$$
and
$$\|(I - \Theta_p^{t_k}) A^{-1} B\| \le \|I - \Theta_p^{t_k}\|\,\|A^{-1}B\| \le (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|;$$
hence,
$$\omega Q = |\omega| Q \le \omega (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| < 2 - \omega + (\omega - 2)\|\Theta_p^{t_k}\|.$$
This means
$$L = \omega Q < (1 - |1 - \omega|)(1 - \|\Theta_p^{t_k}\|) = (1 - |s|)(1 - P).$$
Furthermore, since $\omega < \frac{2}{1 + \|A^{-1}B\|} < 2$, we obtain $\frac{1}{\omega - 1} > 1$; hence,
$$\|\Theta_p^{t_k}\| < \frac{2 - \omega - \omega\|A^{-1}B\|}{2 - \omega + \omega\|A^{-1}B\|} < 1 < \frac{1}{\omega - 1},$$
which implies
$$P|s| < 1.$$
Therefore, for $\omega \in \left[1, \dfrac{2}{1 + \|A^{-1}B\|}\right)$, (17) implies (14). The proof is complete.    □
Theorems 2 and 3 tell us that if we split the matrix $A$ such that (16) or (17) holds, then we can choose some $\omega \in \left(0, \frac{2}{1 + \|A^{-1}B\|}\right)$ such that the corresponding vector sequence generated by (20) in the MPIM is convergent.

2.4. Quasi-Optimal Value of ω

It can be seen from Section 2.3 that for a given $\omega \in \left(0, \frac{2}{1 + \|A^{-1}B\|}\right)$, we can always adopt a specific splitting to make the algorithm converge. Next, we consider how to select $\omega$ to make the algorithm converge faster when the splitting of the matrix has already been given. For the MPIM, the iteration matrix satisfies
$$\hat{M}^{-1}\hat{N} = \begin{pmatrix} \Theta_p^{t_k} & (I - \Theta_p^{t_k}) A^{-1} B \\ \omega D \Theta_p^{t_k} & \omega D (I - \Theta_p^{t_k}) A^{-1} B + (1 - \omega) I \end{pmatrix}
= \begin{pmatrix} I & 0 \\ \omega D & I \end{pmatrix} \begin{pmatrix} \Theta_p^{t_k} & (I - \Theta_p^{t_k}) A^{-1} B \\ 0 & (1 - \omega) I \end{pmatrix}
= \left[ \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ \omega D & 0 \end{pmatrix} \right] \left[ \begin{pmatrix} \Theta_p^{t_k} & 0 \\ 0 & (1 - \omega) I \end{pmatrix} + \begin{pmatrix} 0 & (I - \Theta_p^{t_k}) A^{-1} B \\ 0 & 0 \end{pmatrix} \right].$$
Then,
$$\|\hat{M}^{-1}\hat{N}\| \le (1 + \omega) \left\| \begin{pmatrix} \Theta_p^{t_k} & 0 \\ 0 & (1 - \omega) I \end{pmatrix} + \begin{pmatrix} 0 & (I - \Theta_p^{t_k}) A^{-1} B \\ 0 & 0 \end{pmatrix} \right\|
\le (1 + \omega) \left[ \max\left\{ |1 - \omega|,\, \|\Theta_p^{t_k}\| \right\} + \|(I - \Theta_p^{t_k}) A^{-1} B\| \right]
\le (1 + \omega) \left[ \max\left\{ |1 - \omega|,\, \|\Theta_p^{t_k}\| \right\} + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| \right].$$
We denote
$$f(\omega, \|\Theta_p^{t_k}\|) = (1 + \omega) \left[ \max\left\{ |1 - \omega|,\, \|\Theta_p^{t_k}\| \right\} + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| \right]. \qquad (18)$$
Then,
$$\|\hat{M}^{-1}\hat{N}\| \le f(\omega, \|\Theta_p^{t_k}\|).$$
Apparently, both $\|\hat{M}^{-1}\hat{N}\|$ and $f(\omega, \|\Theta_p^{t_k}\|)$ involve $\omega$ and $\|\Theta_p^{t_k}\|$. It is well known that the smaller $\|\hat{M}^{-1}\hat{N}\|$ is, the faster the MPIM converges. However, it is not easy to determine $\|\hat{M}^{-1}\hat{N}\|$ in general. So, we next discuss the minimum of $f(\omega, \|\Theta_p^{t_k}\|)$ instead of $\|\hat{M}^{-1}\hat{N}\|$ to provide some choices for $\omega$.
 Theorem 4. 
Suppose that $\|\Theta_p^{t_k}\|$ is given with
$$\|\Theta_p^{t_k}\| < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \quad \text{and} \quad 0 < \omega \le 1 - \|\Theta_p^{t_k}\|;$$
then,
$$\min_{\omega \in (0, \omega_1]} f(\omega, \|\Theta_p^{t_k}\|) = f(\omega_1, \|\Theta_p^{t_k}\|),$$
where $\omega_1 = 1 - \|\Theta_p^{t_k}\|$.
Proof. 
Based on (18), when
$$0 < \omega \le 1 - \|\Theta_p^{t_k}\|,$$
we have $|1 - \omega| = 1 - \omega \ge \|\Theta_p^{t_k}\|$, and we obtain
$$f(\omega, \|\Theta_p^{t_k}\|) = -\omega^{2} + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|\,\omega + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| + 1.$$
Then, the axis of symmetry of the parabola is
$$\omega = \frac{(1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|}{2}.$$
We notice that
$$\|\Theta_p^{t_k}\| < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|}.$$
This leads to
$$1 - \|\Theta_p^{t_k}\| - \frac{(1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|}{2} > \frac{(1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|}{2} \ge 0,$$
i.e., $\omega_1 = 1 - \|\Theta_p^{t_k}\|$ lies farther from the axis of symmetry $\frac{(1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|}{2}$ than $0$ does. Since the parabola opens downward, we obtain
$$\min_{\omega \in (0, \omega_1]} f(\omega, \|\Theta_p^{t_k}\|) = f(\omega_1, \|\Theta_p^{t_k}\|).$$
The proof is complete.    □
 Theorem 5. 
Suppose that $\|\Theta_p^{t_k}\|$ is given, $\omega \ge 1$, and (17) holds.
(I) If
$$\|\Theta_p^{t_k}\| \in \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \right),$$
then
$$\omega < \omega_2 \quad \text{and} \quad \min_{\omega \in [1, \omega_2)} f(\omega, \|\Theta_p^{t_k}\|) = f(1, \|\Theta_p^{t_k}\|),$$
where $\omega_2 = 1 + \|\Theta_p^{t_k}\|$.
(II) If
$$\|\Theta_p^{t_k}\| \in \left( 0,\ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}} \right),$$
then
$$\omega_2 \le \omega < \omega_3 \quad \text{and} \quad \min_{\omega \in [\omega_2, \omega_3)} f(\omega, \|\Theta_p^{t_k}\|) = f(\omega_2, \|\Theta_p^{t_k}\|),$$
where $\omega_3 = \dfrac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1}$.
Proof. 
From (17), we obtain
$$\omega < \frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1}.$$
Since $\omega \ge 1$, we have
$$\frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1} > 1,$$
i.e.,
$$\|\Theta_p^{t_k}\| \in \left( 0,\ \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \right).$$
(I) Since
$$\frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}} < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} < 1 < \frac{1 + \sqrt{\|A^{-1}B\|}}{1 - \sqrt{\|A^{-1}B\|}},$$
then
$$\|\Theta_p^{t_k}\| \in \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \right) \subseteq \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 + \sqrt{\|A^{-1}B\|}}{1 - \sqrt{\|A^{-1}B\|}} \right).$$
Note that
$$\|\Theta_p^{t_k}\| \in \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 + \sqrt{\|A^{-1}B\|}}{1 - \sqrt{\|A^{-1}B\|}} \right)$$
is equivalent to
$$(1 - \|A^{-1}B\|)\,\|\Theta_p^{t_k}\|^{2} - (2\|A^{-1}B\| + 2)\,\|\Theta_p^{t_k}\| + 1 - \|A^{-1}B\| \le 0,$$
which means
$$\frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1} - 1 \le \|\Theta_p^{t_k}\|,$$
i.e., $\omega < 1 + \|\Theta_p^{t_k}\|$. So,
$$f(\omega, \|\Theta_p^{t_k}\|) = (1 + \omega)\left( \|\Theta_p^{t_k}\| + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| \right),$$
which is increasing in $\omega$; then,
$$\min_{\omega \in [1, \omega_2)} f(\omega, \|\Theta_p^{t_k}\|) = f(1, \|\Theta_p^{t_k}\|).$$
(II) From (I), we know that when
$$\|\Theta_p^{t_k}\| \in \left( 0,\ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}} \right),$$
we have $\omega \ge 1 + \|\Theta_p^{t_k}\|$, i.e.,
$$\omega \in \left[ 1 + \|\Theta_p^{t_k}\|,\ \frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1} \right).$$
Thus,
$$f(\omega, \|\Theta_p^{t_k}\|) = (1 + \omega)\left( \omega - 1 + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| \right) = \omega^{2} + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\|\,\omega + (1 + \|\Theta_p^{t_k}\|)\,\|A^{-1}B\| - 1.$$
Similar to the proof of Theorem 4, we obtain
$$\min_{\omega \in [\omega_2, \omega_3)} f(\omega, \|\Theta_p^{t_k}\|) = f(\omega_2, \|\Theta_p^{t_k}\|).$$
The proof is complete.    □
The proof is complete.    □
It can be seen that, under the condition of Theorem 4, we can choose an appropriate parameter $\omega$ to accelerate the convergence of the algorithm, even though the PIM does not converge, as shown in Section 3.3. Based on Theorem 5 (I), it can be seen that when
$$\|\Theta_p^{t_k}\| \in \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \right),$$
$f(\omega, \|\Theta_p^{t_k}\|)$ becomes a linear function of $\omega$, and the rate of convergence is almost completely determined by the value of $\|\Theta_p^{t_k}\|$. This means that, under this condition, $\|\Theta_p^{t_k}\|$ plays a more important role than $\omega$, and changing the value of $\omega$ will not cause great changes in the convergence, as shown in Section 3.3. Based on Theorem 5 (II), it can also be seen that when
$$\|\Theta_p^{t_k}\| \in \left( 0,\ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}} \right),$$
we can choose an appropriate parameter $\omega$ to accelerate the convergence of the algorithm, as shown in Section 3.3.
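The selection logic of Theorems 4 and 5 can be wrapped into a small helper that compares the candidate parameters through the bound $f$. The following Python sketch (function names are ours) assumes the scalars $\theta = \|\Theta_p^{t_k}\|$ and $c = \|A^{-1}B\|$ are available and that the hypotheses of the corresponding theorem hold for the returned candidate:

```python
def f_bound(omega, theta, c):
    """Upper bound f(omega, ||Theta_p^{t_k}||) of (18); theta = ||Theta_p^{t_k}||, c = ||A^{-1}B||."""
    return (1.0 + omega) * (max(abs(1.0 - omega), theta) + (1.0 + theta) * c)

def quasi_optimal_omega(theta, c):
    """Pick a quasi-optimal omega among the candidates suggested by Theorems 4 and 5."""
    omega_1 = 1.0 - theta          # Theorem 4: best value on (0, omega_1]
    omega_2 = 1.0 + theta          # Theorem 5 (II): best value on [omega_2, omega_3)
    # omega = 1 corresponds to Theorem 5 (I); keep only candidates in (0, 2/(1+c))
    candidates = [w for w in (omega_1, 1.0, omega_2) if 0.0 < w < 2.0 / (1.0 + c)]
    return min(candidates, key=lambda w: f_bound(w, theta, c))
```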

3. Numerical Examples

In this section, we demonstrate the performance of Algorithms 1–3 with numerical examples.
The numerical experiments were performed using MATLAB on an Intel(R) Pentium(R) CPU N3700 (1.60 GHz, 4 GB RAM). The zero vector $x^{(0)} = 0$ was used as the initial guess. The iterations were terminated when the current iterate satisfied
$$\mathrm{Re}(k) \le 10^{-8},$$
where $\mathrm{Re}(k) = \|b - A x^{(k)} + B|x^{(k)}|\|$ is the residual at the $k$-th iterate of the PIM, GSIM, and MPIM algorithms. For all mesh sizes $m$ and $n$, we set $p = 2$, and the maximum number of iterations was 500 for all the methods. Moreover, we took the vector $b$ such that the exact solution is $x^{*} = \left(-\frac{n}{2} + 1 : \frac{n}{2}\right)^{T}$ (in MATLAB colon notation). For the matrices $A$ and $B$, we took
$$A = T_x \otimes I_m + I_m \otimes T_y + 200 I_n, \qquad B = I,$$
in the GSIM algorithm, with tridiagonal matrices
$$T_x = \mathrm{Tridiagonal}(1, 4, 1)_{l \times l}$$
and
$$T_y = \mathrm{Tridiagonal}(1, 0, 1)_{l \times l},$$
where $m$ and $n$ are positive integers, $\otimes$ is the Kronecker product, and $l = 10$. In addition, we took
$$A = T_x \otimes I_m + I_m \otimes T_y + 320 I_n \quad \text{and} \quad B = 50 i I_n + 2 I_m \otimes T_y$$
in the PIM and MPIM algorithms to compare the efficiency of these methods, where $m = 50$, $l = 20$, $n = 1000$, and $i = 1, 2, 3, 4, 5, 6$.
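For reference, these test matrices can be assembled with Kronecker products as in the following Python/NumPy sketch. The tridiagonal entry signs follow the text as printed, and the variable names are ours:

```python
import numpy as np

def tridiag(sub, diag, sup, size):
    """Dense tridiagonal matrix with constant sub-, main- and super-diagonal entries."""
    return (np.diag(np.full(size - 1, sub), -1)
            + np.diag(np.full(size, diag), 0)
            + np.diag(np.full(size - 1, sup), 1))

l, m = 20, 50
n = l * m                                   # n = 1000 in the PIM/MPIM experiments
i = 1                                       # i = 1, 2, ..., 6 in the experiments
Tx = tridiag(1, 4, 1, l)                    # T_x, entries as printed in the text
Ty = tridiag(1, 0, 1, l)                    # T_y, entries as printed in the text
A = np.kron(Tx, np.eye(m)) + np.kron(np.eye(m), Ty) + 320 * np.eye(n)
B = 50 * i * np.eye(n) + 2 * np.kron(np.eye(m), Ty)
```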
Algorithm 3: Modified Picard-Like Iteration Method (MPIM).
Suppose that $x^{(0)}, y^{(0)} \in \mathbb{R}^{n}$ is an initial guess for the solution of (3), $\epsilon$ is the target accuracy, $\omega > 0$, and $A = M_i - N_i$ ($i = 1, 2, \ldots, p$) is a splitting of the matrix $A$ such that $M_i^{-1}$ exists. We compute $x^{(k+1)} \in \mathbb{R}^{n}$ by using the following iteration:
1. Input $x^{(0)}$.
2. Solve the following equations to obtain $x^{(k, t_k)}$, $k = 0, 1, 2, \ldots$, $t = 0, 1, \ldots, t_k - 1$:
$$\begin{aligned}
x^{(k,\, t + \frac{1}{p})} &= M_1^{-1} N_1 x^{(k,\, t)} + M_1^{-1} B y^{(k)} + M_1^{-1} b,\\
x^{(k,\, t + \frac{2}{p})} &= M_2^{-1} N_2 x^{(k,\, t + \frac{1}{p})} + M_2^{-1} B y^{(k)} + M_2^{-1} b,\\
x^{(k,\, t + \frac{3}{p})} &= M_3^{-1} N_3 x^{(k,\, t + \frac{2}{p})} + M_3^{-1} B y^{(k)} + M_3^{-1} b,\\
&\ \,\vdots\\
x^{(k,\, t + 1)} &= M_p^{-1} N_p x^{(k,\, t + 1 - \frac{1}{p})} + M_p^{-1} B y^{(k)} + M_p^{-1} b.
\end{aligned} \qquad (20)$$
3. Set $x^{(k+1)} = x^{(k, t_k)}$.
4. Set $y^{(k+1)} = \omega |x^{(k+1)}| + (1 - \omega) y^{(k)}$.
5. End the iteration when $\|b - A x^{(k+1)} + B|x^{(k+1)}|\| \le \epsilon$.
6. Output $x^{(k+1)}$.
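A compact Python/NumPy sketch of Algorithm 3 (again with an interface of our choosing) differs from the PIM sketch above only in that the inner sweeps use $y^{(k)}$ instead of $|x^{(k)}|$ and in the relaxed $y$-update with the parameter $\omega$; as in the GSIM sketch, starting from $y^{(0)} = |x^{(0)}|$ is our assumption.

```python
import numpy as np

def modified_picard_like(A, B, b, splittings, omega, t_k=1, eps=1e-8, max_iter=500):
    """Sketch of Algorithm 3 (MPIM); `splittings` is a list of (M_i, N_i) with A = M_i - N_i."""
    x = np.zeros_like(b, dtype=float)
    y = np.abs(x)                              # assumed initial y^{(0)}
    for _ in range(max_iter):
        z = x.copy()
        for _ in range(t_k):                   # inner sweeps with y^{(k)} held fixed
            for M_i, N_i in splittings:
                z = np.linalg.solve(M_i, N_i @ z + B @ y + b)
        x = z                                  # x^{(k+1)} = x^{(k, t_k)}
        y = omega * np.abs(x) + (1.0 - omega) * y
        if np.linalg.norm(b - A @ x + B @ np.abs(x)) <= eps:
            break
    return x
```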

3.1. An Application

We solved the following ordinary differential equation:
$$\frac{d^{2}x}{dt^{2}} - |x| = (1 - t^{2}), \qquad 0 \le t \le 1, \qquad x(0) = 1, \quad x(1) = 0,$$
to show that our method is feasible. Similarly to Example 3.1 in [5], we discretize the above equation using the finite difference method to obtain (2), where $A$ of size $10$ is given by
$$a_{i,j} = \begin{cases} 242, & j = i,\\ 121, & j = i + 1,\ i = 1, 2, \ldots, 9,\ \text{or}\ j = i - 1,\ i = 2, 3, \ldots, 10,\\ 0, & \text{otherwise}, \end{cases}$$
and the exact solution is
$$x(t) = 1.4621171757\, e^{-t} - 0.5378828428\, e^{t} + 1 + t^{2}, \qquad t > 0.$$
We compared the SOR algorithm with $\omega = 0.5$, the M-SOR with $\omega = 0.5$, and the MPIM with $M_1 = 2A$, $M_2 = 1.2A$, and $\omega = 1.1$ to solve the above AVE problem, which required 72, 2156, and 18 iterations, respectively; see Figure 1.

3.2. General Splitting Iteration Method

We solved (2), where $A$ and $B$ are as mentioned above, by Algorithms 2 and 3 with the following special methods: the SOR-like method (SOR), the Modified SOR-like method (M-SOR), and the Modified SSOR-like method (M-SSOR) with $\gamma = \omega = 0.5$; the Modified Picard-SSOR (MP-SSOR) with $\omega = 1.05$ and $\gamma = 0.5$; and the MGGS with $\beta = 0.5$, $\theta = 0.2$, and $\lambda = 0.8$; see Remarks 1 and 2.
From Table 1, we can see that when $B = I$, the M-SSOR and MP-SSOR proposed in this paper were better than the previous algorithms [21,23,24]. Note that for all the iterative methods, we did not choose the best parameters $\omega$ to further optimize the algorithms (in fact, the splitting parameter of $A$ can be chosen differently from $\omega$ to speed up the algorithm, as discussed in Remark 2).

3.3. Modified Picard-like Iteration Method

Moreover, we solved (1) by the MPIM and the PIM with the special splitting $M_3 = 0.5D - L$, $M_4 = 1.5D - U$ (recorded as splitting 1). Since
$$\|\Theta_p^{t_k}\| < \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|},$$
based on Theorem 4, we chose $\omega = \omega_1 = 1 - \|\Theta_p^{t_k}\| \approx 0.66$. Some numerical results, such as the number of iterations, the parameter $\omega$, the CPU time needed to run the iterative methods, and the residual error $\mathrm{Re}(k)$, are reported in Table 2.
From Table 2, we can see that with the increase in $i$, i.e., the increase in the maximum singular value of $B$, the Picard method gradually tended to be non-convergent for the given splitting. However, when we took $\omega = \omega_1 \approx 0.66$, the algorithm was always convergent. This is consistent with the analysis of Theorem 4. When $B = 50 I_{1000} + 2 I_{50} \otimes T_{20}$, the spectral radius of the iteration matrix $\hat{M}^{-1}\hat{N}$ is recorded as $\rho(\hat{M}^{-1}\hat{N})$ in Figure 2, where
$$\omega_3 = \frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1} \approx 1.49.$$
In addition, we changed the splitting of $A$. For the PIM and the MPIM, we let $M_5 = \gamma^{-1}D - L$ and $M_6 = \gamma^{-1}D - U$, where $\gamma = 1.5$. Since
$$\|\Theta_p^{t_k}\| \in \left( 0,\ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}} \right),$$
based on Theorem 5 (II), we chose $\omega = \omega_2 = 1 + \|\Theta_p^{t_k}\| \approx 1.28$, where
$$B = 50 \times 1 \times I_{1000} + 2 I_{50} \otimes T_{20}.$$
It can be seen from Table 3 that when the splitting was modified, the MPIM was still better than the PIM. We selected $B = 50 I_{1000} + 2 I_{50} \otimes T_{20}$ to give the spectral radius under different $\omega$ values, as shown in Figure 3, where
$$\omega_3 = \frac{2 - 2\|\Theta_p^{t_k}\|}{\|A^{-1}B\|\,\|\Theta_p^{t_k}\| - \|\Theta_p^{t_k}\| + \|A^{-1}B\| + 1} \approx 1.54.$$
Now, for the MPIM and the PIM, we chose $M_7 = \gamma^{-1}D - L$, $M_8 = \gamma^{-1}D - U$, and $t_k = 2$, where $\gamma = 1.8$ (recorded as splitting 3). We selected $B = 50 I_{1000} + 2 I_{50} \otimes T_{20}$ to give the spectral radius under different $\omega$ values, as shown in Figure 4. Since
$$\|\Theta_p^{t_k}\| \in \left[ \frac{1 - \sqrt{\|A^{-1}B\|}}{1 + \sqrt{\|A^{-1}B\|}},\ \frac{1 - \|A^{-1}B\|}{1 + \|A^{-1}B\|} \right),$$
based on Theorem 5 (I), we chose $\omega = 1$. We can then see that the difference between the spectral radius at $\omega = 1$ and that at the optimal value $\omega_{\mathrm{opt}}$ was very small.
We let $A = 400 I_{1000} + T_{20} \otimes I_{50} + I_{50} \otimes T_{20}$ and $B = 380 I_{1000} + 2 I_{50} \otimes T_{20}$, and used the PIM algorithm with $M_9 = 0.5D - L$, $M_{10} = 1.5D - U$, and $t_k = 2$; the MPIM algorithm with $M_9 = 0.5D - L$, $M_{10} = 1.5D - U$, $t_k = 2$, and $\omega = 0.6$; the SOR algorithm with $\omega = 0.6$; the M-SSOR algorithm with $\gamma = \omega = 0.6$; and the M-SOR algorithm with $\gamma = \omega = 0.6$ to solve AVE (1), as shown in Figure 5.
The proposed MPIM and Modified SSOR algorithms were significantly better than the methods in the related literature [21,23,24,28].

4. Conclusions

In this paper, we solved AVE (1) by equivalently expressing the implicit fixed-point equation form of the absolute value equations as a two-by-two block nonlinear equation and then proposed an MPIM method for solving it. We also proved the convergence of the MPIM method under suitable choices of the involved parameters and presented the choice of the quasi-optimal parameters. Finally, we performed some numerical experiments to demonstrate that the MPIM method was feasible and effective. However, the convergence analysis and the choice of the optimal parameters of the whole iteration method remain unresolved, which need to be considered in the future.

Author Contributions

Writing—original draft, Y.L.; Writing—review & editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

Yunnan Provincial Ten Thousands Plan Young Top Talents.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426.
2. Mangasarian, O.L. Absolute value equation solution via concave minimization. Optim. Lett. 2007, 1, 3–8.
3. Mangasarian, O.L. A hybrid algorithm for solving the absolute value equation. Optim. Lett. 2015, 9, 1469–1474.
4. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367.
5. Noor, M.A.; Iqbal, J.; Al-Said, E. Residual iterative method for solving absolute value equations. Abstr. Appl. Anal. 2012, 2012, 406232.
6. Noor, M.A.; Iqbal, J.; Noor, K.I.; Al-Said, E. On an iterative method for solving absolute value equations. Optim. Lett. 2012, 6, 1027–1033.
7. Noor, M.A.; Iqbal, J.; Noor, K.I.; Al-Said, E. Generalized AOR method for solving absolute complementarity problems. J. Appl. Math. 2012, 2012, 743861.
8. Ketabchi, S.; Moosaei, H. An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side. Comput. Math. Appl. 2012, 64, 1882–1885.
9. Ketabchi, S.; Moosaei, H. Minimum norm solution to the absolute value equation in the convex case. Optim. Theory Appl. 2012, 154, 1080–1087.
10. Ketabchi, S.; Moosaei, H.; Fallhi, S. Optimal error correction of the absolute value equations using a genetic algorithm. Comput. Math. Model. 2013, 57, 2339–2342.
11. Moosaei, H.; Ketabchi, S.; Noor, M.A.; Iqbal, J.; Hooshyarbakhsh, V. Some techniques for solving absolute value equations. Appl. Math. Comput. 2015, 268, 696–705.
12. Cottle, R.W.; Pang, J.S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: New York, NY, USA, 1992.
13. Rohn, J. On unique solvability of the absolute value equation. Optim. Lett. 2009, 3, 603–606.
14. Wu, S.L.; Guo, P. On the unique solvability of the absolute value equation. J. Optim. Theory Appl. 2016, 169, 705–712.
15. Wu, S.L.; Li, C.X. The unique solution of the absolute value equations. Appl. Math. Lett. 2018, 76, 195–200.
16. Li, C.X.; Wu, S.L. A note on unique solvability of the absolute value equation. Optim. Lett. 2020, 14, 1957–1960.
17. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108.
18. Lian, Y.Y.; Li, C.X.; Wu, S.L. Weaker convergent results of the generalized Newton method for the generalized absolute value equations. Comput. Appl. Math. 2018, 338, 221–226.
19. Wang, A.; Cao, Y.; Chen, J.X. Modified Newton-type iteration methods for generalized absolute value equations. Optim. Theory Appl. 2019, 181, 216–230.
20. Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. A generalization of the Gauss-Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 293, 156–167.
21. Ke, Y.F.; Ma, C.F. SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 311, 195–202.
22. Guo, P.; Wu, S.L.; Li, C.X. On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 2019, 97, 107–113.
23. Li, C.X.; Wu, S.L. Modified SOR-like iteration method for absolute value equations. Math. Probl. Eng. 2020, 2020, 9231639.
24. Ali, R.; Ahmad, A.; Ahmad, I.; Ali, A. The modification of the generalized Gauss-Seidel iteration techniques for absolute value equations. Comput. Algor. Numer. Dimen. 2022, 1, 130–136.
25. Salkuyeh, D.K. The Picard-HSS iteration method for absolute value equations. Optim. Lett. 2014, 8, 2191–2202.
26. Zheng, L. The Picard-HSS-SOR iteration method for absolute value equations. J. Inequal. Appl. 2020, 2020, 258.
27. Lv, C.Q.; Ma, C.F. Picard splitting method and Picard CG method for solving the absolute value equation. Nonlinear Sci. Appl. 2017, 10, 3643–3654.
28. Dehghan, M.; Shirilord, A. Matrix multisplitting Picard-iterative method for solving generalized absolute value matrix equation. Appl. Numer. Math. 2020, 158, 425–438.
29. Ali, R.; Khan, I.; Ali, M.A.A. Two new generalized iteration methods for solving absolute value equations using M-matrix. AIMS Math. 2022, 7, 8176.
30. Yu, D.; Chen, C.; Yang, Y.; Han, D. An inertial inverse-free dynamical system for solving absolute value equations. J. Ind. Manag. Optim. 2023, 19, 2549–2559.
31. Hu, S.L.; Huang, Z.H.; Zhang, Q. A generalized Newton method for absolute value equations associated with second order cones. Comput. Appl. Math. 2011, 235, 1490–1501.
32. Iqbal, J.; Arif, M. Symmetric SOR method for absolute complementarity problems. Appl. Math. 2013, 2013, 172060.
33. Li, C.X. A modified generalized Newton method for absolute value equations. Optim. Theory Appl. 2016, 170, 1055–1059.
34. Wang, H.; Liu, H.; Cao, S. A verification method for enclosing solutions of absolute value equations. Collect. Math. 2013, 64, 17–18.
35. Cruz, J.Y.B.; Ferreira, O.P.; Prudente, L.F. On the global convergence of the inexact semi-smooth Newton method for absolute value equation. Comput. Optim. Appl. 2016, 65, 93–108.
Figure 1. Number of iterations.
Figure 2. The spectral radius of $\hat{M}^{-1}\hat{N}$ for $M_i$, $i = 3, 4$.
Figure 3. The spectral radius of $\hat{M}^{-1}\hat{N}$ for $M_i$, $i = 5, 6$.
Figure 4. The spectral radius of $\hat{M}^{-1}\hat{N}$ for $M_i$, $i = 7, 8$.
Figure 5. The rate of convergence.
Table 1. Comparison of the iteration number and the CPU time in (2).

| n | 500 | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 |
|---|---|---|---|---|---|---|---|
| SOR | | | | | | | |
| Iteration | 46 | 47 | 48 | 49 | 49 | 50 | 50 |
| CPU Time (s) | 0.022361 | 0.085864 | 0.314504 | 0.825014 | 1.347516 | 2.122693 | 2.866091 |
| RE(k) | 7.28 × 10⁻⁹ | 7.62 × 10⁻⁹ | 8.02 × 10⁻⁹ | 6.37 × 10⁻⁹ | 8.46 × 10⁻⁹ | 6.43 × 10⁻⁹ | 7.07 × 10⁻⁹ |
| MGGS | | | | | | | |
| Iteration | 46 | 47 | 48 | 48 | 49 | 49 | 49 |
| CPU Time (s) | 0.022301 | 0.090165 | 0.304298 | 0.844811 | 1.342616 | 2.120393 | 2.796891 |
| RE(k) | 5.12 × 10⁻⁹ | 5.25 × 10⁻⁹ | 5.34 × 10⁻⁹ | 8.17 × 10⁻⁹ | 5.45 × 10⁻⁹ | 7.23 × 10⁻⁹ | 8.07 × 10⁻⁹ |
| M-SOR | | | | | | | |
| Iteration | 47 | 48 | 49 | 50 | 50 | 51 | 51 |
| CPU Time (s) | 0.022223 | 0.082626 | 0.304649 | 0.783727 | 1.381016 | 2.1367 | 2.979601 |
| RE(k) | 6.79 × 10⁻⁹ | 7.70 × 10⁻⁹ | 8.77 × 10⁻⁹ | 7.07 × 10⁻⁹ | 9.56 × 10⁻⁹ | 6.43 × 10⁻⁹ | 7.76 × 10⁻⁹ |
| M-SSOR | | | | | | | |
| Iteration | 39 | 40 | 41 | 41 | 42 | 42 | 42 |
| CPU Time (s) | 0.020607 | 0.072805 | 0.290216 | 0.634203 | 1.089738 | 1.692161 | 2.458237 |
| RE(k) | 5.75 × 10⁻⁹ | 5.85 × 10⁻⁹ | 5.94 × 10⁻⁹ | 8.92 × 10⁻⁹ | 6.14 × 10⁻⁹ | 7.82 × 10⁻⁹ | 9.61 × 10⁻⁹ |
| MP-SSOR | | | | | | | |
| t_k | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Iteration | 13 | 13 | 13 | 13 | 13 | 13 | 13 |
| CPU Time (s) | 0.01634 | 0.06855 | 0.232589 | 0.559356 | 0.993672 | 1.586937 | 2.206395 |
| RE(k) | 8.46 × 10⁻⁹ | 1.40 × 10⁻⁹ | 2.95 × 10⁻⁹ | 4.58 × 10⁻⁹ | 6.08 × 10⁻⁹ | 7.76 × 10⁻⁹ | 9.50 × 10⁻⁹ |
Table 2. Comparison of the iteration number and the CPU time for $M_i$, $i = 3, 4$.

| i | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| PIM | | | | | | |
| t_k | 1 | 1 | 1 | 1 | 1 | 1 |
| Iteration | 51 | 106 | 500 | 500 | 500 | 500 |
| CPU Time (s) | 0.245599 | 0.514312 | 2.329751 | 2.329751 | 2.329751 | 2.329751 |
| Re(k) | 5.9590 × 10⁻⁹ | 7.7489 × 10⁻⁹ | 4.0375 × 10⁻⁴ | NaN | NaN | NaN |
| MPIM | | | | | | |
| ω | 0.66 | 0.66 | 0.66 | 0.66 | 0.66 | 0.66 |
| t_k | 1 | 1 | 1 | 1 | 1 | 1 |
| Iteration | 36 | 46 | 61 | 90 | 155 | 483 |
| CPU Time (s) | 0.186891 | 0.226695 | 0.285497 | 0.433709 | 0.723879 | 2.240295 |
| Re(k) | 4.9986 × 10⁻⁹ | 5.1004 × 10⁻⁹ | 8.8476 × 10⁻⁹ | 7.1886 × 10⁻⁹ | 8.2655 × 10⁻⁹ | 9.9244 × 10⁻⁹ |
Table 3. Comparison of the iteration number and the CPU time for $M_i$, $i = 5, 6$.

| i | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| PIM | | | | | | |
| t_k | 1 | 1 | 1 | 1 | 1 | 1 |
| Iteration | 31 | 42 | 58 | 87 | 153 | 500 |
| CPU Time (s) | 0.145542 | 0.186529 | 0.257494 | 0.396560 | 0.692243 | 2.372002 |
| Re(k) | 6.8540 × 10⁻⁹ | 6.5775 × 10⁻⁹ | 8.8039 × 10⁻⁹ | 8.8912 × 10⁻⁹ | 9.6479 × 10⁻⁹ | 1.0696 × 10⁻⁸ |
| MPIM | | | | | | |
| ω | 1.28 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| t_k | 1 | 1 | 1 | 1 | 1 | 1 |
| Iteration | 30 | 40 | 55 | 81 | 143 | 467 |
| CPU Time (s) | 0.137961 | 0.168187 | 0.230488 | 0.350071 | 0.613298 | 2.014428 |
| Re(k) | 5.9881 × 10⁻⁹ | 4.7221 × 10⁻⁹ | 5.9517 × 10⁻⁹ | 9.0949 × 10⁻⁹ | 8.6729 × 10⁻⁹ | 9.7934 × 10⁻⁹ |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
