Article

A Modified Spectral Conjugate Gradient Method for Absolute Value Equations Associated with Second-Order Cones

1 School of Business Administration, Liaoning Technical University, Huludao 125105, China
2 Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin 123000, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(6), 654; https://doi.org/10.3390/sym16060654
Submission received: 7 April 2024 / Revised: 9 May 2024 / Accepted: 13 May 2024 / Published: 25 May 2024
(This article belongs to the Section Mathematics)

Abstract: In this paper, we propose a modified spectral conjugate gradient (MSCG) method for solving absolute value equations associated with second-order cones (SOCAVEs). Some properties of the SOCAVEs are analyzed, and the global convergence of the MSCG method is discussed in depth. Numerical experiments are given to illustrate the effectiveness and competitiveness of our algorithm.

1. Introduction

In this paper, we focus on investigating the following absolute value equations associated with second-order cones (SOCAVEs)
$A x - |x| = b,$
where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$, and $|x| \in \mathcal{K}^{n}$. The second-order cone (SOC) is characterized by the following mathematical representation
$\mathcal{K}^{n} := \{ (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1} \mid \|x_2\| \le x_1 \}.$
If $n = 1$, let $\mathcal{K}^{n}$ be the set of non-negative reals $\mathbb{R}_{+}$. In addition, the general SOC $\mathcal{K}$ can be considered as the Cartesian product of individual SOCs, that is,
$\mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r},$
where $n_1 + \cdots + n_r = n$ and each $n_i$ ($i = 1, \ldots, r$) is a non-negative integer. Without loss of generality, we will concentrate on the scenario where $r = 1$, as all the analysis can be extended to the case of $r > 1$ based on the characteristics of the Cartesian product. For every $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, their Jordan product $\circ$ is defined as [1,2,3]
$x \circ y = \left( \langle x, y \rangle,\; y_1 x_2 + x_1 y_2 \right) \in \mathbb{R} \times \mathbb{R}^{n-1}.$
Based on this definition, the absolute value vector | x | in SOC K n can be calculated by
$|x| = \sqrt{x \circ x}.$
In the context of solving SOCAVEs (1), the definition of | x | is specified in (2). SOCAVEs (1) can be seen as a particular form of the generalized absolute value equations associated with SOCs (SOCGAVEs)
$C x + D|x| - c = 0$
with $C, D \in \mathbb{R}^{m \times n}$, $c \in \mathbb{R}^{m}$, and $|x| \in \mathcal{K}^{n}$. Actually, SOCGAVEs (3) was first introduced in [4] and then further explored in subsequent research conducted by [5,6,7,8], as well as the references therein. Furthermore, SOCAVEs (1) can be viewed as a generalization of the absolute value equations (AVEs)
$A x - |x| = b,$
where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$. Meanwhile, SOCGAVEs (3) can be seen as an extension of the generalized AVEs (GAVEs)
$C x + D|x| = c,$
where $C, D \in \mathbb{R}^{m \times n}$ and $c \in \mathbb{R}^{m}$. In AVEs (4) and GAVEs (5), $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^{T} \in \mathbb{R}^{n}$. It is worth noting that GAVEs (5) was initially introduced in [9] and subsequently explored in [10,11,12], along with the references included therein. Undoubtedly, AVEs (4) can be regarded as a particular case of GAVEs (5).
In recent years, AVEs (4) and GAVEs (5) have been extensively studied due to their significance in a variety of mathematical programming problems, including the linear complementarity problem (LCP), the bimatrix game, and others; for further details, refer to [10,11,13]. As a result, numerous theoretical findings and numerical algorithms have been developed for both AVEs (4) and GAVEs (5). In terms of theory, we elaborate on two aspects: equivalent reformulations of AVEs (4) and its solvability. As for equivalent reformulations, ref. [13] proved that AVEs (4) can be reformulated as an LCP when 1 is not a singular value of A, a framework that subsumes many optimization problems such as linear programs, quadratic programs, and bimatrix games. Ref. [11] further improved this equivalence, showing that AVEs (4) can be equivalently recast as an LCP without any assumption on the matrices involved, and the authors also provided a relationship with mixed integer programming. Ref. [14] established the equivalence between horizontal linear complementarity problems (HLCPs) and AVEs (4). Concerning the existence of solutions, sufficient conditions for a unique solution, multiple solutions, and no solution were presented in [13]. Ref. [15] presented two necessary and sufficient conditions for the unique solvability of AVEs (4), namely that the matrix $A - I + 2D$ or $A + I - 2D$ is nonsingular for any diagonal matrix $D = \mathrm{diag}(d_i)$ with $0 \le d_i \le 1$. Based on the equivalence between AVEs (4) and the HLCP, ref. [14] also gave a necessary and sufficient condition for the unique solvability of AVEs (4), i.e., that $\{A - I, A + I\}$ has the column $\mathcal{W}$-property. Under the condition that AVEs (4) has a unique solution, ref. [16] studied the error bound and condition number of AVEs (4), which play crucial roles in the convergence analysis of algorithms for AVEs (4). Ref. [12] analyzed topological properties of the solution set of AVEs (4), including convexity, boundedness, and whether it consists of finitely many solutions.
When considering numerical algorithms, ref. [10] proposed a generalized Newton method for solving AVEs (4). Based on the equivalent reformulation of AVEs (4) as a two-by-two block nonlinear system, ref. [17] developed the SOR-like iterative method and [18] analyzed the selection of the optimal parameter for this algorithm. By reformulating AVEs (4) into a new nonlinear system, ref. [19] put forward an alternative SOR-like method to solve AVEs (4). Subsequently, many researchers, such as [20,21,22], presented further iterative methods by applying the idea of [17]. Furthermore, AVEs (4) can be seen as a class of special nonlinear equations; based on this observation, ref. [23] proposed a modified multivariate spectral gradient algorithm for AVEs (4) and [24] developed a new three-term spectral subgradient method for solving AVEs (4).
Our interest in SOCAVEs (1) and SOCGAVEs (3) is motivated by their status as extensions of the conventional forms, as well as their equivalence to LCPs associated with SOCs (SOCLCPs). These problems have wide-ranging applications in engineering, control systems, and finance, as evidenced by studies such as [4,6,7]. Notably, recent research efforts have yielded significant developments in both numerical methodologies and theoretical findings for SOCAVEs (1) and SOCGAVEs (3). Ref. [4] showed that SOCAVEs (1) is equivalent to the second-order cone linear complementarity problem (SOCLCP). Furthermore, ref. [6] proved that SOCAVEs (1) can be converted into the standard SOCLCP. Concerning the existence of solutions, ref. [25] presented some sufficient conditions for the unique solvability of SOCAVEs (1). Furthermore, by using the P-property and the globally uniquely solvable (GUS) property of the SOCLCP, ref. [26] gave some sufficient and necessary conditions for the unique solvability of SOCAVEs (1). Concerning numerical algorithms for solving SOCAVEs (1), ref. [4] proposed a generalized Newton method and demonstrated that it exhibits global linear convergence and local quadratic convergence under appropriate assumptions. To handle the nonlinear and non-smooth term $|x|$ with respect to $\mathcal{K}^{n}$, smoothing-type algorithms based on different smoothing techniques have been proposed in [6,7,8] for SOCAVEs (1). These smoothing-type algorithms usually employ a monotone line search technique. In order to improve their computational performance, ref. [27] introduced a non-monotone smoothing Newton algorithm for solving SOCAVEs (1). Based on a splitting of the two-by-two block coefficient matrix, ref. [28] proposed a modified SOR-like method to solve SOCAVEs (1).
SOCAVEs (1) can be interpreted as a special case of the following system of nonlinear equations
$F(x) = 0 \quad \text{with} \quad F(x) := A x - |x| - b,$
where $F: \mathbb{R}^{n} \to \mathbb{R}^{n}$ is continuous and monotone; that is, F satisfies
$\langle F(x) - F(y),\; x - y \rangle \ge 0, \quad \forall x, y \in \mathbb{R}^{n}.$
For general nonlinear equations, spectral conjugate gradient methods have garnered significant attention since they avoid computing Jacobian information and require only function evaluations at each iteration. Moreover, spectral conjugate gradient methods have been successfully applied to find local minimizers of large-scale problems. Motivated by these observations, in this paper we develop an MSCG method for solving SOCAVEs (1).
In particular, the significant contributions of this paper can be summarized as follows.
(i)
A modified spectral conjugate gradient method is proposed for solving SOCAVEs (1).
(ii)
The properties of the objective function of SOCAVEs (1) under suitable conditions are established.
(iii)
In comparison with the spectral gradient algorithm, the proposed method is well suited to solving SOCAVEs (1) due to its low storage requirements and its exclusive reliance on objective function values.
(iv)
Numerical examples are given to demonstrate the effectiveness of the proposed method.
The remainder of this paper is structured as follows. Section 2 provides an overview of the preliminaries, propositions, and lemmas that are essential for understanding and analyzing the subsequent content. In Section 3, we propose a modified spectral conjugate gradient method for solving SOCAVEs (1). Furthermore, the global convergence of the proposed method is presented in Section 4. In Section 5, numerical examples are given to illustrate the efficiency of the proposed algorithm. Concluding remarks are made in Section 6.

2. Preliminaries

In this section, we gather fundamental results essential for our subsequent analysis.
We begin by reviewing key concepts and background information concerning SOCs, as detailed in [1,2,3,29,30].
For $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the spectral decomposition of x with respect to the SOC is given by
$x = \lambda_1 \mu_1 + \lambda_2 \mu_2,$
where $\lambda_1, \lambda_2$ and $\mu_1, \mu_2$ are the spectral values and the corresponding spectral vectors of x, defined as
$\lambda_i = x_1 + (-1)^{i} \|x_2\|$
and
$\mu_i = \begin{cases} \dfrac{1}{2}\left( 1,\; (-1)^{i} \dfrac{x_2}{\|x_2\|} \right), & \text{if } x_2 \ne 0, \\ \dfrac{1}{2}\left( 1,\; (-1)^{i} \omega \right), & \text{if } x_2 = 0, \end{cases}$
for $i = 1, 2$, where $\omega \in \mathbb{R}^{n-1}$ is any vector satisfying $\|\omega\| = 1$. Indeed, the decomposition (8) is guaranteed to be unique if $x_2 \ne 0$.
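As a concrete illustration of (8), the spectral values and vectors can be computed directly from $x_1$ and $x_2$. The following Python sketch is our own illustration (the function name is hypothetical), not code from the paper.

```python
import numpy as np

def soc_spectral_decomposition(x):
    """Spectral decomposition of x = (x1, x2) with respect to the SOC K^n.

    Returns (lam1, lam2, mu1, mu2) with x = lam1 * mu1 + lam2 * mu2.
    """
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    if norm_x2 > 0:
        w = x2 / norm_x2
    else:
        # any unit vector omega works when x2 = 0; pick the first axis
        w = np.zeros_like(x2)
        if w.size > 0:
            w[0] = 1.0
    lam1 = x1 - norm_x2                      # lambda_i = x1 + (-1)^i ||x2||
    lam2 = x1 + norm_x2
    mu1 = 0.5 * np.concatenate(([1.0], -w))  # mu_1 = (1/2)(1, -x2/||x2||)
    mu2 = 0.5 * np.concatenate(([1.0],  w))  # mu_2 = (1/2)(1,  x2/||x2||)
    return lam1, lam2, mu1, mu2
```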
To derive a more explicit formula for $|x|$ with respect to $\mathcal{K}^{n}$, we need some results about functions associated with the SOC.
Definition 1. 
For any function $\hat{f}: \mathbb{R} \to \mathbb{R}$, we define a function on $\mathbb{R}^{n}$ associated with $\mathcal{K}^{n}$ ($n \ge 1$) as
$f(x) = \hat{f}(\lambda_1)\mu_1 + \hat{f}(\lambda_2)\mu_2, \quad \forall x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1},$
where $\lambda_1$ and $\lambda_2$ are the spectral values and $\mu_1$ and $\mu_2$ are the associated spectral vectors of x.
If $n = 1$, according to (8), we have $\lambda_1 = \lambda_2 = x$ and $\mu_1 = \mu_2 = \frac{1}{2}$. Therefore, for any $x \in \mathbb{R}$, the spectral decomposition of x is $x = \frac{1}{2}x + \frac{1}{2}x$. By the definition of the SOC function (9), it holds that $f(x) = \frac{1}{2}\hat{f}(x) + \frac{1}{2}\hat{f}(x) = \hat{f}(x)$ for all $x \in \mathbb{R}$.
Next, we present some special examples about the function associated with SOC. For any x = ( x 1 , x 2 ) R × R n 1 , according to the spectral decomposition (8), we have
$x^{2} = \lambda_1^{2}\mu_1 + \lambda_2^{2}\mu_2$
and, for $x \in \mathcal{K}^{n}$,
$\sqrt{x} = \sqrt{\lambda_1}\,\mu_1 + \sqrt{\lambda_2}\,\mu_2.$
When $n = 1$, we have $\lambda_1 = \lambda_2 = x$ and $\mu_1 = \mu_2 = \frac{1}{2}$. According to (11), it holds that $\sqrt{x} = \frac{1}{2}\sqrt{x} + \frac{1}{2}\sqrt{x}$.
By using (10) and (11), we have
$\sqrt{x^{2}} = \sqrt{\lambda_1^{2}\mu_1 + \lambda_2^{2}\mu_2} = \sqrt{\lambda_1^{2}}\,\mu_1 + \sqrt{\lambda_2^{2}}\,\mu_2 = |\lambda_1|\mu_1 + |\lambda_2|\mu_2.$
Therefore, for any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$,
$|x| = \sqrt{x^{2}} = |\lambda_1|\mu_1 + |\lambda_2|\mu_2.$
According to (8) and (12), we have
$|x| = \begin{cases} \dfrac{1}{2}\left( |x_1 - \|x_2\|| + |x_1 + \|x_2\||,\; \left( |x_1 + \|x_2\|| - |x_1 - \|x_2\|| \right)\dfrac{x_2}{\|x_2\|} \right), & \text{if } x_2 \ne 0, \\ (|x_1|, 0), & \text{if } x_2 = 0. \end{cases}$
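Formula (13) can be implemented directly. The following minimal Python sketch is again our own illustration, not code from the paper; it returns the same vector as $|\lambda_1|\mu_1 + |\lambda_2|\mu_2$ in (12).

```python
import numpy as np

def soc_abs(x):
    """Absolute value |x| of x = (x1, x2) with respect to the SOC K^n, via (13)."""
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    if norm_x2 == 0.0:
        return np.concatenate(([abs(x1)], np.zeros_like(x2)))
    a = abs(x1 - norm_x2)                    # |lambda_1|
    b = abs(x1 + norm_x2)                    # |lambda_2|
    head = 0.5 * (a + b)
    tail = 0.5 * (b - a) * x2 / norm_x2
    return np.concatenate(([head], tail))
```

For the composite cone $\mathcal{K} = \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}$ used later in Examples 2 and 4, the same computation would simply be applied block by block.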
Finally, we discuss the Lipschitz continuity and monotonicity of the mapping F (6). First, we give the following definitions and an auxiliary lemma.
Definition 2. 
The mapping F (6) is Lipschitz continuous if there exists a constant $L > 0$ such that
$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}.$
Definition 3. 
The mapping F (6) is strongly monotone if there exists a constant $m > 0$ such that
$\langle F(x) - F(y),\; x - y \rangle \ge m\|x - y\|^{2}, \quad \forall x, y \in \mathbb{R}^{n}.$
Lemma 1 
([31]). For any vectors $x = (x_1, x_2)$ and $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, it holds that
$\bigl\| |x| - |y| \bigr\| \le \|x - y\|.$
Proposition 1. 
The mapping F (6) is Lipschitz continuous.
Proof. 
For x , y R n , we have
$\|F(x) - F(y)\| = \|A x - |x| - A y + |y|\| = \|A(x - y) - (|x| - |y|)\| \le \|A(x - y)\| + \bigl\| |x| - |y| \bigr\| \le \|A\|\,\|x - y\| + \|x - y\| = (\|A\| + 1)\|x - y\|.$
Therefore, the mapping F (6) is Lipschitz continuous with $L = \|A\| + 1$.    □
Proposition 2. 
Assume that $A - I$ is a positive semi-definite matrix; then the mapping F (6) is monotone.
Proof. 
For x , y R n , we have
$\langle F(x) - F(y),\; x - y \rangle = \langle A(x - y) - (|x| - |y|),\; x - y \rangle = (x - y)^{T} A (x - y) - (x - y)^{T}(|x| - |y|).$
Using the Cauchy–Schwarz inequality, we have
$(x - y)^{T}(|x| - |y|) \le \|x - y\| \cdot \bigl\| |x| - |y| \bigr\| \le \|x - y\|^{2},$
in which Lemma 1 is used. Then,
$\langle F(x) - F(y),\; x - y \rangle = (x - y)^{T} A (x - y) - (x - y)^{T}(|x| - |y|) \ge (x - y)^{T} A (x - y) - \|x - y\|^{2} = (x - y)^{T}(A - I)(x - y).$
Obviously, when $A - I$ is a positive semi-definite matrix, the mapping F (6) is monotone.    □
Proposition 3. 
If A is a symmetric matrix and $A - I$ is positive definite, then the mapping F (6) is strongly monotone.
Proof. 
Owing to the last inequality of (18), that is,
$\langle F(x) - F(y),\; x - y \rangle \ge (x - y)^{T}(A - I)(x - y),$
we then have
$\langle F(x) - F(y),\; x - y \rangle \ge \frac{1}{2}\left[ (x - y)^{T}(A - I)(x - y) + (x - y)^{T}(A - I)^{T}(x - y) \right] = \frac{1}{2}(x - y)^{T}(A + A^{T})(x - y) - (x - y)^{T}(x - y) \ge \frac{1}{2}\lambda_{\min}(A + A^{T})\|x - y\|^{2} - \|x - y\|^{2} = \left( \frac{1}{2}\lambda_{\min}(A + A^{T}) - 1 \right)\|x - y\|^{2} = \left( \lambda_{\min}(A) - 1 \right)\|x - y\|^{2},$
where $\lambda_{\min}(A)$ denotes the smallest eigenvalue of A; the last equality holds by the symmetry of A. Because $A - I$ is positive definite, $\lambda_{\min}(A) - 1 > 0$. Therefore, the mapping F (6) is strongly monotone.    □

3. The Algorithm

In this section, we develop a modified spectral conjugate gradient method for solving SOCAVEs (1).
The spectral conjugate gradient method is an important class of optimization methods for solving nonlinear equations. Let us briefly describe the algorithmic framework of the spectral conjugate gradient method for the nonlinear Equation (6). Starting from an initial guess $x_0 \in \mathbb{R}^{n}$, the spectral conjugate gradient method generates a sequence $\{x_k\}$ satisfying
$x_{k+1} = x_k + \alpha_k d_k, \quad k \ge 0,$
where $\alpha_k > 0$ is the step size, obtained by a suitable line search, and the search direction $d_k$ is defined as
$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -\theta_k F(x_k) + \beta_k d_{k-1}, & \text{if } k \ge 1, \end{cases}$
where $\theta_k$ and $\beta_k$ are the spectral parameter and the conjugate parameter, respectively. Therefore, the performance of spectral conjugate gradient methods depends on the choice of the parameters $\theta_k$ and $\beta_k$. For the sake of convenience, we write $F_k$ for $F(x_k)$. Inspired by [32], we choose the following parameters:
$\theta_k = -\dfrac{F_{k-1}^{T} d_{k-1}}{\|F_{k-1}\|^{2}} + \beta_k \dfrac{F_k^{T} d_{k-1}}{\|F_k\|^{2}}$
and
$\beta_k = \sigma \dfrac{F_k^{T} y_{k-1}}{\|F_k\|\,\|d_{k-1}\|},$
where $y_{k-1} = F_k - F_{k-1}$.
In what follows, we will demonstrate that the search direction $d_k$ generated by (20)–(22) is a sufficient descent direction, i.e., it satisfies
$F_k^{T} d_k = -\|F_k\|^{2}.$
In general, the sufficient descent condition takes the form
$F_k^{T} d_k \le -\gamma \|F_k\|^{2},$
where $\gamma > 0$ is a positive constant. In contrast to the above condition, an attractive property of the sufficient descent direction (23) is its independence from any line search. Moreover, if the line search is exact, then $F_k^{T} d_{k-1} = 0$. In this case, we have
$\theta_k = -\dfrac{F_{k-1}^{T} d_{k-1}}{\|F_{k-1}\|^{2}} = 1,$
and the MSCG method reduces to the standard conjugate gradient method.
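As an illustration of (20)–(22), the direction update needs only the current and previous residuals and the previous direction. The sketch below is our own (the helper name mscg_direction is hypothetical, not from the paper), and it checks the sufficient descent identity (23) numerically.

```python
import numpy as np

def mscg_direction(F_k, F_prev, d_prev, sigma=2.0):
    """Search direction d_k from (20)-(22); for k = 0 pass F_prev = d_prev = None."""
    if F_prev is None:
        return -F_k                                   # d_0 = -F_0
    y = F_k - F_prev                                  # y_{k-1} = F_k - F_{k-1}
    beta = sigma * (F_k @ y) / (np.linalg.norm(F_k) * np.linalg.norm(d_prev))
    theta = -(F_prev @ d_prev) / np.linalg.norm(F_prev) ** 2 \
            + beta * (F_k @ d_prev) / np.linalg.norm(F_k) ** 2
    return -theta * F_k + beta * d_prev

# Numerical check of F_k^T d_k = -||F_k||^2 on random data.
rng = np.random.default_rng(0)
F0 = rng.standard_normal(5); d0 = -F0
F1 = rng.standard_normal(5); d1 = mscg_direction(F1, F0, d0)
print(np.isclose(F1 @ d1, -np.linalg.norm(F1) ** 2))  # True
```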
Lemma 2. 
The search direction $d_k$ defined by (20)–(22) is a sufficient descent direction; i.e., condition (23) holds for all $k \ge 0$.
Proof. 
Pre-multiplying (20) by $F_k^{T}$, we have
$F_k^{T} d_k = -\theta_k \|F_k\|^{2} + \beta_k F_k^{T} d_{k-1} = \|F_k\|^{2} \cdot \dfrac{F_{k-1}^{T} d_{k-1}}{\|F_{k-1}\|^{2}} \cdot \left[ -\dfrac{\|F_{k-1}\|^{2}}{F_{k-1}^{T} d_{k-1}} \theta_k + \beta_k \dfrac{\|F_{k-1}\|^{2}}{F_{k-1}^{T} d_{k-1}} \cdot \dfrac{F_k^{T} d_{k-1}}{\|F_k\|^{2}} \right].$
Let $\delta_k := -\dfrac{\|F_{k-1}\|^{2}}{F_{k-1}^{T} d_{k-1}} \theta_k + \beta_k \dfrac{\|F_{k-1}\|^{2}}{F_{k-1}^{T} d_{k-1}} \cdot \dfrac{F_k^{T} d_{k-1}}{\|F_k\|^{2}}$. Applying (21), it is easy to deduce that $\delta_k = 1$.
Therefore, we have
$\dfrac{F_k^{T} d_k}{\|F_k\|^{2}} = \dfrac{F_{k-1}^{T} d_{k-1}}{\|F_{k-1}\|^{2}} = \cdots = \dfrac{F_0^{T} d_0}{\|F_0\|^{2}} = -1,$
and condition (23) is satisfied.    □
This shows that if the parameters are chosen so that $\delta_k \equiv 1$, then $d_k$ always satisfies the sufficient descent property. Moreover, this remarkable property is independent of any line search.
The above analysis allows us to obtain a reasonable parameter $\theta_k$. To obtain a globally convergent algorithm for solving the nonlinear monotone Equation (6), the search direction generated by the algorithm is required to be bounded. At this point, we describe the algorithm in detail as follows.
The following lemma shows that Algorithm 1 is well defined.
Algorithm 1 Modified Spectral Conjugate Gradient Method (MSCG).
Step 0. Choose an initial guess $x_0 \in \mathbb{R}^{n}$, parameters $\xi > 0$, $tol > 0$, $0 < \delta < 1$, $\sigma > 0$, and set $k = 0$.
Step 1. If $\|F_k\| \le tol$, terminate. Otherwise, go to Step 2.
Step 2. Compute $d_k$ using (19)–(22).
Step 3. Set the test point $\mu_k$:
$\mu_k = x_k + \alpha_k d_k,$
where $\alpha_k = \delta^{i}$, with $i = 0, 1, \ldots$ the smallest non-negative integer satisfying
$-F(x_k + \delta^{i} d_k)^{T} d_k \ge \xi \delta^{i} \|d_k\|^{2}.$
Step 4. Update the next iterate by the hyperplane projection method:
$x_{k+1} = x_k - \zeta_k F(\mu_k),$
where
$\zeta_k = \dfrac{F(\mu_k)^{T}(x_k - \mu_k)}{\|F(\mu_k)\|^{2}}.$
Step 5. Set $k := k + 1$ and go to Step 1.
Lemma 3. 
Algorithm 1 is well defined; i.e., for each k, there exists a step size $\alpha_k = \delta^{i}$ such that the line search condition (25) holds.
Proof. 
We prove the conclusion by contradiction. Assume that there exists an index $k_0 \ge 0$ such that (25) is not satisfied for any non-negative integer i, i.e.,
$-F(x_{k_0} + \delta^{i} d_{k_0})^{T} d_{k_0} < \xi \delta^{i} \|d_{k_0}\|^{2}.$
Letting $i \to \infty$ and using the continuity of F (Proposition 1), we have
$-F(x_{k_0})^{T} d_{k_0} \le 0.$
Using (23), we can derive that
$-F(x_{k_0})^{T} d_{k_0} = \|F(x_{k_0})\|^{2} > 0,$
which contradicts (28). Therefore, Algorithm 1 is well defined. □
Remark 1. 
By the monotonicity of F (6), we have
$\langle F(\mu_k),\; \bar{x} - \mu_k \rangle \le 0$
for all $\bar{x}$ such that $F(\bar{x}) = 0$. By applying the line search procedure along the descent direction $d_k$, we obtain a point $\mu_k = x_k + \alpha_k d_k$ such that
$\langle F(\mu_k),\; x_k - \mu_k \rangle > 0.$
Thus, the hyperplane
$H_k = \{ x \in \mathbb{R}^{n} \mid \langle F(\mu_k),\; x - \mu_k \rangle = 0 \}$
strictly separates the current iterate $x_k$ from the zeros of Equation (6). Once the separating hyperplane is obtained, the next iterate $x_{k+1}$ is computed by projecting $x_k$ onto this hyperplane.
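Putting Steps 0-5 together, Algorithm 1 can be sketched compactly in Python. The following is an illustrative reimplementation under the assumptions above, not the authors' MATLAB code; it reuses the soc_abs and mscg_direction helpers sketched earlier, and the default parameter values follow the choices reported in Section 5.

```python
import numpy as np

def mscg_solve(A, b, x0, xi=1e-3, delta=0.5, sigma=2.0, tol=1e-6, max_iter=1000):
    """Algorithm 1 (MSCG) for Ax - |x| = b, with |x| taken with respect to K^n."""
    F = lambda x: A @ x - soc_abs(x) - b                     # residual mapping (6)
    x = np.asarray(x0, dtype=float).copy()
    Fx, F_prev, d_prev = F(x), None, None
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:                        # Step 1: stopping test
            break
        d = mscg_direction(Fx, F_prev, d_prev, sigma)        # Step 2: direction (20)-(22)
        alpha = 1.0                                          # Step 3: backtracking (25)
        while -F(x + alpha * d) @ d < xi * alpha * np.linalg.norm(d) ** 2:
            alpha *= delta
        mu = x + alpha * d                                   # test point
        Fmu = F(mu)
        zeta = (Fmu @ (x - mu)) / np.linalg.norm(Fmu) ** 2   # Step 4: projection (26)-(27)
        x_new = x - zeta * Fmu
        F_prev, d_prev = Fx, d
        x, Fx = x_new, F(x_new)
    return x
```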

4. Convergence Analysis

In this section, we establish the global convergence of Algorithm 1. To this end, we give the following auxiliary lemmas to complete the proof of convergence.
Lemma 4 
([33]). Suppose that the sequences $\{\mu_k\}$ and $\{x_{k+1}\}$ are generated by (19) and (26) in Algorithm 1. If the mapping F is monotone and satisfies
$F(\mu_k)^{T}(x_k - \mu_k) > 0,$
then for any $\bar{x} \in \mathbb{R}^{n}$ such that $F(\bar{x}) = 0$, it holds that
$\|x_{k+1} - \bar{x}\|^{2} \le \|x_k - \bar{x}\|^{2} - \|x_{k+1} - x_k\|^{2}.$
Lemma 5 
([34,35]). Let $\{\tau_k\}$ be a sequence of real numbers and $\{t_k\}$ a sequence of non-negative real numbers. Assume that the sequence $\{\tau_k\}$ is bounded from below and that, for any $k \ge 0$, it satisfies
$\tau_{k+1} \le \tau_k - t_k;$
then the following statements hold:
(i)
For all $k \ge 0$, $\tau_k$ is bounded;
(ii)
The sequence $\{t_k\}$ is summable, namely $\sum_{k=0}^{\infty} t_k < \infty$;
(iii)
The sequence { τ k } is monotonically decreasing and convergent.
Lemma 6. 
Suppose that the conditions of Propositions 1 and 2 hold. If $\{\mu_k\}$ and $\{x_k\}$ are the sequences generated by (19) and (26) in Algorithm 1, then the following statements hold:
(a)
The sequence { x k } is bounded;
(b)
The sequence { F k } is bounded;
(c)
The step size α k and the search direction d k satisfy
$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$
Proof. 
Assume that $\bar{x}$ is a solution of SOCAVEs (1).
(a)
From the line search condition (25), we can easily deduce that inequality (29) holds. Based on Lemmas 4 and 5, we obtain the following results:
The sequence { x k + 1 x ¯ } is monotonically decreasing and convergent;
The sequence { x k + 1 x k } is summable, i.e.,
$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^{2} < \infty.$
Based on the aforementioned results, we have
$\|x_k - \bar{x}\| \le \|x_0 - \bar{x}\|,$
and the sequence $\{x_k\}$ is bounded; i.e., there exists a constant $c > 0$ such that
$\|x_k\| \le c, \quad \forall k \ge 0.$
In addition, based on the inequality (32), we have
$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$
(b)
By the Lipschitz continuity of F and (33), it follows that
$\|F_k\| = \|F_k - F(\bar{x})\| \le L\|x_k - \bar{x}\| \le L\|x_0 - \bar{x}\|;$
then, by setting $\varepsilon = \|x_0 - \bar{x}\|$, we have
$\|F_k\| \le L\varepsilon.$
Therefore, the sequence { F k } is bounded.
(c)
Based on Proposition 2, we obtain that
$F(\mu_k)^{T}(x_k - \mu_k) \le F_k^{T}(x_k - \mu_k).$
From (19) and (25) and the inequality above,
$\xi \|\mu_k - x_k\| = \dfrac{\xi \|\mu_k - x_k\|^{2}}{\|\mu_k - x_k\|} = \dfrac{\xi \alpha_k^{2} \|d_k\|^{2}}{\|\mu_k - x_k\|} \le \dfrac{F(\mu_k)^{T}(x_k - \mu_k)}{\|\mu_k - x_k\|} \le \dfrac{F_k^{T}(x_k - \mu_k)}{\|\mu_k - x_k\|} \le \|F_k\|,$
that is,
$\|\mu_k - x_k\| \le \dfrac{L\varepsilon}{\xi}.$
Using (33) and (37), we have
$\|\mu_k - \bar{x}\| \le \|\mu_k - x_k\| + \|x_k - \bar{x}\| \le \|\mu_k - x_k\| + \|x_0 - \bar{x}\| \le \left( \dfrac{L}{\xi} + 1 \right)\varepsilon.$
Based on the Lipschitz continuity of F, it follows that
$\|F(\mu_k)\| = \|F(\mu_k) - F(\bar{x})\| \le L\|\mu_k - \bar{x}\| \le \left( \dfrac{L}{\xi} + 1 \right) L\varepsilon.$
By (26) and (38), and the line search condition (25), we can obtain
$\|x_{k+1} - x_k\| = \dfrac{F(\mu_k)^{T}(x_k - \mu_k)}{\|F(\mu_k)\|^{2}}\,\|F(\mu_k)\| = \dfrac{F(\mu_k)^{T}(x_k - \mu_k)}{\|F(\mu_k)\|} \ge \dfrac{\xi \alpha_k^{2} \|d_k\|^{2}}{\|F(\mu_k)\|} \ge \dfrac{\xi^{2}}{(L + \xi) L\varepsilon}\,\alpha_k^{2} \|d_k\|^{2}.$
It follows from (35) and (39) that (31) holds.    □
Lemma 7. 
Suppose that the conditions of Propositions 1 and 2 hold. If $\{\mu_k\}$ and $\{x_k\}$ are the sequences generated by (19) and (26) in Algorithm 1, then
$\alpha_k \ge \dfrac{\delta \|F_k\|^{2}}{(L + \xi)\|d_k\|^{2}}.$
Proof. 
Suppose that $\alpha_k \le \delta$; then $\alpha_k' = \alpha_k / \delta$ does not satisfy (25), that is,
$-F(x_k + \alpha_k' d_k)^{T} d_k < \xi \alpha_k' \|d_k\|^{2}.$
The inequality above, combined with (23) and Proposition 1, yields
$\|F_k\|^{2} = -F_k^{T} d_k = \left( F(x_k + \alpha_k' d_k) - F_k \right)^{T} d_k - F(x_k + \alpha_k' d_k)^{T} d_k \le L \alpha_k' \|d_k\|^{2} + \xi \alpha_k' \|d_k\|^{2} = \dfrac{L + \xi}{\delta}\,\alpha_k \|d_k\|^{2}.$
Therefore,
$\alpha_k \ge \dfrac{\delta \|F_k\|^{2}}{(L + \xi)\|d_k\|^{2}},$
which completes the proof. □
Lemma 8. 
Suppose that the conditions of Propositions 1 and 2 hold. If $d_k$ is defined by (20)–(22), then for all $k \ge 0$ the inequality
$\|d_k\| \le N$
holds, where $N = L(\varepsilon + 4\sigma c)$.
Proof. 
It is easy to see that, for $k = 0$,
$\|d_0\| = \|F_0\| \le L\varepsilon.$
Now, for $k \ge 1$, it follows from (20)–(22), (34), and (36) that
$\|d_k\| = \left\| -F_k - \beta_k \dfrac{F_k^{T} d_{k-1}}{\|F_k\|^{2}} F_k + \beta_k d_{k-1} \right\| \le \|F_k\| + |\beta_k|\,\|d_{k-1}\| + |\beta_k|\,\dfrac{|F_k^{T} d_{k-1}|}{\|F_k\|} \le \|F_k\| + 2|\beta_k|\,\|d_{k-1}\| \le \|F_k\| + 2\sigma \|F_k - F_{k-1}\| \le \|F_k\| + 2L\sigma \|x_k - x_{k-1}\| \le \|F_k\| + 2L\sigma\left( \|x_k\| + \|x_{k-1}\| \right) \le L\varepsilon + 4L\sigma c.$
By setting $N := L(\varepsilon + 4\sigma c)$, the assertion then follows immediately. □
Theorem 1. 
Suppose that the conditions of Propositions 1 and 2 hold. If $\{x_k\}$ is the sequence generated by (19) and (26) in Algorithm 1, then
$\liminf_{k \to \infty} \|F_k\| = 0.$
Proof. 
Suppose that $\liminf_{k \to \infty} \|F_k\| \ne 0$; then there exists $\nu > 0$ such that $\|F_k\| \ge \nu$ for all k. Using the sufficient descent condition (23) and the Cauchy–Schwarz inequality, we have
$\|F_k\|^{2} = -F_k^{T} d_k \le \|F_k\|\,\|d_k\|.$
This implies that $\|d_k\| \ge \nu$. Therefore, it follows from (31) that
$\lim_{k \to \infty} \alpha_k = 0.$
However, Lemma 7 shows that
$\alpha_k \ge \min\left\{ \delta,\; \dfrac{\delta \|F_k\|^{2}}{(L + \xi)\|d_k\|^{2}} \right\} \ge \min\left\{ \delta,\; \dfrac{\delta \nu^{2}}{(L + \xi) N^{2}} \right\} > 0.$
This contradicts (43), and hence $\liminf_{k \to \infty} \|F_k\| = 0$. □
Lemma 9 
([25]). If all singular values of A exceed 1, then SOCAVEs (1) has a unique solution.
Theorem 2. 
Suppose that the conditions of Propositions 1 and 3 hold. Then the sequence $\{x_k\}$ generated by (19) and (26) in Algorithm 1 converges to the unique solution of SOCAVEs (1).
Proof. 
Based on Lemma 9 and Proposition 3, it is evident that SOCAVEs (1) has a unique solution. The assertion follows immediately on account of Theorem 1, thereby completing this proof. □

5. Numerical Experiments

In this section, we report some numerical examples to compare the MSCG method with the modified multivariate spectral gradient algorithm proposed by [23] (MMSGA for short), the modified SOR-like method proposed by [28] (MSOR for short), and the generalized Newton method proposed by [4] (GN for short). All codes are written in MATLAB R2020a on a personal computer with an Intel(R) CPU at 2.10 GHz and 16.00 GB of RAM.
The parameters used for the implementation of the MSCG method are chosen to be ξ = 0.001 , δ = 0.5 , and σ = 2 . We choose the parameters of the MMSGA as follows:
β = 0.5 , σ = 0.01 , r = 0.1 , τ = 0.001 , δ = 0.01 .
As for the MSOR method, we choose the approximate optimal parameters, which satisfy
ω aopt = 2 2 μ min μ max + 1
and
α aopt = 4 μ min μ max + 2 μ min μ max ( μ max + μ min ) 2 ,
where $\mu_{\min} = 1 - \|A^{-1}\|$ and $\mu_{\max} = 1 + \|A^{-1}\|$. In addition, we choose all parameters of the GN method to be the same as those in [4]. The termination criterion for all algorithms is $\|F_k\| \le 10^{-6}$ or the number of iterations exceeding 1000. Additionally, we choose $x_0 = (0, \ldots, 0)^{T}$ as the initial point.
In our experiments, the performance indicators of the algorithms are shown in Table 1.
Here, Res and Err are defined as
$\mathrm{Res} := \|A x_k - |x_k| - b\|$
and
$\mathrm{Err} := \|x_k - \bar{x}\|,$
where $\bar{x}$ is the exact solution.
Example 1. 
Consider SOCAVEs (1) with
$A = \mathrm{tridiag}(-1, 8, -1) \in \mathbb{R}^{n \times n}, \quad \bar{x} = (-1, 1, -1, 1, \ldots, -1, 1)^{T} \in \mathbb{R}^{n},$
and $b = A\bar{x} - |\bar{x}|$.
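For reference, the data of Example 1 and the Res and Err indicators of Table 1 can be produced as in the sketch below (our illustration, reusing soc_abs and mscg_solve from the earlier sketches; the signs of the off-diagonal entries of A and the alternating pattern of $\bar{x}$ are as written above).

```python
import numpy as np

def example1_problem(n):
    """Example 1 data: A = tridiag(-1, 8, -1), alternating xbar, b = A xbar - |xbar|."""
    A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    xbar = np.array([(-1.0) ** (i + 1) for i in range(n)])   # (-1, 1, -1, 1, ...)
    b = A @ xbar - soc_abs(xbar)
    return A, xbar, b

A, xbar, b = example1_problem(1000)
x = mscg_solve(A, b, np.zeros(1000))
res = np.linalg.norm(A @ x - soc_abs(x) - b)   # Res in Table 1
err = np.linalg.norm(x - xbar)                 # Err in Table 1
```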
From Table 2, it is evident that these iteration methods provide approximations to the desired solutions for different values of n. The elapsed CPU time of the MSCG method is the shortest among the four algorithms considered. Moreover, Table 2 demonstrates that the elapsed CPU time of the MSCG method is not significantly affected by changes in problem size compared to the other three algorithms.
Example 2. 
Let m be a given positive integer and $n = m^{2}$. We consider SOCAVEs (1), where $A \in \mathbb{R}^{n \times n}$ is expressed as $A = \hat{A} + 4I$ and $b \in \mathbb{R}^{n}$ is given by $b = A\bar{x} - |\bar{x}|$, where
$\bar{x} = (-1, 1, -1, 1, \ldots, -1, 1)^{T}, \quad \hat{A} = \begin{pmatrix} B & -I & O & \cdots & O & O \\ -I & B & -I & \cdots & O & O \\ O & -I & B & \cdots & O & O \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ O & O & O & \cdots & B & -I \\ O & O & O & \cdots & -I & B \end{pmatrix}$
with $B = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m \times m}$, $I \in \mathbb{R}^{m \times m}$ the identity matrix, and $O \in \mathbb{R}^{m \times m}$ the zero matrix.
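The block tridiagonal matrix $\hat{A}$ has the familiar five-point stencil structure and can be assembled with a Kronecker product, as in the sketch below (our own illustration; the matrix of Example 3 is built analogously with off-diagonal blocks $-1.5I$ and $-0.5I$ and $B = \mathrm{tridiag}(-1.5, 4, -0.5)$).

```python
import numpy as np

def example2_matrix(m):
    """A = Ahat + 4I with Ahat = I (x) B + T (x) (-I), B = tridiag(-1, 4, -1), n = m^2."""
    B = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    T = np.eye(m, k=1) + np.eye(m, k=-1)       # pattern of the off-diagonal blocks
    Ahat = np.kron(np.eye(m), B) - np.kron(T, np.eye(m))
    return Ahat + 4.0 * np.eye(m * m)
```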
In Example 2, we consider the second-order cone $\mathcal{K} = \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}$, where $n_1 + \cdots + n_r = n$ and $r \ge 1$, and we assume that $n_1 = \cdots = n_r = n/r$. The numerical results are presented in Table 3, Table 4 and Table 5, demonstrating that all approaches provide approximations to the desired solutions. From Table 3 to Table 5, it is evident that the MSCG method, the MSOR method, and the GN method can effectively and accurately solve the problem, with the GN method exhibiting the highest accuracy among these algorithms. Although the MSCG method requires the highest number of iterations, its elapsed CPU time is the shortest.
In Example 2, A is a sparse block tridiagonal matrix and $A - I$ is a symmetric positive definite matrix; therefore, the convergence condition of the MSCG method is satisfied. The numerical results are listed in Table 3. It can be seen from Table 3 that the MSCG method is the most efficient among the four algorithms. In addition, the GN method is the most accurate.
Example 3. 
Consider SOCAVEs (1); we choose $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$ as $A = \hat{A} + 4I$ and $b = A\bar{x} - |\bar{x}|$, respectively. Here, $\hat{A}$ is defined as follows:
$\hat{A} = \begin{pmatrix} B & -0.5I & O & \cdots & O & O \\ -1.5I & B & -0.5I & \cdots & O & O \\ O & -1.5I & B & \cdots & O & O \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ O & O & O & \cdots & B & -0.5I \\ O & O & O & \cdots & -1.5I & B \end{pmatrix}$
with $B = \mathrm{tridiag}(-1.5, 4, -0.5) \in \mathbb{R}^{m \times m}$ and $\bar{x} = (-1, 2, -1, 2, \ldots, -1, 2)^{T}$.
In contrast to Example 2, $A - I$ is a nonsymmetric positive definite matrix; the numerical results for this scenario are presented in Table 4. They show that the MSCG method exhibits superior computing performance compared with the other three algorithms.
Example 4. 
Consider SOCAVEs (1) generated in the same way as in Example 1 with $n = 1000$, but here the SOC is given by $\mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r}$, where $n_1 = \cdots = n_r = n/r$.
In Example 4, we choose $r = 2, 5, 10, 20, 50, 100$. The numerical results are presented in Table 5, demonstrating that all approaches provide approximations to the desired solutions. It is evident that the MSCG, MMSGA, MSOR, and GN methods can effectively and accurately solve the problem, with the GN method exhibiting the highest accuracy among these algorithms. Although the MSCG method requires more iterations than the MSOR and GN methods, its elapsed CPU time is the shortest.
Example 5. 
This example is an adaptation of Example 1 from the work of [23]. We select a random A according to the following structure:
$A = \mathrm{round}\bigl( 4\,\mathrm{eye}(n, n) - 0.02\,( 2\,\mathrm{rand}(n, n) - 1 ) \bigr),$
then we choose a random $\bar{x} \in \mathbb{R}^{n}$ and subsequently compute $b = A\bar{x} - |\bar{x}|$. Finally, we randomly generate $x_0 \in \mathbb{R}^{n}$ with entries in the interval $[-5, 5]$ as the initial point for the iterative process.
We analyze the numerical comparison among Algorithm 1, MMSGA, MSOR, and GN. In particular, we utilize the performance profile introduced in [36] as the method of evaluation, with the running CPU time as the performance measure.
Figure 1 illustrates the performance profile with respect to computing time, over the range $\tau \in [1, 1487]$, for the four solvers on 200 randomly generated test problems from Example 5. From Figure 1, it is evident that the proposed method outperforms MMSGA, MSOR, and GN in CPU time on nearly 100% of the test problems. This demonstrates the superior performance and advantages of the proposed method over the other algorithms.
Indeed, for large-scale sparse problems, our proposed method is much more efficient than the three existing algorithms. However, its numerical performance deteriorates when dealing with dense coefficient matrices.
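For completeness, the Dolan-Moré performance profile [36] shown in Figure 1 can be computed from a matrix of measured CPU times as in the following sketch (our own illustration, not the authors' plotting code; cpu_times is a hypothetical array of timings).

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More profile: for each solver, the fraction of problems it solves within a
    factor tau of the fastest solver.  times has shape (n_problems, n_solvers);
    failed runs can be marked with np.inf."""
    ratios = times / times.min(axis=1, keepdims=True)        # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for s in range(times.shape[1])]
                     for tau in taus])

# Example usage: 200 problems, 4 solvers (MSCG, MMSGA, MSOR, GN).
# rho = performance_profile(cpu_times, np.linspace(1.0, 1487.0, 500))
```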

6. Conclusions

A modified spectral conjugate gradient (MSCG) method is developed for solving SOCAVEs (1). The convergence results of the MSCG method are proved under certain assumptions. Numerical results of the MSCG method for solving SOCAVEs (1) demonstrate the effectiveness of the proposed approach. Some issues remain to be studied further, including the following: (i) necessary and sufficient conditions for the solvability of SOCAVEs are worth further research; (ii) dynamic models have demonstrated significant success in addressing AVEs, as evidenced by works such as [37,38,39]; consequently, how dynamic models can be applied to solve SOCAVEs deserves further investigation.

Author Contributions

Conceptualization, guidance, methodology, editing and revision, L.G.; editing, review and revision, original draft preparation, Z.L.; editing and review, software, validation, J.Z.; original draft preparation, review and revision, software, validation, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

Supported partially by the National Natural Science Foundation of China (Grant No. 12201275), the Ministry of Education in China of Humanities and Social Science Project (Grant No. 21YJCZH204) and the Liaoning Provincial Department of Education (Grant No. JYTZD2023072).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors express sincere gratitude to the referees for their valuable comments and suggestions, which greatly contributed to the improvement of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, J.S.; Tseng, P. An unconstrained smooth minimization reformulation of the second-order cone complementarity problem. Math. Program. 2005, 104, 293–327. [Google Scholar] [CrossRef]
  2. Chen, J.S.; Pan, S.H. A survey on SOC complementarity functions and solution methods for SOCPs and SOCCPs. Pac. J. Optim. 2012, 8, 33–74. [Google Scholar]
  3. Fukushima, M.; Luo, Z.Q.; Tseng, P. Smoothing functions for second-order-cone complementarity problems. SIAM J. Optim. 2002, 12, 436–460. [Google Scholar] [CrossRef]
  4. Hu, S.L.; Huang, Z.H.; Zhang, Q. A generalized newton method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2011, 235, 1490–1501. [Google Scholar] [CrossRef]
  5. Ke, Y.F.; Ma, C.F.; Zhang, H. The modulus-based matrix splitting iteration methods for second-order cone linear complementarity problems. Numer. Algorithm 2018, 79, 1283–1303. [Google Scholar] [CrossRef]
  6. Miao, X.H.; Yang, J.T.; Saheya, B.; Chen, J.S. A smoothing newton method for absolute value equation associated with second-order cone. Appl. Numer. Math. 2017, 120, 82–96. [Google Scholar] [CrossRef]
  7. Miao, X.H.; Yao, K.; Yang, C.Y.; Chen, J.S. Levenberg-Marquardt method for absolute value equation associated with second-order cone. Numer. Algebr. Control. Optim. 2022, 12, 47–61. [Google Scholar] [CrossRef]
  8. Nguyen, C.T.; Saheya, B.; Chang, Y.L.; Chen, J.S. Unified smoothing functions for absolute value equation associated with second-order cone. Appl. Numer. Math. 2019, 135, 206–227. [Google Scholar] [CrossRef]
  9. Rohn, J. A theorem of the alternatives for the equation Ax+B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426. [Google Scholar] [CrossRef]
  10. Mangasarian, O.L. A generalized newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
  11. Prokopyev, O.L. On equivalent reformulations for absolute value equations. Comput. Optim. Appl. 2009, 44, 363–372. [Google Scholar] [CrossRef]
  12. Hladík, M. Properties of the solution set of absolute value equations and the related matrix classes. SIAM J. Matrix Anal. Appl. 2023, 44, 175–195. [Google Scholar] [CrossRef]
  13. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Its Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef]
  14. Mezzadri, F. On the solution of general absolute value equations. Appl. Math. Lett. 2020, 107, 106462. [Google Scholar] [CrossRef]
  15. Wu, S.L.; Li, C.X. The unique solution of the absolute value equations. Appl. Math. Lett. 2018, 76, 195–200. [Google Scholar] [CrossRef]
  16. Zamani, M.; Hladík, M. Error bounds and a condition number for the absolute value equations. Math. Program. 2023, 198, 85–113. [Google Scholar] [CrossRef]
  17. Ke, Y.F.; Ma, C.F. SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 311, 195–202. [Google Scholar] [CrossRef]
  18. Guo, P.; Wu, S.L.; Li, C.X. On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 2019, 97, 107–113. [Google Scholar] [CrossRef]
  19. Zhang, Y.M.; Yu, D.M.; Yuan, Y.F. On the alternative SOR-like iteration method for solving absolute value equations. Symmetry 2023, 15, 589. [Google Scholar] [CrossRef]
  20. Dong, X.; Shao, X.H.; Shen, H.L. A new SOR-like method for solving absolute value equations. Appl. Numer. Math. 2020, 156, 410–421. [Google Scholar] [CrossRef]
  21. Ke, Y.F. The new iteration algorithm for absolute value equation. Appl. Math. Lett. 2020, 99, 105990. [Google Scholar] [CrossRef]
  22. Yu, D.M.; Chen, C.R.; Han, D.R. A modified fixed point iteration method for solving the system of absolute value equations. Optimization 2022, 71, 449–461. [Google Scholar] [CrossRef]
  23. Yu, Z.S.; Li, L.; Yuan, Y. A modified multivariate spectral gradient algorithm for solving absolute value equations. Appl. Math. Lett. 2021, 121, 107461. [Google Scholar] [CrossRef]
  24. Rahpeymaii, F.; Amini, K.; RostamyMalkhalifeh, M. A new three-term spectral subgradient method for solving absolute value equation. Int. J. Comput. Math. 2023, 100, 440–452. [Google Scholar] [CrossRef]
  25. Miao, X.H.; Hsu, W.M.; Nguyen, C.T.; Chen, J.S. The solvabilities of three optimization problems associated with second-order cone. J. Nonlinear Convex Anal. 2021, 22, 937–967. [Google Scholar]
  26. Miao, X.H.; Chen, J.S. On matrix characterizations for P-property of the linear transformation in second-order cone linear complementarity problems. Linear Algebra Its Appl. 2021, 613, 271–294. [Google Scholar] [CrossRef]
  27. Yu, D.M.; Wang, Z.W.; Chen, C.R.; Han, D.R. A non-monotone smoothing Newton algorithm for absolute value equations with second-order cone. Math. Numer. Sin. 2023, 45, 251. [Google Scholar]
  28. Huang, B.H.; Li, W. A modified SOR-like method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2022, 400, 113745. [Google Scholar] [CrossRef]
  29. Alizadeh, F.; Goldfarb, D. Second-order cone programming. Math. Program. 2003, 95, 3–51. [Google Scholar] [CrossRef]
  30. Faraut, J.; Korányi, A. Analysis on Symmetric Cones; Oxford University Press: Oxford, UK, 1994. [Google Scholar]
  31. Huang, B.H.; Ma, C.F. Convergent conditions of the generalized newton method for absolute value equation over second order cones. Appl. Math. Lett. 2019, 92, 151–157. [Google Scholar] [CrossRef]
  32. Liu, J.K.; Feng, Y.M.; Zou, L.M. A spectral conjugate gradient method for solving large-scale unconstrained optimization. Comput. Math. Appl. 2019, 77, 731–739. [Google Scholar] [CrossRef]
  33. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Boston, MA, USA, 1999; pp. 355–369. [Google Scholar]
  34. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  35. Bot, R.I.; Csetnek, E.R.; Nguyen, D.K. A proximal minimization algorithm for structured nonconvex and nonsmooth problems. SIAM J. Optim. 2019, 29, 1300–1328. [Google Scholar] [CrossRef]
  36. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
  37. Chen, C.R.; Yang, Y.N.; Yu, D.M.; Han, D.R. An inverse-free dynamical system for solving the absolute value equations. Appl. Numer. Math. 2021, 168, 170–181. [Google Scholar] [CrossRef]
  38. Ju, X.X.; Yang, X.S.; Feng, G.; Che, H.J. Neurodynamic optimization approaches with finite/fixed-time convergence for absolute value equations. Neural Netw. 2023, 165, 971–981. [Google Scholar] [CrossRef]
  39. Yu, D.M.; Zhang, G.H.; Chen, C.R.; Han, D.R. The neural network models with delays for solving absolute value equations. Neurocomputing 2024, 589, 127707. [Google Scholar] [CrossRef]
Figure 1. Performance profile of computation time of four different algorithms.
Table 1. Performance indicators of the algorithms.
Iter: the total number of iterations
Time: the elapsed CPU time in seconds
Res: the norm of the absolute residual vector
Err: the norm of the absolute error vector
Table 2. Numerical results for Example 1.

n              1000        2000        3000        4000        5000        6000
MSCG   Iter    22          23          23          23          23          23
       Time    0.0054      0.0040      0.0023      0.0043      0.0052      0.0048
       Res     7.3344e-07  4.2396e-07  5.1559e-07  5.9285e-07  6.6094e-07  7.2249e-07
       Err     7.3134e-08  4.2539e-08  5.1713e-08  5.9446e-08  6.6258e-08  7.2416e-08
MMSGA  Iter    52          56          56          69          56          59
       Time    0.1355      0.5000      1.1071      2.3687      2.9812      4.4165
       Res     9.6533e-07  5.7346e-07  5.9639e-07  9.4671e-07  8.0882e-07  6.9092e-07
       Err     1.0413e-07  6.2386e-08  6.4498e-08  9.8842e-08  8.5702e-08  7.2622e-08
MSOR   Iter    8           8           8           9           9           9
       Time    0.6787      2.9710      8.1707      19.8087     39.6925     68.8103
       Res     5.3711e-07  7.6161e-07  9.3390e-07  9.2454e-08  1.0296e-07  1.1246e-07
       Err     6.8543e-08  9.7461e-08  1.1965e-07  1.0051e-08  1.1193e-08  1.2226e-08
GN     Iter    4           4           4           4           4           4
       Time    0.0876      0.3075      0.8729      1.7540      2.8281      4.9972
       Res     9.5889e-12  2.5840e-11  4.8558e-11  7.4836e-11  1.0435e-10  1.3690e-10
       Err     1.2270e-12  3.2606e-12  6.1695e-12  9.5089e-12  1.3127e-11  1.7491e-11
Table 3. Numerical results for Example 2.

m              20          30          40          50          60          70
MSCG   Iter    26          27          27          27          27          28
       Time    0.0092      0.0071      0.0017      0.0027      0.0052      0.0057
       Res     4.3855e-07  4.0944e-07  4.7299e-07  6.6179e-07  9.7673e-07  4.8759e-07
       Err     3.7474e-08  3.4829e-08  4.0059e-08  5.6225e-08  8.3402e-08  4.1533e-08
MMSGA  Iter    97          125         129         133         148         177
       Time    0.0734      0.2875      0.8148      1.9196      4.2102      9.6016
       Res     9.8940e-07  9.6225e-07  8.9834e-07  9.5188e-07  9.8214e-07  7.2742e-07
       Err     1.8880e-07  1.8361e-07  1.5265e-07  1.7421e-07  1.8667e-07  1.3863e-07
MSOR   Iter    10          10          10          10          10          10
       Time    0.0815      0.0634      0.2328      1.9189      10.3479     33.3332
       Res     1.0685e-08  2.7920e-08  4.4075e-08  5.9449e-08  7.4367e-08  8.9001e-08
       Err     1.5480e-09  3.9014e-09  6.1004e-09  8.1958e-09  1.0231e-08  1.2228e-08
GN     Iter    4           4           4           4           4           4
       Time    0.0508      0.0631      0.1839      0.5338      1.5809      3.1439
       Res     2.4441e-13  1.7123e-12  5.8799e-12  1.4538e-11  2.5154e-11  4.2559e-11
       Err     3.0969e-14  2.1237e-13  7.4333e-13  1.8532e-12  3.2322e-12  5.4819e-12
Table 4. Numerical results for Example 3.

m              20          30          40          50          60          70
MSCG   Iter    100         102         104         106         106         108
       Time    0.0547      0.0067      0.0058      0.0090      0.0145      0.0204
       Res     7.7706e-07  8.7960e-07  8.3790e-07  7.3681e-07  8.7462e-07  7.1258e-07
       Err     1.4827e-07  1.6905e-07  1.6158e-07  1.4236e-07  1.6921e-07  1.3799e-07
MMSGA  Iter    282         279         300         413         532         517
       Time    0.1903      0.6642      1.8153      5.9778      16.6377     30.1169
       Res     7.5264e-07  7.6801e-07  9.3640e-07  9.6159e-07  9.5593e-07  9.7385e-07
       Err     1.5684e-07  1.6204e-07  2.0482e-07  2.0997e-07  2.1293e-07  2.2972e-07
MSOR   Iter    9           10          10          10          10          10
       Time    0.3237      0.1138      0.3903      2.7686      12.8657     37.7472
       Res     5.2065e-07  1.6860e-07  2.1851e-07  2.6716e-07  3.1549e-07  3.6379e-07
       Err     9.9605e-08  2.9425e-08  4.0864e-08  5.2339e-08  6.3860e-08  7.5418e-08
GN     Iter    3           4           4           4           4           4
       Time    0.0739      0.0673      0.1877      0.6389      1.6707      3.1735
       Res     8.5492e-07  1.5275e-12  4.9211e-12  1.1935e-11  2.3194e-11  3.7757e-11
       Err     1.3707e-07  2.0417e-13  6.5438e-13  1.5791e-12  3.0854e-12  5.0251e-12
Table 5. Numerical results for Example 4.

r              2           5           10          20          50          100
MSCG   Iter    23          23          23          23          23          23
       Time    0.0069      0.0045      0.0028      0.0044      0.0100      0.0159
       Res     6.6105e-07  6.6187e-07  6.4937e-07  6.4864e-07  6.6740e-07  6.5931e-07
       Err     6.2687e-08  6.2658e-08  6.1515e-08  6.1365e-08  6.2760e-08  6.1137e-08
MMSGA  Iter    67          63          77          78          67          66
       Time    0.1772      0.1563      0.1930      0.2027      0.1836      0.1914
       Res     7.9526e-07  9.9459e-07  7.3965e-07  9.6005e-07  9.9015e-07  6.0634e-07
       Err     8.3993e-08  1.0422e-07  8.0545e-08  1.0009e-07  1.0606e-07  7.0451e-08
MSOR   Iter    8           8           8           8           8           8
       Time    0.7350      0.5994      0.5857      0.4541      0.6731      0.6574
       Res     5.4330e-07  5.4816e-07  5.5486e-07  5.6886e-07  6.1015e-07  6.6834e-07
       Err     7.0453e-08  7.0641e-08  6.9921e-08  6.8771e-08  6.7298e-08  6.7406e-08
GN     Iter    4           4           4           4           4           4
       Time    0.0957      0.0646      0.0705      0.0914      0.1561      0.2753
       Res     4.6026e-12  1.6831e-12  8.0434e-13  3.4623e-13  7.9322e-14  5.7396e-14
       Err     5.8097e-13  2.1482e-13  1.0123e-13  4.3809e-14  9.9165e-15  7.3165e-15
