Article

A Two-Step Iteration Method for Vertical Linear Complementarity Problems

Yunlin Song, Hua Zheng, Xiaoping Lu and Seak-Weng Vong
1 Department of Engineering Science, Macau University of Science and Technology, Macao 999078, China
2 School of Mathematics and Statistics, Shaoguan University, Shaoguan 512005, China
3 School of Computer Science and Engineering, Macau University of Science and Technology, Macao 999078, China
4 Department of Mathematics, University of Macau, Macao 519000, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(9), 1882; https://doi.org/10.3390/sym14091882
Submission received: 9 August 2022 / Revised: 31 August 2022 / Accepted: 6 September 2022 / Published: 8 September 2022
(This article belongs to the Special Issue Tensors and Matrices in Symmetry with Applications)

Abstract: In this paper, a two-step iteration method is established for solving vertical linear complementarity problems, which can be viewed as a generalisation of the existing modulus-based methods. The convergence analysis of the proposed method is presented; compared with recent results, the convergence domain of the parameter matrix can be enlarged. Numerical examples show that the proposed method is efficient with the two-step technique and confirm the improvement in the theoretical results.

1. Introduction

As a generalisation of the linear complementarity problem (LCP) [1], the vertical linear complementarity problem (VLCP) has wide applications in many fields of science and technology, such as control theory, nonlinear networks and economics; see [2,3,4,5,6,7] for details. Let $A_1, \dots, A_\ell \in \mathbb{R}^{n \times n}$ and $q_1, \dots, q_\ell \in \mathbb{R}^n$, where $\ell$ is a positive integer. The VLCP($\ell$) seeks to find $z, w_1, \dots, w_\ell \in \mathbb{R}^n$ satisfying
$$w_i = A_i z + q_i, \quad i = 1, \dots, \ell, \qquad \min\{z, w_1, \dots, w_\ell\} = 0, \qquad (1)$$
where the minimum operation is taken component-wise, which implies that all the involved vectors are non-negative and that, for each $i$ ($i = 1, 2, \dots, n$), at least one of the $i$th components of $z, w_1, \dots, w_\ell$ is zero. Note that the VLCP(1) is exactly the LCP.
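To make the componentwise minimum condition concrete, the following small Python check (our illustration, not part of the original paper; the instance and the candidate solution are hypothetical) verifies a solution of a VLCP(2):

```python
import numpy as np

# A small VLCP(2) instance: find z >= 0 with w_i = A_i z + q_i >= 0
# and min{z, w_1, w_2} = 0 componentwise.
A1 = np.array([[4.0, -1.0], [-1.0, 4.0]])
A2 = np.array([[4.0, -2.0], [0.0, 4.0]])
q1 = np.array([-3.0, 1.0])
q2 = np.array([-2.0, 2.0])

def is_vlcp_solution(z, tol=1e-10):
    w1, w2 = A1 @ z + q1, A2 @ z + q2
    stacked = np.vstack([z, w1, w2])
    nonneg = stacked.min() > -tol                 # all involved vectors non-negative
    min_zero = np.all(stacked.min(axis=0) < tol)  # each component's minimum is zero
    return nonneg and min_zero

print(is_vlcp_solution(np.array([0.75, 0.0])))    # True: z = (0.75, 0) solves it
```

Here $z = (0.75, 0)^T$ gives $w_1 = (0, 0.25)^T$ and $w_2 = (1, 2)^T$, so the componentwise minimum vanishes.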
Recently, a modulus-based formulation for solving the VLCP was introduced by Mezzadri in [8], which results in a class of modulus-based matrix splitting (MMS) iteration methods, shown to be more efficient than the smoothing Newton method [9]. This kind of modulus-based method can be viewed as a generalisation of the one for solving the LCP [10]. For other existing methods for solving the VLCP, one can refer to the recent studies [8,11,12] and the references therein. The MMS methods have been successfully used to solve many kinds of complementarity problems owing to the high efficiency of solving the linear modulus equations in each iteration. In recent works, many acceleration techniques, such as double splitting [13], preconditioning [14], two-step splitting [15,16,17,18] and relaxation [19], have also been proposed to improve the convergence rate of MMS methods.
In this work, we focus on applying the two-step splitting technique to the equivalent modulus equation of the VLCP. The advantage of two-step splitting is that it makes full use of the information of the system matrices in each iteration. Such a technique has been successfully used for the LCP [15,16], the nonlinear complementarity problem [17] and the horizontal LCP [18]. Numerical results showed that the computation time can be reduced significantly by the two-step technique compared with the original iteration method. Hence, we aim to construct the two-step modulus-based matrix splitting (TMMS) iteration method for the VLCP, which is done in Section 2. The convergence theorems of the proposed method are given in Section 3 and are shown to improve the existing ones for the MMS method. The efficiency of the proposed method is demonstrated by the numerical tests in Section 4. Concluding remarks are given in Section 5.
Some necessary notations, definitions and known results are given first. Let $e = (1, 1, \dots, 1)^T \in \mathbb{R}^n$. Let $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ and $A = D_A - C_A = D_A - L_A - U_A$, where $D_A$, $C_A$, $L_A$ and $U_A$ denote the diagonal, nondiagonal, strictly lower triangular and strictly upper triangular parts of $A$, respectively. By $\rho(A)$, we denote the spectral radius of $A$. For $A = (a_{ij}) \in \mathbb{R}^{n \times n}$, $A > (\geq)\, 0$ means that $a_{ij} > (\geq)\, 0$ for all $i, j$. For two matrices $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{m \times n}$, the order $A \geq (>)\, B$ means $a_{ij} \geq (>)\, b_{ij}$ for any $i$ and $j$. By $|A|$, we denote $|A| = (|a_{ij}|)$, and the comparison matrix of $A$ is $\langle A \rangle = (\langle a \rangle_{ij})$, where $\langle a \rangle_{ij} = |a_{ij}|$ if $i = j$ and $\langle a \rangle_{ij} = -|a_{ij}|$ if $i \neq j$. $A$ is called a Z-matrix if $a_{ij} \leq 0$ for any $i \neq j$; a nonsingular M-matrix if it is a nonsingular Z-matrix with $A^{-1} \geq 0$; an H-matrix if $\langle A \rangle$ is a nonsingular M-matrix; a strictly diagonally dominant (s.d.d.) matrix if $|a_{ii}| > \sum_{j \neq i} |a_{ij}|$ for all $1 \leq i \leq n$ (e.g., see [20]); and an $H_+$-matrix if $A$ is an H-matrix with $a_{ii} > 0$ for every $i$ (e.g., see [21]). $A = M - N$ is called an H-splitting if $\langle M \rangle - |N|$ is a nonsingular M-matrix (e.g., see [22]). It is known that the VLCP has a unique solution if the row-representative matrices of $\{A_1, A_2, \dots, A_\ell\}$ satisfy the row $\mathcal{W}$-property; see [7]. In the following discussion, we always assume that both the system matrices of the VLCP and their row-representative matrices are $H_+$-matrices, which is a sufficient condition for the row $\mathcal{W}$-property and includes many typical situations where the solution is unique; see [7,8].
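The following Python helpers (a sketch of ours, written from the definitions above) compute the comparison matrix and test the $H_+$ property via the equivalent spectral-radius criterion $\rho(D_A^{-1}|C_A|) < 1$:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal and -|a_ij| off the diagonal."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_nonsingular_M_matrix(A, tol=1e-12):
    """Z-matrix with positive diagonal and rho(D^{-1}C) < 1, where A = D - C."""
    offdiag = A - np.diag(np.diag(A))
    if np.any(offdiag > tol) or np.any(np.diag(A) <= 0):
        return False
    D_inv_C = np.linalg.solve(np.diag(np.diag(A)), -offdiag)
    return max(abs(np.linalg.eigvals(D_inv_C))) < 1

def is_H_plus_matrix(A):
    """H-matrix (comparison matrix is a nonsingular M-matrix) with a_ii > 0."""
    return np.all(np.diag(A) > 0) and is_nonsingular_M_matrix(comparison_matrix(A))
```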

2. New Method

First, we introduce the MMS method for solving the VLCP($\ell$).
Let $A_i = F_i - G_i$ ($i = 1, 2, \dots, \ell$) be splittings of $A_i$, $\Omega$ be a diagonal matrix with positive diagonal entries and $\gamma$ be a positive constant. Then, with
$$z = \frac{1}{\gamma}(|x_1| + x_1), \quad w_j = \sum_{i=1}^{j} \frac{\Omega}{\gamma}(|x_i| - x_i) + \frac{\Omega}{\gamma}(|x_{j+1}| + x_{j+1}), \ j = 1, 2, \dots, \ell - 1, \quad w_\ell = \sum_{i=1}^{\ell} \frac{\Omega}{\gamma}(|x_i| - x_i), \qquad (2)$$
the VLCP($\ell$) can be equivalently transformed into a system of fixed-point equations
$$\begin{aligned} \Big(2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i + F_\ell\Big) x_1 &= \Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i + G_\ell\Big) x_1 + \Big(2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big)|x_1| \\ &\quad + \Omega \sum_{i=2}^{\ell} 2^{\ell-i+1}|x_i| - \gamma\Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} q_i + q_\ell\Big), \\ x_j &= \frac{1}{2}\Omega^{-1}\big[(A_{j-1} - A_j)(|x_1| + x_1) + \gamma(|x_{j+1}| + x_{j+1}) + \gamma q_{j-1} - \gamma q_j\big], \ j = 2, 3, \dots, \ell - 1, \\ x_\ell &= \frac{1}{2}\Omega^{-1}\big[(A_{\ell-1} - A_\ell)(|x_1| + x_1) + \gamma q_{\ell-1} - \gamma q_\ell\big]; \end{aligned} \qquad (3)$$
see [8] for more details. Based on (2) and (3), the MMS method is presented as follows:
Method 1
([8]). MMS method for the VLCP($\ell$).
Let $A_i = F_i - G_i$ ($i = 1, 2, \dots, \ell$) be $\ell$ splittings of $A_i \in \mathbb{R}^{n \times n}$, $\Omega \in \mathbb{R}^{n \times n}$ be a diagonal matrix with positive diagonal entries and $\gamma$ be a positive constant. Given $x_1^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \dots$, compute $x_2^{(k)}, \dots, x_\ell^{(k)}$ by
$$x_\ell^{(k)} = \frac{1}{2}\Omega^{-1}\big[(A_{\ell-1} - A_\ell)(|x_1^{(k)}| + x_1^{(k)}) + \gamma q_{\ell-1} - \gamma q_\ell\big], \quad x_j^{(k)} = \frac{1}{2}\Omega^{-1}\big[(A_{j-1} - A_j)(|x_1^{(k)}| + x_1^{(k)}) + \gamma(|x_{j+1}^{(k)}| + x_{j+1}^{(k)}) + \gamma q_{j-1} - \gamma q_j\big], \ j = \ell-1, \ell-2, \dots, 2, \qquad (4)$$
and compute $x_1^{(k+1)} \in \mathbb{R}^n$ by
$$\Big(2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i + F_\ell\Big) x_1^{(k+1)} = \Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i + G_\ell\Big) x_1^{(k)} + \Big(2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big)|x_1^{(k)}| + \Omega \sum_{i=2}^{\ell} 2^{\ell-i+1}|x_i^{(k)}| - \gamma\Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} q_i + q_\ell\Big).$$
Then, set
$$z^{(k+1)} = \frac{1}{\gamma}(|x_1^{(k+1)}| + x_1^{(k+1)}), \quad w_j^{(k+1)} = \sum_{i=1}^{j} \frac{\Omega}{\gamma}(|x_i^{(k+1)}| - x_i^{(k+1)}) + \frac{\Omega}{\gamma}(|x_{j+1}^{(k+1)}| + x_{j+1}^{(k+1)}), \ j = 1, 2, \dots, \ell - 1, \quad w_\ell^{(k+1)} = \sum_{i=1}^{\ell} \frac{\Omega}{\gamma}(|x_i^{(k+1)}| - x_i^{(k+1)}), \qquad (5)$$
until the iteration converges.
In order to make full use of the information in the system matrices, by the two-step matrix splitting technique, we construct the two-step modulus-based matrix splitting (TMMS) iteration method as follows:
Method 2.
TMMS method for the VLCP($\ell$).
Let $\Omega \in \mathbb{R}^{n \times n}$ be a diagonal matrix with positive diagonal entries, $\gamma$ be a positive constant, and $A_i = F_i^{(t)} - G_i^{(t)}$ ($t = 1, 2$) be two splittings of $A_i$ ($i = 1, 2, \dots, \ell$). Given an initial vector $x_1^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \dots$, compute $x_1^{(k+1)} \in \mathbb{R}^n$ by
$$\begin{aligned} \Big(2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i^{(1)} + F_\ell^{(1)}\Big) x_1^{(k+\frac{1}{2})} &= \Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i^{(1)} + G_\ell^{(1)}\Big) x_1^{(k)} + \Big(2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big)|x_1^{(k)}| \\ &\quad + \Omega \sum_{i=2}^{\ell} 2^{\ell-i+1}|x_i^{(k)}| - \gamma\Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} q_i + q_\ell\Big), \\ \Big(2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i^{(2)} + F_\ell^{(2)}\Big) x_1^{(k+1)} &= \Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i^{(2)} + G_\ell^{(2)}\Big) x_1^{(k+\frac{1}{2})} + \Big(2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big)|x_1^{(k+\frac{1}{2})}| \\ &\quad + \Omega \sum_{i=2}^{\ell} 2^{\ell-i+1}|x_i^{(k+\frac{1}{2})}| - \gamma\Big(\sum_{i=1}^{\ell-1} 2^{\ell-i-1} q_i + q_\ell\Big), \end{aligned} \qquad (6)$$
where $x_2^{(k)}, \dots, x_\ell^{(k)}$ (and, analogously, $x_2^{(k+\frac{1}{2})}, \dots, x_\ell^{(k+\frac{1}{2})}$) are computed by (4). Then, set the same sequences as in (5) until the iteration converges.
Clearly, if we take $F_i^{(1)} = F_i^{(2)}$ and $G_i^{(1)} = G_i^{(2)}$, Method 2 reduces to Method 1. For the simplest case, when $\ell = 2$, by (4), the main iteration (6) reduces to
$$\begin{aligned} (2\Omega + F_1^{(1)} + F_2^{(1)}) x_1^{(k+\frac{1}{2})} &= (G_1^{(1)} + G_2^{(1)}) x_1^{(k)} + (2\Omega - A_1 - A_2)|x_1^{(k)}| \\ &\quad + |(A_1 - A_2)(|x_1^{(k)}| + x_1^{(k)}) + \gamma q_1 - \gamma q_2| - \gamma(q_1 + q_2), \\ (2\Omega + F_1^{(2)} + F_2^{(2)}) x_1^{(k+1)} &= (G_1^{(2)} + G_2^{(2)}) x_1^{(k+\frac{1}{2})} + (2\Omega - A_1 - A_2)|x_1^{(k+\frac{1}{2})}| \\ &\quad + |(A_1 - A_2)(|x_1^{(k+\frac{1}{2})}| + x_1^{(k+\frac{1}{2})}) + \gamma q_1 - \gamma q_2| - \gamma(q_1 + q_2). \end{aligned} \qquad (7)$$
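As an illustration, a minimal NumPy sketch of iteration (7) for $\ell = 2$ follows (our code, not from the paper; dense solves are used for simplicity, and the stopping rule on successive iterates is an assumption of ours):

```python
import numpy as np

def half_step(x1, F1, G1, F2, G2, A1, A2, q1, q2, Omega, gamma):
    # Solve (2*Omega + F1 + F2) y = (G1 + G2) x1 + (2*Omega - A1 - A2)|x1|
    #   + |(A1 - A2)(|x1| + x1) + gamma*(q1 - q2)| - gamma*(q1 + q2).
    rhs = ((G1 + G2) @ x1
           + (2 * Omega - A1 - A2) @ np.abs(x1)
           + np.abs((A1 - A2) @ (np.abs(x1) + x1) + gamma * (q1 - q2))
           - gamma * (q1 + q2))
    return np.linalg.solve(2 * Omega + F1 + F2, rhs)

def tmms_ell2(A1, A2, q1, q2, split1, split2, Omega, gamma=1.0, tol=1e-6, kmax=1000):
    """split_t = (F1t, G1t, F2t, G2t) with A_i = F_i^(t) - G_i^(t), t = 1, 2."""
    x1 = np.ones(A1.shape[0])
    for _ in range(kmax):
        x_half = half_step(x1, *split1, A1, A2, q1, q2, Omega, gamma)
        x_new = half_step(x_half, *split2, A1, A2, q1, q2, Omega, gamma)
        if np.linalg.norm(x_new - x1, np.inf) < tol:  # assumed stopping rule
            x1 = x_new
            break
        x1 = x_new
    return (np.abs(x1) + x1) / gamma                  # recover z as in (2)
```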
Moreover, by specifically choosing the matrix splittings of the system matrices, one can obtain some TMMS relaxation methods. For $i = 1, 2, \dots, \ell$, taking
$$F_i^{(1)} = \frac{1}{\alpha}(D_{A_i} - \beta L_{A_i}), \quad G_i^{(1)} = F_i^{(1)} - A_i, \qquad F_i^{(2)} = \frac{1}{\alpha}(D_{A_i} - \beta U_{A_i}), \quad G_i^{(2)} = F_i^{(2)} - A_i, \qquad (8)$$
we obtain the two-step modulus-based accelerated overrelaxation (TMAOR) iteration method. Taking $(\alpha, \beta) = (\alpha, \alpha)$, $(\alpha, \beta) = (1, 1)$ and $(\alpha, \beta) = (1, 0)$, the TMAOR method reduces to the two-step modulus-based successive overrelaxation (TMSOR), Gauss–Seidel (TMGS) and Jacobi (TMJ) iteration methods, respectively.
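For example, the splittings (8) can be formed as follows (a sketch of ours; `np.tril`/`np.triu` extract the strictly lower and upper parts, so that $A = D_A - L_A - U_A$ in the paper's notation):

```python
import numpy as np

def aor_splittings(A, alpha, beta):
    """Return ((F1, G1), (F2, G2)) for the two-step AOR splittings (8)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                 # strictly lower part, A = D - L - U
    U = -np.triu(A, 1)                  # strictly upper part
    F1 = (D - beta * L) / alpha
    F2 = (D - beta * U) / alpha
    return (F1, F1 - A), (F2, F2 - A)

# TMSOR corresponds to beta = alpha; TMGS to alpha = beta = 1; TMJ to alpha = 1, beta = 0.
```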

3. Convergence Analysis

Lemma 3
([20]). Assume that A is a Z-matrix. Then, the following three statements are equivalent:
(1) 
A is a nonsingular M-matrix;
(2) 
There exists a diagonal matrix D with positive diagonal entries, such that A D is an s.d.d. matrix with positive diagonal entries.
(3) 
If $A = F - G$ satisfies $F^{-1} \geq 0$ and $G \geq 0$, then $\rho(F^{-1}G) < 1$.
Lemma 4
([23]). Let A be an H-matrix. Then $|A^{-1}| \leq \langle A \rangle^{-1}$.
Lemma 5
([24]). Let $B \in \mathbb{R}^{n \times n}$ be an s.d.d. matrix. Then, for any $C \in \mathbb{R}^{n \times n}$,
$$\|B^{-1}C\|_\infty \leq \max_{1 \leq i \leq n} \frac{(|C|e)_i}{(\langle B \rangle e)_i}.$$
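A quick numerical sanity check of Lemma 5 (our snippet; the random test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.uniform(-1.0, 1.0, (n, n))
np.fill_diagonal(B, np.abs(B).sum(axis=1) + 1.0)  # force strict diagonal dominance
C = rng.uniform(-1.0, 1.0, (n, n))

comp_B = -np.abs(B)                               # comparison matrix <B>
np.fill_diagonal(comp_B, np.abs(np.diag(B)))
e = np.ones(n)

lhs = np.abs(np.linalg.solve(B, C)).sum(axis=1).max()  # ||B^{-1} C||_inf
rhs = ((np.abs(C) @ e) / (comp_B @ e)).max()
print(lhs <= rhs)  # True
```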
We first give the convergence result for $\ell = 2$.
Theorem 6.
Let $A_1, A_2$ and all their row-representative matrices be $H_+$-matrices. Let $A_i = F_i^{(t)} - G_i^{(t)}$ ($t = 1, 2$) be two splittings of $A_i$ ($i = 1, 2$). Assume that:
(I) 
For $t = 1, 2$, $A_1 = F_1^{(t)} - G_1^{(t)}$ is a splitting of $A_1$ satisfying $D_{F_1^{(t)}} > 0$, and $A_2 = F_2^{(t)} - G_2^{(t)}$ is an H-splitting of $A_2$;
(II) 
For $t = 1, 2$, $\langle F_1^{(t)} \rangle \geq \langle F_2^{(t)} \rangle$ and $|G_2^{(t)}| \geq |G_1^{(t)}|$;
(III) 
There exists a diagonal matrix $T$ with positive diagonal entries such that both $(\langle F_2^{(1)} \rangle - |G_2^{(1)}|)T$ and $(\langle F_2^{(2)} \rangle - |G_2^{(2)}|)T$ are s.d.d. matrices;
(IV) 
$\Omega T e > \max_{t=1,2}\big\{\big[\tfrac{1}{2}(D_{F_1^{(t)}} + D_{F_2^{(t)}}) - (\langle F_2^{(t)} \rangle - |G_2^{(t)}|)\big] T e\big\}$.
Then, Method 2 converges to the solution of the VLCP(2).
Proof. 
For $t = 1, 2$, by Assumptions (II) and (III), we have
$$\langle 2\Omega + F_1^{(t)} + F_2^{(t)} \rangle T e > \langle F_1^{(t)} + F_2^{(t)} \rangle T e \geq (\langle F_1^{(t)} \rangle + \langle F_2^{(t)} \rangle) T e \geq 2\langle F_2^{(t)} \rangle T e \geq 2(\langle F_2^{(t)} \rangle - |G_2^{(t)}|) T e > 0.$$
Therefore, $\langle 2\Omega + F_1^{(t)} + F_2^{(t)} \rangle T$ is an s.d.d. matrix, which implies that $2\Omega + F_1^{(t)} + F_2^{(t)}$ is an H-matrix.
Let $x_1^*$ be the solution of (3). By (7), we can obtain the errors at the iteration $(k+1)$:
$$\begin{aligned} (2\Omega + F_1^{(1)} + F_2^{(1)})(x_1^{(k+\frac{1}{2})} - x_1^*) &= (G_1^{(1)} + G_2^{(1)})(x_1^{(k)} - x_1^*) + (2\Omega - A_1 - A_2)(|x_1^{(k)}| - |x_1^*|) \\ &\quad + |(A_1 - A_2)(|x_1^{(k)}| + x_1^{(k)}) + \gamma q_1 - \gamma q_2| - |(A_1 - A_2)(|x_1^*| + x_1^*) + \gamma q_1 - \gamma q_2|, \\ (2\Omega + F_1^{(2)} + F_2^{(2)})(x_1^{(k+1)} - x_1^*) &= (G_1^{(2)} + G_2^{(2)})(x_1^{(k+\frac{1}{2})} - x_1^*) + (2\Omega - A_1 - A_2)(|x_1^{(k+\frac{1}{2})}| - |x_1^*|) \\ &\quad + |(A_1 - A_2)(|x_1^{(k+\frac{1}{2})}| + x_1^{(k+\frac{1}{2})}) + \gamma q_1 - \gamma q_2| - |(A_1 - A_2)(|x_1^*| + x_1^*) + \gamma q_1 - \gamma q_2|. \end{aligned} \qquad (9)$$
Then, by Lemma 4, we obtain
$$|x_1^{(k+1)} - x_1^*| \leq P^{(2)} P^{(1)} |x_1^{(k)} - x_1^*|, \qquad (10)$$
where
$$P^{(t)} = \langle F^{(t)} \rangle^{-1} G^{(t)}, \quad F^{(t)} = 2\Omega + F_1^{(t)} + F_2^{(t)}, \quad G^{(t)} = |G_1^{(t)} + G_2^{(t)}| + |2\Omega - A_1 - A_2| + 2|A_1 - A_2|.$$
By Lemma 5, we have
$$\|T^{-1} P^{(t)} T\|_\infty = \|T^{-1} \langle F^{(t)} \rangle^{-1} G^{(t)} T\|_\infty = \|(\langle F^{(t)} \rangle T)^{-1} (G^{(t)} T)\|_\infty \leq \max_{1 \leq i \leq n} \frac{(G^{(t)} T e)_i}{(\langle F^{(t)} \rangle T e)_i}. \qquad (11)$$
Still considering Assumption (II), we can obtain
$$|G_1^{(t)} + G_2^{(t)}| + |G_1^{(t)} - G_2^{(t)}| = 2|G_2^{(t)}|, \quad |C_{F_1^{(t)}} + C_{F_2^{(t)}}| + |C_{F_1^{(t)}} - C_{F_2^{(t)}}| = 2|C_{F_2^{(t)}}|, \quad |D_{F_1^{(t)}} - D_{F_2^{(t)}}| = D_{F_1^{(t)}} - D_{F_2^{(t)}}. \qquad (12)$$
Then, we have
$$\begin{aligned} \langle F^{(t)} \rangle T e - G^{(t)} T e &= \big(\langle 2\Omega + F_1^{(t)} + F_2^{(t)} \rangle - |G_1^{(t)} + G_2^{(t)}| - |2\Omega - A_1 - A_2| - 2|A_1 - A_2|\big) T e \\ &\geq \big(2\Omega + D_{F_1^{(t)}} + D_{F_2^{(t)}} - |C_{F_1^{(t)}} + C_{F_2^{(t)}}| - |G_1^{(t)} + G_2^{(t)}| - |2\Omega - D_{F_1^{(t)}} - D_{F_2^{(t)}}| - |C_{F_1^{(t)}} + C_{F_2^{(t)}}| \\ &\qquad - |G_1^{(t)} + G_2^{(t)}| - 2|D_{F_1^{(t)}} - D_{F_2^{(t)}}| - 2|C_{F_1^{(t)}} - C_{F_2^{(t)}}| - 2|G_1^{(t)} - G_2^{(t)}|\big) T e \\ &= \big(2\Omega + 3D_{F_2^{(t)}} - D_{F_1^{(t)}} - |2\Omega - D_{F_1^{(t)}} - D_{F_2^{(t)}}| - 2|G_1^{(t)} + G_2^{(t)}| - 2|G_1^{(t)} - G_2^{(t)}| - 2|C_{F_1^{(t)}} + C_{F_2^{(t)}}| - 2|C_{F_1^{(t)}} - C_{F_2^{(t)}}|\big) T e \\ &= \big(2\Omega + 3D_{F_2^{(t)}} - D_{F_1^{(t)}} - |2\Omega - D_{F_1^{(t)}} - D_{F_2^{(t)}}| - 4|G_2^{(t)}| - 4|C_{F_2^{(t)}}|\big) T e, \end{aligned} \qquad (13)$$
where the last two equalities hold by (12).
When
$$\Omega \geq \frac{1}{2}(D_{F_1^{(t)}} + D_{F_2^{(t)}}),$$
by (13), we have
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq \big[2\Omega + 3D_{F_2^{(t)}} - D_{F_1^{(t)}} - (2\Omega - D_{F_1^{(t)}} - D_{F_2^{(t)}}) - 4|G_2^{(t)}| - 4|C_{F_2^{(t)}}|\big] T e = (4D_{F_2^{(t)}} - 4|G_2^{(t)}| - 4|C_{F_2^{(t)}}|) T e = 4(\langle F_2^{(t)} \rangle - |G_2^{(t)}|) T e > 0. \qquad (14)$$
When
$$\big[\tfrac{1}{2}(D_{F_1^{(t)}} + D_{F_2^{(t)}}) - (\langle F_2^{(t)} \rangle - |G_2^{(t)}|)\big] T e < \Omega T e < \tfrac{1}{2}(D_{F_1^{(t)}} + D_{F_2^{(t)}}) T e,$$
by (13), we have
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq \big[2\Omega + 3D_{F_2^{(t)}} - D_{F_1^{(t)}} - (D_{F_1^{(t)}} + D_{F_2^{(t)}} - 2\Omega) - 4|G_2^{(t)}| - 4|C_{F_2^{(t)}}|\big] T e = (4\Omega + 2D_{F_2^{(t)}} - 2D_{F_1^{(t)}} - 4|G_2^{(t)}| - 4|C_{F_2^{(t)}}|) T e = 4\big[\Omega - \tfrac{1}{2}(D_{F_1^{(t)}} + D_{F_2^{(t)}}) + \langle F_2^{(t)} \rangle - |G_2^{(t)}|\big] T e > 0. \qquad (15)$$
Combining (14) and (15), we have $\langle F^{(t)} \rangle T e - G^{(t)} T e > 0$ provided that Assumption (IV) holds. Then, by (11), we have $\|T^{-1} \langle F^{(t)} \rangle^{-1} G^{(t)} T\|_\infty < 1$. Hence,
$$\rho(P^{(2)} P^{(1)}) \leq \|T^{-1} P^{(2)} P^{(1)} T\|_\infty \leq \|T^{-1} P^{(2)} T\|_\infty \, \|T^{-1} P^{(1)} T\|_\infty < 1, \qquad (16)$$
which implies that $\lim_{k \to +\infty} x_1^{(k)} = x_1^*$ by (10), ending the proof. □
Remark 7.
By the proof of Theorem 6, Assumption (II) is relevant in the derivation of the first chain of inequalities. A simple example can be given to help readers understand it. Let
$$A_1 = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & & \\ & & 4 & -1 \\ & & -1 & 4 \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} 4 & -1 & & -1 \\ -1 & 4 & -1 & \\ & -1 & 4 & -1 \\ -1 & & -1 & 4 \end{pmatrix}.$$
Consider the two-step SOR splitting, where
$$F_1^{(1)} = \begin{pmatrix} \frac{4}{\alpha} & & & \\ -1 & \frac{4}{\alpha} & & \\ & & \frac{4}{\alpha} & \\ & & -1 & \frac{4}{\alpha} \end{pmatrix}, \quad G_1^{(1)} = \begin{pmatrix} \frac{4}{\alpha} - 4 & 1 & & \\ & \frac{4}{\alpha} - 4 & & \\ & & \frac{4}{\alpha} - 4 & 1 \\ & & & \frac{4}{\alpha} - 4 \end{pmatrix},$$
$$F_1^{(2)} = \begin{pmatrix} \frac{4}{\alpha} & -1 & & \\ & \frac{4}{\alpha} & & \\ & & \frac{4}{\alpha} & -1 \\ & & & \frac{4}{\alpha} \end{pmatrix}, \quad G_1^{(2)} = \begin{pmatrix} \frac{4}{\alpha} - 4 & & & \\ 1 & \frac{4}{\alpha} - 4 & & \\ & & \frac{4}{\alpha} - 4 & \\ & & 1 & \frac{4}{\alpha} - 4 \end{pmatrix},$$
$$F_2^{(1)} = \begin{pmatrix} \frac{4}{\alpha} & & & \\ -1 & \frac{4}{\alpha} & & \\ & -1 & \frac{4}{\alpha} & \\ -1 & & -1 & \frac{4}{\alpha} \end{pmatrix}, \quad G_2^{(1)} = \begin{pmatrix} \frac{4}{\alpha} - 4 & 1 & & 1 \\ & \frac{4}{\alpha} - 4 & 1 & \\ & & \frac{4}{\alpha} - 4 & 1 \\ & & & \frac{4}{\alpha} - 4 \end{pmatrix},$$
$$F_2^{(2)} = \begin{pmatrix} \frac{4}{\alpha} & -1 & & -1 \\ & \frac{4}{\alpha} & -1 & \\ & & \frac{4}{\alpha} & -1 \\ & & & \frac{4}{\alpha} \end{pmatrix}, \quad G_2^{(2)} = \begin{pmatrix} \frac{4}{\alpha} - 4 & & & \\ 1 & \frac{4}{\alpha} - 4 & & \\ & 1 & \frac{4}{\alpha} - 4 & \\ 1 & & 1 & \frac{4}{\alpha} - 4 \end{pmatrix}.$$
Clearly, for the matrices presented above, Assumption (II) is satisfied. Moreover, by simple computation, one can easily determine that $\langle A_1 \rangle \geq \langle A_2 \rangle$ with two-step AOR splittings is a sufficient condition for (II).
Remark 8.
If we take $F_i^{(1)} = F_i^{(2)}$ and $G_i^{(1)} = G_i^{(2)}$, all the assumptions in Theorem 6 reduce to those in Theorem 4.1 of [8]. Clearly, Assumption (IV) is weaker than the corresponding one in Theorem 4.1 of [8], where $\Omega$ was assumed to satisfy $\Omega \geq \frac{1}{2}(D_{F_1} + D_{F_2})$. On the other hand, $2\Omega + F_1^{(t)} + F_2^{(t)}$ is proved to be an H-matrix in Theorem 6, rather than being set as an assumption as in [8].
Remark 9.
In view of the assumptions in Theorem 6, Assumption (III) may seem to be a special one. In fact, for some special cases, the matrix $T$ given in Assumption (III) can be computed. Taking the TMAOR method, where the matrix splittings are given by (8), for example, we have
$$\langle F_2^{(1)} \rangle - |G_2^{(1)}| = \langle F_2^{(2)} \rangle - |G_2^{(2)}| = \frac{1 - |1 - \alpha|}{\alpha} D_{A_2} - |C_{A_2}|.$$
Since $A_2$ is an $H_+$-matrix, by Lemma 3, we have $\rho(D_{A_2}^{-1}|C_{A_2}|) < 1$. By simple computation, if $0 < \beta \leq \alpha < \frac{2}{1 + \rho(D_{A_2}^{-1}|C_{A_2}|)}$, we can obtain that $\frac{1 - |1 - \alpha|}{\alpha} D_{A_2} - |C_{A_2}|$ is an M-matrix. Then, letting
$$T = \mathrm{diag}\Big[\Big(\frac{1 - |1 - \alpha|}{\alpha} D_{A_2} - |C_{A_2}|\Big)^{-1} e\Big],$$
Assumption (III) of Theorem 6 can be satisfied.
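Under the stated condition on α, this matrix $T$ can be computed directly; a small sketch (ours, following the formula above):

```python
import numpy as np

def remark9_T(A2, alpha):
    """T = diag(W^{-1} e) with W = ((1 - |1 - alpha|) / alpha) * D_{A2} - |C_{A2}|."""
    D = np.diag(np.diag(A2))           # A2 is an H_+-matrix, so diag(A2) > 0
    C = np.diag(np.diag(A2)) - A2      # A2 = D - C
    W = (1.0 - abs(1.0 - alpha)) / alpha * D - np.abs(C)
    t = np.linalg.solve(W, np.ones(A2.shape[0]))
    return np.diag(t)

# By construction, (W @ T) @ e = W @ t = e > 0; since W is a Z-matrix with positive
# diagonal, positive row sums make W @ T strictly diagonally dominant, so (III) holds.
```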
By a similar proof technique, we can obtain the convergence theorem for the VLCP($\ell$) with $\ell \geq 3$. We first show the idea of the proof when $\ell = 3$.
First, by (6) and the first equation of (3), we can determine the errors at the iteration $(k+1)$:
$$\begin{aligned} |x_1^{(k+\frac{1}{2})} - x_1^*| &\leq \langle 4\Omega + 2F_1^{(1)} + F_2^{(1)} + F_3^{(1)} \rangle^{-1} \big(|2G_1^{(1)} + G_2^{(1)} + G_3^{(1)}| + |4\Omega - 2A_1 - A_2 - A_3| \\ &\qquad + 2|2A_1 - A_2 - A_3| + 4|A_2 - A_3|\big) \, |x_1^{(k)} - x_1^*|, \\ |x_1^{(k+1)} - x_1^*| &\leq \langle 4\Omega + 2F_1^{(2)} + F_2^{(2)} + F_3^{(2)} \rangle^{-1} \big(|2G_1^{(2)} + G_2^{(2)} + G_3^{(2)}| + |4\Omega - 2A_1 - A_2 - A_3| \\ &\qquad + 2|2A_1 - A_2 - A_3| + 4|A_2 - A_3|\big) \, |x_1^{(k+\frac{1}{2})} - x_1^*|. \end{aligned}$$
If there exists a diagonal matrix $T$ with positive diagonal entries such that $(\langle F_3^{(t)} \rangle - |G_3^{(t)}|)T$, $t = 1, 2$, are s.d.d. matrices, we obtain
$$|x_1^{(k+1)} - x_1^*| \leq P^{(2)} P^{(1)} |x_1^{(k)} - x_1^*|,$$
where
$$P^{(t)} = \langle F^{(t)} \rangle^{-1} G^{(t)}, \quad F^{(t)} = 4\Omega + 2F_1^{(t)} + F_2^{(t)} + F_3^{(t)}, \quad G^{(t)} = |2G_1^{(t)} + G_2^{(t)} + G_3^{(t)}| + |4\Omega - 2A_1 - A_2 - A_3| + 2|2A_1 - A_2 - A_3| + 4|A_2 - A_3|.$$
If $2\langle F_1^{(t)} \rangle \geq \langle F_2^{(t)} \rangle + \langle F_3^{(t)} \rangle$, $\langle F_2^{(t)} \rangle \geq \langle F_3^{(t)} \rangle$, $2|G_1^{(t)}| \leq |G_2^{(t)} + G_3^{(t)}|$, and $|G_2^{(t)}| \leq |G_3^{(t)}|$ hold, we can also show that $\langle 4\Omega + 2F_1^{(t)} + F_2^{(t)} + F_3^{(t)} \rangle T$ is an s.d.d. matrix and
$$\|T^{-1} \langle F^{(t)} \rangle^{-1} G^{(t)} T\|_\infty \leq \max_{1 \leq i \leq n} \frac{(G^{(t)} T e)_i}{(\langle F^{(t)} \rangle T e)_i}.$$
Similarly to (13), we can obtain
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq \big(4\Omega - 2D_{F_1^{(t)}} - D_{F_2^{(t)}} + 7D_{F_3^{(t)}} - |4\Omega - 2D_{F_1^{(t)}} - D_{F_2^{(t)}} - D_{F_3^{(t)}}| - 8|G_3^{(t)}| - 8|C_{F_3^{(t)}}|\big) T e.$$
Then, we can also distinguish two cases with respect to $\Omega$, namely
$$\Omega \geq \frac{1}{4}(2D_{F_1^{(t)}} + D_{F_2^{(t)}} + D_{F_3^{(t)}})$$
and
$$\big[\tfrac{1}{4}(2D_{F_1^{(t)}} + D_{F_2^{(t)}} + D_{F_3^{(t)}}) - (\langle F_3^{(t)} \rangle - |G_3^{(t)}|)\big] T e < \Omega T e < \big[\tfrac{1}{4}(2D_{F_1^{(t)}} + D_{F_2^{(t)}} + D_{F_3^{(t)}})\big] T e,$$
and obtain
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq 8(\langle F_3^{(t)} \rangle - |G_3^{(t)}|) T e > 0$$
and
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq 8\big[\Omega - \tfrac{1}{4}(2D_{F_1^{(t)}} + D_{F_2^{(t)}} + D_{F_3^{(t)}}) + \langle F_3^{(t)} \rangle - |G_3^{(t)}|\big] T e > 0,$$
respectively.
In summary, we have the following result.
Theorem 10.
Let $A_1, A_2, A_3$ and all their row-representative matrices be $H_+$-matrices. Let $A_i = F_i^{(t)} - G_i^{(t)}$ ($t = 1, 2$) be two splittings of $A_i$ ($i = 1, 2, 3$). Assume that, for $t = 1, 2$:
(I) 
$D_{F_1^{(t)}} > 0$, $D_{F_2^{(t)}} > 0$, and $A_3 = F_3^{(t)} - G_3^{(t)}$ is an H-splitting of $A_3$;
(II) 
$2\langle F_1^{(t)} \rangle \geq \langle F_2^{(t)} \rangle + \langle F_3^{(t)} \rangle$, $\langle F_2^{(t)} \rangle \geq \langle F_3^{(t)} \rangle$, $2|G_1^{(t)}| \leq |G_2^{(t)} + G_3^{(t)}|$, and $|G_2^{(t)}| \leq |G_3^{(t)}|$;
(III) 
There exists a diagonal matrix $T$ with positive diagonal entries such that $(\langle F_3^{(t)} \rangle - |G_3^{(t)}|)T$, $t = 1, 2$, are s.d.d. matrices;
(IV) 
$\Omega T e > \big[\tfrac{1}{4}(2D_{F_1^{(t)}} + D_{F_2^{(t)}} + D_{F_3^{(t)}}) - (\langle F_3^{(t)} \rangle - |G_3^{(t)}|)\big] T e$.
Then, Method 2 converges to the solution of the VLCP(3).
Furthermore, for a general $\ell$, we can also show the main steps of the proof by a similar deduction.
In fact, the errors at the iteration $(k+1)$ are
$$\begin{aligned} |x_1^{(k+\frac{1}{2})} - x_1^*| \leq{}& \Big\langle 2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i^{(1)} + F_\ell^{(1)} \Big\rangle^{-1} \Big( \Big|\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i^{(1)} + G_\ell^{(1)}\Big| + \Big|2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big| \\ & + 2\Big[ 2^{\ell-2}|A_{\ell-1} - A_\ell| + \sum_{j=2}^{\ell-2} 2^{j-1} \Big|2^{\ell-j-1} A_j - \sum_{s=j+1}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| + \Big|2^{\ell-2} A_1 - \sum_{s=2}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| \Big] \Big) |x_1^{(k)} - x_1^*|, \\ |x_1^{(k+1)} - x_1^*| \leq{}& \Big\langle 2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i^{(2)} + F_\ell^{(2)} \Big\rangle^{-1} \Big( \Big|\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i^{(2)} + G_\ell^{(2)}\Big| + \Big|2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big| \\ & + 2\Big[ 2^{\ell-2}|A_{\ell-1} - A_\ell| + \sum_{j=2}^{\ell-2} 2^{j-1} \Big|2^{\ell-j-1} A_j - \sum_{s=j+1}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| + \Big|2^{\ell-2} A_1 - \sum_{s=2}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| \Big] \Big) |x_1^{(k+\frac{1}{2})} - x_1^*|. \end{aligned}$$
If there exists a diagonal matrix $T$ with positive diagonal entries such that $(\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|)T$, $t = 1, 2$, are s.d.d. matrices, we obtain
$$|x_1^{(k+1)} - x_1^*| \leq P^{(2)} P^{(1)} |x_1^{(k)} - x_1^*|,$$
where
$$\begin{aligned} P^{(t)} &= \langle F^{(t)} \rangle^{-1} G^{(t)}, \quad F^{(t)} = 2^{\ell-1}\Omega + \sum_{i=1}^{\ell-1} 2^{\ell-i-1} F_i^{(t)} + F_\ell^{(t)}, \\ G^{(t)} &= \Big|\sum_{i=1}^{\ell-1} 2^{\ell-i-1} G_i^{(t)} + G_\ell^{(t)}\Big| + \Big|2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} A_i - A_\ell\Big| \\ &\quad + 2\Big[ 2^{\ell-2}|A_{\ell-1} - A_\ell| + \sum_{j=2}^{\ell-2} 2^{j-1} \Big|2^{\ell-j-1} A_j - \sum_{s=j+1}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| + \Big|2^{\ell-2} A_1 - \sum_{s=2}^{\ell-1} 2^{\ell-s-1} A_s - A_\ell\Big| \Big]. \end{aligned}$$
If
$$2^{\ell-j} \langle F_{j-1}^{(t)} \rangle \geq \sum_{i=j}^{\ell-1} 2^{\ell-i-1} \langle F_i^{(t)} \rangle + \langle F_\ell^{(t)} \rangle \ (j = 2, 3, \dots, \ell-1), \qquad \langle F_{\ell-1}^{(t)} \rangle \geq \langle F_\ell^{(t)} \rangle \qquad (17)$$
and
$$2^{\ell-j} |G_{j-1}^{(t)}| \leq \Big|\sum_{i=j}^{\ell-1} 2^{\ell-i-1} G_i^{(t)} + G_\ell^{(t)}\Big| \ (j = 2, 3, \dots, \ell-1), \qquad |G_{\ell-1}^{(t)}| \leq |G_\ell^{(t)}| \qquad (18)$$
hold, we can also show that $\langle F^{(t)} \rangle T$ is an s.d.d. matrix and
$$\|T^{-1} \langle F^{(t)} \rangle^{-1} G^{(t)} T\|_\infty \leq \max_{1 \leq i \leq n} \frac{(G^{(t)} T e)_i}{(\langle F^{(t)} \rangle T e)_i}.$$
Similarly to (13), we can obtain
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq \Big[ 2^{\ell-1}\Omega + (2^{\ell} - 1) D_{F_\ell^{(t)}} - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} - 2^{\ell} |C_{F_\ell^{(t)}}| - 2^{\ell} |G_\ell^{(t)}| - \Big|2^{\ell-1}\Omega - \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} - D_{F_\ell^{(t)}}\Big| \Big] T e.$$
Then, we can also distinguish two cases with respect to $\Omega$, namely
$$\Omega \geq 2^{1-\ell} \Big( \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} + D_{F_\ell^{(t)}} \Big)$$
and
$$\Big[ 2^{1-\ell} \Big( \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} + D_{F_\ell^{(t)}} \Big) - (\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|) \Big] T e < \Omega T e < 2^{1-\ell} \Big( \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} + D_{F_\ell^{(t)}} \Big) T e,$$
and obtain
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq 2^{\ell} (\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|) T e > 0$$
and
$$\langle F^{(t)} \rangle T e - G^{(t)} T e \geq \Big[ 2^{\ell}\Omega - 2\Big( \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} + D_{F_\ell^{(t)}} \Big) + 2^{\ell} (\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|) \Big] T e > 0,$$
respectively. Finally, we have the following theorem.
Theorem 11.
Let $A_1, A_2, \dots, A_\ell$ and all their row-representative matrices be $H_+$-matrices. Let $A_i = F_i^{(t)} - G_i^{(t)}$ ($t = 1, 2$) be two splittings of $A_i$ ($i = 1, 2, \dots, \ell$). Assume that, for $t = 1, 2$:
(I) 
$D_{F_i^{(t)}} > 0$, $i = 1, 2, \dots, \ell-1$, and $A_\ell = F_\ell^{(t)} - G_\ell^{(t)}$ is an H-splitting of $A_\ell$;
(II) 
(17) and (18) are satisfied;
(III) 
There exists a diagonal matrix $T$ with positive diagonal entries such that $(\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|)T$, $t = 1, 2$, are s.d.d. matrices;
(IV) 
$\Omega T e > \Big[ 2^{1-\ell} \Big( \sum_{i=1}^{\ell-1} 2^{\ell-i-1} D_{F_i^{(t)}} + D_{F_\ell^{(t)}} \Big) - (\langle F_\ell^{(t)} \rangle - |G_\ell^{(t)}|) \Big] T e$.
Then, Method 2 converges to the solution of the VLCP($\ell$).
The same comments as in Remarks 8 and 9 can be made for Theorems 10 and 11.

4. Numerical Examples

In this section, numerical examples are given to show the efficiency of the proposed method.
Consider the following two examples, which are similar to those in [8]; Examples 12 and 13 are symmetric and asymmetric cases, respectively.
Example 12.
Let $n = m^2$. Consider the VLCP(2) whose system matrices are given by
$$A_1 = \begin{pmatrix} S & & \\ & \ddots & \\ & & S \end{pmatrix} + I_n \in \mathbb{R}^{n \times n}, \quad A_2 = \begin{pmatrix} S & -I_m & & \\ -I_m & S & \ddots & \\ & \ddots & \ddots & -I_m \\ & & -I_m & S \end{pmatrix} \in \mathbb{R}^{n \times n},$$
where
$$S = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 4 \end{pmatrix} \in \mathbb{R}^{m \times m}.$$
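For reference, the matrices of Example 12 can be assembled with Kronecker products (a sketch of ours, assuming the block structure displayed above):

```python
from scipy.sparse import identity, kron, diags

def example12(m):
    n = m * m
    S = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))
    off = diags([-1.0, -1.0], [-1, 1], shape=(m, m))   # block sub/super diagonals
    A1 = kron(identity(m), S) + identity(n)            # diag(S, ..., S) + I_n
    A2 = kron(identity(m), S) + kron(off, identity(m)) # block tridiag(-I_m, S, -I_m)
    return A1.tocsr(), A2.tocsr()
```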
Example 13.
Let $n = m^2$. Consider the VLCP(2) whose system matrices are given by
$$A_1 = \begin{pmatrix} S & & \\ & \ddots & \\ & & S \end{pmatrix} + I_n \in \mathbb{R}^{n \times n}, \quad A_2 = \begin{pmatrix} S & -0.5\, I_m & & \\ -1.5\, I_m & S & \ddots & \\ & \ddots & \ddots & -0.5\, I_m \\ & & -1.5\, I_m & S \end{pmatrix} \in \mathbb{R}^{n \times n},$$
where
$$S = \begin{pmatrix} 4 & -0.5 & & \\ -1.5 & 4 & \ddots & \\ & \ddots & \ddots & -0.5 \\ & & -1.5 & 4 \end{pmatrix} \in \mathbb{R}^{m \times m}.$$
The numerical tests are performed on a computer with an Intel(R) Core(TM) i7-9700 CPU (3.00 GHz) and 8 GB of memory. Denote the total computation time (in seconds) and the number of iteration steps by T and IT, respectively. Let $\gamma = 1$, $x_1^{(0)} = e$ and the tolerance be $10^{-6}$. By "SAVE", we denote the percentage of total computation time saved by the TMSOR method relative to the MSOR method, where
$$\mathrm{SAVE} = \frac{T_{\mathrm{MSOR}} - T_{\mathrm{TMSOR}}}{T_{\mathrm{MSOR}}} \times 100\%.$$
The numerical results are presented in Table 1, Table 2 and Table 3, where "MSOR$_\alpha$" and "TMSOR$_\alpha$" denote the MSOR and TMSOR methods with relaxation parameter $\alpha$, respectively, and the parameter matrix $\Omega$ is chosen as
$$\Omega = \frac{\tau}{2}(D_{F_1} + D_{F_2}), \quad \tau = 0.8, 0.9, 1.0.$$
One can see that all methods are convergent for the different dimensions. Since two linear systems are solved in each iteration of the TMMS method, the number of iteration steps of the MMS method is, in most comparisons, nearly twice (or a little less than twice) that of the TMMS method. Meanwhile, the TMMS method converges faster than the MMS method except in a few cases. In particular, the CPU time savings exceed 20% in most cases. Therefore, the two-step technique does improve the MMS method. On the other hand, one can see that the relaxation parameter may affect the computational efficiency of both the MMS and TMMS methods.
Although there are some cases of Example 13 where the values of "SAVE" are small or negative, "SAVE" exceeds 15% for the "optimal" relaxation parameters (set in bold) of both examples. Moreover, by Table 2 and Table 3, both the MMS and TMMS methods are convergent in all cases when $\tau < 1$, which confirms the improvement of the proposed convergence theorem, as commented in Remark 8. However, the theoretical analysis of the optimal relaxation parameter is still difficult, even for the LCP; it may be an interesting topic for future work.

5. Concluding Remarks

The two-step splitting technique is successfully applied to the MMS iteration method for solving the VLCP. The convergence analysis is given, where the convergence domain of the parameter matrix is larger than the existing one. Numerical results show that the proposed method can improve the convergence rate of the MMS iteration method. In two recent works [25,26], the modulus-based transformation was also used for tensor complementarity problems (TCP). One can thus expect that accelerated techniques such as two-step splittings can also be used for the TCP.

Author Contributions

Conceptualization, H.Z.; investigation, Y.S. and H.Z.; methodology, Y.S. and X.L.; writing—original draft preparation, Y.S.; writing—review and editing, X.L. and S.-W.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Major Projects of Guangdong Education Department for Foundation Research and Applied Research (No. 2018KZDXM065), the Science and Technology Development Fund, Macau SAR (No. 0073/2019/A2, 0005/2019/A), University of Macau (No. MYRG2020-00035-FST, MYRG2018-00047-FST), the Scientific Computing Research Innovation Team of Guangdong Province (No. 2021KCXTD052), Technology Planning Project of Shaoguan (No. 210716094530390), and the Science Foundation of Shaoguan University (No. SZ2020KJ01, SY2021KJ09).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cottle, R.W.; Pang, J.-S.; Stone, R.E. The Linear Complementarity Problem; Academic: San Diego, CA, USA, 1992. [Google Scholar]
  2. Cottle, R.W.; Dantzig, G.B. A generalization of the linear complementarity problem. J. Comb. Theory 1970, 8, 79–90. [Google Scholar] [CrossRef]
  3. Fujisawa, T.; Kuh, E.S. Piecewise-linear theory of nonlinear networks. SIAM J. Appl. Math. 1972, 22, 307–328. [Google Scholar] [CrossRef]
  4. Fujisawa, T.; Kuh, E.S.; Ohtsuki, T. A sparse matrix method for analysis of piecewise-linear resistive networks. IEEE Trans. Circuit Theory 1972, 19, 571–584. [Google Scholar] [CrossRef]
  5. Gowda, M.S.; Sznajder, R. The generalized order linear complementarity problem. SIAM J. Matrix Anal. Appl. 1994, 15, 779–795. [Google Scholar] [CrossRef]
  6. Oh, K.P. The formulation of the mixed lubrication problem as a generalized nonlinear complementarity problem. J. Tribol. 1986, 108, 598–604. [Google Scholar] [CrossRef]
  7. Sznajder, R.; Gowda, M.S. Generalizations of P0- and P-properties; extended vertical and horizontal linear complementarity problems. Linear Algebra Appl. 1995, 223–224, 695–715. [Google Scholar] [CrossRef]
  8. Mezzadri, F. A modulus-based formulation for the vertical linear complementarity problems. Numer. Algorithms 2022, 90, 1547–1568. [Google Scholar] [CrossRef]
  9. Qi, H.-D.; Liao, L.-Z. A smoothing Newton method for extended vertical linear complementarity problems. SIAM J. Matrix Anal. Appl. 1999, 21, 45–66. [Google Scholar] [CrossRef]
  10. Bai, Z.-Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933. [Google Scholar] [CrossRef]
  11. He, J.-W.; Vong, S. A new kind of modulus-based matrix splitting methods for vertical linear complementarity problems. Appl. Math. Lett. 2022, 134, 108344. [Google Scholar] [CrossRef]
  12. Mezzadri, F.; Galligani, E. Projected splitting methods for vertical linear complementarity problems. J. Optim. Theory Appl. 2022, 193, 598–620. [Google Scholar] [CrossRef]
  13. Fang, X.-M.; Zhu, Z.-W. The modulus-based matrix double splitting iteration method for linear complementarity problems. Comput. Math. Appl. 2019, 78, 3633–3643. [Google Scholar] [CrossRef]
  14. Zheng, H.; Vong, S.; Liu, L. A direct precondition modulus-based matrix splitting iteration method for solving linear complementarity problems. Appl. Math. Comput. 2019, 353, 396–405. [Google Scholar] [CrossRef]
  15. Zhang, L.-L. Two-step modulus based matrix splitting iteration for linear complementarity problems. Numer. Algorithms 2011, 57, 83–99. [Google Scholar] [CrossRef]
  16. Zhang, L.-L. Two-step modulus-based synchronous multisplitting iteration methods for linear complementarity problems. J. Comput. Math. 2015, 33, 100–112. [Google Scholar] [CrossRef]
  17. Zheng, H.; Liu, L. A two-step modulus-based matrix splitting iteration method for solving nonlinear complementarity problems of H+-matrices. Comput. Appl. Math. 2018, 37, 5410–5423. [Google Scholar] [CrossRef]
  18. Zheng, H.; Vong, S. A two-step modulus-based matrix splitting iteration method for horizontal linear complementarity problems. Numer. Algorithms 2021, 86, 1791–1810. [Google Scholar] [CrossRef]
  19. Ke, Y.-F.; Ma, C.-F.; Zhang, H. The relaxation modulus-based matrix splitting iteration methods for circular cone nonlinear complementarity problems. Comput. Appl. Math. 2018, 37, 6795–6820. [Google Scholar] [CrossRef]
  20. Berman, A.; Plemmons, R.J. Nonnegative Matrices in the Mathematical Sciences; SIAM Publisher: Philadelphia, PA, USA, 1994. [Google Scholar]
  21. Bai, Z.-Z. On the convergence of the multisplitting methods for the linear complementarity problem. SIAM J. Matrix Anal. Appl. 1999, 21, 67–78. [Google Scholar] [CrossRef]
  22. Frommer, A.; Szyld, D.B. H-splittings and two-stage iterative methods. Numer. Math. 1992, 63, 345–356. [Google Scholar] [CrossRef]
  23. Frommer, A.; Mayer, G. Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 1989, 119, 141–152. [Google Scholar] [CrossRef]
  24. Hu, J.-G. Estimates of ||B^{-1}C|| and their applications. Math. Numer. Sin. 1982, 4, 272–282. [Google Scholar]
  25. Liu, D.-D.; Li, W.; Vong, S. Tensor complementarity problems: The GUS-property and an algorithm. Linear Multilinear Algebra 2018, 66, 1726–1749. [Google Scholar] [CrossRef]
  26. Dai, P.-F. A fixed point iterative method for tensor complementarity problems. J. Sci. Comput. 2020, 84, 1–20. [Google Scholar] [CrossRef]
Table 1. Numerical results when τ = 1.0 (the "optimal" computation times of the MSOR and TMSOR are set to bold for each dimension and each example). The three (IT, T, SAVE) column groups correspond to m = 128, 256 and 512, respectively.

| Example | Method | IT | T | SAVE | IT | T | SAVE | IT | T | SAVE |
|---|---|---|---|---|---|---|---|---|---|---|
| Example 12 | MSOR 0.9 | 48 | 0.1999 | | 49 | 0.9145 | | 51 | 5.1585 | |
| | TMSOR 0.9 | 24 | 0.1335 | 33% | 25 | 0.6761 | 26% | 26 | 4.0739 | 21% |
| | MSOR 1.0 | 41 | 0.1818 | | 42 | 0.7773 | | 44 | 4.3167 | |
| | TMSOR 1.0 | 21 | 0.1149 | 36% | 21 | 0.5360 | 31% | 22 | 3.0526 | 29% |
| | MSOR 1.1 | 35 | 0.1569 | | 36 | 0.6566 | | 38 | 3.9291 | |
| | TMSOR 1.1 | 18 | 0.1107 | 29% | 20 | 0.5843 | 11% | 19 | 2.7612 | 30% |
| | MSOR 1.2 | 52 | 0.2079 | | 52 | 0.9520 | | 54 | 5.5109 | |
| | TMSOR 1.2 | 26 | 0.1374 | 34% | 29 | 0.7632 | 20% | 28 | 4.0793 | 26% |
| Example 13 | MSOR 0.9 | 42 | 0.1792 | | 36 | 0.6613 | | 45 | 4.6083 | |
| | TMSOR 0.9 | 24 | 0.1321 | 26% | 20 | 0.5607 | 15% | 25 | 3.6112 | 22% |
| | MSOR 1.0 | 35 | 0.1539 | | 30 | 0.5444 | | 38 | 3.8549 | |
| | TMSOR 1.0 | 20 | 0.1317 | 14% | 17 | 0.4273 | 22% | 22 | 3.0377 | 21% |
| | MSOR 1.1 | 30 | 0.1417 | | 25 | 0.4919 | | 32 | 3.5243 | |
| | TMSOR 1.1 | 18 | 0.1070 | 24% | 15 | 0.4131 | 16% | 19 | 2.7911 | 21% |
| | MSOR 1.2 | 42 | 0.1899 | | 29 | 0.5140 | | 45 | 4.5890 | |
| | TMSOR 1.2 | 28 | 0.1828 | 4% | 16 | 0.4884 | 5% | 30 | 4.3746 | 5% |
Table 2. Numerical results when τ = 0.9 (the "optimal" computation times of the MSOR and TMSOR are set to bold for each dimension and each example). The three (IT, T, SAVE) column groups correspond to m = 128, 256 and 512, respectively.

| Example | Method | IT | T | SAVE | IT | T | SAVE | IT | T | SAVE |
|---|---|---|---|---|---|---|---|---|---|---|
| Example 12 | MSOR 0.9 | 44 | 0.1426 | | 46 | 0.8142 | | 47 | 4.6364 | |
| | TMSOR 0.9 | 22 | 0.1030 | 27% | 23 | 0.6159 | 24% | 24 | 3.2768 | 29% |
| | MSOR 1.0 | 38 | 0.1378 | | 39 | 0.6936 | | 40 | 3.9927 | |
| | TMSOR 1.0 | 19 | 0.0881 | 36% | 20 | 0.5315 | 23% | 20 | 2.8169 | 29% |
| | MSOR 1.1 | 42 | 0.1375 | | 41 | 0.7498 | | 44 | 4.4324 | |
| | TMSOR 1.1 | 22 | 0.0944 | 31% | 24 | 0.6519 | 13% | 23 | 3.3152 | 25% |
| | MSOR 1.2 | 78 | 0.2284 | | 81 | 1.4617 | | 79 | 8.0223 | |
| | TMSOR 1.2 | 37 | 0.1443 | 36% | 41 | 1.1061 | 24% | 40 | 5.7297 | 28% |
| Example 13 | MSOR 0.9 | 39 | 0.1391 | | 33 | 0.5767 | | 41 | 4.1800 | |
| | TMSOR 0.9 | 22 | 0.0981 | 29% | 19 | 0.4896 | 15% | 24 | 3.4328 | 17% |
| | MSOR 1.0 | 32 | 0.1116 | | 27 | 0.4928 | | 34 | 3.4535 | |
| | TMSOR 1.0 | 19 | 0.0897 | 19% | 16 | 0.4133 | 16% | 20 | 2.8645 | 17% |
| | MSOR 1.1 | 36 | 0.1236 | | 25 | 0.4295 | | 39 | 4.0004 | |
| | TMSOR 1.1 | 23 | 0.1098 | 11% | 14 | 0.3647 | 15% | 24 | 3.5276 | 11% |
| | MSOR 1.2 | 55 | 0.1740 | | 35 | 0.6576 | | 59 | 5.8532 | |
| | TMSOR 1.2 | 41 | 0.1664 | 4% | 20 | 0.5234 | 20% | 44 | 6.2346 | -6% |
Table 3. Numerical results when τ = 0.8 (the "optimal" computation times of the MSOR and TMSOR are set to bold for each dimension and each example). The three (IT, T, SAVE) column groups correspond to m = 128, 256 and 512, respectively.

| Example | Method | IT | T | SAVE | IT | T | SAVE | IT | T | SAVE |
|---|---|---|---|---|---|---|---|---|---|---|
| Example 12 | MSOR 0.9 | 41 | 0.1400 | | 42 | 0.7212 | | 44 | 4.645 | |
| | TMSOR 0.9 | 21 | 0.0977 | 30% | 21 | 0.5731 | 21% | 22 | 3.3496 | 28% |
| | MSOR 1.0 | 35 | 0.1218 | | 36 | 0.6060 | | 38 | 3.9238 | |
| | TMSOR 1.0 | 18 | 0.0853 | 29% | 20 | 0.5141 | 15% | 20 | 2.9174 | 26% |
| | MSOR 1.1 | 59 | 0.1803 | | 59 | 0.9876 | | 61 | 6.3516 | |
| | TMSOR 1.1 | 29 | 0.1177 | 35% | 32 | 0.8357 | 15% | 31 | 4.6185 | 27% |
| | MSOR 1.2 | 184 | 0.5213 | | 192 | 3.3582 | | 184 | 18.8629 | |
| | TMSOR 1.2 | 80 | 0.3141 | 40% | 83 | 2.3453 | 30% | 80 | 11.6375 | 38% |
| Example 13 | MSOR 0.9 | 35 | 0.1238 | | 30 | 0.5136 | | 38 | 3.9445 | |
| | TMSOR 0.9 | 20 | 0.0963 | 22% | 17 | 0.4464 | 13% | 22 | 3.2718 | 17% |
| | MSOR 1.0 | 31 | 0.1100 | | 24 | 0.4548 | | 33 | 3.3400 | |
| | TMSOR 1.0 | 19 | 0.0911 | 17% | 15 | 0.3597 | 21% | 20 | 2.9164 | 13% |
| | MSOR 1.1 | 46 | 0.1566 | | 31 | 0.5378 | | 49 | 4.9155 | |
| | TMSOR 1.1 | 32 | 0.1368 | 13% | 17 | 0.4551 | 15% | 34 | 5.0096 | -2% |
| | MSOR 1.2 | 79 | 0.2616 | | 46 | 0.8248 | | 85 | 8.5703 | |
| | TMSOR 1.2 | 93 | 0.3573 | -37% | 28 | 0.7918 | 4% | 93 | 13.7466 | -60% |
