Article

Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem †

1 School of Mathematics, Yunnan Normal University, Kunming 650500, China
2 Department of Mathematics, Anand International College of Engineering, Jaipur 303012, India
3 Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman, United Arab Emirates
* Author to whom correspondence should be addressed.
This research was supported by National Natural Science Foundation of China (No.11961082).
Symmetry 2021, 13(3), 503; https://doi.org/10.3390/sym13030503
Submission received: 12 February 2021 / Revised: 11 March 2021 / Accepted: 11 March 2021 / Published: 19 March 2021
(This article belongs to the Special Issue Advanced Calculus in Problems with Symmetry)

Abstract

In this paper, we obtain a new equivalent fixed-point form of the linear complementarity problem by introducing a relaxed matrix and establish a class of relaxed modulus-based matrix splitting iteration methods for solving the linear complementarity problem. Some sufficient conditions for guaranteeing the convergence of relaxed modulus-based matrix splitting iteration methods are presented. Numerical examples are offered to show the efficacy of the proposed methods.
MSC:
90C33, 65F10, 65F50, 65G40

1. Introduction

In this paper, we focus on the iterative solution of the linear complementarity problem, abbreviated as ‘LCP(q, A)’, whose form is

w = Az + q \ge 0, \quad z \ge 0 \quad \text{and} \quad z^T w = 0,

where A \in \mathbb{R}^{n \times n} and q \in \mathbb{R}^n are given, z \in \mathbb{R}^n is unknown, and for two s \times t matrices G = (g_{ij}) and H = (h_{ij}) the order G \ge (>) H means g_{ij} \ge (>) h_{ij} for all i and j. As is well known, the LCP(q, A) is a very useful tool in many fields, such as the free boundary problem, the contact problem, the option pricing problem and nonnegative constrained least squares problems, see [1,2,3,4,5], and it has received considerable attention in the literature, see [6,7,8,9,10,11].
Designing iteration methods that compute the numerical solution of the LCP(q, A) quickly and economically is a topic of great current interest, and has been widely discussed in the literature, see [1,2,8,9,12,13] for more details. Recently, by using w = \frac{1}{\gamma}\Omega(|x| - x) and z = \frac{1}{\gamma}(|x| + x) with \gamma > 0 for the LCP(q, A), Bai in [14] expressed the LCP(q, A) in the fixed-point form

(\Omega + M)x = Nx + (\Omega - A)|x| - \gamma q,
where A = M - N is a splitting of the matrix A and \Omega denotes a positive diagonal matrix, and thereby first designed a class of modulus-based matrix splitting (MMS) iteration methods. Since the MMS method has a simple form and a fast convergence rate, it is regarded as a powerful method for solving the LCP(q, A) and has attracted considerable attention. For several variants and applications of the MMS method, one can see [2,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32].
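The MMS framework above is straightforward to prototype. Below is a minimal NumPy sketch of the fixed-point iteration derived from (2) (the function and variable names are ours; a dense direct solve stands in for whatever solver a practical implementation would use, and the splitting A = M − N, the matrix Ω and the parameter γ are left to the caller):

```python
import numpy as np

def mms(A, q, M, Omega, gamma=2.0, x0=None, tol=1e-5, maxit=500):
    """Minimal modulus-based matrix splitting (MMS) sketch for LCP(q, A).

    Iterates (Omega + M) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma*q
    with N = M - A, then recovers z = (|x| + x) / gamma.
    """
    n = A.shape[0]
    N = M - A
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for k in range(maxit):
        rhs = N @ x + (Omega - A) @ np.abs(x) - gamma * q
        x = np.linalg.solve(Omega + M, rhs)
        z = (np.abs(x) + x) / gamma
        # componentwise residual norm ||min(Az + q, z)||_2
        if np.linalg.norm(np.minimum(A @ z + q, z)) <= tol:
            return z, k + 1
    return z, maxit
```

For a small M-matrix such as A = [[4, −1], [−1, 4]] with M = Ω = diag(A), the conditions of the modulus-based theory are satisfied and the sketch converges to the LCP solution.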
In this paper, based on the MMS method, we construct a new type of iteration method to solve the LCP(q, A). Our strategy is to introduce a relaxed matrix on both sides of Equation (2) and obtain a new equivalent fixed-point form of the LCP(q, A). Based on this new equivalent form, we establish a class of relaxed modulus-based matrix splitting (RMMS) iteration methods for solving the LCP(q, A). This class of new iteration methods not only inherits the virtues of the modulus-based methods; more importantly, the relaxed matrix can be chosen so as to enhance the computational efficiency of the classical MMS method in [14]. Some sufficient conditions guaranteeing the convergence of the RMMS iteration method will be given. Numerical examples are provided to verify that the RMMS iteration method is feasible and outperforms the classical MMS iteration method in terms of computational efficiency.
The layout of this paper is as follows. In Section 2, for the sake of the discussion in the rest of this paper, we provide some necessary definitions, lemmas and notations. In Section 3, we present a class of relaxed modulus-based matrix splitting (RMMS) iteration methods to solve the LCP(q, A). The convergence conditions of the RMMS iteration method are presented in Section 4. Numerical experiments comparing the proposed methods with the classical MMS iteration method are reported in Section 5. In Section 6, we summarize this work. Finally, some brief discussions are given in Section 7.

2. Preliminaries

Some necessary definitions, lemmas and notations, which are used in the subsequent discussions, are introduced in this section.
Let A = (a_{ij}) \in \mathbb{R}^{n \times n}. Then A is called a Z-matrix if a_{ij} \le 0 for i \ne j; a nonsingular M-matrix if A is a Z-matrix and A^{-1} \ge 0; its comparison matrix is \langle A \rangle = (\langle a \rangle_{ij}) with

\langle a \rangle_{ij} = \begin{cases} |a_{ij}| & \text{for } i = j, \\ -|a_{ij}| & \text{for } i \ne j, \end{cases} \qquad i, j = 1, 2, \dots, n.

Further, a matrix A = (a_{ij}) \in \mathbb{R}^{n \times n} is called an H-matrix if its comparison matrix \langle A \rangle is an M-matrix; an H_+-matrix if A is an H-matrix with \operatorname{diag}(A) > 0; a P-matrix if all of its principal minors are positive [33,34]. In addition, we let |A| = (|a_{ij}|).
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n}. Then it is called an M-splitting if M is a nonsingular M-matrix and N \ge 0; an H-splitting if \langle M \rangle - |N| is a nonsingular M-matrix. As is known, if A = M - N is an M-splitting and A is a nonsingular M-matrix, then \rho(M^{-1}N) < 1, where \rho(\cdot) denotes the spectral radius (the maximum modulus of the eigenvalues) of a matrix, see [33,34]. Finally, \|\cdot\|_2 denotes the Euclidean norm on \mathbb{R}^n.
Lemma 1 
([19]). Let A = (a_{ij}) \in \mathbb{R}^{n \times n} with a_{ij} \ge 0. If there exists u \in \mathbb{R}^n with u > 0 such that Au < u, then \rho(A) < 1.
Lemma 2 
([35]). Let A \in \mathbb{R}^{n \times n} be an H-matrix, let D be the diagonal part of the matrix A, and let A = D - B. Then the matrices A and |D| are nonsingular, |A^{-1}| \le \langle A \rangle^{-1} and \rho(|D|^{-1}|B|) < 1.
Lemma 3 
([34]). Let A be an M-matrix and B be a Z-matrix with A \le B. Then B is an M-matrix.
In addition, there is a well-known result on the existence and uniqueness of the solution of the LCP(q, A): the LCP(q, A) has a unique solution for every q \in \mathbb{R}^n if and only if the matrix A is a P-matrix, see [1]. Obviously, when A is an H_+-matrix, the LCP(q, A) has a unique solution as well.
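For experimentation, the definitions above translate directly into code. The sketch below (our helper names, dense matrices assumed) forms the comparison matrix ⟨A⟩ and tests the H₊ property via the criterion supplied by Lemma 2, namely diag(A) > 0 together with ρ(|D|⁻¹|B|) < 1:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_plus_matrix(A):
    """Check the H+ property: diag(A) > 0 and rho(|D|^-1 |B|) < 1,
    where A = D - B splits off the diagonal part (Lemma 2 criterion)."""
    d = np.diag(A)
    if np.any(d <= 0):
        return False
    B = np.diag(d) - A                       # so that A = D - B
    J = np.abs(B) / np.abs(d)[:, None]       # |D|^-1 |B|
    return np.max(np.abs(np.linalg.eigvals(J))) < 1
```

For instance, tridiag(−1, 4, −1) is an H₊-matrix (ρ = 1/4 for n = 2), while [[1, 2], [2, 1]] is not (ρ = 2).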

3. Relaxed Modulus-Based Matrix Splitting Method

In this section, we introduce a class of relaxed modulus-based matrix splitting (RMMS) iteration methods for solving the LCP(q, A). For this purpose, introducing the identity Rx - Rx = 0 into Equation (2), where R \in \mathbb{R}^{n \times n} is a given relaxed matrix, we obtain

(\Omega + M)x = Nx + (\Omega - A)|x| + R(x - x) - \gamma q,

or,

(\Omega + M - R)x = (N - R)x + (\Omega - A)|x| - \gamma q.
Based on Equations (3) and (4), we can establish the following iteration method, which is named as a class of relaxed modulus-based matrix splitting (RMMS) iteration methods for the LCP( q , A ).
Method 1. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n}. Let \gamma > 0 and let the matrix \Omega + M - R be nonsingular, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Given an initial vector x^{(0)} \in \mathbb{R}^n, compute z^{(k+1)} \in \mathbb{R}^n by

z^{(k+1)} = \frac{1}{\gamma}(|x^{(k+1)}| + x^{(k+1)}), \quad k = 0, 1, 2, \dots,

where x^{(k+1)} can be obtained by solving the linear system

(\Omega + M)x^{(k+1)} = Nx^{(k)} + (\Omega - A)|x^{(k)}| + R(x^{(k+1)} - x^{(k)}) - \gamma q

or

(\Omega + M - R)x^{(k+1)} = (N - R)x^{(k)} + (\Omega - A)|x^{(k)}| - \gamma q.
Clearly, when R = 0, Method 1 reduces to the well-known MMS iteration method in [14]. Similarly, by introducing an identity, Wu and Li in [22] presented two-sweep modulus-based matrix splitting iteration methods for the LCP(q, A). The goal of introducing the relaxed matrix R in (5) or (6) is that the computational efficiency of the RMMS iteration method may be better than that of the classical MMS iteration method in [14].
In fact, Method 1 is a general framework of RMMS iteration methods for solving the LCP(q, A). This implies that concrete RMMS iteration methods can be constructed by specifying the splitting of the matrix A and the iteration parameters. If we take

M = \frac{1}{\alpha}(D - \beta L) \quad \text{and} \quad N = \frac{1}{\alpha}[(1 - \alpha)D + (\alpha - \beta)L + \alpha U],

where A = D - L - U, with D, -L and -U, respectively, the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of the matrix A, then this leads to the relaxed modulus-based AOR (RMAOR) iteration method

(\Omega + D - \beta L - R)x^{(k+1)} = [(1 - \alpha)D + (\alpha - \beta)L + \alpha U - R]x^{(k)} + (\Omega - \alpha A)|x^{(k)}| - \gamma \alpha q,

with z^{(k+1)} = \frac{1}{\gamma}(|x^{(k+1)}| + x^{(k+1)}). When \alpha = \beta, \alpha = \beta = 1, and \alpha = 1, \beta = 0, respectively, the RMAOR method (7) yields the corresponding relaxed modulus-based SOR (RMSOR) method, the relaxed modulus-based Gauss–Seidel (RMGS) method and the relaxed modulus-based Jacobi (RMJ) method.
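As an illustration, the RMSOR sweep (the RMAOR iteration (7) with α = β) can be sketched as follows. This is our reading of (7): the relaxed matrix R, the diagonal matrix Ω and the parameters α, γ are supplied by the caller, and a dense solve is used at each step in place of the forward substitution a real implementation would exploit:

```python
import numpy as np

def rmsor(A, q, Omega, R, alpha=1.3, gamma=2.0, tol=1e-5, maxit=500):
    """Relaxed modulus-based SOR (RMSOR) sketch: RMAOR (7) with alpha = beta.

    Solves (Omega + D - alpha*L - R) x_{k+1}
         = [(1-alpha)*D + alpha*U - R] x_k
           + (Omega - alpha*A)|x_k| - gamma*alpha*q.
    """
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # A = D - L - U, so L is minus the strict lower part
    U = -np.triu(A, 1)
    lhs = Omega + D - alpha * L - R
    x = np.zeros(A.shape[0])
    for k in range(maxit):
        rhs = ((1 - alpha) * D + alpha * U - R) @ x \
              + (Omega - alpha * A) @ np.abs(x) - gamma * alpha * q
        x = np.linalg.solve(lhs, rhs)
        z = (np.abs(x) + x) / gamma
        if np.linalg.norm(np.minimum(A @ z + q, z)) <= tol:
            return z, k + 1
    return z, maxit
```

With A = [[4, −1], [−1, 4]], Ω = D and the nonpositive diagonal choice R = −2I, the hypotheses of Theorem 4 below hold (ρ = 1/4 and 1.3 · 1/4 < min{1, 1.3}), so the sweep converges.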

4. Convergence Analysis

In this section, some sufficient conditions are given to guarantee the convergence of method 1.
Theorem 1. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n} with A a P-matrix, and let the matrix \Omega + M - R be nonsingular, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Let

\delta(R) = f(R) + g(R),

where

f(R) = \|(\Omega + M - R)^{-1}(N - R)\|_2 \quad \text{and} \quad g(R) = \|(\Omega + M - R)^{-1}(\Omega - A)\|_2.

When \delta(R) < 1, Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Proof. 
Let (z^*, w^*) be the solution pair of the LCP(q, A). Then x^* = \frac{\gamma}{2}(z^* - \Omega^{-1}w^*) satisfies

(\Omega + M - R)x^* = (N - R)x^* + (\Omega - A)|x^*| - \gamma q.

Based on (6) and (8), and noting that the matrix \Omega + M - R is nonsingular, we obtain

x^{(k+1)} - x^* = (\Omega + M - R)^{-1}\big((N - R)(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|)\big).

This indicates that

\begin{aligned}
\|x^{(k+1)} - x^*\|_2 &= \big\|(\Omega + M - R)^{-1}\big((N - R)(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|)\big)\big\|_2 \\
&\le \|(\Omega + M - R)^{-1}(N - R)(x^{(k)} - x^*)\|_2 + \|(\Omega + M - R)^{-1}(\Omega - A)(|x^{(k)}| - |x^*|)\|_2 \\
&\le \|(\Omega + M - R)^{-1}(N - R)\|_2\,\|x^{(k)} - x^*\|_2 + \|(\Omega + M - R)^{-1}(\Omega - A)\|_2\,\|x^{(k)} - x^*\|_2 \\
&= (f(R) + g(R))\,\|x^{(k)} - x^*\|_2 = \delta(R)\,\|x^{(k)} - x^*\|_2,
\end{aligned}

where we used \||x^{(k)}| - |x^*|\|_2 \le \|x^{(k)} - x^*\|_2. Obviously, when \delta(R) < 1, Method 1 is convergent. □
Since
\begin{aligned}
\|(\Omega + M - R)^{-1}(\Omega - A)\|_2 &= \|(\Omega + M - R)^{-1}(\Omega - M + N)\|_2 \\
&= \|(\Omega + M - R)^{-1}(\Omega - M + R + N - R)\|_2 \\
&\le \|(\Omega + M - R)^{-1}(\Omega - M + R)\|_2 + \|(\Omega + M - R)^{-1}(N - R)\|_2,
\end{aligned}
the following corollary can be obtained.
Corollary 1. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n} with A a P-matrix, and let the matrix \Omega + M - R be nonsingular, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Let

\bar{\delta}(R) = 2f(R) + \bar{g}(R),

where

f(R) = \|(\Omega + M - R)^{-1}(N - R)\|_2 \quad \text{and} \quad \bar{g}(R) = \|(\Omega + M - R)^{-1}(\Omega - M + R)\|_2.

When \bar{\delta}(R) < 1, Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Similarly to the above proof, if we use |\cdot| instead of \|\cdot\|_2, we can also obtain Corollary 2.
Corollary 2. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n} with A a P-matrix, and let the matrix \Omega + M - R be nonsingular, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Let

\Phi = |(\Omega + M - R)^{-1}(N - R)| + |(\Omega + M - R)^{-1}(\Omega - A)|

and

\Psi = 2|(\Omega + M - R)^{-1}(N - R)| + |(\Omega + M - R)^{-1}(\Omega - M + R)|.

When \rho(\Phi) < 1 or \rho(\Psi) < 1, Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
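The quantities in Theorem 1 and Corollary 2 are cheap to evaluate for moderate n, which makes it easy to screen candidate relaxed matrices R before running the iteration. A small sketch (our function name, dense linear algebra assumed) that returns δ(R) and ρ(Φ):

```python
import numpy as np

def convergence_indicators(M, N, Omega, R):
    """Return (delta(R), rho(Phi)) from Theorem 1 and Corollary 2."""
    A = M - N
    T = np.linalg.inv(Omega + M - R)
    f = np.linalg.norm(T @ (N - R), 2)        # f(R), spectral norm
    g = np.linalg.norm(T @ (Omega - A), 2)    # g(R), spectral norm
    Phi = np.abs(T @ (N - R)) + np.abs(T @ (Omega - A))
    rho_phi = np.max(np.abs(np.linalg.eigvals(Phi)))
    return f + g, rho_phi
```

For the Jacobi-type splitting of A = [[4, −1], [−1, 4]] with Ω = D and R = 0 both indicators equal 1/4, so either sufficient condition certifies convergence.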
Theorem 2. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n} with A a P-matrix, let the matrix \Omega + M be nonsingular and let I - |(\Omega + M)^{-1}R| be a nonsingular M-matrix, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Let

\Theta = (I - |(\Omega + M)^{-1}R|)^{-1}\big(|(\Omega + M)^{-1}N| + |(\Omega + M)^{-1}(\Omega - A)| + |(\Omega + M)^{-1}R|\big).

When \rho(\Theta) < 1, Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Proof. 
Based on (5) and (8), we have

(\Omega + M)(x^{(k+1)} - x^*) = N(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|) + R\big((x^{(k+1)} - x^*) - (x^{(k)} - x^*)\big).

This implies

\begin{aligned}
|x^{(k+1)} - x^*| &= \big|(\Omega + M)^{-1}\big(N(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|) + R((x^{(k+1)} - x^*) - (x^{(k)} - x^*))\big)\big| \\
&\le |(\Omega + M)^{-1}N|\cdot|x^{(k)} - x^*| + |(\Omega + M)^{-1}(\Omega - A)|\cdot\big||x^{(k)}| - |x^*|\big| \\
&\quad + |(\Omega + M)^{-1}R|\cdot|x^{(k+1)} - x^*| + |(\Omega + M)^{-1}R|\cdot|x^{(k)} - x^*| \\
&\le |(\Omega + M)^{-1}N|\cdot|x^{(k)} - x^*| + |(\Omega + M)^{-1}(\Omega - A)|\cdot|x^{(k)} - x^*| \\
&\quad + |(\Omega + M)^{-1}R|\cdot|x^{(k+1)} - x^*| + |(\Omega + M)^{-1}R|\cdot|x^{(k)} - x^*|.
\end{aligned}

Further, we have

|x^{(k+1)} - x^*| \le \Theta\,|x^{(k)} - x^*|,

where

\Theta = (I - |(\Omega + M)^{-1}R|)^{-1}\big(|(\Omega + M)^{-1}N| + |(\Omega + M)^{-1}(\Omega - A)| + |(\Omega + M)^{-1}R|\big).

Clearly, when \rho(\Theta) < 1, Method 1 is convergent. □
Similarly, we have Corollary 3.
Corollary 3. 
Let A = M - N be a splitting of the matrix A \in \mathbb{R}^{n \times n} with A a P-matrix, let the matrix \Omega + M be nonsingular and let I - |(\Omega + M)^{-1}R| be a nonsingular M-matrix, where \Omega is a positive diagonal matrix and R \in \mathbb{R}^{n \times n} is a given relaxed matrix. Let

\bar{\Theta} = (I - |(\Omega + M)^{-1}R|)^{-1}\big(2|(\Omega + M)^{-1}N| + |(\Omega + M)^{-1}(\Omega - M)| + |(\Omega + M)^{-1}R|\big).

When \rho(\bar{\Theta}) < 1, Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Theorem 3. 
Let A = M - N be a splitting of the matrix A = (a_{ij}) \in \mathbb{R}^{n \times n}, where A is an H_+-matrix, and let \langle M - R \rangle - |N - R| be an M-matrix, where R = (r_{ij}) \in \mathbb{R}^{n \times n}. Let the matrix \Omega = (\omega_{ij}) \in \mathbb{R}^{n \times n} satisfy \omega_{ii} \ge a_{ii} and 0 \le \omega_{ij} \le \frac{1}{2}(|m_{ij} - r_{ij}| + |n_{ij} - r_{ij}| - |a_{ij}|) for i \ne j. Then Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Proof. 
First, we prove that \Omega + M - R is an H_+-matrix. In fact, since

0 \le \omega_{ij} \le \frac{1}{2}(|m_{ij} - r_{ij}| + |n_{ij} - r_{ij}| - |a_{ij}|), \quad i \ne j,

we obtain

2|m_{ij} - r_{ij}| + |n_{ij} - r_{ij}| \ge \omega_{ij} + |a_{ij}| + \omega_{ij} + |m_{ij} - r_{ij}| \ge |\omega_{ij} - a_{ij}| + |\omega_{ij} + m_{ij} - r_{ij}|.

Further, we have

2|m_{ij} - r_{ij}| + |n_{ij} - r_{ij}| - |\omega_{ij} - a_{ij}| \ge |\omega_{ij} + m_{ij} - r_{ij}|,

which is equivalent to

-2|m_{ij} - r_{ij}| - |n_{ij} - r_{ij}| + |\omega_{ij} - a_{ij}| \le -|\omega_{ij} + m_{ij} - r_{ij}|.

In addition,

\omega_{ii} + m_{ii} - r_{ii} = \omega_{ii} - |m_{ii} - r_{ii}| + |n_{ii} - r_{ii}| + 2|m_{ii} - r_{ii}| - |n_{ii} - r_{ii}| > 0.

Therefore,

\langle \Omega + M - R \rangle \ge \Delta + 2\langle M - R \rangle - |N - R|,

where the matrix \Delta = (\delta_{ij}) \in \mathbb{R}^{n \times n} satisfies

\delta_{ij} = \begin{cases} \omega_{ii} - |m_{ii} - r_{ii}| + |n_{ii} - r_{ii}| \ge 0 & \text{for } i = j, \\ |\omega_{ij} - a_{ij}| & \text{for } i \ne j, \end{cases} \qquad i, j = 1, 2, \dots, n.

It is noted that

\Delta + 2\langle M - R \rangle - |N - R| \ge 2(\langle M - R \rangle - |N - R|).

Based on Lemma 3, the matrix \Delta + 2\langle M - R \rangle - |N - R| is an M-matrix. Based on (10) and (11), we have

\langle \Omega + M - R \rangle \ge 2(\langle M - R \rangle - |N - R|),

which implies that \Omega + M - R is an H_+-matrix.
Based on (9) and Lemma 2, we have

\begin{aligned}
|x^{(k+1)} - x^*| &= \big|(\Omega + M - R)^{-1}\big((N - R)(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|)\big)\big| \\
&\le |(\Omega + M - R)^{-1}|\cdot\big|(N - R)(x^{(k)} - x^*) + (\Omega - A)(|x^{(k)}| - |x^*|)\big| \\
&\le |(\Omega + M - R)^{-1}|\,(|N - R| + |\Omega - A|)\,|x^{(k)} - x^*| \\
&\le \langle \Omega + M - R \rangle^{-1}(|N - R| + |\Omega - A|)\,|x^{(k)} - x^*| \\
&= \langle \Omega + M - R \rangle^{-1}\big(\langle \Omega + M - R \rangle - \langle \Omega + M - R \rangle + |N - R| + |\Omega - A|\big)\,|x^{(k)} - x^*| \\
&= \big[I - \langle \Omega + M - R \rangle^{-1}\big(\langle \Omega + M - R \rangle - |N - R| - |\Omega - A|\big)\big]\,|x^{(k)} - x^*| \\
&\le \big[I - \langle \Omega + M - R \rangle^{-1}\big(2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|\big)\big]\,|x^{(k)} - x^*|.
\end{aligned}
By a direct calculation, the matrix \Delta - |\Omega - A| is a nonnegative diagonal matrix. It follows that the matrix

2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|

is an M-matrix. Further, there exists a positive vector u such that

(2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|)u > 0.

Therefore,

\langle \Omega + M - R \rangle^{-1}(2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|)u > 0.

Let

W = I - \langle \Omega + M - R \rangle^{-1}(2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|).

Then

Wu = \big[I - \langle \Omega + M - R \rangle^{-1}(2\langle M - R \rangle - 2|N - R| + \Delta - |\Omega - A|)\big]u < u.

Based on Lemma 1, we obtain \rho(W) < 1, which completes the proof. □
When matrix R is a diagonal matrix, Corollary 4 can be obtained.
Corollary 4. 
Let A = M - N be a splitting of the matrix A = (a_{ij}) \in \mathbb{R}^{n \times n}, where A is an H_+-matrix, and let \langle M - R \rangle - |N - R| be an M-matrix, where R = (r_{ij}) \in \mathbb{R}^{n \times n} is a diagonal matrix. Let the matrix \Omega = (\omega_{ij}) \in \mathbb{R}^{n \times n} satisfy \omega_{ii} \ge a_{ii} and 0 \le \omega_{ij} \le \frac{1}{2}(|m_{ij}| + |n_{ij}| - |a_{ij}|) for i \ne j. Then Method 1 with \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
When matrix R is a zero matrix, Corollary 4 reduces to the following result, which is a main result in [36].
Corollary 5. 
[36] Let A = M - N be a splitting of the matrix A = (a_{ij}) \in \mathbb{R}^{n \times n}, where A is an H_+-matrix, and let \langle M \rangle - |N| be an M-matrix. Let \Omega = (\omega_{ij}) \in \mathbb{R}^{n \times n} satisfy \omega_{ii} \ge a_{ii} and 0 \le \omega_{ij} \le \frac{1}{2}(|m_{ij}| + |n_{ij}| - |a_{ij}|) for i \ne j. Then Method 1 with R = 0 and \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Further, when matrix R is a zero matrix and Ω = ( ω i j ) n × n is a positive diagonal matrix, Corollary 4 reduces to the following result, which is a main result in [37].
Corollary 6. 
[37] Let A = M - N be a splitting of the matrix A = (a_{ij}) \in \mathbb{R}^{n \times n}, where A is an H_+-matrix, and let \langle M \rangle - |N| be an M-matrix. Let the positive diagonal matrix \Omega = (\omega_{ij}) \in \mathbb{R}^{n \times n} satisfy \omega_{ii} \ge a_{ii}. Then Method 1 with R = 0 and \gamma > 0 converges to the unique solution z^* \in \mathbb{R}^n_+ of the LCP(q, A) for any initial vector.
Theorem 4. 
Let A = D - L - U = D - B and \langle A \rangle = D - |L| - |U|, where A \in \mathbb{R}^{n \times n} is an H_+-matrix. Assume that the positive diagonal matrix \Omega satisfies \Omega \ge D, the matrix R is lower-triangular with \operatorname{diag}(R) \le 0, and \rho := \rho(D^{-1}(|B| + |L_R|)) < 1, where L_R = R - \operatorname{diag}(R). Then for any initial vector, the RMAOR iteration method with \gamma > 0 is convergent if the parameters \alpha and \beta satisfy

0 \le \max\{\alpha, \beta\}\,\rho < \min\{1, \alpha\}.
Proof. 
From the proof of Theorem 3, we take

M = \frac{1}{\alpha}(D - \beta L) \quad \text{and} \quad N = \frac{1}{\alpha}[(1 - \alpha)D + (\alpha - \beta)L + \alpha U].

Since \Omega \ge D > 0 and \operatorname{diag}(R) \le 0, the matrix \Omega - R + \frac{1}{\alpha}(D - \beta L) is clearly an H_+-matrix. Based on Lemma 2, we have

\big|(\Omega - R + \tfrac{1}{\alpha}(D - \beta L))^{-1}\big| \le \big\langle \Omega - R + \tfrac{1}{\alpha}(D - \beta L)\big\rangle^{-1} = \big(\Omega - D_R + \tfrac{1}{\alpha}D - |L_R + \tfrac{\beta}{\alpha}L|\big)^{-1}

with D_R = \operatorname{diag}(R) and L_R = R - \operatorname{diag}(R). Let

\widehat{W} = |(\Omega - R + M)^{-1}|\,(|N| + |R| + |\Omega - A|).

Then by simple computations we have

\begin{aligned}
\widehat{W} &= \big|(\Omega - R + \tfrac{1}{\alpha}(D - \beta L))^{-1}\big|\,\big(\big|\tfrac{1}{\alpha}[(1 - \alpha)D + (\alpha - \beta)L + \alpha U]\big| + |R| + |\Omega - A|\big) \\
&= |(\alpha\Omega - \alpha R + D - \beta L)^{-1}|\,\big(|(1 - \alpha)D + (\alpha - \beta)L + \alpha U| + \alpha|R| + \alpha|\Omega - A|\big) \\
&= |(\alpha\Omega - \alpha D_R - \alpha L_R + D - \beta L)^{-1}|\,\big(|(1 - \alpha)D + (\alpha - \beta)L + \alpha U| + \alpha|R| + \alpha|\Omega - A|\big) \\
&\le \langle \alpha\Omega - \alpha D_R + D - \alpha L_R - \beta L\rangle^{-1}\big(|(1 - \alpha)D + (\alpha - \beta)L + \alpha U| + \alpha|R| + \alpha|\Omega - A|\big) \\
&= \big(\alpha\Omega - \alpha D_R + D - |\alpha L_R + \beta L|\big)^{-1}\big(|(1 - \alpha)D + (\alpha - \beta)L + \alpha U| + \alpha|R| + \alpha|\Omega - A|\big) \\
&\le \big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}\big(|(1 - \alpha)D + (\alpha - \beta)L + \alpha U| + \alpha|R| + \alpha|\Omega - A|\big) \\
&= I - \big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}\big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L| \\
&\qquad - |(1 - \alpha)D + (\alpha - \beta)L + \alpha U| - \alpha|R| - \alpha|\Omega - A|\big) \\
&\le I - \big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}\big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L| \\
&\qquad - |1 - \alpha|D - |\alpha B - \beta L| - \alpha|D_R| - \alpha|L_R| - \alpha(\Omega - D) - \alpha|B|\big) \\
&= I - \big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}\big(((1 + \alpha) - |1 - \alpha|)D - 2\alpha|L_R| - |\alpha B - \beta L| - \beta|L| - \alpha|B|\big).
\end{aligned}
Since

(1 + \alpha) - |1 - \alpha| = 2\min\{1, \alpha\}

and

|\alpha B - \beta L| + \alpha|B| + \beta|L| = |\alpha L + \alpha U - \beta L| + \alpha|U| + \alpha|L| + \beta|L| = (|\alpha - \beta| + \alpha + \beta)|L| + 2\alpha|U| \le 2\max\{\alpha, \beta\}|B|,

then

\widehat{W} \le \widetilde{W},

where

\widetilde{W} = I - 2\big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}D\big(\min\{1, \alpha\}I - \max\{\alpha, \beta\}D^{-1}(|B| + |L_R|)\big).

Note that \rho(D^{-1}(|B| + |L_R|)) < 1. Then for a sufficiently small number \varepsilon > 0,

\rho_\varepsilon := \rho(J_\varepsilon) < 1,

where J_\varepsilon := D^{-1}(|B| + |L_R|) + \varepsilon ee^T and e := (1, 1, \dots, 1)^T \in \mathbb{R}^n. Based on the Perron–Frobenius theorem in [34], there exists a positive vector u_\varepsilon \in \mathbb{R}^n such that

J_\varepsilon u_\varepsilon = \rho_\varepsilon u_\varepsilon.

Therefore,

\begin{aligned}
\widetilde{W}u_\varepsilon &\le u_\varepsilon - 2\big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}D\big(\min\{1, \alpha\}u_\varepsilon - \max\{\alpha, \beta\}J_\varepsilon u_\varepsilon\big) \\
&= u_\varepsilon - 2\big(\min\{1, \alpha\} - \max\{\alpha, \beta\}\rho_\varepsilon\big)\big(\alpha\Omega - \alpha D_R + D - \alpha|L_R| - \beta|L|\big)^{-1}Du_\varepsilon < u_\varepsilon.
\end{aligned}

Based on Lemma 1, we obtain \rho(\widehat{W}) \le \rho(\widetilde{W}) < 1, which implies that the result of Theorem 4 is true. □
When matrix R in Theorem 4 is a nonpositive diagonal matrix, Theorem 4 reduces to the following result.
Corollary 7. 
Let A = D - L - U = D - B and \langle A \rangle = D - |L| - |U|, where A \in \mathbb{R}^{n \times n} is an H_+-matrix. Assume that the positive diagonal matrix \Omega satisfies \Omega \ge D, the matrix R is a nonpositive diagonal matrix and \rho := \rho(D^{-1}|B|). Then for any initial vector, the RMAOR iteration method with \gamma > 0 is convergent if the parameters \alpha and \beta satisfy

0 \le \max\{\alpha, \beta\}\,\rho < \min\{1, \alpha\}.

5. Numerical Experiments

In this section, we utilize two examples to illustrate the computational efficiency of the RMMS iteration method in terms of the number of iteration steps (IT), the elapsed CPU time in seconds (CPU), and the following norm of the absolute residual vector (RES):

\mathrm{RES}(z^{(k)}) := \|\min(Az^{(k)} + q, z^{(k)})\|_2,

where the minimum is taken componentwise.
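The residual above transcribes directly to code (our function name; NumPy's componentwise minimum matches the definition):

```python
import numpy as np

def res(A, q, z):
    """RES(z) = ||min(A z + q, z)||_2 with the componentwise minimum."""
    return np.linalg.norm(np.minimum(A @ z + q, z))
```

At an exact solution both factors of the complementarity condition are nonnegative and at least one vanishes componentwise, so the residual is zero.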
To show the advantages of the RMMS iteration method, we compare it with the classical MMS iteration method. In these tests, all initial vectors are chosen to be

x^{(0)} = (1, 0, 1, 0, \dots, 1, 0)^T \in \mathbb{R}^n,

and all iterations are stopped once RES(z^{(k)}) \le 10^{-5} or the number of iterations exceeds 500. For convenience, here we consider the relaxed modulus-based SOR (RMSOR) method and the modulus-based SOR (MSOR) method. The basis of this comparison is that the MSOR method in [14] outperforms other forms of the modulus-based matrix splitting iteration method, the projected relaxation methods and the modified modulus method. In the actual implementations, we take \Omega = D and \gamma = 2 for both the RMSOR method and the MSOR method. All computations are performed in MATLAB 7.0. In addition, in the following tables, ‘–’ denotes that the number of iteration steps exceeds 500 or the residual norm exceeds 10^5.
In our computations, we take the following two examples, which were considered in [14,18]. Parts of the two examples are symmetric, see Case 1 of Example 1 and the diagonal blocks of Example 2; we also consider nonsymmetric cases, see Case 2 of Example 1 and Example 2.
Example 1 
([14]). Let the LCP(q, A) be given by q = -Az^* and A = \hat{A} + \mu I, where

\hat{A} = \mathrm{Tridiag}(-rI, S, -tI) = \begin{pmatrix} S & -tI & 0 & \cdots & 0 & 0 \\ -rI & S & -tI & \cdots & 0 & 0 \\ 0 & -rI & S & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & S & -tI \\ 0 & 0 & 0 & \cdots & -rI & S \end{pmatrix} \in \mathbb{R}^{n \times n}

with

S = \mathrm{tridiag}(-r, 4, -t) = \begin{pmatrix} 4 & -t & 0 & \cdots & 0 & 0 \\ -r & 4 & -t & \cdots & 0 & 0 \\ 0 & -r & 4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 4 & -t \\ 0 & 0 & 0 & \cdots & -r & 4 \end{pmatrix} \in \mathbb{R}^{m \times m},

and

z^* = (1, 2, 1, 2, \dots, 1, 2)^T \in \mathbb{R}^n

is the unique solution of the LCP(q, A).
Example 2 
([18]). Let the LCP(q, A) be given by

A = \begin{pmatrix} W & -I & -I & 0 & \cdots & 0 \\ 0 & W & -I & -I & \cdots & 0 \\ 0 & 0 & W & -I & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \ddots & -I \\ 0 & 0 & \cdots & 0 & W & -I \\ 0 & 0 & \cdots & 0 & 0 & W \end{pmatrix} \in \mathbb{R}^{n \times n} \quad \text{and} \quad q = \begin{pmatrix} -1 \\ 1 \\ -1 \\ \vdots \\ (-1)^{n-1} \\ (-1)^n \end{pmatrix} \in \mathbb{R}^n,

with

W = \mathrm{tridiag}(-1, 4, -1) = \begin{pmatrix} 4 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 4 & -1 & \cdots & 0 & 0 \\ 0 & -1 & 4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 4 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 4 \end{pmatrix} \in \mathbb{R}^{m \times m}.
In Examples 1 and 2, the value of m is chosen as a prescribed positive integer, and then n = m^2. For Example 1, we consider two cases: the symmetric case and the nonsymmetric case. For the former, we take r = t = 1; for the latter, we take r = 1.5 and t = 0.5. In the implementations, the value of the iteration parameter \alpha used in both the RMSOR method and the MSOR method is chosen as 1.3. For convenience, the relaxed matrix is chosen as R = -2I for the RMSOR method, where I is the identity matrix.
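For reproducing the experiments, the block-tridiagonal matrix of Example 1 is conveniently assembled with Kronecker products. The sketch below is our construction (the default value of μ is illustrative only, and q = −Az* follows the statement of Example 1):

```python
import numpy as np

def example1_matrix(m, r=1.0, t=1.0, mu=4.0):
    """A = A_hat + mu*I with A_hat = Tridiag(-r*I, S, -t*I),
    S = tridiag(-r, 4, -t), and n = m*m."""
    n = m * m
    I = np.eye(m)
    E_low = np.eye(m, k=-1)   # subdiagonal pattern
    E_up = np.eye(m, k=1)     # superdiagonal pattern
    S = 4 * I - r * E_low - t * E_up
    A_hat = np.kron(I, S) - r * np.kron(E_low, I) - t * np.kron(E_up, I)
    return A_hat + mu * np.eye(n)

def example1_q(A):
    """q = -A z* with z* = (1, 2, 1, 2, ...)^T, so z* solves the LCP."""
    n = A.shape[0]
    z_star = np.tile([1.0, 2.0], n // 2 + 1)[:n]
    return -A @ z_star
```

With r = t the matrix is symmetric; r = 1.5, t = 0.5 gives the nonsymmetric case used in Table 2.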
For Example 1, for different problem sizes m and different values of \mu, the numerical results (including IT, CPU and RES) for the RMSOR method and the MSOR method are listed in Table 1 for r = t = 1. Clearly, both the RMSOR method and the MSOR method can rapidly compute a satisfactory approximation to the solution of the LCP(q, A).
From the numerical results in Table 1, for a fixed value of \mu, the iteration steps and CPU times of the RMSOR method and the MSOR method increase as the problem size n = m^2 increases. Whereas, for a fixed problem size n = m^2, the iteration steps and CPU times of both methods decrease as the value of \mu increases. This implies that both methods may be better suited to larger \mu when used as solvers for the LCP(q, A).
Based on the numerical results presented in Table 1, our numerical experiments show that the RMSOR method requires fewer iteration steps and less CPU time than the MSOR method. This shows that when both the RMSOR and MSOR methods are used to solve the LCP(q, A), the former is superior to the latter.
Table 2 presents the numerical results for the nonsymmetric case of Example 1. Specifically, the numerical results (including IT, CPU and RES) for the RMSOR method and the MSOR method for different problem sizes m and different values of \mu are listed for r = 1.5 and t = 0.5. Table 2 shows that the RMSOR method and the MSOR method can still rapidly compute a satisfactory approximation to the solution of the LCP(q, A).
The numerical results in Table 2 further confirm the observations made from Table 1. Our numerical experiments show that the computational efficiency of the RMSOR method is better than that of the MSOR method. It is noted that, compared with the symmetric case, the iteration steps and CPU times of the RMSOR method and the MSOR method decrease slightly in the nonsymmetric case.
Table 3 presents the numerical results for Example 2. To compare the RMSOR method with the MSOR method, Table 3 lists the numerical results (including IT, CPU and RES) for different values of \mu under the same iteration parameter \alpha. The presented results show that the RMSOR method requires fewer iteration steps and less CPU time than the MSOR method, confirming that the RMSOR method is still superior to the MSOR method.

6. Conclusions

In this paper, by introducing a relaxed matrix to obtain a new equivalent fixed-point form of the LCP(q, A), we establish a class of relaxed modulus-based matrix splitting (RMMS) iteration methods. Some sufficient conditions are presented to guarantee the convergence of the RMMS iteration methods. Numerical examples show that the RMMS iteration method is feasible and outperforms the classical MMS iteration method under certain conditions.
It is noted that our approach can be extended to other modulus-based matrix splitting methods, such as two-step modulus-based matrix splitting iteration methods, accelerated modulus-based matrix splitting methods, and so on.

7. Discussion

From our numerical experiments, we find that both the MMS iteration method and the RMMS iteration method are sensitive to the iteration parameter. This implies that the iteration parameter may play an important part in these two methods. Therefore, the determination of the optimal parameters for these two methods remains an open problem and an interesting topic for future work.

Author Contributions

Conceptualization, methodology, software S.W.; original draft preparation, C.L.; data curation, P.A.; guidance, review and revision, P.A.; translation, editing and review, S.W.; validation, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by National Natural Science Foundation of China (No.11961082).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank two anonymous referees for providing helpful suggestions, which greatly improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cottle, R.W.; Pang, J.-S.; Stone, R.E. The Linear Complementarity Problem; Academic: San Diego, CA, USA, 1992. [Google Scholar]
  2. Murty, K.G. Linear Complementarity, Linear and Nonlinear Programming; Heldermann: Berlin, Germany, 1988. [Google Scholar]
  3. Cottle, R.W.; Dantzig, G.B. Complementary pivot theory of mathematical programming. Linear Algebra Appl. 1968, 1, 103–125. [Google Scholar] [CrossRef] [Green Version]
  4. Schäfer, U. A linear complementarity problem with a P-matrix. SIAM Rev. 2004, 46, 189–201. [Google Scholar] [CrossRef]
  5. Zheng, N.; Hayami, K.; Yin, J.-F. Modulus-type inner outer iteration methods for nonnegative constrained least squares problems. SIAM J. Matrix Anal. Appl. 2016, 37, 1250–1278. [Google Scholar] [CrossRef] [Green Version]
  6. Ahn, B.H. Solutions of nonsymmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 1981, 33, 175–185. [Google Scholar] [CrossRef]
  7. Cryer, C.W. The solution of a quadratic programming problem using systematic overrelaxation. SIAM J. Control. 1971, 9, 385–392. [Google Scholar] [CrossRef]
  8. Kappel, N.W.; Watson, L.T. Iterative algorithms for the linear complementarity problems. Int. J. Comput. Math. 1986, 19, 273–297. [Google Scholar] [CrossRef]
  9. Mangasarian, O.L. Solutions of symmetric linear complementarity problems by iterative methods. J. Optim. Theory Appl. 1977, 22, 465–485. [Google Scholar] [CrossRef] [Green Version]
  10. Pang, J.-S. Necessary and sufficient conditions for the convergence of iterative methods for the linear complementarity problem. J. Optim. Theory Appl. 1984, 42, 1–17. [Google Scholar] [CrossRef]
  11. Tseng, P. On linear convergence of iterative methods for the variational inequality problem. J. Comput. Appl. Math. 1995, 60, 237–252. [Google Scholar] [CrossRef] [Green Version]
  12. Bai, Z.-Z. On the convergence of the multisplitting methods for the linear complementarity problem. SIAM J. Matrix Anal. Appl. 1999, 21, 67–78. [Google Scholar] [CrossRef]
  13. Bai, Z.-Z.; Evans, D. Matrix multisplitting methods with applications to linear complementarity problems: Parallel asynchronous methods. Int. J. Comput. Math. 2002, 79, 205–232. [Google Scholar] [CrossRef]
  14. Bai, Z.-Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933. [Google Scholar] [CrossRef]
  15. Van Bokhoven, W.M.G. A Class of Linear Complementarity Problems is Solvable in Polynomial Time; Unpublished Paper; Dept. of Electrical Engineering, University of Technology: Delft, The Netherlands, 1980. [Google Scholar]
  16. Dong, J.-L.; Jiang, M.-Q. A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 2009, 16, 129–143. [Google Scholar] [CrossRef]
  17. Hadjidimos, A.; Tzoumas, M. Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem. Linear Algebra Appl. 2009, 431, 197–210. [Google Scholar] [CrossRef] [Green Version]
  18. Bai, Z.-Z.; Zhang, L.-L. Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2013, 20, 425–439. [Google Scholar] [CrossRef]
  19. Bai, Z.-Z.; Zhang, L.-L. Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems. Numer. Algor. 2013, 62, 59–77. [Google Scholar] [CrossRef]
  20. Zheng, N.; Yin, J.-F. Accelerated modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Algor. 2013, 64, 245–262. [Google Scholar] [CrossRef]
  21. Zhang, L.-L. Two-step modulus-based matrix splitting iteration method for linear complementarity problems. Numer. Algor. 2011, 57, 83–99. [Google Scholar] [CrossRef]
  22. Wu, S.-L.; Li, C.-X. Two-sweep modulus-based matrix splitting iteration methods for linear complementarity problems. J. Comput. Math. 2016, 302, 327–339. [Google Scholar] [CrossRef]
23. Li, W. A general modulus-based matrix splitting method for linear complementarity problems of H-matrices. Appl. Math. Lett. 2013, 26, 1159–1164.
24. Hadjidimos, A.; Lapidakis, M.; Tzoumas, M. On iterative solution for linear complementarity problem with an H+-matrix. SIAM J. Matrix Anal. Appl. 2011, 33, 97–110.
25. Ma, C.-F.; Huang, N. Modified modulus-based matrix splitting algorithms for a class of weakly nondifferentiable nonlinear complementarity problems. Appl. Numer. Math. 2016, 108, 116–124.
26. Xia, Z.-C.; Li, C.-L. Modulus-based matrix splitting iteration methods for a class of nonlinear complementarity problem. Appl. Math. Comput. 2015, 271, 34–42.
27. Xie, S.-L.; Xu, H.-R.; Zeng, J.-P. Two-step modulus-based matrix splitting iteration method for a class of nonlinear complementarity problems. Linear Algebra Appl. 2016, 494, 1–10.
28. Huang, N.; Ma, C.-F. The modulus-based matrix splitting algorithms for a class of weakly nondifferentiable nonlinear complementarity problems. Numer. Linear Algebra Appl. 2016, 23, 558–569.
29. Hong, J.-T.; Li, C.-L. Modulus-based matrix splitting iteration methods for a class of implicit complementarity problems. Numer. Linear Algebra Appl. 2016, 23, 629–641.
30. Wu, S.-L.; Guo, P. Modulus-based matrix splitting algorithms for the quasi-complementarity problems. Appl. Numer. Math. 2018, 132, 127–137.
31. Mezzadri, F.; Galligani, E. Modulus-based matrix splitting methods for horizontal linear complementarity problems. Numer. Algor. 2020, 83, 201–219.
32. Zheng, N.; Yin, J.-F. Convergence of accelerated modulus-based matrix splitting iteration methods for linear complementarity problem with an H+-matrix. J. Comput. Appl. Math. 2014, 260, 281–293.
33. Varga, R.S. Matrix Iterative Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1962.
34. Berman, A.; Plemmons, R.J. Nonnegative Matrices in the Mathematical Sciences; Academic: New York, NY, USA, 1979.
35. Frommer, A.; Mayer, G. Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 1989, 119, 141–152.
36. Xu, W.-W. Modified modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2015, 22, 748–760.
37. Zhang, L.-L.; Ren, Z.-R. Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems. Appl. Math. Lett. 2013, 26, 638–642.
Table 1. Numerical results for Example 1 with r = t = 1.

                      m    20        30        40        50        60
  μ = 2   RMSOR   IT       25        25        26        27        27
                  CPU      0.0313    0.0313    0.0625    0.0625    0.1094
                  RES      5.908e-6  5.819e-6  8.472e-6  6.027e-6  7.469e-6
          MSOR    IT       51        53        53        54        55
                  CPU      0.0313    0.0781    0.0938    0.1094    0.1875
                  RES      8.427e-6  7.753e-6  9.798e-6  8.895e-6  7.782e-6
  μ = 4   RMSOR   IT       20        20        21        21        21
                  CPU      0.0313    0.0313    0.0625    0.0625    0.0781
                  RES      7.566e-6  9.882e-6  5.930e-6  6.748e-6  7.476e-6
          MSOR    IT       34        35        35        35        36
                  CPU      0.0313    0.0625    0.0781    0.1094    0.0938
                  RES      7.574e-6  6.854e-6  8.276e-6  9.488e-6  7.083e-6
  μ = 8   RMSOR   IT       18        18        18        18        18
                  CPU      0.0156    0.0313    0.0313    0.0469    0.0625
                  RES      4.675e-6  5.998e-6  7.078e-6  8.014e-6  8.851e-6
          MSOR    IT       24        24        25        25        25
                  CPU      0.0313    0.0313    0.0625    0.0625    0.0781
                  RES      7.476e-6  9.658e-6  6.343e-6  7.197e-6  7.961e-6
Table 2. Numerical results for Example 1 with r = 1.5 and t = 0.5.

                      m    20        30        40        50        60
  μ = 2   RMSOR   IT       21        21        22        22        23
                  CPU      0.0156    0.0313    0.0469    0.0469    0.0781
                  RES      4.669e-6  9.734e-6  7.082e-6  9.528e-6  5.772e-6
          MSOR    IT       28        28        29        29        29
                  CPU      0.0313    0.0313    0.0625    0.0938    0.1094
                  RES      6.588e-6  8.501e-6  6.252e-6  7.091e-6  7.840e-6
  μ = 4   RMSOR   IT       15        15        16        16        16
                  CPU      0.0156    0.0156    0.0469    0.0625    0.0625
                  RES      8.430e-6  4.328e-6  5.092e-6  5.756e-6  6.351e-6
          MSOR    IT       23        23        24        24        24
                  CPU      0.0313    0.0469    0.0625    0.0781    0.0781
                  RES      6.771e-6  8.646e-6  5.609e-6  6.345e-6  7.004e-6
  μ = 8   RMSOR   IT       15        15        15        15        15
                  CPU      0.0156    0.0313    0.0313    0.0313    0.0469
                  RES      4.011e-6  5.077e-6  5.955e-6  6.719e-6  7.405e-6
          MSOR    IT       19        19        20        20        20
                  CPU      0.0156    0.0313    0.0625    0.0625    0.0938
                  RES      7.112e-6  8.999e-6  4.977e-6  5.617e-6  6.191e-6
Table 3. Numerical results for Example 2 ("–" indicates that no result was reported for that α).

                      α    1.4       1.6       1.8       2
  m = 20  RMSOR   IT       33        30        43        80
                  CPU      0.0313    0.0313    0.0469    0.0938
                  RES      7.452e-6  9.803e-6  9.105e-6  9.712e-6
          MSOR    IT       38        82        296       –
                  CPU      0.0469    0.0781    0.3438    –
                  RES      8.648e-6  8.408e-6  9.475e-6  –
  m = 30  RMSOR   IT       50        47        54        107
                  CPU      0.0781    0.0675    0.0938    0.25
                  RES      6.121e-6  1.691e-6  6.982e-6  8.419e-6
          MSOR    IT       57        108       410       –
                  CPU      0.0938    0.2344    0.5781    –
                  RES      3.915e-6  7.102e-6  9.266e-6  –
  m = 40  RMSOR   IT       65        62        74        129
                  CPU      0.1406    0.1250    0.2188    0.3594
                  RES      9.235e-6  7.330e-6  6.697e-6  9.493e-6
          MSOR    IT       80        138       –         –
                  CPU      0.1875    0.2813    –         –
                  RES      7.557e-6  8.111e-6  –         –
  m = 50  RMSOR   IT       79        76        91        156
                  CPU      0.1719    0.1563    0.2031    0.3438
                  RES      9.926e-6  8.954e-6  9.862e-6  8.250e-6
          MSOR    IT       100       174       –         –
                  CPU      0.2188    0.3281    –         –
                  RES      9.408e-6  8.764e-6  –         –
  m = 60  RMSOR   IT       93        89        110       178
                  CPU      0.2969    0.2813    0.3906    0.5625
                  RES      7.478e-6  9.642e-6  9.490e-6  8.071e-6
          MSOR    IT       119       211       –         –
                  CPU      0.3281    0.5938    –         –
                  RES      8.510e-6  7.022e-6  –         –
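In the tables, IT is the iteration count, CPU the elapsed time in seconds, and RES the final residual of the LCP. The baseline MSOR method is the modulus-based SOR iteration from Bai's matrix splitting framework [14]. The following is a minimal sketch of that iteration, not the authors' implementation; the choices Ω = D, γ = 1, the zero initial guess, and the stopping rule RES = ‖min(z, Az + q)‖₂ < tol are illustrative assumptions.

```python
import numpy as np

def modulus_sor_lcp(A, q, alpha=1.0, gamma=1.0, tol=1e-5, max_it=500):
    """Modulus-based SOR sketch for LCP(q, A):
    find z >= 0 such that w = A z + q >= 0 and z^T w = 0.
    Omega = D and gamma = 1 are assumed, illustrative choices."""
    n = len(q)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)          # strictly lower triangular part, A = D - L - U
    Omega = D.copy()             # positive diagonal relaxation matrix (assumed)
    M = D / alpha - L            # SOR splitting A = M - N
    N = M - A
    x = np.zeros(n)
    for it in range(1, max_it + 1):
        # solve (Omega + M) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma*q
        x = np.linalg.solve(Omega + M,
                            N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        z = (np.abs(x) + x) / gamma                     # nonnegative iterate
        res = np.linalg.norm(np.minimum(z, A @ z + q))  # the RES of the tables
        if res < tol:
            break
    return z, it, res
```

Returned are the approximate solution z, the iteration count (IT), and the final residual (RES); the relaxed variant RMSOR modifies this fixed-point form with an additional relaxation matrix, as described in the body of the paper.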
Wu, Shiliang; Li, Cuixia; Agarwal, Praveen. Relaxed Modulus-Based Matrix Splitting Methods for the Linear Complementarity Problem. Symmetry 2021, 13(3), 503. https://doi.org/10.3390/sym13030503