Article

Global Modulus-Based Synchronous Multisplitting Multi-Parameters TOR Methods for Linear Complementarity Problems

1 School of Science, Zhengzhou University of Aeronautics, Zhengzhou 450015, China
2 Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, China
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2017, 22(1), 20; https://doi.org/10.3390/mca22010020
Submission received: 17 October 2016 / Revised: 25 January 2017 / Accepted: 27 January 2017 / Published: 21 February 2017
(This article belongs to the Special Issue Information and Computational Science)

Abstract

In 2013, Bai and Zhang constructed modulus-based synchronous multisplitting methods for linear complementarity problems and analyzed their convergence. In 2014, Zhang and Li established weaker convergence conditions for linear complementarity problems. In 2008, Zhang et al. presented a global relaxed non-stationary multisplitting multi-parameter method by introducing relaxation parameters. In this paper, we extend Bai and Zhang's algorithms and analyze global modulus-based synchronous multisplitting multi-parameters TOR (two-parameter overrelaxation) methods. Moreover, the convergence of the corresponding algorithm is proved when the system matrix is an $H_+$-matrix.

1. Introduction

Consider the linear complementarity problem $\mathrm{LCP}(q, A)$: find a pair of real vectors $r, z \in \mathbb{R}^n$ such that
$$r = Az + q \ge 0, \quad z \ge 0, \quad z^T (Az + q) = 0, \tag{1}$$
where $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is a given real matrix and $q = (q_1, q_2, \ldots, q_n)^T \in \mathbb{R}^n$ is a given real vector. Here, $z^T$ and $\ge$ denote the transpose of the vector $z$ and the componentwise partial ordering between two vectors, respectively. $H_+$-matrices belong to the class of $P$-matrices and therefore play an important role in the theory of the LCP.
The reader may consult references [1,2,3,4] for the many problems in scientific computing and engineering applications that lead to LCPs. For the case where the matrix $A$ of $\mathrm{LCP}(q, A)$ has special structure, see [5,6,7,8,9,10,11,12,13,14]. More recently, several authors have studied $\mathrm{LCP}(q, A)$ as an algebraic system. In particular, Bai and Zhang presented modulus-based multisplitting iterative methods for $\mathrm{LCP}(q, A)$ and analyzed their convergence in [10,11]. Zhang and Ren generalized the compatible $H$-splitting condition to an $H$-splitting [15]. Li generalized the modulus-based splitting iterative method to more general situations [16]. Zhang et al. established wider convergence results when the system matrix is an $H_+$-matrix [17,18,19].

2. Notations and Lemmas

A matrix $A = (a_{ij})$ is called an $M$-matrix if $a_{ij} \le 0$ for $i \ne j$ and $A^{-1} \ge 0$. The comparison matrix $\langle A \rangle = (\alpha_{ij})$ of a matrix $A = (a_{ij})$ is defined by $\alpha_{ij} = |a_{ij}|$ if $i = j$ and $\alpha_{ij} = -|a_{ij}|$ if $i \ne j$. A matrix $A$ is called an $H$-matrix if $\langle A \rangle$ is an $M$-matrix, and an $H_+$-matrix if it is an $H$-matrix with positive diagonal entries [5,20,21]. Let $\rho(A)$ denote the spectral radius of $A$; a representation $A = M - N$ is called a splitting of $A$ when $M$ is nonsingular. If $A$ and $B$ are $M$-matrices with $A \le B$, then $B^{-1} \le A^{-1}$. If $A$ is an $H$-matrix and $A = D - B$ with $D = \mathrm{diag}(A)$, then $D$ is nonsingular and $\rho(|D|^{-1}|B|) < 1$. Finally, we define $\mathbb{R}^n_+ = \{ x \mid x \ge 0, x \in \mathbb{R}^n \}$ and denote by $|A|$ the nonnegative matrix with entries $|a_{ij}|$.
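These definitions translate directly into short numerical checks. The following Python/NumPy sketch (illustrative only; the paper's experiments use MATLAB) builds the comparison matrix $\langle A \rangle$ and tests the $H$-matrix property via Lemma 3, i.e. $\rho(D^{-1}|B|) < 1$:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: keep |a_ii| on the diagonal, negate |a_ij| off it."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_matrix(A):
    """A is an H-matrix iff <A> is an M-matrix; by Lemma 3 this holds
    iff rho(D^{-1}|B|) < 1, where <A> = D - |B|, D = diag(<A>)."""
    C = comparison_matrix(A)
    D = np.diag(np.diag(C))
    B = D - C                          # |off-diagonal part of A|
    J = np.linalg.solve(D, B)          # D^{-1}|B|
    return np.max(np.abs(np.linalg.eigvals(J))) < 1

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
print(is_h_matrix(A))   # strictly diagonally dominant, so True
```

Strictly diagonally dominant matrices with positive diagonal, such as the tridiagonal example above, are always $H_+$-matrices.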
Lemma 1.
Let $A$ be an $H$-matrix. Then $A$ is nonsingular and $|A^{-1}| \le \langle A \rangle^{-1}$ [22].
Lemma 2.
Let $H^{(1)}, H^{(2)}, \ldots, H^{(l)}, \ldots$ be a sequence of nonnegative matrices in $\mathbb{R}^{n \times n}$ [23]. If there exist a real number $0 \le \theta < 1$ and a vector $\nu > 0$ in $\mathbb{R}^n$ such that
$$H^{(l)} \nu \le \theta \nu, \quad l = 1, 2, \ldots,$$
then $\rho(K_l) \le \theta^l < 1$, where $K_l = H^{(l)} H^{(l-1)} \cdots H^{(1)}$, and therefore $\lim_{l \to \infty} K_l = 0$.
Lemma 3.
Let $A = (a_{ij}) \in Z^{n \times n}$ have all positive diagonal entries [24]. Then $A$ is an $M$-matrix if and only if $\rho(B) < 1$, where $B = D^{-1} C$, $D = \mathrm{diag}(A)$, and $A = D - C$.
Lemma 4.
Let $A \in \mathbb{R}^{n \times n}$ be an $H_+$-matrix. Then the $\mathrm{LCP}(q, A)$ has a unique solution for any $q \in \mathbb{R}^n$ [7,9,25].
Lemma 5.
Let $A = M - N$ be a splitting of the matrix $A \in \mathbb{R}^{n \times n}$, $\Omega$ a positive diagonal matrix, and $\gamma$ a positive constant [10]. Then, for the $\mathrm{LCP}(q, A)$, the following statements hold:
(i) if $(z, r)$ is a solution of the $\mathrm{LCP}(q, A)$, then $x = \frac{1}{2} \gamma (z - \Omega^{-1} r)$ satisfies the implicit fixed-point equation
$$(\Omega + M) x = N x + (\Omega - A)|x| - \gamma q; \tag{2}$$
(ii) if $x$ satisfies the implicit fixed-point Equation (2), then
$$z = \gamma^{-1} (|x| + x), \quad r = \gamma^{-1} \Omega (|x| - x) \tag{3}$$
is a solution of the $\mathrm{LCP}(q, A)$.
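A small numerical illustration of why the modulus transform in part (ii) works (a Python sketch with arbitrary data, not the authors' code): for any vector $x$, the pair $z = \gamma^{-1}(|x|+x)$, $r = \gamma^{-1}\Omega(|x|-x)$ is automatically nonnegative and complementary, since $(|x_i|+x_i)(|x_i|-x_i) = x_i^2 - x_i^2 = 0$ componentwise; the fixed-point equation then only has to enforce $r = Az + q$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 2.0
Omega = np.diag(rng.uniform(1.0, 2.0, n))   # any positive diagonal Omega

x = rng.standard_normal(n)                  # an arbitrary vector x
z = (np.abs(x) + x) / gamma                 # z = gamma^{-1}(|x| + x)
r = Omega @ (np.abs(x) - x) / gamma         # r = gamma^{-1} Omega (|x| - x)

# Nonnegativity and complementarity hold for every x by construction:
assert np.all(z >= 0) and np.all(r >= 0)
assert np.allclose(z * r, 0)
```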

3. Global Modulus-Based Synchronous Multisplitting Multi-Parameters TOR Methods

Firstly, we introduce the idea of the multisplitting algorithm and the parallel iterative process. $\{ M_k, N_k, E_k \}_{k=1}^{l}$ is a multisplitting of $A$ if
(1) $A = M_k - N_k$ is a splitting for $k = 1, 2, \ldots, l$;
(2) $E_k \ge 0$ is a nonnegative diagonal matrix, called a weighting matrix;
(3) $\sum_{k=1}^{l} E_k = I$, where $I$ is the identity matrix.
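As a concrete illustration (a Python sketch; the particular splittings and weights below are arbitrary choices for demonstration, not from the paper), the three conditions can be verified for a small tridiagonal matrix with two overlapping triangular splittings:

```python
import numpy as np

n, l = 6, 2
# a small tridiagonal H_+-matrix
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# condition (1): two splittings A = M_k - N_k with nonsingular M_k
M = [np.tril(A), np.triu(A)]
N = [M[k] - A for k in range(l)]

# conditions (2)-(3): nonnegative diagonal weights with E_1 + E_2 = I
E = [np.diag(np.linspace(0.0, 1.0, n)), np.diag(np.linspace(1.0, 0.0, n))]

assert np.allclose(E[0] + E[1], np.eye(n))
for k in range(l):
    assert np.allclose(A, M[k] - N[k])
```

In a parallel implementation, the $k$-th processor works with its own splitting $(M_k, N_k)$, and the weighting matrices $E_k$ combine the local iterates into a global one.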
If $\Omega$ is a positive diagonal matrix and $\gamma$ is a positive constant, then from Lemma 5 we find that if $x$ satisfies the implicit fixed-point systems
$$(\Omega + M_k) x = N_k x + (\Omega - A)|x| - \gamma q, \quad k = 1, 2, \ldots, l, \tag{4}$$
then
$$z = \gamma^{-1} (|x| + x), \quad r = \gamma^{-1} \Omega (|x| - x) \tag{5}$$
is a solution of Equation (1).
Let
$$A = D - L_k - F_k - U_k, \quad k = 1, 2, \ldots, l,$$
where $D = \mathrm{diag}(A)$, $L_k$ and $F_k$ are strictly lower triangular matrices, and $U_k$ is determined so that $A = D - L_k - F_k - U_k$; then $(D - L_k - F_k, U_k, E_k)$ is a multisplitting of $A$. With the equivalent reformulations (4) and (5) and the TOR method applied to Equation (1), we obtain the global modulus-based synchronous multisplitting multi-parameters TOR algorithm (GMSMMTOR); see Method 1 below.
Method 1.
The GMSMMTOR algorithm for Equation (1). Let $(M_k, N_k, E_k)$ $(k = 1, 2, \ldots, l)$ be a multisplitting of the matrix $A \in \mathbb{R}^{n \times n}$. Given an initial value $x^{(0)} \in \mathbb{R}^n$, for $m = 0, 1, \ldots$ until the iteration sequence $\{ z^{(m)} \}_{m=0}^{\infty} \subset \mathbb{R}^n_+$ converges, compute $z^{(m+1)} \in \mathbb{R}^n_+$ by
$$z^{(m+1)} = \frac{1}{\gamma} \left( |x^{(m+1)}| + x^{(m+1)} \right)$$
and
$$x^{(m+1)} = \omega \sum_{k=1}^{l} E_k x^{(m,k)} + (1 - \omega) x^{(m)},$$
where $x^{(m,k)} \in \mathbb{R}^n$, $k = 1, 2, \ldots, l$, are obtained by solving the linear systems
$$[\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k)] x^{(m,k)} = [(1 - \alpha_k) D + (\alpha_k - \beta_k) L_k + (\alpha_k - \gamma_k) F_k + \alpha_k U_k] x^{(m)} + \alpha_k [(\Omega - A)|x^{(m)}| - \gamma q], \quad k = 1, 2, \ldots, l, \tag{6}$$
respectively.
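A minimal single-splitting instance of Method 1 can be sketched as follows (Python/NumPy, illustrative only: $l = 1$, $L$ the strict lower part of $A$, $F = 0$, $U$ the strict upper part, $\Omega = D$, and $E_1 = I$ are simplifying assumptions, under which the scheme reduces to a modulus-based AOR-type iteration):

```python
import numpy as np

def gmsmmtor(A, q, gamma=2.0, alpha=1.0, beta=1.0, gam=0.0,
             omega=1.0, tol=1e-5, maxit=500):
    """Single-splitting (l = 1) sketch of Method 1.

    Assumptions (not from the paper): L = strict lower part of A,
    F = 0, U = strict upper part, Omega = D, E_1 = I.
    """
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    F = np.zeros_like(A)
    U = -np.triu(A, 1)            # so that A = D - L - F - U
    Omega = D.copy()
    M = alpha * Omega + D - (beta * L + gam * F)   # coefficient matrix of (6)
    x = np.zeros(n)
    for it in range(maxit):
        rhs = ((1 - alpha) * D + (alpha - beta) * L
               + (alpha - gam) * F + alpha * U) @ x \
              + alpha * ((Omega - A) @ np.abs(x) - gamma * q)
        x = omega * np.linalg.solve(M, rhs) + (1 - omega) * x
        z = (np.abs(x) + x) / gamma
        if np.linalg.norm(np.minimum(A @ z + q, z)) <= tol:
            break
    return z, it + 1

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 6.0])
z, its = gmsmmtor(A, q)
print(z, its)   # the exact solution of this small LCP is z* = (0.5, 0)
```

With $l > 1$ splittings, the `rhs`/solve step runs once per splitting (in parallel), and the update combines the local solutions through the weighting matrices $E_k$ as in the second formula of Method 1.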
Remark 1.
In this paper, the TOR method has more splittings and parameters, so a faster convergence rate can be obtained by a suitable choice of parameters. In Method 1, when $\alpha_k = \alpha$, $\beta_k = \beta$, $\gamma_k = \gamma$, $\omega = 1$, the GMSMMTOR algorithm reduces to the MSMTOR (modulus-based synchronous multisplitting TOR) algorithm; when $\alpha_k = \alpha$, $\beta_k = \beta$, $\gamma_k = \gamma$, it reduces to the GMSMTOR (global modulus-based synchronous multisplitting TOR) algorithm; when $\gamma_k = 0$, $\omega = 1$, it reduces to the MSMMAOR (modulus-based synchronous multisplitting multi-parameters AOR) algorithm; when $\gamma_k = 0$, it reduces to the GMSMMAOR (global modulus-based synchronous multisplitting multi-parameters AOR) algorithm; when $\alpha_k = \alpha$, $\beta_k = \beta$, $\gamma_k = 0$, $\omega = 1$, it reduces to the MSMAOR (modulus-based synchronous multisplitting AOR) algorithm [27]; and when $\alpha_k = \alpha$, $\beta_k = \beta$, $\gamma_k = 0$, it reduces to the GMSMAOR (global modulus-based synchronous multisplitting AOR) algorithm.
Remark 2.
From Table 1, one can see that the GMSMMTOR algorithm is a generalization of the MSMMAOR algorithm. Moreover, by selecting proper parameters and weighting matrices $E_k$, a faster convergence rate can be obtained.

4. Convergence Analysis

In 2013, based on the modulus-based synchronous multisplitting AOR method, Bai and Zhang obtained the following theorem [27].
Theorem 1.
Let $A \in \mathbb{R}^{n \times n}$ be an $H_+$-matrix with $D = \mathrm{diag}(A)$ and $B = D - A$, and let $(M_k, N_k, E_k)$ $(k = 1, 2, \ldots, l)$ and $(D - L_k, U_k, E_k)$ $(k = 1, 2, \ldots, l)$ be a multisplitting and a triangular multisplitting of the matrix $A$, respectively [27]. Assume that $\gamma > 0$ and that the positive diagonal matrix $\Omega$ satisfies $\Omega \ge D$. If $A = D - L_k - U_k$ $(k = 1, 2, \ldots, l)$ satisfies $\langle A \rangle = D - |L_k| - |U_k|$ $(k = 1, 2, \ldots, l)$, then the iteration sequence $\{ z^{(m)} \}_{m=0}^{\infty}$ generated by the MSMAOR iteration method converges to the unique solution $z^*$ of $\mathrm{LCP}(q, A)$ for any initial vector $z^{(0)} \in \mathbb{R}^n_+$, provided the relaxation parameters $\alpha$ and $\beta$ satisfy
$$0 < \beta \le \alpha < \frac{1}{\rho(D^{-1}|B|)}.$$
In 2014, based on modulus-based synchronous multisplitting AOR algorithm, Zhang et al. [17] obtained Theorem 2.
Theorem 2.
Let $A \in \mathbb{R}^{n \times n}$ be an $H_+$-matrix with $D = \mathrm{diag}(A)$ and $B = D - A$, and let $(M_k, N_k, E_k)$ $(k = 1, 2, \ldots, l)$ and $(D - L_k, U_k, E_k)$ $(k = 1, 2, \ldots, l)$ be a multisplitting and a triangular multisplitting of the matrix $A$, respectively [17]. Assume that $\gamma > 0$ and that the positive diagonal matrix $\Omega$ satisfies $\Omega \ge D$. If $A = D - L_k - U_k$ $(k = 1, 2, \ldots, l)$ satisfies $\langle A \rangle = D - |L_k| - |U_k|$ $(k = 1, 2, \ldots, l)$, then the iteration sequence $\{ z^{(m)} \}_{m=0}^{\infty}$ generated by the MSMMAOR iteration method converges to the unique solution $z^*$ of $\mathrm{LCP}(q, A)$ for any initial vector $z^{(0)} \in \mathbb{R}^n_+$, provided the relaxation parameters $\alpha_k$ and $\beta_k$ satisfy
$$0 < \beta_k \le \alpha_k \le 1 \quad \text{or} \quad 0 < \beta_k < \frac{1}{\rho(D^{-1}|B|)}, \ 1 < \alpha_k < \frac{1}{\rho(D^{-1}|B|)}.$$
In 2008, based on the global relaxed non-stationary multisplitting multi-parameter TOR algorithm (GRNMMTOR) for large sparse linear systems [26], Zhang, Huang and Gu [28] obtained the following theorem:
Theorem 3.
Let $A$ be an $H$-matrix, and for $k = 1, 2, \ldots, l$, let $L_k$ and $F_k$ be strictly lower triangular matrices [26]. Define the matrices $U_k$, $k = 1, 2, \ldots, l$, such that $A = D - L_k - F_k - U_k$, and assume that $\langle A \rangle = |D| - |L_k| - |F_k| - |U_k| = |D| - |B|$. If
$$0 \le \beta_k \le \gamma_k, \quad 0 \le \alpha_k \le \gamma_k, \quad 0 < \gamma_k < \frac{2}{1 + \rho}, \quad 0 < \omega < \frac{2}{1 + \rho_{\gamma_k}},$$
then the GRNMMTOR method converges for any initial vector $x^{(0)}$, where $\rho = \rho(J)$, $J = |D|^{-1}|B|$, $\rho_{\gamma_k} = \max_{1 \le k \le l} \{ |1 - \gamma_k| + \gamma_k \rho_\epsilon \}$, $q(m, k) \ge 1$, $m = 0, 1, \ldots$, $k = 1, 2, \ldots, l$.
Based on the global modulus-based synchronous multisplitting multi-parameter TOR algorithm, we now establish wider convergence results for LCPs, as follows:
Theorem 4.
Let $A \in \mathbb{R}^{n \times n}$ be an $H_+$-matrix with $D = \mathrm{diag}(A)$ and $B = D - A$, and let $(M_k, N_k, E_k)$ $(k = 1, 2, \ldots, l)$ and $(D - L_k - F_k, U_k, E_k)$ $(k = 1, 2, \ldots, l)$ be a multisplitting and a triangular multisplitting of the matrix $A$, respectively. Assume that $\gamma > 0$ and that the positive diagonal matrix $\Omega$ satisfies $\Omega \ge D$. If $A = D - L_k - F_k - U_k$ $(k = 1, 2, \ldots, l)$ satisfies $\langle A \rangle = D - |L_k| - |F_k| - |U_k|$ $(k = 1, 2, \ldots, l)$, then the iteration sequence $\{ z^{(m)} \}_{m=0}^{\infty}$ generated by the GMSMMTOR iteration method converges to the unique solution $z^*$ of $\mathrm{LCP}(q, A)$ for any initial vector $z^{(0)} \in \mathbb{R}^n_+$, provided the relaxation parameters $\alpha_k$, $\beta_k$, $\gamma_k$ and $\omega$ satisfy
$$0 < \beta_k, \gamma_k \le \alpha_k \le 1, \ 0 < \omega < \frac{2}{1 + \bar\rho} \quad \text{or} \quad 0 < \beta_k, \gamma_k < \frac{1}{\rho(J)}, \ 1 < \alpha_k < \frac{1}{\rho(J)}, \ 0 < \omega < \frac{2}{1 + \bar\rho},$$
where $\rho(J) < 1$, $J = D^{-1}|B|$, $\bar\rho = \max_{1 \le k \le l} \{ 1 - 2\alpha_k + 2\alpha_k \rho_\epsilon, \ 2\delta_k \rho_\epsilon - 1, \ 2\alpha_k \rho_\epsilon - 1 \}$, and $\delta_k = \max\{\beta_k, \gamma_k\}$. Moreover, $\beta_k$ and $\gamma_k$ should both be greater than $\alpha_k$ or both be less than $\alpha_k$.
Proof.
From Lemma 3 and Equation (6), for the GMSMMTOR algorithm we have
$$(\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k)) x^* = [(1 - \alpha_k) D + (\alpha_k - \beta_k) L_k + (\alpha_k - \gamma_k) F_k + \alpha_k U_k] x^* + \alpha_k [(\Omega - A)|x^*| - \gamma q], \quad k = 1, 2, \ldots, l. \tag{8}$$
Subtracting Equation (8) from Equation (6), we obtain
$$x^{(m,k)} - x^* = (\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k))^{-1} [(1 - \alpha_k) D + (\alpha_k - \beta_k) L_k + (\alpha_k - \gamma_k) F_k + \alpha_k U_k] (x^{(m)} - x^*) + (\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k))^{-1} \alpha_k (\Omega - A)(|x^{(m)}| - |x^*|), \quad k = 1, 2, \ldots, l,$$
so the error of the GMSMMTOR algorithm satisfies
$$x^{(m+1)} - x^* = \omega \sum_{k=1}^{l} E_k (\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k))^{-1} [(1 - \alpha_k) D + (\alpha_k - \beta_k) L_k + (\alpha_k - \gamma_k) F_k + \alpha_k U_k] (x^{(m)} - x^*) + \omega \sum_{k=1}^{l} E_k (\alpha_k \Omega + D - (\beta_k L_k + \gamma_k F_k))^{-1} \alpha_k (\Omega - A)(|x^{(m)}| - |x^*|) + (1 - \omega)(x^{(m)} - x^*). \tag{9}$$
Equation (9) is the basis for discussing the convergence of the GMSMMTOR algorithm. Taking absolute values on both sides of Equation (9), using $||x^{(m)}| - |x^*|| \le |x^{(m)} - x^*|$, defining $\epsilon^{(m)} = x^{(m)} - x^*$ and collecting like terms, we have
$$|\epsilon^{(m+1)}| = |x^{(m+1)} - x^*| \le H_{GMSMMTOR} |x^{(m)} - x^*|, \tag{10}$$
where
$$H_{GMSMMTOR} = \omega \sum_{k=1}^{l} E_k (\alpha_k \Omega + D - (\beta_k |L_k| + \gamma_k |F_k|))^{-1} [|1 - \alpha_k| D + |\alpha_k - \beta_k||L_k| + |\alpha_k - \gamma_k||F_k| + \alpha_k |U_k| + \alpha_k |\Omega - A|] + |1 - \omega| I. \tag{11}$$
Case 1.
Suppose $0 < \beta_k, \gamma_k \le \alpha_k \le 1$ and $0 < \omega < \frac{2}{1 + \bar\rho}$. We define
$$M_k = \alpha_k \Omega + D - (\beta_k |L_k| + \gamma_k |F_k|), \quad N_k^{(1)} = (1 - \alpha_k) D + (\alpha_k - \beta_k)|L_k| + (\alpha_k - \gamma_k)|F_k| + \alpha_k |U_k| + \alpha_k |\Omega - A|. \tag{12}$$
By Equation (12), $\Omega \ge D$ and $\langle A \rangle = D - |B|$, the diagonal part of $|\Omega - A|$ is $\Omega - D$ and the off-diagonal part is $|B|$. So $|\Omega - A| = (\Omega - D) + |B|$ and $|B| = |L_k| + |F_k| + |U_k|$, $k = 1, 2, \ldots, l$; hence $N_k^{(1)} = M_k - 2\alpha_k D + 2\alpha_k |B|$. So
$$M_k^{-1} N_k^{(1)} = M_k^{-1}(M_k - 2\alpha_k D + 2\alpha_k |B|) = I - 2\alpha_k M_k^{-1}(D - |B|),$$
and
$$|M_k^{-1} N_k^{(1)}| \le M_k^{-1}[M_k - 2\alpha_k (D - |B|)] = I - 2\alpha_k M_k^{-1} D (I - D^{-1}|B|).$$
Let $e$ denote the vector $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Since $J$ is a nonnegative matrix, the matrix $J + \epsilon e e^T$ has only positive entries and is irreducible for any $\epsilon > 0$. By the Perron–Frobenius theorem, for any $\epsilon > 0$ there is a vector $x_\epsilon > 0$ such that
$$(J + \epsilon e e^T) x_\epsilon = \rho_\epsilon x_\epsilon,$$
where $\rho_\epsilon = \rho(J + \epsilon e e^T) = \rho(J_\epsilon)$. Moreover, if $\epsilon > 0$ is small enough, then $\rho_\epsilon < 1$ by continuity of the spectral radius. Since $0 < \alpha_k \le 1$, we also obtain $1 - 2\alpha_k + 2\alpha_k \rho < 1$ and $1 - 2\alpha_k + 2\alpha_k \rho_\epsilon < 1$. So
$$|M_k^{-1} N_k^{(1)}| \le I - 2\alpha_k M_k^{-1} D [I - (D^{-1}|B| + \epsilon e e^T)] = I - 2\alpha_k M_k^{-1} D [I - J_\epsilon].$$
Multiplying both sides by $x_\epsilon$, and using $M_k^{-1} D \le I$, we have
$$|M_k^{-1} N_k^{(1)}| x_\epsilon \le x_\epsilon - 2\alpha_k M_k^{-1} D [1 - \rho(J_\epsilon)] x_\epsilon \le x_\epsilon - 2\alpha_k [1 - \rho(J_\epsilon)] x_\epsilon = (1 - 2\alpha_k + 2\alpha_k \rho(J_\epsilon)) x_\epsilon.$$
By Equation (11), we have
$$|H_{GMSMMTOR}| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (1 - 2\alpha_k + 2\alpha_k \rho(J_\epsilon)) x_\epsilon + |1 - \omega| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (1 - 2\alpha_k + 2\alpha_k \rho_\epsilon) x_\epsilon + |1 - \omega| x_\epsilon = (\omega \rho_1 + |1 - \omega|) x_\epsilon = \theta_1 x_\epsilon \quad (\epsilon \to 0),$$
where $\theta_1 = \omega \rho_1 + |1 - \omega| < 1$ and $\rho_1 = \sum_{k=1}^{l} E_k (1 - 2\alpha_k + 2\alpha_k \rho_\epsilon)$.
Case 2.
Suppose $0 < \beta_k, \gamma_k < \frac{1}{\rho(D^{-1}|B|)}$, $1 < \alpha_k < \frac{1}{\rho(D^{-1}|B|)}$, and $0 < \omega < \frac{2}{1 + \bar\rho}$.
Subcase 2.1:
$\alpha_k \ge \beta_k$ and $\alpha_k \ge \gamma_k$. We define
$$N_k^{(2)} = (\alpha_k - 1) D + (\alpha_k - \beta_k)|L_k| + (\alpha_k - \gamma_k)|F_k| + \alpha_k |U_k| + \alpha_k |\Omega - A| = M_k - 2D + 2\alpha_k |B|.$$
So
$$|M_k^{-1} N_k^{(2)}| \le M_k^{-1}[M_k - 2(D - \alpha_k |B|)] = I - 2 M_k^{-1} D (I - \alpha_k D^{-1}|B|).$$
Similarly to Case 1, let $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$ and $x_\epsilon > 0$ be such that $J_\epsilon x_\epsilon = (J + \epsilon e e^T) x_\epsilon = \rho(J_\epsilon) x_\epsilon$. Moreover, if $\epsilon > 0$ is small enough, then $\rho_\epsilon < 1$ by continuity of the spectral radius. Since $1 < \alpha_k < \frac{1}{\rho(D^{-1}|B|)}$, we obtain
$$2\alpha_k \rho - 1 < 1 \quad \text{and} \quad 2\alpha_k \rho_\epsilon - 1 < 1,$$
so
$$|M_k^{-1} N_k^{(2)}| \le I - 2 M_k^{-1} D [I - \alpha_k (D^{-1}|B| + \epsilon e e^T)] = I - 2 M_k^{-1} D [I - \alpha_k J_\epsilon].$$
Multiplying both sides by $x_\epsilon$, and using $M_k^{-1} D \le I$, we have
$$|M_k^{-1} N_k^{(2)}| x_\epsilon \le x_\epsilon - 2 M_k^{-1} D [1 - \alpha_k \rho(J_\epsilon)] x_\epsilon \le x_\epsilon - 2 (1 - \alpha_k \rho(J_\epsilon)) x_\epsilon = (2\alpha_k \rho(J_\epsilon) - 1) x_\epsilon.$$
By Equation (11), we have
$$|H_{GMSMMTOR}| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (2\alpha_k \rho(J_\epsilon) - 1) x_\epsilon + |1 - \omega| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (2\alpha_k \rho_\epsilon - 1) x_\epsilon + |1 - \omega| x_\epsilon = (\omega \rho_2 + |1 - \omega|) x_\epsilon = \theta_2 x_\epsilon \quad (\epsilon \to 0),$$
where $\theta_2 = \omega \rho_2 + |1 - \omega| < 1$ and $\rho_2 = \sum_{k=1}^{l} E_k (2\alpha_k \rho_\epsilon - 1)$.
Subcase 2.2:
$\alpha_k \le \beta_k$ and $\alpha_k \le \gamma_k$. We define
$$N_k^{(3)} = (\alpha_k - 1) D + (\beta_k - \alpha_k)|L_k| + (\gamma_k - \alpha_k)|F_k| + \alpha_k |U_k| + \alpha_k |\Omega - A| = M_k - 2D + 2\beta_k |L_k| + 2\gamma_k |F_k| + 2\alpha_k |U_k| \le M_k - 2D + 2\delta_k |B|,$$
where $\delta_k = \max\{\beta_k, \gamma_k\}$, so
$$|M_k^{-1} N_k^{(3)}| \le M_k^{-1}[M_k - 2(D - \delta_k |B|)] = I - 2 M_k^{-1} D (I - \delta_k D^{-1}|B|).$$
Similarly to Case 1, let $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$ and $x_\epsilon > 0$ be such that $J_\epsilon x_\epsilon = (J + \epsilon e e^T) x_\epsilon = \rho(J_\epsilon) x_\epsilon$. Furthermore, if $\epsilon > 0$ is small enough, then $\rho_\epsilon < 1$ by continuity of the spectral radius. Since $0 < \beta_k, \gamma_k < \frac{1}{\rho(D^{-1}|B|)}$, we obtain
$$2\delta_k \rho - 1 < 1 \quad \text{and} \quad 2\delta_k \rho_\epsilon - 1 < 1,$$
so
$$|M_k^{-1} N_k^{(3)}| \le I - 2 M_k^{-1} D [I - \delta_k (D^{-1}|B| + \epsilon e e^T)] = I - 2 M_k^{-1} D [I - \delta_k J_\epsilon].$$
Multiplying both sides by $x_\epsilon$, and using $M_k^{-1} D \le I$, we have
$$|M_k^{-1} N_k^{(3)}| x_\epsilon \le x_\epsilon - 2 (1 - \delta_k \rho(J_\epsilon)) x_\epsilon = (2\delta_k \rho(J_\epsilon) - 1) x_\epsilon.$$
By Equation (11), we have
$$|H_{GMSMMTOR}| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (2\delta_k \rho(J_\epsilon) - 1) x_\epsilon + |1 - \omega| x_\epsilon \le \omega \sum_{k=1}^{l} E_k (2\delta_k \rho_\epsilon - 1) x_\epsilon + |1 - \omega| x_\epsilon = (\omega \rho_3 + |1 - \omega|) x_\epsilon = \theta_3 x_\epsilon \quad (\epsilon \to 0),$$
where $\theta_3 = \omega \rho_3 + |1 - \omega| < 1$ and $\rho_3 = \sum_{k=1}^{l} E_k (2\delta_k \rho_\epsilon - 1)$. In every case, $|H_{GMSMMTOR}| x_\epsilon \le \theta x_\epsilon$ with $0 \le \theta < 1$, so Lemma 2 yields convergence. ☐
Remark 3.
Obviously, the conditions of Theorem 4 in this paper are wider than those of Theorem 2.3 in [28]. Furthermore, we have more choices for the splitting $A = B - C$ that make the multisplitting iterative methods converge, so the convergence results are more general in applications.
Remark 4.
In this paper, the GMSMMTOR algorithm is also a generalization of the MSMAOR method in [27] and the MSMMAOR algorithm in [17].

5. Numerical Experiments

In this section, numerical examples are used to illustrate the feasibility and effectiveness of the global modulus-based synchronous multisplitting multi-parameter AOR methods (GMSMMAOR) ($F = U$) in terms of the iteration count (denoted by IT), the computing time (denoted by CPU), and the norm of the absolute residual vectors (denoted by RES). Here, RES is defined as
$$\mathrm{RES}(z^{(k)}) = \| \min(A z^{(k)} + q, z^{(k)}) \|_2,$$
where $z^{(k)}$ is the $k$th approximate solution of the $\mathrm{LCP}(q, A)$ and the minimum is taken componentwise [10].
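This residual is a natural stopping criterion because $z$ solves the LCP exactly if and only if $\min(Az + q, z) = 0$. A direct transcription (Python sketch; the data below are made up for illustration):

```python
import numpy as np

def res(A, q, z):
    """RES(z) = || min(Az + q, z) ||_2, with the componentwise minimum."""
    return np.linalg.norm(np.minimum(A @ z + q, z))

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-2.0, 6.0])
print(res(A, q, np.array([0.5, 0.0])))   # z = (0.5, 0) solves this LCP -> 0.0
```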
In our numerical computations, to compare the GMSMMAOR method with the modulus-based synchronous multisplitting method (MSMAOR), all initial vectors are chosen as
$$x^{(0)} = (1, 0, 1, 0, \ldots, 1, 0, \ldots)^T \in \mathbb{R}^n,$$
all runs are performed in MATLAB 7.0 (MathWorks, Natick, MA, USA) with double machine precision, and all iterations are terminated when $\mathrm{RES}(z^{(k)}) \le 10^{-5}$. In the table, $\alpha$ and $\beta$ denote the iteration parameters of the GMSMMAOR and MSMAOR methods. In addition, we take $\Omega = \frac{1}{2}\alpha D$, as in [10], for the GMSMMAOR and MSMAOR methods. In particular, when the parameter pair $(\alpha_k, \beta_k)$ is chosen as $(\alpha_k, \alpha_k)$, $(1, 1)$, and $(1, 0)$, respectively, the GMSMMAOR method gives the so-called GMSMMSOR (global modulus-based synchronous multisplitting multi-parameters SOR), GMSMGS (global modulus-based synchronous multisplitting Gauss–Seidel), and GMSMJ (global modulus-based synchronous multisplitting Jacobi) methods, correspondingly. For convenience, we let $\alpha_k = \alpha$, $\beta_k = \beta$, $\gamma = 2$, $\omega = 1$, $k = 1$.
Let $m$ be a prescribed positive integer and $n = m^2$. Consider the $\mathrm{LCP}(q, A)$ in which $A \in \mathbb{R}^{n \times n}$ is given by $A = \hat{A} + \mu I$ and $q \in \mathbb{R}^n$ is given by $q = -A z^*$, where
$$\hat{A} = \mathrm{tridiag}(rI, S, tI) \in \mathbb{R}^{n \times n}$$
is a block-tridiagonal matrix whose diagonal blocks are the $m \times m$ tridiagonal matrix
$$S = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m \times m},$$
and
$$z^* = (1, 2, 1, 2, \ldots, 1, 2, \ldots)^T \in \mathbb{R}^n$$
is the unique solution of the $\mathrm{LCP}(q, A)$; see [10] for more details.
For the symmetric case, we take $r = t = -1$, as considered in [10]. In this case, the system matrix $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite for $\mu \ge 0$, so the $\mathrm{LCP}(q, A)$ has a unique solution.
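The symmetric test problem can be assembled as follows (a Python sketch of the construction above; since $z^*$ is strictly positive, complementarity forces $Az^* + q = 0$, so $q = -Az^*$):

```python
import numpy as np

def build_problem(m, mu):
    """n = m^2, S = tridiag(-1, 4, -1), A = tridiag(-I, S, -I) + mu*I,
    z* = (1, 2, 1, 2, ...)^T; q = -A z* so that z* solves the LCP."""
    n = m * m
    S = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    A_hat = (np.kron(np.eye(m), S)
             - np.kron(np.eye(m, k=1), np.eye(m))     # superdiagonal -I blocks
             - np.kron(np.eye(m, k=-1), np.eye(m)))   # subdiagonal -I blocks
    A = A_hat + mu * np.eye(n)
    z_star = np.tile([1.0, 2.0], n // 2 + 1)[:n]
    return A, -A @ z_star, z_star

A, q, z_star = build_problem(4, 0.5)
assert np.allclose(A, A.T)                                   # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)                     # positive definite
assert np.allclose(np.minimum(A @ z_star + q, z_star), 0)    # z* solves the LCP
```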
In Table 2, the iteration steps, CPU times, and residual norms of the GMSMMAOR and MSMAOR methods for the symmetric case are listed for different parameters and different problem sizes $m$. When both methods are applied to solve the $\mathrm{LCP}(q, A)$, the iteration parameters $\alpha, \beta$ of the MSMAOR method satisfy both Theorem 4.1 in [27] and Theorem 2 in this paper, whereas the iteration parameters $\alpha, \beta$ of the GMSMMAOR method satisfy only Theorem 2 in this paper and not Theorem 4.1 in [27].
From Table 2, for the GMSMMAOR and MSMAOR methods with $\alpha = 1$, $\beta = 1.2$ and $\alpha = 1$, $\beta = 0.7$, respectively, and for a fixed value of $\mu$, it is easy to see that the iteration steps do not change as the problem size $m$ increases; however, the CPU times increase with $m$. Moreover, for fixed $m$, the iteration steps and CPU times decrease as the parameter $\mu$ increases. In our numerical experiments, we find that the iteration steps and CPU times of GMSMMAOR are less than those of MSMAOR under certain conditions.

6. Conclusions

In this paper, global modulus-based synchronous multisplitting multi-parameters TOR methods have been established, and their convergence properties are discussed in detail when the system matrix is either a symmetric positive-definite matrix or an $H_+$-matrix. Numerical experiments show that the GMSMMTOR methods are feasible and effective under certain conditions.

Acknowledgments

This research is supported by NSFC Tianyuan Mathematics Youth Fund (11226337), NSFC (11501525, 11471098, 61203179, 61202098, 61170309, 91130024, 61272544, 61472462 and 11171039), Science Technology Innovation Talents in Universities of Henan Province (16HASTIT040, 17HASTIT012), Aeronautical Science Foundation of China (2013ZD55006, 2016ZG55019), Project of Youth Backbone Teachers of Colleges and Universities in Henan Province (2013GGJS-142, 2015GGJS-179), ZZIA Innovation Team Fund (2014TD02), Major Project of the Development Foundation of Science and Technology of CAEP (2012A0202008), Defense Industrial Technology Development Program, China Postdoctoral Science Foundation (2014M552001), Basic and Advanced Technological Research Project of Henan Province (152300410126), Henan Province Postdoctoral Science Foundation (2013031), and Natural Science Foundation of Zhengzhou City (141PQYJS560).

Author Contributions

Litao Zhang completed the whole paper, Tongxiang Gu revised the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cottle, R.W.; Pang, J.-S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: San Diego, CA, USA, 1992. [Google Scholar]
  2. Ferris, M.C.; Pang, J.-S. Engineering and economic applications of complementarity problems. SIAM. Rev. 1997, 39, 669–713. [Google Scholar] [CrossRef]
  3. Murty, K.G. Linear Complementarity, Linear and Nonlinear Programming; Heldermann: Berlin, Germany, 1997. [Google Scholar]
  4. O'Leary, D.P.; White, R.E. Multisplittings of matrices and parallel solution of linear systems. SIAM J. Alg. Disc. Meth. 1985, 6, 630–640. [Google Scholar] [CrossRef]
  5. Bai, Z.-Z. On the convergence of the multisplitting methods for the linear complementarity problem. SIAM. J. Matrix Anal. Appl. 1999, 21, 67–78. [Google Scholar] [CrossRef]
  6. Bai, Z.-Z. The convergence of parallel iteration algorithms for linear complementarity problems. Comput. Math. Appl. 1996, 32, 1–17. [Google Scholar] [CrossRef]
  7. Bai, Z.-Z.; Evans, D.J. Matrix multisplitting relaxation methods for linear complementarity problems. Int. J. Comput. Math. 1997, 63, 309–326. [Google Scholar] [CrossRef]
  8. Bai, Z.-Z. On the monotone convergence of matrix multisplitting relaxation methods for the linear complementarity problem. IMA J. Numer. Anal. 1998, 18, 509–518. [Google Scholar] [CrossRef]
  9. Bai, Z.-Z.; Evans, D.J. Matrix multisplitting methods with applications to linear complementarity problems: Parallel synchronous and chaotic methods. Reseaux Syst. Repartis Calculateurs Paralleles 2001, 13, 125–154. [Google Scholar] [CrossRef]
  10. Bai, Z.-Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933. [Google Scholar] [CrossRef]
  11. Bai, Z.-Z.; Zhang, L.-L. Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems. Numer. Algorithms 2013, 62, 59–77. [Google Scholar] [CrossRef]
  12. Van Bokhoven, W.M.G. Piecewise-Linear Modelling and Analysis. Ph.D. Thesis, Eindhoven University of Technology, Eindhoven, The Netherlands, 1981. [Google Scholar]
  13. Dong, J.-L.; Jiang, M.-Q. A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 2009, 16, 129–143. [Google Scholar] [CrossRef]
  14. Hadjidimos, A.; Tzoumas, M. Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem. Linear Algebra Appl. 2009, 431, 197–210. [Google Scholar] [CrossRef]
  15. Zhang, L.-L.; Ren, Z.-R. Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems. Appl. Math. Lett. 2013, 26, 638–642. [Google Scholar] [CrossRef]
  16. Li, W. A general modulus-based matrix splitting method for linear complementarity problems of H-matrices. Appl. Math. Lett. 2013, 26, 1159–1164. [Google Scholar] [CrossRef]
  17. Zhang, L.-T.; Li, J.-L. The weaker convergence of modulus-based synchronous multisplitting multi-parameters methods for linear complementarity problems. Comput. Math. Appl. 2014, 67, 1954–1959. [Google Scholar] [CrossRef]
  18. Zhang, L.-T.; Zuo, X.-Y.; Gu, T.-X.; Liu, X.-P. Improved convergence theorems of multisplitting methods for the linear complementarity problem. Appl. Math. Comput. 2014, 243, 982–987. [Google Scholar] [CrossRef]
  19. Zhang, L.-T.; Zhang, Y.-X.; Gu, T.-X.; Liu, X.-P.; Zhang, L.-W. New convergence of modulus-based synchronous block multisplitting multi-parameters methods for linear complementarity problems. Comput. Appl. Math. 2015. [Google Scholar] [CrossRef]
  20. Berman, A.; Plemmons, R.J. Nonnegative Matrices in the Mathematical Sciences; Academic Press: New York, NY, USA, 1979. [Google Scholar]
  21. Varga, R.S. Matrix Iterative Analysis; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  22. Frommer, A.; Mayer, G. Convergence of relaxed parallel multisplitting methods. Linear Algebra Appl. 1989, 119, 141–152. [Google Scholar] [CrossRef]
  23. Robert, F.; Charnay, M.; Musy, F. Iterations chaotiques serie-parallel pour des equations non-lineaires de point fixe. Aplikace Matematiky 1975, 20, 1–38. [Google Scholar]
  24. Young, D.M. Iterative Solution of Large Linear Systems; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  25. Bai, Z.-Z.; Evans, D.J. Matrix multisplitting methods with applications to linear complementarity problems: Parallel asynchronous methods. Int. J. Comput. Math. 2002, 79, 205–232. [Google Scholar] [CrossRef]
  26. Zhang, L.-T.; Huang, T.-Z.; Cheng, S.-H.; Gu, T.-X. The weaker convergence of non-stationary matrix multisplitting methods for almost linear systems. Taiwan. J. Math. 2011, 15, 1423–1436. [Google Scholar]
  27. Bai, Z.-Z.; Zhang, L.-L. Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2013, 20, 425–439. [Google Scholar] [CrossRef]
  28. Zhang, L.-T.; Huang, T.-Z.; Gu, T.-X. Global relaxed non-stationary multisplitting multi-parameter methods. Int. J. Comput. Math. 2008, 85, 211–224. [Google Scholar] [CrossRef]
Table 1. The relaxed modulus-based synchronous multisplitting multi-parameter algorithms and the corresponding convergence conditions.

MSMJ ($\alpha_k = 1$, $\beta_k = 0$, $\omega = 1$): modulus-based synchronous multisplitting Jacobi algorithm [27].
MSMGS ($\alpha_k = \beta_k = 1$, $\omega = 1$): modulus-based synchronous multisplitting Gauss–Seidel algorithm [27].
MSMSOR ($0 < \alpha(\alpha_k) = \beta(\beta_k) < \frac{1}{\rho(D^{-1}|B|)}$, $\omega = 1$): modulus-based synchronous multisplitting SOR algorithm [27].
MSMAOR ($0 < \beta(\beta_k) \le \alpha(\alpha_k) < \frac{1}{\rho(D^{-1}|B|)}$): modulus-based synchronous multisplitting AOR algorithm [27].
MSMMAOR ($\omega = 1$, and $0 < \beta_k \le \alpha_k \le 1$ or $0 < \beta_k < \frac{1}{\rho(D^{-1}|B|)}$, $1 < \alpha_k < \frac{1}{\rho(D^{-1}|B|)}$): modulus-based synchronous multisplitting multi-parameters AOR algorithm [17].
GMSMMTOR ($0 < \beta_k, \gamma_k \le \alpha_k \le 1$, $0 < \omega < \frac{2}{1+\bar\rho}$, or $0 < \beta_k, \gamma_k < \frac{1}{\rho(D^{-1}|B|)}$, $1 < \alpha_k < \frac{1}{\rho(D^{-1}|B|)}$, $0 < \omega < \frac{2}{1+\bar\rho}$): global modulus-based synchronous multisplitting multi-parameter TOR algorithm (this paper),

where $\bar\rho = \max_{1 \le k \le l} \{ 1 - 2\alpha_k + 2\alpha_k \rho_\epsilon, \ 2\delta_k \rho_\epsilon - 1, \ 2\alpha_k \rho_\epsilon - 1 \}$ and $\delta_k = \max\{\beta_k, \gamma_k\}$.
Table 2. IT, CPU and Error for GMSMMAOR and MSMAOR with different parameters in the symmetric case.

m                                      20          30          40          50          60
μ = 0.5, GMSMMAOR (α = 1, β = 1.2)
  IT                                   22          22          22          22          22
  CPU                                  0.1560      0.7800      2.4336      5.9280      12.4957
  Error                                7.2225e-6   7.2598e-6   7.2970e-6   7.3390e-6   7.3707e-6
μ = 0.5, MSMAOR (α = 1, β = 0.7)
  IT                                   30          30          30          31          31
  CPU                                  0.2184      1.0764      3.2916      8.3773      17.6905
  Error                                9.7188e-6   9.8399e-6   9.9531e-6   7.3792e-6   7.4496e-6
μ = 1.5, GMSMMAOR (α = 1, β = 1.2)
  IT                                   19          19          19          19          19
  CPU                                  0.1716      0.6552      2.0748      5.0856      10.7797
  Error                                6.6884e-6   6.8943e-6   7.0943e-6   7.2888e-6   7.4782e-6
μ = 1.5, MSMAOR (α = 1, β = 0.7)
  IT                                   23          24          24          24          24
  CPU                                  0.1716      0.8424      2.6520      6.4584      13.6657
  Error                                9.5945e-6   6.6969e-6   7.0677e-6   7.4200e-6   7.7563e-6
μ = 2.5, GMSMMAOR (α = 1, β = 1.2)
  IT                                   17          17          17          17          17
  CPU                                  0.1404      0.6084      1.8720      4.5552      9.6565
  Error                                7.6793e-6   8.2513e-6   8.7861e-6   9.2902e-6   9.7683e-6
μ = 2.5, MSMAOR (α = 1, β = 0.7)
  IT                                   20          20          20          21          21
  CPU                                  0.1404      0.7020      2.2932      5.6472      11.9341
  Error                                8.3861e-6   9.4078e-6   6.1592e-6   6.6458e-6   7.0992e-6

IT: iteration count; CPU: computing time (s); Error: norm of residual vectors; m: problem size; μ ≥ 0 is a parameter giving different matrices A.
