Article

Extension of Extragradient Techniques for Variational Inequalities

Yonghong Yao, Ke Wang, Xiaowei Qin and Li-Jun Zhu

1 School of Mathematical Sciences, Tianjin Polytechnic University, Tianjin 300387, China
2 The Key Laboratory of Intelligent Information and Data Processing of NingXia Province, North Minzu University, Yinchuan 750021, China
3 Health Big Data Research Institute of North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 111; https://doi.org/10.3390/math7020111
Submission received: 4 December 2018 / Revised: 15 January 2019 / Accepted: 18 January 2019 / Published: 22 January 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract

An extragradient-type method for finding common solutions of two variational inequalities is proposed. A strong convergence result for the algorithm is established under mild conditions on the algorithm parameters.

1. Introduction

Let $H$ be a real Hilbert space equipped with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. Let $C \subset H$ be a closed and convex set and let $A : C \to H$ be a mapping. Recall that the variational inequality (VI) seeks an element $x^{*} \in C$ such that

$$\langle A x^{*}, z - x^{*} \rangle \ge 0, \quad \forall z \in C. \tag{1}$$

The solution set of (1) is denoted by $VI(C, A)$.
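As a quick illustration of (1) (our own toy example, not taken from the paper), consider $H = \mathbb{R}$, $C = [0, 1]$, and $A(x) = x - 2$. Then

$$\langle A x^{*}, z - x^{*} \rangle = (x^{*} - 2)(z - x^{*}) \ge 0 \quad \forall z \in [0, 1] \iff x^{*} = 1,$$

so $VI(C, A) = \{1\}$, consistent with the standard fixed-point characterization $x^{*} = \mathrm{proj}_C[x^{*} - \lambda A x^{*}]$ for every $\lambda > 0$, which is used repeatedly below.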
The problem (1), introduced and studied by Stampacchia [1], is applied as a useful tool and model for a multitude of problems. A large number of methods for solving the VI (1) are projection methods, which implement projections onto the feasible set of (1), or onto some related set, in order to reach a solution. Many iterative methods for solving the VI (1) have been proposed; see, e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. A basic one is the natural extension of the gradient projection algorithm for solving the optimization problem $\min_{x \in C} f(x)$: for $x_0 \in C$, calculate the sequence $\{x_n\}$ iteratively through
$$x_{n+1} = \mathrm{proj}_C[x_n - \alpha_n \nabla f(x_n)], \quad n \ge 0, \tag{2}$$

where $\mathrm{proj}_C$ is the metric projection onto $C$ and $\alpha_n > 0$ is the step size.
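As a minimal numerical sketch of iteration (2) (our own illustration, not part of the paper), take $C$ to be the Euclidean unit ball, for which the metric projection has the closed form $x / \max\{1, \|x\|\}$:

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto C = {x : ||x|| <= 1} (closed form for this choice of C)."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def gradient_projection(grad_f, x0, step, n_iter=500):
    """Iteration (2): x_{n+1} = proj_C[x_n - alpha_n * grad f(x_n)]."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        x = proj_unit_ball(x - step(n) * grad_f(x))
    return x

# Minimize f(x) = ||x - b||^2 / 2 over the unit ball; the minimizer is proj_C(b).
b = np.array([2.0, 0.0])
x_hat = gradient_projection(lambda x: x - b, np.zeros(2), step=lambda n: 0.5)
# x_hat is approximately [1, 0] = proj_C(b)
```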
Korpelevich [31] introduced an iterative method for solving the VI (1), known as the extragradient method ([7]). In Korpelevich’s method, two projections are used to compute the next iterate: for the current iterate $x_n$, compute

$$y_n = \mathrm{proj}_C[x_n - \xi A x_n], \qquad x_{n+1} = \mathrm{proj}_C[x_n - \xi A y_n], \quad n \ge 0, \tag{3}$$

where $\xi > 0$ is a fixed number.
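For comparison, here is a sketch of Korpelevich’s two-projection step (3) in the same toy setting (our own illustration, reusing proj_unit_ball and b from the previous snippet; the restriction $\xi < 1/L$ for an $L$-Lipschitz monotone $A$ is the standard step-size condition):

```python
def extragradient(A, x0, xi, n_iter=500):
    """Korpelevich's method (3): a predictor projection followed by a corrector
    projection, both onto C (here the unit ball)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = proj_unit_ball(x - xi * A(x))  # predictor: y_n = proj_C[x_n - xi A x_n]
        x = proj_unit_ball(x - xi * A(y))  # corrector: x_{n+1} = proj_C[x_n - xi A y_n]
    return x

# A(x) = x - b is monotone and 1-Lipschitz, so any xi in (0, 1) works here.
x_vi = extragradient(lambda x: x - b, np.zeros(2), xi=0.5)
```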
Korpelevich’s method has received much attention from scholars, who have improved it in several ways; see, e.g., [32,33,34,35,36]. It is now known that Korpelevich’s method (3) achieves only weak convergence in infinite-dimensional spaces ([37,38]). In order to obtain strong convergence, Korpelevich’s method has been adapted by many mathematicians. For example, in [32], it is shown that several extragradient-type methods converge strongly to an element of $VI(C, A)$.
Very recently, Censor, Gibali and Reich [39] presented an alternating method for finding common solutions of two variational inequalities. In [40], Zaslavski studied an extragradient method for finding common solutions of a finite family of variational inequalities.
Inspired by the work described above, in this article we present an extragradient-type method for finding common solutions of two variational inequalities. We prove the strong convergence of the proposed method under mild assumptions on the parameters.

2. Preliminaries

Let $H$ be a real Hilbert space and let $C \subset H$ be a nonempty, closed, and convex set.
Definition 1.
An operator $A : C \to H$ is called Lipschitz if

$$\|A u - A v\| \le L \|u - v\|, \quad \forall u, v \in C,$$

where $L > 0$ is a constant. If $L = 1$, then $A$ is said to be nonexpansive.
Definition 2.
An operator $A : C \to H$ is called inverse strongly monotone if

$$\alpha \|A u - A v\|^2 \le \langle A u - A v, u - v \rangle, \quad \forall u, v \in C,$$

where $\alpha > 0$ is a constant. In this case, $A$ is said to be $\alpha$-inverse strongly monotone.
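A standard source of such operators (a known fact, not stated in the paper): if $f$ is convex and differentiable with an $L$-Lipschitz gradient, then $\nabla f$ is $\frac{1}{L}$-inverse strongly monotone by the Baillon–Haddad theorem. A quick numerical sanity check of Definition 2 for a linear example (our own sketch):

```python
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])   # symmetric positive definite
A = lambda x: M @ x                       # A = gradient of (1/2) x^T M x
alpha = 1.0 / np.linalg.norm(M, 2)        # expected modulus: 1/L with L = ||M||

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    lhs = alpha * np.linalg.norm(A(u) - A(v)) ** 2
    rhs = float(np.dot(A(u) - A(v), u - v))
    assert lhs <= rhs + 1e-9              # Definition 2 holds on random samples
```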
Proposition 1
([41]). If $C$ is a bounded, closed, and convex subset of a real Hilbert space $H$ and $A : C \to H$ is an inverse strongly monotone operator, then $VI(C, A) \ne \emptyset$.
For fixed $z \in H$, there exists a unique $z^{*} \in C$ satisfying

$$\|z - z^{*}\| = \inf\{\|z - \tilde{z}\| : \tilde{z} \in C\}.$$

We denote $z^{*}$ by $\mathrm{proj}_C z$. The following inequality is an important property of the projection $\mathrm{proj}_C$: for given $x \in H$,

$$\langle x - \mathrm{proj}_C x, y - \mathrm{proj}_C x \rangle \le 0, \quad \forall y \in C,$$

which is equivalent to

$$\langle x - y, \mathrm{proj}_C x - \mathrm{proj}_C y \rangle \ge \|\mathrm{proj}_C x - \mathrm{proj}_C y\|^2, \quad \forall x, y \in H. \tag{4}$$

It follows that $\mathrm{proj}_C$ is nonexpansive. We also know that $2\,\mathrm{proj}_C - I$ is nonexpansive.
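For intuition, the projection is available in closed form for simple sets; the following sketch (ours, with $C$ a box) empirically checks the nonexpansivity of $\mathrm{proj}_C$ and of $2\,\mathrm{proj}_C - I$:

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^d: componentwise clipping."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = 5 * rng.normal(size=3), 5 * rng.normal(size=3)
    px, py = proj_box(x), proj_box(y)
    d = np.linalg.norm(x - y)
    assert np.linalg.norm(px - py) <= d + 1e-12                  # proj_C nonexpansive
    assert np.linalg.norm((2*px - x) - (2*py - y)) <= d + 1e-12  # 2 proj_C - I nonexpansive
```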
Lemma 1
([41]). If $C$ is a closed convex subset of a real Hilbert space $H$ and $A : C \to H$ is an $\alpha$-inverse strongly monotone operator, then

$$\|(I - \mu A) u - (I - \mu A) v\|^2 \le \|u - v\|^2 + \mu(\mu - 2\alpha) \|A u - A v\|^2, \quad \forall u, v \in C.$$

In particular, $I - \mu A$ is nonexpansive provided $0 \le \mu \le 2\alpha$.
Lemma 2
([42]). Suppose that $\{u_n\}$ and $\{v_n\}$ are two bounded sequences in a Banach space. Let $\{\lambda_n\} \subset [0, 1]$ be a sequence satisfying $0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < 1$. Suppose that $u_{n+1} = (1 - \lambda_n) v_n + \lambda_n u_n$ for all $n \ge 0$ and $\limsup_{n \to \infty} (\|v_{n+1} - v_n\| - \|u_{n+1} - u_n\|) \le 0$. Then $\lim_{n \to \infty} \|u_n - v_n\| = 0$.
Lemma 3
([43]). Let $\{\mu_n\} \subset (0, \infty)$, $\{\gamma_n\} \subset (0, 1)$, and $\{\delta_n\}$ be three real number sequences. If $\mu_{n+1} \le (1 - \gamma_n) \mu_n + \delta_n$ for all $n \ge 0$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$ and either $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$, then $\lim_{n \to \infty} \mu_n = 0$.

3. Main Results

Let $C$ be a convex and closed subset of a real Hilbert space $H$. Let the operators $A, B : C \to H$ be $\alpha$-inverse strongly monotone and $\beta$-inverse strongly monotone, respectively. Let $\{\alpha_n\} \subset [0, 1]$, $\{\beta_n\} \subset [0, 1]$, $\{\lambda_n\} \subset [0, 2\alpha]$, and $\{\mu_n\} \subset [0, 2\beta]$ be four sequences. In the sequel, assume $\Omega = VI(C, A) \cap VI(C, B) \ne \emptyset$.
Motivated by the algorithms presented in [31,39,40], we present the following iterative Algorithm 1 for finding a common solution of two variational inequalities.
Algorithm 1:
 For $u, x_0 \in C$, assume that the sequence $\{x_n\}$ has been constructed. Compute the next iterate $x_{n+1}$ in the following manner:

$$y_n = \mathrm{proj}_C[x_n - (1 - \alpha_n) \lambda_n A x_n + \alpha_n (u - x_n)], \qquad x_{n+1} = \mathrm{proj}_C[x_n - \mu_n \beta_n B y_n + \beta_n (y_n - x_n)], \quad n \ge 0. \tag{5}$$
 Suppose that the control parameters $\{\alpha_n\}$, $\{\beta_n\}$, $\{\lambda_n\}$, and $\{\mu_n\}$ satisfy the following assumptions (a numerical sketch of the scheme under these conditions is given after the list):
(C1):
$\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(C2):
$\liminf_{n \to \infty} \beta_n > 0$ and $\lim_{n \to \infty} (\beta_{n+1} - \beta_n) = 0$;
(C3):
$\lambda_n \in [a, b] \subset (0, 2\alpha)$ and $\lim_{n \to \infty} (\lambda_{n+1} - \lambda_n) = 0$;
(C4):
$\mu_n \in [c, d] \subset (0, 2\beta)$ and $\lim_{n \to \infty} (\mu_{n+1} - \mu_n) = 0$.
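To make the scheme concrete, here is a runnable sketch of iteration (5) (our own illustration: the feasible set, operators, and parameter sequences below are hypothetical choices that satisfy (C1)–(C4) and the inverse strong monotonicity assumptions):

```python
import numpy as np

def proj_box(x, lo=-2.0, hi=2.0):
    """Stand-in projection: C = [lo, hi]^d, computed by componentwise clipping."""
    return np.clip(x, lo, hi)

def algorithm1(A, B, u, x0, alpha, beta, lam, mu, n_iter=5000):
    """Iteration (5):
       y_n     = proj_C[x_n - (1 - alpha_n) lam_n A x_n + alpha_n (u - x_n)],
       x_{n+1} = proj_C[x_n - mu_n beta_n B y_n + beta_n (y_n - x_n)]."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        y = proj_box(x - (1 - alpha(n)) * lam(n) * A(x) + alpha(n) * (u - x))
        x = proj_box(x - mu(n) * beta(n) * B(y) + beta(n) * (y - x))
    return x

# Hypothetical data with a common solution p: A is 1-ism, B is (1/2)-ism,
# and Omega = VI(C, A) ∩ VI(C, B) = {p}.
p = np.array([1.0, 0.0])
A = lambda x: x - p            # gradient of (1/2)||x - p||^2
B = lambda x: 2.0 * (x - p)    # gradient of ||x - p||^2

x_common = algorithm1(
    A, B, u=np.array([0.0, 2.0]), x0=np.array([2.0, 2.0]),
    alpha=lambda n: 1.0 / (n + 2),   # (C1): alpha_n -> 0, sum alpha_n = infinity
    beta=lambda n: 0.5,              # (C2)
    lam=lambda n: 1.0,               # (C3): [a, b] = {1} in (0, 2 alpha) = (0, 2)
    mu=lambda n: 0.5,                # (C4): [c, d] = {0.5} in (0, 2 beta) = (0, 1)
)
# x_common is approximately p = proj_Omega(u) (cf. Theorem 1 below).
```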
We will divide our main result into several propositions.
Proposition 2.
The sequence $\{x_n\}$ generated by (5) is bounded.
Proof. 
Choose any $x^{*} \in \Omega$. Note that $x^{*} = \mathrm{proj}_C[(I - \delta A) x^{*}]$ for any $\delta > 0$. Hence,

$$x^{*} = \mathrm{proj}_C[\alpha_n x^{*} + (1 - \alpha_n)(x^{*} - \lambda_n A x^{*})], \quad n \ge 0. \tag{6}$$
Thus, by (5) and (6), we have

$$\begin{aligned} \|y_n - x^{*}\| &= \|\mathrm{proj}_C[\alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n] - \mathrm{proj}_C[\alpha_n x^{*} + (1 - \alpha_n)(I - \lambda_n A) x^{*}]\| \\ &\le \|[\alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n] - [\alpha_n x^{*} + (1 - \alpha_n)(I - \lambda_n A) x^{*}]\| \\ &= \|\alpha_n (u - x^{*}) + (1 - \alpha_n)[(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*}]\| \\ &\le \alpha_n \|u - x^{*}\| + (1 - \alpha_n) \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*}\|. \end{aligned} \tag{7}$$
From Lemma 1, we know that $I - \lambda_n A$ and $I - \mu_n B$ are nonexpansive. Thus, from (7), we get

$$\|y_n - x^{*}\| \le \alpha_n \|u - x^{*}\| + (1 - \alpha_n) \|x_n - x^{*}\|. \tag{8}$$
So,

$$\begin{aligned} \|x_{n+1} - x^{*}\| &= \|\mathrm{proj}_C[(1 - \beta_n) x_n + \beta_n (y_n - \mu_n B y_n)] - \mathrm{proj}_C[(1 - \beta_n) x^{*} + \beta_n (x^{*} - \mu_n B x^{*})]\| \\ &\le \|[(1 - \beta_n) x_n + \beta_n (y_n - \mu_n B y_n)] - [(1 - \beta_n) x^{*} + \beta_n (x^{*} - \mu_n B x^{*})]\| \\ &\le \beta_n \|(y_n - \mu_n B y_n) - (x^{*} - \mu_n B x^{*})\| + (1 - \beta_n) \|x_n - x^{*}\| \\ &\le \beta_n \|y_n - x^{*}\| + (1 - \beta_n) \|x_n - x^{*}\| \\ &\le (1 - \beta_n \alpha_n) \|x_n - x^{*}\| + \beta_n \alpha_n \|u - x^{*}\|. \end{aligned}$$
It follows by induction that

$$\|x_{n+1} - x^{*}\| \le \max\{\|x_0 - x^{*}\|, \|u - x^{*}\|\}.$$
Then $\{x_n\}$ is bounded, and hence the sequences $\{y_n\}$, $\{A x_n\}$, and $\{B y_n\}$ are all bounded.  □
Proposition 3.
The following two conclusions hold:

$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0 \quad \text{and} \quad \lim_{n \to \infty} \|x_n - y_n\| = 0.$$
Proof. 
Let $S = 2\,\mathrm{proj}_C - I$. It is clear that $S$ is nonexpansive. Set $v_n = (1 - \beta_n) x_n + \beta_n (I - \mu_n B) y_n$ for all $n \ge 0$. Then we can rewrite $x_{n+1}$ in (5) as

$$x_{n+1} = \frac{1 - \beta_n}{2} x_n + \frac{\beta_n}{2} (I - \mu_n B) y_n + \frac{1}{2} S v_n = \frac{1 + \beta_n}{2} z_n + \frac{1 - \beta_n}{2} x_n,$$

where $z_n = \frac{\beta_n}{1 + \beta_n} (I - \mu_n B) y_n + \frac{1}{1 + \beta_n} S v_n$ for all $n \ge 0$.
Hence,

$$z_{n+1} - z_n = \frac{1}{1 + \beta_{n+1}} S v_{n+1} + \frac{\beta_{n+1}}{1 + \beta_{n+1}} (I - \mu_{n+1} B) y_{n+1} - \frac{\beta_n}{1 + \beta_n} (I - \mu_n B) y_n - \frac{1}{1 + \beta_n} S v_n.$$
It follows that

$$\begin{aligned} \|z_{n+1} - z_n\| &\le \frac{\beta_{n+1}}{1 + \beta_{n+1}} \|(I - \mu_{n+1} B) y_{n+1} - (I - \mu_n B) y_n\| + \frac{1}{1 + \beta_{n+1}} \|S v_{n+1} - S v_n\| \\ &\quad + \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| \|(I - \mu_n B) y_n\| + \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| \|S v_n\| \\ &\le \frac{\beta_{n+1}}{1 + \beta_{n+1}} \|(I - \mu_{n+1} B) y_{n+1} - (I - \mu_{n+1} B) y_n\| + \frac{\beta_{n+1}}{1 + \beta_{n+1}} |\mu_{n+1} - \mu_n| \|B y_n\| \\ &\quad + \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| \|y_n - \mu_n B y_n\| + \frac{1}{1 + \beta_{n+1}} \|S v_{n+1} - S v_n\| + \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| \|S v_n\|. \end{aligned}$$
Using the nonexpansivity of $I - \mu_{n+1} B$ and $S$, we deduce

$$\begin{aligned} \|z_{n+1} - z_n\| &\le \frac{\beta_{n+1}}{1 + \beta_{n+1}} \|y_{n+1} - y_n\| + \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| \|y_n - \mu_n B y_n\| + \frac{\beta_{n+1}}{1 + \beta_{n+1}} |\mu_{n+1} - \mu_n| \|B y_n\| \\ &\quad + \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| \|S v_n\| \\ &\quad + \frac{1}{1 + \beta_{n+1}} \big\| (1 - \beta_{n+1})(x_{n+1} - x_n) + \beta_{n+1} [(I - \mu_{n+1} B) y_{n+1} - (I - \mu_{n+1} B) y_n] \\ &\qquad + (\beta_{n+1} - \beta_n)(y_n - x_n) + (\beta_n \mu_n - \beta_{n+1} \mu_{n+1}) B y_n \big\| \\ &\le \frac{|\beta_{n+1} - \beta_n|}{1 + \beta_{n+1}} (\|x_n\| + \|y_n\|) + \frac{1 - \beta_{n+1}}{1 + \beta_{n+1}} \|x_{n+1} - x_n\| + \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| \|S v_n\| \\ &\quad + \frac{2 \beta_{n+1}}{1 + \beta_{n+1}} \|y_{n+1} - y_n\| + \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| \|y_n - \mu_n B y_n\| + \frac{\beta_{n+1}}{1 + \beta_{n+1}} |\mu_{n+1} - \mu_n| \|B y_n\| \\ &\quad + \frac{|\beta_{n+1} \mu_{n+1} - \beta_n \mu_n|}{1 + \beta_{n+1}} \|B y_n\|. \end{aligned} \tag{9}$$
Next, we estimate $\|y_{n+1} - y_n\|$. By (5), we get

$$\begin{aligned} \|y_{n+1} - y_n\| &= \|\mathrm{proj}_C[\alpha_{n+1} u + (1 - \alpha_{n+1})(I - \lambda_{n+1} A) x_{n+1}] - \mathrm{proj}_C[\alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n]\| \\ &\le \|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n| \|x_n\| + (\alpha_{n+1} + \alpha_n) \|u\| + 2 |\lambda_{n+1} - \lambda_n| \|A x_n\| + |\lambda_{n+1} \alpha_{n+1} - \lambda_n \alpha_n| \|A x_n\|. \end{aligned} \tag{10}$$
Substituting (10) into (9), we get

$$\begin{aligned} \|z_{n+1} - z_n\| &\le \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| \|y_n - \mu_n B y_n\| + \frac{\beta_{n+1}}{1 + \beta_{n+1}} |\mu_{n+1} - \mu_n| \|B y_n\| + \frac{|\beta_{n+1} - \beta_n|}{1 + \beta_{n+1}} (\|x_n\| + \|y_n\|) \\ &\quad + \frac{|\beta_{n+1} \mu_{n+1} - \beta_n \mu_n|}{1 + \beta_{n+1}} \|B y_n\| + \|x_{n+1} - x_n\| + \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| \|S v_n\| \\ &\quad + 4 |\lambda_{n+1} - \lambda_n| \|A x_n\| + 2 (\alpha_{n+1} + \alpha_n) \|u\| + 2 |\alpha_{n+1} - \alpha_n| \|x_n\| + 2 |\lambda_{n+1} \alpha_{n+1} - \lambda_n \alpha_n| \|A x_n\|. \end{aligned}$$
Since $\lim_{n \to \infty} (\beta_{n+1} - \beta_n) = 0$ and $\lim_{n \to \infty} (\mu_{n+1} - \mu_n) = 0$, we derive that $\lim_{n \to \infty} \left| \frac{\beta_{n+1}}{1 + \beta_{n+1}} - \frac{\beta_n}{1 + \beta_n} \right| = 0$, $\lim_{n \to \infty} |\beta_{n+1} \mu_{n+1} - \beta_n \mu_n| = 0$, and $\lim_{n \to \infty} \left| \frac{1}{1 + \beta_{n+1}} - \frac{1}{1 + \beta_n} \right| = 0$. At the same time, note that $\{x_n\}$, $\{A x_n\}$, $\{y_n\}$, and $\{B y_n\}$ are bounded. Therefore,

$$\limsup_{n \to \infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0.$$
Applying Lemma 2, we derive

$$\lim_{n \to \infty} \|z_n - x_n\| = 0.$$
Hence,

$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = \lim_{n \to \infty} \frac{1 + \beta_n}{2} \|z_n - x_n\| = 0.$$
By virtue of (7), (8), and Lemma 1, we deduce

$$\begin{aligned} \|x_{n+1} - x^{*}\|^2 &\le (1 - \beta_n) \|x_n - x^{*}\|^2 + \beta_n \|(y_n - \mu_n B y_n) - (x^{*} - \mu_n B x^{*})\|^2 \\ &\le (1 - \beta_n) \|x_n - x^{*}\|^2 + \beta_n \|y_n - x^{*}\|^2 + \beta_n \mu_n (\mu_n - 2\beta) \|B y_n - B x^{*}\|^2 \\ &\le \beta_n \|\alpha_n (u - x^{*}) + (1 - \alpha_n)[(x_n - \lambda_n A x_n) - (x^{*} - \lambda_n A x^{*})]\|^2 + (1 - \beta_n) \|x_n - x^{*}\|^2 + \beta_n \mu_n (\mu_n - 2\beta) \|B y_n - B x^{*}\|^2 \\ &\le \beta_n [\alpha_n \|u - x^{*}\|^2 + (1 - \alpha_n) \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*}\|^2] + \beta_n \mu_n (\mu_n - 2\beta) \|B y_n - B x^{*}\|^2 + (1 - \beta_n) \|x_n - x^{*}\|^2 \\ &\le (1 - \beta_n) \|x_n - x^{*}\|^2 + \alpha_n \beta_n \|u - x^{*}\|^2 + \beta_n \mu_n (\mu_n - 2\beta) \|B y_n - B x^{*}\|^2 + (1 - \alpha_n) \beta_n [\|x_n - x^{*}\|^2 + \lambda_n (\lambda_n - 2\alpha) \|A x_n - A x^{*}\|^2] \\ &\le \alpha_n \beta_n \|u - x^{*}\|^2 + \|x_n - x^{*}\|^2 + \beta_n \mu_n (\mu_n - 2\beta) \|B y_n - B x^{*}\|^2 + (1 - \alpha_n) \beta_n \lambda_n (\lambda_n - 2\alpha) \|A x_n - A x^{*}\|^2. \end{aligned}$$
It follows that

$$\begin{aligned} (1 - \alpha_n) \beta_n \lambda_n (2\alpha - \lambda_n) \|A x_n - A x^{*}\|^2 + \beta_n \mu_n (2\beta - \mu_n) \|B y_n - B x^{*}\|^2 &\le \alpha_n \beta_n \|u - x^{*}\|^2 + \|x_n - x^{*}\|^2 - \|x_{n+1} - x^{*}\|^2 \\ &\le \alpha_n \beta_n \|u - x^{*}\|^2 + (\|x_n - x^{*}\| + \|x_{n+1} - x^{*}\|) \|x_n - x_{n+1}\|. \end{aligned}$$
This implies that

$$\lim_{n \to \infty} \|A x_n - A x^{*}\| = 0 \quad \text{and} \quad \lim_{n \to \infty} \|B y_n - B x^{*}\| = 0.$$
According to (4) and (5), we get

$$\begin{aligned} \|y_n - x^{*}\|^2 &= \|\mathrm{proj}_C[\alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n] - \mathrm{proj}_C[(I - \lambda_n A) x^{*}]\|^2 \\ &\le \langle \alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*}, y_n - x^{*} \rangle \\ &= \frac{1}{2} \Big\{ \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*} + \alpha_n (u - x_n + \lambda_n A x_n)\|^2 + \|y_n - x^{*}\|^2 \\ &\qquad - \|(x_n - y_n) - \lambda_n (A x_n - A x^{*}) + \alpha_n (u - x_n + \lambda_n A x_n)\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*}\|^2 + 2 \alpha_n \|u - x_n + \lambda_n A x_n\| \, \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*} + \alpha_n (u - x_n + \lambda_n A x_n)\| \\ &\qquad + \|y_n - x^{*}\|^2 - \|(x_n - y_n) - \lambda_n (A x_n - A x^{*}) + \alpha_n (u - x_n + \lambda_n A x_n)\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \|x_n - x^{*}\|^2 + \alpha_n M + \|y_n - x^{*}\|^2 - \|x_n - y_n\|^2 + 2 \lambda_n \langle x_n - y_n, A x_n - A x^{*} \rangle \\ &\qquad - 2 \alpha_n \langle u - x_n + \lambda_n A x_n, (x_n - y_n) - \lambda_n (A x_n - A x^{*}) \rangle - \alpha_n^2 \|u - x_n + \lambda_n A x_n\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \|x_n - x^{*}\|^2 + \alpha_n M + \|y_n - x^{*}\|^2 - \|x_n - y_n\|^2 + 2 \alpha_n \|u - x_n + \lambda_n A x_n\| \|x_n - y_n\| + 2 \lambda_n \|x_n - y_n\| \|A x_n - A x^{*}\| \Big\}, \end{aligned}$$

where $M > 0$ is some constant such that

$$\sup_n \left\{ 2 \|u - x_n + \lambda_n A x_n\| \, \|(I - \lambda_n A) x_n - (I - \lambda_n A) x^{*} + \alpha_n (u - x_n + \lambda_n A x_n)\| \right\} \le M.$$
Thus,

$$\|y_n - x^{*}\|^2 \le \|x_n - x^{*}\|^2 + \alpha_n M - \|x_n - y_n\|^2 + 2 \lambda_n \|x_n - y_n\| \|A x_n - A x^{*}\| + 2 \alpha_n \|u - x_n + \lambda_n A x_n\| \|x_n - y_n\|,$$
and it follows that

$$\|x_{n+1} - x^{*}\|^2 \le \|x_n - x^{*}\|^2 + \alpha_n M - \beta_n \|x_n - y_n\|^2 + 2 \lambda_n \|x_n - y_n\| \|A x_n - A x^{*}\| + 2 \alpha_n \|u - x_n + \lambda_n A x_n\| \|x_n - y_n\|.$$
Therefore,

$$\beta_n \|x_n - y_n\|^2 \le 2 \lambda_n \|x_n - y_n\| \|A x_n - A x^{*}\| + (\|x_n - x^{*}\| + \|x_{n+1} - x^{*}\|) \|x_{n+1} - x_n\| + \alpha_n M + 2 \alpha_n \|u - x_n + \lambda_n A x_n\| \|x_n - y_n\|.$$
Since $\lim_{n \to \infty} \alpha_n = 0$, $\lim_{n \to \infty} \|x_n - x_{n+1}\| = 0$, $\lim_{n \to \infty} \|A x_n - A x^{*}\| = 0$, and $\liminf_{n \to \infty} \beta_n > 0$ by (C2), we derive

$$\lim_{n \to \infty} \|x_n - y_n\| = 0.$$
This concludes the proof.  □
Proposition 4.
$\limsup_{n \to \infty} \langle \tilde{x} - u, \tilde{x} - y_n \rangle \le 0$, where $\tilde{x} = \mathrm{proj}_\Omega u$.
Proof. 
Let $\{y_{n_i}\}$ be a subsequence of $\{y_n\}$ satisfying

$$\limsup_{n \to \infty} \langle \tilde{x} - u, \tilde{x} - y_n \rangle = \lim_{i \to \infty} \langle \tilde{x} - u, \tilde{x} - y_{n_i} \rangle.$$

By the boundedness of $\{y_{n_i}\}$, we can choose a subsequence $\{y_{n_{i_j}}\}$ of $\{y_{n_i}\}$ such that $y_{n_{i_j}} \rightharpoonup z$; without loss of generality, we may assume that $y_{n_i} \rightharpoonup z$.
Next, we demonstrate that $z \in \Omega$. First, we prove that $z \in VI(C, A)$. Let $N_C v$ be the normal cone of $C$ at $v \in C$; i.e., $N_C v = \{w \in H : \langle v - u', w \rangle \ge 0, \ \forall u' \in C\}$. Define a mapping $T$ by the formula

$$T v = \begin{cases} A v + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

It is known that $T$ is maximal monotone and that $0 \in T v$ if and only if $v \in VI(C, A)$.
Let $(v, w) \in G(T)$. Since $w - A v \in N_C v$ and $y_n \in C$, we deduce $\langle v - y_n, w - A v \rangle \ge 0$. According to $y_n = \mathrm{proj}_C[\alpha_n u + (1 - \alpha_n)(I - \lambda_n A) x_n]$ and the projection property, we obtain

$$\langle v - y_n, y_n - \alpha_n u - (1 - \alpha_n)(x_n - \lambda_n A x_n) \rangle \ge 0,$$

that is,

$$\left\langle v - y_n, \frac{y_n - x_n}{\lambda_n} + A x_n - \frac{\alpha_n}{\lambda_n} (u - x_n + \lambda_n A x_n) \right\rangle \ge 0.$$
Thus,

$$\begin{aligned} \langle v - y_{n_i}, w \rangle &\ge \langle v - y_{n_i}, A v \rangle \\ &\ge \langle v - y_{n_i}, A v \rangle - \left\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} + A x_{n_i} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (u - x_{n_i} + \lambda_{n_i} A x_{n_i}) \right\rangle \\ &= \langle v - y_{n_i}, A v - A x_{n_i} \rangle - \left\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (u - x_{n_i} + \lambda_{n_i} A x_{n_i}) \right\rangle \\ &= \langle v - y_{n_i}, A v - A y_{n_i} \rangle + \langle v - y_{n_i}, A y_{n_i} - A x_{n_i} \rangle - \left\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (u - x_{n_i} + \lambda_{n_i} A x_{n_i}) \right\rangle \\ &\ge \langle v - y_{n_i}, A y_{n_i} - A x_{n_i} \rangle - \left\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (u - x_{n_i} + \lambda_{n_i} A x_{n_i}) \right\rangle. \end{aligned}$$
Noting that $\alpha_{n_i} \to 0$, $\|y_{n_i} - x_{n_i}\| \to 0$, and $\lambda_{n_i} \ge a > 0$, we deduce $\langle v - z, w \rangle \ge 0$. Hence, $z \in T^{-1}(0)$ and thus $z \in VI(C, A)$.
Next, we show that $z \in VI(C, B)$. Define a mapping $T$ as follows:

$$T v = \begin{cases} B v + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
Let $(v, w) \in G(T)$. Since $w - B v \in N_C v$ and $x_{n+1} \in C$, we obtain $\langle v - x_{n+1}, w - B v \rangle \ge 0$. By virtue of $x_{n+1} = \mathrm{proj}_C[x_n - \mu_n \beta_n B y_n + \beta_n (y_n - x_n)]$ and the projection property, we obtain

$$\langle v - x_{n+1}, x_{n+1} - x_n + \mu_n \beta_n B y_n - \beta_n (y_n - x_n) \rangle \ge 0,$$

that is,

$$\left\langle v - x_{n+1}, B y_n + \frac{x_{n+1} - x_n}{\beta_n \mu_n} - \frac{y_n - x_n}{\mu_n} \right\rangle \ge 0.$$
Therefore, we have

$$\begin{aligned} \langle v - x_{n_i + 1}, w \rangle &\ge \langle v - x_{n_i + 1}, B v \rangle \\ &\ge \langle v - x_{n_i + 1}, B v \rangle - \left\langle v - x_{n_i + 1}, \frac{x_{n_i + 1} - x_{n_i}}{\beta_{n_i} \mu_{n_i}} + B y_{n_i} - \frac{y_{n_i} - x_{n_i}}{\mu_{n_i}} \right\rangle \\ &= \langle v - x_{n_i + 1}, B v - B y_{n_i} \rangle - \left\langle v - x_{n_i + 1}, \frac{x_{n_i + 1} - x_{n_i}}{\beta_{n_i} \mu_{n_i}} - \frac{y_{n_i} - x_{n_i}}{\mu_{n_i}} \right\rangle \\ &= \langle v - x_{n_i + 1}, B v - B x_{n_i + 1} \rangle + \langle v - x_{n_i + 1}, B x_{n_i + 1} - B y_{n_i} \rangle - \left\langle v - x_{n_i + 1}, \frac{x_{n_i + 1} - x_{n_i}}{\beta_{n_i} \mu_{n_i}} - \frac{y_{n_i} - x_{n_i}}{\mu_{n_i}} \right\rangle \\ &\ge \langle v - x_{n_i + 1}, B x_{n_i + 1} - B y_{n_i} \rangle - \left\langle v - x_{n_i + 1}, \frac{x_{n_i + 1} - x_{n_i}}{\beta_{n_i} \mu_{n_i}} - \frac{y_{n_i} - x_{n_i}}{\mu_{n_i}} \right\rangle. \end{aligned}$$
Noting that $\|y_{n_i} - x_{n_i}\| \to 0$ and $\|x_{n_i + 1} - x_{n_i}\| \to 0$ (so that $x_{n_i + 1} \rightharpoonup z$ as well), we get $\langle v - z, w \rangle \ge 0$. Hence, $z \in T^{-1}(0)$ and $z \in VI(C, B)$. Thus, $z \in \Omega$, and by the projection property we have

$$\limsup_{n \to \infty} \langle \tilde{x} - u, \tilde{x} - y_n \rangle = \lim_{i \to \infty} \langle \tilde{x} - u, \tilde{x} - y_{n_i} \rangle = \langle \tilde{x} - u, \tilde{x} - z \rangle \le 0.$$
 □
Finally, we prove our main result.
Theorem 1.
Suppose that $\Omega = VI(C, A) \cap VI(C, B) \ne \emptyset$. Assume that $\{\alpha_n\}$, $\{\beta_n\}$, $\{\lambda_n\}$, and $\{\mu_n\}$ satisfy restrictions (C1)–(C4). Then the sequence $\{x_n\}$ defined by (5) converges strongly to $\tilde{x} = \mathrm{proj}_\Omega(u)$.
Proof. 
First, we have Propositions 2–4 in hand. In terms of (4), we have

$$\begin{aligned} \|y_n - \tilde{x}\|^2 &= \|\mathrm{proj}_C[\alpha_n u + (1 - \alpha_n)(x_n - \lambda_n A x_n)] - \mathrm{proj}_C[\tilde{x} - (1 - \alpha_n) \lambda_n A \tilde{x}]\|^2 \\ &\le \langle \alpha_n (u - \tilde{x}) + (1 - \alpha_n)[(x_n - \lambda_n A x_n) - (\tilde{x} - \lambda_n A \tilde{x})], y_n - \tilde{x} \rangle \\ &\le \alpha_n \langle \tilde{x} - u, \tilde{x} - y_n \rangle + (1 - \alpha_n) \|(x_n - \lambda_n A x_n) - (\tilde{x} - \lambda_n A \tilde{x})\| \|y_n - \tilde{x}\| \\ &\le \alpha_n \langle \tilde{x} - u, \tilde{x} - y_n \rangle + (1 - \alpha_n) \|x_n - \tilde{x}\| \|y_n - \tilde{x}\| \\ &\le \alpha_n \langle \tilde{x} - u, \tilde{x} - y_n \rangle + \frac{1 - \alpha_n}{2} \|x_n - \tilde{x}\|^2 + \frac{1}{2} \|y_n - \tilde{x}\|^2. \end{aligned}$$
It follows that

$$\|y_n - \tilde{x}\|^2 \le (1 - \alpha_n) \|x_n - \tilde{x}\|^2 + 2 \alpha_n \langle \tilde{x} - u, \tilde{x} - y_n \rangle.$$
Therefore,

$$\|x_{n+1} - \tilde{x}\|^2 \le (1 - \beta_n) \|x_n - \tilde{x}\|^2 + \beta_n \|y_n - \tilde{x}\|^2 \le (1 - \alpha_n \beta_n) \|x_n - \tilde{x}\|^2 + 2 \alpha_n \beta_n \langle \tilde{x} - u, \tilde{x} - y_n \rangle.$$
By Lemma 3 and the above inequality, we deduce $x_n \to \tilde{x}$. This completes the proof.  □
If we take $u = 0$ in Algorithm 1, then we obtain the following Algorithm 2.
Algorithm 2:
 For an initial value $x_0 \in C$, assume that the sequence $\{x_n\}$ has been constructed. Compute the next iterate $x_{n+1}$ in the following manner:

$$y_n = \mathrm{proj}_C[(1 - \alpha_n)(x_n - \lambda_n A x_n)], \qquad x_{n+1} = \mathrm{proj}_C[x_n - \mu_n \beta_n B y_n + \beta_n (y_n - x_n)], \quad n \ge 0. \tag{11}$$
Corollary 1.
Suppose that $\Omega = VI(C, A) \cap VI(C, B) \ne \emptyset$. Assume that $\{\alpha_n\}$, $\{\beta_n\}$, $\{\lambda_n\}$, and $\{\mu_n\}$ satisfy restrictions (C1)–(C4). Then the sequence $\{x_n\}$ defined by (11) converges strongly to the minimum-norm element $\tilde{x}$ of Ω.
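In terms of the Algorithm 1 sketch given after conditions (C1)–(C4), Corollary 1 corresponds to simply passing the zero vector as the anchor point u (again our own illustration, reusing algorithm1, A, and B from that snippet):

```python
# Minimum-norm common solution: rerun the earlier sketch with u = 0.
x_min_norm = algorithm1(
    A, B, u=np.zeros(2), x0=np.array([2.0, 2.0]),
    alpha=lambda n: 1.0 / (n + 2), beta=lambda n: 0.5,
    lam=lambda n: 1.0, mu=lambda n: 0.5,
)
# x_min_norm approximates the element of Omega with the smallest norm.
```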

4. Conclusions

In this paper, we investigated the variational inequality problem. We suggested an extragradient-type method for finding common solutions of two variational inequalities and proved the strong convergence of the method under mild conditions. Note that in our suggested iterative scheme (5), the involved operators $A$ and $B$ are required to be inverse strongly monotone. A natural question arises: how can these assumptions be weakened?

Author Contributions

All the authors have contributed equally to this paper. All the authors have read and approved the final manuscript.

Funding

This research was partially supported by the grants NSFC61362033 and NZ17015.

Acknowledgments

The authors are thankful to the anonymous referees for their careful corrections and valuable comments on the original version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416.
2. Alber, Y.-I.; Iusem, A.-N. Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set Valued Anal. 2001, 9, 315–335.
3. Bello Cruz, J.-Y.; Iusem, A.-N. A strongly convergent direct method for monotone variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 2009, 30, 23–36.
4. Cho, S.-Y.; Qin, X.; Yao, J.-C.; Yao, Y. Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 251–264.
5. Dong, Q.-L.; Cho, Y.-J.; Zhong, L.-L.; Rassias, T.-M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
6. Dong, Q.-L.; Cho, Y.-J.; Rassias, T.-M. The projection and contraction methods for finding common solutions to variational inequality problems. Optim. Lett. 2018, 12, 1871–1896.
7. Facchinei, F.; Pang, J.-S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003; Volumes 1 and 2.
8. He, B.-S.; Yang, Z.-H.; Yuan, X.-M. An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300, 362–374.
9. Bello Cruz, J.-Y.; Iusem, A.-N. Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46, 247–263.
10. Iiduka, H.; Takahashi, W. Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 2008, 339, 668–679.
11. Li, C.-L.; Jia, Z.-F.; Postolache, M. New convergence methods for nonlinear uncertain variational inequality problems. J. Nonlinear Convex Anal. 2018, 19, 2153–2164.
12. Lions, J.-L.; Stampacchia, G. Variational inequalities. Comm. Pure Appl. Math. 1967, 20, 493–517.
13. Lu, X.; Xu, H.-K.; Yin, X. Hybrid methods for a class of monotone variational inequalities. Nonlinear Anal. 2009, 71, 1032–1041.
14. Solodov, M.-V.; Svaiter, B.-F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
15. Solodov, M.-V.; Tseng, P. Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34, 1814–1830.
16. Thakur, B.-S.; Postolache, M. Existence and approximation of solutions for generalized extended nonlinear variational inequalities. J. Inequal. Appl. 2013, 2013, 590.
17. Wang, S.-H.; Zhao, M.-L.; Kumam, P.; Cho, Y.-J. A viscosity extragradient method for an equilibrium problem and fixed point problem in Hilbert space. J. Fixed Point Theory Appl. 2018, 20, 19.
18. Xu, H.-K.; Kim, T.-H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201.
19. Yao, Y.; Chen, R.; Xu, H.-K. Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72, 3447–3456.
20. Yao, Y.; Liou, Y.-C.; Kang, S.-M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
21. Yao, Y.; Liou, Y.-C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
22. Yao, Y.; Postolache, M.; Liou, Y.-C. Variant extragradient-type method for monotone variational inequalities. Fixed Point Theory Appl. 2013, 2013, 185.
23. Yao, Y.; Postolache, M.; Liou, Y.-C.; Yao, Z.-S. Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 2016, 10, 1519–1528.
24. Yao, Y.; Qin, X.; Yao, J.-C. Projection methods for firmly type nonexpansive operators. J. Nonlinear Convex Anal. 2018, 19, 407–415.
25. Yao, Y.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628.
26. Yao, Y.; Yao, J.-C.; Liou, Y.-C.; Postolache, M. Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpath. J. Math. 2018, 34, 459–466.
27. Zegeye, H.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2015, 64, 453–471.
28. Zhao, J.; Liang, Y.-S.; Liu, Y.-L.; Cho, Y.-J. Split equilibrium, variational inequality and fixed point problems for multi-valued mappings in Hilbert spaces. Appl. Comput. Math. 2018, 17, 271–283.
29. Yao, Y.; Postolache, M.; Yao, J.-C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
30. Yao, Y.; Postolache, M.; Yao, J.-C. Iterative algorithms for generalized variational inequalities. Univ. Politeh. Buch. Ser. A 2019, in press.
31. Korpelevich, G.-M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756.
32. Bnouhachem, A.; Noor, M.-A.; Hao, Z. Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 2009, 70, 1321–1329.
33. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845.
34. Iusem, A.-N.; Svaiter, B.-F. A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321.
35. Iusem, A.-N.; Lucambio Pérez, L.-R. An extragradient-type algorithm for non-smooth variational inequalities. Optimization 2000, 48, 309–332.
36. Khobotov, E.-N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1989, 27, 120–127.
37. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
38. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
39. Censor, Y.; Gibali, A.; Reich, S. A von Neumann alternating method for finding common solutions to variational inequalities. Nonlinear Anal. 2012, 75, 4596–4603.
40. Zaslavski, A.-J. The extragradient method for finding a common solution of a finite family of variational inequalities and a finite family of fixed point problems in the presence of computational errors. J. Math. Anal. Appl. 2013, 400, 651–663.
41. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428.
42. Suzuki, T. Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005, 103–123.
43. Xu, H.-K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
