Article

An Efficient Parallel Extragradient Method for Systems of Variational Inequalities Involving Fixed Points of Demicontractive Mappings

by Lateef Olakunle Jolaoso * and Maggie Aphane
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(11), 1915; https://doi.org/10.3390/sym12111915
Submission received: 27 October 2020 / Revised: 16 November 2020 / Accepted: 18 November 2020 / Published: 20 November 2020
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract

Herein, we present a new parallel extragradient method for solving systems of variational inequalities and common fixed point problems for demicontractive mappings in real Hilbert spaces. The algorithm determines the next iterate by computing a computationally inexpensive projection onto a sub-level set constructed from a convex combination of finitely many functions and an Armijo line-search procedure. A strong convergence result is proved without assuming Lipschitz continuity of the cost operators of the variational inequalities. Finally, some numerical experiments illustrate the performance of the proposed method.

1. Introduction

Let H be a real Hilbert space and C a nonempty, closed, and convex subset of H. Let A : C → H be an operator. The Variational Inequality (VI) problem is to find x* ∈ C such that
$\langle Ax^*, y - x^* \rangle \ge 0, \quad \forall y \in C.$
The solution set of the VI (1) is denoted by VI(C, A). Mathematically, the VI is a powerful tool for studying many nonlinear problems arising in mechanics, optimization, network control, equilibrium problems, etc.; see [1,2,3]. The problem of finding a common solution of a system of VIs has received a lot of attention recently; see, e.g., [4,5,6,7,8,9,10] and the references therein. This problem covers, as special cases, the convex feasibility problem, the common equilibrium problem, etc. In this paper, we consider the following common problem.
Problem 1.
Find an element x * C such that
$x^* \in \bigcap_{i=1}^{N} VI(C, A_i) \cap \bigcap_{j=1}^{M} Fix(T_j),$
where, for i = 1, 2, …, N, the A_i : H → H are pseudomonotone operators and, for j = 1, 2, …, M, the T_j : H → H are κ_j-demicontractive mappings, with Fix(T_j) = {x ∈ H : T_j x = x} the fixed point set of T_j.
The motivation for considering Problem 1 lies in its possible applications to mathematical models whose constraints can be expressed as common variational inequalities and common fixed point problems. This happens, in particular, in network resource allocation, image processing, Nash equilibrium problems, etc.; see, e.g., [11,12,13,14].
The simplest method for solving the VI (1) is the projection method of Goldstein [15], a natural extension of the gradient projection method: for x_0 ∈ C and λ > 0, it is given by
$x_{n+1} = P_C(x_n - \lambda A x_n), \quad n \ge 0.$
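As an illustration, here is a minimal NumPy sketch of iteration (3) for a toy strongly monotone affine operator on the nonnegative orthant; the operator, feasible set, and step size below are our own illustrative choices, not data from the paper.

```python
import numpy as np

# Projected-gradient iteration (3): x_{n+1} = P_C(x_n - lam * A(x_n)),
# illustrated for A(x) = Mx + q (strongly monotone) and C = R^m_+ (P_C = clip).
m = 5
rng = np.random.default_rng(0)
S = rng.standard_normal((m, m))
M_mat = S @ S.T + np.eye(m)            # positive definite => strongly monotone
q = rng.standard_normal(m)
A = lambda x: M_mat @ x + q
proj_C = lambda x: np.maximum(x, 0.0)  # projection onto the nonnegative orthant

L = np.linalg.norm(M_mat, 2)           # Lipschitz constant of A
lam = 1.0 / L**2                       # lam < 2*alpha/L^2 suffices here (alpha >= 1)
x = np.zeros(m)
for _ in range(5000):
    x = proj_C(x - lam * A(x))
print("fixed-point residual:", np.linalg.norm(x - proj_C(x - lam * A(x))))
```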
The projection method (3) converges weakly to a solution of the VI (1) only under rather strong conditions on A, such as α-strong monotonicity and L-Lipschitz continuity. When these conditions are relaxed, the method may fail to converge to a solution of the VI (1). Korpelevich [16] later introduced the Extragradient Method (EgM) for solving the VI when A is monotone and L-Lipschitz continuous: for x_0 ∈ C,
$y_n = P_C(x_n - \lambda A x_n), \qquad x_{n+1} = P_C(x_n - \lambda A y_n), \quad n \ge 0,$
where λ ∈ (0, 1/L). The EgM has been extended to infinite-dimensional spaces by many authors; see, for instance, [7,17,18,19,20,21,22,23]. Moreover, several modifications of the EgM have been introduced recently; see [24,25,26,27,28,29,30]. For finding a common element of the solution set of a monotone variational inequality and the fixed point set of a k-demicontractive mapping, Maingé [14] introduced the following extragradient method: for x_0 ∈ C,
$y_n = P_C(x_n - \lambda_n A x_n), \quad z_n = P_C(x_n - \lambda_n A y_n), \quad x_{n+1} = [(1-\alpha)I + \alpha T]u_n, \quad u_n = z_n - \gamma_n B z_n, \quad n \ge 0,$
where {λ_n}, {γ_n} ⊂ (0, ∞), α ∈ [0, 1], A : C → H is monotone and L-Lipschitz continuous, T : H → H is a k-demicontractive mapping, and B : H → H is a β-strongly monotone operator with β > 0. The author proved strong convergence of the sequence generated by (4) provided the step size λ_n satisfies
$0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \frac{1}{L}.$
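As an illustration of the extragradient template underlying (4) and (5), here is a minimal NumPy sketch of Korpelevich's two-step iteration on a merely monotone (skew-symmetric) affine operator, a setting where the one-step method (3) can fail; the problem data are illustrative placeholders.

```python
import numpy as np

# Korpelevich's extragradient method: two projections per iteration with
# step lam in (0, 1/L), for A(x) = Rx + q with R skew-symmetric (monotone,
# Lipschitz with L = ||R||) and C = [-1, 1]^m (P_C = componentwise clip).
m = 4
rng = np.random.default_rng(1)
B0 = rng.standard_normal((m, m))
R = B0 - B0.T                          # skew-symmetric: <Rx - Ry, x - y> = 0
q = rng.standard_normal(m)
A = lambda x: R @ x + q
proj_C = lambda x: np.clip(x, -1.0, 1.0)

lam = 0.5 / np.linalg.norm(R, 2)       # lam in (0, 1/L)
x = np.zeros(m)
for _ in range(5000):
    y = proj_C(x - lam * A(x))         # prediction (extragradient) step
    x = proj_C(x - lam * A(y))         # correction step
print("fixed-point residual:", np.linalg.norm(x - proj_C(x - lam * A(x))))
```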
Recently, Hieu et al. [31] modified (4) and introduced the following extragradient method for approximating a common solution of the VI and a fixed point problem: given x_0 ∈ C, compute x_{n+1} via
$y_n = P_C(x_n - \lambda_n A x_n), \quad z_n = P_C(x_n - \rho_n A y_n), \quad w_n = P_C(x_n - \rho_n A z_n), \quad x_{n+1} = (1-\alpha_n)u_n + \alpha_n T u_n, \quad u_n = w_n - \gamma_n B w_n, \quad n \ge 0,$
where {ρ_n}, {λ_n} ⊂ (0, ∞) with 0 ≤ λ_n ≤ ρ_n, {α_n} ⊂ (0, 1), and A, T, and B are as defined for Algorithm (4). They also proved strong convergence of the sequence generated by (6) with the aid of (5). An obvious disadvantage of (4) and (6), which impedes their wide usage, is the assumption that the Lipschitz constant of A admits a simple estimate. Moreover, in many practical problems, the cost operator may not satisfy a Lipschitz condition.
On the other hand, for finding a common fixed point of quasi-nonexpansive mappings, Anh and Hieu [11,32] proposed the following parallel hybrid algorithm:
$x_0 \in C, \quad y_n^i = \alpha_n x_n + (1-\alpha_n)T_i x_n,\; i = 1,\ldots,N, \quad i_n = \arg\max\{\|y_n^i - x_n\| : i = 1,\ldots,N\},\; \bar{y}_n := y_n^{i_n}, \quad C_{n+1} = \{v \in C_n : \|v - \bar{y}_n\| \le \|v - x_n\|\}, \quad x_{n+1} = P_{C_{n+1}}(x_0).$
Furthermore, Censor et al. [6] proposed a parallel hybrid-extragradient method for a finite family of variational inequalities: choose x_0 ∈ H and compute
$y_n^i = P_{C_i}(x_n - \lambda_n^i A_i x_n), \quad z_n^i = P_{C_i}(x_n - \lambda_n^i A_i y_n^i), \quad C_n^i = \{z \in H : \langle x_n - z_n^i, z - x_n\rangle \le -\gamma_n^i\|z_n^i - x_n\|^2\}, \quad Q_n = \bigcap_{i=1}^{N} C_n^i, \quad W_n = \{z \in H : \langle x_0 - x_n, z - x_n\rangle \le 0\}, \quad x_{n+1} = P_{Q_n \cap W_n} x_0.$
Motivated by (7) and (8), Anh and Phuong [8] recently introduced the following parallel hybrid-extragradient method (Algorithm 1) for solving variational inequalities and fixed point problems of nonexpansive mappings.
Algorithm 1: PHEM
Initialization: Given x_0 ∈ H, λ_{n,i} ∈ (0, (1−a)/L_i), where L_i is the Lipschitz constant of A_i,
   i = 1, 2, …, N, a ∈ (0, 1), {α_{n,i}} ⊂ (0, 1), {γ_{n,i}} ⊂ (0, 1/2), n ≥ 0.
Iterative steps: Compute in parallel
$y_n^i = P_{C_i}(x_n - \lambda_{n,i} A_i x_n), \quad z_n^i = P_{C_i}(x_n - \lambda_{n,i} A_i y_n^i), \quad t_n^i = \alpha_{n,i}x_n + (1-\alpha_{n,i})T_i z_n^i, \quad C_{n,i} = \{x \in C_i : \langle x_n - t_n^i, x - x_n\rangle \le -\gamma_{n,i}\|t_n^i - x_n\|^2\}, \quad Q_n = \bigcap_{i=1}^{N} C_{n,i}, \quad W_n = \{x \in H : \langle x_0 - x_n, x - x_n\rangle \le 0\}, \quad x_{n+1} = P_{Q_n \cap W_n} x_0, \quad n := n + 1.$

Meanwhile, Hieu [33] introduced a parallel hybrid-subgradient extragradient method, which also requires finding the element farthest from the iterate x_n, as follows.
The author proved that the sequence generated by Algorithm 2 converges strongly to a solution of the system of VIs.
Algorithm 2: PHSEM
Initialization: Choose x_0 ∈ H and 0 < λ < 1/L. Set n = 0.
Step 1: Find N projections y_n^i onto C in parallel, i.e.,
$y_n^i = P_C(x_n - \lambda A_i x_n), \quad i = 1, \ldots, N.$

Step 2: Find N projections z_n^i onto the half-spaces T_n^i in parallel, i.e.,
$z_n^i = P_{T_n^i}(x_n - \lambda A_i y_n^i), \quad i = 1, \ldots, N,$
  where T_n^i = {v ∈ H : ⟨x_n − λA_i x_n − y_n^i, v − y_n^i⟩ ≤ 0}.
Step 3: Find the element farthest from x_n among the z_n^i, i.e.,
$i_n = \arg\max\{\|z_n^i - x_n\| : i = 1, \ldots, N\}, \quad \bar{z}_n = z_n^{i_n}.$

Step 4: Construct the half-spaces C n and Q n such that
$C_n = \{w \in H : \|\bar{z}_n - w\| \le \|x_n - w\|\}, \qquad Q_n = \{w \in H : \langle w - x_n, x_n - x_0\rangle \ge 0\}.$

Step 5: Find the next iterate via
$x_{n+1} = P_{C_n \cap Q_n} x_0.$
 Set n = n + 1 and go to Step 1.
However, it should be observed that at each step of the parallel hybrid-extragradient methods mentioned above, one needs to calculate a projection onto the intersection of two sets (such as Q_n ∩ W_n or C_n ∩ Q_n). This can be computationally expensive when the feasible set is not simple. Moreover, the convergence of these algorithms requires prior knowledge of the Lipschitz constants of the A_i, which are often difficult to estimate in practice.
Motivated by these results, in this paper we introduce an efficient parallel extragradient method which requires neither the computation of a projection onto Q_n ∩ W_n nor prior estimates of the Lipschitz constants of the A_i, i = 1, 2, …, N. In particular, we highlight the contributions of this paper as follows.
  • In our method, the cost operators A_i do not need to satisfy a Lipschitz condition. Instead, we assume that the A_i are pseudomonotone and weakly sequentially continuous, which is more general than the monotonicity and Lipschitz continuity used in previous results.
  • The sequence generated by our method converges strongly to a solution of (2) without the aid of a prior estimate of any Lipschitz constant.
  • Furthermore, we perform only a single projection onto C in parallel, and our algorithm does not need to find the element farthest from the iterate x_n.
  • Moreover, our algorithm does not require the computation of a projection onto Q_n ∩ W_n, which makes it computationally cheaper.

2. Preliminaries

In this section, we give some definitions and basic results that will be used in our subsequent analysis. Let H be a real Hilbert space. The weak and strong convergence of {x_n} ⊂ H to x ∈ H are denoted by x_n ⇀ x and x_n → x as n → ∞, respectively. Let C be a nonempty, closed, and convex subset of H. The metric projection of x ∈ H onto C is the unique vector P_C(x) satisfying
$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$
It is well known that P C has the following properties (see, e.g., in [34,35]).
(i)
For each x H and z C ,
$z = P_C x \iff \langle x - z, z - y\rangle \ge 0, \quad \forall y \in C.$
(ii)
For any x , y H ,
$\langle P_C x - P_C y, x - y\rangle \ge \|P_C x - P_C y\|^2.$
(iii)
For any x H and y C ,
$\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2.$
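As a quick sanity check of these properties, the following snippet verifies the characterization (i) numerically for the unit ball, whose projection is a simple rescaling; it is an illustration only, not part of the paper's analysis.

```python
import numpy as np

# Verify property (i): z = P_C(x) satisfies <x - z, z - y> >= 0 for all y in C,
# with C the closed unit ball and P_C(x) = x / max(1, ||x||).
rng = np.random.default_rng(2)
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))

x = 3.0 * rng.standard_normal(6)            # a point outside the ball
z = proj_ball(x)
for _ in range(1000):
    y = proj_ball(rng.standard_normal(6))   # random points of C
    assert np.dot(x - z, z - y) >= -1e-12
```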
Next, we state some classes of functions that play essential roles in our convergence analysis.
Definition 1.
The operator A : C → H is said to be
1.
β-strongly monotone if there exists β > 0 such that
$\langle Ax - Ay, x - y\rangle \ge \beta\|x - y\|^2, \quad \forall x, y \in C;$
2.
monotone if
$\langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C;$
3.
η-strongly pseudomonotone if there exists η > 0 such that
$\langle Ax, y - x\rangle \ge 0 \implies \langle Ay, y - x\rangle \ge \eta\|x - y\|^2,$
for all x, y ∈ C;
4.
pseudomonotone if, for all x, y ∈ C,
$\langle Ax, y - x\rangle \ge 0 \implies \langle Ay, y - x\rangle \ge 0;$
5.
L-Lipschitz continuous if there exists a constant L > 0 such that
$\|Ax - Ay\| \le L\|x - y\|, \quad \forall x, y \in C.$
When L ∈ (0, 1), A is called a contraction;
6.
weakly sequentially continuous if, for any {x_n} ⊂ H, x_n ⇀ x̄ implies A x_n ⇀ A x̄.
It is easy to see that (1) ⇒ (2) ⇒ (4) and (1) ⇒ (3) ⇒ (4), but the converse implications do not hold in general; see, for instance, [21].
Lemma 1
([36] Lemma 2.1). Consider the VI (1) with C a nonempty closed convex subset of H and A : C → H pseudomonotone and continuous. Then x̄ ∈ VI(C, A) if and only if
$\langle Ay, y - \bar{x}\rangle \ge 0, \quad \forall y \in C.$
Definition 2
([37]). A mapping T : H → H is called
(i)
nonexpansive if
$\|Tu - Tv\| \le \|u - v\|, \quad \forall u, v \in H;$
(ii)
quasi-nonexpansive if F(T) ≠ ∅ and
$\|Tu - z\| \le \|u - z\|, \quad \forall u \in H,\; z \in F(T);$
(iii)
μ-strictly pseudocontractive if there exists a constant μ ∈ [0, 1) such that
$\|Tu - Tv\|^2 \le \|u - v\|^2 + \mu\|(I - T)u - (I - T)v\|^2, \quad \forall u, v \in H;$
(iv)
κ-demicontractive if there exist κ ∈ [0, 1) and F(T) ≠ ∅ such that
$\|Tu - z\|^2 \le \|u - z\|^2 + \kappa\|u - Tu\|^2, \quad \forall u \in H,\; z \in F(T).$
It is easy to see that the class of demicontractive mappings contains the classes of quasi-nonexpansive and strictly pseudocontractive mappings. Due to this generality and its possible applications in applied analysis, the class of demicontractive mappings has continued to attract the attention of many authors.
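For instance, the mapping Tx = −2x on H has Fix(T) = {0}, and for every u ∈ H,
$\|Tu - 0\|^2 = 4\|u\|^2 = \|u\|^2 + \tfrac{1}{3}\|u - Tu\|^2,$
since ‖u − Tu‖² = 9‖u‖²; hence T is 1/3-demicontractive, although it is not quasi-nonexpansive because ‖Tu − 0‖ = 2‖u‖ > ‖u‖ for u ≠ 0. (This is the mapping T_1 of Example 3 below.)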
A bounded linear operator A on H is said to be strongly positive if there exists a constant γ > 0 such that
$\langle x, Ax\rangle \ge \gamma\|x\|^2, \quad \forall x \in H.$
It is known that when A is a strongly positive bounded linear operator with 0 < ρ ≤ ‖A‖⁻¹, then
$\|I - \rho A\| \le 1 - \rho\gamma.$
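As a quick sanity check (not in the paper): take A = 2I, so ⟨x, Ax⟩ = 2‖x‖², γ = 2, and ‖A‖ = 2. For any 0 < ρ ≤ 1/2 one has ‖I − ρA‖ = |1 − 2ρ| = 1 − ργ, so the bound above is attained.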
For any real Hilbert space H , it is known that the following identities hold (see, e.g., in [38]).
Lemma 2.
For all x, y ∈ H, the following hold:
(i)
$\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2;$
(ii)
$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle;$
(iii)
$\|\eta x + (1-\eta)y\|^2 = \eta\|x\|^2 + (1-\eta)\|y\|^2 - \eta(1-\eta)\|x - y\|^2, \quad \eta \in [0, 1].$
Lemma 3
([24]). Let C be a nonempty closed convex subset of a real Hilbert space H and h a real-valued function on H. Suppose D = {x ∈ C : h(x) ≤ 0} is nonempty and h is Lipschitz continuous on C with modulus ϑ > 0. Then
$\mathrm{dist}(x, D) \ge \vartheta^{-1}\max\{h(x), 0\}, \quad \forall x \in C.$
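For example, if h(x) = ⟨g, x⟩ − b is affine with g ≠ 0 (so ϑ = ‖g‖) and D = {x ∈ H : h(x) ≤ 0}, then dist(x, D) = ‖g‖⁻¹ max{h(x), 0}, so the bound in Lemma 3 holds with equality; this is precisely the situation of the sub-level sets D_n used in Algorithm 3 below.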
Lemma 4
([39]). Let {Γ_n} be a non-negative real sequence satisfying $\Gamma_{n+1} \le (1 - \theta_n)\Gamma_n + \theta_n b_n$, where {θ_n} ⊂ (0, 1), $\sum_{n=0}^{\infty}\theta_n = \infty$, and {b_n} is a sequence such that $\limsup_{n\to\infty} b_n \le 0$. Then $\lim_{n\to\infty}\Gamma_n = 0$.
Lemma 5
((Lemma 3.1) [37]). Let {a_n} be a sequence of real numbers such that there exists a subsequence {a_{n_i}} of {a_n} with $a_{n_i} < a_{n_i+1}$ for all i ∈ ℕ. Consider the integer sequence {m_k} defined by
$m_k = \max\{j \le k : a_j < a_{j+1}\}.$
Then {m_k} is a non-decreasing sequence with $\lim_{k\to\infty} m_k = \infty$, and for all k ∈ ℕ, the following estimates hold:
$a_{m_k} \le a_{m_k+1} \quad \text{and} \quad a_k \le a_{m_k+1}.$

3. Algorithm and Convergence Analysis

In this section, we describe our algorithm and prove its convergence under suitable conditions. Let H be a real Hilbert space and C be a nonempty, closed, and convex subset of H. We suppose that the following assumptions hold.
Assumption 1.
(A1) 
For i = 1, 2, …, N, the operators A_i : H → H are pseudomonotone, uniformly continuous, and weakly sequentially continuous;
(A2) 
For j = 1, 2, …, M, the mappings T_j : H → H are κ_j-demicontractive with κ = max{κ_j : 1 ≤ j ≤ M}, and I − T_j is demiclosed at zero;
(A3) 
f : H → H is an α-contraction mapping with α ∈ (0, 1);
(A4) 
For k = 1, 2, …, N̄, the B_k : H → H are strongly positive bounded linear operators with coefficients γ_k > 0, where γ̄ = min{γ_k : 1 ≤ k ≤ N̄} and $0 < \gamma < \frac{\bar{\gamma}}{2\alpha}$;
(A5) 
The solution set
$Sol = \bigcap_{i=1}^{N} VI(C, A_i) \cap \bigcap_{j=1}^{M} Fix(T_j)$
is nonempty.
We now present our method as follows.
Remark 1.
Observe that we are at a solution of Problem (2) if x_n = y_n^i = u_n (for all i). We will implicitly assume that this does not occur after finitely many iterations, so that Algorithm 3 generates an infinite sequence for our analysis.
Algorithm 3: EFEM
     Initialization: Choose σ ∈ (0, 1), ρ ∈ (0, 1), {α_n} ⊂ (0, 1), and {δ_{n,j}}_{j=0}^{M} ⊂ (0, 1). Let x_1 ∈ C be given
   arbitrarily and set n = 1.
     Iterative steps:
Step 1: For i = 1, 2, …, N, compute in parallel
$y_n^i = P_C(x_n - A_i x_n).$
              If θ_i(x_n) := x_n − y_n^i = 0 for all i, set w_n = x_n and go to Step 3. Otherwise, go to Step 2.
Step 2: Compute z_n^i = x_n − ρ^{l_n} θ_i(x_n), where l_n is the smallest non-negative integer satisfying
$\langle A_i z_n^i, \theta_i(x_n)\rangle \ge \frac{\sigma}{2}\|\theta_i(x_n)\|^2.$
              Set w_n = P_{D_n}(x_n), where
$D_n = \Big\{x \in H : \sum_{i=1}^{N}\beta_n^i h_n^i(x) \le 0\Big\},$
               {β_n^i}_{i=1}^{N} ⊂ (0, 1) with $\sum_{i=1}^{N}\beta_n^i = 1$, and $h_n^i(x) = \langle A_i z_n^i, x - z_n^i\rangle$.
 Step 3: Compute
$u_n = \delta_{n,0}w_n + \sum_{j=1}^{M}\delta_{n,j}T_j w_n$
and
$x_{n+1} = \alpha_n\gamma f(x_n) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n,$
where {c_k}_{k=1}^{N̄} ⊂ (0, 1) with $\sum_{k=1}^{\bar{N}}c_k = 1$.
Stopping criterion: If x_n = y_n^i = u_n, then stop; otherwise, set n := n + 1 and return to Step 1.
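To make the steps concrete, here is a minimal NumPy sketch of one iteration of Algorithm 3, assuming C is the closed unit ball (so P_C is explicit) and taking the operators A_i, the mappings T_j, the contraction f, and the combined operator B = Σ_k c_k B_k as user-supplied callables. All names and default weights are illustrative; this is a sketch under these assumptions, not the authors' MATLAB implementation. Note that, since each h_n^i is affine, D_n is a half-space and P_{D_n} has the closed form used below.

```python
import numpy as np

def proj_C(x):
    """Projection onto the closed unit ball (a stand-in for a general P_C)."""
    return x / max(1.0, np.linalg.norm(x))

def efem_step(x, A, T, f, B, gamma, alpha_n, sigma=0.28, rho=0.36, max_ls=50):
    """One iteration of Algorithm 3 (EFEM); A and T are lists of callables."""
    N, M = len(A), len(T)
    beta = [1.0 / N] * N                 # convex weights beta_n^i
    delta = [1.0 / (M + 1)] * (M + 1)    # weights delta_{n,0}, ..., delta_{n,M}
    # Step 1: parallel projections y^i = P_C(x - A_i x); theta_i = x - y^i
    thetas = [x - proj_C(x - Ai(x)) for Ai in A]
    if all(np.linalg.norm(t) < 1e-12 for t in thetas):
        w = x                            # theta_i(x_n) = 0 for all i: skip Step 2
    else:
        # Step 2: Armijo search for the smallest l with
        # <A_i z, theta_i> >= (sigma/2) ||theta_i||^2, z = x - rho^l theta_i
        zs, grads = [], []
        for Ai, th in zip(A, thetas):
            z = x
            for l in range(max_ls):
                z = x - rho**l * th
                if np.dot(Ai(z), th) >= 0.5 * sigma * np.dot(th, th):
                    break
            zs.append(z)
            grads.append(Ai(z))
        # Projection onto the half-space D_n = {u : sum_i beta_i <A_i z_i, u - z_i> <= 0}
        g = sum(b * gi for b, gi in zip(beta, grads))      # aggregated normal vector
        hval = sum(b * np.dot(gi, x - zi)
                   for b, gi, zi in zip(beta, grads, zs))  # sum_i beta_i h_i(x)
        w = x - (max(hval, 0.0) / np.dot(g, g)) * g if np.dot(g, g) > 0 else x
    # Step 3: Mann-type step through the demicontractive mappings
    u = delta[0] * w + sum(delta[j + 1] * T[j](w) for j in range(M))
    # Viscosity step: x_{n+1} = alpha_n * gamma * f(x) + (I - alpha_n * B) u
    return alpha_n * gamma * f(x) + u - alpha_n * B(u)
```

With the data of Example 2 below, for instance, one would iterate x = efem_step(x, A, T, f, B, gamma=1/8, alpha_n=1/(n+1)).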
In order to prove the convergence of our algorithm, we assume that the control parameters satisfy the following conditions.
Assumption 2.
(B1) 
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = +\infty$;
(B2) 
$\liminf_{n\to\infty}(\delta_{n,0} - \kappa) > 0$.
We begin the convergence analysis of Algorithm 3 by proving some useful Lemmas.
Lemma 6.
Let u* ∈ Sol and let h_n^i be as defined in Algorithm 3. Then $h_n^i(x_n) \ge \frac{\rho^{l_n}\sigma}{2}\|x_n - y_n^i\|^2$ and $h_n^i(u^*) \le 0$. In particular, if θ_i(x_n) ≠ 0, then h_n^i(x_n) > 0 for all n ∈ ℕ.
Proof. 
As z_n^i = x_n − ρ^{l_n}(x_n − y_n^i) for i = 1, 2, …, N, we have
$h_n^i(x_n) = \langle A_i z_n^i, x_n - z_n^i\rangle = \rho^{l_n}\langle A_i z_n^i, x_n - y_n^i\rangle \ge \frac{\rho^{l_n}\sigma}{2}\|x_n - y_n^i\|^2.$
Furthermore, if x_n ≠ y_n^i for some i, then h_n^i(x_n) > 0. As u* ∈ Sol and the A_i are pseudomonotone, Lemma 1 gives
$\langle A_i y, y - u^*\rangle \ge 0, \quad \forall y \in C.$
Therefore,
$\langle A_i z_n^i, z_n^i - u^*\rangle \ge 0,$
and hence
$h_n^i(u^*) = \langle A_i z_n^i, u^* - z_n^i\rangle \le 0.$ □
Remark 2.
Lemma 6 shows that Sol ⊆ D_n, so D_n ≠ ∅ and P_{D_n} is well defined. Consequently, Algorithm 3 is well defined.
Now we show that the sequence { x n } generated by Algorithm 3 is bounded.
Lemma 7.
Let { x n } be the sequence generated by Algorithm 3. Then { x n } is bounded.
Proof. 
Let u* ∈ Sol. Then from (11), we have
$\|w_n - u^*\|^2 = \|P_{D_n}x_n - u^*\|^2 \le \|x_n - u^*\|^2 - \|P_{D_n}x_n - x_n\|^2 = \|x_n - u^*\|^2 - \mathrm{dist}(x_n, D_n)^2 \le \|x_n - u^*\|^2.$
Moreover, from Lemma 2 (iii), we get
$\begin{aligned} \|u_n - u^*\|^2 &= \Big\|\delta_{n,0}(w_n - u^*) + \sum_{j=1}^{M}\delta_{n,j}(T_j w_n - u^*)\Big\|^2 \\ &\le \delta_{n,0}\|w_n - u^*\|^2 + \sum_{j=1}^{M}\delta_{n,j}\|T_j w_n - u^*\|^2 - \sum_{j=1}^{M}\delta_{n,0}\delta_{n,j}\|T_j w_n - w_n\|^2 \\ &\le \delta_{n,0}\|w_n - u^*\|^2 + \sum_{j=1}^{M}\delta_{n,j}\big(\|w_n - u^*\|^2 + \kappa_j\|w_n - T_j w_n\|^2\big) - \sum_{j=1}^{M}\delta_{n,0}\delta_{n,j}\|T_j w_n - w_n\|^2 \\ &\le \|w_n - u^*\|^2 - \sum_{j=1}^{M}(\delta_{n,0} - \kappa)\delta_{n,j}\|w_n - T_j w_n\|^2 \\ &\le \|x_n - u^*\|^2 - \sum_{j=1}^{M}(\delta_{n,0} - \kappa)\delta_{n,j}\|w_n - T_j w_n\|^2. \end{aligned}$
This implies that
$\|u_n - u^*\| \le \|x_n - u^*\|.$
Then from (13), we obtain
$\begin{aligned} \|x_{n+1} - u^*\| &= \Big\|\alpha_n\gamma f(x_n) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n - u^*\Big\| \\ &= \Big\|\alpha_n\Big(\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n - \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u^*\Big\| \\ &\le \Big\|I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big\|\,\|u_n - u^*\| + \alpha_n\gamma\|f(x_n) - f(u^*)\| + \alpha_n\Big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big\| \\ &\le \Big(1 - \alpha_n\sum_{k=1}^{\bar{N}}c_k\gamma_k\Big)\|u_n - u^*\| + \alpha_n\alpha\gamma\|x_n - u^*\| + \alpha_n\Big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big\| \\ &\le (1 - \alpha_n\bar{\gamma})\|x_n - u^*\| + \alpha_n\alpha\gamma\|x_n - u^*\| + \alpha_n\Big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big\| \\ &= \big(1 - \alpha_n(\bar{\gamma} - \alpha\gamma)\big)\|x_n - u^*\| + \alpha_n(\bar{\gamma} - \alpha\gamma)\frac{\big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\big\|}{\bar{\gamma} - \alpha\gamma} \\ &\le \max\left\{\|x_n - u^*\|,\ \frac{\big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\big\|}{\bar{\gamma} - \alpha\gamma}\right\}. \end{aligned}$
By induction, we have
$\|x_n - u^*\| \le \max\left\{\|x_1 - u^*\|,\ \frac{\big\|\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\big\|}{\bar{\gamma} - \alpha\gamma}\right\}.$
This implies that { x n } is bounded. □
Lemma 8.
Let u* ∈ Sol and let {x_n} be the sequence generated by Algorithm 3. Then {x_n} satisfies the following estimates.
(i) 
$\|w_n - u^*\|^2 \le \|x_n - u^*\|^2 - \left(\frac{\sigma\rho^{l_n}}{L}\sum_{i=1}^{N}\beta_n^i\|x_n - y_n^i\|^2\right)^2$
for some L > 0;
(ii) 
$s_{n+1} \le (1 - a_n)s_n + a_n b_n,$
where
$s_n = \|x_n - u^*\|^2, \quad a_n = \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\alpha\gamma}, \quad b_n = \frac{\alpha_n M + \big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\big\rangle}{\bar{\gamma} - \alpha\gamma},$
for some M > 0 .
Proof. 
As {x_n} is bounded and the A_i are uniformly continuous, hence bounded on bounded subsets of H, the sequences {A_i x_n} are bounded; thus there exist constants L_i > 0 such that
$\|A_i x_n\| \le \frac{L_i}{2}, \quad \forall n \in \mathbb{N},\ i = 1, 2, \ldots, N.$
Consequently,
$\|A_i x_n\| \le \frac{L}{2}, \quad \text{where } L = \max\{L_i : i = 1, 2, \ldots, N\}.$
Therefore from Lemma 3, we have
$\mathrm{dist}(x_n, D_n) \ge \frac{2}{L}\sum_{i=1}^{N}\beta_n^i h_n^i(x_n), \quad \forall n \ge 0.$
Thus from (16) and (18), we get
$\|w_n - u^*\|^2 \le \|x_n - u^*\|^2 - \mathrm{dist}(x_n, D_n)^2 \le \|x_n - u^*\|^2 - \left(\frac{2}{L}\sum_{i=1}^{N}\beta_n^i h_n^i(x_n)\right)^2.$
Hence from Lemma 6, we obtain
$\|w_n - u^*\|^2 \le \|x_n - u^*\|^2 - \left(\frac{\sigma\rho^{l_n}}{L}\sum_{i=1}^{N}\beta_n^i\|x_n - y_n^i\|^2\right)^2.$
This establishes (i).
Moreover, we have from Algorithm 3 that
$\begin{aligned} \|x_{n+1} - u^*\|^2 &= \Big\|\alpha_n\gamma f(x_n) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n - u^*\Big\|^2 \\ &= \Big\|\alpha_n\Big(\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n - \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u^*\Big\|^2 \\ &\le \Big\|\Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)(u_n - u^*)\Big\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le \Big(1 - \alpha_n\sum_{k=1}^{\bar{N}}c_k\gamma_k\Big)^2\|u_n - u^*\|^2 + 2\alpha_n\gamma\langle f(x_n) - f(u^*),\ x_{n+1} - u^*\rangle + 2\alpha_n\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le (1 - \alpha_n\bar{\gamma})^2\|x_n - u^*\|^2 + 2\alpha_n\alpha\gamma\|x_n - u^*\|\|x_{n+1} - u^*\| + 2\alpha_n\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le (1 - \alpha_n\bar{\gamma})^2\|x_n - u^*\|^2 + \alpha_n\alpha\gamma\big(\|x_n - u^*\|^2 + \|x_{n+1} - u^*\|^2\big) + 2\alpha_n\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle. \end{aligned}$
Therefore, we obtain
$\begin{aligned} \|x_{n+1} - u^*\|^2 &\le \frac{(1 - \alpha_n\bar{\gamma})^2 + \alpha_n\alpha\gamma}{1 - \alpha_n\alpha\gamma}\|x_n - u^*\|^2 + \frac{2\alpha_n}{1 - \alpha_n\alpha\gamma}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &= \left(1 - \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\alpha\gamma}\right)\|x_n - u^*\|^2 + \frac{\alpha_n^2\bar{\gamma}^2}{1 - \alpha_n\alpha\gamma}\|x_n - u^*\|^2 + \frac{2\alpha_n}{1 - \alpha_n\alpha\gamma}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &= \left(1 - \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\alpha\gamma}\right)\|x_n - u^*\|^2 + \frac{2\alpha_n(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_n\alpha\gamma}\cdot\frac{\alpha_n M + \big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\big\rangle}{\bar{\gamma} - \alpha\gamma} \\ &= (1 - a_n)s_n + a_n b_n, \end{aligned}$
where the existence of M follows from the boundedness of { x n } . This completes the proof. □
Lemma 9.
Let {x_{n_l}} be a subsequence of the sequence {x_n} generated by Algorithm 3 such that x_{n_l} ⇀ p ∈ H and $\lim_{l\to\infty}\|x_{n_l} - y_{n_l}^i\| = 0$ for all i = 1, 2, …, N. Then:
(i) 
$0 \le \liminf_{l\to\infty}\langle A_i x_{n_l}, x - x_{n_l}\rangle$ for all x ∈ C and i = 1, 2, …, N;
(ii) 
$p \in \bigcap_{i=1}^{N} VI(C, A_i).$
Proof. 
(i) From the definition of y_n^i and (10), we have
$\langle x_{n_l} - A_i x_{n_l} - y_{n_l}^i,\ x - y_{n_l}^i\rangle \le 0, \quad \forall x \in C,\ i = 1, 2, \ldots, N.$
Thus,
$\langle x_{n_l} - y_{n_l}^i,\ x - y_{n_l}^i\rangle \le \langle A_i x_{n_l},\ x - y_{n_l}^i\rangle, \quad \forall x \in C,\ i = 1, 2, \ldots, N.$
This implies that
$\langle x_{n_l} - y_{n_l}^i,\ x - y_{n_l}^i\rangle + \langle A_i x_{n_l},\ y_{n_l}^i - x_{n_l}\rangle \le \langle A_i x_{n_l},\ x - x_{n_l}\rangle, \quad \forall x \in C,\ i = 1, 2, \ldots, N.$
Fix x ∈ C and let l → ∞ in (19). Since ‖y_{n_l}^i − x_{n_l}‖ → 0, we obtain
$0 \le \liminf_{l\to\infty}\langle A_i x_{n_l},\ x - x_{n_l}\rangle, \quad \forall x \in C,\ i = 1, 2, \ldots, N.$
(ii) Let {ξ_l} be a decreasing sequence of non-negative numbers such that ξ_l → 0 as l → ∞. For each ξ_l, denote by N_l the smallest positive integer such that
$\langle A_i x_{n_l},\ x - x_{n_l}\rangle + \xi_l \ge 0, \quad \forall l \ge N_l,\ i = 1, 2, \ldots, N,$
where the existence of N l follows from (i). This means that
$\langle A_i x_{n_l},\ x + \xi_l t_{n_l}^i - x_{n_l}\rangle \ge 0, \quad \forall l \ge N_l,\ i = 1, 2, \ldots, N,$
for some t_{n_l}^i ∈ H satisfying ⟨A_i x_{n_l}, t_{n_l}^i⟩ = 1 (which exists since A_i x_{n_l} ≠ 0 for i = 1, 2, …, N). As the A_i are pseudomonotone, it follows that
$\langle A_i(x + \xi_l t_{n_l}^i),\ x + \xi_l t_{n_l}^i - x_{n_l}\rangle \ge 0, \quad \forall l \ge N_l,\ i = 1, 2, \ldots, N.$
Furthermore, since x_{n_l} ⇀ p as l → ∞ and the A_i are weakly sequentially continuous, we have A_i x_{n_l} ⇀ A_i p for i = 1, 2, …, N. Assuming A_i p ≠ 0 (otherwise p ∈ VI(C, A_i) trivially), the weak lower semicontinuity of the norm gives
$0 < \|A_i p\| \le \liminf_{l\to\infty}\|A_i x_{n_l}\|, \quad i = 1, 2, \ldots, N.$
Moreover, {x_{N_l}} ⊂ {x_{n_l}} and ξ_l → 0 as l → ∞. Thus, we obtain
$0 \le \limsup_{l\to\infty}\|\xi_l t_{n_l}^i\| = \limsup_{l\to\infty}\frac{\xi_l}{\|A_i x_{n_l}\|} \le \frac{\limsup_{l\to\infty}\xi_l}{\liminf_{l\to\infty}\|A_i x_{n_l}\|} \le \frac{0}{\|A_i p\|} = 0,$
which implies that $\lim_{l\to\infty}\|\xi_l t_{n_l}^i\| = 0$. Thus, taking the limit in (20) as l → ∞, we obtain
$\langle A_i x,\ x - p\rangle \ge 0, \quad \forall x \in C,\ i = 1, 2, \ldots, N.$
Using Lemma 1, we have p ∈ VI(C, A_i) for all i = 1, 2, …, N; therefore, $p \in \bigcap_{i=1}^{N} VI(C, A_i)$. This completes the proof. □
We now present our main result.
Theorem 1.
Suppose {x_n} is generated by Algorithm 3. Then {x_n} converges strongly to a point z ∈ Sol.
Proof. 
Let u* ∈ Sol and put Γ_n = ‖x_n − u*‖². We consider the following possible cases.
Case A: Assume that there exists n_0 ∈ ℕ such that {Γ_n} is monotonically decreasing for n ≥ n_0. Since {Γ_n} is bounded, it converges, and hence
$\Gamma_n - \Gamma_{n+1} \to 0 \quad \text{as } n \to \infty.$
Moreover, from Lemma 8 (i), we have
$\begin{aligned} \|x_{n+1} - u^*\|^2 &= \Big\|\alpha_n\gamma f(x_n) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)u_n - u^*\Big\|^2 \\ &= \Big\|\alpha_n\Big(\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*\Big) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)(u_n - u^*)\Big\|^2 \\ &\le \Big\|\Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)(u_n - u^*)\Big\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le \Big(1 - \alpha_n\sum_{k=1}^{\bar{N}}c_k\gamma_k\Big)\|u_n - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le (1 - \alpha_n\bar{\gamma})\|w_n - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le (1 - \alpha_n\bar{\gamma})\left[\|x_n - u^*\|^2 - \left(\frac{\sigma\rho^{l_n}}{L}\sum_{i=1}^{N}\beta_n^i\|x_n - y_n^i\|^2\right)^2\right] + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle. \end{aligned}$
Therefore
$(1 - \alpha_n\bar{\gamma})\left(\frac{\sigma\rho^{l_n}}{L}\sum_{i=1}^{N}\beta_n^i\|x_n - y_n^i\|^2\right)^2 \le (1 - \alpha_n\bar{\gamma})\|x_n - u^*\|^2 - \|x_{n+1} - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle.$
As α_n → 0 as n → ∞, from (21) we deduce
$\lim_{n\to\infty}\frac{\sigma\rho^{l_n}}{L}\sum_{i=1}^{N}\beta_n^i\|x_n - y_n^i\|^2 = 0.$
Furthermore, from (17), we obtain
$\begin{aligned} \|x_{n+1} - u^*\|^2 &\le (1 - \alpha_n\bar{\gamma})\|u_n - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \\ &\le (1 - \alpha_n\bar{\gamma})\Big[\|x_n - u^*\|^2 - \sum_{j=1}^{M}(\delta_{n,0} - \kappa)\delta_{n,j}\|w_n - T_j w_n\|^2\Big] + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle. \end{aligned}$
This implies that
$(1 - \alpha_n\bar{\gamma})\sum_{j=1}^{M}(\delta_{n,0} - \kappa)\delta_{n,j}\|w_n - T_j w_n\|^2 \le (1 - \alpha_n\bar{\gamma})\|x_n - u^*\|^2 - \|x_{n+1} - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle.$
Therefore,
$\lim_{n\to\infty}\sum_{j=1}^{M}(\delta_{n,0} - \kappa)\delta_{n,j}\|w_n - T_j w_n\|^2 = 0.$
Using condition (B2), we obtain
$\lim_{n\to\infty}\|w_n - T_j w_n\| = 0, \quad j = 1, 2, \ldots, M.$
Consequently,
$\|u_n - w_n\| \le \delta_{n,0}\|w_n - w_n\| + \sum_{j=1}^{M}\delta_{n,j}\|w_n - T_j w_n\| \to 0 \quad \text{as } n \to \infty.$
Furthermore, from (11), we have
$\|w_n - u^*\|^2 = \|P_{D_n}(x_n) - u^*\|^2 \le \|x_n - u^*\|^2 - \|w_n - x_n\|^2.$
Then, from (22), we have
$\begin{aligned} \|w_n - x_n\|^2 &\le \|x_n - u^*\|^2 - \|w_n - u^*\|^2 \\ &= \|x_n - u^*\|^2 - \|x_{n+1} - u^*\|^2 + \|x_{n+1} - u^*\|^2 - \|w_n - u^*\|^2 \\ &\le \|x_n - u^*\|^2 - \|x_{n+1} - u^*\|^2 + \|w_n - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle - \|w_n - u^*\|^2 \\ &= \|x_n - u^*\|^2 - \|x_{n+1} - u^*\|^2 + 2\alpha_n\Big\langle\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle. \end{aligned}$
Moreover, as α_n → 0 as n → ∞ and Γ_n − Γ_{n+1} → 0, we get
$\lim_{n\to\infty}\|w_n - x_n\| = 0.$
From (25) and (26), we get
$\|u_n - x_n\| \le \|u_n - w_n\| + \|w_n - x_n\| \to 0 \quad \text{as } n \to \infty.$
Therefore,
$\|x_{n+1} - x_n\| = \Big\|\alpha_n\Big(\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k x_n\Big) + \Big(I - \alpha_n\sum_{k=1}^{\bar{N}}c_k B_k\Big)(u_n - x_n)\Big\| \le \alpha_n\Big\|\gamma f(x_n) - \sum_{k=1}^{\bar{N}}c_k B_k x_n\Big\| + (1 - \alpha_n\bar{\gamma})\|u_n - x_n\| \to 0 \quad \text{as } n \to \infty.$
Now, we show that Ω_w(x_n) ⊆ Sol, where Ω_w(x_n) denotes the set of weak subsequential limits of {x_n}. Let p ∈ Ω_w(x_n); then there exists a subsequence {x_{n_l}} of {x_n} such that x_{n_l} ⇀ p as l → ∞. Let {y_{n_l}^i} be the corresponding subsequences of {y_n^i} for i = 1, 2, …, N. From (23), we have
$\lim_{l\to\infty}\rho^{l_{n_l}}\sum_{i=1}^{N}\beta_{n_l}^i\|x_{n_l} - y_{n_l}^i\|^2 = 0.$
Now we claim that
$\lim_{l\to\infty}\|x_{n_l} - y_{n_l}^i\| = 0, \quad i = 1, 2, \ldots, N.$
Indeed, we consider two distinct cases depending on the behavior of the sequence {ρ^{l_{n_l}}}.
(i) If $\liminf_{l\to\infty}\rho^{l_{n_l}} > 0$, then
$0 \le \|x_{n_l} - y_{n_l}^i\|^2 = \frac{\rho^{l_{n_l}}\|x_{n_l} - y_{n_l}^i\|^2}{\rho^{l_{n_l}}}.$
This implies that
$\limsup_{l\to\infty}\|x_{n_l} - y_{n_l}^i\|^2 \le \limsup_{l\to\infty}\big(\rho^{l_{n_l}}\|x_{n_l} - y_{n_l}^i\|^2\big)\cdot\limsup_{l\to\infty}\frac{1}{\rho^{l_{n_l}}} = \frac{\limsup_{l\to\infty}\rho^{l_{n_l}}\|x_{n_l} - y_{n_l}^i\|^2}{\liminf_{l\to\infty}\rho^{l_{n_l}}} = 0.$
Therefore,
$\lim_{l\to\infty}\|x_{n_l} - y_{n_l}^i\| = 0.$
(ii) Suppose $\liminf_{l\to\infty}\rho^{l_{n_l}} = 0$. We may assume without loss of generality that $\lim_{l\to\infty}\rho^{l_{n_l}} = 0$ and, arguing by contradiction, that $\lim_{l\to\infty}\|x_{n_l} - y_{n_l}^i\| = a > 0$. Define $\bar{z}_{n_l}^i = \rho^{l_{n_l}-1}y_{n_l}^i + (1 - \rho^{l_{n_l}-1})x_{n_l}$ for i = 1, 2, …, N, i.e., the point tested at the last rejected step of the line search (note that l_{n_l} ≥ 1 for all large l). This implies that $\bar{z}_{n_l}^i - x_{n_l} = \rho^{l_{n_l}-1}(y_{n_l}^i - x_{n_l})$. Since {y_{n_l}^i − x_{n_l}} is bounded and $\rho^{l_{n_l}-1} = \rho^{l_{n_l}}/\rho \to 0$, we get
$\lim_{l\to\infty}\|\bar{z}_{n_l}^i - x_{n_l}\| = 0.$
As the A_i are uniformly continuous, then
$\lim_{l\to\infty}\|A_i\bar{z}_{n_l}^i - A_i x_{n_l}\| = 0, \quad i = 1, 2, \ldots, N.$
By the minimality of l_{n_l} in (12) and the definition of z̄_{n_l}^i, we know that
$\langle A_i\bar{z}_{n_l}^i,\ x_{n_l} - y_{n_l}^i\rangle < \frac{\sigma}{2}\|x_{n_l} - y_{n_l}^i\|^2, \quad i = 1, 2, \ldots, N.$
Therefore,
$2\langle A_i x_{n_l},\ x_{n_l} - y_{n_l}^i\rangle + 2\langle A_i\bar{z}_{n_l}^i - A_i x_{n_l},\ x_{n_l} - y_{n_l}^i\rangle < \sigma\|x_{n_l} - y_{n_l}^i\|^2, \quad i = 1, 2, \ldots, N.$
Putting v_{n_l}^i = x_{n_l} − A_i x_{n_l} for all i = 1, 2, …, N, we obtain
$2\langle x_{n_l} - v_{n_l}^i,\ x_{n_l} - y_{n_l}^i\rangle + 2\langle A_i\bar{z}_{n_l}^i - A_i x_{n_l},\ x_{n_l} - y_{n_l}^i\rangle < \sigma\|x_{n_l} - y_{n_l}^i\|^2, \quad i = 1, 2, \ldots, N.$
Moreover, from Lemma 2 (i), we have
$2\langle x_{n_l} - v_{n_l}^i,\ x_{n_l} - y_{n_l}^i\rangle = \|x_{n_l} - v_{n_l}^i\|^2 + \|x_{n_l} - y_{n_l}^i\|^2 - \|y_{n_l}^i - v_{n_l}^i\|^2.$
Substituting (30) into (29), we have
$\|x_{n_l} - v_{n_l}^i\|^2 - \|y_{n_l}^i - v_{n_l}^i\|^2 < (\sigma - 1)\|x_{n_l} - y_{n_l}^i\|^2 - 2\langle A_i\bar{z}_{n_l}^i - A_i x_{n_l},\ x_{n_l} - y_{n_l}^i\rangle.$
Passing to the limit in the last inequality as l → ∞ and using (28), we get
$\lim_{l\to\infty}\big(\|x_{n_l} - v_{n_l}^i\|^2 - \|y_{n_l}^i - v_{n_l}^i\|^2\big) \le (\sigma - 1)a^2 < 0.$
For $\epsilon = \frac{(1 - \sigma)a^2}{2} > 0$, there exists m ∈ ℕ such that
$\|x_{n_l} - v_{n_l}^i\|^2 - \|y_{n_l}^i - v_{n_l}^i\|^2 \le (\sigma - 1)a^2 + \epsilon = \frac{(\sigma - 1)a^2}{2} < 0,$
for all l ≥ m and i = 1, 2, …, N. Therefore,
$\|x_{n_l} - v_{n_l}^i\|^2 < \|y_{n_l}^i - v_{n_l}^i\|^2, \quad \forall l \ge m,\ i = 1, 2, \ldots, N.$
This contradicts the definition of the metric projection, since $y_{n_l}^i = P_C(x_{n_l} - A_i x_{n_l}) = P_C(v_{n_l}^i)$. Thus a = 0, and therefore
$\lim_{l\to\infty}\|x_{n_l} - y_{n_l}^i\| = 0, \quad i = 1, 2, \ldots, N.$
Consequently, from Lemma 9, we have $p \in \bigcap_{i=1}^{N} VI(C, A_i)$. Furthermore, since w_{n_l} ⇀ p (by (26)) and ‖w_{n_l} − T_j w_{n_l}‖ → 0, the demiclosedness of I − T_j at zero implies p ∈ Fix(T_j) for each j = 1, 2, …, M; that is, $p \in \bigcap_{j=1}^{M} Fix(T_j)$. Therefore, p ∈ Sol, which shows that Ω_w(x_n) ⊆ Sol. We now show that {x_n} converges strongly to u* ∈ Sol. As x_{n_l} ⇀ p and ‖x_{n_l+1} − x_{n_l}‖ → 0 as l → ∞, we have x_{n_l+1} ⇀ p. Therefore,
$\limsup_{n\to\infty}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle = \lim_{l\to\infty}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n_l+1} - u^*\Big\rangle = \Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ p - u^*\Big\rangle.$
As p ∈ Sol, it follows from (10) and (32) that
$\limsup_{n\to\infty}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{n+1} - u^*\Big\rangle \le 0.$
Therefore, using Lemma 4 and Lemma 8 (ii), we conclude that $\lim_{n\to\infty}\|x_n - u^*\| = 0$; that is, {x_n} converges strongly to u*.
Case B: Suppose {Γ_n} is not monotonically decreasing. For all n ≥ n_0 (for some n_0 large enough), define τ : ℕ → ℕ by
$\tau(n) = \max\{k \in \mathbb{N} : k \le n,\ \Gamma_k \le \Gamma_{k+1}\}.$
Clearly, τ is non-decreasing, τ(n) → ∞ as n → ∞, and
$0 \le \Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1}, \quad \forall n \ge n_0.$
As {x_{τ(n)}} is bounded, there exists a subsequence of {x_{τ(n)}}, still denoted by {x_{τ(n)}}, which converges weakly to p ∈ C. Following arguments similar to those in Case A, we get
$\lim_{n\to\infty}\|w_{\tau(n)} - x_{\tau(n)}\| = \lim_{n\to\infty}\|u_{\tau(n)} - x_{\tau(n)}\| = \lim_{n\to\infty}\|x_{\tau(n)+1} - x_{\tau(n)}\| = 0, \quad \lim_{n\to\infty}\|x_{\tau(n)} - y_{\tau(n)}^i\| = 0,\ i = 1, 2, \ldots, N, \quad \lim_{n\to\infty}\|w_{\tau(n)} - T_j w_{\tau(n)}\| = 0,\ j = 1, 2, \ldots, M,$
and Ω_w(x_{τ(n)}) ⊆ Sol, where Ω_w(x_{τ(n)}) is the set of weak subsequential limits of {x_{τ(n)}}. Furthermore,
$\limsup_{n\to\infty}\Big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{\tau(n)+1} - u^*\Big\rangle \le 0.$
From Lemma 8 (ii), we have
$\|x_{\tau(n)+1} - u^*\|^2 \le \left(1 - \frac{2\alpha_{\tau(n)}(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_{\tau(n)}\alpha\gamma}\right)\|x_{\tau(n)} - u^*\|^2 + \frac{2\alpha_{\tau(n)}(\bar{\gamma} - \alpha\gamma)}{1 - \alpha_{\tau(n)}\alpha\gamma}\cdot\frac{\alpha_{\tau(n)}M + \big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{\tau(n)+1} - u^*\big\rangle}{\bar{\gamma} - \alpha\gamma},$
for some M > 0. Since $0 \le \Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1}$, i.e., $\|x_{\tau(n)} - u^*\|^2 \le \|x_{\tau(n)+1} - u^*\|^2$, we get
$\|x_{\tau(n)} - u^*\|^2 \le \frac{\alpha_{\tau(n)}M + \big\langle\gamma f(u^*) - \sum_{k=1}^{\bar{N}}c_k B_k u^*,\ x_{\tau(n)+1} - u^*\big\rangle}{\bar{\gamma} - \alpha\gamma}.$
Then, from (33) and the fact that α_{τ(n)} → 0, we have
$\lim_{n\to\infty}\|x_{\tau(n)} - u^*\| = 0.$
Furthermore, for n ≥ n_0, it is easy to see that Γ_n ≤ Γ_{τ(n)+1}. As a consequence, we get 0 ≤ Γ_n ≤ Γ_{τ(n)+1} for all sufficiently large n. Thus, $\lim_{n\to\infty}\|x_n - u^*\| = 0$, and therefore {x_n} converges strongly to u*. Consequently, {y_n^i}, {z_n^i}, {w_n}, and {u_n} also converge strongly to u*. This completes the proof. □
Remark 3.
(i) 
Instead of finding the element farthest from the iterate x_n, we construct a sub-level set using a convex combination of finitely many functions and perform a single projection onto this sub-level set. Note that this projection can be computed explicitly, irrespective of the feasible set C, since D_n is a half-space.
(ii) 
We emphasize that the convergence of our Algorithm 3 is proved without using a prior estimate of any Lipschitz constant. Moreover, the cost operators do not even need to satisfy a Lipschitz condition. Note that the previous results of [6,8,33] and the references therein cannot be applied in this situation.
(iii) 
We give an example of a finite family of operators A_i : H → H which do not satisfy the Lipschitz condition.
Example 1.
Let $H = \mathbb{R}^n$ with the norm $\|\bar{x}\| = \big(\sum_{l=1}^{n}|x_l|^2\big)^{1/2}$ for $\bar{x} = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$. Let $C_i = C = \{\bar{x} \in \mathbb{R}^n : \|\bar{x}\| \le 1\}$ and, for i = 1, …, N, let A_i : C → H be defined by
$A_i x = \left(\|x\| + \frac{i}{\|x\| + 1}\right)x, \quad i = 1, \ldots, N.$
It is clear that VI(C, A_i) ≠ ∅, as 0 ∈ VI(C, A_i) for each i = 1, …, N. First, we show that each A_i is pseudomonotone and not Lipschitz continuous. Let u, v ∈ C be such that ⟨A_i u, v − u⟩ ≥ 0. This means that ⟨u, v − u⟩ ≥ 0. Thus,
$\langle A_i v, v - u\rangle = \left(\|v\| + \frac{i}{\|v\| + 1}\right)\langle v, v - u\rangle \ge \left(\|v\| + \frac{i}{\|v\| + 1}\right)\big(\langle v, v - u\rangle - \langle u, v - u\rangle\big) = \left(\|v\| + \frac{i}{\|v\| + 1}\right)\|v - u\|^2 \ge 0.$
Therefore, A_i is pseudomonotone for i = 1, …, N. To see that A_i is not Lipschitz continuous, suppose on the contrary that A_i is L_i-Lipschitz continuous for some L_i > 0, and let u = (L_i, 0, …, 0) and v = (0, 0, …, 0). Then,
$\|A_i u - A_i v\| = \|A_i u\| = \left(\|u\| + \frac{i}{\|u\| + 1}\right)\|u\| = \left(L_i + \frac{i}{L_i + 1}\right)L_i.$
Moreover, ‖A_i u − A_i v‖ ≤ L_i‖u − v‖ implies that
$\left(L_i + \frac{i}{L_i + 1}\right)L_i \le L_i^2.$
Thus, $\frac{i}{L_i + 1} \le 0$, which is a contradiction. Therefore, A_i is not Lipschitz continuous for i = 1, …, N.
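A quick numerical illustration of this unboundedness (not from the paper): evaluating the difference quotient of A_i along the first coordinate axis, using the same formula extended beyond the unit ball, shows it growing without bound, so no global Lipschitz constant can exist.

```python
import numpy as np

# Difference quotients ||A(u) - A(0)|| / ||u|| for A(x) = (||x|| + i/(||x||+1)) x,
# here with i = 1; the quotient equals t + 1/(t+1) and blows up as t grows.
def A(x, i=1.0):
    n = np.linalg.norm(x)
    return (n + i / (n + 1.0)) * x

for t in [1.0, 10.0, 100.0, 1000.0]:
    u = np.array([t, 0.0])
    ratio = np.linalg.norm(A(u) - A(np.zeros(2))) / t
    print(t, ratio)                      # grows roughly like t
```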

4. Numerical Experiments

In this section, we present some numerical experiments to illustrate the performance of the proposed algorithm. We compare our Algorithm 3 with Algorithm 1 of Anh and Phuong [8], Algorithm 2 of Hieu [33], Algorithm 1 of Suantai et al. [40], and other algorithms in the literature. The projections onto the sets C_i are computed explicitly. All codes are written in MATLAB 9.9 (R2020b) and run on an HP PC with an Intel(R) Core i7-9700 CPU at 3.00 GHz and 4.0 GB of RAM.
Example 2.
First, we consider variational inequalities with operators A_i : ℝ^m → ℝ^m, i = 1, 2, …, N, defined by A_i(x) = G_i(x) + q_i, where
$G_i = S_i S_i^T + Q_i + R_i, \quad i = 1, 2, \ldots, N,$
such that, for each i, S_i is an m × m matrix, Q_i is an m × m skew-symmetric matrix, R_i is an m × m diagonal matrix whose diagonal entries are non-negative (so G_i is positive definite), and q_i is a vector in ℝ^m. The feasible set is C_i = C = {x ∈ ℝ^m : ⟨x, a⟩ ≤ c}, where a ∈ ℝ^m is generated randomly and c is a positive real number chosen randomly in [1, m]. It is clear that each A_i is monotone (hence pseudomonotone) and Lipschitz continuous with Lipschitz constant L_i = ‖G_i‖; we set L = max{‖G_i‖ : i = 1, 2, …, N}. The entries of the matrices S_i and Q_i are generated randomly and uniformly in [−m, m], the diagonal entries of R_i are in [1, m], and q_i is the zero vector. In this case, it is easy to see that VI(C, A_i) = {0}. For j = 1, 2, …, M, let T_j : ℝ^m → ℝ^m be defined by $T_j x = \frac{x}{2j}$. Then T_j is a 0-demicontractive mapping, Fix(T_j) = {0}, and I − T_j is demiclosed at 0. Also, for k = 1, 2, …, N̄, let $B_k = \frac{1}{2k}I$ and f = I (I being the identity operator on H). We choose α = 1, γ = 1/8, σ = 0.28, ρ = 0.36, λ = 1/4, c_k = 1/N̄, δ_{n,j} = 1/(M + 1), α_n = 1/(n + 1), and β_n^i = 1/(N + 1) for all n ∈ ℕ. We compare Algorithm 3 with Algorithm 1 of Anh and Phuong [8], Algorithm 2 of Hieu [33], and Algorithm 1 of Suantai et al. [40]. We test the algorithms using the following parameters.
  • Anh and Phuong alg.: $\lambda_{n,i} = \frac{0.99}{2L_i}$, $\alpha_{n,i} = \frac{1}{ni + 1}$, $\gamma_{n,i} = \frac{1}{3}$;
  • Hieu alg.: $\lambda = \frac{1}{1.5L}$;
  • Suantai et al. alg.: ρ = 0.34, μ = 0.06;
and
Case I: m = 5, N = 5, M = 2, N̄ = 1;
Case II: m = 10, N = 10, M = 5, N̄ = 5;
Case III: m = 20, N = 15, M = 10, N̄ = 10;
Case IV: m = 50, N = 20, M = 5, N̄ = 15.
We use ‖x_n − x*‖ < 10⁻⁵ as the stopping criterion for each algorithm and plot the graphs of D_n = ‖x_n − x*‖² against the number of iterations. The computational results are shown in Table 1 and Figure 1. A sketch of how such test problems can be generated is given below.
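For reproducibility, the following sketch generates test data with the structure described above; the random ranges and the use of NumPy (rather than the authors' MATLAB code) are our own assumptions.

```python
import numpy as np

def make_operator(m, rng):
    """Build A(x) = Gx + q with G = S S^T + Q + R as in Example 2."""
    S = rng.uniform(-m, m, (m, m))
    B0 = rng.uniform(-m, m, (m, m))
    Q = B0 - B0.T                        # skew-symmetric part
    R = np.diag(rng.uniform(1, m, m))    # non-negative diagonal entries
    G = S @ S.T + Q + R
    q = np.zeros(m)                      # q_i = 0, so VI(C, A_i) = {0}
    return (lambda x: G @ x + q), np.linalg.norm(G, 2)

m, N, rng = 5, 5, np.random.default_rng(4)
ops, lips = zip(*[make_operator(m, rng) for _ in range(N)])
a = rng.standard_normal(m)
c = rng.uniform(1, m)                    # feasible set C = {x : <x, a> <= c}

def proj_C(x):
    """Projection onto the half-space {x : <x, a> <= c} (explicit formula)."""
    s = np.dot(a, x) - c
    return x - (s / np.dot(a, a)) * a if s > 0 else x
```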
Next, we consider the case N = 1 with a finite family of demicontractive mappings in an infinite-dimensional space. In this example, we compare our algorithm with Algorithm 1 of Anh et al. [41] and Algorithm 2.1 of Hieu [42].
Example 3.
Let H = ℓ₂(ℝ) and define A : H → H by $Ax = \frac{2}{2 + \|x\|}x$. It is easy to see that A is strongly pseudomonotone and Lipschitz continuous with L = 1/2. We define the feasible set C = {x = (x_1, x_2, …) ∈ ℓ₂ : ‖x‖ ≤ 1} and, for j = 1, 2, …, M, T_j : H → H by $T_j x = -\frac{1+j}{j}x$. Then T_j is a demicontractive mapping with $\kappa_j = \frac{1}{1 + 2j}$, Fix(T_j) = {0}, and I − T_j is demiclosed at 0. We choose N̄ = 1, B_k = ½I, f(x) = x/4, σ = 0.02, ρ = 0.036, γ = 1/16, α_n = 1/(n + 1), δ_{n,j} = 1/(M + 1), β_n^i = 1, and c_k = 1. For the Anh et al. alg., we take λ_n = 1/(n + 1) and α_n = 1/(2n + 4); for the Hieu alg., we take λ_n = 1/(n + 1), β_n^i = n/(2n + i), and γ_n^i = 1/(M + 1) (where i = j in this context). We test the algorithms for M = 5, 15, 20, 30 and study the behavior of the sequences generated by the algorithms, using D_n = ‖x_n − x*‖ < 10⁻⁵ as the stopping criterion. The numerical results are shown in Table 2 and Figure 2; a numerical check of the constants κ_j appears below.
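As a sanity check of the demicontractive constants above, the following snippet verifies the inequality ‖T_j u‖² ≤ ‖u‖² + κ_j‖u − T_j u‖² numerically in a truncated ℓ₂ space (ℝ^d as a stand-in); in fact equality holds, since κ_j(1 + (1+j)/j)² = ((1+j)/j)² − 1.

```python
import numpy as np

# Check that T_j x = -((1+j)/j) x is kappa_j-demicontractive with
# kappa_j = 1/(1+2j), i.e., ||T_j u||^2 <= ||u||^2 + kappa_j ||u - T_j u||^2.
d, rng = 50, np.random.default_rng(3)
for j in range(1, 6):
    lam = (1.0 + j) / j
    kappa = 1.0 / (1.0 + 2.0 * j)
    for _ in range(100):
        u = rng.standard_normal(d)
        Tu = -lam * u
        lhs = np.dot(Tu, Tu)
        rhs = np.dot(u, u) + kappa * np.dot(u - Tu, u - Tu)
        assert lhs <= rhs + 1e-9         # equality up to rounding
```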

5. Conclusions

In this paper, we introduced a new efficient parallel extragradient method for solving systems of variational inequalities involving common fixed points of demicontractive mappings in real Hilbert spaces. The algorithm is designed so that its step size is determined by an Armijo line-search technique, and a projection onto a sub-level set is computed to determine the next iterate. A strong convergence result is proved under suitable conditions on the control parameters. Finally, some numerical results were reported to show the performance of the proposed method relative to other methods in the literature.

Author Contributions

Conceptualization, L.O.J.; Formal analysis, L.O.J.; Funding acquisition, M.A.; Investigation, L.O.J.; Methodology, L.O.J.; Project administration, L.O.J., M.A.; Resources, M.A.; Software, L.O.J.; Supervision, M.A.; Validation, L.O.J.; Visualization, L.O.J.; Writing—original draft, L.O.J.; Writing—review and editing, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

L.O. Jolaoso is supported by the Postdoctoral research grant from the Sefako Makgatho Health Sciences University, South Africa.

Acknowledgments

The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Glowinski, R.; Lions, J.L.; Trémolières, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981.
  2. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  3. Marcotte, P. Applications of Khobotov's algorithm to variational and network equilibrium problems. INFOR Inf. Syst. Oper. Res. 1991, 29, 255–270.
  4. Facchinei, F.; Pang, J. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
  5. Reich, S.; Zalas, R. A modular string averaging procedure for solving the common fixed point problem for quasi-nonexpansive mappings in Hilbert space. Numer. Algorithms 2016, 72, 297–323.
  6. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
  7. Nadezhkina, N.; Takahashi, W. Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 2006, 16, 1230–1241.
  8. Anh, P.N.; Phuong, N.X. A parallel extragradient-like projection method for unrelated variational inequalities and fixed point problems. J. Fixed Point Theory Appl. 2018, 20, 74.
  9. Anh, P.N.; Phuong, N.X. Linesearch methods for variational inequalities involving strict pseudocontractions. Optimization 2015, 64, 1841–1854.
  10. Cholamjiak, P.; Suantai, S.; Sunthrayuth, P. An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach spaces. Banach J. Math. Anal. 2020, 14, 20–40.
  11. Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374.
  12. Iiduka, H. A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 2010, 59, 873–885.
  13. Iiduka, H.; Yamada, I. A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 2009, 19, 1881–1893.
  14. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
  15. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710.
  16. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 1976, 12, 747–756. (In Russian)
  17. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
  18. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for variational inequality problems in Euclidean space. Optimization 2012, 61, 1119–1132.
  19. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  20. Ceng, L.C.; Hadjisavvas, N.; Wong, N.C. Strong convergence theorems by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646.
  21. Jolaoso, L.O.; Aphane, M. Weak and strong convergence Bregman extragradient schemes for solving pseudo-monotone and non-Lipschitz variational inequalities. J. Inequal. Appl. 2020, 2020, 195.
  22. Jolaoso, L.O.; Aphane, M. A generalized viscosity inertial projection and contraction method for pseudomonotone variational inequality and fixed point problems. Mathematics 2020, 8, 2039.
  23. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A strong convergence theorem for solving pseudo-monotone variational inequalities using projection methods in a reflexive Banach space. J. Optim. Theory Appl. 2020, 185, 744–766.
  24. He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76.
  25. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
  26. Migorski, S.; Fang, C.; Zeng, S. A new modified subgradient extragradient method for solving variational inequalities. Appl. Anal. 2019, 1–10.
  27. Hieu, D.V.; Thong, D.V. New extragradient-like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 2018, 70, 385–399.
  28. Dong, Q.-L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  29. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 2020, 169, 217–245.
  30. Yamada, I. The hybrid steepest-descent method for variational inequality problems over the intersection of the fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; pp. 473–504.
  31. Hieu, D.V.; Son, D.X.; Anh, P.K.; Muu, L.D. A two-step extragradient-viscosity method for variational inequalities and fixed point problems. Acta Math. Vietnam. 2018, 2, 531–552.
  32. Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi-ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  33. Hieu, D.V. Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Mat. 2017, 28, 677–692.
  34. Rudin, W. Functional Analysis; McGraw-Hill Series in Higher Mathematics; McGraw-Hill: New York, NY, USA, 1991.
  35. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA; Basel, Switzerland, 1984.
  36. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
  37. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  38. Marino, G.; Xu, H.K. Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329, 336–346.
  39. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  40. Suantai, S.; Peeyada, P.; Yambangwai, D.; Cholamjiak, W. A parallel-viscosity-type subgradient extragradient-line method for finding the common solution of variational inequality problems applied to image restoration problems. Mathematics 2020, 8, 248.
  41. Anh, T.V.; Muu, L.D.; Son, D.X. Parallel algorithms for solving a class of variational inequalities over the common fixed points set of a finite family of demicontractive mappings. Numer. Funct. Anal. Optim. 2018, 39, 1477–1494.
  42. Hieu, D.V. An explicit parallel algorithm for variational inequalities. Bull. Malays. Math. Sci. Soc. 2019, 42, 201–221.
Figure 1. Example 2, from top to bottom: Case I, Case II, Case III, and Case IV.
Figure 2. Example 3, from top to bottom: M = 5, 15, 20, 30.
Table 1. Computational results for Example 2.

                          Algorithm 3   Anh–Phuong [8]   Hieu [33]   Suantai et al. [40]
Case I     No. of Iter.       16              34             39              67
           Time (s)         0.0038          0.0034         0.0032          0.0061
Case II    No. of Iter.       15              63            107              98
           Time (s)         0.0020          0.0054         0.0100          0.0097
Case III   No. of Iter.       14              57             93             183
           Time (s)         0.0020          0.0053         0.0093          0.0236
Case IV    No. of Iter.       10              53            114             183
           Time (s)         0.0019          0.0047         0.0136          0.0244
Table 2. Computational results for Example 3.

                        Algorithm 3   Anh et al. [41]   Hieu [42]
M = 5     No. of Iter.        7              14             67
          Time (s)       5.836e-04         0.0014         0.0021
M = 15    No. of Iter.        8              13             25
          Time (s)       6.300e-04         0.0014         0.0018
M = 20    No. of Iter.       11              16             37
          Time (s)         0.0010          0.0033         0.0088
M = 30    No. of Iter.       10              17             43
          Time (s)       7.583e-04         0.0018         0.0043

