Hybrid Algorithms for Variational Inequalities Involving a Strict Pseudocontraction

Sun Young Cho
Department of Liberal Arts, Gyeongnam National University of Science and Technology, Jinju-Si 600-758, Gyeongsangnam-do, Korea
Symmetry 2019, 11(12), 1502; https://doi.org/10.3390/sym11121502
Submission received: 11 November 2019 / Revised: 4 December 2019 / Accepted: 9 December 2019 / Published: 11 December 2019
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract: In a real Hilbert space, we investigate Tseng's extragradient algorithms with hybrid adaptive step sizes for treating a Lipschitzian, pseudomonotone variational inequality problem and a fixed-point problem of a strict pseudocontraction, a pair of problems with a natural symmetry. By imposing appropriate mild assumptions on the parameters, we obtain a norm solution of the problems, which solves a certain hierarchical variational inequality.

1. Introduction

In a real Hilbert space $H$, we write $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$ for its inner product and norm. Let $P_C$ denote the projection operator from $H$ onto a nonempty closed convex set $C \subset H$. We denote by $\mathrm{Fix}(S)$ the set of all fixed points of an operator $S : C \to H$. The notations $\to$, $\mathbb{R}$, and $\rightharpoonup$ stand for strong convergence, the set of real numbers, and weak convergence, respectively. A self-operator $S : C \to C$ is called $\varsigma$-strictly pseudocontractive if there exists $\varsigma \in [0, 1)$ such that
$$\|Su - Sv\|^2 \le \varsigma \|(I - S)u - (I - S)v\|^2 + \|u - v\|^2 \quad \forall u, v \in C.$$
In particular, whenever $\varsigma = 0$, $S$ is called nonexpansive; thus the class of nonexpansive mappings is a proper subclass of the class of strict pseudocontractions. Recall that an operator $S : C \to H$ is called
(i)
Lipschitz with module $L$ if there exists $L > 0$ such that
$$\|Su - Sv\| \le L \|u - v\| \quad \forall u, v \in C;$$
(ii)
monotone if $\langle u - v, Su - Sv \rangle \ge 0$ for all $u, v \in C$;
(iii)
pseudomonotone if
$$\langle v - u, Su \rangle \ge 0 \implies \langle v - u, Sv \rangle \ge 0 \quad \forall u, v \in C;$$
(iv)
strongly monotone with module $\beta$ if there exists $\beta > 0$ such that
$$\langle u - v, Su - Sv \rangle \ge \beta \|u - v\|^2 \quad \forall u, v \in C;$$
(v)
sequentially weakly continuous if $u_n \rightharpoonup u$ implies $S u_n \rightharpoonup S u$ for every sequence $\{u_n\} \subset C$.
It is not hard to see that a pseudomonotone operator need not be monotone. In addition, recall that the operator $S : C \to C$ is $\varsigma$-strictly pseudocontractive with constant $\varsigma \in [0, 1)$ iff the following inequality holds: $2 \langle Su - Sv, u - v \rangle \le 2 \|u - v\|^2 - (1 - \varsigma) \|(I - S)u - (I - S)v\|^2$ for all $u, v \in C$. It is obvious that if $S$ is a $\varsigma$-strict pseudocontraction, then $S$ satisfies the Lipschitz condition $\|Su - Sv\| \le \frac{1 + \varsigma}{1 - \varsigma} \|u - v\|$ for all $u, v \in C$. For each point $u \in H$, there exists a unique nearest point in $C$, denoted by $P_C u$, such that $\|u - P_C u\| \le \|u - v\|$ for all $v \in C$. The operator $P_C$ is called the metric projection of $H$ onto $C$.
Consider an operator $A : H \to H$. The classical monotone variational inequality problem (VIP) consists of finding $u^* \in C$ such that $\langle v - u^*, A u^* \rangle \ge 0$ for all $v \in C$. The solution set of this VIP is denoted by $\mathrm{VI}(C, A)$. Korpelevich [1] first designed an extragradient method with two projections,
$$v_n = P_C(u_n - \lambda A u_n), \qquad u_{n+1} = P_C(u_n - \lambda A v_n),$$
with $\lambda \in (0, \frac{1}{L})$, which has remained one of the most popular methods for dealing with the VIP up to now. If $\mathrm{VI}(C, A) \ne \emptyset$, it was shown in [1] that $\{u_n\}$ converges weakly to a vector in $\mathrm{VI}(C, A)$. Reduced-gradient-type iterative schemes are under the spotlight of applied mathematicians and engineers in the nonlinear analysis and optimization communities. Based on this approach, a number of authors have conducted various investigations of efficient iterative algorithms; for examples, see [2,3,4,5,6,7,8,9,10,11].
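To make the two-projection scheme concrete, here is a minimal numerical sketch for a one-dimensional VIP. Everything in it — the feasible interval, the operator $A(u) = 2u$, and the step size — is an illustrative assumption of ours, not data from [1]:

```python
def proj_interval(x, lo=-1.0, hi=1.0):
    # Metric projection P_C onto the interval C = [lo, hi].
    return min(max(x, lo), hi)

def korpelevich(A, u0, lam, n_iters=200):
    # Korpelevich extragradient method: two projections per iteration,
    # with a fixed step size lam in (0, 1/L) for an L-Lipschitz operator A.
    u = u0
    for _ in range(n_iters):
        v = proj_interval(u - lam * A(u))   # predictor (extragradient) step
        u = proj_interval(u - lam * A(v))   # corrector step
    return u

# Illustrative monotone operator A(u) = 2u (Lipschitz with L = 2), so any
# lam in (0, 1/2) is admissible; the unique solution is u* = 0.
print(korpelevich(lambda u: 2.0 * u, u0=0.9, lam=0.25))
```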
Let both operators $A$ and $B$ be inverse-strongly monotone from $C$ to $H$ and let the self-mapping $S : C \to C$ be $\varsigma$-strictly pseudocontractive. In 2010, via the extragradient approach, Yao et al. [12] designed an efficient algorithm for finding a common element of the relevant solution sets:
$$w_n = P_C(u_n - \mu B u_n), \quad v_n = (1 - \beta_n) P_C(w_n - \lambda A w_n) + \beta_n f(u_n), \quad u_{n+1} = \gamma_n P_C(w_n - \lambda A w_n) + \delta_n S v_n + \sigma_n u_n, \quad n \ge 0,$$
where $f : C \to C$ is a $\delta$-contractive map with $\delta \in [0, \frac{1}{2})$, and $\{\beta_n\}, \{\sigma_n\}, \{\gamma_n\}, \{\delta_n\}$ are four sequences in $[0, 1]$ such that $\sigma_n + \gamma_n + \delta_n = 1$, $(\gamma_n + \delta_n)\varsigma \le \gamma_n < (1 - 2\delta)\delta_n$, $\sum_{n=0}^{\infty} \beta_n = \infty$, $\liminf_{n \to \infty} \delta_n > 0$, $\liminf_{n \to \infty} \sigma_n > 0$, and $\lim_{n \to \infty} (\frac{\gamma_{n+1}}{1 - \sigma_{n+1}} - \frac{\gamma_n}{1 - \sigma_n}) = \lim_{n \to \infty} \beta_n = 0$. They established the strong convergence of the generated sequence in $H$.
In the extragradient approach, one has to compute two projections onto $C$ per iteration. The projection onto the convex set $C$ amounts to a minimum-distance problem, so when $C$ is a general convex set, the computation of two projections may be prohibitively time-consuming. Building on Korpelevich's extragradient approach, Censor et al. [13] suggested a subgradient extragradient algorithm in which the second projection onto $C$ is replaced by a projection onto a half-space. Recently, numerous reduced-gradient-type methods have been intensively investigated in both finite and infinite dimensional spaces; see, for example, [14,15,16,17,18,19,20,21,22,23]. Exploiting inertial effects, Thong and Hieu [24] proposed an inertial subgradient extragradient method and proved the weak convergence of their algorithms. In addition, the authors of [25] investigated subgradient-based fast algorithms with inertial effects.
Inspired by the research works [12,24,25,26,27] above, we are concerned with Tseng-type extragradient algorithms with hybrid adaptive step sizes, which are more advantageous and more subtle than the iterative algorithms above because they treat, simultaneously, a VIP with a Lipschitzian, pseudomonotone operator and a fixed-point problem of a strict pseudocontraction in a Hilbert space. By imposing appropriate mild assumptions on the parameters, one obtains a norm solution of the problems, which solves a certain hierarchical variational inequality. The outline of this article is as follows. In Section 2, a toolbox of definitions and preliminary results is provided. In Section 3, we propose and investigate the iterative algorithm and its convergence criteria. In Section 4, the convergence criteria are supported by an illustrating example.

2. Preliminaries

Lemma 1
([28]). Let $S : C \to C$ be a $\varsigma$-strict pseudocontraction. If $\{u_n\}$ is a sequence in $C$ such that $(I - S)u_n \to 0$, where $I$ is the identity operator on $H$, and $u_n \rightharpoonup u \in C$, then $u = Su$. Further, $S$ is $\frac{1 + \varsigma}{1 - \varsigma}$-Lipschitz continuous.
Lemma 2
([29]). Let $S : C \to C$ be a $\varsigma$-strict pseudocontraction, and let $\gamma$ and $\beta$ be real numbers in $[0, +\infty)$. Then
$$\|\gamma (u - v) + \beta (S u - S v)\| \le (\beta + \gamma) \|u - v\| \quad \forall v, u \in C,$$
provided that $\varsigma (\beta + \gamma) \le \gamma$.
Lemma 3
([30]). Let $f$ be a pseudomonotone mapping from $C$ into $H$ which is continuous on finite-dimensional subspaces. Then $x \in C$ is a solution of $\langle u - x, f(x) \rangle \ge 0$ for all $u \in C$ iff $\langle u - x, f(u) \rangle \ge 0$ for all $u \in C$.
Lemma 4
([31]). Let $\{a_n\}$ be a sequence in $[0, +\infty)$ satisfying the condition $a_{n+1} \le (1 - s_n) a_n + s_n b_n$ for all $n \ge 1$, where $\{s_n\} \subset (0, 1)$ and $\{b_n\} \subset (-\infty, \infty)$ are such that (a) either $\sum_{n=1}^{\infty} |s_n b_n| < \infty$ or $\limsup_{n \to \infty} b_n \le 0$, and (b) $\sum_{n=1}^{\infty} s_n = \infty$. Then $a_n \to 0$ as $n \to \infty$.
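Lemma 4 is the standard tool used to close the strong-convergence argument in Section 3. The following toy computation, with the hypothetical choices $s_n = b_n = \frac{1}{n+1}$ (so that $\sum s_n = \infty$ and $\limsup b_n \le 0$), illustrates numerically how the recursion forces $a_n \to 0$:

```python
# A quick numerical check of Lemma 4 with the hypothetical choices
# s_n = 1/(n+1) (so sum s_n diverges) and b_n = 1/(n+1) (so limsup b_n <= 0):
# the recursion a_{n+1} = (1 - s_n) a_n + s_n b_n drives a_n toward 0.
a = 5.0
for n in range(1, 100001):
    s_n = 1.0 / (n + 1)
    b_n = 1.0 / (n + 1)
    a = (1.0 - s_n) * a + s_n * b_n
print(a)  # on the order of 1e-4; a_n -> 0 as n -> infinity
```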

3. Results

From now on, we always assume that the feasibility set $\Omega = \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$ is nonempty. Throughout this section, $\{\epsilon_n\} \subset (0, 1]$ and $\{\beta_n\}, \{\sigma_n\}, \{\gamma_n\}, \{\delta_n\} \subset (0, 1)$ are such that $\sigma_n + \gamma_n + \delta_n = 1$; $\lim_{n \to \infty} \frac{\epsilon_n}{\beta_n} = \lim_{n \to \infty} \beta_n = 0$; $\liminf_{n \to \infty} \delta_n > 0$; $\liminf_{n \to \infty} \sigma_n > 0$; $\liminf_{n \to \infty} ((1 - 2\delta)\delta_n - \gamma_n) > 0$; $\limsup_{n \to \infty} \sigma_n < 1$; $\sum_{n=1}^{\infty} \beta_n = \infty$; and $(\gamma_n + \delta_n)\zeta \le \gamma_n < (1 - 2\delta)\delta_n$. Moreover, the pseudomonotone operator $A$ is Lipschitz continuous with module $L$ and sequentially weakly continuous on $H$; $T$ is a $\zeta$-strictly pseudocontractive self-operator on $H$; and $f : H \to C$ is a $\delta$-contraction with $\delta \in [0, \frac{1}{2})$.
Algorithm 1.
Initial Step: Fix two initial points $x_0, x_1 \in H$ and set $\alpha > 0$, $\tau_1 > 0$, $\mu \in (0, 1)$.
Iteration Steps: Calculate the iterate $x_{n+1}$ as follows.
Step 1. Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\alpha_n$ such that $0 \le \alpha_n \le \bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\{\alpha, \frac{\epsilon_n}{\|x_n - x_{n-1}\|}\} & \text{if } x_n \ne x_{n-1}, \\ \alpha & \text{otherwise}. \end{cases} \quad (1)$$
Step 2. Let $w_n = x_n + \alpha_n (x_n - x_{n-1})$ and calculate $y_n = P_C(w_n - \tau_n A w_n)$.
Step 3. Calculate $x_{n+1} = \sigma_n x_n + \gamma_n (y_n - \tau_n (A y_n - A w_n)) + \delta_n T z_n$, where $z_n = \beta_n f(x_n) + (1 - \beta_n)(y_n - \tau_n (A y_n - A w_n))$, and update
$$\tau_{n+1} = \begin{cases} \min\{\frac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|}, \tau_n\} & \text{if } A w_n - A y_n \ne 0, \\ \tau_n & \text{otherwise}. \end{cases} \quad (2)$$
Step 4. Put $n := n + 1$ and return to Step 1.
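For readers who wish to experiment, the following is a minimal Python sketch of Algorithm 1 on $H = \mathbb{R}$. The operators `A`, `T`, `f`, the projection `proj_C`, and the default parameter choices are placeholders supplied by the caller; they are illustrative assumptions, required only to satisfy the conditions listed above:

```python
def tseng_hybrid(A, T, f, proj_C, x0, x1, alpha=0.5, tau=0.5, mu=0.5,
                 eps=lambda n: 1.0 / n**2, beta=lambda n: 1.0 / (n + 1),
                 sigma=1/3, gamma=1/6, delta_n=1/2, n_iters=100):
    # A sketch of the Tseng-type hybrid extragradient iteration on H = R.
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        # Step 1: take alpha_n at its largest admissible value (1).
        diff = abs(x - x_prev)
        alpha_n = min(alpha, eps(n) / diff) if diff > 0 else alpha
        # Step 2: inertial point, then projected forward step.
        w = x + alpha_n * (x - x_prev)
        y = proj_C(w - tau * A(w))
        # Step 3: Tseng correction u_n = y_n - tau_n (A y_n - A w_n),
        # viscosity point z_n, and the averaged update of x_{n+1}.
        u = y - tau * (A(y) - A(w))
        z = beta(n) * f(x) + (1.0 - beta(n)) * u
        x_prev, x = x, sigma * x + gamma * u + delta_n * T(z)
        # Adaptive step-size update (2).
        denom = abs(A(w) - A(y))
        if denom > 0:
            tau = min(mu * abs(w - y) / denom, tau)
    return x
```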
Remark 1.
We show that $\lim_{n \to \infty} \frac{\alpha_n \|x_n - x_{n-1}\|}{\beta_n} = 0$. It follows from (1) that $\alpha_n \|x_n - x_{n-1}\| \le \epsilon_n$ for all $n \ge 1$. Since $\lim_{n \to \infty} \frac{\epsilon_n}{\beta_n} = 0$, one sees that $0 \le \limsup_{n \to \infty} \frac{\alpha_n \|x_n - x_{n-1}\|}{\beta_n} \le \lim_{n \to \infty} \frac{\epsilon_n}{\beta_n} = 0$.
Lemma 5.
Let $\{\tau_n\}$ be generated by (2). Then $\{\tau_n\}$ is a nonincreasing sequence with $\tau_n \ge \tilde{\tau} := \min\{\tau_1, \frac{\mu}{L}\}$ for all $n \ge 1$, and $\lim_{n \to \infty} \tau_n \ge \tilde{\tau}$.
Proof. 
By (2), one concludes that $\tau_{n+1} \le \tau_n$ for all $n \ge 1$. Since $A$ is $L$-Lipschitz, one also has
$$\|A w_n - A y_n\| \le L \|w_n - y_n\| \implies \tau_{n+1} \ge \min\Big\{\tau_n, \frac{\mu}{L}\Big\}.$$
Note that $\tau_1 \ge \tilde{\tau} := \min\{\tau_1, \frac{\mu}{L}\}$. So, by induction, $\tau_n \ge \tilde{\tau}$ for all $n \ge 1$. □
Lemma 6.
Let $\{y_n\}$, $\{w_n\}$, and $\{z_n\}$ be the three iterative vector sequences defined by Algorithm 1. Then, for all $p \in \Omega$,
$$\|z_n - p\|^2 \le \beta_n \delta \|x_n - p\|^2 + (1 - \beta_n)\|w_n - p\|^2 - (1 - \beta_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 + 2 \beta_n \langle (f - I)p, z_n - p \rangle, \quad (3)$$
where $u_n := y_n - \tau_n (A y_n - A w_n)$.
Proof. 
Fixing $p \in \Omega = \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$ arbitrarily, one asserts $\langle A p, y_n - p \rangle \ge 0$ and $T p = p$. This yields
$$\begin{aligned} \|u_n - p\|^2 &= \|y_n - p\|^2 + \tau_n^2 \|A y_n - A w_n\|^2 - 2 \tau_n \langle y_n - p, A y_n - A w_n \rangle \\ &= \|w_n - p\|^2 + \|w_n - y_n\|^2 + 2 \langle y_n - w_n, w_n - p \rangle + \tau_n^2 \|A y_n - A w_n\|^2 - 2 \tau_n \langle y_n - p, A y_n - A w_n \rangle \\ &= \|w_n - p\|^2 - \|w_n - y_n\|^2 + 2 \langle y_n - w_n, y_n - p \rangle + \tau_n^2 \|A y_n - A w_n\|^2 - 2 \tau_n \langle y_n - p, A y_n - A w_n \rangle. \end{aligned}$$
Thanks to $y_n = P_C(w_n - \tau_n A w_n)$, we have
$$\langle y_n - w_n, y_n - p \rangle \le -\tau_n \langle A w_n, y_n - p \rangle.$$
This ensures that
$$\begin{aligned} \|u_n - p\|^2 &\le \|w_n - p\|^2 - \|w_n - y_n\|^2 - 2 \tau_n \langle A w_n, y_n - p \rangle + \tau_n^2 \|A y_n - A w_n\|^2 - 2 \tau_n \langle y_n - p, A y_n - A w_n \rangle \\ &= \|w_n - p\|^2 - \|w_n - y_n\|^2 + \tau_n^2 \|A y_n - A w_n\|^2 - 2 \tau_n \langle A y_n, y_n - p \rangle. \end{aligned}$$
By using the fact that $\langle A p, y_n - p \rangle \ge 0$ and the pseudomonotonicity of $A$, one obtains $\langle A y_n, y_n - p \rangle \ge 0$. Hence,
$$\|u_n - p\|^2 \le \|w_n - p\|^2 + \tau_n^2 \|A y_n - A w_n\|^2 - \|w_n - y_n\|^2. \quad (4)$$
Moreover, from (2), it follows that
$$\tau_{n+1} \|A w_n - A y_n\| \le \mu \|w_n - y_n\| \quad \forall n \ge 1. \quad (5)$$
Combining (4) and (5), we obtain
$$\|u_n - p\|^2 \le \|w_n - p\|^2 - \Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2. \quad (6)$$
On the other hand,
$$z_n - p = (1 - \beta_n)(y_n - \tau_n (A y_n - A w_n)) - p + \beta_n f(x_n) = (1 - \beta_n)(u_n - p) + \beta_n (f(x_n) - f(p)) + \beta_n (f - I)p.$$
Using the convexity of the function $t \mapsto t^2$, we get
$$\begin{aligned} \|z_n - p\|^2 &\le \|(1 - \beta_n)(u_n - p) + \beta_n (f(x_n) - f(p))\|^2 + 2 \beta_n \langle (f - I)p, z_n - p \rangle \\ &\le [(1 - \beta_n)\|u_n - p\| + \beta_n \delta \|x_n - p\|]^2 + 2 \beta_n \langle (f - I)p, z_n - p \rangle \\ &\le (1 - \beta_n)\|u_n - p\|^2 + \beta_n \delta \|x_n - p\|^2 + 2 \beta_n \langle (f - I)p, z_n - p \rangle \\ &\le (1 - \beta_n)\Big[\|w_n - p\|^2 - \Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2\Big] + \beta_n \delta \|x_n - p\|^2 + 2 \beta_n \langle (f - I)p, z_n - p \rangle. \end{aligned}$$
This completes the proof. □
Lemma 7.
Let $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ be the three bounded iterative sequences defined by Algorithm 1, and suppose that there exists a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that $w_{n_k} \rightharpoonup z \in H$. If $x_n - x_{n+1} \to 0$, $w_n - y_n \to 0$, and $w_n - z_n \to 0$, then $z \in \Omega$.
Proof. 
Algorithm 1 shows $\|x_n - w_n\| = \alpha_n \|x_n - x_{n-1}\|$. Utilizing Remark 1, we have $\lim_{n \to \infty} \|w_n - x_n\| = 0$. This, together with the assumption $w_n - z_n \to 0$, yields
$$\|z_n - x_n\| \le \|z_n - w_n\| + \|w_n - x_n\| \to 0 \quad (n \to \infty).$$
Since $\{x_n\}$ is bounded and $\alpha_n (x_n - x_{n-1}) \to 0$, one asserts that $\{w_n\}$ is bounded. Note that (4) yields
$$\|u_n - p\|^2 \le \|w_n - p\|^2 + \tau_1^2 L^2 \|y_n - w_n\|^2.$$
Hence, $\{u_n\}$ is bounded, where $u_n := y_n - \tau_n (A y_n - A w_n)$. By Algorithm 1, we also get
$$z_n - x_n = \beta_n f(x_n) + u_n - x_n - \beta_n u_n.$$
So, it follows from the boundedness of $\{x_n\}$ and $\{u_n\}$ that
$$\|u_n - x_n\| = \|z_n - x_n - \beta_n f(x_n) + \beta_n u_n\| \le \|z_n - x_n\| + \beta_n (\|f(x_n)\| + \|u_n\|),$$
which indicates that $\|u_n - x_n\|$ tends to 0 as $n$ tends to infinity. Using Algorithm 1 again, we get
$$x_{n+1} - z_n = \sigma_n (x_n - z_n) + \gamma_n (u_n - z_n) + \delta_n (T z_n - z_n) = \sigma_n (x_n - z_n) + \gamma_n (u_n - x_n + x_n - z_n) + \delta_n (T z_n - z_n) = (1 - \delta_n)(x_n - z_n) + \gamma_n (u_n - x_n) + \delta_n (T z_n - z_n),$$
which immediately leads to
$$\delta_n \|z_n - T z_n\| = \|x_{n+1} - z_n - (1 - \delta_n)(x_n - z_n) - \gamma_n (u_n - x_n)\| = \|x_{n+1} - x_n + \delta_n (x_n - z_n) - \gamma_n (u_n - x_n)\| \le \|x_{n+1} - x_n\| + \|x_n - z_n\| + \|u_n - x_n\|.$$
Since $\|x_n - x_{n+1}\|$, $\|z_n - x_n\|$, and $\|u_n - x_n\|$ tend to 0 as $n$ tends to infinity and $\liminf_{n \to \infty} \delta_n > 0$, we obtain
$$\lim_{n \to \infty} \|z_n - T z_n\| = 0, \quad (7)$$
which, together with Lemma 1, ensures that
$$\|x_n - T x_n\| \le \|x_n - z_n\| + \|z_n - T z_n\| + \|T z_n - T x_n\| \le \|x_n - z_n\| + \|z_n - T z_n\| + \frac{1 + \zeta}{1 - \zeta}\|z_n - x_n\| = \frac{2}{1 - \zeta}\|x_n - z_n\| + \|z_n - T z_n\| \to 0 \quad (n \to \infty).$$
From the characterization of the projection $y_n = P_C(w_n - \tau_n A w_n)$, we have
$$\tau_n \langle A w_n, x - w_n \rangle \ge \langle w_n - y_n, x - y_n \rangle - \tau_n \langle A w_n, w_n - y_n \rangle \quad \forall x \in C. \quad (8)$$
Using the boundedness of $\{w_{n_k}\}$ and the Lipschitzian property of $A$, we get the boundedness of $\{A w_{n_k}\}$. Note that $\tau_n \ge \tilde{\tau} := \min\{\tau_1, \frac{\mu}{L}\}$ and that $\{y_{n_k}\}$ is bounded. Inequality (8) then yields $\liminf_{k \to \infty} \langle A w_{n_k}, x - w_{n_k} \rangle \ge 0$ for all $x \in C$. Borrowing the facts that $\lim_{n \to \infty} \|w_n - y_n\| = 0$ and that $A$ is Lipschitz continuous with module $L$, one concludes that $\lim_{n \to \infty} \|A y_n - A w_n\| = 0$, which, combined with (8), sends us to the situation $\liminf_{k \to \infty} \langle A y_{n_k}, x - y_{n_k} \rangle \ge 0$ for all $x \in C$.
One now focuses on $z \in \mathrm{Fix}(T)$. Thanks to the weak convergence $w_{n_k} \rightharpoonup z$ and $\|w_n - x_n\| \to 0$, one reaches $x_{n_k} \rightharpoonup z$. Since $\|x_n - T x_n\| \to 0$ by the estimate above, an application of Lemma 1 yields
$$z \in \mathrm{Fix}(T). \quad (9)$$
Next, let $\{\varepsilon_k\}$ be a decreasing real sequence in $(0, 1)$ converging to 0, and for each $k$ let $m_k$ be the smallest positive integer such that
$$\varepsilon_k + \langle x - y_{n_j}, A y_{n_j} \rangle \ge 0 \quad \forall j \ge m_k. \quad (10)$$
Note that the sequence $\{m_k\}$ is increasing. Assuming $A y_{m_k} \ne 0$ and setting $h_{m_k} := \frac{A y_{m_k}}{\|A y_{m_k}\|^2}$, we have $\langle A y_{m_k}, h_{m_k} \rangle = 1$. This sends us to $\langle x + \varepsilon_k h_{m_k} - y_{m_k}, A y_{m_k} \rangle \ge 0$, which, by the pseudomonotonicity of $A$, guarantees
$$\langle x + \varepsilon_k h_{m_k} - y_{m_k}, A(x + \varepsilon_k h_{m_k}) \rangle \ge 0.$$
This sends us to
$$\langle A x, x - y_{m_k} \rangle \ge \langle A x - A(x + \varepsilon_k h_{m_k}), x + \varepsilon_k h_{m_k} - y_{m_k} \rangle - \varepsilon_k \langle A x, h_{m_k} \rangle. \quad (11)$$
On the other hand, one has $\lim_{n \to \infty} \|w_n - y_n\| = 0$ and $w_{n_k} \rightharpoonup z$ as $k \to \infty$. This infers $y_{n_k} \rightharpoonup z$, which lies in $C$, as $k$ goes to infinity. Since $A$ is sequentially weakly continuous, $A y_{n_k} \rightharpoonup A z$ as $k$ goes to infinity. We may assume that $A z \ne 0$ (otherwise, $z$ is a solution and there is nothing to prove). By the sequential weak lower semicontinuity of the norm, one obtains $0 < \|A z\| \le \liminf_{k \to \infty} \|A y_{n_k}\|$. Since $\{y_{m_k}\} \subset \{y_{n_k}\}$ and $\varepsilon_k \to 0$, this further concludes that
$$0 \le \limsup_{k \to \infty} \|\varepsilon_k h_{m_k}\| = \limsup_{k \to \infty} \frac{\varepsilon_k}{\|A y_{m_k}\|} \le \frac{\limsup_{k \to \infty} \varepsilon_k}{\liminf_{k \to \infty} \|A y_{n_k}\|} = 0,$$
which reaches that $\varepsilon_k h_{m_k} \to 0$ as $k \to \infty$.
Finally, one focuses on the desired point $z$. Letting $k \to \infty$ in (11) and using the boundedness of the sequences $\{h_{m_k}\}$ and $\{y_{m_k}\}$, the Lipschitz continuity of $A$, and the fact that $\varepsilon_k h_{m_k} \to 0$ as $k \to \infty$, we obtain $\langle x - z, A x \rangle = \lim_{k \to \infty} \langle x - y_{m_k}, A x \rangle \ge 0$ for all $x \in C$. Lemma 3 asserts that the desired point $z$ is a solution of the VIP, i.e., $z \in \mathrm{VI}(C, A)$. Therefore, we have from (9) that $z \in \Omega := \mathrm{VI}(C, A) \cap \mathrm{Fix}(T)$. The proof is complete. □
Theorem 1.
Let $\{x_n\}$ be the vector sequence constructed by Algorithm 1 and let $A(H)$ be bounded. Suppose that $x^* \in \Omega$ is the unique solution of the hierarchical variational inequality $\langle x^* - f x^*, x^* - x \rangle \le 0$ for all $x \in \Omega$. Then
$$x_n \to x^* \in \Omega \iff x_n - x_{n+1} \to 0 \ \text{and} \ \sup_{n \ge 1} \|(I - f)x_n\| < \infty.$$
Proof. 
Noticing the conditions $\liminf_{n \to \infty} \sigma_n > 0$ and $\limsup_{n \to \infty} \sigma_n < 1$, one may assume that $\{\sigma_n\} \subset [a, b]$, a subset of $(0, 1)$. Since $P_\Omega f$ is a contraction, the Banach fixed point theorem yields a unique point $x^* \in H$ such that $x^* = P_\Omega f(x^*)$. Hence, there is a unique solution $x^* \in \Omega = \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$ of the HVI problem
$$\langle x^* - x, x^* - f x^* \rangle \le 0 \quad \forall x \in \Omega. \quad (12)$$
Necessity. If $\lim_{n \to \infty} \|x_n - x^*\| = 0$, then
$$\sup_{n \ge 1} \|x_n - f(x_n)\| \le \sup_{n \ge 1} (\|x^* - x_n\| + \|f(x^*) - f(x_n)\| + \|f(x^*) - x^*\|) < \infty$$
and
$$\|x_n - x_{n+1}\| \le \|x_{n+1} - x^*\| + \|x^* - x_n\| \to 0.$$
So, $x_{n+1} - x_n \to 0$ as $n \to \infty$. In order to prove the sufficiency of the theorem, one supposes $x_n - x_{n+1} \to 0$ and $\sup_{n \ge 1} \|(I - f)x_n\| < \infty$. We divide the proof of the sufficiency into several steps.
Step 1. One proves the boundedness of $\{x_n\}$. In fact, taking an arbitrary $p \in \Omega$, one has $p = T p$ and, by (6),
$$\|u_n - p\|^2 + \Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 \le \|w_n - p\|^2. \quad (13)$$
Since $\lim_{n \to \infty} (1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}) = 1 - \mu^2 > 0$, there exists an integer $n_0 \ge 1$ with
$$1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2} > 0 \quad \forall n \ge n_0.$$
Using (13), we have
$$\|u_n - p\| \le \|w_n - p\| \quad \forall n \ge n_0. \quad (14)$$
So,
$$\|w_n - p\| \le \alpha_n \|x_n - x_{n-1}\| + \|x_n - p\| = \beta_n \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| + \|x_n - p\|. \quad (15)$$
From Remark 1, we have $\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. This ensures that there exists $M_1 > 0$ such that
$$\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le M_1 \quad \forall n \ge 1. \quad (16)$$
Combining (14), (15), and (16), we have
$$\|u_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n M_1 \quad \forall n \ge n_0. \quad (17)$$
Note that $A(H)$ is bounded, $y_n = P_C(w_n - \tau_n A w_n)$, $f(H) \subset C$, and $u_n = y_n - \tau_n (A y_n - A w_n)$. Hence, we know that $\{A y_n\}$ and $\{A w_n\}$ are both bounded. From $\sup_{n \ge 1} \|(I - f)x_n\| < \infty$ and $\alpha_n \|x_n - x_{n-1}\| \to 0$, we conclude that
$$\|u_n - f(x_n)\| \le \|P_C(w_n - \tau_n A w_n) - P_C f(x_n)\| + \tau_n \|A y_n - A w_n\| \le \alpha_n \|x_n - x_{n-1}\| + \|x_n - f(x_n)\| + \tau_1 (\|A w_n\| + \|A y_n - A w_n\|) \le M_0,$$
where
$$M_0 := \sup_{n \ge 1} \{\alpha_n \|x_n - x_{n-1}\| + \|x_n - f(x_n)\| + \tau_1 (\|A w_n\| + \|A y_n - A w_n\|)\} < \infty.$$
By using (17), one concludes
$$\|z_n - p\| \le (1 - \beta_n)\|u_n - p\| + \beta_n \|f p - p\| + \beta_n \delta \|x_n - p\| \le (1 - \beta_n)(\|x_n - p\| + \beta_n M_1) + \beta_n \|f p - p\| + \beta_n \delta \|x_n - p\| \le (1 - \beta_n (1 - \delta))\|x_n - p\| + \beta_n (\|f p - p\| + M_1),$$
which, together with Lemma 2 and $(\gamma_n + \delta_n)\zeta \le \gamma_n$, yields, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\| &\le \sigma_n \|x_n - p\| + (1 - \sigma_n)\Big\|\frac{1}{1 - \sigma_n}[\gamma_n (z_n - p) + \delta_n (T z_n - p)]\Big\| + \gamma_n \|u_n - z_n\| \\ &\le \sigma_n \|x_n - p\| + (1 - \sigma_n)\|z_n - p\| + \gamma_n \beta_n \|u_n - f(x_n)\| \\ &\le \sigma_n \|x_n - p\| + (1 - \sigma_n)[(1 - \beta_n (1 - \delta))\|x_n - p\| + \beta_n (M_0 + M_1 + \|(f - I)p\|)] \\ &= [1 - \beta_n (1 - \sigma_n)(1 - \delta)]\|x_n - p\| + \beta_n (1 - \sigma_n)(1 - \delta) \cdot \frac{M_0 + M_1 + \|(f - I)p\|}{1 - \delta} \\ &\le \max\Big\{\frac{M_0 + M_1 + \|(f - I)p\|}{1 - \delta}, \|x_n - p\|\Big\}. \end{aligned}$$
By induction, we obtain
$$\|x_n - p\| \le \max\Big\{\frac{M_0 + M_1 + \|(I - f)p\|}{1 - \delta}, \|x_{n_0} - p\|\Big\} \quad \forall n \ge n_0.$$
This indicates that the vector sequences $\{x_n\}$, $\{u_n\}$, $\{w_n\}$, $\{y_n\}$, and $\{z_n\}$ are all bounded.
Step 2. We claim that
$$(1 - \beta_n)(1 - \sigma_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 \le \|x_n - p\|^2 + \beta_n M_4 - \|x_{n+1} - p\|^2 \quad \forall n \ge n_0,$$
for some $M_4 > 0$. Indeed, using Lemma 2, Lemma 6, and the convexity of $\|\cdot\|^2$, we have from $(\gamma_n + \delta_n)\zeta \le \gamma_n$ that, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\sigma_n (x_n - p) + \gamma_n (z_n - p) + \delta_n (T z_n - p) + \gamma_n (u_n - z_n)\|^2 \\ &\le \|\sigma_n (x_n - p) + \gamma_n (z_n - p) + \delta_n (T z_n - p)\|^2 + 2 \gamma_n \beta_n \langle u_n - f(x_n), x_{n+1} - p \rangle \\ &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\Big\|\frac{1}{1 - \sigma_n}[\gamma_n (z_n - p) + \delta_n (T z_n - p)]\Big\|^2 + 2 (1 - \sigma_n) \beta_n \|u_n - f(x_n)\| \|x_{n+1} - p\| \\ &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\|z_n - p\|^2 + 2 (1 - \sigma_n) \beta_n \|u_n - f(x_n)\| \|x_{n+1} - p\| \\ &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\Big\{\beta_n \delta \|x_n - p\|^2 + (1 - \beta_n)\|w_n - p\|^2 - (1 - \beta_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 + \beta_n M_2\Big\}, \quad (18) \end{aligned}$$
where $\sup_{n \ge 1} 2 (\|(f - I)p\| \|z_n - p\| + \|u_n - f(x_n)\| \|x_{n+1} - p\|) \le M_2$ for some $M_2 > 0$. In addition, from (17) we get
$$\|w_n - p\|^2 \le \|x_n - p\|^2 + \beta_n (2 M_1 \|x_n - p\| + \beta_n M_1^2) \le \|x_n - p\|^2 + \beta_n M_3, \quad (19)$$
where $\sup_{n \ge 1} (\beta_n M_1^2 + 2 M_1 \|x_n - p\|) \le M_3$ for some $M_3 > 0$. Substituting (19) into (18), we obtain, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\Big\{\delta \beta_n \|x_n - p\|^2 + (1 - \beta_n)[\|x_n - p\|^2 + \beta_n M_3] - (1 - \beta_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|y_n - w_n\|^2 + \beta_n M_2\Big\} \\ &= [1 - (1 - \sigma_n)(1 - \delta)\beta_n]\|x_n - p\|^2 + (1 - \sigma_n)\beta_n (1 - \beta_n) M_3 - (1 - \beta_n)(1 - \sigma_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|y_n - w_n\|^2 + (1 - \sigma_n)\beta_n M_2 \\ &\le \|x_n - p\|^2 - (1 - \beta_n)(1 - \sigma_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|y_n - w_n\|^2 + \beta_n M_4, \quad (20) \end{aligned}$$
where $M_4 := M_2 + M_3$. This immediately implies that, for all $n \ge n_0$,
$$(1 - \beta_n)(1 - \sigma_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|y_n - w_n\|^2 \le \|x_n - p\|^2 + \beta_n M_4 - \|x_{n+1} - p\|^2. \quad (21)$$
Step 3. One proves that, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\|^2 \le{}& \Big[1 - \frac{(1 - 2\delta)\delta_n - \gamma_n}{1 - \beta_n \gamma_n}\beta_n\Big]\|x_n - p\|^2 + \frac{[(1 - 2\delta)\delta_n - \gamma_n]\beta_n}{1 - \beta_n \gamma_n} \cdot \Big\{\frac{2 \gamma_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - p\| \|z_n - x_{n+1}\| \\ &+ \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - p\| \|z_n - x_n\| + \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p \rangle + \frac{\gamma_n + \delta_n}{(1 - 2\delta)\delta_n - \gamma_n} \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \cdot 3M\Big\}, \quad (23) \end{aligned}$$
where $M > 0$ is an appropriate constant with $M \ge \sup_{n \ge 1}\{\alpha_n \|x_n - x_{n-1}\|, \|x_n - p\|\}$. Indeed, we first note that
$$\|w_n - p\|^2 \le (\|x_n - p\| + \alpha_n \|x_n - x_{n-1}\|)^2 = \|x_n - p\|^2 + \alpha_n \|x_n - x_{n-1}\| (\alpha_n \|x_n - x_{n-1}\| + 2 \|x_n - p\|) \le \|x_n - p\|^2 + \alpha_n \|x_n - x_{n-1}\| \cdot 3M. \quad (22)$$
From the convexity of $\|\cdot\|^2$, one arrives at
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\sigma_n (x_n - p) + \gamma_n (z_n - p) + \delta_n (T z_n - p) + \gamma_n (u_n - z_n)\|^2 \\ &\le \|\sigma_n (x_n - p) + \gamma_n (z_n - p) + \delta_n (T z_n - p)\|^2 + 2 \gamma_n \beta_n \langle u_n - f(x_n), x_{n+1} - p \rangle \\ &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\Big\|\frac{1}{1 - \sigma_n}[\gamma_n (z_n - p) + \delta_n (T z_n - p)]\Big\|^2 + 2 \gamma_n \beta_n \langle u_n - p, x_{n+1} - p \rangle + 2 \gamma_n \beta_n \langle p - f(x_n), x_{n+1} - p \rangle, \end{aligned}$$
which yields
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\|z_n - p\|^2 + 2 \gamma_n \beta_n \|u_n - p\| \|x_{n+1} - p\| + 2 \gamma_n \beta_n \langle p - f(x_n), x_{n+1} - p \rangle \\ &\le \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)[(1 - \beta_n)\|u_n - p\|^2 + 2 \beta_n \langle f(x_n) - p, z_n - p \rangle] + \gamma_n \beta_n (\|u_n - p\|^2 + \|x_{n+1} - p\|^2) + 2 \gamma_n \beta_n \langle p - f(x_n), x_{n+1} - p \rangle. \end{aligned}$$
From (17) and (22) we know that $\|u_n - p\|^2 \le \|x_n - p\|^2 + \alpha_n \|x_n - x_{n-1}\| \cdot 3M$ for all $n \ge n_0$. Hence, we have, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\|^2 \le{}& \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)(1 - \beta_n)(\|x_n - p\|^2 + \alpha_n \|x_n - x_{n-1}\| \cdot 3M) + 2 \beta_n (1 - \sigma_n)\langle f(x_n) - p, z_n - p \rangle \\ &+ \gamma_n \beta_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2 + \alpha_n \|x_n - x_{n-1}\| \cdot 3M) + 2 \gamma_n \beta_n \langle p - f(x_n), x_{n+1} - p \rangle \\ \le{}& [1 - \beta_n (1 - \sigma_n)]\|x_n - p\|^2 + 2 \beta_n \delta_n \langle f(x_n) - p, z_n - p \rangle + 2 \gamma_n \beta_n \langle f(x_n) - p, z_n - x_{n+1} \rangle \\ &+ \gamma_n \beta_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1 - \sigma_n) \alpha_n \|x_n - x_{n-1}\| \cdot 3M \\ \le{}& [1 - \beta_n (1 - \sigma_n)]\|x_n - p\|^2 + 2 \gamma_n \beta_n \|f(x_n) - p\| \|z_n - x_{n+1}\| + 2 \beta_n \delta_n \delta \|x_n - p\|^2 + 2 \beta_n \delta_n \langle f(p) - p, x_n - p \rangle \\ &+ 2 \beta_n \delta_n \|f(x_n) - p\| \|z_n - x_n\| + \gamma_n \beta_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1 - \sigma_n) \alpha_n \|x_n - x_{n-1}\| \cdot 3M, \end{aligned}$$
which, after rearranging the terms, dividing both sides by $1 - \gamma_n \beta_n$, and using $1 - \sigma_n = \gamma_n + \delta_n$, immediately yields (23).
Step 4. We claim the strong convergence of the vector sequence $\{x_n\}$ to the unique solution $x^* \in \Omega$ of HVI (12). One lets $p = x^*$ and uses (23) to obtain
$$\begin{aligned} \|x_{n+1} - x^*\|^2 \le{}& \Big[1 - \frac{(1 - 2\delta)\delta_n - \gamma_n}{1 - \beta_n \gamma_n}\beta_n\Big]\|x_n - x^*\|^2 + \frac{[(1 - 2\delta)\delta_n - \gamma_n]\beta_n}{1 - \beta_n \gamma_n} \cdot \Big\{\frac{2 \gamma_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\| \|z_n - x_{n+1}\| \\ &+ \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\| \|z_n - x_n\| + \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^* \rangle + \frac{\gamma_n + \delta_n}{(1 - 2\delta)\delta_n - \gamma_n} \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \cdot 3M\Big\}. \quad (24) \end{aligned}$$
From (21), $x_n - x_{n+1} \to 0$, $\beta_n \to 0$, $1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2} \to 1 - \mu^2$, and $\{\sigma_n\} \subset [a, b] \subset (0, 1)$, we obtain
$$\begin{aligned} \limsup_{n \to \infty} (1 - \beta_n)(1 - b)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 &\le \limsup_{n \to \infty} (1 - \beta_n)(1 - \sigma_n)\Big(1 - \frac{\mu^2 \tau_n^2}{\tau_{n+1}^2}\Big)\|w_n - y_n\|^2 \\ &\le \limsup_{n \to \infty} [\|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \beta_n M_4] \\ &\le \limsup_{n \to \infty} (\|x_n - p\| + \|x_{n+1} - p\|)\|x_n - x_{n+1}\| + \limsup_{n \to \infty} \beta_n M_4 = 0. \end{aligned}$$
This immediately implies that $w_n - y_n \to 0$. From the Lipschitzian property of $A$, we have $\|u_n - y_n\| = \tau_n \|A y_n - A w_n\| \le \tau_1 L \|y_n - w_n\|$. Consequently,
$$\lim_{n \to \infty} \|u_n - y_n\| = \lim_{n \to \infty} \|w_n - y_n\| = 0. \quad (25)$$
Thus, we get
$$\|x_n - y_n\| \le \|x_n - w_n\| + \|w_n - y_n\| \to 0 \quad (n \to \infty).$$
Since $z_n = \beta_n f(x_n) + (1 - \beta_n) u_n$ with $u_n := y_n - \tau_n (A y_n - A w_n)$, from (25) and the boundedness of $\{x_n\}$, $\{u_n\}$, we get
$$\|z_n - y_n\| = \|\beta_n f(x_n) - \beta_n u_n + u_n - y_n\| \le \beta_n (\|f(x_n)\| + \|u_n\|) + \|u_n - y_n\| \to 0 \quad (n \to \infty), \quad (26)$$
and hence
$$\|z_n - x_n\| \le \|z_n - y_n\| + \|y_n - x_n\| \to 0 \quad (n \to \infty). \quad (27)$$
Obviously, combining (25) and (26) guarantees that $\|w_n - z_n\| \le \|y_n - w_n\| + \|z_n - y_n\|$, which indicates that $\lim_{n \to \infty} \|w_n - z_n\| = 0$. By the boundedness of $\{x_n\}$, we can choose a subsequence $\{x_{n_k}\}$ of the original sequence $\{x_n\}$ such that
$$\lim_{k \to \infty} \langle (f - I)x^*, x_{n_k} - x^* \rangle = \limsup_{n \to \infty} \langle (f - I)x^*, x_n - x^* \rangle. \quad (28)$$
Without loss of generality, one lets $x_{n_k} \rightharpoonup \tilde{x}$. Then (28) implies
$$\limsup_{n \to \infty} \langle (f - I)x^*, x_n - x^* \rangle = \lim_{k \to \infty} \langle (f - I)x^*, x_{n_k} - x^* \rangle = \langle (f - I)x^*, \tilde{x} - x^* \rangle. \quad (29)$$
On the other hand, one has $\lim_{k \to \infty} \|w_{n_k} - x_{n_k}\| = 0$. This indicates $w_{n_k} \rightharpoonup \tilde{x}$. Lemma 7 guarantees that $\tilde{x}$ is in $\Omega$. Therefore, (12) and (29) amount to $\limsup_{n \to \infty} \langle (f - I)x^*, x_n - x^* \rangle = \langle (f - I)x^*, \tilde{x} - x^* \rangle \le 0$. Note that $\liminf_{n \to \infty} \frac{(1 - 2\delta)\delta_n - \gamma_n}{1 - \beta_n \gamma_n} > 0$. It follows that $\sum_{n=1}^{\infty} \frac{(1 - 2\delta)\delta_n - \gamma_n}{1 - \beta_n \gamma_n}\beta_n = \infty$. Since $\|z_n - x_{n+1}\| \le \|z_n - x_n\| + \|x_n - x_{n+1}\| \to 0$, it is clear that
$$\limsup_{n \to \infty} \Big\{\frac{2 \gamma_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\| \|z_n - x_{n+1}\| + \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\| \|z_n - x_n\| + \frac{2 \delta_n}{(1 - 2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^* \rangle + \frac{\gamma_n + \delta_n}{(1 - 2\delta)\delta_n - \gamma_n} \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \cdot 3M\Big\} \le 0. \quad (30)$$
By utilizing Lemma 4, one concludes that $x_n \to x^*$. The proof is complete. □

4. Applications

In this section, our main results are applied to solve the VIP and the fixed-point problem in an illustrating example. The initial points $x_0 = x_1$ are chosen randomly in $\mathbb{R}$. Take $f(x) = \frac{\sin x}{4}$, $\alpha = \tau_1 = \mu = \frac{1}{2}$, $\epsilon_n = \frac{1}{n^2}$, $\beta_n = \frac{1}{n+1}$, $\sigma_n = \frac{1}{3}$, $\gamma_n = \frac{1}{6}$, and $\delta_n = \frac{1}{2}$.
We first provide an example of a Lipschitzian, pseudomonotone operator $A$ satisfying the boundedness of $A(H)$ and a strictly pseudocontractive operator $T_1$ with $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{VI}(C, A) \ne \emptyset$. Let $C = [-1, 1]$ and $H = \mathbb{R}$ with the inner product $\langle a, b \rangle = ab$ and induced norm $\|\cdot\| = |\cdot|$. Then $f$ is a $\delta$-contractive map with $\delta = \frac{1}{4} \in [0, \frac{1}{2})$ and $f(H) \subset C$, because $\|f(x) - f(y)\| = \frac{1}{4}\|\sin x - \sin y\| \le \frac{1}{4}\|x - y\|$ for all $x, y \in H$.
Let $A : H \to H$ and $T_1 : H \to H$ be defined as $A x := \frac{1}{1 + |\sin x|} - \frac{1}{1 + |x|}$ and $T_1 x := \frac{5}{8}x - \frac{1}{4}\sin x$ for all $x \in H$. We first show that $A$ is a Lipschitzian, pseudomonotone operator with $L = 2$ such that $A(H)$ is bounded. Indeed, for all $x, y \in H$, we have
$$\|A x - A y\| \le \Big|\frac{1}{1 + |x|} - \frac{1}{1 + |y|}\Big| + \Big|\frac{1}{1 + |\sin x|} - \frac{1}{1 + |\sin y|}\Big| \le \frac{\|x - y\|}{(1 + |x|)(1 + |y|)} + \frac{\|\sin x - \sin y\|}{(1 + |\sin x|)(1 + |\sin y|)} \le 2 \|x - y\|.$$
This implies that $A$ is a Lipschitzian operator with $L = 2$. Next, we verify that $A$ is pseudomonotone. For any given $x, y \in H$, it is clear that the relation holds:
$$\langle A x, y - x \rangle = \Big(\frac{1}{1 + |\sin x|} - \frac{1}{1 + |x|}\Big)(y - x) \ge 0 \implies \langle A y, y - x \rangle = \Big(\frac{1}{1 + |\sin y|} - \frac{1}{1 + |y|}\Big)(y - x) \ge 0.$$
Furthermore, it is easy to see that $T_1$ is strictly pseudocontractive with constant $\zeta_1 = \frac{1}{4}$. Indeed, we observe that, for all $x, y \in H$,
$$\|T_1 x - T_1 y\| \le \frac{5}{8}\|x - y\| + \frac{1}{4}\|\sin x - \sin y\| \le \|x - y\| + \frac{1}{4}\|(I - T_1)x - (I - T_1)y\|.$$
It is clear that $(\gamma_n + \delta_n)\zeta_1 = (\frac{1}{6} + \frac{1}{2}) \times \frac{1}{4} = \frac{1}{6} = \gamma_n < (1 - 2\delta)\delta_n = (1 - 2 \times \frac{1}{4}) \times \frac{1}{2} = \frac{1}{4}$ for all $n \ge 1$. In addition, $\mathrm{Fix}(T_1) = \{0\}$ and $A 0 = 0$, because the derivative $\frac{d(T_1 u)}{du} = \frac{5}{8} - \frac{1}{4}\cos u \in [\frac{3}{8}, \frac{7}{8}] \subset (0, 1)$, so $T_1$ is a contraction whose unique fixed point is $0$. Therefore, $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{VI}(C, A) = \{0\}$. In this case, Algorithm 1 can be rewritten as follows:
$$w_n = x_n + \alpha_n (x_n - x_{n-1}), \quad y_n = P_C(w_n - \tau_n A w_n), \quad z_n = \frac{1}{n+1} f(x_n) + \frac{n}{n+1}(y_n - \tau_n (A y_n - A w_n)), \quad x_{n+1} = \frac{1}{3}x_n + \frac{1}{6}(y_n - \tau_n (A y_n - A w_n)) + \frac{1}{2}T_1 z_n, \quad n \ge 1,$$
where, for each $n \ge 1$, $\alpha_n (= \bar{\alpha}_n)$ and $\tau_n$ are chosen as in Algorithm 1. Then, by Theorem 1, we know that $x_n \to 0 \in \Omega$ iff $x_n - x_{n+1} \to 0$ ($n \to \infty$) and $\sup_{n \ge 1} |(I - f)x_n| < \infty$.
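A minimal numerical sketch of this concrete instance follows; the fixed starting point, the iteration count, and the inlined parameter values are illustrative assumptions (the theory only asks for the conditions verified above):

```python
import math

def proj_C(x):                 # Metric projection P_C for C = [-1, 1]
    return min(max(x, -1.0), 1.0)

A  = lambda x: 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))
T1 = lambda x: (5.0 / 8.0) * x - (1.0 / 4.0) * math.sin(x)
f  = lambda x: math.sin(x) / 4.0

x_prev = x = 0.7               # x_0 = x_1; fixed here instead of random
tau, mu, alpha = 0.5, 0.5, 0.5
for n in range(1, 201):
    diff = abs(x - x_prev)
    alpha_n = min(alpha, (1.0 / n**2) / diff) if diff > 0 else alpha
    w = x + alpha_n * (x - x_prev)                   # inertial step
    y = proj_C(w - tau * A(w))                       # projected step
    u = y - tau * (A(y) - A(w))                      # Tseng correction
    z = f(x) / (n + 1) + (n / (n + 1)) * u           # beta_n = 1/(n+1)
    x_prev, x = x, x / 3.0 + u / 6.0 + T1(z) / 2.0   # averaged update
    if abs(A(w) - A(y)) > 0:                         # step-size rule (2)
        tau = min(mu * abs(w - y) / abs(A(w) - A(y)), tau)
print(x)  # approaches 0, the unique point of Omega
```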

Funding

This work was supported by a Gyeongnam National University of Science and Technology Grant for the period 1 March 2019–29 February 2020.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
2. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850.
3. Dehaish, B.A.B. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336.
4. Ceng, L.C.; Shang, M. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2019.
5. Gibali, A.; Shehu, Y. An efficient iterative method for finding common fixed point and variational inequalities in Hilbert spaces. Optimization 2019, 68, 13–32.
6. Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506.
7. Qin, X.; Yao, J.C. Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 2017, 18, 925–935.
8. An, N.T.; Nam, N.M.; Qin, X. Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 2019.
9. Ansari, Q.H.; Babu, F. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25.
10. Zhao, X.; Ng, K.F.; Li, C.; Yao, J.C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641.
11. Qin, X.; Petrusel, A. CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 157–165.
12. Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
13. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
14. Qin, X.; Yao, J.C. Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 2016, 232.
15. Chang, S.S. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196.
16. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malaysian Math. Sci. Soc. 2019, 42, 105–118.
17. Chidume, C.E.; Romanus, O.M.; Nnyaba, U.V. An iterative algorithm for solving split equilibrium problems and split equality variational inclusions for a class of nonexpansive-type maps. Optimization 2018, 67, 1949–1962.
18. Ye, M. An improved projection method for solving generalized variational inequality problems. Optimization 2018, 67, 1523–1533.
19. Takahashi, W.; Wen, C.F. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419.
20. Bin Dehaish, B.A. A regularization projection algorithm for various problems with nonlinear mappings in Hilbert spaces. J. Inequal. Appl. 2015, 2015, 51.
21. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
22. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
23. Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195.
24. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
25. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
26. Zhang, L.; Zhao, H.; Lv, Y. A modified inertial projection and contraction algorithms for quasi-variational inequalities. Appl. Set-Valued Anal. Optim. 2019, 1, 63–76.
27. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
28. Marino, G.; Xu, H.K. Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329, 336–346.
29. Ceng, L.C.; Kong, Z.R.; Wen, C.F. On general systems of variational inequalities. Comput. Math. Appl. 2013, 66, 1514–1532.
30. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
31. Xue, Z.; Zhou, H.; Cho, Y.J. Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 2000, 1, 313–320.
