Article

Mann-Type Inertial Subgradient Extragradient Rules for Variational Inequalities and Common Fixed Points of Nonexpansive and Quasi-Nonexpansive Mappings

Lu-Chuan Ceng 1 and Jen-Chih Yao 2,*
1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Research Center for Interneural Computing, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Axioms 2021, 10(2), 67; https://doi.org/10.3390/axioms10020067
Submission received: 16 March 2021 / Revised: 14 April 2021 / Accepted: 15 April 2021 / Published: 19 April 2021
(This article belongs to the Special Issue in Honor of the 60th Birthday of Professor Hong-Kun Xu)

Abstract

In a real Hilbert space H, let VIP denote a variational inequality problem with a Lipschitzian, pseudomonotone mapping A, and let CFPP denote the common fixed-point problem of a finite family of nonexpansive mappings and a quasi-nonexpansive mapping with a demiclosedness property. In this article, we suggest two Mann-type inertial subgradient extragradient iterations for finding a common solution of the VIP and CFPP. Our iterative schemes require only one projection onto the feasible set per iteration, and the strong convergence theorems are established without the assumption of sequential weak continuity of A. Finally, in order to support the applicability and implementability of our algorithms, we make use of our main results to solve the VIP and CFPP in two illustrative examples.

1. Introduction

In a real Hilbert space $(H, \|\cdot\|)$ equipped with the inner product $\langle \cdot, \cdot \rangle$, we assume that $C$ is a nonempty closed convex subset and $P_C$ is the metric projection of $H$ onto $C$. If $S : C \to H$ is a mapping on $C$, then we denote by $\mathrm{Fix}(S)$ the fixed-point set of $S$. Moreover, we denote by $\mathbb{R}$ the set of all real numbers. Given a mapping $A : H \to H$, consider the classical variational inequality problem (VIP) of finding $x^* \in C$ such that $\langle A x^*, x - x^* \rangle \ge 0$ for all $x \in C$. We denote by $\mathrm{VI}(C, A)$ the solution set of the VIP.
To the best of our knowledge, one of the most efficient methods to deal with the VIP is the extragradient method invented by Korpelevich [1] in 1976, that is, for any given $u_0 \in C$, $\{u_m\}$ is the sequence constructed by
$$v_m = P_C(u_m - \ell A u_m), \qquad u_{m+1} = P_C(u_m - \ell A v_m) \quad \forall m \ge 0,$$
with a constant $\ell \in (0, \tfrac{1}{L})$. If $\mathrm{VI}(C, A) \neq \emptyset$, this method is known to converge only weakly, while requiring only that $A$ be monotone and $L$-Lipschitzian. The literature on the VIP is vast, and Korpelevich's extragradient method has received great attention from many authors, who have improved it in various ways to obtain new iterative methods for the VIP and related optimization problems; see, e.g., [2,3,4,5,6,7,8,9,10,11,12] and the references therein, to name but a few.
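For readers who prefer an operational view, the following is a minimal sketch of Korpelevich's extragradient iteration. The operator A, the projection oracle proj_C, and the function name extragradient are illustrative assumptions; the step size is a constant in $(0, \tfrac{1}{L})$.

```python
import numpy as np

def extragradient(A, proj_C, u0, step, max_iter=1000, tol=1e-8):
    """Korpelevich's extragradient method (a minimal sketch).

    A      : callable, the monotone L-Lipschitzian operator
    proj_C : callable, metric projection onto the feasible set C
    step   : constant step size chosen in (0, 1/L)
    """
    u = proj_C(np.asarray(u0, dtype=float))
    for _ in range(max_iter):
        v = proj_C(u - step * A(u))        # prediction step
        u_next = proj_C(u - step * A(v))   # correction step
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u
```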
It is worth pointing out that the extragradient method needs to calculate two projections onto the feasible set $C$ per iteration. If the projection onto $C$ is hard to calculate, a minimum-distance problem has to be solved twice per iteration, which may affect the applicability and implementability of the method. To improve the extragradient method, one has to reduce the number of projections onto $C$ per iteration. In 2011, Censor et al. [13] first suggested the subgradient extragradient method, in which the second projection onto $C$ is replaced by a projection onto a half-space:
$$v_m = P_C(u_m - \ell A u_m), \qquad C_m = \{ w \in H : \langle u_m - \ell A u_m - v_m, w - v_m \rangle \le 0 \}, \qquad u_{m+1} = P_{C_m}(u_m - \ell A v_m) \quad \forall m \ge 0,$$
where $A$ is an $L$-Lipschitzian monotone mapping and $\ell \in (0, \tfrac{1}{L})$.
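The gain comes from the fact that the projection onto the half-space $C_m$ has a closed form, so only the first projection of each iteration actually involves $C$. The sketch below records this closed form; the helper name proj_halfspace is an illustrative assumption.

```python
import numpy as np

def proj_halfspace(x, a, v):
    """Project x onto the half-space {w : <a, w - v> <= 0}.

    In the subgradient extragradient method this is the set C_m with
    a = u_m - step * A(u_m) - v_m, so no minimum-distance problem over
    C needs to be solved for this step.
    """
    x, a, v = (np.asarray(t, dtype=float) for t in (x, a, v))
    slack = np.dot(a, x - v)
    if slack <= 0:                     # x already belongs to the half-space
        return x
    return x - (slack / np.dot(a, a)) * a
```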
Since then, various modified extragradient-like iterative methods have been investigated by many researchers; see, e.g., [14,15,16,17,18,19]. In 2014, combining the subgradient extragradient method and Halpern's iteration method, Kraikaew and Saejung [20] proposed the Halpern subgradient extragradient method for solving the VIP, that is, for any given $u_0 \in H$, $\{u_m\}$ is the sequence constructed by
$$v_m = P_C(u_m - \ell A u_m), \quad C_m = \{ v \in H : \langle u_m - \ell A u_m - v_m, v - v_m \rangle \le 0 \}, \quad w_m = P_{C_m}(u_m - \ell A v_m), \quad u_{m+1} = \alpha_m u_0 + (1 - \alpha_m) w_m \quad \forall m \ge 0,$$
where $\ell \in (0, \tfrac{1}{L})$, $\{\alpha_m\} \subset (0, 1)$, $\lim_{m \to \infty} \alpha_m = 0$ and $\sum_{m=1}^{\infty} \alpha_m = +\infty$. They proved the strong convergence of $\{u_m\}$ to $P_{\mathrm{VI}(C, A)} u_0$.
In 2018, Thong and Hieu [21] first suggested the inertial subgradient extragradient method, that is, for any given $u_0, u_1 \in H$, the sequence $\{u_m\}$ is generated by
$$w_m = u_m + \alpha_m(u_m - u_{m-1}), \quad v_m = P_C(w_m - \ell A w_m), \quad C_m = \{ v \in H : \langle w_m - \ell A w_m - v_m, v - v_m \rangle \le 0 \}, \quad u_{m+1} = P_{C_m}(w_m - \ell A v_m) \quad \forall m \ge 1,$$
with a constant $\ell \in (0, \tfrac{1}{L})$. Under suitable conditions, they proved the weak convergence of $\{u_m\}$ to an element of $\mathrm{VI}(C, A)$. Later, Thong and Hieu [22] designed two inertial subgradient extragradient algorithms with a linesearch process for solving a VIP with a monotone, Lipschitz continuous mapping $A$ and the FPP of a quasi-nonexpansive mapping $T$ with a demiclosedness property in $H$. Under appropriate conditions, they established weak convergence results for the suggested algorithms.
Suppose that the notations VIP and CFPP represent a variational inequality problem with a Lipschitzian, pseudomonotone mapping $A : H \to H$ and a common fixed-point problem of finitely many nonexpansive mappings $\{T_i\}_{i=1}^{N}$ and a quasi-nonexpansive mapping $T$ with a demiclosedness property, respectively. Inspired by the research works above, we design two Mann-type inertial subgradient extragradient iterations for finding a common solution of the VIP and CFPP. Our algorithms require only one projection onto the feasible set $C$ per iteration, and the strong convergence theorems are established without the assumption of sequential weak continuity of $A$ on $C$. Finally, in order to support the applicability and implementability of our algorithms, we make use of our main results to solve the VIP and CFPP in two illustrative examples.
This paper is organized as follows: In Section 2, we recall some definitions and preliminary results for later use. Section 3 deals with the convergence analysis of the proposed algorithms. Finally, in Section 4, in order to support the applicability and implementability of our algorithms, we make use of our main results to find a common solution of the VIP and CFPP in two illustrative examples.

2. Preliminaries

Throughout this paper, we assume that $C$ is a nonempty closed convex subset of a real Hilbert space $H$. If $\{u_m\}$ is a sequence in $H$, then we denote by $u_m \to u$ (respectively, $u_m \rightharpoonup u$) the strong (respectively, weak) convergence of $\{u_m\}$ to $u$. A mapping $F : C \to H$ is said to be nonexpansive if $\|Fu - Fv\| \le \|u - v\|$ for all $u, v \in C$. Recall also that $F : C \to H$ is called
(i)
$L$-Lipschitz continuous (or $L$-Lipschitzian) if there exists $L > 0$ such that $\|Fu - Fv\| \le L\|u - v\|$ for all $u, v \in C$;
(ii)
monotone if $\langle Fu - Fv, u - v \rangle \ge 0$ for all $u, v \in C$;
(iii)
pseudomonotone if $\langle Fu, v - u \rangle \ge 0 \Rightarrow \langle Fv, v - u \rangle \ge 0$ for all $u, v \in C$;
(iv)
$\alpha$-strongly monotone if there exists $\alpha > 0$ such that $\langle Fu - Fv, u - v \rangle \ge \alpha\|u - v\|^2$ for all $u, v \in C$;
(v)
quasi-nonexpansive if $\mathrm{Fix}(F) \neq \emptyset$ and $\|Fu - p\| \le \|u - p\|$ for all $u \in C$, $p \in \mathrm{Fix}(F)$;
(vi)
sequentially weakly continuous on $C$ if for $\{u_m\} \subset C$, the implication holds: $u_m \rightharpoonup u \Rightarrow F u_m \rightharpoonup F u$.
It is clear that every monotone operator is pseudomonotone, but the converse is not true. Next, we provide an example of a quasi-nonexpansive mapping which is not nonexpansive.
Example 1.
Let $H = \mathbb{R}$ with the inner product $\langle a, b \rangle = ab$ and induced norm $\|\cdot\| = |\cdot|$. Let $T : H \to H$ be defined as $Tu := \frac{u}{2}\sin u$ for all $u \in H$. It is clear that $\mathrm{Fix}(T) = \{0\}$ and $T$ is quasi-nonexpansive. However, we claim that $T$ is not nonexpansive. Indeed, putting $u = 2\pi$ and $v = \frac{3\pi}{2}$, we have $\|Tu - Tv\| = \left|\frac{2\pi}{2}\sin 2\pi - \frac{3\pi}{4}\sin\frac{3\pi}{2}\right| = \frac{3\pi}{4} > \left|2\pi - \frac{3\pi}{2}\right| = \frac{\pi}{2} = \|u - v\|$.
Definition 1
([23]). Assume that $T : H \to H$ is a nonlinear operator with $\mathrm{Fix}(T) \neq \emptyset$. Then $I - T$ is said to be demiclosed at zero if for any $\{u_n\}$ in $H$, the following implication holds: $u_n \rightharpoonup u$ and $(I - T)u_n \to 0 \Rightarrow u \in \mathrm{Fix}(T)$.
Very recently, Thong and Hieu gave an example of a quasi-nonexpansive mapping $T$ for which $I - T$ is not demiclosed at zero; see ([22], Example 2). For each $u \in H$, there exists a unique nearest point in $C$, denoted by $P_C u$, such that $\|u - P_C u\| \le \|u - v\|$ for all $v \in C$; $P_C$ is called the metric projection of $H$ onto $C$.
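For the two feasible sets used later in Section 4 (a closed interval of $\mathbb{R}$ and the closed unit ball of a Hilbert space) the metric projection has a simple explicit form. The following minimal sketch records both; the function names are illustrative only.

```python
import numpy as np

def proj_interval(x, lo=-1.0, hi=1.0):
    """Metric projection of a real number onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball {x : ||x|| <= 1}:
    points inside the ball are left unchanged, points outside are
    radially rescaled to the boundary."""
    x = np.asarray(x, dtype=float)
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm
```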
Lemma 1
([23]). The following hold:
(i) 
$\langle u - v, P_C u - P_C v \rangle \ge \|P_C u - P_C v\|^2$ for all $u, v \in H$;
(ii) 
$\langle u - P_C u, v - P_C u \rangle \le 0$ for all $u \in H$, $v \in C$;
(iii) 
$\|u - v\|^2 \ge \|u - P_C u\|^2 + \|v - P_C u\|^2$ for all $u \in H$, $v \in C$;
(iv) 
$\|u - v\|^2 = \|u\|^2 - \|v\|^2 - 2\langle u - v, v \rangle$ for all $u, v \in H$;
(v) 
$\|\lambda u + (1 - \lambda)v\|^2 = \lambda\|u\|^2 + (1 - \lambda)\|v\|^2 - \lambda(1 - \lambda)\|u - v\|^2$ for all $u, v \in H$, $\lambda \in [0, 1]$.
Lemma 2
([24]). For all $u \in H$ and $\alpha \ge \beta > 0$, the following inequalities hold: $\frac{\|u - P_C(u - \alpha A u)\|}{\alpha} \le \frac{\|u - P_C(u - \beta A u)\|}{\beta}$ and $\|u - P_C(u - \beta A u)\| \le \|u - P_C(u - \alpha A u)\|$.
Lemma 3
([13]). Suppose that $A : C \to H$ is pseudomonotone and continuous. Then $u^* \in C$ is a solution to the VIP $\langle A u^*, u - u^* \rangle \ge 0$ for all $u \in C$, if and only if $\langle A u, u - u^* \rangle \ge 0$ for all $u \in C$.
Lemma 4
([25]). Suppose that $\{a_m\}$ is a sequence of nonnegative numbers satisfying the condition $a_{m+1} \le (1 - \lambda_m) a_m + \lambda_m \gamma_m$ for all $m \ge 1$, where $\{\lambda_m\}$ and $\{\gamma_m\}$ are sequences of real numbers such that (i) $\{\lambda_m\} \subset [0, 1]$ and $\sum_{m=1}^{\infty} \lambda_m = \infty$, and (ii) $\limsup_{m \to \infty} \gamma_m \le 0$ or $\sum_{m=1}^{\infty} |\lambda_m \gamma_m| < \infty$. Then $\lim_{m \to \infty} a_m = 0$.
Lemma 5
([23]). Suppose that $T : C \to C$ is a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. Then $I - T$ is demiclosed at zero; that is, if $\{u_m\}$ is a sequence in $C$ such that $u_m \rightharpoonup u \in C$ and $(I - T)u_m \to 0$, then $(I - T)u = 0$, where $I$ is the identity mapping of $H$.
Lemma 6
([25]). Suppose that $\lambda \in (0, 1]$, $T : C \to H$ is a nonexpansive mapping, and the mapping $T^{\lambda} : C \to H$ is defined as $T^{\lambda}u := Tu - \lambda\mu F(Tu)$ for all $u \in C$, where $F : H \to H$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone. Then $T^{\lambda}$ is a contraction provided $0 < \mu < \frac{2\eta}{\kappa^2}$; that is, $\|T^{\lambda}u - T^{\lambda}v\| \le (1 - \lambda\tau)\|u - v\|$ for all $u, v \in C$, where $\tau := 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} \in (0, 1]$.
Lemma 7
([26]). Suppose that $\{\Gamma_m\}$ is a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{m_k}\}$ of $\{\Gamma_m\}$ which satisfies $\Gamma_{m_k} < \Gamma_{m_k + 1}$ for each integer $k \ge 1$. Define the sequence $\{\tau(m)\}_{m \ge m_0}$ of integers as follows:
$$\tau(m) = \max\{ k \le m : \Gamma_k < \Gamma_{k+1} \},$$
where the integer $m_0 \ge 1$ is such that $\{ k \le m_0 : \Gamma_k < \Gamma_{k+1} \} \neq \emptyset$. Then, the following conclusions hold:
(i) 
$\tau(m_0) \le \tau(m_0 + 1) \le \cdots$ and $\tau(m) \to \infty$;
(ii) 
$\Gamma_{\tau(m)} \le \Gamma_{\tau(m)+1}$ and $\Gamma_m \le \Gamma_{\tau(m)+1}$ for all $m \ge m_0$.

3. Iterative Algorithms and Convergence Criteria

In this section, let the feasible set C be a nonempty closed convex subset of a real Hilbert space H, and assume always that the following hold:
$T_i : H \to H$ is nonexpansive for $i = 1, \ldots, N$ and $T : H \to H$ is a quasi-nonexpansive mapping such that $I - T$ is demiclosed at zero;
$A : H \to H$ is $L$-Lipschitz continuous, pseudomonotone on $H$, and satisfies the condition that for every $\{x_n\} \subset C$ with $x_n \rightharpoonup z$, one has $\|Az\| \le \liminf_{n \to \infty}\|Ax_n\|$;
$\Omega = \bigcap_{i=0}^{N}\mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A) \neq \emptyset$ with $T_0 := T$;
$f : H \to H$ is a contraction with constant $\delta \in [0, 1)$, and $F : H \to H$ is $\eta$-strongly monotone and $\kappa$-Lipschitzian such that $\delta < \tau := 1 - \sqrt{1 - \rho(2\eta - \rho\kappa^2)}$ for $\rho \in (0, \frac{2\eta}{\kappa^2})$; $\{\zeta_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0, 1)$ and $\{\tau_n\} \subset (0, \infty)$ are such that
(i)
$\beta_n + \gamma_n < 1$ and $\sum_{n=1}^{\infty}\beta_n = \infty$;
(ii)
$\lim_{n \to \infty}\beta_n = 0$ and $\tau_n = o(\beta_n)$, i.e., $\lim_{n \to \infty}\tau_n/\beta_n = 0$;
(iii)
$0 < \liminf_{n \to \infty}\gamma_n \le \limsup_{n \to \infty}\gamma_n < 1$ and $0 < \liminf_{n \to \infty}\zeta_n \le \limsup_{n \to \infty}\zeta_n < 1$.
In addition, we write $T_n := T_{n \bmod N}$ for integer $n \ge 1$, with the mod function taking values in the set $\{1, 2, \ldots, N\}$; that is, if $n = jN + q$ for some integers $j \ge 0$ and $0 \le q < N$, then $T_n = T_N$ if $q = 0$ and $T_n = T_q$ if $0 < q < N$.
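As a quick operational check, this cyclic indexing of the mappings $T_1, \ldots, T_N$ can be written as the following one-line rule (a sketch; the function name is illustrative).

```python
def cyclic_index(n, N):
    """Return the index i in {1, ..., N} with T_n = T_i,
    i.e., the mod function taking values in {1, ..., N}."""
    q = n % N
    return N if q == 0 else q
```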
Algorithm 1. Initialization: Let $\lambda_1 > 0$, $\alpha > 0$, $\mu \in (0, 1)$ and $x_0, x_1 \in H$ be arbitrary.
Iterative Steps: Calculate $x_{n+1}$ as follows:
Step 1. Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\alpha_n$ such that $0 \le \alpha_n \le \bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\left\{\alpha, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n \neq x_{n-1}, \\ \alpha & \text{otherwise}. \end{cases}$$
Step 2. Compute $w_n = x_n + \alpha_n(x_n - x_{n-1})$ and $y_n = P_C(w_n - \lambda_n A w_n)$.
Step 3. Construct the half-space $C_n := \{ z \in H : \langle w_n - \lambda_n A w_n - y_n, z - y_n \rangle \le 0 \}$, and compute $z_n = P_{C_n}(w_n - \lambda_n A y_n)$.
Step 4. Calculate $v_n = \zeta_n x_n + (1 - \zeta_n)T_n w_n$ and $x_{n+1} = \beta_n f(x_n) + \gamma_n T z_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n$, and update
$$\lambda_{n+1} = \begin{cases} \min\left\{\frac{\mu(\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2\langle A w_n - A y_n, z_n - y_n \rangle}, \lambda_n\right\} & \text{if } \langle A w_n - A y_n, z_n - y_n \rangle > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
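The following is a minimal numerical sketch of Algorithm 1 in a finite-dimensional setting. The function name mann_inertial_sea and the default parameter values are illustrative assumptions (the defaults mirror the parameter choices of Example 2 below); the mappings $A$, $f$, $F$, $T$, $T_1, \ldots, T_N$ and the projection $P_C$ are supplied by the user.

```python
import numpy as np

def mann_inertial_sea(A, proj_C, f, F, T, T_list, x0, x1,
                      lam1=0.1, alpha=0.1, mu=0.2, rho=2.0,
                      beta=lambda n: 1.0 / (n + 1),
                      tau=lambda n: 1.0 / (n + 1) ** 2,
                      gamma=lambda n: 1.0 / 3.0,
                      zeta=lambda n: 1.0 / 3.0,
                      max_iter=200):
    """Sketch of Algorithm 1 (Mann-type inertial subgradient extragradient rule)."""
    N = len(T_list)
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    lam = lam1
    for n in range(1, max_iter + 1):
        # Step 1: inertial parameter alpha_n with 0 <= alpha_n <= bar(alpha)_n
        diff = np.linalg.norm(x - x_prev)
        a_n = alpha if diff == 0 else min(alpha, tau(n) / diff)
        # Step 2: inertial point and the only projection onto C
        w = x + a_n * (x - x_prev)
        y = proj_C(w - lam * A(w))
        # Step 3: projection onto the half-space C_n (closed form)
        a_vec = w - lam * A(w) - y
        z = w - lam * A(y)
        denom = np.dot(a_vec, a_vec)
        slack = np.dot(a_vec, z - y)
        if slack > 0 and denom > 0:
            z = z - (slack / denom) * a_vec
        # Step 4: Mann-type combination with T_n = T_{n mod N} and T
        Tn = T_list[(n - 1) % N]
        v = zeta(n) * x + (1 - zeta(n)) * Tn(w)
        x_next = (beta(n) * f(x) + gamma(n) * T(z)
                  + (1 - gamma(n)) * v - beta(n) * rho * F(v))
        # Step-size update for lambda_{n+1}
        s = np.dot(A(w) - A(y), z - y)
        if s > 0:
            lam = min(mu * (np.linalg.norm(w - y) ** 2
                            + np.linalg.norm(z - y) ** 2) / (2 * s), lam)
        x_prev, x = x, x_next
    return x
```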
Remark 1.
It is easy to see from (5) that $\lim_{n \to \infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0$. Indeed, we have $\alpha_n\|x_n - x_{n-1}\| \le \tau_n$ for all $n \ge 1$, which together with $\lim_{n \to \infty}\frac{\tau_n}{\beta_n} = 0$ implies that $\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le \frac{\tau_n}{\beta_n} \to 0$ as $n \to \infty$.
Lemma 8.
Let $\{\lambda_n\}$ be generated by (6). Then $\{\lambda_n\}$ is a nonincreasing sequence with $\lambda_n \ge \lambda := \min\{\lambda_1, \frac{\mu}{L}\}$ for all $n \ge 1$, and $\lim_{n \to \infty}\lambda_n \ge \lambda := \min\{\lambda_1, \frac{\mu}{L}\}$.
Proof. 
First, from (6) it is clear that $\lambda_n \ge \lambda_{n+1}$ for all $n \ge 1$. Furthermore, observe that
$$\tfrac{1}{2}\left(\|w_n - y_n\|^2 + \|z_n - y_n\|^2\right) \ge \|w_n - y_n\|\,\|z_n - y_n\| \quad \text{and} \quad \langle A w_n - A y_n, z_n - y_n \rangle \le L\|w_n - y_n\|\,\|z_n - y_n\| \;\Longrightarrow\; \lambda_{n+1} \ge \min\left\{\lambda_n, \tfrac{\mu}{L}\right\}. \qquad \Box$$
Remark 2.
In terms of Lemmas 2 and 8, we claim that if $w_n = y_n$ or $A y_n = 0$, then $y_n$ is an element of $\mathrm{VI}(C, A)$. Indeed, if $w_n = y_n$ or $A y_n = 0$, then $0 = \|y_n - P_C(y_n - \lambda_n A y_n)\| \ge \|y_n - P_C(y_n - \lambda A y_n)\|$. Thus, the assertion is valid.
The following lemmas are quite helpful for the convergence analysis of our algorithms.
Lemma 9.
Let $\{w_n\}, \{y_n\}, \{z_n\}$ be the sequences generated by Algorithm 1. Then
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - \left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\|w_n - y_n\|^2 - \left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\|z_n - y_n\|^2 \quad \forall p \in \Omega.$$
Proof. 
First, by the definition of $\{\lambda_n\}$ we claim that
$$2\langle A w_n - A y_n, z_n - y_n \rangle \le \frac{\mu}{\lambda_{n+1}}\|w_n - y_n\|^2 + \frac{\mu}{\lambda_{n+1}}\|z_n - y_n\|^2 \quad \forall n \ge 1.$$
Indeed, if $\langle A w_n - A y_n, z_n - y_n \rangle \le 0$, then inequality (8) holds trivially. Otherwise, from (6) we get (8). Furthermore, observe that for each $p \in \Omega \subset C \subset C_n$,
$$\|z_n - p\|^2 = \|P_{C_n}(w_n - \lambda_n A y_n) - P_{C_n}p\|^2 \le \langle z_n - p, w_n - \lambda_n A y_n - p \rangle = \tfrac{1}{2}\|z_n - p\|^2 + \tfrac{1}{2}\|w_n - p\|^2 - \tfrac{1}{2}\|z_n - w_n\|^2 - \langle z_n - p, \lambda_n A y_n \rangle,$$
which hence yields
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - \|z_n - w_n\|^2 - 2\langle z_n - p, \lambda_n A y_n \rangle.$$
From $p \in \mathrm{VI}(C, A)$, we get $\langle A p, x - p \rangle \ge 0$ for all $x \in C$. By the pseudomonotonicity of $A$ on $C$ we have $\langle A x, x - p \rangle \ge 0$ for all $x \in C$. Putting $x := y_n \in C$, we get $\langle A y_n, p - y_n \rangle \le 0$. Thus,
$$\langle A y_n, p - z_n \rangle = \langle A y_n, p - y_n \rangle + \langle A y_n, y_n - z_n \rangle \le \langle A y_n, y_n - z_n \rangle.$$
Substituting (10) into (9), we obtain
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - \|z_n - y_n\|^2 - \|y_n - w_n\|^2 + 2\langle w_n - \lambda_n A y_n - y_n, z_n - y_n \rangle.$$
Since $z_n = P_{C_n}(w_n - \lambda_n A y_n)$, we get $z_n \in C_n := \{ z \in H : \langle w_n - \lambda_n A w_n - y_n, z - y_n \rangle \le 0 \}$, and hence
$$2\langle w_n - \lambda_n A y_n - y_n, z_n - y_n \rangle = 2\langle w_n - \lambda_n A w_n - y_n, z_n - y_n \rangle + 2\lambda_n\langle A w_n - A y_n, z_n - y_n \rangle \le 2\lambda_n\langle A w_n - A y_n, z_n - y_n \rangle,$$
which together with (8) implies that
$$2\langle w_n - \lambda_n A y_n - y_n, z_n - y_n \rangle \le \frac{\mu\lambda_n}{\lambda_{n+1}}\|w_n - y_n\|^2 + \frac{\mu\lambda_n}{\lambda_{n+1}}\|z_n - y_n\|^2.$$
Therefore, substituting the last inequality into (11), we infer that inequality (7) holds. □
Lemma 10.
Suppose that $\{w_n\}, \{x_n\}, \{y_n\}$, and $\{z_n\}$ are bounded sequences generated by Algorithm 1. If $x_n - x_{n+1} \to 0$, $w_n - y_n \to 0$, $w_n - z_n \to 0$, $z_n - T_n z_n \to 0$, and there exists a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that $w_{n_k} \rightharpoonup z \in H$, then $z \in \Omega$.
Proof. 
Utilizing arguments similar to those in the proof of Lemma 3.3 of [12], we can derive the desired result. □
Lemma 11.
Assume that $\{w_n\}, \{x_n\}, \{y_n\}, \{z_n\}$ are the sequences generated by Algorithm 1. Then they are all bounded.
Proof. 
Since $0 < \liminf_{n \to \infty}\gamma_n \le \limsup_{n \to \infty}\gamma_n < 1$ and $0 < \liminf_{n \to \infty}\zeta_n \le \limsup_{n \to \infty}\zeta_n < 1$, we may assume, without loss of generality, that
$$\{\gamma_n\} \subset [a, b] \subset (0, 1) \quad \text{and} \quad \{\zeta_n\} \subset [c, d] \subset (0, 1).$$
Choose a fixed $p \in \Omega$ arbitrarily. Then we obtain $Tp = p$ and $T_n p = p$ for all $n \ge 1$, and (7) holds. Noticing $\lim_{n \to \infty}(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}) = 1 - \mu > 0$, we may assume that $1 - \frac{\mu\lambda_n}{\lambda_{n+1}} > 0$ for all $n \ge 1$. So it follows from (7) that for all $n \ge 1$,
$$\|z_n - p\| \le \|w_n - p\|.$$
Furthermore, note that
$$\|w_n - p\| \le \|x_n - p\| + \alpha_n\|x_n - x_{n-1}\| = \|x_n - p\| + \beta_n \cdot \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|.$$
In terms of Remark 1, one has $\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. Hence there exists $M_1 > 0$ such that
$$M_1 \ge \frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \quad \forall n \ge 1.$$
Using (13)–(15), we obtain that for all $n \ge 1$,
$$\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n M_1.$$
Noticing $\beta_n + \gamma_n < 1$ for all $n \ge 1$, we have $\frac{\beta_n}{1 - \gamma_n} < 1$ for all $n \ge 1$. So, using Lemma 6 and (16), we deduce that
$$\|v_n - p\| \le \zeta_n\|x_n - p\| + (1 - \zeta_n)\|T_n w_n - p\| \le \zeta_n\|x_n - p\| + (1 - \zeta_n)\|w_n - p\| \le \zeta_n(\|x_n - p\| + \beta_n M_1) + (1 - \zeta_n)(\|x_n - p\| + \beta_n M_1) = \|x_n - p\| + \beta_n M_1,$$
and hence
$$\begin{aligned} \|x_{n+1} - p\| &= \|\beta_n f(x_n) + \gamma_n T z_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n - p\| \\ &\le \beta_n\|f(x_n) - p\| + \gamma_n\|T z_n - p\| + (1 - \beta_n - \gamma_n)\Big\|\Big(\tfrac{1 - \gamma_n}{1 - \beta_n - \gamma_n}I - \tfrac{\beta_n}{1 - \beta_n - \gamma_n}\rho F\Big)v_n - p\Big\| \\ &\le \beta_n(\|f(x_n) - f(p)\| + \|f(p) - p\|) + \gamma_n\|z_n - p\| + (1 - \beta_n - \gamma_n)\Big\|\Big(\tfrac{1 - \gamma_n}{1 - \beta_n - \gamma_n}I - \tfrac{\beta_n}{1 - \beta_n - \gamma_n}\rho F\Big)v_n - p\Big\| \\ &\le \beta_n(\delta\|x_n - p\| + \|f(p) - p\|) + \gamma_n\|z_n - p\| + (1 - \gamma_n)\Big\|\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(1 - \tfrac{\beta_n}{1 - \gamma_n}\Big)p\Big\| \\ &= \beta_n(\delta\|x_n - p\| + \|f(p) - p\|) + \gamma_n\|z_n - p\| + (1 - \gamma_n)\Big\|\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)p + \tfrac{\beta_n}{1 - \gamma_n}(I - \rho F)p\Big\| \\ &\le \beta_n(\delta\|x_n - p\| + \|f(p) - p\|) + \gamma_n\|z_n - p\| + (1 - \gamma_n)\Big[\Big(1 - \tfrac{\beta_n}{1 - \gamma_n}\tau\Big)\|v_n - p\| + \tfrac{\beta_n}{1 - \gamma_n}\|(I - \rho F)p\|\Big] \\ &= \beta_n(\delta\|x_n - p\| + \|f(p) - p\|) + \gamma_n\|z_n - p\| + (1 - \gamma_n - \beta_n\tau)\|v_n - p\| + \beta_n\|(I - \rho F)p\| \\ &\le \beta_n\delta(\|x_n - p\| + \beta_n M_1) + \beta_n\|f(p) - p\| + \gamma_n(\|x_n - p\| + \beta_n M_1) + (1 - \gamma_n - \beta_n\tau)(\|x_n - p\| + \beta_n M_1) + \beta_n\|(I - \rho F)p\| \\ &\le [1 - \beta_n(\tau - \delta)]\|x_n - p\| + \beta_n\big(M_1 + \|f(p) - p\| + \|(I - \rho F)p\|\big) \\ &= [1 - \beta_n(\tau - \delta)]\|x_n - p\| + \beta_n(\tau - \delta)\cdot\frac{M_1 + \|f(p) - p\| + \|(I - \rho F)p\|}{\tau - \delta} \\ &\le \max\Big\{\|x_n - p\|, \frac{M_1 + \|f(p) - p\| + \|(I - \rho F)p\|}{\tau - \delta}\Big\}. \end{aligned}$$
By induction, we obtain $\|x_n - p\| \le \max\{\|x_1 - p\|, \frac{M_1 + \|f(p) - p\| + \|(I - \rho F)p\|}{\tau - \delta}\}$ for all $n \ge 1$. Thus, $\{x_n\}$ is bounded, and so are the sequences $\{w_n\}, \{y_n\}, \{z_n\}, \{T z_n\}, \{F v_n\}, \{T_n w_n\}$. □
Theorem 1.
Let the sequence $\{x_n\}$ be constructed by Algorithm 1. Then $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega$ of the following VIP:
$$\langle (\rho F - f)x^*, p - x^* \rangle \ge 0 \quad \forall p \in \Omega.$$
Proof. 
First, it is not difficult to show that $P_{\Omega}(f + I - \rho F)$ is a contraction. In fact, by Lemma 6 and the Banach contraction mapping principle, we know that $P_{\Omega}(f + I - \rho F)$ has a unique fixed point, say $x^* \in H$; that is, $x^* = P_{\Omega}(f + I - \rho F)x^*$. Thus, the following VIP has a unique solution $x^* \in \Omega$:
$$\langle (\rho F - f)x^*, p - x^* \rangle \ge 0 \quad \forall p \in \Omega.$$
We now claim that
$$\gamma_n\left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\left[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\right] \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_4,$$
for some M 4 > 0 . In fact, observe that
$$\begin{aligned} x_{n+1} - x^* &= \beta_n(f(x_n) - x^*) + \gamma_n(T z_n - x^*) + (1 - \beta_n - \gamma_n)\Big\{\tfrac{1 - \gamma_n}{1 - \beta_n - \gamma_n}\Big[\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)x^*\Big] + \tfrac{\beta_n}{1 - \beta_n - \gamma_n}(I - \rho F)x^*\Big\} \\ &= \beta_n(f(x_n) - f(x^*)) + \gamma_n(T z_n - x^*) + (1 - \gamma_n)\Big[\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)x^*\Big] + \beta_n(f - \rho F)x^*. \end{aligned}$$
Using Lemma 6 and the convexity of the function $h(t) = t^2$, $t \in \mathbb{R}$, we have
$$\|x_{n+1} - x^*\|^2 \le \Big\|\beta_n(f(x_n) - f(x^*)) + \gamma_n(T z_n - x^*) + (1 - \gamma_n)\Big[\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)x^*\Big]\Big\|^2 + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\|z_n - x^*\|^2 + (1 - \beta_n\tau - \gamma_n)\|v_n - x^*\|^2 + \beta_n M_2,$$
where $M_2 \ge \sup_{n \ge 1}2\|(f - \rho F)x^*\|\|x_n - x^*\|$ for some $M_2 > 0$. From (7) and (17), we have
$$\|x_{n+1} - x^*\|^2 \le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\Big[\|w_n - x^*\|^2 - \Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|w_n - y_n\|^2 - \Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|z_n - y_n\|^2\Big] + (1 - \beta_n\tau - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|w_n - x^*\|^2\big] + \beta_n M_2.$$
Again from (16), we obtain
$$\|w_n - x^*\|^2 \le (\|x_n - x^*\| + \beta_n M_1)^2 \le \|x_n - x^*\|^2 + \beta_n M_3,$$
where $M_3 \ge \sup_{n \ge 1}(2M_1\|x_n - x^*\| + \beta_n M_1^2)$ for some $M_3 > 0$. Using (19) and (20), we get
$$\|x_{n+1} - x^*\|^2 \le [1 - \beta_n(\tau - \delta)](\|x_n - x^*\|^2 + \beta_n M_3) - \gamma_n\Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\big[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\big] + \beta_n M_2 \le \|x_n - x^*\|^2 - \gamma_n\Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\big[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\big] + \beta_n M_4,$$
where $M_4 := M_2 + M_3$. Consequently,
$$\gamma_n\left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\left[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\right] \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_4.$$
Next we claim that
$$\|x_{n+1} - x^*\|^2 \le [1 - \beta_n(\tau - \delta)]\|x_n - x^*\|^2 + \beta_n(\tau - \delta)\Big[\frac{2}{\tau - \delta}\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\Big]$$
for some M > 0 . In fact, it is easy to see that
$$\|w_n - x^*\|^2 \le \|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|\big[2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|\big].$$
Using (16), (18), and (22), we get
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\|w_n - x^*\|^2 + (1 - \beta_n\tau - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|w_n - x^*\|^2\big] + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \\ &\le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\big[\|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|(2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|)\big] \\ &\quad + (1 - \beta_n\tau - \gamma_n)\big\{\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\big[\|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|(2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|)\big]\big\} + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \\ &\le [1 - \beta_n(\tau - \delta)]\|x_n - x^*\|^2 + \beta_n(\tau - \delta)\cdot\Big[\frac{2\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle}{\tau - \delta} + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_n}{\beta_n}\cdot\|x_n - x_{n-1}\|\Big], \end{aligned}$$
where $M \ge \sup_{n \ge 1}\{\|x_n - x^*\|, \alpha_n\|x_n - x_{n-1}\|\}$ for some $M > 0$.
For each $n \ge 0$, we set
$$\Gamma_n = \|x_n - x^*\|^2, \qquad \varepsilon_n = \beta_n(\tau - \delta), \qquad \vartheta_n = 3M\alpha_n\|x_n - x_{n-1}\| + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle.$$
Then (23) can be rewritten as the following formula:
$$\Gamma_{n+1} \le (1 - \varepsilon_n)\Gamma_n + \vartheta_n \quad \forall n \ge 0.$$
We next show the convergence of { Γ n } to zero by the following two cases:
Case 1. Suppose that there exists an integer $n_0 \ge 1$ such that $\{\Gamma_n\}$ is non-increasing. Then
$$\Gamma_n - \Gamma_{n+1} \to 0.$$
From (21), we get
$$\gamma_n\Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\big[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\big] \le \Gamma_n - \Gamma_{n+1} + \beta_n M_4.$$
Since $\beta_n \to 0$, $\Gamma_n - \Gamma_{n+1} \to 0$, $1 - \frac{\mu\lambda_n}{\lambda_{n+1}} \to 1 - \mu$ and $\{\gamma_n\} \subset [a, b] \subset (0, 1)$, we have
$$\lim_{n \to \infty}\|w_n - y_n\| = \lim_{n \to \infty}\|z_n - y_n\| = 0.$$
Using Lemma 1 (v), we deduce from (16) that
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|\beta_n f(x_n) + \gamma_n T z_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n - x^*\|^2 = \|\beta_n(f(x_n) - \rho F v_n) + \gamma_n(T z_n - x^*) + (1 - \gamma_n)(v_n - x^*)\|^2 \\ &\le \|\gamma_n(T z_n - x^*) + (1 - \gamma_n)(v_n - x^*)\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &= \gamma_n\|T z_n - x^*\|^2 + (1 - \gamma_n)\|v_n - x^*\|^2 - \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &= \gamma_n\|T z_n - x^*\|^2 + (1 - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|T_n w_n - x^*\|^2 - \zeta_n(1 - \zeta_n)\|x_n - T_n w_n\|^2\big] - \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &\le \gamma_n\|z_n - x^*\|^2 + (1 - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|w_n - x^*\|^2 - \zeta_n(1 - \zeta_n)\|x_n - T_n w_n\|^2\big] - \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &\le \gamma_n(\|x_n - x^*\| + \beta_n M_1)^2 + (1 - \gamma_n)(\|x_n - x^*\| + \beta_n M_1)^2 - (1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T_n w_n\|^2 - \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\| \\ &= (\|x_n - x^*\| + \beta_n M_1)^2 - (1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T_n w_n\|^2 - \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\|, \end{aligned}$$
which immediately yields
$$(1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T_n w_n\|^2 + \gamma_n(1 - \gamma_n)\|T z_n - v_n\|^2 \le (\|x_n - x^*\| + \beta_n M_1)^2 - \|x_{n+1} - x^*\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\| = \Gamma_n - \Gamma_{n+1} + \beta_n M_1(2\|x_n - x^*\| + \beta_n M_1) + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\|.$$
Since $\beta_n \to 0$, $\Gamma_n - \Gamma_{n+1} \to 0$, $\{\gamma_n\} \subset [a, b] \subset (0, 1)$ and $\{\zeta_n\} \subset [c, d] \subset (0, 1)$, we have
$$\lim_{n \to \infty}\|x_n - T_n w_n\| = \lim_{n \to \infty}\|T z_n - v_n\| = 0.$$
Using Lemma 1 (v) again, we have
$$\|T z_n - v_n\|^2 = \|\zeta_n(T z_n - x_n) + (1 - \zeta_n)(T z_n - T_n w_n)\|^2 = \zeta_n\|T z_n - x_n\|^2 + (1 - \zeta_n)\|T z_n - T_n w_n\|^2 - \zeta_n(1 - \zeta_n)\|T_n w_n - x_n\|^2.$$
So it follows from (26) and $\{\zeta_n\} \subset [c, d] \subset (0, 1)$ that
$$\lim_{n \to \infty}\|T z_n - x_n\| = \lim_{n \to \infty}\|T z_n - T_n w_n\| = 0.$$
Therefore, from (25)–(27), we conclude that
$$\|w_n - z_n\| \le \|w_n - y_n\| + \|y_n - z_n\| \to 0 \quad (n \to \infty),$$
$$\|z_n - T_n z_n\| \le \|z_n - w_n\| + \|w_n - x_n\| + \|x_n - T_n w_n\| + \|T_n w_n - T_n z_n\| \le 2\|z_n - w_n\| + \|w_n - x_n\| + \|x_n - T_n w_n\| \to 0 \quad (n \to \infty),$$
and
$$\begin{aligned} \|x_{n+1} - x_n\| &= \|\beta_n f(x_n) + \gamma_n T z_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n - x_n\| = \|\beta_n(f(x_n) - \rho F v_n) + \gamma_n(T z_n - x_n) + (1 - \gamma_n)(v_n - x_n)\| \\ &\le \beta_n\|f(x_n) - \rho F v_n\| + \gamma_n\|T z_n - x_n\| + (1 - \gamma_n)\|v_n - x_n\| \\ &\le \beta_n(\|f(x_n)\| + \rho\|F v_n\|) + \gamma_n\|T z_n - x_n\| + (1 - \gamma_n)(\|v_n - T z_n\| + \|T z_n - x_n\|) \\ &\le \beta_n(\|f(x_n)\| + \rho\|F v_n\|) + \|T z_n - x_n\| + \|v_n - T z_n\| \to 0 \quad (n \to \infty). \end{aligned}$$
Next, by the boundedness of $\{x_n\}$, we know that there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_n - x^* \rangle = \lim_{k \to \infty}\langle (f - \rho F)x^*, x_{n_k} - x^* \rangle.$$
Further, we may assume that $x_{n_k} \rightharpoonup \hat{x}$. So, from (31) we have
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_n - x^* \rangle = \langle (f - \rho F)x^*, \hat{x} - x^* \rangle.$$
Noticing $w_n - x_n \to 0$ and $x_{n_k} \rightharpoonup \hat{x}$, we obtain $w_{n_k} \rightharpoonup \hat{x}$. Since $x_n - x_{n+1} \to 0$, $w_n - y_n \to 0$, $w_n - z_n \to 0$, $z_n - T_n z_n \to 0$ (due to (25) and (28)–(30)) and $w_{n_k} \rightharpoonup \hat{x}$, by Lemma 10 we get $\hat{x} \in \Omega$. So it follows from (17) and (32) that
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_n - x^* \rangle = \langle (f - \rho F)x^*, \hat{x} - x^* \rangle \le 0,$$
which hence yields
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \le \limsup_{n \to \infty}\big[\|(f - \rho F)x^*\|\|x_{n+1} - x_n\| + \langle (f - \rho F)x^*, x_n - x^* \rangle\big] \le 0.$$
Since $\{\beta_n(\tau - \delta)\} \subset [0, 1]$, $\sum_{n=1}^{\infty}\beta_n(\tau - \delta) = \infty$, and
$$\limsup_{n \to \infty}\Big[\frac{2\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle}{\tau - \delta} + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_n}{\beta_n}\cdot\|x_n - x_{n-1}\|\Big] \le 0,$$
by Lemma 4 we conclude from (23) that $\lim_{n \to \infty}\|x_n - x^*\| = 0$.
Case 2. Suppose that there exists a subsequence $\{\Gamma_{n_k}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_k} < \Gamma_{n_k + 1}$ for all $k \in \mathbb{N}$, where $\mathbb{N}$ is the set of all positive integers. Define the mapping $\tau : \mathbb{N} \to \mathbb{N}$ by
$$\tau(n) := \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}.$$
Using Lemma 7, we have
$$\Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\tau(n)+1}.$$
Putting $\Gamma_n = \|x_n - x^*\|^2$ for all $n \in \mathbb{N}$ and using the same inference as in Case 1, we can obtain
$$\lim_{n \to \infty}\|x_{\tau(n)+1} - x_{\tau(n)}\| = 0$$
and
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_{\tau(n)+1} - x^* \rangle \le 0.$$
Because $\Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1}$ and $\beta_{\tau(n)} > 0$, we conclude from (23) that
$$\|x_{\tau(n)} - x^*\|^2 \le \frac{2}{\tau - \delta}\langle (f - \rho F)x^*, x_{\tau(n)+1} - x^* \rangle + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_{\tau(n)}}{\beta_{\tau(n)}}\cdot\|x_{\tau(n)} - x_{\tau(n)-1}\|,$$
and hence
$$\limsup_{n \to \infty}\|x_{\tau(n)} - x^*\|^2 \le 0.$$
Thus, we have
$$\lim_{n \to \infty}\|x_{\tau(n)} - x^*\|^2 = 0.$$
Using (35), we obtain
$$\|x_{\tau(n)+1} - x^*\|^2 - \|x_{\tau(n)} - x^*\|^2 = 2\langle x_{\tau(n)+1} - x_{\tau(n)}, x_{\tau(n)} - x^* \rangle + \|x_{\tau(n)+1} - x_{\tau(n)}\|^2 \le 2\|x_{\tau(n)+1} - x_{\tau(n)}\|\|x_{\tau(n)} - x^*\| + \|x_{\tau(n)+1} - x_{\tau(n)}\|^2 \to 0 \quad (n \to \infty).$$
Taking into account $\Gamma_n \le \Gamma_{\tau(n)+1}$, we have
$$\|x_n - x^*\|^2 \le \|x_{\tau(n)+1} - x^*\|^2 \le \|x_{\tau(n)} - x^*\|^2 + 2\|x_{\tau(n)+1} - x_{\tau(n)}\|\|x_{\tau(n)} - x^*\| + \|x_{\tau(n)+1} - x_{\tau(n)}\|^2.$$
It is easy to see from (35) that $x_n \to x^*$ as $n \to \infty$. This completes the proof. □
Next, we introduce another Mann-type inertial subgradient extragradient algorithm.
Algorithm 2. Initialization: Let $\lambda_1 > 0$, $\alpha > 0$, $\mu \in (0, 1)$ and $x_0, x_1 \in H$ be arbitrary.
Iterative Steps: Calculate $x_{n+1}$ as follows:
Step 1. Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\alpha_n$ such that $0 \le \alpha_n \le \bar{\alpha}_n$, where
$$\bar{\alpha}_n = \begin{cases} \min\left\{\alpha, \frac{\tau_n}{\|x_n - x_{n-1}\|}\right\} & \text{if } x_n \neq x_{n-1}, \\ \alpha & \text{otherwise}. \end{cases}$$
Step 2. Compute $w_n = x_n + \alpha_n(x_n - x_{n-1})$ and $y_n = P_C(w_n - \lambda_n A w_n)$.
Step 3. Construct the half-space $C_n := \{ z \in H : \langle w_n - \lambda_n A w_n - y_n, z - y_n \rangle \le 0 \}$, and compute $z_n = P_{C_n}(w_n - \lambda_n A y_n)$.
Step 4. Calculate $v_n = \zeta_n x_n + (1 - \zeta_n)T z_n$ and $x_{n+1} = \beta_n f(x_n) + \gamma_n T_n w_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n$, and update
$$\lambda_{n+1} = \begin{cases} \min\left\{\frac{\mu(\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2\langle A w_n - A y_n, z_n - y_n \rangle}, \lambda_n\right\} & \text{if } \langle A w_n - A y_n, z_n - y_n \rangle > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
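Compared with Algorithm 1, only Step 4 changes: the roles of $T z_n$ and $T_n w_n$ are exchanged. In terms of the Algorithm 1 sketch given after Algorithm 1, the modification amounts to the following two lines (a sketch reusing the illustrative names from that snippet).

```python
# Step 4 of Algorithm 2: T z_n enters v_n, while T_n w_n enters x_{n+1}
v = zeta(n) * x + (1 - zeta(n)) * T(z)
x_next = (beta(n) * f(x) + gamma(n) * Tn(w)
          + (1 - gamma(n)) * v - beta(n) * rho * F(v))
```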
It is worth pointing out that Lemmas 8–11 are still valid for Algorithm 2.
Theorem 2.
Let the sequence $\{x_n\}$ be constructed by Algorithm 2. Then $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega$ of the following VIP:
$$\langle (\rho F - f)x^*, p - x^* \rangle \ge 0 \quad \forall p \in \Omega.$$
Proof. 
Utilizing the same arguments as in the proof of Theorem 1, we deduce that there exists a unique solution $x^* \in \Omega = \bigcap_{i=0}^{N}\mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$ to the VIP (17).
We now claim that
$$(1 - \beta_n\tau - \gamma_n)(1 - \zeta_n)\left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\big[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\big] \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_4,$$
for some M 4 > 0 . In fact, observe that
$$x_{n+1} - x^* = \beta_n(f(x_n) - f(x^*)) + \gamma_n(T_n w_n - x^*) + (1 - \gamma_n)\Big[\Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)v_n - \Big(I - \tfrac{\beta_n}{1 - \gamma_n}\rho F\Big)x^*\Big] + \beta_n(f - \rho F)x^*,$$
where $v_n := \zeta_n x_n + (1 - \zeta_n)T z_n$. Using arguments similar to those of (19) and (20), we have
$$\|x_{n+1} - x^*\|^2 \le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\|w_n - x^*\|^2 + (1 - \beta_n\tau - \gamma_n)\Big\{\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\Big[\|w_n - x^*\|^2 - \Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|w_n - y_n\|^2 - \Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|z_n - y_n\|^2\Big]\Big\} + \beta_n M_2$$
and
$$\|w_n - x^*\|^2 \le (\|x_n - x^*\| + \beta_n M_1)^2 \le \|x_n - x^*\|^2 + \beta_n M_3,$$
where $M_2 \ge \sup_{n \ge 1}2\|(f - \rho F)x^*\|\|x_n - x^*\|$ for some $M_2 > 0$ and $M_3 \ge \sup_{n \ge 1}(2M_1\|x_n - x^*\| + \beta_n M_1^2)$ for some $M_3 > 0$. Combining the last inequalities, we obtain
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n(\|x_n - x^*\|^2 + \beta_n M_3) + (1 - \beta_n\tau - \gamma_n)(\|x_n - x^*\|^2 + \beta_n M_3) \\ &\quad - (1 - \beta_n\tau - \gamma_n)(1 - \zeta_n)\Big[\Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|w_n - y_n\|^2 + \Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|z_n - y_n\|^2\Big] + \beta_n M_2 \\ &\le \|x_n - x^*\|^2 - (1 - \beta_n\tau - \gamma_n)(1 - \zeta_n)\Big(1 - \tfrac{\mu\lambda_n}{\lambda_{n+1}}\Big)\big[\|w_n - y_n\|^2 + \|z_n - y_n\|^2\big] + \beta_n M_4, \end{aligned}$$
where $M_4 := M_2 + M_3$. This ensures that (39) holds.
Next we claim that
$$\|x_{n+1} - x^*\|^2 \le [1 - \beta_n(\tau - \delta)]\|x_n - x^*\|^2 + \beta_n(\tau - \delta)\Big[\frac{2}{\tau - \delta}\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\Big]$$
for some $M > 0$. In fact, using arguments similar to those of (22) and (23), we have
$$\|w_n - x^*\|^2 \le \|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|\big[2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|\big],$$
and
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \beta_n\delta\|x_n - x^*\|^2 + \gamma_n\|w_n - x^*\|^2 + (1 - \beta_n\tau - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|z_n - x^*\|^2\big] + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \\ &\le \beta_n\delta\|x_n - x^*\|^2 + (1 - \beta_n\tau)\big[\|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|(2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|)\big] + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \\ &\le [1 - \beta_n(\tau - \delta)]\|x_n - x^*\|^2 + \alpha_n\|x_n - x_{n-1}\|(2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|) + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \\ &\le [1 - \beta_n(\tau - \delta)]\|x_n - x^*\|^2 + \beta_n(\tau - \delta)\cdot\Big[\frac{2\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle}{\tau - \delta} + \frac{3M}{\tau - \delta}\cdot\frac{\alpha_n}{\beta_n}\cdot\|x_n - x_{n-1}\|\Big], \end{aligned}$$
where $M \ge \sup_{n \ge 1}\{\|x_n - x^*\|, \alpha_n\|x_n - x_{n-1}\|\}$ for some $M > 0$.
For each $n \ge 0$, we set
$$\Gamma_n = \|x_n - x^*\|^2, \qquad \varepsilon_n = \beta_n(\tau - \delta), \qquad \vartheta_n = 3M\alpha_n\|x_n - x_{n-1}\| + 2\beta_n\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle.$$
Then (41) can be rewritten as the following formula:
$$\Gamma_{n+1} \le (1 - \varepsilon_n)\Gamma_n + \vartheta_n \quad \forall n \ge 0.$$
We next show the convergence of { Γ n } to zero by the following two cases:
Case 3. Suppose that there exists an integer $n_0 \ge 1$ such that $\{\Gamma_n\}$ is non-increasing. Then
$$\Gamma_n - \Gamma_{n+1} \to 0.$$
Using arguments similar to those of (25), we have
$$\lim_{n \to \infty}\|w_n - y_n\| = \lim_{n \to \infty}\|z_n - y_n\| = 0.$$
Using Lemma 1 (v), we get
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|\beta_n(f(x_n) - \rho F v_n) + \gamma_n(T_n w_n - x^*) + (1 - \gamma_n)(v_n - x^*)\|^2 \\ &\le \|\gamma_n(T_n w_n - x^*) + (1 - \gamma_n)(v_n - x^*)\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &= \gamma_n\|T_n w_n - x^*\|^2 + (1 - \gamma_n)\|v_n - x^*\|^2 - \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &= \gamma_n\|T_n w_n - x^*\|^2 + (1 - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|T z_n - x^*\|^2 - \zeta_n(1 - \zeta_n)\|x_n - T z_n\|^2\big] - \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &\le \gamma_n\|w_n - x^*\|^2 + (1 - \gamma_n)\big[\zeta_n\|x_n - x^*\|^2 + (1 - \zeta_n)\|z_n - x^*\|^2 - \zeta_n(1 - \zeta_n)\|x_n - T z_n\|^2\big] - \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 + 2\beta_n\langle f(x_n) - \rho F v_n, x_{n+1} - x^* \rangle \\ &\le \gamma_n(\|x_n - x^*\| + \beta_n M_1)^2 + (1 - \gamma_n)(\|x_n - x^*\| + \beta_n M_1)^2 - (1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T z_n\|^2 - \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\| \\ &= (\|x_n - x^*\| + \beta_n M_1)^2 - (1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T z_n\|^2 - \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\|, \end{aligned}$$
which immediately yields
$$(1 - \gamma_n)\zeta_n(1 - \zeta_n)\|x_n - T z_n\|^2 + \gamma_n(1 - \gamma_n)\|T_n w_n - v_n\|^2 \le (\|x_n - x^*\| + \beta_n M_1)^2 - \|x_{n+1} - x^*\|^2 + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\| = \Gamma_n - \Gamma_{n+1} + \beta_n M_1(2\|x_n - x^*\| + \beta_n M_1) + 2\beta_n\|f(x_n) - \rho F v_n\|\|x_{n+1} - x^*\|.$$
Since $\beta_n \to 0$, $\Gamma_n - \Gamma_{n+1} \to 0$, $\{\gamma_n\} \subset [a, b] \subset (0, 1)$ and $\{\zeta_n\} \subset [c, d] \subset (0, 1)$, we have
$$\lim_{n \to \infty}\|x_n - T z_n\| = \lim_{n \to \infty}\|T_n w_n - v_n\| = 0.$$
Note that
$$\|T_n w_n - v_n\|^2 = \|\zeta_n(T_n w_n - x_n) + (1 - \zeta_n)(T_n w_n - T z_n)\|^2 = \zeta_n\|T_n w_n - x_n\|^2 + (1 - \zeta_n)\|T_n w_n - T z_n\|^2 - \zeta_n(1 - \zeta_n)\|T z_n - x_n\|^2.$$
Hence, from (44) we have
$$\lim_{n \to \infty}\|T_n w_n - x_n\| = \lim_{n \to \infty}\|T_n w_n - T z_n\| = 0.$$
So, from (43)–(45) we infer that
$$\|w_n - z_n\| \le \|w_n - y_n\| + \|y_n - z_n\| \to 0 \quad (n \to \infty),$$
$$\|z_n - T_n z_n\| \le \|z_n - w_n\| + \|w_n - x_n\| + \|x_n - T_n w_n\| + \|T_n w_n - T_n z_n\| \le 2\|z_n - w_n\| + \|w_n - x_n\| + \|x_n - T_n w_n\| \to 0 \quad (n \to \infty),$$
and
$$\begin{aligned} \|x_{n+1} - x_n\| &= \|\beta_n f(x_n) + \gamma_n T_n w_n + ((1 - \gamma_n)I - \beta_n\rho F)v_n - x_n\| = \|\beta_n(f(x_n) - \rho F v_n) + \gamma_n(T_n w_n - x_n) + (1 - \gamma_n)(v_n - x_n)\| \\ &\le \beta_n\|f(x_n) - \rho F v_n\| + \gamma_n\|T_n w_n - x_n\| + (1 - \gamma_n)\|v_n - x_n\| \\ &\le \beta_n(\|f(x_n)\| + \rho\|F v_n\|) + \gamma_n\|T_n w_n - x_n\| + (1 - \gamma_n)(\|v_n - T_n w_n\| + \|T_n w_n - x_n\|) \\ &\le \beta_n(\|f(x_n)\| + \rho\|F v_n\|) + \|T_n w_n - x_n\| + \|v_n - T_n w_n\| \to 0 \quad (n \to \infty). \end{aligned}$$
In addition, using arguments similar to those of (33) and (34), we have
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_n - x^* \rangle \le 0,$$
and hence
$$\limsup_{n \to \infty}\langle (f - \rho F)x^*, x_{n+1} - x^* \rangle \le 0.$$
Consequently, applying Lemma 4 to (41), we have $\lim_{n \to \infty}\|x_n - x^*\| = 0$.
Case 4. Suppose that there exists a subsequence $\{\Gamma_{n_k}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_k} < \Gamma_{n_k + 1}$ for all $k \in \mathbb{N}$, where $\mathbb{N}$ is the set of all positive integers. Define the mapping $\tau : \mathbb{N} \to \mathbb{N}$ by $\tau(n) := \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}$. In the remainder of the proof, using the same arguments as in Case 2 of the proof of Theorem 1, we obtain the desired assertion. This completes the proof. □
It is worth remarking that our results improve and extend the corresponding results of Kraikaew and Saejung [20] and Ceng et al. [11] in the following aspects.
(i) Our problem of finding an element of $\bigcap_{i=0}^{N}\mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$ includes as a special case the problem of finding an element of $\mathrm{VI}(C, A)$ in [20], where $T_1, \ldots, T_N$ are nonexpansive and $T_0 = T$ is quasi-nonexpansive. It is worth mentioning that Halpern's subgradient extragradient method for solving the VIP in [20] is extended to develop our Mann-type inertial subgradient extragradient rule for solving the VIP and CFPP, in which $A$ is $L$-Lipschitz continuous and pseudomonotone on $H$, but is not required to be sequentially weakly continuous on $C$.
(ii) Our problem of finding an element of $\bigcap_{i=0}^{N}\mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$ includes as a special case the problem of finding an element of $\bigcap_{i=1}^{N}\mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$ in [11], where $A$ is required to be $L$-Lipschitz continuous, pseudomonotone on $H$, and sequentially weakly continuous on $C$. The modified inertial subgradient extragradient method for solving the VIP and CFPP in [11] is extended to develop our Mann-type inertial subgradient extragradient rule for solving the VIP and CFPP, where $T_i$ is nonexpansive for $i = 1, \ldots, N$ and $T_0 = T$ is quasi-nonexpansive.

4. Applicability and Implementability of Algorithms

In this section, in order to support the applicability and implementability of our Algorithms 1 and 2, we make use of our main results to find a common solution of the VIP and CFPP in two illustrative examples.
Example 2.
Let $C = [-1, 1]$ and $H = \mathbb{R}$ with the inner product $\langle a, b \rangle = ab$ and induced norm $\|\cdot\| = |\cdot|$. Let $x_0, x_1 \in H$ be arbitrary. Put $f(x) = F(x) = \frac{1}{2}x$, $\beta_n = \frac{1}{n+1}$, $\tau_n = \beta_n^2$, $\mu = 0.2$, $\alpha = \lambda_1 = 0.1$, $\gamma_n = \zeta_n = \frac{1}{3}$, $\rho = 2$, and
$$\alpha_n = \begin{cases} \min\left\{\frac{\beta_n^2}{\|x_n - x_{n-1}\|}, \alpha\right\} & \text{if } x_n \neq x_{n-1}, \\ \alpha & \text{otherwise}. \end{cases}$$
Then we know that $\kappa = \eta = \frac{1}{2}$ and $\tau = 1 - \sqrt{1 - \rho(2\eta - \rho\kappa^2)} = 1 \in (0, 1]$. For $N = 1$, we now present a Lipschitz continuous and pseudomonotone mapping $A$, a quasi-nonexpansive mapping $T$ and a nonexpansive mapping $T_1$ such that $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A) \neq \emptyset$. Indeed, let $A, T, T_1 : H \to H$ be defined as $Ax := \frac{1}{1 + |\sin x|} - \frac{1}{1 + |x|}$, $T_1 x := \sin x$ and $Tx := \frac{x}{2}\sin x$ for all $x \in H$. We first show that $A$ is pseudomonotone and $L$-Lipschitz continuous with $L = 2$. Indeed, it is easy to see that for all $x, y \in H$,
$$|Ax - Ay| = \left|\frac{1}{1 + |\sin x|} - \frac{1}{1 + |x|} - \frac{1}{1 + |\sin y|} + \frac{1}{1 + |y|}\right| \le \left|\frac{|y| - |x|}{(1 + |x|)(1 + |y|)}\right| + \left|\frac{|\sin y| - |\sin x|}{(1 + |\sin x|)(1 + |\sin y|)}\right| \le \frac{|x - y|}{(1 + |x|)(1 + |y|)} + \frac{|\sin x - \sin y|}{(1 + |\sin x|)(1 + |\sin y|)} \le 2|x - y|,$$
and
$$\langle Ax, y - x \rangle = \left(\frac{1}{1 + |\sin x|} - \frac{1}{1 + |x|}\right)(y - x) \ge 0 \;\Longrightarrow\; \langle Ay, y - x \rangle = \left(\frac{1}{1 + |\sin y|} - \frac{1}{1 + |y|}\right)(y - x) \ge 0.$$
Furthermore, it is clear that $\mathrm{Fix}(T) = \{0\}$ and that $T$ is quasi-nonexpansive but not nonexpansive. Meanwhile, $I - T$ is demiclosed at $0$ due to the continuity of $T$. In addition, it is clear that $T_1$ is nonexpansive and $\mathrm{Fix}(T_1) = \{0\}$. Therefore, $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A) = \{0\}$. In this case, Algorithm 1 can be rewritten as follows:
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = P_C(w_n - \lambda_n A w_n), \\ z_n = P_{C_n}(w_n - \lambda_n A y_n), \\ v_n = \frac{1}{3}x_n + \frac{2}{3}T_1 w_n, \\ x_{n+1} = \frac{1}{n+1}\cdot\frac{1}{2}x_n + \frac{1}{3}T z_n + \left(\frac{n}{n+1} - \frac{1}{3}\right)v_n \end{cases} \quad \forall n \ge 1,$$
where for each $n \ge 1$, $C_n$ and $\lambda_n$ are chosen as in Algorithm 1. So, using Theorem 1, we know that $\{x_n\}$ converges to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$. Meanwhile, Algorithm 2 can be rewritten as follows:
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = P_C(w_n - \lambda_n A w_n), \\ z_n = P_{C_n}(w_n - \lambda_n A y_n), \\ v_n = \frac{1}{3}x_n + \frac{2}{3}T z_n, \\ x_{n+1} = \frac{1}{n+1}\cdot\frac{1}{2}x_n + \frac{1}{3}T_1 w_n + \left(\frac{n}{n+1} - \frac{1}{3}\right)v_n \end{cases} \quad \forall n \ge 1,$$
where, for each $n \ge 1$, $C_n$ and $\lambda_n$ are chosen as in Algorithm 2. So, using Theorem 2, we know that $\{x_n\}$ converges to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$.
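As a quick illustration of how Example 2 can be run numerically, the lines below plug its data into the Algorithm 1 sketch given after Algorithm 1; the routine name mann_inertial_sea and the starting points 0.7 and 0.5 are illustrative assumptions, not part of the example.

```python
import numpy as np

A  = lambda x: 1.0 / (1.0 + abs(np.sin(x))) - 1.0 / (1.0 + abs(x))
T1 = np.sin                            # nonexpansive, Fix(T1) = {0}
T  = lambda x: 0.5 * x * np.sin(x)     # quasi-nonexpansive, Fix(T) = {0}
f  = lambda x: 0.5 * x
F  = lambda x: 0.5 * x
proj_C = lambda x: np.clip(x, -1.0, 1.0)   # P_C for C = [-1, 1]

x = mann_inertial_sea(A, proj_C, f, F, T, [T1], x0=0.7, x1=0.5)
print(float(x))   # expected to be close to 0, the unique point of Omega
```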
Example 3.
Let $H = L^2([0, 1])$ with the inner product and induced norm defined by
$$\langle x, y \rangle = \int_0^1 x(t)y(t)\,dt \quad \text{and} \quad \|x\| = \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2} \quad \forall x, y \in H,$$
respectively. Then $(H, \langle\cdot,\cdot\rangle)$ is a Hilbert space. Let $C := \{x \in H : \|x\| \le 1\}$ be the closed unit ball of $H$. It is known that
$$P_C(x) = \begin{cases} \frac{x}{\|x\|} & \text{if } \|x\| > 1, \\ x & \text{if } \|x\| \le 1. \end{cases}$$
Let $x_0, x_1 \in H$ be arbitrary. Put $f(x) = F(x) = \frac{1}{2}x$, $\beta_n = \frac{1}{n+1}$, $\tau_n = \beta_n^2$, $\mu = 0.2$, $\alpha = \lambda_1 = 0.1$, $\gamma_n = \zeta_n = \frac{1}{3}$, $\rho = 2$, and
$$\alpha_n = \begin{cases} \min\left\{\frac{\beta_n^2}{\|x_n - x_{n-1}\|}, \alpha\right\} & \text{if } x_n \neq x_{n-1}, \\ \alpha & \text{otherwise}. \end{cases}$$
Then we know that $\kappa = \eta = \frac{1}{2}$ and $\tau = 1 - \sqrt{1 - \rho(2\eta - \rho\kappa^2)} = 1 \in (0, 1]$. For $N = 1$, we now present a Lipschitz continuous and pseudomonotone mapping $A$, a quasi-nonexpansive mapping $T$ and a nonexpansive mapping $T_1$ such that $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A) \neq \emptyset$. Indeed, let $A, T, T_1 : H \to H$ be defined as $(Ax)(t) := \max\{0, x(t)\}$, $(T_1 x)(t) := \frac{1}{2}x(t) - \frac{1}{2}\sin x(t)$ and $(Tx)(t) := \frac{1}{2}x(t) + \frac{1}{2}\sin x(t)$ for all $x \in H$. It can be easily verified (see, e.g., [8,9]) that $A$ is monotone and $L$-Lipschitz continuous with $L = 1$, and the solution set of the VIP for $A$ is given by
$$\mathrm{VI}(C, A) = \{0\}.$$
We next show that $T$ and $T_1$ are nonexpansive and $\mathrm{Fix}(T) = \mathrm{Fix}(T_1) = \{0\}$. Indeed, it is easy to see that for all $x, y \in H$,
$$\|Tx - Ty\| = \left(\int_0^1 \left|\tfrac{1}{2}(x(t) - y(t)) + \tfrac{1}{2}(\sin x(t) - \sin y(t))\right|^2 dt\right)^{1/2} \le \left(\int_0^1 \left(\tfrac{1}{2}|x(t) - y(t)| + \tfrac{1}{2}|x(t) - y(t)|\right)^2 dt\right)^{1/2} = \left(\int_0^1 |x(t) - y(t)|^2 dt\right)^{1/2} = \|x - y\|.$$
Similarly, we get $\|T_1 x - T_1 y\| \le \|x - y\|$ for all $x, y \in H$. Moreover, it is clear that $\mathrm{Fix}(T) = \mathrm{Fix}(T_1) = \{0\}$. Therefore, $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A) = \{0\}$. In this case, Algorithm 1 can be rewritten as follows:
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = P_C(w_n - \lambda_n A w_n), \\ z_n = P_{C_n}(w_n - \lambda_n A y_n), \\ v_n = \frac{1}{3}x_n + \frac{2}{3}T_1 w_n, \\ x_{n+1} = \frac{1}{n+1}\cdot\frac{1}{2}x_n + \frac{1}{3}T z_n + \left(\frac{n}{n+1} - \frac{1}{3}\right)v_n \end{cases} \quad \forall n \ge 1,$$
where for each $n \ge 1$, $C_n$ and $\lambda_n$ are chosen as in Algorithm 1. So, using Theorem 1, we know that $\{x_n\}$ converges strongly to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$. Meanwhile, Algorithm 2 can be rewritten as follows:
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = P_C(w_n - \lambda_n A w_n), \\ z_n = P_{C_n}(w_n - \lambda_n A y_n), \\ v_n = \frac{1}{3}x_n + \frac{2}{3}T z_n, \\ x_{n+1} = \frac{1}{n+1}\cdot\frac{1}{2}x_n + \frac{1}{3}T_1 w_n + \left(\frac{n}{n+1} - \frac{1}{3}\right)v_n \end{cases} \quad \forall n \ge 1,$$
where for each $n \ge 1$, $C_n$ and $\lambda_n$ are chosen as in Algorithm 2. So, using Theorem 2, we know that $\{x_n\}$ converges strongly to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$.
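The infinite-dimensional Example 3 can also be explored numerically after discretising $[0, 1]$ on a uniform grid and approximating the $L^2$ norm by a weighted sum. The sketch below reuses the illustrative mann_inertial_sea routine from Section 3; the grid size, the starting functions, and the fact that the driver uses Euclidean norms internally (which only rescales the inertial bound) are assumptions of this illustration, not part of the paper.

```python
import numpy as np

m  = 200
t  = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
l2 = lambda x: np.sqrt(np.sum(np.abs(x) ** 2) * dt)      # discrete L^2 norm

A  = lambda x: np.maximum(0.0, x)                         # (Ax)(t) = max{0, x(t)}
T1 = lambda x: 0.5 * x - 0.5 * np.sin(x)
T  = lambda x: 0.5 * x + 0.5 * np.sin(x)
f  = lambda x: 0.5 * x
F  = lambda x: 0.5 * x
proj_C = lambda x: x if l2(x) <= 1.0 else x / l2(x)       # P_C onto the unit ball

x0 = np.sin(2 * np.pi * t)        # arbitrary starting functions
x1 = np.cos(2 * np.pi * t)
x  = mann_inertial_sea(A, proj_C, f, F, T, [T1], x0, x1)
print(l2(x))   # expected to tend to 0, since Omega = {0}
```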

Author Contributions

All the authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The research of J. C. Yao was partially supported by the Grant MOST 108-2115-M-039-005-MY3.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756. [Google Scholar]
  2. Cho, S.Y. A monotone Bregman projection algorithm for fixed point and equilibrium problems in a reflexive Banach space. Filomat 2020, 34, 1487–1497. [Google Scholar] [CrossRef]
  3. Nguyen, L.V.; Ansari, Q.H.; Qin, X. Weak sharpness and finite convergence for solutions of nonsmooth variational inequalities in Hilbert spaces. Appl. Math. Optim. 2020. [Google Scholar] [CrossRef]
  4. Liu, L. A hybrid steepest descent method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488. [Google Scholar]
  5. Cho, S.Y. A convergence theorem for generalized mixed equilibrium problems and multivalued asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2020, 21, 1017–1026. [Google Scholar]
  6. Ceng, L.C.; Shang, M. Generalized Mann viscosity implicit rules for solving systems of variational inequalities with constraints of variational inclusions and fixed point problems. Mathematics 2019, 7, 933. [Google Scholar] [CrossRef] [Green Version]
  7. Cho, S.Y.; Bin Dehaish, B.A. Weak convergence of a splitting algorithm in Hilbert spaces. J. Appl. Anal. Comput. 2017, 7, 427–438. [Google Scholar]
  8. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224. [Google Scholar] [CrossRef]
  9. Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351. [Google Scholar] [CrossRef]
  10. Shehu, Y.; Dong, Q.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409. [Google Scholar] [CrossRef]
  11. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108. [Google Scholar] [CrossRef]
  12. Ceng, L.C.; Shang, M.J. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2021, 70, 715–740. [Google Scholar] [CrossRef]
  13. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Tan, B.; Xu, S.; Li, S. Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. 2020, 21, 871–884. [Google Scholar]
  15. Fan, J. A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities. Optimization 2020, 69, 2199–2215. [Google Scholar] [CrossRef]
  16. Qin, X.; Cho, S.Y.; Wang, L. Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 2018, 67, 1377–1388. [Google Scholar] [CrossRef]
  17. Ceng, L.C.; Yuan, Q. Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. 2019, 2019, 374. [Google Scholar] [CrossRef]
  18. Nguyen, L.V.; Qin, X. Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. 2020, 28, 239–257. [Google Scholar] [CrossRef]
  19. Ceng, L.C.; Postolache, M.; Yao, Y. Iterative algorithms for a system of variational inclusions in Banach spaces. Symmetry 2019, 11, 811. [Google Scholar] [CrossRef] [Green Version]
  20. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
  21. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Alg. 2018, 79, 597–610. [Google Scholar] [CrossRef]
  22. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Alg. 2019, 80, 1283–1307. [Google Scholar] [CrossRef]
  23. Zhou, H.; Qin, X. Fixed Points of Nonlinear Operators; Iterative Methods; De Gruyter: Berlin, Germany, 2020. [Google Scholar]
  24. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765. [Google Scholar] [CrossRef]
  25. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201. [Google Scholar] [CrossRef]
  26. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
