Article

Variational Inequalities Approaches to Minimization Problems with Constraints of Generalized Mixed Equilibria and Variational Inclusions

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Center for General Education, China Medical University, Taichung 40402, Taiwan
3 Romanian Academy, Gh. Mihoc-C. Iacob Institute of Mathematical Statistics and Applied Mathematics, 050711 Bucharest, Romania
4 Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania
5 Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan
6 School of Mathematical Sciences, Tianjin Polytechnic University, Tianjin 300387, China
7 The Key Laboratory of Intelligent Information and Data Processing of NingXia Province, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 270; https://doi.org/10.3390/math7030270
Submission received: 18 February 2019 / Revised: 10 March 2019 / Accepted: 13 March 2019 / Published: 16 March 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: Multistep composite implicit and explicit extragradient-like schemes are presented for solving the minimization problem with the constraints of variational inclusions and generalized mixed equilibrium problems. Strong convergence results for the introduced schemes are given under suitable control conditions.

1. Introduction

Let $H$ be a real Hilbert space and $C \subseteq H$ be a closed convex set. Let $A : C \to H$ be a nonlinear operator. Let us consider the variational inequality problem (VIP), which aims to find $x^* \in C$ verifying
$$\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1}$$
We denote the solution set of VIP (1) by $\mathrm{VI}(C, A)$.
In [1], Korpelevich suggested an extragradient method for solving VIP (1). Korpelevich's method has been studied by many authors; see, e.g., references [2,3,4,5,6,7,8,9,10,11,12,13] and the references therein. In 2011, Ceng, Ansari and Yao [14] considered the following algorithm:
$$u_{k+1} = P_C[\alpha_k \gamma V u_k + (I - \alpha_k \mu F) T u_k], \quad \forall k \ge 0. \tag{2}$$
They showed that $\{u_k\}$ converges strongly to $x^\dagger \in \mathrm{Fix}(T)$, which solves the following VIP:
$$\langle (\mu F - \gamma V) x^\dagger, x^\dagger - \tilde{x} \rangle \le 0, \quad \forall \tilde{x} \in \mathrm{Fix}(T). \tag{3}$$
Ceng, Guu and Yao [15] presented a composite implicit scheme
$$x_t = (I - \theta_t B) T x_t + \theta_t [T x_t - t (\mu F T x_t - \gamma V x_t)], \tag{4}$$
and another composite explicit scheme
$$y_n = (I - \alpha_n \mu F) T x_n + \alpha_n \gamma V x_n, \qquad x_{n+1} = (I - \beta_n B) T x_n + \beta_n y_n, \quad \forall n \ge 0. \tag{5}$$
They proved that $\{x_t\}$ and $\{x_n\}$ converge strongly to the same point $x^\dagger \in \mathrm{Fix}(T)$, which solves the following VIP:
$$\langle (B - I) x^\dagger, x^\dagger - \tilde{x} \rangle \le 0, \quad \forall \tilde{x} \in \mathrm{Fix}(T). \tag{6}$$
In [2], Peng and Yao considered the generalized mixed equilibrium problem (GMEP), which aims to find $x \in C$ verifying
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle A x, y - x \rangle \ge 0, \quad \forall y \in C, \tag{7}$$
where $\varphi : C \to \mathbb{R}$ is a real-valued function and $\Theta : C \times C \to \mathbb{R}$ is a bifunction. The set of solutions of GMEP (7) is denoted by $\mathrm{GMEP}(\Theta, \varphi, A)$.
It is clear that GMEP (7) includes optimization problems, VIP and Nash equilibrium problems as special cases.
Special cases:
(i) Letting $\varphi = 0$, GMEP (7) reduces to the problem of finding $x \in C$ verifying
$$(\mathrm{GEP}):\quad \Theta(x, y) + \langle A x, y - x \rangle \ge 0, \quad \forall y \in C,$$
which was studied in [13,16].
(ii) Letting $A = 0$, GMEP (7) reduces to the problem of finding $x \in C$ verifying
$$(\mathrm{MEP}):\quad \Theta(x, y) + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C,$$
which was considered in [17].
(iii) Letting $\varphi = 0$ and $A = 0$, GMEP (7) reduces to the problem of finding $x \in C$ verifying
$$(\mathrm{EP}):\quad \Theta(x, y) \ge 0, \quad \forall y \in C,$$
which was discussed in [18,19].
GMEP (7) has been discussed extensively in the literature; see e.g., [6,7,20,21,22,23,24,25,26,27,28,29,30,31,32].
Now, we consider the minimization problem (CMP):
$$\min_{x \in C} f(x), \tag{8}$$
where $f : C \to \mathbb{R}$ is a convex and continuously Fréchet differentiable functional. Use $\Xi$ to denote the set of minimizers of CMP (8).
For solving (8), an efficient algorithm is the gradient-projection algorithm (GPA), defined as follows: given an initial guess $z_0$,
$$z_{k+1} := P_C(z_k - \mu \nabla f(z_k)), \quad \forall k \ge 0, \tag{9}$$
or, in a more general form,
$$z_{k+1} := P_C(z_k - \mu_k \nabla f(z_k)), \quad \forall k \ge 0. \tag{10}$$
It is known that $S := P_C(I - \lambda \nabla f)$ is a contractive operator if $\nabla f$ is $\alpha$-strongly monotone and $L$-Lipschitz and $0 < \lambda < \frac{2\alpha}{L^2}$. In this case, GPA (9) converges strongly.
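To make the mechanics of GPA (9) concrete, the following is a minimal numerical sketch, assuming the illustrative model problem $f(x) = \frac{1}{2}\|x - b\|^2$ over the box $C = [0,1]^2$; the objective, the set $C$, the point $b$ and the step size are hypothetical choices for demonstration, not data from the paper:

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^d (componentwise clipping).
    return np.clip(x, lo, hi)

def gpa(grad_f, project, z0, lam, iters=200):
    # Gradient-projection algorithm: z_{k+1} = P_C(z_k - lam * grad_f(z_k)).
    z = z0
    for _ in range(iters):
        z = project(z - lam * grad_f(z))
    return z

# f(x) = 0.5 * ||x - b||^2, so grad f(x) = x - b is 1-strongly monotone and
# 1-Lipschitz; the contraction condition 0 < lam < 2*alpha/L^2 = 2 holds here.
b = np.array([2.0, -0.5])
z_star = gpa(lambda z: z - b, lambda x: project_box(x, 0.0, 1.0),
             np.zeros(2), lam=1.0)
# The constrained minimizer is the projection of b onto C = [0,1]^2: (1.0, 0.0).
```

Here the iteration reaches the minimizer quickly because the full step $\lambda = 1$ makes $z - \lambda \nabla f(z) = b$ exactly; for a general convex $f$ one observes geometric convergence under the stated step-size condition.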
Recall that the variational inclusion problem is to find a point $x \in C$ verifying
$$0 \in B x + R x, \tag{11}$$
where $B : C \to H$ is a single-valued mapping and $R$ is a multivalued mapping with $D(R) = C$. Use $\mathrm{I}(B, R)$ to denote the solution set of (11).
Taking $B = 0$ in (11), it reduces to the problem introduced by Rockafellar [33]. Let $R : D(R) \subseteq H \to 2^H$ be a maximal monotone operator. The resolvent operator $J_{R,\lambda} : H \to \overline{D(R)}$ is defined by $J_{R,\lambda} x = (I + \lambda R)^{-1} x$, $\forall x \in H$. In [34], Huang considered problem (11) under the assumptions that $B$ is a strongly monotone Lipschitz operator and $R$ is maximal monotone. Zeng, Guu and Yao [35] discussed problem (11) in more general settings. For related work, please refer to [36,37] and the references therein.
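Since the schemes below are driven by resolvents, a small worked instance may help. For the maximal monotone operator $R = \partial|\cdot|$ on $\mathbb{R}$, the resolvent $J_{R,\lambda} = (I + \lambda R)^{-1}$ is the familiar soft-thresholding map; the sketch below, with illustrative values of $x$ and $\lambda$, checks the defining relation $x \in (I + \lambda R)(J_{R,\lambda} x)$ numerically:

```python
import numpy as np

def resolvent_abs(x, lam):
    # Resolvent J_{R,lam} = (I + lam*R)^{-1} of the maximal monotone operator
    # R = subdifferential of |.| on the reals, i.e. soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = 3.0
lam = 0.7
y = resolvent_abs(x, lam)           # y = 2.3
# Verify x in (I + lam*R)(y): for y != 0 we have R(y) = {sign(y)},
# so x must equal y + lam*sign(y).
recovered = y + lam * np.sign(y)
```

The same single-valued resolvent is what the iterative schemes of Section 3 compose with the forward steps $I - \lambda B$.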
In the present paper, we introduce two composite schemes for finding a solution of the CMP (8) with the constraints of finitely many GMEPs and finitely many variational inclusions for maximal monotone and inverse-strongly monotone mappings in a real Hilbert space H. Strong convergence of the suggested algorithms is established. Our theorems complement, develop and extend the results obtained in [6,15,38], having as background [39,40,41,42,43,44,45,46].

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that the metric projection $P_C : H \to C$ is defined by $\|x - P_C x\| = \inf_{y \in C} \|x - y\| =: d(x, C)$. A mapping $A : H \to H$ is called strongly positive if $\langle A x, x \rangle \ge \bar{\gamma} \|x\|^2$, $\forall x \in H$, for some $\bar{\gamma} > 0$. A mapping $F : C \to H$ is called Lipschitz if $\|F x - F y\| \le L \|x - y\|$, $\forall x, y \in C$, for some $L \ge 0$; $F$ is called nonexpansive if $L = 1$ and contractive provided $L \in [0, 1)$.
Proposition 1.
Let $x \in H$ and $z \in C$. Then, the following hold:
  • $z = P_C x \iff \langle x - z, z - y \rangle \ge 0, \ \forall y \in C$;
  • $z = P_C x \iff \|x - z\|^2 + \|z - y\|^2 \le \|x - y\|^2, \ \forall y \in C$;
  • $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2, \ \forall x, y \in H$.
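The first characterization of the projection can be checked numerically. The sketch below, assuming the illustrative set $C$ = closed unit ball in $\mathbb{R}^2$ and a hypothetical outside point $x$, samples many points $y \in C$ and verifies $\langle x - P_C x, P_C x - y \rangle \ge 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x):
    # Metric projection onto the closed unit ball C = {y : ||y|| <= 1}.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([3.0, 4.0])
z = project_ball(x)                  # z = x / ||x|| = (0.6, 0.8)
# Characterization: z = P_C x  iff  <x - z, z - y> >= 0 for every y in C.
ys = np.array([project_ball(y) for y in rng.normal(size=(1000, 2))])
ok = all((x - z) @ (z - y) >= -1e-9 for y in ys)
```

For the ball this is also easy to see analytically: $x - z$ is a positive multiple of $z$, so $\langle x - z, z - y \rangle$ is proportional to $1 - \langle z, y \rangle \ge 0$ for $\|y\| \le 1$.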
Definition 1.
A mapping $F : C \to H$ is called
  • monotone if $\langle F x - F y, x - y \rangle \ge 0$, $\forall x, y \in C$;
  • $\eta$-strongly monotone if $\langle F x - F y, x - y \rangle \ge \eta \|x - y\|^2$, $\forall x, y \in C$, for some $\eta > 0$;
  • $\alpha$-inverse-strongly monotone (ism) if $\langle F x - F y, x - y \rangle \ge \alpha \|F x - F y\|^2$, $\forall x, y \in C$, for some $\alpha > 0$. In this case, for all $u, v \in C$ we have
    $$\|(I - \mu F) u - (I - \mu F) v\|^2 \le \|u - v\|^2 + \mu (\mu - 2\alpha) \|F u - F v\|^2, \tag{12}$$
    where $\mu > 0$ is a constant.
Definition 2.
A mapping $T : H \to H$ is called firmly nonexpansive if $2T - I$ is nonexpansive. Equivalently, $T$ is firmly nonexpansive iff $T = \frac{1}{2}(I + S)$, where $S : H \to H$ is nonexpansive.
Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:
(A1)
$\Theta(x, x) = 0$ for all $x \in C$;
(A2)
$\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;
(A3)
for each $x, y, z \in C$, $\limsup_{t \to 0^+} \Theta(t z + (1 - t) x, y) \le \Theta(x, y)$;
(A4)
$\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$.
Let $\varphi : C \to \mathbb{R}$ be a lower semicontinuous and convex function satisfying one of the following conditions:
(B1)
for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$$
(B2)
$C$ is a bounded set.
Let $r > 0$. Define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows: for any $x \in H$,
$$T_r^{(\Theta, \varphi)}(x) := \Big\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r} \langle z - x, y - z \rangle \ge 0, \ \forall y \in C \Big\}.$$
If $\varphi \equiv 0$, then $T_r^{\Theta}(x) := \{ z \in C : \Theta(z, y) + \frac{1}{r} \langle z - x, y - z \rangle \ge 0, \ \forall y \in C \}$.
Proposition 2
([17]). Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying assumptions (A1)–(A4), and let $\varphi : C \to \mathbb{R}$ be a proper lower semicontinuous convex function. Assume that either (B1) or (B2) holds. Then, we have:
  • for each $x \in H$, $T_r^{(\Theta, \varphi)}(x)$ is single-valued;
  • $T_r^{(\Theta, \varphi)}$ is firmly nonexpansive;
  • $\mathrm{Fix}(T_r^{(\Theta, \varphi)}) = \mathrm{MEP}(\Theta, \varphi)$;
  • $\mathrm{MEP}(\Theta, \varphi)$ is convex and closed;
  • for all $s, t > 0$ and $z \in H$, $\|T_s^{(\Theta, \varphi)} z - T_t^{(\Theta, \varphi)} z\|^2 \le \frac{s - t}{s} \langle T_s^{(\Theta, \varphi)} z - T_t^{(\Theta, \varphi)} z, T_s^{(\Theta, \varphi)} z - z \rangle$.
Proposition 3
([41]). We have the following conclusions:
  • A mapping $T$ is nonexpansive iff the complement $I - T$ is $\frac{1}{2}$-ism.
  • If a mapping $T$ is $\alpha$-ism, then $\gamma T$ is $\frac{\alpha}{\gamma}$-ism for $\gamma > 0$.
  • A mapping $T$ is averaged iff the complement $I - T$ is $\alpha$-ism for some $\alpha > 1/2$.
Lemma 1.
Let $E$ be a real inner product space. Then,
$$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle, \quad \forall x, y \in E.$$
Lemma 2.
In a real Hilbert space $H$, we have the following results:
  • $\|x - y\|^2 = \|x\|^2 - 2 \langle x, y \rangle + \|y\|^2$, $\forall x, y \in H$;
  • $\|\lambda x + \gamma y\|^2 = \lambda \|x\|^2 + \gamma \|y\|^2 - \lambda \gamma \|x - y\|^2$, $\forall x, y \in H$ and $\lambda, \gamma \in [0, 1]$ with $\lambda + \gamma = 1$;
  • if $\{u_n\} \subseteq H$ is a sequence satisfying $u_n \rightharpoonup x$, then
    $$\limsup_{n \to \infty} \|u_n - y\|^2 = \limsup_{n \to \infty} \|u_n - x\|^2 + \|x - y\|^2, \quad \forall y \in H.$$
Lemma 3
([45]). Let $H$ be a real Hilbert space and $C \subseteq H$ be a closed convex set. Let $T : C \to C$ be a $k$-strictly pseudocontractive mapping. Then:
  • $\|T x - T y\| \le \frac{1 + k}{1 - k} \|x - y\|$, $\forall x, y \in C$;
  • $I - T$ is demiclosed at 0;
  • the fixed-point set $\mathrm{Fix}(T)$ of $T$ is closed and convex.
Lemma 4
([39]). Let $H$ be a real Hilbert space and $C \subseteq H$ be a closed convex set. Let $T : C \to C$ be a $k$-strictly pseudocontractive mapping. Then,
$$\|\gamma (x - y) + \mu (T x - T y)\| \le (\gamma + \mu) \|x - y\|, \quad \forall x, y \in C,$$
where $\gamma \ge 0$ and $\mu \ge 0$ with $(\gamma + \mu) k \le \gamma$.
Let $T : C \to C$ be a nonexpansive operator and let the operator $F : C \to H$ be $\kappa$-Lipschitzian and $\eta$-strongly monotone. Define an operator $T^\lambda : C \to H$ by $T^\lambda x := T x - \lambda \mu F(T x)$, $\forall x \in C$, where $\lambda \in (0, 1]$ and $\mu > 0$ are two constants.
Lemma 5
([42]). Let $0 < \mu < \frac{2\eta}{\kappa^2}$. Then, for all $x, y \in C$, we have $\|T^\lambda x - T^\lambda y\| \le (1 - \lambda \tau) \|x - y\|$, where $\tau = 1 - \sqrt{1 - \mu (2\eta - \mu \kappa^2)}$.
Lemma 6
([46]). Let $\{a_n\} \subseteq [0, +\infty)$, $\{\omega_n\} \subseteq [0, 1]$, $\{\delta_n\}$ and $\{r_n\} \subseteq [0, +\infty)$ be four sequences such that:
(i) 
$\sum_{n=0}^{\infty} \omega_n = \infty$ and $\sum_{n=1}^{\infty} r_n < \infty$;
(ii) 
either $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} \omega_n |\delta_n| < \infty$;
(iii) 
$a_{n+1} \le (1 - \omega_n) a_n + \omega_n \delta_n + r_n$, $\forall n \ge 0$.
Then, $\lim_{n \to \infty} a_n = 0$.
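Lemma 6 is the standard quantitative tool behind the convergence arguments below. A quick numerical illustration, with hypothetical sequences $\omega_n = \frac{1}{n+2}$, $\delta_n = \frac{1}{n+1}$ and $r_n = \frac{1}{(n+1)^2}$ chosen so that (i)–(iii) hold, runs the worst-case recursion (equality in (iii)) and watches $a_n$ decay toward $0$:

```python
# Illustrative check of Lemma 6: omega_n = 1/(n+2) is non-summable,
# delta_n = 1/(n+1) -> 0 (so limsup delta_n <= 0), r_n = 1/(n+1)^2 is summable.
# Taking equality in (iii) gives the largest sequence satisfying the lemma.
a = 5.0
for n in range(200000):
    omega = 1.0 / (n + 2)
    delta = 1.0 / (n + 1)
    r = 1.0 / (n + 1) ** 2
    a = (1 - omega) * a + omega * delta + r
# After many steps a is driven close to 0, as the lemma predicts.
```

The homogeneous part decays like $\prod (1 - \omega_n) \sim 1/n$, while the forcing terms are summable after damping, which is exactly the mechanism the lemma formalizes.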
Lemma 7
([44]). Let the bounded linear operator $A : H \to H$ be $\bar{\gamma}$-strongly positive. Then, $\|I - \rho A\| \le 1 - \rho \bar{\gamma}$ provided $0 < \rho \le \|A\|^{-1}$.
Let LIM be a Banach limit. Then, we have the following properties:
  • $a_n \le c_n$, $\forall n \ge 1$, implies $\mathrm{LIM}_n a_n \le \mathrm{LIM}_n c_n$;
  • $\mathrm{LIM}_n a_{n+N} = \mathrm{LIM}_n a_n$, where $N$ is a fixed positive integer;
  • $\liminf_{n \to \infty} a_n \le \mathrm{LIM}_n a_n \le \limsup_{n \to \infty} a_n$, $\forall \{a_n\} \in l^\infty$.
Lemma 8
([40]). Assume that the sequence $\{s_n\} \in l^\infty$ satisfies $\mathrm{LIM}_n s_n \le M$, where $M$ is a constant. If $\limsup_{n \to \infty} (s_{n+1} - s_n) \le 0$, then $\limsup_{n \to \infty} s_n \le M$.
Let the operator $R : D(R) \subseteq H \to 2^H$ be maximal monotone, and let $\lambda, \mu > 0$ be two constants.
Lemma 9
([43]). There holds the resolvent identity
$$J_{R,\lambda} x = J_{R,\mu} \Big( \frac{\mu}{\lambda} x + \Big( 1 - \frac{\mu}{\lambda} \Big) J_{R,\lambda} x \Big), \quad \forall x \in H. \tag{13}$$
Remark 1.
The resolvent has the following property:
$$\|J_{R,\lambda} x - J_{R,\mu} y\| \le \|x - y\| + |\lambda - \mu| \Big( \frac{1}{\lambda} \|J_{R,\lambda} x - x\| + \frac{1}{\mu} \|y - J_{R,\mu} y\| \Big), \quad \forall x, y \in H.$$
Lemma 10
([34,35]). $J_{R,\lambda}$ satisfies
$$\langle J_{R,\lambda} x - J_{R,\lambda} y, x - y \rangle \ge \|J_{R,\lambda} x - J_{R,\lambda} y\|^2, \quad \forall x, y \in H.$$
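The resolvent identity (13) can be verified directly on a simple example. The sketch below, assuming the hypothetical scalar maximal monotone operator $R(x) = c x$ with $c \ge 0$, whose resolvent is $J_{R,\lambda} x = x / (1 + \lambda c)$, checks both sides of (13) numerically (all constants are illustrative):

```python
# Scalar check of the resolvent identity (Lemma 9) for R(x) = c*x, c >= 0,
# with resolvent J_{R,lam}(x) = (I + lam*R)^{-1}(x) = x / (1 + lam*c).
c = 2.0
J = lambda lam, x: x / (1.0 + lam * c)

lam, mu, x = 0.8, 0.3, 1.7
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1.0 - mu / lam) * J(lam, x))
# lhs and rhs agree, as the identity asserts.
```

A short computation confirms this symbolically: the inner argument equals $x(1 + \mu c)/(1 + \lambda c)$, so applying $J_{R,\mu}$ returns $x/(1 + \lambda c) = J_{R,\lambda} x$.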

3. Main Results

Let $H$ be a real Hilbert space and $C \subseteq H$ be a closed convex set. In what follows, we assume:
$f : C \to \mathbb{R}$ is a convex functional whose gradient $\nabla f$ is $L$-Lipschitz; the operator $F : C \to H$ is $\eta$-strongly monotone and $\kappa$-Lipschitzian; $R_i : C \to 2^H$ is a maximal monotone mapping and $B_i : C \to H$ ($i = 1, \dots, N$) is $\eta_i$-ism; $\Theta_j : C \times C \to \mathbb{R}$ satisfies (A1)–(A4), $\varphi_j : C \to \mathbb{R} \cup \{+\infty\}$ is proper, convex and lower semicontinuous satisfying (B1) or (B2), and $A_j : C \to H$ is $\mu_j$-ism, for each $j = 1, \dots, M$;
$V : C \to H$ is a nonexpansive mapping and $A : H \to H$ is a $\bar{\gamma}$-strongly positive bounded linear operator such that $1 < \bar{\gamma} < 2$, $0 < \mu < \frac{2\eta}{\kappa^2}$ and $0 \le \gamma < \tau$ with $\tau = 1 - \sqrt{1 - \mu (2\eta - \mu \kappa^2)}$;
$P_C(I - \lambda_t \nabla f) = s_t I + (1 - s_t) T_t$, where $T_t$ is nonexpansive, $s_t = \frac{2 - \lambda_t L}{4} \in (0, \frac{1}{2})$ and $\lambda_t : (0, 1) \to (0, \frac{2}{L})$ with $\lim_{t \to 0} \lambda_t = \frac{2}{L}$;
the operator $\Lambda_t^N : C \to C$ is defined by $\Lambda_t^N x = J_{R_N, \lambda_{N,t}} (I - \lambda_{N,t} B_N) \cdots J_{R_1, \lambda_{1,t}} (I - \lambda_{1,t} B_1) x$, $t \in (0, 1)$, for $\{\lambda_{i,t}\} \subseteq [a_i, b_i] \subseteq (0, 2\eta_i)$, $i = 1, \dots, N$;
the operator $\Lambda_n^N : C \to C$ is defined by $\Lambda_n^N x = J_{R_N, \lambda_{N,n}} (I - \lambda_{N,n} B_N) \cdots J_{R_1, \lambda_{1,n}} (I - \lambda_{1,n} B_1) x$ with $\{\lambda_{i,n}\} \subseteq [a_i, b_i] \subseteq (0, 2\eta_i)$ and $\lim_{n \to \infty} \lambda_{i,n} = \lambda_i$, for each $i = 1, \dots, N$;
the operator $\Delta_t^M : C \to C$ is defined by $\Delta_t^M x = T_{r_{M,t}}^{(\Theta_M, \varphi_M)} (I - r_{M,t} A_M) \cdots T_{r_{1,t}}^{(\Theta_1, \varphi_1)} (I - r_{1,t} A_1) x$, $t \in (0, 1)$, for $\{r_{j,t}\} \subseteq [c_j, d_j] \subseteq (0, 2\mu_j)$, $j = 1, \dots, M$;
the operator $\Delta_n^M : C \to C$ is defined by $\Delta_n^M x = T_{r_{M,n}}^{(\Theta_M, \varphi_M)} (I - r_{M,n} A_M) \cdots T_{r_{1,n}}^{(\Theta_1, \varphi_1)} (I - r_{1,n} A_1) x$ with $\{r_{j,n}\} \subseteq [c_j, d_j] \subseteq (0, 2\mu_j)$ and $\lim_{n \to \infty} r_{j,n} = r_j$, for each $j = 1, \dots, M$;
$\Omega := \bigcap_{j=1}^{M} \mathrm{GMEP}(\Theta_j, \varphi_j, A_j) \cap \bigcap_{i=1}^{N} \mathrm{I}(B_i, R_i) \cap \Xi$;
$\{\alpha_n\} \subseteq [0, 1]$, $\{s_n\} \subseteq (0, \min\{\frac{1}{2}, \|A\|^{-1}\})$ and $\{s_t\}_{t \in (0, \min\{1, \frac{2 - \bar{\gamma}}{\tau - \gamma}\})} \subseteq (0, \min\{\frac{1}{2}, \|A\|^{-1}\})$.
Next, set
$$\Lambda_t^i = J_{R_i, \lambda_{i,t}} (I - \lambda_{i,t} B_i) J_{R_{i-1}, \lambda_{i-1,t}} (I - \lambda_{i-1,t} B_{i-1}) \cdots J_{R_1, \lambda_{1,t}} (I - \lambda_{1,t} B_1), \quad t \in (0, 1),$$
$$\Lambda_n^i = J_{R_i, \lambda_{i,n}} (I - \lambda_{i,n} B_i) J_{R_{i-1}, \lambda_{i-1,n}} (I - \lambda_{i-1,n} B_{i-1}) \cdots J_{R_1, \lambda_{1,n}} (I - \lambda_{1,n} B_1), \quad n \ge 0,$$
$$\Delta_t^j = T_{r_{j,t}}^{(\Theta_j, \varphi_j)} (I - r_{j,t} A_j) T_{r_{j-1,t}}^{(\Theta_{j-1}, \varphi_{j-1})} (I - r_{j-1,t} A_{j-1}) \cdots T_{r_{1,t}}^{(\Theta_1, \varphi_1)} (I - r_{1,t} A_1), \quad t \in (0, 1),$$
and
$$\Delta_n^j = T_{r_{j,n}}^{(\Theta_j, \varphi_j)} (I - r_{j,n} A_j) T_{r_{j-1,n}}^{(\Theta_{j-1}, \varphi_{j-1})} (I - r_{j-1,n} A_{j-1}) \cdots T_{r_{1,n}}^{(\Theta_1, \varphi_1)} (I - r_{1,n} A_1), \quad n \ge 0,$$
for $j = 1, \dots, M$ and $i = 1, \dots, N$, with $\Lambda_t^0 = \Lambda_n^0 = I$ and $\Delta_t^0 = \Delta_n^0 = I$.
By Proposition 3, $\lambda \nabla f$ is $\frac{1}{\lambda L}$-ism for $\lambda > 0$; hence, $I - \lambda \nabla f$ is $\frac{\lambda L}{2}$-averaged. It follows that $P_C(I - \lambda_t \nabla f)$ is $\frac{2 + \lambda_t L}{4}$-averaged for each $\lambda_t \in (0, \frac{2}{L})$. Thus, $P_C(I - \lambda_t \nabla f) = s_t I + (1 - s_t) T_t$, where $T_t$ is nonexpansive and $s_t := s_t(\lambda_t) = \frac{2 - \lambda_t L}{4} \in (0, \frac{1}{2})$ for each $\lambda_t \in (0, \frac{2}{L})$. Similarly, for each $n \ge 0$, $P_C(I - \lambda_n \nabla f)$ is $\frac{2 + \lambda_n L}{4}$-averaged for each $\lambda_n \in (0, \frac{2}{L})$ and $P_C(I - \lambda_n \nabla f) = s_n I + (1 - s_n) T_n$. Please note that $\mathrm{Fix}(T_t) = \mathrm{Fix}(T_n) = \Xi$. Since $\{\lambda_{i,t}\} \subseteq [a_i, b_i] \subseteq (0, 2\eta_i)$, by (12) and Lemma 10, we deduce
$$\|\Lambda_t^N x - \Lambda_t^N y\| = \|J_{R_N, \lambda_{N,t}} (I - \lambda_{N,t} B_N) \Lambda_t^{N-1} x - J_{R_N, \lambda_{N,t}} (I - \lambda_{N,t} B_N) \Lambda_t^{N-1} y\| \le \|x - y\|.$$
On the other hand, since $\{r_{j,t}\} \subseteq [c_j, d_j] \subseteq (0, 2\mu_j)$, according to (12) and Proposition 2, we get
$$\|\Delta_t^M x - \Delta_t^M y\| = \|T_{r_{M,t}}^{(\Theta_M, \varphi_M)} (I - r_{M,t} A_M) \Delta_t^{M-1} x - T_{r_{M,t}}^{(\Theta_M, \varphi_M)} (I - r_{M,t} A_M) \Delta_t^{M-1} y\| \le \|(I - r_{M,t} A_M) \Delta_t^{M-1} x - (I - r_{M,t} A_M) \Delta_t^{M-1} y\| \le \|x - y\|.$$
Next, we present the following net $\{x_t\}_{t \in (0, \min\{1, \frac{2 - \bar{\gamma}}{\tau - \gamma}\})}$:
$$\begin{cases} u_t = T_{r_{M,t}}^{(\Theta_M, \varphi_M)} (I - r_{M,t} A_M) T_{r_{M-1,t}}^{(\Theta_{M-1}, \varphi_{M-1})} (I - r_{M-1,t} A_{M-1}) \cdots T_{r_{1,t}}^{(\Theta_1, \varphi_1)} (I - r_{1,t} A_1) x_t, \\ v_t = J_{R_N, \lambda_{N,t}} (I - \lambda_{N,t} B_N) J_{R_{N-1}, \lambda_{N-1,t}} (I - \lambda_{N-1,t} B_{N-1}) \cdots J_{R_1, \lambda_{1,t}} (I - \lambda_{1,t} B_1) u_t, \\ x_t = P_C \big[ (I - s_t A) T_t v_t + s_t [V x_t - t (\mu F V x_t - \gamma T_t v_t)] \big]. \end{cases} \tag{14}$$
We prove the strong convergence of $\{x_t\}$ as $t \to 0$ to a point $\tilde{x} \in \Omega$ which solves
$$\langle (A - V) \tilde{x}, p - \tilde{x} \rangle \ge 0, \quad \forall p \in \Omega. \tag{15}$$
Given $x_0 \in C$, define a sequence $\{x_n\}$ as follows:
$$\begin{cases} u_n = T_{r_{M,n}}^{(\Theta_M, \varphi_M)} (I - r_{M,n} A_M) T_{r_{M-1,n}}^{(\Theta_{M-1}, \varphi_{M-1})} (I - r_{M-1,n} A_{M-1}) \cdots T_{r_{1,n}}^{(\Theta_1, \varphi_1)} (I - r_{1,n} A_1) x_n, \\ v_n = J_{R_N, \lambda_{N,n}} (I - \lambda_{N,n} B_N) J_{R_{N-1}, \lambda_{N-1,n}} (I - \lambda_{N-1,n} B_{N-1}) \cdots J_{R_1, \lambda_{1,n}} (I - \lambda_{1,n} B_1) u_n, \\ y_n = \alpha_n \gamma T_n v_n + (I - \alpha_n \mu F) V x_n, \\ x_{n+1} = P_C[(I - s_n A) T_n v_n + s_n y_n], \quad \forall n \ge 0. \end{cases} \tag{16}$$
We will show the convergence of $\{x_n\}$ as $n \to \infty$ to $\tilde{x} \in \Omega$, which solves (15).
Now, for $t \in (0, \min\{1, \frac{2 - \bar{\gamma}}{\tau - \gamma}\})$ and $s_t \in (0, \min\{\frac{1}{2}, \|A\|^{-1}\})$, let $Q_t : C \to C$ be defined by $Q_t x = P_C \big[ (I - s_t A) T_t \Lambda_t^N \Delta_t^M x + s_t [V x - t (\mu F V x - \gamma T_t \Lambda_t^N \Delta_t^M x)] \big]$, $\forall x \in C$. By Lemmas 5 and 7, we have
$$\begin{aligned} \|Q_t x - Q_t y\| &\le \|(I - s_t A) T_t \Lambda_t^N \Delta_t^M x - (I - s_t A) T_t \Lambda_t^N \Delta_t^M y\| \\ &\quad + s_t \|((I - t \mu F) V x + t \gamma T_t \Lambda_t^N \Delta_t^M x) - ((I - t \mu F) V y + t \gamma T_t \Lambda_t^N \Delta_t^M y)\| \\ &\le (1 - s_t \bar{\gamma}) \|x - y\| + s_t [(1 - t \tau) \|x - y\| + t \gamma \|x - y\|] \\ &= [1 - s_t (\bar{\gamma} - 1 + t (\tau - \gamma))] \|x - y\|. \end{aligned}$$
It is easy to check that $0 < 1 - s_t (\bar{\gamma} - 1 + t (\tau - \gamma)) < 1$. Therefore, $Q_t : C \to C$ is a contraction and $Q_t$ has a unique fixed point, denoted by $x_t$.
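The net point $x_t$ above is defined implicitly as the unique fixed point guaranteed by Banach's contraction principle, which is also constructive: Picard iteration $x \leftarrow Q(x)$ converges geometrically at the contraction rate. A minimal sketch with a hypothetical scalar contraction (the factor $0.8$ and the map are illustrative stand-ins for $Q_t$, not the paper's operator):

```python
# Picard iteration on a scalar contraction with factor q = 0.8.
# Its unique fixed point solves x = 0.8*x + 0.5, i.e. x* = 0.5 / (1 - 0.8) = 2.5,
# and the error contracts like q^k from any starting point.
Q = lambda x: 0.8 * x + 0.5
x = 0.0
for _ in range(200):
    x = Q(x)
# x is now numerically equal to the fixed point 2.5.
```

In the scheme above the contraction factor is $1 - s_t(\bar{\gamma} - 1 + t(\tau - \gamma))$, so the same geometric convergence applies for each fixed $t$.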
Proposition 4.
Let $\{x_t\}$ be defined via (14). Then:
(i) 
$\{x_t\}$ is bounded for $t \in (0, \min\{1, \frac{2 - \bar{\gamma}}{\tau - \gamma}\})$;
(ii) 
$\lim_{t \to 0} \|x_t - T_t x_t\| = 0$, $\lim_{t \to 0} \|x_t - \Lambda_t^N x_t\| = 0$ and $\lim_{t \to 0} \|x_t - \Delta_t^M x_t\| = 0$, provided $\lim_{t \to 0} \lambda_t = \frac{2}{L}$;
(iii) 
$x_t$ is locally Lipschitzian provided $s_t$, $\lambda_{i,t}$ and $r_{j,t}$ are locally Lipschitzian.
Proof. 
(i) Pick $p \in \Omega$. Noting that $\mathrm{Fix}(T_t) = \Xi$, $\Lambda_t^i p = p$ and $\Delta_t^j p = p$ for all $i \in \{1, \dots, N\}$ and $j \in \{1, \dots, M\}$, by the nonexpansivity of $T_t$, $\Lambda_t^i$ and $\Delta_t^j$ and Lemmas 5 and 7, we get
x t p ( I s t A ) T t Λ t N Δ t M x t ( I s t A ) T t Λ t N Δ t M p + s t ( I t μ F ) V x t + t γ T t Λ t N Δ t M x t A p s t ( I t μ F ) V x t ( I t μ F ) V p + t ( γ T t Λ t N Δ t M x t μ F V p ) + V p A p + ( 1 s t γ ¯ ) T t Λ t N Δ t M x t T t Λ t N Δ t M p [ 1 s t ( γ ¯ 1 + t ( τ γ ) ) ] x t p + s t ( t ( γ I μ F V ) p + ( V A ) p ) .
Thus,
$$\|x_t - p\| \le \frac{\|(V - A) p\| + \|(\gamma I - \mu F V) p\|}{\bar{\gamma} - 1}.$$
Hence $\{x_t\}$ is bounded, and so are $\{V x_t\}$, $\{u_t\}$, $\{v_t\}$, $\{T_t v_t\}$ and $\{F V x_t\}$.
(ii) From (14), we obtain
$$\|x_t - T_t v_t\| = \|P_C[(I - s_t A) T_t v_t + s_t ((I - t \mu F) V x_t + t \gamma T_t v_t)] - P_C T_t v_t\| \le s_t \|V x_t - A T_t v_t + t (\gamma T_t v_t - \mu F V x_t)\| \to 0 \quad \text{as } t \to 0. \tag{17}$$
From (12), we have
$$\|v_t - p\|^2 \le \|\Lambda_t^i u_t - p\|^2 = \|J_{R_i, \lambda_{i,t}} (I - \lambda_{i,t} B_i) \Lambda_t^{i-1} u_t - J_{R_i, \lambda_{i,t}} (I - \lambda_{i,t} B_i) p\|^2 \le \|x_t - p\|^2 + \lambda_{i,t} (\lambda_{i,t} - 2\eta_i) \|B_i \Lambda_t^{i-1} u_t - B_i p\|^2, \tag{18}$$
and
$$\|u_t - p\|^2 \le \|\Delta_t^j x_t - p\|^2 = \|T_{r_{j,t}}^{(\Theta_j, \varphi_j)} (I - r_{j,t} A_j) \Delta_t^{j-1} x_t - T_{r_{j,t}}^{(\Theta_j, \varphi_j)} (I - r_{j,t} A_j) p\|^2 \le \|x_t - p\|^2 + r_{j,t} (r_{j,t} - 2\mu_j) \|A_j \Delta_t^{j-1} x_t - A_j p\|^2. \tag{19}$$
Simple calculations show that
x t p = x t w t + ( I s t A ) ( T t v t T t p ) + s t [ ( I t μ F ) V x t ( I t μ F ) V p + t γ ( T t v t T t p ) + t ( γ I μ F V ) p ] + s t ( V A ) p ,
where w t = ( I s t A ) T t v t + s t ( t γ T t v t + ( I t μ F ) V x t ) . Then, by the nonexpansivity of T t , Proposition 1, and Lemmas 5 and 7, from (18)–(20) we obtain that
x t p 2 ( I s t A ) ( T t v t T t p ) , x t p + s t [ ( I t μ F ) V x t ( I t μ F ) V p , x t p + t γ ( T t v t T t p ) , x t p + t ( γ I μ F V ) p , x t p ] + s t ( V A ) p , x t p ( 1 s t ( γ ¯ t γ ) ) 1 2 [ x t p 2 + r j , t ( r j , t 2 μ j ) A j Δ t j 1 x t A j p 2 + λ i , t ( λ i , t 2 η i ) B i Λ t i 1 u t B i p 2 + x t p 2 ] + s t [ ( 1 t τ ) x t p 2 + t ( γ I μ F V ) p x t p ] + s t ( V A ) p x t p x t p 2 1 s t ( γ ¯ t γ ) 2 [ r j , t ( 2 μ j r j , t ) A j Δ t j 1 x t A j p 2 + λ i , t ( 2 η i λ i , t ) × B i Λ t i 1 u t B i p 2 ] + s t ( t ( γ I μ F V ) p + ( V A ) p ) x t p ,
which together with { λ i , t } [ a i , b i ] ( 0 , 2 η i ) , i = 1 , , N and { r j , t } [ c j , d j ] ( 0 , 2 μ j ) , j = 1 , , M , implies that
1 s t ( γ ¯ t γ ) 2 [ c j ( 2 μ j d j ) A j Δ t j 1 x t A j p 2 + a i ( 2 η i b i ) B i Λ t i 1 u t B i p 2 ] 1 s t ( γ ¯ t γ ) 2 [ r j , t ( 2 μ j r j , t ) A j Δ t j 1 x t A j p 2 + λ i , t ( 2 η i λ i , t ) B i Λ t i 1 u t B i p 2 ] s t ( t ( γ I μ F V ) p + ( V A ) p ) x t p .
Since $\lim_{t \to 0} s_t = 0$ and $\{x_t\}$ is bounded, we have
$$\lim_{t \to 0} \|B_i \Lambda_t^{i-1} u_t - B_i p\| = 0 \quad \text{and} \quad \lim_{t \to 0} \|A_j \Delta_t^{j-1} x_t - A_j p\| = 0 \quad (i \in \{1, \dots, N\},\ j \in \{1, \dots, M\}). \tag{22}$$
According to Proposition 2, we have
Δ t j x t p 2 ( I r j , t A j ) Δ t j 1 x t ( I r j , t A j ) p , Δ t j x t p 1 2 ( Δ t j 1 x t p 2 + Δ t j x t p 2 Δ t j 1 x t Δ t j x t r j , t ( A j Δ t j 1 x t A j p ) 2 ) ,
which implies that
Δ t j x t p 2 Δ t j 1 x t p 2 Δ t j 1 x t Δ t j x t r j , t ( A j Δ t j 1 x t A j p ) 2 x t p 2 Δ t j 1 x t Δ t j x t 2 + 2 r j , t Δ t j 1 x t Δ t j x t A j Δ t j 1 x t A j p .
Also, by Lemma 10, we obtain that for each i { 1 , , N }
Λ t i u t p 2 ( I λ i , t B i ) Λ t i 1 u t ( I λ i , t B i ) p , Λ t i u t p = 1 2 ( ( I λ i , t B i ) Λ t i 1 u t ( I λ i , t B i ) p 2 + Λ t i u t p 2 ( I λ i , t B i ) Λ t i 1 u t ( I λ i , t B i ) p ( Λ t i u t p ) 2 ) 1 2 ( u t p 2 + Λ t i u t p 2 Λ t i 1 u t Λ t i u t λ i , t ( B i Λ t i 1 u t B i p ) 2 ) ,
which implies
Λ t i u t p 2 u t p 2 Λ t i 1 u t Λ t i u t λ i , t ( B i Λ t i 1 u t B i p ) 2 u t p 2 Λ t i 1 u t Λ t i u t 2 + 2 λ i , t Λ t i 1 u t Λ t i u t B i Λ t i 1 u t B i p .
Thus, utilizing Lemma 1, from (21), (23) and (24) we have
x t p 2 ( 1 s t ( γ ¯ t γ ) ) 1 2 ( Λ t i u t p 2 + x t p 2 ) + s t ( V A ) p x t p + s t [ ( 1 t τ ) x t p 2 + t ( γ I μ F V ) p x t p ] ( 1 s t ( γ ¯ t γ ) ) 1 2 [ x t p 2 Δ t j 1 x t Δ t j x t 2 + 2 r j , t Δ t j 1 x t Δ t j x t A j Δ t j 1 x t A j p Λ t i 1 u t Λ t i u t 2 + 2 λ i , t Λ t i 1 u t Λ t i u t B i Λ t i 1 u t B i p + x t p 2 ] + s t [ ( 1 t τ ) x t p 2 + t ( γ I μ F V ) p x t p ] + s t ( V A ) p x t p x t p 2 1 s t ( γ ¯ t γ ) 2 ( Δ t j 1 x t Δ t j x t 2 + Λ t i 1 u t Λ t i u t 2 ) + r j , t Δ t j 1 x t Δ t j x t A j Δ t j 1 x t A j p + λ i , t Λ t i 1 u t Λ t i u t B i Λ t i 1 u t B i p + s t ( t ( γ I μ F V ) p + ( V A ) p ) x t p ,
which together with { λ i , t } [ a i , b i ] ( 0 , 2 η i ) and { r j , t } [ c j , d j ] ( 0 , 2 μ j ) , leads to
1 s t ( γ ¯ t γ ) 2 ( Δ t j 1 x t Δ t j x t 2 + Λ t i 1 u t Λ t i u t 2 ) d j Δ t j 1 x t Δ t j x t A j Δ t j 1 x t A j p + b i Λ t i 1 u t Λ t i u t B i Λ t i 1 u t B i p + s t ( t ( γ I μ F V ) p + ( V A ) p ) x t p .
Since $\lim_{t \to 0} s_t = 0$, $\lim_{t \to 0} \|A_j \Delta_t^{j-1} x_t - A_j p\| = 0$ and $\lim_{t \to 0} \|B_i \Lambda_t^{i-1} u_t - B_i p\| = 0$ (due to (22)), we deduce from the boundedness of $\{x_t\}$, $\{\Lambda_t^i u_t\}$ and $\{\Delta_t^j x_t\}$ that
$$\lim_{t \to 0} \|\Lambda_t^{i-1} u_t - \Lambda_t^i u_t\| = 0 \quad \text{and} \quad \lim_{t \to 0} \|\Delta_t^{j-1} x_t - \Delta_t^j x_t\| = 0. \tag{25}$$
Hence we get
$$\|x_t - u_t\| \le \|\Delta_t^0 x_t - \Delta_t^1 x_t\| + \|\Delta_t^1 x_t - \Delta_t^2 x_t\| + \cdots + \|\Delta_t^{M-1} x_t - \Delta_t^M x_t\| \to 0 \quad \text{as } t \to 0,$$
and
$$\|u_t - v_t\| \le \|\Lambda_t^0 u_t - \Lambda_t^1 u_t\| + \|\Lambda_t^1 u_t - \Lambda_t^2 u_t\| + \cdots + \|\Lambda_t^{N-1} u_t - \Lambda_t^N u_t\| \to 0 \quad \text{as } t \to 0.$$
So, taking into account that $\|x_t - v_t\| \le \|x_t - u_t\| + \|u_t - v_t\|$, we have
$$\lim_{t \to 0} \|x_t - v_t\| = 0. \tag{26}$$
In the meantime, from the nonexpansivity of $T_t$ and $\Lambda_t^N$, it is easy to see that
$$\|x_t - T_t x_t\| \le \|x_t - T_t v_t\| + \|T_t v_t - T_t x_t\| \le \|x_t - T_t v_t\| + \|v_t - x_t\|,$$
and
$$\|x_t - \Lambda_t^N x_t\| \le \|x_t - \Lambda_t^N u_t\| + \|\Lambda_t^N u_t - \Lambda_t^N x_t\| \le \|x_t - v_t\| + \|u_t - x_t\|. \tag{28}$$
Consequently, from (17), (26) and (28) we immediately deduce that
$$\lim_{t \to 0} \|x_t - T_t x_t\| = 0 \quad \text{and} \quad \lim_{t \to 0} \|x_t - \Lambda_t^N x_t\| = 0.$$
(iii) Since $\nabla f$ is $\frac{1}{L}$-ism, $P_C(I - \lambda_t \nabla f)$ is nonexpansive for $\lambda_t \in (0, \frac{2}{L})$. Then,
$$\|P_C(I - \lambda_t \nabla f) v_{t_0}\| \le \|P_C(I - \lambda_t \nabla f) v_{t_0} - P_C(I - \lambda_t \nabla f) p\| + \|p\| \le \|v_{t_0}\| + 2\|p\|.$$
This implies that $\{P_C(I - \lambda_t \nabla f) v_{t_0}\}$ is bounded. Also, observe that
T t v t 0 T t 0 v t 0 4 L | λ t 0 λ t | P C ( I λ t f ) v t 0 ( 2 + λ t L ) ( 2 + λ t 0 L ) + 4 L | λ t λ t 0 | ( 2 + λ t L ) ( 2 + λ t 0 L ) v t 0 + 4 ( 2 + λ t L ) P C ( I λ t f ) v t 0 P C ( I λ t 0 f ) v t 0 ( 2 + λ t L ) ( 2 + λ t 0 L ) | λ t λ t 0 | [ L P C ( I λ t f ) v t 0 + 4 f ( v t 0 ) + L v t 0 ] M ˜ | λ t λ t 0 | ,
where sup t { L P C ( I λ t f ) v t 0 + 4 f ( v t 0 ) + L v t 0 } M ˜ for some M ˜ > 0 . So, by (30), we have that
T t v t T t 0 v t 0 T t v t T t v t 0 + T t v t 0 T t 0 v t 0 v t v t 0 + M ˜ | λ t λ t 0 | v t v t 0 + 4 M ˜ L | s t s t 0 | .
Utilizing (12) and (13), we obtain that
v t v t 0 J R N , λ N , t ( I λ N , t B N ) Λ t N 1 u t J R N , λ N , t ( I λ N , t 0 B N ) Λ t N 1 u t + J R N , λ N , t ( I λ N , t 0 B N ) Λ t N 1 u t J R N , λ N , t 0 ( I λ N , t 0 B N ) Λ t 0 N 1 u t 0 ( I λ N , t B N ) Λ t N 1 u t ( I λ N , t 0 B N ) Λ t N 1 u t + ( I λ N , t 0 B N ) Λ t N 1 u t ( I λ N , t 0 B N ) Λ t 0 N 1 u t 0 + | λ N , t λ N , t 0 | × ( 1 λ N , t J R N , λ N , t ( I λ N , t 0 B N ) Λ t N 1 u t ( I λ N , t 0 B N ) Λ t 0 N 1 u t 0 + 1 λ N , t 0 ( I λ N , t 0 B N ) Λ t N 1 u t J R N , λ N , t 0 ( I λ N , t 0 B N ) Λ t 0 N 1 u t 0 ) | λ N , t λ N , t 0 | ( B N Λ t N 1 u t + M ^ ) + Λ t N 1 u t Λ t 0 N 1 u t 0 M ˜ 0 i = 1 N | λ i , t λ i , t 0 | + u t u t 0 ,
where
sup t { 1 λ i , t J R i , λ i , t ( I λ i , t 0 B i ) Λ t i 1 u t ( I λ i , t 0 B i ) Λ t 0 i 1 u t 0 + 1 λ i , t 0 ( I λ i , t 0 B i ) Λ t i 1 u t J R i , λ i , t 0 ( I λ i , t 0 B i ) Λ t 0 i 1 u t 0 } M ^ ,
for some M ^ > 0 and sup t { i = 1 N B i Λ t i 1 u t + M ^ } M ˜ 0 for some M ˜ 0 > 0 . Also, utilizing Proposition 2, we deduce that
u t u t 0 T r M , t ( Θ M , φ M ) ( I r M , t B M ) Δ t M 1 x t T r M , t 0 ( Θ M , φ M ) ( I r M , t B M ) Δ t M 1 x t + T r M , t 0 ( Θ M , φ M ) ( I r M , t B M ) Δ t M 1 x t T r M , t 0 ( Θ M , φ M ) ( I r M , t 0 B M ) Δ t M 1 x t + ( I r M , t 0 B M ) Δ t M 1 x t ( I r M , t 0 B M ) Δ t 0 M 1 x t 0 | r M , t r M , t 0 | [ B M Δ t M 1 x t + 1 r M , t T r M , t ( Θ M , φ M ) ( I r M , t B M ) Δ t M 1 x t ( I r M , t B M ) Δ t M 1 x t ] + + Δ t 0 x t Δ t 0 0 x t 0 + | r 1 , t r 1 , t 0 | [ B 1 Δ t 0 x t + 1 r 1 , t T r 1 , t ( Θ 1 , φ 1 ) ( I r 1 , t B 1 ) Δ t 0 x t ( I r 1 , t B 1 ) Δ t 0 x t ] M ˜ 1 j = 1 M | r j , t r j , t 0 | + x t x t 0 ,
where M ˜ 1 > 0 is a constant and
j = 1 M [ B j Δ t j 1 x t + 1 r j , t T r j , t ( Θ j , φ j ) ( I r j , t B j ) Δ t j 1 x t ( I r j , t B j ) Δ t j 1 x t ] M ˜ 1 .
In terms of (31)–(33), we calculate
T t v t T t 0 v t 0 x t x t 0 + M ˜ 1 j = 1 M | r j , t r j , t 0 | + M ˜ 0 i = 1 N | λ i , t λ i , t 0 | + 4 M ˜ L | s t s t 0 | ,
and hence
x t x t 0 ( I s t A ) T t v t + s t ( ( I t μ F ) V x t + t γ T t v t ) ( I s t 0 A ) T t 0 v t 0 s t 0 ( ( I t 0 μ F ) V x t 0 + t 0 γ T t 0 v t 0 ) | s t s t 0 | A T t v t + ( 1 s t 0 γ ¯ ) T t v t T t 0 v t 0 + t 0 γ T t v t T t 0 v t 0 + | s t s t 0 | [ V x t + t ( γ T t v t + μ F V x t ) ] + s t 0 [ ( γ T t v t + μ F V x t ) | t t 0 | + ( 1 t 0 τ ) x t x t 0 ] ( 1 s t 0 ( γ ¯ 1 + t 0 ( τ γ ) ) ) ( x t x t 0 + M ˜ 1 j = 1 M | r j , t r j , t 0 | + M ˜ 0 i = 1 N | λ i , t λ i , t 0 | + 4 M ˜ L | s t s t 0 | ) + | s t s t 0 | [ A T t v t + V x t + t ( γ T t v t + μ F V x t ) ] + s t 0 ( γ T t v t + μ F V x t ) | t t 0 | .
This immediately implies that
x t x t 0 A T t v t + V x t + t ( γ T t v t + μ F V x t ) + 4 M ˜ L s t 0 ( γ ¯ 1 + t 0 ( τ γ ) ) | s t s t 0 | + γ T t v t + μ F V x t γ ¯ 1 + t 0 ( τ γ ) | t t 0 | + M ˜ 1 s t 0 ( γ ¯ 1 + t 0 ( τ γ l ) ) j = 1 M | r j , t r j , t 0 | + M ˜ 0 s t 0 ( γ ¯ 1 + t 0 ( τ γ l ) ) i = 1 N | λ i , t λ i , t 0 | .
Since s t , λ i , t , r j , t are locally Lipschitzian, we conclude that x t is locally Lipschitzian. □
Theorem 1.
Assume that $\lim_{t \to 0} s_t = 0$. Then, $x_t$ defined by (14) converges strongly to $\tilde{x} \in \Omega$ as $t \to 0$, which solves (15).
Proof. 
Let $\tilde{x}$ be the unique solution of (15). Let $p \in \Omega$. Then, we have
x t p = x t w t + s t [ ( I t μ F ) V x t ( I t μ F ) V p + t γ ( T t v t T t p ) + t ( γ I μ F V ) p ] + ( I s t A ) ( T t v t T t p ) + s t ( V A ) p ,
where w t = ( I s t A ) T t v t + s t ( ( I t μ F ) V x t + t γ T t v t ) . Then, by Proposition 1 and the nonexpansivity of T t , we obtain from (18) that
x t p 2 ( I s t A ) ( T t v t T t p ) , x t p + s t [ ( I t μ F ) V x t ( I t μ F ) V p , x t p + t γ T t v t T t p , x t p + t ( γ I μ F V ) p , x t p ] + s t ( V A ) p , x t p ( 1 s t γ ¯ ) v t p x t p + s t [ ( 1 t τ ) x t p 2 + t γ v t p x t p + t ( γ I μ F V ) p , x t p ] + s t ( V A ) p , x t p [ 1 s t ( ( γ ¯ 1 ) + t ( τ γ ) ) ] x t p 2 + ( V A ) p , x t p + s t ( t ( γ I μ F V ) p , x t p ) .
Hence,
$$\|x_t - p\|^2 \le \frac{1}{\bar{\gamma} - 1 + t (\tau - \gamma)} \big( \langle (V - A) p, x_t - p \rangle + t \langle (\gamma I - \mu F V) p, x_t - p \rangle \big).$$
It follows that there exists a sequence $\{t_n\} \subseteq (0, \min\{1, \frac{2 - \bar{\gamma}}{\tau - \gamma}\})$ with $t_n \to 0$ such that $x_{t_n} \rightharpoonup x^*$. By Proposition 4, we get $\lim_{n \to \infty} \|x_{t_n} - T_{t_n} x_{t_n}\| = 0$. Observe that
$$\|P_C(I - \lambda_{t_n} \nabla f) x_{t_n} - x_{t_n}\| = \|s_{t_n} x_{t_n} + (1 - s_{t_n}) T_{t_n} x_{t_n} - x_{t_n}\| \le \|T_{t_n} x_{t_n} - x_{t_n}\|,$$
where $s_{t_n} = \frac{2 - \lambda_{t_n} L}{4} \in (0, \frac{1}{2})$ for $\lambda_{t_n} \in (0, \frac{2}{L})$. Hence we have
$$\|P_C(I - \tfrac{2}{L} \nabla f) x_{t_n} - x_{t_n}\| \le \|(I - \tfrac{2}{L} \nabla f) x_{t_n} - (I - \lambda_{t_n} \nabla f) x_{t_n}\| + \|P_C(I - \lambda_{t_n} \nabla f) x_{t_n} - x_{t_n}\| \le (\tfrac{2}{L} - \lambda_{t_n}) \|\nabla f(x_{t_n})\| + \|T_{t_n} x_{t_n} - x_{t_n}\|.$$
By the boundedness of $\{x_{t_n}\}$, $s_{t_n} \to 0$ (i.e., $\lambda_{t_n} \to \frac{2}{L}$) and $\|T_{t_n} x_{t_n} - x_{t_n}\| \to 0$, we deduce
$$\lim_{n \to \infty} \|x_{t_n} - P_C(I - \tfrac{2}{L} \nabla f) x_{t_n}\| = 0,$$
so the demiclosedness of $I - P_C(I - \frac{2}{L} \nabla f)$ at zero yields $x^* = P_C(I - \frac{2}{L} \nabla f) x^*$.
Therefore, $x^* \in \mathrm{VI}(C, \nabla f) = \Xi$.
Furthermore, from (25), (26) and (28), we have that $\Delta_{t_n}^j x_{t_n} \rightharpoonup x^*$, $\Lambda_{t_n}^m u_{t_n} \rightharpoonup x^*$, $u_{t_n} \rightharpoonup x^*$ and $v_{t_n} \rightharpoonup x^*$. First, we prove that $x^* \in \bigcap_{m=1}^{N} \mathrm{I}(B_m, R_m)$. Please note that $R_m + B_m$ is maximal monotone. Let $(v, g) \in G(R_m + B_m)$, i.e., $g - B_m v \in R_m v$. Noting that $\Lambda_{t_n}^m u_{t_n} = J_{R_m, \lambda_{m,t_n}} (I - \lambda_{m,t_n} B_m) \Lambda_{t_n}^{m-1} u_{t_n}$, $m \in \{1, \dots, N\}$, we have
$$\Lambda_{t_n}^{m-1} u_{t_n} - \lambda_{m,t_n} B_m \Lambda_{t_n}^{m-1} u_{t_n} \in (I + \lambda_{m,t_n} R_m) \Lambda_{t_n}^m u_{t_n},$$
that is,
$$\frac{1}{\lambda_{m,t_n}} \big( \Lambda_{t_n}^{m-1} u_{t_n} - \Lambda_{t_n}^m u_{t_n} - \lambda_{m,t_n} B_m \Lambda_{t_n}^{m-1} u_{t_n} \big) \in R_m \Lambda_{t_n}^m u_{t_n}.$$
According to the monotonicity of $R_m$, we have
$$\Big\langle v - \Lambda_{t_n}^m u_{t_n},\ g - B_m v - \frac{1}{\lambda_{m,t_n}} \big( \Lambda_{t_n}^{m-1} u_{t_n} - \Lambda_{t_n}^m u_{t_n} - \lambda_{m,t_n} B_m \Lambda_{t_n}^{m-1} u_{t_n} \big) \Big\rangle \ge 0,$$
and hence
v Λ t n m u t n , g v Λ t n m u t n , B m v + 1 λ m , t n ( Λ t n m 1 u t n Λ t n m u t n λ m , t n B m Λ t n m 1 u t n ) v Λ t n m u t n , B m Λ t n m u t n B m Λ t n m 1 u t n + v Λ t n m u t n , 1 λ m , t n ( Λ t n m 1 u t n Λ t n m u t n ) .
Since $\|\Lambda_{t_n}^m u_{t_n} - \Lambda_{t_n}^{m-1} u_{t_n}\| \to 0$ (due to (25)) and $\|B_m \Lambda_{t_n}^m u_{t_n} - B_m \Lambda_{t_n}^{m-1} u_{t_n}\| \to 0$, we deduce from $\Lambda_{t_n}^m u_{t_n} \rightharpoonup x^*$ and $\{\lambda_{m,t_n}\} \subseteq [a_m, b_m] \subseteq (0, 2\eta_m)$ that
$$\lim_{n \to \infty} \langle v - \Lambda_{t_n}^m u_{t_n}, g \rangle = \langle v - x^*, g \rangle \ge 0.$$
By the maximal monotonicity of $B_m + R_m$, we derive $0 \in (R_m + B_m) x^*$, i.e., $x^* \in \mathrm{I}(B_m, R_m)$. Thus, $x^* \in \bigcap_{m=1}^{N} \mathrm{I}(B_m, R_m)$. Next, we prove that $x^* \in \bigcap_{j=1}^{M} \mathrm{GMEP}(\Theta_j, \varphi_j, A_j)$. Since $\Delta_{t_n}^j x_{t_n} = T_{r_{j,t_n}}^{(\Theta_j, \varphi_j)} (I - r_{j,t_n} A_j) \Delta_{t_n}^{j-1} x_{t_n}$, $j \in \{1, \dots, M\}$, we have
$$\Theta_j(\Delta_{t_n}^j x_{t_n}, y) + \varphi_j(y) - \varphi_j(\Delta_{t_n}^j x_{t_n}) + \langle A_j \Delta_{t_n}^{j-1} x_{t_n}, y - \Delta_{t_n}^j x_{t_n} \rangle + \Big\langle y - \Delta_{t_n}^j x_{t_n}, \frac{\Delta_{t_n}^j x_{t_n} - \Delta_{t_n}^{j-1} x_{t_n}}{r_{j,t_n}} \Big\rangle \ge 0, \quad \forall y \in C.$$
By (A2), we have
$$\varphi_j(y) - \varphi_j(\Delta_{t_n}^j x_{t_n}) + \langle A_j \Delta_{t_n}^{j-1} x_{t_n}, y - \Delta_{t_n}^j x_{t_n} \rangle + \Big\langle y - \Delta_{t_n}^j x_{t_n}, \frac{\Delta_{t_n}^j x_{t_n} - \Delta_{t_n}^{j-1} x_{t_n}}{r_{j,t_n}} \Big\rangle \ge \Theta_j(y, \Delta_{t_n}^j x_{t_n}).$$
Let $y \in C$ and $t \in (0, 1]$, and set $z_t = t y + (1 - t) x^*$; then $z_t \in C$. Hence,
$$\langle z_t - \Delta_{t_n}^j x_{t_n}, A_j z_t \rangle \ge \varphi_j(\Delta_{t_n}^j x_{t_n}) - \varphi_j(z_t) + \Theta_j(z_t, \Delta_{t_n}^j x_{t_n}) + \langle z_t - \Delta_{t_n}^j x_{t_n}, A_j z_t - A_j \Delta_{t_n}^j x_{t_n} \rangle + \langle z_t - \Delta_{t_n}^j x_{t_n}, A_j \Delta_{t_n}^j x_{t_n} - A_j \Delta_{t_n}^{j-1} x_{t_n} \rangle - \Big\langle z_t - \Delta_{t_n}^j x_{t_n}, \frac{\Delta_{t_n}^j x_{t_n} - \Delta_{t_n}^{j-1} x_{t_n}}{r_{j,t_n}} \Big\rangle.$$
By (25), we have $\|A_j \Delta_{t_n}^j x_{t_n} - A_j \Delta_{t_n}^{j-1} x_{t_n}\| \to 0$ as $n \to \infty$. Please note that $\langle z_t - \Delta_{t_n}^j x_{t_n}, A_j z_t - A_j \Delta_{t_n}^j x_{t_n} \rangle \ge 0$ by the monotonicity of $A_j$. Letting $n \to \infty$, we get
$$\langle z_t - x^*, A_j z_t \rangle \ge \varphi_j(x^*) - \varphi_j(z_t) + \Theta_j(z_t, x^*). \tag{36}$$
Applying (A1), (A4) and (36), we obtain
$$0 = \Theta_j(z_t, z_t) + \varphi_j(z_t) - \varphi_j(z_t) \le t [\Theta_j(z_t, y) + \varphi_j(y) - \varphi_j(z_t)] + (1 - t) t \langle y - x^*, A_j z_t \rangle,$$
and hence
$$0 \le \Theta_j(z_t, y) + \varphi_j(y) - \varphi_j(z_t) + (1 - t) \langle y - x^*, A_j z_t \rangle.$$
Letting $t \to 0$, we get
$$0 \le \Theta_j(x^*, y) + \varphi_j(y) - \varphi_j(x^*) + \langle y - x^*, A_j x^* \rangle.$$
So, $x^* \in \mathrm{GMEP}(\Theta_j, \varphi_j, A_j)$ for each $j$, and hence $x^* \in \bigcap_{j=1}^{M} \mathrm{GMEP}(\Theta_j, \varphi_j, A_j)$. Therefore, $x^* \in \bigcap_{j=1}^{M} \mathrm{GMEP}(\Theta_j, \varphi_j, A_j) \cap \bigcap_{m=1}^{N} \mathrm{I}(B_m, R_m) \cap \Xi =: \Omega$.
Next, we prove that $x_t \to \tilde{x}$ as $t \to 0$. First, let us assert that $x^*$ is a solution of the VIP (15). As a matter of fact, since $x_t = x_t - w_t + (I - s_t A) T_t \Lambda_t^N \Delta_t^M x_t + s_t ((I - t \mu F) V x_t + t \gamma T_t \Lambda_t^N \Delta_t^M x_t)$, we have
$$x_t - T_t \Lambda_t^N \Delta_t^M x_t = x_t - w_t + s_t (V - A) T_t \Lambda_t^N \Delta_t^M x_t + s_t \big( V x_t - V T_t \Lambda_t^N \Delta_t^M x_t + t (\gamma T_t \Lambda_t^N \Delta_t^M x_t - \mu F V x_t) \big).$$
Since T t , Λ t N and Δ t M are nonexpansive mappings, I T t Λ t N Δ t M is monotone. By the monotonicity of I T t Λ t N Δ t M , we have
0 ( I T t Λ t N Δ t M ) x t ( I T t Λ t N Δ t M ) p , x t p s t ( V A ) T t v t ( V A ) x t , x t p + s t V x t V T t v t , x t p + s t ( V A ) x t , x t p + s t t ( γ T t v t μ F V x t ) , x t p .
Hence,
$$\langle (A - V) x_t, x_t - p \rangle \le \|(V - A) T_t v_t - (V - A) x_t\| \|x_t - p\| + \|V x_t - V T_t v_t\| \|x_t - p\| + t \|\gamma T_t v_t - \mu F V x_t\| \|x_t - p\| \le (2 + \|A\|) \|T_t v_t - x_t\| \|x_t - p\| + t (\gamma \|T_t v_t\| + \mu \|F V x_t\|) \|x_t - p\|. \tag{37}$$
Now, replacing $t$ in (37) with $t_n$ and noticing the boundedness of $\{\gamma \|T_{t_n} v_{t_n}\| + \mu \|F V x_{t_n}\|\}$ and the fact that $\|(V - A) T_{t_n} v_{t_n} - (V - A) x_{t_n}\| \to 0$, we deduce
$$\langle (A - V) x^*, x^* - p \rangle \le 0, \quad \forall p \in \Omega.$$
Thus, $x^* \in \Omega$ is a solution of (15); hence $x^* = \tilde{x}$ by uniqueness. Consequently, $x_t \to \tilde{x}$ as $t \to 0$. □
Theorem 2.
Assume that the sequences $\{\alpha_n\} \subset [0,1]$ and $\{s_n\} \subset (0, \frac{1}{2})$ satisfy (C1): $\lim_{n\to\infty}\alpha_n = 0$ and $\lim_{n\to\infty} s_n = 0$. Let the sequence $\{x_n\}$ be defined by (16). Then $\mathrm{LIM}_n\langle (A - V)\tilde{x},\, \tilde{x} - x_n\rangle \le 0$, where $\tilde{x} = \lim_{t\to 0^+} x_t$ and $x_t$ is defined by
$$x_t = P_C\big[(I - s_t A)T\Lambda^N\Delta^M x_t + s_t\big(Vx_t - t(\mu FVx_t - \gamma T\Lambda^N\Delta^M x_t)\big)\big],$$
where $T, \Lambda^N, \Delta^M : C \to C$ are defined by $Tx = P_C(I - \frac{2}{L}\nabla f)x$, $\Lambda^N x = J_{R_N,\lambda_N}(I - \lambda_N B_N)\cdots J_{R_1,\lambda_1}(I - \lambda_1 B_1)x$ and $\Delta^M x = T^{(\Theta_M,\varphi_M)}_{r_M}(I - r_M A_M)\cdots T^{(\Theta_1,\varphi_1)}_{r_1}(I - r_1 A_1)x$ for $\lambda_i \in [a_i, b_i] \subset (0, 2\eta_i)$, $i = 1, \ldots, N$, and $r_j \in [c_j, d_j] \subset (0, 2\mu_j)$, $j = 1, \ldots, M$.
Proof. 
We assume, without loss of generality, that $0 < s_n \le \|A\|^{-1}$ for all $n \ge 0$. Let $\lim_{t\to 0} x_t = \tilde{x}$, where $\tilde{x}$ is the unique solution of (15). By Proposition 4, we deduce that the nets $\{x_t\}$, $\{Vx_t\}$, $\{\Delta^M x_t\}$, $\{\Lambda^N\Delta^M x_t\}$ and $\{FVx_t\}$ are bounded.
Let $p \in \Omega$. Then $T_n p = p$, $\Lambda^N_n p = p$ and $\Delta^M_n p = p$. Applying Lemmas 5 and 7, we obtain
$$\begin{aligned}
\|x_{n+1} - p\| &\le s_n\big[(1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\gamma\|v_n - p\| + \alpha_n\|(\gamma I - \mu FV)p\|\big] + (1 - s_n\bar{\gamma})\|v_n - p\| + s_n\|(V - A)p\| \\
&\le s_n\big[(1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\gamma\|x_n - p\| + \alpha_n\|(\gamma I - \mu FV)p\|\big] + s_n\|(V - A)p\| + (1 - s_n\bar{\gamma})\|x_n - p\| \\
&\le \max\Big\{\frac{\|(V - A)p\| + \|(\gamma I - \mu FV)p\|}{\bar{\gamma} - 1},\, \|x_n - p\|\Big\}.
\end{aligned}$$
By induction,
$$\|x_n - p\| \le \max\Big\{\frac{\|(V - A)p\| + \|(\gamma I - \mu FV)p\|}{\bar{\gamma} - 1},\, \|x_0 - p\|\Big\}, \quad \forall n \ge 0.$$
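For completeness, the induction step behind this bound can be sketched as follows (a sketch assuming $\tau \ge \gamma$, writing $M$ for the maximum on the right-hand side):

```latex
% Induction step (sketch): suppose \|x_n - p\| \le M, where
%   M := \max\{ (\|(V-A)p\| + \|(\gamma I - \mu F V)p\|)/(\bar{\gamma}-1),\ \|x_0 - p\| \}.
% The estimate above can be rearranged as
\|x_{n+1} - p\|
  \le \bigl[1 - s_n(\bar{\gamma} - 1 + \alpha_n(\tau - \gamma))\bigr]\,\|x_n - p\|
      + s_n\bigl(\|(V - A)p\| + \alpha_n\|(\gamma I - \mu F V)p\|\bigr) \\
  \le \bigl[1 - s_n(\bar{\gamma} - 1)\bigr]\,M
      + s_n(\bar{\gamma} - 1)\cdot
        \frac{\|(V - A)p\| + \|(\gamma I - \mu F V)p\|}{\bar{\gamma} - 1}
  \le M .
```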
Thus, $\{x_n\}$, $\{u_n\}$, $\{v_n\}$, $\{T_n v_n\}$, $\{FVx_n\}$, $\{Vx_n\}$ and $\{y_n\}$ are all bounded. By (C1), we obtain
$$\|x_{n+1} - T_n v_n\| \le \|(I - s_n A)T_n v_n + s_n y_n - T_n v_n\| = s_n\|y_n - AT_n v_n\| \to 0 \quad \text{as } n \to \infty.$$
By (31), we also have
$$\|T\Lambda^N\Delta^M x_n - T_n\Lambda^N_n\Delta^M_n x_n\| \le \|\Lambda^N\Delta^M x_n - \Lambda^N_n\Delta^M_n x_n\| + \hat{M}\,\Big|\frac{2}{L} - \lambda_n\Big|,$$
where $\sup_{n\ge 0}\big\{L\|P_C(I - \frac{2}{L}\nabla f)v_n\| + 4\|\nabla f(v_n)\| + L\|v_n\|\big\} \le \hat{M}$ for some $\hat{M} > 0$. According to (32), we have
$$\|\Lambda^N\Delta^M x_n - \Lambda^N_n\Delta^M_n x_n\| \le \hat{M}_0 \sum_{i=1}^N |\lambda_i - \lambda_{i,n}| + \|\Delta^M x_n - \Delta^M_n x_n\|,$$
where
$$\sup_{n\ge 0}\Big\{\frac{1}{\lambda_i}\big\|J_{R_i,\lambda_i}(I - \lambda_{i,n}B_i)\Lambda^{i-1}\Delta^M x_n - (I - \lambda_{i,n}B_i)\Lambda^{i-1}_n u_n\big\| + \frac{1}{\lambda_{i,n}}\big\|(I - \lambda_{i,n}B_i)\Lambda^{i-1}\Delta^M x_n - J_{R_i,\lambda_{i,n}}(I - \lambda_{i,n}B_i)\Lambda^{i-1}_n u_n\big\|\Big\} \le \hat{N},$$
for some $\hat{N} > 0$, and $\sup_{n\ge 0}\big\{\sum_{i=1}^N \|B_i\Lambda^{i-1}\Delta^M x_n\| + \hat{N}\big\} \le \hat{M}_0$ for some $\hat{M}_0 > 0$. In terms of (33), we have
$$\|\Delta^M x_n - \Delta^M_n x_n\| \le \hat{M}_1 \sum_{j=1}^M |r_j - r_{j,n}|,$$
where $\sup_{n\ge 0}\big\{\sum_{j=1}^M \big[\|A_j\Delta^{j-1} x_n\| + \frac{1}{r_j}\|T^{(\Theta_j,\varphi_j)}_{r_j}(I - r_j A_j)\Delta^{j-1} x_n - (I - r_j A_j)\Delta^{j-1} x_n\|\big]\big\} \le \hat{M}_1$ for some $\hat{M}_1 > 0$. In terms of (39)–(41), we calculate
$$\|T\Lambda^N\Delta^M x_n - T_n\Lambda^N_n\Delta^M_n x_n\| \le \hat{M}_1 \sum_{j=1}^M |r_j - r_{j,n}| + \hat{M}_0 \sum_{i=1}^N |\lambda_i - \lambda_{i,n}| + \hat{M}\,\Big|\frac{2}{L} - \lambda_n\Big|.$$
It is clear that
$$\begin{aligned}
\|T\Lambda^N\Delta^M x_t - x_{n+1}\| &\le \|T\Lambda^N\Delta^M x_t - T\Lambda^N\Delta^M x_n\| + \|T\Lambda^N\Delta^M x_n - T_n\Lambda^N_n\Delta^M_n x_n\| + \|T_n\Lambda^N_n\Delta^M_n x_n - x_{n+1}\| \\
&\le \|x_t - x_n\| + \hat{M}_1 \sum_{j=1}^M |r_j - r_{j,n}| + \hat{M}_0 \sum_{i=1}^N |\lambda_i - \lambda_{i,n}| + \hat{M}\,\Big|\frac{2}{L} - \lambda_n\Big| + \|T_n v_n - x_{n+1}\| \\
&= \|x_t - x_n\| + \epsilon_n,
\end{aligned}$$
where $\epsilon_n = \hat{M}_1 \sum_{j=1}^M |r_j - r_{j,n}| + \hat{M}_0 \sum_{i=1}^N |\lambda_i - \lambda_{i,n}| + \hat{M}\,|\frac{2}{L} - \lambda_n| + \|T_n v_n - x_{n+1}\| \to 0$ as $n \to \infty$. Note that
$$\langle Ax_t - Ax_n,\, x_t - x_n\rangle \ge \bar{\gamma}\,\|x_t - x_n\|^2.$$
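The strong positivity used here can be checked concretely: for a symmetric positive definite matrix, $\langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2$ holds with $\bar{\gamma}$ the smallest eigenvalue. The following minimal sketch (an illustrative $2\times 2$ example, unrelated to the specific operator $A$ of the paper) verifies this on random samples.

```python
# Illustrative check of strong positivity <A x, x> >= gamma_bar * ||x||^2
# for a 2x2 symmetric matrix A with eigenvalues 1 and 3 (so gamma_bar = 1).
import random

A = [[2.0, 1.0], [1.0, 2.0]]
GAMMA_BAR = 1.0  # smallest eigenvalue of A

def quad(x):
    """Return the quadratic form <A x, x>."""
    Ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    return Ax[0] * x[0] + Ax[1] * x[1]

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10), random.uniform(-10, 10)]
    assert quad(x) >= GAMMA_BAR * (x[0] ** 2 + x[1] ** 2) - 1e-9
print("strong positivity holds on 1000 random samples")
```

Here the identity $\langle Ax, x\rangle = \|x\|^2 + (x_1 + x_2)^2$ makes the bound exact with $\bar{\gamma} = 1$.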
Furthermore, for simplicity, we write $w_t = (I - s_t A)T\Lambda^N\Delta^M x_t + s_t\big((I - t\mu F)Vx_t + t\gamma T\Lambda^N\Delta^M x_t\big)$. By (38), we get $x_t = P_C w_t$ and
$$\begin{aligned}
x_t - x_{n+1} ={}& (I - s_t A)T\Lambda^N\Delta^M x_t - (I - s_t A)x_{n+1} \\
&+ s_t\big[(I - t\mu F)Vx_t - (I - t\mu F)Vx_{n+1} + t(\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}) + (V - A)x_{n+1}\big] + x_t - w_t.
\end{aligned}$$
Applying Lemma 1 and Proposition 1, we have
$$\begin{aligned}
\|x_t - x_{n+1}\|^2 \le{}& 2s_t t\,\langle \gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1},\, x_t - x_{n+1}\rangle + 2s_t\langle (V - A)x_{n+1},\, x_t - x_{n+1}\rangle \\
&+ 2s_t\langle (I - t\mu F)Vx_t - (I - t\mu F)Vx_{n+1},\, x_t - x_{n+1}\rangle + \|(I - s_t A)T\Lambda^N\Delta^M x_t - (I - s_t A)x_{n+1}\|^2 \\
\le{}& (1 - s_t\bar{\gamma})^2\|T\Lambda^N\Delta^M x_t - x_{n+1}\|^2 + 2s_t\langle Vx_t - Vx_{n+1},\, x_t - x_{n+1}\rangle - 2s_t t\mu\,\langle FVx_t - FVx_{n+1},\, x_t - x_{n+1}\rangle \\
&+ 2s_t\langle (V - A)x_{n+1},\, x_t - x_{n+1}\rangle + 2s_t t\,\|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\,\|x_t - x_{n+1}\| \\
\le{}& (1 - s_t\bar{\gamma})^2\|T\Lambda^N\Delta^M x_t - x_{n+1}\|^2 + 2s_t\langle Vx_t - Vx_{n+1},\, x_t - x_{n+1}\rangle \\
&+ 2s_t t\,\big(\mu\|FVx_t - FVx_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| + 2s_t\langle (V - A)x_{n+1},\, x_t - x_{n+1}\rangle.
\end{aligned}$$
Using (42) and (43) in (44), we obtain
$$\begin{aligned}
\|x_t - x_{n+1}\|^2 \le{}& (1 - s_t\bar{\gamma})^2\big(\|x_t - x_n\| + \epsilon_n\big)^2 + 2s_t\langle Vx_t - Vx_{n+1},\, x_t - x_{n+1}\rangle \\
&+ 2s_t t\,\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| + 2s_t\langle (V - A)x_{n+1},\, x_t - x_{n+1}\rangle \\
\le{}& s_t^2\bar{\gamma}\,\langle A(x_t - x_n),\, x_t - x_n\rangle + \|x_t - x_n\|^2 + (1 - s_t\bar{\gamma})^2\big(2\|x_t - x_n\|\epsilon_n + \epsilon_n^2\big) \\
&+ 2s_t t\,\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| \\
&+ 2s_t\big[\langle (V - A)x_t,\, x_t - x_{n+1}\rangle + \langle A(x_t - x_{n+1}),\, x_t - x_{n+1}\rangle - \langle A(x_t - x_n),\, x_t - x_n\rangle\big].
\end{aligned}$$
Applying the Banach limit LIM to (45), we have
$$\begin{aligned}
\mathrm{LIM}_n\|x_t - x_{n+1}\|^2 \le{}& s_t^2\bar{\gamma}\,\mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle + \mathrm{LIM}_n\|x_t - x_n\|^2 \\
&+ 2s_t t\,\mathrm{LIM}_n\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| \\
&+ 2s_t\big[\mathrm{LIM}_n\langle (V - A)x_t,\, x_t - x_{n+1}\rangle + \mathrm{LIM}_n\langle A(x_t - x_{n+1}),\, x_t - x_{n+1}\rangle - \mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle\big].
\end{aligned}$$
Using the property $\mathrm{LIM}_n a_n = \mathrm{LIM}_n a_{n+1}$, we have
$$\begin{aligned}
\mathrm{LIM}_n\langle (A - V)x_t,\, x_t - x_n\rangle ={}& \mathrm{LIM}_n\langle (A - V)x_t,\, x_t - x_{n+1}\rangle \\
\le{}& \frac{s_t\bar{\gamma}}{2}\,\mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle + \frac{1}{2s_t}\big[\mathrm{LIM}_n\|x_t - x_n\|^2 - \mathrm{LIM}_n\|x_t - x_{n+1}\|^2\big] \\
&+ t\,\mathrm{LIM}_n\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| \\
&+ \mathrm{LIM}_n\langle A(x_t - x_{n+1}),\, x_t - x_{n+1}\rangle - \mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle \\
={}& \frac{s_t\bar{\gamma}}{2}\,\mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle + t\,\mathrm{LIM}_n\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\|.
\end{aligned}$$
Since
$$s_t\,\langle A(x_t - x_n),\, x_t - x_n\rangle \le s_t\,\|A\|\,\|x_t - x_n\|^2 \le s_t\,\|A\|\,K^2 \to 0 \quad \text{as } t \to 0,$$
where $K$ is a constant with $\|x_t - x_n\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_n\| \le K$, and
$$t\,\|x_t - x_{n+1}\|^2 \to 0 \quad \text{and} \quad t\,\|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\,\|x_t - x_{n+1}\| \to 0 \quad \text{as } t \to 0,$$
we conclude from (47)–(49) that
$$\begin{aligned}
\mathrm{LIM}_n\langle (A - V)\tilde{x},\, \tilde{x} - x_n\rangle &\le \limsup_{t\to 0}\,\mathrm{LIM}_n\langle (A - V)x_t,\, x_t - x_n\rangle \\
&\le \limsup_{t\to 0}\, t\,\mathrm{LIM}_n\big(\mu\kappa\|x_t - x_{n+1}\| + \|\gamma T\Lambda^N\Delta^M x_t - \mu FVx_{n+1}\|\big)\|x_t - x_{n+1}\| \\
&\quad + \limsup_{t\to 0}\, \frac{s_t\bar{\gamma}}{2}\,\mathrm{LIM}_n\langle A(x_t - x_n),\, x_t - x_n\rangle = 0.
\end{aligned}$$
This completes the proof. □
Theorem 3.
Let the sequences $\{\alpha_n\} \subset [0,1]$ and $\{s_n\} \subset (0, \frac{1}{2})$ satisfy (C1) $\lim_{n\to\infty}\alpha_n = 0$, $\lim_{n\to\infty} s_n = 0$ and (C2) $\sum_{n=0}^\infty s_n = \infty$. Let the sequence $\{x_n\}$ be defined by (16). If $\|x_{n+1} - x_n\| \to 0$, then $\{x_n\}$ converges strongly to $\tilde{x} \in \Omega$, which solves (15).
Proof. 
We assume, without loss of generality, that $\alpha_n\tau < 1$ and $\frac{2s_n(\bar{\gamma} - 1)}{1 - s_n} < 1$ for all $n \ge 0$. Let $x_t$ be defined by (38). Then $\lim_{t\to 0} x_t =: \tilde{x} \in \Omega$ (due to Theorem 1). We divide the rest of the proof into several steps.
Step 1. It is easy to show that
$$\|x_n - p\| \le \max\Big\{\|x_0 - p\|,\, \frac{\|(V - A)p\| + \|(\gamma I - \mu FV)p\|}{\bar{\gamma} - 1}\Big\}, \quad \forall n \ge 0.$$
Hence $\{x_n\}$, $\{u_n\}$, $\{v_n\}$, $\{T_n v_n\}$, $\{FVx_n\}$, $\{Vx_n\}$ and $\{y_n\}$ are all bounded.
Step 2. We show that $\limsup_{n\to\infty}\langle (V - A)\tilde{x},\, x_n - \tilde{x}\rangle \le 0$. To this end, set
$$a_n := \langle (A - V)\tilde{x},\, \tilde{x} - x_n\rangle, \quad n \ge 0.$$
According to Theorem 2, we deduce $\mathrm{LIM}_n a_n \le 0$. Choose a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\,(a_{n+1} - a_n) = \lim_{j\to\infty}\,(a_{n_j+1} - a_{n_j})$$
and $x_{n_j} \rightharpoonup v \in H$. Since $\|x_{n+1} - x_n\| \to 0$, this implies $x_{n_j+1} \rightharpoonup v$ as well. Hence,
$$\operatorname{w-}\lim_{j\to\infty}(\tilde{x} - x_{n_j+1}) = \operatorname{w-}\lim_{j\to\infty}(\tilde{x} - x_{n_j}) = \tilde{x} - v,$$
and therefore
$$\limsup_{n\to\infty}\,(a_{n+1} - a_n) = \lim_{j\to\infty}\,\langle (A - V)\tilde{x},\, (\tilde{x} - x_{n_j+1}) - (\tilde{x} - x_{n_j})\rangle = 0.$$
Then, by Lemma 8, we derive
$$\limsup_{n\to\infty}\,\langle (V - A)\tilde{x},\, x_n - \tilde{x}\rangle = \limsup_{n\to\infty}\,\langle (A - V)\tilde{x},\, \tilde{x} - x_n\rangle \le 0.$$
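The passage from the Banach-limit estimate of Theorem 2 to this $\limsup$ estimate rests on a standard property of Banach limits (cf. Shioji and Takahashi); the following is the usual form of that fact, which Lemma 8 presumably encapsulates:

```latex
% Standard Banach-limit fact (cf. Shioji--Takahashi): if (a_n) is a bounded
% real sequence such that
\limsup_{n\to\infty}\,(a_{n+1} - a_n) \le 0,
% then, for every Banach limit LIM,
\limsup_{n\to\infty} a_n \le \operatorname{LIM}_n a_n .
% Combined with LIM_n a_n \le 0, this gives \limsup_n a_n \le 0, as used above.
```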
Step 3. We show that $\lim_{n\to\infty}\|x_n - \tilde{x}\| = 0$. Set $w_n = (I - s_n A)T_n v_n + s_n y_n$ for all $n \ge 0$; then $x_{n+1} = P_C w_n$. Utilizing (16) and $T_n\Lambda^N_n\Delta^M_n\tilde{x} = T_n\tilde{x} = \tilde{x}$, we have
$$\begin{aligned}
x_{n+1} - \tilde{x} ={}& s_n\big[(I - \alpha_n\mu F)Vx_n - (I - \alpha_n\mu F)V\tilde{x} + \alpha_n\gamma(T_n v_n - T_n\tilde{x}) + \alpha_n(\gamma I - \mu FV)\tilde{x}\big] \\
&+ (I - s_n A)(T_n v_n - T_n\tilde{x}) + s_n(V - A)\tilde{x} + x_{n+1} - w_n.
\end{aligned}$$
Thus, utilizing Proposition 1 and Lemma 1, we get
$$\begin{aligned}
\|x_{n+1} - \tilde{x}\|^2 \le{}& \big[(1 - s_n\bar{\gamma})\|T_n v_n - T_n\tilde{x}\| + s_n\big((1 - \alpha_n\tau)\|x_n - \tilde{x}\| + \alpha_n\gamma\|T_n v_n - T_n\tilde{x}\|\big)\big]^2 \\
&+ 2s_n\big[\alpha_n\|(\gamma I - \mu FV)\tilde{x}\|\,\|x_{n+1} - \tilde{x}\| + \langle (A - V)\tilde{x},\, \tilde{x} - x_{n+1}\rangle\big] \\
\le{}& \big[1 - s_n(\bar{\gamma} - 1 + \alpha_n(\tau - \gamma))\big]\|x_n - \tilde{x}\|^2 + 2s_n\big[\alpha_n\|(\gamma I - \mu FV)\tilde{x}\|\,\|x_{n+1} - \tilde{x}\| + \langle (A - V)\tilde{x},\, \tilde{x} - x_{n+1}\rangle\big] \\
={}& \big[1 - s_n(\bar{\gamma} - 1 + \alpha_n(\tau - \gamma))\big]\|x_n - \tilde{x}\|^2 \\
&+ s_n(\bar{\gamma} - 1 + \alpha_n(\tau - \gamma)) \cdot \frac{2}{\bar{\gamma} - 1 + \alpha_n(\tau - \gamma)}\big[\alpha_n\|(\gamma I - \mu FV)\tilde{x}\|\,\|x_{n+1} - \tilde{x}\| + \langle (A - V)\tilde{x},\, \tilde{x} - x_{n+1}\rangle\big] \\
={}& (1 - \omega_n)\|x_n - \tilde{x}\|^2 + \omega_n\delta_n,
\end{aligned}$$
where $\omega_n = s_n(\bar{\gamma} - 1 + \alpha_n(\tau - \gamma))$ and
$$\delta_n = \frac{2}{\bar{\gamma} - 1 + \alpha_n(\tau - \gamma)}\big[\alpha_n\|(\gamma I - \mu FV)\tilde{x}\|\,\|x_{n+1} - \tilde{x}\| + \langle (A - V)\tilde{x},\, \tilde{x} - x_{n+1}\rangle\big].$$
It is easy to check that $\omega_n \to 0$, $\sum_{n=0}^\infty \omega_n = \infty$ and $\limsup_{n\to\infty}\delta_n \le 0$. By Lemma 6 with $r_n = 0$, we deduce that $\lim_{n\to\infty}\|x_n - \tilde{x}\| = 0$. This completes the proof. □
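The sequence lemma invoked here (if $\omega_n \in (0,1]$, $\sum_n \omega_n = \infty$ and $\limsup_n \delta_n \le 0$, then the recursion $a_{n+1} \le (1-\omega_n)a_n + \omega_n\delta_n$ forces $a_n \to 0$) can also be probed numerically. The parameter choices below are illustrative only and are not taken from the paper.

```python
# Numerical probe of the sequence lemma behind Step 3:
# a_{n+1} = (1 - w_n) a_n + w_n d_n with w_n in (0, 1], sum_n w_n = infinity,
# and limsup d_n <= 0 should force a_n -> 0.  Illustrative choices:
# w_n = d_n = 1/(n + 1); then a_n admits the closed form H_n / n,
# where H_n is the n-th harmonic number.

def run(a0=5.0, steps=100_000):
    a = a0
    for n in range(steps):
        w = 1.0 / (n + 1)   # relaxation weights with divergent sum
        d = 1.0 / (n + 1)   # perturbations tending to 0
        a = (1.0 - w) * a + w * d
    return a

print(run())   # decays like log(n)/n; about 1.2e-4 after 100000 steps
```

Note that the starting value is forgotten immediately here (since $\omega_0 = 1$), while in general the divergence of $\sum_n \omega_n$ is what washes out the initial condition.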

Author Contributions

All the authors have contributed equally to this paper. All the authors have read and approved the final manuscript.

Funding

This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Conflicts of Interest

The authors declare no conflict of interest.

