Article

Projected Subgradient Algorithms for Pseudomonotone Equilibrium Problems and Fixed Points of Pseudocontractive Operators

1 School of Mathematical Sciences, Tiangong University, Tianjin 300387, China
2 The Key Laboratory of Intelligent Information and Big Data Processing of NingXia Province, North Minzu University, Yinchuan 750021, China
3 Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
4 Center for General Education, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(4), 461; https://doi.org/10.3390/math8040461
Submission received: 12 December 2019 / Revised: 13 March 2020 / Accepted: 19 March 2020 / Published: 25 March 2020

Abstract: Projected subgradient algorithms can be considered as an improvement of the projected algorithms and the subgradient algorithms for equilibrium problems involving monotone and Lipschitz-continuous bifunctions. In this paper, we present and analyze an iterative algorithm for finding a common element of the set of fixed points of a pseudocontractive operator and the solution set of a pseudomonotone equilibrium problem in Hilbert spaces. The suggested algorithm combines the projected method and the subgradient method with a linesearch technique. We prove a strong convergence result for the iterative sequence generated by this algorithm. Some applications are also included. Our result improves and extends some existing results in the literature.

1. Introduction

Throughout, let $H$ be a real Hilbert space endowed with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$ (i.e., $\|x\| = \sqrt{\langle x, x\rangle}$ for $x \in H$). Let $C \subset H$ be a closed and convex set. Let $f: C \times C \to \mathbb{R}$ be a bifunction. Recall that $f$ is said to be monotone if
$$f(u, v) + f(v, u) \le 0, \quad \forall u, v \in C. \tag{1}$$
$f$ is said to be pseudomonotone if
$$f(u, v) \ge 0 \ \text{ implies } \ f(v, u) \le 0, \quad \forall u, v \in C. \tag{2}$$
Clearly, we have the implication: Equation (1) $\Rightarrow$ Equation (2).
In this paper, our research is associated with the equilibrium problem [1] of seeking an element $\tilde{u} \in C$ such that
$$f(\tilde{u}, u) \ge 0, \quad \forall u \in C. \tag{3}$$
The solution set of the equilibrium problem in Equation (3) is denoted by $EP(f, C)$.
Equilibrium problems have been studied extensively in the literature (see, e.g., [2,3,4,5]). Many problems, such as variational inequalities [6,7,8,9,10,11,12,13,14,15], fixed point problems [16,17,18,19,20,21], and Nash equilibrium in noncooperative games theory [1,22], can be formulated in the form of Equation (3). An important method for solving Equation (3) is the proximal point method, which was originally introduced by Martinet [23] and further developed by Rockafellar [24] for finding a zero of maximal monotone operators. In 2000, Konnov [25] extended the proximal point method to the monotone equilibrium problem. However, the proximal point method cannot be applied for solving the pseudomonotone equilibrium problem [26].
Another basic algorithm for solving the equilibrium problem is the projection algorithm [27]. However, the projection algorithm may fail to converge for the pseudomonotone equilibrium problem. To overcome this disadvantage, the extragradient algorithm [4] can be applied to solve the pseudomonotone equilibrium problem. More precisely, the extragradient algorithm generates a sequence $\{x_k\}$ iteratively as follows
$$y_k = \arg\min_{y \in C} \Big\{ \lambda f(x_k, y) + \frac{1}{2}\|x_k - y\|^2 \Big\}, \qquad x_{k+1} = \arg\min_{y \in C} \Big\{ \lambda f(y_k, y) + \frac{1}{2}\|x_k - y\|^2 \Big\}. \tag{4}$$
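For intuition, consider the variational-inequality case $f(u, v) = \langle Au, v - u\rangle$ (treated in Section 4), where each argmin in Equation (4) reduces to a metric projection. The sketch below illustrates the resulting extragradient iteration; the skew (hence monotone) operator $A$ and the box constraint are our own illustrative assumptions, not data from the paper.

```python
# Extragradient sketch for the VI special case f(u, v) = <Au, v - u>:
# each strongly convex subproblem in Equation (4) becomes a projection,
#   y_k = P_C(x_k - lam * A x_k),  x_{k+1} = P_C(x_k - lam * A y_k).
# Illustrative assumptions: A(x) = (x2, -x1) is skew (hence monotone)
# and C = [-1, 1]^2, so P_C is a coordinatewise clip.

def A(x):
    return (x[1], -x[0])

def proj_C(x):  # projection onto the box [-1, 1]^2
    return tuple(max(-1.0, min(1.0, t)) for t in x)

def extragradient(x, lam=0.5, iters=200):
    for _ in range(iters):
        ax = A(x)
        y = proj_C((x[0] - lam * ax[0], x[1] - lam * ax[1]))
        ay = A(y)
        x = proj_C((x[0] - lam * ay[0], x[1] - lam * ay[1]))
    return x

x = extragradient((0.5, 0.3))
# x approaches the unique solution (0, 0) of this VI
```

Note that the plain projected gradient iteration diverges on this skew operator; the extra evaluation at $y_k$ is what restores convergence.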
However, the main difficulty of the extragradient algorithm in Equation (4) is that, at each iterative step, it requires solving two strongly convex programs. Consequently, the subgradient algorithm [28,29] has been proposed and developed for solving a large class of equilibrium problems; it solves only one strongly convex program rather than two as in the extragradient algorithm, and the convergence results show the efficiency of these algorithms.
At the same time, to solve the equilibrium problem in Equation (3), the bifunction $f$ is usually assumed to satisfy the following Lipschitz-type condition [30]:
$$f(u, v) + f(v, w) \ge f(u, w) - \zeta_1 \|u - v\|^2 - \zeta_2 \|v - w\|^2, \quad \forall u, v, w \in C, \tag{5}$$
where $\zeta_1$ and $\zeta_2$ are two positive constants.
It should be pointed out that the condition in Equation (5), in general, is not satisfied. Moreover, even if the condition in Equation (5) holds, finding the constants $\zeta_1$ and $\zeta_2$ is not an easy task. To avoid this difficulty, one can incorporate a linesearch procedure into the iterative step of the algorithm. The current study continues developing subgradient algorithms without the Lipschitz-type condition for solving the equilibrium problem.
Another problem of interest is the fixed point problem of nonlinear operators. Recall that an operator $S: C \to C$ is said to be pseudocontractive if
$$\|Su - Su'\|^2 \le \|u - u'\|^2 + \|(I - S)u - (I - S)u'\|^2 \tag{6}$$
and $S$ is called $L$-Lipschitz if
$$\|Su - Su'\| \le L\|u - u'\|$$
for some $L > 0$ and for all $u, u' \in C$. If $L = 1$, then $S$ is said to be nonexpansive.
It is easy to see that the class of pseudocontractive operators includes the class of nonexpansive operators. The interest in pseudocontractive operators [2,31] is due mainly to their connection with the important class of nonlinear monotone (accretive) operators.
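A quick numerical illustration of this inclusion being strict: the map $Su = -2u$ on the real line satisfies the defining inequality of pseudocontractivity and is 2-Lipschitz, but it is not nonexpansive. This toy operator is our own example, not one from the paper.

```python
# S(u) = -2u: pseudocontractive and 2-Lipschitz, but not nonexpansive.
# Toy example for illustration only.

def S(u):
    return -2.0 * u

def pseudocontractive_holds(u, v):
    # ||Su - Sv||^2 <= ||u - v||^2 + ||(I - S)u - (I - S)v||^2
    lhs = (S(u) - S(v)) ** 2
    rhs = (u - v) ** 2 + ((u - S(u)) - (v - S(v))) ** 2
    return lhs <= rhs

def nonexpansive_holds(u, v):
    return abs(S(u) - S(v)) <= abs(u - v)

pairs = [(0.0, 1.0), (-2.5, 3.0), (0.1, -0.4)]
assert all(pseudocontractive_holds(u, v) for u, v in pairs)
assert not nonexpansive_holds(0.0, 1.0)   # |S0 - S1| = 2 > 1
```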
The fixed point problem has numerous applications in science and engineering, and it includes the optimization problem [32], the convex feasibility problem [2], the variational inequality problem [33], and so on. The fixed point problem can be solved by using iterative methods, such as the Mann method [34], the Halpern method [35], and the hybrid method [36].
In this paper, we study iterative algorithms for finding a common element of the set of solutions of the equilibrium problem and the set of fixed points of a wide class of nonlinear operators. The main motivation for considering such a common problem is its possible applications in network resource allocation, signal processing, and image recovery [28,37]. Recently, iterative algorithms for solving a common problem of the equilibrium problem and the fixed point problem have been investigated by many researchers [28,38,39,40]. In particular, Nguyen, Strodiot, and Nguyen [41] (Algorithm 3) presented the following hybrid self-adaptive method for solving the equilibrium problem and the fixed point problem:
Let $\alpha \in (0, 2)$ and $\gamma \in (0, 1)$. Let $x_0 \in H$ and $C_1 = C$. Let $x_1 = P_{C_1}[x_0]$ and set $k = 1$.
Step 1. Compute $y_k = \arg\min_{y \in C} \{\lambda_k f(x_k, y) + \frac{1}{2}\|x_k - y\|^2\}$ and $w_k = (1 - \gamma^m) x_k + \gamma^m y_k$, where $m$ is the smallest nonnegative integer such that $f(w_k, x_k) - f(w_k, y_k) \ge \frac{\alpha}{2\lambda_k}\|x_k - y_k\|^2$.
Step 2. Calculate $z_k = P_C[x_k - \sigma_k g_k]$, where $g_k \in \partial_2 f(w_k, x_k)$ and $\sigma_k = \frac{f(w_k, x_k)}{\|g_k\|^2}$ if $y_k \ne x_k$ and $\sigma_k = 0$ otherwise.
Step 3. Calculate $t_k = \alpha_k z_k + (1 - \alpha_k) T_n z_k$, where $T_n: C \to C$ is nonexpansive.
Step 4. Compute $x_{k+1} = P_{C_{k+1}}[x_0]$, where $C_{k+1} = \{z \in C_k : \|t_k - z\|^2 \le \|x_k - z\|^2 - (1 - \alpha_k)\alpha_k \|z_k - T_n z_k\|^2\}$.
Step 5. Set $k := k + 1$ and return to Step 1.
We observe that, in the above algorithm, $f$ is assumed to be monotone, the involved operator $T_n$ is nonexpansive, and the construction of the half-space $C_{k+1}$ is complicated.
The purpose of this paper is to improve and extend the main result in [41] to a more general setting: (i) we consider the pseudomonotone equilibrium problem, that is, $f$ is assumed to be pseudomonotone; (ii) we extend $T_n$ from a nonexpansive operator to a pseudocontractive operator, a class which includes the nonexpansive operators as a special case; and (iii) we adopt a simpler form of the half-space $C_{k+1}$. We propose an iterative algorithm for seeking a common solution of the pseudomonotone equilibrium problem and a fixed point of a pseudocontractive operator. The suggested algorithm combines the projected method and the subgradient method with a linesearch technique. We prove a strong convergence result for the iterative sequence generated by this algorithm.
The paper is organized as follows. In Section 2, we collect several notations and lemmas that are used in the paper. In Section 3, we adapt and suggest an iterative algorithm and prove its convergence. In Section 4, we give some applications. Finally, a concluding remark is included.

2. Notations and Lemmas

Throughout, we assume that $C$ is a convex and closed subset of a real Hilbert space $H$. The following symbols are needed in the paper.
  • $p_k \rightharpoonup p$ indicates the weak convergence of $p_k$ to $p$ as $k \to \infty$.
  • $p_k \to p$ indicates the strong convergence of $p_k$ to $p$ as $k \to \infty$.
  • $Fix(S)$ means the set of fixed points of $S$.
  • $\omega_w(p_k) = \{p : \exists \{p_{k_i}\} \subset \{p_k\} \text{ such that } p_{k_i} \rightharpoonup p \ (i \to \infty)\}$.
Let $g: C \to (-\infty, +\infty]$ be a function.
  • $g$ is said to be proper if $\{u \in C : g(u) < +\infty\} \ne \emptyset$.
  • $g$ is said to be lower semicontinuous if $\{x \in C : g(x) \le r\}$ is closed for each $r \in \mathbb{R}$.
  • $g$ is said to be convex if $g(\alpha u + (1 - \alpha)v) \le \alpha g(u) + (1 - \alpha)g(v)$ for every $u, v \in C$ and $\alpha \in [0, 1]$.
  • $g$ is said to be $\rho$-strongly convex ($\rho > 0$) if $g(\alpha u + (1 - \alpha)v) + \frac{\rho}{2}\alpha(1 - \alpha)\|u - v\|^2 \le \alpha g(u) + (1 - \alpha)g(v)$ for every $u, v \in C$ and $\alpha \in (0, 1)$.
Let $g: C \to (-\infty, +\infty]$ be a proper, lower semicontinuous, and convex function. Then, the subdifferential $\partial g$ of $g$ is defined by
$$\partial g(u) := \{v \in H : g(u) + \langle v, u' - u\rangle \le g(u'), \ \forall u' \in C\}$$
for each $u \in C$.
It is known that $\partial g$ possesses the following properties:
(i)
$\partial g$ is a set-valued maximal monotone operator.
(ii)
If $g$ is $\rho$-strongly convex ($\rho > 0$), then $\partial g$ is $\rho$-strongly monotone, i.e., $\langle v_1 - v_2, \tilde{u} - \tilde{v}\rangle \ge \rho \|\tilde{u} - \tilde{v}\|^2$ for all $v_1 \in \partial g(\tilde{u})$ and $v_2 \in \partial g(\tilde{v})$.
(iii)
$u^*$ is a solution of the optimization problem $\min_{u \in C} g(u)$ if and only if $0 \in \partial g(u^*) + N_C(u^*)$, where $N_C(u)$ denotes the normal cone of $C$ at $u$, defined by
$$N_C(u) = \{\omega \in H : \langle \omega, u' - u\rangle \le 0, \ \forall u' \in C\}.$$
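As a small worked instance of property (iii) (our own illustration, not from the paper), take $C$ to be the closed unit ball; the optimality condition then recovers the metric projection onto $C$.

```latex
% Worked example (illustrative): C = closed unit ball in H.
% Normal cone: N_C(u) = {0} if ||u|| < 1, and {t u : t >= 0} if ||u|| = 1.
% Minimize g(u) = (1/2)||u - a||^2 over C, with ||a|| > 1.
\[
0 \in \partial g(u^*) + N_C(u^*)
\iff u^* - a + t\,u^* = 0 \ \ (t \ge 0)
\iff u^* = \frac{a}{1 + t}.
\]
% Choosing t = ||a|| - 1 > 0 enforces ||u^*|| = 1, hence
\[
u^* = \frac{a}{\|a\|} = P_C[a],
\]
% which is exactly the metric projection of a onto the unit ball.
```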
Let $f: C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:
(f1):
$f(z, z) = 0$ for all $z \in C$;
(f2):
$f$ is pseudomonotone on $EP(f, C)$;
(f3):
$f$ is jointly sequentially weakly continuous on $\Delta \times \Delta$, where $\Delta$ is an open convex set containing $C$ (recall that $f$ is called jointly sequentially weakly continuous on $\Delta \times \Delta$ if $x_k \rightharpoonup x$ and $y_k \rightharpoonup y$ imply $f(x_k, y_k) \to f(x, y)$); and
(f4):
$f(z, \cdot)$ is convex and subdifferentiable for all $z \in C$.
For each $z, x \in C$, we use $\partial_2 f(z, x)$ to denote the subdifferential of $f(z, \cdot)$ at $x$.
Recall that the metric projection $P_C: H \to C$ assigns to each point of $H$ its nearest point in $C$; it possesses the following characteristic: for given $x \in H$,
$$\langle x - P_C[x], y - P_C[x]\rangle \le 0, \quad \forall y \in C. \tag{7}$$
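The characterization in Equation (7) can be checked numerically. Below, the set $C = [0, 1]^2$ and the test point are our own illustrative assumptions; for a box, $P_C$ is a coordinatewise clip.

```python
# Check the variational characterization (7) of the metric projection:
#   <x - P_C[x], y - P_C[x]> <= 0  for all y in C,
# with C = [0, 1]^2, where P_C clips each coordinate.

import itertools

def proj_C(x):
    return tuple(max(0.0, min(1.0, t)) for t in x)

def inner(a, b):
    return sum(s * t for s, t in zip(a, b))

x = (1.7, -0.6)
px = proj_C(x)                      # = (1.0, 0.0)

# sample y over a grid of points of C
grid = [i / 4 for i in range(5)]
ok = all(
    inner((x[0] - px[0], x[1] - px[1]), (y1 - px[0], y2 - px[1])) <= 1e-12
    for y1, y2 in itertools.product(grid, grid)
)
assert ok and px == (1.0, 0.0)
```

Geometrically, Equation (7) says the residual $x - P_C[x]$ makes an obtuse angle with every direction pointing from $P_C[x]$ into $C$, i.e., it lies in the normal cone $N_C(P_C[x])$.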
The following lemmas are used in the next section.
Lemma 1
([42]). In a Hilbert space $H$, we have
$$\|\kappa u + (1 - \kappa)u'\|^2 = \kappa \|u\|^2 + (1 - \kappa)\|u'\|^2 - \kappa(1 - \kappa)\|u - u'\|^2,$$
$\forall u, u' \in H$ and $\kappa \in [0, 1]$.
Lemma 2
([31]). Assume that the operator $S: C \to C$ is $L$-Lipschitz pseudocontractive. Then, for all $\tilde{u} \in C$ and $u^* \in Fix(S)$, we have
$$\|u^* - S((1 - \eta)\tilde{u} + \eta S\tilde{u})\|^2 \le \|\tilde{u} - u^*\|^2 + (1 - \eta)\|\tilde{u} - S((1 - \eta)\tilde{u} + \eta S\tilde{u})\|^2,$$
where $0 < \eta < \frac{1}{\sqrt{1 + L^2} + 1}$.
The next lemma plays a critical role; it can be considered as an infinite-dimensional version of Theorem 24.5 in [43]. The proof can be found in [44].
Lemma 3.
Assume that the bifunction $f: \Delta \times \Delta \to \mathbb{R}$ satisfies Assumptions (f3) and (f4). Given two points $\bar{u}, \bar{v} \in \Delta$ and two sequences $\{u_k\} \subset \Delta$ and $\{v_k\} \subset \Delta$ with $u_k \rightharpoonup \bar{u}$ and $v_k \rightharpoonup \bar{v}$, respectively, then, for any $\epsilon > 0$, there exist $\eta > 0$ and $N_\epsilon \in \mathbb{N}$ verifying
$$\partial_2 f(v_k, u_k) \subset \partial_2 f(\bar{v}, \bar{u}) + \frac{\epsilon}{\eta} B$$
for every $k \ge N_\epsilon$, where $B := \{b \in H : \|b\| \le 1\}$.
The following lemma is the demi-closed principle of the pseudocontractive operator.
Lemma 4
([45]). If the operator $S: C \to C$ is continuous pseudocontractive, then:
(i) 
the fixed point set $Fix(S) \subset C$ is closed and convex; and
(ii) 
$S$ satisfies demi-closedness, i.e., $t_k \rightharpoonup \tilde{z}$ and $St_k \to z$ as $k \to \infty$ imply that $S\tilde{z} = z$.
Lemma 5
([46]). For a given sequence $\{q_k\} \subset H$ and $q \in H$, if $\omega_w(q_k) \subset C$ and $\|q_k - q\| \le \|q - P_C[q]\|$ for all $k \in \mathbb{N}$, then $q_k \to P_C[q]$.

3. Main Results

In this section, we first present our algorithm (Algorithm 1) for solving the pseudomonotone equilibrium problem and the fixed point problem, and we then prove the convergence of the suggested algorithm. We begin by stating several assumptions on the underlying spaces, the involved operators, and the control parameters.
Assumptions:
(A1):
$C \subset H$ is closed convex and $\Delta$ is a given open set which contains $C$;
(A2):
the bifunction $f: \Delta \times \Delta \to \mathbb{R}$ satisfies Assumptions (f1)–(f4) stated in Section 2 (under this condition, $EP(f, C)$ is closed and convex [3]);
(A3):
the operator $S: C \to C$ is Lipschitz pseudocontractive with Lipschitz constant $L > 0$;
(A4):
the intersection $EP(f, C) \cap Fix(S) \ne \emptyset$;
(C1):
the sequence $\{\lambda_k\}$ satisfies $\lambda_k \in [\rho, 1]$ with $0 < \rho \le 1$ for all $k \ge 0$;
(C2):
the sequences $\{\delta_k\}$ and $\{\sigma_k\}$ satisfy $0 < \underline{\delta} < \delta_k < \bar{\delta} < \sigma_k < \bar{\sigma} < \frac{1}{\sqrt{1 + L^2} + 1}$, $\forall k \ge 0$; and
(C3):
$\gamma \in (0, 2)$ and $\mu \in (0, 1)$ are two constants.
Algorithm 1: Let $x_0 \in H$ be an initial guess.
Step 1.
Set $C_1 = C$ and compute $x_1 = P_{C_1}[x_0]$. Set $k = 0$.
Step 2.
Assume that the current iterate $x_k$ has been given and then calculate
$$y_k = \arg\min_{y \in C} \Big\{ f(x_k, y) + \frac{1}{2\lambda_k}\|x_k - y\|^2 \Big\}. \tag{8}$$
Step 3.
Compute $z_k$ in the following manner:
$$z_k = (1 - \mu^{m_k}) x_k + \mu^{m_k} y_k, \tag{9}$$
where $m_k$ is the smallest nonnegative integer verifying
$$f(z_k, x_k) - f(z_k, y_k) \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2. \tag{10}$$
Step 4.
Calculate $u_k$ via
$$u_k = \begin{cases} x_k, & \text{if } z_k = x_k, \\ P_C\Big[x_k - \dfrac{f(z_k, x_k)}{\|g_k\|^2}\, g_k\Big], \ \text{where } g_k \in \partial_2 f(z_k, x_k), & \text{if } z_k \ne x_k. \end{cases} \tag{11}$$
Step 5.
Calculate the next iterate $x_{k+1}$ by the following form:
$$v_k = (1 - \delta_k) u_k + \delta_k S[(1 - \sigma_k) u_k + \sigma_k S u_k], \qquad C_{k+1} = \{u \in C_k : \|v_k - u\| \le \|x_k - u\|\}, \qquad x_{k+1} = P_{C_{k+1}}[x_0]. \tag{12}$$
Step 6.
Set $k := k + 1$ and return to Step 2.
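In the VI specialization of Section 4, where $f(u, v) = \langle Au, v - u\rangle$, the search rule in Equation (10) reads $\langle A z_k, x_k - y_k\rangle \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2$, and Step 3 becomes a plain backtracking loop. The sketch below illustrates this; the operator and the points are our own illustrative assumptions.

```python
# Step 3 (Armijo-type linesearch) sketched for the VI case
# f(u, v) = <Au, v - u>, where Equation (10) reads
#   <A z_k, x_k - y_k> >= (gamma / (2 * lam)) * ||x_k - y_k||^2.
# m_k is the smallest nonnegative integer making the test hold.
# A, x, y below are illustrative assumptions, not data from the paper.

def inner(a, b):
    return sum(s * t for s, t in zip(a, b))

def linesearch(A, x, y, lam, gamma=1.0, mu=0.5, max_m=60):
    d = (x[0] - y[0], x[1] - y[1])
    rhs = gamma / (2.0 * lam) * inner(d, d)
    for m in range(max_m):
        t = mu ** m                      # t = mu^m, so m = 0 gives z = y
        z = ((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1])
        if inner(A(z), d) >= rhs:
            return m, z
    raise RuntimeError("linesearch failed within max_m steps")

A = lambda u: (2.0 * u[0], 2.0 * u[1])   # strongly monotone toy operator
m, z = linesearch(A, x=(1.0, 0.0), y=(0.0, 0.0), lam=1.0)
# here m = 0 fails (A(y) = 0), and m = 1 accepts z = (0.5, 0.0)
```

Remark 1 below guarantees that, under Assumptions (f1)–(f4), such an `m` always exists, so the `RuntimeError` branch is only a safeguard in finite-precision code.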
Proposition 1.
For each $z \in C$, we have
$$f(x_k, z) \ge f(x_k, y_k) + \frac{1}{\lambda_k}\langle x_k - y_k, z - y_k\rangle. \tag{13}$$
Proof. 
According to Equation (8), by the definition of $y_k$, we have
$$0 \in \partial_2 \Big\{ f(x_k, y) + \frac{1}{2\lambda_k}\|x_k - y\|^2 \Big\}\Big|_{y = y_k} + N_C(y_k). \tag{14}$$
It follows from Equation (14) that there exists $p_k \in \partial_2 f(x_k, y_k)$ verifying $\frac{1}{\lambda_k}(x_k - y_k) - p_k \in N_C(y_k)$, which yields
$$\Big\langle \frac{1}{\lambda_k}(x_k - y_k) - p_k, z - y_k \Big\rangle \le 0, \quad \forall z \in C. \tag{15}$$
By the definition of the subgradient of $f(x_k, \cdot)$ at $y_k$, we obtain
$$f(x_k, z) \ge f(x_k, y_k) + \langle p_k, z - y_k\rangle, \quad \forall z \in C. \tag{16}$$
Combine Equations (15) and (16) to conclude the desired result. □
Remark 1.
The search rule in Equation (10) is well-defined, i.e., there exists $m_k$ such that Equation (10) holds.
Proof. 
Case 1: $x_k = y_k$. In this case, $z_k = x_k$. Consequently, $f(z_k, x_k) = f(z_k, y_k) = 0$ because of (f1). Thus, Equation (10) holds with $m_k = 0$.
Case 2: $x_k \ne y_k$. Suppose that the search rule in Equation (10) is not well-defined, i.e., every nonnegative integer $m$ violates the inequality in Equation (10):
$$f(z_k, x_k) - f(z_k, y_k) < \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2. \tag{17}$$
Noting that $z_k = (1 - \mu^{m}) x_k + \mu^{m} y_k$, we conclude that $z_k \to x_k$ as $m \to \infty$. Thanks to Condition (f3), we deduce that $f(z_k, x_k) \to 0$ and $f(z_k, y_k) \to f(x_k, y_k)$. This, together with Equation (17), implies that
$$-f(x_k, y_k) \le \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2. \tag{18}$$
Letting $z = x_k$ in Equation (13) and noting that $f(x_k, x_k) = 0$, we deduce
$$0 \ge f(x_k, y_k) + \frac{\|x_k - y_k\|^2}{\lambda_k}.$$
Combining the above inequality and Equation (18), we derive $0 \ge \big(\frac{1}{\lambda_k} - \frac{\gamma}{2\lambda_k}\big)\|x_k - y_k\|^2 \ge 0$, since $\gamma \in (0, 2)$. Hence, $x_k = y_k$, which is incompatible with the assumption. Consequently, the search rule in Equation (10) is well-defined. □
Remark 2.
If $z_k \ne x_k$, then $0 \notin \partial_2 f(z_k, x_k)$; thus $g_k \ne 0$ and $u_k$ is well-defined.
Proof. 
Suppose, to the contrary, that $0 \in \partial_2 f(z_k, x_k)$. Since $z_k \ne x_k$, from Equation (9), $z_k = (1 - \mu^{m_k}) x_k + \mu^{m_k} y_k$ with $x_k \ne y_k$. By the convexity of $f(z_k, \cdot)$, we have
$$0 = f(z_k, z_k) \le (1 - \mu^{m_k}) f(z_k, x_k) + \mu^{m_k} f(z_k, y_k).$$
Substituting Equation (10) into the last inequality, we get $0 < \frac{\gamma \mu^{m_k}}{2\lambda_k}\|x_k - y_k\|^2 \le f(z_k, x_k)$. On the other hand, by the assumption $0 \in \partial_2 f(z_k, x_k)$ and the definition of the subdifferential, we deduce $f(z_k, u) \ge f(z_k, x_k)$, $\forall u \in C$. Hence, $0 = f(z_k, z_k) \ge f(z_k, x_k) > 0$, which is a contradiction. □
Proposition 2.
The sequence $\{x_k\}$ generated by Equation (12) is well-defined.
Proof. 
Firstly, we prove by induction that $EP(f, C) \cap Fix(S) \subset C_k$ for all $k \ge 1$. The inclusion $EP(f, C) \cap Fix(S) \subset C_1$ is obvious. Suppose that $EP(f, C) \cap Fix(S) \subset C_k$ for some $k \in \mathbb{N}$. Pick $p \in EP(f, C) \cap Fix(S) \subset C_k$. In light of Equation (12) and Lemmas 1 and 2, we obtain
$$\begin{aligned} \|v_k - p\|^2 &= \|(1 - \delta_k)(u_k - p) + \delta_k (S[(1 - \sigma_k) u_k + \sigma_k S u_k] - p)\|^2 \\ &= (1 - \delta_k)\|u_k - p\|^2 - \delta_k (1 - \delta_k)\|S[(1 - \sigma_k) u_k + \sigma_k S u_k] - u_k\|^2 + \delta_k \|S[(1 - \sigma_k) u_k + \sigma_k S u_k] - p\|^2 \\ &\le (1 - \delta_k)\|u_k - p\|^2 - \delta_k (1 - \delta_k)\|S[(1 - \sigma_k) u_k + \sigma_k S u_k] - u_k\|^2 + \delta_k \big(\|u_k - p\|^2 + (1 - \sigma_k)\|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\|^2\big) \\ &= \|u_k - p\|^2 - \delta_k (\sigma_k - \delta_k)\|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\|^2. \end{aligned} \tag{19}$$
Since $f$ is pseudomonotone and $p \in EP(f, C)$, we have $f(z_k, p) \le 0$. Since $g_k \in \partial_2 f(z_k, x_k)$, by the subdifferential inequality, we have $f(z_k, p) \ge f(z_k, x_k) + \langle g_k, p - x_k\rangle$. It follows that $\langle g_k, x_k - p\rangle \ge f(z_k, x_k) - f(z_k, p) \ge f(z_k, x_k)$.
Case 1: $z_k \ne x_k$. In terms of Equation (11), we get
$$\begin{aligned} \|u_k - p\|^2 &= \Big\|P_C\Big[x_k - \frac{f(z_k, x_k)}{\|g_k\|^2}\, g_k\Big] - p\Big\|^2 \le \Big\|x_k - p - \frac{f(z_k, x_k)}{\|g_k\|^2}\, g_k\Big\|^2 \\ &= \|x_k - p\|^2 - \frac{2 f(z_k, x_k)}{\|g_k\|^2}\langle g_k, x_k - p\rangle + \frac{f^2(z_k, x_k)}{\|g_k\|^2} \\ &\le \|x_k - p\|^2 - \frac{2 f^2(z_k, x_k)}{\|g_k\|^2} + \frac{f^2(z_k, x_k)}{\|g_k\|^2} = \|x_k - p\|^2 - \frac{f^2(z_k, x_k)}{\|g_k\|^2}. \end{aligned} \tag{20}$$
Combining Equations (19) and (20), we obtain
$$\|v_k - p\|^2 \le \|x_k - p\|^2 - \frac{f^2(z_k, x_k)}{\|g_k\|^2} - \delta_k (\sigma_k - \delta_k)\|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\|^2 \le \|x_k - p\|^2, \tag{21}$$
and hence $p \in C_{k+1}$.
Case 2: $z_k = x_k$. In this case, $u_k = x_k$ and $\|v_k - p\| \le \|x_k - p\|$ is obvious. Thus, $EP(f, C) \cap Fix(S) \subset C_k$ for all $k \ge 1$.
Secondly, we show that $C_k$ is closed and convex for all $k \in \mathbb{N}$. It is obvious that $C_1 = C$ is closed and convex. Suppose that $C_k$ is closed and convex for some $k \in \mathbb{N}$. Note that $C_{k+1}$ can be rewritten as $C_{k+1} = \{u \in C_k : \|v_k - x_k\|^2 + 2\langle v_k - x_k, x_k - u\rangle \le 0\}$, the intersection of $C_k$ with a half-space. Hence, $C_{k+1}$ is nonempty, closed, and convex. Therefore, the sequence $\{x_k\}$ is well-defined. □
Proposition 3.
$\lim_{k \to \infty} \frac{f(z_k, x_k)}{\|g_k\|} = 0$ and $\lim_{k \to \infty} \|u_k - S u_k\| = 0$.
Proof. 
Since $x_k = P_{C_k}[x_0]$, by the property in Equation (7) of the metric projection, for any $u \in C_k$, we have
$$\langle x_0 - x_k, u - x_k\rangle \le 0. \tag{22}$$
Then,
$$\|x_k - x_0\|^2 = \langle x_0 - x_k, u - x_k\rangle + \langle x_0 - x_k, x_0 - u\rangle \le \langle x_0 - x_k, x_0 - u\rangle \le \|x_0 - x_k\| \|x_0 - u\|.$$
It yields
$$\|x_k - x_0\| \le \|x_0 - u\|, \quad \forall u \in C_k, \tag{23}$$
which, by selecting $u = p \in EP(f, C) \cap Fix(S) \subset C_k$, implies that the sequence $\{x_k\}$ is bounded.
In terms of Equation (22), we have $\langle x_0 - x_k, x_{k+1} - x_k\rangle \le 0$ due to $x_{k+1} \in C_{k+1} \subset C_k$. Thus,
$$\|x_{k+1} - x_k\|^2 = 2\langle x_0 - x_k, x_{k+1} - x_k\rangle + \|x_{k+1} - x_0\|^2 - \|x_0 - x_k\|^2 \le \|x_{k+1} - x_0\|^2 - \|x_k - x_0\|^2. \tag{24}$$
From Equation (23), we deduce $\|x_k - x_0\| \le \|x_0 - x_{k+1}\|$. Thus, the limit $\lim_{k \to \infty} \|x_k - x_0\|$ exists. This, together with Equation (24), implies that $x_{k+1} - x_k \to 0$. Thanks to the definition of $C_{k+1}$ and $x_{k+1} \in C_{k+1}$, we derive $\|v_k - x_{k+1}\| \le \|x_k - x_{k+1}\| \to 0$. Hence,
$$\|v_k - x_k\| \le \|v_k - x_{k+1}\| + \|x_{k+1} - x_k\| \to 0 \ \text{ as } \ k \to \infty.$$
By Equation (21), we obtain
$$0 \le \frac{f^2(z_k, x_k)}{\|g_k\|^2} + \delta_k (\sigma_k - \delta_k)\|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\|^2 \le \|x_k - p\|^2 - \|v_k - p\|^2 \le \|x_k - v_k\| \big(\|x_k - p\| + \|v_k - p\|\big) \to 0 \ (k \to \infty).$$
Therefore,
$$\lim_{k \to \infty} \frac{f(z_k, x_k)}{\|g_k\|} = 0 \tag{25}$$
and
$$\lim_{k \to \infty} \|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\| = 0. \tag{26}$$
On the other hand,
$$\|u_k - S u_k\| \le \|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\| + \|S[(1 - \sigma_k) u_k + \sigma_k S u_k] - S u_k\| \le \|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\| + L \sigma_k \|u_k - S u_k\|.$$
It follows from Equation (26) that
$$\|u_k - S u_k\| \le \frac{1}{1 - L \sigma_k} \|u_k - S[(1 - \sigma_k) u_k + \sigma_k S u_k]\| \to 0 \ \text{ as } \ k \to \infty. \tag{27}$$
 □
Proposition 4.
$\omega_w(x_k) \subset EP(f, C) \cap Fix(S)$.
Proof. 
Select any $\tilde{x} \in \omega_w(x_k)$; then there exists a subsequence $\{x_{k_i}\} \subset \{x_k\}$ such that $x_{k_i} \rightharpoonup \tilde{x} \in C$. Set $F(y) = f(x_{k_i}, y) + \frac{1}{2\lambda_{k_i}}\|x_{k_i} - y\|^2$ for each $y \in C$. Noting that $0 \in \partial F(y_{k_i}) + N_C(y_{k_i})$, there exists $A(y_{k_i}) \in \partial F(y_{k_i})$ such that
$$\langle A(y_{k_i}), y - y_{k_i}\rangle \ge 0, \quad \forall y \in C. \tag{28}$$
Observe that $\partial F$ is $\frac{1}{\lambda_{k_i}}$-strongly monotone because $F$ is $\frac{1}{\lambda_{k_i}}$-strongly convex due to the convexity of $f(x_{k_i}, \cdot)$. Thus, we have
$$\langle A(x_{k_i}) - A(y_{k_i}), x_{k_i} - y_{k_i}\rangle \ge \frac{1}{\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2, \tag{29}$$
where $A(x_{k_i}) \in \partial F(x_{k_i})$.
Taking into account Equations (28) and (29), we obtain
$$\|A(x_{k_i})\| \|x_{k_i} - y_{k_i}\| \ge \langle A(x_{k_i}), x_{k_i} - y_{k_i}\rangle \ge \frac{1}{\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2 + \langle A(y_{k_i}), x_{k_i} - y_{k_i}\rangle \ge \frac{1}{\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2.$$
It follows that
$$\|x_{k_i} - y_{k_i}\| \le \lambda_{k_i} \|A(x_{k_i})\|, \qquad A(x_{k_i}) \in \partial F(x_{k_i}) = \partial_2 f(x_{k_i}, x_{k_i}). \tag{30}$$
Since $x_{k_i} \rightharpoonup \tilde{x}$, by Lemma 3, for any $\epsilon_1 > 0$, there exist $\eta_1 > 0$ and $n_{\epsilon_1} \in \mathbb{N}$ such that
$$\partial_2 f(x_{k_i}, x_{k_i}) \subset \partial_2 f(\tilde{x}, \tilde{x}) + \frac{\epsilon_1}{\eta_1} B, \quad \forall i \ge n_{\epsilon_1}.$$
The above inclusion and Equation (30) yield that there exists $M > 0$ such that $\|x_{k_i} - y_{k_i}\| \le M$ for all $i \ge n_{\epsilon_1}$. This indicates that the sequence $\{y_{k_i}\}$ is bounded owing to the boundedness of $\{x_{k_i}\}$. Then, there exists a subsequence of $\{y_{k_i}\}$, again denoted by $\{y_{k_i}\}$, such that $y_{k_i} \rightharpoonup \tilde{y}$. Consequently, by the definition of $\{z_{k_i}\}$, it is also bounded. Thus, there exists a subsequence of $\{z_{k_i}\}$, without loss of generality still denoted by $\{z_{k_i}\}$, that converges weakly to $\tilde{z} \in C$. Applying Lemma 3, for any $\epsilon_2 > 0$, there exist $\eta_2 > 0$ and $n_{\epsilon_2} \in \mathbb{N}$ such that
$$\partial_2 f(z_{k_i}, x_{k_i}) \subset \partial_2 f(\tilde{z}, \tilde{x}) + \frac{\epsilon_2}{\eta_2} B, \quad \forall i \ge n_{\epsilon_2}.$$
Thus, $\{g_{k_i}\}$ is bounded. This, together with Equation (25), implies
$$f(z_{k_i}, x_{k_i}) \to 0 \ \text{ as } \ i \to \infty. \tag{31}$$
Next, we show $\tilde{x} \in EP(f, C)$. We consider two cases. Case 1: $y_{k_i} = x_{k_i}$. According to Equation (13), we have
$$f(x_{k_i}, z) \ge 0, \quad \forall z \in C. \tag{32}$$
Since $f(\cdot, z)$ is sequentially weakly continuous on the open set $\Delta \supset C$, letting $i \to \infty$ in Equation (32), we deduce that $f(\tilde{x}, z) \ge 0$, $\forall z \in C$, i.e., $\tilde{x} \in EP(f, C)$.
Case 2: $y_{k_i} \ne x_{k_i}$. By the convexity of $f(z_{k_i}, \cdot)$, we get
$$0 = f(z_{k_i}, z_{k_i}) = f(z_{k_i}, (1 - \mu^{m_{k_i}}) x_{k_i} + \mu^{m_{k_i}} y_{k_i}) \le (1 - \mu^{m_{k_i}}) f(z_{k_i}, x_{k_i}) + \mu^{m_{k_i}} f(z_{k_i}, y_{k_i}),$$
which implies that $\mu^{m_{k_i}} [f(z_{k_i}, x_{k_i}) - f(z_{k_i}, y_{k_i})] \le f(z_{k_i}, x_{k_i})$. Furthermore, from Equation (10), we have $f(z_{k_i}, x_{k_i}) - f(z_{k_i}, y_{k_i}) \ge \frac{\gamma}{2\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2$. Hence,
$$f(z_{k_i}, x_{k_i}) \ge \frac{\gamma \mu^{m_{k_i}}}{2\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2. \tag{33}$$
If $\limsup_{i \to \infty} \mu^{m_{k_i}} > 0$, then there exists a subsequence of $\{\mu^{m_{k_i}}\}$, still denoted by $\{\mu^{m_{k_i}}\}$, such that $\mu^{m_{k_i}} \to v > 0$. In light of Equations (31) and (33), we conclude that $\|y_{k_i} - x_{k_i}\| \to 0$. In the case where $\mu^{m_{k_i}} \to 0$ as $i \to \infty$, let $\{l_i\}$ be the smallest positive integers such that, for each $i$,
$$f(z_{k_i}, x_{k_i}) - f(z_{k_i}, y_{k_i}) \ge \frac{\gamma}{2\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2, \tag{34}$$
where $z_{k_i} = (1 - \mu^{l_i}) x_{k_i} + \mu^{l_i} y_{k_i}$.
Consequently, $l_i - 1$ must violate the above search rule in Equation (34), i.e.,
$$f(\bar{z}_{k_i}, x_{k_i}) - f(\bar{z}_{k_i}, y_{k_i}) < \frac{\gamma}{2\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2, \tag{35}$$
where $\bar{z}_{k_i} = (1 - \mu^{l_i - 1}) x_{k_i} + \mu^{l_i - 1} y_{k_i}$.
At the same time, by Equation (13), we obtain
$$0 \ge f(x_{k_i}, y_{k_i}) + \frac{1}{\lambda_{k_i}}\|x_{k_i} - y_{k_i}\|^2. \tag{36}$$
From Equations (35) and (36), we have
$$f(\bar{z}_{k_i}, x_{k_i}) - f(\bar{z}_{k_i}, y_{k_i}) < -\frac{\gamma}{2} f(x_{k_i}, y_{k_i}). \tag{37}$$
Letting $i \to \infty$ in Equation (37) and noting that $x_{k_i} \rightharpoonup \tilde{x}$, $y_{k_i} \rightharpoonup \tilde{y}$, and $\bar{z}_{k_i} \rightharpoonup \tilde{x}$, we deduce
$$-f(\tilde{x}, \tilde{y}) \le -\frac{\gamma}{2} f(\tilde{x}, \tilde{y}).$$
Since $\gamma \in (0, 2)$, it yields that $f(\tilde{x}, \tilde{y}) \ge 0$. This, together with Equation (36), implies that $\|x_{k_i} - y_{k_i}\| \to 0$. Consequently, $y_{k_i} \rightharpoonup \tilde{x}$. Again applying Equation (13), we conclude that $f(\tilde{x}, z) \ge 0$ for all $z \in C$, i.e., $\tilde{x} \in EP(f, C)$.
Next, we show $\tilde{x} \in Fix(S)$. Observe that $\|u_{k_i} - x_{k_i}\| \le \frac{|f(z_{k_i}, x_{k_i})|}{\|g_{k_i}\|} \to 0$ by Equation (25), and thus $u_{k_i} \rightharpoonup \tilde{x}$ $(i \to \infty)$. This, together with Lemma 4 and Equation (27), implies that $\tilde{x} \in Fix(S)$. Thus, $\omega_w(x_k) \subset EP(f, C) \cap Fix(S)$. □
Theorem 1.
The sequence $\{x_k\}$ defined by Algorithm 1 converges strongly to $P_{EP(f, C) \cap Fix(S)}[x_0]$.
Proof. 
First, by Conditions (A2) and (A4) and Lemma 4, $EP(f, C) \cap Fix(S)$ is nonempty, closed, and convex. Hence, $P_{EP(f, C) \cap Fix(S)}$ is well-defined. Thanks to Equation (23), we deduce
$$\|x_k - x_0\| \le \|x_0 - P_{EP(f, C) \cap Fix(S)}[x_0]\|.$$
By Proposition 4, we obtain $\omega_w(x_k) \subset EP(f, C) \cap Fix(S)$. Hence, all conditions of Lemma 5 are fulfilled, and we conclude that $x_k \to P_{EP(f, C) \cap Fix(S)}[x_0]$. □
Remark 3.
In Algorithm 1, if $S$ is nonexpansive, then the conclusion still holds. The construction of the half-space $C_{k+1}$ in Algorithm 1 is simpler than that in [41]. Our result improves and extends the corresponding result in [41].

4. Applications

In Equation (3), setting $f(\tilde{u}, u) = \langle A\tilde{u}, u - \tilde{u}\rangle$, the EP in Equation (3) reduces to the following variational inequality (VI) of seeking $\tilde{u} \in C$ verifying
$$\langle A\tilde{u}, u - \tilde{u}\rangle \ge 0, \quad \forall u \in C. \tag{38}$$
The solution set of the variational inequality in Equation (38) is denoted by $VI(A, C)$.
In this case, solving the strongly convex program
$$y_k = \arg\min_{y \in C} \Big\{ f(x_k, y) + \frac{1}{2\lambda_k}\|x_k - y\|^2 \Big\}$$
reduces to computing $y_k = P_C(x_k - \lambda_k A x_k)$. The Armijo-like condition
$$f(z_k, x_k) - f(z_k, y_k) \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2$$
can be expressed as
$$\langle A z_k, x_k - y_k\rangle \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2.$$
Consequently, we obtain the following algorithm for solving a common problem of the VI and the fixed point problem; see Algorithm 2.
Theorem 2.
Let $C \subset H$ be closed convex and let $\Delta$ be a given open set containing $C$. Let $A: \Delta \to H$ be a pseudomonotone and sequentially weakly continuous operator. Let the operator $S: C \to C$ be Lipschitz pseudocontractive with Lipschitz constant $L > 0$. Suppose that the intersection $VI(A, C) \cap Fix(S) \ne \emptyset$. Assume that Conditions (C1)–(C3) are satisfied. Then, the sequence $\{x_k\}$ defined by Algorithm 2 converges strongly to $P_{VI(A, C) \cap Fix(S)}[x_0]$.
In Algorithm 2, setting $S = I$, the identity operator, we have $L = 1$ and Condition (C2) reduces to Condition (C4): $0 < \underline{\delta} < \delta_k < \bar{\delta} < \sigma_k < \bar{\sigma} < \frac{1}{\sqrt{2} + 1}$, $\forall k \ge 0$. In this case, we have the following Algorithm 3 and corollary for solving the VI.
Algorithm 2: Let $x_0 \in H$ be an initial guess.
Step 1.
Set $C_1 = C$ and compute $x_1 = P_{C_1}[x_0]$. Set $k = 0$.
Step 2.
Assume that the current iterate $x_k$ has been given and then calculate
$$y_k = P_C(x_k - \lambda_k A x_k).$$
Step 3.
Compute $z_k = (1 - \mu^{m_k}) x_k + \mu^{m_k} y_k$, where $m_k$ is the smallest nonnegative integer verifying $\langle A z_k, x_k - y_k\rangle \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2$.
Step 4.
Calculate $u_k$ via
$$u_k = \begin{cases} x_k, & \text{if } z_k = x_k, \\ P_C\Big[x_k - \dfrac{\langle A z_k, x_k - z_k\rangle}{\|A z_k\|^2}\, A z_k\Big], & \text{if } z_k \ne x_k. \end{cases}$$
Step 5.
Calculate the next iterate $x_{k+1}$ by the following form:
$$v_k = (1 - \delta_k) u_k + \delta_k S[(1 - \sigma_k) u_k + \sigma_k S u_k], \qquad C_{k+1} = \{u \in C_k : \|v_k - u\| \le \|x_k - u\|\}, \qquad x_{k+1} = P_{C_{k+1}}[x_0].$$
Step 6.
Set $k := k + 1$ and return to Step 2.
Algorithm 3: Let $x_0 \in H$ be an initial guess.
Step 1.
Set $C_1 = C$ and compute $x_1 = P_{C_1}[x_0]$. Set $k = 0$.
Step 2.
Assume that the current iterate $x_k$ has been given and then calculate
$$y_k = P_C(x_k - \lambda_k A x_k).$$
Step 3.
Compute $z_k = (1 - \mu^{m_k}) x_k + \mu^{m_k} y_k$, where $m_k$ is the smallest nonnegative integer verifying $\langle A z_k, x_k - y_k\rangle \ge \frac{\gamma}{2\lambda_k}\|x_k - y_k\|^2$.
Step 4.
Calculate $u_k$ via
$$u_k = \begin{cases} x_k, & \text{if } z_k = x_k, \\ P_C\Big[x_k - \dfrac{\langle A z_k, x_k - z_k\rangle}{\|A z_k\|^2}\, A z_k\Big], & \text{if } z_k \ne x_k. \end{cases}$$
Step 5.
Calculate the next iterate $x_{k+1}$ by the following form:
$$C_{k+1} = \{u \in C_k : \|u_k - u\| \le \|x_k - u\|\}, \qquad x_{k+1} = P_{C_{k+1}}[x_0].$$
Step 6.
Set $k := k + 1$ and return to Step 2.
Corollary 1.
Let $C \subset H$ be closed convex and let $\Delta$ be a given open set containing $C$. Let $A: \Delta \to H$ be a pseudomonotone and sequentially weakly continuous operator. Suppose that $VI(A, C) \ne \emptyset$. Assume that Conditions (C1), (C3), and (C4) are satisfied. Then, the sequence $\{x_k\}$ defined by Algorithm 3 converges strongly to $P_{VI(A, C)}[x_0]$.

5. Conclusions

In this paper, we investigate pseudomonotone equilibrium problems and fixed point problems in Hilbert spaces. We present an iterative algorithm for finding a common element of the fixed point set of a pseudocontractive operator and the solution set of a pseudomonotone equilibrium problem without Lipschitz-type continuity. We prove the strong convergence of the suggested algorithm under some additional assumptions. Since, in our suggested Algorithm 1, the involved bifunction $f$ is assumed to be pseudomonotone, a natural problem arises: how to weaken this assumption to the nonmonotone case.

Author Contributions

Conceptualization, Y.Y. and N.S.; Data curation, Y.Y.; Formal analysis, Y.Y. and J.-C.Y.; Funding acquisition, N.S.; Investigation, Y.Y., N.S. and J.-C.Y.; Methodology, Y.Y. and N.S.; Project administration, J.-C.Y.; Supervision, J.-C.Y. All the authors have contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

Yonghong Yao was supported in part by the grant TD13-5033. Jen-Chih Yao was partially supported by the Grant MOST 106-2923-E-039-001-MY3.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  3. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  4. Quoc, T.D.; Muu, L.D.; Hien, N.V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  5. Tada, A.; Takahashi, W. Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 2007, 133, 359–370. [Google Scholar] [CrossRef]
6. Bello-Cruz, J.-Y.; Iusem, A.-N. Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46, 247–263.
7. Malitsky, Y. Proximal extrapolated gradient methods for variational inequalities. Optim. Meth. Softw. 2018, 33, 140–164.
8. Yao, Y.; Postolache, M.; Yao, J.-C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 83, 3–12.
9. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified extragradient-like algorithms with new stepsizes for variational inequalities. Comput. Optim. Appl. 2019, 73, 913–932.
10. Thong, D.V.; Gibali, A. Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities. J. Fixed Point Theory Appl. 2019, 21, 20.
11. Zhang, C.; Zhu, Z.; Yao, Y.; Liu, Q. Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 2019, 68, 2293–2312.
12. Yang, J.; Liu, H.W. A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 2018, 179, 197–211.
13. Yao, Y.; Postolache, M.; Liou, Y.-C.; Yao, Z.-S. Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 2016, 10, 1519–1528.
14. Thakur, B.S.; Postolache, M. Existence and approximation of solutions for generalized extended nonlinear variational inequalities. J. Inequal. Appl. 2013, 2013, 590.
15. Zhao, X.P.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, in press.
16. Zegeye, H.; Shahzad, N.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2015, 64, 453–471.
17. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
18. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
19. Yao, Y.; Postolache, M.; Yao, J.-C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
20. Yao, Y.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628.
21. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
22. Muu, L.-D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
23. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Autom. Inform. Rech. Opér. Anal. Numér. 1970, 4, 154–159.
24. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
25. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin, Germany, 2001.
26. Dang, V.H. Convergence analysis of a new algorithm for strongly pseudomonotone equilibrium problems. Numer. Algorithms 2018, 77, 983–1001.
27. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
28. Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261.
29. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107.
30. Mastroeni, G. Gap functions for equilibrium problems. J. Glob. Optim. 2003, 27, 411–426.
31. Yao, Y.; Liou, Y.C.; Yao, J.C. Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 2015, 127.
32. Iiduka, H. Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 2013, 23, 1–26.
33. Zhao, X.P.; Yao, Y.H. Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems. Optimization 2020, in press.
34. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
35. Halpern, B. Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
36. Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379.
37. Iiduka, H.; Uchida, M. Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Commun. Lett. 2011, 15, 596–598.
38. Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374.
39. Hieu, D.V.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 2016, 73, 197–217.
40. Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 2015, 64, 429–451.
41. Nguyen, T.T.V.; Strodiot, J.J.; Nguyen, V.H. Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 2014, 160, 809–831.
42. Dang, V.H. An extension of hybrid method without extrapolation step to equilibrium problems. J. Ind. Manag. Optim. 2017, 13, 1723–1741.
43. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
44. Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 2012, 155, 605–627.
45. Zhou, H. Strong convergence of an explicit iterative algorithm for continuous pseudocontractions in Banach spaces. Nonlinear Anal. 2009, 70, 4039–4046.
46. Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411.

Share and Cite

MDPI and ACS Style

Yao, Y.; Shahzad, N.; Yao, J.-C. Projected Subgradient Algorithms for Pseudomonotone Equilibrium Problems and Fixed Points of Pseudocontractive Operators. Mathematics 2020, 8, 461. https://doi.org/10.3390/math8040461

