Article

A Viscosity Approximation Method for Solving General System of Variational Inequalities, Generalized Mixed Equilibrium Problems and Fixed Point Problems

by Maryam Yazdi 1,* and Saeed Hashemi Sababe 1,2
1 Young Researchers and Elite Club, Malard Branch, Islamic Azad University, Malard MX7C+G74, Iran
2 Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2R3, Canada
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(8), 1507; https://doi.org/10.3390/sym14081507
Submission received: 13 June 2022 / Revised: 8 July 2022 / Accepted: 19 July 2022 / Published: 22 July 2022

Abstract:
This paper is devoted to introducing a new viscosity approximation method using the implicit midpoint rules for finding a common element in the set of solutions of a generalized mixed equilibrium problem, the set of solutions of a general system of variational inequalities and the set of common fixed points of a finite family of nonexpansive mappings in a symmetric Hilbert space. Then, we prove a strong convergence theorem regarding the proposed iterative scheme under some suitable conditions on the parameters. Finally, we provide two numerical results to show the consistency and accuracy of the scheme. One of them, moreover, compares the behavior of our scheme with the iterative scheme of Ke and Ma (Fixed Point Theory Appl 190, 2015).

1. Introduction

Let $H$ be a real symmetric Hilbert space equipped with the inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, and let $C$ be a nonempty closed convex subset of $H$. A mapping $T$ of $C$ into itself is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. We use $Fix(T)$ to denote the set of fixed points of $T$, i.e., $Fix(T) = \{x \in C : Tx = x\}$. Additionally, $f: C \to C$ is a contraction if $\|f(x) - f(y)\| \le \kappa \|x - y\|$ for all $x, y \in C$ and some constant $\kappa \in [0, 1)$. In this case, $f$ is said to be a $\kappa$-contraction.
In 2008, Peng and Yao [1] considered the following generalized mixed equilibrium problem, which involves finding $x^* \in C$ such that
$\Theta(x^*, y) + \varphi(y) - \varphi(x^*) + \langle Ax^*, y - x^* \rangle \ge 0$ for all $y \in C$,  (1)
where $A: C \to H$ is a nonlinear mapping, $\varphi: C \to \mathbb{R}$ is a function and $\Theta: C \times C \to \mathbb{R}$ is a bifunction of $C$. The solution set of (1) is denoted by $\Omega$.
If $A = \varphi = 0$, then problem (1) reduces to the following equilibrium problem (EP), which aims to find a point $x \in C$ satisfying the following property:
$\Theta(x, y) \ge 0$ for all $y \in C$.  (2)
We use $EP(\Theta)$ to denote the set of solutions of EP (2), that is, $EP(\Theta) = \{x \in C : (2) \text{ holds}\}$. The EP (2) includes, as special cases, numerous problems in physics, optimization and economics. Some authors (e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15]) have proposed useful methods for solving the EP (2). Set $\Theta(x, y) = \langle Ax, y - x \rangle$ for all $x, y \in C$, where $A: C \to H$ is a nonlinear mapping. Then, $x^* \in EP(\Theta)$ if and only if
$\langle Ax^*, y - x^* \rangle \ge 0$ for all $y \in C$,  (3)
that is, $x^*$ is a solution of the variational inequality. Inequality (3) is well known as the classical variational inequality. The set of solutions of (3) is denoted by $VI(A, C)$.
Let $A$ be a bounded linear operator on $C$. $A$ is said to be $\bar{\gamma}$-strongly positive if there exists a constant $\bar{\gamma} > 0$ such that $\langle Ax, x \rangle \ge \bar{\gamma} \|x\|^2$ for all $x \in C$.
In 1967, Halpern [16] considered the following explicit iterative process:
$x_{n+1} = \alpha_n u + (1 - \alpha_n) T x_n, \quad n \ge 0,$
where $u$ is a given point and $T: C \to C$ is nonexpansive. He proved the strong convergence of $\{x_n\}$ to a fixed point of $T$ provided that $\alpha_n = n^{-\theta}$ with $\theta \in (0, 1)$. In 2003, Xu [17] introduced the following iterative process:
$x_{n+1} = \alpha_n u + (I - \alpha_n A) T x_n, \quad n \ge 0,$
where $\{\alpha_n\}$ is a sequence in $(0, 1)$. He proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the minimization problem with $C = Fix(T)$: $\min_{x \in C} \frac{1}{2} \langle Ax, x \rangle - \langle x, u \rangle$, where $A$ is a strongly positive bounded linear operator on $H$.
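Anchored iterations of this Halpern type are straightforward to simulate. A minimal one-dimensional sketch (the mapping $T$, the anchor $u$ and the step sizes below are our own illustrative assumptions, not taken from the cited works); the iterates approach $P_{Fix(T)}(u)$, the fixed point of $T$ nearest the anchor:

```python
# Halpern iteration x_{n+1} = a_n*u + (1 - a_n)*T(x_n) with a_n = (n+1)^(-0.9),
# i.e. a_n = n^(-theta) with theta in (0, 1).  Here T is the metric projection
# onto [0, 1] (nonexpansive, Fix(T) = [0, 1]), so the limit is P_{Fix(T)}(u) = 1.
def T(x):
    return min(max(x, 0.0), 1.0)

def halpern(u, x0, steps=20000):
    x = x0
    for n in range(steps):
        a = (n + 1) ** -0.9
        x = a * u + (1 - a) * T(x)
    return x

print(halpern(u=3.0, x0=3.0))  # close to 1.0, the point of [0, 1] nearest u
```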
In 2006, Marino and Xu [18] considered the following viscosity iterative method:
$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A) T x_n, \quad n \ge 0,$
where $f$ is a contraction on $H$. They proved that the above sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality
$\langle (A - \gamma f) x^*, x - x^* \rangle \ge 0, \quad x \in Fix(T).$
In 2001, Yamada et al. [19] considered the following hybrid steepest-descent iterative method:
$x_{n+1} = T x_n - \mu \lambda_n F(T x_n), \quad n \ge 0,$
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa > 0$, $\eta > 0$ and $0 < \mu < 2\eta / \kappa^2$. Under some suitable conditions, the above sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality
$\langle F(x^*), x - x^* \rangle \ge 0, \quad x \in Fix(T).$
In 2010, Tian [20] considered the following general viscosity type iterative method:
$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu \alpha_n F) T x_n, \quad n \ge 0.$
Under certain approximate conditions, the above sequence { x n } converges strongly to a fixed point of T, which solves the variational inequality
$\langle (\gamma f - \mu F) x^*, x - x^* \rangle \le 0, \quad x \in Fix(T).$
In 2014, Zhang and Yang [21] proposed an explicit iterative algorithm based on the viscosity method for finding a solution for a class of variational inequalities over the common fixed points set of the finite family of nonexpansive mappings { T i } i = 1 N , as follows:
$x_{n+1} = \alpha_n \gamma V(x_n) + (I - \alpha_n \mu F) T_N^n T_{N-1}^n \cdots T_1^n x_n, \quad n \ge 0,$
where $T_i^n = (1 - \sigma_n^i) I + \sigma_n^i T_i$ for $i = 1, 2, \ldots, N$, $V$ is $\rho$-Lipschitzian and $\{\sigma_n^i\}$ is a real sequence in $(0, 1)$. They proved that $\{x_n\}$ converges strongly to the unique solution $x^* \in \bigcap_{i=1}^N Fix(T_i)$ of the variational inequality:
$\langle (\mu F - \gamma V) x^*, x - x^* \rangle \ge 0, \quad x \in \bigcap_{i=1}^N Fix(T_i).$
In 2016, Jeong [22] introduced a new iterative method based on the hybrid viscosity approximation method and the hybrid steepest-descent method, as follows:
$\Theta(u_n, y) + \varphi(y) - \varphi(u_n) + \langle A x_n, y - u_n \rangle + \frac{1}{r_n} \langle y - u_n, u_n - x_n \rangle \ge 0, \quad y \in C,$
$x_{n+1} = \alpha_n \gamma V(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \mu F) T_N^n T_{N-1}^n \cdots T_1^n u_n, \quad n \ge 0.$
He proved that the sequence $\{x_n\}$ converges strongly to the unique solution $x^* \in \bigcap_{i=1}^N Fix(T_i) \cap \Omega$ of the variational inequality:
$\langle (\mu F - \gamma V) x^*, x - x^* \rangle \ge 0, \quad x \in \bigcap_{i=1}^N Fix(T_i) \cap \Omega.$
On the other hand, in 2008, Ceng et al. [23] considered the following problem of finding ( x * , y * ) C × C satisfying
$\langle \nu A y^* + x^* - y^*, x - x^* \rangle \ge 0$ for all $x \in C$,
$\langle \mu B x^* + y^* - x^*, x - y^* \rangle \ge 0$ for all $x \in C$,  (4)
which is called a general system of variational inequalities, where $A, B: C \to H$ are two nonlinear mappings, and $\nu > 0$ and $\mu > 0$ are two fixed constants. Precisely, they introduced the following iterative algorithm:
$x_1 = u \in C,$
$y_n = P_C(x_n - \mu B x_n),$
$x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n S P_C(y_n - \nu A y_n),$
where $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n\}$ are real sequences, $S$ is a nonexpansive mapping on $C$, and $P_C$ is the metric projection of $H$ onto $C$; they obtained a strong convergence theorem for this scheme.
The implicit midpoint rules for solving fixed point problems of nonexpansive mappings are a powerful numerical method for solving ordinary differential equations; see [24,25,26] and the references therein. Therefore, many authors have studied them; see [27,28,29,30,31]. In 2015, Xu et al. [31] applied the viscosity technique to the implicit midpoint rule for nonexpansive mappings and proposed the following viscosity implicit midpoint rule:
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T\left( \frac{x_n + x_{n+1}}{2} \right), \quad n \ge 0,$
where { α n } is a real sequence. They proved that the sequence { x n } converges strongly to a fixed point of T, which is the unique solution of a certain variational inequality. Additionally, Ke and Ma [29] studied the following generalized viscosity implicit rules:
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T(t_n x_n + (1 - t_n) x_{n+1}), \quad n \ge 0,$
where { α n } and { t n } are real sequences. They showed that the sequence { x n } converges strongly to a fixed point of T, which is the unique solution of a certain variational inequality.
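Each step of such a generalized viscosity implicit rule is implicit in $x_{n+1}$; since $z \mapsto \alpha_n f(x_n) + (1 - \alpha_n) T(t_n x_n + (1 - t_n) z)$ is a contraction (with $T$ nonexpansive and $(1 - \alpha_n)(1 - t_n) < 1$), an inner fixed-point loop resolves it. A minimal one-dimensional sketch under our own illustrative choices ($f$, $T$, and the parameter sequences are assumptions, not from Ke and Ma):

```python
# One step of x_{n+1} = a f(x_n) + (1 - a) T(t x_n + (1 - t) x_{n+1}),
# resolved by inner fixed-point iteration on the contraction in x_{n+1}.
def implicit_step(x, a, t, f, T, inner=50):
    z = x  # warm start the inner solve at the current iterate
    for _ in range(inner):
        z = a * f(x) + (1 - a) * T(t * x + (1 - t) * z)
    return z

f = lambda x: 0.5 * x                # 1/2-contraction
T = lambda x: min(max(x, 1.0), 2.0)  # projection onto [1, 2]; Fix(T) = [1, 2]

x = 5.0
for n in range(2000):
    x = implicit_step(x, a=1.0 / (n + 2), t=0.5, f=f, T=T)
print(x)  # tends to 1.0, the unique solution of x* = P_{Fix(T)} f(x*)
```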
Recently, Cai et al. [32] introduced the following modified viscosity implicit rules:
$x_1 \in C,$
$u_n = t_n x_n + (1 - t_n) y_n,$
$z_n = P_C(I - \mu B) u_n,$
$y_n = P_C(I - \lambda A) z_n,$
$x_{n+1} = P_C(\alpha_n f(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \rho F) T y_n), \quad n \ge 1,$
where F is a Lipschitzian and strongly monotone map, { α n } , { β n } and { t n } are real sequences and P C is the metric projection of H onto C. Under some suitable assumptions imposed on the parameters, they obtained some strong convergence theorems.
Motivated by the above results, we propose a new composite iterative scheme for finding a common element of the set of solutions of a general system of variational inequalities, the set of solutions of a generalized mixed equilibrium problem and the set of common fixed points of a finite family of nonexpansive mappings in Hilbert spaces. We then prove a strong convergence theorem. Finally, we provide two numerical examples supporting our main result.

2. Preliminaries

Let $H$ be a real Hilbert space. We use ⇀ and → to denote weak and strong convergence in $H$, respectively. The following identity holds:
$\|\alpha x + \beta y\|^2 = \alpha \|x\|^2 + \beta \|y\|^2 - \alpha \beta \|x - y\|^2,$
for all $x, y \in H$ and $\alpha, \beta \in [0, 1]$ such that $\alpha + \beta = 1$. Let $C$ be a nonempty closed convex subset of $H$. Then, for any $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that
$\|x - P_C(x)\| \le \|x - y\|$ for all $y \in C$.
$P_C$ is called the metric projection of $H$ onto $C$. It is known that $P_C$ is nonexpansive and satisfies
$\langle x - y, P_C(x) - P_C(y) \rangle \ge \|P_C(x) - P_C(y)\|^2$ for all $x, y \in H$.
Furthermore, for $x \in H$ and $z \in C$, we have
$z = P_C(x) \iff \langle x - z, z - y \rangle \ge 0$ for all $y \in C$.
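This variational characterization of the projection is easy to verify numerically. A small sketch with $C$ the closed unit disc in $\mathbb{R}^2$ (our own illustrative choice), where the projection has the closed form $P_C(x) = x / \max(1, \|x\|)$:

```python
# Check z = P_C(x)  =>  <x - z, z - y> >= 0 for every y in C,
# with C the closed unit disc in R^2.
import math, random

def project_disc(x):
    n = math.hypot(*x)
    return (x[0] / n, x[1] / n) if n > 1 else x

x = (3.0, 4.0)
z = project_disc(x)            # (0.6, 0.8)
random.seed(0)
for _ in range(1000):
    # random point y in the disc
    r, th = random.random() ** 0.5, 2 * math.pi * random.random()
    y = (r * math.cos(th), r * math.sin(th))
    ip = (x[0] - z[0]) * (z[0] - y[0]) + (x[1] - z[1]) * (z[1] - y[1])
    assert ip >= -1e-12       # the characterizing inequality holds
print(z)
```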
Lemma 1.
Let $H$ be a real Hilbert space. Then, for all $x, y \in H$,
$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle.$
Definition 2
([32]). A mapping $T: H \to H$ is called firmly nonexpansive if for any $x, y \in H$,
$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle.$
Definition 3
([32]). A mapping $A: H \to H$ is called $\alpha$-strongly monotone if for any $x, y \in H$,
$\langle Ax - Ay, x - y \rangle \ge \alpha \|x - y\|^2.$
Definition 4
([33]). A mapping $T: H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is, $T = (1 - \alpha) I + \alpha S$, where $\alpha \in (0, 1)$ and $S: H \to H$ is nonexpansive. More precisely, we say that $T$ is $\alpha$-averaged.
Clearly, a firmly nonexpansive mapping is a $\frac{1}{2}$-averaged map.
Proposition 5
([34]). (i) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.
(ii) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then $\bigcap_{i=1}^N Fix(T_i) = Fix(T_1 \cdots T_N)$. In particular, if $N = 2$, we have $Fix(T_1) \cap Fix(T_2) = Fix(T_1 T_2) = Fix(T_2 T_1)$.
Lemma 6
([35]). Let $C$ be a nonempty closed convex subset of $H$ and $\Theta: C \times C \to \mathbb{R}$ be a bifunction satisfying the following conditions:
(A1) $\Theta(x, x) = 0$ for all $x \in C$;
(A2) $\Theta$ is monotone, i.e., $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;
(A3) for each $y \in C$, $x \mapsto \Theta(x, y)$ is weakly upper semicontinuous;
(A4) for each $x \in C$, $y \mapsto \Theta(x, y)$ is convex and lower semicontinuous.
Suppose that $\varphi: C \to \mathbb{R}$ is convex and lower semicontinuous satisfying the following conditions:
(H1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$,
$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$
(H2) $C$ is a bounded set.
For $r > 0$ and $x \in H$, define a mapping $T_r^{\Theta,\varphi}: H \to C$ as follows:
$T_r^{\Theta,\varphi}(x) = \{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0 \text{ for all } y \in C \},$
for all $x \in H$. Then, the following hold:
(i) $T_r^{\Theta,\varphi}(x) \ne \emptyset$ for each $x \in H$ and $T_r^{\Theta,\varphi}$ is single-valued;
(ii) $T_r^{\Theta,\varphi}$ is firmly nonexpansive;
(iii) $Fix(T_r^{\Theta,\varphi}) = \Omega$;
(iv) $\Omega$ is closed and convex.
Lemma 7
([36]). Let $C$, $H$, $\Theta$ and $T_r^{\Theta,\varphi}$ be as in Lemma 6. Then, the following inequality holds:
$\|T_s^{\Theta,\varphi}(x) - T_t^{\Theta,\varphi}(x)\|^2 \le \frac{s - t}{s} \langle T_s^{\Theta,\varphi}(x) - T_t^{\Theta,\varphi}(x), T_s^{\Theta,\varphi}(x) - x \rangle,$
for all $s, t > 0$ and $x \in H$.
Definition 8
([32]). A nonlinear operator $A$ with domain $D(A) \subseteq H$ and range $R(A) \subseteq H$ is said to be $\alpha$-inverse strongly monotone (for short, $\alpha$-ism) if there exists $\alpha > 0$ such that
$\langle x - y, Ax - Ay \rangle \ge \alpha \|Ax - Ay\|^2$ for all $x, y \in D(A)$.
Lemma 9
([37]). Let $C$ be a closed convex subset of $H$ and $T: C \to C$ be a nonexpansive mapping with $Fix(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T) x_n \to 0$, then $(I - T) x = 0$.
Lemma 10
([38]). Let $F: H \to H$ be an $L$-Lipschitzian and $\eta$-strongly monotone mapping. Let $0 < \mu < 2\eta / L^2$ and $\lambda \in (0, 1]$. Define
$T^\lambda x := T x - \lambda \mu F(T x), \quad x \in H,$
where $T: H \to H$ is a nonexpansive mapping. Then, the mapping $T^\lambda$ is a contraction from $H$ into $H$; that is,
$\|T^\lambda x - T^\lambda y\| \le (1 - \lambda \tau) \|x - y\|, \quad x, y \in H,$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu L^2)} \in (0, 1)$.
Lemma 11
([39]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n v_n + \mu_n,$
where $\{\gamma_n\}$ is a sequence in $[0, 1]$, $\{\mu_n\}$ is a sequence of nonnegative real numbers and $\{v_n\}$ is a sequence in $\mathbb{R}$ such that $\sum_{n=1}^\infty \gamma_n = \infty$, $\limsup_{n\to\infty} v_n \le 0$ and $\sum_{n=1}^\infty \mu_n < \infty$. Then, $\lim_{n\to\infty} a_n = 0$.
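This lemma can be illustrated numerically; a minimal sketch with our own choices of the sequences ($\gamma_n = v_n = 1/(n+1)$, $\mu_n = (n+1)^{-2}$, all satisfying the hypotheses) and the recursion taken with equality:

```python
# a_{n+1} = (1 - g_n) a_n + g_n v_n + m_n with sum g_n = inf,
# v_n -> 0 and sum m_n < inf; Lemma 11 predicts a_n -> 0.
a = 10.0
for n in range(200000):
    g, v, m = 1.0 / (n + 1), 1.0 / (n + 1), (n + 1) ** -2.0
    a = (1 - g) * a + g * v + m
print(a)  # decays toward 0
```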
Lemma 12
([23]). For given $x^*, y^* \in C$, $(x^*, y^*)$ is a solution of problem (4) if and only if $x^*$ is a fixed point of the mapping $G: C \to C$ defined by
$G(x) = P_C( P_C(x - \mu B x) - \nu A P_C(x - \mu B x) )$ for all $x \in C$,
where $y^* = P_C(x^* - \mu B x^*)$.
Lemma 13
([30]). Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n) y_n + \beta_n x_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty} (\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|) \le 0$. Then, $\lim_{n\to\infty} \|x_n - y_n\| = 0$.

3. Main Result

Theorem 14.
Let $C$ be a closed convex subset of $H$; $\Theta: C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (A1)–(A4) of Lemma 6; $\varphi: C \to \mathbb{R}$ be a lower semicontinuous and convex function with restriction (H1) or (H2) of Lemma 6; $A, B, D: H \to H$ be $\alpha$-ism, $\beta$-ism and $\omega$-ism, respectively; $\{T_i\}_{i=1}^N$ be a finite family of nonexpansive self-mappings on $H$; $F: H \to H$ be an $L$-Lipschitzian and $\nu$-strongly monotone mapping; and $V: H \to H$ be a $\kappa$-Lipschitzian mapping. Let $0 < \mu < 2\nu / L^2$ and $0 < \gamma \kappa < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\nu - \mu L^2)}$. Set $\Gamma := \bigcap_{n=1}^N Fix(T_n) \cap Fix(G) \cap \Omega$ and assume $\Gamma \ne \emptyset$. Suppose that $\{\alpha_n\}$, $\{\beta_n\}$, $\{t_n\}$ and $\{r_n\}$ are real sequences satisfying the following conditions:
(B1) $\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^\infty |\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_{n=1}^\infty \alpha_n = \infty$;
(B2) $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$ and $\sum_{n=1}^\infty |\beta_{n+1} - \beta_n| < \infty$;
(B3) $0 < \liminf_{n\to\infty} r_n \le \limsup_{n\to\infty} r_n < 2\alpha$ and $\sum_{n=1}^\infty |r_{n+1} - r_n| < \infty$;
(B4) $\{t_n\} \subset (b, 1]$ for some $b > 0$ and $\sum_{n=1}^\infty |t_{n+1} - t_n| < \infty$.
Given $x_1 \in H$, let $\{x_n\}$ be a sequence generated by
$u_n = t_n x_n + (1 - t_n) y_n,$
$\Theta(v_n, y) + \varphi(y) - \varphi(v_n) + \langle A u_n, y - v_n \rangle + \frac{1}{r_n} \langle y - v_n, v_n - u_n \rangle \ge 0, \quad y \in C,$
$z_n = P_C(I - \rho B) v_n,$
$y_n = P_C(I - \lambda D) z_n,$
$x_{n+1} = \alpha_n \gamma V(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \mu F) T_N^n T_{N-1}^n \cdots T_1^n y_n,$  (7)
where $T_i^n = (1 - s_n^i) I + s_n^i T_i$ for $i = 1, 2, \ldots, N$ and $s_n^i \in (\varepsilon_1, \varepsilon_2)$ for some $\varepsilon_1, \varepsilon_2 \in (0, 1)$, $\rho \in (0, 2\beta)$ and $\lambda \in (0, 2\omega)$. Suppose $\lim_{n\to\infty} |s_{n+1}^i - s_n^i| = 0$ for $i = 1, 2, \ldots, N$. Then, the sequence $\{x_n\}$ converges strongly to $x^* \in \Gamma$, where $x^* = P_\Gamma(I - \mu F + \gamma V)(x^*)$, which solves the variational inequality (VI):
$\langle (\mu F - \gamma V) x^*, x^* - x \rangle \le 0$ for all $x \in \Gamma$.
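Before turning to the proof, the structure of the scheme above can be sanity-checked numerically. The following one-dimensional sketch uses heavily simplifying assumptions of our own ($\Theta \equiv 0$, $\varphi \equiv 0$, $A = B = D = 0$, $N = 1$, $C = [0, 2]$, and particular choices of $T$, $V$, $F$), under which the resolvent and both projection steps collapse to $P_C$ and the pair $(u_n, y_n)$ is resolved by an inner fixed-point loop. With $F = I$ ($L = \nu = 1$), $\mu = \gamma = 1$, $V(x) = 0.1x$ and $T$ the projection onto $[1, 2]$, we get $\Gamma = [1, 2]$ and the VI solution $x^* = 1$:

```python
# One-dimensional sketch of the proposed scheme under the stated assumptions.
clamp = lambda x, lo, hi: min(max(x, lo), hi)
P_C = lambda x: clamp(x, 0.0, 2.0)   # projection onto C = [0, 2]
T   = lambda x: clamp(x, 1.0, 2.0)   # nonexpansive, Fix(T) = [1, 2]
V   = lambda x: 0.1 * x              # 0.1-Lipschitzian
F   = lambda x: x                    # 1-Lipschitzian, 1-strongly monotone

x = 5.0
for n in range(5000):
    a, b, t, s = 1.0 / (n + 2), 0.5, 0.5, 0.5
    # resolve the implicit pair: u = t x + (1 - t) y, y = P_C(P_C(P_C(u)))
    y = x
    for _ in range(60):
        u = t * x + (1 - t) * y
        y = P_C(P_C(P_C(u)))
    Tn = (1 - s) * y + s * T(y)      # averaged map T^n = (1 - s_n) I + s_n T
    x = a * V(x) + b * x + (1 - b) * Tn - a * F(Tn)
print(x)  # approaches x* = 1, the solution of the variational inequality
```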
To prove Theorem 14, we first establish some lemmas.
Lemma 15.
Let $F: H \to H$ be an $L$-Lipschitzian and $\nu$-strongly monotone mapping, let $0 < \mu < 2\nu / L^2$ and set $\tau = 1 - \sqrt{1 - \mu(2\nu - \mu L^2)}$. Then, $I - \mu F$ is nonexpansive.
Proof. 
For x , y H , we have
$\|(I - \mu F)x - (I - \mu F)y\|^2 = \|x - y - \mu(Fx - Fy)\|^2 = \|x - y\|^2 - 2\mu \langle x - y, Fx - Fy \rangle + \mu^2 \|Fx - Fy\|^2 \le \|x - y\|^2 - 2\mu\nu \|x - y\|^2 + \mu^2 L^2 \|x - y\|^2 = (1 - 2\mu\nu + \mu^2 L^2) \|x - y\|^2 = (1 - \tau)^2 \|x - y\|^2.$
Lemma 16.
Let $A: H \to H$ be an $\alpha$-ism and $\nu \in (0, 2\alpha)$. Then, $I - \nu A$ is nonexpansive.
Proof. 
For x , y H , we have
$\|(I - \nu A)x - (I - \nu A)y\|^2 = \|x - y - \nu(Ax - Ay)\|^2 = \|x - y\|^2 - 2\nu \langle x - y, Ax - Ay \rangle + \nu^2 \|Ax - Ay\|^2 \le \|x - y\|^2 - 2\alpha\nu \|Ax - Ay\|^2 + \nu^2 \|Ax - Ay\|^2 = \|x - y\|^2 + \nu(\nu - 2\alpha) \|Ax - Ay\|^2 \le \|x - y\|^2.$
Proof of Theorem 14. 
We break the proof into several steps.
Step 1. The sequences $\{x_n\}$ and $\{v_n\}$ are bounded. Suppose $x^* \in \Gamma$ and $y^* = P_C(x^* - \rho B x^*)$. Therefore, from (7), we obtain
$\|u_n - x^*\| = \|t_n x_n + (1 - t_n) y_n - x^*\| \le t_n \|x_n - x^*\| + (1 - t_n) \|y_n - x^*\|.$
Since $A$ is $\alpha$-ism, $0 < r_n < 2\alpha$, $x^* = T_{r_n}^{\Theta,\varphi}(x^* - r_n A x^*)$ and $v_n = T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n)$, we derive from (7) and Lemma 16 that
$\|v_n - x^*\|^2 = \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_n}^{\Theta,\varphi}(x^* - r_n A x^*)\|^2 \le \|u_n - r_n A u_n - (x^* - r_n A x^*)\|^2 \le \|u_n - x^*\|^2 + r_n(r_n - 2\alpha) \|A u_n - A x^*\|^2 \le \|u_n - x^*\|^2.$
Then, from (11), we have
$\|y_n - x^*\| = \|G v_n - x^*\| = \|G v_n - G x^*\| \le \|v_n - x^*\| \le \|u_n - x^*\| \le t_n \|x_n - x^*\| + (1 - t_n) \|y_n - x^*\|.$
Hence, $\|y_n - x^*\| \le \|x_n - x^*\|$. Therefore, by (11), we have $\|u_n - x^*\| \le \|x_n - x^*\|$, and from (12), $\|v_n - x^*\| \le \|x_n - x^*\|$. Hence, by (10), we have
$\|z_n - y^*\|^2 = \|P_C(I - \rho B) v_n - P_C(I - \rho B) x^*\|^2 \le \|(I - \rho B) v_n - (I - \rho B) x^*\|^2 \le \|v_n - x^*\|^2 - \rho(2\beta - \rho) \|B v_n - B x^*\|^2 \le \|x_n - x^*\|^2 - \rho(2\beta - \rho) \|B v_n - B x^*\|^2.$
In a similar way, we have
$\|y_n - x^*\|^2 \le \|z_n - y^*\|^2 - \lambda(2\omega - \lambda) \|D z_n - D y^*\|^2.$
Substituting (14) into (15), we obtain
$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - \rho(2\beta - \rho) \|B v_n - B x^*\|^2 - \lambda(2\omega - \lambda) \|D z_n - D y^*\|^2 \le \|x_n - x^*\|^2.$
By using (7) and conditions (B1) and (B2), we may assume, without loss of generality, that $\alpha_n \le 1 - \beta_n$. Then, from (7), (9) and Lemma 10, we have
$\|x_{n+1} - x^*\| = \|\alpha_n \gamma V(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n - x^*\|$
$= \|\alpha_n \gamma [V(x_n) - V(x^*)] + \alpha_n [\gamma V(x^*) - \mu F(x^*)] + \beta_n (x_n - x^*) + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n - ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n x^*\|$
$\le \alpha_n \gamma \kappa \|x_n - x^*\| + \beta_n \|x_n - x^*\| + \alpha_n \|\gamma V(x^*) - \mu F(x^*)\| + (1 - \beta_n) \|(I - \tfrac{\alpha_n}{1 - \beta_n} \mu F) T_2^n T_1^n y_n - (I - \tfrac{\alpha_n}{1 - \beta_n} \mu F) T_2^n T_1^n x^*\|$
$\le \alpha_n \gamma \kappa \|x_n - x^*\| + \beta_n \|x_n - x^*\| + \alpha_n \|\gamma V(x^*) - \mu F(x^*)\| + (1 - \beta_n)(1 - \tfrac{\alpha_n \tau}{1 - \beta_n}) \|y_n - x^*\|$
$\le (\alpha_n \gamma \kappa + \beta_n) \|x_n - x^*\| + \alpha_n \|\gamma V(x^*) - \mu F(x^*)\| + (1 - \beta_n - \alpha_n \tau) \|x_n - x^*\|$
$= (1 - \alpha_n(\tau - \gamma\kappa)) \|x_n - x^*\| + \alpha_n \|\gamma V(x^*) - \mu F(x^*)\|$
$\le \max\left\{ \|x_n - x^*\|, \frac{\|\gamma V(x^*) - \mu F(x^*)\|}{\tau - \gamma\kappa} \right\}.$
By induction, we have
$\|x_n - x^*\| \le \max\left\{ \|x_1 - x^*\|, \frac{\|\gamma V(x^*) - \mu F(x^*)\|}{\tau - \gamma\kappa} \right\},$
for all $n \ge 2$. Hence, $\{x_n\}$ is bounded, which implies that $\{u_n\}$, $\{v_n\}$, $\{y_n\}$, $\{V(x_n)\}$, $\{\mu F(T_2^n T_1^n y_n)\}$ and $\{T_2^n T_1^n y_n\}$ are all bounded.
Step 2. The sequence $\{x_n\}$ is asymptotically regular; that is, $\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$. To see this, set $x_{n+1} = \beta_n x_n + (1 - \beta_n) w_n$ to derive that
$w_{n+1} - w_n = \frac{x_{n+2} - \beta_{n+1} x_{n+1}}{1 - \beta_{n+1}} - \frac{x_{n+1} - \beta_n x_n}{1 - \beta_n}$
$= \frac{\alpha_{n+1} \gamma V(x_{n+1}) + ((1 - \beta_{n+1}) I - \alpha_{n+1} \mu F) T_2^{n+1} T_1^{n+1} y_{n+1}}{1 - \beta_{n+1}} - \frac{\alpha_n \gamma V(x_n) + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n}{1 - \beta_n}$
$= \frac{\alpha_{n+1}}{1 - \beta_{n+1}} \gamma V(x_{n+1}) - \frac{\alpha_n}{1 - \beta_n} \gamma V(x_n) + T_2^{n+1} T_1^{n+1} y_{n+1} - T_2^n T_1^n y_n + \frac{\alpha_n}{1 - \beta_n} \mu F(T_2^n T_1^n y_n) - \frac{\alpha_{n+1}}{1 - \beta_{n+1}} \mu F(T_2^{n+1} T_1^{n+1} y_{n+1})$
$= \frac{\alpha_{n+1}}{1 - \beta_{n+1}} [\gamma V(x_{n+1}) - \mu F(T_2^{n+1} T_1^{n+1} y_{n+1})] + \frac{\alpha_n}{1 - \beta_n} [\mu F(T_2^n T_1^n y_n) - \gamma V(x_n)] + T_2^{n+1} T_1^{n+1} y_{n+1} - T_2^{n+1} T_1^{n+1} y_n + T_2^{n+1} T_1^{n+1} y_n - T_2^n T_1^n y_n.$
It follows that
$\|w_{n+1} - w_n\| - \|x_{n+1} - x_n\| \le \frac{\alpha_{n+1}}{1 - \beta_{n+1}} (\gamma \|V(x_{n+1})\| + \mu \|F(T_2^{n+1} T_1^{n+1} y_{n+1})\|) + \frac{\alpha_n}{1 - \beta_n} (\gamma \|V(x_n)\| + \mu \|F(T_2^n T_1^n y_n)\|) + \|T_2^{n+1} T_1^{n+1} y_n - T_2^n T_1^n y_n\| + \|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|.$
Note that
$\|T_2^{n+1} T_1^{n+1} y_n - T_2^n T_1^n y_n\| \le \|T_2^{n+1} T_1^{n+1} y_n - T_2^{n+1} T_1^n y_n\| + \|T_2^{n+1} T_1^n y_n - T_2^n T_1^n y_n\| \le \|T_1^{n+1} y_n - T_1^n y_n\| + \|T_2^{n+1} T_1^n y_n - T_2^n T_1^n y_n\|$
and
$\|T_1^{n+1} y_n - T_1^n y_n\| = \|(1 - s_{n+1}^1) y_n + s_{n+1}^1 T_1 y_n - (1 - s_n^1) y_n - s_n^1 T_1 y_n\| = \|(s_n^1 - s_{n+1}^1) y_n + (s_{n+1}^1 - s_n^1) T_1 y_n\| \le |s_n^1 - s_{n+1}^1| (\|y_n\| + \|T_1 y_n\|).$
Since $|s_n^i - s_{n+1}^i| \to 0$ for $i = 1, 2$ and $\{y_n\}$, $\{T_1 y_n\}$ are bounded, we have
$\lim_{n\to\infty} \|T_1^{n+1} y_n - T_1^n y_n\| = 0.$
Similarly, we obtain
$\|T_2^{n+1} T_1^n y_n - T_2^n T_1^n y_n\| = \|(s_n^2 - s_{n+1}^2) T_1^n y_n + (s_{n+1}^2 - s_n^2) T_2 T_1^n y_n\| \le |s_n^2 - s_{n+1}^2| (\|T_1^n y_n\| + \|T_2 T_1^n y_n\|).$
Hence,
$\lim_{n\to\infty} \|T_2^{n+1} T_1^n y_n - T_2^n T_1^n y_n\| = 0.$
Thus, by (18)–(20), we have
$\lim_{n\to\infty} \|T_2^{n+1} T_1^{n+1} y_n - T_2^n T_1^n y_n\| = 0.$
Observe that by Lemma 16, we have
$\|T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_{n+1} - r_{n+1} A u_{n+1})\| \le \|u_n - r_n A u_n - u_{n+1} + r_{n+1} A u_{n+1}\| \le \|u_n - r_n A u_n - u_{n+1} + r_n A u_{n+1}\| + \|r_n A u_{n+1} - r_{n+1} A u_{n+1}\| \le \|u_n - u_{n+1}\| + |r_n - r_{n+1}| \|A u_{n+1}\|.$
Therefore,
$\|v_n - v_{n+1}\| = \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_{n+1} - r_{n+1} A u_{n+1})\| \le \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\| + \|T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_{n+1} - r_{n+1} A u_{n+1})\| \le \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\| + \|u_n - u_{n+1}\| + |r_n - r_{n+1}| \|A u_{n+1}\|.$
Therefore, from (7), we have
$\|y_{n+1} - y_n\| = \|P_C(I - \lambda D) P_C(I - \rho B) v_{n+1} - P_C(I - \lambda D) P_C(I - \rho B) v_n\| \le \|v_{n+1} - v_n\|$
$\le \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\| + \|t_{n+1} x_{n+1} + (1 - t_{n+1}) y_{n+1} - t_n x_n - (1 - t_n) y_n\| + |r_n - r_{n+1}| \|A u_{n+1}\|$
$= \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\| + \|t_{n+1}(x_{n+1} - x_n) + (t_{n+1} - t_n) x_n + (1 - t_{n+1})(y_{n+1} - y_n) - (t_{n+1} - t_n) y_n\| + |r_n - r_{n+1}| \|A u_{n+1}\|$
$\le t_{n+1} \|x_{n+1} - x_n\| + (1 - t_{n+1}) \|y_{n+1} - y_n\| + |t_{n+1} - t_n| \|x_n - y_n\| + |r_n - r_{n+1}| \|A u_{n+1}\| + \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\|,$
which implies that
$\|y_{n+1} - y_n\| \le \|x_{n+1} - x_n\| + \frac{|t_{n+1} - t_n|}{t_{n+1}} \|x_n - y_n\| + \frac{|r_n - r_{n+1}|}{t_{n+1}} \|A u_{n+1}\| + \frac{1}{t_{n+1}} \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\|$
$\le \|x_{n+1} - x_n\| + \frac{|t_{n+1} - t_n|}{b} \|x_n - y_n\| + \frac{|r_{n+1} - r_n|}{b} \|A u_{n+1}\| + \frac{1}{b} \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\|$
$\le \|x_{n+1} - x_n\| + (|t_{n+1} - t_n| + |r_{n+1} - r_n|) M + \frac{1}{b} \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\|,$
where $M > 0$ is a sufficiently large constant. Additionally, from Lemma 7, we have
$\lim_{n\to\infty} \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\|^2 \le \lim_{n\to\infty} \frac{r_{n+1} - r_n}{r_{n+1}} \langle T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n), T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n) - (u_n - r_n A u_n) \rangle = 0.$
Consequently, it follows from (17), (19), (21)–(23), and conditions (B1) and (B2) that
$\limsup_{n\to\infty} (\|w_{n+1} - w_n\| - \|x_{n+1} - x_n\|) \le \limsup_{n\to\infty} \Big\{ \frac{\alpha_{n+1}}{1 - \beta_{n+1}} (\gamma \|V(x_{n+1})\| + \mu \|F(T_2^{n+1} T_1^{n+1} y_{n+1})\|) + \frac{\alpha_n}{1 - \beta_n} (\gamma \|V(x_n)\| + \mu \|F(T_2^n T_1^n y_n)\|) + \|T_2^{n+1} T_1^{n+1} y_n - T_2^n T_1^n y_n\| + (|t_{n+1} - t_n| + |r_{n+1} - r_n|) M + \frac{1}{b} \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_{n+1}}^{\Theta,\varphi}(u_n - r_n A u_n)\| \Big\} = 0.$
Hence, by Lemma 13, we have $\lim_{n\to\infty} \|w_n - x_n\| = 0$. Therefore,
$\lim_{n\to\infty} \|x_{n+1} - x_n\| = \lim_{n\to\infty} (1 - \beta_n) \|w_n - x_n\| = 0.$
Step 3. We prove
(3a) $\|x_n - y_n\| \to 0$;
(3b) $\|u_n - v_n\| \to 0$;
(3c) $\|v_n - T_2^n T_1^n v_n\| \to 0$.
From (7), we have
$\|T_2^n T_1^n y_n - x_n\| \le \|T_2^n T_1^n y_n - x_{n+1}\| + \|x_{n+1} - x_n\| \le \alpha_n \|\gamma V(x_n) - \mu F(T_2^n T_1^n y_n)\| + \|x_{n+1} - x_n\| + \beta_n \|T_2^n T_1^n y_n - x_n\|.$
This implies that
$\|T_2^n T_1^n y_n - x_n\| \le \frac{1}{1 - \beta_n} \|x_{n+1} - x_n\| + \frac{\alpha_n}{1 - \beta_n} \|\gamma V(x_n) - \mu F(T_2^n T_1^n y_n)\|.$
Therefore, from (24), we obtain
$\lim_{n\to\infty} \|T_2^n T_1^n y_n - x_n\| = 0.$
Let $x^* \in \Gamma$ and $y^* = P_C(x^* - \rho B x^*)$. Therefore, from (7) and (16), we obtain
$\|x_{n+1} - x^*\|^2 = \|\alpha_n \gamma V(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n - x^*\|^2$
$= \|\alpha_n [\gamma V(x_n) - \mu F(x^*)] + \beta_n (x_n - x^*) + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n - ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n x^*\|^2$
$\le \left( \alpha_n \|\gamma V(x_n) - \mu F(x^*)\| + \beta_n \|x_n - x^*\| + (1 - \beta_n) \|(I - \tfrac{\alpha_n}{1 - \beta_n} \mu F) T_2^n T_1^n y_n - (I - \tfrac{\alpha_n}{1 - \beta_n} \mu F) T_2^n T_1^n x^*\| \right)^2$
$\le \left( \alpha_n \|\gamma V(x_n) - \mu F(x^*)\| + \beta_n \|x_n - x^*\| + (1 - \beta_n)(1 - \tfrac{\alpha_n \tau}{1 - \beta_n}) \|y_n - x^*\| \right)^2$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau) \|y_n - x^*\|^2$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau) (\|x_n - x^*\|^2 - \rho(2\beta - \rho) \|B v_n - B x^*\|^2 - \lambda(2\omega - \lambda) \|D z_n - D y^*\|^2)$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \|x_n - x^*\|^2 - (1 - \beta_n - \alpha_n \tau)(\rho(2\beta - \rho) \|B v_n - B x^*\|^2 + \lambda(2\omega - \lambda) \|D z_n - D y^*\|^2).$
Therefore,
$(1 - \beta_n - \alpha_n \tau)(\rho(2\beta - \rho) \|B v_n - B x^*\|^2 + \lambda(2\omega - \lambda) \|D z_n - D y^*\|^2) \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \alpha_n M \le (\|x_n - x^*\| - \|x_{n+1} - x^*\|)(\|x_n - x^*\| + \|x_{n+1} - x^*\|) + \alpha_n M \le \|x_n - x_{n+1}\| (\|x_n - x^*\| + \|x_{n+1} - x^*\|) + \alpha_n M.$
From (24), we have
$\|B v_n - B x^*\| \to 0$ and $\|D z_n - D y^*\| \to 0.$
Additionally, from (12), (13) and (26), we have
$\|x_{n+1} - x^*\|^2 \le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau) \|y_n - x^*\|^2$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau) \|v_n - x^*\|^2$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau)(\|u_n - x^*\|^2 + r_n(r_n - 2\alpha) \|A u_n - A x^*\|^2)$
$\le \frac{\alpha_n}{\tau} \|\gamma V(x_n) - \mu F(x^*)\|^2 + \|x_n - x^*\|^2 + (1 - \beta_n - \alpha_n \tau) r_n(r_n - 2\alpha) \|A u_n - A x^*\|^2.$
Therefore,
$(1 - \beta_n - \alpha_n \tau) r_n(2\alpha - r_n) \|A u_n - A x^*\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \alpha_n M.$
Thus,
$\|A u_n - A x^*\| \to 0.$
On the other hand, by (7) and (6), we have
$\|y_n - x^*\|^2 = \|P_C(I - \lambda D) z_n - P_C(I - \lambda D) y^*\|^2 \le \langle (I - \lambda D) z_n - (I - \lambda D) y^*, y_n - x^* \rangle = \frac{1}{2} [\|(I - \lambda D) z_n - (I - \lambda D) y^*\|^2 + \|y_n - x^*\|^2 - \|z_n - y_n + x^* - y^* - \lambda(D z_n - D y^*)\|^2].$
Therefore, by Lemma 16, we have
$\|y_n - x^*\|^2 \le \|z_n - y^*\|^2 - \|z_n - y_n + x^* - y^* - \lambda(D z_n - D y^*)\|^2 = \|z_n - y^*\|^2 - [\|z_n - y_n + x^* - y^*\|^2 + \lambda^2 \|D z_n - D y^*\|^2 - 2\lambda \langle z_n - y_n + x^* - y^*, D z_n - D y^* \rangle] \le \|z_n - y^*\|^2 - \|z_n - y_n + x^* - y^*\|^2 + 2\lambda \|z_n - y_n + x^* - y^*\| \|D z_n - D y^*\|.$
Again, by (7), we obtain
$\|z_n - y^*\|^2 = \|P_C(I - \rho B) v_n - P_C(I - \rho B) x^*\|^2 \le \langle (I - \rho B) v_n - (I - \rho B) x^*, z_n - y^* \rangle = \frac{1}{2} [\|(I - \rho B) v_n - (I - \rho B) x^*\|^2 + \|z_n - y^*\|^2 - \|v_n - z_n + y^* - x^* - \rho(B v_n - B x^*)\|^2],$
which implies
$\|z_n - y^*\|^2 \le \|v_n - x^*\|^2 - \|v_n - z_n + y^* - x^* - \rho(B v_n - B x^*)\|^2 = \|v_n - x^*\|^2 - [\|v_n - z_n + y^* - x^*\|^2 - 2\rho \langle v_n - z_n + y^* - x^*, B v_n - B x^* \rangle + \rho^2 \|B v_n - B x^*\|^2] \le \|x_n - x^*\|^2 - \|v_n - z_n + y^* - x^*\|^2 + 2\rho \|v_n - z_n + y^* - x^*\| \|B v_n - B x^*\|.$
It follows from (29) and (30) that
$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|v_n - z_n + y^* - x^*\|^2 - \|z_n - y_n + x^* - y^*\|^2 + 2\rho \|v_n - z_n + y^* - x^*\| \|B v_n - B x^*\| + 2\lambda \|z_n - y_n + x^* - y^*\| \|D z_n - D y^*\|.$
Therefore, from Lemmas 1 and 10, we obtain
$\|x_{n+1} - x^*\|^2 = \|\alpha_n \gamma V(x_n) + \beta_n x_n + ((1 - \beta_n) I - \alpha_n \mu F) T_2^n T_1^n y_n - x^*\|^2$
$= \|\alpha_n [\gamma V(x_n) - \mu F(x^*)] + \beta_n (x_n - T_2^n T_1^n y_n) + (I - \alpha_n \mu F) T_2^n T_1^n y_n - (I - \alpha_n \mu F) T_2^n T_1^n x^*\|^2$
$\le \|\beta_n (x_n - T_2^n T_1^n y_n) + (I - \alpha_n \mu F) T_2^n T_1^n y_n - (I - \alpha_n \mu F) T_2^n T_1^n x^*\|^2 + 2\alpha_n \langle \gamma V(x_n) - \mu F(x^*), x_{n+1} - x^* \rangle$
$\le [\beta_n \|x_n - T_2^n T_1^n y_n\| + \|(I - \alpha_n \mu F) T_2^n T_1^n y_n - (I - \alpha_n \mu F) T_2^n T_1^n x^*\|]^2 + 2\alpha_n \langle \gamma V(x_n) - \mu F(x^*), x_{n+1} - x^* \rangle$
$\le [\beta_n \|x_n - T_2^n T_1^n y_n\| + (1 - \alpha_n \tau) \|y_n - x^*\|]^2 + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le [\|x_n - T_2^n T_1^n y_n\| + \|y_n - x^*\|]^2 + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le \|x_n - T_2^n T_1^n y_n\|^2 + \|y_n - x^*\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le \|x_n - T_2^n T_1^n y_n\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + \|x_n - x^*\|^2 - \|v_n - z_n + y^* - x^*\|^2 - \|z_n - y_n + x^* - y^*\|^2 + 2\rho \|v_n - z_n + y^* - x^*\| \|B v_n - B x^*\| + 2\lambda \|z_n - y_n + x^* - y^*\| \|D z_n - D y^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|.$
Hence,
$\|v_n - z_n + y^* - x^*\|^2 + \|z_n - y_n + x^* - y^*\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \|x_n - T_2^n T_1^n y_n\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\rho \|v_n - z_n + y^* - x^*\| \|B v_n - B x^*\| + 2\lambda \|z_n - y_n + x^* - y^*\| \|D z_n - D y^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le \|x_{n+1} - x_n\| (\|x_n - x^*\| + \|x_{n+1} - x^*\|) + \|x_n - T_2^n T_1^n y_n\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\rho \|v_n - z_n + y^* - x^*\| \|B v_n - B x^*\| + 2\lambda \|z_n - y_n + x^* - y^*\| \|D z_n - D y^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|.$
From (B1), (24), (25) and (27), we obtain
$\|v_n - z_n + y^* - x^*\| \to 0$ and $\|z_n - y_n + x^* - y^*\| \to 0.$
By (32) and
$\|v_n - y_n\| \le \|v_n - z_n + y^* - x^*\| + \|z_n - y_n + x^* - y^*\|,$
we have $\lim_{n\to\infty} \|v_n - y_n\| = 0$. The firm nonexpansivity of $T_r^{\Theta,\varphi}$ implies that
$\|v_n - x^*\|^2 = \|T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n) - T_{r_n}^{\Theta,\varphi}(x^* - r_n A x^*)\|^2 \le \langle u_n - r_n A u_n - (x^* - r_n A x^*), v_n - x^* \rangle = \frac{1}{2} \{ \|u_n - r_n A u_n - (x^* - r_n A x^*)\|^2 + \|v_n - x^*\|^2 - \|u_n - r_n A u_n - (x^* - r_n A x^*) - (v_n - x^*)\|^2 \} \le \frac{1}{2} \{ \|u_n - x^*\|^2 + \|v_n - x^*\|^2 - \|u_n - v_n - r_n(A u_n - A x^*)\|^2 \} \le \frac{1}{2} \{ \|x_n - x^*\|^2 + \|v_n - x^*\|^2 - \|u_n - v_n\|^2 + 2 r_n \langle u_n - v_n, A u_n - A x^* \rangle - r_n^2 \|A u_n - A x^*\|^2 \}.$
Hence,
$\|v_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|u_n - v_n\|^2 + 2 r_n \langle u_n - v_n, A u_n - A x^* \rangle - r_n^2 \|A u_n - A x^*\|^2 \le \|x_n - x^*\|^2 - \|u_n - v_n\|^2 + 2 r_n \|u_n - v_n\| \|A u_n - A x^*\|.$
Thus, from (31), we have
$\|x_{n+1} - x^*\|^2 \le \|x_n - T_2^n T_1^n y_n\|^2 + \|y_n - x^*\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le \|x_n - T_2^n T_1^n y_n\|^2 + \|v_n - x^*\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|$
$\le \|x_n - T_2^n T_1^n y_n\|^2 + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + \|x_n - x^*\|^2 - \|u_n - v_n\|^2 + 2 r_n \|u_n - v_n\| \|A u_n - A x^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|.$
Therefore,
$\|u_n - v_n\|^2 \le \|x_{n+1} - x_n\| (\|x_n - x^*\| + \|x_{n+1} - x^*\|) + \|x_n - T_2^n T_1^n y_n\|^2 + 2 r_n \|u_n - v_n\| \|A u_n - A x^*\| + 2 \|x_n - T_2^n T_1^n y_n\| \|y_n - x^*\| + 2\alpha_n \|\gamma V(x_n) - \mu F(x^*)\| \|x_{n+1} - x^*\|.$
From (24), (25) and (28), we obtain $\|u_n - v_n\| \to 0$. Then, from $\|v_n - y_n\| \to 0$, we have $\|u_n - y_n\| \to 0$. By (7), we have $u_n - y_n = t_n(x_n - y_n)$. Hence,
$\|x_n - y_n\| = \frac{\|u_n - y_n\|}{t_n} \le \frac{\|u_n - y_n\|}{b}.$
Therefore, $\|x_n - y_n\| \to 0$, and so $\|x_n - v_n\| \to 0$. Since
$\|v_n - T_2^n T_1^n v_n\| \le \|T_2^n T_1^n v_n - T_2^n T_1^n y_n\| + \|x_n - T_2^n T_1^n y_n\| + \|x_n - v_n\| \le \|v_n - y_n\| + \|x_n - T_2^n T_1^n y_n\| + \|x_n - v_n\|,$
it follows from (25), $\|x_n - v_n\| \to 0$ and $\|v_n - y_n\| \to 0$ that
$\lim_{n\to\infty} \|v_n - T_2^n T_1^n v_n\| = 0.$
Step 4. We have the following variational inequality:
$\limsup_{n\to\infty} \langle \gamma V(x^*) - \mu F(x^*), x_n - x^* \rangle \le 0,$
where $x^*$ is the unique fixed point of the contraction $P_\Gamma(I - \mu F + \gamma V)$; namely, $x^* = P_\Gamma(I - \mu F + \gamma V) x^*$. Alternatively, $x^*$ is the unique solution of the variational inequality
$\langle \gamma V(x^*) - \mu F(x^*), y - x^* \rangle \le 0, \quad y \in \Gamma.$
To prove (34), take a subsequence $\{v_{n_i}\}$ of $\{v_n\}$ weakly convergent to a point $\hat{v} \in C$ and such that
$\limsup_{n\to\infty} \langle \gamma V(x^*) - \mu F(x^*), x_n - x^* \rangle = \limsup_{n\to\infty} \langle \gamma V(x^*) - \mu F(x^*), v_n - x^* \rangle = \lim_{i\to\infty} \langle \gamma V(x^*) - \mu F(x^*), v_{n_i} - x^* \rangle = \langle \gamma V(x^*) - \mu F(x^*), \hat{v} - x^* \rangle.$
By virtue of VI (35), it suffices to show that $\hat{v} \in \Gamma$. To see $\hat{v} \in \bigcap_{n=1}^N Fix(T_n)$, we use (3c) together with the demiclosedness principle of nonexpansive mappings (Lemma 9). Since $\{s_k^i\}$ is bounded for $i = 1, 2$, we can assume that $s_{k_j}^i \to s^i$ as $j \to \infty$, where $0 < \varepsilon_1 \le s^i \le \varepsilon_2 < 1$ for $i = 1, 2$. Define $T^i = (1 - s^i) I + s^i T_i$ for $i = 1, 2$. Therefore, $Fix(T^i) = Fix(T_i)$ for $i = 1, 2$. Note that
$\|T_i^{k_j} x - T^i x\| = \|(1 - s_{k_j}^i) x + s_{k_j}^i T_i x - (1 - s^i) x - s^i T_i x\| \le |s_{k_j}^i - s^i| (\|x\| + \|T_i x\|).$
Hence,
$\sup_{x \in E} \|T_i^{k_j} x - T^i x\| \to 0,$
for $i = 1, 2$, where $E$ is an arbitrary bounded subset of $H$. Since $Fix(T^1) \cap Fix(T^2) = Fix(T_1) \cap Fix(T_2)$ and $T^i$ is $s^i$-averaged for $i = 1, 2$, by Proposition 5, we have $Fix(T^2 T^1) = Fix(T^1) \cap Fix(T^2)$. From
$\|v_{n_j} - T^2 T^1 v_{n_j}\| \le \|v_{n_j} - T_2^{n_j} T_1^{n_j} v_{n_j}\| + \|T_2^{n_j} T_1^{n_j} v_{n_j} - T^2 T_1^{n_j} v_{n_j}\| + \|T^2 T_1^{n_j} v_{n_j} - T^2 T^1 v_{n_j}\| \le \|v_{n_j} - T_2^{n_j} T_1^{n_j} v_{n_j}\| + \sup_{v \in E} \|T_2^{n_j} v - T^2 v\| + \sup_{v \in E'} \|T_1^{n_j} v - T^1 v\|,$
where $E$ is a bounded subset including $\{T_1^{n_j} v_{n_j}\}$ and $E'$ is a bounded subset including $\{v_{n_j}\}$, together with (33) and (36), we obtain $\lim_{j\to\infty} \|v_{n_j} - T^2 T^1 v_{n_j}\| = 0$. Therefore, from Lemma 9, we have $\hat{v} \in Fix(T^2 T^1)$. Hence, $\hat{v} \in \bigcap_{n=1}^N Fix(T_n)$. Next, we show $\hat{v} \in \Omega$. Since $v_n = T_{r_n}^{\Theta,\varphi}(u_n - r_n A u_n)$, it follows from the definition of $T_r^{\Theta,\varphi}$ that
Θ ( v n , y ) + φ ( y ) φ ( v n ) + A u n , y v n + 1 r n y v n , v n u n 0 , y C .
From ( A 2 ), it follows that
φ ( y ) φ ( v n ) + A u n , y v n + 1 r n y v n , v n u n Θ ( y , v n ) , y C .
Replacing n by n i , we have
φ(y) − φ(v_{n_i}) + ⟨A u_{n_i}, y − v_{n_i}⟩ + (1/r_{n_i})⟨y − v_{n_i}, v_{n_i} − u_{n_i}⟩ ≥ Θ(y, v_{n_i}), for all y ∈ C.
Now, set y t = t y + ( 1 t ) v ^ C with t ( 0 , 1 ) . Then, from (37), we have
⟨A y_t, y_t − v_{n_i}⟩ ≥ ⟨A y_t, y_t − v_{n_i}⟩ + Θ(y_t, v_{n_i}) − φ(y_t) + φ(v_{n_i}) − ⟨y_t − v_{n_i}, (v_{n_i} − u_{n_i})/r_{n_i}⟩ − ⟨A u_{n_i}, y_t − v_{n_i}⟩ = ⟨A y_t − A v_{n_i}, y_t − v_{n_i}⟩ + ⟨A v_{n_i} − A u_{n_i}, y_t − v_{n_i}⟩ + Θ(y_t, v_{n_i}) − φ(y_t) + φ(v_{n_i}) − ⟨y_t − v_{n_i}, (v_{n_i} − u_{n_i})/r_{n_i}⟩.
From (3b), we have ‖A u_{n_i} − A v_{n_i}‖ → 0. Moreover, by the monotonicity of A, the lower semicontinuity of φ, (A4) and (B3), we obtain
⟨A y_t, y_t − v̂⟩ ≥ Θ(y_t, v̂) − φ(y_t) + φ(v̂)
as i . From ( A 1 ), ( A 4 ), the convexity of φ and (38), we have
0 = Θ(y_t, y_t) + φ(y_t) − φ(y_t) ≤ tΘ(y_t, y) + (1 − t)Θ(y_t, v̂) + tφ(y) + (1 − t)φ(v̂) − φ(y_t) = t(Θ(y_t, y) + φ(y) − φ(y_t)) + (1 − t)(Θ(y_t, v̂) + φ(v̂) − φ(y_t)) ≤ t(Θ(y_t, y) + φ(y) − φ(y_t)) + (1 − t)⟨A y_t, y_t − v̂⟩ = t(Θ(y_t, y) + φ(y) − φ(y_t)) + t(1 − t)⟨A y_t, y − v̂⟩.
Thus,
Θ(y_t, y) + φ(y) − φ(y_t) + (1 − t)⟨A y_t, y − v̂⟩ ≥ 0.
Letting t 0 , we have
Θ(v̂, y) + φ(y) − φ(v̂) + ⟨A v̂, y − v̂⟩ ≥ 0.
Hence, v̂ ∈ Ω. It remains to show that v̂ ∈ Fix(G). We know
lim_{i→∞} ‖v_{n_i} − G v_{n_i}‖ = lim_{i→∞} ‖v_{n_i} − y_{n_i}‖ = 0.
From Lemma 9, we have v̂ ∈ Fix(G). Therefore, v̂ ∈ Γ, and the proof of Step 4 is complete.
Step 5. Strong convergence: x_n → x* in norm, where x* satisfies (34). From Lemma 1 and (7), we have
‖x_{n+1} − x*‖² = ‖α_n γV(x_n) + β_n x_n + ((1 − β_n)I − α_n μF)T_2^n T_1^n y_n − x*‖² = ‖[((1 − β_n)I − α_n μF)T_2^n T_1^n y_n − ((1 − β_n)I − α_n μF)T_2^n T_1^n x*] + β_n(x_n − x*) + α_n(γV(x_n) − μF(x*))‖² ≤ (‖((1 − β_n)I − α_n μF)T_2^n T_1^n y_n − ((1 − β_n)I − α_n μF)T_2^n T_1^n x*‖ + β_n‖x_n − x*‖)² + 2α_n⟨γV(x_n) − μF(x*), x_{n+1} − x*⟩ ≤ (β_n‖x_n − x*‖ + (1 − β_n − α_n τ)‖y_n − x*‖)² + 2α_n⟨γV(x_n) − γV(x*), x_{n+1} − x*⟩ + 2α_n⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ ≤ (β_n‖x_n − x*‖ + (1 − β_n − α_n τ)‖x_n − x*‖)² + 2α_n γκ‖x_n − x*‖ ‖x_{n+1} − x*‖ + 2α_n⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ ≤ (1 − α_n τ)²‖x_n − x*‖² + 2α_n γκ‖x_n − x*‖(‖x_{n+1} − x_n‖ + ‖x_n − x*‖) + 2α_n⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ = (1 − α_n τ)²‖x_n − x*‖² + 2α_n γκ‖x_n − x*‖² + 2α_n γκ‖x_n − x*‖ ‖x_n − x_{n+1}‖ + 2α_n⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ = (1 − 2α_n(τ − γκ))‖x_n − x*‖² + α_n²τ²‖x_n − x*‖² + 2α_n γκ‖x_n − x*‖ ‖x_n − x_{n+1}‖ + 2α_n⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ ≤ (1 − 2α_n(τ − γκ))‖x_n − x*‖² + 2α_n(τ − γκ)[ (α_n τ² M²)/(2(τ − γκ)) + (γκ M)/(τ − γκ) ‖x_n − x_{n+1}‖ + 1/(τ − γκ) ⟨γV(x*) − μF(x*), x_{n+1} − x*⟩ ].
We can rewrite the last relation as
‖x_{n+1} − x*‖² ≤ (1 − γ̃_n)‖x_n − x*‖² + γ̃_n υ̃_n,
where γ ˜ n = 2 α n ( τ γ κ ) and
υ̃_n = (α_n τ² M²)/(2(τ − γκ)) + (γκ M)/(τ − γκ) ‖x_n − x_{n+1}‖ + 1/(τ − γκ) ⟨γV(x*) − μF(x*), x_{n+1} − x*⟩.
It is now immediately clear that Σ_{n=1}^∞ γ̃_n = ∞ and lim sup_{n→∞} υ̃_n ≤ 0. This enables us to apply Lemma 11 to relation (39) and conclude that ‖x_n − x*‖ → 0; that is, x_n → x* in norm. □
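The convergence mechanism in Step 5 rests on the recursion used as Lemma 11 here: if s_{n+1} ≤ (1 − γ_n)s_n + γ_n υ_n with γ_n ∈ (0, 1), Σ γ_n = ∞ and lim sup υ_n ≤ 0, then s_n → 0. A quick numerical illustration of this mechanism, with hypothetical sequences γ_n = 1/n and υ_n = 1/√n (not the ones from the proof), is the following sketch:

```python
import math

# Hypothetical data: gamma_n = 1/n (so the sum of gamma_n diverges) and
# upsilon_n = 1/sqrt(n) (so limsup upsilon_n = 0 <= 0).
s = 10.0  # s_1: an arbitrary starting "error"
for n in range(1, 200001):
    gamma, upsilon = 1.0 / n, 1.0 / math.sqrt(n)
    # Take the recursion bound with equality (the worst case the lemma allows)
    s = (1 - gamma) * s + gamma * upsilon
print(s)  # decays like 2/sqrt(n), i.e. small after 2*10^5 steps
```

The point of the sketch is that even with υ_n decaying slowly, divergence of Σ γ_n forces s_n to 0, which is exactly how (39) is exploited.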
Corollary 17.
Let all the assumptions of Theorem 14 hold except that r_n = 1 for all n ∈ ℕ, Θ(x, y) = 0, A x = 0 and φ(x) = 0, and Γ := ∩_{n=1}^N Fix(T_n) ∩ Fix(G) (instead of Γ := ∩_{n=1}^N Fix(T_n) ∩ Fix(G) ∩ Ω). Then, the sequence {x_n} defined by
u_n = t_n x_n + (1 − t_n)y_n,
z_n = P_C(I − ρB)u_n,
y_n = P_C(I − λD)z_n,
x_{n+1} = α_n γV(x_n) + β_n x_n + ((1 − β_n)I − α_n μF)T_N^n T_{N−1}^n ⋯ T_1^n y_n,
where the initial guess x_1 ∈ H is arbitrary, converges strongly to x* ∈ Γ, where x* = P_Γ(I − μF + γV)(x*), which solves the variational inequality (8).
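The two projection steps appearing in the scheme above compose into the mapping G = P_C(I − λD)P_C(I − ρB). As a sanity check, here is a minimal sketch of G on a real interval; the data C = [−100, 100], Bx = x/6, Dx = x/3, ρ = 4 and λ = 2 are borrowed from Example 19 below and are an assumed illustration, not part of the corollary:

```python
def proj(x, lo=-100.0, hi=100.0):
    # Metric projection of a real number onto C = [lo, hi]
    return min(max(x, lo), hi)

def G(x, rho=4.0, lam=2.0):
    # G = P_C(I - lam*D) P_C(I - rho*B) with Bx = x/6 and Dx = x/3
    z = proj(x - rho * (x / 6.0))
    return proj(z - lam * (z / 3.0))

print(G(9.0))  # 9 - 4*(9/6) = 3, then 3 - 2*(3/3) = 1 -> 1.0
print(G(0.0))  # 0.0: consistent with Fix(G) = {0} for these operators
```

For these linear choices, G contracts toward 0, so Fix(G) = {0}, matching the set Γ computed in Example 19.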
Corollary 18.
Let all the assumptions of Theorem 14 hold except that s_n^i = 1/2 for all i = 1, 2, …, N and n ∈ ℕ, T_i x = x for all i = 1, 2, …, N, and Γ := Fix(G) ∩ Ω (instead of Γ := ∩_{n=1}^N Fix(T_n) ∩ Fix(G) ∩ Ω). Then, the sequence {x_n} defined by
u_n = t_n x_n + (1 − t_n)y_n,
Θ(v_n, y) + φ(y) − φ(v_n) + ⟨A u_n, y − v_n⟩ + (1/r_n)⟨y − v_n, v_n − u_n⟩ ≥ 0, for all y ∈ C,
z_n = P_C(I − ρB)v_n,
y_n = P_C(I − λD)z_n,
x_{n+1} = α_n γV(x_n) + β_n x_n + ((1 − β_n)I − α_n μF)y_n,
where the initial guess x_1 ∈ H is arbitrary, converges strongly to x* ∈ Γ, where x* = P_Γ(I − μF + γV)(x*), which solves the variational inequality (8).

4. Numerical Test

In this section, first, we give a numerical example that satisfies all assumptions in Theorem 14 in order to illustrate the convergence of the sequence generated by the iterative process defined by (7). Next, we give another numerical example for (7) to compare its behavior with the iterative method (5) of Ke and Ma [29].
Example 19.
Let C = [−100, 100] ⊆ H = ℝ, and define Θ(x, y) = −7x² + xy + 6y², φ(x) = 3x and A x = 2x − 3. Then, A is 1/4-ism, and from Lemma 6, T_r^{Θ,φ} is single-valued for all r > 0. Now, we deduce a formula for T_r^{Θ,φ}(x). For any y ∈ [−100, 100] and r > 0, we have
Θ(z, y) + φ(y) − φ(z) + ⟨A x, y − z⟩ + (1/r)⟨y − z, z − x⟩ ≥ 0
⟺ 6y² + yz − 7z² + 3y − 3z + (2x − 3)(y − z) + (1/r)(y − z)(z − x) ≥ 0
⟺ 6ry² + ryz − 7rz² + 2rxy − 2rxz + yz − xy − z² + xz ≥ 0
⟺ 6ry² + ((r + 1)z + (2r − 1)x)y − 7rz² − 2rxz − z² + xz ≥ 0.
Set G(y) = 6ry² + ((r + 1)z + (2r − 1)x)y − 7rz² − 2rxz − z² + xz. Then, G(y) is a quadratic function of y with coefficients a = 6r, b = (r + 1)z + (2r − 1)x and c = −7rz² − 2rxz − z² + xz. Therefore, its discriminant Δ = b² − 4ac is
Δ = [(r + 1)z + (2r − 1)x]² − 24r(−7rz² − 2rxz − z² + xz) = [(2r − 1)x + (13r + 1)z]².
Since G(y) ≥ 0 for all y ∈ C, we need Δ ≤ 0; as Δ is a perfect square, this forces Δ = 0, that is, (2r − 1)x + (13r + 1)z = 0. Therefore, z = ((1 − 2r)/(1 + 13r))x, which yields T_r^{Θ,φ}(x) = ((1 − 2r)/(1 + 13r))x. Therefore, from Lemma 6, we have Ω = {0}. Let α_n = 1/(4n), β_n = (n − 1)/(2n), t_n = 1/3, s_n^i = 1/2, r_n = (n − 1)/(6n) and T_i x = x/i for i = 1, 2, …, N. Suppose V(x) = (1/4)x, B x = (1/6)x, D x = (1/3)x and F x = (1/10)x. Hence, B is 3-ism, D is 2-ism, F is 1/2-Lipschitzian and 1/5-ism, and V is 1/4-Lipschitzian. Let γ = 1/5, ρ = 4, λ = 2 and μ = 1. Hence, Γ = {0}. Then, from Theorem 14, the sequence {x_n}, generated iteratively by
u_n = ((171n − 117)/(505n − 355))x_n,
v_n = ((4n + 2)/(19n − 13))u_n,
z_n = ((4n + 2)/(57n − 39))u_n,
y_n = ((4n + 2)/(171n − 117))u_n,
x_{n+1} = (1/(80n) + (n − 1)/(2n) + ((80n² + 116n + 38)/(20200n² − 14200n)) · ((N + 1)/2^N))x_n,
converges strongly to 0 ∈ Γ, where 0 = P_Γ((19/20)I)(0).
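As a consistency check on Example 19, the scheme can also be run step by step instead of through the collapsed coefficient. The sketch below is our own code (with the hypothetical choice N = 3; the projections P_C are omitted since the iterates stay inside C) and uses the closed-form resolvent T_r^{Θ,φ}(x) = ((1 − 2r)/(1 + 13r))x derived above:

```python
def resolvent(u, r):
    # Closed-form resolvent derived in Example 19: T_r(u) = (1-2r)/(1+13r) * u
    return (1.0 - 2.0 * r) / (1.0 + 13.0 * r) * u

def iterate(x1, steps, N=3):
    x = x1
    for n in range(1, steps + 1):
        alpha, beta, r = 1.0 / (4 * n), (n - 1.0) / (2 * n), (n - 1.0) / (6 * n)
        # u_n = x_n/3 + (2/3) y_n is implicit; for these linear maps it
        # collapses to the closed form used in the paper's display:
        u = (171.0 * n - 117.0) / (505.0 * n - 355.0) * x
        v = resolvent(u, r)         # generalized mixed equilibrium step
        z = (1.0 - 4.0 / 6.0) * v   # (I - rho*B)v with rho = 4, Bx = x/6
        y = (1.0 - 2.0 / 3.0) * z   # (I - lam*D)z with lam = 2, Dx = x/3
        w = y
        for i in range(1, N + 1):   # T_i^n = (I + T_i)/2 with T_i x = x/i
            w = 0.5 * (w + w / i)
        # x_{n+1} = alpha*gamma*V(x_n) + beta*x_n + ((1-beta)I - alpha*mu*F) w
        x = alpha * (1.0 / 5.0) * (x / 4.0) + beta * x \
            + ((1.0 - beta) * w - alpha * w / 10.0)
    return x

print(abs(iterate(100.0, 30)))  # very small: the iterates collapse to 0
```

Running the loop from either initial point used in Table 1 drives |x_n| to 0, in line with the theorem.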
Now, we compare the effectiveness of our algorithm with the algorithm (5) by a numerical example. In fact, Ke and Ma [29] proved the following strong convergence theorem.
Theorem 20.
Let C be a closed convex subset of H, let T be a nonexpansive self-mapping on C with Fix(T) ≠ ∅, and let f be a κ-contraction on C for some κ ∈ [0, 1). Pick any x_1 ∈ H and let {x_n} be the sequence generated by (5), where {α_n} and {t_n} are real sequences satisfying the following conditions:
( B 1 )
{α_n} ⊂ (0, 1), lim_{n→∞} α_n = 0, Σ_{n=1}^∞ |α_{n+1} − α_n| < ∞ and Σ_{n=1}^∞ α_n = ∞;
( B 2 )
0 < ε ≤ t_n ≤ t_{n+1} < 1.
Then, the sequence { x n } converges strongly to x * F i x ( T ) , which solves the variational inequality:
⟨(I − f)x*, x* − x⟩ ≤ 0 for all x ∈ Fix(T).
Example 21.
Let all the assumptions of Example 19 hold except the mappings T_i x = T x = x for all i = 1, 2, …, N and C = [−20, 20]. First, suppose the sequence {x_n} is generated by (7). Then, scheme (7) simplifies to
u_n = ((171n − 117)/(505n − 355))x_n,
v_n = ((4n + 2)/(19n − 13))u_n,
z_n = ((4n + 2)/(57n − 39))u_n,
y_n = ((4n + 2)/(171n − 117))u_n,
x_{n+1} = (1/(80n) + (n − 1)/(2n) + (80n² + 116n + 38)/(20200n² − 14200n))x_n.
Therefore, the sequence { x n } converges strongly to 0 by Theorem 14. Now, let the sequence { x n } be generated by (5). Then, the scheme (5) can be simplified as
x_{n+1} = (1/(16n))x_n + (1 − 1/(4n))((1/3)x_n + (2/3)x_{n+1}).
Therefore, the sequence { x n } converges strongly to 0 by Theorem 20.
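Both simplified schemes are one-dimensional linear recursions, so the comparison can be scripted directly. In the sketch below (our own code, not the authors'), the implicit relation (42) is first solved for x_{n+1} before iterating; the exact magnitudes need not match the paper's table, but the qualitative gap between the two schemes is reproduced:

```python
def run_41(x, last_n):
    # x_{n+1} = (1/(80n) + (n-1)/(2n) + (80n^2+116n+38)/(20200n^2-14200n)) x_n
    for n in range(1, last_n):
        x *= (1.0 / (80 * n) + (n - 1.0) / (2 * n)
              + (80.0 * n * n + 116 * n + 38) / (20200.0 * n * n - 14200 * n))
    return x

def run_42(x, last_n):
    # Solve x_{n+1} = x_n/(16n) + (1 - 1/(4n)) (x_n/3 + 2 x_{n+1}/3) for x_{n+1}:
    # x_{n+1} (1 - (2/3)(1 - 1/(4n))) = (1/(16n) + (1/3)(1 - 1/(4n))) x_n
    for n in range(1, last_n):
        a = 1.0 - (2.0 / 3.0) * (1.0 - 1.0 / (4 * n))
        b = 1.0 / (16 * n) + (1.0 / 3.0) * (1.0 - 1.0 / (4 * n))
        x = b / a * x
    return x

print(abs(run_41(18.0, 50)), abs(run_42(18.0, 50)))
# run_41's iterate is many orders of magnitude closer to 0 after 50 steps
```

The per-step contraction factor of (41) stays near 1/2, while that of (42) tends to 1 as n grows, which is why the first scheme races ahead.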
Next, the numerical comparison of algorithms (41) and (42) is provided.
According to Table 1 and Figure 1, although the initial points differ (x_1 = −98 and x_1 = 100), in both cases the sequence {x_n} defined by (40) converges to 0, where N = 3 and n = 30.
Table 2 and Figure 2 indicate that the sequences {x_n} generated by (41) and (42) converge to 0, where x_1 = 18 and n = 50. The superior efficiency of Algorithm (41) in comparison with Algorithm (42) appears clearly in this figure.
Remark 22.
Table 2 and Figure 2 show that the convergence rate of iterative algorithm (7) is faster than that of iterative algorithm (5) of Ke and Ma. In fact, according to Table 2 and Figure 2, Algorithm (41) approaches 0 from the third term onwards, whereas Algorithm (42) does not approach 0 even by the fiftieth term.

5. Conclusions

We introduced a new composite iterative algorithm for finding a common element of the set of solutions of a general system of variational inequalities, the set of solutions of a generalized mixed equilibrium problem and the set of common fixed points of a finite family of nonexpansive mappings in Hilbert spaces. Then, we proved that the sequence generated by the algorithm converges strongly to a common element of the solution sets of these problems. Moreover, we deduced some consequences from our main result. Finally, we provided a numerical example to illustrate the main result and another one to compare our algorithm with algorithm (5), which shows that the convergence rate of our iterative algorithm is faster than that of the iterative algorithm (5) of Ke and Ma.

Author Contributions

Conceptualization, M.Y. and S.H.S.; methodology, M.Y.; software, M.Y.; validation, S.H.S.; formal analysis, M.Y.; investigation, M.Y.; resources, S.H.S.; data curation, S.H.S.; writing—original draft preparation, M.Y.; writing—review and editing, S.H.S.; visualization, M.Y.; project administration, S.H.S.; funding acquisition, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the referees for their valuable and useful comments. A part of this research was carried out while the second author was visiting the University of Alberta.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, J.W.; Yao, J.C. A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12, 1401–1432. [Google Scholar] [CrossRef]
  2. Anh, P.N.; An, L.T.H.; Tao, P.D. Yosida approximation methods for generalized equilibrium problems. J. Convex Anal. 2020, 27, 959–977. [Google Scholar]
  3. Dadashi, V.; Postolache, M. Hybrid Proximal Point Algorithm and Applications to Equilibrium Problems and Convex Programming. J. Optim. Theory Appl. 2017, 174, 518–529. [Google Scholar] [CrossRef]
  4. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space. Optimization 2020, 70, 387–412. [Google Scholar] [CrossRef]
  5. Muu, L.D.; Le, X.T. On fixed point approach to equilibrium problem. J. Fixed Point Theory Appl. 2021, 23, 50. [Google Scholar] [CrossRef]
  6. Razani, A.; Yazdi, M. Viscosity approximation method for equilibrium and fixed point problems. Fixed Point Theory 2013, 14, 455–472. [Google Scholar]
  7. Razani, A.; Yazdi, M. A New Iterative Method for Generalized Equilibrium and Fixed Point Problems of Nonexpansive Mappings. Bull. Malays. Math. Sci. Soc. 2012, 35, 1049–1061. [Google Scholar]
  8. Vinh, N.T. A new projection algorithm for solving constrained equilibrium problems in Hilbert spaces. Optimization 2019, 68, 1447–1470. [Google Scholar] [CrossRef]
  9. Van Quy, N. An algorithm for a class of bilevel split equilibrium problems: Application to a differentiated Nash-Cournot model with environmental constraints. Optimization 2019, 68, 753–771. [Google Scholar] [CrossRef]
  10. Van Hung, N.; O’Regan, D. Bilevel equilibrium problems with lower and upper bounds in locally convex Hausdorff topological vector spaces. Topol. Appl. 2020, 269, 106939. [Google Scholar] [CrossRef]
  11. Wang, S.; Hu, C.; Chia, G. Strong convergence of a new composite iterative method for equilibrium problems and fixed point problems. Appl. Math. Comput. 2010, 215, 3891–3898. [Google Scholar] [CrossRef]
  12. Yao, Y.; Postolache, M.; Yao, C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics. 2019, 7, 61. [Google Scholar] [CrossRef]
  13. Yazdi, M. New iterative methods for equilibrium and constrained convex minimization problems. Asian-Eur. J. Math. 2019, 12, 1950042. [Google Scholar] [CrossRef]
  14. Yazdi, M.; Hashemi Sababe, S. A new extragradient method for equilibrium, split feasibility and fixed point problems. J. Nonlinear Convex Anal. 2021, 22, 759–773. [Google Scholar]
  15. Yazdi, M.; Hashemi Sababe, S. Strong convergence theorem for a general system of variational inequalities, equilibrium problems, and fixed point problems. Fixed Point Theory 2022, 23, 763–778. [Google Scholar]
  16. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
  17. Xu, H.K. An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116, 659–678. [Google Scholar] [CrossRef]
  18. Marino, G.; Xu, H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52. [Google Scholar] [CrossRef]
  19. Yamada, I.; Butnariu, D.; Censor, Y.; Reich, S. The hybrid steepest descent method for the variational inequality problems over the intersection of fixed points sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Application; Elsevier: Amsterdam, The Netherlands, 2001. [Google Scholar]
  20. Tian, M. A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73, 689–694. [Google Scholar] [CrossRef]
  21. Zhang, C.; Yang, C. A new explicit iterative algorithm for solving a class of variational inequalities over the common fixed points set of a finite family of nonexpansive mappings. Fixed Point Theory Appl. 2014, 2014, 60. [Google Scholar] [CrossRef]
  22. Jeong, J. Generalized viscosity approximation methods for mixed equilibrium problems and fixed point problems. Appl. Math. Comput. 2016, 283, 168–180. [Google Scholar] [CrossRef]
  23. Ceng, L.C.; Wang, C.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390. [Google Scholar] [CrossRef]
  24. Schneider, C. Analysis of the linearly implicit mid-point rule for differential-algebraic equations. Electron. Trans. Numer. Anal. 1993, 1, 1–10. [Google Scholar]
  25. Somali, S. Implicit midpoint rule to the nonlinear degenerate boundary value problems. Int. J. Comput. Math. 2002, 79, 327–332. [Google Scholar] [CrossRef]
  26. Van Veldhuizen, M. Asymptotic expansions of the global error for the implicit midpoint rule (stiff case). Computing 1984, 33, 185–192. [Google Scholar] [CrossRef]
  27. Bader, G.; Deuflhard, P. A semi-implicit mid-point rule for stiff systems of ordinary differential equations. Numer. Math. 1983, 41, 373–398. [Google Scholar] [CrossRef]
  28. Deuflhard, P. Recent progress in extrapolation methods for ordinary differential equations. SIAM Rev. 1985, 27, 505–535. [Google Scholar] [CrossRef]
  29. Ke, Y.; Ma, C. The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 190. [Google Scholar] [CrossRef]
  30. Suzuki, T. Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305, 227–239. [Google Scholar] [CrossRef]
  31. Xu, H.K.; Alghamdi, M.A.; Shahzad, N. The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 41. [Google Scholar] [CrossRef]
  32. Cai, G.; Shehu, Y.; Iyiola, O.S. The modified viscosity implicit rules for variational inequality problems and fixed point problems of nonexpansive mappings in Hilbert spaces. RACSAM 2019, 113, 3545–3562. [Google Scholar] [CrossRef]
  33. Tian, M.; Liu, L. Iterative algorithms based on the viscosity approximation method for equilibrium and constrained convex minimization problems. Fixed Point Theory Appl. 2012, 2012, 201. [Google Scholar]
  34. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  35. Ceng, L.C.; Yao, J.C. A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214, 186–201. [Google Scholar] [CrossRef]
  36. Ceng, L.C.; Yao, J.C. A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72, 1922–1937. [Google Scholar] [CrossRef]
  37. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; Volume 28. [Google Scholar]
  38. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201. [Google Scholar] [CrossRef]
  39. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360. [Google Scholar] [CrossRef]
Figure 1. The convergence of { x n } with different initial values x 1 .
Figure 2. Comparison between Algorithm (41) and Algorithm (42).
Table 1. The values of the sequence { x n } for Algorithm (40).
n | x_n (x_1 = −98) | x_n (x_1 = 100)
1 | −98 | 100
2 | 2.89 | 16.95
3 | 0.048973 | 0.2281
4 | 0.00020433 | 0.0011181
5 | 6.8356 × 10^−7 | 3.7446 × 10^−6
6 | 2.0617 × 10^−9 | 1.1294 × 10^−8
⋮ | ⋮ | ⋮
25 | 2.1386 × 10^−59 | 1.1716 × 10^−58
26 | 4.6134 × 10^−62 | 2.5273 × 10^−61
27 | 9.9195 × 10^−65 | 5.4341 × 10^−64
28 | 2.1264 × 10^−67 | 1.1649 × 10^−66
29 | 4.5454 × 10^−70 | 2.49 × 10^−69
30 | 9.6908 × 10^−73 | 5.3088 × 10^−72
Table 2. Comparison between Algorithm (41) and Algorithm (42).
n | x_n for (41) | x_n for (42)
1 | 18 | 18
2 | 0.864 | 17.8
3 | 0.012649 | 17.601
4 | 0.00010115 | 17.405
5 | 6.7666 × 10^−7 | 17.211
⋮ | ⋮ | ⋮
46 | 2.9855 × 10^−103 | 10.873
47 | 1.2388 × 10^−105 | 10.752
48 | 5.1356 × 10^−108 | 10.633
49 | 2.1269 × 10^−110 | 10.514
50 | 8.8005 × 10^−113 | 10.397