Article

Strong Convergence of a New Generalized Viscosity Implicit Rule and Some Applications in Hilbert Space

by Mihai Postolache 1,2,3,*,†,‡, Ashish Nandal 4,‡ and Renu Chugh 5,‡

1 Department of General Education, China Medical University, Taichung 40402, Taiwan
2 Gh. Mihoc-C. Iacob Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, 050711 Bucharest, Romania
3 Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania
4 Department of Mathematics, Pt NRS Government College, Rohtak 124001, India
5 Department of Mathematics, Maharshi Dayanand University, Rohtak 124001, India
* Author to whom correspondence should be addressed.
† Current address: Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania.
‡ These authors contributed equally to this work.
Mathematics 2019, 7(9), 773; https://doi.org/10.3390/math7090773
Submission received: 10 July 2019 / Revised: 13 August 2019 / Accepted: 16 August 2019 / Published: 22 August 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract:
In this paper, based on the very recent work by Nandal et al. (Nandal, A.; Chugh, R.; Postolache, M. Iteration process for fixed point problems and zeros of maximal monotone operators. Symmetry 2019, 11, 655), we propose a new generalized viscosity implicit rule for finding a common element of the fixed point sets of a finite family of nonexpansive mappings and the sets of zeros of maximal monotone operators. Utilizing the main result, we first propose and investigate a new general system of generalized equilibrium problems, which includes several equilibrium and variational inequality problems as special cases, and then we derive an implicit iterative method to solve the constrained multiple-set split convex feasibility problem. We further combine the forward-backward splitting method and the generalized viscosity implicit rule for solving the monotone inclusion problem. Moreover, we apply the main result to solve the convex minimization problem.

1. Introduction

A problem which appears very often in different areas of mathematics and the physical sciences consists of finding an element in the intersection of closed and convex subsets of a Hilbert space. This problem is generally known as the convex feasibility problem (CFP). Applications of the CFP lie at the heart of various disciplines such as sensor networking [1], radiation therapy treatment planning [2], computerized tomography [3], and image restoration [4].
The multiple-sets split feasibility problem (MSSFP) is stated as finding a point belonging to a family of closed convex subsets in one space whose image under a bounded linear transformation belongs to another family of closed convex subsets in the image space. It generalizes the CFP and the split feasibility problem (SFP). The MSSFP was first introduced by Censor et al. [5] to model the inverse problem of intensity-modulated radiation therapy. Recently, Buong [6] studied several iterative algorithms for solving the MSSFP, which solve a certain variational inequality. Masad and Reich [7] generalized the MSSFP to the constrained multiple-set split convex feasibility problem (CMSSCFP), in which several bounded linear operators are involved. For other new results in this direction, we refer the reader to the works of Yao et al. [8,9,10,11,12,13,14].
Equilibrium problem (EP) theory includes many important mathematical problems, for instance, variational inequality problems, optimization problems, saddle point problems, Nash equilibrium problems and fixed point problems [15,16,17]. The equilibrium problem has been generalized to several interesting and important problems. In 2010, Ceng and Yao [18] introduced and studied a system of generalized equilibrium problems (SGEP). Several iterative methods have been proposed by a number of authors to solve the SGEP (see, e.g., [19,20,21,22,23]).
The monotone inclusion problem with a multivalued maximal monotone mapping and an inverse strongly monotone mapping is among the most important and extensively studied problems of recent years. It includes various important problems such as the convex minimization problem, the variational inequality problem, the linear inverse problem and the split feasibility problem. One of the most popular methods for solving this inclusion problem is the forward-backward splitting method [24,25,26,27].
In a very recent paper [28], Nandal, Chugh and Postolache introduced an iterative algorithm to study the fixed point problem of nonexpansive mappings and the zero point problem of maximal monotone mappings. The applicability of the algorithm was then shown by solving different kinds of problems; for instance, general systems of variational inequalities, the convex feasibility problem, the zero point problem of inverse strongly monotone and maximal monotone mappings, and the split feasibility problem and its related problems were solved under weaker control conditions on the parameters.
On the other hand, the implicit midpoint rule, the variational iteration method and the Taylor series method are powerful methods for solving various important kinds of differential equations. The variational iteration method was first introduced by He [29,30,31]. In [32], Khuri and Sayfy established a relation between the variational iteration method and fixed point theory. Very recently, He and Ji [33] suggested a simple approach using Taylor series technology to solve the Lane-Emden equation approximately. The major method for solving ODEs (in particular, stiff equations) is the implicit midpoint rule (see [34,35,36,37,38,39]). For instance, consider the initial value problem for the time dependent ODE $x'(t) = g(x(t))$ with initial condition $x(0) = x_0$. It is known that if $g: \mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous and uniformly smooth, then the implicit method given by the implicit midpoint rule:
$$x_{n+1} = x_n + h\, g\!\left(\frac{x_n + x_{n+1}}{2}\right),$$
converges to the solution of the above mentioned initial value problem as $h \to 0$, uniformly over $t \in [0, \tilde{t})$ for any fixed $\tilde{t} > 0$. If we take $g = I - T$, where $T$ is a nonlinear mapping, then the equilibrium problem associated with the above mentioned initial value problem reduces to the fixed point problem $Tx = x$. Therefore, Alghamdi et al. [40] established the implicit midpoint rule for nonexpansive mappings in Hilbert space and also proved that their iterative method can be applied to find a periodic solution of a nonlinear time dependent evolution equation. In [41], Xu et al. combined the viscosity approximation method and the implicit midpoint rule to approximate a fixed point of a nonexpansive mapping. Recently, Ke and Ma [42] established the generalized viscosity implicit rule of nonexpansive mappings by replacing the midpoint with an arbitrary point of the interval $[x_n, x_{n+1}]$.
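For illustration, here is a minimal numerical sketch of the implicit midpoint rule (our example, not taken from the cited references); the helper name implicit_midpoint is ours, and the implicit equation is solved at each step by a simple inner fixed-point iteration, which converges for small $h$ when $g$ is Lipschitz continuous:

```python
import numpy as np

def implicit_midpoint(g, x0, h, n_steps, inner_iters=50):
    """Implicit midpoint rule x_{n+1} = x_n + h*g((x_n + x_{n+1})/2),
    with the implicit equation solved by fixed-point iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x_next = x + h * g(x)                 # explicit Euler predictor
        for _ in range(inner_iters):          # fixed-point corrector
            x_next = x + h * g((x + x_next) / 2.0)
        x = x_next
    return x

# Example: x'(t) = -x(t), x(0) = 1, whose exact solution is exp(-t).
print(implicit_midpoint(lambda x: -x, x0=[1.0], h=0.1, n_steps=10))  # ~exp(-1)
```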
Inspired by the above work, we introduce and study a new generalized viscosity implicit iterative rule, based on Nandal, Chugh and Postolache's [28] iterative method, for approximating a common element of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. Then, we introduce and analyze a new general system of generalized equilibrium problems and apply our main result to solve this problem. Moreover, we utilize our main result to solve the constrained multiple-set split convex feasibility problem, the monotone inclusion problem and the convex minimization problem.

2. Preliminaries

In this paper, H is assumed to be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Here, $\operatorname{Fix}(S)$ is used to denote the fixed point set of a mapping $S$. The strong and weak convergence of a sequence $\{x_n\}$ to $x$ are denoted by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. Assume that $D \subseteq H$ is a closed convex set; then the nearest point projection (metric projection) from H onto D is denoted by $P_D$, that is, for each $u \in H$, $\|u - P_D u\| \le \|u - v\|$ for all $v \in D$. In addition, for given $u \in H$ and $w \in D$,
$$w = P_D u \iff \langle u - w,\ v - w\rangle \le 0, \quad \forall v \in D. \tag{1}$$
The following is a well-known identity in Hilbert space:
$$\|\lambda r + (1 - \lambda)s\|^2 = \lambda\|r\|^2 + (1 - \lambda)\|s\|^2 - \lambda(1 - \lambda)\|r - s\|^2, \tag{2}$$
for all $r, s \in H$ and $\lambda \in [0, 1]$.
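As a quick numerical check of the projection characterization in Equation (1) (a toy illustration of ours, with $D$ the closed unit ball in $\mathbb{R}^2$; the helper project_ball is hypothetical, not from the paper):

```python
import numpy as np

def project_ball(u, radius=1.0):
    """Metric projection P_D onto the closed ball D = {v : ||v|| <= radius}."""
    norm = np.linalg.norm(u)
    return u if norm <= radius else radius * u / norm

rng = np.random.default_rng(0)
u = np.array([3.0, 4.0])
w = project_ball(u)                                   # w = P_D(u) = (0.6, 0.8)
for _ in range(5):
    v = rng.normal(size=2)
    v *= rng.uniform(0.0, 1.0) / np.linalg.norm(v)    # random point of D
    assert np.dot(u - w, v - w) <= 1e-12              # Equation (1) holds
print("projection:", w)
```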
Next, we recall the definitions of some important operators, which we use below.
Definition 1.
An operator $G: H \to H$ is said to be
(1) nonexpansive if $\|Gr - Gs\| \le \|r - s\|$ for all $r, s \in H$;
(2) a contraction if there exists a constant $k \in (0, 1)$ such that $\|Gr - Gs\| \le k\|r - s\|$ for all $r, s \in H$;
(3) $\alpha$-averaged if there exist a constant $\alpha \in (0, 1)$ and a nonexpansive mapping $S$ such that $G = (1 - \alpha)I + \alpha S$;
(4) $\theta$-inverse strongly monotone (for short, $\theta$-ism) if there exists $\theta > 0$ such that
$$\langle Gr - Gs,\ r - s\rangle \ge \theta\|Gr - Gs\|^2, \quad \forall r, s \in H;$$
(5) firmly nonexpansive if $\langle Gr - Gs,\ r - s\rangle \ge \|Gr - Gs\|^2$ for all $r, s \in H$.
Note that the metric projection $P_D$ is an example of a firmly nonexpansive mapping and, further, every firmly nonexpansive mapping is $(1/2)$-averaged in Hilbert space.
A set valued operator $B: H \to 2^H$ is called maximal monotone if $B$ is monotone, i.e., $\langle r_1 - s_1,\ r - s\rangle \ge 0$ for all $r, s \in \operatorname{dom}(B)$, $r_1 \in Br$ and $s_1 \in Bs$, and there does not exist any other monotone operator whose graph properly includes the graph of $B$. Further, a maximal monotone operator $B$ and a constant $\mu > 0$ generate an operator
$$J_\mu^B := (I + \mu B)^{-1}: H \to H,$$
which is known as the resolvent of $B$. It is well known [43] that, if $B: H \to 2^H$ is maximal monotone and $\mu > 0$, then $J_\mu^B$ is firmly nonexpansive and $\operatorname{Fix}(J_\mu^B) = B^{-1}0 = \{r \in H : 0 \in Br\}$.
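As a concrete special case (our illustration, not from [43]): for $H = \mathbb{R}$ and $B = \partial|\cdot|$, the resolvent $J_\mu^B$ works out to the soft-thresholding operator, and its fixed point set is exactly $B^{-1}0 = \{0\}$:

```python
import numpy as np

def resolvent_abs(x, mu):
    """J_mu^B = (I + mu*B)^(-1) for B = d|.| is soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

x = np.array([-2.0, 0.0, 0.5])
print(resolvent_abs(x, mu=1.0))   # [-1.  0.  0.]; only x = 0 is left fixed
```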
Next, we consider a sequence $\{V_n\}$ of nonexpansive mappings, which is called a strongly nonexpansive sequence [44] if
$$x_n - y_n - (V_n x_n - V_n y_n) \to 0$$
whenever $\{x_n\}, \{y_n\} \subset H$ are such that $\{x_n - y_n\}$ is bounded and $\|x_n - y_n\| - \|V_n x_n - V_n y_n\| \to 0$. In addition, note that the definition of a strongly nonexpansive mapping [45] can be obtained by taking $V_n = V$ for all $n \in \mathbb{N}$.
Next, we collect several lemmas, which we use in our results.
Lemma 1
(Lemma 1, [28]). Let $h: H \to H$ be a β-ism operator on H. Then, $I - 2\beta h$ is nonexpansive.
Lemma 2
(Lemma 2.5, [46]). Let $\{u_n\} \subset [0, \infty)$, $\{\omega_n\} \subset [0, 1]$ and $\{v_n\}$ be three real sequences satisfying
$$u_{n+1} \le (1 - \omega_n)u_n + \omega_n v_n, \quad n \ge 0.$$
Suppose that $\sum_{n=0}^\infty \omega_n = \infty$ and $\limsup_{n\to\infty} v_n \le 0$. Then, $\lim_{n\to\infty} u_n = 0$.
Lemma 3
(Corollary 3.13, [44]). Let $D \subseteq H$ be a nonempty set and suppose that $\{\sigma_n\} \subset [0, 1]$ satisfies $\liminf_{n\to\infty}\sigma_n > 0$. Then, the sequence $\{V_n\}$ of mappings of D into H defined by $V_n = \sigma_n I + (1 - \sigma_n)G_n$ is a strongly nonexpansive sequence, where $G_n: D \to H$ is a nonexpansive mapping for each $n \in \mathbb{N}$ and $I$ is the identity mapping on D.
Lemma 4
(Example 3.2, [44]). In a Hilbert space, every sequence of firmly nonexpansive mappings is a strongly nonexpansive sequence.
Lemma 5
(Theorem 3.4, [44]). Let $D, K \subseteq H$ be nonempty sets. Suppose $\{G_n\}$ and $\{V_n\}$ are two strongly nonexpansive sequences, where $G_n: D \to H$ and $V_n: K \to H$ are such that $V_n(K) \subseteq D$ for each $n \in \mathbb{N}$. Then, $\{G_n V_n\}$ is a strongly nonexpansive sequence.
Lemma 6
(Lemma 2.1, [45]). If $\{G_i : 1 \le i \le p\}$ are strongly nonexpansive mappings and $\bigcap_{i=1}^p \operatorname{Fix}(G_i) \ne \emptyset$, then $\bigcap_{i=1}^p \operatorname{Fix}(G_i) = \operatorname{Fix}(G_p G_{p-1} \cdots G_2 G_1)$.
Lemma 7
(Propositions 2.1 and 2.2, [47]). The composition of finitely many averaged mappings is averaged. That is, if $\{G_i : 1 \le i \le p\}$ are averaged mappings, then $G_1 \cdots G_p$ is also averaged. Furthermore, if $\bigcap_{i=1}^p \operatorname{Fix}(G_i) \ne \emptyset$, then $\bigcap_{i=1}^p \operatorname{Fix}(G_i) = \operatorname{Fix}(G_p G_{p-1} \cdots G_2 G_1)$.
Lemma 8
(Theorem 10.4, [48]). Let $D \subseteq H$ be a nonempty closed convex set and $G: D \to D$ be a nonexpansive mapping. Then, $I - G$ is demiclosed at 0; that is, if $\{x_n\} \subset D$ with $x_n \rightharpoonup z$ and $(I - G)x_n \to 0$, then $z \in \operatorname{Fix}(G)$.
Lemma 9
(The Resolvent Identity; Remark 1.3.48, [49]). For each $\mu, \sigma > 0$,
$$J_\mu^B x = J_\sigma^B\!\left(\frac{\sigma}{\mu}x + \left(1 - \frac{\sigma}{\mu}\right)J_\mu^B x\right).$$
Note that Lemma 9 uses the homotopy technique, which is also widely used in the homotopy perturbation method [50,51].
Lemma 10
(Lemma 3.1, [52]). Let $\{h_n\}$ be a sequence of real numbers such that there exists a subsequence $\{n_i\}$ of $\{n\}$ with $h_{n_i} < h_{n_i+1}$ for all $i \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_q\} \subset \mathbb{N}$ such that $m_q \to \infty$ and the following properties are satisfied by all (sufficiently large) numbers $q \in \mathbb{N}$:
$$h_{m_q} \le h_{m_q+1}, \qquad h_q \le h_{m_q+1}.$$
In fact,
$$m_q = \max\{j \le q : h_j < h_{j+1}\}.$$

3. Main Results

Theorem 1.
Let H be a real Hilbert space. Let $\{T_i\}_{i=1}^m$ and V be nonexpansive self-mappings on H and $B_1, B_2: H \to 2^H$ be maximal monotone mappings such that
$$\Gamma := \bigcap_{i=1}^m \operatorname{Fix}(T_i) \cap \operatorname{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 \ne \emptyset.$$
Let $g: H \to H$ be a contraction with coefficient $k \in (0, 1)$ and let $\{x_n\}$ be generated by the generalized viscosity implicit rule defined by $x_0 \in H$ and
$$\begin{cases}
z_n = \lambda_n x_n + (1 - \lambda_n)x_{n+1},\\
y_n = \alpha_n g(x_n) + (1 - \alpha_n) J_{\mu_n}^{B_2} V_n z_n,\\
x_{n+1} = J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_2^n T_1^n y_n,
\end{cases} \tag{3}$$
for all $n \ge 0$, where $V_n = (1 - \beta_n)I + \beta_n V$, $T_i^n = (1 - \gamma_n^i)I + \gamma_n^i T_i$ for $i = 1, 2, \ldots, m$ and $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1)$. Suppose that $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n^i\}$ are sequences in $(0, 1)$ and $\{\rho_n\}$ and $\{\mu_n\}$ are sequences of positive numbers satisfying the following conditions:
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^\infty \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$;
(iii) $0 < \liminf_{n\to\infty} \gamma_n^i \le \limsup_{n\to\infty} \gamma_n^i < 1$, for all $i = 1, 2, \ldots, m$; and
(iv) for all sufficiently large n, $\min\{\rho_n, \mu_n\} > \varepsilon$ for some $\varepsilon > 0$.
Then, the sequence $\{x_n\}$ converges strongly to $x^* \in \Gamma$, where $x^*$ is the unique fixed point of the contraction $P_\Gamma g$.
Proof. 
First, we show that $\{x_n\}$ is bounded. Let $x^* \in \Gamma$. Set $W_n = J_{\rho_n}^{B_1} T_m^n \cdots T_2^n T_1^n$ and $S_n = J_{\mu_n}^{B_2} V_n$. Clearly, $W_n$ and $S_n$ are nonexpansive mappings for each $n \ge 0$. By Lemmas 3 and 4, for each $n \ge 0$, $W_n$ and $S_n$ are compositions of strongly nonexpansive mappings. Therefore, from Lemma 6, we get
$$\Gamma = \bigcap_{i=1}^m \operatorname{Fix}(T_i) \cap \operatorname{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 = \bigcap_{i=1}^m \operatorname{Fix}(T_i^n) \cap \operatorname{Fix}(V_n) \cap \operatorname{Fix}(J_{\rho_n}^{B_1}) \cap \operatorname{Fix}(J_{\mu_n}^{B_2}) = \operatorname{Fix}(W_n) \cap \operatorname{Fix}(S_n).$$
From Equation (3), we have
$$\begin{aligned}
\|x_{n+1} - x^*\| &= \|W_n y_n - x^*\| \le \|y_n - x^*\|\\
&= \|\alpha_n (g(x_n) - g(x^*)) + \alpha_n (g(x^*) - x^*) + (1 - \alpha_n)(S_n z_n - x^*)\|\\
&\le \alpha_n \|g(x_n) - g(x^*)\| + \alpha_n \|g(x^*) - x^*\| + (1 - \alpha_n)\|S_n z_n - x^*\|\\
&\le \alpha_n k \|x_n - x^*\| + \alpha_n \|g(x^*) - x^*\| + (1 - \alpha_n)\|z_n - x^*\|\\
&\le \alpha_n k \|x_n - x^*\| + \alpha_n \|g(x^*) - x^*\| + (1 - \alpha_n)\|\lambda_n (x_n - x^*) + (1 - \lambda_n)(x_{n+1} - x^*)\|\\
&\le \alpha_n k \|x_n - x^*\| + \alpha_n \|g(x^*) - x^*\| + (1 - \alpha_n)\lambda_n \|x_n - x^*\| + (1 - \alpha_n)(1 - \lambda_n)\|x_{n+1} - x^*\|,
\end{aligned}$$
which implies that
$$(\lambda_n + \alpha_n (1 - \lambda_n))\|x_{n+1} - x^*\| \le (\alpha_n k + \lambda_n - \alpha_n \lambda_n)\|x_n - x^*\| + \alpha_n \|g(x^*) - x^*\|.$$
Therefore, we obtain
$$\begin{aligned}
\|x_{n+1} - x^*\| &\le \frac{\alpha_n k + \lambda_n - \alpha_n \lambda_n}{\lambda_n + \alpha_n (1 - \lambda_n)}\|x_n - x^*\| + \frac{\alpha_n}{\lambda_n + \alpha_n (1 - \lambda_n)}\|g(x^*) - x^*\|\\
&= \left(1 - \frac{\alpha_n (1 - k)}{\lambda_n + \alpha_n (1 - \lambda_n)}\right)\|x_n - x^*\| + \frac{\alpha_n (1 - k)}{\lambda_n + \alpha_n (1 - \lambda_n)} \cdot \frac{1}{1 - k}\|g(x^*) - x^*\|.
\end{aligned}$$
Thus, we have
$$\|x_{n+1} - x^*\| \le \max\left\{\|x_n - x^*\|,\ \frac{1}{1 - k}\|g(x^*) - x^*\|\right\}.$$
By induction, we have
$$\|x_{n+1} - x^*\| \le \max\left\{\|x_0 - x^*\|,\ \frac{1}{1 - k}\|g(x^*) - x^*\|\right\}.$$
This shows that $\{x_n\}$ is bounded, and so are $\{g(x_n)\}$ and $\{y_n\}$.
From Equations (2) and (3), we have
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \|y_n - x^*\|^2 = \|\alpha_n(g(x_n) - x^*) + (1 - \alpha_n)(S_n z_n - x^*)\|^2\\
&\le \alpha_n\|g(x_n) - x^*\|^2 + (1 - \alpha_n)\|S_n z_n - x^*\|^2\\
&\le \alpha_n\|g(x_n) - x^*\|^2 + (1 - \alpha_n)\|z_n - x^*\|^2\\
&= \alpha_n\|g(x_n) - x^*\|^2 + (1 - \alpha_n)\|\lambda_n(x_n - x^*) + (1 - \lambda_n)(x_{n+1} - x^*)\|^2\\
&= \alpha_n\|g(x_n) - x^*\|^2 + (1 - \alpha_n)\lambda_n\|x_n - x^*\|^2 + (1 - \alpha_n)(1 - \lambda_n)\|x_{n+1} - x^*\|^2 - (1 - \alpha_n)\lambda_n(1 - \lambda_n)\|x_{n+1} - x_n\|^2,
\end{aligned}$$
which yields
$$(1 - \alpha_n)\lambda_n(1 - \lambda_n)\|x_{n+1} - x_n\|^2 \le \alpha_n\left(\|g(x_n) - x^*\|^2 - \|x_{n+1} - x^*\|^2\right) + (1 - \alpha_n)\lambda_n\left(\|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2\right). \tag{4}$$
Since the fixed point set of a nonexpansive mapping is closed and convex, the set $\Gamma$ is a nonempty closed and convex subset of H. Hence, the metric projection $P_\Gamma$ is well defined. In addition, since $P_\Gamma g: H \to H$ is a contraction mapping, there exists $x^* \in \Gamma$ such that $x^* = P_\Gamma g(x^*)$.
Now, we prove that $x_n \to x^*$. For this purpose, we examine two possible cases.
Case 1. Assume that there exists $n_0 \in \mathbb{N}$ such that the real sequence $\{\|x_n - x^*\|\}$ is nonincreasing for all $n \ge n_0$. Since $\{\|x_n - x^*\|\}$ is bounded, it is convergent. From Equation (4), we have
$$\|x_{n+1} - x_n\| \to 0 \quad \text{as } n \to \infty. \tag{5}$$
From the nonexpansiveness of $W_n$, we have
$$\begin{aligned}
0 &\le \|y_n - x^*\| - \|W_n y_n - x^*\|\\
&\le \alpha_n\|g(x_n) - x^*\| + (1 - \alpha_n)\|S_n z_n - x^*\| - \|x_{n+1} - x^*\|\\
&\le \alpha_n\|g(x_n) - x^*\| + \|z_n - x^*\| - \|x_{n+1} - x^*\|\\
&\le \alpha_n\|g(x_n) - x^*\| + \lambda_n\|x_n - x^*\| + (1 - \lambda_n)\|x_{n+1} - x^*\| - \|x_{n+1} - x^*\|\\
&= \alpha_n\|g(x_n) - x^*\| + \lambda_n\left(\|x_n - x^*\| - \|x_{n+1} - x^*\|\right). 
\end{aligned} \tag{6}$$
Since $\{\|x_n - x^*\|\}$ is convergent, $\alpha_n \to 0$ and $\{\lambda_n\}$ is bounded, we have
$$\|y_n - x^*\| - \|W_n y_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
By Lemma 5, $\{W_n\}$ is a strongly nonexpansive sequence; therefore, we have
$$\|y_n - W_n y_n\| \to 0 \quad \text{as } n \to \infty. \tag{7}$$
From the definition of $x_n$, we have
$$\begin{aligned}
\|x_n - S_n x_n\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - y_n\| + \|y_n - S_n x_n\|\\
&\le \|x_n - x_{n+1}\| + \|W_n y_n - y_n\| + \|\alpha_n(g(x_n) - S_n x_n) + (1 - \alpha_n)(S_n z_n - S_n x_n)\|\\
&\le \|x_n - x_{n+1}\| + \|W_n y_n - y_n\| + \alpha_n\|g(x_n) - S_n x_n\| + (1 - \alpha_n)\|z_n - x_n\|\\
&= \|x_n - x_{n+1}\| + \|W_n y_n - y_n\| + \alpha_n\|g(x_n) - S_n x_n\| + (1 - \alpha_n)(1 - \lambda_n)\|x_{n+1} - x_n\|.
\end{aligned}$$
Thus, by Equations (5) and (7), and $\alpha_n \to 0$, we obtain
$$\|x_n - S_n x_n\| \to 0 \quad \text{as } n \to \infty. \tag{8}$$
Note that
$$\|S_n z_n - x_n\| \le \|S_n z_n - S_n x_n\| + \|S_n x_n - x_n\| \le \|z_n - x_n\| + \|S_n x_n - x_n\| = (1 - \lambda_n)\|x_{n+1} - x_n\| + \|S_n x_n - x_n\|.$$
From Equations (5) and (8), it follows that
$$\|x_n - S_n z_n\| \to 0 \quad \text{as } n \to \infty. \tag{9}$$
Moreover, we have
$$\|y_n - x_n\| \le \alpha_n\|g(x_n) - x_n\| + (1 - \alpha_n)\|S_n z_n - x_n\| \to 0 \quad \text{as } n \to \infty. \tag{10}$$
Next, we show that
$$\|x_n - W_n x_n\| \to 0 \quad \text{as } n \to \infty. \tag{11}$$
From the definition of $x_n$, we have
$$\|x_n - W_n x_n\| \le \|x_n - W_n y_n\| + \|W_n y_n - W_n x_n\| \le \|x_n - x_{n+1}\| + \|y_n - x_n\|.$$
In view of Equations (5) and (10), we obtain Equation (11). Moreover, we have
$$\|x_{n+1} - x^*\| = \|W_n y_n - x^*\| \le \|W_n y_n - W_n x_n\| + \|W_n x_n - x^*\| \le \|y_n - x_n\| + \|W_n x_n - x^*\|. \tag{12}$$
From the nonexpansiveness of $T_i^n T_{i-1}^n \cdots T_1^n$ for each $i = 1, 2, \ldots, m$ and Equation (12), we have
$$0 \le \|x_n - x^*\| - \|T_i^n T_{i-1}^n \cdots T_1^n x_n - x^*\| \le \|x_n - x^*\| - \|W_n x_n - x^*\| \le \|x_n - x^*\| - \|x_{n+1} - x^*\| + \|y_n - x_n\|. \tag{13}$$
In view of the fact that $\{\|x_n - x^*\|\}$ is convergent and using Equation (10), we obtain
$$\|x_n - x^*\| - \|T_i^n T_{i-1}^n \cdots T_1^n x_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
In addition, by Lemma 5, $\{T_i^n T_{i-1}^n \cdots T_1^n\}$ is a strongly nonexpansive sequence for each $i = 1, 2, \ldots, m$. Therefore, we have
$$\|x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n\| \to 0 \quad \text{as } n \to \infty, \text{ for each } i = 1, 2, \ldots, m. \tag{14}$$
Now, consider
$$\|x_n - J_{\rho_n}^{B_1} x_n\| \le \|x_n - J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_1^n x_n\| + \|J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_1^n x_n - J_{\rho_n}^{B_1} x_n\| \le \|x_n - W_n x_n\| + \|T_m^n T_{m-1}^n \cdots T_1^n x_n - x_n\|.$$
It follows from Equations (11) and (14) that
$$\|x_n - J_{\rho_n}^{B_1} x_n\| \to 0 \quad \text{as } n \to \infty. \tag{15}$$
Choose a fixed number $s$ with $\varepsilon > s > 0$. Using Lemma 9, for all sufficiently large $n$, we have
$$\begin{aligned}
\|x_n - J_s^{B_1} x_n\| &\le \|x_n - J_{\rho_n}^{B_1} x_n\| + \|J_{\rho_n}^{B_1} x_n - J_s^{B_1} x_n\|\\
&= \|x_n - J_{\rho_n}^{B_1} x_n\| + \left\|J_s^{B_1}\!\left(\frac{s}{\rho_n}x_n + \left(1 - \frac{s}{\rho_n}\right)J_{\rho_n}^{B_1} x_n\right) - J_s^{B_1} x_n\right\|\\
&\le \|x_n - J_{\rho_n}^{B_1} x_n\| + \left\|\frac{s}{\rho_n}x_n + \left(1 - \frac{s}{\rho_n}\right)J_{\rho_n}^{B_1} x_n - x_n\right\|\\
&= \|x_n - J_{\rho_n}^{B_1} x_n\| + \left(1 - \frac{s}{\rho_n}\right)\|J_{\rho_n}^{B_1} x_n - x_n\|\\
&\le 2\|x_n - J_{\rho_n}^{B_1} x_n\|.
\end{aligned}$$
In view of Equation (15), we obtain
$$\|x_n - J_s^{B_1} x_n\| \to 0 \quad \text{as } n \to \infty. \tag{16}$$
Next, we show that
$$\|x_n - T_i^n x_n\| \to 0 \quad \text{as } n \to \infty, \text{ for each } i = 1, 2, \ldots, m. \tag{17}$$
Clearly, Equation (17) holds for $i = 1$ by Equation (14). Now, for $i = 2, \ldots, m$, we see that
$$\|x_n - T_i^n x_n\| \le \|x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n\| + \|T_i^n T_{i-1}^n \cdots T_1^n x_n - T_i^n x_n\| \le \|x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n\| + \|T_{i-1}^n \cdots T_1^n x_n - x_n\|.$$
Thus, using Equation (14), we obtain Equation (17). Notice that
$$\|x_n - T_i^n x_n\| = \gamma_n^i\|x_n - T_i x_n\|.$$
It follows from Condition (iii) and Equation (17) that
$$\|x_n - T_i x_n\| \to 0 \quad \text{as } n \to \infty, \text{ for each } i = 1, 2, \ldots, m. \tag{18}$$
From Equation (3), we have
$$\begin{aligned}
\|x_{n+1} - x^*\| &\le \|y_n - x^*\| = \|\alpha_n(g(x_n) - x^*) + (1 - \alpha_n)(S_n z_n - x^*)\|\\
&\le \alpha_n\|g(x_n) - x^*\| + \|S_n z_n - x^*\|\\
&\le \alpha_n\|g(x_n) - x^*\| + \|S_n z_n - S_n x_n\| + \|S_n x_n - x^*\|\\
&\le \alpha_n\|g(x_n) - x^*\| + \|z_n - x_n\| + \|S_n x_n - x^*\|\\
&= \alpha_n\|g(x_n) - x^*\| + (1 - \lambda_n)\|x_{n+1} - x_n\| + \|S_n x_n - x^*\|. 
\end{aligned} \tag{19}$$
Moreover, using Equation (19) and the nonexpansiveness of $V_n$, we have
$$0 \le \|x_n - x^*\| - \|V_n x_n - x^*\| \le \|x_n - x^*\| - \|S_n x_n - x^*\| \le \|x_n - x^*\| - \|x_{n+1} - x^*\| + \alpha_n\|g(x_n) - x^*\| + (1 - \lambda_n)\|x_{n+1} - x_n\|. \tag{20}$$
Since $\{\|x_n - x^*\|\}$ is convergent, using $\alpha_n \to 0$ and Equation (5), we obtain
$$\|x_n - x^*\| - \|V_n x_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
In addition, by Lemma 3, $\{V_n\}$ is a strongly nonexpansive sequence. Therefore, we have
$$\|x_n - V_n x_n\| \to 0 \quad \text{as } n \to \infty. \tag{21}$$
Notice that
$$\|x_n - V_n x_n\| = \beta_n\|x_n - V x_n\|.$$
It follows from Condition (ii) and Equation (21) that
$$\|x_n - V x_n\| \to 0 \quad \text{as } n \to \infty. \tag{22}$$
Now, consider
$$\|x_n - J_{\mu_n}^{B_2} x_n\| \le \|x_n - J_{\mu_n}^{B_2} V_n x_n\| + \|J_{\mu_n}^{B_2} V_n x_n - J_{\mu_n}^{B_2} x_n\| \le \|x_n - S_n x_n\| + \|V_n x_n - x_n\|.$$
It follows from Equations (8) and (21) that
$$\|x_n - J_{\mu_n}^{B_2} x_n\| \to 0 \quad \text{as } n \to \infty.$$
Proceeding as for Equation (16), we obtain
$$\|x_n - J_s^{B_2} x_n\| \to 0 \quad \text{as } n \to \infty. \tag{23}$$
Put $U := \frac{1}{m+3}\left(\sum_{i=1}^m T_i + V + J_s^{B_1} + J_s^{B_2}\right)$. Being a convex combination of nonexpansive mappings, $U$ is nonexpansive and
$$\operatorname{Fix}(U) = \bigcap_{i=1}^m \operatorname{Fix}(T_i) \cap \operatorname{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 = \Gamma.$$
We observe
$$\begin{aligned}
\|x_n - U x_n\| &= \left\|x_n - \frac{1}{m+3}\left(\sum_{i=1}^m T_i x_n + V x_n + J_s^{B_1} x_n + J_s^{B_2} x_n\right)\right\|\\
&= \left\|\frac{1}{m+3}\left(m x_n - \sum_{i=1}^m T_i x_n\right) + \frac{1}{m+3}(x_n - V x_n) + \frac{1}{m+3}(x_n - J_s^{B_1} x_n) + \frac{1}{m+3}(x_n - J_s^{B_2} x_n)\right\|\\
&\le \frac{1}{m+3}\sum_{i=1}^m\|x_n - T_i x_n\| + \frac{1}{m+3}\|x_n - V x_n\| + \frac{1}{m+3}\|x_n - J_s^{B_1} x_n\| + \frac{1}{m+3}\|x_n - J_s^{B_2} x_n\|.
\end{aligned}$$
In view of Equations (16), (18), (22) and (23), we obtain
$$\|x_n - U x_n\| \to 0 \quad \text{as } n \to \infty. \tag{24}$$
As $\{x_n\}$ is bounded, we can take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i} \rightharpoonup z \in H$ and the limit superior below is attained along it. Then, by Equation (24) and Lemma 8, $z \in \operatorname{Fix}(U) = \Gamma$, which yields
$$\limsup_{n\to\infty}\langle g(x^*) - x^*,\ x_n - x^*\rangle = \lim_{i\to\infty}\langle g(x^*) - x^*,\ x_{n_i} - x^*\rangle = \langle g(x^*) - x^*,\ z - x^*\rangle = \langle g(x^*) - P_\Gamma g(x^*),\ z - P_\Gamma g(x^*)\rangle \le 0, \tag{25}$$
where the last inequality follows from Equation (1).
Finally, we show that $x_n \to x^*$ as $n \to \infty$. In fact, we have
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|W_n y_n - x^*\|^2 \le \|y_n - x^*\|^2 = \|\alpha_n(g(x_n) - x^*) + (1 - \alpha_n)(S_n z_n - x^*)\|^2\\
&= \alpha_n^2\|g(x_n) - x^*\|^2 + (1 - \alpha_n)^2\|S_n z_n - x^*\|^2 + 2\alpha_n(1 - \alpha_n)\langle g(x_n) - x^*,\ S_n z_n - x^*\rangle\\
&\le \alpha_n^2\|g(x_n) - x^*\|^2 + (1 - \alpha_n)^2\|z_n - x^*\|^2 + 2\alpha_n(1 - \alpha_n)\langle g(x_n) - g(x^*),\ S_n z_n - x^*\rangle + 2\alpha_n(1 - \alpha_n)\langle g(x^*) - x^*,\ S_n z_n - x^*\rangle\\
&\le (1 - \alpha_n)^2\|z_n - x^*\|^2 + 2\alpha_n(1 - \alpha_n)\|g(x_n) - g(x^*)\|\,\|S_n z_n - x^*\| + E_n\\
&\le (1 - \alpha_n)^2\|z_n - x^*\|^2 + 2\alpha_n(1 - \alpha_n)k\|x_n - x^*\|\,\|z_n - x^*\| + E_n,
\end{aligned}$$
where $E_n = \alpha_n^2\|g(x_n) - x^*\|^2 + 2\alpha_n(1 - \alpha_n)\langle g(x^*) - x^*,\ S_n z_n - x^*\rangle$. It turns out that
$$(1 - \alpha_n)^2\|z_n - x^*\|^2 + 2k\alpha_n(1 - \alpha_n)\|x_n - x^*\|\,\|z_n - x^*\| + \left(E_n - \|x_{n+1} - x^*\|^2\right) \ge 0.$$
Solving this quadratic inequality for $\|z_n - x^*\|$, we get
$$\|z_n - x^*\| \ge \frac{1}{2(1 - \alpha_n)^2}\left(-2k\alpha_n(1 - \alpha_n)\|x_n - x^*\| + \sqrt{4k^2\alpha_n^2(1 - \alpha_n)^2\|x_n - x^*\|^2 - 4(1 - \alpha_n)^2\left(E_n - \|x_{n+1} - x^*\|^2\right)}\right) = \frac{-k\alpha_n\|x_n - x^*\| + \sqrt{k^2\alpha_n^2\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2 - E_n}}{1 - \alpha_n}.$$
Note that
$$\|z_n - x^*\| = \|\lambda_n(x_n - x^*) + (1 - \lambda_n)(x_{n+1} - x^*)\| \le \lambda_n\|x_n - x^*\| + (1 - \lambda_n)\|x_{n+1} - x^*\|.$$
Therefore, we have
$$\lambda_n\|x_n - x^*\| + (1 - \lambda_n)\|x_{n+1} - x^*\| \ge \frac{-k\alpha_n\|x_n - x^*\| + \sqrt{k^2\alpha_n^2\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2 - E_n}}{1 - \alpha_n}.$$
This implies that
$$\sqrt{k^2\alpha_n^2\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2 - E_n} \le (\lambda_n - \alpha_n\lambda_n + k\alpha_n)\|x_n - x^*\| + (1 - \alpha_n)(1 - \lambda_n)\|x_{n+1} - x^*\|,$$
that is,
$$\begin{aligned}
k^2\alpha_n^2\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2 - E_n &\le (\lambda_n - \alpha_n\lambda_n + k\alpha_n)^2\|x_n - x^*\|^2 + (1 - \alpha_n)^2(1 - \lambda_n)^2\|x_{n+1} - x^*\|^2\\
&\quad + 2(\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - \alpha_n)(1 - \lambda_n)\|x_n - x^*\|\,\|x_{n+1} - x^*\|\\
&\le (\lambda_n - \alpha_n\lambda_n + k\alpha_n)^2\|x_n - x^*\|^2 + (1 - \alpha_n)^2(1 - \lambda_n)^2\|x_{n+1} - x^*\|^2\\
&\quad + (\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - \alpha_n)(1 - \lambda_n)\left(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\right),
\end{aligned}$$
which gives the inequality
$$\left[1 - (1 - \alpha_n)^2(1 - \lambda_n)^2 - (1 - \alpha_n)(1 - \lambda_n)(\lambda_n - \alpha_n\lambda_n + k\alpha_n)\right]\|x_{n+1} - x^*\|^2 \le \left[(\lambda_n - \alpha_n\lambda_n + k\alpha_n)^2 + (1 - \alpha_n)(1 - \lambda_n)(\lambda_n - \alpha_n\lambda_n + k\alpha_n) - k^2\alpha_n^2\right]\|x_n - x^*\|^2 + E_n,$$
that is,
$$\left[1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)\right]\|x_{n+1} - x^*\|^2 \le \left[(\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - (1 - k)\alpha_n) - k^2\alpha_n^2\right]\|x_n - x^*\|^2 + E_n.$$
It follows that
$$\|x_{n+1} - x^*\|^2 \le \frac{(\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - (1 - k)\alpha_n) - k^2\alpha_n^2}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)}\|x_n - x^*\|^2 + \frac{E_n}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)}. \tag{26}$$
Note that $\lambda_n \le b$ implies $1 - \lambda_n \ge 1 - b$ and, consequently,
$$1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n) \le 1 - (1 - \alpha_n)(1 - b)(1 - (1 - k)\alpha_n). \tag{27}$$
Let
$$v_n = \frac{1}{\alpha_n}\left[1 - \frac{(\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - (1 - k)\alpha_n) - k^2\alpha_n^2}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)}\right] = \frac{1 - (1 - (1 - k)\alpha_n)^2 + k^2\alpha_n^2}{\alpha_n} \cdot \frac{1}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)}. \tag{28}$$
Using Equation (27) in Equation (28), we have
$$v_n \ge \frac{1 - (1 - (1 - k)\alpha_n)^2 + k^2\alpha_n^2}{\alpha_n} \cdot \frac{1}{1 - (1 - \alpha_n)(1 - b)(1 - (1 - k)\alpha_n)},$$
that is, $v_n \ge u_n$ for all $n \ge 0$, where
$$u_n = \frac{1 - (1 - (1 - k)\alpha_n)^2 + k^2\alpha_n^2}{\alpha_n\left[1 - (1 - \alpha_n)(1 - b)(1 - (1 - k)\alpha_n)\right]} = \frac{2(1 - k) + (2k - 1)\alpha_n}{1 - (1 - \alpha_n)(1 - b)(1 - (1 - k)\alpha_n)}.$$
Note that
$$\lim_{n\to\infty} u_n = \frac{2(1 - k)}{b} > 0.$$
Letting $\xi$ satisfy $0 < \xi < \frac{2(1 - k)}{b}$, there exists an integer $\bar{N}$ large enough that $u_n > \xi$ for all $n \ge \bar{N}$ and, consequently, $v_n > \xi$ for all $n \ge \bar{N}$. Hence, we have
$$\frac{(\lambda_n - \alpha_n\lambda_n + k\alpha_n)(1 - (1 - k)\alpha_n) - k^2\alpha_n^2}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)} \le 1 - \xi\alpha_n$$
for all $n \ge \bar{N}$. Therefore, from Equation (26), we have, for all $n \ge \bar{N}$,
$$\|x_{n+1} - x^*\|^2 \le (1 - \xi\alpha_n)\|x_n - x^*\|^2 + \frac{E_n}{1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)}. \tag{29}$$
Note that
$$\begin{aligned}
E_n &= \alpha_n^2\|g(x_n) - x^*\|^2 + 2\alpha_n(1 - \alpha_n)\langle g(x^*) - x^*,\ S_n z_n - x^*\rangle\\
&= \alpha_n\left[\alpha_n\|g(x_n) - x^*\|^2 + 2(1 - \alpha_n)\langle g(x^*) - x^*,\ S_n z_n - x_n\rangle + 2(1 - \alpha_n)\langle g(x^*) - x^*,\ x_n - x^*\rangle\right]\\
&\le \alpha_n\left[\alpha_n\|g(x_n) - x^*\|^2 + 2(1 - \alpha_n)\|g(x^*) - x^*\|\,\|S_n z_n - x_n\| + 2(1 - \alpha_n)\langle g(x^*) - x^*,\ x_n - x^*\rangle\right].
\end{aligned}$$
Thus, by Equations (9) and (25) and the given conditions on $\{\alpha_n\}$ and $\{\lambda_n\}$, we have
$$\limsup_{n\to\infty} \frac{E_n}{\xi\alpha_n\left[1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)\right]} \le \limsup_{n\to\infty} \frac{\alpha_n\|g(x_n) - x^*\|^2 + 2(1 - \alpha_n)\|g(x^*) - x^*\|\,\|S_n z_n - x_n\| + 2(1 - \alpha_n)\langle g(x^*) - x^*,\ x_n - x^*\rangle}{\xi\left[1 - (1 - \alpha_n)(1 - \lambda_n)(1 - (1 - k)\alpha_n)\right]} \le 0. \tag{30}$$
From Equations (29) and (30) and Lemma 2, we conclude that $x_n \to x^*$ as $n \to \infty$.
Case 2. Assume that there exists a subsequence $\{n_j\}$ of $\{n\}$ such that
$$\|x_{n_j} - x^*\| < \|x_{n_j+1} - x^*\|$$
for all $j \in \mathbb{N}$. Then, by Lemma 10, there exists a nondecreasing sequence of integers $\{m_q\} \subset \mathbb{N}$ such that $m_q \to \infty$ as $q \to \infty$ and
$$\|x_{m_q} - x^*\| \le \|x_{m_q+1} - x^*\| \quad \text{and} \quad \|x_q - x^*\| \le \|x_{m_q+1} - x^*\| \tag{31}$$
for all $q \in \mathbb{N}$. Now, using Equation (31) in Equation (4), we have
$$(1 - \alpha_{m_q})\lambda_{m_q}(1 - \lambda_{m_q})\|x_{m_q+1} - x_{m_q}\|^2 \le \alpha_{m_q}\left(\|g(x_{m_q}) - x^*\|^2 - \|x_{m_q+1} - x^*\|^2\right).$$
Using the given conditions on $\{\alpha_{m_q}\}$ and $\{\lambda_{m_q}\}$, together with the boundedness of $\{x_{m_q}\}$ and $\{g(x_{m_q})\}$, we obtain
$$\|x_{m_q+1} - x_{m_q}\| \to 0 \quad \text{as } q \to \infty.$$
Similarly, using Equation (31) in Equation (6), we obtain
$$0 \le \|y_{m_q} - x^*\| - \|W_{m_q} y_{m_q} - x^*\| \le \alpha_{m_q}\|g(x_{m_q}) - x^*\| + \lambda_{m_q}\left(\|x_{m_q} - x^*\| - \|x_{m_q+1} - x^*\|\right) \le \alpha_{m_q}\|g(x_{m_q}) - x^*\|.$$
As $\alpha_{m_q} \to 0$ and $\{g(x_{m_q})\}$ is bounded, we obtain
$$\|y_{m_q} - x^*\| - \|W_{m_q} y_{m_q} - x^*\| \to 0 \quad \text{as } q \to \infty.$$
By the same argument as in Case 1, we obtain
$$\|y_{m_q} - W_{m_q} y_{m_q}\| \to 0,\quad \|x_{m_q} - S_{m_q} x_{m_q}\| \to 0,\quad \|x_{m_q} - S_{m_q} z_{m_q}\| \to 0,\quad \|y_{m_q} - x_{m_q}\| \to 0,\quad \|x_{m_q} - W_{m_q} x_{m_q}\| \to 0 \quad \text{as } q \to \infty. \tag{32}$$
From Equation (13), we obtain
$$0 \le \|x_{m_q} - x^*\| - \|T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q} - x^*\| \le \|x_{m_q} - x^*\| - \|x_{m_q+1} - x^*\| + \|y_{m_q} - x_{m_q}\| \le \|y_{m_q} - x_{m_q}\|.$$
Using $\|y_{m_q} - x_{m_q}\| \to 0$, we have
$$\|x_{m_q} - x^*\| - \|T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q} - x^*\| \to 0 \quad \text{as } q \to \infty.$$
Again, by the same argument as in Case 1, we obtain
$$\|x_{m_q} - T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q}\| \to 0,\quad \|x_{m_q} - J_{\rho_{m_q}}^{B_1} x_{m_q}\| \to 0,\quad \|x_{m_q} - J_s^{B_1} x_{m_q}\| \to 0,\quad \|x_{m_q} - T_i^{m_q} x_{m_q}\| \to 0,\quad \|x_{m_q} - T_i x_{m_q}\| \to 0 \quad \text{as } q \to \infty.$$
Using Equation (31) in Equation (20), we have
$$0 \le \|x_{m_q} - x^*\| - \|V_{m_q} x_{m_q} - x^*\| \le \|x_{m_q} - x^*\| - \|x_{m_q+1} - x^*\| + \alpha_{m_q}\|g(x_{m_q}) - x^*\| + (1 - \lambda_{m_q})\|x_{m_q+1} - x_{m_q}\| \le \alpha_{m_q}\|g(x_{m_q}) - x^*\| + (1 - \lambda_{m_q})\|x_{m_q+1} - x_{m_q}\|.$$
As $\alpha_{m_q} \to 0$, $\|x_{m_q+1} - x_{m_q}\| \to 0$, and $\{\lambda_{m_q}\}$ and $\{g(x_{m_q})\}$ are bounded, we obtain
$$\|x_{m_q} - x^*\| - \|V_{m_q} x_{m_q} - x^*\| \to 0 \quad \text{as } q \to \infty.$$
Following similar arguments as in Case 1, we have
$$\|x_{m_q} - V_{m_q} x_{m_q}\| \to 0,\quad \|x_{m_q} - V x_{m_q}\| \to 0,\quad \|x_{m_q} - J_{\mu_{m_q}}^{B_2} x_{m_q}\| \to 0,\quad \|x_{m_q} - J_s^{B_2} x_{m_q}\| \to 0,\quad \|x_{m_q} - U x_{m_q}\| \to 0 \quad \text{as } q \to \infty,$$
and
$$\limsup_{q\to\infty}\langle g(x^*) - x^*,\ x_{m_q} - x^*\rangle \le 0. \tag{33}$$
Now, from Equation (29), we have
$$\|x_{m_q+1} - x^*\|^2 \le (1 - \xi\alpha_{m_q})\|x_{m_q} - x^*\|^2 + \frac{E_{m_q}}{1 - (1 - \alpha_{m_q})(1 - \lambda_{m_q})(1 - (1 - k)\alpha_{m_q})}, \tag{34}$$
where
$$E_{m_q} = \alpha_{m_q}^2\|g(x_{m_q}) - x^*\|^2 + 2\alpha_{m_q}(1 - \alpha_{m_q})\langle g(x^*) - x^*,\ S_{m_q} z_{m_q} - x^*\rangle \le \alpha_{m_q}\left[\alpha_{m_q}\|g(x_{m_q}) - x^*\|^2 + 2(1 - \alpha_{m_q})\|g(x^*) - x^*\|\,\|S_{m_q} z_{m_q} - x_{m_q}\| + 2(1 - \alpha_{m_q})\langle g(x^*) - x^*,\ x_{m_q} - x^*\rangle\right]. \tag{35}$$
Applying Equation (31) in Equation (34), we have
$$\xi\alpha_{m_q}\|x_{m_q} - x^*\|^2 \le \|x_{m_q} - x^*\|^2 - \|x_{m_q+1} - x^*\|^2 + \frac{E_{m_q}}{1 - (1 - \alpha_{m_q})(1 - \lambda_{m_q})(1 - (1 - k)\alpha_{m_q})} \le \frac{E_{m_q}}{1 - (1 - \alpha_{m_q})(1 - \lambda_{m_q})(1 - (1 - k)\alpha_{m_q})}.$$
Using the fact that $\alpha_{m_q} > 0$ and Equation (35), we obtain
$$\xi\|x_{m_q} - x^*\|^2 \le \frac{\alpha_{m_q}\|g(x_{m_q}) - x^*\|^2 + 2(1 - \alpha_{m_q})\|g(x^*) - x^*\|\,\|S_{m_q} z_{m_q} - x_{m_q}\| + 2(1 - \alpha_{m_q})\langle g(x^*) - x^*,\ x_{m_q} - x^*\rangle}{1 - (1 - \alpha_{m_q})(1 - \lambda_{m_q})(1 - (1 - k)\alpha_{m_q})}.$$
Then, using $\alpha_{m_q} \to 0$ and Equations (32) and (33), we get
$$\|x_{m_q} - x^*\| \to 0 \quad \text{as } q \to \infty.$$
This, together with Equation (34), implies that $\|x_{m_q+1} - x^*\| \to 0$ as $q \to \infty$. However,
$$\|x_q - x^*\| \le \|x_{m_q+1} - x^*\|$$
for all $q \in \mathbb{N}$, which gives $x_q \to x^*$ as $q \to \infty$. □
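To see the implicit scheme in Equation (3) in action, here is a toy numerical sketch on $H = \mathbb{R}^2$ (a hypothetical instance of ours, not an experiment from the paper): $m = 1$ with $T_1$ a rotation (so $\operatorname{Fix}(T_1) = \{0\}$), $V = I$, and $B_1 = B_2 = \partial\|\cdot\|_1$, hence $\Gamma = \{0\}$. The implicit step is solved by an inner fixed-point loop, and all parameter choices below are illustrative only.

```python
import numpy as np

def soft(x, mu):                                 # resolvent of d||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

R = np.array([[0.0, -1.0], [1.0, 0.0]])          # rotation by 90 degrees
T = lambda u: R @ u                              # nonexpansive, Fix(T) = {0}
V = lambda u: u                                  # V = I
g = lambda u: 0.5 * u                            # contraction, k = 1/2

x = np.array([5.0, -3.0])
for n in range(1, 300):
    alpha, beta, gamma, lam, mu, rho = 1.0 / (n + 1), 0.5, 0.5, 0.5, 1.0, 1.0
    Tn = lambda u: (1 - gamma) * u + gamma * T(u)
    Vn = lambda u: (1 - beta) * u + beta * V(u)
    x_next = x.copy()                            # inner loop for the implicit step
    for _ in range(50):
        z = lam * x + (1 - lam) * x_next
        y = alpha * g(x) + (1 - alpha) * soft(Vn(z), mu)
        x_next = soft(Tn(y), rho)
    x = x_next
print(x)   # tends to (0, 0), the unique point of Gamma
```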

4. Applications

We apply our main result in this section to solve a number of important problems.

4.1. New General System of Generalized Equilibrium Problems

Let $\{C_i\}_{i=1}^N$ be nonempty closed convex subsets of a Hilbert space H. Let, for each $i = 1, 2, \ldots, N$, $\Xi_i: C_i \times C_i \to \mathbb{R}$ be a bifunction and $A_i: H \to H$ be a nonlinear mapping. Consider the following problem of finding $(x_1^*, x_2^*, \ldots, x_N^*) \in C_1 \times C_2 \times \cdots \times C_N$ such that
$$\begin{cases}
\Xi_1(x_1^*, x_1) + \langle A_1 x_2^*,\ x_1 - x_1^*\rangle + \frac{1}{\theta_1}\langle x_1^* - x_2^*,\ x_1 - x_1^*\rangle \ge 0, & \forall x_1 \in C_1,\\
\Xi_2(x_2^*, x_2) + \langle A_2 x_3^*,\ x_2 - x_2^*\rangle + \frac{1}{\theta_2}\langle x_2^* - x_3^*,\ x_2 - x_2^*\rangle \ge 0, & \forall x_2 \in C_2,\\
\quad\vdots\\
\Xi_{N-1}(x_{N-1}^*, x_{N-1}) + \langle A_{N-1} x_N^*,\ x_{N-1} - x_{N-1}^*\rangle + \frac{1}{\theta_{N-1}}\langle x_{N-1}^* - x_N^*,\ x_{N-1} - x_{N-1}^*\rangle \ge 0, & \forall x_{N-1} \in C_{N-1},\\
\Xi_N(x_N^*, x_N) + \langle A_N x_1^*,\ x_N - x_N^*\rangle + \frac{1}{\theta_N}\langle x_N^* - x_1^*,\ x_N - x_N^*\rangle \ge 0, & \forall x_N \in C_N,
\end{cases} \tag{36}$$
where $\theta_i > 0$ for each $i = 1, 2, \ldots, N$. Here, $\Omega$ is used to denote the solution set of Equation (36). Next, we discuss some special cases of the problem in Equation (36) as follows:
(1) If $N = 2$ and $C_1 = C_2 = C$, then the problem in Equation (36) reduces to the following general system of generalized equilibrium problems of finding $(x_1^*, x_2^*) \in C \times C$ such that
$$\begin{cases}
\Xi_1(x_1^*, x_1) + \langle A_1 x_2^*,\ x_1 - x_1^*\rangle + \frac{1}{\theta_1}\langle x_1^* - x_2^*,\ x_1 - x_1^*\rangle \ge 0, & \forall x_1 \in C,\\
\Xi_2(x_2^*, x_2) + \langle A_2 x_1^*,\ x_2 - x_2^*\rangle + \frac{1}{\theta_2}\langle x_2^* - x_1^*,\ x_2 - x_2^*\rangle \ge 0, & \forall x_2 \in C,
\end{cases}$$
which was introduced and studied by Ceng and Yao [18].
(2) If $\Xi_1 = \Xi_2 = \cdots = \Xi_N = \Xi$, $A_1 = A_2 = \cdots = A_N = A$, $C_1 = C_2 = \cdots = C_N = C$ and $x_1^* = x_2^* = \cdots = x_N^* = x^*$, then the problem in Equation (36) reduces to the following generalized equilibrium problem of finding $x^* \in C$ such that
$$\Xi(x^*, x_1) + \langle A x^*,\ x_1 - x^*\rangle \ge 0, \quad \forall x_1 \in C, \tag{37}$$
which was introduced and studied by Takahashi and Takahashi [53].
(3) If $A_1 = A_2 = \cdots = A_N = 0$, $C_1 = C_2 = \cdots = C_N = C$ and $x_1^* = x_2^* = \cdots = x_N^* = x^*$, then the problem in Equation (36) reduces to the following system of equilibrium problems of finding $x^* \in C$ such that
$$\Xi_i(x^*, y) \ge 0, \quad \forall y \in C,\ i = 1, 2, \ldots, N, \tag{38}$$
which was considered by Combettes and Hirstoaga [54].
(4) If $A = 0$ in Equation (37) or $N = 1$ in Equation (38), then we obtain the equilibrium problem of finding $x^* \in C$ such that
$$\Xi(x^*, y) \ge 0, \quad \forall y \in C.$$
The set of solutions of the equilibrium problem is denoted by $EP(\Xi)$.
(5) If $\Xi_1 = \Xi_2 = \cdots = \Xi_N = 0$, then the problem in Equation (36) reduces to the following general system of variational inequalities of finding $(x_1^*, x_2^*, \ldots, x_N^*) \in C_1 \times C_2 \times \cdots \times C_N$ such that
$$\begin{cases}
\langle \theta_1 A_1 x_2^* + x_1^* - x_2^*,\ x - x_1^*\rangle \ge 0, & \forall x \in C_1,\\
\langle \theta_2 A_2 x_3^* + x_2^* - x_3^*,\ x - x_2^*\rangle \ge 0, & \forall x \in C_2,\\
\quad\vdots\\
\langle \theta_{N-1} A_{N-1} x_N^* + x_{N-1}^* - x_N^*,\ x - x_{N-1}^*\rangle \ge 0, & \forall x \in C_{N-1},\\
\langle \theta_N A_N x_1^* + x_N^* - x_1^*,\ x - x_N^*\rangle \ge 0, & \forall x \in C_N,
\end{cases} \tag{39}$$
which was considered and investigated by Nandal, Chugh and Postolache [28].
(6) If $N = 2$ and $C_1 = C_2 = C$, then the problem in Equation (39) reduces to the following system of variational inequalities of finding $(x_1^*, x_2^*) \in C \times C$ such that
$$\begin{cases}
\langle \theta_1 A_1 x_2^* + x_1^* - x_2^*,\ x - x_1^*\rangle \ge 0, & \forall x \in C,\\
\langle \theta_2 A_2 x_1^* + x_2^* - x_1^*,\ x - x_2^*\rangle \ge 0, & \forall x \in C,
\end{cases}$$
which was introduced and considered by Ceng et al. [55].
(7) If $N = 2$, $C_1 = C_2 = C$ and $A_1 = A_2 = A$, then the problem in Equation (39) reduces to the following system of variational inequalities of finding $(x_1^*, x_2^*) \in C \times C$ such that
$$\begin{cases}
\langle \theta_1 A x_2^* + x_1^* - x_2^*,\ x - x_1^*\rangle \ge 0, & \forall x \in C,\\
\langle \theta_2 A x_1^* + x_2^* - x_1^*,\ x - x_2^*\rangle \ge 0, & \forall x \in C,
\end{cases} \tag{40}$$
which was introduced and studied by Verma [56].
(8) If $x_1^* = x_2^* = x^*$ in Equation (40), then the problem in Equation (40) reduces to the classical variational inequality of finding $x^* \in C$ such that
$$\langle A x^*,\ x - x^*\rangle \ge 0, \quad \forall x \in C.$$
The set of solutions of the classical variational inequality problem is denoted by $VI(C, A)$.
It is clear from the above special cases that the problem in Equation (36) is very general and includes a number of equilibrium and variational inequality problems, which shows its special significance.
To study the problem in Equation (36), we need the following assumptions on a bifunction $\Xi: C \times C \to \mathbb{R}$:
  • (P1) $\Xi(u, u) = 0$ for all $u \in C$;
  • (P2) $\Xi$ is monotone, i.e., $\Xi(u, v) + \Xi(v, u) \le 0$ for all $u, v \in C$;
  • (P3) $\limsup_{t \to 0^+} \Xi(tu + (1 - t)v, w) \le \Xi(v, w)$ for all $u, v, w \in C$; and
  • (P4) for each fixed $u \in C$, $v \mapsto \Xi(u, v)$ is a convex and lower semicontinuous function.
Lemma 11
(Lemma 2.12, [54]). Let $C \subseteq H$ be a nonempty closed convex set and $\Xi: C \times C \to \mathbb{R}$ be a bifunction satisfying (P1)–(P4). Then, for any $\theta > 0$ and $u \in H$, there exists $w \in C$ such that
$$\Xi(w, v) + \frac{1}{\theta}\langle v - w,\ w - u\rangle \ge 0, \quad \forall v \in C.$$
Furthermore, if the resolvent $J_\theta^\Xi$ of $\Xi$ is defined by $J_\theta^\Xi(u) = \{w \in C : \Xi(w, v) + \frac{1}{\theta}\langle v - w,\ w - u\rangle \ge 0,\ \forall v \in C\}$, then
(a) $J_\theta^\Xi$ is a single valued map;
(b) $J_\theta^\Xi$ is firmly nonexpansive;
(c) $\operatorname{Fix}(J_\theta^\Xi) = EP(\Xi)$; and
(d) $EP(\Xi)$ is closed and convex.
Lemma 12.
Let $\{C_i\}_{i=1}^N$ be nonempty closed convex subsets of a Hilbert space H. Let, for each $i = 1, 2, \ldots, N$, $\Xi_i: C_i \times C_i \to \mathbb{R}$ be a bifunction satisfying Conditions (P1)–(P4) and $A_i: H \to H$ be a nonlinear mapping. Then, for given $x_i^* \in C_i$, $i = 1, 2, \ldots, N$, $(x_1^*, x_2^*, \ldots, x_N^*) \in C_1 \times C_2 \times \cdots \times C_N$ is a solution of the problem in Equation (36) if and only if $x_1^*$ is a fixed point of the mapping $T: H \to H$ defined by
$$T = J_{\theta_1}^{\Xi_1}(I - \theta_1 A_1)\, J_{\theta_2}^{\Xi_2}(I - \theta_2 A_2) \cdots J_{\theta_{N-1}}^{\Xi_{N-1}}(I - \theta_{N-1} A_{N-1})\, J_{\theta_N}^{\Xi_N}(I - \theta_N A_N).$$
Proof. 
Observe that Equation (36) can be written as
$$\begin{cases}
\Xi_1(x_1^*, x_1) + \frac{1}{\theta_1}\langle x_1^* - (I - \theta_1 A_1)x_2^*,\ x_1 - x_1^*\rangle \ge 0, & \forall x_1 \in C_1,\\
\Xi_2(x_2^*, x_2) + \frac{1}{\theta_2}\langle x_2^* - (I - \theta_2 A_2)x_3^*,\ x_2 - x_2^*\rangle \ge 0, & \forall x_2 \in C_2,\\
\quad\vdots\\
\Xi_{N-1}(x_{N-1}^*, x_{N-1}) + \frac{1}{\theta_{N-1}}\langle x_{N-1}^* - (I - \theta_{N-1} A_{N-1})x_N^*,\ x_{N-1} - x_{N-1}^*\rangle \ge 0, & \forall x_{N-1} \in C_{N-1},\\
\Xi_N(x_N^*, x_N) + \frac{1}{\theta_N}\langle x_N^* - (I - \theta_N A_N)x_1^*,\ x_N - x_N^*\rangle \ge 0, & \forall x_N \in C_N,
\end{cases}$$
which, by Lemma 11, is equivalent to
$$\begin{cases}
J_{\theta_1}^{\Xi_1}(I - \theta_1 A_1)x_2^* = x_1^*,\\
J_{\theta_2}^{\Xi_2}(I - \theta_2 A_2)x_3^* = x_2^*,\\
\quad\vdots\\
J_{\theta_{N-1}}^{\Xi_{N-1}}(I - \theta_{N-1} A_{N-1})x_N^* = x_{N-1}^*,\\
J_{\theta_N}^{\Xi_N}(I - \theta_N A_N)x_1^* = x_N^*,
\end{cases}$$
which in turn is equivalent to
$$x_1^* = J_{\theta_1}^{\Xi_1}(I - \theta_1 A_1)\, J_{\theta_2}^{\Xi_2}(I - \theta_2 A_2) \cdots J_{\theta_{N-1}}^{\Xi_{N-1}}(I - \theta_{N-1} A_{N-1})\, J_{\theta_N}^{\Xi_N}(I - \theta_N A_N)\, x_1^*. \qquad \square$$
Theorem 2.
Let $\{C_i\}_{i=1}^N$ be nonempty closed convex subsets of a Hilbert space H. Let, for each $i = 1, 2, \ldots, N$, $\Xi_i: C_i \times C_i \to \mathbb{R}$ be a bifunction satisfying Conditions (P1)–(P4) and $A_i$ be an $\eta_i$-ism self-mapping on H. Assume that $\Omega = \operatorname{Fix}(T) \ne \emptyset$, where T is defined as in Lemma 12. Let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$\begin{cases}
y_n = (1 - \alpha_n)(\lambda_n x_n + (1 - \lambda_n)x_{n+1}),\\
x_{n+1} = J_{\theta_1}^{\Xi_1}(I - \theta_1 A_1)\, J_{\theta_2}^{\Xi_2}(I - \theta_2 A_2) \cdots J_{\theta_{N-1}}^{\Xi_{N-1}}(I - \theta_{N-1} A_{N-1})\, J_{\theta_N}^{\Xi_N}(I - \theta_N A_N)\, y_n, \quad n \ge 0,
\end{cases}$$
where $\theta_i \in (0, 2\eta_i)$ and $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1)$. Suppose $\{\alpha_n\} \subset (0, 1)$ satisfies $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$. Then, the sequence $\{x_n\}$ converges strongly to a point $x^* \in \Omega$.
Proof. 
First, we prove that $T = J_{\theta_1}^{\Xi_1}(I - \theta_1 A_1)\, J_{\theta_2}^{\Xi_2}(I - \theta_2 A_2) \cdots J_{\theta_N}^{\Xi_N}(I - \theta_N A_N)$ is an averaged map. Observe that $I - \theta_i A_i = \left(1 - \frac{\theta_i}{2\eta_i}\right)I + \frac{\theta_i}{2\eta_i}(I - 2\eta_i A_i)$, where $\frac{\theta_i}{2\eta_i} \in (0, 1)$. By Lemma 1, $I - 2\eta_i A_i$ is nonexpansive and, therefore, $I - \theta_i A_i$ is averaged for $\theta_i \in (0, 2\eta_i)$, $i = 1, 2, \ldots, N$. In addition, Lemma 11 implies that $J_{\theta_i}^{\Xi_i}$ is firmly nonexpansive, that is, $(1/2)$-averaged, for each $i = 1, 2, \ldots, N$. Hence, Lemma 7 implies that T is averaged on H; therefore, $T = (1 - \gamma)I + \gamma T_1$ for some $\gamma \in (0, 1)$ and a nonexpansive mapping $T_1$ with $\operatorname{Fix}(T_1) = \operatorname{Fix}(T)$. Taking $m = 1$, $B_1 = B_2 = g = 0$, $V = I$ and $\gamma_n^1 = \gamma$ in Theorem 1 yields the conclusion of Theorem 2. □

4.2. Constrained Multiple-Set Split Convex Feasibility Problem (CMSSCFP)

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $\{C_i\}_{i=1}^p$ and $\{Q_j\}_{j=1}^q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let, for each $j = 1, 2, \ldots, q$, $U_j: H_1 \to H_2$ be a bounded linear operator and let K be another closed convex subset of $H_1$. The constrained multiple-set split convex feasibility problem (CMSSCFP) [7] is formulated as finding a point $x^* \in K$ such that
$$x^* \in \bigcap_{i=1}^p C_i \quad \text{and} \quad U_j x^* \in Q_j \text{ for each } j = 1, 2, \ldots, q. \tag{41}$$
This problem extends the multiple-sets split feasibility problem (MSSFP) [5], which is formulated as finding a point $x^* \in H_1$ such that
$$x^* \in \bigcap_{i=1}^p C_i \quad \text{and} \quad U x^* \in \bigcap_{j=1}^q Q_j, \tag{42}$$
where $U: H_1 \to H_2$ is a bounded linear operator. It is clearly seen that, by taking $U_j = U$ for each $j = 1, 2, \ldots, q$ (and $K = H_1$), the CMSSCFP reduces to the MSSFP. If $p = q = 1$, then the MSSFP reduces to the split feasibility problem (SFP), which is formulated as finding a point $x^* \in H_1$ such that
$$x^* \in C \quad \text{and} \quad U x^* \in Q.$$
Recently, Buong [6] proposed the following algorithm to solve the MSSFP:
$$x_{n+1} = (I - \eta\delta_n F)\, P_1\, \left(I - \xi U^*(I - P_2)U\right) x_n,$$
where $P_1 = P_{C_1} \cdots P_{C_p}$ or $P_1 = \sum_{i=1}^p \alpha_i P_{C_i}$, $P_2 = P_{Q_1} \cdots P_{Q_q}$ or $P_2 = \sum_{j=1}^q \beta_j P_{Q_j}$, and F is a strongly monotone and Lipschitz continuous map. He proved that this algorithm converges to a solution of the following variational inequality problem: find $x^* \in \Gamma$ such that
$$\langle F x^*,\ x^* - p\rangle \le 0, \quad \forall p \in \Gamma,$$
where $\Gamma$ is the solution set of the MSSFP in Equation (42).
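For intuition, the $p = q = 1$ instance of this scheme with $F = 0$ collapses to Byrne's classical CQ iteration for the SFP; here is a minimal toy sketch (all data hypothetical, not from [6]):

```python
import numpy as np

# Toy SFP: find x in C (the unit ball) with Ux in Q = [1, 2] x [1, 2].
U = np.array([[2.0, 0.0], [0.0, 2.0]])

def P_C(x):                                    # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def P_Q(y):                                    # projection onto the box
    return np.clip(y, 1.0, 2.0)

xi = 1.0 / np.linalg.norm(U, 2) ** 2           # step size in (0, 2/||U||^2)
x = np.array([5.0, -3.0])
for _ in range(500):
    x = P_C(x - xi * U.T @ (U @ x - P_Q(U @ x)))
print(x, U @ x)   # x in C with Ux in (or near) Q
```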
Now, we present an implicit iterative method to solve the CMSSCFP in Equation (41). We use Ψ to denote the solution set of the CMSSCFP in Equation (41).
Theorem 3.
Let F be a θ-ism self-mapping on $H_1$. Assume that $\Psi \cap F^{-1}0$ is nonempty. Let $\{x_n\}$ be a sequence defined by $x_0 \in H_1$ and
$$\begin{cases}
y_n^{q+1} = (1 - \alpha_n)(\lambda_n x_n + (1 - \lambda_n)x_{n+1}),\\
y_n^j = \left(I - \xi_n^j U_j^*(I - P_{Q_j})U_j\right) y_n^{j+1}, \quad 1 \le j \le q,\\
x_{n+1} = (I - \eta\delta_n F)\, P_1\, y_n^1, \quad n \ge 0,
\end{cases}$$
where $P_1 = P_{C_1} \cdots P_{C_p}$ and $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1)$. Suppose $\{\alpha_n\} \subset (0, 1)$, $\{\eta\delta_n\} \subset (0, 2\theta)$ and $\{\xi_n^j\} \subset (0, 2/\|U_j\|^2)$ $(1 \le j \le q)$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^\infty \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \eta\delta_n \le \limsup_{n\to\infty} \eta\delta_n < 2\theta$; and
(iii) $0 < \liminf_{n\to\infty} \xi_n^j \le \limsup_{n\to\infty} \xi_n^j < \frac{2}{\|U_j\|^2}$ $(1 \le j \le q)$.
Then, the sequence $\{x_n\}$ converges strongly to a point $x^* \in \Psi \cap F^{-1}0$, which is also a solution of $VI(\Psi, F)$.
Proof. 
Let $\hat{x}$ solve the CMSSCFP in Equation (41), i.e., $\hat{x} \in \Psi$; then $\hat{x} \in \bigcap_{i=1}^p C_i$ and $U_j\hat{x} \in Q_j$ for each $j = 1, 2, \ldots, q$. Note that $U_j\hat{x} \in Q_j$ is equivalent to $P_{Q_j}(U_j\hat{x}) = U_j\hat{x}$, i.e., $(I - P_{Q_j})U_j\hat{x} = 0$. Therefore, $U_j^*(I - P_{Q_j})U_j\hat{x} = 0$, which means $\hat{x} \in \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0$ for each $j = 1, 2, \ldots, q$. Thus,
$$\Psi \subseteq \bigcap_{i=1}^p C_i \cap \bigcap_{j=1}^q \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0.$$
Now, let $\hat{x} \in \bigcap_{i=1}^p C_i \cap \bigcap_{j=1}^q \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0$, which implies
$$\hat{x} \in \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0 \quad \text{for each } j = 1, 2, \ldots, q. \tag{43}$$
Choose $z \in \Psi$. Then $U_j z \in Q_j$ for each $j = 1, 2, \ldots, q$, and hence, by Equation (1),
$$\langle (I - P_{Q_j})U_j\hat{x},\ U_j z - P_{Q_j}U_j\hat{x}\rangle \le 0 \quad \text{for each } j = 1, 2, \ldots, q. \tag{44}$$
Using Equations (43) and (44), for each $j = 1, 2, \ldots, q$, we have
$$\begin{aligned}
\|(I - P_{Q_j})U_j\hat{x}\|^2 &= \langle (I - P_{Q_j})U_j\hat{x},\ U_j\hat{x} - U_j z\rangle + \langle (I - P_{Q_j})U_j\hat{x},\ U_j z - P_{Q_j}U_j\hat{x}\rangle\\
&\le \langle (I - P_{Q_j})U_j\hat{x},\ U_j\hat{x} - U_j z\rangle = \langle U_j^*(I - P_{Q_j})U_j\hat{x},\ \hat{x} - z\rangle = 0.
\end{aligned}$$
Therefore, $U_j\hat{x} \in \operatorname{Fix}(P_{Q_j}) = Q_j$ for each $j = 1, 2, \ldots, q$, and thus $\hat{x} \in \Psi$. Hence, $\Psi = \bigcap_{i=1}^p C_i \cap \bigcap_{j=1}^q \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0$.
Next, we rewrite $I - \eta\delta_n F$ as
$$I - \eta\delta_n F = \left(1 - \frac{\eta\delta_n}{2\theta}\right)I + \frac{\eta\delta_n}{2\theta}(I - 2\theta F)$$
and, for each $j = 1, 2, \ldots, q$, $I - \xi_n^j U_j^*(I - P_{Q_j})U_j$ as
$$I - \xi_n^j U_j^*(I - P_{Q_j})U_j = \left(1 - \frac{\xi_n^j\|U_j\|^2}{2}\right)I + \frac{\xi_n^j\|U_j\|^2}{2}\left(I - \frac{2}{\|U_j\|^2}U_j^*(I - P_{Q_j})U_j\right).$$
By Lemma 15 of [28], $U_j^*(I - P_{Q_j})U_j$ is $\frac{1}{\|U_j\|^2}$-ism. It follows from Lemma 1 that $I - 2\theta F$ and $I - \frac{2}{\|U_j\|^2}U_j^*(I - P_{Q_j})U_j$ are nonexpansive. Since the metric projection $P_{C_i}$ is $(1/2)$-averaged, Lemma 7 implies that $P_1 = P_{C_1} \cdots P_{C_p}$ is an averaged mapping. Now, taking $m = q + 2$, $T_m^n = I - \eta\delta_n F$, $T_{m-1}^n = P_1$, $T_{m-1-j}^n = I - \xi_n^j U_j^*(I - P_{Q_j})U_j$ $(1 \le j \le q)$, $V = I$ and $B_1 = B_2 = g = 0$ in Theorem 1 proves that $\{x_n\}$ converges strongly to
$$x^* \in \operatorname{Fix}(I - 2\theta F) \cap \operatorname{Fix}(P_1) \cap \bigcap_{j=1}^q \operatorname{Fix}\left(I - \frac{2}{\|U_j\|^2}U_j^*(I - P_{Q_j})U_j\right).$$
It can be easily proven that $\operatorname{Fix}(I - 2\theta F) = F^{-1}0$ and
$$\operatorname{Fix}\left(I - \frac{2}{\|U_j\|^2}U_j^*(I - P_{Q_j})U_j\right) = \left(U_j^*(I - P_{Q_j})U_j\right)^{-1}0.$$
In addition, $\operatorname{Fix}(P_1) = \bigcap_{i=1}^p \operatorname{Fix}(P_{C_i}) = \bigcap_{i=1}^p C_i$. That is,
$$x_n \to x^* \in \Psi \cap F^{-1}0.$$
In addition, note that $\Psi \cap F^{-1}0 \subseteq VI(\Psi, F)$. Thus, $x^*$ is also a solution of $VI(\Psi, F)$. □
Remark 1.
Theorem 3 generalizes and extends Buong's result ([6], Theorem 3.2) in several directions. Theorem 3 extends the MSSFP studied in Buong [6] to the related, more general CMSSCFP. In addition, in Theorem 3 we take F to be an inverse strongly monotone operator, which is more general than the strongly monotone and Lipschitz continuous operator F considered in Buong's result ([6], Theorem 3.2).

4.3. Monotone Inclusion and Fixed Point Problem

Let $S: H \to H$ and $B: H \to 2^H$ be two operators. Consider the inclusion problem of finding $\hat{x} \in H$ such that
$$0 \in S\hat{x} + B\hat{x}. \tag{45}$$
The solution set of Equation (45) is denoted by $(S + B)^{-1}0$. A popular method for solving Equation (45) is the forward-backward splitting method, which can be expressed via the recursion:
$$x_{n+1} = J_r^B(x_n - rSx_n), \quad n \ge 0,$$
where $r > 0$. Now, we combine the forward-backward splitting method and the generalized viscosity implicit rule for finding a common element of the set of solutions of Equation (45) and the fixed point sets of a finite family of nonexpansive mappings.
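As a concrete sketch of this recursion (a toy instance of ours, with all data hypothetical): take $S$ as the gradient of a quadratic, which is $\theta$-ism with $\theta = 1/\|A\|$ by the Baillon-Haddad theorem [57], and $B$ as the normal cone of a box $C$, whose resolvent $J_r^B$ is the projection $P_C$; the forward-backward iteration then becomes projected gradient descent.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])        # symmetric positive definite
b = np.array([4.0, 1.0])
S = lambda x: A @ x - b                        # theta-ism with theta = 1/||A||
r = 1.0 / np.linalg.norm(A, 2)                 # step r in (0, 2*theta)

P_C = lambda x: np.clip(x, 0.0, 1.0)           # J_r^B for B = normal cone of [0,1]^2

x = np.zeros(2)
for _ in range(200):
    x = P_C(x - r * S(x))                      # x_{n+1} = J_r^B(x_n - r S x_n)
print(x)   # the box-constrained minimizer of (1/2)x'Ax - b'x
```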
Theorem 4.
Let H be a real Hilbert space. Let $\{T_i\}_{i=1}^m$ be nonexpansive self-mappings on H. Let S be a θ-ism mapping of H into itself and let $B: H \to 2^H$ be a maximal monotone mapping such that
$$\Gamma := \bigcap_{i=1}^m \operatorname{Fix}(T_i) \cap (S + B)^{-1}0 \ne \emptyset.$$
Let $g: H \to H$ be a contraction with coefficient $k \in (0, 1)$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$\begin{cases}
z_n = \lambda_n x_n + (1 - \lambda_n)x_{n+1},\\
y_n = \alpha_n g(x_n) + (1 - \alpha_n) J_{r_n}^B(z_n - r_n S z_n),\\
x_{n+1} = T_m^n T_{m-1}^n \cdots T_2^n T_1^n y_n,
\end{cases}$$
for all $n \ge 0$, where $T_i^n = (1 - \gamma_n^i)I + \gamma_n^i T_i$ for $i = 1, 2, \ldots, m$ and $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1)$. Suppose that $\{\alpha_n\}, \{\gamma_n^i\} \subset (0, 1)$ and $\{r_n\} \subset (0, 2\theta)$ satisfy:
(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^\infty \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} r_n \le \limsup_{n\to\infty} r_n < 2\theta$; and
(iii) $0 < \liminf_{n\to\infty} \gamma_n^i \le \limsup_{n\to\infty} \gamma_n^i < 1$, for all $i = 1, 2, \ldots, m$.
Then, the sequence $\{x_n\}$ converges strongly to $x^* \in \Gamma$, where $x^*$ is the unique fixed point of the contraction $P_\Gamma g$.
Proof. 
First, we rewrite $I - r_n S$ as
$$I - r_n S = \left(1 - \frac{r_n}{2\theta}\right)I + \frac{r_n}{2\theta}(I - 2\theta S),$$
where $\frac{r_n}{2\theta} \in (0, 1)$. By Lemma 1, $I - 2\theta S$ is nonexpansive and, therefore, for each $n \ge 0$, $I - r_n S$ is averaged for $r_n \in (0, 2\theta)$. Furthermore, it follows from Lemma 5.8 of [44] that, for any $r > 0$,
$$(S + B)^{-1}0 = \operatorname{Fix}\left(J_r^B(I - rS)\right).$$
By putting $B_1 = 0$, $B_2 = B$, $V = I - 2\theta S$ and $\beta_n = \frac{r_n}{2\theta}$ in Theorem 1, the conclusion follows immediately. □
Remark 2.
Theorem 4 improves and extends Chang et al.'s result ([27], Theorem 3.2). Taking $T_1 = T_2 = \cdots = T_m = I$ and $r_n = r$ in Theorem 4, we obtain Theorem 3.2 of [27] under more relaxed conditions on the parameters. In addition, the contraction coefficient k is restricted to $(0, 1/2)$ in ([27], Theorem 3.2), which we relax to $(0, 1)$.

4.4. Convex Minimization Problem

Suppose $f: H \to \mathbb{R}$ is a convex smooth function and $h: H \to \mathbb{R}$ is a proper convex and lower semicontinuous function. In this subsection, we study the following convex minimization problem: find $x^* \in H$ such that
$$f(x^*) + h(x^*) = \min_{x \in H}\left\{f(x) + h(x)\right\}. \tag{47}$$
By Fermat's rule, the problem in Equation (47) can be transformed into the following equivalent problem:
$$\text{find } x^* \in H \text{ such that } 0 \in \nabla f(x^*) + \partial h(x^*),$$
where $\nabla f$ is the gradient of f and $\partial h$ is the subdifferential of h.
Theorem 5.
Assume that $f: H \to \mathbb{R}$ is a convex and differentiable function whose gradient $\nabla f$ is $\frac{1}{\theta}$-Lipschitz continuous for some $\theta \in (0, \infty)$. In addition, assume that $h: H \to \mathbb{R}$ is a proper convex and lower semicontinuous function such that $f + h$ attains a minimizer. Let $g: H \to H$ be a contraction with coefficient $k \in (0, 1)$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) J_{r_n}^{\partial h}(I - r_n\nabla f)\left(\lambda_n x_n + (1 - \lambda_n)x_{n+1}\right), \quad n \ge 0,$$
where $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1)$. Suppose $\{\alpha_n\} \subset (0, 1)$ and $\{r_n\} \subset (0, 2\theta)$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^\infty \alpha_n = \infty$; and
(ii) $0 < \liminf_{n\to\infty} r_n \le \limsup_{n\to\infty} r_n < 2\theta$.
Then, $\{x_n\}$ converges strongly to a minimizer of $f + h$.
Proof. 
Since $\nabla f$ is $\frac{1}{\theta}$-Lipschitz continuous, it follows from Corollary 10 of [57] that $\nabla f$ is θ-ism. Moreover, $\partial h$ is maximal monotone (see Theorem A of [58]). Taking $T_1 = T_2 = \cdots = T_m = I$, $S = \nabla f$ and $B = \partial h$ in Theorem 4, we obtain the conclusion of Theorem 5. □
Remark 3.
If we take $r_n = r$ in Theorem 5, we obtain Theorem 4.2 of [27] as a special case, with improved conditions on the parameters.
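To see the scheme of Theorem 5 at work, here is a one-dimensional toy sketch (hypothetical data, ours): $f(x) = \frac{1}{2}(x - 3)^2$, so $\nabla f(x) = x - 3$ is $1$-Lipschitz ($\theta = 1$); $h = |\cdot|$, so $J_r^{\partial h}$ is soft-thresholding; and $g = 0$. The minimizer of $f + h$ is $x^* = 2$, and the implicit step is solved by an inner fixed-point loop.

```python
import numpy as np

def soft(x, r):                     # J_r^{dh} for h = |.| (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

grad_f = lambda x: x - 3.0          # gradient of f(x) = (x - 3)^2 / 2

x = -10.0
for n in range(1, 500):
    alpha, lam, r = 1.0 / (n + 1), 0.5, 0.5    # alpha_n -> 0, sum = inf; r in (0, 2)
    x_next = x                                 # inner loop for the implicit step
    for _ in range(50):
        z = lam * x + (1 - lam) * x_next
        x_next = (1 - alpha) * soft(z - r * grad_f(z), r)
    x = x_next
print(x)   # approaches 2.0, the minimizer of f + h
```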

5. Concluding Remarks

This paper builds on the work of Nandal, Chugh and Postolache [28] and on Ke and Ma's [42] generalized viscosity implicit rule for solving fixed point problems. Under mild conditions, strong convergence of the proposed method is proved. Furthermore, we consider a new general system of generalized equilibrium problems, which is a generalization of several equilibrium and variational inequality problems considered in Ceng and Yao [18], Takahashi and Takahashi [53], Combettes and Hirstoaga [54], Nandal et al. [28], Ceng et al. [55] and Verma [56]. Theorem 3 extends the multiple-sets split feasibility problem (MSSFP) studied by Buong [6] to a related, more general problem, the constrained multiple-set split convex feasibility problem (CMSSCFP), and in addition extends F from a strongly monotone and Lipschitz continuous operator to an inverse strongly monotone operator. We then combine the forward-backward splitting method and the generalized viscosity implicit rule in Theorem 4 to solve the monotone inclusion problem; Theorem 4 improves and extends Chang et al.'s result ([27], Theorem 3.2). Finally, we derive Theorem 5 to solve the convex minimization problem, which extends ([27], Theorem 4.2). Further, our work can be extended to fractal calculus [59]. We have attempted to solve a number of problems of nonlinear analysis as applications of the presented algorithm; however, finding real-world applications is still an open question.

Author Contributions

All authors participated in the conceptualization, validation, formal analysis, investigation, writing—original draft preparation, and writing—review and editing.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blatt, D.; Hero, A.O., III. Energy based sensor network source localization via projection onto convex sets (POCS). IEEE Trans. Signal Process. 2006, 54, 3614–3619.
  2. Censor, Y.; Altschuler, M.D.; Powlis, W.D. On the use of Cimmino's simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning. Inverse Probl. 1988, 4, 607–623.
  3. Herman, G.T. Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed.; Springer: London, UK, 2009.
  4. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron. Phys. 1996, 95, 155–270.
  5. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  6. Buong, N. Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces. Numer. Algorithms 2017, 76, 783–798.
  7. Masad, E.; Reich, S. A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 367–371.
  8. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019, 1–3.
  9. Yao, Y.; Liou, Y.C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
  10. Yao, Y.; Yao, J.C.; Liou, Y.C.; Postolache, M. Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpathian J. Math. 2018, 34, 459–466.
  11. Yao, Y.; Postolache, M.; Qin, X.; Yao, J.C. Iterative algorithms for the proximal split feasibility problem. Univ. Politeh. Buch. Ser. A 2018, 80, 37–44.
  12. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
  13. Yao, Y.; Agarwal, R.P.; Postolache, M.; Liu, Y.C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 183.
  14. Yao, Y.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013, 201.
  15. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Student 1994, 63, 123–145.
  16. Daniele, P.; Giannessi, F.; Maugeri, A. Equilibrium Problems and Variational Models; Kluwer: Dordrecht, The Netherlands, 2003.
  17. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: Berlin, Germany, 2002.
  18. Ceng, L.C.; Yao, J.C. A relaxed extragradient-like method for a generalized mixed equilibrium problem, a general system of generalized equilibria and a fixed point problem. Nonlinear Anal. 2010, 72, 1922–1937.
  19. Bnouhachem, A. An iterative algorithm for system of generalized equilibrium problems and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 235.
  20. Ceng, L.C.; Ansari, Q.H.; Schaible, S.; Yao, J.C. Iterative methods for generalized equilibrium problems, systems of general generalized equilibrium problems and fixed point problems for nonexpansive mappings in Hilbert spaces. Fixed Point Theory 2011, 12, 293–308.
  21. Ceng, L.C.; Pang, C.T.; Wen, C.F. Implicit and explicit iterative methods for mixed equilibria with constraints of system of generalized equilibria and hierarchical fixed point problem. J. Inequal. Appl. 2015, 2015, 280.
  22. Ceng, L.C.; Ansari, Q.H.; Schaible, S. Hybrid extragradient-like methods for generalized mixed equilibrium problems, system of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53, 69–96.
  23. Ceng, L.C.; Latif, A.; Al-Mazrooei, A.E. Iterative algorithms for systems of generalized equilibrium problems with the constraints of variational inclusion and fixed point problems. Abstr. Appl. Anal. 2014, 2014, 1–24.
  24. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019.
  25. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 42.
  26. Yuying, T.; Plubbtieng, S. Strong convergence theorem by hybrid and shrinking projection methods for sum of two monotone operators. J. Inequal. Appl. 2017, 2017, 72.
  27. Chang, S.S.; Wen, C.F.; Yao, J.C. Generalized viscosity implicit rules for solving quasi-inclusion problems of accretive operators in Banach spaces. Optimization 2017, 66, 1105–1117.
  28. Nandal, A.; Chugh, R.; Postolache, M. Iteration process for fixed point problems and zeros of maximal monotone operators. Symmetry 2019, 11, 655.
  29. He, J.H. Variational iteration method: A kind of non-linear analytical technique: Some examples. Int. J. Non-Linear Mech. 1999, 34, 699–708.
  30. He, J.H.; Wu, X.H. Variational iteration method: New development and applications. Comput. Math. Appl. 2007, 54, 881–894.
  31. He, J.H. A variational iteration approach to nonlinear problems and its applications. Mech. Appl. 1998, 20, 30–31.
  32. Khuri, S.A.; Sayfy, A. Variational iteration method: Green's functions and fixed point iterations perspective. Appl. Math. Lett. 2014, 32, 28–34.
  33. He, J.H.; Ji, F.Y. Taylor series solution for Lane-Emden equation. J. Math. Chem. 2019, 57, 1932–1934.
  34. Auzinger, W.; Frank, R. Asymptotic error expansions for stiff equations: An analysis for the implicit midpoint and trapezoidal rules in the strongly stiff case. Numer. Math. 1989, 56, 469–499.
  35. Deuflhard, P. Recent progress in extrapolation methods for ordinary differential equations. SIAM Rev. 1985, 27, 505–535.
  36. Bader, G.; Deuflhard, P. A semi-implicit mid-point rule for stiff systems of ordinary differential equations. Numer. Math. 1983, 41, 373–398.
  37. Veldhuizen, M.V. Asymptotic expansions of the global error for the implicit midpoint rule (stiff case). Computing 1984, 33, 185–192.
  38. Somali, S. Implicit midpoint rule to the nonlinear degenerate boundary value problems. Int. J. Comput. Math. 2002, 79, 327–332.
  39. Schneider, C. Analysis of the linearly implicit mid-point rule for differential-algebraic equations. Electron. Trans. Numer. Anal. 1993, 1, 1–10.
  40. Alghamdi, M.A.; Alghamdi, M.A.; Shahzad, N.; Xu, H.K. The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014, 96, 1–9.
  41. Xu, H.K.; Alghamdi, M.A.; Shahzad, N. The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 41.
  42. Ke, Y.; Ma, C. The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 190.
  43. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
  44. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8, 471–489.
  45. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 1977, 3, 459–470.
  46. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  47. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  48. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
  49. Denkowski, Z.; Migorski, S.; Papageorgiou, N.S. An Introduction to Nonlinear Analysis: Applications; Springer: New York, NY, USA, 2003.
  50. He, J.H. Homotopy perturbation technique. Comput. Meth. Appl. Mech. Eng. 1999, 178, 257–262.
  51. He, J.H. A coupling method of homotopy technique and a perturbation technique for nonlinear problems. Int. J. Nonlinear Mech. 2000, 35, 37–43.
  52. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  53. Takahashi, S.; Takahashi, W. Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69, 1025–1033.
  54. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  55. Ceng, L.C.; Wang, C.Y.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Meth. Oper. Res. 2008, 67, 375–390.
  56. Verma, R.U. On a new system of nonlinear variational inequalities and associated iterative algorithms. Math. Sci. Res. Hot-Line 1999, 3, 65–68.
  57. Baillon, J.B.; Haddad, G. Quelques propriétés des opérateurs angle-bornés et cycliquement monotones. Isr. J. Math. 1977, 26, 137–150.
  58. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216.
  59. He, J.H. Fractal calculus and its geometrical explanation. Results Phys. 2018, 10, 272–276.
