Article

Iterative Design for the Common Solution of Monotone Inclusions and Variational Inequalities

1 School of Mathematics and Statistics, Hebei University of Economics and Business, Shijiazhuang 050061, China
2 Department of Mathematics, Texas A & M University-Kingsville, Kingsville, TX 78363, USA
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1504; https://doi.org/10.3390/math9131504
Submission received: 3 June 2021 / Revised: 23 June 2021 / Accepted: 25 June 2021 / Published: 27 June 2021
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: Some new forward–backward multi-choice iterative algorithms with superposition perturbations are presented in a real Hilbert space for approximating common solutions of monotone inclusions and variational inequalities. Some new ideas for constructing the iterative elements are introduced, and strong convergence theorems are proved under mild restrictions, which extend and complement some existing work.

1. Introduction and Preliminaries

Let $C$ be a non-empty closed and convex subset of a real Hilbert space $H$. The symbols $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ denote the norm and inner product of $H$, respectively, and the symbols $\to$ and $\rightharpoonup$ denote strong and weak convergence in $H$, respectively.
The classical variational inequality [1] is to find $u \in C$ such that for any $v \in C$,
$$\langle v - u, Tu\rangle \ge 0, \tag{1}$$
where $T: C \to H$ is a nonlinear mapping. We use $VI(C,T)$ to denote the set of solutions of the variational inequality (1).
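To make the definition concrete, the following is a minimal numerical sketch, not taken from the paper: for a strongly monotone and Lipschitz-continuous operator $T$ on a closed convex set $C$ (here a box in $\mathbb{R}^n$ standing in for $H$), the classical projection iteration $x_{k+1} = P_C(x_k - \gamma Tx_k)$ converges to the unique point of $VI(C,T)$ for a sufficiently small step $\gamma$. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not from the paper): solve VI(C, T) with
# T x = M x + q (M symmetric positive definite, hence strongly monotone and
# Lipschitz) over the box C = [0, 1]^n via x <- P_C(x - gamma * T x).

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)           # strongly monotone linear operator
q = rng.standard_normal(n)

def T(x):
    return M @ x + q

def P_C(x):                           # metric projection onto the box [0, 1]^n
    return np.clip(x, 0.0, 1.0)

gamma = 1.0 / np.linalg.norm(M, 2)    # step size small enough for convergence
x = np.zeros(n)
for _ in range(2000):
    x = P_C(x - gamma * T(x))

# At a solution u of VI(C, T): <v - u, T u> >= 0 for all v in C; for a box it
# suffices to check the corners v in {0, 1}^n (the map v -> <v - u, Tu> is affine).
corners = np.array(np.meshgrid(*([[0.0, 1.0]] * n))).reshape(n, -1).T
print("min_v <v - u, Tu> =", min((v - x) @ T(x) for v in corners))
```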
The theory of variational inequalities has drawn much attention from mathematicians due to its wide applications in several branches of pure and applied sciences [1], and it remains an active topic (see [2,3,4,5,6] and the references therein).
An operator $A: D(A) \subset H \to 2^H$ is called monotone ([7]) if $\langle x - y, u - v\rangle \ge 0$ for all $x, y \in D(A)$, $u \in Ax$, and $v \in Ay$. A monotone operator $A$ is called maximal monotone if $R(I + kA) = H$ for every $k > 0$. In a Hilbert space, a maximal monotone operator is also called an m-accretive mapping.
A mapping $B: D(B) \subset H \to H$ is called a $\theta$-inversely strongly monotone operator ([8]) if there exists $\theta > 0$ such that $\langle x - y, Bx - By\rangle \ge \theta\|Bx - By\|^2$ for each $x, y \in D(B)$.
Let $U: D(U) \subset H \to H$ be a mapping. If $x \in D(U)$ and $Ux = 0$, then $x$ is called a zero point of $U$; the set of zero points of $U$ is denoted by $N(U)$. If $x \in D(U)$ satisfies $Ux = x$, then $x$ is called a fixed point of $U$; the set of fixed points of $U$ is denoted by $F(U)$.
The monotone inclusion problem is to find $u \in H$ such that
$$0 \in Au + Bu, \tag{2}$$
where $A: H \to 2^H$ is maximal monotone and $B: H \to H$ is $\theta$-inversely strongly monotone. The study of monotone inclusions is an active topic since many problems arising in minimization, convex programming, split feasibility problems, variational inequalities, inverse problems, and image processing can be modeled by (2). The construction of iterative algorithms for approximating the solution of (2) has been considered extensively (see [8,9,10,11,12,13,14] and the references therein). One of them is the forward–backward splitting method, in which each iteration involves $B$ only in an explicit (forward) step and $A$ only in an implicit (backward) resolvent step, rather than the sum $A + B$. The classical forward–backward splitting iteration is as follows:
$$x_1 \in H \ \text{chosen arbitrarily}, \qquad x_{n+1} = (I + r_n A)^{-1}(x_n - r_n B x_n), \quad n \in \mathbb{N}.$$
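As an illustration only (not part of the paper), the sketch below applies this forward–backward iteration to the hypothetical choice $A = \partial(\lambda\|\cdot\|_1)$ and $B = \nabla\tfrac{1}{2}\|Mx - b\|^2$ in $\mathbb{R}^m$, where the resolvent $(I + rA)^{-1}$ is componentwise soft-thresholding and $B$ is $\tfrac{1}{L}$-inversely strongly monotone with $L = \|M^\top M\|$; all symbols here are assumptions made for the example.

```python
import numpy as np

# A minimal sketch of the classical forward-backward splitting above, assuming
# the concrete (hypothetical) choice A = d(lam*||.||_1) and B = grad of
# 0.5*||M x - b||^2, so the backward step (I + r A)^{-1} is soft-thresholding
# and the forward step is an explicit gradient step.  B is (1/L)-inversely
# strongly monotone with L = ||M^T M||, so any 0 < r <= 2/L is admissible.

def soft_threshold(x, t):
    """Resolvent of t * d||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(M, b, lam, r, n_iter=500):
    x = np.zeros(M.shape[1])                       # x_1 chosen arbitrarily
    for _ in range(n_iter):
        grad = M.T @ (M @ x - b)                   # forward step: B x_n
        x = soft_threshold(x - r * grad, r * lam)  # backward step: resolvent of r*A
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((40, 100))
b = M @ (np.eye(100)[0] * 3.0) + 0.01 * rng.standard_normal(40)
L = np.linalg.norm(M.T @ M, 2)
x_star = forward_backward(M, b, lam=0.1, r=1.0 / L)
print("nonzeros:", np.count_nonzero(np.abs(x_star) > 1e-6))
```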
Some of the related work can be seen in [9,10] (and the references therein).
Recall that $f: H \to H$ is called a contraction with contractive constant $k$ ([15]) if there exists $k \in (0,1)$ such that $\|f(x) - f(y)\| \le k\|x - y\|$ for all $x, y \in H$.
A mapping $S: H \to H$ is called non-expansive ([15]) if $\|Sx - Sy\| \le \|x - y\|$, for all $x, y \in H$.
A mapping $F: H \to H$ is called a strongly positive mapping with coefficient $\xi$ ([15]) if there exists $\xi > 0$ such that $\langle x, Fx\rangle \ge \xi\|x\|^2$ for all $x \in H$. Furthermore,
$$\|aI - bF\| = \sup_{\|x\| \le 1}|\langle(aI - bF)x, x\rangle|,$$
where $I$ is the identity mapping, $a \in [0,1]$, and $b \in [-1,1]$.
In [15], the study of the monotone inclusion (2) is extended to the following system of monotone inclusions:
$$0 \in A_i u + B_i u, \tag{3}$$
for $i \in \mathbb{N}$, where $A_i: H \to H$ is maximal monotone and $B_i: H \to H$ is $\theta_i$-inversely strongly monotone.
Moreover, the iterative algorithm presented in [15] is proved to converge strongly not only to a solution of the system of monotone inclusions (3) but also to the solution of one kind of variational inequality. Specifically, the authors constructed the following scheme by combining the ideas of the splitting method and the midpoint method:
$$\begin{cases} x_0 \in C \subset H \ \text{chosen arbitrarily},\\ y_n = P_C[(1-\alpha_n)(x_n + e_n)],\\ z_n = \delta_n y_n + \beta_n\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{y_n + z_n}{2}\Big) + \zeta_n e_n,\\ x_{n+1} = \gamma_n\eta f(x_n) + (I - \gamma_n F)z_n + e_n, \quad n \in \mathbb{N}, \end{cases} \tag{4}$$
where $f$ is a contraction, $F$ is a strongly positive linear bounded mapping, and $P_C$ is the metric projection. Under some conditions, $x_n \to p_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$ and $p_0$ solves the following variational inequality:
$$\langle Fp_0 - \eta f(p_0), p_0 - z\rangle \le 0, \quad \forall z \in \bigcap_{i=1}^{\infty} N(A_i + B_i).$$
Recall that $T: D(T) \subset H \to H$ is called $\vartheta$-strongly monotone ([16]) if for each $x, y \in D(T)$,
$$\langle Tx - Ty, x - y\rangle \ge \vartheta\|x - y\|^2,$$
for some $\vartheta \in (0, +\infty)$. Furthermore, $T: D(T) \subset H \to H$ is called $\mu$-strictly pseudo-contractive ([16]) if for each $x, y \in D(T)$,
$$\langle Tx - Ty, x - y\rangle \le \|x - y\|^2 - \mu\|x - y - (Tx - Ty)\|^2,$$
for some $\mu \in (0, 1)$.
In 2012, Ceng et al. [16] proposed the following iterative algorithm with a perturbed operator for approximating a zero point of a maximal monotone operator $A$ in a Hilbert space:
$$\begin{cases} x_1 \in H \ \text{chosen arbitrarily},\\ y_n = \alpha_n x_n + (1-\alpha_n)(I + r_n A)^{-1}x_n,\\ x_{n+1} = \beta_n f(x_n) + (1-\beta_n)\big[(I + r_n A)^{-1}y_n - \lambda_n\mu_n T\big((I + r_n A)^{-1}y_n\big)\big], \quad n \in \mathbb{N}, \end{cases} \tag{6}$$
where $T: H \to H$ is a $\vartheta$-strongly monotone and $\mu$-strictly pseudo-contractive mapping with $\vartheta + \mu > 1$, $f: H \to H$ is a contraction, and $A: H \to H$ is maximal monotone. Under some assumptions, $\{x_n\}$ is proved to converge strongly to the unique element $p_0 \in N(A)$ that solves the following variational inequality:
$$\langle p_0 - f(p_0), p_0 - u\rangle \le 0, \quad \forall u \in N(A). \tag{7}$$
The mapping $T$, which is called a perturbed operator, plays a role only in the construction of the iterative algorithm (6), where it serves to select a particular zero point of $A$; it is not involved in the variational inequality (7).
Later, in 2017, the scheme (6) was extended to approximate the solutions of the system of monotone inclusions (3). The following is a special case in a Hilbert space presented in [17]:
$$\begin{cases} x_1 \in C,\\ u_n = P_C(\alpha_n x_n + \beta_n a_n),\\ v_n = \tau_n u_n + \nu_n\sum_{i=1}^{\infty}\omega_i^{(2)}(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big) + \xi_n b_n,\\ x_{n+1} = \delta_n f(x_n) + (1-\delta_n)\Big(I - \zeta_n\sum_{i=1}^{\infty}\omega_i^{(1)}T_i\Big)\sum_{i=1}^{\infty}\omega_i^{(2)}(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big), \quad n \in \mathbb{N}. \end{cases} \tag{8}$$
In (8), $\sum_{i=1}^{\infty}\omega_i^{(1)}T_i$ is called a superposition perturbation, where each $T_i: H \to H$ is a perturbed operator in the sense of (6); that is, $T_i$ is a $\vartheta_i$-strongly monotone and $\mu_i$-strictly pseudo-contractive mapping, for each $i \in \mathbb{N}$.
The iterative sequence $\{x_n\}$ generated by (8) is proved to converge strongly to $p_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, which solves the variational inequality:
$$\langle p_0 - f(p_0), p_0 - u\rangle \le 0, \quad \forall u \in \bigcap_{i=1}^{\infty} N(A_i + B_i).$$
In 2019, Wei et al. proposed some new iterative algorithms. The inertial forward–backward iterative algorithm for approximating the solution of the system of monotone inclusions (3) in [18] is as follows:
$$\begin{cases} x_0, x_1 \in H \ \text{chosen arbitrarily}, \quad e_1 \in H \ \text{chosen arbitrarily},\\ y_n = x_n + k_n(x_n - x_{n-1}),\\ w_n = \alpha_n x_n + \beta_n\sum_{i=1}^{\infty}\omega_{n,i}(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)y_n + \gamma_n e_n,\\ C_1 = H = Q_1,\\ C_{n+1} = \{p \in C_n: \|w_n - p\|^2 \le (1-\gamma_n)\|x_n - p\|^2 + \gamma_n\|e_n - p\|^2 + k_n^2\|x_n - x_{n-1}\|^2 - 2\beta_n k_n\langle x_n - p, x_{n-1} - x_n\rangle\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|x_1 - P_{C_{n+1}}(x_1)\|^2 + \sigma_{n+1}\},\\ x_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{10}$$
The result that $x_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1) \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, as $n \to \infty$, is proved under some conditions.
The mid-point inertial forward–backward iterative algorithm in [18] is presented as follows:
$$\begin{cases} x_0, x_1 \in H \ \text{chosen arbitrarily}, \quad e_1 \in H \ \text{chosen arbitrarily}, \quad z_0 = x_0,\\ z_n = \delta_n\lambda f(x_n) + (I - \delta_n F)x_n,\\ v_n = z_n + k_n(z_n - z_{n-1}),\\ w_n = \alpha_n v_n + \beta_n\sum_{i=1}^{\infty}\omega_{n,i}(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{v_n + w_n}{2}\Big) + \gamma_n e_n,\\ C_1 = H = Q_1,\\ C_{n+1} = \Big\{p \in C_n: \|w_n - p\|^2 \le \dfrac{2\alpha_n + \beta_n}{2 - \beta_n}\|z_n - p\|^2 + \dfrac{2\gamma_n}{2 - \beta_n}\|e_n - p\|^2 + \dfrac{2\alpha_n + \beta_n}{2 - \beta_n}k_n^2\|z_n - z_{n-1}\|^2 - 2\dfrac{2\alpha_n + \beta_n}{2 - \beta_n}k_n\langle z_n - p, z_{n-1} - z_n\rangle\Big\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|x_1 - P_{C_{n+1}}(x_1)\|^2 + \sigma_{n+1}\},\\ x_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}, \end{cases} \tag{11}$$
where $f$ is a contraction and $F$ is a strongly positive linear bounded mapping. The result that $x_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1) = P_{\bigcap_{i=1}^{\infty}N(A_i+B_i)}(x_1)$, as $n \to \infty$, is proved under some conditions. Furthermore, under the additional assumptions that $\tilde{x} = P_{\bigcap_{i=1}^{\infty}N(A_i+B_i)}(x_1)$ and $\tilde{x} = P_{\bigcap_{i=1}^{\infty}N(A_i+B_i)}[\lambda f(\tilde{x}) - F(\tilde{x}) + \tilde{x}]$, one has that $\tilde{x}$ solves the variational inequality
$$\langle F\tilde{x} - \lambda f(\tilde{x}), \tilde{x} - z\rangle \le 0, \quad \forall z \in \bigcap_{i=1}^{\infty} N(A_i + B_i).$$
The inertial forward–backward iterative algorithm in [18] for approximating a common solution of monotone inclusions and one kind of variational inequality, where $T: C \to H$ is maximal monotone and $\tau$-Lipschitz continuous, is presented as follows:
$$\begin{cases} u_0, u_1 \in C \ \text{chosen arbitrarily}, \quad e_1 \in H \ \text{chosen arbitrarily},\\ y_0 = P_C(u_0 - \lambda_0 Tu_0), \quad y_n = P_C(u_n - \lambda_n Tu_n),\\ v_n = y_n + k_n(y_n - y_{n-1}),\\ w_n = \alpha_n v_n + \beta_n\sum_{i=1}^{\infty}\omega_{n,i}(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)P_C(u_n - \lambda_n Tu_n) + \gamma_n e_n,\\ C_1 = C = Q_1,\\ C_{n+1} = \{p \in C_n: \|w_n - p\|^2 \le \alpha_n\|y_n - p\|^2 + \beta_n\|u_n - p\|^2 + \gamma_n\|e_n - p\|^2 + k_n^2\|y_n - y_{n-1}\|^2 - 2\alpha_n k_n\langle y_n - p, y_{n-1} - y_n\rangle\},\\ Q_{n+1} = \{p \in C_{n+1}: \|u_1 - p\|^2 \le \|u_1 - P_{C_{n+1}}(u_1)\|^2 + \sigma_{n+1}\},\\ u_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{12}$$
The result that $u_n \to P_{\bigcap_{i=1}^{\infty}N(A_i+B_i)\cap VI(C,T)}(u_1)$, as $n \to \infty$, is proved.
Although two sets $C_n$ and $Q_n$ are needed in (10)–(12), infinitely many choices of the iterative sequences can be made from them, an idea that is totally different from that in (4) or (8).
Motivated by the above work, in this paper we construct some new forward–backward multi-choice iterative algorithms with superposition perturbations in a real Hilbert space. Furthermore, some strong convergence theorems for approximating common solutions of monotone inclusions and variational inequalities are proved under mild conditions.
To begin our study, the following preliminaries are needed.
Definition 1 
([19]). For each $x \in H$, there exists a unique element $x_0 \in C$ such that $\|x - x_0\| = \inf\{\|x - y\|: y \in C\}$. The metric projection mapping $P_C: H \to C$ is defined by $P_C x = x_0$, for any $x \in H$.
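As a concrete illustration (not from the paper), the projection onto a closed ball in $\mathbb{R}^n$ has the closed form below; the snippet also checks the defining minimizing property numerically. All names and values are hypothetical.

```python
import numpy as np

# Metric projection P_C onto the closed ball C = {y : ||y - c|| <= rho} in R^n,
# used as a stand-in for a closed convex subset of H (illustrative only).
def project_ball(x, c, rho):
    d = x - c
    nd = np.linalg.norm(d)
    return x.copy() if nd <= rho else c + rho * d / nd

rng = np.random.default_rng(2)
c, rho = rng.standard_normal(3), 1.5
x = 5.0 * rng.standard_normal(3)
x0 = project_ball(x, c, rho)

# P_C(x) should minimize ||x - y|| over y in C; compare against random samples of C.
dirs = rng.standard_normal((10000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = rho * rng.uniform(0.0, 1.0, (10000, 1)) ** (1.0 / 3.0)
samples = c + radii * dirs
print(np.linalg.norm(x - x0) <= np.linalg.norm(x - samples, axis=1).min() + 1e-9)
```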
Lemma 1 
([20]). For a contraction $f: H \to H$, there is a unique element $x \in H$ that satisfies $f(x) = x$.
Lemma 2 
([19]). For a monotone operator $A: H \to H$ and $r > 0$, one has that $(I + rA)^{-1}: H \to H$ is non-expansive.
Lemma 3 
([21]). If $T_i: C \to C$ is non-expansive for each $i \in \mathbb{N}$ and $\sum_{i=1}^{\infty} a_i = 1$ with $\{a_i\} \subset (0,1)$, then $\sum_{i=1}^{\infty} a_i T_i$ is non-expansive with $F\big(\sum_{i=1}^{\infty} a_i T_i\big) = \bigcap_{i=1}^{\infty} F(T_i)$, under the assumption that $\bigcap_{i=1}^{\infty} F(T_i) \neq \emptyset$.
Lemma 4 
([15]). If $S: C \to H$ is a single-valued mapping and $T: H \to 2^H$ is maximal monotone, then
$$F\big((I + rT)^{-1}(I - rS)\big) = N(T + S),$$
for $r > 0$.
Definition 2 
([22]). Suppose $\{K_n\}$ is a sequence of non-empty closed and convex subsets of $H$. Then:
(1) The strong lower limit of $\{K_n\}$, $s\text{-}\liminf K_n$, is defined as the set of all $x \in H$ such that there exists $x_n \in K_n$ for almost all $n$ with $x_n \to x$ in norm as $n \to \infty$.
(2) The weak upper limit of $\{K_n\}$, $w\text{-}\limsup K_n$, is defined as the set of all $x \in H$ such that there exist a subsequence $\{K_{n_m}\}$ of $\{K_n\}$ and $x_{n_m} \in K_{n_m}$ for every $n_m$ with $x_{n_m} \rightharpoonup x$ as $n_m \to \infty$.
(3) The limit of $\{K_n\}$, $\lim K_n$, is the common value when $s\text{-}\liminf K_n = w\text{-}\limsup K_n$.
Lemma 5 
([22]). Let $\{K_n\}$ be a decreasing sequence of closed and convex subsets of $H$, i.e., $K_n \subset K_m$ if $n \ge m$. Then, $\{K_n\}$ converges in $H$ and $\lim K_n = \bigcap_{n=1}^{\infty} K_n$.
Lemma 6 
([23]). If $\lim K_n$ exists and is not empty, then $P_{K_n}x \to P_{\lim K_n}x$ for every $x \in H$, as $n \to \infty$.
Lemma 7 
([24]). Let $r \in (0, +\infty)$. Then, there exists a continuous, strictly increasing, and convex function $g: [0, 2r] \to [0, +\infty)$ with $g(0) = 0$ such that $\|kx + (1-k)y\|^2 \le k\|x\|^2 + (1-k)\|y\|^2 - k(1-k)g(\|x - y\|)$, for all $k \in [0,1]$ and $x, y \in H$ with $\|x\| \le r$ and $\|y\| \le r$.
Lemma 8 
([25]). Let $B: H \to H$ be a $\vartheta$-strongly monotone and $\mu$-strictly pseudo-contractive mapping with $\vartheta + \mu > 1$. Then, for any fixed number $\delta \in (0,1)$, $I - \delta B$ is a contraction with contractive constant $1 - \delta\Big(1 - \sqrt{\frac{1-\vartheta}{\mu}}\Big)$.
Lemma 9 
([15]). Suppose $F: H \to H$ is a strongly positive bounded mapping with coefficient $\xi > 0$ and $0 < \rho \le \|F\|^{-1}$; then $\|I - \rho F\| \le 1 - \rho\xi$.
Lemma 10 
([15]). Let $f: H \to H$ be a contraction with contractive constant $k \in (0,1)$, $F: H \to H$ be a strongly positive bounded mapping with coefficient $\xi > 0$, and $U: H \to H$ be a non-expansive mapping. Suppose $0 < \eta \le \frac{\xi}{2k}$ and $F(U) \neq \emptyset$. If, for each $t \in (0,1)$, $T_t: H \to H$ is defined by
$$T_t x := t\eta f(x) + (I - tF)Ux,$$
then $T_t$ has a fixed point $x_t$, for each $t \in (0, \|F\|^{-1}]$. Moreover, $x_t \to q_0$, as $t \to 0$, where $q_0 \in F(U)$ satisfies the variational inequality:
$$\langle Fq_0 - \eta f(q_0), q_0 - z\rangle \le 0, \quad \forall z \in F(U).$$
Lemma 11 
([15]). In a real Hilbert space $H$, the following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad \forall x, y \in H.$$
Lemma 12 
([26]). Let $\{x_n\}$ and $\{b_n\}$ be two sequences of non-negative real numbers satisfying
$$x_{n+1} \le (1 - t_n)x_n + b_n, \quad n \in \mathbb{N},$$
where $\{t_n\} \subset (0,1)$ with $\sum_{n=1}^{\infty} t_n = +\infty$ and $t_n \to 0$, as $n \to \infty$. If $\limsup_{n\to\infty}\frac{b_n}{t_n} \le 0$, then $\lim_{n\to\infty} x_n = 0$.
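A quick numerical illustration of Lemma 12 (not from the paper), with the arbitrary choices $t_n = 1/\sqrt{n}$ and $b_n = t_n/n$, so that $\sum t_n = +\infty$, $t_n \to 0$, and $b_n/t_n \to 0$; the simulated $x_n$ indeed decays to 0.

```python
import numpy as np

# Illustrative simulation of Lemma 12 with t_n = 1/sqrt(n) and b_n = t_n / n
# (so sum t_n diverges, t_n -> 0, and b_n / t_n = 1/n -> 0).
x = 10.0
for n in range(1, 200001):
    t_n = 1.0 / np.sqrt(n)
    b_n = t_n / n
    x = (1.0 - t_n) * x + b_n   # worst case: take the recursive bound with equality
print(f"x_n after 2e5 steps: {x:.2e}")   # expected: close to 0
```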
Lemma 13 
([17]). Let $H$ be a real Hilbert space, $A_i: H \to H$ be maximal monotone, $B_i: H \to H$ be $\theta_i$-inversely strongly monotone, and $W_i: H \to H$ be $\vartheta_i$-strongly monotone and $\mu_i$-strictly pseudo-contractive with $\vartheta_i + \mu_i > 1$, for $i \in \mathbb{N}$. Suppose $0 < r_{n,i} \le 2\theta_i$ for $i, n \in \mathbb{N}$, $k_t \in (0,1)$ for $t \in (0,1)$, $\sum_{n=1}^{\infty} c_n\|W_n\| < +\infty$, $\sum_{i=1}^{\infty} a_i = 1 = \sum_{i=1}^{\infty} c_i$, and $\bigcap_{i=1}^{\infty} N(A_i + B_i) \neq \emptyset$. If, for each $t \in (0,1)$, $Z_t^n: H \to H$ is defined by
$$Z_t^n u = tf(u) + (1 - t)\Big(I - k_t\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)u,$$
then $Z_t^n$ has a fixed point $u_t^n$. That is,
$$u_t^n = tf(u_t^n) + (1 - t)\Big(I - k_t\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)u_t^n.$$
Moreover, if $\frac{k_t}{t} \to 0$, then $u_t^n \to p_0$, as $t \to 0$, where $p_0$ is the solution of the variational inequality:
$$\langle p_0 - f(p_0), p_0 - z\rangle \le 0, \quad \forall z \in \bigcap_{i=1}^{\infty} N(A_i + B_i).$$

2. Strong Convergence Theorems

Our discussion in this section is based on the following assumptions:
(a) $H$ is a real Hilbert space.
(b) $A_i: H \to H$ is maximal monotone and $B_i: H \to H$ is $\theta_i$-inversely strongly monotone, for each $i \in \mathbb{N}$.
(c) $f: H \to H$ is a contraction with contractive constant $k \in (0, \frac{1}{2}]$. Furthermore, if $\langle f(x) - x, y - x\rangle = 0$, then $x = 0$ or $y = x$, for $x, y \in H$.
(d) $F: H \to H$ is a strongly positive linear bounded mapping with coefficient $\xi > 0$ and $\langle F(x) - \eta f(x) + f(y) - y, x - y\rangle \ge 0$, for $x, y \in H$.
(e) $W_i: H \to H$ is $\vartheta_i$-strongly monotone and $\mu_i$-strictly pseudo-contractive, for $i \in \mathbb{N}$.
(f) $\{e_n\} \subset H$ and $\{\varepsilon_n\} \subset H$ are the sequences of computational errors.
(g) $\{a_i\}$ and $\{c_i\}$ are two real number sequences in $(0,1)$ with $\sum_{i=1}^{\infty} a_i = \sum_{i=1}^{\infty} c_i = 1$.
(h) $\{\alpha_n\}$, $\{\beta_n\}$, $\{\delta_n\}$, $\{\zeta_n\}$, $\{\omega_n\}$, and $\{\lambda_n\}$ are real number sequences in $(0,1)$, for $n \in \mathbb{N}$.
(i) $\{\sigma_n\}$ and $\{r_{n,i}\}$ are real number sequences in $(0, +\infty)$, for $n, i \in \mathbb{N}$.
Theorem 1. 
Let { x n } be generated by the following iterative algorithm:
$$\begin{cases} x_1, y_1 \in H \ \text{chosen arbitrarily}, \quad e_1, \varepsilon_1 \in H \ \text{chosen arbitrarily}, \quad C_1 = H = Q_1,\\ u_n = \omega_n x_n + \varepsilon_n,\\ v_n = \beta_n u_n + (1 - \beta_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n,\\ z_n = \delta_n f(x_n) + (1 - \delta_n)\Big(I - \zeta_n\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n,\\ w_n = \alpha_n x_n + (1 - \alpha_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n,\\ x_{n+1} = \lambda_n\eta f(x_n) + (I - \lambda_n F)w_n + e_n,\\ C_{n+1} = \{p \in C_n: 2\langle\alpha_n x_n + (1 - \alpha_n)z_n - w_n, p\rangle \le \alpha_n\|x_n\|^2 + (1 - \alpha_n)\|z_n\|^2 - \|w_n\|^2\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|P_{C_{n+1}}(x_1) - x_1\|^2 + \sigma_{n+1}\},\\ y_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{13}$$
Under the following assumptions:
(i) $0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$;
(ii) $\mu_i + \vartheta_i > 1$, $\mu_i \in (0,1)$, and $\vartheta_i \in (0,1)$, for $i \in \mathbb{N}$;
(iii) $0 < r_{n,i} \le 2\theta_i$, for $i, n \in \mathbb{N}$;
(iv) $\sigma_n \to 0$, $\alpha_n \to 0$, $\beta_n \to 0$, $\delta_n \to 0$, and $\zeta_n \to 0$, as $n \to \infty$;
(v) $0 < \eta < \frac{\xi}{2k}$;
(vi) $\sum_{i=1}^{\infty} c_i\|W_i\| < +\infty$, $\sum_{n=1}^{\infty}\|e_n\| < +\infty$, $\sum_{n=1}^{\infty}\|\varepsilon_n\| < +\infty$, and $\sum_{n=1}^{\infty}(1 - \omega_n) < +\infty$;
(vii) $\frac{\delta_n}{\lambda_n} \to 0$, $\frac{\|e_n\|}{\lambda_n} \to 0$, $\frac{\|\varepsilon_n\|}{\lambda_n} \to 0$, $\frac{1 - \omega_n}{\lambda_n} \to 0$, and $\frac{\zeta_n}{\lambda_n} \to 0$, as $n \to \infty$;
(viii) $\sum_{n=1}^{\infty}\lambda_n = +\infty$ and $\lambda_n \to 0$;
one has $x_n \to q_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, as $n \to \infty$, where $q_0$ satisfies the following variational inequalities:
$$\langle Fq_0 - \eta f(q_0), q_0 - y\rangle \le 0, \quad \forall y \in \bigcap_{i=1}^{\infty} N(A_i + B_i), \tag{14}$$
and
$$\langle q_0 - f(q_0), q_0 - y\rangle \le 0, \quad \forall y \in \bigcap_{i=1}^{\infty} N(A_i + B_i). \tag{15}$$
Moreover, $y_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1) \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, as $n \to \infty$, which means
$$\Big\langle Fq_0 - \eta f(q_0),\ q_0 - P_{\bigcap_{m=1}^{\infty}C_m}(x_1)\Big\rangle \le 0, \qquad \Big\langle q_0 - f(q_0),\ q_0 - P_{\bigcap_{m=1}^{\infty}C_m}(x_1)\Big\rangle \le 0.$$
Proof. 
We split the proof into eleven steps.
  • Step 1. { v n } is well-defined.
For $s \in (0,1)$, define $U_s: H \to H$ by
$$U_s x := su + (1 - s)Tx, \tag{16}$$
for any $x \in H$ and a fixed element $u \in H$, where $T: H \to H$ is any fixed non-expansive mapping.
It is easy to check that $\|U_s x - U_s y\| = (1 - s)\|Tx - Ty\| \le (1 - s)\|x - y\|$. Thus, $U_s$ is a contraction, which ensures from Lemma 1 that there exists $x_s \in H$ such that $U_s x_s = x_s$; that is, $x_s = su + (1 - s)Tx_s$.
Since $0 < r_{n,i} \le 2\theta_i$, for $i, n \in \mathbb{N}$, one has, for any $x, y \in H$,
$$\begin{aligned}\|(I - r_{n,i}B_i)x - (I - r_{n,i}B_i)y\|^2 &= \|(x - y) - r_{n,i}(B_i x - B_i y)\|^2\\ &= \|x - y\|^2 - 2r_{n,i}\langle x - y, B_i x - B_i y\rangle + r_{n,i}^2\|B_i x - B_i y\|^2\\ &\le \|x - y\|^2 + r_{n,i}(r_{n,i} - 2\theta_i)\|B_i x - B_i y\|^2 \le \|x - y\|^2.\end{aligned}$$
This ensures that $(I - r_{n,i}B_i): H \to H$ is non-expansive, for $i, n \in \mathbb{N}$. Since $\sum_{i=1}^{\infty} a_i = 1$, Lemmas 2–4 imply that $\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i): H \to H$ is non-expansive, for $n \in \mathbb{N}$. Moreover, $F\big(\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\big) = \bigcap_{i=1}^{\infty} N(A_i + B_i)$.
Taking $T$ in (16) as $\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)$, one can see that $\{v_n\}$ is well-defined.
  • Step 2. $C_n$ is a non-empty closed and convex subset of $H$, for any $n \in \mathbb{N}$.
From the construction of $C_n$, we can easily see that $C_n$ is a closed and convex subset of $H$, for any $n \in \mathbb{N}$. We are left to show that $C_n \neq \emptyset$. For this, it suffices to show that $\bigcap_{i=1}^{\infty} N(A_i + B_i) \subset C_n$, for $n \ge 2$.
In fact, for any $p \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, one has
$$\|w_n - p\|^2 \le \alpha_n\|x_n - p\|^2 + (1 - \alpha_n)\Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n - p\Big\|^2 \le \alpha_n\|x_n - p\|^2 + (1 - \alpha_n)\|z_n - p\|^2.$$
Then,
$$2\langle\alpha_n x_n + (1 - \alpha_n)z_n - w_n, p\rangle \le \alpha_n\|x_n\|^2 + (1 - \alpha_n)\|z_n\|^2 - \|w_n\|^2,$$
which implies that $p \in C_n$, for $n \ge 2$. Therefore, $\bigcap_{i=1}^{\infty} N(A_i + B_i) \subset C_n$, and then $C_n \neq \emptyset$, for all $n \in \mathbb{N}$.
  • Step 3. Q n is a non-empty subset of H, for each n N , which ensures that { y n } is well-defined.
It follows from Step 2 and Definition 1 that, for $\sigma_{n+1} > 0$, there exists $b_{n+1} \in C_{n+1}$ such that $\|x_1 - b_{n+1}\|^2 \le \big(\inf_{z\in C_{n+1}}\|x_1 - z\|\big)^2 + \sigma_{n+1} = \|P_{C_{n+1}}(x_1) - x_1\|^2 + \sigma_{n+1}$. Thus, $Q_{n+1} \neq \emptyset$, for $n \in \mathbb{N}$, and then $\{y_n\}$ is well-defined.
  • Step 4. $P_{C_n}(x_1) \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1)$, as $n \to \infty$.
It follows from Lemma 5 that $\lim C_n$ exists and $\lim C_n = \bigcap_{n=1}^{\infty} C_n$. Then, Lemma 6 implies that $P_{C_n}(x_1) \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1)$, as $n \to \infty$.
  • Step 5. $y_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1)$, as $n \to \infty$.
Since $y_{n+1} \in Q_{n+1} \subset C_{n+1}$ and $C_{n+1}$ is a convex subset of $H$, for $t \in (0,1)$, $tP_{C_{n+1}}(x_1) + (1 - t)y_{n+1} \in C_{n+1}$, which implies that
$$\|P_{C_{n+1}}(x_1) - x_1\| \le \|tP_{C_{n+1}}(x_1) + (1 - t)y_{n+1} - x_1\|. \tag{17}$$
Using Lemma 7, one has:
$$\begin{aligned}\|tP_{C_{n+1}}(x_1) + (1 - t)y_{n+1} - x_1\|^2 &= \|t(P_{C_{n+1}}(x_1) - x_1) + (1 - t)(y_{n+1} - x_1)\|^2\\ &\le t\|P_{C_{n+1}}(x_1) - x_1\|^2 + (1 - t)\|y_{n+1} - x_1\|^2 - t(1 - t)g(\|P_{C_{n+1}}(x_1) - y_{n+1}\|).\end{aligned} \tag{18}$$
From (17) and (18), we have $tg(\|P_{C_{n+1}}(x_1) - y_{n+1}\|) \le \|y_{n+1} - x_1\|^2 - \|P_{C_{n+1}}(x_1) - x_1\|^2 \le \sigma_{n+1}$. Letting $t \to 1$ first and then $n \to \infty$, one has $\|P_{C_{n+1}}(x_1) - y_{n+1}\| \to 0$, as $n \to \infty$. Combining this with Step 4, $y_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1)$, as $n \to \infty$.
  • Step 6. { u n } , { v n } , { z n } , { w n } and { x n } are all bounded.
For $p \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, one has, for any $n \in \mathbb{N}$,
$$\|u_n - p\| \le \omega_n\|x_n - p\| + (1 - \omega_n)\|p\| + \|\varepsilon_n\|. \tag{19}$$
Furthermore, $\|v_n - p\| \le \beta_n\|u_n - p\| + (1 - \beta_n)\|v_n - p\|$ implies that, for any $n \in \mathbb{N}$,
$$\|v_n - p\| \le \|u_n - p\|. \tag{20}$$
In view of Lemma 8 and (20), one has
z n p δ n f ( x n ) f ( p ) + δ n f ( p ) p + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n p δ n k x n p + δ n f ( p ) p + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( v n p ) + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) p p δ n k x n p + δ n f ( p ) p + ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] u n p + ( 1 δ n ) ζ n i = 1 c i W i p
Note that, for any $n \in \mathbb{N}$,
$$\|w_n - p\| \le \alpha_n\|x_n - p\| + (1 - \alpha_n)\|z_n - p\|. \tag{22}$$
Now, in view of Lemma 9, one has
$$\|x_{n+1} - p\| \le \lambda_n\|\eta f(x_n) - Fp\| + \|(I - \lambda_n F)(w_n - p)\| + \|e_n\| \le \lambda_n\eta k\|x_n - p\| + \lambda_n\|\eta f(p) - Fp\| + (1 - \lambda_n\xi)\|w_n - p\| + \|e_n\|. \tag{23}$$
Combining inequalities (19)–(23), by induction, one has
x n + 1 p { λ n η k + ( 1 λ n ξ ) α n + ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] ω n } x n p + λ n η f ( p ) F ( p ) + e n + ( 1 λ n ξ ) ( 1 α n ) δ n f ( p ) p + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ζ n i = 1 c i W i p + ( 1 λ n ξ ) ( 1 α n ) ( 1 ω n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] p + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] ε n { λ n η k + ( 1 λ n ξ ) α n + ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] } x n p + λ n ( ξ η k ) η f ( p ) F ( p ) ξ η k + e n + ( 1 λ n ξ ) ( 1 α n ) δ n ( 1 k ) f ( p ) p 1 k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ζ n ( 1 i = 1 c i 1 ϑ i μ i ) i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i + ( 1 ω n ) p + ε n m a x { x n p , η f ( p ) F ( p ) ξ η k , f ( p ) p 1 k , i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i } + e n + ( 1 ω n ) p + ε n m a x { x 1 p , η f ( p ) F ( p ) ξ η k , f ( p ) p 1 k , i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i } + i = 1 n e i + i = 1 n ( 1 ω i ) p + i = 1 n ε i
Based on the assumptions, $\{x_n\}$ is bounded. Following (19)–(22), it is easy to see that $\{u_n\}$, $\{v_n\}$, $\{z_n\}$, and $\{w_n\}$ are all bounded.
Note that, for $p \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, $\big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n - p\big\| \le \|v_n - p\|$. Then, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n\}$ is bounded. Similarly, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n\}$ is bounded.
Since i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n , then { i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n } is bounded.
  • Step 7. There exists $q_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, which is the solution of the variational inequality (14).
It follows from Lemma 10 that there exists $z_t$ such that
$$z_t = t\eta f(z_t) + (I - tF)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_t,$$
and $z_t \to q_0$, as $t \to 0$, where $q_0$ is the solution of (14).
  • Step 8. $\limsup_{n\to\infty}\langle\eta f(q_0) - Fq_0, x_{n+1} - q_0\rangle \le 0$, where $q_0$ is the same as that in Step 7.
Note that
w n z n = w n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) z n + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( z n v n ) + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n v n + v n z n = α n [ x n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) z n ] + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( z n v n ) + β n [ i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n u n ] + v n z n α n x n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) z n + 2 z n v n + β n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n u n
Furthermore,
z n v n δ n f ( x n ) + ζ n ( 1 δ n ) i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n + β n u n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n + δ n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n
Since $\{x_n\}$, $\{u_n\}$, $\{v_n\}$, $\{z_n\}$, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n\}$, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n\}$, and $\{\sum_{i=1}^{\infty} c_i W_i\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n\}$ are bounded, then, based on the assumptions and (24) and (25), $\|w_n - z_n\| \to 0$, as $n \to \infty$.
Let $z_t$ be the same as that in Step 7; then $\|z_t\| \le \|z_t - q_0\| + \|q_0\|$, which implies that $\{z_t\}$ is bounded.
Note that
$$\begin{aligned}\Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n - w_n\Big\| &\le \Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)(w_n - z_n)\Big\| + \Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n - w_n\Big\|\\ &\le \|w_n - z_n\| + \Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n - w_n\Big\|\\ &= \|w_n - z_n\| + \alpha_n\Big\|x_n - \sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n\Big\|.\end{aligned}$$
Since $\alpha_n \to 0$ and $\|w_n - z_n\| \to 0$, as $n \to \infty$, then $\big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n - w_n\big\| \to 0$, as $n \to \infty$. In view of Lemma 11,
z t w n 2 = z t i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n w n 2 z t i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n 2 + 2 z t w n , i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n w n z t w n 2 + 2 z t i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n , t η f ( z t ) t F ( i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) z t ) + 2 z t w n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) w n w n
Therefore,
$$t\Big\langle z_t - \sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n,\ F\Big(\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_t\Big) - \eta f(z_t)\Big\rangle \le \|z_t - w_n\|\,\Big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n - w_n\Big\|,$$
which implies that
$$\lim_{t\to 0}\limsup_{n\to\infty}\Big\langle z_t - \sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n,\ F\Big(\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_t\Big) - \eta f(z_t)\Big\rangle \le 0.$$
Since $z_t \to q_0$ as $t \to 0$, then
$$\limsup_{n\to\infty}\Big\langle q_0 - \sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n,\ Fq_0 - \eta f(q_0)\Big\rangle \le 0.$$
Since $\big\|\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)w_n - w_n\big\| \to 0$, $\|w_n - z_n\| \to 0$, and $x_{n+1} - w_n = \lambda_n(\eta f(x_n) - Fw_n) + e_n \to 0$, then $\limsup_{n\to\infty}\langle q_0 - x_{n+1}, Fq_0 - \eta f(q_0)\rangle \le 0$.
  • Step 9. $x_n \to q_0$, as $n \to \infty$, where $q_0$ is the same as that in Steps 7 and 8.
In fact, using Lemma 11 again, one has
$$\begin{aligned}\|u_n - q_0\|^2 &= \|\omega_n(x_n - q_0) + (\omega_n - 1)q_0 + \varepsilon_n\|^2\\ &\le \omega_n\|x_n - q_0\|^2 + 2\langle\varepsilon_n, u_n - q_0\rangle + 2(1 - \omega_n)\langle q_0, q_0 - u_n\rangle\\ &\le \omega_n\|x_n - q_0\|^2 + 2\|\varepsilon_n\|\,\|u_n - q_0\| + 2(1 - \omega_n)\|q_0\|\,\|u_n - q_0\|.\end{aligned} \tag{27}$$
Furthermore, $\|v_n - q_0\|^2 \le \beta_n\|u_n - q_0\|^2 + (1 - \beta_n)\|v_n - q_0\|^2$ ensures that
$$\|v_n - q_0\|^2 \le \|u_n - q_0\|^2. \tag{28}$$
In view of Lemma 11 again, one has
z n q 0 2 = δ n ( f ( x n ) q 0 ) + ( 1 δ n ) [ ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n q 0 ] 2 ( 1 δ n ) v n q 0 2 + 2 δ n z n q 0 , f ( x n ) q 0 2 ( 1 δ n ) ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n , z n q 0 ( 1 δ n ) v n q 0 2 + 2 δ n z n q 0 , f ( x n ) f ( q 0 ) + 2 δ n z n q 0 , f ( q 0 ) q 0 + 2 ( 1 δ n ) ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n z n q 0 ( 1 δ n ) v n q 0 2 + 2 δ n k z n x n x n q 0 + 2 δ n k x n q 0 2 + 2 δ n z n q 0 , f ( q 0 ) q 0 + 2 ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n z n q 0
Note that
$$\|w_n - q_0\|^2 \le \alpha_n\|x_n - q_0\|^2 + (1 - \alpha_n)\|z_n - q_0\|^2. \tag{30}$$
Now, in view of Lemma 9 and using (27)–(30), one has
x n + 1 q 0 2 = λ n η f ( x n ) + ( I λ n F ) w n + e n q 0 2 = λ n ( η f ( x n ) F ( q 0 ) ) + ( I λ n F ) ( w n q 0 ) + e n 2 ( 1 λ n ξ ) w n q 0 2 + 2 e n , x n + 1 q 0 + 2 λ n η f ( x n ) F q 0 , x n + 1 q 0 ( 1 λ n ξ ) w n q 0 2 + 2 e n x n + 1 q 0 + 2 λ n η f ( x n ) f ( q 0 ) , x n + 1 q 0 + 2 λ n η f ( q 0 ) F q 0 , x n + 1 q 0 ( 1 λ n ξ ) w n q 0 2 + 2 e n x n + 1 q 0 + 2 λ n η k x n q 0 x n + 1 q 0 + 2 λ n η f ( q 0 ) F ( q 0 ) , x n + 1 q 0 { ( 1 λ n ξ ) α n + λ n η k + 2 ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ω n } x n q 0 2 + 2 e n x n + 1 q 0 + 2 ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ε n u n q 0 + 2 ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ( 1 ω n ) q 0 u n q 0 + 2 δ n k ( 1 λ n ξ ) ( 1 α n ) x n q 0 x n z n + 2 ( 1 λ n ξ ) ( 1 α n ) δ n z n q 0 f ( q 0 ) q 0 + 2 ( 1 λ n ξ ) ( 1 α n ) ζ n z n q 0 i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n + λ n η k x n + 1 q 0 2 + 2 λ n η f ( q 0 ) F ( q 0 ) , x n + 1 q 0 .
Let M 1 = s u p n { 2 x n + 1 q 0 , 2 u n q 0 , 2 k x n q 0 x n z n , 2 u n q 0 q 0 , 2 f ( q 0 ) q 0 z n q 0 ,   2 z n q 0 i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) v n : n N } . Then, from Step 6, one has M 1 < + .
Therefore, it follows from (31) that
( 1 λ n η k ) x n + 1 q 0 2 { ( 1 λ n ξ ) α n + λ n η k + 2 ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) } x n q 0 2 + [ e n + ε n + ( 1 ω n ) + 2 δ n + ζ n ] M 1 + 2 λ n η f ( q 0 ) F ( q 0 ) , x n + 1 q 0
If we set $b_n^{(1)} = \frac{\lambda_n(\xi - 2\eta k)}{1 - \lambda_n\eta k}$ and $b_n^{(2)} = \frac{M_1}{1 - \lambda_n\eta k}\big[\|e_n\| + \|\varepsilon_n\| + (1 - \omega_n) + 2\delta_n + \zeta_n\big] + \frac{2\lambda_n}{1 - \lambda_n\eta k}\langle\eta f(q_0) - Fq_0, x_{n+1} - q_0\rangle$, then (32) can be rewritten as follows:
$$\|x_{n+1} - q_0\|^2 \le (1 - b_n^{(1)})\|x_n - q_0\|^2 + b_n^{(2)}.$$
Based on the assumptions and Step 8, we know that $b_n^{(1)} \to 0$, $\sum_{n=1}^{\infty} b_n^{(1)} = +\infty$, and $\limsup_{n\to\infty}\frac{b_n^{(2)}}{b_n^{(1)}} \le 0$. Then, from Lemma 12, $x_n \to q_0$, as $n \to \infty$.
  • Step 10. There exists $p_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, which is the solution of the variational inequality
$$\langle p_0 - f(p_0), p_0 - y\rangle \le 0, \quad \forall y \in \bigcap_{i=1}^{\infty} N(A_i + B_i). \tag{33}$$
In fact, it follows from Lemma 13 that there exists $u_t^n$ such that
$$u_t^n = tf(u_t^n) + (1 - t)\Big(I - k_t\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)u_t^n,$$
and $u_t^n \to p_0$, as $t \to 0$, where $p_0$ is the solution of (33).
  • Step 11. $x_n \to p_0$, as $n \to \infty$, where $p_0$ is the same as that in Step 10.
It suffices to show that p 0 = q 0 .
Since $p_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, (14) gives
$$\langle Fq_0 - \eta f(q_0), q_0 - p_0\rangle \le 0. \tag{34}$$
Since $F$ is strongly positive linear bounded, $f$ is a contraction, and $0 < \eta < \frac{\xi}{2k}$,
$$\langle(Fq_0 - \eta f(q_0)) - (Fp_0 - \eta f(p_0)), q_0 - p_0\rangle = \langle F(q_0 - p_0), q_0 - p_0\rangle + \eta\langle f(p_0) - f(q_0), q_0 - p_0\rangle \ge \xi\|q_0 - p_0\|^2 - \eta k\|q_0 - p_0\|^2 \ge 0.$$
Therefore, (34) ensures that
$$\langle Fp_0 - \eta f(p_0), q_0 - p_0\rangle \le \langle Fq_0 - \eta f(q_0), q_0 - p_0\rangle \le 0.$$
On the other hand, it follows from (33) that
$$\langle f(p_0) - p_0, q_0 - p_0\rangle \le 0. \tag{37}$$
Combining this with (34), one has $\langle Fq_0 - \eta f(q_0) + f(p_0) - p_0, q_0 - p_0\rangle \le 0$. Following Condition (d), we know that $\langle Fq_0 - \eta f(q_0) + f(p_0) - p_0, q_0 - p_0\rangle = 0$. Then, (34) and (37) ensure that $\langle Fq_0 - \eta f(q_0), q_0 - p_0\rangle = \langle f(p_0) - p_0, q_0 - p_0\rangle = 0$.
Since $0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, from Condition (c), we know that $p_0 = 0$ or $p_0 = q_0$. If $p_0 = q_0$, then the result follows. If $p_0 = 0$, then $\langle Fq_0 - \eta f(q_0), q_0 - p_0\rangle = 0$ implies that $\langle Fq_0 - \eta f(q_0), q_0\rangle = 0$. Therefore, $\xi\|q_0\|^2 \le \langle Fq_0, q_0\rangle = \eta\langle f(q_0), q_0\rangle \le \eta k\|q_0\|^2$. Since $\xi > 2\eta k$, then $q_0 = 0$, which means that $p_0 = q_0 = 0$. Therefore, $x_n \to p_0 = q_0$, as $n \to \infty$.
This completes the proof. □
Theorem 2. 
Let { x n } be generated by the following iterative algorithm:
$$\begin{cases} x_1, y_1 \in H \ \text{chosen arbitrarily}, \quad \varepsilon_1, e_1 \in H \ \text{chosen arbitrarily}, \quad C_1 = H = Q_1,\\ u_n = \omega_n x_n + \varepsilon_n,\\ v_n = \beta_n u_n + (1 - \beta_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big),\\ z_n = \delta_n f(x_n) + (1 - \delta_n)\Big(I - \zeta_n\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big),\\ w_n = \alpha_n x_n + (1 - \alpha_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + z_n}{2}\Big),\\ x_{n+1} = \lambda_n\eta f(x_n) + (I - \lambda_n F)w_n + e_n,\\ C_{n+1} = \{p \in C_n: 2\langle\alpha_n x_n + (1 - \alpha_n)z_n - w_n, p\rangle \le \alpha_n\|x_n\|^2 + (1 - \alpha_n)\|z_n\|^2 - \|w_n\|^2\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|P_{C_{n+1}}(x_1) - x_1\|^2 + \sigma_{n+1}\},\\ y_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}. \end{cases} \tag{38}$$
Under the assumptions of Theorem 1, one has $x_n \to q_0 \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, as $n \to \infty$, where $q_0$ is the unique solution of the system of variational inequalities (14) and (15). Moreover, $y_n \to P_{\bigcap_{m=1}^{\infty}C_m}(x_1) \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, as $n \to \infty$.
Proof. 
The proof is split into eleven steps. Steps 2–5, 7, 10, and 11 are the same as in Theorem 1, and the remaining steps are modified as follows.
Step 1. { v n } is well-defined.
For $s \in (0,1)$, define $U_s: H \to H$ by $U_s x := su + (1 - s)T\big(\frac{u + x}{2}\big)$, for any $x \in H$ and a fixed $u \in H$, where $T: H \to H$ is any fixed non-expansive mapping.
It is easy to check that $\|U_s x - U_s y\| = (1 - s)\big\|T\big(\frac{u + x}{2}\big) - T\big(\frac{u + y}{2}\big)\big\| \le \frac{1 - s}{2}\|x - y\|$. Thus, $U_s$ is a contraction, which ensures from Lemma 1 that there exists $x_s \in H$ such that $U_s x_s = x_s$; that is, $x_s = su + (1 - s)T\big(\frac{u + x_s}{2}\big)$.
Taking $T$ here as $\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)$ and arguing as in Step 1 of Theorem 1, one can see that $\{v_n\}$ is well-defined.
Step 6. { u n } , { v n } , { z n } , { w n } , and { x n } are all bounded.
For $p \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, one has
$$\|v_n - p\| \le \beta_n\|u_n - p\| + (1 - \beta_n)\frac{\|u_n - p\|}{2} + (1 - \beta_n)\frac{\|v_n - p\|}{2},$$
which implies that (20) is still true.
In view of Lemma 8 and (20), one has
z n p δ n f ( x n ) f ( p ) + δ n f ( p ) p + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 ) p δ n k x n p + δ n f ( p ) p + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 p ) + ( 1 δ n ) ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) p p δ n k x n p + δ n f ( p ) p + ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] u n p 2 + v n p 2 + ( 1 δ n ) ζ n i = 1 c i W i p δ n k x n p + δ n f ( p ) p + ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] u n p + ( 1 δ n ) ζ n i = 1 c i W i p ,
which ensures that (21) is still true.
Note that
$$\|w_n - p\| \le \alpha_n\|x_n - p\| + (1 - \alpha_n)\frac{\|u_n - p\|}{2} + (1 - \alpha_n)\frac{\|z_n - p\|}{2}. \tag{39}$$
Combining with inequalities (19)–(21) and (39), one has
x n + 1 p { λ n η k + ( 1 λ n ξ ) α n + ( 1 λ n ξ ) ( 1 α n ) δ n k 2 + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] ω n 2 + ( 1 λ n ξ ) ( 1 α n ) ω n 2 } x n p + λ n η f ( p ) F ( p ) + e n + ( 1 λ n ξ ) ( 1 α n ) ( 1 ω n ) 2 p + ( 1 λ n ξ ) ( 1 α n ) ε n 2 + ( 1 λ n ξ ) ( 1 α n ) δ n 2 f ( p ) p + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) ζ n i = 1 c i W i p 2 + ( 1 λ n ξ ) ( 1 α n ) ( 1 ω n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] p 2 + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] ε n 2 { λ n η k + ( 1 λ n ξ ) α n + ( 1 λ n ξ ) ( 1 α n ) 2 + ( 1 λ n ξ ) ( 1 α n ) δ n k 2 + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n ) [ 1 ζ n ( 1 i = 1 c i 1 ϑ i μ i ) ] 2 } x n p + λ n ( ξ η k ) η f ( p ) F ( p ) ξ η k + e n + ( 1 λ n ξ ) ( 1 α n ) 2 δ n ( 1 k ) f ( p ) p 1 k + ( 1 λ n ξ ) ( 1 α n ) 2 ( 1 δ n ) ζ n ( 1 i = 1 c i 1 ϑ i μ i ) i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i + 2 ( 1 ω n ) p + 2 ε n m a x { x n p , η f ( p ) p ξ η k , f ( p ) p 1 k , i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i } + e n + 2 ( 1 ω n ) p + 2 ε n m a x { x 1 p , η f ( p ) p ξ η k , f ( p ) p 1 k , i = 1 c i W i p 1 i = 1 c i 1 ϑ i μ i } + i = 1 n e i + 2 i = 1 n ( 1 ω i ) p + 2 i = 1 n ε i
Therefore, $\{x_n\}$ is bounded. Similar to Step 6 in Theorem 1, $\{u_n\}$, $\{v_n\}$, $\{z_n\}$, $\{w_n\}$, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)(\frac{u_n + v_n}{2})\}$, $\{\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)(\frac{u_n + z_n}{2})\}$, and $\{\sum_{i=1}^{\infty} c_i W_i\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)(\frac{u_n + v_n}{2})\}$ are all bounded.
Step 8. $\limsup_{n\to\infty}\langle\eta f(q_0) - Fq_0, x_{n+1} - q_0\rangle \le 0$, where $q_0$ is the same as that in Step 7.
Note that
w n z n w n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + z n 2 + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + z n 2 u n + v n 2 ) + i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 v n + v n z n α n [ x n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + z n 2 ] + β n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 u n + 3 2 v n z n = α n x n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + z n 2 + 3 2 z n v n + β n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 u n
Furthermore,
z n v n δ n f ( x n ) + ζ n ( 1 δ n ) i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 ) + β n u n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 ) + δ n i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 )
Based on the assumptions, (40) and (41), and Step 6, one has $\|w_n - z_n\| \to 0$, as $n \to \infty$. Following the corresponding part of Step 8 in Theorem 1, one can see that $\limsup_{n\to\infty}\langle\eta f(q_0) - Fq_0, x_{n+1} - q_0\rangle \le 0$.
Step 9. $x_n \to q_0$, as $n \to \infty$, where $q_0$ is the same as that in Steps 7 and 8.
Similar to Step 9 in Theorem 1, we can easily see that both (27) and (28) are still true.
In view of Lemma 11 and (28), one has
z n q 0 2 = δ n ( f ( x n ) q 0 ) + ( 1 δ n ) [ ( I ζ n i = 1 c i W i ) i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 q 0 ] 2 ( 1 δ n ) u n + v n 2 q 0 2 + 2 δ n z n q 0 , f ( x n ) q 0 2 ( 1 δ n ) ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 , z n q 0 ( 1 δ n ) u n + v n 2 q 0 2 + 2 δ n z n q 0 , f ( x n ) f ( q 0 ) + 2 δ n z n q 0 , f ( q 0 ) q 0 + 2 ( 1 δ n ) ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 z n q 0 ( 1 δ n ) u n q 0 2 + 2 δ n k z n x n x n q 0 + 2 δ n k x n q 0 2 + 2 δ n z n q 0 , f ( q 0 ) q 0 + 2 ζ n i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) u n + v n 2 z n q 0
Note that
$$\|w_n - q_0\|^2 \le \alpha_n\|x_n - q_0\|^2 + (1 - \alpha_n)\frac{\|u_n - q_0\|^2}{2} + (1 - \alpha_n)\frac{\|z_n - q_0\|^2}{2}. \tag{43}$$
Now, in view of Lemma 9 and using (27), (28), (42), and (43), one has
x n + 1 q 0 2 = λ n ( η f ( x n ) F q 0 ) + ( I λ n F ) ( w n q 0 ) + e n 2 ( 1 λ n ξ ) w n q 0 2 + 2 e n x n + 1 q 0 + 2 λ n η f ( x n ) f ( q 0 ) , x n + 1 q 0 + 2 λ n η f ( q 0 ) F q 0 , x n + 1 q 0 { ( 1 λ n ξ ) α n + λ n η k + ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n 2 ) ω n } x n q 0 2 + 2 e n x n + 1 q 0 + ( 1 λ n ξ ) ( 1 α n ) ( 2 δ n ) ε n u n q 0 + ( 1 λ n ξ ) ( 1 α n ) ( 2 δ n ) ( 1 ω n ) q 0 u n q 0 + δ n η k ( 1 λ n ξ ) ( 1 α n ) x n q 0 x n z n + ( 1 λ n ξ ) ( 1 α n ) δ n z n q 0 f ( q 0 ) q 0 + ( 1 λ n ξ ) ( 1 α n ) ζ n z n q 0 i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 ) q 0 + λ n η k x n + 1 q 0 2 + 2 λ n η f ( q 0 ) F ( q 0 ) , x n + 1 q 0
Therefore,
( 1 λ n η k ) x n + 1 q 0 2 { ( 1 λ n ξ ) α n + λ n η k + ( 1 λ n ξ ) ( 1 α n ) δ n k + ( 1 λ n ξ ) ( 1 α n ) ( 1 δ n 2 ) } x n q 0 2 + [ e n + ε n + ( 1 ω n ) + 2 δ n + ζ n ] M 2 + 2 λ n η f ( q 0 ) F ( q 0 ) , x n + 1 q 0 ,
where M 2 = s u p n { 2 x n + 1 q 0 ,   x n q 0 z n x n ,   z n q 0 f ( q 0 ) q 0 ,   z n q 0 i = 1 c i W i i = 1 a i ( I + r n , i A i ) 1 ( I r n , i B i ) ( u n + v n 2 ) q 0 ,   u n q 0 ,   2 q 0 u n q 0 : n N } < + .
If we set $b_n^{(3)} = b_n^{(1)}$ and $b_n^{(4)} = \frac{[\|e_n\| + \|\varepsilon_n\| + (1 - \omega_n) + 2\delta_n + \zeta_n]M_2 + 2\lambda_n\langle\eta f(q_0) - F(q_0), x_{n+1} - q_0\rangle}{1 - \lambda_n\eta k}$, then
$$\|x_{n+1} - q_0\|^2 \le (1 - b_n^{(3)})\|x_n - q_0\|^2 + b_n^{(4)}.$$
Similar to Step 9 in Theorem 1, in view of Lemma 12, we have $x_n \to q_0$, as $n \to \infty$.
This completes the proof. □
Remark 1. 
The restrictions imposed on the mappings $f$ and $F$ can indeed be satisfied. For example, take $F(x) = \frac{3}{4}x$ and $f(x) = \frac{x}{2}$, for $x \in (-\infty, +\infty)$, and take $\eta = \frac{1}{2}$, $k = \frac{1}{2}$, $\xi = \frac{3}{4}$. Then, we can easily see that $F$ is a strongly positive linear bounded mapping with coefficient $\xi$, $f$ is a contraction, and $\xi > 2\eta k$. Moreover, $\langle F(x) - \eta f(x) + f(y) - y, x - y\rangle = \frac{1}{2}\|x - y\|^2 \ge 0$, for $x, y \in (-\infty, +\infty)$. Furthermore, if $\langle f(x) - x, y - x\rangle = 0$, then $\frac{x}{2}(y - x) = 0$, which implies that $x = 0$ or $y = x$.
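A quick numerical check of Remark 1 (illustrative only): with $F(x) = \frac{3}{4}x$, $f(x) = \frac{x}{2}$, $\eta = \frac{1}{2}$, $k = \frac{1}{2}$, and $\xi = \frac{3}{4}$, the inequality in condition (d) reduces to $\frac{1}{2}(x - y)^2 \ge 0$, which the snippet below verifies on random samples.

```python
import numpy as np

# Numerical check of Remark 1 (illustrative): F(x) = 3x/4, f(x) = x/2,
# eta = 1/2, k = 1/2, xi = 3/4 on the real line.
F = lambda x: 0.75 * x
f = lambda x: 0.5 * x
eta, k, xi = 0.5, 0.5, 0.75

rng = np.random.default_rng(3)
xs, ys = rng.uniform(-10, 10, 1000), rng.uniform(-10, 10, 1000)
cond_d = (F(xs) - eta * f(xs) + f(ys) - ys) * (xs - ys)   # should equal (x - y)^2 / 2
print(np.allclose(cond_d, 0.5 * (xs - ys) ** 2), xi > 2 * eta * k)
```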
Remark 2. 
In both (13) and (38), the idea of the forward–backward splitting method is embodied, the superposition perturbation is considered, and multi-choice sets are constructed, which extends and complements the corresponding studies.
Remark 3. 
From Theorems 1 and 2, we may find that the limit $q_0$ of the iterative sequence $\{x_n\}$ is not only a solution of the system of monotone inclusions (3) but also the solution of the variational inequalities (14) and (15). That is, the studies on the iterative construction of the solution of (14) in [18] and of the solution of (15) in [17] are unified in this paper.
Remark 4. 
From Theorems 1 and 2, we may find that a relationship between the metric projection $P_{\bigcap_{m=1}^{\infty}C_m}(x_1)$ and the common solution $q_0$ of the variational inequalities and monotone inclusions is established in this paper.

3. Applications

In this section, one kind of capillarity system discussed in [18] is employed again to demonstrate the application of Theorems 1 and 2.
The discussion begins under the following assumptions:
(1) $\Omega$ is a bounded conical domain in $\mathbb{R}^n$ ($n \in \mathbb{N}$) with boundary $\Gamma \in C^1$.
(2) $\vartheta$ denotes the exterior normal derivative of $\Gamma$.
(3) $\lambda_i$ is a positive number, for $i \in \mathbb{N}$.
(4) $p_i \in \big(\frac{2n}{n+1}, +\infty\big)$, for $i \in \mathbb{N}$. Moreover, if $p_i \ge n$, then suppose $1 \le q_i, r_i < +\infty$; if $p_i < n$, then suppose $1 \le q_i, r_i \le \frac{np_i}{n - p_i}$, for $i \in \mathbb{N}$.
(5) $|\cdot|$ denotes the norm in $\mathbb{R}^n$ and $\langle\cdot,\cdot\rangle$ the inner product in $\mathbb{R}^n$.
Now, consider the following capillarity systems:
$$\begin{cases} -\operatorname{div}\Big[\Big(1 + \dfrac{|\nabla u^{(i)}|^{p_i}}{\sqrt{1 + |\nabla u^{(i)}|^{2p_i}}}\Big)|\nabla u^{(i)}|^{p_i-2}\nabla u^{(i)}\Big] + \lambda_i\big(|u^{(i)}|^{q_i-2}u^{(i)} + |u^{(i)}|^{r_i-2}u^{(i)}\big) + u^{(i)}(x) = f_i(x), & x \in \Omega,\\ \Big\langle\vartheta, \Big(1 + \dfrac{|\nabla u^{(i)}|^{p_i}}{\sqrt{1 + |\nabla u^{(i)}|^{2p_i}}}\Big)|\nabla u^{(i)}|^{p_i-2}\nabla u^{(i)}\Big\rangle = 0, & x \in \Gamma, \quad i \in \mathbb{N}. \end{cases} \tag{46}$$
Lemma 14. 
(see [18]) For $i \in \mathbb{N}$, define $A_i: L^2(\Omega) \to L^2(\Omega)$ by:
(1) $D(A_i) = \{u \in L^2(\Omega) \mid \text{there exists}\ f \in L^2(\Omega)\ \text{such that}\ f \in \tilde{A_i}u\}$, where $\tilde{A_i}: W^{1,p_i}(\Omega) \to (W^{1,p_i}(\Omega))^*$ is defined by
$$\langle w, \tilde{A_i}u\rangle = \int_{\Omega}\Big\langle\Big(1 + \frac{|\nabla u|^{p_i}}{\sqrt{1 + |\nabla u|^{2p_i}}}\Big)|\nabla u|^{p_i-2}\nabla u, \nabla w\Big\rangle dx + \lambda_i\int_{\Omega}|u(x)|^{q_i-2}u(x)w(x)dx + \lambda_i\int_{\Omega}|u(x)|^{r_i-2}u(x)w(x)dx,$$
for any $u, w \in W^{1,p_i}(\Omega)$;
(2) $A_i u = \{f \in L^2(\Omega) \mid f \in \tilde{A_i}u\}$.
Then, $A_i: L^2(\Omega) \to L^2(\Omega)$ is maximal monotone, for each $i \in \mathbb{N}$.
Lemma 15. 
(see [18]) Define $B_i: L^2(\Omega) \to L^2(\Omega)$ by
$$(B_i u)(x) = u(x) - f_i(x), \quad \text{for all}\ u \in D(B_i);$$
then $B_i$ is $\theta_i$-inversely strongly accretive, for $\theta_i \in (0, 1]$ and $i \in \mathbb{N}$.
Lemma 16. 
(see [18]) If, in (46), $f_i(x) \equiv \lambda_i(|k|^{q_i-1} + |k|^{r_i-1})\operatorname{sgn}(k) + k$, where $k$ is a constant, then $\{u^{(i)} \equiv k: i \in \mathbb{N}\}$ is the solution of the capillarity system (46). Furthermore, $\{k\} = \bigcap_{i=1}^{\infty} N(A_i + B_i)$.
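A quick numerical sanity check of Lemma 16 (illustrative only, not from the paper): for a constant function $u \equiv k$ the gradient term of (46) vanishes, and the remaining identity follows from $|k|^{q-2}k = |k|^{q-1}\operatorname{sgn}(k)$. The parameter values below are arbitrary.

```python
import numpy as np

# For constant u = k, (46) reduces to lam*(|k|**(q-2)*k + |k|**(r-2)*k) + k == f_i
# with f_i = lam*(|k|**(q-1) + |k|**(r-1))*sign(k) + k; the two sides agree
# because |k|**(q-2)*k == |k|**(q-1)*sign(k).
rng = np.random.default_rng(0)
for _ in range(5):
    k = rng.uniform(-3, 3)
    lam, q, r = rng.uniform(0.5, 2.0), rng.uniform(2, 4), rng.uniform(2, 4)
    lhs = lam * (abs(k) ** (q - 2) * k + abs(k) ** (r - 2) * k) + k
    f_i = lam * (abs(k) ** (q - 1) + abs(k) ** (r - 1)) * np.sign(k) + k
    print(abs(lhs - f_i) < 1e-12)
```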
Theorem 3. 
Suppose $f_i(x) \equiv \lambda_i(|k|^{q_i-1} + |k|^{r_i-1})\operatorname{sgn}(k) + k$, $A_i$ and $B_i$ are the same as those in Lemmas 14 and 15, $F: L^2(\Omega) \to L^2(\Omega)$ is a strongly positive linear bounded operator with coefficient $\xi > 0$, $f: L^2(\Omega) \to L^2(\Omega)$ is a contraction with contractive constant $k \in (0,1)$, and $W_i: L^2(\Omega) \to L^2(\Omega)$ is a $\vartheta_i$-strongly monotone and $\mu_i$-strictly pseudo-contractive mapping, for $i \in \mathbb{N}$.
Two iterative algorithms are constructed as follows:
$$\begin{cases} x_1, y_1 \in L^2(\Omega) \ \text{chosen arbitrarily}, \quad e_1, \varepsilon_1 \in L^2(\Omega) \ \text{chosen arbitrarily}, \quad C_1 = L^2(\Omega) = Q_1,\\ u_n = \omega_n x_n + \varepsilon_n,\\ v_n = \beta_n u_n + (1 - \beta_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n,\\ z_n = \delta_n f(x_n) + (1 - \delta_n)\Big(I - \zeta_n\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)v_n,\\ w_n = \alpha_n x_n + (1 - \alpha_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)z_n,\\ x_{n+1} = \lambda_n\eta f(x_n) + (I - \lambda_n F)w_n + e_n,\\ C_{n+1} = \{p \in C_n: 2\langle\alpha_n x_n + (1 - \alpha_n)z_n - w_n, p\rangle \le \alpha_n\|x_n\|^2 + (1 - \alpha_n)\|z_n\|^2 - \|w_n\|^2\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|P_{C_{n+1}}(x_1) - x_1\|^2 + \sigma_{n+1}\},\\ y_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}, \end{cases}$$
and
$$\begin{cases} x_1, y_1 \in L^2(\Omega) \ \text{chosen arbitrarily}, \quad \varepsilon_1, e_1 \in L^2(\Omega) \ \text{chosen arbitrarily}, \quad C_1 = L^2(\Omega) = Q_1,\\ u_n = \omega_n x_n + \varepsilon_n,\\ v_n = \beta_n u_n + (1 - \beta_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big),\\ z_n = \delta_n f(x_n) + (1 - \delta_n)\Big(I - \zeta_n\sum_{i=1}^{\infty} c_i W_i\Big)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + v_n}{2}\Big),\\ w_n = \alpha_n x_n + (1 - \alpha_n)\sum_{i=1}^{\infty} a_i(I + r_{n,i}A_i)^{-1}(I - r_{n,i}B_i)\Big(\dfrac{u_n + z_n}{2}\Big),\\ x_{n+1} = \lambda_n\eta f(x_n) + (I - \lambda_n F)w_n + e_n,\\ C_{n+1} = \{p \in C_n: 2\langle\alpha_n x_n + (1 - \alpha_n)z_n - w_n, p\rangle \le \alpha_n\|x_n\|^2 + (1 - \alpha_n)\|z_n\|^2 - \|w_n\|^2\},\\ Q_{n+1} = \{p \in C_{n+1}: \|x_1 - p\|^2 \le \|P_{C_{n+1}}(x_1) - x_1\|^2 + \sigma_{n+1}\},\\ y_{n+1} \in Q_{n+1}, \quad n \in \mathbb{N}. \end{cases}$$
Under the assumptions of Theorems 1 and 2, one has $x_n \to q_0(x) \in \bigcap_{i=1}^{\infty} N(A_i + B_i)$, where $q_0(x)$ is the common solution of the capillarity system (46) and the variational inequalities (14) and (15).

4. Conclusions

Some new forward–backward multi-choice iterative algorithms with superposition perturbations are presented in a real Hilbert space. The iterative sequences are proved to converge strongly not only to a solution of the monotone inclusions but also to the solution of the variational inequalities. In the near future, more work can be done to weaken the restrictions imposed on the contraction $f$ and the strongly positive linear bounded mapping $F$.

Author Contributions

Conceptualization, L.W., X.-W.S. and R.P.A.; methodology, L.W., X.-W.S. and R.P.A.; software, L.W., X.-W.S. and R.P.A.; validation, L.W., X.-W.S. and R.P.A.; formal analysis, L.W., X.-W.S. and R.P.A.; investigation, L.W., X.-W.S. and R.P.A.; resources, L.W., X.-W.S. and R.P.A.; data curation, L.W., X.-W.S. and R.P.A.; writing—original draft preparation, L.W., X.-W.S. and R.P.A.; writing—review and editing, L.W., X.-W.S. and R.P.A.; visualization, L.W., X.-W.S. and R.P.A.; supervision, L.W., X.-W.S. and R.P.A. All authors have read and agreed to the published version of the manuscript.

Funding

Li Wei was supported by Natural Science Foundation of Hebei Province (A2019207064), Key Project of Science and Research of Hebei Educational Department (ZD2019073), and Key Project of Science and Research of Hebei University of Economics and Business (2018ZD06).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kinderleher, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: London, UK, 1980.
  2. Jolaoso, L.O.; Shehu, Y.; Cho, Y.J. Convergence analysis for variational inequalities and fixed point problems in reflexive Banach space. J. Inequal. Appl. 2021, 44.
  3. Ceng, L.C.; Petrusel, A.; Qin, X.L.; Yao, J.C. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108.
  4. Cai, G.; Gibali, A.; Iyiola, O.S.; Shehu, Y. A new double-projection method for solving variational inequalities in Banach spaces. J. Optim. Theory Appl. 2018, 178, 219–239.
  5. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem. Comput. Appl. Math. 2020, 39, 38.
  6. Cheng, Q.Q. Parallel hybrid viscosity method for fixed point problems, variational inequality problems and split generalized equilibrium problems. J. Inequal. Appl. 2019, 2019, 1–25.
  7. Pascali, D.; Sburlan, S. Nonlinear Mappings of Monotone Type; Sijthoff & Noordhoff International Publishers: Alphen aan den Rijn, The Netherlands, 1978.
  8. Wei, L.; Shi, A.F. Splitting-midpoint method for zeros of the sum of accretive operator and μ-inversely strongly accretive operator in a q-uniformly smooth Banach space and its applications. J. Inequal. Appl. 2015, 2015, 1–17.
  9. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  10. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  11. Khan, S.A.; Suantai, S.; Cholamjiak, W. Shrinking projection methods involving inertial forward-backward splitting methods for inclusion problems. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. Mat. 2019, 113, 645–656.
  12. Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  13. Dong, Q.L.; Jiang, D.; Cholamjiak, P.; Shehu, Y. A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 2017, 19, 3097–3118.
  14. Qin, X.L.; Wang, L.; Yao, J.C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, 21, 2325–2333.
  15. Wei, L.; Agarwal, R.P. A new iterative algorithm for the sum of infinite m-accretive mappings and infinite μi-inversely strongly accretive mappings and its applications to integro-differential systems. Fixed Point Theory Appl. 2016, 2016, 1–22.
  16. Ceng, L.C.; Ansari, Q.H.; Schaible, S. Hybrid viscosity approximation method for zeros of m-accretive operators in Banach space. Numer. Funct. Anal. Optim. 2012, 33, 142–165.
  17. Wei, L.; Duan, L.L.; Agarwal, R.P.; Chen, R.; Zheng, Y.Q. Modified forward-backward splitting midpoint method with superposition perturbations for the sum of two kinds of infinite accretive mappings and its applications. J. Inequal. Appl. 2017, 2017, 1–22.
  18. Wei, L.; Shang, Y.Z.; Agarwal, R.P. New inertial forward-backward mid-point methods for sum of infinitely many accretive mappings, variational inequalities, and applications. Mathematics 2019, 7, 466.
  19. Takahashi, W. Nonlinear Functional Analysis. Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000.
  20. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: New York, NY, USA, 2009.
  21. Bruck, R.E. Properties of fixed-point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 1973, 179, 251–262.
  22. Mosco, U. Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 1969, 3, 510–585.
  23. Tsukada, M. Convergence of best approximations in a smooth Banach space. J. Approx. Theory 1984, 40, 301–309.
  24. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  25. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces. Numer. Funct. Anal. Optim. 2008, 29, 747–756.
  26. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
