Article

Convergence Theorems for Modified Inertial Viscosity Splitting Methods in Banach Spaces

Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2019, 7(2), 156; https://doi.org/10.3390/math7020156
Submission received: 6 January 2019 / Revised: 28 January 2019 / Accepted: 30 January 2019 / Published: 8 February 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: In this article, we study a modified viscosity splitting method combined with inertial extrapolation for accretive operators in Banach spaces and establish a strong convergence theorem for such iterations under suitable assumptions on the parameter sequences. As an application, we extend our main results to solve the convex minimization problem. Moreover, numerical experiments are presented to support the feasibility and efficiency of the proposed method.

1. Introduction

Throughout this paper, we let $E$ be a real Banach space with norm $\|\cdot\|$ and let $E^*$ be its dual space. The normalized duality mapping $J$ from $E$ into $2^{E^*}$ is defined by
\[ J(x) = \{ f \in E^* : \langle x, f \rangle = \|f\|\,\|x\| = \|x\|^2 \}, \quad \forall x \in E. \]
We denote the generalized duality pairing between $E$ and $E^*$ by $\langle\cdot,\cdot\rangle$ and a single-valued duality mapping by $j$.
The inclusion problem is to find $x \in E$ such that
\[ 0 \in (A+B)x, \]
where $A : E \to E$ is an operator and $B : E \to 2^{E}$ is a set-valued operator. On the one hand, this problem covers several important special cases, such as variational inequalities, convex programming, minimization problems, and split feasibility problems [1,2,3]. On the other hand, as an important branch of nonlinear functional analysis and optimization theory, it has been studied extensively as a model for real-world problems such as machine learning, image reconstruction, and signal processing; see [4,5,6,7] and the references therein.
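To fix ideas with a standard example (a well-known fact recorded here only for orientation; it is exactly the setting of Corollary 3 below): if $H$ is a Hilbert space, $f : H \to \mathbb{R}$ is convex and differentiable, and $g : H \to \mathbb{R}$ is convex and lower semi-continuous, then
\[ x^{*} \ \text{minimizes} \ f + g \quad \Longleftrightarrow \quad 0 \in \nabla f(x^{*}) + \partial g(x^{*}), \]
so the convex minimization problem is the inclusion problem with $A = \nabla f$ and $B = \partial g$.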
In 2012, Takahashi et al. [8] studied the following Halpern-type iterative method for an $\alpha$-inverse strongly monotone mapping $A$ and a maximal monotone operator $B$ in a Hilbert space:
\[ x_{n+1} = \beta_n x_n + (1-\beta_n)\big(\alpha_n u + (1-\alpha_n) J_{r_n}^{B}(x_n - r_n A x_n)\big); \]
under certain conditions, the algorithm was shown to converge strongly to a zero of $A + B$. Furthermore, López et al. [9] introduced the following method for accretive operators:
\[ x_{n+1} = \alpha_n u + (1-\alpha_n)\big(J_{r_n}^{B}(x_n - r_n(A x_n + a_n)) + b_n\big); \]
they established strong convergence theorems for Halpern-type splitting methods in Banach spaces. In 2016, Pholasa et al. [10] extended the results of [8,9] and studied the following modified forward-backward splitting method in Banach spaces:
\[ x_{n+1} = \beta_n x_n + (1-\beta_n)\big(\alpha_n u + (1-\alpha_n) J_{r_n}^{B}(x_n - r_n A x_n)\big); \]
it was proved that $x_n$ converges strongly to a point $z = Q(u)$ under some mild conditions, where $Q$ is the sunny nonexpansive retraction onto the zero set of $A + B$.
Inertial extrapolation is an important technique for speeding up the convergence rate [11,12,13,14]. Recently, fast iterative algorithms based on inertial extrapolation have been studied by several authors [15,16,17]. For instance, in 2003, Moudafi and Oliny [18] studied the following inertial proximal point algorithm for a maximal monotone operator $T$:
\[ \begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ x_{n+1} = (I + \lambda_n T)^{-1}(y_n). \end{cases} \]
If $\lambda_n$ is non-decreasing and $\theta_n \in [0,1)$ is chosen such that
\[ \sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty, \]
then $x_n$ converges to a zero of $T$. In 2015, Lorenz and Pock [19] applied the inertial extrapolation technique to the forward-backward algorithm for monotone operators in Hilbert spaces. They proved that the iterative process defined by
\[ \begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ x_{n+1} = (I + r_n B)^{-1}(y_n - r_n A y_n) \end{cases} \]
converges weakly to a solution of the inclusion $0 \in (A+B)(x)$. In 2018, Cholamjiak et al. [20] proposed a Halpern-type inertial iterative method for monotone operators in Hilbert spaces and proved the strong convergence of the algorithm.
Inspired and motivated by the above-mentioned works, we combine inertial extrapolation with viscosity approximation to obtain an extension, and we study a modified splitting method for accretive operators in Banach spaces. Strong convergence theorems for such iterations are established, and applications, together with numerical experiments, are presented to support our main theorem.

2. Preliminaries

Recall that a Banach space $E$ is said to be uniformly convex if for any $\epsilon \in (0,2]$ there exists $\delta = \delta_E(\epsilon) > 0$ such that, for all $x, y \in E$ with $\|x\| = \|y\| = 1$ and $\|x - y\| \geq \epsilon$, we have $\|x + y\|/2 \leq 1 - \delta$. The modulus of smoothness $\rho_E : \mathbb{R}^{+} \to \mathbb{R}^{+}$ of $E$ is defined by
\[ \rho_E(t) = \sup\Big\{ \frac{\|x + ty\| + \|x - ty\|}{2} - 1 : \|x\| = 1, \ \|y\| = 1 \Big\}. \]
For $1 < q \leq 2$, a Banach space $E$ is said to be $q$-uniformly smooth if there exists a constant $c_q > 0$ such that $\rho_E(t) \leq c_q t^q$ for all $t > 0$; $E$ is said to be uniformly smooth if $\lim_{t \to 0} \rho_E(t)/t = 0$. Obviously, a $q$-uniformly smooth Banach space is uniformly smooth, and $E$ is uniformly smooth if and only if the norm of $E$ is uniformly Fréchet differentiable.
Let $I$ be the identity operator. For an operator $A \subset E \times E$, we denote by $D(A) = \{ z \in E : A z \neq \emptyset \}$ and $R(A) = \{ A z : z \in D(A) \}$ the domain and the range of $A$, respectively. $A$ is called accretive if for each $x, y \in D(A)$ there exists $j(x - y) \in J(x - y)$ such that
\[ \langle u - v,\, j(x - y) \rangle \geq 0, \quad \forall u \in A x, \ v \in A y. \]
An accretive operator $A$ is called $\alpha$-inverse strongly accretive ($\alpha$-isa for short) if for each $x, y \in D(A)$ there exists $j(x - y) \in J(x - y)$ such that
\[ \langle u - v,\, j(x - y) \rangle \geq \alpha \|u - v\|^2, \quad \forall u \in A x, \ v \in A y. \]
It is well known that an accretive operator $A$ is m-accretive if $R(I + rA) = E$ for all $r > 0$. If $A$ is an accretive operator which satisfies the range condition, then, for each $r > 0$, the mapping $J_r^{A} : R(I + rA) \to D(A)$ defined by $J_r^{A} = (I + rA)^{-1}$ is called the resolvent operator of $A$.
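As a simple one-dimensional illustration (a standard computation added here as an example, not taken from the paper): for $E = \mathbb{R}$ and $B = \partial|\cdot|$, the subdifferential of the absolute value function, $B$ is m-accretive and its resolvent is the soft-thresholding map
\[ J_r^{B}(x) = (I + r\,\partial|\cdot|)^{-1}(x) = \operatorname{sign}(x)\max\{|x| - r,\ 0\}, \quad r > 0, \]
which is single-valued and nonexpansive, as resolvents of m-accretive operators always are.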
Let $C$ be a nonempty, closed, and convex subset of $E$, and let $D$ be a nonempty subset of $C$. A mapping $T : C \to D$ is called a retraction of $C$ onto $D$ if $Tx = x$ for all $x \in D$. We say that $T$ is sunny if $T(tx + (1-t)Tx) = Tx$ for each $x \in C$ and $t \geq 0$ whenever $tx + (1-t)Tx \in C$. A sunny nonexpansive retraction is a sunny retraction which is also nonexpansive.
The following lemmas are needed to prove our results.
Lemma 1
([21]). Let $E$ be a smooth Banach space. Then the following inequality holds:
\[ \|x + y\|^2 \leq \|x\|^2 + 2\langle y,\, j(x + y) \rangle, \quad \forall x, y \in E. \]
Lemma 2
([22]). For any $r > 0$, $0 < s \leq r$, and $x \in E$, let
\[ T_r := J_r^{B}(I - rA) = (I + rB)^{-1}(I - rA). \]
Then $\mathrm{Fix}(T_r) = (A + B)^{-1}(0)$. In addition, the following relation holds:
\[ \|x - T_s x\| \leq 2\|x - T_r x\|. \]
Lemma 3
([23]). If a Banach space $E$ is uniformly smooth, then the duality mapping $J$ is single-valued and norm-to-norm uniformly continuous on each bounded subset of $E$.
Lemma 4
([21]). Let $E$ be a uniformly smooth Banach space, let $C$ be a nonempty, closed, and convex subset of $E$, and let $T : C \to C$ be a nonexpansive mapping with a fixed point. Let $f : C \to C$ be a contraction with coefficient $\rho \in (0,1)$. For each $t \in (0,1)$, the contraction $x \mapsto t f(x) + (1-t)Tx$ has a unique fixed point $x_t \in C$, and $x_t$ converges strongly, as $t \to 0$, to a fixed point of $T$. Define a mapping $Q : C \to D$, where $D = \mathrm{Fix}(T)$, by $Q(u) = s\text{-}\lim_{t \to 0} x_t$. Then $Q$ is the unique sunny nonexpansive retraction from $C$ onto $D$.
Lemma 5
([24]). Let $\{a_n\} \subset \mathbb{R}^{+}$, $\{\delta_n\} \subset (0,1)$, and $\{b_n\} \subset \mathbb{R}$ be sequences such that
\[ a_{n+1} \leq (1 - \delta_n)a_n + b_n, \quad n \geq 0. \]
If (i) $\sum_{n=0}^{\infty}\delta_n = \infty$ and (ii) $\limsup_{n \to \infty} b_n/\delta_n \leq 0$ or $\sum_{n=1}^{\infty}|b_n| < \infty$, then $\lim_{n \to \infty} a_n = 0$.
Lemma 6
([25]). Assume $\{s_n\}$ is a sequence of nonnegative real numbers such that
\[ s_{n+1} \leq (1 - \gamma_n)s_n + \gamma_n\tau_n, \quad n \geq 1, \]
\[ s_{n+1} \leq s_n - \eta_n + d_n, \quad n \geq 1, \]
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\tau_n\}$, $\{d_n\}$ are real sequences such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) $\lim_{n \to \infty} d_n = 0$;
(iii) $\lim_{k \to \infty}\eta_{n_k} = 0$ implies $\limsup_{k \to \infty}\tau_{n_k} \leq 0$ for any subsequence $\{n_k\} \subset \{n\}$.
Then $\lim_{n \to \infty} s_n = 0$.
Lemma 7
([26]). Let $A$ be a single-valued $\alpha$-isa operator in a real uniformly convex Banach space $E$ with Fréchet differentiable norm. Then, for all $x, y \in E$ and any given $r > 0$, there exists a continuous, strictly increasing, and convex function $\Phi : \mathbb{R}^{+} \to \mathbb{R}^{+}$ with $\Phi(0) = 0$ such that
\[ \|T_r x - T_r y\|^2 \leq \|x - y\|^2 - r(2\alpha - rk)\|Ax - Ay\|^2 - \Phi\big(\|(I - J_r^{B})(I - rA)x - (I - J_r^{B})(I - rA)y\|\big), \]
where $k$ is the uniform smoothness coefficient of $E$.
Lemma 8
([27]). Let $E$ be a uniformly convex Banach space. Then, for all $x, y \in E$ and $t \in [0,1]$, there exists a convex, continuous, and strictly increasing function $g : [0, \infty) \to [0, \infty)$ with $g(0) = 0$ such that
\[ \|tx + (1-t)y\|^2 \leq t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)g(\|x - y\|). \]

3. Main Results

Theorem 1.
Let $E$ be a uniformly convex and uniformly smooth Banach space. Let $A : E \to E$ be an $\alpha$-inverse strongly accretive mapping and $B : E \to 2^{E}$ an m-accretive operator. Assume that $\Omega = (A + B)^{-1}(0) \neq \emptyset$. Let $f : E \to E$ be a contraction with coefficient $\rho \in [0,1)$, let $\{\beta_n\} \subset (0,1)$, let $\{\alpha_n\}$ and $\{\delta_n\}$ be real sequences in $[0,1)$, and let $r_n \in (0, +\infty)$. Define a sequence $\{x_n\}$ in $E$ as follows:
\[ \begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = \delta_n w_n + (1 - \delta_n) J_{r_n}^{B}(w_n - r_n A w_n), \\ x_{n+1} = \beta_n f(x_n) + (1 - \beta_n) y_n, \end{cases} \tag{1} \]
for all $n \in \mathbb{N}$, where $x_0, x_1 \in E$ and $J_{r_n}^{B} = (I + r_n B)^{-1}$. Assume that the following conditions hold:
(i) $\sum_{n=1}^{\infty} \alpha_n\|x_n - x_{n-1}\| < \infty$;
(ii) $\lim_{n \to \infty}\beta_n = 0$ and $\sum_{n=1}^{\infty}\beta_n = \infty$;
(iii) $0 < \liminf_{n \to \infty} r_n \leq \limsup_{n \to \infty} r_n < \frac{2\alpha}{k}$;
(iv) $\limsup_{n \to \infty}\delta_n < 1$.
Then the sequence $\{x_n\}$ converges strongly to $z = Q(f(z))$, where $Q$ is the sunny nonexpansive retraction of $E$ onto $\Omega$.
Proof. 
Let $T_n = J_{r_n}^{B}(I - r_n A)$ and let $z = Q(f(z)) \in \Omega$ be as in the statement of the theorem. Then, we have
\[ \begin{aligned} \|y_n - z\| &= \|\delta_n w_n + (1-\delta_n)T_n w_n - z\| \leq \delta_n\|w_n - z\| + (1-\delta_n)\|T_n w_n - z\| \\ &\leq \delta_n\|w_n - z\| + (1-\delta_n)\|w_n - z\| = \|w_n - z\| = \|x_n + \alpha_n(x_n - x_{n-1}) - z\| \\ &\leq \|x_n - z\| + \alpha_n\|x_n - x_{n-1}\|. \end{aligned} \]
In view of Lemma 2, we have
\[ \begin{aligned} \|x_{n+1} - z\| &= \|\beta_n f(x_n) + (1-\beta_n)y_n - z\| \leq \beta_n\|f(x_n) - z\| + (1-\beta_n)\|y_n - z\| \\ &\leq \beta_n\|f(x_n) - f(z)\| + \beta_n\|f(z) - z\| + (1-\beta_n)\|y_n - z\| \\ &\leq \beta_n\rho\|x_n - z\| + \beta_n\|f(z) - z\| + (1-\beta_n)\big(\|x_n - z\| + \alpha_n\|x_n - x_{n-1}\|\big) \\ &= \big[1 - \beta_n(1-\rho)\big]\|x_n - z\| + \beta_n\|f(z) - z\| + (1-\beta_n)\alpha_n\|x_n - x_{n-1}\|. \end{aligned} \]
From the restrictions and Lemma 5, we find that $\{x_n\}$ is bounded; hence $\{w_n\}$ and $\{y_n\}$ are also bounded.
Using the inequalities in Lemmas 1 and 8, we find that
\[ \|w_n - z\|^2 = \|x_n + \alpha_n(x_n - x_{n-1}) - z\|^2 \leq \|x_n - z\|^2 + 2\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle, \tag{2} \]
and
\[ \begin{aligned} \|x_{n+1} - z\|^2 &= \|\beta_n f(x_n) + (1-\beta_n)y_n - z\|^2 = \|\beta_n(f(x_n) - f(z)) + (1-\beta_n)(y_n - z) + \beta_n(f(z) - z)\|^2 \\ &\leq \|\beta_n(f(x_n) - f(z)) + (1-\beta_n)(y_n - z)\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle \\ &\leq \beta_n\|f(x_n) - f(z)\|^2 + (1-\beta_n)\|y_n - z\|^2 - \beta_n(1-\beta_n)\,g\big(\|(f(x_n) - f(z)) - (y_n - z)\|\big) \\ &\quad + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle \\ &\leq \beta_n\rho^2\|x_n - z\|^2 + (1-\beta_n)\|y_n - z\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle. \end{aligned} \tag{3} \]
In view of Lemmas 7 and 8, we get
\[ \begin{aligned} \|y_n - z\|^2 &= \|\delta_n w_n + (1-\delta_n)T_n w_n - z\|^2 \leq \delta_n\|w_n - z\|^2 + (1-\delta_n)\|T_n w_n - z\|^2 \\ &\leq \delta_n\|w_n - z\|^2 + (1-\delta_n)\big[\|w_n - z\|^2 - r_n(2\alpha - r_n k)\|A w_n - A z\|^2 \\ &\quad - \Phi\big(\|(I - J_{r_n}^{B})(I - r_n A)w_n - (I - J_{r_n}^{B})(I - r_n A)z\|\big)\big] \\ &= \|w_n - z\|^2 - (1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 - (1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big). \end{aligned} \tag{4} \]
Substituting (2) and (4) into (3), we get
\[ \begin{aligned} \|x_{n+1} - z\|^2 &\leq \beta_n\rho^2\|x_n - z\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + (1-\beta_n)\|w_n - z\|^2 \\ &\quad - (1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 - (1-\beta_n)(1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big) \\ &\leq \beta_n\rho^2\|x_n - z\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + (1-\beta_n)\|x_n - z\|^2 + 2(1-\beta_n)\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle \\ &\quad - (1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 - (1-\beta_n)(1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big) \\ &= \big(1 - \beta_n(1-\rho^2)\big)\|x_n - z\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + 2(1-\beta_n)\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle \\ &\quad - (1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 - (1-\beta_n)(1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big). \end{aligned} \tag{5} \]
We can check that $\beta_n(1-\rho^2) \in (0,1)$, and by condition (iii) the quantity $(1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)$ is positive. Then, we have
\[ \|x_{n+1} - z\|^2 \leq \big(1 - \beta_n(1-\rho^2)\big)\|x_n - z\|^2 + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + 2(1-\beta_n)\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle, \tag{6} \]
and
\[ \begin{aligned} \|x_{n+1} - z\|^2 &\leq \|x_n - z\|^2 - (1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 \\ &\quad - (1-\beta_n)(1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big) \\ &\quad + 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + 2(1-\beta_n)\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle. \end{aligned} \tag{7} \]
For each $n \geq 1$, let
\[ \begin{aligned} s_n &= \|x_n - z\|^2, \qquad \gamma_n = \beta_n(1-\rho^2), \\ \tau_n &= \frac{2}{1-\rho^2}\langle f(z) - z,\, j(x_{n+1} - z)\rangle + \frac{2\alpha_n(1-\beta_n)}{\beta_n(1-\rho^2)}\langle x_n - x_{n-1},\, j(w_n - z)\rangle, \\ \eta_n &= (1-\beta_n)(1-\delta_n)r_n(2\alpha - r_n k)\|A w_n - A z\|^2 + (1-\beta_n)(1-\delta_n)\Phi\big(\|w_n - r_n A w_n - T_n w_n + r_n A z\|\big), \\ d_n &= 2\beta_n\langle f(z) - z,\, j(x_{n+1} - z)\rangle + 2(1-\beta_n)\alpha_n\langle x_n - x_{n-1},\, j(w_n - z)\rangle. \end{aligned} \tag{8} \]
We find from (6) and (7) that
\[ s_{n+1} \leq (1 - \gamma_n)s_n + \gamma_n\tau_n, \]
and also
\[ s_{n+1} \leq s_n - \eta_n + d_n. \]
Since $\sum_{n=1}^{\infty}\beta_n = \infty$, we see that $\sum_{n=1}^{\infty}\gamma_n = \infty$. The boundedness of $\{w_n\}$ and $\{x_n\}$, condition (i), and the restriction $\lim_{n\to\infty}\beta_n = 0$ imply that $\lim_{n\to\infty} d_n = 0$.
On the other hand, in view of Lemma 6, it remains to show that $\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\tau_{n_k} \leq 0$ for any subsequence $\{n_k\} \subset \{n\}$. Let $\{\eta_{n_k}\}$ be a subsequence of $\{\eta_n\}$ such that $\lim_{k\to\infty}\eta_{n_k} = 0$. From the restrictions and the properties of $\Phi$, we derive from (8) that
\[ \lim_{k\to\infty}\|A w_{n_k} - A z\| = 0 \quad \text{and} \quad \lim_{k\to\infty}\|w_{n_k} - r_{n_k}A w_{n_k} - T_{n_k}w_{n_k} + r_{n_k}A z\| = 0. \]
By the triangle inequality, it turns out that
\[ \lim_{k\to\infty}\|T_{n_k}w_{n_k} - w_{n_k}\| = 0. \tag{9} \]
Moreover, since $0 < \liminf_{n\to\infty} r_n$, there exists $\epsilon > 0$ such that $r_n \geq \epsilon$ for all sufficiently large $n$. In view of the inequality in Lemma 2, we have
\[ \|T_\epsilon w_{n_k} - w_{n_k}\| \leq 2\|T_{n_k}w_{n_k} - w_{n_k}\|. \]
It turns out that
\[ \limsup_{k\to\infty}\|T_\epsilon w_{n_k} - w_{n_k}\| \leq 0. \]
Therefore, we can get
\[ \lim_{k\to\infty}\|T_\epsilon w_{n_k} - w_{n_k}\| = 0. \]
Please note that
\[ \|T_\epsilon w_{n_k} - x_{n_k}\| \leq \|T_\epsilon w_{n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\| \leq \|T_\epsilon w_{n_k} - w_{n_k}\| + \alpha_{n_k}\|x_{n_k} - x_{n_k-1}\|. \]
From condition (i) and (9), we get
\[ \lim_{k\to\infty}\|T_\epsilon w_{n_k} - x_{n_k}\| = 0. \tag{10} \]
Put $z_t = t f(z_t) + (1-t)T_\epsilon z_t$ for any $t \in (0,1)$. Applying Lemma 4, we get $z_t \to Q(f) = z$ as $t \to 0$. Then we have
\[ \begin{aligned} \|z_t - x_{n_k}\|^2 &= \|t f(z_t) + (1-t)T_\epsilon z_t - x_{n_k}\|^2 = \|t(f(z_t) - x_{n_k}) + (1-t)(T_\epsilon z_t - x_{n_k})\|^2 \\ &\leq (1-t)^2\|T_\epsilon z_t - x_{n_k}\|^2 + 2t\langle f(z_t) - x_{n_k},\, j(z_t - x_{n_k})\rangle \\ &= (1-t)^2\|T_\epsilon z_t - x_{n_k}\|^2 + 2t\langle f(z_t) - z_t,\, j(z_t - x_{n_k})\rangle + 2t\langle z_t - x_{n_k},\, j(z_t - x_{n_k})\rangle \\ &\leq (1-t)^2\big(\|T_\epsilon z_t - T_\epsilon w_{n_k}\| + \|T_\epsilon w_{n_k} - x_{n_k}\|\big)^2 + 2t\langle f(z_t) - z_t,\, j(z_t - x_{n_k})\rangle + 2t\langle z_t - x_{n_k},\, j(z_t - x_{n_k})\rangle \\ &\leq (1-t)^2\big(\|z_t - w_{n_k}\| + \|T_\epsilon w_{n_k} - x_{n_k}\|\big)^2 + 2t\langle f(z_t) - z_t,\, j(z_t - x_{n_k})\rangle + 2t\|z_t - x_{n_k}\|^2 \\ &\leq (1-t)^2\big(\|z_t - x_{n_k}\| + \alpha_{n_k}\|x_{n_k} - x_{n_k-1}\| + \|T_\epsilon w_{n_k} - x_{n_k}\|\big)^2 + 2t\langle f(z_t) - z_t,\, j(z_t - x_{n_k})\rangle + 2t\|z_t - x_{n_k}\|^2. \end{aligned} \]
This implies that
\[ \langle z_t - f(z_t),\, j(z_t - x_{n_k})\rangle \leq \frac{(1-t)^2}{2t}\big(\|z_t - x_{n_k}\| + \alpha_{n_k}\|x_{n_k} - x_{n_k-1}\| + \|T_\epsilon w_{n_k} - x_{n_k}\|\big)^2 + \frac{2t-1}{2t}\|z_t - x_{n_k}\|^2. \tag{11} \]
From (10) and (11), we obtain
\[ \limsup_{k\to\infty}\langle z_t - f(z_t),\, j(z_t - x_{n_k})\rangle \leq \frac{(1-t)^2}{2t}M^2 + \frac{2t-1}{2t}M^2 = \frac{t}{2}M^2 \to 0 \quad \text{as } t \to 0, \tag{12} \]
for some $M > 0$ large enough. Since the duality mapping $J$ is norm-to-norm uniformly continuous on bounded subsets of $E$, we see that $\|j(z_t - x_{n_k}) - j(z - x_{n_k})\| \to 0$ as $t \to 0$. Then, we have
\[ \begin{aligned} \langle z_t - f(z_t),\, j(z_t - x_{n_k})\rangle - \langle z - f(z_t),\, j(z - x_{n_k})\rangle &= \langle z_t - z + z - f(z_t),\, j(z_t - x_{n_k})\rangle - \langle z - f(z_t),\, j(z - x_{n_k})\rangle \\ &\leq \langle z_t - z,\, j(z_t - x_{n_k})\rangle + \langle z - f(z_t),\, j(z_t - x_{n_k}) - j(z - x_{n_k})\rangle \\ &\leq \|z_t - z\|\,\|z_t - x_{n_k}\| + \|z - f(z_t)\|\,\|j(z_t - x_{n_k}) - j(z - x_{n_k})\|. \end{aligned} \tag{13} \]
Letting $t \to 0$ in (12) and (13), we get
\[ \limsup_{k\to\infty}\langle z - f(z),\, j(z - x_{n_k})\rangle \leq 0. \tag{14} \]
On the other hand, we have
\[ \begin{aligned} \|y_{n_k} - x_{n_k}\| &= \|\delta_{n_k}w_{n_k} + (1-\delta_{n_k})T_{n_k}w_{n_k} - x_{n_k}\| \leq \delta_{n_k}\|w_{n_k} - x_{n_k}\| + (1-\delta_{n_k})\|T_{n_k}w_{n_k} - x_{n_k}\| \\ &\leq \delta_{n_k}\|w_{n_k} - x_{n_k}\| + (1-\delta_{n_k})\|T_{n_k}w_{n_k} - w_{n_k}\| + (1-\delta_{n_k})\|w_{n_k} - x_{n_k}\| \\ &= \|w_{n_k} - x_{n_k}\| + (1-\delta_{n_k})\|T_{n_k}w_{n_k} - w_{n_k}\| \leq \alpha_{n_k}\|x_{n_k} - x_{n_k-1}\| + (1-\delta_{n_k})\|T_{n_k}w_{n_k} - w_{n_k}\|, \end{aligned} \]
and
\[ \begin{aligned} \|x_{n_k+1} - x_{n_k}\| &= \|\beta_{n_k}f(x_{n_k}) + (1-\beta_{n_k})y_{n_k} - x_{n_k}\| \leq \beta_{n_k}\|f(x_{n_k}) - x_{n_k}\| + (1-\beta_{n_k})\|y_{n_k} - x_{n_k}\| \\ &\leq \beta_{n_k}\|f(x_{n_k}) - f(z)\| + \beta_{n_k}\|f(z) - x_{n_k}\| + (1-\beta_{n_k})\|y_{n_k} - x_{n_k}\| \\ &\leq \beta_{n_k}\|f(x_{n_k}) - f(z)\| + \beta_{n_k}\|f(z) - x_{n_k}\| + (1-\beta_{n_k})\alpha_{n_k}\|x_{n_k} - x_{n_k-1}\| + (1-\beta_{n_k})(1-\delta_{n_k})\|T_{n_k}w_{n_k} - w_{n_k}\|. \end{aligned} \]
From conditions (i) and (ii) and from (9), we have
\[ \lim_{k\to\infty}\|x_{n_k+1} - x_{n_k}\| = 0. \tag{15} \]
From (14) and (15), we obtain
\[ \limsup_{k\to\infty}\langle z - f(z),\, j(z - x_{n_k+1})\rangle \leq 0. \tag{16} \]
This implies that $\limsup_{k\to\infty}\tau_{n_k} \leq 0$, and hence, by Lemma 6, $\lim_{n\to\infty}s_n = 0$; that is, $x_n \to z$ as $n \to \infty$. This finishes the proof. □
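For readers who wish to experiment with iteration (1), the following is a minimal Python/NumPy sketch (not the authors' Matlab code) of one possible realization of the scheme; the function names, the parameter callables, and the stopping rule based on the size of $\|x_{n+1} - x_n\|$ are illustrative assumptions, and the resolvent $J_{r_n}^{B}$ is assumed to be available as a callable.

import numpy as np

def inertial_viscosity_splitting(x0, x1, f, A, J_B, r, alpha, beta, delta,
                                 n_iter=1000, tol=1e-10):
    # One possible realization of iteration (1); r, alpha, beta, delta are
    # callables n -> parameter value, and J_B(v, r) evaluates (I + r B)^(-1) v.
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        w = x + alpha(n) * (x - x_prev)                                  # inertial step
        y = delta(n) * w + (1 - delta(n)) * J_B(w - r(n) * A(w), r(n))   # forward-backward step
        x_prev, x = x, beta(n) * f(x) + (1 - beta(n)) * y                # viscosity step
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x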
Corollary 1.
Let $E$ be a uniformly convex and uniformly smooth Banach space. Let $A : E \to E$ be an $\alpha$-inverse strongly accretive mapping and $B : E \to 2^{E}$ an m-accretive operator. Assume that $\Omega = (A + B)^{-1}(0) \neq \emptyset$. Let $\{\beta_n\} \subset (0,1)$, let $\{\alpha_n\}$ and $\{\delta_n\}$ be real sequences in $[0,1)$, and let $r_n \in (0, +\infty)$. Define a sequence $\{x_n\}$ in $E$ as follows:
\[ \begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = \delta_n w_n + (1 - \delta_n) J_{r_n}^{B}(w_n - r_n A w_n), \\ x_{n+1} = \beta_n u + (1 - \beta_n) y_n, \end{cases} \]
for all $n \in \mathbb{N}$, where $u, x_0, x_1 \in E$ and $J_{r_n}^{B} = (I + r_n B)^{-1}$. Assume that the following conditions hold:
(i) $\sum_{n=1}^{\infty} \alpha_n\|x_n - x_{n-1}\| < \infty$;
(ii) $\lim_{n \to \infty}\beta_n = 0$ and $\sum_{n=1}^{\infty}\beta_n = \infty$;
(iii) $0 < \liminf_{n \to \infty} r_n \leq \limsup_{n \to \infty} r_n < \frac{2\alpha}{k}$;
(iv) $\limsup_{n \to \infty}\delta_n < 1$.
Then the sequence { x n } converges strongly to z = Q ( u ) , where Q is the sunny nonexpansive retraction of E onto Ω.
Proof. 
In this case, the map $f : E \to E$ defined by $f(x) = u$ for all $x \in E$ is a contraction with coefficient $\rho = 0$, so the conclusion follows from Theorem 1 above. □
Corollary 2.
Let $H$ be a real Hilbert space. Let $A : H \to H$ be an $\alpha$-inverse strongly monotone operator and $B : H \to 2^{H}$ a maximal monotone operator. Assume that $\Omega = (A + B)^{-1}(0) \neq \emptyset$. Let $f : H \to H$ be a contraction with coefficient $\rho \in [0,1)$, let $\{\beta_n\} \subset (0,1)$, let $\{\alpha_n\}$ and $\{\delta_n\}$ be real sequences in $[0,1)$, and let $r_n \in (0, 2\alpha)$. Define a sequence $\{x_n\}$ in $H$ as follows:
\[ \begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = \delta_n w_n + (1 - \delta_n) J_{r_n}^{B}(w_n - r_n A w_n), \\ x_{n+1} = \beta_n f(x_n) + (1 - \beta_n) y_n, \end{cases} \]
for all $n \in \mathbb{N}$, where $x_0, x_1 \in H$ and $J_{r_n}^{B} = (I + r_n B)^{-1}$. Assume that the following conditions hold:
(i) $\sum_{n=1}^{\infty} \alpha_n\|x_n - x_{n-1}\| < \infty$;
(ii) $\lim_{n \to \infty}\beta_n = 0$ and $\sum_{n=1}^{\infty}\beta_n = \infty$;
(iii) $0 < \liminf_{n \to \infty} r_n \leq \limsup_{n \to \infty} r_n < 2\alpha$;
(iv) $\limsup_{n \to \infty}\delta_n < 1$.
Then the sequence { x n } converges strongly to z = P ( f ( z ) ) , where P is the metric projection of H onto Ω.
Proof. 
We only need to replace the Banach space $E$ with the Hilbert space $H$ in the proof of Theorem 1; in a Hilbert space the duality mapping is the identity, the smoothness coefficient is $k = 1$, and the sunny nonexpansive retraction onto $\Omega$ coincides with the metric projection $P$. □
Corollary 3.
(Convex minimization problem) Let $H$ be a real Hilbert space. Let $f : H \to \mathbb{R}$ be a convex and differentiable function with a $K$-Lipschitz continuous gradient $\nabla f$, and let $g : H \to \mathbb{R}$ be a convex and lower semi-continuous function such that $f + g$ attains a minimizer. Let $h : H \to H$ be a contraction with coefficient $\rho \in [0,1)$, let $\{\beta_n\} \subset (0,1)$, let $\{\alpha_n\}$ and $\{\delta_n\}$ be real sequences in $[0,1)$, and let $r_n \in (0, 2/K)$. Define a sequence $\{x_n\}$ in $H$ as follows:
\[ \begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ y_n = \delta_n w_n + (1 - \delta_n) J_{r_n}^{\partial g}(w_n - r_n \nabla f(w_n)), \\ x_{n+1} = \beta_n h(x_n) + (1 - \beta_n) y_n, \end{cases} \]
for all $n \in \mathbb{N}$, where $x_0, x_1 \in H$ and $J_{r_n}^{\partial g} = (I + r_n \partial g)^{-1}$ is the proximal mapping of $r_n g$. Assume that the following conditions hold:
(i) $\sum_{n=1}^{\infty} \alpha_n\|x_n - x_{n-1}\| < \infty$;
(ii) $\lim_{n \to \infty}\beta_n = 0$ and $\sum_{n=1}^{\infty}\beta_n = \infty$;
(iii) $0 < \liminf_{n \to \infty} r_n \leq \limsup_{n \to \infty} r_n < \frac{2}{K}$;
(iv) $\limsup_{n \to \infty}\delta_n < 1$.
Then the sequence { x n } converges strongly to a minimizer of f + g .
Proof. 
Since the gradient $\nabla f$ is $K$-Lipschitz continuous, it is $\frac{1}{K}$-inverse strongly monotone (by the Baillon–Haddad theorem), and since $g$ is convex and lower semi-continuous, $\partial g$ is maximal monotone. Thus, setting $A = \nabla f$ and $B = \partial g$ in Theorem 1, the conclusion follows. □
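To make the scheme of Corollary 3 concrete, here is a minimal Python/NumPy sketch (not taken from the paper) applied to the toy problem $\min_x \frac{1}{2}\|Dx - b\|^2 + \lambda\|x\|_1$, where $\nabla f(x) = D^{\top}(Dx - b)$ is $K$-Lipschitz with $K = \|D^{\top}D\|_2$ and the resolvent $J_r^{\partial g}$ is soft-thresholding at level $r\lambda$; the contraction $h$, the random data, and the parameter schedules are illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    # resolvent (I + tau * d||.||_1)^(-1), i.e., the proximal map of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
K = np.linalg.norm(D.T @ D, 2)            # Lipschitz constant of grad f
grad_f = lambda x: D.T @ (D @ x - b)
h = lambda x: 0.5 * x                     # illustrative contraction, rho = 0.5

x_prev = np.zeros(100)
x = np.zeros(100)
r = 1.0 / K                               # step size in (0, 2/K)

for n in range(1, 501):
    beta = 1.0 / (n + 1)                  # beta_n -> 0, sum beta_n = infinity
    # alpha_n chosen so that alpha_n * ||x_n - x_{n-1}|| <= 1/n^2 (condition (i))
    alpha = min(0.4, 1.0 / (n ** 2 * (np.linalg.norm(x - x_prev) + 1e-12)))
    delta = 0.5
    w = x + alpha * (x - x_prev)
    y = delta * w + (1.0 - delta) * soft_threshold(w - r * grad_f(w), r * lam)
    x_prev, x = x, beta * h(x) + (1.0 - beta) * y

print("objective value:", 0.5 * np.linalg.norm(D @ x - b) ** 2 + lam * np.linalg.norm(x, 1))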

4. Applications and Numerical Experiments

In this section, we give a concrete numerical example to support the main theorem. Furthermore, we use it to compare the efficiency of our proposed algorithm with that of the algorithm of Pholasa et al. [10], and we show that the algorithm presented in this paper converges more quickly. All codes were written in Matlab R2013b, and all results were obtained on a personal computer with an Intel(R) Core(TM) i7-4710MQ CPU @ 2.50 GHz and 8.00 GB of RAM.
Example 1.
Let $E = \ell^3$, which is a uniformly convex and uniformly smooth Banach space. We set $Ax = 5x + (1, 1, 1, 0, 0, 0, \ldots)$ and $Bx = 6x$, where $x = (x_1, x_2, x_3, \ldots) \in \ell^3$. One can check that $A : \ell^3 \to \ell^3$ is a $\frac{1}{5}$-isa, $B : \ell^3 \to \ell^3$ is an m-accretive operator, and $R(I + rB) = \ell^3$ for all $r > 0$. We take $r_n = 0.02$ and $\alpha_n = 0.4$ for all $n \in \mathbb{N}$. Let $\beta_n = \frac{1}{1000n + 1}$, $\delta_n = \frac{1}{200n}$, and let $f(x) = \frac{1}{3}x$ be a contraction with coefficient $\rho = \frac{1}{3}$. Starting from $x_0 = (1.8, 3.2, 9.6, \ldots)$ and $x_1 = (1.4290014, 2.5542525, 7.6982578, \ldots)$ and using algorithm (1) in Theorem 1, we obtain the following numerical results.
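Before discussing the results, here is a minimal Python/NumPy sketch of this experiment (the authors used Matlab; this rewrite is illustrative only). The operators and parameter choices follow Example 1 as stated, but the sequence space $\ell^3$ is truncated to finitely many coordinates and the unlisted coordinates of $x_0$ and $x_1$ are assumed to be zero, so the printed values need not reproduce Table 1 exactly.

import numpy as np

# Truncation of l^3 to N coordinates, for illustration only.
N = 50
e = np.zeros(N)
e[:3] = 1.0

A = lambda x: 5.0 * x + e                  # A x = 5x + (1, 1, 1, 0, 0, 0, ...)
J_B = lambda v, r: v / (1.0 + 6.0 * r)     # (I + r B)^(-1) v with B x = 6x
f = lambda x: x / 3.0                      # contraction with coefficient 1/3
l3 = lambda v: np.sum(np.abs(v) ** 3) ** (1.0 / 3.0)   # l^3 norm

x_prev = np.zeros(N); x_prev[:3] = [1.8, 3.2, 9.6]                # x_0
x = np.zeros(N); x[:3] = [1.4290014, 2.5542525, 7.6982578]        # x_1
r, alpha = 0.02, 0.4

for n in range(1, 601):                    # 600 iterations of algorithm (1)
    beta = 1.0 / (1000.0 * n + 1.0)
    delta = 1.0 / (200.0 * n)
    w = x + alpha * (x - x_prev)
    y = delta * w + (1.0 - delta) * J_B(w - r * A(w), r)
    x_prev, x = x, beta * f(x) + (1.0 - beta) * y

print("first three coordinates of the last iterate:", x[:3])
print("last increment in the l^3 norm:", l3(x - x_prev))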
From Table 1, we see that $x_{600} = (0.0909, 0.0909, 0.0909, 0.0000, 0.0000, 0.0000, \ldots)$ is an approximation of a solution with an error of $1.8770214 \times 10^{-9}$. We then make the same choice of $x_1$ as reported in Table 1 and, in terms of the number of iterations and the errors, compare our proposed algorithm with the iterative algorithm with $\alpha_n = 0$.
Over these 600 iterations, Table 2 shows that the final approximate solution is the same as in Table 1. Figure 1 plots the errors of our algorithm and of the algorithm with $\alpha_n = 0$ against the number of iterations for the above initial points. We can see that our algorithm converges faster than the algorithm of Pholasa et al. [10].

5. Conclusions

In this paper, we give a modified inertial viscosity splitting algorithm for accretive operators in Banach spaces. Strong convergence theorems are established, and numerical experiments are presented to show that the inertial extrapolation greatly improves the efficiency of the algorithm. In Theorem 1 and Corollary 1, if $f(x_n) \equiv u$ and $A$ is an inverse strongly monotone operator in a Hilbert space, we recover the main results of Cholamjiak et al. [20]. In Theorem 1, if $\alpha_n = 0$, $f(x_n) \equiv u$, and $E$ is a uniformly convex and $q$-uniformly smooth Banach space, we recover the main results of Pholasa et al. [10]. Furthermore, some other results are also improved (see [8,9,18,19,26]).
The introduction of inertial viscosity splitting algorithms sheds new light on the inclusion problem. Combined with recent research findings ([4,13,19,20]), Theorem 1 can be further applied to the fixed-point problem, the split feasibility problem, and the variational inequality problem. Indeed, how to choose the optimal inertial parameters $\alpha_n$ in the accelerated algorithm remains an important unsolved problem. In the future, more work will be devoted to the wider application of the proposed algorithm and to the improvement of its convergence rate.

Author Contributions

Conceptualization, C.P. and Y.W.; methodology, C.P. and Y.W.; software, C.P. and Y.W.; validation, C.P. and Y.W.; formal analysis, C.P. and Y.W.; investigation, C.P. and Y.W.; resources, C.P. and Y.W.; data curation, C.P.; writing, original draft preparation, C.P. and Y.W.; writing, review and editing, C.P. and Y.W.; visualization, C.P. and Y.W.; supervision, Y.W.; project administration, Y.W.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11671365) and the Natural Science Foundation of Zhejiang Province (Grant no. LY14A010011).

Conflicts of Interest

The authors declare that they have no competing interests.

References

1. Yao, Y.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628.
2. Zhu, J.H.; University, Y. Approximation of solutions to finite family of variational inclusion and the set of common fixed point for finite family of λ-strict pseudocontraction mapping in Banach spaces. Math. Pract. Theory 2013, 43, 207–217.
3. Yao, Y.H.; Chen, R.D.; Xu, H.K. Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72, 3447–3456.
4. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2016, 457, 1095–1117.
5. Bagarello, F.; Cinà, M.; Gargano, F. Projector operators in clustering. Math. Methods Appl. Sci. 2016, arXiv:1605.03093.
6. Zegeye, H.; Shahzad, N.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2015, 64, 453–471.
7. Czarnecki, M.O.; Noun, N.; Peypouquet, J. Splitting forward-backward penalty scheme for constrained variational problems. arXiv 2014, arXiv:1408.0974.
8. Takahashi, W.; Wong, N.C.; Yao, J.C. Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications. Taiwan. J. Math. 2012, 16, 1151–1172.
9. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Soc. Ind. Appl. Math. 2012, 2012, 933–947.
10. Pholasa, N.; Cholamjiak, P.; Cho, Y.J. Modified forward-backward splitting methods for accretive operators in Banach spaces. J. Nonlinear Sci. Appl. 2016, 9, 2766–2778.
11. Alvarez, F.; Attouch, H. An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
12. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
13. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2017, 13, 1–21.
14. Chan, R.H.; Ma, S.; Yang, J.F. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267.
15. Bot, R.I.; Csetnek, E.R. An inertial alternating direction method of multipliers. arXiv 2014, arXiv:1404.4582.
16. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
17. Vong, S.; Liu, D. An inertial Mann algorithm for nonexpansive mappings. J. Fixed Point Theory Appl. 2018, 20, 102.
18. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
19. Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
20. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 42.
21. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
22. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Mathematics and Its Applications, Vol. 62; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1990.
23. Matsushita, S.; Takahashi, W. Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 2004, 829453.
24. Wang, Y.H.; Pan, C.J. The modified viscosity implicit rules for uniformly L-Lipschitzian asymptotically pseudocontractive mappings in Banach spaces. J. Nonlinear Sci. Appl. 2017, 10, 1582–1592.
25. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
26. Shehu, Y.; Cai, G. Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A Matemáticas 2018, 112, 71–87.
27. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
Figure 1. Error plot of $\|x_{n+1} - x_n\|_{\ell^3}$.
Table 1. Numerical results of Example 1 for the iteration process.

n     x_n                                                                       $\|x_{n+1} - x_n\|_{\ell^3}$
1     (1.4290014, 2.5542525, 7.6982578, 0.0000000, 0.0000000, 0.0000000, ...)   2.1732179
10    (0.0677632, 0.0506892, 0.0273635, 0.0000000, 0.0000000, 0.0000000, ...)   8.911330 × 10^{-2}
20    (0.0908530, 0.0908328, 0.0907403, 0.0000000, 0.0000000, 0.0000000, ...)   1.9408448 × 10^{-4}
30    (0.0908917, 0.0908919, 0.0908926, 0.0000000, 0.0000000, 0.0000000, ...)   6.7808717 × 10^{-7}
40    (0.0908964, 0.0908964, 0.0908964, 0.0000000, 0.0000000, 0.0000000, ...)   5.1789690 × 10^{-7}
50    (0.0908991, 0.0908991, 0.0908991, 0.0000000, 0.0000000, 0.0000000, ...)   3.1714100 × 10^{-7}
60    (0.0909009, 0.0909009, 0.0909009, 0.0000000, 0.0000000, 0.0000000, ...)   2.1345961 × 10^{-7}
70    (0.0909021, 0.0909021, 0.0909021, 0.0000000, 0.0000000, 0.0000000, ...)   1.5346777 × 10^{-7}
80    (0.0909030, 0.0909030, 0.0909030, 0.0000000, 0.0000000, 0.0000000, ...)   1.1564503 × 10^{-7}
90    (0.0909037, 0.0909037, 0.0909037, 0.0000000, 0.0000000, 0.0000000, ...)   9.0267844 × 10^{-8}
100   (0.0909043, 0.0909043, 0.0909043, 0.0000000, 0.0000000, 0.0000000, ...)   7.2416422 × 10^{-8}
200   (0.0909067, 0.0909067, 0.0909067, 0.0000000, 0.0000000, 0.0000000, ...)   1.7357133 × 10^{-8}
300   (0.0909075, 0.0909075, 0.0909075, 0.0000000, 0.0000000, 0.0000000, ...)   7.6097663 × 10^{-9}
400   (0.0909079, 0.0909079, 0.0909079, 0.0000000, 0.0000000, 0.0000000, ...)   4.2517012 × 10^{-9}
500   (0.0909082, 0.0909082, 0.0909082, 0.0000000, 0.0000000, 0.0000000, ...)   2.7101525 × 10^{-9}
600   (0.0909083, 0.0909083, 0.0909083, 0.0000000, 0.0000000, 0.0000000, ...)   1.8770214 × 10^{-9}
Table 2. Numerical results for the iteration process, Algorithm (1) with $\alpha_n = 0$, in Example 1.

n     x_n                                                                       $\|x_{n+1} - x_n\|_{\ell^3}$
1     (1.4290014, 2.5542525, 7.6982578, 0.0000000, 0.0000000, 0.0000000, ...)   1.9308196
10    (0.1216527, 0.2789583, 0.9980697, 0.0000000, 0.0000000, 0.0000000, ...)   2.702205 × 10^{-1}
20    (0.0670158, 0.0493531, 0.031391, 0.0000000, 0.0000000, 0.0000000, ...)    3.034530 × 10^{-2}
30    (0.0882106, 0.0862275, 0.0771619, 0.0000000, 0.0000000, 0.0000000, ...)   3.407800 × 10^{-3}
40    (0.0905947, 0.0903721, 0.0893543, 0.0000000, 0.0000000, 0.0000000, ...)   3.8297598 × 10^{-4}
50    (0.0908649, 0.0908399, 0.0907256, 0.0000000, 0.0000000, 0.0000000, ...)   4.3220033 × 10^{-5}
60    (0.0908968, 0.0908940, 0.0908812, 0.0000000, 0.0000000, 0.0000000, ...)   5.0028407 × 10^{-6}
70    (0.0909015, 0.0909012, 0.0908997, 0.0000000, 0.0000000, 0.0000000, ...)   6.7427458 × 10^{-7}
80    (0.0909028, 0.0909028, 0.0909026, 0.0000000, 0.0000000, 0.0000000, ...)   1.7183921 × 10^{-7}
90    (0.0909036, 0.0909036, 0.0909036, 0.0000000, 0.0000000, 0.0000000, ...)   9.9664118 × 10^{-8}
100   (0.0909042, 0.0909042, 0.0909042, 0.0000000, 0.0000000, 0.0000000, ...)   7.5993686 × 10^{-8}
200   (0.0909067, 0.0909067, 0.0909067, 0.0000000, 0.0000000, 0.0000000, ...)   1.7674516 × 10^{-8}
300   (0.0909075, 0.0909075, 0.0909075, 0.0000000, 0.0000000, 0.0000000, ...)   7.6990015 × 10^{-9}
400   (0.0909079, 0.0909079, 0.0909079, 0.0000000, 0.0000000, 0.0000000, ...)   4.2884018 × 10^{-9}
500   (0.0909082, 0.0909082, 0.0909082, 0.0000000, 0.0000000, 0.0000000, ...)   2.7286629 × 10^{-9}
600   (0.0909083, 0.0909083, 0.0909083, 0.0000000, 0.0000000, 0.0000000, ...)   1.8876276 × 10^{-9}
