Article

Krasnoselskii–Mann Viscosity Approximation Method for Nonexpansive Mappings

¹ Department of Mathematics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
² School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1153; https://doi.org/10.3390/math8071153
Submission received: 22 June 2020 / Revised: 10 July 2020 / Accepted: 11 July 2020 / Published: 14 July 2020
(This article belongs to the Special Issue Nonlinear Analysis and Optimization)

Abstract: We show that the viscosity approximation method coupled with the Krasnoselskii–Mann iteration generates a sequence that converges strongly to a fixed point of a given nonexpansive mapping in the setting of uniformly smooth Banach spaces. Our result shows that the geometric property (i.e., uniform smoothness) of the underlying space plays a role in relaxing the conditions on the choice of regularization parameters and step sizes in iterative methods.

1. Introduction

Iteratively finding a fixed point of a nonexpansive mapping is an active topic in nonlinear operator theory and optimization. A nonexpansive mapping does not increase distances. A typical example of a nonexpansive mapping is the metric (i.e., nearest-point) projection onto a closed convex subset of a Hilbert space. Thus, projection methods in Hilbert spaces fall, in principle, into the category of fixed point algorithms.
Whereas Picard’s successive iterates always converge in norm to the unique fixed point of a contraction, this is not the case for nonexpansive mappings (think of a counterclockwise rotation around the origin in the two-dimensional plane). Averaged iterative methods are thus employed. The Krasnoselskii–Mann (KM) method [1,2] is an averaged method. Let C be a nonempty closed convex subset of a real Banach space X and let $T: C \to C$ be a nonexpansive mapping [3] (i.e., $\|Tx - Ty\| \le \|x - y\|$ for $x, y \in C$). Then, KM generates a sequence of iterates $(x_n)$ through the iteration procedure:
$$x_{n+1} = (1 - \tau_n) x_n + \tau_n T x_n, \quad n = 0, 1, \ldots, \tag{1}$$
where the initial guess $x_0 \in C$ and $(\tau_n) \subset [0,1]$ is a sequence interpreted as step sizes.
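To make the averaging concrete, here is a minimal numerical sketch of the KM iteration (1) (ours, not part of the paper): T is taken to be a rotation of the plane about the origin, a nonexpansive isometry whose only fixed point is the origin and whose Picard iterates merely orbit, while the KM averages converge.

```python
import numpy as np

def km_iteration(T, x0, tau=0.5, n_iters=200):
    """Krasnoselskii-Mann: x_{n+1} = (1 - tau) x_n + tau T(x_n).

    A constant step size tau in (0, 1) satisfies the divergence
    condition sum_n tau (1 - tau) = infinity."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = (1.0 - tau) * x + tau * T(x)
    return x

# Rotation by 90 degrees: nonexpansive (an isometry), Fix(T) = {0}.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x

# Picard iterates keep unit norm; the averaged iterates contract to 0.
x_picard = np.linalg.matrix_power(R, 200) @ np.array([1.0, 0.0])
x_km = km_iteration(T, [1.0, 0.0])
print(np.linalg.norm(x_picard), np.linalg.norm(x_km))
```

The averaged map $(1-\tau)I + \tau T$ is a strict contraction here, which is exactly what the KM averaging buys over plain Picard iteration.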
Reich [4] proved the weak convergence of KM (1) to a fixed point of T (if any) in a Banach space X that is uniformly convex with a Fréchet differentiable norm, under the divergence condition $\sum_{n=0}^{\infty} \tau_n (1-\tau_n) = \infty$ (thus, constant step sizes $\tau_n \equiv \tau \in (0,1)$ work). Strong convergence does not hold in general, even in a Hilbert space; see the counterexample [5] in $\ell^2$. An implicit version of KM for strongly accretive and strongly pseudo-contractive mappings may also be found in [6].
Halpern’s method [7] is another averaged method for finding a fixed point of a nonexpansive mapping T. This method generates a sequence $(x_n)$ via the process:
$$x_{n+1} = \alpha_n u + (1-\alpha_n) T x_n, \quad n = 0, 1, \ldots, \tag{2}$$
where the initial guess $x_0 \in C$ is arbitrary, $u \in C$ is a (fixed) point known as the anchor, and $\alpha_n \in (0,1)$ is known as the regularization parameter at iteration n.
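A sketch of Halpern's scheme (2), again on a toy rotation (our illustration; the choice $\alpha_n = 1/(n+1)$ satisfies conditions (H1)–(H3) below):

```python
import numpy as np

def halpern(T, u, x0, n_iters=5000):
    """Halpern: x_{n+1} = alpha_n u + (1 - alpha_n) T(x_n),
    with regularization alpha_n = 1/(n + 1)."""
    x = np.asarray(x0, dtype=float)
    u = np.asarray(u, dtype=float)
    for n in range(n_iters):
        alpha = 1.0 / (n + 1)
        x = alpha * u + (1.0 - alpha) * T(x)
    return x

# Rotation by 90 degrees about the origin; Fix(T) = {0}, so the
# iterates are pulled toward the anchor u early on but converge to 0.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x
x_h = halpern(T, u=[2.0, 1.0], x0=[1.0, 0.0])
print(np.linalg.norm(x_h))
```

Unlike KM, each Halpern step blends in the fixed anchor u rather than the current iterate, which is what forces strong convergence under (H1)–(H3).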
There is an essential difference between KM (1) and Halpern (2): the former takes a convex combination of the nth iterate $x_n$ with $T x_n$ as the $(n+1)$th iterate $x_{n+1}$, whereas the latter takes a convex combination of the fixed anchor u with $T x_n$. Thus, Halpern's method (2) is, in nature, contractive with coefficient $1 - \alpha_n < 1$ at iteration n. Regarding the convergence of Halpern's method (2), we have the following result:
Theorem 1
([7,8,9,10,11]). Let X be a uniformly smooth Banach space, C a nonempty closed convex subset of X, and $T: C \to C$ a nonexpansive mapping with a fixed point. Then, the sequence $(x_n)$ generated by Halpern's algorithm (2) converges strongly to a fixed point of T if the following conditions are satisfied:
  • (H1) $\lim_{n\to\infty} \alpha_n = 0$,
  • (H2) $\sum_{n=0}^{\infty} \alpha_n = \infty$,
  • (H3) either $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty} \alpha_n / \alpha_{n+1} = 1$.
Halpern’s method was extended to the viscosity approximation method (VAM) for nonexpansive mappings [12,13,14], following Attouch [15], for selecting a particular fixed point of a given nonexpansive mapping. More precisely, VAM replaces the anchor u with a general ρ-contraction $f: C \to C$ (i.e., $\|f(x) - f(y)\| \le \rho \|x - y\|$ for all $x, y \in C$ and some $\rho \in [0,1)$). Consequently, VAM generates a sequence $(x_n)$ via the iteration process:
$$x_{n+1} = \alpha_n f(x_n) + (1-\alpha_n) T x_n, \quad n = 0, 1, \ldots. \tag{3}$$
It was proved that VAM (3) converges in norm to a fixed point of T in a Hilbert space [13] and, more generally, in a uniformly smooth Banach space [14] under the same conditions (H1)–(H3) of Theorem 1.
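The viscosity iteration (3) can be sketched in the same way (our illustrative example; the rotation T and the affine contraction f are ours, chosen only so that the fixed point set of T is known):

```python
import numpy as np

def vam(T, f, x0, n_iters=5000):
    """Viscosity approximation (3):
    x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T(x_n), alpha_n = 1/(n+1)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        alpha = 1.0 / (n + 1)
        x = alpha * f(x) + (1.0 - alpha) * T(x)
    return x

R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x                            # nonexpansive, Fix(T) = {0}
f = lambda x: 0.5 * x + np.array([1.0, 0.0])   # a (1/2)-contraction
x_v = vam(T, f, x0=[3.0, -2.0])
print(np.linalg.norm(x_v))
```

The contraction f replaces the fixed anchor of Halpern's scheme; since Fix(T) is a singleton here, the iterates approach the origin.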
Gwinner [16] combined KM (1) and VAM (3) to propose the following iteration method:
$$x_{n+1} = \beta_n [(1-\alpha_n) T x_n + \alpha_n f(x_n)] + (1-\beta_n) x_n, \quad n = 0, 1, \ldots, \tag{4}$$
where the initial guess $x_0 \in C$ is arbitrary and $(\alpha_n), (\beta_n)$ are two sequences in $[0,1]$ satisfying some conditions to be specified. This algorithm is obtained by first applying the viscosity approximation method to the nonexpansive mapping T and then applying KM to the viscosized mapping $(1-\alpha_n) T + \alpha_n f$. Hence, we call (4) the Krasnoselskii–Mann viscosity approximation method (KMVAM).
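A direct transcription of (4) into code (our sketch; the mappings and parameter sequences are illustrative choices, with the regularization $\alpha_n = 1/(n+1)$ and a constant step size):

```python
import numpy as np

def kmvam(T, f, x0, alpha, beta, n_iters=5000):
    """KMVAM (4): x_{n+1} = beta_n [(1 - alpha_n) T(x_n) + alpha_n f(x_n)]
                          + (1 - beta_n) x_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        a, b = alpha(n), beta(n)
        x = b * ((1.0 - a) * T(x) + a * f(x)) + (1.0 - b) * x
    return x

R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x                      # nonexpansive, Fix(T) = {0}
f = lambda x: 0.5 * x                    # contraction with f(0) = 0
x_k = kmvam(T, f, x0=[1.0, 2.0],
            alpha=lambda n: 1.0 / (n + 1),
            beta=lambda n: 0.5)          # constant step size
print(np.linalg.norm(x_k))
```

Each step first viscosizes T with f and then averages with the current iterate, exactly the two-stage construction described above.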
We now outline Gwinner's method for studying the convergence of (4). His method is somewhat implicit. Let $z_n$ be the unique fixed point of the contraction $T_n: C \to C$ defined by:
$$T_n z := (1-\alpha_n) T z + \alpha_n f(z), \quad z \in C. \tag{5}$$
That is, $z_n$ is the unique solution to the fixed point equation:
$$z_n = (1-\alpha_n) T z_n + \alpha_n f(z_n). \tag{6}$$
It is easily shown that $T_n$ is a $(1 - \alpha_n (1-\rho))$-contraction, with ρ being the contraction coefficient of f.
Gwinner proved the following result:
Theorem 2
([16], Theorem 4). Let X be a Banach space, C a bounded closed convex subset of X, and $T: C \to C$ a nonexpansive mapping with fixed points. Let $(x_n)$ and $(z_n)$ be defined by (4) and (6), respectively. Assume that $(\alpha_n)$ and $(\beta_n)$ satisfy the conditions:
  • (G1) $\lim_{n\to\infty} \alpha_n = 0$,
  • (G2) $\sum_{n=1}^{\infty} \alpha_n \beta_n = \infty$,
  • (G3) $\lim_{n\to\infty} \dfrac{|\alpha_{n+1} - \alpha_n|}{\alpha_{n+1}^2 \beta_n} = 0$.
Assume, in addition, that:
  • (G4) the sequence $(z_n)$ defined by the fixed point equation (6) converges in norm to a fixed point z of T.
Then, $(x_n)$ converges in norm to the same fixed point z of T.
We observed that Gwinner used condition (G4) to obtain the strong convergence of $(x_n)$. This raises two interesting problems:
(P1)
What Banach spaces X have the property that each sequence $(z_n)$ defined by (6) converges in norm to a fixed point of T, for any closed convex subset C of X, any nonexpansive mapping $T: C \to C$ with fixed points, and any contraction $f: C \to C$?
(P2)
Can a particular structure (i.e., geometric property) of X relax the conditions (G1)–(G3) of Theorem 2 on the choices of the parameters $(\alpha_n)$ and $(\beta_n)$?
Both problems have partial answers. Uniformly smooth Banach spaces [17] and reflexive Banach spaces with a weakly continuous duality map $J_\mu$ for some gauge μ [18] satisfy property (G4), which is known as Reich's property [18], since Reich [17] was the first to prove property (G4) (with f constant) in a uniformly smooth Banach space.
In this paper, we address the second problem and provide an affirmative answer. More precisely, we prove that in a uniformly smooth Banach space X, the conclusion of Theorem 2 remains valid if the square on $\alpha_{n+1}$ in the denominator of condition (G3) is removed. This is a genuine improvement in the choice of $(\alpha_n)$. Assuming constant step sizes $\beta_n \equiv \beta \in (0,1]$, conditions (G1)–(G3) are satisfied for the choice $\alpha_n = (1+n)^{-\tau}$ with $0 < \tau < 1$, which excludes the standard choice $\alpha_n = (1+n)^{-1}$. In contrast, our choice includes $\alpha_n = (1+n)^{-1}$ (see Theorem 3 and Remark 1 in Section 3).
The paper is organized as follows. The next section introduces uniformly smooth Banach spaces and two inequalities that are helpful in the subsequent argument. Our main result is presented in Section 3, where we prove the strong convergence of Algorithm (4), via a different proof, under conditions on the parameters $(\alpha_n)$ and $(\beta_n)$ that are weaker than Gwinner's conditions (G1)–(G3). Our result shows that intelligently exploiting the geometric property (i.e., uniform smoothness) of the underlying space X can improve the choices of the regularization parameters $(\alpha_n)$ and the step sizes $(\beta_n)$ in Algorithm (4). Finally, a brief summary is given in Section 4.

2. Preliminaries

2.1. Uniformly Smooth Banach Spaces

Let $(X, \|\cdot\|)$ be a real Banach space and let $S(X)$ be the unit sphere of X, i.e., $S(X) = \{x \in X : \|x\| = 1\}$. Consider the limit:
$$\lim_{\tau \to 0} \frac{\|x + \tau y\| - \|x\|}{\tau}, \tag{7}$$
where $x, y \in X$. A Banach space X is said to be smooth if the limit (7) exists for each pair $x, y \in S(X)$. A smooth Banach space X is:
  • Fréchet differentiable if the limit (7) is attained uniformly over $y \in S(X)$,
  • uniformly Gâteaux differentiable if the limit (7) is attained uniformly over $x \in S(X)$, and
  • uniformly smooth if the limit (7) is attained uniformly for $x, y \in S(X)$.
Examples of uniformly smooth Banach spaces include Hilbert spaces H and the $\ell^p$ (and also $L^p$) spaces for $1 < p < \infty$.
Uniform smoothness can be characterized by the normalized duality map $J: X \to 2^{X^*}$, which is defined by:
$$J(x) = \{\xi \in X^* : \langle x, \xi \rangle = \|x\|^2 = \|\xi\|^2\}, \quad x \in X. \tag{8}$$
X is uniformly smooth if and only if J is single-valued and uniformly continuous on each bounded subset of X. For more on the geometric properties of Banach spaces, the reader is referred to the book [19].

2.2. Two Lemmas

Below we list two lemmas that are used in the proof of the main result in Section 3.
Lemma 1.
In a Banach space X, the following inequality holds:
$$\|u + w\|^2 \le \|u\|^2 + 2\langle w, J(u+w) \rangle, \quad u, w \in X. \tag{9}$$
Lemma 2
([20]). Assume $(\tau_n)$ is a sequence of nonnegative real numbers satisfying the condition:
$$\tau_{n+1} \le (1-\lambda_n)\tau_n + \lambda_n \beta_n + \sigma_n$$
for all $n \ge 0$, where $(\lambda_n)$ and $(\sigma_n)$ are sequences in $(0,1)$ and $(\beta_n)$ is a sequence in $\mathbb{R}$. Assume:
(i)
$\sum_{n=1}^{\infty} \lambda_n = \infty$,
(ii)
$\limsup_{n\to\infty} \beta_n \le 0$ (or $\sum_{n=1}^{\infty} \lambda_n |\beta_n| < \infty$),
(iii)
$\sum_{n=1}^{\infty} \sigma_n < \infty$.
Then $\lim_{n\to\infty} \tau_n = 0$.
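As a quick numerical sanity check of Lemma 2 (ours, not part of the paper), one can iterate the recursion with equality, taking $\lambda_n = 1/(n+2)$ (divergent sum), $\beta_n = 1/(n+2) \to 0$, and the summable perturbation $\sigma_n = 1/(n+2)^2$:

```python
def lemma2_recursion(n_iters=100_000, tau0=1.0):
    """Iterate tau_{n+1} = (1 - lam_n) tau_n + lam_n * beta_n + sigma_n.

    lam_n = 1/(n+2) has a divergent sum, beta_n = 1/(n+2) tends to 0,
    and sigma_n = 1/(n+2)**2 is summable, so Lemma 2 predicts tau_n -> 0."""
    tau = tau0
    for n in range(n_iters):
        lam = 1.0 / (n + 2)
        beta = 1.0 / (n + 2)
        sigma = 1.0 / (n + 2) ** 2
        tau = (1.0 - lam) * tau + lam * beta + sigma
    return tau

tau_final = lemma2_recursion()
print(tau_final)  # small: the recursion drives tau toward 0
```

For these choices the decay is roughly of order $\log n / n$, so the iterate is already tiny after $10^5$ steps.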

3. Strong Convergence of Krasnoselskii–Mann Viscosity Approximation Method

Let X be a Banach space and let C be a nonempty closed convex subset of X. For convenience, we use the notation:
  • $\mathcal{N}_C := \{T \mid T: C \to C \text{ is a nonexpansive mapping such that } \mathrm{Fix}(T) \ne \emptyset\}$,
  • $\mathrm{Fix}(T) := \{x \in C : Tx = x\}$, the set of fixed points of T,
  • $\Pi_C := \{f \mid f: C \to C \text{ is a } \rho\text{-contraction for some } \rho \in [0,1)\}$.
Some related classes of mappings may be found in [21,22].
Given $T \in \mathcal{N}_C$, $f \in \Pi_C$, and $\alpha \in (0,1)$, define a contraction $T_\alpha \in \Pi_C$ by:
$$T_\alpha x := (1-\alpha) T x + \alpha f(x), \quad x \in C. \tag{10}$$
It is easy to show that $T_\alpha$ is a $(1 - \alpha(1-\rho))$-contraction. Let $z_\alpha \in C$ be the unique fixed point of $T_\alpha$. Equivalently, we have:
$$z_\alpha = (1-\alpha) T z_\alpha + \alpha f(z_\alpha). \tag{11}$$
Lemma 3
([14,17]). Assume X is a uniformly smooth Banach space. Then $(z_\alpha)$ converges as $\alpha \to 0$ to a point $Q(f) \in \mathrm{Fix}(T)$, and $Q: \Pi_C \to \mathrm{Fix}(T)$ defines a retraction satisfying the variational inequality:
$$\langle (I - f) Q(f), J(Q(f) - p) \rangle \le 0, \quad f \in \Pi_C, \; p \in \mathrm{Fix}(T). \tag{12}$$
Lemma 4.
Let $T \in \mathcal{N}_C$ and $f \in \Pi_C$. Then, for $x \in C$ and $p \in \mathrm{Fix}(T)$, we have:
$$\|T_\alpha x - p\| \le (1 - \alpha(1-\rho))\|x - p\| + \alpha \|f(p) - p\|. \tag{13}$$
Here, $\rho \in [0,1)$ is the contraction coefficient of f.
Proof. 
We have, noticing that $T_\alpha p = (1-\alpha) p + \alpha f(p)$:
$$\|T_\alpha x - p\| \le \|T_\alpha x - T_\alpha p\| + \|T_\alpha p - p\| \le (1 - \alpha(1-\rho))\|x - p\| + \alpha \|f(p) - p\|.$$
This proves (13). □
In terms of $T_{\alpha_n}$, the KMVAM (4) can be rewritten as:
$$x_{n+1} = \beta_n T_{\alpha_n} x_n + (1-\beta_n) x_n. \tag{14}$$
We next discuss certain properties of ( x n ) .
Property 1.
$(x_n)$ is bounded. Indeed, for $p \in \mathrm{Fix}(T)$, we have:
$$\begin{aligned} \|x_{n+1} - p\| &\le \beta_n \|T_{\alpha_n} x_n - p\| + (1-\beta_n)\|x_n - p\| \\ &\le \beta_n [(1-\alpha_n(1-\rho))\|x_n - p\| + \alpha_n \|f(p) - p\|] + (1-\beta_n)\|x_n - p\| \\ &= (1 - \alpha_n \beta_n (1-\rho))\|x_n - p\| + \alpha_n \beta_n \|f(p) - p\| \\ &\le \max\{\|x_n - p\|, (1-\rho)^{-1} \|f(p) - p\|\}. \end{aligned}$$
By induction, we have:
$$\|x_n - p\| \le \max\{\|x_0 - p\|, (1-\rho)^{-1} \|f(p) - p\|\}$$
for all $n \ge 0$; in particular, $(x_n)$ is bounded.
Property 2.
Asymptotic estimate for $\|x_{n+1} - x_n\|$:
$$\|x_{n+1} - x_n\| \le (1 - \alpha_n \beta_n (1-\rho))\|x_n - x_{n-1}\| + (|\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1}| + |\beta_n - \beta_{n-1}|) M, \tag{15}$$
where M is a constant such that $M \ge \sup\{\|T x_n - x_n\| + \|f(x_n) - x_n\| : n \ge 0\}$.
Toward this, we use (14) to obtain:
$$x_{n+1} - x_n = \beta_n T_{\alpha_n} x_n - \beta_{n-1} T_{\alpha_{n-1}} x_{n-1} + (1-\beta_n) x_n - (1-\beta_{n-1}) x_{n-1}.$$
After some manipulation, we can rewrite $x_{n+1} - x_n$ as:
$$\begin{aligned} x_{n+1} - x_n &= \beta_n (T_{\alpha_n} x_n - T_{\alpha_n} x_{n-1}) + \beta_n (T_{\alpha_n} x_{n-1} - T_{\alpha_{n-1}} x_{n-1}) \\ &\quad + (\beta_n - \beta_{n-1})(T_{\alpha_{n-1}} x_{n-1} - x_{n-1}) + (1-\beta_n)(x_n - x_{n-1}) \\ &= \beta_n (T_{\alpha_n} x_n - T_{\alpha_n} x_{n-1}) + \beta_n (\alpha_n - \alpha_{n-1})(f(x_{n-1}) - T x_{n-1}) \\ &\quad + \alpha_{n-1} (\beta_n - \beta_{n-1})(f(x_{n-1}) - T x_{n-1}) + (\beta_n - \beta_{n-1})(T x_{n-1} - x_{n-1}) + (1-\beta_n)(x_n - x_{n-1}) \\ &= \beta_n (T_{\alpha_n} x_n - T_{\alpha_n} x_{n-1}) + (\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1})(f(x_{n-1}) - T x_{n-1}) \\ &\quad + (\beta_n - \beta_{n-1})(T x_{n-1} - x_{n-1}) + (1-\beta_n)(x_n - x_{n-1}). \end{aligned}$$
Since $T_{\alpha_n}$ is a $(1-\alpha_n(1-\rho))$-contraction, it follows that:
$$\begin{aligned} \|x_{n+1} - x_n\| &\le \beta_n (1 - \alpha_n(1-\rho))\|x_n - x_{n-1}\| + |\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1}| M + |\beta_n - \beta_{n-1}| M + (1-\beta_n)\|x_n - x_{n-1}\| \\ &= (1 - \alpha_n \beta_n (1-\rho))\|x_n - x_{n-1}\| + (|\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1}| + |\beta_n - \beta_{n-1}|) M. \end{aligned}$$
This is (15), and Property 2 is verified.
Property 3.
Approximate fixed point property of $(x_n)$: $\|x_n - T x_n\| \le \|x_n - x_{n+1}\|/\beta_n + \alpha_n M$. Indeed, from (14), we have:
$$\begin{aligned} \|x_{n+1} - T x_n\| &\le \beta_n \|T_{\alpha_n} x_n - T x_n\| + (1-\beta_n)\|x_n - T x_n\| \\ &= \beta_n \alpha_n \|T x_n - f(x_n)\| + (1-\beta_n)\|x_n - T x_n\| \\ &\le \alpha_n \beta_n M + (1-\beta_n)\|x_n - T x_n\|. \end{aligned}$$
It turns out that:
$$\|x_n - T x_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - T x_n\| \le \|x_n - x_{n+1}\| + \alpha_n \beta_n M + (1-\beta_n)\|x_n - T x_n\|.$$
Consequently, $\|x_n - T x_n\| \le \|x_n - x_{n+1}\|/\beta_n + \alpha_n M$, and Property 3 is proved.
Lemma 5.
Suppose $\|x_n - T x_n\| \to 0$. Then:
$$\limsup_{n \to \infty} \langle x^* - f(x^*), J(x^* - x_n) \rangle \le 0, \tag{16}$$
where $x^* = Q(f)$, and Q is the retraction defined by (12).
Proof. 
Notice that $z_\alpha \to x^*$ in norm as $\alpha \to 0$, where $z_\alpha$ satisfies the fixed point Equation (11), from which we obtain:
$$z_\alpha - x_n = (1-\alpha)(T z_\alpha - x_n) + \alpha (f(z_\alpha) - x_n). \tag{17}$$
By Lemma 1, we derive that:
$$\begin{aligned} \|z_\alpha - x_n\|^2 &\le (1-\alpha)^2 \|T z_\alpha - x_n\|^2 + 2\alpha \langle f(z_\alpha) - x_n, J(z_\alpha - x_n) \rangle \\ &\le (1-\alpha)^2 (\|T z_\alpha - T x_n\| + \|T x_n - x_n\|)^2 + 2\alpha (\langle f(z_\alpha) - z_\alpha, J(z_\alpha - x_n) \rangle + \|z_\alpha - x_n\|^2) \\ &\le (1-\alpha)^2 \|z_\alpha - x_n\|^2 + \|T x_n - x_n\| (2\|z_\alpha - x_n\| + \|T x_n - x_n\|) \\ &\quad + 2\alpha (\langle f(z_\alpha) - z_\alpha, J(z_\alpha - x_n) \rangle + \|z_\alpha - x_n\|^2). \end{aligned}$$
Therefore:
$$\langle z_\alpha - f(z_\alpha), J(z_\alpha - x_n) \rangle \le \frac{\tau}{\alpha} \|T x_n - x_n\| + \alpha \tau, \tag{18}$$
where $\tau > 0$ is such that $\tau \ge \max\{\|z_\alpha - x_n\|^2, \|z_\alpha - x_n\| + (1/2)\|T x_n - x_n\|\}$ for all $\alpha \in (0,1)$ and $n \ge 0$.
Since $\|T x_n - x_n\| \to 0$, it follows from (18) that:
$$\limsup_{n \to \infty} \langle z_\alpha - f(z_\alpha), J(z_\alpha - x_n) \rangle \le \alpha \tau. \tag{19}$$
Now, since $z_\alpha \to x^*$ in norm as $\alpha \to 0$ and since the duality map J is norm-to-norm uniformly continuous over any bounded subset of X, taking the limit as $\alpha \to 0$ in (19) and swapping the order of the two limits yields (16). □
We are now in a position to prove the strong convergence of the KMVAM (4) by showing that wise manipulation of the geometric property (i.e., uniform smoothness) of the underlying space X can improve Theorem 2. Hence, the solution to problem (P2) in the Introduction is affirmative.
Theorem 3.
Let X be a uniformly smooth Banach space, C a nonempty closed convex subset of X, $T \in \mathcal{N}_C$, and $f \in \Pi_C$. Assume the following conditions:
  • (A1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$,
  • (A2) either $\sum_{n=1}^{\infty} (|\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1}| + |\beta_n - \beta_{n-1}|) < \infty$, or $\lim_{n\to\infty} \dfrac{|\alpha_n \beta_n - \alpha_{n-1} \beta_{n-1}|}{\alpha_n \beta_n} = 0$ (i.e., $\lim_{n\to\infty} \dfrac{\alpha_{n-1} \beta_{n-1}}{\alpha_n \beta_n} = 1$) and $\sum_{n=1}^{\infty} |\beta_n - \beta_{n-1}| < \infty$,
  • (A3) $\beta_n \ge \underline{\beta} > 0$ for all $n \ge 0$.
Then, $(x_n)$ converges strongly to $x^* = Q(f)$, where Q is the retraction defined by (12).
Proof. 
Noticing that $T_{\alpha_n} x_n = (1-\alpha_n) T x_n + \alpha_n f(x_n)$, we have:
$$x_{n+1} - x^* = \beta_n (T_{\alpha_n} x_n - x^*) + (1-\beta_n)(x_n - x^*) = \beta_n (1-\alpha_n)(T x_n - x^*) + (1-\beta_n)(x_n - x^*) + \alpha_n \beta_n (f(x_n) - x^*).$$
Applying Lemma 1, we obtain:
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \|\beta_n (1-\alpha_n)(T x_n - x^*) + (1-\beta_n)(x_n - x^*)\|^2 + 2\alpha_n \beta_n \langle f(x_n) - x^*, J(x_{n+1} - x^*) \rangle \\ &\le \beta_n (1-\alpha_n)^2 \|T x_n - x^*\|^2 + (1-\beta_n)\|x_n - x^*\|^2 + 2\alpha_n \beta_n \langle f(x_n) - x^*, J(x_{n+1} - x^*) \rangle \\ &\le [1 - \beta_n + \beta_n (1-\alpha_n)^2] \|x_n - x^*\|^2 + 2\alpha_n \beta_n \langle f(x_n) - x^*, J(x_{n+1} - x^*) \rangle, \end{aligned} \tag{20}$$
where the second inequality uses the convexity of $\|\cdot\|^2$.
Since f is a ρ-contraction, we obtain:
$$\begin{aligned} \langle f(x_n) - x^*, J(x_{n+1} - x^*) \rangle &= \langle f(x_n) - f(x^*), J(x_{n+1} - x^*) \rangle + \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle \\ &\le \rho \|x_n - x^*\| \cdot \|x_{n+1} - x^*\| + \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle \\ &\le (\rho/2)(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2) + \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle. \end{aligned}$$
Substituting this into (20), we obtain:
$$\|x_{n+1} - x^*\|^2 \le (1 - \alpha_n \beta_n (2 - \rho - \alpha_n))\|x_n - x^*\|^2 + \rho \alpha_n \beta_n \|x_{n+1} - x^*\|^2 + 2\alpha_n \beta_n \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle.$$
Therefore:
$$\|x_{n+1} - x^*\|^2 \le \frac{1 - \alpha_n \beta_n (2 - \rho - \alpha_n)}{1 - \rho \alpha_n \beta_n} \|x_n - x^*\|^2 + \frac{2\alpha_n \beta_n}{1 - \rho \alpha_n \beta_n} \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle. \tag{21}$$
Setting
$$\gamma_n = 1 - \frac{1 - \alpha_n \beta_n (2 - \rho - \alpha_n)}{1 - \rho \alpha_n \beta_n} = \frac{\alpha_n \beta_n (2(1-\rho) - \alpha_n)}{1 - \rho \alpha_n \beta_n} = O(\alpha_n \beta_n) \tag{22}$$
and
$$\delta_n = \frac{2}{2(1-\rho) - \alpha_n} \langle f(x^*) - x^*, J(x_{n+1} - x^*) \rangle,$$
we can rewrite (21) as:
$$\|x_{n+1} - x^*\|^2 \le (1 - \gamma_n)\|x_n - x^*\|^2 + \gamma_n \delta_n. \tag{23}$$
To use Lemma 2 to prove $\|x_n - x^*\|^2 \to 0$, we need to verify these two conditions:
(γ)
$\sum_{n=0}^{\infty} \gamma_n = \infty$, and
(δ)
$\limsup_{n\to\infty} \delta_n \le 0$.
First, we verify (γ). From (22) and (A3), we find that $\gamma_n$ is of the exact order of $\alpha_n$ (in particular, $\gamma_n \ge c\,\alpha_n$ for some constant $c > 0$ and all large n), which implies (γ) by virtue of (A1).
Regarding (δ), using condition (A2), we can apply Lemma 2 to Property 2 to obtain $\|x_{n+1} - x_n\| \to 0$, which in turn implies $\|x_n - T x_n\| \to 0$ via Property 3. Then, by Lemma 5, we obtain (16), which implies (δ) since $\alpha_n \to 0$.
Now the two conditions (γ) and (δ) are sufficient to guarantee $\|x_n - x^*\| \to 0$ (i.e., $x_n \to x^*$ in norm) by virtue of Lemma 2. This completes the proof. □
Remark 1.
In the proof of Theorem 3, we exploited the uniform smoothness of X (i.e., the norm-to-norm uniform continuity of the duality map J). As a result, we relaxed the conditions on the selection of the parameters $(\alpha_n)$ and $(\beta_n)$. Note that the parameter $\alpha_n$ is referred to as a regularization parameter and therefore tends to zero, whereas the parameter $\beta_n$, being a step size in KM, is better not diminishing. In the case of a constant step size, i.e., $\beta_n \equiv \beta$ for all n, the conditions (G1)–(G3) of Theorem 2 reduce to:
  • (G1)' $\lim_{n\to\infty} \alpha_n = 0$,
  • (G2)' $\sum_{n=1}^{\infty} \alpha_n = \infty$,
  • (G3)' $\lim_{n\to\infty} \dfrac{|\alpha_{n+1} - \alpha_n|}{\alpha_{n+1}^2} = 0$.
The conditions (A1)–(A3) of Theorem 3 become:
  • (A1)' $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$,
  • (A2)' either $\sum_{n=1}^{\infty} |\alpha_n - \alpha_{n-1}| < \infty$ or $\lim_{n\to\infty} \dfrac{|\alpha_n - \alpha_{n-1}|}{\alpha_n} = 0$ (i.e., $\lim_{n\to\infty} \dfrac{\alpha_{n-1}}{\alpha_n} = 1$).
(A2)' is genuinely weaker than (G3)'. For instance, if we take $\alpha_n = (1+n)^{-\tau}$ for all $n \ge 0$, then (G1)'–(G3)' hold for $0 < \tau < 1$, but (A1)'–(A2)' hold for $0 < \tau \le 1$.
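For the borderline case $\tau = 1$ (i.e., $\alpha_n = (1+n)^{-1}$), the gap between (G3)' and (A2)' can be made explicit by a short computation (spelled out here for illustration):

```latex
% For \alpha_n = (1+n)^{-1} we have
%   |\alpha_{n+1} - \alpha_n| = \frac{1}{(n+1)(n+2)}.
% Condition (G3)' fails:
\frac{|\alpha_{n+1} - \alpha_n|}{\alpha_{n+1}^{2}}
   = \frac{(n+2)^{2}}{(n+1)(n+2)}
   = \frac{n+2}{n+1} \;\longrightarrow\; 1 \neq 0,
% while the quotient in (A2)' vanishes:
\frac{|\alpha_{n+1} - \alpha_n|}{\alpha_{n+1}}
   = \frac{n+2}{(n+1)(n+2)}
   = \frac{1}{n+1} \;\longrightarrow\; 0.
```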
Note that the conditions (G1)'–(G3)' were also used by Lions [8] to prove the strong convergence of Halpern's method (2) in a Hilbert space; they were improved by Xu [11], who removed the square in the denominator of condition (G3)' in a uniformly smooth Banach space. Note also that in a recent paper [23], the conclusion of Theorem 3 was proved under Gwinner's conditions (G1)–(G3) of Theorem 2 in a reflexive Banach space with a weakly continuous duality map. The class of uniformly smooth Banach spaces is different from the class of reflexive Banach spaces with a weakly continuous duality map. For example, $L^p$ ($1 < p < \infty$, $p \ne 2$) is uniformly smooth but fails to have a weakly continuous duality map [24].
A key difference between our proof of Theorem 3 and Gwinner's proof of ([16], Theorem 4) is that we used the uniform smoothness of the underlying space X, which allowed us to extract more helpful information about $(x_n)$ from the implicitly defined net $(z_\alpha)$ (see (18) and (19)) and thus led to a more accurate estimate of $\|x_n - x^*\|$, whereas Gwinner estimated $\|x_n - z_{\alpha_n}\|$ (not $\|x_n - x^*\|$ directly), due to the lack of available geometric properties of X. This again confirms that the geometric properties of the underlying Banach space can improve the convergence of iterative methods in Banach spaces.

4. Conclusions

In this paper, we proved the strong convergence of the Krasnoselskii–Mann viscosity approximation method (KMVAM) to a fixed point of a given nonexpansive self-mapping T of a closed convex subset C of a uniformly smooth Banach space X, and identified the limit as $Q(f)$, where Q is the retraction, determined by the variational inequality (12), from the family of all contractions on C onto the set of fixed points of T. Our argument showed that wise manipulation of the uniform smoothness of X can relax the selection of the regularization parameters and step sizes in the KMVAM.

Author Contributions

N.A., T.A., S.C., and H.-K.X. contributed equally in this research paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no (RG-1440-058).

Acknowledgments

The authors thank the three anonymous referees for their helpful comments that improved the presentation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krasnoselski, M.A. Two remarks on the method of successive approximation. Uspehi Mat. Nauk 1955, 10, 123–127. [Google Scholar]
  2. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  3. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory. In Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; Volume 28. [Google Scholar]
  4. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276. [Google Scholar] [CrossRef] [Green Version]
  5. Genel, A.; Lindenstrauss, J. An example concerning fixed points. Israel J. Math. 1975, 22, 81–86. [Google Scholar] [CrossRef]
  6. Ćirić, L.; Rafiq, A.; Radenović, S.; Rajović, M.; Ume, J.S. On Mann implicit iterations for strongly accretive and strongly pseudo-contractive mappings. Appl. Math. Comput. 2008, 189, 128–137. [Google Scholar] [CrossRef]
  7. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef] [Green Version]
  8. Lions, P.L. Approximation de points fixes de contractions. C.R. Acad. Sci. Sèr. A-B Paris 1977, 284, 1357–1359. [Google Scholar]
  9. Lopez, G.; Martín-Márquez, V.; Xu, H.K. Halpern’s iteration for nonexpansive mappings. Contemp. Math. 2010, 513, 211–230. [Google Scholar]
  10. Wittmann, R. Approximation of fixed points of nonexpansive mappings. Arch. Math. 1992, 58, 486–491. [Google Scholar] [CrossRef]
  11. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  12. Aibinu, M.O.; Kim, J.K. On the rate of convergence of viscosity implicit iterative algorithms. Nonlinear Funct. Anal. Appl. 2020, 25, 135–152. [Google Scholar]
  13. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef] [Green Version]
  14. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291. [Google Scholar] [CrossRef] [Green Version]
  15. Attouch, H. Viscosity approximation methods for minimization problems. SIAM J. Optim. 1996, 6, 769–806. [Google Scholar] [CrossRef]
  16. Gwinner, J. On the convergence of some iteration processes in uniformly convex Banach spaces. Proc. Am. Math. Soc. 1978, 71, 29–35. [Google Scholar] [CrossRef]
  17. Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292. [Google Scholar] [CrossRef] [Green Version]
  18. O’Hara, J.G.; Pillay, P.; Xu, H.K. Iterative approaches to convex feasibility problems in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2006, 64, 2022–2042. [Google Scholar] [CrossRef]
  19. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1990. [Google Scholar]
  20. López, G.; Martín-Márquez, V.; Xu, H.K. Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal. Real World Appl. 2009, 10, 2369–2383. [Google Scholar] [CrossRef]
  21. Manojlović, V. On conformally invariant extremal problems. Appl. Anal. Discret. Math. 2009, 3, 97–119. [Google Scholar] [CrossRef] [Green Version]
  22. Todorčević, V. Subharmonic behavior and quasiconformal mappings. Anal. Math. Phys. 2019, 9, 1211–1225. [Google Scholar] [CrossRef]
  23. Xu, H.K.; Altwaijry, N.; Chebbi, S. Strong convergence of Mann’s iteration process in Banach spaces. Mathematics 2020, 8, 954. [Google Scholar] [CrossRef]
  24. Opial, Z. Weak convergence of the successive approximations for non expansive mappings in Banach spaces. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef] [Green Version]

Share and Cite

Altwaijry, N.; Aldhaban, T.; Chebbi, S.; Xu, H.-K. Krasnoselskii–Mann Viscosity Approximation Method for Nonexpansive Mappings. Mathematics 2020, 8, 1153. https://doi.org/10.3390/math8071153