1. Introduction
Iteratively finding a fixed point for a nonexpansive mapping is an active topic of nonlinear operator theory and optimization. A nonexpansive mapping does not increase distances. A typical example of a nonexpansive mapping is metric (i.e., nearest point) projection onto a closed convex subset of a Hilbert space. Thus, projection methods in Hilbert spaces fall, in principle, into the category of fixed point algorithms.
Whereas Picard’s successive iterates always converge in the norm topology to the unique fixed point of a contraction, this is not the case for a nonexpansive mapping (think of a counterclockwise rotation around the origin in the two-dimensional plane). Averaged iterative methods are thus employed. The Krasnoselskii–Mann (KM) method [1,2] is an averaged method. Let C be a nonempty closed convex subset of a real Banach space X and let $T : C \to C$ be a nonexpansive mapping [3] (i.e., $\|Tx - Ty\| \le \|x - y\|$ for $x, y \in C$). Then, KM generates a sequence of iterates $\{x_n\}$ through the iteration procedure:
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_n T x_n, \quad n \ge 0, \tag{1}$$
where the initial guess $x_0 \in C$ and $\{\lambda_n\}$ is a sequence in $(0,1)$ whose terms are interpreted as step sizes.
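For intuition, the following minimal numerical sketch (ours, not from the original sources; the rotation angle, step size, and iteration count are illustrative assumptions) contrasts Picard iteration with KM (1) for a planar rotation, a nonexpansive map whose only fixed point is the origin:

```python
import numpy as np

# Rotation by 90 degrees about the origin: an isometry (hence nonexpansive)
# whose unique fixed point is 0, yet whose Picard iterates merely cycle.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

lam = 0.5  # constant step size lambda_n = 1/2
x_picard = np.array([1.0, 0.0])
x_km = np.array([1.0, 0.0])

for n in range(50):
    x_picard = T(x_picard)                   # Picard: stays on the unit circle
    x_km = (1 - lam) * x_km + lam * T(x_km)  # KM step (1)

print(np.linalg.norm(x_picard))  # ~ 1.0: no convergence
print(np.linalg.norm(x_km))      # ~ 3e-8: converges to the fixed point 0
```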
Reich [4] proved the weak convergence to a fixed point of T (if any) of KM (1) in a Banach space X that is uniformly convex with a Fréchet differentiable norm, under the divergence condition $\sum_{n=0}^{\infty} \lambda_n(1 - \lambda_n) = \infty$ (thus, constant step sizes $\lambda_n \equiv \lambda \in (0,1)$ work). Strong convergence does not hold in general, even in a Hilbert space; see the counterexample [5] in $\ell^2$. An implicit version of KM for strongly accretive and strongly pseudo-contractive mappings may also be found in [6].
Halpern’s method [7] is another averaged method for finding a fixed point of a nonexpansive mapping T. This method generates a sequence $\{x_n\}$ via the process:
$$x_{n+1} = t_n u + (1 - t_n)T x_n, \quad n \ge 0, \tag{2}$$
where the initial guess $x_0 \in C$ is arbitrary, $u \in C$ is a (fixed) point known as the anchor, and $t_n \in (0,1)$ is known as the regularization parameter at iteration n.

There is an essential difference between KM (1) and Halpern (2): the former takes a convex combination of the nth iterate $x_n$ with $Tx_n$ as the $(n+1)$th iterate $x_{n+1}$, and the latter takes a convex combination of the fixed anchor u with $Tx_n$ as the $(n+1)$th iterate $x_{n+1}$.
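A one-line computation (added here for completeness) makes this quantitative: for fixed $t \in (0,1)$ and $x, y \in C$,
$$\bigl\|\bigl(t u + (1 - t)T x\bigr) - \bigl(t u + (1 - t)T y\bigr)\bigr\| = (1 - t)\|T x - T y\| \le (1 - t)\|x - y\|.$$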
Thus, Halpern’s method (2) is, in nature, contractive with coefficient $1 - t_n$ at iteration n. Regarding the convergence of Halpern’s method (2), we have the following result:
Theorem 1 ([7,8,9,10,11]). Let X be a uniformly smooth Banach space, C a nonempty closed convex subset of X, and $T : C \to C$ a nonexpansive mapping with a fixed point. Then, the sequence $\{x_n\}$ generated by Halpern’s algorithm (2) converges strongly to a fixed point of T if the following conditions are satisfied:
- (H1) $\lim_{n\to\infty} t_n = 0$;
- (H2) $\sum_{n=0}^{\infty} t_n = \infty$;
- (H3) either $\sum_{n=0}^{\infty} |t_{n+1} - t_n| < \infty$ or $\lim_{n\to\infty} t_n/t_{n+1} = 1$.

Halpern’s method was extended to the viscosity approximation method (VAM) for nonexpansive mappings [12,13,14], following Attouch [15], for selecting a particular fixed point of a given nonexpansive mapping. More precisely, VAM replaces the anchor u with a general ρ-contraction $f : C \to C$ (i.e., $\|f(x) - f(y)\| \le \rho\|x - y\|$ for all $x, y \in C$ and some $\rho \in [0,1)$). Consequently, VAM generates a sequence $\{x_n\}$ via the iteration process:
$$x_{n+1} = t_n f(x_n) + (1 - t_n)T x_n, \quad n \ge 0. \tag{3}$$
It was proved that VAM (3) converges in norm to a fixed point of T in a Hilbert space [13] and, more generally, in a uniformly smooth Banach space [14] under the same conditions (H1)–(H3) as in Theorem 1.
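As a concrete numerical illustration of this selection feature (a minimal sketch of ours; the mapping T, the anchor u, and the sequence $t_n$ are illustrative assumptions), take $X = \mathbb{R}^2$ and let T be the metric projection onto the x-axis, so that Fix(T) is the entire axis. Halpern’s method (2), that is, VAM (3) with the constant contraction $f \equiv u$, then converges to the particular fixed point nearest the anchor:

```python
import numpy as np

T = lambda x: np.array([x[0], 0.0])  # projection onto the x-axis; Fix(T) = the x-axis
u = np.array([2.0, 5.0])             # anchor
x = np.array([-7.0, 3.0])            # arbitrary initial guess

for n in range(100000):
    t = 1.0 / (n + 1)                # satisfies (H1)-(H3)
    x = t * u + (1 - t) * T(x)       # Halpern step (2); VAM (3) with f(x) = u

print(x)  # ~ [2, 0]: the projection of the anchor u onto Fix(T)
```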
Gwinner [16] combined KM (1) and VAM (3) to propose the following iteration method:
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_n\bigl(t_n f(x_n) + (1 - t_n)T x_n\bigr), \quad n \ge 0, \tag{4}$$
where the initial guess $x_0 \in C$ is arbitrary and $\{\lambda_n\}$ and $\{t_n\}$ are two sequences in $(0,1)$ satisfying some conditions to be specified. This algorithm is obtained by first applying the viscosity approximation method to the nonexpansive mapping T and then applying KM to the viscosized mapping $t_n f + (1 - t_n)T$. Hence, we call (4) the Krasnoselskii–Mann viscosity approximation method (KMVAM).
We now outline Gwinner’s method for studying the convergence of (4). His method is somewhat implicit. Let $z_n$ be the unique fixed point of the contraction $T_n$ defined by:
$$T_n x := t_n f(x) + (1 - t_n)T x, \quad x \in C; \tag{5}$$
that is, $z_n$ is the unique solution to the fixed point equation:
$$z_n = t_n f(z_n) + (1 - t_n)T z_n. \tag{6}$$
It is shown that $T_n$ is a $(1 - (1 - \rho)t_n)$-contraction, with ρ being the contraction coefficient of f.
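Before turning to Gwinner’s theorem, here is a minimal runnable sketch of the explicit scheme (4) (our illustration; T, f, λ, and $t_n$ are assumptions chosen for demonstration, with the choice $t_n = 1/n$ anticipating the discussion in Section 3):

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                            # nonexpansive rotation; Fix(T) = {0}
f = lambda x: 0.5 * x + np.array([1.0, 0.0])   # a contraction with coefficient rho = 1/2
lam = 0.5                                      # constant KM step size

x = np.array([5.0, -3.0])
for n in range(1, 20001):
    t = 1.0 / n
    # KMVAM step (4): KM averaging applied to the viscosized map t*f + (1 - t)*T
    x = (1 - lam) * x + lam * (t * f(x) + (1 - t) * T(x))

print(np.linalg.norm(x))  # small: x_n approaches 0, the unique fixed point of T
```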
Gwinner proved the following result:
Theorem 2 ([16], Theorem 4). Let X be a Banach space, C a bounded closed convex subset of X, and $T : C \to C$ a nonexpansive mapping with fixed points. Let $\{x_n\}$ and $\{z_n\}$ be defined by (4) and (6), respectively. Assume that $\{\lambda_n\}$ and $\{t_n\}$ satisfy the conditions:
- (G1) $\lim_{n\to\infty} t_n = 0$;
- (G2) $\sum_{n=0}^{\infty} \lambda_n t_n = \infty$;
- (G3) $\lim_{n\to\infty} \dfrac{|t_{n+1} - t_n|}{(\lambda_{n+1} t_{n+1})^2} = 0$.

Assume, in addition, that:
- (G4) the sequence $\{z_n\}$ defined by the fixed point equation (6) converges in norm to a fixed point z of T.

Then, $\{x_n\}$ converges in norm to the same fixed point z of T.
We observed that Gwinner used condition (G4) to obtain the strong convergence of $\{x_n\}$. This raises two interesting problems:
- (P1) Which Banach spaces X have the property that every sequence $\{z_n\}$ defined by (6) converges in norm to a fixed point of T, for any closed convex subset C of X, any nonexpansive mapping $T : C \to C$ with fixed points, and any contraction $f : C \to C$?
- (P2) Can a particular structure (i.e., a geometric property) of X relax the conditions (G1)–(G3) in Theorem 2 on the choices of the parameters $\{\lambda_n\}$ and $\{t_n\}$?
Both problems have partial answers. Uniformly smooth Banach spaces [17] and reflexive Banach spaces with a weakly continuous duality map $J_\varphi$ for some gauge $\varphi$ [18] satisfy the property (G4). This property is known as Reich’s property [18], since Reich [17] first proved the property (G4) (with f being a constant mapping) in a uniformly smooth Banach space.
In this paper, we address the second problem and provide an affirmative answer. More precisely, we prove that, in a uniformly smooth Banach space X, the conclusion of Theorem 2 remains valid if the square on $\lambda_{n+1} t_{n+1}$ in the denominator of condition (G3) is removed. This is a genuine improvement in the admissible choices of $\{t_n\}$. Assuming constant step sizes $\lambda_n \equiv \lambda$, conditions (G1)–(G3) are satisfied by the choice $t_n = n^{-\sigma}$ only for $0 < \sigma < 1$, which excludes the standard choice $t_n = 1/n$. In contrast, our conditions do admit $t_n = 1/n$ (see Theorem 3 and Remark 1 in Section 3).
The paper is organized as follows. The next section introduces uniformly smooth Banach spaces and two inequalities that are helpful in the subsequent arguments. Our main result is presented in Section 3, where we prove the strong convergence of Algorithm (4) under conditions on the parameters $\{t_n\}$ and $\{\lambda_n\}$ that are weaker than Gwinner’s conditions (G1)–(G3), with a different proof. Our result shows that intelligently manipulating the geometric property (i.e., uniform smoothness) of the underlying space X can improve the choices of the regularization parameters $\{t_n\}$ and the step sizes $\{\lambda_n\}$ in the algorithm (4). Finally, a brief summary of this paper is given in Section 4.
3. Strong Convergence of Krasnoselskii–Mann Viscosity Approximation Method
Let X be a Banach space and let C be a nonempty closed convex subset of X. For convenience, we use the following notation:
- $\mathcal{N}_C := \{T : C \to C \mid T \text{ is nonexpansive}\}$,
- $\mathrm{Fix}(T) := \{x \in C : Tx = x\}$ is the set of fixed points of T,
- $\Pi_C := \{f : C \to C \mid f \text{ is a contraction}\}$.

Some related classes of mappings may be found in [21,22].
Given $t \in (0,1)$, $f \in \Pi_C$, and $T \in \mathcal{N}_C$, define a contraction $T_t^f : C \to C$ by:
$$T_t^f x := t f(x) + (1 - t)T x, \quad x \in C.$$
It is easy to show that $T_t^f$ is a $(1 - (1 - \rho)t)$-contraction, where ρ is the contraction coefficient of f; indeed, for $x, y \in C$,
$$\|T_t^f x - T_t^f y\| \le t\rho\|x - y\| + (1 - t)\|x - y\| = \bigl(1 - (1 - \rho)t\bigr)\|x - y\|.$$
Let $z_t$ be the unique fixed point of $T_t^f$. Equivalently, we have:
$$z_t = t f(z_t) + (1 - t)T z_t. \tag{11}$$
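Numerically, the net $\{z_t\}$ can be traced by running the Picard iteration of the contraction $T_t^f$ until it stabilizes (a sketch of ours with illustrative choices of T and f; note that the contraction coefficient $1 - (1 - \rho)t$ deteriorates as $t \to 0^+$, so smaller t requires more inner iterations):

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                            # nonexpansive rotation; Fix(T) = {0}
f = lambda x: 0.5 * x + np.array([1.0, 0.0])   # a contraction with coefficient rho = 1/2

def z(t, tol=1e-10):
    """Fixed point z_t of T_t^f x = t f(x) + (1 - t) T x, cf. (11), via Picard iteration."""
    x = np.zeros(2)
    while True:
        x_new = t * f(x) + (1 - t) * T(x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new

for t in [0.5, 0.1, 0.02]:
    print(t, np.linalg.norm(z(t)))  # ||z_t|| decreases: z_t -> 0 in Fix(T) (cf. Lemma 3 below)
```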
Lemma 3 ([14,17]). Assume X is a uniformly smooth Banach space. Then, $z_t$ converges in norm, as $t \to 0^+$, to a point $q \in \mathrm{Fix}(T)$, and
$$Q(f) := \lim_{t \to 0^+} z_t, \quad f \in \Pi_C, \tag{12}$$
defines a retraction satisfying the variational inequality:
$$\langle (I - f)Q(f),\ J(Q(f) - p)\rangle \le 0, \quad f \in \Pi_C,\ p \in \mathrm{Fix}(T).$$

Lemma 4. Let $t, s \in (0,1)$ and $f \in \Pi_C$. Then, for $x, y \in C$ and the contractions $T_t^f$ and $T_s^f$, we have:
$$\|T_t^f x - T_s^f y\| \le \bigl(1 - (1 - \rho)t\bigr)\|x - y\| + |t - s|\,\|f(y) - T y\|. \tag{13}$$
Here, ρ is the contraction coefficient of f.

Proof. We have, noticing that $T_t^f x - T_s^f y = (T_t^f x - T_t^f y) + (t - s)(f(y) - T y)$:
$$\|T_t^f x - T_s^f y\| \le \|T_t^f x - T_t^f y\| + |t - s|\,\|f(y) - T y\| \le \bigl(1 - (1 - \rho)t\bigr)\|x - y\| + |t - s|\,\|f(y) - T y\|.$$
This proves (13). □
In terms of $T_{t_n}^f$, the KMVAM (4) can be rewritten as:
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_n T_{t_n}^f x_n, \quad n \ge 0. \tag{14}$$
We next discuss certain properties of the sequence $\{x_n\}$ generated by (14).
Property 1. $\{x_n\}$ is bounded. Indeed, for $p \in \mathrm{Fix}(T)$, we have:
$$\|x_{n+1} - p\| \le (1 - \lambda_n)\|x_n - p\| + \lambda_n\|T_{t_n}^f x_n - T_{t_n}^f p\| + \lambda_n\|T_{t_n}^f p - p\| \le \bigl(1 - (1 - \rho)\lambda_n t_n\bigr)\|x_n - p\| + \lambda_n t_n\|f(p) - p\|.$$
By induction, we have:
$$\|x_n - p\| \le \max\left\{\|x_0 - p\|,\ \frac{\|f(p) - p\|}{1 - \rho}\right\}$$
for all $n \ge 0$; in particular, $\{x_n\}$ is bounded.

Property 2. Asymptotic estimate for $\|x_{n+1} - x_n\|$:
$$\|x_{n+1} - x_n\| \le \bigl(1 - (1 - \rho)\lambda_n t_n\bigr)\|x_n - x_{n-1}\| + M\bigl(|t_n - t_{n-1}| + |\lambda_n - \lambda_{n-1}|\bigr), \tag{15}$$
where M is a constant such that $M \ge \max\{\|f(x_n) - T x_n\|,\ \|T_{t_n}^f x_n - x_n\|\}$ for all n. Toward this, we use (14) to obtain:
$$x_{n+1} - x_n = (1 - \lambda_n)x_n + \lambda_n T_{t_n}^f x_n - (1 - \lambda_{n-1})x_{n-1} - \lambda_{n-1} T_{t_{n-1}}^f x_{n-1}.$$
After some manipulations, we can rewrite $x_{n+1} - x_n$ as:
$$x_{n+1} - x_n = (1 - \lambda_n)(x_n - x_{n-1}) + \lambda_n\bigl(T_{t_n}^f x_n - T_{t_{n-1}}^f x_{n-1}\bigr) + (\lambda_n - \lambda_{n-1})\bigl(T_{t_{n-1}}^f x_{n-1} - x_{n-1}\bigr).$$
It follows from Lemma 4 that:
$$\|x_{n+1} - x_n\| \le (1 - \lambda_n)\|x_n - x_{n-1}\| + \lambda_n\bigl[\bigl(1 - (1 - \rho)t_n\bigr)\|x_n - x_{n-1}\| + |t_n - t_{n-1}|\,\|f(x_{n-1}) - T x_{n-1}\|\bigr] + |\lambda_n - \lambda_{n-1}|\,\|T_{t_{n-1}}^f x_{n-1} - x_{n-1}\|.$$
This is (15), and Property 2 is verified.

Property 3. Approximating fixed point property of $\{x_n\}$: if $t_n \to 0$ and $\|x_{n+1} - x_n\|/\lambda_n \to 0$, then $\lim_{n\to\infty}\|x_n - T x_n\| = 0$. Indeed, from (14), we have:
$$x_n - T_{t_n}^f x_n = \frac{1}{\lambda_n}(x_n - x_{n+1}).$$
It turns out that:
$$\|x_n - T x_n\| \le \frac{1}{1 - t_n}\left(\frac{\|x_{n+1} - x_n\|}{\lambda_n} + t_n\|f(x_n) - x_n\|\right).$$
Consequently, $\|x_n - T x_n\| \to 0$ (recall that $\{x_n\}$ is bounded by Property 1), and Property 3 is proved.

Lemma 5. Suppose $\lim_{n\to\infty}\|x_n - T x_n\| = 0$. Then:
$$\limsup_{n\to\infty}\ \langle f(q) - q,\ J(x_n - q)\rangle \le 0, \tag{16}$$
where $q = Q(f)$, and Q is the retraction defined by (12).

Proof. Notice that $z_t \to q$ in norm as $t \to 0^+$, where $z_t$ satisfies the fixed point Equation (11), from which we obtain:
$$z_t - x_n = t\bigl(f(z_t) - x_n\bigr) + (1 - t)\bigl(T z_t - x_n\bigr). \tag{17}$$
By Lemma 1, we derive that:
$$\|z_t - x_n\|^2 \le (1 - t)^2\|T z_t - x_n\|^2 + 2t\,\langle f(z_t) - x_n,\ J(z_t - x_n)\rangle.$$
Therefore, using $\|T z_t - x_n\| \le \|z_t - x_n\| + \|x_n - T x_n\|$ and $\langle f(z_t) - x_n, J(z_t - x_n)\rangle = \langle f(z_t) - z_t, J(z_t - x_n)\rangle + \|z_t - x_n\|^2$:
$$\langle z_t - f(z_t),\ J(z_t - x_n)\rangle \le \frac{t}{2}M^2 + \frac{2M}{t}\|x_n - T x_n\|, \tag{18}$$
where M is a constant such that $\|z_t - x_n\| \le M$ and $\|x_n - T x_n\| \le 2M$ for all $t \in (0,1)$ and $n \ge 0$.
Since $\|x_n - T x_n\| \to 0$, it follows from (18) that:
$$\limsup_{n\to\infty}\ \langle z_t - f(z_t),\ J(z_t - x_n)\rangle \le \frac{t}{2}M^2. \tag{19}$$
Now, since $z_t \to q$ in norm as $t \to 0^+$ and since the duality map J is norm-to-norm uniformly continuous over any bounded subset of X, taking the limit as $t \to 0^+$ in (19) and swapping the order of the two limits yields (16). □
We are now in a position to prove the strong convergence of the KMVAM (4) by showing that wise manipulation of the geometric property (i.e., uniform smoothness) of the underlying space X can improve Theorem 2; hence, the solution to problem (P2) in the Introduction is affirmative.
Theorem 3. Let X be a uniformly smooth Banach space, C a nonempty closed convex subset of X, $T \in \mathcal{N}_C$ with $\mathrm{Fix}(T) \neq \emptyset$, and $f \in \Pi_C$ with contraction coefficient $\rho \in [0,1)$. Assume the following conditions:
- (A1) $\lim_{n\to\infty} t_n = 0$ and $\sum_{n=0}^{\infty} \lambda_n t_n = \infty$;
- (A2) either $\sum_{n=0}^{\infty} |t_{n+1} - t_n| < \infty$ or $\lim_{n\to\infty} \dfrac{|t_{n+1} - t_n|}{\lambda_{n+1} t_{n+1}} = 0$ (i.e., $|t_{n+1} - t_n| = o(\lambda_{n+1} t_{n+1})$), and $\sum_{n=0}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$;
- (A3) $\lambda_n \ge \lambda$ for all n, where $\lambda \in (0,1)$ is a constant.
Then, $\{x_n\}$ generated by the KMVAM (4) converges strongly to $q := Q(f)$, where Q is the retraction defined by (12).

Proof. Noticing that $Tq = q$, we have:
$$x_{n+1} - q = \bigl[(1 - \lambda_n)(x_n - q) + \lambda_n(1 - t_n)(T x_n - T q)\bigr] + \lambda_n t_n\bigl(f(x_n) - q\bigr).$$
Applying Lemma 1, we obtain:
$$\|x_{n+1} - q\|^2 \le (1 - \lambda_n t_n)^2\|x_n - q\|^2 + 2\lambda_n t_n\,\langle f(x_n) - q,\ J(x_{n+1} - q)\rangle. \tag{20}$$
Since f is a ρ-contraction, we obtain:
$$2\langle f(x_n) - q,\ J(x_{n+1} - q)\rangle \le \rho\bigl(\|x_n - q\|^2 + \|x_{n+1} - q\|^2\bigr) + 2\langle f(q) - q,\ J(x_{n+1} - q)\rangle.$$
Substituting this into (20), we obtain:
$$(1 - \rho\lambda_n t_n)\|x_{n+1} - q\|^2 \le \bigl[(1 - \lambda_n t_n)^2 + \rho\lambda_n t_n\bigr]\|x_n - q\|^2 + 2\lambda_n t_n\,\langle f(q) - q,\ J(x_{n+1} - q)\rangle.$$
Therefore:
$$\|x_{n+1} - q\|^2 \le \frac{(1 - \lambda_n t_n)^2 + \rho\lambda_n t_n}{1 - \rho\lambda_n t_n}\|x_n - q\|^2 + \frac{2\lambda_n t_n}{1 - \rho\lambda_n t_n}\,\langle f(q) - q,\ J(x_{n+1} - q)\rangle. \tag{21}$$
Setting $s_n := \|x_n - q\|^2$,
$$\gamma_n := \frac{\lambda_n t_n\bigl(2(1 - \rho) - \lambda_n t_n\bigr)}{1 - \rho\lambda_n t_n} \quad\text{and}\quad \delta_n := \frac{2}{2(1 - \rho) - \lambda_n t_n}\,\langle f(q) - q,\ J(x_{n+1} - q)\rangle, \tag{22}$$
we can rewrite (21) as:
$$s_{n+1} \le (1 - \gamma_n)s_n + \gamma_n\delta_n, \quad n \ge 0.$$
To use Lemma 2 to prove $s_n \to 0$, we need to verify these two conditions:
- (C1) $\sum_{n=0}^{\infty} \gamma_n = \infty$, and
- (C2) $\limsup_{n\to\infty} \delta_n \le 0$.

First, we verify (C1). From (22) and the fact that $t_n \to 0$ (so that $\lambda_n t_n \le 1 - \rho$ for all sufficiently large n), we find that eventually $\gamma_n \ge (1 - \rho)\lambda_n t_n$, which implies (C1) by virtue of (A1).
Regarding (C2), using condition (A2), we can apply Lemma 2 to Property 2 to obtain $\|x_{n+1} - x_n\| \to 0$; by (A3), this gives $\|x_{n+1} - x_n\|/\lambda_n \to 0$, which in turn implies that $\|x_n - T x_n\| \to 0$ via Property 3. Then, by Lemma 5, we obtain (16), which implies (C2) for $\delta_n$, since $2(1 - \rho) - \lambda_n t_n \ge 1 - \rho > 0$ for all sufficiently large n.
Now, the two conditions (C1) and (C2) are sufficient to guarantee $s_n \to 0$ (i.e., $x_n \to q$ in norm) by virtue of Lemma 2. This completes the proof. □
Remark 1. In the proof of Theorem 3, we exploited the uniform smoothness of X (i.e., the norm-to-norm uniform continuity of the duality map J on bounded sets). As a result, we relaxed the conditions on the selection of the parameters $\{t_n\}$ and $\{\lambda_n\}$. Note that the parameter $t_n$ is referred to as a regularization parameter and therefore tends to zero, whereas the parameter $\lambda_n$, as a step size in KM, had better not be diminishing. In the case of a constant step size, i.e., $\lambda_n \equiv \lambda \in (0,1)$ for all n, the conditions (G1)–(G3) of Theorem 2 reduce to the conditions:
- (G1)' $\lim_{n\to\infty} t_n = 0$,
- (G2)' $\sum_{n=0}^{\infty} t_n = \infty$,
- (G3)' $\lim_{n\to\infty} \dfrac{|t_{n+1} - t_n|}{t_{n+1}^2} = 0$.

The conditions (A1)–(A3) of Theorem 3 become:
- (A1)' $\lim_{n\to\infty} t_n = 0$ and $\sum_{n=0}^{\infty} t_n = \infty$,
- (A2)' either $\sum_{n=0}^{\infty} |t_{n+1} - t_n| < \infty$ or $\lim_{n\to\infty} \dfrac{|t_{n+1} - t_n|}{t_{n+1}} = 0$ (i.e., $\lim_{n\to\infty} \dfrac{t_n}{t_{n+1}} = 1$).

(A2)' is genuinely weaker than (G3)'. For instance, if we take $t_n = n^{-\sigma}$ for all $n \ge 1$, then (G1)'–(G3)' hold only for $0 < \sigma < 1$, whereas (A1)'–(A2)' hold for all $0 < \sigma \le 1$.
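For completeness, here is the short computation behind this example: since $t_n - t_{n+1} = n^{-\sigma} - (n+1)^{-\sigma} \sim \sigma n^{-(\sigma+1)}$,
$$\frac{|t_{n+1} - t_n|}{t_{n+1}^2} \sim \sigma\,n^{\sigma - 1} \to \begin{cases} 0, & 0 < \sigma < 1, \\ 1, & \sigma = 1, \end{cases} \qquad\text{while}\qquad \frac{|t_{n+1} - t_n|}{t_{n+1}} \sim \frac{\sigma}{n} \to 0 \quad (0 < \sigma \le 1).$$
Thus, (G3)' fails exactly at $\sigma = 1$ (i.e., at $t_n = 1/n$), while (A2)' survives; the divergence requirement $\sum_n t_n = \infty$ restricts both sets of conditions to $\sigma \le 1$.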
Note that the conditions (G1)'–(G3)' were also used by Lions [8] to prove the strong convergence of Halpern’s method (2) in a Hilbert space; they were improved by Xu [11], who removed the square in the denominator of condition (G3)' in a uniformly smooth Banach space. Note also that, in a recent paper [23], the conclusion of Theorem 3 was proved under Gwinner’s conditions (G1)–(G3) of Theorem 2 in a reflexive Banach space with a weakly continuous duality map. The class of uniformly smooth Banach spaces is different from the class of reflexive Banach spaces with a weakly continuous duality map. For example, $L^p$ ($1 < p < \infty$, $p \neq 2$) is uniformly smooth, but fails to have a weakly continuous duality map [24].

A key difference between our proof of Theorem 3 and Gwinner’s proof of ([16], Theorem 4) is that we used the uniform smoothness of the underlying space X, which allowed us to extract more helpful information about $q = Q(f)$ from the implicitly defined net $\{z_t\}$ (see (18) and (19)); this leads to a more accurate estimate of $\|x_{n+1} - q\|$, whereas Gwinner estimated $\|x_n - z_n\|$ (rather than estimating $\|x_n - q\|$ directly), due to the lack of available geometric properties of X. This again verifies that the geometric properties of the underlying Banach space can improve the convergence of iterative methods in Banach spaces.