1. Introduction
Let $H_1$ and $H_2$ be two real Hilbert spaces with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\| \cdot \|$.
The split feasibility problem (SFP for short) is as follows:
$$\text{find } x^{*} \in C \ \text{ such that } \ Ax^{*} \in Q, \qquad (1)$$
where $C$ and $Q$ are nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and $A$ is a bounded linear operator of $H_1$ into $H_2$.
If the set of solutions of the problem (1) is nonempty, then solving problem (1) is equivalent to the fixed point equation
$$x^{*} = P_C\big(x^{*} - \gamma A^{*}(I - P_Q)Ax^{*}\big),$$
where $\gamma > 0$, $P_C$ denotes the metric projection of $H_1$ onto $C$ and $A^{*}$ is the corresponding adjoint operator of $A$.
Recently, it has been shown that many problems in engineering and technology can be modeled by problem (1), and many authors have shown that the SFP has numerous real-life applications, such as image reconstruction, signal processing and intensity-modulated radiation therapy (see [1,2,3]).
In 1994, Censor and Elfving [4] used their algorithm to solve the SFP in the finite-dimensional Euclidean space. In 2002, Byrne [5] improved the algorithm of Censor and Elfving and presented a new method, called the CQ algorithm, for solving the SFP (1) as follows:
$$x_{n+1} = P_C\big(x_n - \gamma A^{*}(I - P_Q)Ax_n\big), \quad n \ge 0,$$
where $0 < \gamma < 2/\|A\|^{2}$.
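To make the CQ iteration concrete, the following is a minimal Python sketch (the paper itself reports MATLAB experiments); the sets C and Q, the operator A and the stepsize below are illustrative placeholders, not data from this paper.

```python
import numpy as np

# Illustrative data (not from the paper): C is a box in R^3, Q is a Euclidean ball in R^4.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))            # bounded linear operator from H1 = R^3 to H2 = R^4

def proj_C(x):                             # metric projection onto C = [-1, 1]^3
    return np.clip(x, -1.0, 1.0)

def proj_Q(y):                             # metric projection onto Q = {y : ||y|| <= 2}
    norm = np.linalg.norm(y)
    return y if norm <= 2.0 else 2.0 * y / norm

gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # stepsize satisfying 0 < gamma < 2 / ||A||^2
x = rng.standard_normal(3)                 # initial guess

for _ in range(500):
    # CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)
    x = proj_C(x - gamma * A.T @ (A @ x - proj_Q(A @ x)))

print("residual ||Ax - P_Q(Ax)|| =", np.linalg.norm(A @ x - proj_Q(A @ x)))
```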
The split common fixed point problem (shortly, SCFPP) is formulated as follows:
$$\text{find } x^{*} \in F(U) \ \text{ such that } \ Ax^{*} \in F(T), \qquad (4)$$
where $U : H_1 \to H_1$ and $T : H_2 \to H_2$ are nonlinear mappings; here, $F(U)$ denotes the set of fixed points of the mapping $U$. We use $S$ to denote the solution set of problem (4).
Note that, since every closed convex subset of a Hilbert space is the fixed point set of its associated metric projection, if $U = P_C$ and $T = P_Q$, then the SFP becomes a special case of the SCFPP.
In 2007, Censor and Segal [6] first studied the SCFPP and, to solve it, they proposed the following iterative algorithm:
$$x_{n+1} = U\big(x_n - \gamma A^{*}(I - T)Ax_n\big), \quad n \ge 0, \qquad (5)$$
where $\gamma$ is a properly chosen stepsize. Algorithm (5) was originally designed to solve the problem (4) for directed mappings.
In 2010, Moudafi [7] proposed an iterative method to solve the SCFPP for quasi-nonexpansive mappings. In 2014, combining the Moudafi method with the Halpern iterative method, Kraikaew and Saejung [8] proposed a new iterative algorithm, which does not involve the projection operator, to solve the split common fixed point problem. More specifically, their algorithm generates a sequence $\{x_n\}$ via the recursion:
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) U\big(x_n - \gamma A^{*}(I - T)Ax_n\big), \quad n \ge 0,$$
where $u \in H_1$ is a fixed element and $U$ and $T$ are quasi-nonexpansive operators.
Recently, many authors have studied the SCFPP, the generalized SCFPP and some related problems (see, for instance, refs. [3,4,5,9,10,11,12,13]) and they have also proposed a lot of algorithms to solve the SCFPP (see [14,15,16,17] and the references therein).
On the other hand, the bounded perturbation resilience and superiorization of iterative methods have been studied by some authors (see [18,19,20,21,22,23]). These problems have received much attention because of their applications in convex feasibility problems [24], image reconstruction [25], inverse problems of radiation therapy [26] and so on.
Let $P : H \to H$ denote an algorithm operator. If the iteration $x_{n+1} = P(x_n)$ is replaced by
$$x_{n+1} = P(x_n + \beta_n v_n), \quad n \ge 0,$$
where $\{\beta_n\}$ is a sequence of nonnegative real numbers and $\{v_n\}$ is a sequence in $H$ such that
$$\sum_{n=0}^{\infty} \beta_n < \infty \quad \text{and} \quad \|v_n\| \le M \ \text{for all } n \ge 0 \text{ and some } M > 0,$$
and the algorithm is still convergent, then the algorithm $P$ is said to be bounded perturbation resilient [19].
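To illustrate this definition, here is a small Python sketch of running a basic iteration x_{n+1} = P(x_n) under bounded perturbations with a summable sequence β_n and a bounded sequence v_n; the operator P and the concrete sequences chosen below are illustrative placeholders only.

```python
import numpy as np

def perturbed_iteration(P, x0, betas, vs, n_iter):
    """Run x_{n+1} = P(x_n + beta_n * v_n), the bounded-perturbation form of x_{n+1} = P(x_n)."""
    x = x0
    for n in range(n_iter):
        x = P(x + betas[n] * vs[n])
    return x

# Toy basic operator: projection onto the closed unit ball of R^2 (a nonexpansive operator).
def P(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

n_iter = 200
betas = np.array([1.0 / (n + 1) ** 2 for n in range(n_iter)])             # summable: sum beta_n < infinity
rng = np.random.default_rng(1)
vs = [v / np.linalg.norm(v) for v in rng.standard_normal((n_iter, 2))]    # bounded: ||v_n|| = 1

x_plain = perturbed_iteration(P, np.array([3.0, 4.0]), np.zeros(n_iter), vs, n_iter)
x_pert = perturbed_iteration(P, np.array([3.0, 4.0]), betas, vs, n_iter)
print(x_plain, x_pert)   # both runs settle near the same limit since the perturbations vanish
```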
In 2016, Jin, Censor and Jiang [21] introduced the projected scaled gradient method (PSG for short) with bounded perturbations for solving the following minimization problem:
$$\min_{x \in C} f(x),$$
where $f$ is a continuously differentiable convex function. The PSG method generates a sequence $\{x_n\}$ defined by
$$x_{n+1} = P_C\big(x_n - \lambda_n D(x_n)\nabla f(x_n) + e(x_n)\big), \quad n \ge 0,$$
where $D(x_n)$ is a diagonal scaling matrix and $\{e(x_n)\}$ denotes the sequence of outer perturbations satisfying $\sum_{n=0}^{\infty}\|e(x_n)\| < \infty$.
Recently, Xu [22] proposed superiorization techniques for the relaxed PSG as follows:
$$x_{n+1} = (1 - \omega_n)x_n + \omega_n P_C\big(x_n - \lambda_n D(x_n)\nabla f(x_n) + e(x_n)\big), \quad n \ge 0,$$
where $\{\omega_n\}$ is a sequence in $(0, 1)$.
Recently, for solving the minimization problem of the sum of two convex functions $f + g$, Guo and Cui [20] considered a modified proximal gradient method and, under suitable conditions, they proved some strong convergence theorems for the method. The definition of the proximal operator is as follows.
Definition 1 (see [27]). Let $\Gamma_0(H)$ denote the space of functions on a real Hilbert space $H$ that are proper, lower semicontinuous and convex. The proximal operator of $\varphi \in \Gamma_0(H)$ is defined by
$$\mathrm{prox}_{\varphi}(x) := \arg\min_{v \in H}\Big\{\varphi(v) + \frac{1}{2}\|v - x\|^{2}\Big\}, \quad x \in H.$$
The proximal operator of $\varphi$ of order $\lambda > 0$ is defined as the proximal operator of $\lambda\varphi$, that is,
$$\mathrm{prox}_{\lambda\varphi}(x) := \arg\min_{v \in H}\Big\{\varphi(v) + \frac{1}{2\lambda}\|v - x\|^{2}\Big\}, \quad x \in H.$$
For example, if $\varphi = \iota_C$ is the indicator function of a nonempty closed convex set $C$, then $\mathrm{prox}_{\lambda\iota_C} = P_C$ for every $\lambda > 0$.
Now, we propose a viscosity method for the problem (4) as follows:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n)U\big(x_n - \gamma A^{*}(I - T)Ax_n\big), \quad n \ge 0, \qquad (11)$$
where $h$ is a contraction on $H_1$, $\{\alpha_n\} \subset (0, 1)$ and $\gamma > 0$ is a stepsize.
If we treat the above algorithm as the basic algorithm, the bounded perturbation of it is a sequence $\{x_n\}$ generated by the iterative process:
$$x_{n+1} = \alpha_n h(x_n + \beta_n v_n) + (1 - \alpha_n)U\big((x_n + \beta_n v_n) - \gamma A^{*}(I - T)A(x_n + \beta_n v_n)\big), \quad n \ge 0.$$
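The following Python sketch illustrates iteration (11) and its perturbed variant under the simplifying assumption U = P_C and T = P_Q (so the scheme is applied to an SFP instance); the contraction h, the sets, the matrix A and the parameter sequences are illustrative choices, not those used in the numerical section of this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
gamma = 1.0 / np.linalg.norm(A, 2) ** 2

def U(x):                                           # U = P_C with C = [-1, 1]^3 (assumed for illustration)
    return np.clip(x, -1.0, 1.0)

def T(y):                                           # T = P_Q with Q the ball of radius 2 (assumed)
    n = np.linalg.norm(y)
    return y if n <= 2.0 else 2.0 * y / n

def h(x):                                           # a rho-contraction with rho = 1/2
    return 0.5 * x

def viscosity_step(x, alpha):
    """One step of x_{n+1} = alpha_n h(x_n) + (1 - alpha_n) U(x_n - gamma A^*(I - T)A x_n)."""
    return alpha * h(x) + (1 - alpha) * U(x - gamma * A.T @ (A @ x - T(A @ x)))

x = rng.standard_normal(3)
for n in range(1, 1001):
    alpha = 1.0 / (n + 1)                           # alpha_n -> 0 and sum alpha_n = infinity
    beta = 1.0 / (n + 1) ** 2                       # summable perturbation weights
    v = rng.standard_normal(3)
    v = v / np.linalg.norm(v)                       # bounded perturbation directions
    x = viscosity_step(x + beta * v, alpha)         # perturbed variant; drop the shift to recover (11)

print("x =", x, " residual =", np.linalg.norm(A @ x - T(A @ x)))
```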
In this paper, mainly based on the above works [6,20,22], we prove that our main iterative method (11) is bounded perturbation resilient and, under some mild conditions, our algorithms strongly converge to a solution of the split common fixed point problem, which is also the unique solution of the variational inequality problem (13). Finally, we give two numerical examples to demonstrate the effectiveness of our iterative schemes.
3. The Main Results
In 2000, Moudafi [34] proposed the viscosity approximation method:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) N x_n, \quad n \ge 0,$$
which converges strongly to a fixed point $x^{*}$ of the nonexpansive mapping $N$ (see [35,36]). In 2004, Xu [29] further proved that $x^{*}$ is also the unique solution of the following variational inequality problem:
$$\langle (I - h)x^{*}, x - x^{*} \rangle \ge 0, \quad \forall x \in F(N), \qquad (13)$$
where $h$ is a $\rho$-contraction. By Lemma 2, we get that $I - h$ is strongly monotone; hence, the solution of problem (13) is unique.
In this section, we present a viscosity iterative method for solving problem (4). Meanwhile, the algorithm approximates the unique solution of the variational inequality problem (13).
Putting
$$V := U\big(I - \gamma A^{*}(I - T)A\big),$$
we can rewrite the iteration (11) as follows:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) V x_n, \quad n \ge 0, \qquad (14)$$
where $x_0 \in H_1$ is arbitrary. Since $U$ is nonexpansive and $h$ is contractive, it is easy to get that $V$ is nonexpansive and that, for each $n$, the mapping $x \mapsto \alpha_n h(x) + (1 - \alpha_n)Vx$ is a contraction.
Theorem 1. Let $H_1$, $H_2$ be two real Hilbert spaces and $A : H_1 \to H_2$ be a bounded linear operator with $A \ne 0$, where $A^{*}$ is the adjoint of $A$. Suppose that $U : H_1 \to H_1$ and $T : H_2 \to H_2$ are two averaged mappings with the coefficients $\tau_1$ and $\tau_2$, respectively. Assume that the problem (4) is consistent (i.e., $S \ne \emptyset$). Let $h : H_1 \to H_1$ be a ρ-contraction with $\rho \in [0, 1)$. For any $x_0 \in H_1$, define the sequence $\{x_n\}$ by (11). If the following conditions are satisfied:
- (i)
$\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
- (ii)
;
- (iii)
.
Then, the sequence $\{x_n\}$ converges strongly to a point $x^{*} \in S$, which is also the unique solution of the variational inequality problem (13).
Proof. Set $V := U\big(I - \gamma A^{*}(I - T)A\big)$. Then, by Proposition 1, it follows that $V$ is an averaged mapping.
Step 1. Show that $\{x_n\}$ is bounded. For any $p \in S$, we have
Note that the condition (iii) and (15) imply that
and, from the conditions (i), (iii) and
, it is easy to show that
is bounded. Therefore, there exists
, such that
. Thus, since the induction argument shows that
it turns out that the sequence $\{x_n\}$ is bounded and so are $\{h(x_n)\}$, $\{Vx_n\}$ and $\{Ax_n\}$.
Step 2. Show that, for any sequence
if
then
First, if
, then we have
where
Second, we can rewrite
as
where
and
is nonexpansive. By the condition (ii), we get
. Thus, it follows from (14), (17) and (18) that
Using the condition (i), it is easy to get
and
. In order to complete the proof, from Lemma 4, it suffices to verify that
as
, which implies that
for any subsequence
. Indeed,
as
implies that
as
from the condition (iii). Thus, from (18), it follows that
Step 3. Show that (21) holds, where $\omega_w(x_n)$ denotes the set of all weak cluster points of $\{x_n\}$. To see (21), we prove the following:
Take
and assume that
is a subsequence of
weakly converging to
. Without loss of generality, we still use
to denote
. Assume
. Then, we have
. Setting
, we deduce that
Since
as
, it follows immediately from (22) that
as
. Thus, we have
Using Lemma 3, we get
. Since both
U and
T are averaged, it follows from Proposition 1 (ii) that
Then, by Lemma 5, we obtain
immediately. Meanwhile, we have
In addition, since $x^{*}$ is the unique solution of the variational inequality problem (13), together with (20), we get that $x_n \to x^{*}$. This completes the proof. □
Next, we consider the bounded perturbation of (14) generated by the following iterative process:
$$x_{n+1} = \alpha_n h(x_n + \beta_n v_n) + (1 - \alpha_n) V(x_n + \beta_n v_n), \quad n \ge 0. \qquad (25)$$
Theorem 2. Assume that the sequences $\{\beta_n\}$ and $\{v_n\}$ satisfy the condition (6). Let $H_1$, $H_2$ be two real Hilbert spaces and $A : H_1 \to H_2$ be a bounded linear operator with $A \ne 0$, where $A^{*}$ is the adjoint of $A$. Suppose that $U : H_1 \to H_1$ and $T : H_2 \to H_2$ are two averaged mappings with the coefficients $\tau_1$ and $\tau_2$, respectively. Assume that problem (4) is consistent (i.e., $S \ne \emptyset$). Let $h : H_1 \to H_1$ be a ρ-contraction with $\rho \in [0, 1)$. For any $x_0 \in H_1$, define the sequence $\{x_n\}$ by (25). If the following conditions are satisfied:
- (i)
$\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
- (ii)
;
- (iii)
.
Then, the sequence $\{x_n\}$ converges strongly to $x^{*}$, where $x^{*}$ is a solution of the problem (4), which is also the unique solution of the variational inequality problem (13).
Proof. Equation (25) can be rewritten as follows:
In fact, by Proposition 1 (iii) and the nonexpansiveness of $T$, it is not hard to show that $V$ is Lipschitz continuous. Thus, we have
From the condition (iii) and condition (6), we have . Consequently, using Theorem 1, it follows that the algorithm (14) is bounded perturbation resilient. This completes the proof. □
4. Numerical Results
In this section, we consider the following numerical examples to present the effectiveness, realization and convergence of Theorems 1 and 2:
Example 1. Let . Suppose and Take and , where C and Q are defined as follows:andwhere denotes the element of y. We can compute the solution set
Take the experiment parameters and in the following iterative algorithms, and the stopping criterion is
According to the iterative process of Theorem 1, the sequence $\{x_n\}$ is generated by
As , we have . Then, taking the random initial guess $x_0$ and using MATLAB software (MATLAB R2012a, MathWorks, Natick, MA, USA), we obtain the numerical experiment results in Table 1.
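For readers who wish to reproduce this kind of experiment, the following Python sketch mimics the procedure of this subsection (random initial guess and the stopping rule based on the distance between consecutive iterates); the sets, the matrix A and the contraction h below are illustrative placeholders rather than the data of Example 1, and the paper's own computations were carried out in MATLAB.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4))
gamma = 1.0 / np.linalg.norm(A, 2) ** 2

def P_C(x):                                          # placeholder C = [0, 1]^4
    return np.clip(x, 0.0, 1.0)

def P_Q(y):                                          # placeholder Q = [-1, 1]^5
    return np.clip(y, -1.0, 1.0)

def h(x):                                            # a contraction
    return 0.1 * x

def step(x, alpha):
    return alpha * h(x) + (1 - alpha) * P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))

eps = 1e-6
x = rng.standard_normal(4)                           # random initial guess
for n in range(1, 100001):
    x_new = step(x, 1.0 / (n + 1))
    if np.linalg.norm(x_new - x) < eps:              # stopping criterion ||x_{n+1} - x_n|| < eps
        break
    x = x_new

print("iterations:", n, " approximate solution:", x_new)
```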
Next, we consider the algorithm with bounded perturbation resilience. Choose the bounded sequence $\{v_n\}$ and the summable nonnegative real sequence $\{\beta_n\}$ as follows:
and
for some , where is the indicator function and is the normal cone to $C$. The point is taken from . Setting , the numerical results can be seen in Table 2.
As we have seen above, the accuracy of the solution improves as the stopping criterion decreases. In addition, the sequence $\{x_n\}$ converges to the point , which is a solution of the numerical example. Of course, it is also the unique solution of the variational inequality (13).
In addition, we compare the approximate values of the solution of Example 1 under the same parameter conditions, the same number of iterations and the same initial value. The numerical results are reported in Table 3 and Table 4, where and denote the iterative sequences generated by the algorithm (14) in this paper and by Theorem 3.2 in Ref. [8], respectively.
Example 2. Let . Suppose and It is obvious that T is and the set of fixed points is nonempty. Let and Then, we use the iterative algorithm of Theorem 1 to approximate a point such that .
Take the experiment parameters and in the following iterative algorithms. Let and the stopping criterion is
Then, taking the random initial guess
and using MATLAB software, we obtain the numerical experiment results in
Table 5.
Next, we consider the bounded perturbation. The definitions of $\{v_n\}$ and $\{\beta_n\}$ are similar to those in Example 1. Setting , the numerical results can be seen in Table 6.
As we have seen in Table 5 and Table 6, the sequence $\{x_n\}$ approximates the point , which is a solution of the numerical example. Of course, it is also the unique solution of the variational inequality (13).