Article

Iterative Methods for Finding Solutions of a Class of Split Feasibility Problems over Fixed Point Sets in Hilbert Spaces

1
Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2
Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
3
Center of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
4
Faculty of Science and Agricultural Technology, Rajamangala University of Technology Lanna, Chiang Rai 57120, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1012; https://doi.org/10.3390/math7111012
Submission received: 9 September 2019 / Revised: 16 October 2019 / Accepted: 22 October 2019 / Published: 24 October 2019

Abstract:
We consider the split feasibility problem in Hilbert spaces in which the hard constraint is the set of common elements of the zeros of the sum of monotone operators and the fixed point sets of a finite family of nonexpansive mappings, while the soft constraint is the inverse image of the fixed point set of a nonexpansive mapping. We introduce iterative algorithms and establish weak and strong convergence theorems for the constructed sequences. Some numerical experiments for the introduced algorithms are also discussed.

1. Introduction

The split feasibility problem (SFP), which was introduced by Censor and Elfving [1], is the problem of finding a point $x^* \in \mathbb{R}^n$ such that
$$x^* \in C \cap L^{-1}Q, \tag{1}$$
where $C$ and $Q$ are nonempty closed convex subsets of $\mathbb{R}^n$, and $L$ is an $n \times n$ matrix. SFPs have applications in many fields of science and technology, such as signal processing, image reconstruction, and intensity-modulated radiation therapy; for more information, the reader may consult [1,2,3,4] and the references therein. In [1], Censor and Elfving proposed the following algorithm: for arbitrary $x_1 \in \mathbb{R}^n$,
$$x_{n+1} = L^{-1}P_Q P_{L(C)}(Lx_n), \quad n \in \mathbb{N},$$
where $L(C) = \{y \in \mathbb{R}^n : y = Lx \text{ for some } x \in C\}$, and $P_Q$ and $P_{L(C)}$ are the metric projections onto $Q$ and $L(C)$, respectively. Observe that this algorithm requires the computation of matrix inverses, which may be computationally expensive. To overcome this drawback, Byrne [2] suggested the so-called CQ algorithm: for arbitrary $x_1 \in \mathbb{R}^n$,
$$x_{n+1} = P_C\big(x_n + \gamma L^{\top}(P_Q - I)Lx_n\big), \quad n \in \mathbb{N}, \tag{2}$$
where $\gamma \in (0, 2/\|L\|^2)$ and $L^{\top}$ is the transpose of the matrix $L$. Notice that Algorithm (2) generates a sequence $\{x_n\}$ relying on the transpose instead of the inverse of the matrix $L$. Later, in 2010, Xu [5] considered the SFP in the setting of infinite-dimensional Hilbert spaces. That is, for two real Hilbert spaces $H_1$ and $H_2$, nonempty closed convex subsets $C \subseteq H_1$ and $Q \subseteq H_2$, and a bounded linear operator $L : H_1 \to H_2$: for a given $x_1 \in H_1$, the sequence $\{x_n\}$ is constructed by
$$x_{n+1} = P_C\big(x_n + \gamma L^*(P_Q - I)Lx_n\big), \quad n \in \mathbb{N}, \tag{3}$$
where $\gamma \in (0, 2/\|L\|^2)$ and $L^*$ is the adjoint operator of $L$. In [5], conditions guaranteeing weak convergence of the sequence $\{x_n\}$ to a solution of the SFP were given.
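To make the CQ iteration (2) concrete, the following Python sketch (ours, for illustration; not part of the original paper) takes $C$ and $Q$ to be boxes, whose metric projections are componentwise clamps, and runs iteration (2) until the iterate is (approximately) feasible. All data below are illustrative choices.

```python
import numpy as np

def cq_algorithm(L, proj_C, proj_Q, x, gamma, iters=500):
    """CQ iteration (2): x_{n+1} = P_C(x_n + gamma * L^T (P_Q - I) L x_n)."""
    for _ in range(iters):
        Lx = L @ x
        x = proj_C(x + gamma * L.T @ (proj_Q(Lx) - Lx))
    return x

# Illustrative data: C = [0,1]^3 and Q = [1,2]^2 (boxes, so projections are clamps).
L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 1.0, 2.0)

gamma = 1.0 / np.linalg.norm(L, 2) ** 2   # lies inside (0, 2/||L||^2)
x = cq_algorithm(L, proj_C, proj_Q, np.zeros(3), gamma)

# The limit should satisfy x in C and Lx in Q (up to tolerance).
assert np.all(x >= -1e-6) and np.all(x <= 1 + 1e-6)
assert np.all(L @ x >= 1 - 1e-3) and np.all(L @ x <= 2 + 1e-3)
```

Note that, in accordance with the discussion above, only the transpose of $L$ is used, never its inverse.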
On the other hand, for a Hilbert space $H$, the variational inclusion problem (VIP), which was initially considered by Martinet [6], has the following form: find $x^* \in H$ such that
$$0 \in Bx^*, \tag{4}$$
where $B : H \to 2^H$ is a set-valued operator. A popular iterative method for finding a solution of problem (4) is the so-called proximal point algorithm: for a given $x_1 \in H$,
$$x_{n+1} = J_{\lambda_n}^B x_n, \quad n \in \mathbb{N},$$
where $\{\lambda_n\} \subset (0, \infty)$ and $J_{\lambda_n}^B = (I + \lambda_n B)^{-1}$ is the resolvent of the maximal monotone operator $B$ corresponding to $\lambda_n$; see [7,8,9,10]. Subsequently, for set-valued mappings $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$, and a bounded linear operator $L : H_1 \to H_2$, Byrne et al. [11], using the concept of the SFP, proposed the so-called split null point problem (SNPP): find a point $x^* \in H_1$ such that
$$0 \in B_1(x^*) \quad \text{and} \quad 0 \in B_2(Lx^*). \tag{5}$$
In [11], the following iterative algorithm was suggested: for $\lambda > 0$ and arbitrary $x_1 \in H_1$,
$$x_{n+1} = J_{\lambda}^{B_1}\big(x_n - \gamma L^*(I - J_{\lambda}^{B_2})Lx_n\big), \quad n \in \mathbb{N},$$
where $\gamma \in (0, 2/\|L\|^2)$, and $J_{\lambda}^{B_1}$ and $J_{\lambda}^{B_2}$ are the resolvents of the maximal monotone operators $B_1$ and $B_2$, respectively. They showed that, under suitable control conditions, the sequence $\{x_n\}$ converges weakly to a solution of problem (5).
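As a small numerical illustration of the SNPP iteration above (our example, not the paper's): for $B = \partial\|\cdot - c\|_1$, the resolvent $J_{\lambda}^B$ is soft-thresholding shifted to $c$, and with $B_1 = \partial\|\cdot - c\|_1$ and $B_2 = \partial\|\cdot - Lc\|_1$, the split null point is, by construction, $x^* = c$. The names below are our own.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: resolvent of t * (subdifferential of ||.||_1) at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
c = np.array([0.3, -0.2, 0.5])          # B1 = d||. - c||_1, so B1^{-1}(0) = {c}
Lc = L @ c                              # B2 = d||. - Lc||_1, so 0 in B2(Lc)

lam, gam = 0.5, 1.0 / 6.0               # gam in (0, 2/||L||^2); here ||L||^2 = 6
J1 = lambda v: c + soft(v - c, lam)     # resolvent of B1
J2 = lambda v: Lc + soft(v - Lc, lam)   # resolvent of B2

x = np.zeros(3)
for _ in range(200):
    x = J1(x - gam * L.T @ (L @ x - J2(L @ x)))

assert np.allclose(x, c, atol=1e-8)     # iterates reach the split null point c
```

The soft-thresholding operator identifies the solution exactly once the iterate is close enough, which is why the tolerance above can be tight.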
Due to the importance of the two concepts above, many authors have studied the approximation of common solutions of fixed point problems for nonlinear mappings and VIPs; see [12,13,14] for example. In 2015, Takahashi et al. [15] considered the problem of finding a point
$$x^* \in B^{-1}0 \cap L^{-1}F(T), \tag{6}$$
where $B : H_1 \to 2^{H_1}$ is a maximal monotone operator, $L : H_1 \to H_2$ is a bounded linear operator, and $T : H_2 \to H_2$ is a nonexpansive mapping. They suggested the following iterative algorithm: for any $x_1 \in H_1$,
$$x_{n+1} = J_{\lambda_n}^B\big(I - \gamma_n L^*(I - T)L\big)x_n, \quad n \in \mathbb{N}, \tag{7}$$
where $\{\lambda_n\}$ and $\{\gamma_n\}$ satisfy suitable control conditions, and $J_{\lambda_n}^B$ is the resolvent of the maximal monotone operator $B$ associated with $\lambda_n$. They proved a weak convergence theorem for Algorithm (7) to the solution set of problem (6). Moreover, in [15], Takahashi et al. also considered the problem of finding a point
$$x^* \in F(S) \cap B^{-1}0 \cap L^{-1}F(T), \tag{8}$$
where $S : H_1 \to H_1$ is a nonexpansive mapping. They suggested the following iterative algorithm: for any $x_1 \in H_1$,
$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n)SJ_{\lambda_n}^B\big(I - \lambda_n L^*(I - T)L\big)x_n, \quad n \in \mathbb{N}, \tag{9}$$
where $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy suitable control conditions, and proved a weak convergence theorem for Algorithm (9) to a solution of problem (8).
Now, let us consider a generalization of problem (4): find a point $x^* \in H$ such that
$$0 \in Ax^* + Bx^*, \tag{10}$$
where $A : H \to H$ and $B : H \to 2^H$. If $A$ and $B$ are monotone operators on $H$, then the elements of the solution set of problem (10) are called the zeros of the sum of monotone operators. It is well known that a number of real-world problems arise in the form of problem (10); see [16,17,18,19] for example and the references therein. By considering the VIP (10), Suwannaprapa et al. [20] extended problem (6) to the following setting: find a point
$$x^* \in (A + B)^{-1}0 \cap L^{-1}F(T), \tag{11}$$
where $A : H_1 \to H_1$ is a monotone operator and $B : H_1 \to 2^{H_1}$ is a maximal monotone operator. They proposed the algorithm
$$x_{n+1} = J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \quad n \in \mathbb{N}, \tag{12}$$
and proved a weak convergence theorem for Algorithm (12). Later, in 2018, Zhu et al. [21] considered the problem of finding a point $x^* \in H_1$ such that
$$x^* \in F(S) \cap (A + B)^{-1}0 \cap L^{-1}F(T) =: F, \tag{13}$$
where $S : H_1 \to H_1$ and $T : H_2 \to H_2$ are nonexpansive mappings, and proposed the following iterative algorithm: for any $x_1 \in H_1$,
$$\begin{cases} u_n = J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Su_n, \end{cases} \quad n \in \mathbb{N}, \tag{14}$$
where $f : H_1 \to H_1$ is a contraction mapping. They showed that, under suitable control conditions, the generated sequence $\{x_n\}$ converges strongly to the point $z \in F$ with $z = P_F f(z)$.
In this paper, motivated by the literature above, we consider the problem of finding a point $x^* \in H_1$ such that
$$x^* \in \bigcap_{i=1}^{N} F(S_i) \cap (A + B)^{-1}0 \cap L^{-1}F(T), \tag{15}$$
where $S_i : H_1 \to H_1$, $i = 1, \ldots, N$, and $T : H_2 \to H_2$ are nonexpansive mappings. We denote by $\Gamma$ the solution set of problem (15). We aim to suggest algorithms for finding a common solution of problem (15) and to provide suitable conditions guaranteeing that the sequence $\{x_n\}$ constructed by each algorithm converges to a point in $\Gamma$.

2. Preliminaries

Throughout this paper, we denote by $\mathbb{R}$ and $\mathbb{N}$ the sets of real numbers and natural numbers, respectively. A real Hilbert space $H$ is equipped with the inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$. For a sequence $\{x_n\}$ in $H$, we denote the strong convergence and weak convergence of $\{x_n\}$ to $x \in H$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively.
Let $T : H \to H$ be a mapping. Then, $T$ is said to be
(i) Lipschitz if there exists $K \geq 0$ such that
$$\|Tx - Ty\| \leq K\|x - y\|, \quad \forall x, y \in H.$$
The number $K$ is called a Lipschitz constant. Moreover, if $K \in [0, 1)$, we say that $T$ is a contraction.
(ii) Nonexpansive if
$$\|Tx - Ty\| \leq \|x - y\|, \quad \forall x, y \in H.$$
(iii) Firmly nonexpansive if
$$\|Tx - Ty\|^2 \leq \langle x - y, Tx - Ty \rangle, \quad \forall x, y \in H.$$
(iv) Averaged if there is $\alpha \in (0, 1)$ such that
$$T = (1 - \alpha)I + \alpha S, \tag{16}$$
where $I$ is the identity operator on $H$ and $S : H \to H$ is a nonexpansive mapping. In the case (16), we say that $T$ is $\alpha$-averaged.
(v) $\beta$-inverse strongly monotone ($\beta$-ism) if, for a positive real number $\beta$,
$$\langle Tx - Ty, x - y \rangle \geq \beta\|Tx - Ty\|^2, \quad \forall x, y \in H.$$
For a mapping $T : H \to H$, the notation $F(T)$ stands for the set of fixed points of $T$, that is, $F(T) = \{x \in H : Tx = x\}$. It is well known that, if $T$ is a nonexpansive mapping, then $F(T)$ is closed and convex. Furthermore, it should be observed that firmly nonexpansive mappings are $\frac{1}{2}$-averaged.
Next, we collect the important properties that are needed in this work.
Lemma 1.
The following statements are true [16,22]:
(i) The composition of finitely many averaged mappings is averaged. In particular, if $T_i$ is $\alpha_i$-averaged with $\alpha_i \in (0, 1)$, $i = 1, 2$, then $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
(ii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N} F(T_i) = F(T_1 T_2 \cdots T_N).$$
(iii) If $A$ is $\beta$-ism and $\lambda \in (0, \beta]$, then $T := I - \lambda A$ is firmly nonexpansive.
(iv) A mapping $T : H \to H$ is nonexpansive if and only if $I - T$ is $\frac{1}{2}$-ism.
Let $B : H \to 2^H$ be a set-valued mapping. We denote by $D(B)$ the effective domain of $B$, that is, $D(B) = \{x \in H : Bx \neq \emptyset\}$. The set-valued mapping $B$ is said to be monotone if
$$\langle x - y, u - v \rangle \geq 0, \quad \forall x, y \in D(B),\ u \in Bx,\ v \in By.$$
A monotone mapping $B$ is said to be maximal when its graph is not properly contained in the graph of any other monotone operator. For a maximal monotone operator $B : H \to 2^H$ and $\lambda > 0$, we define the resolvent $J_{\lambda}^B$ by
$$J_{\lambda}^B := (I + \lambda B)^{-1} : H \to D(B).$$
It is well known that, under these settings, the resolvent $J_{\lambda}^B$ is a single-valued and firmly nonexpansive mapping. Moreover, $F(J_{\lambda}^B) = B^{-1}0 := \{x \in H : 0 \in Bx\}$ for all $\lambda > 0$; see [15,23].
The following lemma is a useful fact for obtaining our main results.
Lemma 2
([24]). Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$, and let $A : C \to H$ be an operator. If $B : H \to 2^H$ is a maximal monotone operator, then $F\big(J_{\lambda}^B(I - \lambda A)\big) = (A + B)^{-1}0$.
We also use the following lemmas for proving the main result.
Lemma 3
([15]). Let $H_1$ and $H_2$ be Hilbert spaces. Let $L : H_1 \to H_2$ be a nonzero bounded linear operator, and let $T : H_2 \to H_2$ be a nonexpansive mapping. Then, for $0 < \gamma < \frac{1}{\|L\|^2}$, the mapping $I - \gamma L^*(I - T)L$ is $\gamma\|L\|^2$-averaged.
Lemma 4
([25]). Let $C$ be a closed convex subset of a Hilbert space $H$ and let $T : C \to C$ be a nonexpansive mapping. Then, $U := I - T$ is demiclosed; that is, $x_n \rightharpoonup x_0$ and $Ux_n \to y_0$ imply $Ux_0 = y_0$.
The following fundamental results are needed in our proof.
For each $x, y \in H$ and $\lambda \in \mathbb{R}$, we know that
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2;$$
see [23].
Let $C$ be a nonempty closed and convex subset of a Hilbert space $H$. For each point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$. That is,
$$\|x - P_C x\| \leq \|x - y\|, \quad \forall y \in C.$$
The operator $P_C$ is called the metric projection of $H$ onto $C$; see [26]. The following property of $P_C$ is well known:
$$\langle x - P_C x, y - P_C x \rangle \leq 0, \quad \forall x \in H,\ y \in C.$$
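The variational characterization of $P_C$ above is easy to check numerically. In the following sketch (ours, for illustration), $C = [0,1]^5$, so $P_C$ is a componentwise clamp, and the inequality is verified at randomly sampled points of $C$:

```python
import numpy as np

rng = np.random.default_rng(0)

# C = [0,1]^5; the metric projection onto a box is a componentwise clamp.
proj_C = lambda v: np.clip(v, 0.0, 1.0)

x = rng.normal(scale=3.0, size=5)      # an arbitrary point of H = R^5
px = proj_C(x)

# Characterization of P_C: <x - P_C x, y - P_C x> <= 0 for all y in C.
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=5)  # a random point of C
    assert np.dot(x - px, y - px) <= 1e-12
```

Componentwise, either $x_i$ lies in $[0,1]$ (so $x_i - (P_C x)_i = 0$) or the two factors have opposite signs, which is exactly why each term of the inner product is nonpositive.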
The following lemmas are important for proving the convergence theorems in this work.
Lemma 5
([15]). Let $H$ be a Hilbert space and let $\{x_n\}$ be a sequence in $H$. Assume that $C$ is a nonempty closed convex subset of $H$ satisfying the following properties:
(i) for every $x^* \in C$, $\lim_{n\to\infty}\|x_n - x^*\|$ exists;
(ii) if a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converges weakly to $x^*$, then $x^* \in C$.
Then, there exists $x_0 \in C$ such that $x_n \rightharpoonup x_0$.
Lemma 6
([9,27]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying the relation
$$a_{n+1} \leq (1 - \alpha_n)a_n + \alpha_n\sigma_n + \delta_n, \quad n \in \mathbb{N},$$
where $\{\alpha_n\}$, $\{\sigma_n\}$, and $\{\delta_n\}$ are sequences of real numbers satisfying
(i) $\{\alpha_n\} \subset [0, 1]$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(ii) $\limsup_{n\to\infty}\sigma_n \leq 0$;
(iii) $\delta_n \geq 0$, $\sum_{n=1}^{\infty}\delta_n < \infty$.
Then, $a_n \to 0$ as $n \to \infty$.

3. Main Results

In our main results, the following assumptions are imposed in order to establish the convergence theorems for the introduced algorithms to a solution of problem (15):
(A1) $A : H_1 \to H_1$ is a $\beta$-inverse strongly monotone operator;
(A2) $B : H_1 \to 2^{H_1}$ is a maximal monotone operator;
(A3) $L : H_1 \to H_2$ is a bounded linear operator;
(A4) $T : H_2 \to H_2$ is a nonexpansive mapping;
(A5) $S_i : H_1 \to H_1$, $i = 1, \ldots, N$, are nonexpansive mappings;
(A6) $f : H_1 \to H_1$ is a contraction mapping with coefficient $\eta \in (0, 1)$.
Now, we provide the main algorithm and its convergence theorems.

3.1. Weak Convergence Theorems

Theorem 1.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N}, \tag{17}$$
where the sequences $\{\lambda_n\}$, $\{\gamma_n\}$, and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \lambda_n < \beta$,
(ii) $0 < a \leq \gamma_n \leq b_1 < \frac{1}{\|L\|^2}$,
(iii) $0 < a \leq \alpha_n \leq b_2 < 1$,
for some $a, b_1, b_2 \in \mathbb{R}$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A1)–(A5) hold and $\Gamma \neq \emptyset$. Then, the sequence $\{x_n\}$ converges weakly to an element of $\Gamma$.
Proof. 
Firstly, we set
$$T_n := J_{\lambda_n}^B(I - \lambda_n A)\big(I - \gamma_n L^*(I - T)L\big)$$
and $u_n = x_n - \gamma_n L^*(I - T)Lx_n$, for each $n \in \mathbb{N}$. It follows that
$$y_n = T_n x_n = J_{\lambda_n}^B(I - \lambda_n A)u_n,$$
for each $n \in \mathbb{N}$. We note that $J_{\lambda_n}^B$ is $\frac{1}{2}$-averaged. Since $A$ is $\beta$-ism, in view of Lemma 1(iii), for each $\lambda_n \in (0, \beta)$, the mapping $I - \lambda_n A$ is $\frac{1}{2}$-averaged. Subsequently, by Lemma 1(i), $J_{\lambda_n}^B(I - \lambda_n A)$ is $\frac{3}{4}$-averaged. Moreover, by Lemma 3, for each $\gamma_n \in (0, 1/\|L\|^2)$, we know that $I - \gamma_n L^*(I - T)L$ is $\gamma_n\|L\|^2$-averaged. Consequently, by Lemma 1(i), $T_n$ is $\delta_n$-averaged, where $\delta_n = \frac{3 + \gamma_n\|L\|^2}{4}$, for each $n \in \mathbb{N}$. Now, for each $n \in \mathbb{N}$, we can write
$$T_n = (1 - \delta_n)I + \delta_n V_n, \tag{18}$$
where $V_n$ is a nonexpansive mapping.
Next, let $z \in \Gamma$. Then, $z \in (A + B)^{-1}0$ and $Lz \in F(T)$, which imply $z = J_{\lambda_n}^B(I - \lambda_n A)z$ and $\big(I - \gamma_n L^*(I - T)L\big)z = z$. Subsequently, we have
$$T_n z = J_{\lambda_n}^B(I - \lambda_n A)\big(I - \gamma_n L^*(I - T)L\big)z = J_{\lambda_n}^B(I - \lambda_n A)z = z,$$
and hence $z \in F(T_n) = F(V_n)$. Consider
$$\begin{aligned} \|y_n - z\|^2 &= \|T_n x_n - z\|^2 = \|(1 - \delta_n)(x_n - z) + \delta_n(V_n x_n - z)\|^2 \\ &= (1 - \delta_n)\|x_n - z\|^2 + \delta_n\|V_n x_n - z\|^2 - \delta_n(1 - \delta_n)\|x_n - V_n x_n\|^2 \\ &\leq \|x_n - z\|^2 - \delta_n(1 - \delta_n)\|x_n - V_n x_n\|^2, \end{aligned} \tag{19}$$
for each $n \in \mathbb{N}$. By condition (ii), we know that $\delta_n \in \big(\frac{3}{4}, 1\big)$, so we have
$$\|y_n - z\|^2 \leq \|x_n - z\|^2,$$
for each $n \in \mathbb{N}$, and thus
$$\|y_n - z\| \leq \|x_n - z\|,$$
for each $n \in \mathbb{N}$.
Furthermore, since $z \in \Gamma$, we also have $z \in \bigcap_{i=1}^{N} F(S_i)$; this implies $z = S_i z = U_i z$ for each $i = 1, \ldots, N$. It follows that $U_N U_{N-1}\cdots U_1 z = z$. We denote by $\mathcal{U}_N$ the composition $U_N U_{N-1}\cdots U_1$; from the above, $\mathcal{U}_N z = z$.
By the definition of $x_{n+1}$ and the relation (19), we obtain
$$\begin{aligned} \|x_{n+1} - z\|^2 &= \|\alpha_n x_n + (1 - \alpha_n)\mathcal{U}_N y_n - z\|^2 = \|\alpha_n(x_n - z) + (1 - \alpha_n)(\mathcal{U}_N y_n - z)\|^2 \\ &\leq \alpha_n\|x_n - z\|^2 + (1 - \alpha_n)\|y_n - z\|^2 - \alpha_n(1 - \alpha_n)\|x_n - \mathcal{U}_N y_n\|^2 \\ &\leq \alpha_n\|x_n - z\|^2 + (1 - \alpha_n)\big(\|x_n - z\|^2 - \delta_n(1 - \delta_n)\|x_n - V_n x_n\|^2\big) - \alpha_n(1 - \alpha_n)\|x_n - \mathcal{U}_N y_n\|^2 \\ &= \|x_n - z\|^2 - (1 - \alpha_n)\delta_n(1 - \delta_n)\|x_n - V_n x_n\|^2 - \alpha_n(1 - \alpha_n)\|x_n - \mathcal{U}_N y_n\|^2 \\ &\leq \|x_n - z\|^2, \end{aligned} \tag{20}$$
for each $n \in \mathbb{N}$. Thus,
$$\|x_{n+1} - z\| \leq \|x_n - z\|,$$
for each $n \in \mathbb{N}$. Therefore, for all $z \in \Gamma$, $\lim_{n\to\infty}\|x_n - z\|$ exists.
Now, from the relation (20), we see that
$$(1 - \alpha_n)\delta_n(1 - \delta_n)\|x_n - V_n x_n\|^2 \leq \|x_n - z\|^2 - \|x_{n+1} - z\|^2,$$
for each $n \in \mathbb{N}$. By the existence of $\lim_{n\to\infty}\|x_n - z\|$ and conditions (ii) and (iii), we get
$$\lim_{n\to\infty}\|x_n - V_n x_n\| = 0. \tag{21}$$
In addition, from the relation (20), we obtain
$$\alpha_n(1 - \alpha_n)\|x_n - \mathcal{U}_N y_n\|^2 \leq \|x_n - z\|^2 - \|x_{n+1} - z\|^2,$$
where $\mathcal{U}_N := U_N U_{N-1}\cdots U_1$, for each $n \in \mathbb{N}$. By the existence of $\lim_{n\to\infty}\|x_n - z\|$ and condition (iii), we get
$$\lim_{n\to\infty}\|x_n - \mathcal{U}_N y_n\| = 0. \tag{22}$$
Consider
$$\|x_n - y_n\| = \|x_n - T_n x_n\| = \|x_n - (1 - \delta_n)x_n - \delta_n V_n x_n\| \leq \delta_n\|x_n - V_n x_n\|,$$
for each $n \in \mathbb{N}$. By using the fact (21), we obtain
$$\lim_{n\to\infty}\|x_n - y_n\| = 0. \tag{23}$$
Next, consider
$$\|x_n - x_{n+1}\| = \|x_n - \alpha_n x_n - (1 - \alpha_n)\mathcal{U}_N y_n\| \leq (1 - \alpha_n)\|x_n - \mathcal{U}_N y_n\|,$$
for each $n \in \mathbb{N}$. Then, by using the fact (22), we have
$$\lim_{n\to\infty}\|x_n - x_{n+1}\| = 0. \tag{24}$$
Next, since $Lz \in F(T)$, we have $(I - T)Lz = 0$. Note that $I - T$ is $\frac{1}{2}$-ism. Then, we have the relation
$$\|(I - T)Lx_n - (I - T)Lz\|^2 \leq 2\langle (I - T)Lx_n - (I - T)Lz, Lx_n - Lz \rangle, \tag{25}$$
for each $n \in \mathbb{N}$. Since $(I - T)Lz = 0$, we obtain
$$\|(I - T)Lx_n\|^2 \leq 2\langle (I - T)Lx_n, Lx_n - Lz \rangle, \tag{26}$$
for each $n \in \mathbb{N}$.
By the relation (26) and $z \in \Gamma$, we have
$$\begin{aligned} \|y_n - z\|^2 &= \big\|J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big) - J_{\lambda_n}^B(I - \lambda_n A)z\big\|^2 \\ &\leq \|(x_n - z) - \gamma_n L^*(I - T)Lx_n\|^2 \\ &= \|x_n - z\|^2 - 2\gamma_n\langle x_n - z, L^*(I - T)Lx_n \rangle + \gamma_n^2\|L^*(I - T)Lx_n\|^2 \\ &= \|x_n - z\|^2 - 2\gamma_n\langle Lx_n - Lz, (I - T)Lx_n \rangle + \gamma_n^2\|L^*(I - T)Lx_n\|^2 \\ &\leq \|x_n - z\|^2 - \gamma_n\|(I - T)Lx_n\|^2 + \gamma_n^2\|L\|^2\|(I - T)Lx_n\|^2 \\ &= \|x_n - z\|^2 - \gamma_n\big(1 - \gamma_n\|L\|^2\big)\|(I - T)Lx_n\|^2, \end{aligned} \tag{27}$$
for each $n \in \mathbb{N}$. Then,
$$\gamma_n\big(1 - \gamma_n\|L\|^2\big)\|(I - T)Lx_n\|^2 \leq \|x_n - z\|^2 - \|y_n - z\|^2,$$
for each $n \in \mathbb{N}$. By condition (ii), for each $n \in \mathbb{N}$, we have
$$\|(I - T)Lx_n\|^2 \leq \frac{1}{a\big(1 - b_1\|L\|^2\big)}\big(\|x_n - z\|^2 - \|y_n - z\|^2\big) \leq \frac{1}{a\big(1 - b_1\|L\|^2\big)}\big(\|x_n - z\| + \|y_n - z\|\big)\|x_n - y_n\|.$$
By using the fact (23), we get
$$\lim_{n\to\infty}\|(I - T)Lx_n\| = 0. \tag{28}$$
Next, we prove the weak convergence of $\{x_n\}$ by using Lemma 5. Recall that $\lim_{n\to\infty}\|x_n - z\|$ exists for all $z \in \Gamma$. Thus, it remains to prove that, if a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converges weakly to a point $x^* \in H_1$, then $x^* \in \Gamma$.
Assume that $x_{n_j} \rightharpoonup x^*$; we first show that $x^* \in L^{-1}F(T)$. Consider
$$\|TLx^* - Lx^*\|^2 = \langle TLx^* - Lx^*, TLx^* - TLx_{n_j} \rangle + \langle TLx^* - Lx^*, TLx_{n_j} - Lx_{n_j} \rangle + \langle TLx^* - Lx^*, Lx_{n_j} - Lx^* \rangle, \tag{29}$$
for each $j \in \mathbb{N}$. Since $L$ is a bounded linear operator, we have $Lx_{n_j} \rightharpoonup Lx^*$. Using this together with the fact (28) in the equality (29), we get $TLx^* = Lx^*$. Hence, $Lx^* \in F(T)$, that is, $x^* \in L^{-1}F(T)$.
Next, we show that $x^* \in (A + B)^{-1}0$. Consider
$$\begin{aligned} \|x^* - J_{\lambda_n}^B(I - \lambda_n A)x^*\|^2 = {} & \langle x^* - J_{\lambda_n}^B(I - \lambda_n A)x^*, x^* - x_{n_j} \rangle \\ & + \langle x^* - J_{\lambda_n}^B(I - \lambda_n A)x^*, x_{n_j} - J_{\lambda_n}^B(I - \lambda_n A)x_{n_j} \rangle \\ & + \langle x^* - J_{\lambda_n}^B(I - \lambda_n A)x^*, J_{\lambda_n}^B(I - \lambda_n A)x_{n_j} - J_{\lambda_n}^B(I - \lambda_n A)x^* \rangle, \end{aligned} \tag{30}$$
for each $j \in \mathbb{N}$. Observe that
$$\|y_n - J_{\lambda_n}^B(I - \lambda_n A)x_n\| = \big\|J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big) - J_{\lambda_n}^B(I - \lambda_n A)x_n\big\| \leq \gamma_n\|L\|\,\|(I - T)Lx_n\|, \tag{31}$$
for each $n \in \mathbb{N}$. Applying the fact (28) to the inequality (31), we obtain
$$\lim_{n\to\infty}\|y_n - J_{\lambda_n}^B(I - \lambda_n A)x_n\| = 0. \tag{32}$$
Since
$$\|x_n - J_{\lambda_n}^B(I - \lambda_n A)x_n\| \leq \|x_n - y_n\| + \|y_n - J_{\lambda_n}^B(I - \lambda_n A)x_n\|,$$
for each $n \in \mathbb{N}$, by the facts (23) and (32), we have
$$\lim_{n\to\infty}\|x_n - J_{\lambda_n}^B(I - \lambda_n A)x_n\| = 0. \tag{33}$$
Thus, from the inequality (30), by using the fact (33) together with $x_{n_j} \rightharpoonup x^*$, we obtain
$$\lim_{j\to\infty}\|x^* - J_{\lambda_{n_j}}^B(I - \lambda_{n_j}A)x^*\| = 0. \tag{34}$$
Therefore, $x^* = J_{\lambda_n}^B(I - \lambda_n A)x^*$, and hence, by Lemma 2, $x^* \in (A + B)^{-1}0$.
Finally, we show that $x^* \in \bigcap_{i=1}^{N} F(S_i)$. Consider
$$\|y_n - \mathcal{U}_N y_n\| \leq \|y_n - x_n\| + \|x_n - \mathcal{U}_N y_n\|,$$
where $\mathcal{U}_N := U_N U_{N-1}\cdots U_1$, for each $n \in \mathbb{N}$. By using the facts (22) and (23), we obtain
$$\lim_{n\to\infty}\|y_n - \mathcal{U}_N y_n\| = 0. \tag{35}$$
By using the fact (35) and $y_{n_j} \rightharpoonup x^*$, we obtain from Lemma 4 that $x^* \in F(\mathcal{U}_N)$. Since $U_i$, $i = 1, \ldots, N$, are averaged mappings, by Lemma 1(ii) we have $F(U_N U_{N-1}\cdots U_1) = \bigcap_{i=1}^{N} F(U_i)$. This implies that $x^* \in \bigcap_{i=1}^{N} F(U_i) = \bigcap_{i=1}^{N} F(S_i)$. From the above results, we have $x^* \in \bigcap_{i=1}^{N} F(S_i) \cap (A + B)^{-1}0 \cap L^{-1}F(T)$; that is, $x^* \in \Gamma$. Finally, by Lemma 5, we conclude that $\{x_n\}$ converges weakly to a point in $\Gamma$. Hence, the proof is completed. □
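For readers who wish to experiment, the following Python sketch instantiates Algorithm (17) in $H_1 = \mathbb{R}^3$, $H_2 = \mathbb{R}^2$ with simple operators chosen by us to satisfy (A1)–(A5); it is an illustration only, not the paper's numerical experiment. Here $Ax = x - p$ is $1$-ism, $B = N_C$ so that $J_{\lambda}^B = P_C$, and $T$ and the $S_i$ are projections onto boxes, so that $\Gamma$ reduces to the single point $p$.

```python
import numpy as np

# Illustrative instantiation of Algorithm (17); all operators are our examples.
L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])          # bounded linear operator, ||L||^2 = 6
p = np.array([0.5, 0.5, 0.5])

A = lambda x: x - p                      # 1-inverse strongly monotone (beta = 1)
J_B = lambda x: np.clip(x, 0.0, 1.0)     # resolvent of B = N_C with C = [0,1]^3
T = lambda v: np.clip(v, 0.5, 2.0)       # nonexpansive; Lp lies in F(T) = [0.5,2]^2
S = [lambda x: np.clip(x, 0.0, 0.8),     # nonexpansive S_1, S_2 with p in F(S_i)
     lambda x: np.clip(x, 0.2, 1.0)]
U = [lambda x, Si=Si: 0.5 * x + 0.5 * Si(x) for Si in S]   # kappa_i = 1/2

lam, gam, alph = 0.5, 0.1, 0.5           # 0<lam<beta, 0<gam<1/||L||^2, 0<alph<1
x = np.zeros(3)
for _ in range(300):
    u = x - gam * L.T @ (L @ x - T(L @ x))
    y = J_B(u - lam * A(u))
    w = y
    for Ui in U:                         # w = U_2 U_1 y (apply U_1 first)
        w = Ui(w)
    x = alph * x + (1 - alph) * w

# In this construction Gamma = {p}: p solves 0 in (A+B)x, Lp in F(T), p in F(S_i).
assert np.allclose(x, p, atol=1e-6)
```

In finite dimensions weak convergence coincides with strong convergence, so the iterates approach $p$ at a linear rate in this nondegenerate example.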

3.2. Strong Convergence Theorems

Theorem 2.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda}^B(I - \lambda A)\big(x_n - \gamma L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N}, \tag{36}$$
where $\lambda \in (0, \beta)$, $\gamma \in \big(0, \frac{1}{\|L\|^2}\big)$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A1)–(A6) hold, $\Gamma \neq \emptyset$, and the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$.
Then, $\{y_n\}$ and $\{x_n\}$ both converge strongly to $\bar{x} \in \Gamma$, where $\bar{x} = P_{\Gamma}f(\bar{x})$.
Proof. 
Firstly, we show the boundedness of $\{x_n\}$. Let $z \in \Gamma$. Following the lines of the proof of the inequality (19), we obtain
$$\|y_n - z\| \leq \|x_n - z\|, \tag{37}$$
for each $n \in \mathbb{N}$. Moreover, writing $\mathcal{U}_N := U_N U_{N-1}\cdots U_1$, by the definition of $x_{n+1}$ and $\mathcal{U}_N z = z$, we obtain
$$\begin{aligned} \|x_{n+1} - z\| &= \|\alpha_n(f(x_n) - z) + (1 - \alpha_n)(\mathcal{U}_N y_n - z)\| \\ &\leq \alpha_n\|f(x_n) - z\| + (1 - \alpha_n)\|\mathcal{U}_N y_n - z\| \\ &\leq \alpha_n\|f(x_n) - f(z)\| + \alpha_n\|f(z) - z\| + (1 - \alpha_n)\|y_n - z\| \\ &\leq \alpha_n\eta\|x_n - z\| + \alpha_n\|f(z) - z\| + (1 - \alpha_n)\|x_n - z\| \\ &= \big(1 - \alpha_n(1 - \eta)\big)\|x_n - z\| + \alpha_n(1 - \eta)\frac{\|f(z) - z\|}{1 - \eta} \\ &\leq \max\left\{\|x_n - z\|, \frac{\|f(z) - z\|}{1 - \eta}\right\} \leq \cdots \leq \max\left\{\|x_1 - z\|, \frac{\|f(z) - z\|}{1 - \eta}\right\}, \end{aligned}$$
for each $n \in \mathbb{N}$. This implies that $\{\|x_n - z\|\}$ is bounded; consequently, $\{\|y_n - z\|\}$ is bounded as well. Hence, $\{x_n\}$ and $\{y_n\}$ are bounded.
Next, note that $P_{\Gamma}f(\cdot)$ is a contraction mapping. Let $\bar{x}$ be its unique fixed point, that is, $\bar{x} = P_{\Gamma}f(\bar{x})$. We consider
$$\begin{aligned} \|x_{n+1} - \bar{x}\|^2 &= \langle \alpha_n f(x_n) + (1 - \alpha_n)\mathcal{U}_N y_n - \bar{x}, x_{n+1} - \bar{x} \rangle \\ &= \alpha_n\langle f(x_n) - f(\bar{x}), x_{n+1} - \bar{x} \rangle + \alpha_n\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle + (1 - \alpha_n)\langle \mathcal{U}_N y_n - \bar{x}, x_{n+1} - \bar{x} \rangle \\ &\leq \frac{\alpha_n}{2}\big(\|f(x_n) - f(\bar{x})\|^2 + \|x_{n+1} - \bar{x}\|^2\big) + \frac{1 - \alpha_n}{2}\big(\|\mathcal{U}_N y_n - \bar{x}\|^2 + \|x_{n+1} - \bar{x}\|^2\big) + \alpha_n\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle \\ &\leq \frac{\alpha_n\eta^2}{2}\|x_n - \bar{x}\|^2 + \frac{1 - \alpha_n}{2}\|x_n - \bar{x}\|^2 + \frac{1}{2}\|x_{n+1} - \bar{x}\|^2 + \alpha_n\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle \\ &= \frac{1 - \alpha_n(1 - \eta^2)}{2}\|x_n - \bar{x}\|^2 + \frac{1}{2}\|x_{n+1} - \bar{x}\|^2 + \alpha_n\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle, \end{aligned}$$
for each $n \in \mathbb{N}$. This gives
$$\|x_{n+1} - \bar{x}\|^2 \leq \big(1 - \alpha_n(1 - \eta^2)\big)\|x_n - \bar{x}\|^2 + \alpha_n(1 - \eta^2)\frac{2}{1 - \eta^2}\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle, \tag{38}$$
for each $n \in \mathbb{N}$.
Next, we show that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. Consider, for each $n \in \mathbb{N}$,
$$\begin{aligned} \|x_{n+1} - x_n\| &= \|\alpha_n f(x_n) - \alpha_n f(x_{n-1}) + \alpha_n f(x_{n-1}) - \alpha_{n-1}f(x_{n-1}) \\ &\qquad + (1 - \alpha_n)\mathcal{U}_N y_n - (1 - \alpha_n)\mathcal{U}_N y_{n-1} + (1 - \alpha_n)\mathcal{U}_N y_{n-1} - (1 - \alpha_{n-1})\mathcal{U}_N y_{n-1}\| \\ &\leq \alpha_n\|f(x_n) - f(x_{n-1})\| + |\alpha_n - \alpha_{n-1}|\,\|f(x_{n-1})\| + (1 - \alpha_n)\|\mathcal{U}_N y_n - \mathcal{U}_N y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|\mathcal{U}_N y_{n-1}\| \\ &\leq \alpha_n\eta\|x_n - x_{n-1}\| + (1 - \alpha_n)\|y_n - y_{n-1}\| + 2|\alpha_n - \alpha_{n-1}|M, \end{aligned} \tag{39}$$
where $\mathcal{U}_N := U_N U_{N-1}\cdots U_1$ and $M = \sup_{n\in\mathbb{N}}\big(\|f(x_n)\| + \|\mathcal{U}_N y_n\|\big)$. For the second term of the inequality (39), by the definition of $y_n$ and the nonexpansiveness of $J_{\lambda}^B(I - \lambda A)\big(I - \gamma L^*(I - T)L\big)$, it follows that
$$\|y_n - y_{n-1}\| = \big\|J_{\lambda}^B(I - \lambda A)\big(I - \gamma L^*(I - T)L\big)x_n - J_{\lambda}^B(I - \lambda A)\big(I - \gamma L^*(I - T)L\big)x_{n-1}\big\| \leq \|x_n - x_{n-1}\|, \tag{40}$$
for each $n \in \mathbb{N}$. Substituting the inequality (40) into the inequality (39), we get
$$\|x_{n+1} - x_n\| \leq \alpha_n\eta\|x_n - x_{n-1}\| + (1 - \alpha_n)\|x_n - x_{n-1}\| + 2|\alpha_n - \alpha_{n-1}|M = \big(1 - \alpha_n(1 - \eta)\big)\|x_n - x_{n-1}\| + 2|\alpha_n - \alpha_{n-1}|M,$$
for each $n \in \mathbb{N}$. Thus, by Lemma 6, we have
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \tag{41}$$
Furthermore, by the definition of $x_{n+1}$ and the relation (19) in Theorem 1, we get
$$\begin{aligned} \|x_{n+1} - \bar{x}\|^2 &= \alpha_n\|f(x_n) - \bar{x}\|^2 + (1 - \alpha_n)\|\mathcal{U}_N y_n - \bar{x}\|^2 - \alpha_n(1 - \alpha_n)\|f(x_n) - \mathcal{U}_N y_n\|^2 \\ &\leq \alpha_n\|f(x_n) - \bar{x}\|^2 + (1 - \alpha_n)\big(\|x_n - \bar{x}\|^2 - \delta(1 - \delta)\|x_n - Vx_n\|^2\big) - \alpha_n(1 - \alpha_n)\|f(x_n) - \mathcal{U}_N y_n\|^2 \\ &= \|x_n - \bar{x}\|^2 + \alpha_n\big(\|f(x_n) - \bar{x}\|^2 - \|x_n - \bar{x}\|^2\big) - (1 - \alpha_n)\delta(1 - \delta)\|x_n - Vx_n\|^2 - \alpha_n(1 - \alpha_n)\|f(x_n) - \mathcal{U}_N y_n\|^2, \end{aligned} \tag{42}$$
for each $n \in \mathbb{N}$, where $\delta = \frac{3 + \gamma\|L\|^2}{4}$ and $V$ is the corresponding nonexpansive mapping, as in the proof of Theorem 1 with the constant parameters $\lambda$ and $\gamma$. Then, we have
$$(1 - \alpha_n)\delta(1 - \delta)\|x_n - Vx_n\|^2 \leq \|x_n - \bar{x}\|^2 - \|x_{n+1} - \bar{x}\|^2 + \alpha_n\big(\|f(x_n) - \bar{x}\|^2 - \|x_n - \bar{x}\|^2\big) \leq \big(\|x_n - \bar{x}\| + \|x_{n+1} - \bar{x}\|\big)\|x_n - x_{n+1}\| + \alpha_n\big(\|f(x_n) - \bar{x}\|^2 - \|x_n - \bar{x}\|^2\big),$$
for each $n \in \mathbb{N}$. By using the fact (41), condition (i), and $\delta \in \big(\frac{3}{4}, 1\big)$, we get
$$\lim_{n\to\infty}\|x_n - Vx_n\| = 0. \tag{43}$$
Subsequently, we have
$$\|y_n - x_n\| = \|(1 - \delta)x_n + \delta Vx_n - x_n\| \leq \delta\|x_n - Vx_n\|, \tag{44}$$
for each $n \in \mathbb{N}$. Thus, by the fact (43), we obtain
$$\lim_{n\to\infty}\|y_n - x_n\| = 0. \tag{45}$$
Moreover, by the same arguments as in the proof of Theorem 1, we also have
$$\lim_{n\to\infty}\|(I - T)Lx_n\| = 0. \tag{46}$$
Next, since $\{x_n\}$ is bounded in $H_1$, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ that converges weakly to some $x^* \in H_1$. We show that $x^* \in \Gamma$. From the proof of Theorem 1, we know that $x^* \in L^{-1}F(T)$ and $x^* \in (A + B)^{-1}0$. It remains to show that $x^* \in \bigcap_{i=1}^{N} F(S_i)$. Writing $\mathcal{U}_N := U_N U_{N-1}\cdots U_1$, consider, for each $n \in \mathbb{N}$,
$$\begin{aligned} \|x_{n+1} - \mathcal{U}_N y_n\| &= \|\alpha_n f(x_n) + (1 - \alpha_n)\mathcal{U}_N y_n - \mathcal{U}_N y_n\| = \alpha_n\|f(x_n) - \mathcal{U}_N y_n\| \\ &\leq \alpha_n\|f(x_n) - f(\bar{x})\| + \alpha_n\|f(\bar{x}) - \mathcal{U}_N y_n\| \leq \alpha_n\eta\|x_n - \bar{x}\| + \alpha_n\|f(\bar{x}) - \mathcal{U}_N y_n\|. \end{aligned} \tag{47}$$
Thus, by condition (i), we obtain
$$\lim_{n\to\infty}\|x_{n+1} - \mathcal{U}_N y_n\| = 0. \tag{48}$$
Since
$$\|y_n - \mathcal{U}_N y_n\| \leq \|y_n - x_n\| + \|x_n - x_{n+1}\| + \|x_{n+1} - \mathcal{U}_N y_n\|, \tag{49}$$
for each $n \in \mathbb{N}$, by using the facts (41), (45), and (48), we have
$$\lim_{n\to\infty}\|y_n - \mathcal{U}_N y_n\| = 0. \tag{50}$$
By using the relation (50) and $y_{n_j} \rightharpoonup x^*$, we obtain from Lemma 4 that $x^* \in F(\mathcal{U}_N) = \bigcap_{i=1}^{N} F(U_i) = \bigcap_{i=1}^{N} F(S_i)$. From the above results, we obtain $x^* \in \Gamma$.
Finally, we prove that $\{x_n\}$ converges strongly to $\bar{x} = P_{\Gamma}f(\bar{x})$. We know that $\{x_n\}$ is bounded, and from the relation (41) we have $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. Without loss of generality, passing to a subsequence if necessary, we may assume that a subsequence $\{x_{n_j+1}\}$ of $\{x_{n+1}\}$ converges weakly to $x^* \in H_1$. Thus, by the characterization of the metric projection, we obtain
$$\limsup_{n\to\infty}\frac{2}{1 - \eta^2}\langle f(\bar{x}) - \bar{x}, x_{n+1} - \bar{x} \rangle = \lim_{j\to\infty}\frac{2}{1 - \eta^2}\langle f(\bar{x}) - \bar{x}, x_{n_j+1} - \bar{x} \rangle = \frac{2}{1 - \eta^2}\langle f(\bar{x}) - \bar{x}, x^* - \bar{x} \rangle \leq 0. \tag{51}$$
From the inequality (38), by using Lemma 6, we conclude that $\|x_n - \bar{x}\| \to 0$ as $n \to \infty$; thus, $x_n \to \bar{x}$. Since $\|y_n - x_n\| \to 0$ as $n \to \infty$, we also conclude that $y_n \to \bar{x}$ as $n \to \infty$. This completes the proof. □
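Algorithm (36) can be sketched numerically in the same illustrative setting as before (our construction, not the paper's experiment): with $f(x) = \frac{1}{2}x$, a contraction with $\eta = \frac{1}{2}$, and $\alpha_n = 1/(n+1)$, conditions (i)–(ii) hold, and the iterates converge strongly to $\bar{x} = P_{\Gamma}f(\bar{x})$, which here is the unique point of $\Gamma$.

```python
import numpy as np

# Illustrative instantiation of Algorithm (36); all operators are our examples.
L = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
p = np.array([0.5, 0.5, 0.5])            # Gamma = {p} in this construction

A = lambda x: x - p                      # 1-ism (beta = 1)
J_B = lambda x: np.clip(x, 0.0, 1.0)     # resolvent of B = N_C with C = [0,1]^3
T = lambda v: np.clip(v, 0.5, 2.0)       # nonexpansive, Lp in F(T)
S = [lambda x: np.clip(x, 0.0, 0.8),
     lambda x: np.clip(x, 0.2, 1.0)]
U = [lambda x, Si=Si: 0.5 * x + 0.5 * Si(x) for Si in S]
f = lambda x: 0.5 * x                    # contraction with eta = 1/2

lam, gam = 0.5, 0.1                      # lam in (0, beta), gam in (0, 1/||L||^2)
x = np.zeros(3)
for n in range(1, 5001):
    alph = 1.0 / (n + 1)                 # alpha_n -> 0, sum alpha_n = infinity
    u = x - gam * L.T @ (L @ x - T(L @ x))
    y = J_B(u - lam * A(u))
    w = y
    for Ui in U:                         # w = U_2 U_1 y
        w = Ui(w)
    x = alph * f(x) + (1 - alph) * w

# Since Gamma = {p}, the strong limit P_Gamma f(x-bar) is p itself.
assert np.allclose(x, p, atol=1e-2)
```

The viscosity term $\alpha_n f(x_n)$ vanishes as $\alpha_n \to 0$, so the residual error decays roughly like $\alpha_n$; the loose tolerance reflects that rate.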

4. Some Deduced Results

If $S_1 = S_2 = \cdots = S_N = I$ (the identity operator), then problem (15) reduces to problem (11). Thus, we have the following results.
Corollary 1.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda_n}^B(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)y_n, \end{cases} \quad n \in \mathbb{N},$$
where the sequences $\{\lambda_n\}$, $\{\gamma_n\}$, and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \lambda_n < \beta$,
(ii) $0 < a \leq \gamma_n \leq b_1 < \frac{1}{\|L\|^2}$,
(iii) $0 < a \leq \alpha_n \leq b_2 < 1$,
for some $a, b_1, b_2 \in \mathbb{R}$. Suppose that assumptions (A1)–(A4) hold and $(A + B)^{-1}0 \cap L^{-1}F(T) \neq \emptyset$. Then, the sequence $\{x_n\}$ converges weakly to an element of $(A + B)^{-1}0 \cap L^{-1}F(T)$.
Corollary 2.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda}^B(I - \lambda A)\big(x_n - \gamma L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)y_n, \end{cases} \quad n \in \mathbb{N},$$
where $\lambda \in (0, \beta)$ and $\gamma \in \big(0, \frac{1}{\|L\|^2}\big)$. Suppose that assumptions (A1)–(A4) and (A6) hold, $(A + B)^{-1}0 \cap L^{-1}F(T) \neq \emptyset$, and the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$.
Then, $\{y_n\}$ and $\{x_n\}$ both converge strongly to $\bar{x} \in (A + B)^{-1}0 \cap L^{-1}F(T)$, where $\bar{x} = P_{(A+B)^{-1}0 \cap L^{-1}F(T)}f(\bar{x})$.
If $A = 0$ (the zero operator) and $F(S) := \bigcap_{i=1}^{N} F(S_i)$, then problem (15) reduces to problem (8). Thus, we also have the following results.
Corollary 3.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda_n}^B\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)Uy_n, \end{cases} \quad n \in \mathbb{N},$$
where the sequences $\{\lambda_n\}$, $\{\gamma_n\}$, and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \lambda_n < \infty$,
(ii) $0 < a \leq \gamma_n \leq b_1 < \frac{1}{\|L\|^2}$,
(iii) $0 < a \leq \alpha_n \leq b_2 < 1$,
for some $a, b_1, b_2 \in \mathbb{R}$, and $U = (1 - \kappa)I + \kappa S$ for $\kappa \in (0, 1)$, where $S$ is a nonexpansive mapping. Suppose that assumptions (A2)–(A4) hold and $F(S) \cap B^{-1}0 \cap L^{-1}F(T) \neq \emptyset$. Then, the sequence $\{x_n\}$ converges weakly to an element of $F(S) \cap B^{-1}0 \cap L^{-1}F(T)$.
Corollary 4.
Let $H_1$ and $H_2$ be Hilbert spaces. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = J_{\lambda}^B\big(x_n - \gamma L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Uy_n, \end{cases} \quad n \in \mathbb{N},$$
where $\lambda \in (0, \infty)$, $\gamma \in \big(0, \frac{1}{\|L\|^2}\big)$, and $U = (1 - \kappa)I + \kappa S$ for $\kappa \in (0, 1)$, where $S$ is a nonexpansive mapping. Suppose that assumptions (A2)–(A4) and (A6) hold, $F(S) \cap B^{-1}0 \cap L^{-1}F(T) \neq \emptyset$, and the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$.
Then, $\{y_n\}$ and $\{x_n\}$ both converge strongly to $\bar{x} \in F(S) \cap B^{-1}0 \cap L^{-1}F(T)$, where $\bar{x} = P_{F(S)\cap B^{-1}0\cap L^{-1}F(T)}f(\bar{x})$.
If $A = 0$ and $L = I$, then problem (15) reduces to a type of common fixed point problem for nonexpansive mappings; see [28]. That is, in this case, we consider the problem of finding a point
$$x^* \in \bigcap_{i=1}^{N} F(S_i) \cap B^{-1}0 \cap F(T) =: \Omega.$$
In addition, the following results can be obtained from the main Theorems 1 and 2, respectively.
Corollary 5.
Let $H$ be a Hilbert space. For any $x_1 \in H$, define
$$\begin{cases} y_n = J_{\lambda_n}^B\big((1 - \gamma_n)x_n + \gamma_n Tx_n\big), \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N},$$
where the sequences $\{\lambda_n\}$, $\{\gamma_n\}$, and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \lambda_n < \infty$,
(ii) $0 < a \leq \gamma_n \leq b < 1$,
(iii) $0 < a \leq \alpha_n \leq b < 1$,
for some $a, b \in \mathbb{R}$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A2), (A4), and (A5) hold and $\Omega \neq \emptyset$. Then, the sequence $\{x_n\}$ converges weakly to an element of $\Omega$.
Corollary 6.
Let $H$ be a Hilbert space. For any $x_1 \in H$, define
$$\begin{cases} y_n = J_{\lambda}^B\big((1 - \gamma)x_n + \gamma Tx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N},$$
where $\lambda \in (0, \infty)$, $\gamma \in (0, 1)$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A2), (A4)–(A6) hold, $\Omega \neq \emptyset$, and the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$.
Then, $\{y_n\}$ and $\{x_n\}$ both converge strongly to $\bar{x} \in \Omega$, where $\bar{x} = P_{\Omega}f(\bar{x})$.

5. Applications

In this section, we discuss applications of problem (15) via Theorems 1 and 2.

5.1. Variational Inequality Problem

Let the normal cone to $C$ at $u \in C$ be defined by
$$N_C(u) = \{z \in H : \langle z, y - u \rangle \leq 0,\ \forall y \in C\}.$$
It is well known that $N_C$ is a maximal monotone operator. By taking $B := N_C : H \to 2^H$, we see that problem (10) reduces to the problem of finding a point $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \geq 0, \quad \forall x \in C. \tag{59}$$
We denote by $VIP(C, A)$ the solution set of problem (59). Notice that, in this case, we have $J_{\lambda}^B = P_C$. Under these settings, problem (15) reduces to the problem of finding a point
$$x^* \in \bigcap_{i=1}^{N} F(S_i) \cap VIP(C, A) \cap L^{-1}F(T) =: \Gamma_{A,S,T}.$$
Subsequently, by applying Theorems 1 and 2, we obtain the following convergence theorems.
Theorem 3.
Let $H_1$ and $H_2$ be Hilbert spaces and let $C$ be a nonempty closed convex subset of $H_1$. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = P_C(I - \lambda_n A)\big(x_n - \gamma_n L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N},$$
where the sequences $\{\lambda_n\}$, $\{\gamma_n\}$, and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \lambda_n < \beta$,
(ii) $0 < a \leq \gamma_n \leq b_1 < \frac{1}{\|L\|^2}$,
(iii) $0 < a \leq \alpha_n \leq b_2 < 1$,
for some $a, b_1, b_2 \in \mathbb{R}$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A1), (A3)–(A5) hold and $\Gamma_{A,S,T} \neq \emptyset$. Then, the sequence $\{x_n\}$ converges weakly to an element of $\Gamma_{A,S,T}$.
Theorem 4.
Let $H_1$ and $H_2$ be Hilbert spaces and let $C$ be a nonempty closed convex subset of $H_1$. For any $x_1 \in H_1$, define
$$\begin{cases} y_n = P_C(I - \lambda A)\big(x_n - \gamma L^*(I - T)Lx_n\big), \\ x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)U_N U_{N-1}\cdots U_1 y_n, \end{cases} \quad n \in \mathbb{N},$$
where $\lambda \in (0, \beta)$, $\gamma \in \big(0, \frac{1}{\|L\|^2}\big)$, and $U_i = (1 - \kappa_i)I + \kappa_i S_i$ for $\kappa_i \in (0, 1)$, $i = 1, \ldots, N$. Suppose that assumptions (A1), (A3)–(A6) hold, $\Gamma_{A,S,T} \neq \emptyset$, and the sequence $\{\alpha_n\} \subset (0, 1)$ satisfies the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$;
(ii) $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\sum_{n=1}^{\infty}|\alpha_n - \alpha_{n-1}| < \infty$.
Then, $\{y_n\}$ and $\{x_n\}$ both converge strongly to $\bar{x} \in \Gamma_{A,S,T}$, where $\bar{x} = P_{\Gamma_{A,S,T}}f(\bar{x})$.

5.2. Convex Minimization Problem

We consider a convex function $g : H \to \mathbb{R}$ which is Fréchet differentiable. Let $C$ be a given closed convex subset of $H$. By setting $A := \nabla g$ (the gradient of $g$) and $B := N_C$, we see that the problem of finding a point $x^* \in (A + B)^{-1}0$ is equivalent to the following problem: find a point $x^* \in C$ such that
$$\langle \nabla g(x^*), x - x^* \rangle \geq 0, \quad \forall x \in C. \tag{63}$$
It is well known that (63) is equivalent to the minimization problem of finding $x^* \in C$ such that
$$x^* \in \arg\min_{x\in C} g(x).$$
Therefore, in this case, problem (15) reduces to the problem of finding a point
$$x^* \in \bigcap_{i=1}^{N} F(S_i) \cap \arg\min_{x\in C} g(x) \cap L^{-1}F(T) =: \Gamma_{g,S,T}.$$
Then, by applying Theorems 1 and 2, we obtain the following results.
Theorem 5.
Let H_1 and H_2 be Hilbert spaces and C be a nonempty closed convex subset of H_1. Let g : H_1 → ℝ be convex and Fréchet differentiable such that ∇g is ν-Lipschitz continuous. For any x_1 ∈ H_1, define
y_n = P_C[(I − λ_n ∇g)x_n − γ_n L*(I − T)Lx_n], x_{n+1} = α_n x_n + (1 − α_n)U_N U_{N−1} ⋯ U_1 y_n, n ∈ ℕ,
where the sequences {λ_n}, {γ_n}, and {α_n} satisfy the following conditions:
(i) 0 < λ_n < 1/ν;
(ii) 0 < a ≤ γ_n ≤ b_1 < 1/‖L‖²;
(iii) 0 < a ≤ α_n ≤ b_2 < 1,
for some a, b_1, b_2 ∈ ℝ, and U_i = (1 − κ_i)I + κ_i S_i for κ_i ∈ (0, 1), i = 1, …, N. Suppose that assumptions (A3)–(A5) hold and Γ_{g,S,T} ≠ ∅. Then, the sequence {x_n} converges weakly to an element of Γ_{g,S,T}.
Proof. 
Notice that, since g is convex and ∇g is ν-Lipschitz continuous, ∇g is (1/ν)-ism (see [29]). Thus, the conclusion follows immediately from Theorem 1. □
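The Baillon–Haddad property invoked in this proof can be checked numerically. The sketch below uses toy data of our own, not the paper's operators: g(x) = ½xᵀQx with Q symmetric positive semidefinite, so ∇g(x) = Qx is ν-Lipschitz with ν = ‖Q‖ and should satisfy the (1/ν)-ism inequality ⟨∇g(x) − ∇g(y), x − y⟩ ≥ (1/ν)‖∇g(x) − ∇g(y)‖².

```python
import numpy as np

# Sanity check of the Baillon-Haddad property for g(x) = 0.5 * x @ Q @ x:
# grad g(x) = Q x is nu-Lipschitz with nu = ||Q||, and should be (1/nu)-ism.

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Q = M.T @ M                          # symmetric positive semidefinite
nu = np.linalg.norm(Q, 2)            # Lipschitz constant of grad g

gaps = []
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    d, gd = x - y, Q @ (x - y)       # grad g(x) - grad g(y) = Q(x - y)
    gaps.append(d @ gd - (1.0 / nu) * (gd @ gd))
min_gap = min(gaps)                  # should be >= 0 up to rounding
```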
Theorem 6.
Let H_1 and H_2 be Hilbert spaces and C be a nonempty closed convex subset of H_1. Let g : H_1 → ℝ be convex and Fréchet differentiable such that ∇g is ν-Lipschitz continuous. For any x_1 ∈ H_1, define
y_n = P_C[(I − λ∇g)x_n − γL*(I − T)Lx_n], x_{n+1} = α_n f(x_n) + (1 − α_n)U_N U_{N−1} ⋯ U_1 y_n, n ∈ ℕ,
where λ ∈ (0, 1/ν), γ ∈ (0, 1/‖L‖²), and U_i = (1 − κ_i)I + κ_i S_i for κ_i ∈ (0, 1), i = 1, …, N. Suppose that assumptions (A3)–(A6) hold, Γ_{g,S,T} ≠ ∅, and the sequence {α_n} ⊂ (0, 1) satisfies the following conditions:
(i) lim_{n→∞} α_n = 0;
(ii) Σ_{n=1}^{∞} α_n = ∞ and Σ_{n=1}^{∞} |α_n − α_{n−1}| < ∞.
Then, {y_n} and {x_n} both converge strongly to x̄ ∈ Γ_{g,S,T}, where x̄ = P_{Γ_{g,S,T}} f(x̄).
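The inner step P_C((I − λ∇g)x) appearing in Theorems 5 and 6 is a classical projected-gradient step. A minimal sketch with toy data of our own (a quadratic g and a box C; the paper's operators are not used here):

```python
import numpy as np

# Projected-gradient step P_C((I - lam * grad g) x) with
# g(x) = 0.5*||x - c||^2, so grad g(x) = x - c is 1-Lipschitz (nu = 1),
# and C = [0, 1]^2, whose projection is a coordinate-wise clip.

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)        # P_C for the box C = [lo, hi]^2

def pg_step(x, grad_g, lam, proj_C):
    return proj_C(x - lam * grad_g(x))

c = np.array([2.0, -1.0])            # unconstrained minimizer of g
grad_g = lambda x: x - c
x = np.array([0.5, 0.5])
for _ in range(50):                  # lam = 0.9 satisfies 0 < lam < 1/nu
    x = pg_step(x, grad_g, 0.9, proj_box)
```

Here the iterates reach the constrained minimizer P_C(c) = (1, 0) after a single step; in the theorems above this step is only one ingredient, composed with the L*(I − T)L correction and the averaged mappings U_i.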

5.3. Split Common Fixed Point Problem

Consider a nonexpansive mapping V : H_1 → H_1. By Lemma 1(iv), we know that A := I − V is (1/2)-ism, and Ax* = 0 if and only if x* ∈ F(V). Thus, in the case that B := 0 (the zero operator), problem (11) reduces to the problem of finding a point
x* ∈ F(V) such that Lx* ∈ F(T).
Problem (67) is called the split common fixed point problem (SCFP) and has been studied by many authors; see, for example, [30,31,32,33]. In this case, problem (15) reduces to the problem of finding a point
x* ∈ ⋂_{i=1}^{N} F(S_i) ∩ F(V) ∩ L^{−1}(F(T)) =: Γ_{V,S,T}.
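Lemma 1(iv), which underlies this reduction, can also be sanity-checked numerically; the plane rotation below is our own illustrative choice of a nonexpansive V.

```python
import numpy as np

# Check of Lemma 1(iv): if V is nonexpansive, then A = I - V is 1/2-ism,
# i.e., <Ax - Ay, x - y> >= (1/2)||Ax - Ay||^2. We take V to be a plane
# rotation (an isometry, hence nonexpansive).

theta = 1.0
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.eye(2) - V

rng = np.random.default_rng(1)
gaps = []
for _ in range(1000):
    d = rng.standard_normal(2)       # d = x - y
    Ad = A @ d                       # Ax - Ay for the linear map A
    gaps.append(d @ Ad - 0.5 * (Ad @ Ad))
min_gap = min(gaps)                  # 1/2-ism: each gap >= 0
```

For an isometry the 1/2-ism inequality is tight (the gaps vanish up to rounding), so the numerical tolerance matters here.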
By applying Theorems 1 and 2, we can obtain the following results.
Theorem 7.
Let H_1 and H_2 be Hilbert spaces. Let V : H_1 → H_1 be a nonexpansive mapping. For any x_1 ∈ H_1, define
y_n = ((1 − λ_n)I + λ_n V)x_n − γ_n L*(I − T)Lx_n, x_{n+1} = α_n x_n + (1 − α_n)U_N U_{N−1} ⋯ U_1 y_n, n ∈ ℕ,
where the sequences {λ_n}, {γ_n}, and {α_n} satisfy the following conditions:
(i) 0 < λ_n < 1/2;
(ii) 0 < a ≤ γ_n ≤ b_1 < 1/‖L‖²;
(iii) 0 < a ≤ α_n ≤ b_2 < 1,
for some a, b_1, b_2 ∈ ℝ, and U_i = (1 − κ_i)I + κ_i S_i for κ_i ∈ (0, 1), i = 1, …, N. Suppose that assumptions (A3)–(A5) hold and Γ_{V,S,T} ≠ ∅. Then, the sequence {x_n} converges weakly to an element of Γ_{V,S,T}.
Proof. 
Observe that Algorithm (18) reduces to Algorithm (69) by setting A := I − V and B := 0. Recall that the zero operator is monotone and continuous; consequently, it is maximal monotone. Moreover, its resolvent is nothing but the identity operator on H_1. Using these facts, the result follows immediately. □
Theorem 8.
Let H_1 and H_2 be Hilbert spaces. Let V : H_1 → H_1 be a nonexpansive mapping. For any x_1 ∈ H_1, define
y_n = ((1 − λ)I + λV)x_n − γL*(I − T)Lx_n, x_{n+1} = α_n f(x_n) + (1 − α_n)U_N U_{N−1} ⋯ U_1 y_n, n ∈ ℕ,
where λ ∈ (0, 1/2), γ ∈ (0, 1/‖L‖²), and U_i = (1 − κ_i)I + κ_i S_i for κ_i ∈ (0, 1), i = 1, …, N. Suppose that assumptions (A3)–(A6) hold, Γ_{V,S,T} ≠ ∅, and the sequence {α_n} ⊂ (0, 1) satisfies the following conditions:
(i) lim_{n→∞} α_n = 0;
(ii) Σ_{n=1}^{∞} α_n = ∞ and Σ_{n=1}^{∞} |α_n − α_{n−1}| < ∞.
Then, {y_n} and {x_n} both converge strongly to x̄ ∈ Γ_{V,S,T}, where x̄ = P_{Γ_{V,S,T}} f(x̄).
Proof. 
We obtain the above result by setting A := I − V and B := 0 in Algorithm (36). □

6. Numerical Experiments

In this section, we present numerical experiments illustrating Theorems 1 and 2.
Example 1.
Let H_1 = ℝ² and H_2 = ℝ³ be equipped with the Euclidean norm. Let x̃ := (3, 2)ᵀ and x̂ := (1, 4)ᵀ be two fixed vectors in H_1. We consider the projection operators P_{C_1} and P_{C_2}, where C_1 and C_2 are the following nonempty closed convex subsets of H_1:
C_1 := {u ∈ H_1 : ⟨x̃, u⟩ ≤ 6}, C_2 := {u ∈ H_1 : ⟨x̂, u⟩ ≤ 1}.
Now, we notice that F(P_{C_1}) ∩ F(P_{C_2}) = C_1 ∩ C_2.
Next, for each x := (x_1, x_2)ᵀ ∈ H_1, we consider the following two norms:
‖x‖₁ = |x₁| + |x₂| and ‖x‖_∞ = max{|x₁|, |x₂|}.
Define the function g : H_1 → ℝ by
g(x) = ‖x‖₁, x ∈ H_1.
We know that g is a convex function and that its subdifferential operator is
∂g(x) = {z ∈ H_1 : ⟨x, z⟩ = ‖x‖₁, ‖z‖_∞ ≤ 1}, x ∈ H_1.
Furthermore, since g is a convex function, we know that ∂g(·) is a maximal monotone operator. Moreover, for each λ > 0, we have
J_{λ∂g}(x) = (u₁, u₂)ᵀ ∈ H_1 with u_i = x_i − min{|x_i|, λ} sgn(x_i), i = 1, 2,
where sgn(·) stands for the signum function.
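In code, this resolvent is the familiar componentwise soft-thresholding operator; a small sketch (the test vectors are our own):

```python
import numpy as np

# The resolvent J_{lambda * dg} above is componentwise soft-thresholding:
# u_i = x_i - min{|x_i|, lambda} * sgn(x_i).

def soft_threshold(x, lam):
    return x - np.minimum(np.abs(x), lam) * np.sign(x)

u = soft_threshold(np.array([3.0, -0.2]), 0.5)
# large entries shrink by lambda; entries with |x_i| <= lambda map to 0
```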
On the other hand, we let x̄ := (4, 3)ᵀ ∈ H_1 and ȳ := (2, 1, 1)ᵀ ∈ H_2 be two further fixed vectors. We consider the 1-ism operator P_{Q_1}, where Q_1 is the following closed convex subset of H_1:
Q_1 := {u ∈ H_1 : ⟨x̄, u⟩ ≤ 7}.
Furthermore, we consider the nonexpansive single-valued mapping P_{Q_2} on H_2, where Q_2 is the following closed convex subset of H_2:
Q_2 := {v ∈ H_2 : ‖ȳ − v‖ ≤ 2}.
We also notice that, since Q_2 is nonempty, we have F(P_{Q_2}) = Q_2.
Now, let us consider the 3 × 2 matrix L := [1, 1/2; 1/2, 1/3; 1/3, 1/4]. We can check that L : H_1 → H_2 with ‖L‖ = 1.3330.
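The reported value of ‖L‖ can be reproduced as the largest singular value of L:

```python
import numpy as np

# Reproducing the reported operator norm ||L|| = 1.3330 for the
# 3 x 2 matrix L of Example 1.

L = np.array([[1.0, 1/2],
              [1/2, 1/3],
              [1/3, 1/4]])
norm_L = np.linalg.norm(L, 2)   # spectral norm; approximately 1.3330
```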
Under the above settings, we discuss some numerical experiments for Algorithm (18). In fact, in this situation, Algorithm (18) converges to a point x* ∈ H_1 such that
x* ∈ (C_1 ∩ C_2) ∩ (P_{Q_1} + ∂g)^{−1}(0) ∩ L^{−1}(Q_2).
Notice that the solution set of problem (71) is {(x, (3x − 1)/4)ᵀ ∈ H_1 : 1 ≤ x ≤ 2.5358}. We run the experiments with the stopping criterion ‖x_{n+1} − x_n‖/max{1, ‖x_n‖} ≤ 1.0 × 10⁻⁴.
We first test Algorithm (18) with five choices of the stepsize parameters α_n and λ_n, using the initial vectors (0, 0)ᵀ, (1, 1)ᵀ, (−1, −1)ᵀ, and (10, 10)ᵀ in H_1. The results are shown in Table 1, with the fixed values γ_n = 0.5/‖L‖² and κ_1 = κ_2 = 0.5. From Table 1, we see that, for each initial point, the choice α_n = 0.1, λ_n = 0.9 yields faster convergence than the other choices.
Next, in Table 2, we fix the stepsize parameters α_n = 0.1, λ_n = 0.9 and consider three different choices of γ_n, namely γ_n = 0.1/‖L‖², 0.5/‖L‖², and 0.9/‖L‖². From the results in Table 2, we may suggest that a larger stepsize γ_n provides faster convergence.
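To make the structure of Algorithm (18) concrete, here is a runnable sketch with stand-in operators of our own (a box for C, projections for T and S, and A := I − S, which is 1/2-ism by Lemma 1(iv)); it does not reproduce the exact sets of Example 1.

```python
import numpy as np

# Sketch of the two-step pattern of Algorithm (18) with toy operators:
#   y_n     = P_C[(I - lam*A) x_n - gam * L^T (L x_n - T(L x_n))]
#   x_{n+1} = alp * x_n + (1 - alp) * U(y_n)
# together with the relative-change stopping rule used in the experiments.

L = np.array([[1.0, 1/2], [1/2, 1/3], [1/3, 1/4]])
gam = 0.5 / np.linalg.norm(L, 2) ** 2          # gamma in (0, 1/||L||^2)

proj_C = lambda x: np.clip(x, -2.0, 2.0)       # P_C for C = [-2, 2]^2
T = lambda z: np.clip(z, -1.0, 1.0)            # nonexpansive T on H2
S = lambda x: np.clip(x, 0.0, 3.0)             # nonexpansive S on H1
A = lambda x: x - S(x)                         # 1/2-ism by Lemma 1(iv)
kappa = 0.5
U = lambda x: (1 - kappa) * x + kappa * S(x)   # averaged mapping U

x = np.array([10.0, -10.0])
lam, alp = 0.4, 0.1                            # lam < 1/2, alp in (0, 1)
for n in range(2000):
    Lx = L @ x
    y = proj_C(x - lam * A(x) - gam * (L.T @ (Lx - T(Lx))))
    x_new = alp * x + (1 - alp) * U(y)
    if np.linalg.norm(x_new - x) / max(1.0, np.linalg.norm(x)) <= 1e-4:
        x = x_new
        break
    x = x_new
# at termination, x approximately satisfies S(x) = x and L x in Fix(T)
```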
Example 2.
Let H_1 = ℝ² and H_2 = ℝ³. We consider the operators and the function from Example 1, namely P_{C_1}, P_{Q_1}, P_{Q_2}, L, and g. Furthermore, we consider the contraction mapping f := [1/10, 0; 0, 1/20].
This means that, in this situation, we are considering the problem of finding a point
x* ∈ C_1 ∩ (P_{Q_1} + ∂g)^{−1}(0) ∩ L^{−1}(Q_2).
We notice that the solution set of problem (72) is {(x, (3x − 1)/4)ᵀ ∈ H_1 : 1/3 ≤ x ≤ 2.5358}.
In Table 3, we compare the iteration numbers of Algorithm (14) and Algorithm (36) for different initial points. We use α_n = 0.1, λ_n = λ = 0.9, and γ_n = γ = 0.9/‖L‖² in both experiments. From Table 3, one may see that Algorithm (36) converges faster than Algorithm (14).

7. Conclusions

In this work, we focused on the problem of finding a common solution of a class of split feasibility problems and the common fixed points of nonexpansive mappings, namely problem (15), which generalizes problems (8) and (11). Under suitable control conditions, Theorem 1 guarantees that the proposed algorithm converges weakly to a solution; the strong convergence of the proposed algorithm (Theorem 2) is also established. Some important applications and numerical experiments for the considered problems are discussed as well. We point out that the main motivation of the algorithm introduced in this work is to avoid the computational complexity of evaluating the resolvent operator when dealing with problems involving the sum of two maximal monotone operators.

Author Contributions

Conceptualization, S.S., N.P. and M.S.; methodology, M.S.; writing—original draft preparation, M.S.; writing—review and editing, S.S. and N.P.

Funding

This research received no external funding.

Acknowledgments

This research was supported by Chiang Mai University, Chiang Mai, Thailand.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
3. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
4. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
5. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 17.
6. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 1970, 3, 154–158.
7. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
8. Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3, 791–808.
9. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
10. Yao, Y.; Noor, M.A. On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 2008, 217, 46–55.
11. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
12. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics 2057; Springer: Heidelberg, Germany, 2012; pp. 154–196.
13. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
14. Zhang, L.; Hao, Y. Fixed point methods for solving solutions of a generalized equilibrium problem. J. Nonlinear Sci. Appl. 2016, 9, 149–159.
15. Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2015, 23, 205–221.
16. Boikanyo, O.A. The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, 2016, 10.
17. Moudafi, A.; Théra, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1997, 94, 425–448.
18. Qin, X.; Cho, S.Y.; Wang, L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, 2014, 10.
19. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
20. Suwannaprapa, M.; Petrot, N.; Suantai, S. Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017, 2017, 17.
21. Zhu, J.; Tang, J.; Chang, S. Strong convergence theorems for a class of split feasibility problems and fixed point problem in Hilbert spaces. J. Inequal. Appl. 2018, 2018, 15.
22. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
23. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009; pp. 82–163.
24. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8, 471–489.
25. Takahashi, W. Nonlinear Functional Analysis: Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000; pp. 55–92.
26. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428.
27. Liu, L.S. Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194, 114–125.
28. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
29. Baillon, J.B.; Haddad, G. Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26, 137–150.
30. Cui, H.; Wang, F. Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 8.
31. Moudafi, A. A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 2011, 74, 4083–4087.
32. Shimizu, T.; Takahashi, W. Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 1997, 211, 71–83.
33. Zhao, J.; He, S. Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, 2012, 12.
Table 1. Numerical experiments for different stepsize parameters α_n and λ_n in Algorithm (18), for several initial points.
Initial point ↓ / Case → | α_n = 0.5, λ_n = 0.5 | α_n = 0.1, λ_n = 0.1 | α_n = 0.1, λ_n = 0.9 | α_n = 0.9, λ_n = 0.1 | α_n = 0.9, λ_n = 0.9
 | Iters, Sol | Iters, Sol | Iters, Sol | Iters, Sol | Iters, Sol
(0, 0) | 206, (0.9961, 0.4985) | 353, (0.9916, 0.4976) | 95, (0.9985, 0.4993) | 1645, (0.9277, 0.4793) | 566, (0.9862, 0.4935)
(1, 1) | 193, (0.9961, 0.4984) | 297, (0.9916, 0.4976) | 94, (0.9986, 0.4993) | 1164, (0.9277, 0.4793) | 555, (0.9862, 0.4935)
(−1, −1) | 207, (0.9961, 0.4985) | 351, (0.9916, 0.4976) | 96, (0.9986, 0.4993) | 1647, (0.9277, 0.4793) | 573, (0.9862, 0.4935)
(10, 10) | 31, (1.7922, 1.0935) | 64, (1.8762, 1.1544) | 9, (1.5947, 0.9460) | 382, (1.8824, 1.1348) | 95, (1.6352, 0.9741)
Table 2. Influence of the stepsize parameter γ_n in Algorithm (18) for different initial points.
Initial point ↓ / Case → | γ_n = 0.1/‖L‖² | γ_n = 0.5/‖L‖² | γ_n = 0.9/‖L‖²
 | Iters, Sol | Iters, Sol | Iters, Sol
(0, 0) | 98, (0.9986, 0.4993) | 95, (0.9985, 0.4993) | 94, (0.9986, 0.4993)
(1, 1) | 95, (0.9986, 0.4993) | 94, (0.9986, 0.4993) | 94, (0.9986, 0.4993)
(−1, −1) | 99, (0.9986, 0.4994) | 96, (0.9986, 0.4993) | 94, (0.9986, 0.4993)
(10, 10) | 9, (1.5721, 0.9291) | 9, (1.5947, 0.9460) | 9, (1.3762, 0.7821)
Table 3. Numerical comparison between Algorithm (14) and Algorithm (36) for different initial points.
Initial point ↓ / Case → | Algorithm (14): Iters, Sol | Algorithm (36): Iters, Sol
(0, 0) | 15, (0.4045, 0.0689) | 15, (0.4013, 0.0747)
(1, 1) | 15, (0.4045, 0.0689) | 15, (0.4014, 0.0747)
(1, 1) | 15, (0.4045, 0.0689) | 14, (0.4013, 0.0747)
(1, 1) | 16, (0.4045, 0.0689) | 16, (0.4013, 0.0747)
(1, 1) | 26, (0.4047, 0.0690) | 25, (0.4015, 0.0748)
(10, 10) | 26, (0.4047, 0.0690) | 26, (0.4015, 0.0748)
(10, 10) | 29, (0.4047, 0.0690) | 25, (0.4015, 0.0748)
(10, 10) | 34, (0.4047, 0.0690) | 16, (0.4013, 0.0747)
(10, 10) | 34, (0.4047, 0.0690) | 33, (0.4015, 0.0748)
