Article

A General Algorithm for the Split Common Fixed Point Problem with Its Applications to Signal Processing

by Wachirapong Jirakitpuwapat 1,†, Poom Kumam 2,3,*,†, Yeol Je Cho 2,4,5,† and Kanokwan Sitthithakerngkiet 6,†
1
KMUTT-Fixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok 10140, Thailand
2
KMUTT-Fixed Point Theory and Applications Research Group, Theoretical and Computational Science Center (TaCS), Science Laboratory Building, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4
Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
5
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
6
Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(3), 226; https://doi.org/10.3390/math7030226
Submission received: 24 December 2018 / Revised: 18 February 2019 / Accepted: 20 February 2019 / Published: 28 February 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: In 2014, Cui and Wang constructed an algorithm for demicontractive operators and proved weak convergence theorems showing that their algorithm approximates a solution of the split common fixed point problem without using the operator norm. Motivated by Cui and Wang's work, in 2015, Boikanyo also constructed a new algorithm for demicontractive operators and obtained strong convergence theorems for this problem without using the operator norm. In this paper, we introduce a viscosity iterative step into Boikanyo's algorithm to approximate a solution of this problem and prove strong convergence theorems for the proposed algorithm. Finally, we apply our main results to some applications, including signal processing, and compare our algorithm with algorithms such as Cui and Wang's algorithm, Boikanyo's algorithm, the forward-backward splitting algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA).

1. Introduction

Assume that $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively. Assume that $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$.
In 1994, the split feasibility problem was proposed by Censor and Elfving [1] as follows:
$$\text{Find a point } x \in H_1 \text{ such that } x \in C \text{ and } Ax \in Q. \qquad (1)$$
It is interesting to note that, when taking $C = H_1$ and $Q = \{b\}$, the split feasibility problem reduces to the linear inverse problem:
$$\text{Find a point } x \in H_1 \text{ such that } Ax = b. \qquad (2)$$
The most popular way to solve the linear inverse problem is to reformulate it as a least squares problem. Similarly, the split feasibility problem can be solved by equivalently reformulating it as the convex optimization problem
$$\min_{x \in C}\ \frac{1}{2}\|Ax - P_Q(Ax)\|^2, \qquad (3)$$
where $P_Q(\cdot)$ is the projection operator onto the set $Q$ defined by
$$P_Q(v) = \arg\min_{z \in Q}\|z - v\|.$$
In 2002, based on the reformulation (3), the so-called CQ algorithm was presented by Byrne, who solved this problem by using the following algorithm: for an arbitrary $x_1 \in H$,
$$x_{n+1} = A^{-1}P_Q(P_{A(C)}(Ax_n)), \quad n \in \mathbb{N}, \qquad (4)$$
which converges to a solution of the convex optimization problem. Since the algorithm (4) requires the inverse of $A$, it is expensive to compute. We note that $x \in H$ solves the problem (2) if and only if $x$ is a fixed point of $T := P_C(I - \rho A^*(I - P_Q)A)$ for any $\rho > 0$.
In 2002, Byrne [2] constructed the following algorithm (5), which does not compute the inverse of $A$: for any $x_0$, $\{x_n\}$ is generated by
$$x_{n+1} = P_C(I - \rho A^*(I - P_Q)A)x_n, \quad n \in \mathbb{N}, \qquad (5)$$
where $\rho \in (0, \frac{2}{L})$ and $L$ is the largest eigenvalue of $A^*A$.
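As an illustration, the iteration (5) is easy to implement once the two projections are available. The following Python sketch (our illustration, not from the paper) applies it to a toy instance in which $C$ is the nonnegative orthant and $Q = \{b\}$ is a singleton, so $P_Q$ is constant; the step size is taken below $2/L$ with $L$ the largest eigenvalue of $A^{\top}A$.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, rho, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - rho * A^T (I - P_Q) A x_n)."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - rho * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy instance: C = nonnegative orthant, Q = {b}.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
L = np.linalg.eigvalsh(A.T @ A).max()              # largest eigenvalue of A^T A
x = cq_algorithm(A,
                 proj_C=lambda v: np.maximum(v, 0.0),  # P_C for the orthant
                 proj_Q=lambda w: b,                   # P_Q for Q = {b}
                 x0=np.zeros(2), rho=1.0 / L)
# x approximates the point of C whose image under A lies in Q
```

Here the iterate converges to $x = (1, 1)^\top$, the unique point with $x \in C$ and $Ax = b$.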
Recently, the split feasibility problem has been applied to approximation theory, signal processing, image recovery, control theory, biomedical engineering, geophysics and communications by many authors; see the papers [3,4,5,6,7,8,9].
In particular, the split common fixed point problem is as follows:
$$\text{Find a point } x \in H \text{ such that } x \in \mathrm{Fix}(U) \text{ and } Ax \in \mathrm{Fix}(T), \qquad (6)$$
where $U : H \to H$ and $T : K \to K$ are operators and $\mathrm{Fix}(U)$ and $\mathrm{Fix}(T)$ denote the fixed point sets of $U$ and $T$, respectively. In 2009, this problem was proposed by Censor and Segal [10], who constructed the following algorithm for solving it: for any $x_0 \in H$, $\{x_n\}$ is generated by
$$x_{n+1} = U(I - \rho A^*(I - T)A)x_n, \quad n \in \mathbb{N}. \qquad (7)$$
This algorithm can be extended to many cases as follows:
  • Quasi-nonexpansive operators by Moudafi [11];
  • Finitely many directed operators by Wang and Xu [12];
  • Demicontractive operators by Moudafi [13]. In the case when $U$ and $T$ are directed operators, the step size $\rho$ satisfies $0 < \rho < \frac{2}{\|A\|^2}$, and $\{x_n\}$ generated by the algorithm (7) converges weakly to a solution of the problem (6) when a solution exists.
The algorithm (7) needs to compute the operator norm $\|A\|$, which is not easily done. In 2014, Cui and Wang [14] proposed the following Algorithm 1, which avoids the operator norm: for an initial $x_0 \in H$,
$$x_{n+1} = U_\lambda(x_n - \rho_n A^*(I - T)Ax_n), \quad n \ge 0, \qquad (8)$$
where
$$\rho_n = \begin{cases} \dfrac{(1-\tau)\|(I - T)Ax_n\|^2}{2\|A^*(I - T)Ax_n\|^2}, & Ax_n \neq T(Ax_n), \\[2pt] 0, & \text{otherwise}, \end{cases}$$
$U$ and $T$ are demicontractive operators with constants $0 \le \kappa < 1$ and $0 \le \tau < 1$, respectively, such that $I - U$ and $I - T$ are demiclosed at zero, $U_\lambda := (1-\lambda)I + \lambda U$ for any $\lambda \in (0, 1-\kappa)$ and $A$ is a bounded linear operator. They proved that the algorithm (8) converges weakly to a solution of the problem (6) when a solution exists.
Algorithm 1: Cui and Wang's algorithm
  Input: Set $\lambda \in (0, 1-\kappa)$, where $0 \le \kappa < 1$. Choose $x_0 \in H$.
1 for $n = 1, 2, \dots$ do
2   Update $x_{n+1}$ via (8).
3 end for
In 2015, Boikanyo [15] extended Cui and Wang's results and proposed the following Algorithm 2 for demicontractive operators $U$ and $T$ with $U_\lambda := (1-\lambda)I + \lambda U$ for any $\lambda \in (0, 1-\kappa)$, which converges strongly to a solution of the problem (6) when a solution exists: for any $u \in H$,
$$x_{n+1} = \alpha_n u + (1-\alpha_n)U_\lambda(x_n - \rho_n A^*(I - T)Ax_n), \quad n \ge 0, \qquad (9)$$
where $\rho_n$ is defined as in (8) and $\{\alpha_n\}$ is a sequence in $[0,1)$ such that
$$\lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Algorithm 2: Boikanyo's algorithm
  Input: Set $\lambda \in (0, 1-\kappa)$, where $0 \le \kappa < 1$, and $\alpha_n \in [0,1)$ such that $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$. Choose $u, x_0 \in H$.
1 for $n = 1, 2, \dots$ do
2   Update $x_{n+1}$ via (9).
3 end for
In 2016, Huimin et al. [16] proposed the following Algorithm 3 for demicontractive operators $U$ and $T$ with $U_\lambda := (1-\lambda)I + \lambda U$ for any $\lambda \in (0, 1-\kappa)$, where $f$ is a contraction operator on $\mathrm{Fix}(U)$, which converges strongly to a solution of the problem (6) when a solution exists:
$$x_{n+1} = \alpha_n f(x_n) + (1-\alpha_n)U_\lambda(x_n - \rho_n A^*(I - T)Ax_n), \quad n \ge 0, \qquad (10)$$
where $\rho_n$ is defined as in (8) and $\{\alpha_n\}$ is a sequence in $[0,1)$ such that
$$\lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Algorithm 3: Algorithm of Huimin et al. [16]
  Input: Set $\lambda \in (0, 1-\kappa)$, where $0 \le \kappa < 1$, and $\alpha_n \in [0,1)$ such that $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$. Choose $x_0 \in H$.
1 for $n = 1, 2, \dots$ do
2   Update $x_{n+1}$ via (10).
3 end for
In this paper, motivated by Boikanyo's algorithm [15] and the algorithm of Huimin et al. [16], we propose the following Algorithm 4 for demicontractive operators $U$ and $T$ with $U_{\lambda_n} := (1-\lambda_n)I + \lambda_n U$ for any $\lambda_n \in (0, 1-\kappa)$:
$$\left\{\begin{aligned} y_n &= \alpha_n f(x_n) + (1-\alpha_n)U_{\lambda_n}(x_n - \rho_n A^*(I - T)Ax_n), \\ x_{n+1} &= (1-\beta_n)y_n + \beta_n f(y_n), \quad n \ge 0, \end{aligned}\right. \qquad (11)$$
where $\rho_n$ is defined as in (8), $U$ and $T$ are demicontractive operators such that $I - U$ and $I - T$ are demiclosed at zero, $f$ is a contraction operator on $\mathrm{Fix}(U)$ and the sequences $\{\alpha_n\}, \{\beta_n\}$ in $[0,1)$ satisfy
$$\lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty, \quad \sum_{n=0}^{\infty}\beta_n < \infty.$$
We prove that the sequence $\{x_n\}$ generated by (11) converges strongly to a solution of the problem (6) when a solution exists. Moreover, $\{x_n\}$ and $\{y_n\}$ converge to the same point because $0 \le \beta_n < 1$ and $\sum_{n=1}^{\infty}\beta_n < \infty$.
Algorithm 4: Our algorithm
  Input: Set $\lambda_n \in (0, 1-\kappa)$, where $0 \le \kappa < 1$, and $\alpha_n, \beta_n \in [0,1)$ such that $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$ and $\sum_{n=0}^{\infty}\beta_n < \infty$. Choose $x_0 \in H$.
1 for $n = 1, 2, \dots$ do
2   Update $y_n$ and $x_{n+1}$ via (11).
3 end for
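To make the scheme concrete, here is a small Python sketch of the iteration (11) (our illustration, with hypothetical parameter choices $\alpha_n = 1/n$, $\beta_n = 1/n^2$, $\lambda_n = (1-\kappa)/2$). It takes $U$, $T$ and the contraction $f$ as callables; the example instantiates them with metric projections, which are quasi-nonexpansive and hence $0$-demicontractive.

```python
import numpy as np

def algorithm4(A, U, T, f, x0, kappa=0.0, tau=0.0, iters=2000):
    """Sketch of iteration (11):
       y_n     = a_n f(x_n) + (1 - a_n) U_{l_n}(x_n - r_n A^T (I - T) A x_n),
       x_{n+1} = (1 - b_n) y_n + b_n f(y_n)."""
    x = x0.astype(float)
    for n in range(1, iters + 1):
        a_n, b_n = 1.0 / n, 1.0 / n ** 2      # lim a_n = 0, sum a_n = inf, sum b_n < inf
        lam = 0.5 * (1.0 - kappa)             # lambda_n in (0, 1 - kappa)
        w = A @ x - T(A @ x)                  # (I - T) A x_n
        g = A.T @ w                           # A^*(I - T) A x_n
        r_n = 0.0 if np.allclose(w, 0) else (1.0 - tau) * (w @ w) / (2.0 * (g @ g))
        z = x - r_n * g
        Uz = (1.0 - lam) * z + lam * U(z)     # relaxed operator U_{lambda_n}
        y = a_n * f(x) + (1.0 - a_n) * Uz
        x = (1.0 - b_n) * y + b_n * f(y)
    return x

# Toy split common fixed point problem: U = projection onto the nonnegative
# orthant, T = projection onto {q}, A = identity, so the solution set is {q}.
q = np.array([1.0, 2.0])
x = algorithm4(np.eye(2),
               U=lambda v: np.maximum(v, 0.0),
               T=lambda w: q,
               f=lambda v: 0.5 * v,           # contraction with eta = 1/2
               x0=np.array([5.0, -3.0]))
# x approximates q = (1, 2)
```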
Remark 1.
In fact, our algorithm is obtained from the algorithm of Huimin et al. by replacing the point $u$ in Boikanyo's algorithm with the viscosity term and a convex combination. The algorithm of Huimin et al. is a special case of our algorithm when $\beta_n = 0$ and $\{\lambda_n\}$ is a constant sequence. The algorithm of Huimin et al. and our algorithm are different because they generate distinct iterates $x_n$; however, they converge strongly to the same solution of the split common fixed point problem.
For example, let
$$y = (1.5, 7)^\top, \quad \epsilon = (0.5, 1)^\top, \quad A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 3 \end{pmatrix},$$
$$\alpha_n = \frac{0.1}{n}, \quad \beta_n = \frac{1}{n^2}, \quad \lambda_n = \frac{1}{2},$$
$$f(x) = \frac{x - (2, 1, 0)^\top}{4} + (2, 1, 0)^\top, \quad t = 10,$$
where $\top$ denotes the transpose. If $x_{\mathrm{Our},100} = (1.5024, 1.4672, 0.8540)^\top$ is generated by our algorithm and $x_{\mathrm{H},100} = (1.5034, 1.0701, 1.1177)^\top$ is generated by the algorithm of Huimin et al., then the two algorithms, the algorithm of Huimin et al. (10) and our algorithm (11), converge strongly to the same solution of the problem (15).

2. Preliminaries

Let $H$ be a real Hilbert space. Let $x_n \rightharpoonup x$ denote that $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ denote that $\{x_n\}$ converges strongly to $x$.
The following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad \forall x, y \in H.$$
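This inequality follows directly from expanding the square; a one-line derivation:

```latex
\|x+y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2
          = \|x\|^2 + 2\langle y, x+y\rangle - \|y\|^2
          \le \|x\|^2 + 2\langle y, x+y\rangle .
```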
Definition 1.
Let $T : H \to H$ be an operator such that $\mathrm{Fix}(T) \neq \emptyset$. Then $T$ is said to be:
1. Nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
2. Contractive if there exists $k \in [0,1)$ such that
$$\|Tx - Ty\| \le k\|x - y\|, \quad \forall x, y \in H;$$
3. Quasi-nonexpansive if
$$\|Tx - x^*\| \le \|x - x^*\|, \quad \forall x \in H,\ x^* \in \mathrm{Fix}(T);$$
4. Directed if
$$\|Tx - x^*\|^2 + \|x - Tx\|^2 \le \|x - x^*\|^2, \quad \forall x \in H,\ x^* \in \mathrm{Fix}(T);$$
5. $\tau$-demicontractive with $\tau \in [0,1)$ if
$$\|Tx - x^*\|^2 \le \|x - x^*\|^2 + \tau\|x - Tx\|^2, \quad \forall x \in H,\ x^* \in \mathrm{Fix}(T).$$
Remark 2.
Easily, we obtain the following conclusions:
1. Every contraction operator is nonexpansive;
2. Every nonexpansive operator is quasi-nonexpansive;
3. Every quasi-nonexpansive operator is a $0$-demicontractive operator;
4. Every directed operator is a $(-1)$-demicontractive operator.
Definition 2.
Assume that $T : H \to H$ is an operator. Then $I - T$ is said to be demiclosed at zero if, for any $\{x_n\}$ in $H$, $x_n \rightharpoonup x$ and $(I - T)x_n \to 0$ imply $Tx = x$.
Remark 3.
Every nonexpansive operator is demiclosed at zero [17].
Definition 3.
Assume that $C$ is a nonempty closed convex subset of $H$. The metric projection $P_C$ from $H$ onto $C$ is defined as follows: for all $x \in H$,
$$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}.$$
Note that the metric projection $P_C$ is nonexpansive [17].
Lemma 1
([18]). Assume that $C$ is a nonempty closed convex subset of $H$ and $P_C$ is the metric projection from $H$ onto $C$. Then, for any $x \in H$,
$$\langle P_C x - x, P_C x - y\rangle \le 0, \quad \forall y \in C.$$
Lemma 2
([19]). Assume that $\{\alpha_n\}$ is a sequence of nonnegative numbers such that
$$\alpha_{n+1} \le (1 - \beta_n)\alpha_n + \gamma_n, \quad n \ge 0,$$
where $\beta_n \in (0,1)$ and $\gamma_n \in \mathbb{R}$ are such that
1. $\sum_{n=1}^{\infty}\beta_n = \infty$;
2. $\limsup_{n\to\infty}\frac{\gamma_n}{\beta_n} \le 0$ or $\sum_{n=1}^{\infty}|\gamma_n| < \infty$.
Then $\lim_{n\to\infty}\alpha_n = 0$.
Lemma 3
([20]). Assume that $U : H \to H$ is a $\tau$-demicontractive operator with $\tau < 1$. Define $U_\lambda := (1-\lambda)I + \lambda U$ for any $\lambda \in (0, 1-\tau)$. Then, for any $x \in H$ and $x^* \in \mathrm{Fix}(U)$,
$$\|U_\lambda x - x^*\|^2 \le \|x - x^*\|^2 - \lambda(1 - \tau - \lambda)\|x - Ux\|^2.$$
Lemma 4
([14]). Assume that $A : H \to H$ is a bounded linear operator and $T : H \to H$ is a $\tau$-demicontractive operator. If $A^{-1}(\mathrm{Fix}(T)) \neq \emptyset$, then:
1. $(I - T)Ax = 0$ if and only if $A^*(I - T)Ax = 0$ for all $x \in H$;
2. in particular, for all $x^* \in A^{-1}(\mathrm{Fix}(T))$,
$$\|x - \rho A^*(I - T)Ax - x^*\|^2 \le \|x - x^*\|^2 - \frac{(1-\tau)^2\|(I - T)Ax\|^4}{4\|A^*(I - T)Ax\|^2},$$
where $x \in H$, $Ax \neq T(Ax)$ and
$$\rho = \frac{(1-\tau)\|(I - T)Ax\|^2}{2\|A^*(I - T)Ax\|^2}.$$

3. Main Results

Theorem 1.
Assume that $H_1$ and $H_2$ are real Hilbert spaces. Assume that $U : H_1 \to H_1$ and $T : H_2 \to H_2$ are a $\kappa$-demicontractive operator and a $\tau$-demicontractive operator with constants $0 \le \kappa < 1$ and $0 \le \tau < 1$, respectively, such that $I - U$ and $I - T$ are demiclosed at zero. Assume that $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$. Assume that $f$ is a contraction operator with constant $\eta$. Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. If $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$ and $\sum_{n=0}^{\infty}\beta_n < \infty$, then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to a point $x^* \in S$, which is the solution $x^* = P_S f(x^*)$ of the following variational inequality:
$$\langle x^* - f(x^*), x^* - z\rangle \le 0, \quad \forall z \in S. \qquad (12)$$
Proof. 
Let $a_n = x_n - \rho_n A^*(I - T)Ax_n$ for each $n \ge 0$ and let $z \in S$. Since $\beta_n \in [0,1)$ and $\sum_{n=0}^{\infty}\beta_n < \infty$, we have $\lim_{n\to\infty}\beta_n = 0$. The proof consists of the following four steps.
Step 1. Show that $\{x_n\}$ is bounded.
Case $\rho_n = 0$: Then $a_n = x_n$. By Lemma 3, we get
$$\|U_{\lambda_n} a_n - z\|^2 = \|U_{\lambda_n} x_n - z\|^2 \le \|x_n - z\|^2 - \lambda_n(1 - \kappa - \lambda_n)\|x_n - Ux_n\|^2 \le \|x_n - z\|^2.$$
Case $\rho_n \neq 0$: By Lemmas 3 and 4, we get
$$\begin{aligned}
\|U_{\lambda_n} a_n - z\|^2 &\le \|a_n - z\|^2 - \lambda_n(1 - \kappa - \lambda_n)\|a_n - Ua_n\|^2 \\
&= \|x_n - \rho_n A^*(I - T)Ax_n - z\|^2 - \lambda_n(1 - \kappa - \lambda_n)\|a_n - Ua_n\|^2 \\
&\le \|x_n - z\|^2 - \frac{(1-\tau)^2\|(I - T)Ax_n\|^4}{4\|A^*(I - T)Ax_n\|^2} - \lambda_n(1 - \kappa - \lambda_n)\|a_n - Ua_n\|^2 \\
&\le \|x_n - z\|^2.
\end{aligned}$$
Thus $\|U_{\lambda_n} a_n - z\| \le \|x_n - z\|$. Observe that
$$\begin{aligned}
\|x_{n+1} - z\| &= \|(1-\beta_n)y_n + \beta_n f(y_n) - z\| \\
&\le (1-\beta_n)\|y_n - z\| + \beta_n\|f(y_n) - z\| \\
&\le (1-\beta_n)\|y_n - z\| + \beta_n\|f(y_n) - f(z)\| + \beta_n\|f(z) - z\| \\
&\le \|y_n - z\| + \beta_n\|f(z) - z\| \\
&= \|\alpha_n f(x_n) + (1-\alpha_n)U_{\lambda_n}a_n - z\| + \beta_n\|f(z) - z\| \\
&\le \alpha_n\|f(x_n) - f(z)\| + \alpha_n\|f(z) - z\| + (1-\alpha_n)\|U_{\lambda_n}a_n - z\| + \beta_n\|f(z) - z\| \\
&\le \eta\alpha_n\|x_n - z\| + \alpha_n\|f(z) - z\| + (1-\alpha_n)\|x_n - z\| + \beta_n\|f(z) - z\| \\
&= (1 - (1-\eta)\alpha_n)\|x_n - z\| + \alpha_n\|f(z) - z\| + \beta_n\|f(z) - z\| \\
&\le \max\Big\{\|x_n - z\|, \tfrac{1}{1-\eta}\|f(z) - z\|\Big\} + \beta_n\|f(z) - z\| \\
&\le \cdots \le \max\Big\{\|x_0 - z\|, \tfrac{1}{1-\eta}\|f(z) - z\|\Big\} + \|f(z) - z\|\sum_{n=0}^{\infty}\beta_n.
\end{aligned}$$
Thus $\{x_n\}$ is bounded. Moreover, $\{f(x_n)\}$, $\{y_n\}$ and $\{f(y_n)\}$ are also bounded.
Step 2. Show that, if a subsequence $\{x_{n_k+1}\}$ of $\{x_n\}$ converges weakly to $q$, then the subsequence $\{y_{n_k}\}$ of $\{y_n\}$ converges weakly to $q$. Indeed, by the polarization identity,
$$\langle x_{n_k+1} - y_{n_k}, q\rangle = \beta_{n_k}\langle f(y_{n_k}) - y_{n_k}, q\rangle = \beta_{n_k}\,\frac{\|f(y_{n_k}) - y_{n_k} + q\|^2 - \|f(y_{n_k}) - y_{n_k} - q\|^2}{4}.$$
Since $\{y_n\}$ and $\{f(y_n)\}$ are bounded and $\beta_{n_k} \to 0$, we have $x_{n_k+1} - y_{n_k} \to 0$, and hence $\{y_{n_k}\}$ converges weakly to $q$.
Step 3. Show that the following inequality holds:
$$\|x_{n+1} - x^*\|^2 \le (1-\alpha_n)\|x_n - x^*\|^2 + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2.$$
Case $\rho_n = 0$: By Lemma 3, we get
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|(1-\beta_n)y_n + \beta_n f(y_n) - x^*\|^2 \\
&\le \big((1-\beta_n)\|y_n - x^*\| + \beta_n\|f(y_n) - f(x^*)\| + \beta_n\|f(x^*) - x^*\|\big)^2 \\
&\le \big(\|y_n - x^*\| + \beta_n\|f(x^*) - x^*\|\big)^2 \\
&= \|y_n - x^*\|^2 + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&= \|\alpha_n(f(x_n) - x^*) + (1-\alpha_n)(U_{\lambda_n}x_n - x^*)\|^2 + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&\le (1-\alpha_n)^2\|U_{\lambda_n}x_n - x^*\|^2 + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&\le (1-\alpha_n)^2\big(\|x_n - x^*\|^2 - \lambda_n(1-\kappa-\lambda_n)\|x_n - Ux_n\|^2\big) + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle \\
&\quad + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2.
\end{aligned}$$
Case $\rho_n \neq 0$: By Lemmas 3 and 4, we get
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \|y_n - x^*\|^2 + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&= \|\alpha_n(f(x_n) - x^*) + (1-\alpha_n)(U_{\lambda_n}a_n - x^*)\|^2 + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&\le (1-\alpha_n)^2\|U_{\lambda_n}a_n - x^*\|^2 + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&\le (1-\alpha_n)^2\big(\|a_n - x^*\|^2 - \lambda_n(1-\kappa-\lambda_n)\|a_n - Ua_n\|^2\big) + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle \\
&\quad + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2 \\
&\le (1-\alpha_n)^2\Big(\|x_n - x^*\|^2 - \frac{(1-\tau)^2\|(I-T)Ax_n\|^4}{4\|A^*(I-T)Ax_n\|^2} - \lambda_n(1-\kappa-\lambda_n)\|a_n - Ua_n\|^2\Big) \\
&\quad + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2.
\end{aligned}$$
Therefore, we have
$$\|x_{n+1} - x^*\|^2 \le (1-\alpha_n)\|x_n - x^*\|^2 + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + 2\beta_n\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n^2\|f(x^*) - x^*\|^2.$$
Step 4. Show that $x_n \to x^*$. Let $s_n = \|x_n - x^*\|$ for each $n \ge 0$. In this step, we consider two cases.
Case 1. Assume that there is $n_0 \in \mathbb{N}$ such that $\{s_n\}$ is decreasing for all $n \ge n_0$. Since $\{s_n\}$ is monotonic and bounded, $\{s_n\}$ is convergent. First, we show that
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, y_n - x^*\rangle \le 0.$$
There are two parts to show this.
Part 1. Let $\rho_n = 0$. Since $\{f(x_n)\}$ and $\{y_n\}$ are bounded, by Step 3 we get
$$\lambda_n(1-\kappa-\lambda_n)\|x_n - Ux_n\|^2 \le s_n^2 - s_{n+1}^2 + \alpha_n M + \beta_n N,$$
where
$$M = \sup_{n\in\mathbb{N}}\{2\langle f(x_n) - x^*, y_n - x^*\rangle\}$$
and
$$N = \sup_{n\in\mathbb{N}}\{2\|y_n - x^*\|\,\|f(x^*) - x^*\| + \beta_n\|f(x^*) - x^*\|^2\}.$$
Since $\{s_n\}$ is convergent and $\lim_{n\to\infty}\alpha_n = \lim_{n\to\infty}\beta_n = 0$, we have $\lim_{n\to\infty}\|x_n - Ux_n\| = 0$. Since $\rho_n = 0$, we have
$$\lim_{n\to\infty}\|(I - T)Ax_n\| = 0.$$
By the boundedness of $\{x_n\}$, there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup q$ and
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, x_n - x^*\rangle = \lim_{k\to\infty}\langle f(x^*) - x^*, x_{n_k} - x^*\rangle = \langle f(x^*) - x^*, q - x^*\rangle.$$
Since $\lim_{n\to\infty}\|x_n - Ux_n\| = 0$ and $I - U$ is demiclosed at zero, we have $q \in \mathrm{Fix}(U)$. Since $A$ is a bounded linear operator, $A$ is continuous, so $x_{n_k} \rightharpoonup q$ implies $Ax_{n_k} \rightharpoonup Aq$. From $\lim_{n\to\infty}\|(I - T)Ax_n\| = 0$ and the demiclosedness of $I - T$ at zero, it follows that $Aq \in \mathrm{Fix}(T)$ and so $q \in S$. By Lemma 1 and Step 2, it follows that
$$0 \ge \langle f(x^*) - x^*, q - x^*\rangle = \lim_{k\to\infty}\langle f(x^*) - x^*, x_{n_k} - x^*\rangle = \limsup_{n\to\infty}\langle f(x^*) - x^*, x_n - x^*\rangle = \limsup_{n\to\infty}\langle f(x^*) - x^*, y_{n-1} - x^*\rangle.$$
Part 2. Let $\rho_n \neq 0$. Since $\{f(x_n)\}$ and $\{y_n\}$ are bounded, by Step 3 we get
$$\lambda_n(1-\kappa-\lambda_n)\|a_n - Ua_n\|^2 + \frac{(1-\tau)^2\|(I-T)Ax_n\|^4}{4\|A^*(I-T)Ax_n\|^2} \le s_n^2 - s_{n+1}^2 + \alpha_n M + \beta_n N,$$
where $M$ and $N$ are as in Part 1. Thus we obtain
$$0 \le \lambda_n(1-\kappa-\lambda_n)\|a_n - Ua_n\|^2 \le s_n^2 - s_{n+1}^2 + \alpha_n M + \beta_n N$$
and
$$0 \le \frac{(1-\tau)^2\|(I-T)Ax_n\|^4}{4\|A^*(I-T)Ax_n\|^2} \le s_n^2 - s_{n+1}^2 + \alpha_n M + \beta_n N.$$
Since $\{s_n\}$ is convergent and $\lim_{n\to\infty}\alpha_n = \lim_{n\to\infty}\beta_n = 0$, we obtain
$$\lim_{n\to\infty}\|a_n - Ua_n\| = \lim_{n\to\infty}\frac{\|(I-T)Ax_n\|^4}{4\|A^*(I-T)Ax_n\|^2} = 0.$$
Moreover, we get $\lim_{n\to\infty}\|(I-T)Ax_n\| = 0$. Indeed, since $\|A^*(I-T)Ax_n\| \le \|A\|\,\|(I-T)Ax_n\|$, we have
$$\|(I-T)Ax_n\|^2 = \frac{\|A^*(I-T)Ax_n\|^2}{\|A^*(I-T)Ax_n\|^2}\,\|(I-T)Ax_n\|^2 \le \|A\|^2\,\frac{\|(I-T)Ax_n\|^4}{\|A^*(I-T)Ax_n\|^2}.$$
Thus we have
$$\lim_{n\to\infty}\|x_n - a_n\| = \lim_{n\to\infty}\frac{(1-\tau)\|(I-T)Ax_n\|^2}{2\|A^*(I-T)Ax_n\|} = 0.$$
By the boundedness of $\{x_n\}$, there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup q$. Since $\lim_{n\to\infty}\|x_n - a_n\| = 0$ and $x_{n_k} \rightharpoonup q$, the subsequence $\{a_{n_k}\}$ of $\{a_n\}$ also satisfies $a_{n_k} \rightharpoonup q$ and
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, x_n - x^*\rangle = \lim_{k\to\infty}\langle f(x^*) - x^*, x_{n_k} - x^*\rangle = \langle f(x^*) - x^*, q - x^*\rangle.$$
Since $\lim_{n\to\infty}\|a_n - Ua_n\| = 0$, by the demiclosedness of $I - U$ at zero, we have $q \in \mathrm{Fix}(U)$. Since $A$ is a bounded linear operator, $A$ is continuous, so $x_{n_k} \rightharpoonup q$ implies $Ax_{n_k} \rightharpoonup Aq$. From $\lim_{n\to\infty}\|(I-T)Ax_n\| = 0$ and the demiclosedness of $I - T$ at zero, we have $Aq \in \mathrm{Fix}(T)$ and so $q \in S$. By Lemma 1 and Step 2, it follows that
$$0 \ge \langle f(x^*) - x^*, q - x^*\rangle = \lim_{k\to\infty}\langle f(x^*) - x^*, x_{n_k} - x^*\rangle = \limsup_{n\to\infty}\langle f(x^*) - x^*, x_n - x^*\rangle = \limsup_{n\to\infty}\langle f(x^*) - x^*, y_{n-1} - x^*\rangle.$$
Second, we show that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. There are two parts.
Part 1. If $\rho_n = 0$, then we get
$$\begin{aligned}
\|x_{n+1} - x_n\| &= \|(1-\beta_n)y_n + \beta_n f(y_n) - x_n\| \\
&\le (1-\beta_n)\|y_n - x_n\| + \beta_n\|f(y_n) - x_n\| \\
&= (1-\beta_n)\|\alpha_n f(x_n) + (1-\alpha_n)U_{\lambda_n}x_n - x_n\| + \beta_n\|f(y_n) - x_n\| \\
&\le \alpha_n\|f(x_n) - x_n\| + (1-\alpha_n)\|x_n - U_{\lambda_n}x_n\| + \beta_n\|f(y_n) - x_n\| \\
&\le \alpha_n\|f(x_n) - x_n\| + \lambda_n\|x_n - Ux_n\| + \beta_n\|f(y_n) - x_n\|.
\end{aligned}$$
Part 2. If $\rho_n \neq 0$, then we get
$$\begin{aligned}
\|x_{n+1} - x_n\| &\le (1-\beta_n)\|y_n - x_n\| + \beta_n\|f(y_n) - x_n\| \\
&= (1-\beta_n)\|\alpha_n f(x_n) + (1-\alpha_n)U_{\lambda_n}a_n - x_n\| + \beta_n\|f(y_n) - x_n\| \\
&\le \alpha_n\|f(x_n) - x_n\| + (1-\alpha_n)\|x_n - U_{\lambda_n}a_n\| + \beta_n\|f(y_n) - x_n\| \\
&\le \alpha_n\|f(x_n) - x_n\| + \|x_n - a_n\| + \|a_n - U_{\lambda_n}a_n\| + \beta_n\|f(y_n) - x_n\| \\
&= \alpha_n\|f(x_n) - x_n\| + \|x_n - a_n\| + \lambda_n\|a_n - Ua_n\| + \beta_n\|f(y_n) - x_n\|.
\end{aligned}$$
In both parts, every term on the right-hand side tends to zero, so $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.
Third, we show that $x_n \to x^*$. Combining the above with $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, we get the inequality
$$\limsup_{n\to\infty}\langle f(x^*) - x^*, y_n - x^*\rangle \le 0.$$
Now, by Step 3, we have
$$\|x_{n+1} - x^*\|^2 \le (1-\alpha_n)\|x_n - x^*\|^2 + 2\alpha_n\langle f(x_n) - x^*, y_n - x^*\rangle + \beta_n\sup_{k\in\mathbb{N}}\{2\|y_k - x^*\|\,\|f(x^*) - x^*\| + \beta_k\|f(x^*) - x^*\|^2\}.$$
By Lemma 2, we have $\lim_{n\to\infty}s_n = \lim_{n\to\infty}\|x_n - x^*\| = 0$ and so $x_n \to x^*$.
Case 2. Assume that there is no $n_0 \in \mathbb{N}$ such that $\{s_n\}$ is decreasing for all $n \ge n_0$. Then there is a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $s_{n_i} < s_{n_i+1}$ for all $i \in \mathbb{N}$.
First, we show that
$$\limsup_{i\to\infty}\langle f(x^*) - x^*, y_{n_i} - x^*\rangle \le 0.$$
There are two parts.
Part 1. Let $\rho_{n_i} = 0$. Since $\{f(x_{n_i})\}$ and $\{y_{n_i}\}$ are bounded, by Step 3 we get
$$\lambda_{n_i}(1-\kappa-\lambda_{n_i})\|x_{n_i} - Ux_{n_i}\|^2 \le s_{n_i}^2 - s_{n_i+1}^2 + \alpha_{n_i}M + \beta_{n_i}N \le \alpha_{n_i}M + \beta_{n_i}N,$$
where
$$M = \sup_{i\in\mathbb{N}}\{2\langle f(x_{n_i}) - x^*, y_{n_i} - x^*\rangle\}$$
and
$$N = \sup_{i\in\mathbb{N}}\{2\|y_{n_i} - x^*\|\,\|f(x^*) - x^*\| + \beta_{n_i}\|f(x^*) - x^*\|^2\}.$$
Since $\lim_{i\to\infty}\alpha_{n_i} = \lim_{i\to\infty}\beta_{n_i} = 0$, we have
$$\lim_{i\to\infty}\|x_{n_i} - Ux_{n_i}\| = 0.$$
Since $\rho_{n_i} = 0$, we have
$$\lim_{i\to\infty}\|(I - T)Ax_{n_i}\| = 0.$$
By the boundedness of $\{x_{n_i}\}$, there is a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ such that $x_{n_{i_j}} \rightharpoonup q$ and
$$\limsup_{i\to\infty}\langle f(x^*) - x^*, x_{n_i} - x^*\rangle = \lim_{j\to\infty}\langle f(x^*) - x^*, x_{n_{i_j}} - x^*\rangle = \langle f(x^*) - x^*, q - x^*\rangle.$$
Since $\lim_{j\to\infty}\|x_{n_{i_j}} - Ux_{n_{i_j}}\| = 0$ and $I - U$ is demiclosed at zero, we have $q \in \mathrm{Fix}(U)$. Since $A$ is a bounded linear operator, $A$ is continuous, so $x_{n_{i_j}} \rightharpoonup q$ implies $Ax_{n_{i_j}} \rightharpoonup Aq$. From $\lim_{j\to\infty}\|(I - T)Ax_{n_{i_j}}\| = 0$ and the demiclosedness of $I - T$ at zero, we have $Aq \in \mathrm{Fix}(T)$ and so $q \in S$. By Lemma 1 and Step 2, it follows that
$$0 \ge \langle f(x^*) - x^*, q - x^*\rangle = \lim_{j\to\infty}\langle f(x^*) - x^*, x_{n_{i_j}} - x^*\rangle = \limsup_{i\to\infty}\langle f(x^*) - x^*, x_{n_i} - x^*\rangle = \limsup_{i\to\infty}\langle f(x^*) - x^*, y_{n_i-1} - x^*\rangle.$$
Part 2. Let $\rho_{n_i} \neq 0$. Since $\{f(x_{n_i})\}$ and $\{y_{n_i}\}$ are bounded, by Step 3 we get
$$\lambda_{n_i}(1-\kappa-\lambda_{n_i})\|a_{n_i} - Ua_{n_i}\|^2 + \frac{(1-\tau)^2\|(I-T)Ax_{n_i}\|^4}{4\|A^*(I-T)Ax_{n_i}\|^2} \le s_{n_i}^2 - s_{n_i+1}^2 + \alpha_{n_i}M + \beta_{n_i}N \le \alpha_{n_i}M + \beta_{n_i}N,$$
where $M$ and $N$ are as in Part 1. Then we obtain
$$0 \le \lambda_{n_i}(1-\kappa-\lambda_{n_i})\|a_{n_i} - Ua_{n_i}\|^2 \le \alpha_{n_i}M + \beta_{n_i}N$$
and
$$0 \le \frac{(1-\tau)^2\|(I-T)Ax_{n_i}\|^4}{4\|A^*(I-T)Ax_{n_i}\|^2} \le \alpha_{n_i}M + \beta_{n_i}N.$$
Since $\lim_{i\to\infty}\alpha_{n_i} = \lim_{i\to\infty}\beta_{n_i} = 0$, we obtain
$$\lim_{i\to\infty}\|a_{n_i} - Ua_{n_i}\| = \lim_{i\to\infty}\frac{\|(I-T)Ax_{n_i}\|^4}{4\|A^*(I-T)Ax_{n_i}\|^2} = 0.$$
Moreover, we get $\lim_{i\to\infty}\|(I-T)Ax_{n_i}\| = 0$. Indeed, since $\|A^*(I-T)Ax_{n_i}\| \le \|A\|\,\|(I-T)Ax_{n_i}\|$, we have
$$\|(I-T)Ax_{n_i}\|^2 = \frac{\|A^*(I-T)Ax_{n_i}\|^2}{\|A^*(I-T)Ax_{n_i}\|^2}\,\|(I-T)Ax_{n_i}\|^2 \le \|A\|^2\,\frac{\|(I-T)Ax_{n_i}\|^4}{\|A^*(I-T)Ax_{n_i}\|^2}.$$
Thus we have
$$\lim_{i\to\infty}\|x_{n_i} - a_{n_i}\| = \lim_{i\to\infty}\frac{(1-\tau)\|(I-T)Ax_{n_i}\|^2}{2\|A^*(I-T)Ax_{n_i}\|} = 0.$$
By the boundedness of $\{x_{n_i}\}$, there is a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ such that $x_{n_{i_j}} \rightharpoonup q$. Since $\lim_{i\to\infty}\|x_{n_i} - a_{n_i}\| = 0$ and $x_{n_{i_j}} \rightharpoonup q$, we also have $a_{n_{i_j}} \rightharpoonup q$ and
$$\limsup_{i\to\infty}\langle f(x^*) - x^*, x_{n_i} - x^*\rangle = \lim_{j\to\infty}\langle f(x^*) - x^*, x_{n_{i_j}} - x^*\rangle = \langle f(x^*) - x^*, q - x^*\rangle.$$
Since $\lim_{i\to\infty}\|a_{n_i} - Ua_{n_i}\| = 0$, by the demiclosedness of $I - U$ at zero, we have $q \in \mathrm{Fix}(U)$. Since $A$ is a bounded linear operator, $A$ is continuous, so $x_{n_{i_j}} \rightharpoonup q$ implies $Ax_{n_{i_j}} \rightharpoonup Aq$. From $\lim_{i\to\infty}\|(I-T)Ax_{n_i}\| = 0$ and the demiclosedness of $I - T$ at zero, we have $Aq \in \mathrm{Fix}(T)$ and so $q \in S$. By Lemma 1 and Step 2, it follows that
$$0 \ge \langle f(x^*) - x^*, q - x^*\rangle = \lim_{j\to\infty}\langle f(x^*) - x^*, x_{n_{i_j}} - x^*\rangle = \limsup_{i\to\infty}\langle f(x^*) - x^*, x_{n_i} - x^*\rangle = \limsup_{i\to\infty}\langle f(x^*) - x^*, y_{n_i-1} - x^*\rangle.$$
Second, we show that
$$\lim_{i\to\infty}\|x_{n_i+1} - x_{n_i}\| = 0.$$
There are two parts.
Part 1. If $\rho_{n_i} = 0$, then we compute
$$\begin{aligned}
\|x_{n_i+1} - x_{n_i}\| &\le (1-\beta_{n_i})\|y_{n_i} - x_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&= (1-\beta_{n_i})\|\alpha_{n_i} f(x_{n_i}) + (1-\alpha_{n_i})U_{\lambda_{n_i}}x_{n_i} - x_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&\le \alpha_{n_i}\|f(x_{n_i}) - x_{n_i}\| + (1-\alpha_{n_i})\|x_{n_i} - U_{\lambda_{n_i}}x_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&\le \alpha_{n_i}\|f(x_{n_i}) - x_{n_i}\| + \lambda_{n_i}\|x_{n_i} - Ux_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\|.
\end{aligned}$$
Part 2. If $\rho_{n_i} \neq 0$, then we compute
$$\begin{aligned}
\|x_{n_i+1} - x_{n_i}\| &\le (1-\beta_{n_i})\|y_{n_i} - x_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&= (1-\beta_{n_i})\|\alpha_{n_i} f(x_{n_i}) + (1-\alpha_{n_i})U_{\lambda_{n_i}}a_{n_i} - x_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&\le \alpha_{n_i}\|f(x_{n_i}) - x_{n_i}\| + (1-\alpha_{n_i})\|x_{n_i} - U_{\lambda_{n_i}}a_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\| \\
&\le \alpha_{n_i}\|f(x_{n_i}) - x_{n_i}\| + \|x_{n_i} - a_{n_i}\| + \lambda_{n_i}\|a_{n_i} - Ua_{n_i}\| + \beta_{n_i}\|f(y_{n_i}) - x_{n_i}\|.
\end{aligned}$$
Therefore, we have
$$\lim_{i\to\infty}\|x_{n_i+1} - x_{n_i}\| = 0.$$
Third, we show that $x_n \to x^*$. From the inequality $s_{n_i} < s_{n_i+1}$ and the above, we get
$$\limsup_{i\to\infty}\langle f(x^*) - x^*, y_{n_i} - x^*\rangle \le 0.$$
Observe that Step 3 can be rearranged as
$$\alpha_{n_i}s_{n_i+1}^2 + (1-\alpha_{n_i})(s_{n_i+1}^2 - s_{n_i}^2) \le 2\alpha_{n_i}\langle f(x_{n_i}) - x^*, y_{n_i} - x^*\rangle + \beta_{n_i}\sup_{k\in\mathbb{N}}\{2\|y_k - x^*\|\,\|f(x^*) - x^*\| + \beta_k\|f(x^*) - x^*\|^2\}.$$
Since $s_{n_i+1}^2 - s_{n_i}^2 > 0$, we have
$$0 \le s_{n_i+1}^2 \le 2\langle f(x_{n_i}) - x^*, y_{n_i} - x^*\rangle + \beta_{n_i}\sup_{k\in\mathbb{N}}\{2\|y_k - x^*\|\,\|f(x^*) - x^*\| + \beta_k\|f(x^*) - x^*\|^2\}.$$
Therefore, since $\{y_n\}$ is bounded and $\lim_{n\to\infty}\beta_n = 0$, we obtain $\lim_{n\to\infty}s_n = \lim_{n\to\infty}\|x_n - x^*\| = 0$, that is, $x_n \to x^*$. This completes the proof. ☐

4. Special Cases

We consider some special cases of Theorem 1 based on the relations among directed operators, $\tau$-demicontractive operators and quasi-nonexpansive operators; see Figure 1 and, for some details, Remark 2. The following results follow easily from Theorem 1.
Case 1. Assume that $U : H \to H$ is a quasi-nonexpansive operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a quasi-nonexpansive operator such that $I - T$ is demiclosed at zero.
Corollary 1.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 2. Assume that $U : H \to H$ is a quasi-nonexpansive operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a directed operator such that $I - T$ is demiclosed at zero.
Corollary 2.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 3. Assume that $U : H \to H$ is a directed operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a quasi-nonexpansive operator such that $I - T$ is demiclosed at zero.
Corollary 3.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 4. Assume that $U : H \to H$ is a quasi-nonexpansive operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a $\tau$-demicontractive operator such that $I - T$ is demiclosed at zero.
Corollary 4.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 5. Assume that $U : H \to H$ is a $\tau$-demicontractive operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a quasi-nonexpansive operator such that $I - T$ is demiclosed at zero.
Corollary 5.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 6. Assume that $U : H \to H$ is a directed operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a directed operator such that $I - T$ is demiclosed at zero.
Corollary 6.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 7. Assume that $U : H \to H$ is a directed operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a $\tau$-demicontractive operator such that $I - T$ is demiclosed at zero.
Corollary 7.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).
Case 8. Assume that $U : H \to H$ is a $\tau$-demicontractive operator such that $I - U$ is demiclosed at zero and $T : K \to K$ is a directed operator such that $I - T$ is demiclosed at zero.
Corollary 8.
Assume that $S$ is the set of all solutions of the problem (6) and $S \neq \emptyset$. Suppose that
$$\sum_{n=0}^{\infty}\beta_n < \infty, \quad \lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty.$$
Then the sequence $\{x_n\}$ generated by the algorithm (11) converges strongly to $x^* \in S$ and, also, $x^* = P_S f(x^*)$ is a solution of the variational inequality (12).

5. Application to Signal Processing

For most of the content in this section, we follow Cui and Ceng [21]. We consider some applications of our algorithm to inverse problems arising in signal processing. For example, we consider the following equation:
$$y = Ax + \epsilon, \qquad (13)$$
where $x \in \mathbb{R}^N$ is the signal to be recovered, $y \in \mathbb{R}^k$ is the vector of noisy observations and $A : \mathbb{R}^N \to \mathbb{R}^k$ is a bounded linear observation operator; it models a process with loss of information. For finding solutions of the linear inverse problem (13), one successful model is the convex unconstrained minimization problem
$$\min_{x\in\mathbb{R}^N}\ \frac{1}{2}\|y - Ax\|^2 + \upsilon\|x\|_1, \qquad (14)$$
where $\upsilon > 0$ and $\|\cdot\|_1$ is the $\ell_1$ norm. It is well known that the problem (14) is equivalent to the constrained least squares problem
$$\min_{x\in\mathbb{R}^N}\ \frac{1}{2}\|y - Ax\|^2 \quad \text{subject to} \quad x \in C, \qquad (15)$$
where $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$. The problem (15) is a particular case of the problem (1) with $Q = \{y\}$, so we can solve it by the proposed algorithm. In this case, $U = P_C$ is the projection onto the closed $\ell_1$-ball in $\mathbb{R}^N$ and $T = P_Q$; see [22,23]. Denote $P_C^{\lambda_n} := (1-\lambda_n)I + \lambda_n P_C$ for each $n \ge 1$, where $\lambda_n \in (0,1)$. Then we have the following algorithm:
$$\left\{\begin{aligned} y_n &= \alpha_n f(x_n) + (1-\alpha_n)P_C^{\lambda_n}(x_n - \rho_n A^*(I - P_Q)Ax_n), \\ x_{n+1} &= (1-\beta_n)y_n + \beta_n f(y_n), \quad n \ge 0, \end{aligned}\right. \qquad (16)$$
where
$$\rho_n = \begin{cases} \dfrac{(1-\tau)\|Ax_n - y\|^2}{2\|A^*(Ax_n - y)\|^2}, & Ax_n \neq y, \\[2pt] 0, & \text{otherwise}, \end{cases}$$
$f$ is a contraction operator on $C$ and the sequences $\{\alpha_n\}, \{\beta_n\}$ in $[0,1)$ satisfy
$$\lim_{n\to\infty}\alpha_n = 0, \quad \sum_{n=0}^{\infty}\alpha_n = \infty, \quad \sum_{n=0}^{\infty}\beta_n < \infty.$$
Theorem 2.
The sequence $\{x_n\}$ generated by the algorithm (16) converges strongly to a solution $x^*$ of the problem (15).
Example 1.
Let $A$ be a random $k \times N$ matrix whose entries lie in $[0,1]$. Let $y = Ax$ with $\|x\|_1 \le t$, and set up the problem (15). We choose $\lambda_n = \frac{1}{2}$, $\alpha_n = \frac{1}{n}$, $\beta_n = \frac{1}{n^2}$, $u = (1, \dots, 1)^\top$, $f(x) = (x - p)/4 + p$ and a random initial point $x_1$ such that $\|p\|_1, \|x_1\|_1 \le t$. Thus $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$. See Figure 2 and Figure 3.
Remark 4.
Figure 2, Figure 3, Figure 4 and Figure 5 show that the sequence $\{\beta_n\}$ improves the convergence profile of [14,15]: our algorithm (Algorithm 5) converges faster than Cui and Wang's algorithm and Boikanyo's algorithm. Moreover, we compared our algorithm with the forward-backward splitting algorithm [24] and the fast iterative shrinkage-thresholding algorithm (FISTA) [25]. Sometimes our algorithm converges faster than these algorithms (Figure 4 and Figure 5), but sometimes it converges slower (Figure 2 and Figure 3); this depends on the control conditions. This experiment is an example of the convergence behaviour of these algorithms.
Algorithm 5: A General Viscosity Algorithm (Our Algorithm)
  Input: Set $\lambda_n \in (0, 1)$ and $\alpha_n, \beta_n \in [0, 1)$ such that $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\sum_{n=0}^{\infty} \beta_n < \infty$. Choose $x_0 \in H$.
1 for $n = 1, 2, \dots$ do
2   if $A x_n \ne y$ then
3     $\rho_n = \dfrac{(1 - \tau)\|A x_n - y\|^2}{2\|A^*(A x_n - y)\|^2}$
4   else
5     $\rho_n = 0$
6   end
7   $y_n = \alpha_n f(x_n) + (1 - \alpha_n) P_C^{\lambda_n}\left( x_n - \rho_n A^*(A x_n - y) \right)$
8   $x_{n+1} = (1 - \beta_n) y_n + \beta_n f(y_n)$
9 end for
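A self-contained NumPy sketch of Algorithm 5 applied to the problem (15) is given below. The $\ell_1$-ball projection follows [22]; the step sizes and the contraction $f$ mirror Example 1, while the helper names, the fixed iteration budget and the anchor $p$ are our illustrative choices:

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection onto {x : ||x||_1 <= t} (method of [22])."""
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - t)[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def algorithm5(A, y, t, p, tau=0.5, iters=2000):
    """Viscosity iteration of Algorithm 5 for problem (15), where Q = {y}."""
    x = np.zeros(A.shape[1])
    f = lambda z: (z - p) / 4 + p                 # contraction from Example 1
    for n in range(1, iters + 1):
        r = A @ x - y                             # A x_n - y
        g = A.T @ r                               # A*(A x_n - y)
        rho_n = 0.0
        if np.linalg.norm(r) > 0:                 # lines 2-6: self-adaptive step
            rho_n = (1 - tau) * np.linalg.norm(r) ** 2 / (2 * np.linalg.norm(g) ** 2)
        lam, alpha, beta = 0.5, 1.0 / n, 1.0 / n ** 2
        z = x - rho_n * g
        pz = (1 - lam) * z + lam * project_l1_ball(z, t)   # P_C^{lambda_n}
        y_n = alpha * f(x) + (1 - alpha) * pz              # line 7
        x = (1 - beta) * y_n + beta * f(y_n)               # line 8
    return x
```

For example, with $A = I$ and a feasible $y$, the iterates approach the solution $x^* = y$ of (15).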

6. Conclusions

First, we proposed a new algorithm for demicontractive operators and proved that the sequence generated by it converges strongly to a solution of the problem (6). Moreover, our algorithm does not require computing the norm of the bounded linear operator. Next, we obtained corresponding results for several classes of operators, such as directed operators, quasi-nonexpansive operators, nonexpansive operators and contraction operators.

Author Contributions

All four authors contributed equally to this work. All authors read and approved the final manuscript. P.K. conceived and designed the experiments. W.J. performed the experiments. W.J. and Y.J.C. analyzed the data. K.S. and W.J. wrote the paper.

Funding

Petchra Pra Jom Klao Ph.D. Research Scholarship (Grant No. 10/2560), TRF Research Scholar Award (Grant No. RSA6080047) and King Mongkut’s University of Technology North Bangkok (Grant No. KKMUTNB-62-KNOW-40).

Acknowledgments

The first author would like to thank the Petchra Pra Jom Klao Ph.D. Research Scholarship and King Mongkut’s University of Technology Thonburi (KMUTT) for financial support. The authors acknowledge the financial support provided by King Mongkut’s University of Technology Thonburi through the “KMUTT 55th Anniversary Commemorative Fund”. Poom Kumam was partially supported by the Thailand Research Fund (TRF) and King Mongkut’s University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047). Moreover, this research was funded by King Mongkut’s University of Technology North Bangkok, Contract No. KKMUTNB-62-KNOW-40.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
  3. Padcharoen, A.; Kumam, P.; Cho, Y.J. Split common fixed point problems for demicontractive operators. Numer. Algorithms 2018.
  4. Ansari, Q.H.; Rehan, A.; Yao, J.C. Split feasibility and fixed point problems for asymptotically k-strict pseudo-contractive mappings in intermediate sense. Fixed Point Theory 2017, 18, 57–68.
  5. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  6. Stark, H. (Ed.) Image Recovery: Theory and Application; Academic Press, Inc.: Orlando, FL, USA, 1987; pp. 1–543.
  7. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75, 2116–2125.
  8. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  9. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103.
  10. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  11. Moudafi, A. A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 2011, 74, 4083–4087.
  12. Wang, F.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2011, 74, 4105–4111.
  13. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007.
  14. Cui, H.; Wang, F. Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 78.
  15. Boikanyo, O.A. A strongly convergent algorithm for the split common fixed point problem. Appl. Math. Comput. 2015, 265, 844–853.
  16. He, H.; Liu, S.; Chen, R.; Wang, X. Strong convergence results for the split common fixed point problem. AIP Conf. Proc. 2016, 1750, 050016.
  17. Goebel, K.; Kirk, W. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990; Volume 28.
  18. Takahashi, W. Nonlinear Functional Analysis. Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000.
  19. Xu, H. An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116, 659–678.
  20. Mainge, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  21. Cui, H.; Ceng, L. Iterative solutions of the split common fixed point problem for strictly pseudo-contractive mappings. J. Fixed Point Theory Appl. 2018, 20, 92.
  22. Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; Chandra, T. Efficient projections onto the $\ell_1$-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; ACM: New York, NY, USA, 2008; pp. 272–279.
  23. Wang, F. A new iterative method for the split common fixed point problem in Hilbert spaces. Optimization 2017, 66, 407–415.
  24. Nesterov, Y. Gradient Methods for Minimizing Composite Objective Function; CORE Discussion Papers 2007076; Université Catholique de Louvain, Center for Operations Research and Econometrics (CORE): Louvain-la-Neuve, Belgium, 2007.
  25. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
Figure 1. Diagram of the relations among the operators.
Figure 2. Case $N = t = 10$ and $k = 9$.
Figure 3. Case $N = t = 10$ and $k = 10$.
Figure 4. Case $N = t = 100$ and $k = 90$.
Figure 5. Case $N = t = 100$ and $k = 100$.
