Article

Tseng Type Methods for Inclusion and Fixed Point Problems with Applications

by Raweerote Suparatulatorn 1,2,3,† and Anchalee Khemphet 1,2,3,*,†
1 Research Center in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Centre of Excellence in Mathematics, CHE, Si Ayutthaya Rd., Bangkok 10400, Thailand
3 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(12), 1175; https://doi.org/10.3390/math7121175
Submission received: 3 November 2019 / Revised: 21 November 2019 / Accepted: 26 November 2019 / Published: 3 December 2019

Abstract:
An algorithm is introduced to find a common solution to inclusion problems and fixed point problems. The algorithm is a modification of Tseng type methods, inspired by Mann-type iteration and viscosity approximation methods. Under certain conditions, the iteration obtained from the algorithm converges strongly. Moreover, applications to the convex feasibility problem and to signal recovery in compressed sensing are considered. In particular, numerical experiments with the algorithm are demonstrated, and the results are compared with those of the previous algorithm.

1. Introduction

Inclusion problems and fixed point problems have attracted the interest of many researchers in mathematics because they can be applied to several other problems. For instance, they are applicable to convex programming, minimization problems, variational inequalities, and the split feasibility problem. As a result, applications of such problems can be taken into consideration, such as machine learning, the signal recovery problem, the image restoration problem, sensor networks in computerized tomography and data compression, and intensity-modulated radiation therapy treatment planning, see [1,2,3,4,5,6,7,8,9].
First, we fix the notation used throughout the paper. Suppose that H is a real Hilbert space. Let S and A be self-maps on H, and let $B : H \to 2^H$ be a multi-valued operator. A point $x \in H$ is said to be a fixed point of S if $Sx = x$, and the set of all such points is denoted by $\mathrm{Fix}(S)$. The fixed point problem for the operator S is the problem of seeking a fixed point of S, so its solution set is $\mathrm{Fix}(S)$. Next, the inclusion problem for the operators A and B is the problem of finding a point $x \in H$ with $0 \in (A+B)x$; its solution set is denoted by $(A+B)^{-1}(0)$. Finally, the inclusion and fixed point problem is the combination of these two problems, that is, the problem of finding a point
$$x \in H \quad \text{with} \quad 0 \in (A+B)x \ \text{ and } \ x \in \mathrm{Fix}(S).$$
Thus, the solution set of this problem is $(A+B)^{-1}(0) \cap \mathrm{Fix}(S)$.
In the literature, a number of methods have been proposed for investigating inclusion problems, see [7,10,11,12,13]. One of the most popular, the forward–backward splitting method, was suggested by Lions and Mercier [14] and by Passty [15]. In 1997, Chen and Rockafellar [16] studied this method and obtained a weak convergence theorem; to prove that the forward–backward splitting method converges weakly, however, additional hypotheses on A and B have to be assumed. Later, Tseng [17] improved the method by weakening these assumptions on A and B while retaining weak convergence. His method is called the modified forward–backward splitting method, or Tseng's splitting algorithm. Recently, Gibali and Thong [6] studied another modified method; to be more precise, the step size rule was changed following the Mann and viscosity ideas. This new method converges strongly under suitable assumptions and is practical to implement.
In addition, there has been considerable research on inclusion and fixed point problems, and methods for approximating their solutions have been developed. In fact, Zhang and Jiang [18] suggested a hybrid algorithm for the inclusion and fixed point problems for certain operators. A few years later, Thong and Vinh [19] investigated another method relying on the inertial forward–backward splitting algorithm, the Mann algorithm, and the viscosity method. On top of that, a very recent work of Cholamjiak, Kesornprom, and Pholasa [20] solves the inclusion problem for two monotone operators and the fixed point problem for nonexpansive mappings.
Motivated by the preceding research, this work suggests another algorithm for solving the inclusion and fixed point problems. The study focuses on the inclusion problem for A and B, where A is a Lipschitz continuous monotone operator and B is a maximal monotone operator, and the fixed point problem for a nonexpansive mapping S. The main result guarantees that the algorithm converges strongly under appropriate assumptions. Furthermore, numerical experiments with the proposed algorithm are provided to demonstrate the significant results.
To begin, some definitions and known results needed to support our work are given in the next section. The new algorithm, together with the required conditions, is presented in Section 3, where the strong convergence of the iteration it generates is proved in detail. Next, applications and numerical results for the convex feasibility problem and for signal recovery in compressed sensing are discussed in Section 4 to show the efficiency of the algorithm. Lastly, this work is summarized in the Conclusions section.

2. Mathematical Preliminaries

In what follows, recall the real Hilbert space H. Let C be a nonempty, closed and convex subset of H. Define the metric projection $P_C$ from H onto C by
$$P_C x = \arg\min_{y \in C} \|x - y\|.$$
It is well known that $P_C$ is characterized by
$$\langle x - P_C x,\, y - P_C x\rangle \le 0 \qquad (1)$$
for any $x \in H$ and $y \in C$.
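As a finite-dimensional illustration (our own sketch, not from the paper; Python/NumPy is used here and in the later sketches, whereas the paper's experiments were run in Matlab), the following code computes the metric projection onto a closed ball and onto a half-space. Both projections reappear in the examples of Section 4.

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection of x onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space {y : <a, y> <= b}."""
    excess = np.dot(a, x) - b
    if excess <= 0:
        return x.copy()
    return x - (excess / np.dot(a, a)) * a

# Quick check of the characterization <x - P_C x, y - P_C x> <= 0:
x = np.array([3.0, 4.0])
p = project_ball(x, np.zeros(2), 1.0)
y = np.array([0.5, -0.5])              # an arbitrary point of the ball
print(np.dot(x - p, y - p) <= 1e-12)   # expected: True
```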
Now assume that $x, y, z \in H$. Then the following inequality and equalities are valid in inner product spaces:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y,\, x + y\rangle, \qquad (2)$$
$$\|\alpha x + (1-\alpha)y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2 - \alpha(1-\alpha)\|x - y\|^2, \qquad (3)$$
$$\|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \alpha\gamma\|x - z\|^2 - \beta\gamma\|y - z\|^2 \qquad (4)$$
for any $\alpha, \beta, \gamma \in [0, 1]$ such that $\alpha + \beta + \gamma = 1$.
To obtain the desired results, the following definitions will be needed later.
Definition 1.
A self-map S on H is called
(i) 
firmly nonexpansive if for each $x, y \in H$,
$$\|Sx - Sy\|^2 \le \langle Sx - Sy,\, x - y\rangle,$$
(ii) 
β-cocoercive (or β-inverse strongly monotone) if there is $\beta > 0$ satisfying, for each $x, y \in H$,
$$\langle Sx - Sy,\, x - y\rangle \ge \beta\|Sx - Sy\|^2,$$
(iii) 
L-Lipschitz continuous if there is $L > 0$ satisfying, for each $x, y \in H$,
$$\|Sx - Sy\| \le L\|x - y\|,$$
(iv) 
nonexpansive if S is L-Lipschitz continuous when L = 1 ,
(v) 
L-contraction if S is L-Lipschitz continuous when L < 1 .
According to these definitions, it can be observed that every β-cocoercive mapping is monotone and $\frac{1}{\beta}$-Lipschitz continuous. Indeed, monotonicity follows since $\langle Sx - Sy,\, x - y\rangle \ge \beta\|Sx - Sy\|^2 \ge 0$, and combining this with the Cauchy–Schwarz inequality gives $\beta\|Sx - Sy\|^2 \le \|Sx - Sy\|\,\|x - y\|$, hence $\|Sx - Sy\| \le \frac{1}{\beta}\|x - y\|$.
Definition 2.
Assume that $S : H \to 2^H$ is a multi-valued operator. The set
$$\mathrm{gra}(S) = \{(x, u) \in H \times H : u \in Sx\}$$
is called the graph of S.
Definition 3.
An operator $S : H \to 2^H$ is called
(i) 
monotone if for all $(x, u), (y, v) \in \mathrm{gra}(S)$,
$$\langle u - v,\, x - y\rangle \ge 0,$$
(ii) 
maximal monotone if there is no proper monotone extension of $\mathrm{gra}(S)$.
Definition 4.
Let $S : H \to 2^H$ be a maximal monotone operator. For each $\lambda > 0$, the operator
$$J_\lambda^S = (I + \lambda S)^{-1}$$
is called the resolvent operator of S.
It is well known that if $S : H \to 2^H$ is a multi-valued maximal monotone operator and $\lambda > 0$, then $J_\lambda^S$ is a single-valued firmly nonexpansive mapping, and $S^{-1}(0)$ is a closed convex set.
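For instance, a standard fact used later in Section 4 (see, e.g., [27]) is that when S is the subdifferential $\partial i_C$ of the indicator function of a nonempty closed convex set C, the resolvent reduces to the metric projection:
$$J_\lambda^{\partial i_C} = (I + \lambda\,\partial i_C)^{-1} = P_C \quad \text{for every } \lambda > 0,$$
since $y = (I + \lambda\,\partial i_C)^{-1}x$ means $x - y \in \lambda\,\partial i_C(y)$, that is, $\langle x - y,\, w - y\rangle \le 0$ for all $w \in C$, which is exactly the characterization (1) of $y = P_C x$.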
Now, to accomplish the purpose of this work, some crucial lemmas are needed as follows. Here, the notations ⇀ and → represent weak and strong convergence, respectively.
Lemma 1.
[21] If $B : H \to 2^H$ is maximal monotone and $A : H \to H$ is Lipschitz continuous and monotone, then $A + B$ is maximal monotone.
Lemma 2.
[22] Let C be a closed convex subset of H, and $S : C \to C$ a nonexpansive mapping with $\mathrm{Fix}(S) \neq \emptyset$. If there exists a sequence $\{x_n\}$ in C satisfying $x_n \rightharpoonup z$ and $x_n - Sx_n \to 0$, then $z = Sz$.
Lemma 3.
[23] Assume that $\{a_n\}$ and $\{c_n\}$ are nonnegative sequences of real numbers such that $\sum_{n=1}^{\infty} c_n < \infty$, and $\{b_n\}$ is a sequence of real numbers with $\limsup_{n\to\infty} b_n \le 0$. Let $\{\delta_n\}$ be a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty}\delta_n = \infty$. If there exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$,
$$a_{n+1} \le (1 - \delta_n)a_n + \delta_n b_n + c_n,$$
then $\lim_{n\to\infty} a_n = 0$.
Lemma 4.
[24] Assume that $\{\Gamma_n\}$ is a sequence of real numbers for which there is a subsequence $\{\Gamma_{n_j}\}_{j\ge 0}$ of $\{\Gamma_n\}$ satisfying $\Gamma_{n_j} < \Gamma_{n_j+1}$ for each $j \ge 0$. Let $\{\psi(n)\}_{n\ge\bar{n}}$ be the sequence of integers defined by
$$\psi(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}, \qquad (5)$$
where $\bar{n} \in \mathbb{N}$ is sufficiently large. Then $\{\psi(n)\}_{n\ge\bar{n}}$ is a nondecreasing sequence with $\lim_{n\to\infty}\psi(n) = \infty$. Moreover, for each $n \ge \bar{n}$, we have $\Gamma_{\psi(n)} \le \Gamma_{\psi(n)+1}$ and $\Gamma_n \le \Gamma_{\psi(n)+1}$.

3. Convergence Analysis

To find a solution to the inclusion and fixed point problems, a new algorithm is introduced. The convergence of the sequence obtained from the algorithm is proved in Theorem 1. First, some assumptions are required in order to accomplish our goal. In particular, the following Conditions 1 through 4 are maintained in this section. Here, denote the solution set $(A+B)^{-1}(0) \cap \mathrm{Fix}(S)$ by $\Omega$.
Condition 1.
Ω is nonempty.
Condition 2.
A is L-Lipschitz continuous and monotone, and B is maximal monotone.
Condition 3.
S is nonexpansive, and let $f : H \to H$ be ρ-Lipschitz continuous, where $\rho \in [0, 1)$.
Condition 4.
Let $\{\alpha_n\}$ and $\{\beta_n\}$ be sequences in $(0, 1)$ such that $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$, and there exist positive real numbers a and b with $0 < a < \beta_n < b < 1 - \alpha_n$ for each $n \in \mathbb{N}$.
Second, the following algorithm is constructed for solving the inclusion and fixed point problems. It is inspired by Tseng's algorithm for monotone variational inclusion problems, by the iterative method of Mann [25], and by Moudafi's viscosity approximation scheme [26] for fixed point problems.
Algorithm 1 An iterative algorithm for solving inclusion and fixed point problems
Initialization: Let $\lambda_1 > 0$ and $\mu \in (0, 1)$. Assume $x_1 \in H$.
Iterative Steps: Obtain the iteration $\{x_n\}$ as follows:
Step 1.
Define
$$y_n = (I + \lambda_n B)^{-1}(I - \lambda_n A)x_n, \qquad z_n = y_n - \lambda_n(Ay_n - Ax_n),$$
and
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n - \beta_n)x_n + \beta_n Sz_n.$$
Step 2.
Define
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu\|x_n - y_n\|}{\|Ax_n - Ay_n\|},\ \lambda_n\right\} & \text{if } Ax_n - Ay_n \neq 0;\\[2mm] \lambda_n & \text{otherwise}. \end{cases}$$
Replace n by n + 1 and then repeat Step 1.
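To make the steps concrete, here is a Python/NumPy sketch of how Algorithm 1 could be implemented for finite-dimensional problems. The function name tseng_mann_viscosity, the callable arguments, the default parameter values, and the stopping rule based on $\|x_n - y_n\|$ (the criterion used in Example 1 below) are our own choices, not part of the paper.

```python
import numpy as np

def tseng_mann_viscosity(x1, A, resolvent_B, S, f, alpha, beta,
                         lam1=7.55, mu=0.85, tol=1e-5, max_iter=10000):
    """Sketch of Algorithm 1.

    A            : the operator A (monotone, Lipschitz continuous)
    resolvent_B  : function (v, lam) -> (I + lam*B)^{-1} v
    S            : nonexpansive mapping
    f            : rho-contraction used in the viscosity term
    alpha, beta  : functions n -> alpha_n, beta_n satisfying Condition 4
    """
    x, lam = np.asarray(x1, dtype=float), lam1
    for n in range(1, max_iter + 1):
        Ax = A(x)
        # Step 1: forward-backward step, Tseng correction, and Mann/viscosity combination
        y = resolvent_B(x - lam * Ax, lam)
        Ay = A(y)
        z = y - lam * (Ay - Ax)
        a_n, b_n = alpha(n), beta(n)
        x_next = a_n * f(x) + (1.0 - a_n - b_n) * x + b_n * S(z)
        # Step 2: self-adaptive update of the step size lambda_{n+1}
        diff = np.linalg.norm(Ax - Ay)
        if diff > 0:
            lam = min(mu * np.linalg.norm(x - y) / diff, lam)
        if np.linalg.norm(x - y) < tol:   # stopping rule (our assumption)
            return x_next, n
        x = x_next
    return x, max_iter
```

For the experiments of Section 4, one would pass, for example, alpha=lambda n: 1/(n+1) and beta=lambda n: n/(2*(n+1)), matching the choices $\alpha_n = \frac{1}{n+1}$ and $\beta_n = \frac{n}{2(n+1)}$ reported there.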
With this algorithm, the following lemmas can be achieved in the same manner as in [6].
Lemma 5.
[6] As given in the algorithm together with all four conditions assumed, $\{\lambda_n\}$ is a convergent sequence with a lower bound of $\min\left\{\lambda_1, \frac{\mu}{L}\right\}$.
Lemma 6.
[6] As given in the algorithm together with all four conditions assumed, the following inequalities are true. For any $p \in \Omega$,
$$\|z_n - p\|^2 \le \|x_n - p\|^2 - \left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right)\|x_n - y_n\|^2, \qquad (7)$$
and
$$\|z_n - y_n\| \le \frac{\mu\lambda_n}{\lambda_{n+1}}\,\|x_n - y_n\|. \qquad (8)$$
We are almost ready to show the strong convergence theorem for our algorithm. The remaining fact needed for the theorem is stated and verified below.
Lemma 7.
As given in the algorithm together with all four conditions assumed, suppose that
$$\lim_{n\to\infty}\|x_n - y_n\| = \lim_{n\to\infty}\|x_n - z_n\| = \lim_{n\to\infty}\|z_n - Sz_n\| = 0.$$
If there is a weakly convergent subsequence $\{x_{n_k}\}$ of $\{x_n\}$, then the limit of $\{x_{n_k}\}$ belongs to the solution set $\Omega$.
Proof. 
Let $z \in H$ be such that $x_{n_k} \rightharpoonup z$. From $\lim_{n\to\infty}\|x_n - y_n\| = 0$, by a proof similar to that of Lemma 7 in [6], we have $z \in (A+B)^{-1}(0)$. Since $\lim_{n\to\infty}\|x_n - z_n\| = 0$ and $x_{n_k} \rightharpoonup z$, it follows that $z_{n_k} \rightharpoonup z$. Combining this with $\lim_{n\to\infty}\|z_n - Sz_n\| = 0$, Lemma 2 yields $z \in \mathrm{Fix}(S)$. Therefore, $z \in \Omega$. □
Finally, the main theorem is presented and proved as follows.
Theorem 1.
Assume that Conditions 1–4 are valid. Let $\{x_n\}$ be the sequence obtained from the algorithm with initial point $x_1 \in H$, $\lambda_1 > 0$ and $\mu \in (0, 1)$. Then $x_n \to p$, where $p = P_\Omega f(p)$.
Proof. 
Since $\lim_{n\to\infty}\left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right) = 1 - \mu^2 > 0$, one can find $n_0 \in \mathbb{N}$ such that for each $n \ge n_0$,
$$1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2} > 0.$$
Consequently, by Equation (7), for any $n \ge n_0$,
$$\|z_n - p\| \le \|x_n - p\|. \qquad (10)$$
Next, we prove all following claims.
Claim 1. $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ and $\{f(x_n)\}$ are bounded sequences.
Since Condition 3 and the inequality Equation (10) hold, the following relation is obtained for each $n \ge n_0$:
$$\begin{aligned} \|x_{n+1} - p\| &= \|\alpha_n(f(x_n) - p) + (1 - \alpha_n - \beta_n)(x_n - p) + \beta_n(Sz_n - p)\|\\ &\le \alpha_n\|f(x_n) - p\| + (1 - \alpha_n - \beta_n)\|x_n - p\| + \beta_n\|Sz_n - p\|\\ &\le \alpha_n\|f(x_n) - f(p)\| + \alpha_n\|f(p) - p\| + (1 - \alpha_n - \beta_n)\|x_n - p\| + \beta_n\|z_n - p\|\\ &\le \alpha_n\rho\|x_n - p\| + \alpha_n\|f(p) - p\| + (1 - \alpha_n)\|x_n - p\|\\ &= [1 - \alpha_n(1-\rho)]\|x_n - p\| + \alpha_n(1-\rho)\,\frac{\|f(p) - p\|}{1-\rho}\\ &\le \max\left\{\|x_n - p\|,\ \frac{\|f(p) - p\|}{1-\rho}\right\}. \end{aligned}$$
Therefore, $\|x_{n+1} - p\| \le \max\left\{\|x_{n_0} - p\|,\ \frac{\|f(p) - p\|}{1-\rho}\right\}$ for any $n \ge n_0$. Consequently, $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{z_n\}$. In addition, $\{f(x_n)\}$ is also bounded because f is a contraction. Thus, we have Claim 1.
Now for each $n \in \mathbb{N}$, set $\Gamma_n = \|x_n - p\|^2$.
Claim 2. For any $n \in \mathbb{N}$,
$$\beta_n\left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right)\|x_n - y_n\|^2 + \beta_n(1 - \alpha_n - \beta_n)\|x_n - Sz_n\|^2 \le \Gamma_n - \Gamma_{n+1} + \alpha_n\|f(x_n) - p\|^2.$$
Using Equation (4), we get
$$\begin{aligned} \Gamma_{n+1} &= \|\alpha_n(f(x_n) - p) + (1 - \alpha_n - \beta_n)(x_n - p) + \beta_n(Sz_n - p)\|^2\\ &= \alpha_n\|f(x_n) - p\|^2 + (1 - \alpha_n - \beta_n)\Gamma_n + \beta_n\|Sz_n - p\|^2 - \alpha_n(1 - \alpha_n - \beta_n)\|f(x_n) - x_n\|^2\\ &\quad - \beta_n(1 - \alpha_n - \beta_n)\|x_n - Sz_n\|^2 - \alpha_n\beta_n\|f(x_n) - Sz_n\|^2\\ &\le \alpha_n\|f(x_n) - p\|^2 + (1 - \alpha_n - \beta_n)\Gamma_n + \beta_n\|z_n - p\|^2 - \beta_n(1 - \alpha_n - \beta_n)\|x_n - Sz_n\|^2. \end{aligned}$$
Applying Equation (7), we have
$$\begin{aligned} \Gamma_{n+1} &\le \alpha_n\|f(x_n) - p\|^2 + (1 - \alpha_n)\Gamma_n - \beta_n(1 - \alpha_n - \beta_n)\|x_n - Sz_n\|^2 - \beta_n\left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right)\|x_n - y_n\|^2\\ &\le \alpha_n\|f(x_n) - p\|^2 + \Gamma_n - \beta_n(1 - \alpha_n - \beta_n)\|x_n - Sz_n\|^2 - \beta_n\left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right)\|x_n - y_n\|^2. \end{aligned}$$
Therefore, rearranging the terms in this inequality, Claim 2 is obtained.
Moreover, we show that the inequality Equation (12) below is true.
Claim 3. For each $n \ge n_0$,
$$\Gamma_{n+1} \le [1 - \alpha_n(1-\rho)]\Gamma_n + \alpha_n(1-\rho)\,\frac{2}{1-\rho}\Big[\beta_n\|x_n - Sz_n\|\,\|x_{n+1} - p\| + \langle f(p) - p,\, x_{n+1} - p\rangle\Big]. \qquad (12)$$
Indeed, set $t_n = (1 - \beta_n)x_n + \beta_n Sz_n$. By Condition 3, we get
$$\|t_n - p\| \le (1 - \beta_n)\|x_n - p\| + \beta_n\|Sz_n - p\| \le (1 - \beta_n)\|x_n - p\| + \beta_n\|z_n - p\| \le \|x_n - p\| \qquad (13)$$
for $n \ge n_0$, and
$$\|x_n - t_n\| = \beta_n\|x_n - Sz_n\|. \qquad (14)$$
Using the definition of $\Gamma_n$, and Equations (2), (3), (13), and (14), for all $n \ge n_0$, we obtain
$$\begin{aligned} \Gamma_{n+1} &= \|(1 - \alpha_n)(t_n - p) + \alpha_n(f(x_n) - f(p)) - \alpha_n(x_n - t_n) - \alpha_n(p - f(p))\|^2\\ &\le \|(1 - \alpha_n)(t_n - p) + \alpha_n(f(x_n) - f(p))\|^2 - 2\alpha_n\langle x_n - t_n + p - f(p),\, x_{n+1} - p\rangle\\ &\le (1 - \alpha_n)\|t_n - p\|^2 + \alpha_n\|f(x_n) - f(p)\|^2 - 2\alpha_n\langle x_n - t_n + p - f(p),\, x_{n+1} - p\rangle\\ &\le (1 - \alpha_n)\Gamma_n + \alpha_n\rho\Gamma_n + 2\alpha_n\langle x_n - t_n,\, p - x_{n+1}\rangle + 2\alpha_n\langle f(p) - p,\, x_{n+1} - p\rangle\\ &\le [1 - \alpha_n(1-\rho)]\Gamma_n + 2\alpha_n\|x_n - t_n\|\,\|x_{n+1} - p\| + 2\alpha_n\langle f(p) - p,\, x_{n+1} - p\rangle\\ &= [1 - \alpha_n(1-\rho)]\Gamma_n + 2\alpha_n\beta_n\|x_n - Sz_n\|\,\|x_{n+1} - p\| + 2\alpha_n\langle f(p) - p,\, x_{n+1} - p\rangle\\ &= [1 - \alpha_n(1-\rho)]\Gamma_n + \alpha_n(1-\rho)\,\frac{2}{1-\rho}\Big[\beta_n\|x_n - Sz_n\|\,\|x_{n+1} - p\| + \langle f(p) - p,\, x_{n+1} - p\rangle\Big]. \end{aligned}$$
Recall that our task is to show that $x_n \to p$, which is equivalent to showing that $\Gamma_n \to 0$.
Claim 4. $\Gamma_n \to 0$.
Consider the following two cases.
Case a. We can find $N \in \mathbb{N}$ satisfying $\Gamma_{n+1} \le \Gamma_n$ for each $n \ge N$.
Since each term $\Gamma_n$ is nonnegative, the sequence $\{\Gamma_n\}$ is convergent. Further, notice that Condition 4 implies that $\lim_{n\to\infty}\alpha_n = 0$ and that $\{\beta_n\} \subset (a, b) \subset (0, 1)$. Due to the fact that $\lim_{n\to\infty}\left(1 - \frac{\mu^2\lambda_n^2}{\lambda_{n+1}^2}\right) > 0$, according to Claim 2,
$$\lim_{n\to\infty}\|x_n - y_n\| = \lim_{n\to\infty}\|x_n - Sz_n\| = 0. \qquad (15)$$
From Equation (8), we immediately get
$$\lim_{n\to\infty}\|z_n - y_n\| = 0.$$
In addition, by using the triangle inequality, the following inequalities are obtained:
$$\|x_n - z_n\| \le \|x_n - y_n\| + \|z_n - y_n\|,$$
$$\|z_n - Sz_n\| \le \|x_n - z_n\| + \|x_n - Sz_n\|.$$
Obviously, $\lim_{n\to\infty}\|x_n - z_n\| = \lim_{n\to\infty}\|z_n - Sz_n\| = 0$. Note that for each $n \in \mathbb{N}$,
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - Sz_n\| + \|x_n - Sz_n\| \le \alpha_n\|f(x_n) - x_n\| + (2 - \beta_n)\|x_n - Sz_n\|.$$
Consequently, by Equation (15) and Condition 4, $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. Next, since $\{x_n\}$ is bounded, there is $z \in H$ such that $x_{n_k} \rightharpoonup z$ for some subsequence $\{x_{n_k}\}$ of $\{x_n\}$, which may be chosen so that the upper limit below is attained along it. Then Lemma 7 implies that $z \in \Omega$. As a result, by the definition of p and Equation (1), it is straightforward to show that
$$\limsup_{n\to\infty}\langle f(p) - p,\, x_n - p\rangle = \lim_{k\to\infty}\langle f(p) - p,\, x_{n_k} - p\rangle = \langle f(p) - p,\, z - p\rangle \le 0.$$
Because $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, the following result is obtained:
$$\limsup_{n\to\infty}\langle f(p) - p,\, x_{n+1} - p\rangle \le \limsup_{n\to\infty}\langle f(p) - p,\, x_{n+1} - x_n\rangle + \limsup_{n\to\infty}\langle f(p) - p,\, x_n - p\rangle \le 0.$$
Applying Lemma 3 to the inequality from Claim 3 with $\delta_n = \alpha_n(1-\rho)$, $a_n = \Gamma_n$, $c_n = 0$, and $b_n = \frac{2}{1-\rho}\big[\beta_n\|x_n - Sz_n\|\,\|x_{n+1} - p\| + \langle f(p) - p,\, x_{n+1} - p\rangle\big]$, we conclude that $\lim_{n\to\infty}\Gamma_n = 0$.
Case b. We can find $n_j \in \mathbb{N}$ satisfying $n_j \ge j$ and $\Gamma_{n_j} < \Gamma_{n_j+1}$ for all $j \in \mathbb{N}$.
According to Lemma 4, the inequality $\Gamma_{\psi(n)} \le \Gamma_{\psi(n)+1}$ is obtained for each $n \ge \bar{n}$, where $\psi : \mathbb{N} \to \mathbb{N}$ is as in Equation (5) and $\bar{n} \in \mathbb{N}$ is sufficiently large. By Claim 2, this implies, for each $n \ge \bar{n}$,
$$\begin{aligned} &\beta_{\psi(n)}\left(1 - \frac{\mu^2\lambda_{\psi(n)}^2}{\lambda_{\psi(n)+1}^2}\right)\|x_{\psi(n)} - y_{\psi(n)}\|^2 + \beta_{\psi(n)}\big(1 - \alpha_{\psi(n)} - \beta_{\psi(n)}\big)\|x_{\psi(n)} - Sz_{\psi(n)}\|^2\\ &\qquad\le \Gamma_{\psi(n)} - \Gamma_{\psi(n)+1} + \alpha_{\psi(n)}\|f(x_{\psi(n)}) - p\|^2 \le \alpha_{\psi(n)}\|f(x_{\psi(n)}) - p\|^2. \end{aligned}$$
Similarly to Case a, since $\alpha_n \to 0$, we obtain
$$\lim_{n\to\infty}\|x_{\psi(n)} - y_{\psi(n)}\| = \lim_{n\to\infty}\|x_{\psi(n)} - Sz_{\psi(n)}\| = 0,$$
and furthermore,
$$\limsup_{n\to\infty}\langle f(p) - p,\, x_{\psi(n)+1} - p\rangle \le 0.$$
Finally, by Claim 3, for all $n \ge \max\{\bar{n}, n_0\}$, the following inequalities hold:
$$\begin{aligned} \Gamma_{\psi(n)+1} &\le [1 - \alpha_{\psi(n)}(1-\rho)]\Gamma_{\psi(n)} + \alpha_{\psi(n)}(1-\rho)\,\frac{2}{1-\rho}\Big[\beta_{\psi(n)}\|x_{\psi(n)} - Sz_{\psi(n)}\|\,\|x_{\psi(n)+1} - p\| + \langle f(p) - p,\, x_{\psi(n)+1} - p\rangle\Big]\\ &\le [1 - \alpha_{\psi(n)}(1-\rho)]\Gamma_{\psi(n)+1} + \alpha_{\psi(n)}(1-\rho)\,\frac{2}{1-\rho}\Big[\beta_{\psi(n)}\|x_{\psi(n)} - Sz_{\psi(n)}\|\,\|x_{\psi(n)+1} - p\| + \langle f(p) - p,\, x_{\psi(n)+1} - p\rangle\Big]. \end{aligned}$$
Some simple calculations yield
$$\Gamma_{\psi(n)+1} \le \frac{2}{1-\rho}\Big[\beta_{\psi(n)}\|x_{\psi(n)} - Sz_{\psi(n)}\|\,\|x_{\psi(n)+1} - p\| + \langle f(p) - p,\, x_{\psi(n)+1} - p\rangle\Big].$$
It follows that $\limsup_{n\to\infty}\Gamma_{\psi(n)+1} \le 0$. Thus, $\lim_{n\to\infty}\|x_{\psi(n)+1} - p\|^2 = \lim_{n\to\infty}\Gamma_{\psi(n)+1} = 0$. In addition, by Lemma 4,
$$\lim_{n\to\infty}\Gamma_n \le \lim_{n\to\infty}\Gamma_{\psi(n)+1} = 0.$$
Hence, $\{x_n\}$ converges strongly to p. □

4. Applications and Numerical Results

The inclusion and fixed point problem is applicable to many other problems; owing to this, some applications can be considered. In particular, the algorithm constructed in Section 3 is applied to the convex feasibility problem and to signal recovery in compressed sensing. For each problem, the numerical results of the algorithm are shown along with a comparison to the results of the algorithm of Gibali and Thong, Algorithm 3.1 in [6]. For reference, all numerical experiments presented were obtained in Matlab R2015b running on the same laptop computer.
Example 1.
Assume that $H = L^2([0, 2\pi])$. For $x, y \in L^2([0, 2\pi])$, let $\langle x, y\rangle = \int_0^{2\pi} x(t)y(t)\,dt$ be the inner product on H and, as a consequence, let $\|x\| = \left(\int_0^{2\pi}|x(t)|^2\,dt\right)^{1/2}$ be the induced norm on H. Define the half-space
$$C = \left\{x \in L^2([0, 2\pi]) : \langle 1, x\rangle \le 1\right\} = \left\{x \in L^2([0, 2\pi]) : \int_0^{2\pi} x(t)\,dt \le 1\right\},$$
where $1$ denotes the constant function $1 \in L^2([0, 2\pi])$. Suppose that Q is the closed ball of radius 4 centered at $\sin \in L^2([0, 2\pi])$. That is,
$$Q = \left\{x \in L^2([0, 2\pi]) : \|x - \sin\|^2 \le 16\right\} = \left\{x \in L^2([0, 2\pi]) : \int_0^{2\pi}|x(t) - \sin(t)|^2\,dt \le 16\right\}.$$
Next, given the mappings $S, T : L^2([0, 2\pi]) \to L^2([0, 2\pi])$ such that $S = P_C P_Q$ and $(Tx)(s) = x(s)$ for $x \in L^2([0, 2\pi])$, define $Ax = \nabla\left(\frac{1}{2}\|Tx - P_Q Tx\|^2\right) = T^*(I - P_Q)Tx$ and $B = \partial i_C$, the subdifferential of the indicator function of C. Then the convex feasibility problem is the problem of finding a point
$$x \in H \quad \text{such that} \quad x \in C \cap Q.$$
Clearly, A is L-Lipschitz continuous with $L = \|T\|^2 = 1$, and B is maximal monotone, see [27]. For each $n \in \mathbb{N}$, choose $\alpha_n = \frac{1}{n+1}$ and $\beta_n = \frac{n}{2(n+1)}$, and assume that $\mu = 0.85$ and $\lambda_1 = 7.55$. The stopping criterion is $\|x_n - y_n\| < 10^{-5}$. To solve this problem, we apply our algorithm and, additionally, Algorithm 3.1 of [6]. The numerical results are presented in Table 1.
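As an illustration only (not part of the original experiments, which were run in Matlab), the following NumPy sketch discretizes Example 1 on a uniform grid and feeds the resulting operators to the tseng_mann_viscosity sketch given after Algorithm 1. The grid size, the quadrature rule approximating the $L^2$ inner product, the reading of S as the composition $P_C P_Q$, and the choice $f = 0$ (one of the two choices reported in Table 1) are assumptions.

```python
import numpy as np

# Uniform grid on [0, 2*pi]; the L^2 inner product is approximated by
# <x, y>_h = dt * sum(x * y) (a simple quadrature rule, our own choice).
N_grid = 400
t = np.linspace(0.0, 2.0 * np.pi, N_grid, endpoint=False)
dt = 2.0 * np.pi / N_grid

def norm_h(x):
    return np.sqrt(dt * np.sum(x * x))

def P_C(x):
    # Projection onto the half-space {x : integral of x over [0, 2*pi] <= 1}
    excess = dt * np.sum(x) - 1.0
    return x if excess <= 0.0 else x - excess / (dt * N_grid) * np.ones(N_grid)

def P_Q(x):
    # Projection onto the closed ball of radius 4 centered at sin
    d = x - np.sin(t)
    dist = norm_h(d)
    return x if dist <= 4.0 else np.sin(t) + (4.0 / dist) * d

A = lambda x: x - P_Q(x)                 # A = T*(I - P_Q)T with T = I
resolvent_B = lambda v, lam: P_C(v)      # resolvent of B = subdifferential of i_C
S = lambda x: P_C(P_Q(x))                # nonexpansive; C ∩ Q is among its fixed points
f = lambda x: np.zeros(N_grid)           # the choice f = 0 from Table 1

x0 = t ** 2 / 10.0                       # a sample initial function
sol, iters = tseng_mann_viscosity(x0, A, resolvent_B, S, f,
                                  alpha=lambda n: 1.0 / (n + 1),
                                  beta=lambda n: n / (2.0 * (n + 1)))
# iterations, integral of sol (should be <= 1), distance from sin (should be <= 4)
print(iters, dt * np.sum(sol), norm_h(sol - np.sin(t)))
```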
Accordingly, with an appropriate choice of the function f, the new algorithm yields better results than Algorithm 3.1 of [6]: it requires less elapsed time and fewer iterations. This difference occurs because the new algorithm contains the mappings f and S, which allows more flexibility through better options for f and S. In fact, Algorithm 3.1 of [6] is a special case of our algorithm.
Example 2.
Recall that signal recovery in compressed sensing can be expressed by the following mathematical model:
$$b = Tx + \varepsilon, \qquad (23)$$
where $x \in \mathbb{R}^N$ is the original signal, $b \in \mathbb{R}^M$ is the observed data, $T : \mathbb{R}^N \to \mathbb{R}^M$ is a bounded linear operator represented by an $M \times N$ matrix, and $\varepsilon \in \mathbb{R}^M$ is Gaussian noise distributed as $N(0, \sigma^2)$, with $M, N \in \mathbb{N}$ such that $M < N$. Solving the linear system of Equation (23) is known to be equivalent to solving the convex unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^N} \frac{1}{2}\|Tx - b\|_2^2 + \tau\|x\|_1, \qquad (24)$$
where $\tau > 0$ is a parameter. Next, let $A = \nabla g$ be the gradient of g and $B = \partial h$ the subdifferential of h, where $g(x) = \frac{1}{2}\|Tx - b\|_2^2$ and $h(x) = \tau\|x\|_1$. Then $\nabla g(x) = T^t(Tx - b)$ is $\frac{1}{\|T\|^2}$-cocoercive, and $I - \gamma\nabla g$ is nonexpansive for any $0 < \gamma < \frac{2}{\|T\|^2}$, see [28]. Moreover, $\partial h$ is maximal monotone, see [13]. Additionally, by Proposition 3.1 (iii) of [4], $x^*$ is a solution to Equation (24) if and only if $0 \in \nabla g(x^*) + \partial h(x^*)$, that is, $x^* = \mathrm{prox}_{\gamma h}(I - \gamma\nabla g)(x^*)$ for any $\gamma > 0$, where $\mathrm{prox}_{\gamma h}(x) = \arg\min_{u \in \mathbb{R}^N}\left\{h(u) + \frac{1}{2\gamma}\|x - u\|^2\right\}$.
In this experiment, the signal size is chosen as $N = 512$ and $M = 256$, where the original signal contains m nonzero elements. Let T be the Gaussian matrix generated by the Matlab command randn(M,N). Choose $S = \mathrm{prox}_{\frac{1}{\|T\|^2}h}\left(I - \frac{1}{\|T\|^2}\nabla g\right)$ and $f(\cdot) = \frac{1}{10}(\cdot)$. Assume the same $\alpha_n$, $\beta_n$, $\mu$, and $\lambda_1$ as in the preceding example. The initial point $x_1$ is chosen to be $T^t b$, and the mean-squared error is used to measure the restoration accuracy; precisely, the iteration stops when $\mathrm{MSE}_n = \frac{1}{N}\|x_n - x^*\|^2 < 5\times 10^{-5}$, where $x^*$ is the original signal. The results are displayed in Table 2, which provides the number of iterations and the elapsed time of each algorithm for different numbers of nonzero elements. The numerical results show that our algorithm requires less elapsed time and fewer iterations than Algorithm 3.1 of [6].
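As with Example 1, the following sketch is our own NumPy illustration (the paper's experiments were run in Matlab) of how the operators described above could be assembled and passed to the tseng_mann_viscosity sketch; the random seed, the value of τ, and the stopping tolerance are assumptions not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M_obs, N_sig, m, sigma = 256, 512, 40, 0.01
tau = 1e-2                                     # regularization parameter (our assumption)
T = rng.standard_normal((M_obs, N_sig))        # Gaussian sensing matrix, as randn(M, N)
x_true = np.zeros(N_sig)
x_true[rng.choice(N_sig, m, replace=False)] = rng.standard_normal(m)
b = T @ x_true + sigma * rng.standard_normal(M_obs)

L_T = np.linalg.norm(T, 2) ** 2                # ||T||^2, Lipschitz constant of grad g
grad_g = lambda x: T.T @ (T @ x - b)           # A = grad g
soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)  # prox of k*||.||_1
resolvent_B = lambda v, lam: soft(v, lam * tau)                  # resolvent of B = subdifferential of tau*||.||_1
gamma = 1.0 / L_T
S = lambda x: soft(x - gamma * grad_g(x), gamma * tau)           # S = prox_{gamma h}(I - gamma grad g)
f = lambda x: 0.1 * x                          # f = (1/10) I

x_rec, iters = tseng_mann_viscosity(T.T @ b, grad_g, resolvent_B, S, f,
                                    alpha=lambda n: 1.0 / (n + 1),
                                    beta=lambda n: n / (2.0 * (n + 1)),
                                    tol=1e-6, max_iter=20000)
print(iters, np.mean((x_rec - x_true) ** 2))   # iteration count and final mean-squared error
```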
On top of that, we compare the recovered signals of each algorithm. In Figure 1, the original signal and the measurement are shown for the case $m = 40$ and $\sigma = 0.01$. The signals recovered by Algorithm 3.1 of [6] and by the new algorithm are presented in Figure 2, and the errors of the two algorithms are compared in Figure 3. The outcome is that the signal recovered by our algorithm contains less error than that of Algorithm 3.1 of [6].
Next, another numerical result is given for $\sigma = 0.1$ in Table 3. Likewise, the original signal and the measurement for $m = 40$ are shown in Figure 4, the signals recovered by Algorithm 3.1 of [6] and by the new algorithm are shown in Figure 5, and Figure 6 shows that the error of the result obtained by our algorithm is smaller than that obtained by Algorithm 3.1 of [6].
Overall, based on Tables 2 and 3, results similar to those of Example 1 are obtained: the new algorithm improves the elapsed time and reduces the number of iterations compared to Algorithm 3.1 of [6]. This means that the new algorithm performs better than Algorithm 3.1 of [6], mainly because of its more general setting.

5. Conclusions

To sum up, a modified Tseng type algorithm is constructed based on the Mann iteration and the viscosity approximation method. The purpose is to find a common solution of the inclusion problem for an L-Lipschitz continuous monotone single-valued operator and a maximal monotone multi-valued operator, and the fixed point problem for a nonexpansive operator. Under some additional conditions, the iteration defined by the algorithm converges strongly to a solution of the problem. As applications, the algorithm can be applied to the convex feasibility problem and to signal recovery in compressed sensing. The numerical experiments show that our algorithm yields better results than the previous algorithm.

Author Contributions

Writing–original draft preparation, R.S.; writing–review and editing, A.K.

Funding

This research was funded by Centre of Excellence in Mathematics, The Commission on Higher Education, Thailand, and Chiang Mai University.

Acknowledgments

This research was supported by Centre of Excellence in Mathematics, The Commission on Higher Education, Thailand, and this research work was partially supported by Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  2. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  3. Cholamjiak, P.; Shehu, Y. Inertial forward–backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 2019, 64, 409–435.
  4. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  5. Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; Chandra, T. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–8 July 2008.
  6. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 49.
  7. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2019, 1–19.
  8. Suparatulatorn, R.; Khemphet, A.; Charoensawan, P.; Suantai, S.; Phudolsitthiphat, N. Generalized self-adaptive algorithm for solving split common fixed point problem and its application to image restoration problem. Int. J. Comput. Math. 2019, 1–15.
  9. Suparatulatorn, R.; Charoensawan, P.; Poochinapan, K. Inertial self-adaptive algorithm for solving split feasible problems with applications to image restoration. Math. Methods Appl. Sci. 2019, 42, 7268–7284.
  10. Attouch, H.; Peypouquet, J.; Redont, P. Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 2018, 457, 1095–1117.
  11. Dong, Y.D.; Fischer, A. A family of operator splitting methods revisited. Nonlinear Anal. 2010, 72, 4307–4315.
  12. Huang, Y.Y.; Dong, Y.D. New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 2014, 237, 60–68.
  13. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  14. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  15. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
  16. Chen, H.G.; Rockafellar, R.T. Convergence rates in forward–backward splitting. SIAM J. Optim. 1997, 7, 421–444.
  17. Tseng, P. A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  18. Zhang, J.; Jiang, N. Hybrid algorithm for common solution of monotone inclusion problem and fixed point problem and application to variational inequalities. SpringerPlus 2016, 5, 803.
  19. Thong, D.V.; Vinh, N.T. Inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings. Optimization 2019, 68, 1037–1072.
  20. Cholamjiak, P.; Kesornprom, S.; Pholasa, N. Weak and strong convergence theorems for the inclusion problem and the fixed-point problem of nonexpansive mappings. Mathematics 2019, 7, 167.
  21. Brézis, H. Chapitre II: Operateurs maximaux monotones. North-Holland Math. Stud. 1973, 5, 19–51.
  22. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
  23. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  24. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  25. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  26. Moudafi, A. Viscosity approximating methods for fixed point problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  27. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
  28. Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive nonself-mappings and inverse-strongly-monotone mappings. J. Convex Anal. 2004, 11, 69–79.
Figure 1. The original signal and the measurement when m = 40 and σ = 0.01.
Figure 2. The recovery signals by Algorithm 3.1 [6] and the new algorithm when m = 40 and σ = 0.01.
Figure 3. Error plotting of Algorithm 3.1 [6] and the new algorithm when m = 40 and σ = 0.01.
Figure 4. The original signal and the measurement when m = 40 and σ = 0.1.
Figure 5. The recovery signals by Algorithm 3.1 [6] and the new algorithm when m = 40 and σ = 0.1.
Figure 6. Error plotting of Algorithm 3.1 [6] and the new algorithm when m = 40 and σ = 0.1.
Table 1. Numerical experiments of Example 1.
x_1 | Algorithm 3.1 [6]: Elapsed Time (s), No. of Iter. | New Algorithm with f(x)(t) = (sin(t)/2) x(t): Elapsed Time (s), No. of Iter. | New Algorithm with f(x)(t) = 0: Elapsed Time (s), No. of Iter.
t 2 10 7.0002182.498253.45687
2 t 16 3.184672.164443.38656
e t 4 2 + t 2 24 71.314136202.3074822.73589
e t 3 2 51.214631110.1887810.75448
2 sin 4 ( 5 t ) 3 cos ( 2 t ) 8.8008205.359392.02764
Table 2. Numerical comparison between Algorithm 3.1 [6] and the new algorithm for σ = 0.01.
m Nonzero Elements | Algorithm 3.1 [6]: Elapsed Time (s) | No. of Iter. | New Algorithm: Elapsed Time (s) | No. of Iter.
m = 8 | 0.3263 | 1703 | 0.1293 | 688
m = 16 | 0.4655 | 3331 | 0.1985 | 1285
m = 24 | 0.5966 | 4607 | 0.2990 | 1777
m = 32 | 0.6644 | 4778 | 0.3321 | 1808
m = 40 | 0.7323 | 5644 | 0.4051 | 2144
Table 3. Numerical comparison between Algorithm 3.1 [6] and the new algorithm for σ = 0.1.
m Nonzero Elements | Algorithm 3.1 [6]: Elapsed Time (s) | No. of Iter. | New Algorithm: Elapsed Time (s) | No. of Iter.
m = 8 | 0.3848 | 2059 | 0.1292 | 811
m = 16 | 0.5229 | 3598 | 0.2583 | 1417
m = 24 | 0.5954 | 4098 | 0.2927 | 1562
m = 32 | 0.9350 | 5537 | 0.4221 | 2218
m = 40 | 1.1025 | 7350 | 0.5007 | 2865
