Article

Parallel Hybrid Algorithms for a Finite Family of G-Nonexpansive Mappings and Its Application in a Novel Signal Recovery

by Suthep Suantai 1,2, Kunrada Kankam 3, Watcharaporn Cholamjiak 3,* and Watcharaporn Yajai 3
1 Research Group in Mathematics and Applied Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(12), 2140; https://doi.org/10.3390/math10122140
Submission received: 26 May 2022 / Revised: 11 June 2022 / Accepted: 16 June 2022 / Published: 20 June 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract: This article considers a parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs and suggests iterative schemes for finding a common fixed point by two different hybrid projection methods. Strong convergence theorems are proved under suitable conditions, and the computational performance of our algorithms is compared with that of some existing methods. Finally, we give numerical experiments showing the efficiency of our algorithms on LASSO problems in signal recovery with different types of blurred matrices and noise.

1. Introduction

Let C be a nonempty subset of a real Banach space X, and let Δ denote the diagonal of the Cartesian product C × C, i.e., Δ = {(s, s) : s ∈ C}. Let G be a directed graph such that the set V(G) of its vertices coincides with C and E(G) is the set of its edges. We assume that G has no parallel edges and that Δ ⊆ E(G), so that G is identified with the pair (V(G), E(G)). A mapping Θ : C → C is said to be G-nonexpansive if Θ preserves the edges of G, i.e.,
(Θs, Θt) ∈ E(G) for all (s, t) ∈ E(G), (1)
and Θ does not increase the weights of the edges of G in the following way:
‖Θs − Θt‖ ≤ ‖s − t‖ for all (s, t) ∈ E(G). (2)
It is easy to see that G-nonexpansive mappings generalize nonexpansive mappings. Many problems in the mathematical sciences have been solved by approximating a fixed point of a nonexpansive mapping in various metric spaces, and many mathematicians have proposed iterative sequences for finding fixed points and their applications; see [1,2,3]. One of the most famous is the S-iteration method introduced by Agarwal et al. [4] for some operators in normed linear spaces. Recently, Suparatulatorn et al. [2] used the S-iteration method for finding a common fixed point of two G-nonexpansive mappings Θ_1, Θ_2 in Hilbert spaces endowed with directed graphs, and weak convergence was proved under some conditions on the parameters. This modified S-iteration method is defined as Algorithm 1:
Algorithm 1: Choose s_1 ∈ C, and let {α_n}, {β_n} be real sequences in [0, 1].
(STEP 1) Compute
u_n = (1 − β_n)s_n + β_n Θ_1 s_n.
(STEP 2) Construct s_{n+1} by
s_{n+1} = (1 − α_n)Θ_1 s_n + α_n Θ_2 u_n, n ≥ 1.
(STEP 3) Set n := n + 1 and go to (STEP 1).
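To make the scheme concrete, the following is a minimal numerical sketch of Algorithm 1 in Python. The two mappings and the constant parameters are illustrative stand-ins (a contraction and a ball projection on R², both nonexpansive with common fixed point 0), not the mappings studied in [2].

```python
import numpy as np

def theta1(x):
    # illustrative nonexpansive map: a contraction with fixed point 0
    return 0.5 * x

def theta2(x):
    # illustrative nonexpansive map: metric projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def s_iteration(s, alpha=0.5, beta=0.5, n_iter=100):
    """Modified S-iteration (Algorithm 1) with constant parameters."""
    for _ in range(n_iter):
        u = (1 - beta) * s + beta * theta1(s)            # STEP 1
        s = (1 - alpha) * theta1(s) + alpha * theta2(u)  # STEP 2
    return s

print(s_iteration(np.array([3.0, -2.0])))  # approaches the common fixed point 0
```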
In 2017, Sridarat et al. [5] studied the convergence of the SP-iteration in Hilbert spaces endowed with graphs and proved a weak convergence theorem. The iteration in Algorithm 2 is known as the SP-iteration:
Algorithm 2: Choose s_1 ∈ C, and let {α_n}, {β_n}, {γ_n} be real sequences in [0, 1].
(STEP 1) Compute
u_n = (1 − γ_n)s_n + γ_n Θ_3 s_n.
(STEP 2) Compute
t_n = (1 − β_n)u_n + β_n Θ_2 u_n.
(STEP 3) Construct s_{n+1} by
s_{n+1} = (1 − α_n)t_n + α_n Θ_1 t_n, n ≥ 1.
(STEP 4) Set n := n + 1 and go to (STEP 1).
In studying fixed-point algorithms, the rate of convergence is very important for showing the efficiency of an algorithm. Recently, Yambangwai et al. [6] introduced a new modified three-step iteration method for three different G-nonexpansive mappings in Banach spaces with a graph and showed a better rate of convergence than the method of Sridarat et al. [5]; a weak convergence theorem was also obtained under suitable conditions. This method is defined as Algorithm 3:
Algorithm 3: Choose s_1 ∈ C, and let {α_n}, {β_n}, {γ_n} be real sequences in [0, 1].
(STEP 1) Compute
u_n = (1 − γ_n)s_n + γ_n Θ_3 s_n.
(STEP 2) Compute
t_n = (1 − β_n)u_n + β_n Θ_2 u_n.
(STEP 3) Construct s_{n+1} by
s_{n+1} = (1 − α_n)Θ_2 u_n + α_n Θ_1 t_n, n ≥ 1.
(STEP 4) Set n := n + 1 and go to (STEP 1).
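The only difference between Algorithms 2 and 3 is the last step: Algorithm 3 combines Θ_2 u_n with Θ_1 t_n instead of t_n with Θ_1 t_n. A side-by-side sketch in Python (the maps and parameters are illustrative, not those used in [5,6]):

```python
import numpy as np

# Three illustrative nonexpansive maps on R^2 with common fixed point 0.
theta1 = lambda x: 0.5 * x
theta2 = lambda x: 0.9 * np.array([x[1], x[0]])   # scaled coordinate swap
theta3 = lambda x: 0.8 * x

def sp_iteration(s, a=0.5, b=0.5, c=0.5, n_iter=200):
    """Algorithm 2 (SP-iteration) with constant parameters."""
    for _ in range(n_iter):
        u = (1 - c) * s + c * theta3(s)              # STEP 1
        t = (1 - b) * u + b * theta2(u)              # STEP 2
        s = (1 - a) * t + a * theta1(t)              # STEP 3
    return s

def modified_three_step(s, a=0.5, b=0.5, c=0.5, n_iter=200):
    """Algorithm 3: same first two steps, different final combination."""
    for _ in range(n_iter):
        u = (1 - c) * s + c * theta3(s)              # STEP 1
        t = (1 - b) * u + b * theta2(u)              # STEP 2
        s = (1 - a) * theta2(u) + a * theta1(t)      # STEP 3
    return s

x0 = np.array([4.0, 1.0])
print(sp_iteration(x0), modified_three_step(x0))    # both tend to 0
```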
To obtain strong convergence theorems, Nakajo and Takahashi [7] proposed the following hybrid projection algorithm, well known as the CQ projection algorithm, for finding a fixed point of a nonexpansive mapping Θ in a real Hilbert space H. Here and in the sequel, for t ∈ H and a finite set {t^1, …, t^N} ⊂ H, argmax{‖t^i − t‖ : i = 1, 2, …, N} denotes an element x of this set such that ‖t^i − t‖ ≤ ‖x − t‖ for all i. They investigated the sequence {s_n} generated by Algorithm 4 as follows:
Algorithm 4: Choose s_1 ∈ C, Q_1 = C_1 = C, and {α_n} ⊂ [0, a] for some a ∈ [0, 1).
(STEP 1) Set
t_n = α_n s_n + (1 − α_n)Θ s_n.
(STEP 2) Compute
C_n = {z ∈ C : ‖t_n − z‖ ≤ ‖s_n − z‖}
and
Q_n = {z ∈ C : ⟨z − s_n, s_1 − s_n⟩ ≤ 0}.
(STEP 3) Construct s_{n+1} by
s_{n+1} = P_{C_n ∩ Q_n}(s_1), n ≥ 1.
(STEP 4) Set n := n + 1 and go to (STEP 1).
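In R^d, both C_n and Q_n are halfspaces: ‖t_n − z‖ ≤ ‖s_n − z‖ is equivalent to ⟨2(s_n − t_n), z⟩ ≤ ‖s_n‖² − ‖t_n‖², and Q_n is a halfspace by definition. The closed-form projection onto a halfspace is the basic building block of all CQ-type methods; a small sketch (an illustration, not code from [7]):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection of x onto the halfspace {z : <a, z> <= b}."""
    viol = np.dot(a, x) - b
    if viol <= 0:
        return x                            # x is already in the halfspace
    return x - (viol / np.dot(a, a)) * a    # step back along the normal

def cn_as_halfspace(s, t):
    """Rewrite C_n = {z : ||t - z|| <= ||s - z||} as <a, z> <= b."""
    return 2.0 * (s - t), np.dot(s, s) - np.dot(t, t)

s, t = np.array([2.0, 0.0]), np.array([1.0, 0.5])
a, b = cn_as_halfspace(s, t)
print(proj_halfspace(np.array([5.0, 5.0]), a, b))
```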
In 2015, Anh and Hieu [8,9] introduced a projection method, called the parallel monotone hybrid algorithm, for solving common fixed point problems of a finite family of quasi-ϕ-nonexpansive mappings {Θ_i}_{i=1}^N in Banach spaces. In a real Hilbert space, this algorithm reads as Algorithm 5:
Algorithm 5: Choose s_1 ∈ C, C_1 = C, and let {α_n} be a sequence in [0, 1].
(STEP 1) Set
t_n^i = α_n s_n + (1 − α_n)Θ_i s_n, i = 1, 2, …, N.
(STEP 2) Compute
t̄_n = argmax{‖t_n^i − s_n‖ : i = 1, 2, …, N}.
(STEP 3) Compute
C_{n+1} = {v ∈ C_n : ‖v − t̄_n‖ ≤ ‖v − s_n‖}.
(STEP 4) Construct s_{n+1} by
s_{n+1} = P_{C_{n+1}}(s_1), n ≥ 1.
(STEP 5) Set n := n + 1 and go to (STEP 1).
A strong convergence theorem was proved under the condition lim sup_{n→∞} α_n < 1 on the parameter. Recently, there have been several works involving parallel methods for solving fixed point problems (see [2,10,11]).
In this work, we study the parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces and introduce algorithms based on the hybrid projection method. We then prove strong convergence theorems for the proposed methods. Finally, some numerical experiments in signal recovery are provided to show the efficiency and implementation of our algorithms.

2. Main Results

In this section, we prove strong convergence theorems for a finite family of G-nonexpansive mappings and propose the projection Algorithm 6 for finding a common fixed point.
Assume that the six conditions below hold.
(1) The mappings Θ_i : C → C are G-nonexpansive for all i = 1, 2, …, N.
(2) The solution set F := ∩_{i=1}^N F(Θ_i) ≠ ∅, where F(Θ_i) denotes the fixed-point set of Θ_i.
(3) F is closed and F(Θ_i) × F(Θ_i) ⊆ E(G) for all i = 1, 2, …, N.
(4) {s_n} dominates p for all p ∈ F, and if there exists a subsequence {s_{n_k}} of {s_n} such that s_{n_k} ⇀ w ∈ C, then (s_{n_k}, w) ∈ E(G).
(5) The sequences {α_n^i} ⊂ [0, 1] satisfy lim inf_{n→∞} α_n^i(1 − α_n^i) > 0 for all i = 1, 2, …, N.
(6) lim inf_{n→∞} α_n^0 α_n^i > 0 for all i = 1, 2, …, N.
Algorithm 6: Choose s_1 ∈ C, Q_1 = C, and let {α_n^i} be sequences in [0, 1].
(STEP 1) Set
t_n^i = (1 − α_n^i)s_n + α_n^i Θ_i s_n, i = 1, 2, …, N.
(STEP 2) Compute
t̄_n = argmax{‖t_n^i − s_n‖ : i = 1, 2, …, N}.
(STEP 3) Compute
C_n = {v ∈ C : ‖t̄_n − v‖² ≤ ‖s_n − v‖²}
and
Q_n = {v ∈ Q_{n−1} : ⟨s_1 − s_n, s_n − v⟩ ≥ 0}.
(STEP 4) Construct s_{n+1} by
s_{n+1} = P_{C_n ∩ Q_n}(s_1).
(STEP 5) Set n := n + 1 and go to (STEP 1).
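A minimal finite-dimensional sketch of Algorithm 6, assuming C = R^d so that C_n and the accumulated sets Q_n are intersections of halfspaces. The projection in STEP 4 is computed here with Dykstra's alternating method, which is one convenient numerical choice rather than a step prescribed by the theory; the test mappings are illustrative.

```python
import numpy as np

def proj_halfspace(x, a, b):
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

def proj_intersection(x0, halfspaces, sweeps=200):
    """Dykstra's algorithm: projection onto an intersection of halfspaces."""
    x = x0.copy()
    corr = [np.zeros_like(x0) for _ in halfspaces]
    for _ in range(sweeps):
        for i, (a, b) in enumerate(halfspaces):
            y = proj_halfspace(x + corr[i], a, b)
            corr[i] = x + corr[i] - y
            x = y
    return x

def algorithm6(s1, thetas, alpha=0.5, n_iter=30):
    s, q_cuts = s1.copy(), []              # q_cuts accumulates the Q_n halfspaces
    for _ in range(n_iter):
        # STEP 1: parallel relaxation steps
        ts = [(1 - alpha) * s + alpha * th(s) for th in thetas]
        # STEP 2: the intermediate point farthest from s_n
        tbar = max(ts, key=lambda t: np.linalg.norm(t - s))
        # STEP 3: C_n and the new Q_n cut, each as a halfspace <a, v> <= b
        c_cut = (2.0 * (s - tbar), np.dot(s, s) - np.dot(tbar, tbar))
        q_cuts.append((s1 - s, np.dot(s1 - s, s)))   # <s1 - s_n, s_n - v> >= 0
        # STEP 4: project the anchor s1 onto C_n intersected with Q_n
        s = proj_intersection(s1, [c_cut] + q_cuts)
    return s

thetas = [lambda x: 0.5 * x, lambda x: 0.9 * np.array([x[1], x[0]])]
print(algorithm6(np.array([3.0, -1.0]), thetas))     # approaches the common fixed point 0
```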
Theorem 1.
Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let G = (V(G), E(G)) be a directed graph such that V(G) = C and E(G) is convex. Assume that {s_n} is generated by Algorithm 6. Then, under conditions (1)–(5), {s_n} converges strongly to w = P_F(s_1).
Proof. 
We divide the proof into five claims.
Claim 1: P_{C_n ∩ Q_n} is well defined for every s_1 ∈ H. By Theorem 3.2 of Tiammee et al. [12], F is closed and convex, and F(Θ_i) is convex for all i = 1, 2, …, N. From the definitions of C_n and Q_n and Lemma 2.2 in [13], C_n ∩ Q_n is closed and convex. Let p ∈ F. Since {s_n} dominates p and Θ_i is edge-preserving, we have (Θ_i s_n, p) ∈ E(G) for all i = 1, 2, …, N. This implies that (t_n^i, p) = ((1 − α_n^i)s_n + α_n^i Θ_i s_n, p) ∈ E(G) and, since E(G) is convex, we have
‖t_n^i − p‖ = ‖(1 − α_n^i)s_n + α_n^i Θ_i s_n − p‖ ≤ (1 − α_n^i)‖s_n − p‖ + α_n^i‖Θ_i s_n − p‖ ≤ ‖s_n − p‖. (3)
This implies that ‖t̄_n − p‖ ≤ ‖s_n − p‖, and thus p ∈ C_n; an easy induction on the definition of Q_n shows that p ∈ Q_n as well. Therefore F ⊆ C_n ∩ Q_n, and P_{C_n ∩ Q_n}(s_1) is well defined.
Claim 2: lim_{n→∞} ‖s_n − s_1‖ exists. Since F is a nonempty, closed, and convex subset of H, there exists a unique v ∈ F such that v = P_F(s_1). From s_{n+1} = P_{C_n ∩ Q_n}(s_1), we get
‖s_{n+1} − s_1‖ ≤ ‖z − s_1‖ for all z ∈ C_n ∩ Q_n and n ∈ ℕ.
On the other hand, since F ⊆ C_n ∩ Q_n, we obtain
‖s_{n+1} − s_1‖ ≤ ‖v − s_1‖ for all n ∈ ℕ.
Thus, {s_n} is bounded. Since s_n = P_{Q_n}(s_1) and s_{n+1} ∈ Q_n, we get
‖s_n − s_1‖ ≤ ‖s_{n+1} − s_1‖ for all n ∈ ℕ.
It follows that the sequence {‖s_n − s_1‖} is bounded and non-decreasing; therefore, lim_{n→∞} ‖s_n − s_1‖ exists.
Claim 3: lim_{n→∞} s_n = w ∈ C. For m > n, since s_m ∈ Q_{m−1} ⊆ Q_n and s_n = P_{Q_n}(s_1), it follows from the property of the metric projection that
‖s_m − s_n‖² ≤ ‖s_m − s_1‖² − ‖s_n − s_1‖².
Since lim_{n→∞} ‖s_n − s_1‖ exists by Claim 2, we get ‖s_m − s_n‖ → 0 as m, n → ∞, so {s_n} is a Cauchy sequence. From the completeness of a Hilbert space, there exists w ∈ C such that s_n → w as n → ∞. In particular, we get
lim_{n→∞} ‖s_{n+1} − s_n‖ = 0. (4)
Claim 4: w ∈ F. Since s_{n+1} ∈ C_n, it follows from (4) that
‖t̄_n − s_{n+1}‖ ≤ ‖s_n − s_{n+1}‖ → 0 as n → ∞.
Now, we obtain
‖t̄_n − s_n‖ ≤ ‖t̄_n − s_{n+1}‖ + ‖s_{n+1} − s_n‖,
that is,
lim_{n→∞} ‖t̄_n − s_n‖ = 0. (5)
We know that ‖t_n^i − s_n‖ ≤ ‖t̄_n − s_n‖. By (5), we obtain
lim_{n→∞} ‖t_n^i − s_n‖ = 0 (6)
for all i = 1, 2, …, N. It follows from the properties of a real Hilbert space that
‖t_n^i − p‖² = ‖(1 − α_n^i)s_n + α_n^i Θ_i s_n − p‖² = (1 − α_n^i)‖s_n − p‖² + α_n^i‖Θ_i s_n − p‖² − α_n^i(1 − α_n^i)‖Θ_i s_n − s_n‖² ≤ (1 − α_n^i)‖s_n − p‖² + α_n^i‖s_n − p‖² − α_n^i(1 − α_n^i)‖Θ_i s_n − s_n‖² = ‖s_n − p‖² − α_n^i(1 − α_n^i)‖Θ_i s_n − s_n‖². (7)
By (7), we obtain
α_n^i(1 − α_n^i)‖Θ_i s_n − s_n‖² ≤ ‖s_n − p‖² − ‖t_n^i − p‖². (8)
By condition (5) together with (6), (8), and the boundedness of {s_n}, we obtain
lim_{n→∞} ‖Θ_i s_n − s_n‖ = 0.
Since s_n → w as n → ∞, condition (1) and Lemma 6 in [2] imply that w ∈ F.
Claim 5: w = P_F(s_1). Since s_{n+1} = P_{C_n ∩ Q_n}(s_1) and w ∈ F ⊆ C_n ∩ Q_n, we have
⟨s_1 − s_n, s_n − p⟩ ≥ 0 for all p ∈ C_n ∩ Q_n. (9)
By taking the limit in (9) and using F ⊆ C_n ∩ Q_n for every n, we obtain
⟨s_1 − w, w − p⟩ ≥ 0 for each p ∈ F,
which gives w = P_F(s_1). This completes the proof. □
We note that if E(G) = C × C, then a G-nonexpansive mapping is precisely a nonexpansive mapping. As a direct consequence of Theorem 1, we have the following corollary.
Corollary 1.
Assume that {s_n} is a sequence generated by Algorithm 6. Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let Θ_i : C → C be nonexpansive mappings for all i = 1, 2, …, N such that F := ∩_{i=1}^N F(Θ_i) ≠ ∅. Then, under conditions (3)–(5), {s_n} converges strongly to w = P_F(s_1).
Next, we propose the following Algorithm 7:
Algorithm 7: Choose s_1 ∈ C, Q_1 = C, and let {α_n^i} ⊂ [0, 1] for all i = 0, 1, …, N be such that Σ_{i=0}^N α_n^i = 1 for all n ≥ 1.
(STEP 1) Set
t_n = α_n^0 s_n + Σ_{i=1}^N α_n^i Θ_i s_n.
(STEP 2) Compute
C_n = {v ∈ C : ‖v − t_n‖² ≤ ‖v − s_n‖²}
and
Q_n = {v ∈ Q_{n−1} : ⟨s_1 − s_n, s_n − v⟩ ≥ 0}.
(STEP 3) Compute
s_{n+1} = P_{C_n ∩ Q_n}(s_1).
(STEP 4) Set n := n + 1 and go to (STEP 1).
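Algorithm 7 differs from Algorithm 6 only in STEP 1: the N relaxation steps and the argmax selection are replaced by a single convex combination, after which the same projection machinery applies. A short sketch of the modified step (the weights below are hypothetical and must sum to 1):

```python
import numpy as np

def convex_combination_step(s, thetas, weights):
    """STEP 1 of Algorithm 7: t_n = a_0 s_n + sum_i a_i Theta_i(s_n)."""
    assert len(weights) == len(thetas) + 1
    assert abs(sum(weights) - 1.0) < 1e-12
    t = weights[0] * s
    for w, theta in zip(weights[1:], thetas):
        t = t + w * theta(s)
    return t

thetas = [lambda x: 0.5 * x, lambda x: 0.8 * x]
print(convex_combination_step(np.array([2.0, 2.0]), thetas, [0.1, 0.45, 0.45]))
```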
Theorem 2.
Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let G = (V(G), E(G)) be a directed graph such that V(G) = C and E(G) is convex. Assume that {s_n} is generated by Algorithm 7. Then, under conditions (1)–(4) and (6), {s_n} converges strongly to w = P_F(s_1).
Proof. 
We shall show that P_{C_n ∩ Q_n} is well defined and F ⊆ C_n ∩ Q_n for all n ≥ 1. As in Claim 1 of Theorem 1, C_n ∩ Q_n is closed and convex for all n ≥ 1. Moreover, for p ∈ F,
‖t_n − p‖ = ‖α_n^0 s_n + Σ_{i=1}^N α_n^i Θ_i s_n − p‖ ≤ α_n^0‖s_n − p‖ + Σ_{i=1}^N α_n^i‖Θ_i s_n − p‖ ≤ ‖s_n − p‖. (10)
Thus, p ∈ C_n, and therefore F ⊆ C_n ∩ Q_n, which implies that P_{C_n ∩ Q_n}(s_1) is well defined. By the same proof as in Claims 2–3 of Theorem 1, lim_{n→∞} ‖s_n − s_1‖ exists and there exists w ∈ C such that s_n → w as n → ∞. In particular, we have
lim_{n→∞} ‖s_{n+1} − s_n‖ = 0. (11)
Next, we show that w ∈ F. Since s_{n+1} ∈ C_n, it follows from (11) that
‖t_n − s_n‖ ≤ ‖t_n − s_{n+1}‖ + ‖s_{n+1} − s_n‖ ≤ 2‖s_{n+1} − s_n‖ → 0
as n → ∞. For p ∈ F, it follows from Lemma 2.1 in [14] and the fact that {s_n} dominates p that
‖t_n − p‖² = ‖α_n^0 s_n + Σ_{i=1}^N α_n^i Θ_i s_n − p‖² ≤ α_n^0‖s_n − p‖² + Σ_{i=1}^N α_n^i‖Θ_i s_n − p‖² − Σ_{i=1}^N α_n^0 α_n^i‖Θ_i s_n − s_n‖² ≤ ‖s_n − p‖² − Σ_{i=1}^N α_n^0 α_n^i‖Θ_i s_n − s_n‖².
This implies that
Σ_{i=1}^N α_n^0 α_n^i‖Θ_i s_n − s_n‖² ≤ ‖s_n − p‖² − ‖t_n − p‖². (12)
By condition (6) and (12), we obtain
lim_{n→∞} ‖Θ_i s_n − s_n‖ = 0
for all i = 1, 2, …, N. Since s_n → w as n → ∞, condition (1) and Lemma 6 in [2] give w ∈ F. By the same proof as in Claim 5 of Theorem 1, we obtain w = P_F(s_1). This completes the proof. □
Corollary 2.
Assume that {s_n} is a sequence generated by Algorithm 7. Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let Θ_i : C → C be nonexpansive mappings for all i = 1, 2, …, N such that F := ∩_{i=1}^N F(Θ_i) ≠ ∅. Then, under conditions (3), (4), and (6), {s_n} converges strongly to w = P_F(s_1).

3. Numerical Experiments

In this section, we give numerical results to support our main theorems, beginning with an example in the Euclidean space R³.
Example 1.
Let H = R³ and C = [0, ∞) × [−10, 5] × [0, ∞). Assume that (s, t) ∈ E(G) if and only if 1 ≤ s_1, t_1, −9 ≤ s_2, t_2 ≤ 1.5, and 0 ≤ s_3, t_3 ≤ 1.25, or s = t, for all s = (s_1, s_2, s_3), t = (t_1, t_2, t_3) ∈ C. Define mappings Θ_1, Θ_2, Θ_3 : C → C by
Θ_1 s = (log(s_1/2) + 2, arctan(s_2)/4, 1); Θ_2 s = (2, 0, tan(s_3 − 1)/4 + 1); Θ_3 s = (2, (e^{s_2} − 1)/2, 1)
for all s = (s_1, s_2, s_3) ∈ C. It is easy to check that Θ_1 and Θ_2 are G-nonexpansive and that F(Θ_1) ∩ F(Θ_2) = {(2, 0, 1)}. On the other hand, Θ_1 is not nonexpansive, since for s = (0.31, 1, 7) and t = (0.22, 1, 7) we have ‖Θ_1 s − Θ_1 t‖ > 0.1 > ‖s − t‖. Similarly, Θ_2 is not nonexpansive, since for s = (5, 0.5, 2.11) and t = (5, 0.5, 2.28) we have ‖Θ_2 s − Θ_2 t‖ > 0.3 > ‖s − t‖; and Θ_3 is not nonexpansive, since for s = (1, 1.19, 0.2) and t = (1, 1.02, 0.2) we have ‖Θ_3 s − Θ_3 t‖ > 0.2 > ‖s − t‖.
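These claims are easy to verify numerically. The sketch below implements the three mappings as defined above (assuming the natural logarithm in Θ_1) and evaluates each counterexample pair:

```python
import numpy as np

def theta1(s):
    return np.array([np.log(s[0] / 2) + 2, np.arctan(s[1]) / 4, 1.0])

def theta2(s):
    return np.array([2.0, 0.0, np.tan(s[2] - 1) / 4 + 1])

def theta3(s):
    return np.array([2.0, (np.exp(s[1]) - 1) / 2, 1.0])

def check(theta, s, t):
    """Return (||Theta s - Theta t||, ||s - t||) for a counterexample pair."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    return np.linalg.norm(theta(s) - theta(t)), np.linalg.norm(s - t)

print(check(theta1, (0.31, 1, 7), (0.22, 1, 7)))      # approx (0.34, 0.09)
print(check(theta2, (5, 0.5, 2.11), (5, 0.5, 2.28)))  # approx (0.33, 0.17)
print(check(theta3, (1, 1.19, 0.2), (1, 1.02, 0.2)))  # approx (0.26, 0.17)
```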
In this section, CPU and Iter denote the CPU time and the number of iterations, respectively. All numerical experiments presented were obtained with MATLAB R2019b running on the same laptop computer. In our experiment, we consider the following three cases:
Case 1: Algorithm 6 with α_n = 0.1 and the initial point (1.01, −0.02, 1.26).
Case 2: Algorithm 6 with α_n = 0.5 and the initial point (1.01, −0.02, 1.26).
Case 3: Algorithm 6 with α_n = 0.9 and the initial point (1.01, −0.02, 1.26).
The numerical results are reported as follows:
From Table 1 and Figure 1, we see that inputting two or more mappings Θ_i into the proposed parallel monotone hybrid Algorithm 6 requires fewer iterations than inputting one. When all three mappings are inputted, slightly more CPU time is required than in some cases with one or two mappings.
Next, we give an experiment for Algorithm 7 with the following nine cases for the parameters α_i:
Case 1: α_0 = 0.1, α_1 = 0.9/3, α_2 = 0.9/3, and α_3 = 0.9/3.
Case 2: α_0 = 0.5, α_1 = 0.5/3, α_2 = 0.5/3, and α_3 = 0.5/3.
Case 3: α_0 = 0.9, α_1 = 0.1/3, α_2 = 0.1/3, and α_3 = 0.1/3.
Case 4: α_0 = 0.1, α_1 = 0.9/6, α_2 = 2(0.9/6), and α_3 = 3(0.9/6).
Case 5: α_0 = 0.1, α_1 = 0.9/6, α_2 = 3(0.9/6), and α_3 = 2(0.9/6).
Case 6: α_0 = 0.1, α_1 = 2(0.9/6), α_2 = 0.9/6, and α_3 = 3(0.9/6).
Case 7: α_0 = 0.1, α_1 = 2(0.9/6), α_2 = 3(0.9/6), and α_3 = 0.9/6.
Case 8: α_0 = 0.1, α_1 = 3(0.9/6), α_2 = 0.9/6, and α_3 = 2(0.9/6).
Case 9: α_0 = 0.1, α_1 = 3(0.9/6), α_2 = 2(0.9/6), and α_3 = 0.9/6.
For all cases, we choose the initial point (1.48, −0.06, 2.72).
From Table 2, we see that the CPU time and the number of iterations of Algorithm 7 decrease as the parameter α_0 approaches 0; moreover, the ratio 3 : 2 : 1 among the remaining weights affects the CPU time and the number of iterations when several mappings Θ_i are inputted.
Next, we present some numerical examples in signal recovery and provide a comparison between Algorithms 2, 3, 6, and 7. In this setting, we set Θ(s_n) = prox_{λg}(s_n − λ∇f(s_n)). It is known that Θ is a nonexpansive mapping when λ ∈ (0, 2/L), where L is the Lipschitz constant of ∇f. Compressed sensing can be modeled as the following underdetermined linear equation system:
t = As + ϵ, (13)
where s ∈ R^N is the original signal vector, t ∈ R^M is the observed signal disturbed by the filter operator A : R^N → R^M (M < N) and the noise ϵ. It is known that a solution of (13) can be obtained by solving the LASSO problem:
min_{s ∈ R^N} (1/2)‖t − As‖_2² + λ‖s‖_1, (14)
where λ > 0. We can therefore apply our method to solve (14) with f(s) = (1/2)‖t − As‖_2² and g(s) = λ‖s‖_1. Note that ∇f(s) = A^T(As − t).
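Since g is the ℓ₁-norm, the proximal operator is componentwise soft thresholding, so Θ can be implemented in a few lines. A minimal sketch (the data sizes, regularization weight, and plain Picard loop are illustrative choices, not the experimental setup below):

```python
import numpy as np

def soft_threshold(x, tau):
    """prox of tau * ||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def make_theta(A, t, lam_reg):
    """Theta(s) = prox_{step*g}(s - step * grad f(s)) with g = lam_reg * ||.||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    step = 1.0 / L                           # any step in (0, 2/L) keeps Theta nonexpansive
    def theta(s):
        grad = A.T @ (A @ s - t)             # grad f(s) = A^T (A s - t)
        return soft_threshold(s - step * grad, step * lam_reg)
    return theta

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
s_true = np.zeros(100)
s_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
theta = make_theta(A, A @ s_true, lam_reg=0.01)
s = np.zeros(100)
for _ in range(2000):                        # plain Picard iteration on Theta (ISTA)
    s = theta(s)
print(np.round(s[[5, 40, 77]], 3))           # close to the true spikes
```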
The goal of this paper is to remove noise without knowing its type. Thus, we focus on the following family of problems:
min_{s ∈ R^N} (1/2)‖A_1 s − t_1‖_2² + λ_1‖s‖_1, min_{s ∈ R^N} (1/2)‖A_2 s − t_2‖_2² + λ_2‖s‖_1, …, min_{s ∈ R^N} (1/2)‖A_N s − t_N‖_2² + λ_N‖s‖_1, (15)
where s is the original signal, A_i is a bounded linear operator, and t_i is the observed signal with noise, for all i = 1, 2, …, N. We can apply Algorithms 6 and 7 to solve problem (15) by setting Θ_i s_n = prox_{λ_i g_i}(s_n − λ_i ∇f_i(s_n)).
In our experiment, the observations t_1, t_2, t_3 are generated by white Gaussian noise with different signal-to-noise ratios (SNR), and the matrices A_1, A_2, A_3 ∈ R^{M×N} are generated from the normal distribution with mean zero and variance one. The initial point s_1 is picked randomly. We use the mean squared error (MSE) to measure the restoration accuracy and stop the iteration when
MSE = (1/N)‖s_n − s*‖_2² < 10⁻⁵,
where s* is an estimated signal of s.
In what follows, let α_n^i = 0.5 for all i = 1, 2, 3, and let the step sizes be λ_1 = 1/‖A_1‖², λ_2 = 1/‖A_2‖², and λ_3 = 1/‖A_3‖².
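For reference, here is a sketch of this experimental setup (a signal with 20 random spikes, three Gaussian sensing matrices, observations at SNR 40/50/60 dB, step sizes λ_i = 1/‖A_i‖², and the MSE stopping rule). The simple averaged iteration driving the operators is an assumption of this sketch standing in for the hybrid projection step, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def make_theta(A, t, lam_reg=0.01):
    step = 1.0 / np.linalg.norm(A, 2) ** 2                 # lambda_i = 1 / ||A_i||^2
    return lambda s: soft_threshold(s - step * (A.T @ (A @ s - t)), step * lam_reg)

def add_awgn(x, snr_db, rng):
    """Add white Gaussian noise at the given SNR in dB."""
    p_noise = np.mean(x ** 2) / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

N, M = 1024, 512
rng = np.random.default_rng(1)
s_true = np.zeros(N)
s_true[rng.choice(N, 20, replace=False)] = rng.normal(size=20)   # 20 spikes

thetas = []
for snr in (40, 50, 60):                                   # three noisy observations
    A = rng.standard_normal((M, N))
    thetas.append(make_theta(A, add_awgn(A @ s_true, snr, rng)))

s = rng.standard_normal(N)                                 # random initial point
for k in range(20000):
    s = np.mean([th(s) for th in thetas], axis=0)          # averaged stand-in step
    if np.mean((s - s_true) ** 2) < 1e-5:                  # MSE stopping rule
        break
print(k + 1, np.mean((s - s_true) ** 2))
```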
The numerical results are reported below. The performance of the proposed Algorithm 6 is tested with the original signal shown in Figure 2.
The different types of blurred matrices A_1, A_2, and A_3 are shown in Figure 3.
The results of Algorithm 6 with N = 1, obtained by inputting A_i, i = 1, 2, 3, in the following three cases:
Case 1.1: inputting A_1, SNR = 40;
Case 1.2: inputting A_2, SNR = 50;
Case 1.3: inputting A_3, SNR = 60;
are shown in Figure 4, which displays the recovered signals.
Next, we find the common solutions of the signal recovery problem (15) with N ≥ 2 using Algorithm 6, in the following cases:
Case 2.1: inputting A_1, SNR = 40 and A_2, SNR = 50;
Case 2.2: inputting A_1, SNR = 40 and A_3, SNR = 60;
Case 2.3: inputting A_2, SNR = 50 and A_3, SNR = 60;
Case 3.1: inputting A_1, SNR = 40, A_2, SNR = 50, and A_3, SNR = 60.
The recovered signals for Cases 2.1–2.3 are shown in Figure 5.
From Figure 4 and Figure 5, we see that Cases 2.1–2.3 require fewer iterations and less CPU time than Cases 1.1–1.3 in all cases.
Finally, we present the common solution of the signal recovery problem (15) with N = 3 (Case 3.1) and a comparison with Algorithms 2 and 3, shown in Figure 6.
From Figure 5 and Figure 6, we see that Case 3.1 requires fewer iterations and less CPU time than Cases 2.1–2.3 in all cases. This means that the efficiency of the proposed Algorithm 6 improves as the number of subproblems increases. Moreover, our proposed Algorithm 6 requires less CPU time and fewer iterations than the other two algorithms. Figure 7 shows the efficiency of our parallel Algorithm 6 in all cases and a comparison with the other two algorithms in terms of MSE versus the number of iterations.
Next, we analyze the convergence of Algorithm 6 and the effect of {α_n} in each case.
From Table 3, we observe that the CPU time and the number of iterations of Algorithm 6 decrease slightly as the parameter {α_n} approaches 1. Figure 8 and Figure 9 show the numerical results for each {α_n} in Table 3.
Next, we provide a comparison between Algorithms 2, 3, and 7. For convenience, we keep all settings as in the previous example.
The performance of the proposed Algorithm 7 is tested with the original signal shown in Figure 10. The different types of blurred matrices A_1, A_2, and A_3 are shown in Figure 11.
The results of Algorithm 7 with N = 1, obtained by inputting A_i, i = 1, 2, 3, in the following three cases:
Case 1.1: inputting A_1, SNR = 40;
Case 1.2: inputting A_2, SNR = 50;
Case 1.3: inputting A_3, SNR = 60;
are shown in Figure 12, which displays the recovered signals.
Next, we find the common solutions of the signal recovery problem (15) with N ≥ 2 using Algorithm 7, in the following cases:
Case 2.1: inputting A_1, SNR = 40 and A_2, SNR = 50;
Case 2.2: inputting A_1, SNR = 40 and A_3, SNR = 60;
Case 2.3: inputting A_2, SNR = 50 and A_3, SNR = 60;
Case 3.1: inputting A_1, SNR = 40, A_2, SNR = 50, and A_3, SNR = 60.
The recovered signals for Cases 2.1–2.3 are shown in Figure 13.
From Figure 12 and Figure 13, we see that Cases 2.1–2.3 require fewer iterations and less CPU time than Cases 1.1–1.3 in all cases. Finally, we present the common solution of the signal recovery problem (15) with N = 3 (Case 3.1) and a comparison with Algorithms 2 and 3, shown in Figure 14.
Figure 15 shows the efficiency of our parallel Algorithm 7 in all cases and a comparison with the other two algorithms in terms of MSE versus the number of iterations.
From Figure 14 and Figure 15, we see that Case 3.1 requires fewer iterations and less CPU time than Cases 2.1–2.3 in all cases. This means that the efficiency of the proposed Algorithm 7 also improves as the number of subproblems increases. Moreover, our proposed Algorithm 7 requires less CPU time and fewer iterations than the other two algorithms.

4. Conclusions

This paper proposes two parallel hybrid projection algorithms for finding a common fixed point of a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a directed graph. Under suitable conditions on the parameters of the two algorithms, we obtain strong convergence theorems. We also give numerical experiments to support our main results and compare the convergence rates of the proposed methods with two existing methods. The experiments show that our algorithms exhibit better convergence behavior than these methods.

Author Contributions

Funding acquisition and supervision, S.S.; writing—review and editing, W.C.; writing—original draft, W.Y.; software, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Chiang Mai University, Thailand.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research was supported by Chiang Mai University, the Thailand Science Research and Innovation Fund, and University of Phayao (Grant No. FF65-RIM072; FF65-UoE002).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khuangsatung, W.; Kangtunyakarn, A. The Method for Solving Fixed Point Problem of G-Nonexpansive Mapping in Hilbert Spaces Endowed with Graphs and Numerical Example. Indian J. Pure Appl. Math. 2020, 51, 155–170.
  2. Suparatulatorn, R.; Cholamjiak, W.; Suantai, S. A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs. Numer. Algorithms 2018, 77, 479–490.
  3. Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 1–8.
  4. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61.
  5. Sridarat, P.; Suparaturatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380.
  6. Yambangwai, D.; Aunruean, S.; Thianwan, T. A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph. Numer. Algorithms 2020, 84, 537–565.
  7. Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379.
  8. Anh, P.K.; Van Hieu, D. Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  9. Anh, P.K.; Van Hieu, D. Parallel hybrid iterative methods for variational inequalities, equilibrium problems, and common fixed point problems. Vietnam J. Math. 2016, 44, 351–374.
  10. Cholamjiak, P.; Suantai, S.; Sunthrayuth, P. An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach spaces. Banach J. Math. Anal. 2020, 14, 20–40.
  11. Hieu, D.V. Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Mat. 2017, 28, 677–692.
  12. Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 1–12.
  13. Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. Theory Methods Appl. 2006, 64, 2400–2411.
  14. Chidume, C.E.; Ezeora, J.N. Krasnoselskii-type algorithm for family of multi-valued strictly pseudo-contractive mappings. Fixed Point Theory Appl. 2014, 2014, 1–7.
Figure 1. Graph of the number of iterations versus error.
Figure 2. The original signal (N = 1024, M = 512, and 20 spikes).
Figure 3. Measured values with SNR = 40, 50, 60, respectively.
Figure 4. Recovered signal by Case 1.1, Case 1.2, and Case 1.3, respectively.
Figure 5. Recovered signal by Case 2.1, Case 2.2, and Case 2.3, respectively.
Figure 6. Recovered signal by Algorithms 2, 3, and 6 inputting A_1, A_2, and A_3, respectively.
Figure 7. MSE versus number of iterations in the case N = 1024 and M = 512.
Figure 8. Graph of the number of iterations versus error, where N = 512 and M = 256.
Figure 9. Graph of the number of iterations versus MSE, where N = 1024 and M = 512.
Figure 10. The original signal (N = 1024, M = 512, and 20 spikes).
Figure 11. Measured values with SNR = 40, 50, 60, respectively.
Figure 12. Recovered signal by Case 1.1, Case 1.2, and Case 1.3, respectively.
Figure 13. Recovered signal by Case 2.1, Case 2.2, and Case 2.3, respectively.
Figure 14. Recovered signal by Algorithms 2, 3, and 7 inputting A_1, A_2, and A_3, respectively.
Figure 15. MSE versus number of iterations in the case N = 1024 and M = 512.
Table 1. The convergence behavior of inputting Θ_i, i = 1, 2, 3; stop condition (Cauchy error) < 10⁻⁹.

               Θ_1     Θ_2     Θ_3     Θ_1,Θ_2  Θ_1,Θ_3  Θ_2,Θ_3  Θ_1,Θ_2,Θ_3
Case 1  Iter   369     348     418     313      339      311      301
        CPU    0.0192  0.0130  0.0113  0.0161   0.0192   0.0192   0.0166
Case 2  Iter   78      72      89      65       71       64       61
        CPU    0.0194  0.0122  0.0101  0.0166   0.0166   0.0164   0.0209
Case 3  Iter   52      54      50      43       37       36       35
        CPU    0.0223  0.0121  0.0109  0.0127   0.0181   0.0191   0.0199
Table 2. The convergence behavior of inputting Θ_i, i = 1, 2, 3; stop condition (Cauchy error) < 10⁻⁹.

               Θ_1      Θ_2     Θ_3     Θ_1,Θ_2  Θ_1,Θ_3  Θ_2,Θ_3  Θ_1,Θ_2,Θ_3
Case 1  Iter   43       42      60      37       44       44       37
        CPU    0.01127  0.0081  0.0066  0.0061   0.0097   0.0087   0.0075
Case 2  Iter   73       81      103     68       104      70       68
        CPU    0.0118   0.0063  0.0053  0.0075   0.0099   0.0073   0.0076
Case 3  Iter   357      419     448     335      365      340      324
        CPU    0.0125   0.0085  0.0053  0.0075   0.0099   0.0073   0.0076
Case 4  Iter   43       42      60      35       51       53       40
        CPU    0.0123   0.0061  0.0048  0.0084   0.0109   0.0060   0.0109
Case 5  Iter   43       42      60      35       51       37       36
        CPU    0.0120   0.0062  0.0061  0.0061   0.0086   0.0087   0.0102
Case 6  Iter   43       42      60      40       47       38       41
        CPU    0.0143   0.0089  0.0056  0.0070   0.0082   0.0078   0.0098
Case 7  Iter   43       42      60      35       47       37       49
        CPU    0.0122   0.0067  0.0051  0.0053   0.0105   0.0085   0.0094
Case 8  Iter   43       42      60      40       43       38       41
        CPU    0.0124   0.0071  0.0056  0.0062   0.0079   0.0088   0.0103
Case 9  Iter   43       42      60      35       43       53       34
        CPU    0.00126  0.0057  0.0048  0.0065   0.0085   0.0075   0.0142
Table 3. The convergence of Algorithm 6 with each {α_n}. Given: random initial point, stop condition (Cauchy error) < 10⁻⁹.

{α_n}                      SNR     N = 512, M = 256 (m = 20)    N = 1024, M = 512 (m = 20)
                                   Iter      CPU                Iter      CPU
(4n² + 12)/(20n² + 10)     40      21,189    206.1261           32,906    1001.1
                           50      22,454    232.5960           34,356    1084.8
                           40, 50  5529      16.665             18,145    48.778
(25n⁵ + 15)/(50n⁵ + 10)    40      7360      27.2077            14,285    201.2996
                           50      7560      28.3236            13,531    182.0590
                           40, 50  2210      3.4878             3610      20.8226
(81n⁹ + 19)/(90n⁹ + 10)    40      5190      14.0915            8534      77.5054
                           50      4316      9.8401             7622      64.8060
                           40, 50  1301      1.5145             1660      6.7987