Article

An Accelerated Fixed-Point Algorithm with an Inertial Technique for a Countable Family of G-Nonexpansive Mappings Applied to Image Recovery

by Kobkoon Janngam 1 and Rattanakorn Wattanataweekul 2,*
1 Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Department of Mathematics, Statistics and Computer, Faculty of Science, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 662; https://doi.org/10.3390/sym14040662
Submission received: 8 March 2022 / Revised: 19 March 2022 / Accepted: 21 March 2022 / Published: 24 March 2022
(This article belongs to the Special Issue New Directions in Theory of Approximation and Related Problems)

Abstract:
Many authors have proposed fixed-point algorithms for obtaining a fixed point of G-nonexpansive mappings without using inertial techniques. To improve convergence behavior, some accelerated fixed-point methods have been introduced. The main aim of this paper is to use a coordinate affine structure to create an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings in a Hilbert space with a symmetric directed graph G and prove the weak convergence theorem of the proposed algorithm. As an application, we apply our proposed algorithm to solve image restoration and convex minimization problems. The numerical experiments show that our algorithm is more efficient than FBA, FISTA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration.

1. Introduction

Let H be a real Hilbert space with norm ‖·‖ and let C be a nonempty closed convex subset of H. A mapping T : C → C is said to be nonexpansive if it satisfies the following symmetric contractive-type condition:
‖Tx − Ty‖ ≤ ‖x − y‖
for all x, y ∈ C; see [1].
The set of all fixed points of T is denoted by F(T) := {x ∈ C : x = Tx}.
Many mathematicians have studied iterative schemes for approximating fixed points of nonexpansive mappings over many years; see [2,3]. One of the best known and most popular is the Picard iteration process, defined by
x_{n+1} = T x_n,
where n ≥ 1 and the initial point x_1 is chosen arbitrarily.
The iterative process of Picard has been developed extensively by many mathematicians, as follows:
The Mann iteration process [4] is defined by
x_{n+1} = (1 − ρ_n) x_n + ρ_n T x_n,
where n ≥ 1, the initial point x_1 is chosen arbitrarily and {ρ_n} is a sequence in [0, 1].
The Ishikawa iteration process [5] is defined by
y_n = (1 − ζ_n) x_n + ζ_n T x_n,
x_{n+1} = (1 − ρ_n) x_n + ρ_n T y_n,
where n ≥ 1, the initial point x_1 is chosen arbitrarily and {ζ_n}, {ρ_n} are sequences in [0, 1].
The S-iteration process [6] is defined by
y_n = (1 − ζ_n) x_n + ζ_n T x_n,
x_{n+1} = (1 − ρ_n) T x_n + ρ_n T y_n,
where n ≥ 1, the initial point x_1 is chosen arbitrarily and {ζ_n}, {ρ_n} are sequences in [0, 1]. The S-iteration process is independent of the Mann and Ishikawa iterative schemes and converges faster than both; see [6].
The Noor iteration process [7] is defined by
z_n = (1 − η_n) x_n + η_n T x_n,
y_n = (1 − ζ_n) x_n + ζ_n T z_n,
x_{n+1} = (1 − ρ_n) x_n + ρ_n T y_n,
where n ≥ 1, the initial point x_1 is chosen arbitrarily and {η_n}, {ζ_n}, {ρ_n} are sequences in [0, 1]. The Mann and Ishikawa iterations are special cases of the Noor iteration.
The SP-iteration process [8] is defined by
z_n = (1 − η_n) x_n + η_n T x_n,
y_n = (1 − ζ_n) z_n + ζ_n T z_n,
x_{n+1} = (1 − ρ_n) y_n + ρ_n T y_n,
where n ≥ 1, the initial point x_1 is chosen arbitrarily and {η_n}, {ζ_n}, {ρ_n} are sequences in [0, 1]. The Mann, Ishikawa, Noor and SP-iterations are equivalent, and the SP-iteration converges faster than the others; see [8].
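The schemes above differ only in how many Mann-type averaging steps feed the next iterate. As a minimal numerical sketch (illustrative only, not the authors' experimental code), they can be written for a generic mapping T as follows:

```python
import numpy as np

def mann(T, x1, rho, n_iter):
    """Mann iteration: x_{n+1} = (1 - rho_n) x_n + rho_n T(x_n)."""
    x = x1
    for n in range(1, n_iter + 1):
        x = (1 - rho(n)) * x + rho(n) * T(x)
    return x

def ishikawa(T, x1, zeta, rho, n_iter):
    """Ishikawa iteration: an inner Mann-type step y_n feeds the outer update."""
    x = x1
    for n in range(1, n_iter + 1):
        y = (1 - zeta(n)) * x + zeta(n) * T(x)
        x = (1 - rho(n)) * x + rho(n) * T(y)
    return x

def sp(T, x1, eta, zeta, rho, n_iter):
    """SP-iteration: three successive Mann-type averaging steps z_n -> y_n -> x_{n+1}."""
    x = x1
    for n in range(1, n_iter + 1):
        z = (1 - eta(n)) * x + eta(n) * T(x)
        y = (1 - zeta(n)) * z + zeta(n) * T(z)
        x = (1 - rho(n)) * y + rho(n) * T(y)
    return x

# Toy demo: T(x) = x/2 is a contraction with unique fixed point 0,
# so every scheme above drives x_n toward 0.
T = lambda x: 0.5 * x
const = lambda c: (lambda n: c)
x1 = np.array([1.0, -2.0])
x_sp = sp(T, x1, const(0.5), const(0.5), const(0.5), 100)
```

On genuinely nonexpansive (rather than contractive) mappings, the choice of the parameter sequences {ρ_n}, {ζ_n}, {η_n} governs the convergence behavior.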
Fixed-point theory is a rapidly growing field of research because of its many applications. Under suitable conditions, a self-mapping of a set can be shown to admit a fixed point. One of the recent generalizations in this direction is due to Jachymski.
Jachymski [9] proved some generalizations of the Banach contraction principle in a complete metric space endowed with a directed graph using a combination of fixed-point theory and graph theory. In Banach spaces with a graph, Aleomraninejad et al. [10] proposed an iterative scheme for G-contraction and G-nonexpansive mappings. G-monotone nonexpansive multivalued mappings on hyperbolic metric spaces endowed with graphs were defined by Alfuraidan and Khamsi [11]. On a Banach space with a directed graph, Alfuraidan [12] showed the existence of fixed points of monotone nonexpansive mappings. For G-nonexpansive mappings in Hilbert spaces with a graph, Tiammee et al. [13] demonstrated Browder’s convergence theorem and a strong convergence theorem of the Halpern iterative scheme. The convergence theorem of the three-step iteration approach for solving general variational inequality problems was investigated by Noor [7]. According to [14,15,16,17], the three-step iterative method gives better numerical results than the one-step and two-step approximate iterative methods. For approximating common fixed points of a finite family of G-nonexpansive mappings, Suantai et al. [18] combined the shrinking projection with the parallel monotone hybrid method. Additionally, they used a graph to derive a strong convergence theorem in Hilbert spaces under certain conditions and applied it to signal recovery. There is also research related to the application of some fixed-point theorem on the directed graph representations of some chemical compounds; see [19,20].
Several fixed-point algorithms for finding a fixed point of G-nonexpansive mappings without an inertial technique have been introduced by many authors [7,9,10,11,12,13,14,15,16,17,18]. In practice, however, efficient algorithms are needed, and accelerated fixed-point algorithms have therefore been introduced to improve convergence behavior; see [21,22,23,24,25,26,27,28]. Inspired by the works mentioned above, we employ a coordinate affine structure to define an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings, which we apply to image restoration and convex minimization problems.
This paper is organized as follows. The first section is the introduction. In Section 2, we recall the basic definitions and lemmas that will be used to prove the main results. In Section 3, we prove a weak convergence theorem of an iterative scheme with an inertial step for finding a common fixed point of a countable family of G-nonexpansive mappings. In Section 4, we apply our proposed method to image restoration and convex minimization problems.

2. Preliminaries

This section collects the basic definitions and lemmas that are needed to prove our main results.
Let X be a real normed space and C a nonempty subset of X. Let Δ = {(u, u) : u ∈ C}, where Δ stands for the diagonal of the Cartesian product C × C. Consider a directed graph G in which the set V(G) of its vertices coincides with C and the set E(G) of its edges contains all loops, that is, Δ ⊆ E(G). Assume that G has no parallel edges; then G is identified with the pair (V(G), E(G)). The conversion of G, denoted by G⁻¹, is the graph obtained from G by reversing the direction of its edges; thus,
E(G⁻¹) = {(u, v) ∈ C × C : (v, u) ∈ E(G)}.
A graph G is said to be symmetric if (x, y) ∈ E(G) implies (y, x) ∈ E(G), and transitive if, for any u, v, w ∈ V(G) with (u, v), (v, w) ∈ E(G), we have (u, w) ∈ E(G).
Recall that a graph G is connected if there is a path between any two vertices of the graph G . Readers might refer to [29] for additional information on some basic graph concepts.
A mapping T : C → C is said to be a G-contraction [9] if T is edge preserving, i.e., (Tu, Tv) ∈ E(G) for all (u, v) ∈ E(G), and there exists ρ ∈ [0, 1) such that
‖Tu − Tv‖ ≤ ρ ‖u − v‖
for all (u, v) ∈ E(G), where ρ is called a contraction factor. If T is edge preserving and
‖Tu − Tv‖ ≤ ‖u − v‖
for all (u, v) ∈ E(G), then T is said to be G-nonexpansive; see [13].
A mapping T : C → C is called G-demiclosed at 0 if, for any sequence {u_n} ⊆ C with (u_n, u_{n+1}) ∈ E(G), u_n ⇀ u and Tu_n → 0, we have Tu = 0.
To prove our main result, we introduce the concept of coordinate affineness of the graph G = (V(G), E(G)). For any α, β ∈ ℝ with α + β = 1, we say that E(G) is left coordinate affine if
α(x, y) + β(u, y) ∈ E(G)
for all (x, y), (u, y) ∈ E(G). Similarly, E(G) is said to be right coordinate affine if
α(x, y) + β(x, z) ∈ E(G)
for all (x, y), (x, z) ∈ E(G). If E(G) is both left and right coordinate affine, then E(G) is said to be coordinate affine.
The following lemmas are the fundamental results for proving our main theorem; see also [21,30,31].
Lemma 1
([30]). Let {v_n}, {w_n} and {ϑ_n} be sequences in ℝ⁺ such that
v_{n+1} ≤ (1 + ϑ_n) v_n + w_n
for all n ∈ ℕ. If Σ_{n=1}^∞ ϑ_n < ∞ and Σ_{n=1}^∞ w_n < ∞, then lim_{n→∞} v_n exists.
Lemma 2
([31]). For a real Hilbert space H, the following results hold:
(i) for any u, v ∈ H and γ ∈ [0, 1],
‖γu + (1 − γ)v‖² = γ‖u‖² + (1 − γ)‖v‖² − γ(1 − γ)‖u − v‖²;
(ii) for any u, v ∈ H,
‖u ± v‖² = ‖u‖² ± 2⟨u, v⟩ + ‖v‖².
Lemma 3
([21]). Let {v_n} and {μ_n} be sequences in ℝ⁺ such that
v_{n+1} ≤ (1 + μ_n) v_n + μ_n v_{n−1}
for all n ∈ ℕ. Then,
v_{n+1} ≤ M · ∏_{j=1}^{n} (1 + 2μ_j),
where M = max{v_1, v_2}. Furthermore, if Σ_{n=1}^∞ μ_n < ∞, then {v_n} is bounded.
Let {u_n} be a sequence in H. We write u_n ⇀ u to indicate that {u_n} converges weakly to a point u ∈ H; similarly, u_n → u symbolizes strong convergence. For v ∈ C, if there is a subsequence {u_{n_k}} of {u_n} such that u_{n_k} ⇀ v, then v is called a weak cluster point of {u_n}. We denote by ω_w(u_n) the set of all weak cluster points of {u_n}.
The following lemma was proved by Moudafi and Al-Shemas; see [32].
Lemma 4
([32]). Let {u_n} be a sequence in a real Hilbert space H for which there exists a nonempty set Λ ⊆ H satisfying:
(i) for any p ∈ Λ, lim_{n→∞} ‖u_n − p‖ exists;
(ii) any weak cluster point of {u_n} belongs to Λ.
Then, there exists x* ∈ Λ such that u_n ⇀ x*.
Let {T_n} and ψ be families of nonexpansive mappings of C into itself such that ∅ ≠ F(ψ) = ∩_{n=1}^∞ F(T_n), where F(ψ) is the set of all common fixed points of the mappings T ∈ ψ. A sequence {T_n} is said to satisfy the NST-condition (I) with ψ if, for any bounded sequence {u_n} in C,
lim_{n→∞} ‖T_n u_n − u_n‖ = 0 implies lim_{n→∞} ‖T u_n − u_n‖ = 0
for all T ∈ ψ; see [33]. If ψ = {T}, then {T_n} is said to satisfy the NST-condition (I) with T.
We now recall the forward–backward operator associated with proper lower semicontinuous convex functions f, g : ℝⁿ → (−∞, +∞]. The forward–backward operator T is defined by T := prox_{λg}(I − λ∇f) for λ > 0, where ∇f is the gradient of f and prox_{λg}(x) := argmin_{y∈H} { g(y) + (1/2λ)‖y − x‖² } (see [34,35]). Moreau [36] introduced prox_{λg} as the proximity operator with respect to λ and the function g. Whenever λ ∈ (0, 2/L), where L is a Lipschitz constant of ∇f, the operator T is nonexpansive. The following remark gives a closed form of the proximity operator of the ℓ₁-norm; see [37].
Remark 1.
Let g : ℝⁿ → ℝ be given by g(x) = λ‖x‖₁. The proximity operator of g is evaluated by the following formula:
prox_{λ‖·‖₁}(x) = ( sign(x_i) · max(|x_i| − λ, 0) )_{i=1}^{n},
where x = (x_1, x_2, …, x_n) and ‖x‖₁ = Σ_{i=1}^n |x_i|.
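The formula of Remark 1 is the familiar componentwise soft-thresholding rule. A minimal sketch (the helper name `prox_l1` is ours, not from the paper):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of g(x) = lam * ||x||_1: componentwise
    soft-thresholding, the closed form stated in Remark 1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Components with |x_i| <= lam are set to 0; the rest shrink toward 0 by lam.
shrunk = prox_l1(np.array([3.0, -0.5, 1.5, 0.0]), 1.0)
```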
The following lemma was proved by Bussaban et al.; see [22].
Lemma 5.
Let H be a real Hilbert space and let T be the forward–backward operator of f and g, where g is a proper lower semicontinuous convex function from H into ℝ ∪ {∞} and f is a convex differentiable function from H into ℝ whose gradient ∇f is L-Lipschitz continuous for some L > 0. If {T_n} is a sequence of forward–backward operators of f and g with parameters a_n such that a_n → a, where a, a_n ∈ (0, 2/L), then {T_n} satisfies the NST-condition (I) with T.

3. Main Results

In this section, we obtain a useful proposition and a weak convergence theorem of our proposed algorithm by using the inertial technique.
Let C be a nonempty closed convex subset of a real Hilbert space H endowed with a directed graph G = (V(G), E(G)) such that V(G) = C. Let {T_n} be a family of G-nonexpansive mappings of C into itself such that ∩_{n=1}^∞ F(T_n) ≠ ∅.
The following proposition is useful for our main theorem.
Proposition 1.
Let x* ∈ ∩_{n=1}^∞ F(T_n) and let x_0, x_1 ∈ C be such that (x_0, x*), (x_1, x*) ∈ E(G). Let {x_n} be a sequence generated by Algorithm 1. Suppose E(G) is symmetric, transitive and left coordinate affine. Then, (x_n, x*), (y_n, x*), (z_n, x*), (x_n, x_{n+1}) ∈ E(G) for all n ∈ ℕ.
Algorithm 1 (MSPA) A modified SP-algorithm
1: Initialization. Take arbitrary x_0, x_1 ∈ C and set n = 1. Choose α_n ∈ [a, b] ⊂ (0, 1), β_n ∈ (0, 1) and θ_n ≥ 0 with Σ_{n=1}^∞ θ_n < ∞, where θ_n is called an inertial step size.
2: Step 1. Compute y_n, z_n and x_{n+1} by
y_n = x_n + θ_n (x_n − x_{n−1}),
z_n = (1 − β_n) y_n + β_n T_n y_n,
x_{n+1} = (1 − α_n) z_n + α_n T_n z_n.
Then set n := n + 1 and go to Step 1.
Proof. 
We prove the results by mathematical induction. From Algorithm 1, we obtain
(y_1, x*) = (x_1 + θ_1(x_1 − x_0), x*) = ((1 + θ_1)x_1 − θ_1 x_0, x*) = (1 + θ_1)(x_1, x*) − θ_1(x_0, x*).
Since (x_0, x*), (x_1, x*) ∈ E(G) and E(G) is left coordinate affine, we obtain (y_1, x*) ∈ E(G) and
(z_1, x*) = ((1 − β_1)y_1 + β_1 T_1 y_1, x*) = (1 − β_1)(y_1, x*) + β_1(T_1 y_1, x*).
Since (y_1, x*) ∈ E(G) and T_1 is edge preserving, we obtain (z_1, x*) ∈ E(G). Next, suppose that
(x_k, x*), (y_k, x*), (z_k, x*) ∈ E(G)
for some k ∈ ℕ. We shall show that (x_{k+1}, x*), (y_{k+1}, x*), (z_{k+1}, x*) ∈ E(G). By Algorithm 1, we obtain
(x_{k+1}, x*) = ((1 − α_k)z_k + α_k T_k z_k, x*) = (1 − α_k)(z_k, x*) + α_k(T_k z_k, x*),
(y_{k+1}, x*) = (x_{k+1} + θ_{k+1}(x_{k+1} − x_k), x*) = ((1 + θ_{k+1})x_{k+1} − θ_{k+1}x_k, x*) = (1 + θ_{k+1})(x_{k+1}, x*) − θ_{k+1}(x_k, x*),
and
(z_{k+1}, x*) = ((1 − β_{k+1})y_{k+1} + β_{k+1}T_{k+1}y_{k+1}, x*) = (1 − β_{k+1})(y_{k+1}, x*) + β_{k+1}(T_{k+1}y_{k+1}, x*).
Since E ( G ) is left coordinate affine, T n is edge preserving and from (6)–(9), we obtain ( x k + 1 , x * ) , ( y k + 1 , x * ) and ( z k + 1 , x * ) E ( G ) . By mathematical induction, we conclude that ( x n , x * ) , ( y n , x * ) , ( z n , x * ) E ( G ) for all n N . Since E ( G ) is symmetric, we obtain ( x * , x n + 1 ) E ( G ) . Since ( x n , x * ) , ( x * , x n + 1 ) E ( G ) and E ( G ) is transitive, we obtain ( x n , x n + 1 ) E ( G ) . The proof is now complete.    □
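A direct transcription of Algorithm 1 might look as follows. This is only a sketch: the mapping sequence, the parameter choices and the single-contraction demo are illustrative assumptions, not the paper's experimental setting.

```python
import numpy as np

def mspa(T_seq, x0, x1, alpha, beta, theta, n_iter):
    """Modified SP-algorithm (Algorithm 1): an inertial step y_n followed by
    two Mann-type averaging steps with the n-th mapping T_n.
    T_seq, alpha, beta, theta map an index n to T_n, alpha_n, beta_n, theta_n."""
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        Tn = T_seq(n)
        y = x + theta(n) * (x - x_prev)            # inertial extrapolation
        z = (1 - beta(n)) * y + beta(n) * Tn(y)
        x_prev, x = x, (1 - alpha(n)) * z + alpha(n) * Tn(z)
    return x

# Toy demo: a single contraction T(x) = x/2 with fixed point 0 and the
# summable inertial step sizes theta_n = 0.9^n required by the algorithm.
x_out = mspa(lambda n: (lambda v: 0.5 * v),
             np.array([2.0]), np.array([1.0]),
             lambda n: 0.5, lambda n: 0.5, lambda n: 0.9 ** n, 80)
```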
In the following theorem, we prove the weak convergence of G-nonexpansive mapping by using Algorithm 1.
Theorem 1.
Let C be a nonempty closed convex subset of a real Hilbert space H endowed with a directed graph G = (V(G), E(G)) such that V(G) = C and E(G) is symmetric, transitive and left coordinate affine. Let x_0, x_1 ∈ C and let {x_n} be the sequence defined by Algorithm 1. Suppose that {T_n} satisfies the NST-condition (I) with T, where F(T) = ∩_{n=1}^∞ F(T_n) ≠ ∅, and that (x_0, x*), (x_1, x*) ∈ E(G) for all x* ∈ ∩_{n=1}^∞ F(T_n). Then, {x_n} converges weakly to a point in F(T).
Proof. 
Let x* ∈ ∩_{n=1}^∞ F(T_n). By the definitions of y_n and z_n, we obtain
‖y_n − x*‖ = ‖x_n + θ_n(x_n − x_{n−1}) − x*‖ ≤ ‖x_n − x*‖ + θ_n‖x_n − x_{n−1}‖
and, since (y_n, x*) ∈ E(G) by Proposition 1 and T_n is G-nonexpansive,
‖z_n − x*‖ = ‖(1 − β_n)(y_n − x*) + β_n(T_n y_n − x*)‖
≤ (1 − β_n)‖y_n − x*‖ + β_n‖T_n y_n − T_n x*‖
≤ (1 − β_n)‖y_n − x*‖ + β_n‖y_n − x*‖
= ‖y_n − x*‖.
By the definition of x_{n+1} and (11), we obtain
‖x_{n+1} − x*‖ = ‖(1 − α_n)(z_n − x*) + α_n(T_n z_n − x*)‖
≤ (1 − α_n)‖z_n − x*‖ + α_n‖T_n z_n − x*‖
≤ (1 − α_n)‖z_n − x*‖ + α_n‖z_n − x*‖
= ‖z_n − x*‖ ≤ ‖y_n − x*‖.
From (10)–(12), we obtain
‖x_{n+1} − x*‖ ≤ ‖x_n − x*‖ + θ_n‖x_n − x_{n−1}‖ ≤ (1 + θ_n)‖x_n − x*‖ + θ_n‖x_{n−1} − x*‖.
So, by Lemma 3, we obtain ‖x_{n+1} − x*‖ ≤ M · ∏_{j=1}^n (1 + 2θ_j), where M = max{‖x_1 − x*‖, ‖x_2 − x*‖}. Thus, {x_n} is bounded because Σ_{n=1}^∞ θ_n < ∞. Then,
Σ_{n=1}^∞ θ_n‖x_n − x_{n−1}‖ < ∞.
Note that the boundedness of {x_n} implies that {y_n} and {z_n} are also bounded. By Lemma 1 and (13), we find that lim_{n→∞} ‖x_n − x*‖ exists; let lim_{n→∞} ‖x_n − x*‖ = a. From the boundedness of {y_n} and (12), we obtain
lim inf_{n→∞} ‖y_n − x*‖ ≥ a.
By (10) and (14), we obtain
lim sup_{n→∞} ‖y_n − x*‖ ≤ a.
From (15) and (16), it follows that
lim_{n→∞} ‖y_n − x*‖ = a.
Similarly, from (11), (12), (17) and the boundedness of {z_n}, we obtain
lim sup_{n→∞} ‖z_n − x*‖ ≤ a and lim inf_{n→∞} ‖z_n − x*‖ ≥ a.
From (18), we obtain lim_{n→∞} ‖z_n − x*‖ = a; in particular, this limit exists. By the definition of x_{n+1} and Lemma 2 (i), we obtain
‖x_{n+1} − x*‖² = ‖(1 − α_n)(z_n − x*) + α_n(T_n z_n − x*)‖²
= (1 − α_n)‖z_n − x*‖² + α_n‖T_n z_n − x*‖² − (1 − α_n)α_n‖z_n − T_n z_n‖²
≤ (1 − α_n)‖z_n − x*‖² + α_n‖z_n − x*‖² − (1 − α_n)α_n‖z_n − T_n z_n‖²
= ‖z_n − x*‖² − (1 − α_n)α_n‖z_n − T_n z_n‖²
≤ (‖x_n − x*‖ + θ_n‖x_n − x_{n−1}‖)² − (1 − α_n)α_n‖z_n − T_n z_n‖²
= ‖x_n − x*‖² + 2θ_n‖x_n − x*‖‖x_n − x_{n−1}‖ + θ_n²‖x_n − x_{n−1}‖² − (1 − α_n)α_n‖z_n − T_n z_n‖².
From (14) and (19), we obtain
‖z_n − T_n z_n‖ → 0 as n → ∞.
Since
‖x_{n+1} − z_n‖ = ‖(1 − α_n)z_n + α_n T_n z_n − z_n‖ = α_n‖T_n z_n − z_n‖,
and from (20), it follows that
‖x_{n+1} − z_n‖ → 0.
Since {z_n} is bounded, (20) holds and {T_n} satisfies the NST-condition (I) with T, we obtain ‖z_n − T z_n‖ → 0. Let ω_w(z_n) be the set of all weak cluster points of {z_n}. Then, ω_w(z_n) ⊆ F(T) by the demiclosedness of I − T at 0. By Lemma 4, we conclude that there exists x* ∈ F(T) such that z_n ⇀ x*, and it follows from (21) that x_n ⇀ x*. The proof is now complete.    □

4. Applications

In this section, we apply our proposed method to solve a convex minimization problem, compare the convergence behavior of our algorithm with that of other algorithms, and give an application to the image restoration problem.

4.1. Convex Minimization Problems

Our proposed method will be used to solve a convex minimization problem for the sum of two convex lower semicontinuous functions f, g : ℝⁿ → (−∞, +∞]. That is, we consider the following convex minimization problem:
min_{x∈ℝⁿ} f(x) + g(x).
It is well known that x* is a minimizer of (22) if and only if x* = Tx*, where T = prox_{ρg}(I − ρ∇f); see Proposition 3.1 (iii) of [35]. It is also known that T is nonexpansive if ρ ∈ (0, 2/L), where L is a Lipschitz constant of ∇f. Over the past two decades, several algorithms have been introduced for solving problem (22). A simple and classical algorithm is the forward–backward algorithm (FBA), introduced by Lions and Mercier [23].
The forward–backward algorithm (FBA) is defined by
y_n = x_n − γ∇f(x_n),
x_{n+1} = x_n + ρ_n(J_{γg} y_n − x_n),
where n ≥ 1, x_0 ∈ H, L is a Lipschitz constant of ∇f, γ ∈ (0, 2/L), δ = 2 − γL/2 and {ρ_n} is a sequence in [0, δ] such that Σ_{n∈ℕ} ρ_n(δ − ρ_n) = +∞. A technique for improving the speed and convergence behavior of such algorithms was first introduced by Polyak [38], who added an inertial step. Since then, many authors have employed the inertial technique to accelerate their algorithms for various kinds of problems; see [21,22,24,25,26,27,28]. The performance of the FBA can be improved using iterative methods with the inertial steps described below.
The fast iterative shrinkage-thresholding algorithm (FISTA) [27] is defined by
y_n = T x_n,
t_{n+1} = (1 + √(1 + 4t_n²)) / 2,
θ_n = (t_n − 1) / t_{n+1},
x_{n+1} = y_n + θ_n(y_n − y_{n−1}),
where n ≥ 1, t_1 = 1, x_1 = y_0 ∈ ℝⁿ, T := prox_{(1/L)g}(I − (1/L)∇f) and θ_n is the inertial step size. FISTA was suggested by Beck and Teboulle [27], who proved its convergence rate and applied it to the image restoration problem [27]. The inertial step size θ_n of FISTA was first introduced by Nesterov [39].
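For concreteness, FISTA specialized to a LASSO objective f(x) = ‖Bx − c‖₂², g(x) = λ‖x‖₁ can be sketched as below. This is an illustrative implementation under our own parameter choices, not the authors' code:

```python
import numpy as np

def fista_lasso(B, c, lam, n_iter):
    """FISTA sketched for min ||Bx - c||_2^2 + lam * ||x||_1.
    One step applies T = prox_{(lam/L)||.||_1}(I - (1/L) grad f),
    then extrapolates with Nesterov's t_n rule."""
    L = 2 * np.linalg.norm(B.T @ B, 2)             # Lipschitz constant of grad f
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros(B.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = 2 * B.T @ (B @ y - c)
        x_new = soft(y - grad / L, lam / L)        # forward-backward step T(y_n)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # inertial extrapolation
        x, t = x_new, t_new
    return x
```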
A new accelerated proximal gradient algorithm (nAGA) [28] is defined by
y_n = x_n + μ_n(x_n − x_{n−1}),
x_{n+1} = T_n[(1 − ρ_n)y_n + ρ_n T_n y_n],
where n ≥ 1, T_n is the forward–backward operator of f and g with a_n ∈ (0, 2/L), {μ_n}, {ρ_n} are sequences in (0, 1) and ‖x_n − x_{n−1}‖²/μ_n → 0. The nAGA, together with its convergence theorem, was introduced by Verma and Shukla [28], who used this method to solve nonsmooth convex minimization problems with sparsity-inducing regularizers in a multitask learning framework.
The convergence of Algorithm 2 is obtained using the convergence result of Algorithm 1, as shown in the following theorem.
Algorithm 2 (FBMSPA) A forward–backward modified SP-algorithm
1: Initialization. Take arbitrary x_0, x_1 ∈ C and set n = 1. Choose α_n ∈ [a, b] ⊂ (0, 1), β_n ∈ (0, 1) and θ_n ≥ 0 with Σ_{n=1}^∞ θ_n < ∞.
2: Step 1. Compute y_n, z_n and x_{n+1} by
y_n = x_n + θ_n(x_n − x_{n−1}),
z_n = (1 − β_n)y_n + β_n prox_{a_n g}(I − a_n∇f)y_n,
x_{n+1} = (1 − α_n)z_n + α_n prox_{a_n g}(I − a_n∇f)z_n.
Then set n := n + 1 and go to Step 1.
Theorem 2.
Let f, g : ℝⁿ → (−∞, ∞], where g is a proper lower semicontinuous convex function and f is a smooth convex function whose gradient is L-Lipschitz continuous. Let a_n ∈ (0, 2/L) be such that {a_n} converges to a, let T := prox_{ag}(I − a∇f) and T_n := prox_{a_n g}(I − a_n∇f), and let {x_n} be a sequence generated by Algorithm 2, where β_n, α_n and θ_n are the same as in Algorithm 1. Then, the following hold:
(i) ‖x_{n+1} − x*‖ ≤ M · ∏_{j=1}^n (1 + 2θ_j), where M = max{‖x_1 − x*‖, ‖x_2 − x*‖} and x* ∈ Argmin(f + g);
(ii) {x_n} converges weakly to a point in Argmin(f + g).
Proof. 
We know that T and each T_n are nonexpansive operators and that F(T) = ∩_{n=1}^∞ F(T_n) = Argmin(f + g); see Proposition 26.1 in [34]. By Lemma 5, we find that {T_n} satisfies the NST-condition (I) with T. The required result then follows directly from Theorem 1 by taking G to be the complete graph on ℝⁿ, i.e., E(G) = ℝⁿ × ℝⁿ. □
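Algorithm 2 specialized to a LASSO objective f(x) = ‖Bx − c‖₂², g(x) = λ‖x‖₁ can be sketched as follows. The concrete choices α_n = β_n = 0.5, a_n = 1/L and θ_n = 0.9ⁿ are our own illustrative assumptions; they merely satisfy the hypotheses (θ_n summable, a_n ∈ (0, 2/L)) and are not the paper's experimental parameters:

```python
import numpy as np

def fbmspa(B, c, lam, n_iter):
    """Forward-backward modified SP-algorithm (Algorithm 2), sketched for
    f(x) = ||Bx - c||_2^2 and g(x) = lam * ||x||_1."""
    L = 2 * np.linalg.norm(B.T @ B, 2)             # Lipschitz constant of grad f
    a = 1.0 / L                                    # a_n in (0, 2/L)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    # Forward-backward operator T_n = prox_{a_n g}(I - a_n grad f).
    T = lambda v: soft(v - a * 2 * B.T @ (B @ v - c), a * lam)
    x_prev = np.zeros(B.shape[1])
    x = x_prev.copy()
    for n in range(1, n_iter + 1):
        y = x + (0.9 ** n) * (x - x_prev)          # inertial step, summable theta_n
        z = 0.5 * y + 0.5 * T(y)                   # beta_n = 0.5
        x_prev, x = x, 0.5 * z + 0.5 * T(z)        # alpha_n = 0.5
    return x
```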

4.2. The Image Restoration Problem

We can describe the image restoration problem by the simple linear model
Bx = c + u,
where B ∈ ℝ^{m×n} and c ∈ ℝ^{m×1} are known, u is an additive noise vector and x ∈ ℝ^{n×1} is the unknown true image. In image restoration problems, the blurred image is represented by c and the blur operator is described by the matrix B. The problem of recovering the original image x* ∈ ℝ^{n×1} from the observed blurred and noisy image is called an image restoration problem. Several methods have been proposed for solving problem (25); see, for instance, [40,41,42,43].
A method for estimating a solution of (25), called the least absolute shrinkage and selection operator (LASSO), was proposed by Tibshirani [44] as follows:
min_x ‖Bx − c‖₂² + λ‖x‖₁,
where λ > 0 is called a regularization parameter and ‖·‖₁ is the l₁-norm defined by ‖x‖₁ = Σ_{i=1}^n |x_i|. The LASSO can also be applied to solve image and regression problems [27,44].
Due to the size of the matrices B and x, the model (26) incurs a high computational cost in the multiplication Bx and the evaluation of ‖x‖₁ when solving the RGB image restoration problem. To address this issue, many authors in this field apply the 2-D fast Fourier transform to the true RGB image. The model (26) is therefore slightly modified as follows:
min_x ‖Bx − C‖₂² + λ‖Wx‖₁,
where λ is a positive regularization parameter, R is the blurring matrix, W is the 2-D fast Fourier transform, B = RW is the blurring operation and C ∈ ℝ^{m×n} is the observed blurred and noisy image of size m × n.
We apply Algorithm 2 to solve the image restoration problem (27) via Theorem 2 with f(x) = ‖Bx − C‖₂² and g(x) = λ‖Wx‖₁, and compare the deblurring performance of Algorithm 2 with that of FISTA and the FBA. In this experiment, we consider two true RGB images, the Suan Dok temple and the Aranyawiwek temple, each of size 500², as the original images. We blur the images with a Gaussian blur of size 9² and standard deviation σ = 4. To evaluate the performance of these methods, we utilize the peak signal-to-noise ratio (PSNR) [45], where PSNR(x_n) is defined by
PSNR(x_n) = 10 log₁₀(255² / MSE),
in which 255 is the maximum gray level of a monochrome image with 8 bits/pixel and MSE = (1/N)‖x_n − x*‖₂² = (1/N)Σ_{i=1}^N |x_n(i) − x*(i)|², where x_n(i) and x*(i) are the i-th samples of images x_n and x*, respectively, N is the number of image samples and x* is the original image. A higher PSNR indicates better deblurred image quality. For these experiments, we set λ = 5 × 10⁻⁵ and took the blurred image as the initial image. The Lipschitz constant L is computed as the maximum eigenvalue of the matrix BᵀB.
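The PSNR measure used in the experiments can be computed as follows (a minimal sketch for 8-bit images):

```python
import numpy as np

def psnr(x, x_star):
    """Peak signal-to-noise ratio (in dB) of image x against the original
    8-bit image x_star, whose maximum gray level is 255."""
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(x_star, dtype=float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# A restored image that is one gray level off everywhere has MSE = 1,
# giving a PSNR of about 48.13 dB.
ref = np.full((4, 4), 100.0)
value = psnr(ref + 1.0, ref)
```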
The parameters of Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration are the same as in Table 1.
Note that all of the parameters in Table 1 satisfy the convergence theorems for each method. The convergence of the sequence {x_n} generated by Algorithm 2 to the original image x* is guaranteed by Theorem 2, and the PSNR value is used to measure the convergence behavior of this sequence; PSNR is a suitable measurement for image restoration problems.
The following experiments show the deblurring results for the Suan Dok and Aranyawiwek temples at the 500th iteration of Algorithm 2, FISTA, the FBA, the Ishikawa iteration, the S-iteration, the Noor iteration and the SP-iteration, using PSNR as our measurement; the results are shown in the tables and figures below.
It is observed from Figure 1 and Figure 2 that the PSNR curve of Algorithm 2 is higher than those of FISTA, the FBA, the Ishikawa iteration, the S-iteration, the Noor iteration and the SP-iteration, which shows that Algorithm 2 performs better than the others.
The efficiency of each algorithm for image restoration is shown in Table 2, Table 3, Table 4 and Table 5 for different numbers of iterations. The PSNR value of Algorithm 2 is higher than that of FISTA, the FBA, the Ishikawa iteration, the S-iteration, the Noor iteration and the SP-iteration. Thus, Algorithm 2 has better convergence behavior than the others.
We show the original images, blurred images, and deblurred images by Algorithm 2, FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration for Suan Dok (Figure 3) and Aranyawiwek temples (Figure 4).

5. Conclusions

In this study, we used a coordinate affine structure to propose an accelerated fixed-point algorithm with an inertial technique for a countable family of G-nonexpansive mappings in a Hilbert space with a symmetric directed graph G. Moreover, we proved the weak convergence theorem of the proposed algorithm under some suitable conditions. Then, we compared the convergence behavior of our proposed algorithm with FISTA, FBA, Ishikawa iteration, S-iteration, Noor iteration and SP-iteration. We also applied our results to image restoration and convex minimization problems. We found that Algorithm 2 gave the best results out of all of them.

Author Contributions

Conceptualization, R.W.; Formal analysis, K.J. and R.W.; Investigation, K.J.; Methodology, R.W.; Supervision, R.W.; Validation, R.W.; Writing—original draft, K.J.; Writing—review and editing, R.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundamental Fund 2022, Chiang Mai University and Ubon Ratchathani University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author was supported by Fundamental Fund 2022, Chiang Mai University, Thailand. The second author would like to thank Ubon Ratchathani University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berinde, V. A Modified Krasnosel’skiǐ–Mann Iterative Algorithm for Approximating Fixed Points of Enriched Nonexpansive Mappings. Symmetry 2022, 14, 123. [Google Scholar] [CrossRef]
  2. Bin Dehaish, B.A.; Khamsi, M.A. Mann iteration process for monotone nonexpansive mappings. Fixed Point Theory Appl. 2015, 2015, 177. [Google Scholar] [CrossRef] [Green Version]
  3. Dong, Y. New inertial factors of the Krasnosel’skii-Mann iteration. Set-Valued Var. Anal. 2021, 29, 145–161. [Google Scholar] [CrossRef]
  4. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  5. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  6. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed point of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61. [Google Scholar]
  7. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef] [Green Version]
  8. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef] [Green Version]
9. Jachymski, J. The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 2008, 136, 1359–1373.
10. Aleomraninejad, S.M.A.; Rezapour, S.; Shahzad, N. Some fixed point results on a metric space with a graph. Topol. Appl. 2012, 159, 659–663.
11. Alfuraidan, M.R.; Khamsi, M.A. Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph. Fixed Point Theory Appl. 2015, 2015, 44.
12. Alfuraidan, M.R. Fixed points of monotone nonexpansive mappings with a graph. Fixed Point Theory Appl. 2015, 2015, 49.
13. Tiammee, J.; Kaewkhao, A.; Suantai, S. On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 2015, 187.
14. Tripak, O. Common fixed points of G-nonexpansive mappings on Banach spaces with a graph. Fixed Point Theory Appl. 2016, 2016, 87.
15. Sridarat, P.; Suparaturatorn, R.; Suantai, S.; Cho, Y.J. Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 2019, 42, 2361–2380.
16. Glowinski, R.; Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM: Philadelphia, PA, USA, 1989.
17. Haubruge, S.; Nguyen, V.H.; Strodiot, J.J. Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1998, 97, 645–673.
18. Suantai, S.; Kankam, K.; Cholamjiak, P.; Cholamjiak, W. A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comp. Appl. Math. 2021, 40, 145.
19. Baleanu, D.; Etemad, S.; Mohammadi, H.; Rezapour, S. A novel modeling of boundary value problems on the glucose graph. Commun. Nonlinear Sci. Numer. Simulat. 2021, 100, 105844.
20. Etemad, S.; Rezapour, S. On the existence of solutions for fractional boundary value problems on the ethane graph. Adv. Differ. Equ. 2020, 2020, 276.
21. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization method. Mathematics 2020, 8, 378.
22. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 21–30.
23. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
24. Janngam, K.; Suantai, S. An accelerated forward-backward algorithm with applications to image restoration problems. Thai J. Math. 2021, 19, 325–339.
25. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems. Demonstr. Math. 2020, 53, 208–224.
26. Gebrie, A.G.; Wangkeeree, R. Strong convergence of an inertial extrapolation method for a split system of minimization problems. Demonstr. Math. 2020, 53, 332–351.
27. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
28. Verma, M.; Shukla, K. A new accelerated proximal gradient technique for regularized multitask learning framework. Pattern Recogn. Lett. 2018, 95, 98–103.
29. Johnsonbaugh, R. Discrete Mathematics; Pearson: Hoboken, NJ, USA, 1997.
30. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
31. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
32. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
33. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34.
34. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: Berlin, Germany, 2017.
35. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
36. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Hebd. des Séances de l'Académie des Sci. 1962, 255, 2897–2899.
37. Beck, A. First-Order Methods in Optimization; SIAM: Philadelphia, PA, USA, 2017; pp. 129–177. ISBN 978-1-61197-498-0.
38. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
39. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543–547.
40. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
41. Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145.
42. Hansen, P.C.; Nagy, J.G.; O'Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering; Fundamentals of Algorithms 3; SIAM: Philadelphia, PA, USA, 2006.
43. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V.H. Winston: Washington, DC, USA, 1977.
44. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288.
45. Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
Figure 1. The graphs of PSNR of each algorithm for Suan Dok temple.
Figure 2. The graphs of PSNR of each algorithm for Aranyawiwek temple.
Figure 3. Results for Suan Dok temple's deblurring image.
Figure 4. Results for Aranyawiwek temple's deblurring image.
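The PSNR values reported in Figures 1–4 and Tables 2–5 follow the standard definition PSNR = 10 log₁₀(peak²/MSE) surveyed in [45]. A minimal sketch of this computation, assuming 8-bit images with peak intensity 255 (the function name `psnr` is ours, not from the paper's code):

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    A minimal sketch of the standard definition PSNR = 10*log10(peak^2 / MSE),
    assuming both arrays share the same shape and intensity range [0, peak].
    """
    diff = np.asarray(original, dtype=np.float64) - np.asarray(restored, dtype=np.float64)
    mse = np.mean(diff ** 2)  # mean squared error over all pixels
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a restored image closer to the original, which is how the tables below rank the algorithms.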
Table 1. Methods and their setting controls.

| Methods | Setting |
| --- | --- |
| Algorithm 2 | αₙ = 0.9, βₙ = 0.5, c = 1/L, θₙ = n/(n+1) if 1 ≤ n ≤ 500, and 1/2ⁿ otherwise |
| FISTA | t₁ = 1, t₍ₙ₊₁₎ = (1 + √(1 + 4tₙ²))/2, θₙ = (tₙ − 1)/t₍ₙ₊₁₎ |
| FBA | ρₙ = 0.9, γ = 1/L |
| Ishikawa iteration | ρₙ = 0.9, ζₙ = 0.5, c = 1/L |
| S-iteration | ρₙ = 0.9, ζₙ = 0.5, c = 1/L |
| Noor iteration | ρₙ = 0.9, ζₙ = 0.5, ηₙ = 0.5, c = 1/L |
| SP-iteration | ρₙ = 0.9, ζₙ = 0.5, ηₙ = 0.5, c = 1/L |
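The two inertial sequences θₙ from Table 1 can be generated as follows. This is a minimal sketch of those parameter schedules only (the function names are ours); the full iterative schemes are defined in the body of the paper:

```python
import math

def theta_algorithm2(n):
    # Inertial parameter for Algorithm 2 (per Table 1):
    # theta_n = n/(n+1) for 1 <= n <= 500, and 1/2^n afterwards.
    return n / (n + 1) if 1 <= n <= 500 else 1 / 2 ** n

def theta_fista(num_steps):
    # FISTA momentum sequence (per Table 1):
    # t_1 = 1, t_{n+1} = (1 + sqrt(1 + 4 t_n^2)) / 2,
    # theta_n = (t_n - 1) / t_{n+1}.
    thetas, t = [], 1.0
    for _ in range(num_steps):
        t_next = (1 + math.sqrt(1 + 4 * t * t)) / 2
        thetas.append((t - 1) / t_next)
        t = t_next
    return thetas
```

Note that the FISTA momentum starts at 0 (since t₁ = 1) and grows toward 1, whereas Algorithm 2's θₙ is switched to the summable tail 1/2ⁿ after 500 iterations.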
Table 2. The values of PSNR for Algorithm 2, FISTA and FBA of Suan Dok temple.

| No. Iterations | Algorithm 2 | FISTA | FBA |
| --- | --- | --- | --- |
| 1 | 20.41801 | 20.36432 | 20.27827 |
| 5 | 21.56154 | 21.13340 | 20.64981 |
| 10 | 22.81140 | 22.00081 | 20.96027 |
| 25 | 24.54825 | 23.73266 | 21.56257 |
| 100 | 27.80053 | 26.71268 | 22.93002 |
| 250 | 30.21461 | 29.28515 | 23.92280 |
| 500 | 31.57117 | 31.21182 | 24.66522 |
Table 3. The values of PSNR for Ishikawa iteration, S-iteration, Noor iteration and SP-iteration of Suan Dok temple.

| No. Iterations | Ishikawa Iteration | S-Iteration | Noor Iteration | SP-Iteration |
| --- | --- | --- | --- | --- |
| 1 | 20.41010 | 20.42585 | 20.43611 | 20.47630 |
| 5 | 21.04951 | 21.08759 | 21.12646 | 21.23160 |
| 10 | 21.54370 | 21.59831 | 21.65780 | 21.80965 |
| 25 | 22.44491 | 22.51817 | 22.59948 | 22.79284 |
| 100 | 23.98112 | 24.05696 | 24.14345 | 24.33880 |
| 250 | 24.97654 | 25.05383 | 25.14335 | 25.43583 |
| 500 | 25.75882 | 25.84223 | 25.93954 | 26.16025 |
Table 4. The values of PSNR for Algorithm 2, FISTA and FBA of Aranyawiwek temple.

| No. Iterations | Algorithm 2 | FISTA | FBA |
| --- | --- | --- | --- |
| 1 | 20.62485 | 20.57077 | 20.48543 |
| 5 | 21.85350 | 21.37734 | 20.86196 |
| 10 | 23.31840 | 22.35583 | 21.19050 |
| 25 | 25.29317 | 24.39293 | 21.85570 |
| 100 | 28.86437 | 27.75046 | 23.44804 |
| 250 | 31.32694 | 30.48999 | 24.60734 |
| 500 | 32.66988 | 32.43108 | 25.45769 |
Table 5. The values of PSNR for Ishikawa iteration, S-iteration, Noor iteration and SP-iteration of Aranyawiwek temple.

| No. Iterations | Ishikawa Iteration | S-Iteration | Noor Iteration | SP-Iteration |
| --- | --- | --- | --- | --- |
| 1 | 20.61695 | 20.63272 | 20.64371 | 20.65791 |
| 5 | 21.28692 | 21.32823 | 21.37058 | 21.46923 |
| 10 | 21.83445 | 21.89601 | 21.96356 | 21.12342 |
| 25 | 22.87547 | 22.96182 | 22.057648 | 22.27264 |
| 100 | 24.64759 | 24.76190 | 24.86123 | 25.06857 |
| 250 | 25.81261 | 25.89987 | 26.00121 | 26.20981 |
| 500 | 26.96400 | 26.78725 | 26.89572 | 27.11590 |