Article

An Inertial Forward–Backward Splitting Method for Solving Modified Variational Inclusion Problems and Its Application

by
Kamonrat Sombut
1,2,
Kanokwan Sitthithakerngkiet
2,3,
Areerat Arunchai
4 and
Thidaporn Seangwattana
2,5,*
1
Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), Pathum Thani 12110, Thailand
2
Applied Mathematics for Science and Engineering Research Unit (AMSERU), Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT), 39 Rungsit-Nakorn Nayok Rd., Klong 6, Khlong Luang, Thanyaburi, Pathum Thani 12110, Thailand
3
Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
4
Department of Mathematics and Statistics, Faculty of Science and Technology Nakhon Sawan, Rajabhat University, Nakhon Sawan 60000, Thailand
5
Faculty of Science Energy and Environment, King Mongkut’s University of Technology North Bangkok, Rayong Campus (KMUTNB), Rayong 21120, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2107; https://doi.org/10.3390/math11092107
Submission received: 6 March 2023 / Revised: 20 April 2023 / Accepted: 26 April 2023 / Published: 28 April 2023
(This article belongs to the Special Issue New Trends in Nonlinear Analysis)

Abstract

In this paper, we propose an inertial forward–backward splitting method for solving the modified variational inclusion problem. The proposed method builds on Cholamjiak’s method and Khuangsatung and Kangtunyakarn’s method, and Cholamjiak’s inertial technique is utilized for increased acceleration. Moreover, we demonstrate that the proposed method converges strongly under appropriate conditions and apply it to the image restoration problem, where the images have been subjected to various blurring processes. In our example, we use the proposed method and Khuangsatung and Kangtunyakarn’s method to restore two medical images. To compare image quality, we also compare the signal-to-noise ratio (SNR) of the proposed method with that of Khuangsatung and Kangtunyakarn’s method.

1. Introduction

One of the most significant aspects of image processing is image restoration. Image restoration refers to the technique of removing or reducing image degradation that may occur during the acquisition process. It has a number of useful applications in environmental design, motion picture special effects, old photo restoration and removing text and obstructions from photographs. The objective of image restoration is to recreate a high-quality image X from a low-quality or damaged image Y. It is a classical, ill-posed linear inverse problem, which can be formulated as
Y = S X + c
where X and Y are the original and degraded images, respectively, S is the matrix representing the linear, irreversible degradation operator and c is usually noise.
Many inverse problems necessitate the use of optimization; in fact, inversion is frequently posed as the solution of an optimization problem. Many of these inverse problems are non-linear, non-convex and large-scale, which poses difficult optimization challenges. A basic optimization problem is to find x solving
$\min_{x \in \mathbb{R}^n} g(x),$
where $g : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. The variational inclusion problem (VIP) is one of the most fundamental problems in optimization; it underlies applications in image restoration, machine learning, transportation and engineering. The VIP is to find u in a real Hilbert space H such that the following holds:
$0 \in S u + T u,$
where $S : H \to H$ and $T : H \to 2^H$ are operators. Tseng’s technique [1,2], the proximal point method [3,4,5], the forward–backward splitting method [6,7,8,9] and other methods for the variational inclusion problem have received great attention from an increasing number of researchers. The forward–backward splitting method is one of the most commonly used and is defined by
$u_{n+1} = J_r^T(u_n - r S u_n), \quad n \geq 1,$
where $J_r^T = (I + r T)^{-1}$ with $r > 0$. Researchers have further refined these methods by using relaxation and inertial techniques to give them more flexibility and acceleration. Alvarez and Attouch developed the inertial concept and proposed the inertial forward–backward method, which is given by
$v_n = u_n + \theta_n(u_n - u_{n-1}),$
$u_{n+1} = J_r^T(I - r S)v_n, \quad n \geq 1.$
The term $\theta_n(u_n - u_{n-1})$, with extrapolation factor $\theta_n$, is what accelerates the method; this inertial technique significantly improves the algorithm’s performance and yields better convergence properties. Convergence theorems were established for non-smooth convex minimization problems and monotone inclusions. The relaxed inertial forward–backward methods [10,11], the inertial proximal point algorithm [12,13] and the inertial Tseng-type method [2,14] were all developed from this line of work.
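To make the inertial step concrete, the following minimal sketch (not taken from the paper) implements the inertial forward–backward iteration above for the special case $S = \nabla f$ with $f(u) = \frac{1}{2}\|Au - b\|_2^2$ and $T = \partial(\tau\|\cdot\|_1)$, so that the resolvent $J_r^T$ reduces to soft thresholding; the matrix A, step size r and inertial factor θ are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, kappa):
    # Resolvent of T = d(kappa*||.||_1): componentwise soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

def inertial_forward_backward(A, b, tau, r, theta, n_iter=200):
    """Sketch of the inertial forward-backward iteration:
       v_n = u_n + theta*(u_n - u_{n-1}),  u_{n+1} = J_r^T((I - r*S) v_n),
    with S = grad of 0.5*||A u - b||^2 and T = d(tau*||.||_1)."""
    u_prev = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = u + theta * (u - u_prev)                          # inertial extrapolation
        grad = A.T @ (A @ v - b)                              # forward (explicit) step S v
        u_prev, u = u, soft_threshold(v - r * grad, r * tau)  # backward (resolvent) step
    return u

# Illustrative use on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
r = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size kept well inside the admissible range
x_hat = inertial_forward_backward(A, b, tau=0.1, r=r, theta=0.3)
```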
Lorenz and Pock [10] proposed the following inertial forward–backward method for monotone operators:
$v_n = u_n + \theta_n(u_n - u_{n-1}),$
$u_{n+1} = J_{r_n}^T(I - r_n S)v_n, \quad n \geq 1.$
Algorithm (6) differs from algorithm (5) in that $J_{r_n}^T = (I + r_n T)^{-1}$, where $\{r_n\}$ is a sequence of positive reals. The above inertial algorithms converge only weakly. Cholamjiak et al. [11] presented an improved forward–backward method for solving the VIP using the inertial technique. Their algorithm reads
$v_n = u_n + \theta_n(u_n - u_{n-1}),$
$u_{n+1} = \omega_n p + \xi_n v_n + \gamma_n J_{r_n}^T(I - r_n S)v_n, \quad n \geq 1,$
where $r_n \in (0, 2\alpha]$. Numerical experiments were provided, and the algorithm was proven to converge strongly.
Khuangsatung and Kangtunyakarn’s modified variational inclusion problem (MVIP) [15,16] is to determine $u \in H$ such that
$0 \in \sum_{i=1}^N a_i S_i u + T u,$
where $a_i \in (0,1)$ with $\sum_{i=1}^N a_i = 1$, $S_i : H \to H$ and $T : H \to 2^H$. Obviously, if $S_i \equiv S$ for every $i = 1, 2, \ldots, N$, then the MVIP reduces to the VIP. Under suitable conditions, they devised a method for solving fixed point problems of a finite family of nonexpansive mappings together with a finite family of variational inclusion problems. Their method is given by
$q_n^i = \beta_n p_n + (1 - \beta_n)T_i p_n, \quad n \geq 1,$
$p_{n+1} = \omega_n f(p_n) + \xi_n J_\lambda^T\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)p_n + \gamma_n\sum_{i=1}^N a_i q_n^i.$
This paper’s purpose is to present an inertial forward–backward splitting method for solving the modified variational inclusion problem based on the ideas of [11,15,16]. The proposed method modifies Khuangsatung and Kangtunyakarn’s method for higher acceleration by using the inertial technique of Cholamjiak et al. [11]. We further show that it converges strongly under appropriate conditions. In addition, we apply the proposed method to solve the image restoration problem. Image restoration is an essential problem in digital image processing. During data collection, images usually suffer degradation, which can include blurring, information loss due to sampling and quantization effects and various noise sources. The goal of image restoration is to estimate the original image from the degraded data. The proposed method can therefore be used to solve image restoration problems in which the images have undergone a variety of blurring operations. Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as providing a visual representation of the function of certain organs or tissues (physiology). Medical imaging also creates a database of normal anatomy and physiology so that abnormalities can be identified. Occasionally, disturbances occur during the imaging process, such as when X-ray film images are blurred or segments of the X-ray film are missing. In our example, we use the proposed method and the method of Khuangsatung and Kangtunyakarn to restore blurred and motion-damaged X-ray films of the brain and the right shoulder. We also compare the signal-to-noise ratio (SNR) of the proposed method with that of Khuangsatung and Kangtunyakarn’s method in order to assess the image quality.
An overview of the contents of this research is presented below: Section 2 compiles fundamental lemmas and definitions. In Section 3, the suggested methods are completely detailed. Applications for image restoration and numerical experimentation are covered in Section 4. The last section of the work presents the conclusion.

2. Preliminaries

This section collects preliminary statements, lemmas and definitions. Assume H is a real Hilbert space and $K \subseteq H$ is nonempty, closed and convex, with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. The nearest point projection of H onto K is denoted by $P_K$; that is, $\|u - P_K u\| \leq \|u - v\|$ for all $u \in H$ and $v \in K$. It follows that $\langle u - P_K u, v - P_K u\rangle \leq 0$ holds for all $u \in H$ and $v \in K$ [17,18].
Lemma 1 
([18]). For all $u, v \in H$, the following statements hold:
(i) 
$\|u\|^2 - \|v\|^2 - 2\langle u - v, v\rangle \geq \|u - v\|^2$;
(ii) 
$\|u\|^2 + 2\langle v, u + v\rangle \geq \|u + v\|^2$;
(iii) 
$t\|u\|^2 + (1-t)\|v\|^2 - t(1-t)\|u - v\|^2 = \|t u + (1-t)v\|^2$, $t \in [0, 1]$.
A mapping $S : K \to H$ is called α-inverse strongly monotone if there exists $\alpha > 0$ such that, for every $u, v \in K$,
$\langle S u - S v, u - v\rangle \geq \alpha\|S u - S v\|^2.$
Remark 1. 
If S is α-inverse strongly monotone, then S is monotone and Lipschitz continuous.
A self mapping Z on H is said to be nonexpansive if for all u , v H ,
$\|Z u - Z v\| \leq \|u - v\|.$
A self mapping Z on H is said to be firmly nonexpansive if for all u , v H ,
$\|Z u - Z v\|^2 \leq \|u - v\|^2 - \|(I - Z)u - (I - Z)v\|^2,$
or equivalently
$\|Z u - Z v\|^2 \leq \langle Z u - Z v, u - v\rangle.$
Remark 2 
([17]). Z is firmly nonexpansive iff $I - Z$ is firmly nonexpansive. Obviously, the projection $P_K$ is firmly nonexpansive.
Theorem 1 
([19]). Assume that $Z : K \to K$ is a nonexpansive mapping with a fixed point. For $t \in (0, 1)$ and fixed $p \in K$, the unique fixed point $u_t \in K$ of the contraction $u \mapsto t p + (1-t)Z u$ converges strongly, as $t \to 0$, to a fixed point of Z.
Lemma 2 
([20]). Assume that { x n } and { y n } are sequences of nonnegative real numbers such that
$x_{n+1} \leq (1 - \omega_n)x_n + y_n + z_n, \quad n \geq 1,$
where $\{\omega_n\} \subset (0, 1)$ and $\sum_{n=1}^\infty z_n < \infty$. Then, the following hold:
(i) 
$\{x_n\}$ is a bounded sequence if $y_n \leq \omega_n M$ for some $M \geq 0$;
(ii) 
$\lim_{n\to\infty} x_n = 0$ if $\sum_{n=1}^\infty\omega_n = \infty$ and $\limsup_{n\to\infty}\frac{y_n}{\omega_n} \leq 0$.
Lemma 3 
([21]). Assume { z n } is a sequence of nonnegative real numbers such that
$z_{n+1} \leq (1 - \omega_n)z_n + \omega_n t_n, \quad n \geq 1,$
and
$z_{n+1} \leq z_n - \eta_n + \rho_n, \quad n \geq 1,$
where $\{\omega_n\}$ is a sequence in $(0, 1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers and $\{t_n\}$ and $\{\rho_n\}$ are real sequences such that
(i) 
$\sum_{n=1}^\infty\omega_n = \infty$;
(ii) 
$\lim_{n\to\infty}\rho_n = 0$;
(iii) 
$\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty} t_{n_k} \leq 0$ for every subsequence $\{n_k\}$ of $\{n\}$.
Then $\lim_{n\to\infty} z_n = 0$.
Proposition 1 
([22]). Let H be a real Hilbert space and let $j \in \mathbb{N}$ be fixed. Let $\{x_i\}_{i=1}^j \subset H$ and $t_i \geq 0$ for all $i = 1, 2, \ldots, j$ with $\sum_{i=1}^j t_i \leq 1$. Then, we have
$\left\|\sum_{i=1}^j t_i x_i\right\|^2 \leq \frac{\sum_{i=1}^j t_i\|x_i\|^2}{2 - \sum_{i=1}^j t_i}.$

3. Main Result

A set-valued mapping $T : H \to 2^H$ is called monotone if, for all $u, v \in H$, $g \in T(u)$ and $h \in T(v)$ imply $\langle u - v, g - h\rangle \geq 0$. A monotone mapping T is maximal if its graph $G(T) := \{(g, u) \in H \times H : g \in T(u)\}$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for $(u, g) \in H \times H$, $\langle u - v, g - h\rangle \geq 0$ for all $(v, h) \in G(T)$ implies $g \in T(u)$. For a maximal monotone operator $T : H \to 2^H$ and $\lambda > 0$, its associated resolvent of order λ is defined by
$J_\lambda^T := (I + \lambda T)^{-1},$
where I denotes the identity operator. The resolvent is a firmly nonexpansive mapping from H to H with full domain, and the set of fixed points of $J_\lambda^T$ coincides with the set of zeros of T. Note that
$G_\lambda := J_\lambda^T\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right) = (I + \lambda T)^{-1}\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right).$
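As an illustration of how $G_\lambda$ might be evaluated in practice, the sketch below assumes $T = \partial(\tau\|\cdot\|_1)$ (so that $J_\lambda^T$ is the soft-thresholding operator) and that each $S_i$ is supplied as a callable; these concrete choices are assumptions for illustration only.

```python
import numpy as np

def resolvent_l1(x, threshold):
    # J_lambda^T for T = d(tau*||.||_1), evaluated with threshold = lambda*tau.
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

def G_lambda(x, S_list, delta, lam, tau):
    """G_lambda(x) = J_lambda^T( x - lam * sum_i delta_i * S_i(x) ),
    assuming T = d(tau*||.||_1) and each S_i given as a callable."""
    forward = x - lam * sum(d * S(x) for d, S in zip(delta, S_list))
    return resolvent_l1(forward, lam * tau)
```

By Lemma 4(i) below, any fixed point of this map is a zero of $\sum_{i=1}^N\delta_i S_i + T$.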
Lemma 4. 
Let $T : H \to 2^H$ be maximal monotone and, for each $i = 1, 2, \ldots, N$, let $S_i : H \to H$ be $\alpha_i$-inverse strongly monotone with $\alpha = \min\{\alpha_i : i = 1, 2, \ldots, N\}$. Then, for $\lambda > 0$,
(i) $F(G_\lambda) = \left(\sum_{i=1}^N\delta_i S_i + T\right)^{-1}(0)$;
(ii) $\|u - G_r u\| \leq 2\|u - G_\lambda u\|$ for $0 < r \leq \lambda$;
(iii) $\|G_\lambda u - G_\lambda v\|^2 \leq \|u - v\|^2 - \lambda(2\alpha - \lambda)\left\|\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right\|^2 - \left\|(I - J_\lambda^T)\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)u - (I - J_\lambda^T)\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)v\right\|^2$ for all $u, v \in T_\lambda = \{p \in H : \|p\| \leq \lambda\}$. It follows that $G_\lambda$ is a nonexpansive mapping.
Proof. 
(i) By the definition of G λ ,
$u \in F(G_\lambda) \Longleftrightarrow u = G_\lambda u \Longleftrightarrow u = (I + \lambda T)^{-1}\left(u - \lambda\sum_{i=1}^N\delta_i S_i u\right) \Longleftrightarrow u - \lambda\sum_{i=1}^N\delta_i S_i u \in u + \lambda T u \Longleftrightarrow 0 \in \sum_{i=1}^N\delta_i S_i u + T u \Longleftrightarrow u \in \left(\sum_{i=1}^N\delta_i S_i + T\right)^{-1}(0).$
(ii)
Since $G_\lambda u = (I + \lambda T)^{-1}\left(u - \lambda\sum_{i=1}^N\delta_i S_i u\right)$, we have $\frac{u - G_\lambda u}{\lambda} - \sum_{i=1}^N\delta_i S_i u \in T(G_\lambda u)$. Let $0 < r \leq \lambda$; by the monotonicity of T, we obtain that
$\left\langle\frac{u - G_r u}{r} - \frac{u - G_\lambda u}{\lambda}, G_r u - G_\lambda u\right\rangle \geq 0.$
It follows that
$0 \leq r\left\langle\frac{u - G_r u}{r} - \frac{u - G_\lambda u}{\lambda}, G_r u - G_\lambda u\right\rangle = \left\langle\frac{\lambda - r}{\lambda}u - \frac{\lambda - r}{\lambda}G_\lambda u, G_r u - G_\lambda u\right\rangle - \langle G_r u - G_\lambda u, G_r u - G_\lambda u\rangle.$
Since $0 < r \leq \lambda$, we obtain that
$\|G_r u - G_\lambda u\|^2 \leq \left(1 - \frac{r}{\lambda}\right)\langle u - G_\lambda u, G_r u - G_\lambda u\rangle \leq \|u - G_\lambda u\|\,\|G_r u - G_\lambda u\|.$
Hence, $\|u - G_r u\| \leq \|u - G_\lambda u\| + \|G_r u - G_\lambda u\| \leq 2\|u - G_\lambda u\|$.
(iii)
Since T is maximal monotone, it is known that $J_\lambda^T$ is firmly nonexpansive, and so we have
$\|G_\lambda u - G_\lambda v\|^2 = \left\|(I + \lambda T)^{-1}\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)u - (I + \lambda T)^{-1}\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)v\right\|^2 = \|J_\lambda^T\bar{u} - J_\lambda^T\bar{v}\|^2 \leq \|\bar{u} - \bar{v}\|^2 - \|(I - J_\lambda^T)\bar{u} - (I - J_\lambda^T)\bar{v}\|^2,$
where $\bar{u} = \left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)u$ and $\bar{v} = \left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)v$.
Since $S_i$ is $\alpha_i$-inverse strongly monotone with $\alpha = \min\{\alpha_i : i = 1, 2, \ldots, N\}$, we derive that
$\|\bar{u} - \bar{v}\|^2 = \left\|(u - v) - \lambda\left(\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right)\right\|^2 \leq \|u - v\|^2 + \lambda^2\left\|\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right\|^2 - 2\lambda\alpha\left\|\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right\|^2 = \|u - v\|^2 - \lambda(2\alpha - \lambda)\left\|\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right\|^2.$
From (16) and (17), we obtain that
$\|G_\lambda u - G_\lambda v\|^2 \leq \|u - v\|^2 - \lambda(2\alpha - \lambda)\left\|\sum_{i=1}^N\delta_i S_i u - \sum_{i=1}^N\delta_i S_i v\right\|^2 - \|(I - J_\lambda^T)\bar{u} - (I - J_\lambda^T)\bar{v}\|^2.$
Obviously, the mapping G λ is nonexpansive. □
The following theorem establishes, under suitable conditions, the strong convergence of an inertial forward–backward splitting algorithm for solving the modified variational inclusion problem in a real Hilbert space.
Theorem 2. 
Let K be a nonempty closed convex subset of a real Hilbert space H. Let $S_i : H \to H$ be an $\alpha_i$-inverse strongly monotone mapping with $\eta = \min\{\alpha_i\}$ and let $T : H \to 2^H$ be a multi-valued maximal monotone mapping such that $\Omega = \left(\sum_{i=1}^N\delta_i S_i + T\right)^{-1}(0) \neq \emptyset$. Suppose that the sequence $\{u_n\}$ is generated by $u_1 \in H$ and
$v_n = u_n + \theta_n(u_n - u_{n-1}),$
$u_{n+1} = \omega_n u + \xi_n J_\lambda^T\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)v_n + \gamma_n v_n,$
for all $\delta_i \leq 1$, $0 < \lambda < 2\eta$, $\{\theta_n\} \subset [0, \theta]$ with $\theta \in [0, 1)$ and $n \in \mathbb{N}$. Let $\{\omega_n\}, \{\xi_n\}, \{\gamma_n\} \subset [0, 1]$ with $\omega_n + \xi_n + \gamma_n = 1$. Assume that the following conditions hold:
(i) 
$\sum_{n=1}^\infty\theta_n\|u_n - u_{n-1}\| < \infty$;
(ii) 
$\lim_{n\to\infty}\omega_n = 0$, $\sum_{n=0}^\infty\omega_n = \infty$ and $\sum_{n=0}^\infty|\omega_{n+1} - \omega_n| < \infty$;
(iii) 
$0 < c \leq \xi_n \leq d < 1$ for all $n \geq 1$ and $\sum_{n=0}^\infty|\xi_{n+1} - \xi_n| < \infty$;
(iv) 
$\sum_{i=1}^N\delta_i = 1$.
Then, the sequence $\{u_n\}$ converges strongly to $z = P_\Omega p$.
Proof. 
For each $n \in \mathbb{N}$, we put $G_\lambda = J_\lambda^T\left(I - \lambda\sum_{i=1}^N\delta_i S_i\right)$. Let $\{z_n\}$ be defined by
$z_{n+1} = \omega_n p + \xi_n G_\lambda z_n + \gamma_n z_n.$
Using Lemma 1, we have
$\|u_{n+1} - z_{n+1}\| \leq \xi_n\|G_\lambda v_n - G_\lambda z_n\| + \gamma_n\|v_n - z_n\| \leq \xi_n\|v_n - z_n\| + \gamma_n\|v_n - z_n\| = (1 - \omega_n)\|v_n - z_n\| \leq (1 - \omega_n)\|u_n - z_n\| + \theta_n\|u_n - u_{n-1}\|.$
By our assumptions and Lemma 2(ii), we obtain that
$\lim_{n\to\infty}\|u_n - z_n\| = 0.$
Let z = P Ω p . Then
$\|z_{n+1} - z\| \leq \omega_n\|p - z\| + \xi_n\|G_\lambda z_n - z\| + \gamma_n\|z_n - z\| \leq \omega_n\|p - z\| + \xi_n\|z_n - z\| + \gamma_n\|z_n - z\| = \omega_n\|p - z\| + (1 - \omega_n)\|z_n - z\|.$
This shows that $\{z_n\}$ is bounded by Lemma 2(i). Therefore, $\{u_n\}$ and $\{v_n\}$ are also bounded. We observe that
$\|v_n - z\|^2 = \|u_n + \theta_n(u_n - u_{n-1}) - z\|^2 \leq \|u_n - z\|^2 + 2\theta_n\langle u_n - u_{n-1}, v_n - z\rangle$
and
$\|u_{n+1} - z\|^2 = \|\omega_n p + \xi_n G_\lambda v_n + \gamma_n v_n - z\|^2 \leq \|\xi_n(G_\lambda v_n - z) + \gamma_n(v_n - z)\|^2 + 2\omega_n\langle p - z, u_{n+1} - z\rangle.$
On the other hand, by Proposition 1 and Lemma 4(iii), we obtain that
ξ n ( G λ v n z ) + γ n ( v n z ) 2 1 1 + ω n ( ξ n G λ v n z 2 + γ n v n z 2 ) ξ n 1 + ω n ( v n z 2 λ ( 2 α λ ) i = 1 N δ i S i v n i = 1 N δ i S i z 2 ( I J λ T ) ( I λ i = 1 N δ i S i ) v n ( I J λ T ) ( I λ i = 1 N δ i S i ) z ) + γ n 1 + ω n v n z 2
= ξ n 1 + ω n v n z 2 ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i v n i = 1 N δ i S i z 2 ξ n 1 + ω n ( I J λ T ) ( I λ i = 1 N δ i S i ) v n ( I J λ T ) ( I λ i = 1 N δ i S i ) z + γ n 1 + ω n v n z 2 1 ω n 1 + ω n v n z 2 ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i ( v n z ) 2 ξ n 1 + ω n v n λ i = 1 N δ i S i v n G λ v n + λ i = 1 N δ i S i z .
Substituting (22) and (24) into (23), we obtain that
u n + 1 z 2 1 ω n 1 + ω n v n z 2 ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i v n i = 1 N δ i S i z 2 ξ n 1 + ω n v n λ i = 1 N δ i S i v n G λ v n + λ i = 1 N δ i S i z + 2 ω n p z , u n + 1 z 1 ω n 1 + ω n ( u n z 2 + 2 θ n u n u n 1 , v n z ) ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i v n i = 1 N δ i S i z 2 ξ n 1 + ω n v n λ i = 1 N δ i S i v n G λ v n + λ i = 1 N δ i S i z + 2 ω n p z , u n + 1 z = ( 1 2 ω n 1 + ω n ) u n z 2 ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i v n i = 1 N δ i S i z 2 + ( 2 ω n 1 + ω n ) ( 1 ω n ω n θ n u n u n 1 , v n z + ( 1 + ω n ) p z , u n + 1 z ) ξ n 1 + ω n v n λ i = 1 N δ i S i v n G λ v n + λ i = 1 N δ i S i z .
We can check that $\frac{2\omega_n}{1+\omega_n} \in (0, 1)$. From (25), we have
$\|u_{n+1} - z\|^2 \leq \left(1 - \frac{2\omega_n}{1+\omega_n}\right)\|u_n - z\|^2 + \frac{2\omega_n}{1+\omega_n}\left(\frac{1-\omega_n}{\omega_n}\theta_n\langle u_n - u_{n-1}, v_n - z\rangle + (1 + \omega_n)\langle p - z, u_{n+1} - z\rangle\right)$
and
u n + 1 z 2 u n z 2 ξ n λ ( 2 α λ ) 1 + ω n i = 1 N δ i S i v n i = 1 N δ i S i z 2 ξ n 1 + ω n v n λ i = 1 N δ i S i v n G λ v n + λ i = 1 N δ i S i z + 2 ( 1 ω n ) 1 + ω n θ n u n u n 1 , v n z + 2 ω n p z , u n + 1 z ) .
Then, (26) and (27) are reduced to the following
$a_{n+1} \leq (1 - \omega_n)a_n + \omega_n b_n, \quad n \geq 1,$
and
$a_{n+1} \leq a_n - t_n + s_n, \quad n \geq 1,$
where
$b_n = \frac{1-\omega_n}{\omega_n}\theta_n\langle u_n - u_{n-1}, v_n - z\rangle + (1 + \omega_n)\langle p - z, u_{n+1} - z\rangle,$
$t_n = \frac{\xi_n\lambda(2\alpha - \lambda)}{1+\omega_n}\left\|\sum_{i=1}^N\delta_i S_i v_n - \sum_{i=1}^N\delta_i S_i z\right\|^2 + \frac{\xi_n}{1+\omega_n}\left\|v_n - \lambda\sum_{i=1}^N\delta_i S_i v_n - G_\lambda v_n + \lambda\sum_{i=1}^N\delta_i S_i z\right\|,$
$s_n = \frac{2(1-\omega_n)}{1+\omega_n}\theta_n\langle u_n - u_{n-1}, v_n - z\rangle + 2\omega_n\langle p - z, u_{n+1} - z\rangle.$
Since $\sum_{n=0}^\infty\omega_n = \infty$, it follows that $\sum_{n=0}^\infty\frac{2\omega_n}{1+\omega_n} = \infty$. By the boundedness of $\{v_n\}$ and $\{u_n\}$, $\lim_{n\to\infty}\omega_n = 0$ and condition (i), we see that $\lim_{n\to\infty}s_n = 0$. In order to complete the proof, using Lemma 3, it remains to show that $\lim_{k\to\infty}t_{n_k} = 0$ implies $\limsup_{k\to\infty}b_{n_k} \leq 0$ for any subsequence $\{t_{n_k}\}$ of $\{t_n\}$. Let $\{t_{n_k}\}$ be a subsequence of $\{t_n\}$ such that $\lim_{k\to\infty}t_{n_k} = 0$. With these assumptions, we can deduce that
$\lim_{k\to\infty}\left\|\sum_{i=1}^N\delta_i S_i v_{n_k} - \sum_{i=1}^N\delta_i S_i z\right\|^2 = \lim_{k\to\infty}\left\|v_{n_k} - \lambda\sum_{i=1}^N\delta_i S_i v_{n_k} - G_\lambda v_{n_k} + \lambda\sum_{i=1}^N\delta_i S_i z\right\| = 0.$
By the triangle inequality
$\lim_{k\to\infty}\|v_{n_k} - G_\lambda v_{n_k}\| \leq \lim_{k\to\infty}\left\|v_{n_k} - \lambda\sum_{i=1}^N\delta_i S_i v_{n_k} - G_\lambda v_{n_k} + \lambda\sum_{i=1}^N\delta_i S_i z\right\| + \lambda\lim_{k\to\infty}\left\|\sum_{i=1}^N\delta_i S_i v_{n_k} - \sum_{i=1}^N\delta_i S_i z\right\|.$
Then, we have
$\lim_{k\to\infty}\|v_{n_k} - G_\lambda v_{n_k}\| = 0.$
Since $\liminf_{n\to\infty}\lambda_n > 0$, there is $\lambda > 0$ such that $\lambda_n \geq \lambda$ for all $n \geq 1$. In particular, $\lambda_{n_k} \geq \lambda$ for all $k \geq 1$. By Lemma 4(ii), we have
$\|G_\lambda v_{n_k} - v_{n_k}\| \leq 2\|G_{\lambda_n} v_{n_k} - v_{n_k}\|.$
Then, by (28), we obtain
$\limsup_{k\to\infty}\|G_\lambda v_{n_k} - v_{n_k}\| \leq 0.$
This implies that
$\lim_{k\to\infty}\|G_\lambda v_{n_k} - v_{n_k}\| = 0.$
On the other hand, we have
$\|G_\lambda v_{n_k} - u_{n_k}\| \leq \|G_\lambda v_{n_k} - v_{n_k}\| + \|v_{n_k} - u_{n_k}\| = \|G_\lambda v_{n_k} - v_{n_k}\| + \|u_{n_k} + \theta_{n_k}(u_{n_k} - u_{n_k-1}) - u_{n_k}\| = \|G_\lambda v_{n_k} - v_{n_k}\| + \theta_{n_k}\|u_{n_k} - u_{n_k-1}\|.$
By condition (i) and (30), we obtain
$\lim_{k\to\infty}\|G_\lambda v_{n_k} - u_{n_k}\| = 0.$
Let $z_t = t u + (1-t)G_\lambda z_t$ for all $t \in (0, 1)$. Then, by Theorem 1, we have $\lim_{t\to 0}z_t = z \in F(G_\lambda)$.
By Lemma 1(ii) and Lemma 4(iii), we know that G λ is nonexpansive. Thus
z t u n k 2 = t p + ( 1 t ) G λ z t u n k 2 = t ( p u n k ) + ( 1 t ) ( G λ z t u n k ) 2 ( 1 t ) 2 G λ z t u n k 2 + 2 t p u n k , z t u n k = ( 1 t ) 2 G λ z t u n k 2 + 2 t p z t , z t u n k + 2 t z t u n k 2 ( 1 t ) 2 ( G λ z t G λ v n k + G λ v n k u n k ) 2 + 2 t p z t , z t u n k + 2 t z t u n k 2 ( 1 t ) 2 ( z t v n k + G λ v n k u n k ) 2 + 2 t p z t , z t u n k + 2 t z t u n k 2 ( 1 t ) 2 ( z t u n k + θ n u n k u n k 1 + G λ v n k u n k ) 2 2 t z t p , z t u n k + 2 t z t u n k 2 .
This shows that
$\langle z_t - p, z_t - u_{n_k}\rangle \leq \frac{(1-t)^2}{2t}\left(\|z_t - u_{n_k}\| + \theta_n\|u_{n_k} - u_{n_k-1}\| + \|G_\lambda v_{n_k} - u_{n_k}\|\right)^2 + \frac{2t - 1}{2t}\|z_t - u_{n_k}\|^2.$
From condition (i), (32) and (34), we obtain, for some $T > 0$ large enough,
$\limsup_{k\to\infty}\langle z_t - p, z_t - u_{n_k}\rangle \leq \frac{t T^2}{2}.$
Taking $t \to 0$ in (35), we obtain
$\limsup_{k\to\infty}\langle z - p, z - u_{n_k}\rangle \leq 0.$
On the other hand, we have
u n k + 1 u n k ω n k p + u n k + ξ n k G λ v n k u n k + γ n k v n k u n k ω n k p + u n k + ξ n k G λ v n k u n k + ( 1 ω n k ) v n k u n k = ω n k p + u n k + ξ n k G λ v n k u n k + ( 1 ω n k ) θ n k u n k u n k 1 .
By conditions (i), (ii), (36) and (37), we obtain that
$\lim_{k\to\infty}\|u_{n_k+1} - u_{n_k}\| = 0.$
Combining (36) and (38),
$\limsup_{k\to\infty}\langle z - p, z - u_{n_k+1}\rangle \leq 0.$
Therefore,
$\frac{1-\omega_{n_k}}{\omega_{n_k}}\theta_{n_k}\langle u_{n_k} - u_{n_k-1}, v_{n_k} - z\rangle \leq \frac{1-\omega_{n_k}}{\omega_{n_k}}\theta_{n_k}\|u_{n_k} - u_{n_k-1}\|\,\|v_{n_k} - z\|.$
From condition (i), it also follows that
$\limsup_{k\to\infty}\frac{1-\omega_{n_k}}{\omega_{n_k}}\theta_{n_k}\langle u_{n_k} - u_{n_k-1}, v_{n_k} - z\rangle \leq 0.$
Hence, we obtain that $\limsup_{k\to\infty}b_{n_k} \leq 0$. By Lemma 3, we conclude that $\lim_{n\to\infty}\|u_n - z\|^2 = 0$. Therefore, $u_n \to z$ as $n \to \infty$. This completes the proof. □
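For readers who wish to experiment with the iteration of Theorem 2, the following minimal sketch implements it with the resolvent $J_\lambda^T$, the operators $S_i$ and the parameter schedules $\omega_n$, $\xi_n$, $\theta_n$ supplied as callables. All concrete choices passed to it are the caller's assumptions and should respect the conditions of the theorem (for instance $0 < \lambda < 2\eta$ and $\sum_{i=1}^N\delta_i = 1$).

```python
import numpy as np

def theorem2_iteration(u1, p, S_list, delta, resolvent, lam,
                       omega, xi, theta, n_iter=300):
    """Sketch of the inertial iteration of Theorem 2:
       v_n = u_n + theta_n*(u_n - u_{n-1}),
       u_{n+1} = omega_n*p + xi_n*J_lam^T(v_n - lam*sum_i delta_i*S_i(v_n)) + gamma_n*v_n,
    where resolvent(x, lam) plays the role of J_lam^T."""
    u_prev = u1.copy()
    u = u1.copy()
    for n in range(1, n_iter + 1):
        w, x = omega(n), xi(n)
        g = 1.0 - w - x                    # gamma_n, so the three weights sum to 1
        v = u + theta(n) * (u - u_prev)    # inertial step
        forward = v - lam * sum(d * S(v) for d, S in zip(delta, S_list))
        u_prev, u = u, w * p + x * resolvent(forward, lam) + g * v
    return u
```

For example, with $T = \partial(\tau\|\cdot\|_1)$ one would pass the soft-thresholding operator as `resolvent`, together with schedules such as $\omega_n = \frac{1}{2n+2}$ as in Example 1 below.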

4. Numerical Result

In this section, we demonstrate how our proposed method can be applied to solve the image recovery problem. In the example, we also compare the proposed method with the method of Theorem 3.1 in [16] in terms of the signal-to-noise ratio (SNR) of the recovered image, which demonstrates the capability of our proposed algorithm. We consider a linear inverse problem $y = M x + \epsilon$, where $x \in \mathbb{R}^{n\times 1}$ is the original image, $y \in \mathbb{R}^{m\times 1}$ is the observed image, $\epsilon$ is additive noise and $M \in \mathbb{R}^{m\times n}$ is the blurring operator. We recover an approximation of the original image x using the basis pursuit denoising formulation:
$\min_{x\in\mathbb{R}^{n\times 1}} P(x) = \frac{1}{2}\|y - M x\|_2^2 + \tau\|x\|_1,$
where $\|x\|_1 = \sum_i|x_i|$ and τ is a parameter related to the noise ϵ. Problem (41) is widely recognized as the least absolute shrinkage and selection operator (LASSO) problem.
We focus on minimizing a special case of the LASSO problem (41): $\sum_{i=1}^N\delta_i A_i + B$, where $A_i = \frac{1}{2}\|y_i - M_i x\|_2^2$ and $B = \tau\|x\|_1$, with x the original image, $M_i$ the blurring matrix and $y_i$ the image blurred by $M_i$ for all $i = 1, 2, \ldots, N$. Observe that $A_i$ is a smooth function with an $L_i$-Lipschitz continuous gradient $\nabla A_i = M_i^T(M_i x - y_i)$, where $L_i = \|M_i^T M_i\|$.
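To connect this special case with the algorithm, the operators $S_i$ of the modified inclusion are taken to be the gradients $\nabla A_i$, and the resolvent associated with $B = \tau\|x\|_1$ is the soft-thresholding operator. The helpers below sketch these ingredients for explicit matrices $M_i$; representing the blurs as dense matrices rather than convolution filters is a simplifying assumption made only for illustration.

```python
import numpy as np

def make_S_i(M_i, y_i):
    # S_i = grad A_i with A_i(x) = 0.5*||y_i - M_i x||_2^2, i.e. S_i(x) = M_i^T (M_i x - y_i).
    return lambda x: M_i.T @ (M_i @ x - y_i)

def lipschitz_constant(M_i):
    # L_i = ||M_i^T M_i||; grad A_i is then (1/L_i)-inverse strongly monotone (Baillon-Haddad).
    return np.linalg.norm(M_i.T @ M_i, 2)

def prox_l1(x, threshold):
    # Resolvent of B = tau*||.||_1 with threshold = lambda*tau (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)
```

These callables can be passed directly to the sketch of the Theorem 2 iteration given at the end of Section 3.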
Example 1 
(Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10). Let $\tilde{x}$ be the deblurred image. We show how to solve the special case of the LASSO problem by using the flowchart in Figure 1. We recover an original image $x \in \mathbb{R}^{n\times 1}$. Let N = 4. As a simple linearized model of image recovery, consider the linear filtering operators $M_i x = \varphi_i * x$, where $\varphi_1$ is a motion blur with motion length 21 pixels (len = 21) and motion orientation 11 ($\theta = 11$), $\varphi_2$ is a Gaussian blur of filter size $9\times 9$ with standard deviation $\sigma = 2$, $\varphi_3$ is a circular averaging filter with radius r = 4 and $\varphi_4$ is an averaging blur of filter size $9\times 9$. In this experiment, we use the proposed algorithm to solve problem (41). All parameters are set to the following values: $\omega_n = \frac{1}{2n+2}$, $\xi_n = \frac{1}{(2n+2)^2}$, $u = 0.01$, $\theta_n = 0.1$, $\delta_i = 0.25$ and $\lambda = 0.001$. As a basic stopping criterion, we deem 350 iterations sufficient.
The following numerical results are presented. Figure 2 shows the original grayscale images: (a) an X-ray film of the brain and (b) an X-ray film of the right shoulder. Figure 3 and Figure 6 show the X-ray films of the brain and the right shoulder blurred by the filters $M_i x$ in the degradation part of Figure 1. In this example, we set N = 4, so we have $M_1 x$, $M_2 x$, $M_3 x$ and $M_4 x$. Figure 4a and Figure 7a show the X-ray films of the brain and the right shoulder recovered via Theorem 2, while Figure 4b and Figure 7b show those recovered via Theorem 3.1 in [16] (Khuangsatung and Kangtunyakarn’s method). Figure 9 shows the X-ray film of the brain recovered via the proposed method when tuned over the parameter λ.
Additionally, we use the signal-to-noise ratio (SNR),
$\mathrm{SNR} = 20\log_{10}\frac{\|x\|_2}{\|x - x_{n+1}\|_2},$
to measure the quality of recovery; a higher SNR indicates a higher quality of recovery. Figure 5 shows the SNR results for Figure 4a,b; the SNR of Figure 4a is higher than that of Figure 4b, which means that Figure 4a is the better restoration. Figure 8 shows the SNR results for Figure 7a,b; again, the SNR of Figure 7a is higher than that of Figure 7b, so Figure 7a is the better restoration. Figure 10 shows the SNR for Figure 9a–d; selecting a higher λ value within the specified range produces a higher-quality image than a lower λ value.
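For reference, the SNR defined above can be computed as in the following sketch (the function name and array conventions are our own):

```python
import numpy as np

def snr_db(x_true, x_rec):
    """SNR in decibels between the original image x and a reconstruction:
    SNR = 20*log10(||x||_2 / ||x - x_rec||_2); larger values indicate better recovery."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_rec))
```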
Remark 3. 
Experimentally, the SNR value demonstrates that our proposed algorithm is more effective than the algorithm that was introduced by Khuangsatung and Kangtunyakarn [15,16] in solving the image recovery problem (41).
Figure 1. The image restoration process flowchart.
Figure 2. Original grayscale images. (a) X-ray film of the brain image and (b) X-ray film of the right shoulder image.
Figure 3. Blurred X-ray film of the brain image with filtering M i x by (a) M 1 x , (b) M 2 x , (c) M 3 x and (d) M 4 x .
Figure 4. (a) X-ray film of the brain image obtained via Theorem 2 and (b) X-ray film of the brain image obtained via Theorem 3.1 in [16].
Figure 5. SNR results of X-ray films of the brain images between Figure 4a and Figure 4b.
Figure 6. Blurred X-ray film of the right shoulder image with filtering M i x by (a) M 1 x , (b) M 2 x , (c) M 3 x and (d) M 4 x .
Figure 7. (a) X-ray film of the right shoulder image obtained via Theorem 2 and (b) X-ray film of the right shoulder image obtained via Theorem 3.1 in [16].
Figure 8. The SNR results of X-ray film of the right shoulder images between Figure 7a and Figure 7b.
Figure 9. X-ray film of the brain image obtained via Theorem 2 when Example 1 was tuned for the parameter λ by setting (a) λ = 0.25, (b) λ = 0.5, (c) λ = 0.75, (d) λ = 1.
Figure 10. The SNR of Figure 9a–d.

5. Conclusions

An inertial forward–backward splitting method is presented for solving modified variational inclusion problems, and we demonstrate its strong convergence under appropriate conditions. Moreover, we apply the proposed method to solve the image restoration problem. In our application, we use the proposed method and Khuangsatung and Kangtunyakarn’s method to restore two medical images. To compare image quality, we also compare the signal-to-noise ratio (SNR) of the proposed method with that of Khuangsatung and Kangtunyakarn’s method. Finally, the SNR values demonstrate that our proposed algorithm is more effective than the algorithm introduced by Khuangsatung and Kangtunyakarn in solving the image recovery problem.

Author Contributions

Conceptualization, K.S. (Kanokwan Sitthithakerngkiet) and T.S.; methodology, K.S. (Kamonrat Sombut); software, K.S. (Kanokwan Sitthithakerngkiet) and T.S.; validation, A.A., K.S. (Kamonrat Sombut) and T.S.; formal analysis, T.S.; investigation, K.S. (Kamonrat Sombut); resources, K.S. (Kamonrat Sombut) and T.S.; writing—original draft preparation, K.S. (Kamonrat Sombut) and T.S.; writing—review and editing, T.S.; visualization, K.S. (Kamonrat Sombut); supervision, T.S.; project administration, K.S. (Kamonrat Sombut) and T.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Science, Research and Innovation Fund (NSRF), King Mongkut’s University of Technology North Bangkok with Contract No. KMUTNB-FF-66-36.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank King Mongkut’s University of Technology North Bangkok (KMUTNB), Rajamangala University of Technology Thanyaburi (RMUTT) and Nakhon Sawan Rajabhat University. We appreciate the anonymous referee’s thorough review of the manuscript and helpful suggestions for refining the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tseng, P. A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 1998, 38, 431–446.
2. Seangwattana, T.; Sombut, K.; Arunchai, A.; Sitthithakerngkiet, K. A Modified Tseng’s Method for Solving the Modified Variational Inclusion Problems and Its Applications. Symmetry 2021, 13, 2250.
3. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3, 459–470.
4. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
5. Arunchai, A.; Seangwattana, T.; Sitthithakerngkiet, K.; Sombut, K. Image restoration by using a modified proximal point algorithm. AIMS Math. 2023, 8, 9557–9575.
6. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
7. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
8. Thong, D.V.; Cholamjiak, P. Strong convergence of a forward–backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 2019, 38, 94.
9. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
10. Lorenz, D.; Pock, T. An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
11. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 1–17.
12. Caihua, C.; Shiqian, M.; Junfeng, Y. A General Inertial Proximal Point Algorithm for Mixed Variational Inequality Problem. SIAM J. Optim. 2015, 25, 2120–2142.
13. Kesornprom, S.; Pholasa, N. Strong Convergence of the Inertial Proximal Algorithm for the Split Variational Inclusion Problem in Hilbert Spaces. Thai J. Math. 2020, 18, 1401–1415.
14. Abubakar, A.; Kumam, P.; Ibrahim, A.H.; Padcharoen, A. Relaxed inertial Tseng’s type method for solving the inclusion problem with application to image restoration. Mathematics 2020, 8, 818.
15. Khuangsatung, W.; Kangtunyakarn, A. Algorithm of a new variational inclusion problem and strictly pseudononspreading mapping with application. Fixed Point Theory Appl. 2014, 209, 1–27.
16. Khuangsatung, W.; Kangtunyakarn, A. A theorem of variational inclusion problems and various nonlinear mappings. Appl. Anal. 2018, 97, 1172–1186.
17. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
18. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
19. Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292.
20. Mainge, P.E. Inertial iterative process for fixed points of certain quasinonexpansive mappings. Set-Valued Anal. 2007, 15, 67–79.
21. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
22. Cholamjiak, P. A generalized forward–backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms 1994, 8, 221–239.
