Article

A Relaxed Inertial Method for Solving Monotone Inclusion Problems with Applications

by Chunxiang Zong, Yuchao Tang * and Guofeng Zhang
1 School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China
2 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2024, 16(4), 466; https://doi.org/10.3390/sym16040466
Submission received: 9 March 2024 / Revised: 26 March 2024 / Accepted: 3 April 2024 / Published: 11 April 2024
(This article belongs to the Special Issue Advanced Optimization Methods and Their Applications)

Abstract

We study a relaxed inertial forward–backward–half-forward splitting approach with variable step size to solve a monotone inclusion problem involving a maximal monotone operator, a cocoercive operator, and a monotone Lipschitz operator. The convergence of the sequence of iterates, obtained by discretising a continuous-time dynamical system, is established under suitable conditions. Given the challenges associated with computing the resolvent of a composite operator, the proposed method is also employed to tackle the composite monotone inclusion problem, and a convergence analysis is conducted under certain conditions. To demonstrate the effectiveness of the algorithm, numerical experiments are performed on the image deblurring problem.

1. Introduction

In a (real) Hilbert space $\mathcal{H}$, we focus on the monotone inclusion problem involving the sum of three operators:

find $x \in \mathcal{H}$ such that $0 \in Ax + Bx + Cx$,   (1)

where $A : \mathcal{H} \to 2^{\mathcal{H}}$ is maximal monotone, $B : \mathcal{H} \to \mathcal{H}$ is monotone and $L$-Lipschitz continuous with $L > 0$, and $C : \mathcal{H} \to \mathcal{H}$ is $\beta$-cocoercive for some $\beta > 0$. Moreover, let $A + B$ be maximal monotone and assume that (1) has a solution, namely,

$\operatorname{zer}(A + B + C) \neq \emptyset$.
Problem (1) captures numerous significant problems in convex optimization, signal and image processing, saddle point problems, variational inequalities, partial differential equations, and related areas; see, for example, [1,2,3,4].
In recent years, many popular algorithms dealing with monotone inclusion problems involving the sum of three or more operators have appeared in the literature. Although traditional splitting algorithms [5,6,7,8] play an indispensable part in addressing monotone inclusions involving the sum of two operators, they cannot be applied directly beyond that setting. A generalized forward–backward splitting (GFBS) method [3] was designed to address the monotone inclusion problem

find $x \in \mathcal{H}$ such that $0 \in \sum_{i=1}^{n} A_i x + Cx$,   (2)

where $n \geq 1$, the operators $A_i : \mathcal{H} \to 2^{\mathcal{H}}$ ($i = 1, \ldots, n$) are maximal monotone, and $C : \mathcal{H} \to \mathcal{H}$ is as in (1). A subsequent work by Raguet and Landrieu [9] addressed (2) using a preconditioned generalized forward–backward splitting algorithm. Problem (2) can be expressed equivalently as the following monotone inclusion formulated in a product space:

find $x \in \mathcal{H}$ such that $0 \in Ax + N_V x + Cx$,   (3)

where $A : \mathcal{H} \to 2^{\mathcal{H}}$ and $C : \mathcal{H} \to \mathcal{H}$ are as in (1), and $N_V$ denotes the normal cone of a closed vector subspace $V$. Two novel approaches for solving (3) are presented in [10], where the two methods are equivalent under appropriate conditions. Interestingly, $N_V$ can be replaced by a general maximal monotone operator; however, the resolvent in this case is no longer linear, and establishing convergence requires considerably more work. Davis and Yin [11] accomplished this with a three-operator splitting method, which, in a sense, extends the GFBS method. It is a remarkable fact that the links between the above methods were precisely studied by Raguet [12], who also derived a new approach to solve (2) with an extra maximal monotone operator $M$. Note that a variant of (3) arises when $N_V$ is generalized to a maximal monotone operator and $C$ is relaxed to be monotone and Lipschitz continuous. A new approach [13] was established for this case using computer-assisted proof techniques. In contrast to [13], two new schemes [14] were derived for the same problem by discretising a continuous-time dynamical system. If $N_V$ is replaced by a monotone Lipschitz operator $B$, (3) becomes (1). Concerning (1), a forward–backward–half-forward (FBHF) splitting method was derived by Briceño-Arias and Davis [15], who ingeniously exploited the cocoercivity of $C$ by evaluating it only once per iteration. See also [16,17,18] for recent advances in four-operator methods.
Designed as an acceleration technique, the inertial scheme is a powerful approach in which each new iterate is defined using the previous two iterates. The basic idea was first considered in [19] as the heavy ball method and was further developed in [20]. Later, Güler [21] and Alvarez et al. [22] generalized the accelerated scheme of [20] to the proximal point algorithm and to maximal monotone problems, respectively. Since then, numerous works involving inertial features have been discussed and studied in [23,24,25,26,27,28,29].
Relaxation approaches, a principal tool in resolving monotone inclusions, offer greater flexibility to iterative schemes (see [4,30]). In particular, it makes sense to combine inertial and relaxation parameters so as to enjoy the advantages of both. Motivated by the inertial proximal algorithm [31], a relaxed inertial proximal algorithm (RIPA) for finding the zero of a maximal monotone operator was reported by Attouch and Cabot [32], who also exploited RIPA to address non-smooth convex minimization problems and studied convergence rates in [33]. Further research addressed the more general problem of finding a zero of the sum of two operators, one of which is cocoercive [34]. Meanwhile, the idea of combining inertial effects with relaxation has also been used in the context of the Douglas–Rachford algorithm [35], Tseng's forward–backward–forward algorithm [36], and the alternating minimization algorithm [37].
This paper aims to develop a relaxed inertial forward–backward–half-forward (RIFBHF) scheme that extends the FBHF method [15] by combining inertial effects and relaxation parameters to solve (1). It is noteworthy that the FBHF method [15] was considered with a set constraint ($x \in X$) in the monotone inclusion (1); for simplicity, we only study (1). Specifically, the relaxed inertial algorithm is derived by discretising a continuous-time dynamical system [38], and its convergence is analysed under certain assumptions. We also discuss the relationship between the relaxation parameters and the inertial effects. Since evaluating the resolvent of $L^* B L$ is generally challenging, solving the composite monotone inclusion is not straightforward. Drawing on the primal–dual idea introduced in [39], the composite monotone inclusion can be reformulated equivalently in the form (1), which can then be addressed by our scheme. A convex minimization problem is solved in the same manner. Finally, numerical tests are designed to validate the effectiveness of the proposed algorithm.
The structure of the paper is outlined as follows. Section 2 provides an introduction to fundamental definitions and lemmas. Section 3 presents the development of a relaxed inertial forward–backward–half-forward splitting algorithm through discretisations of a continuous-time dynamical system, accompanied by a comprehensive convergence analysis. In Section 4, we apply the proposed algorithm to solve the composite monotone inclusion and the convex minimization problem. Section 5 presents several numerical experiments to demonstrate the effectiveness of the proposed algorithm. Finally, conclusions are given in the last section.

2. Preliminaries

In the following discussion, $\mathcal{H}$ and $\mathcal{G}$ are real Hilbert spaces equipped with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\| \cdot \|$, and $\mathbb{N}$ represents the set of nonnegative integers. The symbols ⇀ and → signify weak and strong convergence, respectively. $\mathcal{H} \oplus \mathcal{G}$ denotes the Hilbert direct sum of $\mathcal{H}$ and $\mathcal{G}$. The set of proper lower semicontinuous convex functions from $\mathcal{H}$ to $(-\infty, +\infty]$ is denoted by $\Gamma_0(\mathcal{H})$. The following notation is used:
  • The set of zeros of $A$: $\operatorname{zer} A := \{ x \in \mathcal{H} \mid 0 \in Ax \}$.
  • The domain of $A$: $\operatorname{dom} A := \{ x \in \mathcal{H} \mid Ax \neq \emptyset \}$.
  • The range of $A$: $\operatorname{ran} A := \{ y \in \mathcal{H} \mid \exists\, x \in \mathcal{H} : y \in Ax \}$.
  • The graph of $A$: $\operatorname{gra} A := \{ (x, y) \in \mathcal{H}^2 \mid y \in Ax \}$.
The following definitions and lemmas are standard and can be found in the monograph [4].
Let $A : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued operator; then,
  • $A$ is monotone if $\langle x - y, u - v \rangle \geq 0$ for all $(x, u)$ and $(y, v)$ belonging to $\operatorname{gra} A$.
  • $A$ is maximal monotone if no other monotone operator $B : \mathcal{H} \to 2^{\mathcal{H}}$ exists whose graph strictly contains the graph of $A$.
  • $A$ is $\beta$-strongly monotone with $\beta \in (0, +\infty)$ if $\langle x - y, u - v \rangle \geq \beta \| x - y \|^2$ for all $(x, u) \in \operatorname{gra} A$ and $(y, v) \in \operatorname{gra} A$.
  • $A$ is $\beta$-cocoercive with $\beta \in (0, +\infty)$ if $\beta \| Ax - Ay \|^2 \leq \langle Ax - Ay, x - y \rangle$ for all $x, y \in \mathcal{H}$.
  • The resolvent of $A$ is defined by
    $J_{\lambda A} = (\mathrm{Id} + \lambda A)^{-1}$,
    where $\mathrm{Id}$ is the identity mapping and $\lambda > 0$. A mapping $A : \mathcal{H} \to \mathcal{H}$ is $L$-Lipschitz continuous with $L > 0$ if $\| Ax - Ay \| \leq L \| x - y \|$ holds for every pair of points $x, y \in \mathcal{H}$; in particular, $A$ is called nonexpansive when $L = 1$.
Let $f \in \Gamma_0(\mathcal{H})$; the subdifferential of $f$, denoted by $\partial f$, is defined as $\partial f : \mathcal{H} \to 2^{\mathcal{H}} : x \mapsto \{ u \in \mathcal{H} : \langle y - x, u \rangle + f(x) \leq f(y) \ \forall y \in \mathcal{H} \}$. It is well known that $\partial f$ is maximal monotone. The proximity operator of $f \in \Gamma_0(\mathcal{H})$ is then defined as

$\operatorname{prox}_f(u) = \arg\min_{x} \left\{ \tfrac{1}{2} \| x - u \|^2 + f(x) \right\}.$

The well-established relationship $\operatorname{prox}_f = J_{\partial f}$ holds. According to the Baillon–Haddad theorem, if $f : \mathcal{H} \to \mathbb{R}$ is a convex differentiable function whose gradient is Lipschitz continuous with constant $\frac{1}{\beta}$ for some $\beta \in (0, +\infty)$, then $\nabla f$ is $\beta$-cocoercive. The following identity will be employed later:

$\| \alpha x + (1 - \alpha) y \|^2 = \alpha \| x \|^2 + (1 - \alpha) \| y \|^2 - \alpha (1 - \alpha) \| x - y \|^2$

for all $x \in \mathcal{H}$, $y \in \mathcal{H}$, and $\alpha \in \mathbb{R}$.
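As a quick numerical sanity check of these facts, consider the following Python snippet; it is our own illustration, not part of the original paper. For $f = \mu |\cdot|$ on $\mathbb{R}$, the proximity operator is the well-known soft-thresholding map, and the snippet verifies both $\operatorname{prox}_f = J_{\partial f}$ at a sample point and the norm identity above on random vectors.

```python
import numpy as np

def prox_abs(u, mu):
    # prox_{mu|.|}(u) = argmin_x 0.5*(x-u)^2 + mu*|x|, i.e. soft-thresholding
    return np.sign(u) * np.maximum(np.abs(u) - mu, 0.0)

u, mu = 2.3, 1.0
p = prox_abs(u, mu)
# Resolvent check: u - p must lie in mu * d|.|(p), i.e. equal mu*sign(p) for p != 0.
assert np.isclose(u - p, mu * np.sign(p))

# The identity ||a x + (1-a) y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x-y||^2:
rng = np.random.default_rng(0)
x, y, a = rng.standard_normal(5), rng.standard_normal(5), 0.3
lhs = np.linalg.norm(a * x + (1 - a) * y) ** 2
rhs = (a * np.linalg.norm(x) ** 2 + (1 - a) * np.linalg.norm(y) ** 2
       - a * (1 - a) * np.linalg.norm(x - y) ** 2)
assert np.isclose(lhs, rhs)
```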
The subsequent two lemmas will play a crucial role in the convergence analysis of the proposed algorithm.
Lemma 1
([23], Lemma 2.3). Assume $\{ \varphi_k \}_{k \geq 0}$, $\{ \delta_k \}_{k \geq 1}$, and $\{ \alpha_k \}_{k \geq 1}$ are sequences in $[0, +\infty)$ such that, for each $k \geq 1$,

$\varphi_{k+1} \leq \varphi_k + \alpha_k (\varphi_k - \varphi_{k-1}) + \delta_k, \qquad \sum_{k=1}^{\infty} \delta_k < +\infty,$

and there exists a real number $\alpha$ satisfying $0 \leq \alpha_k \leq \alpha < 1$ for all $k \geq 1$. Then the following assertions hold:
 (i) $\sum_{k=1}^{\infty} [\varphi_k - \varphi_{k-1}]_+ < +\infty$, where $[t]_+ = \max\{ t, 0 \}$;
 (ii) there exists $\varphi^* \in [0, +\infty)$ such that $\lim_{k \to +\infty} \varphi_k = \varphi^*$.
Lemma 2
((Opial) [4]). Let $\mathcal{C}$ be a nonempty subset of $\mathcal{H}$, and let $\{ x_k \}_{k \geq 1}$ be a sequence in $\mathcal{H}$ satisfying the following conditions:
 (1) for all $x^* \in \mathcal{C}$, $\lim_{k \to \infty} \| x_k - x^* \|$ exists;
 (2) every weak sequential cluster point of $\{ x_k \}_{k \geq 1}$ belongs to $\mathcal{C}$.
Then $\{ x_k \}_{k \geq 1}$ converges weakly to a point in $\mathcal{C}$.

3. The RIFBHF Algorithm

We establish the RIFBHF algorithm from the perspective of discretising continuous-time dynamical systems. First, we consider the second-order dynamical system associated with the FBHF method studied in [15]:

$x(t) = J_{\gamma A} (\mathrm{Id} - \gamma (B + C)) z(t),$
$\ddot{z}(t) + \delta(t) \dot{z}(t) + \tau(t) \left[ z(t) - x(t) - \gamma (B z(t) - B x(t)) \right] = 0,$
$z(0) = z_0, \quad \dot{z}(0) = v_0,$   (4)

where $\delta, \tau : [0, +\infty) \to [0, +\infty)$ are Lebesgue measurable functions, $0 < \gamma < \frac{\beta}{\beta L + 1}$ with $\beta$ and $L$ as in (1), and $z_0, v_0 \in \mathcal{H}$. Let

$T z = z - J_{\gamma A} (\mathrm{Id} - \gamma (B + C)) z - \gamma \left[ B z - B J_{\gamma A} (\mathrm{Id} - \gamma (B + C)) z \right].$

Thereby, (4) is equivalent to

$\ddot{z}(t) + \delta(t) \dot{z}(t) + \tau(t) T z(t) = 0, \quad z(0) = z_0, \quad \dot{z}(0) = v_0.$   (5)
Note that the cocoercivity of an operator implies its Lipschitz continuity, so $B + C$ is Lipschitz continuous with constant $L + \frac{1}{\beta}$. It then follows from Proposition 1 in [36] that $T$ is Lipschitz continuous. Therefore, by the Cauchy–Lipschitz theorem for absolutely continuous trajectories, the trajectory of (4) exists and is unique.
Next, the trajectories of (5) are approximated at the time points $(k h_k)_{k \in \mathbb{N}}$ by discrete trajectories $(z_k)_{k \in \mathbb{N}}$. Specifically, we employ the central discretisation $\ddot{z}(t) \approx \frac{z_{k+1} - 2 z_k + z_{k-1}}{h_k^2}$ and the backward discretisation $\dot{z}(t) \approx \frac{z_k - z_{k-1}}{h_k}$. Letting $w_k$ be an extrapolation of $z_k$ and $z_{k-1}$, one gets

$\frac{1}{h_k^2} (z_{k+1} - 2 z_k + z_{k-1}) + \frac{\delta_k}{h_k} (z_k - z_{k-1}) + \tau_k T w_k = 0,$

which implies

$z_{k+1} = z_k + \alpha_k (z_k - z_{k-1}) - \lambda_k T w_k,$

where $\alpha_k = 1 - \delta_k h_k$ and $\lambda_k = h_k^2 \tau_k$. Defining $w_k = z_k + \alpha_k (z_k - z_{k-1})$ for all $k \geq 1$, one gets the following RIFBHF iteration for all $k \geq 1$:

$w_k = z_k + \alpha_k (z_k - z_{k-1}),$
$x_k = J_{\gamma_k A} (w_k - \gamma_k (B + C) w_k),$
$t_k = x_k + \gamma_k (B w_k - B x_k),$
$z_{k+1} = (1 - \lambda_k) w_k + \lambda_k t_k.$   (6)
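For readers who prefer code, the following is a minimal sketch of iteration (6) in Python/NumPy. It is our own illustration rather than the authors' implementation: the caller supplies the resolvent $J_{\gamma A}$ and the single-valued operators $B$ and $C$, and the function names and the stopping rule are assumptions.

```python
import numpy as np

def rifbhf(J_gamma_A, B, C, z0, gamma, alpha, lam, max_iter=500, tol=1e-8):
    """A sketch of iteration (6). J_gamma_A(v) must return J_{gamma A}(v)
    for the fixed step size gamma; B and C map arrays to arrays."""
    z_prev = z0.copy()
    z = z0.copy()
    for _ in range(max_iter):
        w = z + alpha * (z - z_prev)                  # inertial extrapolation
        x = J_gamma_A(w - gamma * (B(w) + C(w)))      # forward-backward step
        t = x + gamma * (B(w) - B(x))                 # half-forward correction
        z_prev, z = z, (1 - lam) * w + lam * t        # relaxed update
        if np.linalg.norm(z - z_prev) <= tol * max(1.0, np.linalg.norm(z_prev)):
            break
    return z
```

For instance, with $A = \partial(\mu \| \cdot \|_1)$, the resolvent J_gamma_A is the soft-thresholding map from Section 2.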
Remark 1.
The subsequent iterative schemes can be regarded as special instances of the proposed algorithm:
 (i) FBHF method [15]: take $\alpha_k = 0$ and $\lambda_k = 1$ for $k \geq 1$:

$x_k = J_{\gamma_k A} (z_k - \gamma_k (B + C) z_k),$
$z_{k+1} = x_k - \gamma_k (B x_k - B z_k).$

 (ii) Inertial forward–backward–half-forward scheme [40]: take $\lambda_k = 1$ for $k \geq 1$:

$w_k = z_k + \alpha_k (z_k - z_{k-1}),$
$x_k = J_{\gamma_k A} (w_k - \gamma_k (B + C) w_k),$
$z_{k+1} = x_k + \gamma_k (B w_k - B x_k).$

 (iii) Relaxed forward–backward–half-forward method: take $\alpha_k = 0$ for $k \geq 1$:

$x_k = J_{\gamma_k A} (z_k - \gamma_k (B + C) z_k),$
$t_k = x_k - \gamma_k (B x_k - B z_k),$
$z_{k+1} = (1 - \lambda_k) z_k + \lambda_k t_k.$
Furthermore, the convergence results of the proposed algorithm will be established. The convergence analysis relies heavily on the following properties.
Proposition 1.
Consider problem (1). Suppose $\{ \gamma_k \}_{k \geq 1}$ is a sequence of positive numbers, and $\{ t_k \}_{k \geq 1}$, $\{ w_k \}_{k \geq 1}$, and $\{ x_k \}_{k \geq 1}$ are the sequences generated by (6). Set $y_k = w_k - \gamma_k (B + C) w_k$ in (6) for all $k \geq 1$. Then, for all $z^* \in \operatorname{zer}(A + B + C)$,

$\| t_k - z^* \|^2 \leq \| w_k - z^* \|^2 - L^2 (\chi^2 - \gamma_k^2) \| w_k - x_k \|^2 - \frac{2 \beta \gamma_k}{\chi} (\chi - \gamma_k) \| C w_k - C z^* \|^2 - \frac{\chi}{2 \beta} \left\| w_k - x_k - \frac{2 \beta \gamma_k}{\chi} (C w_k - C z^*) \right\|^2,$

where $\gamma_k \in (0, \chi)$ and

$\chi = \frac{4 \beta}{1 + \sqrt{1 + 16 \beta^2 L^2}} \leq \min \{ 2 \beta, L^{-1} \}.$
Proof. 
By definition, $x_k = J_{\gamma_k A}(y_k)$ and $0 \in (A + B + C) z^*$, so that $\frac{1}{\gamma_k} (y_k - x_k) \in A x_k$ and $-(B + C) z^* \in A z^*$. Therefore, in view of the monotonicity of $A$ and $B$, one has

$\left\langle x_k - z^*, \tfrac{1}{\gamma_k} (y_k - x_k) + (B + C) z^* \right\rangle \geq 0$   (7)

and

$\langle x_k - z^*, B x_k - B z^* \rangle \geq 0.$   (8)

Using (7) and (8), we obtain

$\langle x_k - z^*, x_k - y_k - \gamma_k B x_k \rangle = \langle x_k - z^*, \gamma_k C z^* \rangle + \langle x_k - z^*, x_k - y_k - \gamma_k (B + C) z^* \rangle + \langle x_k - z^*, \gamma_k (B z^* - B x_k) \rangle \leq \langle x_k - z^*, \gamma_k C z^* \rangle.$

Therefore,

$2 \gamma_k \langle x_k - z^*, B w_k - B x_k \rangle = 2 \langle x_k - z^*, \gamma_k B w_k + y_k - x_k \rangle + 2 \langle x_k - z^*, x_k - y_k - \gamma_k B x_k \rangle$
$\leq 2 \langle x_k - z^*, \gamma_k (B + C) w_k + y_k - x_k \rangle + 2 \langle x_k - z^*, \gamma_k C z^* - \gamma_k C w_k \rangle$
$= 2 \langle x_k - z^*, w_k - x_k \rangle + 2 \langle x_k - z^*, \gamma_k C z^* - \gamma_k C w_k \rangle$
$= \| w_k - z^* \|^2 - \| x_k - z^* \|^2 - \| w_k - x_k \|^2 + 2 \langle x_k - z^*, \gamma_k C z^* - \gamma_k C w_k \rangle.$   (9)

Since $C$ is cocoercive, one gets, for all $\varepsilon > 0$,

$2 \langle x_k - z^*, \gamma_k C z^* - \gamma_k C w_k \rangle = 2 \langle w_k - z^*, \gamma_k C z^* - \gamma_k C w_k \rangle + 2 \langle x_k - w_k, \gamma_k C z^* - \gamma_k C w_k \rangle$
$\leq -2 \gamma_k \beta \| C w_k - C z^* \|^2 + 2 \langle x_k - w_k, \gamma_k C z^* - \gamma_k C w_k \rangle$
$= -2 \gamma_k \beta \| C w_k - C z^* \|^2 + \varepsilon \| w_k - x_k \|^2 + \frac{\gamma_k^2}{\varepsilon} \| C w_k - C z^* \|^2 - \varepsilon \left\| w_k - x_k - \frac{\gamma_k}{\varepsilon} (C w_k - C z^*) \right\|^2$
$= \varepsilon \| w_k - x_k \|^2 - \gamma_k \left( 2 \beta - \frac{\gamma_k}{\varepsilon} \right) \| C w_k - C z^* \|^2 - \varepsilon \left\| w_k - x_k - \frac{\gamma_k}{\varepsilon} (C w_k - C z^*) \right\|^2.$   (10)

Thus, in view of (9), (10), and the Lipschitz continuity of $B$,

$\| t_k - z^* \|^2 = \| x_k + \gamma_k (B w_k - B x_k) - z^* \|^2$
$= \| x_k - z^* \|^2 + 2 \gamma_k \langle x_k - z^*, B w_k - B x_k \rangle + \gamma_k^2 \| B w_k - B x_k \|^2$
$\leq \| w_k - z^* \|^2 - \| w_k - x_k \|^2 + \gamma_k^2 L^2 \| w_k - x_k \|^2 + \varepsilon \| w_k - x_k \|^2 - \gamma_k \left( 2 \beta - \frac{\gamma_k}{\varepsilon} \right) \| C w_k - C z^* \|^2 - \varepsilon \left\| w_k - x_k - \frac{\gamma_k}{\varepsilon} (C w_k - C z^*) \right\|^2$
$= \| w_k - z^* \|^2 - (1 - \varepsilon - \gamma_k^2 L^2) \| w_k - x_k \|^2 - \frac{\gamma_k}{\varepsilon} (2 \beta \varepsilon - \gamma_k) \| C w_k - C z^* \|^2 - \varepsilon \left\| w_k - x_k - \frac{\gamma_k}{\varepsilon} (C w_k - C z^*) \right\|^2.$   (11)

Similar to [15], choosing $\varepsilon \in (0, 1)$ such that $\chi = \frac{\sqrt{1 - \varepsilon}}{L} = 2 \beta \varepsilon$ determines the widest interval for $\gamma_k$ on which the second and third terms on the right-hand side of (11) are nonpositive; substituting this choice into (11) gives the stated inequality. □
Proposition 2.
Consider problem (1) and assume that $z^* \in \operatorname{zer}(A + B + C)$. Suppose that $\lambda_k > 0$ for all $k \geq 1$, $0 < \gamma_k < \chi$, and $\chi$ is defined as in Proposition 1. Assume $\{ \alpha_k \}_{k \geq 1}$ is nondecreasing and satisfies $0 \leq \alpha_k \leq \alpha < 1$. Let $\{ z_k \}_{k \geq 1}$ denote the sequence generated by (6). Then:
 (i) 
$\| z_{k+1} - z^* \|^2 \leq \| w_k - z^* \|^2 - \varphi_k \| z_{k+1} - w_k \|^2,$   (12)
where $\varphi_k = \frac{L^2 (\chi^2 - \gamma_k^2)}{\lambda_k (1 + \gamma_k L)^2} + \frac{1 - \lambda_k}{\lambda_k}$.
 (ii) Define
$\Pi_k := \| z_k - z^* \|^2 - \alpha_k \| z_{k-1} - z^* \|^2 + \left[ \alpha_k (1 + \alpha_k) - \varphi_k (\alpha_k^2 - \alpha_k) \right] \| z_k - z_{k-1} \|^2.$
Then
$\Pi_{k+1} - \Pi_k \leq -\zeta_k \| z_{k+1} - z_k \|^2,$
where $\zeta_k = \varphi_k (1 - \alpha_{k+1}) - \alpha_{k+1} (1 + \alpha_{k+1}) + \varphi_{k+1} (\alpha_{k+1}^2 - \alpha_{k+1})$.
Proof. 
(i) Proposition 1 leads to

$\| z_{k+1} - z^* \|^2 = \| (1 - \lambda_k) w_k + \lambda_k t_k - z^* \|^2 = \| (1 - \lambda_k)(w_k - z^*) + \lambda_k (t_k - z^*) \|^2$
$= (1 - \lambda_k) \| w_k - z^* \|^2 + \lambda_k \| t_k - z^* \|^2 - \lambda_k (1 - \lambda_k) \| t_k - w_k \|^2$
$\leq (1 - \lambda_k) \| w_k - z^* \|^2 + \lambda_k \left[ \| w_k - z^* \|^2 - L^2 (\chi^2 - \gamma_k^2) \| w_k - x_k \|^2 \right] - \lambda_k (1 - \lambda_k) \| t_k - w_k \|^2$
$= \| w_k - z^* \|^2 - \lambda_k L^2 (\chi^2 - \gamma_k^2) \| w_k - x_k \|^2 - \lambda_k (1 - \lambda_k) \| t_k - w_k \|^2.$   (13)

According to the Lipschitz continuity of $B$,

$\frac{1}{\lambda_k} \| z_{k+1} - w_k \| = \| t_k - w_k \| \leq \| t_k - x_k \| + \| x_k - w_k \| = \gamma_k \| B w_k - B x_k \| + \| w_k - x_k \| \leq (1 + \gamma_k L) \| w_k - x_k \|,$

which implies that

$\frac{L^2 (\chi^2 - \gamma_k^2)}{\lambda_k (1 + \gamma_k L)^2} \| z_{k+1} - w_k \|^2 \leq \lambda_k L^2 (\chi^2 - \gamma_k^2) \| w_k - x_k \|^2.$   (14)

Combining (13) and (14), we have

$\| z_{k+1} - z^* \|^2 \leq \| w_k - z^* \|^2 - \left[ \frac{L^2 (\chi^2 - \gamma_k^2)}{\lambda_k (1 + \gamma_k L)^2} + \frac{1 - \lambda_k}{\lambda_k} \right] \| z_{k+1} - w_k \|^2.$

(ii) It follows from the definition of $w_k$ and the Cauchy–Schwarz inequality that

$\| z_{k+1} - w_k \|^2 = \| z_{k+1} - z_k - \alpha_k (z_k - z_{k-1}) \|^2$
$= \| z_{k+1} - z_k \|^2 + \alpha_k^2 \| z_k - z_{k-1} \|^2 - 2 \alpha_k \langle z_{k+1} - z_k, z_k - z_{k-1} \rangle$
$\geq \| z_{k+1} - z_k \|^2 + \alpha_k^2 \| z_k - z_{k-1} \|^2 - 2 \alpha_k \| z_{k+1} - z_k \| \| z_k - z_{k-1} \|$
$\geq (1 - \alpha_k) \| z_{k+1} - z_k \|^2 + (\alpha_k^2 - \alpha_k) \| z_k - z_{k-1} \|^2.$   (15)

Simultaneously, we have

$\| w_k - z^* \|^2 = \| z_k + \alpha_k (z_k - z_{k-1}) - z^* \|^2 = \| (1 + \alpha_k)(z_k - z^*) - \alpha_k (z_{k-1} - z^*) \|^2$
$= (1 + \alpha_k) \| z_k - z^* \|^2 - \alpha_k \| z_{k-1} - z^* \|^2 + \alpha_k (1 + \alpha_k) \| z_k - z_{k-1} \|^2.$   (16)

By Proposition 2(i), (15), and (16), we have

$\| z_{k+1} - z^* \|^2 \leq \| w_k - z^* \|^2 - \varphi_k \| z_{k+1} - w_k \|^2$
$\leq (1 + \alpha_k) \| z_k - z^* \|^2 - \alpha_k \| z_{k-1} - z^* \|^2 + \alpha_k (1 + \alpha_k) \| z_k - z_{k-1} \|^2 - \varphi_k \left[ (1 - \alpha_k) \| z_{k+1} - z_k \|^2 + (\alpha_k^2 - \alpha_k) \| z_k - z_{k-1} \|^2 \right]$
$= (1 + \alpha_k) \| z_k - z^* \|^2 - \alpha_k \| z_{k-1} - z^* \|^2 - \pi_k \| z_{k+1} - z_k \|^2 + \eta_k \| z_k - z_{k-1} \|^2,$   (17)

where $\pi_k := \varphi_k (1 - \alpha_k)$ and $\eta_k := \alpha_k (1 + \alpha_k) - \varphi_k (\alpha_k^2 - \alpha_k)$. Furthermore, we define

$\Pi_k := \| z_k - z^* \|^2 - \alpha_k \| z_{k-1} - z^* \|^2 + \eta_k \| z_k - z_{k-1} \|^2.$

Now, by (17) and $\alpha_k \leq \alpha_{k+1}$, we obtain

$\Pi_{k+1} - \Pi_k = \| z_{k+1} - z^* \|^2 - \alpha_{k+1} \| z_k - z^* \|^2 + \eta_{k+1} \| z_{k+1} - z_k \|^2 - \| z_k - z^* \|^2 + \alpha_k \| z_{k-1} - z^* \|^2 - \eta_k \| z_k - z_{k-1} \|^2$
$\leq \| z_{k+1} - z^* \|^2 - (1 + \alpha_k) \| z_k - z^* \|^2 + \alpha_k \| z_{k-1} - z^* \|^2 + \eta_{k+1} \| z_{k+1} - z_k \|^2 - \eta_k \| z_k - z_{k-1} \|^2$
$\leq -(\pi_k - \eta_{k+1}) \| z_{k+1} - z_k \|^2.$

It follows from $0 \leq \alpha_k \leq \alpha_{k+1} \leq \alpha$ that

$\pi_k - \eta_{k+1} = \varphi_k (1 - \alpha_k) - \alpha_{k+1} (1 + \alpha_{k+1}) + \varphi_{k+1} (\alpha_{k+1}^2 - \alpha_{k+1}) \geq \varphi_k (1 - \alpha_{k+1}) - \alpha_{k+1} (1 + \alpha_{k+1}) + \varphi_{k+1} (\alpha_{k+1}^2 - \alpha_{k+1}).$

Letting $\zeta_k = \varphi_k (1 - \alpha_{k+1}) - \alpha_{k+1} (1 + \alpha_{k+1}) + \varphi_{k+1} (\alpha_{k+1}^2 - \alpha_{k+1})$, we obtain

$\Pi_{k+1} - \Pi_k \leq -\zeta_k \| z_{k+1} - z_k \|^2.$

The proof is completed. □
Furthermore, to ensure the convergence of (6), let $\lim_{k \to +\infty} \alpha_k = \alpha \geq 0$, $\lim_{k \to +\infty} \gamma_k = \gamma > 0$, and $\lim_{k \to +\infty} \lambda_k = \lambda > 0$, following the idea of Boţ et al. [36]. Proposition 2(ii) implies that

$\lim_{k \to +\infty} \zeta_k = \frac{L^2 \chi^2 + 1 + 2 \gamma L}{\lambda (1 + \gamma L)^2} (1 - \alpha)^2 - (1 - \alpha + 2 \alpha^2).$

Since $\chi = \frac{\sqrt{1 - \varepsilon}}{L}$, we have

$\lim_{k \to +\infty} \zeta_k = \frac{2 (1 + \gamma L) - \varepsilon}{\lambda (1 + \gamma L)^2} (1 - \alpha)^2 - (1 - \alpha + 2 \alpha^2).$   (18)

Then, to ensure $\lim_{k \to +\infty} \zeta_k > 0$, the following must hold:

$0 < \lambda < \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1}.$   (19)
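To make the parameter interplay tangible, the helper below (an illustrative sketch of ours, not from the paper) evaluates $\chi$, $\varepsilon$, and the relaxation cap in (19) for given constants $\beta$, $L$, $\gamma$, and $\alpha$; the example values $\beta = 1$ and $L = \sqrt{8}$ match the experiments in Section 5.

```python
import numpy as np

def rifbhf_params(beta, L, gamma, alpha):
    # chi from Proposition 1 and eps = chi / (2*beta) as in Theorem 1 below.
    root = np.sqrt(1 + 16 * beta**2 * L**2)
    chi = 4 * beta / (1 + root)
    eps = 2 / (1 + root)
    assert 0 < gamma < chi and 0 <= alpha < 1
    # Upper bound on lambda from (19).
    lam_max = ((2 * (1 + gamma * L) - eps) / (1 + gamma * L)**2
               * (1 - alpha)**2 / (2 * alpha**2 - alpha + 1))
    return chi, eps, lam_max

chi, eps, lam_max = rifbhf_params(beta=1.0, L=np.sqrt(8.0), gamma=0.3, alpha=0.2)
```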
Next, we establish the principal convergence theorem.
Theorem 1.
In problem (1), assume that $z^* \in \operatorname{zer}(A + B + C)$. Let a nondecreasing sequence $\{ \alpha_k \}_{k \geq 1}$ satisfy $0 \leq \alpha_k \leq \alpha < 1$. Assume that $\eta \leq \gamma_k \leq \chi - \eta$ with $0 < \eta < \frac{\chi}{2}$, where $\chi$ is defined as in Proposition 1. In addition, let $\{ \lambda_k \}_{k \geq 1}$ be nonnegative with

$0 < \lim_{k \to +\infty} \lambda_k = \lambda < \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1},$

where $\varepsilon = \frac{2}{1 + \sqrt{1 + 16 \beta^2 L^2}}$. Then the sequence $\{ z_k \}_{k \geq 1}$ generated by (6) converges weakly to an element of $\operatorname{zer}(A + B + C)$.
Proof. 
For any $z^* \in \operatorname{zer}(A + B + C)$, (19) gives $\lim_{k \to +\infty} \zeta_k > 0$. This implies the existence of $k_0 \geq 1$ such that, for every $k \geq k_0$,

$\Pi_{k+1} \leq \Pi_k - \zeta_k \| z_{k+1} - z_k \|^2 \leq \Pi_k.$   (20)

As a result, the sequence $\{ \Pi_k \}_{k \geq k_0}$ is nonincreasing, and the bound on $\{ \alpha_k \}_{k \geq 1}$ yields

$-\alpha \| z_{k-1} - z^* \|^2 \leq \| z_k - z^* \|^2 - \alpha \| z_{k-1} - z^* \|^2 \leq \Pi_k \leq \Pi_{k_0},$   (21)

which indicates that

$\| z_k - z^* \|^2 \leq \alpha \| z_{k-1} - z^* \|^2 + \Pi_{k_0} \leq \cdots \leq \alpha^{k - k_0} \| z_{k_0} - z^* \|^2 + \Pi_{k_0} \frac{1 - \alpha^{k - k_0}}{1 - \alpha}.$   (22)

Combining (20)–(22) and $\alpha \in [0, 1)$, we have

$\sum_{i = k_0}^{k} \zeta_i \| z_{i+1} - z_i \|^2 \leq \Pi_{k_0} - \Pi_{k+1} \leq \Pi_{k_0} + \alpha \| z_k - z^* \|^2 \leq \alpha^{k - k_0 + 1} \| z_{k_0} - z^* \|^2 + \Pi_{k_0} \frac{1 - \alpha^{k - k_0 + 1}}{1 - \alpha} \leq \| z_{k_0} - z^* \|^2 + \frac{\Pi_{k_0}}{1 - \alpha},$

which indicates that $\lim_{k \to +\infty} \zeta_k \| z_{k+1} - z_k \|^2 = 0$. Since $\liminf_{k \to +\infty} \zeta_k > 0$, this yields $\lim_{k \to +\infty} \| z_{k+1} - z_k \| = 0$. Taking account of (17) and Lemma 1, we observe that $\lim_{k \to +\infty} \| z_k - z^* \|$ exists.
Meanwhile,

$\| t_k - w_k \| = \frac{1}{\lambda_k} \| z_{k+1} - w_k \| \leq \frac{1}{\lambda_k} \| z_{k+1} - z_k \| + \frac{\alpha_k}{\lambda_k} \| z_k - z_{k-1} \|,$   (23)

which implies that $\lim_{k \to +\infty} \| t_k - w_k \| = 0$. In addition, for every $k \geq 1$, we have

$\| t_k - w_k \| = \| x_k + \gamma_k (B w_k - B x_k) - w_k \| \geq \| x_k - w_k \| - \gamma_k \| B w_k - B x_k \| \geq (1 - \gamma_k L) \| x_k - w_k \|.$

Since $\gamma_k < \frac{1}{L}$, we deduce that

$\lim_{k \to +\infty} \| x_k - w_k \| = 0.$   (24)

Assume $\bar{z}$ is a weak limit point of the sequence $\{ z_k \}_{k \geq 1}$. Since $\{ z_k \}_{k \geq 1}$ is bounded, there exists a subsequence $\{ z_{k_i} \}_{i \geq 0}$ that converges weakly to $\bar{z}$. In view of (23) and (24), $\{ w_{k_i} \}_{i \geq 0}$ and $\{ x_{k_i} \}_{i \geq 0}$ also converge weakly to $\bar{z}$. Next, since $x_{k_i} = J_{\gamma_{k_i} A} \left( w_{k_i} - \gamma_{k_i} (B + C) w_{k_i} \right)$, we have

$u_{k_i} := \frac{1}{\gamma_{k_i}} (w_{k_i} - x_{k_i}) - (B + C) w_{k_i} + (B + C) x_{k_i} \in (A + B + C) x_{k_i}.$

Therefore, utilizing (24), the fact that $\gamma_k \in [\eta, \chi - \eta]$, and the Lipschitz continuity of $B + C$, we conclude that $u_{k_i} \to 0$. Due to the maximal monotonicity of $A + B$ and the cocoercivity of $C$, the sum $A + B + C$ is maximal monotone, and its graph is closed in the weak–strong topology on $\mathcal{H} \times \mathcal{H}$ (see Proposition 20.37(ii) in [4]). As a result, $\bar{z} \in \operatorname{zer}(A + B + C)$. Following Lemma 2, we conclude that the sequence $\{ z_k \}_{k \geq 1}$ converges weakly to an element of $\operatorname{zer}(A + B + C)$. This completes the proof. □
Remark 2
(Inertia versus relaxation). Inequality (19) establishes a relationship between the inertial and relaxation parameters. Figure 1 displays the relationship between $\alpha_k$ and $\lambda_k$ graphically for some given values of $\gamma$ and $\varepsilon$; the picture is similar to Figure 1 in [36]. It is noteworthy that the expression for the upper bound of $\lambda_k$ resembles that in ([36], Remark 2) as $\varepsilon \to 0$. Regarding $\frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2}$ as fixed, it follows from (19) that the upper bound of $\lambda_k$ takes the form $\lambda_{\max}(\alpha, \gamma) = \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1}$ with $0 \leq \alpha < 1$. Further, note that the relaxation bound $\lambda_{\max}(\alpha, \gamma)$ is a decreasing function of the inertial parameter $\alpha$ on the interval $[0, 1)$; for example, $\lambda_{\max}(\alpha, \gamma) \to 0$ as $\alpha \to 1$. Of course, we also get $\lambda_{\max}(\gamma) = \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2}$ when $\alpha = 0$, and $\lambda_{\max}(\gamma)$ is decreasing on $(0, \frac{1}{L})$, with limiting values $2 - \varepsilon$ as $\gamma \to 0$ and $1 - \frac{\varepsilon}{4}$ as $\gamma \to \frac{1}{L}$.
Remark 3.
The parameters for FBHF [15] and RIFBHF are given in Table 1, which shows that the two schemes have the same range of step sizes. Unlike FBHF [15], RIFBHF introduces the relaxation parameter $\lambda_k$ and the inertial parameter $\alpha_k$. Note that RIFBHF reduces to FBHF [15] when $\lambda_k = 1$ and $\alpha_k = 0$.
Remark 4.
The existence of over-relaxation ($\lambda_k > 1$) for RIFBHF deserves further discussion. If, for $k \geq k_0$,

$0 \leq \alpha < \frac{\sqrt{\theta^2 + 4 (2 \gamma^2 L^2 + 2 \gamma L + \varepsilon)(1 - \varepsilon - \gamma^2 L^2)} - \theta}{2 (2 \gamma^2 L^2 + 2 \gamma L + \varepsilon)} \quad \text{with} \quad \theta = 2 \gamma L - \gamma^2 L^2 + 3 - 2 \varepsilon,$

then over-relaxation is admissible for RIFBHF. In addition, observe that over-relaxation in [36] is possible when $0 \leq \alpha < \frac{\sqrt{(3 - \gamma L)^2 + 8 \gamma L (1 - \gamma L)} - (3 - \gamma L)}{4 \gamma L}$ for $k \geq k_0$. Although the upper bounds on $\alpha$ for the two approaches are different, over-relaxation ($\lambda_k > 1$) is possible for both methods when $\alpha$ lies in a small range.

4. Composite Monotone Inclusion Problem

The aim of this section is to use the proposed algorithm to solve a more general inclusion problem, outlined as follows:

$0 \in A x + L^* B L x + C x,$   (25)

where $A : \mathcal{H} \to 2^{\mathcal{H}}$ and $B : \mathcal{G} \to 2^{\mathcal{G}}$ are maximal monotone operators, $C : \mathcal{H} \to \mathcal{H}$ is a $\beta$-cocoercive operator with $\beta > 0$, and $L : \mathcal{H} \to \mathcal{G}$ is a bounded linear operator. In addition, the following assumption is made:

$\operatorname{zer}(A + L^* B L + C) \neq \emptyset.$
The key to solving (25) is knowledge of the exact resolvent $J_{L^* B L}$. As is well known, $J_{L^* B L}$ can be evaluated exactly using only the resolvent of the operator $B$, the linear operator $L$, and its adjoint $L^*$ when $L^* L = \nu \, \mathrm{Id}$ for some $\nu > 0$. However, this condition usually does not hold in problems of interest, such as total variation regularization. To address this challenge, we introduce an efficient iterative algorithm to tackle (25) by combining the primal–dual approach [39] with (6). Specifically, drawing inspiration from the fully split primal–dual algorithm studied by Briceño-Arias and Combettes [39], we rewrite (25) as the following problem in $\mathcal{K} = \mathcal{H} \oplus \mathcal{G}$:
find $\mathbf{x} \in \mathcal{K}$ such that $0 \in M \mathbf{x} + S \mathbf{x} + N \mathbf{x},$   (26)

where

$M : \mathcal{K} \to 2^{\mathcal{K}} : (x, y) \mapsto (A x) \times (B^{-1} y), \quad S : \mathcal{K} \to \mathcal{K} : (x, y) \mapsto (L^* y, -L x), \quad N : \mathcal{K} \to \mathcal{K} : (x, y) \mapsto (C x, 0).$   (27)

Notice that $M$ is maximal monotone and $S$ is monotone and Lipschitz continuous with constant $\| L \|$, as stated in Proposition 2.7 of [39]; moreover, $M + S$ is maximal monotone since $S$ is skew-symmetric. The cocoercivity of $N$ follows from that of $C$. Therefore, it follows from (6) and (27) that the convergence analysis below applies to (26), and hence (25) is also solved.
Corollary 1.
Suppose that $A : \mathcal{H} \to 2^{\mathcal{H}}$ is maximal monotone, let $B : \mathcal{G} \to 2^{\mathcal{G}}$ be maximal monotone, and assume that $C : \mathcal{H} \to \mathcal{H}$ is $\beta$-cocoercive with $\beta > 0$. Let $L : \mathcal{H} \to \mathcal{G}$ be a nonzero bounded linear operator. Given initial data $x_0, x_1 \in \mathcal{H}$ and $y_0, y_1 \in \mathcal{G}$, define the iteration

$w_{1,k} = x_k + \alpha_k (x_k - x_{k-1}),$
$w_{2,k} = y_k + \alpha_k (y_k - y_{k-1}),$
$z_{1,k} = J_{\gamma_k A} \left( w_{1,k} - \gamma_k (L^* w_{2,k} + C w_{1,k}) \right),$
$z_{2,k} = J_{\gamma_k B^{-1}} \left( w_{2,k} + \gamma_k L w_{1,k} \right),$
$t_{1,k} = z_{1,k} + \gamma_k (L^* w_{2,k} - L^* z_{2,k}),$
$t_{2,k} = z_{2,k} + \gamma_k (L z_{1,k} - L w_{1,k}),$
$x_{k+1} = (1 - \lambda_k) w_{1,k} + \lambda_k t_{1,k},$
$y_{k+1} = (1 - \lambda_k) w_{2,k} + \lambda_k t_{2,k},$   (28)

where $\eta \leq \gamma_k \leq \chi - \eta$, $0 < \eta < \frac{\chi}{2}$, $\chi$ is defined as in Proposition 1 (with $L$ replaced by $\| L \|$), and $\{ \alpha_k \}_{k \geq 1}$ is nondecreasing with $0 \leq \alpha_k \leq \alpha < 1$. Assume that the sequence $\{ \lambda_k \}_{k \geq 1}$ fulfils

$0 < \lim_{k \to +\infty} \lambda_k = \lambda < \frac{2 (1 + \gamma \| L \|) - \varepsilon}{(1 + \gamma \| L \|)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1},$

where $\varepsilon = \frac{2}{1 + \sqrt{1 + 16 \beta^2 \| L \|^2}}$. Then the iterative sequence $\{ x_k \}_{k \geq 1}$ generated by (28) converges weakly to a solution of $\operatorname{zer}(A + L^* B L + C)$.
Proof. 
Using Proposition 2.7 in [39], we observe that $M$ is maximal monotone and $S$ is Lipschitz continuous with constant $\| L \|$. Considering the $\beta$-cocoercivity of $C$, it follows that $N$ is also $\beta$-cocoercive. Additionally, for arbitrary $k \geq 1$, let

$\mathbf{x}_k = (x_k, y_k), \quad \mathbf{w}_k = (w_{1,k}, w_{2,k}), \quad \mathbf{z}_k = (z_{1,k}, z_{2,k}), \quad \mathbf{t}_k = (t_{1,k}, t_{2,k}).$

Therefore, using (27) and Proposition 2.7(iv) in [39], we can rewrite (28) in the form

$\mathbf{w}_k = \mathbf{x}_k + \alpha_k (\mathbf{x}_k - \mathbf{x}_{k-1}),$
$\mathbf{z}_k = J_{\gamma_k M} (\mathbf{w}_k - \gamma_k (S + N) \mathbf{w}_k),$
$\mathbf{t}_k = \mathbf{z}_k + \gamma_k (S \mathbf{w}_k - S \mathbf{z}_k),$
$\mathbf{x}_{k+1} = (1 - \lambda_k) \mathbf{w}_k + \lambda_k \mathbf{t}_k,$

which has the same structure as (6). Meanwhile, our assumptions guarantee that the conditions of Theorem 1 hold. Hence, according to Theorem 1, the sequence $\{ x_k \}_{k \geq 1}$ generated by (28) converges weakly to an element of $\operatorname{zer}(A + L^* B L + C)$. □
In the following, we apply the results of Corollary 1 to tackle the convex minimization problem.
Corollary 2.
Consider the convex optimization problem

$\min_{x \in \mathcal{H}} f(x) + g(L x) + h(x),$   (29)

where $f \in \Gamma_0(\mathcal{H})$, $g \in \Gamma_0(\mathcal{G})$, $h : \mathcal{H} \to \mathbb{R}$ is convex and differentiable with a $\frac{1}{\beta}$-Lipschitz continuous gradient for some $\beta > 0$, and $L : \mathcal{H} \to \mathcal{G}$ is a bounded linear operator. For (29), given initial data $x_0, x_1 \in \mathcal{H}$ and $y_0, y_1 \in \mathcal{G}$, define for $k \geq 1$:

$w_{1,k} = x_k + \alpha_k (x_k - x_{k-1}),$
$w_{2,k} = y_k + \alpha_k (y_k - y_{k-1}),$
$z_{1,k} = \operatorname{prox}_{\gamma_k f} \left( w_{1,k} - \gamma_k (L^* w_{2,k} + \nabla h(w_{1,k})) \right),$
$z_{2,k} = \operatorname{prox}_{\gamma_k g^*} \left( w_{2,k} + \gamma_k L w_{1,k} \right),$
$t_{1,k} = z_{1,k} + \gamma_k (L^* w_{2,k} - L^* z_{2,k}),$
$t_{2,k} = z_{2,k} + \gamma_k (L z_{1,k} - L w_{1,k}),$
$x_{k+1} = (1 - \lambda_k) w_{1,k} + \lambda_k t_{1,k},$
$y_{k+1} = (1 - \lambda_k) w_{2,k} + \lambda_k t_{2,k},$   (30)

where $\eta \leq \gamma_k \leq \chi - \eta$, $0 < \eta < \frac{\chi}{2}$, $\chi$ is defined as in Proposition 1 (with $L$ replaced by $\| L \|$), and $\{ \alpha_k \}_{k \geq 1}$ is nondecreasing with $0 \leq \alpha_k \leq \alpha < 1$. Assume that the sequence $\{ \lambda_k \}_{k \geq 1}$ satisfies

$0 < \lim_{k \to +\infty} \lambda_k = \lambda < \frac{2 (1 + \gamma \| L \|) - \varepsilon}{(1 + \gamma \| L \|)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1},$

where $\varepsilon = \frac{2}{1 + \sqrt{1 + 16 \beta^2 \| L \|^2}}$. If $\operatorname{zer}(\partial f + L^* \circ \partial g \circ L + \nabla h)$ is nonempty, then the sequence $\{ x_k \}_{k \geq 1}$ converges weakly to a minimizer of (29).
Proof. 
According to [4], $\partial f$ and $\partial g$ are maximal monotone. In view of the Baillon–Haddad theorem, $\nabla h$ is $\beta$-cocoercive. Thus, under suitable qualification conditions, solving (29) with our algorithm is equivalent to solving (25) with

$A = \partial f, \quad B = \partial g, \quad \text{and} \quad C = \nabla h.$

Therefore, the same arguments as in the proof of Corollary 1 yield the conclusions of Corollary 2. □
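To make (30) concrete, here is a minimal NumPy sketch of one possible realization; it is our own illustration, in which prox_f and prox_gstar denote the proximity operators of $\gamma f$ and $\gamma g^*$ for the fixed step $\gamma$, grad_h evaluates $\nabla h$, and L is a matrix. All of these names are assumptions, not the authors' code.

```python
import numpy as np

def rifbhf_primal_dual(prox_f, prox_gstar, grad_h, L, x0, y0,
                       gamma, alpha, lam, max_iter=500):
    """A sketch of iteration (30); prox_f(v) = prox_{gamma f}(v) and
    prox_gstar(v) = prox_{gamma g*}(v), both for the fixed step gamma."""
    x_prev, x = x0.copy(), x0.copy()
    y_prev, y = y0.copy(), y0.copy()
    for _ in range(max_iter):
        w1 = x + alpha * (x - x_prev)                 # primal extrapolation
        w2 = y + alpha * (y - y_prev)                 # dual extrapolation
        z1 = prox_f(w1 - gamma * (L.T @ w2 + grad_h(w1)))
        z2 = prox_gstar(w2 + gamma * (L @ w1))
        t1 = z1 + gamma * (L.T @ (w2 - z2))           # half-forward, primal
        t2 = z2 + gamma * (L @ (z1 - w1))             # half-forward, dual
        x_prev, x = x, (1 - lam) * w1 + lam * t1      # relaxed updates
        y_prev, y = y, (1 - lam) * w2 + lam * t2
    return x
```

If only $\operatorname{prox}_g$ is available, $\operatorname{prox}_{\gamma g^*}$ can be obtained via Moreau's decomposition, $\operatorname{prox}_{\gamma g^*}(v) = v - \gamma \operatorname{prox}_{\gamma^{-1} g}(v / \gamma)$.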

5. Numerical Experiments

This section reports on the feasibility and efficiency of (6); in particular, we discuss the impact of the parameters on (6). All experiments were conducted in MATLAB on a standard Lenovo machine equipped with an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (1.80 GHz boost). Our objective is to address the following constrained total variation (TV) minimization problem:

$\min_{z \in D} \frac{1}{2} \| A z - d \|_2^2 + \mu \| z \|_{TV},$   (31)

in which $A \in \mathbb{R}^{m \times n}$ represents the blurring matrix, $z$ signifies the unknown original image in $\mathbb{R}^{n^2}$, and $D$ is a nonempty closed convex set encoding prior information about the deblurred images. Specifically, $D$ is selected as the nonnegativity constraint set, $\mu > 0$ is the regularization parameter, $\| z \|_{TV}$ denotes the total variation term, and $d$ stands for the recorded blurred image data.
Notice that (31) can be written equivalently in the following form:

$\min_{z} \frac{1}{2} \| A z - d \|_2^2 + \mu \| z \|_{TV} + \delta_D(z),$   (32)

where $\delta_D(z)$ is the indicator function, which equals $0$ when $z \in D$ and $+\infty$ otherwise. The term $\| z \|_{TV}$ can be expressed as the composition of a convex function $\varphi$ (either $\| \cdot \|_1$ for the anisotropic total variation or $\| \cdot \|_2$ for the isotropic total variation) with a first-order difference matrix $H$, i.e., $\| z \|_{TV} = \varphi(H z)$ (refer to Section 4 in [41]), where $H$ is a $2 n^2 \times n^2$ matrix given by

$H := \begin{bmatrix} I_n \otimes E \\ E \otimes I_n \end{bmatrix} \quad \text{and} \quad E := \begin{bmatrix} 0 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix} \in \mathbb{R}^{n \times n},$   (33)

where $I_n$ denotes the $n \times n$ identity matrix and $\otimes$ is the Kronecker product. Consequently, it is evident that (32) constitutes a special instance of (29).
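For concreteness, the sketch below (ours; it assumes the bidiagonal form of $E$ displayed above) assembles $H$ with sparse Kronecker products and numerically checks the bound $\| H \|^2 \leq 8$ used later in this section.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def tv_difference_matrix(n):
    # E: zero first row, then rows (-1, 1); H stacks the vertical and
    # horizontal first-order differences of an n-by-n image.
    main = np.r_[0.0, np.ones(n - 1)]
    E = sp.diags([main, -np.ones(n - 1)], offsets=[0, -1], format="csr")
    I = sp.identity(n, format="csr")
    return sp.vstack([sp.kron(I, E), sp.kron(E, I)], format="csr")

H = tv_difference_matrix(16)          # shape (2*16**2, 16**2)
sigma_max = svds(H, k=1, return_singular_vectors=False)[0]
assert sigma_max**2 <= 8.0 + 1e-9     # consistent with ||H||^2 <= 8
```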
To assess the quality of the deblurred images, we employ the signal-to-noise ratio (SNR) [40], defined by

$\mathrm{SNR} = 10 \log_{10} \left( \frac{\| x \|^2}{\| x - x_r \|^2} \right),$

where $x$ is the original image and $x_r$ is the deblurred image. The iterative algorithms are terminated based on the relative error between consecutive iterates. We choose the test image "Barbara" with a size of $512 \times 512$ and use the four typical image blurring scenarios listed in Table 2. In addition, the range of the pixel values in the original images is $[0, 255]$, and the norm of the operator $A$ in (31) equals $1$ for the selected experiments. The cocoercivity constant is $\beta = 1$, and $\| H \|^2 = 8$ for the total variation, as estimated in [41], where $H$ is the linear operator. We terminate the iterative algorithms when the relative error is less than $5 \times 10^{-4}$ or when the number of iterations reaches 1000.
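The quality measure and stopping rule just described are straightforward to express in code; the helpers below are our own illustration, not the authors' MATLAB script.

```python
import numpy as np

def snr_db(x, x_r):
    # SNR = 10 * log10(||x||^2 / ||x - x_r||^2)
    return 10.0 * np.log10(np.linalg.norm(x)**2 / np.linalg.norm(x - x_r)**2)

def should_stop(z_new, z_old, k, tol=5e-4, max_iter=1000):
    # Relative error between consecutive iterates, as described above.
    denom = max(np.linalg.norm(z_old), np.finfo(float).tiny)
    return np.linalg.norm(z_new - z_old) / denom < tol or k >= max_iter
```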
Before the comparisons, we run a test to show how the performance of RIFBHF is affected by different parameters. For simplicity, we set $\gamma_k = \gamma$ for all $k \geq 1$. In view of (19), the relationship between the inertial parameter $\alpha$ and the relaxation parameter $\lambda$ is

$0 < \lim_{k \to +\infty} \lambda_k = \lambda < \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1},$

where $0 \leq \alpha < 1$ and $\varepsilon = \frac{2}{1 + \sqrt{1 + 16 \beta^2 L^2}}$. As noted above, the upper bound on $\lambda$ is similar to that of ([36], Remark 2) as $\varepsilon \to 0$. First, we take $\gamma = \frac{4 \beta}{1 + \sqrt{1 + 16 \beta^2 L^2}}$ with either $\alpha = 0$ and $\lambda = 1$, or $\alpha = 0.3$ and $\lambda = 0.6$, for RIFBHF. The SNR values and the numbers of iterations for various $\mu$ are recorded in Table 3 and Table 4. Observe that the SNR values in image blurring scenarios 1 and 2 are best when $\mu = 1$, while those in scenarios 3 and 4 are best when $\mu = 0.1$. Therefore, we choose $\mu = 1$ for scenarios 1 and 2 and $\mu = 0.1$ for scenarios 3 and 4. For the case $\mu = 1$ in scenario 1, we further study the effect of the other parameters. We consider the development of the SNR value and of the normalized error of the objective function $\log \left( (f - f^*) / f^* \right)$ along the running time; here, $f$ represents the current objective value and $f^*$ the optimal one. To approximate the optimal objective value, we run the experimental algorithms for 5000 iterations and take the resulting function value as an estimate. One can observe in Figure 2 that a larger $\gamma$ results in a smaller error when $\alpha = 0.2$ and $\lambda = 0.7$, and a larger $\alpha$ also yields a smaller error when $\gamma = 0.28$ and $\lambda = 0.7$. Of course, $\alpha$ and $\gamma$ also affect the admissible values of $\lambda$. Meanwhile, we can conclude that a larger $\lambda$ gives a smaller error. It is worth noting that over-relaxation ($\lambda > 1$) is admissible and performs better. Figure 3 shows the development of the SNR for different parameters; the results are similar to those in Figure 2.
To further validate the rationality and efficiency of (30), the following algorithms and parameter settings are used:
  • FBHF [15]: $\gamma = \frac{4 \beta}{1 + \sqrt{1 + 16 \beta^2 L^2}}$ for all four image blurring scenarios.
  • PD: the first-order primal–dual splitting algorithm [42] with $\tau = \frac{1}{3}$, $\sigma = \frac{1}{3}$, and $\theta = 1$ for all four image blurring scenarios.
  • RIFBHF: $\gamma = \frac{4 \beta}{1 + \sqrt{1 + 16 \beta^2 L^2}}$, with $\alpha = 0.2$ and $\lambda = 0.9$ for image blurring scenarios 1 and 2, and $\alpha = 0.6$ and $\lambda = 0.8$ for scenarios 3 and 4.
Figure 4 plots the normalized error of the objective function for FBHF, PD, and RIFBHF along the running time. Note that PD appears to be the fastest algorithm. FBHF and RIFBHF behave almost identically in image blurring scenarios 1 and 2, while in scenarios 3 and 4 RIFBHF outperforms FBHF, which shows that our algorithm is acceptable. Meanwhile, to illustrate the deblurring quality of the proposed algorithm succinctly, the deblurred results for image blurring scenario 4 are shown in Figure 5; the deblurred images generated by RIFBHF are visibly better.

6. Conclusions

In this paper, we proposed the RIFBHF algorithm to solve (1). The proposed approach was derived by discretising a continuous-time dynamical system, and a variable step size was incorporated into the algorithm. Additionally, we studied the theoretical convergence properties of (6) under reasonable parameter conditions. Inspired by the primal–dual scheme, our approach also tackled the composite monotone inclusion problem (25) and the composite convex optimization problem (29). Finally, we conducted numerical experiments on image deblurring to illustrate the effectiveness of the proposed technique.

Author Contributions

Methodology, C.Z.; formal analysis, G.Z. and Y.T.; writing—original draft preparation, C.Z.; writing—review and editing, G.Z. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided to this project by the National Natural Science Foundation of China (11771193, 12061045), the Jiangxi Provincial Natural Science Foundation (20224ACB211004), the Guangzhou Education Scientific Research Project 2024 (202315829), and the Guangzhou University Research Project (RC2023061).

Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

We express our thanks to the anonymous referees for their constructive suggestions, which significantly improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  2. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  3. Raguet, H.; Fadili, J.; Peyré, G. A generalized forward-backward splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226. [Google Scholar] [CrossRef]
  4. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2017. [Google Scholar]
  5. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710. [Google Scholar] [CrossRef]
  6. Lions, P.-L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  7. Combettes, P.L.; Pesquet, J.-C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574. [Google Scholar] [CrossRef]
  8. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  9. Raguet, H.; Landrieu, L. Preconditioning of a generalized forward-backward splitting and application to optimization on graphs. SIAM J. Imaging Sci. 2015, 8, 2706–2739. [Google Scholar] [CrossRef]
  10. Briceño-Arias, L.M. Forward-Douglas-Rachford splitting and forward-partial inverse method for solving monotone inclusions. Optimization 2015, 64, 1239–1261. [Google Scholar] [CrossRef]
  11. Davis, D.; Yin, W.T. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858. [Google Scholar] [CrossRef]
  12. Raguet, H. A note on the forward-Douglas-Rachford splitting for monotone inclusion and convex optimization. Optim. Lett. 2019, 13, 717–740. [Google Scholar] [CrossRef]
  13. Ryu, E.K.; Vũ, B.C. Finding the forward-Douglas-Rachford-forward method. J. Optim. Theory Appl. 2020, 184, 858–876. [Google Scholar] [CrossRef]
  14. Rieger, J.; Tam, M.K. Backward-forward-reflected-backward splitting for three operator monotone inclusions. Appl. Math. Comput. 2020, 381, 125248. [Google Scholar] [CrossRef]
  15. Briceño-Arias, L.M.; Davis, D. Forward-backward-half forward algorithm for solving monotone inclusions. SIAM J. Optim. 2018, 28, 2839–2871. [Google Scholar] [CrossRef]
  16. Alves, M.M.; Geremia, M. Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng’s F-B four-operator splitting method for solving monotone inclusions. Numer. Algorithms 2019, 82, 263–295. [Google Scholar] [CrossRef]
  17. Giselsson, P. Nonlinear forward-backward splitting with projection correction. SIAM J. Optim. 2021, 31, 2199–2226. [Google Scholar] [CrossRef]
  18. Briceño-Arias, L.; Chen, J.; Roldán, F.; Tang, Y. Forward-partial inverse-half-forward splitting algorithm for solving monotone inclusions. Set-Valued Var. Anal. 2022, 30, 1485–1502. [Google Scholar] [CrossRef]
  19. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  20. Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR 1983, 269, 543–547. [Google Scholar]
  21. Güler, O. New proximal point algorithms for convex minimization. SIAM J. Optim. 1992, 2, 649–664. [Google Scholar] [CrossRef]
  22. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  23. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2003, 14, 773–782. [Google Scholar] [CrossRef]
  24. Ochs, P.; Chen, Y.; Brox, T.; Pock, T. iPiano: Inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 2014, 7, 1388–1419. [Google Scholar] [CrossRef]
  25. Boţ, R.I.; Csetnek, E.R. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015, 36, 951–963. [Google Scholar] [CrossRef]
  26. Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267. [Google Scholar] [CrossRef]
  27. Dong, Q.; Lu, Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
  28. Combettes, P.L.; Glaudin, L.E. Quasi-nonexpansive iterations on the affine hull of orbits: From Mann’s mean value algorithm to inertial methods. SIAM J. Optim. 2017, 27, 2356–2380. [Google Scholar] [CrossRef]
  29. Attouch, H.; Cabot, A. Convergence rates of inertial forward-backward algorithms. SIAM J. Optim. 2018, 28, 849–874. [Google Scholar] [CrossRef]
  30. Eckstein, J.; Bertsekas, D.P. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  31. Attouch, H.; Peypouquet, J. Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Math. Program. 2019, 174, 391–432. [Google Scholar] [CrossRef]
  32. Attouch, H.; Cabot, A. Convergence of a relaxed inertial proximal algorithm for maximally monotone operators. Math. Program. 2020, 184, 243–287. [Google Scholar] [CrossRef]
  33. Attouch, H.; Cabot, A. Convergence rate of a relaxed inertial proximal algorithm for convex minimization. Optimization 2020, 69, 1281–1312. [Google Scholar] [CrossRef]
  34. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward-backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598. [Google Scholar] [CrossRef]
  35. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487. [Google Scholar] [CrossRef]
  36. Boţ, R.I.; Sedlmayer, M.; Vuong, P.T. A relaxed inertial forward-backward-forward algorithm for solving monotone inclusions with application to GANs. J. Mach. Learn. Res. 2023, 1–37. [Google Scholar]
  37. Tang, Y.; Yang, Y.; Peng, J. Convergence analysis of a relaxed inertial alternating minimization algorithm with applications. In Advanced Mathematical Analysis and Its Applications, 1st ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2023; 27p. [Google Scholar]
  38. Boţ, R.I.; Csetnek, E.R. Second order forward-backward dynamical systems for monotone inclusion problems. SIAM J. Control Optim. 2016, 54, 1423–1443. [Google Scholar] [CrossRef]
  39. Briceño-Arias, L.M.; Combettes, P.L. A monotone+skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 2011, 21, 1230–1250. [Google Scholar] [CrossRef]
  40. Zong, C.; Tang, Y.; Zhang, G. An accelerated forward-backward-half forward splitting algorithm for monotone inclusion with applications to image restoration. Optimization 2024, 73, 401–428. [Google Scholar] [CrossRef]
  41. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Problems 2011, 27, 045009. [Google Scholar] [CrossRef]
  42. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 2011, 40, 120–145. [Google Scholar] [CrossRef]
Figure 1. Balance between $\alpha_k$ and $\lambda_k$ with $\varepsilon = 0.1$, $\gamma = \frac{0.45}{L}$ (left), and $\gamma = \frac{0.9}{L}$ (right).
Figure 2. Behaviour of the error of the objective value $\log((f - f^*)/f^*)$ against running time for different parameters $\alpha$, $\gamma$, and $\lambda$.
Figure 3. Behaviour of the SNR against running time for different parameters $\alpha$, $\gamma$, and $\lambda$.
Figure 4. Behaviour of the error of the objective value $\log((f - f^*)/f^*)$ against running time for different image blurring scenarios: (a) scenario 1, (b) scenario 2, (c) scenario 3, and (d) scenario 4.
Figure 5. (a) The original image of Barbara. (b) The blurred image of Barbara. (c) The deblurred image by FBHF. (d) The deblurred image by RIFBHF.
Table 1. The parameter selection range for FBHF [15] and RIFBHF.

Algorithm   | $\gamma_k$            | $\alpha_k$ | $\lim_{k \to +\infty} \lambda_k = \lambda$
FBHF [15]   | $[\eta, \chi - \eta]$ | $0$        | $1$
RIFBHF      | $[\eta, \chi - \eta]$ | $[0, 1)$   | $\left( 0, \frac{2 (1 + \gamma L) - \varepsilon}{(1 + \gamma L)^2} \cdot \frac{(1 - \alpha)^2}{2 \alpha^2 - \alpha + 1} \right)$
Table 2. Description of the image blurring scenarios.

Scenario | Blur Kernel                                      | Gaussian Noise
1        | $9 \times 9$ box average kernel                  | $\sigma = 1.5$
2        | $9 \times 9$ box average kernel                  | $\sigma = 3$
3        | $7 \times 7$ Gaussian kernel with $\sigma_a = 10$ | $\sigma = 1.5$
4        | $7 \times 7$ Gaussian kernel with $\sigma_a = 10$ | $\sigma = 3$
Table 3. SNR values and iteration counts when $\gamma = \frac{4}{1 + \sqrt{1 + 16 \times 8}}$, $\alpha = 0$, and $\lambda = 1$, for different values of $\mu$, for the Barbara image in the four blurring scenarios (SNR in dB / iterations).

μ    | Scenario 1     | Scenario 2     | Scenario 3     | Scenario 4
0.1  | 17.5358 / 50   | 17.5144 / 52   | 17.9741 / 45   | 17.9510 / 48
0.5  | 17.5414 / 55   | 17.5196 / 56   | 17.9385 / 49   | 17.9148 / 51
1    | 17.5515 / 61   | 17.5304 / 62   | 17.9071 / 54   | 17.8799 / 55
2    | 17.4859 / 65   | 17.4658 / 65   | 17.8012 / 56   | 17.7788 / 57
3    | 17.4001 / 65   | 17.3831 / 65   | 17.7126 / 56   | 17.6937 / 57
4    | 17.3251 / 65   | 17.3106 / 66   | 17.6368 / 57   | 17.6203 / 57
5    | 17.2584 / 67   | 17.2443 / 68   | 17.5713 / 58   | 17.5553 / 59
6    | 17.1948 / 69   | 17.1823 / 69   | 17.5117 / 60   | 17.4972 / 60
7    | 17.1344 / 71   | 17.1230 / 71   | 17.4563 / 61   | 17.4422 / 62
8    | 17.0779 / 73   | 17.0663 / 74   | 17.4033 / 63   | 17.3895 / 64
9    | 17.0237 / 75   | 17.0123 / 76   | 17.3530 / 65   | 17.3413 / 65
10   | 16.9718 / 77   | 16.9608 / 78   | 17.3049 / 67   | 17.2940 / 67
Table 4. SNR values and iteration counts when $\gamma = \frac{4}{1 + \sqrt{1 + 16 \times 8}}$, $\alpha = 0.3$, and $\lambda = 0.6$, for different values of $\mu$, for the Barbara image in the four blurring scenarios (SNR in dB / iterations).

μ    | Scenario 1     | Scenario 2     | Scenario 3     | Scenario 4
0.1  | 17.4757 / 52   | 17.4513 / 53   | 17.9109 / 46   | 17.8957 / 49
0.5  | 17.4740 / 55   | 17.4573 / 56   | 17.8890 / 50   | 17.8643 / 51
1    | 17.5060 / 62   | 17.4884 / 63   | 17.8783 / 56   | 17.8545 / 57
2    | 17.4687 / 68   | 17.4502 / 68   | 17.7913 / 59   | 17.7707 / 60
3    | 17.3929 / 69   | 17.3785 / 70   | 17.7093 / 60   | 17.6920 / 61
4    | 17.3214 / 70   | 17.3072 / 70   | 17.6367 / 61   | 17.6212 / 61
5    | 17.2577 / 72   | 17.2446 / 72   | 17.5723 / 62   | 17.5582 / 63
6    | 17.1970 / 74   | 17.1850 / 75   | 17.5144 / 64   | 17.5007 / 65
7    | 17.1384 / 76   | 17.1277 / 76   | 17.4600 / 66   | 17.4480 / 66
8    | 17.0832 / 78   | 17.0725 / 79   | 17.4078 / 68   | 17.3953 / 69
9    | 17.0298 / 81   | 17.0203 / 81   | 17.3580 / 70   | 17.3463 / 71
10   | 16.9787 / 83   | 16.9686 / 84   | 17.3103 / 72   | 17.3005 / 72
