Article

Two New Modified Regularized Methods for Solving the Variational Inclusion and Null Point Problems

1 College of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 Xichuan County Education and Sports Bureau of Henan Province, Nanyang 474450, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1469; https://doi.org/10.3390/math11061469
Submission received: 8 February 2023 / Revised: 10 March 2023 / Accepted: 13 March 2023 / Published: 17 March 2023

Abstract: In this article, based on regularization techniques, we construct two new algorithms combining the forward–backward splitting algorithm and the proximal contraction algorithm, respectively. The iterative sequences of the new algorithms converge strongly to a common solution of the variational inclusion and null point problems in real Hilbert spaces. Multi-step inertial extrapolation steps are applied to expedite the convergence rate. We also give some numerical experiments to verify that our algorithms are viable and efficient.

1. Introduction

Let $H$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. We recall the variational inclusion problem (VIP):

Find $v^{*} \in H$ such that $0 \in A(v^{*}) + B(v^{*})$, (1)

where $A: H \to 2^{H}$ is a set-valued operator and $B: H \to H$ is a single-valued operator. We denote the solution set of (1) by $\Phi$. The variational inclusion problem is a crucial extension of the variational inequality problem. Many nonlinear problems, such as saddle point, minimization, and split feasibility problems, can be transformed into variational inclusion problems, with applications to signal processing, neural networks, medical image reconstruction, machine learning, data mining, etc.; see [1,2,3,4,5,6,7].
As is well known, (1) can be converted to the fixed point equation $v^{*} = J_{\lambda}^{A}(v^{*} - \lambda B v^{*})$ for some $\lambda > 0$, where $J_{\lambda}^{A} = (I + \lambda A)^{-1}$ is the resolvent operator of $A$. The famous forward–backward splitting method (FBSM) was proposed by Lions and Mercier [8] in 1979:

$x_{n+1} = J_{\lambda}^{A}(I - \lambda B)x_{n}$,

where $A$ and $B$ are maximally monotone and $\eta$-inverse strongly monotone, respectively, and $\lambda \in (0, 2\eta)$. Note that Lipschitz continuity of an operator is a weaker property than inverse strong monotonicity, so the algorithm has a shortcoming: its convergence requires a strong hypothesis. In order to overcome this difficulty, Tseng [9] constructed a modified forward–backward splitting algorithm (TFBSM) in 2000:

$y_{n} = J_{\lambda}^{A}(I - \lambda B)x_{n}, \quad x_{n+1} = y_{n} - \lambda(B y_{n} - B x_{n})$,

where $B$ is monotone and Lipschitz continuous.
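To make the two update rules concrete, here is a minimal Python sketch (illustrative only, not from the paper) for the toy choices $Ax = x$, so that $J_{\lambda}^{A}x = x/(1+\lambda)$, and $Bx = 2x$; it performs one FBSM step and one TFBSM step.

import numpy as np

# Illustrative sketch: one FBSM step and one Tseng (TFBSM) step for the toy
# operators A x = x (so J_lam^A x = x / (1 + lam)) and B x = 2 x.
def resolvent_A(x, lam):
    # J_lam^A = (I + lam*A)^(-1); closed form for A = I
    return x / (1.0 + lam)

def B(x):
    return 2.0 * x

lam = 0.1
x = np.array([1.0, -2.0])

# FBSM: x_{n+1} = J_lam^A (I - lam*B) x_n
x_fbsm = resolvent_A(x - lam * B(x), lam)

# TFBSM: y_n = J_lam^A (I - lam*B) x_n, then x_{n+1} = y_n - lam*(B y_n - B x_n)
y = resolvent_A(x - lam * B(x), lam)
x_tseng = y - lam * (B(y) - B(x))
print(x_fbsm, x_tseng)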
On the other hand, a famous method for solving variational inequalities is the projection and contraction method, first introduced by He [10] for the variational inequality problem in Euclidean space. Inspired by this, the following proximal contraction method (PCM) was proposed by Zhang and Wang [11] in 2018:

$y_{n} = J_{\lambda_{n}}^{A}(x_{n} - \lambda_{n} B x_{n}), \quad h_{n} = x_{n} - y_{n} - \lambda_{n}(B x_{n} - B y_{n}), \quad x_{n+1} = x_{n} - r\beta_{n} h_{n}$,

where $r \in (0, 2)$,

$\beta_{n} = \begin{cases} 0, & h_{n} = 0, \\ \dfrac{\phi(x_{n}, y_{n})}{\|h_{n}\|^{2}}, & h_{n} \neq 0, \end{cases}$

$\phi(x_{n}, y_{n}) = \langle x_{n} - y_{n}, h_{n} \rangle$, and the sequence of variable stepsizes $\{\lambda_{n}\}$ satisfies some conditions. Notice that both (TFBSM) and (PCM) yield only weak convergence in real Hilbert spaces, and strongly convergent results are generally preferable to weakly convergent ones. In order to obtain strong convergence, Hieu et al. [12] gave an algorithm named the regularization proximal contraction method (RPCM) for solving (1) in 2021:

$y_{n} = J_{\lambda_{n}}^{A}\big(x_{n} - \lambda_{n}(B + \alpha_{n}F)x_{n}\big), \quad h_{n} = x_{n} - y_{n} - \lambda_{n}(B x_{n} - B y_{n}), \quad x_{n+1} = x_{n} - r\beta_{n} h_{n}$,

where $r \in (0, 2)$, $\phi(x_{n}, y_{n}) = \langle x_{n} - y_{n}, h_{n} \rangle$, $\beta_{n} = \min\left\{\beta, \dfrac{\phi(x_{n}, y_{n})}{\|h_{n}\|^{2}}\right\}$, and $\{\lambda_{n}\}$ satisfies some appropriate conditions. Before this, some scholars had successfully applied this technique to the variational inequality problem. Very recently, Song and Bazighifan [13] introduced an inertial regularized method for solving the variational inequality and null point problem.
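In code, a (PCM)-type iteration is equally short; a sketch reusing the toy resolvent_A and B above (again illustrative, with a fixed stepsize $\lambda$):

# One (PCM) iteration: y_n, h_n, beta_n, then x_{n+1} = x_n - r*beta_n*h_n.
def pcm_step(x, lam, r=1.0):
    y = resolvent_A(x - lam * B(x), lam)
    h = x - y - lam * (B(x) - B(y))
    phi = np.dot(x - y, h)               # phi(x_n, y_n) = <x_n - y_n, h_n>
    beta = 0.0 if np.allclose(h, 0.0) else phi / np.dot(h, h)
    return x - r * beta * h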
In recent years, there has been interest in inertial methods, which are considered effective in expediting convergence. The inertial method is favored for its simple structure and ease of implementation, and it has been widely promoted and studied in depth. In 2003, Moudafi and Oliny [14] combined (FBSM) with the inertial method to construct a new algorithm:

$y_{n} = x_{n} + \vartheta_{n}(x_{n} - x_{n-1}), \quad x_{n+1} = J_{\lambda_{n}}^{A}(y_{n} - \lambda_{n} B x_{n})$,

where $\{\lambda_{n}\}$ is a positive real sequence. Furthermore, some scholars have proposed multi-step inertial methods; in 2021, Wang et al. [15] proposed a multi-step inertial hybrid method to solve problem (1), as sketched below.
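A multi-step inertial extrapolation simply adds a weighted sum of the last $N$ differences of iterates. The following helper (a sketch; the safeguard on $\theta_{i,n}$ anticipates the rule used in Algorithms 1 and 2 below, here with a single bound theta for all $i$) will be reused later:

# w_n = x_n + sum_{i=1}^{min(n,N)} theta_{i,n} * (x_{n-i+1} - x_{n-i}),
# where theta_{i,n} = min(theta, eps(i, n) / ||x_{n-i+1} - x_{n-i}||).
def inertial_point(history, n, N, theta, eps):
    w = history[n].copy()
    for i in range(1, min(n, N) + 1):
        diff = history[n - i + 1] - history[n - i]
        nrm = np.linalg.norm(diff)
        th = theta if nrm == 0.0 else min(theta, eps(i, n) / nrm)
        w = w + th * diff
    return w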
Inspired by [12,13,15], we consider the variational inclusion and null point problem:

Find $x^{\S} \in \Phi \cap G^{-1}(0)$ such that $\langle F x^{\S}, x - x^{\S} \rangle \ge 0, \quad \forall x \in \Phi \cap G^{-1}(0)$, (2)

where $G$ and $F$ are nonlinear operators. We propose two modified regularized multi-step inertial methods to solve this problem: a modified forward–backward splitting algorithm and a modified proximal contraction algorithm. Using regularization techniques, the new algorithms converge strongly under mild conditions. Some numerical examples are given to show that our algorithms are efficient.
This article is arranged as follows: in Section 2, we introduce some notation, fundamental definitions, and results used in the later proofs. In Section 3, we present the new algorithms and discuss their convergence. In Section 4, we report some numerical experiments to support the theoretical results.

2. Preliminaries

Let $H$ be a real Hilbert space. The weak convergence and strong convergence of a sequence $\{x_{n}\}$ are denoted by $x_{n} \rightharpoonup x$ and $x_{n} \to x$, respectively.
Definition 1
([16]). The mapping $T: H \to H$ is called
(i) monotone, if $\langle Ty - Tx, y - x \rangle \ge 0, \ \forall x, y \in H$;
(ii) $\gamma$-strongly monotone ($\gamma > 0$), if $\langle Ty - Tx, y - x \rangle \ge \gamma\|y - x\|^{2}, \ \forall x, y \in H$;
(iii) $\delta$-inverse strongly monotone ($\delta > 0$), if $\langle Ty - Tx, y - x \rangle \ge \delta\|Ty - Tx\|^{2}, \ \forall x, y \in H$;
(iv) $l$-Lipschitz continuous ($l > 0$), if $\|Ty - Tx\| \le l\|y - x\|, \ \forall x, y \in H$;
(v) firmly nonexpansive, if $\langle Ty - Tx, y - x \rangle \ge \|Ty - Tx\|^{2}, \ \forall x, y \in H$;
(vi) nonexpansive, if $\|Ty - Tx\| \le \|y - x\|, \ \forall x, y \in H$.
Definition 2
([16]). Let $T: H \to 2^{H}$ be a set-valued mapping. The graph of $T$ is defined by $\mathrm{Graph}(T) = \{(x, u) : x \in H, u \in Tx\}$. The mapping $T$ is said to be
(i) monotone, if $\langle v - u, y - x \rangle \ge 0, \ \forall u \in Tx, v \in Ty$;
(ii) maximally monotone, if $T$ is monotone on $H$ and, for any $(y, v) \in H \times H$,
$\langle v - u, y - x \rangle \ge 0, \ \forall (x, u) \in \mathrm{Graph}(T) \quad \text{implies} \quad (y, v) \in \mathrm{Graph}(T)$.
Lemma 1
([17]). Let $A: H \to 2^{H}$ be a maximally monotone operator and $B: H \to H$ be a monotone Lipschitz continuous operator. Then $A + B$ is maximally monotone.
Lemma 2
([18]). Let $\{t_{n}\}$ be a nonnegative real sequence satisfying
$t_{n+1} \le (1 - \beta_{n})t_{n} + \beta_{n} d_{n} + \varrho_{n}, \quad \forall n \ge 1$,
where $\{\beta_{n}\}$, $\{d_{n}\}$ and $\{\varrho_{n}\}$ satisfy the conditions:
(i) $\{\beta_{n}\} \subset (0, 1)$, $\sum_{n=1}^{\infty}\beta_{n} = \infty$;
(ii) $\limsup_{n\to\infty} d_{n} \le 0$;
(iii) $\varrho_{n} \ge 0$ with $\sum_{n=1}^{\infty}\varrho_{n} < \infty$.
Then $\lim_{n\to\infty} t_{n} = 0$.
Lemma 3
([19]). Let $C$ be a nonempty closed convex subset of $H$ and $T: C \to C$ be a nonexpansive mapping. Then the mapping $I - T$ is demiclosed at zero, i.e., if $x_{n} \rightharpoonup x$ and $(I - T)x_{n} \to 0$, then $x \in \mathrm{Fix}(T)$.

3. Main Results

We mainly introduce our new algorithms and analyze their convergence in this section. Let H be a real Hilbert space. The following assumptions will be needed throughout the paper:
(A1) $A: H \to 2^{H}$ is maximally monotone.
(A2) $B: H \to H$ is monotone and $L$-Lipschitz continuous.
(A3) $F: H \to H$ is $\xi$-strongly monotone and $k$-Lipschitz continuous.
(A4) $G: H \to H$ is $\gamma$-inverse strongly monotone.
(A5) $\Omega := \Phi \cap G^{-1}(0) \neq \emptyset$, where $\Phi$ is the solution set of (1).
To solve (2), we construct an auxiliary problem:

Find $x \in H$ such that $0 \in A(x) + B(x) + \alpha^{\omega}G(x) + \alpha F(x)$, (3)

for each $\alpha > 0$ and $0 < \omega < 1$; the solution of problem (3) is denoted by $x_{\alpha}$.
Lemma 4.
Under assumptions (A1)–(A4), for each $\alpha > 0$ and $0 < \omega < 1$, the problem (3) has a unique solution $x_{\alpha}$.
Proof.
By the assumed properties of $A$, $B$, $G$, and $F$, the operator $A + B + \alpha^{\omega}G + \alpha F$ is strongly monotone: indeed, $A$ and $B$ are monotone, $G$ is monotone (being inverse strongly monotone), and $\alpha F$ is $\alpha\xi$-strongly monotone, so the sum is $\alpha\xi$-strongly monotone. It is well known that a strongly monotone inclusion has a unique solution (see [20]). Therefore, the problem (3) has a unique solution $x_{\alpha}$. □
Lemma 5.
The net $\{x_{\alpha}\}$ is bounded.
Proof.
For each $p \in \Omega$ and $\alpha > 0$, we have $0 \in Ap + Bp$, $Gp = 0$ and $0 \in Ax_{\alpha} + Bx_{\alpha} + \alpha^{\omega}Gx_{\alpha} + \alpha Fx_{\alpha}$. Thus,
$-\alpha Fx_{\alpha} \in Ax_{\alpha} + Bx_{\alpha} + \alpha^{\omega}Gx_{\alpha}$,
and
$0 \in Ap + Bp + \alpha^{\omega}Gp$.
Using the monotonicity of $A$, $B$ and $G$, we derive
$\langle p - x_{\alpha}, \alpha Fx_{\alpha} \rangle \ge 0$. (4)
By (4) and the $\xi$-strong monotonicity of $F$, it follows that
$\langle p - x_{\alpha}, Fp \rangle = \langle p - x_{\alpha}, Fx_{\alpha} \rangle + \langle p - x_{\alpha}, Fp - Fx_{\alpha} \rangle \ge \xi\|p - x_{\alpha}\|^{2}$. (5)
Consequently, by (5) and the Cauchy–Schwarz inequality, we find $\|Fp\|\,\|p - x_{\alpha}\| \ge \xi\|p - x_{\alpha}\|^{2}$, so $\|p - x_{\alpha}\| \le \|Fp\|/\xi$, and we get
$\|x_{\alpha}\| \le \|p\| + \|p - x_{\alpha}\| \le \|p\| + \dfrac{\|Fp\|}{\xi}$.
So the net $\{x_{\alpha}\}$ is bounded. □
Lemma 6.
For all $\alpha_{1}, \alpha_{2} \in (0, 1)$, there exists $M > 0$ such that
$\|x_{\alpha_{1}} - x_{\alpha_{2}}\| \le \dfrac{|\alpha_{2} - \alpha_{1}|}{\alpha_{1}\alpha_{2}} M$.
Proof.
By assumption, $x_{\alpha_{1}}$ and $x_{\alpha_{2}}$ are solutions of problem (3); suppose that $0 < \alpha_{2} < \alpha_{1} < 1$. Then
$0 \in Ax_{\alpha_{1}} + Bx_{\alpha_{1}} + \alpha_{1}^{\omega}Gx_{\alpha_{1}} + \alpha_{1}Fx_{\alpha_{1}}$
and
$0 \in Ax_{\alpha_{2}} + Bx_{\alpha_{2}} + \alpha_{2}^{\omega}Gx_{\alpha_{2}} + \alpha_{2}Fx_{\alpha_{2}}$,
which implies
$-\alpha_{1}^{\omega}Gx_{\alpha_{1}} - \alpha_{1}Fx_{\alpha_{1}} \in (A + B)x_{\alpha_{1}}$
and
$-\alpha_{2}^{\omega}Gx_{\alpha_{2}} - \alpha_{2}Fx_{\alpha_{2}} \in (A + B)x_{\alpha_{2}}$.
By Lemma 1, $A + B$ is (maximally) monotone, so
$\langle x_{\alpha_{1}} - x_{\alpha_{2}}, -\alpha_{1}Fx_{\alpha_{1}} - \alpha_{1}^{\omega}Gx_{\alpha_{1}} + \alpha_{2}Fx_{\alpha_{2}} + \alpha_{2}^{\omega}Gx_{\alpha_{2}} \rangle \ge 0$,
or, equivalently,
$\langle x_{\alpha_{1}} - x_{\alpha_{2}}, (\alpha_{2} - \alpha_{1})Fx_{\alpha_{2}} \rangle + \langle x_{\alpha_{1}} - x_{\alpha_{2}}, \alpha_{1}(Fx_{\alpha_{2}} - Fx_{\alpha_{1}}) \rangle + \langle x_{\alpha_{1}} - x_{\alpha_{2}}, (\alpha_{2}^{\omega} - \alpha_{1}^{\omega})Gx_{\alpha_{2}} \rangle + \langle x_{\alpha_{1}} - x_{\alpha_{2}}, \alpha_{1}^{\omega}(Gx_{\alpha_{2}} - Gx_{\alpha_{1}}) \rangle \ge 0$.
The properties of $G$ and $F$ and the Cauchy–Schwarz inequality imply that
$\alpha_{1}\xi\|x_{\alpha_{1}} - x_{\alpha_{2}}\|^{2} \le (\alpha_{2}^{\omega} - \alpha_{1}^{\omega})\langle x_{\alpha_{1}} - x_{\alpha_{2}}, Gx_{\alpha_{2}} \rangle + (\alpha_{2} - \alpha_{1})\langle x_{\alpha_{1}} - x_{\alpha_{2}}, Fx_{\alpha_{2}} \rangle \le |\alpha_{2}^{\omega} - \alpha_{1}^{\omega}|\,\|x_{\alpha_{1}} - x_{\alpha_{2}}\|\,\|Gx_{\alpha_{2}}\| + |\alpha_{2} - \alpha_{1}|\,\|x_{\alpha_{1}} - x_{\alpha_{2}}\|\,\|Fx_{\alpha_{2}}\|$,
which yields
$\|x_{\alpha_{1}} - x_{\alpha_{2}}\| \le \dfrac{|\alpha_{2}^{\omega} - \alpha_{1}^{\omega}|\,\|Gx_{\alpha_{2}}\| + |\alpha_{2} - \alpha_{1}|\,\|Fx_{\alpha_{2}}\|}{\alpha_{1}\xi}$. (6)
The Lipschitz continuity of $F$ and $G$ together with Lemma 5 implies that the nets $\{Gx_{\alpha}\}$ and $\{Fx_{\alpha}\}$ are bounded. Combining this with Lagrange's mean-value theorem, we deduce that
$|\alpha_{2}^{\omega} - \alpha_{1}^{\omega}| = \alpha_{1}^{\omega} - \alpha_{2}^{\omega} \le \omega\alpha_{2}^{\omega - 1}(\alpha_{1} - \alpha_{2}) \le \omega\alpha_{2}^{-1}(\alpha_{1} - \alpha_{2}) \le \alpha_{2}^{-1}(\alpha_{1} - \alpha_{2})$,
which together with (6) implies that
$\|x_{\alpha_{1}} - x_{\alpha_{2}}\| \le \dfrac{|\alpha_{2} - \alpha_{1}|}{\alpha_{1}\alpha_{2}}\cdot\dfrac{\|Gx_{\alpha_{2}}\|}{\xi} + \dfrac{|\alpha_{2} - \alpha_{1}|}{\alpha_{1}\alpha_{2}}\cdot\dfrac{\|Fx_{\alpha_{2}}\|}{\xi} \le \dfrac{|\alpha_{2} - \alpha_{1}|}{\alpha_{1}\alpha_{2}} M$, (7)
where $M = \frac{1}{\xi}\sup_{\alpha \in (0,1)}\{\|Gx_{\alpha}\| + \|Fx_{\alpha}\|\}$ is finite, since the nets $\{Gx_{\alpha}\}$ and $\{Fx_{\alpha}\}$ are bounded. If $0 < \alpha_{1} < \alpha_{2} < 1$, the same bound follows by symmetry. □
Lemma 7.
$\lim_{\alpha \to 0^{+}} x_{\alpha} = x^{\S}$.
Proof.
By Lemma 5, the net $\{x_{\alpha}\}$ is bounded, so there exists a sequence $\{x_{\alpha_{m}}\}$ with $\alpha_{m} \to 0^{+}$ as $m \to \infty$ such that $x_{\alpha_{m}} \rightharpoonup \bar{x}$. From (3), we have $-Bx_{\alpha} - \alpha^{\omega}Gx_{\alpha} - \alpha Fx_{\alpha} \in Ax_{\alpha}$. Let us take a point $(u, v) \in \mathrm{Graph}(A + B)$, that is, $v - Bu \in Au$. By assumption (A1), we derive
$\langle u - x_{\alpha}, v - Bu + Bx_{\alpha} + \alpha^{\omega}Gx_{\alpha} + \alpha Fx_{\alpha} \rangle \ge 0$. (8)
Replacing $\alpha$ with $\alpha_{m}$, we deduce from the monotonicity of $B$ that
$0 \le \langle u - x_{\alpha_{m}}, v - Bu + Bx_{\alpha_{m}} + \alpha_{m}^{\omega}Gx_{\alpha_{m}} + \alpha_{m}Fx_{\alpha_{m}} \rangle = \langle u - x_{\alpha_{m}}, \alpha_{m}^{\omega}Gx_{\alpha_{m}} + \alpha_{m}Fx_{\alpha_{m}} \rangle + \langle u - x_{\alpha_{m}}, v \rangle - \langle x_{\alpha_{m}} - u, Bx_{\alpha_{m}} - Bu \rangle \le \langle u - x_{\alpha_{m}}, \alpha_{m}^{\omega}Gx_{\alpha_{m}} + \alpha_{m}Fx_{\alpha_{m}} \rangle + \langle u - x_{\alpha_{m}}, v \rangle$.
The sequences $\{Fx_{\alpha_{m}}\}$ and $\{Gx_{\alpha_{m}}\}$ are bounded by the boundedness of $\{x_{\alpha_{m}}\}$ and the Lipschitz continuity of $F$ and $G$. Letting $m \to \infty$ in the relation above, we infer that
$\langle u - \bar{x}, v \rangle \ge 0, \quad \forall (u, v) \in \mathrm{Graph}(A + B)$,
and hence, by the maximal monotonicity of $A + B$ (Lemma 1),
$\bar{x} \in (A + B)^{-1}(0)$. (9)
For every $q \in \Omega$, $0 \in Aq + Bq$ and $Gq = 0$. By (3), we obtain
$-\alpha_{m}^{\omega}Gx_{\alpha_{m}} - \alpha_{m}Fx_{\alpha_{m}} \in Ax_{\alpha_{m}} + Bx_{\alpha_{m}}$;
by the monotonicity of $A + B$, we know that
$\langle x_{\alpha_{m}} - q, -\alpha_{m}^{\omega}Gx_{\alpha_{m}} - \alpha_{m}Fx_{\alpha_{m}} \rangle \ge 0$,
and by the monotonicity of $F$,
$\alpha_{m}^{\omega}\langle Gx_{\alpha_{m}}, x_{\alpha_{m}} - q \rangle \le \alpha_{m}\langle Fx_{\alpha_{m}}, q - x_{\alpha_{m}} \rangle \le \alpha_{m}\langle Fq, q - x_{\alpha_{m}} \rangle$,
which leads to
$\langle Gx_{\alpha_{m}}, x_{\alpha_{m}} - q \rangle \le \alpha_{m}^{1-\omega}\langle Fq, q - x_{\alpha_{m}} \rangle$. (10)
By the $\gamma$-inverse strong monotonicity of $G$, noting (10) and $Gq = 0$, we obtain
$\gamma\|Gx_{\alpha_{m}}\|^{2} = \gamma\|Gx_{\alpha_{m}} - Gq\|^{2} \le \langle Gx_{\alpha_{m}} - Gq, x_{\alpha_{m}} - q \rangle = \langle Gx_{\alpha_{m}}, x_{\alpha_{m}} - q \rangle \le \alpha_{m}^{1-\omega}\langle Fq, q - x_{\alpha_{m}} \rangle \to 0$,
which yields
$\lim_{m\to\infty}\|Gx_{\alpha_{m}}\| = 0$.
For any $\iota \in (0, 2\gamma]$, the mapping $G_{\iota} = I - \iota G$ is obviously nonexpansive. Owing to Lemma 3, we obtain $\bar{x} \in \mathrm{Fix}(G_{\iota})$, i.e.,
$\bar{x} \in G^{-1}(0)$,
which together with (9) implies
$\bar{x} \in \Omega$.
Noting (5), we obtain $\langle Fp, p - x_{\alpha} \rangle \ge 0$ for all $p \in \Omega$. Letting $\alpha = \alpha_{m} \to 0^{+}$, we have
$\langle Fp, p - \bar{x} \rangle \ge 0, \quad \forall p \in \Omega$.
By the Minty lemma [21], we get
$\langle F\bar{x}, p - \bar{x} \rangle \ge 0, \quad \forall p \in \Omega$.
Due to the uniqueness of the solution $x^{\S}$ of problem (2), we have $\bar{x} = x^{\S}$. Since $\bar{x}$ is an arbitrary point of the weak cluster set $\omega_{w}(x_{\alpha})$, we get $\omega_{w}(x_{\alpha}) = \{x^{\S}\}$, that is, the net $\{x_{\alpha}\}$ converges weakly to $x^{\S}$. After that, applying (5) with $p = x^{\S}$, we get
$\xi\|x^{\S} - x_{\alpha}\|^{2} \le \langle Fx^{\S}, x^{\S} - x_{\alpha} \rangle$. (11)
Taking the limit in (11) as $\alpha \to 0^{+}$ and using $x_{\alpha} \rightharpoonup x^{\S}$, we obtain
$\lim_{\alpha\to 0^{+}}\xi\|x^{\S} - x_{\alpha}\|^{2} \le \lim_{\alpha\to 0^{+}}\langle Fx^{\S}, x^{\S} - x_{\alpha} \rangle = 0$.
Thus, $\lim_{\alpha\to 0^{+}}\|x^{\S} - x_{\alpha}\| = 0$. □
Remark 1.
The parameter $\alpha_{n}$ can be chosen as $\alpha_{n} = \frac{1}{n^{p}}$, where $0 < p < \frac{1}{2}$.
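Indeed, for this choice the conditions imposed on $\{\alpha_{n}\}$ in Algorithms 1 and 2 below can be verified directly:
$\sum_{n=1}^{\infty} n^{-p} = \infty, \qquad n^{-p} \to 0, \qquad \dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n+1}\alpha_{n}^{2}} = \dfrac{n^{-p} - (n+1)^{-p}}{(n+1)^{-p}n^{-2p}} \le p\left(\dfrac{n+1}{n}\right)^{p}n^{2p-1} \to 0$,
since $2p - 1 < 0$; the middle bound uses the mean value theorem, as in the proof of Lemma 6.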
Lemma 8.
Under condition (A2), the sequence $\{\lambda_{n}\}$ generated by Algorithm 1 or Algorithm 2 is convergent and
$\lim_{n\to\infty}\lambda_{n} = \lambda > 0$.
To be more precise, we have $\lambda \ge \min\left\{\lambda_{1}, \frac{\mu}{L}\right\} > 0$.
Algorithm 1 Modified multi-step inertial forward–backward splitting method with regularization
  • Initialization: Let $x_{0}, x_{1} \in H$ be arbitrary, $\mu \in (0, 1)$, $\lambda_{1} \in (0, (1 - \mu)\gamma)$, and set $n := 1$.
  • Choose a sequence $\{\tau_{n}\} \subset [0, +\infty)$ such that $\sum_{n=1}^{\infty}\tau_{n} = \tau < \infty$ and $0 < \mu + \frac{\lambda_{1} + \tau}{\gamma} < 1$.
  • Choose a sequence $\{\alpha_{n}\} \subset [0, +\infty)$ satisfying
    $\sum_{n=1}^{\infty}\alpha_{n} = \infty, \quad \lim_{n\to\infty}\alpha_{n} = 0, \quad \lim_{n\to\infty}\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n+1}\alpha_{n}^{2}} = 0$.
  • For a given positive integer $N$, choose sequences $\{\epsilon_{i,n}\} \subset [0, +\infty)$ ($i = 1, 2, \dots, N$) satisfying
    $\lim_{n\to+\infty}\dfrac{\epsilon_{i,n}}{\alpha_{n}} = 0$.
  • Iterative steps: Calculate $x_{n+1}$ as follows:
  • Step 1. Compute
    $w_{n} = x_{n} + \sum_{i=1}^{\min\{n, N\}}\theta_{i,n}(x_{n-i+1} - x_{n-i})$,
    where $0 \le \theta_{i,n} \le \theta_{i}$ for some $\theta_{i} \in \mathbb{R}$ with
    $\theta_{i,n} = \begin{cases}\min\left\{\theta_{i}, \dfrac{\epsilon_{i,n}}{\|x_{n-i+1} - x_{n-i}\|}\right\}, & \text{if } x_{n-i+1} \neq x_{n-i},\\ \theta_{i}, & \text{otherwise}.\end{cases}$
  • Step 2. Compute
    $y_{n} = J_{\lambda_{n}}^{A}\big(w_{n} - \lambda_{n}(B + \alpha_{n}^{\omega}G + \alpha_{n}F)w_{n}\big)$.
  • Step 3. Compute
    $x_{n+1} = y_{n} - \lambda_{n}\big(By_{n} - Bw_{n} + \alpha_{n}^{\omega}Gy_{n} - \alpha_{n}^{\omega}Gw_{n}\big)$,
    and
    $\lambda_{n+1} = \begin{cases}\min\left\{\lambda_{n} + \tau_{n}, \dfrac{\mu\|w_{n} - y_{n}\|}{\|Bw_{n} - By_{n}\|}\right\}, & \text{if } Bw_{n} \neq By_{n},\\ \lambda_{n} + \tau_{n}, & \text{otherwise}.\end{cases}$
  • Set $n := n + 1$ and go to Step 1.
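A compact Python skeleton of Algorithm 1 (a sketch under assumptions (A1)–(A4), not the authors' Matlab code), with the resolvent $J_{\lambda}^{A}$ and the operators $B$, $G$, $F$ passed in as callables and with inertial_point as sketched in Section 1; the default parameters mirror Example 1 below:

def algorithm1(resolvent_A, B, G, F, x0, x1, N=2, theta=0.1, mu=0.6,
               lam1=0.08, omega=0.6, n_iter=200,
               alpha=lambda n: n ** (-1.0 / 3.0),
               tau=lambda n: 0.1 / (n + 1) ** 4,
               eps=lambda i, n: n ** (-2.0)):
    hist, lam = [x0, x1], lam1
    for n in range(1, n_iter):
        a = alpha(n)
        # Step 1: multi-step inertial extrapolation
        w = inertial_point(hist, n, N, theta, eps)
        # Step 2: y_n = J_{lam_n}^A ( w_n - lam_n (B + a^omega G + a F) w_n )
        y = resolvent_A(w - lam * (B(w) + a**omega * G(w) + a * F(w)), lam)
        # Step 3: forward correction ...
        x_next = y - lam * (B(y) - B(w) + a**omega * (G(y) - G(w)))
        # ... and self-adaptive stepsize update
        dB = np.linalg.norm(B(w) - B(y))
        lam = min(lam + tau(n), mu * np.linalg.norm(w - y) / dB) if dB > 0 \
            else lam + tau(n)
        hist.append(x_next)
    return hist[-1]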
Algorithm 2 Modified multi-step inertial proximal contraction method with regularization
  • Initialization: Let $x_{0}, x_{1} \in H$ be arbitrary, $r \in (0, 2)$, $\beta > 0$, $\mu \in (0, 1)$, $\lambda_{1} \in (0, (1 - \mu)\gamma)$, and set $n := 1$. Choose a sequence $\{\tau_{n}\} \subset [0, +\infty)$ such that $\sum_{n=1}^{\infty}\tau_{n} = \tau < \infty$ and $0 < \mu + \frac{\lambda_{1} + \tau}{\gamma} < 1$. Choose a sequence $\{\alpha_{n}\} \subset [0, +\infty)$ satisfying
    $\sum_{n=1}^{\infty}\alpha_{n} = \infty, \quad \lim_{n\to\infty}\alpha_{n} = 0, \quad \lim_{n\to\infty}\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n+1}\alpha_{n}^{2}} = 0$.
    For a given positive integer $N$, choose sequences $\{\epsilon_{i,n}\} \subset [0, +\infty)$ ($i = 1, 2, \dots, N$) satisfying
    $\lim_{n\to\infty}\dfrac{\epsilon_{i,n}}{\alpha_{n}} = 0$.
    Iterative steps: Calculate $x_{n+1}$ as follows:
    Step 1. Compute
    $w_{n} = x_{n} + \sum_{i=1}^{\min\{N, n\}}\theta_{i,n}(x_{n-i+1} - x_{n-i})$,
    where $0 \le \theta_{i,n} \le \theta_{i}$ for some $\theta_{i} \in \mathbb{R}$ with
    $\theta_{i,n} = \begin{cases}\min\left\{\theta_{i}, \dfrac{\epsilon_{i,n}}{\|x_{n-i+1} - x_{n-i}\|}\right\}, & \text{if } x_{n-i+1} \neq x_{n-i},\\ \theta_{i}, & \text{otherwise}.\end{cases}$
    Step 2. Compute
    $y_{n} = J_{\lambda_{n}}^{A}\big(w_{n} - \lambda_{n}(B + \alpha_{n}^{\omega}G + \alpha_{n}F)w_{n}\big)$
    and
    $\lambda_{n+1} = \begin{cases}\min\left\{\lambda_{n} + \tau_{n}, \dfrac{\mu\|w_{n} - y_{n}\|}{\|Bw_{n} - By_{n}\|}\right\}, & \text{if } Bw_{n} \neq By_{n},\\ \lambda_{n} + \tau_{n}, & \text{otherwise}.\end{cases}$
    Step 3. Compute
    $h_{n} = w_{n} - y_{n} - \lambda_{n}\big(Bw_{n} - By_{n} + \alpha_{n}^{\omega}(Gw_{n} - Gy_{n})\big), \quad \phi(w_{n}, y_{n}) = \langle w_{n} - y_{n}, h_{n} \rangle$.
    Step 4. Compute
    $x_{n+1} = w_{n} - r\beta_{n}h_{n}$,
    where
    $\beta_{n} = \begin{cases}\dfrac{\phi(w_{n}, y_{n})}{\|h_{n}\|^{2}}, & \text{if } h_{n} \neq 0,\\ \beta, & \text{otherwise}.\end{cases}$
  • Set $n := n + 1$ and go to Step 1.
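Algorithm 2 shares Steps 1 and 2 with Algorithm 1 and differs only in the contraction update; a sketch of its Steps 3 and 4 (same conventions as the algorithm1 skeleton above), given $w_{n}$, $y_{n}$, $\lambda_{n}$ and $\alpha_{n}$:

# Steps 3-4 of Algorithm 2: h_n, phi(w_n, y_n), beta_n, then
# x_{n+1} = w_n - r*beta_n*h_n.
def pcm_update(w, y, lam, a, omega, B, G, r=1.0, beta_bar=2.0):
    h = w - y - lam * (B(w) - B(y) + a**omega * (G(w) - G(y)))
    phi = np.dot(w - y, h)
    beta = beta_bar if np.allclose(h, 0.0) else phi / np.dot(h, h)
    return w - r * beta * h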
Proof.
Since
$\|Bw_{n} - By_{n}\| \le L\|w_{n} - y_{n}\|$,
in the case $Bw_{n} \neq By_{n}$ we have
$\dfrac{\mu\|w_{n} - y_{n}\|}{\|Bw_{n} - By_{n}\|} \ge \dfrac{\mu\|w_{n} - y_{n}\|}{L\|w_{n} - y_{n}\|} = \dfrac{\mu}{L}$.
By induction, the sequence $\{\lambda_{n}\}$ has the lower bound $\min\left\{\lambda_{1}, \frac{\mu}{L}\right\}$. From the computation of $\lambda_{n+1}$, we get
$\lambda_{n+1} \le \lambda_{n} + \tau_{n}$,
that is,
$\lambda_{n+1} - \lambda_{n} \le \tau_{n}$.
Let $[a]_{+}$ denote $\max\{a, 0\}$ for all $a \in \mathbb{R}$. Since $\tau_{n} \ge 0$,
$[\lambda_{n+1} - \lambda_{n}]_{+} \le \tau_{n}$.
Because $\sum_{n=1}^{\infty}\tau_{n} < \infty$, obviously
$\sum_{n=1}^{\infty}[\lambda_{n+1} - \lambda_{n}]_{+} < \infty$.
Besides, since $|a| = 2[a]_{+} - a$, we infer
$|\lambda_{n+1} - \lambda_{n}| = 2[\lambda_{n+1} - \lambda_{n}]_{+} - \lambda_{n+1} + \lambda_{n}$;
then
$\sum_{n=1}^{k}|\lambda_{n+1} - \lambda_{n}| = 2\sum_{n=1}^{k}[\lambda_{n+1} - \lambda_{n}]_{+} - \lambda_{k+1} + \lambda_{1}$.
Since $\{\lambda_{n}\}$ has the lower bound $\min\left\{\lambda_{1}, \frac{\mu}{L}\right\}$, we know $\lambda_{k+1} > 0$. So we have
$\sum_{n=1}^{k}|\lambda_{n+1} - \lambda_{n}| < 2\sum_{n=1}^{k}[\lambda_{n+1} - \lambda_{n}]_{+} + \lambda_{1}$,
and furthermore
$\sum_{n=1}^{\infty}|\lambda_{n+1} - \lambda_{n}| < \infty$.
Therefore $\{\lambda_{n}\}$ is convergent, and its limit $\lambda$ satisfies $\lambda \ge \min\left\{\lambda_{1}, \frac{\mu}{L}\right\} > 0$. □
Theorem 1.
If conditions (A1)–(A5) hold, $x^{\S}$ is the unique solution of problem (2), and the sequence $\{x_{n}\}$ is generated by Algorithm 1, then $x_{n}$ converges strongly to $x^{\S}$.
Proof.
Setting $s_{n} = By_{n} - Bw_{n} + \alpha_{n}^{\omega}Gy_{n} - \alpha_{n}^{\omega}Gw_{n}$, we have
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} = \|y_{n} - \lambda_{n}s_{n} - x_{\alpha_{n}}\|^{2} = \|y_{n} - x_{\alpha_{n}}\|^{2} + \lambda_{n}^{2}\|s_{n}\|^{2} - 2\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, s_{n} \rangle$. (12)
Since $x_{\alpha_{n}}$ is the solution of (3), we get
$x_{\alpha_{n}} = J_{\lambda_{n}}^{A}\big(x_{\alpha_{n}} - \lambda_{n}(Bx_{\alpha_{n}} + \alpha_{n}^{\omega}Gx_{\alpha_{n}} + \alpha_{n}Fx_{\alpha_{n}})\big)$,
and since $J_{\lambda_{n}}^{A}$ is firmly nonexpansive,
$\langle y_{n} - x_{\alpha_{n}}, w_{n} - x_{\alpha_{n}} - \lambda_{n}(Bw_{n} + \alpha_{n}^{\omega}Gw_{n} + \alpha_{n}Fw_{n} - Bx_{\alpha_{n}} - \alpha_{n}^{\omega}Gx_{\alpha_{n}} - \alpha_{n}Fx_{\alpha_{n}}) \rangle \ge \|y_{n} - x_{\alpha_{n}}\|^{2}$,
which implies
$\langle y_{n} - x_{\alpha_{n}}, w_{n} - x_{\alpha_{n}} \rangle - \lambda_{n}\langle y_{n} - x_{\alpha_{n}}, Bw_{n} + \alpha_{n}^{\omega}Gw_{n} - Bx_{\alpha_{n}} - \alpha_{n}^{\omega}Gx_{\alpha_{n}} \rangle - \alpha_{n}\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle \ge \|y_{n} - x_{\alpha_{n}}\|^{2}$. (13)
By the monotonicity of $B$ and $G$, we find
$\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, By_{n} - Bx_{\alpha_{n}} + \alpha_{n}^{\omega}Gy_{n} - \alpha_{n}^{\omega}Gx_{\alpha_{n}} \rangle \ge 0$; (14)
combining (13) and (14), we derive
$\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, By_{n} - Bw_{n} + \alpha_{n}^{\omega}Gy_{n} - \alpha_{n}^{\omega}Gw_{n} \rangle \ge \langle y_{n} - x_{\alpha_{n}}, x_{\alpha_{n}} - w_{n} \rangle + \alpha_{n}\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle + \|y_{n} - x_{\alpha_{n}}\|^{2}$,
or, equivalently,
$\langle y_{n} - x_{\alpha_{n}}, s_{n} \rangle \ge \dfrac{1}{\lambda_{n}}\langle y_{n} - x_{\alpha_{n}}, x_{\alpha_{n}} - w_{n} \rangle + \alpha_{n}\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle + \dfrac{1}{\lambda_{n}}\|y_{n} - x_{\alpha_{n}}\|^{2}$. (15)
Combining (12) and (15), we get
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \lambda_{n}^{2}\|s_{n}\|^{2} - 2\langle y_{n} - x_{\alpha_{n}}, x_{\alpha_{n}} - w_{n} \rangle - 2\alpha_{n}\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle - \|y_{n} - x_{\alpha_{n}}\|^{2}$, (16)
and the identity
$2\langle y_{n} - x_{\alpha_{n}}, x_{\alpha_{n}} - w_{n} \rangle = \|y_{n} - w_{n}\|^{2} - \|w_{n} - x_{\alpha_{n}}\|^{2} - \|x_{\alpha_{n}} - y_{n}\|^{2}$ (17)
implies that
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \|x_{\alpha_{n}} - w_{n}\|^{2} - \|w_{n} - y_{n}\|^{2} + \lambda_{n}^{2}\|s_{n}\|^{2} - 2\alpha_{n}\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle$. (18)
Moreover,
$\lambda_{n}^{2}\|s_{n}\|^{2} = \lambda_{n}^{2}\|By_{n} - Bw_{n}\|^{2} + \lambda_{n}^{2}\alpha_{n}^{2\omega}\|Gy_{n} - Gw_{n}\|^{2} + 2\lambda_{n}^{2}\alpha_{n}^{\omega}\langle By_{n} - Bw_{n}, Gy_{n} - Gw_{n} \rangle \le \dfrac{\lambda_{n}^{2}\mu^{2}}{\lambda_{n+1}^{2}}\|w_{n} - y_{n}\|^{2} + \dfrac{\lambda_{n}^{2}\alpha_{n}^{2\omega}}{\gamma^{2}}\|w_{n} - y_{n}\|^{2} + \dfrac{2\lambda_{n}^{2}\mu\alpha_{n}^{\omega}}{\gamma\lambda_{n+1}}\|w_{n} - y_{n}\|^{2} \le \left(\dfrac{\mu\lambda_{n}}{\lambda_{n+1}} + \dfrac{\lambda_{1} + \tau}{\gamma}\right)^{2}\|w_{n} - y_{n}\|^{2}$. (19)
Let $t_{1}$, $t_{2}$ and $t_{3}$ be three positive numbers such that
$2\xi - kt_{1} - t_{2} - t_{3} > 0$.
By virtue of Lemma 8, $\alpha_{n} \to 0$ and $\frac{\epsilon_{i,n}}{\alpha_{n}} \to 0$, there exists $n_{0} \ge 1$ such that, for all $n \ge n_{0}$,
$1 - \dfrac{\alpha_{n}\lambda_{n}k}{t_{1}} - \left(\dfrac{\mu\lambda_{n}}{\lambda_{n+1}} + \dfrac{\lambda_{1} + \tau}{\gamma}\right)^{2} > 0, \quad 1 - t_{3}\alpha_{n}\lambda_{n} > 0, \quad \sum_{i=1}^{N}\epsilon_{i,n} \le t_{3}\lambda_{n}\alpha_{n}$.
Because $F$ is strongly monotone and $k$-Lipschitz continuous, by the Cauchy–Schwarz and Young inequalities,
$\langle y_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle = \langle w_{n} - x_{\alpha_{n}}, Fw_{n} - Fx_{\alpha_{n}} \rangle + \langle y_{n} - w_{n}, Fw_{n} - Fx_{\alpha_{n}} \rangle \ge \xi\|w_{n} - x_{\alpha_{n}}\|^{2} - k\|w_{n} - x_{\alpha_{n}}\|\,\|y_{n} - w_{n}\| \ge \xi\|w_{n} - x_{\alpha_{n}}\|^{2} - \dfrac{kt_{1}}{2}\|w_{n} - x_{\alpha_{n}}\|^{2} - \dfrac{k}{2t_{1}}\|y_{n} - w_{n}\|^{2} = \left(\xi - \dfrac{kt_{1}}{2}\right)\|w_{n} - x_{\alpha_{n}}\|^{2} - \dfrac{k}{2t_{1}}\|y_{n} - w_{n}\|^{2}$. (20)
In view of (18)–(20), we get
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \big(1 - \alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2} - \left(1 - \dfrac{\alpha_{n}\lambda_{n}k}{t_{1}} - \left(\dfrac{\lambda_{1} + \tau}{\gamma} + \dfrac{\mu\lambda_{n}}{\lambda_{n+1}}\right)^{2}\right)\|w_{n} - y_{n}\|^{2}$, (21)
which implies
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \big(1 - \alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2}, \quad \forall n \ge n_{0}$. (22)
By Lemma 6 and the Cauchy–Schwarz and Young inequalities, for all $n \ge n_{0}$ we have
$\|x_{n+1} - x_{\alpha_{n+1}}\|^{2} = \|x_{n+1} - x_{\alpha_{n}}\|^{2} + 2\langle x_{\alpha_{n}} - x_{\alpha_{n+1}}, x_{n+1} - x_{\alpha_{n}} \rangle + \|x_{\alpha_{n}} - x_{\alpha_{n+1}}\|^{2} \le \|x_{n+1} - x_{\alpha_{n}}\|^{2} + 2\|x_{\alpha_{n}} - x_{\alpha_{n+1}}\|\,\|x_{n+1} - x_{\alpha_{n}}\| + \|x_{\alpha_{n}} - x_{\alpha_{n+1}}\|^{2} \le \left(1 + \dfrac{1}{t_{2}\alpha_{n}\lambda_{n}}\right)\|x_{\alpha_{n}} - x_{\alpha_{n+1}}\|^{2} + (1 + t_{2}\alpha_{n}\lambda_{n})\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \left(1 + \dfrac{1}{t_{2}\alpha_{n}\lambda_{n}}\right)\left(\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n}\alpha_{n+1}}\right)^{2}M^{2} + (1 + t_{2}\alpha_{n}\lambda_{n})\|x_{n+1} - x_{\alpha_{n}}\|^{2}$, (23)
where $M$ appears in Lemma 6. Substituting (22) into (23), for all $n \ge n_{0}$ we deduce
$\|x_{n+1} - x_{\alpha_{n+1}}\|^{2} \le (1 + t_{2}\alpha_{n}\lambda_{n})\big(1 - \alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2} + \left(1 + \dfrac{1}{t_{2}\alpha_{n}\lambda_{n}}\right)\left(\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n}\alpha_{n+1}}\right)^{2}M^{2} = \big(1 - (2\xi - kt_{1} - t_{2})\alpha_{n}\lambda_{n} - (2\xi - kt_{1})t_{2}\alpha_{n}^{2}\lambda_{n}^{2}\big)\|w_{n} - x_{\alpha_{n}}\|^{2} + \left(1 + \dfrac{1}{t_{2}\alpha_{n}\lambda_{n}}\right)\left(\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n}\alpha_{n+1}}\right)^{2}M^{2} \le \big(1 - (2\xi - kt_{1} - t_{2})\alpha_{n}\lambda_{n}\big)\|w_{n} - x_{\alpha_{n}}\|^{2} + \dfrac{(1 + t_{2}\alpha_{n}\lambda_{n})(\alpha_{n+1} - \alpha_{n})^{2}}{t_{2}\lambda_{n}\alpha_{n}^{3}\alpha_{n+1}^{2}}M^{2}$. (24)
In view of the definition of $w_{n}$, for all $n \ge n_{0}$,
$\|w_{n} - x_{\alpha_{n}}\|^{2} = \left\|x_{n} + \sum_{i=1}^{N}\theta_{i,n}(x_{n-i+1} - x_{n-i}) - x_{\alpha_{n}}\right\|^{2} \le \left(\|x_{n} - x_{\alpha_{n}}\| + \sum_{i=1}^{N}\theta_{i,n}\|x_{n-i+1} - x_{n-i}\|\right)^{2} = \|x_{n} - x_{\alpha_{n}}\|^{2} + \sum_{i=1}^{N}\theta_{i,n}^{2}\|x_{n-i+1} - x_{n-i}\|^{2} + 2\sum_{1 \le i < j \le N}\theta_{i,n}\theta_{j,n}\|x_{n-i+1} - x_{n-i}\|\,\|x_{n-j+1} - x_{n-j}\| + 2\sum_{i=1}^{N}\theta_{i,n}\|x_{n} - x_{\alpha_{n}}\|\,\|x_{n-i+1} - x_{n-i}\| \le \left(1 + \sum_{i=1}^{N}\epsilon_{i,n}\right)\|x_{n} - x_{\alpha_{n}}\|^{2} + \sum_{i=1}^{N}\epsilon_{i,n}^{2} + \sum_{i=1}^{N}\epsilon_{i,n} + 2\sum_{1 \le i < j \le N}\epsilon_{i,n}\epsilon_{j,n} = \left(1 + \sum_{i=1}^{N}\epsilon_{i,n}\right)\|x_{n} - x_{\alpha_{n}}\|^{2} + \bar{\epsilon}_{n} \le (1 + t_{3}\alpha_{n}\lambda_{n})\|x_{n} - x_{\alpha_{n}}\|^{2} + \bar{\epsilon}_{n}$, (25)
where $\bar{\epsilon}_{n} = \sum_{i=1}^{N}\epsilon_{i,n}^{2} + \sum_{i=1}^{N}\epsilon_{i,n} + 2\sum_{1 \le i < j \le N}\epsilon_{i,n}\epsilon_{j,n}$. The condition on $\{\epsilon_{i,n}\}$ implies that $\lim_{n\to\infty}\frac{\bar{\epsilon}_{n}}{\alpha_{n}} = 0$. Substituting (25) into (24), for all $n \ge n_{0}$,
$\|x_{n+1} - x_{\alpha_{n+1}}\|^{2} \le \big(1 - (2\xi - kt_{1} - t_{2})\alpha_{n}\lambda_{n}\big)(1 + t_{3}\alpha_{n}\lambda_{n})\|x_{n} - x_{\alpha_{n}}\|^{2} + \dfrac{(1 + t_{2}\alpha_{n}\lambda_{n})(\alpha_{n+1} - \alpha_{n})^{2}}{t_{2}\lambda_{n}\alpha_{n}^{3}\alpha_{n+1}^{2}}M^{2} + \bar{\epsilon}_{n} \le \big(1 - (2\xi - kt_{1} - t_{2} - t_{3})\alpha_{n}\lambda_{n}\big)\|x_{n} - x_{\alpha_{n}}\|^{2} + (2\xi - kt_{1} - t_{2} - t_{3})\alpha_{n}\lambda_{n}\cdot\dfrac{(1 + t_{2}\alpha_{n}\lambda_{n})(\alpha_{n+1} - \alpha_{n})^{2}}{(2\xi - kt_{1} - t_{2} - t_{3})t_{2}\lambda_{n}^{2}\alpha_{n}^{4}\alpha_{n+1}^{2}}M^{2} + \bar{\epsilon}_{n} \le (1 - \varphi_{n})\|x_{n} - x_{\alpha_{n}}\|^{2} + \varphi_{n}\left(M'\left(\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n+1}\alpha_{n}^{2}}\right)^{2} + \dfrac{\bar{\epsilon}_{n}}{\varphi_{n}}\right) = (1 - \varphi_{n})\|x_{n} - x_{\alpha_{n}}\|^{2} + \varphi_{n}\zeta_{n}$, (26)
where $\varphi_{n} = (2\xi - kt_{1} - t_{2} - t_{3})\alpha_{n}\lambda_{n}$, $M' = \sup_{n\in\mathbb{N}}\dfrac{(1 + t_{2}\alpha_{n}\lambda_{n})M^{2}}{(2\xi - kt_{1} - t_{2} - t_{3})t_{2}\lambda_{n}^{2}}$ is positive and finite, and $\zeta_{n} = M'\left(\dfrac{|\alpha_{n+1} - \alpha_{n}|}{\alpha_{n+1}\alpha_{n}^{2}}\right)^{2} + \dfrac{\bar{\epsilon}_{n}}{\varphi_{n}}$. By the constraints on $\{\lambda_{n}\}$ and $\{\alpha_{n}\}$, we know that $\varphi_{n} \to 0$, $\sum_{n=1}^{\infty}\varphi_{n} = \infty$, and $\zeta_{n} \to 0$. We deduce from Lemma 2 that $\|x_{n} - x_{\alpha_{n}}\|^{2} \to 0$ as $n \to \infty$; since $x_{\alpha_{n}} \to x^{\S}$ by Lemma 7 and Remark 1, it follows that $x_{n} \to x^{\S}$. □
Theorem 2.
If conditions (A1)–(A5) hold, $x^{\S}$ is the unique solution of problem (2), and the sequence $\{x_{n}\}$ is generated by Algorithm 2, then $x_{n}$ converges strongly to $x^{\S}$.
Proof.
By Lemma 8, $\lim_{n\to\infty}\left(1 - \frac{\mu\lambda_{n}}{\lambda_{n+1}} - \frac{\lambda_{1} + \tau}{\gamma}\right) = 1 - \mu - \frac{\lambda_{1} + \tau}{\gamma} > 0$, so there exist $\delta > 0$ and $n_{0} \ge 1$ such that $1 - \frac{\mu\lambda_{n}}{\lambda_{n+1}} - \frac{\lambda_{1} + \tau}{\gamma} > \delta > 0$ for all $n \ge n_{0}$. We can also obtain $\lim_{n\to\infty}\left(1 + \frac{\mu\lambda_{n}}{\lambda_{n+1}} + \frac{\lambda_{1} + \tau}{\gamma}\right) = 1 + \mu + \frac{\lambda_{1} + \tau}{\gamma} > 0$, so $1 + \frac{\mu\lambda_{n}}{\lambda_{n+1}} + \frac{\lambda_{1} + \tau}{\gamma}$ is bounded. We will use the letter $V$ to denote $\sup_{n\in\mathbb{N}}\left(1 + \frac{\mu\lambda_{n}}{\lambda_{n+1}} + \frac{\lambda_{1} + \tau}{\gamma}\right)$; obviously $V > 0$.
In the remainder of the proof, we assume that $n \ge n_{0}$. Setting $s_{n} = By_{n} - Bw_{n} + \alpha_{n}^{\omega}Gy_{n} - \alpha_{n}^{\omega}Gw_{n}$, so that $h_{n} = w_{n} - y_{n} + \lambda_{n}s_{n}$, we have
$\|h_{n}\| \ge \|w_{n} - y_{n}\| - \lambda_{n}\|Bw_{n} - By_{n}\| - \lambda_{n}\alpha_{n}^{\omega}\|Gw_{n} - Gy_{n}\| \ge \|w_{n} - y_{n}\| - \dfrac{\mu\lambda_{n}}{\lambda_{n+1}}\|w_{n} - y_{n}\| - \dfrac{\lambda_{1} + \tau}{\gamma}\|w_{n} - y_{n}\| = \left(1 - \dfrac{\mu\lambda_{n}}{\lambda_{n+1}} - \dfrac{\lambda_{1} + \tau}{\gamma}\right)\|w_{n} - y_{n}\| \ge \delta\|w_{n} - y_{n}\|$. (27)
In the meantime,
$\|h_{n}\| \le \|w_{n} - y_{n}\| + \lambda_{n}\|s_{n}\| \le \|w_{n} - y_{n}\| + \lambda_{n}\big(\|Bw_{n} - By_{n}\| + \alpha_{n}^{\omega}\|Gw_{n} - Gy_{n}\|\big) \le \left(1 + \dfrac{\mu\lambda_{n}}{\lambda_{n+1}} + \dfrac{\lambda_{1} + \tau}{\gamma}\right)\|w_{n} - y_{n}\| \le V\|w_{n} - y_{n}\|$. (28)
In particular, for any $n \ge n_{0}$, $w_{n} = y_{n}$ is equivalent to $h_{n} = 0$. Since
$\phi(w_{n}, y_{n}) = \langle w_{n} - y_{n}, w_{n} - y_{n} + \lambda_{n}s_{n} \rangle = \|w_{n} - y_{n}\|^{2} - \langle w_{n} - y_{n}, \lambda_{n}(Bw_{n} - By_{n}) + \alpha_{n}^{\omega}\lambda_{n}(Gw_{n} - Gy_{n}) \rangle \ge \|w_{n} - y_{n}\|^{2} - \dfrac{\mu\lambda_{n}}{\lambda_{n+1}}\|w_{n} - y_{n}\|^{2} - \dfrac{\lambda_{1} + \tau}{\gamma}\|w_{n} - y_{n}\|^{2} = \left(1 - \dfrac{\mu\lambda_{n}}{\lambda_{n+1}} - \dfrac{\lambda_{1} + \tau}{\gamma}\right)\|w_{n} - y_{n}\|^{2} \ge \delta\|w_{n} - y_{n}\|^{2}$, (29)
combining (28) and (29), if $h_{n} \neq 0$, then
$\beta_{n} = \dfrac{\phi(w_{n}, y_{n})}{\|h_{n}\|^{2}} \ge \dfrac{\delta}{V^{2}} > 0$;
hence $\beta_{n} \ge \min\left\{\beta, \dfrac{\delta}{V^{2}}\right\} > 0$. Then observe that
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} = \|w_{n} - x_{\alpha_{n}} - r\beta_{n}h_{n}\|^{2} = \|w_{n} - x_{\alpha_{n}}\|^{2} + r^{2}\beta_{n}^{2}\|h_{n}\|^{2} - 2r\beta_{n}\langle w_{n} - x_{\alpha_{n}}, h_{n} \rangle$. (30)
By the definition of $\beta_{n}$,
$\phi(w_{n}, y_{n}) = \beta_{n}\|h_{n}\|^{2}$,
which together with (30) implies
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} = \|w_{n} - x_{\alpha_{n}}\|^{2} + r^{2}\beta_{n}\phi(w_{n}, y_{n}) - 2r\beta_{n}\langle w_{n} - x_{\alpha_{n}}, h_{n} \rangle$. (31)
By the definition of $h_{n}$,
$\phi(w_{n}, y_{n}) = \|w_{n} - y_{n}\|^{2} + \lambda_{n}\langle w_{n} - y_{n}, s_{n} \rangle$ (32)
and
$\langle w_{n} - x_{\alpha_{n}}, h_{n} \rangle = \langle w_{n} - x_{\alpha_{n}}, w_{n} - y_{n} \rangle + \lambda_{n}\langle w_{n} - y_{n}, s_{n} \rangle + \lambda_{n}\langle y_{n} - x_{\alpha_{n}}, s_{n} \rangle$. (33)
Substituting (32) and (33) into (31), we infer
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} = \|w_{n} - x_{\alpha_{n}}\|^{2} + r^{2}\beta_{n}\|w_{n} - y_{n}\|^{2} - (2 - r)r\beta_{n}\lambda_{n}\langle w_{n} - y_{n}, s_{n} \rangle - 2r\beta_{n}\lambda_{n}\langle y_{n} - x_{\alpha_{n}}, s_{n} \rangle - 2r\beta_{n}\langle w_{n} - x_{\alpha_{n}}, w_{n} - y_{n} \rangle$. (34)
Then, by the properties of $B$ and $G$, we infer that
$\langle w_{n} - y_{n}, s_{n} \rangle = -\langle w_{n} - y_{n}, Bw_{n} - By_{n} \rangle - \alpha_{n}^{\omega}\langle w_{n} - y_{n}, Gw_{n} - Gy_{n} \rangle \ge -\left(\dfrac{\mu}{\lambda_{n+1}} + \dfrac{1}{\gamma}\right)\|w_{n} - y_{n}\|^{2}$. (35)
Using the same method as in Theorem 1, with $t_{1} \in \left(0, \frac{2\xi}{k}\right)$, we get
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \big(1 - r\beta_{n}\alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2} - r\beta_{n}\left(2 - r - (2 - r)\lambda_{n}\left(\dfrac{\mu}{\lambda_{n+1}} + \dfrac{1}{\gamma}\right) - \dfrac{\alpha_{n}k\lambda_{n}}{t_{1}}\right)\|w_{n} - y_{n}\|^{2} \le \big(1 - r\beta_{n}\alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2} - r\beta_{n}\left((2 - r)\left(1 - \dfrac{\mu\lambda_{n}}{\lambda_{n+1}} - \dfrac{\lambda_{1} + \tau}{\gamma}\right) - \dfrac{\alpha_{n}k\lambda_{n}}{t_{1}}\right)\|w_{n} - y_{n}\|^{2} \le \big(1 - r\beta_{n}\alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2} - r\beta_{n}\left((2 - r)\delta - \dfrac{\alpha_{n}k\lambda_{n}}{t_{1}}\right)\|w_{n} - y_{n}\|^{2}$. (36)
Since $\alpha_{n} \to 0$, we may assume that $(2 - r)\delta - \frac{\alpha_{n}k\lambda_{n}}{t_{1}} > 0$ for all $n \ge n_{0}$. Hence
$\|x_{n+1} - x_{\alpha_{n}}\|^{2} \le \big(1 - r\beta_{n}\alpha_{n}\lambda_{n}(2\xi - kt_{1})\big)\|w_{n} - x_{\alpha_{n}}\|^{2}$. (37)
The rest of the proof is the same as that of Theorem 1. □

4. Numerical Experiments

Three examples are given to show the performance of our algorithms. When the inertial coefficients are equal to zero, we write MFBMR and MPCMR for Algorithms 1 and 2, respectively. We denote Algorithm 1 with $N = 1, 2, 3$ by MIFBMR, 2-MMIFBMR and 3-MMIFBMR, respectively, and similarly denote Algorithm 2 with $N = 1, 2, 3$ by MIPCMR, 2-MMIPCMR and 3-MMIPCMR. All the programs are written in Matlab 9.0 and run on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz 1.19 GHz and 16.0 GB RAM.
Example 1.
Suppose $H = \mathbb{R}$. Let $A: \mathbb{R} \to 2^{\mathbb{R}}$ be a mapping defined as
$Ax := \dfrac{1}{4}x, \quad x \in \mathbb{R}$,
and $B: \mathbb{R} \to \mathbb{R}$ as
$Bx := x\arctan x - \dfrac{1}{2}\ln(1 + x^{2}) + \dfrac{\pi}{2}x, \quad x \in \mathbb{R}$.
Set the mapping $G: \mathbb{R} \to \mathbb{R}$ as
$Gx := x - \sin x, \quad x \in \mathbb{R}$.
It is obvious that $A$ is maximally monotone. One can prove that $B$ is monotone and Lipschitz continuous, and a calculation shows that $G$ is $\frac{1}{2}$-inverse strongly monotone. Let $F = 0.4I$.
Choose $\theta_{i} = 0.1$, $x_{0} = 1$ and $\epsilon_{i,n} = n^{-2}$ for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. Choose $x_{1} = 1$, $\omega = 0.6$, $\lambda_{1} = 0.08$, $\mu = 0.6$, $\tau_{n} = \frac{0.1}{(n+1)^{4}}$ and $\alpha_{n} = n^{-1/3}$ for each algorithm. Choose $r = 1$, $\beta = 2$ for MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. It is obvious that $\Omega = \{0\}$, so $x^{\S} = 0$ is the unique solution of problem (2). The numerical results of this example are presented in Figure 1 and Figure 2.
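Since the resolvent of $Ax = \frac{1}{4}x$ has the closed form $J_{\lambda}^{A}x = \frac{x}{1 + \lambda/4}$, Example 1 can be run directly with the algorithm1 sketch from Section 3 (illustrative code with the stated parameters, which are the sketch's defaults):

JA = lambda x, lam: x / (1.0 + lam / 4.0)   # J_lam^A for A x = x/4
B1 = lambda x: x * np.arctan(x) - 0.5 * np.log(1 + x**2) + np.pi / 2 * x
G1 = lambda x: x - np.sin(x)
F1 = lambda x: 0.4 * x

x = algorithm1(JA, B1, G1, F1, x0=np.array([1.0]), x1=np.array([1.0]))
print(x)   # approaches the unique solution x^§ = 0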
Example 2.
Let $H = \mathbb{R}^{s}$ and $F = I$. Let $A: \mathbb{R}^{s} \to 2^{\mathbb{R}^{s}}$ be defined by
$Ax := \{Jx\}, \quad x \in \mathbb{R}^{s}$,
where $J \in \mathbb{R}^{s\times s}$ is the upper triangular matrix whose nonzero elements are all 1. Let $B: \mathbb{R}^{s} \to \mathbb{R}^{s}$ be a mapping defined as
$Bx := Ex, \quad x \in \mathbb{R}^{s}$,
where
$E = CC^{T} + S + D$;
here $C$ is a matrix, $S$ is a skew-symmetric matrix, and $D$ is a diagonal matrix whose diagonal entries are positive, all in $\mathbb{R}^{s\times s}$. Therefore $E$ is positive definite. Obviously, $B$ is monotone and Lipschitz continuous. Define $G: \mathbb{R}^{s} \to \mathbb{R}^{s}$ as
$Gx := x - \dfrac{1}{\|Q\|}Qx, \quad x \in \mathbb{R}^{s}$,
where $Q$ is a nonzero matrix in $\mathbb{R}^{s\times s}$. A calculation shows that $G$ is $\frac{1}{2}$-inverse strongly monotone.
Choose $x_{0} = (1, 1, \dots, 1)^{T}$, $\epsilon_{i,n} = n^{-2}$ and $\theta_{i} = 0.1$ for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. Choose $x_{1} = (1, 1, \dots, 1)^{T}$, $\omega = 0.5$, $\mu = 0.5$, $\lambda_{1} = 0.2$, $\tau_{n} = \frac{0.1}{(n+1)^{4}}$ and $\alpha_{n} = n^{-1/4}$ for each algorithm. Choose $r = 1$, $\beta = 2$ for MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. All the diagonal elements of $D$ are taken arbitrarily in $(0, 2)$, and the elements of $C$, $S$ and $Q$ are generated randomly in $(-2, 2)$, $(-2, 2)$ and $(0, 1)$, respectively. It is obvious that $\Omega = \{(0, 0, \dots, 0)^{T}\}$, and hence the solution $x^{\S} = (0, 0, \dots, 0)^{T}$ of (2) is unique. The numerical results are presented in Figure 3 and Figure 4.
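The random data of Example 2 can be generated as follows (a sketch, with $G$ as reconstructed above; $E$ is positive definite because $CC^{T}$ is positive semidefinite, the skew-symmetric $S$ contributes nothing to $\langle Ex, x \rangle$, and $D$ has positive diagonal):

s = 10
rng = np.random.default_rng(0)
C = rng.uniform(-2, 2, (s, s))
M = rng.uniform(-2, 2, (s, s)); S = (M - M.T) / 2   # one way to get a skew-symmetric matrix
D = np.diag(rng.uniform(0, 2, s))                   # positive diagonal entries
E = C @ C.T + S + D                                 # positive definite
Q = rng.uniform(0, 1, (s, s))

B2 = lambda x: E @ x
G2 = lambda x: x - Q @ x / np.linalg.norm(Q, 2)     # G = I - Q/||Q||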
Example 3.
Let $H = \mathbb{R}^{2}$. Let $A: \mathbb{R}^{2} \to 2^{\mathbb{R}^{2}}$ be a mapping defined as
$A(u, v)^{T} := \begin{pmatrix} 2 & 5 \\ 5 & 13 \end{pmatrix}(u, v)^{T}, \quad (u, v)^{T} \in \mathbb{R}^{2}$,
$B: \mathbb{R}^{2} \to \mathbb{R}^{2}$ be a mapping defined as
$B(u, v)^{T} := (u + v + \sin u, -u + v + \sin v)^{T}, \quad (u, v)^{T} \in \mathbb{R}^{2}$,
and $F: \mathbb{R}^{2} \to \mathbb{R}^{2}$ be a mapping defined as
$F(u, v)^{T} := (2u + 2v + \sin u, -2u + 2v + \sin v)^{T}, \quad (u, v)^{T} \in \mathbb{R}^{2}$.
Define $G: \mathbb{R}^{2} \to \mathbb{R}^{2}$ as
$G(u, v)^{T} := \dfrac{3}{28}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}(u, v)^{T}, \quad (u, v)^{T} \in \mathbb{R}^{2}$.
One can check that $B$ is monotone and $\sqrt{10}$-Lipschitz continuous, and that $F$ is 1-strongly monotone and $\sqrt{26}$-Lipschitz continuous. A calculation shows that $G$ is 2-inverse strongly monotone. Choose $\theta_{i} = 0.1$, $x_{0} = (1, 1)^{T}$ and $\epsilon_{i,n} = n^{-2}$ for MIFBMR, 2-MMIFBMR, 3-MMIFBMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. Choose $x_{1} = (1, 1)^{T}$, $\omega = 0.8$, $\lambda_{1} = 0.05$, $\mu = 0.2$, $\tau_{n} = \frac{0.1}{(n+1)^{6}}$ and $\alpha_{n} = n^{-2/5}$ for each algorithm. Choose $r = 1$, $\beta = 2$ for MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR. It is obvious that $\Omega = \{(0, 0)^{T}\}$ and $x^{\S} = (0, 0)^{T}$ is the only solution of problem (2). The numerical results are presented in Figure 5, Figure 6, Figure 7 and Figure 8.
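The claimed constants for $B$ can be sanity-checked numerically: the symmetric part of its Jacobian is $\mathrm{diag}(1 + \cos u, 1 + \cos v) \succeq 0$ (monotonicity), and the spectral norm of the Jacobian never exceeds $\sqrt{10}$ (Lipschitz continuity). A small illustrative sketch:

def jac_B(u, v):   # Jacobian of B(u, v) = (u + v + sin u, -u + v + sin v)
    return np.array([[1 + np.cos(u), 1.0], [-1.0, 1 + np.cos(v)]])

worst = 0.0
for u in np.linspace(-10, 10, 101):
    for v in np.linspace(-10, 10, 101):
        Jm = jac_B(u, v)
        sym = (Jm + Jm.T) / 2
        assert np.linalg.eigvalsh(sym).min() >= -1e-12   # monotone
        worst = max(worst, np.linalg.norm(Jm, 2))
print(worst, np.sqrt(10))   # worst-case ||J|| (= sqrt(5)) stays below sqrt(10)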
Remark 2.
In Algorithms 1 and 2, the values of $L$, $k$ and $\xi$ do not need to be known.

5. Conclusions

We have introduced two improved regularized algorithms with multi-step inertia to solve the variational inclusion and null point problem in Hilbert spaces. We obtain strong convergence without the inverse strong monotonicity assumption on $B$. Another advantage of our algorithms is that the stepsizes do not require the Lipschitz constant of the operator. In addition, the values of $k$, $L$, and $\xi$ are not needed in the calculation process, and the conditions on $\alpha_{n}$ may look restrictive but are easy to satisfy, e.g., $\alpha_{n} = n^{-p}$ with $0 < p < 1/2$. Finally, the feasibility and effectiveness of our algorithms can be seen in the figures of the numerical experiments. An open question is how to obtain strong convergence under weaker conditions; we will study this issue in future work.

Author Contributions

Conceptualization, M.L.; Methodology, Y.W. and B.J.; Validation, Y.W., M.L. and C.Y.; Formal analysis, Y.W.; Investigation, M.L. and C.Y.; Resources, C.Y. and B.J.; Data curation, C.Y. and B.J.; Writing—original draft, M.L.; Writing—review & editing, B.J.; Project administration, Y.W., M.L. and C.Y.; Funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant no. 11671365).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
  2. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 2004, 57, 1413–1457.
  3. Duchi, J.; Singer, Y. Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 2009, 10, 2899–2934.
  4. Raguet, H.; Fadili, J.; Peyré, G. A generalized forward-backward splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226.
  5. Dilshad, M.; Aljohani, A.F.; Akram, M.; Khidir, A.A. Yosida approximation iterative methods for split monotone variational inclusion problems. J. Funct. Space 2022, 2022, 3667813.
  6. Abubakar, J.; Kumam, P.; Garba, A.I.; Abdullahi, M.S.; Ibrahim, A.H.; Sitthithakerngkiet, K. An inertial iterative scheme for solving variational inclusion with application to Nash-Cournot equilibrium and image restoration problems. Carpathian J. Math. 2021, 37, 361–380.
  7. Okeke, C.C.; Izuchukwu, C.; Mewomo, O.T. Strong convergence results for convex minimization and monotone variational inclusion problems in Hilbert space. Rend. Circ. Mat. Palermo Ser. 2 2020, 69, 675–693.
  8. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  9. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  10. He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76.
  11. Zhang, C.; Wang, Y. Proximal algorithm for solving monotone variational inclusion. Optimization 2018, 67, 1197–1209.
  12. Hieu, D.V.; Anh, P.K.; Ha, N.H. Regularization proximal method for monotone variational inclusions. Netw. Spat. Econ. 2021, 21, 905–932.
  13. Song, Y.; Bazighifan, O. Modified inertial subgradient extragradient method with regularization for variational inequality and null point problems. Mathematics 2022, 10, 2367.
  14. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
  15. Wang, Y.; Yuan, M.; Jiang, B. Multi-step inertial hybrid and shrinking Tseng's algorithm with Meir-Keeler contractions for variational inclusion problems. Mathematics 2021, 9, 1548.
  16. Cholamjiak, P.; Hieu, D.V.; Muu, L.D. Inertial splitting methods without prior constants for solving variational inclusions of two operators. Bull. Iran. Math. Soc. 2022, 48, 3019–3045.
  17. Wang, Z.; Long, X.; Lei, Z.; Chen, Z. New self-adaptive methods with double inertial steps for solving splitting monotone variational inclusion problems with applications. Commun. Nonlinear Sci. Numer. Simul. 2022, 2022, 106656.
  18. Jiang, B.; Wang, Y.; Yao, J.C. Multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized Lipschitzian mappings. Mathematics 2021, 9, 2103.
  19. Wang, Y.; Wu, X.; Pan, C. The iterative solutions of split common fixed point problem for asymptotically nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2020, 2020, 18.
  20. Chang, S.S. The Mann and Ishikawa iterative approximation of solutions to variational inclusions with accretive type mappings. Comput. Math. Appl. 1999, 37, 17–24.
  21. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
Figure 1. Comparison of MFBMR, MIFBMR, 2-MMIFBMR and 3-MMIFBMR in Example 1.
Figure 2. Comparison of MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR in Example 1.
Figure 3. Comparison of MFBMR, MIFBMR, 2-MMIFBMR and 3-MMIFBMR in Example 2 with $s = 10$.
Figure 4. Comparison of MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR in Example 2 with $s = 10$.
Figure 5. Comparison of MFBMR, MIFBMR, 2-MMIFBMR and 3-MMIFBMR in Example 3.
Figure 6. Comparison of MPCMR, MIPCMR, 2-MMIPCMR and 3-MMIPCMR in Example 3.
Figure 7. Comparison of 2-MMIFBMR and 2-MMIPCMR in Example 3.
Figure 8. Comparison of 3-MMIFBMR and 3-MMIPCMR in Example 3.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
