Article

Modified Inertial Subgradient Extragradient Method with Regularization for Variational Inequality and Null Point Problems

by Yanlai Song 1,*,† and Omar Bazighifan 2,3,*,†
1 College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
2 Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Rome, Italy
3 Department of Mathematics, Faculty of Science, Hadhramout University, Mukalla 50512, Yemen
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(14), 2367; https://doi.org/10.3390/math10142367
Submission received: 2 May 2022 / Revised: 26 June 2022 / Accepted: 28 June 2022 / Published: 6 July 2022

Abstract: The paper develops a modified inertial subgradient extragradient method for finding a solution to the variational inequality problem over the set of common solutions to the variational inequality and null point problems. The proposed method adopts a nonmonotonic stepsize rule and requires no linesearch procedure. We describe how to combine the regularization technique with the subgradient extragradient method, and then we establish the strong convergence of the proposed method under appropriate conditions. Several numerical experiments are provided to verify the efficiency of the introduced method with respect to previous methods.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and induced norm $\| \cdot \|$, and let $C$ be a nonempty, closed, and convex subset of $H$. Let $A : H \to H$ be a nonlinear operator. Recall that the variational inequality problem (VIP) for the operator $A$ on $C$ is described as follows:

Find $v^{*} \in C$, such that $\langle A v^{*}, v - v^{*} \rangle \geq 0$ for all $v \in C$. (1)

Denote by $VI(C, A)$ the set of solutions of the VIP (1). The VIP theory, first proposed independently by Fichera [1] and Stampacchia [2], is a powerful and effective tool in mathematics. It has a vast range of applications across several fields of study, such as engineering mechanics, nonlinear equations, necessary optimality conditions, fractional differential equations, and economics [3,4,5,6,7,8,9,10,11,12,13,14,15,16].
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. A mapping $S : C \to H$ is:
(i) monotone, if $\langle S x - S y, x - y \rangle \geq 0$ for all $x, y \in C$;
(ii) $\alpha$-strongly monotone, if there exists a number $\alpha > 0$ such that $\langle S x - S y, x - y \rangle \geq \alpha \| x - y \|^{2}$ for all $x, y \in C$;
(iii) $\gamma$-inverse strongly monotone, if there exists a positive number $\gamma$ such that $\langle S x - S y, x - y \rangle \geq \gamma \| S x - S y \|^{2}$ for all $x, y \in C$;
(iv) $k$-Lipschitz continuous, if there exists $k > 0$ such that $\| S x - S y \| \leq k \| x - y \|$ for all $x, y \in C$;
(v) nonexpansive, if $\| S x - S y \| \leq \| x - y \|$ for all $x, y \in C$.
Remark 1. Notice that any $\gamma$-inverse strongly monotone mapping is $\frac{1}{\gamma}$-Lipschitz continuous, and both strongly monotone and inverse strongly monotone mappings are monotone. It is also known that the set of fixed points of a nonexpansive mapping is closed and convex.
For each $p \in H$, because $C$ is nonempty, closed, and convex, there exists a unique point in $C$, denoted by $P_C p$, which is nearest to $p$. The mapping $P_C : H \to C$ is called the metric projection from $H$ onto $C$. We give two explicit formulas for the projection of any point onto a ball and onto a half-space.
• The projection of $p$ onto a ball $B(u, r) = \{ x : \| x - u \| \leq r \}$ is computed by
  $P_{B(u,r)}(p) = u + \frac{r}{\max \{ \| p - u \|, r \}} (p - u)$.
• The projection of $p$ onto a half-space $H_{v,u} = \{ x : \langle v, x - u \rangle \leq 0 \}$ is computed by
  $P_{H_{v,u}}(p) = p - \max \left\{ \frac{\langle v, p - u \rangle}{\| v \|^{2}}, 0 \right\} v$.
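Both formulas are cheap to evaluate and are all that the algorithms below require beyond operator evaluations. A minimal NumPy sketch, with function names of our own choosing, is:

```python
import numpy as np

def project_ball(p, u, r):
    """Project p onto the ball B(u, r) = {x : ||x - u|| <= r}."""
    d = p - u
    return u + (r / max(np.linalg.norm(d), r)) * d

def project_halfspace(p, v, u):
    """Project p onto the half-space {x : <v, x - u> <= 0}."""
    t = max(np.dot(v, p - u) / np.dot(v, v), 0.0)
    return p - t * v

# Example: project the point (3, 4) onto the unit ball centered at 0.
print(project_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0))  # [0.6 0.8]
```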
The problem studied in this work is stated as follows:

Find $u^{*} \in VI(C, A) \cap S^{-1} 0$, such that $\langle F u^{*}, v - u^{*} \rangle \geq 0$ for all $v \in VI(C, A) \cap S^{-1} 0$, (2)

where $S$ is inverse-strongly monotone, $F$ is strongly monotone, and $A$ is monotone.
Recently, many iterative methods for solving variational inequality problems have been proposed and studied; see, e.g., [6,9,11,12,14,15] and the references therein. The simplest and oldest is the projected-gradient method, in which only one projection onto the feasible set is performed per iteration. Convergence of this method, however, requires a strong hypothesis: strong monotonicity or inverse strong monotonicity of $A$. To weaken this hypothesis, Korpelevich [9] suggested the following extragradient method (EGM):

$y_n = P_C (x_n - \tau A x_n), \quad x_{n+1} = P_C (x_n - \tau A y_n),$ (3)

where $A$ is monotone and $L$-Lipschitz continuous. It is known that the sequence $\{x_n\}$ generated by the EGM converges weakly to a solution of the VIP provided that $VI(C, A) \neq \emptyset$. However, the EGM needs to compute the projection onto the feasible set twice in each iteration. This may be expensive if the feasible set $C$ has a complicated structure, and it may seriously affect the efficiency of the method. In 2011, Censor et al. [15] modified the EGM and first proposed the subgradient extragradient method (SEGM) (Algorithm 1):
Algorithm 1: The subgradient extragradient algorithm (SEGM).
Initialization: Set $\tau > 0$, and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \geq 0$), compute
$y_n = P_C (x_n - \tau A x_n),$
and construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$T_n := \{ w \in H \mid \langle (x_n - \tau A x_n) - y_n, w - y_n \rangle \leq 0 \}.$
Step 2. Calculate the next iterate
$x_{n+1} = P_{T_n} (x_n - \tau A y_n).$
Step 3. If $x_n = y_n$, then stop. Otherwise, set $n := n + 1$ and return to Step 1.
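For concreteness, the following NumPy sketch implements one possible realization of the SEGM; the operator `A` and the projector `proj_C` are assumed to be user-supplied, and the toy usage at the end is our own illustration, not an experiment from the paper.

```python
import numpy as np

def segm(A, proj_C, x0, tau, n_iters=200, tol=1e-10):
    """Subgradient extragradient method: one projection onto C per
    iteration, plus a closed-form projection onto the half-space T_n."""
    x = x0
    for _ in range(n_iters):
        y = proj_C(x - tau * A(x))
        if np.linalg.norm(x - y) < tol:      # Step 3 stopping rule
            break
        v = (x - tau * A(x)) - y             # outward normal of T_n at y
        z = x - tau * A(y)
        vv = np.dot(v, v)
        t = max(np.dot(v, z - y) / vv, 0.0) if vv > 0 else 0.0
        x = z - t * v                        # x_{n+1} = P_{T_n}(x - tau*A(y))
    return x

# Toy usage: A(x) = x on C = [0, 50]; the unique solution of the VIP is 0.
print(segm(lambda x: x, lambda x: np.clip(x, 0.0, 50.0),
           np.array([10.0]), tau=0.5))
```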
The main advantage of the SEGM is that it replaces the second projection onto the convex and closed subset $C$ with a projection onto the half-space $T_n$, which can be computed by an explicit formula. Notice that both the EGM and the SEGM have been proven to possess only weak convergence in real Hilbert spaces, and strong convergence is more desirable than weak convergence in infinite-dimensional spaces. On the other hand, we point out that the stepsize of the EGM and the SEGM plays a significant role in the convergence properties of these iterative methods. A constant stepsize is often very small and slows down the convergence rate; on the contrary, variable and appropriately chosen stepsizes often yield better numerical results. Recently, Yang et al. [17] introduced a self-adaptive subgradient extragradient method (Algorithm 2) for solving the VIP.

It should be noted that strong convergence theorems can be obtained by using Algorithm 2 in real Hilbert spaces. However, the stepsize used in Algorithm 2 is monotonically decreasing, which may also affect the execution efficiency of the method. Following the ideas of the EGM, the SEGM, and the SSEGM, Tan and Qin [18] proposed the following inertial extragradient algorithm with nonmonotonic stepsizes (Algorithm 3) for solving the monotone variational inequality problem in real Hilbert spaces.
Algorithm 2: Self-adaptive subgradient extragradient method (SSEGM).
Initialization: Set $\tau_0 > 0$, $\{\alpha_n\} \subset (0, 1)$, and $\nu \in (0, 1)$, and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$, compute
$y_n = P_C (x_n - \tau_n A x_n).$
If $x_n = y_n$, then stop: $x_n$ is a solution. Otherwise, go to Step 2.
Step 2. Construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$T_n := \{ w \in H \mid \langle x_n - \tau_n A x_n - y_n, w - y_n \rangle \leq 0 \},$
and calculate
$z_n = P_{T_n} (x_n - \tau_n A y_n).$
Step 3. Compute
$x_{n+1} = \alpha_n x_0 + (1 - \alpha_n) z_n,$
where $\{\tau_n\}$ is updated by
$\tau_{n+1} = \begin{cases} \min \left\{ \dfrac{\nu (\| x_n - y_n \|^{2} + \| z_n - y_n \|^{2})}{2 \langle A x_n - A y_n, z_n - y_n \rangle}, \tau_n \right\}, & \text{if } \langle A x_n - A y_n, z_n - y_n \rangle > 0; \\ \tau_n, & \text{otherwise}. \end{cases}$
Step 4. Set $n := n + 1$ and return to Step 1.
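The distinctive ingredient of Algorithm 2 is its self-adaptive stepsize rule, which needs no knowledge of the Lipschitz constant $L$. A sketch of that single update (function name and argument convention are ours) reads:

```python
import numpy as np

def ssegm_stepsize(tau, nu, x, y, z, Ax, Ay):
    """Self-adaptive update of Algorithm 2: taking the min with the
    current tau makes the stepsize sequence monotonically nonincreasing."""
    denom = 2.0 * np.dot(Ax - Ay, z - y)
    if denom > 0:
        num = nu * (np.linalg.norm(x - y) ** 2 + np.linalg.norm(z - y) ** 2)
        return min(num / denom, tau)
    return tau
```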
Under suitable conditions, they proved that the iterative sequence $\{x_n\}$ constructed by Algorithm 3 converges strongly to a solution of the VIP (1).
Algorithm 3: Self-adaptive viscosity-type inertial subgradient extragradient method with nonmonotonic stepsizes (SSEGMN).
Initialization: Set $\tau_1 > 0$, $\{\epsilon_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, 1)$, $\nu \in (0, 1)$, and $\varrho \in (0, +\infty)$; choose a nonnegative real sequence $\{\xi_n\}$ with $\sum_{n=1}^{\infty} \xi_n < \infty$ and a contraction $f : H \to H$; and let $x_0, x_1 \in H$ be arbitrary.
Step 1. Given $x_{n-1}$ and $x_n$ ($n \geq 1$), compute
$u_n = x_n + \mu_n (x_n - x_{n-1}),$
where
$\mu_n = \begin{cases} \min \left\{ \dfrac{\epsilon_n}{\| x_n - x_{n-1} \|}, \varrho \right\}, & \text{if } x_n \neq x_{n-1}; \\ \varrho, & \text{otherwise}. \end{cases}$
Step 2. Compute
$y_n = P_C (u_n - \tau_n A u_n),$
and construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$T_n := \{ w \in H \mid \langle u_n - \tau_n A u_n - y_n, w - y_n \rangle \leq 0 \}.$
Step 3. Compute
$z_n = P_{T_n} (u_n - \tau_n A y_n).$
Step 4. Compute
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n,$
where $\{\tau_n\}$ is updated by
$\tau_{n+1} = \begin{cases} \min \left\{ \dfrac{\nu \| u_n - y_n \|}{\| A u_n - A y_n \|}, \tau_n + \xi_n \right\}, & \text{if } A u_n \neq A y_n; \\ \tau_n + \xi_n, & \text{otherwise}. \end{cases}$
Step 5. Set $n := n + 1$ and return to Step 1.
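For comparison with the rule of Algorithm 2, here is a sketch of the nonmonotonic update of Algorithm 3 (the function name is ours): the summable slack $\xi_n$ is precisely what allows $\tau_{n+1} > \tau_n$, so the stepsize is not forced to decrease.

```python
import numpy as np

def ssegmn_stepsize(tau, nu, xi, u, y, Au, Ay):
    """Nonmonotonic update of Algorithm 3: tau may grow by the summable
    slack xi at each iteration instead of only shrinking."""
    d = np.linalg.norm(Au - Ay)
    if d > 0:
        return min(nu * np.linalg.norm(u - y) / d, tau + xi)
    return tau + xi
```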
Motivated by Korpelevich [9], Censor et al. [15], Yang et al. [17], and Tan and Qin [18], we introduce a modified inertial subgradient extragradient method with a nonmonotonic stepsize rule for finding a solution to the variational inequality problem over the set of common solutions to the variational inequality and null point problems. In contrast to the Halpern iteration and the viscosity method, the strong convergence of the new method rests on a regularization technique. We also give some numerical examples to illustrate the efficiency of the proposed method over some existing ones.

2. Preliminaries

In this work, we denote the strong and weak convergence of $\{x_n\}$ to $x$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. We denote by $\mathrm{Fix}(S)$ the set of fixed points of a mapping $S : C \to H$ (i.e., $\mathrm{Fix}(S) = \{ x \in C : S x = x \}$). We use $\omega(x_n) = \{ x : \exists \{x_{n_j}\} \subset \{x_n\} \text{ with } x_{n_j} \rightharpoonup x \}$ to denote the weak $\omega$-limit set of $\{x_n\}$. In view of Alaoglu's theorem, each bounded sequence $\{x_n\}$ in $H$ has a weakly convergent subsequence (see [19]).
Lemma 1 ([20]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For $u \in H$ and $v \in C$, $v = P_C u$ if and only if $\langle u - v, w - v \rangle \leq 0$ for all $w \in C$.

Lemma 2 ([19]). Assume $C$ is a nonempty closed convex subset of a real Hilbert space $H$, and $S : C \to H$ is a nonexpansive mapping. Then, the mapping $I - S$ is demiclosed; i.e., if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - S) x_n \to y$, then $(I - S) x = y$.

Lemma 3 ([19]). Let $\{x_n\}$ be a sequence in $H$. If $x_n \rightharpoonup x$ and $\| x_n \| \to \| x \|$, then $x_n \to x$.

Lemma 4 ([21]). Suppose $A : C \to H$ is a hemicontinuous and (pseudo)monotone operator; then, $v^{*}$ is a solution of the VIP (1) if and only if $v^{*}$ is a solution of the following problem:

find $v^{*} \in C$ such that $\langle A v, v - v^{*} \rangle \geq 0$ for all $v \in C$.

Lemma 5 ([22,23]). Let $\{a_n\}$ be a sequence of nonnegative real numbers. Assume that

$a_{n+1} \leq (1 - \beta_n) a_n + \gamma_n, \quad n \in \mathbb{N},$

where $\{\beta_n\}$ and $\{\gamma_n\}$ satisfy the conditions
(i) $\sum_{n=1}^{\infty} \beta_n = \infty$ and $\lim_{n \to \infty} \beta_n = 0$;
(ii) $\limsup_{n \to \infty} \gamma_n / \beta_n \leq 0$.
Then, $\lim_{n \to \infty} a_n = 0$.

3. Main Results

In this section, let $C$ be a nonempty, closed, and convex subset of $H$; let $S : H \to H$ be $\rho$-inverse-strongly monotone; let $F : H \to H$ be $\gamma$-strongly monotone and $k$-Lipschitz continuous; and let $A : H \to H$ be $L$-Lipschitz continuous and monotone with $\Omega := VI(C, A) \cap S^{-1} 0 \neq \emptyset$. It is easily seen that the mapping $I - \iota S$ is nonexpansive for any $\iota \in (0, 2\rho]$ (see also [24]). Furthermore, one finds from Remark 1 that $S^{-1} 0 = \mathrm{Fix}(I - \iota S)$ is closed and convex. Since $VI(C, A)$ is closed and convex, $\Omega$ is also closed and convex. Because $F$ is strongly monotone, these facts ensure the uniqueness of the solution, denoted $x^{\dagger}$, of problem (2). Let $\mu \in (0, 1)$ be a fixed real number. We now construct a regularized solution $x_{\alpha}$ for problem (2) by solving the following general variational inequality problem:

Find $x_{\alpha} \in C$, such that $\langle A x_{\alpha} + \alpha^{\mu} S x_{\alpha} + \alpha F x_{\alpha}, v - x_{\alpha} \rangle \geq 0$ for all $v \in C$. (6)
The following lemmas play an important role in proving the main results.
Lemma 6. For each $\alpha > 0$, problem (6) has a unique solution $x_{\alpha}$.

Proof. Since $A$ is monotone and Lipschitz continuous, $S$ is inverse-strongly monotone (hence monotone and Lipschitz continuous), and $F$ is strongly monotone and Lipschitz continuous, we deduce that $A + \alpha^{\mu} S + \alpha F$ is strongly monotone and Lipschitz continuous. It is well known that if a mapping $A$ is Lipschitzian and strongly monotone, then the VIP (1) has a unique solution (see [25] for more details). Hence, replacing $A$ by $A + \alpha^{\mu} S + \alpha F$, we derive that problem (6) has a unique solution $x_{\alpha} \in C$ for each $\alpha > 0$. □
Lemma 7. It holds that $\lim_{\alpha \to 0^{+}} \| x_{\alpha} - x^{\dagger} \| = 0$.
Proof. For every $q \in \Omega$, one has $\langle A q, x_{\alpha} - q \rangle \geq 0$ and $S q = 0$. By (6) and the monotonicity of $A$, $S$, and $F$, one obtains

$\langle F x_{\alpha}, q - x_{\alpha} \rangle \geq \frac{1}{\alpha} \langle A x_{\alpha} + \alpha^{\mu} S x_{\alpha}, x_{\alpha} - q \rangle = \frac{1}{\alpha} \langle A x_{\alpha}, x_{\alpha} - q \rangle + \alpha^{\mu - 1} \langle S x_{\alpha}, x_{\alpha} - q \rangle \geq \frac{1}{\alpha} \langle A q, x_{\alpha} - q \rangle + \alpha^{\mu - 1} \langle S q, x_{\alpha} - q \rangle \geq 0.$ (7)

Noticing that

$\langle F x_{\alpha} - F q, q - x_{\alpha} \rangle + \langle F q, q - x_{\alpha} \rangle = \langle F x_{\alpha}, q - x_{\alpha} \rangle,$ (8)

this, together with the $\gamma$-strong monotonicity of $F$ and (7), implies

$- \gamma \| x_{\alpha} - q \|^{2} + \langle F q, q - x_{\alpha} \rangle \geq 0.$

Namely,

$\langle F q, q - x_{\alpha} \rangle \geq \gamma \| x_{\alpha} - q \|^{2}.$ (9)

Since $\langle F q, q - x_{\alpha} \rangle \leq \| F q \| \| q - x_{\alpha} \|$, by substituting this into (9), we find

$\gamma \| x_{\alpha} - q \|^{2} \leq \| F q \| \| x_{\alpha} - q \|.$ (10)

If $\| x_{\alpha} - q \| \neq 0$, then it holds that

$\gamma \| x_{\alpha} - q \| \leq \| F q \|,$

which implies

$\| x_{\alpha} \| \leq \| q \| + \frac{\| F q \|}{\gamma}, \quad q \in \Omega.$

If $\| x_{\alpha} - q \| = 0$, then the above inequality obviously holds. In particular, one also obtains

$\| x_{\alpha} \| \leq \| x^{\dagger} \| + \frac{\| F x^{\dagger} \|}{\gamma}.$
Thus, we deduce that $\{ x_{\alpha} \}$ is bounded. Furthermore, there exist a subsequence $\{ x_{\alpha_j} \}$ of $\{ x_{\alpha} \}$ and a point $x^{*}$ such that $x_{\alpha_j} \rightharpoonup x^{*}$ as $j \to \infty$. Utilizing the inverse strong monotonicity of $S$, the monotonicity of $A$, and (6), one has

$\alpha_j^{\mu} \langle S x_{\alpha_j}, x_{\alpha_j} - q \rangle \leq \langle A x_{\alpha_j}, q - x_{\alpha_j} \rangle + \alpha_j \langle F x_{\alpha_j}, q - x_{\alpha_j} \rangle \leq \langle A q, q - x_{\alpha_j} \rangle + \alpha_j \langle F x_{\alpha_j}, q - x_{\alpha_j} \rangle \leq \alpha_j \langle F q, q - x_{\alpha_j} \rangle,$

which leads to

$\langle S x_{\alpha_j}, x_{\alpha_j} - q \rangle \leq \alpha_j^{1 - \mu} \langle F q, q - x_{\alpha_j} \rangle \to 0 \quad (j \to \infty).$ (11)

By the $\rho$-inverse strong monotonicity of $S$, noticing $S q = 0$ and (11), one obtains

$\rho \| S x_{\alpha_j} \|^{2} = \rho \| S x_{\alpha_j} - S q \|^{2} \leq \langle S x_{\alpha_j} - S q, x_{\alpha_j} - q \rangle = \langle S x_{\alpha_j}, x_{\alpha_j} - q \rangle \to 0,$

which yields

$\lim_{j \to \infty} \| S x_{\alpha_j} \| = 0.$

Since the mapping $S_{\iota} := I - \iota S$ is nonexpansive for any $\iota \in (0, 2\rho]$ (see [24]), Lemma 2 shows that $I - S_{\iota} = \iota S$ is demiclosed at zero. Therefore, one deduces that $x^{*} \in \mathrm{Fix}(S_{\iota})$, which implies

$x^{*} \in S^{-1} 0.$ (12)
We now prove that $x^{*} \in VI(C, A)$. It follows from the monotonicity of $A$, $S$, and $F$ and from (6) that

$\langle A v + \alpha_j^{\mu} S v + \alpha_j F v, v - x_{\alpha_j} \rangle \geq 0, \quad \forall v \in C,$

or, equivalently,

$\langle A v, x_{\alpha_j} - v \rangle + \alpha_j^{\mu} \langle S v, x_{\alpha_j} - v \rangle + \alpha_j \langle F v, x_{\alpha_j} - v \rangle \leq 0, \quad \forall v \in C.$ (13)

Since $\alpha_j \to 0^{+}$ as $j \to \infty$, one deduces from (13) that

$\langle A v, v - x^{*} \rangle \geq 0, \quad \forall v \in C.$ (14)

Hence, one obtains from Lemma 4 that $x^{*} \in VI(C, A)$. This, together with (12), implies

$x^{*} \in \Omega.$
We now prove $x_{\alpha} \to x^{\dagger}$ as $\alpha \to 0^{+}$. One deduces from (9) that

$\langle F q, q - x_{\alpha_j} \rangle \geq 0, \quad \forall q \in \Omega.$ (15)

Passing to the limit in (15) as $j \to \infty$, and noting that $x_{\alpha_j} \rightharpoonup x^{*}$, one infers

$\langle F q, q - x^{*} \rangle \geq 0, \quad \forall q \in \Omega,$

which implies by Lemma 4 that $x^{*}$ is the solution of problem (2). Since the solution $x^{\dagger}$ of problem (2) is unique, one deduces that $x^{*} = x^{\dagger}$. Thus, the set $\omega(x_{\alpha})$ has only one element; that is, $\omega(x_{\alpha}) = \{ x^{\dagger} \}$, and the whole net $\{ x_{\alpha} \}$ converges weakly to $x^{\dagger}$. Again using (9) with $q = x^{\dagger}$, one finds

$\| x_{\alpha_j} - x^{\dagger} \|^{2} \leq \frac{1}{\gamma} \langle F x^{\dagger}, x^{\dagger} - x_{\alpha_j} \rangle.$ (16)

Passing to the limit in (16) as $j \to \infty$, and noting that $x_{\alpha_j} \rightharpoonup x^{*} = x^{\dagger}$, one obtains

$\lim_{j \to \infty} \| x_{\alpha_j} - x^{\dagger} \| = 0.$

Taking another subsequence $\{ x_{\alpha_k} \}$ of $\{ x_{\alpha} \}$ with $x_{\alpha_k} \rightharpoonup \hat{x}$ and following a similar argument, one derives $\lim_{k \to \infty} x_{\alpha_k} = \hat{x}$ and $\hat{x} = x^{\dagger}$. Therefore, one deduces that $\lim_{\alpha \to 0^{+}} \| x_{\alpha} - x^{\dagger} \| = 0$. This completes the proof. □
Lemma 8. It holds that $\| x_{\alpha_1} - x_{\alpha_2} \| \leq \frac{| \alpha_2 - \alpha_1 |}{\alpha_1 \alpha_2} M$ for all $\alpha_1, \alpha_2 \in (0, 1)$, where $M = \frac{1}{\gamma} \sup_{\alpha \in (0,1)} \{ \| S x_{\alpha} \| + \| F x_{\alpha} \| \}$.
Proof. Suppose $x_{\alpha_1}$ and $x_{\alpha_2}$ are the solutions of problem (6) with $\alpha = \alpha_1$ and $\alpha = \alpha_2$, respectively. Without loss of generality, we assume that $0 < \alpha_2 < \alpha_1 < 1$. Then,

$\langle A x_{\alpha_1} + \alpha_1^{\mu} S x_{\alpha_1} + \alpha_1 F x_{\alpha_1}, x_{\alpha_2} - x_{\alpha_1} \rangle \geq 0,$

and

$\langle A x_{\alpha_2} + \alpha_2^{\mu} S x_{\alpha_2} + \alpha_2 F x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle \geq 0.$

Summing up the above two inequalities, one finds that

$\langle A x_{\alpha_1} - A x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle + \alpha_1^{\mu} \langle S x_{\alpha_1} - S x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle + (\alpha_1^{\mu} - \alpha_2^{\mu}) \langle S x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle + \alpha_1 \langle F x_{\alpha_1} - F x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle + (\alpha_1 - \alpha_2) \langle F x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle \geq 0.$

Noting the monotonicity of $A$ and $S$, one obtains

$(\alpha_2^{\mu} - \alpha_1^{\mu}) \langle S x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle + \alpha_1 \langle F x_{\alpha_1} - F x_{\alpha_2}, x_{\alpha_2} - x_{\alpha_1} \rangle + (\alpha_2 - \alpha_1) \langle F x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle \geq 0,$

or, equivalently,

$(\alpha_2^{\mu} - \alpha_1^{\mu}) \langle S x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle + (\alpha_2 - \alpha_1) \langle F x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle \geq \alpha_1 \langle F x_{\alpha_1} - F x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle.$

It follows from the $\gamma$-strong monotonicity of $F$ that

$(\alpha_2^{\mu} - \alpha_1^{\mu}) \langle S x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle + (\alpha_2 - \alpha_1) \langle F x_{\alpha_2}, x_{\alpha_1} - x_{\alpha_2} \rangle \geq \alpha_1 \gamma \| x_{\alpha_1} - x_{\alpha_2} \|^{2},$

which yields

$| \alpha_2^{\mu} - \alpha_1^{\mu} | \| S x_{\alpha_2} \| \| x_{\alpha_1} - x_{\alpha_2} \| + | \alpha_2 - \alpha_1 | \| F x_{\alpha_2} \| \| x_{\alpha_1} - x_{\alpha_2} \| \geq \alpha_1 \gamma \| x_{\alpha_1} - x_{\alpha_2} \|^{2}.$

Thus, one obtains

$\| x_{\alpha_1} - x_{\alpha_2} \| \leq \frac{| \alpha_2^{\mu} - \alpha_1^{\mu} |}{\alpha_1} \frac{\| S x_{\alpha_2} \|}{\gamma} + \frac{| \alpha_2 - \alpha_1 |}{\alpha_1} \frac{\| F x_{\alpha_2} \|}{\gamma}.$ (17)

The mappings $S$ and $F$ are bounded on bounded sets because they are Lipschitz continuous, and $\{ x_{\alpha} \}$ is bounded. Applying Lagrange's mean value theorem to the function $f(x) = x^{\mu}$ on $(0, 1)$, one deduces that

$| \alpha_2^{\mu} - \alpha_1^{\mu} | = \alpha_1^{\mu} - \alpha_2^{\mu} \leq \mu \, \alpha_2^{\mu - 1} (\alpha_1 - \alpha_2) \leq \mu \, \alpha_2^{-1} (\alpha_1 - \alpha_2) \leq \alpha_2^{-1} (\alpha_1 - \alpha_2).$

This, together with (17) and $\alpha_2 < 1$, implies that

$\| x_{\alpha_1} - x_{\alpha_2} \| \leq \frac{| \alpha_2 - \alpha_1 |}{\alpha_1 \alpha_2} \frac{\| S x_{\alpha_2} \|}{\gamma} + \frac{| \alpha_2 - \alpha_1 |}{\alpha_1 \alpha_2} \frac{\| F x_{\alpha_2} \|}{\gamma} \leq \frac{| \alpha_2 - \alpha_1 |}{\alpha_1 \alpha_2} M,$

where $M = \frac{1}{\gamma} \sup \{ \| S x_{\alpha_2} \| + \| F x_{\alpha_2} \| \}$. If $0 < \alpha_1 \leq \alpha_2 < 1$, by interchanging $\alpha_1$ and $\alpha_2$ in the above proof, one obtains the same result. This finishes the proof. □
Now, the iterative method is presented (Algorithm 4 below), together with a lemma on its stepsize sequence and our standing assumptions.

Lemma 9 ([18]). The sequence $\{\tau_n\}$ generated by (20) is well defined; moreover, $\lim_{n \to \infty} \tau_n = \tau$ with $\tau \in [\min \{ \frac{\nu}{L}, \tau_1 \}, \tau_1 + \zeta]$, where $\zeta = \sum_{n=1}^{\infty} \xi_n$.
Assumption 1.
(C1) $0 < \nu + \tau \rho^{-1} < 1$, where $\tau$ is the limit stepsize from Lemma 9;
(C2) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C3) $\lim_{n \to \infty} \frac{| \alpha_n - \alpha_{n+1} |}{\alpha_n^{2} \alpha_{n+1}} = 0$;
(C4) $\lim_{n \to \infty} \frac{\epsilon_n}{\alpha_n} = 0$;
(C5) $\Omega := VI(C, A) \cap S^{-1} 0 \neq \emptyset$.
Theorem 1. Let Assumption 1 (C1)–(C5) hold. Then the sequence $\{x_n\}$ generated by Algorithm 4 converges strongly to the unique solution $x^{\dagger}$ of problem (2).
Algorithm 4: Modified inertial subgradient extragradient method with regularization (MSEMR).
Initialization: Set $\tau_1 > 0$, $\{\epsilon_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, +\infty)$, $\varrho > 0$, $\mu \in (0, 1)$, and $\nu \in (0, 1)$. Choose a nonnegative real sequence $\{\xi_n\}$ such that $\sum_{n=0}^{\infty} \xi_n < \infty$. Let $x_0, x_1 \in H$ be arbitrary.
Step 1. Compute
$u_n = x_n + \mu_n (x_n - x_{n-1}),$ (18)
where
$\mu_n = \begin{cases} \min \left\{ \dfrac{\epsilon_n}{\| x_n - x_{n-1} \|}, \varrho \right\}, & \text{if } x_n \neq x_{n-1}; \\ \varrho, & \text{otherwise}. \end{cases}$ (19)
Step 2. Compute
$y_n = P_C (u_n - \tau_n (A u_n + \alpha_n^{\mu} S u_n + \alpha_n F u_n)),$
and construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$T_n := \{ w \in H \mid \langle u_n - \tau_n (A u_n + \alpha_n^{\mu} S u_n + \alpha_n F u_n) - y_n, w - y_n \rangle \leq 0 \}.$
Step 3. Calculate
$x_{n+1} = P_{T_n} (u_n - \tau_n (A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n)),$
where $\{\tau_n\}$ is updated by
$\tau_{n+1} = \begin{cases} \min \left\{ \dfrac{\nu \| u_n - y_n \|}{\| A u_n - A y_n \|}, \tau_n + \xi_n \right\}, & \text{if } A u_n \neq A y_n; \\ \tau_n + \xi_n, & \text{otherwise}. \end{cases}$ (20)
Step 4. Set $n := n + 1$ and return to Step 1.
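To make the method concrete, the following NumPy sketch implements Algorithm 4 for finite-dimensional problems. The callables `A`, `S`, `F`, and `proj_C` are user-supplied; the schedules $\alpha_n = 1/\sqrt[3]{n+1}$ and $\epsilon_n = 50/(n+1)^2$ are borrowed from Example 3 below, and the summable slack $\xi_n = 1/(n+1)^2$ is our own choice — all three are assumptions for illustration, not prescriptions of the paper.

```python
import numpy as np

def msemr(A, S, F, proj_C, x0, x1, tau=1.0, rho_=0.3, mu=0.5, nu=0.9,
          n_iters=500):
    """Sketch of Algorithm 4 (MSEMR) in R^m."""
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1) ** (1.0 / 3.0)   # alpha_n (from Example 3)
        eps = 50.0 / (n + 1) ** 2              # epsilon_n (from Example 3)
        xi = 1.0 / (n + 1) ** 2                # summable xi_n (our choice)
        d = np.linalg.norm(x - x_prev)
        mu_n = min(eps / d, rho_) if d > 0 else rho_
        u = x + mu_n * (x - x_prev)            # Step 1: inertial extrapolation
        Bu = A(u) + alpha**mu * S(u) + alpha * F(u)
        y = proj_C(u - tau * Bu)               # Step 2: projection onto C
        v = (u - tau * Bu) - y                 # normal of the half-space T_n
        z = u - tau * (A(y) + alpha**mu * S(y) + alpha * F(u))
        vv = np.dot(v, v)
        t = max(np.dot(v, z - y) / vv, 0.0) if vv > 0 else 0.0
        x_prev, x = x, z - t * v               # Step 3: x_{n+1} = P_{T_n}(...)
        dA = np.linalg.norm(A(u) - A(y))       # nonmonotonic stepsize (20)
        tau = min(nu * np.linalg.norm(u - y) / dA, tau + xi) if dA > 0 else tau + xi
    return x
```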
Proof. Denote by $x_{\alpha_n}$ the solution of problem (6) with $\alpha = \alpha_n$. By (C2) and Lemma 7, one obtains $x_{\alpha_n} \to x^{\dagger}$ as $n \to \infty$. Thus, it suffices to prove

$\lim_{n \to \infty} \| x_n - x_{\alpha_n} \| = 0.$

Setting $w_n = u_n - \tau_n (A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n)$, and noting $x_{\alpha_n} \in C \subset T_n$ for any $n \in \mathbb{N}$, one obtains

$\| x_{n+1} - x_{\alpha_n} \|^{2} = \| P_{T_n} w_n - x_{\alpha_n} \|^{2} = \langle (P_{T_n} w_n - w_n) + (w_n - x_{\alpha_n}), (P_{T_n} w_n - w_n) + (w_n - x_{\alpha_n}) \rangle = \| P_{T_n} w_n - w_n \|^{2} + \| w_n - x_{\alpha_n} \|^{2} + 2 \langle P_{T_n} w_n - w_n, w_n - x_{\alpha_n} \rangle.$ (21)

It follows from Lemma 1 that

$2 \| P_{T_n} w_n - w_n \|^{2} + 2 \langle P_{T_n} w_n - w_n, w_n - x_{\alpha_n} \rangle = 2 \langle P_{T_n} w_n - w_n, P_{T_n} w_n - x_{\alpha_n} \rangle \leq 0.$ (22)

This, together with (21), implies

$\begin{aligned} \| x_{n+1} - x_{\alpha_n} \|^{2} &\leq \| w_n - x_{\alpha_n} \|^{2} - \| P_{T_n} w_n - w_n \|^{2} \\ &= \| u_n - \tau_n (A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n) - x_{\alpha_n} \|^{2} - \| u_n - \tau_n (A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n) - x_{n+1} \|^{2} \\ &= \| u_n - x_{\alpha_n} \|^{2} - \| u_n - x_{n+1} \|^{2} + 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - x_{n+1} \rangle \\ &= \| u_n - x_{\alpha_n} \|^{2} - \| u_n - x_{n+1} \|^{2} + 2 \langle u_n - y_n, y_n - x_{n+1} \rangle + 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle \\ &\quad + 2 \tau_n \langle (A y_n + \alpha_n^{\mu} S y_n) - (A u_n + \alpha_n^{\mu} S u_n), y_n - x_{n+1} \rangle + 2 \langle u_n - \tau_n A u_n - \tau_n \alpha_n^{\mu} S u_n - \tau_n \alpha_n F u_n - y_n, x_{n+1} - y_n \rangle. \end{aligned}$ (23)
Noticing the definition of $T_n$ and that $x_{n+1} \in T_n$, one derives

$\langle u_n - \tau_n A u_n - \tau_n \alpha_n^{\mu} S u_n - \tau_n \alpha_n F u_n - y_n, x_{n+1} - y_n \rangle \leq 0.$ (24)

Substituting (24) into (23), one obtains

$\| x_{n+1} - x_{\alpha_n} \|^{2} \leq \| u_n - x_{\alpha_n} \|^{2} - \| u_n - x_{n+1} \|^{2} + 2 \langle u_n - y_n, y_n - x_{n+1} \rangle + 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle (A y_n + \alpha_n^{\mu} S y_n) - (A u_n + \alpha_n^{\mu} S u_n), y_n - x_{n+1} \rangle.$

As

$2 \langle u_n - y_n, y_n - x_{n+1} \rangle = \| u_n - x_{n+1} \|^{2} - \| u_n - y_n \|^{2} - \| y_n - x_{n+1} \|^{2},$ (25)

by substituting (25) into the former inequality, one obtains

$\| x_{n+1} - x_{\alpha_n} \|^{2} \leq \| u_n - x_{\alpha_n} \|^{2} - \| u_n - y_n \|^{2} - \| y_n - x_{n+1} \|^{2} + 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle (A y_n + \alpha_n^{\mu} S y_n) - (A u_n + \alpha_n^{\mu} S u_n), y_n - x_{n+1} \rangle.$ (26)

It follows from (20) and the inverse strong monotonicity of $S$ (hence the $\rho^{-1}$-Lipschitz continuity of $S$, together with $\alpha_n^{\mu} < 1$) that

$\begin{aligned} 2 \tau_n \langle (A y_n + \alpha_n^{\mu} S y_n) - (A u_n + \alpha_n^{\mu} S u_n), y_n - x_{n+1} \rangle &\leq 2 \tau_n (\| A y_n - A u_n \| + \| S y_n - S u_n \|) \| y_n - x_{n+1} \| \\ &\leq 2 \tau_n \left( \frac{\nu}{\tau_{n+1}} \| y_n - u_n \| + \rho^{-1} \| y_n - u_n \| \right) \| y_n - x_{n+1} \| \\ &= 2 \left( \frac{\nu \tau_n}{\tau_{n+1}} + \tau_n \rho^{-1} \right) \| y_n - u_n \| \| y_n - x_{n+1} \| \\ &\leq \left( \frac{\nu \tau_n}{\tau_{n+1}} + \tau_n \rho^{-1} \right)^{2} \| y_n - u_n \|^{2} + \| y_n - x_{n+1} \|^{2}. \end{aligned}$

This, together with (26), leads to

$\| x_{n+1} - x_{\alpha_n} \|^{2} \leq \| u_n - x_{\alpha_n} \|^{2} - \left( 1 - \left( \frac{\nu \tau_n}{\tau_{n+1}} + \tau_n \rho^{-1} \right)^{2} \right) \| u_n - y_n \|^{2} + 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle.$ (27)
Now, we estimate the last term in inequality (27). Observe that

$\begin{aligned} 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle &= 2 \tau_n \langle A y_n - A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle A x_{\alpha_n} + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle \\ &= 2 \tau_n \langle A y_n - A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle A x_{\alpha_n} + \alpha_n^{\mu} S x_{\alpha_n} + \alpha_n F x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \\ &\quad + 2 \tau_n \alpha_n^{\mu} \langle S y_n - S x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \alpha_n \langle F u_n - F x_{\alpha_n}, x_{\alpha_n} - y_n \rangle. \end{aligned}$ (28)

Since $S$ and $A$ are monotone, and $x_{\alpha_n}$ is the solution of problem (6) with $\alpha = \alpha_n$, one finds that

$\langle A y_n - A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \leq 0, \quad \langle S y_n - S x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \leq 0,$

and $\langle A x_{\alpha_n} + \alpha_n^{\mu} S x_{\alpha_n} + \alpha_n F x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \leq 0$, which, by (28) and the strong monotonicity of $F$, imply that

$\begin{aligned} 2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle &\leq 2 \tau_n \alpha_n \langle F u_n - F x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \\ &= 2 \tau_n \alpha_n \langle F u_n - F x_{\alpha_n}, x_{\alpha_n} - u_n \rangle + 2 \tau_n \alpha_n \langle F u_n - F x_{\alpha_n}, u_n - y_n \rangle \\ &\leq - 2 \tau_n \alpha_n \gamma \| u_n - x_{\alpha_n} \|^{2} + 2 \tau_n \alpha_n \| F u_n - F x_{\alpha_n} \| \| u_n - y_n \| \\ &\leq - 2 \tau_n \alpha_n \gamma \| u_n - x_{\alpha_n} \|^{2} + 2 \tau_n \alpha_n k \| u_n - x_{\alpha_n} \| \| u_n - y_n \|. \end{aligned}$ (29)

Let $\sigma_1$, $\sigma_2$, and $\sigma_3$ be positive real numbers such that

$2 \gamma - \sigma_1 k - \sigma_2 - \sigma_3 > 0.$

Noticing

$2 \| u_n - x_{\alpha_n} \| \| u_n - y_n \| \leq \sigma_1 \| u_n - x_{\alpha_n} \|^{2} + \frac{1}{\sigma_1} \| u_n - y_n \|^{2},$ (30)

one obtains from (29) and (30) that

$2 \tau_n \langle A y_n + \alpha_n^{\mu} S y_n + \alpha_n F u_n, x_{\alpha_n} - y_n \rangle \leq - 2 \tau_n \alpha_n \gamma \| u_n - x_{\alpha_n} \|^{2} + \tau_n \alpha_n \sigma_1 k \| u_n - x_{\alpha_n} \|^{2} + \frac{\tau_n \alpha_n k}{\sigma_1} \| u_n - y_n \|^{2}.$ (31)

Combining (27) and (31), one finds

$\| x_{n+1} - x_{\alpha_n} \|^{2} \leq (1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n) \| u_n - x_{\alpha_n} \|^{2} - \left( 1 - \left( \frac{\nu \tau_n}{\tau_{n+1}} + \tau_n \rho^{-1} \right)^{2} - \frac{\tau_n \alpha_n k}{\sigma_1} \right) \| u_n - y_n \|^{2}.$ (32)
By virtue of Lemma 9, (C1), (C2), and (C4), there exists $n_0 \geq 1$ such that, for all $n \geq n_0$,

$1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n > 0, \quad 1 - \left( \frac{\nu \tau_n}{\tau_{n+1}} + \tau_n \rho^{-1} \right)^{2} - \frac{\tau_n \alpha_n k}{\sigma_1} > 0, \quad 1 - \sigma_3 \tau_n \alpha_n > 0, \quad \epsilon_n \leq \sigma_2 \tau_n \alpha_n.$ (33)

Thus, one derives from (32) and (33) that

$\| x_{n+1} - x_{\alpha_n} \|^{2} \leq (1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n) \| u_n - x_{\alpha_n} \|^{2}, \quad n \geq n_0.$ (34)

It follows from the definition of $u_n$, (19), and (33) that

$\begin{aligned} \| u_n - x_{\alpha_n} \|^{2} &= \| x_n + \mu_n (x_n - x_{n-1}) - x_{\alpha_n} \|^{2} \leq (\| x_n - x_{\alpha_n} \| + \mu_n \| x_n - x_{n-1} \|)^{2} \\ &= \| x_n - x_{\alpha_n} \|^{2} + \mu_n^{2} \| x_n - x_{n-1} \|^{2} + 2 \mu_n \| x_n - x_{\alpha_n} \| \| x_n - x_{n-1} \| \\ &\leq \| x_n - x_{\alpha_n} \|^{2} + \epsilon_n^{2} + 2 \epsilon_n \| x_n - x_{\alpha_n} \| \\ &\leq \| x_n - x_{\alpha_n} \|^{2} + \epsilon_n^{2} + \epsilon_n \| x_n - x_{\alpha_n} \|^{2} + \epsilon_n \\ &\leq (1 + \epsilon_n) \| x_n - x_{\alpha_n} \|^{2} + 2 \epsilon_n \\ &\leq (1 + \sigma_2 \tau_n \alpha_n) \| x_n - x_{\alpha_n} \|^{2} + 2 \epsilon_n, \quad n \geq n_0, \end{aligned}$ (35)

where we used $\mu_n \| x_n - x_{n-1} \| \leq \epsilon_n$ from (19), $2 \| x_n - x_{\alpha_n} \| \leq \| x_n - x_{\alpha_n} \|^{2} + 1$, and $\epsilon_n \leq 1$ for all large $n$. Observe that

$2 \| x_{\alpha_{n+1}} - x_{\alpha_n} \| \| x_{\alpha_{n+1}} - x_{n+1} \| \leq \frac{1}{\sigma_3 \tau_n \alpha_n} \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} + \sigma_3 \tau_n \alpha_n \| x_{\alpha_{n+1}} - x_{n+1} \|^{2}.$ (36)

Hence, for each $n \geq n_0$, one deduces from (36) and Lemma 8 that

$\begin{aligned} \| x_{\alpha_n} - x_{n+1} \|^{2} &= \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} + \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - 2 \langle x_{\alpha_{n+1}} - x_{\alpha_n}, x_{\alpha_{n+1}} - x_{n+1} \rangle \\ &\geq \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} + \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - 2 \| x_{\alpha_{n+1}} - x_{\alpha_n} \| \| x_{\alpha_{n+1}} - x_{n+1} \| \\ &\geq \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} + \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - \frac{1}{\sigma_3 \tau_n \alpha_n} \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} - \sigma_3 \tau_n \alpha_n \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} \\ &= (1 - \sigma_3 \tau_n \alpha_n) \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - \frac{1 - \sigma_3 \tau_n \alpha_n}{\sigma_3 \tau_n \alpha_n} \| x_{\alpha_{n+1}} - x_{\alpha_n} \|^{2} \\ &\geq (1 - \sigma_3 \tau_n \alpha_n) \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - \frac{1 - \sigma_3 \tau_n \alpha_n}{\sigma_3 \tau_n \alpha_n} \left( \frac{\alpha_{n+1} - \alpha_n}{\alpha_n \alpha_{n+1}} \right)^{2} M^{2} \\ &= (1 - \sigma_3 \tau_n \alpha_n) \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} - \frac{(1 - \sigma_3 \tau_n \alpha_n)(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2}, \end{aligned}$

which implies

$\| x_{\alpha_{n+1}} - x_{n+1} \|^{2} \leq \frac{1}{1 - \sigma_3 \tau_n \alpha_n} \| x_{n+1} - x_{\alpha_n} \|^{2} + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2}, \quad n \geq n_0.$ (37)

It follows from (33), (34), (35), and (37) that

$\begin{aligned} \| x_{\alpha_{n+1}} - x_{n+1} \|^{2} &\leq \frac{1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n} \| u_n - x_{\alpha_n} \|^{2} + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2} \\ &\leq \frac{(1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n)(1 + \sigma_2 \tau_n \alpha_n)}{1 - \sigma_3 \tau_n \alpha_n} \| x_n - x_{\alpha_n} \|^{2} + \frac{1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n} 2 \epsilon_n + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2} \\ &= \frac{1 - (2 \gamma - \sigma_1 k - \sigma_2) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n} \| x_n - x_{\alpha_n} \|^{2} - \frac{(2 \gamma - \sigma_1 k) \sigma_2 \tau_n^{2} \alpha_n^{2}}{1 - \sigma_3 \tau_n \alpha_n} \| x_n - x_{\alpha_n} \|^{2} + \frac{1 - (2 \gamma - \sigma_1 k) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n} 2 \epsilon_n + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2} \\ &\leq \frac{1 - (2 \gamma - \sigma_1 k - \sigma_2) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n} \| x_n - x_{\alpha_n} \|^{2} + 4 \epsilon_n + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2} \\ &= (1 - \beta_n) \| x_n - x_{\alpha_n} \|^{2} + \gamma_n, \quad n \geq n_0, \end{aligned}$ (38)

where $\beta_n = \frac{(2 \gamma - \sigma_1 k - \sigma_2 - \sigma_3) \tau_n \alpha_n}{1 - \sigma_3 \tau_n \alpha_n}$ and $\gamma_n = 4 \epsilon_n + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{3} \alpha_{n+1}^{2}} M^{2}$. From (C2), one obtains that $\beta_n \to 0$ and $\sum_{n=1}^{\infty} \beta_n = +\infty$. Furthermore, one derives from (C3) and (C4) that

$\frac{\gamma_n}{\beta_n} = \left( \frac{4 \epsilon_n}{\alpha_n} + \frac{(\alpha_{n+1} - \alpha_n)^{2}}{\sigma_3 \tau_n \alpha_n^{4} \alpha_{n+1}^{2}} M^{2} \right) \frac{1 - \sigma_3 \tau_n \alpha_n}{(2 \gamma - \sigma_1 k - \sigma_2 - \sigma_3) \tau_n} \to 0.$

By using Lemma 5, one deduces $\lim_{n \to \infty} \| x_n - x_{\alpha_n} \| = 0$. Thus, one has $\lim_{n \to \infty} \| x_n - x^{\dagger} \| = 0$. □
Remark 2. Compared with the EGM (3), the SEGM (Algorithm 1), the SSEGM (Algorithm 2), and the SSEGMN (Algorithm 3), our MSEMR (Algorithm 4) differs in the following aspects.
• The MSEMR, which finds a solution to problem (2) through the regularized subproblems (6), addresses a more general problem than the EGM, SEGM, SSEGM, and SSEGMN, which are designed for the VIP (1) alone.
• The stepsizes adopted in the EGM, the SEGM, and the SSEGM are constant or monotonically decreasing, which may affect the execution efficiency of those methods. The MSEMR adopts a nonmonotonic stepsize criterion that overcomes this drawback.
• One advantage of the MSEMR is its strong convergence, which is preferable to the weak convergence of the EGM and the SEGM in infinite-dimensional spaces.
• The strong convergence of the MSEMR comes from the regularization technique, which is different from the Halpern iteration and the viscosity method used in the SSEGM and the SSEGMN, respectively.

4. Application to Split Minimization Problems

Let $C$ be a nonempty closed convex subset of $H$. The constrained convex minimization problem is to find a point $x^{*} \in C$ such that

$f(x^{*}) = \min_{x \in C} f(x),$ (39)

where $f : C \to \mathbb{R}$ is a continuously differentiable function. The following lemma is useful for proving Theorem 2.

Lemma 10 ([26]). A necessary condition of optimality for a point $x^{*} \in C$ to be a solution of the minimization problem (39) is that $x^{*}$ solves the variational inequality

$\langle \nabla f(x^{*}), y - x^{*} \rangle \geq 0, \quad \forall y \in C.$ (40)

Equivalently, $x^{*} \in C$ solves the fixed-point equation

$x^{*} = P_C (x^{*} - \tau \nabla f(x^{*}))$

for every constant $\tau > 0$. If, in addition, $f$ is convex, then the optimality condition (40) is also sufficient.
Theorem 2. Suppose $f : H \to \mathbb{R}$ is a differentiable and convex function whose gradient $\nabla f$ is $L$-Lipschitz continuous. Assume that the optimization problem $\min_{x \in C} f(x)$ is consistent (i.e., its solution set is nonempty). Then, the sequence $\{x_n\}$ constructed by Algorithm 5 converges to a solution of the minimization problem (39) under Assumption 1 (C1)–(C4).
Algorithm 5: Modified inertial subgradient extragradient method for split minimization problems (MSESM).
Initialization: Set $\tau_1 > 0$, $\{\epsilon_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, +\infty)$, $\varrho > 0$, $\mu \in (0, 1)$, and $\nu \in (0, 1)$. Choose a nonnegative real sequence $\{\xi_n\}$ such that $\sum_{n=0}^{\infty} \xi_n < \infty$. Let $x_0, x_1 \in H$ be arbitrary.
Step 1. Compute
$u_n = x_n + \mu_n (x_n - x_{n-1}),$
where
$\mu_n = \begin{cases} \min \left\{ \dfrac{\epsilon_n}{\| x_n - x_{n-1} \|}, \varrho \right\}, & \text{if } x_n \neq x_{n-1}; \\ \varrho, & \text{otherwise}. \end{cases}$
Step 2. Compute
$y_n = P_C (u_n - \tau_n \nabla f(u_n)),$
and construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$T_n := \{ w \in H \mid \langle u_n - \tau_n \nabla f(u_n) - y_n, w - y_n \rangle \leq 0 \}.$
Step 3. Calculate
$x_{n+1} = P_{T_n} (u_n - \tau_n \nabla f(y_n)),$
where $\{\tau_n\}$ is updated by
$\tau_{n+1} = \begin{cases} \min \left\{ \dfrac{\nu \| u_n - y_n \|}{\| \nabla f(u_n) - \nabla f(y_n) \|}, \tau_n + \xi_n \right\}, & \text{if } \nabla f(u_n) \neq \nabla f(y_n); \\ \tau_n + \xi_n, & \text{otherwise}. \end{cases}$
Step 4. Set $n := n + 1$ and return to Step 1.
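As an illustration of Algorithm 5, the sketch below applies it to a toy quadratic $f(x) = \frac{1}{2} \| x - b \|^{2}$ over the box $C = \{ 0 \leq x_i \leq 3 \}$, whose constrained minimizer is $P_C(b)$. The problem data, the schedules for $\epsilon_n$ and $\xi_n$, and all names are our own assumptions, not taken from the paper.

```python
import numpy as np

def msesm(grad_f, proj_C, x0, x1, tau=1.0, rho_=0.3, nu=0.9, n_iters=300):
    """Sketch of Algorithm 5 (MSESM) with assumed schedules for eps_n, xi_n."""
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        eps, xi = 50.0 / (n + 1) ** 2, 1.0 / (n + 1) ** 2
        d = np.linalg.norm(x - x_prev)
        mu_n = min(eps / d, rho_) if d > 0 else rho_
        u = x + mu_n * (x - x_prev)                  # inertial step
        y = proj_C(u - tau * grad_f(u))              # projection onto C
        v = (u - tau * grad_f(u)) - y                # normal of T_n
        z = u - tau * grad_f(y)
        vv = np.dot(v, v)
        t = max(np.dot(v, z - y) / vv, 0.0) if vv > 0 else 0.0
        x_prev, x = x, z - t * v                     # P_{T_n} step
        dg = np.linalg.norm(grad_f(u) - grad_f(y))
        tau = min(nu * np.linalg.norm(u - y) / dg, tau + xi) if dg > 0 else tau + xi
    return x

b = np.array([1.0, 5.0])   # constrained minimizer is clip(b, 0, 3) = [1, 3]
print(msesm(lambda x: x - b, lambda x: np.clip(x, 0.0, 3.0),
            np.zeros(2), np.ones(2)))
```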

5. Numerical Illustrations

In the sequel, we provide some numerical experiments to demonstrate the advantages of the proposed method and compare it with the EGM, SEGM, SSEGM, and SSEGMN.
Example 3. Let $H = \mathbb{R}$ and $C = [0, 50]$. Let $A : H \to H$ be given by $A x = \frac{1}{2} x$ and $F : H \to H$ be given by $F x = \frac{1}{3} x$ for all $x \in H$. It is not difficult to verify that $0$ is the unique solution of problem (2). Let us choose $\varrho = 0.3$, $\mu = 0.5$, $\tau_1 = 1$, $\nu = 0.9$, $\alpha_n = \frac{1}{\sqrt[3]{n+1}}$, and $\epsilon_n = \frac{50}{(n+1)^{2}}$. We test the MSEMR for different values of $x_0$ and $x_1$; see Figure 1 and Figure 2.
Example 4. Let $H = \mathbb{R}^{m}$ and $C = \{ x \in \mathbb{R}^{m} : 0 \leq x_i \leq 3, \ i = 1, 2, \ldots, m \}$. Assume the operator $A : \mathbb{R}^{m} \to \mathbb{R}^{m}$ is given by $A x = x$, and $F : \mathbb{R}^{m} \to \mathbb{R}^{m}$ is given by $F x = \frac{9 x}{10}$ for all $x \in \mathbb{R}^{m}$. It is obvious that $A$ is monotone and Lipschitz continuous. For this experiment, all methods use the same randomly generated starting points. The control parameters are chosen as $\varrho = 0.3$, $\mu = 0.5$, $\tau_1 = 1$, $\nu = 0.9$, $\alpha_n = \frac{1}{\sqrt[3]{n+1}}$, and $\epsilon_n = \frac{50}{(n+1)^{2}}$. Figure 3 shows the numerical results for Example 4 in $\mathbb{R}^{5}$ and $\mathbb{R}^{10}$, respectively.
From Examples 3 and 4, one finds that the MSEMR performs better than the other methods considered. The judicious use of inertial terms and of the new stepsize rule greatly improves the computational performance of the proposed method.

6. Conclusions

The paper proposed a modified inertial subgradient extragradient method for finding a solution to the variational inequality problem over the set of common solutions to the variational inequality and null point problems. The introduced method uses a nonmonotonic stepsize rule without any linesearch procedure, which allows it to run without prior knowledge of the Lipschitz constants. The regularization technique was incorporated into the proposed method to obtain strong convergence. Finally, several numerical experiments were provided to verify the efficiency of the introduced method with respect to previous methods.

Author Contributions

Conceptualization, Y.S.; Data curation, O.B.; Funding acquisition, Y.S.; Methodology, Y.S. and O.B.; Supervision, O.B.; Writing—original draft, Y.S. and O.B.; Writing—review & editing, Y.S. and O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Scientific Research Project for Colleges and Universities in Henan Province (No. 20A110038) and by the Key Young Teachers of Colleges and Universities in Henan Province project (No. 2019GGJS143).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and the editor for the valuable comments to improve the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142.
2. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
3. Song, Y.; Chen, X. Analysis of Subgradient Extragradient Method for Variational Inequality Problems and Null Point Problems. Symmetry 2022, 14, 636.
4. Yao, Y.; Shehu, Y.; Li, X.H.; Dong, Q.L. A method with inertial extrapolation step for split monotone inclusion problems. Optimization 2021, 70, 741–761.
5. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algor. 2021, 88, 1419–1456.
6. Kazmi, K.R.; Rizvi, S.H. Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 2013, 21, 44–51.
7. Song, Y.L.; Ceng, L.C. Convergence theorems for accretive operators with nonlinear mappings in Banach spaces. Abstr. Appl. Anal. 2014, 12, 1–12.
8. Jolaoso, L.O.; Karahan, I. A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert spaces. Comput. Appl. Math. 2020, 39, 1–22.
9. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Matecon 1976, 12, 747–756.
10. Akashi, S.; Takahashi, W. Weak convergence theorem for an infinite family of demimetric mappings in a Hilbert space. J. Nonlinear Convex Anal. 2016, 10, 2159–2169.
11. Yao, Y.; Cho, Y.J.; Liou, Y.C. Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212, 242–250.
12. Alvarez, F.; Attouch, H. An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
13. Song, Y.L.; Ceng, L.C. Strong convergence of a general iterative algorithm for a finite family of accretive operators in Banach spaces. Fixed Point Theory Appl. 2015, 2015, 1–24.
14. Tan, B.; Qin, X.; Yao, J.C. Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 2021, 87, 1–34.
15. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
16. Fulga, A.; Afshari, H.; Shojaat, H. Common fixed point theorems on quasi-cone metric space over a divisible Banach algebra. Adv. Differ. Equ. 2021, 2021, 1–15.
17. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
18. Tan, B.; Qin, X. Self adaptive viscosity-type inertial extragradient algorithms for solving variational inequalities with applications. Math. Model. Anal. 2022, 27, 41–58.
19. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
20. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Dekker: New York, NY, USA, 1984.
21. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
22. Xu, H.K. Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2002, 2, 1–17.
23. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
24. Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. Theory Methods Appl. 2005, 61, 341–350.
25. Zhou, H.; Zhou, Y.; Feng, G. Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 2015, 1–17.
26. Meng, S.; Xu, H.K. Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1, 35–43.
Figure 1. Example 3. Left: $x_0 = 4$ and $x_1 = 2$. Right: $x_0 = 10$ and $x_1 = 5$.
Figure 2. Example 3. Left: $x_0 = 20$ and $x_1 = 16$. Right: $x_0 = 30$ and $x_1 = 15$.
Figure 3. Example 4. Left: $m = 5$. Right: $m = 10$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
