Analysis of Subgradient Extragradient Method for Variational Inequality Problems and Null Point Problems

College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
* Author to whom correspondence should be addressed. These authors contributed equally to this work.
Symmetry 2022, 14(4), 636; https://doi.org/10.3390/sym14040636
Submission received: 10 March 2022 / Revised: 17 March 2022 / Accepted: 18 March 2022 / Published: 22 March 2022
(This article belongs to the Section Computer)

Abstract: In this paper, we introduce a new numerical method for finding a common solution to variational inequality problems involving monotone mappings and null point problems involving a finite family of inverse-strongly monotone mappings. The method is inspired by the subgradient extragradient method and the regularization method. Strong convergence of the proposed algorithm is established under suitable conditions.

1. Introduction

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $A : C \to H$ be a nonlinear operator. The variational inequality problem (VIP) is to find a point $v \in C$ such that
$$\langle A v, u - v \rangle \ge 0, \quad \forall u \in C, \qquad (1)$$
whose solution set is denoted by $VI(C, A)$.
The VIP was introduced by Hartman and Stampacchia in the 1960s [1] as an important tool in the study of obstacle, unilateral, and equilibrium problems arising in several branches of pure and applied sciences. It has since been studied by many authors, because it connects diverse topics such as optimization, optimal control, symmetric boundary value problems, and fixed point problems; see [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16].
Definition 1.
A mapping $A : C \to H$ is said to be:
(1) monotone, if
$$\langle Ax - Ay, x - y \rangle \ge 0, \quad \forall x, y \in C;$$
(2) $\eta$-strongly monotone, if there exists a number $\eta > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \eta \|x - y\|^2, \quad \forall x, y \in C;$$
(3) $\lambda$-inverse-strongly monotone, if there exists a positive number $\lambda$ such that
$$\langle Ax - Ay, x - y \rangle \ge \lambda \|Ax - Ay\|^2, \quad \forall x, y \in C;$$
(4) $L$-Lipschitz continuous, if there exists $L > 0$ such that
$$\|Ax - Ay\| \le L \|x - y\|, \quad \forall x, y \in C;$$
(5) nonexpansive, if
$$\|Ax - Ay\| \le \|x - y\|, \quad \forall x, y \in C.$$
Remark 1.
A relation $R$ is symmetric if and only if, for all $x, y$: if $R(x, y)$, then $R(y, x)$. It is easy to see that the relations in Definition 1 are all symmetric. As is well known, every $\lambda$-inverse-strongly monotone mapping is $\frac{1}{\lambda}$-Lipschitz continuous. Additionally, if $A$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, then $A$ is $\frac{\eta}{L^2}$-inverse-strongly monotone; indeed, $\langle Ax - Ay, x - y \rangle \ge \eta \|x - y\|^2 \ge \frac{\eta}{L^2} \|Ax - Ay\|^2$. Furthermore, both strongly monotone and inverse-strongly monotone mappings are monotone. It is also known that if the set of fixed points of a nonexpansive mapping $A$ is nonempty, then it is closed and convex.
The problem studied in this paper is to find a point $u^* \in VI(C, A) \cap \bigcap_{i=1}^N A_i^{-1}0$ such that
$$\|u^*\| = \inf\{\|p\| : p \in VI(C, A) \cap \bigcap_{i=1}^N A_i^{-1}0\}, \qquad (2)$$
where $\{A_i\}_{i=1}^N : C \to H$ is a finite family of inverse-strongly monotone mappings and $A : C \to H$ is a monotone mapping.
In order to solve Lipschitz continuous and monotone VIPs in Hilbert spaces, Korpelevich [5] introduced the extragradient method (EGM):
$$y_n = P_C(x_n - \tau A x_n), \quad x_{n+1} = P_C(x_n - \tau A y_n), \qquad (3)$$
where $\tau \in (0, 1/L)$ and $P_C$ denotes the metric projection from $H$ onto $C$. If $VI(C, A) \ne \emptyset$, then the sequence $\{x_n\}$ generated by process (3) converges weakly to an element of $VI(C, A)$. The EGM requires two evaluations of $A$ and two projections onto $C$ at each iteration. This can be expensive if the structures of $A$ and the feasible set $C$ are complicated, so the efficiency of the method may be seriously affected.
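For concreteness, the iteration (3) translates directly into code. The following is a minimal sketch in Python/NumPy, assuming a user-supplied operator A and a projection oracle proj_C (both illustrative names, not from the paper):

import numpy as np

def extragradient(A, proj_C, x0, tau, max_iter=1000, tol=1e-8):
    # EGM (3): two evaluations of A and two projections onto C per iteration.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - tau * A(x))       # prediction step
        x_new = proj_C(x - tau * A(y))   # correction step
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x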
Very recently, inspired by the EGM, Censor et al. [17] introduced the following subgradient extragradient method (SEGM) for solving the VIP.
They proved that $\{x_n\}$ and $\{y_n\}$ generated by Algorithm 1 converge weakly to $u^*$, where $u^* \in VI(C, A)$. In the SEGM, the second projection of the EGM is replaced by a projection onto a specially constructed half-space. The projection onto a half-space is explicit, hence the SEGM can be considered an improvement of the EGM in terms of the cost of each computational step.
Algorithm 1: The subgradient extragradient algorithm (SEGM).
Initialization: Set $\tau > 0$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C(x_n - \tau A x_n),$$
construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$$T_n := \{ w \in H : \langle (x_n - \tau A x_n) - y_n, w - y_n \rangle \le 0 \},$$
and calculate the next iterate
$$x_{n+1} = P_{T_n}(x_n - \tau A y_n).$$
Step 2. If $x_n = y_n$, then stop. Otherwise, set $n := n + 1$ and return to Step 1.
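Algorithm 1 differs from the EGM sketch above only in its second step, where the projection onto the half-space $T_n$ has a closed form (the formula is recalled in Section 5). A minimal sketch under the same illustrative conventions:

import numpy as np

def subgradient_extragradient(A, proj_C, x0, tau, max_iter=1000, tol=1e-8):
    # SEGM (Algorithm 1): the second projection is onto the half-space T_n.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        z = x - tau * A(x)
        y = proj_C(z)
        if np.linalg.norm(x - y) <= tol:   # stopping rule of Step 2
            return y
        v = z - y                          # outward normal of T_n at y_n
        w = x - tau * A(y)
        # explicit projection of w onto T_n = {u : <v, u - y> <= 0}
        x = w - max(np.dot(v, w - y), 0.0) / max(np.dot(v, v), 1e-16) * v
    return x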
Following the ideas of the EGM and the SEGM, several notable methods can be recalled here, such as the projected reflected gradient method [18], the golden ratio method [19], the modified extragradient method [20], and the forward–backward–forward splitting method [21]. We note that the aforementioned methods only provide, in general, weak convergence in infinite-dimensional Hilbert spaces. To obtain strong convergence, which is more useful, these methods are often combined with one or more other techniques, such as the Halpern iteration, the viscosity method, and the regularization method.
Most recently, inspired by the EGM and the regularization method, Hieu et al. [22] introduced the following double-projection method (DPM), which has a simpler structure.
Under some suitable conditions, they proved that the iterative sequence { x n } constructed by Algorithm 2 converges strongly to a solution of the VIP (1).
Algorithm 2: The double-projection method (DPM).
Initialization: Set $\{\tau_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, +\infty)$ and let $x_0 \in C$ be arbitrary.
Iterative Steps:
$$y_n = P_C\big(x_n - \tau_n (A x_n + \alpha_n x_n)\big), \quad x_{n+1} = P_C\big(x_n - \tau_n (A y_n + \alpha_n x_n)\big).$$
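A minimal sketch of Algorithm 2 under the same illustrative conventions as before; tau and alpha are sequences assumed to satisfy the conditions recalled later in Corollary 1:

import numpy as np

def double_projection(A, proj_C, x0, tau, alpha, max_iter=1000):
    # DPM (Algorithm 2): the EGM applied to the regularized operator
    # A + alpha_n * I, with the regularization term anchored at x_n in both steps.
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        y = proj_C(x - tau[n] * (A(x) + alpha[n] * x))
        x = proj_C(x - tau[n] * (A(y) + alpha[n] * x))
    return x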
Motivated by Korpelevich [5], Censor et al. [17], Malitsky [18], Hieu et al. [19], Hieu et al. [20], Tseng [21], and Hieu et al. [22], we introduce a new numerical algorithm for finding a common solution to variational inequality problems involving monotone mappings and null point problems involving a finite family of inverse-strongly monotone mappings. The strong convergence analysis is mainly based on the extragradient method and the regularization method.

2. Preliminaries

In this work, we denote the strong and weak convergence of $\{x_n\}$ to $x$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively, and denote by $\mathrm{Fix}(T)$ the set of fixed points of a mapping $T : C \to H$; namely, $\mathrm{Fix}(T) = \{u \in C : Tu = u\}$.
Lemma 1
([23]). Let $H$ be a real Hilbert space and let $C$ be a nonempty convex closed subset of $H$. For $u \in H$ and $v \in C$, we have $v = P_C u$ if and only if
$$\langle u - v, w - v \rangle \le 0, \quad \forall w \in C.$$
Lemma 2
([24]). (Demiclosedness principle) Let $H$ be a real Hilbert space and let $C$ be a nonempty convex closed subset of $H$. Let $T : C \to H$ be a nonexpansive mapping. Then, the mapping $I - T$ is demiclosed; i.e., if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$.
Lemma 3
([24]). Let $\{x_n\}$ be a sequence in $H$. If $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$, then $x_n \to x$.
Lemma 4
([25]). Suppose $A : C \to H$ is a (pseudo)monotone and hemicontinuous operator. Then $x^*$ is a solution of the VIP if and only if $x^*$ is a solution of the problem:
find $x^* \in C$ such that $\langle A x, x - x^* \rangle \ge 0$, $\forall x \in C$.
Lemma 5
([26,27]). Assume $\{\alpha_n\}$ is a sequence of nonnegative numbers satisfying the inequality
$$\alpha_{n+1} \le (1 - \beta_n) \alpha_n + b_n + \beta_n c_n, \quad n \in \mathbb{N},$$
where $\{\beta_n\}$, $\{b_n\}$, $\{c_n\}$ satisfy the restrictions:
(i) $\sum_{n=1}^{\infty} \beta_n = \infty$, $\lim_{n \to \infty} \beta_n = 0$;
(ii) $b_n \ge 0$, $\sum_{n=1}^{\infty} b_n < \infty$;
(iii) $\limsup_{n \to \infty} c_n \le 0$.
Then, $\lim_{n \to \infty} \alpha_n = 0$.
Lemma 6.
Assume $H$ is a Hilbert space and $C$ is a nonempty closed convex subset of $H$. Suppose $\{A_i\}_{i=1}^N : C \to H$ is a finite family of $\rho_i$-inverse-strongly monotone operators with $\rho = \min\{\rho_1, \rho_2, \ldots, \rho_N\}$. Assume that $\{\lambda_i\}_{i=1}^N$ is a positive sequence such that $\sum_{i=1}^N \lambda_i = 1$, and $\bigcap_{i=1}^N A_i^{-1}0 \ne \emptyset$. Then, it holds that
(1) $\sum_{i=1}^N \lambda_i A_i : C \to H$ is $\rho$-inverse-strongly monotone;
(2) $\big(\sum_{i=1}^N \lambda_i A_i\big)^{-1}0 = \bigcap_{i=1}^N A_i^{-1}0$.
Proof. 
(1) Taking any $x, y \in C$, we find that
$$\Big\langle \sum_{i=1}^N \lambda_i A_i x - \sum_{i=1}^N \lambda_i A_i y, x - y \Big\rangle = \sum_{i=1}^N \lambda_i \langle A_i x - A_i y, x - y \rangle \ge \sum_{i=1}^N \lambda_i \rho_i \|A_i x - A_i y\|^2 \ge \rho \sum_{i=1}^N \lambda_i \|A_i x - A_i y\|^2 \ge \rho \Big\| \sum_{i=1}^N \lambda_i A_i x - \sum_{i=1}^N \lambda_i A_i y \Big\|^2,$$
where the last inequality follows from the convexity of $\|\cdot\|^2$. Hence, $\sum_{i=1}^N \lambda_i A_i : C \to H$ is $\rho$-inverse-strongly monotone.
(2) It is obvious that $\bigcap_{i=1}^N A_i^{-1}0 \subset \big(\sum_{i=1}^N \lambda_i A_i\big)^{-1}0$. Next, we prove the reverse inclusion. Assume that $p \in \big(\sum_{i=1}^N \lambda_i A_i\big)^{-1}0$ and $q \in \bigcap_{i=1}^N A_i^{-1}0$, so that $p$ and $q$ are fixed points of $I - r \sum_{i=1}^N \lambda_i A_i$ for any $r > 0$. Then, we have
$$\|p - q\|^2 = \Big\| \big(I - r \sum_{i=1}^N \lambda_i A_i\big)p - \big(I - r \sum_{i=1}^N \lambda_i A_i\big)q \Big\|^2 = \|p - q\|^2 - 2r \sum_{i=1}^N \lambda_i \langle p - q, A_i p - A_i q \rangle + r^2 \Big\| \sum_{i=1}^N \lambda_i A_i p - \sum_{i=1}^N \lambda_i A_i q \Big\|^2 \le \|p - q\|^2 - 2r \sum_{i=1}^N \lambda_i \rho_i \|A_i p - A_i q\|^2 + r^2 \sum_{i=1}^N \lambda_i \|A_i p - A_i q\|^2 \le \|p - q\|^2 - r \sum_{i=1}^N \lambda_i (2\rho - r) \|A_i p - A_i q\|^2. \qquad (4)$$
Without loss of generality, we may take $r \in (0, 2\rho)$. We then deduce from (4) that
$$A_i p = A_i q, \quad i = 1, 2, \ldots, N.$$
In view of $q \in \bigcap_{i=1}^N A_i^{-1}0$, we find that
$$A_i p = A_i q = 0, \quad i = 1, 2, \ldots, N,$$
which implies that
$$p \in \bigcap_{i=1}^N A_i^{-1}0.$$
Hence, we deduce $\big(\sum_{i=1}^N \lambda_i A_i\big)^{-1}0 \subset \bigcap_{i=1}^N A_i^{-1}0$.    □
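Lemma 6 (1) can be sanity-checked in finite dimensions. The snippet below builds random linear inverse-strongly monotone operators $x \mapsto M_i x$ (a symmetric positive semidefinite $M_i$ gives a $1/\lambda_{\max}(M_i)$-inverse-strongly monotone map) and verifies the inequality at random points; all choices here are illustrative, not from the paper:

import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 3
Ms, rhos = [], []
for _ in range(N):
    Q = rng.standard_normal((d, d))
    M = Q @ Q.T                                      # symmetric PSD
    Ms.append(M)
    rhos.append(1.0 / np.linalg.eigvalsh(M).max())   # rho_i for x -> M x
lam = np.full(N, 1.0 / N)                            # convex weights, sum = 1
rho = min(rhos)
M_avg = sum(l * M for l, M in zip(lam, Ms))          # averaged operator
for _ in range(1000):
    z = rng.standard_normal(d)                       # z plays the role of x - y
    lhs = z @ (M_avg @ z)                            # <Ax - Ay, x - y>
    rhs = rho * np.linalg.norm(M_avg @ z) ** 2       # rho * ||Ax - Ay||^2
    assert lhs >= rhs - 1e-9                         # Lemma 6 (1) holds numerically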

3. Main Results

In the following, we assume $C$ is a nonempty closed convex subset of $H$, $\{A_i\}_{i=1}^N : C \to H$ is a finite family of $\rho_i$-inverse-strongly monotone operators with $\rho = \min\{\rho_1, \rho_2, \ldots, \rho_N\}$, and $\tilde A := \sum_{i=1}^N \lambda_i A_i$ with $\sum_{i=1}^N \lambda_i = 1$. Let $A : C \to H$ be a monotone and $L$-Lipschitz continuous mapping such that $\Omega := VI(C, A) \cap \bigcap_{i=1}^N A_i^{-1}0 \ne \emptyset$. It is easily seen that the mapping $I - \kappa \tilde A$ is nonexpansive for any $\kappa \in (0, 2\rho]$ (see also [28]). We deduce from Remark 1 that $\mathrm{Fix}(I - \kappa \tilde A)$ ($= \tilde A^{-1}0$) is closed and convex. Since $VI(C, A)$ is also known to be closed and convex, $\Omega := VI(C, A) \cap \bigcap_{i=1}^N A_i^{-1}0$ is closed and convex. Therefore, there exists a unique element $u^\dagger$ of minimal norm in $\Omega$. In order to find this smallest-norm solution, we first introduce the following generalized regularized variational inequality problem: find a point $u_\alpha \in C$ such that
$$\Big\langle A u_\alpha + \alpha^\mu \sum_{i=1}^N \lambda_i A_i u_\alpha + \alpha u_\alpha, v - u_\alpha \Big\rangle \ge 0, \quad \forall v \in C, \; \mu \in (0, 1). \qquad (5)$$
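It is worth recording why (5) is well posed: the regularized operator $F_\alpha := A + \alpha^\mu \tilde A + \alpha I$ inherits strong monotonicity from the term $\alpha I$. Indeed, for all $x, y \in C$,
$$\langle F_\alpha x - F_\alpha y, x - y \rangle = \langle A x - A y, x - y \rangle + \alpha^\mu \langle \tilde A x - \tilde A y, x - y \rangle + \alpha \|x - y\|^2 \ge \alpha \|x - y\|^2,$$
since $A$ and $\tilde A$ are monotone; so $F_\alpha$ is $\alpha$-strongly monotone, which is the fact used in Lemma 7 (i) below.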
We give the following lemma describing the relationship between $u_\alpha$ and $u^\dagger$.
Lemma 7.
(i) Problem (5) has a unique solution $u_\alpha$ for each $\alpha > 0$;
(ii) $\lim_{\alpha \to 0^+} \|u_\alpha - u^\dagger\| = 0$, and $\|u_\alpha\| \le \|u^\dagger\|$ for all $\alpha > 0$;
(iii) $\|u_\alpha - u_\beta\| \le \frac{|\alpha - \beta|}{\alpha} M$ for all $\alpha, \beta > 0$, where $M$ is a positive constant.
Proof. 
(i) Recalling $\tilde A := \sum_{i=1}^N \lambda_i A_i$, from Remark 1 and Lemma 6 one derives that $\tilde A$ is Lipschitz continuous and monotone. Hence, $A + \alpha^\mu \tilde A + \alpha I$ is Lipschitz continuous and strongly monotone. Therefore, Problem (5) has a unique solution $u_\alpha$ for each $\alpha > 0$.
(ii) We first show that $\|u_\alpha\| \le \|p\|$ for all $p \in \Omega$. Taking any $p \in \Omega$, we have $\langle A p, u_\alpha - p \rangle \ge 0$ and $\tilde A p = 0$. From (5) with $v = p$ and the monotonicity of $A$ and $\tilde A$, we deduce that
$$\langle u_\alpha, p - u_\alpha \rangle \ge \alpha^{-1} \langle A u_\alpha + \alpha^\mu \tilde A u_\alpha, u_\alpha - p \rangle = \alpha^{-1} \langle A u_\alpha, u_\alpha - p \rangle + \alpha^{\mu - 1} \langle \tilde A u_\alpha, u_\alpha - p \rangle \ge \alpha^{-1} \langle A p, u_\alpha - p \rangle + \alpha^{\mu - 1} \langle \tilde A p, u_\alpha - p \rangle \ge 0,$$
so that $\|u_\alpha\|^2 \le \langle u_\alpha, p \rangle \le \|u_\alpha\| \|p\|$, which implies
$$\|u_\alpha\| \le \|p\|, \quad \forall p \in \Omega. \qquad (6)$$
In particular, taking $p = u^\dagger$ gives $\|u_\alpha\| \le \|u^\dagger\|$. Therefore, $\{u_\alpha\}$ is a bounded subset of $C$. As $C$ is convex and closed, $C$ is weakly closed. Hence, there is a subsequence $\{u_{\alpha_j}\}$ of $\{u_\alpha\}$ and some point $u^* \in C$ such that $u_{\alpha_j} \rightharpoonup u^*$ as $j \to \infty$. By the inverse-strong monotonicity of $\tilde A$ (notice Lemma 6), the monotonicity of $A$, and (5), we deduce
$$\alpha_j^\mu \langle \tilde A u_{\alpha_j}, u_{\alpha_j} - p \rangle \le \langle A u_{\alpha_j}, p - u_{\alpha_j} \rangle + \alpha_j \langle u_{\alpha_j}, p - u_{\alpha_j} \rangle \le \langle A p, p - u_{\alpha_j} \rangle + \alpha_j \langle u_{\alpha_j}, p - u_{\alpha_j} \rangle \le \alpha_j \langle u_{\alpha_j}, p - u_{\alpha_j} \rangle,$$
which implies
$$\langle \tilde A u_{\alpha_j}, u_{\alpha_j} - p \rangle \le \alpha_j^{1 - \mu} \langle u_{\alpha_j}, p - u_{\alpha_j} \rangle \to 0 \quad (\text{as } j \to \infty). \qquad (7)$$
Using the inverse-strong monotonicity of $\tilde A$, noticing $\tilde A p = 0$ and (7), we find
$$\rho \|\tilde A u_{\alpha_j}\|^2 = \rho \|\tilde A u_{\alpha_j} - \tilde A p\|^2 \le \langle \tilde A u_{\alpha_j} - \tilde A p, u_{\alpha_j} - p \rangle = \langle \tilde A u_{\alpha_j}, u_{\alpha_j} - p \rangle \le \alpha_j^{1 - \mu} \langle u_{\alpha_j}, p - u_{\alpha_j} \rangle \to 0 \quad (\text{as } j \to \infty).$$
Hence,
$$\lim_{j \to \infty} \|\tilde A u_{\alpha_j}\| = 0.$$
On the other hand, since the mapping $T_\kappa := I - \kappa \tilde A$ is nonexpansive for any $\kappa \in (0, 2\rho]$ (see [28]), Lemma 2 shows that $I - T_\kappa = \kappa \tilde A$ is demiclosed at zero. Hence, we can derive that $\tilde A u^* = 0$, which implies $u^* \in \tilde A^{-1}0 = \bigcap_{i=1}^N A_i^{-1}0$.
Now, we show that $u^* \in VI(C, A)$. By the monotonicity of $A$ and $\tilde A$ and (5), we have
$$\langle A v + \alpha_j^\mu \tilde A v + \alpha_j v, v - u_{\alpha_j} \rangle \ge 0, \quad \forall v \in C, \; \mu \in (0, 1).$$
It follows that
$$\langle A v, v - u_{\alpha_j} \rangle + \alpha_j^\mu \langle \tilde A v, v - u_{\alpha_j} \rangle + \alpha_j \langle v, v - u_{\alpha_j} \rangle \ge 0, \quad \forall v \in C. \qquad (8)$$
Since $\alpha_j \to 0$ as $j \to \infty$ and $u_{\alpha_j} \rightharpoonup u^*$, we see that (8) implies
$$\langle A v, v - u^* \rangle \ge 0, \quad \forall v \in C.$$
Therefore, in view of Lemma 4, we obtain $u^* \in VI(C, A)$. This in turn implies
$$u^* \in \Omega. \qquad (9)$$
Next, we show $u_{\alpha_j} \to u^*$ as $j \to \infty$. Indeed, by the weak lower semicontinuity of the norm and (6) (applied with $p = u^* \in \Omega$), we obtain
$$\|u^*\| \le \liminf_{j \to \infty} \|u_{\alpha_j}\| \le \limsup_{j \to \infty} \|u_{\alpha_j}\| \le \|u^*\|. \qquad (10)$$
Namely,
$$\lim_{j \to \infty} \|u_{\alpha_j}\| = \|u^*\|. \qquad (11)$$
In view of Lemma 3 and the fact that $u_{\alpha_j} \rightharpoonup u^*$, we derive that $\lim_{j \to \infty} u_{\alpha_j} = u^*$. Further, in view of (6), (9), and (11), we see that
$$\|u^*\| = \inf\{\|p\| : p \in \Omega\} = \|u^\dagger\|.$$
Since $u^\dagger$ is the unique element of smallest norm in $\Omega$, it follows that $u^* = u^\dagger$. To show that $\lim_{\alpha \to 0^+} \|u_\alpha - u^\dagger\| = 0$, take another subsequence $\{u_{\alpha_k}\}$ of $\{u_\alpha\}$ converging weakly to some point $\hat u$. Following the lines of the proof above, we deduce that $\lim_{k \to \infty} u_{\alpha_k} = \hat u \in \Omega$ and $\hat u = u^\dagger$. Hence, we derive that $\lim_{\alpha \to 0^+} \|u_\alpha - u^\dagger\| = 0$.
(iii) Assume that $0 < \alpha < \beta < 1$ and $u_\alpha$, $u_\beta$ are the solutions of Problem (5) with parameters $\alpha$ and $\beta$, respectively. We find that
$$\langle A u_\alpha + \alpha^\mu \tilde A u_\alpha + \alpha u_\alpha, u_\beta - u_\alpha \rangle \ge 0$$
and
$$\langle A u_\beta + \beta^\mu \tilde A u_\beta + \beta u_\beta, u_\alpha - u_\beta \rangle \ge 0.$$
Adding the above two inequalities and noticing the monotonicity of $A$ and $\tilde A$, we obtain
$$(\beta^\mu - \alpha^\mu) \langle \tilde A u_\beta, u_\alpha - u_\beta \rangle + \alpha \langle u_\alpha, u_\beta - u_\alpha \rangle + \beta \langle u_\beta, u_\alpha - u_\beta \rangle \ge 0, \quad \mu \in (0, 1),$$
which implies
$$\|u_\alpha - u_\beta\| \le \frac{|\alpha - \beta|}{\alpha} \|u_\beta\| + \frac{|\beta^\mu - \alpha^\mu|}{\alpha} \|\tilde A u_\beta\|.$$
The set $\{\tilde A u_\beta\}$ is bounded because $\tilde A$ is Lipschitz continuous and $\{u_\beta\}$ is bounded by (6). Moreover, using (6) and Lagrange's mean value theorem applied to the function $f(x) = x^\mu$ on $[0, +\infty)$, we obtain conclusion (iii).    □
We now introduce the following algorithm to find the smallest-norm solution $u^\dagger$ of problem (2).
Assumption 1. 
(C1) $0 < a \le \tau_n \le b < (L + \rho^{-1})^{-1}$;
(C2) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C3) $\lim_{n \to \infty} \frac{|\alpha_{n+1} - \alpha_n|}{\alpha_n^2} = 0$;
(C4) $\Omega := VI(C, A) \cap \bigcap_{i=1}^N A_i^{-1}0 \ne \emptyset$.
Theorem 1.
Under Assumption 1 (C1)–(C4), the sequence $\{x_n\}$ generated by Algorithm 3 converges strongly to the minimal-norm solution $u^\dagger$ of problem (2).
Algorithm 3: The subgradient extragradient method with regularization (SEGMR).
Initialization: Set $\{\tau_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, +\infty)$, $\mu \in (0, 1)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C\Big(x_n - \tau_n \big(A x_n + \alpha_n^\mu \sum_{i=1}^N \lambda_i A_i x_n + \alpha_n x_n\big)\Big),$$
construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$$T_n := \Big\{ w \in H : \Big\langle x_n - \tau_n \big(A x_n + \alpha_n^\mu \sum_{i=1}^N \lambda_i A_i x_n + \alpha_n x_n\big) - y_n, w - y_n \Big\rangle \le 0 \Big\},$$
and calculate the next iterate
$$x_{n+1} = P_{T_n}\Big(x_n - \tau_n \big(A y_n + \alpha_n^\mu \sum_{i=1}^N \lambda_i A_i y_n + \alpha_n x_n\big)\Big).$$
Step 2. Set $n := n + 1$ and return to Step 1.
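A hedged sketch of Algorithm 3 under the conventions of the earlier snippets; A_list, lam, tau, and alpha are user-supplied data assumed to satisfy Assumption 1, and all names are illustrative:

import numpy as np

def segmr(A, A_list, lam, proj_C, x0, tau, alpha, mu, max_iter=1000):
    # SEGMR (Algorithm 3): SEGM applied to the regularized operator
    # A + alpha_n^mu * sum_i lam_i * A_i + alpha_n * I, anchored at x_n.
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        def F(u):
            # regularized operator at u; the last term is alpha_n * x_n, not alpha_n * u
            tilde = sum(l * Ai(u) for l, Ai in zip(lam, A_list))
            return A(u) + alpha[n] ** mu * tilde + alpha[n] * x
        z = x - tau[n] * F(x)
        y = proj_C(z)
        v = z - y                                  # normal of the half-space T_n
        w = x - tau[n] * F(y)
        x = w - max(np.dot(v, w - y), 0.0) / max(np.dot(v, v), 1e-16) * v
    return x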
Proof. 
Write $\tilde A := \sum_{i=1}^N \lambda_i A_i$ and denote by $x_{\alpha_n}$ the solution of Problem (5) with $\alpha = \alpha_n$. From (C2) and Lemma 7, we have $x_{\alpha_n} \to u^\dagger$ as $n \to \infty$. Hence, it is sufficient to prove that
$$\lim_{n \to \infty} \|x_n - x_{\alpha_n}\| = 0.$$
Denoting $v_n = x_n - \tau_n (A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n)$ and noticing $x_{\alpha_n} \in C \subset T_n$ for any $n \in \mathbb{N}$, we deduce that
$$\|x_{n+1} - x_{\alpha_n}\|^2 = \|P_{T_n} v_n - x_{\alpha_n}\|^2 = \|P_{T_n} v_n - v_n\|^2 + \|v_n - x_{\alpha_n}\|^2 + 2 \langle P_{T_n} v_n - v_n, v_n - x_{\alpha_n} \rangle. \qquad (13)$$
We obtain from Lemma 1 that
$$2 \|P_{T_n} v_n - v_n\|^2 + 2 \langle P_{T_n} v_n - v_n, v_n - x_{\alpha_n} \rangle = 2 \langle P_{T_n} v_n - v_n, P_{T_n} v_n - x_{\alpha_n} \rangle \le 0.$$
This together with (13) implies
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le \|v_n - x_{\alpha_n}\|^2 - \|P_{T_n} v_n - v_n\|^2 = \|(x_n - x_{\alpha_n}) - \tau_n (A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n)\|^2 - \|(x_n - x_{n+1}) - \tau_n (A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n)\|^2 = \|x_n - x_{\alpha_n}\|^2 - \|x_n - x_{n+1}\|^2 + 2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - x_{n+1} \rangle = \|x_n - x_{\alpha_n}\|^2 - \|x_n - x_{n+1}\|^2 + 2 \langle x_n - y_n, y_n - x_{n+1} \rangle + 2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle (A y_n + \alpha_n^\mu \tilde A y_n) - (A x_n + \alpha_n^\mu \tilde A x_n), y_n - x_{n+1} \rangle + 2 \langle x_n - \tau_n A x_n - \tau_n \alpha_n^\mu \tilde A x_n - \tau_n \alpha_n x_n - y_n, x_{n+1} - y_n \rangle. \qquad (14)$$
Noticing the definition of $T_n$ and that $x_{n+1} \in T_n$, we have
$$\langle x_n - \tau_n A x_n - \tau_n \alpha_n^\mu \tilde A x_n - \tau_n \alpha_n x_n - y_n, x_{n+1} - y_n \rangle \le 0. \qquad (15)$$
Moreover, we have
$$2 \langle x_n - y_n, y_n - x_{n+1} \rangle = \|x_n - x_{n+1}\|^2 - \|x_n - y_n\|^2 - \|y_n - x_{n+1}\|^2. \qquad (16)$$
In view of (14)–(16), we obtain
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le \|x_n - x_{\alpha_n}\|^2 - \|x_n - y_n\|^2 - \|y_n - x_{n+1}\|^2 + 2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle (A y_n + \alpha_n^\mu \tilde A y_n) - (A x_n + \alpha_n^\mu \tilde A x_n), y_n - x_{n+1} \rangle. \qquad (17)$$
It follows from the Lipschitz continuity of $A$ and $\tilde A$ that
$$2 \tau_n \langle (A y_n + \alpha_n^\mu \tilde A y_n) - (A x_n + \alpha_n^\mu \tilde A x_n), y_n - x_{n+1} \rangle \le 2 \tau_n \big( \|A y_n - A x_n\| + \|\tilde A y_n - \tilde A x_n\| \big) \|y_n - x_{n+1}\| \le 2 \tau_n \bar L \|y_n - x_n\| \|y_n - x_{n+1}\| \le \tau_n^2 \bar L^2 \|y_n - x_n\|^2 + \|y_n - x_{n+1}\|^2,$$
where $\bar L = L + \rho^{-1}$. This together with (17) implies that
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le \|x_n - x_{\alpha_n}\|^2 - (1 - \tau_n^2 \bar L^2) \|x_n - y_n\|^2 + 2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle. \qquad (18)$$
Note that
$$2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle = 2 \tau_n \langle A y_n - A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \langle A x_{\alpha_n} + \alpha_n^\mu \tilde A x_{\alpha_n} + \alpha_n x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \alpha_n^\mu \langle \tilde A y_n - \tilde A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle + 2 \tau_n \alpha_n \langle x_n - x_{\alpha_n}, x_{\alpha_n} - y_n \rangle. \qquad (19)$$
As $\tilde A$ and $A$ are monotone on $C$ and $x_{\alpha_n}$ is the solution of Problem (5) with $\alpha = \alpha_n$, we obtain
$$\langle A y_n - A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \le 0, \quad \langle \tilde A y_n - \tilde A x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \le 0,$$
and $\langle A x_{\alpha_n} + \alpha_n^\mu \tilde A x_{\alpha_n} + \alpha_n x_{\alpha_n}, x_{\alpha_n} - y_n \rangle \le 0$, which imply (by (19)) that
$$2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle \le 2 \tau_n \alpha_n \langle x_n - x_{\alpha_n}, x_{\alpha_n} - y_n \rangle = 2 \tau_n \alpha_n \langle x_n - x_{\alpha_n}, x_{\alpha_n} - x_n \rangle + 2 \tau_n \alpha_n \langle x_n - x_{\alpha_n}, x_n - y_n \rangle \le -2 \tau_n \alpha_n \|x_n - x_{\alpha_n}\|^2 + 2 \tau_n \alpha_n \|x_n - x_{\alpha_n}\| \|x_n - y_n\|.$$
Hence, due to the fact that $2 \|x_n - x_{\alpha_n}\| \|x_n - y_n\| \le \|x_n - x_{\alpha_n}\|^2 + \|x_n - y_n\|^2$, we obtain
$$2 \tau_n \langle A y_n + \alpha_n^\mu \tilde A y_n + \alpha_n x_n, x_{\alpha_n} - y_n \rangle \le -2 \tau_n \alpha_n \|x_n - x_{\alpha_n}\|^2 + \tau_n \alpha_n \big( \|x_n - x_{\alpha_n}\|^2 + \|x_n - y_n\|^2 \big) \le -\tau_n \alpha_n \|x_n - x_{\alpha_n}\|^2 + \tau_n \alpha_n \|x_n - y_n\|^2. \qquad (20)$$
From (18) and (20), we have
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le (1 - \tau_n \alpha_n) \|x_n - x_{\alpha_n}\|^2 - (1 - \tau_n^2 \bar L^2 - \tau_n \alpha_n) \|x_n - y_n\|^2. \qquad (21)$$
As $0 < a \le \tau_n \le b < \bar L^{-1}$ and $\alpha_n \to 0$, there exists $n_0 \ge 1$ such that
$$1 - \tau_n^2 \bar L^2 - \tau_n \alpha_n > 1 - b^2 \bar L^2 > 0 \quad \text{and} \quad \tau_n \alpha_n \in (0, 2), \quad \forall n \ge n_0. \qquad (22)$$
Therefore, we derive from (21) that
$$\|x_{n+1} - x_{\alpha_n}\|^2 \le (1 - \tau_n \alpha_n) \|x_n - x_{\alpha_n}\|^2. \qquad (23)$$
Notice that, by Young's inequality,
$$2 \|x_{\alpha_{n+1}} - x_{\alpha_n}\| \|x_{\alpha_{n+1}} - x_{n+1}\| \le \frac{2}{\tau_n \alpha_n} \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + \frac{\tau_n \alpha_n}{2} \|x_{\alpha_{n+1}} - x_{n+1}\|^2. \qquad (24)$$
Hence, for each $n \ge n_0$, from (22), (24), and Lemma 7 (iii), we have
$$\|x_{\alpha_n} - x_{n+1}\|^2 = \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + \|x_{\alpha_{n+1}} - x_{n+1}\|^2 - 2 \langle x_{\alpha_{n+1}} - x_{\alpha_n}, x_{\alpha_{n+1}} - x_{n+1} \rangle \ge \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + \|x_{\alpha_{n+1}} - x_{n+1}\|^2 - 2 \|x_{\alpha_{n+1}} - x_{\alpha_n}\| \|x_{\alpha_{n+1}} - x_{n+1}\| \ge \Big(1 - \frac{2}{\tau_n \alpha_n}\Big) \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + \Big(1 - \frac{\tau_n \alpha_n}{2}\Big) \|x_{\alpha_{n+1}} - x_{n+1}\|^2 = \frac{2 - \tau_n \alpha_n}{2} \|x_{\alpha_{n+1}} - x_{n+1}\|^2 - \frac{2 - \tau_n \alpha_n}{\tau_n \alpha_n} \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 \ge \frac{2 - \tau_n \alpha_n}{2} \|x_{\alpha_{n+1}} - x_{n+1}\|^2 - \frac{(2 - \tau_n \alpha_n)(\alpha_{n+1} - \alpha_n)^2}{\tau_n \alpha_n^3} M^2,$$
where the last step uses $\|x_{\alpha_{n+1}} - x_{\alpha_n}\| \le \frac{|\alpha_{n+1} - \alpha_n|}{\alpha_n} M$ from Lemma 7 (iii),
which implies
$$\|x_{\alpha_{n+1}} - x_{n+1}\|^2 \le \frac{2}{2 - \tau_n \alpha_n} \|x_{\alpha_n} - x_{n+1}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\tau_n \alpha_n^3} M^2. \qquad (25)$$
Therefore, it follows from (23) and (25) that
$$\|x_{\alpha_{n+1}} - x_{n+1}\|^2 \le \frac{2 (1 - \tau_n \alpha_n)}{2 - \tau_n \alpha_n} \|x_n - x_{\alpha_n}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\tau_n \alpha_n^3} M^2 = \Big(1 - \frac{\tau_n \alpha_n}{2 - \tau_n \alpha_n}\Big) \|x_n - x_{\alpha_n}\|^2 + \frac{2 (\alpha_{n+1} - \alpha_n)^2}{\tau_n \alpha_n^3} M^2. \qquad (26)$$
Thus, Lemma 5 together with (C1)–(C3) and (26) ensures that $\lim_{n \to \infty} \|x_n - x_{\alpha_n}\| = 0$, which implies $\lim_{n \to \infty} \|x_n - u^\dagger\| = 0$. This finishes the proof.    □
Remark 2.
Compared with the EGM (3) and Algorithm 1 (SEGM), our Algorithm 3 (SEGMR) differs in the following aspects.
  • In Algorithm 3, the second projection of the EGM (3) is replaced by a projection onto a specially constructed half-space. The projection onto a half-space is explicit, hence Algorithm 3 can be considered an improvement of the EGM in terms of the cost of each iteration.
  • Algorithm 3 is close to Algorithm 1. However, the advantage of Algorithm 3 is its strong convergence, which is more useful than the weak convergence obtained for Algorithm 1 in infinite-dimensional Hilbert spaces.
  • The strong convergence of Algorithm 3 is obtained via the regularization method, which differs from viscosity-type methods.
From the definition of $T_n$, we see that $C \subset T_n$. When the structures of $A$ and the feasible set $C$ are simple, we recover the following result.
Corollary 1 (Theorem 1 of [22]). 
The sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to the minimal-norm solution $u^\dagger$ of the VIP (1) under the following assumptions:
(C1) $0 < a \le \tau_n \le b < L^{-1}$;
(C2) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C3) $\lim_{n \to \infty} \frac{|\alpha_{n+1} - \alpha_n|}{\alpha_n^2} = 0$;
(C4) $VI(C, A) \ne \emptyset$.

4. Application to Split Minimization Problems

Assume $H$ is a Hilbert space and $C$ is a nonempty closed convex subset of $H$. The constrained convex minimization problem is to find a point $v^* \in C$ such that
$$\varphi(v^*) = \min_{v \in C} \varphi(v), \qquad (27)$$
where $\varphi : C \to \mathbb{R}$ is a continuously differentiable function. We need the following lemma to prove Theorem 2.
Lemma 8
([29]). A necessary condition of optimality for a point $v^* \in C$ to be a solution of the minimization problem (27) is that $v^*$ solves the variational inequality
$$\langle \nabla \varphi(v^*), u - v^* \rangle \ge 0, \quad \forall u \in C. \qquad (28)$$
Equivalently, $v^* \in C$ solves the fixed point equation
$$v^* = P_C(v^* - \tau \nabla \varphi(v^*))$$
for every constant $\tau > 0$. If, in addition, $\varphi$ is convex, then the optimality condition (28) is also sufficient.
Theorem 2.
Let $\varphi : C \to \mathbb{R}$ be a differentiable and convex function whose gradient $\nabla \varphi$ is $L$-Lipschitz continuous. Suppose that the optimization problem $\min_{x \in C} \varphi(x)$ is consistent, i.e., its solution set is nonempty. Then, $\{x_n\}$ defined by Algorithm 4 converges strongly to the minimal-norm solution $u^\dagger$ of Problem (27) under Assumption 1 (C1)–(C4).
Algorithm 4: The subgradient extragradient method with regularization for minimization problems.
Initialization: Set $\{\tau_n\} \subset (0, +\infty)$, $\{\alpha_n\} \subset (0, +\infty)$ and let $x_0 \in H$ be arbitrary.
Step 1. Given $x_n$ ($n \ge 0$), compute
$$y_n = P_C\big(x_n - \tau_n (\nabla \varphi(x_n) + \alpha_n x_n)\big),$$
construct the half-space $T_n$ whose bounding hyperplane supports $C$ at $y_n$,
$$T_n := \{ w \in H : \langle x_n - \tau_n (\nabla \varphi(x_n) + \alpha_n x_n) - y_n, w - y_n \rangle \le 0 \},$$
and calculate the next iterate
$$x_{n+1} = P_{T_n}\big(x_n - \tau_n (\nabla \varphi(y_n) + \alpha_n x_n)\big).$$
Step 2. Set $n := n + 1$ and return to Step 1.
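A hedged sketch of Algorithm 4, together with a toy run on the constrained least-squares problem $\min_{x \in C} \frac{1}{2}\|Bx - c\|^2$ over a Euclidean ball; the matrix B, vector c, radius, and parameter sequences are illustrative choices of ours (with $\alpha_n = 1/(n+1)$ mirroring Example 3 below), not data from the paper:

import numpy as np

def segmr_minimize(grad, proj_C, x0, tau, alpha, max_iter=2000):
    # Algorithm 4: Algorithm 3 with A = grad(phi) and no family {A_i}.
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        z = x - tau[n] * (grad(x) + alpha[n] * x)
        y = proj_C(z)
        v = z - y                                   # normal of the half-space T_n
        w = x - tau[n] * (grad(y) + alpha[n] * x)   # note the alpha_n * x_n anchor
        x = w - max(np.dot(v, w - y), 0.0) / max(np.dot(v, v), 1e-16) * v
    return x

# toy run: phi(x) = 0.5 * ||B x - c||^2 over C = {x : ||x|| <= 1}
B = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([3.0, 1.0])
grad = lambda x: B.T @ (B @ x - c)                  # Lipschitz with L = ||B^T B||_2 = 4
proj_unit_ball = lambda x: x / max(np.linalg.norm(x), 1.0)
T = 2000
tau = np.full(T, 0.1)                               # 0.1 < 1/L = 0.25 here
alpha = 1.0 / (np.arange(T) + 1.0)
x_min = segmr_minimize(grad, proj_unit_ball, np.zeros(2), tau, alpha, T)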

5. Numerical Examples

This section presents an experiment illustrating the numerical behavior of the proposed method in comparison with other methods in the literature. We compare Algorithm 3 (SEGMR) with the EGM (3) and Algorithm 1 (SEGM).
We first give two projection formulas which are used in the numerical experiments.
(1) The projection of $p$ onto a half-space $H_{v,u} = \{x : \langle v, x - u \rangle \le 0\}$ is computed by
$$P_{H_{v,u}}(p) = p - \max\{ \langle v, p - u \rangle / \|v\|^2, 0 \} \, v.$$
(2) The projection of $p$ onto a ball $B(u, r) = \{x : \|x - u\| \le r\}$ is computed by
$$P_{B(u,r)}(p) = u + \frac{r}{\max\{\|p - u\|, r\}} (p - u).$$
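Both formulas transcribe directly into code; this is a plain transcription with illustrative names:

import numpy as np

def proj_halfspace(p, v, u):
    # projection onto {x : <v, x - u> <= 0}
    return p - max(np.dot(v, p - u) / np.dot(v, v), 0.0) * v

def proj_ball(p, u, r):
    # projection onto {x : ||x - u|| <= r}
    d = p - u
    return u + (r / max(np.linalg.norm(d), r)) * d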
Example 3.
Let $H = \mathbb{R}^2$ and $C = \{x \in \mathbb{R}^2 : \|x\| \le 2\}$. Let $A : C \to H$ be given by $A x = \frac{1}{2} x$, and let $A_i : C \to H$ be given by $A_i x = \frac{1}{2} x$ for all $i = 1, 2, \ldots, N$. It is not difficult to check that $0$ is the smallest-norm solution of problem (2) in $\Omega$. We choose $\alpha_n = \frac{1}{n+1}$ and $\tau_n = \mu = \frac{1}{3}$, and test our Algorithm 3 for different initial values $x_0$; the results are reported in the figures below.
As shown in Figure 1 and Figure 2, one finds that Algorithm 3 (SEGMR) behaves better than the EGM (3) and Algorithm 1 (SEGM).
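For reference, a runnable sketch of Example 3 using the projection helpers above and the segmr sketch from Section 3; the choice N = 5 and the reported quantity $\|x_n\|$ (the distance to the solution 0) are our illustrative assumptions about the setup behind the figures:

import numpy as np

A_op = lambda x: 0.5 * x                      # A x = x / 2
N = 5
A_list = [A_op] * N                           # A_i x = x / 2 for all i
lam = np.full(N, 1.0 / N)
proj_C = lambda x: proj_ball(x, np.zeros(2), 2.0)
T = 200
tau = np.full(T, 1.0 / 3.0)                   # tau_n = 1/3
alpha = 1.0 / (np.arange(T) + 1.0)            # alpha_n = 1 / (n + 1)
x0 = np.array([5.0, 8.0])
x_T = segmr(A_op, A_list, lam, proj_C, x0, tau, alpha, mu=1.0 / 3.0, max_iter=T)
print(np.linalg.norm(x_T))                    # approaches 0, the minimal-norm solution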

6. Conclusions

In this paper, we proposed a new numerical method for obtaining a common solution to variational inequality problems involving monotone mappings and null point problems involving a finite family of inverse-strongly monotone mappings. The method is built on the regularization method and the subgradient extragradient method. Strong convergence theorems are proved for sequences generated by the method under appropriate conditions. A numerical experiment is also performed to support the efficiency of the proposed method over some existing methods.

Author Contributions

Conceptualization, Y.S.; Methodology, Y.S.; Data curation, X.C.; Formal analysis, Y.S.; Writing—review & editing, Y.S. and X.C.; Writing—original draft, Y.S.; Funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Scientific Research Project for Colleges and Universities in Henan Province (grant number 20A110038).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable comments, which improved the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310.
2. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. A general iterative method for finding common fixed point of finite family of demicontractive mappings with accretive variational inequality problems in Banach spaces. Nonlinear Stud. 2020, 27, 1–24.
3. Bnouhachem, A.; Chen, Y. An iterative method for a common solution of generalized mixed equilibrium problem, variational inequalities and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014, 155.
4. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algor. 2021, 88, 1419–1456.
5. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Matecon 1976, 12, 747–756.
6. Song, Y.; Postolache, M. Modified Inertial Forward-Backward Algorithm in Banach Spaces and Its Application. Mathematics 2021, 9, 1365.
7. Song, Y. Hybrid Inertial Accelerated Algorithms for Solving Split Equilibrium and Fixed Point Problems. Mathematics 2021, 9, 2680.
8. Song, Y.; Pei, Y. A new viscosity semi-implicit midpoint rule for strict pseudo-contractions and (α, β)-generalized hybrid mappings. Optimization 2021, 70, 2635–2653.
9. Alvarez, F.; Attouch, H. An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
10. Akashi, S.; Takahashi, W. Weak convergence theorem for an infinite family of demimetric mappings in a Hilbert space. J. Nonlinear Convex Anal. 2016, 10, 2159–2169.
11. Yao, Y.; Cho, Y.J.; Liou, Y.C. Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212, 242–250.
12. Song, Y.L.; Ceng, L.C. Strong convergence of a general iterative algorithm for a finite family of accretive operators in Banach spaces. Fixed Point Theory Appl. 2015, 2015, 90.
13. Tan, B.; Qin, X.; Yao, J.C. Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 2021, 87, 20.
14. Hieu, D.V.; Anh, P.K.; Muu, L.D. Strong convergence of subgradient extragradient method with regularization for solving variational inequalities. Optim. Eng. 2021, 22, 2575–2602.
15. Hieu, D.V.; Moudafi, A. Regularization projection method for solving bilevel variational inequality problem. Optim. Lett. 2021, 15, 205–229.
16. Ceng, L.C. Modified inertial subgradient extragradient algorithms for pseudomonotone equilibrium problems with the constraint of nonexpansive mappings. Nonlinear Var. Anal. 2021, 5, 281–297.
17. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
18. Malitsky, Y. Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 2015, 25, 502–520.
19. Hieu, D.V.; Cho, Y.J.; Xiao, Y.B. Golden ratio algorithms with new stepsize rules for variational inequalities. Math. Methods Appl. Sci. 2019, 42, 6067–6082.
20. Hieu, D.V.; Cho, Y.J.; Xiao, Y.B. Modified extragradient method for pseudomonotone variational inequalities in infinite dimensional Hilbert spaces. Vietnam J. Math. 2021, 49, 1165–1183.
21. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
22. Hieu, D.V.; Quy, P.K.; Duong, H.N. Strong convergence of double-projection method for variational inequality problems. Comput. Appl. Math. 2021, 40, 73.
23. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Dekker: New York, NY, USA; Basel, Switzerland, 1984.
24. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Univ. Press: Cambridge, UK, 1990.
25. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
26. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
27. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
28. Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings. Nonlinear Anal. Theory Methods Appl. 2005, 61, 341–350.
29. Meng, S.; Xu, H.K. Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1, 35–43.
Figure 1. Comparison of Algorithm 3 with the existing ones. (a) $x_0 = (5, 8)^T$; (b) $x_0 = (3, 1)^T$.
Figure 2. Comparison of Algorithm 3 with the existing ones. (a) $x_0 = (3, 10)^T$; (b) $x_0 = (20, 10)^T$.