Article

Regularization Method for the Variational Inequality Problem over the Set of Solutions to the Generalized Equilibrium Problem

by Yanlai Song 1,*,† and Omar Bazighifan 2,3,*,†
1 College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
2 Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
3 Department of Mathematics, Faculty of Science, Hadhramout University, Mukalla 50512, Yemen
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(14), 2443; https://doi.org/10.3390/math10142443
Submission received: 28 May 2022 / Revised: 5 July 2022 / Accepted: 10 July 2022 / Published: 13 July 2022
(This article belongs to the Section Engineering Mathematics)

Abstract: The paper is devoted to bilevel problems: variational inequality problems over the set of solutions to generalized equilibrium problems in a Hilbert space. To solve these problems, we propose an iterative algorithm that combines Tseng's extragradient method, the inertial technique and iterative regularization. The proposed method adopts a non-monotonic stepsize rule without any line search procedure. Under suitable conditions, strong convergence of the resulting method is obtained. Several numerical experiments are also provided to illustrate the efficiency of the proposed method with respect to certain existing ones.

1. Introduction

Let H be a real Hilbert space, let C be a closed, convex and nonempty subset of H, let A : C → H be an operator, and let f : C × C → R be a bifunction. The generalized equilibrium problem (GEP) is defined as: find a point v* ∈ C such that
$$ f(v^*, v) + \langle A v^*,\ v - v^* \rangle \ge 0, \quad \forall v \in C. \tag{1} $$
Denote by GEP(f, A) the set of solutions of the GEP. Next, we give two special cases of the GEP (1).
(I) If A = 0, then the GEP (1) becomes the equilibrium problem (EP): find a point v* ∈ C such that
$$ f(v^*, v) \ge 0, \quad \forall v \in C. $$
The GEP unifies in a simple form many mathematical models in applied sciences, such as variational inequalities [1], operator equations [2,3], optimization problems [4], fixed point problems [5], saddle point problems [6], the Nash equilibrium problem [7] and so on. Due to its importance in applications, this problem has received attention from several authors.
(II) If f(u, v) = 0 for all u, v ∈ C, then the GEP (1) becomes the variational inequality problem (VIP): find a point v* ∈ C such that
$$ \langle A v^*,\ v - v^* \rangle \ge 0, \quad \forall v \in C, \tag{2} $$
where the solution set is denoted by VI(C, A). The VIP theory was proposed independently by Fichera [8] and Stampacchia [9]. It provides a natural, convenient and unified framework for the study of many problems in engineering, operations research, necessary optimality conditions and engineering mechanics. It covers, as special cases, well-known problems in mathematical programming, such as systems of optimization and control problems [6], traffic network problems [10], nonlinear equations [11] and fixed point problems [12].
Recently, many authors have introduced and studied various methods for solving the VIP. The simplest and oldest projection method is the gradient projection method:
$$ x_{n+1} = P_C(x_n - \eta A x_n). \tag{3} $$
It is well known that the iterative process defined by (3) converges to an element of VI(C, A) when A is strongly monotone and L-Lipschitz continuous. In order to weaken such a strong hypothesis, Korpelevich [13] introduced the following extragradient method (EGM) for an L-Lipschitz continuous and monotone operator A. This method is of the form:
$$ x_{n+1} = P_C(x_n - \eta A y_n), \qquad y_n = P_C(x_n - \eta A x_n), $$
where η ∈ (0, 1/L). Korpelevich proved that the sequence {x_n} generated by the EGM converges to a solution of the VIP. Notice that the EGM requires two projections onto the feasible set C and two evaluations of the cost operator A in each iteration. This, in general, may be computationally expensive if the feasible set C and the operator A have a complicated structure, and thus it might affect the efficiency of the method (see, e.g., [14,15,16,17]). In order to overcome this drawback, Tseng's extragradient method (TEGM) [18] has been suggested. This method is described as follows (Algorithm 1):
Algorithm 1: Tseng's extragradient method (TEGM).
Initialization: Given x_0 ∈ H arbitrary, take η > 0.
Step 1. Calculate
$$ y_n = P_C(x_n - \eta A x_n). $$
Step 2. Calculate the next iterate
$$ x_{n+1} = y_n - \eta (A y_n - A x_n). $$
A major improvement of the TEGM over the EGM is that it requires computing only one projection onto C in each iteration. However, the main shortcoming of the TEGM is the choice of stepsize. It should be noted that the stepsize plays a significant role in the convergence analysis, and in the TEGM the stepsize is defined in terms of the Lipschitz constant of the monotone operator.
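To make the updates of Algorithm 1 concrete, the following Python sketch iterates Tseng's method for a monotone operator A and a projection proj_C supplied by the caller. The fixed stepsize, the operator A(x) = x/4 and the closed-unit-ball projection in the usage lines are illustrative assumptions of this sketch, not part of the original text.

```python
import numpy as np

def tseng_extragradient(A, proj_C, x0, eta, tol=1e-6, max_iter=1000):
    """Tseng's extragradient method (Algorithm 1), sketched with a fixed stepsize eta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - eta * A(x))          # Step 1: projected gradient step
        x_new = y - eta * (A(y) - A(x))     # Step 2: Tseng correction (no second projection)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative data (assumed): A(x) = x/4 is monotone and 1/4-Lipschitz,
# and C is the closed unit ball, so P_C(x) = x / max(1, ||x||).
A = lambda x: x / 4.0
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = tseng_extragradient(A, proj_ball, x0=np.array([0.5, -0.3, 0.8]), eta=0.5)
print(x_star)  # close to 0, the solution of <Ax, v - x> >= 0 over the unit ball
```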
Recently, Yang et al. [19] introduced a new self-adaptive Tseng's extragradient method (STEGM) for solving VIPs. In the following Algorithm 2, the mapping g is a contraction on H. It should be noted that strong convergence of Algorithm 2 has been established in real Hilbert spaces. However, the stepsize used in Algorithm 2 is monotonically decreasing, which may affect the execution efficiency of the method. Following the ideas of the EGM, the TEGM and the STEGM, Tan and Qin [20] proposed the following inertial extragradient method with non-monotonic stepsizes (SVTEGM) for solving VIPs in Hilbert spaces.
Algorithm 2: Self-adaptive Tseng's extragradient method (STEGM).
Initialization: Given x_0 ∈ H arbitrary, take {α_n} ⊂ (0, +∞), η_0 > 0 and υ ∈ (0, 1).
Step 1. Calculate
$$ y_n = P_C(x_n - \eta_n A x_n) $$
and
$$ z_n = y_n - \eta_n (A y_n - A x_n). $$
If x_n = y_n, then stop: x_n is a solution. Otherwise:
Step 2. Compute
$$ x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) z_n, $$
where {η_n} is updated by
$$ \eta_{n+1} = \begin{cases} \min\left\{ \dfrac{\upsilon \|x_n - y_n\|}{\|A x_n - A y_n\|},\ \eta_n \right\}, & \text{if } A x_n \ne A y_n; \\[2mm] \eta_n, & \text{otherwise.} \end{cases} $$
Step 3. Set n := n + 1 and return to Step 1.
They proved that the iterative process {x_n} constructed by Algorithm 3 converges to a solution of the VIP (2) under certain appropriate conditions.
Algorithm 3: The self-adaptive viscosity-type inertial Tseng extragradient algorithm (SVTEGM).
Initialization: Given x_0, x_1 ∈ H arbitrary, take {ξ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), ρ > 0, η_0 > 0, and υ ∈ (0, 1).
Step 1. Calculate
$$ u_n = x_n + \mu_n (x_n - x_{n-1}), $$
where
$$ \mu_n = \begin{cases} \min\left\{ \dfrac{\xi_n}{\|x_n - x_{n-1}\|},\ \rho \right\}, & \text{if } x_n \ne x_{n-1}; \\[2mm] \rho, & \text{otherwise.} \end{cases} $$
Step 2. Compute
$$ y_n = P_C(u_n - \eta_n A u_n) $$
and
$$ z_n = y_n - \eta_n (A y_n - A u_n), $$
where {η_n} is updated by
$$ \eta_{n+1} = \begin{cases} \min\left\{ \dfrac{\upsilon \|u_n - y_n\|}{\|A u_n - A y_n\|},\ \eta_n + \epsilon_n \right\}, & \text{if } A u_n \ne A y_n; \\[2mm] \eta_n + \epsilon_n, & \text{otherwise.} \end{cases} $$
Step 3. Compute
$$ x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) z_n. $$
Step 4. Set n := n + 1 and return to Step 1.
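The two ingredients that distinguish the SVTEGM from the plain TEGM are the inertial extrapolation step and the non-monotonic stepsize rule. A minimal Python sketch of just these two updates is given below; the variable names mirror the algorithm, and the explicit handling of the cases x_n = x_{n-1} and Au_n = Ay_n is an implementation detail assumed here.

```python
import numpy as np

def inertial_point(x_n, x_prev, xi_n, rho):
    """Inertial extrapolation u_n = x_n + mu_n (x_n - x_{n-1}) with the safeguarded mu_n."""
    diff = np.linalg.norm(x_n - x_prev)
    mu_n = min(xi_n / diff, rho) if diff > 0 else rho
    return x_n + mu_n * (x_n - x_prev)

def next_stepsize(eta_n, eps_n, u_n, y_n, Au_n, Ay_n, upsilon):
    """Non-monotonic update: eta_{n+1} may exceed eta_n by eps_n, so the stepsize can recover."""
    denom = np.linalg.norm(Au_n - Ay_n)
    if denom > 0:
        return min(upsilon * np.linalg.norm(u_n - y_n) / denom, eta_n + eps_n)
    return eta_n + eps_n
```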
In recent years, the bilevel variational inequality problem (BVIP) has spurred the interest of many authors; see, e.g., [21]. The BVIP in H is formulated as follows: find v* ∈ VI(C, A) such that
$$ \langle F v^*,\ v - v^* \rangle \ge 0, \quad \forall v \in \mathrm{VI}(C, A), \tag{7} $$
where A : C → H and F : H → H are operators. This class is interesting because it includes several classes of problems, such as mathematical programs with variational inequality or equilibrium constraints, linear complementarity problems, bilevel optimization problems and bilevel linear programs.
In the present work, we consider a more general bilevel problem: a variational inequality over the set of solutions to the generalized equilibrium problem (VIOGE). This problem is described as follows: find v* ∈ GEP(f, A) such that
$$ \langle F v^*,\ v - v^* \rangle \ge 0, \quad \forall v \in \mathrm{GEP}(f, A), \tag{8} $$
where f is a bifunction, F is strongly monotone and A is monotone.
It is easy to see that the VIOGE is a special class of bilevel equilibrium problems. Bilevel equilibrium problems have attracted many authors and have been widely and intensively studied since the appearance of the monograph [22]. It is worth noting that the VIOGE contains some classes of mathematical programs with equilibrium constraints [23], minimum-norm problems over the solution set of variational inequalities [24], variational inequalities [25], bilevel convex programming models [26], bilevel minimization problems [27] and bilevel linear programming [28]. For these reasons, we find it necessary to study and propose new iterative methods with better efficiency for solving the VIOGE.
Inspired by Korpelevich [13], Tseng [18] and Tan and Qin [20], we study the VIOGE (8) and introduce a new iterative method for finding a solution of the VIOGE (8) in Hilbert spaces. The proposed method is constructed around the TEGM, the inertial idea and the regularization technique and uses a non-monotonic stepsize rule without any line search procedure. Strong convergence theorems are established under appropriate conditions. Several numerical experiments are also provided to show the effectiveness and the fast convergence of the new method over certain known methods.

2. Preliminaries

In this section, we assume that C is a closed, convex and nonempty subset of a Hilbert space H. The set of real numbers is denoted by R. The symbols “→” and “⇀” denote strong and weak convergence, respectively. For a given sequence {x_n} ⊂ H, ω(x_n) := {x : there exists a subsequence {x_{n_j}} of {x_n} with x_{n_j} ⇀ x} denotes the set of weak cluster points of {x_n}. The set of fixed points of a mapping S : C → H is denoted by Fix(S).
Definition 1 ([7]). An operator S : C → H is called:
(i) monotone, if
$$ \langle S x - S y,\ x - y \rangle \ge 0, \quad \forall x, y \in C; $$
(ii) α-strongly monotone, if there exists a number α > 0 such that
$$ \langle S x - S y,\ x - y \rangle \ge \alpha \|x - y\|^2, \quad \forall x, y \in C; $$
(iii) r-contractive, if there exists a positive number r < 1 such that
$$ \|S x - S y\| \le r \|x - y\|, \quad \forall x, y \in C; $$
(iv) k-Lipschitz continuous, if there exists k > 0 such that
$$ \|S x - S y\| \le k \|x - y\|, \quad \forall x, y \in C; $$
(v) nonexpansive, if
$$ \|S x - S y\| \le \|x - y\|, \quad \forall x, y \in C. $$
Definition 2 ([29]). A single-valued operator S : C → H is called hemicontinuous if the real function
$$ t \mapsto \langle S((1 - t)x + t y),\ z \rangle $$
is continuous on [0, 1] for all x, y, z ∈ C.
Definition 3 ([30]). The mapping P_C : H → C which assigns to each point x ∈ H the unique point w ∈ C such that
$$ \|x - w\| = \inf_{y \in C} \|x - y\| $$
is called the metric (or nearest point) projection of H onto C.
Proposition 1 ([31]). The metric projection P_C satisfies
$$ \|P_C x - P_C y\|^2 \le \langle P_C x - P_C y,\ x - y \rangle, \quad \forall x, y \in H. $$
Given x ∈ H and y ∈ C, then y = P_C x if there holds
$$ \langle x - y,\ y - z \rangle \ge 0, \quad \forall z \in C. $$
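As a concrete instance of Definition 3 and Proposition 1, the metric projection onto a closed ball has a simple closed form, which is the setting used later in the numerical section; the helper name, the radius argument and the sample points below are assumptions of this sketch.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection of x onto the closed ball {y : ||y|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

# Quick check of the characterization <x - P_C x, P_C x - z> >= 0 for z in C:
x = np.array([3.0, 4.0])          # ||x|| = 5, so P_C x = (0.6, 0.8)
p = project_ball(x)
z = np.array([0.1, -0.2])         # an arbitrary point of the unit ball
print(np.dot(x - p, p - z) >= 0)  # True
```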
Assumption 1. Assume C is a closed, nonempty and convex subset of a Hilbert space H. Let f : C × C → R be a bifunction satisfying the following restrictions:
(A1) f(x, x) = 0 for all x ∈ C;
(A2) f is monotone, i.e., f(x, y) + f(y, x) ≤ 0 for all x, y ∈ C;
(A3) for all x ∈ C, f(x, ·) is convex and lower semicontinuous; and
(A4) for all x, y, z ∈ C, lim sup_{t→0⁺} f(t z + (1 − t)x, y) ≤ f(x, y).
Lemma 1 ([1]). Let f : C × C → R be a bifunction satisfying Assumption 1 (A1)–(A4). For x ∈ H and r > 0, define a mapping T_r^f : H → C by
$$ T_r^f x = \Big\{ z \in C : f(z, y) + \frac{1}{r}\langle y - z,\ z - x \rangle \ge 0,\ \forall y \in C \Big\}. $$
Then, it holds that:
(i) T_r^f is nonempty and single-valued;
(ii) T_r^f is a firmly nonexpansive mapping, that is, ‖T_r^f x − T_r^f y‖² ≤ ⟨T_r^f x − T_r^f y, x − y⟩ for all x, y ∈ H;
(iii) Fix(T_r^f) = EP(f); and
(iv) EP(f) is convex and closed.
Remark 1. Assume A : C → H is Lipschitz continuous and monotone, and f : C × C → R is a bifunction satisfying Assumption 1 (A1)–(A4). We find that the mapping F̂(x, y) := f(x, y) + ⟨Ax, y − x⟩ also satisfies Assumption 1. Thus, we deduce, by Lemma 1, that
(i) T_r^{F̂} is nonempty and single-valued;
(ii) T_r^{F̂} is a firmly nonexpansive mapping;
(iii) Fix(T_r^{F̂}) = Fix(T_r^f(I − rA)) = EP(F̂) = GEP(f, A); and
(iv) GEP(f, A) is closed and convex.
Lemma 2. Assume f : C × C → R is a bifunction satisfying Assumption 1 (A1)–(A4), A : C → H is monotone and L-Lipschitz continuous, and F : C → H is τ-strongly monotone and k-Lipschitz continuous. Then, T_r^f(I − r(A + αF)) is a contraction on C provided r ∈ (0, 2ατ/(L + αk)²).
Proof. 
For x, y ∈ C, we have, from Lemma 1(ii), that
$$\begin{aligned}
\|T_r^f(I - r(A + \alpha F))x - T_r^f(I - r(A + \alpha F))y\|^2
&\le \|(I - r(A + \alpha F))x - (I - r(A + \alpha F))y\|^2\\
&= \|x - y\|^2 - 2r\langle x - y,\ (A + \alpha F)x - (A + \alpha F)y\rangle + r^2\|(A + \alpha F)x - (A + \alpha F)y\|^2\\
&\le \|x - y\|^2 - 2r\alpha\langle x - y,\ Fx - Fy\rangle + r^2(L + \alpha k)^2\|x - y\|^2\\
&\le \big(1 - r(2\alpha\tau - r(L + \alpha k)^2)\big)\|x - y\|^2.
\end{aligned}$$
Since F : C → H is τ-strongly monotone and k-Lipschitz continuous, one finds τ ≤ k. Notice that
$$ 1 - r(2\alpha\tau - r(L + \alpha k)^2) = 1 - 2r\alpha\tau + r^2(L + \alpha k)^2 > 1 - 2r\alpha\tau + r^2\alpha^2 k^2 \ge 1 - 2r\alpha\tau + r^2\alpha^2\tau^2 = (1 - r\alpha\tau)^2. $$
Then, we deduce that 1 − r(2ατ − r(L + αk)²) ≥ 0 for all r > 0. By using the assumption that r ∈ (0, 2ατ/(L + αk)²), we derive 1 − r(2ατ − r(L + αk)²) < 1. Thus, 0 ≤ 1 − r(2ατ − r(L + αk)²) < 1 for each r ∈ (0, 2ατ/(L + αk)²). Therefore, T_r^f(I − r(A + αF)) is a contraction on C provided r ∈ (0, 2ατ/(L + αk)²). This completes the proof.  □
Lemma 3.
Assume A : C → H is monotone and hemicontinuous, and f : C × C → R is a bifunction satisfying Assumption 1 (A1)–(A4). Then, v* is a solution to the GEP (1) if and only if v* is a solution to the following problem: find a point v* ∈ C such that
$$ f(v, v^*) \le \langle A v,\ v - v^* \rangle, \quad \forall v \in C. \tag{10} $$
Proof. 
Let v* ∈ C be a solution of the GEP (1). From (A2), the GEP (1) and the monotonicity of A, one finds, for all v ∈ C, that
$$ f(v, v^*) \le -f(v^*, v) \le \langle A v^*,\ v - v^* \rangle \le \langle A v,\ v - v^* \rangle. $$
Thus, v* ∈ C is a solution of the problem (10).
Conversely, let v* ∈ C be a solution of the problem (10). For all v ∈ C and t ∈ (0, 1), let v_t = (1 − t)v* + t v, and then v_t ∈ C. Since v* ∈ C is a solution of the problem (10), it follows that
$$ f(v_t, v^*) - \langle A v_t,\ v_t - v^* \rangle \le 0. \tag{12} $$
From Assumption 1 (A1) and (A3), we have
$$\begin{aligned}
0 = f(v_t, v_t) = f(v_t, (1 - t)v^* + t v)
&\le (1 - t) f(v_t, v^*) + t f(v_t, v)\\
&= (1 - t)\big(f(v_t, v^*) - \langle A v_t,\ v_t - v^*\rangle\big) + (1 - t)\langle A v_t,\ v_t - v^*\rangle + t f(v_t, v)\\
&= (1 - t)\big(f(v_t, v^*) - \langle A v_t,\ v_t - v^*\rangle\big) + (1 - t)t\langle A v_t,\ v - v^*\rangle + t f(v_t, v).
\end{aligned}$$
This, together with (12), implies
$$ 0 \le (1 - t)t\langle A v_t,\ v - v^*\rangle + t f(v_t, v), $$
which, after dividing by t > 0, implies
$$ 0 \le (1 - t)\langle A v_t,\ v - v^*\rangle + f(v_t, v). $$
Letting t → 0⁺ and noticing (A4) and the fact that A is hemicontinuous, we obtain
$$ 0 \le f(v^*, v) + \langle A v^*,\ v - v^* \rangle. $$
Therefore, v* is a solution of the GEP (1). This completes the proof.  □
Applying Lemma 3 with f ≡ 0, we obtain the following result immediately.
Corollary 1
([32]). Suppose A : C → H is a monotone and hemicontinuous mapping. Then, v* is a solution to the VIP (2) if and only if v* is a solution to the problem: find v* ∈ C such that
$$ \langle A v,\ v - v^* \rangle \ge 0, \quad \forall v \in C. $$
Lemma 4
([33]). Assume {v_n} is a sequence in H. If v_n ⇀ v and ‖v_n‖ → ‖v‖, then v_n → v.
Lemma 5
([34]). Assume {x_n} is a sequence of nonnegative real numbers such that
$$ x_{n+1} \le (1 - a_n)x_n + b_n, \quad \forall n \in \mathbb{N}, $$
where {a_n} and {b_n} satisfy the conditions
(i) Σ_{n=1}^∞ a_n = ∞ and lim_{n→∞} a_n = 0;
(ii) lim sup_{n→∞} b_n/a_n ≤ 0.
Then, lim_{n→∞} x_n = 0.
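A quick numerical illustration of Lemma 5, with sequences chosen here purely for demonstration (they are not taken from the paper), shows the recursion driving x_n to zero:

```python
# Illustration of Lemma 5 with a_n = 1/(n+1) and b_n = 1/(n+1)^2, so that
# sum a_n = infinity, a_n -> 0 and b_n/a_n = 1/(n+1) -> 0.
x = 1.0
for n in range(1, 200001):
    a_n, b_n = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    x = (1 - a_n) * x + b_n
print(x)  # roughly of order log(n)/n here, i.e. close to 0
```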

3. Main Results

In this section, we focus on the strong convergence analysis for the VIOGE by using Tseng's extragradient method, the inertial idea and the regularization technique. In what follows, suppose that f : H × H → R is a bifunction satisfying (A1)–(A4), A : H → H is L-Lipschitz continuous and monotone, and F : H → H is τ-strongly monotone and k-Lipschitz continuous. Additionally, we assume that GEP(f, A) is nonempty. One finds from Remark 1(iv) that GEP(f, A) is closed and convex. This, by the strong monotonicity and continuity of F, ensures the uniqueness of the solution to the VIOGE (8), which we denote by x†. Together with the GEP (1), we consider the following regularized generalized equilibrium problem (RGEP) for each α > 0: find a point x ∈ C such that
$$ f(x, y) + \langle A x + \alpha F x,\ y - x \rangle \ge 0, \quad \forall y \in C. \tag{13} $$
Remark 2.
Under the above assumptions, one deduces that the RGEP (13) has a unique solution. Indeed, the solution set of the RGEP (13) is GEP(f, A + αF). It is also not hard to see from Remark 1(iii) that Fix(T_r^f(I − r(A + αF))) = GEP(f, A + αF) for each r > 0. In particular, for each r ∈ (0, 2ατ/(L + αk)²), by Lemma 2, the mapping T_r^f(I − r(A + αF)) is contractive and therefore has a unique fixed point by the Banach contraction principle. Hence, the RGEP (13) has a unique solution, which is denoted by x_α, for each α > 0.
Now, we study some properties of the solution x_α.
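Since Remark 2 obtains x_α as the fixed point of the contraction T_r^f(I − r(A + αF)), it can in principle be approximated by Picard (Banach) iteration. The sketch below assumes a closed-form resolvent of the type used in the numerical examples (T_r^f x = x/(1 + 7r) for f(x, y) = 2‖y‖² + 3⟨x, y⟩ − 5‖x‖²) together with A x = x/4 and F x = x/6; these concrete choices are assumptions for illustration only.

```python
import numpy as np

def regularized_solution(resolvent, A, F, alpha, r, z0, tol=1e-10, max_iter=10000):
    """Picard iteration z <- T_r^f(z - r(Az + alpha*Fz)); converges when the map is a contraction."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        z_new = resolvent(z - r * (A(z) + alpha * F(z)), r)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Assumed concrete instance (see the numerical examples):
T = lambda x, r: x / (1.0 + 7.0 * r)   # resolvent of f(x,y) = 2||y||^2 + 3<x,y> - 5||x||^2
A = lambda x: x / 4.0
F = lambda x: x / 6.0
x_alpha = regularized_solution(T, A, F, alpha=0.1, r=0.5, z0=np.array([1.0, -2.0, 0.5]))
print(x_alpha)  # close to 0, the regularized solution in this particular setting
```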
Lemma 6.
It holds that:
(i) ‖x_α‖ ≤ ‖x†‖ + ‖Fx†‖/τ for all α > 0;
(ii) lim_{α→0⁺} ‖x_α − x†‖ = 0;
(iii) ‖x_a − x_b‖ ≤ (|b − a|/a) M for all a, b > 0, where M is a positive constant.
Proof. 
(i) Taking an arbitrary w ∈ GEP(f, A), one has f(w, y) + ⟨Aw, y − w⟩ ≥ 0 for all y ∈ C, which, with y = x_α ∈ C, implies that
$$ f(w, x_\alpha) + \langle A w,\ x_\alpha - w \rangle \ge 0. \tag{14} $$
Since x_α is the solution of the RGEP (13), one then finds
$$ f(x_\alpha, y) + \langle A x_\alpha + \alpha F x_\alpha,\ y - x_\alpha \rangle \ge 0, \quad \forall y \in C. \tag{15} $$
Substituting y = w ∈ C into (15), one obtains
$$ f(x_\alpha, w) + \langle A x_\alpha + \alpha F x_\alpha,\ w - x_\alpha \rangle \ge 0. $$
Then, by combining this inequality with (14), one finds
$$ f(w, x_\alpha) + f(x_\alpha, w) + \langle A x_\alpha - A w,\ w - x_\alpha \rangle + \alpha\langle F x_\alpha,\ w - x_\alpha \rangle \ge 0. \tag{16} $$
Observe that
$$ \langle F x_\alpha,\ w - x_\alpha \rangle = \langle F x_\alpha - F w,\ w - x_\alpha \rangle + \langle F w,\ w - x_\alpha \rangle. \tag{17} $$
It follows from the τ-strong monotonicity of F and (17) that
$$ \langle F x_\alpha,\ w - x_\alpha \rangle \le -\tau\|x_\alpha - w\|^2 + \langle F w,\ w - x_\alpha \rangle. \tag{18} $$
Substituting (18) into (16), one obtains
$$ f(w, x_\alpha) + f(x_\alpha, w) + \langle A x_\alpha - A w,\ w - x_\alpha \rangle - \alpha\tau\|x_\alpha - w\|^2 + \alpha\langle F w,\ w - x_\alpha \rangle \ge 0. \tag{19} $$
Utilizing (A2) and the monotonicity of A, one deduces
$$ \tau\|x_\alpha - w\|^2 \le \langle F w,\ w - x_\alpha \rangle. \tag{20} $$
Notice that
$$ \langle F w,\ w - x_\alpha \rangle \le \|F w\|\,\|w - x_\alpha\|, $$
which, by (20), implies
$$ \|x_\alpha\| \le \|w\| + \frac{\|F w\|}{\tau}, \quad \forall w \in \mathrm{GEP}(f, A). \tag{21} $$
This, together with the fact that x† ∈ GEP(f, A), leads to
$$ \|x_\alpha\| \le \|x^{\dagger}\| + \frac{\|F x^{\dagger}\|}{\tau}. $$
Thus, one deduces that {x_α} is bounded.
(ii) Since C is closed and convex, it is weakly closed. As {x_α} is bounded, there is a subsequence {x_{α_j}} of {x_α} with α_j → 0⁺ and some element x* ∈ C such that x_{α_j} ⇀ x*. In view of (A2), the monotonicity of A and the fact that x_{α_j} solves the RGEP (13), we deduce, for all y ∈ C, that
$$ f(y, x_{\alpha_j}) \le -f(x_{\alpha_j}, y) \le \langle A x_{\alpha_j} + \alpha_j F x_{\alpha_j},\ y - x_{\alpha_j} \rangle \le \langle A y,\ y - x_{\alpha_j} \rangle + \alpha_j\langle F x_{\alpha_j},\ y - x_{\alpha_j} \rangle, \tag{22} $$
that is,
$$ f(y, x_{\alpha_j}) - \langle A y,\ y - x_{\alpha_j} \rangle \le \alpha_j\langle F x_{\alpha_j},\ y - x_{\alpha_j} \rangle, \quad \forall y \in C. \tag{23} $$
As y ↦ f(x, y) is convex and lower semicontinuous, it is also weakly lower semicontinuous. Letting j → ∞ in (23), noticing (A3), the boundedness of {x_α} (see Lemma 6(i)) and α_j → 0⁺, one concludes that
$$ f(y, x^*) \le \langle A y,\ y - x^* \rangle, \quad \forall y \in C, $$
which, by Lemma 3, immediately yields
$$ x^* \in \mathrm{GEP}(f, A). $$
It follows from (20) that
$$ 0 \le \langle F w,\ w - x_{\alpha_j} \rangle, \quad \forall w \in \mathrm{GEP}(f, A). \tag{24} $$
Passing to the limit in (24) as j → ∞ and noticing the fact that x_{α_j} ⇀ x*, one infers that
$$ 0 \le \langle F w,\ w - x^* \rangle, \quad \forall w \in \mathrm{GEP}(f, A), $$
which implies, by Corollary 1, that
$$ 0 \le \langle F x^*,\ w - x^* \rangle, \quad \forall w \in \mathrm{GEP}(f, A). $$
Hence, x* is a solution of the VIOGE (8). Since the solution x† of the VIOGE (8) is unique, one deduces that x* = x†. Therefore, the set ω(x_α) has exactly one element, that is, ω(x_α) = {x†}. Thus, one finds x_α ⇀ x† as α → 0⁺. Further, utilizing (20), one finds
$$ \tau\|x_\alpha - x^{\dagger}\|^2 \le \langle F x^{\dagger},\ x^{\dagger} - x_\alpha \rangle. $$
Passing to the limit in the above inequality as α → 0⁺ and noticing the fact that x_α ⇀ x†, one deduces lim_{α→0⁺} ‖x_α − x†‖ = 0. This is the desired result (ii).
(iii) Assume that x_a, x_b are the solutions of the RGEP (13) with parameters a and b, respectively. Then, one has
$$ f(x_a, x_b) + \langle A x_a + a F x_a,\ x_b - x_a \rangle \ge 0 $$
and
$$ f(x_b, x_a) + \langle A x_b + b F x_b,\ x_a - x_b \rangle \ge 0. $$
Summing up the above two inequalities, one finds
$$ f(x_a, x_b) + f(x_b, x_a) + \langle A x_a - A x_b,\ x_b - x_a \rangle + a\langle F x_a - F x_b,\ x_b - x_a \rangle + (a - b)\langle F x_b,\ x_b - x_a \rangle \ge 0. $$
Noting the monotonicity of A and (A2), one finds
$$ a\langle F x_a - F x_b,\ x_b - x_a \rangle + (a - b)\langle F x_b,\ x_b - x_a \rangle \ge 0, $$
or, equivalently,
$$ (b - a)\langle F x_b,\ x_a - x_b \rangle \ge a\langle F x_a - F x_b,\ x_a - x_b \rangle. $$
It follows from the τ-strong monotonicity of F that
$$ (b - a)\langle F x_b,\ x_a - x_b \rangle \ge a\tau\|x_a - x_b\|^2, $$
which leads to
$$ |b - a|\,\|F x_b\|\,\|x_a - x_b\| \ge a\tau\|x_a - x_b\|^2. $$
Thus, one concludes that
$$ \|x_a - x_b\| \le \frac{|b - a|}{a}\cdot\frac{\|F x_b\|}{\tau}. \tag{25} $$
Since the operator F is Lipschitz continuous, noting (21), we then deduce that {F x_a} is also bounded. Thus, by (25), there exists M > 0 such that ‖x_a − x_b‖ ≤ (|b − a|/a) M. This completes the proof.  □
In the following, combining Tseng's extragradient method, the inertial idea and the regularization method, we propose a new numerical algorithm for solving the VIOGE (8).
Lemma 7
([20]). The sequence {η_n} generated by (27) is well defined, lim_{n→∞} η_n = η and η ∈ [min{υ/L, η_0}, η_0 + ζ], where ζ = Σ_{n=1}^∞ ε_n.
Assumption 2.
(C1) lim_{n→∞} α_n = 0 and Σ_{n=0}^∞ α_n = ∞;
(C2) lim_{n→∞} |α_n − α_{n+1}| / α_n² = 0;
(C3) lim_{n→∞} ξ_n / α_n = 0; and
(C4) GEP(f, A) ≠ ∅.
Theorem 1.
Assume C is a nonempty, closed and convex subset of a real Hilbert space H, f : H × H → R is a bifunction satisfying (A1)–(A4), and A : H → H is L-Lipschitz continuous and monotone. Then, the sequence {x_n} constructed by Algorithm 4 converges strongly to the unique solution x† of the VIOGE (8) under Assumption 2 (C1)–(C4).
Algorithm 4: Tseng's extragradient method with regularization (TEGMR).
Initialization: Take {ξ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), ρ > 0, η_0 > 0, υ ∈ (0, 1).
Choose a nonnegative real sequence {ε_n} such that Σ_{n=0}^∞ ε_n < ∞. Let x_0, x_1 ∈ H be arbitrary.
Step 1. Compute
$$ u_n = x_n + \mu_n (x_n - x_{n-1}), \tag{26} $$
where
$$ \mu_n = \begin{cases} \min\left\{ \dfrac{\xi_n}{\|x_n - x_{n-1}\|},\ \rho \right\}, & \text{if } x_n \ne x_{n-1}; \\[2mm] \rho, & \text{otherwise.} \end{cases} $$
Step 2. Given x_n (n ≥ 0), compute
$$ y_n = T_{\eta_n}^f\big(u_n - \eta_n (A u_n + \alpha_n F u_n)\big), $$
where {η_n} is updated by
$$ \eta_{n+1} = \begin{cases} \min\left\{ \dfrac{\upsilon \|u_n - y_n\|}{\|A u_n - A y_n\|},\ \eta_n + \epsilon_n \right\}, & \text{if } A u_n \ne A y_n; \\[2mm] \eta_n + \epsilon_n, & \text{otherwise.} \end{cases} \tag{27} $$
Step 3. Compute
$$ x_{n+1} = y_n - \eta_n (A y_n - A u_n). $$
Step 4. Set n := n + 1 and return to Step 1.
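A compact Python sketch of Algorithm 4 is given below for the setting of the numerical examples, where the resolvent T_η^f has the closed form x/(1 + 7η). The stopping rule ‖x_{n+1} − x_n‖ < tol, the operators A x = x/4 and F x = x/6, the summable sequence ε_n and the parameter values are modeled on Example 3 and are assumptions of this sketch rather than part of the algorithm statement.

```python
import numpy as np

def tegmr(A, F, resolvent, x0, x1, rho=0.4, eta0=0.6, upsilon=0.6,
          alpha=lambda n: (n + 1) ** -0.9, xi=lambda n: (n + 1) ** -0.8,
          eps=lambda n: (n + 1) ** -2, tol=1e-3, max_iter=5000):
    """Tseng's extragradient method with regularization (Algorithm 4), sketched."""
    x_prev, x, eta = np.asarray(x0, float), np.asarray(x1, float), eta0
    for n in range(1, max_iter):
        # Step 1: inertial extrapolation
        d = np.linalg.norm(x - x_prev)
        mu = min(xi(n) / d, rho) if d > 0 else rho
        u = x + mu * (x - x_prev)
        # Step 2: regularized forward step through the resolvent T_eta^f
        y = resolvent(u - eta * (A(u) + alpha(n) * F(u)), eta)
        # Step 3: Tseng correction
        x_next = y - eta * (A(y) - A(u))
        # Non-monotonic stepsize update (27); eps(n) is assumed summable
        denom = np.linalg.norm(A(u) - A(y))
        eta = min(upsilon * np.linalg.norm(u - y) / denom, eta + eps(n)) if denom > 0 else eta + eps(n)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x_prev, x = x, x_next
    return x

# Assumed instance modeled on Example 3: A x = x/4, F x = x/6, T_eta^f x = x/(1 + 7 eta).
A = lambda x: x / 4.0
F = lambda x: x / 6.0
T = lambda x, eta: x / (1.0 + 7.0 * eta)
sol = tegmr(A, F, T, x0=np.array([2/3, 1/9, 2/5]), x1=np.array([7/6, 10/11, 1/9]))
print(np.linalg.norm(sol))  # should be small: the unique solution is 0
```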
Proof. 
In view of Lemma 6(ii) and (C1), one obtains x_{α_n} → x† as n → ∞. Therefore, it suffices to show that
$$ \lim_{n\to\infty} \|x_n - x_{\alpha_n}\| = 0. $$
Observe that
$$ \|x_{n+1} - x_{\alpha_n}\|^2 = \|y_n - \eta_n(A y_n - A u_n) - x_{\alpha_n}\|^2 = \|y_n - x_{\alpha_n}\|^2 + \eta_n^2\|A y_n - A u_n\|^2 - 2\eta_n\langle y_n - x_{\alpha_n},\ A y_n - A u_n\rangle. \tag{28} $$
We also notice that
$$ \|y_n - x_{\alpha_n}\|^2 = \|y_n - u_n\|^2 + \|u_n - x_{\alpha_n}\|^2 + 2\langle y_n - u_n,\ u_n - x_{\alpha_n}\rangle \tag{29} $$
and
$$ 2\langle y_n - u_n,\ u_n - x_{\alpha_n}\rangle = 2\langle y_n - u_n,\ u_n - y_n\rangle + 2\langle y_n - u_n,\ y_n - x_{\alpha_n}\rangle = -2\|y_n - u_n\|^2 + 2\langle y_n - u_n,\ y_n - x_{\alpha_n}\rangle. \tag{30} $$
Substituting (29) and (30) into (28), one finds
$$ \|x_{n+1} - x_{\alpha_n}\|^2 = \|u_n - x_{\alpha_n}\|^2 - \|y_n - u_n\|^2 + 2\langle y_n - u_n,\ y_n - x_{\alpha_n}\rangle + \eta_n^2\|A y_n - A u_n\|^2 - 2\eta_n\langle y_n - x_{\alpha_n},\ A y_n - A u_n\rangle. \tag{31} $$
Since x_α is a solution of the RGEP (13) for each α > 0, we obtain, by Remark 2, that
$$ x_{\alpha_n} = T_{\eta_n}^f\big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big). $$
In view of Lemma 1(ii), one finds that
$$\begin{aligned}
\|y_n - x_{\alpha_n}\|^2 &= \big\|T_{\eta_n}^f\big(u_n - \eta_n(A u_n + \alpha_n F u_n)\big) - T_{\eta_n}^f\big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big)\big\|^2\\
&\le \big\langle T_{\eta_n}^f\big(u_n - \eta_n(A u_n + \alpha_n F u_n)\big) - T_{\eta_n}^f\big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big),\ \big(u_n - \eta_n(A u_n + \alpha_n F u_n)\big) - \big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big)\big\rangle\\
&= \big\langle y_n - x_{\alpha_n},\ \big(u_n - \eta_n(A u_n + \alpha_n F u_n)\big) - \big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big)\big\rangle,
\end{aligned}$$
which implies
$$ \big\langle y_n - x_{\alpha_n},\ \big(u_n - \eta_n(A u_n + \alpha_n F u_n)\big) - \big(x_{\alpha_n} - \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big)\big\rangle - \|y_n - x_{\alpha_n}\|^2 \ge 0. $$
That is,
$$ \big\langle y_n - x_{\alpha_n},\ u_n - y_n - \eta_n(A u_n + \alpha_n F u_n) + \eta_n(A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big\rangle \ge 0, $$
or, equivalently,
$$ \langle y_n - x_{\alpha_n},\ u_n - y_n\rangle \ge \eta_n\big\langle y_n - x_{\alpha_n},\ (A u_n + \alpha_n F u_n) - (A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big\rangle. \tag{33} $$
Substituting (33) into (31), noting (27) and the monotonicity of A, one obtains
$$\begin{aligned}
\|x_{n+1} - x_{\alpha_n}\|^2 &\le \|u_n - x_{\alpha_n}\|^2 - \|y_n - u_n\|^2 - 2\eta_n\big\langle y_n - x_{\alpha_n},\ (A u_n + \alpha_n F u_n) - (A x_{\alpha_n} + \alpha_n F x_{\alpha_n})\big\rangle + \eta_n^2\|A y_n - A u_n\|^2 - 2\eta_n\langle y_n - x_{\alpha_n},\ A y_n - A u_n\rangle\\
&= \|u_n - x_{\alpha_n}\|^2 - \|y_n - u_n\|^2 + \eta_n^2\|A y_n - A u_n\|^2 - 2\eta_n\langle y_n - x_{\alpha_n},\ A y_n - A x_{\alpha_n}\rangle - 2\alpha_n\eta_n\langle y_n - x_{\alpha_n},\ F u_n - F x_{\alpha_n}\rangle\\
&\le \|u_n - x_{\alpha_n}\|^2 - \|y_n - u_n\|^2 + \eta_n^2\|A y_n - A u_n\|^2 + 2\alpha_n\eta_n\langle y_n - x_{\alpha_n},\ F x_{\alpha_n} - F u_n\rangle\\
&\le \|u_n - x_{\alpha_n}\|^2 - \|y_n - u_n\|^2 + \frac{\upsilon^2\eta_n^2}{\eta_{n+1}^2}\|y_n - u_n\|^2 + 2\alpha_n\eta_n\langle y_n - x_{\alpha_n},\ F x_{\alpha_n} - F u_n\rangle\\
&= \|u_n - x_{\alpha_n}\|^2 - \Big(1 - \frac{\upsilon^2\eta_n^2}{\eta_{n+1}^2}\Big)\|y_n - u_n\|^2 + 2\alpha_n\eta_n\langle y_n - x_{\alpha_n},\ F x_{\alpha_n} - F u_n\rangle.
\end{aligned}\tag{34}$$
Now, select three positive numbers σ_1, σ_2, σ_3 ∈ (0, 1) such that
$$ 2\tau - \sigma_1 k - \sigma_2 - \sigma_3 > 0. \tag{35} $$
We estimate the last term in (34) as follows:
$$\begin{aligned}
2\langle y_n - x_{\alpha_n},\ F x_{\alpha_n} - F u_n\rangle &= 2\langle y_n - u_n,\ F x_{\alpha_n} - F u_n\rangle + 2\langle u_n - x_{\alpha_n},\ F x_{\alpha_n} - F u_n\rangle\\
&\le 2k\|y_n - u_n\|\,\|x_{\alpha_n} - u_n\| - 2\tau\|u_n - x_{\alpha_n}\|^2\\
&\le \frac{k}{\sigma_1}\|y_n - u_n\|^2 + k\sigma_1\|x_{\alpha_n} - u_n\|^2 - 2\tau\|u_n - x_{\alpha_n}\|^2,
\end{aligned}$$
which, together with (34), yields
$$\begin{aligned}
\|x_{n+1} - x_{\alpha_n}\|^2 &\le \|u_n - x_{\alpha_n}\|^2 - \Big(1 - \frac{\upsilon^2\eta_n^2}{\eta_{n+1}^2}\Big)\|y_n - u_n\|^2 + \alpha_n\eta_n\Big(\frac{k}{\sigma_1}\|y_n - u_n\|^2 + k\sigma_1\|x_{\alpha_n} - u_n\|^2 - 2\tau\|u_n - x_{\alpha_n}\|^2\Big)\\
&= \big(1 - (2\tau - \sigma_1 k)\alpha_n\eta_n\big)\|u_n - x_{\alpha_n}\|^2 - \Big(1 - \frac{\upsilon^2\eta_n^2}{\eta_{n+1}^2} - \frac{k\alpha_n\eta_n}{\sigma_1}\Big)\|y_n - u_n\|^2.
\end{aligned}\tag{36}$$
Utilizing the assumptions υ ∈ (0, 1) and α_n → 0 and applying Lemma 7 and Assumption 2 (C3), it follows that there exists n_0 ≥ 1 such that
$$ 1 - \frac{\upsilon^2\eta_n^2}{\eta_{n+1}^2} - \frac{k\alpha_n\eta_n}{\sigma_1} > 0, \qquad \xi_n < 1 \qquad \text{and} \qquad \xi_n \le \sigma_2\alpha_n\eta_n, \qquad \forall n \ge n_0. \tag{37} $$
Consequently, from (36) and (37), we find that
$$ \|x_{n+1} - x_{\alpha_n}\|^2 \le \big(1 - (2\tau - \sigma_1 k)\alpha_n\eta_n\big)\|u_n - x_{\alpha_n}\|^2. \tag{38} $$
From (37), we find ξ_n < 1 for all n ≥ n_0, and hence ξ_n² < ξ_n for all n ≥ n_0. Utilizing (26) and (37), one deduces that
$$\begin{aligned}
\|u_n - x_{\alpha_n}\|^2 &= \|x_n + \mu_n(x_n - x_{n-1}) - x_{\alpha_n}\|^2\\
&\le \big(\|x_n - x_{\alpha_n}\| + \mu_n\|x_n - x_{n-1}\|\big)^2\\
&= \|x_n - x_{\alpha_n}\|^2 + \mu_n^2\|x_n - x_{n-1}\|^2 + 2\|x_n - x_{\alpha_n}\|\,\mu_n\|x_n - x_{n-1}\|\\
&\le \|x_n - x_{\alpha_n}\|^2 + \xi_n^2 + 2\|x_n - x_{\alpha_n}\|\,\xi_n\\
&\le \|x_n - x_{\alpha_n}\|^2 + \xi_n^2 + \|x_n - x_{\alpha_n}\|^2\xi_n + \xi_n\\
&\le (1 + \xi_n)\|x_n - x_{\alpha_n}\|^2 + 2\xi_n\\
&\le (1 + \sigma_2\alpha_n\eta_n)\|x_n - x_{\alpha_n}\|^2 + 2\xi_n.
\end{aligned}\tag{39}$$
On the other hand, in terms of the Cauchy–Schwarz inequality, one concludes that
$$ 2\|x_{\alpha_{n+1}} - x_{n+1}\|\,\|x_{\alpha_{n+1}} - x_{\alpha_n}\| \le \frac{1}{\sigma_3\alpha_n\eta_n}\|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 + \sigma_3\alpha_n\eta_n\|x_{\alpha_{n+1}} - x_{n+1}\|^2. \tag{40} $$
Thus, for each n ≥ n_0, one obtains from (40) and Lemma 6(iii) that
$$\begin{aligned}
\|x_{\alpha_n} - x_{n+1}\|^2 &= \|x_{\alpha_{n+1}} - x_{n+1}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - 2\langle x_{\alpha_{n+1}} - x_{n+1},\ x_{\alpha_{n+1}} - x_{\alpha_n}\rangle\\
&\ge \|x_{\alpha_{n+1}} - x_{n+1}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - 2\|x_{\alpha_{n+1}} - x_{n+1}\|\,\|x_{\alpha_{n+1}} - x_{\alpha_n}\|\\
&\ge \|x_{\alpha_{n+1}} - x_{n+1}\|^2 + \|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - \frac{1}{\sigma_3\alpha_n\eta_n}\|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2 - \sigma_3\alpha_n\eta_n\|x_{\alpha_{n+1}} - x_{n+1}\|^2\\
&= (1 - \sigma_3\alpha_n\eta_n)\|x_{\alpha_{n+1}} - x_{n+1}\|^2 - \frac{1 - \sigma_3\alpha_n\eta_n}{\sigma_3\alpha_n\eta_n}\|x_{\alpha_{n+1}} - x_{\alpha_n}\|^2\\
&\ge (1 - \sigma_3\alpha_n\eta_n)\|x_{\alpha_{n+1}} - x_{n+1}\|^2 - \frac{1 - \sigma_3\alpha_n\eta_n}{\sigma_3\alpha_n\eta_n}\Big(\frac{\alpha_{n+1} - \alpha_n}{\alpha_n}\Big)^2 M^2\\
&= (1 - \sigma_3\alpha_n\eta_n)\|x_{\alpha_{n+1}} - x_{n+1}\|^2 - \frac{(1 - \sigma_3\alpha_n\eta_n)(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2,
\end{aligned}$$
which yields
$$ \|x_{\alpha_{n+1}} - x_{n+1}\|^2 \le \frac{1}{1 - \sigma_3\alpha_n\eta_n}\|x_{n+1} - x_{\alpha_n}\|^2 + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2. \tag{41} $$
By relations (35), (38), (39) and (41), one finds
$$\begin{aligned}
\|x_{\alpha_{n+1}} - x_{n+1}\|^2 &\le \frac{1 - (2\tau - \sigma_1 k)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n}\|u_n - x_{\alpha_n}\|^2 + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2\\
&\le \frac{\big(1 - (2\tau - \sigma_1 k)\alpha_n\eta_n\big)(1 + \sigma_2\alpha_n\eta_n)}{1 - \sigma_3\alpha_n\eta_n}\|x_n - x_{\alpha_n}\|^2 + \frac{1 - (2\tau - \sigma_1 k)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n}\,2\xi_n + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2\\
&= \frac{1 - (2\tau - \sigma_1 k - \sigma_2)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n}\|x_n - x_{\alpha_n}\|^2 - \frac{(2\tau - \sigma_1 k)\sigma_2\eta_n^2\alpha_n^2}{1 - \sigma_3\alpha_n\eta_n}\|x_n - x_{\alpha_n}\|^2 + \frac{1 - (2\tau - \sigma_1 k)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n}\,2\xi_n + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2\\
&\le \frac{1 - (2\tau - \sigma_1 k - \sigma_2)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n}\|x_n - x_{\alpha_n}\|^2 + 4\xi_n + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2\\
&= (1 - a_n)\|x_n - x_{\alpha_n}\|^2 + b_n,
\end{aligned}\tag{42}$$
where
$$ a_n = \frac{(2\tau - \sigma_1 k - \sigma_2 - \sigma_3)\alpha_n\eta_n}{1 - \sigma_3\alpha_n\eta_n} \qquad \text{and} \qquad b_n = 4\xi_n + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^3\eta_n}M^2. $$
By condition (C1), one obtains that a_n → 0 and Σ_{n=1}^∞ a_n = +∞. Furthermore, one derives from conditions (C2) and (C3) that
$$ \frac{b_n}{a_n} = \Big(\frac{4\xi_n}{\alpha_n} + \frac{(\alpha_{n+1} - \alpha_n)^2}{\sigma_3\alpha_n^4\eta_n}M^2\Big)\,\frac{1 - \sigma_3\alpha_n\eta_n}{(2\tau - \sigma_1 k - \sigma_2 - \sigma_3)\eta_n} \longrightarrow 0. $$
Applying Lemma 5 to (42), one infers lim_{n→∞} ‖x_n − x_{α_n}‖ = 0. Thus, one finds lim_{n→∞} ‖x_n − x†‖ = 0.  □

4. Application to the BVIP

Assume C is a closed, convex and nonempty subset of H. Further, assume f(x, y) ≡ 0 for all x, y ∈ C. Noticing Proposition 1, one obtains T_{η_n}^f = P_C. Furthermore, the VIOGE (8) becomes the BVIP (7). Next, by Theorem 1, we obtain the following result for the BVIP.
Theorem 2.
Assume that C is a closed, convex and nonempty subset of a Hilbert space H and A : C → H is L-Lipschitz continuous and monotone. Further, assume VI(C, A) ≠ ∅. Then, the sequence {x_n} constructed by Algorithm 5 converges strongly to the unique solution x† of the BVIP (7) under Assumption 2 (C1)–(C4).
Algorithm 5: Tseng's extragradient method with regularization for the BVIP.
Initialization: Take {ξ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), ρ > 0, η_0 > 0, υ ∈ (0, 1).
Choose a nonnegative real sequence {ε_n} such that Σ_{n=0}^∞ ε_n < ∞. Let x_0, x_1 ∈ H be arbitrary.
Step 1. Compute
$$ u_n = x_n + \mu_n (x_n - x_{n-1}), $$
where
$$ \mu_n = \begin{cases} \min\left\{ \dfrac{\xi_n}{\|x_n - x_{n-1}\|},\ \rho \right\}, & \text{if } x_n \ne x_{n-1}; \\[2mm] \rho, & \text{otherwise.} \end{cases} $$
Step 2. Given x_n (n ≥ 0), compute
$$ y_n = P_C\big(u_n - \eta_n (A u_n + \alpha_n F u_n)\big). $$
Step 3. Compute
$$ x_{n+1} = y_n - \eta_n (A y_n - A u_n), $$
where {η_n} is updated by
$$ \eta_{n+1} = \begin{cases} \min\left\{ \dfrac{\upsilon \|u_n - y_n\|}{\|A u_n - A y_n\|},\ \eta_n + \epsilon_n \right\}, & \text{if } A u_n \ne A y_n; \\[2mm] \eta_n + \epsilon_n, & \text{otherwise.} \end{cases} $$
Step 4. Set n := n + 1 and go back to Step 1.

5. Numerical Examples

In this section, several numerical examples are provided to show the behavior and performance of our Algorithm 4 (TEGMR) and to compare it with Algorithm 1 (TEGM of Tseng [18]), Algorithm 2 (STEGM of Yang et al. [19]) and Algorithm 3 (SVTEGM of Tan and Qin [20]).
Example 3.
Let H = ℓ²(R) be the linear space whose elements are all 2-summable sequences of scalars in R, namely
$$ \ell^2(\mathbb{R}) := \Big\{ x = (x_1, x_2, \ldots, x_j, \ldots) : x_j \in \mathbb{R} \ \text{and} \ \sum_{j=1}^{\infty} |x_j|^2 < \infty \Big\}, $$
with inner product ⟨·, ·⟩ : H × H → R and norm ‖·‖_{ℓ²} : H → R defined by ⟨x, y⟩ := Σ_{j=1}^∞ x_j y_j and ‖x‖_{ℓ²} := (Σ_{j=1}^∞ |x_j|²)^{1/2}, respectively, where x = {x_j}_{j=1}^∞ and y = {y_j}_{j=1}^∞. Let F : H → H be defined by Fx = x/6 and A : H → H be defined by Ax = x/4 for all x = {x_j}_{j=1}^∞ ∈ H. Then, A*x = x/4 for all x = {x_j}_{j=1}^∞ ∈ H. Define the set C := {x ∈ H : ‖x‖ < 1}. Let the bifunction f : H × H → R be given by f(x, y) = 2‖y‖² + 3⟨x, y⟩ − 5‖x‖² for all x, y ∈ H. We find that T_η^f x = x/(1 + 7η) (a short verification is given after the list of cases below), and 0 is the unique solution to the VIOGE (8). Choose α_n = (n + 1)^{−0.9}, ξ_n = (n + 1)^{−0.8}, ρ = 0.4, η_0 = ν = 0.6. The stopping criterion used for our computation is ‖x_{n+1} − x_n‖ < 10^{−3}. We test our Algorithm 4 for different values of x_0 and x_1 as follows:
Case I: x_0 = (2/3, 1/9, 2/5, 0, …, 0, …) and x_1 = (7/6, 10/11, 1/9, 0, …, 0, …);
Case II: x_0 = (4/3, 5/3, 2/3, 0, …, 0, …) and x_1 = (5/7, 4/5, 6/5, 0, …, 0, …);
Case III: x_0 = (3, 3, 19/6, 0, …, 0, …) and x_1 = (5/3, 11/4, 7/5, 0, …, 0, …); and
Case IV: x_0 = (50/9, 73/11, 6, 0, …, 0, …) and x_1 = (76/9, 6, 11/2, 0, …, 0, …).
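The closed form T_η^f x = x/(1 + 7η) stated in Example 3 can be checked directly from the definition of T_r^f in Lemma 1. The short derivation below treats the defining inequality as unconstrained, which is an assumption of this sketch (the resulting point lies in C for the iterates encountered here).

```latex
% For f(x,y) = 2\|y\|^2 + 3\langle x,y\rangle - 5\|x\|^2, the point z = T_\eta^f x must satisfy
%   f(z,y) + \tfrac{1}{\eta}\langle y - z,\, z - x\rangle \ge 0 \quad \text{for all } y.
% The left-hand side, viewed as a function of y,
g(y) = 2\|y\|^2 + 3\langle z, y\rangle - 5\|z\|^2 + \tfrac{1}{\eta}\langle y - z,\, z - x\rangle
% is convex and satisfies g(z) = 0, so g(y) \ge 0 for all y exactly when z minimizes g,
% i.e. when \nabla g(z) = 0:
\nabla g(z) = 4z + 3z + \tfrac{1}{\eta}(z - x) = 0
\;\Longleftrightarrow\; (1 + 7\eta)\, z = x
\;\Longleftrightarrow\; z = \frac{x}{1 + 7\eta}.
```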
Example 4.
Let H = L²([0, 1]) with norm ‖x‖_{L²} = (∫₀¹ |x(t)|² dt)^{1/2} and inner product ⟨x, y⟩ = ∫₀¹ x(t) y(t) dt for all x, y ∈ L²([0, 1]). The feasible set C is given by C = {x ∈ L²([0, 1]) : ‖x‖ ≤ 1}. We choose α_n = (n + 1)^{−0.9}, ν = λ_0 = 1/2, ξ_n = (n + 1)^{−0.8}, ρ = 3/10, r_n = 3/5 and β_n = 1/8. Let f : H × H → R be given by f(x, y) = ‖y‖² − ‖x‖², A : H → H be given by Ax = x/4, and F : H → H be given by Fx = x/6 for all x ∈ H. We find that A is monotone and 1/4-Lipschitz continuous. It is easy to check that T_η^f x = x/(1 + 2η) and that 0 is the unique solution to the VIOGE (8). Let us take α_n = (n + 1)^{−0.9}, ξ_n = (n + 1)^{−0.8}, ρ = 0.4, η_0 = ν = 0.6. Using ‖x_{n+1} − x_n‖ < 10^{−3} as our stopping criterion, we test our Algorithm 4 for different values of x_0 and x_1 as follows:
Case I: x_0 = 2t − 3 and x_1 = t + 1;
Case II: x_0 = 2t⁴ + t² − 6 and x_1 = 3t;
Case III: x_0 = cos t and x_1 = sin t; and
Case IV: x_0 = 3e^t and x_1 = e^t.
Example 5.
Let H = R^m with inner product ⟨·, ·⟩ : H × H → R defined by ⟨x, y⟩ := Σ_{j=1}^m x_j y_j. Let Ax = NᵀNx + p, where p is a vector in R^m and N is an m × m matrix. Let F(x) = Qx + q, where Q is a symmetric and positive-definite matrix of size m × m and q is a vector in R^m. It is easy to check that A is monotone and Lipschitz continuous with the constant L = ‖N‖², and F is Lipschitz continuous and strongly monotone. Define the set C := {x ∈ H : ‖x‖ < 1}. Let the bifunction f : H × H → R be given by f(x, y) = 2‖y‖² + 3⟨x, y⟩ − 5‖x‖² for all x, y ∈ H. We deduce that T_η^f x = x/(1 + 7η), and 0 is the unique solution to the VIOGE (8). Choose α_n = (n + 1)^{−0.9}, ξ_n = (n + 1)^{−0.8}, ρ = 0.4, η_0 = ν = 0.6. The maximum number of iterations, 300, is used as the stopping criterion, and the initial values x_0 are randomly generated (componentwise in (0, 1)) by rand in MATLAB. We test Algorithm 4 with different values of m as follows.
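For Example 5, the operators can be generated as in the following short Python sketch; the random seed, the particular way Q is made symmetric positive definite, the choice p = q = 0 and the reuse of the Example-3 resolvent are assumptions made here for illustration (the paper itself reports MATLAB experiments).

```python
import numpy as np

rng = np.random.default_rng(0)            # assumed seed for reproducibility
m = 10
N = rng.standard_normal((m, m))
p = np.zeros(m)                           # assumed; with p = 0 the origin is the solution, as in the example
M_ = rng.standard_normal((m, m))
Q = M_ @ M_.T + m * np.eye(m)             # symmetric positive definite
q = np.zeros(m)                           # assumed

A = lambda x: N.T @ N @ x + p             # monotone, Lipschitz with L = ||N||^2
F = lambda x: Q @ x + q                   # strongly monotone and Lipschitz
L = np.linalg.norm(N, 2) ** 2
T = lambda x, eta: x / (1.0 + 7.0 * eta)  # resolvent of f(x,y) = 2||y||^2 + 3<x,y> - 5||x||^2

x0 = rng.random(m)                        # initial values with components in (0, 1), as in the experiments
x1 = rng.random(m)
# A, F, T, x0, x1 can be passed to the TEGMR sketch given after Algorithm 4.
```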
According to Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, one finds that the TEGMR performs better than the TEGM, the STEGM and the SVTEGM in terms of the CPU time taken for the computation.

6. Conclusions

In this paper, we considered a more general bilevel problem: a variational inequality over the set of solutions of the generalized equilibrium problem in a Hilbert space. The newly proposed problem covers many well-known problems, such as the bilevel variational inequality problem, the generalized equilibrium problem and bilevel optimization problems. We also introduced a new iterative method for solving the proposed bilevel problem in real Hilbert spaces.
The introduced method was constructed around the TEGM, the inertial idea and the regularization technique, and we used a non-monotonic stepsize rule without any line search procedure. Under some suitable conditions, strong convergence of the sequences generated by the proposed method was obtained. Several numerical experiments were also provided to demonstrate the effectiveness and the fast convergence of the new method over certain known methods.

Author Contributions

Conceptualization, Y.S.; Data curation, Y.S.; Funding acquisition, Y.S.; Methodology, Y.S. and O.B.; Supervision, O.B.; Writing—original draft, Y.S. and O.B.; Writing—review & editing, Y.S. and O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Scientific Research Project for Colleges and Universities in Henan Province (grant number 20A110038).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank reviewers and the editor for valuable comments for improving the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  2. Chidume, C.E.; Măruşter, Ş.T. Iterative methods for the computation of fixed points of demicontractive mappings. J. Comput. Appl. Math. 2010, 234, 861–882. [Google Scholar] [CrossRef] [Green Version]
  3. Mennouni, A. A regularization procedure for solving some singular integral equations of the second kind. Internat. J. Differ. Equ. 2013, 8, 71–76. [Google Scholar]
  4. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  5. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algor. 2018, 78, 1045–1060. [Google Scholar] [CrossRef]
  6. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. A general iterative method for finding common fixed point of finite family of demicontractive mappings with accretive variational inequality problems in Banach spaces. Nonlinear Stud. 2020, 27, 1–24. [Google Scholar]
  7. Yao, Y.; Cho, Y.J.; Liou, Y.C. Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212, 242–250. [Google Scholar] [CrossRef]
  8. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142. [Google Scholar]
  9. Hartman, P.; Stampacchia, G. On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115, 271–310. [Google Scholar] [CrossRef]
  10. Bnouhachem, A.; Chen, Y. An iterative method for a common solution of generalized mixed equilibrium problem, variational inequalities and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 2014, 155. [Google Scholar] [CrossRef] [Green Version]
  11. Aliev, N.; Hosseini, S.M. A regularization of Fredholm type singular integral equations. Int. J. Math. Math. Sci. 2001, 26, 123–128. [Google Scholar] [CrossRef] [Green Version]
  12. Song, Y.; Bazighifan, O. A new alternative regularization method for solving generalized equilibrium problems. Mathematics 2022, 10, 1350. [Google Scholar] [CrossRef]
  13. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  14. Wairojjana, N.; Younis, M.; Rehman, H.U.; Pholasa, N. Modified viscosity subgradient extragradient-like algorithms for solving monotone variational inequalities problems. Axioms 2020, 9, 118. [Google Scholar] [CrossRef]
  15. Rehman, H.; Kumam, P.; Ozdemir, M.; Argyros, I.K.; Kumam, W. Three novel inertial explicit Tseng’s extragradient methods for solving pseudomonotone variational inequalities. Optimization 2021, 2021, 1–34. [Google Scholar] [CrossRef]
  16. Hammad, H.A.; Rehman, H.; De la Sen, M. Advanced Algorithms and Common Solutions to Variational Inequalities. Symmetry 2020, 12, 1198. [Google Scholar] [CrossRef]
  17. Abubakar, J.; Kumam, P.; ur Rehman, H. Self-adaptive inertial subgradient extragradient scheme for pseudomonotone variational inequality problem. Int. J. Nonlinear Sci. Numer. Simul. 2022, 23, 77–96. [Google Scholar] [CrossRef]
  18. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  19. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert spaces. Numer. Algor. 2019, 80, 741–752. [Google Scholar] [CrossRef]
  20. Tan, B.; Qin, X. Self adaptive viscosity-type inertial extragradient algorithms for solving variational inequalities with applications. Math. Model. Anal. 2022, 27, 41–58. [Google Scholar] [CrossRef]
  21. Mordukhovich, B. Variational Analysis and Applications; Springer: Cham, Switzerland, 2018. [Google Scholar]
  22. Dempe, S. Foundations of Bilevel Programming; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  23. Luo, Z.Q.; Pang, J.S.; Ralph, D. Mathematical Programs with Equilibrium Constraints; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  24. Yao, Y.; Marino, G.; Muglia, L. A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 2014, 63, 559–569. [Google Scholar] [CrossRef]
  25. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Kluwer Academic: Dordrecht, The Netherlands, 2014. [Google Scholar]
  26. Trujillo, C.R.; Zlobec, S. Bilevel convex programming models. Optimization 2009, 58, 1009–1028. [Google Scholar] [CrossRef]
  27. Solodov, M. An explicit descent method for bilevel convex optimization. J. Convex Anal. 2007, 14, 227–237. [Google Scholar]
  28. Glackin, J.; Ecker, J.G.; Kupferschmid, M. Solving bilevel linear programs using multiple objective linear programming. J. Optim. Theory Appl. 2009, 140, 197–212. [Google Scholar] [CrossRef]
  29. Petrot, C.N. Regularization and iterative method for general variational inequality problem in Hilbert spaces. J. Inequal. Appl. 2011, 2011, 21. [Google Scholar]
  30. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Space; Springer: New York, NY, USA, 2011. [Google Scholar]
  31. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264. [Google Scholar] [CrossRef]
  32. Takahashi, W. Nonlinear Functional Analysis: Fixed Point Theory and Its Applications; Yokohama Publishers: Yokohama, Japan, 2000. [Google Scholar]
  33. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  34. Xu, H.K. Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
Figure 1. Example 3, (left): Case I and (right): Case II.
Figure 2. Example 3, (left): Case III and (right): Case IV.
Figure 3. Example 4, (left): Case I and (right): Case II.
Figure 4. Example 4, (left): Case III and (right): Case IV.
Figure 5. Example 5, (left): m = 5 and (right): m = 10.
Figure 6. Example 5, (left): m = 100 and (right): m = 200.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
