Article

Two Regularization Methods for the Variational Inequality Problem over the Set of Solutions of the Generalized Mixed Equilibrium Problem

by
Yanlai Song
1,*,† and
Omar Bazighifan
2,3,*,†
1
College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
2
Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
3
Department of Mathematics, Faculty of Science, Hadhramout University, Mukalla 50512, Yemen
*
Authors to whom correspondence should be addressed.
†
These authors contributed equally to this work.
Mathematics 2022, 10(16), 2981; https://doi.org/10.3390/math10162981
Submission received: 23 July 2022 / Revised: 12 August 2022 / Accepted: 15 August 2022 / Published: 18 August 2022

Abstract

In this work, we consider bilevel problems, namely, variational inequality problems over the set of solutions of generalized mixed equilibrium problems. Two new inertial extragradient methods are proposed for solving these problems. Under appropriate conditions, we prove strong convergence theorems for the proposed methods by means of a regularization technique. Finally, some numerical examples are provided to show the efficiency of the proposed algorithms.

1. Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖, and let C be a nonempty, closed, and convex subset of H. Let Ψ : C × C → ℝ be a nonlinear bifunction, A : C → H a nonlinear mapping, and Φ : C → ℝ ∪ {+∞} a proper, convex, lower semicontinuous function. The generalized mixed equilibrium problem (GMEP) is defined as:
Find a point u* ∈ C such that Ψ(u*, v) + Φ(v) − Φ(u*) + ⟨Au*, v − u*⟩ ≥ 0, ∀v ∈ C. (1)
The set of solutions of (1) is denoted by GMEP(Ψ, Φ, A). Next, we provide some special cases of the GMEP (1).
(I) If A = 0, the GMEP (1) reduces to the mixed equilibrium problem (MEP):
Find a point u* ∈ C such that Ψ(u*, v) + Φ(v) − Φ(u*) ≥ 0, ∀v ∈ C. (2)
The set of solutions of the MEP (2) is denoted by MEP(Ψ, Φ).
(II) If Φ = 0, the GMEP (1) becomes the generalized equilibrium problem (GEP):
Find a point u* ∈ C such that Ψ(u*, v) + ⟨Au*, v − u*⟩ ≥ 0, ∀v ∈ C. (3)
The set of solutions of the GEP (3) is denoted by GEP(Ψ, A). In particular, if A = 0 in (3), the GEP reduces to the classical equilibrium problem, which is to
find a point u* ∈ C such that Ψ(u*, v) ≥ 0, ∀v ∈ C, (4)
and whose solution set is denoted by EP(Ψ).
The GMEP is a generalization of many mathematical models, such as saddle point problems, variational inequality problems, fixed point problems, minimization problems, Nash equilibrium problems, and others; see [1,2,3,4,5] for examples. Many authors have studied and proposed iterative algorithms for solving GMEPs and related optimization problems; see, for instance, [6,7,8].
(III) If Ψ(u, v) = 0 for all u, v ∈ C and Φ(u) = 0 for all u ∈ C, then the GMEP (1) reduces to the variational inequality problem (VIP):
Find a point u* ∈ C such that ⟨Au*, v − u*⟩ ≥ 0, ∀v ∈ C, (5)
whose solution set is denoted by VI(C, A). The VIP was first introduced by Fichera [9] in 1963 as Signorini's contact problem. Later, it was studied by Stampacchia [10] for modeling problems arising from mechanics. This problem is also known to have numerous applications in diverse fields such as engineering, physics, mathematical programming, and economics. It can be considered a central problem in nonlinear analysis and optimization, since the theory of variational inequalities provides a natural, convenient, and unified framework for many network equilibrium problems, minimization problems, systems of nonlinear equations, complementarity problems, and others (see [11,12,13]). Hence, the theory has become an area of great interest to numerous researchers, and there has been increasing interest in developing implementable and efficient methods for solving VIPs.
Many methods for solving the VIP (5) have been introduced and studied (see, for example, [14,15,16]). One of the most popular is the extragradient method (EGM), proposed by Korpelevich [7] (and independently by Antipin [17]) for solving saddle point problems. It was subsequently extended to VIPs with monotone and Lipschitz continuous mappings in both Euclidean and Hilbert spaces. More precisely, the EGM is described as follows:
y_n = P_C(x_n − rAx_n),  x_{n+1} = P_C(x_n − rAy_n), (6)
where r ∈ (0, 1/L) and L is the Lipschitz constant of A. Note that the EGM might require a prohibitive amount of computation time when the structure of the feasible set C is complex, since it needs two projections onto C per iteration; this can affect the efficiency of the method. In an attempt to overcome this computational drawback, Tseng [18] introduced a new extragradient method (TEGM), which is stated below as Algorithm 1.
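For intuition, the two projection steps of the EGM (6) can be sketched numerically. The problem data below (A(x) = x − b, the box C = [0, 1]², and the stepsize r = 0.5) are illustrative assumptions, not an example from this paper; for this choice the VIP solution is simply P_C(b).

```python
import numpy as np

# Minimal sketch of the extragradient method (EGM) under illustrative
# assumptions: A(x) = x - b is monotone and 1-Lipschitz, C = [0, 1]^2,
# so P_C is a componentwise clip and the VIP solution is P_C(b).
def project_box(x):
    return np.clip(x, 0.0, 1.0)

b = np.array([0.3, 1.7])
A = lambda x: x - b

x = np.zeros(2)
r = 0.5  # stepsize r in (0, 1/L) with L = 1
for _ in range(200):
    y = project_box(x - r * A(x))  # extrapolation step
    x = project_box(x - r * A(y))  # correction step

print(x)  # approaches P_C(b) = [0.3, 1.0]
```

Note that each iteration calls the projection twice, which is the computational drawback discussed above when P_C is expensive.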
One of the novelties of the TEGM is that it requires only one projection onto C per iteration. It is worth pointing out, however, that the TEGM needs knowledge, or at least an approximation, of the Lipschitz constant of A, which limits the applicability of the method. On the other hand, we note that, under appropriate settings, both the EGM (6) and Algorithm 1 converge to a solution of the variational inequality only weakly, whereas Bauschke and Combettes [19] remarked that strong convergence is more desirable than weak convergence in solving some optimization problems. To establish a strongly convergent algorithm (Algorithm 2) that does not require prior knowledge of the Lipschitz constant of A, Yang et al. [20] suggested a new self-adaptive Tseng's extragradient method (STEM):
Algorithm 1 Tseng’s extragradient method (TEGM).
Initialization: Take r > 0 and let x_1 ∈ H be arbitrary.
Step 1. Given x_n (n ≥ 1), compute
y_n = P_C(x_n − rAx_n).
Step 2. Calculate the next iterate
x_{n+1} = y_n − r(Ay_n − Ax_n).
Algorithm 2 Self-adaptive Tseng’s extragradient method (STEM).
Initialization: Take {α_n} ⊂ (0, +∞), r_1 > 0, υ ∈ (0, 1), and let x_1 ∈ H be arbitrary.
Step 1. Given x_n (n ≥ 1), compute
y_n = P_C(x_n − r_n Ax_n)
and
z_n = y_n − r_n(Ay_n − Ax_n),
where {r_n} is updated by
r_{n+1} = min{ υ‖x_n − y_n‖ / ‖Ax_n − Ay_n‖, r_n } if Ax_n ≠ Ay_n; r_{n+1} = r_n otherwise.
If x_n = y_n, then stop: x_n is a solution. Otherwise, go to Step 2.
Step 2. Compute
x_{n+1} = α_n g(x_n) + (1 − α_n) z_n.
Step 3. Set n := n + 1 and go to Step 1.
Under mild conditions, they proved strong convergence of the introduced method to an element of VI(C, A). The algorithm proposed in [20] applies a new stepsize criterion that automatically updates the iteration stepsize by a simple computation using previously known information. However, this method generates a monotonically decreasing sequence of stepsizes, which might in turn affect its execution efficiency.
Motivated by the ideas of the EGM, the TEGM, and the STEM, Tan and Qin [8] proposed a new inertial extragradient method with non-monotonic stepsizes (SVTE) for solving the VIPs as follows:
They proved strong convergence of Algorithm 3 to an element of VI ( C , A ) under appropriate conditions.
Algorithm 3 Self adaptive viscosity-type inertial Tseng’s extragradient algorithm (SVTE).
Initialization: Take {ϵ_n} ⊂ (0, +∞), {σ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), ϱ > 0, r_1 > 0, υ ∈ (0, 1), and let x_0, x_1 ∈ H be arbitrary.
Step 1. Given x_{n−1} and x_n (n ≥ 1), compute
u_n = x_n + h_n(x_n − x_{n−1}),
where
h_n = min{ ϵ_n / ‖x_n − x_{n−1}‖, ϱ } if x_n ≠ x_{n−1}; h_n = ϱ otherwise.
Step 2. Compute
y_n = P_C(u_n − r_n Au_n)
and
z_n = y_n − r_n(Ay_n − Au_n),
where {r_n} is updated by
r_{n+1} = min{ υ‖u_n − y_n‖ / ‖Au_n − Ay_n‖, r_n + σ_n } if Au_n ≠ Ay_n; r_{n+1} = r_n + σ_n otherwise.
Step 3. Compute
x_{n+1} = α_n g(x_n) + (1 − α_n) z_n.
Step 4. Set n := n + 1 and return to Step 1.
Recently, a great deal of effort has been devoted to the bilevel variational inequality problem (BVIP) which consists of the following:
Find u* ∈ VI(C, A) such that ⟨Fu*, v − u*⟩ ≥ 0, ∀v ∈ VI(C, A),
where A : C → H and F : C → H are two single-valued mappings. The BVIP is very general in the sense that it includes, as special cases, the complementarity problem, the variational inequality problem, the Nash equilibrium problem, the fixed point problem, and the optimization problem; see, e.g., [21,22,23].
Since the VIP is a special case of the GMEP (see, e.g., [24]), it is natural to extend the BVIP to more general bilevel problems and to propose effective methods for approximating numerical solutions of such problems.
The purpose of this paper is to analyze and study a more general bilevel problem: a variational inequality over the set of solutions of the generalized mixed equilibrium problem (VOGME). More precisely, the VOGME is presented as follows:
Find u* ∈ GMEP(Ψ, Φ, A) such that ⟨Fu*, v − u*⟩ ≥ 0, ∀v ∈ GMEP(Ψ, Φ, A), (10)
where Ψ is a bifunction, F is strongly monotone, and A is monotone. Combining the TEGM, the inertial idea, and the regularization technique, we propose two alternative methods for solving the VOGME (10). Strong convergence theorems for the suggested algorithms are established under appropriate and relatively weak conditions. The theoretical results are also confirmed by two numerical examples.

2. Preliminaries

Throughout this section, let C be a nonempty closed convex subset of a Hilbert space H . We now recall the following concepts of mappings and lemmas for the convergence analysis of our methods.
Notation 1.
Let {x_n} be a sequence and x a point in a Hilbert space H.
  • x_n → x means that {x_n} converges to x strongly;
  • x_n ⇀ x means that {x_n} converges to x weakly;
  • ω(x_n) denotes the set of weak limits of {x_n}, i.e., ω(x_n) := {x : x_{n_j} ⇀ x for some subsequence {x_{n_j}} of {x_n}};
  • Fix(T) denotes the set of fixed points of a mapping T.
Definition 1.
A mapping T : C → H is said to be:
(i) 
monotone, if
⟨Tx − Ty, x − y⟩ ≥ 0, ∀x, y ∈ C;
(ii) 
α-strongly monotone, if there exists a number α > 0 such that
⟨Tx − Ty, x − y⟩ ≥ α‖x − y‖², ∀x, y ∈ C;
(iii) 
λ-inverse strongly monotone, if there exists a positive number λ such that
⟨Tx − Ty, x − y⟩ ≥ λ‖Tx − Ty‖², ∀x, y ∈ C;
(iv) 
k-Lipschitz continuous, if there exists k > 0 such that
‖Tx − Ty‖ ≤ k‖x − y‖, ∀x, y ∈ C.
Definition 2.
The equilibrium bifunction Ψ : C × C → ℝ is said to be monotone if
Ψ(x, y) + Ψ(y, x) ≤ 0, ∀x, y ∈ C.
Definition 3.
A single-valued operator A : C → H is said to be hemicontinuous if the real function
t ↦ ⟨A((1 − t)u + tv), w⟩
is continuous on [0, 1] for all u, v, w ∈ C.
Lemma 1
([14]). Let {z_n} be a sequence of nonnegative real numbers. Assume that
z_{n+1} ≤ (1 − a_n)z_n + b_n, ∀n ∈ ℕ,
where {a_n}, {b_n} satisfy the conditions
(i) 
Σ_{n=1}^∞ a_n = ∞ and lim_{n→∞} a_n = 0;
(ii) 
lim sup_{n→∞} b_n/a_n ≤ 0.
Then, lim_{n→∞} z_n = 0.
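For intuition, Lemma 1 can be checked numerically. The choices a_n = 1/(n+1) and b_n = 1/(n+1)² below are illustrative assumptions satisfying (i) and (ii), since b_n/a_n = 1/(n+1) → 0.

```python
# Numerical sanity check of Lemma 1 with illustrative choices (not from the
# paper): a_n = 1/(n+1) gives sum a_n = infinity and a_n -> 0, while
# b_n = 1/(n+1)^2 gives b_n/a_n -> 0, so z_n should tend to 0.
z = 1.0
for n in range(1, 100_001):
    a_n = 1.0 / (n + 1)
    b_n = 1.0 / (n + 1) ** 2
    # recursion z_{n+1} <= (1 - a_n) z_n + b_n, taken with equality
    z = (1 - a_n) * z + b_n

print(z)  # close to 0 (roughly of order log(n)/n)
```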
Assumption 1.
Let C be a nonempty, convex, and closed subset of a Hilbert space H, and Ψ : C × C → ℝ be a bifunction satisfying the following restrictions:
(A1) 
Ψ(x, x) = 0, ∀x ∈ C;
(A2) 
Ψ is monotone;
(A3) 
lim sup_{t→0+} Ψ(x + t(z − x), y) ≤ Ψ(x, y), ∀x, y, z ∈ C;
(A4) 
for all x ∈ C, Ψ(x, ·) is convex and lower semicontinuous.
Lemma 2
([1]). Let Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4). For x ∈ H and r > 0, define a mapping J_r : H → C by
J_r x = { z ∈ C : Ψ(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C }.
Then, the following results hold:
(i) 
J_r x ≠ ∅ for each x ∈ H;
(ii) 
J_r is single-valued;
(iii) 
J_r is a firmly nonexpansive mapping, i.e., for all x, y ∈ H, ‖J_r x − J_r y‖² ≤ ⟨J_r x − J_r y, x − y⟩;
(iv) 
Fix(J_r) = EP(Ψ);
(v) 
EP(Ψ) is closed and convex.
Lemma 3.
Let A : C → H be monotone and hemicontinuous, Φ : C → ℝ be a proper lower semicontinuous and convex function, and Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4). Then the GMEP (1) is equivalent to the following problem:
Find a point u* ∈ C such that −Ψ(v, u*) + Φ(v) − Φ(u*) + ⟨Av, v − u*⟩ ≥ 0, ∀v ∈ C. (11)
Proof. 
Let u* ∈ C be a solution of the GMEP (1). We find from Assumption 1 (A2) and the monotonicity of A that
−Ψ(v, u*) + Φ(v) − Φ(u*) + ⟨Av, v − u*⟩ ≥ Ψ(u*, v) + Φ(v) − Φ(u*) + ⟨Au*, v − u*⟩ ≥ 0. (12)
Thus, u* ∈ C is a solution of the problem (11).
Conversely, let u* ∈ C be a solution of the problem (11). Setting v_t = (1 − t)u* + tv for all v ∈ C and t ∈ (0, 1], we have v_t ∈ C. Since u* is a solution of the problem (11), it follows that
−Ψ(v_t, u*) + Φ(v_t) − Φ(u*) + ⟨Av_t, v_t − u*⟩ ≥ 0. (13)
Utilizing Assumption 1 (A1) and (A4), we deduce
0 = Ψ(v_t, v_t) ≤ (1 − t)Ψ(v_t, u*) + tΨ(v_t, v) = (1 − t)[Ψ(v_t, u*) − Φ(v_t) + Φ(u*) − ⟨Av_t, v_t − u*⟩] + tΨ(v_t, v) + (1 − t)[Φ(v_t) − Φ(u*) + ⟨Av_t, v_t − u*⟩]. (14)
Substituting (13) into (14), we obtain
0 ≤ tΨ(v_t, v) + (1 − t)[Φ(v_t) − Φ(u*) + ⟨Av_t, v_t − u*⟩]. (15)
Due to the assumption that Φ is convex, and noting that v_t − u* = t(v − u*), we derive that
Φ(v_t) − Φ(u*) + ⟨Av_t, v_t − u*⟩ ≤ t[Φ(v) − Φ(u*) + ⟨Av_t, v − u*⟩]. (16)
Substituting (16) into (15), we find
0 ≤ tΨ(v_t, v) + (1 − t)t[Φ(v) − Φ(u*) + ⟨Av_t, v − u*⟩],
which, after dividing by t > 0, yields
0 ≤ Ψ(v_t, v) + (1 − t)[Φ(v) − Φ(u*) + ⟨Av_t, v − u*⟩].
Letting t → 0+, noting (A3) and the hemicontinuity of A, we derive
0 ≤ Ψ(u*, v) + Φ(v) − Φ(u*) + ⟨Au*, v − u*⟩.
Hence, we conclude that u* ∈ C is a solution of the GMEP (1). This completes the proof.    □
Applying Lemma 3, we can obtain the following results immediately.
Corollary 1
([25]). Suppose A : C → H is a hemicontinuous and monotone operator. Then u* is a solution of the VIP (5) if and only if u* is a solution of the following problem:
Find u* ∈ C such that ⟨Av, v − u*⟩ ≥ 0, ∀v ∈ C.
Lemma 4.
Let A : C → H be monotone and hemicontinuous, Φ : C → ℝ be a proper lower semicontinuous and convex function, and Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4). For x ∈ H and r > 0, define a mapping T_r : H → C by
T_r x = { z ∈ C : Ψ(z, y) + ⟨Az, y − z⟩ + Φ(y) − Φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C }.
Then, the following results hold:
(i) 
T_r x ≠ ∅ for each x ∈ H;
(ii) 
T_r is single-valued;
(iii) 
T_r is a firmly nonexpansive mapping;
(iv) 
Fix(T_r) = GMEP(Ψ, Φ, A);
(v) 
GMEP(Ψ, Φ, A) is closed and convex.
Furthermore, if A is β-strongly monotone, then it also holds that
(iii)' 
T_r is a contraction on C with contraction coefficient 1/(1 + rβ).
Proof. 
Letting F̃(x, y) = Ψ(x, y) + Φ(y) − Φ(x) + ⟨Ax, y − x⟩ for all x, y ∈ C, it is not difficult to check that F̃ satisfies Assumption 1 (A1)–(A4). Then one finds from Lemma 2 that (i)–(v) hold. Further, if A is β-strongly monotone, then we have, for x, y ∈ H:
Ψ(T_r x, T_r y) + ⟨AT_r x, T_r y − T_r x⟩ + Φ(T_r y) − Φ(T_r x) + (1/r)⟨T_r y − T_r x, T_r x − x⟩ ≥ 0
and
Ψ(T_r y, T_r x) + ⟨AT_r y, T_r x − T_r y⟩ + Φ(T_r x) − Φ(T_r y) + (1/r)⟨T_r x − T_r y, T_r y − y⟩ ≥ 0.
Adding the two inequalities and noticing Assumption 1 (A2) and the β-strong monotonicity of A, we have
−β‖T_r y − T_r x‖² − (1/r)‖T_r y − T_r x‖² + (1/r)⟨T_r y − T_r x, y − x⟩ ≥ 0,
which is equivalent to
(1 + rβ)‖T_r y − T_r x‖² ≤ ⟨T_r y − T_r x, y − x⟩.
Thus,
‖T_r y − T_r x‖ ≤ (1/(1 + rβ))‖y − x‖.
This implies T_r is contractive with contraction coefficient 1/(1 + rβ), which is the desired result (iii)'.    □
If A = 0 in Lemma 4, then we obtain the following corollary.
Corollary 2.
Let Φ : C → ℝ be a proper lower semicontinuous and convex function, and Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4). For x ∈ H and r > 0, let the mapping K_r : H → C be defined by
K_r x = { z ∈ C : Ψ(z, y) + Φ(y) − Φ(z) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C }.
Then, we find that
(i) 
K_r x ≠ ∅ for each x ∈ H;
(ii) 
K_r is single-valued;
(iii) 
K_r is a firmly nonexpansive mapping;
(iv) 
Fix(K_r) = MEP(Ψ, Φ);
(v) 
MEP(Ψ, Φ) is closed and convex.
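To see what K_r computes in a concrete case: if Ψ ≡ 0 and C = H = ℝⁿ, the defining inequality of K_r is exactly the first-order optimality condition of min_y { Φ(y) + (1/(2r))‖y − x‖² }, so K_r coincides with the proximal mapping of rΦ. The sketch below is an assumption-based illustration (not an example from the paper) for Φ(x) = ‖x‖_1, whose proximal mapping is componentwise soft-thresholding.

```python
import numpy as np

# Special case of K_r under illustrative assumptions: Psi = 0, C = H = R^n,
# Phi(x) = ||x||_1. Then K_r x = argmin_y Phi(y) + (1/(2r)) * ||y - x||^2,
# i.e., componentwise soft-thresholding with threshold r.
def K_r(x, r):
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

x = np.array([1.5, -0.2, 0.7])
z = K_r(x, r=0.5)
print(z)  # [1.   0.   0.2]
```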

3. Main Results

In this section, we aim to establish strong convergence results for the VOGME (10) by using Tseng's extragradient method, the inertial idea, and the regularization method. In what follows, let C be a nonempty closed convex subset of a Hilbert space H. Suppose that Ψ : C × C → ℝ is a bifunction satisfying Assumption 1 (A1)–(A4), Φ : C → ℝ is a proper lower semicontinuous and convex function, A : H → H is monotone and L-Lipschitz continuous, and F : H → H is h-strongly monotone and k-Lipschitz continuous. Additionally, we assume that GMEP(Ψ, Φ, A) is nonempty. One finds from Lemma 4 (v) that GMEP(Ψ, Φ, A) is closed and convex. This, together with the strong monotonicity and continuity of F, ensures the uniqueness of the solution x of the VOGME (10). Together with the GMEP (1), we consider the following regularized generalized mixed equilibrium problem (RGMEP) for each α > 0: Find a point x ∈ C such that
Ψ(x, y) + Φ(y) − Φ(x) + ⟨Ax + αFx, y − x⟩ ≥ 0, ∀y ∈ C. (17)
Under the above assumptions, the mapping Ã := A + αF is strongly monotone for all α > 0. Notice here that the RGMEP (17) is exactly the GMEP(Ψ, Φ, Ã). From Lemma 4 (v) and (iii)' and the Banach contraction principle, one deduces that the RGMEP (17) has a unique solution, denoted by x_α, for each α > 0. We study the relationship between x_α and x as follows.
Lemma 5.
Let A : H → H be monotone and L-Lipschitz continuous, F : H → H be h-strongly monotone and k-Lipschitz continuous, Φ : C → ℝ be a proper lower semicontinuous and convex function, and Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4). Then it holds that
(i) ‖x_α‖ ≤ ‖x‖ + ‖Fx‖/h, ∀α > 0;
(ii) lim_{α→0+} ‖x_α − x‖ = 0;
(iii) ‖x_α − x_β‖ ≤ (|α − β|/α)M, ∀α, β > 0, for some M > 0.
Proof. 
(i) Taking any p ∈ GMEP(Ψ, Φ, A), we have Ψ(p, y) + Φ(y) − Φ(p) + ⟨Ap, y − p⟩ ≥ 0 for all y ∈ C, which, with y = x_α ∈ C, implies that
Ψ(p, x_α) + Φ(x_α) − Φ(p) + ⟨Ap, x_α − p⟩ ≥ 0. (18)
Since x_α is the solution of the RGMEP, we also find
Ψ(x_α, y) + Φ(y) − Φ(x_α) + ⟨Ax_α + αFx_α, y − x_α⟩ ≥ 0, ∀y ∈ C. (19)
Substituting y = p ∈ C into (19), we obtain
Ψ(x_α, p) + Φ(p) − Φ(x_α) + ⟨Ax_α + αFx_α, p − x_α⟩ ≥ 0. (20)
Summing up inequalities (18) and (20), we obtain
Ψ(p, x_α) + Ψ(x_α, p) + ⟨Ax_α − Ap, p − x_α⟩ + α⟨Fx_α, p − x_α⟩ ≥ 0. (21)
Observe that
⟨Fx_α, p − x_α⟩ = ⟨Fx_α − Fp, p − x_α⟩ + ⟨Fp, p − x_α⟩. (22)
It follows from the h-strong monotonicity of F and (22) that
⟨Fx_α, p − x_α⟩ ≤ −h‖x_α − p‖² + ⟨Fp, p − x_α⟩. (23)
Substituting (23) into (21), we deduce
Ψ(p, x_α) + Ψ(x_α, p) + ⟨Ax_α − Ap, p − x_α⟩ − αh‖x_α − p‖² + α⟨Fp, p − x_α⟩ ≥ 0. (24)
Thanks to Assumption 1 (A2) and the monotonicity of A, we obtain
h‖x_α − p‖² ≤ ⟨Fp, p − x_α⟩. (25)
Observe that
⟨Fp, p − x_α⟩ ≤ ‖Fp‖‖p − x_α‖, (26)
which together with (25) yields
‖x_α‖ ≤ ‖p‖ + ‖Fp‖/h, ∀p ∈ GMEP(Ψ, Φ, A).
Since x ∈ GMEP(Ψ, Φ, A), we have
‖x_α‖ ≤ ‖x‖ + ‖Fx‖/h.
This is the desired result (i); in particular, {x_α} is bounded.
(ii) Since {x_α} is bounded and C is closed and convex (hence weakly closed), there exist a sequence α_k → 0+ and a subsequence {x_{α_k}} of {x_α} with some point x* ∈ C such that x_{α_k} ⇀ x*. Taking into account Assumption 1 (A2) and the monotonicity of A, we find, for all y ∈ C, that
[−Ψ(y, x_{α_k}) + Φ(y) − Φ(x_{α_k}) + ⟨Ay, y − x_{α_k}⟩] − [Ψ(x_{α_k}, y) + Φ(y) − Φ(x_{α_k}) + ⟨Ax_{α_k} + α_k Fx_{α_k}, y − x_{α_k}⟩] = −Ψ(y, x_{α_k}) − Ψ(x_{α_k}, y) + ⟨Ay − Ax_{α_k}, y − x_{α_k}⟩ − α_k⟨Fx_{α_k}, y − x_{α_k}⟩ ≥ −α_k⟨Fx_{α_k}, y − x_{α_k}⟩. (27)
Due to the fact that x_{α_k} is the solution of the RGMEP (17), and utilizing (27), we deduce that
−Ψ(y, x_{α_k}) + Φ(y) − Φ(x_{α_k}) + ⟨Ay, y − x_{α_k}⟩ ≥ −α_k⟨Fx_{α_k}, y − x_{α_k}⟩, ∀y ∈ C. (28)
Since y ↦ Ψ(x, y) and x ↦ Φ(x) are convex and lower semicontinuous, they are also weakly lower semicontinuous. Passing to the limit superior in (28) as k → ∞, and noting that {x_{α_k}} and {Fx_{α_k}} are bounded while α_k → 0, we obtain
−Ψ(y, x*) + Φ(y) − Φ(x*) + ⟨Ay, y − x*⟩ ≥ 0, ∀y ∈ C. (29)
Utilizing Lemma 3, we obtain
x* ∈ GMEP(Ψ, Φ, A).
In view of (25), we see that
0 ≤ ⟨Fp, p − x_{α_k}⟩, ∀p ∈ GMEP(Ψ, Φ, A). (30)
Passing to the limit in (30) as k → ∞ and noticing that x_{α_k} ⇀ x*, we deduce
0 ≤ ⟨Fp, p − x*⟩, ∀p ∈ GMEP(Ψ, Φ, A),
which yields, by Corollary 1, that x* is the solution of the VOGME (10). As the solution x of the VOGME (10) is unique, we obtain x* = x. Therefore, the set ω(x_α) has the single element x; that is, ω(x_α) = {x}, and hence the whole net {x_α} converges weakly to x as α → 0+. Again using (25) with p = x, we find
‖x_α − x‖² ≤ (1/h)⟨Fx, x − x_α⟩. (31)
Passing to the limit in (31) as α → 0+, we infer
lim_{α→0+} ‖x_α − x‖ = 0.
(iii) Assume that x_α and x_β are the solutions of the RGMEP (17) with parameters α and β, respectively. Then we have
Ψ(x_α, x_β) + Φ(x_β) − Φ(x_α) + ⟨Ax_α + αFx_α, x_β − x_α⟩ ≥ 0
and
Ψ(x_β, x_α) + Φ(x_α) − Φ(x_β) + ⟨Ax_β + βFx_β, x_α − x_β⟩ ≥ 0.
Summing up the two inequalities above, we have
Ψ(x_α, x_β) + Ψ(x_β, x_α) + ⟨Ax_α − Ax_β, x_β − x_α⟩ + α⟨Fx_α − Fx_β, x_β − x_α⟩ + (α − β)⟨Fx_β, x_β − x_α⟩ ≥ 0.
Thanks to the monotonicity of A and Assumption 1 (A2), we obtain
α⟨Fx_α − Fx_β, x_β − x_α⟩ + (α − β)⟨Fx_β, x_β − x_α⟩ ≥ 0,
or, equivalently,
(β − α)⟨Fx_β, x_α − x_β⟩ ≥ α⟨Fx_α − Fx_β, x_α − x_β⟩.
It follows from the h-strong monotonicity of F that
(β − α)⟨Fx_β, x_α − x_β⟩ ≥ αh‖x_α − x_β‖²,
which leads to
|β − α|‖Fx_β‖‖x_α − x_β‖ ≥ αh‖x_α − x_β‖².
Therefore, we have
‖x_α − x_β‖ ≤ (|β − α|/α)(‖Fx_β‖/h). (32)
Since the operator F is Lipschitz continuous, noting (i), we see that {Fx_α} is bounded. Therefore, according to (32), there exists M > 0 such that ‖x_α − x_β‖ ≤ (|β − α|/α)M. This completes the proof.    □
In the following, together with the regularization method, we introduce two new numerical algorithms for solving the VOGME (10). The first algorithm (IMR) is presented as follows.
Theorem 1.
Let C be a nonempty closed convex subset of a real Hilbert space H, Ψ : C × C → ℝ be a bifunction satisfying Assumption 1 (A1)–(A4), Φ : C → ℝ be a proper lower semicontinuous and convex function, A : H → H be ω-inverse strongly monotone, and F : H → H be h-strongly monotone and k-Lipschitz continuous. Let the mapping K_r be as in Corollary 2. Then the sequence {x_n} generated by Algorithm 4 converges strongly to the unique solution x of the VOGME (10) under Assumption 2 (C1)–(C5).
Assumption 2.
(C1) 
lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞;
(C2) 
lim_{n→∞} (α_n − α_{n+1})/α_n² = 0;
(C3) 
lim_{n→∞} ϵ_n/α_n = 0;
(C4) 
GMEP(Ψ, Φ, A) ≠ ∅;
(C5) 
0 < a ≤ r_n ≤ b < 2ω.
Algorithm 4 The inertial method with regularization (IMR).
Initialization: Set {r_n} ⊂ (0, +∞), {ϵ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), and ϱ > 0. Let x_0, x_1 ∈ H be arbitrary.
Step 1. Given x_{n−1} and x_n (n ≥ 1), compute
u_n = x_n + h_n(x_n − x_{n−1}),
where
h_n = min{ ϵ_n / ‖x_n − x_{n−1}‖, ϱ } if x_n ≠ x_{n−1}; h_n = ϱ otherwise.
Step 2. Compute
x_{n+1} = K_{r_n}(u_n − r_n(Au_n + α_n Fu_n)).
Step 3. Set n := n + 1 and return to Step 1.
Proof. 
By (C1) and Lemma 5 (ii), we have x_{α_n} → x as n → ∞. Hence, it is sufficient to prove that
lim_{n→∞} ‖x_n − x_{α_n}‖ = 0.
Since x_α is the solution of the RGMEP (17) for all α > 0, noting the definition of K_r and Corollary 2 (ii), we have
x_{α_n} = K_{r_n}(x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n})).
According to Corollary 2 (iii), K_{r_n} is firmly nonexpansive and hence nonexpansive, so we deduce
‖x_{n+1} − x_{α_n}‖² = ‖K_{r_n}(u_n − r_n(Au_n + α_n Fu_n)) − K_{r_n}(x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n}))‖²
≤ ‖(u_n − r_n(Au_n + α_n Fu_n)) − (x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n}))‖²
= ‖u_n − x_{α_n}‖² − 2r_n⟨u_n − x_{α_n}, (Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})⟩ + r_n²‖(Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})‖²
= ‖u_n − x_{α_n}‖² − 2r_n⟨u_n − x_{α_n}, Au_n − Ax_{α_n}⟩ − 2r_nα_n⟨u_n − x_{α_n}, Fu_n − Fx_{α_n}⟩ + r_n²‖(Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})‖²
≤ ‖u_n − x_{α_n}‖² − 2r_nω‖Au_n − Ax_{α_n}‖² − 2r_nα_nh‖u_n − x_{α_n}‖² + r_n²‖(Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})‖².
Noticing (C5), we select three positive numbers σ_1, σ_2, σ_3 such that
2ω − b(1 + σ_1) > 0 and 2h − σ_1 − σ_2 − σ_3 > 0.
Utilizing the Cauchy–Schwarz and Young inequalities, we obtain
‖(Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})‖² = ‖(Au_n − Ax_{α_n}) + α_n(Fu_n − Fx_{α_n})‖²
≤ ‖Au_n − Ax_{α_n}‖² + 2α_n‖Au_n − Ax_{α_n}‖‖Fu_n − Fx_{α_n}‖ + α_n²‖Fu_n − Fx_{α_n}‖²
≤ ‖Au_n − Ax_{α_n}‖² + σ_1‖Au_n − Ax_{α_n}‖² + (1/σ_1)α_n²‖Fu_n − Fx_{α_n}‖² + α_n²‖Fu_n − Fx_{α_n}‖²
≤ (1 + σ_1)‖Au_n − Ax_{α_n}‖² + (1 + 1/σ_1)α_n²k²‖u_n − x_{α_n}‖².
From assumptions (C1), (C3), and (C5), it follows that there exists n_0 ≥ 1 such that
σ_1 − (1 + 1/σ_1)r_nα_nk² > 0,  (1 − r_nα_n(2h − σ_1))/(1 − σ_3r_nα_n) < 2,  ϵ_n < σ_2r_nα_n, and ϵ_n < 1, ∀n ≥ n_0.
This, along with the preceding three estimates, implies
‖x_{n+1} − x_{α_n}‖² ≤ [1 − r_nα_n(2h − (1 + 1/σ_1)r_nα_nk²)]‖u_n − x_{α_n}‖² − r_n(2ω − r_n(1 + σ_1))‖Au_n − Ax_{α_n}‖²
≤ [1 − r_nα_n(2h − (1 + 1/σ_1)r_nα_nk²)]‖u_n − x_{α_n}‖²
≤ (1 − r_nα_n(2h − σ_1))‖u_n − x_{α_n}‖², ∀n ≥ n_0.
Taking into account the definition of u_n, the rule for h_n in Step 1, and the choice of n_0, we obtain
‖u_n − x_{α_n}‖² = ‖x_n + h_n(x_n − x_{n−1}) − x_{α_n}‖²
≤ (‖x_n − x_{α_n}‖ + h_n‖x_n − x_{n−1}‖)²
= ‖x_n − x_{α_n}‖² + h_n²‖x_n − x_{n−1}‖² + 2‖x_n − x_{α_n}‖h_n‖x_n − x_{n−1}‖
≤ ‖x_n − x_{α_n}‖² + ϵ_n² + 2ϵ_n‖x_n − x_{α_n}‖
≤ ‖x_n − x_{α_n}‖² + ϵ_n² + ϵ_n(‖x_n − x_{α_n}‖² + 1)
≤ (1 + ϵ_n)‖x_n − x_{α_n}‖² + 2ϵ_n
≤ (1 + σ_2r_nα_n)‖x_n − x_{α_n}‖² + 2ϵ_n, ∀n ≥ n_0.
For each n ≥ n_0, by using the Cauchy–Schwarz and Young inequalities and Lemma 5 (iii), we obtain
‖x_{n+1} − x_{α_n}‖² = ‖x_{n+1} − x_{α_{n+1}}‖² + ‖x_{α_{n+1}} − x_{α_n}‖² + 2⟨x_{n+1} − x_{α_{n+1}}, x_{α_{n+1}} − x_{α_n}⟩
≥ ‖x_{n+1} − x_{α_{n+1}}‖² + ‖x_{α_{n+1}} − x_{α_n}‖² − 2‖x_{n+1} − x_{α_{n+1}}‖‖x_{α_{n+1}} − x_{α_n}‖
≥ ‖x_{n+1} − x_{α_{n+1}}‖² + ‖x_{α_{n+1}} − x_{α_n}‖² − σ_3r_nα_n‖x_{n+1} − x_{α_{n+1}}‖² − (1/(σ_3r_nα_n))‖x_{α_{n+1}} − x_{α_n}‖²
= (1 − σ_3r_nα_n)‖x_{n+1} − x_{α_{n+1}}‖² − ((1 − σ_3r_nα_n)/(σ_3r_nα_n))‖x_{α_{n+1}} − x_{α_n}‖²
≥ (1 − σ_3r_nα_n)‖x_{n+1} − x_{α_{n+1}}‖² − (1 − σ_3r_nα_n)((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²,
which yields
‖x_{n+1} − x_{α_{n+1}}‖² ≤ (1/(1 − σ_3r_nα_n))‖x_{n+1} − x_{α_n}‖² + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M².
Thus, combining the estimates above, we deduce
‖x_{n+1} − x_{α_{n+1}}‖² ≤ (1/(1 − σ_3r_nα_n))‖x_{n+1} − x_{α_n}‖² + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²
≤ ((1 − r_nα_n(2h − σ_1))/(1 − σ_3r_nα_n))‖u_n − x_{α_n}‖² + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²
≤ ((1 − r_nα_n(2h − σ_1))(1 + σ_2r_nα_n)/(1 − σ_3r_nα_n))‖x_n − x_{α_n}‖² + 4ϵ_n + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²
= ((1 − r_nα_n(2h − σ_1 − σ_2))/(1 − σ_3r_nα_n))‖x_n − x_{α_n}‖² − ((2h − σ_1)σ_2r_n²α_n²/(1 − σ_3r_nα_n))‖x_n − x_{α_n}‖² + 4ϵ_n + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²
≤ ((1 − r_nα_n(2h − σ_1 − σ_2))/(1 − σ_3r_nα_n))‖x_n − x_{α_n}‖² + 4ϵ_n + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M²
= (1 − a_n)‖x_n − x_{α_n}‖² + b_n,
where a_n = r_nα_n(2h − σ_1 − σ_2 − σ_3)/(1 − σ_3r_nα_n) and b_n = 4ϵ_n + ((α_{n+1} − α_n)²/(σ_3r_nα_n³))M². Combining conditions (C1) and (C2), we find that a_n → 0 and Σ_{n=1}^∞ a_n = +∞. Furthermore, utilizing condition (C3), we have
b_n/a_n = [4ϵ_n/α_n + ((α_{n+1} − α_n)²/(σ_3r_nα_n⁴))M²] · (1 − σ_3r_nα_n)/(r_n(2h − σ_1 − σ_2 − σ_3)) → 0.
From Lemma 1, we obtain lim_{n→∞} ‖x_n − x_{α_n}‖ = 0. Thus, we conclude that lim_{n→∞} ‖x_n − x‖ = 0. This completes the proof.    □
The convergence of Algorithm 4 is established under the assumption that A is inverse strongly monotone. In the next theorem, we consider another stepsize rule, which requires only that A be monotone. Combining the inertial Tseng extragradient method and the regularization method, we propose the following iterative algorithm (ITER).
Lemma 6
([8]). The sequence {r_n} generated by the stepsize rule in Algorithm 5 (ITER) is well defined, and lim_{n→∞} r_n = θ with θ ∈ [min{υ/L, r_0}, r_0 + ϵ], where ϵ = Σ_{n=0}^∞ σ_n.
Theorem 2.
Let C be a nonempty closed convex subset of a real Hilbert space H, Ψ : C × C → ℝ be a bifunction satisfying (A1)–(A4), and Φ : C → ℝ be a proper lower semicontinuous and convex function. Let A : H → H be monotone and L-Lipschitz continuous, and F : H → H be h-strongly monotone and k-Lipschitz continuous. Let the mapping K_r be as in Corollary 2. Then the sequence {x_n} generated by Algorithm 5 converges strongly to the unique solution x of the VOGME (10) under Assumption 2 (C1)–(C4).
Algorithm 5 The inertial Tseng extragradient method with regularization (ITER).
Initialization: Set {ϵ_n} ⊂ (0, +∞), {α_n} ⊂ (0, +∞), υ ∈ (0, 1), ϱ > 0, and r_0 > 0. Choose a nonnegative real sequence {σ_n} such that Σ_{n=0}^∞ σ_n < ∞. Let x_0, x_1 ∈ H be arbitrary.
Step 1. Given x_{n−1} and x_n (n ≥ 1), compute
u_n = x_n + h_n(x_n − x_{n−1}),
where
h_n = min{ ϵ_n / ‖x_n − x_{n−1}‖, ϱ } if x_n ≠ x_{n−1}; h_n = ϱ otherwise.
Step 2. Compute
y_n = K_{r_n}(u_n − r_n(Au_n + α_n Fu_n)).
Step 3. Compute
x_{n+1} = y_n − r_n(Ay_n − Au_n),
where {r_n} is updated by
r_{n+1} = min{ υ‖u_n − y_n‖ / ‖Au_n − Ay_n‖, r_n + σ_n } if Au_n ≠ Ay_n; r_{n+1} = r_n + σ_n otherwise.
Step 4. Set n := n + 1 and return to Step 1.
Proof. 
By (C1) and Lemma 5 (ii), we obtain x_{α_n} → x as n → ∞. Therefore, it is sufficient to prove that lim_{n→∞} ‖x_n − x_{α_n}‖ = 0. Observe that
‖x_{n+1} − x_{α_n}‖² = ‖y_n − r_n(Ay_n − Au_n) − x_{α_n}‖²
= ‖y_n − x_{α_n}‖² + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Au_n⟩
= ‖y_n − u_n‖² + ‖u_n − x_{α_n}‖² + 2⟨y_n − u_n, u_n − x_{α_n}⟩ + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Au_n⟩
= ‖y_n − u_n‖² + ‖u_n − x_{α_n}‖² + 2⟨y_n − u_n, u_n − y_n⟩ + 2⟨y_n − u_n, y_n − x_{α_n}⟩ + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Au_n⟩
= ‖u_n − x_{α_n}‖² − ‖y_n − u_n‖² + 2⟨y_n − u_n, y_n − x_{α_n}⟩ + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Au_n⟩.
As x_α is the solution of the RGMEP (17) for all α > 0, by the definition of K_r and Corollary 2 (ii), we have
x_{α_n} = K_{r_n}(x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n})).
According to the firm nonexpansiveness of K_{r_n} (Corollary 2 (iii)), we deduce
‖y_n − x_{α_n}‖² = ‖K_{r_n}(u_n − r_n(Au_n + α_n Fu_n)) − K_{r_n}(x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n}))‖²
≤ ⟨y_n − x_{α_n}, (u_n − r_n(Au_n + α_n Fu_n)) − (x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n}))⟩,
which leads to
⟨y_n − x_{α_n}, (u_n − r_n(Au_n + α_n Fu_n)) − (x_{α_n} − r_n(Ax_{α_n} + α_n Fx_{α_n}))⟩ − ‖y_n − x_{α_n}‖² ≥ 0.
Simple calculation shows that
⟨y_n − x_{α_n}, u_n − y_n − r_n(Au_n + α_n Fu_n) + r_n(Ax_{α_n} + α_n Fx_{α_n})⟩ ≥ 0,
or, equivalently,
⟨y_n − x_{α_n}, u_n − y_n⟩ ≥ r_n⟨y_n − x_{α_n}, (Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})⟩.
Substituting this into the expansion of ‖x_{n+1} − x_{α_n}‖² above, and noticing the monotonicity of A and the stepsize rule in Step 3, we obtain
‖x_{n+1} − x_{α_n}‖² ≤ ‖u_n − x_{α_n}‖² − ‖y_n − u_n‖² − 2r_n⟨y_n − x_{α_n}, (Au_n + α_n Fu_n) − (Ax_{α_n} + α_n Fx_{α_n})⟩ + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Au_n⟩
= ‖u_n − x_{α_n}‖² − ‖y_n − u_n‖² + r_n²‖Ay_n − Au_n‖² − 2r_n⟨y_n − x_{α_n}, Ay_n − Ax_{α_n}⟩ − 2α_nr_n⟨y_n − x_{α_n}, Fu_n − Fx_{α_n}⟩
≤ ‖u_n − x_{α_n}‖² − ‖y_n − u_n‖² + r_n²‖Ay_n − Au_n‖² + 2α_nr_n⟨y_n − x_{α_n}, Fx_{α_n} − Fu_n⟩
≤ ‖u_n − x_{α_n}‖² − ‖y_n − u_n‖² + (υ²r_n²/r_{n+1}²)‖y_n − u_n‖² + 2α_nr_n⟨y_n − x_{α_n}, Fx_{α_n} − Fu_n⟩
= ‖u_n − x_{α_n}‖² − (1 − υ²r_n²/r_{n+1}²)‖y_n − u_n‖² + 2α_nr_n⟨y_n − x_{α_n}, Fx_{α_n} − Fu_n⟩.
Let σ_1, σ_2, and σ_3 be positive real numbers such that
2h − kσ_1 − σ_2 − σ_3 > 0.
Next, we estimate the last term above. Since F is h-strongly monotone and k-Lipschitz continuous, we have
2⟨y_n − x_{α_n}, Fx_{α_n} − Fu_n⟩ = 2⟨y_n − u_n, Fx_{α_n} − Fu_n⟩ + 2⟨u_n − x_{α_n}, Fx_{α_n} − Fu_n⟩
≤ 2k‖y_n − u_n‖‖x_{α_n} − u_n‖ − 2h‖u_n − x_{α_n}‖²
≤ k[(1/σ_1)‖y_n − u_n‖² + σ_1‖x_{α_n} − u_n‖²] − 2h‖u_n − x_{α_n}‖².
Substituting this estimate into the previous inequality, we deduce
‖x_{n+1} − x_{α_n}‖² ≤ ‖u_n − x_{α_n}‖² − (1 − υ²r_n²/r_{n+1}²)‖y_n − u_n‖² + α_nr_n[k((1/σ_1)‖y_n − u_n‖² + σ_1‖x_{α_n} − u_n‖²) − 2h‖u_n − x_{α_n}‖²]
= (1 − (2h − kσ_1)α_nr_n)‖u_n − x_{α_n}‖² − (1 − υ²r_n²/r_{n+1}² − kα_nr_n/σ_1)‖y_n − u_n‖².
As 0 < υ < 1 and α_n → 0, and noting Lemma 6, there exists n_0 ≥ 1 such that
1 − υ²r_n²/r_{n+1}² − kα_nr_n/σ_1 > 0,  (1 − (2h − kσ_1)α_nr_n)/(1 − σ_3α_nr_n) < 2,  ϵ_n ≤ σ_2r_nα_n, and ϵ_n < 1, ∀n ≥ n_0.
Hence, we obtain
‖x_{n+1} − x_{α_n}‖² ≤ (1 − (2h − kσ_1)α_nr_n)‖u_n − x_{α_n}‖², ∀n ≥ n_0.
Repeating the argument for (38), we have
u n x α n 2 ( 1 + σ 2 r n α n ) x n x α n 2 + 2 ϵ n , n n 0 .
For each n n 0 , a calculation similar to (39) guarantees that
x n + 1 x α n + 1 2 1 1 σ 3 r n α n x n + 1 x α n 2 + ( α n + 1 α n ) 2 σ 3 r n α n 3 M 2 .
Thus, we deduce from (46), (50), and (52) that
\begin{equation}
\begin{aligned}
\|x_{n+1}-x^{\alpha_{n+1}}\|^2
&\le\frac{1-(2h-k\sigma_1)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}\|u_n-x^{\alpha_n}\|^2
+\frac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3\alpha_n^3 r_n}M^2\\
&\le\frac{\big(1-(2h-k\sigma_1)\alpha_n r_n\big)(1+\sigma_2\alpha_n r_n)}{1-\sigma_3\alpha_n r_n}\|x_n-x^{\alpha_n}\|^2
+\frac{1-(2h-k\sigma_1)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}\,2\epsilon_n
+\frac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3\alpha_n^3 r_n}M^2\\
&=\frac{1-(2h-k\sigma_1-\sigma_2)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}\|x_n-x^{\alpha_n}\|^2
-\frac{(2h-k\sigma_1)\sigma_2 r_n^2\alpha_n^2}{1-\sigma_3\alpha_n r_n}\|x_n-x^{\alpha_n}\|^2\\
&\quad+\frac{1-(2h-k\sigma_1)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}\,2\epsilon_n
+\frac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3\alpha_n^3 r_n}M^2\\
&\le\frac{1-(2h-k\sigma_1-\sigma_2)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}\|x_n-x^{\alpha_n}\|^2
+4\epsilon_n+\frac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3\alpha_n^3 r_n}M^2\\
&=(1-a_n)\|x_n-x^{\alpha_n}\|^2+b_n,
\end{aligned}
\tag{53}
\end{equation}
where $a_n=\dfrac{(2h-k\sigma_1-\sigma_2-\sigma_3)\alpha_n r_n}{1-\sigma_3\alpha_n r_n}$ and $b_n=4\epsilon_n+\dfrac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3 r_n\alpha_n^3}M^2$. Combining conditions (C1) and (C2), we find that $a_n\to 0$ and $\sum_{n=1}^{\infty}a_n=+\infty$. Furthermore, utilizing condition (C3), we have
\[
\frac{b_n}{a_n}=\Big(\frac{4\epsilon_n}{\alpha_n}+\frac{(\alpha_{n+1}-\alpha_n)^2}{\sigma_3\alpha_n^4 r_n}M^2\Big)
\frac{1-\sigma_3\alpha_n r_n}{(2h-k\sigma_1-\sigma_2-\sigma_3)\,r_n}\to 0.
\]
From Lemma 1, we obtain $\lim_{n\to\infty}\|x_n-x^{\alpha_n}\|=0$. Hence, we conclude that $\lim_{n\to\infty}\|x_n-x\|=0$. This completes the proof.    □
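The final step of the proof rests on the standard recursion lemma (Lemma 1): if $s_{n+1}\le(1-a_n)s_n+b_n$ with $a_n\in(0,1)$, $\sum_n a_n=\infty$, and $b_n/a_n\to 0$, then $s_n\to 0$. The following sketch illustrates this mechanism numerically; the sequences $a_n=n^{-0.8}$ and $b_n=n^{-1.9}$ are illustrative choices of ours, not the ones from the theorem.

```python
def recursion_limit(s0, a, b, n_iter):
    """Iterate s_{n+1} = (1 - a_n) * s_n + b_n and return the final value."""
    s = s0
    for n in range(1, n_iter + 1):
        s = (1 - a(n)) * s + b(n)
    return s

# a_n -> 0 with divergent sum, and b_n / a_n = n^{-1.1} -> 0,
# so the recursion drives s_n to 0 regardless of the starting value s0.
s = recursion_limit(10.0, a=lambda n: n ** -0.8, b=lambda n: n ** -1.9, n_iter=100_000)
```

Here $s_n$ plays the role of $\|x_n-x^{\alpha_n}\|^2$ in (53), and the limit $0$ mirrors the conclusion of the theorem.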
Observe that if $\Phi\equiv 0$, then the mapping $K_r$ reduces to the mapping $J_r$, and the VOGME reduces to the following problem (VOME):
\begin{equation}
\text{Find } u^*\in \mathrm{MEP}(\Psi,\Phi,A)\ \text{ such that }\ \langle Fu^*,v-u^*\rangle\ge 0,\qquad\forall v\in \mathrm{MEP}(\Psi,\Phi,A).
\tag{54}
\end{equation}
From Theorem 2, we obtain the following corollary.
Corollary 3.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, and let $\Psi:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)–(A4). Let $A:H\to H$ be monotone and $L$-Lipschitz continuous, and let $F:H\to H$ be $h$-strongly monotone and $k$-Lipschitz continuous. Let the mapping $J_r$ be the same as in Lemma 2. Then the sequence $\{x_n\}$ generated by Algorithm 6 converges strongly to the unique solution $x$ of the VOME (54) under Assumption 1 (C1)–(C4).
Algorithm 6 The inertial Tseng extragradient method with regularization for the mixed equilibrium problem.
Initialization: Set $\{\epsilon_n\}\subset(0,+\infty)$, $\{\alpha_n\}\subset(0,+\infty)$, $\upsilon\in(0,1)$, $\varrho>0$, and $r_1>0$. Choose a nonnegative real sequence $\{\sigma_n\}$ such that $\sum_{n=0}^{\infty}\sigma_n<\infty$. Let $x_0,x_1\in H$ be arbitrary.
Step 1. Compute
\[
u_n=x_n+h_n(x_n-x_{n-1}),
\]
where
\[
h_n=\begin{cases}
\min\Big\{\dfrac{\epsilon_n}{\|x_n-x_{n-1}\|},\,\varrho\Big\}, & \text{if } x_n\neq x_{n-1};\\[4pt]
\varrho, & \text{otherwise}.
\end{cases}
\]
Step 2. Given $x_n$ $(n\ge 0)$, compute
\[
y_n=J_{r_n}\big(u_n-r_n(Au_n+\alpha_n Fu_n)\big).
\]
Step 3. Compute
\[
x_{n+1}=y_n-r_n(Ay_n-Au_n),
\]
where { r n } is updated by
\[
r_{n+1}=\begin{cases}
\min\Big\{\dfrac{\upsilon\|u_n-y_n\|}{\|Au_n-Ay_n\|},\,r_n+\sigma_n\Big\}, & \text{if } Au_n-Ay_n\neq 0;\\[4pt]
r_n+\sigma_n, & \text{otherwise}.
\end{cases}
\]
Step 4. Set n : = n + 1 and return to Step 1.
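For readers who wish to experiment, the loop of Algorithm 6 can be sketched in a few lines. The resolvent $J_r$, the operators $A$ and $F$, and the parameter sequences are supplied by the caller; the function name `algorithm6` and its signature are our own conventions, not from the paper.

```python
import numpy as np

def algorithm6(J, A, F, x0, x1, alpha, eps, sigma,
               r1=1/3, rho=1.0, nu=0.9, n_iter=500):
    """Sketch of Algorithm 6: inertial Tseng extragradient with regularization.

    J(r, x)            -- resolvent J_r evaluated at x
    A, F               -- the monotone and the strongly monotone operator
    alpha, eps, sigma  -- callables n -> alpha_n, epsilon_n, sigma_n
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    r = r1
    for n in range(1, n_iter + 1):
        # Step 1: inertial extrapolation u_n = x_n + h_n (x_n - x_{n-1}).
        d = np.linalg.norm(x - x_prev)
        h = min(eps(n) / d, rho) if d > 0 else rho
        u = x + h * (x - x_prev)
        # Step 2: regularized forward step followed by the resolvent.
        y = J(r, u - r * (A(u) + alpha(n) * F(u)))
        # Step 3: Tseng correction.
        x_prev, x = x, y - r * (A(y) - A(u))
        # Adaptive step-size update.
        q = np.linalg.norm(A(u) - A(y))
        r = min(nu * np.linalg.norm(u - y) / q, r + sigma(n)) if q > 0 else r + sigma(n)
    return x
```

With the data of Example 1 below (where $K_rx=x/(1+7r)$), this loop drives $\|x_n\|$ to zero, in agreement with the unique solution $0$.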

4. Numerical Examples

In this section, we present two numerical examples to illustrate the numerical behavior of our proposed algorithms and compare them with the SVTE, STEM, and TEGM methods.
Example 1.
Let $H=\mathbb{R}^m$ with the inner product $\langle\cdot,\cdot\rangle:H\times H\to\mathbb{R}$ defined by $\langle x,y\rangle:=\sum_{j=1}^{m}x_jy_j$. The feasible set $C$ is given by $C=\{x\in\mathbb{R}^m:\|x\|\le 1\}$. Take the operator $A(x):=Mx+p$, where $M=BB^{T}+D$, $B$ is an $m\times m$ matrix, $D$ is an $m\times m$ diagonal matrix whose diagonal entries are nonnegative, and $p$ is a vector in $\mathbb{R}^m$. It is clear that $A$ is monotone and $L$-Lipschitz continuous with $L=\|M\|$. Similarly, take the operator $F(x):=Qx+q$, where $Q=NN^{T}+G$, $N$ is an $m\times m$ matrix, $G$ is an $m\times m$ diagonal matrix whose diagonal entries are positive, and $q$ is a vector in $\mathbb{R}^m$. One can check that $F$ is $h$-strongly monotone and $k$-Lipschitz continuous with $k=\|Q\|$ and $h=\min\{\mathrm{eig}(Q)\}$, where $\mathrm{eig}(Q)$ denotes the set of eigenvalues of $Q$. Let $p=q=0$, $\Psi(x,y)=\|y\|^2+3\langle x,y\rangle-4\|x\|^2$ for all $x,y\in\mathbb{R}^m$, $\Phi(x)=\|x\|^2$, and $g(x)=\frac{99}{100}x$ for all $x\in\mathbb{R}^m$. It is easy to see that $K_rx=\frac{x}{1+7r}$ and that the unique solution to the VOGME (10) is $0$.
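The closed form $K_rx=\frac{x}{1+7r}$ can be sanity-checked numerically: $z=K_rx$ must satisfy the resolvent inequality $\Psi(z,y)+\Phi(y)-\Phi(z)+\frac{1}{r}\langle y-z,z-x\rangle\ge 0$ for every $y$ (a short algebraic computation shows the left-hand side equals $2\|y-z\|^2$). A sketch of the check with randomly drawn test points, where the dimension and $r$ are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def psi(x, y):                     # Psi(x, y) = ||y||^2 + 3<x, y> - 4||x||^2
    return y @ y + 3 * (x @ y) - 4 * (x @ x)

def phi(x):                        # Phi(x) = ||x||^2
    return x @ x

def resolvent_defect(r, x, y):
    """Left-hand side of the resolvent inequality at the claimed K_r x."""
    z = x / (1 + 7 * r)            # claimed closed form K_r x
    return psi(z, y) + phi(y) - phi(z) + (y - z) @ (z - x) / r

m, r = 10, 0.35
x = rng.standard_normal(m)
defects = [resolvent_defect(r, x, rng.standard_normal(m)) for _ in range(1000)]
```

The defect is nonnegative for every test direction $y$ and vanishes at $y=z$, confirming the closed form.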
We choose the stepsize $r=\frac{9}{10\|M\|}$ for TEGM and $r_n=\frac{9\min\{\mathrm{eig}(Q)\}}{10\|M\|^2}$ for IMR. Take $\alpha_n=(n+1)^{-0.8}$, $r_1=\frac13$, and $\nu=\frac{9}{10}$ for STEM; $\alpha_n=(n+1)^{-0.8}$, $\epsilon_n=(n+1)^{-0.7}$, $\sigma_n=(n+1)^{-2}$, $r_1=\frac13$, $\varrho=1$, and $\nu=\frac{9}{10}$ for SVTE; $\epsilon_n=(n+1)^{-0.7}$ and $\varrho=1$ for IMR; and $\epsilon_n=(n+1)^{-0.7}$, $\sigma_n=(n+1)^{-2}$, $r_1=\frac13$, $\varrho=1$, and $\nu=\frac{9}{10}$ for ITER. We use $\|x_n\|<10^{-3}$ as the stopping criterion, and the initial values $x_0,x_1$ are generated randomly in $(-6,6)$. Figure 1 and Figure 2 describe the numerical results.
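The problem data and constants above can be generated and verified directly. The sketch below is our own construction of random $B$, $D$, $N$, $G$ (assuming the spectral norm for $\|\cdot\|$); it checks that $A$ is monotone and that $F$ is $h$-strongly monotone with $h=\min\{\mathrm{eig}(Q)\}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10
B = rng.standard_normal((m, m))
D = np.diag(rng.random(m))             # nonnegative diagonal entries
N = rng.standard_normal((m, m))
G = np.diag(rng.random(m) + 0.1)       # positive diagonal entries
M = B @ B.T + D                        # A(x) = M x  (here p = 0)
Q = N @ N.T + G                        # F(x) = Q x  (here q = 0)

L = np.linalg.norm(M, 2)               # Lipschitz constant of A
k = np.linalg.norm(Q, 2)               # Lipschitz constant of F
h = np.linalg.eigvalsh(Q).min()        # strong monotonicity modulus of F

# Spot-check monotonicity of A and h-strong monotonicity of F.
ok = True
for _ in range(100):
    x, y = rng.standard_normal(m), rng.standard_normal(m)
    d = x - y
    ok &= d @ (M @ d) >= -1e-9                  # <x - y, Ax - Ay> >= 0
    ok &= d @ (Q @ d) >= h * (d @ d) - 1e-7     # <x - y, Fx - Fy> >= h ||x - y||^2
```

Since $Q=NN^{T}+G$ is symmetric positive definite, its smallest eigenvalue is positive, so $h>0$ and $h\le k$.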
Example 2.
We consider an example in the infinite-dimensional Hilbert space $H=L^2([0,1])$ with inner product $\langle x,y\rangle=\int_0^1 x(t)y(t)\,dt$ and induced norm $\|x\|=\big(\int_0^1 x^2(t)\,dt\big)^{1/2}$. The feasible set $C$ is given by $C=\{x\in L^2([0,1]):\|x\|\le 1\}$. Let $\Psi:H\times H\to\mathbb{R}$ be given by $\Psi(x,y)=\|y\|^2+3\langle x,y\rangle-4\|x\|^2$ for all $x,y\in H$, $\Phi(x)=\|x\|^2$, $Ax=x$, $Fx=\frac12x$, and $g(x)=\frac{99}{100}x$ for all $x\in H$. It is easy to verify that $0$ is the unique solution to the VOGME (10). We choose the stepsize $r=\frac{9}{10\|A\|}$ for TEGM and $r_n=\frac{9}{10}$ for IMR. The other parameters are the same as in Example 1. We use $\|x_n\|<10^{-3}$ as the stopping criterion; the numerical results are reported in Figure 3 and Figure 4.
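Example 2 can be reproduced approximately by discretizing $L^2([0,1])$ on a uniform grid; the grid size, starting functions, and iteration count below are illustrative choices of ours. Since $A$ is the identity, the resolvent step simplifies to $K_rx=\frac{x}{1+7r}$ and the step-size update caps $r_n$ at $\nu$:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
l2 = lambda x: np.sqrt(np.mean(x * x))   # discrete L^2([0,1]) norm on a uniform grid

x_prev = t ** 2                          # illustrative starting functions in L^2([0,1])
x = 0.5 * t
r, nu, rho = 1 / 3, 0.9, 1.0
for n in range(1, 301):
    alpha = (n + 1) ** -0.8
    eps, sigma = (n + 1) ** -0.7, (n + 1) ** -2.0
    d = l2(x - x_prev)
    h = min(eps / d, rho) if d > 0 else rho
    u = x + h * (x - x_prev)                              # inertial step
    y = (u - r * (u + alpha * 0.5 * u)) / (1 + 7 * r)     # A = I, F = I/2, resolvent K_r
    x_prev, x = x, y - r * (y - u)                        # Tseng correction
    q = l2(u - y)                                         # ||Au - Ay|| = ||u - y||
    r = min(nu, r + sigma) if q > 0 else r + sigma
```

After a few hundred iterations, $\|x_n\|$ falls below the stopping threshold $10^{-3}$, consistent with the unique solution $0$.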

5. Conclusions

This work presents two alternative regularization methods for finding a solution of the variational inequality problem over the set of solutions of the generalized mixed equilibrium problem. These methods can be viewed as improvements of Tseng's extragradient method combined with the regularization technique. We show that the iterative sequences generated by the proposed methods converge strongly to solutions of the above bilevel problems. The theoretical results are confirmed by two numerical examples.

Author Contributions

Conceptualization, Y.S.; Data curation, O.B.; Funding acquisition, Y.S.; Software, Y.S.; Writing—original draft, Y.S. and O.B.; Writing—review & editing, Y.S. and O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Scientific Research Project for Colleges and Universities in Henan Province (grant number 20A110038).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable comments, which improved the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Example 1, $m=10$.
Figure 2. Example 1, $m=20$.
Figure 3. Example 2, $x_0(t)=t^2+t^6$ and $x_1(t)=t^7$.
Figure 4. Example 2, $x_0(t)=t^2+t^8$ and $x_1(t)=t^5$.