Article

Accelerated Subgradient Extragradient Algorithm for Solving Bilevel System of Equilibrium Problems

Somyot Plubtieng 1 and Tadchai Yuying 2,*
1 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
2 Department of Mathematics, Uttaradit Rajabhat University, Uttaradit 53000, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(9), 1681; https://doi.org/10.3390/sym15091681
Submission received: 28 July 2023 / Revised: 23 August 2023 / Accepted: 24 August 2023 / Published: 31 August 2023

Abstract:
In this research paper, we propose a novel approach, termed the inertial subgradient extragradient algorithm, to solve bilevel systems of equilibrium problems in real Hilbert spaces. Our algorithm circumvents the need for prior knowledge of the Lipschitz-type constants of the bifunctions involved and requires only the minimization of strongly convex subproblems over the feasible set. Under appropriate conditions, we establish strong convergence theorems for the proposed algorithm. To validate the algorithm, we present a series of numerical examples that demonstrate its performance.

1. Introduction

Throughout this article, let H be a real Hilbert space, let C be a nonempty closed convex subset of H, and let $I = \{1, 2, \ldots, N\}$ be a finite index set. This work studies the bilevel system of equilibrium problems (shortly, $BSEP(g_i, f, C)$):
$$\text{Find } x^* \in \Omega = \bigcap_{i=1}^{N} SEP(g_i, C) \text{ such that } f(x^*, y) \ge 0 \text{ for every } y \in \Omega, \tag{1}$$
where f and $\{g_i\}_{i \in I}$ are bifunctions from $H \times H$ to $\mathbb{R}$ such that $f(x, x) = 0$ and $g_i(x, x) = 0$ for every $x \in H$, and $SEP(g_i, C)$ is the nonempty solution set of the equilibrium problem: find $x^* \in C$ such that
$$g_i(x^*, y) \ge 0 \quad \text{for all } y \in C.$$
The solution set of (1) is denoted as $\Omega^*$.
In the case N = 1, the $BSEP(g_i, f, C)$ reduces to the bilevel equilibrium problem, introduced in 2000 by Chadli et al. [1] and developed by Moudafi [2] (see also [3,4,5,6,7,8,9]), which is defined as follows:
$$\text{Find } x^* \in SEP(g, C) \text{ such that } f(x^*, y) \ge 0 \text{ for every } y \in SEP(g, C), \tag{2}$$
where f and g are bifunctions from $H \times H$ to $\mathbb{R}$, and $SEP(g, C)$ is the nonempty solution set of the equilibrium problem: find $x^* \in C$ such that
$$g(x^*, y) \ge 0 \quad \text{for every } y \in C. \tag{3}$$
The authors of [10] showed that if the bifunction f is strongly monotone and Lipschitz-type continuous, then Problem (2) has a unique solution. Problem (3) is referred to as the Ky Fan inequality in homage to his contributions to this field [11], and it covers many special cases, for instance, fixed point problems, variational inequality problems, optimization problems, saddle point problems, and the Nash equilibrium problem in noncooperative games; see [12,13,14,15,16] for details.
The proximal-like method was the first method proposed to solve Problem (3). This methodology, rooted in the auxiliary problem principle, was presented in [17]. Under the assumptions that the bifunction is pseudomonotone and Lipschitz-type continuous, a convergence result was obtained in [18]. More precisely, the method in [18] generates the sequences $\{x_n\}$ and $\{y_n\}$ as follows:
$$x_0 \in C, \qquad y_n = \arg\min\Big\{\lambda f(x_n, y) + \frac{1}{2}\|y - x_n\|^2 : y \in C\Big\}, \qquad x_{n+1} = \arg\min\Big\{\lambda f(y_n, z) + \frac{1}{2}\|z - x_n\|^2 : z \in C\Big\},$$
where $\lambda > 0$ is a suitable parameter. In recent years, many authors have paid attention to integrating inertial techniques into traditional algorithms in order to solve Problem (3) (see [19,20]). It should be underscored that most algorithms require knowledge of the Lipschitz-type constants of the bifunction in order to choose a suitable stepsize $\lambda$; these constants are often unknown or difficult to estimate in practice. Moreover, two optimization subproblems over the feasible set C need to be solved at each iteration, which incurs a high computational overhead and affects the performance of the algorithm. To circumvent these drawbacks, many authors have introduced self-adaptive stepsize procedures so that knowledge of the Lipschitz-type constants of the bifunction is not necessary (see [21,22]).
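To make the two-subproblem structure above concrete, here is a minimal Python sketch (our illustration, not code from [17,18]); it assumes the special case of an affine bifunction $f(x, y) = \langle Ax, y - x\rangle$ over a box, for which each argmin reduces to a Euclidean projection onto C. All names and defaults are ours.

```python
import numpy as np

def project_box(x, lo=-20.0, hi=20.0):
    """Euclidean projection onto the box C = {x : lo <= x_j <= hi}."""
    return np.clip(x, lo, hi)

def extragradient_ep(A, x0, lam, max_iter=1000, tol=1e-6):
    """Two-prox extragradient iteration for the EP with f(x, y) = <A x, y - x>.

    For this affine bifunction, argmin_{y in C}{lam*f(x, y) + 0.5*||y - x||^2}
    equals P_C(x - lam*A x), so both subproblems are projections onto C.
    """
    x = x0.copy()
    for _ in range(max_iter):
        y = project_box(x - lam * (A @ x))       # first prox step
        x_new = project_box(x - lam * (A @ y))   # second prox (extragradient) step
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new
        x = x_new
    return x
```

A fixed stepsize lam is used here for simplicity; the self-adaptive rules mentioned above would replace it with a quantity computed from the iterates.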
For the bilevel equilibrium Problem (2), many solution methods are available. The authors of [2] introduced a simple proximal method and obtained weak convergence for solving (2). In [6], the proximal method and the Halpern method were combined to solve a bilevel monotone equilibrium and fixed point problem. For more details on bilevel equilibrium problems and recent work on methods for solving equilibrium problems, we refer the reader to [3,4,5,23,24].
Recently, Anh et al. [25] proposed a new explicit extragradient algorithm for solving a class of bilevel equilibrium problems, which is generated by
$$x_0 \in C, \qquad y_n = \arg\min\Big\{\lambda_n\big(g(x_n, y) + \Phi(y)\big) + \frac{1}{2}\|y - x_n\|^2 : y \in C\Big\}, \qquad z_n = \arg\min\Big\{\lambda_n\big(g(y_n, z) + \Phi(z)\big) + \frac{1}{2}\|z - x_n\|^2 : z \in C\Big\}, \qquad x_{n+1} = \arg\min\Big\{\beta_n f(z_n, t) + \frac{1}{2}\|t - z_n\|^2 : t \in C\Big\},$$
where the bifunctions f and g are Lipschitz-type continuous and monotone on C. Convergence of $\{x_n\}$ is obtained; moreover, strong convergence is obtained under the main assumption that the Lipschitz-type constants of the bifunction are known.
Motivated and inspired by the above contributions, in this work we propose an iterative algorithm for finding the solution of the bilevel system of equilibrium problems. Strong convergence of the sequence generated by the proposed method is obtained even though the Lipschitz-type constants of the bifunctions are unknown. Finally, we present numerical results for our algorithm, which show that it is efficient.

2. Preliminaries

In this part, we present some definitions and lemmas used in proving the convergence theorem. For each $x, y \in H$, we have
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle.$$
Let $f : H \times H \to \mathbb{R}$.
(i) $f$ is $\beta$-strongly monotone on C if
$$f(x, y) + f(y, x) \le -\beta\|x - y\|^2 \quad \forall x, y \in C;$$
(ii) $f$ is monotone on C if
$$f(x, y) + f(y, x) \le 0 \quad \forall x, y \in C;$$
(iii) $f$ is pseudomonotone on C if
$$f(x, y) \ge 0 \Longrightarrow f(y, x) \le 0 \quad \forall x, y \in C.$$
For each $x \in H$, let $f(x, \cdot)$ be convex; the subdifferential of $f(x, \cdot)$ at x, denoted by $\partial_2 f(x, x)$, is defined by
$$\partial_2 f(x, x) = \{w \in H : f(x, y) - f(x, x) \ge \langle w, y - x\rangle \;\forall y \in H\} = \{w \in H : f(x, y) \ge \langle w, y - x\rangle \;\forall y \in H\},$$
as studied in [26].
Lemma 1
([15]). Let H be a real Hilbert space and C a nonempty closed convex subset of H. Let $g : C \to \mathbb{R}$ be a convex, lower semicontinuous, and subdifferentiable function on C. Then $x^*$ is a solution of the convex optimization problem
$$\min\{g(x) : x \in C\}$$
if and only if $0 \in \partial g(x^*) + N_C(x^*)$, where $\partial g(\cdot)$ denotes the subdifferential of g and $N_C(x^*)$ is the normal cone of C at $x^*$.
Lemma 2
([27]). Let $\{x_n\}$ be a sequence of non-negative real numbers, $\{\alpha_n\}$ a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\{y_n\}$ a sequence of real numbers. Assume that
$$x_{n+1} \le (1 - \alpha_n)x_n + \alpha_n y_n$$
for all $n \in \mathbb{N}$. If $\limsup_{k\to\infty} y_{n_k} \le 0$ for every subsequence $\{x_{n_k}\}$ of $\{x_n\}$ satisfying $\liminf_{k\to\infty}(x_{n_k+1} - x_{n_k}) \ge 0$, then $\lim_{n\to\infty} x_n = 0$.
Lemma 3
([23]). Let $f : H \times H \to \mathbb{R}$ be β-strongly monotone, and let $x \mapsto \partial_2 f(x, x)$ be L-Lipschitz continuous on every bounded subset of C. Let $0 < \alpha < 1$, $0 \le \eta \le 1 - \alpha$ and $0 < \mu < \frac{2\beta}{L^2}$. For each $x, y \in C$, $w \in \partial_2 f(x, x)$ and $v \in \partial_2 f(y, y)$, we have
$$\big\|(1 - \eta)x - \alpha\mu w - \big[(1 - \eta)y - \alpha\mu v\big]\big\| \le (1 - \eta - \alpha\tau)\|x - y\|,$$
where $\tau = 1 - \sqrt{1 - \mu(2\beta - \mu L^2)} \in (0, 1]$.
In order to solve $BSEP(g_i, f, C)$, we use the following assumptions.
Conditions I
(1) $f(x, \cdot)$ is convex, weakly lower semicontinuous, and subdifferentiable on H for every fixed $x \in C$;
(2) $f(\cdot, y)$ is weakly upper semicontinuous on H for every fixed $y \in C$;
(3) $f : H \times H \to \mathbb{R}$ is $\beta$-strongly monotone on H;
(4) the mapping $x \mapsto \partial_2 f(x, x)$ is bounded and L-Lipschitz continuous on every bounded subset of C.
Conditions II
(1) $g(x, \cdot)$ is convex, weakly lower semicontinuous, and subdifferentiable on H for every fixed $x \in C$;
(2) $g(\cdot, y)$ is weakly upper semicontinuous on H for every fixed $y \in C$;
(3) $g$ is pseudomonotone on C with respect to $SEP(g, C)$, i.e.,
$$g(x, x^*) \le 0, \quad \forall x \in C,\; x^* \in SEP(g, C);$$
(4) $g$ is Lipschitz-type continuous, i.e., there exist two positive constants $L_1, L_2$ such that
$$g(x, y) + g(y, z) \ge g(x, z) - L_1\|x - y\|^2 - L_2\|y - z\|^2, \quad \forall x, y, z \in H;$$
(5) $g$ is jointly weakly continuous on $H \times H$ in the sense that, if $x, y \in C$ and $\{x_n\}, \{y_n\} \subset C$ converge weakly to x and y, respectively, then $g(x_n, y_n) \to g(x, y)$ as $n \to +\infty$;
(6) $\{\varepsilon_n\}$ is a positive sequence such that $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = 0$, where $\{\alpha_n\} \subset (0, 1)$ satisfies $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\lim_{n\to\infty}\alpha_n = 0$; moreover, the sequence $\{\eta_n^i\} \subset [\underline{\eta}, \bar{\eta}) \subset (0, 1]$ satisfies $\sum_{i=1}^{N}\eta_n^i = 1$.

3. Main Results

In this part, we introduce an inertial subgradient extragradient algorithm to solve the bilevel system of equilibrium problems. Strong convergence is obtained while the Lipschitz-type constants of the bifunctions remain unknown.
The modified inertial subgradient extragradient algorithm (shortly, MISE Algorithm)
  • (Initialization:) Set $\theta > 0$, $\lambda_1^i > 0$, $\mu \in (0, 1)$, $0 < \gamma < \frac{2\beta}{L^2}$, $0 < \underline{\alpha} \le \bar{\alpha} \le 1$, $\sum_{i=1}^{N}\eta_n^i = 1$, and choose $x_0, x_1 \in H$.
Step 1:
Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), set
$$w_n = x_n + \theta_n(x_n - x_{n-1}),$$
where
$$\theta_n = \begin{cases}\min\Big\{\dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}, \theta\Big\}, & \text{if } x_n \ne x_{n-1},\\[4pt] \theta, & \text{otherwise.}\end{cases}$$
Step 2:
Compute
$$y_n^i = \arg\min_{y \in C}\Big\{g_i(w_n, y) + \frac{1}{2\lambda_n^i}\|y - w_n\|^2\Big\}.$$
Step 3:
Select $u_n^i \in \partial_2 g_i(w_n, y_n^i)$ and compute
$$z_n^i = \arg\min_{y \in H_n^i}\Big\{g_i(y_n^i, y) + \frac{1}{2\lambda_n^i}\|y - w_n\|^2\Big\},$$
where $H_n^i = \{x \in H : \langle w_n - \lambda_n^i u_n^i - y_n^i, x - y_n^i\rangle \le 0\}$.
Step 4:
Compute $z_n = \sum_{i=1}^{N}\eta_n^i z_n^i$. Select $v_n \in \partial_2 f(z_n, z_n)$ and compute
$$x_{n+1} = z_n - \alpha_n\gamma v_n.$$
Step 5:
Set $\lambda^i = g_i(w_n, z_n^i) - g_i(y_n^i, z_n^i) - g_i(w_n, y_n^i)$ and update
$$\lambda_{n+1}^i = \begin{cases}\min\Big\{\dfrac{\mu\big(\|w_n - y_n^i\|^2 + \|z_n^i - y_n^i\|^2\big)}{2\lambda^i}, \lambda_n^i\Big\}, & \text{if } \lambda^i > 0,\\[4pt] \lambda_n^i, & \text{otherwise.}\end{cases}$$
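To summarize Steps 1–5 in executable form, the following Python sketch (our illustration; it is not the authors' MATLAB implementation) runs the MISE iteration for the affine model problem used later in Section 4, namely $g_i(x, y) = \langle A_i x, y - x\rangle$ and $f(x, y) = \langle Px + Qy, y - x\rangle$ on the box $C = [-20, 20]^n$. Under these assumptions, $\partial_2 g_i(w_n, y_n^i) = \{A_i w_n\}$, $\partial_2 f(z_n, z_n) = \{(P + Q)z_n\}$, and the two argmin subproblems reduce to projections onto C and onto the half-space $H_n^i$; the sequences $\alpha_n = 1/(n+1)$ and $\varepsilon_n = 1/(n+1)^2$, the stopping rule, and all identifiers are our choices.

```python
import numpy as np

def project_box(x, lo=-20.0, hi=20.0):
    """Projection onto the box C = {x : lo <= x_j <= hi}."""
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Projection onto the half-space {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def mise(A_list, P, Q, x0, x1, theta, lam0, mu, gamma, max_iter=1000, tol=1e-6):
    """One possible realization of the MISE Algorithm for the affine test problem."""
    N = len(A_list)
    eta = 1.0 / N                              # eta_n^i = 1/N, as chosen in Section 4
    lam = [lam0] * N                           # lambda_n^i, updated in Step 5
    x_prev, x = x0.astype(float), x1.astype(float)
    for n in range(1, max_iter + 1):
        alpha_n, eps_n = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
        # Step 1: inertial point w_n
        diff = np.linalg.norm(x - x_prev)
        theta_n = theta if diff == 0 else min(eps_n / diff, theta)
        w = x + theta_n * (x - x_prev)
        z = np.zeros_like(x)
        lam_next = list(lam)
        for i, Ai in enumerate(A_list):
            Aw, li = Ai @ w, lam[i]
            # Step 2: y_n^i = P_C(w_n - lam_n^i A_i w_n)
            y = project_box(w - li * Aw)
            # Step 3: half-space H_n^i (with u_n^i = A_i w_n) and projection onto it
            a = w - li * Aw - y
            z_i = project_halfspace(w - li * (Ai @ y), a, a @ y)
            z += eta * z_i                     # Step 4 (first part): weighted average
            # Step 5: adaptive stepsize update
            lam_i = (Aw @ (z_i - w)) - ((Ai @ y) @ (z_i - y)) - (Aw @ (y - w))
            if lam_i > 0:
                bound = mu * (np.linalg.norm(w - y) ** 2
                              + np.linalg.norm(z_i - y) ** 2) / (2 * lam_i)
                lam_next[i] = min(bound, li)
        # Step 4 (second part): x_{n+1} = z_n - alpha_n * gamma * v_n, v_n = (P + Q) z_n
        x_new = z - alpha_n * gamma * ((P + Q) @ z)
        lam = lam_next
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new
        x_prev, x = x, x_new
    return x
```

Any sequences satisfying Condition II(6) could be used in place of the hard-coded $\alpha_n$ and $\varepsilon_n$.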
Remark 1.
We have
$$\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0.$$
Indeed, if $x_n \ne x_{n-1}$, then $\theta_n \le \frac{\varepsilon_n}{\|x_n - x_{n-1}\|}$, and hence
$$0 \le \frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le \frac{\varepsilon_n}{\alpha_n\|x_n - x_{n-1}\|}\|x_n - x_{n-1}\| = \frac{\varepsilon_n}{\alpha_n},$$
while the quantity is zero when $x_n = x_{n-1}$. Taking $n \to \infty$ and using $\lim_{n\to\infty}\varepsilon_n/\alpha_n = 0$ (Condition II(6)), we obtain
$$\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0.$$
Lemma 4.
Let the bifunctions $g_i$ satisfy Condition II. Then the sequence $\{\lambda_n^i\}$ generated by the MISE Algorithm is nonincreasing and
$$\lim_{n\to\infty}\lambda_n^i = \varphi^i \ge \min\Big\{\frac{\mu}{2\max\{L_1^i, L_2^i\}}, \lambda_1^i\Big\} \quad \text{for all } i = 1, 2, \ldots, N.$$
Proof. 
Fix $i \in \{1, 2, \ldots, N\}$. It is obvious from Step 5 that $\lambda_{n+1}^i \le \lambda_n^i$ for all $n \in \mathbb{N}$; therefore, $\{\lambda_n^i\}$ is nonincreasing. Since $g_i$ is Lipschitz-type continuous on C, there are $L_1^i, L_2^i > 0$ such that
$$g_i(w_n, y_n^i) + g_i(y_n^i, z_n^i) \ge g_i(w_n, z_n^i) - L_1^i\|w_n - y_n^i\|^2 - L_2^i\|y_n^i - z_n^i\|^2.$$
So, we have
$$\lambda^i = g_i(w_n, z_n^i) - g_i(w_n, y_n^i) - g_i(y_n^i, z_n^i) \le L_1^i\|w_n - y_n^i\|^2 + L_2^i\|y_n^i - z_n^i\|^2 \le \max\{L_1^i, L_2^i\}\big(\|w_n - y_n^i\|^2 + \|y_n^i - z_n^i\|^2\big).$$
So, whenever $\lambda^i > 0$, we have
$$\frac{\mu\big(\|w_n - y_n^i\|^2 + \|z_n^i - y_n^i\|^2\big)}{2\lambda^i} \ge \frac{\mu\big(\|w_n - y_n^i\|^2 + \|z_n^i - y_n^i\|^2\big)}{2\max\{L_1^i, L_2^i\}\big(\|w_n - y_n^i\|^2 + \|y_n^i - z_n^i\|^2\big)} = \frac{\mu}{2\max\{L_1^i, L_2^i\}}.$$
It follows that
$$\lambda_n^i \ge \min\Big\{\frac{\mu}{2\max\{L_1^i, L_2^i\}}, \lambda_1^i\Big\}$$
for all $n \in \mathbb{N}$. Thus, $\{\lambda_n^i\}$ is nonincreasing and bounded below, so $\lim_{n\to\infty}\lambda_n^i$ exists and
$$\lim_{n\to\infty}\lambda_n^i = \varphi^i \ge \min\Big\{\frac{\mu}{2\max\{L_1^i, L_2^i\}}, \lambda_1^i\Big\}.$$
Lemma 5.
Let the bifunctions $g_i$ satisfy Condition II, and let $\{z_n^i\}$ be the sequences generated by the MISE Algorithm. Then, for all $p \in \Omega = \bigcap_{i=1}^{N} SEP(g_i, C)$, we have
$$\|z_n^i - p\|^2 \le \|w_n - p\|^2 - \Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 - \Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2$$
for all $i = 1, 2, \ldots, N$.
Proof. 
Fix $i \in \{1, 2, \ldots, N\}$. By the definition
$$z_n^i = \arg\min_{y \in H_n^i}\Big\{g_i(y_n^i, y) + \frac{1}{2\lambda_n^i}\|y - w_n\|^2\Big\}$$
and Lemma 1, we have
$$\lambda_n^i\big(g_i(y_n^i, y) - g_i(y_n^i, z_n^i)\big) \ge \langle w_n - z_n^i, y - z_n^i\rangle \quad \text{for all } y \in H_n^i.$$
Since $p \in SEP(g_i, C) \subset H_n^i$ for all $i = 1, 2, \ldots, N$, we have
$$\lambda_n^i\big(g_i(y_n^i, p) - g_i(y_n^i, z_n^i)\big) \ge \langle w_n - z_n^i, p - z_n^i\rangle. \tag{9}$$
Since $p \in SEP(g_i, C)$ and $y_n^i \in C$, we have $g_i(p, y_n^i) \ge 0$. Using the pseudomonotonicity of $g_i$, we have $g_i(y_n^i, p) \le 0$, and we obtain from (9) that
$$-\lambda_n^i g_i(y_n^i, z_n^i) \ge \langle w_n - z_n^i, p - z_n^i\rangle - \lambda_n^i g_i(y_n^i, p) \ge \langle w_n - z_n^i, p - z_n^i\rangle. \tag{10}$$
Since $u_n^i \in \partial_2 g_i(w_n, y_n^i)$, we have
$$g_i(w_n, y) - g_i(w_n, y_n^i) \ge \langle u_n^i, y - y_n^i\rangle \quad \text{for all } y \in H.$$
Therefore,
$$g_i(w_n, z_n^i) - g_i(w_n, y_n^i) \ge \langle u_n^i, z_n^i - y_n^i\rangle.$$
So,
$$2\lambda_n^i\big(g_i(w_n, z_n^i) - g_i(w_n, y_n^i)\big) \ge 2\lambda_n^i\langle u_n^i, z_n^i - y_n^i\rangle. \tag{11}$$
Since $z_n^i \in H_n^i$, we have
$$\langle w_n - \lambda_n^i u_n^i - y_n^i, z_n^i - y_n^i\rangle \le 0.$$
Thus,
$$2\lambda_n^i\langle u_n^i, z_n^i - y_n^i\rangle \ge 2\langle w_n - y_n^i, z_n^i - y_n^i\rangle. \tag{12}$$
From (10)–(12), we obtain
$$2\lambda_n^i\big(g_i(w_n, z_n^i) - g_i(w_n, y_n^i) - g_i(y_n^i, z_n^i)\big) \ge 2\big(\langle w_n - z_n^i, p - z_n^i\rangle + \langle w_n - y_n^i, z_n^i - y_n^i\rangle\big) = \|z_n^i - p\|^2 - \|w_n - p\|^2 + \|w_n - y_n^i\|^2 + \|y_n^i - z_n^i\|^2.$$
Therefore,
$$\|z_n^i - p\|^2 \le \|w_n - p\|^2 - \|w_n - y_n^i\|^2 - \|z_n^i - y_n^i\|^2 + 2\lambda_n^i\big(g_i(w_n, z_n^i) - g_i(w_n, y_n^i) - g_i(y_n^i, z_n^i)\big).$$
Using the definition of $\lambda_{n+1}^i$, we have
$$\begin{aligned}\|z_n^i - p\|^2 &\le \|w_n - p\|^2 - \|w_n - y_n^i\|^2 - \|z_n^i - y_n^i\|^2 + 2\frac{\lambda_n^i}{\lambda_{n+1}^i}\lambda_{n+1}^i\big(g_i(w_n, z_n^i) - g_i(w_n, y_n^i) - g_i(y_n^i, z_n^i)\big)\\ &\le \|w_n - p\|^2 - \|w_n - y_n^i\|^2 - \|z_n^i - y_n^i\|^2 + \frac{\lambda_n^i}{\lambda_{n+1}^i}\mu\big(\|w_n - y_n^i\|^2 + \|z_n^i - y_n^i\|^2\big)\\ &= \|w_n - p\|^2 - \Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|w_n - y_n^i\|^2 - \Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2.\end{aligned}$$
Theorem 1.
Let the bifunction f satisfy Condition I and the bifunctions $g_i$ satisfy Condition II. Suppose that $\Omega = \bigcap_{i=1}^{N} SEP(g_i, C)$ is a nonempty set. Then the sequence $\{x_n\}$ generated by the MISE Algorithm converges strongly to the unique solution of the $BSEP(g_i, f, C)$ (1).
Proof. 
Under the assumptions on the bifunctions $g_i$ and f, Problem (1) has a unique solution, denoted by p. In particular, $f(p, y) \ge 0$ for all $y \in \Omega$; thus, p is a minimizer of the convex function $f(p, \cdot)$ over $\Omega$. Using the optimality condition, we obtain
$$0 \in \partial_2 f(p, p) + N_{\Omega}(p).$$
Then, there exists $v^* \in \partial_2 f(p, p)$ such that
$$\langle v^*, z - p\rangle \ge 0 \quad \text{for all } z \in \Omega. \tag{14}$$
Next, we prove that the sequence $\{x_n\}$ generated by the MISE Algorithm converges to p. We divide the proof into four steps.
Step 1: We show that the sequence $\{x_n\}$ is bounded. By Lemma 4,
$$\lim_{n\to\infty}\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big) = 1 - \mu > 0.$$
Hence, for each $i = 1, 2, \ldots, N$, there is $n_0^i \in \mathbb{N}$ such that
$$1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i} > 0, \quad \forall n \ge n_0^i.$$
Choose $n_0 = \max\{n_0^i : i = 1, 2, \ldots, N\}$. Then, for each $n \ge n_0$,
$$1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i} > 0, \quad \forall i = 1, 2, \ldots, N. \tag{15}$$
Observe that
$$\|z_n - p\|^2 = \Big\|\sum_{i=1}^{N}\eta_n^i z_n^i - p\Big\|^2 = \Big\|\sum_{i=1}^{N}\eta_n^i(z_n^i - p)\Big\|^2 = \sum_{i=1}^{N}\eta_n^i\|z_n^i - p\|^2 - \frac{1}{2}\sum_{i=1}^{N}\sum_{t=1}^{N}\eta_n^i\eta_n^t\|z_n^i - z_n^t\|^2 \le \sum_{i=1}^{N}\eta_n^i\|z_n^i - p\|^2. \tag{16}$$
Combining Lemma 5 and (16), we have
$$\|z_n - p\|^2 \le \|w_n - p\|^2 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2. \tag{17}$$
It follows from (15) and (17) that
$$\|z_n - p\| \le \|w_n - p\|, \quad \forall n \ge n_0. \tag{18}$$
Therefore,
$$\|w_n - p\| = \|x_n + \theta_n(x_n - x_{n-1}) - p\| \le \theta_n\|x_n - x_{n-1}\| + \|x_n - p\| = \alpha_n\cdot\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \|x_n - p\|. \tag{19}$$
According to Remark 1, $\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \to 0$, so there exists a constant $M_1 > 0$ such that
$$\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \le M_1, \quad \forall n \ge 1. \tag{20}$$
Combining (18)–(20), we obtain
$$\|z_n - p\| \le \|w_n - p\| \le \alpha_n\cdot\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \|x_n - p\| \le \alpha_n M_1 + \|x_n - p\|, \quad \forall n \ge n_0. \tag{21}$$
Using Lemma 3 (with $\eta = 0$, $\alpha = \alpha_n$, $\mu = \gamma$) and Step 4 of the MISE Algorithm, it follows that
$$\begin{aligned}\|x_{n+1} - p\| &= \|z_n - \alpha_n\gamma v_n - p + \alpha_n\gamma v^* - \alpha_n\gamma v^*\| = \|(z_n - \alpha_n\gamma v_n) - (p - \alpha_n\gamma v^*) - \alpha_n\gamma v^*\|\\ &\le \|(z_n - \alpha_n\gamma v_n) - (p - \alpha_n\gamma v^*)\| + \alpha_n\gamma\|v^*\|\\ &\le (1 - \alpha_n\tau)\|z_n - p\| + \alpha_n\gamma\|v^*\|\\ &\le (1 - \alpha_n\tau)(\alpha_n M_1 + \|x_n - p\|) + \alpha_n\gamma\|v^*\|\\ &= \alpha_n M_1 - \alpha_n^2\tau M_1 + (1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\gamma\|v^*\|\\ &\le (1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\tau\frac{M_1}{\tau} + \alpha_n\tau\frac{\gamma}{\tau}\|v^*\|\\ &= (1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\tau\cdot\frac{M_1 + \gamma\|v^*\|}{\tau}\\ &\le \max\Big\{\frac{M_1 + \gamma\|v^*\|}{\tau}, \|x_n - p\|\Big\}\end{aligned}$$
for all $n \ge n_0$, where $\tau = 1 - \sqrt{1 - \gamma(2\beta - \gamma L^2)}$. By induction, we obtain
$$\|x_n - p\| \le \max\Big\{\frac{M_1 + \gamma\|v^*\|}{\tau}, \|x_{n_0} - p\|\Big\}, \quad \forall n \ge n_0.$$
Hence, the sequence $\{x_n\}$ is bounded.
Step 2: We show that there is $M_4 > 0$ such that
$$\sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 + \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4,$$
for all $n \ge n_0$. One has
$$\begin{aligned}\|x_{n+1} - p\|^2 &= \|z_n - \alpha_n\gamma v_n + \alpha_n\gamma v^* - \alpha_n\gamma v^* - p\|^2 = \|(z_n - \alpha_n\gamma v_n) - (p - \alpha_n\gamma v^*) - \alpha_n\gamma v^*\|^2\\ &\le \|(z_n - \alpha_n\gamma v_n) - (p - \alpha_n\gamma v^*)\|^2 - 2\alpha_n\gamma\langle v^*, x_{n+1} - p\rangle\\ &= \|(z_n - \alpha_n\gamma v_n) - (p - \alpha_n\gamma v^*)\|^2 + 2\alpha_n\gamma\langle v^*, p - x_{n+1}\rangle\\ &\le (1 - \alpha_n\tau)^2\|z_n - p\|^2 + \alpha_n M_2 \le \|z_n - p\|^2 + \alpha_n M_2\end{aligned} \tag{22}$$
for some $M_2 > 0$, which exists because $\{x_n\}$, and hence $\langle v^*, p - x_{n+1}\rangle$, is bounded. Using (17), we obtain
$$\|x_{n+1} - p\|^2 \le \|w_n - p\|^2 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2 + \alpha_n M_2. \tag{23}$$
It follows from (21) that
$$\|w_n - p\|^2 \le \big(\alpha_n M_1 + \|x_n - p\|\big)^2 = \|x_n - p\|^2 + 2\alpha_n M_1\|x_n - p\| + \alpha_n^2 M_1^2 \le \|x_n - p\|^2 + \alpha_n M_3 \tag{24}$$
for some $M_3 > 0$. Combining (23) and (24), we obtain
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 + \alpha_n M_3 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 - \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2 + \alpha_n M_2.$$
Hence,
$$\sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|y_n^i - w_n\|^2 + \sum_{i=1}^{N}\eta_n^i\Big(1 - \mu\frac{\lambda_n^i}{\lambda_{n+1}^i}\Big)\|z_n^i - y_n^i\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4,$$
where $M_4 = M_2 + M_3$.
Step 3: We show that
$$\|x_{n+1} - p\|^2 \le (1 - \alpha_n\tau)\|x_n - p\|^2 + \alpha_n\tau\Big[\frac{2\gamma}{\tau}\langle v^*, p - x_{n+1}\rangle + \frac{3M\theta_n}{\alpha_n\tau}\|x_n - x_{n-1}\|\Big]$$
for all $n \ge n_0$. Indeed, we have
$$\begin{aligned}\|w_n - p\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - p\|^2 = \|(x_n - p) + \theta_n(x_n - x_{n-1})\|^2\\ &= \|x_n - p\|^2 + 2\theta_n\langle x_n - p, x_n - x_{n-1}\rangle + \theta_n^2\|x_n - x_{n-1}\|^2\\ &\le \|x_n - p\|^2 + 2\theta_n\|x_n - p\|\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2.\end{aligned}\tag{25}$$
Combining (18) and (22), we obtain
$$\|x_{n+1} - p\|^2 \le (1 - \alpha_n\tau)\|z_n - p\|^2 + 2\alpha_n\gamma\langle v^*, p - x_{n+1}\rangle \le (1 - \alpha_n\tau)\|w_n - p\|^2 + 2\alpha_n\gamma\langle v^*, p - x_{n+1}\rangle \tag{26}$$
for all $n \ge n_0$. Substituting (25) into (26), we obtain
$$\begin{aligned}\|x_{n+1} - p\|^2 &\le (1 - \alpha_n\tau)\|x_n - p\|^2 + 2\alpha_n\gamma\langle v^*, p - x_{n+1}\rangle + \theta_n\|x_n - x_{n-1}\|\big(2\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big)\\ &\le (1 - \alpha_n\tau)\|x_n - p\|^2 + \alpha_n\tau\Big(\frac{2\gamma}{\tau}\langle v^*, p - x_{n+1}\rangle + \frac{3M\theta_n}{\alpha_n\tau}\|x_n - x_{n-1}\|\Big)\end{aligned}$$
for all $n \ge n_0$, where $M = \sup_n\{\|x_n - p\|, \theta\|x_n - x_{n-1}\|\} > 0$.
Step 4: We show that $\|x_n - p\|^2$ converges to zero. By Lemma 2, it suffices to show that
$$\limsup_{k\to\infty}\langle v^*, p - x_{n_k+1}\rangle \le 0$$
for every subsequence $\{\|x_{n_k} - p\|\}$ of $\{\|x_n - p\|\}$ satisfying $\liminf_{k\to\infty}\big(\|x_{n_k+1} - p\| - \|x_{n_k} - p\|\big) \ge 0$. Assume that $\{\|x_{n_k} - p\|\}$ is such a subsequence, that is,
$$\liminf_{k\to\infty}\big(\|x_{n_k+1} - p\| - \|x_{n_k} - p\|\big) \ge 0.$$
From Step 2, one has
$$\begin{aligned}&\limsup_{k\to\infty}\Big[\sum_{i=1}^{N}\eta_{n_k}^i\Big(1 - \mu\frac{\lambda_{n_k}^i}{\lambda_{n_k+1}^i}\Big)\|y_{n_k}^i - w_{n_k}\|^2 + \sum_{i=1}^{N}\eta_{n_k}^i\Big(1 - \mu\frac{\lambda_{n_k}^i}{\lambda_{n_k+1}^i}\Big)\|z_{n_k}^i - y_{n_k}^i\|^2\Big]\\ &\quad\le \limsup_{k\to\infty}\big[\alpha_{n_k}M_4 + \|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2\big]\\ &\quad\le \limsup_{k\to\infty}\alpha_{n_k}M_4 + \limsup_{k\to\infty}\big[\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2\big]\\ &\quad= -\liminf_{k\to\infty}\big[\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2\big] \le 0.\end{aligned}$$
This implies that
$$\lim_{k\to\infty}\|y_{n_k}^i - w_{n_k}\| = 0 \quad\text{and}\quad \lim_{k\to\infty}\|z_{n_k}^i - y_{n_k}^i\| = 0, \tag{27}$$
for all $i = 1, 2, \ldots, N$. Therefore,
$$\lim_{k\to\infty}\|z_{n_k}^i - w_{n_k}\| = 0 \tag{28}$$
for all $i = 1, 2, \ldots, N$. We also know that
$$\|z_{n_k} - w_{n_k}\|^2 = \Big\|\sum_{i=1}^{N}\eta_{n_k}^i z_{n_k}^i - w_{n_k}\Big\|^2 = \Big\|\sum_{i=1}^{N}\eta_{n_k}^i(z_{n_k}^i - w_{n_k})\Big\|^2 = \sum_{i=1}^{N}\eta_{n_k}^i\|z_{n_k}^i - w_{n_k}\|^2 - \frac{1}{2}\sum_{i=1}^{N}\sum_{t=1}^{N}\eta_{n_k}^i\eta_{n_k}^t\|z_{n_k}^i - z_{n_k}^t\|^2.$$
Taking $k \to \infty$ and using (28), we obtain
$$\lim_{k\to\infty}\|z_{n_k} - w_{n_k}\| = 0. \tag{29}$$
Moreover, we can show that
$$\lim_{k\to\infty}\|x_{n_k+1} - z_{n_k}\| = \lim_{k\to\infty}\alpha_{n_k}\gamma\|v_{n_k}\| = 0 \tag{30}$$
and
$$\lim_{k\to\infty}\|x_{n_k} - w_{n_k}\| = \lim_{k\to\infty}\theta_{n_k}\|x_{n_k} - x_{n_k-1}\| = \lim_{k\to\infty}\alpha_{n_k}\cdot\frac{\theta_{n_k}}{\alpha_{n_k}}\|x_{n_k} - x_{n_k-1}\| = 0. \tag{31}$$
We know that
$$\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - z_{n_k}\| + \|z_{n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\|. \tag{32}$$
Taking $k \to \infty$ in (32) and using (29)–(31), we obtain
$$\lim_{k\to\infty}\|x_{n_k+1} - x_{n_k}\| = 0. \tag{33}$$
Since the sequence $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ that converges weakly to some $z \in H$ and satisfies
$$\limsup_{k\to\infty}\langle v^*, p - x_{n_k}\rangle = \lim_{j\to\infty}\langle v^*, p - x_{n_{k_j}}\rangle = \langle v^*, p - z\rangle. \tag{34}$$
It follows from (31) and (27) that $\{w_{n_k}\}$ and $\{y_{n_k}^i\}$ also converge weakly to z. Since C is closed and convex, it is weakly closed, and thus $z \in C$. Next, we show that $z \in \Omega = \bigcap_{i=1}^{N} SEP(g_i, C)$. It follows from Lemma 1 and the definition of $\{y_n^i\}$ that
$$0 \in \partial_2\Big\{g_i(w_n, y) + \frac{1}{2\lambda_n^i}\|y - w_n\|^2\Big\}(y_n^i) + N_C(y_n^i).$$
Therefore,
$$\lambda_n^i\big(g_i(w_n, y) - g_i(w_n, y_n^i)\big) \ge \langle w_n - y_n^i, y - y_n^i\rangle \quad \text{for all } y \in C. \tag{35}$$
Setting $n = n_k$ in (35), taking $k \to \infty$, and using the convergence of the sequences $\{\lambda_n^i\}$ (Lemma 4) together with Condition II(5), we obtain $g_i(z, y) \ge 0$ for all $y \in C$ and all $i = 1, 2, \ldots, N$. This implies that $z \in \Omega$. Using (14), we obtain $\langle v^*, p - z\rangle \le 0$. It follows from (33), (34), and the above inequality that
$$\limsup_{k\to\infty}\langle v^*, p - x_{n_k+1}\rangle \le 0. \tag{36}$$
Since $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$ and (36) holds, we obtain
$$\limsup_{k\to\infty}\Big[\frac{2\gamma}{\tau}\langle v^*, p - x_{n_k+1}\rangle + \frac{3M\theta_{n_k}}{\alpha_{n_k}\tau}\|x_{n_k} - x_{n_k-1}\|\Big] \le 0. \tag{37}$$
Combining Step 3 and (37) with Lemma 2, we conclude that $\{x_n\}$ converges strongly to p. This completes the proof.

4. Numerical Example

In this section, we present a numerical example to test the modified inertial subgradient extragradient algorithm (shortly, MISE Algorithm) for solving the bilevel system of equilibrium problems. We consider the following problem. Let $H = \mathbb{R}^n$ and $C = \{x \in \mathbb{R}^n : -20 \le x_j \le 20,\; j \in \{1, 2, \ldots, n\}\}$. Let the bifunctions $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ and $g_i : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, N$, be defined via
$$f(x, y) = \langle Px + Qy, y - x\rangle, \quad x, y \in \mathbb{R}^n, \qquad g_i(x, y) = \langle A_i x, y - x\rangle, \quad x, y \in \mathbb{R}^n,\; i = 1, \ldots, N, \tag{38}$$
where P and Q are randomly generated symmetric positive definite matrices defined via
$$Q = W^{\top}W + nI_n, \qquad P = Q + V^{\top}V + nI_n,$$
where W and V are random $n \times n$ matrices and $I_n$ is the $n \times n$ identity matrix. The linear operators $A_i : \mathbb{R}^n \to \mathbb{R}^n$ are given via $A_i = (a_{ls}^i)_{n\times n} \in \mathbb{R}^{n\times n}$, which are randomly generated symmetric positive definite matrices for all $i = 1, 2, \ldots, N$.
Note that the bifunction $f(x, y)$ is n-strongly monotone on $\mathbb{R}^n$, and, for each fixed $x \in H$, the function $f(x, \cdot)$ is convex on $\mathbb{R}^n$. Moreover, the subdifferential is $\partial_2 f(x, x) = \{(P + Q)x\}$, so the mapping $x \mapsto \partial_2 f(x, x)$ is bounded on bounded sets; the bifunctions $g_i$ are pseudomonotone on $\mathbb{R}^n$ and Lipschitz-type continuous with $L_1^i = L_2^i = \frac{1}{2}\|A_i\|$ for all $i = 1, 2, \ldots, N$.
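For completeness, here is a short verification (our addition; the text states the constants without proof) that each $g_i$ satisfies the Lipschitz-type inequality of Condition II(4) with $L_1^i = L_2^i = \frac{1}{2}\|A_i\|$:
$$\begin{aligned} g_i(x, y) + g_i(y, z) - g_i(x, z) &= \langle A_i x, y - x\rangle + \langle A_i y, z - y\rangle - \langle A_i x, z - x\rangle\\ &= \langle A_i x, y - z\rangle - \langle A_i y, y - z\rangle = \langle A_i(x - y), y - z\rangle\\ &\ge -\|A_i\|\,\|x - y\|\,\|y - z\| \ge -\tfrac{1}{2}\|A_i\|\,\|x - y\|^2 - \tfrac{1}{2}\|A_i\|\,\|y - z\|^2, \end{aligned}$$
where the last step uses $ab \le \frac{1}{2}(a^2 + b^2)$.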
We tested our algorithm on this example with dimensions n = 10, 50, 100, 500, 1000 and numbers of systems N = 10, 50, 100, 500. The matrices P and Q are built from W and V, respectively, whose entries are randomly generated in the interval $[-5, 5]$. The linear operators $A_i : C \to \mathbb{R}^n$ are defined via $A_i = (a_{ls}^i)_{n\times n}$, where the entries $a_{ls}^i$ are randomly generated in C for all $i = 1, 2, \ldots, N$. We choose the starting points $x_0$ and $x_1$ of the MISE Algorithm to be the vectors whose coordinates are all equal to one, and the parameters as follows: $\bar{L} = \frac{1}{2}\|P + Q\|$; $L_1^i = L_2^i = \frac{1}{2}\|A_i\|$, $i = 1, \ldots, N$; $L = \max\{\bar{L}, L_1^i, L_2^i : i = 1, \ldots, N\}$; $\theta = \frac{1}{4L}$; $\lambda_1^i = \frac{1}{4L}$, $i = 1, \ldots, N$; $\mu = \frac{1}{4L}$; $\gamma = \frac{2}{\|P + Q\|^2}$; $\eta_n^i = \frac{1}{N}$; $\alpha_n = \frac{1}{n + 1}$ and $\epsilon_n = \frac{1}{(n + 1)^2}$.
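The following Python sketch (our reconstruction of the setup just described, not the authors' code; in particular, the use of $W^{\top}W$ and $V^{\top}V$ and the symmetrization of the $A_i$ are our assumptions) generates one random test instance together with the parameters listed above:

```python
import numpy as np

def make_instance(n, N, seed=0):
    """Generate one random test problem of dimension n with N systems (illustrative)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-5.0, 5.0, size=(n, n))
    V = rng.uniform(-5.0, 5.0, size=(n, n))
    Q = W.T @ W + n * np.eye(n)            # symmetric positive definite
    P = Q + V.T @ V + n * np.eye(n)        # symmetric positive definite
    A_list = []
    for _ in range(N):
        B = rng.uniform(-20.0, 20.0, size=(n, n))
        A_list.append(0.5 * (B + B.T))     # symmetrized; definiteness not enforced here
    x0 = np.ones(n)
    x1 = np.ones(n)
    # Parameters as described in the text
    L_bar = 0.5 * np.linalg.norm(P + Q, 2)
    L = max([L_bar] + [0.5 * np.linalg.norm(A, 2) for A in A_list])
    theta = lam0 = mu = 1.0 / (4.0 * L)
    gamma = 2.0 / np.linalg.norm(P + Q, 2) ** 2
    return P, Q, A_list, x0, x1, theta, lam0, mu, gamma
```

An instance produced this way can be unpacked and passed directly to the mise sketch given after the algorithm statement in Section 3, e.g., mise(A_list, P, Q, x0, x1, theta, lam0, mu, gamma).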
Note that at each iteration of the MISE Algorithm, $y_n^i$ and $z_n^i$ are obtained via
$$y_n^i = P_C\big(w_n - \lambda_n^i A_i w_n\big)$$
and
$$z_n^i = P_{H_n^i}\big(w_n - \lambda_n^i A_i y_n^i\big).$$
Since $C = \{x \in \mathbb{R}^n : -20 \le x_j \le 20,\; j \in \{1, 2, \ldots, n\}\}$ is a box and $H_n^i = \{x \in \mathbb{R}^n : \langle w_n - \lambda_n^i u_n^i - y_n^i, x - y_n^i\rangle \le 0\}$ is a half-space, $y_n^i$ and $z_n^i$ can be computed explicitly. For more details, see [21].
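Explicitly, these are the standard closed-form projections (our restatement, which is what makes each iteration cheap):
$$\big(P_C(x)\big)_j = \min\{20, \max\{-20, x_j\}\}, \qquad P_{H_n^i}(x) = x - \frac{\max\{0, \langle a_n^i, x - y_n^i\rangle\}}{\|a_n^i\|^2}\, a_n^i, \quad \text{where } a_n^i = w_n - \lambda_n^i u_n^i - y_n^i,$$
so each iteration requires only matrix–vector products and componentwise operations; when $a_n^i = 0$, $H_n^i$ is the whole space and the projection is the identity.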
The experiments were performed in MATLAB R2018a on a laptop with a 2.59 GHz Intel Core i7 and 4 GB of RAM. We terminate the algorithm via the stopping criterion
$$\frac{\|x_{n+1} - x_n\|}{\|x_{n+1}\|} \le \varepsilon,$$
where $\varepsilon = 10^{-6}$, and record the number of iterations and the CPU time in seconds. The results are presented in Table 1, where the following notation is used:
  • N.P: the number of tested problems;
  • Average iteration: the average number of iterations;
  • Average times: the average CPU time (in seconds).
From the computed results reported in Table 1, we see that the sequence generated by the proposed MISE Algorithm converges and that the algorithm is effective for finding the solution of the bilevel system of equilibrium problems.
Next, we compare the proposed MISE Algorithm with the extragradient subgradient Halpern method (shortly, ESH Algorithm) [23]. We consider Problem (38) in the case where the number of systems is N = 1. We tested the example with dimensions n = 50, 100; the matrices P and Q are built from W and V, respectively, whose entries are randomly generated in the interval $[-5, 5]$; and the matrix $A = (a_{ls})$ has entries $a_{ls}$ randomly generated in C. The parameters are defined as follows:
  • MISE Algorithm: starting points $x_0 = x_1 = (1, \ldots, 1)$; $\bar{L} = \frac{1}{2}\|P + Q\|$; $L_1^i = L_2^i = \frac{1}{2}\|A_i\|$, $i = 1, \ldots, N$; $L = \max\{\bar{L}, L_1^i, L_2^i : i = 1, \ldots, N\}$; $\theta = \mu = \lambda_1^i = \frac{1}{4L}$, $i = 1, \ldots, N$; $\gamma = \frac{2}{\|P + Q\|^2}$; $\eta_n^i = \frac{1}{N}$; $\alpha_n = \frac{1}{n + 1}$ and $\epsilon_n = \frac{1}{(n + 1)^2}$.
  • ESH Algorithm: $x_0 = (1, \ldots, 1)$; $\lambda_n = \frac{1}{4\|A\|}$; $\mu = \frac{2}{\|P + Q\|^2}$; $\alpha_n = \frac{1}{n + 1}$ and $\eta_n = \frac{n + 1}{2(n + 1)}$.
We terminate both algorithms using the stopping criterion $\frac{\|x_{n+1} - x_n\|}{\|x_{n+1}\|} \le 10^{-6}$. The results are presented in Figure 1 and Figure 2.
From the results reported in Figure 1 and Figure 2, we observe that the MISE Algorithm performs significantly better than the ESH Algorithm.

5. Conclusions

We have proposed an inertial subgradient extragradient algorithm to solve bilevel systems of equilibrium problems in real Hilbert spaces. Our algorithm works without prior knowledge of the Lipschitz constants of the bifunctions involved. Under appropriate conditions, we obtained strong convergence theorems for our algorithms. Finally, we presented some numerical examples showing that our algorithms are efficient.

Author Contributions

Methodology, T.Y.; Writing—original draft, T.Y.; Supervision, S.P.; T.Y. conceived and designed the method, proved the theorem, authored and reviewed drafts of the article, and approved the final draft. S.P. approved the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

This work (Grant No. RGNS 64 –195) was supported by The Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (OPS MHESI), Thailand Science Research and Innovation (TSRI) and Uttaradit Rajabhat University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank The Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (OPS MHESI) and Thailand Science Research and Innovation (TSRI) for their support under Grant No. RGNS 64-195. We would also like to thank Bui Van Dinh for his able guidance and support in this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chadli, O.; Chbani, Z.; Riahi, H. Equilibrium problems with generalized monotone bifunctions and applications to variational inequalities. J. Optim. Theory Appl. 2000, 105, 299–323.
  2. Moudafi, A. Proximal methods for a class of bilevel monotone equilibrium problems. J. Glob. Optim. 2010, 47, 287–292.
  3. Bento, G.C.; Cruz Neto, J.X.; Lopes, J.O.; Soares, P.A., Jr.; Soubeyran, A. Generalized proximal distances for bilevel equilibrium problems. SIAM J. Optim. 2016, 26, 810–830.
  4. Chbani, Z.; Riahi, H. Weak and strong convergence of proximal penalization and proximal splitting algorithms for two-level hierarchical Ky Fan minimax inequalities. Optimization 2015, 64, 1285–1303.
  5. Thuy, L.Q.; Hai, T.N. A projected subgradient algorithm for bilevel equilibrium problems and applications. J. Optim. Theory Appl. 2017, 175, 411–431.
  6. Quy, N.V. An algorithm for a bilevel problem with equilibrium and fixed point constraints. Optimization 2014, 64, 172359–172375.
  7. Dinh, B.V.; Muu, L.D. Penalty and Gap Function Methods for Bilevel Equilibrium Problems. J. Appl. Math. 2011, 2011, 646452.
  8. Hieu, D.V.; Quy, P.K. One-Step iterative method for bilevel equilibrium problem in Hilbert space. J. Glob. Optim. 2023, 85, 487–510.
  9. Lotfikar, R.; Eskandani, G.Z.; Kim, J.K. The subgradient extragradient method for solving monotone bilevel equilibrium problems using Bregman distance. Nonlinear Funct. Anal. Appl. 2023, 28, 337–363.
  10. Mastroeni, G. On auxiliary principle for equilibrium problems. Publ. Dip. Math. Dell'Univ. Pisa 2000, 3, 1244–1258.
  11. Fan, K. A minimax inequality and applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972; pp. 103–113.
  12. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 127–149.
  13. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
  14. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11.
  15. Daniele, P.; Giannessi, F.; Maugeri, A. Equilibrium Problems and Variational Models; Kluwer: Norwell, MA, USA, 2003.
  16. Kassay, G.; Radulescu, V. Equilibrium Problems and Applications; Elsevier: Amsterdam, The Netherlands, 2018.
  17. Flam, S.D.; Antipin, A.S. Equilibrium programming and proximal-like algorithms. Math. Progr. 1997, 78, 29–41.
  18. Quoc, T.D.; Muu, L.D.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  19. Chbani, Z.; Riahi, H. Weak and strong convergence of an inertial proximal method for solving Ky Fan minimax inequalities. Optim. Lett. 2013, 7, 185–206.
  20. Vinh, N.T.; Muu, L.D. Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 2019, 44, 639–663.
  21. Duc, M.H.; Thanh, H.N.T.; Huyen, T.T.T.; Dinh, B.V. Ishikawa Subgradient Extragradient Method for Equilibrium Problems and Fixed Point Problems in Hilbert Spaces. Numer. Funct. Anal. Optim. 2020, 41, 1065–1088.
  22. Thong, D.V.; Cholamjiak, P.; Rassias, M.T.; Cho, Y.J. Strong convergence of inertial subgradient extragradient algorithm for solving pseudomonotone equilibrium problems. Optim. Lett. 2021, 16, 545–573.
  23. Yuying, T.; Dinh, B.V.; Kim, D.S.; Plubtieng, S. Extragradient subgradient methods for solving bilevel equilibrium problems. J. Inequal. Appl. 2018, 2018, 327.
  24. Munkong, J.; Dinh, B.V.; Ungchittrakool, K. An inertial extragradient method for solving bilevel equilibrium problems. Carpathian J. Math. 2020, 36, 91–107.
  25. Anh, P.N.; Thanh, D.D.; Linh, N.K.; Tu, H.P. New Explicit Extragradient Methods for Solving a Class of Bilevel Equilibrium Problems. Bull. Malays. Math. Sci. Soc. 2021, 44, 3285–3305.
  26. Iusem, A.N. On the Maximal Monotonicity of Diagonal Subdifferential Operators. J. Convex Anal. 2011, 18, 1705–1706.
  27. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
Figure 1. The number of iterations of the MISE Algorithm and the ESH Algorithm, where the dimension is n = 50.
Figure 2. The number of iterations of the MISE Algorithm and the ESH Algorithm, where the dimension is n = 100.
Table 1. The results of the modified inertial subgradient extragradient algorithm.

n       N       N.P     Average Iteration       Average Times (s)
10      10      10      300                     0.0703
10      50      10      254                     0.1813
10      100     10      238                     0.4266
10      500     10      253                     1.9734
50      10      10      204                     0.1797
50      50      10      207                     0.2828
50      100     10      196                     0.6547
50      500     10      197                     2.7172
100     10      10      90                      0.2469
100     50      10      89                      1.2828
100     100     10      90                      2.2328
100     500     10      91                      12.1719
500     10      10      17                      1.7171
500     50      10      18                      8.8781
500     100     10      19                      15.7156
500     500     10      17                      80.5547
1000    10      10      10                      97.9687
1000    50      10      8                       33.2968
1000    100     10      8                       73.9375
1000    500     10      9                       362.4844