Article

Inertial Method for Bilevel Variational Inequality Problems with Fixed Point and Minimizer Point Constraints

by Seifu Endris Yimer 1,2, Poom Kumam 1,3,*, Anteneh Getachew Gebrie 2 and Rabian Wangkeeree 4
1 KMUTT Fixed Point Research Laboratory, SCL 802 Fixed Point Laboratory & Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2 Department of Mathematics, College of Computational and Natural Science, Debre Berhan University, P.O. Box 445, Debre Berhan, Ethiopia
3 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 841; https://doi.org/10.3390/math7090841
Submission received: 5 August 2019 / Revised: 30 August 2019 / Accepted: 2 September 2019 / Published: 11 September 2019

Abstract: In this paper, we introduce an iterative scheme with inertial effect, using the Mann iterative scheme and gradient-projection, for solving the bilevel variational inequality problem over the intersection of the set of common fixed points of a finite number of nonexpansive mappings and the set of solution points of the constrained optimization problem. Under some mild conditions, we obtain strong convergence of the proposed algorithm. Two examples of the proposed bilevel variational inequality problem are also illustrated through numerical results.

1. Introduction

A bilevel problem is a mathematical program in which one problem contains another problem as a constraint. Mathematically, a bilevel problem is formulated as follows:
$$\text{find } \bar{x} \in S \subseteq X \text{ that solves problem } P_1 \text{ installed in space } X, \tag{1}$$
where S is the solution set of the problem
$$\text{find } x \in Y \subseteq X \text{ that solves problem } P_2 \text{ installed in space } X. \tag{2}$$
Usually, (1) is called the upper-level problem and (2) is called the lower-level problem. Many real-life problems can be modeled as bilevel problems, and some studies have been carried out on solving different kinds of bilevel problems using approximation theory—see, for example, [1,2,3] for bilevel optimization problems, [4,5,6,7,8,9] for bilevel variational inequality problems, [10,11,12] for bilevel equilibrium problems, and [13,14] for practical applications. In [14], applications of bilevel problems (bilevel optimization problems) in transportation (network design, optimal pricing), economics (Stackelberg games, principal-agent problems, taxation, policy decisions), management (network facility location, coordination of multi-divisional firms), engineering (optimal design, optimal chemical equilibria), etc., are demonstrated. Due to the vast applications of bilevel problems, research on approximation algorithms for bilevel problems has grown over the years, yet it is still at an early stage.
A simple practical example of a bilevel model arises in supply chain management, with a supplier and a store owner in a business chain. The supplier always delivers his/her best output of some commodities to the store owner, who in turn does his/her best in the business. In some sense, both would like to minimize their loss, or rather maximize their profit, and thus act in an optimistic pattern. In this example, the store owner is the upper-level decision maker and the supplier is the lower-level decision maker. Thus, in the study of supply chain management, the bilevel problem can indeed play a fundamental role.
In this paper, our main aim is to solve the bilevel variational inequality problem over the intersection of the set of common fixed points of a finite number of nonexpansive mappings and the set of solution points of a constrained minimization problem, denoted by BVIPO-FM. To be precise, let C be a closed convex subset of a real Hilbert space H, let $F : H \to H$ be a mapping, $f : C \to \mathbb{R}$ a real-valued convex function, and $U_j : C \to C$ a nonexpansive mapping for each $j \in \{1, \ldots, M\}$. Then, BVIPO-FM is given by
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega, \tag{3}$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f(x) = \min_{x\in C} f(x) \text{ and } x \in \bigcap_{j=1}^{M} \mathrm{Fix}\,U_j. \tag{4}$$
The notation $\mathrm{Fix}\,U_j$ represents the set of fixed points of $U_j$, i.e., $\mathrm{Fix}\,U_j = \{y \in C : U_j(y) = y\}$ for $j \in \{1, \ldots, M\}$. Thus, $\Omega = (\bigcap_{j=1}^{M} \mathrm{Fix}\,U_j) \cap \Gamma$, where $\Gamma$ is the solution set of the constrained convex minimization problem given by
$$\text{find } x \in C \text{ such that } f(x) = \min_{x\in C} f(x). \tag{5}$$
The problem (3) is a classical variational inequality problem, denoted by $\mathrm{VIP}(\Omega, F)$, which has been studied by many authors—see, for example, [7,15,16,17] and the references therein. The solution set of the variational inequality problem $\mathrm{VIP}(\Omega, F)$ is denoted by $\mathrm{SVIP}(\Omega, F)$. Therefore, BVIPO-FM amounts to solving $\mathrm{VIP}(\Omega, F)$, where $\Omega = (\bigcap_{j=1}^{M} \mathrm{Fix}\,U_j) \cap \Gamma$. The bilevel problem whose upper-level problem is a variational inequality problem was introduced in [18]. These problems have received significant attention from the mathematical programming community. Bilevel variational inequality problems can be used to study various bilevel models in optimization, economics, operations research, and transportation.
It is known that the gradient-projection algorithm—given by
$$x_{n+1} = P_C(x_n - \lambda_n\nabla f(x_n)), \tag{6}$$
where the parameters $\lambda_n$ are positive real numbers—is one of the powerful methods for solving the minimization problem (5) (see [19,20,21]). In general, if the gradient $\nabla f$ is Lipschitz continuous and strongly monotone, then the sequence $\{x_n\}$ generated by the recursive formula (6) converges strongly to a minimizer of (5), provided the parameters $\{\lambda_n\}$ satisfy suitable conditions. However, if the gradient $\nabla f$ is only inverse strongly monotone, the sequence $\{x_n\}$ generated by (6) converges only weakly.
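To make (6) concrete, the following is a minimal sketch in Python (NumPy); the toy objective, the names `projected_gradient`, `grad_f`, and `proj_C`, and the fixed step size are illustrative assumptions, not part of the scheme analyzed later:

```python
import numpy as np

def projected_gradient(grad_f, proj_C, x0, step, n_iters=100):
    """Iterate x_{n+1} = P_C(x_n - step * grad_f(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = proj_C(x - step * grad_f(x))
    return x

# Toy instance: minimize f(x) = 0.5 * ||x - c||^2 over the box C = [-2, 2]^2.
c = np.array([3.0, -0.5])
grad_f = lambda x: x - c                   # grad f is 1-Lipschitz (L = 1)
proj_C = lambda x: np.clip(x, -2.0, 2.0)   # projection onto a box is a clip
print(projected_gradient(grad_f, proj_C, np.zeros(2), step=0.5))  # -> [ 2.  -0.5]
```

With a 1-Lipschitz gradient, any fixed step in $(0, 2)$ keeps this toy iteration in the convergent regime described above.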
In approximation theory, constructing iterative schemes with a fast rate of convergence is usually of great interest. For this purpose, Polyak [22] proposed an inertial accelerated extrapolation process to solve the smooth convex minimization problem. Since then, there has been growing interest in this direction, and many researchers have constructed fast iterative algorithms by using inertial extrapolation, including inertial forward–backward splitting methods [23,24], an inertial Douglas–Rachford splitting method [25], an inertial forward–backward–forward method [26], an inertial proximal-extragradient method [27], and others.
In this paper, we introduce an algorithm with inertial effect for solving BVIPO-FM using the projection method for the variational inequality problem, the well-known Mann iterative scheme [28] for the nonexpansive mappings $U_j$, and gradient-projection for the function f. It is proved that the sequence generated by our proposed algorithm converges strongly to the solution of BVIPO-FM.

2. Preliminaries

Let H be a real Hilbert space. The symbols "$\rightharpoonup$" and "$\to$" denote weak and strong convergence, respectively. Recall that, for a nonempty closed convex subset C of H, the metric projection onto C is the mapping $P_C : H \to C$ defined by
$$P_C(x) = \arg\min\{\|y - x\| : y \in C\}, \; \forall x \in H.$$
Lemma 1.
Let C be a closed convex subset of H. Given $x \in H$ and a point $z \in C$, then $z = P_C(x)$ if and only if $\langle x - z, y - z\rangle \le 0, \; \forall y \in C$.
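As a quick numerical illustration of Lemma 1, the following sketch (NumPy assumed; C taken as a box so that $P_C$ is a componentwise clip) checks the characterizing inequality at random feasible points:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-2.0, hi=2.0):
    # For a box C = [lo, hi]^N the metric projection acts componentwise.
    return np.clip(x, lo, hi)

x = rng.normal(scale=5.0, size=4)            # a point typically outside C
z = proj_box(x)
# Lemma 1: z = P_C(x) iff <x - z, y - z> <= 0 for every y in C.
for _ in range(1000):
    y = rng.uniform(-2.0, 2.0, size=4)       # random feasible y in C
    assert np.dot(x - z, y - z) <= 1e-12
print("characterization of Lemma 1 holds at z =", z)
```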
Definition 1.
For $C \subseteq H$, the mapping $T : C \to H$ is said to be L-Lipschitz on C if there exists $L > 0$ such that
$$\|T(x) - T(y)\| \le L\|x - y\|, \; \forall x, y \in C.$$
If $L \in (0, 1)$, then we call T a contraction mapping on C with constant L. If $L = 1$, then T is called a nonexpansive mapping on C.
Definition 2.
The mapping $T : H \to H$ is said to be firmly nonexpansive if
$$\langle x - y, T(x) - T(y)\rangle \ge \|T(x) - T(y)\|^2, \; \forall x, y \in H.$$
Alternatively, $T : H \to H$ is firmly nonexpansive if T can be expressed as
$$T = \frac{1}{2}(I + S),$$
where $S : H \to H$ is nonexpansive.
The class of firmly nonexpansive mappings belongs to the class of nonexpansive mappings.
Definition 3.
The mapping $T : H \to H$ is said to be
(a) monotone if
$$\langle T(x) - T(y), x - y\rangle \ge 0, \; \forall x, y \in H;$$
(b) β-strongly monotone if there exists a constant $\beta > 0$ such that
$$\langle T(x) - T(y), x - y\rangle \ge \beta\|x - y\|^2, \; \forall x, y \in H;$$
(c) ν-inverse strongly monotone (ν-ism) if there exists $\nu > 0$ such that
$$\langle T(x) - T(y), x - y\rangle \ge \nu\|T(x) - T(y)\|^2, \; \forall x, y \in H.$$
Definition 4.
The mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity mapping I and a nonexpansive mapping, that is,
$$T = (1 - \alpha)I + \alpha S, \tag{7}$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, when (7) holds, we say that T is α-averaged.
It is easy to see that a firmly nonexpansive mapping (in particular, a projection) is $\frac{1}{2}$-averaged and 1-inverse strongly monotone. Averaged mappings and ν-inverse strongly monotone mappings (ν-ism) have received many investigations; see [29,30,31,32]. The following propositions about averaged mappings and inverse strongly monotone mappings are some of the important facts in our discussion in this paper.
Proposition 1
([29,30]). Let the operators $S, T, V : H \to H$ be given:
(i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, and if S is averaged and V is nonexpansive, then T is averaged.
(ii) T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
(iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv) The composition of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
Proposition 2
([29,31]). Let $T : H \to H$ be given. We have:
(a) T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism;
(b) if T is ν-ism and $\gamma > 0$, then $\gamma T$ is $\frac{\nu}{\gamma}$-ism;
(c) T is averaged if and only if the complement $I - T$ is ν-ism for some $\nu > \frac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
Lemma 2.
(Opial’s condition) For any sequence $\{x_n\}$ in the Hilbert space H with $x_n \rightharpoonup x$, the inequality
$$\liminf_{n\to+\infty}\|x_n - x\| < \liminf_{n\to+\infty}\|x_n - y\|$$
holds for each $y \in H$ with $y \ne x$.
Lemma 3.
For a real Hilbert space H, we have
(i) $\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y\rangle, \; \forall x, y \in H$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \; \forall x, y \in H$.
Lemma 4.
Let H be a real Hilbert space. Then, for all $x, y \in H$ and $\alpha \in [0, 1]$, we have
$$\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2.$$
Lemma 5
([33]). Let $\{c_n\}$ and $\{\gamma_n\}$ be sequences of non-negative real numbers, and let $\{\beta_n\}$ be a sequence of real numbers such that
$$c_{n+1} \le (1 - \alpha_n)c_n + \beta_n + \gamma_n, \; n \ge 1,$$
where $0 < \alpha_n < 1$ and $\sum_{n=1}^{\infty}\gamma_n < \infty$.
(i) If $\beta_n \le \alpha_n M$ for some $M \ge 0$, then $\{c_n\}$ is a bounded sequence.
(ii) If $\sum_{n=1}^{\infty}\alpha_n = \infty$ and $\limsup_{n\to\infty}\frac{\beta_n}{\alpha_n} \le 0$, then $c_n \to 0$ as $n \to \infty$.
Definition 5.
Let $\{\Gamma_n\}$ be a real sequence. Then, $\{\Gamma_n\}$ decreases at infinity if there exists $n_0 \in \mathbb{N}$ such that $\Gamma_{n+1} \le \Gamma_n$ for $n \ge n_0$. On the other hand, the sequence $\{\Gamma_n\}$ does not decrease at infinity if there exists a subsequence $\{\Gamma_{n_t}\}_{t\ge 1}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_t} < \Gamma_{n_t+1}$ for all $t \ge 1$.
Lemma 6
([34]). Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity. Additionally, consider the sequence of integers $\{\varphi(n)\}_{n\ge n_0}$ defined by
$$\varphi(n) = \max\{k \in \mathbb{N} : k \le n, \; \Gamma_k \le \Gamma_{k+1}\}.$$
Then, $\{\varphi(n)\}_{n\ge n_0}$ is a nondecreasing sequence verifying $\lim_{n\to\infty}\varphi(n) = \infty$, and, for all $n \ge n_0$, the following two estimates hold:
$$\Gamma_{\varphi(n)} \le \Gamma_{\varphi(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\varphi(n)+1}.$$
Let C be a closed convex subset of a real Hilbert space H and let $g : C \times C \to \mathbb{R}$ be a bifunction. Then, the problem
$$\text{find } \bar{x} \in C \text{ such that } g(\bar{x}, y) \ge 0, \; \forall y \in C,$$
is called the equilibrium problem (Fan inequality [35]) of g on C, denoted by $\mathrm{EP}(g, C)$. The set of all solutions of $\mathrm{EP}(g, C)$ is denoted by $\mathrm{SEP}(g, C)$, i.e., $\mathrm{SEP}(g, C) = \{\bar{x} \in C : g(\bar{x}, y) \ge 0, \; \forall y \in C\}$. If $g(x, y) = \langle A(x), y - x\rangle$ for every $x, y \in C$, where A is a mapping from C into H, then the equilibrium problem becomes the variational inequality problem.
We say that the bifunction $g : C \times C \to \mathbb{R}$ satisfies Condition CO on C if the following four assumptions are satisfied:
(i) $g(x, x) = 0$ for all $x \in C$;
(ii) g is monotone on C, i.e., $g(x, y) + g(y, x) \le 0$ for all $x, y \in C$;
(iii) for each $x, y, z \in C$, $\limsup_{\alpha\downarrow 0} g(\alpha z + (1 - \alpha)x, y) \le g(x, y)$;
(iv) $g(x, \cdot)$ is convex and lower semicontinuous on C for each $x \in C$.
Lemma 7
([36]). If g satisfies Condition CO on C, then, for each $r > 0$ and $x \in H$, the mapping given by
$$T_r^g(x) = \left\{z \in C : g(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \; \forall y \in C\right\}$$
satisfies the following conditions:
(1) $T_r^g$ is single-valued;
(2) $T_r^g$ is firmly nonexpansive, i.e., for all $x, y \in H$,
$$\|T_r^g(x) - T_r^g(y)\|^2 \le \langle T_r^g(x) - T_r^g(y), x - y\rangle;$$
(3) $\mathrm{Fix}(T_r^g) = \{\bar{x} \in C : g(\bar{x}, y) \ge 0, \; \forall y \in C\}$, where $\mathrm{Fix}(T_r^g)$ is the fixed point set of $T_r^g$;
(4) $\{\bar{x} \in C : g(\bar{x}, y) \ge 0, \; \forall y \in C\}$ is closed and convex.

3. Main Result

In this paper, we are interested in finding a solution to BVIPO-FM, where F and f satisfy the following conditions:
(A1) $F : H \to H$ is β-strongly monotone and κ-Lipschitz continuous on H.
(A2) The gradient $\nabla f$ is L-Lipschitz continuous on C.
We are now in a position to state our inertial algorithm and prove its strong convergence to the solution of BVIPO-FM, assuming that F satisfies condition (A1), f satisfies condition (A2), and $\mathrm{SVIP}(\Omega, F)$ is nonempty.
We have plenty of choices for $\{\alpha_n\}$, $\{\varepsilon_n\}$, and $\{\rho_n\}$ satisfying the parameter restrictions (C3), (C4), and (C5). For example, if we take $\alpha_n = \frac{1}{3n}$, $\varepsilon_n = \frac{1}{n^2}$, and $\rho_n = \frac{n+1}{3n+1}$, then $0 < \alpha_n < 1$, $\lim_{n\to\infty}\alpha_n = 0$, $\lim_{n\to\infty}\frac{\varepsilon_n}{\alpha_n} = \lim_{n\to\infty}\frac{3}{n} = 0$ (i.e., $\varepsilon_n = o(\alpha_n)$), $0 \le \rho_n = \frac{n+1}{3n+1} \le 1 - \alpha_n = \frac{3n-1}{3n}$, and $\lim_{n\to\infty}\rho_n = \frac{1}{3}$. Therefore, (C3), (C4), and (C5) are satisfied.
Remark 1.
From (C4) and Step 1 of Algorithm 1, we have that
$$\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| \to 0, \; n \to \infty.$$
Since $\{\alpha_n\}$ is bounded, we also have
$$\theta_n\|x_n - x_{n-1}\| \to 0, \; n \to \infty.$$
Note that Step 1 of Algorithm 1 is easily implemented in numerical computation, since the value of $\|x_n - x_{n-1}\|$ is known a priori before choosing $\theta_n$.
Algorithm 1—Inertial Algorithm for BVIPO-FM
Initialization: Choose $x_0, x_1 \in C$. Let the positive real constants θ, μ and the real sequences $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$ satisfy the following conditions:
      (C1) $0 \le \theta < 1$;
      (C2) $0 < \mu < \min\left\{\frac{2\beta}{\kappa^2}, \frac{1}{2\beta}\right\}$;
      (C3) $0 < \alpha_n < 1$, $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
      (C4) $\varepsilon_n > 0$ and $\varepsilon_n = o(\alpha_n)$;
      (C5) $0 \le \rho_n \le 1 - \alpha_n$ and $\lim_{n\to\infty}\rho_n = \rho < 1$;
      (C6) $0 < a \le \lambda_n \le b < \frac{2}{L}$ and $\lim_{n\to\infty}\lambda_n = \hat{\lambda}$;
      (C7) $0 < \xi \le \beta_n \le \zeta < 1$.
Step 1. Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n := \begin{cases} \min\left\{\theta, \frac{\varepsilon_n}{\|x_{n-1} - x_n\|}\right\}, & \text{if } x_{n-1} \ne x_n, \\ \theta, & \text{otherwise}. \end{cases}$$
Step 2. Evaluate $z_n = x_n + \theta_n(x_n - x_{n-1})$.
Step 3. Evaluate $y_n = P_C(z_n - \lambda_n\nabla f(z_n))$.
Step 4. Evaluate $t_n^j = (1 - \beta_n)y_n + \beta_n U_j(y_n)$ for each $j \in \{1, \ldots, M\}$.
Step 5. Evaluate $t_n = \arg\max\{\|v - y_n\| : v \in \{t_n^1, \ldots, t_n^M\}\}$.
Step 6. Compute
$$x_{n+1} = \rho_n z_n + \Psi_{\mu,\alpha_n,\rho_n}(t_n),$$
where $\Psi_{\mu,\alpha_n,\rho_n} := (1 - \rho_n)I - \alpha_n\mu F$.
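The following is a minimal NumPy transcription of Algorithm 1, intended only as a sketch: the function name `inertial_bvip`, the concrete parameter sequences (the sample choices $\alpha_n = \frac{1}{3n}$, $\varepsilon_n = \frac{1}{n^2}$, $\rho_n = \frac{n+1}{3n+1}$ discussed above), and the frozen $\lambda_n$, $\beta_n$ are illustrative assumptions; any choices satisfying (C1)–(C7) fit the analysis.

```python
import numpy as np

def inertial_bvip(F, grad_f, proj_C, U_list, x0, x1, mu, L,
                  theta=0.5, n_iters=200):
    """Sketch of Algorithm 1 under assumptions (A1)-(A2).

    The parameter sequences below are the sample choices discussed in the
    text (alpha_n = 1/(3n), eps_n = 1/n^2, rho_n = (n+1)/(3n+1)); lambda_n
    and beta_n are frozen at admissible constants for simplicity.
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iters + 1):
        alpha, eps, rho = 1.0 / (3 * n), 1.0 / n**2, (n + 1) / (3 * n + 1)
        lam, beta = 1.0 / L, 0.5          # (C6): 0 < lam < 2/L; (C7): 0 < beta < 1
        # Step 1: inertial weight, theta_n <= min{theta, eps_n/||x_n - x_{n-1}||}.
        d = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eps / d) if d > 0 else theta
        # Step 2: inertial extrapolation.
        z = x + theta_n * (x - x_prev)
        # Step 3: gradient-projection step on the lower-level objective f.
        y = proj_C(z - lam * grad_f(z))
        # Steps 4-5: Mann steps; keep the t_n^j farthest from y_n.
        ts = [(1 - beta) * y + beta * U(y) for U in U_list]
        t = max(ts, key=lambda v: np.linalg.norm(v - y))
        # Step 6: x_{n+1} = rho_n z_n + (1 - rho_n) t_n - alpha_n mu F(t_n).
        x_prev, x = x, rho * z + (1 - rho) * t - alpha * mu * F(t)
    return x
```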
Remark 2.
Note that the point $\bar{x} \in C$ solves the minimization problem (5) if and only if
$$P_C(\bar{x} - \lambda\nabla f(\bar{x})) = \bar{x}, \tag{8}$$
where $\lambda > 0$ is any fixed positive number. Therefore, the solution set $\Gamma$ of the problem (5) is a closed and convex subset of H, because for $0 < \lambda < \frac{2}{L}$ the mapping $P_C(I - \lambda\nabla f)$ is nonexpansive and the solution points of (5) are the fixed points of $P_C(I - \lambda\nabla f)$. Moreover, each $U_j$ is nonexpansive, and hence $\mathrm{Fix}\,U_j$ is closed and convex for each $j \in \{1, \ldots, M\}$.
Lemma 8.
For a real number $\lambda > 0$ with $0 < a \le \lambda \le b < \frac{2}{L}$, the mapping $T_\lambda := P_C(I - \lambda\nabla f)$ is $\frac{2+\lambda L}{4}$-averaged.
Proof. 
Since $\nabla f$ is L-Lipschitz, the gradient $\nabla f$ is $\frac{1}{L}$-ism [37], which then implies that $\lambda\nabla f$ is $\frac{1}{\lambda L}$-ism. So, by Proposition 2(c), $I - \lambda\nabla f$ is $\frac{\lambda L}{2}$-averaged. Now, since the projection $P_C$ is $\frac{1}{2}$-averaged, we see from Proposition 1(iv) that the composite $P_C(I - \lambda\nabla f)$ is $\frac{2+\lambda L}{4}$-averaged. Therefore, for some nonexpansive mapping T, $T_\lambda$ can be written as
$$T_\lambda := P_C(I - \lambda\nabla f) = (1 - \delta)I + \delta T,$$
where $\frac{1}{2} < a_1 = \frac{2+aL}{4} \le \delta = \frac{2+\lambda L}{4} \le b_1 = \frac{2+bL}{4} < 1$. Note that, in view of Remark 2 and (8), the point $\bar{x} \in C$ solves the minimization problem (5) if and only if $T(\bar{x}) = \bar{x}$.  □
Lemma 9.
For each n, the mapping $\Psi_{\mu,\alpha_n,\rho_n}$ defined in Step 6 of Algorithm 1 satisfies the inequality
$$\|\Psi_{\mu,\alpha_n,\rho_n}(x) - \Psi_{\mu,\alpha_n,\rho_n}(y)\| \le (1 - \rho_n - \alpha_n\tau)\|x - y\|, \; \forall x, y \in H,$$
where $\tau = 1 - \sqrt{1 - \mu(2\beta - \mu\kappa^2)} \in (0, 1)$.
Proof. 
From (C2), it is easy to see that
$$0 < 1 - 2\mu\beta < 1 - \mu(2\beta - \mu\kappa^2) < 1.$$
This implies that $0 < \sqrt{1 - \mu(2\beta - \mu\kappa^2)} < 1$.
Then,
$$\begin{aligned}\|\Psi_{\mu,\alpha_n,\rho_n}(x) - \Psi_{\mu,\alpha_n,\rho_n}(y)\| &= \|[(1 - \rho_n)x - \alpha_n\mu F(x)] - [(1 - \rho_n)y - \alpha_n\mu F(y)]\| \\ &= \|(1 - \rho_n - \alpha_n)(x - y) + \alpha_n[(x - y) - \mu(F(x) - F(y))]\| \\ &\le (1 - \rho_n - \alpha_n)\|x - y\| + \alpha_n\|(x - y) - \mu(F(x) - F(y))\|. \end{aligned} \tag{9}$$
By the strong monotonicity and the Lipschitz continuity of F, we have
$$\begin{aligned}\|(x - y) - \mu(F(x) - F(y))\|^2 &= \|x - y\|^2 + \mu^2\|F(x) - F(y)\|^2 - 2\mu\langle x - y, F(x) - F(y)\rangle \\ &\le \|x - y\|^2 + \mu^2\kappa^2\|x - y\|^2 - 2\mu\beta\|x - y\|^2 = (1 + \mu^2\kappa^2 - 2\mu\beta)\|x - y\|^2. \end{aligned} \tag{10}$$
From (9) and (10), we have
$$\begin{aligned}\|\Psi_{\mu,\alpha_n,\rho_n}(x) - \Psi_{\mu,\alpha_n,\rho_n}(y)\| &\le (1 - \rho_n - \alpha_n)\|x - y\| + \alpha_n\sqrt{1 + \mu^2\kappa^2 - 2\mu\beta}\,\|x - y\| \\ &= (1 - \rho_n - \alpha_n)\|x - y\| + \alpha_n\sqrt{1 - \mu(2\beta - \mu\kappa^2)}\,\|x - y\| = (1 - \rho_n - \alpha_n\tau)\|x - y\|, \end{aligned}$$
where $\tau = 1 - \sqrt{1 - \mu(2\beta - \mu\kappa^2)} \in (0, 1)$.  □
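Lemma 9 can also be sanity-checked numerically. The sketch below (an assumption-laden illustration, not part of the paper) builds a diagonal mapping F that is $\beta$-strongly monotone and $\kappa$-Lipschitz, picks $\mu$ satisfying (C2), and verifies the contraction bound at random pairs of points:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, kappa = 1.0, 3.0
d = rng.uniform(beta, kappa, size=5)              # diagonal F(x) = d * x is
F = lambda x: d * x                               # beta-strongly monotone, kappa-Lipschitz
mu = 0.5 * min(2 * beta / kappa**2, 1 / (2 * beta))   # satisfies (C2)
tau = 1 - np.sqrt(1 - mu * (2 * beta - mu * kappa**2))
alpha, rho = 0.1, 0.3
Psi = lambda x: (1 - rho) * x - alpha * mu * F(x)     # Psi_{mu,alpha,rho}
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert (np.linalg.norm(Psi(x) - Psi(y))
            <= (1 - rho - alpha * tau) * np.linalg.norm(x - y) + 1e-12)
print("contraction factor 1 - rho - alpha*tau =", round(1 - rho - alpha * tau, 4))
```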
Theorem 1.
The sequence { x n } generated by Algorithm 1 converges strongly to the unique solution of BVIPO-FM.
Proof. 
Let $\bar{x} \in \mathrm{SVIP}(\Omega, F)$.
Now, from the definition of $z_n$, we get
$$\|z_n - \bar{x}\| = \|x_n + \theta_n(x_n - x_{n-1}) - \bar{x}\| \le \|x_n - \bar{x}\| + \theta_n\|x_n - x_{n-1}\|. \tag{11}$$
Note that, for each n, there is a nonexpansive mapping $T_n$ such that $y_n = (1 - \delta_n)z_n + \delta_n T_n(z_n)$, where $\delta_n = \frac{2+\lambda_n L}{4} \in [a_1, b_1] \subset (0, 1)$ for $a_1 = \frac{2+aL}{4}$ and $b_1 = \frac{2+bL}{4}$. Now, using Lemma 4 and the fact that $T_n(\bar{x}) = \bar{x}$, we have
$$\begin{aligned}\|y_n - \bar{x}\|^2 &= \|(1 - \delta_n)z_n + \delta_n T_n(z_n) - \bar{x}\|^2 \\ &= (1 - \delta_n)\|z_n - \bar{x}\|^2 + \delta_n\|T_n(z_n) - \bar{x}\|^2 - \delta_n(1 - \delta_n)\|T_n(z_n) - z_n\|^2 \\ &\le \|z_n - \bar{x}\|^2 - \delta_n(1 - \delta_n)\|T_n(z_n) - z_n\|^2. \end{aligned} \tag{12}$$
Let $\{j_n\}_{n=1}^{\infty}$ be a sequence of natural numbers such that $1 \le j_n \le M$, where $j_n \in \arg\max\{\|t_n^j - y_n\| : j \in \{1, \ldots, M\}\}$. This means that $t_n = (1 - \beta_n)y_n + \beta_n U_{j_n}(y_n)$. Thus, by Lemma 4,
$$\begin{aligned}\|t_n - \bar{x}\|^2 &= (1 - \beta_n)\|y_n - \bar{x}\|^2 + \beta_n\|U_{j_n}(y_n) - \bar{x}\|^2 - \beta_n(1 - \beta_n)\|U_{j_n}(y_n) - y_n\|^2 \\ &\le \|y_n - \bar{x}\|^2 - \beta_n(1 - \beta_n)\|U_{j_n}(y_n) - y_n\|^2. \end{aligned} \tag{13}$$
From (11)–(13) we have
$$\|t_n - \bar{x}\| \le \|y_n - \bar{x}\| \le \|z_n - \bar{x}\|. \tag{14}$$
Using the definition of $x_{n+1}$, (14), and Lemma 9, we get
$$\begin{aligned}\|x_{n+1} - \bar{x}\| &= \|\rho_n z_n + \Psi_{\mu,\alpha_n,\rho_n}(t_n) - \bar{x}\| = \|\Psi_{\mu,\alpha_n,\rho_n}(t_n) - \Psi_{\mu,\alpha_n,\rho_n}(\bar{x}) + \rho_n(z_n - \bar{x}) - \alpha_n\mu F(\bar{x})\| \\ &\le \|\Psi_{\mu,\alpha_n,\rho_n}(t_n) - \Psi_{\mu,\alpha_n,\rho_n}(\bar{x})\| + \rho_n\|z_n - \bar{x}\| + \alpha_n\mu\|F(\bar{x})\| \\ &\le (1 - \rho_n - \alpha_n\tau)\|t_n - \bar{x}\| + \rho_n\|z_n - \bar{x}\| + \alpha_n\mu\|F(\bar{x})\| \\ &\le (1 - \alpha_n\tau)\|z_n - \bar{x}\| + \alpha_n\mu\|F(\bar{x})\| \\ &\le (1 - \alpha_n\tau)\|x_n - \bar{x}\| + (1 - \alpha_n\tau)\theta_n\|x_n - x_{n-1}\| + \alpha_n\mu\|F(\bar{x})\| \\ &= (1 - \alpha_n\tau)\|x_n - \bar{x}\| + \alpha_n\tau\left[\frac{(1 - \alpha_n\tau)}{\tau}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \frac{\mu\|F(\bar{x})\|}{\tau}\right], \end{aligned} \tag{15}$$
where $\tau = 1 - \sqrt{1 - \mu(2\beta - \mu\kappa^2)} \in (0, 1)$. Observe that, by condition (C3) and by Remark 1, we see that
$$\lim_{n\to\infty}\frac{(1 - \alpha_n\tau)}{\tau}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0.$$
Let
$$\bar{M} = 2\max\left\{\frac{\mu\|F(\bar{x})\|}{\tau}, \; \sup_{n\ge 1}\frac{(1 - \alpha_n\tau)}{\tau}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\right\}.$$
Then, (15) becomes
$$\|x_{n+1} - \bar{x}\| \le (1 - \alpha_n\tau)\|x_n - \bar{x}\| + \alpha_n\tau\bar{M}.$$
Thus, by Lemma 5 the sequence { x n } is bounded. As a consequence, { z n } , { y n } , { t n } , and { F ( t n ) } are also bounded.
Now, using the definition of $z_n$ and Lemma 3(i), we obtain
$$\|z_n - \bar{x}\|^2 = \|x_n + \theta_n(x_n - x_{n-1}) - \bar{x}\|^2 = \|x_n - \bar{x}\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\langle x_n - \bar{x}, x_n - x_{n-1}\rangle. \tag{16}$$
Again, by Lemma 3(i), we have
$$\langle x_n - \bar{x}, x_n - x_{n-1}\rangle = \frac{1}{2}\|x_n - \bar{x}\|^2 - \frac{1}{2}\|x_{n-1} - \bar{x}\|^2 + \frac{1}{2}\|x_n - x_{n-1}\|^2. \tag{17}$$
From (16) and (17), and since $0 \le \theta_n < 1$, we get
$$\begin{aligned}\|z_n - \bar{x}\|^2 &= \|x_n - \bar{x}\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + \theta_n(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2 + \|x_n - x_{n-1}\|^2) \\ &\le \|x_n - \bar{x}\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2 + \theta_n(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2). \end{aligned} \tag{18}$$
Using the definition of $x_{n+1}$ together with (14) and Lemma 9, we have
$$\begin{aligned}\|x_{n+1} - \bar{x}\|^2 &= \|\rho_n z_n + \Psi_{\mu,\alpha_n,\rho_n}(t_n) - \bar{x}\|^2 = \|\Psi_{\mu,\alpha_n,\rho_n}(t_n) - \Psi_{\mu,\alpha_n,\rho_n}(\bar{x}) + \rho_n(z_n - \bar{x}) - \alpha_n\mu F(\bar{x})\|^2 \\ &\le \|\Psi_{\mu,\alpha_n,\rho_n}(t_n) - \Psi_{\mu,\alpha_n,\rho_n}(\bar{x}) + \rho_n(z_n - \bar{x})\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le \left(\|\Psi_{\mu,\alpha_n,\rho_n}(t_n) - \Psi_{\mu,\alpha_n,\rho_n}(\bar{x})\| + \rho_n\|z_n - \bar{x}\|\right)^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le \left((1 - \rho_n - \alpha_n\tau)\|t_n - \bar{x}\| + \rho_n\|z_n - \bar{x}\|\right)^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le (1 - \rho_n - \alpha_n\tau)\|t_n - \bar{x}\|^2 + \rho_n\|z_n - \bar{x}\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle. \end{aligned} \tag{19}$$
(The second line uses Lemma 3(ii).) From (12) and (13), we obtain
$$\|t_n - \bar{x}\|^2 \le \|z_n - \bar{x}\|^2 - \delta_n(1 - \delta_n)\|T_n(z_n) - z_n\|^2 - \beta_n(1 - \beta_n)\|U_{j_n}(y_n) - y_n\|^2. \tag{20}$$
In view of (19) and (20), we get
$$\begin{aligned}\|x_{n+1} - \bar{x}\|^2 &\le (1 - \rho_n - \alpha_n\tau)\|t_n - \bar{x}\|^2 + \rho_n\|z_n - \bar{x}\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le (1 - \alpha_n\tau)\|z_n - \bar{x}\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\quad - \delta_n(1 - \rho_n - \alpha_n\tau)(1 - \delta_n)\|T_n(z_n) - z_n\|^2 - \beta_n(1 - \rho_n - \alpha_n\tau)(1 - \beta_n)\|U_{j_n}(y_n) - y_n\|^2. \end{aligned} \tag{21}$$
Since the sequence $\{x_n\}$ is bounded, there exists $M_1 > 0$ such that $-2\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \le M_1$ for all $n \ge 1$. Thus, from (18) and (21), we get
$$\begin{aligned}\|x_{n+1} - \bar{x}\|^2 &\le (1 - \alpha_n\tau)\|x_n - \bar{x}\|^2 + 2(1 - \alpha_n\tau)\theta_n\|x_n - x_{n-1}\|^2 + (1 - \alpha_n\tau)\theta_n(\|x_n - \bar{x}\|^2 - \|x_{n-1} - \bar{x}\|^2) + \alpha_n M_1 \\ &\quad - \delta_n(1 - \rho_n - \alpha_n\tau)(1 - \delta_n)\|T_n(z_n) - z_n\|^2 - \beta_n(1 - \rho_n - \alpha_n\tau)(1 - \beta_n)\|U_{j_n}(y_n) - y_n\|^2. \end{aligned} \tag{22}$$
Let us distinguish the following two cases related to the behavior of the sequence $\{\Gamma_n\}$, where $\Gamma_n = \|x_n - \bar{x}\|^2$.
Case 1. Suppose the sequence $\{\Gamma_n\}$ decreases at infinity. Thus, there exists $n_0 \in \mathbb{N}$ such that $\Gamma_{n+1} \le \Gamma_n$ for $n \ge n_0$. Then, $\{\Gamma_n\}$ converges and $\Gamma_n - \Gamma_{n+1} \to 0$ as $n \to \infty$.
From (22) we have
$$\delta_n(1 - \rho_n - \alpha_n\tau)(1 - \delta_n)\|T_n(z_n) - z_n\|^2 \le (\Gamma_n - \Gamma_{n+1}) + \alpha_n M_1 + (1 - \alpha_n\tau)\theta_n(\Gamma_n - \Gamma_{n-1}) + 2(1 - \alpha_n\tau)\theta_n\|x_n - x_{n-1}\|^2. \tag{23}$$
Since $\Gamma_n - \Gamma_{n+1} \to 0$ (and $\Gamma_{n-1} - \Gamma_n \to 0$), and using condition (C3) and Remark 1 (noting $\alpha_n \to 0$, $0 < \alpha_n < 1$, $\theta_n\|x_n - x_{n-1}\| \to 0$, and $\{x_n\}$ is bounded), from (23) we have
$$\delta_n(1 - \rho_n - \alpha_n\tau)(1 - \delta_n)\|T_n(z_n) - z_n\|^2 \to 0, \; n \to \infty.$$
The conditions (C3) and (C5) (i.e., $0 < \alpha_n < 1$, $\alpha_n \to 0$, $0 \le \rho_n \le 1 - \alpha_n$ and $\lim_{n\to\infty}\rho_n = \rho < 1$), together with the above limit and the fact that $\delta_n = \frac{2+\lambda_n L}{4} \in [a_1, b_1] \subset (0, 1)$, give
$$\|T_n(z_n) - z_n\| \to 0, \; n \to \infty. \tag{24}$$
Similarly, from (23) and the restriction imposed on $\beta_n$ in (C7), together with conditions (C3) and (C5), we have
$$\|U_{j_n}(y_n) - y_n\| \to 0, \; n \to \infty. \tag{25}$$
Thus, using the definition of $y_n$ together with (24) gives
$$\|y_n - z_n\| = \|(1 - \delta_n)z_n + \delta_n T_n(z_n) - z_n\| = \delta_n\|T_n(z_n) - z_n\| \to 0, \; n \to \infty. \tag{26}$$
Moreover, using the definition of $z_n$ and Remark 1, we have
$$\|x_n - z_n\| = \|x_n - x_n - \theta_n(x_n - x_{n-1})\| = \theta_n\|x_n - x_{n-1}\| \to 0, \; n \to \infty. \tag{27}$$
By (26) and (27), we get
$$\|x_n - y_n\| \le \|x_n - z_n\| + \|y_n - z_n\| \to 0, \; n \to \infty. \tag{28}$$
The definition of $t_n$ together with (25) gives
$$\|t_n - y_n\| = \|(1 - \beta_n)y_n + \beta_n U_{j_n}(y_n) - y_n\| = \beta_n\|U_{j_n}(y_n) - y_n\| \to 0, \; n \to \infty. \tag{29}$$
By (28) and (29), we get
$$\|x_n - t_n\| \le \|x_n - y_n\| + \|y_n - t_n\| \to 0, \; n \to \infty. \tag{30}$$
Again, from (26) and (29), we obtain
$$\|z_n - t_n\| \le \|z_n - y_n\| + \|y_n - t_n\| \to 0, \; n \to \infty. \tag{31}$$
By the definition of $x_{n+1}$, the parameter restrictions (C3) and (C5), (31), and the boundedness of $\{F(t_n)\}$, we have
$$\|x_{n+1} - t_n\| = \|\rho_n z_n + \Psi_{\mu,\alpha_n,\rho_n}(t_n) - t_n\| = \|\rho_n z_n + (1 - \rho_n)t_n - \alpha_n\mu F(t_n) - t_n\| \le \rho_n\|z_n - t_n\| + \alpha_n\mu\|F(t_n)\| \to 0, \; n \to \infty. \tag{32}$$
The results (30) and (32) give
$$\|x_{n+1} - x_n\| \le \|x_{n+1} - t_n\| + \|t_n - x_n\| \to 0, \; n \to \infty. \tag{33}$$
By the definition of $t_n^j$ and $t_n$, and using (29), for all $j \in \{1, \ldots, M\}$ we have
$$\|t_n^j - y_n\| \le \|t_n - y_n\| \to 0, \; n \to \infty,$$
and this together with (28) yields
$$\|t_n^j - x_n\| \le \|t_n^j - y_n\| + \|y_n - x_n\| \to 0, \; n \to \infty,$$
for all $j \in \{1, \ldots, M\}$. Thus,
$$\|U_j(y_n) - y_n\| = \frac{1}{\beta_n}\|t_n^j - y_n\| \to 0, \; n \to \infty, \tag{34}$$
for all $j \in \{1, \ldots, M\}$. Therefore, from (28) and (34),
$$\|U_j(x_n) - x_n\| = \|U_j(x_n) - U_j(y_n) + U_j(y_n) - y_n + y_n - x_n\| \le \|U_j(y_n) - y_n\| + 2\|y_n - x_n\| \to 0, \; n \to \infty, \tag{35}$$
for all $j \in \{1, \ldots, M\}$. Moreover, from (24) and (27),
$$\|T_n(x_n) - x_n\| = \|T_n(x_n) - T_n(z_n) + T_n(z_n) - z_n + z_n - x_n\| \le \|T_n(z_n) - z_n\| + 2\|z_n - x_n\| \to 0, \; n \to \infty. \tag{36}$$
From (C6), we have $0 < \hat{\lambda} < \frac{2}{L}$. Thus, let $T := P_C(I - \hat{\lambda}\nabla f)$. Then, using the nonexpansiveness of the projection mapping and condition (C6), together with (28) and the boundedness of $\{\nabla f(z_n)\}$ ($\{z_n\}$ is bounded and $\nabla f$ is Lipschitz continuous), we get
$$\begin{aligned}\|T(z_n) - x_n\| &\le \|T(z_n) - y_n\| + \|y_n - x_n\| = \|P_C(z_n - \hat{\lambda}\nabla f(z_n)) - P_C(z_n - \lambda_n\nabla f(z_n))\| + \|y_n - x_n\| \\ &\le |\hat{\lambda} - \lambda_n|\,\|\nabla f(z_n)\| + \|y_n - x_n\| \to 0, \; n \to \infty. \end{aligned} \tag{37}$$
Hence, in view of (27), (37), and the nonexpansiveness of T, we get
$$\|T(x_n) - x_n\| \le \|T(x_n) - T(z_n)\| + \|T(z_n) - x_n\| \le \|x_n - z_n\| + \|T(z_n) - x_n\| \to 0, \; n \to \infty. \tag{38}$$
Let p be a weak cluster point of $\{x_n\}$; then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup p$ as $k \to \infty$. We observe that $p \in C$, because $\{x_{n_k}\} \subset C$ and C is weakly closed. Assume $p \notin \mathrm{Fix}(U_{j_0})$ for some $j_0 \in \{1, \ldots, M\}$. Since $x_{n_k} \rightharpoonup p$ and $U_{j_0}$ is a nonexpansive mapping, from (35) and Opial's condition, one has
$$\begin{aligned}\liminf_{k\to+\infty}\|x_{n_k} - p\| &< \liminf_{k\to+\infty}\|x_{n_k} - U_{j_0}(p)\| = \liminf_{k\to+\infty}\|x_{n_k} - U_{j_0}(x_{n_k}) + U_{j_0}(x_{n_k}) - U_{j_0}(p)\| \\ &\le \liminf_{k\to+\infty}\left(\|x_{n_k} - U_{j_0}(x_{n_k})\| + \|U_{j_0}(x_{n_k}) - U_{j_0}(p)\|\right) = \liminf_{k\to+\infty}\|U_{j_0}(x_{n_k}) - U_{j_0}(p)\| \\ &\le \liminf_{k\to+\infty}\|x_{n_k} - p\|, \end{aligned}$$
which is a contradiction. It must therefore be the case that $p \in \mathrm{Fix}(U_j)$ for all $j \in \{1, \ldots, M\}$. Similarly, using Opial's condition and (38), we can show that $p \in \mathrm{Fix}(T)$, i.e., $p \in \Gamma$. Therefore, $p \in \Omega = (\bigcap_{j=1}^{M} \mathrm{Fix}\,U_j) \cap \Gamma$.
Next, we show that $\limsup_{n\to\infty}\langle F(\bar{x}), \bar{x} - x_{n+1}\rangle \le 0$. Indeed, since $\bar{x} \in \mathrm{SVIP}(\Omega, F)$ and $p \in \Omega$, we obtain that
$$\limsup_{n\to\infty}\langle F(\bar{x}), \bar{x} - x_n\rangle = \lim_{k\to\infty}\langle F(\bar{x}), \bar{x} - x_{n_k}\rangle = \langle F(\bar{x}), \bar{x} - p\rangle \le 0. \tag{39}$$
Since $\|x_{n+1} - x_n\| \to 0$ from (33), by (39) we have
$$\limsup_{n\to\infty}\langle F(\bar{x}), \bar{x} - x_{n+1}\rangle \le 0. \tag{40}$$
From (11), (14), and (19), we have
$$\begin{aligned}\|x_{n+1} - \bar{x}\|^2 &\le (1 - \rho_n - \alpha_n\tau)\|t_n - \bar{x}\|^2 + \rho_n\|z_n - \bar{x}\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le (1 - \alpha_n\tau)\|z_n - \bar{x}\|^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le (1 - \alpha_n\tau)\left(\|x_n - \bar{x}\| + \theta_n\|x_n - x_{n-1}\|\right)^2 - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &= (1 - \alpha_n\tau)\left(\|x_n - \bar{x}\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\|x_n - \bar{x}\|\right) - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle \\ &\le (1 - \alpha_n\tau)\|x_n - \bar{x}\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\|x_n - \bar{x}\| - 2\alpha_n\mu\langle F(\bar{x}), x_{n+1} - \bar{x}\rangle. \end{aligned}$$
Since $\{x_n\}$ is bounded, there exists $M_2 > 0$ such that $\|x_n - \bar{x}\| \le M_2$ for all $n \ge 1$. Thus, from the last inequality, we have
$$\|x_{n+1} - \bar{x}\|^2 \le (1 - \alpha_n\tau)\|x_n - \bar{x}\|^2 + \theta_n\|x_n - x_{n-1}\|\left(\theta_n\|x_n - x_{n-1}\| + 2M_2\right) + 2\alpha_n\mu\langle F(\bar{x}), \bar{x} - x_{n+1}\rangle. \tag{41}$$
Therefore, from (41), we get
$$\Gamma_{n+1} \le (1 - \omega_n)\Gamma_n + \omega_n\vartheta_n,$$
where $\omega_n = \alpha_n\tau$ and
$$\vartheta_n = \frac{1}{\tau}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\left(\theta_n\|x_n - x_{n-1}\| + 2M_2\right) + \frac{2\mu}{\tau}\langle F(\bar{x}), \bar{x} - x_{n+1}\rangle.$$
From (C3), (40), and Remark 1, we have $\sum_{n=1}^{\infty}\omega_n = \infty$ and $\limsup_{n\to\infty}\vartheta_n \le 0$. Thus, using Lemma 5, we get $\Gamma_n \to 0$ as $n \to \infty$. Hence, $x_n \to \bar{x}$ as $n \to \infty$.
Case 2. Assume that $\{\Gamma_n\}$ does not decrease at infinity. Let $\varphi : \mathbb{N} \to \mathbb{N}$ be the mapping defined for all $n \ge n_0$ (for some $n_0$ large enough) by
$$\varphi(n) = \max\{k \in \mathbb{N} : k \le n, \; \Gamma_k \le \Gamma_{k+1}\}.$$
By Lemma 6, $\{\varphi(n)\}_{n\ge n_0}$ is a nondecreasing sequence, $\varphi(n) \to \infty$ as $n \to \infty$, and
$$\Gamma_{\varphi(n)} \le \Gamma_{\varphi(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\varphi(n)+1}, \quad \forall n \ge n_0. \tag{42}$$
In view of $\|x_{\varphi(n)} - \bar{x}\|^2 - \|x_{\varphi(n)+1} - \bar{x}\|^2 = \Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1} \le 0$ for all $n \ge n_0$ and (22), for all $n \ge n_0$ we have
$$\begin{aligned}\delta_{\varphi(n)}&(1 - \rho_{\varphi(n)} - \alpha_{\varphi(n)}\tau)(1 - \delta_{\varphi(n)})\|T_{\varphi(n)}(z_{\varphi(n)}) - z_{\varphi(n)}\|^2 \\ &\le (\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1}) + \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}(\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)-1}) + 2(1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2 \\ &\le \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}(\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)-1}) + 2(1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2 \\ &\le \alpha_{\varphi(n)}M_1 + (1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|\left(\|x_{\varphi(n)} - \bar{x}\| + \|x_{\varphi(n)-1} - \bar{x}\|\right) + 2(1 - \alpha_{\varphi(n)}\tau)\theta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|^2. \end{aligned} \tag{43}$$
Thus, from (43), conditions (C3) and (C4), and Remark 1, we have
$$\|T_{\varphi(n)}(z_{\varphi(n)}) - z_{\varphi(n)}\| \to 0, \; n \to \infty.$$
Similarly,
$$\|U_{j_{\varphi(n)}}(y_{\varphi(n)}) - y_{\varphi(n)}\| \to 0, \; n \to \infty.$$
Using a similar procedure as in Case 1, we have $\lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0$ and, for $T := P_C(I - \hat{\lambda}\nabla f)$, we have
$$\lim_{n\to\infty}\|T(x_{\varphi(n)}) - x_{\varphi(n)}\| = \lim_{n\to\infty}\|U_j(x_{\varphi(n)}) - x_{\varphi(n)}\| = 0$$
for all $j \in \{1, \ldots, M\}$. Since $\{x_{\varphi(n)}\}$ is bounded, there exists a subsequence of $\{x_{\varphi(n)}\}$, still denoted by $\{x_{\varphi(n)}\}$, which converges weakly to p. By a similar argument as in Case 1, we conclude immediately that $p \in \Omega$. In addition, by a similar argument as in Case 1, we have $\limsup_{n\to\infty}\langle F(\bar{x}), \bar{x} - x_{\varphi(n)}\rangle \le 0$. Since $\lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0$, we get $\limsup_{n\to\infty}\langle F(\bar{x}), \bar{x} - x_{\varphi(n)+1}\rangle \le 0$.
From (41), we have
$$\Gamma_{\varphi(n)+1} \le (1 - \omega_{\varphi(n)})\Gamma_{\varphi(n)} + \omega_{\varphi(n)}\vartheta_{\varphi(n)},$$
where $\omega_{\varphi(n)} = \alpha_{\varphi(n)}\tau$ and
$$\vartheta_{\varphi(n)} = \frac{1}{\tau}\frac{\theta_{\varphi(n)}}{\alpha_{\varphi(n)}}\|x_{\varphi(n)} - x_{\varphi(n)-1}\|\left(\theta_{\varphi(n)}\|x_{\varphi(n)} - x_{\varphi(n)-1}\| + 2M_2\right) + \frac{2\mu}{\tau}\langle F(\bar{x}), \bar{x} - x_{\varphi(n)+1}\rangle.$$
Using $\Gamma_{\varphi(n)} - \Gamma_{\varphi(n)+1} \le 0$ for all $n \ge n_0$, the last inequality gives
$$0 \le -\omega_{\varphi(n)}\Gamma_{\varphi(n)} + \omega_{\varphi(n)}\vartheta_{\varphi(n)}.$$
Since $\omega_{\varphi(n)} > 0$, we obtain $\|x_{\varphi(n)} - \bar{x}\|^2 = \Gamma_{\varphi(n)} \le \vartheta_{\varphi(n)}$. Moreover, since $\limsup_{n\to\infty}\vartheta_{\varphi(n)} \le 0$, we have $\lim_{n\to\infty}\|x_{\varphi(n)} - \bar{x}\| = 0$. Thus, $\lim_{n\to\infty}\|x_{\varphi(n)} - \bar{x}\| = 0$ together with $\lim_{n\to\infty}\|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0$ gives $\lim_{n\to\infty}\Gamma_{\varphi(n)+1} = 0$. Therefore, from (42), we obtain $\lim_{n\to\infty}\Gamma_n = 0$, that is, $x_n \to \bar{x}$ as $n \to \infty$.
This completes the proof. □

4. Applications

The mapping $F : H \to H$ given by $F(x) = x - p$ for a fixed point $p \in H$ is one simple example of a β-strongly monotone and κ-Lipschitz continuous mapping, with $\beta = 1$ and $\kappa = 1$. If $F(x) = x - p$ for a fixed point $p \in H$, then BVIPO-FM becomes the problem of finding the projection of p onto $(\bigcap_{j=1}^{M} \mathrm{Fix}\,U_j) \cap \Gamma$. When $p = 0$, this projection is the minimum-norm solution in $(\bigcap_{j=1}^{M} \mathrm{Fix}\,U_j) \cap \Gamma$.
Let BVIPO-M denote the bilevel variational inequality problem over the intersection of the sets of solution points of a finite number of constrained minimization problems, stated as follows: for a closed convex subset C of a real Hilbert space H, a nonlinear mapping $F : H \to H$, and real-valued convex functions $f_j : C \to \mathbb{R}$ for $j \in \{0, 1, \ldots, M\}$, BVIPO-M is the problem
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega,$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f_j(x) = \min_{x\in C} f_j(x), \; \forall j \in \{0, 1, \ldots, M\}.$$
If the gradient $\nabla f_j$ of $f_j$ is $L_j$-Lipschitz continuous on C, then, for $0 < \varsigma < \frac{2}{L_j}$, the mapping $P_C(I - \varsigma\nabla f_j)$ is nonexpansive and $\{x \in C : f_j(x) = \min_{x\in C} f_j(x)\} = \mathrm{Fix}(P_C(I - \varsigma\nabla f_j))$. This leads to the following corollary as an immediate consequence of our main theorem for the approximation of a solution of BVIPO-M, assuming that $\mathrm{SVIP}(\Omega, F)$ is nonempty.
Corollary 1.
If F satisfies condition (A1), $f = f_0$ satisfies condition (A2), and the gradient $\nabla f_j$ of each $f_j$ is $L_j$-Lipschitz continuous on C for all $j \in \{1, \ldots, M\}$, then, for $0 < \varsigma < \frac{2}{\max\{L_1, \ldots, L_M\}}$, replacing each $U_j$ by $P_C(I - \varsigma\nabla f_j)$ for all $j \in \{1, \ldots, M\}$ in Algorithm 1 (in Step 4), the sequence $\{x_n\}$ generated by the algorithm converges strongly to the unique solution of BVIPO-M.
Let C be a closed convex subset of a real Hilbert space H, let $F : H \to H$ be a mapping, $f : C \to \mathbb{R}$ a real-valued convex function, and $g_j : C \times C \to \mathbb{R}$ a bifunction for each $j \in \{1, \ldots, M\}$. BVIPO-EM denotes the bilevel variational inequality problem over the intersection of the set of common solution points of a finite number of equilibrium problems and the set of solution points of the constrained minimization problem, given by
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega,$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f(x) = \min_{x\in C} f(x) \text{ and } x \in \bigcap_{j=1}^{M} \mathrm{SEP}(g_j, C).$$
If each $g_j$ satisfies Condition CO on C for all $j \in \{1, \ldots, M\}$, then, by Lemma 7 (2) and (3), for each $j \in \{1, \ldots, M\}$, $T_r^{g_j}$ is nonexpansive and $\mathrm{Fix}\,T_r^{g_j} = \mathrm{SEP}(g_j, C)$. Applying Theorem 1, we obtain the following result for the approximation of a solution of BVIPO-EM, assuming that $\mathrm{SVIP}(\Omega, F)$ is nonempty.
Corollary 2.
If F satisfies condition (A1), f satisfies condition (A2), and each $g_j$ satisfies Condition CO on C for all $j \in \{1, \ldots, M\}$, then, for $r > 0$, replacing each $U_j$ by $T_r^{g_j}$ for all $j \in \{1, \ldots, M\}$ in Algorithm 1 (in Step 4), the sequence $\{x_n\}$ generated by the algorithm converges strongly to the unique solution of BVIPO-EM.
Let C be a closed convex subset of a real Hilbert space H, let $F : H \to H$ be a mapping, $f : C \to \mathbb{R}$ a real-valued convex function, and $F_j : C \to H$ a mapping for each $j \in \{1, \ldots, M\}$. Now, suppose that BVIPO-VM denotes the bilevel variational inequality problem over the intersection of the set of common solution points of a finite number of variational inequality problems and the set of solution points of the constrained minimization problem, given by
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega,$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f(x) = \min_{x\in C} f(x) \text{ and } x \in \bigcap_{j=1}^{M} \mathrm{SVIP}(F_j, C).$$
Note that if each $F_j$ is $\eta_j$-inverse strongly monotone on C for all $j \in \{1, \ldots, M\}$ and $0 < \varrho \le 2\eta_j$, then,
(a) $P_C(I - \varrho F_j)$ is nonexpansive;
(b) x is a fixed point of $P_C(I - \varrho F_j)$ if and only if x is a solution of the variational inequality problem $\mathrm{VIP}(F_j, C)$, i.e., $\mathrm{Fix}(P_C(I - \varrho F_j)) = \mathrm{SVIP}(F_j, C)$.
By Theorem 1, we have the following corollary for the approximation of a solution of BVIPO-VM, assuming that $\mathrm{SVIP}(\Omega, F)$ is nonempty.
Corollary 3.
If F satisfies condition (A1), f satisfies condition (A2), and each $F_j$ is $\eta_j$-inverse strongly monotone on C for all $j \in \{1, \ldots, M\}$, then, for $0 < \varrho \le 2\min\{\eta_1, \ldots, \eta_M\}$, replacing each $U_j$ by $P_C(I - \varrho F_j)$ for all $j \in \{1, \ldots, M\}$ in Algorithm 1 (in Step 4), the sequence $\{x_n\}$ generated by the algorithm converges strongly to the unique solution of BVIPO-VM.

5. Numerical Results

Example 1.
Consider the bilevel variational inequality problem
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega,$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f_j(x) = \min_{x\in C} f_j(x), \; \forall j \in \{0, 1, \ldots, M\},$$
for $H = \mathbb{R}^N$, $C = \{x = (x_1, \ldots, x_N) \in \mathbb{R}^N : -2 \le x_i \le 2, \; i \in \{1, \ldots, N\}\}$, and F and $f_j$ given by
$$F(x) = F(x_1, \ldots, x_N) = (\gamma_1 x_1, \ldots, \gamma_N x_N),$$
$$f_j(x) = \frac{1}{2}\|(I - P_{D_j})A_j x\|^2, \; j \in \{0, 1, \ldots, M\},$$
where $\gamma_i > 0$ for all $i \in \{1, \ldots, N\}$,
$$D_j = \left\{x = (x_1, \ldots, x_N) \in \mathbb{R}^N : \frac{-1}{j+1} \le x_i \le \frac{1}{j+2}, \; i \in \{1, \ldots, N\}\right\},$$
and $A_j : \mathbb{R}^N \to \mathbb{R}^N$ is given by $A_j = \sigma_j I_{N\times N}$ for $\sigma_j > 0$ ($I_{N\times N}$ is the $N \times N$ identity matrix) for $j \in \{0, 1, \ldots, M\}$. Note the following:
(i) F is β-strongly monotone and κ-Lipschitz continuous on $H = \mathbb{R}^N$, where $\beta = \min\{\gamma_i : i = 1, \ldots, N\}$ and $\kappa = \max\{\gamma_i : i = 1, \ldots, N\}$.
(ii) $A_j$ is a bounded linear operator with $\|A_j\| = \sigma_j$, and $A_j$ is a self-adjoint operator.
(iii) The gradient $\nabla f_j$ of each $f_j$ is $L_j$-Lipschitz continuous on C for all $j \in \{0, 1, \ldots, M\}$, where $L_j = \sigma_j^2$ and $\nabla f_j$ is given by (see [38])
$$\nabla f_j(x) = A_j(I - P_{D_j})A_j x = \sigma_j^2 x - \sigma_j P_{D_j}(\sigma_j x).$$
(iv) For each $j \in \{0, 1, \ldots, M\}$,
$$\{x \in C : f_j(x) = \min_{x\in C} f_j(x)\} = \Gamma_j,$$
where $\Gamma_j = \left\{x \in \mathbb{R}^N : \frac{-1}{\sigma_j(j+1)} \le x_i \le \frac{1}{\sigma_j(j+2)}, \; i = 1, \ldots, N\right\}$. Hence,
$$\Omega = \bigcap_{j=0}^{M}\Gamma_j = \left\{x \in \mathbb{R}^N : LB \le x_i \le UB, \; i \in \{1, \ldots, N\}\right\},$$
where $LB = \max\left\{\frac{-1}{\sigma_j(j+1)} : j \in \{0, 1, \ldots, M\}\right\}$ and $UB = \min\left\{\frac{1}{\sigma_j(j+2)} : j \in \{0, 1, \ldots, M\}\right\}$.
(v) 0 is the solution of the given bilevel variational inequality problem, i.e., $\mathrm{SVIP}(\Omega, F) = \{0\}$.
We set $\sigma_j = 2^j$ for each $j \in \{0, 1, \ldots, M\}$ and $M = 4$. Therefore,
$$\Omega = \left\{x \in \mathbb{R}^N : \frac{-1}{80} \le x_i \le \frac{1}{96}, \; i \in \{1, \ldots, N\}\right\},$$
and the gradient of $f = f_0$ is L-Lipschitz continuous on C, where $L = L_0 = \sigma_0^2 = 1$. We will test our experiment for different dimensions N and different parameters.
Take $\theta = \frac{1}{2}$ and $\gamma_i = i$ for each $i \in \{1, \ldots, N\}$. Thus, F is 1-strongly monotone and N-Lipschitz continuous on $\mathbb{R}^N$. Hence, notice that the positive real constants μ, ς, and $\lambda_n$ are chosen such that $0 < \varsigma < \frac{2}{\max\{L_1, L_2, L_3, L_4\}} = \frac{1}{128}$, $0 < \mu < \min\left\{\frac{2}{N^2}, \frac{1}{2}\right\}$, and $0 < a \le \lambda_n \le b < \frac{2}{N}$. We describe the numerical results of Algorithm 1 (applying Corollary 1) for the positive real constants μ and ς given by $\varsigma = \frac{1}{200}$ and
$$\mu = \begin{cases}\frac{1}{3}, & \text{if } N = 1, 2, \\ \frac{2}{N^2+1}, & \text{if } N = 3, 4, 5, \ldots \end{cases}$$
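Under these choices, Example 1 can be assembled directly on top of the `inertial_bvip` sketch from Section 3; everything below (the names, the particular μ, and the starting points) is an illustrative assumption:

```python
import numpy as np

N, M = 10, 4
sigma = [2.0**j for j in range(M + 1)]       # sigma_j = 2^j, so L_j = sigma_j^2
gamma = np.arange(1, N + 1, dtype=float)     # gamma_i = i: beta = 1, kappa = N

F = lambda x: gamma * x
proj_C = lambda x: np.clip(x, -2.0, 2.0)     # C = [-2, 2]^N

def grad_f(j, x):
    # grad f_j(x) = sigma_j^2 x - sigma_j P_{D_j}(sigma_j x), where D_j is the
    # box -1/(j+1) <= x_i <= 1/(j+2).
    p = np.clip(sigma[j] * x, -1.0 / (j + 1), 1.0 / (j + 2))
    return sigma[j] ** 2 * x - sigma[j] * p

# Corollary 1: lower-level fixed-point mappings U_j = P_C(I - varsigma grad f_j),
# with 0 < varsigma < 2/max{L_1,...,L_4} = 1/128; the text picks varsigma = 1/200.
varsigma = 1.0 / 200
U_list = [lambda x, j=j: proj_C(x - varsigma * grad_f(j, x)) for j in range(1, M + 1)]

mu = 2.0 / (N**2 + 1)                        # any mu in (0, min{2/N^2, 1/2}) is admissible
x = inertial_bvip(F, lambda v: grad_f(0, v), proj_C, U_list,
                  x0=10.0 * np.ones(N), x1=100.0 * np.ones(N), mu=mu, L=1.0)
print(np.linalg.norm(x))                     # should shrink toward 0, the unique solution
```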
In Figures 1 and 2 and Table 1, the real sequences $\{\alpha_n\}$, $\{\varepsilon_n\}$, $\{\rho_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, $\{\theta_n\}$ are chosen as follows:
Data 1. $\alpha_n = \frac{1}{2n+4}$, $\varepsilon_n = \frac{1}{(n+2)^2}$, $\rho_n = \frac{n+3}{2n+4}$, $\lambda_n = \frac{2}{N+1}$, $\beta_n = \frac{n+3}{2n+2}$, $\theta_n = \bar{\theta}_n$.
Data 2. $\alpha_n = \frac{1}{3n^{0.5}+1}$, $\varepsilon_n = \frac{1}{3n^{1.5}+n}$, $\rho_n = \frac{2n^{0.5}-1}{3n^{0.5}+1}$, $\lambda_n = \frac{1}{N}$, $\beta_n = \frac{1}{2}$, $\theta_n = \bar{\theta}_n$.
Data 3. $\alpha_n = \frac{1}{5n}$, $\varepsilon_n = \frac{1}{n^3}$, $\rho_n = \frac{4n-1}{5n+1}$, $\lambda_n = \frac{1}{N+1}$, $\beta_n = \frac{10n+91}{11n+110}$, $\theta_n = \bar{\theta}_n$.
The stopping criterion in Table 1 is $\|x_n - x_{n-1}\| \le 10^{-3}$.
Figure 3 demonstrates the behavior of Algorithm 1 for different parameters $\rho_n$ (Case 1: $\rho_n = \frac{1}{5n+3}$; Case 2: $\rho_n = \frac{2n+2}{5n+3}$; Case 3: $\rho_n = \frac{3n+3}{5n+3}$; Case 4: $\rho_n = \frac{4n+2}{5n+3}$), where $\alpha_n = \frac{1}{5n+3}$, $\varepsilon_n = \frac{1}{(5n+3)^3}$, $\lambda_n = \frac{1}{N+1}$, $\beta_n = \frac{n+10}{10n+90}$, $\theta_n = \bar{\theta}_n$.
From Figures 1–3 and Table 1, it is clear that the behavior of our algorithm depends on the dimension, the starting points, and the parameters. From Figure 3, we can see that the sequence generated by the algorithm converges to the solution of the problem faster for choices of $\rho_n$ whose limit ρ ($\lim_{n\to\infty}\rho_n = \rho$) is very close to 0.
Example 2.
Consider the BVIPO-FM given by
$$\text{find } \bar{x} \in \Omega \text{ such that } \langle F(\bar{x}), x - \bar{x}\rangle \ge 0, \; \forall x \in \Omega,$$
where $\Omega$ is the solution set of
$$\text{find } x \in C \text{ such that } f(x) = \min_{x\in C} f(x) \text{ and } x \in \mathrm{Fix}\,U,$$
for $H = \mathbb{R}^N = C$, where F, f, and U are given by
$$F(x) = F(x_1, \ldots, x_N) = (a_1x_1 + b_1, \ldots, a_Nx_N + b_N),$$
$$f(x) = \frac{1}{2}\|(I - P_D)2x\|^2,$$
$$Ux = x,$$
where $a_i > 0$, $b_i > 0$ for all $i \in \{1, \ldots, N\}$ and
$$D = \left\{x \in \mathbb{R}^N : \frac{-2\max\{b_i : i = 1, \ldots, N\}}{\min\{a_i : i = 1, \ldots, N\}} \le x_i \le 0, \; i \in \{1, \ldots, N\}\right\}.$$
We took $a_i = i$ and $b_i = N + 1 - i$. Thus, F is β-strongly monotone and κ-Lipschitz continuous on $H = \mathbb{R}^N$, where $\beta = 1$ and $\kappa = N$. The gradient $\nabla f$ of f is L-Lipschitz continuous on C, where $L = 1$ and $\nabla f$ is given by $\nabla f(x) = 4x - 2P_D(2x)$. Moreover, $\Omega = \{x \in \mathbb{R}^N : -N \le x_i \le 0, \; i = 1, \ldots, N\}$ and
$$\mathrm{SVIP}(\Omega, F) = \left\{\left(-N, \frac{-(N-1)}{2}, \frac{-(N-2)}{3}, \ldots, \frac{-1}{N}\right)\right\}.$$
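This closed form can be read off coordinatewise: since both F and the box Ω are separable, the variational inequality decouples, and each coordinate of the solution is the projection of the unconstrained root $-b_i/a_i$ onto $[-N, 0]$. A short check of this reasoning (NumPy assumed):

```python
import numpy as np

N = 4
a = np.arange(1, N + 1, dtype=float)   # a_i = i
b = N + 1 - a                          # b_i = N + 1 - i
# F(x)_i = a_i x_i + b_i and Omega = [-N, 0]^N are both separable, so the VIP
# decouples: each coordinate solves (a_i x_i + b_i)(v - x_i) >= 0 for all
# v in [-N, 0], i.e. x_i is the projection of the root -b_i/a_i onto [-N, 0].
x_bar = np.clip(-b / a, -N, 0.0)
print(x_bar)                           # [-4.  -1.5  -0.66666667  -0.25]
```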
Table 2 illustrates the numerical results of our algorithm solving the BVIPO-FM given in this example for different dimensions and different stopping criteria $\frac{\|x_n - x_{n-1}\|}{\|x_1 - x_0\|} \le \mathrm{TOL}$, where the parameters are given as follows: $\alpha_n = \frac{1}{5n-1}$, $\varepsilon_n = \frac{1}{(5n-1)^2}$, $\rho_n = \frac{1}{5}$, $\lambda_n = \frac{1}{N}$, $\beta_n = \frac{1}{2}$, $\theta_n = \bar{\theta}_n$.
For $\mathrm{TOL} = 10^{-5}$, $N = 4$, $x_0 = (1, 2, 3, 4)$, and $x_1 = (5, 6, 7, 8)$, the approximate solution obtained after 319 iterations is
$$x_{319} = (-3.978599508, -1.487950389, -0.641608433, -0.24194702778).$$

6. Conclusions

We have proposed a strongly convergent inertial algorithm for a class of bilevel variational inequality problems over the intersection of the set of common fixed points of a finite number of nonexpansive mappings and the set of solution points of the constrained minimization problem of a real-valued convex function (BVIPO-FM). The contribution of our result is twofold. First, it provides an effective way of solving BVIPO-FM, where the iterative scheme incorporates an inertial term to speed up the convergence of the algorithm. Second, our result can be applied to find a solution of the bilevel variational inequality problem over the solution set of a problem P, where the lower-level problem P can be recast as a common fixed point problem for a finite number of nonexpansive mappings.

Author Contributions

All authors contributed equally to this research paper, particularly regarding the conceptualization, methodology, validation, formal analysis, resources, and the writing and preparation of the original draft of the manuscript; the second author played a fundamental role in supervision and funding acquisition, and the third author wrote the code and ran the algorithm in MATLAB.

Funding

This research was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (KMUTT) and the Theoretical and Computational Science (TaCS) Center. Moreover, Poom Kumam was supported by the Thailand Research Fund and King Mongkut’s University of Technology Thonburi under TRF Research Scholar Grant No. RSA6080047, and by the Rajamangala University of Technology Thanyaburi (RMUTT) (Grant No. NSF62D0604).

Acknowledgments

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Seifu Endris Yimer is supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (Grant No. 9/2561).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deb, K.; Sinha, A. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm. Evol. Comput. 2010, 18, 403–449.
  2. Sabach, S.; Shtern, S. A first order method for solving convex bilevel optimization problems. SIAM J. Optim. 2017, 27, 640–660.
  3. Shehu, Y.; Vuong, P.T.; Zemkoho, A. An inertial extrapolation method for convex simple bilevel optimization. Optim. Methods Softw. 2019, 1–19.
  4. Anh, P.K.; Anh, T.V.; Muu, L.D. On bilevel split pseudomonotone variational inequality problems with applications. Acta Math. Vietnam. 2017, 42, 413–429.
  5. Anh, P.N.; Kim, J.K.; Muu, L.D. An extragradient algorithm for solving bilevel pseudomonotone variational inequalities. J. Glob. Optim. 2012, 52, 627–639.
  6. Anh, T.T.; Long, L.B.; Anh, T.V. A projection method for bilevel variational inequalities. J. Inequal. Appl. 2014, 1, 205.
  7. Anh, T.V. A strongly convergent subgradient extragradient-Halpern method for solving a class of bilevel pseudomonotone variational inequalities. Vietnam J. Math. 2017, 45, 317–332.
  8. Anh, T.V. Linesearch methods for bilevel split pseudomonotone variational inequality problems. Numer. Algorithms 2019, 81, 1067–1087.
  9. Anh, T.V.; Muu, L.D. A projection-fixed point method for a class of bilevel variational inequalities with split fixed point constraints. Optimization 2016, 65, 1229–1243.
  10. Chen, J.; Liou, Y.C.; Wen, C.F. Bilevel vector pseudomonotone equilibrium problems: Duality and existence. J. Nonlinear Convex Anal. 2015, 16, 1293–1303.
  11. Van Dinh, B.; Muu, L.D. On penalty and gap function methods for bilevel equilibrium problems. J. Appl. Math. 2011, 2011, 646452.
  12. Yuying, T.; Van Dinh, B.; Plubtieng, S. Extragradient subgradient methods for solving bilevel equilibrium problems. J. Inequal. Appl. 2018, 2018, 327.
  13. Bard, J.F. Practical Bilevel Optimization: Algorithms and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 30.
  14. Dempe, S. Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization 2003, 52, 333–359.
  15. Apostol, R.Y.; Grynenko, A.A.; Semenov, V.V. Iterative algorithms for monotone bilevel variational inequalities. J. Comput. Appl. Math. 2012, 107, 3–14.
  16. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  17. Khanh, P.D. Convergence rate of a modified extragradient method for pseudomonotone variational inequalities. Vietnam J. Math. 2017, 45, 397–408.
  18. Kalashnikov, V.V.; Kalashinikova, N.I. Solving two-level variational inequality. J. Glob. Optim. 1996, 45, 289–294.
  19. Calamai, P.H.; Moré, J.J. Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39, 93–116.
  20. Su, M.; Xu, H.K. Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1, 35–43.
  21. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  22. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  23. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  24. Ochs, P.; Brox, T.; Pock, T. iPiasco: Inertial proximal algorithm for strongly convex optimization. J. Math. Imaging Vis. 2015, 53, 171–181.
  25. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  26. Bot, R.I.; Csetnek, E.R. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540.
  27. Bot, R.I.; Csetnek, E.R. A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 2015, 36, 951–963.
  28. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  29. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103.
  30. Combettes, P.L. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53, 475–504.
  31. Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411.
  32. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  33. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  34. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  35. Fan, K. A Minimax Inequality and Applications. In Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972.
  36. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  37. Baillon, J.B.; Haddad, G. Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26, 137–150.
  38. Tang, J.; Chang, S.S.; Yuan, F. A strong convergence theorem for equilibrium problems and split feasibility problems in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 36.
Figure 1. For $N = 100$ and randomly generated starting points $x_0$ and $x_1$ (the same starting points for Data 1 and Data 2).
Figure 2. For Data 3 and for randomly generated starting points $x_0$ and $x_1$.
Figure 3. For $N = 50$ and starting points $x_0 = 100(1, \ldots, 1) \in \mathbb{R}^N$ and $x_1 = 50(1, \ldots, 1) \in \mathbb{R}^N$.
Table 1. For starting points $x_0 = 10(1, \ldots, 1) \in \mathbb{R}^N$ and $x_1 = 10x_0$.

                 N = 10                        N = 1200
         Iter(n)  CPU(s)  $\|x_n\|$    Iter(n)  CPU(s)  $\|x_n\|$
Data 1     10     0.0165   0.4231         9     0.0182   0.5739
Data 2      9     0.0186   0.4461         8     0.0193   0.3755
Data 3     10     0.0178   0.1356         8     0.0191   0.4524
Table 2. For starting points $x_0 = 100(1, \ldots, 1) \in \mathbb{R}^N$ and $x_1 = 100x_0$.

               TOL = $10^{-2}$     TOL = $10^{-3}$     TOL = $10^{-4}$
              Iter(n)  CPU(s)     Iter(n)  CPU(s)     Iter(n)  CPU(s)
N = 2            4     0.0034       523    0.3163       107    0.9705
N = 10          10     0.0221       736    0.4802       116    1.2201
N = 100         21     0.2068       147    0.9207       149    1.8491

