1. Introduction
A bilevel problem is a mathematical program in which one problem contains another problem as a constraint. Mathematically, a bilevel problem is formulated as follows:
where S is the solution set of the problem
Usually, (1) is called the upper-level problem and (2) is called the lower-level problem. Many real-life problems can be modeled as bilevel problems, and a number of studies have addressed the solution of different kinds of bilevel problems using approximation theory; see, for example, [1,2,3] for the bilevel optimization problem, [4,5,6,7,8,9] for the bilevel variational inequality problem, [10,11,12] for bilevel equilibrium problems, and [13,14] for practical applications. In [14], applications of the bilevel (optimization) problem in transportation (network design, optimal pricing), economics (Stackelberg games, the principal-agent problem, taxation, policy decisions), management (network facility location, coordination of multi-divisional firms), engineering (optimal design, optimal chemical equilibria), and other areas are demonstrated. Owing to this wide range of applications, research on approximation algorithms for bilevel problems has grown over the years, yet it is still at a nascent stage.
A simple practical example of a bilevel model arises in supply chain management, between a supplier and a store owner in a business chain. The supplier always delivers his/her best output of some commodities to the store owner, who in turn does his/her best in the business. In some sense, both wish to minimize their losses, or rather maximize their profits, and thus act in an optimistic pattern. In this example, the store owner is the upper-level decision maker and the supplier is the lower-level decision maker. Thus, in the study of supply chain management, the bilevel problem can indeed play a fundamental role.
In this paper, our main aim is to solve the bilevel variational inequality problem, denoted by BVIPO-FM, posed over the intersection of the set of common fixed points of a finite number of nonexpansive mappings and the set of solution points of the constrained minimization problem of a real-valued convex function. To be precise, let C be a closed convex subset of a real Hilbert space H, let F be a mapping, let f be a real-valued convex function, and let each member of a given finite family of mappings be nonexpansive. Then, BVIPO-FM is given by
where the constraint set is the solution set of
The fixed-point notation used there represents the set of fixed points of a mapping, i.e., the set of points that the mapping leaves unchanged. Thus, the constraint set is the intersection of the common fixed-point set of the nonexpansive mappings with Γ,
where Γ is the solution set of the constrained convex minimization problem given by
The problem (3) is a classical variational inequality problem, which has been studied by many authors; see, for example, [7,15,16,17] and the references therein. The solution set of this variational inequality problem is denoted accordingly. Therefore, BVIPO-FM is obtained by solving the upper-level variational inequality problem over the intersection set described above. A bilevel problem whose upper-level problem is a variational inequality problem was introduced in [18]. These problems have received significant attention from the mathematical programming community. Bilevel variational inequality problems can be used to study various bilevel models in optimization, economics, operations research, and transportation.
It is known that the gradient-projection algorithm, given by the recursion (6) in which the parameters are real positive numbers, is one of the most powerful methods for solving the minimization problem (5) (see [19,20,21]). In general, if the gradient of f is Lipschitz continuous and strongly monotone, then the sequence generated by the recursive formula (6) converges strongly to a minimizer of (5), provided the parameters satisfy suitable conditions. However, if the gradient of f is only inverse strongly monotone, then the sequence generated by (6) converges only weakly.
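For illustration, the following minimal sketch implements the gradient-projection iteration for a hypothetical quadratic objective over a box constraint; the objective, the constraint set, and the constant step size are assumptions made only for this example.

```python
import numpy as np

# Minimal sketch of the gradient-projection iteration
#   x_{n+1} = P_C(x_n - lam * grad_f(x_n)),
# shown for an assumed problem: f(x) = 0.5*||A x - b||^2 minimized over
# the box C = [0, 1]^d.  All names are illustrative, not the paper's data.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((20, d))
b = rng.standard_normal(20)

grad_f = lambda x: A.T @ (A @ x - b)          # gradient of the assumed f
proj_C = lambda x: np.clip(x, 0.0, 1.0)       # metric projection onto the box

L = np.linalg.norm(A.T @ A, 2)                # Lipschitz constant of grad_f
lam = 1.0 / L                                 # step size in (0, 2/L)

x = np.zeros(d)
for n in range(500):
    x = proj_C(x - lam * grad_f(x))           # gradient-projection step

print("approximate constrained minimizer:", x)
```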
In approximation theory, constructing iterative schemes with a fast rate of convergence is usually of great interest. For this purpose, Polyak [22] proposed an inertial accelerated extrapolation process to solve the smooth convex minimization problem. Since then, there has been growing interest among authors working in this direction, and many researchers have constructed fast iterative algorithms using inertial extrapolation, including inertial forward–backward splitting methods [23,24], an inertial Douglas–Rachford splitting method [25], an inertial forward–backward–forward method [26], an inertial proximal-extragradient method [27], and others.
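To make the inertial idea concrete, the following sketch (under the same illustrative assumptions as above) adds a heavy-ball-type extrapolation step before each gradient-projection update; the extrapolation factor is an assumed constant.

```python
import numpy as np

# Sketch of an inertial (heavy-ball type) extrapolation step: before each
# update the current iterate is pushed along the previous displacement,
#   w_n = x_n + theta * (x_n - x_{n-1}),
# and the gradient-projection step is applied at w_n.  The problem data
# and the constant factor theta are illustrative assumptions only.

rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((20, d))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
proj_C = lambda x: np.clip(x, 0.0, 1.0)
lam = 1.0 / np.linalg.norm(A.T @ A, 2)
theta = 0.3                                   # inertial factor in [0, 1)

x_prev = np.zeros(d)
x = np.zeros(d)
for n in range(500):
    w = x + theta * (x - x_prev)              # inertial extrapolation
    x_prev, x = x, proj_C(w - lam * grad_f(w))

print("iterate after inertial gradient-projection:", x)
```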
In this paper, we introduce an algorithm with an inertial effect for solving BVIPO-FM, using the projection method for the variational inequality problem, the well-known Mann iterative scheme [28] for the nonexpansive mappings, and the gradient-projection method for the function f. It is proved that the sequence generated by our proposed algorithm converges strongly to the solution of BVIPO-FM.
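Since the Mann scheme is one of the building blocks of the proposed method, the following sketch shows a plain Mann iteration for an assumed nonexpansive mapping, here taken to be the metric projection onto the unit ball; the averaging parameter is an illustrative choice.

```python
import numpy as np

# Sketch of the Mann iterative scheme for a nonexpansive mapping T,
#   x_{n+1} = (1 - beta) * x_n + beta * T(x_n),
# shown for an assumed nonexpansive T = projection onto the unit ball.

proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))   # nonexpansive T

x = np.array([3.0, -4.0, 1.0])
beta = 0.5                                               # averaging parameter in (0, 1)
for n in range(200):
    x = (1 - beta) * x + beta * proj_ball(x)

print("approximate fixed point of T:", x)                # lies in the unit ball
```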
2. Preliminary
Let H be a real Hilbert space. The symbols “⇀” and “→” denote weak and strong convergence, respectively. Recall that, for a nonempty closed convex subset C of H, the metric projection onto C is the mapping $P_C : H \to C$ defined by $P_C x = \arg\min_{y \in C} \|x - y\|$ for each $x \in H$.
Lemma 1. Let C be a closed convex subset of H. Given $x \in H$ and a point $z \in C$, then $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$.
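The characterization in Lemma 1 can be verified numerically. The sketch below does so under the assumption that C is the closed unit ball, for which the metric projection has the closed form $P_C x = x / \max(1, \|x\|)$.

```python
import numpy as np

# Numerical illustration of Lemma 1: z = P_C(x) if and only if
#   <x - z, y - z> <= 0  for all y in C.
# Here C is taken to be the closed unit ball (an assumption made only
# for this illustration).

rng = np.random.default_rng(2)
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))

x = rng.standard_normal(4) * 3.0
z = proj_ball(x)

# Check the variational characterization against random points y in C.
worst = max(np.dot(x - z, y - z)
            for y in (proj_ball(rng.standard_normal(4) * 3.0) for _ in range(1000)))
print("max <x - z, y - z> over sampled y in C:", worst)   # should be <= 0
```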
Definition 1. A mapping T is said to be L-Lipschitz on C if there exists $L > 0$ such that $\|Tx - Ty\| \le L \|x - y\|$ for all $x, y \in C$. If $L < 1$, then we call T a contraction mapping on C with constant L. If $L = 1$, then T is called a nonexpansive mapping on C.
Definition 2. The mapping T is said to be firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y$. Alternatively, T is firmly nonexpansive if T can be expressed as $T = \tfrac{1}{2}(I + S)$, where S is nonexpansive. The class of firmly nonexpansive mappings belongs to the class of nonexpansive mappings.
Definition 3. The mapping T is said to be
- (a) monotone if $\langle Tx - Ty, x - y \rangle \ge 0$ for all $x, y$;
- (b) β-strongly monotone if there exists a constant $\beta > 0$ such that $\langle Tx - Ty, x - y \rangle \ge \beta \|x - y\|^2$ for all $x, y$;
- (c) ν-inverse strongly monotone (ν-ism) if there exists $\nu > 0$ such that $\langle Tx - Ty, x - y \rangle \ge \nu \|Tx - Ty\|^2$ for all $x, y$.
Definition 4. The mapping T is said to be an averaged mapping if it can be written as the average of the identity mapping I and a nonexpansive mapping, that is,
$T = (1 - \alpha) I + \alpha S$, (7)
where $\alpha \in (0, 1)$ and S is nonexpansive. More precisely, when (7) holds, we say that T is α-averaged.
It is easy to see that a firmly nonexpansive mapping (in particular, a projection) is $\tfrac{1}{2}$-averaged and 1-inverse strongly monotone. Averaged mappings and ν-inverse strongly monotone mappings (ν-ism) have received many investigations; see [29,30,31,32]. The following propositions about averaged mappings and inverse strongly monotone mappings are some of the important facts used in our discussion in this paper.
Proposition 1 ([29,30]). Let the operators S, T, V be given:
- (i) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if S is averaged and V is nonexpansive, then T is averaged.
- (ii) T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
- (iii) If $T = (1 - \alpha) S + \alpha V$ for some $\alpha \in (0, 1)$ and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.
- (iv) The composition of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 T_2 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.
Proposition 2 ([29,31]). Let the operator T be given. We have
- (a) T is nonexpansive if and only if the complement $I - T$ is $\tfrac{1}{2}$-ism;
- (b) if T is ν-ism and $\gamma > 0$, then $\gamma T$ is $\tfrac{\nu}{\gamma}$-ism;
- (c) T is averaged if and only if the complement $I - T$ is ν-ism for some $\nu > \tfrac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\tfrac{1}{2\alpha}$-ism.
Lemma 2 (Opial’s condition). For any sequence $\{x_n\}$ in the Hilbert space H with $x_n \rightharpoonup x$, the inequality $\liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|$ holds for each $y \in H$ with $y \ne x$.
Lemma 3. For a real Hilbert space H, we have
- (i)
- (ii)
Lemma 4. Let H be a real Hilbert space. Then, for all and , we have
Lemma 5 ([33]). Let and be sequences of non-negative real numbers, and let be a sequence of real numbers such that
where and .
- (i) If for some , then is a bounded sequence.
- (ii) If and , then as .
Definition 5. Let be a real sequence. Then, decreases at infinity if there exists such that for . In other words, the sequence does not decrease at infinity if there exists a subsequence of such that for all .
Lemma 6 ([34]). Let be a sequence of real numbers that does not decrease at infinity. Additionally, consider the sequence of integers defined by
Then, is a nondecreasing sequence verifying and, for all , the following two estimates hold:
Let C be a closed convex subset of a real Hilbert space H and let g be a given bifunction. Then, the problem of finding $x^* \in C$ such that $g(x^*, y) \ge 0$ for all $y \in C$ is called the equilibrium problem (Fan inequality [35]) of g on C, denoted by EP. The set of all solutions of the EP is denoted by SEP, i.e.,
If $g(x, y) = \langle Ax, y - x \rangle$ for every $x, y \in C$, where A is a mapping from C into H, then the equilibrium problem becomes the variational inequality problem.
We say that the bifunction g satisfies Condition CO on C if the following four assumptions are satisfied:
- (i) $g(x, x) = 0$ for all $x$;
- (ii) g is monotone on H, i.e., $g(x, y) + g(y, x) \le 0$ for all $x, y$;
- (iii) $\limsup_{t \to 0^{+}} g(t z + (1 - t) x, y) \le g(x, y)$ for each $x, y, z$;
- (iv) $g(x, \cdot)$ is convex and lower semicontinuous on H for each $x$.
Lemma 7 ([36]). If g satisfies Condition CO on C, then, for each $r > 0$ and $x \in H$, the resolvent mapping given by
satisfies the following conditions:
- (1) the resolvent is single-valued;
- (2) the resolvent is firmly nonexpansive, i.e., for all ,
- (3) its fixed point set Fix coincides with SEP;
- (4) SEP is closed and convex.
3. Main Result
In this paper, we are interested in finding a solution to BVIPO-FM, where F and f satisfy the following conditions:
- (A1) F is strongly monotone and Lipschitz continuous on H.
- (A2) The gradient of f is L-Lipschitz continuous on C.
We are now in a position to state our inertial algorithm and prove its strong convergence to the solution of BVIPO-FM, assuming that F satisfies condition (A1), f satisfies condition (A2), and the underlying intersection set is nonempty.
We have plenty of choices for , , and satisfying parameter restrictions (C3), (C4), and (C5). For example, if we take , and , then, , , (), and . Therefore, (C3), (C4), and (C5) are satisfied.
Remark 1. From (C4) and Step 1 of Algorithm 1, we have that
Since is bounded, we also have
Note that Step 1 of Algorithm 1 is easily implemented in numerical computation, since the value of is known a priori before choosing .
Algorithm 1—Inertial Algorithm for BVIPO-FM
Initialization: Choose , . Let the positive real constants , and the real sequences , , , , satisfy the following conditions:
- (C1) ;
- (C2) ;
- (C3) , and ;
- (C4) and ;
- (C5) and ;
- (C6) and ;
- (C7) .
- Step 1. Given the iterates and ( ), choose such that
where
- Step 2. Evaluate .
- Step 3. Evaluate .
- Step 4. Evaluate for each .
- Step 5. Evaluate .
- Step 6. Compute
where
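To convey the overall structure of the method, the following sketch assembles the ingredients named in the introduction (inertial extrapolation, gradient projection for f, Mann averaging over the nonexpansive mappings, and a vanishing correction step driven by F) into one loop. It is only a rough structural illustration under assumed update rules, mappings, and parameter sequences; it does not reproduce the exact formulas of Steps 1–6 of Algorithm 1.

```python
import numpy as np

# Rough structural sketch (NOT the paper's exact Algorithm 1): an inertial
# loop combining (i) inertial extrapolation, (ii) a gradient-projection step
# for f, (iii) Mann averaging with a family of nonexpansive mappings, and
# (iv) a vanishing step driven by the strongly monotone mapping F.
# All update rules, mappings, and parameter sequences below are assumptions
# chosen only to make the sketch runnable.

rng = np.random.default_rng(4)
d = 4
A = rng.standard_normal((10, d))
b = rng.standard_normal(10)
grad_f = lambda x: A.T @ (A @ x - b)                 # gradient of an assumed f
proj_C = lambda x: np.clip(x, -2.0, 2.0)             # projection onto an assumed box C
T_list = [lambda x: x / max(1.0, np.linalg.norm(x))] # assumed nonexpansive mappings
F = lambda x: x                                      # assumed strongly monotone F

lam = 1.0 / np.linalg.norm(A.T @ A, 2)
x_prev = np.zeros(d)
x = rng.standard_normal(d)
for n in range(1, 1000):
    theta_n = min(0.3, 1.0 / n**2)                   # vanishing inertial factor
    beta_n = 0.5                                     # Mann parameter
    gamma_n = 1.0 / (n + 1)                          # vanishing step for F
    w = x + theta_n * (x - x_prev)                   # inertial extrapolation
    y = proj_C(w - lam * grad_f(w))                  # gradient-projection step for f
    for T in T_list:                                 # Mann averaging over the mappings
        y = (1 - beta_n) * y + beta_n * T(y)
    x_prev, x = x, y - gamma_n * F(y)                # correction driven by F

print("final iterate of the sketch:", x)
```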
Remark 2. Note that the point $x^*$ solves the minimization problem (5) if and only if $x^* = P_C\big(x^* - \lambda \nabla f(x^*)\big)$, where $\lambda$ is any fixed positive number. Therefore, the solution set Γ of the problem (5) is a closed and convex subset of H, because for $0 < \lambda < \tfrac{2}{L}$ the mapping $P_C(I - \lambda \nabla f)$ is a nonexpansive mapping and the solution points of (5) are the fixed points of $P_C(I - \lambda \nabla f)$. Moreover, each of the given nonexpansive mappings has a closed and convex fixed point set.
Lemma 8. For a real number $\lambda$ with $0 < \lambda < \tfrac{2}{L}$, the mapping $P_C(I - \lambda \nabla f)$ is $\tfrac{2 + \lambda L}{4}$-averaged.
Proof. Since $\nabla f$ is L-Lipschitz, the gradient $\nabla f$ is $\tfrac{1}{L}$-ism [37], which then implies that $\lambda \nabla f$ is $\tfrac{1}{\lambda L}$-ism. So, by Proposition 2, $I - \lambda \nabla f$ is $\tfrac{\lambda L}{2}$-averaged. Now, since the projection $P_C$ is $\tfrac{1}{2}$-averaged, we see from Proposition 2 that the composite $P_C(I - \lambda \nabla f)$ is $\tfrac{2 + \lambda L}{4}$-averaged. Therefore, for some nonexpansive mapping T, $P_C(I - \lambda \nabla f)$ can be written as
where . Note that, in view of Remark 2 and (8), the point solves the minimization problem (5) if and only if
□
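The conclusion of Lemma 8 can also be observed numerically: for step sizes below 2/L, the gradient-projection operator behaves nonexpansively. The test problem and constraint set in the sketch below are assumptions made only for this check.

```python
import numpy as np

# Numerical sanity check related to Lemma 8: for 0 < lam < 2/L, the
# gradient-projection operator G(x) = P_C(x - lam * grad_f(x)) is
# nonexpansive (indeed averaged).  Assumed test problem:
# f(x) = 0.5*||A x - b||^2 with C the closed unit ball.

rng = np.random.default_rng(3)
d = 6
A = rng.standard_normal((15, d))
b = rng.standard_normal(15)
grad_f = lambda x: A.T @ (A @ x - b)
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))

L = np.linalg.norm(A.T @ A, 2)
lam = 1.9 / L                                  # any step size in (0, 2/L)
G = lambda x: proj_C(x - lam * grad_f(x))

ratios = []
for _ in range(2000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    ratios.append(np.linalg.norm(G(x) - G(y)) / np.linalg.norm(x - y))
print("largest observed ||Gx - Gy|| / ||x - y||:", max(ratios))  # <= 1
```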
Lemma 9. For each n, the mapping defined in Step 6 of Algorithm 1 satisfies the inequality
where
Proof. From (C2), it is easy to see that
This implies that
By the strong monotonicity and the Lipschitz continuity of F, we have
From (9) and (10), we have
where
□
Theorem 1. The sequence generated by Algorithm 1 converges strongly to the unique solution of BVIPO-FM.
Proof. Let .
Now, from the definition of , we get
Note that for each n, there is a nonexpansive mapping such that , where for and . Now, using Lemma 4 and the fact that , we have
Let be the sequence of natural numbers such that
where . This means that . Thus, by Lemma 4,
Using the definition of , (14) and Lemma 9, we get
where . Observe that, by condition (C3) and by Remark 1, we see that
Thus, by Lemma 5, the sequence is bounded. As a consequence, , , , and are also bounded.
Now, using the definition of and Lemma 3, we obtain
Again, by Lemma 3, we have
From (16) and (17), and since , we get
Using the definition of together with (14) and Lemma 9, we have
From (12) and (13), we obtain
In view of (19) and (20), we get
Since the sequence is bounded, there exists such that for all . Thus, from (18) and (21), we get
Let us distinguish the following two cases related to the behavior of the sequence , where .
Case 1. Suppose the sequence decreases at infinity. Thus, there exists such that for . Then, converges and as .
Since ( ), using condition (C3) and Remark 1 (noting , , and is bounded), from (22) we have
Conditions (C2) and (C5) (i.e., , and ), together with (23) and the fact that , give
Similarly, from (23) and the restriction condition imposed on in (C6), together with conditions (C2) and (C5), we have
Thus, using the definition of together with (24) gives
Moreover, using the definition of and Remark 1, we have
The definition of , together with (25), gives
Again, from (26) and (29), we obtain
By the definition of , the parameter restriction conditions (C2) and (C6), together with (31) and the boundedness of , we have
The results from (30) and (32) give
By the definition of and , and using (30), for all , we have
and this, together with (28), yields
for all . Thus,
for all . Therefore, from (28) and (34),
for all . Moreover, from (24) and (27),
From (C6), we have . Thus, let . Then, using the nonexpansiveness of the projection mapping and condition (C6), together with (28) and the boundedness of ( is bounded and is Lipschitz continuous), we get
Hence, in view of (27), (37), and the nonexpansiveness of T, we get
Let p be a weak cluster point of ; then, there exists a subsequence of such that as . We observe that because and C is weakly closed. Assume for some . Since and is a nonexpansive mapping, from (35) and Opial’s condition, one has
which is a contradiction. It must be the case that for all . Similarly, using Opial’s condition and (38), we can show that , i.e., . Therefore, .
Next, we show that . Indeed, since and , we obtain that
Since from (33), by (39), we have
From (11), (14) and (19), we have
Since is bounded, there exists such that for all . Thus, in view of (39), we have
Therefore, from (41), we get
where and
From (C2) and Remark 1, we have and . Thus, using Lemma 5 and (41), we get as . Hence, as .
Case 2. Assume that does not decrease at infinity. Let be a mapping for all (for some large enough) defined by
By Lemma 6, is a nondecreasing sequence, as , and
In view of for all and (22), for all we have
Thus, from (43), conditions (C3) and (C4), and Remark 1, we have
Using a similar procedure as in Case 1 above, we have
and, for , we have
for all . Since is bounded, there exists a subsequence of , still denoted by , which converges weakly to p. By a similar argument as in Case 1 above, we conclude immediately that . In addition, by a similar argument as in Case 1 above, we have . Since , we get .
From (41), we have
where and
Using for all and , the last inequality gives
Since , we obtain
Moreover, since , we have
Thus, together with , gives . Therefore, from (42), we obtain , that is, as .
This completes the proof. □
4. Applications
The mapping F given by $F(x) = x - p$ for a fixed point $p \in H$ is one simple example of a strongly monotone and Lipschitz continuous mapping, with both constants equal to 1. If F takes this form for a fixed point p, then BVIPO-FM becomes the problem of finding the projection of p onto the feasible set; when $p = 0$, this projection is the minimum-norm solution in that set.
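For completeness, the following short computation verifies the stated properties under the assumption that the mapping referred to above is indeed $F(x) = x - p$, the standard choice behind this projection interpretation.

```latex
% Verification that F(x) = x - p is 1-strongly monotone and 1-Lipschitz:
\[
\langle F(x) - F(y),\, x - y \rangle
  = \langle (x - p) - (y - p),\, x - y \rangle
  = \|x - y\|^2 ,
\]
so (A1) holds with strong-monotonicity constant 1, and
\[
\|F(x) - F(y)\| = \|(x - p) - (y - p)\| = \|x - y\| ,
\]
so F is Lipschitz continuous with constant 1.  Consequently, the unique
solution of the upper-level variational inequality over a closed convex
set $\Omega$ (the lower-level solution set) is $P_{\Omega}(p)$; taking
$p = 0$ gives the minimum-norm element of $\Omega$.
```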
Let BVIPO-M denote the bilevel variational inequality problem over the intersection of the set of common solution points of a finite number of constrained minimization problems, stated as follows: for a closed convex subset C of a real Hilbert space H, a nonlinear mapping F, and real-valued convex functions for , BVIPO-M is the problem given by
where is the solution set of
If the gradient of each of these functions is Lipschitz continuous on C, then, for suitable step sizes, the associated gradient-projection mapping is a nonexpansive mapping whose fixed point set is the corresponding solution set. This leads to the following corollary as an immediate consequence of our main theorem for the approximation of a solution of BVIPO-M, assuming that SVIP is nonempty.
Corollary 1. If F satisfies condition (A1), f satisfies condition (A2), and the gradient of each of the given convex functions is Lipschitz continuous on C, then, replacing each nonexpansive mapping in Step 4 of Algorithm 1 by the corresponding gradient-projection mapping (with suitably chosen step sizes), the sequence generated by the algorithm converges strongly to the unique solution of BVIPO-M.
Let C be a closed convex subset of a real Hilbert space H, let F be a mapping, let f be a real-valued convex function, and let a finite family of bifunctions be given. BVIPO-EM denotes the bilevel variational inequality problem over the intersection of the set of common solution points of a finite number of equilibrium problems and the set of solution points of the constrained minimization problem, given by
where is the solution set of
If each of these bifunctions satisfies Condition CO on C, then, by Lemma 7 (1) and (3), the corresponding resolvent mapping is nonexpansive and its fixed point set coincides with the solution set of the associated equilibrium problem. Applying Theorem 1, we obtain the following result for the approximation of a solution of BVIPO-EM, assuming that SVIP is nonempty.
Corollary 2. If F satisfies condition (A1), f satisfies condition (A2), and each of the given bifunctions satisfies Condition CO on C, then, replacing each nonexpansive mapping in Step 4 of Algorithm 1 by the corresponding resolvent mapping, the sequence generated by the algorithm converges strongly to the unique solution of BVIPO-EM.
Let C be a closed convex subset of a real Hilbert space H, let F be a mapping, let f be a real-valued convex function, and let a finite family of mappings on C be given. Now, suppose that BVIPO-VM denotes the bilevel variational inequality problem over the intersection of the set of common solution points of a finite number of variational inequality problems and the set of solution points of the constrained minimization problem, given by
where is the solution set of
Note that if each of these mappings is inverse strongly monotone on C and the step sizes are chosen accordingly, then
- (a) the associated projection mapping is nonexpansive;
- (b) a point is a fixed point of this projection mapping if and only if it is a solution of the corresponding variational inequality problem VIP.
By Theorem 1, we have the following corollary for the approximation of a solution of BVIPO-VM, assuming that SVIP is nonempty.
Corollary 3. If F satisfies condition (A1), f satisfies condition (A2), and each of the given mappings is inverse strongly monotone on C, then, replacing each nonexpansive mapping in Step 4 of Algorithm 1 by the corresponding projection mapping (with suitably chosen step sizes), the sequence generated by the algorithm converges strongly to the unique solution of BVIPO-VM.