1. Introduction and Preliminaries
Let $C$ be a closed, convex and nonempty subset of a Hilbert space $H$. The inner product and its induced norm of $H$ are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively.
Problem 1. Let $A: H \to H$ be a nonlinear operator. We consider the following variational inequality problem: find $x^* \in C$ such that $\langle A x^*, x - x^* \rangle \ge 0$ for all $x \in C$; its solution set is denoted by $\mathrm{VI}(C, A)$. The variational inequality serves as an important model for studying a wide class of real problems arising in traffic networks, medical imaging, machine learning, transportation, etc. Due to its wide applicability, this model unifies a number of optimization-related problems, such as saddle point problems, equilibrium problems, complementarity problems and fixed point problems; see, e.g., [1,2,3,4,5].
Next, one introduces an important tool in this paper: the nearest point (metric) projection. For each $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. Then $P_C$ is called the nearest point (metric) projection from $H$ onto $C$. It is known that the projection operator is firmly nonexpansive and can be characterized by $\langle x - P_C x, y - P_C x \rangle \le 0$ for all $x \in H$ and $y \in C$, which is also equivalent to $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$; see [6]. As a routine matter, one can turn the variational inequality problem into a fixed point problem via the projection (resolvent) operator; that is, $x^*$ solves Problem 1 if and only if $x^* = P_C(x^* - \lambda A x^*)$ for all $\lambda > 0$.
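To make the projection characterization above concrete, here is a small Python check for a closed ball, where the projection has the closed form $P_C x = x$ if $\|x\| \le r$ and $P_C x = r x / \|x\|$ otherwise; the radius, dimension, and sample points are arbitrary assumptions for illustration.

```python
import numpy as np

def project_ball(x, r=1.0):
    # Closed-form metric projection onto C = {y : ||y|| <= r}.
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(3)
p = project_ball(x)
# Characterization: <x - P_C x, y - P_C x> <= 0 for all y in C.
for _ in range(1000):
    y = project_ball(rng.standard_normal(3))  # sample a point y in C
    assert np.dot(x - p, y - p) <= 1e-12
```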
Let us recall some definitions of the mappings involved in our study. An operator $A: H \to H$ is said to be
- (i)
Sequentially weakly continuous if, for each sequence $\{x_n\}$ with $x_n \rightharpoonup x$, one has $A x_n \rightharpoonup A x$; here $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
- (ii)
Pseudomonotone on $H$ if $\langle A x, y - x \rangle \ge 0$ implies $\langle A y, y - x \rangle \ge 0$ for all $x, y \in H$;
- (iii)
Monotone on $H$ if $\langle A x - A y, x - y \rangle \ge 0$ for all $x, y \in H$;
- (iv)
$L$-Lipschitz continuous on $H$ if there exists $L > 0$ such that $\|A x - A y\| \le L \|x - y\|$ for all $x, y \in H$.
Suppose that $g: H \to \mathbb{R}$ is continuously Fréchet differentiable. Then $g$ is convex if and only if $\nabla g$ is monotone. At the same time, it is well known that $\nabla g$ is pseudomonotone if and only if $g$ is pseudoconvex.
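As a quick sanity check of definitions (ii) and (iii), the following Python fragment samples random pairs and tests both properties for an affine operator $A(x) = Mx + q$; the particular $M$ and $q$ are arbitrary assumptions, with $M$ positive semidefinite so that $A$ is monotone and hence pseudomonotone.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
M = B @ B.T                      # positive semidefinite => A is monotone
q = rng.standard_normal(3)
A = lambda x: M @ x + q

for _ in range(10000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    # (iii) monotone: <Ax - Ay, x - y> >= 0
    assert np.dot(A(x) - A(y), x - y) >= -1e-9
    # (ii) pseudomonotone: <Ax, y - x> >= 0  implies  <Ay, y - x> >= 0
    if np.dot(A(x), y - x) >= 0:
        assert np.dot(A(y), y - x) >= -1e-9
```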
Recently, iterative methods for solving variational inequalities and related optimization problems have been proposed and analyzed by many authors [7,8,9,10,11]. Let us start with Korpelevich's method [12], which was proposed for the Euclidean case and is known as the extragradient method. This method needs two computations of the projection onto a nonempty closed convex subset per iteration; that is, it generates a sequence $\{x_n\}$ by the following iteration procedure:
$$y_n = P_C(x_n - \lambda A x_n), \qquad x_{n+1} = P_C(x_n - \lambda A y_n),$$
where the mapping $A$ is monotone and $L$-Lipschitz continuous for some $L > 0$ and $\lambda \in (0, 1/L)$. Recently, the extragradient method has received great attention from many authors [13]. It has been studied in various ways for solving more general problems in the setting of Hilbert spaces. Typically, the extragradient method has been successfully applied to solving pseudomonotone variational inequalities; see [14].
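As a minimal illustration, the following Python sketch implements the extragradient iteration above for a box constraint, where the projection has a closed form; the operator, step size, stopping rule and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Closed-form projection onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def extragradient(A, x0, lam, project, max_iter=1000, tol=1e-8):
    # Korpelevich's extragradient method: two projections per iteration.
    x = x0
    for _ in range(max_iter):
        y = project(x - lam * A(x))          # prediction step
        x_new = project(x - lam * A(y))      # correction step
        if np.linalg.norm(x_new - x) < tol:  # simple illustrative stopping rule
            return x_new
        x = x_new
    return x

# Example: A(x) = Mx + q with M positive definite (hence monotone),
# L = ||M||_2 and a step size lambda in (0, 1/L).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
A = lambda x: M @ x + q
lam = 0.9 / np.linalg.norm(M, 2)
sol = extragradient(A, np.zeros(2), lam, lambda z: project_box(z, 0.0, 5.0))
```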
Now, let us consider the inertial extrapolation, which can be regarded as an acceleration technique for speeding up convergence. Due to its importance, there is increasing interest in studying inertial-type algorithms; see, e.g., [15,16,17,18,19] and the references therein. By incorporating the inertial extrapolation into the extragradient method, Dong et al. [20] introduced the following inertial extragradient algorithm (EAI). Given any $x_0, x_1 \in H$, compute
$$w_n = x_n + \alpha_n (x_n - x_{n-1}), \qquad y_n = P_C(w_n - \lambda A w_n), \qquad x_{n+1} = P_C(w_n - \lambda A y_n)$$
for each $n \ge 1$, where the mapping $A$ is monotone and Lipschitz continuous with the constant $L$. The authors showed that $\{x_n\}$ converges weakly to an element of $\mathrm{VI}(C, A)$ under the following conditions:
- (i)
$\{\alpha_n\}$ is non-decreasing with $\alpha_1 = 0$ and $0 \le \alpha_n \le \alpha < 1$ for each $n \ge 1$;
- (ii)
there exist $\sigma, \delta > 0$ such that the step size $\lambda$ and the inertial bound $\alpha$ satisfy certain smallness conditions, whose explicit form we omit here.
Note that the inertial extragradient algorithm involving the inertial term mentioned above is only weakly convergent.
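To make the role of the inertial term concrete, the following Python sketch implements an EAI-style iteration with a constant inertial weight; the constant weight, the fixed iteration count and all parameters are illustrative assumptions, and a practical run would add a stopping test.

```python
def inertial_extragradient(A, x0, x1, lam, alpha, project, num_iter=1000):
    # EAI-style iteration: inertial extrapolation, then two projections.
    x_prev, x = x0, x1
    for _ in range(num_iter):
        w = x + alpha * (x - x_prev)            # inertial extrapolation
        y = project(w - lam * A(w))             # prediction step
        x_prev, x = x, project(w - lam * A(y))  # correction step
    return x
```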
Many problems arising in a broad range of applied areas, such as image recovery, quantum physics, economics, control theory and mechanics, have been extensively studied in the infinite-dimensional setting. In such problems, norm convergence is essential, since the energy $\|x - x_n\|^2$ of the error between the solution $x$ and the iterate $x_n$ eventually becomes arbitrarily small. Furthermore, in the context of solving the convex optimization problem $\min_{x \in C} g(x)$, the rate of convergence of the function values $\{g(x_n)\}$ seems to be better when $\{x_n\}$ converges strongly than when it converges only weakly. These observations naturally give rise to the question of how to appropriately modify the inertial extragradient method so that strong convergence is guaranteed. To answer this question, we will propose two modified inertial extragradient algorithms. The first modification stems from the Mann-type method [21,22], and the other is of a viscosity nature [23].
An obvious disadvantage of Algorithms (2) and (3) is the assumption that the mapping $A$ should be Lipschitz continuous and monotone. To weaken this restrictive assumption, we show in this paper that our proposed algorithms can solve pseudomonotone variational inequalities under suitable conditions. It is worth mentioning that the class of pseudomonotone mappings properly contains the class of monotone mappings. Hence, the scope of the related optimization problems can be enlarged from convex optimization problems to pseudoconvex optimization problems. This is an advantage of the modified inertial extragradient methods in comparison with other solution methods.
The following lemmas will be used in the proof of our main results.
Lemma 1 ([24]).
Let $\{a_n\}$ be a nonnegative real sequence satisfying
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n,$$
where $\{\gamma_n\} \subset (0, 1)$ and $\{\delta_n\}$ are real sequences satisfying $\sum_{n=1}^{\infty} \gamma_n = \infty$ and $\limsup_{n \to \infty} \delta_n \le 0$. Then $a_n \to 0$ as $n \to \infty$.
Lemma 2 ([25]).
Let $\{a_n\}$ be a real sequence such that there exists a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ with $a_{n_j} < a_{n_j + 1}$ for all $j$. Then there exists a nondecreasing sequence $\{m_k\}$ of indices such that $m_k \to \infty$ and the following properties are satisfied by all (sufficiently large) numbers $k$: $a_{m_k} \le a_{m_k + 1}$ and $a_k \le a_{m_k + 1}$. Indeed, $m_k$ is the largest number $n$ in the set $\{1, \ldots, k\}$ such that $a_n < a_{n+1}$ holds.
The rest of the paper is organized as follows. In Section 2, we give two variants of the inertial extragradient method for solving pseudomonotone variational inequalities and prove strong convergence results for the proposed algorithms. In Section 3, some numerical experiments on quadratic programming problems are presented, which demonstrate the performance of our methods. Finally, the conclusion is given in the last section.
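Since the index sequence $m_k$ of Lemma 2 drives the Case 2 arguments in both convergence proofs below, a small Python illustration of its definition may help; the sample sequence is an arbitrary assumption.

```python
def largest_increase_index(a, k):
    # m_k = max{ i <= k : a_i < a_{i+1} }, as in Lemma 2 (0-based indexing).
    candidates = [i for i in range(k + 1)
                  if i + 1 < len(a) and a[i] < a[i + 1]]
    return max(candidates) if candidates else None

a = [5.0, 3.0, 4.0, 2.0, 6.0, 1.0]   # an illustrative non-monotone sequence
print(largest_increase_index(a, 4))  # -> 3, since a[3] < a[4]
```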
2. Algorithm and Convergence
Throughout the rest of the paper, one always assumes the following set of hypotheses:
The feasible set $C$ is a nonempty, closed and convex set in a real Hilbert space $H$;
The operator $A: H \to H$ is pseudomonotone, sequentially weakly continuous and $L$-Lipschitz continuous for some $L > 0$, with its solution set $\mathrm{VI}(C, A) \neq \emptyset$.
First, we present the algorithm for solving the pseudomonotone variational inequality, which combines the inertial extragradient method with the Mann-type method.
The following propositions are known results for the iterative sequences generated by Algorithms 1 and 2, and they are crucial for the proofs of our convergence theorems; see [14].
Proposition 1. Assume that $A$ is pseudomonotone and $L$-Lipschitz continuous with $\lambda \in (0, 1/L)$. Let $x^*$ be a solution of $\mathrm{VI}(C, A)$. Setting $z_n = P_C(w_n - \lambda A y_n)$, we have
$$\|z_n - x^*\|^2 \le \|w_n - x^*\|^2 - (1 - \lambda L) \|w_n - y_n\|^2 - (1 - \lambda L) \|z_n - y_n\|^2.$$
Proposition 2. Suppose that the mapping $A$ is pseudomonotone, sequentially weakly continuous and $L$-Lipschitz continuous for some $L > 0$, and that $\mathrm{VI}(C, A) \neq \emptyset$. Assume additionally that a subsequence $\{w_{n_k}\}$ of the sequence generated by Algorithm 1 or 2 satisfies $w_{n_k} \rightharpoonup z$ and $\|w_{n_k} - y_{n_k}\| \to 0$ as $k \to \infty$. Then $z \in \mathrm{VI}(C, A)$.
Algorithm 1: The Mann-type inertial extragradient algorithm.
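To fix ideas, here is a minimal Python sketch of a Mann-type inertial extragradient step of the kind analyzed in Theorem 1; the convex combination $x_{n+1} = (1 - \beta_n - \gamma_n) w_n + \beta_n z_n$, the parameter callables and the fixed step size are illustrative assumptions rather than the exact updates of Algorithm 1.

```python
def mann_inertial_extragradient(A, x0, x1, lam, alphas, betas, gammas,
                                project, num_iter=500):
    # Hedged sketch of a Mann-type inertial extragradient iteration.
    # alphas, betas, gammas are callables n -> float.
    x_prev, x = x0, x1
    for n in range(num_iter):
        w = x + alphas(n) * (x - x_prev)   # inertial step
        y = project(w - lam * A(w))        # prediction projection
        z = project(w - lam * A(y))        # correction projection
        x_prev = x
        # Mann step: the missing gamma_n-portion implicitly anchors the
        # iterate at the origin, which is what forces strong convergence
        # to the minimum-norm solution in comparable schemes.
        x = (1.0 - betas(n) - gammas(n)) * w + betas(n) * z
    return x
```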
Now we are in a position to establish the main result of this paper.
Theorem 1. Let $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ be three real sequences in $(0, 1)$ such that $0 < a \le \beta_n \le b < 1 - \gamma_n$ for some $a, b > 0$, $\lim_{n \to \infty} \gamma_n = 0$ and $\sum_{n=1}^{\infty} \gamma_n = \infty$. Assume that the sequence of inertial parameters is chosen such that the resulting inertial term vanishes sufficiently fast (relative to $\gamma_n$). Then the sequence $\{x_n\}$ generated by Algorithm 1 converges to the solution $x^* = P_{\mathrm{VI}(C, A)}(0)$ in norm.
Proof. Let us fix
. To simplify the notation, one sets
. By applying Proposition 1, together with the definition of
, one easily obtains that
Invoking (
4), the definition of
implies that
Let
be a positive constant such that
. Due to the assumption
. Coming back to (
5), we obtain that
This clearly implies that
is bounded. As a result, we have that the sequences
,
and
are bounded as well. Again, by using the definition of
, we find that
On the other hand,
In view of the definition of
, we deduce that
Invoking the boundedness of
and
, there exists a positive constant
such that
By combining inequalities (
7)–(
10) with Proposition 1, one asserts that
Note that
Setting
, we find that
. Furthermore, we can reformulate
as
It follows from the above equality that
Indeed, based on (
9), we have that
Combining (
12) with (
13), we find that
Now we prove that the sequence
converges to 0 by considering two possible cases on
.
Case 1. Suppose that there exists
such that
for all
. This implies that
exists. From conditions
and
, we find that
. Since
, it holds that
It follows from (
11) and (
15) and the conditions
and
that
From the nonexpansivity of
and the
L-Lipschitz continuity of
A, one concludes that
Combining (
16) with (
17), one has
It follows that
From (
16), (
18) and (
19), we obtain that
Recalling that
is bounded, one can extract a subsequence
of
such that
as
. Invoking (
19), one has that
as
. As a consequence, by using Proposition 2, we find that
. From the fact
, we obtain that
From the boundedness of
and
, we infer
Combining (
21) with (
22), we further find that
From the condition
,
, (
14), (
20) and (
23), we conclude from Lemma 1 that
. In other words, it entails that
as
.
Case 2: Suppose that there exists a subsequence
of
such that
. In this case, by using Lemma 2, one sees that there exists a nondecreasing sequence
of
such that
and the following inequalities hold for all
,
By using (
11) and (
24), we have that
Recalling that
and
, it follows from (
15) and (
25) that
Using the same arguments as in the proof of
Case 1, one obtains that
and
Coming back to (
14), we have
In light of (
27)–(
29), we have
. Invoking (
24), we obtain that
, which further implies that
, as
. This completes the proof. □
The other algorithm reads as follows.
Algorithm 2: The viscosity-type inertial extragradient algorithm.
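Analogously, here is a minimal Python sketch of a viscosity-type step of the kind analyzed in Theorem 2; the update $x_{n+1} = \beta_n f(x_n) + (1 - \beta_n) z_n$ with a contraction $f$, together with all parameter choices, is an illustrative assumption rather than the exact update of Algorithm 2.

```python
def viscosity_inertial_extragradient(A, f, x0, x1, lam, alphas, betas,
                                     project, num_iter=500):
    # Hedged sketch: a viscosity step with a contraction f replaces the
    # Mann combination; comparable schemes converge strongly to the
    # point x* satisfying x* = P_VI(f(x*)).
    x_prev, x = x0, x1
    for n in range(num_iter):
        w = x + alphas(n) * (x - x_prev)   # inertial step
        y = project(w - lam * A(w))        # prediction projection
        z = project(w - lam * A(y))        # correction projection
        x_prev = x
        x = betas(n) * f(x) + (1.0 - betas(n)) * z  # viscosity step
    return x
```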
Now, we are ready to analyze the convergence of Algorithm 2. The outline of its proof is similar to that of Theorem 1.
Theorem 2. Let $f: H \to H$ be a contraction mapping with the contraction parameter $\rho \in [0, 1)$. Let $\{\alpha_n\}$, $\{\beta_n\}$ be two real sequences in $(0, 1)$ such that $0 < a \le \alpha_n \le b < 1 - \beta_n$ for some $a, b > 0$, $\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=1}^{\infty} \beta_n = \infty$. Assume that the sequence of inertial parameters is chosen such that the resulting inertial term vanishes sufficiently fast (relative to $\beta_n$). Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to the solution $x^*$, where $x^* = P_{\mathrm{VI}(C, A)} f(x^*)$.
Proof. Fixing
and using the same arguments as in the proof of Theorem 1, we infer
and
Now, using (
30) and the definition of
, one sees that
Since
, for brevity, we set
for some positive constant
. Coming back to (
33), we have that
It entails that
is bounded. This implies that
,
and
are bounded as well. We apply Proposition 1 and (
31) to get that
Again, by using Proposition 1 and (
31), together with (
9), one concludes that
Now let us show that the sequence
converges to zero. To obtain this result, we consider two possible cases on the sequence
.
Case 1: There exists
such that
for all
. Observe that
exists. Due to the condition
, we have that
Based on the conditions
and
, we have that
. Since
and
are bounded and the condition
holds, we find that
Combining (
32) with (
37), we have that
It follows that
In light of (
37)–(
39), we have that
The boundedness of
ensures that there exists a subsequence
of
such that
, as
. Invoking (
39), we observe that
, as
. Hence, it follows from Proposition 2 that
. Invoking
, we deduce that
Recalling the definition of
and the assumption
, we infer that
Combining (
41) with (
42), we find that
Invoking the conditions
,
,
,
, we apply Lemma 1 to get that
, that is,
.
Case 2: Suppose that there is no
such that
is monotonically decreasing. In this case, we can define a mapping
as
i.e.,
is the largest number
i in
such that
increases at
. Note that
is well-defined for all sufficiently large
k. On the other hand,
is a nondecreasing sequence such that
and the following inequalities hold for all
,
The conditions
and
entail that
. Combining (
34) and (
44) with the boundedness of
, we find that
as
. Therefore, it follows from (
36) that
With the help of (
45), using the same arguments as in the proof of
Case 1, we infer that
According to (
35) and (
44), we have
Therefore, combining the condition
with (
46) and (
47), we have that
. Hence, it follows from (
44) that
, as
. This completes the proof. □
3. Numerical Results
In this section, we perform some computational experiments in support of the convergence properties of our proposed methods and compare them with Algorithm EAI; see [20].
All programs are written in Matlab version 5.0 and run on a desktop PC with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz. Consider the quadratic programming problem
$$\min_{x \in C} \; \tfrac{1}{2} \langle M x, x \rangle + \langle q, x \rangle,$$
with the data (48)–(50) below, posed in an $n$-dimensional Euclidean space. Here the matrix $M$ is symmetric and positive definite on $\mathbb{R}^n$; consequently, the operator $A x = M x + q$ is pseudomonotone and Lipschitz continuous with the constant $L = \|M\|$. Meanwhile, the parameters are chosen so that all of the conditions in Theorems 1 and 2 are satisfied, and initial points are chosen randomly in the following experiments. Let us consider the first example [26] with the data (48) given by
We apply Algorithm 1 to solve this problem. We take a fixed iteration number as the stopping criterion. As depicted in Figure 1, one sees that the optimal solution of this problem is unique.
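As a rough illustration of how such a quadratic test problem can be set up and solved, consider the Python sketch below; the matrix $M$, vector $q$, feasible set, dimension and iteration count are stand-ins, not the paper's data (48).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in quadratic data: M symmetric positive definite, so that
# A(x) = M x + q is monotone (hence pseudomonotone) and Lipschitz
# continuous with constant L = ||M||_2.
n = 10
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)
q = rng.standard_normal(n)
A = lambda x: M @ x + q

L = np.linalg.norm(M, 2)
lam = 0.5 / L                          # step size in (0, 1/L)

# Feasible set: the nonnegative orthant, with closed-form projection.
project = lambda z: np.maximum(z, 0.0)

x = rng.standard_normal(n)
for _ in range(200):                   # fixed iteration count as stopping rule
    y = project(x - lam * A(x))        # prediction step
    x = project(x - lam * A(y))        # correction step

residual = np.linalg.norm(x - project(x - lam * A(x)))  # D_n-type residual
```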
We use the sequence $\{D_n\}$ defined by $D_n = \|x_n - P_C(x_n - \lambda A x_n)\|$ to study the convergence of the different algorithms. From the characterization of the metric projection, $D_n = 0$ exactly when $x_n$ solves the problem; thus, if $D_n \le \varepsilon$, then $x_n$ can be considered as an $\varepsilon$-solution of this problem. We take a fixed iteration number as the stopping criterion. To illustrate the computational performance of all the algorithms, the numerical results are shown in Figure 2. From the changing processes of the values of $D_n$, we find that Algorithm 2 behaves better than Algorithm 1 and EAI: it achieves a more stable behavior and a higher precision as the number of iterations grows. Moreover, the convergence of $D_n$ to 0 implies that the iterative sequence $\{x_n\}$ converges to the solution of this test problem.
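In code, such a residual can be tracked alongside any of the sketches above; a hedged fragment (the function arguments mirror the assumed names used earlier):

```python
import numpy as np

def residual(x, A, lam, project):
    # D_n = ||x_n - P_C(x_n - lam * A(x_n))||: zero exactly at solutions.
    return float(np.linalg.norm(x - project(x - lam * A(x))))
```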
Now, let us consider the second example [27], with the data (49) expressed as
This problem is solved for a $50 \times 50$ matrix $M$ with a 50-dimensional vector $q$, and for a $30 \times 30$ matrix $M$ with a 30-dimensional vector $q$, respectively. We use Algorithms 1 and 2 to solve this problem, taking a fixed iteration number as the stopping criterion. The test results are described in Figure 3 and Figure 4, which show the changing processes of $D_n$ with respect to the number of iterations and the running time ($x$-axis). From this, we find that the iterative sequences generated by Algorithms 1 and 2 converge to a unique solution.
Next, we consider another example [27] with the data (50) written as
We take a fixed iteration number as the stopping criterion. The results reported in Figure 5 show the changing processes of the values of $D_n$ ($y$-axis) in terms of the number of iterations and the CPU time ($x$-axis). Accordingly, one sees that Algorithm 1 exhibits convergent behavior.